Sample records for simple random samples

  1. Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Abotteen, K. M. (Principal Investigator)

    1980-01-01

    The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportion estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.

  2. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sampling procedure should be used, based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained with simple random sampling procedures.
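
    As a quick illustration of the two designs compared above, the Python sketch below computes the sample size needed to reach a target standard error of the cell mean under simple random sampling and under stratified random sampling with Neyman (optimal) allocation. All numbers (overall SD, stratum weights, stratum SDs, target error) are invented for illustration and are not taken from the paper.

        import math

        # Hypothetical soil-moisture summary statistics for one sensor cell.
        sd_overall = 6.0           # overall SD (% volumetric moisture), assumed
        strata = [                 # (stratum weight W_h, stratum SD S_h), assumed
            (0.5, 3.0),
            (0.3, 5.0),
            (0.2, 8.0),
        ]
        target_se = 1.0            # desired standard error of the cell mean

        # Simple random sampling (infinite-population approximation): n = (S / SE)^2
        n_srs = math.ceil((sd_overall / target_se) ** 2)

        # Stratified sampling with Neyman allocation:
        # n = (sum_h W_h S_h)^2 / SE^2, with n_h proportional to W_h S_h
        sum_ws = sum(w * s for w, s in strata)
        n_strat = math.ceil((sum_ws / target_se) ** 2)
        alloc = [math.ceil(n_strat * w * s / sum_ws) for w, s in strata]

        print(f"SRS: n = {n_srs}; stratified: n = {n_strat}, allocation = {alloc}")

    With heterogeneous stratum SDs, the stratified design reaches the same target error with far fewer samples, consistent with the paper's recommendation of optimal allocation when the total number of samples can be prespecified.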

  3. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    PubMed

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

    When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest, and sampling relevant objects at sites that are placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative to accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.

  4. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037

  5. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
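
    Since the two records above describe the search-curve sampling only in words, a minimal sketch may help. The code below implements a bare-bones first-order FAST estimate in Python for a toy linear model: each parameter is driven along a search curve at its own frequency, and the partial variance at a parameter's frequency (and its first few harmonics) is read off from the Fourier coefficients of the model output. The frequencies, harmonic cutoff, sample count, and model are illustrative assumptions, not the carefully chosen interference-free sets analyzed in the paper.

        import numpy as np

        def fast_first_order(model, omegas, n_samples=10001, harmonics=4):
            # Uniform grid over one period of the search-curve parameter s.
            s = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
            # Search curve: each parameter oscillates in (0, 1) at frequency omega_i.
            X = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi
            y = model(X)
            total_var = np.var(y)
            indices = []
            for w in omegas:
                partial = 0.0
                for p in range(1, harmonics + 1):
                    A = 2.0 * np.mean(y * np.cos(p * w * s))   # Fourier cosine coefficient
                    B = 2.0 * np.mean(y * np.sin(p * w * s))   # Fourier sine coefficient
                    partial += (A**2 + B**2) / 2.0
                indices.append(partial / total_var)
            return indices

        # Toy model: y = x0 + 4*x1 + 0*x2, so x1 should dominate.
        model = lambda X: X[0] + 4.0 * X[1] + 0.0 * X[2]
        print(fast_first_order(model, omegas=np.array([11, 21, 29])))

    For this additive model the estimated indices land near 1/17, 16/17 and 0, matching the exact variance shares of the three inputs.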

  6. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the survey method for Oncomelania hupensis snails in marshland schistosomiasis-endemic regions of China, and to increase the precision, efficiency and economy of the snail survey. A 50 m × 50 m quadrat experimental field was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-covered method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.

  7. RECAL: A Computer Program for Selecting Sample Days for Recreation Use Estimation

    Treesearch

    D.L. Erickson; C.J. Liu; H. Ken Cordell; W.L. Chen

    1980-01-01

    Recreation Calendar (RECAL) is a computer program in PL/I for drawing a sample of days for estimating recreation use. With RECAL, a sampling period of any length may be chosen; simple random, stratified random, and factorial designs can be accommodated. The program randomly allocates days to strata and locations.

  8. Final report : sampling plan for pavement condition ratings of secondary roads.

    DOT National Transportation Integrated Search

    1984-01-01

    The purpose of this project was to develop a random sampling plan for use in selecting segments of the secondary highway system for evaluation under the Department's PMS. The plan developed is described here. It is a simple, workable, random sampling...

  9. The Expected Sample Variance of Uncorrelated Random Variables with a Common Mean and Some Applications in Unbalanced Random Effects Models

    ERIC Educational Resources Information Center

    Vardeman, Stephen B.; Wendelberger, Joanne R.

    2005-01-01

    There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean μ and variance σ², the expected value of the sample variance is σ². The generalization justifies the use of the usual standard error of the sample mean in possibly…
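
    One common reading of this generalization (independent draws sharing a mean but with heterogeneous variances, hence uncorrelated but not identically distributed) is easy to verify by simulation, as in the sketch below: the expected sample variance comes out as the average of the individual variances. All constants are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        mu = 10.0
        sigmas = np.array([1.0, 2.0, 3.0, 5.0])   # one draw per variance
        avg_var = np.mean(sigmas**2)              # (1 + 4 + 9 + 25) / 4 = 9.75

        sample_vars = [
            np.var(rng.normal(mu, sigmas), ddof=1)   # usual sample variance s^2
            for _ in range(200_000)
        ]
        print(f"mean of s^2 = {np.mean(sample_vars):.3f}  vs  average variance = {avg_var}")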

  10. A nonparametric method to generate synthetic populations to adjust for complex sampling design features.

    PubMed

    Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E

    2014-06-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), both of which are stratified, clustered unequal-probability of selection sample designs.

  11. A nonparametric method to generate synthetic populations to adjust for complex sampling design features

    PubMed Central

    Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), both of which are stratified, clustered unequal-probability of selection sample designs. PMID:29200608

  12. Use of Matrix Sampling Procedures to Assess Achievement in Solving Open Addition and Subtraction Sentences.

    ERIC Educational Resources Information Center

    Montague, Margariete A.

    This study investigated the feasibility of concurrently and randomly sampling examinees and items in order to estimate group achievement. Seven 32-item tests reflecting a 640-item universe of simple open sentences were used such that item selection (random, systematic) and assignment (random, systematic) of items (four, eight, sixteen) to forms…

  13. Determination of the influence of dispersion pattern of pesticide-resistant individuals on the reliability of resistance estimates using different sampling plans.

    PubMed

    Shah, R; Worner, S P; Chapman, R B

    2012-10-01

    Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection would require at least one (≥1) resistant individual(s) to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, would attempt to get an estimate of the entire population (≥90%) of the resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans, while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, a sample size of 3000 and 1500, respectively, was necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
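
    For the randomly dispersed case, detection sample sizes can be approximated with a standard closed-form calculation, sketched below. This textbook formula is only a baseline under idealized binomial sampling; it does not reproduce the paper's simulation-based figures exactly.

        import math

        # Smallest n with P(detect >= 1 resistant individual) >= 0.95 when the
        # resistance frequency is p: solve 1 - (1 - p)^n >= 0.95 for n.
        for p in (0.01, 0.10, 0.20):
            n = math.ceil(math.log(0.05) / math.log(1.0 - p))
            print(f"p = {p:.2f}: n = {n}")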

  14. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    PubMed Central

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible non-parametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. PMID:24633656

  15. Extending cluster lot quality assurance sampling designs for surveillance programs.

    PubMed

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.

  16. The Relationship between Teachers Commitment and Female Students Academic Achievements in Some Selected Secondary School in Wolaita Zone, Southern Ethiopia

    ERIC Educational Resources Information Center

    Bibiso, Abyot; Olango, Menna; Bibiso, Mesfin

    2017-01-01

    The purpose of this study was to investigate the relationship between teachers' commitment and female students' academic achievement in selected secondary schools of Wolaita zone, Southern Ethiopia. The research method employed was a survey study, and the sampling techniques were purposive, simple random and stratified random sampling. Questionnaire…

  17. Teachers' Methodologies and Sources of Information on HIV/AIDS for Students with Visual Impairments in Selected Residential and Integrated Schools in Ghana

    ERIC Educational Resources Information Center

    Hayford, Samuel K.; Ocansey, Frederick

    2017-01-01

    This study reports part of a national survey on sources of information, education and communication materials on HIV/AIDS available to students with visual impairments in residential, segregated, and integrated schools in Ghana. A multi-staged stratified random sampling procedure and a purposive and simple random sampling approach, where…

  18. Computationally Efficient Resampling of Nonuniform Oversampled SAR Data

    DTIC Science & Technology

    2010-05-01

    noncoherently. The resampled data is calculated using both a simple average and a weighted average of the demodulated data. The average nonuniform… trials with randomly varying accelerations. The results are shown in Fig. 5 for the noncoherent power difference and Fig. 6 for the coherent power… simple average. Figure 5: Noncoherent difference between SAR imagery generated with uniform sampling and nonuniform sampling that was resampled

  19. Effects of the Physical Laboratory versus the Virtual Laboratory in Teaching Simple Electric Circuits on Conceptual Achievement and Attitudes Towards the Subject

    ERIC Educational Resources Information Center

    Tekbiyik, Ahmet; Ercan, Orhan

    2015-01-01

    Current study examined the effects of virtual and physical laboratory practices on students' conceptual achievement in the subject of electricity and their attitudes towards simple electric circuits. Two groups (virtual and physical) selected through simple random sampling was taught with web-aided material called "Electricity in Our…

  20. Methodology Series Module 5: Sampling Strategies.

    PubMed

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice, or on populations that are accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  1. Efficient sampling of complex network with modified random walk strategies

    NASA Astrophysics Data System (ADS)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies: choosing seed node (CSN) random walk and no-retracing (NR) random walk. Unlike classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdős-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with all three random walk strategies. Firstly, networks with small scales and simple structures are conducive to the sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. Thirdly, all the degree distributions of the subnets are slightly biased to the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir networks, obvious characteristics such as the larger clustering coefficient and the fluctuation of the degree distribution are reproduced well by these random walk strategies.
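
    A minimal sketch of the no-retracing idea appears below, under the assumption that "no retracing" means the walker never immediately reverses its last step (one plausible reading; the paper's exact rule may differ). The graph model, seed node, and step count are illustrative.

        import random
        import networkx as nx

        def nr_random_walk(G, seed_node, steps):
            """Random walk that avoids stepping straight back, except at dead ends."""
            visited = [seed_node]
            prev, cur = None, seed_node
            for _ in range(steps):
                neighbors = list(G.neighbors(cur))
                choices = [n for n in neighbors if n != prev] or neighbors
                prev, cur = cur, random.choice(choices)
                visited.append(cur)
            return visited

        G = nx.barabasi_albert_graph(1000, 3, seed=1)      # BA test network
        sample = nr_random_walk(G, seed_node=0, steps=200)
        subnet = G.subgraph(set(sample))
        print(f"sampled {subnet.number_of_nodes()} nodes, "
              f"mean clustering = {nx.average_clustering(subnet):.3f}")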

  2. Simple-random-sampling-based multiclass text classification algorithm.

    PubMed

    Liu, Wuying; Wang, Lin; Yi, Mianzhu

    2014-01-01

    Multiclass text classification (MTC) is a challenging issue and the corresponding MTC algorithms can be used in many applications. The space-time overhead of these algorithms is a serious concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements.

  3. Methodology Series Module 5: Sampling Strategies

    PubMed Central

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘sampling method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling – based on the researcher's choice, or on populations that are accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of the results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438

  4. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs.

    PubMed

    Petscher, Yaacov; Schatschneider, Christopher

    2011-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the simple difference and covariance-adjusted scores were more or less powerful to detect treatment effects when relaxing certain assumptions made by Huck and McLean (1975). Four factors were manipulated in the design including sample size, normality of the pretest and posttest distributions, the correlation between pretest and posttest, and posttest variance. A 5 × 5 × 4 × 3 mostly crossed design was run with 1,000 replications per condition, resulting in 226,000 unique samples. The gain score was nearly as powerful as the covariance-adjusted score when pretest and posttest variances were equal, and as powerful in fan-spread growth conditions; thus, under certain circumstances the gain score could be used in two-wave randomized experimental designs.

  5. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs

    PubMed Central

    Petscher, Yaacov; Schatschneider, Christopher

    2015-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the simple difference and covariance-adjusted scores were more or less powerful to detect treatment effects when relaxing certain assumptions made by Huck and McLean (1975). Four factors were manipulated in the design including sample size, normality of the pretest and posttest distributions, the correlation between pretest and posttest, and posttest variance. A 5 × 5 × 4 × 3 mostly crossed design was run with 1,000 replications per condition, resulting in 226,000 unique samples. The gain score was nearly as powerful as the covariance-adjusted score when pretest and posttest variances were equal, and as powerful in fan-spread growth conditions; thus, under certain circumstances the gain score could be used in two-wave randomized experimental designs. PMID:26379310
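
    The comparison above is straightforward to reproduce in miniature. The sketch below estimates the power of the gain (simple difference) score and the covariance-adjusted score for one illustrative condition with equal pretest and posttest variances; the sample size, pre-post correlation, and effect size are assumptions, not the study's settings.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)

        def group_t(y, X):
            """OLS t-statistic for the group coefficient (column 1 of X)."""
            beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            df = len(y) - X.shape[1]
            cov = (resid @ resid / df) * np.linalg.inv(X.T @ X)
            return beta[1] / np.sqrt(cov[1, 1]), df

        def one_trial(n=50, rho=0.6, effect=0.4):
            pre = rng.normal(size=2 * n)
            post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(size=2 * n)
            group = np.repeat([0.0, 1.0], n)          # randomized assignment
            post += effect * group                    # treatment effect at posttest
            ones = np.ones(2 * n)
            t_gain, df1 = group_t(post - pre, np.column_stack([ones, group]))
            t_adj, df2 = group_t(post, np.column_stack([ones, group, pre]))
            return (abs(t_gain) > stats.t.ppf(0.975, df1),
                    abs(t_adj) > stats.t.ppf(0.975, df2))

        rejections = np.array([one_trial() for _ in range(2000)])
        print(f"power, gain score:          {rejections[:, 0].mean():.3f}")
        print(f"power, covariance-adjusted: {rejections[:, 1].mean():.3f}")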

  6. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of response, and a 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.

  7. Simple Example of Backtest Overfitting (SEBO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    In the field of mathematical finance, a "backtest" is the usage of historical market data to assess the performance of a proposed trading strategy. It is a relatively simple matter for a present-day computer system to explore thousands, millions or even billions of variations of a proposed strategy, and pick the best-performing variant as the "optimal" strategy "in sample" (i.e., on the input dataset). Unfortunately, such an "optimal" strategy often performs very poorly "out of sample" (i.e., on another dataset), because the parameters of the investment strategy have been overfit to the in-sample data, a situation known as "backtest overfitting". While the mathematics of backtest overfitting has been examined in several recent theoretical studies, here we pursue a more tangible analysis of this problem, in the form of an online simulator tool. Given an input random walk time series, the tool develops an "optimal" variant of a simple strategy by exhaustively exploring all integer parameter values among a handful of parameters. That "optimal" strategy is overfit, since by definition a random walk is unpredictable. Then the tool tests the resulting "optimal" strategy on a second random walk time series. In most runs using our online tool, the "optimal" strategy derived from the first time series performs poorly on the second time series, demonstrating how hard it is not to overfit a backtest. We offer this online tool, "Simple Example of Backtest Overfitting (SEBO)", to facilitate further research in this area.

  8. Honest Importance Sampling with Multiple Markov Chains

    PubMed Central

    Tan, Aixin; Doss, Hani; Hobert, James P.

    2017-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855

  9. Honest Importance Sampling with Multiple Markov Chains.

    PubMed

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.
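
    For the iid case described above, the estimator and its CLT-based standard error take only a few lines. The sketch below is a plain (unnormalized) importance-sampling estimate of E[X^2] under a standard normal target, drawn from a heavier-tailed Student-t proposal; the target/proposal pair is illustrative and is not taken from the paper, which goes on to treat the harder MCMC and multiple-chain settings.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        n = 100_000

        x = stats.t.rvs(df=3, size=n, random_state=rng)   # draws from proposal pi_1
        w = stats.norm.pdf(x) / stats.t.pdf(x, df=3)      # importance weights pi / pi_1
        h = x**2                                          # estimand: E_pi[X^2] = 1

        est = np.mean(h * w)
        se = np.std(h * w, ddof=1) / np.sqrt(n)           # CLT-based standard error
        print(f"estimate = {est:.4f} +/- {1.96 * se:.4f}")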

  10. Validation of optical codes based on 3D nanostructures

    NASA Astrophysics Data System (ADS)

    Carnicer, Artur; Javidi, Bahram

    2017-05-01

    Image information encoding using random phase masks produces speckle-like noise distributions when the sample is propagated in the Fresnel domain. As a result, information cannot be accessed by simple visual inspection. Phase masks can be easily implemented in practice by attaching cello-tape to the plain-text message. Conventional 2D phase masks can be generalized to 3D by combining glass and diffusers, resulting in a more complex physical unclonable function. In this communication, we model the behavior of a 3D phase mask using a simple approach: light is propagated through glass using the angular spectrum of plane waves, whereas the diffuser is described as a random phase mask and a blurring effect on the amplitude of the propagated wave. Using different designs for the 3D phase mask and multiple samples, we demonstrate that classification is possible using the k-nearest neighbors and random forests machine learning algorithms.

  11. Assessing map accuracy in a remotely sensed, ecoregion-scale cover map

    USGS Publications Warehouse

    Edwards, T.C.; Moisen, Gretchen G.; Cutler, D.R.

    1998-01-01

    Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed thousands of square kilometers in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs to simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE = 1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man-modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach. In regard to precision, our mixed design was more precise than a simple random design, given fixed sample costs. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.

  12. Group Matching: Is This a Research Technique to Be Avoided?

    ERIC Educational Resources Information Center

    Ross, Donald C.; Klein, Donald F.

    1988-01-01

    The variance of the sample difference and the power of the "F" test for mean differences were studied under group matching on covariates and also under random assignment. Results shed light on systematic assignment procedures advocated to provide more precise estimates of treatment effects than simple random assignment. (TJH)

  13. Random bit generation at tunable rates using a chaotic semiconductor laser under distributed feedback.

    PubMed

    Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun

    2015-09-01

    A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
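
    The postprocessing chain described above (periodic sampling, self-differencing at a fixed delay, then keeping a few least-significant bits) can be sketched independently of the optics. The snippet below runs it on stand-in 8-bit data; a real implementation would feed in digitized chaotic laser intensities, and the delay and bit-depth values here are illustrative only.

        import numpy as np

        rng = np.random.default_rng(3)
        # Stand-in for 8-bit ADC samples of the chaotic intensity waveform.
        samples = rng.integers(0, 256, size=100_000, dtype=np.uint8)

        delay = 10                                   # self-differencing delay, in samples
        diff = samples[delay:] - samples[:-delay]    # uint8 arithmetic wraps modulo 256

        m = 5                                        # least-significant bits kept per sample
        bits = ((diff[:, None] >> np.arange(m)) & 1).ravel()
        print(f"{bits.size} bits, fraction of ones = {bits.mean():.4f}")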

  14. Piecewise SALT sampling for estimating suspended sediment yields

    Treesearch

    Robert B. Thomas

    1989-01-01

    A probability sampling method called SALT (Selection At List Time) has been developed for collecting and summarizing data on delivery of suspended sediment in rivers. It is based on sampling and estimating yield using a suspended-sediment rating curve for high discharges and simple random sampling for low flows. The method gives unbiased estimates of total yield and...

  15. Sampling estimators of total mill receipts for use in timber product output studies

    Treesearch

    John P. Brown; Richard G. Oderwald

    2012-01-01

    Data from the 2001 timber product output study for Georgia was explored to determine new methods for stratifying mills and finding suitable sampling estimators. Estimators for roundwood receipts totals comprised several types: simple random sample, ratio, stratified sample, and combined ratio. Two stratification methods were examined: the Dalenius-Hodges (DH) square...

  16. Statistical Sampling Handbook for Student Aid Programs: A Reference for Non-Statisticians. Winter 1984.

    ERIC Educational Resources Information Center

    Office of Student Financial Assistance (ED), Washington, DC.

    A manual on sampling is presented to assist audit and program reviewers, project officers, managers, and program specialists of the U.S. Office of Student Financial Assistance (OSFA). For each of the following types of samples, definitions and examples are provided, along with information on advantages and disadvantages: simple random sampling,…

  17. 45 CFR 1356.71 - Federal review of the eligibility of children in foster care and the eligibility of foster care...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... by ACF statistical staff from the Adoption and Foster Care Analysis and Reporting System (AFCARS... primary review utilizing probability sampling methodologies. Usually, the chosen methodology will be simple random sampling, but other probability samples may be utilized, when necessary and appropriate. (3...

  18. Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach

    ERIC Educational Resources Information Center

    Rotondi, Michael A.; Donner, Allan

    2009-01-01

    The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…

  19. Teachers' Attitude towards Implementation of Learner-Centered Methodology in Science Education in Kenya

    ERIC Educational Resources Information Center

    Ndirangu, Caroline

    2017-01-01

    This study aims to evaluate teachers' attitude towards implementation of learner-centered methodology in science education in Kenya. The study used a survey design methodology, adopting the purposive, stratified random and simple random sampling procedures and hypothesised that there was no significant relationship between the head teachers'…

  20. Remote Sensing, Sampling and Simulation Applications in Analyses of Insect Dispersion and Abundance in Cotton

    Treesearch

    J. L. Willers; J. M. McKinion; J. N. Jenkins

    2006-01-01

    Simulation was employed to create stratified simple random samples of different sample unit sizes to represent tarnished plant bug abundance at different densities within various habitats of simulated cotton fields. These samples were used to investigate dispersion patterns of this cotton insect. It was found that the assessment of spatial pattern varied as a function...

  1. Fitting distributions to microbial contamination data collected with an unequal probability sampling design.

    PubMed

    Williams, M S; Ebel, E D; Cao, Y

    2013-01-01

    The fitting of statistical distributions to microbial sampling data is a common application in quantitative microbiology and risk assessment. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples that are collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications. © 2012 No claim to US Government works.

  2. Health indicators: eliminating bias from convenience sampling estimators.

    PubMed

    Hedt, Bethany L; Pagano, Marcello

    2011-02-28

    Public health practitioners are often called upon to make inference about a health indicator for a population at large when the sole available information is data gathered from a convenience sample, such as data gathered on visitors to a clinic. These data may be of the highest quality and quite extensive, but the biases inherent in a convenience sample preclude the legitimate use of powerful inferential tools that are usually associated with a random sample. In general, we know nothing about those who do not visit the clinic beyond the fact that they do not visit the clinic. An alternative is to take a random sample of the population. However, we show that this solution would be wasteful if it excluded the use of available information. Hence, we present a simple annealing methodology that combines a relatively small, and presumably far less expensive, random sample with the convenience sample. This allows us not only to take advantage of powerful inferential tools, but also to provide more accurate information than that available from just using data from the random sample alone. Copyright © 2011 John Wiley & Sons, Ltd.

  3. Using known map category marginal frequencies to improve estimates of thematic map accuracy

    NASA Technical Reports Server (NTRS)

    Card, D. H.

    1982-01-01

    By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.
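
    The estimator implied above is simple: when sampling is stratified by map category and the map-category relative sizes are known, overall accuracy is the size-weighted mean of the per-category sample accuracies. The sketch below uses invented counts for illustration.

        import numpy as np

        pi = np.array([0.50, 0.30, 0.20])        # known map-category relative sizes
        n_sampled = np.array([100, 100, 100])    # reference samples per category
        n_correct = np.array([90, 75, 60])       # samples agreeing with reference

        per_class_acc = n_correct / n_sampled
        overall = np.sum(pi * per_class_acc)     # 0.5*0.90 + 0.3*0.75 + 0.2*0.60
        print(f"overall accuracy = {overall:.3f}")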

  4. Prediction of soil attributes through interpolators in a deglaciated environment with complex landforms

    NASA Astrophysics Data System (ADS)

    Schünemann, Adriano Luis; Inácio Fernandes Filho, Elpídio; Rocha Francelino, Marcio; Rodrigues Santos, Gérson; Thomazini, Andre; Batista Pereira, Antônio; Gonçalves Reynaud Schaefer, Carlos Ernesto

    2017-04-01

    Values of environmental variables at non-sampled sites can be estimated from a minimum data set through interpolation techniques; kriging and the Random Forest classification algorithm are examples of predictors used for this purpose. The objective of this work was to compare methods for spatializing soil attributes in a recently deglaciated environment with complex landforms. Prediction of the selected soil attributes (potassium, calcium and magnesium) in ice-free areas was tested using morphometric covariables, and using geostatistical models without these covariables. For this, 106 soil samples were collected at 0-10 cm depth in Keller Peninsula, King George Island, Maritime Antarctica. Soil chemical analysis was performed by the gravimetric method, determining values of potassium, calcium and magnesium for each sampled point. Digital terrain models (DTMs) were obtained using a Terrestrial Laser Scanner. The DTMs were generated from a point cloud at spatial resolutions of 1, 5, 10, 20 and 30 m, and 40 morphometric covariates were derived from them. Simple kriging was performed using the R software. The same data set, coupled with the morphometric covariates, was used to predict values of the studied attributes at non-sampled sites through the Random Forest interpolator. Little difference was observed between the maps generated by the simple kriging and Random Forest interpolators, and DTMs with better spatial resolution did not improve the quality of soil attribute prediction. The results revealed that simple kriging can be used as the interpolator when morphometric covariates are not available, with little impact on quality. It is still necessary to go further in soil chemical attribute prediction techniques, especially in periglacial areas with complex landforms.

  5. Modeling the Stress Complexities of Teaching and Learning of School Physics in Nigeria

    ERIC Educational Resources Information Center

    Emetere, Moses E.

    2014-01-01

    This study was designed to investigate the validity of the stress complexity model (SCM) for the teaching and learning of school physics in the Abuja Municipal Area Council, Abuja. About two hundred students were selected through a simple random sampling technique from schools within the Abuja Municipal Area Council. A survey research…

  6. The Bootstrap, the Jackknife, and the Randomization Test: A Sampling Taxonomy.

    PubMed

    Rodgers, J L

    1999-10-01

    A simple sampling taxonomy is defined that shows the differences between and relationships among the bootstrap, the jackknife, and the randomization test. Each method has as its goal the creation of an empirical sampling distribution that can be used to test statistical hypotheses, estimate standard errors, and/or create confidence intervals. Distinctions between the methods can be made based on the sampling approach (with replacement versus without replacement) and the sample size (replacing the whole original sample versus replacing a subset of the original sample). The taxonomy is useful for teaching the goals and purposes of resampling schemes. An extension of the taxonomy implies other possible resampling approaches that have not previously been considered. Univariate and multivariate examples are presented.
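
    The taxonomy reduces to two binary choices, which the sketch below makes concrete on illustrative data: sampling with versus without replacement, and drawing the whole original sample size versus a subset of it.

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.normal(size=20)
        n = len(x)

        # Bootstrap: with replacement, full sample size n.
        boot = rng.choice(x, size=n, replace=True)
        # Jackknife: without replacement, subset of size n - 1 (leave one out).
        jack = np.delete(x, 0)
        # Randomization test: without replacement, full sample size n
        # (a permutation, e.g. of group labels relative to values).
        perm = rng.permutation(x)

        print(boot.mean(), jack.mean(), perm.mean())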

  7. Academic Self-Efficacy Perceptions of Teacher Candidates

    ERIC Educational Resources Information Center

    Yesilyurt, Etem

    2013-01-01

    This study aims at determining the academic self-efficacy perceptions of teacher candidates. It is a survey-model study. The population of the study consists of teacher candidates in the 2010-2011 academic year in the education formation program at Ahmet Kelesoglu Education Faculty of Selcuk University. A simple random sample was selected as the sampling method and the study was…

  8. Accounting for selection bias in association studies with complex survey data.

    PubMed

    Wirth, Kathleen E; Tchetgen Tchetgen, Eric J

    2014-05-01

    Obtaining representative information from hidden and hard-to-reach populations is fundamental to describe the epidemiology of many sexually transmitted diseases, including HIV. Unfortunately, simple random sampling is impractical in these settings, as no registry of names exists from which to sample the population at random. However, complex sampling designs can be used, as members of these populations tend to congregate at known locations, which can be enumerated and sampled at random. For example, female sex workers may be found at brothels and street corners, whereas injection drug users often come together at shooting galleries. Despite the logistical appeal, complex sampling schemes lead to unequal probabilities of selection, and failure to account for this differential selection can result in biased estimates of population averages and relative risks. However, standard techniques to account for selection can lead to substantial losses in efficiency. Consequently, researchers implement a variety of strategies in an effort to balance validity and efficiency. Some researchers fully or partially account for the survey design, whereas others do nothing and treat the sample as a realization of the population of interest. We use directed acyclic graphs to show how certain survey sampling designs, combined with subject-matter considerations unique to individual exposure-outcome associations, can induce selection bias. Finally, we present a novel yet simple maximum likelihood approach for analyzing complex survey data; this approach optimizes statistical efficiency at no cost to validity. We use simulated data to illustrate this method and compare it with other analytic techniques.

  9. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
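
    The Monte Carlo approach the authors describe can be sketched in a few lines: express each non-sampled input as a distribution that encodes its systematic uncertainty, push draws through the calculation, and read an uncertainty interval off the output. The calculation and distributions below are made up for illustration and are not the foodborne-illness figures from the paper.

        import numpy as np

        rng = np.random.default_rng(11)
        n = 100_000

        # Hypothetical inputs: a reported case count and an underreporting
        # multiplier, each uncertain for reasons unrelated to random sampling.
        cases = rng.triangular(800, 1000, 1300, size=n)
        factor = rng.lognormal(mean=np.log(10), sigma=0.3, size=n)

        incidence = cases * factor
        lo, mid, hi = np.percentile(incidence, [2.5, 50, 97.5])
        print(f"median = {mid:.0f}, 95% uncertainty interval = ({lo:.0f}, {hi:.0f})")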

  10. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
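
    A minimal sketch of the simplest route mentioned above, assuming a two-arm comparison of means: compute the individually randomized sample size from the normal approximation, then inflate it by the design effect DEFF = 1 + (m - 1) * ICC. The effect size, cluster size, and ICC below are placeholders:

    ```python
    # Simplest CRT sample-size route: individually randomized n inflated
    # by the design effect. All input values are hypothetical.
    from math import ceil
    from scipy.stats import norm

    def n_per_arm_individual(delta, sd, alpha=0.05, power=0.8):
        """Standard two-sample normal-approximation sample size per arm."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z * sd / delta) ** 2

    n_ind = n_per_arm_individual(delta=0.25, sd=1.0)
    m, icc = 20, 0.02                    # cluster size and intracluster correlation
    deff = 1 + (m - 1) * icc             # simple design effect
    n_clustered = ceil(n_ind * deff)
    print(f"per arm: {ceil(n_ind)} individuals under individual randomization, "
          f"{n_clustered} under cluster randomization "
          f"({ceil(n_clustered / m)} clusters of {m})")
    ```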

  11. On the importance of incorporating sampling weights in ...

    EPA Pesticide Factsheets

Occupancy models are used extensively to assess wildlife-habitat associations and to predict species distributions across large geographic regions. Occupancy models were developed as a tool to properly account for imperfect detection of a species. Current guidelines on survey design requirements for occupancy models focus on the number of sample units and the pattern of revisits to a sample unit within a season. We focus on the sampling design or how the sample units are selected in geographic space (e.g., stratified, simple random, unequal probability, etc.). In a probability design, each sample unit has a sample weight which quantifies the number of sample units it represents in the finite (oftentimes areal) sampling frame. We demonstrate the importance of including sampling weights in occupancy model estimation when the design is not a simple random sample or equal probability design. We assume a finite areal sampling frame as proposed for a national bat monitoring program. We compare several unequal and equal probability designs and varying sampling intensity within a simulation study. We found the traditional single season occupancy model produced biased estimates of occupancy and lower confidence interval coverage rates compared to occupancy models that accounted for the sampling design. We also discuss how our findings inform the analyses proposed for the nascent North American Bat Monitoring Program and other collaborative synthesis efforts that propose h

  12. Types of Bullying in the Senior High Schools in Ghana

    ERIC Educational Resources Information Center

    Antiri, Kwasi Otopa

    2016-01-01

The main objective of the study was to examine the types of bullying that were taking place in the senior high schools in Ghana. A multi-stage sampling procedure, comprising purposive, simple random and snowball sampling techniques, was used in the selection of the sample. A total of 354 respondents were drawn from six schools in Ashanti, Central and…

  13. Estimating the encounter rate variance in distance sampling

    USGS Publications Warehouse

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.

  14. Adaptive Peer Sampling with Newscast

    NASA Astrophysics Data System (ADS)

    Tölgyesi, Norbert; Jelasity, Márk

    The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates which can result in very poor random sampling especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters, namely—without involving failure detectors—nodes passively monitor local protocol events using them as feedback for a local control loop for self-tuning the protocol parameters. The proposed solution is evaluated by simulation experiments.

  15. A Study on Chocolate Consumption in Prospective Teachers

    ERIC Educational Resources Information Center

    Ozgen, Leyla

    2016-01-01

    This study was planned and conducted to determine the chocolate consumption habits of prospective teachers. The study population was comprised of students attending the Faculty of Education at Gazi University in Ankara and the sample consisted of 251 prospective teachers selected with simple random sampling. 96.4% and 3.6% of the prospective…

  16. A Study of Occupational Stress and Organizational Climate of Higher Secondary Teachers

    ERIC Educational Resources Information Center

    Benedicta, A. Sneha

    2014-01-01

    This study mainly aims to describe the occupational stress and organizational climate of higher secondary teachers with regard to gender, locality, family type, experience and type of management. Simple random sampling technique was adopted for the selection of sample. The data is collected from 200 higher secondary teachers from government and…

  17. The Evaluation of Teachers' Job Performance Based on Total Quality Management (TQM)

    ERIC Educational Resources Information Center

    Shahmohammadi, Nayereh

    2017-01-01

    This study aimed to evaluate teachers' job performance based on total quality management (TQM) model. This was a descriptive survey study. The target population consisted of all primary school teachers in Karaj (N = 2917). Using Cochran formula and simple random sampling, 340 participants were selected as sample. A total quality management…

  18. Academic Optimism and Organizational Citizenship Behaviour amongst Secondary School Teachers

    ERIC Educational Resources Information Center

    Makvandi, Abdollah; Naderi, Farah; Makvandi, Behnam; Pasha, Reza; Ehteshamzadeh, Parvin

    2018-01-01

    The purpose of the study was to investigate the simple and multiple relationships between academic optimism and organizational-citizenship behavior amongst high school teachers in Ramhormoz, Iran. The sample consisted of 250 (125 female and 125 male) teachers, selected by stratified random sampling in 2016- 2017. The measurement tools included…

  19. Motivational Factors and Teachers Commitment in Public Secondary Schools in Mbale Municipality

    ERIC Educational Resources Information Center

    Olurotimi, Ogunlade Joseph; Asad, Kamonges Wahab; Abdulrauf, Abdulkadir

    2015-01-01

    The study investigated the influence of motivational factors on teachers' commitment in public Secondary School in Mbale Municipality. The study employed Cross-sectional survey design. The sampling technique used to select was simple random sampling technique. The instrument used to collect data was a self designed questionnaire. The data…

  20. Resolution of plasma sample mix-ups through comparison of patient antibody patterns to E. coli.

    PubMed

    Vetter, Beatrice N; Orlowski, Vanessa; Schüpbach, Jörg; Böni, Jürg; Rühe, Bettina; Huder, Jon B

    2015-12-01

    Accidental sample mix-ups and the need for their swift resolution is a challenge faced by every analytical laboratory. To this end, we developed a simple immunoblot-based method, making use of a patient's characteristic plasma antibody profile to Escherichia coli (E. coli) proteins. Nitrocellulose strips of size-separated proteins from E. coli whole-cell lysates were incubated with patient plasma and visualised with an enzyme-coupled secondary antibody and substrate. Plasma samples of 20 random patients as well as five longitudinal samples of three patients were analysed for antibody band patterns, to evaluate uniqueness and consistency over time, respectively. For sample mix-ups, antibody band patterns of questionable samples were compared with samples of known identity. Comparison of anti-E. coli antibody patterns of 20 random patients showed a unique antibody profile for each patient. Antibody profiles remained consistent over time, as shown for three patients over several years. Three example cases demonstrate the use of this methodology in mis-labelling or -pipetting incidences. Our simple method for resolving plasma sample mix-ups between non-related individuals can be performed with basic laboratory equipment and thus can easily be adopted by analytical laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Persistence of radiation-induced chromosome aberrations in a long-term cell culture.

    PubMed

    Duran, Assumpta; Barquinero, Joan Francesc; Caballín, María Rosa; Ribas, Montserrat; Barrios, Leonardo

    2009-04-01

The aim of the present study was to evaluate the persistence of chromosome aberrations induced by X rays. FISH painting and mFISH techniques were applied to long-term cultures of irradiated cells. With painting, at 2 Gy the frequency of apparently simple translocations remained almost invariable throughout the culture, whereas at 4 Gy a rapid decline was observed between the first and second samples, followed by a slight decrease until the end of the culture. Apparently simple dicentrics and complex aberrations disappeared after the first sample at 2 and 4 Gy. By mFISH, at 2 Gy the frequency of complete plus one-way translocations remained invariable between the first and last sample, but at 4 Gy a 60% decline was observed. True incomplete simple translocations disappeared at 2 and 4 Gy, indicating that incompleteness could be a factor to consider when the persistence of translocations is analyzed. The analysis by mFISH showed that the frequency of complex aberrations and their complexity increased with dose and tended to disappear in the last sample. Our results indicate that the influence of dose on the decrease in the frequency of simple translocations with time postirradiation cannot be fully explained by the disappearance of true incomplete translocations and complex aberrations. The chromosome involvement was random for radiation-induced exchange aberrations and non-random for total aberrations. Chromosome 7 showed the highest deviations from expectation, being less involved than expected in the first samples and more involved in the last. Some preferential chromosome-chromosome associations were observed, including a coincidence with a cluster from radiogenic chromosome aberrations described in other studies.

  2. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    PubMed

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling including simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion into the research study and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
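
    For readers new to these designs, the toy sketch below (a hypothetical registry of 1,000 patients; all numbers invented) shows how simple random, systematic, and proportionally allocated stratified samples are drawn in practice:

    ```python
    # Toy illustrations of three probability sampling methods listed in
    # the column, on a hypothetical patient registry.
    import numpy as np

    rng = np.random.default_rng(1)
    N, n = 1000, 50
    registry = np.arange(N)
    ward = rng.integers(0, 4, N)          # stratum label (ward) per patient

    srs = rng.choice(registry, size=n, replace=False)   # simple random sample

    k = N // n                                          # systematic sample:
    start = rng.integers(0, k)                          # random start, then
    systematic = registry[start::k][:n]                 # every k-th unit

    stratified = np.concatenate([                       # proportional allocation,
        rng.choice(registry[ward == w],                 # rounded per stratum
                   size=round(n * np.mean(ward == w)),
                   replace=False)
        for w in range(4)
    ])
    print(len(srs), len(systematic), len(stratified))
    ```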

  3. Sampling pig farms at the abattoir in a cross-sectional study - Evaluation of a sampling method.

    PubMed

    Birkegård, Anna Camilla; Halasa, Tariq; Toft, Nils

    2017-09-15

A cross-sectional study design is relatively inexpensive, fast and easy to conduct when compared to other study designs. Careful planning is essential to obtaining a representative sample of the population, and the recommended approach is to use simple random sampling from an exhaustive list of units in the target population. This approach is rarely feasible in practice, and other sampling procedures must often be adopted. For example, when slaughter pigs are the target population, sampling the pigs on the slaughter line may be an alternative to on-site sampling at a list of farms. However, it is difficult to sample a large number of farms from an exact predefined list, due to the logistics and workflow of an abattoir. Therefore, it is necessary to have a systematic sampling procedure and to evaluate the obtained sample with respect to the study objective. We propose a method for 1) planning, 2) conducting, and 3) evaluating the representativeness and reproducibility of a cross-sectional study when simple random sampling is not possible. We used an example of a cross-sectional study with the aim of quantifying the association of antimicrobial resistance and antimicrobial consumption in Danish slaughter pigs. It was not possible to visit farms within the designated timeframe. Therefore, it was decided to use convenience sampling at the abattoir. Our approach was carried out in three steps: 1) planning: using data from meat inspection to plan at which abattoirs and how many farms to sample; 2) conducting: sampling was carried out at five abattoirs; 3) evaluation: representativeness was evaluated by comparing sampled and non-sampled farms, and the reproducibility of the study was assessed through simulated sampling based on meat inspection data from the period where the actual data collection was carried out. In the cross-sectional study, samples were taken from 681 Danish pig farms during five weeks from February to March 2015. The evaluation showed that the sampling procedure was reproducible with results comparable to the collected sample. However, the sampling procedure favoured sampling of large farms. Furthermore, both under-sampled and over-sampled areas were found using scan statistics. In conclusion, sampling conducted at abattoirs can provide a spatially representative sample. Hence it is a possible cost-effective alternative to simple random sampling. However, it is important to assess the properties of the resulting sample so that any potential selection bias can be addressed when reporting the findings. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. An exploratory study of Taiwanese consumers' experiences of using health-related websites.

    PubMed

    Hsu, Li-Ling

    2005-06-01

It is manifest that the rapid growth of Internet use and improvement of information technology have changed our lifestyles. In recent years, Internet use in Taiwan has increased dramatically, from 3 million users in 1998 to approximately 8.6 million by the end of 2002. The statistics imply that not only health care professionals but also laypersons rely on the Internet for health information. The purpose of this study was to explore Taiwanese consumers' preferences and information needs, and the problems they encountered when getting information from medical websites. Using simple random sampling and systematic random sampling, a survey was conducted in Taipei from August 26, 2002 to October 30, 2002; 28 boroughs (Li) were selected, for a total sample of 1043. Over one-quarter (26.8%) of the respondents reported having never accessed the Internet, while 763 (73.2%) reported having accessed the Internet. Of the Internet users, only 396 (51.9%) had accessed health-related websites, and 367 (48.1%) reported having never accessed health-related websites. The most popular topics were disease information (46.5%), followed by diet consultation (34.8%), medical news (28.5%), and cosmetology (28.5%). The results of the survey show that a large percentage of people in Taiwan have never made good use of health information available on the websites. The reasons for not using the websites included a lack of time or Internet access skills, no motivation, dissatisfaction with the information, unreliable information being provided, and inability to locate the information needed. The author recommends enhancing health information access skills, understanding the needs and preferences of consumers, promoting the quality of medical websites, and improving the functions of medical websites.

  5. Network Sampling with Memory: A proposal for more efficient sampling from social networks.

    PubMed

    Mouw, Ted; Verdery, Ashton M

    2012-08-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)-the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a "List" mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a "Search" mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS.
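
    The design effect defined in this abstract is easy to illustrate by simulation. The sketch below uses a generic clustered population as a stand-in (it is not RDS or NSM) and estimates DE as the variance of the mean under cluster sampling divided by its variance under SRS of the same size:

    ```python
    # Design effect (DE) by simulation: variance of an estimator under a
    # clustered design divided by its variance under SRS of the same n.
    # The clustered population is synthetic, not the paper's networks.
    import numpy as np

    rng = np.random.default_rng(7)
    n_clusters, m = 200, 10
    cluster_effect = rng.normal(0, 1, n_clusters)        # shared within cluster
    y = (cluster_effect[:, None] + rng.normal(0, 1, (n_clusters, m))).ravel()

    n, reps = 100, 5000
    means_srs = [rng.choice(y, n, replace=False).mean() for _ in range(reps)]
    means_clu = [y.reshape(n_clusters, m)[
                     rng.choice(n_clusters, n // m, replace=False)].mean()
                 for _ in range(reps)]

    de = np.var(means_clu) / np.var(means_srs)
    print(f"estimated design effect: {de:.2f}")  # > 1: clustering costs precision
    ```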

  6. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    PubMed Central

    Mouw, Ted; Verdery, Ashton M.

    2013-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246

  7. Jackknifing Techniques for Evaluation of Equating Accuracy. Research Report. ETS RR-09-39

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Lee, Yi-Hsuan; Qian, Jiahe

    2009-01-01

    Grouped jackknifing may be used to evaluate the stability of equating procedures with respect to sampling error and with respect to changes in anchor selection. Properties of grouped jackknifing are reviewed for simple-random and stratified sampling, and its use is described for comparisons of anchor sets. Application is made to examples of item…
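
    A bare-bones version of grouped jackknifing, assuming the statistic of interest is a simple mean difference standing in for an equating function: split the sample into G groups, recompute the statistic with each group deleted, and combine the replicates into a variance estimate. All data below are synthetic:

    ```python
    # Grouped (delete-one-group) jackknife variance for a statistic.
    # The statistic here is a mean difference, a stand-in for the
    # report's equating functions; scores are synthetic.
    import numpy as np

    rng = np.random.default_rng(3)
    scores_x, scores_y = rng.normal(50, 10, 600), rng.normal(52, 10, 600)

    def statistic(x, y):
        return y.mean() - x.mean()

    G = 20
    gx, gy = np.array_split(scores_x, G), np.array_split(scores_y, G)
    theta_full = statistic(scores_x, scores_y)
    theta_drop = np.array([
        statistic(np.concatenate(gx[:g] + gx[g + 1:]),   # leave group g out
                  np.concatenate(gy[:g] + gy[g + 1:]))
        for g in range(G)
    ])
    var_jack = (G - 1) / G * np.sum((theta_drop - theta_drop.mean()) ** 2)
    print(f"estimate {theta_full:.3f}, jackknife SE {np.sqrt(var_jack):.3f}")
    ```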

  8. Paradoxes in Film Ratings

    ERIC Educational Resources Information Center

    Moore, Thomas L.

    2006-01-01

    The author selected a simple random sample of 100 movies from the "Movie and Video Guide" (1996), by Leonard Maltin. The author's intent was to obtain some basic information on the population of roughly 19,000 movies through a small sample. The "Movie and Video Guide" by Leonard Maltin is an annual ratings guide to movies. While not all films ever…

  9. Socio-Economic Background and Access to Internet as Correlates of Students' Achievement in Agricultural Science

    ERIC Educational Resources Information Center

    Adegoke, Sunday Paul; Osokoya, Modupe M.

    2015-01-01

This study investigated access to the internet and socio-economic background as correlates of students' achievement in Agricultural Science among selected Senior Secondary School Two students in Ogbomoso South and North Local Government Areas. The study adopted a multi-stage sampling technique. Simple random sampling was used to select 30 students from…

  10. Comparison of sampling designs for estimating deforestation from landsat TM and MODIS imagery: a case study in Mato Grosso, Brazil.

    PubMed

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.

  11. Comparison of Sampling Designs for Estimating Deforestation from Landsat TM and MODIS Imagery: A Case Study in Mato Grosso, Brazil

    PubMed Central

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block. PMID:25258742

  12. A Bayesian Hierarchical Model for Large-Scale Educational Surveys: An Application to the National Assessment of Educational Progress. Research Report. ETS RR-04-38

    ERIC Educational Resources Information Center

    Johnson, Matthew S.; Jenkins, Frank

    2005-01-01

    Large-scale educational assessments such as the National Assessment of Educational Progress (NAEP) sample examinees to whom an exam will be administered. In most situations the sampling design is not a simple random sample and must be accounted for in the estimating model. After reviewing the current operational estimation procedure for NAEP, this…

  13. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
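
    The flavor of this validation can be reproduced with a short simulation. The sketch below is illustrative only; the decision rule, ICC, and prevalence values are assumptions, not the study's. It draws 67 clusters of three observations with beta-binomial intracluster correlation and estimates how often the lot is classified as high prevalence:

    ```python
    # Rough simulation in the spirit of the study: 67 clusters of 3
    # children with intracluster correlation in GAM status (beta-binomial),
    # classified against a threshold rule. Parameter values are invented.
    import numpy as np

    rng = np.random.default_rng(11)

    def lqas_positive(prev, icc=0.1, clusters=67, m=3, d=15, reps=10_000):
        """P(sample exceeds d cases) for a 67x3 design at a given prevalence."""
        a = prev * (1 - icc) / icc            # beta-binomial parameters with
        b = (1 - prev) * (1 - icc) / icc      # intracluster correlation = icc
        p_cluster = rng.beta(a, b, (reps, clusters))
        cases = rng.binomial(m, p_cluster).sum(axis=1)
        return np.mean(cases > d)

    print("P(classify high) at 10% prevalence:", lqas_positive(0.10))
    print("P(classify high) at 20% prevalence:", lqas_positive(0.20))
    ```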

  14. Sampling guidelines for oral fluid-based surveys of group-housed animals.

    PubMed

    Rotolo, Marisa L; Sun, Yaxuan; Wang, Chong; Giménez-Lirola, Luis; Baum, David H; Gauger, Phillip C; Harmon, Karen M; Hoogland, Marlin; Main, Rodger; Zimmerman, Jeffrey J

    2017-09-01

    Formulas and software for calculating sample size for surveys based on individual animal samples are readily available. However, sample size formulas are not available for oral fluids and other aggregate samples that are increasingly used in production settings. Therefore, the objective of this study was to develop sampling guidelines for oral fluid-based porcine reproductive and respiratory syndrome virus (PRRSV) surveys in commercial swine farms. Oral fluid samples were collected in 9 weekly samplings from all pens in 3 barns on one production site beginning shortly after placement of weaned pigs. Samples (n=972) were tested by real-time reverse-transcription PCR (RT-rtPCR) and the binary results analyzed using a piecewise exponential survival model for interval-censored, time-to-event data with misclassification. Thereafter, simulation studies were used to study the barn-level probability of PRRSV detection as a function of sample size, sample allocation (simple random sampling vs fixed spatial sampling), assay diagnostic sensitivity and specificity, and pen-level prevalence. These studies provided estimates of the probability of detection by sample size and within-barn prevalence. Detection using fixed spatial sampling was as good as, or better than, simple random sampling. Sampling multiple barns on a site increased the probability of detection with the number of barns sampled. These results are relevant to PRRSV control or elimination projects at the herd, regional, or national levels, but the results are also broadly applicable to contagious pathogens of swine for which oral fluid tests of equivalent performance are available. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
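
    A back-of-envelope version of the detection question this study addresses, under stated assumptions (the pen counts, prevalence, and assay sensitivity below are invented): simulate a barn, sample n pens, and estimate the probability that at least one sampled pen tests positive:

    ```python
    # Barn-level probability of detection as a function of the number of
    # pens sampled. All parameter values are assumptions for the sketch.
    import numpy as np

    def p_detect(n_pens_sampled, n_pens=48, prevalence=0.10,
                 sensitivity=0.95, specificity=1.0, reps=20_000, seed=5):
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(reps):
            status = rng.random(n_pens) < prevalence       # true pen status
            sampled = rng.choice(n_pens, n_pens_sampled, replace=False)
            p_pos = np.where(status[sampled], sensitivity, 1 - specificity)
            hits += (rng.random(n_pens_sampled) < p_pos).any()
        return hits / reps

    for n in (6, 12, 24):
        print(f"{n} pens sampled -> P(detect) ~ {p_detect(n):.2f}")
    ```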

  15. Sampling procedures for throughfall monitoring: A simulation study

    NASA Astrophysics Data System (ADS)

    Zimmermann, Beate; Zimmermann, Alexander; Lark, Richard Murray; Elsenbeer, Helmut

    2010-01-01

    What is the most appropriate sampling scheme to estimate event-based average throughfall? A satisfactory answer to this seemingly simple question has yet to be found, a failure which we attribute to previous efforts' dependence on empirical studies. Here we try to answer this question by simulating stochastic throughfall fields based on parameters for statistical models of large monitoring data sets. We subsequently sampled these fields with different sampling designs and variable sample supports. We evaluated the performance of a particular sampling scheme with respect to the uncertainty of possible estimated means of throughfall volumes. Even for a relative error limit of 20%, an impractically large number of small, funnel-type collectors would be required to estimate mean throughfall, particularly for small events. While stratification of the target area is not superior to simple random sampling, cluster random sampling involves the risk of being less efficient. A larger sample support, e.g., the use of trough-type collectors, considerably reduces the necessary sample sizes and eliminates the sensitivity of the mean to outliers. Since the gain in time associated with the manual handling of troughs versus funnels depends on the local precipitation regime, the employment of automatically recording clusters of long troughs emerges as the most promising sampling scheme. Even so, a relative error of less than 5% appears out of reach for throughfall under heterogeneous canopies. We therefore suspect a considerable uncertainty of input parameters for interception models derived from measured throughfall, in particular, for those requiring data of small throughfall events.

  16. Statistical methods for efficient design of community surveys of response to noise: Random coefficients regression models

    NASA Technical Reports Server (NTRS)

    Tomberlin, T. J.

    1985-01-01

Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed here provide a basis for such sample design decisions. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.

  17. The Effect of Cluster Sampling Design in Survey Research on the Standard Error Statistic.

    ERIC Educational Resources Information Center

    Wang, Lin; Fan, Xitao

    Standard statistical methods are used to analyze data that is assumed to be collected using a simple random sampling scheme. These methods, however, tend to underestimate variance when the data is collected with a cluster design, which is often found in educational survey research. The purposes of this paper are to demonstrate how a cluster design…

  18. Challenges to Successful Total Quality Management Implementation in Public Secondary Schools: A Case Study of Kohat District, Pakistan

    ERIC Educational Resources Information Center

    Suleman, Qaiser; Gul, Rizwana

    2015-01-01

The current study explores the challenges faced by public secondary schools in successful implementation of total quality management (TQM) in Kohat District. A sample of 25 heads and 75 secondary school teachers selected from 25 public secondary schools through simple random sampling technique was used. A descriptive research design was used and a…

  19. Sample-based estimation of tree species richness in a wet tropical forest compartment

    Treesearch

    Steen Magnussen; Raphael Pelissier

    2007-01-01

Petersen's capture-recapture ratio estimator and the well-known bootstrap estimator are compared across a range of simulated low-intensity simple random sampling with fixed-area plots of 100 m² in a rich wet tropical forest compartment with 93 tree species in the Western Ghats of India. Petersen's ratio estimator was uniformly superior to the bootstrap...

  20. The Relationship between Temperament, Gender, and Behavioural Problems in Preschool Children

    ERIC Educational Resources Information Center

    Yoleri, Sibel

    2014-01-01

    The aim of this study is to examine the relationship between gender and the temperamental characteristics of children between the ages of five and six, as well as to assess their behavioural problems. The sample included 128 children selected by simple random sampling from 5-6 year old children, receiving preschool education in the city centre of…

  1. Occupational Stress in Secondary Education in Cyprus: Causes, Symptoms, Consequences and Stress Management

    ERIC Educational Resources Information Center

    Hadjisymeou, Georgia

    2010-01-01

    The survey attempted to look into the causes, symptoms and consequences that occupational stress has on teachers in Secondary Education in Cyprus and find ways to manage it. Thirty eight schools with 553 teachers participated in the survey. The sample chosen is a result of a simple random sampling and it is representative of the country's…

  2. Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod

    USGS Publications Warehouse

    Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.

    2008-01-01

To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate that units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and practically implemented than the others. © 2008 The Society of Population Ecology and Springer.
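
    The general approach, evaluating candidate designs by repeated sampling from a simulated aggregated population, can be sketched briefly. The code below is a generic stand-in (not the SAMPLE program or the bladderpod data) comparing the spread of density estimates under simple random versus grid-based systematic sampling:

    ```python
    # Compare sampling designs by simulation: build a spatially
    # aggregated population, then repeatedly sample it under two designs
    # and compare the variability of the density estimates.
    import numpy as np

    rng = np.random.default_rng(9)
    side = 100
    xx, yy = np.meshgrid(np.arange(side), np.arange(side), indexing="ij")
    density = np.zeros((side, side))
    for cx, cy in rng.integers(0, side, (30, 2)):     # 30 aggregation centres
        density += 50 * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 20)
    counts = rng.poisson(density)                     # plant counts per quadrat

    n, reps = 100, 2000
    srs_est, sys_est = [], []
    for _ in range(reps):
        idx = rng.choice(side * side, n, replace=False)
        srs_est.append(counts.ravel()[idx].mean())    # simple random sample
        off = rng.integers(0, 10, 2)                  # random-start 10x10 grid
        sys_est.append(counts[off[0]::10, off[1]::10].mean())

    print(f"SRS  sd of estimates: {np.std(srs_est):.2f}")
    print(f"grid sd of estimates: {np.std(sys_est):.2f}")
    ```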

  3. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    NASA Astrophysics Data System (ADS)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
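
    The statistical contrast the authors draw can be made concrete with scipy: simple random sampling of signal positions yields hypergeometric fluctuation statistics, while Bernoulli sampling (keeping each position independently with probability p) yields binomial ones. The counts below are toy values, not protocol parameters:

    ```python
    # Hypergeometric (simple random sampling) vs binomial (Bernoulli
    # sampling) tail probabilities for the number of errors seen in a
    # test sample. All counts are illustrative.
    from scipy.stats import hypergeom, binom

    N, K = 10_000, 500        # total signals, total (unknown) error positions
    n = 1_000                 # test sample size / expected test sample size

    # P(observing k or fewer errors in the test sample) under each model
    for k in (30, 40, 50):
        p_hyper = hypergeom.cdf(k, N, K, n)   # SRS without replacement
        p_binom = binom.cdf(k, n, K / N)      # Bernoulli sampling
        print(f"k={k}: hypergeometric {p_hyper:.4f}, binomial {p_binom:.4f}")
    ```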

  4. A multiple-objective optimal exploration strategy

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1988-01-01

Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple objective decision criteria to find the best sampling strategies. The approach is not limited by geometric nature of the sampling, covers a wide range in spatial continuity and leads to a step-by-step procedure. © 1988.

  5. Linking species abundance distributions in numerical abundance and biomass through simple assumptions about community structure.

    PubMed

    Henderson, Peter A; Magurran, Anne E

    2010-05-22

Species abundance distributions (SADs) are widely used as a tool for summarizing ecological communities but may have different shapes, depending on the currency used to measure species importance. We develop a simple plotting method that links SADs in the alternative currencies of numerical abundance and biomass and is underpinned by testable predictions about how organisms occupy physical space. When log numerical abundance is plotted against log biomass, the species lie within an approximately triangular region. Simple energetic and sampling constraints explain the triangular form. The dispersion of species within this triangle is the key to understanding why SADs of numerical abundance and biomass can differ. Given regular or random species dispersion, we can predict the shape of the SAD for both currencies under a variety of sampling regimes. We argue that this dispersion pattern will lie between regular and random for the following reasons. First, regular dispersion patterns will result if communities are composed of groups of organisms that use different components of the physical space (e.g. open water, the sea bed surface or rock crevices in a marine fish assemblage), and if the abundance of species in each of these spatial guilds is linked to the way individuals of varying size use the habitat. Second, temporal variation in abundance and sampling error will tend to randomize this regular pattern. Data from two intensively studied marine ecosystems offer empirical support for these predictions. Our approach also has application in environmental monitoring and the recognition of anthropogenic disturbance, which may change the shape of the triangular region by, for example, the loss of large body size top predators that occur at low abundance.

  6. Linking species abundance distributions in numerical abundance and biomass through simple assumptions about community structure

    PubMed Central

    Henderson, Peter A.; Magurran, Anne E.

    2010-01-01

Species abundance distributions (SADs) are widely used as a tool for summarizing ecological communities but may have different shapes, depending on the currency used to measure species importance. We develop a simple plotting method that links SADs in the alternative currencies of numerical abundance and biomass and is underpinned by testable predictions about how organisms occupy physical space. When log numerical abundance is plotted against log biomass, the species lie within an approximately triangular region. Simple energetic and sampling constraints explain the triangular form. The dispersion of species within this triangle is the key to understanding why SADs of numerical abundance and biomass can differ. Given regular or random species dispersion, we can predict the shape of the SAD for both currencies under a variety of sampling regimes. We argue that this dispersion pattern will lie between regular and random for the following reasons. First, regular dispersion patterns will result if communities are composed of groups of organisms that use different components of the physical space (e.g. open water, the sea bed surface or rock crevices in a marine fish assemblage), and if the abundance of species in each of these spatial guilds is linked to the way individuals of varying size use the habitat. Second, temporal variation in abundance and sampling error will tend to randomize this regular pattern. Data from two intensively studied marine ecosystems offer empirical support for these predictions. Our approach also has application in environmental monitoring and the recognition of anthropogenic disturbance, which may change the shape of the triangular region by, for example, the loss of large body size top predators that occur at low abundance. PMID:20071388

  7. The Relationship between the Levels of Alienation of the Education Faculty Students and Their Attitudes towards the Teaching Profession

    ERIC Educational Resources Information Center

    Caglar, Caglar

    2013-01-01

    It was intended in this study to ascertain the relationship between the levels of alienation of the education faculty students, and their attitudes towards the teaching profession. The sample of the research was composed of the 875 students appointed via simple random sampling out of the total population of 2600 of the Education Faculty of…

  8. Factors Influencing Teachers' Competence in Developing Resilience in Vulnerable Children in Primary Schools in Uasin Gishu County, Kenya

    ERIC Educational Resources Information Center

    Silyvier, Tsindoli; Nyandusi, Charles

    2015-01-01

    The purpose of the study was to assess the effect of teacher characteristics on their competence in developing resilience in vulnerable primary school children. A descriptive survey research design was used. This study was based on resiliency theory as proposed by Krovetz (1998). Simple random sampling was used to select a sample size of 108…

  9. Lecturers and Postgraduates Perception of Libraries as Promoters of Teaching, Learning, and Research at the University of Ibadan, Nigeria

    ERIC Educational Resources Information Center

    Oyewole, Olawale; Adetimirin, Airen

    2015-01-01

    Lecturers and postgraduates are among the users of the university libraries and their perception of the libraries has influence on utilization of the information resources, hence the need for this study. Survey method was adopted for the study and simple random sampling method was used to select sample size of 38 lecturers and 233 postgraduates.…

  10. The coalescent process in models with selection and recombination.

    PubMed

    Hudson, R R; Kaplan, N L

    1988-11-01

    The statistical properties of the process describing the genealogical history of a random sample of genes at a selectively neutral locus which is linked to a locus at which natural selection operates are investigated. It is found that the equations describing this process are simple modifications of the equations describing the process assuming that the two loci are completely linked. Thus, the statistical properties of the genealogical process for a random sample at a neutral locus linked to a locus with selection follow from the results obtained for the selected locus. Sequence data from the alcohol dehydrogenase (Adh) region of Drosophila melanogaster are examined and compared to predictions based on the theory. It is found that the spatial distribution of nucleotide differences between Fast and Slow alleles of Adh is very similar to the spatial distribution predicted if balancing selection operates to maintain the allozyme variation at the Adh locus. The spatial distribution of nucleotide differences between different Slow alleles of Adh do not match the predictions of this simple model very well.

  11. Classification of epileptic EEG signals based on simple random sampling and sequential feature selection.

    PubMed

    Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui

    2016-06-01

Electroencephalogram (EEG) signals are used broadly in the medical fields. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer, sleep problems and so on. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, the simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least squares support vector machine (LS_SVM) classifier to classify the EEG signals. The experimental results show that the method achieves 99.90%, 99.80% and 100% for classification accuracy, sensitivity and specificity, respectively.
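
    A schematic of the described pipeline on synthetic signals, with scikit-learn's SVC standing in for the LS_SVM classifier and a deliberately small feature set; none of this is the authors' exact implementation:

    ```python
    # Sketch: simple random sampling of time-domain points per epoch,
    # summary features, then greedy forward sequential feature selection.
    # Data are synthetic; SVC is a stand-in for LS_SVM.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    n_epochs, n_samples = 200, 4096
    labels = rng.integers(0, 2, n_epochs)
    signals = rng.normal(0, 1 + labels[:, None], (n_epochs, n_samples))

    def srs_features(x, k=500):
        """Summary statistics of a simple random sample of time points."""
        seg = rng.choice(x, size=k, replace=False)
        return [seg.mean(), seg.std(), np.abs(seg).max(), np.median(seg)]

    X = np.array([srs_features(row) for row in signals])

    selected, remaining = [], list(range(X.shape[1]))
    while remaining:                              # greedy forward SFS
        scores = [(cross_val_score(SVC(), X[:, selected + [j]], labels,
                                   cv=5).mean(), j) for j in remaining]
        best_score, best_j = max(scores)
        selected.append(best_j)
        remaining.remove(best_j)
        print(f"features {selected}: CV accuracy {best_score:.3f}")
    ```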

  12. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap.

    PubMed

    Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E

    2016-06-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.
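
    A heavily simplified sketch of the weighted finite-population Bayesian bootstrap idea, under the assumption that Dirichlet weights proportional to the case weights are an adequate stand-in for the full Polya-urn expansion described in the article; this is an illustration, not the authors' exact algorithm:

    ```python
    # Expand a weighted sample into synthetic populations that can then
    # be treated as simple random samples. Data and weights are synthetic.
    import numpy as np

    rng = np.random.default_rng(13)
    y = rng.normal(10, 2, 300)              # observed sample
    w = rng.uniform(1, 5, 300)              # case weights

    def wfpbb_synthetic(y, w, pop_size=1500):
        """One synthetic population via weight-proportional Dirichlet draws."""
        shares = rng.dirichlet(w)           # posterior population shares
        counts = rng.multinomial(pop_size, shares)
        return np.repeat(y, counts)

    synth_means = [wfpbb_synthetic(y, w).mean() for _ in range(200)]
    print(f"synthetic-population mean: {np.mean(synth_means):.3f} "
          f"(weighted sample mean {np.average(y, weights=w):.3f})")
    ```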

  13. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap

    PubMed Central

    Zhou, Hanzhi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in “Delta-V,” a key crash severity measure. PMID:29226161

  14. Reproduction and optical analysis of Morpho-inspired polymeric nanostructures

    NASA Astrophysics Data System (ADS)

    Tippets, Cary A.; Fu, Yulan; Jackson, Anne-Martine; Donev, Eugenii U.; Lopez, Rene

    2016-06-01

The brilliant blue coloration of the Morpho rhetenor butterfly originates from complex nanostructures found on the surface of its wings. The Morpho butterfly exhibits strong short-wavelength reflection and a unique two-lobe optical signature in the incident (θ) and reflected (ϕ) angular space. Here, we report the large-area fabrication of a Morpho-like structure and its reproduction in perfluoropolyether. Reflection comparisons of periodic and quasi-random ‘polymer butterfly’ nanostructures show similar normal-incidence spectra but differ in the angular θ-ϕ dependence. The periodic sample shows strong specular reflection and simple diffraction. However, the quasi-random sample produces a two-lobe angular reflection pattern with minimal specular reflection, approximating the real butterfly’s optical behavior. Finite-difference time-domain simulations confirm that this pattern results from the quasi-random periodicity and highlights the significance of the inherent randomness in the Morpho’s photonic structure.

  15. Restricted random search method based on taboo search in the multiple minima problem

    NASA Astrophysics Data System (ADS)

    Hong, Seung Do; Jhon, Mu Shik

    1997-03-01

The restricted random search method is proposed as a simple Monte Carlo sampling method for quickly locating minima in the multiple minima problem. This method is based on taboo search, recently applied to continuous test functions. The concept of a taboo region, rather than a taboo list, is used, so the sampling of a region near an old configuration is restricted. The method is applied to 2-dimensional test functions and to argon clusters, and is found to be a practical and efficient way to find near-global configurations of both.
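
    A toy rendering of the idea as stated in the abstract: uniform random proposals over a two-dimensional test function, with any proposal falling inside a taboo radius of an already-visited configuration rejected. The radius, bounds, and iteration counts are arbitrary choices for the sketch:

    ```python
    # Restricted random search with taboo regions on a 2-D test function.
    # Parameter values are illustrative, not from the paper.
    import numpy as np

    rng = np.random.default_rng(17)

    def rastrigin(p):
        """Classic multi-minima test function; global minimum 0 at origin."""
        return 20 + np.sum(p**2 - 10 * np.cos(2 * np.pi * p))

    visited, best, best_f = [], None, np.inf
    taboo_radius, lo, hi = 0.5, -5.12, 5.12
    trials = 0
    while len(visited) < 400 and trials < 20_000:
        trials += 1
        p = rng.uniform(lo, hi, 2)
        if any(np.linalg.norm(p - q) < taboo_radius for q in visited):
            continue                    # restricted: skip taboo regions
        visited.append(p)
        f = rastrigin(p)
        if f < best_f:
            best, best_f = p, f

    print(f"best point {best}, value {best_f:.3f} after {trials} proposals")
    ```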

  16. SAS procedures for designing and analyzing sample surveys

    USGS Publications Warehouse

    Stafford, Joshua D.; Reinecke, Kenneth J.; Kaminski, Richard M.

    2003-01-01

Complex surveys often are necessary to estimate occurrence (or distribution), density, and abundance of plants and animals for purposes of research and conservation. Most scientists are familiar with simple random sampling, where sample units are selected from a population of interest (sampling frame) with equal probability. However, the goal of ecological surveys often is to make inferences about populations over large or complex spatial areas where organisms are not homogeneously distributed or sampling frames are inconvenient or impossible to construct. Candidate sampling strategies for such complex surveys include stratified, multistage, and adaptive sampling (Thompson 1992, Buckland 1994).

  17. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme, for assessing the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size required for data collection. The presence of intercluster correlation can dramatically affect the classification error associated with LQAS analysis. PMID:20011037

  18. SNP selection and classification of genome-wide SNP data using stratified sampling random forests.

    PubMed

    Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K

    2012-09-01

For high-dimensional genome-wide association (GWA) case-control data on complex diseases, a large proportion of single-nucleotide polymorphisms (SNPs) is usually irrelevant to the disease. Simple random sampling in a random forest, with the default mtry parameter used to choose each feature subspace, selects many subspaces without informative SNPs. An exhaustive search for an optimal mtry is often required to include useful, relevant SNPs and exclude the vast number of non-informative ones, but such a search is too time-consuming for high-dimensional GWA data. The main aim of this paper is to propose a stratified sampling method for feature-subspace selection when generating decision trees in a random forest for high-dimensional GWA data. Our idea is to design an equal-width discretization scheme for informativeness that divides SNPs into multiple groups; in feature-subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace used to generate one decision tree. This stratified sampling procedure ensures that each subspace contains enough useful SNPs, avoids the very high computational cost of an exhaustive search for an optimal mtry, and maintains the randomness of the random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408,803 SNPs and Alzheimer case-control data comprising 380,157 SNPs) to demonstrate that the proposed stratified sampling method is effective and generates random forests with higher accuracy and lower error bounds than Breiman's original random forest generation method. For the Parkinson data, we also report some interesting genes identified by the method, which may be associated with neurological disorders and warrant further biological investigation.
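
    A sketch of the stratified feature-subspace step described above (our illustration; the variable names, informativeness score, and group counts are assumptions): SNPs are binned into equal-width groups by informativeness, and each tree's subspace draws the same number of SNPs from every group.

      import numpy as np

      def stratified_subspace(informativeness, n_groups=5, per_group=10, seed=0):
          rng = np.random.default_rng(seed)
          scores = np.asarray(informativeness, dtype=float)
          edges = np.linspace(scores.min(), scores.max(), n_groups + 1)
          groups = np.clip(np.digitize(scores, edges) - 1, 0, n_groups - 1)
          subspace = []
          for g in range(n_groups):
              members = np.flatnonzero(groups == g)
              take = min(per_group, members.size)   # guard against small groups
              subspace.extend(rng.choice(members, size=take, replace=False))
          return np.asarray(subspace)               # SNP columns for one tree

    Each tree in the forest would call this with a fresh random state, preserving between-tree randomness while guaranteeing informative SNPs in every subspace.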

  19. Influence of Host Community on Industrial Relations Practices and Policies: A Survey of Agbara Community and Power Holding Company of Nigeria (PHCN)

    ERIC Educational Resources Information Center

    Chidi, Christopher O.; Shadare, Oluseyi A.

    2011-01-01

    This study investigated the influence of host community on industrial relations practices and policies using Agbara community and Power Holding Company of Nigeria PLC as a case. The study adopted both the qualitative and quantitative methods. A total of 120 samples were drawn from the population using the simple random sampling technique in which…

  20. A Short Research Note on Calculating Exact Distribution Functions and Random Sampling for the 3D NFW Profile

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Howlett, Cullan

    2018-06-01

In this short note we publish the analytic quantile function for the Navarro, Frenk & White (NFW) profile. All known published and coded methods for sampling from the 3D NFW PDF use either accept-reject or numeric interpolation (sometimes via a lookup table) to project random uniform samples through the quantile distribution function and produce samples of the radius. This is a common requirement in N-body initial condition (IC), halo occupation distribution (HOD), and semi-analytic modelling (SAM) work for correctly assigning particles or galaxies to positions given an assumed concentration for the NFW profile. Using this analytic description allows for much faster and cleaner code to solve a common numeric problem in modern astronomy. We release R and Python versions of simple code that achieves this sampling, which we note is trivial to reproduce in any modern programming language.
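
    As an illustration of what such an analytic draw looks like, the scaled enclosed-mass fraction of an NFW halo, m(x) = ln(1 + x) - x/(1 + x) with x = r/R_s, can be inverted in closed form with the Lambert W function; the snippet below is our reconstruction of that inversion (not the authors' released code) and maps uniform deviates directly to radii for a halo of concentration c.

      import numpy as np
      from scipy.special import lambertw

      def nfw_radius(u, c):
          # u: uniform(0, 1) draws; returns x = r/Rs in [0, c]
          mu = np.log1p(c) - c / (1.0 + c)      # normalizing enclosed mass
          q = u * mu + 1.0
          w = lambertw(-np.exp(-q), k=0).real   # principal branch, in (-1, 0)
          return -1.0 / w - 1.0

      rng = np.random.default_rng(42)
      radii = nfw_radius(rng.uniform(size=100_000), c=10.0)

    Sanity check: u = 1 returns x = c exactly, so every sample lies inside the truncation radius by construction.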

  1. On the repeated measures designs and sample sizes for randomized controlled trials.

    PubMed

    Tango, Toshiro

    2016-04-01

For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis-of-covariance-type analysis using a pre-defined pair of "pre-post" data, in which the pre (baseline) data are used as a covariate for adjustment together with other covariates. The major design issue is then to calculate the sample size, i.e., the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but also on the number of repeated measures taken before and after randomization per subject and used in the analysis. The main advantages of the proposed design combined with generalized linear mixed-effects models are that (1) it can easily handle missing data by applying likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. An integrate-over-temperature approach for enhanced sampling.

    PubMed

    Gao, Yi Qin

    2008-02-14

A simple method is introduced to achieve efficient random walking in the energy space in molecular dynamics simulations, thus enhancing the sampling over a large energy range. The approach is closely related to multicanonical and replica exchange simulation methods in that it allows configurations of the system to be sampled over a wide energy range by making use of Boltzmann distribution functions at multiple temperatures. A biased potential is quickly generated using this method and is then used in accelerated molecular dynamics simulations.
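
    A sketch of the biased potential implied by integrating over temperatures (the temperature grid, weights n_k, and unit convention below are illustrative assumptions, not values from the paper): summing Boltzmann factors at several temperatures and taking a log gives an effective potential on which a low-temperature walker can still cross high-energy regions.

      import numpy as np
      from scipy.special import logsumexp

      def effective_potential(U, betas, n_k, beta0):
          # U'(U) = -(1/beta0) * ln( sum_k n_k * exp(-beta_k * U) )
          U = np.atleast_1d(np.asarray(U, dtype=float))
          logs = np.log(n_k)[:, None] - np.outer(betas, U)
          return -logsumexp(logs, axis=0) / beta0

      kB = 0.0019872                    # kcal/(mol K); an assumed unit choice
      betas = 1.0 / (kB * np.linspace(300.0, 600.0, 8))
      print(effective_potential([-100.0, -50.0, 0.0], betas,
                                n_k=np.ones(8), beta0=betas[0]))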

  3. The contribution of simple random sampling to observed variations in faecal egg counts.

    PubMed

    Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I

    2012-09-10

It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conform to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasitic diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown from a theoretical perspective to give variable results that inevitably arise from the random distribution of parasite eggs in a well mixed faecal sample. The Poisson processes that lead to this variability are described, together with illustrative examples of the potentially large confidence intervals that can arise from faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided. Copyright © 2012 Elsevier B.V. All rights reserved.
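
    The width of these intervals is easy to reproduce: a McMaster count is a Poisson observation, so an exact confidence interval follows from chi-square quantiles, and scaling by the slide's multiplication factor (the factor of 50 below is a common McMaster convention, assumed here for illustration) shows how wide the interval in eggs per gram can be.

      from scipy.stats import chi2

      def epg_interval(eggs_counted, factor=50, conf=0.95):
          # exact Poisson CI for the raw count, scaled to eggs per gram
          a = 1.0 - conf
          lo = 0.0 if eggs_counted == 0 else chi2.ppf(a / 2, 2 * eggs_counted) / 2
          hi = chi2.ppf(1 - a / 2, 2 * eggs_counted + 2) / 2
          return eggs_counted * factor, lo * factor, hi * factor

      # counting 8 eggs gives a point estimate of 400 epg,
      # but the exact 95% interval spans roughly 170 to 790 epg
      print(epg_interval(8))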

  4. Using auxiliary information to improve wildlife disease surveillance when infected animals are not detected: a Bayesian approach

    USGS Publications Warehouse

    Heisey, Dennis M.; Jennelle, Christopher S.; Russell, Robin E.; Walsh, Daniel P.

    2014-01-01

There are numerous situations in which it is important to determine whether a particular disease of interest is present in a free-ranging wildlife population. However, adequate disease surveillance can be labor-intensive and expensive, so there is substantial motivation to conduct it as efficiently as possible. Surveillance is often based on the assumption of a simple random sample, but this can almost always be improved upon if auxiliary information about disease risk factors is available. We present a Bayesian approach to disease surveillance when auxiliary risk information is available, which will usually allow substantial improvements over simple random sampling. Others have employed risk weights in surveillance, but this can result in overly optimistic statements regarding freedom from disease because the uncertainty in the auxiliary information is not accounted for; our approach remedies this. We compare our Bayesian approach to a published example of risk weights applied to chronic wasting disease in deer in Colorado, and we also present calculations to examine when uncertainty in the auxiliary information has a serious impact on the risk-weights approach. Our approach allows "apples-to-apples" comparisons of surveillance efficiencies between units where heterogeneous samples were collected.
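
    The phenomenon the authors correct for can be shown with a generic calculation (ours, not the paper's Bayesian model; all parameter values are illustrative): treating a risk weight as exact overstates the probability that surveillance would have detected disease, relative to propagating uncertainty in the weight.

      import numpy as np
      rng = np.random.default_rng(7)

      def detect_naive(n, prev, risk_weight):
          # risk-weighted surveillance treating the relative risk as exact
          return 1 - (1 - prev * risk_weight) ** n

      def detect_uncertain(n, prev, rw_mean, rw_sd, draws=100_000):
          # same calculation with a gamma-distributed risk weight
          shape = (rw_mean / rw_sd) ** 2
          rw = rng.gamma(shape, rw_mean / shape, size=draws)
          p = np.minimum(prev * rw, 1.0)
          return np.mean(1 - (1 - p) ** n)

      print(detect_naive(200, 0.01, 2.0), detect_uncertain(200, 0.01, 2.0, 1.0))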

  5. Determinants of Teachers' Attitudes towards E- Learning in Tanzanian Higher Learning Institutions

    ERIC Educational Resources Information Center

    Kisanga, Dalton H.

    2016-01-01

    This survey research study presents the findings on determinants of teachers' attitudes towards e-learning in Tanzanian higher learning institutions. The study involved 258 teachers from 4 higher learning institutions obtained through stratified, simple random sampling. Questionnaires and documentary review were used in data collection. Data were…

  6. Self-Medication among School Students

    ERIC Educational Resources Information Center

    ALBashtawy, Mohammed; Batiha, Abdul-Monim; Tawalbeh, Loai; Tubaishat, Ahmad; AlAzzam, Manar

    2015-01-01

Self-medication, usually with over-the-counter (OTC) medication, is reported as a community health problem that affects many people worldwide. Self-medication practice usually begins with the onset of adolescence. A school-based cross-sectional study was conducted in Mafraq Governorate, Jordan, using a simple random sampling method to select…

  7. Usefulness of fire ant genetics in insecticide efficacy trials

    USDA-ARS?s Scientific Manuscript database

    Mature fire ant colonies contain an average of 80,000 worker ants. For this study, eight fire ant workers were randomly sampled from each colony. DNA fingerprints for each individual ant were generated using 21 simple sequence repeats (SSR) markers that were developed from fire ant DNA by other lab...

  8. Statistical design and analysis of environmental studies for plutonium and other transuranics at NAEG ''safety-shot'' sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, R.O.; Eberhardt, L.L.; Fowler, E.B.

This paper is centered around the use of stratified random sampling for estimating the total amount (inventory) of 239-240Pu and uranium in surface soil at ten ''safety-shot'' sites on the Nevada Test Site (NTS) and Tonopah Test Range (TTR) that are currently being studied by the Nevada Applied Ecology Group (NAEG). The use of stratified random sampling has resulted in estimates of inventory at these desert study sites that have smaller standard errors than would have been the case had simple random sampling (no stratification) been used. Estimates of inventory are given for 235U, 238U, and 239-240Pu in soil at A Site of Area 11 on the NTS. Other results presented include average concentrations of one or more of these isotopes in soil and vegetation and in soil profile samples at depths to 25 cm. The regression relationship between soil and vegetation concentrations of 235U and 238U at adjacent sampling locations is also examined using three different models. The applicability of stratified random sampling to the estimation of concentration contours of 239-240Pu in surface soil using computer algorithms is also investigated. Estimates of such contours are obtained using several different methods. The planning of field sampling plans for estimating inventory and distribution is discussed. (auth)
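
    The gain from stratification comes from the textbook stratified estimator of a total; the sketch below (a generic illustration with made-up numbers, not the NAEG computations) shows the estimate and the standard error, which shrinks when within-stratum variances are small.

      import numpy as np

      def stratified_total(strata, z=1.96):
          # strata: list of (N_h, sample_values) pairs, N_h = stratum size
          total, var = 0.0, 0.0
          for N_h, y in strata:
              y = np.asarray(y, dtype=float)
              n_h = y.size
              total += N_h * y.mean()
              var += N_h**2 * (1 - n_h / N_h) * y.var(ddof=1) / n_h
          se = np.sqrt(var)
          return total, se, (total - z * se, total + z * se)

      # three strata of 50, 120, and 400 sampling units (toy concentrations)
      print(stratified_total([(50, [8.1, 9.4, 7.7, 10.2]),
                              (120, [2.3, 1.9, 2.8, 2.2, 2.6]),
                              (400, [0.4, 0.6, 0.3, 0.5])]))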

  9. Geographic Information Systems to Assess External Validity in Randomized Trials.

    PubMed

    Savoca, Margaret R; Ludwig, David A; Jones, Stedman T; Jason Clodfelter, K; Sloop, Joseph B; Bollhalter, Linda Y; Bertoni, Alain G

    2017-08-01

To support claims that RCTs can reduce health disparities (i.e., are translational), it is imperative that methodologies exist to evaluate the tenability of external validity in RCTs when probabilistic sampling of participants is not employed. Typically, attempts at establishing post hoc external validity are limited to a few comparisons across convenience variables, which must be available in both sample and population. A Type 2 diabetes RCT was used as an example of a method that uses a geographic information system to assess external validity in the absence of an a priori probabilistic community-wide diabetes risk sampling strategy. A geographic information system, 2009-2013 county death certificate records, and 2013-2014 electronic medical records were used to identify community-wide diabetes prevalence. Color-coded diabetes density maps provided a visual representation of these densities. A chi-square goodness-of-fit analysis tested the degree to which the distribution of RCT participants varied across density classes compared to what would be expected given simple random sampling of the county population. Analyses were conducted in 2016. Diabetes prevalence areas as represented by death certificate and electronic medical records were distributed similarly. The simple random sample model was not a good fit for the death certificate records (chi-square, 17.63; p=0.0001) or the electronic medical record data (chi-square, 28.92; p<0.0001). Generally, RCT participants were oversampled in high-diabetes-density areas. Location is a highly reliable "principal variable" associated with health disparities. It serves as a directly measurable proxy for high-risk underserved communities, thus offering an effective and practical approach for examining external validity of RCTs. Copyright © 2017 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  10. Correcting Evaluation Bias of Relational Classifiers with Network Cross Validation

    DTIC Science & Technology

    2010-01-01

classification algorithms: simple random resampling (RRS), equal-instance random resampling (ERS), and network cross-validation (NCV). The first two... an NCV procedure that eliminates overlap between test sets altogether. The procedure samples k disjoint test sets that will be used for evaluation... for each fold, a training set of (propLabeled × S) nodes is sampled from the training pool, the inference set is the network minus the training set, and the triple <trainSet, testSet, inferenceSet> is added to the output F... NCV addresses

  11. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.

  12. General statistical considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eberhardt, L L; Gilbert, R O

From NAEG plutonium environmental studies program meeting; Las Vegas, Nevada, USA (2 Oct 1973). The high sampling variability encountered in environmental plutonium studies along with high analytical costs makes it very important that efficient soil sampling plans be used. However, efficient sampling depends on explicit and simple statements of the objectives of the study. When there are multiple objectives it may be difficult to devise a wholly suitable sampling scheme. Sampling for long-term changes in plutonium concentration in soils may also be complex and expensive. Further attention to problems associated with compositing samples is recommended, as is the consistent use of random sampling as a basic technique. (auth)

  13. Advantage of multiple spot urine collections for estimating daily sodium excretion: comparison with two 24-h urine collections as reference.

    PubMed

    Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi

    2016-02-01

Several estimation methods for 24-h sodium excretion using spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan collected multiple 24-h urine and spot urine samples independently. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method', which estimates excretion by multiplying the sodium-creatinine ratio by predicted 24-h creatinine excretion, and the 'regression method', which employs linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. The mean sodium excretion by the simple mean method with three spot urine samples was closest to that by 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with an increased number of spot urine samples, at 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. This method with three spot urine samples yielded a higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion. When only one spot urine sample was available, the regression method was preferable.
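
    A sketch of the 'simple mean method' with three spot samples (the units and numbers are our assumptions for illustration; the study's exact conventions may differ):

      import numpy as np

      def simple_mean_estimate(spot_na_mmol_l, spot_cr_mg_dl, pred_cr_mg_day):
          # mean spot sodium-to-creatinine ratio, scaled by the participant's
          # predicted 24-h creatinine excretion
          na = np.asarray(spot_na_mmol_l, dtype=float)          # mmol/L
          cr = np.asarray(spot_cr_mg_dl, dtype=float) * 10.0    # mg/dL -> mg/L
          return (na / cr).mean() * pred_cr_mg_day              # mmol Na/day

      # three spot samples from one participant (illustrative values)
      print(simple_mean_estimate([120, 95, 140], [150, 110, 170], 1300.0))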

  14. Dynamical traps in Wang-Landau sampling of continuous systems: Mechanism and solution

    NASA Astrophysics Data System (ADS)

    Koh, Yang Wei; Sim, Adelene Y. L.; Lee, Hwee Kuan

    2015-08-01

We study the mechanism behind the dynamical trapping experienced during Wang-Landau sampling of continuous systems, as reported by several authors. Trapping is caused by the random walker coming close to a local energy extremum, although the mechanism is different from that of the critical slowing-down encountered in conventional molecular dynamics or Monte Carlo simulations. When trapped, the random walker misses an entire stage, or even several stages, of Wang-Landau modification-factor reduction, leading to inadequate sampling of the configuration space and a rough density of states, even though the modification factor has been reduced to very small values. Trapping depends on the specific system, the choice of energy bins, and the Monte Carlo step size, making it highly unpredictable. A general, simple, and effective solution is proposed in which the configurations of multiple parallel Wang-Landau trajectories are interswapped to prevent trapping. We also explain why swapping frees the random walker from such traps. The efficacy of the proposed algorithm is demonstrated.
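
    One concrete swap rule with the right flavor is the replica-exchange Wang-Landau acceptance criterion (a known rule from the REWL literature, shown here only to illustrate interswapping parallel trajectories; the paper's exact criterion may differ):

      import numpy as np
      rng = np.random.default_rng(0)

      def accept_swap(logg_i, logg_j, E_x, E_y):
          # walkers i and j hold configurations in energy bins E_x and E_y;
          # logg_*: each walker's current ln g(E) estimates, one per bin.
          # swap probability: min(1, g_i(E_x) g_j(E_y) / (g_i(E_y) g_j(E_x)))
          ln_p = (logg_i[E_x] + logg_j[E_y]) - (logg_i[E_y] + logg_j[E_x])
          return np.log(rng.uniform()) < min(0.0, ln_p)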

  15. Optimal sampling design for estimating spatial distribution and abundance of a freshwater mussel population

    USGS Publications Warehouse

    Pooler, P.S.; Smith, D.R.

    2005-01-01

We compared the ability of simple random sampling (SRS) and a variety of systematic sampling (SYS) designs to estimate abundance, quantify spatial clustering, and predict spatial distribution of freshwater mussels. Sampling simulations were conducted using data obtained from a census of freshwater mussels in a 40 × 33 m section of the Cacapon River near Capon Bridge, West Virginia, and from a simulated spatially random population generated to have the same abundance as the real population. Sampling units that were 0.25 m² gave more accurate and precise abundance estimates and generally better spatial predictions than 1-m² sampling units. Systematic sampling with ≥2 random starts was more efficient than SRS. Estimates of abundance based on SYS were more accurate when the distance between sampling units across the stream was less than or equal to the distance between sampling units along the stream. Three measures for quantifying spatial clustering were examined: the Hopkins Statistic, the Clumping Index, and Morisita's Index. Morisita's Index was the most reliable, and the Hopkins Statistic was prone to false rejection of complete spatial randomness. SYS designs with units spaced equally across and up stream provided the most accurate predictions when estimating the spatial distribution by kriging. Our research indicates that SYS designs with sampling units equally spaced both across and along the stream would be appropriate for sampling freshwater mussels even if no information about the true underlying spatial distribution of the population were available to guide the design choice. © 2005 by The North American Benthological Society.
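
    For reference, systematic sampling with multiple random starts is simple to implement over a gridded frame (the frame size, step, and start count below are illustrative, not the study's design):

      import numpy as np

      def systematic_sample(frame_size, n_starts, step, seed=0):
          # each random start anchors its own equally spaced chain of units
          rng = np.random.default_rng(seed)
          starts = rng.choice(step, size=n_starts, replace=False)
          idx = np.concatenate([np.arange(s, frame_size, step) for s in starts])
          return np.sort(idx)

      # a 40 x 33 grid of 1-m2 quadrats flattened to 1320 units, two starts
      print(systematic_sample(40 * 33, n_starts=2, step=24))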

  16. The Influence of Leadership Practices on Faculty Job Satisfaction in Baccalaureate Degree Nursing Program

    ERIC Educational Resources Information Center

    Afam, Clifford C.

    2012-01-01

    Using a correlational, cross-sectional study design with self-administered questionnaires, this study explored the extent to which leadership practices of deans and department heads influence faculty job satisfaction in baccalaureate degree nursing programs. Using a simple random sampling technique, the study survey was sent to 400 faculty…

  17. Logging utilization in Idaho: Current and past trends

    Treesearch

    Eric A. Simmons; Todd A. Morgan; Erik C. Berg; Stanley J. Zarnoch; Steven W. Hayes; Mike T. Thompson

    2014-01-01

    A study of commercial timber-harvesting activities in Idaho was conducted during 2008 and 2011 to characterize current tree utilization, logging operations, and changes from previous Idaho logging utilization studies. A two-stage simple random sampling design was used to select sites and felled trees for measurement within active logging sites. Thirty-three logging...

  18. Science Teachers' Information Processing Behaviours in Nepal: A Reflective Comparative Study

    ERIC Educational Resources Information Center

    Acharya, Kamal Prasad

    2017-01-01

This study investigates the information processing behaviours of secondary level science teachers. It is based on data collected from 50 secondary level school science teachers working in the Kathmandu Valley. Simple random sampling and the Cognitive Style Inventory have been used respectively as the technique and tool to…

  19. A National Study of Work-Family Balance and Job Satisfaction among Agriculture Teachers

    ERIC Educational Resources Information Center

    Sorensen, Tyson J.; McKim, Aaron J.; Velez, Jonathan J.

    2016-01-01

    This national study sought to extend previous research on the work-family balance (WFB) ability of secondary school agriculture teachers. We utilized data from a simple random sample of agriculture teachers to explore the relationships between work and family characteristics, WFB ability, and job satisfaction. Work role characteristics of interest…

  20. Oral Health Patterns among Schoolchildren in Mafraq Governorate, Jordan

    ERIC Educational Resources Information Center

    ALBashtawy, Mohammed

    2012-01-01

    Little is known about the oral hygiene patterns among schoolchildren in Jordan. A school-based cross-sectional study was performed from January to March 2010. A simple random sampling method was used. Each student participant completed a detailed questionnaire regarding oral hygiene habits. Data were coded and analyzed using SPSS software version…

  1. Communication Channels as Implementation Determinants of Performance Management Framework in Kenya

    ERIC Educational Resources Information Center

    Sang, Jane

    2016-01-01

The purpose of this study was to assess communication channels as implementation determinants of the performance management framework in Kenya at Moi Teaching and Referral Hospital (MTRH). The communication theory was used to inform the study. This study adopted an explanatory design. The study targeted a sample of 510 respondents selected through simple random and stratified…

  2. Motivation among Public Primary School Teachers in Mauritius

    ERIC Educational Resources Information Center

    Seebaluck, Ashley Keshwar; Seegum, Trisha Devi

    2013-01-01

    Purpose: The purpose of this study was to critically analyse the factors that affect the motivation of public primary school teachers and also to investigate if there is any relationship between teacher motivation and job satisfaction in Mauritius. Design/methodology/approach: Simple random sampling method was used to collect data from 250 primary…

  3. A Confirmatory Factor Analysis of the Professional Opinion Scale

    ERIC Educational Resources Information Center

    Greeno, Elizabeth J.; Hughes, Anne K.; Hayward, R. Anna; Parker, Karen L.

    2007-01-01

    The Professional Opinion Scale (POS) was developed to measure social work values orientation. Objective: A confirmatory factor analysis was performed on the POS. Method: This cross-sectional study used a mailed survey design with a national random (simple) sample of members of the National Association of Social Workers. Results: The study…

  4. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…

  5. Examining Middle School Students' Views on Text Bullying

    ERIC Educational Resources Information Center

    Semerci, Ali

    2016-01-01

This study aimed to examine middle school students' views on text bullying in regard to gender, grade level, reactions to bullying and frequency of internet use. The 872 participating students were selected through the simple random sampling method from among 525 schools located in central Ankara. The data were collected via a questionnaire and a survey…

  6. Relationship between Study Habits and Test Anxiety of Higher Secondary Students

    ERIC Educational Resources Information Center

    Lawrence, Arul A. S.

    2014-01-01

The present study aims to probe the relationship between study habits and test anxiety of higher secondary students. In this normative study, the survey method was employed. The population for the present study consisted of higher secondary students studying in Tirunelveli district. The investigator used the simple random sampling technique. The sample…

  7. The Relationship between Prospective Teachers' Critical Thinking Dispositions and Their Educational Philosophies

    ERIC Educational Resources Information Center

    Aybek, Birsel; Aslan, Serkan

    2017-01-01

The aim of this research is to investigate the relationship between prospective teachers' critical thinking dispositions and their educational philosophies. The research used the relational screening model. The study included a total of 429 prospective teachers selected by the simple random sampling method. Research data has been collected through…

  8. Effects of different preservation methods on inter simple sequence repeat (ISSR) and random amplified polymorphic DNA (RAPD) molecular markers in botanic samples.

    PubMed

    Wang, Xiaolong; Li, Lin; Zhao, Jiaxin; Li, Fangliang; Guo, Wei; Chen, Xia

    2017-04-01

To evaluate the effects of different preservation methods (storage in a -20°C ice chest, preservation in liquid nitrogen, and drying in silica gel) on inter simple sequence repeat (ISSR) and random amplified polymorphic DNA (RAPD) analyses in various botanical specimens (including broad-leaved, needle-leaved and succulent plants) over different times (three weeks and three years), we used a statistical analysis based on the number of bands, a genetic index, and cluster analysis. The results demonstrate that all of these preservation methods can provide sufficient amounts of genomic DNA for ISSR and RAPD analyses; however, the effects of the different preservation methods on these analyses vary significantly, while preservation time has little effect. Our results provide a reference for researchers selecting the most suitable preservation method, depending on the study subject, for the analysis of molecular markers based on genomic DNA. Copyright © 2017 Académie des sciences. Published by Elsevier Masson SAS. All rights reserved.

  9. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.

  10. Statistical inferences for data from studies conducted with an aggregated multivariate outcome-dependent sample design

    PubMed Central

    Lu, Tsui-Shan; Longnecker, Matthew P.; Zhou, Haibo

    2016-01-01

An outcome-dependent sampling (ODS) scheme is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known such designs are the case-control design for a binary response, the case-cohort design for failure-time data, and the general ODS design for a continuous response. While substantial work has been done for the univariate response case, statistical inference and design for ODS with multivariate cases remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (Multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the Multivariate-ODS design is semiparametric, with all the underlying distributions of covariates modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and establish its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the Multivariate-ODS or the estimator from a simple random sample of the same size. The Multivariate-ODS design together with the proposed estimator provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of PCB exposure with hearing loss in children born to the Collaborative Perinatal Study. PMID:27966260

  11. Binary Colloidal Alloy Test-5: Phase Separation

    NASA Technical Reports Server (NTRS)

    Lynch, Matthew; Weitz, David A.; Lu, Peter J.

    2008-01-01

    The Binary Colloidal Alloy Test - 5: Phase Separation (BCAT-5-PhaseSep) experiment will photograph initially randomized colloidal samples onboard the ISS to determine their resulting structure over time. This allows the scientists to capture the kinetics (evolution) of their samples, as well as the final equilibrium state of each sample. BCAT-5-PhaseSep studies collapse (phase separation rates that impact product shelf-life); in microgravity the physics of collapse is not masked by being reduced to a simple top and bottom phase as it is on Earth.

  12. MAP: an iterative experimental design methodology for the optimization of catalytic search space structure modeling.

    PubMed

    Baumes, Laurent A

    2006-01-01

One of the main problems in high-throughput materials research is still the design of experiments. At the early stages of discovery programs, purely exploratory methodologies coupled with fast screening tools should be employed. This should create opportunities to find unexpected catalytic results and to identify the "groups" of catalyst outputs, providing well-defined boundaries for future optimizations. However, very few recent papers deal with strategies that guide exploratory studies; mostly, traditional designs, homogeneous coverings, or simple random samplings are exploited. Typical catalytic output distributions exhibit unbalanced datasets on which efficient learning is hard to carry out, and interesting but rare classes usually go unrecognized. Here, a new iterative algorithm is suggested for characterizing the structure of the search space, working independently of the learning process. It enhances recognition rates by transferring catalysts to be screened from "performance-stable" zones of the space to "unsteady" ones, which need more experiments to be well modeled. Evaluating new algorithms on benchmarks is compulsory given the lack of prior evidence of their efficiency. The method is detailed and thoroughly tested with mathematical functions exhibiting different levels of complexity. The strategy is not only evaluated empirically; the effect of sampling on future machine-learning performance is also quantified. The minimum sample size required for the algorithm to be statistically discriminated from simple random sampling is investigated.

  13. A simple homogeneous model for regular and irregular metallic wire media samples

    NASA Astrophysics Data System (ADS)

    Kosulnikov, S. Y.; Mirmoosa, M. S.; Simovski, C. R.

    2018-02-01

To simplify the solution of electromagnetic problems with wire media samples, it is reasonable to treat them as samples of a homogeneous material without spatial dispersion. Accounting for spatial dispersion implies additional boundary conditions and makes the solution of boundary problems difficult, especially if the sample is not an infinitely extended layer. Moreover, for a novel type of wire media - arrays of randomly tilted wires - a spatially dispersive model has not been developed. Here, we introduce a simplistic heuristic model of wire media samples shaped as bricks. Our model covers wire media of both regularly and irregularly stretched wires.

  14. Theory and implementation of a very high throughput true random number generator in field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yonggang, E-mail: wangyg@ustc.edu.cn; Hui, Cong; Liu, Chong

The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  15. Theory and implementation of a very high throughput true random number generator in field programmable gate array.

    PubMed

    Wang, Yonggang; Hui, Cong; Liu, Chong; Xu, Chao

    2016-04-01

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  16. Generalized estimators of avian abundance from count survey data

    USGS Publications Warehouse

    Royle, J. Andrew

    2004-01-01

I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture-recapture, multiple observer, removal sampling, and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitutes a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data-generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site-level covariates on detection and abundance may be considered, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, or unsampled locations. Two brief examples are given, the first involving simple point counts and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
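
    The hierarchical (metapopulation) idea can be written down directly: local abundance is a latent Poisson variable and repeated counts are binomial thinnings of it; the likelihood below marginalizes the latent abundance by truncated summation (a generic binomial-Poisson mixture sketch under our own naming, not the paper's full framework):

      import numpy as np
      from scipy.stats import poisson, binom

      def loglik(lam, p, site_counts, N_max=200):
          # site_counts: list of arrays, the repeated counts at each site;
          # model: N_i ~ Poisson(lam), y_it | N_i ~ Binomial(N_i, p)
          Ns = np.arange(N_max + 1)
          logprior = poisson.logpmf(Ns, lam)
          ll = 0.0
          for y in site_counts:
              y = np.asarray(y)[:, None]
              terms = logprior + binom.logpmf(y, Ns, p).sum(axis=0)
              m = terms.max()
              ll += m + np.log(np.exp(terms - m).sum())   # log-sum-exp over N
          return ll

      print(loglik(5.0, 0.4, [np.array([2, 3, 1]), np.array([4, 2, 2])]))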

  17. A Comparative Study of Factors Influencing Male and Female Lecturers' Job Satisfaction in Ghanaian Higher Education

    ERIC Educational Resources Information Center

    Amos, Patricia Mawusi; Acquah, Sakina; Antwi, Theresa; Adzifome, Nixon Saba

    2015-01-01

    The study sought to compare factors influencing male and female lecturers' job satisfaction. Cross-sectional survey designs employing both quantitative and qualitative approaches were adopted for the study. Simple random sampling was used to select 163 lecturers from the four oldest public universities in Ghana. Celep's (2000) Organisational…

  18. Extrinsic Motivation as Correlates of Work Attitude of the Nigerian Police Force: Implications for Counselling

    ERIC Educational Resources Information Center

    Igun, Sylvester Nosakhare

    2008-01-01

The study examined extrinsic motivation as a correlate of the work attitude of the Nigeria Police Force and its implications for counselling. 300 police personnel were selected by a random sampling technique from the six departments that make up the police force headquarters, Abuja. The personnel were selected from each department using simple sampling…

  19. The Examination of Teacher Stress among Turkish Early Childhood Education Teachers

    ERIC Educational Resources Information Center

    Erdiller, Z. B.; Dogan, Ö.

    2015-01-01

    The purpose of this study is to examine the level of teacher stress experienced by Turkish early childhood education teachers working in public and private preschools serving children from three to six years of age. The participants of the study include 1119 early childhood education teachers gathered through simple random sampling. The data are…

  20. Coping with Resource Management Challenges in Mumias Sub-County, Kakamega County, Kenya

    ERIC Educational Resources Information Center

    Anyango, Onginjo Rose; Orodho, John Aluko

    2016-01-01

The gist of the study was to examine the main coping strategies used to manage resources in public secondary schools in Mumias Sub-County, Kakamega County, Kenya. The study was premised on Hunt's (2007) theory of project management. A descriptive survey design was adopted. A combination of purposive and simple random sampling techniques was used…

  1. Truancy and Its Influence on Students' Learning in Dormaa Senior High School

    ERIC Educational Resources Information Center

    Henry, Gyimah; Yelkpieri, Daniel

    2017-01-01

The study investigated the incidence of truancy among students and its influence on learning in the Dormaa Senior High School. A descriptive survey design was adopted in carrying out the study. The study population consisted of teachers, students, parents and opinion leaders in the study area. The simple random and purposive samplings were used in…

  2. Determinants of Differing Teacher Attitudes towards Inclusive Education Practice

    ERIC Educational Resources Information Center

    Gyimah, Emmanuel K.; Ackah, Francis R., Jr.; Yarquah, John A.

    2010-01-01

    An examination of literature reveals that teacher attitude is fundamental to the practice of inclusive education. In order to verify the extent to which the assertion is applicable in Ghana, 132 teachers were selected from 16 regular schools in the Cape Coast Metropolis using purposive and simple random sampling techniques to respond to a four…

  3. Performing an Event Study: An Exercise for Finance Students

    ERIC Educational Resources Information Center

    Reese, William A., Jr.; Robins, Russell P.

    2017-01-01

This exercise helps instructors teach students how to perform a simple event study. The study tests to see if stocks earn abnormal returns when added to the S&P 500. Students select a random sample of stocks that were added to the index between January 2000 and July 2015. The accompanying spreadsheet calculates cumulative abnormal returns and…

  4. The Contribution of Counseling Providers to the Success or Failure of Marriages

    ERIC Educational Resources Information Center

    Ansah-Hughes, Winifred

    2015-01-01

    This study is an investigation into the contribution of counseling providers to the success or failure of marriages. The purposive and the simple random sampling methods were used to select eight churches and 259 respondents (married people) in the Techiman Municipality. The instrument used to collect data was a 26-item questionnaire including a…

  5. Prevalence Incidence Mixture Models

    Cancer.gov

The R package and webtool fit Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time-to-event data of the kind commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling and for stratified sampling (both the superpopulation and the finite-population approaches to the target population are supported). Non-parametric (absolute risks only), semi-parametric, weakly-parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.

  6. Random walks and diffusion on networks

    NASA Astrophysics Data System (ADS)

    Masuda, Naoki; Porter, Mason A.; Lambiotte, Renaud

    2017-11-01

    Random walks are ubiquitous in the sciences, and they are interesting from both theoretical and practical perspectives. They are one of the most fundamental types of stochastic processes; can be used to model numerous phenomena, including diffusion, interactions, and opinions among humans and animals; and can be used to extract information about important entities or dense groups of entities in a network. Random walks have been studied for many decades on both regular lattices and (especially in the last couple of decades) on networks with a variety of structures. In the present article, we survey the theory and applications of random walks on networks, restricting ourselves to simple cases of single and non-adaptive random walkers. We distinguish three main types of random walks: discrete-time random walks, node-centric continuous-time random walks, and edge-centric continuous-time random walks. We first briefly survey random walks on a line, and then we consider random walks on various types of networks. We extensively discuss applications of random walks, including ranking of nodes (e.g., PageRank), community detection, respondent-driven sampling, and opinion models such as voter models.
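
    The first of these, the discrete-time random walk, is compact in matrix form: the walker's distribution evolves by a row-stochastic transition matrix, and on a connected, non-bipartite undirected network it converges to a degree-proportional stationary distribution (the toy graph below is our illustration).

      import numpy as np

      def stationary_distribution(A, steps=200):
          P = A / A.sum(axis=1, keepdims=True)   # move to a uniform neighbour
          pi = np.full(len(A), 1.0 / len(A))
          for _ in range(steps):
              pi = pi @ P                        # power iteration
          return pi

      # triangle 0-1-2 with a pendant node 3: degrees 2, 2, 3, 1
      A = np.array([[0, 1, 1, 0], [1, 0, 1, 0],
                    [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
      print(stationary_distribution(A))          # ~ degree / (2 * #edges)
      print(A.sum(axis=1) / A.sum())             # [0.25, 0.25, 0.375, 0.125]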

  7. Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices

    NASA Astrophysics Data System (ADS)

    Passemier, Damien; McKay, Matthew R.; Chen, Yang

    2015-07-01

Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term into the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple-sample significance testing problem.

  8. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    PubMed Central

    2013-01-01

    Background Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725
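
    A sketch of the beta-binomial risk calculation at the heart of C-LQAS (the parameterization linking intracluster correlation to the beta parameters is a common choice assumed here; the paper's exact formulation may differ):

      from scipy.stats import betabinom

      def misclassification_risks(n, d, p_low, p_high, rho):
          # positive count among n clustered observations ~ beta-binomial
          # with mean n*p and overdispersion set by intracluster corr. rho;
          # rule: classify the lot as 'high prevalence' when count >= d
          k = (1.0 - rho) / rho
          alpha = 1.0 - betabinom.cdf(d - 1, n, p_low * k, (1 - p_low) * k)
          beta = betabinom.cdf(d - 1, n, p_high * k, (1 - p_high) * k)
          return alpha, beta   # P(call high | low), P(call low | high)

      print(misclassification_risks(n=201, d=30, p_low=0.10, p_high=0.20,
                                    rho=0.05))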

  9. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    PubMed

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.

  10. Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli.

    PubMed

    Westfall, Jacob; Kenny, David A; Judd, Charles M

    2014-10-01

    Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.

  11. Statistical inferences for data from studies conducted with an aggregated multivariate outcome-dependent sample design.

    PubMed

    Lu, Tsui-Shan; Longnecker, Matthew P; Zhou, Haibo

    2017-03-15

An outcome-dependent sampling (ODS) scheme is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known such designs are the case-control design for a binary response, the case-cohort design for failure-time data, and the general ODS design for a continuous response. While substantial work has been carried out for the univariate response case, statistical inference and design for ODS with multivariate cases remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the multivariate-ODS design is semiparametric, with all the underlying distributions of covariates modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and establish its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the multivariate-ODS or the estimator from a simple random sample of the same size. The multivariate-ODS design together with the proposed estimator provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of polychlorinated biphenyl exposure with hearing loss in children born to the Collaborative Perinatal Study. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Random forests ensemble classifier trained with data resampling strategy to improve cardiac arrhythmia diagnosis.

    PubMed

    Ozçift, Akin

    2011-05-01

    Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling-strategy-based Random Forests (RF) ensemble classifier to improve the diagnosis of cardiac arrhythmia. Random Forests is an ensemble classifier consisting of many decision trees that outputs the class that is the mode of the classes output by the individual trees. In this way, an RF ensemble classifier performs better than a single tree in terms of classification performance. In general, multiclass datasets with unbalanced sample-size distributions are difficult to analyze in terms of class discrimination. Cardiac arrhythmia is such a dataset: it has multiple classes with small sample sizes, and it is therefore suitable for testing our resampling-based training strategy. The dataset contains 452 samples in fourteen types of arrhythmias, and eleven of these classes have sample sizes of less than 15. Our diagnosis strategy consists of two parts: (i) a correlation-based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling, to assess the efficiency of the proposed training strategy. The resultant accuracy of the classifier is found to be 90.0%, which is quite high diagnostic performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography and audiology, are used to benchmark the effectiveness of the proposed method. The results of the experiments demonstrate the efficiency of the random sampling strategy in training the RF ensemble classification algorithm. Copyright © 2011 Elsevier Ltd. All rights reserved.
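
    The paper's exact resampling scheme and the arrhythmia data are not reproduced here, but a sketch of the general recipe (balance the training classes by simple random oversampling, then fit a Random Forest) might look as follows, with synthetic data standing in for the unbalanced multiclass dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

# Synthetic stand-in for an unbalanced multiclass dataset
X, y = make_classification(n_samples=600, n_classes=3, n_informative=6,
                           weights=[0.8, 0.15, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Simple random oversampling: resample every class up to the majority size
n_max = np.bincount(y_tr).max()
parts = [resample(X_tr[y_tr == c], y_tr[y_tr == c], replace=True,
                  n_samples=n_max, random_state=c) for c in np.unique(y_tr)]
X_bal = np.vstack([p[0] for p in parts])
y_bal = np.concatenate([p[1] for p in parts])

for name, (Xf, yf) in {"raw": (X_tr, y_tr), "resampled": (X_bal, y_bal)}.items():
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xf, yf)
    print(name, round(rf.score(X_te, y_te), 3))
```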

  13. Principals Leadership Styles and Gender Influence on Teachers Morale in Public Secondary Schools

    ERIC Educational Resources Information Center

    Eboka, Obiajulu Chinyelum

    2016-01-01

    The study investigated the perception of teachers on the influence of principals' leadership styles and gender on teacher morale. Four research questions and four research hypotheses guided the study. An ex-post facto research design was adopted in the study. Through the simple random sampling technique, a total of 72 principals and 2,506 in 72…

  14. Sociodemographic Variables in Relation to Social Appearance Anxiety in Adolescents

    ERIC Educational Resources Information Center

    Sahin, Ertugrul; Barut, Yasar; Ersanli, Ercüment

    2013-01-01

    This study examined the effects of gender, age, grade level, and the educational level of the mother and father on social appearance anxiety in Turkish adolescents. This was a cross-sectional study in which a simple random sampling method was used. Participants were 2,219 adolescents (1089 boys, 1130 girls) with a mean age of 12.76 years old (SD =…

  15. Home Influences on the Academic Performance of Agricultural Science Students in Ikwuano Local Government Area of Abia State, Nigeria

    ERIC Educational Resources Information Center

    Ndirika, Maryann C.; Njoku, U. J.

    2012-01-01

    This study was conducted to investigate the home influences on the academic performance of agricultural science secondary school students in Ikwuano Local Government Area of Abia State. The instrument used in data collection was a validated questionnaire structured on a two point rating scale. Simple random sampling technique was used to select…

  16. The Influence of Parental Background on Students' Academic Performance in Physics in WASSCS 2000-2005

    ERIC Educational Resources Information Center

    Ebong, Samuel T.

    2015-01-01

    The study investigated the influence of parental background on students' academic performance in secondary schools in Abak Local Government, Akwa Ibom State, Nigeria. A survey design was adopted for the study. One thousand four hundred and forty (1440) senior secondary three (SS3) Physics students were drawn by simple random sampling from 12 schools, six (6) each…

  17. A Study of the Relationship between Key Factors of Academic Innovation and Faculties' Teaching Goals--The Mediatory Role of Knowledge

    ERIC Educational Resources Information Center

    Mohammadi, Mehdi; Marzooghi, Rahmatullah; Dehghani, Fatemeh

    2017-01-01

    This research studies the relationship between key factors of academic innovation and faculties' teaching goals, with the mediatory role of their pedagogical, technological and content knowledge. The statistical population in this research included faculty members of Shiraz University. By simple random sampling, 127 faculty members…

  18. Implementation of Quality Assurance Standards and Principals' Administrative Effectiveness in Public Secondary Schools in Edo and Delta States

    ERIC Educational Resources Information Center

    Momoh, U.; Osagiobare, Emmanuel Osamiro

    2015-01-01

    The study investigated principals' implementation of quality assurance standards and administrative effectiveness in public secondary schools in Edo and Delta States. To guide the study, four research questions and hypotheses were raised. Descriptive research design was adopted for the study and the simple random sampling technique was used to…

  19. Internet Access and Usage by Secondary School Students in Morogoro Municipality, Tanzania

    ERIC Educational Resources Information Center

    Tarimo, Ronald; Kavishe, George

    2017-01-01

    The purpose of this paper was to report results of a study on the investigation of the Internet access and usage by secondary school students in Morogoro municipality in Tanzania. A simple random sampling technique was used to select 120 students from six schools. The data was collected through a questionnaire. A quantitative approach using the…

  20. Quality of Work Life and Organizational Climate of Schools Located along the Thai-Cambodian Borders

    ERIC Educational Resources Information Center

    Kitratporn, Poonsook; Puncreobutr, Vichian

    2016-01-01

    The purpose of the study is to measure the quality of work life and organizational climate of schools located along the Thai-Cambodian borders, and the relationship between these two underlying variables. A simple random sample of 384 respondents consisted of administrators and teachers…

  1. Effect of Self Regulated Learning Approach on Junior Secondary School Students' Achievement in Basic Science

    ERIC Educational Resources Information Center

    Nwafor, Chika E.; Obodo, Abigail Chikaodinaka; Okafor, Gabriel

    2015-01-01

    This study explored the effect of the self-regulated learning approach on junior secondary school students' achievement in basic science. A quasi-experimental design was used for the study. Two co-educational schools were drawn for the study through the simple random sampling technique. One school was assigned to the treatment group while the other was…

  2. Agro-Students' Appraisal of Online Registration of Academic Courses in the Federal University of Agriculture Abeokuta, Ogun State Nigeria

    ERIC Educational Resources Information Center

    Lawal-Adebowale, O. A.; Oyekunle, O.

    2014-01-01

    With the integration of an information technology tool for academic course registration in the Federal University of Agriculture, Abeokuta, the study assessed the agro-students' appraisal of the online tool for course registration. A simple random sampling technique was used to select 325 agro-students, and a validated and reliable questionnaire was used…

  3. The Relationship between University Students' Attitude to Listening to Music and Their Level of Optimism

    ERIC Educational Resources Information Center

    Aksoy, Nil

    2014-01-01

    The purpose of this study is to analyse the relationship between university students' attitude to listening to music and their level of optimism. The study group for the research consists of 508 students who studied at Aksaray University in the 2012-13 academic year. Simple random sampling is used. In this study, the "Attitude Scale for…

  4. Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach

    NASA Astrophysics Data System (ADS)

    Xiao, T.

    2012-12-01

    One of the most important components of urban land cover mapping is accuracy assessment. Many statistical models have been developed to help design sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy, as well as the cost, of an assessment. Understanding cost and sample size is crucial to implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High-resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
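
    As a rough illustration of how sample size design and sampling cost can be combined (the unit costs below are invented, not the study's), one can size a simple random sample for a target confidence interval and then price it:

```python
import math

def srs_sample_size(p=0.5, half_width=0.05, confidence=0.95):
    """Sample size for estimating a proportion (e.g., map accuracy)
    with a given confidence-interval half-width, via the normal
    approximation n = z^2 * p * (1 - p) / d^2."""
    z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}[confidence]
    return math.ceil(z**2 * p * (1 - p) / half_width**2)

# Hypothetical unit costs per site: transport + field collection + lab analysis
COST_PER_SITE = 12.0 + 8.0 + 5.0

n = srs_sample_size(p=0.85, half_width=0.05)
print(n, "sites, estimated cost:", n * COST_PER_SITE)
```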

  5. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  6. Simple and Multivariate Relationships Between Spiritual Intelligence with General Health and Happiness.

    PubMed

    Amirian, Mohammad-Elyas; Fazilat-Pour, Masoud

    2016-08-01

    The present study examined simple and multivariate relationships of spiritual intelligence with general health and happiness. The method employed was descriptive and correlational. King's Spiritual Quotient scale, the GHQ-28 and the Oxford Happiness Inventory were completed by a sample of 384 students, selected using stratified random sampling from the students of Shahid Bahonar University of Kerman. Data were subjected to descriptive and inferential statistics, including correlations and multivariate regressions. Bivariate correlations support the positive and significant predictive value of spiritual intelligence for general health and happiness. Further analysis showed that among the spiritual intelligence subscales, existential critical thinking predicted general health and happiness inversely. In addition, happiness was positively predicted by generation of personal meaning and transcendental awareness. The findings are discussed in line with previous studies and the relevant theoretical background.

  7. Synthesis and Structural Characterization of CdFe2O4 Nanostructures

    NASA Astrophysics Data System (ADS)

    Kalpanadevi, K.; Sinduja, C. R.; Manimekalai, R.

    The synthesis of CdFe2O4 nanoparticles has been achieved by a simple thermal decomposition method from the inorganic precursor [CdFe2(cin)3(N2H4)3], which was obtained by a simple precipitation method from the corresponding metal salts, cinnamic acid and hydrazine hydrate. The precursor was characterized by hydrazine and metal analyses, infrared spectral analysis and thermogravimetric analysis. On appropriate annealing, [CdFe2(cin)3(N2H4)3] yielded CdFe2O4 nanoparticles. The XRD studies showed that the crystallite size of the particles was 13 nm. The results of HRTEM studies also agreed well with those of XRD. The SAED pattern of the sample established the polycrystalline nature of the nanoparticles. SEM images displayed a random distribution of grains in the sample.

  8. The Role of Counselling and Parental Encouragement on Re-Entry of Adolescents into Secondary Schools in Abia State, Nigeria

    ERIC Educational Resources Information Center

    Alika, Henrietta Ijeoma; Ohanaka, Blessing Ijeoma

    2013-01-01

    This paper examined the role of counselling, and parental encouragement on re-entry of adolescents into secondary school in Abia State, Nigeria. A total of 353 adolescents who re-entered school were selected from six secondary schools in the State through a simple random sampling technique. A validated questionnaire was used for data analysis.…

  9. Instructional Resources as Determinants of English Language Performance of Secondary School High-Achieving Students in Ibadan, Oyo State

    ERIC Educational Resources Information Center

    Adelodun, Gboyega Adelowo; Asiru, Abdulahi Babatunde

    2015-01-01

    This study examined the role played by instructional resources in enhancing performance of students, especially that of high-achievers, in English Language. The study is descriptive in nature and it adopted a survey design. Simple random sampling technique was used for the selection of fifty (50) SSI-SSIII students from five schools in Ibadan…

  10. The Effect of Primary School Students' Writing Attitudes and Writing Self-Efficacy Beliefs on Their Summary Writing Achievement

    ERIC Educational Resources Information Center

    Bulut, Pinar

    2017-01-01

    In this study, the effect of writing attitude and writing self-efficacy beliefs on the summarization achievement of 4th grade primary school students was examined using structural equation modeling. The study employed the relational survey model. The study group, constructed by means of the simple random sampling method, comprises 335…

  11. Perception of Pre-Service Teachers' towards the Teaching Practice Programme in College of Technology Education, University of Education, Winneba

    ERIC Educational Resources Information Center

    Amankwah, Francis; Oti-Agyen, Philip; Sam, Francis Kwame

    2017-01-01

    The descriptive survey design was used to find out the perception of pre-service teachers of teaching practice (on-campus) as an initial teacher preparation programme at the University of Education, Winneba. Simple random sampling was used to select 226 pre-service teachers from the College of Technology Education, Kumasi. Data for the study were…

  12. The Contribution of Teachers' Continuous Professional Development (CPD) Program to Quality of Education and Its Teacher-Related Challenging Factors at Chagni Primary Schools, Awi Zone, Ethiopia

    ERIC Educational Resources Information Center

    Belay, Sintayehu

    2016-01-01

    This study examined the contribution of teachers' Continuous Professional Development (CPD) to quality of education and its challenging factors related to teachers. For this purpose, the study employed a descriptive survey method. Seventy-six participant teachers (40.86%) were selected using a simple random sampling technique. A close-ended questionnaire was…

  13. Benchmarking protein classification algorithms via supervised cross-validation.

    PubMed

    Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netotea, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor

    2008-04-24

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database, has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced-size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as the 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic, estimates of the classifier performance than do random cross-validation schemes.
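
    A minimal sketch of the underlying idea, using scikit-learn's grouped cross-validation on synthetic data (the "subtypes", class labels and model are assumptions, not the benchmark's actual construction):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for protein data: 8 "subtypes" (groups); members of
# a subtype are more similar to each other than to other subtypes
n_groups, per_group, n_feat = 8, 50, 20
groups = np.repeat(np.arange(n_groups), per_group)
centers = rng.normal(0, 1.5, (n_groups, n_feat))      # subtype centroids
X = centers[groups] + rng.normal(0, 1.0, (n_groups * per_group, n_feat))
y = groups % 2                                        # class label per subtype

clf = LogisticRegression(max_iter=1000)
random_cv = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0))
grouped_cv = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=4), groups=groups)

# Holding out whole subtypes removes within-subtype leakage, so the
# grouped estimate is typically lower and more realistic
print("random k-fold:", random_cv.mean().round(2),
      "held-out subtypes:", grouped_cv.mean().round(2))
```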

  14. A model-based 'varimax' sampling strategy for a heterogeneous population.

    PubMed

    Akram, Nuzhat A; Farooqi, Shakeel R

    2014-01-01

    Sampling strategies are planned to enhance the homogeneity of a sample and hence to minimize confounding errors. A sampling strategy was developed to minimize the variation within population groups. Karachi, the largest urban agglomeration in Pakistan, was used as a model population. Blood groups ABO and Rh factor were determined for 3000 unrelated individuals selected through simple random sampling. Among them, five population groups, namely Balochi, Muhajir, Pathan, Punjabi and Sindhi, based on paternal ethnicity, were identified. An index was designed to measure the proportion of admixture at the parental and grandparental levels. Population models based on index score were proposed. For validation, 175 individuals selected through stratified random sampling were genotyped for the three STR loci CSF1PO, TPOX and TH01. ANOVA showed significant differences across the population groups in blood group and STR loci distribution. Gene diversity was higher across the sub-population model than in the agglomerated population. At the parental level, gene diversities were significantly higher across no-admixture models than across admixture models. At the grandparental level, the difference was not significant. A sub-population model with no admixture at the parental level was justified for sampling the heterogeneous population of Karachi.

  15. Methodological considerations in using complex survey data: an applied example with the Head Start Family and Child Experiences Survey.

    PubMed

    Hahs-Vaughn, Debbie L; McWayne, Christine M; Bulotsky-Shearer, Rebecca J; Wen, Xiaoli; Faria, Ann-Marie

    2011-06-01

    Complex survey data are collected by means other than simple random samples. This creates two analytical issues: nonindependence and unequal selection probability. Failing to address these issues results in underestimated standard errors and biased parameter estimates. Using data from the nationally representative Head Start Family and Child Experiences Survey (FACES; 1997 and 2000 cohorts), three diverse multilevel models are presented that illustrate differences in results depending on addressing or ignoring the complex sampling issues. Limitations of using complex survey data are reported, along with recommendations for reporting complex sample results. © The Author(s) 2011
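
    A small simulated illustration of the first issue, nonindependence: ignoring clustering understates standard errors, while weighting and cluster-robust variance estimation address the design (a hedged sketch with invented data and weights, not the FACES analysis):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Toy complex-sample data: 40 clusters with cluster-level random effects,
# and unequal selection probabilities summarized as weights w
n_clust, m = 40, 25
cluster = np.repeat(np.arange(n_clust), m)
x = rng.normal(size=n_clust * m)
y = 0.3 * x + rng.normal(size=n_clust)[cluster] + rng.normal(size=n_clust * m)
w = rng.uniform(0.5, 2.0, size=n_clust * m)   # stand-in sampling weights

X = sm.add_constant(x)
naive = sm.OLS(y, X).fit()                    # ignores the design
design = sm.WLS(y, X, weights=w).fit(
    cov_type="cluster", cov_kwds={"groups": cluster})

# Ignoring clustering understates the standard error of the slope
print("naive SE:", naive.bse[1], "design-based SE:", design.bse[1])
```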

  16. Spatial inventory integrating raster databases and point sample data. [Geographic Information System for timber inventory

    NASA Technical Reports Server (NTRS)

    Strahler, A. H.; Woodcock, C. E.; Logan, T. L.

    1983-01-01

    A timber inventory of the Eldorado National Forest, located in east-central California, provides an example of the use of a Geographic Information System (GIS) to stratify large areas of land for sampling and the collection of statistical data. The raster-based GIS format of the VICAR/IBIS software system allows simple and rapid tabulation of areas, and facilitates the selection of random locations for ground sampling. Algorithms that simplify the complex spatial pattern of raster-based information, and convert raster format data to strings of coordinate vectors, provide a link to conventional vector-based geographic information systems.

  17. The influence of social capital towards the quality of community tourism services in Lake Toba Parapat North Sumatera

    NASA Astrophysics Data System (ADS)

    Revida, Erika; Yanti Siahaan, Asima; Purba, Sukarman

    2018-03-01

    The objective of the research was to analyze the influence of social capital on the quality of community tourism services in Lake Toba Parapat, North Sumatera. The method used was a combination of quantitative and qualitative research. The sample consisted of 150 heads of family from the community in the area around Lake Toba Parapat, North Sumatera, selected by simple random sampling. Data collection techniques included documentary studies, questionnaires, interviews and observations, while the data were analyzed using product-moment correlation and simple linear regression. The results of the research showed a positive and significant influence of social capital on the quality of community tourism services in Lake Toba Parapat, North Sumatera. This research recommends enhancing social capital (trust, norms and networks) and the quality of community tourism services (tangibles, reliability, responsiveness, assurance and empathy) through continuous communication, information and education from families, formal and informal institutions, community leaders, religious figures and all communities in Lake Toba Parapat, North Sumatera.

  18. The impact of traffic sign deficit on road traffic accidents in Nigeria.

    PubMed

    Ezeibe, Christian; Ilo, Chukwudi; Oguonu, Chika; Ali, Alphonsus; Abada, Ifeanyi; Ezeibe, Ezinwanne; Oguonu, Chukwunonso; Abada, Felicia; Izueke, Edwin; Agbo, Humphrey

    2018-04-04

    This study assesses the impact of traffic sign deficit on road traffic accidents in Nigeria. The participants were 720 commercial vehicle drivers. While simple random sampling was used to select 6 out of 137 federal highways, stratified random sampling was used to select six categories of commercial vehicle drivers. The study used a qual-dominant mixed-methods approach comprising key informant interviews, group interviews, field observation, policy appraisal and secondary literature on traffic signs. Results show that the failure of government to provide and maintain traffic signs to guide road users through the numerous accident black spots on the highways is the major cause of road accidents in Nigeria. The study argues that provision and maintenance of traffic signs present an opportunity to promote safety on the highways and to achieve the sustainable development goals.

  19. Sampling considerations for disease surveillance in wildlife populations

    USGS Publications Warehouse

    Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.

    2008-01-01

    Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.
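
    The selection-bias mechanism is easy to reproduce in a toy simulation (the habitat proportions, prevalences and the 8x accessibility factor below are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy landscape: disease prevalence differs between two habitat types,
# and harvested (convenience) animals come mostly from accessible habitat
N = 100_000
habitat = rng.random(N) < 0.3                  # 30% in accessible habitat
prevalence = np.where(habitat, 0.10, 0.02)     # disease clusters spatially
infected = rng.random(N) < prevalence

srs = rng.choice(N, size=500, replace=False)   # probability sample

# Convenience sample: accessible animals are 8x as likely to be taken
p = np.where(habitat, 8.0, 1.0)
p = p / p.sum()
conv = rng.choice(N, size=500, replace=False, p=p)

print("true:", infected.mean(),
      "SRS:", infected[srs].mean(),
      "convenience:", infected[conv].mean())
```

    The simple random sample is unbiased on average, while the convenience sample systematically overstates prevalence because sampling probability and disease status are correlated through habitat.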

  20. How Random Is Social Behaviour? Disentangling Social Complexity through the Study of a Wild House Mouse Population

    PubMed Central

    Perony, Nicolas; Tessone, Claudio J.; König, Barbara; Schweitzer, Frank

    2012-01-01

    Out of all the complex phenomena displayed in the behaviour of animal groups, many are thought to be emergent properties of rather simple decisions at the individual level. Some of these phenomena may also be explained by random processes only. Here we investigate to what extent the interaction dynamics of a population of wild house mice (Mus domesticus) in their natural environment can be explained by a simple stochastic model. We first introduce the notion of perceptual landscape, a novel tool used here to describe the utilisation of space by the mouse colony based on the sampling of individuals in discrete locations. We then implement the behavioural assumptions of the perceptual landscape in a multi-agent simulation to verify their accuracy in the reproduction of observed social patterns. We find that many high-level features – with the exception of territoriality – of our behavioural dataset can be accounted for at the population level through the use of this simplified representation. Our findings underline the potential importance of random factors in the apparent complexity of the mice's social structure. These results resonate in the general context of adaptive behaviour versus elementary environmental interactions. PMID:23209394

  1. Investigation of microsatellite instability in Turkish breast cancer patients.

    PubMed

    Demokan, Semra; Muslumanoglu, Mahmut; Yazici, H; Igci, Abdullah; Dalay, Nejat

    2002-01-01

    Multiple somatic and inherited genetic changes that lead to loss of growth control may contribute to the development of breast cancer. Microsatellites are tandem repeats of simple sequences that occur abundantly and at random throughout most eukaryotic genomes. Microsatellite instability (MI), characterized by the presence of random contractions or expansions in the length of simple sequence repeats, or microsatellites, is observed in a variety of tumors. The aim of this study was to compare tumor DNA fingerprints with constitutional DNA fingerprints to investigate changes specific to breast cancer and to evaluate their correlation with clinical characteristics. Tumor and normal tissue samples of 38 patients with breast cancer were investigated by comparing the PCR-amplified microsatellite sequences D2S443 and D21S1436. Microsatellite instability at D21S1436 and D2S443 was found in 5 (13%) and 7 (18%) patients, respectively. Two patients displayed instability at both marker loci. No association was found between MI and age, family history, lymph node involvement or other clinical parameters.

  2. Use of virtual reality intervention to improve reaction time in children with cerebral palsy: A randomized controlled trial.

    PubMed

    Pourazar, Morteza; Mirakhori, Fatemeh; Hemayattalab, Rasool; Bagherzadeh, Fazlolah

    2017-09-21

    The purpose of this study was to investigate the training effects of a Virtual Reality (VR) intervention program on reaction time in children with cerebral palsy. Thirty boys ranging from 7 to 12 years (mean = 11.20; SD = .76) were selected by an available sampling method and randomly divided into experimental and control groups. Simple Reaction Time (SRT) and Discriminative Reaction Time (DRT) were measured at baseline and 1 day after completion of the VR intervention. Multivariate analysis of variance (MANOVA) and paired-sample t-tests were performed to analyze the results. The MANOVA revealed significant effects of group in the posttest phase, with lower reaction time on both measures for the experimental group. Based on the paired-sample t-test results, both RT measures improved significantly in the experimental group following the VR intervention program. This paper proposes VR as a promising tool in the rehabilitation process for improving reaction time in children with cerebral palsy.

  3. Perception of Students on Causes of Poor Performance in Chemistry in External Examinations in Umuahia North Local Government of Abia State

    ERIC Educational Resources Information Center

    Ojukwu, M. O.

    2016-01-01

    The aim of this study was to investigate the perception of students on the causes of their poor performance in external chemistry examinations in Umuahia North Local Government Area of Abia State. Descriptive survey design was used for the study. Two hundred and forty (240) students were selected through simple random sampling for the study. A…

  4. Angular intensity and polarization dependence of diffuse transmission through random media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliyahu, D.; Rosenbluh, M.; Freund, I.

    1993-03-01

    A simple theoretical model involving only a single sample parameter, the depolarization ratio ρ for linearly polarized, normally incident and normally scattered light, is developed to describe the angular intensity and all other polarization-dependent properties of diffuse transmission through multiple-scattering media. Initial experimental results that tend to support the theory are presented. Results for diffuse reflection are also described. 63 refs., 15 figs.

  5. Replica exchange and expanded ensemble simulations as Gibbs sampling: simple improvements for enhanced mixing.

    PubMed

    Chodera, John D; Shirts, Michael R

    2011-11-21

    The widespread popularity of replica exchange and expanded ensemble algorithms for simulating complex molecular systems in chemistry and biophysics has generated much interest in discovering new ways to enhance the phase space mixing of these protocols in order to improve sampling of uncorrelated configurations. Here, we demonstrate how both of these classes of algorithms can be considered as special cases of Gibbs sampling within a Markov chain Monte Carlo framework. Gibbs sampling is a well-studied scheme in the field of statistical inference in which different random variables are alternately updated from conditional distributions. While the update of the conformational degrees of freedom by Metropolis Monte Carlo or molecular dynamics unavoidably generates correlated samples, we show how judicious updating of the thermodynamic state indices--corresponding to thermodynamic parameters such as temperature or alchemical coupling variables--can substantially increase mixing while still sampling from the desired distributions. We show how state update methods in common use can lead to suboptimal mixing, and present some simple, inexpensive alternatives that can increase mixing of the overall Markov chain, reducing simulation times necessary to obtain estimates of the desired precision. These improved schemes are demonstrated for several common applications, including an alchemical expanded ensemble simulation, parallel tempering, and multidimensional replica exchange umbrella sampling.
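
    A minimal sketch of the expanded-ensemble case, assuming a toy one-dimensional potential and pre-tuned log-weights: the configuration is updated by Metropolis at the current state, and the state index is then Gibbs-sampled from its full conditional rather than merely swapped with a neighbour (this is the independence-sampling variant the abstract alludes to, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

U = lambda x: 0.5 * x**2                   # toy potential
betas = np.array([0.2, 0.5, 1.0, 2.0])     # thermodynamic states
g = np.zeros_like(betas)                   # log-weights (assumed pre-tuned)

x, k = 0.0, 0
for step in range(10_000):
    # (1) Metropolis update of the configuration at the current state k
    x_new = x + rng.normal(0, 0.5)
    if rng.random() < np.exp(-betas[k] * (U(x_new) - U(x))):
        x = x_new
    # (2) Gibbs update of the state index: sample k from its full
    # conditional p(k | x) ~ exp(g_k - beta_k * U(x)), which mixes
    # faster than proposing only neighbour swaps
    logp = g - betas * U(x)
    p = np.exp(logp - logp.max())
    p /= p.sum()
    k = rng.choice(len(betas), p=p)
```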

  6. Theoretical and Experimental Investigation of Random Gust Loads Part I : Aerodynamic Transfer Function of a Simple Wing Configuration in Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Hakkinen, Raimo J.; Richardson, A. S., Jr.

    1957-01-01

    Sinusoidally oscillating downwash and lift produced on a simple rigid airfoil were measured and compared with calculated values. Statistically stationary random downwash and the corresponding lift on a simple rigid airfoil were also measured and the transfer functions between their power spectra determined. The random experimental values are compared with theoretically approximated values. Limitations of the experimental technique and the need for more extensive experimental data are discussed.

  7. Confidence intervals for a difference between lognormal means in cluster randomization trials.

    PubMed

    Poirier, Julia; Zou, G Y; Koval, John

    2017-04-01

    Cluster randomization trials, in which intact social units are randomized to different interventions, have become popular in the last 25 years. Outcomes from these trials are in many cases positively skewed, following approximately lognormal distributions. When inference is focused on the difference between treatment arm arithmetic means, existing confidence interval procedures either make restrictive assumptions or are complex to implement. We approach this problem by assuming log-transformed outcomes from each treatment arm follow a one-way random effects model. The treatment arm means are functions of multiple parameters for which separate confidence intervals are readily available, suggesting that the method of variance estimates recovery may be applied to obtain closed-form confidence intervals. A simulation study showed that this simple approach performs well for small sample sizes in terms of empirical coverage, relatively balanced tail errors, and interval widths, as compared to existing methods. The methods are illustrated using data arising from a cluster randomization trial investigating a critical pathway for the treatment of community-acquired pneumonia.
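
    A simplified sketch of the method of variance estimates recovery (MOVER) for the difference of two lognormal means, omitting the clustering (the paper's one-way random effects extension is not reproduced here; the formulas below follow the standard single-sample MOVER recipe, stated as an assumption):

```python
import numpy as np
from scipy import stats

def lognormal_mean_ci(logx, alpha=0.05):
    """MOVER CI for a lognormal mean exp(mu + s2/2) from log-scale data
    (simple random sample; no clustering)."""
    n = len(logx)
    mu, s2 = np.mean(logx), np.var(logx, ddof=1)
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    l_mu, u_mu = mu - t * np.sqrt(s2 / n), mu + t * np.sqrt(s2 / n)
    l_s2 = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, n - 1)
    u_s2 = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, n - 1)
    theta = mu + s2 / 2
    L = theta - np.sqrt((mu - l_mu) ** 2 + (s2 / 2 - l_s2 / 2) ** 2)
    U = theta + np.sqrt((u_mu - mu) ** 2 + (u_s2 / 2 - s2 / 2) ** 2)
    return np.exp(theta), np.exp(L), np.exp(U)

def difference_ci(a, b, alpha=0.05):
    """MOVER CI for the difference of two lognormal arm means."""
    t1, l1, u1 = lognormal_mean_ci(a, alpha)
    t2, l2, u2 = lognormal_mean_ci(b, alpha)
    d = t1 - t2
    return (d, d - np.sqrt((t1 - l1) ** 2 + (u2 - t2) ** 2),
               d + np.sqrt((u1 - t1) ** 2 + (t2 - l2) ** 2))

rng = np.random.default_rng(1)
arm1, arm2 = rng.normal(1.0, 0.8, 60), rng.normal(0.7, 0.8, 60)  # log scale
print(difference_ci(arm1, arm2))
```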

  8. Emergence of an optimal search strategy from a simple random walk

    PubMed Central

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-01-01

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths. PMID:23804445

  9. Emergence of an optimal search strategy from a simple random walk.

    PubMed

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-09-06

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths.
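
    The authors' experience-dependent rule is not reproduced here, but the diagnostic relied on in the two preceding records, the scaling of mean squared displacement with time, is easy to compute. The sketch below contrasts a fixed-step random walk with a heavy-tailed (Lévy-like) walk; the tail exponent and ensemble sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def msd(steps):
    """Mean squared displacement of an ensemble of 2-D walks."""
    pos = np.cumsum(steps, axis=1)
    return (pos**2).sum(axis=2).mean(axis=0)

T, walkers = 1000, 500
angles = rng.uniform(0, 2 * np.pi, (walkers, T))
unit = np.stack([np.cos(angles), np.sin(angles)], axis=2)

simple = unit                                    # fixed step length 1
levy = unit * rng.pareto(1.5, (walkers, T, 1))   # power-law step lengths

for name, s in [("simple", simple), ("levy", levy)]:
    # slope of log MSD vs log t: ~1 = normal diffusion, >1 = super-diffusion
    slope = np.polyfit(np.log(np.arange(1, T + 1)), np.log(msd(s)), 1)[0]
    print(name, round(slope, 2))
```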

  10. FOCIS: A forest classification and inventory system using LANDSAT and digital terrain data

    NASA Technical Reports Server (NTRS)

    Strahler, A. H.; Franklin, J.; Woodcock, C. E.; Logan, T. L.

    1981-01-01

    Accurate, cost-effective stratification of forest vegetation and timber inventory is the primary goal of a Forest Classification and Inventory System (FOCIS). Conventional timber stratification using photointerpretation can be time-consuming, costly, and inconsistent from analyst to analyst. FOCIS was designed to overcome these problems by using machine processing techniques to extract and process tonal, textural, and terrain information from registered LANDSAT multispectral and digital terrain data. Comparison of samples from timber strata identified by conventional procedures showed that both have about the same potential to reduce the variance of timber volume estimates over simple random sampling.

  11. The usefulness of Skylab/EREP S-190 and S-192 imagery in multistage forest surveys

    NASA Technical Reports Server (NTRS)

    Langley, P. G.; Vanroessel, J. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. The RMSE of point location achieved with the annotation system on S190A imagery was 100 m and 90 m in the x and y direction, respectively. Potential gains in sampling precision attributable to space derived imagery ranged from 4.9 to 43.3 percent depending on the image type, interpretation method, time of year, and sampling method applied. Seasonal variation was significant. S190A products obtained in September yielded higher gains than those obtained in June. Using 100 primary sample units as a base under simple random sampling, the revenue made available for incorporating space acquired data into the sample design to estimate timber volume was as high as $39,400.00.

  12. Efficient correction of wavefront inhomogeneities in X-ray holographic nanotomography by random sample displacement

    NASA Astrophysics Data System (ADS)

    Hubert, Maxime; Pacureanu, Alexandra; Guilloud, Cyril; Yang, Yang; da Silva, Julio C.; Laurencin, Jerome; Lefebvre-Joud, Florence; Cloetens, Peter

    2018-05-01

    In X-ray tomography, ring-shaped artifacts present in the reconstructed slices are an inherent problem degrading the global image quality and hindering the extraction of quantitative information. To overcome this issue, we propose a strategy for suppression of ring artifacts originating from the coherent mixing of the incident wave and the object. We discuss the limits of validity of the empty beam correction in the framework of a simple formalism. We then deduce a correction method based on two-dimensional random sample displacement, with minimal cost in terms of spatial resolution, acquisition, and processing time. The method is demonstrated on bone tissue and on a hydrogen electrode of a ceramic-metallic solid oxide cell. Compared to the standard empty beam correction, we obtain high quality nanotomography images revealing detailed object features. The resulting absence of artifacts allows straightforward segmentation and posterior quantification of the data.

  13. Factors of quality of financial report of local government in Indonesia

    NASA Astrophysics Data System (ADS)

    Muda, Iskandar; Haris Harahap, Abdul; Erlina; Ginting, Syafruddin; Maksum, Azhar; Abubakar, Erwin

    2018-03-01

    The purpose of this research is to find out whether the accounting information system and internal control in the Local Revenue Office affect the quality of financial reports of local government. Sampling was conducted using a simple random sampling method, in which the sample was determined without considering strata. Data were collected by distributing questionnaires. The results show that the accounting information system and internal control simultaneously affect the quality of financial reports of local government. Partially, however, the accounting information system influences the quality of financial reports of local government, while internal control does not.

  14. Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.

    PubMed

    Sheppard, C W.

    1969-03-01

    A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
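
    A present-day version of such a program fits in a few lines. The sketch below (drift size, boundary placement and step distribution are arbitrary choices, not the paper's) checks the free walk against diffusion theory and then adds a reflecting boundary:

```python
import numpy as np

rng = np.random.default_rng(7)

def walk(n_steps, drift=0.05, boundary=None):
    """1-D Gaussian random walk with optional drift and a reflecting
    boundary at +boundary (a crude tracer/diffusion model)."""
    x = 0.0
    for _ in range(n_steps):
        x += drift + rng.normal()
        if boundary is not None and x > boundary:
            x = 2 * boundary - x   # reflect back inside
    return x

# Free walk: ensemble mean ~ drift*t, variance ~ t (diffusion theory)
free = np.array([walk(400) for _ in range(2000)])
print("mean", free.mean(), "(theory 20.0)", "var", free.var(), "(theory ~400)")

# A reflecting boundary suppresses the drift once particles reach it
bounded = np.array([walk(400, boundary=15.0) for _ in range(2000)])
print("bounded mean", bounded.mean())
```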

  15. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
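
    The intuition that systematic sampling gains precision when effort follows a smooth within-day trend can be checked with a toy simulation (the diurnal effort curve and sample sizes below are invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical within-day angler counts with a smooth diurnal trend
hours = np.arange(16)                         # 16 countable hours
effort = 20 + 15 * np.sin(np.pi * hours / 15) + rng.normal(0, 3, 16)

def estimate_total(counts, idx):
    return counts[idx].mean() * len(counts)   # expansion estimator

n, reps, srs_est, sys_est = 4, 5000, [], []
for _ in range(reps):
    srs_est.append(estimate_total(effort, rng.choice(16, n, replace=False)))
    start = rng.integers(0, 16 // n)          # random start, fixed interval
    sys_est.append(estimate_total(effort, np.arange(start, 16, 16 // n)))

true = effort.sum()
print("true total", true)
print("SRS MSE", np.mean((np.array(srs_est) - true) ** 2))
print("SYS MSE", np.mean((np.array(sys_est) - true) ** 2))
```

    Because systematic counts spread over the whole day, they cannot all land on the peak or the trough of the trend, which is what drives their lower mean square error here.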

  16. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
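
    In the spirit of that closing exercise, a basic model-sampling routine, drawing exponential distances to the next collision by inverting the CDF, can be written as follows (the cross-section value is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Basic model-sampling exercise: draw exponential flight distances by
# inverting the CDF, x = -ln(1 - u) / sigma_t, as used in particle-
# transport Monte Carlo for the distance to the next collision
sigma_t = 2.0                       # assumed total macroscopic cross-section
u = rng.random(100_000)
x = -np.log(1.0 - u) / sigma_t

print(x.mean(), "~", 1 / sigma_t)   # sample mean approximates the mean free path
```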

  17. Double Sampling with Multiple Imputation to Answer Large Sample Meta-Research Questions: Introduction and Illustration by Evaluating Adherence to Two Simple CONSORT Guidelines

    PubMed Central

    Capers, Patrice L.; Brown, Andrew W.; Dawson, John A.; Allison, David B.

    2015-01-01

    Background: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. The creation of high-throughput methods (e.g., search heuristics, crowdsourcing) has improved the feasibility of large meta-research questions, but possibly at the cost of accuracy. Objective: To evaluate the use of double sampling combined with multiple imputation (DS + MI) to address meta-research questions, using as an example the adherence of PubMed entries to two simple Consolidated Standards of Reporting Trials (CONSORT) guidelines for titles and abstracts. Methods: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT, human, abstract available, and English language (n = 322,107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower rigor, higher throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher rigor, lower throughput (RHITLO) human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. Results: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title = 1.00, abstract = 0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS + MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by year: subsample RHITLO 1.050–1.174 vs. DS + MI 1.082–1.151). As evidence of improved accuracy, DS + MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. Conclusion: Our results support our hypothesis that DS + MI would result in improved precision and accuracy. This method is flexible and may provide a practical way to examine large corpora of literature. PMID:25988135

  18. Application of random effects to the study of resource selection by animals

    USGS Publications Warehouse

    Gillies, C.S.; Hebblewhite, M.; Nielsen, S.E.; Krawchuk, M.A.; Aldridge, Cameron L.; Frair, J.L.; Saher, D.J.; Stevens, C.E.; Jerde, C.L.

    2006-01-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.

  19. Application of random effects to the study of resource selection by animals.

    PubMed

    Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L

    2006-07-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.

  20. Pre-Survey Text Messages (SMS) Improve Participation Rate in an Australian Mobile Telephone Survey: An Experimental Study.

    PubMed

    Dal Grande, Eleonora; Chittleborough, Catherine Ruth; Campostrini, Stefano; Dollard, Maureen; Taylor, Anne Winifred

    2016-01-01

    Mobile telephone numbers are increasingly being included in household survey samples. Because approach letters cannot be sent when address details are unavailable, alternative approaches have been considered. This study assesses the effectiveness of sending a short message service (SMS) message to a random sample of mobile telephone numbers to increase response rates. A simple random sample of 9000 Australian mobile telephone numbers was drawn: 4500 were randomly assigned to be sent a pre-notification SMS, and the remaining 4500 did not have an SMS sent. Adults aged 18 years and over, currently in paid employment, were eligible to participate. American Association for Public Opinion Research formulas were used to calculate response, cooperation and refusal rates. Response and cooperation rates were higher for the SMS group (12.4% and 28.6%, respectively) than for the group with no SMS (7.7% and 16.0%). Refusal rates were lower for the SMS group (27.3%) than for the group with no SMS (35.9%). When asked, 85.8% of the pre-notification group indicated they remembered receiving an SMS about the study. Sending a pre-notification SMS is effective in improving participation in population-based surveys. Response rates were increased by 60% and cooperation rates by 79%.

  1. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach

    PubMed Central

    Hemming, Karla; Taljaard, Monica

    2016-01-01

    Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
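
    For the parallel-CRT component of such comparisons, the standard design effect and the implied number of clusters are one-liners. The sketch below uses textbook formulas only; the SW-CRT design effect presented in the paper is more involved and is not reproduced here:

```python
import math

def de_crt(m, icc):
    """Standard design effect for a parallel CRT with cluster size m:
    DE = 1 + (m - 1) * ICC."""
    return 1 + (m - 1) * icc

def clusters_needed(n_srs, m, icc):
    """Clusters per arm, given the sample size required per arm under
    individual randomization (n_srs)."""
    return math.ceil(n_srs * de_crt(m, icc) / m)

# Example: 200 individuals/arm under individual randomization,
# clusters of 20, ICC = 0.05 -> inflate by the design effect,
# then convert to whole clusters
print(de_crt(20, 0.05), clusters_needed(200, 20, 0.05))
```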

  2. Methods and participant characteristics of a randomized intervention to promote physical activity and healthy eating among brazilian high school students: the Saude na Boa project.

    PubMed

    Nahas, Markus V; de Barros, Mauro V G; de Assis, Maria Alice A; Hallal, Pedro C; Florindo, Alex A; Konrad, Lisandra

    2009-03-01

    A cross-cultural, randomized study was proposed to observe the effects of a school-based intervention designed to promote physical activity and healthy eating among high school students in 2 cities from different regions in Brazil: Recife and Florianopolis. The objective of this article is to describe the methodology and subjects enrolled in the project. Ten schools from each region were matched and randomized into intervention and control conditions. A questionnaire and anthropometry were used to collect data in the first and last month of the 2006 school year. The sample (n=2155 at baseline; 55.7% females; 49.1% in the experimental group) included students aged 15 to 24 years attending nighttime classes. The intervention focused on simple environmental/organizational changes, diet and physical activity education, and personnel training. The central aspects of the intervention have been implemented in all 10 intervention schools. Problems during the intervention included teachers' strikes at both sites and a lack of involvement of the school canteen owners. The Saude na Boa study provides evidence that public high schools in Brazil represent an important environment for health promotion. Its design and simple measurements increase the chances of it being sustained and disseminated to similar schools in Brazil.

  3. [Prognostic value on recovery rates for the application of sperm preparation techniques and their evaluation in sperm function].

    PubMed

    Barroso, Gerardo; Chaya, Miguel; Bolaños, Rubén; Rosado, Yadira; García León, Fernando; Ibarrola, Eduardo

    2005-05-01

    To evaluate sperm recovery and total sperm motility in three different sperm preparation techniques (density gradient, simple washing and swim-up). A total of 290 subjects were randomly evaluated from November 2001 to March 2003. The density gradient method required Isolate (upper and lower layers). Centrifugation was performed at 400 g for 10 minutes and evaluation was done using the Makler counting chamber. The simple washing method included the use of HTF-M complemented with 7.5% of SSS, with centrifugation at 250 g, obtaining at the end 0.5 mL of the sperm sample. The swim-up method required HTF-M complemented with 7.5% of SSS, with an incubation period of 60 minutes at 37 degrees C. The demographic characteristics evaluated through their standard error, 95% ICC, and 50th percentile were similar. The application of multiple comparison tests and analysis of variance showed significant differences between the sperm preparations before and after capacitation. A superior recovery rate was observed with the density gradient and swim-up methods; nevertheless, the samples used for the simple washing method showed a diminished sperm recovery from the original sample. Sperm preparation techniques have become very useful in male infertility treatments, allowing higher sperm recovery and motility rates. The seminal parameters evaluated from the original sperm sample will determine the best sperm preparation technique in those patients who require it.

  4. Random Sampling with Interspike-Intervals of the Exponential Integrate and Fire Neuron: A Computational Interpretation of UP-States.

    PubMed

    Steimer, Andreas; Schindler, Kaspar

    2015-01-01

    Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP-states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation.

  5. Randomization Does Not Help Much, Comparability Does

    PubMed Central

    Saint-Mont, Uwe

    2015-01-01

    According to R.A. Fisher, randomization “relieves the experimenter from the anxiety of considering innumerable causes by which the data may be disturbed.” Since, in particular, it is said to control for known and unknown nuisance factors that may considerably challenge the validity of a result, it has become very popular. This contribution challenges the received view. First, looking for quantitative support, we study a number of straightforward, mathematically simple models. They all demonstrate that the optimism surrounding randomization is questionable: In small to medium-sized samples, random allocation of units to treatments typically yields a considerable imbalance between the groups, i.e., confounding due to randomization is the rule rather than the exception. In the second part of this contribution, the reasoning is extended to a number of traditional arguments in favour of randomization. This discussion is rather non-technical, and sometimes touches on the rather fundamental Frequentist/Bayesian debate. However, the result of this analysis turns out to be quite similar: While the contribution of randomization remains doubtful, comparability contributes much to a compelling conclusion. Summing up, classical experimentation based on sound background theory and the systematic construction of exchangeable groups seems to be advisable. PMID:26193621
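
    The paper's central quantitative claim is easy to probe by simulation. This sketch (parameters mine, not the paper's models) estimates how often randomization alone leaves a standardized mean difference above 0.5 on a single baseline covariate when only 20 units are allocated.

    ```python
    # Hedged sketch: covariate imbalance under pure randomization in small samples.
    import numpy as np

    rng = np.random.default_rng(1)
    n, reps = 20, 10_000                       # 10 units per arm
    smd = np.empty(reps)
    for r in range(reps):
        x = rng.normal(size=n)                 # a baseline covariate
        arm = rng.permutation(n) < n // 2      # random allocation to two arms
        diff = x[arm].mean() - x[~arm].mean()
        sd = np.sqrt((x[arm].var(ddof=1) + x[~arm].var(ddof=1)) / 2)
        smd[r] = diff / sd                     # standardized mean difference
    print("P(|SMD| > 0.5) at n=20:", np.mean(np.abs(smd) > 0.5))  # roughly 1 in 4
    ```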

  6. Impact of Jos Crises on Pattern of Students/Teachers' Population in Schools and Its Implication on the Quality of Teaching and Peaceful Co-Existence in Nigeria

    ERIC Educational Resources Information Center

    Jacob, Sunday

    2015-01-01

    This study examined the pattern of students/teachers' population in schools as a result of the crises witnessed in Jos and its consequences on the quality of teaching as well as peaceful living in Jos. A stratified simple random sampling technique was used to select the 18 schools used for this study. A questionnaire was used to collect…

  7. Assessment of DoD Wounded Warrior Matters - Wounded Warrior Battalion - West Headquarters and Southern California Units

    DTIC Science & Technology

    2012-08-22

    urine drug tests (UDTs). A BUMED publication will follow the Interim Guidance for sustainment. Two training programs have also been created to... We determined how many Service members were required to be interviewed, then applied a simple random sample approach to determine the Service members we interviewed.

  8. Network Sampling and Classification: An Investigation of Network Model Representations

    PubMed Central

    Airoldi, Edoardo M.; Bai, Xue; Carley, Kathleen M.

    2011-01-01

    Methods for generating a random sample of networks with desired properties are important tools for the analysis of social, biological, and information networks. Algorithm-based approaches to sampling networks have received a great deal of attention in recent literature. Most of these algorithms are based on simple intuitions that associate the full features of connectivity patterns with specific values of only one or two network metrics. Substantive conclusions are crucially dependent on this association holding true. However, the extent to which this simple intuition holds true is not yet known. In this paper, we examine the association between the connectivity patterns that a network sampling algorithm aims to generate and the connectivity patterns of the generated networks, measured by an existing set of popular network metrics. We find that different network sampling algorithms can yield networks with similar connectivity patterns. We also find that alternative algorithms for the same connectivity pattern can yield networks with different connectivity patterns. We argue that conclusions based on simulated network studies must focus on the full features of the connectivity patterns of a network instead of on a limited set of network metrics for a specific network type. This fact has important implications for network data analysis: for instance, implications related to the way significance is currently assessed. PMID:21666773
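
    The caution can be made concrete with a toy comparison (assuming the networkx library; the generators and sizes are my choices, not the authors'): two standard generators matched on one metric, average degree, still differ sharply on other connectivity patterns.

    ```python
    # Hedged sketch: one matched metric does not pin down connectivity patterns.
    import networkx as nx

    n, avg_deg = 1000, 6
    g_er = nx.erdos_renyi_graph(n, avg_deg / (n - 1), seed=0)  # homogeneous degrees
    g_ba = nx.barabasi_albert_graph(n, avg_deg // 2, seed=0)   # heavy-tailed degrees

    for name, g in (("ER", g_er), ("BA", g_ba)):
        degs = [d for _, d in g.degree()]
        print(name,
              f"mean degree={sum(degs) / n:.2f}",
              f"max degree={max(degs)}",
              f"avg clustering={nx.average_clustering(g):.4f}")
    ```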

  9. Simple Random Sampling-Based Probe Station Selection for Fault Detection in Wireless Sensor Networks

    PubMed Central

    Huang, Rimao; Qiu, Xuesong; Rui, Lanlan

    2011-01-01

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution leads, however, to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is out of balance: this quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing at a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of the faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjustment rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjustment rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate. PMID:22163789

  10. Simple random sampling-based probe station selection for fault detection in wireless sensor networks.

    PubMed

    Huang, Rimao; Qiu, Xuesong; Rui, Lanlan

    2011-01-01

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution leads, however, to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is out of balance: this quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing at a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of the faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjustment rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjustment rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate.
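
    The selection step the two records above describe reduces to drawing a fresh simple random sample of probe stations each probing round, instead of always burdening the manager node. A minimal sketch (node names and parameters hypothetical, not from the paper):

    ```python
    # Hedged sketch: per-round SRS selection of probe stations.
    import random

    def select_probe_stations(cluster_heads, k, seed=None):
        """Return k probe stations drawn without replacement by SRS."""
        return random.Random(seed).sample(cluster_heads, k)

    cluster_heads = [f"node-{i}" for i in range(40)]
    for rnd in range(3):                       # a new draw each probing round
        print(f"round {rnd}:", select_probe_stations(cluster_heads, k=5, seed=rnd))
    ```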

  11. Rates of profit as correlated sums of random variables

    NASA Astrophysics Data System (ADS)

    Greenblatt, R. E.

    2013-10-01

    Profit realization is the dominant feature of market-based economic systems, determining their dynamics to a large extent. Rather than attaining an equilibrium, profit rates vary widely across firms, and the variation persists over time. Differing definitions of profit result in differing empirical distributions. To study the statistical properties of profit rates, I used data from a publicly available database for the US Economy for 2009-2010 (Risk Management Association). For each of three profit rate measures, the sample space consists of 771 points. Each point represents aggregate data from a small number of US manufacturing firms of similar size and type (NAICS code of principal product). When comparing the empirical distributions of profit rates, significant ‘heavy tails’ were observed, corresponding principally to a number of firms with larger profit rates than would be expected from simple models. An apparently novel statistical model, a correlated sum of random variables, was used to model the data. In the case of operating and net profit rates, a number of firms show negative profits (losses), ruling out simple gamma or lognormal distributions as complete models for these data.

  12. A simple equation to estimate body fat percentage in children with overweightness or obesity: a retrospective study.

    PubMed

    Cortés-Castell, Ernesto; Juste, Mercedes; Palazón-Bru, Antonio; Monge, Laura; Sánchez-Ferrer, Francisco; Rizo-Baeza, María Mercedes

    2017-01-01

    Dual-energy X-ray absorptiometry (DXA) provides separate measurements of fat mass, fat-free mass and bone mass, and is a quick, accurate, and safe technique, yet one that is not readily available in routine clinical practice. Consequently, we aimed to develop statistical formulas to predict fat mass (%) and fat mass index (FMI) with simple parameters (age, sex, weight and height). We conducted a retrospective observational cross-sectional study in 416 overweight or obese patients aged 4-18 years that involved assessing adiposity by DXA (fat mass percentage and FMI), body mass index (BMI), sex and age. We randomly divided the sample into two parts (construction and validation). In the construction sample, we developed formulas to predict fat mass and FMI using linear multiple regression models. The formulas were validated in the other sample, calculating the intraclass correlation coefficient via bootstrapping. The fat mass percentage formula had a coefficient of determination of 0.65. This value was 0.86 for FMI. In the validation, the constructed formulas had an intraclass correlation coefficient of 0.77 for fat mass percentage and 0.92 for FMI. Our predictive formulas accurately predicted fat mass and FMI with simple parameters (BMI, sex and age) in children with overweight and obesity. The proposed methodology could be applied in other fields. Further studies are needed to externally validate these formulas.
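
    The construction/validation workflow is simple to sketch. The code below runs on synthetic data and bootstraps a plain Pearson correlation rather than the intraclass correlation coefficient the authors report, so it illustrates the shape of the procedure, not their published formulas.

    ```python
    # Hedged sketch: split-sample formula construction plus bootstrap validation.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n = 416
    bmi = rng.normal(25, 4, n)
    age = rng.uniform(4, 18, n)
    sex = rng.integers(0, 2, n)
    fat_pct = 10 + 0.9 * bmi - 0.3 * age + 3 * sex + rng.normal(0, 3, n)  # synthetic

    idx = rng.permutation(n)
    train, test = idx[: n // 2], idx[n // 2 :]
    X = np.column_stack([bmi, age, sex])

    model = LinearRegression().fit(X[train], fat_pct[train])   # construction sample
    pred = model.predict(X[test])                              # validation sample

    boots = []                            # bootstrap the predicted/observed agreement
    for _ in range(1000):
        b = rng.integers(0, len(test), len(test))
        boots.append(np.corrcoef(pred[b], fat_pct[test][b])[0, 1])
    print(f"agreement: {np.mean(boots):.2f} "
          f"(95% CI {np.percentile(boots, 2.5):.2f}-{np.percentile(boots, 97.5):.2f})")
    ```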

  13. Comparison of tensile strength among simple interrupted, cruciate, intradermal, and subdermal suture patterns for incision closure in ex vivo canine skin specimens.

    PubMed

    Zellner, Eric M; Hedlund, Cheryl S; Kraus, Karl H; Burton, Andrew F; Kieves, Nina R

    2016-06-15

    OBJECTIVE To compare suture placement time, tension at skin separation and suture line failure, and mode of failure among 4 suture patterns. DESIGN Randomized trial. SAMPLE 60 skin specimens from the pelvic limbs of 30 purpose-bred Beagles. PROCEDURES Skin specimens were harvested within 2 hours after euthanasia and tested within 6 hours after harvest. An 8-cm incision was made in each specimen and sutured with 1 of 4 randomly assigned suture patterns (simple interrupted, cruciate, intradermal, or subdermal). Suture placement time and percentage of skin apposition were evaluated. Specimens were mounted in a calibrated material testing machine and distracted until suture line failure. Tensile strength at skin-edge separation and suture-line failure and mode of failure were compared among the 4 patterns. RESULTS Mean suture placement time for the cruciate pattern was significantly less than that for other patterns. Percentage of skin apposition did not differ among the 4 patterns. Mean tensile strength at skin-edge separation and suture-line failure for the simple interrupted and cruciate patterns were significantly higher than those for the intradermal and subdermal patterns. Mean tensile strength at skin-edge separation and suture-line failure did not differ significantly between the intradermal and subdermal patterns or the simple interrupted and cruciate patterns. The primary mode of failure for the simple interrupted pattern was suture breakage, whereas that for the cruciate, intradermal, and subdermal patterns was tissue failure. CONCLUSIONS AND CLINICAL RELEVANCE Results suggested external skin sutures may be preferred for closure of incisions under tension to reduce risk of dehiscence.

  14. A complete sample of double-lobed radio quasars for VLBI tests of source models - Definition and statistics

    NASA Technical Reports Server (NTRS)

    Hough, D. H.; Readhead, A. C. S.

    1989-01-01

    A complete, flux-density-limited sample of double-lobed radio quasars is defined, with nuclei bright enough to be mapped with the Mark III VLBI system. It is shown that the statistics of linear size, nuclear strength, and curvature are consistent with the assumption of random source orientations and simple relativistic beaming in the nuclei. However, these statistics are also consistent with the effects of interaction between the beams and the surrounding medium. The distribution of jet velocities in the nuclei, as measured with VLBI, will provide a powerful test of physical theories of extragalactic radio sources.

  15. Lead Determination and Heterogeneity Analysis in Soil from a Former Firing Range

    NASA Astrophysics Data System (ADS)

    Urrutia-Goyes, Ricardo; Argyraki, Ariadne; Ornelas-Soto, Nancy

    2017-07-01

    Public places can have an unknown history of pollutant deposition. Exposure to such contaminants can create environmental and health issues. The characterization of a former firing range in Athens, Greece will allow its monitoring and encourage its remediation. This study focuses on Pb contamination at the site due to its presence in ammunition. A dense sampling design with 91 locations (10 m apart) was used to determine the spatial distribution of the element in the surface soil of the study area. Duplicate samples were also collected one meter apart from 8 random locations to estimate the heterogeneity of the site. Elemental concentrations were measured using a portable XRF device after simple sample homogenization in the field. Robust analysis of variance showed that the contributions to the total variance were 11% from sampling, 1% analytical, and 88% geochemical, reflecting the suitability of the technique. Moreover, the extended random uncertainty relative to the mean concentration was 91.5%, confirming the high heterogeneity of the site. Statistical analysis defined very high contamination in the area, suggesting the need for more in-depth analysis of other contaminants and possible health risks.

  16. Differentials in colostrum feeding among lactating women of block RS Pura of J and K: A lesson for nursing practice.

    PubMed

    Raina, Sunil Kumar; Mengi, Vijay; Singh, Gurdeep

    2012-07-01

    Breast feeding is universally and traditionally practised in India. Experts advocate breast feeding as the best method of feeding young infants. To assess the role of various factors in determining colostrum feeding in block R. S. Pura of district Jammu. A stratified two-stage design was used, with villages as the primary sampling units and lactating mothers as the secondary sampling units. Villages were divided into different clusters on the basis of population, and sampling units were selected by a simple random sampling technique. Breastfeeding is almost universal in R. S. Pura. Differentials in discarding the first milk were not found to be important among various socioeconomic groups, and the phenomenon appeared more general than specific.

  17. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

    Proposed supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
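
    A minimal version of such a two-stage procedure might look as follows (my own illustrative measurement equation and numbers, not the thermistor-mount case study): the outer loop draws plausible input-distribution parameters consistent with the finite samples, and the inner loop propagates distributions as usual.

    ```python
    # Hedged sketch: two-stage Monte Carlo with finite-sample parameter uncertainty.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 10                                 # observations behind each input estimate
    xbar1, s1 = 10.0, 0.5                  # sample mean/sd of input 1 (illustrative)
    xbar2, s2 = 2.0, 0.1                   # sample mean/sd of input 2

    def measurand(x1, x2):
        return x1 * np.exp(-x2)            # a simple non-linear measurement equation

    outer, inner = 2000, 2000
    y_means = np.empty(outer)
    for i in range(outer):
        # Stage 1: plausible true parameters given samples of size n
        sig1 = s1 * np.sqrt((n - 1) / rng.chisquare(n - 1))
        sig2 = s2 * np.sqrt((n - 1) / rng.chisquare(n - 1))
        mu1 = rng.normal(xbar1, sig1 / np.sqrt(n))
        mu2 = rng.normal(xbar2, sig2 / np.sqrt(n))
        # Stage 2: ordinary propagation of distributions for those parameters
        y = measurand(rng.normal(mu1, sig1, inner), rng.normal(mu2, sig2, inner))
        y_means[i] = y.mean()
    lo, hi = np.percentile(y_means, [2.5, 97.5])
    print(f"95% interval for the mean measurand: ({lo:.3f}, {hi:.3f})")
    ```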

  18. Characterization of addressability by simultaneous randomized benchmarking.

    PubMed

    Gambetta, Jay M; Córcoles, A D; Merkel, S T; Johnson, B R; Smolin, John A; Chow, Jerry M; Ryan, Colm A; Rigetti, Chad; Poletto, S; Ohki, Thomas A; Ketchen, Mark B; Steffen, M

    2012-12-14

    The control and handling of errors arising from cross talk and unwanted interactions in multiqubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of randomized benchmarking experiments run both individually and simultaneously on pairs of qubits. A relevant figure of merit for the addressability is then related to the differences in the measured average gate fidelities in the two experiments. We present results from two similar samples with differing cross talk and unwanted qubit-qubit interactions. The results agree with predictions based on simple models of the classical cross talk and Stark shifts.

  19. Approximation algorithms for the min-power symmetric connectivity problem

    NASA Astrophysics Data System (ADS)

    Plotnikov, Roman; Erzin, Adil; Mladenovic, Nenad

    2016-10-01

    We consider the NP-hard problem of synthesizing an optimal spanning communication subgraph in a given arbitrary simple edge-weighted graph. This problem occurs in wireless networks when minimizing the total transmission power consumption. We propose several new heuristics based on the variable neighborhood search metaheuristic for the approximate solution of the problem. We performed a numerical experiment in which all proposed algorithms were executed on randomly generated test samples. For these instances, on average, our algorithms outperform the previously known heuristics.

  20. Knowledge, attitude, and practice (KAP) of food hygiene among schools students' in Majmaah city, Saudi Arabia.

    PubMed

    Almansour, Mohammed; Sami, Waqas; Al-Rashedy, Oliyan Shoqer; Alsaab, Rayan Saad; Alfayez, Abdulrahman Saad; Almarri, Nawaf Rashed

    2016-04-01

    To determine the level of knowledge, attitude, and practice of food hygiene among primary, intermediate and high school students and explore associations, if any, with socio-demographic differences. The observational cross-sectional study was conducted at boys' schools in Majmaah, Kingdom of Saudi Arabia, from February to May 2014. Data were collected using a stratified random sampling technique from students aged 8-25 years. Two schools from each level (primary, intermediate and high school) were randomly selected, and data were collected from the selected schools using a simple random sampling method. A self-administered modified Sharif and Al-Malki questionnaire for knowledge, attitude and practice of food hygiene was used, with Arabic translation. The mean age of the 377 male students in the study was 14.53±2.647 years. Knowledge levels were lower in primary school students than in high school students (p=0.026). Attitude levels were higher in primary school students than in intermediate school students (p<0.001). No significant difference was observed between groups with regard to practice levels (p=0.152). The students exhibited good practice levels, despite fair knowledge and attitude levels.

  1. Complex versus simple ankle movement training in stroke using telerehabilitation: a randomized controlled trial.

    PubMed

    Deng, Huiqiong; Durfee, William K; Nuckley, David J; Rheude, Brandon S; Severson, Amy E; Skluzacek, Katie M; Spindler, Kristen K; Davey, Cynthia S; Carey, James R

    2012-02-01

    Telerehabilitation allows rehabilitative training to continue remotely after discharge from acute care and can include complex tasks known to create rich conditions for neural change. The purposes of this study were: (1) to explore the feasibility of using telerehabilitation to improve ankle dorsiflexion during the swing phase of gait in people with stroke and (2) to compare complex versus simple movements of the ankle in promoting behavioral change and brain reorganization. This study was a pilot randomized controlled trial. Training was done in the participant's home. Testing was done in separate research labs involving functional magnetic resonance imaging (fMRI) and multi-camera gait analysis. Sixteen participants with chronic stroke and impaired ankle dorsiflexion were assigned randomly to receive 4 weeks of telerehabilitation of the paretic ankle. Participants received either computerized complex movement training (track group) or simple movement training (move group). Behavioral changes were measured with the 10-m walk test and gait analysis using a motion capture system. Brain reorganization was measured with ankle tracking during fMRI. Dorsiflexion during gait was significantly larger in the track group compared with the move group. For fMRI, although the volume, percent volume, and intensity of cortical activation failed to show significant changes, the frequency count of the number of participants showing an increase versus a decrease in these values from pretest to posttest measurements was significantly different between the 2 groups, with the track group decreasing and the move group increasing. Limitations of this study were that no follow-up test was conducted and that a small sample size was used. The results suggest that telerehabilitation, emphasizing complex task training with the paretic limb, is feasible and can be effective in promoting further dorsiflexion in people with chronic stroke.

  2. A Comparison of Techniques for Scheduling Earth-Observing Satellites

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2004-01-01

    Scheduling observations by coordinated fleets of Earth Observing Satellites (EOS) involves large search spaces, complex constraints and poorly understood bottlenecks, conditions where evolutionary and related algorithms are often effective. However, there are many such algorithms and the best one to use is not clear. Here we compare multiple variants of the genetic algorithm, stochastic hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling on ten realistically-sized EOS scheduling problems. Schedules are represented by a permutation (non-temporal ordering) of the observation requests. A simple deterministic scheduler assigns times and resources to each observation request in the order indicated by the permutation, discarding those that violate the constraints created by previously scheduled observations. Simulated annealing performed best, and random mutation outperformed a more 'intelligent' mutator. Furthermore, the best mutator, by a small margin, was a novel approach we call temperature-dependent random sampling, which makes large changes in the early stages of evolution and smaller changes towards the end of the search.
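
    The permutation-plus-greedy-decoder representation is easy to prototype. Below is a toy version (a single instrument, random time windows, plain simulated annealing with swap mutations; none of the real EOS constraints or the paper's temperature-dependent sampling operator):

    ```python
    # Hedged sketch: permutation schedules decoded greedily, searched by annealing.
    import math, random

    random.seed(0)
    requests = [(s, s + random.randint(5, 20))        # (start, end) windows
                for s in random.sample(range(300), 40)]

    def schedule_count(perm):
        """Accept requests in permutation order unless they overlap an
        already-accepted request; return how many were scheduled."""
        busy = []
        for i in perm:
            s, e = requests[i]
            if all(e <= bs or s >= be for bs, be in busy):
                busy.append((s, e))
        return len(busy)

    perm = list(range(len(requests)))
    cur = best = schedule_count(perm)
    temp = 5.0
    for _ in range(20_000):
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]           # swap mutation
        new = schedule_count(perm)
        if new >= cur or random.random() < math.exp((new - cur) / temp):
            cur = new
            best = max(best, cur)
        else:
            perm[i], perm[j] = perm[j], perm[i]       # undo the swap
        temp *= 0.9997                                # cooling schedule
    print("best number of scheduled observations:", best)
    ```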

  3. Application and testing of a procedure to evaluate transferability of habitat suitability criteria

    USGS Publications Warehouse

    Thomas, Jeff A.; Bovee, Ken D.

    1993-01-01

    A procedure designed to test the transferability of habitat suitability criteria was evaluated in the Cache la Poudre River, Colorado. Habitat suitability criteria were developed for active adult and juvenile rainbow trout in the South Platte River, Colorado. These criteria were tested by comparing microhabitat use predicted from the criteria with observed microhabitat use by adult rainbow trout in the Cache la Poudre River. A one-sided χ² test, using counts of occupied and unoccupied cells in each suitability classification, was used to test for non-random selection for optimum habitat use over usable habitat and for suitable over unsuitable habitat. Criteria for adult rainbow trout were judged to be transferable to the Cache la Poudre River, but juvenile criteria (applied to adults) were not transferable. Random subsampling of occupied and unoccupied cells was conducted to determine the effect of sample size on the reliability of the test procedure. The incidence of type I and type II errors increased rapidly as the sample size was reduced below 55 occupied and 200 unoccupied cells. Recommended modifications to the procedure included the adoption of a systematic or randomized sampling design and direct measurement of microhabitat variables. With these modifications, the procedure is economical, simple and reliable. Use of the procedure as a quality assurance device in routine applications of the instream flow incremental methodology was encouraged.

  4. Random vs. systematic sampling from administrative databases involving human subjects.

    PubMed

    Hagino, C; Lo, R J

    1998-09-01

    Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes of n (50, 100, 150, 200, 250, 300, 500, 800). From the profile/characteristics, summaries of four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-squared tests and unpaired t tests were performed to determine whether any of the differences [descriptively greater than 7% or 7 yr] were also statistically significant. The strengths of the agreements between the provincial distributions were quantified by calculating the percent agreements for each (provincial pairwise-comparison methods). Any percent agreement less than 70% was judged to be unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
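
    Both selection schemes compared here are a few lines each; this sketch draws them from a synthetic membership list (the list size and the age factor are illustrative, not the Canadian Chiropractic Association data).

    ```python
    # Hedged sketch: simple random sampling vs. systematic sampling from a list.
    import numpy as np

    rng = np.random.default_rng(4)
    N = 5000
    ages = rng.normal(45, 12, N)               # synthetic practitioner ages

    def srs(values, n):
        return rng.choice(values, size=n, replace=False)

    def systematic(values, n):
        k = len(values) // n                   # sampling interval
        start = rng.integers(0, k)             # random start
        return values[start::k][:n]

    for n in (50, 200, 500):
        print(f"n={n}: pop mean={ages.mean():.1f}, "
              f"SRS mean={srs(ages, n).mean():.1f}, "
              f"SS mean={systematic(ages, n).mean():.1f}")
    ```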

  5. Secondary outcome analysis for data from an outcome-dependent sampling design.

    PubMed

    Pan, Yinghao; Cai, Jianwen; Longnecker, Matthew P; Zhou, Haibo

    2018-04-22

    An outcome-dependent sampling (ODS) scheme is a cost-effective way to conduct a study. For a study with a continuous primary outcome, an ODS scheme can be implemented where the expensive exposure is only measured on a simple random sample and on supplemental samples selected from the 2 tails of the primary outcome variable. With the tremendous cost invested in collecting the primary exposure information, investigators often would like to use the available data to study the relationship between a secondary outcome and the obtained exposure variable. This is referred to as secondary analysis. Secondary analysis in ODS designs can be tricky, as the ODS sample is not a random sample from the general population. In this article, we use inverse probability weighted and augmented inverse probability weighted estimating equations to analyze the secondary outcome for data obtained from the ODS design. We do not make any parametric assumptions on the primary and secondary outcomes and only specify the form of the regression mean models, thus allowing an arbitrary error distribution. Our approach is robust to second- and higher-order moment misspecification. It also leads to more precise estimates of the parameters by effectively using all the available participants. Through simulation studies, we show that the proposed estimator is consistent and asymptotically normal. Data from the Collaborative Perinatal Project are analyzed to illustrate our method. Copyright © 2018 John Wiley & Sons, Ltd.
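
    The basic inverse probability weighted correction is compact to demonstrate. The sketch below (synthetic data; statsmodels assumed; the paper's augmented estimator and asymptotic theory are not reproduced) shows a naive regression of a secondary outcome going wrong under ODS, and the IPW fit recovering the slope.

    ```python
    # Hedged sketch: IPW regression for a secondary outcome under ODS.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    N = 20_000
    x = rng.normal(size=N)                     # expensive exposure
    e1 = rng.normal(size=N)
    y1 = x + e1                                # primary outcome drives sampling
    y2 = 0.5 * x + 0.7 * e1 + 0.5 * rng.normal(size=N)  # secondary, shares error

    # ODS: modest SRS inclusion plus heavy supplemental sampling of y1's tails
    lo, hi = np.percentile(y1, [10, 90])
    p_incl = np.where((y1 < lo) | (y1 > hi), 0.50, 0.05)
    s = rng.random(N) < p_incl

    X = sm.add_constant(x[s])
    print("naive slope:", sm.OLS(y2[s], X).fit().params[1])   # biased upward
    print("IPW slope:  ", sm.WLS(y2[s], X, weights=1 / p_incl[s]).fit().params[1])
    ```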

  6. A simple cryogenic holder for tensile testing of soft biological tissues.

    PubMed

    Lepetit, J; Favier, R; Grajales, A; Skjervold, P O

    2004-04-01

    To overcome the difficulty of gripping soft biological materials for tensile tests, a simple, inexpensive cryogenic holder was developed which allows rapid (3 min) preparation of samples. It is made of 6 parts built in a Bakelite cloth, which is an excellent thermal insulator, and is used with rectangular samples (8×10⁻² m × 10⁻² m × 10⁻² m). The holder with the sample inside is completely immersed in liquid nitrogen for 50 s. This duration allows the freezing of the sample ends over a 10⁻² m length and gives a very flat freezing surface throughout the sample cross section. The 6×10⁻² m central part of the sample remains at ambient temperature. Two parts of the holder help maintain the sample until its ends are vertically gripped in the tensile machine, thus avoiding any sample deformation during this step. No pressure is applied on the frozen part of the sample by the grips of the tensile machine, and this avoids breaks in this region. The sample is fixed by adhesion forces (>1 kN) between its frozen parts and 2 pieces of the holder. The procedure has been successfully tested with bovine and salmon muscle samples, and results show tensile breaks randomly distributed in the unfrozen region of the samples. Particular attention has been paid to obtaining a very flat freezing surface so that the axial strain is equal throughout the sample and therefore any strain-related mechanical parameters can be accurately determined. The dimensions of the holder can be easily modified to fit other sample geometries, and it can be used with other biological materials.

  7. Simple and rapid detection of the porcine reproductive and respiratory syndrome virus from pig whole blood using filter paper.

    PubMed

    Inoue, Ryo; Tsukahara, Takamitsu; Sunaba, Chinatsu; Itoh, Mitsugi; Ushida, Kazunari

    2007-04-01

    The combination of Flinders Technology Associates filter papers (FTA cards) and real-time PCR was examined to establish a simple and rapid technique for the detection of porcine reproductive and respiratory syndrome virus (PRRSV) from whole pig blood. A modified live PRRS vaccine was diluted with either sterilised saline or pig whole blood, and the suspensions were applied onto the FTA cards. Real-time RT-PCR detection of PRRSV was performed directly on the samples applied to the FTA card, without an RNA extraction step. Six whole blood samples from randomly selected piglets on a PRRSV-infected farm were also assayed in this study. The expected PCR product was successfully amplified from both the saline-diluted and the whole-blood-diluted vaccine. The same PCR amplicon was detected in all blood samples assayed in this study. This study suggests that the combination of an FTA card and real-time PCR is a rapid and easy technique for the detection of PRRSV. This technique can remarkably shorten the time required for PRRSV detection from whole blood and makes the procedure much easier.

  8. The effect of using bomb calorimeter in improving science process skills of physics students

    NASA Astrophysics Data System (ADS)

    Edie, S. S.; Masturi; Safitri, H. N.; Alighiri, D.; Susilawati; Sari, L. M. E. K.; Marwoto, P.; Iswari, R. S.

    2018-03-01

    The bomb calorimeter is a piece of laboratory equipment used to determine the heat of combustion or heat capacity of a sample burned in excess oxygen. This study aims to determine the effect of using a bomb calorimeter on the science process skills of physics students, including the effectiveness of using the equipment and the improvement of students' science process skills before and after using it. The sample was selected by simple random sampling, with a one-group pretest-posttest research design. The instrument used was a written test aligned with the science process skill aspects. Analysis of the effectiveness of the bomb calorimeter showed a result of 87.88%, while the study of science process skill improvement showed an n-gain value of 0.64, which is in the medium category.
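
    The reported 0.64 is a normalized gain; below is a minimal sketch of the usual Hake formula, g = (post - pre) / (max - pre), with invented scores that happen to reproduce the value (the paper's actual score data are not shown in the abstract).

    ```python
    # Hedged sketch: normalized (Hake) gain.
    def n_gain(pre, post, max_score=100):
        return (post - pre) / (max_score - pre)

    print(n_gain(pre=50, post=82))   # 0.64, in the conventional medium band (0.3-0.7)
    ```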

  9. Improving the chi-squared approximation for bivariate normal tolerance regions

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.

    1993-01-01

    Let $X$ be a two-dimensional random variable distributed according to $N_2(\mu, \Sigma)$ and let $\bar{X}$ and $S$ be the respective sample mean and covariance matrix calculated from $N$ observations of $X$. Given a containment probability $\beta$ and a level of confidence $\gamma$, we seek a number $c$, depending only on $N$, $\beta$, and $\gamma$, such that the ellipsoid $R = \{x : (x - \bar{X})' S^{-1} (x - \bar{X}) \le c\}$ is a tolerance region of content $\beta$ and level $\gamma$; i.e., $R$ has probability $\gamma$ of containing at least $100\beta$ percent of the distribution of $X$. Various approximations for $c$ exist in the literature, but one of the simplest to compute, a multiple of the ratio of certain chi-squared percentage points, is badly biased for small $N$. For the bivariate normal case, most of the bias can be removed by a simple adjustment using a factor $A$ which depends on $\beta$ and $\gamma$. This paper provides values of $A$ for various $\beta$ and $\gamma$ so that the simple approximation for $c$ can be made viable for any reasonable sample size. The methodology provides an illustrative example of how a combination of Monte-Carlo simulation and simple regression modelling can be used to improve an existing approximation.

  10. Random sphere packing model of heterogeneous propellants

    NASA Astrophysics Data System (ADS)

    Kochevets, Sergei Victorovich

    It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large scale computations is a realistic model of industrial propellants which retains the true morphology, a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined, and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion. In addition, a method of analysis of the morphology of heterogeneous propellants is developed which uses the concept of multi-point correlation functions. A set of intrinsic length scales of local density fluctuations in random heterogeneous propellants is identified by performing a Monte-Carlo study of the correlation functions. This method of analysis shows great promise for understanding the origins of the combustion instability of heterogeneous propellants, and is believed to become a valuable tool for the development of safe and reliable rocket engines.
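
    For contrast with the dissertation's growth-rate algorithm, here is the simplest possible packer, random sequential addition with overlap rejection (sizes and counts illustrative). It saturates at far lower packing fractions than the method described above, which is precisely why controlling the growth rate matters for realistic propellant morphologies.

    ```python
    # Hedged sketch: random sequential addition of polydisperse spheres in a box.
    import numpy as np

    rng = np.random.default_rng(6)
    box, attempts = 10.0, 20_000
    radii_pool = np.array([1.0, 0.5, 0.25])    # illustrative size classes

    centers, radii = np.empty((0, 3)), np.empty(0)
    for _ in range(attempts):
        r = rng.choice(radii_pool)
        c = rng.uniform(r, box - r, size=3)    # keep the sphere inside the box
        if centers.size == 0 or np.all(np.linalg.norm(centers - c, axis=1) >= radii + r):
            centers = np.vstack([centers, c])
            radii = np.append(radii, r)

    phi = np.sum(4 / 3 * np.pi * radii**3) / box**3
    print(f"placed {len(radii)} spheres, packing fraction = {phi:.3f}")
    ```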

  11. Representation of limb kinematics in Purkinje cell simple spike discharge is conserved across multiple tasks

    PubMed Central

    Hewitt, Angela L.; Popa, Laurentiu S.; Pasalar, Siavash; Hendrix, Claudia M.

    2011-01-01

    Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of $R^2_{adj}$), followed by position (28 ± 24% of $R^2_{adj}$) and speed (11 ± 19% of $R^2_{adj}$). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower $R^2_{adj}$ values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics. PMID:21795616

  12. Evaluating the Quantitative Capabilities of Metagenomic Analysis Software.

    PubMed

    Kerepesi, Csaba; Grolmusz, Vince

    2016-05-01

    DNA sequencing technologies are applied widely and frequently today to describe metagenomes, i.e., microbial communities in environmental or clinical samples, without the need for culturing them. These technologies usually return short (100-300 base-pairs long) DNA reads, and these reads are processed by metagenomic analysis software that assigns phylogenetic composition information to the dataset. Here we evaluate three metagenomic analysis software packages (AmphoraNet, a webserver implementation of AMPHORA2; MG-RAST; and MEGAN5) for their capabilities of assigning quantitative phylogenetic information to the data, describing the frequency of appearance of microorganisms of the same taxa in the sample. The difficulties of the task arise from the fact that longer genomes produce more reads from the same organism than shorter genomes, and some software assigns higher frequencies to species with longer genomes than to those with shorter ones. This phenomenon is called the "genome length bias." Dozens of complex artificial metagenome benchmarks can be found in the literature. Because of the complexity of those benchmarks, it is usually difficult to judge the resistance of a metagenomic software package to this genome length bias. Therefore, we have made a simple benchmark for the evaluation of "taxon-counting" in a metagenomic sample: we took the same number of copies of three full bacterial genomes of different lengths, broke them up randomly into short reads of average length 150 bp, and mixed the reads, creating our simple benchmark. Because of its simplicity, the benchmark is not supposed to serve as a mock metagenome, but if a software package fails on that simple task, it will surely fail on most real metagenomes. We applied the three software packages to the benchmark. The ideal quantitative solution would assign the same proportion to the three bacterial taxa. We found that AMPHORA2/AmphoraNet gave the most accurate results and the other two were under-performers: they quite reliably counted each short read towards its respective taxon, producing the typical genome length bias. The benchmark dataset is available at http://pitgroup.org/static/3RandomGenome-100kavg150bps.fna.
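
    The benchmark's logic is reproducible in a few lines: with equal copy numbers, the expected read count per taxon is proportional to genome length, so a tool that merely counts reads will over-report long genomes. This sketch uses invented taxon names and lengths, not the benchmark file linked above.

    ```python
    # Hedged sketch: genome length bias in read-count-based taxon quantification.
    import random

    random.seed(7)
    genome_lengths = {"taxonA": 2_000_000, "taxonB": 4_000_000, "taxonC": 8_000_000}
    copies, n_reads = 10, 30_000

    taxa = [t for t in genome_lengths for _ in range(copies)]
    weights = [genome_lengths[t] for t in taxa]   # reads land proportionally to length
    counts = dict.fromkeys(genome_lengths, 0)
    for _ in range(n_reads):
        counts[random.choices(taxa, weights=weights)[0]] += 1

    total = sum(counts.values())
    for t, c in counts.items():
        print(f"{t}: {c / total:.1%} of reads (true organism share: 33.3%)")
    ```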

  13. Observed intra-cluster correlation coefficients in a cluster survey sample of patient encounters in general practice in Australia

    PubMed Central

    Knox, Stephanie A; Chondros, Patty

    2004-01-01

    Background Cluster sample study designs are cost effective, however cluster samples violate the simple random sample assumption of independence of observations. Failure to account for the intra-cluster correlation of observations when sampling through clusters may lead to an under-powered study. Researchers therefore need estimates of intra-cluster correlation for a range of outcomes to calculate sample size. We report intra-cluster correlation coefficients observed within a large-scale cross-sectional study of general practice in Australia, where the general practitioner (GP) was the primary sampling unit and the patient encounter was the unit of inference. Methods Each year the Bettering the Evaluation and Care of Health (BEACH) study recruits a random sample of approximately 1,000 GPs across Australia. Each GP completes details of 100 consecutive patient encounters. Intra-cluster correlation coefficients were estimated for patient demographics, morbidity managed and treatments received. Intra-cluster correlation coefficients were estimated for descriptive outcomes and for associations between outcomes and predictors and were compared across two independent samples of GPs drawn three years apart. Results Between April 1999 and March 2000, a random sample of 1,047 Australian general practitioners recorded details of 104,700 patient encounters. Intra-cluster correlation coefficients for patient demographics ranged from 0.055 for patient sex to 0.451 for language spoken at home. Intra-cluster correlations for morbidity variables ranged from 0.005 for the management of eye problems to 0.059 for management of psychological problems. Intra-cluster correlation for the association between two variables was smaller than the descriptive intra-cluster correlation of each variable. When compared with the April 2002 to March 2003 sample (1,008 GPs) the estimated intra-cluster correlation coefficients were found to be consistent across samples. Conclusions The demonstrated precision and reliability of the estimated intra-cluster correlations indicate that these coefficients will be useful for calculating sample sizes in future general practice surveys that use the GP as the primary sampling unit. PMID:15613248
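
    A standard estimator behind such tables is the one-way ANOVA intra-cluster correlation, ICC = (MSB - MSW) / (MSB + (m - 1) MSW). A minimal sketch on synthetic GP-clustered data (the BEACH study's actual estimators, covering many outcome types, are more elaborate):

    ```python
    # Hedged sketch: ANOVA-based ICC for a balanced clustered design.
    import numpy as np

    rng = np.random.default_rng(8)
    k, m = 100, 100                            # clusters (GPs) and encounters per GP
    icc_true = 0.05
    gp = rng.normal(0, np.sqrt(icc_true), k)   # between-GP component
    y = gp[:, None] + rng.normal(0, np.sqrt(1 - icc_true), (k, m))

    grand = y.mean()
    msb = m * np.sum((y.mean(axis=1) - grand) ** 2) / (k - 1)
    msw = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (k * (m - 1))
    icc_hat = (msb - msw) / (msb + (m - 1) * msw)
    print(f"estimated ICC = {icc_hat:.3f} (simulated truth {icc_true})")
    ```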

  14. The Self-Adapting Focused Review System. Probability sampling of medical records to monitor utilization and quality of care.

    PubMed

    Ash, A; Schwartz, M; Payne, S M; Restuccia, J D

    1990-11-01

    Medical record review is increasing in importance as the need to identify and monitor utilization and quality of care problems grows. To conserve resources, reviews are usually performed on a subset of cases. If judgment is used to identify subgroups for review, this raises the following questions: How should subgroups be determined, particularly since the locus of problems can change over time? What standard of comparison should be used in interpreting rates of problems found in subgroups? How can population problem rates be estimated from observed subgroup rates? How can one avoid the bias that arises because reviewers know that selected cases are suspected of having problems? How can changes in problem rates over time be interpreted when evaluating intervention programs? Simple random sampling, an alternative to subgroup review, overcomes the problems implied by these questions but is inefficient. The Self-Adapting Focused Review System (SAFRS), introduced and described here, provides an adaptive approach to record selection that is based upon model-weighted probability sampling. It retains the desirable inferential properties of random sampling while allowing reviews to be concentrated on cases currently thought most likely to be problematic. Model development and evaluation are illustrated using hospital data to predict inappropriate admissions.

  15. The prevalence of borderline personality symptoms in adolescents.

    PubMed

    Mohammadi, Mohammad Reza; Shamohammadi, Morteza; Salmanian, Maryam

    2014-07-01

    This study aimed to assess the prevalence of borderline personality symptoms in 16-18 year old adolescents. In this cross-sectional descriptive study, 422 high school students (211 boys, 211 girls) aged 16-18 were selected by cluster random sampling and simple random sampling in 2011-2012. The participants were assessed using the revised diagnostic interview for borderline questionnaire (DIB-R) and a demographic questionnaire. Data were analyzed using Pearson and Spearman correlation coefficients. Of the participants, 0.9% (0.22% of the 16 year olds, 0.45% of the 17 year olds and 0.22% of the 18 year olds) were diagnosed with borderline personality symptoms. The prevalence of borderline personality symptoms was 0.45% of the total sample in boys and 0.45% of the total sample in girls. With respect to the relationship between demographic variables (age, sex, location, parents' occupation, parents' kinship, parents' education and birth order) and borderline personality symptoms, only parents' kinship showed a weak correlation with borderline personality symptoms. In view of the 0.9% prevalence of borderline personality symptoms in adolescents, attention should be paid to the diagnosis and treatment of this disorder. Furthermore, work needs to be done to improve the mental health and quality of life of adolescents.

  16. Variance Estimation, Design Effects, and Sample Size Calculations for Respondent-Driven Sampling

    PubMed Central

    2006-01-01

    Hidden populations, such as injection drug users and sex workers, are central to a number of public health problems. However, because of the nature of these groups, it is difficult to collect accurate information about them, and this difficulty complicates disease prevention efforts. A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence of certain traits in these populations. Yet, not enough is known about the sample-to-sample variability of these prevalence estimates. In this paper, we present a bootstrap method for constructing confidence intervals around respondent-driven sampling estimates and demonstrate in simulations that it outperforms the naive method currently in use. We also use simulations and real data to estimate the design effects for respondent-driven sampling in a number of situations. We conclude with practical advice about the power calculations that are needed to determine the appropriate sample size for a study using respondent-driven sampling. In general, we recommend a sample size twice as large as would be needed under simple random sampling. PMID:16937083
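
    The closing recommendation translates into a two-line calculation: size the study as if it were a simple random sample, then double it. A sketch with illustrative inputs:

    ```python
    # Hedged sketch: RDS sample size as SRS size times a design effect of ~2.
    import math

    def srs_n(p, margin, z=1.96):
        """Classic SRS size for estimating a proportion p to within +/- margin."""
        return math.ceil(z**2 * p * (1 - p) / margin**2)

    n_srs = srs_n(p=0.15, margin=0.05)         # illustrative prevalence and precision
    print("SRS n:", n_srs, "-> RDS n (deff = 2):", 2 * n_srs)
    ```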

  17. Random Sampling with Interspike-Intervals of the Exponential Integrate and Fire Neuron: A Computational Interpretation of UP-States

    PubMed Central

    Steimer, Andreas; Schindler, Kaspar

    2015-01-01

    Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP-states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation. PMID:26203657

  18. Improved Horvitz-Thompson Estimation of Model Parameters from Two-phase Stratified Samples: Applications in Epidemiology

    PubMed Central

    Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal

    2009-01-01

    The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455
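
    As a minimal illustration of the IPW idea named above (weights set by the known phase-two sampling fractions), the following Python sketch computes a Horvitz-Thompson estimate of a cohort total; the strata and fractions are hypothetical, and the calibration adjustments discussed in the paper are not shown.

    ```python
    # Minimal inverse-probability-weighted (Horvitz-Thompson) estimate of a
    # cohort total from a two-phase stratified sample: each phase-two unit
    # is weighted by the inverse of its known stratum sampling fraction.
    def ipw_total(values, strata, sampling_fraction):
        return sum(y / sampling_fraction[s] for y, s in zip(values, strata))

    # Hypothetical design: phase two drew 50% of cases, 10% of controls
    frac = {"case": 0.5, "control": 0.1}
    y      = [2.0, 1.5, 3.0, 0.5, 1.0]
    strata = ["case", "case", "control", "control", "control"]
    print(ipw_total(y, strata, frac))   # estimated cohort total of y
    ```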

  19. Analysis of creative mathematic thinking ability in problem based learning model based on self-regulation learning

    NASA Astrophysics Data System (ADS)

    Munahefi, D. N.; Waluya, S. B.; Rochmad

    2018-03-01

    The purpose of this research was to identify the effectiveness of the Problem Based Learning (PBL) model based on Self-Regulation Learning (SRL) on the ability of mathematical creative thinking and to analyze the mathematical creative thinking ability of high school students in solving mathematical problems. The population of this study was students of grade X SMA N 3 Klaten. The research method used in this research was sequential explanatory. The quantitative stage used a simple random sampling technique, in which two classes were selected randomly: an experimental class taught with the PBL model based on SRL and a control class taught with an expository model. Sample selection at the qualitative stage used a non-probability sampling technique in which three students each were selected from the high, medium, and low academic levels. The PBL model with the SRL approach was effective for students' mathematical creative thinking ability. Students at the low academic level taught with the PBL model based on SRL achieved the aspects of fluency and flexibility. Students at the medium academic level achieved the fluency and flexibility aspects well, but their originality was not yet well structured. Students at the high academic level could reach the aspect of originality.

  20. Design and simulation study of the immunization Data Quality Audit (DQA).

    PubMed

    Woodard, Stacy; Archer, Linda; Zell, Elizabeth; Ronveaux, Olivier; Birmingham, Maureen

    2007-08-01

    The goal of the Data Quality Audit (DQA) is to assess whether the Global Alliance for Vaccines and Immunization-funded countries are adequately reporting the number of diphtheria-tetanus-pertussis immunizations given, on which the "shares" are awarded. Given that this sampling design is a modified two-stage cluster sample (modified because a stratified, rather than a simple, random sample of health facilities is obtained from the selected clusters), the formula for the calculation of the standard error of the estimate is unknown. An approximated standard error has been proposed, and the first goal of this simulation is to assess the accuracy of that standard error. Results from the simulations based on hypothetical populations were found not to be representative of the actual DQAs that were conducted. Additional simulations were then conducted on the actual DQA data to better assess the precision of the DQA with both the original and the increased sample sizes.

  1. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    PubMed

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.

  2. Optimal Auxiliary-Covariate Based Two-Phase Sampling Design for Semiparametric Efficient Estimation of a Mean or Mean Difference, with Application to Clinical Trials

    PubMed Central

    Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea

    2014-01-01

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289
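
    A minimal sketch of the cost-standardized allocation principle described above: phase-two selection probabilities taken proportional to sd(Y | W) divided by the square root of the per-unit measurement cost. The normalization to an average sampling rate and all numbers are illustrative assumptions, not the paper's exact optimality formulas.

    ```python
    # Neyman-type phase-two selection probabilities:
    # p(h) proportional to sd(Y | W = h) / sqrt(cost_h), capped at 1.
    import math

    def optimal_phase2_probs(sd_given_w, cost, budget_fraction):
        raw = {h: sd_given_w[h] / math.sqrt(cost[h]) for h in sd_given_w}
        # scale so the average sampling rate equals budget_fraction (if uncapped)
        scale = budget_fraction * len(raw) / sum(raw.values())
        return {h: min(1.0, scale * r) for h, r in raw.items()}

    sd   = {"low W": 0.5, "mid W": 1.0, "high W": 2.5}   # hypothetical sd(Y|W)
    cost = {"low W": 1.0, "mid W": 1.0, "high W": 4.0}   # per-unit cost of Y
    print(optimal_phase2_probs(sd, cost, budget_fraction=0.3))
    ```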

  3. Statistical uncertainty analysis applied to the DRAGONv4 code lattice calculations and based on JENDL-4 covariance data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.

    2012-07-01

    In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k{infinity} and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest number of isotopic covariance matrices among the major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonant self-shielding calculations such as DRAGONv4. (authors)
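
    A minimal Latin Hypercube Sampling sketch of the stratified input coverage described above: each input dimension is split into n equal-probability strata, one draw is made per stratum, and the stratum order is shuffled independently per dimension.

    ```python
    # Minimal Latin Hypercube Sampling on the unit hypercube [0,1)^d.
    import random

    def latin_hypercube(n, d, seed=0):
        rng = random.Random(seed)
        samples = [[0.0] * d for _ in range(n)]
        for j in range(d):
            # one draw inside each of the n strata, then shuffle stratum order
            cells = [(i + rng.random()) / n for i in range(n)]
            rng.shuffle(cells)
            for i in range(n):
                samples[i][j] = cells[i]
        return samples

    for row in latin_hypercube(5, 2):
        print(row)   # every 1/5-wide band of each axis holds exactly one point
    ```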

  4. Estimating accuracy of land-cover composition from two-stage cluster sampling

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.

    2009-01-01

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.

  5. [A comparison of convenience sampling and purposive sampling].

    PubMed

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling". Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation not by statistical power analysis.

  6. Minimalist design of a robust real-time quantum random number generator

    NASA Astrophysics Data System (ADS)

    Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.

    2015-08-01

    We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as its simple and straightforward hardware implementation as a stand-alone module. As a source of randomness the device uses measurements of time intervals between clicks of a single-photon detector. The obtained raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high speed on-the-fly processing without the need of extensive computations. The overall performance of the device is around 1 random bit per detector click, resulting in 1.2 Mbit/s generation rate in our implementation.

  7. Comparative analysis of used car price evaluation models

    NASA Astrophysics Data System (ADS)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predict used car prices in several articles. However, little has been studied on comparing different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records throughout China for an empirical analysis offering a thorough comparison of two algorithms: linear regression and random forest. These two algorithms are used to predict used car prices in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.
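
    A minimal sketch of the linear-regression-versus-random-forest comparison on a synthetic used-car-style dataset; the paper's dealing records are not public, so the features and price function below are invented for illustration.

    ```python
    # Compare linear regression and random forest on synthetic car-price data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    age = rng.uniform(0, 10, n)               # years since registration
    mileage = rng.uniform(0, 200_000, n)      # kilometres driven
    make = rng.integers(0, 20, n)             # encoded car make
    # invented nonlinear price function plus noise
    price = (200_000 * np.exp(-0.15 * age) * (1 - mileage / 400_000)
             * (1 + 0.02 * make) + rng.normal(0, 5_000, n))

    X = np.column_stack([age, mileage, make])
    X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)

    for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
        model.fit(X_tr, y_tr)
        mae = mean_absolute_error(y_te, model.predict(X_te))
        print(type(model).__name__, round(mae))   # RF wins on nonlinear data
    ```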

  8. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs

    ERIC Educational Resources Information Center

    Petscher, Yaacov; Schatschneider, Christopher

    2011-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the…

  9. The Effect of Herrmann Whole Brain Teaching Method on Students' Understanding of Simple Electric Circuits

    ERIC Educational Resources Information Center

    Bawaneh, Ali Khalid Ali; Nurulazam Md Zain, Ahmad; Salmiza, Saleh

    2011-01-01

    The purpose of this study was to investigate the effect of Herrmann Whole Brain Teaching Method over conventional teaching method on eight graders in their understanding of simple electric circuits in Jordan. Participants (N = 273 students; M = 139, F = 134) were randomly selected from Bani Kenanah region-North of Jordan and randomly assigned to…

  10. Constrained sampling experiments reveal principles of detection in natural scenes.

    PubMed

    Sebastian, Stephen; Abrams, Jared; Geisler, Wilson S

    2017-07-11

    A fundamental everyday visual task is to detect target objects within a background scene. Using relatively simple stimuli, vision science has identified several major factors that affect detection thresholds, including the luminance of the background, the contrast of the background, the spatial similarity of the background to the target, and uncertainty due to random variations in the properties of the background and in the amplitude of the target. Here we use an experimental approach based on constrained sampling from multidimensional histograms of natural stimuli, together with a theoretical analysis based on signal detection theory, to discover how these factors affect detection in natural scenes. We sorted a large collection of natural image backgrounds into multidimensional histograms, where each bin corresponds to a particular luminance, contrast, and similarity. Detection thresholds were measured for a subset of bins spanning the space, where a natural background was randomly sampled from a bin on each trial. In low-uncertainty conditions, both the background bin and the amplitude of the target were fixed, and, in high-uncertainty conditions, they varied randomly on each trial. We found that thresholds increase approximately linearly along all three dimensions and that detection accuracy is unaffected by background bin and target amplitude uncertainty. The results are predicted from first principles by a normalized matched-template detector, where the dynamic normalizing gain factor follows directly from the statistical properties of the natural backgrounds. The results provide an explanation for classic laws of psychophysics and their underlying neural mechanisms.

  11. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    PubMed

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Sampling design for spatially distributed hydrogeologic and environmental processes

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1992-01-01

    A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for physical sciences is its rationalization of correlations in spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range in subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations are necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions. © 1992.

  13. Undergraduate student mental health at Makerere University, Uganda

    PubMed Central

    OVUGA, EMILIO; BOARDMAN, JED; WASSERMAN, DANUTA

    2006-01-01

    There is little information on the current mental health of University students in Uganda. The present study was carried out to determine the prevalence of depressed mood and suicidal ideation among students at Makerere University. Two student samples participated. Sample I comprised 253 fresh students admitted to all faculties at the University in the academic year 2000/2001, selected by a simple random sampling procedure. Sample II comprised 101 students admitted to the Faculty of Medicine during the academic year 2002/2003. The prevalence of depressed mood was measured using the 13-item Beck Depression Inventory (BDI). The prevalence of depressed mood (BDI score 10 or more) was significantly higher in sample I (16.2%) than sample II (4.0%). Sample I members were significantly more likely than those of sample II to report lifetime and past week suicide ideation. Thus, there is a high prevalence of mental health problems among the general population of new students entering Makerere University and this is significantly higher than for new students in the Faculty of Medicine. PMID:16757997

  14. Record of hospitalizations for ambulatory care sensitive conditions: validation of the hospital information system.

    PubMed

    Rehem, Tania Cristina Morais Santa Barbara; de Oliveira, Maria Regina Fernandes; Ciosak, Suely Itsuko; Egry, Emiko Yoshikawa

    2013-01-01

    To estimate the sensitivity, specificity and positive and negative predictive values of the Unified Health System's Hospital Information System for the appropriate recording of hospitalizations for ambulatory care-sensitive conditions. The hospital information system records for conditions which are sensitive to ambulatory care, and for those which are not, were considered for analysis, taking the medical records as the gold standard. Through simple random sampling, a sample of 816 medical records was defined and selected by means of a list of random numbers using the Statistical Package for Social Sciences. The sensitivity was 81.89%, specificity was 95.19%, the positive predictive value was 77.61% and the negative predictive value was 96.27%. In the study setting, the Hospital Information System (SIH) was more specific than sensitive, with nearly 20% of care sensitive conditions not detected. There are no validation studies in Brazil of the Hospital Information System records for the hospitalizations which are sensitive to primary health care. These results are relevant when one considers that this system is one of the bases for assessment of the effectiveness of primary health care.
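
    The validation arithmetic reduces to a 2x2 table against the medical-record gold standard. A minimal sketch follows, with hypothetical cell counts chosen only to be consistent with the reported rates and the sample of 816 records.

    ```python
    # Sensitivity, specificity, and predictive values from a 2x2 table.
    def validation_measures(tp, fp, fn, tn):
        return {
            "sensitivity": tp / (tp + fn),   # gold-standard positives detected
            "specificity": tn / (tn + fp),   # gold-standard negatives rejected
            "ppv": tp / (tp + fp),           # flagged records that are true
            "npv": tn / (tn + fn),           # unflagged records that are true
        }

    # Hypothetical cells consistent with the reported rates (total = 816)
    print(validation_measures(tp=113, fp=33, fn=25, tn=645))
    ```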

  15. Cotton fabric-based electrochemical device for lactate measurement in saliva.

    PubMed

    Malon, Radha S P; Chua, K Y; Wicaksono, Dedy H B; Córcoles, Emma P

    2014-06-21

    Lactate measurement is vital in clinical diagnostics especially among trauma and sepsis patients. In recent years, it has been shown that saliva samples are an excellent applicable alternative for non-invasive measurement of lactate. In this study, we describe a method for the determination of lactate concentration in saliva samples by using a simple and low-cost cotton fabric-based electrochemical device (FED). The device was fabricated using a template method for patterning the electrodes and a wax-patterning technique for creating the sample placement/reaction zone. Lactate oxidase (LOx) enzyme was immobilised at the reaction zone using a simple entrapment method. The LOx enzymatic reaction product, hydrogen peroxide (H2O2), was measured using chronoamperometric measurements at the optimal detection potential (-0.2 V vs. Ag/AgCl), at which the device exhibited a linear working range between 0.1 and 5 mM, a sensitivity (slope) of 0.3169 μA mM(-1) and a detection limit of 0.3 mM. The low detection limit and wide linear range were suitable to measure salivary lactate (SL) concentration, thus saliva samples obtained under fasting conditions and after meals were evaluated using the FED. The measured SL varied among subjects and increased after meals. The proposed device provides a suitable analytical alternative for rapid and non-invasive determination of lactate in saliva samples. The device can also be adapted to a variety of other assays that require simplicity, low cost, portability and flexibility.

  16. Ranked set sampling: cost and optimal set size.

    PubMed

    Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying

    2002-12-01

    McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
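
    A minimal ranked set sampling sketch with set size m and perfect ranking: draw m sets of m units, rank each set by the cheap criterion, and measure only the i-th ranked unit of the i-th set; the population below is synthetic.

    ```python
    # Minimal ranked set sampling (RSS) with perfect ranking.
    import random

    def rss_cycle(population, m, rng):
        measured = []
        for i in range(m):
            ranked = sorted(rng.sample(population, m))  # cheap ranking step
            measured.append(ranked[i])                  # costly measurement step
        return measured

    rng = random.Random(42)
    population = [rng.gauss(100, 15) for _ in range(10_000)]
    cycles = [rss_cycle(population, m=3, rng=rng) for _ in range(200)]
    values = [v for cycle in cycles for v in cycle]
    print(sum(values) / len(values))   # close to the population mean of ~100
    ```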

  17. Efficient quantum pseudorandomness with simple graph states

    NASA Astrophysics Data System (ADS)

    Mezher, Rawad; Ghalbouni, Joe; Dgheim, Joseph; Markham, Damian

    2018-02-01

    Measurement based (MB) quantum computation allows for universal quantum computing by measuring individual qubits prepared in entangled multipartite states, known as graph states. Unless corrected for, the randomness of the measurements leads to the generation of ensembles of random unitaries, where each random unitary is identified with a string of possible measurement results. We show that repeating an MB scheme an efficient number of times, on a simple graph state, with measurements at fixed angles and no feedforward corrections, produces a random unitary ensemble that is an ε-approximate t-design on n qubits. Unlike previous constructions, the graph is regular and is also a universal resource for measurement based quantum computing, closely related to the brickwork state.

  18. Composing Music with Complex Networks

    NASA Astrophysics Data System (ADS)

    Liu, Xiaofan; Tse, Chi K.; Small, Michael

    In this paper we study the network structure in music and attempt to compose music artificially. Networks are constructed with nodes and edges corresponding to musical notes and their co-occurrences. We analyze sample compositions from Bach, Mozart, Chopin, as well as other types of music including Chinese pop music. We observe remarkably similar properties in all networks constructed from the selected compositions. Power-law exponents of degree distributions, mean degrees, clustering coefficients, mean geodesic distances, etc. are reported. With the network constructed, music can be created by using a biased random walk algorithm, which begins with a randomly chosen note and selects the subsequent notes according to a simple set of rules that compares the weights of the edges, weights of the nodes, and/or the degrees of nodes. The newly created music from complex networks will be played in the presentation.
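
    A minimal sketch of the biased random walk described above, choosing each next note with probability proportional to the outgoing edge weight; the tiny co-occurrence network is invented for illustration, and the paper's fuller rules (node weights, degrees) are omitted.

    ```python
    # Compose a melody by a weighted random walk on a note co-occurrence graph.
    import random

    def compose(network, start, length, seed=7):
        rng = random.Random(seed)
        melody, note = [start], start
        for _ in range(length - 1):
            neighbors = list(network[note])
            weights = [network[note][nxt] for nxt in neighbors]
            note = rng.choices(neighbors, weights=weights, k=1)[0]
            melody.append(note)
        return melody

    # edge weights = co-occurrence counts between notes (invented)
    network = {
        "C4": {"E4": 5, "G4": 3, "D4": 2},
        "D4": {"C4": 3, "E4": 4},
        "E4": {"G4": 4, "C4": 2, "D4": 1},
        "G4": {"C4": 6, "E4": 2},
    }
    print(compose(network, "C4", 16))
    ```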

  19. Comparison of Efficacy of Eye Movement, Desensitization and Reprocessing and Cognitive Behavioral Therapy Therapeutic Methods for Reducing Anxiety and Depression of Iranian Combatant Afflicted by Post Traumatic Stress Disorder

    NASA Astrophysics Data System (ADS)

    Narimani, M.; Sadeghieh Ahari, S.; Rajabi, S.

    This research aims to determine the efficacy of two therapeutic methods, Eye Movement Desensitization and Reprocessing (EMDR) and Cognitive Behavioral Therapy (CBT), and to compare them for the reduction of anxiety and depression in Iranian combatants afflicted with Post Traumatic Stress Disorder (PTSD) after the imposed war. The statistical population of the current study includes combatants afflicted with PTSD who were hospitalized in the Isar Hospital of Ardabil province or resided in Ardabil. These persons were selected through simple random sampling and were randomly assigned to three groups. The method was an extended test method, and the study design was a multi-group test-retest design. The instrument used was the Hospital Anxiety and Depression Scale. This survey showed that applying EMDR and CBT caused a significant reduction in anxiety and depression.

  20. Sampling procedures for inventory of commercial volume tree species in Amazon Forest.

    PubMed

    Netto, Sylvio P; Pelissari, Allan L; Cysneiros, Vinicius C; Bonazza, Marcelo; Sanquetta, Carlos R

    2017-01-01

    The spatial distribution of tropical tree species can affect the consistency of the estimators in commercial forest inventories; therefore, appropriate sampling procedures are required to survey species with different spatial patterns in the Amazon Forest. The present study aims to evaluate the conventional sampling procedures and to introduce adaptive cluster sampling for volumetric inventories of Amazonian tree species, considering the hypotheses that density, spatial distribution, and zero-plots affect the consistency of the estimators, and that adaptive cluster sampling allows more accurate volumetric estimates to be obtained. We use data from a census carried out in Jamari National Forest, Brazil, where trees with diameters equal to or greater than 40 cm were measured in 1,355 plots. Species with different spatial patterns were selected and sampled with simple random sampling, systematic sampling, linear cluster sampling, and adaptive cluster sampling, whereby the accuracy of the volumetric estimation and the presence of zero-plots were evaluated. The sampling procedures applied to these species were affected by the low density of trees and the large number of zero-plots; the adaptive clusters allowed the sampling effort to be concentrated in plots with trees and thus gathered more representative samples for estimating the commercial volume.

  1. Combining structured and unstructured data to identify a cohort of ICU patients who received dialysis

    PubMed Central

    Abhyankar, Swapna; Demner-Fushman, Dina; Callaghan, Fiona M; McDonald, Clement J

    2014-01-01

    Objective To develop a generalizable method for identifying patient cohorts from electronic health record (EHR) data—in this case, patients having dialysis—that uses simple information retrieval (IR) tools. Methods We used the coded data and clinical notes from the 24 506 adult patients in the Multiparameter Intelligent Monitoring in Intensive Care database to identify patients who had dialysis. We used SQL queries to search the procedure, diagnosis, and coded nursing observations tables based on ICD-9 and local codes. We used a domain-specific search engine to find clinical notes containing terms related to dialysis. We manually validated the available records for a 10% random sample of patients who potentially had dialysis and a random sample of 200 patients who were not identified as having dialysis based on any of the sources. Results We identified 1844 patients that potentially had dialysis: 1481 from the three coded sources and 1624 from the clinical notes. Precision for identifying dialysis patients based on available data was estimated to be 78.4% (95% CI 71.9% to 84.2%) and recall was 100% (95% CI 86% to 100%). Conclusions Combining structured EHR data with information from clinical notes using simple queries increases the utility of both types of data for cohort identification. Patients identified by more than one source are more likely to meet the inclusion criteria; however, including patients found in any of the sources increases recall. This method is attractive because it is available to researchers with access to EHR data and off-the-shelf IR tools. PMID:24384230

  2. Maximum likelihood estimation of correction for dilution bias in simple linear regression using replicates from subjects with extreme first measurements.

    PubMed

    Berglund, Lars; Garmo, Hans; Lindbäck, Johan; Svärdsudd, Kurt; Zethelius, Björn

    2008-09-30

    The least-squares estimator of the slope in a simple linear regression model is biased towards zero when the predictor is measured with random error. A corrected slope may be estimated by adding data from a reliability study, which comprises a subset of subjects from the main study. The precision of this corrected slope depends on the design of the reliability study and estimator choice. Previous work has assumed that the reliability study constitutes a random sample from the main study. A more efficient design is to use subjects with extreme values on their first measurement. Previously, we published a variance formula for the corrected slope, when the correction factor is the slope in the regression of the second measurement on the first. In this paper we show that both designs improve by maximum likelihood estimation (MLE). The precision gain is explained by the inclusion of data from all subjects for estimation of the predictor's variance and by the use of the second measurement for estimation of the covariance between response and predictor. The gain of MLE enhances with stronger true relationship between response and predictor and with lower precision in the predictor measurements. We present a real data example on the relationship between fasting insulin, a surrogate marker, and true insulin sensitivity measured by a gold-standard euglycaemic insulin clamp, and simulations, where the behavior of profile-likelihood-based confidence intervals is examined. MLE was shown to be a robust estimator for non-normal distributions and efficient for small sample situations. Copyright (c) 2008 John Wiley & Sons, Ltd.
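
    A minimal simulation sketch of the setting described above: the naive slope is attenuated by the reliability ratio, and dividing by the slope of the second measurement on the first (the correction factor named in the abstract) recovers the true slope. This illustrates the simple replicate-based correction, not the paper's maximum likelihood estimator; all values are illustrative.

    ```python
    # Regression dilution and its replicate-based correction.
    import random

    rng = random.Random(0)
    n, true_slope, sd_x, sd_err = 5000, 2.0, 1.0, 0.8
    x_true = [rng.gauss(0, sd_x) for _ in range(n)]
    x1 = [x + rng.gauss(0, sd_err) for x in x_true]   # first measurement
    x2 = [x + rng.gauss(0, sd_err) for x in x_true]   # replicate measurement
    y = [true_slope * x + rng.gauss(0, 1) for x in x_true]

    def slope(u, v):
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        sxy = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        sxx = sum((a - mu) ** 2 for a in u)
        return sxy / sxx

    naive = slope(x1, y)
    reliability = slope(x1, x2)   # slope of replicate on first measurement
    print(naive, naive / reliability)   # attenuated vs corrected slope (~2.0)
    ```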

  3. Theory and applications of a deterministic approximation to the coalescent model

    PubMed Central

    Jewett, Ethan M.; Rosenberg, Noah A.

    2014-01-01

    Under the coalescent model, the random number n_t of lineages ancestral to a sample is nearly deterministic as a function of time when n_t is moderate to large in value, and it is well approximated by its expectation E[n_t]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[n_t] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation n_t ≈ E[n_t] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[n_t] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation n_t ≈ E[n_t] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
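
    Under the standard coalescent, n lineages coalesce at rate n(n-1)/2 in coalescent time units, so E[n_t] is well approximated by the ODE dn/dt = -n(n-1)/2, which integrates to the closed form used below. A minimal sketch comparing that deterministic curve with Monte Carlo averages for a single constant-size population; this particular formula is one simple choice of approximation, used here for illustration.

    ```python
    # Deterministic approximation of the number of ancestral lineages n_t
    # versus a Monte Carlo estimate of E[n_t] under the coalescent.
    import math, random

    def n_deterministic(n0, t):
        # solution of dn/dt = -n(n-1)/2 with n(0) = n0
        return 1.0 / (1.0 - (1.0 - 1.0 / n0) * math.exp(-t / 2.0))

    def n_simulated(n0, t, reps=20_000, seed=0):
        rng = random.Random(seed)
        total = 0
        for _ in range(reps):
            n, elapsed = n0, 0.0
            while n > 1:
                # waiting time to next coalescence has rate n(n-1)/2
                elapsed += rng.expovariate(n * (n - 1) / 2.0)
                if elapsed > t:
                    break
                n -= 1
            total += n
        return total / reps

    for t in (0.1, 0.5, 1.0, 2.0):
        print(t, round(n_deterministic(20, t), 3), n_simulated(20, t))
    ```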

  4. Representation of limb kinematics in Purkinje cell simple spike discharge is conserved across multiple tasks.

    PubMed

    Hewitt, Angela L; Popa, Laurentiu S; Pasalar, Siavash; Hendrix, Claudia M; Ebner, Timothy J

    2011-11-01

    Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of R^2_adj), followed by position (28 ± 24% of R^2_adj) and speed (11 ± 19% of R^2_adj). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R^2_adj values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics.

  5. Interplay of Determinism and Randomness: From Irreversibility to Chaos, Fractals, and Stochasticity

    NASA Astrophysics Data System (ADS)

    Tsonis, A.

    2017-12-01

    We will start our discussion of randomness by looking exclusively at our formal mathematical system to show that even in this pure and strictly logical system one cannot do away with randomness. By employing simple mathematical models, we will identify the three possible sources of randomness: randomness due to inability to find the rules (irreversibility), randomness due to inability to have infinite power (chaos), and randomness due to stochastic processes. Subsequently we will move from the mathematical system to our physical world to show that randomness, through the quantum mechanical character of small scales, through chaos, and because of the second law of thermodynamics, is an intrinsic property of nature as well. We will then argue that the randomness in the physical world is consistent with the three sources of randomness suggested by the study of simple mathematical systems. Many examples ranging from purely mathematical to natural processes will be presented, which clearly demonstrate how the combination of rules and randomness produces the world we live in. Finally, the principle of least effort or the principle of minimum energy consumption will be suggested as the underlying principle behind this symbiosis between determinism and randomness.

  6. Fossils out of sequence: Computer simulations and strategies for dealing with stratigraphic disorder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cutler, A.H.; Flessa, K.W.

    Microstratigraphic resolution is limited by vertical mixing and reworking of fossils. Stratigraphic disorder is the degree to which fossils within a stratigraphic sequence are not in proper chronological order. Stratigraphic disorder arises through in situ vertical mixing of fossils and reworking of older fossils into younger deposits. The authors simulated the effects of mixing and reworking by simple computer models, and measured stratigraphic disorder using rank correlation between age and stratigraphic position (Spearman and Kendall coefficients). Mixing was simulated by randomly transposing pairs of adjacent fossils in a sequence. Reworking was simulated by randomly inserting older fossils into a younger sequence. Mixing is an inefficient means of producing disorder; after 500 mixing steps stratigraphic order is still significant at the 99% to 95% level, depending on the coefficient used. Reworking disorders sequences very efficiently: significant order begins to be lost when reworked shells make up 35% of the sequence. Thus a sequence can be dominated by undisturbed, autochthonous shells and still be disordered. The effects of mixing-produced disorder can be minimized by increasing sample size at each horizon. Increased spacing between samples is of limited utility in dealing with disordered sequences: while widely separated samples are more likely to be stratigraphically ordered, the smaller number of samples makes the detection of trends problematic.
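
    A minimal re-implementation sketch of the two simulations described above, with disorder measured by the Spearman rank correlation between stratigraphic position and age; the sequence length and parameter values are illustrative.

    ```python
    # Mixing (adjacent transpositions) vs reworking (inserting older fossils),
    # with disorder measured by Spearman rank correlation.
    import random
    from scipy.stats import spearmanr

    rng = random.Random(1)

    def mix(ages, steps):
        ages = ages[:]
        for _ in range(steps):              # transpose a random adjacent pair
            i = rng.randrange(len(ages) - 1)
            ages[i], ages[i + 1] = ages[i + 1], ages[i]
        return ages

    def rework(ages, n_reworked, max_extra_age):
        ages = ages[:]
        for _ in range(n_reworked):         # drop an older shell in at random
            pos = rng.randrange(len(ages) + 1)
            ages.insert(pos, max(ages) + rng.uniform(0, max_extra_age))
        return ages

    ordered = list(range(100, 0, -1))       # oldest fossils at the bottom
    for seq, label in ((mix(ordered, 500), "mixed x500"),
                       (rework(ordered, 55, 50.0), "~35% reworked")):
        rho, p = spearmanr(range(len(seq)), seq)
        print(label, round(rho, 3), round(p, 4))
    ```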

  7. Quantity and quality of information, education and communication during antenatal visit at private and public sector hospitals of Bahawalpur, Pakistan.

    PubMed

    Mahar, Benazeer; Kumar, Ramesh; Rizvi, Narjis; Bahalkani, Habib Akhtar; Haq, Mahboobul; Soomro, Jamila

    2012-01-01

    Information, education and communication (IEC) by the health care provider to the pregnant woman during the antenatal visit are crucial for a healthier pregnancy outcome. This study analysed the quality and quantity of antenatal visits at a private and a public hospital of Bahawalpur, Pakistan. An exit interview was conducted with 216 pregnant women using a validated, reliable and pre-tested adapted questionnaire. The first sample was selected by simple random sampling; for the rest of the sample, systematic random sampling was adopted by selecting every 7th woman for interview. Ethical considerations were observed. The average communication time between a pregnant woman and her healthcare provider was 3 minutes in the public and 8 minutes in the private hospital. IEC mainly focused on diet and nutrition in the private (86%) and public (53%) hospitals; advice on family planning after delivery was discussed with 13% versus 7% of women in the public and private settings. None of the respondents at either facility received advice or counselling on breastfeeding and neonatal care. Birth preparedness components were discussed with women in the public and private hospitals respectively. In both settings, antenatal clients did not receive information, education and communication according to World Health Organization guidelines. The quality and quantity of IEC during antenatal care were found to be very poor in both public and private sector hospitals of urban Pakistan.

  8. A Simple Probabilistic Combat Model

    DTIC Science & Technology

    2016-06-13

    The Lanchester combat model is a simple way to assess the effects of quantity and quality...case model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons...since the initial condition is very close to the break even line. What is more interesting is that the probability density tends to concentrate at
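
    The fragment references the Lanchester combat model; as context, a minimal sketch of the classic deterministic "square law" attrition equations dR/dt = -b*B and dB/dt = -r*R follows, with illustrative coefficients chosen near the break-even line the fragment mentions. The report's probabilistic extension is not reconstructed here.

    ```python
    # Deterministic Lanchester square-law attrition, Euler-integrated.
    def lanchester(R, B, r=0.8, b=1.0, dt=0.01):
        """Return (time, red, blue) history until one side is annihilated."""
        history = [(0.0, R, B)]
        t = 0.0
        while R > 0 and B > 0:
            R, B = R - b * B * dt, B - r * R * dt   # simultaneous update
            t += dt
            history.append((t, max(R, 0.0), max(B, 0.0)))
        return history

    # r*R^2 = 8000 vs b*B^2 = 8100: nearly a break-even engagement
    t, R, B = lanchester(R=100.0, B=90.0)[-1]
    print(f"t={t:.2f}  red={R:.1f}  blue={B:.1f}")
    ```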

  9. Complex Versus Simple Ankle Movement Training in Stroke Using Telerehabilitation: A Randomized Controlled Trial

    PubMed Central

    Deng, Huiqiong; Durfee, William K.; Nuckley, David J.; Rheude, Brandon S.; Severson, Amy E.; Skluzacek, Katie M.; Spindler, Kristen K.; Davey, Cynthia S.

    2012-01-01

    Background Telerehabilitation allows rehabilitative training to continue remotely after discharge from acute care and can include complex tasks known to create rich conditions for neural change. Objectives The purposes of this study were: (1) to explore the feasibility of using telerehabilitation to improve ankle dorsiflexion during the swing phase of gait in people with stroke and (2) to compare complex versus simple movements of the ankle in promoting behavioral change and brain reorganization. Design This study was a pilot randomized controlled trial. Setting Training was done in the participant's home. Testing was done in separate research labs involving functional magnetic resonance imaging (fMRI) and multi-camera gait analysis. Patients Sixteen participants with chronic stroke and impaired ankle dorsiflexion were assigned randomly to receive 4 weeks of telerehabilitation of the paretic ankle. Intervention Participants received either computerized complex movement training (track group) or simple movement training (move group). Measurements Behavioral changes were measured with the 10-m walk test and gait analysis using a motion capture system. Brain reorganization was measured with ankle tracking during fMRI. Results Dorsiflexion during gait was significantly larger in the track group compared with the move group. For fMRI, although the volume, percent volume, and intensity of cortical activation failed to show significant changes, the frequency count of the number of participants showing an increase versus a decrease in these values from pretest to posttest measurements was significantly different between the 2 groups, with the track group decreasing and the move group increasing. Limitations Limitations of this study were that no follow-up test was conducted and that a small sample size was used. Conclusions The results suggest that telerehabilitation, emphasizing complex task training with the paretic limb, is feasible and can be effective in promoting further dorsiflexion in people with chronic stroke. PMID:22095209

  10. A comparative study of Conroy and Monte Carlo methods applied to multiple quadratures and multiple scattering

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Fluellen, A.

    1978-01-01

    An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
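
    A minimal sketch contrasting random (Monte Carlo) sample points with a systematic, equidistributed point set for a two-variable quadrature; a simple rank-1 Fibonacci lattice is used here as a stand-in for Conroy's closed symmetric patterns, whose exact construction the abstract does not give.

    ```python
    # Random vs systematic sample points for a 2-D quadrature.
    import math, random

    def f(x, y):                      # smooth test integrand on the unit square
        return math.exp(-(x * x + y * y))

    def monte_carlo(n, seed=0):       # randomly distributed points
        rng = random.Random(seed)
        return sum(f(rng.random(), rng.random()) for _ in range(n)) / n

    def lattice(n, g=233):            # systematic rank-1 Fibonacci lattice
        return sum(f(i / n, ((i * g) % n) / n) for i in range(n)) / n

    exact = (math.sqrt(math.pi) / 2 * math.erf(1.0)) ** 2   # separable integral
    n = 377                           # consecutive Fibonacci numbers: 233, 377
    print("exact      ", exact)
    print("Monte Carlo", monte_carlo(n))
    print("lattice    ", lattice(n))
    ```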

  11. A random walk rule for phase I clinical trials.

    PubMed

    Durham, S D; Flournoy, N; Rosenberger, W F

    1997-06-01

    We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
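
    A minimal sketch of one member of this family, a biased-coin random walk: after a toxicity the dose steps down, and after a non-toxicity it steps up with probability G/(1-G), which centers assignments around the dose whose toxicity probability is the target quantile G. The dose-toxicity curve is hypothetical, and this illustrates the design family rather than the paper's specific rule.

    ```python
    # Biased-coin random walk dose allocation targeting quantile G = 0.25.
    import random

    def biased_coin_walk(tox_probs, target=0.25, n_patients=200, seed=3):
        rng = random.Random(seed)
        p_up = target / (1.0 - target)
        level, assignments = 0, []
        for _ in range(n_patients):
            assignments.append(level)
            toxic = rng.random() < tox_probs[level]
            if toxic:                                      # step down
                level = max(0, level - 1)
            elif rng.random() < p_up:                      # step up sometimes
                level = min(len(tox_probs) - 1, level + 1)
        return assignments

    tox_probs = [0.05, 0.15, 0.25, 0.45, 0.65]   # hypothetical toxicity by dose
    walk = biased_coin_walk(tox_probs)
    for lvl in range(len(tox_probs)):
        print(lvl, walk.count(lvl))   # assignments cluster near level 2 (0.25)
    ```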

  12. Connections between survey calibration estimators and semiparametric models for incomplete data

    PubMed Central

    Lumley, Thomas; Shaw, Pamela A.; Dai, James Y.

    2012-01-01

    Survey calibration (or generalized raking) estimators are a standard approach to the use of auxiliary information in survey sampling, improving on the simple Horvitz–Thompson estimator. In this paper we relate the survey calibration estimators to the semiparametric incomplete-data estimators of Robins and coworkers, and to adjustment for baseline variables in a randomized trial. The development based on calibration estimators explains the ‘estimated weights’ paradox and provides useful heuristics for constructing practical estimators. We present some examples of using calibration to gain precision without making additional modelling assumptions in a variety of regression models. PMID:23833390
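
    A minimal linear-calibration (GREG-type) sketch of the weight adjustment described above: design weights d_i become w_i = d_i(1 + x_i'lam), with lam chosen so the weighted auxiliary totals exactly match known population totals; the data are hypothetical.

    ```python
    # Linear calibration of survey weights to known auxiliary totals.
    import numpy as np

    def calibrate(d, X, totals):
        """Return weights w with w_i = d_i*(1 + x_i @ lam) and X'w = totals."""
        d, X = np.asarray(d, float), np.asarray(X, float)
        totals = np.asarray(totals, float)
        A = (X * d[:, None]).T @ X              # sum_i d_i x_i x_i'
        lam = np.linalg.solve(A, totals - d @ X)
        return d * (1.0 + X @ lam)

    rng = np.random.default_rng(0)
    n, N = 200, 10_000
    x = rng.normal(50, 10, n)                   # auxiliary, known pop mean 50
    y = 2.0 * x + rng.normal(0, 5, n)           # study variable, sample only
    d = np.full(n, N / n)                       # design weights from SRS
    X = np.column_stack([np.ones(n), x])
    w = calibrate(d, X, totals=[N, 50.0 * N])   # match pop size and x-total
    print(w @ y / N)                            # calibrated estimate of mean y
    print(w @ X - np.array([N, 50.0 * N]))      # constraints hold (~0)
    ```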

  13. The Masked Sample Covariance Estimator: An Analysis via the Matrix Laplace Transform

    DTIC Science & Technology

    2012-02-01

    Variables: Suppose that we divide the stock market into disjoint sectors, and we would like to study the interactions among the monthly returns for...vector to conform with the market sectors, and we estimate only the entries in the diagonal blocks. Spatial or Temporal Localization: A simple random model...

  14. Fitting parametric random effects models in very large data sets with application to VHA national data

    PubMed Central

    2012-01-01

    Background With the current focus on personalized medicine, patient/subject level inference is often of key interest in translational research. As a result, random effects models (REM) are becoming popular for patient level inference. However, for very large data sets that are characterized by large sample size, it can be difficult to fit REM using commonly available statistical software such as SAS, since they require inordinate amounts of computer time and memory allocations beyond what are available, preventing model convergence. For example, in a retrospective cohort study of over 800,000 Veterans with type 2 diabetes with longitudinal data over 5 years, fitting REM via generalized linear mixed modeling using currently available standard procedures in SAS (e.g. PROC GLIMMIX) was very difficult, and the same problems exist in Stata's gllamm and R's lme packages. Thus, this study proposes and assesses the performance of a meta regression approach and makes comparisons with methods based on sampling of the full data. Data We use both simulated and real data from a national cohort of Veterans with type 2 diabetes (n=890,394), which was created by linking multiple patient and administrative files, resulting in a cohort with longitudinal data collected over 5 years. Methods and results The outcome of interest was mean annual HbA1c measured over a 5-year period. Using this outcome, we compared parameter estimates from the proposed random effects meta regression (REMR) with estimates based on simple random sampling and VISN (Veterans Integrated Service Networks) based stratified sampling of the full data. Our results indicate that REMR provides parameter estimates that are less likely to be biased, with tighter confidence intervals, when the VISN level estimates are homogeneous. Conclusion When the interest is to fit REM in repeated measures data with very large sample size, REMR can be used as a good alternative. It leads to reasonable inference for both Gaussian and non-Gaussian responses if parameter estimates are homogeneous across VISNs. PMID:23095325
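
    A minimal sketch of the meta-analytic combination step behind an approach like REMR: given per-site (e.g., per-VISN) coefficient estimates and standard errors, estimate the between-site variance by DerSimonian-Laird and combine with inverse-variance weights. The site-level numbers are hypothetical, and this simplifies the paper's full procedure.

    ```python
    # Random-effects (DerSimonian-Laird) combination of site-level estimates.
    def random_effects_combine(estimates, std_errs):
        k = len(estimates)
        w = [1.0 / se**2 for se in std_errs]
        fixed = sum(wi * b for wi, b in zip(w, estimates)) / sum(w)
        # DerSimonian-Laird between-site variance tau^2
        Q = sum(wi * (b - fixed) ** 2 for wi, b in zip(w, estimates))
        c = sum(w) - sum(wi**2 for wi in w) / sum(w)
        tau2 = max(0.0, (Q - (k - 1)) / c)
        # re-weight including tau^2 and combine
        w_star = [1.0 / (se**2 + tau2) for se in std_errs]
        combined = sum(wi * b for wi, b in zip(w_star, estimates)) / sum(w_star)
        se_combined = (1.0 / sum(w_star)) ** 0.5
        return combined, se_combined, tau2

    betas = [0.42, 0.55, 0.31, 0.47, 0.50]   # hypothetical per-VISN coefficients
    ses   = [0.05, 0.08, 0.06, 0.04, 0.07]
    print(random_effects_combine(betas, ses))
    ```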

  15. Empirical likelihood-based tests for stochastic ordering

    PubMed Central

    BARMI, HAMMOU EL; MCKEAGUE, IAN W.

    2013-01-01

    This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of rule of Roman Emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142

  16. Technical Report 1205: A Simple Probabilistic Combat Model

    DTIC Science & Technology

    2016-07-08

    The Lanchester combat model is a simple way to assess the effects of quantity and quality...model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons assigned...the initial condition is very close to the break even line. What is more interesting is that the probability density tends to concentrate at either a

  17. Rapid Quantification of Mutant Fitness in Diverse Bacteria by Sequencing Randomly Bar-Coded Transposons

    PubMed Central

    Wetmore, Kelly M.; Price, Morgan N.; Waters, Robert J.; Lamson, Jacob S.; He, Jennifer; Hoover, Cindi A.; Blow, Matthew J.; Bristow, James; Butland, Gareth

    2015-01-01

    ABSTRACT Transposon mutagenesis with next-generation sequencing (TnSeq) is a powerful approach to annotate gene function in bacteria, but existing protocols for TnSeq require laborious preparation of every sample before sequencing. Thus, the existing protocols are not amenable to the throughput necessary to identify phenotypes and functions for the majority of genes in diverse bacteria. Here, we present a method, random bar code transposon-site sequencing (RB-TnSeq), which increases the throughput of mutant fitness profiling by incorporating random DNA bar codes into Tn5 and mariner transposons and by using bar code sequencing (BarSeq) to assay mutant fitness. RB-TnSeq can be used with any transposon, and TnSeq is performed once per organism instead of once per sample. Each BarSeq assay requires only a simple PCR, and 48 to 96 samples can be sequenced on one lane of an Illumina HiSeq system. We demonstrate the reproducibility and biological significance of RB-TnSeq with Escherichia coli, Phaeobacter inhibens, Pseudomonas stutzeri, Shewanella amazonensis, and Shewanella oneidensis. To demonstrate the increased throughput of RB-TnSeq, we performed 387 successful genome-wide mutant fitness assays representing 130 different bacterium-carbon source combinations and identified 5,196 genes with significant phenotypes across the five bacteria. In P. inhibens, we used our mutant fitness data to identify genes important for the utilization of diverse carbon substrates, including a putative d-mannose isomerase that is required for mannitol catabolism. RB-TnSeq will enable the cost-effective functional annotation of diverse bacteria using mutant fitness profiling. PMID:25968644

  18. Statistical inference for the additive hazards model under outcome-dependent sampling.

    PubMed

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo

    2015-09-01

    Cost-effective study design and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters under the proposed design and derive the asymptotic properties of the proposed estimator. We also provide suggestions for using the proposed method by evaluating its relative efficiency against the simple random sampling design, and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and that the proposed estimator is more efficient than other estimators. We apply our method to a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to examine the cancer risk associated with radon exposure.

  19. Statistical inference for the additive hazards model under outcome-dependent sampling

    PubMed Central

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P.; Zhou, Haibo

    2015-01-01

    Cost-effective study design and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters under the proposed design and derive the asymptotic properties of the proposed estimator. We also provide suggestions for using the proposed method by evaluating its relative efficiency against the simple random sampling design, and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and that the proposed estimator is more efficient than other estimators. We apply our method to a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to examine the cancer risk associated with radon exposure. PMID:26379363

  20. Host-Associated Metagenomics: A Guide to Generating Infectious RNA Viromes

    PubMed Central

    Robert, Catherine; Pascalis, Hervé; Michelle, Caroline; Jardot, Priscilla; Charrel, Rémi; Raoult, Didier; Desnues, Christelle

    2015-01-01

    Background Metagenomic analyses have been widely used in the last decade to describe viral communities in various environments or to identify the etiology of human, animal, and plant pathologies. Here, we present a simple and standardized protocol that allows for the purification and sequencing of RNA viromes from complex biological samples with a substantial reduction of host DNA and RNA contaminants, while preserving the infectivity of viral particles. Principal Findings We evaluated different viral purification steps, random reverse transcriptions and sequence-independent amplifications of a pool of representative RNA viruses. Viruses remained infectious after the purification process. We then validated the protocol by sequencing the RNA virome of human body lice engorged in vitro with artificially contaminated human blood. The full genomes of the most abundant viruses absorbed by the lice during the blood meal were successfully sequenced. Interestingly, random amplifications differed in the genome coverage of segmented RNA viruses. Moreover, the majority of reads were taxonomically identified, and only 7–15% of all reads were classified as “unknown”, depending on the random amplification method. Conclusion The protocol reported here could easily be applied to generate RNA viral metagenomes from complex biological samples of different origins. Our protocol allows further virological characterizations of the described viral communities because it preserves the infectivity of viral particles and allows for the isolation of viruses. PMID:26431175

  1. Sampling methods to the statistical control of the production of blood components.

    PubMed

    Pereira, Paulo; Seghatchian, Jerard; Caldeira, Beatriz; Santos, Paula; Castro, Rosa; Fernandes, Teresa; Xavier, Sandra; de Sousa, Gracinda; de Almeida E Sousa, João Paulo

    2017-12-01

    The control of blood component specifications is a requirement generalized in Europe by the European Commission Directives and in the US by the AABB standards. The use of a statistical process control methodology is recommended in the related literature, including the EDQM guideline. The reliability of this control depends on the sampling. However, a correct sampling methodology seems not to be systematically applied. Commonly, the sampling is intended only to comply with the 1% specification for the produced blood components. Nevertheless, from a purely statistical viewpoint, this model is arguably not a consistent sampling technique. This severely limits the ability to detect abnormal patterns and to assure that production has a negligible probability of yielding nonconforming components. This article discusses what is happening in blood establishments. Three statistical methodologies are proposed: simple random sampling, sampling based on the proportion of a finite population, and sampling based on the inspection level. The empirical results demonstrate that these models are practicable in blood establishments, contributing to the robustness of sampling and of the related statistical process control decisions. Copyright © 2017 Elsevier Ltd. All rights reserved.
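
    Of the three methodologies named above, sampling based on the proportion of a finite population is the easiest to sketch: the classic sample-size formula for estimating a proportion, with a finite population correction for the lot size. The values of p, e, and N below are illustrative, not figures from the article.

      import math

      def sample_size_proportion(N, p=0.01, e=0.01, z=1.96):
          """Units to sample from a lot of N to estimate a nonconforming
          proportion near p within margin e at ~95% confidence."""
          n0 = z ** 2 * p * (1 - p) / e ** 2          # infinite-population size
          return math.ceil(n0 / (1 + (n0 - 1) / N))   # finite population correction

      for N in (500, 2000, 10000):
          print(N, sample_size_proportion(N))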

  2. Two-way ANOVA Problems with Simple Numbers.

    ERIC Educational Resources Information Center

    Read, K. L. Q.; Shihab, L. H.

    1998-01-01

    Describes how to construct simple numerical examples in two-way ANOVAs, specifically randomized blocks, balanced two-way layouts, and Latin squares. Indicates that working through simple numerical problems is helpful to students meeting a technique for the first time and should be followed by computer-based analysis of larger, real datasets when…

  3. Predictability of monthly temperature and precipitation using automatic time series forecasting methods

    NASA Astrophysics Data System (ADS)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2018-02-01

    We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.

  4. On the estimation variance for the specific Euler-Poincaré characteristic of random networks.

    PubMed

    Tscheschel, A; Stoyan, D

    2003-07-01

    The specific Euler number is an important topological characteristic in many applications. It is considered here for the case of random networks, which may appear in microscopy either as primary objects of investigation or as secondary objects describing in an approximate way other structures such as, for example, porous media. For random networks there is a simple and natural estimator of the specific Euler number. For its estimation variance, a simple Poisson approximation is given. It is based on the general exact formula for the estimation variance. In two examples of quite different nature and topology application of the formulas is demonstrated.

  5. Transcription, intercellular variability and correlated random walk.

    PubMed

    Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar

    2008-11-01

    We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is due to switching transcription on and off by a random process. Under the condition that the transition rates between on and off are constant we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.
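
    A hedged simulation sketch of the model described above: the promoter toggles between off and on with constant rates k_on and k_off, mRNA is synthesized at rate s while on and degraded at rate d, and between switches the mRNA level follows the deterministic relaxation ODE, so each cell can be advanced exactly. All rate constants are illustrative; rescaled by its maximum s/d, the sampled mRNA level shows the Beta-like shape the abstract derives.

      import numpy as np

      rng = np.random.default_rng(1)
      k_on, k_off, s, d = 0.5, 0.5, 10.0, 1.0   # hypothetical rates
      T, n_cells = 50.0, 5000                   # sampling time, number of cells

      def simulate_cell():
          t, m = 0.0, 0.0
          on = rng.random() < k_on / (k_on + k_off)   # stationary start state
          while t < T:
              dwell = rng.exponential(1.0 / (k_off if on else k_on))
              dt = min(dwell, T - t)
              target = s / d if on else 0.0     # level the ODE relaxes toward
              m = target + (m - target) * np.exp(-d * dt)
              t += dt
              on = not on
          return m

      levels = np.array([simulate_cell() for _ in range(n_cells)])
      print("mean mRNA:", levels.mean(), "scale (s/d):", s / d)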

  6. Enhancing local health department disaster response capacity with rapid community needs assessments: validation of a computerized program for binary attribute cluster sampling.

    PubMed

    Groenewold, Matthew R

    2006-01-01

    Local health departments are among the first agencies to respond to disasters or other mass emergencies. However, they often lack the ability to handle large-scale events. Plans including locally developed and deployed tools may enhance local response. Simplified cluster sampling methods can be useful in assessing community needs after a sudden-onset, short duration event. Using an adaptation of the methodology used by the World Health Organization Expanded Programme on Immunization (EPI), a Microsoft Access-based application for two-stage cluster sampling of residential addresses in Louisville/Jefferson County Metro, Kentucky was developed. The sampling frame was derived from geographically referenced data on residential addresses and political districts available through the Louisville/Jefferson County Information Consortium (LOJIC). The program randomly selected 30 clusters, defined as election precincts, from within the area of interest, and then, randomly selected 10 residential addresses from each cluster. The program, called the Rapid Assessment Tools Package (RATP), was tested in terms of accuracy and precision using data on a dichotomous characteristic of residential addresses available from the local tax assessor database. A series of 30 samples were produced and analyzed with respect to their precision and accuracy in estimating the prevalence of the study attribute. Point estimates with 95% confidence intervals were calculated by determining the proportion of the study attribute values in each of the samples and compared with the population proportion. To estimate the design effect, corresponding simple random samples of 300 addresses were taken after each of the 30 cluster samples. The sample proportion fell within +/-10 absolute percentage points of the true proportion in 80% of the samples. In 93.3% of the samples, the point estimate fell within +/-12.5%, and 96.7% fell within +/-15%. All of the point estimates fell within +/-20% of the true proportion. Estimates of the design effect ranged from 0.926 to 1.436 (mean = 1.157, median = 1.170) for the 30 samples. Although prospective evaluation of its performance in field trials or a real emergency is required to confirm its utility, this study suggests that the RATP, a locally designed and deployed tool, may provide population-based estimates of community needs or the extent of event-related consequences that are precise enough to serve as the basis for the initial post-event decisions regarding relief efforts.
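
    A hedged sketch of the validation logic described above: repeatedly draw 30-cluster x 10-address samples and same-size (n = 300) simple random samples from a synthetic population with a binary attribute, then take the design effect as the ratio of the two sampling variances. The population structure and prevalences below are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      n_clusters, houses_per_cluster = 200, 500
      # Cluster-level prevalences vary, inducing within-cluster correlation.
      prevalence = rng.beta(4, 8, size=n_clusters)
      population = [rng.random(houses_per_cluster) < p for p in prevalence]

      def cluster_sample():                     # 30 clusters x 10 addresses
          clusters = rng.choice(n_clusters, size=30, replace=False)
          picks = [rng.choice(population[c], size=10, replace=False)
                   for c in clusters]
          return np.mean(np.concatenate(picks))

      flat = np.concatenate(population)
      srs = [np.mean(rng.choice(flat, size=300, replace=False)) for _ in range(2000)]
      cl = [cluster_sample() for _ in range(2000)]
      print("design effect ~", np.var(cl) / np.var(srs))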

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.

    Betweenness centrality is a graph statistic used to find vertices that are participants in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems, and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations on a subset of randomly selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
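
    A hedged illustration of the baseline strategy discussed above, using NetworkX, whose betweenness_centrality function accepts a k argument that estimates centrality from shortest paths rooted at k randomly chosen seed vertices. The alternate seed-selection strategies from the report are not reproduced here, and the graph is a generic scale-free stand-in.

      import networkx as nx

      G = nx.barabasi_albert_graph(n=2000, m=3, seed=42)    # scale-free test graph

      exact = nx.betweenness_centrality(G)                  # all sources: O(|V||E|)
      approx = nx.betweenness_centrality(G, k=100, seed=7)  # 100 random seed vertices

      top = max(exact, key=exact.get)
      print("top vertex:", top,
            "exact:", round(exact[top], 4),
            "sampled:", round(approx[top], 4))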

  8. Programmable disorder in random DNA tilings

    NASA Astrophysics Data System (ADS)

    Tikhomirov, Grigory; Petersen, Philip; Qian, Lulu

    2017-03-01

    Scaling up the complexity and diversity of synthetic molecular structures will require strategies that exploit the inherent stochasticity of molecular systems in a controlled fashion. Here we demonstrate a framework for programming random DNA tilings and show how to control the properties of global patterns through simple, local rules. We constructed three general forms of planar network—random loops, mazes and trees—on the surface of self-assembled DNA origami arrays on the micrometre scale with nanometre resolution. Using simple molecular building blocks and robust experimental conditions, we demonstrate control of a wide range of properties of the random networks, including the branching rules, the growth directions, the proximity between adjacent networks and the size distribution. Much as combinatorial approaches for generating random one-dimensional chains of polymers have been used to revolutionize chemical synthesis and the selection of functional nucleic acids, our strategy extends these principles to random two-dimensional networks of molecules and creates new opportunities for fabricating more complex molecular devices that are organized by DNA nanostructures.

  9. The triglyceride composition of 17 seed fats rich in octanoic, decanoic, or lauric acid.

    PubMed

    Litchfield, C; Miller, E; Harlow, R D; Reiser, R

    1967-07-01

    Seed fats of eight species of Lauraceae (laurel family), six species of Cuphea (Lythraceae family), and three species of Ulmaceae (elm family) were extracted, and the triglycerides were isolated by preparative thin-layer chromatography. GLC of the triglycerides on a silicone column resolved 10 to 18 peaks with a 22 to 58 carbon number range for each fat. These carbon number distributions yielded considerable information about the triglyceride compositions of the fats. The most interesting finding was with Laurus nobilis seed fat, which contained 58.4% lauric acid and 29.2-29.8% trilaurin. A maximum of 19.9% trilaurin would be predicted by a 1,2,3-random, a 1,3-random-2-random, or a 1-random-2-random-3-random distribution of the lauric acid. This indicates a specificity for the biosynthesis of a simple triglyceride by Laurus nobilis seed enzymes. Cuphea lanceolata seed fat also contained more simple triglyceride (tridecanoin) than would be predicted by the fatty acid distribution theories.
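
    The 19.9% maximum follows from simple arithmetic under the random-distribution hypotheses: with lauric acid at a mole fraction of 0.584, independent occupancy of the three glycerol positions predicts a trilaurin fraction of at most 0.584^3 ≈ 0.199, so the observed 29.2-29.8% trilaurin is roughly 1.5 times what any of the random theories allows.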

  10. Simulation methods to estimate design power: an overview for applied research.

    PubMed

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
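
    A hedged Python sketch of the simulation loop described above, for the simplest case of a two-arm, individually randomized trial with a continuous outcome: simulate many trials under the assumed effect size and count how often the test rejects. The effect size, SD, and sample size are illustrative (the article itself supplies R and Stata code).

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)

      def simulated_power(n_per_arm=50, effect=0.5, sd=1.0, alpha=0.05, sims=2000):
          hits = 0
          for _ in range(sims):
              control = rng.normal(0.0, sd, n_per_arm)
              treated = rng.normal(effect, sd, n_per_arm)
              _, p = stats.ttest_ind(treated, control)
              hits += p < alpha
          return hits / sims

      # The conventional closed-form power for these inputs is about 0.70,
      # so the simulated value should land near that.
      print("estimated power:", simulated_power())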

  11. Simulation methods to estimate design power: an overview for applied research

    PubMed Central

    2011-01-01

    Background Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. Methods We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. Results We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Conclusions Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. PMID:21689447

  12. Methods and analysis of realizing randomized grouping.

    PubMed

    Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi

    2011-07-01

    Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: completely, stratified and dynamically randomized grouping. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization and the realization of random sampling and grouping by SAS software.
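
    The article implements these steps in SAS; below is a hedged Python equivalent of completely randomized grouping: shuffle the subject IDs once, then cut the shuffled list into equal groups. The IDs and group count are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      subjects = np.arange(1, 61)           # 60 hypothetical subject IDs
      shuffled = rng.permutation(subjects)  # a single random shuffle
      groups = np.array_split(shuffled, 3)  # three equal treatment groups

      for label, members in zip("ABC", groups):
          print(f"group {label}: {sorted(members.tolist())}")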

  13. Measurement of thermal conductivity and thermal diffusivity using a thermoelectric module

    NASA Astrophysics Data System (ADS)

    Beltrán-Pitarch, Braulio; Márquez-García, Lourdes; Min, Gao; García-Cañadas, Jorge

    2017-04-01

    A proof of concept of using a thermoelectric module to measure both the thermal conductivity and thermal diffusivity of bulk disc samples at room temperature is demonstrated. The method involves calculating the integral area of an impedance spectrum, which empirically correlates with the thermal properties of the sample through an exponential relationship. This relationship was obtained employing different reference materials. The impedance spectroscopy measurements are performed in a very simple setup comprising a thermoelectric module, which is soldered at its bottom side to a Cu block (heat sink) and thermally connected to the sample at its top side using thermal grease. Random and systematic errors of the method were calculated for the thermal conductivity (18.6% and 10.9%, respectively) and thermal diffusivity (14.2% and 14.7%, respectively) employing a BCR724 standard reference material. Although the errors are somewhat high, the technique could be useful in its current state for screening purposes or high-throughput measurements. This new method establishes a new application for thermoelectric modules as thermal property sensors. It involves a very simple setup in conjunction with a frequency response analyzer, which provides a low-cost alternative to most of the apparatus currently available on the market. In addition, impedance analyzers are reliable and widely available equipment, which eases the sometimes difficult access to thermal conductivity measurements.

  14. Comparing the Administration of Letrozole and Megestrol Acetate in the Treatment of Women with Simple Endometrial Hyperplasia without Atypia: A Randomized Clinical Trial.

    PubMed

    Moradan, Sanam; Nikkhah, Niaz; Mirmohammadkhanai, Majid

    2017-05-01

    The present study was conducted as a pilot to compare the therapeutic effects and the potential side effects of oral Megestrol acetate and Letrozole in the treatment of simple hyperplasia in perimenopausal women. The participants of this randomized clinical trial consisted of two groups of 25 women aged 44-50 presenting with abnormal uterine bleeding diagnosed with simple endometrial hyperplasia without cytologic atypia, confirmed by transvaginal ultrasonography and biopsy. The first group received 40-mg doses of Megestrol acetate for 2 weeks per month for a total period of 2 months. The second group received 2.5-mg daily doses of Letrozole for a total period of 2 months. Differences in quantitative measurements were analyzed using the independent two-sample t test and the paired t test. To compare the two groups in terms of the distribution of the categorical variables, Pearson's chi-square and Fisher's exact tests were used at the 0.05 significance level in Stata 9.2. Although the intervention led to significant improvements in both groups (P < .001), there was no difference between the groups in terms of accomplishing resolution (P = .74) [seven (28%) patients in the Letrozole group and five (20%) in the Megestrol group], while two patients in the Letrozole group and nine in the Megestrol group suffered from side effects, a significantly lower rate in the Letrozole group (P = .02). Letrozole and Megestrol acetate seem to have similar effects in the treatment of simple endometrial hyperplasia, the only difference being that Letrozole presents fewer side effects than Megestrol acetate in patients with this condition. Abnormal Uterine Bleeding Research Center of Semnan University of Medical Sciences, Semnan, Iran. IRCT2015031011504N5.

  15. Violations of the international code of marketing of breast milk substitutes: prevalence in four countries

    PubMed Central

    Taylor, Anna

    1998-01-01

    Objective: To estimate the prevalence of violations of the international code of marketing of substitutes for breast milk in one city in each of Bangladesh, Poland, South Africa, and Thailand. Design: Multistage random sampling was used to select pregnant women and mothers of infants ⩽6 months old to interview at health facilities. Women were asked whether they had received free samples of substitutes for breast milk (including infant formula designed to meet the nutritional needs of infants from birth to 4 to 6 months of age, follow on formula designed to replace infant formula at the age of 4 to 6 months, and complementary foods for infants aged ⩽6 months), bottles, or teats. The source of the free sample and when it had been given to the women was also determined. 3 health workers were interviewed at each facility to assess whether the facility had received free samples, to determine how they had been used, and to determine whether gifts had been given to health workers by companies that manufactured or distributed breast milk substitutes. Compliance with the marketing code for information given to health workers was evaluated using a checklist. Setting: Health facilities in Dhaka, Bangladesh; Warsaw, Poland; Durban, South Africa; and Bangkok, Thailand. Subjects: 1468 pregnant women, 1582 mothers of infants aged ⩽6 months, and 466 health workers at 165 health facilities. Main outcome measures: Number of free samples received by pregnant women, mothers, and health workers; number of gifts given to health workers; and availability of information that violated the code in health facilities. Results: 97 out of 370 (26%) mothers in Bangkok reported receiving free samples of breast milk substitutes, infant formula, bottles, or teats compared with only 1 out of 385 mothers in Dhaka. Across the four cities from 3 out of 40 (8%) to 20 out of 40 (50%) health facilities had received free samples which were not being used for research or professional evaluation; from 2 out of 123 (2%) to 21 out of 119 (18%) health workers had received gifts from companies involved in the manufacturing or distribution of breast milk substitutes. From 6 out of 40 (15%) to 22 out of 39 (56%) health facilities information that violated the code had been provided by companies and was available to staff. Conclusion: Violations of the code were detected with a simple survey instrument in all of the four countries studied. Governmental and non-governmental agencies should monitor the prevalence of code violations using the simple methodology developed for this study. Key messages A simple multistage random sampling procedure can be used to interview women and health professionals to assess whether violations of the international code of marketing of substitutes for breast milk are occurring 3050 women and 466 health professionals were interviewed at 165 health facilities in Bangladesh, Poland, South Africa, and Thailand 97 out of 370 mothers in Bangkok reported receiving free samples of breast milk substitutes, infant formula, bottles, or teats compared with only 1 out of 385 mothers in Dhaka. In Bangkok health workers reported that 20 out of 40 health facilities had also received free samples. 
Most free samples were distributed by health facilities. In Warsaw, 56% of facilities surveyed were found to have information available for health workers that had been provided by manufacturers or distributors of breast milk substitutes, in contravention of the code; 18% of health workers in Warsaw had received free gifts from manufacturers. PMID:9552947

  16. Species identification and sex determination of the genus Nepenthes (Nepenthaceae).

    PubMed

    Mokkamul, Piya; Chaveerach, Arunrat; Sudmoon, Runglawan; Tanee, Tawatchai

    2007-02-15

    Nepenthes species are well known for their ornamentally attractive pitchers. Species diversity was randomly surveyed in some conservation areas of Thailand, and three species were found, namely N. gracilis Korth., N. mirabilis Druce. and N. smilesii Hemsl. Young plants of unknown species from Chatuchak market were added to the sample set. Thirty-two Inter Simple Sequence Repeat (ISSR) primers were screened, and 13 successful primers were used to produce DNA banding patterns for constructing a dendrogram. The dendrogram is a potentially powerful tool for identifying the unknown species from Chatuchak market, differentiating species populations, separating populations by geographical area, and determining sex. The geographical range of N. mirabilis was resolved into the Southern and Northeastern regions and, finally, subdivided into exact areas by province. Male and female plants of N. gracilis at Phu Wua Wildlife Sanctuary and of N. mirabilis at the Bung Khonglong non-hunting area were determined. Two unknown specimens from Chatuchak market were identified as N. mirabilis, with genetic similarities (S) of 77.2 to 84.7. To find markers that were more sex-specific across all samples studied, 37 Random Amplified Polymorphic DNA (RAPD) primers were investigated; only one RAPD primer gave a high-resolution result, a male-related specific marker at about 750 bp.

  17. Ensembles of Spiking Neurons with Noise Support Optimal Probabilistic Inference in a Dynamically Changing Environment

    PubMed Central

    Legenstein, Robert; Maass, Wolfgang

    2014-01-01

    It has recently been shown that networks of spiking neurons with noise can emulate simple forms of probabilistic inference through “neural sampling”, i.e., by treating spikes as samples from a probability distribution of network states that is encoded in the network. Deficiencies of the existing model are its reliance on single neurons for sampling from each random variable, and the resulting limitation in representing quickly varying probabilistic information. We show that both deficiencies can be overcome by moving to a biologically more realistic encoding of each salient random variable through the stochastic firing activity of an ensemble of neurons. The resulting model demonstrates that networks of spiking neurons with noise can easily track and carry out basic computational operations on rapidly varying probability distributions, such as the odds of getting rewarded for a specific behavior. We demonstrate the viability of this new approach towards neural coding and computation, which makes use of the inherent parallelism of generic neural circuits, by showing that this model can explain experimentally observed firing activity of cortical neurons for a variety of tasks that require rapid temporal integration of sensory information. PMID:25340749

  18. Combined effect of new complete dentures and simple dietary advice on nutritional status in edentulous patients: study protocol for a randomized controlled trial.

    PubMed

    Komagamine, Yuriko; Kanazawa, Manabu; Iwaki, Maiko; Jo, Ayami; Suzuki, Hiroyuki; Amagai, Noriko; Minakuchi, Shunsuke

    2016-11-09

    Individuals who are edentulous have a lower intake of fruit, vegetables, fiber, and protein compared with their dentate counterparts because tooth loss is accompanied by a decrease in ability to chew. Whether or not a combination of prosthetic rehabilitation and simple dietary advice produces improvement in dietary intake among edentulous persons is unclear. We aim to investigate the effect of a simultaneous combination of simple dietary advice delivered by dentists and provision of new complete dentures on dietary intake in edentulous individuals who request new dentures. Through a double-blinded, parallel, randomized controlled trial in which 70 edentate persons who request new complete dentures will be enrolled, eligible study participants will be randomly allocated to either a dietary intervention group receiving dietary advice or to a control group receiving only advice on the care and maintenance of dentures. Outcome measures include daily intake of nutrients and food items, assessed using a brief self-administered diet history questionnaire; antioxidant capacity, determined using blood and urine samples; nutritional status, assessed with the Mini-Nutritional Assessment-Short Form; oral health-related quality of life, assessed with the Japanese version of the Oral Health Impact Profile-EDENT and the Geriatric Oral Health Assessment Index; subjective chewing ability; masticatory performance, assessed using a color-changeable chewing gum and a gummy jelly; patient self-assessment of dentures; mild cognitive impairment, assessed with the Japanese version of the Montreal Cognitive Assessment; and functional capacity, assessed with the Japan Science and Technology Agency Index of Competence. Outcome measures, except for antioxidant capacity, are to be implemented at three time points: at baseline and at 3 and 6 months following intervention. Antioxidant capacity data are to be collected twice: at baseline and at 3 months following intervention. Differences between the groups at 3 and 6 months and within-group changes are to be compared using the paired t test. Simple dietary advice that can be implemented by a dentist would be more practical in clinical practice than tailored dietary counseling. The results of this study will provide beneficial information on dietary intake changes for both edentulous individuals requesting new complete dentures and dentists. University Hospital Medical Information Network Center Unique Trial Number: UMIN000017879 . Registered on 12 June 2015.

  19. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    NASA Astrophysics Data System (ADS)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
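
    A hedged sketch of the images-versus-points trade-off examined above: true cover varies between images (here Beta-distributed), and scoring n points in an image gives a Binomial count, so when between-image variation dominates, adding images shrinks the standard error of mean cover faster than adding points. All parameters are illustrative, not taken from the study.

      import numpy as np

      rng = np.random.default_rng(5)

      def se_of_cover(n_images, n_points, sims=4000, a=2.0, b=8.0):
          estimates = []
          for _ in range(sims):
              cover = rng.beta(a, b, size=n_images)              # per-image truth
              scored = rng.binomial(n_points, cover) / n_points  # point scoring
              estimates.append(scored.mean())
          return np.std(estimates)

      # Same total effort (2,000 points) allocated two ways:
      print("many images, few points:", se_of_cover(n_images=100, n_points=20))
      print("few images, many points:", se_of_cover(n_images=20, n_points=100))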

  20. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling

    PubMed Central

    Zhou, Fuqun; Zhang, Aining

    2016-01-01

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2–3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of Random Forests’ features: variable importance, outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data. PMID:27792152
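
    A hedged scikit-learn sketch of the variable-selection idea above: train a Random Forest on all time-series variables, rank them by impurity-based importance, and retrain on roughly the top half. The data here are synthetic stand-ins for the MODIS composites, so the accuracies only illustrate the mechanism.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(6)
      X = rng.normal(size=(3000, 40))                 # 40 hypothetical composites
      y = X[:, [3, 7, 19]].sum(axis=1) + rng.normal(scale=0.5, size=3000) > 0

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

      keep = np.argsort(rf.feature_importances_)[::-1][:20]    # top half
      rf_half = RandomForestClassifier(n_estimators=200, random_state=0)
      rf_half.fit(X_tr[:, keep], y_tr)
      print("full-set accuracy:", rf.score(X_te, y_te))
      print("half-set accuracy:", rf_half.score(X_te[:, keep], y_te))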

  1. Pseudo-Random Sequence Modifications for Ion Mobility Orthogonal Time of Flight Mass Spectrometry

    PubMed Central

    Clowers, Brian H.; Belov, Mikhail E.; Prior, David C.; Danielson, William F.; Ibrahim, Yehia; Smith, Richard D.

    2008-01-01

    Due to the inherently low duty cycle of ion mobility spectrometry (IMS) experiments that sample from continuous ion sources, a range of experimental advances have been developed to maximize ion utilization efficiency. The use of ion trapping mechanisms prior to the ion mobility drift tube has demonstrated significant gains over discrete sampling from continuous sources; however, these technologies have traditionally relied upon signal averaging to attain analytically relevant signal-to-noise ratios (SNR). Multiplexed (MP) techniques based upon the Hadamard transform offer an alternative experimental approach by which ion utilization efficiency can be elevated to ∼50%. Recently, our research group demonstrated a unique multiplexed ion mobility time-of-flight (MP-IMS-TOF) approach that incorporates ion trapping and can extend ion utilization efficiency beyond 50%. However, spectral reconstruction of the multiplexed signal using this experimental approach requires sample-specific weighing designs. Though general weighing designs have been shown to significantly enhance ion utilization efficiency using this MP technique, such weighing designs cannot be applied to all samples. By modifying both the ion funnel trap and the pseudo-random sequence (PRS) used for the MP experiment, we have eliminated the need for complex weighing matrices. For both simple and complex mixtures, SNR enhancements of up to 13 were routinely observed as compared to the SA-IMS-TOF experiment. In addition, this new class of PRS provides a twofold enhancement in ion throughput compared to the traditional HT-IMS experiment. PMID:18311942

  2. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    PubMed

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of Random Forests' features: variable importance, outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.

  3. Random Item Generation Is Affected by Age

    ERIC Educational Resources Information Center

    Multani, Namita; Rudzicz, Frank; Wong, Wing Yiu Stephanie; Namasivayam, Aravind Kumar; van Lieshout, Pascal

    2016-01-01

    Purpose: Random item generation (RIG) involves central executive functioning. Measuring aspects of random sequences can therefore provide a simple method to complement other tools for cognitive assessment. We examine the extent to which RIG relates to specific measures of cognitive function, and whether those measures can be estimated using RIG…

  4. [Kriging estimation and its simulated sampling of Chilo suppressalis population density].

    PubMed

    Yuan, Zheming; Bai, Lianyang; Wang, Kuiwu; Hu, Xiangyue

    2004-07-01

    In order to draw up a rational sampling plan for the larvae population of Chilo suppressalis, an original population and its two derivative populations, a random population and a sequence population, were sampled and compared using random sampling, gap-range-random sampling, and a new systematic sampling that integrated Kriging interpolation with a random starting position. For the original population, whose distribution was aggregative with a dependence range of 115 cm (6.9 units) in the line direction, gap-range-random sampling in the line direction was more precise than random sampling. Distinguishing the population pattern correctly is the key to obtaining better precision. Gap-range-random sampling and random sampling are suited to aggregated and random populations, respectively, but both are difficult to apply in practice. Therefore, a new systematic sample, termed the Kriging sample (n = 441), was developed to estimate the density of a partial sample (partial estimation, n = 441) and of the population (overall estimation, N = 1500). For the original population, the estimation precision of the Kriging sample, for both the partial sample and the population, was better than that of the investigation sample. As the aggregation intensity of the population increased, the Kriging sample became more effective than the investigation sample in both partial and overall estimation, provided the sampling gap was appropriate to the dependence range.

  5. A simple method for assessing occupational exposure via the one-way random effects model.

    PubMed

    Krishnamoorthy, K; Mathew, Thomas; Peng, Jie

    2016-11-01

    A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
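
    The MOVER construction itself is not reproduced in the abstract; as a sketch, the general recipe (Zou's method) for a sum of two parameters \theta_1 + \theta_2, given point estimates \hat{\theta}_i and individual confidence limits (l_i, u_i), recovers the combined limits as

        L = \hat{\theta}_1 + \hat{\theta}_2 - \sqrt{(\hat{\theta}_1 - l_1)^2 + (\hat{\theta}_2 - l_2)^2}
        U = \hat{\theta}_1 + \hat{\theta}_2 + \sqrt{(u_1 - \hat{\theta}_1)^2 + (u_2 - \hat{\theta}_2)^2}

    In this exposure setting, the relevant sum on the log scale would typically be μ + σ²/2 (whose exponential is the overall mean exposure), with σ² the total of the between- and within-worker variance components; that parameterization is an assumption here, as the abstract does not spell it out.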

  6. Some practical problems in implementing randomization.

    PubMed

    Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet

    2010-06-01

    While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.

  7. 3D statistical shape models incorporating 3D random forest regression voting for robust CT liver segmentation

    NASA Astrophysics Data System (ADS)

    Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.

    2015-03-01

    During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the according reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions on landmark position contribute to a common voting map which reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.
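
    A hedged, heavily simplified sketch of the regression-voting step described above: a Random Forest regressor learns to map local appearance features to the 3D offset from a sample point to the landmark, and at search time every sample point casts a vote at its own position plus its predicted offset. The synthetic feature map below stands in for the 3D Haar-like features, and averaging the votes stands in for finding the peak of the voxelwise voting map.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(7)
      landmark = np.array([32.0, 32.0, 32.0])
      W = rng.normal(size=(3, 12))      # fixed map: offset -> appearance features

      def make_samples(n):
          pts = landmark + rng.uniform(-10, 10, size=(n, 3))
          offsets = landmark - pts
          feats = offsets @ W + rng.normal(scale=0.5, size=(n, 12))
          return pts, offsets, feats

      _, train_offsets, train_feats = make_samples(4000)
      forest = RandomForestRegressor(n_estimators=50, random_state=0)
      forest.fit(train_feats, train_offsets)            # multi-output regression

      test_pts, _, test_feats = make_samples(500)
      votes = test_pts + forest.predict(test_feats)     # each point casts a vote
      print("true landmark:", landmark)
      print("voted estimate:", votes.mean(axis=0).round(2))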

  8. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    PubMed

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.

  9. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    PubMed Central

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415

  10. Lead contamination in Eugenia dyeriana herbal preparations from different commercial sources in Malaysia.

    PubMed

    Ang, H H

    2008-06-01

    The Drug Control Authority (DCA) of Malaysia implemented the phase three registration of traditional medicines on 1 January, 1992. A total of 100 products in various pharmaceutical dosage forms of a herbal preparation containing Eugenia dyeriana, either as single or combined preparations (more than one medicinal plant), were analyzed for the presence of lead contamination using atomic absorption spectrophotometry. These samples were bought from different commercial sources in the Malaysian market after simple random sampling. Results showed that 22% of the above products failed to comply with the quality requirement for traditional medicines in Malaysia. Although 78% of the products fully complied with the Malaysian quality requirement for traditional medicines pertaining to lead, they cannot be assumed safe from lead contamination because of batch-to-batch inconsistency.

  11. How Many U.S. High School Students Have a Foreign Language Reading "Disability"? Reading Without Meaning and the Simple View.

    PubMed

    Sparks, Richard L; Luebbers, Julie

    Conventional wisdom suggests that students classified as learning disabled will exhibit difficulties with foreign language (FL) learning, but evidence has not supported a relationship between FL learning problems and learning disabilities. The simple view of reading model posits that reading comprehension is the product of word decoding and language comprehension, and that there are good readers and three types of poor readers (dyslexic, hyperlexic, and garden variety) who exhibit different profiles of strengths and/or deficits in word decoding and language comprehension. In this study, a random sample of U.S. high school students completing first-, second-, and third-year Spanish courses were administered standardized measures of Spanish word decoding and reading comprehension, compared with monolingual Spanish readers from first to eleventh grades, and classified into reader types according to the simple view of reading. The majority of students fit the hyperlexic profile, and no participants fit the good reader profile until they were compared with first- and second-grade monolingual Spanish readers. Findings call into question the practice of diagnosing an FL "disability" before a student engages in FL study.

  12. What is the best?: simple versus visitor restricted rest period.

    PubMed

    Silvius-Byron, Stephanie A; Florimonte, Christine; Panganiban, Elizabeth G; Ulmer, Janice Fitzgerald

    2014-05-01

    The aim of this study was to compare a highly structured planned rest protocol that includes visitor and healthcare personnel restrictions with a simple planned rest period that encourages patients to rest during a designated time without restriction of visitors and healthcare personnel. Many acute care hospitals have begun to restrict visitors and nonessential health team interventions during specific times despite the lack of experimentally designed studies. In a randomized experimental design using a convenience sample of 52 intermediate care unit patients, a highly structured planned rest protocol with restriction of visitors/healthcare personnel was compared to a simple planned rest period without restrictions. The primary outcome variable was the patient's perceived quality of rest after a 2-hour rest period. Intermediate care patients' perception of rest and sleep during a designated rest period was similar whether elaborate rest strategies were used, including visitor and healthcare personnel restrictions, or whether it was only suggested that they rest and the door to their room closed. The restriction of visitors and healthcare personnel during a 2-hour rest period did not improve the patients' perception of rest or how long it took them to go to sleep.

  13. The male-taller norm: Lack of evidence from a developing country.

    PubMed

    Sohn, K

    2015-08-01

    In general, women prefer men taller than themselves; this is referred to as the male-taller norm. However, since women are shorter than men on average, it is difficult to determine whether the fact that married women are on average shorter than their husbands results from the norm or is a simple artifact generated by the shorter stature of women. This study addresses the question by comparing the rate of adherence to the male-taller norm between actual mating and hypothetical random mating. A total of 7954 actually married couples are drawn from the last follow-up of the Indonesian Family Life Survey, a nationally representative survey. Their heights were measured by trained nurses. About 10,000 individuals are randomly sampled from the actual couples and randomly matched. An alternative random mating of about 100,000 couples is also performed, taking into account an age difference of 5 years within a couple. The rate of adherence to the male-taller norm is 93.4% for actual couples and 88.8% for random couples. The difference between the two figures is statistically significant, but it is emphasized that it is very small. The alternative random mating produces a rate of 91.4%. The male-taller norm exists in Indonesia, but only in a statistical sense. The small difference suggests that the norm is mostly explained by the fact that women are shorter than men on average. Copyright © 2015 Elsevier GmbH. All rights reserved.

  14. Nursing work life in acute care.

    PubMed

    Brooks, Beth A; Anderson, Mary Ann

    2004-01-01

    The purpose of this project was to explore how acute care nurses in a midwestern state rate the quality of their work life. A simple random sample of 1500 registered nurses was surveyed. Data were collected using Brooks' Quality of Nursing Worklife Survey (Brooks BA. Development of an Instrument to Measure Quality of Nursing Work Life [unpublished doctoral dissertation]. Chicago: University of Illinois at Chicago; 2001). Findings suggested that nursing workload was too heavy and there was not enough time to do the job well. This study revealed that there remain ongoing and fundamental work life concerns for staff nurses that the profession has neither addressed nor resolved in any meaningful, long-term way.

  15. Employee resourcing strategies and universities' corporate image: A survey dataset.

    PubMed

    Falola, Hezekiah Olubusayo; Oludayo, Olumuyiwa Akinrole; Olokundun, Maxwell Ayodele; Salau, Odunayo Paul; Ibidunni, Ayodotun Stephen; Igbinoba, Ebe

    2018-06-01

    The data examined the effect of employee resourcing strategies on corporate image. The data were generated from a total of 500 copies of a questionnaire administered to the academic staff of six (6) selected private universities in Southwest Nigeria, of which four hundred and forty-three (443) were retrieved. Stratified and simple random sampling techniques were used to select the respondents for this study. Descriptive statistics and linear regression were used for the presentation of the data, with the mean score as the statistical tool of analysis. The data presented in this article are made available to facilitate further and more comprehensive investigation on the subject matter.

  16. Systematic versus random sampling in stereological studies.

    PubMed

    West, Mark J

    2012-12-01

    The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach to obtaining a random sample would be to pick a card at random from within the first set number of cards and then take the others at equal intervals through the rest of the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
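
    A toy contrast of the two schemes described above (our own illustration, not code from the article): both draw 10 of 120 sections, but the systematic sample uses one random start and a fixed interval.

      import numpy as np

      rng = np.random.default_rng(1)
      n_sections, n_sample = 120, 10

      # Independent random sampling: each section is chosen without
      # reference to the positions of the others.
      independent = np.sort(rng.choice(n_sections, size=n_sample, replace=False))

      # Systematic random sampling: a random start within the first
      # interval, then every k-th section after it.
      k = n_sections // n_sample
      systematic = np.arange(rng.integers(k), n_sections, k)[:n_sample]

      print(independent)
      print(systematic)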

  17. Transfer and alignment of random single-walled carbon nanotube films by contact printing.

    PubMed

    Liu, Huaping; Takagi, Daisuke; Chiashi, Shohei; Homma, Yoshikazu

    2010-02-23

    We present a simple method to transfer large-area random single-walled carbon nanotube (SWCNT) films grown on SiO(2) substrates onto another surface through a simple contact printing process. The transferred random SWCNT films can be assembled into highly ordered, dense regular arrays with high uniformity and reproducibility by sliding the growth substrate during the transfer process. The position of the transferred SWCNT film can be controlled by predefined patterns on the receiver substrates. The process is compatible with a variety of substrates, and even metal meshes for transmission electron microscopy (TEM) can be used as receiver substrates. Thus, suspended web-like SWCNT networks and aligned SWCNT arrays can be formed over the grids of TEM meshes, so that the structures of the transferred SWCNTs can be directly observed by TEM. This simple technique can be used to controllably transfer SWCNTs for property studies, for the fabrication of devices, or even as support films for TEM meshes.

  18. Nurse Family Partnership: Comparing Costs per Family in Randomized Trials Versus Scale-Up.

    PubMed

    Miller, Ted R; Hendrie, Delia

    2015-12-01

    The literature that addresses cost differences between randomized trials and full-scale replications is quite sparse. This paper examines how costs differed among three randomized trials and six statewide scale-ups of nurse family partnership (NFP) intensive home visitation to low income first-time mothers. A literature review provided data on pertinent trials. At our request, six well-established programs reported their total expenditures. We adjusted the costs to national prices based on mean hourly wages for registered nurses and then inflated them to 2010 dollars. A centralized data system provided utilization. Replications had fewer home visits per family than trials (25 vs. 31, p = .05), lower costs per client ($8860 vs. $12,398, p = .01), and lower costs per visit ($354 vs. $400, p = .30). Sample size limited the significance of these differences. In this type of labor intensive program, costs probably were lower in scale-up than in randomized trials. Key cost drivers were attrition and the stable caseload size possible in an ongoing program. Our estimates reveal a wide variation in cost per visit across six state programs, which suggests that those planning replications should not expect a simple rule to guide cost estimations for scale-ups. Nevertheless, NFP replications probably achieved some economies of scale.

  19. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
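
    The correction described here is, in its simplest form, division of the naive slope by the reliability ratio estimated from the repeated measurement. A minimal sketch assuming classical measurement error, with invented variances and a true slope of 0.8:

      import numpy as np

      rng = np.random.default_rng(2)
      n = 2000
      x_true = rng.normal(0, 1, n)             # true risk factor
      y = 0.8 * x_true + rng.normal(0, 1, n)   # continuous outcome

      # Main study: the risk factor is observed once, with random error.
      x1 = x_true + rng.normal(0, 0.7, n)
      naive = np.cov(x1, y)[0, 1] / np.var(x1, ddof=1)   # attenuated slope

      # Reliability study: a second error-prone measurement; the covariance
      # of the replicates estimates Var(true), giving the reliability ratio.
      x2 = x_true + rng.normal(0, 0.7, n)
      lam = np.cov(x1, x2)[0, 1] / np.var(x1, ddof=1)

      print(f"naive {naive:.3f}, corrected {naive / lam:.3f}, truth 0.8")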

  20. Wave Propagation inside Random Media

    NASA Astrophysics Data System (ADS)

    Cheng, Xiaojun

    This thesis presents results of studies of wave scattering within and transmission through random and periodic systems. The main focus is on energy profiles inside quasi-1D and 1D random media. The connection between transport and the states of the medium is manifested in the equivalence of the dimensionless conductance, g, and the Thouless number, which is the ratio of the average linewidth and spacing of energy levels. This equivalence and theories regarding the energy profiles inside random media are based on the assumption that the LDOS is uniform throughout the samples. We have conducted microwave measurements of the longitudinal energy profiles within disordered samples contained in a copper tube supporting multiple waveguide channels, with an antenna moving along a slit on the tube. These measurements allow us to determine the local density of states (LDOS) at a location, which is the sum of energy from all incoming channels on both sides. For diffusive samples, the LDOS is uniform and the energy profile decays linearly, as expected. However, for localized samples, we find that the LDOS drops sharply towards the middle of the sample and the energy profile does not follow the result of the local diffusion theory, in which the LDOS is assumed to be uniform. We analyze the field spectra into quasi-normal modes and find that the mode linewidth and the number of modes saturate as the sample length increases. Thus the Thouless number saturates while the dimensionless conductance g continues to fall with increasing length, indicating that the modes are localized near the boundaries. This is in contrast to the general belief that g and the Thouless number follow the same scaling behavior. Previous measurements show that single parameter scaling (SPS) still holds in the same samples where the LDOS is suppressed (Shi et al., 2014). We explore the extension of SPS to the interior of the sample by analyzing statistics of the logarithm of the energy density, ln W(x), and find that <ln W(x)> = -x/l, where l is the transport mean free path. The result does not depend on the sample length, which is counterintuitive yet remarkably simple. More surprisingly, the linear fall-off of the energy profile holds for totally disordered random 1D layered samples in simulations, where the LDOS is uniform, as well as for single-mode random waveguide experiments and 1D nearly periodic samples, where the LDOS is suppressed in the middle of the sample. The generalization of the transmission matrix to the interior of quasi-1D random samples, which is defined as the field matrix, and its eigenvalue statistics are also discussed. The maximum energy deposition at a location is given not by the intensity of the first transmission eigenchannel but by the eigenvalue of the first energy density eigenchannel at that cross section, which can be much greater than the average value. The contrast, which is the ratio of the intensity at the focused point to the background intensity in optimal focusing, is determined by the participation number of the energy density eigenvalues, and its inverse gives the variance of the energy density at that cross section in a single configuration. We have also studied topological states in photonic structures. We have demonstrated robust propagation of electromagnetic waves along reconfigurable pathways within a topological photonic metacrystal. Since the wave is confined within the domain wall, which is the boundary between two distinct topological insulating systems, we can freely steer the wave by reconstructing the photonic structure. Other topics, such as speckle pattern evolutions and the effects of boundary conditions on the statistics of transmission eigenvalues and energy profiles, are also discussed.

  1. Effect of second dose of measles vaccine on measles antibody status: a randomized controlled trial.

    PubMed

    Fazilli, Anjum; Mir, Abid Ali; Shah, Rohul Jabeen; Bhat, Imtiyaz Ali; Fomda, Bashir Ahmad; Bhat, Mushtaq Ahmad

    2013-05-08

    To evaluate the effect of the second dose of measles vaccine on measles antibody status during childhood. The study was a randomized controlled trial conducted at the immunization centre of the under-five clinic of the Department of Community Medicine at a tertiary hospital. Blood samples were collected from all subjects for baseline measles serology by heel puncture at 9-12 months of age. All subjects were given the first dose of measles vaccine. At the second visit (3-5 months later), after a blood sample was collected from all subjects, half the children were randomized to receive a second dose of measles vaccine (study group); a third sample was collected from all subjects six weeks later. A total of 78 children were enrolled and 30 children in each group could be analyzed. Eleven (36.6%) children in the study group and 13 (43.3%) children in the control group had protective levels of measles IgG at baseline. Around 93.3% of children in the study group had protective measles antibody titers as against 50% in the control group at the end of the trial. The Geometric Mean Titre (GMT) of measles IgG increased from 14.8 NTU/mL to 18.2 NTU/mL from baseline to six weeks following receipt of the second dose of the vaccine in the study group, as compared to a decrease from 16.8 NTU/mL to 12.8 NTU/mL in the control group. A second dose of measles vaccine boosts the measles antibody status in the study population as compared to those who receive only a single dose.

  2. Rare event simulation in radiation transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollman, Craig

    1993-10-01

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries, so analytic solutions are not possible and Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
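
    The likelihood-ratio reweighting at the heart of importance sampling can be seen in a much simpler rare-event problem than neutron transport. The sketch below estimates P(Z > 5) for a standard normal, a probability near 3e-7, by sampling from a proposal shifted into the rare region; the shift and sample size are arbitrary choices for illustration.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(3)
      n, a = 100_000, 5.0
      exact = norm.sf(a)

      # Naive Monte Carlo: essentially no samples land beyond a.
      naive = np.mean(rng.normal(0, 1, n) > a)

      # Importance sampling: draw from N(a, 1) and weight each sample by
      # the likelihood ratio phi(x) / phi(x - a) to keep the estimate unbiased.
      x = rng.normal(a, 1, n)
      w = norm.pdf(x) / norm.pdf(x, loc=a)
      is_est = np.mean((x > a) * w)

      print(f"exact {exact:.3e}, naive {naive:.3e}, importance {is_est:.3e}")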

  3. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
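
    A minimal sketch of the procedure as summarized above: re-order k-1 of the variables independently many times, recompute the largest eigenvalue of the correlation matrix each time, and report the fraction of shuffles reaching the observed value. The data and permutation count are invented.

      import numpy as np

      rng = np.random.default_rng(4)
      n, k = 50, 5
      data = rng.normal(size=(n, k))
      data[:, 1] += 0.6 * data[:, 0]      # inject a real association

      def largest_eig(x):
          return np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)).max()

      observed = largest_eig(data)
      n_perm, exceed = 2000, 0
      perm = data.copy()
      for _ in range(n_perm):
          for j in range(1, k):           # variable 0 stays fixed
              perm[:, j] = rng.permutation(data[:, j])
          exceed += largest_eig(perm) >= observed

      print(f"p-value {(exceed + 1) / (n_perm + 1):.4f}")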

  4. Sydney Playground Project: A Cluster-Randomized Trial to Increase Physical Activity, Play, and Social Skills

    ERIC Educational Resources Information Center

    Bundy, Anita; Engelen, Lina; Wyver, Shirley; Tranter, Paul; Ragen, Jo; Bauman, Adrian; Baur, Louise; Schiller, Wendy; Simpson, Judy M.; Niehues, Anita N.; Perry, Gabrielle; Jessup, Glenda; Naughton, Geraldine

    2017-01-01

    Background: We assessed the effectiveness of a simple intervention for increasing children's physical activity, play, perceived competence/social acceptance, and social skills. Methods: A cluster-randomized controlled trial was conducted, in which schools were the clusters. Twelve Sydney (Australia) primary schools were randomly allocated to…

  5. S-SPatt: simple statistics for patterns on Markov chains.

    PubMed

    Nuel, Grégory

    2005-07-01

    S-SPatt allows the counting of pattern occurrences in text files and, assuming these texts are generated from a random Markovian source, the computation of the P-value of a given observation using a simple binomial approximation.
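
    The binomial approximation mentioned can be reproduced in a few lines: with N scanned positions and a per-position occurrence probability p (which S-SPatt derives from the Markov model; here it is simply invented), the P-value of counting n_obs or more occurrences is a binomial tail.

      from scipy.stats import binom

      N = 100_000     # positions scanned in the text
      p = 2.1e-4      # per-position occurrence probability (assumed)
      n_obs = 35      # occurrences actually counted

      # P(X >= n_obs) for X ~ Binomial(N, p)
      p_value = binom.sf(n_obs - 1, N, p)
      print(f"expected {N * p:.1f}, observed {n_obs}, P-value {p_value:.3g}")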

  6. Mail merge can be used to create personalized questionnaires in complex surveys.

    PubMed

    Taljaard, Monica; Chaudhry, Shazia Hira; Brehaut, Jamie C; Weijer, Charles; Grimshaw, Jeremy M

    2015-10-16

    Low response rates and inadequate question comprehension threaten the validity of survey results. We describe a simple procedure to implement personalized (as opposed to generically worded) questionnaires in the context of a complex web-based survey of corresponding authors of a random sample of 300 published cluster randomized trials. The purpose of the survey was to gather more detailed information about informed consent procedures used in the trial, over and above the basic information provided in the trial report. We describe our approach, which allowed extensive personalization without the need for specialized computer technology, and discuss its potential application in similar settings. The mail merge feature of standard word processing software was used to generate unique, personalized questionnaires for each author by incorporating specific information from the article, including naming the randomization unit (e.g., family practice, school, worksite) and identifying specific individuals who may have been considered research participants at the cluster level (family doctors, teachers, employers) and individual level (patients, students, employees) in questions regarding informed consent procedures in the trial. The response rate was relatively high (64%, 182/285) and did not vary significantly by author, publication, or study characteristics. The refusal rate was low (7%). While controlled studies are required to examine the specific effects of our approach on comprehension, quality of responses, and response rates, we showed how mail merge can be used as a simple but useful tool to add personalized fields to complex survey questionnaires, or to request additional information required from study authors. One potential application is in eliciting specific information about published articles from study authors when conducting systematic reviews and meta-analyses.

  7. Bohman-Frieze-Wormald model on the lattice, yielding a discontinuous percolation transition

    NASA Astrophysics Data System (ADS)

    Schrenk, K. J.; Felder, A.; Deflorin, S.; Araújo, N. A. M.; D'Souza, R. M.; Herrmann, H. J.

    2012-03-01

    The BFW model introduced by Bohman, Frieze, and Wormald [Random Struct. Algorithms 25, 432 (2004)], and recently investigated in the framework of discontinuous percolation by Chen and D'Souza [Phys. Rev. Lett. 106, 115701 (2011)], is studied on the square and simple-cubic lattices. In two and three dimensions, we find numerical evidence for a strongly discontinuous transition. In two dimensions, the clusters at the threshold are compact with a fractal surface of fractal dimension df = 1.49 ± 0.02. On the simple-cubic lattice, distinct jumps in the size of the largest cluster are observed. We proceed to analyze the tree-like version of the model, where only merging bonds are sampled, for dimensions two to seven. The transition is again discontinuous in any considered dimension. Finally, the dependence of the cluster-size distribution at the threshold on the spatial dimension is also investigated.

  8. Morphological integration of anatomical, developmental, and functional postcranial modules in the crab-eating macaque (Macaca fascicularis).

    PubMed

    Conaway, Mark A; Schroeder, Lauren; von Cramon-Taubadel, Noreen

    2018-03-22

    Integration and modularity reflect the coordinated action of past evolutionary processes and, in turn, constrain or facilitate phenotypic evolvability. Here, we analyze magnitudes of integration in the macaque postcranium to test whether 20 a priori defined modules are (1) more tightly integrated than random sets of postcranial traits, and (2) differentiated based on mode of definition, with developmental modules expected to be more integrated than functional or anatomical modules. The 3D morphometric data collected for eight limb and girdle bones for 60 macaques were collated into anatomical, developmental, and functional modules. A resampling technique was used to create random samples of integration values for each module for statistical comparison. Our results found that not all a priori defined modules were more strongly integrated than random samples of postcranial traits and that specific types of modules did not present consistent patterns of integration. Rather, girdle and joint modules were consistently less integrated than limb modules, and forelimb elements were less integrated than hindlimbs. The results suggest that morphometrically complex modules tend to be less integrated than simple limb bones, irrespective of the number of available traits. However, the difference in integration of the fore- and hindlimb more likely reflects the multitude of locomotory, feeding, and social functions involved. It remains to be tested whether the patterns of integration identified here are primate universals, and to what extent they vary depending on phylogenetic or functional factors. © 2018 Wiley Periodicals, Inc.

  9. Attendance at cultural events and physical exercise and health: a randomized controlled study.

    PubMed

    Konlaan, B B; Björby, N; Bygren, L O; Weissglas, G; Karlsson, L G; Widmark, M

    2000-09-01

    The aim of this study was to assess the specific biomedico-social effects of attending cultural events and taking gentle physical exercise, apart from the general effect of participating in group activities. This was a randomized controlled investigation using a factorial design, in which attending cultural events and taking easy physical exercise were tested simultaneously. The participants, aged between 18 and 74 y, came from a simple random sample of 1000 people registered as residents in Umeå, a town in northern Sweden; 21 individuals (11 men, 10 women) were recruited into the experiment. Two of the 21 subjects dropped out and were discounted from our analysis. Nine people were encouraged to engage in cultural activity for a two-month period. Diastolic blood pressure in eight of these nine was significantly reduced following the experiment. There were no marked changes observed in either systolic or diastolic blood pressure in those not required to engage in any form of extra cultural activity. A decrease in the levels of both adrenocorticotropic hormone (ACTH) and s-prolactin was observed in culturally stimulated subjects, whereas the average baseline s-prolactin level of 7 ng/l for the non-culturally stimulated group was unchanged after the experiment. Physical exercise produced an increase in the high density lipoprotein (HDL) cholesterol level and in the ratio of HDL to LDL (low density lipoprotein). It was concluded that cultural stimulation may have specific effects on health related determinants.

  10. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
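
    A sketch of the sampling scheme as described (choose a pixel uniformly at random, then measure nearby pixels with probability decreasing in distance). The Gaussian decay profile, patch count, and image size are our own illustrative choices.

      import numpy as np

      rng = np.random.default_rng(5)
      H, W = 64, 64
      n_centers, scale = 40, 3.0

      mask = np.zeros((H, W), dtype=bool)
      yy, xx = np.mgrid[0:H, 0:W]
      for _ in range(n_centers):
          cy, cx = rng.integers(H), rng.integers(W)     # uniform center pixel
          d2 = (yy - cy) ** 2 + (xx - cx) ** 2
          prob = np.exp(-d2 / (2 * scale**2))           # decays with distance
          mask |= rng.random((H, W)) < prob

      print(f"sampled {mask.mean():.1%} of pixels")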

  11. Sample survey methods as a quality assurance tool in a general practice immunisation audit.

    PubMed

    Cullen, R

    1994-04-27

    In a multidoctor family practice there are often just too many sets of patient records to make it practical to repeat an audit by census of even an age band of the practice on a regular basis. This paper attempts to demonstrate how sample survey methodology can be incorporated into the quality assurance cycle. A simple random sample (with replacement) of 120 from 580 children with permanent records, aged between 6 weeks and 2 years old, from an Auckland general practice was performed, with the sample size selected to give a predetermined precision. The survey was then repeated after 4 weeks. Both surveys were able to be completed within the course of a normal working day. An unexpectedly low proportion of under-2-year-olds was recorded as not overdue for any immunisations (22.5%), with only a modest improvement after a standard telephone/letter catch-up campaign. Seventy-two percent of the sample held a group one community services card. The advantages of properly conducted sample surveys in producing useful estimates of known precision, without disrupting office routines excessively, were demonstrated. With some attention to methodology, the trauma of a practice census can be avoided.

  12. A low-rank matrix recovery approach for energy efficient EEG acquisition for a wireless body area network.

    PubMed

    Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab

    2014-08-25

    We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBAN) in an energy efficient fashion. In WBANs, the energy is consumed by three operations: sensing (sampling), processing and transmission. Previous studies only addressed the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that the reconstruction accuracy of our method is significantly better than state-of-the-art techniques; and we achieve this while saving sensing, processing and transmission energy. Simple power analysis shows that our proposed methodology consumes considerably less power compared to previous CS based techniques.
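
    The paper derives its own recovery algorithm; as a generic stand-in for the idea, the sketch below recovers a low-rank matrix from randomly under-sampled entries with the well-known Soft-Impute iteration (singular-value soft-thresholding). Sizes, rank, sampling ratio, and threshold are arbitrary, and this is not the authors' method.

      import numpy as np

      rng = np.random.default_rng(6)
      m, n, r = 60, 40, 3
      M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))  # low-rank signal
      omega = rng.random((m, n)) < 0.4                       # 40% of entries seen

      X, lam = np.zeros((m, n)), 1.0
      for _ in range(300):
          filled = np.where(omega, M, X)   # keep observed entries, impute the rest
          U, s, Vt = np.linalg.svd(filled, full_matrices=False)
          X = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt     # shrink singular values

      rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
      print(f"relative recovery error {rel_err:.3f}")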

  13. [Factors associated with the use of dental health services].

    PubMed

    Dho, María Silvina

    2018-02-01

    This paper seeks to analyze the factors associated with the use of dental health services (UDHS) by adults in the city of Corrientes, Argentina. A cross-sectional study was conducted. Information concerning the study variables was collected via a home survey. The sample size was established with a 95% confidence interval level (381 individuals). A simple random sampling design was used, which was complemented with a non-probability quota sampling. The data were analyzed using SPSS version 21.0 and Epidat version 3.1 software. Socio-economic level, dental health coverage, perception of oral health care, perception of oral health, knowledge about oral health, and oral hygiene habits were significantly associated with the UDHS over the last twelve months. These same factors, excluding dental health coverage and knowledge about oral health, were associated with the UDHS for routine dental check-ups. Measures should be implemented to increase the UDHS for prevention purposes in men and women of all socio-economic levels, particularly in less-privileged individuals.

  14. Dispersion Morphology of Poly(methyl acrylate)/Silica Nanocomposites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D Janes; J Moll; S Harton

    Nearly monodisperse poly(methyl acrylate) (PMA) and spherical SiO2 nanoparticles (NP, d = 14 ± 4 nm) were co-cast from 2-butanone, a mutually good solvent and a displacer of adsorbed PMA from silica. The effects of NP content and post-casting sample history on the dispersion morphology were found by small-angle X-ray scattering supplemented by transmission electron microscopy. Analysis of the X-ray results show that cast and thermally annealed samples exhibited a nearly random particle dispersion. That the same samples, prior to annealing, were not well-dispersed is indicative of thermodynamic miscibility during thermal annealing over the range of NP loadings studied. A simple mean-field thermodynamic model suggests that miscibility results primarily from favorable polymer segment/NP surface interactions. The model also indicates, and experiments confirm, that subsequent exposure of the composites to the likely displacer ethyl acetate results in entropic destabilization and demixing into NP-rich and NP-lean phases.

  15. An analysis of the feasibility of the copra business in the village of Pendowo Harjo, sub-district of Sungsang, Banyuasin Regency

    NASA Astrophysics Data System (ADS)

    Purba, Y. Z. W.; Saleh, W.

    2018-01-01

    Copra is used as a raw material for coconut oil and is an exported commodity. This study was conducted in the tidal land of Pendowo Harjo Village, Subdistrict of Sungsang, Banyuasin Regency, and aims to calculate the production costs incurred, the income earned, and the feasibility of the business of producing copra. In this research, sampling was conducted by the simple random sampling method. The number of samples taken in this study was 10 copra producers out of a population of 117. The results of the analysis show that the production cost incurred is Rp 1,198,076.12 and the income earned is Rp 414,598.88 per unit of the production process. Financially, the NPV obtained is Rp 19,668,343.86, the IRR is 60.75 percent, and the Net B/C is 1.74. Therefore, economically, the copra business is feasible to be developed.

  16. Simple Emergent Power Spectra from Complex Inflationary Physics

    NASA Astrophysics Data System (ADS)

    Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David

    2016-09-01

    We construct ensembles of random scalar potentials for Nf interacting scalar fields using nonequilibrium random matrix theory, and use these to study the generation of observables during small-field inflation. For Nf = O(few), these heavily featured scalar potentials give rise to power spectra that are highly nonlinear, at odds with observations. For Nf ≫ 1, the superhorizon evolution of the perturbations is generically substantial, yet the power spectra simplify considerably and become more predictive, with most realizations being well approximated by a linear power spectrum. This provides proof of principle that complex inflationary physics can give rise to simple emergent power spectra. We explain how these results can be understood in terms of large Nf universality of random matrix theory.

  17. Simple Emergent Power Spectra from Complex Inflationary Physics.

    PubMed

    Dias, Mafalda; Frazer, Jonathan; Marsh, M C David

    2016-09-30

    We construct ensembles of random scalar potentials for Nf interacting scalar fields using nonequilibrium random matrix theory, and use these to study the generation of observables during small-field inflation. For Nf = O(few), these heavily featured scalar potentials give rise to power spectra that are highly nonlinear, at odds with observations. For Nf ≫ 1, the superhorizon evolution of the perturbations is generically substantial, yet the power spectra simplify considerably and become more predictive, with most realizations being well approximated by a linear power spectrum. This provides proof of principle that complex inflationary physics can give rise to simple emergent power spectra. We explain how these results can be understood in terms of large Nf universality of random matrix theory.

  18. The correlation structure of several popular pseudorandom number generators

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Merrick, R.; Martin, C. F.

    1973-01-01

    One of the desirable properties of a pseudorandom number generator is that the sequence of numbers it generates should have very low autocorrelation for all shifts except for zero shift and those that are multiples of its cycle length. Due to the simple methods of constructing random numbers, the ideal is often not quite fulfilled. A simple method of examining any random generator for previously unsuspected regularities is discussed. Once they are discovered, it is often easy to derive the mathematical relationships which describe the regular behavior. As examples, it is shown that high correlation exists in mixed and multiplicative congruential random number generators and prime moduli Lehmer generators for shifts a fraction of their cycle lengths.
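
    The kind of regularity the paper describes can be probed directly: generate a sequence from a small Lehmer (multiplicative congruential) generator and scan its autocorrelation over shifts. The toy modulus and multiplier below are illustrative; for a small multiplier a, a noticeable lag-1 correlation of roughly 1/a is expected.

      import numpy as np

      # Lehmer generator: x <- a * x mod m, with prime modulus m
      m, a, x = 2**13 - 1, 17, 1
      seq = []
      for _ in range(5000):
          x = (a * x) % m
          seq.append(x / m)
      u = np.array(seq) - np.mean(seq)

      for lag in (1, 2, 5, 10, 100):
          r = np.dot(u[:-lag], u[lag:]) / np.dot(u, u)
          print(f"lag {lag:4d}: autocorrelation {r:+.3f}")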

  19. Constraining Thermal Histories by Monte Carlo Simulation of Mg-Fe Isotopic Profiles in Olivine

    NASA Astrophysics Data System (ADS)

    Sio, C. K. I.; Dauphas, N.

    2016-12-01

    In thermochronology, random time-temperature (t-T) paths are generated and used as inputs to model fission track data. This random search method is used to identify a range of acceptable thermal histories that can describe the data. We have extended this modeling approach to magmatic systems. This approach utilizes both the chemical and stable isotope profiles measured in crystals as model constraints. Specifically, the isotopic profiles are used to determine the relative contribution of crystal growth vs. diffusion in generating chemical profiles, and to detect changes in melt composition. With this information, tighter constraints can be placed on the thermal evolution of magmatic bodies. We use an olivine phenocryst from the Kilauea Iki lava lake, HI, to demonstrate proof of concept. We treat this sample as one with little geologic context, then compare our modeling results to the known thermal history experienced by that sample. To complete forward modeling, we use MELTS to estimate the boundary condition, initial and quench temperatures. We also assume a simple relationship between crystal growth and cooling rate. Another important parameter is the isotopic effect for diffusion (i.e., the relative diffusivity of the light vs. heavy isotope of an element). The isotopic effects for Mg and Fe diffusion in olivine have been estimated based on natural samples; experiments to better constrain these parameters are underway. We find that 40% of the random t-T paths can be used to fit the Mg-Fe chemical profiles. However, only a few can be used to simultaneously fit the Mg-Fe isotopic profiles. These few t-T paths are close to the independently determined t-T history of the sample. This modeling approach can be further extended to other igneous and metamorphic systems where data exist for diffusion rates, crystal growth rates, and isotopic effects for diffusion.

  20. Forecasting the brittle failure of heterogeneous, porous geomaterials

    NASA Astrophysics Data System (ADS)

    Vasseur, Jérémie; Wadsworth, Fabian; Heap, Michael; Main, Ian; Lavallée, Yan; Dingwell, Donald

    2017-04-01

    Heterogeneity develops in magmas during ascent and is dominated by the development of crystal and importantly, bubble populations or pore-network clusters which grow, interact, localize, coalesce, outgas and resorb. Pore-scale heterogeneity is also ubiquitous in sedimentary basin fill during diagenesis. As a first step, we construct numerical simulations in 3D in which randomly generated heterogeneous and polydisperse spheres are placed in volumes and which are permitted to overlap with one another, designed to represent the random growth and interaction of bubbles in a liquid volume. We use these simulated geometries to show that statistical predictions of the inter-bubble lengthscales and evolving bubble surface area or cluster densities can be made based on fundamental percolation theory. As a second step, we take a range of well constrained random heterogeneous rock samples including sandstones, andesites, synthetic partially sintered glass bead samples, and intact glass samples and subject them to a variety of stress loading conditions at a range of temperatures until failure. We record in real time the evolution of the number of acoustic events that precede failure and show that in all scenarios, the acoustic event rate accelerates toward failure, consistent with previous findings. Applying tools designed to forecast the failure time based on these precursory signals, we constrain the absolute error on the forecast time. We find that for all sample types, the error associated with an accurate forecast of failure scales non-linearly with the lengthscale between the pore clusters in the material. Moreover, using a simple micromechanical model for the deformation of porous elastic bodies, we show that the ratio between the equilibrium sub-critical crack length emanating from the pore clusters relative to the inter-pore lengthscale, provides a scaling for the error on forecast accuracy. Thus for the first time we provide a potential quantitative correction for forecasting the failure of porous brittle solids that build the Earth's crust.

  1. Meta-analysis of multiple outcomes: a multilevel approach.

    PubMed

    Van den Noortgate, Wim; López-López, José Antonio; Marín-Martínez, Fulgencio; Sánchez-Meca, Julio

    2015-12-01

    In meta-analysis, dependent effect sizes are very common. An example is where in one or more studies the effect of an intervention is evaluated on multiple outcome variables for the same sample of participants. In this paper, we evaluate a three-level meta-analytic model to account for this kind of dependence, extending the simulation results of Van den Noortgate, López-López, Marín-Martínez, and Sánchez-Meca (Behavior Research Methods, 45, 576-594, 2013) by allowing for a variation in the number of effect sizes per study, in the between-study variance, in the correlations between pairs of outcomes, and in the sample size of the studies. At the same time, we explore the performance of the approach if the outcomes used in a study can be regarded as a random sample from a population of outcomes. We conclude that although this approach is relatively simple and does not require prior estimates of the sampling covariances between effect sizes, it gives appropriate mean effect size estimates, standard error estimates, and confidence interval coverage proportions in a variety of realistic situations.

  2. Estimation of a partially linear additive model for data from an outcome-dependent sampling design with a continuous outcome

    PubMed Central

    Tan, Ziwen; Qin, Guoyou; Zhou, Haibo

    2016-01-01

    Outcome-dependent sampling (ODS) designs have been well recognized as a cost-effective way to enhance study efficiency in both statistical literature and biomedical and epidemiologic studies. A partially linear additive model (PLAM) is widely applied in real problems because it allows for a flexible specification of the dependence of the response on some covariates in a linear fashion and other covariates in a nonlinear non-parametric fashion. Motivated by an epidemiological study investigating the effect of prenatal polychlorinated biphenyls exposure on children's intelligence quotient (IQ) at age 7 years, we propose a PLAM in this article to investigate a more flexible non-parametric inference on the relationships among the response and covariates under the ODS scheme. We propose the estimation method and establish the asymptotic properties of the proposed estimator. Simulation studies are conducted to show the improved efficiency of the proposed ODS estimator for PLAM compared with that from a traditional simple random sampling design with the same sample size. The data of the above-mentioned study is analyzed to illustrate the proposed method. PMID:27006375

  3. Determination of Pesticides Residues in Cucumbers Grown in Greenhouse and the Effect of Some Procedures on Their Residues.

    PubMed

    Leili, Mostafa; Pirmoghani, Amin; Samadi, Mohammad Taghi; Shokoohi, Reza; Roshanaei, Ghodratollah; Poormohammadi, Ali

    2016-11-01

    The objective of this study was to determine the residual concentrations of ethion and imidacloprid in cucumbers grown in greenhouses. The effect of some simple processing procedures on both ethion and imidacloprid residues was also studied. Ten active greenhouses that produce cucumber were randomly selected. Ethion and imidacloprid, as the most widely used pesticides, were measured in cucumber samples from the studied greenhouses. Moreover, the effect of storing, washing, and peeling as simple processing procedures on both ethion and imidacloprid residues was investigated. One hour after pesticide application, the residue levels of ethion and imidacloprid were higher than the Codex maximum residue levels (MRLs). One day after pesticide application, the levels of pesticides had decreased by about 35 and 31% for ethion and imidacloprid, respectively, but were still higher than the MRL. The washing procedure led to about 51 and 42.5% loss in ethion and imidacloprid residues, respectively. The peeling procedure led to the highest losses, of 93.4 and 63.7% in ethion and imidacloprid residues, respectively. The recovery for both target analytes was in the range between 88 and 102%. The residue values in samples collected one hour after pesticide application were higher than the standard value. The storing, washing, and peeling procedures lead to a decrease of pesticide residues in greenhouse cucumbers. Among them, the peeling procedure has the greatest impact on residue reduction. Therefore, these procedures can be used as simple and effective processing techniques for reducing and removing pesticides from greenhouse products before their consumption.

  4. Aspergillus tubingensis and Aspergillus niger as the dominant black Aspergillus, use of simple PCR-RFLP for preliminary differentiation.

    PubMed

    Mirhendi, H; Zarei, F; Motamedi, M; Nouripour-Sisakht, S

    2016-03-01

    This work aimed to identify the species distribution of common clinical and environmental isolates of black Aspergilli based on simple restriction fragment length polymorphism (RFLP) analysis of the β-tubulin gene. A total of 149 clinical and environmental strains of black Aspergilli were collected and subjected to preliminary morphological examination. Total genomic DNAs were extracted, and PCR was performed to amplify part of the β-tubulin gene. At first, 52 randomly selected samples were species-delineated by sequence analysis. In order to distinguish the most common species, PCR amplicons of 117 black Aspergillus strains were identified by simple PCR-RFLP analysis using the enzyme TasI. Among 52 sequenced isolates, 28 were Aspergillus tubingensis, 21 Aspergillus niger, and the three remaining isolates included Aspergillus uvarum, Aspergillus awamori, and Aspergillus acidus. All 100 environmental and 17 BAL samples subjected to TasI-RFLP analysis of the β-tubulin gene fell into two groups, consisting of about 59% (n=69) A. tubingensis and 41% (n=48) A. niger. Therefore, the method successfully and rapidly distinguished A. tubingensis and A. niger as the most common species among the clinical and environmental isolates. Although slower, the Ehrlich test was also able to differentiate A. tubingensis and A. niger, according to the yellow color reaction specific to A. niger. A. tubingensis and A. niger are the most common black Aspergillus species in both clinical and environmental isolates in Iran. PCR-RFLP using TasI digestion of β-tubulin DNA enables rapid screening for these common species. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  5. A simple model for pollen-parent fecundity distributions in bee-pollinated forage legume polycrosses

    USDA-ARS?s Scientific Manuscript database

    Random mating or panmixis is a fundamental assumption in quantitative genetic theory. Random mating is sometimes thought to occur in actual fact although a large body of empirical work shows that this is often not the case in nature. Models have been developed to model many non-random mating phenome...

  6. Under What Circumstances Does External Knowledge about the Correlation Structure Improve Power in Cluster Randomized Designs?

    ERIC Educational Resources Information Center

    Rhoads, Christopher

    2014-01-01

    Recent publications have drawn attention to the idea of utilizing prior information about the correlation structure to improve statistical power in cluster randomized experiments. Because power in cluster randomized designs is a function of many different parameters, it has been difficult for applied researchers to discern a simple rule explaining…

  7. Sampling for Patient Exit Interviews: Assessment of Methods Using Mathematical Derivation and Computer Simulations.

    PubMed

    Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till

    2018-02-01

    (1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
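
    The length bias that rules out "next patient exiting" is easy to reproduce by simulation: a patient is intercepted with probability roughly proportional to consultation length, so the interviewed patients' mean consultation time exceeds the true mean (about twofold for exponential durations). A stylized sketch with one clinician and invented durations:

      import numpy as np

      rng = np.random.default_rng(7)
      durations = rng.exponential(10.0, size=20_000)   # consultation lengths
      ends = np.cumsum(durations)                      # each patient's exit time

      # The interviewer becomes free at arbitrary moments and takes the next
      # patient to exit, i.e., whoever is in consultation at that moment.
      free_at = rng.uniform(0, ends[-1], size=5000)
      picked = np.searchsorted(ends, free_at)

      print(f"true mean duration     {durations.mean():.1f}")
      print(f"mean among interviewed {durations[picked].mean():.1f}")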

  8. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs.

    PubMed

    Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong

    2015-12-26

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
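
    The noise figures quoted are consistent with the usual averaging rule: the mean of N independent reads has its random noise scaled by 1/sqrt(N). A quick numerical check, assuming for illustration that N = 10 samplings lie behind the reported reduction:

      import numpy as np

      rng = np.random.default_rng(8)
      sigma, n_samp = 848.3e-6, 10   # single-read noise (V); samplings (assumed)

      reads = rng.normal(0.0, sigma, size=(200_000, n_samp))
      simulated = reads.mean(axis=1).std()
      predicted = sigma / np.sqrt(n_samp)
      print(f"predicted {predicted * 1e6:.1f} uV, simulated {simulated * 1e6:.1f} uV")

    Note that 848.3 μV / sqrt(10) is about 268 μV, close to the reported 270.4 μV.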

  9. A simple highly sensitive and selective aptamer-based colorimetric sensor for environmental toxins microcystin-LR in water samples.

    PubMed

    Li, Xiuyan; Cheng, Ruojie; Shi, Huijie; Tang, Bo; Xiao, Hanshuang; Zhao, Guohua

    2016-03-05

    A simple and highly sensitive aptamer-based colorimetric sensor was developed for the selective detection of Microcystin-LR (MC-LR). The aptamer (ABA) was employed as the recognition element, which could bind MC-LR with high affinity, while gold nanoparticles (AuNPs) worked as the sensing material, whose plasma resonance absorption peaks red-shift upon binding of the targets at a high concentration of sodium chloride. With the addition of MC-LR, the random-coil aptamer adsorbed on the AuNPs changed into a regulated structure to form MC-LR-aptamer complexes and broke away from the surface of the AuNPs, leading to the aggregation of the AuNPs, and the color converted from red to blue due to interparticle plasmon coupling. Results showed that our aptamer-based colorimetric sensor exhibited rapid and sensitive detection performance for MC-LR, with a linear range from 0.5 nM to 7.5 μM and a detection limit of 0.37 nM. Meanwhile, the pollutants usually coexisting with MC-LR in polluted water samples did not interfere with the detection of MC-LR. The proposed mechanism suggests that the high-affinity interaction between the aptamer and MC-LR significantly enhanced the sensitivity and selectivity of MC-LR detection. In addition, the established method was used to analyze real water samples, where excellent sensitivity and selectivity were likewise obtained. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Simulating and mapping spatial complexity using multi-scale techniques

    USGS Publications Warehouse

    De Cola, L.

    1994-01-01

    A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields. -Author

  11. An epidemiological study of prevalence and comorbidity of obsessive compulsive disorder symptoms (SOCD) and stress in Pakistani Adults.

    PubMed

    Ashraf, Farzana; Malik, Sadia; Arif, Amna

    2017-01-01

    To investigate the prevalence and comorbidity of subclinical obsessive compulsive disorder (SOCD) symptoms and stress across gender, marital and employment statuses. A cross-sectional study was conducted from December 2016 to March 2017 at two universities in the cosmopolitan city of Lahore. Two self-report scales measuring SOCD symptoms and stress were used to collect data from 377 adults selected through a simple random sampling technique, proportionately distributed across gender, marital and employment status. Of the total sample, 52% reported a low level of stress and 48% faced a high level of stress. Significant differences in prevalence were observed across marital and employment statuses, whereas for men and women it was the same (24%). Comorbidity of a high level of SOCD symptoms and a high level of stress was seen in 34%. Significant prevalence and comorbidity exist between SOCD symptoms and stress, and more studies addressing diverse populations are needed.

  12. Survey data on cost and benefits of climate smart agricultural technologies in western Kenya.

    PubMed

    Ng'ang'a, S K; Mwungu, C M; Mwongera, C; Kinyua, I; Notenbaert, A; Girvetz, E

    2018-02-01

    This paper describes data that were collected in three counties of western Kenya, namely Siaya, Bungoma, and Kakamega. The main aim of collecting the data was to assess the climate smartness, profitability and returns of soil protection and rehabilitation measures. The data were collected from 88 households. The households were selected using a simple random sampling technique from a primary sampling frame of 180 farm households provided by the ministry of agriculture through the county agricultural officers. The surveys were administered by trained research assistants using a structured questionnaire that was designed in the Census and Survey Processing System (CSPro). Later, the data were exported to STATA version 14.1 for cleaning and management purposes. The data are hosted in an open source dataverse to allow other researchers to generate new insights from the data (http://dx.doi.org/10.7910/DVN/K6JQXC).

  13. Statistics of lattice animals

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping; Nadler, Walder; Grassberger, Peter

    2005-07-01

    The scaling behavior of randomly branched polymers in a good solvent is studied in two to nine dimensions, modeled by lattice animals on simple hypercubic lattices. For the simulations, we use a biased sequential sampling algorithm with re-sampling, similar to the pruned-enriched Rosenbluth method (PERM) used extensively for linear polymers. We obtain high statistics of animals with up to several thousand sites in all dimensions 2 ⩽ d ⩽ 9. The partition sum (number of different animals) and gyration radii are estimated. In all dimensions we verify the Parisi-Sourlas prediction, and we verify all exactly known critical exponents in dimensions 2, 3, 4, and ⩾8. In addition, we present the hitherto most precise estimates for growth constants in d ⩾ 3. For clusters with one site attached to an attractive surface, we verify the superuniversality of the cross-over exponent at the adsorption transition predicted by Janssen and Lyssy.
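
    To make the pruned-enriched idea concrete, here is a minimal PERM-style sketch in Python for self-avoiding walks (the linear-polymer case the abstract mentions), not the authors' lattice-animal algorithm. Walks grow step by step with Rosenbluth weights; configurations whose weight drifts high are split into copies (enrichment), and low-weight ones are stochastically killed with weight doubling for the survivors (pruning), so expected weights are preserved. The thresholds are crude illustrative choices.

      import random

      def perm_saw(n_max, tries=2000, low=0.3, high=3.0, seed=1):
          rng = random.Random(seed)
          z = [0.0] * (n_max + 1)   # summed weights: z[n]/tries estimates c_n
          cnt = [0] * (n_max + 1)   # visits at each length, for the running mean

          def grow(walk, w):
              n = len(walk) - 1
              z[n] += w
              cnt[n] += 1
              if n == n_max:
                  return
              x, y = walk[-1]
              free = [s for s in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
                      if s not in walk]
              if not free:
                  return                      # walk is trapped
              w *= len(free)                  # Rosenbluth weight update
              mean_w = z[n] / cnt[n]
              if w > high * mean_w:           # enrichment: split into two copies
                  for _ in range(2):
                      grow(walk + [rng.choice(free)], w / 2)
              elif w < low * mean_w:          # pruning: kill half, double the rest
                  if rng.random() < 0.5:
                      grow(walk + [rng.choice(free)], 2 * w)
              else:
                  grow(walk + [rng.choice(free)], w)

          for _ in range(tries):
              grow([(0, 0)], 1.0)
          return [z[n] / tries for n in range(n_max + 1)]

      # Estimated counts of n-step self-avoiding walks on the square lattice;
      # the exact values start 1, 4, 12, 36, 100, 284.
      print([round(c, 1) for c in perm_saw(10)[:6]])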

  14. The need and its influence factors for community-based rehabilitation services for disabled persons in one district in Beijing.

    PubMed

    Dai, Hong; Xue, Hui; Yin, Zong-Jie; Xiao, Zhong-Xin

    2006-12-01

    To explore the needs for basic community-based rehabilitation services for disabled persons in Xuanwu District, Beijing, China, and to identify factors which influence disabled persons to accept rehabilitation services. One hundred and eight disabled persons were selected by systematic sampling and simple random sampling to assess their needs for community-based rehabilitation services. Of the interviewees, 57.4% needed community-based rehabilitation services, but only 13.9% took advantage of them. The main factors influencing the interviewees to accept these services were cost (P < 0.05), knowledge about rehabilitation medicine (P < 0.05), and belief in the therapeutic benefit of the community-based rehabilitation service (P < 0.05). A considerable gap exists between the supply of community-based rehabilitation services in Beijing and the needs of disabled residents for these services, underscoring the need for improved availability and for additional research.

  15. Assessing differential gene expression with small sample sizes in oligonucleotide arrays using a mean-variance model.

    PubMed

    Hu, Jianhua; Wright, Fred A

    2007-03-01

    The identification of the genes that are differentially expressed in two-sample microarray experiments remains a difficult problem when the number of arrays is very small. We discuss the implications of using ordinary t-statistics and examine other commonly used variants. For oligonucleotide arrays with multiple probes per gene, we introduce a simple model relating the mean and variance of expression, possibly with gene-specific random effects. Parameter estimates from the model have natural shrinkage properties that guard against inappropriately small variance estimates, and the model is used to obtain a differential expression statistic. A limiting value to the positive false discovery rate (pFDR) for ordinary t-tests provides motivation for our use of the data structure to improve variance estimates. Our approach performs well compared to other proposed approaches in terms of the false discovery rate.
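
    The shrinkage idea in the abstract can be illustrated with a toy moderated statistic: each gene's variance is pulled toward a cross-gene average so that genes with accidentally tiny sample variances do not dominate the ranking. This Python sketch is a deliberately simplified stand-in (shrinkage toward the global mean variance rather than the paper's mean-variance model with gene-specific random effects); shrunken_t and lam are illustrative names.

      import numpy as np

      def shrunken_t(x, y, lam=0.5):
          # Pooled per-gene variance, shrunk toward the global mean variance to
          # guard against inappropriately small denominators.
          nx, ny = x.shape[1], y.shape[1]
          s2 = (x.var(axis=1, ddof=1) * (nx - 1)
                + y.var(axis=1, ddof=1) * (ny - 1)) / (nx + ny - 2)
          s2_shrunk = lam * s2.mean() + (1 - lam) * s2
          se = np.sqrt(s2_shrunk * (1 / nx + 1 / ny))
          return (x.mean(axis=1) - y.mean(axis=1)) / se

      rng = np.random.default_rng(0)
      x = rng.normal(size=(1000, 3))     # 1000 genes, 3 arrays per group
      y = rng.normal(size=(1000, 3))
      y[:50] += 2.0                      # 50 truly differential genes
      t = shrunken_t(x, y)
      ranked = np.argsort(t)             # most negative t = strongest up in y
      print("true positives in top 50:", (ranked[:50] < 50).mean())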

  16. Transactional sex, condom and lubricant use among men who have sex with men in Lagos State, Nigeria.

    PubMed

    Ayoola, Oluyemisi O; Sekoni, Adekemi O; Odeyemi, Kofoworola A

    2013-12-01

    Men who have unprotected sex with men may also have unprotected sex with women and thus serve as an epidemiological bridge for HIV to the general population. This cross-sectional descriptive study assessed condom and lubricant use and the practice of transactional sex among men who have sex with men (MSM) in Lagos State. Simple random sampling was used to select three community centres, and a snowball sampling technique was used to recruit 321 respondents. About half (50.9%) had received payment for sex, while 45.4% had paid for sex in the past. Consistent condom use during the last 10 sexual encounters was practiced by 40.5% of respondents; 85.6% used lubricants, mostly with condoms; products used included KY jelly, body cream, saliva and Vaseline. There is a need for behavioural change to reduce the risky practices which predispose this group of MSM to HIV and sexually transmitted infections.

  17. Contamination of mercury in tongkat Ali hitam herbal preparations.

    PubMed

    Ang, H H; Lee, K L

    2006-08-01

    The Drug Control Authority (DCA), Malaysia, implemented the phase three registration of traditional medicines on 1 January 1992. A total of 100 products found in Malaysia containing tongkat Ali hitam, in various pharmaceutical dosage forms and as either single or combined preparations, were analyzed for the presence of the toxic heavy metal mercury using an atomic absorption spectrophotometer, after simple random sampling was performed to give each sample an equal chance of being selected in an unbiased manner. Results showed that 26% of these products contained 0.53-2.35 ppm of mercury and therefore do not comply with the quality requirement for traditional medicines in Malaysia, which is not more than 0.5 ppm of mercury. Of these 26 products, four had already been registered with the DCA, Malaysia, whilst the rest had not.

  18. Effect of autogenic relaxation on depression among menopausal women in rural areas of Thiruvallur District (Tamil Nadu).

    PubMed

    Sujithra, S

    2014-01-01

    An experimental study was conducted among 60 menopausal women, 30 each in the experimental and control groups, who met the inclusion criteria. The menopausal women were identified in both groups and their level of depression was assessed using the Cornell Dysthymia Rating Scale. A simple random sampling technique by lottery method was used for selecting the sample. Autogenic relaxation was practiced by the menopausal women for four weeks. The findings revealed that in the experimental group, after the autogenic relaxation intervention, 23 women (76.7%) had mild depression. The intervention effect in the experimental group was statistically significant (p < 0.05). There was also a statistically significant association between the effectiveness of autogenic relaxation on depression in the post-experimental group and type of family (p < 0.05).

  19. Self-assembled antireflection coatings for light trapping based on SiGe random metasurfaces

    NASA Astrophysics Data System (ADS)

    Bouabdellaoui, Mohammed; Checcucci, Simona; Wood, Thomas; Naffouti, Meher; Sena, Robert Paria; Liu, Kailang; Ruiz, Carmen M.; Duche, David; le Rouzo, Judikael; Escoubas, Ludovic; Berginc, Gerard; Bonod, Nicolas; Zazoui, Mimoun; Favre, Luc; Metayer, Leo; Ronda, Antoine; Berbezier, Isabelle; Grosso, David; Gurioli, Massimo; Abbarchi, Marco

    2018-03-01

    We demonstrate a simple self-assembly method based on solid state dewetting of ultrathin silicon films and germanium deposition for the fabrication of efficient antireflection coatings on silicon for light trapping. We fabricate SiGe islands with a high surface density, randomly positioned and broadly varied in size. This allows one to reduce the reflectance to low values over a broad spectral range (from 500 nm to 2500 nm) and a broad angular range (up to 55°) and to trap within the wafer a large portion of the impinging light (~40%) also below the band gap, where the Si substrate is nonabsorbing. Theoretical simulations agree with the experimental results, showing that the efficient light coupling into the substrate is mediated by Mie resonances formed within the SiGe islands. This lithography-free method can be implemented on arbitrarily thick or thin SiO2 layers and its duration depends only on the sample thickness and on the annealing temperature.

  20. Construction, Characterization, and Preliminary BAC-End Sequence Analysis of a Bacterial Artificial Chromosome Library of the Tea Plant (Camellia sinensis)

    PubMed Central

    Lin, Jinke; Kudrna, Dave; Wing, Rod A.

    2011-01-01

    We describe the construction and characterization of a publicly available BAC library for the tea plant, Camellia sinensis. Using modified methods, the library was constructed with the aim of developing public molecular resources to advance tea plant genomics research. The library consists of a total of 401,280 clones with an average insert size of 135 kb, providing an approximate coverage of 13.5 haploid genome equivalents. No empty vector clones were observed in a random sampling of 576 BAC clones. Further analysis of 182 BAC-end sequences from randomly selected clones revealed a GC content of 40.35% and low chloroplast and mitochondrial contamination. Repetitive sequence analyses indicated that LTR retrotransposons were the most predominant sequence class (86.93%–87.24%), followed by DNA transposons (11.16%–11.69%). Additionally, we found 25 simple sequence repeats (SSRs) that could potentially be used as genetic markers. PMID:21234344

  1. Self-Efficacy and Blood Pressure Self-Care Behaviors in Patients on Chronic Hemodialysis.

    PubMed

    Kauric-Klein, Zorica; Peters, Rosalind M; Yarandi, Hossein N

    2017-07-01

    This study examined the effects of an educative, self-regulation intervention on blood pressure self-efficacy, self-care outcomes, and blood pressure control in adults receiving hemodialysis. Simple randomization was done at the hemodialysis unit level. One hundred eighteen participants were randomized to a usual care (n = 59) or intervention group (n = 59). The intervention group received blood pressure education sessions and 12 weeks of individual counseling on self-regulation of blood pressure, fluid, and salt intake. There was no significant increase in self-efficacy scores within (F = 0.55, p = 0.46) or between groups at 12 weeks (F = 2.76, p = 0.10). Although the intervention was not successful, results from the total sample (N = 118) revealed that self-efficacy was significantly related to a number of self-care outcomes including decreased salt intake, lower interdialytic weight gain, increased adherence to blood pressure medications, and fewer missed hemodialysis appointments. Increased blood pressure self-efficacy was also associated with lower diastolic blood pressure.

  2. Reducing DNA context dependence in bacterial promoters

    PubMed Central

    Carr, Swati B.; Densmore, Douglas M.

    2017-01-01

    Variation in the DNA sequence upstream of bacterial promoters is known to affect the expression levels of the products they regulate, sometimes dramatically. While neutral synthetic insulator sequences have been found to buffer promoters from upstream DNA context, there are no established methods for designing effective insulator sequences with predictable effects on expression levels. We address this problem with Degenerate Insulation Screening (DIS), a novel method based on a randomized 36-nucleotide insulator library and a simple, high-throughput, flow-cytometry-based screen that randomly samples from a library of 4^36 potential insulated promoters. The results of this screen can then be compared against a reference uninsulated device to select a set of insulated promoters providing a precise level of expression. We verify this method by insulating the constitutive, inducible, and repressible promoters of a four-transcriptional-unit inverter (NOT-gate) circuit, finding both that order dependence is largely eliminated by insulation and that circuit performance is also significantly improved, with a 5.8-fold mean improvement in on/off ratio. PMID:28422998

  3. Simulated Tempering Distributed Replica Sampling, Virtual Replica Exchange, and Other Generalized-Ensemble Methods for Conformational Sampling.

    PubMed

    Rauscher, Sarah; Neale, Chris; Pomès, Régis

    2009-10-13

    Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
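
    The core move that all of these temperature-space methods share is a Metropolis update in an augmented (configuration, temperature) space. The Python sketch below runs a minimal simulated tempering simulation of a single particle in a one-dimensional double well; the temperature ladder, untuned log-weights g, and move sizes are illustrative placeholders, not the paper's protocol (tuning those weights is exactly the burden STDR and VREX aim to remove).

      import math, random

      def simulated_tempering(steps=200_000, seed=0):
          rng = random.Random(seed)
          temps = [0.2, 0.5, 1.0, 2.0]        # temperature ladder (illustrative)
          g = [0.0] * len(temps)              # log-weights, left untuned here
          U = lambda x: (x * x - 1.0) ** 2    # double well with minima at +-1
          x, k = 1.0, 0
          visits = [0] * len(temps)
          for _ in range(steps):
              # Metropolis move in configuration space at the current temperature
              xn = x + rng.uniform(-0.5, 0.5)
              if rng.random() < math.exp(min(0.0, -(U(xn) - U(x)) / temps[k])):
                  x = xn
              # occasional Metropolis move in temperature space
              if rng.random() < 0.1:
                  kn = max(0, min(len(temps) - 1, k + rng.choice((-1, 1))))
                  dlog = -U(x) / temps[kn] + U(x) / temps[k] + g[kn] - g[k]
                  if rng.random() < math.exp(min(0.0, dlog)):
                      k = kn
              visits[k] += 1
          return visits

      # With untuned weights the visits across temperatures are uneven, which is
      # why production methods estimate g adaptively or, as in STDR, sidestep it.
      print(simulated_tempering())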

  4. A cross sectional study on factors associated with harmful traditional practices among children less than 5 years in Axum town, north Ethiopia, 2013

    PubMed Central

    2014-01-01

    Background Every social grouping in the world has its own cultural practices and beliefs which guide its members on how they should live or behave. Harmful traditional practices that affect children include female genital mutilation, milk teeth extraction, food taboos, uvula cutting, keeping babies out of the sun, and feeding fresh butter to newborn babies. The objective of this study was to assess factors associated with harmful traditional practices among children less than 5 years of age in Axum town, North Ethiopia. Methods A community-based cross-sectional study was conducted with 752 participants who were selected using multi-stage sampling; a simple random sampling method was used to select ketenas from all kebelles of Axum town. After proportional allocation of the sample size, a systematic random sampling method was used to select the study participants. Data were collected using an interviewer-administered Tigrigna-version questionnaire, then entered and analyzed using SPSS version 16. Descriptive statistics were calculated and logistic regression was used to analyze the data. Results Of the total sample, 50.7% of the children were female, the mean age of the children was 26.28 months, and the majority of mothers had no formal education. About 87.8% of mothers had performed at least one traditional practice on their children; uvula cutting was practiced on 86.9% of children, followed by milk teeth extraction (12.5%) and eyebrow incision (2.4%). Fear of swelling, pus and rupture of the uvula was the main reason for performing uvula cutting. Conclusion The factors associated with harmful traditional practices were educational status, occupation and religion of the mothers, and harmful traditional practices performed on the mothers themselves. PMID:24952584

  5. A cross sectional study on factors associated with harmful traditional practices among children less than 5 years in Axum town, north Ethiopia, 2013.

    PubMed

    Gebrekirstos, Kahsu; Abebe, Mesfin; Fantahun, Atsede

    2014-06-21

    Every social grouping in the world has its own cultural practices and beliefs which guide its members on how they should live or behave. Harmful traditional practices that affect children include female genital mutilation, milk teeth extraction, food taboos, uvula cutting, keeping babies out of the sun, and feeding fresh butter to newborn babies. The objective of this study was to assess factors associated with harmful traditional practices among children less than 5 years of age in Axum town, North Ethiopia. A community-based cross-sectional study was conducted with 752 participants who were selected using multi-stage sampling; a simple random sampling method was used to select ketenas from all kebelles of Axum town. After proportional allocation of the sample size, a systematic random sampling method was used to select the study participants. Data were collected using an interviewer-administered Tigrigna-version questionnaire, then entered and analyzed using SPSS version 16. Descriptive statistics were calculated and logistic regression was used to analyze the data. Of the total sample, 50.7% of the children were female, the mean age of the children was 26.28 months, and the majority of mothers had no formal education. About 87.8% of mothers had performed at least one traditional practice on their children; uvula cutting was practiced on 86.9% of children, followed by milk teeth extraction (12.5%) and eyebrow incision (2.4%). Fear of swelling, pus and rupture of the uvula was the main reason for performing uvula cutting. The factors associated with harmful traditional practices were educational status, occupation and religion of the mothers, and harmful traditional practices performed on the mothers themselves.

  6. The 'number needed to sample' in primary care research. Comparison of two primary care sampling frames for chronic back pain.

    PubMed

    Smith, Blair H; Hannaford, Philip C; Elliott, Alison M; Smith, W Cairns; Chambers, W Alastair

    2005-04-01

    Sampling for primary care research must strike a balance between efficiency and external validity. For most conditions, even a large population sample will yield a small number of cases, yet other sampling techniques risk problems with extrapolation of findings. To compare the efficiency and external validity of two sampling methods for both an intervention study and epidemiological research in primary care--a convenience sample and a general population sample--we compared the response and follow-up rates and the demographic and clinical characteristics of each sample, and calculated the 'number needed to sample' (NNS) for a hypothetical randomized controlled trial. In 1996, we selected two random samples of adults from 29 general practices in Grampian, for an epidemiological study of chronic pain. One sample of 4175 was identified by an electronic search listing patients receiving regular analgesic prescriptions--the 'repeat prescription sample'. The other sample of 5036 was identified from all patients on practice lists--the 'general population sample'. Questionnaires, including demographic, pain and general health measures, were sent to all. A similar follow-up questionnaire was sent in 2000 to all those agreeing to participate in further research. We identified a potential group of subjects for a hypothetical trial in primary care based on a recently published trial (those aged 25-64, with severe chronic back pain, willing to participate in further research). The repeat prescription sample produced better response rates than the general sample overall (86% compared with 82%, P < 0.001), from both genders and from the oldest and youngest age groups. The NNS using convenience sampling was 10 for each member of the final potential trial sample, compared with 55 using general population sampling. There were important differences between the samples in age, marital and employment status, social class and educational level. However, among the potential trial sample, there were no demographic differences. Those from the repeat prescription sample had poorer indices than the general population sample in all pain and health measures. The repeat prescription sampling method was approximately five times more efficient than the general population method. However, demographic and clinical differences in the repeat prescription sample might hamper extrapolation of findings to the general population, particularly in an epidemiological study, and demonstrate that simple comparison with the age and gender of the target population is insufficient.
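
    The NNS itself is simple arithmetic: the number of individuals approached divided by the number who end up eligible for the trial. The eligible counts below are illustrative round numbers chosen only to reproduce the reported ratios, not the study's raw data.

      # 'Number needed to sample': individuals approached per eligible trial subject.
      def nns(sampled, eligible):
          return sampled / eligible

      print(round(nns(4175, 418)))   # repeat prescription frame -> about 10
      print(round(nns(5036, 92)))    # general population frame  -> about 55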

  7. Determining the effectiveness of the third person interview on the level of insight in psychotic patients.

    PubMed

    Mehdizadeh, Mahsa; Rezaei, Omid; Dolatshahi, Behrouz

    2016-11-30

    The goal of this study was to determine the effectiveness of the third person interview in increasing the level of insight and cooperation in psychotic patients. We used a quasi-experimental posttest design with an alternative-method group. Forty individuals with a definite diagnosis of psychosis were selected using simple random sampling and were assigned randomly to an experimental group (third person interview) or an alternative control group (clinical interview). The results indicated that with the third person interview, the insight level of the psychotic patients increased in all dimensions of insight except awareness of flat or blunted affect and awareness of unsociability. The results of the independent-samples t-test showed no significant difference in cooperation between the two groups of psychotic patients. It seems that the ability to distinguish one's own mental viewpoint from others' depends on the relative ability of psychotic patients to represent others' mental states (theory of mind). However, psychotic patients have severe impairment in the ability to represent their own mental states, resulting in impaired recognition of their mental disorder, its psychotic symptoms, the need for therapy, and the social consequences of their disorder. Copyright © 2016. Published by Elsevier Ireland Ltd.

  8. Determinants of Prelacteal Feeding in Rural Northern India

    PubMed Central

    Roy, Manas Pratim; Mohan, Uday; Singh, Shivendra Kumar; Singh, Vijay Kumar; Srivastava, Anand Kumar

    2014-01-01

    Background: Prelacteal feeding is an underestimated problem in a developing country like India, where the infant mortality rate is quite high. The present study tried to find out the factors determining prelacteal feeding in rural areas of north India. Methods: A cross-sectional study was conducted among recently delivered women in rural Uttar Pradesh, India. Multistage random sampling was used for selecting villages. From them, 352 recently delivered women were selected as subjects, following systematic random sampling. The chi-square test and logistic regression were used to find the predictors of prelacteal feeding. Results: Overall, 40.1% of mothers gave prelacteal feeds to their newborns. Factors significantly associated with the practice, after simple logistic regression, were age, caste, socioeconomic status, and place of delivery. At the multivariate level, age (odds ratio (OR) = 1.76, 95% confidence interval (CI) = 1.13-2.74), caste, and place of delivery (OR = 2.23, 95% CI = 1.21-4.10) were found to determine prelacteal feeding significantly, indicating that young age, higher caste, and home delivery increased the likelihood of the practice. Conclusions: The problem of prelacteal feeding is still prevalent in rural India. Age, caste, and place of delivery were associated with the problem. To ensure neonatal health, the problem should be addressed with due gravity, with emphasis on exclusive breastfeeding. PMID:24932400

  9. Prevalence of cardiovascular risk factors amongst traders in an urban market in Lagos, Nigeria.

    PubMed

    Odugbemi, T O; Onajole, A T; Osibogun, A O

    2012-03-01

    A descriptive cross-sectional study was carried out to determine the prevalence of cardiovascular risk factors amongst traders in an urban market in Lagos State. Tejuosho market, one of the large popular markets, was selected by balloting (a simple random sampling technique) from a list of major markets dealing in general goods that met the inclusion criteria. Four hundred (400) traders were then selected using systematic random sampling. Each trader was interviewed with a well-structured questionnaire and had blood pressure and anthropometric measurements (height, weight and body mass index) taken. Female traders made up 297 (74.3%) of the total population. The mean age was 45.48 ± 11.88 years for males and 42.29 ± 10.96 years for females. The majority, 239 (59.8%), fell within the age range of 35-55 years. The cardiovascular risk factors identified and their prevalence rates were hypertension (34.8%), physical inactivity (92%), previously diagnosed diabetes mellitus (0.8%), risky alcohol consumption (1%), cigarette smoking (0.3% in females and 17.5% in males), obesity (12.3%) and overweight (39.9%). The study recommended that any health-promoting, preventive or intervention programme for this population would have to be worked into their market activities if it is to make an impact.

  10. Estimation of AUC or Partial AUC under Test-Result-Dependent Sampling.

    PubMed

    Wang, Xiaofei; Ma, Junling; George, Stephen; Zhou, Haibo

    2012-01-01

    The area under the ROC curve (AUC) and partial area under the ROC curve (pAUC) are summary measures used to assess the accuracy of a biomarker in discriminating true disease status. The standard sampling approach used in biomarker validation studies is often inefficient and costly, especially when ascertaining the true disease status is costly and invasive. To improve efficiency and reduce the cost of biomarker validation studies, we consider a test-result-dependent sampling (TDS) scheme, in which subject selection for determining the disease state is dependent on the result of a biomarker assay. We first estimate the test-result distribution using data arising from the TDS design. With the estimated empirical test-result distribution, we propose consistent nonparametric estimators for AUC and pAUC and establish the asymptotic properties of the proposed estimators. Simulation studies show that the proposed estimators have good finite sample properties and that the TDS design yields more efficient AUC and pAUC estimates than a simple random sampling (SRS) design. A data example based on an ongoing cancer clinical trial is provided to illustrate the TDS design and the proposed estimators. This work can find broad applications in design and analysis of biomarker validation studies.
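
    For reference, the estimator that TDS generalizes is the standard nonparametric (Mann-Whitney) AUC under simple random sampling: the proportion of case-control pairs in which the case's marker value is higher, with ties counted as one half. A self-contained Python sketch with simulated data and illustrative parameters:

      import numpy as np

      def empirical_auc(markers, disease):
          # P(marker_case > marker_control), ties counted 1/2 -- the usual SRS
          # estimator; the paper's estimators adjust for TDS verification.
          cases = markers[disease == 1]
          ctrls = markers[disease == 0]
          diff = cases[:, None] - ctrls[None, :]
          return (diff > 0).mean() + 0.5 * (diff == 0).mean()

      rng = np.random.default_rng(1)
      y = rng.integers(0, 2, size=500)
      x = rng.normal(loc=1.0 * y)        # marker shifted up by 1 SD for cases
      print(empirical_auc(x, y))         # theoretical value is about 0.76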

  11. Intraherd correlation coefficients and design effects for bovine viral diarrhoea, infectious bovine rhinotracheitis, leptospirosis and neosporosis in cow-calf system herds in North-eastern Mexico.

    PubMed

    Segura-Correa, J C; Domínguez-Díaz, D; Avalos-Ramírez, R; Argaez-Sosa, J

    2010-09-01

    Knowledge of the intraherd correlation coefficient (ICC) and design effect (D) for infectious diseases is of interest for sample size calculation and for providing correct standard errors of prevalence estimates in cluster or two-stage sampling surveys. Information on 813 animals from 48 non-vaccinated cow-calf herds from North-eastern Mexico was used. The ICCs for bovine viral diarrhoea (BVD), infectious bovine rhinotracheitis (IBR), leptospirosis and neosporosis were calculated using a Bayesian approach adjusting for the sensitivity and specificity of the diagnostic tests. The ICC and D values for BVD, IBR, leptospirosis and neosporosis were 0.31 and 5.91, 0.18 and 3.88, 0.22 and 4.53, and 0.11 and 2.68, respectively. The ICC values differed from 0 and the D values were greater than 1; therefore, larger sample sizes are required to obtain the same precision in prevalence estimates as under a simple random sampling design. The reporting of ICC and D values is of great help in planning and designing two-stage sampling studies. Copyright © 2010 Elsevier B.V. All rights reserved.
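
    The reported D values are consistent with the usual design-effect formula D = 1 + (m - 1) x ICC, where m is the average cluster size (here 813/48, about 16.9). A quick check in Python, assuming that formula applies (small discrepancies are expected since herd sizes vary):

      m = 813 / 48                        # average animals per herd
      for disease, icc, reported in [("BVD", 0.31, 5.91), ("IBR", 0.18, 3.88),
                                     ("leptospirosis", 0.22, 4.53),
                                     ("neosporosis", 0.11, 2.68)]:
          d = 1 + (m - 1) * icc
          print(f"{disease}: D = {d:.2f} (reported {reported})")

      # Interpretation: a survey needing n subjects under simple random sampling
      # needs roughly n * D subjects under this two-stage design.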

  12. The cooling rate dependence of cation distributions in CoFe2O4

    NASA Technical Reports Server (NTRS)

    De Guire, Mark R.; O'Handley, Robert C.; Kalonji, Gretchen

    1989-01-01

    The room-temperature cation distributions in bulk CoFe2O4 samples, cooled at rates between less than 0.01 and about 1000 C/sec, have been determined using Mossbauer spectroscopy in an 80-kOe magnetic field. With increasing cooling rate, the quenched structure departs increasingly from the mostly ordered cation distribution ordinarily observed at room temperature. However, the cation disorder appears to saturate just short of a random distribution at very high cooling rates. These results are interpreted in terms of a simple relaxation model of cation redistribution kinetics. The disordered cation distributions should lead to increased magnetization and decreased coercivity in CoFe2O4.

  13. A Framework for Designing Cluster Randomized Trials with Binary Outcomes

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Martinez, Andres

    2011-01-01

    The purpose of this paper is to provide a frame work for approaching a power analysis for a CRT (cluster randomized trial) with a binary outcome. The authors suggest a framework in the context of a simple CRT and then extend it to a blocked design, or a multi-site cluster randomized trial (MSCRT). The framework is based on proportions, an…

  14. Understanding Statistical Power in Cluster Randomized Trials: Challenges Posed by Differences in Notation and Terminology

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Hedges, Larry; Borenstein, Michael

    2014-01-01

    Research designs in which clusters are the unit of randomization are quite common in the social sciences. Given the multilevel nature of these studies, the power analyses for these studies are more complex than in a simple individually randomized trial. Tools are now available to help researchers conduct power analyses for cluster randomized…

  15. Modeling 2D and 3D diffusion.

    PubMed

    Saxton, Michael J

    2007-01-01

    Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
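
    A minimal version of such a simulation fits in a few lines. The Python sketch below (parameter values are illustrative) runs a random walk on a periodic square lattice in which a fraction of sites are obstacles and blocked moves are simply rejected, then reports how the mean-squared displacement falls as crowding increases.

      import random

      def obstructed_walk(n_steps, obstacle_frac=0.3, size=200, seed=0):
          rng = random.Random(seed)
          # Roughly obstacle_frac of sites are blocked (duplicate draws collapse).
          obstacles = {(rng.randrange(size), rng.randrange(size))
                       for _ in range(int(obstacle_frac * size * size))}
          x = y = size // 2
          obstacles.discard((x, y))          # ensure the start site is free
          ux = uy = 0                        # unwrapped displacement
          for _ in range(n_steps):
              dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
              nx, ny = (x + dx) % size, (y + dy) % size
              if (nx, ny) not in obstacles:  # moves onto obstacles are rejected
                  x, y = nx, ny
                  ux, uy = ux + dx, uy + dy
          return ux * ux + uy * uy

      # Mean-squared displacement shrinks as the obstacle fraction grows.
      for frac in (0.0, 0.2, 0.4):
          msd = sum(obstructed_walk(1000, frac, seed=s) for s in range(200)) / 200
          print(frac, round(msd, 1))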

  16. In-situ pre-concentration through repeated sampling and pyrolysis for ultrasensitive determination of thallium in drinking water by electrothermal atomic absorption spectrometry.

    PubMed

    Liu, Liwei; Zheng, Huaili; Xu, Bincheng; Xiao, Lang; Chigan, Yong; Zhangluo, Yilan

    2018-03-01

    In this paper, a procedure for in-situ pre-concentration in a graphite furnace by repeated sampling and pyrolysis is proposed for the determination of ultra-trace thallium in drinking water by graphite furnace atomic absorption spectrometry (GF-AAS). Without any other laborious enrichment processes, which routinely result in analyte loss and contamination, thallium was concentrated directly and automatically in the graphite furnace and subsequently subjected to analysis. The effects of several key factors, such as the pyrolysis and atomization temperatures, the chemical modifier, and the number of repeated sampling cycles, were investigated. Under the optimized conditions, a limit of detection of 0.01 µg L⁻¹ was obtained, which satisfies the requirement for thallium in drinking water set by China's standard GB 5749-2006. Successful analysis of thallium in certified water samples and drinking water samples was demonstrated, with analytical results in good agreement with the certified values and with those obtained by inductively coupled plasma mass spectrometry (ICP-MS), respectively. Routine spike-recovery tests with randomly selected drinking water samples showed satisfactory recoveries of 80-96%. The proposed method is simple and sensitive for the screening of ultra-trace thallium in drinking water samples. Copyright © 2017. Published by Elsevier B.V.

  17. Methods for estimating the amount of vernal pool habitat in the northeastern United States

    USGS Publications Warehouse

    Van Meter, R.; Bailey, L.L.; Grant, E.H.C.

    2008-01-01

    The loss of small, seasonal wetlands is a major concern for a variety of state, local, and federal organizations in the northeastern U.S. Identifying and estimating the number of vernal pools within a given region is critical to developing long-term conservation and management strategies for these unique habitats and their faunal communities. We use three probabilistic sampling methods (simple random sampling, adaptive cluster sampling, and the dual-frame method) to estimate the number of vernal pools on protected, forested lands. Overall, these methods yielded similar values of vernal pool abundance for each study area, and suggest that photographic interpretation alone may grossly underestimate the number of vernal pools in forested habitats. We compare the relative efficiency of each method and discuss ways of improving precision. Acknowledging that the objectives of a study or monitoring program ultimately determine which sampling designs are most appropriate, we recommend that some type of probabilistic sampling method be applied. We view the dual-frame method as an especially useful way of combining incomplete remote sensing methods, such as aerial photograph interpretation, with a probabilistic sample of the entire area of interest to provide more robust estimates of the number of vernal pools and a more representative sample of existing vernal pool habitats.

  18. Generating constrained randomized sequences: item frequency matters.

    PubMed

    French, Robert M; Perruchet, Pierre

    2009-11-01

    All experimental psychologists understand the importance of randomizing lists of items. However, randomization is generally constrained, and these constraints (in particular, not allowing immediately repeated items), which are designed to eliminate particular biases, frequently engender others. We describe a simple Monte Carlo randomization technique that solves a number of these problems. However, in many experimental settings, we are concerned not only with the number and distribution of items but also with the number and distribution of transitions between items, and the algorithm mentioned above provides no control over this. We therefore introduce a simple technique that uses transition tables for generating correctly randomized sequences. We present an analytic method of producing item-pair frequency tables and item-pair transitional probability tables when immediate repetitions are not allowed. We illustrate these difficulties, and how to overcome them, with reference to a classic article on word segmentation in infants. Finally, we provide free access to an Excel file that allows users to generate transition tables with up to 10 different item types, as well as to generate appropriately distributed randomized sequences of any length without immediately repeated elements. This file is freely available from http://leadserv.u-bourgogne.fr/IMG/xls/TransitionMatrix.xls.
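
    A Python sketch of the simple Monte Carlo flavour of the problem (not the authors' transition-table algorithm): draw each next item with probability proportional to its remaining count, excluding the item just emitted, and restart on dead ends. Exactly as the abstract warns, this heuristic controls item frequencies but leaves transition frequencies biased; the function and parameter names are illustrative.

      import random

      def constrained_sequence(counts, seed=None):
          rng = random.Random(seed)
          for _ in range(1000):                  # restart on dead ends
              left = dict(counts)
              seq, last = [], None
              while any(left.values()):
                  choices = [(k, n) for k, n in left.items() if n > 0 and k != last]
                  if not choices:
                      break                      # dead end: start over
                  items, weights = zip(*choices)
                  last = rng.choices(items, weights)[0]
                  left[last] -= 1
                  seq.append(last)
              else:
                  return seq                     # all counts used, no repeats
          raise ValueError("counts admit no repetition-free ordering")

      print("".join(constrained_sequence({"A": 4, "B": 3, "C": 3}, seed=7)))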

  19. The Method of Randomization for Cluster-Randomized Trials: Challenges of Including Patients with Multiple Chronic Conditions

    PubMed Central

    Esserman, Denise; Allore, Heather G.; Travison, Thomas G.

    2016-01-01

    Cluster-randomized clinical trials (CRT) are trials in which the unit of randomization is not a participant but a group (e.g. healthcare systems or community centers). They are suitable when the intervention applies naturally to the cluster (e.g. healthcare policy); when lack of independence among participants may occur (e.g. nursing home hygiene); or when it is most ethical to apply an intervention to all within a group (e.g. school-level immunization). Because participants in the same cluster receive the same intervention, CRT may approximate clinical practice, and may produce generalizable findings. However, when not properly designed or interpreted, CRT may produce biased results. CRT designs have features that add complexity to statistical estimation and inference. Chief among these is the cluster-level correlation in response measurements induced by the randomization. A critical consideration is the experimental unit of inference; often it is desirable to consider intervention effects at the level of the individual rather than the cluster. Finally, given that the number of clusters available may be limited, simple forms of randomization may not achieve balance between intervention and control arms at either the cluster or participant level. In non-clustered clinical trials, balance of key factors may be easier to achieve because the sample can be made homogeneous by exclusion of participants with multiple chronic conditions (MCC). CRTs, which are often pragmatic, may eschew such restrictions. Failure to account for imbalance may induce bias and reduce validity. This article focuses on the complexities of randomization in the design of CRTs, such as the inclusion of patients with MCC, and imbalances in covariate factors across clusters. PMID:27478520

  20. Development of Maps of Simple and Complex Cells in the Primary Visual Cortex

    PubMed Central

    Antolík, Ján; Bednar, James A.

    2011-01-01

    Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence, and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because mechanisms responsible for map development drive receptive fields (RF) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layers 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this model is the first to explain how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity. PMID:21559067

  1. Catalytic micromotor generating self-propelled regular motion through random fluctuation.

    PubMed

    Yamamoto, Daigo; Mukai, Atsushi; Okita, Naoaki; Yoshikawa, Kenichi; Shioi, Akihisa

    2013-07-21

    Most current studies on nano/microscale motors that generate regular motion have adopted the strategy of fabricating a composite from different materials. In this paper, we report that a simple object made solely of platinum generates regular motion driven by a catalytic chemical reaction with hydrogen peroxide. Depending on the morphological symmetry of the catalytic particles, a rich variety of random and regular motions are observed. The experimental trend is well reproduced by a simple theoretical model that takes into account the anisotropic viscous effect on the self-propelled active Brownian fluctuation.

  2. Catalytic micromotor generating self-propelled regular motion through random fluctuation

    NASA Astrophysics Data System (ADS)

    Yamamoto, Daigo; Mukai, Atsushi; Okita, Naoaki; Yoshikawa, Kenichi; Shioi, Akihisa

    2013-07-01

    Most current studies on nano/microscale motors that generate regular motion have adopted the strategy of fabricating a composite from different materials. In this paper, we report that a simple object made solely of platinum generates regular motion driven by a catalytic chemical reaction with hydrogen peroxide. Depending on the morphological symmetry of the catalytic particles, a rich variety of random and regular motions are observed. The experimental trend is well reproduced by a simple theoretical model that takes into account the anisotropic viscous effect on the self-propelled active Brownian fluctuation.

  3. Verification of intravenous catheter placement by auscultation--a simple, noninvasive technique.

    PubMed

    Lehavi, Amit; Rudich, Utay; Schechtman, Moshe; Katz, Yeshayahu Shai

    2014-01-01

    Verification of proper placement of an intravenous catheter may not always be simple. We evaluated the auscultation technique for this purpose. Twenty healthy volunteers were randomized to have an 18G catheter inserted intravenously in either the right (12) or left (8) arm, and subcutaneously in the opposite arm. A standard stethoscope was placed over an area approximately 3 cm proximal to the tip of the catheter, in the presumed direction of the vein, to grade on a 0-6 scale the murmur heard on rapid injection of 2 mL of NaCl 0.9% solution. The auscultation was evaluated by a blinded staff anesthesiologist. All 20 intravenous injections were evaluated as flow murmurs and were graded an average of 5.65 (±0.98), whereas all 20 subcutaneous injections were evaluated as either crackles or no sound and were graded an average of 2.00 (±1.38), with no false-negative results. Sensitivity was calculated as 95%. Specificity and kappa could not be calculated because the false-positive group was empty. Being simple, handy and noninvasive, the auscultation technique is recommended for verifying the proper placement of an intravenous catheter when its position is uncertain. Data obtained in our limited sample of healthy subjects need to be confirmed in the clinical setting.

  4. Identification of apple cultivars on the basis of simple sequence repeat markers.

    PubMed

    Liu, G S; Zhang, Y G; Tao, R; Fang, J G; Dai, H Y

    2014-09-12

    DNA markers are useful tools that play an important role in plant cultivar identification. They are usually based on the polymerase chain reaction (PCR) and include simple sequence repeats (SSRs), inter-simple sequence repeats, and random amplified polymorphic DNA. However, DNA markers have not been used effectively for the complete identification of plant cultivars because of the lack of reference DNA fingerprints. Recently, a novel approach called the cultivar identification diagram (CID) strategy was developed to facilitate the use of DNA markers for separating plant individuals. The CID is designed so that a polymorphic marker generated from each PCR directly allows cultivar samples to be separated at each step. Therefore, it can be used to identify cultivars and varieties easily with fewer primers. In this study, 60 apple cultivars, including the main field cultivars and varieties descended from a Fuji x Telamon cross, were examined. Of the 20 pairs of SSR primers screened, 8 pairs gave reproducible, polymorphic DNA amplification patterns. The banding patterns obtained from these 8 primers were used to construct a CID map. Each cultivar or variety in this study was distinguished from the others completely, indicating that this method can be used for efficient cultivar identification. The results contribute to studies on germplasm resources and the seedling industry in fruit trees.

  5. Personal child and mother carbon monoxide exposures and kitchen levels: methods and results from a randomized trial of woodfired chimney cookstoves in Guatemala (RESPIRE).

    PubMed

    Smith, Kirk R; McCracken, John P; Thompson, Lisa; Edwards, Rufus; Shields, Kyra N; Canuz, Eduardo; Bruce, Nigel

    2010-07-01

    During the first randomized intervention trial (RESPIRE: Randomized Exposure Study of Pollution Indoors and Respiratory Effects) in air pollution epidemiology, we pioneered application of passive carbon monoxide (CO) diffusion tubes to measure long-term personal exposures to woodsmoke. Here we report on the protocols and validations of the method, trends in personal exposure for mothers and their young children, and the efficacy of the introduced improved chimney stove in reducing personal exposures and kitchen concentrations. Passive diffusion tubes originally developed for industrial hygiene applications were deployed on a quarterly basis to measure 48-hour integrated personal carbon monoxide exposures among 515 children 0-18 months of age and 532 mothers aged 15-55 years and area samples in a subsample of 77 kitchens, in households randomized into control and intervention groups. Instrument comparisons among types of passive diffusion tubes and against a continuous electrochemical CO monitor indicated that tubes responded nonlinearly to CO, and regression calibration was used to reduce this bias. Before stove introduction, the baseline arithmetic (geometric) mean 48-h child (n=270), mother (n=529) and kitchen (n=65) levels were, respectively, 3.4 (2.8), 3.4 (2.8) and 10.2 (8.4) p.p.m. The between-group analysis of the 3355 post-baseline measurements found CO levels to be significantly lower among the intervention group during the trial period: kitchen levels: -90%; mothers: -61%; and children: -52% in geometric means. No significant deterioration in stove effect was observed over the 18 months of surveillance. The reliability of these findings is strengthened by the large sample size made feasible by these unobtrusive and inexpensive tubes, measurement error reduction through instrument calibration, and a randomized, longitudinal study design. These results are from the first randomized trial of improved household energy technology in a developing country and demonstrate that a simple chimney stove can substantially reduce chronic exposures to harmful indoor air pollutants among women and infants.

  6. Personal child and mother carbon monoxide exposures and kitchen levels: Methods and results from a randomized trial of woodfired chimney cookstoves in Guatemala (RESPIRE)

    PubMed Central

    SMITH, KIRK R.; McCRACKEN, JOHN P.; THOMPSON, LISA; EDWARDS, RUFUS; SHIELDS, KYRA N.; CANUZ, EDUARDO; BRUCE, NIGEL

    2015-01-01

    During the first randomized intervention trial (RESPIRE: Randomized Exposure Study of Pollution Indoors and Respiratory Effects) in air pollution epidemiology, we pioneered application of passive carbon monoxide (CO) diffusion tubes to measure long-term personal exposures to woodsmoke. Here we report on the protocols and validations of the method, trends in personal exposure for mothers and their young children, and the efficacy of the introduced improved chimney stove in reducing personal exposures and kitchen concentrations. Passive diffusion tubes originally developed for industrial hygiene applications were deployed on a quarterly basis to measure 48-hour integrated personal carbon monoxide exposures among 515 children 0–18 months of age and 532 mothers aged 15–55 years and area samples in a subsample of 77 kitchens, in households randomized into control and intervention groups. Instrument comparisons among types of passive diffusion tubes and against a continuous electrochemical CO monitor indicated that tubes responded nonlinearly to CO, and regression calibration was used to reduce this bias. Before stove introduction, the baseline arithmetic (geometric) mean 48-h child (n=270), mother (n=529) and kitchen (n=65) levels were, respectively, 3.4 (2.8), 3.4 (2.8) and 10.2 (8.4) p.p.m. The between-group analysis of the 3355 post-baseline measurements found CO levels to be significantly lower among the intervention group during the trial period: kitchen levels: −90%; mothers: −61%; and children: −52% in geometric means. No significant deterioration in stove effect was observed over the 18 months of surveillance. The reliability of these findings is strengthened by the large sample size made feasible by these unobtrusive and inexpensive tubes, measurement error reduction through instrument calibration, and a randomized, longitudinal study design. These results are from the first randomized trial of improved household energy technology in a developing country and demonstrate that a simple chimney stove can substantially reduce chronic exposures to harmful indoor air pollutants among women and infants. PMID:19536077

  7. Nonuniform sampling theorems for random signals in the linear canonical transform domain

    NASA Astrophysics Data System (ADS)

    Shuiqing, Xu; Congmei, Jiang; Yi, Chai; Youqiang, Hu; Lei, Huang

    2018-06-01

    Nonuniform sampling can be encountered in various practical processes because of random events or a poor timebase. The analysis and application of nonuniform sampling of deterministic signals related to the linear canonical transform (LCT) have been well studied, but up to now no papers have been published on nonuniform sampling theorems for random signals related to the LCT. The aim of this article is to explore the nonuniform sampling and reconstruction of random signals associated with the LCT. First, some special nonuniform sampling models are briefly introduced. Second, based on these models, reconstruction theorems for random signals from various nonuniform samples associated with the LCT are derived. Finally, simulation results are presented to verify the accuracy of the sampling theorems. In addition, potential practical applications of nonuniform sampling for random signals are also discussed.

  8. Determining Phylogenetic Relationships Among Date Palm Cultivars Using Random Amplified Polymorphic DNA (RAPD) and Inter-Simple Sequence Repeat (ISSR) Markers.

    PubMed

    Haider, Nadia

    2017-01-01

    Investigation of genetic variation and phylogenetic relationships among date palm (Phoenix dactylifera L.) cultivars is useful for their conservation and genetic improvement. Various molecular markers, such as restriction fragment length polymorphisms (RFLPs), simple sequence repeats (SSRs), representational difference analysis (RDA), and amplified fragment length polymorphisms (AFLPs), have been developed to characterize date palm cultivars at the molecular level. The PCR-based markers random amplified polymorphic DNA (RAPD) and inter-simple sequence repeat (ISSR) are powerful tools for determining the relatedness of date palm cultivars that are difficult to distinguish morphologically. In this chapter, the principles, materials, and methods of the RAPD and ISSR techniques are presented. Analysis of data generated by these two techniques and the use of these data to reveal phylogenetic relationships among date palm cultivars are also discussed.

  9. Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule

    NASA Technical Reports Server (NTRS)

    Bay, Stephen D.; Schwabacher, Mark

    2003-01-01

    Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
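
    The essence of the algorithm is easy to capture in code. The Python sketch below scores each point by its distance to its k-th nearest neighbour, scans the (shuffled) data, and abandons a candidate as soon as its running k-NN distance falls below the weakest score in the current top-n list, since that running bound can only shrink as more neighbours are seen. It is a simplified illustration of the idea, not the paper's exact implementation.

      import random

      def top_outliers(data, n_out=5, k=5, seed=0):
          rng = random.Random(seed)
          pts = data[:]
          rng.shuffle(pts)                  # random order is what makes pruning bite
          top = []                          # current best (score, point) pairs
          cutoff = 0.0                      # weakest score in the top-n list
          for p in pts:
              dists = []                    # k smallest squared distances so far
              for q in pts:
                  if q is p:
                      continue
                  d = sum((a - b) ** 2 for a, b in zip(p, q))
                  dists.append(d)
                  dists.sort()
                  del dists[k:]
                  if len(dists) == k and dists[-1] < cutoff:
                      break                 # pruned: cannot reach the top-n list
              else:
                  top.append((dists[-1], p))
                  top.sort(reverse=True)
                  del top[n_out:]
                  if len(top) == n_out:
                      cutoff = top[-1][0]
          return top

      rng = random.Random(1)
      cloud = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(500)]
      cloud += [(8.0, 8.0), (-9.0, 7.0)]    # two planted outliers
      print([p for _, p in top_outliers(cloud, n_out=2)])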

  10. Mobile access to virtual randomization for investigator-initiated trials.

    PubMed

    Deserno, Thomas M; Keszei, András P

    2017-08-01

    Background/aims: Randomization is indispensable in clinical trials in order to provide unbiased treatment allocation and valid statistical inference. Improper handling of allocation lists can be avoided using central systems, for example, human-based services. However, central systems are unaffordable for investigator-initiated trials and might be inaccessible from some places where study subjects need allocations. We propose mobile access to virtual randomization, where the randomization lists are non-existent and the appropriate allocation is computed on demand. Methods: The core of the system architecture is an electronic data capture system or a clinical trial management system, which is extended by an R interface connecting to the R server using the Java R Interface. Mobile devices communicate via representational state transfer (REST) web services. Furthermore, a simple web-based setup allows non-statisticians to configure the appropriate statistics. Our comprehensive R script supports simple randomization, restricted randomization using a random allocation rule, block randomization, and stratified randomization for unblinded, single-blinded, and double-blinded trials. For each trial, the electronic data capture system or the clinical trial management system stores the randomization parameters and the subject assignments. Results: Apps are provided for iOS and Android, and subjects are randomized using smartphones. After logging onto the system, the user selects the trial and the subject, and the allocation number and treatment arm are displayed instantaneously and stored in the core system. So far, 156 subjects have been allocated from mobile devices serving five investigator-initiated trials. Conclusion: Transforming pre-printed allocation lists into virtual ones ensures the correct conduct of trials and guarantees strictly sequential processing in all trial sites. Covering 88% of the randomization models used in recent trials, virtual randomization becomes available for investigator-initiated trials and potentially for large multi-center trials.
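
    Of the schemes listed, block randomization is the one that most often trips up hand-rolled implementations, so a minimal sketch may help. This standalone Python example (not the authors' R implementation; names and defaults are illustrative) produces permuted blocks, guaranteeing near-balance between arms at every point in the sequence.

      import random

      def block_randomize(n_subjects, arms=("A", "B"), block_size=4, seed=42):
          # Permuted-block randomization: each block contains every arm equally
          # often, in random order, so arm counts never drift far apart.
          rng = random.Random(seed)
          assert block_size % len(arms) == 0
          allocation = []
          while len(allocation) < n_subjects:
              block = list(arms) * (block_size // len(arms))
              rng.shuffle(block)
              allocation.extend(block)
          return allocation[:n_subjects]

      alloc = block_randomize(10)
      print(alloc)
      print(alloc.count("A"), alloc.count("B"))   # differ by at most block_size/2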

  11. Eradication of Helicobacter pylori for prevention of ulcer recurrence after simple closure of perforated peptic ulcer: a meta-analysis of randomized controlled trials.

    PubMed

    Wong, Chung-Shun; Chia, Chee-Fah; Lee, Hung-Chia; Wei, Po-Li; Ma, Hon-Ping; Tsai, Shin-Han; Wu, Chih-Hsiung; Tam, Ka-Wai

    2013-06-15

    Eradication of Helicobacter pylori has become part of the standard therapy for peptic ulcer. However, the role of H pylori eradication in perforation of peptic ulcers remains controversial. It is unclear whether eradication of the bacterium confers prolonged ulcer remission after simple repair of perforated peptic ulcer. A systematic review and meta-analysis of randomized controlled trials was performed to evaluate the effects of H pylori eradication on prevention of ulcer recurrence after simple closure of perforated peptic ulcers. The primary outcome to evaluate these effects was the incidence of postoperative ulcers; the secondary outcome was the rate of H pylori elimination. The meta-analysis included five randomized controlled trials and 401 patients. A high prevalence of H pylori infection occurred in patients with perforated peptic ulcers. Eradication of H pylori significantly reduced the incidence of ulcer recurrence at 8 wk (risk ratio 2.97; 95% confidence interval: 1.06-8.29) and 1 y (risk ratio 1.49; 95% confidence interval: 1.10-2.03) postoperation. The rate of H pylori eradication was significantly higher in the treatment group than in the nontreatment group. Eradication therapy should be provided to patients with H pylori infection after simple closure of perforated gastroduodenal ulcers. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. The Effects of Direct Oxygen Supply During Static Cold Preservation of Rat Livers: An Experimental Study.

    PubMed

    Zumrutdal, Emin; Karateke, Faruk; Eser, Pınar Eylem; Turan, Umit; Ozyazici, Sefa; Sozutek, Alper; Gulkaya, Mustafa; Kunt, Mevlut

    2016-12-01

    We aimed to determine the biochemical and histopathologic effects of direct oxygen supply to the preservation fluid of static cold storage system with a simple method on rat livers. Sixteen rats were randomly divided into 2 groups: the control group, which contained Ringer's lactate as preservation fluid; and the oxygen group, which contained oxygen and Ringer's lactate for preservation. Each liver was placed in a bag containing 50 mL Ringer's lactate and placed in ice-filled storage containers. One hundred percent oxygen supplies were given via a simple, inexpensive system created in our laboratory, to the livers in oxygen group. We obtained samples for histopathologic evaluation in the 12th hour. In addition, 3 mL of preservation fluid was subjected to biochemical analysis at 0, sixth, and twelfth hours. Aspartate aminotransferase, alanine aminotransferase, lactate dehydrogenase, and pH levels were measured from the preservation fluid. In oxygen-supplemented group, the acceleration speed of increase in alanine aminotransferase and lactate dehydrogenase levels at sixth hour and lactate dehydrogenase, alanine aminotransferase, and lactate dehydrogenase levels at 12th hour were statistically significantly reduced. In histopathologic examination, all parameters except ballooning were statistically significantly better in the oxygen-supplemented group. This simple system for oxygenation of liver tissues during static cold storage was shown to be effective with good results in biochemical and histopathologic assessments. Because this is a simple, inexpensive, and easily available method, larger studies are warranted to evaluate its effects (especially in humans).

  13. Blessing of dimensionality: mathematical foundations of the statistical physics of data.

    PubMed

    Gorban, A N; Tyukin, I Y

    2018-04-28

    The concentrations of measure phenomena were discovered as the mathematical background to statistical mechanics at the end of the nineteenth/beginning of the twentieth century and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension, and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the thin structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of the linear Fisher's discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and determine a non-iterative (one-shot) procedure for their construction. This article is part of the theme issue 'Hilbert's sixth problem'. © 2018 The Author(s).
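
    The separability claim is easy to reproduce numerically. The sketch below is a minimal demonstration (assuming i.i.d. isotropic Gaussian data, for which the Fisher-style discriminant direction separating one point from a centred cloud reduces to the point's own direction); it illustrates the phenomenon and is not the authors' code:

        import numpy as np

        rng = np.random.default_rng(0)
        d, n = 1000, 10_000                 # dimension, size of the random set
        X = rng.standard_normal((n, d))     # i.i.d. Gaussian background points
        x = rng.standard_normal(d)          # one more random point to separate

        # Linear functional f(y) = <y, x> / <x, x>; f(x) = 1 by construction.
        scores = X @ x / (x @ x)
        print("points on x's side of the threshold 0.5:", int(np.sum(scores >= 0.5)))

    With d = 1000 the projections of the background points have standard deviation of order 1/sqrt(d), so essentially none of the 10,000 points crosses the threshold: a single random point is linearly separable from the entire exponentially large set, as the stochastic separation theorems predict.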

  14. Blessing of dimensionality: mathematical foundations of the statistical physics of data

    NASA Astrophysics Data System (ADS)

    Gorban, A. N.; Tyukin, I. Y.

    2018-04-01

    The concentrations of measure phenomena were discovered as the mathematical background to statistical mechanics at the end of the nineteenth/beginning of the twentieth century and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension, and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the thin structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of the linear Fisher's discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and determine a non-iterative (one-shot) procedure for their construction. This article is part of the theme issue `Hilbert's sixth problem'.

  15. Assessing the Relationship of Ancient and Modern Populations

    PubMed Central

    Schraiber, Joshua G.

    2018-01-01

    Genetic material sequenced from ancient samples is revolutionizing our understanding of the recent evolutionary past. However, ancient DNA is often degraded, resulting in low coverage, error-prone sequencing. Several solutions to this problem exist, ranging from simple approaches, such as selecting a read at random for each site, to more complicated approaches involving genotype likelihoods. In this work, we present a novel method for assessing the relationship of an ancient sample with a modern population, while accounting for sequencing error and postmortem damage by analyzing raw reads from multiple ancient individuals simultaneously. We show that, when analyzing SNP data, it is better to sequence more ancient samples to low coverage: two samples sequenced to 0.5× coverage provide better resolution than a single sample sequenced to 2× coverage. We also examined the power to detect whether an ancient sample is directly ancestral to a modern population, finding that, with even a few high-coverage individuals, ancient samples that are only slightly diverged from the modern population can be detected with ease. When we applied our approach to European samples, we found that no ancient samples represent direct ancestors of modern Europeans. We also found that, as shown previously, the most ancient Europeans appear to have had the smallest effective population sizes, indicating a role for agriculture in modern population growth. PMID:29167200

  16. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…

  17. Simple to complex modeling of breathing volume using a motion sensor.

    PubMed

    John, Dinesh; Staudenmayer, John; Freedson, Patty

    2013-06-01

    To compare simple and complex modeling techniques for estimating categories of low, medium, and high ventilation (VE) from ActiGraph™ activity counts. Vertical axis ActiGraph™ GT1M activity counts, oxygen consumption, and VE were measured during treadmill walking and running, sports, household chores, and labor-intensive employment activities. Categories of low (<19.3 l/min), medium (19.3 to 35.4 l/min) and high (>35.4 l/min) VE were derived from activity intensity classifications (light <2.9 METs, moderate 3.0 to 5.9 METs and vigorous >6.0 METs). We examined the accuracy of two simple modeling techniques (multiple regression and activity count cut-point analyses) and one complex technique (random forest) in predicting VE from activity counts. Prediction accuracy of the complex random forest technique was marginally better than the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). ActiGraph™ cut-points for low, medium and high VE were <1381, 1381 to 3660 and >3660 cpm. There were minor differences in prediction accuracy between the multiple regression and the random forest technique. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose-response relationship between internal exposure to pollutants and disease. Copyright © 2013 Elsevier B.V. All rights reserved.
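
    A minimal sketch of the complex technique described above, fitting a random forest to map activity counts to the three VE categories; the data are simulated placeholders built from the cut-points reported in the abstract, not the study's measurements:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        counts = rng.uniform(0, 6000, 500)            # hypothetical activity counts (cpm)
        ve_class = np.digitize(counts, [1381, 3660])  # 0=low, 1=medium, 2=high VE
        noisy = (counts + rng.normal(0, 400, 500)).reshape(-1, 1)  # imperfect predictor

        model = RandomForestClassifier(n_estimators=200, random_state=0)
        print("cross-validated accuracy:",
              cross_val_score(model, noisy, ve_class, cv=5).mean().round(2))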

  18. The potential of at-home prediction of the formation of urolithiasis by simple multi-frequency electrical conductivity of the urine and the comparison of its performance with urine ion-related indices, color and specific gravity.

    PubMed

    Silverio, Angelito A; Chung, Wen-Yaw; Cheng, Cheanyeh; Wang, Hai-Lung; Kung, Chien-Min; Chen, Jun; Tsai, Vincent F S

    2016-04-01

    It is important to control daily diet, water intake, and lifestyle, as well as to monitor the quality of urine, for urolithiasis prevention. For decades, many ion-related indices have been developed for predicting the formation of urinary stones or urolithiasis, such as EQUILs, relative supersaturation (RSS), Tiselius indices (TI), Robertson risk factor algorithms (RRFA) and, more recently, the Bonn risk index. However, they mostly demand robust laboratory analysis, are work-intensive, and even require complex computational programs to get the concentration patterns of several urine analytes. A simple and fast platform for measuring multi-frequency electrical conductivity (MFEC) of morning spot urine (random urine) to predict the onset of urolithiasis was implemented in this study. Its performance was compared to ion-related indices, urine color, and specific gravity. The concentrations of relevant ions, color, specific gravity (SG) and MFEC (tested at 1, 10, 100, and 500 kHz and 1 MHz) of 80 random urine samples were examined after collection. Then, the urine samples were stored at 4 °C for 24 h to determine whether sedimentation would occur. The ion-activity product index of calcium oxalate (AP(CaOx) EQ2) was calculated. The correlations between AP(CaOx) EQ2, urine color, SG and MFEC were analyzed. AP(CaOx) EQ2, urine color and MFEC (at all 5 frequencies) all demonstrated good prediction (p = 0.01, 0.01, 0.01, respectively) of stone formation. The positive correlation between AP(CaOx) EQ2 and MFEC was also significant (p = 0.01). MFEC provides a good metric for predicting the onset of urolithiasis, comparable to conventional ion-related indices and urine color. This technology can be implemented with ease for objectively monitoring the quality of urine at points-of-care or at home.

  19. Use of electronic healthcare records in large-scale simple randomized trials at the point of care for the documentation of value-based medicine.

    PubMed

    van Staa, T-P; Klungel, O; Smeeth, L

    2014-06-01

    A solid foundation of evidence of the effects of an intervention is a prerequisite of evidence-based medicine. The best source of such evidence is considered to be randomized trials, which are able to avoid confounding. However, they may not always estimate effectiveness in clinical practice. Databases that collate anonymized electronic health records (EHRs) from different clinical centres have been widely used for many years in observational studies. Randomized point-of-care trials have been initiated recently to recruit and follow patients using the data from EHR databases. In this review, we describe how EHR databases can be used for conducting large-scale simple trials and discuss the advantages and disadvantages of their use. © 2014 The Association for the Publication of the Journal of Internal Medicine.

  20. PERMutation Using Transposase Engineering (PERMUTE): A Simple Approach for Constructing Circularly Permuted Protein Libraries.

    PubMed

    Jones, Alicia M; Atkinson, Joshua T; Silberg, Jonathan J

    2017-01-01

    Rearrangements that alter the order of a protein's sequence are used in the lab to study protein folding, improve activity, and build molecular switches. One of the simplest ways to rearrange a protein sequence is through random circular permutation, where native protein termini are linked together and new termini are created elsewhere through random backbone fission. Transposase mutagenesis has emerged as a simple way to generate libraries encoding different circularly permuted variants of proteins. With this approach, a synthetic transposon (called a permuteposon) is randomly inserted throughout a circularized gene to generate vectors that express different permuted variants of a protein. In this chapter, we outline the protocol for constructing combinatorial libraries of circularly permuted proteins using transposase mutagenesis, and we describe the different permuteposons that have been developed to facilitate library construction.
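
    At the sequence level, random circular permutation is simple to express. The sketch below enumerates all circular permutants of a hypothetical short protein sequence and draws one at random, mimicking what a random permuteposon insertion produces; it is a conceptual illustration, not the wet-lab protocol:

        import random

        def circular_permutations(seq):
            """All circular permutants: the native termini are joined and new
            termini are created at each possible backbone position."""
            return [seq[i:] + seq[:i] for i in range(len(seq))]

        protein = "MKTAYIAKQR"                      # hypothetical sequence
        library = circular_permutations(protein)
        print(random.choice(library))               # one random permutant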

  1. Differential expression analysis for RNAseq using Poisson mixed models

    PubMed Central

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny

    2017-01-01

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632

  2. Robust online tracking via adaptive samples selection with saliency detection

    NASA Astrophysics Data System (ADS)

    Yan, Jia; Chen, Xi; Zhu, QiuPing

    2013-12-01

    Online tracking has shown to be successful in tracking previously unknown objects. However, two important factors lead to the drift problem in online tracking: the first is how to select correctly labeled samples even when the target locations are inaccurate, and the second is how to handle confusors that have features similar to the target's. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To avoid degrading the classifiers with mis-aligned samples, we introduce a saliency detection method into our tracking problem. Saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as the negative samples, we propose a reasonable selection criterion in which both the saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as binary classification via an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.
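
    The saliency component referenced above is based on image spectral residual analysis (in the style of Hou and Zhang's spectral residual method). The sketch below is a generic NumPy/SciPy rendering of that idea, not the authors' implementation; the filter sizes are conventional choices:

        import numpy as np
        from scipy.ndimage import uniform_filter, gaussian_filter

        def spectral_residual_saliency(img):
            """Saliency map from the spectral residual: subtract the locally
            averaged log-amplitude spectrum, keep the phase, invert, square."""
            f = np.fft.fft2(img)
            log_amp = np.log(np.abs(f) + 1e-8)
            phase = np.angle(f)
            residual = log_amp - uniform_filter(log_amp, size=3)
            saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
            return gaussian_filter(saliency, sigma=2.5)   # smooth the map

        frame = np.random.rand(64, 64)     # placeholder grayscale frame
        print(spectral_residual_saliency(frame).shape)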

  3. Casimir rack and pinion as a miniaturized kinetic energy harvester

    NASA Astrophysics Data System (ADS)

    Miri, MirFaez; Etesami, Zahra

    2016-08-01

    We study a nanoscale machine composed of a rack and a pinion with no contact, but intermeshed via the lateral Casimir force. We adopt a simple model for the random velocity of the rack subject to external random forces, namely, a dichotomous noise with zero mean value. We show that the pinion, even when it experiences random thermal torque, can do work against a load. The device thus converts the kinetic energy of the random motions of the rack into useful work.
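
    A zero-mean dichotomous (telegraph) noise of the kind adopted for the rack's velocity is straightforward to simulate; the flipping rate and amplitude below are arbitrary illustrative values, not parameters from the paper:

        import numpy as np

        def telegraph_noise(n_steps, dt=0.01, v=1.0, rate=5.0, seed=0):
            """Dichotomous noise: the value flips between +v and -v at
            (approximately) exponentially distributed waiting times."""
            rng = np.random.default_rng(seed)
            state = rng.choice([-v, v])
            out = np.empty(n_steps)
            for i in range(n_steps):
                if rng.random() < rate * dt:    # flip probability per time step
                    state = -state
                out[i] = state
            return out

        x = telegraph_noise(100_000)
        print("sample mean ≈", round(float(x.mean()), 3))   # ≈ 0 by symmetry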

  4. Simple non-laboratory- and laboratory-based risk assessment algorithms and nomogram for detecting undiagnosed diabetes mellitus.

    PubMed

    Wong, Carlos K H; Siu, Shing-Chung; Wan, Eric Y F; Jiao, Fang-Fang; Yu, Esther Y T; Fung, Colman S C; Wong, Ka-Wai; Leung, Angela Y M; Lam, Cindy L K

    2016-05-01

    The aim of the present study was to develop a simple nomogram that can be used to predict the risk of diabetes mellitus (DM) in the asymptomatic non-diabetic subjects based on non-laboratory- and laboratory-based risk algorithms. Anthropometric data, plasma fasting glucose, full lipid profile, exercise habits, and family history of DM were collected from Chinese non-diabetic subjects aged 18-70 years. Logistic regression analysis was performed on a random sample of 2518 subjects to construct non-laboratory- and laboratory-based risk assessment algorithms for detection of undiagnosed DM; both algorithms were validated on data of the remaining sample (n = 839). The Hosmer-Lemeshow test and area under the receiver operating characteristic (ROC) curve (AUC) were used to assess the calibration and discrimination of the DM risk algorithms. Of 3357 subjects recruited, 271 (8.1%) had undiagnosed DM defined by fasting glucose ≥7.0 mmol/L or 2-h post-load plasma glucose ≥11.1 mmol/L after an oral glucose tolerance test. The non-laboratory-based risk algorithm, with scores ranging from 0 to 33, included age, body mass index, family history of DM, regular exercise, and uncontrolled blood pressure; the laboratory-based risk algorithm, with scores ranging from 0 to 37, added triglyceride level to the risk factors. Both algorithms demonstrated acceptable calibration (Hosmer-Lemeshow test: P = 0.229 and P = 0.483) and discrimination (AUC 0.709 and 0.711) for detection of undiagnosed DM. A simple-to-use nomogram for detecting undiagnosed DM has been developed using validated non-laboratory-based and laboratory-based risk algorithms. © 2015 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.

  5. Reasons, perceived efficacy, and factors associated with complementary and alternative medicine use among Malaysian patients with HIV/AIDS.

    PubMed

    Hasan, Syed Shahzad; See, Choon Keong; Choong, Christopher Lee Kwok; Ahmed, Syed Imran; Ahmadi, Keivan; Anwar, Mudassir

    2010-11-01

    The primary objective of this study was to evaluate the pattern of use, reasons for use, and perceived effect of complementary and alternative medicine (CAM), accompanied by identification and comparison of the factors that are potentially associated with CAM use. This cross-sectional study was carried out in 325 randomly sampled patients with human immunodeficiency virus/acquired immune deficiency syndrome (HIV/AIDS) at HIV/AIDS referral clinics in the Hospital Sungai Buloh, Malaysia. Simple random sampling was used, with randomization based on patients' medical record numbers. Semistructured face-to-face interviews were conducted using 38 questions pertaining to the type, pattern, perceived efficacy, adverse effects, and influential factors associated with CAM use. In addition, CD4 count and viral load readings were recorded. Of the 325 randomly sampled patients with HIV/AIDS, 254 were using some form of CAM, a utilization rate of 78.2%. Vitamins and supplements (52.6%), herbal products (33.8%), and massage (16.6%) were the three most frequently used CAM modalities. Sociodemographic factors including education level (p = 0.021, r(s) = 0.148), monthly income (p = 0.001, r(s) = 0.260), and family history of CAM use (p = 0.001, r(s) = 0.231) were significantly associated and positively correlated with CAM use. However, the majority of these patients (68%) did not disclose CAM use to health care professionals. About half of those who rated their health as good or very good perceived this as a result of CAM use. This study confirmed that CAM use among individuals infected with HIV/AIDS falls within the previously reported range of 30%-100%. Given that some types of CAM reduced viral load and enhanced the immune system while other forms had a detrimental effect on virological suppression, more research and investigation are needed to optimize the use of CAM among patients with HIV/AIDS.

  6. Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems

    NASA Astrophysics Data System (ADS)

    Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros

    2015-04-01

    In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields and can hence become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the surface of an M-dimensional, unit radius hyper-sphere, (ii) relocating the N points on a representative set of N hyper-spheres of different radii, and (iii) transforming the coordinates of those points to lie on N different hyper-ellipsoids spanning the multivariate Gaussian distribution. The above method is applied in a dimensionality reduction context by defining flow-controlling points over which representative sampling of hydraulic conductivity is performed, thus also accounting for the sensitivity of the flow and transport model to the input hydraulic conductivity field. The performance of the various stratified sampling methods, LH, SL, and ME, is compared to that of SR sampling in terms of reproduction of ensemble statistics of hydraulic conductivity and solute concentration for different sample sizes N (numbers of realizations). The results indicate that ME sampling constitutes an equally if not more efficient simulation method than LH and SL sampling, as it can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than SR sampling. References [1] Gutjahr A.L. and Bras R.L. Spatial variability in subsurface flow and transport: A review. Reliability Engineering & System Safety, 42, 293-316, (1993). [2] Helton J.C. and Davis F.J. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69, (2003). [3] Switzer P. Multiple simulation of spatial fields. In: Heuvelink G, Lemmens M (eds) Proceedings of the 4th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Coronet Books Inc., pp 629-635 (2000).
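
    To make the SR-versus-LH contrast concrete, the sketch below draws a one-dimensional lognormal sample both ways; stratifying the unit interval into N equiprobable bins before applying the inverse CDF is what makes the LH sample span the distribution more evenly. This is a didactic illustration only, not the multi-dimensional SL or ME schemes proposed above:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        N = 20                                         # number of realizations

        # Simple random (SR) sampling of a standard lognormal value.
        sr = np.exp(norm.ppf(rng.uniform(0, 1, N)))

        # Latin hypercube (LH) sampling: one uniform draw per equiprobable stratum.
        strata = (np.arange(N) + rng.uniform(0, 1, N)) / N
        lh = np.exp(norm.ppf(rng.permutation(strata)))

        print("SR mean:", sr.mean().round(2), " LH mean:", lh.mean().round(2))
        # The exact mean of a standard lognormal is exp(0.5) ≈ 1.65; over repeated
        # runs the LH estimate fluctuates less than the SR estimate.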

  7. Urinary pregnandiol-3-glucuronide and estrone conjugates to creatinine ratios in early pregnancies complicated by vaginal bleeding.

    PubMed

    Davidson, B J

    1986-10-01

    There is no simple and rapid test available to predict the outcome of an early pregnancy complicated by vaginal bleeding. In this prospective study, 15 women with normal pregnancies collected a weekly urine sample between 6 and 13 weeks' gestation. A single random urine sample was obtained from 15 women with bleeding who continued to carry their child and 50 women who proceeded to have a spontaneous abortion (SAB). Pregnandiol-3-glucuronide (PDG) was determined with the use of enzyme-multiplied immunoassay technique (EMIT) and estrone conjugates (E1C) were measured by radioimmunoassay (RIA). The ratios of these metabolites to creatinine (C) were calculated. PDG/C ratios in normal women rose gradually from 6 weeks on. All women with bleeding during a normal pregnancy had ratios in the normal range, but 94% of women with a SAB had ratios below the normal range. The E1C/C ratio remained unchanged from 6 to 11 weeks and then rose rapidly. Until 11 weeks, there was no clear separation between the E1C/C ratios of the women with a SAB and the women with bleeding who continued their pregnancies. The prognosis of threatened abortion can be made by a urinary PDG/C ratio but not by an E1C/C ratio. EMIT is simple and quick and uses technology present in many laboratories.

  8. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    PubMed Central

    Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong

    2015-01-01

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit after the first 12-bit A/D conversion, reducing noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB. PMID:26712765
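
    The noise reduction quoted above is the familiar 1/sqrt(N) averaging effect. The sketch below checks it with synthetic Gaussian read noise; the signal level and noise amplitude are arbitrary placeholders, not the sensor's measured values:

        import numpy as np

        rng = np.random.default_rng(0)
        signal, sigma = 1.0, 0.85           # hypothetical pixel level and noise (a.u.)

        for n in (1, 4, 16):
            # Average n repeated conversions of the same pixel output.
            samples = signal + sigma * rng.standard_normal((100_000, n))
            measured = samples.mean(axis=1).std()
            print(f"n={n:2d}  residual noise ≈ {measured:.3f}"
                  f"  (predicted {sigma / np.sqrt(n):.3f})")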

  9. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pfeiffer, M., E-mail: mpfeiffer@irs.uni-stuttgart.de; Nizenkov, P., E-mail: nizenkov@irs.uni-stuttgart.de; Mirza, A., E-mail: mirza@irs.uni-stuttgart.de

    2016-02-15

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.

  10. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    NASA Astrophysics Data System (ADS)

    Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.

    2016-02-01

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.

  11. Assessing topology and surface orientation of an antimicrobial peptide magainin 2 using mechanically aligned bilayers and electron paramagnetic resonance spectroscopy.

    PubMed

    Mayo, Daniel J; Sahu, Indra D; Lorigan, Gary A

    2018-07-01

    Aligned CW-EPR membrane protein samples provide additional topology information that is absent from conventional randomly dispersed samples. These samples are aptly suited to studying antimicrobial peptides because of their dynamic peripheral topology. In this study, four consecutive substitutions of the model antimicrobial peptide magainin 2 were synthesized and labeled with the rigid TOAC spin label. The results revealed the helical tilts to be 66° ± 5°, 76° ± 5°, 70° ± 5°, and 72° ± 5° for the TOAC substitutions H7, S8, A9, and K10, respectively. These results are consistent with previously published literature. Using the EPR (electron paramagnetic resonance) mechanical alignment technique, these substitutions were used to critically assess the topology and surface orientation of the peptide with respect to the membrane. This methodology offers a rapid and simple approach to investigating the structural topology of antimicrobial peptides. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Effect of freeze/thaw cycles on several biomarkers in urine from patients with kidney disease.

    PubMed

    Zhang, Yinan; Luo, Yi; Lu, Huijuan; Wang, Niansong; Shen, Yixie; Chen, Ruihua; Fang, Pingyan; Yu, Hong; Wang, Congrong; Jia, Weiping

    2015-04-01

    Urine samples were collected from eleven randomly selected patients with kidney disease, including diabetic nephropathy, chronic nephritis, and nephritic syndrome. Urine samples were treated with one of four protocols for freezing and thawing: freeze directly and thaw directly; freeze directly and thaw by temperature gradient; freeze by temperature gradient and thaw directly; and freeze by temperature gradient and thaw by temperature gradient. After one to six freeze/thaw cycles at -20°C or -80°C, different biomarkers showed differential stabilities. The concentrations of total protein, calcium, and potassium did not change significantly after five freeze/thaw cycles at either -20°C or -80°C. Albumin could only sustain three freeze/thaw cycles at -20°C before it started to degrade. We recommend that urine be stored at -80°C as albumin and the organic ions could sustain five and six freeze/thaw cycles, respectively, using the simple "direct freeze and direct thaw" protocol. Furthermore, in most cases, gradient freeze/thaw cycles are not necessary for urine sample storage.

  13. Estimating the price elasticity of beer: meta-analysis of data with heterogeneity, dependence, and publication bias.

    PubMed

    Nelson, Jon P

    2014-01-01

    Precise estimates of price elasticities are important for alcohol tax policy. Using meta-analysis, this paper corrects average beer elasticities for heterogeneity, dependence, and publication selection bias. A sample of 191 estimates is obtained from 114 primary studies. Simple and weighted means are reported. Dependence is addressed by restricting the number of estimates per study, author-restricted samples, and author-specific variables. Publication bias is addressed using a funnel graph, trim-and-fill, and Egger's intercept model. Heterogeneity and selection bias are examined jointly in meta-regressions containing moderator variables for econometric methodology, primary data, and precision of estimates. Results for fixed- and random-effects regressions are reported. Country-specific effects and sample time periods are unimportant, but several methodology variables help explain the dispersion of estimates. In models that correct for selection bias and heterogeneity, the average beer price elasticity is about -0.20, roughly 50% less elastic than values commonly used in alcohol tax policy simulations. Copyright © 2013 Elsevier B.V. All rights reserved.
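
    For readers unfamiliar with Egger's intercept model mentioned above, the sketch below shows its usual regression form: the standardized effect is regressed on precision, and an intercept far from zero signals funnel-plot asymmetry. The elasticity estimates and standard errors are invented placeholders, not the 191 estimates analyzed in the paper:

        import numpy as np
        import statsmodels.api as sm

        effects = np.array([-0.15, -0.22, -0.35, -0.18, -0.50, -0.12])  # hypothetical
        ses     = np.array([ 0.05,  0.08,  0.20,  0.06,  0.30,  0.04])  # hypothetical

        z = effects / ses                       # standardized effects
        precision = 1 / ses
        fit = sm.OLS(z, sm.add_constant(precision)).fit()
        print("Egger intercept:", round(float(fit.params[0]), 2),
              " p =", round(float(fit.pvalues[0]), 3))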

  14. Simulations Using Random-Generated DNA and RNA Sequences

    ERIC Educational Resources Information Center

    Bryce, C. F. A.

    1977-01-01

    Using a very simple computer program written in BASIC, a very large number of random-generated DNA or RNA sequences are obtained. Students use these sequences to predict complementary sequences and translational products, evaluate base compositions, determine frequencies of particular triplet codons, and suggest possible secondary structures.…
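
    The exercises described translate directly into a few lines of code. A minimal modern analogue of the BASIC program (rendered in Python here, with an arbitrary sequence length) might look like:

        import random

        COMPLEMENT = str.maketrans("ACGT", "TGCA")

        def random_dna(n, seed=None):
            rng = random.Random(seed)
            return "".join(rng.choice("ACGT") for _ in range(n))

        seq = random_dna(30, seed=7)
        print("sequence:          ", seq)
        print("reverse complement:", seq.translate(COMPLEMENT)[::-1])
        print("GC content:        ", (seq.count("G") + seq.count("C")) / len(seq))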

  15. THE USE OF RANDOMIZED CONTROLLED TRIALS OF IN-HOME DRINKING WATER TREATMENT TO STUDY ENDEMIC WATERBORNE DISEASE

    EPA Science Inventory

    Randomized trials of water treatment have demonstrated the ability of simple water treatments to significantly reduce the incidence of gastrointestinal illnesses in developing countries where drinking water is of poor quality. Whether or not additional treatment at the tap reduc...

  16. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    PubMed

    Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T

    2010-03-10

    Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the non-uniform turn angle distribution of move step-lengths within a flight and the two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model, rather than a simple persistent random walk, correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.
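
    A BCRW of the kind described is easy to simulate: flights alternate between a directional mode (long flights, tightly correlated turns) and a re-orientation mode (short flights, loose turns), with exponentially distributed flight lengths. The parameter values in this sketch are illustrative, not the values fitted to the MCF-10A tracks:

        import numpy as np

        def bcrw(n_flights=200, p_directional=0.5, seed=0):
            """Bimodal correlated random walk with two alternating flight modes."""
            rng = np.random.default_rng(seed)
            pos, angle = np.zeros(2), 0.0
            track = [pos.copy()]
            for _ in range(n_flights):
                directional = rng.random() < p_directional
                mean_len = 20.0 if directional else 5.0   # exponential flight length
                turn_sd = 0.2 if directional else 1.2     # turn-angle dispersion
                for _ in range(max(1, int(rng.exponential(mean_len)))):
                    angle += rng.normal(0.0, turn_sd)     # correlated step directions
                    pos += np.array([np.cos(angle), np.sin(angle)])
                    track.append(pos.copy())
            return np.array(track)

        print(bcrw().shape)      # (number of recorded positions, 2)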

  17. Disparities in Cervical Cancer Characteristics and Survival Between White Hispanics and White Non-Hispanic Women.

    PubMed

    Khan, Hafiz M R; Gabbidon, Kemesha; Saxena, Anshul; Abdool-Ghany, Faheema; Dodge, John M; Lenzmeier, Taylor

    2016-10-01

    Cervical cancer is the second most common cancer among women, resulting in nearly 500,000 cases annually. Screening leads to better treatment and survival time. However, human papillomavirus (HPV) exposure, screening, and treatment vary among races and ethnicities in the United States. The purpose of this study is to examine disparities in the characteristics of cervical cancer and the survival of cases between White Hispanic (WH) and White non-Hispanic (WNH) women in the United States. We used a stratified random sampling method to select cervical cancer patient records from nine states, and a simple random sampling method to extract the demographic and disease characteristics data within states from the Surveillance Epidemiology and End Results (SEER) database. We used statistical probability distribution methods for discrete and continuous data. The chi-square test and independent samples t-test were used to evaluate statistically significant differences. Furthermore, the Cox proportional hazards regression and the Kaplan-Meier survival estimators were used to compare WH and WNH population survival times in the United States. The samples of WNH and WH women included 4,000 cervical cancer cases from 1973-2009. There were statistically significant differences between ethnicities in marital status (p < 0.001); primary site of cancer (p < 0.001); lymph node involvement (p < 0.001); grading and differentiation (p < 0.0001); and tumor behavior (p < 0.001). The mean age of diagnosis for the two groups showed no statistical difference. However, the mean survival time was 221.7 (standard deviation [SD] = 118.1) months for WNH women and 190.3 (SD = 120.3) months for WH women, which differed significantly (p < 0.001). Clear disparities exist in risk factors, cervical cancer characteristics, and survival time between WH and WNH women.

  18. Rare Event Simulation for T-cell Activation

    NASA Astrophysics Data System (ADS)

    Lipsmeier, Florian; Baake, Ellen

    2009-02-01

    The problem of statistical recognition is considered, as it arises in immunobiology, namely, the discrimination of foreign antigens against a background of the body's own molecules. The precise mechanism of this foreign-self-distinction, though one of the major tasks of the immune system, continues to be a fundamental puzzle. Recent progress has been made by van den Berg, Rand, and Burroughs (J. Theor. Biol. 209:465-486, 2001), who modelled the probabilistic nature of the interaction between the relevant cell types, namely, T-cells and antigen-presenting cells (APCs). Here, the stochasticity is due to the random sample of antigens present on the surface of every APC, and to the random receptor type that characterises individual T-cells. It has been shown previously (van den Berg et al. in J. Theor. Biol. 209:465-486, 2001; Zint et al. in J. Math. Biol. 57:841-861, 2008) that this model, though highly idealised, is capable of reproducing important aspects of the recognition phenomenon, and of explaining them on the basis of stochastic rare events. These results were obtained with the help of a refined large deviation theorem and were thus asymptotic in nature. Simulations have, so far, been restricted to the straightforward simple sampling approach, which does not allow for sample sizes large enough to address more detailed questions. Building on the available large deviation results, we develop an importance sampling technique that allows for a convenient exploration of the relevant tail events by means of simulation. With its help, we investigate the mechanism of statistical recognition in some depth. In particular, we illustrate how a foreign antigen can stand out against the self background if it is present in sufficiently many copies, although no a priori difference between self and nonself is built into the model.
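
    The gain from importance sampling over straightforward simple sampling is easy to see on a toy rare-event problem. The sketch below estimates a Gaussian tail probability by tilting the sampling distribution toward the rare region and reweighting by the exact likelihood ratio; it illustrates the general technique, not the T-cell model's specific tilted measure:

        import numpy as np

        rng = np.random.default_rng(0)
        n, th = 100_000, 4.5                 # rare event: X > 4.5 for X ~ N(0, 1)

        # Simple sampling: almost never hits the event at this sample size.
        plain = (rng.standard_normal(n) > th).mean()

        # Importance sampling from N(th, 1), reweighted by phi(y)/phi(y - th).
        y = rng.standard_normal(n) + th
        w = np.exp(-th * y + th**2 / 2)      # likelihood ratio of target to proposal
        tilted = ((y > th) * w).mean()

        print(f"plain: {plain:.2e}   importance-sampled: {tilted:.2e}")
        # The true tail probability is about 3.4e-06.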

  19. A computational method for optimizing fuel treatment locations

    Treesearch

    Mark A. Finney

    2006-01-01

    Modeling and experiments have suggested that spatial fuel treatment patterns can influence the movement of large fires. On simple theoretical landscapes consisting of two fuel types (treated and untreated) optimal patterns can be analytically derived that disrupt fire growth efficiently (i.e. with less area treated than random patterns). Although conceptually simple,...

  20. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our setup lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  1. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2017-12-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our setup lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  2. What Randomized Benchmarking Actually Measures

    DOE PAGES

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...

    2017-09-28

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.

  3. Statistical analysis for improving data precision in the SPME GC-MS analysis of blackberry (Rubus ulmifolius Schott) volatiles.

    PubMed

    D'Agostino, M F; Sanz, J; Martínez-Castro, I; Giuffrè, A M; Sicari, V; Soria, A C

    2014-07-01

    Statistical analysis has been used for the first time to evaluate the dispersion of quantitative data in the solid-phase microextraction (SPME) followed by gas chromatography-mass spectrometry (GC-MS) analysis of blackberry (Rubus ulmifolius Schott) volatiles with the aim of improving their precision. Experimental and randomly simulated data were compared using different statistical parameters (correlation coefficients, Principal Component Analysis loadings and eigenvalues). Non-random factors were shown to significantly contribute to total dispersion; groups of volatile compounds could be associated with these factors. A significant improvement of precision was achieved when considering percent concentration ratios, rather than percent values, among those blackberry volatiles with a similar dispersion behavior. As novelty over previous references, and to complement this main objective, the presence of non-random dispersion trends in data from simple blackberry model systems was evidenced. Although the influence of the type of matrix on data precision was proved, the possibility of a better understanding of the dispersion patterns in real samples was not possible from model systems. The approach here used was validated for the first time through the multicomponent characterization of Italian blackberries from different harvest years. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Design of an aid to visual inspection workstation

    NASA Astrophysics Data System (ADS)

    Tait, Robert; Harding, Kevin

    2016-05-01

    Visual inspection is the most common means of inspecting manufactured parts for random defects such as pits, scratches, breaks, corrosion, or general wear. The reason visual inspection is needed is the very random nature of what might constitute a defect. Some defects may be very rare, being seen once or twice a year, but may still be critical to part performance. Because of this random and rare nature, even the most sophisticated image analysis programs have not been able to recognize all possible defects. Key to any future automation of inspection is obtaining good sample images of what might be a defect. However, most visual checks take no images and consequently generate no digital data or historical record beyond a simple count. Any additional tool to capture such images must be able to do so without taking additional time. This paper outlines the design of a potential visual inspection station that would be compatible with current visual inspection methods, but afford the means for reliable digital imaging and, in many cases, augmented capabilities to assist the inspection. Considerations in this study included resolution, depth of field, feature highlighting, ease of digital capture and annotation, and inspection augmentation for repeatable registration, as well as operator assistance and training.

  5. Emergence of Primary Teeth in Children of Sunsari District of Eastern Nepal

    PubMed Central

    Gupta, Anita; Hiremath, SS; Singh, SK; Poudyal, S; Niraula, SR; Baral, DD; Singh, RK

    2007-01-01

    This study assessed the timing and eruption sequence of primary teeth in children of the Sunsari district of Eastern Nepal and compared the eruption patterns of males and females across various ethnic groups. Method This cross-sectional study included 501 subjects, aged 3 months to 60 months, selected by a simple random sampling method. The determinant variables such as age, gender, ethnicity, and eruption of teeth were recorded. Results This study provides model data on the emergence of primary teeth and the number of deciduous teeth in these children. This is the first study of its kind in Nepal. The findings of this study will serve as reference data for optimal use in clinical, academic, and research activities, especially for children of Eastern Nepal. PMID:18523631

  6. Data survey on the effect of product features on competitive advantage of selected firms in Nigeria.

    PubMed

    Olokundun, Maxwell; Iyiola, Oladele; Ibidunni, Stephen; Falola, Hezekiah; Salau, Odunayo; Amaihian, Augusta; Peter, Fred; Borishade, Taiye

    2018-06-01

    The main objective of this study was to present a data article that investigates the effect of product features on a firm's competitive advantage. Few studies have examined how the features of a product could help drive the competitive advantage of a firm. A descriptive research method was used. The Statistical Package for the Social Sciences (SPSS 22) was used to analyze one hundred and fifty (150) valid questionnaires completed by small business owners registered with the Small and Medium Enterprises Development Agency of Nigeria (SMEDAN). Stratified and simple random sampling techniques were employed; reliability and validity procedures were also confirmed. The field data set is made publicly available to enable critical or extended analysis.

  7. High-fidelity meshes from tissue samples for diffusion MRI simulations.

    PubMed

    Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C

    2010-01-01

    This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise simulation parameters and the complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally, we assess the quality of the data synthesized from the mesh models by comparison with scanner data as well as synthetic data from simple geometric models and simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models, although the synthesized data are fairly robust to the mesh resolution.

  8. An audit strategy for time-to-event outcomes measured with error: application to five randomized controlled trials in oncology.

    PubMed

    Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari

    2013-10-01

    Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.

  9. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    PubMed

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
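
    A minimal numerical sketch of the reweighting idea follows: x stands in for the gene-flow model output at each grain location and y for observed transgene presence; the field, parameters, and seed are all invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic field of 10,000 grain locations. x is the auxiliary variable
    # (gene-flow model output); y is observed transgene presence. All invented.
    N = 10_000
    x = rng.gamma(shape=0.5, scale=0.01, size=N)   # predicted cross-pollination
    y = rng.poisson(x).astype(float)               # observed transgene counts
    true_rate = y.mean()

    # Simple random sample of n grains.
    n = 200
    idx = rng.choice(N, size=n, replace=False)
    srs_est = y[idx].mean()

    # Ratio reweighting: scale the sample mean by how the sampled auxiliary
    # values compare with the field-wide auxiliary mean (classic ratio estimator).
    ratio_est = y[idx].mean() * x.mean() / x[idx].mean()

    print(f"true {true_rate:.4f}   SRS {srs_est:.4f}   ratio {ratio_est:.4f}")
    ```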

  10. Sampling Large Graphs for Anticipatory Analytics

    DTIC Science & Technology

    2015-05-15

    Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas… …systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges.

  11. Electromagnetic Scattering by Fully Ordered and Quasi-Random Rigid Particulate Samples

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2016-01-01

    In this paper we have analyzed circumstances under which a rigid particulate sample can behave optically as a true discrete random medium consisting of particles randomly moving relative to each other during measurement. To this end, we applied the numerically exact superposition T-matrix method to model far-field scattering characteristics of fully ordered and quasi-randomly arranged rigid multiparticle groups in fixed and random orientations. We have shown that, in and of itself, averaging optical observables over movements of a rigid sample as a whole is insufficient unless it is combined with a quasi-random arrangement of the constituent particles in the sample. Otherwise, certain scattering effects typical of discrete random media (including some manifestations of coherent backscattering) may not be accurately replicated.

  12. Epitaxial Growth of YBa2Cu3O7 Films onto LaAlO3 (100) by Using Oxalates

    NASA Astrophysics Data System (ADS)

    Dominguez, A. Bustamante; Felix, L. León; Garcia, J.; Santibañez, J. Flores; Valladares, L. De Los Santos; Gonzalez, J. C.; Anaya, A. Osorio; Pillaca, M.

    Due to the current necessity of obtaining epitaxial superconducting films at low cost, we report the growth of YBa2Cu3O7 (Y123) films by chemical deposition. The procedure involved simple steps such as the precipitation of stoichiometric amounts of yttrium, barium and copper acetates in oxalic acid (H2C2O4). The precursor solution was dripped onto LaAlO3 (100) substrates with the help of a Fisher pipette. The films were annealed in an oxygen atmosphere for 12 h at three different temperatures: 820 °C, 840 °C and 860 °C. After the 820 °C and 860 °C anneals, X-ray diffraction (XRD) analysis revealed high intensity of the (00l) reflections, denoting that most of the Y123 grains were c-axis oriented. In addition, we also observed a-axis oriented grains ((h00) reflection), minor randomly oriented grains and other phases (such as Y2BaCuO5 and CuO). In contrast, in the sample treated at 840 °C we noticed c- and a-axis oriented grains and very small amounts of randomly oriented grains, without the formation of other phases. From the magnetization versus temperature measurements, the critical temperatures were estimated at 70 K and 90 K for the samples annealed at 820 °C and 860 °C, respectively.

  13. RANDOM EVOLUTIONS, MARKOV CHAINS, AND SYSTEMS OF PARTIAL DIFFERENTIAL EQUATIONS

    PubMed Central

    Griego, R. J.; Hersh, R.

    1969-01-01

    Several authors have considered Markov processes defined by the motion of a particle on a fixed line with a random velocity [1, 6, 8, 10] or a random diffusivity [5, 12]. A “random evolution” is a natural but apparently new generalization of this notion. In this note we hope to show that this concept leads to simple and powerful applications of probabilistic tools to initial-value problems of both parabolic and hyperbolic type. We obtain existence theorems, representation theorems, and asymptotic formulas, both old and new. PMID:16578690

  14. Ergonomics intervention in an Iranian television manufacturing industry.

    PubMed

    Motamedzade, M; Mohseni, M; Golmohammadi, R; Mahjoob, H

    2011-01-01

    The primary goal of this study was to use the Strain Index (SI) to assess the risk of developing upper extremity musculoskeletal disorders in a television (TV) manufacturing industry and to evaluate the effectiveness of an educational intervention. The project was designed and implemented in two stages. In the first stage, the SI score was calculated and the Nordic Musculoskeletal Questionnaire (NMQ) was completed. Following this, hazardous jobs were identified and the existing risk factors in these jobs were studied. Based on these data, an educational intervention was designed and implemented. In the second stage, three months after implementing the interventions, the SI score was re-calculated and the NMQ completed again. Eighty assembly workers of an Iranian TV manufacturing industry were selected using a simple random sampling approach. The results showed that the SI score correlated well with the symptoms of musculoskeletal disorders. The prevalence of signs and symptoms of musculoskeletal disorders was also significantly reduced after the intervention. A well-conducted interventional program with the full participation of all stakeholders can lead to a decrease in musculoskeletal disorders.

  15. Comparing the Efficiency of Two Different Extraction Techniques in Removal of Maxillary Third Molars: A Randomized Controlled Trial.

    PubMed

    Edward, Joseph; Aziz, Mubarak A; Madhu Usha, Arjun; Narayanan, Jyothi K

    2017-12-01

    Extractions are routine procedures in dental surgery. Traditional extraction techniques use a combination of severing the periodontal attachment, luxation with an elevator, and removal with forceps. This study introduces a new technique for the extraction of maxillary third molars, the Joedds technique, and compares it with the conventional technique. One hundred people were included in the study and divided into two groups by means of simple random sampling. In one group the conventional technique of maxillary third molar extraction was used, and in the second the Joedds technique was used. Statistical analysis was carried out with Student's t-test. Analysis of the 100 patients showed that the novel Joedds technique produced minimal trauma to the surrounding tissues, fewer tuberosity and root fractures, and an extraction time of <2 min compared with the other group. This novel technique proved better than the conventional third molar extraction technique, with minimal complications, provided cases are properly selected and the right technique is applied.

  16. Prevalence of faecal incontinence in community-dwelling older people in Bali, Indonesia.

    PubMed

    Suyasa, I Gede Putu Darma; Xiao, Lily Dongxia; Lynn, Penelope Ann; Skuza, Pawel Piotr; Paterson, Jan

    2015-06-01

    To explore the prevalence rate of faecal incontinence in community-dwelling older people, associated factors, impact on quality of life and practices in managing faecal incontinence. Using a cross-sectional design, 600 older people aged 60+ were randomly selected from a population of 2916 in Bali, Indonesia using a simple random sampling technique. Three hundred and three participants were interviewed (response rate 51%). The prevalence of faecal incontinence was 22.4% (95% confidence interval (CI) 18.0-26.8). Self-reported constipation (odds ratio (OR) 3.68, 95% CI 1.87-7.24) and loose stools (OR 2.66, 95% CI 1.47-4.78) were significantly associated with faecal incontinence. There was a strong positive correlation between total bowel control score and total quality-of-life score (P < 0.001, rs = 0.61) indicating significant alterations in quality of life. The current management practices varied from changing diet, visiting health-care professionals, and using modern and traditional medicines. Faecal incontinence is common among community-dwelling older people in Bali. © 2014 ACOTA.

  17. A null model for Pearson coexpression networks.

    PubMed

    Gobbi, Andrea; Jurman, Giuseppe

    2015-01-01

    Gene coexpression networks inferred by correlation from high-throughput profiling such as microarray data represent simple but effective structures for discovering and interpreting linear gene relationships. In recent years, several approaches have been proposed to tackle the problem of deciding when the resulting correlation values are statistically significant. This is most crucial when the number of samples is small, yielding a non-negligible chance that even high correlation values are due to random effects. Here we introduce a novel hard thresholding solution based on the assumption that a coexpression network inferred by randomly generated data is expected to be empty. The threshold is theoretically derived by means of an analytic approach and, as a deterministic independent null model, it depends only on the dimensions of the starting data matrix, with assumptions on the skewness of the data distribution compatible with the structure of gene expression levels data. We show, on synthetic and array datasets, that the proposed threshold is effective in eliminating all false positive links, with an offsetting cost in terms of false negative detected edges.

  18. Layers: A molecular surface peeling algorithm and its applications to analyze protein structures

    PubMed Central

    Karampudi, Naga Bhushana Rao; Bahadur, Ranjit Prasad

    2015-01-01

    We present an algorithm, ‘Layers’, to peel the atoms of proteins as layers. Using Layers we show an efficient way to transform protein structures into a 2D pattern, named the residue transition pattern (RTP), which is independent of molecular orientation. RTP explains the folding patterns of proteins, and hence identifying similarity between proteins is simpler and more reliable using RTP than with standard sequence- or structure-based methods. Moreover, Layers generates a fine-tunable coarse model of the molecular surface by using non-random sampling. The coarse model can be used for shape comparison, protein recognition and ligand design. Additionally, Layers can be used to develop biased initial configurations of molecules for protein folding simulations. We have developed a random forest classifier to predict the RTP of a given polypeptide sequence. Layers is a standalone application; however, it can be merged with other applications to reduce the computational load when working with large datasets of protein structures. Layers is available freely at http://www.csb.iitkgp.ernet.in/applications/mol_layers/main. PMID:26553411

  19. Assessing significance in a Markov chain without mixing.

    PubMed

    Chikina, Maria; Frieze, Alan; Pegden, Wesley

    2017-03-14

    We present a statistical test to detect that a presented state of a reversible Markov chain was not chosen from a stationary distribution. In particular, given a value function for the states of the Markov chain, we would like to show rigorously that the presented state is an outlier with respect to the values, by establishing a p value under the null hypothesis that it was chosen from a stationary distribution of the chain. A simple heuristic used in practice is to sample ranks of states from long random trajectories on the Markov chain and compare these with the rank of the presented state; if the presented state is a 0.1% outlier compared with the sampled ranks (its rank is in the bottom 0.1% of sampled ranks), then this observation should correspond to a p value of 0.001. This significance is not rigorous, however, without good bounds on the mixing time of the Markov chain. Our test is the following: Given the presented state in the Markov chain, take a random walk from the presented state for any number of steps. We prove that observing that the presented state is an ε-outlier on the walk is significant at p = √(2ε) under the null hypothesis that the state was chosen from a stationary distribution. We assume nothing about the Markov chain beyond reversibility and show that significance at p ≈ √ε is best possible in general. We illustrate the use of our test with a potential application to the rigorous detection of gerrymandering in Congressional districting.
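
    A toy sketch of the test follows: a simple random walk on a cycle of states with i.i.d. values, checking how extreme the presented state's value is along a trajectory started from it. The chain, value function, and constants are invented; the √(2ε) bound is applied as stated above.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy reversible chain: simple random walk on a cycle of 1,000 states,
    # with an arbitrary i.i.d. value function on the states.
    n_states = 1000
    values = rng.normal(size=n_states)
    presented = int(np.argmin(values))        # a suspiciously extreme state

    # Random walk started from the presented state.
    k = 100_000
    state = presented
    seen = np.empty(k + 1)
    seen[0] = values[state]
    for t in range(1, k + 1):
        state = (state + rng.choice((-1, 1))) % n_states
        seen[t] = values[state]

    # Fraction of the trajectory at least as extreme as the presented state.
    eps = np.mean(seen <= values[presented])
    print(f"eps = {eps:.5f}  ->  significant at p = sqrt(2*eps) = {np.sqrt(2 * eps):.4f}")
    ```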

  20. Attendance at cultural events, reading books or periodicals, and making music or singing in a choir as determinants for survival: Swedish interview survey of living conditions.

    PubMed

    Bygren, L O; Konlaan, B B; Johansson, S E

    To investigate the possible influence of attendance at cultural events, reading books or periodicals, and making music or singing in a choir as determinants for survival. A simple random sample was drawn of 15,198 individuals aged 16-74 years. Of these, 85% (12,982) were interviewed by trained non-medical interviewers between 1982 and 1983 about cultural activities. They were followed up with respect to survival until 31 December 1991. Swedish interview survey of living conditions comprising a random sample of the adult Swedish population. 12,675 people interviewed between 1982 and 1983. Survival of subjects after controlling for eight confounding variables: age, sex, education level, income, long term disease, social network, smoking, and physical exercise. 6,301 men and 6,374 women were followed up; 533 men and 314 women died during this period. The control variables influenced survival in the expected directions, except for social network in men, for which a significant negative effect was found when the analysis was made separately for men and women. After controlling for the eight confounders, people who rarely attended cultural events had higher mortality than those attending most often, the relative risk being 1.57 (95% confidence interval 1.18 to 2.09). Attendance at cultural events may have a positive influence on survival. Long term follow up of large samples, with confounders well controlled for and the cultural stimulation more highly specified, should be used to try to falsify the hypothesis before experiments start.

  1. Investigating the magnetic inclination angle distribution of γ-ray-loud radio pulsars

    NASA Astrophysics Data System (ADS)

    Rookyard, S. C.; Weltevrede, P.; Johnston, S.

    2015-02-01

    Several studies have shown the distribution of pulsars' magnetic inclination angles to be skewed towards low values compared with the distribution expected if the rotation and magnetic axes are placed randomly on the star. Here, we focus on a sample of 28 γ-ray-detected pulsars using data taken as part of the Parkes telescope's FERMI timing program. In doing so, we find a preference in the sample for low magnetic inclination angles, α, in stark contrast to both the expectation that the magnetic and rotation axes are orientated randomly at the birth of the pulsar and to γ-ray-emission-model-based expected biases. In this paper, after exploring potential explanations, we conclude that there are two possible causes of this preference, namely that low α values are intrinsic to the sample, or that the emission regions extend outside what is traditionally thought to be the open-field-line region in a way which is dependent on the magnetic inclination. Each possibility is expected to have important consequences, ranging from supernova physics to population studies of pulsars and considerations of the radio beaming fraction. We also present a simple conversion scheme between the observed and intrinsic magnetic inclinations which is valid under the assumption that the observed skew is not intrinsic and which can be applied to all existing measurements. We argue that extending the active-field-line region will help to resolve the existing tension between emission geometries derived from radio polarization measurements and those required to model γ-ray light curves.

  2. Assessing significance in a Markov chain without mixing

    PubMed Central

    Chikina, Maria; Frieze, Alan; Pegden, Wesley

    2017-01-01

    We present a statistical test to detect that a presented state of a reversible Markov chain was not chosen from a stationary distribution. In particular, given a value function for the states of the Markov chain, we would like to show rigorously that the presented state is an outlier with respect to the values, by establishing a p value under the null hypothesis that it was chosen from a stationary distribution of the chain. A simple heuristic used in practice is to sample ranks of states from long random trajectories on the Markov chain and compare these with the rank of the presented state; if the presented state is a 0.1% outlier compared with the sampled ranks (its rank is in the bottom 0.1% of sampled ranks), then this observation should correspond to a p value of 0.001. This significance is not rigorous, however, without good bounds on the mixing time of the Markov chain. Our test is the following: Given the presented state in the Markov chain, take a random walk from the presented state for any number of steps. We prove that observing that the presented state is an ε-outlier on the walk is significant at p = √(2ε) under the null hypothesis that the state was chosen from a stationary distribution. We assume nothing about the Markov chain beyond reversibility and show that significance at p ≈ √ε is best possible in general. We illustrate the use of our test with a potential application to the rigorous detection of gerrymandering in Congressional districting. PMID:28246331

  3. Emissivity of half-space random media. [in passive remote sensing

    NASA Technical Reports Server (NTRS)

    Tsang, L.; Kong, J. A.

    1976-01-01

    Scattering of electromagnetic waves by a half-space random medium with three-dimensional correlation functions is studied with the Born approximation. The emissivity is calculated from a simple integral and is illustrated for various cases. The results are valid over a wavelength range smaller or larger than the correlation lengths.

  4. Assessment of wadeable stream resources in the driftless area ecoregion in Western Wisconsin using a probabilistic sampling design.

    PubMed

    Miller, Michael A; Colby, Alison C C; Kanehl, Paul D; Blocksom, Karen

    2009-03-01

    The Wisconsin Department of Natural Resources (WDNR), with support from the U.S. EPA, conducted an assessment of wadeable streams in the Driftless Area ecoregion in western Wisconsin using a probabilistic sampling design. This ecoregion encompasses 20% of Wisconsin's land area and contains 8,800 miles of perennial streams. Randomly-selected stream sites (n = 60) equally distributed among stream orders 1-4 were sampled. Watershed land use, riparian and in-stream habitat, water chemistry, macroinvertebrate, and fish assemblage data were collected at each true random site and an associated "modified-random" site on each stream that was accessed via a road crossing nearest to the true random site. Targeted least-disturbed reference sites (n = 22) were also sampled to develop reference conditions for various physical, chemical, and biological measures. Cumulative distribution function plots of various measures collected at the true random sites evaluated with reference condition thresholds, indicate that high proportions of the random sites (and by inference the entire Driftless Area wadeable stream population) show some level of degradation. Study results show no statistically significant differences between the true random and modified-random sample sites for any of the nine physical habitat, 11 water chemistry, seven macroinvertebrate, or eight fish metrics analyzed. In Wisconsin's Driftless Area, 79% of wadeable stream lengths were accessible via road crossings. While further evaluation of the statistical rigor of using a modified-random sampling design is warranted, sampling randomly-selected stream sites accessed via the nearest road crossing may provide a more economical way to apply probabilistic sampling in stream monitoring programs.

  5. Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel

    ERIC Educational Resources Information Center

    Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.

    2007-01-01

    A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…

  6. Piezoelectric and Electrostrictive Materials for Transducer Applications.

    DTIC Science & Technology

    1984-05-01

    the stress is applied to the sample using a simple lever arm to provide a high load at the center point of a piston of hardened steel. To avoid Poisson ratio…relatively simple 'screening test' for PZT powders, powder samples were prepared from six different PZT transducer formulations supplied by the Navy…to 600°C showed the largest broadening. Heat treatment of this sample to 1,100°C reduced the broadening markedly, indicating that simple chemical co…

  7. Ion mobility spectrometry fingerprints: A rapid detection technology for adulteration of sesame oil.

    PubMed

    Zhang, Liangxiao; Shuai, Qian; Li, Peiwu; Zhang, Qi; Ma, Fei; Zhang, Wen; Ding, Xiaoxia

    2016-02-01

    A simple and rapid detection technology was proposed based on ion mobility spectrometry (IMS) fingerprints to determine potential adulteration of sesame oil. Oil samples were diluted with n-hexane and analyzed by IMS for 20 s. Chemometric methods were then employed to establish discriminant models for sesame oils and four other edible oils, for pure and adulterated sesame oils, and for pure and counterfeit sesame oils, respectively. A Random Forests (RF) classification model correctly classified all five types of edible oils. The detection results indicated that the discriminant models built with the recursive support vector machine (R-SVM) method could identify adulterated sesame oil samples (⩾10%) with an accuracy of 94.2%. IMS was therefore shown to be an effective method for detecting adulterated sesame oils. IMS fingerprints also work well for detecting counterfeit sesame oils produced by adding sesame oil essence to cheaper edible oils. Copyright © 2015 Elsevier Ltd. All rights reserved.
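
    As a rough illustration of the chemometric step, the sketch below trains a random forest on synthetic stand-ins for IMS fingerprints; the class structure, bin count, and parameters are invented and bear no relation to the study's data.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)

    # Synthetic stand-in for IMS fingerprints: 24 spectra per oil class,
    # 50 drift-time bins, classes separated by a small mean shift.
    X = np.vstack([rng.normal(loc=0.3 * c, scale=1.0, size=(24, 50))
                   for c in range(5)])
    y = np.repeat(np.arange(5), 24)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```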

  8. There is More than a Power Law in Zipf

    PubMed Central

    Cristelli, Matthieu; Batty, Michael; Pietronero, Luciano

    2012-01-01

    The largest cities, the most frequently used words, the income of the richest countries, and the most wealthy billionaires, can be all described in terms of Zipf’s Law, a rank-size rule capturing the relation between the frequency of a set of objects or events and their size. It is assumed to be one of many manifestations of an underlying power law like Pareto’s or Benford’s, but contrary to popular belief, from a distribution of, say, city sizes and a simple random sampling, one does not obtain Zipf’s law for the largest cities. This pathology is reflected in the fact that Zipf’s Law has a functional form depending on the number of events N. This requires a fundamental property of the sample distribution which we call ‘coherence’ and it corresponds to a ‘screening’ between various elements of the set. We show how it should be accounted for when fitting Zipf’s Law. PMID:23139862
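
    The sampling pathology described above can be checked numerically: under i.i.d. sampling from a heavy-tailed size distribution, the largest observed value fluctuates by an amount comparable to itself, so single samples scatter widely around any deterministic rank-size line. A minimal sketch, with an arbitrary tail index and sample size:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Draw many i.i.d. samples of N "city sizes" from a Pareto tail (index 1)
    # and record the largest value in each sample.
    N, reps = 10_000, 500
    largest = np.array([(rng.pareto(1.0, size=N) + 1.0).max() for _ in range(reps)])

    # A deterministic rank-size rule would pin the top value near N; under
    # i.i.d. sampling it fluctuates by an amount comparable to itself.
    q25, q50, q75 = np.percentile(largest, [25, 50, 75])
    print(f"median largest value: {q50:,.0f}  (characteristic scale ~ {N:,})")
    print(f"relative spread (IQR/median): {(q75 - q25) / q50:.2f}")
    ```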

  9. Mercury exposure in a high fish eating Bolivian Amazonian population with intense small-scale gold-mining activities.

    PubMed

    Barbieri, Flavia Laura; Cournil, Amandine; Gardon, Jacques

    2009-08-01

    Methylmercury exposure in Amazonian communities through fish consumption has been widely documented in Brazil. There is still a lack of data in other Amazonian countries, which is why we conducted this study in the Bolivian Amazon basin. Simple random sampling was used from a small village located in the lower Beni River, where there is intense gold mining and high fish consumption. All participants were interviewed and hair samples were taken to measure total mercury concentrations. The hair mercury geometric mean in the general population was 3.02 microg/g (CI: 2.69-3.37; range: 0.42-15.65). Age and gender were not directly associated with mercury levels. Fish consumption showed a positive relation and so did occupation, especially small-scale gold mining. Hair mercury levels were lower than those found in Brazilian studies, but still higher than in non-exposed populations. It is necessary to assess mercury exposure in the Amazonian regions where data is still lacking, using a standardized indicator.

  10. Variability of reflectance measurements with sensor altitude and canopy type

    NASA Technical Reports Server (NTRS)

    Daughtry, C. S. T.; Vanderbilt, V. C.; Pollara, V. J.

    1981-01-01

    Data were acquired on canopies of mature corn planted in 76 cm rows, mature soybeans planted in 96 cm rows with 71 percent soil cover, and mature soybeans planted in 76 cm rows with 100 percent soil cover. A LANDSAT band radiometer with a 15 degree field of view was used at ten altitudes ranging from 0.2 m to 10 m above the canopy. At each altitude, measurements were taken at 15 cm intervals along a 2.0 m transect perpendicular to the crop row direction. Reflectance data were plotted as a function of altitude and horizontal position to verify that the variance of measurements at low altitudes was attributable to row effects, which disappear at higher altitudes where the sensor integrates across several rows. The coefficient of variation of reflectance decreased exponentially as the sensor was elevated. Systematic sampling (at odd multiples of 0.5 times the row spacing interval) required fewer measurements than simple random sampling over row crop canopies.
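
    The advantage of systematic sampling over a periodic row structure can be reproduced with an idealized cosine reflectance profile; in the sketch below, the half-row-spacing offsets make crest and trough samples cancel. All values (row spacing, amplitude, sample sizes) are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    row = 0.76                                     # row spacing (m)
    L = 4 * row                                    # transect spanning 4 rows
    xs = np.linspace(0, L, 4000, endpoint=False)
    refl = 20 + 5 * np.cos(2 * np.pi * xs / row)   # idealized periodic row effect
    true_mean = refl.mean()

    def srs_mean(n):
        # Simple random sample of n positions on the transect.
        return refl[rng.integers(0, xs.size, size=n)].mean()

    def sys_mean(n):
        # Systematic sample: random start, samples every half row spacing,
        # so crest and trough samples cancel the periodic row effect.
        start = rng.uniform(0, row)
        pts = (start + 0.5 * row * np.arange(n)) % L
        return np.interp(pts, xs, refl).mean()

    n, reps = 6, 2000
    srs_rmse = np.sqrt(np.mean([(srs_mean(n) - true_mean) ** 2 for _ in range(reps)]))
    sys_rmse = np.sqrt(np.mean([(sys_mean(n) - true_mean) ** 2 for _ in range(reps)]))
    print(f"RMSE with n={n}:  SRS {srs_rmse:.3f}   systematic {sys_rmse:.3f}")
    ```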

  11. The impact of spousal bereavement on hospitalisations: Evidence from the Scottish Longitudinal Study.

    PubMed

    Tseng, Fu-Min; Petrie, Dennis; Wang, Shaolin; Macduff, Colin; Stephen, Audrey I

    2018-02-01

    This paper estimates the impact of spousal bereavement on hospital inpatient use for the surviving bereaved by following the experience of 94,272 married Scottish individuals from 1991 until 2009 using a difference-in-difference model. We also consider the sample selection issues related to differences in survival between the bereaved and non-bereaved using a simple Cox Proportional-Hazard model. Before conducting these estimations, propensity score approaches are used to re-weight the non-bereaved to generate a more random-like comparison sample for the bereaved. We find that those bereaved who survive are both more likely to be admitted and to stay longer in hospital than a comparable non-bereaved cohort. Bereavement is estimated to induce on average an extra 0.24 (95% CI [0.15, 0.33]) hospital inpatient days per year. Similar to previous studies, we estimate the bereaved have a 19.2% (95% CI [12.5%, 26.3%]) higher mortality rate than the comparable non-bereaved cohort. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Assessment of pollutant mean concentrations in the Yangtze estuary based on MSN theory.

    PubMed

    Ren, Jing; Gao, Bing-Bo; Fan, Hai-Mei; Zhang, Zhi-Hong; Zhang, Yao; Wang, Jin-Feng

    2016-12-15

    Reliable assessment of water quality is a critical issue for estuaries. Nutrient concentrations show significant spatial distinctions between areas under the influence of fresh-sea water interaction and anthropogenic effects. For this situation, given the limitations of general mean estimation approaches, a new method for surfaces with non-homogeneity (MSN) was applied to obtain optimized linear unbiased estimations of the mean nutrient concentrations in the study area in the Yangtze estuary from 2011 to 2013. Other mean estimation methods, including block Kriging (BK), simple random sampling (SS) and stratified sampling (ST) inference, were applied simultaneously for comparison. Their performance was evaluated by estimation error. The results show that MSN had the highest accuracy, while SS had the highest estimation error. ST and BK were intermediate in terms of their performance. Thus, MSN is an appropriate method that can be adopted to reduce the uncertainty of mean pollutant estimation in estuaries. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Psychiatric disorders among the Mapuche in Chile.

    PubMed

    Vicente, Benjamin; Kohn, Robert; Rioseco, Pedro; Saldivia, Sandra; Torres, Silverio

    2005-06-01

    The Mapuche are the largest indigenous group in Chile; yet almost all data on the mental health of indigenous populations are from North America. The study examines the differential DSM-III-R prevalence rates of psychiatric disorders and service utilization among indigenous and non-indigenous community residents. The Composite International Diagnostic Interview (CIDI) was administered to a stratified random sample of 75 Mapuche and 434 non-Mapuche residents of the province of Cautín. Lifetime prevalence and 12-month prevalence rates were estimated. Approximately 28.4% of the Mapuche population had a lifetime, and 15.7% a 12-month, prevalent psychiatric disorder compared to 38.0% and 25.7%, respectively, of the non-Mapuche. Few significant differences were noted between the two groups; however, generalized anxiety disorder, simple phobia, and drug dependence were less prevalent among the Mapuche. Service utilization among the Mapuche with mental illness was low. This is a preliminary study based on a small sample size. Further research on the mental health of indigenous populations of South America is needed.

  14. An investigation of the role of job satisfaction in employees' organizational citizenship behavior.

    PubMed

    Talachi, Rahil Kazemi; Gorji, Mohammad Bagher; Boerhannoeddin, Ali Bin

    2014-06-01

    Job satisfaction, as an integral part of the organizational environment, can affect organizational citizenship behavior. The present paper therefore aimed to determine the relationship between these two factors among employees and to provide an appropriate model. The population of this study consisted of all 154 employees of the Golestan Province industry, mine and trade organization (Iran), out of which 120 employees were selected as a sample by the simple random sampling method. For collecting the data, two questionnaires on job satisfaction and organizational citizenship behavior were applied, and the obtained data were analyzed using the Kolmogorov-Smirnov test, Spearman's correlation, Pearson's correlation coefficient, regression analysis, the F-test and the t-test. The results showed that job satisfaction had a significant positive relationship with organizational citizenship behavior, with a 0.622-unit increase in job satisfaction corresponding to a one-unit increase in organizational citizenship behavior.

  15. Overlap between treatment and control distributions as an effect size measure in experiments.

    PubMed

    Hedges, Larry V; Olkin, Ingram

    2016-03-01

    The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved.
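
    The simple estimator mentioned above maps the standardized mean difference through the standard normal CDF. A minimal sketch with invented data follows; the large-sample standard error formula for d is the usual approximation, not taken from this paper.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(11)
    treat = rng.normal(0.5, 1.0, size=40)    # invented treatment observations
    ctrl = rng.normal(0.0, 1.0, size=40)     # invented control observations
    n1, n2 = treat.size, ctrl.size

    # Pooled-SD standardized mean difference.
    sp = np.sqrt(((n1 - 1) * treat.var(ddof=1) + (n2 - 1) * ctrl.var(ddof=1))
                 / (n1 + n2 - 2))
    d = (treat.mean() - ctrl.mean()) / sp

    # Simple estimator of pi = P(treatment observation exceeds control mean).
    pi_hat = norm.cdf(d)

    # Approximate 95% CI for pi via the usual large-sample variance of d.
    se_d = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    lo, hi = norm.cdf(d - 1.96 * se_d), norm.cdf(d + 1.96 * se_d)
    print(f"d = {d:.3f}, pi_hat = {pi_hat:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
    ```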

  16. The North American Breeding Bird Survey

    USGS Publications Warehouse

    Bystrak, D.; Ralph, C. John; Scott, J. Michael

    1981-01-01

    A brief history of the North American Breeding Bird Survey (BBS) and a discussion of the technique are presented. The approximately 2000 random roadside routes conducted yearly during the breeding season throughout North America produce an enormous bank of data on distribution and abundance of breeding birds with great potential use. Data on about one million total birds of 500 species per year are on computer tape to facilitate accessibility and are available to any serious investigator. The BBS includes the advantages of wide geographic coverage, sampling of most habitat types, standardization of data collection, and a relatively simple format. The Survey is limited by placement of roads (e.g., marshes and rugged mountainous areas are not well sampled), traffic noise interference in some cases and preference of some bird species for roadside habitats. These and other problems and biases of the BBS are discussed. The uniformity of the technique allows for detecting changes in populations and for creation of maps of relative abundance. Examples of each are presented.

  17. Age and sex prevalence of infectious dermatoses among primary school children in a rural South-Eastern Nigerian community

    PubMed Central

    Kalu, Eziyi Iche; Wagbatsoma, Victoria; Ogbaini-Emovon, Ephraim; Nwadike, Victor Ugochukwu; Ojide, Chiedozie Kingsley

    2015-01-01

    Introduction Various dermatoses, due to their morbidity characteristics, have been shown to negatively impact learning. The most epidemiologically important seem to be the infectious types, because of their transmissibility and their amenability to simple school-health measures. The aim of this study was to assess the prevalence and sex/age correlates of infectious dermatoses in a rural South-eastern Nigerian community. Methods The pupils were recruited proportionately from the three primary schools based on school population. A stratified simple random sampling method was adopted, and a table of random numbers was used to select the required pupils from each arm. Clinical and laboratory examination was done to establish diagnoses of infectious skin disease. Data collected were analyzed using SPSS version 16. Results The 400 pupils consisted of 153 males and 247 females. The age range was between 6 and 12 years. The prevalence of infectious dermatoses was 72.3%. The five most prevalent clinical forms of infectious dermatoses, in order of decreasing prevalence, were tinea capitis (35.2%), scabies (10.5%), tinea corporis (5.8%), tinea pedis (5.5%), and impetigo (5.0%). More cases generally occurred among males than females (80.4% vs 67.2%), while some specific clinical types, pediculosis and seborrheic dermatitis, exhibited a predilection for females. Pyodermas and scabies were significantly more prevalent in the 7-9 age group, while tinea capitis, tinea corporis, seborrheic dermatitis and pediculosis were more associated with the ≥10 age group. Conclusion Infectious dermatoses were highly prevalent in the surveyed population. Many of the clinical types exhibited sex- and age-specificity. PMID:26430479

  18. Factors affecting the informal payments in public and teaching hospitals.

    PubMed

    Aboutorabi, Ali; Ghiasipour, Maryam; Rezapour, Aziz; Pourreza, Abolghasem; Sarabi Asiabar, Ali; Tanoomand, Asghar

    2016-01-01

    Informal payments in the health sector of many developing countries are considered a major impediment to health care reform. Informal payments are a form of systemic fraud and have adverse effects on the performance of the health system. In this study, the frequency and extent of informal payments, as well as the determinants of these payments, were investigated in general hospitals affiliated with Tehran University of Medical Sciences. In this cross-sectional study, 300 discharged patients were selected using a multi-stage random sampling method. First, three hospitals were selected randomly; then, through simple random sampling, we recruited 300 discharged patients from internal, surgery, emergency, ICU and CCU wards. All data were collected by structured telephone interviews and questionnaire. We analyzed the data using Chi-square, Kruskal-Wallis and Mann-Whitney tests. The results indicated that 21% (n=63) of individuals paid informally to the staff. About 4% (n=12) of the participants were faced with informal payment requests from hospital staff. There was a significant relationship between the frequency of informal payments and both the marital status of participants and the type of hospital. According to our findings, none of the respondents made informal payments to physicians. The most frequent informal payments were in cash and were made to the hospitals' housekeeping staff to ensure more and better services. There was no significant relationship between informal payments and socio-demographic characteristics, residential area or insurance status. Our findings revealed that several strategies can be used for both controlling and reducing informal payments, including training patients and hospital staff, increasing the income levels of employees, improving the quantity and quality of health services, and changing the entrenched beliefs that necessitate informal payments.

  19. Routine programs of health care systems as an opportunity toward communication skills training for family physicians: A randomized field trial

    PubMed Central

    Zamani, Ahmad Reza; Motamedi, Narges; Farajzadegan, Ziba

    2015-01-01

    Background: To have high-quality primary health care services, adequate doctor–patient communication is necessary. Because of time restrictions and the limited budget in the health system, an effective, feasible, and continuous training approach is important. The aim of this study is to assess the appropriateness of a communication skills training program run simultaneously with the routine programs of the health care system. Materials and Methods: It was a randomized field trial in two health network settings during 2013. Twenty-eight family physicians, selected through simple random sampling, and 140 patients, selected through convenience sampling, participated as the intervention and control groups. The physicians in the intervention group (n = 14) attended six educational sessions, held simultaneously with routine organizational meetings, using case discussion and peer education methods. In both groups, physicians completed communication skills knowledge and attitude questionnaires, and patients completed a patient satisfaction with the medical interview questionnaire at baseline, immediately after the intervention, and four months postintervention. Physicians and health network administrators (stakeholders) completed a set of program evaluation forms. Descriptive statistics and the Chi-square test, t-test, and repeated measures analysis of variance were used to analyze the data. Results: The use of routine programs as a training strategy was rated highly by stakeholders on “feasibility” (80.5%), “acceptability” (93.5%), “educational content and method appropriateness” (80.75%), and “ability to integrate into the health system programs” (approximately 60%). Significant improvements were found in physicians’ knowledge (P < 0.001) and attitude (P < 0.001) and in patients’ satisfaction (P = 0.002) in the intervention group. Conclusions: The communication skills training program, run simultaneously with routine organizational meetings, was successfully implemented and well received by stakeholders, without requiring extra time or manpower. It can therefore be a valuable opportunity for communication skills training. PMID:27462613

  20. The Effect of Using Self-ligating Brackets on Maxillary Canine Retraction: A Split-mouth Design Randomized Controlled Trial.

    PubMed

    Hassan, Siba E; Hajeer, Mohammad Y; Alali, Osama H; Kaddah, Ayham S

    2016-06-01

    The results of previous studies on the efficacy of using self-ligating brackets (SLBs) in controlling canine movement during retraction are not in harmony. Therefore, the current study aimed to compare the effects of using new passive SLBs on maxillary canine retraction with sliding mechanics vs conventional ligating brackets (CLBs) tied with metal ligatures. The sample comprised 15 adult patients (4 males, 11 females; 18-24 years) requiring bilateral extraction of the maxillary first premolars. The units of randomization were the left and right maxillary canines within the same patient. The two maxillary canines in each patient were randomly assigned to one of the two groups in a simple split-mouth design. The canines in the SLBs group (n = 15) were bracketed with SLBs (Damon Q™), while the canines in the CLBs group (n = 15) were bracketed with conventional brackets (Mini Master Series). Transpalatal bars were used for anchorage. After leveling and alignment, 0.019 × 0.025" stainless steel working archwires were placed. Canines were retracted using nickel-titanium closed-coil springs with a 150 g force. The amount and rate of maxillary canine retraction, canine rotation, and loss of anchorage were measured on study models collected at the beginning of canine retraction (T0) and 12 weeks later (T1). Differences were analyzed using paired-samples t-tests. The differences were statistically significant (p < 0.001): using Damon Q™ SLBs, the amount and rate of canine retraction were greater, while canine rotation and anchorage loss were less. From a clinical perspective, extraction space closure can be accomplished more effectively using SLBs. Self-ligating brackets gave better results than CLBs in terms of rate of movement, amount of canine rotation following extraction, and anchorage loss.

  1. A Randomized Study Comparing the Sniffing Position with Simple Head Extension for Glottis Visualization and Difficulty in Intubation during Direct Laryngoscopy.

    PubMed

    Akhtar, Mehmooda; Ali, Zulfiqar; Hassan, Nelofar; Mehdi, Saqib; Wani, Gh Mohammad; Mir, Aabid Hussain

    2017-01-01

    Proper positioning of the head and neck is important for optimal laryngeal visualization. Traditionally, the sniffing position (SP) is recommended to provide superior glottic visualization during direct laryngoscopy, enhancing the ease of intubation. Various studies in the last decade have challenged this belief and the need for the sniffing position during intubation. We conducted a prospective study comparing the sniffing head position with simple head extension with respect to the laryngoscopic view and intubation difficulty during direct laryngoscopy. Five hundred patients were included in this study and randomly assigned to SP or simple head extension. In the sniffing group, an incompressible head ring was placed under the head to raise its height by 7 cm from the neutral plane, followed by maximal extension of the head. In the simple extension group, no headrest was placed under the head; however, maximal head extension was applied at the time of laryngoscopy. Factors such as the ability to mask ventilate, laryngoscopic visualization, intubation difficulty, and the posture of the anesthesiologist during laryngoscopy and tracheal intubation were noted. The incidence of difficult laryngoscopy (Cormack Grade III and IV) and the Intubation Difficulty Scale (IDS) score were compared between the two groups. There was no significant difference between the two groups in Cormack grades. The IDS score differed significantly between the sniffing group and the simple extension group (P < 0.001), with increased difficulty during intubation in simple head extension. Patients with simple head extension needed more lifting force, more frequent external laryngeal manipulation, and more frequent use of alternate techniques during intubation compared with SP. We conclude that, compared with the simple head extension position, the SP should be used as the standard head position for intubation attempts under general anesthesia.

  2. The Shark Random Swim - (Lévy Flight with Memory)

    NASA Astrophysics Data System (ADS)

    Businger, Silvia

    2018-05-01

    The Elephant Random Walk (ERW), first introduced by Schütz and Trimper (Phys Rev E 70:045101, 2004), is a one-dimensional simple random walk on Z having a memory about the whole past. We study the Shark Random Swim, a random walk with memory about the whole past, whose steps are α-stable distributed with α ∈ (0, 2]. Our aim in this work is to study the impact of the heavy-tailed step distributions on the asymptotic behavior of the random walk. We shall see that, as for the ERW, the asymptotic behavior of the Shark Random Swim depends on its memory parameter p, and that a phase transition can be observed at the critical value p = 1/α.
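
    A memory walk of this kind is easy to simulate. The sketch below uses one plausible reading of the memory rule (repeat a uniformly chosen past step with probability p, otherwise take a fresh α-stable step); the paper's exact dynamics may differ, and all parameters are illustrative.

    ```python
    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(5)

    def shark_swim(n_steps, p, alpha):
        """Walk with full memory: with probability p repeat a uniformly chosen
        past step; otherwise draw a fresh symmetric alpha-stable step.
        (One plausible reading of the memory rule, for illustration only.)"""
        steps = [levy_stable.rvs(alpha, 0.0, random_state=rng)]
        for _ in range(n_steps - 1):
            if rng.random() < p:
                steps.append(steps[rng.integers(len(steps))])   # remembered step
            else:
                steps.append(levy_stable.rvs(alpha, 0.0, random_state=rng))
        return np.cumsum(steps)

    # alpha = 2 recovers Gaussian steps; smaller alpha gives heavier tails.
    path = shark_swim(2000, p=0.7, alpha=1.5)
    print("final position:", path[-1])
    ```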

  3. A random spatial sampling method in a rural developing nation

    Treesearch

    Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas

    2014-01-01

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...

  4. A Simulation Approach to Assessing Sampling Strategies for Insect Pests: An Example with the Balsam Gall Midge

    PubMed Central

    Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.

    2013-01-01

    Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
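
    The pre-sampling simulation idea can be sketched as below: resample simple random samples of increasing size from pilot counts and find the smallest n meeting a precision criterion. The pilot distribution, tolerance, and coverage target are invented, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Hypothetical pilot ("pre-sampling") gall counts from one stand:
    # clumped, negative-binomial-like counts. All parameters invented.
    pilot = rng.negative_binomial(n=1.2, p=0.15, size=300)

    def required_n(counts, tol=0.15, target=0.90, reps=2000):
        """Smallest simple-random-sample size whose mean falls within
        tol * (pilot mean) of the pilot mean in `target` of simulations."""
        mu = counts.mean()
        for n in range(5, 101, 5):
            means = np.array([rng.choice(counts, n, replace=False).mean()
                              for _ in range(reps)])
            if np.mean(np.abs(means - mu) <= tol * mu) >= target:
                return n
        return None

    print("sample size meeting the criterion:", required_n(pilot))
    ```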

  5. The Beneficial Role of Random Strategies in Social and Financial Systems

    NASA Astrophysics Data System (ADS)

    Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea

    2013-05-01

    In this paper we focus on the beneficial role of random strategies in social sciences by means of simple mathematical and computational models. We briefly review recent results obtained by two of us in previous contributions for the case of the Peter principle and the efficiency of a Parliament. Then, we develop a new application of random strategies to the case of financial trading and discuss in detail our findings about forecasts of markets dynamics.

  6. Reduction of display artifacts by random sampling

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.; Nagel, D. C.; Watson, A. B.; Yellott, J. I., Jr.

    1983-01-01

    The application of random-sampling techniques to remove visible artifacts (such as flicker, moire patterns, and paradoxical motion) introduced in TV-type displays by discrete sequential scanning is discussed and demonstrated. Sequential-scanning artifacts are described; the window of visibility defined in spatiotemporal frequency space by Watson and Ahumada (1982 and 1983) and Watson et al. (1983) is explained; the basic principles of random sampling are reviewed and illustrated by the case of the human retina; and it is proposed that the sampling artifacts can be replaced by random noise, which can then be shifted to frequency-space regions outside the window of visibility. Vertical sequential, single-random-sequence, and continuously renewed random-sequence plotting displays generating 128 points at update rates up to 130 Hz are applied to images of stationary and moving lines, and best results are obtained with the single random sequence for the stationary lines and with the renewed random sequence for the moving lines.

  7. On Edge Exchangeable Random Graphs

    NASA Astrophysics Data System (ADS)

    Janson, Svante

    2017-06-01

    We study a recent model for edge exchangeable random graphs introduced by Crane and Dempsey; in particular we study asymptotic properties of the random simple graph obtained by merging multiple edges. We study a number of examples, and show that the model can produce dense, sparse and extremely sparse random graphs. One example yields a power-law degree distribution. We give some examples where the random graph is dense and converges a.s. in the sense of graph limit theory, but also an example where a.s. every graph limit is the limit of some subsequence. Another example is sparse and yields convergence to a non-integrable generalized graphon defined on (0,∞).

  8. Differential expression analysis for RNAseq using Poisson mixed models.

    PubMed

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang

    2017-06-20

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
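
    The two-random-effects structure can be illustrated by simulation; the sketch below generates over-dispersed, non-independent counts of the kind the model targets. The kinship matrix and variance components are placeholders, and the fitting itself is what MACAU does.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated counts for one gene across n related samples:
    # log-rate = intercept + beta * x + u + e, with
    #   u ~ MVN(0, sg2 * K)  capturing sample non-independence (kinship K)
    #   e ~ N(0, se2)        capturing independent over-dispersion
    n = 100
    x = rng.binomial(1, 0.5, size=n).astype(float)    # e.g. condition label
    A = rng.normal(size=(n, n))
    K = A @ A.T / n                                    # placeholder kinship matrix
    u = rng.multivariate_normal(np.zeros(n), 0.4 * K)
    e = rng.normal(0.0, 0.3, size=n)
    y = rng.poisson(np.exp(np.log(50) + 0.5 * x + u + e))

    # Variance well above the mean signals over-dispersion relative to Poisson.
    print("mean:", y.mean(), " variance/mean:", y.var() / y.mean())
    ```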

  9. Histological and Thermometric Examination of Soft Tissue De-Epithelialization Using Digitally Controlled Er:YAG Laser Handpiece: An Ex Vivo Study.

    PubMed

    Grzech-Leśniak, Kinga; Matys, Jacek; Jurczyszyn, Kamil; Ziółkowski, Piotr; Dominiak, Marzena; Brugnera Junior, Aldo; Romeo, Umberto

    2018-06-01

    The purpose of this study was the histological and thermometric examination of soft tissue de-epithelialization using a digitally controlled laser handpiece (DCLH), the X-Runner. Commonly used techniques for de-epithelialization include the scalpel, abrasion with a diamond bur, or a combination of the two. Despite being simple, inexpensive and effective, these techniques are invasive and may produce unwanted side effects, so it is important to look for alternative, minimally invasive and effective techniques using novel tools. 114 porcine samples sized 6 × 6 mm were collected from the attached gingiva (AG) of the alveolar process of the mandible using a 15C scalpel blade. The samples were irradiated by means of an Er:YAG laser (LightWalker, Fotona, Slovenia), using the X-Runner and HO2 handpieces at different parameters: 80, 100, and 140 mJ/20 Hz for 6 or 16 sec, respectively. The temperature was measured with a K-type thermocouple. For the histopathological analysis of the efficiency of epithelium removal and thermal injury, 3 random samples were de-epithelialized with the HO2 handpiece, and 9 random samples with the X-Runner handpiece at different parameters. For the samples irradiated with the DCLH, we used three different settings, which resulted in removing 1 to 3 layers of the soft tissue. The efficiency of epithelium removal and the rise in temperature were analyzed. The DCLH induced a significantly lower temperature increase compared with HO2 at each energy-to-frequency ratio. The histological examination revealed total epithelium removal when the HO2 handpiece was used at 100 and 140 mJ/20 Hz and when the DCLH was used for two- and threefold lasing at 80, 100, and 140 mJ/20 Hz. The Er:YAG laser with the DCLH handpiece may be an efficient tool for epithelium removal without excessive thermal damage.

  10. Evaluation of some random effects methodology applicable to bird ringing data

    USGS Publications Warehouse

    Burnham, K.P.; White, Gary C.

    2002-01-01

    Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S1, ..., Sk; random effects can then be a useful model: Si = E(S) + εi. Here, the temporal variation in survival probability is treated as random, with E(εi²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for the process variation σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional component for var(Ŝ|S). Furthermore, the random effects model leads to shrinkage estimates S̃i as improved (in mean square error) estimators of Si compared to the MLEs Ŝi from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of σ̂², confidence interval coverage on σ², coverage and mean square error comparisons for inference about Si based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: Si ≡ S (no effects), Si = E(S) + εi (random effects), and S1, ..., Sk (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than the fixed effects MLE for the Si.
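
    The shrinkage construction described above can be sketched with a method-of-moments variance decomposition; the survival values, sampling variances, and seed below are invented, and program MARK's actual estimator is more sophisticated.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Invented annual survival MLEs S_hat with known sampling variances v.
    k = 10
    S_true = 0.65 + rng.normal(0.0, 0.06, size=k)   # true S_i with process variation
    v = np.full(k, 0.002)                           # sampling variances (invented)
    S_hat = S_true + rng.normal(0.0, np.sqrt(v))

    # Method-of-moments decomposition: total variance of the MLEs is roughly
    # the process variance sigma^2 plus the average sampling variance.
    E_S = S_hat.mean()
    sigma2 = max(S_hat.var(ddof=1) - v.mean(), 0.0)

    # Shrinkage pulls each MLE toward E(S) in proportion to its reliability.
    shrunk = E_S + (sigma2 / (sigma2 + v)) * (S_hat - E_S)

    print(f"E(S) = {E_S:.3f}, sigma^2 = {sigma2:.5f}")
    print("shrinkage estimates:", np.round(shrunk, 3))
    ```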

  11. Revisiting sample size: are big trials the answer?

    PubMed

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

    The superiority of the evidence generated in randomized controlled trials over observational data is conditional not only on randomization. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation concealment, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, the probability that a trial detects a difference when a real difference between treatments exists, depends strongly on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by limited sample sizes. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.
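
    As a concrete illustration of how strongly power depends on sample size, here is a minimal sketch using the standard normal-approximation formula for comparing two proportions; the effect sizes are hypothetical, not taken from the paper.

    ```python
    # Minimal sample-size sketch (normal approximation, two proportions).
    import math
    from scipy.stats import norm

    def n_per_arm(p1, p2, alpha=0.05, power=0.80):
        za = norm.ppf(1 - alpha / 2)            # two-sided significance
        zb = norm.ppf(power)                    # desired power
        var = p1 * (1 - p1) + p2 * (1 - p2)
        return math.ceil((za + zb) ** 2 * var / (p1 - p2) ** 2)

    print(n_per_arm(0.20, 0.15))   # a 5-point drop needs ~903 patients per arm
    print(n_per_arm(0.20, 0.10))   # a 10-point drop needs only ~197 per arm
    ```

    Halving the detectable effect roughly quadruples the required sample size, which is why small trials are so easily underpowered.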

  12. Random Walks on a Simple Cubic Lattice, the Multinomial Theorem, and Configurational Properties of Polymers

    ERIC Educational Resources Information Center

    Hladky, Paul W.

    2007-01-01

    Random-walk models enable undergraduate chemistry students to visualize polymer molecules, quantify their configurational properties, and relate molecular structure to a variety of physical properties. The model could serve as an introduction to more elaborate models of polymer molecules and could help in learning topics such as lattice models of…
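
    The lattice model in this record is easy to reproduce numerically. A minimal sketch, assuming unit step length: the six step-direction tallies of an N-step walk follow a multinomial distribution, and the mean squared end-to-end distance comes out close to N.

    ```python
    # Random-walk polymer sketch on a simple cubic lattice: direction counts
    # are multinomial, and <R^2> should be close to the number of steps N.
    import numpy as np

    rng = np.random.default_rng(0)

    def mean_sq_end_to_end(n_steps, n_walks=20000):
        counts = rng.multinomial(n_steps, [1 / 6] * 6, size=n_walks)
        ends = np.stack([counts[:, 0] - counts[:, 1],    # net displacement along x
                         counts[:, 2] - counts[:, 3],    # along y
                         counts[:, 4] - counts[:, 5]],   # along z
                        axis=1)
        return (ends ** 2).sum(axis=1).mean()

    for n in (10, 100, 1000):
        print(n, mean_sq_end_to_end(n))    # each estimate should be close to n
    ```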

  13. Predicting bending stiffness of randomly oriented hybrid panels

    Treesearch

    Laura Moya; William T.Y. Tze; Jerrold E. Winandy

    2010-01-01

    This study was conducted to develop a simple model to predict the bending modulus of elasticity (MOE) of randomly oriented hybrid panels. The modeling process involved three modules: the behavior of a single layer was computed by applying micromechanics equations, layer properties were adjusted for densification effects, and the entire panel was modeled as a three-...

  14. Financial Incentives and Student Achievement: Evidence from Randomized Trials. NBER Working Paper No. 15898

    ERIC Educational Resources Information Center

    Fryer, Roland G., Jr.

    2010-01-01

    This paper describes a series of school-based randomized trials in over 250 urban schools designed to test the impact of financial incentives on student achievement. In stark contrast to simple economic models, our results suggest that student incentives increase achievement when the rewards are given for inputs to the educational production…

  15. Multiple Imputation of Item Scores in Test and Questionnaire Data, and Influence on Psychometric Results

    ERIC Educational Resources Information Center

    van Ginkel, Joost R.; van der Ark, L. Andries; Sijtsma, Klaas

    2007-01-01

    The performance of five simple multiple imputation methods for dealing with missing data was compared. In addition, random imputation and multivariate normal imputation were used as lower and upper benchmarks, respectively. Test data were simulated and item scores were deleted such that they were either missing completely at random, missing at…
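
    For concreteness, a minimal sketch of the "random imputation" benchmark mentioned above, under the assumption that each missing item score is drawn from the observed scores on the same item:

    ```python
    # Random-imputation sketch: fill each missing item score with a random
    # draw from the observed scores on that item (column).
    import numpy as np

    rng = np.random.default_rng(42)

    def random_impute(X):
        X = X.astype(float).copy()
        for j in range(X.shape[1]):                 # one item at a time
            miss = np.isnan(X[:, j])
            observed = X[~miss, j]
            X[miss, j] = rng.choice(observed, size=miss.sum(), replace=True)
        return X

    X = np.array([[3, 4, np.nan], [2, np.nan, 5], [4, 4, 3], [1, 2, 2]])
    print(random_impute(X))
    ```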

  16. A Simple Spreadsheet Program to Simulate and Analyze the Far-UV Circular Dichroism Spectra of Proteins

    ERIC Educational Resources Information Center

    Abriata, Luciano A.

    2011-01-01

    A simple algorithm was implemented in a spreadsheet program to simulate the circular dichroism spectra of proteins from their secondary structure content and to fit α-helix, β-sheet, and random coil contents from experimental far-UV circular dichroism spectra. The physical basis of the method is briefly reviewed within the context of…
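
    The fitting half of such a tool reduces to constrained least squares. A minimal sketch, with placeholder basis spectra standing in for real reference data:

    ```python
    # Fit a measured far-UV CD spectrum as a non-negative combination of
    # basis spectra for helix, sheet, and coil. The Gaussian "bands" below
    # are placeholders, not real reference spectra.
    import numpy as np
    from scipy.optimize import nnls

    wavelengths = np.arange(190, 251)                        # nm
    basis = np.vstack([
        np.exp(-((wavelengths - 208) / 8.0) ** 2) * -1.0,    # fake "helix" band
        np.exp(-((wavelengths - 218) / 9.0) ** 2) * -0.6,    # fake "sheet" band
        np.exp(-((wavelengths - 198) / 6.0) ** 2) * -0.8,    # fake "coil" band
    ]).T

    true_f = np.array([0.5, 0.3, 0.2])
    spectrum = basis @ true_f + np.random.default_rng(0).normal(0, 0.01, len(wavelengths))

    f, _ = nnls(basis, spectrum)        # non-negative least squares
    f = f / f.sum()                     # renormalize to fractional contents
    print(dict(zip(["helix", "sheet", "coil"], f.round(3))))
    ```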

  17. Contribution of Temporal Preparation and Processing Speed to Simple Reaction Time in Persons with Alzheimer's Disease and Mild Cognitive Impairment

    ERIC Educational Resources Information Center

    Sylvain-Roy, Stephanie; Bherer, Louis; Belleville, Sylvie

    2010-01-01

    Temporal preparation was assessed in 15 Alzheimer's disease (AD) patients, 20 persons with mild cognitive impairment (MCI) and 28 healthy older adults. Participants completed a simple reaction time task in which the preparatory interval duration varied randomly within two blocks (short versus long temporal window). Results indicated that AD and…

  18. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error, based on the actual statistics of the TRMM observations and on some modeling work, has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total that a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM Microwave Imager (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates caused by changes in rain statistics due to 1) evolution of the official algorithms used to process the data and 2) differences from other remote sensing systems, such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
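
    The core idea, estimating how far a sparsely sampled monthly mean can stray from the full-coverage mean, can be mimicked with a toy simulation. A minimal sketch with a synthetic rain series; this is neither TRMM data nor the authors' model:

    ```python
    # Toy estimate of the sampling error in a monthly rain average seen by
    # a satellite visiting every ~12 h, relative to full hourly coverage.
    import numpy as np

    rng = np.random.default_rng(3)
    hours = 720                                   # one month of hourly "truth"
    rel_errors = []
    for month in range(2000):
        rain = rng.gamma(0.1, 2.0, hours)         # intermittent, skewed rain rates
        visits = rain[rng.integers(0, 12)::12]    # overpass every 12 h, random phase
        rel_errors.append((visits.mean() - rain.mean()) / rain.mean())

    print("rms relative sampling error:",
          np.sqrt(np.mean(np.square(rel_errors))))
    ```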

  19. SIMPL Systems, or: Can We Design Cryptographic Hardware without Secret Key Information?

    NASA Astrophysics Data System (ADS)

    Rührmair, Ulrich

    This paper discusses a new cryptographic primitive termed a SIMPL system. Roughly speaking, a SIMPL system is a special type of Physical Unclonable Function (PUF) that possesses a binary description allowing its (slow) public simulation and prediction. Besides this public-key-like functionality, SIMPL systems have another advantage: no secret information is, or needs to be, contained in SIMPL systems in order to enable cryptographic protocols - neither in the form of a standard binary key, nor as secret information hidden in random analog features, as is the case for PUFs. The cryptographic security of SIMPLs instead rests on (i) a physical assumption on their unclonability, and (ii) a computational assumption regarding the complexity of simulating their output. This novel property makes SIMPL systems potentially immune to many known hardware and software attacks, including malware, side-channel, invasive, and modeling attacks.

  20. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward to use in some situations, such as sample size calculation, because of the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of the kappa statistic, and nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation based on a simple proportion of agreement should be useful in the planning stages when the focus of interest is testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
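
    The kappa paradox the authors invoke is easy to reproduce. A minimal sketch for two raters and a binary outcome, assuming equal margins for both raters:

    ```python
    # The same raw proportion of agreement yields very different kappa values
    # depending on the marginal prevalence (the "kappa paradox").
    def kappa(p_agree, prevalence):
        # two raters, binary outcome, equal margins assumed for simplicity
        p_e = prevalence ** 2 + (1 - prevalence) ** 2   # chance agreement
        return (p_agree - p_e) / (1 - p_e)

    print(kappa(0.90, 0.50))   # balanced margins: kappa = 0.80
    print(kappa(0.90, 0.90))   # skewed margins, same agreement: kappa ~ 0.44
    ```

    This is why sizing a study on a target kappa alone can mislead, and why the authors anchor their formulae to the proportion of agreement at a given prevalence instead.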

  1. The rationale and design of the Shockless IMPLant Evaluation (SIMPLE) trial: a randomized, controlled trial of defibrillation testing at the time of defibrillator implantation.

    PubMed

    Healey, Jeff S; Hohnloser, Stefan H; Glikson, Michael; Neuzner, Joerg; Viñolas, Xavier; Mabo, Philippe; Kautzner, Josef; O'Hara, Gilles; Van Erven, Liselot; Gadler, Frederick; Appl, Ursula; Connolly, Stuart J

    2012-08-01

    Defibrillation testing (DT) has been an integral part of defibrillator (implantable cardioverter defibrillator [ICD]) implantation; however, there is little evidence that it improves outcomes. Surveys show a trend toward ICD implantation without DT, which now reaches 30% to 60% in some regions. Because there is no evidence to support this dramatic shift in practice, a randomized trial is urgently needed. The SIMPLE trial will determine whether ICD implantation without any DT is noninferior to implantation with DT. Patients will be eligible if they are receiving their first ICD using a Boston Scientific device (Boston Scientific, Natick, MA). Patients will be randomized to DT or no DT at the time of ICD implantation. In the DT arm, physicians will make all reasonable efforts to ensure 1 successful intraoperative defibrillation at 17 J or 2 at 21 J. The first clinical shock in all tachycardia zones will be set to 31 J for all patients. The primary outcome of SIMPLE will be the composite of ineffective appropriate shock or arrhythmic death. The safety outcome of SIMPLE will include a composite of potentially DT-related procedural complications within 30 days of ICD implantation. Several secondary outcomes will be evaluated, including all-cause mortality and heart failure hospitalization. Enrollment of 2,500 patients with 3.5-year mean follow-up will provide sufficient statistical power to demonstrate noninferiority. The study is being performed at approximately 90 centers in Canada, Europe, Israel, and Asia Pacific, with final results expected in 2013. Copyright © 2012 Mosby, Inc. All rights reserved.

  2. Implementing self sustained quality control procedures in a clinical laboratory.

    PubMed

    Khatri, Roshan; K C, Sanjay; Shrestha, Prabodh; Sinha, J N

    2013-01-01

    Quality control is an essential component of every clinical laboratory: it maintains the excellence of laboratory standards, supports proper disease diagnosis and patient care, and thereby strengthens the overall health care system. Numerous quality control schemes are available, with combinations of procedures, most of which are tedious, time consuming, and can be "too technical", whereas commercially available quality control materials can be expensive, especially for laboratories in developing nations like Nepal. Here, we present a procedure performed at our centre with self-prepared control serum and the use of simple statistical tools for quality assurance. The pooled serum was prepared as per guidelines for the preparation of stabilized liquid quality control serum from human sera. Internal quality assessment was performed on this sample on a daily basis and included measurement of 12 routine biochemical parameters. The results were plotted on Levey-Jennings charts and analysed with quality control rules over a period of one month. The mean levels of biochemical analytes in the self-prepared control serum were within the normal physiological range. This serum was evaluated every day along with patients' samples. The results obtained were plotted on control charts and analysed using common quality control rules to identify possible systematic and random errors. Immediate mitigation measures were taken and the dispatch of erroneous reports was avoided. In this study we highlight a simple internal quality control procedure that laboratories can perform with minimal technology, expenditure, and expertise to improve the reliability and validity of test reports.
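
    The rule-based chart analysis described here can be reduced to a few lines. A minimal sketch of 1_2s/1_3s-style checks against the established mean and SD of the control serum; the daily values and limits below are hypothetical:

    ```python
    # Levey-Jennings style checks: flag control values beyond 2 SD (warning)
    # or 3 SD (rejection) of the established control mean.
    import numpy as np

    def qc_flags(values, mean, sd):
        z = (np.asarray(values) - mean) / sd
        return ["reject (1_3s)" if abs(v) > 3
                else "warn (1_2s)" if abs(v) > 2
                else "in control" for v in z]

    # hypothetical daily glucose results (mg/dL) for the in-house control serum
    daily = [92, 95, 97, 104, 111, 90]
    print(qc_flags(daily, mean=95.0, sd=4.0))
    ```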

  3. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from a numerically low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case considering four alternative models postulated on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems requiring model uncertainty quantification.
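
    A minimal sketch of thermodynamic integration on a toy conjugate model, where the power posterior can be sampled directly and the exact marginal likelihood is known for checking. This illustrates the general technique, not the authors' groundwater code:

    ```python
    # Thermodynamic integration on a toy model: y_i ~ N(theta, 1), theta ~ N(0, 1).
    # The power posterior p(theta)L(theta)^beta is closed-form here, so each
    # "MCMC run" is replaced by direct sampling for clarity.
    import numpy as np

    rng = np.random.default_rng(7)
    y = rng.normal(0.5, 1.0, size=30)
    n, sy, syy = len(y), y.sum(), (y ** 2).sum()

    def mean_loglik(beta, n_draws=20000):
        prec = 1.0 + beta * n                     # power-posterior precision
        theta = rng.normal(beta * sy / prec, np.sqrt(1.0 / prec), n_draws)
        return (-0.5 * n * np.log(2 * np.pi)
                - 0.5 * (syy - 2 * theta * sy + n * theta ** 2)).mean()

    betas = np.linspace(0.0, 1.0, 21) ** 3        # grid packed near beta = 0
    vals = np.array([mean_loglik(b) for b in betas])
    log_ml_ti = np.sum(np.diff(betas) * (vals[1:] + vals[:-1]) / 2)  # trapezoid

    # exact log marginal likelihood for this conjugate toy model
    log_ml_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(1.0 + n)
                    - 0.5 * (syy - sy ** 2 / (1.0 + n)))
    print(log_ml_ti, log_ml_exact)                # the two should agree closely
    ```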

  4. Smoothing the redshift distributions of random samples for the baryon acoustic oscillations: applications to the SDSS-III BOSS DR12 and QPM mock samples

    NASA Astrophysics Data System (ADS)

    Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen

    2017-12-01

    We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillations (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in the Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which necessarily introduces fiducial fluctuation signals into the random samples and weakens the BAO signal if the cosmic variance cannot be ignored. We propose a smooth function of redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals is improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current cosmological parameter measurements, such improvements will be valuable for future measurements of galaxy clustering.
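
    A minimal sketch of the proposed approach, with a toy redshift distribution and a low-order polynomial standing in for whatever smooth function actually fits the survey's n(z):

    ```python
    # Populate a random catalogue from a smoothed n(z) rather than from the
    # data's noisy redshift histogram (toy data; not the paper's fit).
    import numpy as np

    rng = np.random.default_rng(12)
    z_data = rng.beta(4, 6, 50000) * 0.8               # toy galaxy redshifts

    counts, edges = np.histogram(z_data, bins=40, range=(0.0, 0.8))
    centers = 0.5 * (edges[:-1] + edges[1:])
    smooth = np.clip(np.polyval(np.polyfit(centers, counts, deg=6), centers),
                     0, None)                          # smooth, non-negative n(z)

    cdf = np.cumsum(smooth) / smooth.sum()             # inverse-CDF sampling
    z_random = np.interp(rng.random(200000), cdf, centers)
    print(z_random.mean(), z_data.mean())              # distributions should match
    ```

    Sampling from the smooth curve keeps the mean n(z) of the randoms but avoids imprinting the data's own fluctuations on them, which is the point of the paper.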

  5. Empirical entropic contributions in computational docking: evaluation in APS reductase complexes.

    PubMed

    Chang, Max W; Belew, Richard K; Carroll, Kate S; Olson, Arthur J; Goodsell, David S

    2008-08-01

    The results from reiterated docking experiments may be used to evaluate an empirical vibrational entropy of binding in ligand-protein complexes. We have tested several methods for evaluating the vibrational contribution to binding of 22 nucleotide analogues to the enzyme APS reductase. These include two cluster size methods that measure the probability of finding a particular conformation, a method that estimates the extent of the local energetic well by looking at the scatter of conformations within clustered results, and an RMSD-based method that uses the overall scatter and clustering of all conformations. We have also directly characterized the local energy landscape by randomly sampling around docked conformations. The simple cluster size method shows the best performance, improving the identification of correct conformations in multiple docking experiments. 2008 Wiley Periodicals, Inc.

  6. Data article on the effect of work engagement strategies on faculty staff behavioural outcomes in private universities.

    PubMed

    Falola, Hezekiah Olubusayo; Olokundun, Maxwell Ayodele; Salau, Odunayo Paul; Oludayo, Olumuyiwa Akinrole; Ibidunni, Ayodotun Stephen

    2018-06-01

    The main objective of this study was to present a data article that investigates the effect of work engagement strategies on faculty behavioural outcomes. Few studies analyse how work engagement strategies could help in driving standard work behaviour, particularly in higher institutions. In an attempt to bridge this gap, this study used a descriptive research method and a structural equation model (AMOS 22) to analyse four hundred and forty-one (441) valid questionnaires completed by faculty members of six selected private universities in Nigeria, chosen using stratified and simple random sampling techniques. A factor model showing high reliability and good fit was generated, while construct validity was established through convergent and discriminant analyses.

  7. Study protocol: a randomized controlled trial investigating the effects of a psychosexual training program for adolescents with autism spectrum disorder.

    PubMed

    Visser, Kirsten; Greaves-Lord, Kirstin; Tick, Nouchka T; Verhulst, Frank C; Maras, Athanasios; van der Vegt, Esther J M

    2015-08-28

    Previous research shows that adolescents with autism spectrum disorder (ASD) run several risks in their psychosexual development and that these adolescents can have limited access to reliable information on puberty and sexuality, emphasizing the need for specific guidance of adolescents with ASD in their psychosexual development. Few studies have investigated the effects of psychosexual training programs for adolescents with ASD, and to date no randomized controlled trials are available to study the effects of psychosexual interventions for this target group. The randomized controlled trial (RCT) described in this study protocol aims to investigate the effects of the Tackling Teenage Training (TTT) program on the psychosexual development of adolescents with ASD. This parallel clinical trial, conducted in the South-West of the Netherlands, has a simple equal randomization design with an intervention and a waiting-list control condition. Two hundred adolescents and their parents participate in this study. We assess the participants in both conditions using self-report as well as parent-report questionnaires at three time points during 1 year: at baseline (T1), post-treatment (T2), and at follow-up (T3). To our knowledge, the current study is the first to use a randomized controlled design to study the effects of a psychosexual training program for adolescents with ASD. It has a number of methodological strengths, namely a large sample size, a wide range of functionally relevant outcome measures, the use of multiple informants, and a standardized research and intervention protocol. Some limitations of the study are also identified, for instance the lack of a comparison between two treatment conditions and the absence of blinded observational measures to investigate the ecological validity of the results. Dutch Trial Register NTR2860. Registered on 20 April 2011.

  8. Pattern of Antibiotic Resistance Among Community Derived Isolates of Enterobacteriaceae Using Urine Sample: A Study From Northern India

    PubMed Central

    Lohiya, Ayush; Kapil, Arti; Gupta, Sanjeev Kumar; Misra, Puneet; Rai, Sanjay K.

    2015-01-01

    Background Despite worldwide evidence of increased antibiotic resistance, there are scarce data on antibiotic resistance in community settings. One reason is the difficulty of collecting biological specimens (traditionally stool) in the community from apparently healthy individuals. Hence, finding an alternative specimen that is easier to obtain in a community setting or in large-scale surveys is crucial. We conducted this study to explore the feasibility of using urine samples for deriving community-based estimates of antibiotic resistance and to estimate the magnitude of resistance among urinary isolates of Escherichia coli and Klebsiella pneumoniae against multiple antibiotics in apparently healthy individuals residing in a rural community of Haryana, North India. Materials and Methods Eligible individuals were apparently healthy and aged 18 years or older. The sampling frame was prepared using the health management information system (HMIS) of the Ballabgarh Health Demographic Surveillance System (HDSS). Potential individuals were identified using simple random sampling. A random urine sample was collected in a sterile container and transported to the laboratory under ambient conditions. Species identification and antibiotic susceptibility testing for Enterobacteriaceae were done using Clinical Laboratory and Standards Institute (CLSI) 2012 guidelines. Multi-drug resistant (MDR) Enterobacteriaceae, Extended Spectrum Beta Lactamase (ESBL) producing Enterobacteriaceae, and carbapenem-resistant Enterobacteriaceae (CRE) were identified from the urine samples. Results A total of 433 individuals participated in the study (non-response rate – 13.4%), of whom 58 (13.4%) were positive for Enterobacteriaceae, 8.1% for E. coli and 5.3% for K. pneumoniae. Resistance against penicillins (amoxicillin/ampicillin) for E. coli and K. pneumoniae was 62.8% and 100.0%, respectively. Isolates resistant to co-trimoxazole were 5.7% and 0.0%, respectively. None of the isolates were resistant to imipenem or meropenem. Conclusion and recommendations It is feasible to use urine samples to study the magnitude of antibiotic resistance in population-based surveys. At the community level, resistance to amoxicillin was considerable, while resistance to co-trimoxazole and to higher antibiotics, including carbapenems, was negligible. PMID:26393150

  9. Open quantum random walks: Bistability on pure states and ballistically induced diffusion

    NASA Astrophysics Data System (ADS)

    Bauer, Michel; Bernard, Denis; Tilloy, Antoine

    2013-12-01

    Open quantum random walks (OQRWs) deal with quantum random motions on a line for systems with internal and orbital degrees of freedom. The internal system behaves as a quantum random gyroscope coding for the direction of the orbital moves. We reveal the existence of a transition, depending on OQRW moduli, in the internal system behaviors from simple oscillations to random flips between two unstable pure states. This induces a transition in the orbital motions from the usual diffusion to ballistically induced diffusion with a large mean free path and large effective diffusion constant at large times. We also show that mixed states of the internal system are converted into random pure states during the process. We touch upon possible experimental realizations.
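
    A minimal numerical sketch of an OQRW on a line: each site x carries a 2×2 internal density matrix ρ_x, updated by Kraus operators B (step right) and C (step left) satisfying B†B + C†C = I. The particular rotation below is an arbitrary illustrative choice, not the paper's model:

    ```python
    # Open quantum random walk on a line. One step maps
    # rho_x -> B rho_{x-1} B^dag + C rho_{x+1} C^dag, with B^dag B + C^dag C = I.
    import numpy as np

    theta = 0.3
    V = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])    # internal "gyroscope"
    B = np.diag([1.0, 0.0]) @ V                        # kick right
    C = np.diag([0.0, 1.0]) @ V                        # kick left

    rho = {0: np.eye(2, dtype=complex) / 2}            # start at origin, mixed state
    for step in range(100):
        new = {}
        for x, r in rho.items():
            for op, dx in ((B, +1), (C, -1)):
                new[x + dx] = new.get(x + dx, 0) + op @ r @ op.conj().T
        rho = new

    probs = {x: np.trace(r).real for x, r in rho.items()}
    print("total probability:", sum(probs.values()))   # stays 1 (trace preserving)
    print("mean position:", sum(x * p for x, p in probs.items()))
    ```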

  10. Biomechanical advantages of robot-assisted pedicle screw fixation in posterior lumbar interbody fusion compared with freehand technique in a prospective randomized controlled trial-perspective for patient-specific finite element analysis.

    PubMed

    Kim, Ho-Joong; Kang, Kyoung-Tak; Park, Sung-Cheol; Kwon, Oh-Hyo; Son, Juhyun; Chang, Bong-Soon; Lee, Choon-Ki; Yeom, Jin S; Lenke, Lawrence G

    2017-05-01

    There have been conflicting results on the surgical outcomes of lumbar fusion surgery using two different techniques: robot-assisted pedicle screw fixation and the conventional freehand technique. In addition, there have been no studies of the biomechanical issues between the two techniques. This study aimed to investigate the biomechanical properties, in terms of stress at adjacent segments, of the robot-assisted pedicle screw insertion technique (robot-assisted, minimally invasive posterior lumbar interbody fusion, Rom-PLIF) and the freehand technique (conventional, freehand, open-approach posterior lumbar interbody fusion, Cop-PLIF) for instrumented lumbar fusion surgery. This is an additional post-hoc analysis using patient-specific finite element (FE) models. The sample is composed of patients with degenerative lumbar disease. Intradiscal pressure and facet contact force are the outcome measures. Patients were randomly assigned to undergo an instrumented PLIF procedure using a Rom-PLIF (37 patients) or a Cop-PLIF (41 patients) approach. Five patients in each group were selected using a simple random sampling method after the operation, and 10 preoperative and postoperative lumbar spines were modeled from the preoperative high-resolution computed tomography of these 10 patients, using the same method as a validated lumbar spine model. Under four pure moments of 7.5 Nm, the changes in intradiscal pressure and facet joint contact force at the proximal adjacent segment following fusion surgery were analyzed and compared with the preoperative state. The representativeness of the random samples was verified. Both groups showed significant increases in postoperative intradiscal pressure at the proximal adjacent segment under all four moments, compared with the preoperative state. The Cop-PLIF models demonstrated significantly higher percent increments of intradiscal pressure at proximal adjacent segments under extension, lateral bending, and torsion moments than the Rom-PLIF models (p=.032, p=.008, and p=.016, respectively). Furthermore, the percent increment of facet contact force was significantly higher in the Cop-PLIF models under extension and torsion moments than in the Rom-PLIF models (p=.016 under both extension and torsion moments). The present study showed the clinical applicability of subject-specific FE analysis in the spine. Even though the robot-assisted insertions were biomechanically superior in terms of alleviating stress increments at adjacent segments after fusion, cautious interpretation is needed because of the small sample size. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. The effect of group counseling based on self-awareness skill on sexual risk-taking among girl students in Gorgan, Iran: a randomized trial.

    PubMed

    Kabiri, Golnoosh; Ziaei, Tayebe; Aval, Masumeh Rezaei; Vakili, Mohammad Ali

    2017-09-15

    Background Sexual puberty in adolescents occurs before their mental and emotional maturity and exposes them to high-risk sexual behaviors. Because sexual risk-taking develops before adolescents become involved in a sexual relationship, this study was conducted to identify the effect of group counseling based on self-awareness skill on sexual risk-taking among female high school students in Gorgan, in order to suggest preventative measures. Methods The present parallel study is a randomized field trial conducted on 96 female students in grades 10, 11 and 12 of high school, aged 14-18 years. Sampling was done in a multi-stage process. In the first stage, four of six health centers were selected through a randomized clustering approach. In the second stage, 96 participants were recruited through consecutive sampling. Finally, the participants were divided into intervention and control groups (each with 48 subjects) through simple randomization. No blinding was done in the present study. The data were collected using a demographic specifications form and the Iranian Adolescents Risk-Taking Scale (IARS). The counseling sessions based on self-awareness skill were delivered to the intervention group in 60-min sessions over 7 weeks. The pretest was conducted for both groups, and the posttest was completed 1 week and 1 month after the intervention by the intervention and control groups. After loss to follow-up/dropout, a total of 80 subjects remained in the study: 42 in the intervention group and 38 in the control group. Data analyses were done using SPSS v.16 along with the Friedman non-parametric test and the Mann-Whitney and Wilcoxon tests. Results The results showed that the mean sexual risk-taking scores in the intervention group (10.54 ± 15.64) were reduced at the 1-week (8.03 ± 12.82) and 1-month (4.91 ± 10.10) follow-ups after the intervention. This reduction was statistically significant (p = 14%). However, no statistically significant difference was observed in the control group. Conclusion Group counseling based on self-awareness skill decreased sexual risk-taking in female high school students. As prevention is preferable to treatment, this method could be proposed for the prevention of high-risk sexual behavior to healthcare centers, educational environments, and non-government organizations (NGOs) interacting with adolescents.

  12. An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response

    PubMed Central

    Stipčević, Mario; Ursin, Rupert

    2015-01-01

    Random numbers are essential for our modern information-based society, e.g. in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNG) rely on a process which, even in principle, can only be described by a probabilistic theory. Here we present a conceptually simple implementation, which offers 100% efficiency of producing a random bit upon request and simultaneously exhibits ultra-low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actually implemented technology and enables quick estimation of the randomness of very long sequences. The generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrate the maturity and overall understanding of the technology. PMID:26057576

  13. A generator for unique quantum random numbers based on vacuum states

    NASA Astrophysics Data System (ADS)

    Gabriel, Christian; Wittmann, Christoffer; Sych, Denis; Dong, Ruifang; Mauerer, Wolfgang; Andersen, Ulrik L.; Marquardt, Christoph; Leuchs, Gerd

    2010-10-01

    Random numbers are a valuable component in diverse applications that range from simulations over gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of proposals for producing random numbers based on the foundational unpredictability of quantum mechanics. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest-energy vacuum state, which cannot be correlated to any other state. The simplicity of our source and its verifiably unique randomness are important attributes for achieving high-reliability, high-speed and low-cost quantum random number generators.
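
    Stripped of the optics, the digitization step reduces to thresholding Gaussian samples. A minimal sketch, with a pseudo-random generator standing in for the physical homodyne signal; real devices add calibration, binning, and randomness extraction:

    ```python
    # Quadrature-to-bit conversion sketch: homodyne measurements of a vacuum
    # mode give zero-mean Gaussian quadrature values; their signs yield bits.
    import numpy as np

    rng = np.random.default_rng(5)          # stand-in for the physical noise source
    quadratures = rng.normal(0.0, 1.0, 10000)
    bits = (quadratures > 0).astype(int)
    print(bits[:16], bits.mean())           # mean should be close to 0.5
    ```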

  14. Network-Physics (NP) BEC DIGITAL(#)-VULNERABILITY; ``Q-Computing"=Simple-Arithmetic;Modular-Congruences=SignalXNoise PRODUCTS=Clock-model;BEC-Factorization;RANDOM-# Definition;P=/=NP TRIVIAL Proof!!!

    NASA Astrophysics Data System (ADS)

    Pi, E. I.; Siegel, E.

    2010-03-01

    Siegel[AMS Natl.Mtg.(2002)-Abs.973-60-124] digits logarithmic- law inversion to ONLY BEQS BEC:Quanta/Bosons=#: EMP-like SEVERE VULNERABILITY of ONLY #-networks(VS.ANALOG INvulnerability) via Barabasi NP(VS.dynamics[Not.AMS(5/2009)] critique);(so called)``quantum-computing''(QC) = simple-arithmetic (sansdivision);algorithmiccomplexities:INtractibility/UNdecidabi lity/INefficiency/NONcomputability/HARDNESS(so MIScalled) ``noise''-induced-phase-transition(NIT)ACCELERATION:Cook-Levin theorem Reducibility = RG fixed-points; #-Randomness DEFINITION via WHAT? Query(VS. Goldreich[Not.AMS(2002)] How? mea culpa)= ONLY MBCS hot-plasma v #-clumping NON-random BEC; Modular-Arithmetic Congruences = Signal x Noise PRODUCTS = clock-model; NON-Shor[Physica A,341,586(04)]BEC logarithmic-law inversion factorization: Watkins #-theory U statistical- physics); P=/=NP C-S TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation(3 millennia AGO geometry: NO:CC,``CS'';``Feet of Clay!!!'']; Query WHAT?:Definition: (so MIScalled)``complexity''=UTTER-SIMPLICITY!! v COMPLICATEDNESS MEASURE(S).

  15. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    PubMed Central

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy-to-compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
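
    A minimal simulation sketch of the special case described: with randomized treatment, the treatment coefficient of a main-terms Poisson working model recovers the marginal log rate ratio even though the outcome model is deliberately misspecified. The data-generating process below is an arbitrary illustration, not the authors' example:

    ```python
    # Misspecified main-terms Poisson working model still recovers the
    # marginal log rate ratio under randomization.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(11)
    n = 200000
    w = rng.normal(size=n)                            # baseline covariate
    a = rng.integers(0, 2, size=n)                    # randomized treatment

    base = np.sin(w) + 0.2 * w ** 2                   # deliberately non-log-linear
    effect = 0.3 * (1 + 0.5 * np.tanh(w))             # heterogeneous effect
    y = rng.poisson(np.exp(base + a * effect))

    X = sm.add_constant(np.column_stack([a, w]))      # misspecified working model
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

    true_rr = np.exp(base + effect).mean() / np.exp(base).mean()
    print("true marginal log RR:", np.log(true_rr))
    print("working-model estimate:", fit.params[1])   # should be close
    ```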

  16. Occupational exposure decisions: can limited data interpretation training help improve accuracy?

    PubMed

    Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul

    2009-06-01

    Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment, and other programs designed to protect workers. A desktop study was performed using videos, task information, and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered, in which participants were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants were given a data interpretation or "rule of thumb" training which included a simple set of rules for estimating 95th percentiles for small data sets from a log-normal population. The DIT was given to each participant before and after the rule-of-thumb training. Results of each DIT and the qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data, to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments across all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DIT scores and quantitative judgments were significantly better than random chance and much improved by the rule-of-thumb training, which also reduced the amount of bias in both. The mean DIT percent-correct scores increased from 47% to 64% after the rule-of-thumb training (P < 0.001), and the accuracy of quantitative desktop judgments increased from 43% to 63% correct (P < 0.001). The rule-of-thumb training did not significantly affect the accuracy of qualitative desktop judgments. The finding that even simple statistical rules of thumb significantly improve judgment accuracy suggests that hygienists should routinely use statistical tools when making exposure judgments from monitoring data.
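
    One plausible form of such a rule, estimating the 95th percentile of a lognormal exposure distribution from a handful of samples, is sketched below. The monitoring values and exposure limit are hypothetical, and this is not necessarily the exact rule taught in the study:

    ```python
    # Lognormal rule-of-thumb sketch: X_0.95 = exp(mean(ln x) + 1.645 * sd(ln x)).
    import numpy as np

    def lognormal_p95(samples):
        logs = np.log(samples)
        return np.exp(logs.mean() + 1.645 * logs.std(ddof=1))

    # hypothetical air-monitoring results (mg/m^3) for one task
    samples = np.array([0.12, 0.31, 0.08, 0.55, 0.19])
    oel = 0.5                                        # hypothetical exposure limit
    x95 = lognormal_p95(samples)
    print(x95, "exceeds OEL" if x95 > oel else "below OEL")
    ```

    Note how the estimated 95th percentile can exceed the limit even when every individual measurement is below it, which is the kind of judgment the training targets.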

  17. A structural model of the dimensions of teacher stress.

    PubMed

    Boyle, G J; Borg, M G; Falzon, J M; Baglioni, A J

    1995-03-01

    A comprehensive survey of teacher stress, job satisfaction and career commitment among 710 full-time primary school teachers was undertaken by Borg, Riding & Falzon (1991) in the Mediterranean islands of Malta and Gozo. A principal components analysis of a 20-item sources of teacher stress inventory had suggested four distinct dimensions which were labelled: Pupil Misbehaviour, Time/Resource Difficulties, Professional Recognition Needs, and Poor Relationships, respectively. To check on the validity of the Borg et al. factor solution, the group of 710 teachers was randomly split into two separate samples. Exploratory factor analysis was carried out on the data from Sample 1 (N = 335), while Sample 2 (N = 375) provided the cross-validational data for a LISREL confirmatory factor analysis. Results supported the proposed dimensionality of the sources of teacher stress (measurement model), along with evidence of an additional teacher stress factor (Workload). Consequently, structural modelling of the 'causal relationships' between the various latent variables and self-reported stress was undertaken on the combined samples (N = 710). Although both non-recursive and recursive models incorporating Poor Colleague Relations as a mediating variable were tested for their goodness-of-fit, a simple regression model provided the most parsimonious fit to the empirical data, wherein Workload and Student Misbehaviour accounted for most of the variance in predicting teaching stress.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swarctz, Christopher; Alijallis, Elias; Hunter, Scott Robert

    In this study, a closed-loop low-temperature wind tunnel was custom-built and used to investigate the anti-icing mechanism of superhydrophobic surfaces under regulated flow velocities, temperatures, humidity, and water moisture particle sizes. Silica nanoparticle-based hydrophobic coatings were tested as superhydrophobic surface models. During tests, images of ice formation were captured by a camera and used for analysis of ice morphology. Prior to and after wind tunnel testing, apparent contact angles of sessile water droplets on the samples were measured with a contact angle meter to check for degradation of surface superhydrophobicity. A simple peel test was also performed to estimate the adhesion of ice on the surfaces. When compared to an untreated sample, superhydrophobic surfaces inhibited initial ice formation. After a period of time, random droplet strikes attached to the superhydrophobic surfaces and started to coalesce with previously deposited ice droplets. These sites appear as mounds of accreted ice across the surface. The ice formations on the superhydrophobic samples appear white rather than transparent, owing to trapped air; they resemble soft rime ice rather than the transparent glaze ice seen on the untreated sample. Compared to untreated surfaces, the icing film formed on superhydrophobic surfaces was easy to peel off by shear flows.

  19. Automated semantic indexing of figure captions to improve radiology image retrieval.

    PubMed

    Kahn, Charles E; Rubin, Daniel L

    2009-01-01

    We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. The authors measured the impact of concept-based retrieval to improve upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Estimated precision was 0.897 (95% confidence interval, 0.857-0.937). Estimated recall was 0.930 (95% confidence interval, 0.838-1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval.
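
    A minimal sketch of the kind of sample-based estimate reported above, using a normal-approximation confidence interval for a proportion; the counts are hypothetical:

    ```python
    # Estimate precision (a proportion) and its confidence interval from a
    # random sample of indexed concepts.
    import math

    def proportion_ci(k, n, z=1.96):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, (p - half, p + half)

    print(proportion_ci(224, 250))   # e.g. 224 of 250 sampled concepts correct
    ```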

  20. Factors associated with chronic diseases among the elderly receiving treatment under the Family Health Strategy.

    PubMed

    Pimenta, Fernanda Batista; Pinho, Lucinéia; Silveira, Marise Fagundes; Botelho, Ana Cristina de Carvalho

    2015-08-01

    The profile of a sample population of elderly people receiving treatment under the Family Health Strategy in the municipality of Teófilo Otoni, State of Minas Gerais, Brazil, is described, and the factors associated with disease prevalence are examined. Using simple random sampling, 385 elderly people were interviewed using Form A and the Elderly Form from the Primary Health Care Information System. The majority of the sample (83.1%) self-reported at least one disease, 69.9% had hypertension, and 17.7% had diabetes. Poisson regression analysis showed that the main factors associated with hypertension and other diseases were being non-white, having a low level of education, medication use, dental prosthesis use, and lack of a private health plan. The prevalence of diabetes was greater among women and among individuals who depended on other people to live. It can be concluded that this sample population of elderly people has a generally low socioeconomic status and is more susceptible to developing diseases, particularly hypertension. Although diabetes had a relatively low prevalence, it should be controlled. Investments are suggested in structuring the health system network to provide adequate care for the elderly and in training health professionals to play an effective role in improving the quality of life of the elderly in Brazil.
