Sample records for simple random selection

  1. The Effect of Herrmann Whole Brain Teaching Method on Students' Understanding of Simple Electric Circuits

    ERIC Educational Resources Information Center

    Bawaneh, Ali Khalid Ali; Nurulazam Md Zain, Ahmad; Salmiza, Saleh

    2011-01-01

The purpose of this study was to investigate the effect of the Herrmann Whole Brain Teaching Method, compared with the conventional teaching method, on eighth graders' understanding of simple electric circuits in Jordan. Participants (N = 273 students; M = 139, F = 134) were randomly selected from the Bani Kenanah region in the north of Jordan and randomly assigned to…

  2. The Relationship between Teachers Commitment and Female Students Academic Achievements in Some Selected Secondary School in Wolaita Zone, Southern Ethiopia

    ERIC Educational Resources Information Center

    Bibiso, Abyot; Olango, Menna; Bibiso, Mesfin

    2017-01-01

The purpose of this study was to investigate the relationship between teachers' commitment and female students' academic achievement in selected secondary schools of Wolaita Zone, Southern Ethiopia. The research method employed was a survey study, and the sampling techniques were purposive, simple random and stratified random sampling. Questionnaire…

  3. Attitude and Motivation as Predictors of Academic Achievement of Students in Clothing and Textiles

    ERIC Educational Resources Information Center

    Uwameiye, B. E.; Osho, L. E.

    2011-01-01

This study investigated attitude and motivation as predictors of academic achievement of students in clothing and textiles. Three colleges of education in Edo and Delta States were randomly selected for use in this study. From each school, 40 students were selected from Year III using a simple random technique, yielding a total of 240 students. The…

  4. RECAL: A Computer Program for Selecting Sample Days for Recreation Use Estimation

    Treesearch

    D.L. Erickson; C.J. Liu; H. Ken Cordell; W.L. Chen

    1980-01-01

    Recreation Calendar (RECAL) is a computer program in PL/I for drawing a sample of days for estimating recreation use. With RECAL, a sampling period of any length may be chosen; simple random, stratified random, and factorial designs can be accommodated. The program randomly allocates days to strata and locations.
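
RECAL itself is a PL/I program; as a rough illustration of the sampling designs the abstract mentions, here is a minimal Python sketch (the function name and interface are invented for illustration) that draws either a simple random or a stratified random sample of days:

```python
import random

def draw_sample_days(n_days, n_sample, strata=None, seed=0):
    """Draw a random sample of day indices for use estimation.

    strata: optional dict mapping stratum name -> list of day indices;
    when given, n_sample days are drawn from each stratum (stratified
    random design); otherwise a simple random sample of n_sample days
    is drawn from range(n_days).
    """
    rng = random.Random(seed)
    if strata is None:
        return sorted(rng.sample(range(n_days), n_sample))
    return {name: sorted(rng.sample(days, n_sample))
            for name, days in strata.items()}

# Simple random sample of 8 days from a 90-day season
print(draw_sample_days(90, 8))

# Stratified design: 3 days drawn from each stratum
strata = {"weekend": list(range(0, 90, 7)) + list(range(6, 90, 7)),
          "weekday": [d for d in range(90) if d % 7 not in (0, 6)]}
print(draw_sample_days(90, 3, strata=strata))
```

A factorial design, as in RECAL, would cross further factors (e.g. location) with the day strata in the same way.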

  5. The coalescent process in models with selection and recombination.

    PubMed

    Hudson, R R; Kaplan, N L

    1988-11-01

The statistical properties of the process describing the genealogical history of a random sample of genes at a selectively neutral locus which is linked to a locus at which natural selection operates are investigated. It is found that the equations describing this process are simple modifications of the equations describing the process assuming that the two loci are completely linked. Thus, the statistical properties of the genealogical process for a random sample at a neutral locus linked to a locus with selection follow from the results obtained for the selected locus. Sequence data from the alcohol dehydrogenase (Adh) region of Drosophila melanogaster are examined and compared to predictions based on the theory. It is found that the spatial distribution of nucleotide differences between Fast and Slow alleles of Adh is very similar to the spatial distribution predicted if balancing selection operates to maintain the allozyme variation at the Adh locus. The spatial distribution of nucleotide differences between different Slow alleles of Adh does not match the predictions of this simple model very well.

  6. Classification of epileptic EEG signals based on simple random sampling and sequential feature selection.

    PubMed

    Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui

    2016-06-01

    Electroencephalogram (EEG) signals are used broadly in the medical fields. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer, sleep problems and so on. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least square support vector machine (LS_SVM) classifier to classify the EEG signals. The LS_SVM classifier classified the features which are extracted and selected from the SRS and the SFS. The experimental results show that the method achieves 99.90, 99.80 and 100 % for classification accuracy, sensitivity and specificity, respectively.
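
The SFS stage described above is a standard greedy wrapper method. The following Python sketch (not the authors' code; the nearest-centroid scoring function is an assumed stand-in for their LS_SVM classifier) shows the general idea on toy data:

```python
import random

def nearest_centroid_accuracy(X, y, feats):
    """Score a candidate feature subset by the training accuracy of a
    nearest-centroid classifier restricted to those features."""
    classes = sorted(set(y))
    cents = {c: [sum(X[i][f] for i in range(len(X)) if y[i] == c) /
                 sum(1 for t in y if t == c) for f in feats]
             for c in classes}
    correct = 0
    for i, row in enumerate(X):
        pred = min(classes, key=lambda c: sum(
            (row[f] - cents[c][j]) ** 2 for j, f in enumerate(feats)))
        correct += pred == y[i]
    return correct / len(X)

def sequential_forward_selection(X, y, k):
    """Greedily add, one at a time, the feature whose inclusion gives
    the best score, until k features are selected."""
    selected, remaining = [], list(range(len(X[0])))
    while len(selected) < k:
        best = max(remaining, key=lambda f: nearest_centroid_accuracy(
            X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: feature 0 separates the classes; features 1 and 2 are noise.
rng = random.Random(1)
X, y = [], []
for cls in (0, 1):
    for _ in range(20):
        X.append([(1.0 if cls else -1.0) + rng.gauss(0.0, 0.3),
                  rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)])
        y.append(cls)
print(sequential_forward_selection(X, y, 2))  # feature 0 is chosen first
```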

  7. Effects of the Physical Laboratory versus the Virtual Laboratory in Teaching Simple Electric Circuits on Conceptual Achievement and Attitudes Towards the Subject

    ERIC Educational Resources Information Center

    Tekbiyik, Ahmet; Ercan, Orhan

    2015-01-01

The current study examined the effects of virtual and physical laboratory practices on students' conceptual achievement in the subject of electricity and their attitudes towards simple electric circuits. Two groups (virtual and physical), selected through simple random sampling, were taught with the web-aided material called "Electricity in Our…

  8. Use of Matrix Sampling Procedures to Assess Achievement in Solving Open Addition and Subtraction Sentences.

    ERIC Educational Resources Information Center

    Montague, Margariete A.

    This study investigated the feasibility of concurrently and randomly sampling examinees and items in order to estimate group achievement. Seven 32-item tests reflecting a 640-item universe of simple open sentences were used such that item selection (random, systematic) and assignment (random, systematic) of items (four, eight, sixteen) to forms…

  9. Final report : sampling plan for pavement condition ratings of secondary roads.

    DOT National Transportation Integrated Search

    1984-01-01

    The purpose of this project was to develop a random sampling plan for use in selecting segments of the secondary highway system for evaluation under the Department's PMS. The plan developed is described here. It is a simple, workable, random sampling...

  10. Modeling the Stress Complexities of Teaching and Learning of School Physics in Nigeria

    ERIC Educational Resources Information Center

    Emetere, Moses E.

    2014-01-01

This study was designed to investigate the validity of the stress complexity model (SCM) in the teaching and learning of school physics in the Abuja Municipal Area Council, Abuja North. About two hundred students were selected by a simple random sampling technique from schools within the Abuja Municipal Area Council. A survey research…

  11. Selecting Statistical Quality Control Procedures for Limiting the Impact of Increases in Analytical Random Error on Patient Safety.

    PubMed

    Yago, Martín

    2017-05-01

QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1ks and X/χ2 rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error to the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1ks rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X/χ2 rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision. © 2017 American Association for Clinical Chemistry.
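
As a minimal illustration of the 1ks family of control rules mentioned in the abstract (an assumed simplification, not the paper's charts or nomograms), a run is flagged when any control observation falls more than k standard deviations from its target:

```python
def qc_1ks(control_values, target_mean, target_sd, k=3.0):
    """1ks rule: flag the run as out of control if any control result
    falls outside target_mean +/- k * target_sd."""
    return any(abs(v - target_mean) > k * target_sd for v in control_values)

print(qc_1ks([100.2, 99.8], 100.0, 1.0, k=3.0))   # in control -> False
print(qc_1ks([100.2, 104.1], 100.0, 1.0, k=3.0))  # out of control -> True
```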

  12. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.

    PubMed

    Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano

    2017-11-08

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. 
How neurons in this area respond to stimuli—and in particular, to combinations of stimuli ("mixed selectivity")—is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. Copyright © 2017 the authors 0270-6474/17/3711021-16$15.00/0.
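
The "simple Hebbian learning rule" is described only generically in the abstract; a minimal textbook-style sketch (not the authors' model) shows how Hebbian updates applied on top of random initial weights selectively strengthen weights driven by active inputs:

```python
import random

def hebbian_update(w, x, y, lr=0.1):
    """Plain Hebbian rule: each weight grows in proportion to the product
    of its presynaptic input x_j and the postsynaptic response y."""
    return [wj + lr * y * xj for wj, xj in zip(w, x)]

rng = random.Random(0)
w = [rng.gauss(0.0, 1.0) for _ in range(4)]  # random feedforward weights
w_init = list(w)
x = [1.0, 0.0, 1.0, 0.0]                     # one input pattern
for _ in range(10):
    y = sum(wj * xj for wj, xj in zip(w, x))  # linear response
    w = hebbian_update(w, x, y)
# Weights from active inputs grow in magnitude; silent inputs are untouched.
print([round(wj, 3) for wj in w])
```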

  13. Programmable disorder in random DNA tilings

    NASA Astrophysics Data System (ADS)

    Tikhomirov, Grigory; Petersen, Philip; Qian, Lulu

    2017-03-01

    Scaling up the complexity and diversity of synthetic molecular structures will require strategies that exploit the inherent stochasticity of molecular systems in a controlled fashion. Here we demonstrate a framework for programming random DNA tilings and show how to control the properties of global patterns through simple, local rules. We constructed three general forms of planar network—random loops, mazes and trees—on the surface of self-assembled DNA origami arrays on the micrometre scale with nanometre resolution. Using simple molecular building blocks and robust experimental conditions, we demonstrate control of a wide range of properties of the random networks, including the branching rules, the growth directions, the proximity between adjacent networks and the size distribution. Much as combinatorial approaches for generating random one-dimensional chains of polymers have been used to revolutionize chemical synthesis and the selection of functional nucleic acids, our strategy extends these principles to random two-dimensional networks of molecules and creates new opportunities for fabricating more complex molecular devices that are organized by DNA nanostructures.

  14. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex

    PubMed Central

    Lindsay, Grace W.

    2017-01-01

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear “mixed” selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. 
How neurons in this area respond to stimuli—and in particular, to combinations of stimuli (“mixed selectivity”)—is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. PMID:28986463

  15. Teachers' Methodologies and Sources of Information on HIV/AIDS for Students with Visual Impairments in Selected Residential and Integrated Schools in Ghana

    ERIC Educational Resources Information Center

    Hayford, Samuel K.; Ocansey, Frederick

    2017-01-01

    This study reports part of a national survey on sources of information, education and communication materials on HIV/AIDS available to students with visual impairments in residential, segregated, and integrated schools in Ghana. A multi-staged stratified random sampling procedure and a purposive and simple random sampling approach, where…

  16. Measuring CAMD technique performance. 2. How "druglike" are drugs? Implications of Random test set selection exemplified using druglikeness classification models.

    PubMed

    Good, Andrew C; Hermsmeier, Mark A

    2007-01-01

    Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on the discipline of algorithm development. Such efforts are often wrought to the detriment of the data set selection and analysis used in said algorithm validation. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation with test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models utilizing large and supposedly heterogeneous databases are discussed.
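
The contrast the abstract draws, between random test-set creation and explicit ontological separation by drug class, can be sketched as follows (the record format and class labels are invented for illustration):

```python
import random

def random_split(records, test_frac, rng):
    """Standard random test-set creation: shuffle, then slice."""
    recs = records[:]
    rng.shuffle(recs)
    n_test = int(len(recs) * test_frac)
    return recs[n_test:], recs[:n_test]

def split_by_class(records, get_class, held_out_classes):
    """Ontological separation: entire classes are held out, so no class
    appears in both the training and the test set."""
    train = [r for r in records if get_class(r) not in held_out_classes]
    test = [r for r in records if get_class(r) in held_out_classes]
    return train, test

# Invented records: (molecule id, drug class).
records = [("mol%d_%s" % (i, cls), cls)
           for cls in ("statin", "nsaid", "opioid") for i in range(4)]

# Random split: classes typically leak across train and test.
train_r, test_r = random_split(records, 0.25, random.Random(0))

# Class-held-out split: train and test share no drug class at all.
train_c, test_c = split_by_class(records, lambda r: r[1], {"opioid"})
print(len(train_c), len(test_c))
```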

  17. Extrinsic Motivation as Correlates of Work Attitude of the Nigerian Police Force: Implications for Counselling

    ERIC Educational Resources Information Center

    Igun, Sylvester Nosakhare

    2008-01-01

The study examined extrinsic motivation as a correlate of the work attitude of the Nigeria Police Force and its implications for counselling. 300 police personnel were selected by a random sampling technique from the six departments that make up the Police Force Headquarters, Abuja. The personnel were selected from each department using simple sampling…

  18. Selection Dynamics in Joint Matching to Rate and Magnitude of Reinforcement

    ERIC Educational Resources Information Center

    McDowell, J. J.; Popa, Andrei; Calvin, Nicholas T.

    2012-01-01

    Virtual organisms animated by a selectionist theory of behavior dynamics worked on concurrent random interval schedules where both the rate and magnitude of reinforcement were varied. The selectionist theory consists of a set of simple rules of selection, recombination, and mutation that act on a population of potential behaviors by means of a…

  19. Socio-Economic Background and Access to Internet as Correlates of Students' Achievement in Agricultural Science

    ERIC Educational Resources Information Center

    Adegoke, Sunday Paul; Osokoya, Modupe M.

    2015-01-01

This study investigated access to the internet and socio-economic background as correlates of students' achievement in Agricultural Science among selected Senior Secondary School Two students in the Ogbomoso South and North Local Government Areas. The study adopted a multi-stage sampling technique. Simple random sampling was used to select 30 students from…

  20. Accounting for selection bias in association studies with complex survey data.

    PubMed

    Wirth, Kathleen E; Tchetgen Tchetgen, Eric J

    2014-05-01

    Obtaining representative information from hidden and hard-to-reach populations is fundamental to describe the epidemiology of many sexually transmitted diseases, including HIV. Unfortunately, simple random sampling is impractical in these settings, as no registry of names exists from which to sample the population at random. However, complex sampling designs can be used, as members of these populations tend to congregate at known locations, which can be enumerated and sampled at random. For example, female sex workers may be found at brothels and street corners, whereas injection drug users often come together at shooting galleries. Despite the logistical appeal, complex sampling schemes lead to unequal probabilities of selection, and failure to account for this differential selection can result in biased estimates of population averages and relative risks. However, standard techniques to account for selection can lead to substantial losses in efficiency. Consequently, researchers implement a variety of strategies in an effort to balance validity and efficiency. Some researchers fully or partially account for the survey design, whereas others do nothing and treat the sample as a realization of the population of interest. We use directed acyclic graphs to show how certain survey sampling designs, combined with subject-matter considerations unique to individual exposure-outcome associations, can induce selection bias. Finally, we present a novel yet simple maximum likelihood approach for analyzing complex survey data; this approach optimizes statistical efficiency at no cost to validity. We use simulated data to illustrate this method and compare it with other analytic techniques.
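
A standard way to account for unequal selection probabilities of this kind is inverse-probability weighting. The sketch below (a toy simulation, not the authors' maximum likelihood approach) shows how the unweighted sample mean is biased while the weighted estimate recovers the population prevalence:

```python
import random

def weighted_mean(values, probs):
    """Horvitz-Thompson style estimate of the population mean: weight
    each sampled unit by the inverse of its selection probability."""
    w = [1.0 / p for p in probs]
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)

# Toy population: two venues whose outcome prevalence differs, with
# venue B members twice as likely to be sampled as venue A members.
rng = random.Random(7)
sample_vals, sample_probs = [], []
for venue, (prev, p_sel, size) in {"A": (0.10, 0.1, 5000),
                                   "B": (0.40, 0.2, 5000)}.items():
    for _ in range(size):
        if rng.random() < p_sel:                 # unequal selection
            sample_vals.append(1 if rng.random() < prev else 0)
            sample_probs.append(p_sel)

naive = sum(sample_vals) / len(sample_vals)
adjusted = weighted_mean(sample_vals, sample_probs)
# True population prevalence is 0.25; the naive estimate is pulled
# toward oversampled venue B, while the weighted one is not.
print(round(naive, 3), round(adjusted, 3))
```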

  1. A simple rule for the evolution of cooperation on graphs and social networks.

    PubMed

    Ohtsuki, Hisashi; Hauert, Christoph; Lieberman, Erez; Nowak, Martin A

    2006-05-25

    A fundamental aspect of all biological systems is cooperation. Cooperative interactions are required for many levels of biological organization ranging from single cells to groups of animals. Human society is based to a large extent on mechanisms that promote cooperation. It is well known that in unstructured populations, natural selection favours defectors over cooperators. There is much current interest, however, in studying evolutionary games in structured populations and on graphs. These efforts recognize the fact that who-meets-whom is not random, but determined by spatial relationships or social networks. Here we describe a surprisingly simple rule that is a good approximation for all graphs that we have analysed, including cycles, spatial lattices, random regular graphs, random graphs and scale-free networks: natural selection favours cooperation, if the benefit of the altruistic act, b, divided by the cost, c, exceeds the average number of neighbours, k, which means b/c > k. In this case, cooperation can evolve as a consequence of 'social viscosity' even in the absence of reputation effects or strategic complexity.
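
The rule itself is easy to apply. A minimal sketch, assuming an adjacency-set representation of the graph:

```python
def cooperation_favoured(benefit, cost, avg_degree):
    """The simple rule from the abstract: natural selection favours
    cooperation when b/c exceeds the average number of neighbours k."""
    return benefit / cost > avg_degree

def average_degree(adjacency):
    # adjacency: dict mapping node -> set of neighbours
    return sum(len(nbrs) for nbrs in adjacency.values()) / len(adjacency)

# A cycle of 6 nodes: every vertex has k = 2 neighbours.
cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
k = average_degree(cycle)
print(cooperation_favoured(3.0, 1.0, k))  # b/c = 3 > 2 -> True
print(cooperation_favoured(1.5, 1.0, k))  # b/c = 1.5 < 2 -> False
```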

  2. Application of random effects to the study of resource selection by animals

    USGS Publications Warehouse

    Gillies, C.S.; Hebblewhite, M.; Nielsen, S.E.; Krawchuk, M.A.; Aldridge, Cameron L.; Frair, J.L.; Saher, D.J.; Stevens, C.E.; Jerde, C.L.

    2006-01-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence.2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability.3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed.4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects.5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection.6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. 
Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.

  3. Application of random effects to the study of resource selection by animals.

    PubMed

    Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L

    2006-07-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. 
Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.

  4. Influence of Counselling Services on Perceived Academic Performance of Secondary School Students in Lagos State

    ERIC Educational Resources Information Center

    Bolu-Steve, Foluke; Oredugba, Oluwabunmi Olayinka

    2017-01-01

This study examined the influence of counselling services on the perceived academic performance of secondary school students in Lagos State. At the first stage, the researchers purposively selected Ikorodu L.G.A. in Lagos State. At the second stage, they selected two schools (one private and one public) using a simple random technique.…

  5. A nonparametric method to generate synthetic populations to adjust for complex sampling design features.

    PubMed

    Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E

    2014-06-01

Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs.

  6. A nonparametric method to generate synthetic populations to adjust for complex sampling design features

    PubMed Central

    Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs. PMID:29200608

  7. Random bit generation at tunable rates using a chaotic semiconductor laser under distributed feedback.

    PubMed

    Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun

    2015-09-01

    A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
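
The postprocessing step described, self-differencing followed by selection of least-significant bits, can be sketched in a few lines (an illustration on toy 8-bit samples, not the authors' hardware pipeline):

```python
def bits_from_samples(samples, n_lsb=5, delay=1):
    """Convert digitized chaotic-intensity samples into bits by
    self-differencing (subtracting a delayed copy of the sample stream)
    and keeping the n_lsb least-significant bits of each difference."""
    bits = []
    for a, b in zip(samples[delay:], samples):
        diff = (a - b) & 0xFF          # 8-bit wraparound difference
        for k in reversed(range(n_lsb)):
            bits.append((diff >> k) & 1)
    return bits

# Toy 8-bit samples (a real system would digitize laser intensity).
samples = [200, 57, 133, 90, 244, 18]
out = bits_from_samples(samples, n_lsb=5)
print(len(out), out[:5])  # 5 bits retained per difference
```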

  8. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the Oncomelania hupensis snail survey method in marshland schistosomiasis-endemic regions, and to improve the precision, efficiency and economy of snail surveys, a 50 m × 50 m experimental field was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. Simple random sampling, systematic sampling and stratified random sampling were then applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for the simple random, systematic and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach of lower cost and higher precision for snail surveys.
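The three designs compared in this record can be sketched on synthetic data. Here a toy sampling frame is built from three altitude strata with different mean densities (illustrative numbers, not the Poyang Lake data), and each design estimates the overall mean; stratified sampling benefits whenever the stratum variable explains much of the variance.

```python
import random

def srs(frame, n, rng):
    """Simple random sample of n units without replacement."""
    return rng.sample(frame, n)

def systematic(frame, n, rng):
    """Every k-th unit from a random start (k = N // n)."""
    k = len(frame) // n
    start = rng.randrange(k)
    return frame[start::k][:n]

def stratified(strata, n_per, rng):
    """Equal allocation: n_per units sampled at random from each stratum
    (strata are equal-sized here, so the plain mean is unbiased)."""
    return [x for s in strata for x in rng.sample(s, n_per)]

rng = random.Random(0)
# toy snail densities rising with the "altitude" stratum
strata = [[rng.gauss(mu, 1) for _ in range(100)] for mu in (2, 5, 8)]
frame = [x for s in strata for x in s]
true_mean = sum(frame) / len(frame)

samples = {
    "srs": srs(frame, 30, rng),
    "systematic": systematic(frame, 30, rng),
    "stratified": stratified(strata, 10, rng),
}
errors = {k: abs(sum(v) / len(v) - true_mean) for k, v in samples.items()}
print({k: round(e, 2) for k, e in errors.items()})
```

With the frame ordered by stratum, systematic sampling behaves much like implicit stratification, which matches the abstract's finding that the stratified design achieves the smallest absolute error.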

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.

    Betweenness centrality is a graph statistic used to find vertices that participate in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems, and its complete form requires the calculation of all-pairs shortest paths from each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations from a subset of randomly selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
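The sampled-source scheme above can be sketched with Brandes-style dependency accumulation run only from a seed set S, giving the O(|S||E|) estimate for unweighted graphs; with S equal to the full vertex set it reduces to the exact computation. The graph and seed choice below are illustrative.

```python
import random
from collections import deque, defaultdict

def approx_betweenness(adj, seeds):
    """Estimate betweenness centrality by running Brandes' single-source
    shortest-path dependency accumulation only from the given seeds.
    With seeds = all vertices this is the exact O(|V||E|) computation;
    a random subset gives the O(|S||E|) estimate from the abstract."""
    bc = defaultdict(float)
    for s in seeds:
        sigma = defaultdict(int)          # number of shortest s->v paths
        sigma[s] = 1
        dist = {s: 0}
        preds = defaultdict(list)
        order = []
        q = deque([s])
        while q:                           # BFS (unweighted graph)
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = defaultdict(float)         # back-propagate dependencies
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return dict(bc)

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # path graph 0-1-2-3-4
exact = approx_betweenness(adj, list(adj))               # all seeds: exact
approx = approx_betweenness(adj, random.Random(0).sample(list(adj), 3))
print(exact[2])  # 8.0 — the middle vertex lies on the most shortest paths
```

The abstract's point is that *which* seeds enter S matters: uniform random seeds are simple, but structured seed-selection strategies can lower the estimation error for a fixed |S|.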

  10. SNP selection and classification of genome-wide SNP data using stratified sampling random forests.

    PubMed

    Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K

    2012-09-01

    For high-dimensional genome-wide association (GWA) case-control data on complex disease, there is usually a large proportion of single-nucleotide polymorphisms (SNPs) that are irrelevant to the disease. Simple random sampling of the feature subspace in a random forest with the default mtry parameter selects many subspaces that contain no informative SNPs. An exhaustive search for an optimal mtry is often required to include useful and relevant SNPs and discard the vast number of non-informative ones, but such a search is too time-consuming for high-dimensional GWA data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate decision trees in a random forest for high-dimensional GWA data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. This stratified sampling procedure ensures that each subspace contains enough useful SNPs while avoiding the very high computational cost of an exhaustive search for an optimal mtry, and it maintains the randomness of the random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408,803 SNPs and Alzheimer case-control data comprising 380,157 SNPs) to demonstrate that the proposed stratified sampling method is effective, and that it generates random forests with higher accuracy and a lower error bound than Breiman's random forest generation method. For the Parkinson data, we also highlight some interesting genes identified by the method that may be associated with neurological disorders and merit further biological investigation.
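The subspace construction step can be sketched directly: SNPs are binned by equal-width discretization of an informativeness score, then the same number is drawn at random from each bin to form one tree's feature subspace. Scores and sizes below are illustrative.

```python
import random

def stratified_subspace(snp_scores, n_groups, per_group, rng):
    """Build one feature subspace: equal-width discretization of an
    informativeness score into n_groups bins, then draw per_group SNPs
    uniformly at random from each bin. A sketch of the stratified
    subspace sampling in the abstract; scores here are toy values."""
    lo, hi = min(snp_scores.values()), max(snp_scores.values())
    width = (hi - lo) / n_groups or 1.0
    groups = [[] for _ in range(n_groups)]
    for snp, score in snp_scores.items():
        g = min(int((score - lo) / width), n_groups - 1)  # clamp the max
        groups[g].append(snp)
    return [snp for g in groups
            for snp in rng.sample(g, min(per_group, len(g)))]

rng = random.Random(7)
scores = {f"rs{i}": rng.random() for i in range(300)}   # toy informativeness
subspace = stratified_subspace(scores, n_groups=3, per_group=4, rng=rng)
print(len(subspace))  # 4 SNPs from each of 3 informativeness groups = 12
```

Each tree in the forest would call this with a fresh random draw, so the forest keeps its randomness while every subspace is guaranteed a share of high-informativeness SNPs.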

  11. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
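The iterative "most dissimilar site" selection can be approximated with a greedy max-min rule in environmental space: start from one site, then repeatedly add the candidate farthest from everything already chosen. This is a simplified stand-in for the MaxEnt-based procedure in the abstract, with toy environmental values for the four factors named there.

```python
import math
import random

def greedy_dissimilar_sites(env, k, rng):
    """Select k sites by max-min Euclidean distance in environmental
    space: each iteration adds the candidate whose nearest chosen site
    is farthest away. A simplified stand-in for iterative MaxEnt-based
    dissimilarity selection; real use would standardize each variable."""
    remaining = set(env)
    first = rng.choice(sorted(remaining))
    chosen = [first]
    remaining.discard(first)
    while len(chosen) < k and remaining:
        nxt = max(remaining,
                  key=lambda s: min(math.dist(env[s], env[c]) for c in chosen))
        chosen.append(nxt)
        remaining.discard(nxt)
    return chosen

rng = random.Random(1)
# site -> (mean temp, precipitation, elevation, vegetation class); toy values
env = {f"site{i}": (rng.uniform(5, 15), rng.uniform(300, 900),
                    rng.uniform(500, 3000), float(rng.randrange(4)))
       for i in range(40)}
sites = greedy_dissimilar_sites(env, k=8, rng=rng)
print(len(sites))
```

Note the caveat in the docstring: without standardizing the variables, elevation's large numeric range would dominate the distance, which is one reason the actual study models the environmental envelope rather than using raw Euclidean distance.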

  12. Academic Self-Efficacy Perceptions of Teacher Candidates

    ERIC Educational Resources Information Center

    Yesilyurt, Etem

    2013-01-01

    This study aims to determine the academic self-efficacy perceptions of teacher candidates. It is a survey-model study. The population consists of teacher candidates, in the 2010-2011 academic year, in the education formation program at the Ahmet Kelesoglu Education Faculty of Selcuk University. Simple random sampling was used, and the study was…

  13. Children's Ability to Comprehend Main Ideas After Reading Expository Prose.

    ERIC Educational Resources Information Center

    Baumann, James F.

    A study was conducted to evaluate children's ability to comprehend main ideas after reading connected discourse and to develop and validate a straightforward and intuitively simple system for identifying main ideas in prose. Three experimental passages were randomly selected from third and sixth grade social studies textbooks, and education…

  14. Unbiased split variable selection for random survival forests using maximally selected rank statistics.

    PubMed

    Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas

    2017-04-15

    The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Logging utilization in Idaho: Current and past trends

    Treesearch

    Eric A. Simmons; Todd A. Morgan; Erik C. Berg; Stanley J. Zarnoch; Steven W. Hayes; Mike T. Thompson

    2014-01-01

    A study of commercial timber-harvesting activities in Idaho was conducted during 2008 and 2011 to characterize current tree utilization, logging operations, and changes from previous Idaho logging utilization studies. A two-stage simple random sampling design was used to select sites and felled trees for measurement within active logging sites. Thirty-three logging...

  16. A Study on Chocolate Consumption in Prospective Teachers

    ERIC Educational Resources Information Center

    Ozgen, Leyla

    2016-01-01

    This study was planned and conducted to determine the chocolate consumption habits of prospective teachers. The study population was comprised of students attending the Faculty of Education at Gazi University in Ankara and the sample consisted of 251 prospective teachers selected with simple random sampling. 96.4% and 3.6% of the prospective…

  17. A Study of Occupational Stress and Organizational Climate of Higher Secondary Teachers

    ERIC Educational Resources Information Center

    Benedicta, A. Sneha

    2014-01-01

    This study mainly aims to describe the occupational stress and organizational climate of higher secondary teachers with regard to gender, locality, family type, experience and type of management. Simple random sampling technique was adopted for the selection of sample. The data is collected from 200 higher secondary teachers from government and…

  18. The Evaluation of Teachers' Job Performance Based on Total Quality Management (TQM)

    ERIC Educational Resources Information Center

    Shahmohammadi, Nayereh

    2017-01-01

    This study aimed to evaluate teachers' job performance based on total quality management (TQM) model. This was a descriptive survey study. The target population consisted of all primary school teachers in Karaj (N = 2917). Using Cochran formula and simple random sampling, 340 participants were selected as sample. A total quality management…

  19. Academic Optimism and Organizational Citizenship Behaviour amongst Secondary School Teachers

    ERIC Educational Resources Information Center

    Makvandi, Abdollah; Naderi, Farah; Makvandi, Behnam; Pasha, Reza; Ehteshamzadeh, Parvin

    2018-01-01

    The purpose of the study was to investigate the simple and multiple relationships between academic optimism and organizational-citizenship behavior amongst high school teachers in Ramhormoz, Iran. The sample consisted of 250 (125 female and 125 male) teachers, selected by stratified random sampling in 2016- 2017. The measurement tools included…

  20. Examining Middle School Students' Views on Text Bullying

    ERIC Educational Resources Information Center

    Semerci, Ali

    2016-01-01

    This study aimed to examine middle school students' views on text bullying in regard to gender, grade level, reactions to bullying and frequency of internet use. The participating 872 students were selected through simple random sampling method among 525 schools located in central Ankara. The data were collected via a questionnaire and a survey…

  1. Landing the Best Trustees in Your Boardroom

    ERIC Educational Resources Information Center

    Shultz, Susan F.

    2004-01-01

    Isn't it ironic that the one organization with the power to set the direction for schools, the board of education, is the one organization whose members are often randomly selected, rarely evaluated and almost never held accountable to measurable standards of excellence? The author's premise is simple: Better boards of education mean better…

  2. Motivational Factors and Teachers Commitment in Public Secondary Schools in Mbale Municipality

    ERIC Educational Resources Information Center

    Olurotimi, Ogunlade Joseph; Asad, Kamonges Wahab; Abdulrauf, Abdulkadir

    2015-01-01

    The study investigated the influence of motivational factors on teachers' commitment in public secondary schools in Mbale Municipality. The study employed a cross-sectional survey design. The sampling technique used was simple random sampling. The instrument used to collect data was a self-designed questionnaire. The data…

  3. The Relationship between Prospective Teachers' Critical Thinking Dispositions and Their Educational Philosophies

    ERIC Educational Resources Information Center

    Aybek, Birsel; Aslan, Serkan

    2017-01-01

    The aim of this research is to investigate the relationship between prospective teachers' critical thinking dispositions and their educational philosophies. The research used relational screening model. The study hosts a total of 429 prospective teachers selected by the simple random sampling method. Research data has been collected through…

  4. Hierarchy and extremes in selections from pools of randomized proteins

    PubMed Central

    Boyer, Sébastien; Biswas, Dipanwita; Kumar Soshee, Ananda; Scaramozzino, Natale; Nizak, Clément; Rivoire, Olivier

    2016-01-01

    Variation and selection are the core principles of Darwinian evolution, but quantitatively relating the diversity of a population to its capacity to respond to selection is challenging. Here, we examine this problem at a molecular level in the context of populations of partially randomized proteins selected for binding to well-defined targets. We built several minimal protein libraries, screened them in vitro by phage display, and analyzed their response to selection by high-throughput sequencing. A statistical analysis of the results reveals two main findings. First, libraries with the same sequence diversity but built around different “frameworks” typically have vastly different responses; second, the distribution of responses of the best binders in a library follows a simple scaling law. We show how an elementary probabilistic model based on extreme value theory rationalizes the latter finding. Our results have implications for designing synthetic protein libraries, estimating the density of functional biomolecules in sequence space, characterizing diversity in natural populations, and experimentally investigating evolvability (i.e., the potential for future evolution). PMID:26969726

  5. Hierarchy and extremes in selections from pools of randomized proteins.

    PubMed

    Boyer, Sébastien; Biswas, Dipanwita; Kumar Soshee, Ananda; Scaramozzino, Natale; Nizak, Clément; Rivoire, Olivier

    2016-03-29

    Variation and selection are the core principles of Darwinian evolution, but quantitatively relating the diversity of a population to its capacity to respond to selection is challenging. Here, we examine this problem at a molecular level in the context of populations of partially randomized proteins selected for binding to well-defined targets. We built several minimal protein libraries, screened them in vitro by phage display, and analyzed their response to selection by high-throughput sequencing. A statistical analysis of the results reveals two main findings. First, libraries with the same sequence diversity but built around different "frameworks" typically have vastly different responses; second, the distribution of responses of the best binders in a library follows a simple scaling law. We show how an elementary probabilistic model based on extreme value theory rationalizes the latter finding. Our results have implications for designing synthetic protein libraries, estimating the density of functional biomolecules in sequence space, characterizing diversity in natural populations, and experimentally investigating evolvability (i.e., the potential for future evolution).

  6. Modified Bat Algorithm for Feature Selection with the Wisconsin Diagnosis Breast Cancer (WDBC) Dataset

    PubMed

    Jeyasingh, Suganthi; Veluchamy, Malathi

    2017-05-01

    Early diagnosis of breast cancer is essential to save patients' lives. Usually, medical datasets include a large variety of data that can lead to confusion during diagnosis. The Knowledge Discovery in Databases (KDD) process helps to improve efficiency; it requires elimination of inappropriate and repeated data from the dataset before final diagnosis. This can be done using any of the feature selection algorithms available in data mining, and feature selection is considered a vital step for increasing classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection to eliminate irrelevant features from an original dataset. The Bat algorithm was modified using simple random sampling to select random instances from the dataset, and ranking against the global best features was used to recognize the predominant features in the dataset. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection algorithm enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnosis Breast Cancer (WDBC) dataset was used to evaluate the performance of the proposed MBA feature selection algorithm. The proposed algorithm achieved better performance in terms of the Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE).

  7. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically, the technique is exact; in practice, its accuracy is limited only by the quality of the uniform-distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
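The standard construction for such a generator combines two independent standard normals (here from Box-Muller) so that the second output is conditionally shifted by the first. This is a sketch of the general technique the abstract describes, not a translation of the original FORTRAN routine.

```python
import math
import random

def bivariate_normal(mu1, mu2, s1, s2, rho, rng):
    """Return one pair (x, y) from a bivariate normal with the given
    means, standard deviations, and correlation coefficient rho.
    Two independent standard normals from Box-Muller are combined via
    y = mu2 + s2 * (rho*z1 + sqrt(1 - rho^2)*z2)."""
    u1, u2 = 1.0 - rng.random(), rng.random()   # 1 - u avoids log(0)
    r = math.sqrt(-2.0 * math.log(u1))
    z1 = r * math.cos(2.0 * math.pi * u2)
    z2 = r * math.sin(2.0 * math.pi * u2)
    x = mu1 + s1 * z1
    y = mu2 + s2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x, y

rng = random.Random(42)
pairs = [bivariate_normal(0.0, 0.0, 1.0, 1.0, 0.8, rng) for _ in range(50_000)]
n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
cov = sum(x * y for x, y in pairs) / n - mx * my
print(round(cov, 2))  # close to the requested correlation of 0.8
```

Because both coordinates are exact transforms of uniforms, any residual error comes from the uniform generator and floating-point function evaluation, exactly the limitation the abstract notes.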

  8. Optimizing event selection with the random grid search

    NASA Astrophysics Data System (ADS)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; Stewart, Chip

    2018-07-01

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
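The core RGS idea can be sketched in a few lines: each randomly drawn *signal* event supplies a candidate set of one-sided cuts, and every candidate is scored on the full samples. The toy events, one-sided cut type, and efficiency-times-rejection score below are illustrative choices, not the enhanced cut types or physics variables from the paper.

```python
import random

def random_grid_search(signal, background, n_trials, rng):
    """RGS sketch: each of n_trials randomly chosen signal events
    defines candidate one-sided cuts (keep events with x_i >= cut_i);
    candidates are scored by signal efficiency * background rejection
    and the best-scoring cut is returned. Score choice is illustrative."""
    best = None
    for _ in range(n_trials):
        cut = rng.choice(signal)            # cuts anchored at a signal event
        passes = lambda ev: all(x >= c for x, c in zip(ev, cut))
        eff = sum(map(passes, signal)) / len(signal)        # signal kept
        rej = 1.0 - sum(map(passes, background)) / len(background)
        if best is None or eff * rej > best[0]:
            best = (eff * rej, cut)
    return best

rng = random.Random(3)
signal = [(rng.gauss(2, 1), rng.gauss(2, 1)) for _ in range(500)]      # toy events
background = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(500)]
score, cut = random_grid_search(signal, background, n_trials=200, rng=rng)
print(score > 0.5)  # the best cut separates the two toy samples well
```

Anchoring candidate cuts at observed signal events is what makes the search "random grid": the grid density automatically follows the signal distribution instead of tiling the whole variable space.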

  9. Optical encrypted holographic memory using triple random phase-encoded multiplexing in photorefractive LiNbO3:Fe crystal

    NASA Astrophysics Data System (ADS)

    Tang, Li-Chuan; Hu, Guang W.; Russell, Kendra L.; Chang, Chen S.; Chang, Chi Ching

    2000-10-01

    We propose a new holographic memory scheme based on random phase-encoded multiplexing in a photorefractive LiNbO3:Fe crystal. Experimental results show that rotating a diffuser placed as a random phase modulator in the path of the reference beam provides a simple yet effective method of increasing the holographic storage capabilities of the crystal. Combining this rotational multiplexing with angular multiplexing offers further advantages. Storage capabilities can be optimized by using a post-image random phase plate in the path of the object beam. The technique is applied to a triple phase-encoded optical security system that takes advantage of the high angular selectivity of the angular-rotational multiplexing components.

  10. Comparison of inguinal approach, scrotal sclerotherapy and subinguinal antegrade sclerotherapy in varicocele treatment: a randomized prospective study.

    PubMed

    Fayez, A; El Shantaly, K M; Abbas, M; Hauser, S; Müller, S C; Fathy, A

    2010-01-01

    We compared the outcomes and complications of three simple varicocelectomy techniques. Groups were divided according to whether they received the Ivanissevich technique (n = 55), Tauber's technique (n = 51) or subinguinal sclerotherapy (n = 49). Selection criteria were: infertility >1 year, subnormal semen, sonographic vein diameter >3 mm and regurgitation time >2 s. Patients were randomly assigned to the treatment groups, with follow-up every 3 months for 1 year. Improvement occurred only in sperm count and total motility, in all groups. Pregnancy rates were 20%, 13.73% and 12.24%, respectively, with no significant difference between groups. Hydrocele occurred only in the Ivanissevich group (5.5%). Tauber's technique is simple; however, it has the disadvantage of multiple branching of small veins. Copyright © 2010 S. Karger AG, Basel.

  11. Piecewise SALT sampling for estimating suspended sediment yields

    Treesearch

    Robert B. Thomas

    1989-01-01

    A probability sampling method called SALT (Selection At List Time) has been developed for collecting and summarizing data on delivery of suspended sediment in rivers. It is based on sampling and estimating yield using a suspended-sediment rating curve for high discharges and simple random sampling for low flows. The method gives unbiased estimates of total yield and...

  12. Jackknifing Techniques for Evaluation of Equating Accuracy. Research Report. ETS RR-09-39

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Lee, Yi-Hsuan; Qian, Jiahe

    2009-01-01

    Grouped jackknifing may be used to evaluate the stability of equating procedures with respect to sampling error and with respect to changes in anchor selection. Properties of grouped jackknifing are reviewed for simple-random and stratified sampling, and its use is described for comparisons of anchor sets. Application is made to examples of item…

  13. A Comparative Study of Factors Influencing Male and Female Lecturers' Job Satisfaction in Ghanaian Higher Education

    ERIC Educational Resources Information Center

    Amos, Patricia Mawusi; Acquah, Sakina; Antwi, Theresa; Adzifome, Nixon Saba

    2015-01-01

    The study sought to compare factors influencing male and female lecturers' job satisfaction. Cross-sectional survey designs employing both quantitative and qualitative approaches were adopted for the study. Simple random sampling was used to select 163 lecturers from the four oldest public universities in Ghana. Celep's (2000) Organisational…

  14. Paradoxes in Film Ratings

    ERIC Educational Resources Information Center

    Moore, Thomas L.

    2006-01-01

    The author selected a simple random sample of 100 movies from the "Movie and Video Guide" (1996), by Leonard Maltin. The author's intent was to obtain some basic information on the population of roughly 19,000 movies through a small sample. The "Movie and Video Guide" by Leonard Maltin is an annual ratings guide to movies. While not all films ever…

  15. Determinants of Differing Teacher Attitudes towards Inclusive Education Practice

    ERIC Educational Resources Information Center

    Gyimah, Emmanuel K.; Ackah, Francis R., Jr.; Yarquah, John A.

    2010-01-01

    An examination of literature reveals that teacher attitude is fundamental to the practice of inclusive education. In order to verify the extent to which the assertion is applicable in Ghana, 132 teachers were selected from 16 regular schools in the Cape Coast Metropolis using purposive and simple random sampling techniques to respond to a four…

  16. Performing an Event Study: An Exercise for Finance Students

    ERIC Educational Resources Information Center

    Reese, William A., Jr.; Robins, Russell P.

    2017-01-01

    This exercise helps instructors teach students how to perform a simple event study. The study tests to see if stocks earn abnormal returns when added to the S&P 500. Students select a random sample of stocks that were added to the index between January 2000 and July 2015. The accompanying spreadsheet calculates cumulative abnormal returns and…

  17. The Contribution of Counseling Providers to the Success or Failure of Marriages

    ERIC Educational Resources Information Center

    Ansah-Hughes, Winifred

    2015-01-01

    This study is an investigation into the contribution of counseling providers to the success or failure of marriages. The purposive and the simple random sampling methods were used to select eight churches and 259 respondents (married people) in the Techiman Municipality. The instrument used to collect data was a 26-item questionnaire including a…

  18. Multilabel learning via random label selection for protein subcellular multilocations prediction.

    PubMed

    Wang, Xiao; Li, Guo-Zheng

    2013-01-01

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most of the existing protein subcellular localization methods deal only with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they adopt a simple strategy, transforming multilocation proteins into multiple single-location proteins, which does not take correlations among different subcellular locations into account. In this paper, a novel method named random label selection (RALS) (multilabel learning via RALS), which extends the simple binary relevance (BR) method, is proposed to learn from multilocation proteins in an effective and efficient way. RALS does not explicitly find the correlations among labels, but rather implicitly attempts to learn the label correlations from data by augmenting the original feature space with randomly selected labels as additional input features. Through a fivefold cross-validation test on a benchmark data set, we demonstrate that our proposed method, which takes label correlations into account, clearly outperforms the baseline BR method, which does not, indicating that correlations among different subcellular locations really exist and contribute to improved prediction performance. Experimental results on two benchmark data sets also show that our proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting the subcellular multilocations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public use.
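The feature-augmentation step at the heart of RALS can be sketched directly: for each target label's binary classifier, the values of k randomly selected *other* labels are appended to the input features, so label correlations enter the input space implicitly. The tiny dataset below is illustrative; at prediction time the appended label values would come from first-stage estimates rather than ground truth.

```python
import random

def rals_features(X, Y, target, k, rng):
    """Binary relevance with Random Label Selection (training-time
    view): for the classifier of label `target`, append the values of
    k randomly chosen other labels to each instance's feature vector.
    Returns the augmented features and the indices of the picked labels."""
    others = [j for j in range(len(Y[0])) if j != target]
    picked = rng.sample(others, k)
    X_aug = [x + tuple(y[j] for j in picked) for x, y in zip(X, Y)]
    return X_aug, picked

X = [(0.1, 0.2), (0.9, 0.8), (0.4, 0.5)]   # toy feature vectors
Y = [(1, 0, 1), (0, 1, 1), (1, 1, 0)]      # three labels per instance
X_aug, picked = rals_features(X, Y, target=0, k=2, rng=random.Random(5))
print(len(X_aug[0]))  # 2 original features + 2 label features = 4
```

A separate augmented classifier is trained per label, which is why RALS remains as simple as BR while still letting label co-occurrence patterns influence each decision.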

  19. Mobile access to virtual randomization for investigator-initiated trials.

    PubMed

    Deserno, Thomas M; Keszei, András P

    2017-08-01

    Background/aims Randomization is indispensable in clinical trials in order to provide unbiased treatment allocation and a valid statistical inference. Improper handling of allocation lists can be avoided using central systems, for example, human-based services. However, central systems are unaffordable for investigator-initiated trials and might be inaccessible from some places, where study subjects need allocations. We propose mobile access to virtual randomization, where the randomization lists are non-existent and the appropriate allocation is computed on demand. Methods The core of the system architecture is an electronic data capture system or a clinical trial management system, which is extended by an R interface connecting the R server using the Java R Interface. Mobile devices communicate via the representational state transfer web services. Furthermore, a simple web-based setup allows configuring the appropriate statistics by non-statisticians. Our comprehensive R script supports simple randomization, restricted randomization using a random allocation rule, block randomization, and stratified randomization for un-blinded, single-blinded, and double-blinded trials. For each trial, the electronic data capture system or the clinical trial management system stores the randomization parameters and the subject assignments. Results Apps are provided for iOS and Android and subjects are randomized using smartphones. After logging onto the system, the user selects the trial and the subject, and the allocation number and treatment arm are displayed instantaneously and stored in the core system. So far, 156 subjects have been allocated from mobile devices serving five investigator-initiated trials. Conclusion Transforming pre-printed allocation lists into virtual ones ensures the correct conduct of trials and guarantees a strictly sequential processing in all trial sites. Covering 88% of all randomization models used in recent trials, virtual randomization becomes available for investigator-initiated trials and potentially for large multi-center trials.
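"Virtual" randomization, with no stored allocation list, can be sketched for permuted-block randomization: the block containing a subject is regenerated on demand from a deterministic seed, so every site computes the same allocation in strict sequence. The trial ID, block size, and seeding scheme below are illustrative, not the R-based system from the abstract.

```python
import hashlib
import random

def block_allocation(trial_id, subject_no, block_size=4, arms=("A", "B")):
    """Virtual permuted-block randomization sketch: no allocation list
    exists. The permuted block for a subject is recomputed on demand
    from a deterministic seed derived from (trial, block number), so
    all sites produce identical, strictly sequential allocations."""
    block_no = (subject_no - 1) // block_size
    seed = int.from_bytes(
        hashlib.sha256(f"{trial_id}:{block_no}".encode()).digest()[:8], "big")
    block = [arms[i % len(arms)] for i in range(block_size)]  # balanced block
    random.Random(seed).shuffle(block)
    return block[(subject_no - 1) % block_size]

allocs = [block_allocation("TRIAL-01", n) for n in range(1, 9)]
print(allocs[:4].count("A"))  # each block of 4 is balanced: 2 per arm
```

In a real system the seed material would be a trial-specific secret held server-side; a guessable seed would let investigators predict upcoming allocations, which defeats allocation concealment.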

  20. Experimental analysis of multivariate female choice in gray treefrogs (Hyla versicolor): evidence for directional and stabilizing selection.

    PubMed

    Gerhardt, H Carl; Brooks, Robert

    2009-10-01

    Even simple biological signals vary in several measurable dimensions. Understanding their evolution requires, therefore, a multivariate understanding of selection, including how different properties interact to determine the effectiveness of the signal. We combined experimental manipulation with multivariate selection analysis to assess female mate choice on the simple trilled calls of male gray treefrogs. We independently and randomly varied five behaviorally relevant acoustic properties in 154 synthetic calls. We compared response times of each of 154 females to one of these calls with its response to a standard call that had mean values of the five properties. We found directional and quadratic selection on two properties indicative of the amount of signaling, pulse number, and call rate. Canonical rotation of the fitness surface showed that these properties, along with pulse rate, contributed heavily to a major axis of stabilizing selection, a result consistent with univariate studies showing diminishing effects of increasing pulse number well beyond the mean. Spectral properties contributed to a second major axis of stabilizing selection. The single major axis of disruptive selection suggested that a combination of two temporal and two spectral properties with values differing from the mean should be especially attractive.
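Multivariate selection analyses of this kind are commonly estimated by regressing relative fitness on standardized traits: a linear fit gives directional gradients (beta) and adding squared terms gives quadratic (stabilizing/disruptive) gradients (gamma). The sketch below uses simulated data and a simple Lande-Arnold-style regression, not the canonical-rotation analysis of the study above.

```python
import numpy as np

def selection_gradients(traits, fitness):
    """Lande-Arnold style estimates: directional gradients (beta) from a
    linear regression of relative fitness on standardized traits, and
    diagonal quadratic gradients (gamma = 2x the squared-term
    coefficients) from a regression that adds squared terms. A textbook
    sketch on simulated data, not the study's canonical rotation."""
    Z = (traits - traits.mean(axis=0)) / traits.std(axis=0)
    w = fitness / fitness.mean()                      # relative fitness
    X1 = np.column_stack([np.ones(len(w)), Z])
    beta = np.linalg.lstsq(X1, w, rcond=None)[0][1:]
    X2 = np.column_stack([X1, Z ** 2])
    gamma = 2.0 * np.linalg.lstsq(X2, w, rcond=None)[0][1 + Z.shape[1]:]
    return beta, gamma

# simulated traits: directional selection on trait 0, stabilizing on trait 1
rng = np.random.default_rng(0)
z = rng.normal(size=(400, 2))
fitness = np.clip(2.0 + 0.5 * z[:, 0] - 0.3 * z[:, 1] ** 2
                  + rng.normal(0, 0.1, 400), 0, None)
beta, gamma = selection_gradients(z, fitness)
print(beta.round(2), gamma.round(2))
```

A positive beta with negative diagonal gamma on the same trait is exactly the "directional plus stabilizing" pattern the study reports for the signaling-effort properties.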

  1. Development of Maps of Simple and Complex Cells in the Primary Visual Cortex

    PubMed Central

    Antolík, Ján; Bednar, James A.

    2011-01-01

    Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence, and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because mechanisms responsible for map development drive receptive fields (RF) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layers 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching, orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this model is the first to explain how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity. PMID:21559067

  2. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    PubMed Central

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible non-parametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. PMID:24633656

  3. Extending cluster lot quality assurance sampling designs for surveillance programs.

    PubMed

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.
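The sample-size inflation for clustering described in the two LQAS abstracts above is commonly done with the textbook design effect DEFF = 1 + (m − 1) × ICC, where m is the cluster size and ICC the intracluster correlation. The snippet below is a generic sketch of that idea (function name and numbers are illustrative), not the papers' non-parametric procedure:

```python
import math

def inflate_sample_size(n_srs, cluster_size, icc):
    """Inflate a simple-random-sample size by the standard design effect
    DEFF = 1 + (m - 1) * ICC for two-stage cluster sampling."""
    deff = 1.0 + (cluster_size - 1) * icc
    return math.ceil(n_srs * deff)

# An SRS size of 210 with clusters of 10 children and ICC 0.15 inflates by
# DEFF = 1 + 9 * 0.15 = 2.35, so 210 * 2.35 = 493.5, rounded up to 494.
print(inflate_sample_size(210, 10, 0.15))
```

With ICC = 0 (no within-village correlation) the cluster design needs no extra sample, which is the simple-random-sampling special case.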

  4. Optimizing event selection with the random grid search

    DOE PAGES

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; ...

    2018-02-27

    In this paper, the random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  5. Optimizing Event Selection with the Random Grid Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen

    2017-06-29

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  6. Optimizing event selection with the random grid search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen

    In this paper, the random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
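The core RGS idea in the three records above (candidate cut points are drawn from the signal events themselves, and the best set is kept under some figure of merit) can be sketched as follows. The figure of merit s/sqrt(s+b), the toy data, and all names are illustrative assumptions, not the released RGS code:

```python
import random

def random_grid_search(signal, background, n_points=200, seed=0):
    """Random grid search over one-sided cuts: each candidate cut is the
    coordinate vector of a randomly chosen signal event, and an event
    passes if every variable exceeds the corresponding threshold."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_points):
        cut = rng.choice(signal)  # thresholds taken from the signal itself
        s = sum(all(x >= c for x, c in zip(ev, cut)) for ev in signal)
        b = sum(all(x >= c for x, c in zip(ev, cut)) for ev in background)
        if s + b == 0:
            continue
        fom = s / (s + b) ** 0.5  # assumed figure of merit; any metric works
        if best is None or fom > best[0]:
            best = (fom, cut)
    return best

rng = random.Random(42)
signal = [(rng.gauss(2, 1), rng.gauss(2, 1)) for _ in range(500)]
background = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(500)]
fom, cut = random_grid_search(signal, background)
print(fom, cut)
```

Sampling cut points from the signal distribution concentrates the "grid" where signal lives, which is what makes RGS cheaper than an exhaustive grid scan.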

  7. Types of Bullying in the Senior High Schools in Ghana

    ERIC Educational Resources Information Center

    Antiri, Kwasi Otopa

    2016-01-01

    The main objective of the study was to examine the types of bullying that were taking place in the senior high schools in Ghana. A multi-stage sampling procedure, comprising purposive, simple random and snowball sampling techniques, was used in the selection of the sample. A total of 354 respondents were drawn from six schools in Ashanti, Central and…

  8. Challenges to Successful Total Quality Management Implementation in Public Secondary Schools: A Case Study of Kohat District, Pakistan

    ERIC Educational Resources Information Center

    Suleman, Qaiser; Gul, Rizwana

    2015-01-01

    The current study explores the challenges faced by public secondary schools in the successful implementation of total quality management (TQM) in Kohat District. A sample of 25 heads and 75 secondary school teachers, selected from 25 public secondary schools through a simple random sampling technique, was used. A descriptive research design was used and a…

  9. The Relationship between Temperament, Gender, and Behavioural Problems in Preschool Children

    ERIC Educational Resources Information Center

    Yoleri, Sibel

    2014-01-01

    The aim of this study is to examine the relationship between gender and the temperamental characteristics of children between the ages of five and six, as well as to assess their behavioural problems. The sample included 128 children selected by simple random sampling from 5-6-year-old children receiving preschool education in the city centre of…

  10. Internet Access and Usage by Secondary School Students in Morogoro Municipality, Tanzania

    ERIC Educational Resources Information Center

    Tarimo, Ronald; Kavishe, George

    2017-01-01

    The purpose of this paper was to report results of a study on the investigation of the Internet access and usage by secondary school students in Morogoro municipality in Tanzania. A simple random sampling technique was used to select 120 students from six schools. The data was collected through a questionnaire. A quantitative approach using the…

  11. Agro-Students' Appraisal of Online Registration of Academic Courses in the Federal University of Agriculture Abeokuta, Ogun State Nigeria

    ERIC Educational Resources Information Center

    Lawal-Adebowale, O. A.; Oyekunle, O.

    2014-01-01

    With the integration of an information technology tool for academic course registration in the Federal University of Agriculture, Abeokuta, the study assessed the agro-students' appraisal of the online tool for course registration. A simple random sampling technique was used to select 325 agro-students; and a validated and reliable questionnaire was used…

  12. Fitting distributions to microbial contamination data collected with an unequal probability sampling design.

    PubMed

    Williams, M S; Ebel, E D; Cao, Y

    2013-01-01

    The fitting of statistical distributions to microbial sampling data is a common application in quantitative microbiology and risk assessment. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples that are collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications. © 2012 No claim to US Government works.
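Inverse-probability weighting of the likelihood, as the abstract above describes, has a closed form for the exponential distribution: the weighted MLE of the rate is sum(w) / sum(w·x) with w = 1/p. The sketch below (the names and the x-dependent sampling scheme are invented for illustration, not the paper's food-sampling design) shows how ignoring the weights biases the estimate:

```python
import random

def weighted_mle_exponential(data, selection_probs):
    """Weighted (pseudo-)maximum-likelihood estimate of an exponential rate
    when observations were sampled with unequal probabilities: each
    observation is weighted by the inverse of its selection probability."""
    weights = [1.0 / p for p in selection_probs]
    return sum(weights) / sum(w * x for w, x in zip(weights, data))

rng = random.Random(0)
# Population from Exp(rate=2); larger values are sampled more often, which
# inflates the sample mean and biases the unweighted rate estimate downward.
pop = [rng.expovariate(2.0) for _ in range(20000)]
sample, probs = [], []
for x in pop:
    p = min(1.0, 0.05 + x)  # selection probability grows with x (illustrative)
    if rng.random() < p:
        sample.append(x)
        probs.append(p)

unweighted = len(sample) / sum(sample)          # naive MLE, treats data as SRS
weighted = weighted_mle_exponential(sample, probs)
print(unweighted, weighted)  # the weighted estimate sits near the true rate 2
```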

  13. Methodology Series Module 5: Sampling Strategies.

    PubMed

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice or on the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as the simple random sample or the stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has in fact used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  14. Methodology Series Module 5: Sampling Strategies

    PubMed Central

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice or on the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as the simple random sample or the stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has in fact used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study. PMID:27688438
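The two probability-sampling forms named in the module above, simple random sampling and stratified random sampling, can be contrasted in a few lines (the student roster and strata below are made up for illustration):

```python
import random

population = [f"student-{i:03d}" for i in range(1, 101)]
strata = {"year1": population[:40], "year2": population[40:70], "year3": population[70:]}

rng = random.Random(7)

# Probability sampling, form 1: simple random sample of 10 from the whole frame.
# Any year could, by chance, be missed entirely.
srs = rng.sample(population, 10)

# Probability sampling, form 2: stratified random sample with proportional
# allocation (10% of each stratum), so every year is guaranteed representation.
stratified = [unit
              for members in strata.values()
              for unit in rng.sample(members, max(1, len(members) // 10))]

print(srs)
print(stratified)
```

Both are probability samples because every unit has a known, nonzero chance of selection, which is exactly what a convenience sample lacks.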

  15. Climate-driven extinctions shape the phylogenetic structure of temperate tree floras.

    PubMed

    Eiserhardt, Wolf L; Borchsenius, Finn; Plum, Christoffer M; Ordonez, Alejandro; Svenning, Jens-Christian

    2015-03-01

    When taxa go extinct, unique evolutionary history is lost. If extinction is selective, and the intrinsic vulnerabilities of taxa show phylogenetic signal, more evolutionary history may be lost than expected under random extinction. Under what conditions this occurs is insufficiently known. We show that late Cenozoic climate change induced phylogenetically selective regional extinction of northern temperate trees because of phylogenetic signal in cold tolerance, leading to significantly and substantially larger than random losses of phylogenetic diversity (PD). The surviving floras in regions that experienced stronger extinction are phylogenetically more clustered, indicating that non-random losses of PD are of increasing concern with increasing extinction severity. Using simulations, we show that a simple threshold model of survival given a physiological trait with phylogenetic signal reproduces our findings. Our results send a strong warning that we may expect future assemblages to be phylogenetically and possibly functionally depauperate if anthropogenic climate change affects taxa similarly. © 2015 John Wiley & Sons Ltd/CNRS.
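The "simple threshold model of survival" that the abstract above simulates can be mimicked crudely. Here phylogenetic signal is faked by giving members of a clade similar cold-tolerance values; all numbers are illustrative, not the authors' data:

```python
import random

rng = random.Random(3)

# Ten clades of five species; cold tolerance is a clade-level value plus
# small species-level noise, a crude stand-in for phylogenetic signal.
species = []
for clade in range(10):
    clade_tolerance = rng.gauss(0, 3)
    for _ in range(5):
        species.append((clade, clade_tolerance + rng.gauss(0, 0.5)))

threshold = 0.0  # climate cooling: only taxa with tolerance above this survive
survivors = [(clade, t) for clade, t in species if t > threshold]
surviving_clades = {clade for clade, _ in survivors}

# Because the trait clusters by clade, extinction tends to remove whole clades,
# so more unique (deep-branch) history is lost than under random extinction.
print(len(survivors), "species in", len(surviving_clades), "clades survive")
```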

  16. Simple-MSSM: a simple and efficient method for simultaneous multi-site saturation mutagenesis.

    PubMed

    Cheng, Feng; Xu, Jian-Miao; Xiang, Chao; Liu, Zhi-Qiang; Zhao, Li-Qing; Zheng, Yu-Guo

    2017-04-01

    To develop a practically simple and robust multi-site saturation mutagenesis (MSSM) method that enables simultaneously recombination of amino acid positions for focused mutant library generation. A general restriction enzyme-free and ligase-free MSSM method (Simple-MSSM) based on prolonged overlap extension PCR (POE-PCR) and Simple Cloning techniques. As a proof of principle of Simple-MSSM, the gene of eGFP (enhanced green fluorescent protein) was used as a template gene for simultaneous mutagenesis of five codons. Forty-eight randomly selected clones were sequenced. Sequencing revealed that all the 48 clones showed at least one mutant codon (mutation efficiency = 100%), and 46 out of the 48 clones had mutations at all the five codons. The obtained diversities at these five codons are 27, 24, 26, 26 and 22, respectively, which correspond to 84, 75, 81, 81, 69% of the theoretical diversity offered by NNK-degeneration (32 codons; NNK, K = T or G). The enzyme-free Simple-MSSM method can simultaneously and efficiently saturate five codons within one day, and therefore avoid missing interactions between residues in interacting amino acid networks.

  17. The Role of Counselling and Parental Encouragement on Re-Entry of Adolescents into Secondary Schools in Abia State, Nigeria

    ERIC Educational Resources Information Center

    Alika, Henrietta Ijeoma; Ohanaka, Blessing Ijeoma

    2013-01-01

    This paper examined the role of counselling and parental encouragement on the re-entry of adolescents into secondary school in Abia State, Nigeria. A total of 353 adolescents who re-entered school were selected from six secondary schools in the State through a simple random sampling technique. A validated questionnaire was used for data analysis.…

  18. Instructional Resources as Determinants of English Language Performance of Secondary School High-Achieving Students in Ibadan, Oyo State

    ERIC Educational Resources Information Center

    Adelodun, Gboyega Adelowo; Asiru, Abdulahi Babatunde

    2015-01-01

    This study examined the role played by instructional resources in enhancing performance of students, especially that of high-achievers, in English Language. The study is descriptive in nature and it adopted a survey design. Simple random sampling technique was used for the selection of fifty (50) SSI-SSIII students from five schools in Ibadan…

  19. Factors Influencing Teachers' Competence in Developing Resilience in Vulnerable Children in Primary Schools in Uasin Gishu County, Kenya

    ERIC Educational Resources Information Center

    Silyvier, Tsindoli; Nyandusi, Charles

    2015-01-01

    The purpose of the study was to assess the effect of teacher characteristics on their competence in developing resilience in vulnerable primary school children. A descriptive survey research design was used. This study was based on resiliency theory as proposed by Krovetz (1998). Simple random sampling was used to select a sample size of 108…

  20. Lecturers and Postgraduates Perception of Libraries as Promoters of Teaching, Learning, and Research at the University of Ibadan, Nigeria

    ERIC Educational Resources Information Center

    Oyewole, Olawale; Adetimirin, Airen

    2015-01-01

    Lecturers and postgraduates are among the users of university libraries, and their perception of the libraries influences utilization of the information resources; hence the need for this study. A survey method was adopted for the study, and a simple random sampling method was used to select a sample size of 38 lecturers and 233 postgraduates.…

  1. Perception of Pre-Service Teachers' towards the Teaching Practice Programme in College of Technology Education, University of Education, Winneba

    ERIC Educational Resources Information Center

    Amankwah, Francis; Oti-Agyen, Philip; Sam, Francis Kwame

    2017-01-01

    The descriptive survey design was used to find out the perception of pre-service teachers of teaching practice (on-campus) as an initial teacher preparation programme at the University of Education, Winneba. Simple random sampling was used to select 226 pre-service teachers from the College of Technology Education, Kumasi. Data for the study were…

  2. The Contribution of Teachers' Continuous Professional Development (CPD) Program to Quality of Education and Its Teacher-Related Challenging Factors at Chagni Primary Schools, Awi Zone, Ethiopia

    ERIC Educational Resources Information Center

    Belay, Sintayehu

    2016-01-01

    This study examined the contribution of teachers' Continuous Professional Development (CPD) to quality of education and its challenging factors related to teachers. For this purpose, the study employed a descriptive survey method. Seventy-six (40.86%) participant teachers were selected using a simple random sampling technique. A close-ended questionnaire was…

  3. Why the null matters: statistical tests, random walks and evolution.

    PubMed

    Sheets, H D; Mitchell, C E

    2001-01-01

    A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test', in contrast, operates on the sequencing of steps, rather than on excursion. Applications of these tests to computer-generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise-immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests shows that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
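The two kinds of test contrasted above, excursion-based statistics and the runs test, can be sketched on simulated series: a pure random walk versus a drifting walk standing in for directional selection (drift size and series length are illustrative):

```python
import random

def max_excursion(series):
    """Largest absolute displacement from the starting value; excursion-based
    tests compare this to what a random walk would produce."""
    return max(abs(x - series[0]) for x in series)

def runs_of_steps(series):
    """Number of runs of same-signed steps; for a pure random walk with n
    steps the expected number of runs is (n + 1) / 2."""
    steps = [b - a for a, b in zip(series, series[1:])]
    signs = [s > 0 for s in steps if s != 0]
    return 1 + sum(a != b for a, b in zip(signs, signs[1:]))

rng = random.Random(5)
walk, trend = [0.0], [0.0]
for _ in range(200):
    walk.append(walk[-1] + rng.gauss(0, 1))            # pure random walk
    trend.append(trend[-1] + 0.5 + rng.gauss(0, 1))    # directional "selection"

# The trended series wanders far beyond the walk's typical excursion, which is
# what excursion-based tests detect; the runs test instead looks only at the
# ordering of up and down steps.
print(max_excursion(walk), max_excursion(trend))
print(runs_of_steps(walk), runs_of_steps(trend))
```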

  4. Composing Music with Complex Networks

    NASA Astrophysics Data System (ADS)

    Liu, Xiaofan; Tse, Chi K.; Small, Michael

    In this paper we study the network structure in music and attempt to compose music artificially. Networks are constructed with nodes and edges corresponding to musical notes and their co-occurrences. We analyze sample compositions from Bach, Mozart, Chopin, as well as other types of music including Chinese pop music. We observe remarkably similar properties in all networks constructed from the selected compositions. Power-law exponents of degree distributions, mean degrees, clustering coefficients, mean geodesic distances, etc. are reported. With the network constructed, music can be created by using a biased random walk algorithm, which begins with a randomly chosen note and selects the subsequent notes according to a simple set of rules that compares the weights of the edges, weights of the nodes, and/or the degrees of nodes. The newly created music from complex networks will be played in the presentation.
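The biased random walk described in the abstract above can be sketched on a toy note network. The notes and edge weights below are made up; they are not the co-occurrence counts from the authors' corpus, and the rule shown (probability proportional to edge weight) is the simplest of the rule variants they mention:

```python
import random

# Toy note-transition network: edge weights play the role of co-occurrence
# counts between musical notes (values invented for illustration).
transitions = {
    "C": {"D": 4, "E": 3, "G": 2},
    "D": {"C": 2, "E": 5},
    "E": {"C": 3, "D": 1, "G": 4},
    "G": {"C": 6, "E": 2},
}

def compose(start, length, seed=None):
    """Biased random walk on the note network: the next note is chosen with
    probability proportional to the outgoing edge weight."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        notes, weights = zip(*transitions[melody[-1]].items())
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

print(compose("C", 16, seed=11))
```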

  5. The impact of traffic sign deficit on road traffic accidents in Nigeria.

    PubMed

    Ezeibe, Christian; Ilo, Chukwudi; Oguonu, Chika; Ali, Alphonsus; Abada, Ifeanyi; Ezeibe, Ezinwanne; Oguonu, Chukwunonso; Abada, Felicia; Izueke, Edwin; Agbo, Humphrey

    2018-04-04

    This study assesses the impact of traffic sign deficit on road traffic accidents in Nigeria. The participants were 720 commercial vehicle drivers. While simple random sampling was used to select 6 out of 137 federal highways, stratified random sampling was used to select six categories of commercial vehicle drivers. The study used qual-dominant mixed methods approach comprising key informant interviews; group interviews; field observation; policy appraisal and secondary literature on traffic signs. Result shows that the failure of government to provide and maintain traffic signs in order to guide road users through the numerous accident black spots on the highways is the major cause of road accidents in Nigeria. The study argues that provision and maintenance of traffic signs present opportunity to promoting safety on the highways and achieving the sustainable development goals.

  6. Influence of stochastic geometric imperfections on the load-carrying behaviour of thin-walled structures using constrained random fields

    NASA Astrophysics Data System (ADS)

    Lauterbach, S.; Fina, M.; Wagner, W.

    2018-04-01

    Since structural engineering requires highly developed and optimized structures, the thickness dependency is one of the most controversially debated topics. This paper deals with stability analysis of lightweight thin structures combined with arbitrary geometrical imperfections. Generally known design guidelines only consider imperfections for simple shapes and loading, whereas for complex structures the lower-bound design philosophy still holds. Herein, uncertainties are considered with an empirical knockdown factor representing a lower bound of existing measurements. To fully understand and predict expected bearable loads, numerical investigations are essential, including geometrical imperfections. These are implemented into a stand-alone program code with a stochastic approach to compute random fields as geometric imperfections that are applied to nodes of the finite element mesh of selected structural examples. The stochastic approach uses the Karhunen-Loève expansion for the random field discretization. For this approach, the so-called correlation length l_c controls the random field in a powerful way. This parameter has a major influence on the buckling shape, and also on the stability load. First, the impact of the correlation length is studied for simple structures. Second, since most structures for engineering devices are more complex and combined structures, these are intensively discussed with the focus on constrained random fields for e.g. flange-web-intersections. Specific constraints for those random fields are pointed out with regard to the finite element model. Further, geometrical imperfections vanish where the structure is supported.
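The Karhunen-Loève discretization named in the abstract above can be written compactly. The truncated expansion and its eigenproblem are standard; the exponential covariance kernel shown is one common choice through which the correlation length l_c enters, and is an illustrative assumption, not necessarily the authors' kernel:

```latex
% Truncated Karhunen-Loève expansion of the imperfection field over domain D
X(x,\theta) \approx \mu(x) + \sum_{i=1}^{M} \sqrt{\lambda_i}\,\varphi_i(x)\,\xi_i(\theta),
\qquad \xi_i \ \text{uncorrelated, zero mean, unit variance},

% the eigenpairs (\lambda_i, \varphi_i) solve the Fredholm equation for the covariance kernel C
\int_{D} C(x,x')\,\varphi_i(x')\,\mathrm{d}x' = \lambda_i\,\varphi_i(x),
\qquad \text{e.g. } C(x,x') = \sigma^2 \exp\!\left(-\frac{\lVert x - x' \rVert}{l_c}\right).
```

A larger l_c yields smoother fields dominated by a few modes, while a smaller l_c spreads variance across many modes, which is consistent with the abstract's observation that l_c strongly controls both the buckling shape and the stability load.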

  7. Benchmarking protein classification algorithms via supervised cross-validation.

    PubMed

    Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor

    2008-04-24

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic estimates of the classifier performance than do random cross-validation schemes. 

  8. Evaluation of some random effects methodology applicable to bird ringing data

    USGS Publications Warehouse

    Burnham, K.P.; White, Gary C.

    2002-01-01

    Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S1, ..., Sk; random effects can then be a useful model: Si = E(S) + εi. Here, the temporal variation in survival probability is treated as random with average value E(εi²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for the process variation, σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional component var(Ŝ | S). Furthermore, the random effects model leads to shrinkage estimates, S̃i, as improved (in mean square error) estimators of Si compared to the MLEs, Ŝi, from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of σ̂², confidence interval coverage on σ², coverage and mean square error comparisons for inference about Si based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: Si ≡ S (no effects), Si = E(S) + εi (random effects), and S1, ..., Sk (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than the fixed-effects MLEs for the Si.
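The shrinkage construction described above can be illustrated with a simple method-of-moments version: each annual estimate is pulled toward the overall mean by the factor σ²/(σ² + sampling variance). This is a generic empirical-Bayes sketch with invented numbers, not program MARK's likelihood-based machinery:

```python
import random
import statistics

def shrink(estimates, sampling_var):
    """Method-of-moments shrinkage: pull each annual estimate toward the
    overall mean by c = process_var / (process_var + sampling_var), where the
    process (between-year) variance is estimated as total minus sampling."""
    mean = statistics.fmean(estimates)
    process_var = max(0.0, statistics.variance(estimates) - sampling_var)
    c = process_var / (process_var + sampling_var)
    return [mean + c * (s - mean) for s in estimates]

rng = random.Random(9)
true_s = [0.6 + rng.gauss(0, 0.02) for _ in range(10)]  # process variation
mle = [s + rng.gauss(0, 0.05) for s in true_s]          # noisy annual MLEs
shrunk = shrink(mle, 0.05 ** 2)

mse = lambda est: sum((a - b) ** 2 for a, b in zip(est, true_s)) / len(est)
print(mse(mle), mse(shrunk))  # shrinkage usually reduces mean square error
```

Shrinkage trades a little bias for a large variance reduction, which is why the shrunk estimates tend to beat the unrestricted MLEs in mean square error, as the simulation in the abstract found.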

  9. Honest Importance Sampling with Multiple Markov Chains

    PubMed Central

    Tan, Aixin; Doss, Hani; Hobert, James P.

    2017-01-01

Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
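The classical iid case summarized above, including the simple consistent variance estimator that makes standard errors routine, can be sketched in a few lines. The function names and the Gaussian example are illustrative choices made here, not part of the paper.

```python
import math
import random

def importance_sampling(f, target_pdf, proposal_pdf, draw, n, rng):
    """Estimate E_pi[f(X)] from iid draws of the proposal density pi_1.

    Returns (estimate, standard error); the standard error uses the
    simple consistent variance estimator valid in the iid case.
    """
    xs = [draw(rng) for _ in range(n)]
    # weighted values f(x) * pi(x) / pi_1(x)
    fw = [f(x) * target_pdf(x) / proposal_pdf(x) for x in xs]
    est = sum(fw) / n
    var = sum((y - est) ** 2 for y in fw) / (n - 1)
    return est, math.sqrt(var / n)

# Example: E[X^2] = 1 under N(0, 1), sampling from the wider N(0, 2).
std_normal = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
wide = lambda x: math.exp(-x * x / 8.0) / math.sqrt(8.0 * math.pi)
rng = random.Random(1)
est, se = importance_sampling(lambda x: x * x, std_normal, wide,
                              lambda r: r.gauss(0.0, 2.0), 10000, rng)
```

When the iid draws are replaced by correlated Markov chain samples, this pointwise variance estimate is exactly what breaks down, which is the problem the paper's regenerative construction addresses.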

  10. Honest Importance Sampling with Multiple Markov Chains.

    PubMed

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.

  11. Greedy Gossip With Eavesdropping

    NASA Astrophysics Data System (ADS)

    Ustebay, Deniz; Oreshkin, Boris N.; Coates, Mark J.; Rabbat, Michael G.

    2010-07-01

    This paper presents greedy gossip with eavesdropping (GGE), a novel randomized gossip algorithm for distributed computation of the average consensus problem. In gossip algorithms, nodes in the network randomly communicate with their neighbors and exchange information iteratively. The algorithms are simple and decentralized, making them attractive for wireless network applications. In general, gossip algorithms are robust to unreliable wireless conditions and time varying network topologies. In this paper we introduce GGE and demonstrate that greedy updates lead to rapid convergence. We do not require nodes to have any location information. Instead, greedy updates are made possible by exploiting the broadcast nature of wireless communications. During the operation of GGE, when a node decides to gossip, instead of choosing one of its neighbors at random, it makes a greedy selection, choosing the node which has the value most different from its own. In order to make this selection, nodes need to know their neighbors' values. Therefore, we assume that all transmissions are wireless broadcasts and nodes keep track of their neighbors' values by eavesdropping on their communications. We show that the convergence of GGE is guaranteed for connected network topologies. We also study the rates of convergence and illustrate, through theoretical bounds and numerical simulations, that GGE consistently outperforms randomized gossip and performs comparably to geographic gossip on moderate-sized random geometric graph topologies.
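The greedy update at the heart of GGE can be sketched as follows. This is a simplified model: neighbor values are read directly from a shared list, standing in for the knowledge a node gains by eavesdropping on wireless broadcasts, and all names are illustrative.

```python
import random

def gge_round(values, neighbors, rng):
    """One iteration of greedy gossip with eavesdropping (simplified).

    A randomly chosen node gossips with the neighbor whose current value
    differs most from its own; the pair then averages, so the global sum
    (and hence the network average) is preserved.
    """
    i = rng.randrange(len(values))
    j = max(neighbors[i], key=lambda k: abs(values[k] - values[i]))
    values[i] = values[j] = (values[i] + values[j]) / 2.0

# Ring of 8 nodes holding values 0..7 (global average 3.5).
rng = random.Random(0)
vals = [float(v) for v in range(8)]
nbrs = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
for _ in range(2000):
    gge_round(vals, nbrs, rng)
```

After enough rounds every node's value is close to the network average, while each pairwise exchange leaves the average itself unchanged.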

  12. Perception of Students on Causes of Poor Performance in Chemistry in External Examinations in Umuahia North Local Government of Abia State

    ERIC Educational Resources Information Center

    Ojukwu, M. O.

    2016-01-01

    The aim of this study was to investigate the perception of students on the causes of their poor performance in external chemistry examinations in Umuahia North Local Government Area of Abia State. Descriptive survey design was used for the study. Two hundred and forty (240) students were selected through simple random sampling for the study. A…

  13. An enhanced deterministic K-Means clustering algorithm for cancer subtype prediction from gene expression data.

    PubMed

    Nidheesh, N; Abdul Nazeer, K A; Ameer, P M

    2017-12-01

    Clustering algorithms with steps involving randomness usually give different results on different executions for the same dataset. This non-deterministic nature of algorithms such as the K-Means clustering algorithm limits their applicability in areas such as cancer subtype prediction using gene expression data. It is hard to sensibly compare the results of such algorithms with those of other algorithms. The non-deterministic nature of K-Means is due to its random selection of data points as initial centroids. We propose an improved, density based version of K-Means, which involves a novel and systematic method for selecting initial centroids. The key idea of the algorithm is to select data points which belong to dense regions and which are adequately separated in feature space as the initial centroids. We compared the proposed algorithm to a set of eleven widely used single clustering algorithms and a prominent ensemble clustering algorithm which is being used for cancer data classification, based on the performances on a set of datasets comprising ten cancer gene expression datasets. The proposed algorithm has shown better overall performance than the others. There is a pressing need in the Biomedical domain for simple, easy-to-use and more accurate Machine Learning tools for cancer subtype prediction. The proposed algorithm is simple, easy-to-use and gives stable results. Moreover, it provides comparatively better predictions of cancer subtypes from gene expression data. Copyright © 2017 Elsevier Ltd. All rights reserved.
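The seeding idea described here, choosing dense and well-separated data points as initial centroids, can be sketched as a toy deterministic procedure. The density estimate (a fixed-radius neighbor count) and the separation rule below are simplified assumptions of this sketch, not the paper's exact method.

```python
def density_seeds(points, k, radius):
    """Pick k initial centroids deterministically: favour points in dense
    regions, then enforce separation between chosen seeds (a sketch).
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # local density = number of points within `radius` (includes the point itself)
    dens = [sum(1 for q in points if dist(p, q) <= radius) for p in points]
    order = sorted(range(len(points)), key=lambda i: -dens[i])
    seeds = []
    for i in order:
        if all(dist(points[i], s) > radius for s in seeds):
            seeds.append(points[i])
        if len(seeds) == k:
            break
    return seeds
```

Because no step involves randomness, repeated runs on the same dataset return the same seeds, which is the stability property motivating the paper.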

  14. Lifestyle and specific dietary habits in the Italian population: focus on sugar intake and association with anthropometric parameters-the LIZ (Liquidi e Zuccheri nella popolazione Italiana) study.

    PubMed

    Marangoni, Franca; Brignoli, Ovidio; Cricelli, Claudio; Poli, Andrea

    2017-06-01

    In order to collect information on food intake, lifestyle and health status of the Italian population, a random cohort of about 2000 adults was selected in collaboration with the Italian society of general practitioners' network (SIMG). Cohort subjects underwent a full clinical evaluation, by their family doctor, who also collected anthropometric data and information on the prevalence of cardiovascular disease risk factors; they were also administered diary forms developed to assess dietary use of simple sugars, of sugar-containing food and of selected food items. Data obtained indicate that the consumption of simple sugars (either added or as natural part of food) by the Italian adult population is, on average, not high (65 and 67 g/day, among women and men, respectively) and mostly derived from food items such as fruit, milk and yogurt. In addition, no correlations were found, in this low-sugar-consuming cohort, between sugar intake and weight, body mass index and waist circumference. Intakes of simple sugars in the LIZ cohort are not associated with weight, BMI and waist circumference. Prospective data, from cohorts like the LIZ one, might shed further light on the contribution of simple sugar intake to health in countries like Italy.

  15. A New Random Walk for Replica Detection in WSNs.

    PubMed

    Aalsalem, Mohammed Y; Khan, Wazir Zada; Saad, N M; Hossain, Md Shohrab; Atiquzzaman, Mohammed; Khan, Muhammad Khurram

    2016-01-01

Wireless Sensor Networks (WSNs) are vulnerable to Node Replication attacks or Clone attacks. Among all the existing clone detection protocols in WSNs, RAWL shows the most promising results by employing Simple Random Walk (SRW). More recently, RAND outperforms RAWL by incorporating Network Division with SRW. Both RAND and RAWL use SRW for the random selection of witness nodes, which is problematic because the walk frequently revisits previously passed nodes, leading to longer delays, higher energy expenditure, and a lower probability that witness nodes intersect. To circumvent this problem, we propose to employ a new kind of constrained random walk, namely Single Stage Memory Random Walk, and present a distributed technique called SSRWND (Single Stage Memory Random Walk with Network Division). In SSRWND, single stage memory random walk is combined with network division, aiming to decrease the communication and memory costs while keeping the detection probability high. Intensive simulations verify that SSRWND guarantees higher witness-node security with moderate communication and memory overheads. SSRWND is expedient for security-oriented application fields of WSNs such as the military and medicine.
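The single-stage-memory constraint amounts to a non-backtracking walk: the walker remembers only its previous node and avoids stepping straight back to it, which reduces the revisiting that plagues a simple random walk. The following is a minimal sketch of the walk itself, with illustrative names, not of the full SSRWND protocol.

```python
import random

def memory_walk(neighbors, start, steps, rng):
    """Random walk with single-stage memory: the walker never returns
    directly to the node it just left (unless it has no other choice).
    """
    path = [start]
    prev, cur = None, start
    for _ in range(steps):
        choices = [n for n in neighbors[cur] if n != prev] or neighbors[cur]
        prev, cur = cur, rng.choice(choices)
        path.append(cur)
    return path

# On a 6-cycle every node has degree 2, so forbidding the previous node
# forces the walk to keep circling instead of oscillating in place.
nbrs = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
walk = memory_walk(nbrs, 0, 12, random.Random(3))
```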

  16. A New Random Walk for Replica Detection in WSNs

    PubMed Central

    Aalsalem, Mohammed Y.; Saad, N. M.; Hossain, Md. Shohrab; Atiquzzaman, Mohammed; Khan, Muhammad Khurram

    2016-01-01

Wireless Sensor Networks (WSNs) are vulnerable to Node Replication attacks or Clone attacks. Among all the existing clone detection protocols in WSNs, RAWL shows the most promising results by employing Simple Random Walk (SRW). More recently, RAND outperforms RAWL by incorporating Network Division with SRW. Both RAND and RAWL use SRW for the random selection of witness nodes, which is problematic because the walk frequently revisits previously passed nodes, leading to longer delays, higher energy expenditure, and a lower probability that witness nodes intersect. To circumvent this problem, we propose to employ a new kind of constrained random walk, namely Single Stage Memory Random Walk, and present a distributed technique called SSRWND (Single Stage Memory Random Walk with Network Division). In SSRWND, single stage memory random walk is combined with network division, aiming to decrease the communication and memory costs while keeping the detection probability high. Intensive simulations verify that SSRWND guarantees higher witness-node security with moderate communication and memory overheads. SSRWND is expedient for security-oriented application fields of WSNs such as the military and medicine. PMID:27409082

  17. Identification of cultivars and validation of genetic relationships in Mangifera indica L. using RAPD markers.

    PubMed

    Schnell, R J; Ronning, C M; Knight, R J

    1995-02-01

    Twenty-five accessions of mango were examined for random amplified polymorphic DNA (RAPD) genetic markers with 80 10-mer random primers. Of the 80 primers screened, 33 did not amplify, 19 were monomorphic, and 28 gave reproducible, polymorphic DNA amplification patterns. Eleven primers were selected from the 28 for the study. The number of bands generated was primer- and genotype-dependent, and ranged from 1 to 10. No primer gave unique banding patterns for each of the 25 accessions; however, ten different combinations of 2 primer banding patterns produced unique fingerprints for each accession. A maternal half-sib (MHS) family was included among the 25 accessions to see if genetic relationships could be detected. RAPD data were used to generate simple matching coefficients, which were analyzed phenetically and by means of principal coordinate analysis (PCA). The MHS clustered together in both the phenetic and the PCA while the randomly selected accessions were scattered with no apparent pattern. The uses of RAPD analysis for Mangifera germ plasm classification and clonal identification are discussed.
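The simple matching coefficient used in this study compares two binary band-presence vectors, counting shared presences and shared absences over all scored bands. A minimal sketch with an illustrative function name:

```python
def simple_matching(a, b):
    """Simple matching coefficient between two binary band-presence
    vectors: (shared 1s + shared 0s) / total number of bands scored.
    """
    if len(a) != len(b):
        raise ValueError("band vectors must score the same bands")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)
```

A matrix of these coefficients over all accession pairs is what feeds the phenetic clustering and principal coordinate analysis described in the abstract.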

  18. Bet-hedging as a complex interaction among developmental instability, environmental heterogeneity, dispersal, and life-history strategy.

    PubMed

    Scheiner, Samuel M

    2014-02-01

    One potential evolutionary response to environmental heterogeneity is the production of randomly variable offspring through developmental instability, a type of bet-hedging. I used an individual-based, genetically explicit model to examine the evolution of developmental instability. The model considered both temporal and spatial heterogeneity alone and in combination, the effect of migration pattern (stepping stone vs. island), and life-history strategy. I confirmed that temporal heterogeneity alone requires a threshold amount of variation to select for a substantial amount of developmental instability. For spatial heterogeneity only, the response to selection on developmental instability depended on the life-history strategy and the form and pattern of dispersal with the greatest response for island migration when selection occurred before dispersal. Both spatial and temporal variation alone select for similar amounts of instability, but in combination resulted in substantially more instability than either alone. Local adaptation traded off against bet-hedging, but not in a simple linear fashion. I found higher-order interactions between life-history patterns, dispersal rates, dispersal patterns, and environmental heterogeneity that are not explainable by simple intuition. We need additional modeling efforts to understand these interactions and empirical tests that explicitly account for all of these factors.

  19. Search efficiency of biased migration towards stationary or moving targets in heterogeneously structured environments

    NASA Astrophysics Data System (ADS)

    Azimzade, Youness; Mashaghi, Alireza

    2017-12-01

Efficient search acts as a strong selective force in biological systems ranging from cellular populations to predator-prey systems. The search processes commonly involve finding a stationary or mobile target within a heterogeneously structured environment where obstacles limit migration. An open generic question is whether random or directionally biased motions or a combination of both provide an optimal search efficiency and how that depends on the motility and density of targets and obstacles. To address this question, we develop a simple model that involves a random walker searching for its targets in a heterogeneous medium of bond percolation square lattice and used mean first passage time (〈T〉) as an indication of average search time. Our analysis reveals a dual effect of directional bias on the minimum value of 〈T〉. For a homogeneous medium, directionality always decreases 〈T〉 and a pure directional migration (a ballistic motion) serves as the optimized strategy, while for a heterogeneous environment, we find that the optimized strategy involves a combination of directed and random migrations. The relative contribution of these modes is determined by the density of obstacles and motility of targets. Existence of randomness and motility of targets add to the efficiency of search. Our study reveals generic and simple rules that govern search efficiency. Our findings might find application in a number of areas including immunology, cell biology, ecology, and robotics.
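The homogeneous-medium claim, that pure directional motion minimizes search time while mixing in randomness lengthens it, can be illustrated with a 1-D toy walker. No obstacles are modeled here; the parameterization and names are assumptions of this sketch, not the paper's percolation-lattice model.

```python
import random

def passage_time(bias, target, rng, max_steps=10000):
    """Steps for a 1-D walker starting at 0 to first reach `target`,
    taking a directed step with probability `bias` and an unbiased
    random step otherwise.
    """
    x, t = 0, 0
    while x != target and t < max_steps:
        if rng.random() < bias:
            x += 1 if target > x else -1   # directed move toward target
        else:
            x += rng.choice((-1, 1))       # purely random move
        t += 1
    return t

rng = random.Random(7)
# bias = 1.0 is ballistic motion; bias = 0.5 mixes directed and random steps
ballistic = sum(passage_time(1.0, 20, rng) for _ in range(50)) / 50
mixed = sum(passage_time(0.5, 20, rng) for _ in range(50)) / 50
```

In an obstacle-free medium the ballistic walker reaches the target in exactly `target` steps, while any admixture of random motion inflates the mean first passage time; the paper's point is that obstacles reverse part of this advantage.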

  20. Nest construction by a ground-nesting bird represents a potential trade-off between egg crypticity and thermoregulation.

    PubMed

    Mayer, Paul M; Smith, Levica M; Ford, Robert G; Watterson, Dustin C; McCutchen, Marshall D; Ryan, Mark R

    2009-04-01

    Predation selects against conspicuous colors in bird eggs and nests, while thermoregulatory constraints select for nest-building behavior that regulates incubation temperatures. We present results that suggest a trade-off between nest crypticity and thermoregulation of eggs based on selection of nest materials by piping plovers (Charadrius melodus), a ground-nesting bird that constructs simple, pebble-lined nests highly vulnerable to predators and exposed to temperature extremes. Piping plovers selected pebbles that were whiter and appeared closer in color to eggs than randomly available pebbles, suggesting a crypsis function. However, nests that were more contrasting in color to surrounding substrates were at greater risk of predation, suggesting an alternate strategy driving selection of white rocks. Near-infrared reflectance of nest pebbles was higher than randomly available pebbles, indicating a direct physical mechanism for heat control through pebble selection. Artificial nests constructed of randomly available pebbles heated more quickly and conferred heat to model eggs, causing eggs to heat more rapidly than in nests constructed from piping plover nest pebbles. Thermal models and field data indicated that temperatures inside nests may remain up to 2-6 degrees C cooler than surrounding substrates. Thermal models indicated that nests heat especially rapidly if not incubated, suggesting that nest construction behavior may serve to keep eggs cooler during the unattended laying period. Thus, pebble selection suggests a potential trade-off between maximizing heat reflectance to improve egg microclimate and minimizing conspicuous contrast of nests with the surrounding substrate to conceal eggs from predators. Nest construction behavior that employs light-colored, thermally reflective materials may represent an evolutionary response by birds and other egg-laying organisms to egg predation and heat stress.

  1. De novo selection of oncogenes.

    PubMed

    Chacón, Kelly M; Petti, Lisa M; Scheideman, Elizabeth H; Pirazzoli, Valentina; Politi, Katerina; DiMaio, Daniel

    2014-01-07

    All cellular proteins are derived from preexisting ones by natural selection. Because of the random nature of this process, many potentially useful protein structures never arose or were discarded during evolution. Here, we used a single round of genetic selection in mouse cells to isolate chemically simple, biologically active transmembrane proteins that do not contain any amino acid sequences from preexisting proteins. We screened a retroviral library expressing hundreds of thousands of proteins consisting of hydrophobic amino acids in random order to isolate four 29-aa proteins that induced focus formation in mouse and human fibroblasts and tumors in mice. These proteins share no amino acid sequences with known cellular or viral proteins, and the simplest of them contains only seven different amino acids. They transformed cells by forming a stable complex with the platelet-derived growth factor β receptor transmembrane domain and causing ligand-independent receptor activation. We term this approach de novo selection and suggest that it can be used to generate structures and activities not observed in nature, create prototypes for novel research reagents and therapeutics, and provide insight into cell biology, transmembrane protein-protein interactions, and possibly virus evolution and the origin of life.

  2. Predicting the accuracy of ligand overlay methods with Random Forest models.

    PubMed

    Nandigam, Ravi K; Evans, David A; Erickson, Jon A; Kim, Sangtae; Sutherland, Jeffrey J

    2008-12-01

The accuracy of binding mode prediction using standard molecular overlay methods (ROCS, FlexS, Phase, and FieldCompare) is studied. Previous work has shown that simple decision tree modeling can be used to improve accuracy by selection of the best overlay template. This concept is extended to the use of Random Forest (RF) modeling for template and algorithm selection. An extensive data set of 815 ligand-bound X-ray structures representing 5 gene families was used for generating ca. 70,000 overlays using four programs. RF models, trained using standard measures of ligand and protein similarity and Lipinski-related descriptors, are used for automatically selecting the reference ligand and overlay method maximizing the probability of reproducing the overlay deduced from X-ray structures (i.e., using rmsd ≤ 2 Å as the criterion for success). RF model scores are highly predictive of overlay accuracy, and their use in template and method selection produces correct overlays in 57% of cases for 349 overlay ligands not used for training RF models. The inclusion in the models of protein sequence similarity enables the use of templates bound to related protein structures, yielding useful results even for proteins having no available X-ray structures.

  3. Inter-rater agreement among orthodontists in a blocked experiment.

    PubMed

    Korn, E L; Baumrind, S

    1985-01-01

    Five orthodontists were asked to predict for 64 patients a particular dichotomous outcome of treatment based on pre-treatment X-ray films. The orthodontists rated the cases in blocks of size 4-6 with the knowledge of the number of positive outcomes in each block. We discuss the reasons why this blocked design is appropriate whenever clinicians are asked to rate cases which have not been randomly selected from a clinical practice similar to their own. We give a simple description of the inter-rater agreement for this type of blocked experiment as well as a procedure to test that the agreement is no better than that expected by random independent assignment.

  4. Random walk, diffusion and mixing in simulations of scalar transport in fluid flows

    NASA Astrophysics Data System (ADS)

    Klimenko, A. Y.

    2008-12-01

    Physical similarity and mathematical equivalence of continuous diffusion and particle random walk form one of the cornerstones of modern physics and the theory of stochastic processes. In many applied models used in simulation of turbulent transport and turbulent combustion, mixing between particles is used to reflect the influence of the continuous diffusion terms in the transport equations. We show that the continuous scalar transport and diffusion can be accurately specified by means of mixing between randomly walking Lagrangian particles with scalar properties and assess errors associated with this scheme. This gives an alternative formulation for the stochastic process which is selected to represent the continuous diffusion. This paper focuses on statistical errors and deals with relatively simple cases, where one-particle distributions are sufficient for a complete description of the problem.
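The particle-mixing representation of diffusion described above can be sketched as follows: scalar-carrying particles random-walk, and pairs occupying the same cell average their scalar values, so mixing between Lagrangian particles stands in for the continuous diffusion term. The ring geometry, mixing rule, and names are simplifications chosen for illustration.

```python
import random

def step(particles, size, mix_prob, rng):
    """Advance scalar-carrying particles one step: each random-walks on a
    1-D ring of `size` cells, then pairs sharing a cell mix (average
    their scalars) with probability `mix_prob`.
    """
    for p in particles:
        p["x"] = (p["x"] + rng.choice((-1, 1))) % size
    cells = {}
    for p in particles:
        cells.setdefault(p["x"], []).append(p)
    for group in cells.values():
        rng.shuffle(group)
        for a, b in zip(group[0::2], group[1::2]):
            if rng.random() < mix_prob:
                a["s"] = b["s"] = (a["s"] + b["s"]) / 2.0

# A step profile of scalar values homogenizes while the mean is conserved.
rng = random.Random(2)
size = 10
parts = [{"x": i % size, "s": 1.0 if i % size < 5 else 0.0} for i in range(200)]
for _ in range(300):
    step(parts, size, 0.5, rng)
mean = sum(p["s"] for p in parts) / len(parts)       # initial mean was 0.5
var = sum((p["s"] - 0.5) ** 2 for p in parts) / len(parts)
```

Pairwise averaging is conservative, so the scalar mean is unchanged while the scalar variance decays, mimicking how diffusion smooths a concentration profile.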

  5. Employee resourcing strategies and universities' corporate image: A survey dataset.

    PubMed

    Falola, Hezekiah Olubusayo; Oludayo, Olumuyiwa Akinrole; Olokundun, Maxwell Ayodele; Salau, Odunayo Paul; Ibidunni, Ayodotun Stephen; Igbinoba, Ebe

    2018-06-01

The data examined the effect of employee resourcing strategies on corporate image. The data were generated from a total of 500 copies of a questionnaire administered to the academic staff of six (6) selected private universities in Southwest Nigeria, of which four hundred and forty-three (443) were retrieved. Stratified and simple random sampling techniques were used to select the respondents for this study. Descriptive statistics and linear regression were used to analyse the data, with the mean score as the summary statistic. The data presented in this article are made available to facilitate further and more comprehensive investigation of the subject matter.

  6. Solving da Vinci stereopsis with depth-edge-selective V2 cells

    PubMed Central

    Assee, Andrew; Qian, Ning

    2007-01-01

    We propose a new model for da Vinci stereopsis based on a coarse-to-fine disparity-energy computation in V1 and disparity-boundary-selective units in V2. Unlike previous work, our model contains only binocular cells, relies on distributed representations of disparity, and has a simple V1-to-V2 feedforward structure. We demonstrate with random dot stereograms that the V2 stage of our model is able to determine the location and the eye-of-origin of monocularly occluded regions and improve disparity map computation. We also examine a few related issues. First, we argue that since monocular regions are binocularly defined, they cannot generally be detected by monocular cells. Second, we show that our coarse-to-fine V1 model for conventional stereopsis explains double matching in Panum’s limiting case. This provides computational support to the notion that the perceived depth of a monocular bar next to a binocular rectangle may not be da Vinci stereopsis per se (Gillam et al., 2003). Third, we demonstrate that some stimuli previously deemed invalid have simple, valid geometric interpretations. Our work suggests that studies of da Vinci stereopsis should focus on stimuli more general than the bar-and-rectangle type and that disparity-boundary-selective V2 cells may provide a simple physiological mechanism for da Vinci stereopsis. PMID:17698163

  7. A comparative study of Bilvadi Yoga Ashchyotana and eye drops in Vataja Abhishyanda (Simple Allergic Conjunctivitis).

    PubMed

    Udani, Jayshree; Vaghela, D B; Rajagopala, Manjusha; Matalia, P D

    2012-01-01

Simple allergic conjunctivitis is the most common form of ocular allergy (prevalence 5-22%). It is a hypersensitivity reaction to specific airborne antigens. The disease Vataja Abhishyanda, which is due to vitiation of Vata Pradhana Tridosha, is comparable with this condition. The management of simple allergic conjunctivitis in modern ophthalmology is very expensive and must be continued lifelong; Ayurveda may provide better relief in such manifestations. This is the first research study on Vataja Abhishyanda. Patients were selected from the Outpatient Department (OPD) and Inpatient Department (IPD) of the Shalakya Tantra Department and were randomly divided into two groups. In Group A Bilvadi Ashchyotana and in Group B Bilvadi eye drops were instilled for three months. In total, 32 patients were registered and 27 completed the course of treatment. Bilvadi Ashchyotana gave better results in Toda, Sangharsha, Parushya, Kandu and Ragata as compared with Bilvadi eye drops in Vataja Abhishyanda.

  8. Impact of Jos Crises on Pattern of Students/Teachers' Population in Schools and Its Implication on the Quality of Teaching and Peaceful Co-Existence in Nigeria

    ERIC Educational Resources Information Center

    Jacob, Sunday

    2015-01-01

    This study examined the pattern of students/teachers' population in schools as a result of the crises witnessed in Jos and its consequences on quality of teaching as well as peaceful living in Jos. Stratified simple random sampling technique was used to select the 18 schools that were used for this study. Questionnaire was used to collect…

  9. Percolation model for a selective response of the resistance of composite semiconducting np systems with respect to reducing gases.

    PubMed

    Russ, Stefanie

    2014-08-01

    It is shown that a two-component percolation model on a simple cubic lattice can explain an experimentally observed behavior [Savage et al., Sens. Actuators B 79, 17 (2001); Sens. Actuators B 72, 239 (2001).], namely, that a network built up by a mixture of sintered nanocrystalline semiconducting n and p grains can exhibit selective behavior, i.e., respond with a resistance increase when exposed to a reducing gas A and with a resistance decrease in response to another reducing gas B. To this end, a simple model is developed, where the n and p grains are simulated by overlapping spheres, based on realistic assumptions about the gas reactions on the grain surfaces. The resistance is calculated by random walk simulations with nn, pp, and np bonds between the grains, and the results are found in very good agreement with the experiments. Contrary to former assumptions, the np bonds are crucial to obtain this accordance.

  10. Acute personalized habitual caffeine doses improve attention and have selective effects when considering the fractionation of executive functions.

    PubMed

    Lanini, Juliana; Galduróz, José Carlos Fernandes; Pompéia, Sabine

    2016-01-01

    Caffeine is widely used, often consumed with food, and improves simple and complex/executive attention under fasting conditions. We investigated whether these cognitive effects are observed when personalized habitual doses of caffeine are ingested by caffeine consumers, whether they are influenced by nutriments and if various executive domains are susceptible to improvement. This was a double-blind, placebo-controlled study including 60 young, healthy, rested males randomly assigned to one of four treatments: placebo fasting, caffeine fasting, placebo meal and caffeine meal. Caffeine doses were individualized for each participant based on their self-reported caffeine consumption at the time of testing (morning). The test battery included measures of simple and sustained attention, executive domains (inhibiting, updating, shifting, dual tasking, planning and accessing long-term memory), control measures of subjective alterations, glucose and insulin levels, skin conductance, heart rate and pupil dilation. Regardless of meal intake, acute habitual doses of caffeine decreased fatigue, and improved simple and sustained attention and executive updating. This executive effect was not secondary to the habitual weekly dose consumed, changes in simple and sustained attention, mood, meal ingestion and increases in cognitive effort. We conclude that the morning caffeine "fix" has positive attentional effects and selectively improved executive updating whether or not caffeine is consumed with food. Copyright © 2015 John Wiley & Sons, Ltd.

  11. Using known map category marginal frequencies to improve estimates of thematic map accuracy

    NASA Technical Reports Server (NTRS)

    Card, D. H.

    1982-01-01

    By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how knowledge of map-category relative sizes can be used to improve estimates of various probabilities. Because the maximum likelihood estimates of cell probabilities are identical under simple random sampling and map-category-stratified sampling, a unified treatment of the contingency-table analysis is possible. Results for the stratified case permit a rigorous analysis of the effect of sampling independently within map categories. It is noted that such matters as optimal sample-size selection for achieving a desired level of precision in various estimators are irrelevant, since the derived estimators are valid irrespective of how the sample sizes are chosen.
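    The map-category-stratified idea can be made concrete as a post-stratified accuracy estimate: weight each category's within-sample accuracy by its known relative area. A minimal sketch follows; the category names, sample outcomes, and area fractions are all hypothetical:

```python
def poststratified_accuracy(per_class_samples, map_proportions):
    """Estimate overall map accuracy by weighting each map category's
    sample accuracy with its known relative area (marginal frequency).
    per_class_samples: {category: list of booleans (correct / incorrect)}
    map_proportions:   {category: known fraction of map area}
    """
    total = 0.0
    for cat, checks in per_class_samples.items():
        p_correct = sum(checks) / len(checks)  # within-category accuracy
        total += map_proportions[cat] * p_correct
    return total

# Hypothetical reference checks for a three-class map
samples = {
    "forest": [True, True, True, False],  # 75% correct in sample
    "water":  [True, True],               # 100% correct
    "urban":  [True, False],              # 50% correct
}
areas = {"forest": 0.6, "water": 0.3, "urban": 0.1}
acc = poststratified_accuracy(samples, areas)
```

    With these numbers the weighted estimate is 0.6·0.75 + 0.3·1.0 + 0.1·0.5 = 0.80, whereas the raw pooled sample accuracy would be 6/8 = 0.75; the known marginals correct for over- or under-sampling of categories.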

  12. Determination of the Optimal Chromosomal Location(s) for a DNA Element in Escherichia coli Using a Novel Transposon-mediated Approach.

    PubMed

    Frimodt-Møller, Jakob; Charbon, Godefroid; Krogfelt, Karen A; Løbner-Olesen, Anders

    2017-09-11

    The optimal chromosomal position(s) of a given DNA element was/were determined by transposon-mediated random insertion followed by fitness selection. In bacteria, the impact of the genetic context on the function of a genetic element can be difficult to assess. Several mechanisms, including topological effects, transcriptional interference from neighboring genes, and/or replication-associated gene dosage, may affect the function of a given genetic element. Here, we describe a method that permits the random integration of a DNA element into the chromosome of Escherichia coli and the selection of the most favorable locations using a simple growth-competition experiment. The method takes advantage of a well-described transposon-based system of random insertion, coupled with a selection of the fittest clone(s) by growth advantage, a procedure that is easily adjustable to experimental needs. The nature of the fittest clone(s) can be determined by whole-genome sequencing of a complex multi-clonal population or by easy gene walking for the rapid identification of selected clones. Here, the non-coding DNA region DARS2, which controls the initiation of chromosome replication in E. coli, was used as an example. The function of DARS2 is known to be affected by replication-associated gene dosage; the closer DARS2 gets to the origin of DNA replication, the more active it becomes. DARS2 was randomly inserted into the chromosome of a DARS2-deleted strain. The resultant clones containing individual insertions were pooled and competed against one another for hundreds of generations. Finally, the fittest clones were characterized and found to contain DARS2 inserted in close proximity to the original DARS2 location.

  13. Prediction of soil attributes through interpolators in a deglaciated environment with complex landforms

    NASA Astrophysics Data System (ADS)

    Schünemann, Adriano Luis; Inácio Fernandes Filho, Elpídio; Rocha Francelino, Marcio; Rodrigues Santos, Gérson; Thomazini, Andre; Batista Pereira, Antônio; Gonçalves Reynaud Schaefer, Carlos Ernesto

    2017-04-01

    Values of environmental variables at non-sampled sites can be estimated from a minimum data set through interpolation techniques; kriging and the Random Forest classification algorithm are examples of predictors used for this purpose. The objective of this work was to compare methods for spatializing soil attributes in a recently deglaciated environment with complex landforms. Prediction of the selected soil attributes (potassium, calcium and magnesium) in ice-free areas was tested using morphometric covariates, and using geostatistical models without these covariates. For this, 106 soil samples were collected at 0-10 cm depth in Keller Peninsula, King George Island, Maritime Antarctica. Soil chemical analysis was performed by the gravimetric method, determining values of potassium, calcium and magnesium for each sampled point. Digital terrain models (DTMs) were obtained using a Terrestrial Laser Scanner and generated from a point cloud at spatial resolutions of 1, 5, 10, 20 and 30 m; from these, 40 morphometric covariates were derived. Simple kriging was performed using the R software. The same data set, coupled with the morphometric covariates, was used to predict values of the studied attributes at non-sampled sites with the Random Forest interpolator. Little difference was observed between the maps generated by the simple kriging and Random Forest interpolators, and DTMs with finer spatial resolution did not improve the quality of soil attribute prediction. The results reveal that simple kriging can be used as an interpolator when morphometric covariates are not available, with little impact on quality. Further work is needed on soil chemical attribute prediction techniques, especially in periglacial areas with complex landforms.

  14. Random forests ensemble classifier trained with data resampling strategy to improve cardiac arrhythmia diagnosis.

    PubMed

    Ozçift, Akin

    2011-05-01

    Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling-strategy-based Random Forests (RF) ensemble classifier to improve the diagnosis of cardiac arrhythmia. Random Forests is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees; in this way, an RF ensemble classifier performs better than a single tree from a classification-performance point of view. In general, multiclass datasets with unbalanced sample-size distributions are difficult to analyze in terms of class discrimination. Cardiac arrhythmia is such a dataset, with multiple classes of small sample size, and it is therefore well suited to testing our resampling-based training strategy. The dataset contains 452 samples across fourteen types of arrhythmia, and eleven of these classes have fewer than 15 samples. Our diagnosis strategy consists of two parts: (i) a correlation-based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling, to assess the efficiency of the proposed training strategy. The resultant accuracy of the classifier is found to be 90.0%, which is a high diagnostic performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography and audiology, are used to benchmark the effectiveness of the proposed method. The experimental results demonstrate the efficiency of the random sampling strategy in training the RF ensemble classification algorithm. Copyright © 2011 Elsevier Ltd. All rights reserved.
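    The core of such a training strategy — balancing class sizes by simple random sampling with replacement before fitting the ensemble — can be sketched on its own. The tiny dataset below is hypothetical, and the Random Forests step itself is omitted:

```python
import random

def balance_by_resampling(dataset, seed=0):
    """Equalize class sizes by simple random sampling with replacement
    from each minority class; a sketch of the resampling idea only,
    to be followed by training any classifier on the balanced data."""
    rng = random.Random(seed)
    by_class = {}
    for features, label in dataset:
        by_class.setdefault(label, []).append((features, label))
    target = max(len(rows) for rows in by_class.values())
    balanced = []
    for label, rows in by_class.items():
        balanced.extend(rows)  # keep every original sample
        # draw extra samples with replacement until the class reaches `target`
        balanced.extend(rng.choice(rows) for _ in range(target - len(rows)))
    return balanced

# Hypothetical imbalanced data: 10 samples of class "A", 2 of class "B"
data = [([0.1], "A")] * 10 + [([0.9], "B")] * 2
out = balance_by_resampling(data)
counts = {lbl: sum(1 for _, l in out if l == lbl) for lbl in ("A", "B")}
```

    After resampling, both classes contribute equally to training, which is the point of the strategy for small-sample arrhythmia classes.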

  15. Theoretical and Experimental Investigation of Random Gust Loads Part I : Aerodynamic Transfer Function of a Simple Wing Configuration in Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Hakkinen, Raimo J; Richardson, A S , Jr

    1957-01-01

    Sinusoidally oscillating downwash and lift produced on a simple rigid airfoil were measured and compared with calculated values. Statistically stationary random downwash and the corresponding lift on a simple rigid airfoil were also measured and the transfer functions between their power spectra determined. The random experimental values are compared with theoretically approximated values. Limitations of the experimental technique and the need for more extensive experimental data are discussed.

  16. Effects of different preservation methods on inter simple sequence repeat (ISSR) and random amplified polymorphic DNA (RAPD) molecular markers in botanic samples.

    PubMed

    Wang, Xiaolong; Li, Lin; Zhao, Jiaxin; Li, Fangliang; Guo, Wei; Chen, Xia

    2017-04-01

    To evaluate the effects of different preservation methods (stored in a -20°C ice chest, preserved in liquid nitrogen and dried in silica gel) on inter simple sequence repeat (ISSR) or random amplified polymorphic DNA (RAPD) analyses in various botanical specimens (including broad-leaved plants, needle-leaved plants and succulent plants) for different times (three weeks and three years), we used a statistical analysis based on the number of bands, genetic index and cluster analysis. The results demonstrate that methods used to preserve samples can provide sufficient amounts of genomic DNA for ISSR and RAPD analyses; however, the effect of different preservation methods on these analyses vary significantly, and the preservation time has little effect on these analyses. Our results provide a reference for researchers to select the most suitable preservation method depending on their study subject for the analysis of molecular markers based on genomic DNA. Copyright © 2017 Académie des sciences. Published by Elsevier Masson SAS. All rights reserved.

  17. Effects of 3 dimensional crystal geometry and orientation on 1D and 2D time-scale determinations of magmatic processes using olivine and orthopyroxene

    NASA Astrophysics Data System (ADS)

    Shea, Thomas; Krimer, Daniel; Costa, Fidel; Hammer, Julia

    2014-05-01

    One achievement of recent years in volcanology is the determination of time-scales of magmatic processes via diffusion in minerals, and its addition to the petrologists' and volcanologists' toolbox. The method typically requires one-dimensional modeling of randomly cut crystals from two-dimensional thin sections. Here we address the question of whether using 1D (traverse) or 2D (surface) datasets extracted from randomly cut 3D crystals introduces a bias or dispersion in the estimated time-scales, and how this error can be reduced or eliminated. Computational simulations were performed using a concentration-dependent, finite-difference solution to the diffusion equation in 3D. The starting numerical models involved simple geometries (spheres, parallelepipeds), Mg/Fe zoning patterns (either normal or reverse), and isotropic diffusion coefficients. Subsequent models progressively incorporated more complexity: 3D olivines possessing representative polyhedral morphologies, diffusion anisotropy along the different crystallographic axes, and more intricate core-rim zoning patterns. Sections and profiles used to compare 1D, 2D and 3D diffusion models were selected to be (1) parallel to the crystal axes, (2) randomly oriented but passing through the olivine center, or (3) randomly oriented and sectioned. Results show that time-scales estimated from randomly cut traverses (1D) or surfaces (2D) can be widely distributed around the actual 3D diffusion durations (~0.2 to 10 times the true diffusion time). The magnitude of over- or underestimation of duration is a complex combination of the geometry of the crystal, the zoning pattern, the orientation of the cuts with respect to the crystallographic axes, and the degree of diffusion anisotropy. Errors on time-scales retrieved from such models may thus be significant. Drastic reductions in the uncertainty of calculated diffusion times can be obtained by following some simple guidelines during data collection (i.e., selection of crystals and concentration profiles, and acquisition of crystallographic orientation data), thus allowing derivation of robust time-scales.

  18. Statistical methods for efficient design of community surveys of response to noise: Random coefficients regression models

    NASA Technical Reports Server (NTRS)

    Tomberlin, T. J.

    1985-01-01

    Research studies of residents' responses to noise consist of interviews with samples of individuals drawn from a number of different compact study areas. The statistical techniques developed here provide a basis for sample design decisions, and they are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas or, in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, the number of noise events, the time of day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.

  19. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    PubMed

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

    Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion in the research study, and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in the effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
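    Three of the probability sampling methods listed above can be sketched with Python's standard `random` module. The population of 100 patient IDs, the strata, and the 10% sampling fraction are illustrative assumptions:

```python
import random

population = list(range(1, 101))  # 100 hypothetical patient IDs
rng = random.Random(42)

# Simple random sampling: every element has an equal, independent
# chance of inclusion.
simple = rng.sample(population, 10)

# Systematic sampling: a random start, then every k-th element.
k = len(population) // 10
start = rng.randrange(k)
systematic = population[start::k]

# Stratified sampling: divide the population into predefined strata
# and sample the same fraction within each stratum.
strata = {"male": population[:60], "female": population[60:]}
stratified = [unit
              for members in strata.values()
              for unit in rng.sample(members, len(members) // 10)]
```

    Each method yields a 10-unit sample here, but with different guarantees: systematic sampling spreads the sample evenly across the frame, while stratified sampling guarantees proportional representation of each stratum.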

  20. Knowledge, attitude, and practice (KAP) of food hygiene among schools students' in Majmaah city, Saudi Arabia.

    PubMed

    Almansour, Mohammed; Sami, Waqas; Al-Rashedy, Oliyan Shoqer; Alsaab, Rayan Saad; Alfayez, Abdulrahman Saad; Almarri, Nawaf Rashed

    2016-04-01

    To determine the level of knowledge, attitude, and practice of food hygiene among primary, intermediate and high school students, and to explore associations, if any, with socio-demographic differences. This observational cross-sectional study was conducted at boys' schools in Majmaah, Kingdom of Saudi Arabia, from February to May 2014. Data were collected using a stratified random sampling technique from students aged 8-25 years. Two schools from each level (primary, intermediate and high school) were randomly selected, and data were collected from the selected schools using a simple random sampling method. A self-administered modified Sharif and Al-Malki questionnaire for knowledge, attitude and practice of food hygiene was used, with Arabic translation. The mean age of the 377 male students in the study was 14.53±2.647 years. Knowledge levels were lower in primary school students compared to high school students (p=0.026). Attitude levels were higher in primary school students compared to intermediate school students (p<0.001). No significant difference was observed between groups with regard to practice levels (p=0.152). The students exhibited good practice levels, despite only fair knowledge and attitude levels.

  1. Comparison of Structural Optimization Techniques for a Nuclear Electric Space Vehicle

    NASA Technical Reports Server (NTRS)

    Benford, Andrew

    2003-01-01

    The purpose of this paper is to apply the optimization method of genetic algorithms (GA) to truss design for a nuclear propulsion vehicle. Genetic algorithms are a guided, random search that mirrors Darwin's theory of natural selection and survival of the fittest. To verify the GA's capabilities, other traditional optimization methods were used to compare against the results obtained by the GA, first on simple 2-D structures, and eventually on full-scale 3-D truss designs.
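    The selection-crossover-mutation loop of a genetic algorithm can be illustrated on a toy problem. This sketch maximizes the number of 1-bits in a string ("one-max") rather than the paper's truss objective, and all parameter values (population size, mutation rate, tournament size) are illustrative assumptions:

```python
import random

def genetic_search(fitness, n_bits=16, pop_size=30, generations=60, seed=1):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, bit-flip mutation, and elitism."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = sorted(pop, key=fitness, reverse=True)[:2]  # elitism: keep best two
        while len(nxt) < pop_size:
            # Tournament selection: the fittest of 3 random individuals, twice.
            p1, p2 = (max(rng.sample(pop, 3), key=fitness) for _ in range(2))
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]       # one-point crossover
            if rng.random() < 0.1:            # occasional single-bit mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy objective ("one-max"): maximize the number of 1-bits in the string.
best = genetic_search(sum)
```

    For a truss problem, the bit string would instead encode member cross-sections or topology, and the fitness function would penalize weight and constraint violations; the search loop itself is unchanged.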

  2. Entropy of level-cut random Gaussian structures at different volume fractions

    NASA Astrophysics Data System (ADS)

    Marčelja, Stjepan

    2017-10-01

    Cutting random Gaussian fields at a given level can create a variety of morphologically different two- or several-phase structures that have often been used to describe physical systems. The entropy of such structures depends on the covariance function of the generating Gaussian random field, which in turn depends on its spectral density. But the entropy of level-cut structures also depends on the volume fractions of different phases, which is determined by the selection of the cutting level. This dependence has been neglected in earlier work. We evaluate the entropy of several lattice models to show that, even in the cases of strongly coupled systems, the dependence of the entropy of level-cut structures on molar fractions of the constituents scales with the simple ideal noninteracting system formula. In the last section, we discuss the application of the results to binary or ternary fluids and microemulsions.
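    The role of the cutting level can be made concrete for a standard Gaussian field: the phase above level α occupies an expected volume fraction 1 − Φ(α), where Φ is the standard normal CDF. A quick Monte Carlo sketch (using independent Gaussian values rather than a spatially correlated field, so only this one-point fraction is checked) illustrates the relation:

```python
import math
import random

def level_cut_fraction(alpha):
    """Expected volume fraction of the phase where a standard Gaussian
    field exceeds the cutting level alpha: 1 - Phi(alpha)."""
    return 0.5 * (1.0 - math.erf(alpha / math.sqrt(2.0)))

# Monte Carlo check on independent standard Gaussian values.
rng = random.Random(0)
alpha = 0.5
values = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
observed = sum(v > alpha for v in values) / len(values)
expected = level_cut_fraction(alpha)  # about 0.309 for alpha = 0.5
```

    Choosing α thus fixes the volume fractions of the two phases; the morphology of the cut structure then depends on the covariance of the generating field.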

  3. Emergence of an optimal search strategy from a simple random walk

    PubMed Central

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-01-01

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths. PMID:23804445

  4. Emergence of an optimal search strategy from a simple random walk.

    PubMed

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-09-06

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths.
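    The baseline searcher used in the two records above — a fixed move length with a uniformly random direction at each time step — can be sketched in a few lines (the step count and seed are arbitrary):

```python
import math
import random

def random_walk(steps, seed=0):
    """Simple random walk in the plane: at each time step the walker
    picks a uniformly random direction and moves one unit length."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += math.cos(theta)
        y += math.sin(theta)
        path.append((x, y))
    return path

path = random_walk(1000)
displacement = math.hypot(*path[-1])
```

    On average the net displacement of such a walk grows like the square root of the number of steps; super-diffusive strategies like the adaptive algorithm described above spread faster than this baseline despite the same uniform step length.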

  5. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy

    PubMed Central

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and combine the candidate classifiers are two key issues that dramatically influence the performance of the ensemble system. The random vector functional link network (RVFL) without direct input-to-output links is a suitable base classifier for ensemble systems because of its fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFLs based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFLs. When using ARPSO to select the optimal base RVFLs, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining the RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFLs is obtained. Moreover, theoretical analysis and justification of how to prune the base classifiers on classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFLs built by the proposed method outperforms those built by some single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy as well as reducing the complexity of the ensemble system. PMID:27835638

  6. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy.

    PubMed

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and combine the candidate classifiers are two key issues that dramatically influence the performance of the ensemble system. The random vector functional link network (RVFL) without direct input-to-output links is a suitable base classifier for ensemble systems because of its fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFLs based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFLs. When using ARPSO to select the optimal base RVFLs, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining the RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFLs is obtained. Moreover, theoretical analysis and justification of how to prune the base classifiers on classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFLs built by the proposed method outperforms those built by some single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy as well as reducing the complexity of the ensemble system.

  7. Models of Protocellular Structure, Function and Evolution

    NASA Technical Reports Server (NTRS)

    New, Michael H.; Pohorille, Andrew; Szostak, Jack W.; Keefe, Tony; Lanyi, Janos K.

    2001-01-01

    In the absence of any record of protocells, the most direct way to test our understanding of the origin of cellular life is to construct laboratory models that capture important features of protocellular systems. Such efforts are currently underway in a collaborative project between NASA-Ames, Harvard Medical School and University of California. They are accompanied by computational studies aimed at explaining self-organization of simple molecules into ordered structures. The centerpiece of this project is a method for the in vitro evolution of protein enzymes toward arbitrary catalytic targets. A similar approach has already been developed for nucleic acids in which a small number of functional molecules are selected from a large, random population of candidates. The selected molecules are next vastly multiplied using the polymerase chain reaction. A mutagenic approach, in which the sequences of selected molecules are randomly altered, can yield further improvements in performance or alterations of specificities. Unfortunately, the catalytic potential of nucleic acids is rather limited. Proteins are more catalytically capable but cannot be directly amplified. In the new technique, this problem is circumvented by covalently linking each protein of the initial, diverse, pool to the RNA sequence that codes for it. Then, selection is performed on the proteins, but the nucleic acids are replicated. Additional information is contained in the original extended abstract.

  8. Selection of DNA aptamers against Human Cardiac Troponin I for colorimetric sensor based dot blot application.

    PubMed

    Dorraj, Ghamar Soltan; Rassaee, Mohammad Javad; Latifi, Ali Mohammad; Pishgoo, Bahram; Tavallaei, Mahmood

    2015-08-20

    Troponins T and I are ideal markers that are highly sensitive and specific for myocardial injury and have shown better efficacy than earlier markers. Since aptamers are ssDNA or RNA molecules that bind a wide variety of target molecules, the purpose of this research was to select an aptamer binding Human Cardiac Troponin I from a 79-bp single-stranded DNA (ssDNA) random library by systematic evolution of ligands by exponential enrichment (SELEX), based on several selection and amplification steps. Human Cardiac Troponin I protein was coated onto the surface of streptavidin magnetic beads to extract specific aptamers from a large and diverse random ssDNA initial oligonucleotide library. As a result, several aptamers were selected and further examined for binding affinity and specificity. Finally, TnIApt 23 showed the best affinity, in the nanomolar range (2.69 nM), toward the target protein. A simple and rapid colorimetric detection assay for Human Cardiac Troponin I using the novel and specific aptamer-AuNP conjugates, based on a dot blot assay, was developed. The detection limit for this protein using the aptamer-AuNP-based assay was found to be 5 ng/ml. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Defining fitness in an uncertain world.

    PubMed

    Crewe, Paul; Gratwick, Richard; Grafen, Alan

    2018-04-01

    The recently elucidated definition of fitness employed by Fisher in his fundamental theorem of natural selection is combined with reproductive values as appropriately defined in the context of both random environments and continuing fluctuations in the distribution over classes in a class-structured population. We obtain astonishingly simple results, generalisations of the Price Equation and the fundamental theorem, that show natural selection acting only through the arithmetic expectation of fitness over all uncertainties, in contrast to previous studies with fluctuating demography, in which natural selection looks rather complicated. Furthermore, our setting permits each class to have its characteristic ploidy, thus covering haploidy, diploidy and haplodiploidy at the same time; and allows arbitrary classes, including continuous variables such as condition. The simplicity is achieved by focussing just on the effects of natural selection on genotype frequencies: while other causes are present in the model, and the effect of natural selection is assessed in their presence, these causes will have their own further effects on genotype frequencies that are not assessed here. Also, Fisher's uses of reproductive value are shown to have two ambivalences, and a new axiomatic foundation for reproductive value is endorsed. The results continue the formal darwinism project, and extend support for the individual-as-maximising-agent analogy to finite populations with random environments and fluctuating class-distributions. The model may also lead to improved ways to measure fitness in real populations.

  10. Analysis of creative mathematic thinking ability in problem based learning model based on self-regulation learning

    NASA Astrophysics Data System (ADS)

    Munahefi, D. N.; Waluya, S. B.; Rochmad

    2018-03-01

    The purpose of this research was to identify the effectiveness of the Problem Based Learning (PBL) model based on Self Regulation Learning (SRL) for the ability of mathematical creative thinking, and to analyze the mathematical creative thinking ability of high school students in solving mathematical problems. The population of this study was students of grade X of SMA N 3 Klaten. The research method used was sequential explanatory. The quantitative stage used a simple random sampling technique, in which two classes were selected randomly: the experimental class was taught with the PBL model based on SRL and the control class was taught with the expository model. Sample selection at the qualitative stage used a non-probability sampling technique in which 3 students each were selected from the high, medium, and low academic levels. The PBL model with the SRL approach was effective for students' mathematical creative thinking ability. Students of low academic level taught with the PBL model with the SRL approach achieved the aspects of fluency and flexibility. Students of medium academic level achieved the fluency and flexibility aspects well, but their originality was not yet well structured. Students of high academic level could reach the aspect of originality.

  11. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of response, and a 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples to attain reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.

  12. Eighty routes to a ribonucleotide world; dispersion and stringency in the decisive selection.

    PubMed

    Yarus, Michael

    2018-05-21

    We examine the initial emergence of genetics; that is, of an inherited chemical capability. The crucial actors are ribonucleotides, occasionally meeting in a prebiotic landscape. Previous work identified six influential variables during such random ribonucleotide pooling. Geochemical pools can be in periodic danger (e.g., from tides) or constant danger (e.g., from unfavorable weather). Such pools receive Gaussian nucleotide amounts sporadically, at random times, or get varying substrates simultaneously. Pools use cross-templated RNA synthesis (5'-5' product from 5'-3' template) or para-templated (5'-5' product from 5'-5' template) synthesis. Pools can undergo mild or strong selection, and be recently initiated (early) or late in age. Considering > 80 combinations of these variables, selection calculations identify a superior route. Most likely, an early, sporadically fed, cross-templating pool in constant danger, receiving ≥ 1 mM nucleotides while under strong selection for a coenzyme-like product will host selection of the first encoded biochemical functions. Predominantly templated products emerge from a critical event, the starting bloc selection, which exploits inevitable differences among early pools. Favorable selection has a simple rationale; it is increased by product dispersion (sd/mean), by selection intensity (mild or strong), or by combining these factors as stringency, reciprocal fraction of pools selected (1/sfsel). To summarize: chance utility, acting via a preference for disperse, templated coenzyme-like dinucleotides, uses stringent starting bloc selection to quickly establish majority encoded/genetic expression. Despite its computational origin, starting bloc selection is largely independent of specialized assumptions. This ribodinucleotide route to inheritance may also have facilitated 5'-3' chemical RNA replication. Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  13. A comparative study of Bilvadi Yoga Ashchyotana and eye drops in Vataja Abhishyanda (Simple Allergic Conjunctivitis)

    PubMed Central

    Udani, Jayshree; Vaghela, D. B.; Rajagopala, Manjusha; Matalia, P. D.

    2012-01-01

    Simple allergic conjunctivitis is the most common form of ocular allergy (prevalence 5–22%). It is a hypersensitivity reaction to specific airborne antigens. The disease Vataja Abhishyanda, which is due to vitiation of Vata Pradhana Tridosha, is comparable with this condition. The management of simple allergic conjunctivitis in modern ophthalmology is very expensive and must be continued lifelong; Ayurveda can provide better relief in such manifestations. This is the first research study on Vataja Abhishyanda. Patients were selected from the Outpatient Department (OPD) and Inpatient Department (IPD) of the Shalakya Tantra Department and were randomly divided into two groups. In Group A Bilvadi Ashchyotana and in Group B Bilvadi eye drops were instilled for three months. A total of 32 patients were registered and 27 patients completed the course of treatment. Bilvadi Ashchyotana gave better results in Toda, Sangharsha, Parushya, Kandu and Ragata as compared with Bilvadi eye drops in Vataja Abhishyanda. PMID:23049192

  14. REML/BLUP and sequential path analysis in estimating genotypic values and interrelationships among simple maize grain yield-related traits.

    PubMed

    Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q

    2017-03-22

    Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions considering genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the cause-and-effect relationships among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype × environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which makes direct selection for this trait difficult for breeders. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with elevated fit (R² > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the average (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high-yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.
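
    The path-analysis step can be sketched in miniature: direct effects are the standardized partial regression coefficients of the predictor traits on yield. The sketch below uses synthetic data with hypothetical effect sizes, not the study's hybrids or estimates.

```python
import numpy as np

# Toy sketch of the sequential-path-analysis step: the direct effects of
# NKE and TKW on grain yield are the standardized partial regression
# coefficients. Synthetic data with hypothetical effect sizes -- not the
# study's hybrids, trait values, or estimates.
rng = np.random.default_rng(2)
n = 500
nke = rng.normal(size=n)                          # number of kernels per ear
tkw = -0.5 * nke + rng.normal(scale=0.8, size=n)  # NKE depresses TKW (toy)
gy = 0.6 * nke + 0.7 * tkw + rng.normal(scale=0.3, size=n)

def zscore(v):
    return (v - v.mean()) / v.std()

# standardized partial regression coefficients = direct effects
X = np.column_stack([zscore(nke), zscore(tkw)])
direct, *_ = np.linalg.lstsq(X, zscore(gy), rcond=None)
print("direct effects (NKE, TKW):", direct.round(2))
```

    In a full sequential path analysis these regressions would be fitted stepwise over multiple predictor orders while monitoring multicollinearity; this sketch shows only the single final regression.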

  15. Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Abotteen, K. M. (Principal Investigator)

    1980-01-01

    The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportion estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
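
    The bias and mean-squared-error comparison rests on standard repeated-sampling logic, which can be sketched for the simple-random-sampling baseline (illustrative toy values, not the LACIE data or the Bayes sequential procedure):

```python
import numpy as np

# Minimal sketch: estimate a crop proportion by simple random sampling
# of n segments and measure the estimator's bias and MSE over repeated
# trials. Toy parameters, not the LACIE blind-site data.
rng = np.random.default_rng(1)
true_p, n, trials = 0.3, 50, 20_000

# each trial: n randomly sampled segments labelled crop / not crop
estimates = rng.binomial(n, true_p, size=trials) / n
bias = estimates.mean() - true_p
mse = np.mean((estimates - true_p) ** 2)
print(f"bias={bias:+.4f}  mse={mse:.5f}")  # theory: bias ~ 0, mse ~ p(1-p)/n
```

    Competing estimators (such as the Bayes sequential procedure) are scored by the same two summaries computed over repeated samples.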

  16. Percolation model for a selective response of the resistance of composite semiconducting np systems with respect to reducing gases

    NASA Astrophysics Data System (ADS)

    Russ, Stefanie

    2014-08-01

    It is shown that a two-component percolation model on a simple cubic lattice can explain an experimentally observed behavior [Savage et al., Sens. Actuators B 79, 17 (2001), 10.1016/S0925-4005(01)00843-7; Sens. Actuators B 72, 239 (2001), 10.1016/S0925-4005(00)00676-6], namely, that a network built up by a mixture of sintered nanocrystalline semiconducting n and p grains can exhibit selective behavior, i.e., respond with a resistance increase when exposed to a reducing gas A and with a resistance decrease in response to another reducing gas B. To this end, a simple model is developed, where the n and p grains are simulated by overlapping spheres, based on realistic assumptions about the gas reactions on the grain surfaces. The resistance is calculated by random walk simulations with nn, pp, and np bonds between the grains, and the results are found to be in very good agreement with the experiments. Contrary to former assumptions, the np bonds are crucial to obtain this accordance.

  17. There is no silver bullet--a guide to low-level data transforms and normalisation methods for microarray data.

    PubMed

    Kreil, David P; Russell, Roslin R

    2005-03-01

    To overcome random experimental variation, even for simple screens, data from multiple microarrays have to be combined. There are, however, systematic differences between arrays, and any bias remaining after experimental measures to ensure consistency needs to be controlled for. It is often difficult to make the right choice of data transformation and normalisation methods to achieve this end. In this tutorial paper we review the problem and a selection of solutions, explaining the basic principles behind normalisation procedures and providing guidance for their application.
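
    As one concrete example of the normalisation procedures such tutorials cover, quantile normalisation forces every array to share the same empirical intensity distribution. This is a generic sketch of that standard method, not code from the paper:

```python
import numpy as np

# Quantile normalisation (generic sketch): every array (column) is forced
# to share the same empirical distribution, namely the mean of the sorted
# values at each rank across arrays.
def quantile_normalize(x):
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)  # per-column ranks
    rank_means = np.sort(x, axis=0).mean(axis=1)       # reference distribution
    return rank_means[ranks]

# toy intensity matrix: rows = genes, columns = arrays
arrays = np.array([[5.0, 4.0, 3.0],
                   [2.0, 1.0, 4.0],
                   [3.0, 4.0, 6.0],
                   [4.0, 2.0, 8.0]])
norm = quantile_normalize(arrays)
print(norm)
```

    After normalisation every column has exactly the same set of values, differing only in which gene carries which rank, which is precisely the kind of aggressive assumption the tutorial warns must be matched to the experiment.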

  18. Inference from habitat-selection analysis depends on foraging strategies.

    PubMed

    Bastille-Rousseau, Guillaume; Fortin, Daniel; Dussault, Christian

    2010-11-01

    1. Several methods have been developed to assess habitat selection, most of which are based on a comparison between habitat attributes in used vs. unused or random locations, such as the popular resource selection functions (RSFs). Spatial evaluation of residency time has been recently proposed as a promising avenue for studying habitat selection. Residency-time analyses assume a positive relationship between residency time within habitat patches and selection. We demonstrate that RSF and residency-time analyses provide different information about the process of habitat selection. Further, we show how the consideration of switching rate between habitat patches (interpatch movements) together with residency-time analysis can reveal habitat-selection strategies. 2. Spatially explicit, individual-based modelling was used to simulate foragers displaying one of six foraging strategies in a heterogeneous environment. The strategies combined one of three patch-departure rules (fixed-quitting-harvest-rate, fixed-time and fixed-amount strategy), together with one of two interpatch-movement rules (random or biased). Habitat selection of simulated foragers was then assessed using RSF, residency-time and interpatch-movement analyses. 3. Our simulations showed that RSFs and residency times are not always equivalent. When foragers move in a non-random manner and do not increase residency time in richer patches, residency-time analysis can provide misleading assessments of habitat selection. This is because the overall time spent in the various patch types not only depends on residency times, but also on interpatch-movement decisions. 4. We suggest that RSFs provide the outcome of the entire selection process, whereas residency-time and interpatch-movement analyses can be used in combination to reveal the mechanisms behind the selection process. 5. We showed that there is a risk in using residency-time analysis alone to infer habitat selection. 
Residency-time analyses, however, may enlighten the mechanisms of habitat selection by revealing central components of resource-use strategies. Given that management decisions are often based on resource-selection analyses, the evaluation of resource-use strategies can be key information for the development of efficient habitat-management strategies. Combining RSF, residency-time and interpatch-movement analyses is a simple and efficient way to gain a more comprehensive understanding of habitat selection. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.

  19. Simple and Multivariate Relationships Between Spiritual Intelligence with General Health and Happiness.

    PubMed

    Amirian, Mohammad-Elyas; Fazilat-Pour, Masoud

    2016-08-01

    The present study examined simple and multivariate relationships of spiritual intelligence with general health and happiness. The employed method was descriptive and correlational. King's Spiritual Quotient scale, the GHQ-28, and the Oxford Happiness Inventory were completed by a sample of 384 students selected by stratified random sampling from the students of Shahid Bahonar University of Kerman. Data were subjected to descriptive and inferential statistics, including correlations and multivariate regressions. Bivariate correlations support a positive and significant predictive value of spiritual intelligence toward general health and happiness. Further analysis showed that, among the spiritual intelligence subscales, existential critical thinking predicted general health and happiness inversely. In addition, happiness was positively predicted by generation of personal meaning and transcendental awareness. The findings are discussed in line with previous studies and the relevant theoretical background.

  20. Effect of self-deflection on a totally asymmetric simple exclusion process with functions of site assignments

    NASA Astrophysics Data System (ADS)

    Tsuzuki, Satori; Yanagisawa, Daichi; Nishinari, Katsuhiro

    2018-04-01

    This study proposes a model of a totally asymmetric simple exclusion process on a single-channel lane with functions of site assignments along the pit lane. The system attempts to insert a new particle at the leftmost site with a certain probability by randomly selecting one of the empty sites in the pit lane and reserving it for the particle. Thereafter, the particle is directed to stop at that site only once during its travel. Recently, the system was found to show a self-deflection effect, in which the site usage distribution spontaneously biases toward the leftmost site, and the throughput becomes maximum when the site usage distribution is slightly biased toward the rightmost site. Our exact analysis describes this deflection effect and shows good agreement with simulations.
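
    The exclusion dynamics underlying such models can be sketched with a standard open-boundary TASEP under random-sequential updates (the paper's pit-lane site-assignment rules are not modelled here; the injection and exit rates alpha and beta are illustrative):

```python
import random

# Minimal sketch of a totally asymmetric simple exclusion process (TASEP)
# on an open 1-D lane with random-sequential updates. Illustrates only the
# basic exclusion dynamics, not the paper's pit-lane model.
random.seed(42)
L, alpha, beta, steps = 100, 0.3, 0.7, 200_000
lane = [0] * L          # 0 = empty site, 1 = particle
flow = 0                # particles that exited at the right boundary

for _ in range(steps):
    i = random.randrange(-1, L)        # -1 targets the entry boundary
    if i == -1:                        # try to inject at the leftmost site
        if lane[0] == 0 and random.random() < alpha:
            lane[0] = 1
    elif i == L - 1:                   # try to exit from the rightmost site
        if lane[-1] == 1 and random.random() < beta:
            lane[-1] = 0
            flow += 1
    elif lane[i] == 1 and lane[i + 1] == 0:
        lane[i], lane[i + 1] = 0, 1    # hop one site right under exclusion

density = sum(lane) / L
print(f"density = {density:.2f}, exits per update attempt = {flow / steps:.4f}")
```

    With alpha < 1/2 and alpha < beta the system sits in the low-density phase, where the bulk density tracks the injection rate.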

  1. Short Communication: Genetic linkage map of Cucurbita maxima with molecular and morphological markers.

    PubMed

    Ge, Y; Li, X; Yang, X X; Cui, C S; Qu, S P

    2015-05-22

    Cucurbita maxima is one of the most widely cultivated vegetables in China and exhibits distinct morphological characteristics. In this study, genetic linkage analysis with 57 simple-sequence repeats, 21 amplified fragment length polymorphisms, 3 random-amplified polymorphic DNA, and one morphological marker revealed 20 genetic linkage groups of C. maxima covering a genetic distance of 991.5 cM with an average of 12.1 cM between adjacent markers. Genetic linkage analysis identified the simple-sequence repeat marker 'PU078072' 5.9 cM away from the locus 'Rc', which controls rind color. The genetic map in the present study will be useful for better mapping, tagging, and cloning of quantitative trait loci/gene(s) affecting economically important traits and for breeding new varieties of C. maxima through marker-assisted selection.

  2. Exercise and Cognitive Functioning in People With Chronic Whiplash-Associated Disorders: A Controlled Laboratory Study.

    PubMed

    Ickmans, Kelly; Meeus, Mira; De Kooning, Margot; De Backer, Annabelle; Kooremans, Daniëlle; Hubloue, Ives; Schmitz, Tom; Van Loo, Michel; Nijs, Jo

    2016-02-01

    Controlled laboratory study. In addition to persistent pain, people with chronic whiplash-associated disorders (WAD) commonly deal with cognitive dysfunctions. In healthy individuals, aerobic exercise has a positive effect on cognitive performance, and preliminary evidence in other chronic pain conditions reveals promising results as well. However, there is evidence that people with chronic WAD may show a worsening of the symptom complex following physical exertion. To examine postexercise cognitive performance in people with chronic WAD. People with chronic WAD (n = 27) and healthy, inactive, sex- and age-matched controls (n = 27) performed a single bout of an incremental submaximal cycling exercise. Before and after the exercise, participants completed 2 performance-based cognitive tests assessing selective and sustained attention, cognitive inhibition, and simple and choice reaction time. At baseline, people with chronic WAD displayed significantly lower scores on sustained attention and simple reaction time (P<.001), but not on selective attention, cognitive inhibition, and choice reaction time (P>.05), compared with healthy controls. Postexercise, both groups showed significantly improved selective attention and choice reaction time (chronic WAD, P = .001; control, P<.001), while simple reaction time significantly increased (P = .037) only in the control group. In both groups, no other significant changes in sustained attention, cognitive inhibition, pain, and fatigue were observed (P>.05). In the short term, postexercise cognitive functioning, pain, and fatigue were not aggravated in people with chronic WAD. However, randomized controlled trials are required to study the longer-term and isolated effects of exercise on cognitive functioning.

  3. Complex network structure of musical compositions: Algorithmic generation of appealing music

    NASA Astrophysics Data System (ADS)

    Liu, Xiao Fan; Tse, Chi K.; Small, Michael

    2010-01-01

    In this paper we construct networks for music and attempt to compose music artificially. Networks are constructed with nodes and edges corresponding to musical notes and their co-occurring connections. We analyze classical music from Bach, Mozart, Chopin, as well as other types of music such as Chinese pop music. We observe remarkably similar properties in all networks constructed from the selected compositions. We conjecture that preserving the universal network properties is a necessary step in artificial composition of music. Power-law exponents of node degree, node strength and/or edge weight distributions, mean degrees, clustering coefficients, mean geodesic distances, etc. are reported. With the network constructed, music can be composed artificially using a controlled random walk algorithm, which begins with a randomly chosen note and selects the subsequent notes according to a simple set of rules that compares the weights of the edges, weights of the nodes, and/or the degrees of nodes. By generating a large number of compositions, we find that this algorithm generates music which has the necessary qualities to be subjectively judged as appealing.
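
    The controlled random-walk composition can be sketched on a toy note-transition network (hypothetical notes and edge weights, not the networks extracted from real scores in the paper): the walk starts at a random note and chooses successors with probability proportional to edge weight.

```python
import random

# Toy sketch of weighted-random-walk composition over a note network.
# The graph below is hypothetical; the paper builds its networks from
# co-occurring notes in real compositions.
random.seed(7)
graph = {
    "C4": {"E4": 3, "G4": 2, "C4": 1},
    "E4": {"G4": 3, "C4": 1},
    "G4": {"C4": 4, "E4": 1, "A4": 1},
    "A4": {"G4": 2, "C4": 1},
}

def compose(graph, length):
    note = random.choice(list(graph))           # random starting note
    melody = [note]
    for _ in range(length - 1):
        nbrs = graph[note]
        # next note chosen with probability proportional to edge weight
        note = random.choices(list(nbrs), weights=list(nbrs.values()))[0]
        melody.append(note)
    return melody

melody = compose(graph, 16)
print(" ".join(melody))
```

    The paper's algorithm adds further rules comparing node degrees and node strengths; this sketch keeps only the edge-weight-biased step.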

  4. A computational proposal for designing structured RNA pools for in vitro selection of RNAs.

    PubMed

    Kim, Namhee; Gan, Hin Hark; Schlick, Tamar

    2007-04-01

    Although in vitro selection technology is a versatile experimental tool for discovering novel synthetic RNA molecules, finding complex RNA molecules is difficult because most RNAs identified from random sequence pools are simple motifs, consistent with recent computational analysis of such sequence pools. Thus, enriching in vitro selection pools with complex structures could increase the probability of discovering novel RNAs. Here we develop an approach for engineering sequence pools that links RNA sequence space regions with corresponding structural distributions via a "mixing matrix" approach combined with a graph theory analysis. We define five classes of mixing matrices motivated by covariance mutations in RNA; these constructs define nucleotide transition rates and are applied to chosen starting sequences to yield specific nonrandom pools. We examine the coverage of sequence space as a function of the mixing matrix and starting sequence via clustering analysis. We show that, in contrast to random sequences, which are associated only with a local region of sequence space, our designed pools, including a structured pool for GTP aptamers, can target specific motifs. It follows that experimental synthesis of designed pools can benefit from using optimized starting sequences, mixing matrices, and pool fractions associated with each of our constructed pools as a guide. Automation of our approach could provide practical tools for pool design applications for in vitro selection of RNAs and related problems.

  5. Modelling the Probability of Landslides Impacting Road Networks

    NASA Astrophysics Data System (ADS)

    Taylor, F. E.; Malamud, B. D.

    2012-04-01

    During a landslide triggering event, the threat of landslides blocking roads poses a risk to logistics, rescue efforts and communities dependent on those road networks. Here we present preliminary results of a stochastic model we have developed to evaluate the probability of landslides intersecting a simple road network during a landslide triggering event, and apply simple network indices to measure the state of the road network in the affected region. A 4000 x 4000 cell array with a 5 m x 5 m resolution was used, with a pre-defined simple road network laid onto it, and landslides 'randomly' dropped onto it. Landslide areas (A_L) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of A_L and an exponential rollover for small values of A_L; the rollover (maximum probability) occurs at about A_L = 400 m². This statistical distribution was chosen based on three substantially complete triggered landslide inventories recorded in existing literature. The number of landslide areas (N_L) selected for each triggered event iteration was chosen to have an average density of 1 landslide km⁻², i.e. N_L = 400 landslide areas chosen randomly for each iteration, and was based on several existing triggered landslide event inventories. A simple road network in a 'T'-shaped configuration was chosen: one road of 1 x 4000 cells (5 m x 20 km) joined by another road of 1 x 2000 cells (5 m x 10 km). The landslide areas were then randomly 'dropped' over the road array and indices such as the location, size (A_BL) and number of road blockages (N_BL) recorded. This process was performed 500 times (iterations) in a Monte-Carlo type simulation. Initial results show that for a landslide triggering event with 400 landslides over a 400 km² region, the number of road blocks per iteration, N_BL, ranges from 0 to 7. The average blockage area over the 500 iterations (mean A_BL) is about 3000 m², which closely matches the mean A_L of the triggered landslide inventories. We further find that over the 500 iterations, the probability of a given number of road blocks occurring on any given iteration, p(N_BL), as a function of N_BL follows reasonably well a three-parameter inverse gamma probability density distribution with an exponential rollover (i.e., the most frequent value) at N_BL = 1.3. In this paper we have begun to calculate the probability of landslides blocking roads during a triggering event, and have found that this follows an inverse-gamma distribution, similar to that found for the statistics of landslide areas resulting from triggers. As we progress to model more realistic road networks, this work will aid in both long-term and disaster management for road networks by allowing probabilistic assessment of potential road network damage during landslide triggering event scenarios of different magnitudes.
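
    The Monte-Carlo procedure can be sketched in simplified form: square landslides with heavy-tailed areas are dropped uniformly on the region and counted as blockages when they overlap a single road strip. The inverse-gamma sampler below is a generic stand-in, not the paper's fitted three-parameter distribution.

```python
import numpy as np

# Simplified sketch of the Monte-Carlo road-blockage idea: landslides are
# squares dropped uniformly on a 20 km x 20 km region, and the road is a
# single 5 m-wide horizontal strip. The inverse-gamma-like area sampler is
# a generic stand-in for the paper's fitted three-parameter distribution.
rng = np.random.default_rng(3)
region = 20_000.0                      # region side length in metres
n_landslides, n_iters = 400, 500       # as in the abstract

def one_iteration():
    # heavy-tailed landslide areas: reciprocal of a gamma variate (m^2)
    areas = 1.0 / rng.gamma(shape=1.4, scale=1.0 / 500.0, size=n_landslides)
    half = np.sqrt(areas) / 2.0        # half-width of a square landslide
    y = rng.uniform(0.0, region, n_landslides)
    # road: horizontal strip of half-width 2.5 m centred at y = 10 002.5 m
    blocked = np.abs(y - 10_002.5) <= (half + 2.5)
    return int(blocked.sum())

n_blocks = np.array([one_iteration() for _ in range(n_iters)])
print("road blocks per iteration: min", n_blocks.min(), "max", n_blocks.max())
```

    Even this stripped-down version reproduces the qualitative result: most iterations produce only a handful of blockages out of 400 landslides.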

  6. [Analysing the defect of control design of acupuncture: taking RCTs of treating simple obesity with acupuncture for example].

    PubMed

    Zeng, Yi; Qi, Shulan; Meng, Xing; Chen, Yinyin

    2018-03-12

    By analysing the defects of control design in randomized controlled trials (RCTs) of simple obesity treated with acupuncture, where acupuncture itself served as the control, we present the essential factors that should be taken into account when designing the control arm of a clinical trial, so as to further improve clinical research. Taking RCTs of acupuncture for simple obesity as an example, we searched for RCTs of acupuncture treating simple obesity with an acupuncture control. According to the characteristics of acupuncture therapy, this study sorted and analysed the control interventions in terms of acupoint selection, needle penetration, depth of insertion, etc., then counted the number of factors differing between the two groups and assessed their rationality. Of the 15 RCTs meeting the inclusion criteria, 7 were published in English and 8 in Chinese. Six trials (40%) had more than one differing factor between the two groups (4 published in English abroad, 2 in Chinese), while 9 (60%) had only one differing factor (3 published in English, 6 in Chinese). The control design of acupuncture in some clinical RCTs is unreasonable because the number of factors differing between the two groups is not considered.

  7. Evolutionary dynamics on any population structure

    NASA Astrophysics Data System (ADS)

    Allen, Benjamin; Lippner, Gabor; Chen, Yu-Ting; Fotouhi, Babak; Momeni, Naghmeh; Yau, Shing-Tung; Nowak, Martin A.

    2017-03-01

    Evolution occurs in populations of reproducing individuals. The structure of a population can affect which traits evolve. Understanding evolutionary game dynamics in structured populations remains difficult. Mathematical results are known for special structures in which all individuals have the same number of neighbours. The general case, in which the number of neighbours can vary, has remained open. For arbitrary selection intensity, the problem is in a computational complexity class that suggests there is no efficient algorithm. Whether a simple solution for weak selection exists has remained unanswered. Here we provide a solution for weak selection that applies to any graph or network. Our method relies on calculating the coalescence times of random walks. We evaluate large numbers of diverse population structures for their propensity to favour cooperation. We study how small changes in population structure—graph surgery—affect evolutionary outcomes. We find that cooperation flourishes most in societies that are based on strong pairwise ties.
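
    The coalescence-time ingredient can be sketched by Monte Carlo: two random walkers on a small graph (a 6-cycle here, purely illustrative) take turns moving until they meet. The paper computes such times exactly and feeds them into its weak-selection condition; this sketch only estimates a mean meeting time.

```python
import random

# Monte-Carlo sketch of pairwise coalescence times of random walks on a
# graph (a 6-cycle, purely illustrative). At each step one of the two
# walkers, chosen at random, moves to a uniform random neighbour; the
# coalescence time is the number of steps until they occupy the same node.
random.seed(11)
n = 6
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def coalescence_time(a, b):
    t = 0
    while a != b:
        if random.random() < 0.5:           # pick which walker moves
            a = random.choice(neighbours[a])
        else:
            b = random.choice(neighbours[b])
        t += 1
    return t

times = [coalescence_time(0, 3) for _ in range(20_000)]
mean_t = sum(times) / len(times)
print("mean coalescence time from opposite nodes:", round(mean_t, 2))
```

    On a cycle the gap between the walkers performs a simple ±1 random walk, so from opposite nodes of a 6-cycle the expected meeting time is 3 × 3 = 9 steps, which the estimate recovers.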

  8. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap.

    PubMed

    Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E

    2016-06-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.
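
    The resampling idea can be sketched crudely in one stage: a Dirichlet draw supplies Bayesian uncertainty in the resampling probabilities, which are then tilted by the case weights. This is an illustrative approximation with toy data, not the authors' two-stage weighted finite-population Polya-urn procedure.

```python
import numpy as np

# Crude single-stage sketch of weight-aware Bayesian bootstrap resampling
# (an approximation for illustration only; the paper's weighted finite-
# population Bayesian bootstrap is a two-stage Polya-urn scheme).
rng = np.random.default_rng(5)

y = np.array([2.0, 5.0, 1.0, 7.0, 3.0, 4.0])   # observed values (toy data)
w = np.array([1.0, 3.0, 1.0, 2.0, 1.0, 2.0])   # case weights (toy data)

def synthetic_population(y, w, size):
    # Dirichlet draw injects Bayesian uncertainty into the resampling
    # probabilities; multiplying by the weights tilts toward heavy cases.
    p = rng.dirichlet(np.ones(len(y))) * w
    p /= p.sum()
    return rng.choice(y, size=size, replace=True, p=p)

pop = synthetic_population(y, w, size=10_000)
print("synthetic population mean:", round(float(pop.mean()), 2))
```

    The synthetic population can then be treated as a simple random sample at the imputation stage, which is the point of the approach described in the abstract.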

  9. Multi-Aperture-Based Probabilistic Noise Reduction of Random Telegraph Signal Noise and Photon Shot Noise in Semi-Photon-Counting Complementary-Metal-Oxide-Semiconductor Image Sensor

    PubMed Central

    Ishida, Haruki; Kagawa, Keiichiro; Komuro, Takashi; Zhang, Bo; Seo, Min-Woong; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2018-01-01

    A probabilistic method to remove random telegraph signal (RTS) noise and to increase the signal level is proposed, and was verified by simulation based on measured real sensor noise. Although semi-photon-counting-level (SPCL) ultra-low-noise complementary metal-oxide-semiconductor (CMOS) image sensors (CISs) with high-conversion-gain pixels have emerged, they still suffer from large RTS noise, which is inherent to CISs. The proposed method utilizes a multi-aperture (MA) camera that is composed of multiple sets of an SPCL CIS and a moderately fast and compact imaging lens to emulate a very fast single lens. Owing to the redundancy of the MA camera, the RTS noise is removed by maximum likelihood estimation, where the noise characteristics are modeled by a probability density distribution. In the proposed method, the photon shot noise is also relatively reduced because of the averaging effect, in which the pixel values of all the apertures are considered. An extremely low-light condition, in which the maximum number of electrons per aperture was only 2 e−, was simulated. PSNRs of a test image for simple averaging, selective averaging (our previous method), and the proposed method were 11.92 dB, 11.61 dB, and 13.14 dB, respectively. Selective averaging, which can remove RTS noise, performed worse than simple averaging because it ignores pixels with RTS noise, so the photon shot noise was less reduced. The simulation results showed that the proposed method provided the best noise reduction performance. PMID:29587424
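
    The averaging effect is simple to verify numerically: with K independent apertures, the shot-noise standard deviation shrinks by about 1/sqrt(K). This sketch covers only that effect; the per-pixel maximum-likelihood rejection of RTS noise is not modelled.

```python
import numpy as np

# Minimal sketch of the averaging effect across a multi-aperture stack:
# averaging K independent apertures shrinks the shot-noise standard
# deviation by about 1/sqrt(K). (Illustration only; the paper's method
# adds a per-pixel maximum-likelihood step to reject RTS noise.)
rng = np.random.default_rng(9)
K, n_pixels, signal = 16, 100_000, 2.0     # 2 e-/aperture, as in the abstract

frames = rng.poisson(signal, size=(K, n_pixels)).astype(float)
single_sd = frames[0].std()                # shot noise of one aperture
averaged_sd = frames.mean(axis=0).std()    # noise after averaging K apertures

print(f"single: {single_sd:.3f} e-, averaged: {averaged_sd:.3f} e-")
```

    For Poisson shot noise at 2 e- the single-aperture standard deviation is about sqrt(2) ≈ 1.41 e-, and averaging 16 apertures brings it down to about 0.35 e-.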

  11. Multitrait, Random Regression, or Simple Repeatability Model in High-Throughput Phenotyping Data Improve Genomic Prediction for Wheat Grain Yield.

    PubMed

    Sun, Jin; Rutkoski, Jessica E; Poland, Jesse A; Crossa, José; Jannink, Jean-Luc; Sorrells, Mark E

    2017-07-01

    High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat (Triticum aestivum L.) grain yield across time. Incorporating such secondary traits in the multivariate pedigree and genomic prediction models would be desirable to improve indirect selection for grain yield. In this study, we evaluated three statistical models, simple repeatability (SR), multitrait (MT), and random regression (RR), for the longitudinal data of secondary traits and compared the impact of the proposed models for secondary traits on their predictive abilities for grain yield. Grain yield and secondary traits, canopy temperature (CT) and normalized difference vegetation index (NDVI), were collected in five diverse environments for 557 wheat lines with available pedigree and genomic information. A two-stage analysis was applied for pedigree and genomic selection (GS). First, secondary traits were fitted by SR, MT, or RR models, separately, within each environment. Then, best linear unbiased predictions (BLUPs) of secondary traits from the above models were used in the multivariate prediction models to compare predictive abilities for grain yield. Predictive ability was substantially improved, by 70% on average, in multivariate pedigree and genomic models when secondary traits were included in both training and test populations. Additionally, (i) predictive abilities varied only slightly among the MT, RR, and SR models in this data set, (ii) results indicated that including BLUPs of secondary traits from the MT model was best under severe drought, and (iii) the RR model was slightly better than the SR and MT models in the drought environment. Copyright © 2017 Crop Science Society of America.

  12. Predicting contraceptive vigilance in adolescent females: a projective method for assessing ego development.

    PubMed

    Speier, P L; Mélèse-D'Hospital, I A; Tschann, J M; Moore, P J; Adler, N E

    1997-01-01

To test the hypothesis that ego development would predict contraceptive use, problems in ego development were defined in terms of three factors: (1) realism, (2) complexity, and (3) discontinuity. Forty-one respondents aged 14-17 years were selected from a group of 233 adolescents who were administered a projective pregnancy scenario and participated in a 12-month follow-up. Twenty of these adolescents were randomly selected from the group determined to be effective contraceptive users, while 21 were randomly selected from the group of poor contraceptors. A chi-square test revealed a significant association (χ² = 13.82, df = 1, p < .0005) between the composite ego maturity (EM) measure and contraceptive outcome. Low scores on the EM measure predicted poor contraceptive use. EM was unrelated to age but was associated with race (χ² = 7.535, .025 < p < .05); however, EM still predicted contraceptive use when controlling for the effects of race. A simple, time-efficient projective pregnancy scenario is thus an effective way of identifying adolescent females at risk of poor contraceptive use and, therefore, untimely pregnancy. The stories are scored on factors related to the adolescent's ego development: subjects who scored lower on this measure showed poor contraceptive effectiveness at the 1-year follow-up, while subjects with higher scores demonstrated effective contraceptive use.
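The chi-square statistic reported above can be reproduced from a 2x2 table. A minimal sketch follows; the cell counts are hypothetical (chosen only to be consistent with the study's marginals of 21 poor and 20 effective contraceptors), so the statistic differs from the published 13.82:

```python
# Pearson chi-square for a 2x2 table of ego maturity (low/high, rows)
# versus contraceptive outcome (poor/effective, columns).
def chi_square_2x2(table):
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical counts consistent with the study's marginals (21 poor, 20 effective)
observed = [[15, 5],   # low EM
            [6, 15]]   # high EM
print(round(chi_square_2x2(observed), 2))
```

With df = 1, values this large are well past the usual 3.84 critical value at p = .05.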

  13. Investigation of a protein complex network

    NASA Astrophysics Data System (ADS)

    Mashaghi, A. R.; Ramezanpour, A.; Karimipour, V.

    2004-09-01

The budding yeast Saccharomyces cerevisiae is the first eukaryote whose genome has been completely sequenced. It is also the first eukaryotic cell whose proteome (the set of all proteins) and interactome (the network of all mutual interactions between proteins) have been analyzed. In this paper we study the structure of the yeast protein complex network, in which weighted edges between complexes represent the number of shared proteins. It is found that the network of protein complexes is a small-world network with scale-free behavior for many of its distributions. However, we find that there are no strong correlations between the weights and degrees of neighboring complexes. To reveal non-random features of the network, we also compare it with a null model in which the complexes randomly select their proteins. Finally, we propose a simple evolutionary model based on duplication and divergence of proteins.
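The null model mentioned above, in which complexes randomly select their proteins, can be sketched in a few lines (the proteome size and complex sizes below are toy values, not the yeast data):

```python
import random

# Null model: each complex draws its proteins uniformly at random from the
# proteome; the weighted edge between two complexes counts shared proteins.
def random_complex_network(n_proteins, complex_sizes, seed=0):
    rng = random.Random(seed)
    complexes = [set(rng.sample(range(n_proteins), k)) for k in complex_sizes]
    weights = {}
    for i in range(len(complexes)):
        for j in range(i + 1, len(complexes)):
            shared = len(complexes[i] & complexes[j])
            if shared:
                weights[(i, j)] = shared   # edge weight = number of shared proteins
    return weights

w = random_complex_network(n_proteins=500, complex_sizes=[30, 25, 40, 10, 55])
print(w)
```

Comparing weight and degree distributions of such randomized networks against the real one is what exposes the non-random features.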

  14. Comparison of Efficacy of Eye Movement, Desensitization and Reprocessing and Cognitive Behavioral Therapy Therapeutic Methods for Reducing Anxiety and Depression of Iranian Combatant Afflicted by Post Traumatic Stress Disorder

    NASA Astrophysics Data System (ADS)

    Narimani, M.; Sadeghieh Ahari, S.; Rajabi, S.

This research determines and compares the efficacy of two therapeutic methods, Eye Movement Desensitization and Reprocessing (EMDR) and Cognitive Behavioral Therapy (CBT), for reducing anxiety and depression in Iranian combatants afflicted with Post Traumatic Stress Disorder (PTSD) after the imposed war. The statistical population of the study comprises combatants with PTSD who were hospitalized in Isar Hospital of Ardabil province or resided in Ardabil. Participants were selected through simple random sampling and randomly assigned to three groups. The study design was a multi-group test-retest experiment, and the Hospital Anxiety and Depression Scale was used as the measurement instrument. The survey showed that both EMDR and CBT produced significant reductions in anxiety and depression.

  15. Searching for patterns in remote sensing image databases using neural networks

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.

  16. An exploratory study of Taiwanese consumers' experiences of using health-related websites.

    PubMed

    Hsu, Li-Ling

    2005-06-01

It is manifest that the rapid growth of Internet use and the improvement of information technology have changed our lifestyles. In recent years, Internet use in Taiwan has increased dramatically, from 3 million users in 1998 to approximately 8.6 million by the end of 2002. These statistics imply that not only health care professionals but also laypersons rely on the Internet for health information. The purpose of this study was to explore Taiwanese consumers' preferences and information needs, and the problems they encountered when obtaining information from medical websites. A survey was conducted in Taipei from August 26, 2002 to October 30, 2002. Using simple random sampling and systematic random sampling, 28 boroughs (Li) were selected, for a total sample of 1043. Over one-quarter (26.8%) of the respondents reported having never accessed the Internet, while 763 (73.2%) reported having accessed it. Of the Internet users, only 396 (51.9%) had accessed health-related websites; 367 (48.1%) reported never having done so. The most popular topics were disease information (46.5%), followed by diet consultation (34.8%), medical news (28.5%), and cosmetology (28.5%). The results of the survey show that a large percentage of people in Taiwan have never made good use of the health information available on such websites. Reasons for not using the websites included a lack of time or Internet access skills, no motivation, dissatisfaction with the information, unreliable information, and inability to locate the information needed. The author recommends enhancing health information access skills, understanding the needs and preferences of consumers, promoting the quality of medical websites, and improving their functions.
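The two sampling schemes named above can be sketched as a two-stage selection: boroughs by simple random sampling, then households within a borough by systematic sampling with a random start. All frame sizes below are invented for illustration:

```python
import random

# Two-stage selection sketch: simple random sampling of boroughs, then
# systematic sampling of households within a borough (all sizes invented).
def simple_random_sample(frame, n, seed=1):
    return random.Random(seed).sample(frame, n)

def systematic_sample(frame, n, seed=1):
    step = len(frame) // n                       # sampling interval
    start = random.Random(seed).randrange(step)  # random start in [0, step)
    return [frame[start + i * step] for i in range(n)]

boroughs = simple_random_sample(list(range(400)), 28)   # 28 boroughs from a frame of 400
households = systematic_sample(list(range(1200)), 40)   # 40 households in one borough
print(len(boroughs), len(households))
```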

  17. Models of Protocellular Structure, Function and Evolution

    NASA Technical Reports Server (NTRS)

    New, Michael H.; Pohorille, Andrew; Szostak, Jack W.; Keefe, Tony; Lanyi, Janos K.; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

In the absence of any record of protocells, the most direct way to test our understanding of the origin of cellular life is to construct laboratory models that capture important features of protocellular systems. Such efforts are currently underway in a collaborative project between NASA-Ames, Harvard Medical School, and the University of California. They are accompanied by computational studies aimed at explaining the self-organization of simple molecules into ordered structures. The centerpiece of this project is a method for the in vitro evolution of protein enzymes toward arbitrary catalytic targets. A similar approach has already been developed for nucleic acids, in which a small number of functional molecules are selected from a large, random population of candidates. The selected molecules are then vastly multiplied using the polymerase chain reaction.

  18. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection.

    PubMed

    Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C

    2011-09-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.
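The bias itself is easy to demonstrate by simulation. The sketch below illustrates the problem, not the authors' correction: it genotypes an additive SNP, keeps only the two phenotype tails, and shows that the ordinary least-squares slope overshoots the true effect:

```python
import random

# Illustration of the bias (not the authors' correction): fit OLS of
# phenotype on genotype using only the two phenotype tails.
def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

rng = random.Random(7)
beta = 0.3                                              # true additive effect
g = [rng.choice([0, 1, 2]) for _ in range(20000)]       # SNP genotypes
y = [beta * gi + rng.gauss(0, 1) for gi in g]           # quantitative trait
ranked = sorted(y)
lo, hi = ranked[len(y) // 10], ranked[-(len(y) // 10)]  # 10% tails on each side
sel = [(gi, yi) for gi, yi in zip(g, y) if yi <= lo or yi >= hi]
b_sel = ols_slope([gi for gi, _ in sel], [yi for _, yi in sel])
print(b_sel > beta)   # the selected-sample slope is inflated
```

Selecting on the outcome stretches the phenotype spread relative to the genotype spread, which is why the naive slope needs the correction the paper supplies.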

  19. Epidemic spreading on preferred degree adaptive networks.

    PubMed

    Jolad, Shivakumar; Liu, Wenjia; Schmittmann, B; Zia, R K P

    2012-01-01

We study the standard SIS model of epidemic spreading on networks where individuals have a fluctuating number of connections around a preferred degree κ. Using very simple rules for forming such preferred degree networks, we find some unusual statistical properties not found in familiar Erdős-Rényi or scale-free networks. By letting κ depend on the fraction of infected individuals, we model behavioral changes in response to how the extent of the epidemic is perceived. In our models, the behavioral adaptations can be either 'blind' or 'selective', depending on whether a node adapts by cutting or adding links to randomly chosen partners or selectively, based on the state of the partner. For a frozen preferred network, we find that the infection threshold follows the heterogeneous mean-field result λc/μ = ⟨κ⟩/⟨κ²⟩ and the phase diagram matches the predictions of the annealed adjacency matrix (AAM) approach. With 'blind' adaptations, although the epidemic threshold remains unchanged, the infection level is substantially affected, depending on the details of the adaptation. The 'selective' adaptive SIS models are the most interesting: both the threshold and the level of infection change, controlled not only by how the adaptations are implemented but also by how often the nodes cut/add links (compared to the time scales of the epidemic spreading). A simple mean-field theory is presented for the selective adaptations which captures the qualitative and some of the quantitative features of the infection phase diagram.
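The frozen-network threshold quoted above, λc/μ = ⟨κ⟩/⟨κ²⟩, can be evaluated directly from a degree histogram. A sketch with an assumed narrow distribution around a preferred degree κ = 10:

```python
# Heterogeneous mean-field epidemic threshold: lambda_c / mu = <k> / <k^2>,
# computed from a degree histogram {degree: count}.
def hmf_threshold(degree_counts):
    n = sum(degree_counts.values())
    k1 = sum(k * c for k, c in degree_counts.items()) / n       # <k>
    k2 = sum(k * k * c for k, c in degree_counts.items()) / n   # <k^2>
    return k1 / k2

# Assumed narrow distribution around a preferred degree kappa = 10
counts = {9: 250, 10: 500, 11: 250}
print(hmf_threshold(counts))   # just below 1/<k>, since <k^2> > <k>^2
```

For a narrow distribution the threshold stays close to 1/⟨κ⟩; broad (e.g. scale-free) distributions drive ⟨κ²⟩ up and the threshold toward zero.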

  20. A simple highly sensitive and selective aptamer-based colorimetric sensor for environmental toxins microcystin-LR in water samples.

    PubMed

    Li, Xiuyan; Cheng, Ruojie; Shi, Huijie; Tang, Bo; Xiao, Hanshuang; Zhao, Guohua

    2016-03-05

A simple and highly sensitive aptamer-based colorimetric sensor was developed for selective detection of microcystin-LR (MC-LR). The aptamer (ABA) was employed as the recognition element, binding MC-LR with high affinity, while gold nanoparticles (AuNPs) served as the sensing material, whose plasmon resonance absorption peak red-shifts upon target binding at a high concentration of sodium chloride. With the addition of MC-LR, the random-coil aptamer adsorbed on the AuNPs folded into a regulated structure, forming MC-LR-aptamer complexes that detached from the AuNP surface; this led to aggregation of the AuNPs, and the color changed from red to blue due to interparticle plasmon coupling. The sensor exhibited rapid and sensitive detection of MC-LR, with a linear range from 0.5 nM to 7.5 μM and a detection limit of 0.37 nM. Meanwhile, pollutants that usually coexist with MC-LR in contaminated water samples did not interfere with MC-LR detection. A mechanism was proposed suggesting that the high-affinity interaction between the aptamer and MC-LR significantly enhances the sensitivity and selectivity of MC-LR detection. The established method was also applied to real water samples, where excellent sensitivity and selectivity were likewise obtained. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Entropy and charge in molecular evolution--the case of phosphate

    NASA Technical Reports Server (NTRS)

    Arrhenius, G.; Sales, B.; Mojzsis, S.; Lee, T.; Bada, J. L. (Principal Investigator)

    1997-01-01

    Biopoesis, the creation of life, implies molecular evolution from simple components, randomly distributed and in a dilute state, to form highly organized, concentrated systems capable of metabolism, replication and mutation. This chain of events must involve environmental processes that can locally lower entropy in several steps; by specific selection from an indiscriminate mixture, by concentration from dilute solution, and in the case of the mineral-induced processes, by particular effectiveness in ordering and selective reaction, directed toward formation of functional biomolecules. Numerous circumstances provide support for the notion that negatively charged molecules were functionally required and geochemically available for biopoesis. Sulfite ion may have been important in bisulfite complex formation with simple aldehydes, facilitating the initial concentration by sorption of aldehydes in positively charged surface active minerals. Borate ion may have played a similar, albeit less investigated role in forming charged sugar complexes. Among anionic species, oligophosphate ions and charged phosphate esters are likely to have been of even more wide ranging importance, reflected in the continued need for phosphate in a proposed RNA world, and extending its central role to evolved biochemistry. Phosphorylation is shown to result in selective concentration by surface sorption of compounds, otherwise too dilute to support condensation reactions. It provides protection against rapid hydrolysis of sugars and, by selective concentration, induces the oligomerization of aldehydes. As a manifestation of life arisen, phosphate already appears in an organic context in the oldest preserved sedimentary record.

  2. The Mechanism for Processing Random-Dot Motion at Various Speeds in Early Visual Cortices

    PubMed Central

    An, Xu; Gong, Hongliang; McLoughlin, Niall; Yang, Yupeng; Wang, Wei

    2014-01-01

    All moving objects generate sequential retinotopic activations representing a series of discrete locations in space and time (motion trajectory). How direction-selective neurons in mammalian early visual cortices process motion trajectory remains to be clarified. Using single-cell recording and optical imaging of intrinsic signals along with mathematical simulation, we studied response properties of cat visual areas 17 and 18 to random dots moving at various speeds. We found that, the motion trajectory at low speed was encoded primarily as a direction signal by groups of neurons preferring that motion direction. Above certain transition speeds, the motion trajectory is perceived as a spatial orientation representing the motion axis of the moving dots. In both areas studied, above these speeds, other groups of direction-selective neurons with perpendicular direction preferences were activated to encode the motion trajectory as motion-axis information. This applied to both simple and complex neurons. The average transition speed for switching between encoding motion direction and axis was about 31°/s in area 18 and 15°/s in area 17. A spatio-temporal energy model predicted the transition speeds accurately in both areas, but not the direction-selective indexes to random-dot stimuli in area 18. In addition, above transition speeds, the change of direction preferences of population responses recorded by optical imaging can be revealed using vector maximum but not vector summation method. Together, this combined processing of motion direction and axis by neurons with orthogonal direction preferences associated with speed may serve as a common principle of early visual motion processing. PMID:24682033

  3. Representation of limb kinematics in Purkinje cell simple spike discharge is conserved across multiple tasks

    PubMed Central

    Hewitt, Angela L.; Popa, Laurentiu S.; Pasalar, Siavash; Hendrix, Claudia M.

    2011-01-01

Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding the representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks, or whether kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of Radj2), followed by position (28 ± 24% of Radj2) and speed (11 ± 19% of Radj2). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower Radj2 values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics. PMID:21795616
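The lead/lag fitting described above can be sketched as a scan over candidate lags τ, keeping the one with the best linear fit. The signal model below is synthetic and the single-lag squared-correlation scan is an assumption for illustration, not the paper's exact regression:

```python
import math, random

# Lag scan sketch: pair firing at time t with velocity at t + tau and keep
# the tau giving the best linear fit (squared correlation).
def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def best_lag(firing, velocity, max_lag):
    scores = {lag: r_squared(firing[:len(firing) - lag], velocity[lag:])
              for lag in range(max_lag + 1)}
    return max(scores, key=scores.get)

rng = random.Random(3)
vel = [math.sin(t / 20) + rng.gauss(0, 0.1) for t in range(1000)]
fire = vel[5:] + [0.0] * 5     # synthetic firing that leads velocity by 5 bins
print(best_lag(fire, vel, 20))
```

A positive recovered lag corresponds to firing leading the kinematics, the pattern the study reports.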

  4. Zombie states for description of structure and dynamics of multi-electron systems

    NASA Astrophysics Data System (ADS)

    Shalashilin, Dmitrii V.

    2018-05-01

Canonical coherent states (CSs) of the harmonic oscillator have been extensively used as a basis in a number of computational methods of quantum dynamics. However, generalising such techniques for fermionic systems is difficult because Fermionic Coherent States (FCSs) require the complicated algebra of Grassmann numbers, which is not well suited for numerical calculations. This paper introduces a coherent antisymmetrised superposition of "dead" and "alive" electronic states called here a Zombie State (ZS), which can be used in the manner of FCSs but without Grassmann algebra. Instead, for Zombie States, a very simple sign-changing rule is used in the definition of creation and annihilation operators. Calculation of electronic structure Hamiltonian matrix elements between two ZSs then becomes very simple, and a straightforward technique for time propagation of fermionic wave functions can be developed. By analogy with the existing methods based on canonical coherent states of the harmonic oscillator, fermionic wave functions can be propagated using a set of randomly selected Zombie States as a basis. As a proof of principle, the proposed Coupled Zombie States approach is tested on a simple example, showing that the technique is exact.

  5. A cross-national trial of brief interventions with heavy drinkers. WHO Brief Intervention Study Group.

    PubMed Central

    1996-01-01

    OBJECTIVES. The relative effects of simple advice and brief counseling were evaluated with heavy drinkers identified in primary care and other health settings in eight countries. METHODS. Subjects (1260 men, 299 women) with no prior history of alcohol dependence were selected if they consumed alcohol with sufficient frequency or intensity to be considered at risk of alcohol-related problems. Subjects were randomly assigned to a control group, a simple advice group, or a group receiving brief counseling. Seventy-five percent of subjects were evaluated 9 months later. RESULTS. Male patients exposed to the interventions reported approximately 17% lower average daily alcohol consumption than those in the control group. Reductions in the intensity of drinking were approximately 10%. For women, significant reductions were observed in both the control and the intervention groups. Five minutes of simple advice were as effective as 20 minutes of brief counseling. CONCLUSIONS. Brief interventions are consistently robust across health care settings and sociocultural groups and can make a significant contribution to the secondary prevention of alcohol-related problems if they are widely used in primary care. PMID:8669518

  6. Crowding with detection and coarse discrimination of simple visual features.

    PubMed

    Põder, Endel

    2008-04-24

    Some recent studies have suggested that there are actually no crowding effects with detection and coarse discrimination of simple visual features. The present study tests the generality of this idea. A target Gabor patch, surrounded by either 2 or 6 flanker Gabors, was presented briefly at 4 deg eccentricity of the visual field. Each Gabor patch was oriented either vertically or horizontally (selected randomly). Observers' task was either to detect the presence of the target (presented with probability 0.5) or to identify the orientation of the target. The target-flanker distance was varied. Results were similar for the two tasks but different for 2 and 6 flankers. The idea that feature detection and coarse discrimination are immune to crowding may be valid for the two-flanker condition only. With six flankers, a normal crowding effect was observed. It is suggested that the complexity of the full pattern (target plus flankers) could explain the difference.

  7. SAS procedures for designing and analyzing sample surveys

    USGS Publications Warehouse

    Stafford, Joshua D.; Reinecke, Kenneth J.; Kaminski, Richard M.

    2003-01-01

Complex surveys often are necessary to estimate occurrence (or distribution), density, and abundance of plants and animals for purposes of research and conservation. Most scientists are familiar with simple random sampling, where sample units are selected from a population of interest (sampling frame) with equal probability. However, the goal of ecological surveys often is to make inferences about populations over large or complex spatial areas where organisms are not homogeneously distributed or sampling frames are inconvenient or impossible to construct. Candidate sampling strategies for such complex surveys include stratified, multistage, and adaptive sampling (Thompson 1992, Buckland 1994).
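The contrast drawn above, equal-probability simple random sampling versus a stratified design, can be sketched as follows (the frame and strata are made up; proportional allocation is one common choice):

```python
import random

# Equal-probability simple random sampling versus stratified sampling with
# proportional allocation (frame and strata are made up).
def srs(frame, n, seed=0):
    return random.Random(seed).sample(frame, n)

def stratified(strata, n, seed=0):
    rng = random.Random(seed)
    total = sum(len(units) for units in strata.values())
    return {name: rng.sample(units, round(n * len(units) / total))
            for name, units in strata.items()}

strata = {"wetland": list(range(600)), "upland": list(range(600, 1000))}
sample = stratified(strata, n=50)
print({name: len(units) for name, units in sample.items()})
```

Stratification guarantees each habitat is represented in proportion to its size, which a single simple random draw does not.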

  8. Measuring socioeconomic status in multicountry studies: results from the eight-country MAL-ED study

    PubMed Central

    2014-01-01

Background There is no standardized approach to comparing socioeconomic status (SES) across multiple sites in epidemiological studies. This is particularly problematic when cross-country comparisons are of interest. We sought to develop a simple measure of SES that would perform well across diverse, resource-limited settings. Methods A cross-sectional study was conducted with 800 children aged 24 to 60 months across eight resource-limited settings. Parents were asked to respond to a household SES questionnaire, and the height of each child was measured. A statistical analysis was done in two phases. First, the best approach for selecting and weighting household assets as a proxy for wealth was identified. We compared four approaches to measuring wealth: maternal education, principal components analysis, Multidimensional Poverty Index, and a novel variable selection approach based on the use of random forests. Second, the selected wealth measure was combined with other relevant variables to form a more complete measure of household SES. We used child height-for-age Z-score (HAZ) as the outcome of interest. Results Mean age of study children was 41 months, 52% were boys, and 42% were stunted. Using cross-validation, we found that random forests yielded the lowest prediction error when selecting assets as a measure of household wealth. The final SES index included access to improved water and sanitation, eight selected assets, maternal education, and household income (the WAMI index). A 25% difference in the WAMI index was positively associated with a difference of 0.38 standard deviations in HAZ (95% CI 0.22 to 0.55). Conclusions Statistical learning methods such as random forests provide an alternative to principal components analysis in the development of SES scores. Results from this multicountry study demonstrate the validity of a simplified SES index. With further validation, this simplified index may provide a standard approach for SES adjustment across resource-limited settings. PMID:24656134
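As a point of reference for the asset-weighting comparison above, here is a minimal numpy sketch of the principal-components alternative: the first principal component's loadings become asset weights and its scores a household wealth index. The 0/1 asset matrix is randomly generated, not MAL-ED data:

```python
import numpy as np

# PCA wealth index sketch: standardize a 0/1 asset matrix, take the first
# principal component's loadings as asset weights, its scores as the index.
rng = np.random.default_rng(0)
assets = rng.integers(0, 2, size=(800, 8)).astype(float)   # 800 households, 8 assets
z = (assets - assets.mean(axis=0)) / assets.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
weights = eigvecs[:, -1]       # loadings for the largest eigenvalue
wealth = z @ weights           # household wealth scores
print(wealth.shape)
```

The random-forest alternative the study favors instead selects assets by their out-of-bag prediction error against the outcome, rather than by explained asset variance.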

  9. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2007-01-01

Sustained increases in energy prices have focused attention on gas resources in low permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are large. Planning and development decisions for extraction of such resources must be area-wide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm the decision to enter such plays depends on reconnaissance level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional scale cost functions. The context of the worked example is the Devonian Antrim shale gas play, Michigan Basin. One finding relates to selection of the resource prediction model to be used with economic models. Models which can best predict aggregate volume over larger areas (many hundreds of sites) may lose granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined by extraneous factors. The paper also shows that when these simple prediction models are used to strategically order drilling prospects, the gain in gas volume over volumes associated with simple random site selection amounts to 15 to 20 percent. It also discusses why the observed benefit of updating predictions from results of new drilling, as opposed to following static predictions, is somewhat smaller. Copyright 2007, Society of Petroleum Engineers.
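One simple nonparametric local-regression flavor, a Gaussian-kernel-weighted local mean, can illustrate the kind of site-level prediction described above (the estimator and data are illustrative assumptions, not the paper's model):

```python
import math

# Gaussian-kernel local mean: predict the volume at an untested location x0
# from nearby tested sites, weighting each by distance.
def local_estimate(x0, sites, bandwidth):
    num = den = 0.0
    for x, volume in sites:
        w = math.exp(-((x - x0) / bandwidth) ** 2)
        num += w * volume
        den += w
    return num / den

tested = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]   # (location, recovered volume)
print(local_estimate(1.0, tested, bandwidth=0.5))
```

Ranking untested sites by such estimates is what produces the 15 to 20 percent gain over random site ordering reported above.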

  10. A Methodology for Multihazards Load Combinations of Earthquake and Heavy Trucks for Bridges

    PubMed Central

    Wang, Xu; Sun, Baitao

    2014-01-01

Issues of load combination for earthquakes and heavy trucks are important in multihazards bridge design. Current load and resistance factor design (LRFD) specifications usually treat extreme hazards separately and provide no probabilistic basis for extreme load combinations. Earthquake load and heavy truck load are random processes with distinct characteristics, and the maximum combined load is not the simple superposition of their individual maxima. The traditional Ferry Borges-Castanheta model, which accounts for load duration and occurrence probability, describes well how random processes are converted to random variables for load combination, but it imposes strict constraints on the choice of time interval needed to obtain precise results. Turkstra's rule combines one load at its maximum value over the bridge's service life with another load at its instantaneous (or mean) value, which appears more rational, but its results are generally unconservative. Therefore, a modified model is presented here that combines the advantages of the Ferry Borges-Castanheta model and Turkstra's rule. The modified model is based on conditional probability, which converts random processes to random variables relatively easily and accounts for the non-maximum factor in load combinations. Combinations of earthquake load and heavy truck load are used to illustrate the model, and the results of a numerical simulation verify its feasibility and rationality. PMID:24883347
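Turkstra's rule as described above reduces to a small computation: for each load, add its lifetime maximum to the companion (point-in-time) values of the others, then take the largest case. The load values below are illustrative placeholders:

```python
# Turkstra's rule: each load case takes one load at its lifetime maximum plus
# the companion (point-in-time) values of the others; the design value is the
# largest case.
def turkstra(max_values, companion_values):
    cases = [max_values[i] + sum(companion_values[j]
                                 for j in range(len(max_values)) if j != i)
             for i in range(len(max_values))]
    return max(cases)

quake_max, truck_max = 1.00, 0.60   # normalized lifetime maxima (placeholders)
quake_apt, truck_apt = 0.05, 0.25   # point-in-time companion values
print(turkstra([quake_max, truck_max], [quake_apt, truck_apt]))
```

Because the rule never lets two loads peak together, it can undershoot the true combined maximum, the unconservatism noted above.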

  11. Fabrication of nanowire channels with unidirectional alignment and controlled length by a simple, gas-blowing-assisted, selective-transfer-printing technique.

    PubMed

    Kim, Yong-Kwan; Kang, Pil Soo; Kim, Dae-Il; Shin, Gunchul; Kim, Gyu Tae; Ha, Jeong Sook

    2009-03-01

A printing-based lithographic technique for the patterning of V(2)O(5) nanowire channels with unidirectional orientation and controlled length is introduced. Simple, directional blowing with N(2) gas across a patterned polymer stamp inked with randomly distributed V(2)O(5) nanowires induces alignment of the nanowires perpendicular to the long axis of the line patterns. Subsequent stamping on an amine-terminated surface results in the selective transfer of the aligned nanowires, with a controlled length corresponding to the width of the relief region of the polymer stamp. By employing this gas-blowing-assisted, selective-transfer-printing technique, two device structures consisting of nanowire channels and two metal electrodes with top contact are fabricated, with the nanowires aligned either parallel (parallel device) or perpendicular (serial device) to the current flow in the conduction channel. The electrical properties demonstrate a noticeable difference between the two devices, with a large hysteresis in the parallel device but none in the serial device. Systematic analysis of the hysteresis and the electrical stability explains the observed hysteresis in terms of proton diffusion in the water layer of the V(2)O(5) nanowires, induced by the application of an external bias voltage above a certain threshold.

  12. The emergence of collective phenomena in systems with random interactions

    NASA Astrophysics Data System (ADS)

    Abramkina, Volha

    Emergent phenomena are one of the most profound topics in modern science, addressing the ways that collectivities and complex patterns appear due to multiplicity of components and simple interactions. Ensembles of random Hamiltonians allow one to explore emergent phenomena in a statistical way. In this work we adopt a shell model approach with a two-body interaction Hamiltonian. The sets of the two-body interaction strengths are selected at random, resulting in the two-body random ensemble (TBRE). Symmetries such as angular momentum, isospin, and parity entangled with complex many-body dynamics result in surprising order discovered in the spectrum of low-lying excitations. The statistical patterns exhibited in the TBRE are remarkably similar to those observed in real nuclei. Signs of almost every collective feature seen in nuclei, namely, pairing superconductivity, deformation, and vibration, have been observed in random ensembles [3, 4, 5, 6]. In what follows a systematic investigation of nuclear shape collectivities in random ensembles is conducted. The development of the mean field, its geometry, multipole collectivities and their dependence on the underlying two-body interaction are explored. Apart from the role of static symmetries such as SU(2) angular momentum and isospin groups, the emergence of dynamical symmetries including the seniority SU(2), rotational symmetry, as well as the Elliot SU(3) is shown to be an important precursor for the existence of geometric collectivities.

  13. The effects of probiotic and synbiotic supplementation on metabolic syndrome indices in adults at risk of type 2 diabetes: study protocol for a randomized controlled trial.

    PubMed

    Kassaian, Nazila; Aminorroaya, Ashraf; Feizi, Awat; Jafari, Parvaneh; Amini, Masoud

    2017-03-29

    The incidence of type 2 diabetes, cardiovascular diseases, and obesity has been rising dramatically; however, their pathogenesis is particularly intriguing. Recently, dysbiosis of the intestinal microbiota has emerged as a new candidate that may be linked to metabolic diseases. We hypothesize that selective modulation of the intestinal microbiota by probiotic or synbiotic supplementation may improve metabolic dysfunction and prevent diabetes in prediabetics. In this study, a synthesis and study of synbiotics will be carried out for the first time in Iran. In a randomized triple-blind controlled clinical trial, 120 adults with impaired glucose tolerance based on the inclusion criteria will be selected by a simple random sampling method and will be randomly allocated to 6 months of 6 g/d probiotic, synbiotic or placebo. The fecal abundance of bacteria, blood pressure, height, weight, and waist and hip circumferences will be measured at baseline and following treatment. Also, plasma lipid profiles, HbA1C, fasting plasma glucose, and insulin levels will be measured, and insulin resistance (HOMA-IR) and beta-cell function (HOMA-B) will be calculated, at baseline and again at months 3, 6, 12, and 18. The data will be compared within and between groups using statistical methods. The results of this trial could contribute to the evidence-based clinical guidelines that address gut microbiota manipulation to maximize health benefits in prevention and management of metabolic syndrome in prediabetes. Iranian Registry of Clinical Trials: IRCT201511032321N2. Registered on 27 February 2016.
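    The sampling and allocation described in the protocol (simple random sampling of 120 eligible adults, then random allocation to three equal arms) can be sketched as follows; the function name, arm labels, ID range and fixed seed are illustrative assumptions, not details of the trial:

```python
import random

def allocate_trial(eligible_ids, n_select=120,
                   arms=("probiotic", "synbiotic", "placebo"), seed=42):
    """Simple random sampling of participants, followed by random
    allocation to equal-sized treatment arms."""
    rng = random.Random(seed)
    sample = rng.sample(eligible_ids, n_select)   # SRS without replacement
    rng.shuffle(sample)                           # random order -> random allocation
    per_arm = n_select // len(arms)
    return {arm: sample[i * per_arm:(i + 1) * per_arm]
            for i, arm in enumerate(arms)}

groups = allocate_trial(list(range(1, 1001)))     # hypothetical frame of 1000 IDs
```

    Because the sample is shuffled before being split, every selected participant is equally likely to land in each arm.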

  14. Artificial neural network study on organ-targeting peptides

    NASA Astrophysics Data System (ADS)

    Jung, Eunkyoung; Kim, Junhyoung; Choi, Seung-Hoon; Kim, Minkyoung; Rhee, Hokyoung; Shin, Jae-Min; Choi, Kihang; Kang, Sang-Kee; Lee, Nam Kyung; Choi, Yun-Jaie; Jung, Dong Hyun

    2010-01-01

    We report a new approach to studying organ targeting of peptides on the basis of peptide sequence information. The positive control data sets consist of organ-targeting peptide sequences identified by the peroral phage-display technique for four organs, and the negative control data are prepared from random sequences. The capacity of our models to make appropriate predictions is validated by statistical indicators including sensitivity, specificity, enrichment curve, and the area under the receiver operating characteristic (ROC) curve (the ROC score). The VHSE descriptor produces statistically significant training models, and the models with simple neural network architectures show slightly greater predictive power than those with complex ones. The training and test set statistics indicate that our models could discriminate between organ-targeting and random sequences. We anticipate that our models will be applicable to the selection of organ-targeting peptides for generating peptide drugs or peptidomimetics.

  15. The non-random walk of stock prices: the long-term correlation between signs and sizes

    NASA Astrophysics Data System (ADS)

    La Spada, G.; Farmer, J. D.; Lillo, F.

    2008-08-01

    We investigate the random walk of prices by developing a simple model relating the properties of the signs and absolute values of individual price changes to the diffusion rate (volatility) of prices at longer time scales. We show that this benchmark model is unable to reproduce the diffusion properties of real prices. Specifically, we find that for one hour intervals this model consistently over-predicts the volatility of real price series by about 70%, and that this effect becomes stronger as the length of the intervals increases. By selectively shuffling some components of the data while preserving others we are able to show that this discrepancy is caused by a subtle but long-range non-contemporaneous correlation between the signs and sizes of individual returns. We conjecture that this is related to the long-memory of transaction signs and the need to enforce market efficiency.
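    The selective shuffling used to isolate the sign-size correlation can be illustrated with a minimal sketch: permute the signs of a return series while keeping the absolute values in place, which destroys any sign-size dependence but preserves both marginal distributions. The function name and sample series are hypothetical:

```python
import random

def shuffle_signs(returns, seed=0):
    """Permute the signs of a return series while keeping the sequence of
    absolute values fixed. Any correlation between signs and sizes is
    destroyed, but the marginal distributions of both are preserved."""
    rng = random.Random(seed)
    signs = [1 if r >= 0 else -1 for r in returns]
    rng.shuffle(signs)
    return [s * abs(r) for s, r in zip(signs, returns)]

original = [0.5, -1.2, 0.3, -0.7, 2.1, -0.1]
shuffled = shuffle_signs(original)
```

    Comparing the volatility of aggregated returns before and after such a shuffle is what reveals whether the sign-size correlation contributes to the diffusion rate.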

  16. Minimalist design of a robust real-time quantum random number generator

    NASA Astrophysics Data System (ADS)

    Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.

    2015-08-01

    We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as its simple and straightforward hardware implementation as a stand-alone module. As a source of randomness the device uses measurements of time intervals between clicks of a single-photon detector. The obtained raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high speed on-the-fly processing without the need of extensive computations. The overall performance of the device is around 1 random bit per detector click, resulting in 1.2 Mbit/s generation rate in our implementation.
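    The paper's extractor is realized as a look-up table; as a minimal stand-in for deterministic post-processing of biased raw bits, the classic von Neumann debiasing step (not the authors' actual table) looks like this:

```python
def von_neumann_extract(raw_bits):
    """Classic von Neumann debiasing: read non-overlapping pairs of raw
    bits, emit 0 for (0,1) and 1 for (1,0), discard (0,0) and (1,1).
    If the raw bits are i.i.d., the output is unbiased regardless of
    the bias of the source."""
    out = []
    for i in range(0, len(raw_bits) - 1, 2):
        a, b = raw_bits[i], raw_bits[i + 1]
        if a != b:
            out.append(a)
    return out
```

    Like the look-up-table extractor in the paper, this is a fixed deterministic map that can run on the fly without heavy computation.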

  17. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs

    ERIC Educational Resources Information Center

    Petscher, Yaacov; Schatschneider, Christopher

    2011-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the…

  18. Globalization, immigration and diabetes self-management: an empirical study amongst immigrants with type 2 diabetes mellitus in Ireland.

    PubMed

    Thabit, H; Shah, S; Nash, M; Brema, I; Nolan, J J; Martin, G

    2009-10-01

    We have previously reported that immigrants in Ireland have poorer glycemic control compared with a matched population of Irish patients. This may be associated with poor diabetes self-care and low health literacy. To compare the diabetes self-care profile of non-Irish-national patients, i.e. immigrant patients (IM), and Irish patients (IR) attending a hospital diabetes clinic and to evaluate differences in health literacy between the two cohorts. We studied the differences in diabetes self-management between 52 randomly selected non-Irish-national patients with type 2 diabetes and 48 randomly selected Irish/Caucasian patients. Rapid Estimate of Adult Literacy in Medicine (REALM) was used to assess health literacy. IM had poorer glycemic control than IR (HbA1c 8.0 ± 1.9 vs. 6.9 ± 1.4%, P < 0.005). A significant proportion of IM forget to monitor their daily blood glucose (42.1% vs. 12.5%, P < 0.05). Family support is more important amongst IM in performing daily blood glucose monitoring (75% vs. 47.7%, P < 0.05), taking medications (81.7% vs. 42.2%, P = 0.01) and following an appropriate meal plan (87.6% vs. 62.2%, P < 0.05). Fifty-three percent can only understand simple or familiar questions about their diabetes care; 65.9% can only provide information on simple or familiar topics about their diabetes. Health literacy was found to be lower in the IM group when assessed using REALM (52.7 vs. 61.4, P = 0.01). Those providing diabetes education and care need to be aware of differing patient expectations regarding family involvement in the care of their diabetes and the possible contribution of language problems and lower health literacy to a limited understanding of diabetes self-care.

  19. A model-based 'varimax' sampling strategy for a heterogeneous population.

    PubMed

    Akram, Nuzhat A; Farooqi, Shakeel R

    2014-01-01

    Sampling strategies are planned to enhance the homogeneity of a sample, hence to minimize confounding errors. A sampling strategy was developed to minimize the variation within population groups. Karachi, the largest urban agglomeration in Pakistan, was used as a model population. Blood groups ABO and Rh factor were determined for 3000 unrelated individuals selected through simple random sampling. Among them five population groups, namely Balochi, Muhajir, Pathan, Punjabi and Sindhi, based on paternal ethnicity were identified. An index was designed to measure the proportion of admixture at parental and grandparental levels. Population models based on index score were proposed. For validation, 175 individuals selected through stratified random sampling were genotyped for the three STR loci CSF1PO, TPOX and TH01. ANOVA showed significant differences across the population groups for blood groups and STR loci distribution. Gene diversity was higher across the sub-population model than in the agglomerated population. At parental level gene diversities are significantly higher across No admixture models than Admixture models. At grandparental level the difference was not significant. A sub-population model with no admixture at parental level was justified for sampling the heterogeneous population of Karachi.

  20. [Clinical study of tissue-selecting therapy in the treatment of mixed hemorrhoids: a single-blind randomized controlled trial].

    PubMed

    He, Hongyan; He, Ping; Liu, Ning

    2014-06-01

    To evaluate the clinical efficacy and safety of tissue-selecting therapy (TST) in the treatment of mixed hemorrhoids. A single-blind randomized study was carried out. A total of 120 patients with mixed hemorrhoids from January to December 2012 were prospectively enrolled in the study and equally divided into two groups, a TST group and a procedure for prolapse and hemorrhoids (PPH) group. Surgical data, efficacy and postoperative complications were compared between the two groups. As compared to the PPH group, patients in the TST group had shorter operation time [(15.9±5.18) min vs. (22.6±7.1) min, P<0.05], lower scores of rectal urgency (0.5±0.2 vs. 1.5±1.4, P<0.05), and shorter hospital stay [(11.2±3.7) d vs. (14.8±3.7) d, P<0.05]. No case of anastomotic stricture was found in the TST group, while 11 cases (18.3%) developed anastomotic stricture in the PPH group. There were no significant differences in effective rate and pain score of first defecation between the two groups. TST is reliable and safe for mixed hemorrhoids, with the advantages of simplicity, rapid recovery and fewer complications.

  1. Correlation between workplace and occupational burnout syndrome in nurses.

    PubMed

    Ahmadi, Omid; Azizkhani, Reza; Basravi, Monem

    2014-01-01

    This study was conducted to determine the effect of nurses' workplace on burnout syndrome among nurses working in Isfahan's Alzahra Hospital as a reference and typical university affiliated hospital, in 2010. In this cross-sectional study, 100 nurses were randomly selected among those working in emergency, orthopedic, dialysis wards and intensive care unit (ICU). Required data on determination of occupational burnout rate among the nurses of these wards were collected using Maslach Burnout Inventory (MBI) standard and validated questionnaire. Nurses were selected using simple random sampling. The multivariate ANOVA analysis showed that occupational burnout mean values of nurses working in orthopedic and dialysis wards were significantly less than those of nurses working in emergency ward and ICU (P = 0.01). There was also no significant difference between occupational burnout mean values of nurses working in emergency ward and ICU (P > 0.05). t-test showed that there was a difference between occupational burnout values of men and women, as these values for women were higher than those of men (P = 0.001). Results showed that occupational burnout mean values of nurses working in emergency ward and ICU were significantly more than those of nurses working in orthopedic and dialysis wards.

  2. Quantification of moving target cyber defenses

    NASA Astrophysics Data System (ADS)

    Farris, Katheryn A.; Cybenko, George

    2015-05-01

    Current network and information systems are static, making it simple for attackers to maintain an advantage. Adaptive defenses, such as Moving Target Defenses (MTD), have been developed as potential "game-changers" in an effort to increase the attacker's workload. With many new methods being developed, it is difficult to accurately quantify and compare their overall costs and effectiveness. This paper compares the tradeoffs between current approaches to the quantification of MTDs. We present results from an expert opinion survey on quantifying the overall effectiveness, upfront and operating costs of a select set of MTD techniques. We find that gathering informed scientific opinions can be advantageous for evaluating such new technologies as it offers a more comprehensive assessment. We end by presenting a coarse ordering of a set of MTD techniques from most to least dominant. We found that seven out of 23 methods rank as the more dominant techniques, five of which are variants of either address space layout randomization or instruction set randomization. The remaining two techniques are applicable to software and computer platforms. Among the techniques that performed the worst are those primarily aimed at network randomization.

  3. Sampling considerations for disease surveillance in wildlife populations

    USGS Publications Warehouse

    Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.

    2008-01-01

    Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.

  4. Quantity and quality of information, education and communication during antenatal visit at private and public sector hospitals of Bahawalpur, Pakistan.

    PubMed

    Mahar, Benazeer; Kumar, Ramesh; Rizvi, Narjis; Bahalkani, Habib Akhtar; Haq, Mahboobul; Soomro, Jamila

    2012-01-01

    Information, education and communication (IEC) by the health care provider to the pregnant woman during the antenatal visit is crucial for a healthier pregnancy outcome. This study analysed the quality and quantity of the antenatal visit at a private and a public hospital of Bahawalpur, Pakistan. Exit interviews were conducted with 216 pregnant women using a validated, reliable and pre-tested adapted questionnaire. The first participant was selected by simple random sampling; the rest of the sample was selected by systematic random sampling, taking every 7th woman for interview. Ethical considerations were observed. The average communication time between a pregnant woman and her healthcare provider was 3 minutes in the public and 8 minutes in the private hospital. IEC mainly focused on diet and nutrition (86% private vs. 53% public); advice on family planning after delivery was discussed with 13% versus 7% of women in the public and private settings, respectively. None of the respondents in either facility received advice or counselling on breastfeeding and neonatal care. Birth preparedness components were discussed with women in the public and private hospitals, respectively. In both settings, antenatal clients did not receive information, education and communication according to World Health Organization guidelines. The quality and quantity of IEC during antenatal care were found to be very poor in both the public and private sector hospitals of urban Pakistan.
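    The sampling scheme described (a random start, then every 7th woman) is systematic random sampling, which can be sketched as follows; the frame size and fixed seed are illustrative assumptions:

```python
import random

def systematic_sample(frame, step=7, seed=0):
    """Systematic random sampling: choose a random start in [0, step),
    then take every step-th unit of the sampling frame."""
    start = random.Random(seed).randrange(step)
    return frame[start::step]

frame = list(range(216 * 7))        # hypothetical frame yielding 216 interviews
sample = systematic_sample(frame)
```

    Only the start is random; the rest of the sample is determined by the fixed interval, which is what makes the method cheap to run at a busy clinic.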

  5. Interacting particle systems on graphs

    NASA Astrophysics Data System (ADS)

    Sood, Vishal

    In this dissertation, the dynamics of socially or biologically interacting populations are investigated. The individual members of the population are treated as particles that interact via links on a social or biological network represented as a graph. The effect of the structure of the graph on the properties of the interacting particle system is studied using statistical physics techniques. In the first chapter, the central concepts of graph theory and social and biological networks are presented. Next, interacting particle systems that are drawn from physics, mathematics and biology are discussed in the second chapter. In the third chapter, the random walk on a graph is studied. The mean time for a random walk to traverse between two arbitrary sites of a random graph is evaluated. Using an effective medium approximation it is found that the mean first-passage time between pairs of sites, as well as all moments of this first-passage time, are insensitive to the density of links in the graph. The inverse of the mean first-passage time varies non-monotonically with the density of links near the percolation transition of the random graph. Much of the behavior can be understood by simple heuristic arguments. Evolutionary dynamics, by which mutants overspread an otherwise uniform population on heterogeneous graphs, are studied in the fourth chapter. Such a process underlies epidemic propagation, emergence of fads, social cooperation or invasion of an ecological niche by a new species. The first part of this chapter is devoted to neutral dynamics, in which the mutant genotype does not have a selective advantage over the resident genotype. The time to extinction of one of the two genotypes is derived. In the second part of this chapter, selective advantage or fitness is introduced such that the mutant genotype has a higher birth rate or a lower death rate.
This selective advantage leads to a dynamical competition in which selection dominates for large populations, while for small populations the dynamics are similar to the neutral case. The likelihood for the fitter mutants to drive the resident genotype to extinction is calculated.
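    The mean first-passage time studied in the third chapter can also be estimated by direct simulation; a minimal Monte Carlo sketch for a graph given as an adjacency list (the example graph, trial count and seed are illustrative):

```python
import random

def mean_first_passage(adj, source, target, trials=500, seed=1):
    """Monte Carlo estimate of the mean first-passage time of a simple
    random walk from `source` to `target` on a graph given as an
    adjacency list {node: [neighbors]}."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        node, steps = source, 0
        while node != target:
            node = rng.choice(adj[node])   # hop to a uniformly random neighbor
            steps += 1
        total += steps
    return total / trials

# 4-cycle 0-1-2-3-0; the exact MFPT between opposite nodes is 4
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```

    On a two-node graph the walk hits the target in exactly one step, which makes a convenient sanity check for the estimator.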

  6. Efficient quantum pseudorandomness with simple graph states

    NASA Astrophysics Data System (ADS)

    Mezher, Rawad; Ghalbouni, Joe; Dgheim, Joseph; Markham, Damian

    2018-02-01

    Measurement based (MB) quantum computation allows for universal quantum computing by measuring individual qubits prepared in entangled multipartite states, known as graph states. Unless corrected for, the randomness of the measurements leads to the generation of ensembles of random unitaries, where each random unitary is identified with a string of possible measurement results. We show that repeating an MB scheme an efficient number of times, on a simple graph state, with measurements at fixed angles and no feedforward corrections, produces a random unitary ensemble that is an ε-approximate t-design on n qubits. Unlike previous constructions, the graph is regular and is also a universal resource for measurement based quantum computing, closely related to the brickwork state.

  7. Music intervention during daily weaning trials-A 6 day prospective randomized crossover trial.

    PubMed

    Liang, Zhan; Ren, Dianxu; Choi, JiYeon; Happ, Mary Beth; Hravnak, Marylyn; Hoffman, Leslie A

    2016-12-01

    To examine the effect of patient-selected music intervention during daily weaning trials for patients on prolonged mechanical ventilation. Using a crossover repeated measures design, patients were randomized to music vs no music on the first intervention day. Provision of music was alternated for 6 days, resulting in 3 music and 3 no music days. During weaning trials on music days, data were obtained for 30 min prior to music listening and continued for 60 min while patients listened to selected music (total 90 min). On no music days, data were collected for 90 min. Outcome measures were heart rate (HR), respiratory rate (RR), oxygen saturation (SpO2), blood pressure (BP), dyspnea and anxiety assessed with a visual analog scale (VAS-D, VAS-A) and weaning duration (mean hours per day on music and no music days). Of 31 patients randomized, 23 completed the 6-day intervention. When comparisons were made between the 3 music and 3 no music days, there were significant decreases in RR and VAS-D and a significant increase in daily weaning duration on music days (p<0.05). A multivariate mixed-effects model analysis that included patients who completed ≥2 days of the intervention (n=28) demonstrated significant decreases in HR, RR, VAS-A, and VAS-D and a significant increase in daily weaning duration on music days (p<0.05). Providing patient-selected music during daily weaning trials is a simple, low-cost, potentially beneficial intervention for patients on prolonged mechanical ventilation. Further study is indicated to test the ability of this intervention to promote weaning success and benefits earlier in the weaning process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. 47 CFR 1.1602 - Designation for random selection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random selection...

  9. 47 CFR 1.1602 - Designation for random selection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random selection...

  10. Representation of limb kinematics in Purkinje cell simple spike discharge is conserved across multiple tasks.

    PubMed

    Hewitt, Angela L; Popa, Laurentiu S; Pasalar, Siavash; Hendrix, Claudia M; Ebner, Timothy J

    2011-11-01

    Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of R(adj)(2)), followed by position (28 ± 24% of R(adj)(2)) and speed (11 ± 19% of R(adj)(2)). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R(adj)(2) values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. 
These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics.
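    The lead/lag relationship between firing and kinematics can be illustrated with a simple cross-correlation lag scan. This is an illustrative stand-in, not the authors' regression model with the time constant τ, and all names and test signals are hypothetical:

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def best_lag(signal, reference, max_lag=5):
    """Return the lag (in samples) maximizing the correlation between
    `signal` and `reference`; a positive result means `signal` leads
    `reference`."""
    best, best_r = 0, -2.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = signal[:len(signal) - lag], reference[lag:]
        else:
            a, b = signal[-lag:], reference[:len(reference) + lag]
        r = pearson(a, b)
        if r > best_r:
            best, best_r = lag, r
    return best

reference = [math.sin(0.3 * i) for i in range(60)]
leading = [math.sin(0.3 * (i + 2)) for i in range(60)]   # same signal, 2 samples ahead
```

    Applied to firing rate versus hand velocity, a positive best lag would correspond to firing that leads the kinematics, as reported for many of the recorded cells.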

  11. Variable selection in subdistribution hazard frailty models with competing risks data

    PubMed Central

    Do Ha, Il; Lee, Minjung; Oh, Seungyoung; Jeong, Jong-Hyeon; Sylvester, Richard; Lee, Youngjo

    2014-01-01

    The proportional subdistribution hazards model (i.e. Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there has been no literature on variable selection method for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions (LASSO, SCAD and HL) in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification to existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual data sets from multi-center clinical trials. PMID:25042872
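    The LASSO penalty in the comparison above selects variables by shrinking coefficients to exact zeros via the soft-thresholding operator; a minimal sketch of that mechanism (illustrative only, not the authors' penalized h-likelihood procedure; the coefficient vector is hypothetical):

```python
def soft_threshold(z, lam):
    """Proximal operator of lam * |.|: shrink z toward zero by lam,
    clipping to exactly zero when |z| <= lam. Exact zeros are how an
    L1 penalty performs variable selection."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# coordinate-wise shrinkage of a fitted coefficient vector
coefs = [2.7, -0.4, 0.05, -3.1]
selected = [soft_threshold(c, 0.5) for c in coefs]
```

    Coefficients whose magnitude falls below the penalty level drop out of the model entirely, while large effects survive with a small bias toward zero.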

  12. Interplay of Determinism and Randomness: From Irreversibility to Chaos, Fractals, and Stochasticity

    NASA Astrophysics Data System (ADS)

    Tsonis, A.

    2017-12-01

    We will start our discussion into randomness by looking exclusively at our formal mathematical system to show that even in this pure and strictly logical system one cannot do away with randomness. By employing simple mathematical models, we will identify the three possible sources of randomness: randomness due to inability to find the rules (irreversibility), randomness due to inability to have infinite power (chaos), and randomness due to stochastic processes. Subsequently we will move from the mathematical system to our physical world to show that randomness, through the quantum mechanical character of small scales, through chaos, and because of the second law of thermodynamics, is an intrinsic property of nature as well. We will subsequently argue that the randomness in the physical world is consistent with the three sources of randomness suggested from the study of simple mathematical systems. Many examples ranging from purely mathematical to natural processes will be presented, which clearly demonstrate how the combination of rules and randomness produces the world we live in. Finally, the principle of least effort or the principle of minimum energy consumption will be suggested as the underlying principle behind this symbiosis between determinism and randomness.
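    The second source of randomness (chaos, the inability to have infinite power) is easy to demonstrate with the kind of simple mathematical model the abstract mentions: in the logistic map at r = 4, a perturbation in the 10th decimal place of the initial condition grows exponentially until the two orbits are unrelated. A minimal sketch, with illustrative parameter choices:

```python
def logistic_orbit(x0, r=4.0, n=50):
    """Iterate the logistic map x -> r*x*(1-x), returning the orbit."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-10)   # perturb the 10th decimal place
divergence = max(abs(x - y) for x, y in zip(a, b))
```

    The rule is fully deterministic, yet any finite-precision knowledge of the initial state is exhausted within a few dozen iterations, which is exactly the sense in which chaos manufactures randomness.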

  13. Predictors of employment status of treated patients with DSM-III-R diagnosis. Can logistic regression model find a solution?

    PubMed

    Daradkeh, T K; Karim, L

    1994-01-01

    To investigate the predictors of employment status of patients with DSM-III-R diagnosis, 55 patients were selected by a simple random technique from the main psychiatric clinic in Al Ain, United Arab Emirates. Structured and formal assessments were carried out to extract the potential predictors of outcome of schizophrenia. Logistic regression model revealed that being married, absence of schizoid personality, free or with minimum symptoms of the illness, later age of onset, and higher educational attainment were the most significant predictors of employment outcome. The implications of the results of this study are discussed in the text.

  14. Methodological considerations in using complex survey data: an applied example with the Head Start Family and Child Experiences Survey.

    PubMed

    Hahs-Vaughn, Debbie L; McWayne, Christine M; Bulotsky-Shearer, Rebecca J; Wen, Xiaoli; Faria, Ann-Marie

    2011-06-01

    Complex survey data are collected by means other than simple random samples. This creates two analytical issues: nonindependence and unequal selection probability. Failing to address these issues results in underestimated standard errors and biased parameter estimates. Using data from the nationally representative Head Start Family and Child Experiences Survey (FACES; 1997 and 2000 cohorts), three diverse multilevel models are presented that illustrate differences in results depending on addressing or ignoring the complex sampling issues. Limitations of using complex survey data are reported, along with recommendations for reporting complex sample results. © The Author(s) 2011
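    Unequal selection probability, the second issue named above, is classically handled by weighting each sampled unit by the inverse of its selection probability; a minimal Hajek-style sketch (illustrative, not the multilevel models applied to FACES):

```python
def weighted_mean(values, selection_probs):
    """Hajek-style estimate of a population mean from a sample drawn
    with unequal selection probabilities: weight each unit by 1/p_i.
    Ignoring the weights biases the estimate toward units that were
    easier to select."""
    weights = [1.0 / p for p in selection_probs]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

    With equal probabilities this reduces to the ordinary sample mean, which is why the issue only surfaces when the design departs from simple random sampling.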

  15. Spatial inventory integrating raster databases and point sample data. [Geographic Information System for timber inventory

    NASA Technical Reports Server (NTRS)

    Strahler, A. H.; Woodcock, C. E.; Logan, T. L.

    1983-01-01

    A timber inventory of the Eldorado National Forest, located in east-central California, provides an example of the use of a Geographic Information System (GIS) to stratify large areas of land for sampling and the collection of statistical data. The raster-based GIS format of the VICAR/IBIS software system allows simple and rapid tabulation of areas, and facilitates the selection of random locations for ground sampling. Algorithms that simplify the complex spatial pattern of raster-based information, and convert raster format data to strings of coordinate vectors, provide a link to conventional vector-based geographic information systems.

  16. 47 CFR 1.1603 - Conduct of random selection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection. The...

  17. 47 CFR 1.1603 - Conduct of random selection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection. The...

  18. Selection of Shared and Neoantigen-Reactive T Cells for Adoptive Cell Therapy Based on CD137 Separation.

    PubMed

    Seliktar-Ofir, Sivan; Merhavi-Shoham, Efrat; Itzhaki, Orit; Yunger, Sharon; Markel, Gal; Schachter, Jacob; Besser, Michal J

    2017-01-01

    Adoptive cell therapy (ACT) of autologous tumor infiltrating lymphocytes (TIL) is an effective immunotherapy for patients with solid tumors, yielding objective response rates of around 40% in refractory patients with metastatic melanoma. Most clinical centers utilize bulk, randomly isolated TIL from the tumor tissue for ex vivo expansion and infusion. Only a minor fraction of the administered T cells recognizes tumor antigens, such as shared and mutation-derived neoantigens, and consequently eliminates the tumor. Thus, there are many ongoing efforts to identify and select tumor-specific TIL for therapy; however, those approaches are very costly and require months, which is unreasonable for most metastatic patients. CD137 (4-1BB) has been identified as a co-stimulatory marker, which is induced upon the specific interaction of T cells with their target cell. Therefore, CD137 can be a useful biomarker and an important tool for the selection of tumor-reactive T cells. Here, we developed and validated a simple and time-efficient method for the selection of CD137-expressing T cells for therapy based on magnetic bead separation. CD137 selection was performed with clinical grade compliant reagents, and TIL were expanded in a large-scale manner to meet cell numbers required for the patient setting in a GMP facility. For the first time, the methodology was designed to comply with both clinical needs and limitations, and its feasibility was assessed. CD137-selected TIL demonstrated significantly increased antitumor reactivity and were enriched for T cells recognizing neoantigens as well as shared tumor antigens. CD137-based selection enabled the enrichment of tumor-reactive T cells without the necessity of knowing the epitope specificity or the antigen type. The direct implementation of the CD137 separation method to the cell production of TIL may provide a simple way to improve the clinical efficiency of TIL ACT.

  19. Selection of Shared and Neoantigen-Reactive T Cells for Adoptive Cell Therapy Based on CD137 Separation

    PubMed Central

    Seliktar-Ofir, Sivan; Merhavi-Shoham, Efrat; Itzhaki, Orit; Yunger, Sharon; Markel, Gal; Schachter, Jacob; Besser, Michal J.

    2017-01-01

    Adoptive cell therapy (ACT) of autologous tumor infiltrating lymphocytes (TIL) is an effective immunotherapy for patients with solid tumors, yielding objective response rates of around 40% in refractory patients with metastatic melanoma. Most clinical centers utilize bulk, randomly isolated TIL from the tumor tissue for ex vivo expansion and infusion. Only a minor fraction of the administered T cells recognizes tumor antigens, such as shared and mutation-derived neoantigens, and consequently eliminates the tumor. Thus, there are many ongoing efforts to identify and select tumor-specific TIL for therapy; however, those approaches are very costly and require months, which is unreasonable for most metastatic patients. CD137 (4-1BB) has been identified as a co-stimulatory marker, which is induced upon the specific interaction of T cells with their target cell. Therefore, CD137 can be a useful biomarker and an important tool for the selection of tumor-reactive T cells. Here, we developed and validated a simple and time-efficient method for the selection of CD137-expressing T cells for therapy based on magnetic bead separation. CD137 selection was performed with clinical grade compliant reagents, and TIL were expanded in a large-scale manner to meet cell numbers required for the patient setting in a GMP facility. For the first time, the methodology was designed to comply with both clinical needs and limitations, and its feasibility was assessed. CD137-selected TIL demonstrated significantly increased antitumor reactivity and were enriched for T cells recognizing neoantigens as well as shared tumor antigens. CD137-based selection enabled the enrichment of tumor-reactive T cells without the necessity of knowing the epitope specificity or the antigen type. The direct implementation of the CD137 separation method to the cell production of TIL may provide a simple way to improve the clinical efficiency of TIL ACT. PMID:29067023

  20. A Simple Probabilistic Combat Model

    DTIC Science & Technology

    2016-06-13

    1. INTRODUCTION The Lanchester combat model is a simple way to assess the effects of quantity and quality...case model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons...since the initial condition is very close to the break even line. What is more interesting is that the probability density tends to concentrate at

  1. 3D statistical shape models incorporating 3D random forest regression voting for robust CT liver segmentation

    NASA Astrophysics Data System (ADS)

    Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.

    2015-03-01

    During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the according reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions on landmark position contribute to a common voting map which reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.

  2. Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection

    NASA Astrophysics Data System (ADS)

    Harwati

    2017-06-01

    Supplier selection is a decision involving many criteria. Supplier selection models usually involve more than five main criteria and more than 10 sub-criteria; in fact, many models include more than 20 criteria. Involving too many criteria sometimes makes a supplier selection model difficult to apply in practice. This research focuses on designing a supplier selection model that is easy and simple to apply in a company. The Analytical Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four criteria suffice for a simple, easily applied selection model: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2), and service (weight 0.1). A real-case simulation shows that the simple model yields the same decision as a more complex model.
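The four-criterion model above reduces to a weighted sum. As a minimal sketch, the study's reported weights can be applied to hypothetical per-criterion supplier ratings (the supplier names and ratings below are invented for illustration):

```python
# Weighted-sum supplier scoring using the four AHP-derived criterion
# weights reported in the study (price 0.4, shipment 0.3, quality 0.2,
# service 0.1). Per-criterion ratings are hypothetical, on a 0-1 scale.
WEIGHTS = {"price": 0.4, "shipment": 0.3, "quality": 0.2, "service": 0.1}

def supplier_score(ratings):
    """Aggregate per-criterion ratings into a single weighted score."""
    return sum(WEIGHTS[c] * r for c, r in ratings.items())

suppliers = {
    "A": {"price": 0.9, "shipment": 0.6, "quality": 0.7, "service": 0.5},
    "B": {"price": 0.7, "shipment": 0.9, "quality": 0.8, "service": 0.6},
}
best = max(suppliers, key=lambda s: supplier_score(suppliers[s]))  # "B"
```

The highest-scoring supplier wins; because the weights sum to 1, the score stays on the same 0-1 scale as the ratings.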

  3. Managing salinity in Upper Colorado River Basin streams: Selecting catchments for sediment control efforts using watershed characteristics and random forests models

    USGS Publications Warehouse

    Tillman, Fred; Anning, David W.; Heilman, Julian A.; Buto, Susan G.; Miller, Matthew P.

    2018-01-01

    Elevated concentrations of dissolved-solids (salinity) including calcium, sodium, sulfate, and chloride, among others, in the Colorado River cause substantial problems for its water users. Previous efforts to reduce dissolved solids in upper Colorado River basin (UCRB) streams often focused on reducing suspended-sediment transport to streams, but few studies have investigated the relationship between suspended sediment and salinity, or evaluated which watershed characteristics might be associated with this relationship. Are there catchment properties that may help in identifying areas where control of suspended sediment will also reduce salinity transport to streams? A random forests classification analysis was performed on topographic, climate, land cover, geology, rock chemistry, soil, and hydrologic information in 163 UCRB catchments. Two random forests models were developed in this study: one for exploring stream and catchment characteristics associated with stream sites where dissolved solids increase with increasing suspended-sediment concentration, and the other for predicting where these sites are located in unmonitored reaches. Results of variable importance from the exploratory random forests models indicate that no simple source, geochemical process, or transport mechanism can easily explain the relationship between dissolved solids and suspended sediment concentrations at UCRB monitoring sites. Among the most important watershed characteristics in both models were measures of soil hydraulic conductivity, soil erodibility, minimum catchment elevation, catchment area, and the silt component of soil in the catchment. Predictions at key locations in the basin were combined with observations from selected monitoring sites, and presented in map-form to give a complete understanding of where catchment sediment control practices would also benefit control of dissolved solids in streams.

  4. Some practical problems in implementing randomization.

    PubMed

    Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet

    2010-06-01

    While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.
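Recommendation (1), reviewing the randomization schedule before the trial starts, presupposes that the schedule is generated up front rather than on demand (recommendation 2). A minimal sketch of a pre-generated permuted-block schedule; the arm labels and block size are hypothetical:

```python
import random

random.seed(11)

def permuted_block_schedule(n_blocks, block_size=4):
    """Pre-generate a permuted-block randomization schedule so it can be
    reviewed for balance and errors before the trial begins, instead of
    relying on an on-demand random number generator during conduct."""
    schedule = []
    for _ in range(n_blocks):
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        random.shuffle(block)          # permute within the block
        schedule.extend(block)
    return schedule

schedule = permuted_block_schedule(25)   # 100 assignments, balanced 50/50
```

Because every block contains equal numbers of each arm, the reviewer can verify overall and rolling balance directly from the printed schedule.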

  5. On the importance of incorporating sampling weights in ...

    EPA Pesticide Factsheets

    Occupancy models are used extensively to assess wildlife-habitat associations and to predict species distributions across large geographic regions. Occupancy models were developed as a tool to properly account for imperfect detection of a species. Current guidelines on survey design requirements for occupancy models focus on the number of sample units and the pattern of revisits to a sample unit within a season. We focus on the sampling design or how the sample units are selected in geographic space (e.g., stratified, simple random, unequal probability, etc). In a probability design, each sample unit has a sample weight which quantifies the number of sample units it represents in the finite (oftentimes areal) sampling frame. We demonstrate the importance of including sampling weights in occupancy model estimation when the design is not a simple random sample or equal probability design. We assume a finite areal sampling frame as proposed for a national bat monitoring program. We compare several unequal and equal probability designs and varying sampling intensity within a simulation study. We found the traditional single season occupancy model produced biased estimates of occupancy and lower confidence interval coverage rates compared to occupancy models that accounted for the sampling design. We also discuss how our findings inform the analyses proposed for the nascent North American Bat Monitoring Program and other collaborative synthesis efforts that propose h
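The effect described above can be shown in a few lines: under an unequal-probability design, an unweighted sample mean is biased, while weighting each sampled unit by the inverse of its inclusion probability (a Horvitz-Thompson-style estimator) recovers the frame-level occupancy rate. The strata, occupancy rates, and inclusion probabilities below are invented for illustration:

```python
import random

random.seed(1)
# Finite sampling frame of 10,000 units; occupancy is more common in
# stratum "high", which the design deliberately oversamples.
frame = [("high", random.random() < 0.8) for _ in range(2000)] + \
        [("low",  random.random() < 0.2) for _ in range(8000)]
true_rate = sum(occ for _, occ in frame) / len(frame)

# Unequal-probability design: "high" units sampled 4x as intensively.
sample = [u for u in frame
          if random.random() < (0.20 if u[0] == "high" else 0.05)]

naive = sum(occ for _, occ in sample) / len(sample)   # ignores the design
# Horvitz-Thompson style: weight = 1 / inclusion probability.
w = {"high": 1 / 0.20, "low": 1 / 0.05}
weighted = (sum(w[s] * occ for s, occ in sample)
            / sum(w[s] for s, _ in sample))
```

The naive estimate is pulled toward the oversampled stratum's rate; the weighted estimate stays near the frame-level truth.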

  6. Evaluating surrogate endpoints, prognostic markers, and predictive markers: Some simple themes.

    PubMed

    Baker, Stuart G; Kramer, Barnett S

    2015-08-01

    A surrogate endpoint is an endpoint observed earlier than the true endpoint (a health outcome) that is used to draw conclusions about the effect of treatment on the unobserved true endpoint. A prognostic marker is a marker for predicting the risk of an event given a control treatment; it informs treatment decisions when there is information on anticipated benefits and harms of a new treatment applied to persons at high risk. A predictive marker is a marker for predicting the effect of treatment on outcome in a subgroup of patients or study participants; it provides more rigorous information for treatment selection than a prognostic marker when it is based on estimated treatment effects in a randomized trial. We organized our discussion around a different theme for each topic. "Fundamentally an extrapolation" refers to the non-statistical considerations and assumptions needed when using surrogate endpoints to evaluate a new treatment. "Decision analysis to the rescue" refers to the use of decision analysis to evaluate an additional prognostic marker because it is not possible to choose between purely statistical measures of marker performance. "The appeal of simplicity" refers to a straightforward and efficient use of a single randomized trial to evaluate overall treatment effect and treatment effect within subgroups using predictive markers. The simple themes provide a general guideline for evaluation of surrogate endpoints, prognostic markers, and predictive markers. © The Author(s) 2014.

  7. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
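A minimal simulation of the attenuation and its correction. The true slope and error variances below are invented; in practice the reliability ratio is estimated from the repeated measurements of a reliability study, as the article describes:

```python
import random

random.seed(42)
# True model: y = 2*x + noise, but x is observed with measurement error.
n = 5000
x_true = [random.gauss(0, 1) for _ in range(n)]
x_obs  = [x + random.gauss(0, 1) for x in x_true]   # error variance 1
y      = [2 * x + random.gauss(0, 0.5) for x in x_true]

def slope(xs, ys):
    """Ordinary least-squares slope of a simple linear regression."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

b_obs = slope(x_obs, y)      # attenuated toward zero (about 1.0 here)
# Reliability ratio lambda = var(x_true) / var(x_obs) = 1 / (1 + 1);
# the correction divides the observed slope by lambda.
lam = 1.0 / (1.0 + 1.0)
b_corrected = b_obs / lam    # approximately recovers the true slope 2
```

With equal signal and error variance the reliability ratio is 0.5, so the observed slope is roughly half the true one until corrected.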

  8. The quality of the evidence base for clinical pathway effectiveness: room for improvement in the design of evaluation trials.

    PubMed

    Rotter, Thomas; Kinsman, Leigh; James, Erica; Machotta, Andreas; Steyerberg, Ewout W

    2012-06-18

    The purpose of this article is to report on the quality of the existing evidence base on clinical pathway (CPW) effectiveness in the hospital setting. The analysis is based on a recently published Cochrane review of the effectiveness of CPWs. An integral component of the review process was a rigorous appraisal of the methodological quality of published CPW evaluations. This allowed the identification of strengths and limitations of the evidence base for CPW effectiveness. We followed the validated Cochrane Effective Practice and Organisation of Care Group (EPOC) criteria for randomized and non-randomized clinical pathway evaluations. In addition, we tested the hypothesis that simple pre-post studies tend to overestimate reported CPW effects. Out of the 260 primary studies meeting CPW content criteria, only 27 studies met the EPOC study design criteria, with the majority of CPW studies (more than 70%) excluded from the review on the basis that they were simple pre-post evaluations, mostly comparing two or more annual patient cohorts. Methodologically poor study designs are often used to evaluate CPWs, and this compromises the quality of the existing evidence base. Cochrane EPOC methodological criteria, including the selection of rigorous study designs along with detailed descriptions of CPW development and implementation processes, are recommended for quantitative evaluations to improve the evidence base for the use of CPWs in hospitals.

  9. Simple Random Sampling-Based Probe Station Selection for Fault Detection in Wireless Sensor Networks

    PubMed Central

    Huang, Rimao; Qiu, Xuesong; Rui, Lanlan

    2011-01-01

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution, however, leads to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is out of balance, and this quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing with a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjusting rule for probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjusting rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate. PMID:22163789
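The core of the approach, rotating the probing burden by drawing a fresh simple random sample of probe stations each round, can be sketched with `random.sample`; the node IDs and sample size below are hypothetical:

```python
import random

random.seed(7)

def choose_probe_stations(node_ids, k):
    """Draw a simple random sample (without replacement) of k sensor
    nodes to act as probe stations this round, so the probing load
    rotates across the network instead of burdening one manager node."""
    return random.sample(node_ids, k)

nodes = list(range(100))            # hypothetical sensor-node IDs
round1 = choose_probe_stations(nodes, 5)
round2 = choose_probe_stations(nodes, 5)   # a fresh draw each round
```

Each draw is uniform over all nodes, so in expectation every node carries an equal share of the probing cost over many rounds.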

  10. Simple random sampling-based probe station selection for fault detection in wireless sensor networks.

    PubMed

    Huang, Rimao; Qiu, Xuesong; Rui, Lanlan

    2011-01-01

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution, however, leads to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is out of balance, and this quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing with a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjusting rule for probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjusting rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate.

  11. Analytical applications of aptamers

    NASA Astrophysics Data System (ADS)

    Tombelli, S.; Minunni, M.; Mascini, M.

    2007-05-01

    Aptamers are single-stranded DNA or RNA ligands which can be selected for different targets starting from a library of molecules containing randomly created sequences. Aptamers have been selected to bind very different targets, from proteins to small organic dyes. Aptamers are proposed with ever-increasing frequency as alternatives to antibodies as biorecognition elements in analytical devices, in order to satisfy the demand for quick, cheap, simple and highly reproducible analytical devices, especially for protein detection in the medical field or for the detection of smaller molecules in environmental and food analysis. In our recent experience, DNA and RNA aptamers, specific for three different proteins (Tat, IgE and thrombin), have been exploited as bio-recognition elements to develop specific biosensors (aptasensors). These recognition elements have been coupled to piezoelectric quartz crystals and surface plasmon resonance (SPR) devices as transducers, where the aptamers have been immobilized on the gold surface of the crystals' electrodes or on SPR chips, respectively.

  12. Large-area fluidic assembly of single-walled carbon nanotubes through dip-coating and directional evaporation

    NASA Astrophysics Data System (ADS)

    Kim, Pilnam; Kang, Tae June

    2017-12-01

    We present a simple and scalable fluidic-assembly approach, in which bundles of single-walled carbon nanotubes (SWCNTs) are selectively aligned and deposited by directionally controlled dip-coating and solvent evaporation processes. The patterned surface, with alternating regions of hydrophobic polydimethylsiloxane (PDMS) strips (height ~100 nm) and hydrophilic SiO2 substrate, was withdrawn vertically at a constant speed (~3 mm/min) from a solution bath containing SWCNTs (~0.1 mg/ml), allowing for directional evaporation and subsequent selective deposition of nanotube bundles along the edges of horizontally aligned PDMS strips. In addition, the fluidic assembly was applied to fabricate a field-effect transistor (FET) with highly oriented SWCNTs, which demonstrates significantly higher current density as well as a high turn-off ratio (T/O ratio ~100) compared to that with randomly distributed carbon nanotube bundles (T/O ratio <10).

  13. DNA methylation polymorphism in a set of elite rice cultivars and its possible contribution to inter-cultivar differential gene expression.

    PubMed

    Wang, Yongming; Lin, Xiuyun; Dong, Bo; Wang, Yingdian; Liu, Bao

    2004-01-01

    RAPD (randomly amplified polymorphic DNA) and ISSR (inter-simple sequence repeat) fingerprinting on HpaII/MspI-digested genomic DNA of nine elite japonica rice cultivars implies inter-cultivar DNA methylation polymorphism. Using both DNA fragments isolated from RAPD or ISSR gels and selected low-copy sequences as probes, methylation-sensitive Southern blot analysis confirms the existence of extensive DNA methylation polymorphism in both genes and DNA repeats among the rice cultivars. The cultivar-specific methylation patterns are stably maintained, and can be used as reliable molecular markers. Transcriptional analysis of four selected sequences (RdRP, AC9, HSP90 and MMR) on leaves and roots from normal and 5-azacytidine-treated seedlings of three representative cultivars shows an association between the transcriptional activity of one of the genes, the mismatch repair (MMR) gene, and its CG methylation patterns.

  14. SSRscanner: a program for reporting distribution and exact location of simple sequence repeats.

    PubMed

    Anwar, Tamanna; Khan, Asad U

    2006-02-20

    Simple sequence repeats (SSRs) have become important molecular markers for a broad range of applications, such as genome mapping and characterization, phenotype mapping, marker-assisted selection of crop plants and a range of molecular ecology and diversity studies. These repeated DNA sequences are found in both prokaryotes and eukaryotes. They are distributed almost at random throughout the genome, ranging from mononucleotide to trinucleotide repeats. They are also found as longer tracts (>6 repeating units). Most of the computer programs that find SSRs do not report their exact positions. A computer program, SSRscanner, was written to report the distribution, frequency and exact location of each SSR in the genome. SSRscanner is user friendly. It can search repeats of any length and produce outputs with their exact position on the chromosome and their frequency of occurrence in the sequence. This program has been written in PERL and is freely available for non-commercial users by request from the authors. Please contact the authors by E-mail: huzzi99@hotmail.com.
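A regex with a backreference is enough to sketch what such a scanner reports: the repeat unit, its exact (1-based) position, and the copy number. The unit lengths and minimum-repeat threshold below are illustrative, not SSRscanner's actual defaults:

```python
import re

def find_ssrs(seq, unit_lens=(1, 2, 3), min_repeats=4):
    """Report each simple sequence repeat with its exact 1-based position.

    The backreference (\\1) matches tandem copies of the captured unit;
    the unit lengths and repeat threshold here are illustrative.
    Returns (unit, start_1based, end, copy_number) tuples.
    """
    hits = []
    for u in unit_lens:
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (u, min_repeats - 1))
        for m in pattern.finditer(seq):
            unit = m.group(1)
            hits.append((unit, m.start() + 1, m.end(), len(m.group(0)) // u))
    return hits

ssrs = find_ssrs("GGATATATATATCCAAAAA")
# finds the (AT)5 dinucleotide tract and the (A)5 mononucleotide tract
```

A real scanner would also collapse repeats reported under more than one unit length and handle ambiguity codes, but the position reporting works the same way.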

  15. Technical Report 1205: A Simple Probabilistic Combat Model

    DTIC Science & Technology

    2016-07-08

    1. INTRODUCTION The Lanchester combat model is a simple way to assess the effects of quantity and quality...model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons assigned...the initial condition is very close to the break even line. What is more interesting is that the probability density tends to concentrate at either a

  16. Assessing the difficulty and time cost of de-identification in clinical narratives.

    PubMed

    Dorr, D A; Phillips, W F; Phansalkar, S; Sims, S A; Hurdle, J F

    2006-01-01

    To characterize the difficulty confronting investigators in removing protected health information (PHI) from cross-discipline, free-text clinical notes, an important challenge to clinical informatics research as recalibrated by the introduction of the US Health Insurance Portability and Accountability Act (HIPAA) and similar regulations. Randomized selection of clinical narratives from complete admissions written by diverse providers, reviewed using a two-tiered rater system and simple automated regular expression tools. For manual review, two independent reviewers used simple search and replace algorithms and visual scanning to find PHI as defined by HIPAA, followed by an independent second review to detect any missed PHI. Simple automated review was also performed for the "easy" PHI that are number- or date-based. From 262 notes, 2074 PHI, or 7.9 +/- 6.1 per note, were found. The average recall (or sensitivity) was 95.9% while precision was 99.6% for single reviewers. Agreement between individual reviewers was strong (ICC = 0.99), although some asymmetry in errors was seen between reviewers (p = 0.001). The automated technique had better recall (98.5%) but worse precision (88.4%) for its subset of identifiers. Manually de-identifying a note took 87.3 +/- 61 seconds on average. Manual de-identification of free-text notes is tedious and time-consuming, but even simple PHI is difficult to automatically identify with the exactitude required under HIPAA.
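The "simple automated review" for number- and date-based identifiers can be sketched with regular expressions. The patterns and tags below are illustrative and far from HIPAA-complete; names, addresses, and free-text identifiers need much more than this, which is the paper's point:

```python
import re

# Illustrative patterns for number- and date-based PHI only.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\bMRN[:\s]*\d+\b"), "[MRN]"),
]

def scrub(note):
    """Replace each matched identifier with a category tag."""
    for pattern, tag in PATTERNS:
        note = pattern.sub(tag, note)
    return note

clean = scrub("Seen 3/14/2006, MRN: 88121, callback 555-867-5309.")
```

This is the kind of automation that achieved high recall on "easy" PHI in the study while still missing the harder, narrative identifiers.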

  17. Two-way ANOVA Problems with Simple Numbers.

    ERIC Educational Resources Information Center

    Read, K. L. Q.; Shihab, L. H.

    1998-01-01

    Describes how to construct simple numerical examples in two-way ANOVAs, specifically randomized blocks, balanced two-way layouts, and Latin squares. Indicates that working through simple numerical problems is helpful to students meeting a technique for the first time and should be followed by computer-based analysis of larger, real datasets when…

  18. On the estimation variance for the specific Euler-Poincaré characteristic of random networks.

    PubMed

    Tscheschel, A; Stoyan, D

    2003-07-01

    The specific Euler number is an important topological characteristic in many applications. It is considered here for the case of random networks, which may appear in microscopy either as primary objects of investigation or as secondary objects describing in an approximate way other structures such as, for example, porous media. For random networks there is a simple and natural estimator of the specific Euler number. For its estimation variance, a simple Poisson approximation is given. It is based on the general exact formula for the estimation variance. In two examples of quite different nature and topology application of the formulas is demonstrated.

  19. Transcription, intercellular variability and correlated random walk.

    PubMed

    Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar

    2008-11-01

    We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is due to switching transcription on and off by a random process. Under the condition that the transition rates between on and off are constant we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.

  20. Differential evolution enhanced with multiobjective sorting-based mutation operators.

    PubMed

    Wang, Jiahai; Liao, Jianjun; Zhou, Ying; Cai, Yiqiao

    2014-12-01

    Differential evolution (DE) is a simple and powerful population-based evolutionary algorithm. The salient feature of DE lies in its mutation mechanism. Generally, the parents in the mutation operator of DE are randomly selected from the population. Hence, all vectors are equally likely to be selected as parents, without any selective pressure. Additionally, the diversity information is always ignored. In order to fully exploit the fitness and diversity information of the population, this paper presents a DE framework with a multiobjective sorting-based mutation operator. In the proposed mutation operator, individuals in the current population are first sorted according to their fitness and diversity contribution by nondominated sorting. Then parents in the mutation operator are selected proportionally according to their rankings based on fitness and diversity; thus, promising individuals with better fitness and diversity have more opportunity to be selected as parents. Since fitness and diversity information is simultaneously considered for parent selection, a good balance between exploration and exploitation can be achieved. The proposed operator is applied to original DE algorithms, as well as several advanced DE variants. Experimental results on 48 benchmark functions and 12 real-world application problems show that the proposed operator is an effective approach to enhance the performance of most DE algorithms studied.
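A sketch of rank-proportional parent selection plugged into the classic DE/rand/1 mutation. For brevity, the paper's nondominated sorting over fitness and diversity is collapsed here into a single fitness ranking, and the population and sphere objective are invented:

```python
import random

random.seed(3)

def rank_weights(fitness):
    """Selection weight per individual: the best (lowest fitness, since
    we minimize) gets weight n, the worst gets 1. A simplified stand-in
    for the paper's nondominated sorting over fitness and diversity."""
    n = len(fitness)
    order = sorted(range(n), key=lambda i: fitness[i])  # best first
    w = [0] * n
    for rank, idx in enumerate(order):
        w[idx] = n - rank
    return w

def select_parents(fitness, k=3):
    """Draw k distinct parent indices with probability proportional to rank."""
    w = rank_weights(fitness)
    parents = []
    while len(parents) < k:
        i = random.choices(range(len(fitness)), weights=w)[0]
        if i not in parents:
            parents.append(i)
    return parents

def de_rand_1(pop, fitness, F=0.5):
    """DE/rand/1 donor vector with rank-biased parent selection."""
    r1, r2, r3 = select_parents(fitness, 3)
    return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
            for d in range(len(pop[0]))]

pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
fitness = [x * x + y * y for x, y in pop]   # sphere function, minimized
donor = de_rand_1(pop, fitness)
```

Compared with uniform parent selection, the rank weights bias the donor vector toward promising regions while still giving every individual a nonzero chance.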

  1. School factors and smoking prevalence among high school students in Japan.

    PubMed

    Osaki, Y; Minowa, M

    1996-10-01

    The purpose of this study was to analyze the relationship between student smoking prevalence by school and school factors. Junior and senior high schools were selected from throughout Japan using simple random sampling. One hundred junior high schools and 50 senior high schools were randomly selected. Of these, 70 junior high schools (70%) and 33 senior high schools (66%) responded to this survey. Self-administered anonymous questionnaires were completed by all enrolled students in each school. The principal of each school completed a school questionnaire about school factors. The smoking rate of male teachers was significantly related to the student smoking rate in junior high schools. This factor was still associated with the student smoking rate after adjusting for family smoking status. Surprisingly, the smoking rates for junior high school boys in schools with a school policy against teachers smoking were higher than those of schools without one. The dropout rate and the proportion of students who went on to college were significantly related to the smoking rates among senior high school students of both sexes. The regular-smoker rate of boys in schools with health education on smoking tended to be lower. It is important to take account of school factors in designing smoking control programs for junior and senior high schools.

  2. The Self-Adapting Focused Review System. Probability sampling of medical records to monitor utilization and quality of care.

    PubMed

    Ash, A; Schwartz, M; Payne, S M; Restuccia, J D

    1990-11-01

    Medical record review is increasing in importance as the need to identify and monitor utilization and quality of care problems grows. To conserve resources, reviews are usually performed on a subset of cases. If judgment is used to identify subgroups for review, this raises the following questions: How should subgroups be determined, particularly since the locus of problems can change over time? What standard of comparison should be used in interpreting rates of problems found in subgroups? How can population problem rates be estimated from observed subgroup rates? How can the bias be avoided that arises because reviewers know that selected cases are suspected of having problems? How can changes in problem rates over time be interpreted when evaluating intervention programs? Simple random sampling, an alternative to subgroup review, overcomes the problems implied by these questions but is inefficient. The Self-Adapting Focused Review System (SAFRS), introduced and described here, provides an adaptive approach to record selection that is based upon model-weighted probability sampling. It retains the desirable inferential properties of random sampling while allowing reviews to be concentrated on cases currently thought most likely to be problematic. Model development and evaluation are illustrated using hospital data to predict inappropriate admissions.
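    The core idea of model-weighted probability sampling can be illustrated with a Hansen-Hurwitz-style estimator: oversample records the model flags as likely problems, then reweight each draw so the population problem-rate estimate stays unbiased. The weighting scheme and the noisy review model below are hypothetical, not the published SAFRS procedure.

```python
import random

def pps_estimate(records, score, n_draws, rng):
    """Estimate the population problem rate from draws made with
    probability proportional to a model score.  Each draw is reweighted
    by 1/(N * p_i), so concentrating review on suspect records does not
    bias the estimate (Hansen-Hurwitz-style estimator)."""
    N = len(records)
    w = [0.5 + score(r) for r in records]   # offset keeps every weight > 0
    total = sum(w)
    p = [x / total for x in w]              # per-draw selection probability
    draws = rng.choices(range(N), weights=p, k=n_draws)
    return sum(records[i]["problem"] / (N * p[i]) for i in draws) / n_draws

rng = random.Random(7)
# synthetic population: about 20% of records are true problems
records = [{"problem": 1 if rng.random() < 0.2 else 0} for _ in range(1000)]
true_rate = sum(r["problem"] for r in records) / len(records)

def model(r):
    # hypothetical review model: a noisy prediction of the true status
    return 0.8 * r["problem"] + 0.2 * rng.random()

est = pps_estimate(records, model, 5000, rng)
print(f"true rate {true_rate:.3f}, PPS estimate {est:.3f}")
```

    Problem records are drawn far more often than a simple random sample would draw them, yet the reweighted estimate still tracks the true rate.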

  3. Determinants of Prelacteal Feeding in Rural Northern India

    PubMed Central

    Roy, Manas Pratim; Mohan, Uday; Singh, Shivendra Kumar; Singh, Vijay Kumar; Srivastava, Anand Kumar

    2014-01-01

    Background: Prelacteal feeding is an underestimated problem in a developing country like India, where the infant mortality rate is quite high. The present study tried to find out the factors determining prelacteal feeding in rural areas of north India. Methods: A cross-sectional study was conducted among recently delivered women of rural Uttar Pradesh, India. Multistage random sampling was used for selecting villages. From them, 352 recently delivered women were selected as the subjects, following systematic random sampling. Chi-square test and logistic regression were used to find out the predictors for prelacteal feeding. Results: Overall, 40.1% of mothers gave prelacteal feeding to their newborn. Factors significantly associated with such practice, after simple logistic regression, were age, caste, socioeconomic status, and place of delivery. At the multivariate level, age (odds ratio (OR) = 1.76, 95% confidence interval (CI) = 1.13-2.74), caste and place of delivery (OR = 2.23, 95% CI = 1.21-4.10) were found to determine prelacteal feeding significantly, indicating that young age, high caste, and home delivery were positively associated with the practice. Conclusions: The problem of prelacteal feeding is still prevalent in rural India. Age, caste, and place of delivery were associated with the problem. For ensuring neonatal health, the problem should be addressed with due gravity, with emphasis on exclusive breast feeding. PMID:24932400

  4. Correlation between workplace and occupational burnout syndrome in nurses

    PubMed Central

    Ahmadi, Omid; Azizkhani, Reza; Basravi, Monem

    2014-01-01

    Background: This study was conducted to determine the effect of nurses' workplace on burnout syndrome among nurses working in Isfahan's Alzahra Hospital, a reference and typical university-affiliated hospital, in 2010. Materials and Methods: In this cross-sectional study, 100 nurses were randomly selected among those working in the emergency, orthopedic and dialysis wards and the intensive care unit (ICU). Required data on the occupational burnout rate among the nurses of these wards were collected using the standard, validated Maslach Burnout Inventory (MBI) questionnaire. Nurses were selected using simple random sampling. Results: Multivariate ANOVA showed that the mean occupational burnout values of nurses working in the orthopedic and dialysis wards were significantly less than those of nurses working in the emergency ward and ICU (P = 0.01). There was also no significant difference between the mean occupational burnout values of nurses working in the emergency ward and the ICU (P > 0.05). A t-test showed a difference between the occupational burnout values of men and women, with values for women higher than those for men (P = 0.001). Conclusion: Results showed that the mean occupational burnout values of nurses working in the emergency ward and ICU were significantly higher than those of nurses working in the orthopedic and dialysis wards. PMID:24627852

  5. Unraveling the non-senescence phenomenon in Hydra.

    PubMed

    Dańko, Maciej J; Kozłowski, Jan; Schaible, Ralf

    2015-10-07

    Unlike other metazoans, Hydra does not experience the distinctive rise in mortality with age known as senescence, which results from an increasing imbalance between cell damage and cell repair. We propose that the Hydra controls damage accumulation mainly through damage-dependent cell selection and cell sloughing. We examine our hypothesis with a model that combines cellular damage with stem cell renewal, differentiation, and elimination. The Hydra individual can be seen as a large single pool of three types of stem cells with some features of differentiated cells. This large stem cell community prevents "cellular damage drift," which is inevitable in complex conglomerate (differentiated) metazoans with numerous and generally isolated pools of stem cells. The process of cellular damage drift is based on changes in the distribution of damage among cells due to random events, and is thus similar to Muller's ratchet in asexual populations. Events in the model that are sources of randomness include budding, cellular death, and cellular damage and repair. Our results suggest that non-senescence is possible only in simple Hydra-like organisms which have a high proportion and number of stem cells, continuous cell divisions, an effective cell selection mechanism, and stem cells with the ability to undertake some roles of differentiated cells. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Stochastic noncooperative and cooperative evolutionary game strategies of a population of biological networks under natural selection.

    PubMed

    Chen, Bor-Sen; Yeh, Chin-Hsun

    2017-12-01

    We review current static and dynamic evolutionary game strategies of biological networks and discuss the lack of random genetic variations and stochastic environmental disturbances in these models. To include these factors, a population of evolving biological networks is modeled as a nonlinear stochastic biological system with Poisson-driven genetic variations and random environmental fluctuations (stimuli). To gain insight into the evolutionary game theory of stochastic biological networks under natural selection, the phenotypic robustness and network evolvability of noncooperative and cooperative evolutionary game strategies are discussed from a stochastic Nash game perspective. The noncooperative strategy can be transformed into an equivalent multi-objective optimization problem and is shown to display significantly improved network robustness to tolerate genetic variations and buffer environmental disturbances, maintaining phenotypic traits for longer than the cooperative strategy. However, the noncooperative case requires greater effort and more compromises between partly conflicting players. Global linearization is used to simplify the problem of solving nonlinear stochastic evolutionary games. Finally, a simple stochastic evolutionary model of a metabolic pathway is simulated to illustrate the procedure of solving for two evolutionary game strategies and to confirm and compare their respective characteristics in the evolutionary process. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Prevalence of cardiovascular risk factors amongst traders in an urban market in Lagos, Nigeria.

    PubMed

    Odugbemi, T O; Onajole, A T; Osibogun, A O

    2012-03-01

    A descriptive cross-sectional study was carried out to determine the prevalence of cardiovascular risk factors amongst traders in an urban market in Lagos State. Tejuosho market, one of the large popular markets, was selected by balloting (a simple random sampling technique) from a list of markets that met the inclusion criteria of being major markets dealing in general goods. Four hundred (400) traders were selected using systematic random sampling. Each trader was interviewed with a well-structured questionnaire and had blood pressure and anthropometric measurements (height, weight and body mass index) taken. Female traders made up 297 (74.3%) of the total population. The mean age was 45.48 ± 11.88 years for males and 42.29 ± 10.96 years for females. The majority, 239 (59.8%), fell within the age range of 35-55 years. The cardiovascular risk factors identified and their prevalence rates were hypertension (34.8%), physical inactivity (92%), previously diagnosed diabetes mellitus (0.8%), risky alcohol consumption (1%), cigarette smoking (0.3% in females and 17.5% in males), obesity (12.3%) and overweight (39.9%). The study recommended that any health-promoting, preventive or intervention programme for this population would have to be worked into their market activities if it is to make an impact.

  8. Synaptic Basis for Differential Orientation Selectivity between Complex and Simple Cells in Mouse Visual Cortex

    PubMed Central

    Li, Ya-tang; Liu, Bao-hua; Chou, Xiao-lin; Zhang, Li I.

    2015-01-01

    In the primary visual cortex (V1), orientation-selective neurons can be categorized into simple and complex cells primarily based on their receptive field (RF) structures. In mouse V1, although previous studies have examined the excitatory/inhibitory interplay underlying orientation selectivity (OS) of simple cells, the synaptic bases for that of complex cells have remained obscure. Here, by combining in vivo loose-patch and whole-cell recordings, we found that complex cells, identified by their overlapping on/off subfields, had significantly weaker OS than simple cells at both spiking and subthreshold membrane potential response levels. Voltage-clamp recordings further revealed that although excitatory inputs to complex and simple cells exhibited a similar degree of OS, inhibition in complex cells was more narrowly tuned than excitation, whereas in simple cells inhibition was more broadly tuned than excitation. The differential inhibitory tuning can primarily account for the difference in OS between complex and simple cells. Interestingly, the differential synaptic tuning correlated well with the spatial organization of synaptic input: the inhibitory visual RF in complex cells was more elongated in shape than its excitatory counterpart and also was more elongated than that in simple cells. Together, our results demonstrate that OS of complex and simple cells is differentially shaped by cortical inhibition based on its orientation tuning profile relative to excitation, which is contributed at least partially by the spatial organization of RFs of presynaptic inhibitory neurons. SIGNIFICANCE STATEMENT Simple and complex cells, two classes of principal neurons in the primary visual cortex (V1), are generally thought to be equally selective for orientation. In mouse V1, we report that complex cells, identified by their overlapping on/off subfields, have significantly weaker orientation selectivity (OS) than simple cells. This can be primarily attributed to the differential tuning selectivity of inhibitory synaptic input: inhibition in complex cells is more narrowly tuned than excitation, whereas in simple cells inhibition is more broadly tuned than excitation. In addition, there is a good correlation between inhibitory tuning selectivity and the spatial organization of inhibitory inputs. These complex and simple cells with differential degrees of OS may provide functionally distinct signals to different downstream targets. PMID:26245969

  9. Synaptic Basis for Differential Orientation Selectivity between Complex and Simple Cells in Mouse Visual Cortex.

    PubMed

    Li, Ya-tang; Liu, Bao-hua; Chou, Xiao-lin; Zhang, Li I; Tao, Huizhong W

    2015-08-05

    In the primary visual cortex (V1), orientation-selective neurons can be categorized into simple and complex cells primarily based on their receptive field (RF) structures. In mouse V1, although previous studies have examined the excitatory/inhibitory interplay underlying orientation selectivity (OS) of simple cells, the synaptic bases for that of complex cells have remained obscure. Here, by combining in vivo loose-patch and whole-cell recordings, we found that complex cells, identified by their overlapping on/off subfields, had significantly weaker OS than simple cells at both spiking and subthreshold membrane potential response levels. Voltage-clamp recordings further revealed that although excitatory inputs to complex and simple cells exhibited a similar degree of OS, inhibition in complex cells was more narrowly tuned than excitation, whereas in simple cells inhibition was more broadly tuned than excitation. The differential inhibitory tuning can primarily account for the difference in OS between complex and simple cells. Interestingly, the differential synaptic tuning correlated well with the spatial organization of synaptic input: the inhibitory visual RF in complex cells was more elongated in shape than its excitatory counterpart and also was more elongated than that in simple cells. Together, our results demonstrate that OS of complex and simple cells is differentially shaped by cortical inhibition based on its orientation tuning profile relative to excitation, which is contributed at least partially by the spatial organization of RFs of presynaptic inhibitory neurons. Simple and complex cells, two classes of principal neurons in the primary visual cortex (V1), are generally thought to be equally selective for orientation. In mouse V1, we report that complex cells, identified by their overlapping on/off subfields, have significantly weaker orientation selectivity (OS) than simple cells. This can be primarily attributed to the differential tuning selectivity of inhibitory synaptic input: inhibition in complex cells is more narrowly tuned than excitation, whereas in simple cells inhibition is more broadly tuned than excitation. In addition, there is a good correlation between inhibitory tuning selectivity and the spatial organization of inhibitory inputs. These complex and simple cells with differential degrees of OS may provide functionally distinct signals to different downstream targets. Copyright © 2015 the authors 0270-6474/15/3511081-13$15.00/0.

  10. Evolutionary constraints or opportunities?

    PubMed

    Sharov, Alexei A

    2014-09-01

    Natural selection is traditionally viewed as a leading factor of evolution, whereas variation is assumed to be random and non-directional. Any order in variation is attributed to epigenetic or developmental constraints that can hinder the action of natural selection. In contrast, I consider the positive role of epigenetic mechanisms in evolution because they provide organisms with opportunities for rapid adaptive change. Because the term "constraint" has negative connotations, I use the term "regulated variation" to emphasize the adaptive nature of phenotypic variation, which helps populations and species to survive and evolve in changing environments. The capacity to produce regulated variation is a phenotypic property, which is not described in the genome. Instead, the genome acts as a switchboard, where mostly random mutations switch "on" or "off" preexisting functional capacities of organism components. Thus, there are two channels of heredity: informational (genomic) and structure-functional (phenotypic). Functional capacities of organisms most likely emerged in a chain of modifications and combinations of simpler ancestral functions. The role of DNA has been to keep records of these changes (without describing the result) so that they can be reproduced in the following generations. Evolutionary opportunities include adjustments of individual functions, multitasking, connection between various components of an organism, and interaction between organisms. The adaptive nature of regulated variation can be explained by the differential success of lineages in macro-evolution. Lineages with more advantageous patterns of regulated variation are likely to produce more species and secure more resources (i.e., long-term lineage selection). Published by Elsevier Ireland Ltd.

  11. The impact of Islamic religious education on anxiety level in primipara mothers.

    PubMed

    Mokhtaryan, Tahereh; Yazdanpanahi, Zahra; Akbarzadeh, Marzieh; Amooee, Sedigheh; Zare, Najaf

    2016-01-01

    Anxiety is among the most common pregnancy complications. This study was conducted to examine the impact of religious teaching on anxiety in primiparous mothers referring to the selected perinatal clinics of Tehran University of Medical Sciences in 2013. This randomized clinical trial was conducted on pregnant women at 20-28 weeks of gestation referring to the selected clinics of Tehran University of Medical Sciences from July 2013 to June 2014. The subjects were selected through simple random sampling and divided into religious education and control groups. To assess the individuals, a demographic questionnaire, the State-Trait Anxiety Inventory and a religious knowledge and attitude questionnaire (pre-test, post-test, and 1 or 2 months after the test) were filled in by the two groups. Training classes (religious knowledge and attitude) for the cases were held over 6 weeks, and the sessions lasted 1½ h. The knowledge and attitude scores showed significant differences between the controls and cases after the intervention (P = 0.001) and 2 months after the study (P = 0.001). According to the results of the independent t-test, a significant difference was found in the state anxiety score (P = 0.002) and personal score (P = 0.0197) between the two groups before the intervention; however, the differences were strongly significant after the intervention and 2 months after the study (P ≤ 0.001). The improvement in the mothers' knowledge and attitude in religious subjects will reduce anxiety in primiparas.

  12. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    PubMed

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

    When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest, and sampling relevant objects at sites placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative for accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
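    Systematic random sampling as described above (one uniform random start, then equidistant sites across the region) can be sketched as follows; the region size and step lengths are arbitrary illustrative values, not anything from the paper:

```python
import random

def systematic_sites(width, height, step_x, step_y, rng):
    """Systematic random sampling over a rectangular region: draw one
    uniform random start inside the first sampling interval, then place
    sites at pre-determined, equidistant steps across the region."""
    x0 = rng.uniform(0, step_x)
    y0 = rng.uniform(0, step_y)
    xs = [x0 + i * step_x for i in range(int((width - x0) // step_x) + 1)]
    ys = [y0 + j * step_y for j in range(int((height - y0) // step_y) + 1)]
    return [(x, y) for y in ys for x in xs]

rng = random.Random(3)
# e.g. a 1000 x 800 um section sampled on a 150 um grid
sites = systematic_sites(1000.0, 800.0, 150.0, 150.0, rng)
print(len(sites), sites[0])
```

    Only the start point is random; every other site is fixed relative to it, which is what makes the design both unbiased and efficient.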

  13. The triglyceride composition of 17 seed fats rich in octanoic, decanoic, or lauric acid.

    PubMed

    Litchfield, C; Miller, E; Harlow, R D; Reiser, R

    1967-07-01

    Seed fats of eight species of Lauraceae (laurel family), six species of Cuphea (Lythraceae family), and three species of Ulmaceae (elm family) were extracted, and the triglycerides were isolated by preparative thin-layer chromatography. GLC of the triglycerides on a silicone column resolved 10 to 18 peaks with a 22 to 58 carbon number range for each fat. These carbon number distributions yielded considerable information about the triglyceride compositions of the fats. The most interesting finding was with Laurus nobilis seed fat, which contained 58.4% lauric acid and 29.2-29.8% trilaurin. A maximum of 19.9% trilaurin would be predicted by a 1,2,3-random, a 1,3-random-2-random, or a 1-random-2-random-3-random distribution of the lauric acid. This indicates a specificity for the biosynthesis of a simple triglyceride by Laurus nobilis seed enzymes. Cuphea lanceolata seed fat also contained more simple triglyceride (tridecanoin) than would be predicted by the fatty acid distribution theories.
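    The 19.9% maximum quoted above follows directly from the random-distribution theories: under a 1,2,3-random distribution each of the three glycerol positions is esterified independently with probability equal to the lauric acid mole fraction, so the expected trilaurin is simply that fraction cubed.

```python
# Predicted trilaurin under a fully random (1,2,3-random) acyl
# distribution: all three glycerol positions filled independently
# with probability equal to the lauric acid mole fraction.
lauric_fraction = 0.584               # Laurus nobilis seed fat, from the text
predicted_trilaurin = lauric_fraction ** 3
observed_trilaurin = 0.295            # midpoint of the reported 29.2-29.8%
print(f"random-theory prediction: {predicted_trilaurin:.1%}")  # 19.9%
print(f"observed: {observed_trilaurin:.1%}")
```

    The observed value is roughly 1.5 times the random prediction, which is the basis for the claimed biosynthetic specificity.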

  14. Legal rights to safe abortion: knowledge and attitude of women in North-West Ethiopia toward the current Ethiopian abortion law.

    PubMed

    Muzeyen, R; Ayichiluhm, M; Manyazewal, T

    2017-07-01

    To assess women's knowledge of and attitude toward the current Ethiopian abortion law. A quantitative, community-based cross-sectional survey. Women of reproductive age in three selected lower districts in Bahir Dar, North-West Ethiopia, were included. Multi-stage simple random sampling and simple random sampling were used to select the districts and respondents, respectively. Data were collected using a structured questionnaire comprising questions related to knowledge and attitude toward the legal status of abortion and the cases where abortion is currently allowed by law in Ethiopia. Descriptive statistics were used to summarize the data, and multivariable logistic regression was computed to assess the magnitude and significance of associations. Of 845 eligible women selected, 774 (92%) consented to participate and completed the interview. A total of 512 (66%) women were aware of the legal status of the Ethiopian abortion law, and their primary sources of information were electronic media such as television and radio (43%) followed by healthcare providers (38.7%). Among women with awareness of the law, 293 (57.2%) were poor in knowledge, 188 (36.7%) fairly knowledgeable, and 31 (6.1%) good in knowledge about the cases where abortion is allowed by law. Of the total 774 women included, 438 (56.5%) held a liberal and 336 (43.5%) a conservative attitude toward legalization of abortion. In the multivariable logistic regression, age had a significant association with knowledge, whereas occupation had a significant association with attitude toward the law. Women who had poor knowledge of the law were more likely to have a conservative attitude toward it (adjusted odds ratio, 0.40; 95% confidence interval, 0.23-0.61). Though the Ethiopian criminal code has legalized abortion under certain circumstances since 2005, a significant number of women knew little about the law and several protested legalization of abortion. 
    Countries such as Ethiopia with high maternal mortality records need to implement high-impact interventions that would encourage women to understand and exercise their legal rights to safe abortion and other reproductive health services. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  15. Models of the Protocellular Structures, Functions and Evolution

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; New, Michael; Keefe, Anthony; Szostak, Jack W.; Lanyi, Janos F.; DeVincenzi, Donald L. (Technical Monitor)

    2000-01-01

    In the absence of an extinct or extant record of protocells, the most direct way to test our understanding of the origin of cellular life is to construct laboratory models that capture important features of protocellular systems. Such efforts are currently underway in a collaborative project between NASA-Ames, Harvard Medical School and the University of California. They are accompanied by computational studies aimed at explaining the self-organization of simple molecules into ordered structures. The centerpiece of this project is a method for the in vitro evolution of protein enzymes toward arbitrary catalytic targets. A similar approach has already been developed for nucleic acids: First, a very large population of candidate molecules is generated using a random synthetic approach. Next, the small number of molecules that can accomplish the desired task are selected. These molecules are then vastly multiplied using the polymerase chain reaction. A mutagenic approach, in which the sequences of selected molecules are randomly altered, can yield further improvements in performance or alterations of specificities. Unfortunately, the catalytic potential of nucleic acids is rather limited. Proteins are more catalytically capable but cannot be directly amplified. In the new technique, this problem is circumvented by covalently linking each protein of the initial, diverse, pool to the RNA sequence that codes for it. Then, selection is performed on the proteins, but the nucleic acids are replicated. To date, we have obtained "a proof of concept" by evolving simple, novel proteins capable of selectively binding adenosine tri-phosphate (ATP). Our next goal is to create an enzyme that can phosphorylate amino acids and another to catalyze the formation of peptide bonds in the absence of nucleic acid templates. This latter reaction does not take place in contemporary cells. 
    Once developed, these enzymes will be encapsulated in liposomes so that they will function in a simulated cellular environment. To provide a continuous energy supply, usually needed to activate the substrates, an energy transduction complex which generates ATP from adenosine diphosphate, inorganic phosphate and light will be used. This system, consisting of two modern proteins, ATP synthase and bacteriorhodopsin, has already been built and shown to work efficiently. By coupling chemical synthesis to such a system, it will be possible to drive chemical reactions by light as long as the substrates for these reactions are supplied.

  16. Anticipatory smooth eye movements with random-dot kinematograms

    PubMed Central

    Santos, Elio M.; Gnang, Edinah K.; Kowler, Eileen

    2012-01-01

    Anticipatory smooth eye movements were studied in response to expectations of motion of random-dot kinematograms (RDKs). Dot lifetime was limited (52–208 ms) to prevent selection and tracking of the motion of local elements and to disrupt the perception of an object moving across space. Anticipatory smooth eye movements were found in response to cues signaling the future direction of global RDK motion, either prior to the onset of the RDK or prior to a change in its direction of motion. Cues signaling the lifetime of the dots were not effective. These results show that anticipatory smooth eye movements can be produced by expectations of global motion and do not require a sustained representation of an object or set of objects moving across space. At the same time, certain properties of global motion (direction) were more sensitive to cues than others (dot lifetime), suggesting that the rules by which prediction operates to influence pursuit may go beyond simple associations between cues and the upcoming motion of targets. PMID:23027686

  17. Use of virtual reality intervention to improve reaction time in children with cerebral palsy: A randomized controlled trial.

    PubMed

    Pourazar, Morteza; Mirakhori, Fatemeh; Hemayattalab, Rasool; Bagherzadeh, Fazlolah

    2017-09-21

    The purpose of this study was to investigate the training effects of a Virtual Reality (VR) intervention program on reaction time in children with cerebral palsy. Thirty boys ranging from 7 to 12 years (mean = 11.20; SD = 0.76) were selected by convenience sampling and randomly divided into the experimental and control groups. Simple Reaction Time (SRT) and Discriminative Reaction Time (DRT) were measured at baseline and 1 day after completion of the VR intervention. Multivariate analysis of variance (MANOVA) and paired sample t-tests were performed to analyze the results. The MANOVA test revealed significant effects for group in the posttest phase, with lower reaction times in both measures for the experimental group. Based on the paired sample t-test results, both RT measures improved significantly in the experimental group following the VR intervention program. This paper proposes VR as a promising tool in the rehabilitation process for improving reaction time in children with cerebral palsy.

  18. Construction, Characterization, and Preliminary BAC-End Sequence Analysis of a Bacterial Artificial Chromosome Library of the Tea Plant (Camellia sinensis)

    PubMed Central

    Lin, Jinke; Kudrna, Dave; Wing, Rod A.

    2011-01-01

    We describe the construction and characterization of a publicly available BAC library for the tea plant, Camellia sinensis. Using modified methods, the library was constructed with the aim of developing public molecular resources to advance tea plant genomics research. The library consists of a total of 401,280 clones with an average insert size of 135 kb, providing an approximate coverage of 13.5 haploid genome equivalents. No empty vector clones were observed in a random sampling of 576 BAC clones. Further analysis of 182 BAC-end sequences from randomly selected clones revealed a GC content of 40.35% and low chloroplast and mitochondrial contamination. Repetitive sequence analyses indicated that LTR retrotransposons were the most predominant sequence class (86.93%–87.24%), followed by DNA transposons (11.16%–11.69%). Additionally, we found 25 simple sequence repeats (SSRs) that could potentially be used as genetic markers. PMID:21234344
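    The reported 13.5× coverage is consistent with the standard formula: coverage = number of clones × mean insert size / haploid genome size. The ~4 Gb genome size used below is an assumption back-computed from the abstract's own figures, not a value stated in it.

```python
# BAC library genome coverage: clones * mean insert size / genome size.
# The ~4 Gb haploid genome size is an assumption inferred from the
# reported 13.5x coverage, not a figure given in the abstract.
clones = 401_280
insert_bp = 135_000       # 135 kb average insert
genome_bp = 4.0e9         # assumed haploid genome size (~4 Gb)
coverage = clones * insert_bp / genome_bp
print(f"{coverage:.1f}x")  # prints 13.5x
```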

  19. Reducing DNA context dependence in bacterial promoters

    PubMed Central

    Carr, Swati B.; Densmore, Douglas M.

    2017-01-01

    Variation in the DNA sequence upstream of bacterial promoters is known to affect the expression levels of the products they regulate, sometimes dramatically. While neutral synthetic insulator sequences have been found to buffer promoters from upstream DNA context, there are no established methods for designing effective insulator sequences with predictable effects on expression levels. We address this problem with Degenerate Insulation Screening (DIS), a novel method based on a randomized 36-nucleotide insulator library and a simple, high-throughput, flow-cytometry-based screen that randomly samples from a library of 4^36 potential insulated promoters. The results of this screen can then be compared against a reference uninsulated device to select a set of insulated promoters providing a precise level of expression. We verify this method by insulating the constitutive, inducible, and repressible promoters of a four-transcriptional-unit inverter (NOT gate) circuit, finding both that order dependence is largely eliminated by insulation and that circuit performance is also significantly improved, with a 5.8-fold mean improvement in on/off ratio. PMID:28422998
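    A fully degenerate 36-nucleotide insulator admits 4^36 possible sequences, far too many to enumerate, which is why the method samples the library randomly. A quick back-of-the-envelope (the 10^6 screened clones is a hypothetical figure, not from the paper) shows how sparsely any screen covers this space:

```python
# Size of a fully degenerate 36-mer (N36) insulator library:
# four possible bases at each of 36 positions.
library_size = 4 ** 36
screened = 10 ** 6        # hypothetical number of clones actually screened
fraction = screened / library_size
print(f"{library_size:.2e} sequences; fraction screened {fraction:.1e}")
```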

  20. Prediction of large negative shaded-side spacecraft potentials

    NASA Technical Reports Server (NTRS)

    Prokopenko, S. M. L.; Laframboise, J. G.

    1977-01-01

    A calculation by Knott, for the floating potential of a spherically symmetric synchronous-altitude satellite in eclipse, was adapted to provide simple calculations of upper bounds on negative potentials which may be achieved by electrically isolated shaded surfaces on spacecraft in sunlight. Large (approximately 60 percent) increases in predicted negative shaded-side potentials are obtained. To investigate effective potential barrier or angular momentum selection effects due to the presence of less negative sunlit-side or adjacent surface potentials, these expressions were replaced by the ion random current, which is a lower bound for convex surfaces when such effects become very severe. Further large increases in predicted negative potentials were obtained, amounting to a doubling in some cases.

  1. PCANet: A Simple Deep Learning Baseline for Image Classification?

    PubMed

    Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi

    2015-12-01

    In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
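
    The first PCANet stage described above (collect patches, remove the patch mean, keep the leading PCA eigenvectors as filters) can be sketched in a few lines of NumPy. The patch size, filter count, and random images below are illustrative assumptions, and non-overlapping patches are used for brevity; this is a sketch of the idea, not the authors' implementation.

```python
import numpy as np

def pca_filters(images, k=7, n_filters=8):
    """Learn one PCANet-style stage: gather k x k patches, remove each
    patch's mean, and keep the leading PCA eigenvectors as filters."""
    patches = []
    for img in images:
        h, w = img.shape
        for i in range(0, h - k + 1, k):       # non-overlapping for brevity
            for j in range(0, w - k + 1, k):
                p = img[i:i+k, j:j+k].ravel()
                patches.append(p - p.mean())   # per-patch mean removal
    X = np.array(patches)                      # (n_patches, k*k)
    # Right singular vectors of the patch matrix = PCA filter bank
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_filters].reshape(n_filters, k, k)

rng = np.random.default_rng(0)
imgs = rng.standard_normal((10, 28, 28))       # fabricated "images"
filters = pca_filters(imgs)
print(filters.shape)  # (8, 7, 7)
```

    In the full architecture, the images would then be convolved with these filters, and a second PCA stage, binary hashing, and blockwise histograms would follow.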

  2. SU-F-R-33: Can CT and CBCT Be Used Simultaneously for Radiomics Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, R; Wang, J; Zhong, H

    2016-06-15

    Purpose: To investigate whether CBCT and CT can be used simultaneously in radiomics analysis, and to establish a batch correction method for radiomics features across two similar image modalities. Methods: Four sites (rectum, bladder, femoral head, and lung) were considered as regions of interest (ROIs) in this study. For each site, 10 treatment planning CT images were collected, and 10 CBCT images of the same sites in the same patients were acquired at the first radiotherapy fraction. 253 radiomics features, selected in our test-retest study of rectal cancer CT (ICC>0.8), were calculated for both CBCT and CT images in MATLAB. Simple scaling (z-score) and nonlinear correction methods were applied to the CBCT radiomics features. The Pearson correlation coefficient was calculated to analyze the correlation between radiomics features of CT and CBCT images before and after correction. Cluster analysis of mixed data (for each site, 5 CT and 5 CBCT datasets randomly selected) was implemented to validate the feasibility of merging radiomics data from CBCT and CT. The consistency between the clustering result and the site grouping was verified by a chi-square test for each dataset. Results: For simple scaling, 234 of the 253 features had correlation coefficient ρ>0.8, among which 154 features had ρ>0.9. For radiomics data after nonlinear correction, 240 of the 253 features had ρ>0.8, among which 220 features had ρ>0.9. Cluster analysis of mixed data showed that the data of the four sites were almost precisely separated for simple scaling (p=1.29 × 10⁻⁷, χ² test) and nonlinear correction (p=5.98 × 10⁻⁷, χ² test), similar to the cluster result for CT data alone (p=4.52 × 10⁻⁸, χ² test). Conclusion: Radiomics data from CBCT can be merged with those from CT by simple scaling or nonlinear correction for radiomics analysis.

  3. Random Item Generation Is Affected by Age

    ERIC Educational Resources Information Center

    Multani, Namita; Rudzicz, Frank; Wong, Wing Yiu Stephanie; Namasivayam, Aravind Kumar; van Lieshout, Pascal

    2016-01-01

    Purpose: Random item generation (RIG) involves central executive functioning. Measuring aspects of random sequences can therefore provide a simple method to complement other tools for cognitive assessment. We examine the extent to which RIG relates to specific measures of cognitive function, and whether those measures can be estimated using RIG…

  4. The impact of facecards on patients' knowledge, satisfaction, trust, and agreement with hospital physicians: a pilot study.

    PubMed

    Simons, Yael; Caprio, Timothy; Furiasse, Nicholas; Kriss, Michael; Williams, Mark V; O'Leary, Kevin J

    2014-03-01

    Simple interventions such as facecards can improve patients' knowledge of names and roles of hospital physicians, but the effect on other aspects of the patient-physician relationship is not clear. To pilot an intervention to improve familiarity with physicians and assess its potential to improve patients' satisfaction, trust, and agreement with physicians. Cluster randomized controlled trial assessing the impact of physician facecards. Physician facecards included pictures of physicians and descriptions of their roles. We performed structured interviews of randomly selected patients to assess outcomes. One of 2 similar hospitalist units and 1 of 2 teaching-service units in a large teaching hospital were randomly selected to implement the intervention. Satisfaction with physician communication and overall hospital care was assessed using the Hospital Consumer Assessment of Healthcare Providers and Systems. Trust and agreement were each assessed through instruments used in prior research. Overall, 138 patients completed interviews, with no differences in age, sex, or race between those receiving facecards and those not. More patients who received facecards correctly identified ≥1 hospital physician (89.1% vs 51.1%; P < 0.01) and their role (67.4% vs 16.3%; P < 0.01) than patients who had not received facecards. Patients had high baseline levels of satisfaction, trust, and agreement with hospital physicians, and we found no significant differences with the use of facecards. Physician facecards improved patients' knowledge of the names and roles of hospital physicians. Larger studies are needed to assess the impact on satisfaction, trust, and agreement with physicians. © 2013 Society of Hospital Medicine.

  5. New milk protein-derived peptides with potential antimicrobial activity: an approach based on bioinformatic studies.

    PubMed

    Dziuba, Bartłomiej; Dziuba, Marta

    2014-08-20

    New peptides with potential antimicrobial activity, encrypted in milk protein sequences, were searched for with the use of bioinformatic tools. The major milk proteins were hydrolyzed in silico by 28 enzymes. The obtained peptides were characterized by the following parameters: molecular weight, isoelectric point, composition and number of amino acid residues, net charge at pH 7.0, aliphatic index, instability index, Boman index, and GRAVY index, and compared with those calculated for 416 known antimicrobial peptides, including 59 antimicrobial peptides (AMPs) from milk proteins listed in the BIOPEP database. A simple analysis of physico-chemical properties and the values of biological activity indicators were insufficient to select potentially antimicrobial peptides released in silico from milk proteins by proteolytic enzymes. The final selection was made based on the results of multidimensional statistical analyses such as support vector machines (SVM), random forest (RF), artificial neural networks (ANN) and discriminant analysis (DA) available in the Collection of Anti-Microbial Peptides (CAMP database). Eleven new peptides with potential antimicrobial activity were selected from all peptides released during in silico proteolysis of milk proteins.

  6. New Milk Protein-Derived Peptides with Potential Antimicrobial Activity: An Approach Based on Bioinformatic Studies

    PubMed Central

    Dziuba, Bartłomiej; Dziuba, Marta

    2014-01-01

    New peptides with potential antimicrobial activity, encrypted in milk protein sequences, were searched for with the use of bioinformatic tools. The major milk proteins were hydrolyzed in silico by 28 enzymes. The obtained peptides were characterized by the following parameters: molecular weight, isoelectric point, composition and number of amino acid residues, net charge at pH 7.0, aliphatic index, instability index, Boman index, and GRAVY index, and compared with those calculated for 416 known antimicrobial peptides, including 59 antimicrobial peptides (AMPs) from milk proteins listed in the BIOPEP database. A simple analysis of physico-chemical properties and the values of biological activity indicators were insufficient to select potentially antimicrobial peptides released in silico from milk proteins by proteolytic enzymes. The final selection was made based on the results of multidimensional statistical analyses such as support vector machines (SVM), random forest (RF), artificial neural networks (ANN) and discriminant analysis (DA) available in the Collection of Anti-Microbial Peptides (CAMP database). Eleven new peptides with potential antimicrobial activity were selected from all peptides released during in silico proteolysis of milk proteins. PMID:25141106

  7. A transposase strategy for creating libraries of circularly permuted proteins.

    PubMed

    Mehta, Manan M; Liu, Shirley; Silberg, Jonathan J

    2012-05-01

    A simple approach for creating libraries of circularly permuted proteins is described that is called PERMutation Using Transposase Engineering (PERMUTE). In PERMUTE, the transposase MuA is used to randomly insert a minitransposon that can function as a protein expression vector into a plasmid that contains the open reading frame (ORF) being permuted. A library of vectors that express different permuted variants of the ORF-encoded protein is created by: (i) using bacteria to select for target vectors that acquire an integrated minitransposon; (ii) excising the ensemble of ORFs that contain an integrated minitransposon from the selected vectors; and (iii) circularizing the ensemble of ORFs containing integrated minitransposons using intramolecular ligation. Construction of a Thermotoga neapolitana adenylate kinase (AK) library using PERMUTE revealed that this approach produces vectors that express circularly permuted proteins with distinct sequence diversity from existing methods. In addition, selection of this library for variants that complement the growth of Escherichia coli with a temperature-sensitive AK identified functional proteins with novel architectures, suggesting that PERMUTE will be useful for the directed evolution of proteins with new functions.

  8. A transposase strategy for creating libraries of circularly permuted proteins

    PubMed Central

    Mehta, Manan M.; Liu, Shirley; Silberg, Jonathan J.

    2012-01-01

    A simple approach for creating libraries of circularly permuted proteins is described that is called PERMutation Using Transposase Engineering (PERMUTE). In PERMUTE, the transposase MuA is used to randomly insert a minitransposon that can function as a protein expression vector into a plasmid that contains the open reading frame (ORF) being permuted. A library of vectors that express different permuted variants of the ORF-encoded protein is created by: (i) using bacteria to select for target vectors that acquire an integrated minitransposon; (ii) excising the ensemble of ORFs that contain an integrated minitransposon from the selected vectors; and (iii) circularizing the ensemble of ORFs containing integrated minitransposons using intramolecular ligation. Construction of a Thermotoga neapolitana adenylate kinase (AK) library using PERMUTE revealed that this approach produces vectors that express circularly permuted proteins with distinct sequence diversity from existing methods. In addition, selection of this library for variants that complement the growth of Escherichia coli with a temperature-sensitive AK identified functional proteins with novel architectures, suggesting that PERMUTE will be useful for the directed evolution of proteins with new functions. PMID:22319214

  9. Estimating the price elasticity of beer: meta-analysis of data with heterogeneity, dependence, and publication bias.

    PubMed

    Nelson, Jon P

    2014-01-01

    Precise estimates of price elasticities are important for alcohol tax policy. Using meta-analysis, this paper corrects average beer elasticities for heterogeneity, dependence, and publication selection bias. A sample of 191 estimates is obtained from 114 primary studies. Simple and weighted means are reported. Dependence is addressed by restricting number of estimates per study, author-restricted samples, and author-specific variables. Publication bias is addressed using funnel graph, trim-and-fill, and Egger's intercept model. Heterogeneity and selection bias are examined jointly in meta-regressions containing moderator variables for econometric methodology, primary data, and precision of estimates. Results for fixed- and random-effects regressions are reported. Country-specific effects and sample time periods are unimportant, but several methodology variables help explain the dispersion of estimates. In models that correct for selection bias and heterogeneity, the average beer price elasticity is about -0.20, which is less elastic by 50% compared to values commonly used in alcohol tax policy simulations. Copyright © 2013 Elsevier B.V. All rights reserved.
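
    The Egger intercept test mentioned above can be sketched as a regression of each study's standardized effect on its precision, where an intercept far from zero suggests small-study/publication bias. The elasticity estimates and standard errors below are hypothetical, purely for illustration.

```python
import numpy as np

# Hypothetical study-level effect estimates (price elasticities) and SEs
effects = np.array([-0.15, -0.22, -0.30, -0.18, -0.45, -0.25])
se = np.array([0.05, 0.08, 0.12, 0.06, 0.20, 0.10])

# Egger's regression: standardized effect (effect / SE) vs. precision (1 / SE)
precision = 1.0 / se
snd = effects / se
X = np.column_stack([np.ones_like(precision), precision])
beta, *_ = np.linalg.lstsq(X, snd, rcond=None)
intercept, slope = beta          # slope: bias-adjusted effect; intercept: asymmetry
print(round(float(slope), 3))
```

    In a full meta-analysis the regression would be weighted and combined with trim-and-fill and funnel-plot inspection, as the abstract describes.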

  10. A simple method for assessing occupational exposure via the one-way random effects model.

    PubMed

    Krishnamoorthy, K; Mathew, Thomas; Peng, Jie

    2016-11-01

    A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
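
    The MOVER construction referred to here can be illustrated for the sum of two parameters using Zou's square-and-add combination of the individual confidence limits. This is a sketch of the general recipe, not the authors' specific exposure formulas, and the numbers are hypothetical.

```python
import math

def mover_sum(est1, ci1, est2, ci2):
    """MOVER confidence interval for theta1 + theta2: combine the
    individual intervals (l_i, u_i) around the point estimates by
    adding squared distances to the limits (Zou's method)."""
    l1, u1 = ci1
    l2, u2 = ci2
    point = est1 + est2
    lower = point - math.sqrt((est1 - l1) ** 2 + (est2 - l2) ** 2)
    upper = point + math.sqrt((u1 - est1) ** 2 + (u2 - est2) ** 2)
    return lower, upper

# Hypothetical components, e.g. a mean plus a scaled SD on the log scale
lo, hi = mover_sum(2.0, (1.5, 2.6), 0.8, (0.5, 1.3))
print(round(lo, 3), round(hi, 3))
```

    The appeal of the approach is that each component interval can come from whatever exact or approximate method suits that parameter, and the closed-form combination avoids simulation.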

  11. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in samples sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
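
    The contrast between the two designs can be sketched numerically: stratified sampling with Neyman (optimal) allocation spreads a fixed total sample across strata in proportion to N_h·S_h, while a simple random sample size follows from the overall SD and a precision target. The strata sizes, SDs, and precision values below are hypothetical.

```python
import math

# Hypothetical strata for one soil-moisture layer: sizes and SDs
N_h = [120, 80, 50]     # stratum sizes (number of sampling cells)
S_h = [4.0, 2.5, 1.5]   # stratum standard deviations (% moisture)
n = 30                  # total samples that can be prespecified

# Neyman (optimal) allocation: n_h proportional to N_h * S_h
weights = [N * S for N, S in zip(N_h, S_h)]
total = sum(weights)
n_h = [round(n * w / total) for w in weights]
print(n_h)              # more samples go to large, variable strata

# Simple random sampling: size for margin of error d at ~95% confidence
S, d, z = 3.2, 1.0, 1.96
n_srs = math.ceil((z * S / d) ** 2)
print(n_srs)
```

    This mirrors the paper's finding that stratification pays off most when strata differ strongly in variability, as with moisture at different depths.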

  12. Modified signal-to-noise: a new simple and practical gene filtering approach based on the concept of projective adaptive resonance theory (PART) filtering method.

    PubMed

    Takahashi, Hiro; Honda, Hiroyuki

    2006-07-01

    Considering the recent advances in and the benefits of DNA microarray technologies, many gene filtering approaches have been employed for the diagnosis and prognosis of diseases. In our previous study, we developed a new filtering method, the projective adaptive resonance theory (PART) filtering method, which was effective in subclass discrimination. In the PART algorithm, genes with a low variance in expression in either class, not necessarily both classes, are selected as important genes for modeling. Based on this concept, we here developed novel simple filtering methods such as the modified signal-to-noise ratio (S2N'). The discrimination models constructed using these methods showed higher accuracy with higher reproducibility than many conventional filtering methods, including the t-test, S2N, NSC and SAM. Reproducibility of prediction was evaluated from the correlation between the sets of U-test p-values on randomly divided datasets. For leukemia, lymphoma and breast cancer, the correlation was high: a difference of >0.13 was obtained by the model constructed using <50 genes selected by S2N'. The improvement was greater for smaller gene sets than with the t-test, NSC and SAM. These results suggest that such modified methods, for example S2N', have high potential as new marker gene selection methods in cancer diagnosis using DNA microarray data. Software is available upon request.
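
    The classic signal-to-noise statistic that S2N' builds on ranks each gene by the class-mean difference scaled by the summed class SDs. A minimal sketch on synthetic data follows; the expression matrix and labels are fabricated for illustration, and the published S2N' modification itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Fabricated expression matrix: 100 genes x 20 samples, two classes of 10
expr = rng.standard_normal((100, 20))
expr[0, :10] += 5.0                 # plant one clearly discriminative gene
labels = np.array([0] * 10 + [1] * 10)

a, b = expr[:, labels == 0], expr[:, labels == 1]
# Classic signal-to-noise ratio per gene: (mu0 - mu1) / (sd0 + sd1)
s2n = (a.mean(1) - b.mean(1)) / (a.std(1, ddof=1) + b.std(1, ddof=1))
top = np.argsort(-np.abs(s2n))[:5]  # top-ranked marker candidates
print(top.tolist())
```

    The PART-inspired modification would additionally favor genes whose variance is low in at least one class, the idea being that such genes separate subclasses cleanly.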

  13. Transfer and alignment of random single-walled carbon nanotube films by contact printing.

    PubMed

    Liu, Huaping; Takagi, Daisuke; Chiashi, Shohei; Homma, Yoshikazu

    2010-02-23

    We present a simple method to transfer large-area random single-walled carbon nanotube (SWCNT) films grown on SiO(2) substrates onto another surface through a simple contact printing process. The transferred random SWCNT films can be assembled into highly ordered, dense regular arrays with high uniformity and reproducibility by sliding the growth substrate during the transfer process. The position of the transferred SWCNT film can be controlled by predefined patterns on the receiver substrates. The process is compatible with a variety of substrates, and even metal meshes for transmission electron microscopy (TEM) can be used as receiver substrates. Thus, suspended web-like SWCNT networks and aligned SWCNT arrays can be formed over the grids of TEM meshes, so that the structures of the transferred SWCNTs can be directly observed by TEM. This simple technique can be used to controllably transfer SWCNTs for property studies, for the fabrication of devices, or even as support films for TEM meshes.

  14. Chinese Herbal Bath Therapy for the Treatment of Knee Osteoarthritis: Meta-Analysis of Randomized Controlled Trials

    PubMed Central

    Chen, Bo; Zhan, Hongsheng; Chung, Mei; Lin, Xun; Zhang, Min; Pang, Jian; Wang, Chenchen

    2015-01-01

    Objective. Chinese herbal bath therapy (CHBT) has traditionally been considered to have analgesic and anti-inflammatory effects. We conducted the first meta-analysis evaluating its benefits for patients with knee osteoarthritis (OA). Methods. We searched three English and four Chinese databases through October, 2014. Randomized trials evaluating at least 2 weeks of CHBT for knee OA were selected. The effects of CHBT on clinical symptoms included both pain level (via the visual analog scale) and total effectiveness rate, which assessed pain, physical performance, and wellness. We performed random-effects meta-analyses using mean difference. Results. Fifteen studies totaling 1618 subjects met eligibility criteria. Bath prescription included, on average, 13 Chinese herbs with directions to steam and wash around the knee for 20–40 minutes once or twice daily. Mean treatment duration was 3 weeks. Results from meta-analysis showed superior pain improvement (mean difference = −0.59 points; 95% confidence intervals [CI], −0.83 to −0.36; p < 0.00001) and higher total effectiveness rate (risk ratio = 1.21; 95% CI, 1.15 to 1.28; p < 0.00001) when compared with standard western treatment. No serious adverse events were reported. Conclusion. Chinese herbal bath therapy may be a safe, effective, and simple alternative treatment modality for knee OA. Further rigorously designed, randomized trials are warranted. PMID:26483847

  15. Distributed clone detection in static wireless sensor networks: random walk with network division.

    PubMed

    Khan, Wazir Zada; Aalsalem, Mohammed Y; Saad, N M

    2015-01-01

    Wireless Sensor Networks (WSNs) are vulnerable to clone (node replication) attacks because they are deployed in hostile and unattended environments where sensor nodes lack physical protection and tamper resistance. As a result, an adversary can easily capture and compromise sensor nodes and, after replicating them, insert an arbitrary number of clones/replicas into the network. If these clones are not efficiently detected, the adversary is further capable of mounting a wide variety of internal attacks that can undermine the various protocols and sensor applications. Several solutions have been proposed in the literature to address the crucial problem of clone detection, but they suffer from serious drawbacks. In this paper we propose a novel distributed solution, Random Walk with Network Division (RWND), for the detection of node replication attacks in static WSNs; it is based on the claimer-reporter-witness framework and combines a simple random walk with network division. The network is split into levels and areas, and a random walk is employed within each area for the selection of witness nodes; this makes clone detection more efficient and ensures high security of witness nodes with moderate communication and memory overheads. Our simulation results show that RWND outperforms existing witness-node-based strategies with moderate communication and memory overheads.

  16. Diversity analysis in Cannabis sativa based on large-scale development of expressed sequence tag-derived simple sequence repeat markers.

    PubMed

    Gao, Chunsheng; Xin, Pengfei; Cheng, Chaohua; Tang, Qing; Chen, Ping; Wang, Changbiao; Zang, Gonggu; Zhao, Lining

    2014-01-01

    Cannabis sativa L. is an important economic plant for the production of food, fiber, oils, and intoxicants. However, lack of sufficient simple sequence repeat (SSR) markers has limited the development of cannabis genetic research. Here, large-scale development of expressed sequence tag simple sequence repeat (EST-SSR) markers was performed to obtain more informative genetic markers, and to assess genetic diversity in cannabis (Cannabis sativa L.). Based on the cannabis transcriptome, 4,577 SSRs were identified from 3,624 ESTs. From there, a total of 3,442 complementary primer pairs were designed as SSR markers. Among these markers, trinucleotide repeat motifs (50.99%) were the most abundant, followed by hexanucleotide (25.13%), dinucleotide (16.34%), tetranucleotide (3.8%), and pentanucleotide (3.74%) repeat motifs, respectively. The AAG/CTT trinucleotide repeat (17.96%) was the most abundant motif detected in the SSRs. One hundred and seventeen EST-SSR markers were randomly selected to evaluate primer quality in 24 cannabis varieties. Among these 117 markers, 108 (92.31%) were successfully amplified and 87 (74.36%) were polymorphic. Forty-five polymorphic primer pairs were selected to evaluate genetic diversity and relatedness among the 115 cannabis genotypes. The results showed that 115 varieties could be divided into 4 groups primarily based on geography: Northern China, Europe, Central China, and Southern China. Moreover, the coefficient of similarity when comparing cannabis from Northern China with the European group cannabis was higher than that when comparing with cannabis from the other two groups, owing to a similar climate. This study outlines the first large-scale development of SSR markers for cannabis. These data may serve as a foundation for the development of genetic linkage, quantitative trait loci mapping, and marker-assisted breeding of cannabis.

  17. Development of an algorithm to predict serum vitamin D levels using a simple questionnaire based on sunlight exposure.

    PubMed

    Vignali, Edda; Macchia, Enrico; Cetani, Filomena; Reggiardo, Giorgio; Cianferotti, Luisella; Saponaro, Federica; Marcocci, Claudio

    2017-01-01

    Sun exposure is the main determinant of vitamin D production. The aim of this study was to develop an algorithm to assess individual vitamin D status, independently of serum 25(OHD) measurement, using a simple questionnaire, mostly relying upon sunlight exposure, which might help select subjects requiring serum 25(OHD) measurement. Six hundred and twenty adult subjects living in a mountain village in Southern Italy, located at 954 m above sea level and at a latitude of 40°50′11.76″N, were asked to fill in the questionnaire in two different periods of the year: August 2010 and March 2011. Seven predictors were considered: month of investigation, age, sex, BMI, average daily sunlight exposure, beach holidays in the past 12 months, and frequency of going outdoors. The statistical model assumes four classes of serum 25(OHD) concentrations: ≤10, 10-19.9, 20-29.9, and ≥30 ng/ml. The algorithm was developed using a two-step procedure. In Step 1, the linear regression equation was defined in 385 randomly selected subjects. In Step 2, the predictive ability of the regression model was tested in the remaining 235 subjects. Seasonality, daily sunlight exposure and beach holidays in the past 12 months accounted for 27.9, 13.5, and 6.4 % of the explained variance in predicting vitamin D status, respectively. The algorithm performed extremely well: 212 of 235 (90.2 %) subjects were assigned to the correct vitamin D status. In conclusion, our pilot study demonstrates that an algorithm to estimate the vitamin D status can be developed using a simple questionnaire based on sunlight exposure.

  18. Diversity Analysis in Cannabis sativa Based on Large-Scale Development of Expressed Sequence Tag-Derived Simple Sequence Repeat Markers

    PubMed Central

    Cheng, Chaohua; Tang, Qing; Chen, Ping; Wang, Changbiao; Zang, Gonggu; Zhao, Lining

    2014-01-01

    Cannabis sativa L. is an important economic plant for the production of food, fiber, oils, and intoxicants. However, lack of sufficient simple sequence repeat (SSR) markers has limited the development of cannabis genetic research. Here, large-scale development of expressed sequence tag simple sequence repeat (EST-SSR) markers was performed to obtain more informative genetic markers, and to assess genetic diversity in cannabis (Cannabis sativa L.). Based on the cannabis transcriptome, 4,577 SSRs were identified from 3,624 ESTs. From there, a total of 3,442 complementary primer pairs were designed as SSR markers. Among these markers, trinucleotide repeat motifs (50.99%) were the most abundant, followed by hexanucleotide (25.13%), dinucleotide (16.34%), tetranucleotide (3.8%), and pentanucleotide (3.74%) repeat motifs, respectively. The AAG/CTT trinucleotide repeat (17.96%) was the most abundant motif detected in the SSRs. One hundred and seventeen EST-SSR markers were randomly selected to evaluate primer quality in 24 cannabis varieties. Among these 117 markers, 108 (92.31%) were successfully amplified and 87 (74.36%) were polymorphic. Forty-five polymorphic primer pairs were selected to evaluate genetic diversity and relatedness among the 115 cannabis genotypes. The results showed that 115 varieties could be divided into 4 groups primarily based on geography: Northern China, Europe, Central China, and Southern China. Moreover, the coefficient of similarity when comparing cannabis from Northern China with the European group cannabis was higher than that when comparing with cannabis from the other two groups, owing to a similar climate. This study outlines the first large-scale development of SSR markers for cannabis. These data may serve as a foundation for the development of genetic linkage, quantitative trait loci mapping, and marker-assisted breeding of cannabis. PMID:25329551

  19. Evaluating surrogate endpoints, prognostic markers, and predictive markers — some simple themes

    PubMed Central

    Baker, Stuart G.; Kramer, Barnett S.

    2014-01-01

    Background A surrogate endpoint is an endpoint observed earlier than the true endpoint (a health outcome) that is used to draw conclusions about the effect of treatment on the unobserved true endpoint. A prognostic marker is a marker for predicting the risk of an event given a control treatment; it informs treatment decisions when there is information on anticipated benefits and harms of a new treatment applied to persons at high risk. A predictive marker is a marker for predicting the effect of treatment on outcome in a subgroup of patients or study participants; it provides more rigorous information for treatment selection than a prognostic marker when it is based on estimated treatment effects in a randomized trial. Methods We organized our discussion around a different theme for each topic. Results “Fundamentally an extrapolation” refers to the non-statistical considerations and assumptions needed when using surrogate endpoints to evaluate a new treatment. “Decision analysis to the rescue” refers to the use of decision analysis to evaluate an additional prognostic marker, because it is not possible to choose between purely statistical measures of marker performance. “The appeal of simplicity” refers to a straightforward and efficient use of a single randomized trial to evaluate overall treatment effect and treatment effect within subgroups using predictive markers. Conclusion These simple themes provide a general guideline for the evaluation of surrogate endpoints, prognostic markers, and predictive markers. PMID:25385934

  20. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
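
    The procedure described above — randomly re-order k−1 of the variables and compare the largest eigenvalue of the correlation matrix against its permutation distribution — can be sketched as follows. The synthetic data, sample size, and permutation count are illustrative assumptions.

```python
import numpy as np

def randomization_test(data, n_perm=500, seed=0):
    """Permutation test of association among k variables: the criterion
    is the largest eigenvalue of the correlation matrix; the null
    distribution comes from randomly re-ordering k-1 of the variables."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[-1]
    count = 0
    for _ in range(n_perm):
        perm = data.copy()
        for j in range(1, k):                  # re-order k-1 variables
            perm[:, j] = rng.permutation(perm[:, j])
        stat = np.linalg.eigvalsh(np.corrcoef(perm, rowvar=False))[-1]
        count += stat >= observed
    return (count + 1) / (n_perm + 1)          # permutation p-value

rng = np.random.default_rng(1)
x = rng.standard_normal(50)
data = np.column_stack([x, x + 0.3 * rng.standard_normal(50),
                        rng.standard_normal(50)])
p = randomization_test(data)
print(float(p))   # small p-value given the strong shared signal
```

    Because the test only re-orders observed values, it is distribution-free: no normality assumption is needed for the correlation structure under the null.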

  1. Sydney Playground Project: A Cluster-Randomized Trial to Increase Physical Activity, Play, and Social Skills

    ERIC Educational Resources Information Center

    Bundy, Anita; Engelen, Lina; Wyver, Shirley; Tranter, Paul; Ragen, Jo; Bauman, Adrian; Baur, Louise; Schiller, Wendy; Simpson, Judy M.; Niehues, Anita N.; Perry, Gabrielle; Jessup, Glenda; Naughton, Geraldine

    2017-01-01

    Background: We assessed the effectiveness of a simple intervention for increasing children's physical activity, play, perceived competence/social acceptance, and social skills. Methods: A cluster-randomized controlled trial was conducted, in which schools were the clusters. Twelve Sydney (Australia) primary schools were randomly allocated to…

  2. S-SPatt: simple statistics for patterns on Markov chains.

    PubMed

    Nuel, Grégory

    2005-07-01

    S-SPatt allows the counting of pattern occurrences in text files and, assuming these texts are generated from a random Markovian source, the computation of the P-value of a given observation using a simple binomial approximation.
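    The binomial approximation mentioned here is straightforward to sketch: given the per-position occurrence probability implied by the Markov model (taken as a known input below — estimating it from the model is the harder part that S-SPatt automates), the P-value of observing at least n_obs occurrences is a binomial tail. The function name and example numbers are illustrative, not taken from S-SPatt.

```python
from math import comb

def binomial_pattern_pvalue(n_obs, n_positions, p_occ):
    """P-value of seeing >= n_obs occurrences of a pattern among
    n_positions possible start positions, under a binomial
    approximation with per-position occurrence probability p_occ
    (which would come from the Markov model of the source)."""
    return sum(comb(n_positions, k) * p_occ**k * (1 - p_occ)**(n_positions - k)
               for k in range(n_obs, n_positions + 1))

# a pattern expected about once per 1000 positions, observed 5 times:
p = binomial_pattern_pvalue(5, 1000, 0.001)   # small: 5 hits is unlikely
```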

  3. COMPARISON OF RANDOM AND SYSTEMATIC SITE SELECTION FOR ASSESSING ATTAINMENT OF AQUATIC LIFE USES IN SEGMENTS OF THE OHIO RIVER

    EPA Science Inventory

    This report is a description of field work and data analysis results comparing a design comparable to systematic site selection with one based on random selection of sites. The report is expected to validate the use of random site selection in the bioassessment program for the O...

  4. CNN-BLPred: a Convolutional neural network based predictor for β-Lactamases (BL) and their classes.

    PubMed

    White, Clarence; Ismail, Hamid D; Saigo, Hiroto; Kc, Dukka B

    2017-12-28

    The β-Lactamase (BL) enzyme family is an important class of enzymes that plays a key role in bacterial resistance to antibiotics. As the number of newly identified BL enzymes is increasing daily, it is imperative to develop a computational tool to classify newly identified BL enzymes into one of their classes. There are two types of classification of BL enzymes: Molecular Classification and Functional Classification. Existing computational methods only address Molecular Classification, and the performance of these existing methods is unsatisfactory. We addressed the unsatisfactory performance of the existing methods by implementing a Deep Learning approach called Convolutional Neural Network (CNN). We developed CNN-BLPred, an approach for the classification of BL proteins. CNN-BLPred uses Gradient Boosted Feature Selection (GBFS) in order to select the ideal feature set for each BL classification. Based on rigorous benchmarking of CNN-BLPred using both leave-one-out cross-validation and independent test sets, CNN-BLPred performed better than the other existing algorithms. Compared with other CNN architectures, Recurrent Neural Networks, and Random Forests, the simple CNN architecture with only one convolutional layer performs the best. After feature extraction, we were able to remove ~95% of the 10,912 features using Gradient Boosted Trees. During 10-fold cross-validation, we increased the accuracy of the classic BL predictions by 7%. We also increased the accuracy of Class A, Class B, Class C, and Class D performance by an average of 25.64%. The independent test results followed a similar trend. We implemented a deep learning algorithm known as Convolutional Neural Network (CNN) to develop a classifier for BL classification. Combined with feature selection on an exhaustive feature set, and using balancing methods such as Random Oversampling (ROS), Random Undersampling (RUS) and the Synthetic Minority Oversampling Technique (SMOTE), CNN-BLPred performs significantly better than existing algorithms for BL classification.

  5. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    PubMed

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method of Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture books, the comparison of the locations of Fengshi (GB 31) by doctors from all dynasties and the integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is definite, which is the same as the traditional way. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual difference of patients. Also, it is proposed that Fengshi (GB 31) should be located through the integration of the simple method and body surface anatomical landmarks.

  6. Simple Emergent Power Spectra from Complex Inflationary Physics

    NASA Astrophysics Data System (ADS)

    Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David

    2016-09-01

    We construct ensembles of random scalar potentials for Nf-interacting scalar fields using nonequilibrium random matrix theory, and use these to study the generation of observables during small-field inflation. For Nf=O (few ), these heavily featured scalar potentials give rise to power spectra that are highly nonlinear, at odds with observations. For Nf≫1 , the superhorizon evolution of the perturbations is generically substantial, yet the power spectra simplify considerably and become more predictive, with most realizations being well approximated by a linear power spectrum. This provides proof of principle that complex inflationary physics can give rise to simple emergent power spectra. We explain how these results can be understood in terms of large Nf universality of random matrix theory.

  7. Simple Emergent Power Spectra from Complex Inflationary Physics.

    PubMed

    Dias, Mafalda; Frazer, Jonathan; Marsh, M C David

    2016-09-30

    We construct ensembles of random scalar potentials for N_{f}-interacting scalar fields using nonequilibrium random matrix theory, and use these to study the generation of observables during small-field inflation. For N_{f}=O(few), these heavily featured scalar potentials give rise to power spectra that are highly nonlinear, at odds with observations. For N_{f}≫1, the superhorizon evolution of the perturbations is generically substantial, yet the power spectra simplify considerably and become more predictive, with most realizations being well approximated by a linear power spectrum. This provides proof of principle that complex inflationary physics can give rise to simple emergent power spectra. We explain how these results can be understood in terms of large N_{f} universality of random matrix theory.

  8. The correlation structure of several popular pseudorandom number generators

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Merrick, R.; Martin, C. F.

    1973-01-01

    One of the desirable properties of a pseudorandom number generator is that the sequence of numbers it generates should have very low autocorrelation for all shifts except for zero shift and those that are multiples of its cycle length. Due to the simple methods of constructing random numbers, the ideal is often not quite fulfilled. A simple method of examining any random generator for previously unsuspected regularities is discussed. Once they are discovered, it is often easy to derive the mathematical relationships which describe the regular behavior. As examples, it is shown that high correlation exists in mixed and multiplicative congruential random number generators and prime moduli Lehmer generators for shifts a fraction of their cycle lengths.
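    A concrete instance of such a previously unsuspected regularity (not taken from this report — the well-known RANDU generator is used here purely as an illustration) is the multiplicative congruential generator x_{k+1} = 65539·x_k mod 2^31. Because 65539² ≡ 6·65539 − 9 (mod 2^31), every triple of successive outputs satisfies an exact linear relation:

```python
def randu(seed, n=1000):
    """RANDU: x_{k+1} = 65539 * x_k mod 2**31 (seed should be odd)."""
    xs, x = [], seed
    for _ in range(n):
        x = (65539 * x) % 2**31
        xs.append(x)
    return xs

xs = randu(1)
# the hidden regularity: every triple of successive outputs satisfies
# x_{k+2} = 6*x_{k+1} - 9*x_k (mod 2**31), exactly
regular = all((xs[k + 2] - 6 * xs[k + 1] + 9 * xs[k]) % 2**31 == 0
              for k in range(len(xs) - 2))
```

    A correlation or spectral study of the kind the paper describes would expose this as strong structure among nearby outputs, and the exact relationship is then easy to derive from the generator's recurrence.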

  9. Predicting rates of inbreeding in populations undergoing selection.

    PubMed Central

    Woolliams, J A; Bijma, P

    2000-01-01

    Tractable forms of predicting rates of inbreeding (DeltaF) in selected populations with general indices, nonrandom mating, and overlapping generations were developed, with the principal results assuming a period of equilibrium in the selection process. An existing theorem concerning the relationship between squared long-term genetic contributions and rates of inbreeding was extended to nonrandom mating and to overlapping generations. DeltaF was shown to be approximately (1/4)(1 - omega) times the expected sum of squared lifetime contributions, where omega is the deviation from Hardy-Weinberg proportions. This relationship cannot be used for prediction since it is based upon observed quantities. Therefore, the relationship was further developed to express DeltaF in terms of expected long-term contributions that are conditional on a set of selective advantages that relate the selection processes in two consecutive generations and are predictable quantities. With random mating, if selected family sizes are assumed to be independent Poisson variables, then the expected long-term contribution could be substituted for the observed, provided the factor (1/4) (since omega = 0) was increased to (1/2). Established theory was used to provide a correction term to account for deviations from the Poisson assumptions. The equations were successfully applied, using simple linear models, to the problem of predicting DeltaF with sib indices in discrete generations, since previously published solutions had proved complex. PMID:10747074

  10. Evolutionary constraints or opportunities?

    PubMed

    Sharov, Alexei A

    2014-04-22

    Natural selection is traditionally viewed as a leading factor of evolution, whereas variation is assumed to be random and non-directional. Any order in variation is attributed to epigenetic or developmental constraints that can hinder the action of natural selection. In contrast I consider the positive role of epigenetic mechanisms in evolution because they provide organisms with opportunities for rapid adaptive change. Because the term "constraint" has negative connotations, I use the term "regulated variation" to emphasize the adaptive nature of phenotypic variation, which helps populations and species to survive and evolve in changing environments. The capacity to produce regulated variation is a phenotypic property, which is not described in the genome. Instead, the genome acts as a switchboard, where mostly random mutations switch "on" or "off" preexisting functional capacities of organism components. Thus, there are two channels of heredity: informational (genomic) and structure-functional (phenotypic). Functional capacities of organisms most likely emerged in a chain of modifications and combinations of more simple ancestral functions. The role of DNA has been to keep records of these changes (without describing the result) so that they can be reproduced in the following generations. Evolutionary opportunities include adjustments of individual functions, multitasking, connection between various components of an organism, and interaction between organisms. The adaptive nature of regulated variation can be explained by the differential success of lineages in macro-evolution. Lineages with more advantageous patterns of regulated variation are likely to produce more species and secure more resources (i.e., long-term lineage selection). Copyright © 2014. Published by Elsevier Ireland Ltd.

  11. Genetic analysis of Apuleia leiocarpa as revealed by random amplified polymorphic DNA markers: prospects for population genetic studies.

    PubMed

    Lencina, K H; Konzen, E R; Tsai, S M; Bisognin, D A

    2016-12-19

    Apuleia leiocarpa (Vogel) J.F. MacBride is a hardwood species native to South America, which is at serious risk of extinction. Therefore, it is of prime importance to examine the genetic diversity of this species, information required for developing conservation, sustainable management, and breeding strategies. Although scarcely used in recent years, random amplified polymorphic DNA markers are useful resources for the analysis of genetic diversity and structure of tree species. This study represents the first genetic analysis based on DNA markers in A. leiocarpa that aimed to investigate the levels of polymorphism and to select markers for the precise characterization of its genetic structure. We adapted the original DNA extraction protocol based on cetyltrimethyl ammonium bromide, and describe a simple procedure that can be used to obtain high-quality samples from leaf tissues of this tree. Eighteen primers were selected, revealing 92 bands, from which 75 were polymorphic and 61 were sufficient to represent the overall genetic structure of the population without compromising the precision of the analysis. Some fragments were conserved among individuals, which can be sequenced and used to analyze nucleotide diversity parameters through a wider set of A. leiocarpa individuals and populations. The individuals were separated into 11 distinct groups with variable levels of genetic diversity, which is important for selecting desirable genotypes and for the development of a conservation and sustainable management program. Our results are of prime importance for further investigations concerning the genetic characterization of this important, but vulnerable species.

  12. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  13. The influence of professional expertise and task complexity upon the potency of the contextual interference effect.

    PubMed

    Ollis, Stewart; Button, Chris; Fairweather, Malcolm

    2005-03-01

    The contextual interference (CI) effect has been investigated through practice schedule manipulations within both basic and applied studies. Despite extensive research activity, there is little conclusive evidence regarding the optimal practice structure of real-world manipulative tasks in professional training settings. The present study therefore assessed the efficacy of practising simple and complex knot-tying skills in professional fire-fighter training. Forty-eight participants were quasi-randomly assigned to various practice schedules along the CI continuum. Twenty-four participants were students selected for their novice knot-tying capabilities and 24 were experienced fire-fighters who were more 'experienced knot-tiers'. They were assessed for skill acquisition, retention and transfer effects having practised tying knots classified as simple or complex. Surprisingly, high levels of CI scheduling enhanced learning for novices even when practising a complex task. The findings also revealed that CI benefits are most apparent as learners engage in tasks high in transfer distality. In conclusion, complexity and experience are mediating factors influencing the potency of the CI training effect in real-world settings.

  14. On the adaptivity and complexity embedded into differential evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senkerik, Roman; Pluhacek, Michal; Jasek, Roman

    2016-06-08

    This research compares two modern approaches for evolutionary algorithms: adaptivity and complex chaotic dynamics. The paper investigates the chaos-driven Differential Evolution (DE) concept, embedding discrete dissipative chaotic systems as chaotic pseudo-random number generators (CPRNGs) for DE and comparing their influence on performance against the state-of-the-art adaptive representative jDE. The focus is mainly on the possible disadvantages and advantages of both approaches. Repeated simulations with the Lozi map as the driving chaotic system were performed on a set of simple benchmark functions that are closer to real optimization problems. The obtained results are compared with the canonical non-chaotic and non-adaptive DE. Results show that, on the simple test functions used, ChaosDE performs better than jDE and canonical DE in most cases; furthermore, the unique sequencing of the CPRNG produced by the hidden chaotic dynamics enables better and faster selection of unique individuals from the population, making ChaosDE faster.

  15. Classical and quantum stability in putative landscapes

    DOE PAGES

    Dine, Michael

    2017-01-18

    Landscape analyses often assume the existence of large numbers of fields, N, with all of the many couplings among these fields (subject to constraints such as local supersymmetry) selected independently and randomly from simple (say Gaussian) distributions. We point out that unitarity and perturbativity place significant constraints on behavior of couplings with N, eliminating otherwise puzzling results. In would-be flux compactifications of string theory, we point out that in order that there be large numbers of light fields, the compactification radii must scale as a positive power of N; scaling of couplings with N may also be necessary for perturbativity. We show that in some simple string theory settings with large numbers of fields, for fixed R and string coupling, one can bound certain sums of squares of couplings by order one numbers. This may argue for strong correlations, possibly calling into question the assumption of uncorrelated distributions. Finally, we consider implications of these considerations for classical and quantum stability of states without supersymmetry, with low energy supersymmetry arising from tuning of parameters, and with dynamical breaking of supersymmetry.

  16. Methods for Generating Complex Networks with Selected Structural Properties for Simulations: A Review and Tutorial for Neuroscientists

    PubMed Central

    Prettejohn, Brenton J.; Berryman, Matthew J.; McDonnell, Mark D.

    2011-01-01

    Many simulations of networks in computational neuroscience assume completely homogenous random networks of the Erdös–Rényi type, or regular networks, despite it being recognized for some time that anatomical brain networks are more complex in their connectivity and can, for example, exhibit the “scale-free” and “small-world” properties. We review the most well known algorithms for constructing networks with given non-homogeneous statistical properties and provide simple pseudo-code for reproducing such networks in software simulations. We also review some useful mathematical results and approximations associated with the statistics that describe these network models, including degree distribution, average path length, and clustering coefficient. We demonstrate how such results can be used as partial verification and validation of implementations. Finally, we discuss a sometimes overlooked modeling choice that can be crucially important for the properties of simulated networks: that of network directedness. The most well known network algorithms produce undirected networks, and we emphasize this point by highlighting how simple adaptations can instead produce directed networks. PMID:21441986
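    As an example of the kind of simple pseudo-code this review refers to, a scale-free network can be grown by preferential attachment in a few lines. This is a generic Barabási–Albert sketch, not code from the paper; the network size and attachment parameter are illustrative.

```python
import random
from collections import Counter

def barabasi_albert(n, m, seed=0):
    """Grow an undirected scale-free network: each new node attaches to
    m distinct existing nodes chosen with probability proportional to
    their current degree (implemented via a degree-weighted pool)."""
    random.seed(seed)
    # start from a small fully connected core of m+1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    pool = [v for e in edges for v in e]      # node v appears deg(v) times
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(pool))   # P(v) proportional to deg(v)
        for t in chosen:
            edges.append((new, t))
            pool += [new, t]
    return edges

edges = barabasi_albert(200, 2)
degree = Counter(v for e in edges for v in e)
```

    The resulting degree distribution is heavy-tailed: most nodes keep degree m while the earliest nodes become hubs. Making the network directed — the modeling choice the review highlights — is a small adaptation: treat each appended (new, t) as an ordered pair and track in- and out-degree separately.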

  17. A Simple and Effective Physical Characteristic Profiling Method for Methamphetamine Tablet Seized in China.

    PubMed

    Li, Tao; Hua, Zhendong; Meng, Xin; Liu, Cuimei

    2018-03-01

    Methamphetamine (MA) tablet production confers characteristic chemical and physical properties. This study developed a simple and effective physical characteristic profiling method for MA tablets with capital letter "WY" logos, which enabled discrimination between linked and unlinked seizures. Seventeen signature distances extracted from the "WY" logo were explored as factors for multivariate analysis and demonstrated to be effective in representing the features of tablets from a drug intelligence perspective. Receiver operating characteristic (ROC) curves were used to evaluate the efficiency of different pretreatments and distance/correlation metrics, with the "Standardization + Euclidean" and "Logarithm + Euclidean" algorithms outperforming the rest. Finally, hierarchical cluster analysis (HCA) was applied to a data set of 200 MA tablet seizures randomly selected from cases all around China in 2015, and 76% of them were classified into a group named "WY-001." Moreover, the "WY-001" tablets accounted for 51-80% of tablet seizures from 2011 to 2015 in China, indicating the existence of a huge clandestine factory incessantly manufacturing MA tablets. © 2017 American Academy of Forensic Sciences.

  18. SSRscanner: a program for reporting distribution and exact location of simple sequence repeats

    PubMed Central

    Anwar, Tamanna; Khan, Asad U

    2006-01-01

    Simple sequence repeats (SSRs) have become important molecular markers for a broad range of applications, such as genome mapping and characterization, phenotype mapping, marker-assisted selection of crop plants and a range of molecular ecology and diversity studies. These repeated DNA sequences are found in both prokaryotes and eukaryotes. They are distributed almost at random throughout the genome, ranging from mononucleotide to trinucleotide repeats. They are also found as longer tracts (> 6 repeating units). Most of the computer programs that find SSRs do not report their exact positions. A computer program, SSRscanner, was written to find the distribution, frequency and exact location of each SSR in the genome. SSRscanner is user friendly. It can search repeats of any length and produce outputs with their exact position on the chromosome and their frequency of occurrence in the sequence. Availability: This program has been written in Perl and is freely available for non-commercial users by request from the authors. Please contact the authors by E-mail: huzzi99@hotmail.com PMID:17597863

  19. Classical and quantum stability in putative landscapes

    NASA Astrophysics Data System (ADS)

    Dine, Michael

    2017-01-01

    Landscape analyses often assume the existence of large numbers of fields, N , with all of the many couplings among these fields (subject to constraints such as local supersymmetry) selected independently and randomly from simple (say Gaussian) distributions. We point out that unitarity and perturbativity place significant constraints on behavior of couplings with N , eliminating otherwise puzzling results. In would-be flux compactifications of string theory, we point out that in order that there be large numbers of light fields, the compactification radii must scale as a positive power of N ; scaling of couplings with N may also be necessary for perturbativity. We show that in some simple string theory settings with large numbers of fields, for fixed R and string coupling, one can bound certain sums of squares of couplings by order one numbers. This may argue for strong correlations, possibly calling into question the assumption of uncorrelated distributions. We consider implications of these considerations for classical and quantum stability of states without supersymmetry, with low energy supersymmetry arising from tuning of parameters, and with dynamical breaking of supersymmetry.

  20. Emergence of Primary Teeth in Children of Sunsari District of Eastern Nepal

    PubMed Central

    Gupta, Anita; Hiremath, SS; Singh, SK; Poudyal, S; Niraula, SR; Baral, DD; Singh, RK

    2007-01-01

    This study assessed the timing and eruption sequence of primary teeth in children of Sunsari district of Eastern Nepal and compared the eruption patterns of males and females across various ethnic groups. Method This cross-sectional study included 501 subjects, aged 3 months to 60 months, selected by the simple random sampling method. The determinant variables such as age, gender, ethnicity, and eruption of teeth were recorded. Results This study provides model data on the emergence of primary teeth and the number of deciduous teeth in these children. This is the first study of its kind in Nepal. The findings of this study will serve as reference data for optimal use in clinical, academic, and research activities, especially for children of Eastern Nepal. PMID:18523631

  1. Data survey on the effect of product features on competitive advantage of selected firms in Nigeria.

    PubMed

    Olokundun, Maxwell; Iyiola, Oladele; Ibidunni, Stephen; Falola, Hezekiah; Salau, Odunayo; Amaihian, Augusta; Peter, Fred; Borishade, Taiye

    2018-06-01

    The main objective of this study was to present a data article that investigates the effect of product features on a firm's competitive advantage. Few studies have examined how the features of a product could help drive the competitive advantage of a firm. A descriptive research method was used. The Statistical Package for the Social Sciences (SPSS 22) was used to analyze one hundred and fifty (150) valid questionnaires completed by small business owners registered under the Small and Medium Enterprises Development Agency of Nigeria (SMEDAN). Stratified and simple random sampling techniques were employed; reliability and validity procedures were also confirmed. The field data set is made publicly available to enable critical or extended analysis.

  2. Active learning in the presence of unlabelable examples

    NASA Technical Reports Server (NTRS)

    Mazzoni, Dominic; Wagstaff, Kiri

    2004-01-01

    We propose a new active learning framework where the expert labeler is allowed to decline to label any example. This may be necessary because the true label is unknown or because the example belongs to a class that is not part of the real training problem. We show that within this framework, popular active learning algorithms (such as Simple) may perform worse than random selection because they make so many queries to the unlabelable class. We present a method by which any active learning algorithm can be modified to avoid unlabelable examples by training a second classifier to distinguish between the labelable and unlabelable classes. We also demonstrate the effectiveness of the method on two benchmark data sets and a real-world problem.
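    The modification described — train a second classifier to predict whether the expert will be able to label a candidate query — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the 1-nearest-neighbour labelability filter, the informativeness score (a stand-in for, e.g., an SVM margin distance as used by Simple), and all names are assumptions.

```python
def labelable_filter(history, x):
    """1-NN stand-in for the second ('labelability') classifier: predict
    whether the expert can label x from the outcome of the nearest
    previously queried example."""
    if not history:
        return True
    return min(history, key=lambda q: abs(q[0] - x))[1]

def active_learning_with_filter(pool, oracle, uncertainty, rounds):
    """Active-learning loop that avoids unlabelable examples.
    oracle(x) returns a label, or None when the expert declines."""
    labeled, history = [], []            # history: (x, was_labelable)
    for _ in range(rounds):
        candidates = [x for x in pool if labelable_filter(history, x)]
        if not candidates:
            break
        x = max(candidates, key=uncertainty)   # most informative query
        y = oracle(x)
        history.append((x, y is not None))     # training data for the filter
        if y is not None:
            labeled.append((x, y))
        pool.remove(x)
    return labeled, history

# toy 1-D task: only x < 10 is labelable, and the most "uncertain"
# points sit right at the boundary, where naive querying wastes effort
labeled, history = active_learning_with_filter(
    pool=list(range(20)),
    oracle=lambda x: (x % 2) if x < 10 else None,
    uncertainty=lambda x: -abs(x - 9.5),
    rounds=10)
```

    After one failed query at x = 10, the filter steers all later queries back into the labelable region — the situation where an unfiltered strategy would keep querying the unlabelable class.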

  3. Differentials in colostrum feeding among lactating women of block RS Pura of J and K: A lesson for nursing practice.

    PubMed

    Raina, Sunil Kumar; Mengi, Vijay; Singh, Gurdeep

    2012-07-01

    Breast feeding is universally and traditionally practised in India. Experts advocate breast feeding as the best method of feeding young infants. To assess the role of various factors in determining colostrum feeding in block R. S. Pura of district Jammu. A stratified two-stage design was used, with villages as the primary sampling unit and lactating mothers as the secondary sampling unit. Villages were divided into different clusters on the basis of population, and sampling units were selected by a simple random technique. Breastfeeding is almost universal in R. S. Pura. Differentials in discarding the first milk were not found to be important among various socioeconomic groups, and the phenomenon appeared more general than specific.

  4. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037

  5. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
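    The traditional search-curve based sampling analysed in this paper can be sketched numerically: each parameter is driven along the curve x_i(s) = 1/2 + arcsin(sin(ω_i s))/π with a distinct frequency ω_i, and the main-effect partial variance of factor i is read off the Fourier coefficients at the first M harmonics of ω_i. This is a generic first-order FAST sketch under assumed interference-free frequencies and a toy model, not the authors' code.

```python
import numpy as np

def fast_first_order(model, omegas, M=4, N=1025):
    """Classical FAST: sample all factors along one search curve and
    estimate each factor's main-effect sensitivity index from the
    Fourier coefficients at the first M harmonics of its frequency."""
    s = (2 * np.pi / N) * np.arange(N)                         # one full period
    X = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi   # each x_i in [0, 1]
    y = model(X)
    ks = np.arange(1, N // 2)
    A = np.array([np.mean(y * np.cos(k * s)) for k in ks])
    B = np.array([np.mean(y * np.sin(k * s)) for k in ks])
    D = 2 * np.sum(A**2 + B**2)            # approximates the total variance
    S = []
    for w in omegas:
        h = [p * w - 1 for p in range(1, M + 1)]   # rows for harmonics of w
        S.append(2 * np.sum(A[h]**2 + B[h]**2) / D)
    return S

# toy model y = x1 + 2*x2 on [0,1]^2: true indices are S1 = 0.2, S2 = 0.8
S1, S2 = fast_first_order(lambda X: X[0] + 2 * X[1], omegas=[11, 35])
```

    The estimates land close to the true values, with a slight deficit because variance in harmonics above M is not attributed to either factor — one of the biases the paper quantifies for the different sampling schemes.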

  6. Efficacy of Exclusive Lingual Nerve Block versus Conventional Inferior Alveolar Nerve Block in Achieving Lingual Soft-tissue Anesthesia.

    PubMed

    Balasubramanian, Sasikala; Paneerselvam, Elavenil; Guruprasad, T; Pathumai, M; Abraham, Simin; Krishnakumar Raja, V B

    2017-01-01

    The aim of this randomized clinical trial was to assess the efficacy of exclusive lingual nerve block (LNB) in achieving selective lingual soft-tissue anesthesia in comparison with conventional inferior alveolar nerve block (IANB). A total of 200 patients indicated for the extraction of lower premolars were recruited for the study. The samples were allocated by randomization into control and study groups. Lingual soft-tissue anesthesia was achieved by IANB and exclusive LNB in the control and study group, respectively. The primary outcome variable studied was anesthesia of ipsilateral lingual mucoperiosteum, floor of mouth and tongue. The secondary variables assessed were (1) taste sensation immediately following administration of local anesthesia and (2) mouth opening and lingual nerve paresthesia on the first postoperative day. Data analysis for descriptive and inferential statistics was performed using SPSS (IBM SPSS Statistics for Windows, Version 22.0, Armonk, NY: IBM Corp. Released 2013) and a P < 0.05 was considered statistically significant. In comparison with the control group, the study group (LNB) showed statistically significant anesthesia of the lingual gingiva of incisors, molars, anterior floor of the mouth, and anterior tongue. Exclusive LNB is superior to IAN nerve block in achieving selective anesthesia of lingual soft tissues. It is technically simple and associated with minimal complications as compared to IAN block.

  7. Efficacy of Exclusive Lingual Nerve Block versus Conventional Inferior Alveolar Nerve Block in Achieving Lingual Soft-tissue Anesthesia

    PubMed Central

    Balasubramanian, Sasikala; Paneerselvam, Elavenil; Guruprasad, T; Pathumai, M; Abraham, Simin; Krishnakumar Raja, V. B.

    2017-01-01

    Objective: The aim of this randomized clinical trial was to assess the efficacy of exclusive lingual nerve block (LNB) in achieving selective lingual soft-tissue anesthesia in comparison with conventional inferior alveolar nerve block (IANB). Materials and Methods: A total of 200 patients indicated for the extraction of lower premolars were recruited for the study. The samples were allocated by randomization into control and study groups. Lingual soft-tissue anesthesia was achieved by IANB and exclusive LNB in the control and study group, respectively. The primary outcome variable studied was anesthesia of the ipsilateral lingual mucoperiosteum, floor of mouth and tongue. The secondary variables assessed were (1) taste sensation immediately following administration of local anesthesia and (2) mouth opening and lingual nerve paresthesia on the first postoperative day. Results: Data analysis for descriptive and inferential statistics was performed using SPSS (IBM SPSS Statistics for Windows, Version 22.0, Armonk, NY: IBM Corp. Released 2013) and P < 0.05 was considered statistically significant. In comparison with the control group, the study group (LNB) showed statistically significant anesthesia of the lingual gingiva of incisors, molars, anterior floor of the mouth, and anterior tongue. Conclusion: Exclusive LNB is superior to IANB in achieving selective anesthesia of lingual soft tissues. It is technically simple and associated with minimal complications compared to IANB. PMID:29264294

  8. Encounter success of free-ranging marine predator movements across a dynamic prey landscape.

    PubMed

    Sims, David W; Witt, Matthew J; Richardson, Anthony J; Southall, Emily J; Metcalfe, Julian D

    2006-05-22

    Movements of wide-ranging top predators can now be studied effectively using satellite and archival telemetry. However, the motivations underlying movements remain difficult to determine because trajectories are seldom related to key biological gradients, such as changing prey distributions. Here, we use a dynamic prey landscape of zooplankton biomass in the north-east Atlantic Ocean to examine active habitat selection in the plankton-feeding basking shark Cetorhinus maximus. The relative success of shark searches across this landscape was examined by comparing prey biomass encountered by sharks with encounters by random-walk simulations of 'model' sharks. Movements of transmitter-tagged sharks monitored for 964 days (16,754 km estimated minimum distance) were concentrated on the European continental shelf in areas characterized by high seasonal productivity and complex prey distributions. We show that movements by adult and sub-adult sharks yielded consistently higher prey encounter rates than 90% of random-walk simulations. Behavioural patterns were consistent with basking sharks using search tactics structured across multiple scales to exploit the richest prey areas available in preferred habitats. Simple behavioural rules based on learned responses to previously encountered prey distributions may explain this high performance. This study highlights how dynamic prey landscapes enable active habitat selection in large predators to be investigated from a trophic perspective, an approach that may inform conservation by identifying critical habitat of vulnerable species.
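The study's core comparison (prey biomass encountered along real tracks versus random-walk 'model sharks') can be sketched on a toy prey grid. Everything here is an illustrative assumption: the landscape is a synthetic smooth field rather than zooplankton data, and the 'real' shark is stood in for by a simple gradient-following walker.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical prey landscape: a smooth 2-D biomass field on a 100x100 grid
g = np.add.outer(np.sin(np.linspace(0, 3 * np.pi, 100)),
                 np.cos(np.linspace(0, 2 * np.pi, 100)))
prey = (g - g.min()) / (g.max() - g.min())          # scale biomass to [0, 1]

def random_walk_biomass(steps=500):
    """Total biomass encountered by an unbiased lattice random walk."""
    pos = np.array([50, 50])
    total = 0.0
    for _ in range(steps):
        pos = (pos + rng.choice([-1, 0, 1], size=2)) % 100  # wrap at edges
        total += prey[pos[0], pos[1]]
    return total

def gradient_walk_biomass(steps=500):
    """A crude 'searcher' that always moves to the richest neighbouring cell."""
    pos = np.array([50, 50])
    total = 0.0
    dirs = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    for _ in range(steps):
        moves = [(pos + d) % 100 for d in dirs]
        pos = max(moves, key=lambda p: prey[p[0], p[1]])
        total += prey[pos[0], pos[1]]
    return total

sims = [random_walk_biomass() for _ in range(200)]
observed = gradient_walk_biomass()
pct = float(np.mean(observed > np.array(sims)) * 100)  # percentile of searcher
```

A searcher exploiting the landscape should encounter more biomass than the bulk of the random-walk ensemble, mirroring the paper's ">90% of simulations" comparison.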

  9. Multi-Label Learning via Random Label Selection for Protein Subcellular Multi-Locations Prediction.

    PubMed

    Wang, Xiao; Li, Guo-Zheng

    2013-03-12

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most existing protein subcellular localization methods deal only with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they adopt only a simple strategy, transforming the multi-location proteins into multiple proteins with a single location, which does not take correlations among different subcellular locations into account. In this paper, a novel method named RALS (multi-label learning via RAndom Label Selection) is proposed to learn from multi-location proteins in an effective and efficient way. Through a five-fold cross-validation test on a benchmark dataset, we demonstrate that our proposed method, which takes label correlations into consideration, clearly outperforms the baseline BR method, which does not, indicating that correlations among different subcellular locations really exist and contribute to improved prediction performance. Experimental results on two benchmark datasets also show that our proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting subcellular multi-locations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public use.

  10. The molecular and mathematical basis of Waddington's epigenetic landscape: a framework for post-Darwinian biology?

    PubMed

    Huang, Sui

    2012-02-01

    The Neo-Darwinian concept of natural selection is plausible when one assumes a straightforward causation of phenotype by genotype. However, such simple 1:1 mapping must now give way to the modern concepts of gene regulatory networks and gene expression noise. Both can, in the absence of genetic mutations, jointly generate a diversity of inheritable, randomly occupied phenotypic states that could also serve as a substrate for natural selection. This form of epigenetic dynamics challenges Neo-Darwinism, which needs to incorporate the non-linear, stochastic dynamics of gene networks. A first step is to consider the mathematical correspondence between gene regulatory networks and Waddington's metaphoric 'epigenetic landscape', which actually represents the quasi-potential function of global network dynamics. It explains the coexistence of multiple stable phenotypes within one genotype. The landscape's topography with its attractors is shaped by evolution through mutational re-wiring of regulatory interactions, offering a link between genetic mutation and sudden, broad evolutionary changes. Copyright © 2012 WILEY Periodicals, Inc.

  11. Genetic Confirmation of Mungbean (Vigna radiata) and Mashbean (Vigna mungo) Interspecific Recombinants using Molecular Markers.

    PubMed

    Abbas, Ghulam; Hameed, Amjad; Rizwan, Muhammad; Ahsan, Muhammad; Asghar, Muhammad J; Iqbal, Nayyer

    2015-01-01

    Molecular confirmation of interspecific recombinants is essential to overcome issues such as self-pollination, environmental influence, and the inadequacy of morphological characteristics during interspecific hybridization. The present study was conducted for genetic confirmation of mungbean (female) and mashbean (male) interspecific crosses using molecular markers. Initially, polymorphic random amplified polymorphic DNA (RAPD), universal rice primer (URP), and simple sequence repeat (SSR) markers differentiating the parent genotypes were identified. Recombination in hybrids was confirmed using these polymorphic DNA markers. NM 2006 × Mash 88 was the most successful interspecific cross; most of the true recombinants confirmed by molecular markers were from this cross combination. SSR markers were efficient in detecting genetic variability and recombination with reference to specific chromosomes and particular loci. SSR (RIS) and RAPD identified variability dispersed throughout the genome. In conclusion, DNA-based marker-assisted selection (MAS) efficiently confirmed the interspecific recombinants. The results provided evidence that MAS can enhance the authenticity of selection in mungbean improvement programs.

  12. Applications of molecular markers in the discrimination of Panax species and Korean ginseng cultivars (Panax ginseng).

    PubMed

    Jo, Ick Hyun; Kim, Young Chang; Kim, Dong Hwi; Kim, Kee Hong; Hyun, Tae Kyung; Ryu, Hojin; Bang, Kyong Hwan

    2017-10-01

    The development of molecular markers is one of the most useful methods for molecular breeding and marker-assisted selection. Even when little reference genome information is available, molecular markers are indispensable tools for determining genetic variation and identifying species with high levels of accuracy and reproducibility. The demand for molecular approaches for marker-based breeding and genetic discrimination in Panax species has greatly increased in recent times, and such approaches have been successfully applied for various purposes. However, owing to the existence of diverse molecular techniques and differences in their principles and applications, careful consideration is needed when selecting appropriate marker types. In this review, we outline the recent status of different molecular marker applications in ginseng research and industrial fields. In addition, we discuss the basic principles, requirements, and advantages and disadvantages of the most widely used molecular markers, including restriction fragment length polymorphisms, random amplified polymorphic DNA, sequence tag sites, simple sequence repeats, and single nucleotide polymorphisms.

  13. Health-risk behaviors in agriculture and related factors, southeastern Anatolian region of Turkey.

    PubMed

    Yavuz, Hasret; Simsek, Zeynep; Akbaba, Muhsin

    2014-01-01

    Human behavior plays a central role in the maintenance of health and the prevention of disease. This study aimed to determine the risky behaviors of farm operators selected from a province of Turkey's southeastern Anatolian region, as well as the factors related to those behaviors. In this cross-sectional analysis, 380 farm operators were enrolled through a simple random selection method, and the response rate was 85%. Health-risk behavior was measured using the Control List of Occupational Risks in Agriculture. Of 323 farm operators, 85.4% were male. The prevalence of risky behaviors related to measures of environmental risks was higher in animal husbandry, transportation, transportation and maintenance of machinery, pesticide application, child protection, thermal stress, and psychosocial factors in the workplace. Education, age, duration of work, and size of agricultural area were associated with risky behaviors in a multiple linear regression (P < .05). Findings showed that a certified training program and a behavior surveillance system for agriculture should be developed.

  14. Environmental Influence on the Evolution of Morphological Complexity in Machines

    PubMed Central

    Auerbach, Joshua E.; Bongard, Josh C.

    2014-01-01

    Whether, when, how, and why increased complexity evolves in biological populations is a longstanding open question. In this work we combine a recently developed method for evolving virtual organisms with an information-theoretic metric of morphological complexity in order to investigate how the complexity of morphologies, which are evolved for locomotion, varies across different environments. We first demonstrate that selection for locomotion results in the evolution of organisms with morphologies that increase in complexity over evolutionary time beyond what would be expected due to random chance. This provides evidence that the increase in complexity observed is the result of a driven rather than a passive trend. In subsequent experiments we demonstrate that, when a cost of complexity is imposed, more complex morphologies evolve in complex environments than in a simple environment. This suggests that in some niches evolution may act to complexify the body plans of organisms, while in other niches selection favors simpler body plans. PMID:24391483

  15. DNA-Templated Introduction of an Aldehyde Handle in Proteins.

    PubMed

    Kodal, Anne Louise B; Rosen, Christian B; Mortensen, Michael R; Tørring, Thomas; Gothelf, Kurt V

    2016-07-15

    Many medical and biotechnological applications rely on protein labeling, but a key challenge is the production of homogeneous and site-specific conjugates. This can rarely be achieved by simple residue-specific random labeling, but generally requires genetic engineering. Using site-selective DNA-templated reductive amination, we created DNA-protein conjugates with control over labeling stoichiometry and without genetic engineering. A guiding DNA strand with a metal-binding functionality facilitates site-selectivity by directing the coupling of a second reactive DNA strand in the vicinity of a protein metal-binding site. We demonstrate DNA-templated reductive amination for His6-tagged proteins and metal-binding proteins, including IgG1 antibodies. We also used a cleavable linker between the DNA and the protein to remove the DNA and introduce a single aldehyde on the protein. This functions as a handle for further modifications with desired labels. In addition to directing the aldehyde positioning, the DNA provides a straightforward route for purification between reaction steps. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. A simple model for pollen-parent fecundity distributions in bee-pollinated forage legume polycrosses

    USDA-ARS?s Scientific Manuscript database

    Random mating or panmixis is a fundamental assumption in quantitative genetic theory. Random mating is sometimes thought to occur in actual fact although a large body of empirical work shows that this is often not the case in nature. Models have been developed to model many non-random mating phenome...

  17. Under What Circumstances Does External Knowledge about the Correlation Structure Improve Power in Cluster Randomized Designs?

    ERIC Educational Resources Information Center

    Rhoads, Christopher

    2014-01-01

    Recent publications have drawn attention to the idea of utilizing prior information about the correlation structure to improve statistical power in cluster randomized experiments. Because power in cluster randomized designs is a function of many different parameters, it has been difficult for applied researchers to discern a simple rule explaining…

  18. Simulating and mapping spatial complexity using multi-scale techniques

    USGS Publications Warehouse

    De Cola, L.

    1994-01-01

    A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields. -Author

  19. Minimizing effects of methodological decisions on interpretation and prediction in species distribution studies: An example with background selection

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.

    2017-01-01

    Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.

  20. Composition bias and the origin of ORFan genes

    PubMed Central

    Yomtovian, Inbal; Teerakulkittipong, Nuttinee; Lee, Byungkook; Moult, John; Unger, Ron

    2010-01-01

    Motivation: Intriguingly, sequence analysis of genomes reveals that a large number of genes are unique to each organism. The origin of these genes, termed ORFans, is not known. Here, we explore the origin of ORFan genes by defining a simple measure called ‘composition bias’, based on the deviation of the amino acid composition of a given sequence from the average composition of all proteins of a given genome. Results: For a set of 47 prokaryotic genomes, we show that the amino acid composition bias of real proteins, random ‘proteins’ (created by using the nucleotide frequencies of each genome) and ‘proteins’ translated from intergenic regions are distinct. For ORFans, we observed a correlation between their composition bias and their relative evolutionary age. Recent ORFan proteins have compositions more similar to those of random ‘proteins’, while the compositions of more ancient ORFan proteins are more similar to those of the set of all proteins of the organism. This observation is consistent with an evolutionary scenario wherein ORFan genes emerged and underwent a large number of random mutations and selection, eventually adapting to the composition preference of their organism over time. Contact: ron@biocoml.ls.biu.ac.il Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20231229
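The 'composition bias' measure described above can be sketched as the deviation of a protein's amino-acid composition vector from the genome-average composition. The Euclidean distance and the toy 'proteome' below are illustrative assumptions; the paper's exact metric may differ.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Amino acid frequency vector of a protein sequence."""
    counts = Counter(seq)
    total = len(seq)
    return [counts.get(a, 0) / total for a in AMINO_ACIDS]

def composition_bias(seq, genome_avg):
    """Euclidean deviation of a sequence's composition from the
    genome-wide average composition (one possible distance; assumed)."""
    c = composition(seq)
    return sum((x - y) ** 2 for x, y in zip(c, genome_avg)) ** 0.5

# Genome-average composition from a toy 'proteome'
proteome = ["MKKLLAA", "GGAVVLLK", "MSTNPKPQRKTKRNTNRRPQDVK"]
pool = "".join(proteome)
genome_avg = composition(pool)
```

Under this reading, a sequence with typical composition scores near zero, while an atypical sequence (e.g. tryptophan-rich, like a random or young ORFan protein) scores higher.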

  1. Parallel Algorithms for Switching Edges in Heterogeneous Graphs.

    PubMed

    Bhuiyan, Hasanuzzaman; Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-06-01

    An edge switch is an operation on a graph (or network) in which two edges are selected at random and one end vertex of each is swapped with that of the other. Edge switch operations have important applications in graph theory and network analysis, such as generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors, leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors.
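A serial sketch of the edge switch operation with the simplicity constraints mentioned above (illustrative only; the paper's contribution is a distributed-memory parallel version, which this does not attempt): proposed switches that would create a self-loop or a parallel edge are rejected and retried, so every vertex degree is preserved and the graph stays simple.

```python
import random

def edge_switch(edges, n_switches, rng=random.Random(1)):
    """Randomly switch edges while preserving every vertex degree and
    keeping the graph simple (no self-loops, no parallel edges)."""
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    done = 0
    while done < n_switches:
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        # Swap one endpoint of each edge: (a,b),(c,d) -> (a,d),(c,b)
        if len({a, b, c, d}) < 4:
            continue  # a shared endpoint would create a self-loop
        if frozenset((a, d)) in present or frozenset((c, b)) in present:
            continue  # new edge already exists: would create a parallel edge
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges
```

Applied to a 10-cycle, any number of successful switches leaves every vertex with degree 2 while randomizing which pairs are connected.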

  2. Parallel Algorithms for Switching Edges in Heterogeneous Graphs

    PubMed Central

    Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-01-01

    An edge switch is an operation on a graph (or network) in which two edges are selected at random and one end vertex of each is swapped with that of the other. Edge switch operations have important applications in graph theory and network analysis, such as generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors, leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors. PMID:28757680

  3. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach

    PubMed Central

    Hemming, Karla; Taljaard, Monica

    2016-01-01

    Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
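The paper's SW-CRT and CRT-BA design effects are not reproduced here, but the inflation logic they build on can be sketched with the standard parallel-CRT design effect, 1 + (m - 1)ICC; the normal quantiles and the worked numbers below are illustrative assumptions, not the paper's formulas.

```python
from math import ceil

def unadjusted_n(delta, sd, z_alpha=1.96, z_beta=0.8416):
    """Per-arm sample size for an individually randomized two-arm trial
    (normal approximation; defaults give two-sided alpha=0.05, power=0.80)."""
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

def design_effect_crt(m, icc):
    """Standard design effect for a parallel cluster randomized trial
    with cluster size m and intracluster correlation coefficient icc."""
    return 1 + (m - 1) * icc

def clusters_per_arm(delta, sd, m, icc):
    """Clusters needed per arm after inflating the unadjusted sample size."""
    n = unadjusted_n(delta, sd) * design_effect_crt(m, icc)
    return ceil(n / m)
```

For a standardized effect of 0.5, clusters of size 20, and an ICC of 0.05, the unadjusted per-arm n of about 63 is inflated by a design effect of 1.95, giving 7 clusters per arm; raising the ICC raises the design effect, which is why the most efficient design depends on the ICC and cluster size, as the abstract notes.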

  4. Factors affecting the informal payments in public and teaching hospitals.

    PubMed

    Aboutorabi, Ali; Ghiasipour, Maryam; Rezapour, Aziz; Pourreza, Abolghasem; Sarabi Asiabar, Ali; Tanoomand, Asghar

    2016-01-01

    Informal payments in the health sector of many developing countries are considered a major impediment to health care reform. Informal payments are a form of systemic fraud and have adverse effects on the performance of the health system. In this study, the frequency and extent of informal payments, as well as the determinants of these payments, were investigated in general hospitals affiliated with Tehran University of Medical Sciences. In this cross-sectional study, 300 discharged patients were selected using a multi-stage random sampling method. First, three hospitals were selected randomly; then, through simple random sampling, we recruited 300 discharged patients from internal, surgery, emergency, ICU & CCU wards. All data were collected by structured telephone interviews and questionnaire. We analyzed the data using Chi-square, Kruskal-Wallis, and Mann-Whitney tests. The results indicated that 21% (n=63) of individuals paid informally to the staff. About 4% (n=12) of the participants were faced with informal payment requests from hospital staff. There was a significant relationship between the frequency of informal payments and both the marital status of participants and the type of hospital. According to our findings, none of the respondents made informal payments to physicians. The most frequent informal payments were in cash and were made to the hospitals' housekeeping staff to ensure more and better services. There was no significant relationship between informal payments and socio-demographic characteristics, residential area, or insurance status. Our findings revealed that many strategies can be used for both controlling and reducing informal payments. These include training patients and hospital staff, increasing the income levels of employees, improving the quantity and quality of health services, and changing the entrenched beliefs that necessitate informal payments.
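The two-stage selection described above (random hospitals first, then a simple random sample of discharged patients within each) can be sketched as follows; the frame sizes and counts are invented for illustration and do not reflect the study's actual sampling frame.

```python
import random

rng = random.Random(7)

# Hypothetical sampling frame: hospitals mapped to their discharged patients
frame = {f"hospital_{h}": [f"patient_{h}_{p}" for p in range(500)]
         for h in range(8)}

# Stage 1: simple random selection of three hospitals
hospitals = rng.sample(sorted(frame), 3)

# Stage 2: simple random sample of 100 discharged patients per hospital
sample = [p for h in hospitals for p in rng.sample(frame[h], 100)]
```

Sampling without replacement at both stages yields 300 distinct patients, matching the study's overall sample size.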

  5. Mobile Phone Apps for Smoking Cessation: Quality and Usability Among Smokers With Psychosis.

    PubMed

    Ferron, Joelle C; Brunette, Mary F; Geiger, Pamela; Marsch, Lisa A; Adachi-Mejia, Anna M; Bartels, Stephen J

    2017-03-03

    Smoking is one of the top preventable causes of mortality in people with psychotic disorders such as schizophrenia. Cessation treatment improves abstinence outcomes, but access is a barrier. Mobile phone apps are one way to increase access to cessation treatment; however, whether they are usable by people with psychotic disorders, who often have special learning needs, is not known. Researchers reviewed 100 randomly selected apps for smoking cessation to rate them based on US guidelines for nicotine addiction treatment and to categorize them based on app functions. We aimed to test the usability and usefulness of the top-rated apps in 21 smokers with psychotic disorders. We identified 766 smoking cessation apps and randomly selected 100 for review. Two independent reviewers rated each app with the Adherence Index to US Clinical Practice Guideline for Treating Tobacco Use and Dependence. Then, smokers with psychotic disorders evaluated the top 9 apps within a usability testing protocol. We analyzed quantitative results using descriptive statistics and t tests. Qualitative data were open-coded and analyzed for themes. Regarding adherence to practice guidelines, most of the randomly sampled smoking cessation apps scored poorly: 66% rated lower than 10 out of 100 on the Adherence Index (Mean 11.47, SD 11.8). Regarding usability, three common usability problems emerged: text-dense content, abstract symbols on the homepage, and subtle directions to edit features. For apps to be effective and usable for this population, developers should use a balance of text and simple design that facilitates navigation and content comprehension, helping people learn quit-smoking skills. ©Joelle C Ferron, Mary F Brunette, Pamela Geiger, Lisa A Marsch, Anna M Adachi-Mejia, Stephen J Bartels. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 03.03.2017.

  6. Mobile Phone Apps for Smoking Cessation: Quality and Usability Among Smokers With Psychosis

    PubMed Central

    Brunette, Mary F; Geiger, Pamela; Marsch, Lisa A; Adachi-Mejia, Anna M; Bartels, Stephen J

    2017-01-01

    Background Smoking is one of the top preventable causes of mortality in people with psychotic disorders such as schizophrenia. Cessation treatment improves abstinence outcomes, but access is a barrier. Mobile phone apps are one way to increase access to cessation treatment; however, whether they are usable by people with psychotic disorders, who often have special learning needs, is not known. Objective Researchers reviewed 100 randomly selected apps for smoking cessation to rate them based on US guidelines for nicotine addiction treatment and to categorize them based on app functions. We aimed to test the usability and usefulness of the top-rated apps in 21 smokers with psychotic disorders. Methods We identified 766 smoking cessation apps and randomly selected 100 for review. Two independent reviewers rated each app with the Adherence Index to US Clinical Practice Guideline for Treating Tobacco Use and Dependence. Then, smokers with psychotic disorders evaluated the top 9 apps within a usability testing protocol. We analyzed quantitative results using descriptive statistics and t tests. Qualitative data were open-coded and analyzed for themes. Results Regarding adherence to practice guidelines, most of the randomly sampled smoking cessation apps scored poorly—66% rated lower than 10 out of 100 on the Adherence Index (Mean 11.47, SD 11.8). Regarding usability, three common usability problems emerged: text-dense content, abstract symbols on the homepage, and subtle directions to edit features. Conclusions For apps to be effective and usable for this population, developers should use a balance of text and simple design that facilitates navigation and content comprehension, helping people learn quit-smoking skills. PMID:28258047

  7. Disease Surveillance on Complex Social Networks

    PubMed Central

    Herrera, Jose L.; Srinivasan, Ravi; Brownstein, John S.; Galvani, Alison P.; Meyers, Lauren Ancel

    2016-01-01

    As infectious disease surveillance systems expand to include digital, crowd-sourced, and social network data, public health agencies are gaining unprecedented access to high-resolution data and have an opportunity to selectively monitor informative individuals. Contact networks, which are the webs of interaction through which diseases spread, determine whether and when individuals become infected, and thus who might serve as early and accurate surveillance sensors. Here, we evaluate three strategies for selecting sensors—sampling the most connected, random, and friends of random individuals—in three complex social networks—a simple scale-free network, an empirical Venezuelan college student network, and an empirical Montreal wireless hotspot usage network. Across five different surveillance goals—early and accurate detection of epidemic emergence and peak, and general situational awareness—we find that the optimal choice of sensors depends on the public health goal, the underlying network and the reproduction number of the disease (R0). For diseases with a low R0, the most connected individuals provide the earliest and most accurate information about both the onset and peak of an outbreak. However, identifying network hubs is often impractical, and they can be misleading if monitored for general situational awareness, if the underlying network has significant community structure, or if R0 is high or unknown. Taking a theoretical approach, we also derive the optimal surveillance system for early outbreak detection but find that real-world identification of such sensors would be nearly impossible. By contrast, the friends-of-random strategy offers a more practical and robust alternative. It can be readily implemented without prior knowledge of the network, and by identifying sensors with higher than average, but not the highest, epidemiological risk, it provides reasonably early and accurate information. PMID:27415615
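One reading of the friends-of-random strategy described above (an illustrative sketch, not the authors' implementation): draw a random individual, then nominate one of its randomly chosen contacts as the sensor. Because a random contact has higher expected degree than a random individual (the friendship paradox), sensors land at above-average, but not maximal, epidemiological risk.

```python
import random

def friends_of_random_sensors(adj, k, rng=random.Random(0)):
    """Select k distinct surveillance sensors from an adjacency dict
    (node -> set of neighbours) by repeatedly sampling a random node
    and nominating one of its randomly chosen contacts."""
    sensors = set()
    nodes = sorted(adj)
    while len(sensors) < k:
        v = rng.choice(nodes)
        if adj[v]:                          # skip isolated individuals
            sensors.add(rng.choice(sorted(adj[v])))
    return sensors
```

On a hub-and-spoke network, most nominated friends are the hub: exactly the bias toward well-connected sensors that this strategy exploits without requiring any global knowledge of the network.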

  8. Disease Surveillance on Complex Social Networks.

    PubMed

    Herrera, Jose L; Srinivasan, Ravi; Brownstein, John S; Galvani, Alison P; Meyers, Lauren Ancel

    2016-07-01

    As infectious disease surveillance systems expand to include digital, crowd-sourced, and social network data, public health agencies are gaining unprecedented access to high-resolution data and have an opportunity to selectively monitor informative individuals. Contact networks, which are the webs of interaction through which diseases spread, determine whether and when individuals become infected, and thus who might serve as early and accurate surveillance sensors. Here, we evaluate three strategies for selecting sensors-sampling the most connected, random, and friends of random individuals-in three complex social networks-a simple scale-free network, an empirical Venezuelan college student network, and an empirical Montreal wireless hotspot usage network. Across five different surveillance goals-early and accurate detection of epidemic emergence and peak, and general situational awareness-we find that the optimal choice of sensors depends on the public health goal, the underlying network and the reproduction number of the disease (R0). For diseases with a low R0, the most connected individuals provide the earliest and most accurate information about both the onset and peak of an outbreak. However, identifying network hubs is often impractical, and they can be misleading if monitored for general situational awareness, if the underlying network has significant community structure, or if R0 is high or unknown. Taking a theoretical approach, we also derive the optimal surveillance system for early outbreak detection but find that real-world identification of such sensors would be nearly impossible. By contrast, the friends-of-random strategy offers a more practical and robust alternative. It can be readily implemented without prior knowledge of the network, and by identifying sensors with higher than average, but not the highest, epidemiological risk, it provides reasonably early and accurate information.
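The friends-of-random strategy rests on the friendship paradox: a randomly chosen friend of a randomly chosen individual tends to have above-average degree. A minimal sketch on an adjacency-dict network (the function name and toy graph are illustrative, not from the paper):

```python
import random

def friends_of_random_sensors(adjacency, k, rng=None):
    """Pick k surveillance sensors by repeatedly sampling a random
    individual and nominating one random friend of that individual.
    The nominated friends have above-average degree (friendship
    paradox), but are rarely the extreme hubs."""
    rng = rng or random.Random()
    nodes = [n for n in adjacency if adjacency[n]]  # skip isolates
    sensors = set()
    while len(sensors) < k:  # assumes k distinct friends are reachable
        person = rng.choice(nodes)
        sensors.add(rng.choice(sorted(adjacency[person])))
    return sensors

# Toy hub-and-spoke network: node 0 is connected to everyone else.
adj = {0: {1, 2, 3, 4, 5}}
for i in range(1, 6):
    adj[i] = {0}
sensors = friends_of_random_sensors(adj, 1, random.Random(42))
```

In this hub-and-spoke toy graph the nominated friend is usually the hub, which illustrates how the strategy finds higher-risk sensors without any global knowledge of the network.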

  9. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs.

    PubMed

    Petscher, Yaacov; Schatschneider, Christopher

    2011-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the simple difference and covariance-adjusted scores were more or less powerful to detect treatment effects when relaxing certain assumptions made by Huck and McLean (1975). Four factors were manipulated in the design including sample size, normality of the pretest and posttest distributions, the correlation between pretest and posttest, and posttest variance. A 5 × 5 × 4 × 3 mostly crossed design was run with 1,000 replications per condition, resulting in 226,000 unique samples. The gain score was nearly as powerful as the covariance-adjusted score when pretest and posttest variances were equal, and as powerful in fan-spread growth conditions; thus, under certain circumstances the gain score could be used in two-wave randomized experimental designs.

  10. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs

    PubMed Central

    Petscher, Yaacov; Schatschneider, Christopher

    2015-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the simple difference and covariance-adjusted scores were more or less powerful to detect treatment effects when relaxing certain assumptions made by Huck and McLean (1975). Four factors were manipulated in the design including sample size, normality of the pretest and posttest distributions, the correlation between pretest and posttest, and posttest variance. A 5 × 5 × 4 × 3 mostly crossed design was run with 1,000 replications per condition, resulting in 226,000 unique samples. The gain score was nearly as powerful as the covariance-adjusted score when pretest and posttest variances were equal, and as powerful in fan-spread growth conditions; thus, under certain circumstances the gain score could be used in two-wave randomized experimental designs. PMID:26379310
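The gain-score versus ANCOVA comparison can be sketched with a small Monte Carlo under one simple condition (bivariate-normal scores, equal pretest and posttest variances); the sample size, correlation, and effect size below are illustrative choices, not the paper's factor levels:

```python
import numpy as np

def power_sim(n=50, rho=0.5, effect=0.4, reps=2000, seed=0):
    """Monte Carlo power of the simple difference (gain) score versus
    the covariance-adjusted (ANCOVA) estimator in a two-group
    pretest-posttest randomized design. Pre and post are bivariate
    normal with correlation rho and unit variances; `effect` is added
    to the treatment group's posttest. A 1.96 normal cutoff stands in
    for the exact t critical value."""
    rng = np.random.default_rng(seed)
    g = np.repeat([0.0, 1.0], n)              # 0 = control, 1 = treatment
    hits_gain = hits_ancova = 0
    for _ in range(reps):
        pre = rng.standard_normal(2 * n)
        post = rho * pre + np.sqrt(1 - rho**2) * rng.standard_normal(2 * n)
        post = post + effect * g
        # Gain score: two-sample test on d = post - pre
        d = post - pre
        se = np.sqrt(d[g == 0].var(ddof=1) / n + d[g == 1].var(ddof=1) / n)
        hits_gain += abs(d[g == 1].mean() - d[g == 0].mean()) / se > 1.96
        # ANCOVA: post ~ 1 + pre + group; test the group coefficient
        X = np.column_stack([np.ones(2 * n), pre, g])
        beta, rss, *_ = np.linalg.lstsq(X, post, rcond=None)
        cov = (rss[0] / (2 * n - 3)) * np.linalg.inv(X.T @ X)
        hits_ancova += abs(beta[2]) / np.sqrt(cov[2, 2]) > 1.96
    return hits_gain / reps, hits_ancova / reps

p_gain, p_ancova = power_sim()
```

With rho = 0.5 and equal variances the ANCOVA residual variance (1 - rho^2 = 0.75) is smaller than the gain-score variance (2(1 - rho) = 1.0), so the covariance-adjusted analysis comes out more powerful, in line with Huck and McLean's result.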

  11. Ergonomics intervention in an Iranian television manufacturing industry.

    PubMed

    Motamedzade, M; Mohseni, M; Golmohammadi, R; Mahjoob, H

    2011-01-01

    The primary goal of this study was to use the Strain Index (SI) to assess the risk of developing upper extremity musculoskeletal disorders in a television (TV) manufacturing industry and to evaluate the effectiveness of an educational intervention. The project was designed and implemented in two stages. In the first stage, the SI score was calculated and the Nordic Musculoskeletal Questionnaire (NMQ) was completed. Following this, hazardous jobs were identified and the risk factors present in these jobs were studied. Based on these data, an educational intervention was designed and implemented. In the second stage, three months after implementing the intervention, the SI score was re-calculated and the NMQ completed again. Eighty assembly workers of an Iranian TV manufacturing industry were selected using a simple random sampling approach. The results showed that the SI score correlated well with the symptoms of musculoskeletal disorders. It was also observed that the prevalence of signs and symptoms of musculoskeletal disorders was significantly reduced after the intervention. A well-conducted interventional program with the full participation of all stakeholders can lead to a decrease in musculoskeletal disorders.

  12. Comparing the Efficiency of Two Different Extraction Techniques in Removal of Maxillary Third Molars: A Randomized Controlled Trial.

    PubMed

    Edward, Joseph; Aziz, Mubarak A; Madhu Usha, Arjun; Narayanan, Jyothi K

    2017-12-01

    Extractions are routine procedures in dental surgery. Traditional extraction techniques use a combination of severing the periodontal attachment, luxation with an elevator, and removal with forceps. A new technique for extraction of the maxillary third molar, the Joedds technique, is introduced in this study and compared with the conventional technique. One hundred people were included in the study and divided into two groups by means of simple random sampling. In one group the conventional technique of maxillary third molar extraction was used, and in the second the Joedds technique was used. Statistical analysis was carried out with Student's t test. Analysis of the 100 patients showed that the novel Joedds technique caused minimal trauma to surrounding tissues, fewer tuberosity and root fractures, and an extraction time of <2 min compared with the other group. This novel technique proved better than the conventional third molar extraction technique, with minimal complications, provided cases are properly selected and the right technique is used.

  13. Record of hospitalizations for ambulatory care sensitive conditions: validation of the hospital information system.

    PubMed

    Rehem, Tania Cristina Morais Santa Barbara; de Oliveira, Maria Regina Fernandes; Ciosak, Suely Itsuko; Egry, Emiko Yoshikawa

    2013-01-01

    To estimate the sensitivity, specificity and positive and negative predictive values of the Unified Health System's Hospital Information System for the appropriate recording of hospitalizations for ambulatory care-sensitive conditions. The hospital information system records for conditions which are sensitive to ambulatory care, and for those which are not, were considered for analysis, taking the medical records as the gold standard. Through simple random sampling, a sample of 816 medical records was defined and selected by means of a list of random numbers using the Statistical Package for Social Sciences. The sensitivity was 81.89%, specificity was 95.19%, the positive predictive value was 77.61% and the negative predictive value was 96.27%. In the study setting, the Hospital Information System (SIH) was more specific than sensitive, with nearly 20% of care sensitive conditions not detected. There are no validation studies in Brazil of the Hospital Information System records for the hospitalizations which are sensitive to primary health care. These results are relevant when one considers that this system is one of the bases for assessment of the effectiveness of primary health care.
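The four reported metrics follow directly from the 2x2 cross-classification of register entries against the gold standard (the medical record). A minimal helper, with hypothetical cell counts since the abstract reports only the derived percentages:

```python
def diagnostics(tp, fp, fn, tn):
    """Validation metrics from a 2x2 table: tp/fp/fn/tn are counts of
    true positives, false positives, false negatives, true negatives
    relative to the gold standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts chosen only to illustrate the formulas; the
# study's actual cell counts are not given in the abstract.
m = diagnostics(tp=80, fp=5, fn=20, tn=95)
```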

  14. Prevalence of faecal incontinence in community-dwelling older people in Bali, Indonesia.

    PubMed

    Suyasa, I Gede Putu Darma; Xiao, Lily Dongxia; Lynn, Penelope Ann; Skuza, Pawel Piotr; Paterson, Jan

    2015-06-01

    To explore the prevalence rate of faecal incontinence in community-dwelling older people, associated factors, impact on quality of life and practices in managing faecal incontinence. Using a cross-sectional design, 600 older people aged 60+ were randomly selected from a population of 2916 in Bali, Indonesia using a simple random sampling technique. Three hundred and three participants were interviewed (response rate 51%). The prevalence of faecal incontinence was 22.4% (95% confidence interval (CI) 18.0-26.8). Self-reported constipation (odds ratio (OR) 3.68, 95% CI 1.87-7.24) and loose stools (OR 2.66, 95% CI 1.47-4.78) were significantly associated with faecal incontinence. There was a strong positive correlation between total bowel control score and total quality-of-life score (P < 0.001, rs = 0.61) indicating significant alterations in quality of life. The current management practices varied from changing diet, visiting health-care professionals, and using modern and traditional medicines. Faecal incontinence is common among community-dwelling older people in Bali. © 2014 ACOTA.
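The reported prevalence and interval can be reproduced approximately with a normal-approximation (Wald) confidence interval; the case count of 68 is back-calculated from 22.4% of 303, and the abstract's slightly narrower interval may come from a different interval method:

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence with a Wald (normal-approximation) 95% CI:
    p +/- z * sqrt(p(1-p)/n)."""
    p = cases / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

# 22.4% of 303 respondents -> about 68 cases (illustrative back-calculation)
p, lo, hi = prevalence_ci(68, 303)
```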

  15. Genetic analyses of protein yield in dairy cows applying random regression models with time-dependent and temperature x humidity-dependent covariates.

    PubMed

    Brügemann, K; Gernand, E; von Borstel, U U; König, S

    2011-08-01

    Data used in the present study included 1,095,980 first-lactation test-day records for protein yield of 154,880 Holstein cows housed on 196 large-scale dairy farms in Germany. Data were recorded between 2002 and 2009 and merged with meteorological data from public weather stations. The maximum distance between each farm and its corresponding weather station was 50 km. Hourly temperature-humidity indexes (THI) were calculated using the mean of hourly measurements of dry bulb temperature and relative humidity. On the phenotypic scale, an increase in THI was generally associated with a decrease in daily protein yield. For genetic analyses, a random regression model was applied using time-dependent (d in milk, DIM) and THI-dependent covariates. Additive genetic and permanent environmental effects were fitted with this random regression model and Legendre polynomials of order 3 for DIM and THI. In addition, the fixed curve was modeled with Legendre polynomials of order 3. Heterogeneous residuals were fitted by dividing DIM into 5 classes, and by dividing THI into 4 classes, resulting in 20 different classes. Additive genetic variances for daily protein yield decreased with increasing degrees of heat stress and were lowest at the beginning of lactation and at extreme THI. Due to higher additive genetic variances, slightly higher permanent environment variances, and similar residual variances, heritabilities were highest for low THI in combination with DIM at the end of lactation. Genetic correlations among individual values for THI were generally >0.90. These trends from the complex random regression model were verified by applying relatively simple bivariate animal models for protein yield measured in 2 THI environments; that is, defining a THI value of 60 as a threshold. These high correlations indicate the absence of any substantial genotype × environment interaction for protein yield. However, heritabilities and additive genetic variances from the random regression model tended to be slightly higher in the THI range corresponding to cows' comfort zone. Selecting such superior environments for progeny testing can contribute to an accurate genetic differentiation among selection candidates. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
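The abstract does not give the exact THI equation used. One NRC-type formulation widely used in dairy heat-stress studies combines dry bulb temperature (in degrees C) and relative humidity (in percent) as below; treat the specific formula choice as an assumption, not as this paper's definition:

```python
def thi(temp_c, rh_pct):
    """An NRC-type temperature-humidity index (assumed formulation):
    THI = (1.8*T + 32) - (0.55 - 0.0055*RH) * (1.8*T - 26),
    with T the dry bulb temperature in deg C and RH in percent."""
    return (1.8 * temp_c + 32) - (0.55 - 0.0055 * rh_pct) * (1.8 * temp_c - 26)

# Example: 25 deg C at 60% relative humidity
value = thi(25.0, 60.0)
```

Values around 68 to 72 are commonly taken as the onset of mild heat stress in dairy cattle, which is consistent with the threshold of THI = 60 used in the bivariate check above the comfort zone.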

  16. A Framework for Designing Cluster Randomized Trials with Binary Outcomes

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Martinez, Andres

    2011-01-01

    The purpose of this paper is to provide a framework for approaching a power analysis for a CRT (cluster randomized trial) with a binary outcome. The authors suggest a framework in the context of a simple CRT and then extend it to a blocked design, or a multi-site cluster randomized trial (MSCRT). The framework is based on proportions, an…

  17. Understanding Statistical Power in Cluster Randomized Trials: Challenges Posed by Differences in Notation and Terminology

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Hedges, Larry; Borenstein, Michael

    2014-01-01

    Research designs in which clusters are the unit of randomization are quite common in the social sciences. Given the multilevel nature of these studies, the power analyses for these studies are more complex than in a simple individually randomized trial. Tools are now available to help researchers conduct power analyses for cluster randomized…

  18. Modeling 2D and 3D diffusion.

    PubMed

    Saxton, Michael J

    2007-01-01

    Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
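A minimal Monte Carlo sketch of obstructed diffusion on a 2D lattice, using the "blind ant" rule (a move into an obstacle is rejected and the walker stays put); the function names and parameter values are illustrative:

```python
import random

def obstructed_walk(steps, obstacles, rng, start=(0, 0)):
    """2D lattice random walk with excluded sites: a step into an
    obstacle is rejected and the walker stays put (blind ant rule)."""
    x, y = start
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        dx, dy = rng.choice(moves)
        if (x + dx, y + dy) not in obstacles:
            x, y = x + dx, y + dy
    return x, y

def mean_square_displacement(steps, density, trials=500, size=20, seed=1):
    """Average r^2 over independent trials, with obstacles placed at
    random at the given density in a (2*size)^2 region around the origin."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        obstacles = {(rng.randrange(-size, size), rng.randrange(-size, size))
                     for _ in range(int(density * (2 * size) ** 2))}
        obstacles.discard((0, 0))     # keep the starting site free
        x, y = obstructed_walk(steps, obstacles, rng)
        total += x * x + y * y
    return total / trials

msd_free = mean_square_displacement(100, 0.0)   # ~ steps for free diffusion
msd_obst = mean_square_displacement(100, 0.3)   # obstruction slows spreading
```

For an unobstructed walk the mean-square displacement grows linearly with the number of steps; adding immobile obstacles suppresses it, the qualitative effect the review builds on.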

  19. Generating constrained randomized sequences: item frequency matters.

    PubMed

    French, Robert M; Perruchet, Pierre

    2009-11-01

    All experimental psychologists understand the importance of randomizing lists of items. However, randomization is generally constrained, and these constraints (in particular, disallowing immediately repeated items), though designed to eliminate particular biases, frequently engender others. We describe a simple Monte Carlo randomization technique that solves a number of these problems. However, in many experimental settings we are concerned not only with the number and distribution of items but also with the number and distribution of transitions between items, over which the algorithm mentioned above provides no control. We therefore introduce a simple technique that uses transition tables for generating correctly randomized sequences. We present an analytic method of producing item-pair frequency tables and item-pair transitional probability tables when immediate repetitions are not allowed. We illustrate these difficulties, and how to overcome them, with reference to a classic article on word segmentation in infants. Finally, we provide free access to an Excel file that allows users to generate transition tables with up to 10 different item types, as well as appropriately distributed randomized sequences of any length without immediately repeated elements. This file is freely available from http://leadserv.u-bourgogne.fr/IMG/xls/TransitionMatrix.xls.
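The first, item-level constraint (no immediately repeated items) can be sketched as simple Monte Carlo shuffling with rejection; the function is illustrative and, as the article notes for this class of method, it controls item counts but not transition frequencies:

```python
import random

def constrained_sequence(counts, rng, max_tries=10_000):
    """Generate a randomized sequence with the given item counts and
    no immediate repetitions, by shuffling and rejecting invalid orders."""
    pool = [item for item, c in counts.items() for _ in range(c)]
    for _ in range(max_tries):
        rng.shuffle(pool)
        if all(a != b for a, b in zip(pool, pool[1:])):
            return list(pool)
    raise RuntimeError("no valid ordering found; constraints too tight")

seq = constrained_sequence({"A": 3, "B": 3, "C": 2}, random.Random(7))
```

Rejection sampling like this keeps the accepted sequences uniformly distributed over all valid orderings, but says nothing about how often each item-to-item transition occurs, which is exactly the gap the authors' transition-table technique addresses.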

  20. Robust online tracking via adaptive samples selection with saliency detection

    NASA Astrophysics Data System (ADS)

    Yan, Jia; Chen, Xi; Zhu, QiuPing

    2013-12-01

    Online tracking has shown to be successful in tracking of previously unknown objects. However, there are two important factors which lead to drift problem of online tracking, the one is how to select the exact labeled samples even when the target locations are inaccurate, and the other is how to handle the confusors which have similar features with the target. In this article, we propose a robust online tracking algorithm with adaptive samples selection based on saliency detection to overcome the drift problem. To deal with the problem of degrading the classifiers using mis-aligned samples, we introduce the saliency detection method to our tracking problem. Saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using the random patches as the negative samples, we propose a reasonable selection criterion, in which both the saliency confidence and similarity are considered with the benefits that confusors in the surrounding background are incorporated into the classifiers update process before the drift occurs. The tracking task is formulated as a binary classification via online boosting framework. Experiment results in several challenging video sequences demonstrate the accuracy and stability of our tracker.

  1. Catalytic micromotor generating self-propelled regular motion through random fluctuation.

    PubMed

    Yamamoto, Daigo; Mukai, Atsushi; Okita, Naoaki; Yoshikawa, Kenichi; Shioi, Akihisa

    2013-07-21

    Most of the current studies on nano/microscale motors to generate regular motion have adopted the strategy of fabricating a composite with different materials. In this paper, we report that a simple object solely made of platinum generates regular motion driven by a catalytic chemical reaction with hydrogen peroxide. Depending on the morphological symmetry of the catalytic particles, a rich variety of random and regular motions are observed. The experimental trend is well reproduced by a simple theoretical model by taking into account the anisotropic viscous effect on the self-propelled active Brownian fluctuation.

  2. Catalytic micromotor generating self-propelled regular motion through random fluctuation

    NASA Astrophysics Data System (ADS)

    Yamamoto, Daigo; Mukai, Atsushi; Okita, Naoaki; Yoshikawa, Kenichi; Shioi, Akihisa

    2013-07-01

    Most of the current studies on nano/microscale motors to generate regular motion have adopted the strategy of fabricating a composite with different materials. In this paper, we report that a simple object solely made of platinum generates regular motion driven by a catalytic chemical reaction with hydrogen peroxide. Depending on the morphological symmetry of the catalytic particles, a rich variety of random and regular motions are observed. The experimental trend is well reproduced by a simple theoretical model by taking into account the anisotropic viscous effect on the self-propelled active Brownian fluctuation.

  3. Distributed Clone Detection in Static Wireless Sensor Networks: Random Walk with Network Division

    PubMed Central

    Khan, Wazir Zada; Aalsalem, Mohammed Y.; Saad, N. M.

    2015-01-01

    Wireless Sensor Networks (WSNs) are vulnerable to clone attacks, or node replication attacks, as they are deployed in hostile and unattended environments where they are deprived of physical protection and lack tamper-resistant sensor nodes. As a result, an adversary can easily capture and compromise sensor nodes and, after replicating them, insert an arbitrary number of clones/replicas into the network. If these clones are not efficiently detected, the adversary can further mount a wide variety of internal attacks which can emasculate the various protocols and sensor applications. Several solutions have been proposed in the literature to address the crucial problem of clone detection, but they are not satisfactory as they suffer from serious drawbacks. In this paper we propose a novel distributed solution, Random Walk with Network Division (RWND), for the detection of node replication attacks in static WSNs. RWND detects clones by following a claimer-reporter-witness framework, with a random walk employed within each area for the selection of witness nodes. Splitting the network into levels and areas makes clone detection more efficient, and the high security of witness nodes is ensured with moderate communication and memory overheads. Our simulation results show that RWND outperforms the existing witness-node-based strategies with moderate communication and memory overheads. PMID:25992913

  4. Orbitofrontal cortical activity during repeated free choice.

    PubMed

    Campos, Michael; Koppitch, Kari; Andersen, Richard A; Shimojo, Shinsuke

    2012-06-01

    Neurons in the orbitofrontal cortex (OFC) have been shown to encode subjective values, suggesting a role in preference-based decision-making, although the precise relation to choice behavior is unclear. In a repeated two-choice task, subjective values of each choice can account for aggregate choice behavior, which is the overall likelihood of choosing one option over the other. Individual choices, however, are impossible to predict with knowledge of relative subjective values alone. In this study we investigated the role of internal factors in choice behavior with a simple but novel free-choice task and simultaneous recording from individual neurons in nonhuman primate OFC. We found that, first, the observed sequences of choice behavior included periods of exceptionally long runs of each of two available options and periods of frequent switching. Neither a satiety-based mechanism nor a random selection process could explain the observed choice behavior. Second, OFC neurons encode important features of the choice behavior. These features include activity selective for exceptionally long runs of a given choice (stay selectivity) as well as activity selective for switches between choices (switch selectivity). These results suggest that OFC neural activity, in addition to encoding subjective values on a long timescale that is sensitive to satiety, also encodes a signal that fluctuates on a shorter timescale and thereby reflects some of the statistically improbable aspects of free-choice behavior.

  5. Orbitofrontal cortical activity during repeated free choice

    PubMed Central

    Koppitch, Kari; Andersen, Richard A.; Shimojo, Shinsuke

    2012-01-01

    Neurons in the orbitofrontal cortex (OFC) have been shown to encode subjective values, suggesting a role in preference-based decision-making, although the precise relation to choice behavior is unclear. In a repeated two-choice task, subjective values of each choice can account for aggregate choice behavior, which is the overall likelihood of choosing one option over the other. Individual choices, however, are impossible to predict with knowledge of relative subjective values alone. In this study we investigated the role of internal factors in choice behavior with a simple but novel free-choice task and simultaneous recording from individual neurons in nonhuman primate OFC. We found that, first, the observed sequences of choice behavior included periods of exceptionally long runs of each of two available options and periods of frequent switching. Neither a satiety-based mechanism nor a random selection process could explain the observed choice behavior. Second, OFC neurons encode important features of the choice behavior. These features include activity selective for exceptionally long runs of a given choice (stay selectivity) as well as activity selective for switches between choices (switch selectivity). These results suggest that OFC neural activity, in addition to encoding subjective values on a long timescale that is sensitive to satiety, also encodes a signal that fluctuates on a shorter timescale and thereby reflects some of the statistically improbable aspects of free-choice behavior. PMID:22423007

  6. An evaluation of selected NASA scientific and technical information products: Results of a pilot study

    NASA Technical Reports Server (NTRS)

    Pinelli, Thomas E.; Glassman, Myron

    1989-01-01

    A pilot study was conducted to evaluate selected NASA scientific and technical information (STI) products. The study, which utilized survey research in the form of a self-administered mail questionnaire, had a two-fold purpose -- to gather baseline data regarding the use and perceived usefulness of selected NASA STI products and to develop/validate questions that could be used in a future study concerned with the role of the U.S. government technical report in aeronautics. The sample frame consisted of 25,000 members of the American Institute of Aeronautics and Astronautics in the U.S. with academic, government or industrial affiliation. Simple random sampling was used to select 2000 individuals to participate in the study. Three hundred fifty-three usable questionnaires (17 percent response rate) were received by the established cutoff date. The findings indicate that: (1) NASA STI is used and is generally perceived as being important; (2) the use rate for NASA-authored conference/meeting papers, journal articles, and technical reports is fairly uniform; (3) a considerable number of respondents are unfamiliar with STAR (Scientific and Technical Aerospace Reports), IAA (International Aerospace Abstracts), SCAN (Selected Current Aerospace Notices), and the RECON on-line retrieval system; (4) a considerable number of respondents who are familiar with these media do not use them; and (5) the perceived quality of NASA-authored journal articles and technical reports is very good.

  7. Item Selection, Evaluation, and Simple Structure in Personality Data

    PubMed Central

    Pettersson, Erik; Turkheimer, Eric

    2010-01-01

    We report an investigation of the genesis and interpretation of simple structure in personality data using two very different self-reported data sets. The first consists of a set of relatively unselected lexical descriptors, whereas the second is based on responses to a carefully constructed instrument. In both data sets, we explore the degree of simple structure by comparing factor solutions to solutions from simulated data constructed to have either strong or weak simple structure. The analysis demonstrates that there is little evidence of simple structure in the unselected items, and a moderate degree among the selected items. In both instruments, however, much of the simple structure that could be observed originated in a strong dimension of positive vs. negative evaluation. PMID:20694168

  8. Determining Phylogenetic Relationships Among Date Palm Cultivars Using Random Amplified Polymorphic DNA (RAPD) and Inter-Simple Sequence Repeat (ISSR) Markers.

    PubMed

    Haider, Nadia

    2017-01-01

    Investigation of genetic variation and phylogenetic relationships among date palm (Phoenix dactylifera L.) cultivars is useful for their conservation and genetic improvement. Various molecular markers such as restriction fragment length polymorphisms (RFLPs), simple sequence repeat (SSR), representational difference analysis (RDA), and amplified fragment length polymorphism (AFLP) have been developed to molecularly characterize date palm cultivars. PCR-based markers random amplified polymorphic DNA (RAPD) and inter-simple sequence repeat (ISSR) are powerful tools to determine the relatedness of date palm cultivars that are difficult to distinguish morphologically. In this chapter, the principles, materials, and methods of RAPD and ISSR techniques are presented. Analysis of data generated from these two techniques and the use of these data to reveal phylogenetic relationships among date palm cultivars are also discussed.

  9. Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule

    NASA Technical Reports Server (NTRS)

    Bay, Stephen D.; Schwabacher, Mark

    2003-01-01

    Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
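The nested loop with randomization and pruning can be sketched as follows, scoring each example by its distance to its k-th nearest neighbour and abandoning an example as soon as that running score falls below the weakest score in the current top list (a simplified sketch in the spirit of the algorithm, not the authors' implementation):

```python
import heapq
import math
import random

def top_outliers(points, k=3, n_out=3, seed=0):
    """Distance-based outliers via a nested loop with a simple pruning
    rule. Randomizing the scan order lets the cutoff rise quickly, so
    most non-outliers are pruned after examining only a few neighbours."""
    pts = points[:]
    random.Random(seed).shuffle(pts)   # random order is key to pruning
    top = []                           # min-heap of (score, point)
    cutoff = 0.0                       # weakest score among current top
    for p in pts:
        neigh = []                     # negated distances: k nearest so far
        pruned = False
        for q in pts:
            if q is p:
                continue
            d = math.dist(p, q)
            if len(neigh) < k:
                heapq.heappush(neigh, -d)
            elif d < -neigh[0]:
                heapq.heapreplace(neigh, -d)
            if len(neigh) == k and -neigh[0] <= cutoff:
                pruned = True          # cannot beat current top outliers
                break
        if not pruned:
            score = -neigh[0]          # distance to k-th nearest neighbour
            if len(top) < n_out:
                heapq.heappush(top, (score, p))
            else:
                heapq.heappushpop(top, (score, p))
            if len(top) == n_out:
                cutoff = top[0][0]
    return sorted(top, reverse=True)

# Dense cluster near the origin plus one obvious outlier.
data = [(x * 0.1, y * 0.1) for x in range(5) for y in range(5)] + [(5.0, 5.0)]
result = top_outliers(data, k=3, n_out=1)
```

The pruning makes the inner loop terminate early for the many unremarkable examples, which is the source of the near-linear average-case behaviour the paper reports.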

  10. Factors Associated With Time to Site Activation, Randomization, and Enrollment Performance in a Stroke Prevention Trial.

    PubMed

    Demaerschalk, Bart M; Brown, Robert D; Roubin, Gary S; Howard, Virginia J; Cesko, Eldina; Barrett, Kevin M; Longbottom, Mary E; Voeks, Jenifer H; Chaturvedi, Seemant; Brott, Thomas G; Lal, Brajesh K; Meschia, James F; Howard, George

    2017-09-01

    Multicenter clinical trials attempt to select sites that can move rapidly to randomization and enroll sufficient numbers of patients. However, there are few assessments of the success of site selection. In the CREST-2 (Carotid Revascularization and Medical Management for Asymptomatic Carotid Stenosis Trials), we assess factors associated with the time between site selection and authorization to randomize, the time between authorization to randomize and the first randomization, and the average number of randomizations per site per month. Potential factors included characteristics of the site, specialty of the principal investigator, and site type. For 147 sites, the median time between site selection to authorization to randomize was 9.9 months (interquartile range, 7.7, 12.4), and factors associated with early site activation were not identified. The median time between authorization to randomize and a randomization was 4.6 months (interquartile range, 2.6, 10.5). Sites with authorization to randomize in only the carotid endarterectomy study were slower to randomize, and other factors examined were not significantly associated with time-to-randomization. The recruitment rate was 0.26 (95% confidence interval, 0.23-0.28) patients per site per month. By univariate analysis, factors associated with faster recruitment were authorization to randomize in both trials, principal investigator specialties of interventional radiology and cardiology, pre-trial reported performance >50 carotid angioplasty and stenting procedures per year, status in the top half of recruitment in the CREST trial, and classification as a private health facility. Participation in StrokeNet was associated with slower recruitment as compared with the non-StrokeNet sites. Overall, selection of sites with high enrollment rates will likely require customization to align the sites selected to the factor under study in the trial. URL: http://www.clinicaltrials.gov. Unique identifier: NCT02089217. © 2017 American Heart Association, Inc.

  11. Evaluation of two-fold fully conditional specification multiple imputation for longitudinal electronic health record data

    PubMed Central

    Welch, Catherine A; Petersen, Irene; Bartlett, Jonathan W; White, Ian R; Marston, Louise; Morris, Richard W; Nazareth, Irwin; Walters, Kate; Carpenter, James

    2014-01-01

    Most implementations of multiple imputation (MI) of missing data are designed for simple rectangular data structures and ignore the temporal ordering of data. Therefore, when applying MI to longitudinal data with intermittent patterns of missing data, alternative strategies must be considered. One approach is to divide the data into time blocks and implement MI independently within each block. An alternative approach is to include all time blocks in the same MI model. With increasing numbers of time blocks, this approach is likely to break down because of co-linearity and over-fitting. The new two-fold fully conditional specification (FCS) MI algorithm addresses these issues by conditioning only on measurements that are local in time. We describe and report the results of a novel simulation study to critically evaluate the two-fold FCS algorithm and its suitability for imputation of longitudinal electronic health records. After generating a full data set, approximately 70% of selected continuous and categorical variables were made missing completely at random in each of ten time blocks. Subsequently, we applied a simple time-to-event model. We compared the efficiency of estimated coefficients from a complete-records analysis, MI of data in the baseline time block, and the two-fold FCS algorithm. The results show that the two-fold FCS algorithm maximises the use of the available data, with the gain relative to baseline MI depending on the strength of correlations within and between variables. Using this approach also increases the plausibility of the missing-at-random assumption, by using repeated measures over time of variables whose baseline values may be missing. PMID:24782349
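    The core idea of the two-fold FCS algorithm — imputing each time block conditioning only on measurements in adjacent blocks — can be sketched as follows. This is an illustrative stand-in, not the published algorithm: it uses scikit-learn's IterativeImputer as a generic FCS engine, and the block layout (one column per block) and missingness rate are assumptions.

```python
# Sketch: impute each time block t using only the window (t-1, t, t+1),
# rather than conditioning on all blocks at once.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n_subjects, n_blocks = 200, 10

# One column per time block; values correlated across adjacent blocks.
data = np.cumsum(rng.normal(size=(n_subjects, n_blocks)), axis=1)
mask = rng.random(data.shape) < 0.3          # 30% missing completely at random
observed = np.where(mask, np.nan, data)

imputed = observed.copy()
for t in range(n_blocks):
    lo, hi = max(0, t - 1), min(n_blocks, t + 2)   # local window: t-1, t, t+1
    window = observed[:, lo:hi]
    filled = IterativeImputer(random_state=0).fit_transform(window)
    imputed[:, t] = filled[:, t - lo]              # keep only block t's column
```

Conditioning on a local window keeps the imputation model small even as the number of blocks grows, which is exactly the co-linearity/over-fitting problem the abstract describes.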

  12. Statistical inferences for data from studies conducted with an aggregated multivariate outcome-dependent sample design

    PubMed Central

    Lu, Tsui-Shan; Longnecker, Matthew P.; Zhou, Haibo

    2016-01-01

    An outcome-dependent sampling (ODS) scheme is a cost-effective design in which the exposure is observed with a probability that depends on the outcome. Well-known examples are the case-control design for a binary response, the case-cohort design for failure-time data, and the general ODS design for a continuous response. While substantial work has been done for the univariate response case, statistical inference and design for ODS with multivariate responses remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (Multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the Multivariate-ODS design is semiparametric: all the underlying distributions of covariates are modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and establish its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the Multivariate-ODS, or the estimator from a simple random sample of the same size. The Multivariate-ODS design, together with the proposed estimator, provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of PCB exposure with hearing loss in children born into the Collaborative Perinatal Study. PMID:27966260

  13. Age and sex prevalence of infectious dermatoses among primary school children in a rural South-Eastern Nigerian community

    PubMed Central

    Kalu, Eziyi Iche; Wagbatsoma, Victoria; Ogbaini-Emovon, Ephraim; Nwadike, Victor Ugochukwu; Ojide, Chiedozie Kingsley

    2015-01-01

    Introduction Various dermatoses, owing to their morbidity characteristics, have been shown to negatively impact learning. The most epidemiologically important appear to be the infectious types, because of their transmissibility and amenability to simple school-health measures. The aim of this study was to assess the prevalence and sex/age correlates of infectious dermatoses in a rural South-eastern Nigerian community. Methods Pupils were recruited proportionately from the three primary schools based on school population. A stratified simple random sampling method was adopted, and a table of random numbers was used to select the required pupils from each arm. Clinical and laboratory examination was done to establish diagnoses of infectious skin disease. Data collected were analyzed using SPSS version 16. Results The 400 pupils consisted of 153 males and 247 females. Ages ranged between 6 and 12 years. The prevalence of infectious dermatoses was 72.3%. The five most prevalent clinical forms of infectious dermatoses, in order of decreasing prevalence, were tinea capitis (35.2%), scabies (10.5%), tinea corporis (5.8%), tinea pedis (5.5%), and impetigo (5.0%). Overall, more cases occurred among males than among females (80.4% vs 67.2%), while some specific clinical types, pediculosis and seborrheic dermatitis, exhibited a predilection for females. Pyodermas and scabies were significantly more prevalent in the 7-9-year age group, while tinea capitis, tinea corporis, seborrheic dermatitis and pediculosis were more associated with the ≥10-year age group. Conclusion Infectious dermatoses were highly prevalent in the surveyed population. Many of the clinical types exhibited sex- and age-specificity. PMID:26430479

  14. Intraoperative determination of the extent of corpus callosotomy for epilepsy: two simple techniques.

    PubMed

    Awad, I A; Wyllie, E; Luders, H; Ahl, J

    1990-01-01

    There is increasing interest in staged corpus callosotomy for intractable generalized epilepsy. At the first procedure, a portion (usually the anterior two-thirds) of the corpus callosum is sectioned. If seizures persist, completion of the callosotomy or alternative treatment approaches can be considered. It is obviously important to ascertain that the desired extent of callosotomy was in fact accomplished at the initial operation. Our experience and the published literature indicate that the surgeon's impression at operation can be erroneous. We describe a technique for determining the extent of corpus callosotomy during the procedure. The magnetic resonance imaging (MRI) scan in the midsagittal plane is used to select the desired extent of callosotomy. That point on the corpus callosum is characterized using simple planar geometry in relation to three anatomic landmarks in the same plane: the glabella, the inion, and the bregma (midline intersection of the coronal suture). The same point along the corpus callosum can then be located on a lateral skull x-ray using these same three landmarks. At surgery, an intraoperative lateral skull x-ray is obtained with a marking clip, thereby verifying the actual extent of callosotomy. We have verified the reliability of this scheme in 5 callosotomy procedures and have used the technique for intraoperative localization of midline and parasagittal targets in another 7 cases (3 tumors, 2 aneurysms, and 2 placements of interhemispheric subdural grids). In addition, we reviewed corpus callosum topography on 25 randomly selected MRI scans. (ABSTRACT TRUNCATED AT 250 WORDS)

  15. [Observation of curative effect of herpes zoster treated with acupuncture based on syndrome differentiation combined with pricking and cupping].

    PubMed

    Pan, Hua

    2011-10-01

    To compare the curative effects of acupuncture based on syndrome differentiation combined with pricking and cupping versus simple pricking and cupping in the treatment of herpes zoster. Eighty-six cases were randomly divided into an observation group (43 cases) and a control group (43 cases). In the observation group, cases were classified into damp heat in the liver and gallbladder, damp retention with spleen deficiency, and qi deficiency with blood stasis; acupoints such as Quchi (LI 11), Zusanli (ST 36) and Sanyinjiao (SP 6) were selected according to syndrome differentiation, and pricking and cupping were applied at the affected parts. In the control group, all cases were treated with pricking and cupping at the affected parts only. Treatment was given once a day, with seven days constituting one course. The curative effect was evaluated after 2 courses, and follow-up was carried out after 1 month. The cured-and-markedly-effective rate was 93.0% (40/43) in the observation group, superior to the 67.4% (29/43) in the control group (P < 0.01). The incidence of postherpetic neuralgia was 2.3% (1/43) in the observation group, lower than the 14.0% (6/43) in the control group. Comparison among the 3 types: for damp heat in the liver and gallbladder, the cured-and-markedly-effective rate was 94.7% (18/19) in the observation group and 85.7% (18/21) in the control group, with no significant difference between groups (P > 0.05). For damp retention with spleen deficiency, the rate was 93.8% (15/16) in the observation group, superior to the 60.0% (9/15) in the control group (P < 0.05). For qi deficiency with blood stasis, the rate was 87.5% (7/8) in the observation group, superior to the 28.6% (2/7) in the control group (P < 0.05). For herpes zoster, acupoint selection based on syndrome differentiation combined with pricking and cupping therapy is highly targeted and effective; postherpetic neuralgia can be reduced significantly, and the curative effect is superior to that of simple pricking and cupping.

  16. Eradication of Helicobacter pylori for prevention of ulcer recurrence after simple closure of perforated peptic ulcer: a meta-analysis of randomized controlled trials.

    PubMed

    Wong, Chung-Shun; Chia, Chee-Fah; Lee, Hung-Chia; Wei, Po-Li; Ma, Hon-Ping; Tsai, Shin-Han; Wu, Chih-Hsiung; Tam, Ka-Wai

    2013-06-15

    Eradication of Helicobacter pylori has become part of the standard therapy for peptic ulcer. However, the role of H pylori eradication in perforation of peptic ulcers remains controversial. It is unclear whether eradication of the bacterium confers prolonged ulcer remission after simple repair of perforated peptic ulcer. A systematic review and meta-analysis of randomized controlled trials was performed to evaluate the effects of H pylori eradication on prevention of ulcer recurrence after simple closure of perforated peptic ulcers. The primary outcome to evaluate these effects was the incidence of postoperative ulcers; the secondary outcome was the rate of H pylori elimination. The meta-analysis included five randomized controlled trials and 401 patients. A high prevalence of H pylori infection occurred in patients with perforated peptic ulcers. Eradication of H pylori significantly reduced the incidence of ulcer recurrence at 8 wk (risk ratio 2.97; 95% confidence interval: 1.06-8.29) and 1 y (risk ratio 1.49; 95% confidence interval: 1.10-2.03) postoperation. The rate of H pylori eradication was significantly higher in the treatment group than in the nontreatment group. Eradication therapy should be provided to patients with H pylori infection after simple closure of perforated gastroduodenal ulcers. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…

  18. The Expected Sample Variance of Uncorrelated Random Variables with a Common Mean and Some Applications in Unbalanced Random Effects Models

    ERIC Educational Resources Information Center

    Vardeman, Stephen B.; Wendelberger, Joanne R.

    2005-01-01

    There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean [mu] and variance [sigma][superscript 2], the expected value of the sample variance is [sigma][superscript 2]. The generalization justifies the use of the usual standard error of the sample mean in possibly…
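    The little-known generalization this abstract refers to can be checked numerically: for uncorrelated random variables with a common mean but unequal variances, the expected value of the usual sample variance is the average of the individual variances. The Monte Carlo demo below is illustrative; the variances chosen are arbitrary.

```python
# Monte Carlo check: E[S^2] equals the mean of the individual variances
# when uncorrelated variables share a common mean.
import numpy as np

rng = np.random.default_rng(1)
mu = 5.0
sigmas = np.array([0.5, 1.0, 2.0, 3.0])   # unequal standard deviations
n_reps = 200_000

samples = rng.normal(mu, sigmas, size=(n_reps, sigmas.size))
s2 = samples.var(axis=1, ddof=1)          # usual sample variance per replicate

print(s2.mean())            # ≈ 3.5625
print(np.mean(sigmas**2))   # (0.25 + 1 + 4 + 9) / 4 = 3.5625
```

This is the result that justifies using the usual standard error of the sample mean in the unbalanced random-effects setting the abstract mentions.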

  19. Simple to complex modeling of breathing volume using a motion sensor.

    PubMed

    John, Dinesh; Staudenmayer, John; Freedson, Patty

    2013-06-01

    To compare simple and complex modeling techniques for estimating categories of low, medium, and high ventilation (VE) from ActiGraph™ activity counts. Vertical-axis ActiGraph™ GT1M activity counts, oxygen consumption and VE were measured during treadmill walking and running, sports, household chores and labor-intensive employment activities. Categories of low (<19.3 l/min), medium (19.3 to 35.4 l/min) and high (>35.4 l/min) VE were derived from activity intensity classifications (light <2.9 METs, moderate 3.0 to 5.9 METs and vigorous >6.0 METs). We examined the accuracy of two simple techniques (multiple regression and activity-count cut-point analyses) and one complex technique (random forests) in predicting VE from activity counts. Prediction accuracy of the complex random forest technique was marginally better than that of the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). ActiGraph™ cut-points for low, medium and high VE were <1381, 1381 to 3660 and >3660 cpm. There were minor differences in prediction accuracy between the multiple regression and random forest techniques. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose-response relationship between internal exposure to pollutants and disease. Copyright © 2013 Elsevier B.V. All rights reserved.
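    The cut-point versus random forest comparison can be sketched on synthetic data. The cut-points below are the ones reported in the abstract, but the count-to-VE relationship and noise level are invented for illustration, not the study's calibration.

```python
# Sketch: classify synthetic activity counts into low/medium/high VE with a
# simple cut-point rule versus a random forest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
counts = rng.uniform(0, 6000, size=1000)
# Synthetic "true" VE driven by counts plus noise (assumed relationship).
ve = 10 + 0.008 * counts + rng.normal(0, 4, size=counts.size)
category = np.digitize(ve, [19.3, 35.4])       # 0=low, 1=medium, 2=high

# Simple technique: the abstract's count cut-points.
cut_pred = np.digitize(counts, [1381, 3660])

# Complex technique: random forest on the same single predictor.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(counts.reshape(-1, 1), category)
rf_pred = rf.predict(counts.reshape(-1, 1))

print("cut-point accuracy:", (cut_pred == category).mean())
print("random forest accuracy:", (rf_pred == category).mean())
```

With a single predictor and monotone relationship, the two approaches land close together, which mirrors the abstract's finding of only minor differences.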

  20. Smooth Scalar-on-Image Regression via Spatial Bayesian Variable Selection

    PubMed Central

    Goldsmith, Jeff; Huang, Lei; Crainiceanu, Ciprian M.

    2013-01-01

    We develop scalar-on-image regression models when images are registered multidimensional manifolds. We propose a fast and scalable Bayes inferential procedure to estimate the image coefficient. The central idea is the combination of an Ising prior distribution, which controls a latent binary indicator map, and an intrinsic Gaussian Markov random field, which controls the smoothness of the nonzero coefficients. The model is fit using a single-site Gibbs sampler, which allows fitting within minutes for hundreds of subjects with predictor images containing thousands of locations. The code is simple and is provided in less than one page in the Appendix. We apply this method to a neuroimaging study where cognitive outcomes are regressed on measures of white matter microstructure at every voxel of the corpus callosum for hundreds of subjects. PMID:24729670
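    The latent binary indicator map with an Ising prior, updated by a single-site Gibbs sampler, is the engine of the method described above. The toy sweep below shows only that Ising/Gibbs component on a small grid; the grid size, coupling strength, and free boundary are arbitrary assumptions, and the Gaussian Markov random field part is omitted.

```python
# Single-site Gibbs sweep for a binary (Ising) indicator map.
import numpy as np

rng = np.random.default_rng(3)
size, beta = 20, 0.5                 # grid side length, interaction strength
spins = rng.choice([-1, 1], size=(size, size))

def gibbs_sweep(spins, beta, rng):
    n = spins.shape[0]
    for i in range(n):
        for j in range(n):
            # Sum of the four nearest neighbours (free boundary).
            s = sum(spins[x, y]
                    for x, y in [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
                    if 0 <= x < n and 0 <= y < n)
            # Full conditional: P(spin=+1 | neighbours) = 1/(1+exp(-2*beta*s)).
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * s))
            spins[i, j] = 1 if rng.random() < p_up else -1
    return spins

for _ in range(10):
    spins = gibbs_sweep(spins, beta, rng)
```

Because each site's full conditional depends only on its neighbours, a sweep is cheap, which is why the paper's fit stays within minutes even for images with thousands of locations.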

  1. Data article on the effect of work engagement strategies on faculty staff behavioural outcomes in private universities.

    PubMed

    Falola, Hezekiah Olubusayo; Olokundun, Maxwell Ayodele; Salau, Odunayo Paul; Oludayo, Olumuyiwa Akinrole; Ibidunni, Ayodotun Stephen

    2018-06-01

    The main objective of this study was to present a data article investigating the effect of work engagement strategies on faculty behavioural outcomes. Few studies analyse how work engagement strategies could help drive standard work behaviour, particularly in higher institutions. In an attempt to bridge this gap, this study was carried out using a descriptive research method and a Structural Equation Model (AMOS 22) for the analysis of four hundred and forty-one (441) valid questionnaires completed by faculty members of six selected private universities in Nigeria, chosen using stratified and simple random sampling techniques. A factor model showing high reliability and good fit was generated, while construct validity was established through convergent and discriminant analyses.

  2. Use of electronic healthcare records in large-scale simple randomized trials at the point of care for the documentation of value-based medicine.

    PubMed

    van Staa, T-P; Klungel, O; Smeeth, L

    2014-06-01

    A solid foundation of evidence of the effects of an intervention is a prerequisite of evidence-based medicine. The best source of such evidence is considered to be randomized trials, which are able to avoid confounding. However, they may not always estimate effectiveness in clinical practice. Databases that collate anonymized electronic health records (EHRs) from different clinical centres have been widely used for many years in observational studies. Randomized point-of-care trials have been initiated recently to recruit and follow patients using the data from EHR databases. In this review, we describe how EHR databases can be used for conducting large-scale simple trials and discuss the advantages and disadvantages of their use. © 2014 The Association for the Publication of the Journal of Internal Medicine.

  3. PERMutation Using Transposase Engineering (PERMUTE): A Simple Approach for Constructing Circularly Permuted Protein Libraries.

    PubMed

    Jones, Alicia M; Atkinson, Joshua T; Silberg, Jonathan J

    2017-01-01

    Rearrangements that alter the order of a protein's sequence are used in the lab to study protein folding, improve activity, and build molecular switches. One of the simplest ways to rearrange a protein sequence is through random circular permutation, where native protein termini are linked together and new termini are created elsewhere through random backbone fission. Transposase mutagenesis has emerged as a simple way to generate libraries encoding different circularly permuted variants of proteins. With this approach, a synthetic transposon (called a permuteposon) is randomly inserted throughout a circularized gene to generate vectors that express different permuted variants of a protein. In this chapter, we outline the protocol for constructing combinatorial libraries of circularly permuted proteins using transposase mutagenesis, and we describe the different permuteposons that have been developed to facilitate library construction.
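    At the sequence level, random circular permutation is simple to picture: join the native termini into a circle, then cut at a random position to create new termini. The sketch below mimics what permuteposon insertion into a circularized gene achieves; the example sequence is hypothetical.

```python
# Sketch: circular permutation of a gene sequence.
import random

def circular_permutations(seq):
    """All sequences obtainable by cutting the circularized input once."""
    return [seq[i:] + seq[:i] for i in range(len(seq))]

def random_permutant(seq, rng=random):
    """One random cut of the circularized sequence, as in library construction."""
    cut = rng.randrange(len(seq))
    return seq[cut:] + seq[:cut]

gene = "ATGGCTAAAGGTGAA"          # hypothetical 15-nt open reading frame
library = circular_permutations(gene)
variant = random_permutant(gene)
print(len(library))              # one variant per possible cut site
```

In the real protocol the transposon also supplies a linker joining the native termini; that detail is omitted here.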

  4. Equivalence between Step Selection Functions and Biased Correlated Random Walks for Statistical Inference on Animal Movement.

    PubMed

    Duchesne, Thierry; Fortin, Daniel; Rivest, Louis-Paul

    2015-01-01

    Animal movement has a fundamental impact on population and community structure and dynamics. Biased correlated random walks (BCRW) and step selection functions (SSF) are commonly used to study movements. Because no studies have contrasted the parameters and the statistical properties of their estimators for models constructed under these two Lagrangian approaches, it remains unclear whether or not they allow for similar inference. First, we used the Weak Law of Large Numbers to demonstrate that the log-likelihood function for estimating the parameters of BCRW models can be approximated by the log-likelihood of SSFs. Second, we illustrated the link between the two approaches by fitting BCRW with maximum likelihood and with SSF to simulated movement data in virtual environments and to the trajectory of bison (Bison bison L.) trails in natural landscapes. Using simulated and empirical data, we found that the parameters of a BCRW estimated directly from maximum likelihood and by fitting an SSF were remarkably similar. Movement analysis is increasingly used as a tool for understanding the influence of landscape properties on animal distribution. In the rapidly developing field of movement ecology, management and conservation biologists must decide which method they should implement to accurately assess the determinants of animal movement. We showed that BCRW and SSF can provide similar insights into the environmental features influencing animal movements. Both techniques have advantages. BCRW has already been extended to allow for multi-state modeling. Unlike BCRW, however, SSF can be estimated using most statistical packages, it can simultaneously evaluate habitat selection and movement biases, and can easily integrate a large number of movement taxes at multiple scales. SSF thus offers a simple, yet effective, statistical technique to identify movement taxis.

  5. 47 CFR 1.1604 - Post-selection hearings.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...

  6. 47 CFR 1.1604 - Post-selection hearings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...

  7. Casimir rack and pinion as a miniaturized kinetic energy harvester

    NASA Astrophysics Data System (ADS)

    Miri, MirFaez; Etesami, Zahra

    2016-08-01

    We study a nanoscale machine composed of a rack and a pinion with no contact, but intermeshed via the lateral Casimir force. We adopt a simple model for the random velocity of the rack subject to external random forces, namely, a dichotomous noise with zero mean value. We show that the pinion, even when it experiences random thermal torque, can do work against a load. The device thus converts the kinetic energy of the random motions of the rack into useful work.
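    The driving noise in this model — a zero-mean dichotomous (telegraph) process — is easy to simulate: the rack velocity flips between +v and -v at exponentially distributed waiting times. Rates, amplitude, and time step below are illustrative assumptions, not parameters from the paper.

```python
# Sketch: zero-mean dichotomous (telegraph) noise for the rack velocity.
import numpy as np

rng = np.random.default_rng(4)
v, rate, dt, n_steps = 1.0, 0.5, 0.01, 100_000   # amplitude, flip rate, step

state = v
trajectory = np.empty(n_steps)
for k in range(n_steps):
    # Flip with probability rate*dt in each small time step,
    # i.e. exponentially distributed waiting times between flips.
    if rng.random() < rate * dt:
        state = -state
    trajectory[k] = state

print(abs(trajectory.mean()))   # close to 0 for long runs (zero-mean noise)
```

The interesting physics in the paper is that the pinion can rectify this unbiased noise into directed work via the lateral Casimir coupling; the sketch only reproduces the input noise.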

  8. Impact of two policy interventions on dietary diversity in Ecuador.

    PubMed

    Ponce, Juan; Ramos-Martin, Jesus

    2017-06-01

    To differentiate the effects of food vouchers and training in health and nutrition on consumption and dietary diversity in Ecuador by using an experimental design. Interventions involved enrolling three groups of approximately 200 randomly selected households per group in three provinces in Ecuador. Power estimates and sample size were computed using the Optimal Design software, with 80% power at the 5% significance level and a minimum detectable effect of 0.25 (sd). The first group was assigned to receive a monthly food voucher of $US 40. The second group was assigned to receive the same $US 40 voucher plus training on health and nutrition issues. The third group served as the control. Weekly household values of food consumption were converted into energy intake per person per day. A simple proxy indicator of dietary diversity was constructed, based on the Food Consumption Score. Finally, an econometric model with three specifications was used to analyse the differential effect of the interventions. Three provinces in Ecuador, two from the Sierra region (Carchi and Chimborazo) and one from the Coastal region (Santa Elena). Members of 773 randomly selected households (n = 4343). No significant impact on consumption was found for any of the interventions. However, there was evidence that the voucher system had a positive impact on dietary diversity. No differentiated effects were found for the training intervention. The most cost-effective intervention to improve dietary diversity in Ecuador is the use of vouchers to support family choice in food options.
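    The reported design parameters (80% power, 5% two-sided significance, minimum detectable effect of 0.25 sd) can be sanity-checked with the textbook two-sample formula. This is a back-of-envelope approximation, not a reproduction of the Optimal Design computation, which would also account for clustering.

```python
# Standard two-sample size formula: n per group = 2*(z_a + z_b)^2 / d^2.
from statistics import NormalDist

alpha, power, effect = 0.05, 0.80, 0.25
z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ≈ 1.96
z_b = NormalDist().inv_cdf(power)           # ≈ 0.84
n_per_group = 2 * (z_a + z_b) ** 2 / effect ** 2
print(round(n_per_group))   # → 251, before design-effect or attrition adjustments
```

The formula's ~251 individuals per arm is of the same order as the study's ~200 households per group, which is plausible given the cluster structure and household-level analysis.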

  9. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling

    PubMed Central

    Zhou, Fuqun; Zhang, Aining

    2016-01-01

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2–3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites, with the help of Random Forests’ features of variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables, we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data. PMID:27792152
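    The variable-importance-based subset selection can be sketched as follows. The synthetic features stand in for MODIS band/date composites; the data, class rule, and "keep the top half" cutoff are all illustrative assumptions, not the study's pipeline.

```python
# Sketch: rank features by random forest importance, keep the top half,
# and compare classification accuracy with the full feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n, n_features, n_informative = 600, 20, 6
X = rng.normal(size=(n, n_features))
# Labels depend on only a few "informative" composites.
y = (X[:, :n_informative].sum(axis=1) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
full = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Keep the top half of features by importance and refit.
top = np.argsort(full.feature_importances_)[::-1][: n_features // 2]
half = RandomForestClassifier(n_estimators=200, random_state=0)
half.fit(X_tr[:, top], y_tr)

print("full model accuracy:", full.score(X_te, y_te))
print("half model accuracy:", half.score(X_te[:, top], y_te))
```

When the informative variables survive the importance cut, the reduced model tracks the full model closely, mirroring the abstract's "about half of the variables" result.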

  10. Socio-economic Status, Needs, and Utilization of Dental Services among Rural Adults in a Primary Health Center Area in Southern India

    PubMed Central

    Bommireddy, Vikram Simha; Pachava, Srinivas; Ravoori, Srinivas; Sanikommu, Suresh; Talluri, Devaki; Vinnakota, Narayana Rao

    2014-01-01

    Background: The oral disease burden in India has shown a steady increase in recent years. Utilization of dental care, being a major factor affecting the oral health status of the population, is used as an important tool in oral health policy decision-making and is measured in terms of the number of dental visits per annum. Materials and Methods: A cross-sectional house-to-house questionnaire survey was conducted in three rural clusters which were randomly selected from a total of eight clusters served by a primary health center. Simple random sampling was used to select 100 houses from each cluster. Screening was done to examine existing oral diseases. A total of 385 completed questionnaires were collected from 300 houses. Results: Of the 385 study subjects, 183 had experienced previous dental problems. The major dental problem experienced by the study subjects was toothache (68.85%), and the treatment most often undergone was extraction (50.27%). The treatment centers most preferred by the study subjects were private dental hospitals (68.25%), and the main reason identified was accessibility, which constituted 45.24% of all the reasons given. A negative attitude toward dental care is one of the important barriers; 50.8% of the non-utilizers felt dental treatment was not very important. Conclusion: A person’s attitude, lack of awareness, and affordability remain barriers to the utilization of dental services. Effective methods have to be exercised to breach such barriers. PMID:25628485

  11. A Highly Efficient Design Strategy for Regression with Outcome Pooling

    PubMed Central

    Mitchell, Emily M.; Lyles, Robert H.; Manatunga, Amita K.; Perkins, Neil J.; Schisterman, Enrique F.

    2014-01-01

    The potential for research involving biospecimens can be hindered by the prohibitive cost of performing laboratory assays on individual samples. To mitigate this cost, strategies such as randomly selecting a portion of specimens for analysis or randomly pooling specimens prior to performing laboratory assays may be employed. These techniques, while effective in reducing cost, are often accompanied by a considerable loss of statistical efficiency. We propose a novel pooling strategy based on the k-means clustering algorithm to reduce laboratory costs while maintaining a high level of statistical efficiency when predictor variables are measured on all subjects, but the outcome of interest is assessed in pools. We perform simulations motivated by the BioCycle study to compare this k-means pooling strategy with current pooling and selection techniques under simple and multiple linear regression models. While all of the methods considered produce unbiased estimates and confidence intervals with appropriate coverage, pooling under k-means clustering provides the most precise estimates, closely approximating results from the full data and losing minimal precision as the total number of pools decreases. The benefits of k-means clustering evident in the simulation study are then applied to an analysis of the BioCycle dataset. In conclusion, when the number of lab tests is limited by budget, pooling specimens based on k-means clustering prior to performing lab assays can be an effective way to save money with minimal information loss in a regression setting. PMID:25220822
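    The k-means pooling idea — group specimens by their fully observed predictors, assay one pooled outcome per group, then regress at the pool level — can be sketched on simulated data. Everything below (sample size, pool count, true slope) is an illustrative assumption, not the BioCycle analysis.

```python
# Sketch: pool outcomes within k-means clusters of the predictor,
# then fit the regression on pool-level means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
n, n_pools = 500, 50
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(0, 1, size=n)      # true slope = 1.5

# Cluster specimens on the (cheap, fully observed) predictor.
labels = KMeans(n_clusters=n_pools, n_init=10, random_state=0).fit_predict(
    x.reshape(-1, 1))

# One assay (mean outcome) per pool; predictors averaged to match.
x_pool = np.array([x[labels == k].mean() for k in range(n_pools)])
y_pool = np.array([y[labels == k].mean() for k in range(n_pools)])

slope = np.polyfit(x_pool, y_pool, 1)[0]
print(round(slope, 2))     # close to the true slope of 1.5
```

Clustering before pooling keeps within-pool predictor values homogeneous, which is why this loses less precision than random pooling: pool means sit close to the individual points they summarize.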

  12. A highly efficient design strategy for regression with outcome pooling.

    PubMed

    Mitchell, Emily M; Lyles, Robert H; Manatunga, Amita K; Perkins, Neil J; Schisterman, Enrique F

    2014-12-10

    The potential for research involving biospecimens can be hindered by the prohibitive cost of performing laboratory assays on individual samples. To mitigate this cost, strategies such as randomly selecting a portion of specimens for analysis or randomly pooling specimens prior to performing laboratory assays may be employed. These techniques, while effective in reducing cost, are often accompanied by a considerable loss of statistical efficiency. We propose a novel pooling strategy based on the k-means clustering algorithm to reduce laboratory costs while maintaining a high level of statistical efficiency when predictor variables are measured on all subjects, but the outcome of interest is assessed in pools. We perform simulations motivated by the BioCycle study to compare this k-means pooling strategy with current pooling and selection techniques under simple and multiple linear regression models. While all of the methods considered produce unbiased estimates and confidence intervals with appropriate coverage, pooling under k-means clustering provides the most precise estimates, closely approximating results from the full data and losing minimal precision as the total number of pools decreases. The benefits of k-means clustering evident in the simulation study are then applied to an analysis of the BioCycle dataset. In conclusion, when the number of lab tests is limited by budget, pooling specimens based on k-means clustering prior to performing lab assays can be an effective way to save money with minimal information loss in a regression setting. Copyright © 2014 John Wiley & Sons, Ltd.

  13. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    PubMed

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites, with the help of Random Forests' features of variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables, we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.

  14. Simulations Using Random-Generated DNA and RNA Sequences

    ERIC Educational Resources Information Center

    Bryce, C. F. A.

    1977-01-01

    Using a very simple computer program written in BASIC, a very large number of random-generated DNA or RNA sequences are obtained. Students use these sequences to predict complementary sequences and translational products, evaluate base compositions, determine frequencies of particular triplet codons, and suggest possible secondary structures.…
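    A present-day equivalent of that BASIC program takes only a few lines of Python; the helper functions below are illustrative, covering the exercises the abstract lists (complements, base composition, codon frequencies).

```python
import random

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def random_dna(n, seed=None):
    """Generate a random DNA sequence of length n."""
    rng = random.Random(seed)
    return "".join(rng.choice("ATGC") for _ in range(n))

def complement(seq):
    """Complementary strand, written 5'->3' (i.e. the reverse complement)."""
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def base_composition(seq):
    """Fraction of each base in the sequence."""
    return {b: seq.count(b) / len(seq) for b in "ATGC"}

def codon_counts(seq):
    """Counts of triplet codons in the first reading frame."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    return {c: codons.count(c) for c in set(codons)}

seq = random_dna(30, seed=7)
print(seq)
print(complement(seq))
print(base_composition(seq))
print(codon_counts(seq))
```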

  15. THE USE OF RANDOMIZED CONTROLLED TRIALS OF IN-HOME DRINKING WATER TREATMENT TO STUDY ENDEMIC WATERBORNE DISEASE

    EPA Science Inventory

    Randomized trials of water treatment have demonstrated the ability of simple water treatments to significantly reduce the incidence of gastrointestinal illnesses in developing countries where drinking water is of poor quality. Whether or not additional treatment at the tap reduc...

  16. Group Matching: Is This a Research Technique to Be Avoided?

    ERIC Educational Resources Information Center

    Ross, Donald C.; Klein, Donald F.

    1988-01-01

    The variance of the sample difference and the power of the "F" test for mean differences were studied under group matching on covariates and also under random assignment. Results shed light on systematic assignment procedures advocated to provide more precise estimates of treatment effects than simple random assignment. (TJH)

  17. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    PubMed

    Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T

    2010-03-10

    Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases) each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of non-uniform turn angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model rather than a simple persistent random walk correctly matches the super-diffusivity in the cell migration paths as indicated by simulations based on the BCRW model.
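    The structure of such a walk can be simulated with a toy model: two alternating modes, each with exponentially distributed flight lengths but different step sizes and turning-angle spread. All parameter values below are illustrative assumptions, not fits to the paper's cell tracks.

```python
import math
import random

def bcrw(n_steps, seed=0):
    """Toy bimodal correlated random walk: alternating 'directional'
    flights (long steps, tight turns) and 're-orientation' flights
    (short steps, wide turns)."""
    rng = random.Random(seed)
    x = y = heading = 0.0
    path = [(x, y)]
    directional = True
    while len(path) <= n_steps:
        # Exponentially distributed flight length (number of moves).
        flight = max(1, int(rng.expovariate(1 / (8.0 if directional else 2.0))))
        turn_sd = 0.2 if directional else 1.5   # directional persistence
        step_mean = 1.0 if directional else 0.3
        for _ in range(flight):
            heading += rng.gauss(0, turn_sd)
            step = rng.expovariate(1 / step_mean)
            x += step * math.cos(heading)
            y += step * math.sin(heading)
            path.append((x, y))
        directional = not directional
    return path[:n_steps + 1]

path = bcrw(500)
net = math.hypot(path[-1][0] - path[0][0], path[-1][1] - path[0][1])
print(f"net displacement after 500 moves: {net:.1f}")
```

    The long, weakly turning directional flights are what give such a walk its super-diffusive spread compared with a simple persistent random walk.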

  18. A computational method for optimizing fuel treatment locations

    Treesearch

    Mark A. Finney

    2006-01-01

    Modeling and experiments have suggested that spatial fuel treatment patterns can influence the movement of large fires. On simple theoretical landscapes consisting of two fuel types (treated and untreated) optimal patterns can be analytically derived that disrupt fire growth efficiently (i.e. with less area treated than random patterns). Although conceptually simple,...

  19. Linking search space structure, run-time dynamics, and problem difficulty : a step toward demystifying tabu search.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitley, L. Darrell; Howe, Adele E.; Watson, Jean-Paul

    2004-09-01

    Tabu search is one of the most effective heuristics for locating high-quality solutions to a diverse array of NP-hard combinatorial optimization problems. Despite the widespread success of tabu search, researchers have a poor understanding of many key theoretical aspects of this algorithm, including models of the high-level run-time dynamics and identification of those search space features that influence problem difficulty. We consider these questions in the context of the job-shop scheduling problem (JSP), a domain where tabu search algorithms have been shown to be remarkably effective. Previously, we demonstrated that the mean distance between random local optima and the nearest optimal solution is highly correlated with problem difficulty for a well-known tabu search algorithm for the JSP introduced by Taillard. In this paper, we discuss various shortcomings of this measure and develop a new model of problem difficulty that corrects these deficiencies. We show that Taillard's algorithm can be modeled with high fidelity as a simple variant of a straightforward random walk. The random walk model accounts for nearly all of the variability in the cost required to locate both optimal and sub-optimal solutions to random JSPs, and provides an explanation for differences in the difficulty of random versus structured JSPs. Finally, we discuss and empirically substantiate two novel predictions regarding tabu search algorithm behavior. First, the method for constructing the initial solution is highly unlikely to impact the performance of tabu search. Second, tabu tenure should be selected to be as small as possible while simultaneously avoiding search stagnation; values larger than necessary lead to significant degradations in performance.
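    The core mechanics discussed here (tabu tenure, aspiration) fit in a short sketch. The version below is a generic bit-flip tabu search on a toy objective, an illustrative assumption rather than Taillard's job-shop algorithm.

```python
import random

def tabu_search(cost, n_bits, tenure=5, max_iters=200, seed=0):
    """Generic tabu search over bit strings with a single-bit-flip
    neighbourhood. A recently flipped bit stays tabu for `tenure`
    iterations unless flipping it would beat the best solution found
    so far (the aspiration criterion)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_cost = x[:], cost(x)
    tabu_until = [0] * n_bits  # iteration until which each bit is tabu
    for it in range(1, max_iters + 1):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            c = cost(y)
            if tabu_until[i] <= it or c < best_cost:  # aspiration override
                candidates.append((c, i, y))
        if not candidates:
            continue
        c, i, y = min(candidates)  # best admissible move, even if worsening
        x = y
        tabu_until[i] = it + tenure
        if c < best_cost:
            best, best_cost = y[:], c
    return best, best_cost

# Toy objective: Hamming distance to a hidden target bit string.
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
cost = lambda bits: sum(b != t for b, t in zip(bits, target))
best, best_cost = tabu_search(cost, len(target))
print(best, best_cost)
```

    Note how the tenure parameter trades off escaping local optima against blocking useful moves, which is exactly the tension behind the paper's second prediction.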

  20. Clinical trials in crisis: four simple methodologic fixes

    PubMed Central

    Vickers, Andrew J.

    2014-01-01

    There is growing consensus that the US clinical trials system is broken, with trial costs and complexity increasing exponentially, and many trials failing to accrue. Yet concerns about the expense and failure rate of randomized trials are only the tip of the iceberg; perhaps what should worry us most is the number of trials that are not even considered because of projected costs and poor accrual. Several initiatives, including the Clinical Trials Transformation Initiative and the “Sensible Guidelines Group” seek to push back against current trends in clinical trials, arguing that all aspects of trials - including design, approval, conduct, monitoring, analysis and dissemination - should be based on evidence rather than contemporary norms. Proposed here are four methodologic fixes for current clinical trials. The first two aim to simplify trials, reducing costs and increasing patient acceptability by dramatically reducing eligibility criteria - often to the single criterion that the consenting physician is uncertain which of the two randomized arms is optimal - and by clinical integration, investment in data infrastructure to bring routinely collected data up to research grade to be used as endpoints in trials. The second two methodologic fixes aim to shed barriers to accrual, either by cluster randomization of clinicians (in the case of modifications to existing treatment) or by early consent, where patients are offered the chance of being randomly selected to be offered a novel intervention if disease progresses at a subsequent point. Such solutions may be partial, or result in a new set of problems of their own. Yet the current crisis in clinical trials mandates innovative approaches: randomized trials have resulted in enormous benefits for patients and we need to ensure that they continue to do so. PMID:25278228

  1. Clinical trials in crisis: Four simple methodologic fixes.

    PubMed

    Vickers, Andrew J

    2014-12-01

    There is growing consensus that the US clinical trials system is broken, with trial costs and complexity increasing exponentially, and many trials failing to accrue. Yet, concerns about the expense and failure rate of randomized trials are only the tip of the iceberg; perhaps what should worry us most is the number of trials that are not even considered because of projected costs and poor accrual. Several initiatives, including the Clinical Trials Transformation Initiative and the "Sensible Guidelines Group" seek to push back against current trends in clinical trials, arguing that all aspects of trials-including design, approval, conduct, monitoring, analysis, and dissemination-should be based on evidence rather than contemporary norms. Proposed here are four methodologic fixes for current clinical trials. The first two aim to simplify trials, reducing costs, and increasing patient acceptability by dramatically reducing eligibility criteria-often to the single criterion that the consenting physician is uncertain which of the two randomized arms is optimal-and by clinical integration, investment in data infrastructure to bring routinely collected data up to research grade to be used as endpoints in trials. The second two methodologic fixes aim to shed barriers to accrual, either by cluster randomization of clinicians (in the case of modifications to existing treatment) or by early consent, where patients are offered the chance of being randomly selected to be offered a novel intervention if disease progresses at a subsequent point. Such solutions may be partial, or result in a new set of problems of their own. Yet, the current crisis in clinical trials mandates innovative approaches: randomized trials have resulted in enormous benefits for patients, and we need to ensure that they continue to do so. © The Author(s) 2014.

  2. Purposeful Variable Selection and Stratification to Impute Missing FAST Data in Trauma Research

    PubMed Central

    Fuchs, Paul A.; del Junco, Deborah J.; Fox, Erin E.; Holcomb, John B.; Rahbar, Mohammad H.; Wade, Charles A.; Alarcon, Louis H.; Brasel, Karen J.; Bulger, Eileen M.; Cohen, Mitchell J.; Myers, John G.; Muskat, Peter; Phelan, Herb A.; Schreiber, Martin A.; Cotton, Bryan A.

    2013-01-01

    Background The Focused Assessment with Sonography for Trauma (FAST) exam is an important variable in many retrospective trauma studies. The purpose of this study was to devise an imputation method to overcome missing data for the FAST exam. Due to variability in patients’ injuries and trauma care, these data are unlikely to be missing completely at random (MCAR), raising concern for validity when analyses exclude patients with missing values. Methods Imputation was conducted under a less restrictive, more plausible missing at random (MAR) assumption. Patients with missing FAST exams had available data on alternate, clinically relevant elements that were strongly associated with FAST results in complete cases, especially when considered jointly. Subjects with missing data (32.7%) were divided into eight mutually exclusive groups based on selected variables that both described the injury and were associated with missing FAST values. Additional variables were selected within each group to classify missing FAST values as positive or negative, and correct FAST exam classification based on these variables was determined for patients with non-missing FAST values. Results Severe head/neck injury (odds ratio, OR=2.04), severe extremity injury (OR=4.03), severe abdominal injury (OR=1.94), no injury (OR=1.94), other abdominal injury (OR=0.47), other head/neck injury (OR=0.57) and other extremity injury (OR=0.45) groups had significant ORs for missing data; the other group odds ratio was not significant (OR=0.84). All 407 missing FAST values were imputed, with 109 classified as positive. Correct classification of non-missing FAST results using the alternate variables was 87.2%. Conclusions Purposeful imputation for missing FAST exams based on interactions among selected variables assessed by simple stratification may be a useful adjunct to sensitivity analysis in the evaluation of imputation strategies under different missing data mechanisms. This approach has the potential for widespread application in clinical and translational research, and validation is warranted. Level of Evidence Level II Prognostic or Epidemiological PMID:23778515
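    The general shape of stratified imputation can be sketched in a few lines: impute a missing value with the most common observed value within its stratum. This is a simplified stand-in for the paper's purposeful, multi-variable procedure, and the records and group names below are hypothetical.

```python
from collections import Counter, defaultdict

def stratified_mode_impute(records, strata_key, target):
    """Impute missing `target` values with the most common observed value
    within each stratum; fall back to the overall mode for empty strata."""
    by_stratum = defaultdict(list)
    observed = []
    for r in records:
        if r[target] is not None:
            by_stratum[r[strata_key]].append(r[target])
            observed.append(r[target])
    overall_mode = Counter(observed).most_common(1)[0][0]
    out = []
    for r in records:
        r = dict(r)
        if r[target] is None:
            vals = by_stratum.get(r[strata_key])
            r[target] = Counter(vals).most_common(1)[0][0] if vals else overall_mode
        out.append(r)
    return out

# Hypothetical trauma records: `fast` is missing for some patients and is
# imputed within injury-group strata.
records = [
    {"group": "severe_abd", "fast": "positive"},
    {"group": "severe_abd", "fast": "positive"},
    {"group": "severe_abd", "fast": None},
    {"group": "no_injury", "fast": "negative"},
    {"group": "no_injury", "fast": None},
]
imputed = stratified_mode_impute(records, "group", "fast")
print([r["fast"] for r in imputed])
```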

  3. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on {Z^d} ({d ≥ 2}). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for {d ≥ 3}) and the level sets of the Gaussian free field ({d≥ 3}). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  4. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2017-12-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on {Z^d} ({d ≥ 2} ). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for {d ≥ 3} ) and the level sets of the Gaussian free field ({d≥ 3} ). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  5. A Computational Study of How Orientation Bias in the Lateral Geniculate Nucleus Can Give Rise to Orientation Selectivity in Primary Visual Cortex

    PubMed Central

    Kuhlmann, Levin; Vidyasagar, Trichur R.

    2011-01-01

    Controversy remains about how orientation selectivity emerges in simple cells of the mammalian primary visual cortex. In this paper, we present a computational model of how the orientation-biased responses of cells in lateral geniculate nucleus (LGN) can contribute to the orientation selectivity in simple cells in cats. We propose that simple cells are excited by lateral geniculate fields with an orientation-bias and disynaptically inhibited by unoriented lateral geniculate fields (or biased fields pooled across orientations), both at approximately the same retinotopic co-ordinates. This interaction, combined with recurrent cortical excitation and inhibition, helps to create the sharp orientation tuning seen in simple cell responses. Along with describing orientation selectivity, the model also accounts for the spatial frequency and length–response functions in simple cells, in normal conditions as well as under the influence of the GABAA antagonist, bicuculline. In addition, the model captures the response properties of LGN and simple cells to simultaneous visual stimulation and electrical stimulation of the LGN. We show that the sharp selectivity for stimulus orientation seen in primary visual cortical cells can be achieved without the excitatory convergence of the LGN input cells with receptive fields along a line in visual space, which has been a core assumption in classical models of visual cortex. We have also simulated how the full range of orientations seen in the cortex can emerge from the activity among broadly tuned channels tuned to a limited number of optimum orientations, just as in the classical case of coding for color in trichromatic primates. PMID:22013414

  6. Simple and rapid detection of the porcine reproductive and respiratory syndrome virus from pig whole blood using filter paper.

    PubMed

    Inoue, Ryo; Tsukahara, Takamitsu; Sunaba, Chinatsu; Itoh, Mitsugi; Ushida, Kazunari

    2007-04-01

    The combination of Flinders Technology Associates filter papers (FTA cards) and real-time PCR was examined to establish a simple and rapid technique for the detection of porcine reproductive and respiratory syndrome virus (PRRSV) from whole pig blood. A modified live PRRS vaccine was diluted with either sterilised saline or pig whole blood, and the suspensions were applied onto the FTA cards. The real-time RT-PCR detection of PRRSV was performed directly with the samples applied to the FTA card without the RNA extraction step. Six whole blood samples from randomly selected piglets on a PRRSV-infected farm were also assayed in this study. The expected PCR product was successfully amplified from either saline-diluted or pig whole blood-diluted vaccine. The same PCR amplicon was detected from all blood samples assayed in this study. This study suggested that the combination of an FTA card and real-time PCR is a rapid and easy technique for the detection of PRRSV. This technique can remarkably shorten the time required for PRRSV detection from whole blood and makes the procedure much easier.

  7. Performance of some Ethiopian fenugreek (Trigonella foenum-graecum L.) germplasm collections as compared with the commercial variety Challa.

    PubMed

    Fikreselassie, Million

    2012-05-01

    Systematic breeding efforts on fenugreek have so far been neglected in Ethiopia. For this reason, 143 random samples of fenugreek accessions, along with a commercial variety, were used in this study to evaluate the potential of the landraces. The field experiment was conducted at the Haramaya University research station during the 2011 main cropping season. Treatments were arranged in a 12x12 simple lattice design. The highest biomass- and seed-yielding accessions were generally concentrated in the categories of yellow and green seed colors. When compared with the commercial variety, above 27% of the tested accessions performed significantly better in terms of seed yield, indicating that significant yield gains could be secured by simple selection. However, further evaluation over wider environments is necessary to arrive at conclusive points for such quantitative traits. Green- and yellow-seeded accessions are widely distributed across the country, and over half of the accessions (63%) had green seed color. The highest seed-yielding accessions were those collected from the northwest and central parts of Ethiopia, while accessions collected from eastern and northwestern Ethiopia had strikingly bold seeds. This variability would provide a basis for improving the crop in a breeding program.

  8. Characterization of the Kenaf (Hibiscus cannabinus) Global Transcriptome Using Illumina Paired-End Sequencing and Development of EST-SSR Markers

    PubMed Central

    Li, Hui; Li, Defang; Chen, Anguo; Tang, Huijuan; Li, Jianjun; Huang, Siqi

    2016-01-01

    Kenaf (Hibiscus cannabinus L.) is an economically important natural fiber crop grown worldwide. However, only 20 expressed sequence tags (ESTs) for kenaf are available in public databases. The aim of this study was to develop large-scale simple sequence repeat (SSR) markers to lay a solid foundation for the construction of genetic linkage maps and marker-assisted breeding in kenaf. We used Illumina paired-end sequencing technology to generate new EST sequences and MISA software to mine SSR markers. We identified 71,318 unigenes with an average length of 1143 nt and annotated these unigenes using four different protein databases. Overall, 9324 complementary pairs were designated as EST-SSR markers, and their quality was validated using 100 randomly selected SSR markers. In total, 72 primer pairs reproducibly amplified target amplicons, and 61 of these primer pairs detected significant polymorphism among 28 kenaf accessions. Thus, in this study, we have developed large-scale SSR markers for kenaf, and this new resource will facilitate construction of genetic linkage maps, investigation of fiber growth and development in kenaf, and also be of value to novel gene discovery and functional genomic studies. PMID:26960153

  9. Assessing map accuracy in a remotely sensed, ecoregion-scale cover map

    USGS Publications Warehouse

    Edwards, T.C.; Moisen, Gretchen G.; Cutler, D.R.

    1998-01-01

    Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed thousands of square kilometers in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs relative to simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE=1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man-modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach. In regard to precision, our mixed design was more precise than a simple random design, given fixed sample costs. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.
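    The simple accuracy measures involved reduce to familiar formulas. The sketch below computes overall accuracy and a binomial standard error from a hypothetical confusion matrix, assuming simple random sampling rather than the paper's stratified mixed design (under which the estimators are more involved).

```python
def overall_accuracy(confusion):
    """Overall accuracy and its standard error from a square confusion
    matrix (rows = reference class, columns = mapped class), assuming a
    simple random sample of reference sites."""
    n = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    p = correct / n
    se = (p * (1 - p) / n) ** 0.5
    return p, se

# Hypothetical 3-class confusion matrix.
cm = [
    [80, 5, 3],
    [6, 70, 4],
    [2, 5, 60],
]
p, se = overall_accuracy(cm)
print(f"overall accuracy = {p:.3f} (SE = {se:.3f})")
```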

  10. Potential and limits to unravel the genetic architecture and predict the variation of Fusarium head blight resistance in European winter wheat (Triticum aestivum L.).

    PubMed

    Jiang, Y; Zhao, Y; Rodemann, B; Plieske, J; Kollers, S; Korzun, V; Ebmeyer, E; Argillier, O; Hinze, M; Ling, J; Röder, M S; Ganal, M W; Mette, M F; Reif, J C

    2015-03-01

    Genome-wide mapping approaches in diverse populations are powerful tools to unravel the genetic architecture of complex traits. The main goals of our study were to investigate the potential and limits to unravel the genetic architecture and to identify the factors determining the accuracy of prediction of the genotypic variation of Fusarium head blight (FHB) resistance in wheat (Triticum aestivum L.) based on data collected with a diverse panel of 372 European varieties. The wheat lines were phenotyped in multi-location field trials for FHB resistance and genotyped with 782 simple sequence repeat (SSR) markers, and 9k and 90k single-nucleotide polymorphism (SNP) arrays. We applied genome-wide association mapping in combination with fivefold cross-validations and observed surprisingly high accuracies of prediction for marker-assisted selection based on the detected quantitative trait loci (QTLs). Using a random sample of markers not selected for marker-trait associations revealed only a slight decrease in prediction accuracy compared with marker-based selection exploiting the QTL information. The same picture was confirmed in a simulation study, suggesting that relatedness is a main driver of the accuracy of prediction in marker-assisted selection of FHB resistance. When the accuracy of prediction of three genomic selection models was contrasted for the three marker data sets, no significant differences in accuracies among marker platforms and genomic selection models were observed. Marker density impacted the accuracy of prediction only marginally. Consequently, genomic selection of FHB resistance can be implemented most cost-efficiently based on low- to medium-density SNP arrays.

  11. Application and testing of a procedure to evaluate transferability of habitat suitability criteria

    USGS Publications Warehouse

    Thomas, Jeff A.; Bovee, Ken D.

    1993-01-01

    A procedure designed to test the transferability of habitat suitability criteria was evaluated in the Cache la Poudre River, Colorado. Habitat suitability criteria were developed for active adult and juvenile rainbow trout in the South Platte River, Colorado. These criteria were tested by comparing microhabitat use predicted from the criteria with observed microhabitat use by adult rainbow trout in the Cache la Poudre River. A one-sided χ2 test, using counts of occupied and unoccupied cells in each suitability classification, was used to test for non-random selection of optimum habitat over usable habitat and of suitable over unsuitable habitat. Criteria for adult rainbow trout were judged to be transferable to the Cache la Poudre River, but juvenile criteria (applied to adults) were not transferable. Random subsampling of occupied and unoccupied cells was conducted to determine the effect of sample size on the reliability of the test procedure. The incidence of type I and type II errors increased rapidly as the sample size was reduced below 55 occupied and 200 unoccupied cells. Recommended modifications to the procedure included the adoption of a systematic or randomized sampling design and direct measurement of microhabitat variables. With these modifications, the procedure is economical, simple and reliable. Use of the procedure as a quality assurance device in routine applications of the instream flow incremental methodology was encouraged.
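    A test of this general kind can be sketched as a Pearson chi-square on a 2x2 table of occupied/unoccupied cells in suitable/unsuitable habitat, with the direction of the deviation giving the one-sided interpretation. The counts below are hypothetical, and this generic form is an assumption rather than the authors' exact procedure.

```python
def chi2_2x2(occ_suitable, unocc_suitable, occ_unsuitable, unocc_unsuitable):
    """Pearson chi-square for a 2x2 table of occupied/unoccupied cells in
    suitable/unsuitable habitat; also reports whether suitable habitat is
    occupied more often than expected (the one-sided direction)."""
    table = [[occ_suitable, unocc_suitable], [occ_unsuitable, unocc_unsuitable]]
    n = sum(sum(r) for r in table)
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            exp = rows[i] * cols[j] / n
            chi2 += (table[i][j] - exp) ** 2 / exp
    # Direction: more occupied suitable cells than expected -> selection.
    expected_os = rows[0] * cols[0] / n
    return chi2, occ_suitable > expected_os

chi2, selects_suitable = chi2_2x2(55, 145, 10, 190)
print(f"chi2 = {chi2:.2f}, selection for suitable habitat: {selects_suitable}")
```

    A chi-square value above the 1-df critical value (3.84 at the 5% level) with the deviation in the expected direction would indicate non-random selection for suitable habitat.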

  12. Improving Pulse Rate Measurements during Random Motion Using a Wearable Multichannel Reflectance Photoplethysmograph.

    PubMed

    Warren, Kristen M; Harvey, Joshua R; Chon, Ki H; Mendelson, Yitzhak

    2016-03-07

    Photoplethysmographic (PPG) waveforms are used to acquire pulse rate (PR) measurements from pulsatile arterial blood volume. PPG waveforms are highly susceptible to motion artifacts (MA), limiting the implementation of PR measurements in mobile physiological monitoring devices. Previous studies have shown that multichannel photoplethysmograms can successfully acquire diverse signal information during simple, repetitive motion, leading to differences in motion tolerance across channels. In this paper, we investigate the performance of a custom-built multichannel forehead-mounted photoplethysmographic sensor under a variety of intense motion artifacts. We introduce an advanced multichannel template-matching algorithm that chooses the channel with the least motion artifact to calculate PR for each time instant. We show that for a wide variety of random motion, channels respond differently to motion artifacts, and the multichannel estimate outperforms single-channel estimates in terms of motion tolerance, signal quality, and PR errors. We have acquired 31 data sets consisting of PPG waveforms corrupted by random motion and show that the accuracy of PR measurements achieved was increased by up to 2.7 bpm when the multichannel-switching algorithm was compared to individual channels. The percentage of PR measurements with error ≤ 5 bpm during motion increased by 18.9% when the multichannel switching algorithm was compared to the mean PR from all channels. Moreover, our algorithm enables automatic selection of the best signal fidelity channel at each time point among the multichannel PPG data.
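    The channel-switching idea can be illustrated with a simplified stand-in for the paper's template-matching algorithm: score each channel's current window by its correlation with a pulse template and keep the best-matching one. The sinusoidal template and noise levels below are illustrative assumptions.

```python
import math
import random

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def best_channel(channels, template):
    """Pick the channel whose current window best matches the pulse template."""
    scores = [pearson(ch, template) for ch in channels]
    return max(range(len(channels)), key=lambda i: scores[i]), scores

random.seed(3)
template = [math.sin(2 * math.pi * t / 20) for t in range(40)]  # idealized pulse shape
clean = [s + random.gauss(0, 0.05) for s in template]           # channel 0: mild noise
corrupted = [s + random.gauss(0, 1.0) for s in template]        # channel 1: heavy motion artifact
idx, scores = best_channel([clean, corrupted], template)
print("selected channel:", idx)
```

    Repeating this selection for each time window is what lets a multichannel sensor ride out motion artifacts that hit individual channels differently.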

  13. Whose data set is it anyway? Sharing raw data from randomized trials.

    PubMed

    Vickers, Andrew J

    2006-05-16

    Sharing of raw research data is common in many areas of medical research, genomics being perhaps the most well-known example. In the clinical trial community investigators routinely refuse to share raw data from a randomized trial without giving a reason. Data sharing benefits numerous research-related activities: reproducing analyses; testing secondary hypotheses; developing and evaluating novel statistical methods; teaching; aiding design of future trials; meta-analysis; and, possibly, preventing error, fraud and selective reporting. Clinical trialists, however, sometimes appear overly concerned with being scooped and with misrepresentation of their work. Both possibilities can be avoided with simple measures such as inclusion of the original trialists as co-authors on any publication resulting from data sharing. Moreover, if we treat any data set as belonging to the patients who comprise it, rather than the investigators, such concerns fall away. Technological developments, particularly the Internet, have made data sharing generally a trivial logistical problem. Data sharing should come to be seen as an inherent part of conducting a randomized trial, similar to the way in which we consider ethical review and publication of study results. Journals and funding bodies should insist that trialists make raw data available, for example, by publishing data on the Web. If the clinical trial community continues to fail with respect to data sharing, we will only strengthen the public perception that we do clinical trials to benefit ourselves, not our patients.

  14. Randomization in clinical trials in orthodontics: its significance in research design and methods to achieve it.

    PubMed

    Pandis, Nikolaos; Polychronopoulou, Argy; Eliades, Theodore

    2011-12-01

    Randomization is a key step in reducing selection bias during the treatment allocation phase in randomized clinical trials. The process of randomization follows specific steps, which include generation of the randomization list, allocation concealment, and implementation of randomization. The phenomenon in the dental and orthodontic literature of characterizing treatment allocation as random is frequent; however, often the randomization procedures followed are not appropriate. Randomization methods assign, at random, treatment to the trial arms without foreknowledge of allocation by either the participants or the investigators thus reducing selection bias. Randomization entails generation of random allocation, allocation concealment, and the actual methodology of implementing treatment allocation randomly and unpredictably. Most popular randomization methods include some form of restricted and/or stratified randomization. This article introduces the reasons, which make randomization an integral part of solid clinical trial methodology, and presents the main randomization schemes applicable to clinical trials in orthodontics.
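    One standard way to generate the randomization list the abstract describes is permuted-block (restricted) randomization, which keeps the arms balanced after every complete block; a minimal sketch:

```python
import random

def permuted_block_list(n, block_size=4, arms=("A", "B"), seed=None):
    """Generate a randomization list from randomly permuted blocks, so
    allocation stays balanced after every complete block."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    allocation = []
    while len(allocation) < n:
        block = list(arms) * per_arm  # equal counts of each arm per block
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n]

alloc = permuted_block_list(20, block_size=4, seed=42)
print(alloc)
print("A:", alloc.count("A"), "B:", alloc.count("B"))
```

    In practice the list must also be concealed from recruiting clinicians (e.g. held by a central service), since a known block size makes the last assignments in each block predictable.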

  15. Validation of optical codes based on 3D nanostructures

    NASA Astrophysics Data System (ADS)

    Carnicer, Artur; Javidi, Bahram

    2017-05-01

    Image information encoding using random phase masks produces speckle-like noise distributions when the sample is propagated in the Fresnel domain. As a result, the information cannot be accessed by simple visual inspection. Phase masks can be easily implemented in practice by attaching cello-tape to the plain-text message. Conventional 2D phase masks can be generalized to 3D by combining glass and diffusers, resulting in a more complex physical unclonable function. In this communication, we model the behavior of a 3D phase mask using a simple approach: light is propagated through glass using the angular spectrum of plane waves, whereas the diffuser is described as a random phase mask and a blurring effect on the amplitude of the propagated wave. Using different designs for the 3D phase mask and multiple samples, we demonstrate that classification is possible using the k-nearest neighbors and random forests machine learning algorithms.

  16. Simple Example of Backtest Overfitting (SEBO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    In the field of mathematical finance, a "backtest" is the use of historical market data to assess the performance of a proposed trading strategy. It is a relatively simple matter for a present-day computer system to explore thousands, millions or even billions of variations of a proposed strategy, and pick the best-performing variant as the "optimal" strategy "in sample" (i.e., on the input dataset). Unfortunately, such an "optimal" strategy often performs very poorly "out of sample" (i.e., on another dataset), because the parameters of the investment strategy have been overfit to the in-sample data, a situation known as "backtest overfitting". While the mathematics of backtest overfitting has been examined in several recent theoretical studies, here we pursue a more tangible analysis of this problem, in the form of an online simulator tool. Given an input random walk time series, the tool develops an "optimal" variant of a simple strategy by exhaustively exploring all integer parameter values among a handful of parameters. That "optimal" strategy is overfit, since by definition a random walk is unpredictable. Then the tool tests the resulting "optimal" strategy on a second random walk time series. In most runs using our online tool, the "optimal" strategy derived from the first time series performs poorly on the second time series, demonstrating how hard it is not to overfit a backtest. We offer this online tool, "Simple Example of Backtest Overfitting (SEBO)", to facilitate further research in this area.
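
    The experiment the tool performs can be approximated offline. The sketch below is an assumption-laden toy (a momentum rule with a single integer lookback parameter, Gaussian random walks), not the SEBO tool itself: it exhaustively picks the in-sample "optimal" parameter, then applies it to an independent walk.

    ```python
    import random

    def random_walk(n, seed):
        """An unpredictable price series: cumulative Gaussian steps."""
        rng = random.Random(seed)
        prices, p = [], 100.0
        for _ in range(n):
            p += rng.gauss(0.0, 1.0)
            prices.append(p)
        return prices

    def strategy_pnl(prices, lookback):
        """Toy momentum rule: hold long whenever price exceeds its trailing mean."""
        pnl = 0.0
        for t in range(lookback, len(prices) - 1):
            trailing_mean = sum(prices[t - lookback:t]) / lookback
            if prices[t] > trailing_mean:
                pnl += prices[t + 1] - prices[t]
        return pnl

    in_sample = random_walk(500, seed=1)
    out_of_sample = random_walk(500, seed=2)

    # Exhaustively pick the "optimal" integer lookback on the in-sample walk...
    best = max(range(2, 50), key=lambda k: strategy_pnl(in_sample, k))
    # ...then evaluate it on an independent walk, where it typically disappoints.
    in_pnl = strategy_pnl(in_sample, best)
    out_pnl = strategy_pnl(out_of_sample, best)
    ```

    Since the walk is unpredictable by construction, any in-sample edge found this way is pure overfitting; the out-of-sample result is the honest one.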

  17. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    NASA Astrophysics Data System (ADS)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.

  18. A Review of Compression, Ventilation, Defibrillation, Drug Treatment, and Targeted Temperature Management in Cardiopulmonary Resuscitation

    PubMed Central

    Pan, Jian; Zhu, Jian-Yong; Kee, Ho Sen; Zhang, Qing; Lu, Yuan-Qiang

    2015-01-01

    Objective: Important studies of cardiopulmonary resuscitation (CPR) techniques influence the development of new guidelines. We systematically reviewed the efficacy of some important studies of CPR. Data Sources: The data analyzed in this review are mainly from articles included in PubMed and EMBASE, published from 1964 to 2014. Study Selection: Original articles and critical reviews about CPR techniques were selected for review. Results: The survival rate after out-of-hospital cardiac arrest (OHCA) is improving. This improvement is associated with the performance of uninterrupted chest compressions and simple airway management procedures during bystander CPR. Real-time feedback devices can be used to improve the quality of CPR. The recommended dose, timing, and indications for adrenaline (epinephrine) use may change. The appropriate target temperature for targeted temperature management is still unclear. Conclusions: New studies over the past 5 years have evaluated various aspects of CPR in OHCA. Some of these studies were high-quality randomized controlled trials, which may help to improve the scientific understanding of resuscitation techniques and result in changes to CPR guidelines. PMID:25673462

  19. SELECTION DYNAMICS IN JOINT MATCHING TO RATE AND MAGNITUDE OF REINFORCEMENT

    PubMed Central

    McDowell, J. J; Popa, Andrei; Calvin, Nicholas T

    2012-01-01

    Virtual organisms animated by a selectionist theory of behavior dynamics worked on concurrent random interval schedules where both the rate and magnitude of reinforcement were varied. The selectionist theory consists of a set of simple rules of selection, recombination, and mutation that act on a population of potential behaviors by means of a genetic algorithm. An extension of the power function matching equation, which expresses behavior allocation as a joint function of exponentiated reinforcement rate and reinforcer magnitude ratios, was fitted to the virtual organisms' data, and over a range of moderate mutation rates was found to provide an excellent description of their behavior without residual trends. The mean exponents in this range of mutation rates were 0.83 for the reinforcement rate ratio and 0.68 for the reinforcer magnitude ratio, which are values that are comparable to those obtained in experiments with live organisms. These findings add to the evidence supporting the selectionist theory, which asserts that the world of behavior we observe and measure is created by evolutionary dynamics. PMID:23008523

  20. RANDOM EVOLUTIONS, MARKOV CHAINS, AND SYSTEMS OF PARTIAL DIFFERENTIAL EQUATIONS

    PubMed Central

    Griego, R. J.; Hersh, R.

    1969-01-01

    Several authors have considered Markov processes defined by the motion of a particle on a fixed line with a random velocity [1, 6, 8, 10] or a random diffusivity [5, 12]. A “random evolution” is a natural but apparently new generalization of this notion. In this note we hope to show that this concept leads to simple and powerful applications of probabilistic tools to initial-value problems of both parabolic and hyperbolic type. We obtain existence theorems, representation theorems, and asymptotic formulas, both old and new. PMID:16578690

  1. Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach

    ERIC Educational Resources Information Center

    Rotondi, Michael A.; Donner, Allan

    2009-01-01

    The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…

  2. Emissivity of half-space random media. [in passive remote sensing

    NASA Technical Reports Server (NTRS)

    Tsang, L.; Kong, J. A.

    1976-01-01

    Scattering of electromagnetic waves by a half-space random medium with three-dimensional correlation functions is studied with the Born approximation. The emissivity is calculated from a simple integral and is illustrated for various cases. The results are valid over a wavelength range smaller or larger than the correlation lengths.

  3. Teachers' Attitude towards Implementation of Learner-Centered Methodology in Science Education in Kenya

    ERIC Educational Resources Information Center

    Ndirangu, Caroline

    2017-01-01

    This study aims to evaluate teachers' attitude towards implementation of learner-centered methodology in science education in Kenya. The study used a survey design methodology, adopting the purposive, stratified random and simple random sampling procedures and hypothesised that there was no significant relationship between the head teachers'…

  4. Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel

    ERIC Educational Resources Information Center

    Sinex, Scott A.; Gage, Barbara A.; Beck, Peggy J.

    2007-01-01

    A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…

  5. Stochastic Order Redshift Technique (SORT): a simple, efficient and robust method to improve cosmological redshift measurements

    NASA Astrophysics Data System (ADS)

    Tejos, Nicolas; Rodríguez-Puebla, Aldo; Primack, Joel R.

    2018-01-01

    We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose that: (i) the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions and (ii) the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) 'recovered' redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in a robust and efficient manner. Particularly, SORT redshifts (zsort) are able to recover the distinctive features of the so-called 'cosmic web' and can provide unbiased measurement of the two-point correlation function on scales ≳4 h-1Mpc. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed to exploit the full potential of large extragalactic photometric surveys.
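
    Step (ii) of the method, rank-order preservation, can be illustrated with a minimal sketch. The uniform reference distribution, noise level, and function name below are assumptions for illustration, not the authors' code:

    ```python
    import random

    def sort_recover(noisy_z, reference_z, seed=0):
        """Rank-preserving recovery: draw len(noisy_z) redshifts from the
        reference dN/dz, then assign them so that the rank order of the
        uncertain measurements is preserved (step ii of the method)."""
        rng = random.Random(seed)
        draws = sorted(rng.choices(reference_z, k=len(noisy_z)))
        order = sorted(range(len(noisy_z)), key=lambda i: noisy_z[i])
        recovered = [0.0] * len(noisy_z)
        for rank, i in enumerate(order):
            recovered[i] = draws[rank]
        return recovered

    rng = random.Random(1)
    true_z = sorted(rng.uniform(0.0, 1.0) for _ in range(200))   # reference dN/dz sample
    noisy_z = [z + rng.gauss(0.0, 0.05) for z in true_z]         # uncertain measurements
    recovered = sort_recover(noisy_z, true_z)
    ```

    Repeating this over many overlapping sub-volumes, as the abstract describes, would yield several non-independent recovered values per object, from which a new redshift PDF can be estimated.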

  6. Impact of waste disposal on health of a poor urban community in Zimbabwe.

    PubMed

    Makoni, F S; Ndamba, J; Mbati, P A; Manase, G

    2004-08-01

    To assess excreta and waste disposal facilities available and their impact on sanitation related diseases in Epworth, an informal settlement on the outskirts of Harare. Descriptive cross-sectional survey. This was a community based study of Epworth informal settlement. A total of 308 households were interviewed. Participating households were randomly selected from the three communities of Epworth. Secondary medical archival data on diarrhoeal disease prevalence was collected from local clinics and district health offices in the study areas. Only 7% of households were connected to the sewer system. The study revealed that in Zinyengere extension 13% had no toilet facilities, 48% had simple pits and 37% had Blair VIP latrines. In Overspill 2% had no toilet facilities, 28% had simple latrines and 36% had Blair VIP latrines while in New Gada 20% had no toilet facilities, 24% had simple pits and 23% had Blair VIP latrines. Although a significant percentage had latrines (83.2%), over 50% of the population were not satisfied with the toilet facilities they were using. All the respondents expressed dissatisfaction with their domestic waste disposal practices with 46.6% admitting to have indiscriminately dumped waste. According to the community, diarrhoeal diseases were the most prevalent diseases (50%) related to poor sanitation. Health statistics also indicated that diarrhoea was a major problem in this community. It is recommended that households and the local authorities concentrate on improving the provision of toilets, water and waste disposal facilities as a way of improving the health state of the community.

  7. A Randomized Study Comparing the Sniffing Position with Simple Head Extension for Glottis Visualization and Difficulty in Intubation during Direct Laryngoscopy.

    PubMed

    Akhtar, Mehmooda; Ali, Zulfiqar; Hassan, Nelofar; Mehdi, Saqib; Wani, Gh Mohammad; Mir, Aabid Hussain

    2017-01-01

    Proper positioning of the head and neck is important for optimal laryngeal visualization. Traditionally, the sniffing position (SP) is recommended to provide superior glottic visualization during direct laryngoscopy, enhancing the ease of intubation. Various studies in the last decade have challenged this belief and the need for the sniffing position during intubation. We conducted a prospective study comparing the sniffing head position with simple head extension with respect to the laryngoscopic view and intubation difficulty during direct laryngoscopy. Five hundred patients were included in this study and randomly distributed to SP or simple head extension. In the sniffing group, an incompressible head ring was placed under the head to raise its height by 7 cm from the neutral plane, followed by maximal extension of the head. In the simple extension group, no headrest was placed under the head; however, maximal head extension was applied at the time of laryngoscopy. Factors such as the ability to mask ventilate, laryngoscopic visualization, intubation difficulty, and the posture of the anesthesiologist during laryngoscopy and tracheal intubation were noted. The incidence of difficult laryngoscopy (Cormack Grade III and IV) and the Intubation Difficulty Scale (IDS) score were compared between the two groups. There was no significant difference between the two groups in Cormack grades. The IDS score differed significantly between the sniffing group and the simple extension group (P = 0.000), with increased difficulty during intubation in simple head extension. Patients with simple head extension needed more lifting force, increased use of external laryngeal manipulation, and increased use of alternate techniques during intubation when compared to SP. We conclude that, compared to simple head extension, the SP should be used as the standard head position for intubation attempts under general anesthesia.

  8. Do rational numbers play a role in selection for stochasticity?

    PubMed

    Sinclair, Robert

    2014-01-01

    When a given tissue must, to be able to perform its various functions, consist of different cell types, each fairly evenly distributed and with specific probabilities, then there are at least two quite different developmental mechanisms which might achieve the desired result. Let us begin with the case of two cell types, and first imagine that the proportion of numbers of cells of these types should be 1:3. Clearly, a regular structure composed of repeating units of four cells, three of which are of the dominant type, will easily satisfy the requirements, and a deterministic mechanism may lend itself to the task. What if, however, the proportion should be 10:33? The same simple, deterministic approach would now require a structure of repeating units of 43 cells, and this certainly seems to require a far more complex and potentially prohibitive deterministic developmental program. Stochastic development, replacing regular units with random distributions of given densities, might not be evolutionarily competitive in comparison with the deterministic program when the proportions should be 1:3, but it has the property that, whatever developmental mechanism underlies it, its complexity does not need to depend very much upon target cell densities at all. We are immediately led to speculate that proportions which correspond to fractions with large denominators (such as the 33 of 10/33) may be more easily achieved by stochastic developmental programs than by deterministic ones, and this is the core of our thesis: that stochastic development may tend to occur more often in cases involving rational numbers with large denominators. To be imprecise: that simple rationality and determinism belong together, as do irrationality and randomness.
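
    The arithmetic behind the example is simple: the smallest repeating unit realising an exact a:b mix of two cell types has (a + b)/gcd(a, b) cells, so 1:3 needs a 4-cell unit while 10:33 needs 43, as the abstract states. A two-line check:

    ```python
    from math import gcd

    def unit_size(a, b):
        """Smallest repeating unit realising an exact a:b mix of two cell types."""
        g = gcd(a, b)
        return (a + b) // g

    print(unit_size(1, 3), unit_size(10, 33))  # -> 4 43
    ```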

  9. Effect of lecture instruction on student performance on qualitative questions

    NASA Astrophysics Data System (ADS)

    Heron, Paula R. L.

    2015-06-01

    The impact of lecture instruction on student conceptual understanding in physics has been the subject of research for several decades. Most studies have reported disappointingly small improvements in student performance on conceptual questions despite direct instruction on the relevant topics. These results have spurred a number of attempts to improve learning in physics courses through new curricula and instructional techniques. This paper contributes to the research base through a retrospective analysis of 20 randomly selected qualitative questions on topics in kinematics, dynamics, electrostatics, waves, and physical optics that have been given in introductory calculus-based physics at the University of Washington over a period of 15 years. In some classes, questions were administered after relevant lecture instruction had been completed; in others, it had yet to begin. Simple statistical tests indicate that the average performance of the "after lecture" classes was significantly better than that of the "before lecture" classes for 11 questions, significantly worse for two questions, and indistinguishable for the remaining seven. However, the classes had not been randomly assigned to be tested before or after lecture instruction. Multiple linear regression was therefore conducted with variables (such as class size) that could plausibly lead to systematic differences in performance and thus obscure (or artificially enhance) the effect of lecture instruction. The regression models support the results of the simple tests for all but four questions. In those cases, the effect of lecture instruction was reduced to a nonsignificant level, or increased to a significant, negative level when other variables were considered. Thus the results provide robust evidence that instruction in lecture can increase student ability to give correct answers to conceptual questions but does not necessarily do so; in some cases it can even lead to a decrease.

  10. Statistical inferences for data from studies conducted with an aggregated multivariate outcome-dependent sample design.

    PubMed

    Lu, Tsui-Shan; Longnecker, Matthew P; Zhou, Haibo

    2017-03-15

    Outcome-dependent sampling (ODS) is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known such designs include the case-control design for binary responses, the case-cohort design for failure-time data, and the general ODS design for a continuous response. While substantial work has been carried out for the univariate response case, statistical inference and design for ODS with multivariate responses remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the multivariate-ODS design is semiparametric: all the underlying distributions of covariates are modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and derive its asymptotic normality properties. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the multivariate-ODS or the estimator from a simple random sample with the same sample size. The multivariate-ODS design, together with the proposed estimator, provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of polychlorinated biphenyl exposure with hearing loss in children born into the Collaborative Perinatal Study. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Simple-random-sampling-based multiclass text classification algorithm.

    PubMed

    Liu, Wuying; Wang, Lin; Yi, Mianzhu

    2014-01-01

    Multiclass text classification (MTC) is a challenging issue, and the corresponding MTC algorithms can be used in many applications. The space-time overhead of these algorithms is a pressing concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. Experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements.
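
    A retrieval-style toy in the same spirit, pairing simple random sampling of the labeled collection with token-overlap scoring, might look like the sketch below. This is an illustrative assumption, not the published SRSMTC algorithm; the document structure and all names are invented for the example:

    ```python
    import random
    from collections import Counter

    def srs_classify(query_tokens, labeled_docs, sample_size, seed=0):
        """Label a query by majority vote among the best-matching documents
        of a simple random sample of the labeled collection, scored by raw
        token overlap (a retrieval-style toy, not the published algorithm)."""
        rng = random.Random(seed)
        sample = rng.sample(labeled_docs, min(sample_size, len(labeled_docs)))
        ranked = sorted(sample,
                        key=lambda d: len(set(query_tokens) & set(d["tokens"])),
                        reverse=True)
        votes = Counter(d["label"] for d in ranked[:3])
        return votes.most_common(1)[0][0]

    docs = ([{"tokens": ["ball", "goal", "match"], "label": "sports"}] * 4
            + [{"tokens": ["chip", "code", "cache"], "label": "tech"}] * 4)
    label = srs_classify(["goal", "ball"], docs, sample_size=8)  # -> "sports"
    ```

    The appeal of sampling here is the same as in the paper: the memory and scoring cost scale with the sample, not the full collection.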

  12. The Shark Random Swim - (Lévy Flight with Memory)

    NASA Astrophysics Data System (ADS)

    Businger, Silvia

    2018-05-01

    The Elephant Random Walk (ERW), first introduced by Schütz and Trimper (Phys Rev E 70:045101, 2004), is a one-dimensional simple random walk on Z having a memory of the whole past. We study the Shark Random Swim, a random walk with memory of the whole past whose steps are α-stable distributed with α ∈ (0, 2]. Our aim in this work is to study the impact of the heavy-tailed step distributions on the asymptotic behavior of the random walk. We shall see that, as for the ERW, the asymptotic behavior of the Shark Random Swim depends on its memory parameter p, and that a phase transition can be observed at the critical value p = 1/α.
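
    The memory mechanism of the ERW is easy to simulate: at each step the walker recalls a uniformly chosen past step and repeats it with probability p, or reverses it otherwise. A minimal sketch (the Shark Random Swim would replace the ±1 steps with α-stable draws; parameter values here are arbitrary):

    ```python
    import random

    def elephant_walk(n_steps, p, seed=0):
        """Elephant Random Walk: each new step repeats a uniformly recalled
        past step with probability p, or reverses it with probability 1 - p.
        The first step is +1 or -1 with equal probability."""
        rng = random.Random(seed)
        steps = [rng.choice((-1, 1))]
        for _ in range(n_steps - 1):
            remembered = rng.choice(steps)
            steps.append(remembered if rng.random() < p else -remembered)
        return steps

    walk = elephant_walk(1000, p=0.75, seed=42)
    position = sum(walk)  # for the ERW, superdiffusive behavior sets in for p > 3/4
    ```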

  13. Nondestructive X-ray diffraction measurement of warpage in silicon dies embedded in integrated circuit packages.

    PubMed

    Tanner, B K; Danilewsky, A N; Vijayaraghavan, R K; Cowley, A; McNally, P J

    2017-04-01

    Transmission X-ray diffraction imaging in both monochromatic and white beam section mode has been used to measure quantitatively the displacement and warpage stress in encapsulated silicon devices. The displacement dependence with position on the die was found to agree well with that predicted from a simple model of warpage stress. For uQFN microcontrollers, glued only at the corners, the measured misorientation contours are consistent with those predicted using finite element analysis. The absolute displacement, measured along a line through the die centre, was comparable to that reported independently by high-resolution X-ray diffraction and optical interferometry of similar samples. It is demonstrated that the precision is greater than the spread of values found in randomly selected batches of commercial devices, making the techniques viable for industrial inspection purposes.

  14. The NSO FTS database program and archive (FTSDBM)

    NASA Technical Reports Server (NTRS)

    Lytle, D. M.

    1992-01-01

    Data from the NSO Fourier transform spectrometer is being re-archived from half inch tape onto write-once compact disk. In the process, information about each spectrum and a low resolution copy of each spectrum is being saved into an on-line database. FTSDBM is a simple database management program in the NSO external package for IRAF. A command language allows the FTSDBM user to add entries to the database, delete entries, select subsets from the database based on keyword values including ranges of values, create new database files based on these subsets, make keyword lists, examine low resolution spectra graphically, and make disk number/file number lists. Once the archive is complete, FTSDBM will allow the database to be efficiently searched for data of interest to the user and the compact disk format will allow random access to that data.

  15. Fabrication of polymer micro-lens array with pneumatically diaphragm-driven drop-on-demand inkjet technology.

    PubMed

    Xie, Dan; Zhang, Honghai; Shu, Xiayun; Xiao, Junfeng

    2012-07-02

    The paper reports an effective method to fabricate micro-lens arrays with ultraviolet-curable polymer, using an original pneumatically diaphragm-driven drop-on-demand inkjet system. An array of plano-convex micro-lenses can be formed on a glass substrate due to surface tension and the hydrophobic effect. The micro-lens arrays have uniform focusing function and a smooth, truly planar surface. The fabrication process also showed good repeatability: fifty micro-lenses randomly selected from a 9 × 9 micro-lens array, with an average diameter of 333.28 μm, showed 1.1% variation. The focal length, surface roughness, and optical properties of the fabricated micro-lenses were also measured, analyzed, and found satisfactory. The technique shows great potential for fabricating polymer micro-lens arrays with high flexibility, a simple technological process, and low production cost.

  16. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that accumulated errors were never above a given threshold in numerical algorithms. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst-case analysis considers that no significant bit remains. We are using PVS, as such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.
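
    Markov's inequality, one of the two bounds the report builds on, can be checked empirically for a toy accumulated-error model. The uniform per-step error distribution and the single-precision unit roundoff are illustrative assumptions, not the report's model:

    ```python
    import random

    def markov_bound_check(n_terms, threshold, trials, seed=0):
        """Monte Carlo check of Markov's inequality, P(S >= a) <= E[S] / a,
        for S a sum of nonnegative per-operation errors, modeled here as
        uniform draws on [0, ulp) with ulp the single-precision roundoff."""
        rng = random.Random(seed)
        ulp = 2.0 ** -23
        exceed, total = 0, 0.0
        for _ in range(trials):
            s = sum(rng.uniform(0.0, ulp) for _ in range(n_terms))
            total += s
            if s >= threshold:
                exceed += 1
        empirical_p = exceed / trials
        markov_bound = (total / trials) / threshold
        return empirical_p, markov_bound

    p_exceed, bound = markov_bound_check(n_terms=1000, threshold=1e-4, trials=2000)
    ```

    The bound is crude (it ignores the error distribution entirely), which is why the report also uses Lévy's inequality and a formal proof assistant rather than simulation.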

  17. Evaluation of generalized degrees of freedom for sparse estimation by replica method

    NASA Astrophysics Data System (ADS)

    Sakata, A.

    2016-12-01

    We develop a method to evaluate the generalized degrees of freedom (GDF) for linear regression with sparse regularization. The GDF is a key factor in model selection, and thus its evaluation is useful in many modelling applications. An analytical expression for the GDF is derived using the replica method in the large-system-size limit with random Gaussian predictors. The resulting formula has a universal form that is independent of the type of regularization, providing us with a simple interpretation. Within the framework of replica symmetric (RS) analysis, GDF has a physical meaning as the effective fraction of non-zero components. The validity of our method in the RS phase is supported by the consistency of our results with previous mathematical results. The analytical results in the RS phase are calculated numerically using the belief propagation algorithm.

  18. Wave scattering from random sets of closely spaced objects through linear embedding via Green's operators

    NASA Astrophysics Data System (ADS)

    Lancellotti, V.; de Hon, B. P.; Tijhuis, A. G.

    2011-08-01

    In this paper we present the application of linear embedding via Green's operators (LEGO) to the solution of the electromagnetic scattering from clusters of arbitrary (both conducting and penetrable) bodies randomly placed in a homogeneous background medium. In the LEGO method the objects are enclosed within simple-shaped bricks described in turn via scattering operators of equivalent surface current densities. Such operators have to be computed only once for a given frequency, and hence they can be re-used to perform the study of many distributions comprising the same objects located in different positions. The surface integral equations of LEGO are solved via the Moments Method combined with Adaptive Cross Approximation (to save memory) and Arnoldi basis functions (to compress the system). By means of purposefully selected numerical experiments we discuss the time requirements with respect to the geometry of a given distribution. Besides, we derive an approximate relationship between the (near-field) accuracy of the computed solution and the number of Arnoldi basis functions used to obtain it. This result endows LEGO with a handy practical criterion for both estimating the error and keeping it in check.

  19. Preliminary investigation of human exhaled breath for tuberculosis diagnosis by multidimensional gas chromatography - Time of flight mass spectrometry and machine learning.

    PubMed

    Beccaria, Marco; Mellors, Theodore R; Petion, Jacky S; Rees, Christiaan A; Nasir, Mavra; Systrom, Hannah K; Sairistil, Jean W; Jean-Juste, Marc-Antoine; Rivera, Vanessa; Lavoile, Kerline; Severe, Patrice; Pape, Jean W; Wright, Peter F; Hill, Jane E

    2018-02-01

    Tuberculosis (TB) remains a global public health malady that claims almost 1.8 million lives annually. Diagnosis represents perhaps one of the most challenging aspects of tuberculosis control. The gold standards for diagnosis of active TB (culture and nucleic acid amplification) are sputum-dependent; however, in up to a third of TB cases, an adequate biological sputum sample is not readily available. The analysis of exhaled breath, as an alternative to sputum-dependent tests, has the potential to provide a simple, fast, non-invasive, and readily available diagnostic service that could positively change TB detection. Human breath was evaluated in the setting of active tuberculosis using thermal desorption-comprehensive two-dimensional gas chromatography-time of flight mass spectrometry. From the entire spectrum of volatile metabolites in breath, three random forest machine learning models were applied, leading to the generation of a panel of 46 breath features. The twenty-two features common to all three random forest models were selected as a set that could distinguish subjects with confirmed pulmonary M. tuberculosis infection from people with pathologies other than TB. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2008-01-01

    Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.
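The prediction step can be illustrated with a minimal kernel-weighted (Nadaraya-Watson) local regression, one simple member of the nonparametric local regression family the paper refers to. The site coordinates, volumes, and bandwidth below are invented for illustration:

```python
import math

def local_estimate(x0, xs, ys, bandwidth):
    """Kernel-weighted (Nadaraya-Watson) local estimate of y at x0:
    nearby observations receive exponentially larger weights."""
    weights = [math.exp(-((x - x0) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Hypothetical recoverable-volume observations ys at tested sites xs
# (arbitrary units); predict at an untested site x0 = 2.5:
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 1.2, 2.0, 2.1, 3.0]
prediction = local_estimate(2.5, xs, ys, bandwidth=1.0)
```

The bandwidth controls the smoothing trade-off the paper notes: a large bandwidth predicts aggregate volumes well but flattens site-to-site granularity.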

  1. Low-level sensory plasticity during task-irrelevant perceptual learning: Evidence from conventional and double training procedures

    PubMed Central

    Pilly, Praveen K.; Grossberg, Stephen; Seitz, Aaron R.

    2009-01-01

    Studies of perceptual learning have focused on aspects of learning that are related to early stages of sensory processing. However, conclusions that perceptual learning results in low-level sensory plasticity are controversial, since such learning may also be attributed to plasticity in later stages of sensory processing or in readout from sensory to decision stages, or to changes in high-level central processing. To address this controversy, we developed a novel random dot motion (RDM) stimulus to target motion cells selective to contrast polarity by ensuring the motion direction information arises only from signal dot onsets and not their offsets, and used these stimuli in the paradigm of task-irrelevant perceptual learning (TIPL). In TIPL, learning is achieved in response to a stimulus by subliminally pairing that stimulus with the targets of an unrelated training task. In this manner, we are able to probe learning for an aspect of motion processing thought to be a function of directional V1 simple cells with a learning procedure that dissociates the learned stimulus from the decision processes relevant to the training task. Our results show direction-selective learning for the designated contrast polarity that does not transfer to the opposite contrast polarity. This polarity specificity was replicated in a double training procedure in which subjects were additionally exposed to the opposite polarity. Taken together, these results suggest that TIPL for motion stimuli may occur at the stage of directional V1 simple cells. Finally, a theoretical explanation is provided to understand the data. PMID:19800358

  2. Osteoporosis risk prediction for bone mineral density assessment of postmenopausal women using machine learning.

    PubMed

    Yoo, Tae Keun; Kim, Sung Kean; Kim, Deok Won; Choi, Joon Yul; Lee, Wan Hyung; Oh, Ein; Park, Eun-Cheol

    2013-11-01

    A number of clinical decision tools for osteoporosis risk assessment have been developed to select postmenopausal women for the measurement of bone mineral density. We developed and validated machine learning models with the aim of more accurately identifying the risk of osteoporosis in postmenopausal women compared to the ability of conventional clinical decision tools. We collected medical records from Korean postmenopausal women based on the Korea National Health and Nutrition Examination Surveys. The training data set was used to construct models based on popular machine learning algorithms such as support vector machines (SVM), random forests, artificial neural networks (ANN), and logistic regression (LR) based on simple surveys. The machine learning models were compared to four conventional clinical decision tools: osteoporosis self-assessment tool (OST), osteoporosis risk assessment instrument (ORAI), simple calculated osteoporosis risk estimation (SCORE), and osteoporosis index of risk (OSIRIS). SVM had significantly better area under the curve (AUC) of the receiver operating characteristic than ANN, LR, OST, ORAI, SCORE, and OSIRIS for the training set. SVM predicted osteoporosis risk with an AUC of 0.827, accuracy of 76.7%, sensitivity of 77.8%, and specificity of 76.0% at total hip, femoral neck, or lumbar spine for the testing set. The significant factors selected by SVM were age, height, weight, body mass index, duration of menopause, duration of breast feeding, estrogen therapy, hyperlipidemia, hypertension, osteoarthritis, and diabetes mellitus. Considering various predictors associated with low bone density, the machine learning methods may be effective tools for identifying postmenopausal women at high risk for osteoporosis.
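The discrimination metric compared above, the area under the ROC curve, can be computed directly from model scores as the probability that a random positive case outscores a random negative one. A minimal sketch with invented risk scores, not the study's data:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive case outscores a randomly chosen negative
    (ties count half)."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical risk scores for subjects with and without low bone density:
with_condition = [0.9, 0.8, 0.4]
without_condition = [0.7, 0.3, 0.2]
print(auc(with_condition, without_condition))  # 8 of 9 pairs ranked correctly
```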

  3. An Overview of Randomization and Minimization Programs for Randomized Clinical Trials

    PubMed Central

    Saghaei, Mahmoud

    2011-01-01

    Randomization is an essential component of sound clinical trials, which prevents selection biases and helps in blinding the allocations. Randomization is a process by which subsequent subjects are enrolled into trial groups only by chance, which essentially eliminates selection biases. A serious possible consequence of randomization is severe imbalance among the treatment groups with respect to some prognostic factors, which may invalidate the trial results or necessitate complex and usually unreliable secondary analyses to eradicate the source of the imbalances. Minimization, on the other hand, tends to allocate in such a way as to minimize the differences among groups with respect to prognostic factors. Pure minimization is therefore completely deterministic; that is, one can predict the allocation of the next subject by knowing the factor levels of previously enrolled subjects and the characteristics of the next subject. To eliminate this predictability, it is necessary to include some element of randomness in the minimization algorithms. In this article, brief descriptions of randomization and minimization are presented, followed by an introduction to selected randomization and minimization programs. PMID:22606659
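The combination described above, deterministic minimization plus an element of randomness, can be sketched in a Pocock-Simon style. The group structure, factor names, and the 0.8 follow probability are illustrative assumptions, not taken from any of the reviewed programs:

```python
import random

def minimization_assign(subject, groups, factors, p_follow=0.8, rng=random):
    """Pocock-Simon-style minimization sketch: pick the arm that minimizes
    imbalance over prognostic factors, but follow that deterministic
    choice only with probability p_follow to limit predictability."""
    def imbalance_if(candidate):
        score = 0
        for f in factors:
            level = subject[f]
            counts = []
            for name, members in groups.items():
                c = sum(1 for s in members if s[f] == level)
                if name == candidate:
                    c += 1  # hypothetically enrol the new subject here
                counts.append(c)
            score += max(counts) - min(counts)
        return score

    best = min(groups, key=imbalance_if)
    if rng.random() < p_follow:
        return best
    return rng.choice([g for g in groups if g != best])

# One enrolled subject so far; the next similar subject should go to the
# other arm (p_follow=1.0 makes the sketch deterministic for the demo):
groups = {"treatment": [{"sex": "F", "age": "old"}], "control": []}
arm = minimization_assign({"sex": "F", "age": "old"}, groups,
                          factors=["sex", "age"], p_follow=1.0)
print(arm)  # "control": this balances both factor levels
```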

  4. The Impact of Presentation Format on Younger and Older Adults' Self-Regulated Learning.

    PubMed

    Price, Jodi

    2017-01-01

    Background/Study Context: Self-regulated learning involves deciding what to study and for how long. Debate surrounds whether individuals' selections are influenced more by item complexity or point values, or whether people instead select in a left-to-right reading order, ignoring item complexity and value. The present study manipulated whether point values and presentation format favored selection of simple or complex Chinese-English pairs to assess the impact on younger and older adults' selection behaviors. One hundred and five younger adults (mean age = 20.26, SD = 2.38) and 102 older adults (mean age = 70.28, SD = 6.37) participated in the experiment. Participants studied four different 3 × 3 grids (two per trial), each containing three simple, three medium, and three complex Chinese-English vocabulary pairs presented in either a simple-first or complex-first order, depending on condition. Point values were assigned in either a 2-4-8 or 8-4-2 order so that either simple or complex items were favored. Points did not influence the order in which either age group selected items, whereas presentation format did: younger and older adults selected more simple or complex items when those items appeared in the first column. However, older adults selected and allocated more time to simpler items but recalled less overall than did younger adults. Memory beliefs and working memory capacity predicted study time allocation, but not item selection, behaviors. Presentation format must be considered when evaluating which theory of self-regulated learning best accounts for younger and older adults' study behaviors and whether there are age-related differences in self-regulated learning. The results of the present study combine with others to support the importance of also considering the role of external factors (e.g., working memory capacity and memory beliefs) in each age group's self-regulated learning decisions.

  5. The Beneficial Role of Random Strategies in Social and Financial Systems

    NASA Astrophysics Data System (ADS)

    Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea

    2013-05-01

    In this paper we focus on the beneficial role of random strategies in social sciences by means of simple mathematical and computational models. We briefly review recent results obtained by two of us in previous contributions for the case of the Peter principle and the efficiency of a Parliament. Then, we develop a new application of random strategies to the case of financial trading and discuss in detail our findings about forecasts of markets dynamics.

  6. On Edge Exchangeable Random Graphs

    NASA Astrophysics Data System (ADS)

    Janson, Svante

    2017-06-01

    We study a recent model for edge exchangeable random graphs introduced by Crane and Dempsey; in particular we study asymptotic properties of the random simple graph obtained by merging multiple edges. We study a number of examples, and show that the model can produce dense, sparse and extremely sparse random graphs. One example yields a power-law degree distribution. We give some examples where the random graph is dense and converges a.s. in the sense of graph limit theory, but also an example where a.s. every graph limit is the limit of some subsequence. Another example is sparse and yields convergence to a non-integrable generalized graphon defined on (0,∞).

  7. Sampling for Patient Exit Interviews: Assessment of Methods Using Mathematical Derivation and Computer Simulations.

    PubMed

    Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till

    2018-02-01

    (1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
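The bias the authors derive, that patients with longer consultations are overrepresented when the interviewer intercepts whoever is in the consultation room, can be reproduced in a small Monte Carlo sketch. The visit durations and their proportions are invented for illustration:

```python
import random

rng = random.Random(1)

# Hypothetical clinic: half the visits take 5 minutes, half take 20.
durations = [5] * 500 + [20] * 500
true_mean = sum(durations) / len(durations)  # 12.5 minutes

# Simple random sampling of patients is unbiased:
srs_mean = sum(rng.choice(durations) for _ in range(10_000)) / 10_000

# Intercepting whoever happens to occupy the consultation room is
# length-biased: selection probability is proportional to visit length.
biased = rng.choices(durations, weights=durations, k=10_000)
biased_mean = sum(biased) / len(biased)

print(true_mean, round(srs_mean, 1), round(biased_mean, 1))
```

With these numbers the length-biased mean converges to sum(d²)/sum(d) = 17 minutes rather than the true 12.5, illustrating why sampling the next patient *entering* the room removes the bias.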

  8. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs, which provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
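An LCG iterates x_{n+1} = (a·x_n + c) mod m. A minimal Python sketch of the recurrence, using the widely cited ANSI C parameter set rather than whatever parameters RANDOM itself selects:

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Linear congruential generator: x_{n+1} = (a * x_n + c) mod m.
    Parameters are the classic ANSI C values, for illustration only.
    Yields uniform variates in [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(seed=42)
sample = [next(gen) for _ in range(5)]
print(sample)  # reproducible: the same seed always gives the same stream
```

Parameter choice matters: a poor (a, c, m) triple shortens the cycle and introduces lattice structure, which is exactly what companion tools like RANCYCLE and ARITH help diagnose.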

  9. Simple Copper Catalysts for the Aerobic Oxidation of Amines: Selectivity Control by the Counterion.

    PubMed

    Xu, Boran; Hartigan, Elizabeth M; Feula, Giancarlo; Huang, Zheng; Lumb, Jean-Philip; Arndtsen, Bruce A

    2016-12-19

    We describe the use of simple copper-salt catalysts in the selective aerobic oxidation of amines to nitriles or imines. These catalysts are marked by their exceptional efficiency, operate at ambient temperature and pressure, and allow the oxidation of amines without expensive ligands or additives. This study highlights the significant role counterions can play in controlling selectivity in catalytic aerobic oxidations. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Simple aspiration versus intercostal tube drainage for primary spontaneous pneumothorax in adults.

    PubMed

    Carson-Chahhoud, Kristin V; Wakai, Abel; van Agteren, Joseph Em; Smith, Brian J; McCabe, Grainne; Brinn, Malcolm P; O'Sullivan, Ronan

    2017-09-07

    For management of pneumothorax that occurs without underlying lung disease, also referred to as primary spontaneous pneumothorax, simple aspiration is technically easier to perform than intercostal tube drainage. In this systematic review, we seek to compare the clinical efficacy and safety of simple aspiration versus intercostal tube drainage for management of primary spontaneous pneumothorax. This review was first published in 2007 and was updated in 2017. To compare the clinical efficacy and safety of simple aspiration versus intercostal tube drainage for management of primary spontaneous pneumothorax. We searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 1) in the Cochrane Library; MEDLINE (1966 to January 2017); and Embase (1980 to January 2017). We searched the World Health Organization (WHO) International Clinical Trials Registry for ongoing trials (January 2017). We checked the reference lists of included trials and contacted trial authors. We imposed no language restrictions. We included randomized controlled trials (RCTs) of adults 18 years of age and older with primary spontaneous pneumothorax that compared simple aspiration versus intercostal tube drainage. Two review authors independently selected studies for inclusion, assessed trial quality, and extracted data. We combined studies using the random-effects model. Of 2332 publications obtained through the search strategy, seven studies met the inclusion criteria; one study was ongoing and six studies of 435 participants were eligible for inclusion in the updated review. Data show a significant difference in immediate success rates of procedures favouring tube drainage over simple aspiration for management of primary spontaneous pneumothorax (risk ratio (RR) 0.78, 95% confidence interval (CI) 0.69 to 0.89; 435 participants, 6 studies; moderate-quality evidence). 
Duration of hospitalization, however, was significantly shorter for patients treated by simple aspiration (mean difference (MD) -1.66, 95% CI -2.28 to -1.04; 387 participants, 5 studies; moderate-quality evidence). A narrative synthesis of evidence revealed that simple aspiration led to fewer adverse events (245 participants, 3 studies; low-quality evidence), but data suggest no differences between groups in terms of one-year success rate (RR 1.07, 95% CI 0.96 to 1.18; 318 participants, 4 studies; moderate-quality evidence), hospitalization rate (RR 0.60, 95% CI 0.25 to 1.47; 245 participants, 3 studies; very low-quality evidence), and patient satisfaction (median between-group difference of 0.5 on a scale from 1 to 10; 48 participants, 1 study; low-quality evidence). No studies provided data on cost-effectiveness. Available trials showed low to moderate-quality evidence that intercostal tube drainage produced higher rates of immediate success, while simple aspiration resulted in a shorter duration of hospitalization. Although adverse events were reported more commonly for patients treated with tube drainage, the low quality of the evidence warrants caution in interpreting these findings. Similarly, although this review observed no differences between groups in early failure rate, one-year success rate, or hospital admission rate, this too needs to be put into the perspective of the quality of the evidence, which was very low for hospitalization rate and low for patient satisfaction. Future adequately powered research is needed to strengthen the evidence presented in this review.
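The risk-ratio summaries quoted above follow from standard 2×2-table arithmetic, with the confidence interval computed on the log scale. A minimal sketch using invented single-trial counts, not the review's pooled data:

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio of group A vs group B with a 95% CI obtained from
    the standard error of the log risk ratio."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    half = 1.96 * se
    return rr, math.exp(math.log(rr) - half), math.exp(math.log(rr) + half)

# Hypothetical immediate-success counts: 30/50 aspiration vs 40/50 drainage.
rr, lo, hi = risk_ratio(30, 50, 40, 50)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

A pooled estimate across trials, as in the review's random-effects model, would additionally weight each trial's log risk ratio by its inverse variance; the sketch shows only the single-table step.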

  11. Random sphere packing model of heterogeneous propellants

    NASA Astrophysics Data System (ADS)

    Kochevets, Sergei Victorovich

    It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large scale computations is a realistic model of industrial propellants which retains the true morphology---a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion.
In addition, a method of analysis of the morphology of heterogeneous propellants is developed which uses the concept of multi-point correlation functions. A set of intrinsic length scales of local density fluctuations in random heterogeneous propellants is identified by performing a Monte-Carlo study of the correlation functions. This method of analysis shows great promise for understanding the origins of the combustion instability of heterogeneous propellants, and is believed to become a valuable tool for the development of safe and reliable rocket engines.
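A much-simplified two-dimensional analogue of such a packing algorithm is random sequential addition: propose random positions and reject any that overlap an already-placed particle. The unit-square domain, monodisperse radius, and counts below are illustrative only, not the dissertation's polydisperse 3-D algorithm:

```python
import math
import random

def pack_disks(n_target, radius, attempts=50_000, seed=0):
    """Random sequential addition sketch: drop disks at random positions
    in the unit square, rejecting any that would overlap one already
    placed. Stops at n_target disks or after the attempt budget."""
    rng = random.Random(seed)
    placed = []
    for _ in range(attempts):
        if len(placed) == n_target:
            break
        x = rng.uniform(radius, 1 - radius)
        y = rng.uniform(radius, 1 - radius)
        if all((x - px) ** 2 + (y - py) ** 2 >= (2 * radius) ** 2
               for px, py in placed):
            placed.append((x, y))
    return placed

disks = pack_disks(n_target=50, radius=0.04)
fraction = len(disks) * math.pi * 0.04 ** 2  # area fraction covered
```

Pure rejection sampling of this kind stalls well below the dense-packing limit, which is why the dissertation instead grows spheres at a controlled rate to reach experimental packing fractions.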

  12. Effects of prey abundance, distribution, visual contrast and morphology on selection by a pelagic piscivore

    USGS Publications Warehouse

    Hansen, Adam G.; Beauchamp, David A.

    2014-01-01

    Most predators eat only a subset of possible prey. However, studies evaluating diet selection rarely measure prey availability in a manner that accounts for temporal–spatial overlap with predators, the sensory mechanisms employed to detect prey, and constraints on prey capture. We evaluated the diet selection of cutthroat trout (Oncorhynchus clarkii) feeding on a diverse planktivore assemblage in Lake Washington to test the hypothesis that the diet selection of piscivores would reflect random (opportunistic) as opposed to non-random (targeted) feeding, after accounting for predator–prey overlap, visual detection and capture constraints. Diets of cutthroat trout were sampled in autumn 2005, when the abundance of transparent, age-0 longfin smelt (Spirinchus thaleichthys) was low, and 2006, when the abundance of smelt was nearly seven times higher. Diet selection was evaluated separately using depth-integrated and depth-specific (accounting for predator–prey overlap) prey abundance. The abundance of different prey was then adjusted for differences in detectability and vulnerability to predation to see whether these factors could explain diet selection. In 2005, cutthroat trout fed non-randomly by selecting against the smaller, transparent age-0 longfin smelt, but for the larger age-1 longfin smelt. After adjusting prey abundance for visual detection and capture, cutthroat trout fed randomly. In 2006, depth-integrated and depth-specific abundance explained the diets of cutthroat trout well, indicating random feeding. Feeding became non-random after adjusting for visual detection and capture. Cutthroat trout selected strongly for age-0 longfin smelt, but against similar sized threespine stickleback (Gasterosteus aculeatus) and larger age-1 longfin smelt in 2006. Overlap with juvenile sockeye salmon (O. nerka) was minimal in both years, and sockeye salmon were rare in the diets of cutthroat trout. The direction of the shift between random and non-random selection depended on the presence of a weak versus a strong year class of age-0 longfin smelt. These fish were easy to catch, but hard to see. When their density was low, poor detection could explain their rarity in the diet. When their density was high, poor detection was compensated by higher encounter rates with cutthroat trout, sufficient to elicit a targeted feeding response. The nature of the feeding selectivity of a predator can be highly dependent on fluctuations in the abundance and suitability of key prey.

  13. Group Counseling With Emotionally Disturbed School Children in Taiwan.

    ERIC Educational Resources Information Center

    Chiu, Peter

    The application of group counseling to emotionally disturbed school children in Chinese culture was examined. Two junior high schools located in Tao-Yuan Province were randomly selected with two eighth-grade classes randomly selected from each school. Ten emotionally disturbed students were chosen from each class and randomly assigned to two…

  14. Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah

    2014-01-01

    Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…

  15. On Measuring and Reducing Selection Bias with a Quasi-Doubly Randomized Preference Trial

    ERIC Educational Resources Information Center

    Joyce, Ted; Remler, Dahlia K.; Jaeger, David A.; Altindag, Onur; O'Connell, Stephen D.; Crockett, Sean

    2017-01-01

    Randomized experiments provide unbiased estimates of treatment effects, but are costly and time consuming. We demonstrate how a randomized experiment can be leveraged to measure selection bias by conducting a subsequent observational study that is identical in every way except that subjects choose their treatment--a quasi-doubly randomized…

  16. Random Walks on a Simple Cubic Lattice, the Multinomial Theorem, and Configurational Properties of Polymers

    ERIC Educational Resources Information Center

    Hladky, Paul W.

    2007-01-01

    Random-climb models enable undergraduate chemistry students to visualize polymer molecules, quantify their configurational properties, and relate molecular structure to a variety of physical properties. The model could serve as an introduction to more elaborate models of polymer molecules and could help in learning topics such as lattice models of…

  17. Predicting bending stiffness of randomly oriented hybrid panels

    Treesearch

    Laura Moya; William T.Y. Tze; Jerrold E. Winandy

    2010-01-01

    This study was conducted to develop a simple model to predict the bending modulus of elasticity (MOE) of randomly oriented hybrid panels. The modeling process involved three modules: the behavior of a single layer was computed by applying micromechanics equations, layer properties were adjusted for densification effects, and the entire panel was modeled as a three-...

  18. Financial Incentives and Student Achievement: Evidence from Randomized Trials. NBER Working Paper No. 15898

    ERIC Educational Resources Information Center

    Fryer, Roland G., Jr.

    2010-01-01

    This paper describes a series of school-based randomized trials in over 250 urban schools designed to test the impact of financial incentives on student achievement. In stark contrast to simple economic models, our results suggest that student incentives increase achievement when the rewards are given for inputs to the educational production…

  19. Multiple Imputation of Item Scores in Test and Questionnaire Data, and Influence on Psychometric Results

    ERIC Educational Resources Information Center

    van Ginkel, Joost R.; van der Ark, L. Andries; Sijtsma, Klaas

    2007-01-01

    The performance of five simple multiple imputation methods for dealing with missing data were compared. In addition, random imputation and multivariate normal imputation were used as lower and upper benchmark, respectively. Test data were simulated and item scores were deleted such that they were either missing completely at random, missing at…

  20. A Simple Spreadsheet Program to Simulate and Analyze the Far-UV Circular Dichroism Spectra of Proteins

    ERIC Educational Resources Information Center

    Abriata, Luciano A.

    2011-01-01

    A simple algorithm was implemented in a spreadsheet program to simulate the circular dichroism spectra of proteins from their secondary structure content and to fit α-helix, β-sheet, and random coil contents from experimental far-UV circular dichroism spectra. The physical basis of the method is briefly reviewed within the context of…
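The fitting step described, recovering secondary-structure fractions from a spectrum modeled as a weighted sum of basis spectra, can be sketched with a brute-force grid search. The three-wavelength "basis spectra" below are invented placeholders, not real CD reference data, and a spreadsheet would typically use a least-squares solver instead:

```python
# Invented three-wavelength basis spectra (arbitrary units):
BASIS = {
    "helix": [-30.0, -25.0, 60.0],
    "sheet": [-10.0, 5.0, 20.0],
    "coil": [5.0, -15.0, -5.0],
}

def simulate(fracs):
    """Spectrum as a fraction-weighted sum of the basis spectra."""
    return [sum(fracs[k] * BASIS[k][i] for k in BASIS) for i in range(3)]

def fit(spectrum, step=0.01):
    """Grid-search helix/sheet/coil fractions (summing to 1) that
    minimize the squared error against the observed spectrum."""
    best, best_err = None, float("inf")
    n = round(1 / step)
    for i in range(n + 1):
        for j in range(n + 1 - i):
            f = {"helix": i * step, "sheet": j * step,
                 "coil": (n - i - j) * step}
            err = sum((a - b) ** 2 for a, b in zip(simulate(f), spectrum))
            if err < best_err:
                best, best_err = f, err
    return best

# Round-trip check: simulate a known composition, then recover it.
target = simulate({"helix": 0.5, "sheet": 0.3, "coil": 0.2})
fitted = fit(target)
```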

  1. Contribution of Temporal Preparation and Processing Speed to Simple Reaction Time in Persons with Alzheimer's Disease and Mild Cognitive Impairment

    ERIC Educational Resources Information Center

    Sylvain-Roy, Stephanie; Bherer, Louis; Belleville, Sylvie

    2010-01-01

    Temporal preparation was assessed in 15 Alzheimer's disease (AD) patients, 20 persons with mild cognitive impairment (MCI) and 28 healthy older adults. Participants completed a simple reaction time task in which the preparatory interval duration varied randomly within two blocks (short versus long temporal window). Results indicated that AD and…

  2. Sampling pig farms at the abattoir in a cross-sectional study - Evaluation of a sampling method.

    PubMed

    Birkegård, Anna Camilla; Halasa, Tariq; Toft, Nils

    2017-09-15

    A cross-sectional study design is relatively inexpensive, fast and easy to conduct when compared to other study designs. Careful planning is essential to obtaining a representative sample of the population, and the recommended approach is to use simple random sampling from an exhaustive list of units in the target population. This approach is rarely feasible in practice, and other sampling procedures must often be adopted. For example, when slaughter pigs are the target population, sampling the pigs on the slaughter line may be an alternative to on-site sampling at a list of farms. However, it is difficult to sample a large number of farms from an exact predefined list, due to the logistics and workflow of an abattoir. Therefore, it is necessary to have a systematic sampling procedure and to evaluate the obtained sample with respect to the study objective. We propose a method for 1) planning, 2) conducting, and 3) evaluating the representativeness and reproducibility of a cross-sectional study when simple random sampling is not possible. We used an example of a cross-sectional study with the aim of quantifying the association between antimicrobial resistance and antimicrobial consumption in Danish slaughter pigs. It was not possible to visit farms within the designated timeframe. Therefore, it was decided to use convenience sampling at the abattoir. Our approach was carried out in three steps: 1) planning: using data from meat inspection to plan at which abattoirs and how many farms to sample; 2) conducting: sampling was carried out at five abattoirs; 3) evaluation: representativeness was evaluated by comparing sampled and non-sampled farms, and the reproducibility of the study was assessed through simulated sampling based on meat inspection data from the period where the actual data collection was carried out. In the cross-sectional study, samples were taken from 681 Danish pig farms during five weeks from February to March 2015.
The evaluation showed that the sampling procedure was reproducible with results comparable to the collected sample. However, the sampling procedure favoured sampling of large farms. Furthermore, both under-sampled and over-sampled areas were found using scan statistics. In conclusion, sampling conducted at abattoirs can provide a spatially representative sample. Hence it is a possible cost-effective alternative to simple random sampling. However, it is important to assess the properties of the resulting sample so that any potential selection bias can be addressed when reporting the findings. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. SIMPL Systems, or: Can We Design Cryptographic Hardware without Secret Key Information?

    NASA Astrophysics Data System (ADS)

    Rührmair, Ulrich

    This paper discusses a new cryptographic primitive termed a SIMPL system. Roughly speaking, a SIMPL system is a special type of Physical Unclonable Function (PUF) that possesses a binary description allowing its (slow) public simulation and prediction. Besides this public-key-like functionality, SIMPL systems have another advantage: no secret information is, or needs to be, contained in SIMPL systems in order to enable cryptographic protocols - neither in the form of a standard binary key, nor as secret information hidden in random, analog features, as is the case for PUFs. The cryptographic security of SIMPLs instead rests on (i) a physical assumption on their unclonability, and (ii) a computational assumption regarding the complexity of simulating their output. This novel property makes SIMPL systems potentially immune against many known hardware and software attacks, including malware, side-channel, invasive, or modeling attacks.

  4. The rationale and design of the Shockless IMPLant Evaluation (SIMPLE) trial: a randomized, controlled trial of defibrillation testing at the time of defibrillator implantation.

    PubMed

    Healey, Jeff S; Hohnloser, Stefan H; Glikson, Michael; Neuzner, Joerg; Viñolas, Xavier; Mabo, Philippe; Kautzner, Josef; O'Hara, Gilles; Van Erven, Liselot; Gadler, Frederick; Appl, Ursula; Connolly, Stuart J

    2012-08-01

    Defibrillation testing (DT) has been an integral part of defibrillator (implantable cardioverter defibrillator [ICD]) implantation; however, there is little evidence that it improves outcomes. Surveys show a trend toward ICD implantation without DT, which now reaches 30% to 60% in some regions. Because there is no evidence to support this dramatic shift in practice, a randomized trial is urgently needed. The SIMPLE trial will determine if ICD implantation without any DT is noninferior to implantation with DT. Patients will be eligible if they are receiving their first ICD using a Boston Scientific device (Boston Scientific, Natick, MA). Patients will be randomized to DT or no DT at the time of ICD implantation. In the DT arm, physicians will make all reasonable efforts to ensure 1 successful intraoperative defibrillation at 17 J or 2 at 21 J. The first clinical shock in all tachycardia zones will be set to 31 J for all patients. The primary outcome of SIMPLE will be the composite of ineffective appropriate shock or arrhythmic death. The safety outcome of SIMPLE will include a composite of potentially DT-related procedural complications within 30 days of ICD implantation. Several secondary outcomes will be evaluated, including all-cause mortality and heart failure hospitalization. Enrollment of 2,500 patients with 3.5-year mean follow-up will provide sufficient statistical power to demonstrate noninferiority. The study is being performed at approximately 90 centers in Canada, Europe, Israel, and Asia Pacific with final results expected in 2013. Copyright © 2012 Mosby, Inc. All rights reserved.

  5. Evolution in fluctuating environments: decomposing selection into additive components of the Robertson-Price equation.

    PubMed

    Engen, Steinar; Saether, Bernt-Erik

    2014-03-01

    We analyze the stochastic components of the Robertson-Price equation for the evolution of quantitative characters that enables decomposition of the selection differential into components due to demographic and environmental stochasticity. We show how these two types of stochasticity affect the evolution of multivariate quantitative characters by defining demographic and environmental variances as components of individual fitness. The exact covariance formula for selection is decomposed into three components, the deterministic mean value, as well as stochastic demographic and environmental components. We show that demographic and environmental stochasticity generate random genetic drift and fluctuating selection, respectively. This provides a common theoretical framework for linking ecological and evolutionary processes. Demographic stochasticity can cause random variation in selection differentials independent of fluctuating selection caused by environmental variation. We use this model of selection to illustrate that the effect on the expected selection differential of random variation in individual fitness is dependent on population size, and that the strength of fluctuating selection is affected by how environmental variation affects the covariance in Malthusian fitness between individuals with different phenotypes. Thus, our approach enables us to partition out the effects of fluctuating selection from the effects of selection due to random variation in individual fitness caused by demographic stochasticity. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
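The identity the abstract decomposes can be stated compactly. This is the standard form of the Robertson-Price equation, written here in conventional notation (w for fitness, z for the character value), which may differ from the authors' symbols:

```latex
\bar{w}\,\Delta\bar{z} \;=\; \operatorname{Cov}(w, z) \;+\; \operatorname{E}\!\left(w\,\Delta z\right)
```

The covariance term is the selection differential that the paper splits into a deterministic mean-value component plus stochastic demographic and environmental components; the expectation term captures transmission change between parent and offspring.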

  6. A randomized trial of upper limb botulinum toxin versus placebo injection, combined with physiotherapy, in children with hemiplegia.

    PubMed

    Ferrari, Adriano; Maoret, Anna Rosa; Muzzini, Simonetta; Alboresi, Silvia; Lombardi, Francesco; Sgandurra, Giuseppina; Paolicelli, Paola Bruna; Sicola, Elisa; Cioni, Giovanni

    2014-10-01

    The main goal of this study was to investigate the efficacy of Botulinum Toxin A (BoNT-A), combined with an individualized intensive physiotherapy/orthoses treatment, in improving upper limb activity and competence in daily activity in children with hemiplegia, and to compare its effectiveness with that of non-pharmacological instruments. It was a Randomized Clinical Trial of 27 children with spastic hemiplegic cerebral palsy, outpatients of two high-speciality centres for child rehabilitation. Each child was assigned by simple randomization to the experimental group (BoNT-A) or the control group (placebo). Assisting Hand Assessment (AHA) was chosen as primary outcome measure; other measures were selected according to ICF dimensions. Participants were assessed at baseline (T0), at T1, T2, T3 (1-3-6 months after injection, respectively). Every patient was given a specific physiotherapeutic treatment, consisting of individualized goal directed exercises, task oriented activities, daily stretching manoeuvres, functional and/or static orthoses. The BoNT-A group showed a significant increase of AHA raw scores at T2, compared to the control group (T2-T0: p=.025), and functional goals achievement (GAS) was also slightly better in the same group (p=.033). Other measures indicated some improvement in both groups, without significant intergroup differences. Children with intermediate severity of hand function on the House scale for upper limb impairment seem to benefit most from the BoNT-A protocol. BoNT-A was effective in improving manipulation in the activity domain, in association with individualized goal-directed physiotherapy and orthoses; the combined treatment is recommended. The study adds evidence for the efficacy of a combined botulinum toxin injection, physiotherapy, and orthoses treatment, and gives some suggestions for candidate selection and individualized treatment. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Open quantum random walks: Bistability on pure states and ballistically induced diffusion

    NASA Astrophysics Data System (ADS)

    Bauer, Michel; Bernard, Denis; Tilloy, Antoine

    2013-12-01

    Open quantum random walks (OQRWs) deal with quantum random motions on a line for systems with internal and orbital degrees of freedom. The internal system behaves as a quantum random gyroscope coding for the direction of the orbital moves. We reveal the existence of a transition, depending on OQRW moduli, in the internal system behaviors from simple oscillations to random flips between two unstable pure states. This induces a transition in the orbital motions from the usual diffusion to ballistically induced diffusion with a large mean free path and large effective diffusion constant at large times. We also show that mixed states of the internal system are converted into random pure states during the process. We touch upon possible experimental realizations.

  8. Comparing the Effect of Echinacea and Chlorhexidine Mouthwash on the Microbial Flora of Intubated Patients Admitted to the Intensive Care Unit.

    PubMed

    Safarabadi, Mehdi; Ghaznavi-Rad, Ehsanollah; Pakniyat, Abdolghader; Rezaie, Korosh; Jadidi, Ali

    2017-01-01

    Providing intubated patients admitted to the intensive care units with oral healthcare is one of the main tasks of nurses in order to prevent Ventilator-Associated Pneumonia (VAP). This study aimed at comparing the effects of two mouthwash solutions (echinacea and chlorhexidine) on the oral microbial flora of patients hospitalized in the intensive care units. In this clinical trial, 70 patients aged between 18 and 65 years undergoing tracheal intubation through the mouth in three hospitals in Arak were selected using simple random sampling and were randomly divided into two groups: the intervention group and the control group. The oral health checklist was used to collect the data (before and after the intervention). The samples were obtained from the orally intubated patients and were then cultured in selective media. Afterwards, the aerobic microbial growth was investigated in all culture media. The data were analyzed using SPSS software. The microbial flora in the echinacea group significantly decreased after the intervention (p < 0.0001), as did the microbial flora of the patients in the chlorhexidine group (p < 0.001). After 4 days, the oral microbial flora of the patients in the intervention group was lower than that of the patients in the control group (p < 0.001). The results showed that the echinacea solution was more effective in decreasing the oral microbial flora of patients in the intensive care unit. Given the benefits of the components of the herb Echinacea, it can be suggested as a viable alternative to chlorhexidine.

  9. Traditional Postextractive Implant Site Preparation Compared with Pre-extractive Interradicular Implant Bed Preparation in the Mandibular Molar Region, Using an Ultrasonic Device: A Randomized Pilot Study.

    PubMed

    Scarano, Antonio

    The immediate placement of single postextractive implants is increasing in everyday clinical practice. Due to insufficient bone tissue volume, proper primary stability, essential for subsequent osseointegration, is sometimes not reached. The aim of this work was to compare two different approaches: implant bed preparation before and after root extraction. Twenty-two patients of both sexes were selected who needed an implant-prosthetic rehabilitation of a fractured first mandibular molar or presented an untreatable endodontic pathology. The sites were randomly assigned to the test group (treated with implant bed preparation before molar extraction) or the control group (treated with implant bed preparation after molar extraction) by a computer-generated table. All implants were placed by the same operator, who was experienced in both traditional and ultrasonic techniques. The implant stability quotient (ISQ) and the position of the implant were evaluated. Statistical analysis was carried out. In the control group, three implants were placed in the central portion of the bone septum, while eight implants were placed with a tilted axis in relation to the septum; in the test group, all implants were placed in ideal positions within the root extraction sockets. The difference in implant position between the two procedures was statistically significant. This work presents an innovative approach for implant placement at the time of mandibular molar extraction. Preparing the implant bed with an ultrasonic device before root extraction is a simple technique and also allows greater stability to be reached in selected cases.

  10. The Coalescent Process in Models with Selection

    PubMed Central

    Kaplan, N. L.; Darden, T.; Hudson, R. R.

    1988-01-01

    Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685
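The neutral baseline against which the abstract's excess of segregating sites is measured can be sketched in a few lines. This is a minimal simulation of the standard neutral Kingman coalescent (not the authors' selection model), checking the Watterson expectation for the number of segregating sites:

```python
import math
import random

random.seed(2)

def poisson(lam):
    # Knuth's multiplicative method; adequate for the moderate rates here
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def segregating_sites(n, theta):
    """One neutral coalescent genealogy for a sample of n genes:
    accumulate total branch length, then drop mutations as a Poisson
    process of rate theta/2 per unit branch length."""
    t_total, k = 0.0, n
    while k > 1:
        # while k lineages remain, the waiting time to the next coalescence
        # is Exponential(k*(k-1)/2), and all k lineages accrue length
        t_total += k * random.expovariate(k * (k - 1) / 2.0)
        k -= 1
    return poisson(theta / 2.0 * t_total)

# Watterson: under neutrality E[S] = theta * sum_{i=1}^{n-1} 1/i
reps = [segregating_sites(10, 10.0) for _ in range(2000)]
expected = 10.0 * sum(1.0 / i for i in range(1, 10))
print(f"mean segregating sites {sum(reps)/len(reps):.2f}, neutral expectation {expected:.2f}")
```

The abstract's point is that under balancing selection with low mutation rates between the maintained alleles, the observed number of segregating sites substantially exceeds this neutral expectation.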

  11. Communicating patient-reported outcome scores using graphic formats: results from a mixed-methods evaluation.

    PubMed

    Brundage, Michael D; Smith, Katherine C; Little, Emily A; Bantug, Elissa T; Snyder, Claire F

    2015-10-01

    Patient-reported outcomes (PROs) promote patient-centered care by using PRO research results ("group-level data") to inform decision making and by monitoring individual patients' PROs ("individual-level data") to inform care. We investigated the interpretability of current PRO data presentation formats. This cross-sectional mixed-methods study randomized purposively sampled cancer patients and clinicians to evaluate six group-data or four individual-data formats. A self-directed exercise assessed participants' interpretation accuracy and ratings of ease-of-understanding and usefulness (0 = least to 10 = most) of each format. Semi-structured qualitative interviews explored helpful and confusing format attributes. We reached thematic saturation with 50 patients (44% < college graduate) and 20 clinicians. For group-level data, patients rated simple line graphs highest for ease-of-understanding and usefulness (median 8.0; 33% selected as easiest to understand/most useful), and clinicians also rated simple line graphs highest for ease-of-understanding and usefulness (median 9.0, 8.5) but most often selected line graphs with confidence limits or norms (30% for each format as easiest to understand/most useful). Qualitative results support that clinicians value confidence intervals, norms, and p values, but patients find them confusing. For individual-level data, both patients and clinicians rated line graphs highest for ease-of-understanding (median 8.0 patients, 8.5 clinicians) and usefulness (median 8.0, 9.0) and selected them as easiest to understand (50%, 70%) and most useful (62%, 80%). The qualitative interviews supported highlighting scores requiring clinical attention and providing reference values. This study has identified preferences and opportunities for improving on current formats for PRO presentation and will inform development of best practices for PRO presentation. Both patients and clinicians prefer line graphs across group-level and individual-level data formats, but clinicians prefer greater detail (e.g., statistical details) for group-level data.

  12. An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response

    PubMed Central

    Stipčević, Mario; Ursin, Rupert

    2015-01-01

    Random numbers are essential for our modern information-based society, e.g. in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNGs) rely on a process which, even in principle, can only be described by a probabilistic theory. Here we present a conceptually simple implementation, which offers 100% efficiency of producing a random bit upon request and simultaneously exhibits an ultra low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actually implemented technology and enables quick estimation of the randomness of very long sequences. Generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrates the maturity and overall understanding of the technology. PMID:26057576

  13. A generator for unique quantum random numbers based on vacuum states

    NASA Astrophysics Data System (ADS)

    Gabriel, Christian; Wittmann, Christoffer; Sych, Denis; Dong, Ruifang; Mauerer, Wolfgang; Andersen, Ulrik L.; Marquardt, Christoph; Leuchs, Gerd

    2010-10-01

    Random numbers are a valuable component in diverse applications that range from simulations through gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of different proposals for producing random numbers based on the foundational unpredictability of quantum mechanics. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest energy vacuum state, which cannot be correlated to any other state. The simplicity of our source, combined with its verifiably unique randomness, is an important attribute for achieving high-reliability, high-speed and low-cost quantum random number generators.
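A classical toy sketch of the measurement principle: the quadrature of a vacuum state follows a zero-mean Gaussian distribution, and a bit can be extracted by binarizing the measured value at the mean. The Gaussian statistics are simulated here with a pseudo-random generator, so this illustrates only the post-measurement binarization, not the quantum source itself:

```python
import random

random.seed(3)

def vacuum_quadrature():
    """A homodyne quadrature measurement of the vacuum state yields a
    zero-mean Gaussian outcome; simulated classically here."""
    return random.gauss(0.0, 1.0)

def random_bit():
    # Binarize by sign: a comparator threshold at the distribution mean
    # yields an unbiased bit from the symmetric Gaussian outcome.
    return 1 if vacuum_quadrature() > 0.0 else 0

bits = [random_bit() for _ in range(100000)]
print("fraction of ones:", sum(bits) / len(bits))
```

For the sign threshold to give unbiased bits, the comparator must sit exactly at the distribution mean; in a real homodyne setup that calibration (and extracting more than one bit per sample) is where the engineering effort lies.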

  14. Effects of Selected Meditative Asanas on Kinaesthetic Perception and Speed of Movement

    ERIC Educational Resources Information Center

    Singh, Kanwaljeet; Bal, Baljinder S.; Deol, Nishan S.

    2009-01-01

    Study aim: To assess the effects of selected meditative "asanas" on kinesthetic perception and movement speed. Material and methods: Thirty randomly selected male students aged 18-24 years volunteered to participate in the study. They were randomly assigned into two groups: A (meditative) and B (control). The Nelson's movement speed and…

  15. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  16. Random one-of-N selector

    DOEpatents

    Kronberg, J.W.

    1993-04-20

    An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.
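The patent's selection logic can be mimicked in software. This is a hedged sketch: the tick rate, the gate-time jitter, and N = 6 are illustrative choices, not values from the patent. A counter cycles 0..N-1 very fast, the activation window has environment/user-dependent jitter, and the item is selected when the counter happens to stop at zero, giving a 1-in-N chance on average:

```python
import random

random.seed(4)

N = 6                      # select one item in N on the average
TICK_HZ = 1_000_000        # counter cycles very fast relative to the gate time

def one_of_n_selected():
    """Simulate the scheme: a jittery activation window gates a fast
    free-running modulo-N counter; 'selected' means it stopped at zero."""
    gate_seconds = random.uniform(0.5, 1.5)   # jittery activation duration
    ticks = int(gate_seconds * TICK_HZ)
    return ticks % N == 0

trials = 60000
freq = sum(one_of_n_selected() for _ in range(trials)) / trials
print(f"selection frequency ~ {freq:.4f} (target 1/{N} = {1/N:.4f})")
```

The uniformity rests on the gate time spanning very many counter cycles, which is exactly the role the patent assigns to temperature-sensitive components and variable human/beam-break timing.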

  17. Random one-of-N selector

    DOEpatents

    Kronberg, James W.

    1993-01-01

    An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.

  18. Network-Physics (NP) BEC DIGITAL(#)-VULNERABILITY; ``Q-Computing"=Simple-Arithmetic;Modular-Congruences=SignalXNoise PRODUCTS=Clock-model;BEC-Factorization;RANDOM-# Definition;P=/=NP TRIVIAL Proof!!!

    NASA Astrophysics Data System (ADS)

    Pi, E. I.; Siegel, E.

    2010-03-01

    Siegel [AMS Natl. Mtg. (2002) Abs. 973-60-124] digits logarithmic-law inversion to ONLY BEQS BEC: Quanta/Bosons=#: EMP-like SEVERE VULNERABILITY of ONLY #-networks (VS. ANALOG INvulnerability) via Barabasi NP (VS. dynamics [Not. AMS (5/2009)] critique); (so called) ``quantum-computing'' (QC) = simple-arithmetic (sans division); algorithmic complexities: INtractibility/UNdecidability/INefficiency/NONcomputability/HARDNESS (so MIScalled) ``noise''-induced-phase-transition (NIT) ACCELERATION: Cook-Levin theorem Reducibility = RG fixed-points; #-Randomness DEFINITION via WHAT? Query (VS. Goldreich [Not. AMS (2002)] How? mea culpa) = ONLY MBCS hot-plasma v #-clumping NON-random BEC; Modular-Arithmetic Congruences = Signal x Noise PRODUCTS = clock-model; NON-Shor [Physica A, 341, 586 (04)] BEC logarithmic-law inversion factorization: Watkins #-theory U statistical-physics); P=/=NP C-S TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation (3 millennia AGO geometry: NO: CC, ``CS''; ``Feet of Clay!!!'']; Query WHAT?: Definition: (so MIScalled) ``complexity'' = UTTER-SIMPLICITY!! v COMPLICATEDNESS MEASURE(S).

  19. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    PubMed Central

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
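The special case stated in the abstract, that the treatment-coefficient MLE of a main-terms Poisson working model estimates the marginal log rate ratio even under misspecification, can be checked numerically. This sketch uses hypothetical simulation settings: outcomes are deliberately non-Poisson (overdispersed gamma-mixed counts), and with only an intercept and a treatment term the Poisson MLE of the treatment coefficient has the closed form log of the ratio of arm means:

```python
import math
import random

random.seed(5)

def poisson_draw(lam):
    # Knuth's method; adequate for the small rates used here
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def outcome(treated):
    """Overdispersed counts (gamma-mixed Poisson), so a Poisson working
    model is misspecified. E[Y] = 2 if treated else 1, Var[Y] > E[Y]."""
    mean = 2.0 if treated else 1.0
    return poisson_draw(random.gammavariate(2.0, mean / 2.0))

n = 50000                                   # per randomized arm
arm1 = [outcome(True) for _ in range(n)]
arm0 = [outcome(False) for _ in range(n)]

# Poisson MLE of the treatment coefficient in the main-terms working model:
# closed form log(mean treated / mean control) = marginal log rate ratio estimate.
beta_treatment = math.log(sum(arm1) / n) - math.log(sum(arm0) / n)
print("estimated marginal log rate ratio:", beta_treatment, "truth:", math.log(2.0))
```

Despite the misspecified Poisson likelihood, the estimate converges to the true marginal log rate ratio log(2), illustrating the asymptotic unbiasedness the paper proves in this special case.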

  20. Genome-wide characterization and selection of expressed sequence tag simple sequence repeat primers for optimized marker distribution and reliability in peach

    USDA-ARS?s Scientific Manuscript database

    Expressed sequence tag (EST) simple sequence repeats (SSRs) in Prunus were mined, and flanking primers designed and used for genome-wide characterization and selection of primers to optimize marker distribution and reliability. A total of 12,618 contigs were assembled from 84,727 ESTs, along with 34...

  1. Introduction of a Simple Experiment for the Undergraduate Organic Chemistry Laboratory Demonstrating the Lewis Acid and Shape-Selective Properties of Zeolite Na-Y

    ERIC Educational Resources Information Center

    Maloney, Vincent; Szczepanski, Zach

    2017-01-01

    A simple, inexpensive, discovery-based experiment for undergraduate organic laboratories has been developed that demonstrates the Lewis acid and shape-selective properties of zeolites. Calcined zeolite Na-Y promotes the electrophilic aromatic bromination of toluene with a significantly higher para/ortho ratio than observed under conventional…

  2. Population differentiation in Pacific salmon: local adaptation, genetic drift, or the environment?

    USGS Publications Warehouse

    Adkison, Milo D.

    1995-01-01

    Morphological, behavioral, and life-history differences between Pacific salmon (Oncorhynchus spp.) populations are commonly thought to reflect local adaptation, and it is likewise common to assume that salmon populations separated by small distances are locally adapted. Two alternatives to local adaptation exist: random genetic differentiation owing to genetic drift and founder events, and genetic homogeneity among populations, in which differences reflect differential trait expression in differing environments. Population genetics theory and simulations suggest that both alternatives are possible. With selectively neutral alleles, genetic drift can result in random differentiation despite many strays per generation. Even weak selection can prevent genetic drift in stable populations; however, founder effects can result in random differentiation despite selective pressures. Overlapping generations reduce the potential for random differentiation. Genetic homogeneity can occur despite differences in selective regimes when straying rates are high. In sum, localized differences in selection should not always result in local adaptation. Local adaptation is favored when population sizes are large and stable, selection is consistent over large areas, selective differentials are large, and straying rates are neither too high nor too low. Consideration of alternatives to local adaptation would improve both biological research and salmon conservation efforts.

  3. The Impact of Textual Input Enhancement and Explicit Rule Presentation on Iranian Elementary EFL Learners' Intake of Simple Past Tense

    ERIC Educational Resources Information Center

    Nahavandi, Naemeh; Mukundan, Jayakaran

    2013-01-01

    The present study investigated the impact of textual input enhancement and explicit rule presentation on 93 Iranian EFL learners' intake of simple past tense. Three intact general English classes in Tabriz Azad University were randomly assigned to: 1) a control group; 2) a TIE group; and 3) a TIE plus explicit rule presentation group. All…

  4. A Simple Game to Derive Lognormal Distribution

    ERIC Educational Resources Information Center

    Omey, E.; Van Gulck, S.

    2007-01-01

    In the paper we present a simple game that students can play in the classroom. The game can be used to show that random variables can behave in an unexpected way: the expected mean can tend to zero or to infinity; the variance can tend to zero or to infinity. The game can also be used to introduce the lognormal distribution. (Contains 1 table and…

  5. Computationally Efficient Resampling of Nonuniform Oversampled SAR Data

    DTIC Science & Technology

    2010-05-01

    noncoherently . The resample data is calculated using both a simple average and a weighted average of the demodulated data. The average nonuniform...trials with randomly varying accelerations. The results are shown in Fig. 5 for the noncoherent power difference and Fig. 6 for and coherent power...simple average. Figure 5. Noncoherent difference between SAR imagery generated with uniform sampling and nonuniform sampling that was resampled

  6. A Randomized Trial of a Computer-Assisted Tutoring Program Targeting Letter-Sound Expression

    ERIC Educational Resources Information Center

    DuBois, Matthew R.; Volpe, Robert J.; Hemphill, Elizabeth M.

    2014-01-01

    Given that many schools have limited resources and a high proportion of students who present with deficits in early literacy skills, supports aimed at preventing reading failure must be simple and efficient and generate meaningful changes in student learning. We used a randomized group design with a wait-list control to extend the work of Volpe,…

  7. Assessing Class-Wide Consistency and Randomness in Responses to True or False Questions Administered Online

    ERIC Educational Resources Information Center

    Pawl, Andrew; Teodorescu, Raluca E.; Peterson, Joseph D.

    2013-01-01

    We have developed simple data-mining algorithms to assess the consistency and the randomness of student responses to problems consisting of multiple true or false statements. In this paper we describe the algorithms and use them to analyze data from introductory physics courses. We investigate statements that emerge as outliers because the class…

  8. Combining Randomized and Non-Randomized Evidence in Clinical Research: A Review of Methods and Applications

    ERIC Educational Resources Information Center

    Verde, Pablo E.; Ohmann, Christian

    2015-01-01

    Researchers may have multiple motivations for combining disparate pieces of evidence in a meta-analysis, such as generalizing experimental results or increasing the power to detect an effect that a single study is not able to detect. However, while in meta-analysis, the main question may be simple, the structure of evidence available to answer it…

  9. Shift-phase code multiplexing technique for holographic memories and optical interconnection

    NASA Astrophysics Data System (ADS)

    Honma, Satoshi; Muto, Shinzo; Okamoto, Atsushi

    2008-03-01

    Holographic technologies for optical memories and interconnection devices have been studied actively because of their high storage capacity, many wiring patterns and high transmission rate. Among multiplexing techniques such as angular, phase-code and wavelength multiplexing, the speckle multiplexing technique has attracted attention due to its simple optical setup, having an adjustable random phase filter in only one direction. To keep the construction simple and to suppress crosstalk among adjacent page data or wiring patterns for efficient holographic memories and interconnection, we have to consider the optimum randomness of the phase filter. High randomness expands the illumination area of the reference beam on the holographic media. On the other hand, small randomness causes crosstalk between adjacent hologram data. We have proposed a method of holographic multiplexing, shift-phase code multiplexing with a two-dimensional orthogonal matrix phase filter. Many orthogonal phase codes can be produced by shifting the phase filter in one direction, allowing individual holograms to be recorded and read with low crosstalk. We give basic experimental results on holographic data multiplexing and consider the phase pattern of the filter needed to sufficiently suppress the crosstalk between adjacent holograms.

  10. Redshift Survey Strategies

    NASA Astrophysics Data System (ADS)

    Jones, A. W.; Bland-Hawthorn, J.; Kaiser, N.

    1994-12-01

    In the first half of 1995, the Anglo-Australian Observatory is due to commission a wide field (2.1 deg), 400-fiber, double spectrograph system (2dF) at the f/3.3 prime focus of the AAT 3.9m bi-national facility. The instrument should be able to measure ~ 4000 galaxy redshifts (assuming a magnitude limit of b_J ~ 20) in a single dark night and is therefore ideally suited to studies of large-scale structure. We have carried out simple 3D numerical simulations to judge the relative merits of sparse surveys and contiguous surveys. We generate a survey volume and fill it randomly with particles according to a selection function which mimics a magnitude-limited survey at b_J = 19.7. Each of the particles is perturbed by a gaussian random field according to the dimensionless power spectrum k^3 P(k) / 2 pi^2 determined by Feldman, Kaiser & Peacock (1994) from the IRAS QDOT survey. We introduce some redshift-space distortion as described by Kaiser (1987), a `thermal' component measured from pairwise velocities (Davis & Peebles 1983), and `fingers of god' due to rich clusters at random density enhancements. Our particular concern is to understand how the window function W^2(k) of the survey geometry compromises the accuracy of statistical measures [e.g., P(k), xi(r), xi(r_sigma, r_pi)] commonly used in the study of large-scale structure. We also examine the reliability of various tools (e.g. genus) for describing the topological structure within a contiguous region of the survey.

  11. 'On Your Feet to Earn Your Seat', a habit-based intervention to reduce sedentary behaviour in older adults: study protocol for a randomized controlled trial.

    PubMed

    Gardner, Benjamin; Thuné-Boyle, Ingela; Iliffe, Steve; Fox, Kenneth R; Jefferis, Barbara J; Hamer, Mark; Tyler, Nick; Wardle, Jane

    2014-09-20

    Many older adults are both highly sedentary (that is, spend considerable amounts of time sitting) and physically inactive (that is, do little physical activity). This protocol describes an exploratory trial of a theory-based behaviour change intervention in the form of a booklet outlining simple activities ('tips') designed both to reduce sedentary behaviour and to increase physical activity in older adults. The intervention is based on the 'habit formation' model, which proposes that consistent repetition leads to behaviour becoming automatic, sustaining activity gains over time. The intervention is being developed iteratively, in line with Medical Research Council complex intervention guidelines. Selection of activity tips was informed by semi-structured interviews and focus groups with older adults, and input from a multidisciplinary expert panel. An ongoing preliminary field test of acceptability among 25 older adults will inform further refinement. An exploratory randomized controlled trial will be conducted within a primary care setting, comparing the tips booklet with a control fact sheet. Retired, inactive and sedentary adults (n = 120) aged 60 to 74 years, with no physical impairments precluding light physical activity, will be recruited from general practices in north London, UK. The primary outcomes are recruitment and attrition rates. Secondary outcomes are changes in behaviour, habit, health and wellbeing over 12 weeks. Data will be used to inform study procedures for a future, larger-scale definitive randomized controlled trial. Current Controlled Trials ISRCTN47901994.

  12. A numerical study of sensory-guided multiple views for improved object identification

    NASA Astrophysics Data System (ADS)

    Blakeslee, B. A.; Zelnio, E. G.; Koditschek, D. E.

    2014-06-01

    We explore the potential on-line adjustment of sensory controls for improved object identification and discrimination in the context of a simulated high resolution camera system carried onboard a maneuverable robotic platform that can actively choose its observational position and pose. Our early numerical studies suggest the significant efficacy and enhanced performance achieved by even very simple feedback-driven iteration of the view, in contrast to identification from a fixed pose uninformed by any active adaptation. Specifically, we contrast the discriminative performance of the same conventional classification system when informed by: a random glance at a vehicle; two random glances at a vehicle; or a random glance followed by a guided second look. After each glance, edge detection algorithms isolate the most salient features of the image and template matching is performed through the use of the Hausdorff distance, comparing the simulated sensed images with reference images of the vehicles. We present initial simulation statistics that overwhelmingly favor the third scenario. We conclude with a sketch of our near-future steps in this study, which will entail: the incorporation of more sophisticated image processing and template matching algorithms; more complex discrimination tasks such as distinguishing between two similar vehicles or vehicles in motion; more realistic models of the observer's mobility, including platform dynamics and eventually environmental constraints; and expanding the sensing task beyond the identification of a specified object selected from a pre-defined library of alternatives.
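
    The template-matching step above relies on the Hausdorff distance between feature point sets. The following is a minimal illustrative sketch, not the authors' pipeline: the point sets are invented toy coordinates, whereas the study compares edge-detected sensed images with reference images.

```python
def directed_hausdorff(A, B):
    # Greatest distance from a point of A to its nearest point of B.
    return max(min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in B)
               for ax, ay in A)

def hausdorff(A, B):
    # Symmetric Hausdorff distance between two 2-D point sets.
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

template = [(0, 0), (1, 0)]        # invented reference feature points
sensed = [(0, 0), (1, 1)]          # invented sensed feature points
d = hausdorff(template, sensed)    # smaller d means a better template match
```

A guided second look, in this framing, is one expected to lower d against the correct reference template.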

  13. Distance education and diabetes empowerment: A single-blind randomized control trial.

    PubMed

    Zamanzadeh, Vahid; Zirak, Mohammad; Hemmati Maslakpak, Masomeh; Parizad, Naser

    2017-11-01

    Diabetes is one of the biggest problems in healthcare systems and kills many people every year. Diabetes management is impossible using medication alone, so patients must be educated to manage their diabetes. This study aims to assess the effect of education by telephone and short message service on empowering patients with type 2 diabetes (primary outcome). A single-blind randomized controlled trial was conducted in the Urmia diabetes association in Iran. Sixty-six participants with a definitive diagnosis of type 2 diabetes entered the study. Patients with secondary health problems were excluded. Patients were selected by simple random sampling and then allocated into intervention (n=33) and control (n=33) groups. The intervention group received an educational text message daily and instructive phone calls three days a week for three months, along with usual care. The Diabetes Empowerment Scale (DES), with confirmed validity and reliability, was used for collecting data. Data were analyzed using SPSS V6.1. The independent t-test, paired t-test and chi-square were used to analyze the data. The empowerment of the intervention group compared with the control group significantly improved after three months of distance education (p < 0.001, EF = 1.16). The study findings show that distance education has a significant effect on empowering patients with type 2 diabetes. Therefore, using distance education along with other diabetes management interventions is highly effective and should be part of the care in diabetes treatment. Copyright © 2016 Diabetes India. Published by Elsevier Ltd. All rights reserved.

  14. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    PubMed

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.

  15. A Prospective Randomized Study on Operative Treatment for Simple Distal Tibial Fractures-Minimally Invasive Plate Osteosynthesis Versus Minimal Open Reduction and Internal Fixation.

    PubMed

    Kim, Ji Wan; Kim, Hyun Uk; Oh, Chang-Wug; Kim, Joon-Woo; Park, Ki Chul

    2018-01-01

    To compare the radiologic and clinical results of minimally invasive plate osteosynthesis (MIPO) and minimal open reduction and internal fixation (ORIF) for simple distal tibial fractures. Randomized prospective study. Three level 1 trauma centers. Fifty-eight patients with simple and distal tibial fractures were randomized into a MIPO group (treatment with MIPO; n = 29) or a minimal group (treatment with minimal ORIF; n = 29). These numbers were designed to define the rate of soft tissue complication; therefore, validation of superiority in union time or determination of differences in rates of delayed union was limited in this study. Simple distal tibial fractures treated with MIPO or minimal ORIF. The clinical outcome measurements included operative time, radiation exposure time, and soft tissue complications. To evaluate a patient's function, the American Orthopedic Foot and Ankle Society ankle score (AOFAS) was used. Radiologic measurements included fracture alignment, delayed union, and union time. All patients acquired bone union without any secondary intervention. The mean union time was 17.4 weeks and 16.3 weeks in the MIPO and minimal groups, respectively. There was 1 case of delayed union and 1 case of superficial infection in each group. The radiation exposure time was shorter in the minimal group than in the MIPO group. Coronal angulation showed a difference between both groups. The American Orthopedic Foot and Ankle Society ankle scores were 86.0 and 86.7 in the MIPO and minimal groups, respectively. Minimal ORIF resulted in similar outcomes, with no increased rate of soft tissue problems compared to MIPO. Both MIPO and minimal ORIF have high union rates and good functional outcomes for simple distal tibial fractures. Minimal ORIF did not result in increased rates of infection and wound dehiscence. Therapeutic Level II. See Instructions for Authors for a complete description of levels of evidence.

  16. Effect of expanding medicaid for parents on children's health insurance coverage: lessons from the Oregon experiment.

    PubMed

    DeVoe, Jennifer E; Marino, Miguel; Angier, Heather; O'Malley, Jean P; Crawford, Courtney; Nelson, Christine; Tillotson, Carrie J; Bailey, Steffani R; Gallia, Charles; Gold, Rachel

    2015-01-01

    In the United States, health insurance is not universal. Observational studies show an association between uninsured parents and children. This association persisted even after expansions in child-only public health insurance. Oregon's randomized Medicaid expansion for adults, known as the Oregon Experiment, created a rare opportunity to assess causality between parent and child coverage. To estimate the effect on a child's health insurance coverage status when (1) a parent randomly gains access to health insurance and (2) a parent obtains coverage. Oregon Experiment randomized natural experiment assessing the results of Oregon's 2008 Medicaid expansion. We used generalized estimating equation models to examine the longitudinal effect of a parent randomly selected to apply for Medicaid on their child's Medicaid or Children's Health Insurance Program (CHIP) coverage (intent-to-treat analyses). We used per-protocol analyses to understand the impact on children's coverage when a parent was randomly selected to apply for and obtained Medicaid. Participants included 14409 children aged 2 to 18 years whose parents participated in the Oregon Experiment. For intent-to-treat analyses, the date a parent was selected to apply for Medicaid was considered the date the child was exposed to the intervention. In per-protocol analyses, exposure was defined as whether a selected parent obtained Medicaid. Children's Medicaid or CHIP coverage was assessed monthly and in 6-month intervals relative to their parent's selection date. In the immediate period after selection, coverage among children whose parents were selected to apply increased significantly from 3830 (61.4%) to 4152 (66.6%), compared with a nonsignificant change from 5049 (61.8%) to 5044 (61.7%) among children whose parents were not selected to apply. 
Children whose parents were randomly selected to apply for Medicaid had 18% higher odds of being covered in the first 6 months after parent's selection compared with children whose parents were not selected (adjusted odds ratio [AOR]=1.18; 95% CI, 1.10-1.27). The effect remained significant during months 7 to 12 (AOR=1.11; 95% CI, 1.03-1.19); months 13 to 18 showed a positive but not significant effect (AOR=1.07; 95% CI, 0.99-1.14). Children whose parents were selected and obtained coverage had more than double the odds of having coverage compared with children whose parents were not selected and did not gain coverage (AOR=2.37; 95% CI, 2.14-2.64). Children's odds of having Medicaid or CHIP coverage increased when their parents were randomly selected to apply for Medicaid. Children whose parents were selected and subsequently obtained coverage benefited most. This study demonstrates a causal link between parents' access to Medicaid coverage and their children's coverage.

  17. Alzheimer random walk

    NASA Astrophysics Data System (ADS)

    Odagaki, Takashi; Kasuya, Keisuke

    2017-09-01

    Using Monte Carlo simulation, we investigate a memory-impaired self-avoiding walk on a square lattice in which a random walker marks each visited site with a given probability p and makes a random walk avoiding the marked sites. Thus p = 0 and p = 1 correspond to the simple random walk and the self-avoiding walk, respectively. When p > 0, there is a finite probability that the walker is trapped. We show that the trap time distribution can be well fitted by Stacy's Weibull distribution b(a/b)^{(a+1)/b}[Γ((a+1)/b)]^{-1} x^a exp(-(a/b)x^b), where a and b are fitting parameters depending on p. We also find that the mean trap time diverges at p = 0 as p^{-α} with α = 1.89. In order to produce a sufficient number of long walks, we exploit the pivot algorithm and obtain the mean square displacement and its Flory exponent ν(p) as functions of p. We find that the exponent determined for 1000-step walks interpolates between both limits, ν(0) for the simple random walk and ν(1) for the self-avoiding walk, as [ν(p) - ν(0)]/[ν(1) - ν(0)] = p^β with β = 0.388 for p ≪ 0.1 and β = 0.0822 for p ≫ 0.1. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
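
    The walk model described above can be sketched in a few lines. This is an illustrative reconstruction from the abstract (the exact order of marking and moving is an assumption), not the authors' simulation code:

```python
import random

def memory_impaired_walk(p, max_steps, rng):
    """Walk on the square lattice, marking each visited site with
    probability p and never stepping onto a marked site.
    Returns (steps_taken, trapped)."""
    pos = (0, 0)
    marked = set()
    for step in range(max_steps):
        if rng.random() < p:          # imperfect memory: mark with probability p
            marked.add(pos)
        x, y = pos
        moves = [s for s in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                 if s not in marked]
        if not moves:                 # every neighbour is marked: trapped
            return step, True
        pos = rng.choice(moves)
    return max_steps, False

# p = 0 recovers the simple random walk (never trapped);
# p = 1 recovers the self-avoiding walk, where trapping occurs.
steps, trapped = memory_impaired_walk(0.5, 10_000, random.Random(1))
```

Recording `steps` over many trapped runs yields the trap time distribution that the abstract fits with Stacy's Weibull form.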

  18. Incidence of Postoperative Pain after Single Visit and Two Visit Root Canal Therapy: A Randomized Controlled Trial

    PubMed Central

    Joshi, Sonal B.; Bhagwat, S.V; Patil, Sanjana A

    2016-01-01

    Introduction: Root Canal Treatment (RCT) has become a mainstream procedure in dentistry. A successful RCT is marked by the absence of clinical signs and symptoms in teeth without any radiographic evidence of periodontal involvement. Completing this procedure in one visit or multiple visits has long been a topic of discussion. Aim: To evaluate the incidence of postoperative pain after root canal therapy performed in a single visit versus two visits. Material and Methods: An unblinded/open-label randomized controlled trial was carried out in the endodontic department of the Dental Institute, where 78 patients were recruited from the regular pool of patients. A total of 66 maxillary central incisors requiring root canal therapy fulfilled the inclusion and exclusion criteria. Using simple randomization by the biased-coin method, the selected patients were assigned into two groups: group A (n=33) and group B (n=33). Single-visit root canal treatment was performed for group A and two-visit root canal treatment for group B. The independent-sample t-test was used for statistical analysis. Results: Thirty-three patients were allotted to Group A, where endodontic treatment was completed in a single visit, while 33 patients were allotted to Group B, where endodontic treatment was completed in two visits. One patient dropped out from Group A; hence 32 patients were analysed in Group A and 33 in Group B. At 6, 12 and 24 hours after obturation, pain was significantly higher in Group B than in Group A. However, there was no significant difference in the pain experienced by patients in the two groups 48 hours after treatment. Conclusion: The incidence of pain after endodontic treatment performed in one visit or two visits is not significantly different. PMID:27437339

  19. An epidemiological study of prevalence and comorbidity of obsessive compulsive disorder symptoms (SOCD) and stress in Pakistani Adults.

    PubMed

    Ashraf, Farzana; Malik, Sadia; Arif, Amna

    2017-01-01

    To investigate the prevalence and comorbidity of subclinical obsessive compulsive disorder (SOCD) symptoms and stress across gender, marital and employment statuses. A cross-sectional study was conducted from December 2016 to March 2017 at two universities in the cosmopolitan city of Lahore. Two self-report scales measuring SOCD symptoms and stress were used to collect data from 377 adults selected through a simple random sampling technique, proportionately distributed across gender, marital and employment status. Of the total sample, 52% reported a low level of stress and 48% faced a high level of stress. Significant differences in prevalence were observed across marital and employment statuses, whereas for men and women it was the same (24%). Comorbidity of a high level of SOCD symptoms and a high level of stress was seen in 34%. Significant prevalence and comorbidity exist between SOCD symptoms and stress, and more studies addressing diverse populations are needed.

  20. Design and simulation study of the immunization Data Quality Audit (DQA).

    PubMed

    Woodard, Stacy; Archer, Linda; Zell, Elizabeth; Ronveaux, Olivier; Birmingham, Maureen

    2007-08-01

    The goal of the Data Quality Audit (DQA) is to assess whether the Global Alliance for Vaccines and Immunization-funded countries are adequately reporting the number of diphtheria-tetanus-pertussis immunizations given, on which the "shares" are awarded. Because this sampling design is a modified two-stage cluster sample (modified because a stratified, rather than a simple, random sample of health facilities is obtained from the selected clusters), the formula for the calculation of the standard error of the estimate is unknown. An approximated standard error has been proposed, and the first goal of this simulation is to assess the accuracy of that standard error. Results from the simulations based on hypothetical populations were found not to be representative of the actual DQAs that were conducted. Additional simulations were then conducted on the actual DQA data to better assess the precision of the DQA with both the original and the increased sample sizes.

  1. Assessing Date Palm Genetic Diversity Using Different Molecular Markers.

    PubMed

    Atia, Mohamed A M; Sakr, Mahmoud M; Adawy, Sami S

    2017-01-01

    Molecular marker technologies which rely on DNA analysis provide powerful tools to assess biodiversity at different levels, i.e., among and within species. A range of different molecular marker techniques have been developed and extensively applied for detecting variability in date palm at the DNA level. Recently, the advent of gene-targeting molecular marker approaches for studying biodiversity and genetic variation in many plant species has drawn the attention of date palm researchers to carrying out phylogenetic studies using these novel marker systems. Molecular markers are good indicators of genetic distances among accessions, because DNA-based markers are neutral in the face of selection. Here we describe the employment of multidisciplinary molecular marker approaches: amplified fragment length polymorphism (AFLP), start codon targeted (SCoT) polymorphism, conserved DNA-derived polymorphism (CDDP), intron-targeted amplified polymorphism (ITAP), simple sequence repeats (SSR), and random amplified polymorphic DNA (RAPD) to assess genetic diversity in date palm.

  2. Survey data on cost and benefits of climate smart agricultural technologies in western Kenya.

    PubMed

    Ng'ang'a, S K; Mwungu, C M; Mwongera, C; Kinyua, I; Notenbaert, A; Girvetz, E

    2018-02-01

    This paper describes data that were collected in three counties of western Kenya, namely Siaya, Bungoma, and Kakamega. The main aim of collecting the data was to assess the climate smartness, profitability and returns of soil protection and rehabilitation measures. The data were collected from 88 households. The households were selected using a simple random sampling technique from a primary sampling frame of 180 farm households provided by the ministry of agriculture through the counties' agricultural officers. The surveys were administered by trained research assistants using a structured questionnaire that was designed in the Census and Survey Processing System (CSPro). Later, the data were exported to STATA version 14.1 for cleaning and management purposes. The data are hosted in an open source dataverse to allow other researchers to generate new insights from the data (http://dx.doi.org/10.7910/DVN/K6JQXC).
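
    The selection step described above, drawing 88 households from a frame of 180 by simple random sampling, can be sketched as follows; the household IDs are invented placeholders, not the study's data:

```python
import random

# Simple random sample without replacement: every household in the frame
# has the same probability of being selected.
frame = [f"HH{i:03d}" for i in range(1, 181)]   # hypothetical frame of 180 IDs
rng = random.Random(2018)                        # fixed seed for reproducibility
sample = rng.sample(frame, k=88)                 # the 88 surveyed households
```

Any fixed-size random subset drawn this way is equally likely, which is exactly the "equal chance" property simple random sampling guarantees.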

  3. The need and its influence factors for community-based rehabilitation services for disabled persons in one district in Beijing.

    PubMed

    Dai, Hong; Xue, Hui; Yin, Zong-Jie; Xiao, Zhong-Xin

    2006-12-01

    To explore the needs for basic community-based rehabilitation services for disabled persons in Xuanwu District, Beijing, China, and to identify factors which influence disabled persons to accept rehabilitation services. One hundred and eight disabled persons were selected by systematic sampling and simple random sampling to assess their needs for community-based rehabilitation services. Of the interviewees, 57.4% needed community-based rehabilitation services, but only 13.9% took advantage of them. The main factors influencing the interviewees' acceptance of these services were cost (P < 0.05), knowledge about rehabilitation medicine (P < 0.05), and belief in the therapeutic benefit of the community-based rehabilitation service (P < 0.05). A considerable gap exists between the supply of community-based rehabilitation services in Beijing and the need for these services among disabled residents, underscoring the need for improved availability and for additional research.

  4. The way to uncover community structure with core and diversity

    NASA Astrophysics Data System (ADS)

    Chang, Y. F.; Han, S. K.; Wang, X. D.

    2018-07-01

    Communities are ubiquitous in nature and society. Individuals that share common properties often self-organize to form communities. Avoiding the drawbacks of high computational complexity, reliance on pre-given information, and unstable results across different runs, in this paper we propose a simple and efficient method to deepen our understanding of the emergence and diversity of communities in complex systems. By introducing rational random selection, our method reveals the hidden deterministic and normal diverse community states of community structure. To demonstrate this method, we test it on real-world systems. The results show that our method can not only detect community structure with high sensitivity and reliability, but also provide instructive information about the hidden deterministic community world and the real normal diverse community world by giving out the core-community, the real-community, the tide and the diversity. This is of paramount importance in understanding, predicting, and controlling a variety of collective behaviors in complex systems.

  5. Analysis of convergence of an evolutionary algorithm with self-adaptation using a stochastic Lyapunov function.

    PubMed

    Semenov, Mikhail A; Terkel, Dmitri A

    2003-01-01

    This paper analyses the convergence of evolutionary algorithms using a technique which is based on a stochastic Lyapunov function and developed within martingale theory. This technique is used to investigate the convergence of a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function; and control parameters, responsible for the variation of fitness parameters. Although both parameters mutate randomly and independently, they converge to the "optimum" due to the direct (for fitness parameters) and indirect (for control parameters) selection. We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, similar to the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some martingale inequalities have not been proved analytically, they have been numerically validated with 0.999 confidence using Monte Carlo simulations.
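
    The self-adaptation scheme analysed above can be illustrated with a minimal (1+1)-style sketch. The sphere objective, the log-normal mutation of the control parameter, and the constant tau are illustrative assumptions for this sketch, not the paper's exact algorithm:

```python
import math
import random

def es_self_adaptive(f, x0, sigma0, steps, seed=0):
    """(1+1)-style evolution strategy: the control parameter sigma mutates
    alongside the fitness parameter x but is selected only indirectly,
    through the fitness values it helps produce."""
    rng = random.Random(seed)
    x, sigma = x0, sigma0
    tau = 0.5                                                # sigma learning rate
    for _ in range(steps):
        sigma_new = sigma * math.exp(tau * rng.gauss(0, 1))  # mutate control parameter
        x_new = x + sigma_new * rng.gauss(0, 1)              # mutate fitness parameter
        if f(x_new) <= f(x):                                 # selection acts on fitness only
            x, sigma = x_new, sigma_new
    return x, sigma

# Minimize a 1-D sphere function starting from x = 5.
x, sigma = es_self_adaptive(lambda v: v * v, x0=5.0, sigma0=1.0, steps=2000)
```

Because acceptance never increases f, the fitness trajectory is monotone, which is what makes a stochastic Lyapunov-function argument natural for this class of algorithm.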

  6. Transactional sex, condom and lubricant use among men who have sex with men in Lagos State, Nigeria.

    PubMed

    Ayoola, Oluyemisi O; Sekoni, Adekemi O; Odeyemi, Kofoworola A

    2013-12-01

    Men who have unprotected sex with men may also have unprotected sex with women and thus serve as an epidemiological bridge for HIV to the general population. This cross-sectional descriptive study assessed condom and lubricant use and the practice of transactional sex among men who have sex with men (MSM) in Lagos State. Simple random sampling was used to select three community centres, and a snowball sampling technique was used to recruit 321 respondents. Almost half (50.9%) had received payment for sex, while 45.4% had paid for sex in the past. Consistent condom use was practiced by 40.5% of respondents during their last 10 sexual encounters; 85.6% used lubricants, mostly with condoms; products used were KY jelly, body cream, saliva and Vaseline. There is a need for behavioural change to reduce the risky practices which predispose this group of MSM to HIV and sexually transmitted infections.

  7. NMR diffusion simulation based on conditional random walk.

    PubMed

    Gudbjartsson, H; Patz, S

    1995-01-01

    The authors introduce a new, very fast simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p. 10). In earlier NMR diffusion simulation methods, such as the finite difference (FD) method, the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, whereas in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step, the computation time can therefore be reduced. Finally, the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.

  8. Social inheritance can explain the structure of animal social networks

    PubMed Central

    Ilany, Amiyaal; Akçay, Erol

    2016-01-01

    The social network structure of animal populations has major implications for survival, reproductive success, sexual selection and pathogen transmission of individuals. Yet no general theory of social network structure exists that can explain the diversity of social networks observed in nature and serve as a null model for detecting species- and population-specific factors. Here we propose a simple and generally applicable model of social network structure. We consider the emergence of network structure as a result of social inheritance, in which newborns are likely to bond with maternal contacts, and via forming bonds randomly. We compare model output with data from several species, showing that it can generate networks with properties such as those observed in real social systems. Our model demonstrates that important observed properties of social networks, including heritability of network position or assortative associations, can be understood as consequences of social inheritance. PMID:27352101
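
    The social-inheritance mechanism described above can be sketched minimally: a newborn bonds with its mother's contacts with one probability and with everyone else with another. The probabilities, founder network and function name below are invented for illustration and will differ from the paper's calibrated model:

```python
import random

def add_newborn(adjacency, mother, p_inherit, p_random, rng):
    """Add a newborn that bonds with its mother and each of her contacts
    with probability p_inherit, and with any other individual with p_random."""
    newborn = len(adjacency)
    adjacency.append(set())
    for other in range(newborn):
        inherited = other == mother or other in adjacency[mother]
        if rng.random() < (p_inherit if inherited else p_random):
            adjacency[newborn].add(other)   # undirected bond
            adjacency[other].add(newborn)
    return newborn

rng = random.Random(0)
network = [{1}, {0}]                        # two connected founders
baby = add_newborn(network, mother=0, p_inherit=0.9, p_random=0.01, rng=rng)
```

Because a newborn preferentially copies its mother's ties, repeated births propagate clustering and assortativity through the network without any individual-level preference rules.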

  9. Rationally reduced libraries for combinatorial pathway optimization minimizing experimental effort.

    PubMed

    Jeschek, Markus; Gerngross, Daniel; Panke, Sven

    2016-03-31

    Rational flux design in metabolic engineering approaches remains difficult since important pathway information is frequently not available. Therefore empirical methods are applied that randomly change absolute and relative pathway enzyme levels and subsequently screen for variants with improved performance. However, screening is often limited on the analytical side, generating a strong incentive to construct small but smart libraries. Here we introduce RedLibs (Reduced Libraries), an algorithm that allows for the rational design of smart combinatorial libraries for pathway optimization thereby minimizing the use of experimental resources. We demonstrate the utility of RedLibs for the design of ribosome-binding site libraries by in silico and in vivo screening with fluorescent proteins and perform a simple two-step optimization of the product selectivity in the branched multistep pathway for violacein biosynthesis, indicating a general applicability for the algorithm and the proposed heuristics. We expect that RedLibs will substantially simplify the refactoring of synthetic metabolic pathways.

  10. Spatial vs. individual variability with inheritance in a stochastic Lotka-Volterra system

    NASA Astrophysics Data System (ADS)

    Dobramysl, Ulrich; Tauber, Uwe C.

    2012-02-01

    We investigate a stochastic spatial Lotka-Volterra predator-prey model with randomized interaction rates that are either affixed to the lattice sites (quenched), or specific to individuals in either population, or both. In the latter situation, we include rate inheritance with mutations from the particles' progenitors. Thus we arrive at a simple model for competitive evolution with environmental variability and selection pressure. We employ Monte Carlo simulations in zero and two dimensions to study the time evolution of both species' densities and their interaction rate distributions. The predator and prey concentrations in the ensuing steady states depend crucially on the environmental variability, whereas the temporal evolution of the individualized rate distributions leads to largely neutral optimization. Contrary to, e.g., linear gene expression models, this system does not experience fixation at extreme values. An approximate description of the resulting data is achieved by means of an effective master equation approach for the interaction rate distribution.

  11. Contamination of mercury in tongkat Ali hitam herbal preparations.

    PubMed

    Ang, H H; Lee, K L

    2006-08-01

    The DCA (Drug Control Authority), Malaysia implemented the phase three registration of traditional medicines on 1 January 1992. A total of 100 products in various pharmaceutical dosage forms of a herbal preparation found in Malaysia, containing tongkat Ali hitam in either single or combined preparations, were analyzed for the presence of the toxic heavy metal mercury using an atomic absorption spectrophotometer, after simple random sampling was performed to give each sample an equal chance of being selected in an unbiased manner. Results showed that 26% of these products contained 0.53-2.35 ppm of mercury and therefore do not comply with the quality requirement for traditional medicines in Malaysia, which is not more than 0.5 ppm of mercury. Of these 26 products, four had already registered with the DCA, Malaysia, while the rest had not.

  12. Self-reported hand washing behaviors and foodborne illness: a propensity score matching approach.

    PubMed

    Ali, Mir M; Verrill, Linda; Zhang, Yuanting

    2014-03-01

    Hand washing is a simple and effective but easily overlooked way to reduce cross-contamination and the transmission of foodborne pathogens. In this study, we used the propensity score matching methodology to account for potential selection bias to explore our hypothesis that always washing hands before food preparation tasks is associated with a reduction in the probability of reported foodborne illness. Propensity score matching can simulate random assignment to a condition so that pretreatment observable differences between a treatment group and a control group are homogenous on all the covariates except the treatment variable. Using the U.S. Food and Drug Administration's 2010 Food Safety Survey, we estimated the effect of self-reported hand washing behavior on the probability of self-reported foodborne illness. Our results indicate that reported washing of hands with soap always before food preparation leads to a reduction in the probability of reported foodborne illness.
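    A minimal sketch of the propensity-score-matching idea (not the authors' implementation): scores from a one-covariate logistic model fitted by plain gradient descent, followed by greedy 1:1 nearest-neighbour matching. All data here are synthetic.

```python
import math
import random

def fit_logistic(X, t, lr=0.1, epochs=300):
    """Estimate propensity scores P(treated | x) by stochastic gradient ascent."""
    w = [0.0] * (len(X[0]) + 1)          # intercept + coefficients
    for _ in range(epochs):
        for xi, ti in zip(X, t):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = ti - p                   # gradient of the log-likelihood
            w[0] += lr * g
            for j, xj in enumerate(xi):
                w[j + 1] += lr * g * xj
    def score(xi):
        z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
        return 1.0 / (1.0 + math.exp(-z))
    return score

def match(scores, treated_idx, control_idx):
    """Greedy 1:1 nearest-neighbour matching on the propensity score."""
    pairs = []
    for i in treated_idx:
        j = min(control_idx, key=lambda c: abs(scores[c] - scores[i]))
        pairs.append((i, j))
    return pairs

random.seed(0)
X = [[random.gauss(0, 1)] for _ in range(200)]
t = [1 if random.random() < 1 / (1 + math.exp(-x[0])) else 0 for x in X]
score = fit_logistic(X, t)
s = [score(x) for x in X]
treated = [i for i, ti in enumerate(t) if ti == 1]
control = [i for i, ti in enumerate(t) if ti == 0]
pairs = match(s, treated, control)
```

After matching, outcomes would be compared within the matched pairs, so that treated and control units share similar covariate profiles.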

  13. Simulated annealing with probabilistic analysis for solving traveling salesman problems

    NASA Astrophysics Data System (ADS)

    Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan

    2013-09-01

    Simulated Annealing (SA) is a widely used meta-heuristic inspired by the annealing process in the recrystallization of metals; its efficiency is therefore highly affected by the annealing schedule. In this paper, we present an empirical study to identify a suitable annealing schedule for solving symmetric traveling salesman problems (TSP). A randomized complete block design is used in this study. The results show that different parameters do affect the efficiency of SA; we therefore propose the best-found annealing schedule based on the Post Hoc test. SA was tested on seven selected benchmark problems of symmetric TSP with the proposed annealing schedule. The performance of SA was evaluated empirically against benchmark solutions, with a simple analysis to validate the quality of the solutions. Computational results show that the proposed annealing schedule provides good solution quality.
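    The annealing loop for a symmetric TSP instance can be sketched as follows; the geometric cooling schedule and the 2-swap neighbourhood are illustrative choices, not the schedule proposed in the paper.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def anneal(dist, t0=10.0, cooling=0.995, steps=20000, seed=1):
    """Simulated annealing with geometric cooling and random 2-swap moves."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best = tour[:]
    temp = t0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        cand = tour[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = tour_length(cand, dist) - tour_length(tour, dist)
        # Accept improvements always, worse moves with Boltzmann probability.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            tour = cand
            if tour_length(tour, dist) < tour_length(best, dist):
                best = tour[:]
        temp *= cooling
    return best

# Hypothetical symmetric instance: 8 cities on a circle.
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8)) for k in range(8)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best = anneal(dist)
```

The initial temperature, cooling factor, and step count are exactly the schedule parameters whose settings the study compares.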

  14. Effect of autogenic relaxation on depression among menopausal women in rural areas of Thiruvallur District (Tamil Nadu).

    PubMed

    Sujithra, S

    2014-01-01

    An experimental study was conducted among 60 menopausal women, 30 each in the experimental and control groups, who met the inclusion criteria. The menopausal women were identified in both groups and their level of depression was assessed using the Cornell Dysthymia Rating Scale. A simple random sampling technique by the lottery method was used for selecting the sample. Autogenic relaxation was practiced by the menopausal women for four weeks. The findings revealed that in the experimental group, after the autogenic relaxation intervention, 23 (76.7%) had mild depression. The effect in the experimental group was statistically significant at the p < 0.05 level, and there was a statistically significant association between the effectiveness of autogenic relaxation on depression and the type of family in the post-experimental group (p < 0.05).

  15. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  16. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  17. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  18. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  19. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  20. Choosing between an Apple and a Chocolate Bar: the Impact of Health and Taste Labels

    PubMed Central

    Forwood, Suzanna E.; Walker, Alexander D.; Hollands, Gareth J.; Marteau, Theresa M.

    2013-01-01

    Increasing the consumption of fruit and vegetables is a central component of improving population health. Reasons people give for choosing one food over another suggest health is of lower importance than taste. This study assesses the impact of using a simple descriptive label to highlight the taste as opposed to the health value of fruit on the likelihood of its selection. Participants (N = 439) were randomly allocated to one of five groups that varied in the label added to an apple: apple; healthy apple; succulent apple; healthy and succulent apple; succulent and healthy apple. The primary outcome measure was selection of either an apple or a chocolate bar as a dessert. Measures of the perceived qualities of the apple (taste, health, value, quality, satiety) and of participant characteristics (restraint, belief that tasty foods are unhealthy, BMI) were also taken. When compared with apple selection without any descriptor (50%), the labels combining both health and taste descriptors significantly increased selection of the apple ('healthy & succulent' 65.9% and 'succulent & healthy' 62.4%), while the use of a single descriptor had no impact on the rate of apple selection ('healthy' 50.5% and 'succulent' 52%). The strongest predictors of individual dessert choice were the taste score given to the apple, and the lack of belief that healthy foods are not tasty. Interventions that emphasize the taste attributes of healthier foods are likely to be more effective at achieving healthier diets than those emphasizing health alone. PMID:24155964

  1. QSRR modeling for the chromatographic retention behavior of some β-lactam antibiotics using forward and firefly variable selection algorithms coupled with multiple linear regression.

    PubMed

    Fouad, Marwa A; Tolba, Enas H; El-Shal, Manal A; El Kerdawy, Ahmed M

    2018-05-11

    The continuous emergence of new β-lactam antibiotics creates a need for suitable analytical methods that accelerate and facilitate their analysis. A face-centered central composite experimental design was adopted, using different levels of phosphate buffer pH and acetonitrile percentage at zero time and after 15 min in a gradient program, to obtain the optimum chromatographic conditions for the elution of 31 β-lactam antibiotics. Retention factors were used as the target property to build two QSRR models, utilizing the conventional forward selection and the advanced nature-inspired firefly algorithm for descriptor selection, coupled with multiple linear regression. The obtained models showed high performance in both internal and external validation, indicating their robustness and predictive ability. The Williams-Hotelling test and Student's t-test showed no statistically significant difference between the models' results. Y-randomization validation showed that the obtained models arise from a significant correlation between the selected molecular descriptors and the analytes' chromatographic retention. These results indicate that the generated FS-MLR and FFA-MLR models show comparable quality at both the training and validation levels. They also gave comparable information about the molecular features that influence the retention behavior of β-lactams under the studied chromatographic conditions. We conclude that, in some cases, a simple conventional feature selection algorithm can generate robust and predictive models comparable to those generated using advanced ones. Copyright © 2018 Elsevier B.V. All rights reserved.
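    A simplified sketch of forward descriptor selection (a stagewise variant, not the exact forward-selection-MLR procedure of the paper): at each step the descriptor most correlated with the current residual is added, and the residual is updated by a one-variable regression. The descriptor matrix below is hypothetical.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def forward_select(X_cols, y, k):
    """Greedily pick k descriptor columns most correlated with the residual."""
    residual = y[:]
    chosen = []
    for _ in range(k):
        remaining = [j for j in range(len(X_cols)) if j not in chosen]
        j = max(remaining, key=lambda c: abs(pearson(X_cols[c], residual)))
        chosen.append(j)
        x = X_cols[j]
        n = len(x)
        mx, mr = sum(x) / n, sum(residual) / n
        beta = sum((a - mx) * (b - mr) for a, b in zip(x, residual)) / max(
            sum((a - mx) ** 2 for a in x), 1e-12)
        alpha = mr - beta * mx
        # Remove the explained part before scoring the next descriptor.
        residual = [b - (alpha + beta * a) for a, b in zip(x, residual)]
    return chosen

# Hypothetical descriptors: the response depends mainly on column 0.
X_cols = [[1, 2, 3, 4, 5], [2, 1, 2, 1, 2], [5, 3, 4, 1, 2]]
y = [2.1, 4.0, 6.2, 7.9, 10.1]   # roughly 2 * column 0
picked = forward_select(X_cols, y, 2)
```

A full forward-selection MLR would refit the multivariate model at each step; this residual-based variant conveys the same greedy idea compactly.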

  2. Selection for avian leukosis virus integration sites determines the clonal progression of B-cell lymphomas

    PubMed Central

    Malhotra, Sanandan; Justice, James; Morgan, Robin

    2017-01-01

    Avian leukosis virus (ALV) is a simple retrovirus that causes a wide range of tumors in chickens, the most common of which are B-cell lymphomas. The viral genome integrates into the host genome and uses its strong promoter and enhancer sequences to alter the expression of nearby genes, frequently inducing tumors. In this study, we compare the preferences for ALV integration sites in cultured cells and in tumors, by analysis of over 87,000 unique integration sites. In tissue culture, we observed that integration was relatively random, with slight preferences for genes, transcription start sites and CpG islands. We also observed a preference for integrations in or near expressed and spliced genes. The integration pattern changed over the course of selection for oncogenic characteristics in tumors. In comparison to tissue culture, ALV integrations in tumors are more strongly selected for proximity to transcription start sites. There is also a significant selection of ALV integrations away from CpG islands in the highly clonally expanded cells in tumors. Additionally, we utilized a high-throughput method to quantify the magnitude of clonality in different stages of tumorigenesis. An ALV-induced tumor carries between 700 and 3000 unique integrations, with an average of 2.3 to 4 copies of proviral DNA per infected cell. We observed increasing tumor clonality during progression of B-cell lymphomas and identified key genes (especially TERT and MYB) and biological processes involved in tumor progression. PMID:29099869

  3. Peculiarities of the statistics of spectrally selected fluorescence radiation in laser-pumped dye-doped random media

    NASA Astrophysics Data System (ADS)

    Yuvchenko, S. A.; Ushakova, E. V.; Pavlova, M. V.; Alonova, M. V.; Zimnyakov, D. A.

    2018-04-01

    We consider the practical realization of a new optical probing method for random media, defined as reference-free path-length interferometry with intensity-moments analysis. A peculiarity in the statistics of the spectrally selected fluorescence radiation in a laser-pumped dye-doped random medium is discussed. Previously established correlations between the second- and third-order moments of the intensity fluctuations in the random interference patterns, the coherence function of the probe radiation, and the path-difference probability density for the interfering partial waves in the medium are confirmed. The correlations were verified using statistical analysis of the spectrally selected fluorescence radiation emitted by a laser-pumped dye-doped random medium. A water solution of Rhodamine 6G was applied as the doping fluorescent agent for ensembles of densely packed silica grains, which were pumped by the 532 nm radiation of a solid-state laser. The spectrum of the mean path length for the random medium was reconstructed.

  4. 78 FR 4926 - Self-Regulatory Organizations; NASDAQ OMX PHLX LLC; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-23

    ... on the proposed rule change from interested persons. \\1\\ 15 U.S.C. 78s(b)(1). \\2\\ 17 CFR 240.19b-4. I....'' Specifically, the Exchange proposes to amend the Customer Rebate Program, Select Symbols,\\5\\ Simple and Complex... Category D to the Customer Rebate Program relating to Customer Simple Orders in Select Symbols. The...

  5. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    PubMed

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm is proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms (hybrid genetic-random forests, hybrid particle swarm-random forests and hybrid fish swarm-random forests) can achieve the minimum OOB error and show the best generalization ability. The training set produced by the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, this hybrid algorithm provides a new way to perform feature selection and parameter optimization.
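    The core SMOTE interpolation step (without the CURE clustering refinement proposed in the paper) can be sketched as:

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Generate synthetic minority points by interpolating each chosen point
    towards one of its k nearest minority-class neighbours."""
    rng = random.Random(seed)

    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        neigh = sorted((q for q in minority if q is not p), key=lambda q: d2(p, q))[:k]
        q = rng.choice(neigh)
        lam = rng.random()                       # interpolation factor in [0, 1)
        synthetic.append(tuple(a + lam * (b - a) for a, b in zip(p, q)))
    return synthetic

# Hypothetical minority class: four points in the unit square.
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_pts = smote(minority, 10)
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled set stays inside the minority region rather than duplicating points exactly.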

  6. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
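    The simple (expansion) and ratio estimators compared in the study can be sketched with hypothetical survey figures, using unit area as the auxiliary variable as the abstract describes:

```python
def simple_estimator(counts, n_units_total):
    """Expansion estimator: mean count per sampled unit times the number of units."""
    return n_units_total * sum(counts) / len(counts)

def ratio_estimator(counts, areas, total_area):
    """Ratio estimator: (total count / total sampled area) scaled to the full area."""
    return total_area * sum(counts) / sum(areas)

# Hypothetical survey: 4 sampled units out of 20, with unit areas in km^2.
counts = [12, 0, 5, 3]
areas = [2.0, 1.0, 1.5, 1.5]
total_area = 30.0        # summed area of all 20 units
total_units = 20
est_simple = simple_estimator(counts, total_units)      # 20 * 5 = 100.0
est_ratio = ratio_estimator(counts, areas, total_area)  # 30 * 20/6 = 100.0
```

With clumped animals such as pronghorn, the two estimators can agree on average yet differ greatly in variance, which is the comparison the study makes.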

  7. Determination of Pesticides Residues in Cucumbers Grown in Greenhouse and the Effect of Some Procedures on Their Residues.

    PubMed

    Leili, Mostafa; Pirmoghani, Amin; Samadi, Mohammad Taghi; Shokoohi, Reza; Roshanaei, Ghodratollah; Poormohammadi, Ali

    2016-11-01

    The objective of this study was to determine the residual concentrations of ethion and imidacloprid in cucumbers grown in greenhouses. The effect of some simple processing procedures on both ethion and imidacloprid residues was also studied. Ten active greenhouses that produce cucumber were randomly selected. Ethion and imidacloprid, as the most widely used pesticides, were measured in cucumber samples from the studied greenhouses. Moreover, the effects of storing, washing, and peeling as simple processing procedures on both ethion and imidacloprid residues were investigated. One hour after pesticide application, the residue levels of ethion and imidacloprid were higher than the Codex maximum residue level (MRL). One day after pesticide application, the levels had decreased by about 35 and 31% for ethion and imidacloprid, respectively, but were still higher than the MRL. Washing led to a loss of about 51 and 42.5% of the ethion and imidacloprid residues, respectively. Peeling led to the highest losses, 93.4 and 63.7% of the ethion and imidacloprid residues, respectively. The recovery for both target analytes was in the range of 88 to 102%. The residue values in samples collected one hour after pesticide application were higher than the standard value. The storing, washing, and peeling procedures led to a decrease of pesticide residues in greenhouse cucumbers; among them, peeling had the greatest impact on residue reduction. Therefore, these procedures can be used as simple and effective processing techniques for reducing and removing pesticides from greenhouse products before their consumption.

  8. Aspergillus tubingensis and Aspergillus niger as the dominant black Aspergillus, use of simple PCR-RFLP for preliminary differentiation.

    PubMed

    Mirhendi, H; Zarei, F; Motamedi, M; Nouripour-Sisakht, S

    2016-03-01

    This work aimed to identify the species distribution of common clinical and environmental isolates of black Aspergilli based on simple restriction fragment length polymorphism (RFLP) analysis of the β-tubulin gene. A total of 149 clinical and environmental strains of black Aspergilli were collected and subjected to preliminary morphological examination. Total genomic DNAs were extracted, and PCR was performed to amplify part of the β-tubulin gene. First, 52 randomly selected samples were species-delineated by sequence analysis. In order to distinguish the most common species, PCR amplicons of 117 black Aspergillus strains were identified by simple PCR-RFLP analysis using the enzyme TasI. Among the 52 sequenced isolates, 28 were Aspergillus tubingensis, 21 Aspergillus niger, and the three remaining isolates were Aspergillus uvarum, Aspergillus awamori, and Aspergillus acidus. All 100 environmental and 17 BAL samples subjected to TasI-RFLP analysis of the β-tubulin gene fell into two groups, consisting of about 59% (n = 69) A. tubingensis and 41% (n = 48) A. niger. Therefore, the method successfully and rapidly distinguished A. tubingensis and A. niger as the most common species among the clinical and environmental isolates. Although slower, the Ehrlich test was also able to differentiate A. tubingensis and A. niger according to the yellow color reaction specific to A. niger. A. tubingensis and A. niger are the most common black Aspergilli in both clinical and environmental isolates in Iran. PCR-RFLP using TasI digestion of β-tubulin DNA enables rapid screening for these common species. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
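    The in-silico side of such an RFLP screen can be sketched as follows; the sequence is hypothetical, and the cut is assumed to fall at the start of the recognition site (TasI is commonly listed with the AATT site, but the enzyme datasheet should be checked before relying on this):

```python
def rflp_fragments(seq, site="AATT"):
    """In-silico digest: find every occurrence of the recognition site
    (cut assumed at the start of the site) and return fragment lengths."""
    cuts = []
    i = seq.find(site)
    while i != -1:
        cuts.append(i)
        i = seq.find(site, i + 1)
    bounds = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:]) if b > a]

seq = "GGCCAATTGGAATTCC"     # hypothetical amplicon fragment
frags = rflp_fragments(seq)  # fragment lengths: [4, 6, 6]
```

Species whose amplicons differ in AATT-site positions yield different fragment-length patterns on a gel, which is the basis of the differentiation described above.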

  9. The impact of text message reminders on adherence to antimalarial treatment in northern Ghana: a randomized trial.

    PubMed

    Raifman, Julia R G; Lanthorn, Heather E; Rokicki, Slawa; Fink, Günther

    2014-01-01

    Low rates of adherence to artemisinin-based combination therapy (ACT) regimens increase the risk of treatment failure and may lead to drug resistance, threatening the sustainability of current anti-malarial efforts. We assessed the impact of text message reminders on adherence to ACT regimens. Health workers at hospitals, clinics, pharmacies, and other stationary ACT distributors in Tamale, Ghana provided flyers advertising free mobile health information to individuals receiving malaria treatment. The messaging system automatically randomized self-enrolled individuals to the control group or the treatment group with equal probability; those in the treatment group were further randomly assigned to receive a simple text message reminder or the simple reminder plus an additional statement about adherence in 12-hour intervals. The main outcome was self-reported adherence based on follow-up interviews occurring three days after treatment initiation. We estimated the impact of the messages on treatment completion using logistic regression. 1140 individuals enrolled in both the study and the text reminder system. Among individuals in the control group, 61.5% took the full course of treatment. The simple text message reminders increased the odds of adherence (adjusted OR 1.45, 95% CI [1.03 to 2.04], p-value 0.028). Receiving an additional message did not result in a significant change in adherence (adjusted OR 0.77, 95% CI [0.50 to 1.20], p-value 0.252). The results of this study suggest that a simple text message reminder can increase adherence to antimalarial treatment and that additional information included in messages does not have a significant impact on completion of ACT treatment. Further research is needed to develop the most effective text message content and frequency. ClinicalTrials.gov NCT01722734.

  10. Cascaded Raman lasing in a PM phosphosilicate fiber with random distributed feedback

    NASA Astrophysics Data System (ADS)

    Lobach, Ivan A.; Kablukov, Sergey I.; Babin, Sergey A.

    2018-02-01

    We report on the first demonstration of a linearly polarized cascaded Raman fiber laser based on a simple half-open cavity with a broadband composite reflector and random distributed feedback in a polarization-maintaining phosphosilicate fiber operating beyond the zero-dispersion wavelength (about 1400 nm). With increasing pump power from a Yb-doped fiber laser at 1080 nm, the random laser subsequently generates 8 W at 1262 nm and 9 W at 1515 nm with a polarization extinction ratio of 27 dB. The generation linewidths amount to about 1 nm and 3 nm, respectively, and are almost independent of power, in correspondence with the theory of cascaded random lasing.

  11. Novel Multiplex PCR Assay for Detection of Chlorhexidine-Quaternary Ammonium, Mupirocin, and Methicillin Resistance Genes, with Simultaneous Discrimination of Staphylococcus aureus from Coagulase-Negative Staphylococci

    PubMed Central

    McClure, Jo-Ann; Zaal DeLongchamp, Johanna; Conly, John M.

    2017-01-01

    Methicillin-resistant Staphylococcus aureus (MRSA) is a clinically significant pathogen that is resistant to a wide variety of antibiotics and responsible for a large number of nosocomial infections worldwide. The Agency for Healthcare Research and Quality and the Centers for Disease Control and Prevention recently recommended the adoption of universal mupirocin-chlorhexidine decolonization of all admitted intensive care unit patients rather than MRSA screening with targeted treatments, which raises a serious concern about the selection of resistance to mupirocin and chlorhexidine in strains of staphylococci. Thus, a simple, rapid, and reliable approach is paramount in monitoring the prevalence of resistance to these agents. We developed a simple multiplex PCR assay capable of screening Staphylococcus isolates for the presence of antiseptic resistance genes for chlorhexidine and quaternary ammonium compounds, as well as mupirocin and methicillin resistance genes, while simultaneously discriminating S. aureus from coagulase-negative staphylococci (CoNS). The assay incorporates 7 PCR targets, including the Staphylococcus 16S rRNA gene (specifically detecting Staphylococcus spp.), nuc (distinguishing S. aureus from CoNS), mecA (distinguishing MRSA from methicillin-susceptible S. aureus), mupA and mupB (identifying high-level mupirocin resistance), and qac and smr (identifying chlorhexidine and quaternary ammonium resistance). Our assay demonstrated 100% sensitivity, specificity, and accuracy in a total of 23 variant antiseptic- and/or antibiotic-resistant control strains. Further validation of our assay using 378 randomly selected and previously well-characterized local clinical isolates confirmed its feasibility and practicality. This may prove to be a useful tool for multidrug-resistant Staphylococcus monitoring in clinical laboratories, particularly in the wake of increased chlorhexidine and mupirocin treatments. PMID:28381601

  12. Novel Multiplex PCR Assay for Detection of Chlorhexidine-Quaternary Ammonium, Mupirocin, and Methicillin Resistance Genes, with Simultaneous Discrimination of Staphylococcus aureus from Coagulase-Negative Staphylococci.

    PubMed

    McClure, Jo-Ann; Zaal DeLongchamp, Johanna; Conly, John M; Zhang, Kunyan

    2017-06-01

    Methicillin-resistant Staphylococcus aureus (MRSA) is a clinically significant pathogen that is resistant to a wide variety of antibiotics and responsible for a large number of nosocomial infections worldwide. The Agency for Healthcare Research and Quality and the Centers for Disease Control and Prevention recently recommended the adoption of universal mupirocin-chlorhexidine decolonization of all admitted intensive care unit patients rather than MRSA screening with targeted treatments, which raises a serious concern about the selection of resistance to mupirocin and chlorhexidine in strains of staphylococci. Thus, a simple, rapid, and reliable approach is paramount in monitoring the prevalence of resistance to these agents. We developed a simple multiplex PCR assay capable of screening Staphylococcus isolates for the presence of antiseptic resistance genes for chlorhexidine and quaternary ammonium compounds, as well as mupirocin and methicillin resistance genes, while simultaneously discriminating S. aureus from coagulase-negative staphylococci (CoNS). The assay incorporates 7 PCR targets, including the Staphylococcus 16S rRNA gene (specifically detecting Staphylococcus spp.), nuc (distinguishing S. aureus from CoNS), mecA (distinguishing MRSA from methicillin-susceptible S. aureus), mupA and mupB (identifying high-level mupirocin resistance), and qac and smr (identifying chlorhexidine and quaternary ammonium resistance). Our assay demonstrated 100% sensitivity, specificity, and accuracy in a total of 23 variant antiseptic- and/or antibiotic-resistant control strains. Further validation of our assay using 378 randomly selected and previously well-characterized local clinical isolates confirmed its feasibility and practicality. This may prove to be a useful tool for multidrug-resistant Staphylococcus monitoring in clinical laboratories, particularly in the wake of increased chlorhexidine and mupirocin treatments. Copyright © 2017 American Society for Microbiology.

  13. True random numbers from amplified quantum vacuum.

    PubMed

    Jofre, M; Curty, M; Steinlechner, F; Anzolin, G; Torres, J P; Mitchell, M W; Pruneri, V

    2011-10-10

    Random numbers are essential for applications ranging from secure communications to numerical simulation and quantitative finance. Algorithms can rapidly produce pseudo-random outcomes: series of numbers that mimic most properties of true random numbers. Quantum random number generators (QRNGs), in contrast, exploit intrinsic quantum randomness to produce true random numbers. Single-photon QRNGs are conceptually simple but produce few random bits per detection. In contrast, vacuum fluctuations are a vast resource for QRNGs: they are broad-band and thus can encode many random bits per second. Direct recording of vacuum fluctuations is possible, but requires shot-noise-limited detectors, at the cost of bandwidth. We demonstrate efficient conversion of vacuum fluctuations to true random bits using optical amplification of vacuum and interferometry. Using commercially available optical components we demonstrate a QRNG at a bit rate of 1.11 Gbps. The proposed scheme has the potential to be extended to 10 Gbps and even up to 100 Gbps by taking advantage of high-speed modulation sources and detectors for optical fiber telecommunication devices.
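    The paper's amplified-vacuum scheme is hardware; purely as a software-side illustration of post-processing (not the authors' method), raw thresholded samples from a biased noise source can be debiased with the classic von Neumann procedure:

```python
import random

def threshold_bits(samples, threshold):
    """Digitize analog noise samples into raw bits by thresholding."""
    return [1 if s > threshold else 0 for s in samples]

def von_neumann(bits):
    """Classic debiasing: pair up bits; (0,1) -> 0, (1,0) -> 1, discard (0,0)/(1,1)."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

random.seed(7)
# Hypothetical biased noise source: Gaussian with nonzero mean, so P(bit=1) > 0.5.
raw = threshold_bits([random.gauss(0.2, 1.0) for _ in range(10000)], 0.0)
clean = von_neumann(raw)
```

The output rate drops (unequal pairs are kept, equal pairs are discarded), which is why practical QRNGs use more efficient randomness extractors.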

  14. The reasons for using and not using alternative medicine in Khorramabad women, west of Iran.

    PubMed

    Mahmoudi, Ghaffar Ali; Almasi, Vahid; Lorzadeh, Nahid; Khansari, Azadeh

    2015-06-01

    To evaluate the reasons for using and not using alternative medicine. This cross-sectional study was conducted in 2009 on women over 18 years of age in Khorramabad, Iran. The subjects were selected using cluster and simple random sampling methods. The data were recorded in a questionnaire with questions about the subjects' age, marital status, their opinions on their general health, and the advantages and disadvantages of conventional and alternative medicine. Of the 1600 women initially selected, 1551 (97%) formed the final sample. The mean age of the participants was 35.04 ± 10.71 years. Overall, 435 (28%) spoke of disadvantages of alternative medicine; 277 (18%) of advantages of alternative medicine; 523 (34%) of advantages of conventional treatments; and 316 (20%) of disadvantages of conventional treatments. The most prevalent reason for not using conventional treatments was cost, cited by 159 (50.3%). Trust in physicians (328; 62.7%) and distrust of alternative medicine therapists (317; 73%) were the most prevalent reasons for using conventional treatments and not using alternative medicine, respectively. Similar studies should be done on the reasons for using and not using each alternative medicine modality separately.

  15. The Mean Distance to the nth Neighbour in a Uniform Distribution of Random Points: An Application of Probability Theory

    ERIC Educational Resources Information Center

    Bhattacharyya, Pratip; Chakrabarti, Bikas K.

    2008-01-01

    We study different ways of determining the mean distance (r[subscript n]) between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…
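    The quantity in question is easy to estimate by Monte Carlo; here is a sketch for a reference point at the centre of a unit box (boundary effects are simply ignored rather than handled as in the article):

```python
import math
import random

def mean_nth_neighbour_distance(n, num_points=200, dim=2, trials=300, seed=0):
    """Monte Carlo estimate of the mean distance from the centre of a unit
    box to its nth nearest neighbour among uniformly scattered points."""
    rng = random.Random(seed)
    ref = [0.5] * dim
    total = 0.0
    for _ in range(trials):
        pts = [[rng.random() for _ in range(dim)] for _ in range(num_points)]
        dists = sorted(math.dist(ref, p) for p in pts)
        total += dists[n - 1]          # nth smallest distance
    return total / trials

r1 = mean_nth_neighbour_distance(1)
r2 = mean_nth_neighbour_distance(2)
```

By construction the estimates increase with n, since the nth smallest distance in each realization is at least the (n-1)th.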

  16. Addressing Early Retention in Antenatal Care Among HIV-Positive Women Through a Simple Intervention in Kinshasa, DRC: The Elombe "Champion" Standard Operating Procedure.

    PubMed

    Gill, Michelle M; Ditekemena, John; Loando, Aimé; Mbonze, Nana; Bakualufu, Jo; Machekano, Rhoderick; Nyombe, Cady; Temmerman, Marleen; Fwamba, Franck

    2018-03-01

    This cluster-randomized study aimed to assess the Elombe ("Champion") standard operating procedure (SOP), implemented by providers and Mentor Mothers, on HIV-positive pregnant women's retention between the first and second antenatal visits. Sixteen facilities in Kinshasa were randomly assigned to intervention (SOP) or comparison (no SOP). The effect of the SOP was estimated using relative risk. Women in comparison facilities were more likely to miss second visits (RR 2.5, 95% CI 1.05-5.98) than women in intervention facilities (30.0%, n = 27 vs. 12.0%, n = 9, p < 0.002). Findings demonstrate that a simple intervention can reduce critical early loss to care in PMTCT programs providing universal, lifelong treatment.
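
    The reported relative risk can be reproduced from the figures in the record; note that the group denominators (90 and 75) are inferred here from the stated percentages and counts, not given explicitly in the abstract:

```python
# Relative risk of missing the second visit, reconstructed from the
# record's figures (30.0%, n = 27 missed vs. 12.0%, n = 9 missed).
missed_comparison, n_comparison = 27, 90      # 27/90 = 30.0% (inferred denominator)
missed_intervention, n_intervention = 9, 75   # 9/75 = 12.0% (inferred denominator)

risk_c = missed_comparison / n_comparison     # risk in comparison facilities
risk_i = missed_intervention / n_intervention # risk in intervention facilities
rr = risk_c / risk_i
print(round(rr, 1))  # 2.5, matching the reported RR
```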

  17. Universal shocks in the Wishart random-matrix ensemble.

    PubMed

    Blaizot, Jean-Paul; Nowak, Maciej A; Warchoł, Piotr

    2013-05-01

    We show that the derivative of the logarithm of the average characteristic polynomial of a diffusing Wishart matrix obeys an exact partial differential equation valid for an arbitrary value of N, the size of the matrix. In the large N limit, this equation generalizes the simple inviscid Burgers equation that has been obtained earlier for Hermitian or unitary matrices. The solution, through the method of characteristics, presents singularities that we relate to the precursors of shock formation in the Burgers equation. The finite N effects appear as a viscosity term in the Burgers equation. Using a scaling analysis of the complete equation for the characteristic polynomial, in the vicinity of the shocks, we recover in a simple way the universal Bessel oscillations (so-called hard-edge singularities) familiar in random-matrix theory.
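
    A minimal numerical illustration of the hard edge in question (not the authors' method): the eigenvalues of a Wishart matrix W = XX^T/T are nonnegative, and for square X (N = T) the large-N spectrum fills the Marchenko-Pastur support [0, 4], with the universal Bessel behaviour appearing at the hard edge lambda = 0:

```python
import numpy as np

# Sample one Wishart matrix W = X X^T / T with N = T and inspect its
# spectrum: the smallest eigenvalue sits near the hard edge at 0,
# the largest near the soft edge at 4 (Marchenko-Pastur support for N = T).
rng = np.random.default_rng(2)
N, T = 200, 200
X = rng.standard_normal((N, T))
eigs = np.linalg.eigvalsh(X @ X.T / T)  # real, sorted eigenvalues
print(float(eigs.min()), float(eigs.max()))
```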

  18. Analysis on the DNA Fingerprinting of Aspergillus Oryzae Mutant Induced by High Hydrostatic Pressure

    NASA Astrophysics Data System (ADS)

    Wang, Hua; Zhang, Jian; Yang, Fan; Wang, Kai; Shen, Si-Le; Liu, Bing-Bing; Zou, Bo; Zou, Guang-Tian

    2011-01-01

    The mutant strains of Aspergillus oryzae (HP300a) were screened under 300 MPa for 20 min. Compared with the control strains, the screened mutant strains have unique properties such as genetic stability, rapid growth, abundant spores, and high protease activity. Random amplified polymorphic DNA (RAPD) and inter-simple sequence repeat (ISSR) markers were used to analyze the DNA fingerprints of HP300a and the control strains. These two markers yielded 67.9% and 51.3% polymorphic bands, respectively, indicating significant genetic variation between HP300a and the control strains. In addition, when HP300a is compared with the control strains, the genetic distances obtained from the RAPD and ISSR markers are 0.51 and 0.34, respectively.
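
    Band-based markers such as RAPD and ISSR are typically scored as presence/absence (1/0) profiles per strain. The toy profiles below are invented for illustration (the abstract reports only summary figures); they show how a polymorphic-band fraction and a simple band-sharing distance are computed:

```python
import numpy as np

# Hypothetical band profiles for two strains (1 = band present, 0 = absent).
mutant  = np.array([1, 0, 1, 1, 0, 1, 0, 1])
control = np.array([1, 1, 0, 1, 0, 0, 0, 1])

# Fraction of scored bands that differ between the strains.
polymorphic = np.mean(mutant != control)

# One common band-sharing measure: Dice distance = 1 - 2*shared / (b1 + b2).
shared = np.sum(mutant & control)
dice_distance = 1 - 2 * shared / (mutant.sum() + control.sum())
print(float(polymorphic), round(float(dice_distance), 2))
```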

  19. Molecular Analysis of Date Palm Genetic Diversity Using Random Amplified Polymorphic DNA (RAPD) and Inter-Simple Sequence Repeats (ISSRs).

    PubMed

    El Sharabasy, Sherif F; Soliman, Khaled A

    2017-01-01

    The date palm is an ancient domesticated plant with great diversity and has been cultivated in the Middle East and North Africa for at last 5000 years. Date palm cultivars are classified based on the fruit moisture content, as dry, semidry, and soft dates. There are a number of biochemical and molecular techniques available for characterization of the date palm variation. This chapter focuses on the DNA-based markers random amplified polymorphic DNA (RAPD) and inter-simple sequence repeats (ISSR) techniques, in addition to biochemical markers based on isozyme analysis. These techniques coupled with appropriate statistical tools proved useful for determining phylogenetic relationships among date palm cultivars and provide information resources for date palm gene banks.

  20. Deterministic diffusion in flower-shaped billiards.

    PubMed

    Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre

    2002-08-01

    We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter-dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model, on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.
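
    At the simple-random-walk level of description, the diffusion coefficient follows from the mean squared displacement via D = <r^2>/(4t) in two dimensions. A minimal sketch of that baseline (an unbiased walk on a square lattice, not the billiard itself):

```python
import numpy as np

# Estimate D = <r^2>/(4t) for an unbiased walk on a square lattice
# (step length 1, one step per unit time). For this walk <r^2> = t,
# so the estimate should come out close to 1/4.
rng = np.random.default_rng(1)
walkers, steps = 4000, 400
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
choices = rng.integers(0, 4, size=(walkers, steps))
paths = moves[choices].sum(axis=1)        # final displacement of each walker
msd = np.mean(np.sum(paths**2, axis=1))   # mean squared displacement at time t
D = msd / (4 * steps)
print(D)                                  # close to 1/4
```

    Machta and Zwanzig's approximation replaces the continuous billiard dynamics by exactly this kind of hopping process between traps, which is why it misses the memory effects the abstract emphasizes.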
