ERIC Educational Resources Information Center
Moody, Judith D.; Gifford, Vernon D.
This study investigated the effect of grouping on student achievement in a chemistry laboratory when homogeneous versus heterogeneous formal reasoning ability, high versus low formal reasoning ability, group sizes of two versus four, and same-gender versus mixed-gender composition were used as grouping factors. The sample consisted of all eight intact…
Sampling stratospheric aerosols with impactors
NASA Technical Reports Server (NTRS)
Oberbeck, Verne R.
1989-01-01
Derivation of statistically significant size distributions from impactor samples of rarefied stratospheric aerosols imposes difficult sampling constraints on collector design. It is shown that it is necessary to design impactors of different size for each range of aerosol size collected so as to obtain acceptable levels of uncertainty with a reasonable amount of data reduction.
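A minimal sketch of the counting statistics behind such uncertainty constraints (my illustration, not the paper's derivation): if collected particles are counted as a Poisson process, the relative standard error of a count n is 1/sqrt(n), so the number of particles each impactor stage must collect follows directly from the uncertainty target.

```python
import math

def particles_needed(rel_uncertainty):
    # Poisson counting: the relative standard error of a count n is 1/sqrt(n),
    # so a target relative uncertainty requires n = 1 / target**2 particles.
    return math.ceil(1.0 / rel_uncertainty ** 2)

for target in (0.20, 0.10, 0.05):
    print(f"{target:.0%} uncertainty -> {particles_needed(target)} particles per size bin")
```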
Link, W.A.
2003-01-01
Heterogeneity in detection probabilities has long been recognized as problematic in mark-recapture studies, and numerous models have been developed to accommodate its effects. Individual heterogeneity is especially problematic, in that reasonable alternative models may predict essentially identical observations from populations of substantially different sizes. Thus even with very large samples, the analyst will not be able to distinguish among reasonable models of heterogeneity, even though these yield quite distinct inferences about population size. The problem is illustrated with models for closed and open populations.
Exploratory Factor Analysis with Small Sample Sizes
ERIC Educational Resources Information Center
de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.
2009-01-01
Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…
Planning Community-Based Assessments of HIV Educational Intervention Programs in Sub-Saharan Africa
ERIC Educational Resources Information Center
Kelcey, Ben; Shen, Zuchao
2017-01-01
A key consideration in planning studies of community-based HIV education programs is identifying a sample size large enough to ensure a reasonable probability of detecting program effects if they exist. Sufficient sample sizes for community- or group-based designs are proportional to the correlation or similarity of individuals within communities.…
Inductive and deductive reasoning in obsessive-compulsive disorder.
Liew, Janice; Grisham, Jessica R; Hayes, Brett K
2018-06-01
This study examined the hypothesis that participants diagnosed with obsessive-compulsive disorder (OCD) show a selective deficit in inductive reasoning but are equivalent to controls in deductive reasoning. Twenty-five participants with OCD and 25 non-clinical controls made inductive and deductive judgments about a common set of arguments that varied in logical validity and the amount of positive evidence provided (premise sample size). A second inductive reasoning task required participants to make forced-choice decisions and rate the usefulness of diverse evidence or non-diverse evidence for evaluating arguments. No differences in deductive reasoning were found between participants diagnosed with OCD and control participants. Both groups saw that the amount of positive evidence supporting a conclusion was an important guide for evaluating inductive arguments. However, those with OCD showed less sensitivity to premise diversity in inductive reasoning than controls. The findings were similar for both emotionally neutral and OCD-relevant stimuli. The absence of a clinical control group means that it is difficult to know whether the deficit in diversity-based reasoning is specific to those with OCD. People with OCD are impaired in some forms of inductive reasoning (using diverse evidence) but not others (use of sample size). Deductive reasoning appears intact in those with OCD. Difficulties using evidence diversity when reasoning inductively may maintain OCD symptoms through reduced generalization of learned safety information. Copyright © 2017 Elsevier Ltd. All rights reserved.
Nomogram for sample size calculation on a straightforward basis for the kappa statistic.
Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo
2014-09-01
Kappa is a widely used measure of agreement. However, it may not be straightforward to use in some situations, such as sample size calculation, because of the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. The sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
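The kappa paradox the authors mention is easy to reproduce. The sketch below is my illustration using the standard Cohen's kappa formula: two agreement tables with identical observed agreement produce very different kappa values once the marginals are skewed.

```python
def kappa(p11, p10, p01, p00):
    # Cohen's kappa for a 2x2 agreement table given as joint proportions.
    po = p11 + p00                       # observed agreement
    p1, q1 = p11 + p10, p11 + p01       # marginal "yes" rates, raters 1 and 2
    pe = p1 * q1 + (1 - p1) * (1 - q1)  # chance agreement
    return (po - pe) / (1 - pe)

# Balanced marginals: 90% observed agreement gives a high kappa (~0.80).
print(kappa(0.45, 0.05, 0.05, 0.45))
# Skewed marginals: the same 90% agreement gives a much lower kappa (~0.44).
print(kappa(0.85, 0.05, 0.05, 0.05))
```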
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
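Both rules reduce to a one-line search once a cost model is written down. The sketch below is my illustration under an assumed cost function c(n) = f + v*n + w*n^2 (fixed overhead, per-subject cost, and a quadratic diseconomy term so that average cost has an interior minimum); the paper itself treats general cost structures.

```python
import numpy as np

# Assumed cost model (illustration only, not from the paper): fixed overhead f,
# per-subject cost v, quadratic diseconomy term w.
f, v, w = 200_000.0, 500.0, 0.05
n = np.arange(1, 10_001)
cost = f + v * n + w * n ** 2

n_rule1 = n[np.argmin(cost / n)]           # rule 1: minimize average cost per subject
n_rule2 = n[np.argmin(cost / np.sqrt(n))]  # rule 2: minimize total cost / sqrt(n)
print(n_rule1, n_rule2)                    # 2000 and ~361 for these inputs
```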
Sex Differences in the Development of Moral Reasoning: A Rejoinder to Baumrind.
ERIC Educational Resources Information Center
Walker, Lawrence J.
1986-01-01
Addresses the criticisms of Diana Baumrind's review of his research on sex differences in moral reasoning development. Discusses issues such as the nature of moral development, the focus on adulthood, the choice of statistics, the effect of differing sample sizes and scoring systems, and the role of sexual experiences in explaining variability in…
Ranked set sampling: cost and optimal set size.
Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying
2002-12-01
McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
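To see why set size matters, it helps to have a concrete RSS estimator in hand. The sketch below is my illustration, assuming perfect ranking and a normal population: it measures one order statistic per ranked set and compares the variance of the RSS mean against simple random sampling with the same number of measured units.

```python
import numpy as np

rng = np.random.default_rng(0)

def rss_sample(population_draw, set_size, cycles):
    # Ranked set sampling with perfect ranking: in each cycle, draw `set_size`
    # sets of `set_size` units, rank each set cheaply, and actually measure
    # only the i-th order statistic from the i-th set.
    out = []
    for _ in range(cycles):
        for i in range(set_size):
            s = np.sort(population_draw(set_size))
            out.append(s[i])
    return np.array(out)

draw = lambda m: rng.normal(10.0, 2.0, m)
reps = 2000
rss_means = [rss_sample(draw, set_size=4, cycles=5).mean() for _ in range(reps)]  # 20 measured
srs_means = [draw(20).mean() for _ in range(reps)]                                # same n by SRS
print(np.var(rss_means), np.var(srs_means))  # RSS variance is noticeably smaller
```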
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N∗^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
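The square-root scaling can be reproduced with a toy decision-theoretic loss. The sketch below is my simplification, not the paper's exponential-family derivation: each of n trial patients costs a, and each of the remaining N - n patients incurs an expected loss b/n that shrinks as the trial estimate improves, giving a closed-form optimum at sqrt(b*N/a).

```python
import numpy as np

def optimal_n(N, a=1.0, b=50.0):
    # Stylized total loss (assumption, not the paper's utility):
    # L(n) = a*n + (N - n) * b / n; minimized near n* = sqrt(b*N/a).
    n = np.arange(1, N)
    loss = a * n + (N - n) * b / n
    return n[np.argmin(loss)]

for N in (1_000, 10_000, 100_000):
    print(N, optimal_n(N))  # ~224, ~707, ~2236: each step grows by sqrt(10)
```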
Adaptive cluster sampling: An efficient method for assessing inconspicuous species
Andrea M. Silletti; Joan Walker
2003-01-01
Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because a total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...
Schillaci, Michael A; Schillaci, Mario E
2009-02-01
The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process are dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
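Under a normality assumption the probability in question has a closed form: the sample mean has standard deviation sigma/sqrt(n), so P(|x̄ - mu| <= k*sigma) = 2*Phi(k*sqrt(n)) - 1. The sketch below is my simplified version of that calculation (the paper's method differs in detail); it shows that even very small samples localize the mean reasonably well.

```python
from scipy.stats import norm

def prob_within(k, n):
    # P(|sample mean - mu| <= k*sigma) for a normal sample of size n:
    # the mean has sd sigma/sqrt(n), so the probability is 2*Phi(k*sqrt(n)) - 1.
    return 2 * norm.cdf(k * n ** 0.5) - 1

for n in (3, 5, 10):
    print(n, round(prob_within(0.5, n), 3))  # 0.614, 0.736, 0.886
```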
Neuromuscular dose-response studies: determining sample size.
Kopman, A F; Lien, C A; Naguib, M
2011-02-01
Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
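The reported n of 24 can be reproduced with a standard iterative one-sample t-test calculation. The sketch below is my reconstruction under the abstract's inputs (COV = 25%, allowable error ±15%, power 80%, two-tailed alpha = 0.05), not the authors' exact code.

```python
from scipy.stats import t

def n_required(cov, allowable_error, power=0.80, alpha=0.05):
    # Iterative one-sample two-tailed t-test sample size:
    # n = ((t_{alpha/2, n-1} + t_{beta, n-1}) * COV / E)^2, updating the
    # t quantiles with the current n until the estimate stabilizes.
    n = 10.0
    for _ in range(50):
        df = n - 1
        n_new = ((t.ppf(1 - alpha / 2, df) + t.ppf(power, df))
                 * cov / allowable_error) ** 2
        if abs(n_new - n) < 0.5:
            break
        n = n_new
    return int(n_new + 0.999)  # round up

print(n_required(0.25, 0.15))  # ~24, matching the abstract's figure
```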
Johnston, Lisa G; McLaughlin, Katherine R; Rhilani, Houssine El; Latifi, Amina; Toufik, Abdalla; Bennani, Aziza; Alami, Kamal; Elomari, Boutaina; Handcock, Mark S
2015-01-01
Background: Respondent-driven sampling is used worldwide to estimate the population prevalence of characteristics such as HIV/AIDS and associated risk factors in hard-to-reach populations. Estimating the total size of these populations is of great interest to national and international organizations; however, reliable measures of population size often do not exist. Methods: Successive Sampling-Population Size Estimation (SS-PSE) along with network size imputation allows population size estimates to be made without relying on separate studies or additional data (as in network scale-up, multiplier and capture-recapture methods), which may be biased. Results: Ten population size estimates were calculated for people who inject drugs, female sex workers, men who have sex with other men, and migrants from sub-Saharan Africa in six different cities in Morocco. SS-PSE estimates fell within or very close to the likely values provided by experts and the estimates from previous studies using other methods. Conclusions: SS-PSE is an effective method for estimating the size of hard-to-reach populations that leverages important information within respondent-driven sampling studies. The addition of a network size imputation method helps to smooth network sizes, allowing for more accurate results. However, caution should be used, particularly when there is reason to believe that clustered subgroups may exist within the population of interest or when the sample size is small in relation to the population. PMID:26258908
[Experimental analysis of some determinants of inductive reasoning].
Ono, K
1989-02-01
Three experiments were conducted from a behavioral perspective to investigate the determinants of inductive reasoning and to compare some methodological differences. The dependent variable used in these experiments was the threshold of confident response (TCR), which was defined as "the minimal sample size required to establish generalization from instances." Experiment 1 examined the effects of population size on inductive reasoning, and the results from 35 college students showed that the TCR varied in proportion to the logarithm of population size. In Experiment 2, 30 subjects showed distinct sensitivity to both prior probability and base-rate. The results from 70 subjects who participated in Experiment 3 showed that the TCR was affected by its consequences (risk condition), and especially, that humans were sensitive to a loss situation. These results demonstrate the sensitivity of humans to statistical variables in inductive reasoning. Furthermore, methodological comparison indicated that the experimentally observed values of TCR were close to, but not as precise as the optimal values predicted by Bayes' model. On the other hand, the subjective TCR estimated by subjects was highly discrepant from the observed TCR. These findings suggest that various aspects of inductive reasoning can be fruitfully investigated not only from subjective estimations such as probability likelihood but also from an objective behavioral perspective.
Long-term effective population size dynamics of an intensively monitored vertebrate population
Mueller, A-K; Chakarov, N; Krüger, O; Hoffman, J I
2016-01-01
Long-term genetic data from intensively monitored natural populations are important for understanding how effective population sizes (Ne) can vary over time. We therefore genotyped 1622 common buzzard (Buteo buteo) chicks sampled over 12 consecutive years (2002–2013 inclusive) at 15 microsatellite loci. This data set allowed us to both compare single-sample with temporal approaches and explore temporal patterns in the effective number of parents that produced each cohort in relation to the observed population dynamics. We found reasonable consistency between linkage disequilibrium-based single-sample and temporal estimators, particularly during the latter half of the study, but no clear relationship between annual Ne estimates and census sizes. We also documented a 14-fold increase in estimated Ne between 2008 and 2011, a period during which the census size doubled, probably reflecting a combination of higher adult survival and immigration from further afield. Our study thus reveals appreciable temporal heterogeneity in the effective population size of a natural vertebrate population, confirms the need for long-term studies and cautions against drawing conclusions from a single sample. PMID:27553455
ERIC Educational Resources Information Center
Sullivan, Ethan A.
2011-01-01
The purpose of this study was to determine the impact of a business ethics course on the cognitive moral reasoning of freshmen business students. The sample consisted of 268 college students enrolled in a required business ethics course. The students took Rest's Defining Issues Test--Version 2 (DIT2) as a pre-test and then post-test (upon…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, J.D.; Joiner, W.C.H.
1979-10-01
Flux-flow noise power spectra taken on Pb₈₀In₂₀ foils as a function of the orientation of the magnetic field with respect to the sample surfaces are used to study changes in frequencies and bundle sizes as distances of fluxoid traversal and fluxoid lengths change. The results obtained for the frequency dependence of the noise spectra are entirely consistent with our model for flux motion interrupted by pinning centers, provided one makes the reasonable assumption that the distance between pinning centers which a fluxoid may encounter scales inversely with the fluxoid length. The importance of pinning centers in determining the noise characteristics is also demonstrated by the way in which subpulse distributions and generalized bundle sizes are altered by changes in the metallurgical structure of the sample. In unannealed samples the dependence of bundle size on magnetic field orientation is controlled by a structural anisotropy, and we find a correlation between large bundle size and the absence of short subpulse times. Annealing removes this anisotropy, and we find a stronger angular variation of bundle size than would be expected using present simplified models.
Combining the boundary shift integral and tensor-based morphometry for brain atrophy estimation
NASA Astrophysics Data System (ADS)
Michalkiewicz, Mateusz; Pai, Akshay; Leung, Kelvin K.; Sommer, Stefan; Darkner, Sune; Sørensen, Lauge; Sporring, Jon; Nielsen, Mads
2016-03-01
Brain atrophy measured from structural magnetic resonance images (MRIs) is widely used as an imaging surrogate marker for Alzheimer's disease. Its utility has been limited due to the large degree of variance and consequently high sample size estimates. The only consistent and reasonably powerful atrophy estimation method has been the boundary shift integral (BSI). In this paper, we first propose a tensor-based morphometry (TBM) method to measure voxel-wise atrophy that we combine with BSI. The combined model decreases the sample size estimates significantly when compared to BSI and TBM alone.
Overall, John E; Tonidandel, Scott; Starbuck, Robert R
2006-01-01
Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients fitted to the available repeated measurements for each subject separately serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size.
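The adjustment rule the abstract describes is a one-liner. A minimal sketch, in my wording of the rule, with an assumed 20% dropout rate for the example:

```python
import math

def adjust_for_dropouts(n_no_dropout, dropout_rate):
    # Add the number of subjects expected to drop from a sample of the
    # original size (the simple rule supported by the article's results).
    return n_no_dropout + math.ceil(n_no_dropout * dropout_rate)

print(adjust_for_dropouts(120, 0.20))  # 120 + 24 = 144
```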
A. Broido; Hsiukang Yow
1977-01-01
Even before weight loss in the low-temperature pyrolysis of cellulose becomes significant, the average degree of polymerization of the partially pyrolyzed samples drops sharply. The gel permeation chromatograms of nitrated derivatives of the samples can be described in terms of a small number of mixed size populations, each component fitted within reasonable limits by a...
1995-05-01
...principally surveys from Wyatt Data Services and the U.S. Chamber of Commerce) to evaluate the reasonableness of compensation. The Wyatt surveys provided... better matches to similar industries, but the sample sizes often were too small to ensure stability in the data. Both the Wyatt survey and Chamber of Commerce survey... came from the U.S. Chamber of Commerce Employee Benefits survey and included both defined benefit and [data removed for proprietary reasons]
Design of Phase II Non-inferiority Trials.
Jung, Sin-Ho
2017-09-01
With the development of inexpensive treatment regimens and less invasive surgical procedures, we are increasingly confronted with non-inferiority study objectives. A non-inferiority phase III trial requires a roughly four times larger sample size than that of a similar standard superiority trial. Because of the large required sample size, we often face feasibility issues in opening a non-inferiority trial. Furthermore, due to the lack of phase II non-inferiority trial design methods, we do not have an opportunity to investigate the efficacy of the experimental therapy through a phase II trial. As a result, we often fail to open a non-inferiority phase III trial and a large number of non-inferiority clinical questions still remain unanswered. In this paper, we develop designs for non-inferiority randomized phase II trials with feasible sample sizes. First, we review a design method for non-inferiority phase III trials. Subsequently, we propose three different designs for non-inferiority phase II trials that can be used under different settings. Each method is demonstrated with examples. Each of the proposed design methods is shown to require a reasonable sample size for non-inferiority phase II trials. The three different non-inferiority phase II trial designs are used under different settings, but require similar sample sizes that are typical for phase II trials.
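The "roughly four times" factor follows from the quadratic dependence of sample size on the margin: a non-inferiority margin of half the superiority effect quadruples n. A minimal sketch with the standard normal-approximation formula for a binary endpoint (my illustration; the specific rates and margins below are assumptions):

```python
from scipy.stats import norm

def n_per_arm(p, effect, alpha=0.05, power=0.80, one_sided=False):
    # Normal-approximation sample size per arm for a binary endpoint with
    # common response rate p: n = 2 p(1-p) (z_alpha + z_beta)^2 / effect^2.
    z_a = norm.ppf(1 - alpha) if one_sided else norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * p * (1 - p) * (z_a + z_b) ** 2 / effect ** 2

# Superiority: detect a 15-point improvement. Non-inferiority: margin of
# 7.5 points (half the effect), the convention driving the ~4x factor,
# since n scales with 1/margin^2.
print(round(n_per_arm(0.5, 0.15)))   # ~175 per arm
print(round(n_per_arm(0.5, 0.075)))  # ~698 per arm, about 4x
```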
The efficacy of respondent-driven sampling for the health assessment of minority populations.
Badowski, Grazyna; Somera, Lilnabeth P; Simsiman, Brayan; Lee, Hye-Ryeon; Cassel, Kevin; Yamanaka, Alisha; Ren, JunHao
2017-10-01
Respondent-driven sampling (RDS) is a relatively new network sampling technique typically employed for hard-to-reach populations. Like snowball sampling, initial respondents or "seeds" recruit additional respondents from their network of friends. Under certain assumptions, the method promises to produce a sample independent of the biases that may have been introduced by the non-random choice of "seeds." We conducted a survey on health communication in Guam's general population using the RDS method, the first survey that has utilized this methodology in Guam. It was conducted in hopes of identifying a cost-efficient non-probability sampling strategy that could generate reasonable population estimates for both minority and general populations. RDS data were collected in Guam in 2013 (n=511) and population estimates were compared with 2012 BRFSS data (n=2031) and the 2010 census data. The estimates were calculated using the unweighted RDS sample and the weighted sample using RDS inference methods and compared with known population characteristics. The sample size was reached in 23 days, providing evidence that the RDS method is a viable, cost-effective data collection method, which can provide reasonable population estimates. However, the results also suggest that the RDS inference methods used to reduce bias, based on self-reported estimates of network sizes, may not always work. Caution is needed when interpreting RDS study findings. For a more diverse sample, data collection should not be conducted in just one location. Fewer questions about network estimates should be asked, and more careful consideration should be given to the kind of incentives offered to participants. Copyright © 2017. Published by Elsevier Ltd.
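The weighting the abstract alludes to typically enters through inverse network size. A minimal sketch of the Volz-Heckathorn (RDS-II) estimator with hypothetical data (the degrees and outcomes below are invented for illustration, not the paper's data):

```python
import numpy as np

def rds_ii(outcome, degree):
    # Volz-Heckathorn (RDS-II) estimator: weight each respondent by the
    # inverse of their self-reported network size (degree).
    w = 1.0 / np.asarray(degree, dtype=float)
    return np.sum(w * np.asarray(outcome)) / np.sum(w)

y = [1, 0, 1, 1, 0, 0, 1, 0]           # 1 = has the characteristic
d = [30, 5, 50, 40, 8, 6, 25, 10]      # self-reported network sizes
print(rds_ii(y, d))   # ~0.17, well below the raw mean, because high-degree
print(np.mean(y))     # respondents are over-sampled by the recruitment chains
```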
On the Treatment of Authors, Outliers, and Purchasing Power Parity Exchange Rates.
ERIC Educational Resources Information Center
Jaeger, Richard M.
1993-01-01
Ruth Stott violates canons of scholarly debate by attacking the author's October 1992 "Kappan" article on world-class academic standards. Average class size predicted only 10% of the variation in 13-year-olds' mean mathematics scores in 14 nations supplying reasonably comprehensive sampling frames for the International Assessment of Academic…
NASA Astrophysics Data System (ADS)
Lintz, L.; Werts, S. P.
2014-12-01
The Ninety-Six National Historic Site is located in Greenwood County, SC. Recent geologic mapping of this area has revealed differences in soil properties over short distances within the park. We studied the chemistry of the clay minerals found within the soils to see whether there was a correlation between the amount of soil organic carbon and particle size in individual soil horizons. Three different vegetation areas, an old field, a deciduous forest, and a pine forest, were selected to see what influence vegetation type had on clay chemistry and carbon levels as well. Four samples containing the O, A, and B horizons were taken from each location; for each soil sample we measured carbon and nitrogen content with an elemental analyzer, particle size with a laser diffraction particle size analyzer, and clay mineralogy with powder X-ray diffraction. Samples from the old field and pine forest showed an overall negative correlation between carbon content and clay percentage, which runs against the normal trend for Southern Piedmont Ultisols. The deciduous forest samples showed no correlation at all between carbon content and clay percentage. Pooled, all three locations show the same negative relationship; once separated by vegetation type and by A and B horizons, most subsets remain negative while several show no correlation (R² = 0.0074 to 0.5627). Using powder XRD, we analyzed clay samples from each A and B horizon for clay mineralogy. All three vegetation areas gave the same results, containing quartz, kaolinite, and Fe oxides; therefore, clay chemistry is not the reason behind the abnormal negative correlation between average carbon content and clay percentage. Considering that all three locations share the same climate, topography, and parent material of metagranite, it is reasonable to assume these results reflect environmental and biological influences rather than clay type.
Fragment size distribution statistics in dynamic fragmentation of laser shock-loaded tin
NASA Astrophysics Data System (ADS)
He, Weihua; Xin, Jianting; Zhao, Yongqiang; Chu, Genbai; Xi, Tao; Shui, Min; Lu, Feng; Gu, Yuqiu
2017-06-01
This work investigates a geometric statistics method to characterize the size distribution of tin fragments produced in the laser shock-loaded dynamic fragmentation process. In the shock experiments, the ejecta from a tin sample with a V-shaped groove etched into its free surface are collected by a soft recovery technique. Subsequently, the produced fragments are automatically detected with fine post-shot analysis techniques including X-ray micro-tomography and an improved watershed method. To characterize the size distributions of the fragments, a theoretical random geometric statistics model based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem, which yields a linear combination of exponential distributions. The experimental data on fragment size distributions of the laser shock-loaded tin sample are examined with the proposed theoretical model, and its fitting performance is compared with that of other state-of-the-art fragment size distribution models. The comparison results show that the proposed model provides a far more reasonable fit for the laser shock-loaded tin.
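A linear combination of exponentials can be fitted to fragment data directly from the complementary cumulative counts. The sketch below is my illustration on synthetic sizes (the two scale parameters and counts are invented), not the authors' fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Synthetic "fragment sizes" from a two-component exponential mixture, the
# linear-combination-of-exponentials form the model predicts.
sizes = np.concatenate([rng.exponential(20.0, 3000),    # fine fragments
                        rng.exponential(120.0, 600)])   # coarse fragments

# Fit the complementary cumulative count N(>s) = A1*exp(-s/s1) + A2*exp(-s/s2).
s_grid = np.linspace(0.0, 400.0, 80)
n_gt = np.array([(sizes > s).sum() for s in s_grid], dtype=float)

def model(s, a1, s1, a2, s2):
    return a1 * np.exp(-s / s1) + a2 * np.exp(-s / s2)

popt, _ = curve_fit(model, s_grid, n_gt, p0=(3000, 10, 600, 100), maxfev=10000)
print(popt)  # recovers scale parameters near 20 and 120
```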
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp; Zhang, Xu
2015-07-07
Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation "pile-up" effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.
Rizvi, Farwa; Irfan, Ghazia
2012-01-01
High rates of contraceptive discontinuation for reasons other than the desire for pregnancy are a public health concern because of their association with negative reproductive health outcomes. The objective of this study was to determine reasons for discontinuation of contraceptive methods among couples with different family size and educational status. This cross-sectional study was carried out at the Obstetrics/Gynaecology Out-Patient Department of Pakistan Institute of Medical Sciences, Islamabad from April-September 2012. Patients (241) were selected by consecutive sampling after informed written consent and approval of the Ethical Committee. The survey interview tool was a semi-structured questionnaire. The majority (68%) of women were from urban areas, and the rest from rural areas. Mean age of these women was 29.43 +/- 5.384 years. Reasons for discontinuation of contraceptives included fear of injectable contraceptives (2.9%), contraceptive failure/pregnancy (7.46%), desire to become pregnant (63.48%), husband away at job (2.49%), health concerns/side effects (16.18%), affordability (0.83%), inconvenient to use (1.24%), acceptability (0.83%) and accessibility/lack of information (4.56%). The association of the different reasons for discontinuation (chi-square test) with family size (actual number of children) was significant (p = 0.019), but associations with husband's or wife's educational status were not (p = 0.33 and 0.285, respectively). Keeping in mind the complex socioeconomic conditions in the country, family planning programmes and stakeholders need to identify women who strongly want to avoid a pregnancy and find ways to help couples successfully initiate and maintain appropriate contraceptive use.
Analogical reasoning in amazons.
Obozova, Tanya; Smirnova, Anna; Zorina, Zoya; Wasserman, Edward
2015-11-01
Two juvenile orange-winged amazons (Amazona amazonica) were initially trained to match visual stimuli by color, shape, and number of items, but not by size. After learning these three identity matching-to-sample tasks, the parrots transferred discriminative responding to new stimuli from the same categories that had been used in training (other colors, shapes, and numbers of items) as well as to stimuli from a different category (stimuli varying in size). In the critical testing phase, both parrots exhibited reliable relational matching-to-sample (RMTS) behavior, suggesting that they perceived and compared the relationship between objects in the sample stimulus pair to the relationship between objects in the comparison stimulus pairs, even though no physical matches were possible between items in the sample and comparison pairs. The parrots spontaneously exhibited this higher-order relational responding without having ever before been trained on RMTS tasks, therefore joining apes and crows in displaying this abstract cognitive behavior.
The impact of restorative treatment on tooth loss prevention.
Caldas Junior, Arnaldo de França; Silveira, Renata Cimões Jovino; Marcenes, Wagner
2003-01-01
A cross-sectional study was carried out to analyze tooth loss resulting from caries in relation to the number of times the extracted tooth had been restored, the type of caries diagnosed (primary or secondary), and socioeconomic indicators of patients from the city of Recife, Brazil. Ten public health centres and ten centres associated with health insurance companies were randomly selected. The size of the sample was calculated using a standard error of 2.5%. A confidence interval of 95% and a 50% prevalence of reasons for extractions were used for calculating the sample. The minimum size of the sample for meeting these requirements was 381 patients. Patients were randomly selected from the list of adults registered at each centre. A total of 410 patients were invited to take part in the study. The response rate was 100%, but 6 patients were excluded due to incompleteness of data in the questionnaire applied. An assessment was made to obtain the number of decayed, missing or filled teeth (DMFT index) and the reasons for extraction. The results showed a highly significant (p < 0.001) relationship between the number of times the tooth indicated for extraction had been restored and the reason for extraction being caries. Furthermore, the majority of teeth extracted due to caries had been restored two or more times. A highly statistically significant association was also observed between one indicator of use of dental services (F/DMFT) and extraction due to caries (p < 0.001). The findings questioned the belief that tooth loss can be prevented in the general population by merely providing restorative treatment.
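The sample size calculation described here follows the classic formula for estimating a proportion. A minimal sketch of that formula (my reconstruction from the abstract's inputs of 95% confidence and 50% prevalence; with a 5% margin it gives 385, close to the reported minimum of 381, whose exact rounding or finite-population adjustment is not stated):

```python
import math

def n_for_proportion(p=0.5, margin=0.05, z=1.96):
    # Classic sample size for estimating a proportion:
    # n = z^2 * p * (1 - p) / margin^2.
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(n_for_proportion())  # 385
```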
Emotional reasoning processes and dysphoric mood: cross-sectional and prospective relationships.
Berle, David; Moulds, Michelle L
2013-01-01
Emotional reasoning refers to the use of subjective emotions, rather than objective evidence, to form conclusions about oneself and the world. Emotional reasoning appears to characterise anxiety disorders. We aimed to determine whether elevated levels of emotional reasoning also characterise dysphoria. In Study 1, low dysphoric (BDI-II≤4; n = 28) and high dysphoric (BDI-II ≥14; n = 42) university students were administered an emotional reasoning task relevant for dysphoria. In Study 2, a larger university sample were administered the same task, with additional self-referent ratings, and were followed up 8 weeks later. In Study 1, both the low and high dysphoric participants demonstrated emotional reasoning and there were no significant differences in scores on the emotional reasoning task between the low and high dysphoric groups. In Study 2, self-referent emotional reasoning interpretations showed small-sized positive correlations with depression symptoms. Emotional reasoning tendencies were stable across an 8-week interval although not predictive of subsequent depressive symptoms. Further, anxiety symptoms were independently associated with emotional reasoning and emotional reasoning was not associated with anxiety sensitivity, alexithymia, or deductive reasoning tendencies. The implications of these findings are discussed, including the possibility that while all individuals may engage in emotional reasoning, self-referent emotional reasoning may be associated with increased levels of depressive symptoms.
Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette
2018-03-01
In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments are multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Intelligent Gearbox Diagnosis Methods Based on SVM, Wavelet Lifting and RBR
Gao, Lixin; Ren, Zhiqiang; Tang, Wenliang; Wang, Huaqing; Chen, Peng
2010-01-01
A problem with intelligent gearbox diagnosis methods is that it is difficult to obtain the desired information and a large enough sample size to study; therefore, we propose the application of various methods for gearbox fault diagnosis, including wavelet lifting, a support vector machine (SVM) and rule-based reasoning (RBR). In a complex field environment, it is less likely for machines to have the same fault; moreover, the fault features can also vary. Therefore, a SVM could be used for the initial diagnosis. First, gearbox vibration signals were processed with wavelet packet decomposition, and the signal energy coefficients of each frequency band were extracted and used as input feature vectors in SVM for normal and faulty pattern recognition. Second, precision analysis using wavelet lifting could successfully filter out the noisy signals while maintaining the impulse characteristics of the fault, thus effectively extracting the fault frequency of the machine. Lastly, the knowledge base was built based on the field rules summarized by experts to identify the detailed fault type. Results have shown that SVM is a powerful tool to accomplish gearbox fault pattern recognition when the sample size is small, whereas the wavelet lifting scheme can effectively extract fault features, and rule-based reasoning can be used to identify the detailed fault type. Therefore, a method that combines SVM, wavelet lifting and rule-based reasoning ensures effective gearbox fault diagnosis. PMID:22399894
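A minimal sketch of the feature-extraction step described here, using PyWavelets and scikit-learn on synthetic vibration snippets (the signals, the 'db4' wavelet, and the decomposition level are my assumptions, not the paper's settings):

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def band_energies(signal, wavelet="db4", level=3):
    # Wavelet packet decomposition; the energy of each terminal-node frequency
    # band becomes one element of the feature vector.
    wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    e = np.array([np.sum(np.square(n.data)) for n in nodes])
    return e / e.sum()  # normalize so features are comparable across signals

# Hypothetical vibration snippets: label 0 = normal, 1 = faulty (toy data).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
normal = [np.sin(2 * np.pi * 50 * t) + 0.2 * rng.standard_normal(t.size)
          for _ in range(20)]
faulty = [np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 200 * t)
          + 0.2 * rng.standard_normal(t.size) for _ in range(20)]
X = np.array([band_energies(s) for s in normal + faulty])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```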
Seidler, Anna Lene; Askie, Lisa M
2018-01-01
Objectives: To analyse prospective versus retrospective trial registration trends on the Australian New Zealand Clinical Trials Registry (ANZCTR) and to evaluate the reasons for non-compliance with prospective registration. Design: Part 1: Descriptive analysis of trial registration trends from 2006 to 2015. Part 2: Online registrant survey. Participants: Part 1: All interventional trials registered on ANZCTR from 2006 to 2015. Part 2: Random sample of those who had retrospectively registered a trial on ANZCTR between 2010 and 2015. Main outcome measures: Part 1: Proportion of prospective versus retrospective clinical trial registrations (ie, registration before versus after enrolment of the first participant) on the ANZCTR overall and by various key metrics, such as sponsor, funder, recruitment country and sample size. Part 2: Reasons for non-compliance with prospective registration and perceived usefulness of various proposed mechanisms to improve prospective registration compliance. Results: Part 1: Analysis of the complete dataset of 9450 trials revealed that compliance with prospective registration increased from 48% (216 out of 446 trials) in 2006 to 63% (723/1148) in 2012 and has since plateaued at around 64%. Patterns of compliance were relatively consistent across sponsor and funder types (industry vs non-industry), type of intervention (drug vs non-drug) and size of trial (n<100, 100–500, >500). However, primary sponsors from Australia/New Zealand were almost twice as likely to register prospectively (62%; 4613/7452) compared with sponsors from other countries with a WHO Network Registry (35%; 377/1084) or sponsors from countries without a WHO Registry (29%; 230/781). Part 2: The majority (56%; 84/149) of survey respondents cited lack of awareness as a reason for not registering their study prospectively. Seventy-four per cent (111/149) stated that linking registration to ethics approval would facilitate prospective registration. Conclusions: Despite some progress, compliance with prospective registration remains suboptimal. Linking registration to ethics approval was the favoured strategy among those sampled for improving compliance. PMID:29496896
Support vector regression to predict porosity and permeability: Effect of sample size
NASA Astrophysics Data System (ADS)
Al-Anazi, A. F.; Gates, I. D.
2012-02-01
Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both kernel function type and loss functions used.
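A minimal sketch of the SVR-versus-MLP comparison on a small synthetic training set (the feature-to-porosity relationship, hyperparameters, and sample sizes are my assumptions for illustration, not the study's data):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical "core" data: 4 log-derived features -> porosity fraction.
def make_data(n):
    X = rng.normal(size=(n, 4))
    y = (0.20 + 0.03 * X[:, 0] - 0.02 * X[:, 1] + 0.01 * X[:, 2]
         + 0.005 * rng.standard_normal(n))
    return X, y

X_train, y_train = make_data(15)   # few cored wells -> small sample
X_test, y_test = make_data(500)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(X_train, y_train)  # SRM-based
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X_train, y_train)              # ERM-based
print(r2_score(y_test, svr.predict(X_test)),
      r2_score(y_test, mlp.predict(X_test)))
```

The epsilon parameter implements the ɛ-insensitive loss discussed in the abstract: errors smaller than epsilon incur no penalty, which limits model capacity on small samples.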
Auffan, Mélanie; Rose, Jérôme; Bottero, Jean-Yves; Lowry, Gregory V; Jolivet, Jean-Pierre; Wiesner, Mark R
2009-10-01
The regulation of engineered nanoparticles requires a widely agreed definition of such particles. Nanoparticles are routinely defined as particles with sizes between about 1 and 100 nm that show properties that are not found in bulk samples of the same material. Here we argue that evidence for novel size-dependent properties alone, rather than particle size, should be the primary criterion in any definition of nanoparticles when making decisions about their regulation for environmental, health and safety reasons. We review the size-dependent properties of a variety of inorganic nanoparticles and find that particles larger than about 30 nm do not in general show properties that would require regulatory scrutiny beyond that required for their bulk counterparts.
Almutairy, Meznah; Torng, Eric
2018-01-01
Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
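A minimal sketch of the two sampling schemes being compared, as a toy implementation over a short string (real indexes map sampled k-mers to all their positions; this is my illustration, not the paper's code):

```python
def fixed_sampling(seq, k, step):
    # Fixed sampling: keep the k-mer at every `step`-th starting position.
    return {seq[i:i + k]: i for i in range(0, len(seq) - k + 1, step)}

def minimizer_sampling(seq, k, w):
    # Minimizer sampling: for every window of w consecutive k-mers, keep the
    # lexicographically smallest one; consecutive windows often share it,
    # which is what lets query k-mers be sampled too.
    picked = {}
    for start in range(len(seq) - k - w + 2):
        window = [(seq[i:i + k], i) for i in range(start, start + w)]
        kmer, pos = min(window)
        picked[kmer] = pos
    return picked

seq = "ACGTACGTGGTACCGTAACGTT"
print(len(fixed_sampling(seq, k=5, step=3)))
print(len(minimizer_sampling(seq, k=5, w=3)))
```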
Atinga, Roger A; Abiiro, Gilbert Abotisem; Kuganab-Lem, Robert Bella
2015-03-01
To identify the factors influencing dropout from Ghana's health insurance scheme among populations living in slum communities. Cross-sectional data were collected from residents of 22 slums in the Accra Metropolitan Assembly. Cluster and systematic random sampling techniques were used to select and interview 600 individuals who had dropped out from the scheme 6 months prior to the study. Descriptive statistics and multivariate logistic regression models were computed to account for sample characteristics and reasons associated with the decision to drop out. The proportion of dropouts in the sample increased from 6.8% in 2008 to 34.8% in 2012. Non-affordability of the premium was the predominant reason, followed by rare illness episodes, limited benefits of the scheme and poor service quality. Low-income earners and those with low education were significantly more likely to report premium non-affordability. Rare illness was a common reason among younger respondents, informal sector workers and respondents with higher education. All subgroups of age, education, occupation and income reported nominal benefits of the scheme as a reason for dropout. Interventions targeted at removing bottlenecks to health insurance enrolment are salient to maximising the size of the insurance pool. Strengthening service quality and extending the premium exemption to cover low-income families in slum communities is a valuable strategy to achieve universal health coverage. © 2014 John Wiley & Sons Ltd.
Ratios of total suspended solids to suspended sediment concentrations by particle size
Selbig, W.R.; Bannerman, R.T.
2011-01-01
Wet-sieving sand-sized particles from a whole storm-water sample before splitting the sample into laboratory-prepared containers can reduce bias and improve the precision of suspended-sediment concentrations (SSC). Wet-sieving, however, may alter concentrations of total suspended solids (TSS) because the analytical method used to determine TSS may not have included the sediment retained on the sieves. Measuring TSS is still commonly used by environmental managers as a regulatory metric for solids in storm water. For this reason, a new method of correlating concentrations of TSS and SSC by particle size was used to develop a series of correction factors for SSC as a means to estimate TSS. In general, differences between TSS and SSC increased with greater particle size and higher sand content. Median correction factors to SSC ranged from 0.29 for particles larger than 500 μm to 0.85 for particles measuring from 32 to 63 μm. Great variability was observed in each fraction, a result of varying amounts of organic matter in the samples. Wide variability in organic content could reduce the transferability of the correction factors. © 2011 American Society of Civil Engineers.
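Applying such per-fraction correction factors is straightforward. The sketch below is my illustration: only the two factors quoted in the abstract (0.29 for >500 μm, 0.85 for 32-63 μm) are from the paper; the intermediate factors and concentrations are hypothetical placeholders.

```python
# Median TSS/SSC correction factors by particle-size fraction; the middle two
# values are hypothetical placeholders, not from the paper.
CF = {">500um": 0.29, "250-500um": 0.45, "63-250um": 0.65, "32-63um": 0.85}

def estimate_tss(ssc_by_fraction):
    # Apply a per-fraction correction factor to SSC (mg/L) to estimate TSS.
    return sum(CF[frac] * conc for frac, conc in ssc_by_fraction.items())

sample = {">500um": 40.0, "250-500um": 25.0, "63-250um": 60.0, "32-63um": 30.0}
print(estimate_tss(sample))  # mg/L
```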
Deontic and epistemic reasoning in children revisited: comment on Dack and Astington.
Cummins, Denise Dellarosa
2013-11-01
Dack and Astington (Journal of Experimental Child Psychology, 110 (2011) 94-114) attempted to replicate the deontic reasoning advantage among preschoolers reported by Cummins (Memory & Cognition, 24 (1996) 823-829) and by Harris and Nuñez (Child Development, 67 (1996) 1572-1591). Dack and Astington argued that the apparent deontic advantage reported by these studies was in fact an artifact due to a methodological confound, namely, inclusion of an authority in the deontic condition only. Removing this confound attenuated the effect in young children but had no effect on the reasoning of 7-year-olds and adults. Thus, removing reference to authority "explains away" young children's apparent precocity at this type of reasoning. But this explanation rests on (a) a misunderstanding of norms as targets of deontic reasoning and (b) conclusions based on a sample size that was too small to detect the effect in young children. Copyright © 2013 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Hansen, Mark; Cai, Li; Monroe, Scott; Li, Zhen
2014-01-01
It is a well-known problem in testing the fit of models to multinomial data that the full underlying contingency table will inevitably be sparse for tests of reasonable length and for realistic sample sizes. Under such conditions, full-information test statistics such as Pearson's X² and the likelihood ratio statistic…
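The sparseness problem is easy to reproduce: even a modest test length implies a contingency table with vastly more cells than observations. A minimal sketch follows; the test length and sample size are arbitrary illustrative choices.

```python
# With 20 binary items there are 2**20 response patterns; a sample of
# 1000 examinees can populate at most 1000 of those cells.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_persons = 20, 1000
responses = rng.integers(0, 2, size=(n_persons, n_items))

observed_patterns = {tuple(row) for row in responses}
n_cells = 2 ** n_items
print(f"cells in full table:    {n_cells}")
print(f"non-empty cells (max):  {len(observed_patterns)}")
print(f"fraction populated:     {len(observed_patterns) / n_cells:.6f}")
```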
A simulation study on Bayesian Ridge regression models for several collinearity levels
NASA Astrophysics Data System (ADS)
Efendi, Achmad; Effrihan
2017-12-01
When analyzing data with a multiple regression model, if collinearity is present, one or several predictor variables are usually omitted from the model. Sometimes, however, there are reasons, for instance medical or economic ones, why all of the predictors are important and should be included in the model. Ridge regression is commonly used in research to cope with collinearity. In this modeling approach, weights for the predictor variables are used when estimating parameters. The estimation process can follow the concept of likelihood. Alternatively, a Bayesian version of the estimation may be used. This estimation method has not matched the likelihood approach in popularity owing to some difficulties, computational ones among others. Nevertheless, with the recent improvement of computational methodology, this caveat should no longer be a problem. This paper discusses a simulation process for evaluating the characteristics of Bayesian Ridge regression parameter estimates. There are several simulation settings based on a variety of collinearity levels and sample sizes. The results show that the Bayesian method gives better performance for relatively small sample sizes, and for the other settings it performs similarly to the likelihood method.
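A minimal sketch of this kind of simulation is shown below: collinear predictors are generated at a chosen correlation level and sample size, and a ridge-type Bayesian estimator is compared with ordinary least squares on recovery of the true coefficients. The settings (three predictors, rho = 0.95, n = 30) are illustrative, and scikit-learn's BayesianRidge stands in for the Bayesian ridge model described in the paper.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, LinearRegression

rng = np.random.default_rng(1)
n, rho = 30, 0.95                       # small sample, high collinearity
beta = np.array([1.0, 2.0, -1.5])       # true coefficients

# Equicorrelated predictors at correlation level rho
cov = np.full((3, 3), rho) + (1.0 - rho) * np.eye(3)
X = rng.multivariate_normal(np.zeros(3), cov, size=n)
y = X @ beta + rng.normal(0.0, 1.0, size=n)

for model in (LinearRegression(), BayesianRidge()):
    fit = model.fit(X, y)
    err = np.sum((fit.coef_ - beta) ** 2)
    print(f"{type(model).__name__:>16}: squared coefficient error = {err:.3f}")
```

Repeating this over many replicates, collinearity levels and sample sizes would reproduce the structure of the simulation the abstract describes.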
Investigation of Polymer Liquid Crystals
NASA Technical Reports Server (NTRS)
Han, Kwang S.
1996-01-01
The positron annihilation lifetime spectroscopy (PALS) using a low energy flux generator may provide a reasonably accurate technique for measuring molecular weights of linear polymers and for characterizing thin polyimide films in terms of their dielectric constants, hydrophobicity, etc. Among the tested samples are glassy poly(arylene ether ketone) films, epoxy and other polyimide films. One of the proposed techniques relates the free volume cell size (V(sub f)) with sample molecular weight (M) in a manner remarkably similar to the Mark-Houwink (M-H) relation between the intrinsic viscosity (eta) and the molecular weight of a polymer solution. The PALS has also demonstrated that free-volume cell size in thermosets is a versatile, useful parameter that relates directly to the polymer segmental molecular weight, the cross-link density, and the coefficient of thermal expansion. Thus, a determination of free volume cell size provides a viable basis for complete microstructural characterization of thermoset polyimides and also gives direct information about the cross-link density and coefficient of expansion of the test samples. Seven areas of the research conducted are reported here.
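For reference, the Mark-Houwink relation alluded to above is conventionally written for the intrinsic viscosity and the molecular weight M, with K and a empirical constants for a given polymer-solvent-temperature system:

```latex
% Mark-Houwink relation: intrinsic viscosity vs. molecular weight
[\eta] = K M^{a}
```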
NASA Astrophysics Data System (ADS)
Wu, Shuang; Kanada, Isao; Mewes, Tim; Mewes, Claudia; Mankey, Gary; Ariake, Yusuke; Suzuki, Takao
Soft ferrites have been extensively and intensively applied in high frequency devices. Among them, Ba-ferrites substituted by Mn and Ti are particularly attractive as future soft magnetic material candidates for advanced high frequency device applications. However, very little is known about their intrinsic magnetic properties, such as the damping parameter, which is crucial for developing high frequency devices. In the present study, much effort has been focused on the fabrication of single crystal Ba-ferrites and measurements of the damping parameter by FMR. Ba-ferrite samples consisting of many grains of various sizes have been prepared. The saturation magnetization and the magnetic anisotropy field of the sample are in reasonable agreement with the values in the literature. The resonance positions in the FMR spectra over a wide frequency range also comply with theoretical predictions. However, the complex resonance shapes observed make it difficult to extract the dynamic magnetic properties. Possible reasons are the demagnetization field originating from the irregular sample shape or the existence of multiple grains in the samples. S.W. acknowledges the support under the TDK Scholar Program.
Ryskin, Rachel A; Brown-Schmidt, Sarah
2014-01-01
Seven experiments use large sample sizes to robustly estimate the effect size of a previous finding that adults are more likely to commit egocentric errors in a false-belief task when the egocentric response is plausible in light of their prior knowledge. We estimate the true effect size to be less than half of that reported in the original findings. Even though we found effects in the same direction as the original, they were substantively smaller; the original study would have had less than 33% power to detect an effect of this magnitude. The influence of plausibility on the curse of knowledge in adults appears to be small enough that its impact on real-life perspective-taking may need to be reevaluated.
NASA Technical Reports Server (NTRS)
Glaser, F. M.
1976-01-01
Oligoclase and bloedite, two mineral samples, have been investigated, and their diffuse reflectance spectra are presented. These data are for powdered material, a 50 micron to 5 micron size mixture, cooled to 160 K. The reflectivity of the oligoclase sample was also measured at room temperature, about 290 K, and the results at these two temperatures indicate some tentative differences. A frost of ordinary water was prepared and its spectral reflectance is presented. This result compares reasonably well with measurements made by other investigators.
Khan, Bilal; Lee, Hsuan-Wei; Fellows, Ian; Dombrowski, Kirk
2018-01-01
Size estimation is particularly important for populations whose members experience disproportionate health issues or pose elevated health risks to the ambient social structures in which they are embedded. Efforts to derive size estimates are often frustrated when the population is hidden or hard-to-reach in ways that preclude conventional survey strategies, as is the case when social stigma is associated with group membership or when group members are involved in illegal activities. This paper extends prior research on the problem of network population size estimation, building on established survey/sampling methodologies commonly used with hard-to-reach groups. Three novel one-step, network-based population size estimators are presented, for use in the context of uniform random sampling, respondent-driven sampling, and when networks exhibit significant clustering effects. We give provably sufficient conditions for the consistency of these estimators in large configuration networks. Simulation experiments across a wide range of synthetic network topologies validate the performance of the estimators, which also perform well on a real-world location-based social networking data set with significant clustering. Finally, the proposed schemes are extended to allow them to be used in settings where participant anonymity is required. Systematic experiments show favorable tradeoffs between anonymity guarantees and estimator performance. Taken together, we demonstrate that reasonable population size estimates are derived from anonymous respondent-driven samples of 250-750 individuals, within ambient populations of 5,000-40,000. The method thus represents a novel and cost-effective means for health planners and those agencies concerned with health and disease surveillance to estimate the size of hidden populations. We discuss limitations and future work in the concluding section.
Quality of reporting of pilot and feasibility cluster randomised trials: a systematic review
Chan, Claire L; Leyrat, Clémence; Eldridge, Sandra M
2017-01-01
Objectives To systematically review the quality of reporting of pilot and feasibility cluster randomised trials (CRTs). In particular, to assess (1) the number of pilot CRTs conducted between 1 January 2011 and 31 December 2014, (2) whether objectives and methods are appropriate and (3) reporting quality. Methods We searched PubMed (2011–2014) for CRTs with ‘pilot’ or ‘feasibility’ in the title or abstract that were assessing some element of feasibility and showing evidence the study was in preparation for a main effectiveness/efficacy trial. Quality assessment criteria were based on the Consolidated Standards of Reporting Trials (CONSORT) extensions for pilot trials and CRTs. Results Eighteen pilot CRTs were identified. Forty-four per cent did not have feasibility as their primary objective, and many (50%) performed formal hypothesis testing for effectiveness/efficacy despite being underpowered. Most (83%) included ‘pilot’ or ‘feasibility’ in the title, and discussed implications for progression from the pilot to the future definitive trial (89%), but fewer reported reasons for the randomised pilot trial (39%), sample size rationale (44%) or progression criteria (17%). Most defined the cluster (100%) and the number of clusters randomised (94%), but few reported how the cluster design affected sample size (17%), whether consent was sought from clusters (11%), or who enrolled clusters (17%). Conclusions That only 18 pilot CRTs were identified necessitates increased awareness of the importance of conducting and publishing pilot CRTs and improved reporting. Pilot CRTs should primarily assess feasibility, avoiding formal hypothesis testing for effectiveness/efficacy and reporting reasons for the pilot, sample size rationale and progression criteria, as well as enrolment of clusters and how the cluster design affects other design aspects. We recommend adherence to the CONSORT extensions for pilot trials and CRTs. PMID:29122791
El Bakkali, Ahmed; Haouane, Hicham; Moukhli, Abdelmajid; Costes, Evelyne; Van Damme, Patrick; Khadari, Bouchaib
2013-01-01
Phenotypic characterisation of germplasm collections is a decisive step towards association mapping analyses, but it is particularly expensive and tedious for woody perennial plant species. Characterisation could be more efficient if focused on a reasonably sized subset of accessions, a so-called core collection (CC), reflecting the geographic origin and variability of the germplasm. The questions that arise concern the sample size to use and the genetic parameters that should be optimized in a core collection to make it suitable for association mapping. Here we investigated these questions in olive (Olea europaea L.), a perennial fruit species. By testing different sampling methods and sizes in a worldwide olive germplasm bank (OWGB Marrakech, Morocco) containing 502 unique genotypes characterized by nuclear and plastid loci, a two-step sampling method was proposed. The Shannon-Weaver diversity index was found to be the best criterion to maximize in the first step using the Core Hunter program. A primary core collection of 50 entries (CC50) was defined that captured more than 80% of the diversity. The latter was subsequently used as a kernel with the Mstrat program to capture the remaining diversity. Two hundred core collections of 94 entries (CC94) were thus built for flexibility in the choice of varieties to be studied. Most entries of both core collections (CC50 and CC94) were revealed to be unrelated due to the low kinship coefficient, whereas a genetic structure spanning the eastern and western/central Mediterranean regions was noted. Linkage disequilibrium was observed in CC94, which was mainly explained by a genetic structure effect as noted for OWGB Marrakech. Since they reflect the geographic origin and diversity of olive germplasm and are of reasonable size, both core collections will be of major interest for developing long-term association studies and thus enhancing genomic selection in olive species. PMID:23667437
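A toy version of the first step, greedy selection of accessions to maximize a mean Shannon-Weaver index, might look as follows. This is a simplified caricature of the Core Hunter procedure, not its actual algorithm: genotypes are reduced to one allele per locus, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n_acc, n_loci = 100, 10
genotypes = rng.integers(0, 5, size=(n_acc, n_loci))   # allele ids 0..4

def mean_shannon(subset):
    """Mean Shannon-Weaver index over loci for the chosen accessions."""
    h = 0.0
    for locus in range(n_loci):
        _, counts = np.unique(genotypes[subset, locus], return_counts=True)
        p = counts / counts.sum()
        h -= (p * np.log(p)).sum()
    return h / n_loci

core = [0]                            # seed with an arbitrary accession
while len(core) < 50:                 # CC50-sized core, as in the abstract
    candidates = (i for i in range(n_acc) if i not in core)
    core.append(max(candidates, key=lambda i: mean_shannon(core + [i])))

print(f"core size {len(core)}, mean Shannon index {mean_shannon(core):.3f}")
```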
Waliszewski, Matthias W; Redlich, Ulf; Breul, Victor; Tautenhahn, Jörg
2017-04-30
The aim of this review is to present the available clinical and surrogate endpoints that may be used in future studies performed in patients with peripheral artery occlusive disease (PAOD). Importantly, we describe statistical limitations of the most commonly used endpoints and offer some guidance with respect to study design for a given sample size. The proposed endpoints may be used in studies using surgical or interventional revascularization and/or drug treatments. Considering recently published study endpoints and designs, the usefulness of these endpoints for reimbursement is evaluated. Based on these potential study endpoints and patient sample size estimates under different non-inferiority or test-for-difference hypotheses, a rating relative to their corresponding reimbursement values is attempted. As regards the benefit for patients and payers, walking distance (WD) and the ankle brachial index (ABI) are the most feasible endpoints in relatively small study samples, given that other non-vascular impact factors can be controlled. Angiographic endpoints such as minimal lumen diameter (MLD) do not seem useful from a reimbursement standpoint despite their intuitiveness. Other surrogate endpoints, such as transcutaneous oxygen tension measurements, have yet to be established as useful endpoints in reasonably sized studies of patients with critical limb ischemia (CLI). From a reimbursement standpoint, WD and ABI are effective endpoints for a moderate study sample size, given that non-vascular confounding factors can be controlled.
Walker, Christopher S; Yapuncich, Gabriel S; Sridhar, Shilpa; Cameron, Noël; Churchill, Steven E
2018-02-01
Body mass is an ecologically and biomechanically important variable in the study of hominin biology. Regression equations derived from recent human samples allow for the reasonable prediction of body mass of later, more human-like, and generally larger hominins from hip joint dimensions, but potential differences in hip biomechanics across hominin taxa render their use questionable with some earlier taxa (i.e., Australopithecus spp.). Morphometric prediction equations using stature and bi-iliac breadth avoid this problem, but their applicability to early hominins, some of which differ in both size and proportions from modern adult humans, has not been demonstrated. Here we use mean stature, bi-iliac breadth, and body mass from a global sample of human juveniles ranging in age from 6 to 12 years (n = 530 age- and sex-specific group annual means from 33 countries/regions) to evaluate the accuracy of several published morphometric prediction equations when applied to small humans. Though the body proportions of modern human juveniles likely differ from those of small-bodied early hominins, human juveniles (like fossil hominins) often differ in size and proportions from adult human reference samples and, accordingly, serve as a useful model for assessing the robustness of morphometric prediction equations. Morphometric equations based on adults systematically underpredict body mass in the youngest age groups and moderately overpredict body mass in the older groups, which fall in the body size range of adult Australopithecus (∼26-46 kg). Differences in body proportions, notably the ratio of lower limb length to stature, influence predictive accuracy. Ontogenetic changes in these body proportions likely influence the shift in prediction error (from under- to overprediction). However, because morphometric equations are reasonably accurate when applied to this juvenile test sample, we argue these equations may be used to predict body mass in small-bodied hominins, despite the potential for some error induced by differing body proportions and/or extrapolation beyond the original reference sample range. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kim, Hyoungrae; Jang, Cheongyun; Yadav, Dharmendra K; Kim, Mi-Hyun
2017-03-23
The accuracy of any 3D-QSAR, pharmacophore or 3D-similarity based chemometric target fishing model is highly dependent on a reasonable sample of active conformations. A number of diverse conformational sampling algorithms exist that exhaustively generate enough conformers; model building methods, however, rely on an explicit number of common conformers. In this work, we have attempted to develop clustering algorithms that can automatically find a reasonable number of representative conformer ensembles from an asymmetric dissimilarity matrix generated with the OpenEye toolkit. RMSD was the key descriptor (variable): each column of the N × N matrix was considered as one of N variables describing the relationship (network) between a conformer (in a row) and the other N conformers. This approach was used to evaluate the performance of well-known clustering algorithms, comparing them in terms of generating representative conformer ensembles, and to test them over different matrix transformation functions with respect to stability. In the network, the representative conformer group could be resampled for four kinds of algorithms with implicit parameters. The directed dissimilarity matrix becomes the only input to the clustering algorithms. The Dunn index, Davies-Bouldin index, eta-squared values and omega-squared values were used to evaluate the clustering algorithms with respect to compactness and explanatory power. The evaluation also included the reduction (abstraction) rate of the data, the correlation between the sizes of the population and the samples, the computational complexity and the memory usage. Every algorithm could find representative conformers automatically without any user intervention, and they reduced the data to 14-19% of the original values within at most 1.13 s per sample. The clustering methods are simple and practical, as they are fast and do not require any explicit parameters. RCDTC presented the maximum Dunn and omega-squared values of the four algorithms, in addition to a consistent reduction rate between the population size and the sample size. The performance of the clustering algorithms was consistent over different transformation functions. Moreover, the clustering method can also be applied to molecular dynamics sampling simulation results.
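Of the criteria named, the Dunn index is straightforward to compute directly from a dissimilarity matrix. The sketch below clusters synthetic stand-ins for conformers from a symmetric distance matrix (the paper's matrix is asymmetric/directed; symmetry is assumed here for simplicity) and scores the resulting partition:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)
# Three synthetic "conformer" groups in a toy 3-D descriptor space
points = np.vstack([rng.normal(c, 0.3, size=(20, 3)) for c in (0.0, 2.0, 4.0)])
condensed = pdist(points)               # stand-in for pairwise RMSDs
dist = squareform(condensed)            # full N x N dissimilarity matrix

labels = fcluster(linkage(condensed, method="average"),
                  t=3, criterion="maxclust")

def dunn_index(d, lab):
    """Min inter-cluster distance divided by max intra-cluster diameter."""
    cl = np.unique(lab)
    inter = min(d[np.ix_(lab == a, lab == b)].min()
                for a in cl for b in cl if a < b)
    intra = max(d[np.ix_(lab == c, lab == c)].max() for c in cl)
    return inter / intra

print(f"Dunn index: {dunn_index(dist, labels):.3f}")
```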
High-concentration zeta potential measurements using light-scattering techniques
Kaszuba, Michael; Corbett, Jason; Watson, Fraser Mcneil; Jones, Andrew
2010-01-01
Zeta potential is the key parameter that controls electrostatic interactions in particle dispersions. Laser Doppler electrophoresis is an accepted method for the measurement of particle electrophoretic mobility and hence zeta potential of dispersions of colloidal size materials. Traditionally, samples measured by this technique have to be optically transparent. Therefore, depending upon the size and optical properties of the particles, many samples will be too concentrated and will require dilution. The ability to measure samples at or close to their neat concentration would be desirable as it would minimize any changes in the zeta potential of the sample owing to dilution. However, the ability to measure turbid samples using light-scattering techniques presents a number of challenges. This paper discusses electrophoretic mobility measurements made on turbid samples at high concentration using a novel cell with reduced path length. Results are presented on two different sample types, titanium dioxide and a polyurethane dispersion, as a function of sample concentration. For both of the sample types studied, the electrophoretic mobility results show a gradual decrease as the sample concentration increases and the possible reasons for these observations are discussed. Further, a comparison of the data against theoretical models is presented and discussed. Conclusions and recommendations are made from the zeta potential values obtained at high concentrations. PMID:20732896
NASA Astrophysics Data System (ADS)
Meng, Chao; Zhou, Hong; Cong, Dalong; Wang, Chuanwei; Zhang, Peng; Zhang, Zhihui; Ren, Luquan
2012-06-01
The thermal fatigue behavior of hot-work tool steel processed by a biomimetic coupled laser remelting process shows a remarkable improvement compared to untreated samples. The 'dowel pin effect', the 'dam effect' and the 'fence effect' of non-smooth units are the main reasons for this conspicuous improvement in thermal fatigue behavior. To further enhance the 'dowel pin effect', the 'dam effect' and the 'fence effect', this study investigated the effect of different unit morphologies (including 'prolate', 'U' and 'V' morphologies) and of the same unit morphology in different sizes on the thermal fatigue behavior of H13 hot-work tool steel. The results showed that the 'U' morphology unit had the best thermal fatigue behavior, followed by the 'V' morphology, which was better than the 'prolate' morphology unit; when the unit morphology was identical, the thermal fatigue behavior of samples with large unit sizes was better than that of samples with small unit sizes.
NASA Astrophysics Data System (ADS)
Young, G.; Jones, H. M.; Darbyshire, E.; Baustian, K. J.; McQuaid, J. B.; Bower, K. N.; Connolly, P. J.; Gallagher, M. W.; Choularton, T. W.
2016-03-01
Single-particle compositional analysis of filter samples collected on board the Facility for Airborne Atmospheric Measurements (FAAM) BAe-146 aircraft is presented for six flights during the springtime Aerosol-Cloud Coupling and Climate Interactions in the Arctic (ACCACIA) campaign (March-April 2013). Scanning electron microscopy was utilised to derive size-segregated particle compositions and size distributions, and these were compared to corresponding data from wing-mounted optical particle counters. Reasonable agreement between the calculated number size distributions was found. Significant variability in composition was observed, with differing external and internal mixing identified, between air mass trajectory cases based on HYbrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) analyses. Dominant particle classes were silicate-based dusts and sea salts, with particles notably rich in K and Ca detected in one case. Source regions varied from the Arctic Ocean and Greenland through to northern Russia and the European continent. Good agreement between the back trajectories was mirrored by comparable compositional trends between samples. Silicate dusts were identified in all cases, and the elemental composition of the dust was consistent for all samples except one. It is hypothesised that long-range, high-altitude transport was primarily responsible for this dust, with likely sources including the Asian arid regions.
Easteal, Simon
1985-01-01
The allele frequencies are described at ten polymorphic enzyme loci (of a total of 22 loci sampled) in 15 populations of the neotropical giant toad, Bufo marinus, introduced to Hawaii and Australia in the 1930s. The history of establishment of the ten populations is described and used as a framework for the analysis of allele frequency variances. The variances are used to determine the effective sizes of the populations. The estimates obtained (390 and 346) are reasonably precise, homogeneous between localities and much smaller than estimates of neighborhood size obtained previously using ecological methods. This discrepancy is discussed, and it is concluded that the estimates obtained here using genetic methods are the more reliable. PMID:3922852
Broberg, Per
2013-07-19
One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is however proven that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
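For a one-sided normal test, the conditional power under the "current trend" assumption, the quantity whose 50% threshold is discussed above, can be computed in a few lines. This is a generic sketch of the standard formula, not the specific criterion derived in the paper:

```python
from scipy.stats import norm

def conditional_power(z1, t, alpha=0.025):
    """P(final Z exceeds the critical value | interim Z = z1 at
    information fraction t), assuming the observed drift continues."""
    z_crit = norm.ppf(1.0 - alpha)
    drift = z1 / t ** 0.5                         # estimated drift per unit information
    mean_b1 = z1 * t ** 0.5 + drift * (1.0 - t)   # expected final B(1)
    return 1.0 - norm.cdf((z_crit - mean_b1) / (1.0 - t) ** 0.5)

# Example: a promising interim (CP > 50%) halfway through the trial
print(f"CP(z1=1.5, t=0.5) = {conditional_power(1.5, 0.5):.2f}")
```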
Hunter, Kylie Elizabeth; Seidler, Anna Lene; Askie, Lisa M
2018-03-01
To analyse prospective versus retrospective trial registration trends on the Australian New Zealand Clinical Trials Registry (ANZCTR) and to evaluate the reasons for non-compliance with prospective registration. Part 1: Descriptive analysis of trial registration trends from 2006 to 2015. Part 2: Online registrant survey. Part 1: All interventional trials registered on ANZCTR from 2006 to 2015. Part 2: Random sample of those who had retrospectively registered a trial on ANZCTR between 2010 and 2015. Part 1: Proportion of prospective versus retrospective clinical trial registrations (ie, registration before versus after enrolment of the first participant) on the ANZCTR overall and by various key metrics, such as sponsor, funder, recruitment country and sample size. Part 2: Reasons for non-compliance with prospective registration and perceived usefulness of various proposed mechanisms to improve prospective registration compliance. Part 1: Analysis of the complete dataset of 9450 trials revealed that compliance with prospective registration increased from 48% (216 out of 446 trials) in 2006 to 63% (723/1148) in 2012 and has since plateaued at around 64%. Patterns of compliance were relatively consistent across sponsor and funder types (industry vs non-industry), type of intervention (drug vs non-drug) and size of trial (n<100, 100-500, >500). However, primary sponsors from Australia/New Zealand were almost twice as likely to register prospectively (62%; 4613/7452) compared with sponsors from other countries with a WHO Network Registry (35%; 377/1084) or sponsors from countries without a WHO Registry (29%; 230/781). Part 2: The majority (56%; 84/149) of survey respondents cited lack of awareness as a reason for not registering their study prospectively. Seventy-four per cent (111/149) stated that linking registration to ethics approval would facilitate prospective registration. Despite some progress, compliance with prospective registration remains suboptimal. Linking registration to ethics approval was the favoured strategy among those sampled for improving compliance. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Diversity-based reasoning in children.
Heit, E; Hahn, U
2001-12-01
One of the hallmarks of inductive reasoning by adults is the diversity effect, namely that people draw stronger inferences from a diverse set of evidence than from a more homogenous set of evidence. However, past developmental work has not found consistent diversity effects with children age 9 and younger. We report robust sensitivity to diversity in children as young as 5, using everyday stimuli such as pictures of objects with people. Experiment 1 showed the basic diversity effect in 5- to 9-year-olds. Experiment 2 showed that, like adults, children restrict their use of diversity information when making inferences about remote categories. Experiment 3 used other stimulus sets to overcome an alternate explanation in terms of sample size rather than diversity effects. Finally, Experiment 4 showed that children more readily draw on diversity when reasoning about objects and their relations with people than when reasoning about objects' internal, hidden properties, thus partially explaining the negative findings of previous work. Relations to cross-cultural work and models of induction are discussed. Copyright 2001 Academic Press.
NASA Astrophysics Data System (ADS)
Dong, Xufeng; Guan, Xinchun; Ou, Jinping
2009-03-01
In the past ten years, there have been several investigations of the effects of particle size on the magnetostrictive properties of polymer-bonded Terfenol-D composites, but they have not reached agreement. To resolve the conflict among them, Terfenol-D/unsaturated polyester resin composite samples were prepared from Tb0.3Dy0.7Fe2 powder with 20% volume fraction in six particle-size ranges (30-53, 53-150, 150-300, 300-450, 450-500 and 30-500 μm). Their magnetostrictive properties were then tested. The results indicate that the 53-150 μm distribution presents the largest static and dynamic magnetostriction among the five monodispersed distribution samples, but the 30-500 μm (polydispersed) distribution shows an even larger response than the 53-150 μm distribution. This indicates that particle size has a double-edged effect on the magnetostrictive properties of magnetostrictive composites. The existence of an optimal particle size for preparing polymer-bonded Terfenol-D of composition Tb0.3Dy0.7Fe2 results from the competition between the positive and negative effects of increasing particle size. At small particle sizes, the voids and the demagnetization effect decrease significantly with increasing particle size, which leads to an increase in magnetostriction; at larger particle sizes, the percentage of single-crystal particles and the packing density become increasingly smaller with increasing particle size, which results in a decrease in magnetostriction. The reasons why previous investigations obtained differing results are analyzed.
Is it appropriate to composite fish samples for mercury trend monitoring and consumption advisories?
Gandhi, Nilima; Bhavsar, Satyendra P; Gewurtz, Sarah B; Drouillard, Ken G; Arhonditsis, George B; Petro, Steve
2016-03-01
Monitoring mercury levels in fish can be costly because variation by space, time, and fish type/size needs to be captured. Here, we explored whether compositing fish samples to decrease analytical costs would reduce the effectiveness of the monitoring objectives. Six compositing methods were evaluated by applying them to an existing extensive dataset and examining their performance in reproducing the fish consumption advisories and temporal trends. The methods resulted in varying amounts (average 34-72%) of reduction in sample numbers, but all (except one) reproduced advisories very well (96-97% of the advisories did not change or were one category more restrictive compared to analysis of individual samples). Similarly, the methods performed reasonably well in recreating temporal trends, especially when longer-term and frequent measurements were considered. The results indicate that compositing samples within 5 cm fish size bins, or retaining the largest/smallest individuals and compositing in-between samples in batches of 5 with decreasing fish size, would be the best approaches. Based on the literature, the findings from this study are applicable to fillet, muscle plug and whole fish mercury monitoring studies. The compositing methods may also be suitable for monitoring Persistent Organic Pollutants (POPs) in fish. Overall, compositing fish samples for mercury monitoring could result in substantial savings (approximately 60% of the analytical cost) and should be considered in fish mercury monitoring, especially in long-term programs or when study cost is a concern. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
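One of the recommended schemes, compositing within 5 cm fish length bins, reduces to a simple group-by. The sketch below uses hypothetical lengths and mercury concentrations:

```python
import pandas as pd

# Hypothetical individual fish: length (cm) and mercury (ppm)
fish = pd.DataFrame({
    "length_cm": [22.5, 24.0, 27.1, 28.9, 33.0, 34.7, 41.2],
    "hg_ppm":    [0.10, 0.12, 0.18, 0.21, 0.30, 0.33, 0.52],
})

fish["size_bin"] = (fish["length_cm"] // 5) * 5      # 5 cm length bins
composites = fish.groupby("size_bin")["hg_ppm"].agg(["mean", "size"])
print(composites)    # one composite (mean Hg, number of fish) per bin
```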
The use of intuitive and analytic reasoning styles by patients with persecutory delusions.
Freeman, Daniel; Lister, Rachel; Evans, Nicole
2014-12-01
A previous study has shown an association of paranoid thinking with a reliance on rapid intuitive ('experiential') reasoning and less use of slower effortful analytic ('rational') reasoning. The objectives of the new study were to replicate the test of paranoia and reasoning styles in a large general population sample and to assess the use of these reasoning styles in patients with persecutory delusions. Thirty patients with persecutory delusions in the context of a non-affective psychotic disorder and 1000 non-clinical individuals completed self-report assessments of paranoia and reasoning styles. The patients with delusions reported lower levels of both experiential and analytic reasoning than the non-clinical individuals (effect sizes small to moderate). Both self-rated ability and engagement with the reasoning styles were lower in the clinical group. Within the non-clinical group, greater levels of paranoia were associated with lower levels of analytic reasoning, but there was no association with experiential reasoning. The study is cross-sectional and cannot determine whether the reasoning styles contribute to the occurrence of paranoia. It also cannot be determined whether the patient group's lower reasoning scores are specifically associated with the delusions. Clinical paranoia is associated with less reported use of analytic and experiential reasoning. This may reflect patients with current delusions being unconfident in their reasoning abilities or less aware of decision-making processes, and hence less able to re-evaluate fearful cognitions. The dual process theory of reasoning may provide a helpful framework in which to discuss decision-making styles with patients. Copyright © 2014 Elsevier Ltd. All rights reserved.
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
NASA Astrophysics Data System (ADS)
Young, G.; Jones, H. M.; Darbyshire, E.; Baustian, K. J.; McQuaid, J. B.; Bower, K. N.; Connolly, P. J.; Gallagher, M. W.; Choularton, T. W.
2015-10-01
Single-particle compositional analysis of filter samples collected on-board the FAAM BAe-146 aircraft is presented for six flights during the springtime Aerosol-Cloud Coupling and Climate Interactions in the Arctic (ACCACIA) campaign (March-April 2013). Scanning electron microscopy was utilised to derive size distributions and size-segregated particle compositions. These data were compared to corresponding data from wing-mounted optical particle counters and reasonable agreement between the calculated number size distributions was found. Significant variability in composition was observed, with differing external and internal mixing identified, between air mass trajectory cases based on HYSPLIT analyses. Dominant particle classes were silicate-based dusts and sea salts, with particles notably rich in K and Ca detected in one case. Source regions varied from the Arctic Ocean and Greenland through to northern Russia and the European continent. Good agreement between the back trajectories was mirrored by comparable compositional trends between samples. Silicate dusts were identified in all cases, and the elemental composition of the dust was consistent for all samples except one. It is hypothesised that long-range, high-altitude transport was primarily responsible for this dust, with likely sources including the Asian arid regions.
Patrick, Megan E.; Schulenberg, John E.
2010-01-01
Developmental changes in both alcohol use behaviors and self-reported reasons for alcohol use were investigated. Participants were surveyed every two years from ages 18 to 30 as part of the Monitoring the Future national study (analytic weighted sample size N=9,308; 53% women, 40% college attenders). Latent growth models were used to examine correlations among trajectories of binge drinking and trajectories of self-reported reasons for alcohol use across young adulthood. Results revealed developmental changes in reasons for use and correlations between the patterns of within-person change in frequency of binge drinking and within-person change in reasons for use. In particular, an increase in binge drinking between ages 18 and 22 was most positively correlated with slopes of using alcohol to get high and because of boredom. Continued binge drinking between ages 22 and 30 was most strongly correlated with using alcohol to get away from problems. Almost no moderation by gender, race, college attendance, employment, or marital status was found. Binge drinking and reasons for alcohol use traveled together, illustrating the ongoing and dynamic connections between changes in binge drinking and changes in reasons for use across late adolescence and early adulthood. PMID:21219061
Adequacy of laser diffraction for soil particle size analysis
Fisher, Peter; Aumann, Colin; Chia, Kohleth; O'Halloran, Nick; Chandra, Subhash
2017-01-01
Sedimentation has been a standard methodology for particle size analysis since the early 1900s. In recent years laser diffraction has begun to replace sedimentation as the preferred technique in some industries, such as marine sediment analysis. However, for the particle size analysis of soils, which have a diverse range of both particle size and shape, laser diffraction still requires evaluation of its reliability. In this study, the sedimentation-based sieve plummet balance method and the laser diffraction method were used to measure the particle size distribution of 22 soil samples representing four contrasting Australian Soil Orders. Initially, a precise wet riffling methodology was developed that is capable of obtaining representative samples within the recommended obscuration range for laser diffraction. It was found that repeatable results were obtained even if measurements were made at the extreme ends of the manufacturer’s recommended obscuration range. Results from statistical analysis suggested that the use of sample pretreatment to remove soil organic carbon (and possible traces of calcium-carbonate content) made minor differences to the laser diffraction particle size distributions compared to no pretreatment. These differences were found to be marginally statistically significant in the Podosol topsoil and Vertosol subsoil. There are well known reasons why sedimentation methods may be considered to ‘overestimate’ plate-like clay particles, while laser diffraction will ‘underestimate’ the proportion of clay particles. In this study we used Lin’s concordance correlation coefficient to determine the equivalence of laser diffraction and sieve plummet balance results. The results suggested that the laser diffraction equivalent thresholds corresponding to the sieve plummet balance cumulative particle sizes of < 2 μm, < 20 μm, and < 200 μm were < 9 μm, < 26 μm, and < 275 μm respectively. The many advantages of laser diffraction for soil particle size analysis, and the empirical results of this study, suggest that deployment of laser diffraction as a standard test procedure can provide reliable results, provided consistent sample preparation is used. PMID:28472043
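Lin's concordance correlation coefficient itself is compact enough to state in code. The sketch below implements the usual moment-based formula; the paired clay-percentage values are hypothetical, not data from the study:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for paired data."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

clay_sieve = [12.0, 18.5, 25.3, 30.1, 41.7]   # hypothetical % finer than 2 um
clay_laser = [10.5, 16.0, 24.8, 28.0, 38.9]   # hypothetical % finer than 9 um
print(f"Lin's CCC: {lins_ccc(clay_sieve, clay_laser):.3f}")
```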
Rising rates of labor induction: present concerns and future strategies.
Rayburn, William F; Zhang, Jun
2002-07-01
The rate of labor induction nationwide increased gradually from 9.5% to 19.4% between 1990 and 1998. Reasons for this doubling of inductions relate to widespread availability of cervical ripening agents, pressure from patients, conveniences to physicians, and litigious constraints. The increase in medically indicated inductions was slower than the overall increase, suggesting that induction for marginal or elective reasons has risen more rapidly. Data to support or refute the benefits of marginal or elective inductions are limited. Many trials of inductions for marginal indications are either nonexistent or retrospective with small sample sizes, thereby limiting definitive conclusions. Until prospective clinical trials can better validate reasons for the liberal use of labor induction, it would seem prudent to maintain a cautious approach, especially among nulliparous women. Strategies are proposed for developing evidence-based guidelines to reduce the presumed increase in health care costs, risk of cesarean delivery for nulliparas, and overscheduling in labor and delivery.
Kidney function endpoints in kidney transplant trials: a struggle for power.
Ibrahim, A; Garg, A X; Knoll, G A; Akbari, A; White, C A
2013-03-01
Kidney function endpoints are commonly used in randomized controlled trials (RCTs) in kidney transplantation (KTx). We conducted this study to estimate the proportion of ongoing RCTs with kidney function endpoints in KTx where the proposed sample size is large enough to detect meaningful differences in glomerular filtration rate (GFR) with adequate statistical power. RCTs were retrieved using the key word "kidney transplantation" from the National Institute of Health online clinical trial registry. Included trials had at least one measure of kidney function tracked for at least 1 month after transplant. We determined the proportion of two-arm parallel trials that had sufficient sample sizes to detect a minimum 5, 7.5 and 10 mL/min difference in GFR between arms. Fifty RCTs met inclusion criteria. Only 7% of the trials were above a sample size of 562, the number needed to detect a minimum 5 mL/min difference between the groups should one exist (assumptions: α = 0.05; power = 80%, 10% loss to follow-up, common standard deviation of 20 mL/min). The result increased modestly to 36% of trials when a minimum 10 mL/min difference was considered. Only a minority of ongoing trials have adequate statistical power to detect between-group differences in kidney function using conventional sample size estimating parameters. For this reason, some potentially effective interventions which ultimately could benefit patients may be abandoned from future assessment. © Copyright 2013 The American Society of Transplantation and the American Society of Transplant Surgeons.
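The quoted threshold of 562 can be checked with the standard two-sample normal approximation. The sketch below reproduces the calculation under the stated assumptions (alpha = 0.05 two-sided, 80% power, common SD 20 mL/min, minimum difference 5 mL/min, 10% loss to follow-up); the small gap to 562 plausibly reflects rounding conventions:

```python
from scipy.stats import norm

alpha, power = 0.05, 0.80
sigma, delta, loss = 20.0, 5.0, 0.10          # mL/min, mL/min, dropout rate

z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
n_per_arm = 2.0 * (z * sigma / delta) ** 2    # ~251 per arm
n_total = 2.0 * n_per_arm / (1.0 - loss)      # inflate for 10% loss
print(f"total sample size required: {n_total:.0f}")   # ~558, vs 562 quoted
```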
Zinc Nucleation and Growth in Microgravity
NASA Technical Reports Server (NTRS)
Michael, B. Patrick; Nuth, J. A., III; Lilleleht, L. U.; Vondrak, Richard R. (Technical Monitor)
2000-01-01
We report our experiences with zinc nucleation in a microgravity environment aboard NASA's Reduced Gravity Research Facility. Zinc vapor is produced by a heater in a vacuum chamber containing argon gas. Nucleation is induced by cooling and its onset is easily detected visually by the appearance of a cloud of solid, at least partially crystalline zinc particles. Size distribution of these particles is monitored in situ by photon correlation spectroscopy. Samples of particles are also extracted for later analysis by SEM. The initially rapid increase in particle size is followed by a slower period of growth. We apply Scaled Nucleation Theory to our data and find that the derived critical temperature of zinc, the critical cluster size at nucleation, and the surface tension values are all in reasonably good agreement with their accepted literature values.
A random-sum Wilcoxon statistic and its application to analysis of ROC and LROC data.
Tang, Liansheng Larry; Balakrishnan, N
2011-01-01
The Wilcoxon-Mann-Whitney statistic is commonly used for a distribution-free comparison of two groups. One requirement for its use is that the sample sizes of the two groups are fixed. This is violated in some applications, such as medical imaging studies and diagnostic marker studies; in the former, the violation occurs because the number of correctly localized abnormal images is random, while in the latter it is due to some subjects not having observable measurements. For this reason, we propose here a random-sum Wilcoxon statistic for comparing two groups in the presence of ties, and derive its variance as well as its asymptotic distribution for large sample sizes. The proposed statistic includes the regular Wilcoxon rank-sum statistic. Finally, we apply the proposed statistic to summarizing location response operating characteristic data from a liver computed tomography study, and also to summarizing the diagnostic accuracy of biomarker data.
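For orientation, the fixed-sample statistic that the random-sum version generalizes is available in SciPy. The toy example below draws a random group size first and then applies the ordinary Wilcoxon-Mann-Whitney test conditional on it; the paper's contribution is precisely to account for that extra randomness (and for ties) in the variance, which this naive analysis ignores:

```python
import numpy as np
from scipy.stats import mannwhitneyu, poisson

rng = np.random.default_rng(4)
n_abnormal = int(poisson.rvs(30, random_state=4))   # group size is random
# Rounded scores so that ties occur, as in the setting discussed above
scores_abn = np.round(rng.normal(1.0, 1.0, size=n_abnormal), 1)
scores_nor = np.round(rng.normal(0.0, 1.0, size=40), 1)

u, p = mannwhitneyu(scores_abn, scores_nor, alternative="two-sided")
print(f"realized n = {n_abnormal}, U = {u:.0f}, p = {p:.4f}")
```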
Alturki, Reem; Schandelmaier, Stefan; Olu, Kelechi Kalu; von Niederhäusern, Belinda; Agarwal, Arnav; Frei, Roy; Bhatnagar, Neera; Hooft, Lotty; von Elm, Erik; Briel, Matthias
2017-01-01
One quarter of randomized clinical trials (RCTs) are prematurely discontinued and frequently remain unpublished. Trial registries can document whether a trial is ongoing, suspended, discontinued, or completed and therefore represent an important source for trial status information. The accuracy of this information is unclear. To examine the accuracy of completion status and reasons for discontinuation documented in trial registries as compared to corresponding publications of discontinued RCTs and to investigate potential predictors for accurate trial status information in registries. We conducted a cross-sectional study comparing information provided in publications (reference standard) to corresponding registry entries. First, we reviewed publications of RCTs providing information on both discontinuation and registration. We identified eligible publications through systematic searches of MEDLINE and EMBASE (2010-2014) and an international cohort of 1,017 RCTs initiated between 2000 and 2003. Second, pairs of investigators independently and in duplicate extracted data from publications and corresponding registry records. Third, for each discontinued RCT, we compared publication information to registry information. We used multivariable regression to examine whether accurate labeling of trials as discontinued (vs. other status) in the registry was associated with recent initiation of RCT, industry sponsorship, multicenter design, or larger sample size. We identified 173 publications of RCTs that were discontinued due to slow recruitment (55%), harm (16%), futility (11%), benefit (5%), other reasons (3%), or multiple reasons (9%). Trials were registered with clinicaltrials.gov (77%), isrctn.com (14%), or other registries (8%). Of the 173 corresponding registry records, 77 (45%) trials were labeled as discontinued and 57 (33%) provided a reason for discontinuation (of which 53, 93%, provided the same reason as in the publication). Labeling of discontinued trials as discontinued (vs. other label) in corresponding trial registry records improved over time (adjusted odds ratio 1.16 per year, confidence interval 1.04-1.30) and was possibly associated with industry sponsorship (2.01, 0.99-4.07) but unlikely with multicenter status (0.81, 0.32-2.04) or sample size (1.07, 0.89-1.29). Less than half of published discontinued RCTs were accurately labelled as discontinued in corresponding registry records. One-third of registry records provided a reason for discontinuation. Current trial status information in registries should be viewed with caution. Copyright © 2016 Elsevier Inc. All rights reserved.
Booksmythe, Isobel; Mautz, Brian; Davis, Jacqueline; Nakagawa, Shinichi; Jennions, Michael D
2017-02-01
Females can benefit from mate choice for male traits (e.g. sexual ornaments or body condition) that reliably signal the effect that mating will have on mean offspring fitness. These male-derived benefits can be due to material and/or genetic effects. The latter include an increase in the attractiveness, hence likely mating success, of sons. Females can potentially enhance any sex-biased benefits of mating with certain males by adjusting the offspring sex ratio depending on their mate's phenotype. One hypothesis is that females should produce mainly sons when mating with more attractive or higher quality males. Here we perform a meta-analysis of the empirical literature that has accumulated to test this hypothesis. The mean effect size was small (r = 0.064-0.095; i.e. explaining <1% of variation in offspring sex ratios) but statistically significant in the predicted direction. It was, however, not robust to correction for an apparent publication bias towards significantly positive results. We also examined the strength of the relationship using different indices of male attractiveness/quality that have been invoked by researchers (ornaments, behavioural displays, female preference scores, body condition, male age, body size, and whether a male is a within-pair or extra-pair mate). Only ornamentation and body size significantly predicted the proportion of sons produced. We obtained similar results regardless of whether we ran a standard random-effects meta-analysis, or a multi-level, Bayesian model that included a correction for phylogenetic non-independence. A moderate proportion of the variance in effect sizes (51.6-56.2%) was due to variation that was not attributable to sampling error (i.e. sample size). Much of this non-sampling error variance was not attributable to phylogenetic effects or high repeatability of effect sizes among species. It was approximately equally attributable to differences (occurring for unknown reasons) in effect sizes among and within studies (25.3, 22.9% of the total variance). There were no significant effects of year of publication or two aspects of study design (experimental/observational or field/laboratory) on reported effect sizes. We discuss various practical reasons and theoretical arguments as to why small effect sizes should be expected, and why there might be relatively high variation among studies. Currently, there are no species where replicated, experimental studies show that mothers adjust the offspring sex ratio in response to a generally preferred male phenotype. Ultimately, we need more experimental studies that test directly whether females produce more sons when mated to relatively more attractive males, and that provide the requisite evidence that their sons have higher mean fitness than their daughters. © 2015 Cambridge Philosophical Society.
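The random-effects analysis reported here is of a standard form. Below is a hedged sketch of DerSimonian-Laird random-effects pooling of correlation effect sizes via Fisher's z transform, with entirely hypothetical study data, not the data analysed in the paper:

```python
import numpy as np

r = np.array([0.05, 0.12, -0.02, 0.09, 0.07])   # hypothetical study effects
n = np.array([120, 80, 200, 60, 150])           # hypothetical sample sizes

z = np.arctanh(r)            # Fisher z transform of correlations
v = 1.0 / (n - 3)            # known sampling variance of z
w = 1.0 / v

# DerSimonian-Laird estimate of between-study variance tau^2
z_fixed = np.sum(w * z) / np.sum(w)
q = np.sum(w * (z - z_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(z) - 1)) / c)

w_re = 1.0 / (v + tau2)                         # random-effects weights
z_pooled = np.sum(w_re * z) / np.sum(w_re)
print(f"pooled r = {np.tanh(z_pooled):.3f}, tau^2 = {tau2:.4f}")
```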
Kendall, Carl; Kerr, Ligia R F S; Gondim, Rogerio C; Werneck, Guilherme L; Macena, Raimunda Hermelinda Maia; Pontes, Marta Kerr; Johnston, Lisa G; Sabin, Keith; McFarland, Willi
2008-07-01
Obtaining samples of populations at risk for HIV challenges surveillance, prevention planning, and evaluation. Methods used include snowball sampling, time location sampling (TLS), and respondent-driven sampling (RDS). Few studies have made side-by-side comparisons to assess their relative advantages. We compared snowball, TLS, and RDS surveys of men who have sex with men (MSM) in Fortaleza, Brazil, with a focus on comparing the socio-economic status (SES) and risk behaviors of the samples to each other, to known AIDS cases and to the general population. RDS produced a sample with wider inclusion of lower SES than snowball sampling or TLS, a finding of health significance given that the majority of AIDS cases reported among MSM in the state were of low SES. RDS also achieved the sample size faster and at lower cost. For reasons of inclusion and cost-efficiency, RDS is the sampling methodology of choice for HIV surveillance of MSM in Fortaleza.
Minetti, Andrea; Riera-Montes, Margarita; Nackers, Fabienne; Roederer, Thomas; Koudika, Marie Hortense; Sekkenes, Johanne; Taconet, Aurore; Fermon, Florence; Touré, Albouhary; Grais, Rebecca F; Checchi, Francesco
2012-10-12
Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered lot quality assurance sampling (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: (i) health areas not requiring supplemental activities; (ii) health areas requiring additional vaccination; (iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard errors of VC and intra-cluster correlation (ICC) estimates were increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at the local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.
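The bootstrapping idea, resampling clusters to see how the stability of the VC estimate degrades as the per-cluster sample shrinks, can be sketched as follows (synthetic data; 10 clusters as in the 10 × 15 design, with the per-cluster count swept downward):

```python
import numpy as np

rng = np.random.default_rng(5)
true_vc, n_clusters = 0.80, 10

def boot_se(props, n_boot=2000):
    """Bootstrap SE of mean coverage, resampling clusters with replacement."""
    idx = rng.integers(0, len(props), size=(n_boot, len(props)))
    return props[idx].mean(axis=1).std()

for m in (15, 9, 3):                   # children sampled per cluster
    props = rng.binomial(m, true_vc, size=n_clusters) / m
    print(f"10 x {m:>2} design: bootstrap SE of VC = {boot_se(props):.3f}")
```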
PyRETIS: A well-done, medium-sized python library for rare events.
Lervik, Anders; Riccardi, Enrico; van Erp, Titus S
2017-10-30
Transition path sampling techniques are becoming common approaches in the study of rare events at the molecular scale. More efficient methods, such as transition interface sampling (TIS) and replica exchange transition interface sampling (RETIS), allow the investigation of rare events, for example, chemical reactions and structural/morphological transitions, in a reasonable computational time. Here, we present PyRETIS, a Python library for performing TIS and RETIS simulations. PyRETIS directs molecular dynamics (MD) simulations in order to sample rare events with unbiased dynamics. PyRETIS is designed to be easily interfaced with any molecular simulation package and in the present release, it has been interfaced with GROMACS and CP2K, for classical and ab initio MD simulations, respectively. © 2017 Wiley Periodicals, Inc.
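The interface idea behind TIS can be caricatured in a few lines: the overall transition probability is built up as a product of interface-crossing probabilities, each of which is far easier to estimate than the full rare event. The toy below uses a biased 1D random walk and restarts fresh at each interface, unlike true TIS, which conditions on first-crossing path ensembles; it illustrates the factorization only and is not PyRETIS's API:

```python
import numpy as np

rng = np.random.default_rng(6)
interfaces = [0.0, 1.0, 2.0, 3.0]          # order-parameter values

def crossing_prob(lam_from, lam_to, n_paths=2000, step=0.1, drift=-0.02):
    """P(walk from lam_from reaches lam_to before falling back to the
    first interface), estimated by brute force."""
    hits = 0
    for _ in range(n_paths):
        x = lam_from + drift + step * rng.standard_normal()  # first move
        while interfaces[0] < x < lam_to:
            x += drift + step * rng.standard_normal()
        hits += x >= lam_to
    return hits / n_paths

probs = [crossing_prob(a, b) for a, b in zip(interfaces, interfaces[1:])]
print("per-interface crossing probabilities:", [f"{p:.3f}" for p in probs])
print(f"overall transition probability ~ {np.prod(probs):.2e}")
```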
Hallén, Jonas; Jensen, Jesper K; Buser, Peter; Jaffe, Allan S; Atar, Dan
2011-03-01
Presence of microvascular obstruction (MVO) following primary percutaneous coronary intervention (pPCI) for ST-elevation myocardial infarction (STEMI) confers a higher risk of left-ventricular remodelling and dysfunction. Measurement of cardiac troponin I (cTnI) after STEMI reflects the extent of myocardial destruction. We aimed to explore whether cTnI values were associated with the presence of MVO independently of infarct size in STEMI patients receiving pPCI. 175 patients with STEMI were included. cTnI was sampled at 24 and 48 h. MVO and infarct size were determined by delayed enhancement with cardiac magnetic resonance at five to seven days post index event. The presence of MVO following STEMI was associated with larger infarct size and higher values of cTnI at 24 and 48 h. For any given infarct size or cTnI value, there was a greater risk of MVO development in non-anterior infarctions. cTnI was strongly associated with MVO in both anterior and non-anterior infarctions (P < 0.01) after adjustment for covariates (including infarct size), and was reasonably effective in predicting MVO in individual patients (area-under-the-curve ≥0.81). Presence of MVO is reflected in levels of cTnI sampled at an early time-point following STEMI, and this association persists after adjustment for infarct size.
Sperm count as a surrogate endpoint for male fertility control.
Benda, Norbert; Gerlinger, Christoph
2007-11-30
When assessing the effectiveness of a hormonal method of fertility control in men, the classical approach used for the assessment of hormonal contraceptives in women, estimating the pregnancy rate or using a life-table analysis for the time to pregnancy, is difficult to apply in a clinical development program. The main reasons are the dissociation of the treated unit, i.e. the man, and the observed unit, i.e. his female partner, the high variability in the frequency of male intercourse, the logistical cost, and ethical concerns related to the monitoring of the trial. A reasonable surrogate for the definitive endpoint, time to pregnancy, is sperm count. In addition to avoiding the problems mentioned, trials that compare different treatments become possible with reasonable sample sizes, and study duration can be shorter. However, current products do not suppress sperm production completely in all men, and sperm count is only observed with measurement error. Complete azoospermia might not be necessary in order to achieve an acceptable failure rate compared with other forms of male fertility control. Therefore, the use of sperm count as a surrogate endpoint must rely on the results of a previous trial in which both the definitive- and surrogate-endpoint results were assessed. The paper discusses different estimation functions for the mean pregnancy rate (corresponding to the cumulative hazard) that are based on the results of a sperm count trial and a previous trial in which both sperm count and time to pregnancy were assessed, as well as the underlying assumptions. Sample size estimations are given for pregnancy rate estimation with a given precision.
Williams, Michael S; Cao, Yong; Ebel, Eric D
2013-07-15
Levels of pathogenic organisms in food and water have steadily declined in many parts of the world. A consequence of this reduction is that the proportion of samples that test positive for the most contaminated product-pathogen pairings has fallen to less than 0.1. While this is unequivocally beneficial to public health, datasets with very few enumerated samples present an analytical challenge because a large proportion of the observations are censored values. One application of particular interest to risk assessors is the fitting of a statistical distribution function to datasets collected at some point in the farm-to-table continuum. The fitted distribution forms an important component of an exposure assessment. A number of studies have compared different fitting methods and proposed lower limits on the proportion of samples where the organisms of interest are identified and enumerated, with the recommended lower limit of enumerated samples being 0.2. This recommendation may not be applicable to food safety risk assessments for a number of reasons, which include the development of new Bayesian fitting methods, the use of highly sensitive screening tests, and the generally larger sample sizes found in surveys of food commodities. This study evaluates the performance of a Markov chain Monte Carlo fitting method when used in conjunction with a screening test and enumeration of positive samples by the Most Probable Number technique. The results suggest that levels of contamination for common product-pathogen pairs, such as Salmonella on poultry carcasses, can be reliably estimated with the proposed fitting method and sample sizes in excess of 500 observations. The results do, however, demonstrate that simple guidelines for this application, such as the proportion of positive samples, cannot be provided. Published by Elsevier B.V.
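To make the censoring machinery concrete, the sketch below fits a lognormal concentration distribution to data where most observations fall below a limit of detection, using a plain random-walk Metropolis sampler. It is a minimal stand-in for the study's Bayesian MCMC/MPN approach, not the authors' actual model; the scale, limit of detection, and flat priors are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated survey: true log10 concentrations, left-censored at the LOD.
true_mu, true_sigma, lod = -1.0, 1.0, 0.0     # illustrative log10 scale
z = rng.normal(true_mu, true_sigma, size=600)
observed = np.where(z > lod, z, np.nan)        # NaN marks a censored sample

def log_post(mu, sigma):
    if sigma <= 0:
        return -np.inf
    det = observed[~np.isnan(observed)]
    n_cens = int(np.isnan(observed).sum())
    ll = stats.norm.logpdf(det, mu, sigma).sum()
    ll += n_cens * stats.norm.logcdf(lod, mu, sigma)  # censored contribution
    return ll                                          # flat priors assumed

# Random-walk Metropolis over (mu, sigma).
chain, cur = [], np.array([0.0, 0.5])
cur_lp = log_post(*cur)
for _ in range(20000):
    prop = cur + rng.normal(0, 0.05, size=2)
    lp = log_post(*prop)
    if np.log(rng.uniform()) < lp - cur_lp:
        cur, cur_lp = prop, lp
    chain.append(cur.copy())
print(np.array(chain[5000:]).mean(axis=0))  # should land near (-1.0, 1.0)
```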
Size distribution and growth rate of crystal nuclei near critical undercooling in small volumes
NASA Astrophysics Data System (ADS)
Kožíšek, Z.; Demo, P.
2017-11-01
Kinetic equations are numerically solved within the standard nucleation model to determine the size distribution of nuclei in small volumes near critical undercooling. The critical undercooling, at which the first nuclei are detected within the system, depends on the droplet volume. The size distribution of nuclei reaches its stationary value after some time delay and decreases with nucleus size. Only a certain maximum nucleus size is reached in small volumes near critical undercooling. As a model system, we selected the recently studied nucleation in a Ni droplet [J. Bokeloh et al., Phys. Rev. Lett. 107 (2011) 145701] due to the available experimental and simulation data. However, using these data for sample masses from 23 μg up to 63 mg (corresponding to the experiments) leads to size distributions in which no critical nuclei are formed in the Ni droplet (the number of critical nuclei is < 1). If one takes into account the size dependence of the interfacial energy, the size distribution of nuclei increases to reasonable values. In smaller volumes (V ≤ 10⁻⁹ m³) the nuclei reach a maximum size that quickly increases with undercooling. Supercritical clusters continue their growth only if the number of critical nuclei is sufficiently high.
No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.
van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B
2016-11-24
Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for the substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small-sample bias, coverage of confidence intervals, and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and by a modified estimation procedure known as Firth's correction are compared. The results show that, besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches to identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
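The small-sample bias and separation issues are easy to reproduce. The sketch below, a simplified Monte Carlo in the spirit of (but not identical to) the paper's simulations, fits maximum-likelihood logistic models at a few sample sizes and counts fits that fail outright as a crude proxy for separation; all parameter choices are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
true_beta = 1.0

def simulate_bias(n, n_covariates=5, n_sims=500):
    """Median estimate of a logit coefficient whose true value is 1.0;
    with ~n/2 events, EPV is roughly n / (2 * n_covariates)."""
    estimates, failures = [], 0
    for _ in range(n_sims):
        X = rng.normal(size=(n, n_covariates))
        eta = true_beta * X[:, 0]            # only the first covariate matters
        y = rng.binomial(1, 1 / (1 + np.exp(-eta)))
        try:
            fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0, maxiter=200)
            estimates.append(fit.params[1])
        except Exception:                    # crude flag for (quasi-)separation
            failures += 1
    return np.median(estimates), failures

for n in (50, 100, 400):
    med, sep = simulate_bias(n)
    print(f"n={n}: median beta_1 = {med:.2f}, failed/separated fits = {sep}")
```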
NASA Astrophysics Data System (ADS)
Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.
2009-04-01
Near infrared (NIR) reflectance spectroscopy offers important advantages: it is a non-destructive technique, samples need minimal pre-treatment, and a spectrum is obtained in less than one minute without chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows several constituents or parameters to be analysed simultaneously from a single spectrum. A necessary step is the development of soil spectral libraries (sets of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the soils of the target site in which the calibration is to be used. This premise is often not easy to fulfil, especially for recently developed libraries. A classical way to solve this problem is to repopulate the library and subsequently recalibrate the models. In this work we studied how the accuracy of the predictions changes as samples are successively added during repopulation. In general, calibrations with a large number of samples and high diversity are desired, but we hypothesized that calibrations with fewer samples (smaller size) would more easily absorb the spectral characteristics of the target site. Thus, we suspected that the size of the calibration (model) to be repopulated could be important, so we also studied its effect on the accuracy of predictions of the repopulated models. We used those spectra of our library that contained data on soil Kjeldahl nitrogen (NKj) content (nearly 1500 samples). First, spectra from the target site were removed from the spectral library. Then, different quantities of library samples were selected (representing 5, 10, 25, 50, 75 and 100% of the total library) and used to develop calibrations of different sizes (%). We used partial least squares regression and leave-one-out cross-validation as calibration methods. Two methods were used to select the different quantities (model sizes) of samples: (1) Based on Characteristics of Spectra (BCS), and (2) Based on NKj Values of Samples (BVS); both aimed to select representative samples. Each calibration (containing 5, 10, 25, 50, 75 or 100% of the total samples of the library) was repopulated with samples from the target site and then recalibrated (by leave-one-out cross-validation). This procedure was sequential: at each step, 2 samples from the target site were added to the models, which were then recalibrated. This process was repeated 10 times, for a total of 20 added samples. A local model was also created with the 20 samples used for repopulation. The repopulated, non-repopulated, and local calibrations were used to predict the NKj content of those target-site samples not included in the repopulation. To measure the accuracy of the predictions, the r2, RMSEP and slopes were calculated by comparing predicted with analysed NKj values. This scheme was repeated for each of the four target sites studied. In general, few differences were found between results obtained with BCS and BVS models. We observed that the repopulation of models increased the r2 of the predictions in sites 1 and 3. Repopulation caused little change in the r2 of the predictions in sites 2 and 4, perhaps due to the high initial values (r2 > 0.90 using non-repopulated models).
As a consequence of repopulation, the RMSEP decreased at all sites except site 2, where a very low RMSEP had already been obtained before repopulation (0.4 g kg-1). The slopes tended to approach 1, but this value was reached only in site 4 and only after repopulation with 20 samples. In sites 3 and 4, accurate predictions were obtained using the local models. Predictions obtained with models of similar size (similar %) were averaged in order to describe the main patterns. The r2 of predictions obtained with larger models was not higher than that obtained with smaller models. After repopulation, the RMSEP of predictions using the smaller models (5, 10 and 25% of the library samples) was lower than the RMSEP obtained with the larger ones (75 and 100%), indicating that small models can more easily integrate the variability of the soils of the target site. The results suggest that small calibrations could be repopulated and thereby "converted" into local calibrations. Accordingly, most of the effort can be focused on obtaining highly accurate analytical values for a reduced set of samples (including some samples from the target sites). The patterns observed here run counter to the idea of global models. These results could encourage the expansion of this technique, because very large databases do not seem to be needed. Future studies with very different samples will help to confirm the robustness of the observed patterns. The authors thank "Bancaja-UMH" for financial support of the project "NIRPROS".
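The repopulate-and-recalibrate loop described above is straightforward to prototype. Below is a minimal sketch with scikit-learn's PLSRegression on synthetic "spectra": a library calibration is augmented with a few target-site samples at a time and RMSEP is tracked on held-out site samples. The spectra, set sizes, and step of 4 (the study added 2 at a time) are assumptions for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)

def make_samples(n, shift=0.0):
    """Synthetic spectra (n x 100 wavelengths) whose band height tracks NKj."""
    w = np.linspace(0, 1, 100)
    nkj = rng.uniform(0.5, 3.0, size=n)
    X = (nkj[:, None] * np.exp(-((w - 0.5 - shift) ** 2) / 0.02)
         + rng.normal(0, 0.02, size=(n, 100)))
    return X, nkj

X_lib, y_lib = make_samples(75)           # a small calibration set
X_site, y_site = make_samples(40, 0.05)   # target site, slightly shifted band

model = PLSRegression(n_components=5)
for step in range(0, 21, 4):              # repopulate 4 site samples at a time
    X_cal = np.vstack([X_lib, X_site[:step]])
    y_cal = np.concatenate([y_lib, y_site[:step]])
    model.fit(X_cal, y_cal)
    pred = model.predict(X_site[20:]).ravel()   # held-out site samples
    rmsep = mean_squared_error(y_site[20:], pred) ** 0.5
    print(f"repopulated with {step} site samples: RMSEP = {rmsep:.3f}")
```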
Experimental toxicology: Issues of statistics, experimental design, and replication.
Briner, Wayne; Kirwan, Jeral
2017-01-01
The difficulty of replicating experiments has drawn considerable attention. Issues with replication occur for a variety of reasons, ranging from experimental design to laboratory errors to inappropriate statistical analysis. Here we review a variety of guidelines for the statistical analysis, design, and execution of experiments in toxicology. In general, replication can be improved by using hypothesis-driven experiments with adequate sample sizes, randomization, and blind data collection techniques. Copyright © 2016 Elsevier B.V. All rights reserved.
Chemical Characterization of an Envelope A Sample from Hanford Tank 241-AN-103
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hay, M.S.
2000-08-23
A whole tank composite sample from Hanford waste tank 241-AN-103 was received at the Savannah River Technology Center (SRTC) and chemically characterized. Prior to characterization the sample was diluted to ~5 M sodium concentration. The filtered supernatant liquid, the total dried solids of the diluted sample, and the washed insoluble solids obtained from filtration of the diluted sample were analyzed. A mass balance calculation of the three analyzed fractions indicates that the analytical results are relatively self-consistent for major components of the sample. However, some inconsistency was observed between results where more than one method of determination was employed and for species present in low concentrations. A direct comparison to previous analyses of material from tank 241-AN-103 was not possible because data were unavailable for diluted samples of tank 241-AN-103 whole tank composites. However, analytical data for other types of samples from 241-AN-103 were mathematically diluted and compare reasonably with the current results. Although the segments of the core samples used to prepare the sample received at SRTC were combined in an attempt to produce a whole tank composite, how well the results of the current analysis represent the actual composition of Hanford waste tank 241-AN-103 remains problematic due to the small sample size and the large size of the non-homogenized waste tank.
A few good reasons why species-area relationships do not work for parasites.
Strona, Giovanni; Fattorini, Simone
2014-01-01
Several studies have failed to find strong relationships between the biological and ecological features of a host and the number of parasite species it harbours. In particular, host body size and geographical range are generally only weak predictors of parasite species richness, especially when host phylogeny and sampling effort are taken into account. These results, however, have recently been challenged by a meta-analytic study that suggested a prominent role of host body size and range extent in determining parasite species richness (species-area relationships). Here we argue that, in general, results from meta-analyses should not discourage researchers from investigating the reasons for the lack of clear patterns, and we propose a few tentative explanations for the fact that species-area relationships are infrequent, or at least difficult to detect, in most host-parasite systems. The peculiar structure of host-parasite networks, the enemy release hypothesis, the possible discrepancy between host and parasite ranges, and the evolutionary tendency of parasites towards specialization may explain why the observed patterns often do not fit those predicted by species-area relationships.
Silvestre, Ellida de Aguiar; Schwarcz, Kaiser Dias; Grando, Carolina; de Campos, Jaqueline Bueno; Sujii, Patricia Sanae; Tambarussi, Evandro Vagner; Macrini, Camila Menezes Trindade; Pinheiro, José Baldin; Brancalion, Pedro Henrique Santin; Zucchi, Maria Imaculada
2018-03-16
The reproductive system of a tree species has substantial impact on genetic diversity and structure within and among natural populations. Such information should be considered when planning tree planting for forest restoration. Here, we describe the mating system and genetic diversity of an overexploited Neotropical tree, Myroxylon peruiferum L.f. (Fabaceae), sampled from a forest remnant (10 seed trees and 200 seeds) and assess whether the effective population size of nursery-grown seedlings (148 seedlings) is sufficient to prevent inbreeding depression in reintroduced populations. Genetic analyses were performed based on 8 microsatellite loci. M. peruiferum presented a mixed mating system with evidence of biparental inbreeding (t̂m - t̂s = 0.118). We found low levels of genetic diversity for M. peruiferum (allelic richness: 1.40 to 4.82; expected heterozygosity: 0.29 to 0.52). Based on Ne(v) within progeny, we suggest a sample size of 47 seed trees to achieve an effective population size of 100. The effective population sizes of the nursery-grown seedlings were much smaller (Ne = 27.54-34.86) than that recommended for short-term (Ne ≥ 100) population conservation. Therefore, to obtain a reasonable genetic representation of native tree species and prevent problems associated with inbreeding depression, seedling production for restoration purposes may require a much larger sampling effort than is currently used, a problem that is further complicated in species with a mixed mating system. This study emphasizes the need to integrate species reproductive biology into seedling production programs and to connect conservation genetics with ecological restoration.
NASA Astrophysics Data System (ADS)
Japuntich, Daniel A.; Franklin, Luke M.; Pui, David Y.; Kuehn, Thomas H.; Kim, Seong Chan; Viner, Andrew S.
2007-01-01
Two different air filter test methodologies are discussed and compared for challenges in the nano-sized particle range of 10-400 nm. Included in the discussion are test procedure development, factors affecting variability, and comparisons between results from the tests. One test system, which gives a discrete penetration for a given particle size, is the TSI 8160 Automated Filter Tester (updated and commercially available now as the TSI 3160), manufactured by TSI, Inc., Shoreview, MN. Another filter test system was developed utilizing a Scanning Mobility Particle Sizer (SMPS) to sample the particle size distributions downstream and upstream of an air filter to obtain a continuous percent filter penetration versus particle size curve. Filtration test results are shown for fiberglass filter paper of intermediate filtration efficiency. Test variables affecting the results of the TSI 8160 for NaCl and dioctyl phthalate (DOP) particles are discussed, including condensation particle counter stability and the sizing of the selected particle challenges. Filter testing using a TSI 3936 SMPS sampling upstream and downstream of a filter is also shown, with a discussion of test variables and the need for proper SMPS volume purging and a filter penetration correction procedure. For both tests, the penetration versus particle size curves for the filter media studied follow the theoretical Brownian capture model of decreasing penetration with decreasing particle diameter down to 10 nm, with no deviation. From these findings, the authors can say with reasonable confidence that there is no evidence of particle thermal rebound in this size range.
Effect of catalyst on deposition of vanadium oxide in plasma ambient
NASA Astrophysics Data System (ADS)
Singh, Megha; Kumar, Prabhat; Saini, Sujit K.; Reddy, G. B.
2018-05-01
In this paper, we have studied the effect of a catalyst (buffer layer) on the structure, morphology, crystallinity, and uniformity of nanostructured thin films deposited in nitrogen plasma ambient, keeping all other process parameters constant. Deposition was carried out with a novel process known as the Plasma Assisted Sublimation Process (PASP). The samples were then studied using SEM, TEM, HRTEM, and Raman spectroscopy. Structural analysis showed that samples deposited on a Ni layer were composed chiefly of α-V2O5, although minor amounts of other phases were present. Samples deposited on an Al catalyst layer revealed a different phase of V2O5, whereas the sample deposited on Ag was composed chiefly of a VO2±x phase. Further analysis revealed that the morphology of the samples is also affected by the catalyst: samples deposited on the Al and Ag layers tend to have reasonably well-defined geometry, while samples deposited on the Ni layer were irregular in shape and size. All the results corroborate each other well.
Metric variation and sexual dimorphism in the dentition of Ouranopithecus macedoniensis.
Schrein, Caitlin M
2006-04-01
The fossil sample attributed to the late Miocene hominoid taxon Ouranopithecus macedoniensis is characterized by a high degree of dental metric variation. As a result, some researchers support a multiple-species taxonomy for this sample. Other researchers do not think that the sample variation is too great to be accommodated within one species. This study examines variation and sexual dimorphism in mandibular canine and postcanine dental metrics of an Ouranopithecus sample. Bootstrapping (resampling with replacement) of extant hominoid dental metric data is performed to test the hypothesis that the coefficients of variation (CV) and the indices of sexual dimorphism (ISD) of the fossil sample are not significantly different from those of modern great apes. Variation and sexual dimorphism in Ouranopithecus M(1) dimensions were statistically different from those of all extant ape samples; however, most of the dental metrics of Ouranopithecus were neither more variable nor more sexually dimorphic than those of Gorilla and Pongo. Similarly high levels of mandibular molar variation are known to characterize other fossil hominoid species. The Ouranopithecus specimens are morphologically homogeneous and it is probable that all but one specimen included in this study are from a single population. It is unlikely that the sample includes specimens of two sympatric large-bodied hominoid species. For these reasons, a single-species hypothesis is not rejected for the Ouranopithecus macedoniensis material. Correlations between mandibular first molar tooth size dimorphism and body size dimorphism indicate that O. macedoniensis and other extinct hominoids were more sexually size dimorphic than any living great apes, which suggests that social behaviors and life history profiles of these species may have been different from those of living species.
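The bootstrapping logic here, drawing samples of fossil size from an extant taxon and asking how often the resampled CV reaches the fossil value, can be sketched in a few lines. The numbers below (a stand-in gorilla M1 dataset and a fossil CV of 0.12) are illustrative assumptions, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(5)

def cv(x):
    return np.std(x, ddof=1) / np.mean(x)

def bootstrap_p_value(extant, fossil_cv, n_fossil, n_boot=10000):
    """Share of bootstrap CVs (samples of fossil size drawn from the
    extant taxon) at least as large as the fossil sample's CV."""
    boot = [cv(rng.choice(extant, size=n_fossil, replace=True))
            for _ in range(n_boot)]
    return float(np.mean(np.array(boot) >= fossil_cv))

# Illustrative stand-in for real gorilla M1 measurements (mm).
gorilla_m1 = rng.normal(14.0, 1.3, size=80)
p = bootstrap_p_value(gorilla_m1, fossil_cv=0.12, n_fossil=12)
print(f"P(CV >= fossil | single species like Gorilla) = {p:.3f}")
```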
Dissection and lateral mounting of zebrafish embryos: analysis of spinal cord development.
Beck, Aaron P; Watt, Roland M; Bonner, Jennifer
2014-02-28
The zebrafish spinal cord is an effective investigative model for nervous system research for several reasons. First, genetic, transgenic and gene knockdown approaches can be utilized to examine the molecular mechanisms underlying nervous system development. Second, large clutches of developmentally synchronized embryos provide large experimental sample sizes. Third, the optical clarity of the zebrafish embryo permits researchers to visualize progenitor, glial, and neuronal populations. Although zebrafish embryos are transparent, specimen thickness can impede effective microscopic visualization. One reason for this is the tandem development of the spinal cord and overlying somite tissue. Another reason is the large yolk ball, which is still present during periods of early neurogenesis. In this article, we demonstrate microdissection and removal of the yolk in fixed embryos, which allows microscopic visualization while preserving surrounding somite tissue. We also demonstrate semipermanent mounting of zebrafish embryos. This permits observation of neurodevelopment in the dorso-ventral and anterior-posterior axes, as it preserves the three-dimensionality of the tissue.
Reasoning, Attitudes, and Learning: What matters in Introductory Physics?
NASA Astrophysics Data System (ADS)
Bateman, Melissa; Pyper, Brian
2009-05-01
Recent research has been revealing a connection between epistemological beliefs, reasoning ability, and conceptual understanding. Our project has been taking data collected from the Fall '08 and Winter '09 semesters to supplement existing data and strengthen the statistical value of our sample size. We administered four tests to selected introductory physics courses: the Epistemological Beliefs Assessment for Physical Science, the Lawson Classroom Test of Scientific Reasoning, the Force Concept Inventory, and the Conceptual Survey in Electricity and Magnetism. With these data we have been comparing test results to demographics to answer questions such as: Does gender affect how we learn physics? Does past physics experience affect how we learn physics? Does past math experience affect how we learn physics? And how do math background successes compare to physics background successes? As we answer these questions, we will be better prepared in the physics classroom and better able to identify the struggles of our students and the solutions to help them succeed.
Strategies for Improving Power in School-Randomized Studies of Professional Development.
Kelcey, Ben; Phelps, Geoffrey
2013-12-01
Group-randomized designs are well suited for studies of professional development because they can accommodate programs that are delivered to intact groups (e.g., schools), the collaborative nature of professional development, and extant teacher/school assignments. Though group designs may be theoretically favorable, prior evidence has suggested that they may be challenging to conduct in professional development studies because well-powered designs will typically require large sample sizes or expect large effect sizes. Using teacher knowledge outcomes in mathematics, we investigated when and the extent to which there is evidence that covariance adjustment on a pretest, teacher certification, or demographic covariates can reduce the sample size necessary to achieve reasonable power. Our analyses drew on multilevel models and outcomes in five different content areas for over 4,000 teachers and 2,000 schools. Using these estimates, we assessed the minimum detectable effect sizes for several school-randomized designs with and without covariance adjustment. The analyses suggested that teachers' knowledge is substantially clustered within schools in each of the five content areas and that covariance adjustment for a pretest or, to a lesser extent, teacher certification, has the potential to transform designs that are unreasonably large for professional development studies into viable studies. © The Author(s) 2014.
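The design calculation underlying this kind of analysis is the minimum detectable effect size (MDES) for a two-level cluster-randomized design, where school-level covariate R² directly shrinks the detectable effect. The sketch below uses the standard Bloom-style formula with normal quantiles (a slight approximation to the t-based version) and balanced assignment; the parameter values are illustrative assumptions, not the paper's estimates.

```python
from scipy import stats

def mdes(J, n, rho, r2_school=0.0, r2_teacher=0.0, alpha=0.05, power=0.80):
    """MDES for a two-level school-randomized design: J schools (half
    treated), n teachers per school, intraclass correlation rho, and
    covariates explaining r2 of the variance at each level."""
    m = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    var = (rho * (1 - r2_school) / (J / 4)
           + (1 - rho) * (1 - r2_teacher) / (J / 4 * n))
    return m * var ** 0.5

# With knowledge clustered at rho = 0.20, a pretest absorbing 60% of
# school-level variance shrinks the detectable effect noticeably.
print(round(mdes(J=40, n=5, rho=0.20), 2))                   # no covariates
print(round(mdes(J=40, n=5, rho=0.20, r2_school=0.60), 2))   # with pretest
```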
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herbold, E. B.; Walton, O.; Homel, M. A.
2015-10-26
This document serves as a final report on a small effort in which several improvements were added to the LLNL code GEODYN-L to develop Discrete Element Method (DEM) algorithms coupled to Lagrangian Finite Element (FE) solvers to investigate powder-bed formation problems for additive manufacturing. The results from these simulations will be assessed for inclusion as the initial conditions for Direct Metal Laser Sintering (DMLS) simulations performed with ALE3D. The algorithms were written and run on parallel computing platforms at LLNL. The total funding level was 3-4 weeks of an FTE split among two staff scientists and one post-doc. The DEM simulations emulated, as much as was feasible, the physical process of depositing a new layer of powder over a bed of existing powder. The DEM simulations utilized truncated size distributions spanning realistic size ranges, with a size distribution profile consistent with a realistic sample set. A minimum simulation sample size on the order of 40 particles square by 10 particles deep was utilized in these scoping studies in order to evaluate the potential effects of size-segregation variation with distance displaced in front of a screed blade. A reasonable method for evaluating the problem was developed and validated. Several simulations were performed to show the viability of the approach. Future investigations will focus on simulations investigating powder particle sizing and screed geometries.
Thompson, William L.; Miller, Amy E.; Mortenson, Dorothy C.; Woodward, Andrea
2011-01-01
Monitoring natural resources in Alaskan national parks is challenging because of their remoteness, limited accessibility, and high sampling costs. We describe an iterative, three-phased process for developing sampling designs, based on our efforts to establish a vegetation monitoring program in southwest Alaska. In the first phase, we defined a sampling frame based on land ownership and specific vegetated habitats within the park boundaries and used Path Distance analysis tools to create a GIS layer that delineated portions of each park that could be feasibly accessed for ground sampling. In the second phase, we used simulations based on landcover maps to identify the size and configuration of the ground sampling units (single plots or grids of plots) and to refine the areas to be potentially sampled. In the third phase, we used a second set of simulations to estimate the sample size and sampling frequency required to have a reasonable chance of detecting a minimum trend in vegetation cover for a specified time period and level of statistical confidence. Results of the first set of simulations indicated that a spatially balanced random sample of single plots from the most common landcover types yielded the most efficient sampling scheme. Results of the second set of simulations were compared with field data and indicated that we should be able to detect at least a 25% change in vegetation attributes over 31 years by sampling 8 or more plots per year every five years in focal landcover types. This approach would be especially useful in situations where ground sampling is restricted by access.
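The third-phase simulation can be prototyped by generating plot revisits under an assumed trend and error structure and counting how often a regression flags the decline. The sketch below assumes a 25% proportional decline over 30 years, 8 plots sampled every five years, and a simple independent-error model (the real analysis would include plot and year variance components); all values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def trend_power(n_plots=8, years=(0, 5, 10, 15, 20, 25, 30),
                total_change=-0.25, sd=0.15, n_sims=2000, alpha=0.10):
    """Power to detect a proportional change in mean cover when n_plots
    are resampled every five years (toy independent-error structure)."""
    base = 0.5                                 # assumed mean cover fraction
    slope = base * total_change / max(years)   # per-year change
    detected = 0
    for _ in range(n_sims):
        t = np.repeat(years, n_plots)
        y = base + slope * t + rng.normal(0, sd, size=t.size)
        res = stats.linregress(t, y)
        if res.pvalue < alpha and res.slope < 0:
            detected += 1
    return detected / n_sims

print(trend_power())
```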
Kopáni, Martin; Miglierini, Marcel; Lančok, Adriana; Dekan, Július; Čaplovicová, Mária; Jakubovský, Ján; Boča, Roman; Mrazova, Hedviga
2015-10-01
Iron is an essential element for fundamental cell functions and a catalyst for chemical reactions. Three samples extracted from the human spleen were investigated by scanning (SEM) and transmission electron microscopy (TEM), Mössbauer spectrometry (MS), and SQUID magnetometry. The sample with a diagnosis of hemosiderosis differs from the hereditary spherocytosis sample and the reference sample. SEM reveals iron-rich micrometer-sized aggregates of various structures: tiny fibrils in the hereditary spherocytosis sample and no fibrils in the hemosiderosis sample. TEM with electron diffraction showed hematite and magnetite particles from 2 to 6 μm in all samples. SQUID magnetometry shows different amounts of diamagnetic, paramagnetic and ferrimagnetic structures in the tissues. The MS results indicate a contribution of ferromagnetically split sextets in all investigated samples. Their occurrence indicates that at least part of each sample is magnetically ordered below the critical temperature. The iron accumulation process is different in hereditary spherocytosis and hemosiderosis. This fact may be the reason for the different iron crystallization.
Girod, Sabine C; Fassiotto, Magali; Menorca, Roseanne; Etzkowitz, Henry; Wren, Sherry M
2017-01-10
Faculty departure can present significant intellectual costs to an institution. The authors sought to identify the reasons for clinical and non-clinical faculty departures at one academic medical center (AMC). In May and June 2010, the authors surveyed 137 faculty members who left a west coast School of Medicine (SOM) between 1999 and 2009. In May and June 2015, the same survey was sent to 40 faculty members who left the SOM between 2010 and 2014, for a total sample size of 177 former faculty members. The survey probed work history and experience, reasons for departure, and satisfaction at the SOM versus the current workplace. Statistical analyses included Pearson's chi-square test of independence and independent-sample t-tests to understand quantitative differences between clinical and non-clinical respondents, as well as coding of qualitative open-ended responses. Eighty-eight faculty members responded (50%), including three who had since returned to the SOM. Overall, professional and advancement opportunities, salary concerns, and personal/family reasons were the three most cited factors for leaving. The average length of time at this SOM was shorter for faculty in clinical roles, who expressed lower workplace satisfaction and were more likely to perceive incongruence and inaccuracy in institutional expectations for their success than those in non-clinical roles. Clinical faculty respondents noted difficulty in balancing competing demands and navigating institutional expectations for advancement as reasons for leaving. AMCs may not be meeting faculty needs, especially those of faculty in clinical roles who balance multiple missions as clinicians, researchers, and educators. Institutions should address the challenges these faculty face in order to best recruit, retain, and advance faculty.
Autonomous reinforcement learning with experience replay.
Wawrzyński, Paweł; Tanwani, Ajay Kumar
2013-05-01
This paper considers the issues of efficiency and autonomy that are required to make reinforcement learning suitable for real-life control tasks. A real-time reinforcement learning algorithm is presented that repeatedly adjusts the control policy with the use of previously collected samples and autonomously estimates the appropriate step-sizes for the learning updates. The algorithm is based on the actor-critic with experience replay, whose step-sizes are determined on-line by an enhanced fixed-point algorithm for on-line neural network training. An experimental study with a simulated octopus arm and a half-cheetah model demonstrates the feasibility of the proposed algorithm for solving difficult learning control problems in an autonomous way within a reasonably short time. Copyright © 2012 Elsevier Ltd. All rights reserved.
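The reuse of previously collected samples rests on a replay buffer: transitions are stored once and sampled repeatedly for updates. The minimal sketch below shows that storage/sampling pattern only; it is a generic illustration, not the paper's actor-critic algorithm or its step-size estimation scheme.

```python
import random
from collections import deque

class ReplayBuffer:
    """Store transitions once, sample minibatches many times so each
    interaction with the plant is reused for several policy updates."""
    def __init__(self, capacity=10000):
        self.buf = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buf.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(list(self.buf), min(batch_size, len(self.buf)))

buf = ReplayBuffer()
for t in range(100):                 # stand-in for a control interaction loop
    buf.push(t, 0, 1.0, t + 1, False)
print(len(buf.sample(32)))           # one minibatch drawn for an update
```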
Cachelin, F M; Striegel-Moore, R H; Elder, K A
1998-01-01
Recently, a shift in obesity treatment away from emphasizing ideal weight loss goals to establishing realistic weight loss goals has been proposed; yet, what constitutes "realistic" weight loss for different populations is not clear. This study examined notions of realistic shape and weight as well as body size assessment in a large community-based sample of African-American, Asian, Hispanic, and white men and women. Participants were 1893 survey respondents who were all dieters and primarily overweight. Groups were compared on various variables of body image assessment using silhouette ratings. No significant race differences were found in silhouette ratings, nor in perceptions of realistic shape or reasonable weight loss. Realistic shape and weight ratings by both women and men were smaller than current shape and weight but larger than ideal shape and weight ratings. Compared with male dieters, female dieters considered greater weight loss to be realistic. Implications of the findings for the treatment of obesity are discussed.
Error simulation of paired-comparison-based scaling methods
NASA Astrophysics Data System (ADS)
Cui, Chengwu
2000-12-01
Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods, and without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors in scaled values derived from paired-comparison-based scaling methods are simulated with randomly introduced proportions of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors in the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors in actually scaled values of color image prints as measured by the method of paired comparison.
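The simulation loop is compact: generate choice proportions from a Thurstone Case V model, flip each judge's choice with some error probability, rescale via the inverse-normal transformation, and record the standard deviation of the scale errors. The sketch below implements that loop; stimulus spacing, error rate, and judge counts are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def scale_error_sd(n_stimuli=6, n_judges=20, error_rate=0.10, n_sims=500):
    """Average SD of Thurstone Case V scale errors when each judge's
    choice is flipped with probability error_rate (binomial errors)."""
    true = np.linspace(0.0, 2.0, n_stimuli)
    true -= true.mean()
    diffs = true[:, None] - true[None, :]
    p_correct = stats.norm.cdf(diffs)                  # Case V choice model
    p_eff = p_correct * (1 - error_rate) + (1 - p_correct) * error_rate
    sds = []
    for _ in range(n_sims):
        wins = rng.binomial(n_judges, p_eff)           # judges choosing i over j
        prop = np.clip(wins / n_judges, 0.02, 0.98)    # avoid infinite z-scores
        z = stats.norm.ppf(prop)
        scale = z.mean(axis=1)                         # Case V solution
        sds.append(np.std(scale - true))
    return float(np.mean(sds))

for n_judges in (10, 30, 100):                         # sampling-size effect
    print(n_judges, round(scale_error_sd(n_judges=n_judges), 3))
```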
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilmour, M.I.; McGee, J.; Duvall, R.M.
2007-07-01
Hundreds of epidemiological studies have shown that exposure to ambient particulate matter (PM) is associated with dose-dependent increases in morbidity and mortality. While early reports focused on PM less than 10 μm (PM10), numerous studies have since shown that the effects can occur with PM stratified into ultrafine (UF), fine (FI), and coarse (CO) size modes despite the fact that these materials differ significantly in both evolution and chemistry. Furthermore, the chemical makeup of these different size fractions can vary tremendously depending on location, meteorology, and source profile. For this reason, high-volume three-stage particle impactors with the capacity to collect UF, FI, and CO particles were deployed to four different locations in the United States (Seattle, WA; Salt Lake City, UT; Sterling Forest and South Bronx, NY), and weekly samples were collected for 1 mo in each place. The particles were extracted, assayed for a standardized battery of chemical components, and instilled into mouse lungs (female BALB/c) at doses of 25 and 100 μg. Eighteen hours later animals were euthanized and parameters of injury and inflammation were monitored in the bronchoalveolar lavage fluid and plasma. Of the four locations, the South Bronx coarse fraction was the most potent sample in both pulmonary and systemic biomarkers. Receptor source modeling on the PM2.5 samples showed that the South Bronx sample was heavily influenced by emissions from coal-fired power plants (31%) and mobile sources (22%). Further studies will assess how source profiles correlate with the observed effects for all locations and size fractions.
NASA Astrophysics Data System (ADS)
Shang, H.; Chen, L.; Bréon, F.-M.; Letu, H.; Li, S.; Wang, Z.; Su, L.
2015-07-01
The principles of the Polarization and Directionality of the Earth's Reflectance (POLDER) cloud droplet size retrieval require that clouds be horizontally homogeneous. Nevertheless, the retrieval is applied by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval, and then analyze which spatial resolution is potentially accessible from the measurements. Case studies show that sub-scale variability in the cloud droplet effective radius (CDR) can mislead both the CDR and effective variance (EV) retrievals. Nevertheless, sub-scale variations in EV and cloud optical thickness (COT) influence only the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval is accurate using limited observations and is largely independent of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, the measurements in the primary rainbow region (137-145°) are used to ensure accurate large-droplet (> 15 μm) retrievals and reduce the uncertainties caused by cloud heterogeneity. We applied the improved method to the POLDER global L1B data for June 2008 and compared the new CDR results with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets. The reason is that the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Lastly, a sub-scale retrieval case is analyzed, illustrating that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size parameters from POLDER measurements.
Wright, Mark H.; Tung, Chih-Wei; Zhao, Keyan; Reynolds, Andy; McCouch, Susan R.; Bustamante, Carlos D.
2010-01-01
Motivation: The development of new high-throughput genotyping products requires a significant investment in testing and training samples to evaluate and optimize the product before it can be used reliably on new samples. One reason for this is current methods for automated calling of genotypes are based on clustering approaches which require a large number of samples to be analyzed simultaneously, or an extensive training dataset to seed clusters. In systems where inbred samples are of primary interest, current clustering approaches perform poorly due to the inability to clearly identify a heterozygote cluster. Results: As part of the development of two custom single nucleotide polymorphism genotyping products for Oryza sativa (domestic rice), we have developed a new genotype calling algorithm called ‘ALCHEMY’ based on statistical modeling of the raw intensity data rather than modelless clustering. A novel feature of the model is the ability to estimate and incorporate inbreeding information on a per sample basis allowing accurate genotyping of both inbred and heterozygous samples even when analyzed simultaneously. Since clustering is not used explicitly, ALCHEMY performs well on small sample sizes with accuracy exceeding 99% with as few as 18 samples. Availability: ALCHEMY is available for both commercial and academic use free of charge and distributed under the GNU General Public License at http://alchemy.sourceforge.net/ Contact: mhw6@cornell.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20926420
Ibrahim, Mohamed R
2017-04-01
A survey, with a sample size of 224, was designed to include the different factors related to housing location choice, such as socioeconomic factors, housing characteristics, travel behavior, current self-selection factors, housing demand, and future location preferences. It comprises 16 questions, categorized into three sections: socioeconomic (5 questions), current dwelling unit characteristics (7 questions), and housing demand characteristics (4 questions). The first part, socioeconomic, covers basic information about the respondent, such as age, gender, marital status, employment, and car ownership. The second part, current dwelling unit characteristics, covers different aspects of the residential unit typology, financial aspects, and the travel behavior of the respondent. It includes the tenure type of the residential unit, an estimate of the unit price (in the case of ownership or renting), housing typologies, the main reason for choosing the unit, the modes of travel to work and the time to reach it (for those working), residential mobility in the last decade, and the ownership of any other residential units. The last part, housing demand characteristics, covers the size of the demand for a residential unit, the preference for living in a certain area and the reason for choosing it, and the preferred residential unit tenure. This survey is a representative sample of the population of Alexandria, Egypt. The data in this article are presented in: How do people select their residential locations in Egypt? The case of Alexandria; JCIT1757.
Okada, Kensuke; Hoshino, Takahiro
2017-04-01
In psychology, the reporting of variance-accounted-for effect size indices has been recommended and widely accepted through the movement away from null hypothesis significance testing. However, most researchers have paid insufficient attention to the fact that effect sizes depend on the choice of the number of levels and their ranges in experiments. Moreover, the functional form of how, and by how much, this choice affects the resultant effect size has not thus far been studied. We show that the relationship between the population effect size and the number and range of levels is given by an explicit function under reasonable assumptions. Counterintuitively, it is found that researchers may double or halve the resultant effect size simply by suitably choosing the number of levels and their ranges. Through a simulation study, we confirm that this relation also applies to sample effect size indices in much the same way. Therefore, the variance-accounted-for effect size is substantially affected by basic research design choices such as the number of levels. Simple cross-study comparisons and meta-analyses of variance-accounted-for effect sizes would generally be irrational unless differences in research designs are explicitly considered.
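The dependence is easy to see for a linear effect, where eta-squared equals slope² · Var(x) / (slope² · Var(x) + error variance), so widening the range of levels inflates the effect size with no change in the underlying effect. The sketch below demonstrates this numerically; the slope, noise level, and level placements are assumptions chosen for illustration, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(8)

def eta_squared(levels, n_per_level=10000, slope=1.0, sd=1.0):
    """Approximate population eta^2 for a linear effect y = slope*x + noise,
    using very large cells so sampling error is negligible."""
    x = np.repeat(levels, n_per_level)
    y = slope * x + rng.normal(0, sd, size=x.size)
    grand = y.mean()
    groups = [y[x == lv] for lv in levels]
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_total = ((y - grand) ** 2).sum()
    return ss_between / ss_total

print(round(eta_squared(np.array([-0.5, 0.5])), 2))   # narrow range of levels
print(round(eta_squared(np.array([-1.5, 1.5])), 2))   # same effect, wider range
```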
Wang, Sue-Jane; O'Neill, Robert T; Hung, Hm James
2010-10-01
The current practice for seeking genomically favorable patients in randomized controlled clinical trials is to use genomic convenience samples. We discuss the extent of imbalance, confounding, bias, design efficiency loss, type I error, and type II error that can occur in the evaluation of convenience samples, particularly when they are small; articulate statistical considerations for a reasonable sample size to minimize the chance of imbalance; and highlight the importance of replicating the subgroup finding in independent studies. Four case examples reflecting recent regulatory experiences are used to underscore the problems with convenience samples. The probability of imbalance for a pre-specified subgroup is provided to elucidate the sample size needed to minimize the chance of imbalance. We use an example drug development program to highlight the level of scientific rigor needed, with evidence replicated, for a pre-specified subgroup claim. The convenience samples evaluated ranged from 18% to 38% of the intent-to-treat samples, with sample sizes ranging from 100 to 5000 patients per arm. Baseline imbalance can occur with probability higher than 25%. Mild to moderate multiple confounders yielding the same directional bias in favor of the treated group can make the treatment groups incomparable at baseline and result in a false positive conclusion that there is a treatment difference. Conversely, if the same directional bias favors the placebo group or there is loss in design efficiency, the type II error can increase substantially. Pre-specification of a genomic subgroup hypothesis is useful only for some degree of type I error control. Complete ascertainment of genomic samples in a randomized controlled trial should be the first step to explore whether a favorable genomic patient subgroup suggests a treatment effect when there is no clear prior knowledge and understanding of how the mechanism of a drug target affects the clinical outcome of interest. When stratified randomization based on genomic biomarker status cannot be implemented in designing a pharmacogenomics confirmatory clinical trial, and there is one genomic biomarker prognostic for clinical response, then as a general rule of thumb a sample size of at least 100 patients may need to be considered for the lower-prevalence genomic subgroup to minimize the chance of an imbalance of 20% or more in the prevalence of the genomic marker. The sample size may need to be at least 150, 350, and 1350, respectively, if an imbalance of 15%, 10%, or 5% is of concern.
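The imbalance probabilities behind these rules of thumb can be checked by direct simulation: draw the marker count in each arm from a binomial and estimate the chance that the observed prevalences differ by at least the stated amount. The sketch below does this; the prevalence of 0.3 is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)

def prob_imbalance(n_per_arm, prevalence, threshold, n_sims=100000):
    """Chance that the genomic-marker prevalence observed in the two
    randomized arms differs by at least `threshold` (absolute)."""
    a = rng.binomial(n_per_arm, prevalence, size=n_sims) / n_per_arm
    b = rng.binomial(n_per_arm, prevalence, size=n_sims) / n_per_arm
    return float(np.mean(np.abs(a - b) >= threshold))

# Subgroup sizes vs. the chance of a >=20% prevalence imbalance.
for n in (25, 50, 100):
    print(n, round(prob_imbalance(n, prevalence=0.3, threshold=0.20), 3))
```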
Why eat at fast-food restaurants: reported reasons among frequent consumers.
Rydell, Sarah A; Harnack, Lisa J; Oakes, J Michael; Story, Mary; Jeffery, Robert W; French, Simone A
2008-12-01
A convenience sample of adolescents and adults who regularly eat at fast-food restaurants were recruited to participate in an experimental trial to examine the effect of nutrition labeling on meal choices. As part of this study, participants were asked to indicate how strongly they agreed or disagreed with 11 statements to assess reasons for eating at fast-food restaurants. Logistic regression was conducted to examine whether responses differed by demographic factors. The most frequently reported reasons for eating at fast-food restaurants were: fast food is quick (92%), restaurants are easy to get to (80%), and food tastes good (69%). The least frequently reported reasons were: eating fast food is a way of socializing with family and friends (33%), restaurants have nutritious foods to offer (21%), and restaurants are fun and entertaining (12%). Some differences were found with respect to the demographic factors examined. It appears that in order to reduce fast-food consumption, food and nutrition professionals need to identify alternative quick and convenient food sources. As motivation for eating at fast-food restaurants appears to differ somewhat by age, sex, education, employment status, and household size, tailored interventions could be considered.
A study of the dispersity of iron oxide and iron oxide-noble metal (Me = Pd, Pt) supported systems
NASA Astrophysics Data System (ADS)
Cherkezova-Zheleva, Z. P.; Shopska, M. G.; Krstić, J. B.; Jovanović, D. M.; Mitov, I. G.; Kadinov, G. B.
2007-09-01
Samples of one-component (Fe) and two-component (Fe-Pd and Fe-Pt) catalysts were prepared by incipient wetness impregnation of four different supports: TiO2 (anatase), γ-Al2O3, activated carbon, and diatomite. The chosen synthesis conditions resulted in the formation of nanosized supported phases: iron oxide (in the one-component samples) or iron oxide-noble metal (in the two-component ones). Different agglomeration degrees of these phases were obtained as a result of thermal treatment. An ultradisperse size of the supported phase was maintained in some samples, while partial agglomeration occurred in others, giving rise to nearly bidisperse (ultra- and highly disperse) supported particles. The different textures of the supports used and their chemical compositions are the reasons for the different stability of the nanosized supported phases. The samples were tested as heterogeneous catalysts in the total benzene oxidation reaction.
Using Relational Reasoning to Learn about Scientific Phenomena at Unfamiliar Scales
ERIC Educational Resources Information Center
Resnick, Ilyse; Davatzes, Alexandra; Newcombe, Nora S.; Shipley, Thomas F.
2016-01-01
Many scientific theories and discoveries involve reasoning about extreme scales, removed from human experience, such as time in geology, size in nanoscience. Thus, understanding scale is central to science, technology, engineering, and mathematics. Unfortunately, novices have trouble understanding and comparing sizes of unfamiliar large and small…
Using Relational Reasoning to Learn about Scientific Phenomena at Unfamiliar Scales
ERIC Educational Resources Information Center
Resnick, Ilyse; Davatzes, Alexandra; Newcombe, Nora S.; Shipley, Thomas F.
2017-01-01
Many scientific theories and discoveries involve reasoning about extreme scales, removed from human experience, such as time in geology and size in nanoscience. Thus, understanding scale is central to science, technology, engineering, and mathematics. Unfortunately, novices have trouble understanding and comparing sizes of unfamiliar large and…
Soil carbon inventories under a bioenergy crop (switchgrass): Measurement limitations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garten, C.T. Jr.; Wullschleger, S.D.
Approximately 5 yr after planting, coarse root carbon (C) and soil organic C (SOC) inventories were compared under different types of plant cover at four switchgrass (Panicum virgatum L.) production field trials in the southeastern USA. There was significantly more coarse root C under switchgrass (Alamo variety) and forest cover than under tall fescue (Festuca arundinacea Schreb.), corn (Zea mays L.), or native pastures of mixed grasses. Inventories of SOC under switchgrass were not significantly greater than SOC inventories under other plant covers. At some locations the statistical power associated with ANOVA of SOC inventories was low, which raised questions about whether differences in SOC could be detected statistically. A minimum detectable difference (MDD) for SOC inventories was calculated. The MDD is the smallest detectable difference between treatment means once the variation, significance level, statistical power, and sample size are specified. The analysis indicated that a difference of ~50 mg SOC/cm², or 5 Mg SOC/ha, which is ~10 to 15% of existing SOC, could be detected with reasonable sample sizes and good statistical power. The smallest difference in SOC inventories that can be detected, and only with exceedingly large sample sizes, is ~2 to 3%. These measurement limitations have implications for monitoring and verification of proposals to ameliorate increasing global atmospheric CO₂ concentrations by sequestering C in soils.
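The MDD described here follows from the standard two-sample power calculation: MDD = (z_{1-α/2} + z_{1-β}) · sqrt(2σ²/n). The sketch below evaluates it over a range of sample sizes; the between-plot standard deviation of 60 mg SOC/cm² is an assumed value chosen so the output roughly matches the ~50 mg figure quoted above, not a number from the study.

```python
from scipy import stats

def mdd(sd, n_per_group, alpha=0.05, power=0.90):
    """Minimum detectable difference between two treatment means for a
    two-sample comparison (normal approximation to the ANOVA contrast)."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return z * (2 * sd ** 2 / n_per_group) ** 0.5

# Detectable SOC differences shrink slowly with n: halving the MDD
# requires roughly quadrupling the sample size.
for n in (10, 30, 100, 1000):
    print(n, round(mdd(sd=60.0, n_per_group=n), 1), "mg SOC/cm^2")
```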
Scale and Sampling Effects on Floristic Quality
2016-01-01
Floristic Quality Assessment (FQA) is increasingly influential for making land management decisions, for directing conservation policy, and for research. But the basic ecological properties and limitations of its metrics are ill defined and not well understood, especially those related to sample methods and scale. Nested plot data from a remnant tallgrass prairie sampled annually over a 12-year period were used to investigate FQA properties associated with species detection rates, species misidentification rates, sample year, and sample grain/area. Plot size had no apparent effect on Mean C (an area's average Floristic Quality level), nor did species detection rates above 65%. Simulated species misidentifications affected Mean C values only at rates greater than 10% in large plots, when the replaced species were randomly drawn from the broader county-wide species pool. Finally, FQA values were stable over the 12-year study, meaning that there was no evidence that the metrics exhibit year effects. The FQA metric Mean C is thus demonstrated to be robust to varied sample methodologies related to sample intensity (plot size, species detection rate), as well as sample year. These results will make FQA measures even more appealing for informing land-use decisions, policy, and research for two reasons: 1) the sampling effort needed to generate accurate and consistent site assessments with FQA measures is shown to be far lower than previously assumed, and 2) the stable properties and consistent performance of the metrics with respect to sample methods will allow a remarkable level of comparability of FQA values from different sites and datasets compared to other commonly used ecological metrics. PMID:27489959
Garcia-Alvarez, Alicia; Mila-Villarroel, Raimon; Ribas-Barba, Lourdes; Egan, Bernadette; Badea, Mihaela; Maggi, Franco M; Salmenhaara, Maija; Restani, Patrizia; Serra-Majem, Lluis
2016-07-28
Obesity is increasing worldwide and weight-control strategies, including the consumption of plant food supplements (PFS), are proliferating. This article identifies the herbal ingredients in PFS consumed for weight control and by overweight/obese dieters in six European countries, and explores the relationship between their consumption and self-reported BMI. Data used were a subset from the PlantLIBRA PFS Consumer Survey 2011-2012, a retrospective survey of 2359 PFS consumers. The survey used a bespoke frequency-of-PFS-usage questionnaire. Analyses were performed in two consumer subsamples: 1) respondents taking the products for "body weight reasons", and 2) "dieters for overweight/obesity", to identify the herbal ingredients consumed for these reasons. The relationship between the 5 most consumed herbal ingredients and self-reported BMI in groups 1 and 2 was explored by comparing the BMI proportions of consumers vs. non-consumers (using the Chi-squared test). 252 PFS (8.8 %) were consumed for "body weight reasons" (by 240 PFS consumers); 112 PFS consumers (4.8 %) were "dieting for overweight/obesity". Spain was the country where consuming herbal ingredients for body weight control and dieting was most popular. Artichoke was the most consumed herbal ingredient. Considering only the 5 top products consumed by those who responded "body weight", and using the total survey sample, a greater proportion of BMI ≥ 25 was observed among consumers of PFS containing artichoke and green tea than among non-consumers (58.4 % vs. 49.1 % and 63.2 % vs. 49.7 %, respectively). Considering only the 5 top products consumed by "dieters" and using only the "dieters" sample, a lower proportion of BMI ≥ 25 was observed among pineapple-containing PFS consumers (38.5 % vs. 81.5 %); however, when using the entire survey sample, a greater proportion of BMI ≥ 25 was observed among artichoke-containing PFS consumers (58.4 % vs. 49.1 %). Comparison with other studies is limited by the scarcity of publications evaluating the use of weight-loss supplements at the population level. Nevertheless, every hint is important for identifying the self-treatment strategies used by overweight/obese individuals in European countries. Although limited by a small sample size, our study represents a first attempt at analysing such data in six EU countries. Our findings should encourage further long-term, large-sample studies on this topic, ideally conducted in the general population.
Perceptual reasoning predicts handwriting impairments in adolescents with autism
Fuentes, Christina T.; Mostofsky, Stewart H.; Bastian, Amy J.
2010-01-01
Background: We have previously shown that children with autism spectrum disorder (ASD) have specific handwriting deficits consisting of poor form, and that these deficits are predicted by their motor abilities. It is not known whether the same handwriting impairments persist into adolescence and whether they remain linked to motor deficits. Methods: A case-control study of handwriting samples from adolescents with and without ASD was performed using the Minnesota Handwriting Assessment. Samples were scored on an individual letter basis in 5 categories: legibility, form, alignment, size, and spacing. Subjects were also administered an intelligence test and the Physical and Neurological Examination for Subtle (Motor) Signs (PANESS). Results: We found that adolescents with ASD, like children, show overall worse performance on a handwriting task than do age- and intelligence-matched controls. Also comparable to children, adolescents with ASD showed motor impairments relative to controls. However, adolescents with ASD differ from children in that Perceptual Reasoning Indices were significantly predictive of handwriting performance whereas measures of motor skills were not. Conclusions: Like children with ASD, adolescents with ASD have poor handwriting quality relative to controls. Despite still demonstrating motor impairments, in adolescents perceptual reasoning is the main predictor of handwriting performance, perhaps reflecting subjects' varied abilities to learn strategies to compensate for their motor impairments. GLOSSARY ASD = autism spectrum disorder; DSM-IV = Diagnostic and Statistical Manual of Mental Disorders, 4th edition; PANESS = Physical and Neurological Examination for Subtle (Motor) Signs; PRI = Perceptual Reasoning Index; WASI = Wechsler Abbreviated Scale of Intelligence; WISC = Wechsler Intelligence Scale for Children IV. PMID:21079184
NASA Astrophysics Data System (ADS)
Jiang, Jingkun; Chen, Da-Ren; Biswas, Pratim
2007-07-01
A flame aerosol reactor (FLAR) was developed to synthesize nanoparticles with desired properties (crystal phase and size) that could be independently controlled. The methodology was demonstrated for TiO2 nanoparticles, and this is the first time that large sets of samples with the same size but different crystal phases (six different ratios of anatase to rutile in this work) were synthesized. The degree of TiO2 nanoparticle agglomeration was determined by comparing the primary particle size distribution measured by scanning electron microscopy (SEM) to the mobility-based particle size distribution measured by online scanning mobility particle spectrometry (SMPS). By controlling the flame aerosol reactor conditions, both spherical unagglomerated particles and highly agglomerated particles were produced. To produce monodisperse nanoparticles, a high throughput multi-stage differential mobility analyser (MDMA) was used in series with the flame aerosol reactor. Nearly monodisperse nanoparticles (geometric standard deviation less than 1.05) could be collected in sufficient mass quantities (of the order of 10 mg) in reasonable time (1 h) that could be used in other studies such as determination of functionality or biological effects as a function of size.
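The "nearly monodisperse" criterion above (geometric standard deviation below 1.05) can be checked directly from measured diameters, since the GSD of a lognormal size distribution is the exponential of the standard deviation of the log-diameters. A minimal sketch with invented diameters:

```python
# Illustrative GSD check for a particle size sample; diameters are invented.
import numpy as np

diameters_nm = np.array([48.0, 50.0, 51.0, 49.5, 50.5, 52.0, 49.0])
log_d = np.log(diameters_nm)
gsd = np.exp(log_d.std(ddof=1))   # geometric standard deviation
print(round(gsd, 3))              # ~1.03, within the <1.05 criterion
```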
Study of the fragment size distribution in dynamic fragmentation of laser shock-loaded tin
NASA Astrophysics Data System (ADS)
He, Weihua; Xin, Jianting; Chu, Genbai; Shui, Min; Xi, Tao; Zhao, Yongqiang; Gu, Yuqiu
2017-06-01
Characterizing the distribution of fragment sizes produced by dynamic fragmentation is very important both for fundamental science, such as predicting the dynamic response of materials, and for a variety of engineering applications. However, only a few data sets on fragment mass or size have been obtained, owing to the great challenge of dynamic measurement. This paper focuses on the fragment size distribution from the dynamic fragmentation of laser shock-loaded metal. Ejecta from a tin sample with a wedge-shaped groove in the free surface were collected with a soft-recovery technique. Using post-shot analysis techniques, including X-ray micro-tomography and an improved watershed method, the fragments could be well resolved. To characterize their size distribution, a random geometric statistics method based on Poisson mixtures was derived for the dynamic heterogeneous fragmentation problem, which leads to a linear combination of exponential distributions. Finally, we examined the size distribution of laser shock-loaded tin with the derived model and compared it with other state-of-the-art models. The comparisons show that our proposed model provides a more reasonable fit for laser shock-loaded metal.
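A hedged sketch of fitting the kind of model derived above, a linear combination of two exponential distributions, to measured fragment sizes by maximum likelihood. The parameterization and the synthetic data are illustrative assumptions, not the authors' exact formulation.

```python
# Fit a two-component exponential mixture to fragment sizes (illustrative).
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, sizes):
    w, s1, s2 = params  # mixture weight and two characteristic sizes
    pdf = w / s1 * np.exp(-sizes / s1) + (1 - w) / s2 * np.exp(-sizes / s2)
    return -np.sum(np.log(pdf))

rng = np.random.default_rng(0)
sizes = np.concatenate([rng.exponential(5.0, 400),    # fine fragments (um)
                        rng.exponential(40.0, 100)])  # coarse fragments (um)

result = minimize(neg_log_likelihood, x0=[0.5, 2.0, 20.0], args=(sizes,),
                  bounds=[(0.01, 0.99), (0.1, None), (0.1, None)])
print(result.x)  # fitted weight and characteristic sizes
```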
Gaertner, Beate; Seitz, Ina; Fuchs, Judith; Busch, Markus A; Holzhausen, Martin; Martus, Peter; Scheidt-Nave, Christa
2016-01-19
Public health monitoring depends on valid health and disability estimates in the population 65+ years. This is hampered by high non-participation rates in this age group, and there is limited insight into the size and direction of potential baseline selection bias. We analyzed baseline non-participation in a register-based random sample of 1481 inner-city residents 65+ years invited to a health examination survey, drawing on demographics available for the entire sample, self-report information where available, and reasons for non-participation. One year after recruitment, non-responders were revisited to assess their reasons. Five groups defined by participation status were differentiated: participants (N = 299), persons who had died or moved (N = 173), those who declined participation but answered a short questionnaire (N = 384), those who declined both participation and the short questionnaire (N = 324), and non-responders (N = 301). The results confirm substantial baseline selection bias, with significant underrepresentation of persons 85+ years, persons in residential care or from disadvantaged neighborhoods, and persons with lower education, foreign citizenship, or lower health-related quality of life. Finally, reasons for non-participation could be identified for 78% of all non-participants, including 183 non-responders. A diversity of health problems and barriers to participation exists among non-participants. Innovative study designs are needed for public health monitoring in aging populations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... public. Meetings which are completely or partly open to the public shall be held at reasonable times and at such a place that is reasonably accessible to the public. The size of the meeting room should be determined by such factors as the size of the committee, the number of members of the public who could...
Code of Federal Regulations, 2012 CFR
2012-10-01
... public. Meetings which are completely or partly open to the public shall be held at reasonable times and at such a place that is reasonably accessible to the public. The size of the meeting room should be determined by such factors as the size of the committee, the number of members of the public who could...
Code of Federal Regulations, 2010 CFR
2010-10-01
... public. Meetings which are completely or partly open to the public shall be held at reasonable times and at such a place that is reasonably accessible to the public. The size of the meeting room should be determined by such factors as the size of the committee, the number of members of the public who could...
Code of Federal Regulations, 2011 CFR
2011-10-01
... public. Meetings which are completely or partly open to the public shall be held at reasonable times and at such a place that is reasonably accessible to the public. The size of the meeting room should be determined by such factors as the size of the committee, the number of members of the public who could...
Impairments of colour vision induced by organic solvents: a meta-analysis study.
Paramei, Galina V; Meyer-Baron, Monika; Seeber, Andreas
2004-09-01
The impairment of colour discrimination induced by occupational exposure to toluene, styrene and mixtures of organic solvents is reviewed and analysed using a meta-analytical approach. Thirty-nine studies were surveyed, covering a wide range of exposure conditions. Those studies using the Lanthony Panel D-15 desaturated test (D-15d) were further considered. Of these, 15 samples provided the data on colour discrimination ability (Colour Confusion Index, CCI) and exposure levels required for the meta-analysis. In accordance with previously reported higher CCI values for the exposed groups, the computations yielded positive effect sizes for 13 of the 15 samples, indicating that in the great majority of the studies the exposed groups showed inferior colour discrimination. However, the meta-analysis showed great variation in effect sizes across the studies. Possible reasons for inconsistency among the reported findings are discussed. These pertain to exposure-related parameters, as well as to confounders such as conditions of test administration and characteristics of subject samples. These factors vary considerably among the studies and might have contributed greatly to the divergence in measured colour vision capacity, thereby obscuring consistent effects of organic solvents on colour discrimination.
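The meta-analytic step described above can be illustrated with a short sketch that computes a standardized effect size (here Hedges' g, one common choice) for each exposed-vs-control CCI comparison and pools the effects with inverse-variance weights; all numbers are invented for demonstration.

```python
# Illustrative effect-size computation and fixed-effect pooling.
import numpy as np

def hedges_g(mean_exposed, mean_control, sd_pooled, n_exposed, n_control):
    d = (mean_exposed - mean_control) / sd_pooled
    correction = 1 - 3 / (4 * (n_exposed + n_control) - 9)  # small-sample bias
    return d * correction

def pooled_effect(effects, variances):
    weights = 1 / np.asarray(variances)
    return np.sum(weights * np.asarray(effects)) / np.sum(weights)

g1 = hedges_g(1.25, 1.10, 0.20, 30, 30)   # hypothetical study 1 (CCI values)
g2 = hedges_g(1.30, 1.12, 0.25, 45, 40)   # hypothetical study 2
print(pooled_effect([g1, g2], [0.07, 0.05]))
```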
Characterization of electrokinetic gating valve in microfluidic channels.
Zhang, Guiseng; Du, Wei; Liu, Bi-Feng; Hisamoto, Hideaki; Terabe, Shigeru
2007-02-12
Electrokinetic gating, functioning as a micro-valve, has been widely employed in microfluidic chips for sample injection and flow switching. Investigating its valving performance is fundamentally important for microfluidics and microfluidics-based chemical analysis. In this paper, the electrokinetic gating valve in microchannels was evaluated using an optical imaging technique. Microflow profiles at the channel junction were examined, revealing that molecular diffusion played a significant role in valve failure, which can cause analyte leakage during sample injection. Due to diffusion, the analyte crossed the interface between the analyte flow and the gating flow, forming a comet-tail-like diffusion area at the channel junction. Theoretical calculation and experimental evidence indicate that the size of this area is related to the diffusion coefficient and the velocity of the analytes. Additionally, molecular diffusion is also believed to be a source of sampling bias in gated injection.
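A back-of-envelope sketch of the scaling noted above: the lateral spread of the diffusion zone at the junction grows with the analyte diffusion coefficient and shrinks with flow velocity. The sqrt(2Dt) estimate and all values are illustrative assumptions, not the paper's calculation.

```python
# Rough diffusion-zone width at a channel junction (illustrative numbers).
import math

def diffusion_zone_width(diff_coeff, junction_length, velocity):
    """Approximate lateral spread w ~ sqrt(2 D t), with residence time
    t = L / v across a junction of length L."""
    residence_time = junction_length / velocity
    return math.sqrt(2 * diff_coeff * residence_time)

# e.g. a small dye molecule, D ~ 5e-10 m^2/s, 100 um junction, 1 mm/s flow
print(diffusion_zone_width(5e-10, 100e-6, 1e-3))  # width in metres
```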
Research on optimal DEM cell size for 3D visualization of loess terraces
NASA Astrophysics Data System (ADS)
Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei
2009-10-01
In order to represent complex artificial terrains like the loess terraces of Shanxi Province in northwest China, a new 3D visual method, the Terraces Elevation Incremental Visual Method (TEIVM), is put forward. 406 elevation points and 14 enclosed constrained lines were sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines were used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. In order to visualize the loess terraces well with an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs were converted to grid-based DEMs (G-DEMs) using different combinations of cell size and EIV with the Bilinear Interpolation Method (BIM). Our case study shows that the new visual method can visualize the loess terrace steps very well when the combination of cell size and EIV is reasonable. The optimal combination is a cell size of 1 m and an EIV of 6 m. The results also show that the cell size should be smaller than half of both the terraces' average width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the terraces' average height. The TEIVM and the results above are of great significance for the highly refined visualization of artificial terrains like loess terraces.
Soft γ-ray selected radio galaxies: favouring giant size discovery
NASA Astrophysics Data System (ADS)
Bassani, L.; Venturi, T.; Molina, M.; Malizia, A.; Dallacasa, D.; Panessa, F.; Bazzano, A.; Ubertini, P.
2016-09-01
Using the recent INTEGRAL/IBIS and Swift/BAT surveys, we have extracted a sample of 64 confirmed plus three candidate radio galaxies selected in the soft gamma-ray band. The sample covers all optical classes and is dominated by objects showing a Fanaroff-Riley type II radio morphology; a large fraction (70 per cent) of the sample is made of `radiative mode' or high-excitation radio galaxies. We measured the source size on images from the NRAO VLA Sky Survey, the Faint Images of the Radio Sky at Twenty-cm and the Sydney University Molonglo Sky Survey, and compared our findings with data in the literature, obtaining a good match. Surprisingly, we found that the soft gamma-ray selection favours the detection of large radio galaxies: 60 per cent of objects in the sample have sizes greater than 0.4 Mpc, while around 22 per cent reach dimensions above 0.7 Mpc, at which point they are classified as giant radio galaxies (GRGs), the largest and most energetic single entities in the Universe. Their fraction among soft gamma-ray selected radio galaxies is significantly larger than typically found in radio surveys, where only a few per cent of objects (1-6 per cent) are GRGs. This may partly be due to observational biases affecting radio surveys more than soft gamma-ray surveys, thus disfavouring the detection of GRGs at lower frequencies. The main reasons and/or conditions leading to the formation of these large radio structures are still unclear, with many parameters such as high jet power, long activity time and the surrounding environment all playing a role; the first two may be linked to the type of active galactic nucleus discussed in this work and partly explain the high fraction of GRGs found in the present sample. Our result suggests that high energy surveys may be a more efficient way than radio surveys to find these peculiar objects.
Altitudinal variation in age and body size in Yunnan pond frog (Pelophylax pleuraden).
Lou, Shang Ling; Jin, Long; Liu, Yan Hong; Mi, Zhi Ping; Tao, Gang; Tang, Yu Mei; Liao, Wen Bo
2012-08-01
Large-scale systematic patterns of body size are a basic concern of evolutionary biology. Identifying body size variation along altitudinal gradients may help us to understand the evolution of animal life histories. In this study, we investigated altitudinal variation in body size, age and growth rate in the Chinese endemic frog Pelophylax pleuraden. Data sampled from five populations covering an altitudinal span of 1413 to 1935 m in Sichuan province revealed that body size did not co-vary with altitude, thus not following Bergmann's rule. Average adult SVL differed significantly among populations in males, but not in females. For both sexes, average adult age differed significantly among populations. Post-metamorphic growth rate did not co-vary with altitude, and females grew faster than males in all populations. When controlling for the effect of age, body size did not differ among populations in either sex, suggesting that age did not drive the variation in body size among populations. For females, other factors, such as the allocation of energy between growth and reproduction, may have eliminated the effect of age on body size. In our view, the major reason for body size variation among populations in male frogs may be related to individual longevity. Our findings also suggest that factors other than age and growth rate may contribute to size differences among populations.
Young, Mariel; Johannesdottir, Fjola; Poole, Ken; Shaw, Colin; Stock, J T
2018-02-01
Femoral head diameter is commonly used to estimate body mass from the skeleton. The three most frequently employed methods, designed by Ruff, Grine, and McHenry, were developed using different populations to address different research questions. They were not specifically designed for application to female remains, and their accuracy for this purpose has rarely been assessed or compared in living populations. This study analyzes the accuracy of these methods using a sample of modern British women through the use of pelvic CT scans (n = 97) and corresponding information about the individuals' known height and weight. Results showed that all methods provided reasonably accurate body mass estimates (average percent prediction errors under 20%) for the normal weight and overweight subsamples, but were inaccurate for the obese and underweight subsamples (average percent prediction errors over 20%). When women of all body mass categories were combined, the methods provided reasonable estimates (average percent prediction errors between 16 and 18%). The results demonstrate that different methods provide more accurate results within specific body mass index (BMI) ranges. The McHenry Equation provided the most accurate estimation for women of small body size, while the original Ruff Equation is most likely to be accurate if the individual was obese or severely obese. The refined Ruff Equation was the most accurate predictor of body mass on average for the entire sample, indicating that it should be utilized when there is no knowledge of the individual's body size or if the individual is assumed to be of a normal body size. The study also revealed a correlation between pubis length and body mass, and an equation for body mass estimation using pubis length was accurate in a dummy sample, suggesting that pubis length can also be used to acquire reliable body mass estimates. This has implications for how we interpret body mass in fossil hominins and has particular relevance to the interpretation of the long pubic ramus that is characteristic of Neandertals. Copyright © 2017 Elsevier Ltd. All rights reserved.
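The accuracy measure used above, average percent prediction error, is easy to make concrete. In the sketch below, the estimation equation is a generic linear placeholder, not one of the Ruff, Grine, or McHenry formulae, and all data are invented.

```python
# Average percent prediction error for a hypothetical body-mass equation.
import numpy as np

def body_mass_estimate(femoral_head_mm, slope=2.2, intercept=-35.0):
    return slope * femoral_head_mm + intercept   # hypothetical linear form

heads = np.array([42.0, 44.5, 40.8, 46.1])       # femoral head diameters (mm)
known = np.array([60.0, 66.0, 54.0, 70.0])       # known body mass (kg)

estimated = body_mass_estimate(heads)
ppe = np.abs(estimated - known) / known * 100.0  # percent prediction error
print(ppe.mean())                                # about 4% for these data
```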
Gebler, J.B.
2004-01-01
The related topics of spatial variability of aquatic invertebrate community metrics, implications of spatial patterns of metric values for distributions of aquatic invertebrate communities, and ramifications of natural variability for the detection of human perturbations were investigated. Four metrics commonly used for stream assessment were computed for 9 stream reaches within a fairly homogeneous, minimally impaired stream segment of the San Pedro River, Arizona. Metric variability was assessed for differing sampling scenarios using simple permutation procedures. Spatial patterns of metric values suggest that aquatic invertebrate communities are patchily distributed on subsegment and segment scales, which causes metric variability. Wide ranges of metric values resulted in wide ranges of metric coefficients of variation (CVs) and minimum detectable differences (MDDs), and both CVs and MDDs often increased as sample size (number of reaches) increased, suggesting that any particular set of sampling reaches could yield misleading estimates of population parameters and of the effects that can be detected. Mean metric variabilities were substantial, with the result that only fairly large differences in metrics would be declared significant at α = 0.05 and β = 0.20. The number of reaches required to obtain MDDs of 10% and 20% varied with significance level and power, and differed for different metrics, but was generally large, ranging into tens and hundreds of reaches. Study results suggest that metric values from one or a small number of stream reaches may not be adequate to represent a stream segment, depending on the effect sizes of interest, and that larger sample sizes are necessary to obtain reasonable estimates of metrics and sample statistics. For bioassessment to progress, spatial variability may need to be investigated in many systems and should be considered when designing studies and interpreting data.
Huang, Haijian; Wang, Xing; Tervoort, Elena; Zeng, Guobo; Liu, Tian; Chen, Xi; Sologubenko, Alla; Niederberger, Markus
2018-03-27
A general method for preparing metal oxide nanoparticles with highly disordered crystal structure and for processing them into stable aqueous dispersions is presented. With these nanoparticles as building blocks, a series of nanoparticles@reduced graphene oxide (rGO) composite aerogels are fabricated and directly used as high-power anodes for lithium-ion hybrid supercapacitors (Li-HSCs). To clarify the effect of the degree of disorder, control samples of crystalline nanoparticles with similar particle size are prepared. The results indicate that the structurally disordered samples show a significantly enhanced electrochemical performance compared to their crystalline counterparts. In particular, structurally disordered NixFeyOz@rGO delivers a capacity of 388 mAh g⁻¹ at 5 A g⁻¹, which is 6 times that of the crystalline sample. Disordered NixFeyOz@rGO is taken as an example to study the reasons for the enhanced performance. Compared with the crystalline sample, density functional theory calculations reveal a smaller volume expansion during Li⁺ insertion for the structurally disordered NixFeyOz nanoparticles, and they are found to exhibit larger pseudocapacitive effects. Combined with an activated carbon (AC) cathode, full-cell tests of the lithium-ion hybrid supercapacitors are performed, demonstrating that the structurally disordered metal oxide nanoparticles@rGO||AC hybrid systems deliver high energy and power densities within the voltage range of 1.0-4.0 V. These results indicate that structurally disordered nanomaterials might be interesting candidates for exploring high-power anodes for Li-HSCs.
Horigan, G; Davies, M; Findlay-White, F; Chaney, D; Coates, V
2017-01-01
To identify the reasons why those offered a place on diabetes education programmes declined the opportunity. It is well established that diabetes education is critical to optimum diabetes care; it improves metabolic control, prevents complications, improves quality of life and empowers people to make informed choices to manage their condition. Despite the significant clinical and personal rewards offered by diabetes education, programmes are underused, with a significant proportion of patients choosing not to attend. A systematic search of the following databases was conducted for the period from 2005-2015: Medline; EMBASE; Scopus; CINAHL; and PsycINFO. Studies that met the inclusion criteria focusing on patient-reported reasons for non-attendance at structured diabetes education were selected. A total of 12 studies spanning quantitative and qualitative methodologies were included. The selected studies were published in Europe, USA, Pakistan, Canada and India, with a total sample size of 2260 people. Two broad categories of non-attender were identified: 1) those who could not attend for logistical, medical or financial reasons (e.g. timing, costs or existing comorbidities) and 2) those who would not attend because they perceived no benefit from doing so, felt they had sufficient knowledge already or had emotional and cultural reasons (e.g. no perceived problem, denial or negative feelings towards education). Diabetes education was declined for many reasons, and the range of expressed reasons was more diverse and complex than anticipated. New and innovative methods of delivering diabetes education are required which address the needs of people with diabetes whilst maintaining quality and efficiency. © 2016 Diabetes UK.
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Raad, Markus; de Rond, Tristan; Rübel, Oliver
Mass spectrometry imaging (MSI) has primarily been applied in localizing biomolecules within biological matrices. Although well-suited, the application of MSI to comparing thousands of spatially defined spotted samples has been limited. One reason for this is a lack of suitable and accessible data processing tools for the analysis of large arrayed MSI sample sets. In this paper, we present the OpenMSI Arrayed Analysis Toolkit (OMAAT), a software package that addresses the challenges of analyzing spatially defined samples in MSI data sets. OMAAT is written in Python and is integrated with OpenMSI (http://openmsi.nersc.gov), a platform for storing, sharing, and analyzing MSI data. By using a web-based Python notebook (Jupyter), OMAAT is accessible to anyone without programming experience yet allows experienced users to leverage all features. OMAAT was evaluated by analyzing an MSI data set of a high-throughput glycoside hydrolase activity screen comprising 384 samples arrayed onto a NIMS surface at a 450 μm spacing, decreasing analysis time >100-fold while maintaining robust spot-finding. The utility of OMAAT was demonstrated for screening metabolic activities of different sized soil particles, including hydrolysis of sugars, revealing a pattern of size-dependent activities. These results establish OMAAT as an effective toolkit for analyzing spatially defined samples in MSI. OMAAT runs on all major operating systems, and the source code can be obtained from the following GitHub repository: https://github.com/biorack/omaat.
Bernard, Andrew C; Mullineaux, David R; Auxier, James T; Forman, Jennifer L; Shapiro, Robert; Pienkowski, David
2010-07-01
This study sought to establish objective anthropometric measures of fit or misfit for young riders on adult- and youth-sized all-terrain vehicles (ATVs) and to use these metrics to test the unproven historical reasoning that age alone is a sufficient measure of rider-ATV fit. Male children (6-11 years, n=8; and 12-15 years, n=11) were selected by convenience sampling. Rider-ATV fit was quantified by five measures adapted from published recommendations: (1) standing-seat clearance, (2) hand size, (3) foot vs. foot-brake position, (4) elbow angle, and (5) handlebar-to-knee distance. Youths aged 12-15 years fit the adult-sized ATV better than the ATV Safety Institute recommended age-appropriate youth model (63% of subjects fit all 5 measures on the adult-sized ATV vs. 20% on the youth-sized ATV). Youths aged 6-11 years fit poorly on ATVs of both sizes (0% fit all 5 parameters on the adult-sized ATV vs. 12% on the youth-sized ATV). The ATV Safety Institute recommends rider-ATV fit according to age and engine displacement, but no objective data linking age or anthropometrics with ATV engine or frame size have previously been published. Age alone is a poor predictor of rider-ATV fit; the five metrics used offer an improvement over current recommendations. Copyright 2010 Elsevier Ltd. All rights reserved.
Pearce, Michael; Hee, Siew Wan; Madan, Jason; Posch, Martin; Day, Simon; Miller, Frank; Zohar, Sarah; Stallard, Nigel
2018-02-08
Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.
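A simplified sketch of the decision-theoretic idea described above: for a normal endpoint with a normal prior on the treatment effect, choose the per-arm sample size that maximizes expected societal net benefit, i.e. the population-level gain from an approval decision informed by the trial minus the costs of running it. This is a stand-in for the authors' VOI method, not a reproduction of it, and all inputs are invented.

```python
# Optimal per-arm sample size by maximizing expected net benefit (sketch).
import numpy as np
from scipy.stats import norm

def expected_net_benefit(n, prior_mean, prior_sd, sigma, population,
                         value_per_unit, cost_per_patient):
    post_var = 1 / (1 / prior_sd**2 + n / (2 * sigma**2))
    s = np.sqrt(prior_sd**2 - post_var)     # sd of the preposterior mean
    # expected effect when adopting only if the posterior mean is positive
    gain = prior_mean * norm.cdf(prior_mean / s) + s * norm.pdf(prior_mean / s)
    return population * value_per_unit * gain - cost_per_patient * 2 * n

ns = np.arange(10, 2001, 10)
benefits = [expected_net_benefit(n, prior_mean=0.2, prior_sd=0.5, sigma=1.0,
                                 population=2000, value_per_unit=1000.0,
                                 cost_per_patient=500.0) for n in ns]
print(ns[int(np.argmax(benefits))])  # optimal per-arm sample size
```

Consistent with the abstract, shrinking `population` in this sketch shrinks the optimal trial size, since the societal payoff from better evidence falls while per-patient trial costs do not.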
Estimated abundance of wild burros surveyed on Bureau of Land Management Lands in 2014
Griffin, Paul C.
2015-01-01
The Bureau of Land Management (BLM) requires accurate estimates of the numbers of wild horses (Equus ferus caballus) and burros (Equus asinus) living on the lands it manages. For over ten years, BLM in Arizona has used the simultaneous double-observer method of recording wild burros during aerial surveys and has reported population estimates for those surveys that come from two formulations of a Lincoln-Petersen type of analysis (Graham and Bell, 1989). In this report, I provide those same two types of burro population analysis for 2014 aerial survey data from six herd management areas (HMAs) in Arizona, California, Nevada, and Utah. I also provide burro population estimates based on a different form of simultaneous double-observer analysis, now in widespread use for wild horse surveys that takes into account the potential effects on detection probability of sighting covariates including group size, distance, vegetative cover, and other factors (Huggins, 1989, 1991). The true number of burros present in the six areas surveyed was not known, so population estimates made with these three types of analyses cannot be directly tested for accuracy in this report. I discuss theoretical reasons why the Huggins (1989, 1991) type of analysis should provide less biased estimates of population size than the Lincoln-Petersen analyses and why estimates from all forms of double-observer analyses are likely to be lower than the true number of animals present in the surveyed areas. I note reasons why I suggest using burro observations made at all available distances in analyses, not only those within 200 meters of the flight path. For all analytical methods, small sample sizes of observed groups can be problematic, but that sample size can be increased over time for Huggins (1989, 1991) analyses by pooling observations. I note ways by which burro population estimates could be tested for accuracy when there are radio-collared animals in the population or when there are simultaneous double-observer surveys before and after a burro gather and removal.
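The Lincoln-Petersen type of double-observer estimate referenced above (after Graham and Bell, 1989) reduces to a one-line formula when detections by the two observers are assumed independent; the counts below are hypothetical.

```python
# Minimal double-observer population estimate; counts are invented.
def double_observer_estimate(seen_by_front, seen_by_rear, seen_by_both):
    """Estimate the number of groups present from two simultaneous observers,
    assuming independent detections: N = S1 * S2 / B."""
    return seen_by_front * seen_by_rear / seen_by_both

n_hat = double_observer_estimate(seen_by_front=42, seen_by_rear=38,
                                 seen_by_both=30)
print(round(n_hat, 1))  # estimated number of burro groups, here 53.2
```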
Learning to Reason from Samples
ERIC Educational Resources Information Center
Ben-Zvi, Dani; Bakker, Arthur; Makar, Katie
2015-01-01
The goal of this article is to introduce the topic of "learning to reason from samples," which is the focus of this special issue of "Educational Studies in Mathematics" on "statistical reasoning." Samples are data sets, taken from some wider universe (e.g., a population or a process) using a particular procedure…
Parent perspectives on attrition from tertiary care pediatric weight management programs.
Hampl, Sarah; Demeule, Michelle; Eneli, Ihuoma; Frank, Maura; Hawkins, Mary Jane; Kirk, Shelley; Morris, Patricia; Sallinen, Bethany J; Santos, Melissa; Ward, Wendy L; Rhodes, Erinn
2013-06-01
To describe parent/caregiver reasons for attrition from tertiary care weight management clinics/programs. A telephone survey was administered to 147 parents from weight management clinics/programs in the National Association of Children's Hospitals and Related Institutions' (now Children's Hospital Association's) FOCUS on a Fitter Future II collaborative. Scheduling, barriers to recommendation implementation, and transportation issues were endorsed by more than half of parents as having a moderate to high influence on their decision not to return. Family motivation and mismatched expectations between families and clinic/program staff were mentioned as influential by more than one-third. Only mismatched expectations correlated with patient demographics and program characteristics. Although limited by small sample size, the study found that parents who left geographically diverse weight management clinics/programs reported similar reasons for attrition. Future efforts should include offering alternative visit times, more treatment options, and financial and transportation assistance, and exploring family expectations.
Cost-effectiveness of alternative outpatient pelvic inflammatory disease treatment strategies.
Smith, Kenneth J; Ness, Roberta B; Wiesenfeld, Harold C; Roberts, Mark S
2007-12-01
Effectiveness differences between outpatient pelvic inflammatory disease (PID) treatment regimens are uncertain, but significant differences in cost exist. To examine the influence of antibiotic costs on PID therapy cost-effectiveness, the authors used a Markov decision model to estimate the cost-effectiveness of recommended antibiotic regimens for PID and performed a value of information analysis to guide future research. Antibiotic costs vary between USD 43 and USD 188. Pairwise comparisons, assuming a hypothetical 1% relative risk reduction in PID complications with the more expensive regimen, showed economically reasonable cost-effectiveness ratios. Value of information and sample size considerations support further investigation to detect 10% differences in PID complication rates between regimens with ≥ USD 50 cost differences. Within the cost range of recommended regimens, use of more expensive antibiotics would be economically reasonable if relatively small decreases in PID complication rates exist. Further investigation of effectiveness differences between regimens is needed.
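An incremental cost-effectiveness ratio of the kind compared above is straightforward to compute; in this sketch the two costs come from the stated USD 43-188 range, while the fraction of complications averted per patient is a hypothetical assumption.

```python
# Incremental cost-effectiveness ratio (ICER) sketch with invented effects.
def icer(cost_new, cost_old, eff_new, eff_old):
    """Incremental cost per additional complication averted."""
    return (cost_new - cost_old) / (eff_new - eff_old)

# expensive regimen USD 188 vs USD 43, assuming a hypothetical 0.002
# complications averted per patient treated
print(icer(188.0, 43.0, 0.002, 0.0))  # USD per complication averted
```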
NASA Astrophysics Data System (ADS)
Mondal, Puspen; Manekar, Meghmalhar; Srivastava, A. K.; Roy, S. B.
2009-07-01
We present the results of magnetization measurements on an as-cast nanocrystalline Nb3Al superconductor embedded in an Nb-Al matrix. The typical grain size of Nb3Al ranges from about 2-8 nm, with the maximum number of grains at around 3.5 nm, as visualized using transmission electron microscopy. The isothermal magnetization hysteresis loops in the superconducting state can be reasonably fitted with the well-known Kim-Anderson critical-state model. Using the same fitting parameters, we calculate the variation of field with distance inside the sample and show the existence of a critical state over length scales much larger than the typical size of the superconducting grains. Our results indicate that a bulk critical current is possible in a system comprising nanoparticles. The nonsuperconducting Nb-Al matrix thus appears to play a major role in the bulk current flow through the sample. The superconducting coherence length ξ is estimated to be around 3 nm, which is comparable to the typical grain size. The penetration depth λ is estimated to be about 94 nm, which is much larger than the largest of the superconducting grains. Our results could be useful for tuning the current-carrying capability of conductors made of composite materials involving superconducting nanoparticles.
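A numerical sketch of the Kim-Anderson critical state used in the fit above: inside a slab the local field obeys dB/dx = -mu0*Jc(B), with Jc(B) = Jc0 / (1 + |B|/B0). The parameter values below are illustrative, not the fitted values from the paper.

```python
# Integrate the Kim-Anderson critical-state field profile into a slab.
import numpy as np

MU0 = 4e-7 * np.pi

def field_profile(b_surface, jc0, b0, depth, steps=1000):
    """March the critical-state equation inward from the sample surface."""
    x = np.linspace(0.0, depth, steps)
    dx = x[1] - x[0]
    b = np.empty(steps)
    b[0] = b_surface
    for i in range(1, steps):
        jc = jc0 / (1 + abs(b[i - 1]) / b0)        # Kim-Anderson Jc(B)
        b[i] = max(0.0, b[i - 1] - MU0 * jc * dx)  # field decays inward
    return x, b

x, b = field_profile(b_surface=0.5, jc0=1e9, b0=0.2, depth=50e-6)
print(b[-1])  # field 50 um below the surface (tesla)
```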
Wang, Qing; Feng, Xinbin; Yang, Yufeng; Yan, Haiyu
2011-12-01
Total mercury (THg) and methylmercury (MeHg) concentrations in four size fractions of plankton from three sampling stations in the Hg-contaminated and eutrophic Baihua Reservoir, Guizhou, China, were investigated for biomagnification and trophic transfer of Hg at different sites with various proximity to the major point sources of nutrients and metals. Total Hg concentrations in plankton of the various size fractions varied from 49 to 5,504 ng g⁻¹ and MeHg concentrations ranged from 3 to 101 ng g⁻¹. The percentage of Hg as MeHg varied from 0.16 to 70%. Total Hg and MeHg concentrations in plankton samples differed among the three sampling stations with different proximities from the major point sources. The plankton from the site closest to the dam contained the highest concentrations of MeHg. The successive increase of the ratios of MeHg to Hg from seston to macroplankton at all sites indicated that biomagnification is occurring along the plankton food web. However, biomagnification factors (BMF) for MeHg were low (1.5-2.0) between trophic levels. Concentrations of THg in seston decreased with an increase of chlorophyll concentrations, suggesting a significant dilution effect by the algae bloom for Hg. Eutrophication dilution may be a reason for lower MeHg accumulation by the four size classes of plankton in this Hg-contaminated reservoir. Copyright © 2011 SETAC.
Predictive accuracy of combined genetic and environmental risk scores.
Dudbridge, Frank; Pashayan, Nora; Yang, Jian
2018-02-01
The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores. © 2017 WILEY PERIODICALS, INC.
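The weighted sum considered above has a closed-form optimum: given the correlations of the polygenic score and the environmental score with the outcome, and the correlation between the two scores, the best linear weights solve a 2x2 linear system. A sketch with invented correlations:

```python
# Optimal linear combination of two correlated risk scores (illustrative).
import numpy as np

r_g, r_e = 0.30, 0.25   # correlations of each score with the outcome
rho = 0.10              # correlation between the two scores

sigma = np.array([[1.0, rho], [rho, 1.0]])   # predictor covariance
r = np.array([r_g, r_e])
weights = np.linalg.solve(sigma, r)                    # optimal weights
combined_r = np.sqrt(r @ np.linalg.solve(sigma, r))    # multiple correlation

print(weights, combined_r)   # vs simple addition, which ignores rho
```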
Power and sensitivity of alternative fit indices in tests of measurement invariance.
Meade, Adam W; Johnson, Emily C; Braddy, Phillip W
2008-05-01
Confirmatory factor analytic tests of measurement invariance (MI) based on the chi-square statistic are known to be highly sensitive to sample size. For this reason, G. W. Cheung and R. B. Rensvold (2002) recommended using alternative fit indices (AFIs) in MI investigations. In this article, the authors investigated the performance of AFIs with simulated data known to not be invariant. The results indicate that AFIs are much less sensitive to sample size and more sensitive to a lack of invariance than chi-square-based tests of MI. The authors suggest reporting differences in the comparative fit index (CFI) and R. P. McDonald's (1989) noncentrality index (NCI) to evaluate whether MI exists. Although a general cutoff for the change in CFI (.002) performed well in the analyses, condition-specific cutoffs for the change in McDonald's NCI performed better than a single cutoff value. Tables of these values are provided, as are recommendations for best practices in MI testing. PsycINFO Database Record (c) 2008 APA, all rights reserved.
Towards well-defined gold nanomaterials via diafiltration and aptamer mediated synthesis
NASA Astrophysics Data System (ADS)
Sweeney, Scott Francis
Gold nanoparticles have garnered recent attention due to their intriguing size- and shape-dependent properties. Routine access to well-defined gold nanoparticle samples in terms of core diameter, shape, peripheral functionality and purity is required in order to carry out fundamental studies of their properties and to utilize these properties in future applications. For this reason, the development of methods for preparing well-defined gold nanoparticle samples remains an area of active research in materials science. In this dissertation, two methods, diafiltration and aptamer-mediated synthesis, are explored as possible routes towards well-defined gold nanoparticle samples. It is shown that diafiltration has considerable potential for the efficient and convenient purification and size separation of water-soluble nanoparticles. The suitability of diafiltration for (i) the purification of water-soluble gold nanoparticles, (ii) the separation of a bimodal distribution of nanoparticles into fractions, (iii) the fractionation of a polydisperse sample and (iv) the isolation of trimers from monomers and aggregates is studied. NMR, thermogravimetric analysis (TGA), and X-ray photoelectron spectroscopy (XPS) measurements demonstrate that diafiltration produces highly pure nanoparticles. UV-visible spectroscopic and transmission electron microscopic analyses show that diafiltration offers the ability to separate nanoparticles of disparate core size, including linked nanoparticles. These results demonstrate the applicability of diafiltration for the rapid and green preparation of high-purity gold nanoparticle samples and the size separation of heterogeneous nanoparticle samples. In the second half of the dissertation, the identification of materials-specific aptamers and their use to synthesize shaped gold nanoparticles is explored. The use of in vitro selection for identifying materials-specific peptide and oligonucleotide aptamers is reviewed, outlining the specific requirements of in vitro selection for materials and the ways in which the field can be advanced. A promising new technique, in vitro selection on surfaces (ISOS), is developed, and the discovery using ISOS of RNA aptamers that bind to evaporated gold is discussed. Analysis of the isolated gold-binding RNA aptamers indicates that they are highly structured with single-stranded polyadenosine binding motifs. These aptamers, and similarly isolated peptide aptamers, are briefly explored for their ability to synthesize gold nanoparticles. This dissertation contains both previously published and unpublished co-authored material.
Study optoelectronic properties for polymer composite thick film
NASA Astrophysics Data System (ADS)
Jobayr, Mahmood Radhi; Al Razak, Ali Hussein Abd; Mahdi, Shatha H.; Fadhil, Rihab Nassr
2018-05-01
Coupling epoxy with cadmium oxide particles is important for optical properties that may be affected by various mixing proportions. The aim of this experimental study was to evaluate the effect of different mixing proportions on these properties of epoxy reinforced with cadmium oxide particles. Ultrasonic techniques were used to mix and prepare the composite samples. The surface topography of the 50 µm thick reinforced epoxy films was studied using atomic force microscopy (AFM) and Fourier transform infrared (FTIR) spectroscopy. AFM imaging and quantitative characterization of the films showed that for all samples the root mean square surface roughness increases monotonically with increasing CdO concentration (from 0% to 15%). The observed effects of CdO concentration on surface roughness can be explained by two things: the first is that the atoms of the additive combine with the original material to form a new compound that is smoother, more homogeneous and smaller in particle size; the second is the intense mixing achieved by ultrasonication. AFM examination of the prepared samples of reinforced epoxy resin also showed that topographical contrast and the identification of small structural details depend critically on the hardness of the epoxy resin, which in turn depends on the proportion of CdO added. AFM imaging of the films showed that the mean grain diameter (104.8 nm overall) decreased from 135.50 nm to 83.20 nm with increasing CdO concentration.
Abdullah, Kawsari; Thorpe, Kevin E; Mamak, Eva; Maguire, Jonathon L; Birken, Catherine S; Fehlings, Darcy; Hanley, Anthony J; Macarthur, Colin; Zlotkin, Stanley H; Parkin, Patricia C
2015-07-14
The OptEC trial aims to evaluate the effectiveness of oral iron in young children with non-anemic iron deficiency (NAID). The initial sample size calculated for the OptEC trial ranged from 112-198 subjects. Given the uncertainty regarding the parameters used to calculate the sample, an internal pilot study was conducted. The objectives of this internal pilot study were to obtain reliable estimates of parameters (standard deviation and design factor) to recalculate the sample size, and to assess the adherence rate and reasons for non-adherence in children enrolled in the pilot study. The first 30 subjects enrolled into the OptEC trial constituted the internal pilot study. The primary outcome of the OptEC trial is the Early Learning Composite (ELC). For estimation of the SD of the ELC, descriptive statistics of the 4-month follow-up ELC scores were assessed within each intervention group. The observed SDs within each group were then pooled to obtain an estimated SD (S2) of the ELC. The correlation (ρ) between the ELC measured at baseline and at follow-up was assessed. Recalculation of the sample size was performed using the analysis of covariance (ANCOVA) method, which uses the design factor (1 − ρ²). Adherence rate was calculated using a parent-reported rate of missed doses of the study intervention. The new estimate of the SD of the ELC was found to be 17.40 (S2). The design factor was (1 − ρ²) = 0.21. Using a significance level of 5%, power of 80%, S2 = 17.40 and effect estimates (Δ) ranging from 6-8 points, the new sample size based on the ANCOVA method ranged from 32-56 subjects (16-28 per group). Adherence ranged between 14% and 100%, with 44% of the children having an adherence rate ≥ 86%. Information generated from our internal pilot study was used to update the design of the full and definitive trial, including recalculation of the sample size, determination of the adequacy of adherence, and application of strategies to improve adherence. ClinicalTrials.gov Identifier: NCT01481766 (date of registration: November 22, 2011).
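The recalculation reported above can be reproduced almost exactly with a normal-approximation ANCOVA sample size formula, using the pilot estimates SD = 17.40 and design factor (1 − ρ²) = 0.21. The exact formula the authors used is not stated, so this is an assumed but standard form; it recovers the reported 32-56 range.

```python
# ANCOVA sample size recalculation, normal approximation (sketch).
from math import ceil
from scipy.stats import norm

def ancova_n_per_group(sd, design_factor, delta, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * z**2 * sd**2 * design_factor / delta**2)

for delta in (6, 7, 8):  # candidate effect sizes on the ELC
    print(delta, ancova_n_per_group(17.40, 0.21, delta))
# prints 28, 21, 16 per group, i.e. totals of 56, 42, and 32,
# matching the reported range of 32-56 subjects
```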
Huang, Shuguang; Yeo, Adeline A; Li, Shuyu Dan
2007-10-01
The Kolmogorov-Smirnov (K-S) test is a statistical method often used for comparing two distributions. In high-throughput screening (HTS) studies, such distributions usually arise from the phenotypes of independent cell populations. However, the K-S test has been criticized for being overly sensitive in applications, often detecting statistically significant differences that are not biologically meaningful. One major reason is that systematic drift among the distributions is common in HTS studies, due to factors such as instrument variation, plate edge effects, or accidental differences in sample handling. In particular, in high-content cellular imaging experiments the location shift can be dramatic, since some compounds are themselves fluorescent. This oversensitivity of the K-S test is especially pronounced in cellular assays, where the sample sizes are very large (usually several thousands). In this paper, a modified K-S test is proposed to deal with the nonspecific location-shift problem in HTS studies. Specifically, we propose that the distributions be "normalized" by density curve alignment before the K-S test is conducted. In applications to simulated data and real experimental data, the results show that the proposed method has improved specificity.
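A simplified stand-in for the proposed modification: align the two samples before applying the two-sample K-S test. Here alignment is by the median rather than the paper's full density curve alignment, so treat this purely as an illustration of the idea. With a pure location drift, the aligned test no longer flags a difference.

```python
# Location-aligned two-sample K-S test (simplified illustration).
import numpy as np
from scipy.stats import ks_2samp

def aligned_ks_test(x, y):
    x = np.asarray(x) - np.median(x)   # centre each sample
    y = np.asarray(y) - np.median(y)
    return ks_2samp(x, y)

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, 5000)
treated = rng.normal(0.4, 1.0, 5000)   # pure location drift, same shape

print(ks_2samp(control, treated).pvalue)         # highly significant
print(aligned_ks_test(control, treated).pvalue)  # shift removed
```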
Perception of medical university members from nutritional health in the quran.
Salarvand, Shahin; Pournia, Yadollah
2014-04-01
Desirable health is impossible without good nutrition, and Allah has addressed us on eating foods in 118 verses. This study aimed to describe medical university faculty members' perceptions of nutritional health in the Quran, revealing the important role of faculty members. This qualitative study was conducted with a phenomenological approach. Homogeneous sampling was performed, with a final sample size of 16 subjects. Colaizzi's phenomenological method was applied for data analysis. Three main categories were extracted from the data analysis: the importance of nutrition in the Quran (referring to certain fruits, vegetables and foods, illustrating and venerating the heavenly ones, nutritional recommendations, revealing the healing power of honey and the effects of fruits and vegetables on physical and social health); reasons for different foods being lawful (halal) and unlawful (haram) (religious slaughter, wine, meats, consequences of consuming haram materials, general expression of the halal and haram terms); and fasting (fasting and physical health, fasting and mental health). What is mentioned in the Quran accords with what scientists have achieved over time, since the Quran is governed by logic. Although we do not know the reasons for many things in the Quran, we consider it as the foundation.
The fragmentation threshold and implications for explosive eruptions
NASA Astrophysics Data System (ADS)
Kennedy, B.; Spieler, O.; Kueppers, U.; Scheu, B.; Mueller, S.; Taddeucci, J.; Dingwell, D.
2003-04-01
The fragmentation threshold is the minimum pressure differential required to cause a porous volcanic rock to form pyroclasts. This is a critical parameter when considering the shift from effusive to explosive eruptions. We fragmented a variety of natural volcanic rock samples at room temperature (20 °C) and high temperature (850 °C) using a shock tube modified after Aldibirov and Dingwell (1996). This apparatus creates a pressure differential which drives fragmentation: pressurized gas in the vesicles of the rock suddenly expands, blowing the sample apart. For this reason, the porosity is the primary control on the fragmentation threshold. On a graph of porosity against fragmentation threshold, our results from a variety of natural samples at both low and high temperatures all plot on the same curve and show the threshold increasing steeply at low porosities. A sharp decrease in the fragmentation threshold occurs as porosity increases from 0-15%, while a more gradual decrease is seen from 15-85%. The high-temperature experiments form a curve with less variability than the low-temperature experiments, and for this reason we have chosen to model the high-temperature thresholds. The curve can be roughly predicted by the tensile strength of glass (140 MPa) divided by the porosity. Fractured phenocrysts in the majority of our samples reduce the overall strength of the samples; for this reason, the threshold values can be more accurately predicted by % matrix × tensile strength / porosity. At very high porosities the fragmentation threshold varies significantly due to the effect of bubble shape and size distributions on the permeability (Mueller et al., 2003). For example, high thresholds are seen for samples with very high permeabilities, where gas flow reduces the local pressure differential. These results allow us to predict the fragmentation threshold for any volcanic rock for which the porosity and crystal content are known. During explosive eruptions, the fragmentation threshold may be exceeded in two ways: (1) by building an overpressure within the vesicles above the fragmentation threshold or (2) by unloading and exposing lithostatically pressurized magma to lower pressures. Using these data, we can in principle estimate the height of dome collapse or the amount of overpressure necessary to produce an explosive eruption.
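A quick sketch of the empirical relation proposed above. Matching realistic threshold magnitudes appears to require entering porosity in per cent (e.g. 140 MPa / 25 gives roughly 5.6 MPa), which is an assumption made explicit here rather than something the abstract states.

```python
# Empirical fragmentation threshold predictor (illustrative assumptions).
def fragmentation_threshold(porosity_pct, matrix_pct=100.0,
                            tensile_strength_mpa=140.0):
    """Threshold in MPa; porosity and matrix content entered in per cent,
    an assumed convention that reproduces plausible magnitudes."""
    return (matrix_pct / 100.0) * tensile_strength_mpa / porosity_pct

# a rock with 60% matrix and 25% porosity
print(fragmentation_threshold(porosity_pct=25.0, matrix_pct=60.0))  # ~3.4 MPa
```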
13 CFR 121.1009 - What are the procedures for making the size determination?
Code of Federal Regulations, 2011 CFR
2011-01-01
... determination. The SBA will base its formal size determination upon the record, including reasonable inferences... procurement by reducing its size. (5) A concern determined to be other than small under a particular size...
Kabaluk, J Todd; Binns, Michael R; Vernon, Robert S
2006-06-01
Counts of green peach aphid, Myzus persicae (Sulzer) (Hemiptera: Aphididae), in potato, Solanum tuberosum L., fields were used to evaluate the performance of the sampling plan from a pest management company. The counts were further used to develop a binomial sampling method, and both full count and binomial plans were evaluated using operating characteristic curves. Taylor's power law provided a good fit of the data (r2 = 0.95), with the relationship between the variance (s2) and mean (m) as ln(s2) = 1.81(+/- 0.02) + 1.55(+/- 0.01) ln(m). A binomial sampling method was developed using the empirical model ln(m) = c + dln(-ln(1 - P(T))), to which the data fit well for tally numbers (T) of 0, 1, 3, 5, 7, and 10. Although T = 3 was considered the most reasonable given its operating characteristics and presumed ease of classification above or below critical densities (i.e., action thresholds) of one and 10 M. persicae per leaf, the full count method is shown to be superior. The mean number of sample sites per field visit by the pest management company was 42 +/- 19, with more than one-half (54%) of the field visits involving sampling 31-50 sample sites, which was acceptable in the context of operating characteristic curves for a critical density of 10 M. persicae per leaf. Based on operating characteristics, actual sample sizes used by the pest management company can be reduced by at least 50%, on average, for a critical density of 10 M. persicae per leaf. For a critical density of one M. persicae per leaf used to avert the spread of potato leaf roll virus, sample sizes from 50 to 100 were considered more suitable.
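The two fitted relationships in this abstract can be turned into a small utility. In the sketch below, the Taylor's power law coefficients (1.81, 1.55) are the quoted values, but the binomial-model intercept and slope (c, d) are placeholders, since the abstract does not report their fitted values.

```python
import math

def variance_from_mean(m, a=1.81, b=1.55):
    """Taylor's power law as fitted above: ln(s2) = a + b*ln(m)."""
    return math.exp(a + b * math.log(m))

def proportion_above_tally(m, c=0.0, d=1.0):
    """Invert the empirical binomial model ln(m) = c + d*ln(-ln(1 - P_T)):
    P_T = 1 - exp(-exp((ln(m) - c) / d)). c and d are hypothetical here."""
    return 1.0 - math.exp(-math.exp((math.log(m) - c) / d))

# Example at the critical density of 10 M. persicae per leaf.
m = 10.0
print(f"predicted variance at m={m}: {variance_from_mean(m):.1f}")
print(f"proportion of leaves above tally (placeholder c, d): "
      f"{proportion_above_tally(m):.2f}")
```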
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
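The SISR machinery with kernel smoothing that the study relies on can be illustrated on a deliberately simple scalar model. The sketch below is a toy version under invented dynamics (exponential growth), noise levels, and priors; the study's actual models are stage-structured, so this only shows the mechanics of resampling and parameter jittering.

```python
import numpy as np

rng = np.random.default_rng(1)

T, N = 25, 5000                      # years of counts, number of particles
true_r, q, obs_cv = 0.05, 0.05, 0.2  # growth rate, process SD, observation CV

# Simulate counts from known parameters (mirroring the study's design).
n = np.empty(T); n[0] = 1000.0
for t in range(1, T):
    n[t] = n[t - 1] * np.exp(true_r + rng.normal(0, q))
y = rng.normal(n, obs_cv * n)        # noisy counts

# Particles carry both the state and the static growth-rate parameter.
state = rng.normal(y[0], obs_cv * y[0], N)
r = rng.normal(0.0, 0.2, N)          # vague ("non-informative") prior on r

for t in range(1, T):
    state = state * np.exp(r + rng.normal(0, q, N))            # propagate
    w = np.exp(-0.5 * ((y[t] - state) / (obs_cv * state))**2)  # weight
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                                # resample
    state, r = state[idx], r[idx]
    r = r + rng.normal(0, 0.01, N)   # kernel smoothing against depletion

print(f"posterior mean r = {r.mean():.3f} (truth {true_r})")
print(f"posterior mean final N = {state.mean():.0f} (truth {n[-1]:.0f})")
```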
Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene
2015-05-01
In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.
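One of the Bayesian techniques for order-restricted hypotheses can be sketched with the encompassing-prior idea, in which the Bayes factor for a constrained hypothesis is the ratio of posterior to prior probability mass satisfying the constraint. The data, standard error, and prior spread below are invented; this is an illustration of the principle, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Three group means; a normal approximation to the posterior is assumed.
ybar = np.array([0.0, 0.4, 0.9])   # observed group means (invented)
se = 0.2                           # common standard error (invented)

n_draw = 200_000
post = rng.normal(ybar, se, size=(n_draw, 3))    # approximate posterior draws
prior = rng.normal(0.0, 10.0, size=(n_draw, 3))  # vague encompassing prior

def prop_ordered(draws):
    """Proportion of draws satisfying the order restriction m1 < m2 < m3."""
    return np.mean((draws[:, 0] < draws[:, 1]) & (draws[:, 1] < draws[:, 2]))

bf = prop_ordered(post) / prop_ordered(prior)
print(f"Bayes factor for m1 < m2 < m3 vs. unconstrained: {bf:.1f}")
```

Under a vague prior the order constraint holds for about 1/6 of prior draws, so a posterior that concentrates on the hypothesized ordering yields a Bayes factor well above 1.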
1985-12-01
statistics, each of the α levels fall. The mirror image of this is to work with the percentiles, or the 1 - α levels. These then become the minimum... To be valid, the power would have to be close to the α-levels, and that is the case. The powers are not exactly equal to the α-levels, but that is a... Information available increases with sample size. When α-levels are analyzed, for α = .01, the only reasonable power is 33.4 against the
Does Podcast Use Enhance Critical Thinking in Nursing Education?
Blum, Cynthia A
The purpose of this pilot interventional study was to examine relationships between adjunctive podcast viewing and nursing students' critical thinking (CT) abilities. Participants were last-semester/preceptorship nursing students. The intervention group was given unrestricted access to a CT podcast. No statistically significant relationships were found between Health Sciences Reasoning Test pretest and posttest scores, the number of times the podcast was viewed, and specific demographic factors. The results suggest that CT podcast viewing did not improve CT abilities. However, Likert-scale results indicated that students liked this method of learning. The demographic range and sample size were limited, and further research is recommended.
Textual data in psychiatry: reasoning by analogy to quantitative principles.
Yang, Suzanne; Mulvey, Edward P; Falissard, Bruno
2012-08-01
Personal meaning in subjective experience is a key element in the treatment of persons with mental disorders. Open-response speech samples would appear to be suitable for studying this type of subjective experience, but there are still important challenges in using language as data. Scientific principles involved in sample size calculation, validity, and reliability may be applicable, by analogy, to data collected in the form of words. We describe a rationale for including computer-assisted techniques as one step of a qualitative analysis procedure that includes manual reading. Clarification of a framework for including language as data in psychiatric research may allow us to more effectively bridge biological and psychometric research with clinical practice, a setting where the patient's clinical "data" are, in large part, conveyed in words.
24 CFR 8.11 - Reasonable accommodation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... make reasonable accommodation to the known physical or mental limitations of an otherwise qualified... accommodation would impose an undue hardship on the operation of its program. (b) Reasonable accommodation may... hardship on the operation of a recipient's program, factors to be considered include: (1) The overall size...
NASA Astrophysics Data System (ADS)
Bolling, Denzell Tamarcus
A significant amount of research has been devoted to the characterization of new engineering materials. Searching for new alloys which may improve weight, ultimate strength, or fatigue life are just a few of the reasons why researchers study different materials. In support of that mission, this study focuses on the effects of specimen geometry and size on the dynamic failure of AA2219 aluminum alloy subjected to impact loading. Using the Split Hopkinson Pressure Bar (SHPB) system, samples of different geometries, including cubic, rectangular, cylindrical, and frustum samples, are loaded at strain rates ranging from 1000 s-1 to 6000 s-1. The deformation properties of the different geometries, including the potential for the formation of adiabatic shear bands, are compared. Overall, the cubic geometry achieves the highest critical strain and maximum stress values at low strain rates, and the rectangular geometry has the highest critical strain and maximum stress at high strain rates. The frustum geometry consistently achieves the lowest maximum stress value compared to the other geometries under equal strain rates. All sample types clearly indicated susceptibility to strain localization at different locations within the sample geometry. Micrograph analysis indicated that adiabatic shear band geometry was influenced by sample geometry, and that specimens with a circular cross section are more susceptible to shear band formation than specimens with a rectangular cross section.
Naltrexone and Cognitive Behavioral Therapy for the Treatment of Alcohol Dependence
Baros, AM; Latham, PK; Anton, RF
2008-01-01
Background: Sex differences with regard to pharmacotherapy for alcoholism are a topic of concern following publications suggesting that naltrexone, one of the longest-approved treatments for alcoholism, is not as effective in women as in men. This study was conducted by combining two randomized placebo-controlled clinical trials utilizing similar methodologies and personnel, in which the data were amalgamated to evaluate sex effects in a reasonably sized sample. Methods: 211 alcoholics (57 female; 154 male) were randomized to the naltrexone/CBT or placebo/CBT arm of the two clinical trials analyzed. Baseline variables were examined for differences between sex and treatment groups via analysis of variance (ANOVA) for continuous variables or chi-square tests for categorical variables. All initial outcome analysis was conducted under an intent-to-treat analysis plan. Effect sizes for naltrexone over placebo were determined by Cohen's d. Results: The effect size of naltrexone over placebo was similar in men and women for the following outcome variables: % days abstinent (PDA) d=0.36, % heavy drinking days (PHDD) d=0.36 and total standard drinks (TSD) d=0.36. Only in men were the differences significant, secondary to the larger sample size (PDA p=0.03; PHDD p=0.03; TSD p=0.04). There were a few variables (GGT change from baseline to week 12: men d=0.36, p=0.05; women d=0.20, p=0.45; and drinks per drinking day: men d=0.36, p=0.05; women d=0.28, p=0.34) where the naltrexone effect size for men was greater than for women. In women, naltrexone tended to increase continuous abstinent days before a first drink (women d=0.46, p=0.09; men d=0.00, p=0.44). Conclusions: The effect size of naltrexone over placebo appeared similar in women and men in our hands, suggesting that the reported sex differences in naltrexone response may have to do with sample size and/or endpoint drinking variables rather than any inherent pharmacological or biological differences in response. PMID:18336635
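The abstract's central point, that the same effect size reaches significance only in the larger male subsample, is easy to verify with a back-of-the-envelope calculation. The sketch below assumes a 50/50 split into treatment arms within each sex, which is an illustrative simplification rather than the trials' actual allocation.

```python
import math
from scipy import stats

def approx_p_two_sample(d, n_total):
    """Approximate two-sided p-value for a two-sample t-test given
    Cohen's d and total n, assuming equal arms and pooled SD of 1."""
    n_per_arm = n_total / 2
    se = math.sqrt(1 / n_per_arm + 1 / n_per_arm)
    t = d / se
    return 2 * stats.t.sf(abs(t), df=n_total - 2)

for label, n in [("men", 154), ("women", 57)]:
    print(f"{label}: n={n}, d=0.36 -> p ~ {approx_p_two_sample(0.36, n):.3f}")
```

With d = 0.36 this gives p of roughly 0.03 for n = 154 but roughly 0.18 for n = 57, consistent with the reported pattern.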
Guo, Shuang; Zhu, Chenqi; Gao-Yang, Yaya; Qiu, Bailing; Wu, Di; Liang, Qihui; He, Jiayuan; Han, Nanyin
2016-02-01
Gravitational field-flow fractionation (GrFFF) is the simplest field-flow fractionation technique in terms of principle and operation; the Earth's gravity serves as its external field. Particles of different sizes are injected into a thin channel and carried by a carrier fluid. The different velocities of the carrier liquid in different places result in a size-based separation. A GrFFF instrument was designed and constructed. Two kinds of polystyrene (PS) particles with different sizes (20 µm and 6 µm) were chosen as model particles. In this work, separation of the sample was achieved by changing the concentration of NaN3, the percentage of mixed surfactant in the carrier liquid, and the flow rate of the carrier liquid. Six levels were set for each factor. The effects of these three factors on the retention ratio (R) and plate height (H) of the PS particles were investigated. It was found that R increased and H decreased with increasing particle size. On the other hand, R and H increased with increasing flow rate. R and H also increased with increasing NaN3 concentration; the reason was that the electrostatic repulsive force between the particles and the glass channel wall increased, which allowed the samples to approach closer to the channel wall. The results showed that the resolution and retention time can be improved by adjusting the experimental conditions. These results provide useful guidance for further applications of the GrFFF technique.
How many stakes are required to measure the mass balance of a glacier?
Fountain, A.G.; Vecchia, A.
1999-01-01
Glacier mass balance is estimated for South Cascade Glacier and Maclure Glacier using a one-dimensional regression of mass balance with altitude as an alternative to the traditional approach of contouring mass balance values. One attractive feature of regression is that it can be applied to sparse data sets where contouring is not possible and can provide an objective error of the resulting estimate. Regression methods yielded mass balance values equivalent to contouring methods. The effect of the number of mass balance measurements on the final value for the glacier showed that sample sizes as small as five stakes provided reasonable estimates, although the error estimates were greater than for larger sample sizes. Different spatial patterns of measurement locations showed no appreciable influence on the final value as long as different surface altitudes were intermittently sampled over the altitude range of the glacier. Two different regression equations were examined, a quadratic and a piecewise linear spline, and comparison of results showed little sensitivity to the type of equation. These results point to the dominant effect of the gradient of mass balance with altitude of alpine glaciers compared to transverse variations. The number of mass balance measurements required to determine the glacier balance appears to be scale invariant for small glaciers and five to ten stakes are sufficient.
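The regression alternative to contouring amounts to fitting balance against altitude and integrating over the glacier's area-altitude distribution. The sketch below shows this for the quadratic case; the stake altitudes, balances, and hypsometry are invented values, not data from either glacier.

```python
import numpy as np

# Five hypothetical stakes: altitude (m a.s.l.) and point balance (m w.e.).
alt = np.array([1700., 1800., 1950., 2050., 2150.])
bal = np.array([-3.1, -2.2, -0.8, 0.3, 1.0])

coef = np.polyfit(alt, bal, deg=2)   # quadratic balance-altitude fit

# Hypothetical area-altitude distribution: band midpoints and areas (km2).
band_alt = np.array([1650., 1750., 1850., 1950., 2050., 2150.])
band_area = np.array([0.2, 0.4, 0.6, 0.5, 0.3, 0.1])

# Area-weighted integration of the fitted curve over the hypsometry.
band_balance = np.polyval(coef, band_alt)
glacier_balance = np.sum(band_balance * band_area) / band_area.sum()
print(f"glacier-wide specific balance: {glacier_balance:.2f} m w.e.")
```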
NASA Astrophysics Data System (ADS)
Krishnan, Vinoadh Kumar; Sinnaeruvadi, Kumaran; Verma, Shailendra Kumar; Dash, Biswaranjan; Agrawal, Priyanka; Subramanian, Karthikeyan
2017-08-01
The present work deals with the synthesis, characterisation and elevated-temperature mechanical property evaluation of V-4Cr-4Ti and oxide (yttria = 0.3, 0.6 and 0.9 at%) dispersion strengthened V-4Cr-4Ti alloys processed by mechanical alloying and field-assisted sintering under optimal conditions. Microstructural parameters of both powder and sintered samples were deduced by X-ray diffraction (XRD) and further confirmed with high resolution transmission electron microscopy. Powder diffraction and electron microscopy show that ball milling of the starting elemental powders (V-4Cr-4Ti), with and without yttria addition, resulted in a single-phase α-V (V-4Cr-4Ti) alloy. In contrast, XRD and electron microscopy images of the sintered samples revealed phase separation (viz., Cr-V and Ti-V) and domain size reduction with yttria addition. The reasons behind the phase separation and domain size reduction with yttria addition during sintering are extensively discussed. Microhardness and high-temperature compression tests were performed on the sintered samples. Yttria addition (0.3 and 0.6 at%) increases the elevated-temperature compressive strength and strain hardening exponent of the α-V alloys. High-temperature compression testing of the 0.9 at% yttria-dispersed α-V alloy reveals glassy behaviour.
Sensitivity, specificity, and reproducibility of four measures of laboratory turnaround time.
Valenstein, P N; Emancipator, K
1989-04-01
The authors studied the performance of four measures of laboratory turnaround time: the mean, median, 90th percentile, and proportion of tests reported within a predetermined cut-off interval (proportion of acceptable tests [PAT]). Measures were examined with the use of turnaround time data from 11,070 stat partial thromboplastin times, 16,761 urine cultures, and 28,055 stat electrolyte panels performed by a single laboratory. For laboratories with long turnaround times, the most important quality of a turnaround time measure is high reproducibility, so that improvement in reporting speed can be distinguished from random variation resulting from sampling. The mean was found to be the most reproducible of the four measures, followed by the median. The mean achieved acceptable precision with sample sizes of 100-500 tests. For laboratories with normally rapid turnaround times, the most important quality of a measure is high sensitivity and specificity for detecting whether turnaround time has dropped below standards. The PAT was found to be the best measure of turnaround time in this setting but required sample sizes of at least 500 tests to achieve acceptable accuracy. Laboratory turnaround time may be measured for different reasons. The method of measurement should be chosen with an eye toward its intended application.
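The four measures compared in this study are straightforward to compute side by side on a batch of turnaround times. In the sketch below, the log-normal shape of the simulated times and the 60-minute cutoff are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
tat = rng.lognormal(mean=np.log(40), sigma=0.5, size=500)  # minutes

cutoff = 60.0  # predetermined acceptability cutoff (assumed)
measures = {
    "mean": tat.mean(),
    "median": np.median(tat),
    "90th percentile": np.percentile(tat, 90),
    "PAT (% <= cutoff)": 100.0 * np.mean(tat <= cutoff),
}
for name, value in measures.items():
    print(f"{name}: {value:.1f}")
```

Repeating such a computation on bootstrap resamples of the batch is one simple way to see the reproducibility differences among the measures that the study quantifies.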
Junno, Juho-Antti; Niskanen, Markku; Maijanen, Heli; Holt, Brigitte; Sladek, Vladimir; Niinimäki, Sirpa; Berner, Margit
2018-02-01
The stature/bi-iliac breadth method provides reasonably precise, skeletal frame size (SFS) based body mass (BM) estimations across adults as a whole. In this study, we examine the potential effects of age changes in anthropometric dimensions on the estimation accuracy of SFS-based body mass estimation. We use anthropometric data from the literature and our own skeletal data from two osteological collections to study effects of age on stature, bi-iliac breadth, body mass, and body composition, as they are major components behind body size and body size estimations. We focus on males, as relevant longitudinal data are based on male study samples. As a general rule, lean body mass (LBM) increases through adolescence and early adulthood until people are aged in their 30s or 40s, and starts to decline in the late 40s or early 50s. Fat mass (FM) tends to increase until the mid-50s and declines thereafter, but in more mobile traditional societies it may decline throughout adult life. Because BM is the sum of LBM and FM, it exhibits a curvilinear age-related pattern in all societies. Skeletal frame size is based on stature and bi-iliac breadth, and both of those dimensions are affected by age. Skeletal frame size based body mass estimation tends to increase throughout adult life in both skeletal and anthropometric samples because an age-related increase in bi-iliac breadth more than compensates for an age-related stature decline commencing in the 30s or 40s. Combined with the above-mentioned curvilinear BM change, this results in curvilinear estimation bias. However, for simulations involving low to moderate percent body fat, the stature/bi-iliac method works well in predicting body mass in younger and middle-aged adults. Such conditions are likely to have applied to most human paleontological and archaeological samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
7 CFR 999.1 - Regulation governing the importation of dates.
Code of Federal Regulations, 2012 CFR
2012-01-01
... includes dates coated with a substance materially altering their color. (5) Dates prepared or preserved... altering their color, or dates prepared for incorporation into a product by chopping, slicing, or other... hydration, the dates possess a reasonably good color, are reasonably uniform in size, are reasonably free...
7 CFR 999.1 - Regulation governing the importation of dates.
Code of Federal Regulations, 2011 CFR
2011-01-01
... includes dates coated with a substance materially altering their color. (5) Dates prepared or preserved... altering their color, or dates prepared for incorporation into a product by chopping, slicing, or other... hydration, the dates possess a reasonably good color, are reasonably uniform in size, are reasonably free...
NASA Astrophysics Data System (ADS)
Ni, W.; Zhang, Z.; Sun, G.
2017-12-01
Several large-scale maps of forest AGB have been released [1-3]. However, these existing global or regional datasets were only approximations based on combining land cover type and representative values instead of measurements of actual forest aboveground biomass or forest heights [4]. Rodríguez-Veiga et al. [5] reported obvious discrepancies between existing forest biomass stock maps and in-situ observations in Mexico. One of the biggest challenges to the credibility of these maps comes from the scale gaps between the size of the field sampling plots used to develop (or validate) estimation models and the pixel size of these maps, and from the scarcity of field sampling plots of sufficient size for the verification of these products [6]. It is time-consuming and labor-intensive to collect a sufficient number of field samples over plots as large as the resolution of regional maps, and smaller field sampling plots cannot fully represent the spatial heterogeneity of forest stands, as shown in Figure 1. Forest AGB is directly determined by forest heights, diameter at breast height (DBH) of each tree, forest density and tree species. What is measured in field sampling are the geometrical characteristics of forest stands, including the DBH, tree heights and forest densities. LiDAR data are considered the best dataset for the estimation of forest AGB, mainly because LiDAR can directly capture geometrical features of forest stands through its range detection capabilities. A remotely sensed dataset capable of directly measuring forest spatial structures may therefore serve as a ladder to bridge the scale gaps between the pixel size of regional forest AGB maps and field sampling plots. Several studies report that TanDEM-X data can be used to characterize forest spatial structures [7, 8]. In this study, the forest AGB map of northeast China was produced using ALOS/PALSAR data, taking TanDEM-X data as a bridge. The TanDEM-X InSAR data used in this study and the resulting forest AGB map are shown in Figure 2. Technical details and further analysis will be given in the final report. Acknowledgment: This work was supported in part by the National Basic Research Program of China (Grant No. 2013CB733401, 2013CB733404), and in part by the National Natural Science Foundation of China (Grant Nos. 41471311, 41371357, 41301395).
Siamian, Hasan; Yaminfirooz, Moosa; Dehghan, Zahra; Shahrabi, Afsaneh
2013-01-01
This study seeks to determine the expertise, use, and satisfaction of faculty members of Babol University of Medical Sciences with the online information services provided by the university. This study is a descriptive and analytical survey; information was gathered through a questionnaire, and the sample, determined using the Krejcie and Morgan sample size table and selected through stratified sampling proportionate to the size of the departments, summed to 155, of which 113 responded to the mailed questionnaire. The results show that among the various data sources, such as books, journals and the Internet, faculty members have the most undemanding and convenient access to the Internet; however, half of the information needs of faculty members, 57 (50.4 percent), are met by printed books. The databases available to the university and used by faculty members are PubMed (76.1%), ScienceDirect (53.1%) and Iranmedex (46.9%). Only 17% of faculty members are fully satisfied with the Internet information services, and more than half of the respondents (58.4%) cited the low speed of the Internet service as the major reason for their dissatisfaction. Using the Internet to provide the needed information, with an index of 46%, is a significant issue. Although the Internet was more convenient for acquiring information and access to printed books was hard and limited, most of the information needs of faculty members were met by printed books, based on what they expressed. The study showed that the sample's use and knowledge of the information databases is very low, and only a few have full satisfaction with the provided Internet information services, the foremost reason for this dissatisfaction being the low-speed Internet services at the university.
ERIC Educational Resources Information Center
Ainley, Janet; Gould, Robert; Pratt, Dave
2015-01-01
This paper is in the form of a reflective discussion of the collection of papers in this Special Issue on "Statistical reasoning: learning to reason from samples" drawing on deliberations arising at the Seventh International Collaboration for Research on Statistical Reasoning, Thinking, and Literacy (SRTL7). It is an important part of…
Evaluation of blast furnace slag as basal media for eelgrass bed.
Hizon-Fradejas, Amelia B; Nakano, Yoichi; Nakai, Satoshi; Nishijima, Wataru; Okada, Mitsumasa
2009-07-30
Two types of blast furnace slag (BFS), granulated slag (GS) and air-cooled slag (ACS), were evaluated as basal media for an eelgrass bed. Evaluation was done by comparing the BFS samples with natural eelgrass sediment (NES) in terms of some physico-chemical characteristics and then investigating the growth of eelgrass in both BFS and NES. In terms of particle size, both BFS samples were within the range acceptable for growing eelgrass. However, compared with NES, ACS had a low silt-clay content and both BFS samples lacked organic matter. The growth experiment showed that eelgrass can grow in both types of BFS, although growth rates in the BFS samples, as shown by leaf elongation, were slower than in NES. The possible reasons for the stunted growth in BFS were assumed to be the lack of organic matter and the release of some possible toxins from the BFS. Reduction of the sulfide content of the BFS samples did not result in enhanced growth; though sulfide release was eliminated, the release of Zn was greater than before treatment and its concentration reached alarming levels.
Reflecting on Graphs: Attributes of Graph Choice and Construction Practices in Biology
Angra, Aakanksha; Gardner, Stephanie M.
2017-01-01
Undergraduate biology education reform aims to engage students in scientific practices such as experimental design, experimentation, and data analysis and communication. Graphs are ubiquitous in the biological sciences, and creating effective graphical representations involves quantitative and disciplinary concepts and skills. Past studies document student difficulties with graphing within the contexts of classroom or national assessments without evaluating student reasoning. Operating under the metarepresentational competence framework, we conducted think-aloud interviews to reveal differences in reasoning and graph quality between undergraduate biology students, graduate students, and professors in a pen-and-paper graphing task. All professors planned and thought about data before graph construction. When reflecting on their graphs, professors and graduate students focused on the function of graphs and experimental design, while most undergraduate students relied on intuition and data provided in the task. Most undergraduate students meticulously plotted all data with scaled axes, while professors and some graduate students transformed the data, aligned the graph with the research question, and reflected on statistics and sample size. Differences in reasoning and approaches taken in graph choice and construction corroborate and extend previous findings and provide rich targets for undergraduate and graduate instruction. PMID:28821538
Granularity analysis for mathematical proofs.
Schiller, Marvin R G
2013-04-01
Mathematical proofs generally allow for various levels of detail and conciseness, such that they can be adapted for a particular audience or purpose. Using automated reasoning approaches for teaching proof construction in mathematics presupposes that the step size of proofs in such a system is appropriate within the teaching context. This work proposes a framework that supports the granularity analysis of mathematical proofs, to be used in the automated assessment of students' proof attempts and for the presentation of hints and solutions at a suitable pace. Models for granularity are represented by classifiers, which can be generated by hand or inferred from a corpus of sample judgments via machine-learning techniques. This latter procedure is studied by modeling granularity judgments from four experts. The results provide support for the granularity of assertion-level proofs but also illustrate a degree of subjectivity in assessing step size. Copyright © 2013 Cognitive Science Society, Inc.
Enhanced magnetocaloric effect tuning efficiency in Ni-Mn-Sn alloy ribbons
NASA Astrophysics Data System (ADS)
Quintana-Nedelcos, A.; Sánchez Llamazares, J. L.; Daniel-Perez, G.
2017-11-01
The present work was undertaken to investigate the effect of microstructure on the magnetic entropy change of Ni50Mn37Sn13 ribbon alloys. Unchanged sample composition and cell parameter of austenite allowed us to study strictly the correlation between the average grain size and the total magnetic field induced entropy change (ΔST). We found that a size-dependent martensitic transformation tuning results in a wide temperature range tailoring (>40 K) of the magnetic entropy change with a reasonably small variation in the peak value of the total field induced entropy change. The peak values varied from 6.0 J kg-1 K-1 to 7.7 J kg-1 K-1 for applied fields up to 2 T. Different tuning efficiencies obtained by diverse MCE tailoring approaches are compared to highlight the advantages of the herein proposed mechanism.
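Peak entropy-change values like those above are conventionally obtained from magnetization isotherms via the Maxwell relation, ΔS(T) = μ0 ∫ (∂M/∂T) dH. The sketch below applies that standard relation to a synthetic M(T, H) surface with a smooth transition near 300 K; none of the numbers are data from the ribbons.

```python
import numpy as np

mu0 = 4e-7 * np.pi                 # vacuum permeability, T m / A
T = np.linspace(280, 320, 41)      # temperature grid (K)
H = np.linspace(0, 1.6e6, 81)      # applied field grid (A/m), ~2 T

# Synthetic magnetization (A m^2 / kg) with a transition near 300 K.
M = (1 - np.tanh((T[:, None] - 300) / 8)) * 60 * (H[None, :] / H[-1]) ** 0.3

dMdT = np.gradient(M, T, axis=0)           # partial dM/dT at fixed H
dS = mu0 * np.trapz(dMdT, H, axis=1)       # Delta S(T), J kg^-1 K^-1

i = np.abs(dS).argmax()
print(f"peak |dS| = {abs(dS[i]):.1f} J kg^-1 K^-1 at T = {T[i]:.0f} K")
```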
Transition from Forward Smoldering to Flaming in Small Polyurethane Foam Samples
NASA Technical Reports Server (NTRS)
Bar-Ilan, A.; Putzeys, O.; Rein, G.; Fernandez-Pello, A. C.
2004-01-01
Experimental observations are presented of the effect of the flow velocity and oxygen concentration, and of a thermal radiant flux, on the transition from smoldering to flaming in forward smoldering of small samples of polyurethane foam with a gas/solid interface. The experiments are part of a project studying the transition from smolder to flaming under conditions encountered in spacecraft facilities, i.e., microgravity and low-velocity, variable-oxygen-concentration flows. Because the microgravity experiments are planned for the International Space Station, the foam samples had to be limited in size for safety and launch mass reasons. The feasible sample size is too small for smolder to self-propagate because of heat losses to the surrounding environment. Thus, the smolder propagation and the transition to flaming had to be assisted by reducing the heat losses to the surroundings and increasing the oxygen concentration. The experiments are conducted with small parallelepiped samples vertically placed in a wind tunnel. Three of the sample's lateral sides are maintained at elevated temperature and the fourth side is exposed to an upward flow and to a radiant flux. It is found that decreasing the flow velocity and increasing its oxygen concentration, and/or increasing the radiant flux, enhances the transition to flaming and reduces the delay time to transition. Limiting external ambient conditions for the transition to flaming are reported for the present experimental set-up. The results show that smolder propagation and the transition to flaming can occur in relatively small fuel samples if the external conditions are appropriate. The results also indicate that transition to flaming occurs in the char left behind by the smolder reaction, and it has the characteristics of a gas-phase ignition induced by the smolder reaction, which acts as the source of both gaseous fuel and heat.
Taphonomic bias in pollen and spore record: a review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisk, L.H.
The high dispersibility and ease of pollen and spore transport have led researchers to conclude erroneously that fossil pollen and spore floras are relatively complete and record unbiased representations of the regional vegetation extant at the time of sediment deposition. That such conclusions are unjustified is obvious when the authors remember that palynomorphs are merely organic sedimentary particles and undergo hydraulic sorting not unlike clastic sedimentary particles. Prior to deposition in the fossil record, pollen and spores can be hydraulically sorted by size, shape, and weight, subtly biasing relative frequencies in fossil assemblages. Sorting during transport results in palynofloras whose composition is environmentally dependent. Therefore, depositional environment is an important consideration for making correct inferences about the source vegetation. The sediment particle size of the original rock samples may contain important information on the probability of a taphonomically biased pollen and spore assemblage. In addition, a reasonable test for hydraulic sorting is the distribution of pollen grain sizes and shapes in each assemblage. Any assemblage containing a wide spectrum of grain sizes and shapes has obviously not undergone significant sorting. If unrecognized, taphonomic bias can lead to paleoecologic, paleoclimatic, and even biostratigraphic misinterpretations.
Effect of Boundary Condition on the Shear Behaviour of Rock Joints in the Direct Shear Test
NASA Astrophysics Data System (ADS)
Bahaaddini, M.
2017-05-01
The common method for determining the mechanical properties of rock joints is the direct shear test. This paper aims to study the effect of boundary conditions on the results of direct shear tests. Experimental studies undertaken in this research showed that the peak shear strength is mostly overestimated. This problem is more pronounced for steep asperities and under high normal stresses. Investigation of the failure mode of these samples showed that tensile cracks are generated at the boundary of the sample close to the specimen holders and propagate into the intact material. In order to discover the reason for the failure mechanism observed in the experiments, the direct shear test was simulated using PFC2D. Results of the numerical models showed that the size of the gap zone between the upper and lower specimen holders has a significant effect on the shear mechanism. For a large gap size, stresses concentrate in the vicinity of the tips of the specimen holders and result in the generation and propagation of tensile cracks inside the intact material. However, by reducing the gap size, stresses are concentrated on the asperities, and damage to the specimen at its boundary is not observed. The results of this paper show that understanding the shear mechanism of rock joints is an essential step prior to interpreting the results of direct shear tests.
A Reassessment of Bergmann's Rule in Modern Humans
Foster, Frederick; Collard, Mark
2013-01-01
It is widely accepted that modern humans conform to Bergmann's rule, which holds that body size in endothermic species will increase as temperature decreases. However, there are reasons to question the reliability of the findings on which this consensus is based. One of these is that the main studies that have reported that modern humans conform to Bergmann's rule have employed samples that contain a disproportionately large number of warm-climate and northern hemisphere groups. With this in mind, we used latitudinally-stratified and hemisphere-specific samples to re-assess the relationship between modern human body size and temperature. We found that when groups from north and south of the equator were analyzed together, Bergmann's rule was supported. However, when groups were separated by hemisphere, Bergmann's rule was only supported in the northern hemisphere. In the course of exploring these results further, we found that the difference between our northern and southern hemisphere subsamples is due to the limited latitudinal and temperature range in the latter subsample. Thus, our study suggests that modern humans do conform to Bergmann's rule but only when there are major differences in latitude and temperature among groups. Specifically, groups must span more than 50 degrees of latitude and/or more than 30°C for it to hold. This finding has important implications for work on regional variation in human body size and its relationship to temperature. PMID:24015229
Zhang, Fang; Wagner, Anita K; Ross-Degnan, Dennis
2011-11-01
Interrupted time series is a strong quasi-experimental research design to evaluate the impacts of health policy interventions. Using simulation methods, we estimated the power requirements for interrupted time series studies under various scenarios. Simulations were conducted to estimate the power of segmented autoregressive (AR) error models when autocorrelation ranged from -0.9 to 0.9 and effect size was 0.5, 1.0, and 2.0, investigating balanced and unbalanced numbers of time periods before and after an intervention. Simple scenarios of autoregressive conditional heteroskedasticity (ARCH) models were also explored. For AR models, power increased when sample size or effect size increased, and tended to decrease when autocorrelation increased. Compared with a balanced number of study periods before and after an intervention, designs with unbalanced numbers of periods had less power, although that was not the case for ARCH models. The power to detect effect size 1.0 appeared to be reasonable for many practical applications with a moderate or large number of time points in the study equally divided around the intervention. Investigators should be cautious when the expected effect size is small or the number of time points is small. We recommend conducting various simulations before investigation. Copyright © 2011 Elsevier Inc. All rights reserved.
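The simulation approach described above can be sketched compactly: generate segmented data with AR(1) errors and count how often the level-change coefficient is detected. The sketch below uses an OLS fit with a normal-approximation test rather than the paper's segmented autoregressive error models, and the design (12 points pre and post, effect size 1.0, autocorrelation 0.3) is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_power(effect=1.0, rho=0.3, n_pre=12, n_post=12, n_sim=2000):
    """Approximate power to detect a level change in a segmented regression
    with AR(1) errors, via repeated OLS fits (a simplification of the
    paper's segmented AR models)."""
    n = n_pre + n_post
    time = np.arange(n, dtype=float)
    post = (time >= n_pre).astype(float)
    X = np.column_stack([np.ones(n), time, post, post * (time - n_pre)])
    hits = 0
    for _ in range(n_sim):
        e = np.empty(n)
        e[0] = rng.normal(0, 1 / np.sqrt(1 - rho**2))  # stationary start
        for t in range(1, n):
            e[t] = rho * e[t - 1] + rng.normal()
        y = effect * post + e
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - X.shape[1])
        cov = sigma2 * np.linalg.inv(X.T @ X)
        t_stat = beta[2] / np.sqrt(cov[2, 2])          # level-change term
        hits += abs(t_stat) > 1.96
    return hits / n_sim

print(f"power (effect=1.0, rho=0.3, 12+12 points): {simulate_power():.2f}")
```

Varying `rho`, `effect`, and the pre/post balance in such a loop reproduces the qualitative patterns the study reports: power falls as autocorrelation rises and as the design becomes unbalanced.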
Unities in Inductive Reasoning. Technical Report No. 18.
ERIC Educational Resources Information Center
Sternberg, Robert J.; Gardner, Michael K.
Two experiments were performed to study inductive reasoning as a set of thought processes that operates on the structure, as opposed to the content, of organized memory. The content of the reasoning consisted of inductions concerning the names of mammals, assumed to occupy a Euclidean space of three dimensions (size, ferocity, and humanness) in…
Effects of biochar amendment on geotechnical properties of landfill cover soil.
Reddy, Krishna R; Yaghoubi, Poupak; Yukselen-Aksoy, Yeliz
2015-06-01
Biochar is a carbon-rich product obtained when plant-based biomass is heated in a closed container with little or no available oxygen. Biochar-amended soil has the potential to serve as a landfill cover material that can oxidise methane emissions for two reasons: biochar amendment can increase the methane retention time and also enhance the biological activity that can promote the methanotrophic oxidation of methane. Hydraulic conductivity, compressibility and shear strength are the most important geotechnical properties that are required for the design of effective and stable landfill cover systems, but no studies have been reported on these properties for biochar-amended landfill cover soils. This article presents physicochemical and geotechnical properties of a biochar, a landfill cover soil and biochar-amended soils. Specifically, the effects of amending 5%, 10% and 20% biochar (of different particle sizes as produced, size-20 and size-40) to soil on its physicochemical properties, such as moisture content, organic content, specific gravity and pH, as well as geotechnical properties, such as hydraulic conductivity, compressibility and shear strength, were determined from laboratory testing. Soil or biochar samples were prepared by mixing them with 20% deionised water based on dry weight. Samples of soil amended with 5%, 10% and 20% biochar (w/w) as-is or of different select sizes, were also prepared at 20% initial moisture content. The results show that the hydraulic conductivity of the soil increases, compressibility of the soil decreases and shear strength of the soil increases with an increase in the biochar amendment, and with a decrease in biochar particle size. Overall, the study revealed that biochar-amended soils can possess excellent geotechnical properties to serve as stable landfill cover materials. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Xu, R.; Prodanovic, M.
2017-12-01
Due to the low porosity and permeability of tight porous media, hydrocarbon productivity strongly depends on the pore structure. Effective characterization of pore/throat sizes and reconstruction of their connectivity in tight porous media remains challenging. Having a representative pore throat network, however, is valuable for calculation of other petrophysical properties such as permeability, which is time-consuming and costly to obtain by experimental measurements. Due to a wide range of length scales encountered, a combination of experimental methods is usually required to obtain a comprehensive picture of the pore-body and pore-throat size distributions. In this work, we combine mercury intrusion capillary pressure (MICP) and nuclear magnetic resonance (NMR) measurements by percolation theory to derive pore-body size distribution, following the work by Daigle et al. (2015). However, in their work, the actual pore-throat sizes and the distribution of coordination numbers are not well-defined. To compensate for that, we build a 3D unstructured two-scale pore throat network model initialized by the measured porosity and the calculated pore-body size distributions, with a tunable pore-throat size and coordination number distribution, which we further determine by matching the capillary pressure vs. saturation curve from MICP measurement, based on the fact that the mercury intrusion process is controlled by both the pore/throat size distributions and the connectivity of the pore system. We validate our model by characterizing several core samples from tight Middle East carbonate, and use the network model to predict the apparent permeability of the samples under single phase fluid flow condition. Results show that the permeability we get is in reasonable agreement with the Coreval experimental measurements. The pore throat network we get can be used to further calculate relative permeability curves and simulate multiphase flow behavior, which will provide valuable insights into the production optimization and enhanced oil recovery design.
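A small building block of the MICP-based workflow above is converting capillary pressures into equivalent pore-throat radii, conventionally done with the Washburn relation r = 2γcosθ/Pc. The sketch below uses the commonly cited mercury-air surface tension and contact angle; the pressure values are illustrative, and this is a generic relation rather than the authors' full network-matching procedure.

```python
import numpy as np

GAMMA = 0.485              # N/m, mercury-air surface tension (textbook value)
THETA = np.deg2rad(140)    # mercury contact angle (textbook value)

def throat_radius_um(pc_mpa):
    """Equivalent throat radius (micrometers) at capillary pressure Pc (MPa),
    from the Washburn relation r = 2*gamma*cos(theta)/Pc."""
    pc = pc_mpa * 1e6      # Pa
    return abs(2 * GAMMA * np.cos(THETA) / pc) * 1e6

for pc in [0.1, 1.0, 10.0, 100.0]:
    print(f"Pc = {pc:6.1f} MPa -> r = {throat_radius_um(pc):8.3f} um")
```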
Li, A; Meyre, D
2013-04-01
A robust replication of initial genetic association findings has proved to be difficult in human complex diseases and more specifically in the obesity field. An obvious cause of non-replication in genetic association studies is the initial report of a false positive result, which can be explained by a non-heritable phenotype, insufficient sample size, improper correction for multiple testing, population stratification, technical biases, insufficient quality control or inappropriate statistical analyses. Replication may, however, be challenging even when the original study describes a true positive association. The reasons include underpowered replication samples, gene × gene, gene × environment interactions, genetic and phenotypic heterogeneity and subjective interpretation of data. In this review, we address classic pitfalls in genetic association studies and provide guidelines for proper discovery and replication genetic association studies with a specific focus on obesity.
Nanoscale surface modification of glass using a 1064 nm pulsed laser
NASA Astrophysics Data System (ADS)
Theppakuttai, Senthil; Chen, Shaochen
2003-07-01
We report a method to produce nanopatterns on borosilicate glass by a Nd:yttrium-aluminum-garnet laser (10 ns, 1064 nm), using silica nanospheres. Nonlinear absorption of the enhanced optical field between the spheres and the glass sample is believed to be the primary reason for the creation of nanofeatures on the glass substrate. By shining the laser beam from the backside of the glass sample, the scattering effects are minimized and only the direct field enhancement due to the spheres is utilized for surface patterning. To confirm this, calculations based on the Mie scattering theory were performed, and the resulting intensities as a function of scattering angle are presented. The nanofeatures obtained by this method are 350 nm in diameter and the distance between them is around 640 nm, which is the same as the size of the spheres used.
Bioaerosols study in central Taiwan during summer season.
Wang, Chun-Chin; Fang, Guor-Cheng; Lee, LienYao
2007-04-01
Suspended particles, of which bioaerosols are one type, are one of the main causes of poor air quality in Taiwan. Bioaerosols include allergens such as fungi, bacteria, actinomycetes, arthropods and protozoa, as well as microbial products such as mycotoxins, endotoxins and glucans. When allergens and microbial products are suspended in the air, local air quality is severely affected. In addition, when the particle size is small enough to pass through the respiratory tract and enter the human body, the health of the local population is also threatened. Therefore, this study attempted to determine the concentration and types of bacteria during the summer period at four sampling sites in Taichung city, central Taiwan. The results indicated that total average bacterial concentrations, using R2A medium incubated for 48 h, were 7.3 x 10^2 and 1.2 x 10^3 cfu/m3 at the Chung-Ming elementary school sampling site during the daytime and night-time periods of the summer season, and 2.2 x 10^3 and 2.5 x 10^3 cfu/m3 at the Taichung refuse incineration plant sampling site. For the Rice Field sampling site, the total average bacterial concentrations were 3.4 x 10^3 and 3.5 x 10^3 cfu/m3 during the daytime and night-time periods. Finally, total average bacterial concentrations were 1.6 x 10^3 and 1.9 x 10^3 cfu/m3 at the Central Taiwan Science Park sampling site during the daytime and night-time periods. Moreover, the average bacterial concentration increased with incubation time in the growth medium for particle sizes of 0.65-1.1, 1.1-2.1, 2.1-3.3, 3.3-4.7 and 4.7-7.0 microm. The total average bacterial concentration showed no significant difference between the day and night sampling periods at any sampling site when expressed in terms of order of magnitude. The highest average bacterial concentrations were found in the particle size range of 0.53-0.71 mm (average bioaerosol size in the range of 2.1-4.7 microm) at each sampling site. In addition, more than 20 kinds of bacteria were found at each sampling site, and the bacterial shapes were rod, coccus and filamentous.
NASA Astrophysics Data System (ADS)
Fahnestock, Eugene G.; Yu, Yang; Hamilton, Douglas P.; Schwartz, Stephen; Stickle, Angela; Miller, Paul L.; Cheng, Andy F.; Michel, Patrick; AIDA Impact Simulation Working Group
2016-10-01
The proposed Asteroid Impact Deflection and Assessment (AIDA) mission includes NASA's Double Asteroid Redirection Test (DART), whose impact with the secondary of near-Earth binary asteroid 65803 Didymos is expected to liberate large amounts of ejecta. We present efforts within the AIDA Impact Simulation Working Group to comprehensively simulate the behavior of this impact ejecta as it moves through and exits the system. Group members at JPL, OCA, and UMD have been working largely independently, developing their own strategies and methodologies. Ejecta initial conditions may be imported from output of hydrocode impact simulations or generated from crater scaling laws derived from point-source explosion models. We started with the latter approach, using reasonable assumptions for the secondary's density, porosity, surface cohesive strength, and vanishingly small net gravitational/rotational surface acceleration. We adopted DART's planned size, mass, closing velocity, and impact geometry for the cratering event. Using independent N-Body codes, we performed Monte Carlo integration of ejecta particles sampled over reasonable particle size ranges, and over launch locations within the crater footprint. In some cases we scaled the number of integrated particles in various size bins to the estimated number of particles consistent with a realistic size-frequency distribution. Dynamical models used for the particle integration varied, but all included full gravity potential of both primary and secondary, the solar tide, and solar radiation pressure (accounting for shadowing). We present results for the proportions of ejecta reaching ultimate fates of escape, return impact on the secondary, and transfer impact onto the primary. We also present the time history of reaching those outcomes, i.e., ejecta clearing timescales, and the size-frequency distribution of remaining ejecta at given post-impact durations. We find large numbers of particles remain in the system for several weeks after impact. Clearing timescales are nonlinearly dependent on particle size as expected, such that only the largest ejecta persist longest. We find results are strongly dependent on the local surface geometry at the modeled impact locations.
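As a toy illustration of how ejecta fates can be estimated from crater-scaling launch-speed distributions, the sketch below samples speeds from an assumed power law and compares them with an assumed escape speed. Every number in it is a placeholder rather than an AIDA/DART value, and it ignores the binary dynamics, solar tide, and radiation pressure that the Working Group's full N-body integrations include.

```python
import numpy as np

rng = np.random.default_rng(7)

v_escape = 0.09          # m/s, placeholder escape speed from the secondary
v_min, v_max = 0.01, 10.0  # m/s, assumed launch-speed range
alpha = 2.0              # assumed power-law index, N(>v) ~ v**(-alpha)

# Inverse-CDF sampling of launch speeds from the truncated power law.
u = rng.random(100_000)
a, b = v_min**-alpha, v_max**-alpha
v = (a - u * (a - b)) ** (-1.0 / alpha)

escaping = np.mean(v > v_escape)
print(f"fraction launched faster than escape speed: {escaping:.2%}")
```

In the full problem, particles below this simple speed cut can still escape (or re-impact) depending on launch geometry and perturbations, which is why the integrations above track each particle individually.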
van den Bogert, Cornelis A; Souverein, Patrick C; Brekelmans, Cecile T M; Janssen, Susan W J; Koëter, Gerard H; Leufkens, Hubert G M; Bouter, Lex M
2017-08-01
The objective of the study was to identify the reasons for discontinuation of clinical drug trials and to evaluate whether efficacy-related discontinuations were adequately planned in the trial protocol. All clinical drug trials in the Netherlands, reviewed by institutional review boards in 2007, were followed until December 2015. Data were obtained through the database of the Dutch competent authority (Central Committee on Research Involving Human Subjects [CCMO]) and a questionnaire to the principal investigators. Reasons for trial discontinuation were the primary outcome of the study. Three reasons for discontinuation were analyzed separately: all cause, recruitment failure, and efficacy related (when an interim analysis had demonstrated futility or superiority). Among the efficacy-related discontinuations, we examined whether the data monitoring committee, the stopping rule, and the moment of the interim analysis in the trial progress were specified in the trial protocol. Of the 574 trials, 102 (17.8%) were discontinued. The most common reasons were recruitment failure (33 of 574; 5.7%) and solely efficacy related (30 of 574; 5.2%). Of the efficacy-related discontinuations, 10 of 30 (33.3%) of the trial protocols reported all three aspects in the trial protocol, and 20 of 30 (66.7%) reported at least one aspect in the trial protocol. One out of five clinical drug trials is discontinued before the planned trial end, with recruitment failure and futility as the most common reasons. The target sample size of trials should be feasible, and interim analyses should be adequately described in trial protocols. Copyright © 2017 Elsevier Inc. All rights reserved.
Belief-bias reasoning in non-clinical delusion-prone individuals.
Anandakumar, T; Connaughton, E; Coltheart, M; Langdon, R
2017-03-01
It has been proposed that people with delusions have difficulty inhibiting beliefs (i.e., "doxastic inhibition") so as to reason about them as if they might not be true. We used a continuity approach to test this proposal in non-clinical adults scoring high and low in psychometrically assessed delusion-proneness. High delusion-prone individuals were expected to show greater difficulty than low delusion-prone individuals on "conflict" items of a "belief-bias" reasoning task (i.e. when required to reason logically about statements that conflicted with reality), but not on "non-conflict" items. Twenty high delusion-prone and twenty low delusion-prone participants (according to the Peters et al. Delusions Inventory) completed a belief-bias reasoning task and tests of IQ, working memory and general inhibition (Excluded Letter Fluency, Stroop and Hayling Sentence Completion). High delusion-prone individuals showed greater difficulty than low delusion-prone individuals on the Stroop and Excluded Letter Fluency tests of inhibition, but no greater difficulty on the conflict versus non-conflict items of the belief-bias task. They did, however, make significantly more errors overall on the belief-bias task, despite controlling for IQ, working memory and general inhibitory control. The study had a relatively small sample size and used non-clinical participants to test a theory of cognitive processing in individuals with clinically diagnosed delusions. Results failed to support a role for doxastic inhibitory failure in non-clinical delusion-prone individuals. These individuals did, however, show difficulty with conditional reasoning about statements that may or may not conflict with reality, independent of any general cognitive or inhibitory deficits. Copyright © 2016 Elsevier Ltd. All rights reserved.
Das, Susanta; Nam, Kwangho; Major, Dan Thomas
2018-03-13
In recent years, a number of quantum mechanical-molecular mechanical (QM/MM) enzyme studies have investigated the dependence of reaction energetics on the size of the QM region using energy and free energy calculations. In this study, we revisit the question of QM region size dependence in QM/MM simulations within the context of energy and free energy calculations, using a proton transfer in a DNA base pair as a test case. In the simulations, the QM region was treated with a dispersion-corrected AM1/d-PhoT Hamiltonian, which was developed to accurately describe phosphoryl and proton transfer reactions, in conjunction with an electrostatic embedding scheme using the particle-mesh Ewald summation method. With this rigorous QM/MM potential, we performed rather extensive QM/MM sampling and found that the free energy reaction profiles converge rapidly with respect to the QM region size, within ca. ±1 kcal/mol. This finding suggests that the strategy of QM/MM simulations with reasonably sized and selected QM regions, which has been employed for over four decades, is a valid approach for modeling complex biomolecular systems. We point to possible causes for the sensitivity of the energy and free energy calculations to the size of the QM region, and discuss potential implications.
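To make the convergence check above concrete, here is a minimal sketch (not the authors' code; the profiles, QM-region sizes, and tolerance below are hypothetical stand-ins) of comparing free energy profiles from different QM region sizes against the largest-region result:

import numpy as np

def profile_deviation(profiles):
    """Max absolute deviation (kcal/mol) of each profile from the reference,
    taken here as the profile computed with the largest QM region."""
    ref = profiles[max(profiles)]
    return {size: float(np.max(np.abs(pmf - ref)))
            for size, pmf in profiles.items()}

# Hypothetical PMFs along a proton-transfer coordinate for three QM sizes
# (number of QM atoms); the functional forms are made up for illustration.
xi = np.linspace(-1.0, 1.0, 50)
profiles = {27: 10 * xi**2 + 0.8 * np.sin(3 * xi),
            69: 10 * xi**2 + 0.3 * np.sin(3 * xi),
            157: 10 * xi**2}
deviations = profile_deviation(profiles)
# Flag profiles that agree with the reference within ca. +/-1 kcal/mol.
print({size: (round(d, 2), d <= 1.0) for size, d in deviations.items()})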
Bohlin, Jon; Andreassen, Bettina K; Joubert, Bonnie R; Magnus, Maria C; Wu, Michael C; Parr, Christine L; Håberg, Siri E; Magnus, Per; Reese, Sarah E; Stoltenberg, Camilla; London, Stephanie J; Nystad, Wenche
2015-07-29
Several epidemiologic studies indicate that maternal gestational weight gain (GWG) influences health outcomes in offspring. The underlying mechanisms have, however, not been established. A recent study of 88 children based on the Avon Longitudinal Study of Parents and Children (ALSPAC) cohort examined methylation levels at 1,505 Cytosine-Guanine methylation (CpG) loci and found several to be significantly associated with maternal weight gain between weeks 0 and 18 of gestation. Since these results could not be replicated, we examined associations between 0-18 week GWG and genome-wide methylation levels using the Infinium HumanMethylation450 BeadChip (450K) platform on a larger sample, i.e., 729 newborns sampled from the Norwegian Mother and Child Cohort Study (MoBa). We found no CpG loci associated with 0-18 week GWG after adjusting for the set of covariates used in the ALSPAC study (i.e., child's sex and maternal age) and for multiple testing (q > 0.9; both 1,505 and 473,731 tests). Hence, none of the CpG loci linked with the genes found significantly associated with 0-18 week GWG in the ALSPAC study were significant in our study. The inconsistency with the ALSPAC results for the 0-18 week GWG model may arise for several reasons: sampling from different populations, dissimilar methylome coverage, differences in sample size, and/or false-positive findings.
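For readers unfamiliar with this kind of screen, the sketch below shows its generic shape: a per-CpG regression of methylation on GWG with the ALSPAC covariates (child's sex, maternal age), followed by FDR adjustment across loci. The data are synthetic, and 1,000 loci stand in for the 473,731 actually tested:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n, n_cpg = 729, 1000
gwg = rng.normal(5.0, 1.5, n)             # 0-18 week weight gain (kg), made up
sex = rng.integers(0, 2, n)               # covariates used in the ALSPAC model
age = rng.normal(30.0, 4.0, n)
X = sm.add_constant(np.column_stack([gwg, sex, age]))

pvals = np.empty(n_cpg)
for j in range(n_cpg):
    meth = rng.normal(0.5, 0.1, n)        # null case: methylation unrelated to GWG
    pvals[j] = sm.OLS(meth, X).fit().pvalues[1]   # p-value for the GWG coefficient

reject, qvals, _, _ = multipletests(pvals, method="fdr_bh")
print(f"CpGs significant after FDR: {reject.sum()}, min q = {qvals.min():.3f}")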
IndeCut evaluates performance of network motif discovery algorithms.
Ansariola, Mitra; Megraw, Molly; Koslicki, David
2018-05-01
Genomic networks represent a complex map of molecular interactions which are descriptive of the biological processes occurring in living cells. Identifying the small over-represented circuitry patterns in these networks helps generate hypotheses about the functional basis of such complex processes. Network motif discovery is a systematic way of achieving this goal. However, a reliable network motif discovery outcome requires generating random background networks which are the result of a uniform and independent graph sampling method. To date, there has been no method to numerically evaluate whether any network motif discovery algorithm performs as intended on realistically sized datasets; thus it was not possible to assess the validity of resulting network motifs. In this work, we present IndeCut, the first method to date that characterizes network motif finding algorithm performance in terms of uniform sampling on realistically sized networks. We demonstrate that it is critical to use IndeCut prior to running any network motif finder, for two reasons. First, IndeCut indicates the number of samples needed for a tool to produce an outcome that is both reproducible and accurate. Second, IndeCut allows users to choose the tool that generates samples in the most independent fashion for their network of interest among many available options. The open source software package is available at https://github.com/megrawlab/IndeCut. Contact: megrawm@science.oregonstate.edu or david.koslicki@math.oregonstate.edu. Supplementary data are available at Bioinformatics online.
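For context, background networks for motif discovery are commonly produced by degree-preserving randomization; the sketch below (using networkx as an assumed stand-in, and not part of IndeCut itself) shows the kind of sampler whose uniformity and independence IndeCut is designed to audit:

import networkx as nx

def background_samples(g: nx.Graph, n_samples: int, seed: int = 0):
    """Generate degree-preserving randomizations of an observed graph."""
    samples = []
    for i in range(n_samples):
        r = g.copy()
        # Repeated double edge swaps preserve every node's degree.
        nx.double_edge_swap(r, nswap=10 * g.number_of_edges(),
                            max_tries=10**5, seed=seed + i)
        samples.append(r)
    return samples

g = nx.gnm_random_graph(50, 200, seed=1)   # toy "observed" network
rand_nets = background_samples(g, n_samples=5)
# Edge counts (and degree sequences) are preserved across samples.
print([r.number_of_edges() for r in rand_nets])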
Sgaier, Sema K; Eletskaya, Maria; Engl, Elisabeth; Mugurungi, Owen; Tambatamba, Bushimbwa; Ncube, Gertrude; Xaba, Sinokuthemba; Nanga, Alice; Gogolina, Svetlana; Odawo, Patrick; Gumede-Moyo, Sehlulekile; Kretschmer, Steve
2017-09-13
Public health programs are starting to recognize the need to move beyond a one-size-fits-all approach in demand generation, and instead tailor interventions to the heterogeneity underlying human decision making. Currently, however, there is a lack of methods to enable such targeting. We describe a novel hybrid behavioral-psychographic segmentation approach to segment stakeholders on potential barriers to a target behavior. We then apply the method in a case study of demand generation for voluntary medical male circumcision (VMMC) among 15-29 year-old males in Zambia and Zimbabwe. Canonical correlations and hierarchical clustering techniques were applied on representative samples of men in each country who were differentiated by their underlying reasons for their propensity to get circumcised. We characterized six distinct segments of men in Zimbabwe, and seven segments in Zambia, according to their needs, perceptions, attitudes and behaviors towards VMMC, thus highlighting distinct reasons for a failure to engage in the desired behavior.
Assessing the impact of a respiratory diagnosis on smoking cessation.
Jones, Alexandra
2017-07-27
The aim of this study was to assess the impact of respiratory diagnoses on smoking cessation. A total of 229 current and former smokers, with and without respiratory diagnoses, completed an anonymous online questionnaire assessing how their smoking habit changed when diagnosed with various respiratory conditions. Among all participants, the most common reason for quitting smoking was to reduce the risk of health problems in general. In those with a chronic respiratory diagnosis, this was also their most common reason for quitting. Motivation to quit smoking, scored by participants on a scale of 0-10, increased at the time of diagnosis and increased further after diagnosis of a chronic respiratory condition, but declined after diagnosis of an acute respiratory condition. The research had a small sample size, so further research is required. However, important themes are highlighted with the potential to influence clinical practice. All clinicians should receive training to promote cessation at the time of diagnosing respiratory conditions.
Seol, Hyunsoo
2016-06-01
The purpose of this study was to apply the bootstrap procedure to evaluate how the bootstrapped confidence intervals (CIs) for polytomous Rasch fit statistics might differ according to sample sizes and test lengths, in comparison with the rule-of-thumb critical value of misfit. A total of 25 simulated data sets were generated to fit the Rasch model, and then a total of 1,000 replications were conducted to compute the bootstrapped CIs under each of the 25 testing conditions. The results showed that rule-of-thumb critical values for assessing the magnitude of misfit were not applicable, because the infit and outfit mean square error statistics showed different magnitudes of variability over testing conditions and the standardized fit statistics did not exactly follow the standard normal distribution. Further, the fit statistics do not share the same critical range for item and person misfit. Based on the results of the study, the bootstrapped CIs can be used to identify misfitting items or persons, as they offer a reasonable alternative solution, especially when the distributions of the infit and outfit statistics are not well known and depend on sample size. © The Author(s) 2016.
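Stripped of Rasch-specific machinery, the bootstrapped-CI logic looks like the following sketch; a generic mean-square statistic stands in for infit/outfit, and the 0.7-1.3 range quoted in the output is one commonly cited rule of thumb, not the paper's value:

import numpy as np

rng = np.random.default_rng(42)

def fit_statistic(residuals):
    """A mean-square style statistic, standing in for infit/outfit."""
    return float(np.mean(residuals**2))

obs = rng.normal(0, 1, 300)              # standardized residuals for one item
boot = np.array([fit_statistic(rng.choice(obs, size=obs.size, replace=True))
                 for _ in range(1000)])  # resample cases, recompute statistic
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrapped 95% CI: ({lo:.2f}, {hi:.2f}); "
      f"compare with the fixed 0.7-1.3 rule of thumb")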
Failure to Replicate a Genetic Association May Provide Important Clues About Genetic Architecture
Greene, Casey S.; Penrod, Nadia M.; Williams, Scott M.; Moore, Jason H.
2009-01-01
Replication has become the gold standard for assessing statistical results from genome-wide association studies. Unfortunately this replication requirement may cause real genetic effects to be missed. A real result can fail to replicate for numerous reasons including inadequate sample size or variability in phenotype definitions across independent samples. In genome-wide association studies the allele frequencies of polymorphisms may differ due to sampling error or population differences. We hypothesize that some statistically significant independent genetic effects may fail to replicate in an independent dataset when allele frequencies differ and the functional polymorphism interacts with one or more other functional polymorphisms. To test this hypothesis, we designed a simulation study in which case-control status was determined by two interacting polymorphisms with heritabilities ranging from 0.025 to 0.4 with replication sample sizes ranging from 400 to 1600 individuals. We show that the power to replicate the statistically significant independent main effect of one polymorphism can drop dramatically with a change of allele frequency of less than 0.1 at a second interacting polymorphism. We also show that differences in allele frequency can result in a reversal of allelic effects where a protective allele becomes a risk factor in replication studies. These results suggest that failure to replicate an independent genetic effect may provide important clues about the complexity of the underlying genetic architecture. We recommend that polymorphisms that fail to replicate be checked for interactions with other polymorphisms, particularly when samples are collected from groups with distinct ethnic backgrounds or different geographic regions. PMID:19503614
Finite length-scale anti-gravity and observations of mass discrepancies in galaxies
NASA Astrophysics Data System (ADS)
Sanders, R. H.
1986-01-01
The modification of Newtonian attraction suggested by Sanders (1984) contains a repulsive Yukawa component which is characterised by two physical parameters: a coupling constant, α, and a length scale, r0. Although this form of the gravitational potential can result in flat rotation curves for a galaxy (or a point mass) it is not obvious that any modification of gravity associated with a definite length scale can reproduce the observed rotation curves of galaxies covering a wide range of mass and size. Here it is shown that the rotation curves of galaxies ranging in size from 5 to 40 kpc can be reproduced by this modified potential. Moreover, the implied mass-to-light ratios for a larger sample of galaxies are reasonable (one to three) and show no systematic trend with the size of the galaxy. The observed infrared Tully-Fisher law is shown to be consistent with the prediction of this revised gravity. The modified potential permits the X-ray emitting halos observed around elliptical galaxies to be bound without the addition of dark matter.
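For reference, the Sanders-type modification can be written as a Newtonian point-mass potential plus a Yukawa term; the form below is a sketch using one common convention (parameter symbols follow the abstract; with the Yukawa component repulsive, the coupling constant α is negative):

\Phi(r) = -\frac{G_{\infty} M}{r}\left(1 + \alpha\, e^{-r/r_{0}}\right),
\qquad
v_{c}^{2}(r) = r\,\frac{d\Phi}{dr}
             = \frac{G_{\infty} M}{r}\left[1 + \alpha\left(1 + \frac{r}{r_{0}}\right) e^{-r/r_{0}}\right].

The circular-velocity expression makes the abstract's claim concrete: the curve departs from the Keplerian form in a transition region around r ≈ r_0, so reproducing rotation curves for galaxies spanning 5 to 40 kpc with a single length scale is a nontrivial check.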
Breast cancer mitosis detection in histopathological images with spatial feature extraction
NASA Astrophysics Data System (ADS)
Albayrak, Abdülkadir; Bilgin, Gökhan
2013-12-01
In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled reasonable and effective solutions to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells. Hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As the main part of this study, a Haralick texture descriptor is proposed with different spatial window sizes in RGB and La*b* color spaces, so that the spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared across various sample sizes using Support Vector Machines with k-fold cross-validation. The results show that separation accuracy for mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
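A simplified sketch of the kind of pipeline described (assumed, not the authors' code: scikit-image's gray-level co-occurrence features stand in for the Haralick descriptor, and the patches are synthetic):

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def window_features(patch):
    # Gray-level co-occurrence matrix at a single offset; a real pipeline
    # would average several distances/angles and repeat per color channel
    # (RGB and La*b*), as the abstract describes. On scikit-image < 0.19
    # these functions are spelled greycomatrix/greycoprops.
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [float(graycoprops(glcm, p)[0, 0])
            for p in ("contrast", "homogeneity", "energy", "correlation")]

rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):                      # 0 = non-mitotic, 1 = mitotic (toy)
    for _ in range(40):
        noise = rng.integers(0, 256, (16, 16)).astype(np.uint8)
        patch = noise if label == 0 else (noise // 32 * 32)  # coarser texture
        X.append(window_features(patch))
        y.append(label)

scores = cross_val_score(SVC(kernel="rbf"), np.array(X), np.array(y), cv=5)
print(f"k-fold cross-validated accuracy: {scores.mean():.2f}")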
Statistical Analyses of Femur Parameters for Designing Anatomical Plates.
Wang, Lin; He, Kunjin; Chen, Zhengming
2016-01-01
Femur parameters are key prerequisites for scientifically designing anatomical plates. Meanwhile, individual differences in femurs present a challenge to designing well-fitting anatomical plates. Therefore, to design anatomical plates more scientifically, analyses of femur parameters with statistical methods were performed in this study. The specific steps were as follows. First, taking eight anatomical femur parameters as variables, 100 femur samples were classified into three classes with factor analysis and Q-type cluster analysis. Second, based on the mean parameter values of the three classes of femurs, three sizes of average anatomical plates corresponding to the three classes were designed. Finally, based on Bayes discriminant analysis, a new femur could be assigned to the proper class, and the average anatomical plate suitable for that new femur selected from the three available sizes. Experimental results showed that the classification of femurs was quite reasonable based on the anatomical aspects of the femurs. For instance, three sizes of condylar buttress plates were designed, 20 new femurs were assigned to their proper classes, and suitable condylar buttress plates were then selected for them.
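The three-step pipeline maps naturally onto standard tools; the sketch below is schematic (synthetic data; scikit-learn's LinearDiscriminantAnalysis stands in for Bayes discriminant analysis):

import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import AgglomerativeClustering
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))            # 100 femurs x 8 anatomical parameters
X[:34] += 1.5                            # crude built-in size structure

# Step 1: factor analysis, then Q-type (hierarchical) clustering into 3 classes.
fa = FactorAnalysis(n_components=3, random_state=0).fit(X)
scores = fa.transform(X)
classes = AgglomerativeClustering(n_clusters=3).fit_predict(scores)

# Step 2: the mean parameters of each class would drive one average plate design.
plate_templates = {c: X[classes == c].mean(axis=0) for c in range(3)}

# Step 3: assign a new femur to a class, then pick the matching plate size.
clf = LinearDiscriminantAnalysis().fit(scores, classes)
new_femur = rng.normal(size=(1, 8)) + 1.0
assigned = int(clf.predict(fa.transform(new_femur))[0])
print(f"new femur assigned to class {assigned}; use plate template {assigned}")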
Perception of Medical University Members From Nutritional Health in the Quran
Salarvand, Shahin; Pournia, Yadollah
2014-01-01
Background: Desirable health is impossible without good nutrition, and Allah has addressed us on eating foods in 118 verses. Objectives: This study aimed to describe the medical university faculty members’ perceptions of nutritional health in the Quran, revealing the important role of faculty members. Materials and Methods: This qualitative study was conducted with a phenomenological approach. Homogeneous sampling was performed in a final sample size of 16 subjects. The Colaizzi's phenomenological method was applied for data analysis. Results: Three main categories were extracted from the data analysis, including the importance of nutrition in the Quran (referring to certain fruits, vegetables and foods, illustrating and venerating the heavenly ones, nutritional recommendations, revealing the healing power of honey and the effects of fruits and vegetables on physical and social health); reasons of different foods being lawful (halal) and unlawful (haram) (religious slaughter, wine, meats, consequences of consuming haram materials, general expression of halal and haram terms); and fasting (fasting and physical health, fasting and mental health). Conclusions: What has been mentioned in the Quran is what scientists have achieved over the time, since the Quran is governed by logic. Although we do not know the reasons for many things in the Quran, we consider it as the foundation. PMID:24910781
Rapid microscopy measurement of very large spectral images.
Lindner, Moshe; Shotan, Zav; Garini, Yuval
2016-05-02
The spectral content of a sample provides important information that cannot be detected by the human eye or by using an ordinary RGB camera. The spectrum is typically a fingerprint of the chemical compound, its environmental conditions, phase and geometry. Thus measuring the spectrum at each point of a sample is important for a large range of applications, from art preservation through forensics to pathological analysis of a tissue section. To date, however, there is no system that can measure the spectral image of a large sample in a reasonable time. Here we present a novel method for scanning very large spectral images of microscopy samples, even if they cannot be viewed in a single field of view of the camera. The system is based on capturing information while the sample is being scanned continuously 'on the fly'. Spectral separation implements Fourier spectroscopy by using an interferometer mounted along the optical axis. A high spectral resolution of ~5 nm at 500 nm could be achieved with a diffraction-limited spatial resolution. The acquisition is fairly fast: it takes 6-8 minutes for a sample size of 10 mm x 10 mm measured under a bright-field microscope using a 20X magnification.
Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.
Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng
2015-01-01
Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
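The procedure under scrutiny, a percentile bootstrap of the indirect effect a*b, fits in a few lines; this is an illustrative sketch, not the authors' simulation code (their R syntax is in their supplemental materials):

import numpy as np

rng = np.random.default_rng(7)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                     # slope of M on X
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]),
                        y, rcond=None)[0][2]       # slope of Y on M given X
    return a * b

n = 40                                    # within the contested 20-80 range
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)
y = 0.4 * m + rng.normal(size=n)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)           # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"percentile-bootstrap 95% CI for a*b: ({lo:.3f}, {hi:.3f}); "
      f"'significant' if the CI excludes 0")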
NASA Astrophysics Data System (ADS)
Pinilla, P.; Tazzari, M.; Pascucci, I.; Youdin, A. N.; Garufi, A.; Manara, C. F.; Testi, L.; van der Plas, G.; Barenfeld, S. A.; Canovas, H.; Cox, E. G.; Hendler, N. P.; Pérez, L. M.; van der Marel, N.
2018-05-01
We analyze the dust morphology of 29 transition disks (TDs) observed with the Atacama Large Millimeter/submillimeter Array (ALMA) in (sub-)millimeter emission. We perform the analysis in the visibility plane to characterize the total flux, cavity size, and shape of the ring-like structure. First, we found that the M_dust-M_star relation is much flatter for TDs than the observed trends from samples of class II sources in different star-forming regions. This relation demonstrates that cavities open in high (dust) mass disks, independent of the stellar mass. The flatness of this relation contradicts the idea that TDs are a more evolved set of disks. Two potential reasons (not mutually exclusive) may explain this flat relation: the emission is optically thick or/and millimeter-sized particles are trapped in a pressure bump. Second, we discuss our results on the cavity size and ring width in the context of different physical processes for cavity formation. Photoevaporation is an unlikely leading mechanism for the origin of the cavity of any of the targets in the sample. Embedded giant planets or dead zones remain as potential explanations. Although both models predict correlations between the cavity size and the ring shape for different stellar and disk properties, we demonstrate that with the current resolution of the observations, it is difficult to obtain these correlations. Future higher angular resolution observations of TDs with ALMA will help discern between different potential origins of cavities in TDs.
Health sciences librarians' attitudes toward the Academy of Health Information Professionals
Baker, Lynda M.; Kars, Marge; Petty, Janet
2004-01-01
Objectives: The purpose of the study was to ascertain health sciences librarians' attitudes toward the Academy of Health Information Professionals (AHIP). Sample: Systematic sampling was used to select 210 names from the list of members of the Midwest Chapter of the Medical Library Association. Methods: A questionnaire containing open- and closed-ended questions was used to collect the data. Results: A total of 135 usable questionnaires were returned. Of the respondents, 34.8% are members of the academy and most are at the senior or distinguished member levels. The academy gives them a sense of professionalism and helps them to keep current with new trends. The majority of participants (65.2%) are not members of the academy. Among the various reasons proffered are that neither institutions nor employers require it and that there is no obvious benefit to belonging to the academy. Conclusions: More research needs to be done with a larger sample size to determine the attitudes of health sciences librarians, nationwide, toward the academy. PMID:15243638
Robustness of survival estimates for radio-marked animals
Bunck, C.M.; Chen, C.-L.
1992-01-01
Telemetry techniques are often used to study the survival of birds and mammals, particularly when mark-recapture approaches are unsuitable. Both parametric and nonparametric methods to estimate survival have been developed or modified from other applications. An implicit assumption in these approaches is that the probability of re-locating an animal with a functioning transmitter is one. A Monte Carlo study was conducted to determine the bias and variance of the Kaplan-Meier estimator and of an estimator based on the assumption of constant hazard, and to evaluate the performance of the two-sample tests associated with each. Modifications of each estimator which allow a re-location probability of less than one are described and evaluated. Generally, the unmodified estimators were biased but had lower variance. At low sample sizes all estimators performed poorly. Under the null hypothesis, the distribution of all test statistics reasonably approximated the null distribution when survival was low, but not when it was high. The power of the two-sample tests was similar.
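For reference, the unmodified Kaplan-Meier estimator at the center of this comparison can be sketched as follows (ordinary right-censoring only; the telemetry-specific modification for re-location probabilities below one is not shown):

import numpy as np

def kaplan_meier(times, events):
    """times: follow-up days; events: 1 = death observed, 0 = censored.
    Processes one record at a time; ties give the same product as the
    usual grouped (1 - d/n) factors."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    at_risk = len(times)
    surv, s = [], 1.0
    for t, d in zip(times, events):
        if d:
            s *= 1.0 - 1.0 / at_risk     # survival drops only at deaths
        surv.append((int(t), s))
        at_risk -= 1                     # censored animals leave the risk set
    return surv

days   = [12, 30, 45, 45, 60, 71, 90, 90]   # hypothetical telemetry records
deaths = [1,  0,  1,  1,  0,  1,  0,  0]
for t, s in kaplan_meier(days, deaths):
    print(f"day {t:3d}: S(t) = {s:.3f}")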
Developmental Changes in the Consideration of Sample Diversity in Inductive Reasoning
ERIC Educational Resources Information Center
Rhodes, Marjorie; Gelman, Susan A.; Brickman, Daniel
2008-01-01
Determining whether a sample provides a good basis for broader generalizations is a basic challenge of inductive reasoning. Adults apply a diversity-based strategy to this challenge, expecting diverse samples to be a better basis for generalization than homogeneous samples. For example, adults expect that a property shared by two diverse mammals…
ERIC Educational Resources Information Center
Thompson, Bruce
1999-01-01
A study examined effect-size reporting in 23 quantitative articles reported in "Exceptional Children". Findings reveal that effect sizes are rarely being reported, although exemplary reporting practices were also noted. Reasons why encouragement by the American Psychological Association to report effect size has been ineffective are…
NASA Astrophysics Data System (ADS)
Carrier, B. L.; Beaty, D. W.
2017-12-01
NASA's Mars 2020 rover is scheduled to land on Mars in 2021 and will be equipped with a sampling system capable of collecting rock cores, as well as a specialized drill bit for collecting unconsolidated granular material. A key mission objective is to collect a set of samples that have enough scientific merit to justify returning to Earth. In the case of granular materials, we would like to catalyze community discussion on what we would do with these samples if they arrived in our laboratories, as input to decision-making related to sampling the regolith. Numerous scientific objectives have been identified which could be achieved or significantly advanced via the analysis of martian rocks, "regolith," and gas samples. The term "regolith" has more than one definition, including one that is general and one that is much more specific. For the purpose of this analysis we use the term "granular materials" to encompass the most general meaning and restrict "regolith" to a subset of that. Our working taxonomy includes the following: 1) globally sourced airfall dust (dust); 2) saltation-sized particles (sand); 3) locally sourced decomposed rock (regolith); 4) crater ejecta (ejecta); and 5) other. Analysis of martian granular materials could serve to advance our understanding in areas including habitability and astrobiology, surface-atmosphere interactions, chemistry, mineralogy, geology and environmental processes. Results of these analyses would also provide input into planning for future human exploration of Mars, elucidating possible health and mechanical hazards caused by the martian surface material, as well as providing valuable information regarding available resources for ISRU and civil engineering purposes. Results would also be relevant to matters of planetary protection and ground-truthing orbital observations. We will present a preliminary analysis in order to generate community discussion and feedback on the following questions: What are the specific reasons (and their priorities) for collecting samples of granular materials? How do those reasons translate to sampling priorities? In what condition would these samples be expected to be received? What is our best projection of the approach by which these samples would be divided, prepared, and analyzed to achieve our objectives?
Bosgraaf, Remko P; Ketelaars, Pleun J W; Verhoef, Viola M J; Massuger, Leon F A G; Meijer, Chris J L M; Melchers, Willem J G; Bekkers, Ruud L M
2014-07-01
High attendance rates in cervical screening are essential for effective cancer prevention. Offering HPV self-sampling to non-responders increases participation rates. The objectives of this study were to determine why non-responders do not attend regular screening, and why they do or do not participate when offered a self-sampling device. A questionnaire study was conducted in the Netherlands from October 2011 to December 2012. A total of 35,477 non-responders were invited to participate in an HPV self-sampling study; 5,347 women opted out. Finally, 30,130 women received a questionnaire and self-sampling device. The analysis was based on 9,484 returned questionnaires (31.5%) with a self-sample specimen, and 682 (2.3%) without. Among women who returned both, the main reason for non-attendance at cervical screening was that they forgot to schedule an appointment (3,068; 32.3%). The most important reason to use the self-sampling device was the opportunity to take a sample in their own time-setting (4,763; 50.2%). A total of 30.9% of the women who did not use the self-sampling device preferred, after all, to have a cervical smear taken instead. Organisational barriers are the main reason for non-attendance in regular cervical screening. Important reasons for non-responders to the regular screening to use a self-sampling device are convenience and self-control. Copyright © 2014 Elsevier Inc. All rights reserved.
Patterns of Hierarchy in Formal and Principled Moral Reasoning.
ERIC Educational Resources Information Center
Zeidler, Dana Lewis
Measurements of formal reasoning and principled moral reasoning ability were obtained from a sample of 99 tenth grade students. Specific modes of formal reasoning (proportional reasoning, controlling variables, probabilistic, correlational and combinatorial reasoning) were first examined. Findings support the notion of hierarchical relationships…
Mineralogy of SNC Meteorite EET79001 by Simultaneous Fitting of Moessbauer Backscatter Spectra
NASA Technical Reports Server (NTRS)
Morris, Richard V.; Agresti, D. G.
2010-01-01
We have acquired Mössbauer spectra for SNC meteorite EET79001 with a MIMOS II backscatter Mössbauer spectrometer [1] similar to those now operating on Mars as part of the Mars Exploration Rover (MER) missions. We are working to compare the Fe mineralogical composition of martian meteorites with in-situ measurements on Mars. Our samples were hand picked from the >1 mm size fraction of saw fines on the basis of lithology, color, and grain size (Table 1). The chips were individually analyzed at approximately 300 K by placing them on a piece of plastic that was in turn supported by the contact ring of the instrument (oriented vertically). Tungsten foil was used to mask certain areas from analysis. As shown in Figure 1, a variety of spectra was obtained, each resulting from different relative contributions of the Fe-bearing minerals present in the sample. Because the nine samples are reasonably approximated as mixtures of the same Fe-bearing phases in variable proportions, the nine spectra were fit simultaneously (simfit) with a common model, adjusting parameters to a single minimum chi-squared convergence criterion [2]. The starting point for the fitting model and values of hyperfine parameters was the work of Solberg and Burns [3], who identified olivine, pyroxene, and ferrous glass as major, and ilmenite and a ferric phase as minor (<5%), Fe-bearing phases in EET79001.
Vasylkiv, Oleg; Borodianska, Hanna; Badica, Petre; Grasso, Salvatore; Sakka, Yoshio; Tok, Alfred; Su, Liap Tat; Bosman, Michael; Ma, Jan
2012-02-01
Boron carbide B4C powders were subjected to reactive spark plasma sintering (also known as field-assisted sintering, pulsed current sintering or plasma-assisted sintering) under nitrogen atmosphere. For an optimum hexagonal BN (h-BN) content, estimated from X-ray diffraction measurements at approximately 0.4 wt%, the as-prepared B4C-(BxOy/BN) ceramic shows Berkovich and Vickers hardness values of 56.7 +/- 3.1 GPa and 39.3 +/- 7.6 GPa, respectively. These values are higher than for the vacuum-SPS-processed pristine B4C sample and for the samples with mechanically added h-BN. XRD and electron microscopy data suggest that in the samples produced by reactive SPS in N2 atmosphere, containing an estimated 0.3-1.5% h-BN, the crystallite size of the boron carbide grains decreases with increasing amount of N2, while for the newly formed lamellar h-BN the crystallite size is almost constant (approximately 30-50 nm). BN is located at the grain boundaries between the boron carbide grains, where it is wrapped and intercalated by a thin layer of boron oxide. BxOy/BN forms a fine and continuous 3D mesh-like structure, which is a possible reason for the good mechanical properties.
Priors in Whole-Genome Regression: The Bayesian Alphabet Returns
Gianola, Daniel
2013-01-01
Whole-genome enabled prediction of complex traits has received enormous attention in animal and plant breeding and is making inroads into human and even Drosophila genetics. The term “Bayesian alphabet” denotes a growing number of letters of the alphabet used to denote various Bayesian linear regressions that differ in the priors adopted, while sharing the same sampling model. We explore the role of the prior distribution in whole-genome regression models for dissecting complex traits in what is now a standard situation with genomic data where the number of unknown parameters (p) typically exceeds sample size (n). Members of the alphabet aim to confront this overparameterization in various manners, but it is shown here that the prior is always influential, unless n ≫ p. This happens because parameters are not likelihood identified, so Bayesian learning is imperfect. Since inferences are not devoid of the influence of the prior, claims about genetic architecture from these methods should be taken with caution. However, all such procedures may deliver reasonable predictions of complex traits, provided that some parameters (“tuning knobs”) are assessed via a properly conducted cross-validation. It is concluded that members of the alphabet have a place in whole-genome prediction of phenotypes, but have somewhat doubtful inferential value, at least when sample size is such that n ≪ p. PMID:23636739
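The paper's practical recommendation, tune the shrinkage "knob" by cross-validation and judge methods by prediction rather than by inferred effects, can be sketched as follows (ridge regression, the posterior mean under a Gaussian prior, stands in for the Bayesian alphabet; data are synthetic with n << p):

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n, p = 200, 2000                                    # n << p, as in genomic data
X = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP dosages 0/1/2
beta = np.zeros(p)
beta[rng.choice(p, 20, replace=False)] = rng.normal(0, 0.5, 20)
y = X @ beta + rng.normal(0, 1.0, n)

# The alpha grid is the "tuning knob"; it is chosen by internal CV,
# and the whole model is judged by out-of-sample predictive ability.
model = RidgeCV(alphas=np.logspace(-2, 4, 25))
r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated predictive R^2: {r2.mean():.2f}")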
Statistical characterization of a large geochemical database and effect of sample size
Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.
2005-01-01
The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompasses 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States), and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set. Part of the reason relates to the presence of mixtures of subpopulations and outliers. Random samples of the data set with successively smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. Copyright © 2005 Elsevier Ltd. All rights reserved.
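The sample-size effect described here is easy to reproduce in miniature; the sketch below (synthetic concentrations, not NGS data) shows a mixture failing a log-normality test at large n while small subsamples pass:

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# A slightly contaminated lognormal, mimicking mixed subpopulations.
conc = np.concatenate([rng.lognormal(3.0, 0.6, 20000),
                       rng.lognormal(4.5, 0.3, 2000)])

for n in (16511, 1000, 300, 100):
    sample = rng.choice(conc, size=n, replace=False)
    stat, p = stats.normaltest(np.log(sample))   # D'Agostino-Pearson test
    verdict = "reject" if p < 0.05 else "fail to reject"
    print(f"n = {n:5d}: p = {p:.3g} -> {verdict} log-normality")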
Deutsch, Anne-Marie; Lande, R Gregory
2017-07-01
Military suicide rates have been rising over the past decade and continue to challenge military treatment facilities. Assessing suicide risk and improving treatments are a large part of the mission for clinicians who work with uniformed service members. This study attempts to expand the toolkit of military suicide prevention by focusing on protective factors over risk factors. In 1983, Marsha Linehan published a checklist called the Reasons for Living Scale, which asked subjects to check the reasons they choose to continue living, rather than choosing suicide. The authors of this article hypothesized that military service members may have different or additional reasons to live which may relate to their military service. They created a new version of Linehan's inventory by adding protective factors related to military life. The purpose of these additions was to make the inventory more acceptable and relevant to the military population, as well as to identify whether these items constitute a separate subscale as distinguished from previously identified factors. A commonly used assessment tool, the Reasons for Living Inventory (RFL) designed by Marsha Linehan, was expanded to offer items geared to the military population. The RFL presents users with a list of items which may be reasons to not commit suicide (e.g., "I have a responsibility and commitment to my family"). The authors used focus groups of staff and patients in a military psychiatric partial hospitalization program to identify military-centric reasons to live. This process yielded 20 distinct items which were added to Linehan's original list of 48. This expanded list became the Reasons for Living-Military Version. A sample of 200 patients in the military partial hospitalization program completed the inventory at or close to the time of admission. This study was approved by the Institutional Review Board at Walter Reed National Military Medical Center for adhering to ethical principles related to pursuing research with human subjects. The rotated factor matrix revealed six factors that have been labeled as follows: Survival and Coping Beliefs, Military Values, Responsibility to Family, Fear of Suicide/Disability/Unknown, Moral Objections and Child-Related Concerns. The subscale of Military Values is a new factor reflecting the addition of military items to the original RFL. Results suggest that formally assessing protective factors in a military psychiatric population has potential as a useful tool in the prevention of military suicide and therefore warrants further research. The latent factor we have entitled "Military Values" may help identify those service members for whom military training or "esprit de corps" is a reason for living. Further research can focus on further validation, pre/post-treatment effects on scores, expanded clinical use to stimulate increased will to live, or evaluation of whether scores on this scale, or the subscale of Military Values, can predict future suicidal behavior by service members. Finally, a larger sample size may produce more robust results to support these findings. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
Methods for obtaining true particle size distributions from cross section measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lord, Kristina Alyse
2013-01-01
Sectioning methods are frequently used to measure grain sizes in materials. These methods do not provide accurate grain sizes for two reasons. First, the sizes of features observed on random sections are always smaller than the true sizes of solid spherical shaped objects, as noted by Wicksell [1]. This is the case because the section very rarely passes through the center of solid spherical shaped objects randomly dispersed throughout a material. The sizes of features observed on random sections are inversely related to the distance of the center of the solid object from the section [1]. Second, on a plane section through the solid material, larger sized features are more frequently observed than smaller ones due to the larger probability for a section to come into contact with the larger sized portion of the spheres than the smaller sized portion. As a result, it is necessary to find a method that takes into account these reasons for inaccurate particle size measurements, while providing a correction factor for accurately determining true particle size measurements. I present a method for deducing true grain size distributions from those determined from specimen cross sections, either by measurement of equivalent grain diameters or linear intercepts.
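Both biases can be demonstrated with a short simulation (a sketch under simple assumptions: spherical grains, a single random section plane, hit probability proportional to radius):

import numpy as np

rng = np.random.default_rng(11)
true_r = rng.uniform(0.5, 2.0, 100000)        # true sphere radii

# Bias 1: a random plane hits a sphere with probability proportional to R,
# so larger spheres are over-represented among sectioned grains.
hit = rng.uniform(0, true_r.max(), true_r.size) < true_r
cut = true_r[hit]

# Bias 2: for a hit sphere, the plane's distance from the center is uniform
# on [0, R], so the observed circle radius understates the sphere radius.
d = rng.uniform(0, cut)
observed = np.sqrt(cut**2 - d**2)

print(f"mean true radius      : {true_r.mean():.3f}")
print(f"mean sectioned radius : {cut.mean():.3f}  (size bias: larger)")
print(f"mean observed radius  : {observed.mean():.3f}  (section chords: smaller)")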
NASA Astrophysics Data System (ADS)
Lynch, James F.; Irish, James D.; Sherwood, Christopher R.; Agrawal, Yogesh C.
1994-08-01
During the winter of 1990-1991 an Acoustic BackScatter System (ABSS), five Optical Backscatterance Sensors (OBSs) and a Laser In Situ Settling Tube (LISST) were deployed in 90 m of water off the California coast for 3 months as part of the Sediment Transport Events on Shelves and Slopes (STRESS) experiment. By looking at sediment transport events with both optical (OBS) and acoustic (ABSS) sensors, one obtains information about the size of the particles transported as well as their concentration. Specifically, we employ two different methods of estimating "average particle size". First, we use vertical scattering intensity profile slopes (acoustical and optical) to infer average particle size using a Rouse profile model of the boundary layer and a Stokes law fall velocity assumption. Second, we use a combination of optics and acoustics to form a multifrequency (two frequency) inverse for the average particle size. These results are compared to independent observations from the LISST instrument, which measures the particle size spectrum in situ using laser diffraction techniques. Rouse profile based inversions for particle size are found to be in good agreement with the LISST results except during periods of transport event initiation, when the Rouse profile is not expected to be valid. The two frequency inverse, which is boundary layer model independent, worked reasonably during all periods, with average particle sizes correlating well with the LISST estimates. In order to further corroborate the particle size inverses from the acoustical and optical instruments, we also examined size spectra obtained from in situ sediment grab samples and water column samples (suspended sediments), as well as laboratory tank experiments using STRESS sediments. Again, good agreement is noted. The laboratory tank experiment also allowed us to study the acoustical and optical scattering law characteristics of the STRESS sediments. It is seen that, for optics, using the cross sectional area of an equivalent sphere is a very good first approximation, whereas for acoustics, which is most sensitive in the region ka ≈ 1, the particle volume itself is best sensed. In conclusion, we briefly interpret the history of some STRESS transport events in light of the size distribution and other information available. For one of the events, "anomalous" suspended particle size distributions are noted, i.e., larger particles are seen suspended before finer ones. Speculative hypotheses for why this signature is observed are presented.
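The Stokes-law step of the Rouse-profile inversion can be sketched as follows (illustrative seawater constants, not the values used in STRESS; the Rouse number P = w_s/(kappa*u_star) sets the slope of the concentration profile):

import numpy as np

def stokes_fall_velocity(d, rho_s=2650.0, rho_f=1025.0, mu=1.07e-3, g=9.81):
    """Stokes settling velocity (m/s) for particle diameter d (m):
    w_s = (rho_s - rho_f) * g * d**2 / (18 * mu)."""
    return (rho_s - rho_f) * g * d**2 / (18.0 * mu)

def rouse_number(w_s, u_star, kappa=0.40):
    """Rouse number P = w_s / (kappa * u*); in the Rouse profile the
    suspended concentration varies with height as a power law of slope -P,
    which is what the scattering-profile slopes are inverted for."""
    return w_s / (kappa * u_star)

for d_um in (10, 30, 60):
    w_s = stokes_fall_velocity(d_um * 1e-6)
    print(f"d = {d_um:3d} um: w_s = {w_s * 1e3:.3f} mm/s, "
          f"P = {rouse_number(w_s, u_star=0.01):.2f}")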
Developing Students' Reasoning about Samples and Sampling in the Context of Informal Inferences
ERIC Educational Resources Information Center
Meletiou-Mavrotheris, Maria; Paparistodemou, Efi
2015-01-01
The expanding use of data in modern society for prediction and decision-making makes it a priority for mathematics instruction to help students build sound foundations of inferential reasoning at a young age. This study contributes to the emerging research literature on the early development of informal inferential reasoning through the conduct of…
Code of Federal Regulations, 2010 CFR
2010-07-01
... installments. Ordinarily, the size of installment deductions must bear a reasonable relationship to the size of... special attention to applicable statutes of limitations. (c) If the employee retires or resigns or if his...
Zhao, Qi; Liu, Yuanning; Zhang, Ning; Hu, Menghan; Zhang, Hao; Joshi, Trupti; Xu, Dong
2018-01-01
In recent years, an increasing number of studies have reported the presence of plant miRNAs in human samples, which has resulted in a hypothesis asserting the existence of plant-derived exogenous microRNA (xenomiR). However, this hypothesis is not widely accepted in the scientific community, due to possible sample contamination and small sample sizes lacking rigorous statistical analysis. This study provides a systematic statistical test that can validate (or invalidate) the plant-derived xenomiR hypothesis by analyzing 388 small RNA sequencing data sets from human samples in 11 types of body fluids/tissues. A total of 166 types of plant miRNAs were found in at least one human sample, of which 14 plant miRNAs represented more than 80% of the total plant miRNA abundance in human samples. Plant miRNA profiles were characterized as tissue-specific in different human samples. Meanwhile, the plant miRNAs identified from the microbiome had an insignificant abundance compared to those from humans, while plant miRNA profiles in human samples were significantly different from those in plants, suggesting that sample contamination is an unlikely reason for all the plant miRNAs detected in human samples. This study also provides a set of testable synthetic miRNAs with isotopes that can be detected in situ after being fed to animals.
Lambert, Kim; Coe, Jason; Niel, Lee; Dewey, Cate; Sargeant, Jan M
2015-01-01
Companion-animal relinquishment is a worldwide phenomenon that leaves companion animals homeless. Knowing why humans make the decision to end their relationship with a companion-animal can help in our understanding of this complex societal issue and can help to develop preventive strategies. A systematic review and meta-analysis was conducted to summarize reasons why dogs are surrendered, and determine if certain study characteristics were associated with the reported proportions of reasons for surrender. Articles investigating one or more reasons for dog surrender were selected from the references of a published scoping review. Two reviewers assessed the titles and abstracts of these articles, identifying 39 relevant articles. From these, 21 articles were further excluded because of ineligible study design, insufficient data available for calculating a proportion, or no data available for dogs. Data were extracted from 18 articles and meta-analysis was conducted on articles investigating reasons for dog surrender to a shelter (n=9) or dog surrender for euthanasia (n=5). Three studies were excluded from meta-analysis because they were duplicate populations. Other reasons for excluding studies from meta-analysis were, (1) the study only investigated reasons for dog re-relinquishment (n=2) and (2) the study sample size was <10 (n=1). Two articles investigated reasons for both dog surrender to a shelter and dog surrender for euthanasia. Results of meta-analysis found owner health/illness as a reason for dog surrender to a shelter had an overall estimate of 4.6% (95% CI: 4.1%, 5.2%). For all other identified reasons for surrender there was significant variation in methodology among studies preventing further meta-analysis. Univariable meta-regression was conducted to explore sources of variation among these studies. Country was identified as a significant source of variation (p<0.01) among studies reporting behavioural problems as a reason for dog surrender for euthanasia. The overall estimate for studies from Australia was 10% (95% CI: 8.0%, 12.0%; I(2)=15.5%), compared to 16% (95% CI: 15.0%, 18.0%; I(2)=20.2%) for studies from other countries. The present systematic review and meta-analysis highlights the need for further research and standardization of data collection to improve understanding of the reasons for dog relinquishment. Copyright © 2014 Elsevier B.V. All rights reserved.
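For readers unfamiliar with pooling proportions, the sketch below shows the usual inverse-variance approach on the logit scale with a DerSimonian-Laird heterogeneity estimate; the study counts are hypothetical, not those of the review:

import numpy as np

# Hypothetical (events, total) pairs: dogs surrendered for owner health/illness.
studies = [(21, 450), (35, 802), (12, 300), (55, 1104)]

logits, variances = [], []
for x, n in studies:
    p = x / n
    logits.append(np.log(p / (1 - p)))
    variances.append(1 / x + 1 / (n - x))        # variance of the logit
logits, variances = np.array(logits), np.array(variances)

w = 1 / variances                                # fixed-effect weights
mu_fe = np.sum(w * logits) / np.sum(w)
q = np.sum(w * (logits - mu_fe) ** 2)            # Cochran's Q
k = len(studies)
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (variances + tau2)                    # random-effects weights
mu_re = np.sum(w_re * logits) / np.sum(w_re)
pooled = 1 / (1 + np.exp(-mu_re))                # back-transform to a proportion
print(f"pooled proportion: {pooled:.3%}, tau^2 = {tau2:.3f}")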
Predicting herbicide and biocide concentrations in rivers across Switzerland
NASA Astrophysics Data System (ADS)
Wemyss, Devon; Honti, Mark; Stamm, Christian
2014-05-01
Pesticide concentrations vary strongly in space and time. Accordingly, intensive sampling is required to achieve a reliable quantification of pesticide pollution. As this requires substantial resources, loads and concentration ranges in many small and medium streams remain unknown. Here, we propose partially filling the information gap for herbicides and biocides by using a modelling approach that predicts stream concentrations without site-specific calibration, based simply on generally available data such as land use, discharge and nation-wide consumption data. The simple, conceptual model distinguishes herbicide losses from agricultural fields and private gardens and biocide losses from buildings (facades, roofs). The herbicide model is driven by river discharge and the applied herbicide mass; the biocide model requires precipitation and the footprint area of urban areas containing the biocide. The model approach allows for modelling concentrations across multiple catchments at the daily, or shorter, time scale and for small to medium-sized catchments (1 - 100 km2). Four high resolution sampling campaigns in the Swiss Plateau were used to calibrate the model parameters for six model compounds: atrazine, metolachlor, terbuthylazine, terbutryn, diuron and mecoprop. Five additional sampled catchments across Switzerland were used to directly compare the predicted to the measured concentrations. Analysis of the first results reveals a reasonable simulation of the concentration dynamics for specific rainfall events and across the seasons. Predicted concentration ranges are reasonable even without site-specific calibration. This indicates the transferability of the calibrated model directly to other areas. However, the results also demonstrate systematic biases in that the highest measured peaks were not attained by the model. Probable causes for these deviations are conceptual model limitations and input uncertainty (pesticide use intensity, local precipitation, etc.). Accordingly, the model will be conceptually improved. This presentation will show the model simulations and compare the performance of the original and the modified model versions. Finally, the model will be applied across approximately 50% of the catchments in the Swiss Plateau, where the necessary input data are available and where the model concept can be reasonably applied.
Précis of statistical significance: rationale, validity, and utility.
Chow, S L
1998-04-01
The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.
Heiskanen, Kati; Ahonen, Riitta; Kanerva, Risto; Karttunen, Pekka; Timonen, Johanna
2017-01-01
The aim of this study was to explore the reasons behind medicine shortages from the perspective of pharmaceutical companies and pharmaceutical wholesalers in Finland. The study took the form of semi-structured interviews. Forty-one pharmaceutical companies and pharmaceutical wholesalers were invited to participate in the study. The pharmaceutical companies were the member organizations of Pharma Industry Finland (PIF) (N = 30) and the Finnish Generic Pharmaceutical Association (FGPA) (N = 7). One company which is a central player in the pharmaceutical market in Finland but does not belong to PIF or FGPA was also invited. The pharmaceutical wholesalers were those with a nationwide distribution network (N = 3). A total of 30 interviews were conducted between March and June 2016. The data were subjected to qualitative thematic analysis. The most common reasons behind medicine shortages in Finland were the small size of the pharmaceutical market (29/30), sudden or fluctuating demand (28/30), small stock sizes (25/30), long delivery time (23/30) and a long or complex production chain (23/30). The reasons for the medicine shortages were supply-related more often than demand-related. However, the reasons were often complex and there was more than one reason behind a shortage. Supply-related reasons behind shortages commonly interfaced with the country-specific characteristics of Finland, whereas demand-related reasons were commonly associated with the predictability and attractiveness of the market. Some reasons, such as raw material shortages, were considered global and thus had similar effects on other countries.
Rousselet, Jérôme; Imbert, Charles-Edouard; Dekri, Anissa; Garcia, Jacques; Goussard, Francis; Vincent, Bruno; Denux, Olivier; Robinet, Christelle; Dorkeld, Franck; Roques, Alain; Rossi, Jean-Pierre
2013-01-01
Mapping species spatial distribution using spatial inference and prediction requires a lot of data. Occurrence data are generally not easily available from the literature and are very time-consuming to collect in the field. For that reason, we designed a survey to explore the extent to which large-scale databases such as Google Maps and Google Street View could be used to derive valid occurrence data. We worked with the Pine Processionary Moth (PPM) Thaumetopoea pityocampa because the larvae of that moth build silk nests that are easily visible. The presence of the species at one location can therefore be inferred from visual records derived from the panoramic views available in Google Street View. We designed a standardized procedure for evaluating the presence of the PPM on a sampling grid covering the landscape under study. The outputs were compared to field data. We investigated two landscapes using grids of different extent and mesh size. Data derived from Google Street View were highly similar to field data in the large-scale analysis based on a square grid with a mesh of 16 km (96% of matching records). Using a 2 km mesh size led to a strong divergence between field and Google-derived data (46% of matching records). We conclude that the Google database might provide useful occurrence data for mapping the distribution of species whose presence can be visually evaluated, such as the PPM. However, the accuracy of the output strongly depends on the spatial scales considered and on the sampling grid used. Other factors, such as the coverage of the Google Street View network with regard to sampling grid size and the spatial distribution of host trees with regard to the road network, may also be determinant.
The geographical vector in distribution of genetic diversity for Clonorchis sinensis.
Solodovnik, Daria A; Tatonova, Yulia V; Burkovskaya, Polina V
2018-01-01
Clonorchis sinensis, the causative agent of clonorchiasis, is one of the most important parasites inhabiting the countries of East and Southeast Asia. In this study, we validated the existence of a geographical vector for C. sinensis using the partial cox1 mtDNA gene, which includes a conserved region. The parasite samples were divided into groups corresponding to three river basins, and the size of the conserved region showed a strong tendency to increase from the northernmost to the southernmost samples. This indicates the existence of a geographical vector in the distribution of genetic diversity. A vector is a quantity characterized by magnitude and direction; the geographical vector obtained from the cox1 gene of C. sinensis has both of these features. The reasons for the occurrence of this feature, including the influence of intermediate and definitive hosts on vector formation, and the possibility of its use for clonorchiasis monitoring are discussed.
Integrative Data Analysis in Clinical Psychology Research
Hussong, Andrea M.; Curran, Patrick J.; Bauer, Daniel J.
2013-01-01
Integrative Data Analysis (IDA), a novel framework for conducting the simultaneous analysis of raw data pooled from multiple studies, offers many advantages including economy (i.e., reuse of extant data), power (i.e., large combined sample sizes), the potential to address new questions not answerable by a single contributing study (e.g., combining longitudinal studies to cover a broader swath of the lifespan), and the opportunity to build a more cumulative science (i.e., examining the similarity of effects across studies and potential reasons for dissimilarities). There are also methodological challenges associated with IDA, including the need to account for sampling heterogeneity across studies, to develop commensurate measures across studies, and to account for multiple sources of study differences as they impact hypothesis testing. In this review, we outline potential solutions to these challenges and describe future avenues for developing IDA as a framework for studies in clinical psychology. PMID:23394226
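One common IDA tactic is to pool the raw data while letting study membership enter the model as a fixed effect, so that between-study differences are not confounded with the substantive effect. The sketch below illustrates that idea only, with two invented toy datasets; it is not the authors' own analysis, and it assumes the pandas and statsmodels packages.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Two hypothetical studies measuring the same outcome on commensurate scales.
df = pd.concat([
    pd.DataFrame({"study": "A", "x": [1, 2, 3, 4], "y": [2.1, 2.9, 4.2, 4.8]}),
    pd.DataFrame({"study": "B", "x": [1, 2, 3, 4], "y": [2.6, 3.4, 4.5, 5.6]}),
])

# Pooled ("integrative") analysis: study membership is a fixed effect, so
# between-study mean differences do not masquerade as the effect of x.
fit = smf.ols("y ~ x + C(study)", data=df).fit()
print(fit.params)
```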
Exploring the Factor Structure of Neurocognitive Measures in Older Individuals
Santos, Nadine Correia; Costa, Patrício Soares; Amorim, Liliana; Moreira, Pedro Silva; Cunha, Pedro; Cotter, Jorge; Sousa, Nuno
2015-01-01
Here we focus on factor analysis from a best-practices point of view, by investigating the factor structure of neuropsychological tests and using the results obtained to illustrate how to choose a reasonable solution. The sample (n=1051 individuals) was randomly divided into two groups: one for exploratory factor analysis (EFA) and principal component analysis (PCA), to investigate the number of factors underlying the neurocognitive variables; the second to test the “best fit” model via confirmatory factor analysis (CFA). For the exploratory step, three extraction (maximum likelihood, principal axis factoring and principal components) and two rotation (orthogonal and oblique) methods were used. The analysis methodology allowed exploring how different cognitive/psychological tests correlated with and discriminated between dimensions, indicating that, for similar sample sizes and measures with an approximately normal data distribution, reflective models with oblimin rotation might prove the most adequate for capturing latent structures. PMID:25880732
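As a concrete illustration of the exploratory step, the sketch below runs a maximum-likelihood EFA with an oblique (oblimin) rotation on simulated test scores. It assumes the third-party factor_analyzer package; the two-factor structure and loadings are invented for the example.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(1)

# Hypothetical scores on 6 neurocognitive tests for 500 individuals,
# generated from 2 correlated latent dimensions.
loadings = np.array([[.8, .1], [.7, .2], [.6, .1],
                     [.1, .8], [.2, .7], [.1, .6]])
latent = rng.multivariate_normal([0, 0], [[1, .3], [.3, 1]], size=500)
X = latent @ loadings.T + rng.normal(scale=.5, size=(500, 6))

# Maximum-likelihood extraction with an oblimin rotation, the combination
# the authors found most adequate for data like these.
fa = FactorAnalyzer(n_factors=2, method="ml", rotation="oblimin")
fa.fit(X)
print(np.round(fa.loadings_, 2))
```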
The decision to extract: part II. Analysis of clinicians' stated reasons for extraction.
Baumrind, S; Korn, E L; Boyd, R L; Maxwell, R
1996-04-01
In a recently reported study, the pretreatment records of each subject in a randomized clinical trial of 148 patients with Class I and Class II malocclusions presenting for orthodontic treatment were evaluated independently by five experienced clinicians (drawn from a panel of 14). The clinicians displayed a higher incidence of agreement with each other than had been expected with respect to the decision as to whether extraction was indicated in each specific case. To improve our understanding of how clinicians decided whether or not to extract, the records of a subset of 72 subjects randomly selected from the full sample of 148 have now been examined in greater detail. In 21 of these cases, all five clinicians decided to treat without extraction. Among the remaining 51 cases, there were 202 decisions to extract (31 unanimous-decision cases and 20 split-decision cases). The clinicians cited a total of 469 reasons to support these decisions. Crowding was cited as the first reason in 49% of decisions to extract, followed by incisor protrusion (14%), need for profile correction (8%), Class II severity (5%), and achievement of a stable result (5%). When all the reasons for extraction in each clinician's decision were considered as a group, crowding was cited in 73% of decisions, incisor protrusion in 35%, need for profile correction in 27%, Class II severity in 15%, and posttreatment stability in 9%. Tooth size anomalies, midline deviations, reduced growth potential, severity of overjet, maintenance of existing profile, desire to close the bite, periodontal problems, and anticipation of poor cooperation accounted collectively for 12% of the first reasons and were mentioned in 54% of the decisions, implying that these considerations play a consequential, if secondary, role in the decision-making process. All other reasons taken together were mentioned in fewer than 20% of cases. In this sample at least, clinicians focused heavily on appearance-related factors that are qualitatively determinable by physical examination of the surface structures of the face and teeth. They appear to have made primary use of indicators available on study casts and facial photographs and relatively little use of information that is available only on cephalograms or that involves the application of specialized orthodontic theories.
What about N? A methodological study of sample-size reporting in focus group studies.
Carlsen, Benedicte; Glenton, Claire
2011-03-11
Focus group studies are increasingly published in health-related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were, first, to describe the current status of sample size in focus group studies reported in health journals and, second, to assess whether and how researchers explain the number of focus groups they carry out. We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how the number of groups was explained and discussed. We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory, where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Based on these findings we suggest that journals adopt more stringent requirements for focus group method reporting. The often poor and inconsistent reporting seen in these studies may also reflect the lack of clear, evidence-based guidance about deciding on sample size. More empirical research is needed to develop focus group methodology.
van Velthoven, Michelle Helena; Li, Ye; Wang, Wei; Du, Xiaozhen; Wu, Qiong; Chen, Li; Majeed, Azeem; Rudan, Igor; Zhang, Yanfeng; Car, Josip
2013-01-01
Background We set up a collaboration between researchers in China and the UK that aimed to explore the use of mHealth in China. This is the first paper in a series of papers on a large mHealth project that is part of this collaboration. This paper presents the aims and objectives of the mHealth project, our field site, and the detailed methods of two studies. Field site The field site for this mHealth project was Zhao County, which lies 280 km south of Beijing in Hebei Province, China. Methods We described the methodology of two studies: (i) a mixed methods study exploring factors influencing sample size calculations for mHealth-based health surveys and (ii) a cross-over study determining the validity of an mHealth text messaging data collection tool. The first study used mixed methods, both quantitative and qualitative, including: (i) two surveys with caregivers of young children, (ii) interviews with caregivers, village doctors and participants of the cross-over study, and (iii) researchers' views. We combined data from caregivers, village doctors and researchers to provide an in-depth understanding of factors influencing sample size calculations for mHealth-based health surveys. The second study used a randomised cross-over design to compare the traditional face-to-face survey method to the new text messaging survey method. We assessed data equivalence (intrarater agreement), the amount of information in responses, reasons for giving different responses, the response rate, characteristics of non-responders, and the error rate. Conclusions This paper described the objectives, field site and methods of a large mHealth project that is part of a collaboration between researchers in China and the UK. The mixed methods study evaluating factors that influence sample size calculations could help future studies estimate reliable sample sizes. The cross-over study comparing face-to-face and text message survey data collection could help future studies develop their mHealth tools. PMID:24363919
Chung, K Y; Carter, G J; Stancliffe, J D
1999-02-01
A new European/International Standard (ISOprEN 10882-1) on the sampling of airborne particulates generated during welding and allied processes has been proposed. The use of a number of samplers and sampling procedures is allowable within the defined protocol. The influence of these variables on welding fume exposures measured during welding and grinding of stainless and mild steel using the gas metal arc (GMA) and flux-cored arc (FCA) processes, and GMA welding of aluminium, has been examined. Results show that the use of any of the samplers will not give significantly different measured exposures. The effect on exposure measurement of placing the samplers on either side of the head was variable; consequently, sampling position cannot be meaningfully defined. All samplers collected significant amounts of grinding dust. Therefore, gravimetric determination of welding fume exposure in atmospheres containing grinding dust will be inaccurate. A new size-selective sampler can, to some extent, give a more accurate estimate of exposure. The reliability of fume analysis data for welding consumables has caused concern, and the reason for the differences that existed between the material safety data sheets and the analysis of collected fume samples requires further investigation.
ERIC Educational Resources Information Center
Garfield, Joan; Le, Laura; Zieffler, Andrew; Ben-Zvi, Dani
2015-01-01
This paper describes the importance of developing students' reasoning about samples and sampling variability as a foundation for statistical thinking. Research on expert-novice thinking as well as statistical thinking is reviewed and compared. A case is made that statistical thinking is a type of expert thinking, and as such, research…
29 CFR 1450.12 - Collection in installments.
Code of Federal Regulations, 2010 CFR
2010-07-01
... arrangement and which contains a provision accelerating the debt in the event the debtor defaults. The size and frequency of installment payments should bear a reasonable relation to the size of the debt and the debtor's ability to pay. If possible, the installment payments should be sufficient in size and...
22 CFR 512.12 - Collection in installments.
Code of Federal Regulations, 2010 CFR
2010-04-01
... provision accelerating the debt in the event the debtor defaults. The size and frequency of the payments should bear a reasonable relation to the size of the debt and ability to the debtor to pay. If possible the installment payments should be sufficient in size and frequency to liquidate the Government's...
14 CFR 1261.411 - Collection in installments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... event the debtor defaults. The size and frequency of installment payments should bear a reasonable relation to the size of the debt and the debtor's ability to pay. If possible, the installment payments should be sufficient in size and frequency to liquidate the Government's claim in not more than 3 years...
29 CFR 20.33 - Collection in installments.
Code of Federal Regulations, 2010 CFR
2010-07-01
... accelerating the debt in the event the debtor defaults. The size and frequency of installment payments should bear a reasonable relation to the size of the debt and the debtor's ability to pay. If possible, the installment payments should be sufficient in size and frequency to liquidate the Government's claim in not...
An empirical model of human aspiration in low-velocity air using CFD investigations.
Anthony, T Renée; Anderson, Kimberly R
2015-01-01
Computational fluid dynamics (CFD) modeling was performed to investigate the aspiration efficiency of the human head in low velocities to examine whether the current inhaled particulate mass (IPM) sampling criterion matches the aspiration efficiency of an inhaling human in airflows common to worker exposures. Data from both mouth and nose inhalation, averaged to assess omnidirectional aspiration efficiencies, were compiled and used to generate a unifying model to relate particle size to aspiration efficiency of the human head. Multiple linear regression was used to generate an empirical model to estimate human aspiration efficiency and included particle size as well as breathing and freestream velocities as dependent variables. A new set of simulated mouth and nose breathing aspiration efficiencies was generated and used to test the fit of empirical models. Further, empirical relationships between test conditions and CFD estimates of aspiration were compared to experimental data from mannequin studies, including both calm-air and ultra-low velocity experiments. While a linear relationship between particle size and aspiration is reported in calm air studies, the CFD simulations identified a more reasonable fit using the square of particle aerodynamic diameter, which better addressed the shape of the efficiency curve's decline toward zero for large particles. The ultimate goal of this work was to develop an empirical model that incorporates real-world variations in critical factors associated with particle aspiration to inform low-velocity modifications to the inhalable particle sampling criterion.
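To make the regression structure concrete, here is a minimal sketch of fitting aspiration efficiency to the square of aerodynamic diameter plus breathing and freestream velocities using ordinary least squares. The ten data points and the resulting coefficients are invented, not the paper's.

```python
import numpy as np

# Hypothetical CFD results: aspiration efficiency vs. particle aerodynamic
# diameter (um), breathing velocity and freestream velocity (m/s).
d   = np.array([10, 20, 40, 60, 80, 100, 10, 40, 80, 100], dtype=float)
ub  = np.array([1.8] * 5 + [4.3] * 5)
uf  = np.array([0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.4, 0.4, 0.4, 0.4])
eff = np.array([0.98, 0.95, 0.82, 0.62, 0.40, 0.25, 0.99, 0.85, 0.45, 0.28])

# Regress efficiency on d^2 rather than d: the squared term captures the
# curve's accelerating decline toward zero for large particles.
A = np.column_stack([np.ones_like(d), d**2, ub, uf])
coef, *_ = np.linalg.lstsq(A, eff, rcond=None)
print(coef)   # intercept, d^2 term, breathing-velocity term, freestream term
```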
Production of drug nanosuspensions: effect of drug physical properties on nanosizing efficiency.
Liu, Tao; Müller, Rainer H; Möschwitzer, Jan P
2018-02-01
Drug nanosuspensions are one of the established approaches to improving the bioavailability of poorly soluble drugs. The physical properties of the drug (morphology, solid state, starting size, etc.) are critical parameters determining production efficiency. Some drug modification approaches, such as spray-drying, have been shown to improve the millability of drug powders. However, the mechanism behind this improved performance is unclear. This study systematically investigates the influence of these physical properties. Five different APIs (active pharmaceutical ingredients) with different millabilities, i.e. resveratrol, hesperetin, glibenclamide, rutin, and quercetin, were processed by standard high pressure homogenization (HPH), wet bead milling (WBM), and a combinative method of spray-drying and HPH. Smaller starting sizes of certain APIs could accelerate particle size reduction during both HPH and WBM processes. Spherical particles were observed for almost all spray-dried powders (except spray-dried hesperetin). The crystallinity of some spray-dried samples, such as rutin and glibenclamide, became much lower than that of the corresponding unmodified powders. Almost all spray-dried drug powders led to smaller nanocrystal particle sizes after HPH than the unmodified APIs. The modified microstructure, rather than the solid state, after spray-drying explains the improved nanosizing efficiency. In addition, the contribution of starting size to production efficiency was also critical according to both HPH and WBM results.
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem is to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit under-estimated when using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
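The two strategies can be sketched numerically. In the toy example below the "adjustment" simply rescales the full-sample chi-square statistic in proportion to the target sample size (a simplification of the adjustment functions studied), while the alternative recomputes the statistic on an actual random subsample; all data are simulated.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
N, n_adj = 21_000, 5_000

# Two hypothetical binary variables with a weak association.
x = rng.integers(0, 2, N)
y = (rng.random(N) < 0.5 + 0.02 * x).astype(int)

def chi2_stat(xs, ys):
    table = np.array([[np.sum((xs == i) & (ys == j)) for j in (0, 1)]
                      for i in (0, 1)])
    return chi2_contingency(table, correction=False)[0]

full = chi2_stat(x, y)

# Strategy 1: rescale the full-sample statistic to the target sample size.
adjusted = full * n_adj / N

# Strategy 2: recompute the statistic on an actual random subsample.
idx = rng.choice(N, size=n_adj, replace=False)
subsampled = chi2_stat(x[idx], y[idx])

print(f"adjusted: {adjusted:.2f}, random subsample: {subsampled:.2f}")
```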
Tang, Weiming; Yang, Haitao; Mahapatra, Tanmay; Huan, Xiping; Yan, Hongjing; Li, Jianjun; Fu, Gengfeng; Zhao, Jinkou; Detels, Roger
2013-01-01
Background Respondent-driven sampling (RDS) has been well recognized as a method for sampling from most hard-to-reach populations, such as commercial sex workers, drug users, and men who have sex with men (MSM). However, the feasibility of this sampling strategy in terms of recruiting a diverse spectrum of these hidden populations is not yet well understood in developing countries. Methods In a cross-sectional study in Nanjing city of Jiangsu province, China, 430 MSM, including 9 seeds, were recruited over a 14-week study period using RDS. Information regarding socio-demographic characteristics and sexual risk behavior was collected, and testing was done for HIV and syphilis. Duration, completion, participant characteristics and the equilibrium of key factors were used to assess the feasibility of RDS. Homophily of key variables, socio-demographic distribution and social network size were used as indicators of diversity. Results In the study sample, adjusted HIV and syphilis prevalence were 6.6% and 14.6%, respectively. The majority (96.3%) of the participants were recruited by members of their own social network. Although there was a tendency for recruitment within the same self-identified group (homosexuals recruited 60.0% homosexuals), considerable cross-group recruitment (bisexuals recruited 52.3% homosexuals) was also seen. Homophily of the self-identified sexual orientations was 0.111 for homosexuals. Upon completion of the recruitment process, participant characteristics and the equilibrium of key factors indicated that RDS was feasible for sampling MSM in Nanjing. Participants recruited by RDS were found to be diverse after assessing the homophily of key variables in successive waves of recruitment, the proportion of characteristics after reaching equilibrium and the social network size. The observed design effects were nearly the same as or even better than the theoretical design effect of 2. Conclusion RDS was found to be an efficient and feasible sampling method for recruiting a diverse sample of MSM in a reasonable time. PMID:24244280
Currie, Robert W.
2016-01-01
Extreme winter losses of honey bee colonies are a major threat to beekeeping, but the combinations of factors underlying colony loss remain debatable. We monitored colonies in two environments (colonies wintered indoors or outdoors) and characterized the effects of two parasitic mites, seven viruses, and Nosema on honey bee colony mortality and population loss over winter. Samples were collected from two locations within hives in fall, mid-winter and spring of 2009/2010. Although fall parasite and pathogen loads were similar in outdoor- and indoor-wintered colonies, the outdoor-wintered colonies had greater relative reductions in bee population score over winter. Seasonal patterns in deformed wing virus (DWV), black queen cell virus (BQCV), and Nosema levels also differed with the wintering environment. DWV and Nosema levels decreased over winter for indoor-wintered colonies but BQCV did not. Both BQCV and Nosema concentrations increased over winter in outdoor-wintered colonies. The mean abundance of Varroa decreased and concentrations of Sacbrood virus (SBV), Kashmir bee virus (KBV), and Chronic bee paralysis virus (CBPV) increased over winter, but seasonal patterns were not affected by wintering method. For most viruses, either entrance or brood area samples were reasonable predictors of colony virus load, but there were significant season*sample location interactions for Nosema and BQCV, indicating that care must be taken when selecting samples from a single location. For Nosema spp., fall entrance samples were better predictors of future infestation levels than fall brood area samples. For indoor-wintered colonies, Israeli acute paralysis virus (IAPV) concentration was negatively correlated with spring population size. For outdoor-wintered hives, spring Varroa abundance and DWV concentration were positively correlated with bee loss and negatively correlated with spring population size. Multivariate analyses indicated that higher DWV in fall-collected samples was associated with colony death, as was high SBV in spring-collected samples. PMID:27448049
16 CFR 642.3 - Prescreen opt-out notice.
Code of Federal Regulations, 2014 CFR
2014-01-01
... size that is larger than the type size of the principal text on the same page, but in no event smaller than 12-point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (ii) On the...
16 CFR 642.3 - Prescreen opt-out notice.
Code of Federal Regulations, 2013 CFR
2013-01-01
... size that is larger than the type size of the principal text on the same page, but in no event smaller than 12-point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (ii) On the...
16 CFR 642.3 - Prescreen opt-out notice.
Code of Federal Regulations, 2011 CFR
2011-01-01
... size that is larger than the type size of the principal text on the same page, but in no event smaller than 12-point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (ii) On the...
The prevalence of domestic violence within different socio-economic classes in Central Trinidad.
Nagassar, R P; Rawlins, J M; Sampson, N R; Zackerali, J; Chankadyal, K; Ramasir, C; Boodram, R
2010-01-01
Domestic violence is a medical and social issue that often leads to negative consequences for society. This paper examines the association between the prevalence of domestic violence and socio-economic class in Central Trinidad. The paper also explores the major perceived causes of physical abuse in Central Trinidad. Participants were selected using a two-stage stratified sampling method within the Couva district. Households, each contributing one participant, were stratified into different socio-economic classes (SES class), and each stratum's share of the sample was set proportional to its share of the sampling frame; stratum members were then randomly selected. The sampling method attempted to balance and thereby minimize racial, age and cultural biases and confounding factors. The participant chosen had to be older than 16 years of age, female and a resident of the household. If more than one female was at home, the most senior was interviewed. The study found a statistically significant relationship between verbal abuse (p = 0.0017), physical abuse (p = 0.0012) and financial abuse (p = 0.001) and socio-economic class. Of all the socio-economic classes considered, the highest prevalence of domestic violence occurred among the working class and lower middle socio-economic classes. The most prominent perceived causes of the physical violence were drug and alcohol abuse (37%) and communication differences (16.3%). The power of the study was 0.78 and the all-strata prevalence of domestic violence was 41%. Domestic violence was reported within all socio-economic class groupings but it was most prevalent within the working class and lower middle socio-economic classes. The major perceived cause of domestic violence was alcohol/drug abuse.
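Proportional allocation, as used in the stratified design above, fixes each stratum's sample share at its share of the sampling frame. A minimal sketch with invented frame counts:

```python
# Proportional allocation: each socio-economic stratum's share of the sample
# equals its share of the sampling frame. Frame counts are hypothetical.
frame = {"upper": 300, "upper middle": 900, "lower middle": 1800, "working": 2000}
n_total = 400

N = sum(frame.values())
allocation = {stratum: round(n_total * size / N) for stratum, size in frame.items()}
print(allocation)
# {'upper': 24, 'upper middle': 72, 'lower middle': 144, 'working': 160}
```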
The U.S. Geological Survey coal quality (COALQUAL) database version 3.0
Palmer, Curtis A.; Oman, Charles L.; Park, Andy J.; Luppens, James A.
2015-12-21
Because of database size limits during the development of COALQUAL Version 1.3, many analyses of individual bench samples were merged into whole coal bed averages. The methodology for making these composite intervals was not consistent. Size limits also restricted the amount of georeferencing information and forced removal of qualifier notations such as "less than detection limit" (<) information, which can cause problems when using the data. A review of the original data sheets revealed that COALQUAL Version 2.0 was missing information that was needed for a complete understanding of a coal section. Another important database issue to resolve was the USGS "remnant moisture" problem. Prior to 1998, tests for remnant moisture (as-determined moisture in the sample at the time of analysis) were not performed on any USGS major, minor, or trace element coal analyses. Without the remnant moisture, it is impossible to convert the analyses to a usable basis (as-received, dry, etc.). Based on remnant moisture analyses of hundreds of samples of different ranks (and known residual moisture) reported after 1998, it was possible to develop a method to provide reasonable estimates of remnant moisture for older data to make it more useful in COALQUAL Version 3.0. In addition, COALQUAL Version 3.0 is improved by (1) adding qualifiers, including statistical programming to deal with the qualifiers; (2) clarifying the sample compositing problems; and (3) adding associated samples. Version 3.0 of COALQUAL also represents the first attempt to incorporate data verification by mathematically crosschecking certain analytical parameters. Finally, a new database system was designed and implemented to replace the outdated DOS program used in earlier versions of the database.
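The basis conversions that the remnant moisture makes possible are standard coal-analysis arithmetic. The sketch below shows the usual as-determined-to-dry and dry-to-as-received conversions with illustrative numbers; it is not the USGS estimation method for older data, only the formulas such an estimate feeds into.

```python
# Converting an as-determined analysis to dry and as-received bases once the
# remnant (residual) moisture is known or estimated. Values are illustrative.
ash_as_determined = 9.8    # wt %, measured on the analysis sample
remnant_moisture = 2.0     # wt %, moisture still in the analysis sample
total_moisture = 28.5      # wt %, as-received moisture of the coal

dry = ash_as_determined * 100 / (100 - remnant_moisture)
as_received = dry * (100 - total_moisture) / 100
print(f"dry basis: {dry:.2f} %, as-received basis: {as_received:.2f} %")
```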
Gebrekirstos, Kahsu; Abebe, Mesfin; Fantahun, Atsede
2014-06-21
Every social grouping in the world has its own cultural practices and beliefs which guide its members on how they should live or behave. Harmful traditional practices that affect children include female genital mutilation, milk teeth extraction, food taboos, uvula cutting, keeping babies out of exposure to the sun, and feeding fresh butter to newborn babies. The objective of this study was to assess factors associated with harmful traditional practices among children less than 5 years of age in Axum town, North Ethiopia. A community-based cross-sectional study was conducted with 752 participants who were selected using multi-stage sampling; simple random sampling was used to select ketenas from all kebelles of Axum town. After proportional allocation of the sample size, systematic random sampling was used to select the study participants. Data were collected using an interviewer-administered Tigrigna-version questionnaire, and were entered and analyzed using SPSS version 16. Descriptive statistics were calculated and logistic regression was used to analyze the data. Of the total sample, 50.7% of the children were female, the mean age of the children was 26.28 months, and the majority of mothers had no formal education. About 87.8% of mothers had performed at least one traditional practice on their children; uvula cutting was practiced on 86.9% of children, followed by milk teeth extraction (12.5%) and eyebrow incision (2.4%). Fear of swelling, pus and rupture of the uvula was the main reason given for performing uvula cutting. The factors associated with harmful traditional practices were educational status, occupation and religion of the mothers, and harmful traditional practices performed on the mothers themselves.
[Immunological surrogate endpoints to evaluate vaccine efficacy].
Jin, Pengfei; Li, Jingxin; Zhou, Yang; Zhu, Fengcai
2015-12-01
An immunological surrogate endpoint is a vaccine-induced immune response (either humoral or cellular) that predicts protection against clinical endpoints (infection or disease) and can be used to evaluate vaccine efficacy in clinical vaccine trials. Compared with field efficacy trials observing clinical endpoints, immunological vaccine trials can reduce the sample size or shorten the duration of a trial, which promotes the licensure and development of new candidate vaccines. For these reasons, establishing immunological surrogate endpoints is one of the 14 Grand Challenges in Global Health of the National Institutes of Health (NIH) and the Bill and Melinda Gates Foundation. This review provides a comprehensive description of immunological surrogate endpoints in two parts: their definition and the statistical methods used to evaluate them.
Critical considerations when planning experimental in vivo studies in dental traumatology.
Andreasen, Jens O; Andersson, Lars
2011-08-01
In vivo studies are sometimes needed to understand healing processes after trauma. For several reasons, not the least ethical, such studies have to be carefully planned and important considerations have to be taken into account about suitability of the experimental model, sample size and optimizing the accuracy of the analysis. Several manuscripts of in vivo studies are submitted for publication to Dental Traumatology and rejected because of inadequate design, methodology or insufficient documentation of the results. The authors have substantial experience in experimental in vivo studies of tissue healing in dental traumatology and share their knowledge regarding critical considerations when planning experimental in vivo studies. © 2011 John Wiley & Sons A/S.
Hannouche, A; Chebbo, G; Ruban, G; Tassin, B; Lemaire, B J; Joannis, C
2011-01-01
This article confirms the existence of a strong linear relationship between turbidity and total suspended solids (TSS) concentration. However, the slope of this relation varies between dry and wet weather conditions, as well as between sites. The effect of this variability on estimating the instantaneous wet weather TSS concentration is assessed on the basis of the size of the calibration dataset used to establish the turbidity - TSS relationship. Results obtained indicate limited variability both between sites and during dry weather, along with a significant inter-event variability. Moreover, turbidity allows an evaluation of TSS concentrations with an acceptable level of accuracy for a reasonable rainfall event sampling campaign effort.
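A minimal sketch of this calibration workflow: fit a site- and weather-condition-specific linear turbidity-TSS relation on paired samples, then convert a continuous turbidity record into TSS estimates. The paired values below are invented.

```python
import numpy as np

# Hypothetical paired measurements from one site and weather condition:
# turbidity (NTU) and laboratory TSS (mg/L).
turbidity = np.array([12, 35, 60, 88, 120, 150, 190, 240], dtype=float)
tss       = np.array([18, 52, 95, 140, 180, 230, 300, 370], dtype=float)

# Calibrate the linear relation TSS = a * turbidity + b ...
a, b = np.polyfit(turbidity, tss, 1)

# ... then estimate instantaneous wet-weather TSS from a turbidity record.
new_turbidity = np.array([45.0, 170.0])
print(a * new_turbidity + b)
```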
Tapered enlarged ends in multimode optical fibers.
Brenci, M; Falciai, R; Scheggi, A M
1982-01-15
Radiation characteristics of multimode fibers with enlarged tapers were investigated on a number of samples obtained by varying the fiber drawing speed according to a given law corresponding to a prefixed taper profile. The characterization of the fibers was made by near- and far-field intensity pattern measurements as well as by measuring the losses introduced by the taper. With a suitable choice of parameters the taper constitutes a reasonable low-loss component useful, for example, for efficient coupling to large-spot high-power density sources or for connecting fibers of different sizes. Conversely, at the exit of the fiber the taper can be used for beam shaping, which is of interest for mechanical or surgical applications.
Fast, Dense Low Cost Scintillator for Nuclear Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woody, Craig
2009-07-31
We have studied the morphology, transparency, and optical properties of SrHfO3:Ce ceramics. Ceramics can be made transparent by carefully controlling the stoichiometry of the precursor powders. When fully dense, transparent samples can be obtained. Ceramics with a composition close to stoichiometry (Sr:Hf ~ 1) appear to show good transparency and a reasonable light yield several times that of BGO. The contact and distance transparency of ceramics hot-pressed at about 1450°C is very good, but deteriorates at increasingly higher hot-press temperatures. If these ceramics can be produced in large quantities and sizes, at low cost, they may be of considerable interest for PET and CT.
Comparing four methods to estimate usual intake distributions.
Souverein, O W; Dekkers, A L; Geelen, A; Haubrock, J; de Vries, J H; Ocké, M C; Harttig, U; Boeing, H; van 't Veer, P
2011-07-01
The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As 'true' usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data from the European Food Consumption Validation (EFCOVAL) Study, in which two 24-h dietary recalls (24-HDRs) and food frequency data were collected. The methods compared were the Iowa State University Method (ISU), National Cancer Institute Method (NCI), Multiple Source Method (MSM) and Statistical Program for Age-adjusted Dietary Assessment (SPADE). Simulation data were constructed with varying numbers of subjects (n), different values for the Box-Cox transformation parameter (λ(BC)) and different values for the ratio of the within- and between-person variance (r(var)). All data were analyzed with the four different methods and the estimated usual mean intake and selected percentiles were obtained. Moreover, the 2-day within-person mean was estimated as an additional 'method'. These five methods were compared in terms of the mean bias, calculated as the mean of the differences between the estimated value and the known true value. The application to data from the EFCOVAL Project included calculations for nutrients (that is, protein, potassium, protein density) and foods (that is, vegetables, fruit and fish). Overall, the mean bias of the ISU, NCI, MSM and SPADE Methods was small. However, for all methods, the mean bias and the variation of the bias increased with smaller sample size, higher variance ratios and more pronounced departures from normality. Serious mean bias (especially in the 95th percentile) was seen using the NCI Method when r(var) = 9, λ(BC) = 0 and n = 1000. The ISU Method and MSM showed a somewhat higher s.d. of the bias compared with the NCI and SPADE Methods, indicating a larger method uncertainty. Furthermore, whereas the ISU, NCI and SPADE Methods produced unimodal density functions by definition, MSM produced distributions with 'peaks' when sample size was small, because the population's usual intake distribution was based on estimated individual usual intakes. The application to the EFCOVAL data showed that all estimates of the percentiles and mean were within 5% of each other for the three nutrients analyzed. For vegetables, fruit and fish, the differences were larger than for nutrients, but overall the sample mean was estimated reasonably. The four methods compared seem to provide good estimates of the usual intake distribution of nutrients. Nevertheless, care needs to be taken when a nutrient has high within-person variation or a highly skewed distribution, and when the sample size is small. As the methods offer different features, practical reasons may exist to prefer one method over another.
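All of these methods share one core move: separating within-person (day-to-day) from between-person variance on an approximately normal scale and shrinking short-term person means accordingly. The sketch below illustrates that idea only — it is not the ISU, NCI, MSM or SPADE implementation — using simulated data on an already-transformed scale with a within/between variance ratio of 4.

```python
import numpy as np

rng = np.random.default_rng(3)
n, days = 1000, 2
var_between, var_within = 1.0, 4.0   # r_var = within/between variance ratio

# Hypothetical intakes on an already-transformed, approximately normal scale.
usual = rng.normal(50.0, np.sqrt(var_between), size=(n, 1))
intakes = usual + rng.normal(0.0, np.sqrt(var_within), size=(n, days))

person_mean = intakes.mean(axis=1)
s2_within = intakes.var(axis=1, ddof=1).mean()
s2_between = max(person_mean.var(ddof=1) - s2_within / days, 0.0)

# Shrinking each 2-day mean toward the grand mean strips out the day-to-day
# noise that inflates the tails of the within-person mean distribution.
w = s2_between / (s2_between + s2_within / days)
usual_hat = person_mean.mean() + w * (person_mean - person_mean.mean())

for label, v in [("true usual", usual.ravel()),
                 ("2-day mean", person_mean),
                 ("shrunken", usual_hat)]:
    print(f"{label:>10}: P95 = {np.percentile(v, 95):.2f}")
```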
Durability Assessment of Gamma TiAl
NASA Technical Reports Server (NTRS)
Draper, Susan L.; Lerch, Bradley A.; Pereira, J. Michael; Miyoshi, Kazuhisa; Arya, Vinod K.; Zhuang, Wyman
2004-01-01
Gamma TiAl was evaluated as a candidate alloy for low-pressure turbine blades in aeroengines. The durability of γ-TiAl was studied by examining the effects of impact or fretting on its fatigue strength. Cast-to-size Ti-48Al-2Cr-2Nb, the reference alloy, was impact tested with different size projectiles at various impact energies and subsequently fatigue tested. Impacting degraded the residual fatigue life. However, under the ballistic impact conditions studied, it was concluded that the impacts expected in an aeroengine would not result in catastrophic damage, nor would the damage be severe enough to result in a fatigue failure under the anticipated design loads. In addition, other gamma alloys were investigated, including another cast-to-size alloy, several cast and machined specimens, and a forged alloy. Within this Ti-48-2-2 family of alloys, aluminum content was also varied. The cracking patterns resulting from impacting were documented and correlated with impact variables. The cracking type and severity were reasonably predicted using finite element models. Mean stress effects were also studied on impact-damaged fatigue samples. The fatigue strength was accurately predicted based on the flaw size using a threshold-based, fracture mechanics approach. To study the effects of wear due to potential applications in a blade-disk dovetail arrangement, the machined Ti-47-2-2 alloy was fretted against In-718 using pin-on-disk experiments. Wear mechanisms were documented and compared to those of Ti-6Al-4V. A few fatigue samples were also fretted and subsequently fatigue tested. It was found that, under the conditions studied, the fretting was not severe enough to affect the fatigue strength of γ-TiAl.
NASA Astrophysics Data System (ADS)
Fang, G. C.; Zhang, L.; Huang, C. S.
2012-12-01
Daily samples of size-fractionated (18, 10, 2.5 and 1.0 μm) particulate-bound mercury Hg(p) were collected using Micro-Orifice Uniform Deposition Impactors (MOUDI), on randomly selected days each month between November 2010 and July 2011, at a traffic site (Hungkuang), a wetland site (Gaomei), and an industrial site (Quanxing) in central Taiwan. Bulk dry deposition was also collected simultaneously using a surrogate surface. The nine-month average (±standard deviation) Hg(p) concentrations were 0.57 (±0.90), 0.17 (±0.27), and 0.94 (±0.92) ng m⁻³ at Hungkuang, Gaomei, and Quanxing, respectively. Concentrations in November and December were much higher than in the other months due to a combination of high local emissions and meteorological conditions. PM1.0 contributed more than 50% to the bulk concentration at the traffic and industrial sites, but only 25% at the wetland site. PM1.0-2.5 contributed 25%-50%, depending on location, to the bulk mass. The coarse fraction (PM2.5-18) contributed 7% at Hungkuang, 25% at Gaomei, and 19% at Quanxing. Samples with very high bulk concentrations had large fine fractions. Annual dry deposition estimated from the surrogate surface measurements was in the range of 30-85 μg m⁻² yr⁻¹ at the three sites. Coarse particulate Hg(p) was estimated to contribute 50-85% of the total Hg(p) dry deposition. Daily dry deposition velocities (Vd) ranged from 0.01 to 7.7 cm s⁻¹. The annual Vd generated from the total measured fluxes was 0.34, 0.60 and 0.29 cm s⁻¹ at Hungkuang, Gaomei, and Quanxing, respectively. These values can be reasonably reproduced using a size-resolved model and measured size fractions.
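The annual deposition velocities quoted above follow from Vd = F/C once units are reconciled. A small worked example with magnitudes in the reported range (the specific flux and concentration values are illustrative):

```python
# Dry deposition velocity from a measured flux and an airborne concentration:
# Vd = F / C, with unit bookkeeping.
flux = 60.0    # ug m^-2 yr^-1, within the reported 30-85 range
conc = 0.6     # ng m^-3

seconds_per_year = 365.25 * 24 * 3600
vd_m_per_s = (flux * 1e-6) / (conc * 1e-9) / seconds_per_year
print(f"Vd = {vd_m_per_s * 100:.2f} cm/s")   # ~0.32 cm/s, cf. 0.29-0.60 above
```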
Exploring the Impact of Formal Education on the Moral Reasoning Abilities of College Students
ERIC Educational Resources Information Center
Nather, Fatima
2013-01-01
The present study was to investigate the patterns of moral reasoning of a sample of college students at Kuwait University, and to examine the effect of education level upon their moral reasoning abilities. A sample of 90 college male students participated in this study. They ranged in age from 17-25. For the purpose of this study they were divided…
Faryabi, Javad; Rajabi, Mahboobeh; Alirezaee, Shahin
2014-01-01
Background: Motorcycle crashes are a cause of severe morbidity and mortality, especially because of head injuries. It seems that wearing a helmet has an effective role in protection against head injuries. Nevertheless, motorcyclists are usually reluctant to wear a helmet when driving in cities and give several reasons for this behavior. Objectives: This study aimed to evaluate the use of, and reasons for not using, a helmet by motorcyclists admitted to the emergency ward of a trauma hospital after accidents in Kerman, Iran. Patients and Methods: This study was carried out by recording the opinions of motorcyclists who had been transferred to the emergency ward of Shahid Bahonar Hospital (Kerman, Iran). Since no data were available on the frequency of helmet use, a pilot study was carried out and a sample size of 377 was determined for the main study. A researcher-made questionnaire was then used to investigate the motorcyclists' reasons for not using a helmet. Results: Only 21.5% of the motorcyclists had been wearing helmets at the time of the accident. The most frequent reasons for not using a helmet were the heavy weight of the helmet (77%), feeling of heat (71.4%), pain in the neck (69.4%), feeling of suffocation (67.7%) and limitation of head and neck movements (59.6%); all together, physical discomfort was the main cause of not wearing a helmet during motorcycle rides. Conclusions: In general, it appears that it is possible to increase the use of helmets by eliminating their physical problems and increasing the knowledge of community members about the advantages of helmet use, which would result in a significant decrease in traumas resulting from motorcycle accidents. PMID:25599066
Selebalo-Bereng, Lebohang; Patel, Cynthia Joan
2018-01-17
This study focused on the relationship between religion, religiosity/spirituality (R/S), and attitudes of a sample of South African male secondary school youth toward women's rights to legal abortion in different situations. We distributed 400 self-administered questionnaires assessing the main variables (attitudes toward reasons for abortion and R/S) to the target sample in six different secondary schools in KwaZulu-Natal, South Africa. The responses of a final sample of 327 learners were then analyzed using the Statistical Package for the Social Sciences (SPSS) software. The findings revealed that religion and R/S play a role in the youths' attitudes toward abortion. While the Hindu subsample indicated higher overall support across the different scenarios, the Muslim subsample reported greater disapproval than the other groups on 'Elective reasons' and in instances of 'Objection by significant others.' The Christian youth had the most negative attitudes to abortion for 'Traumatic reasons' and 'When women's health/life' was threatened. Across the sample, higher R/S levels were linked with more negative attitudes toward reasons for abortion.
Effect of centrifugation on dynamic susceptibility of magnetic fluids
NASA Astrophysics Data System (ADS)
Pshenichnikov, Alexander; Lebedev, Alexander; Lakhtina, Ekaterina; Kuznetsov, Andrey
2017-06-01
The dispersive composition, dynamic susceptibility and spectrum of magnetization relaxation times of six samples of magnetic fluid, obtained by centrifuging two base colloidal solutions of magnetite in kerosene, were investigated experimentally. The base solutions differed in the concentration of the magnetic phase and the width of the particle size distribution. A cluster analysis procedure allowing one to estimate the characteristic sizes of aggregates with uncompensated magnetic moments is described. The results of the magnetogranulometric and cluster analyses are discussed. It was shown that centrifugation has a strong effect on the physical properties of the separated fractions, which is related to the spatial redistribution of particles and multi-particle aggregates. The presence of aggregates in magnetic fluids is interpreted as the main reason for the low-frequency (0.1-10 kHz) dispersion of the dynamic susceptibility. The results obtained argue in favor of using centrifugation as an effective means of changing the dynamic susceptibility over wide limits and of obtaining fluids with a specified type of susceptibility dispersion.
Plume Particle Collection and Sizing from Static Firing of Solid Rocket Motors
NASA Technical Reports Server (NTRS)
Sambamurthi, Jay K.
1995-01-01
Thermal radiation from the plume of any solid rocket motor, containing aluminum as one of the propellant ingredients, is mainly from the microscopic, hot aluminum oxide particles in the plume. The plume radiation to the base components of the flight vehicle is primarily determined by the plume flowfield properties, the size distribution of the plume particles, and their optical properties. The optimum design of a vehicle base thermal protection system is dependent on the ability to accurately predict this intense thermal radiation using validated theoretical models. This article describes a successful effort to collect reasonably clean plume particle samples from the static firing of the flight simulation motor (FSM-4) on March 10, 1994 at the T-24 test bed at the Thiokol space operations facility as well as three 18.3% scaled MNASA motors tested at NASA/MSFC. Prior attempts to collect plume particles from the full-scale motor firings have been unsuccessful due to the extremely hostile thermal and acoustic environment in the vicinity of the motor nozzle.
40 CFR 13.18 - Installment payments.
Code of Federal Regulations, 2010 CFR
2010-07-01
... accelerating the debt in the event of default. The size and frequency of installment payments will bear a reasonable relation to the size of the debt and the debtor's ability to pay. The installment payments will be sufficient in size and frequency to liquidate the debt in not more than 3 years, unless the Administrator...
22 CFR 213.19 - Installment payments.
Code of Federal Regulations, 2010 CFR
2010-04-01
... a provision accelerating the debt in the event of default. The size and frequency of installment payments will bear a reasonable relation to the size of the debt and the debtor's ability to pay. The installment payments will be sufficient in size and frequency to liquidate the debt in not more than 3 years...
On the Attitude of Secondary 1 Students towards Science
NASA Astrophysics Data System (ADS)
Kuppan, L.; Munirah, S. K.; Foong, S. K.; Yeung, A. S.
2010-07-01
Understanding students' attitudes towards science gives a sense of direction when designing pedagogical approaches and lesson packages, so that reasons for not liking science are addressed and the nation's future need for a science-oriented workforce is met. This study is part of a 3-year research project entitled PbI1@School: A large scale study on the effect of "Physics by Inquiry" pedagogy on Secondary One students' attitude and aptitude in science, involving schools, the National Institute of Education (NIE) Singapore, the University of Washington at Seattle and the Ministry of Education (MOE) of Singapore. The results from a survey conducted on a sample of 215 secondary 1 students indicate that fun in studying science is a major reason for their interest in the subject. Those who do not like science dislike the idea of surface learning, such as memorizing facts and information. Moreover, all the students in our sample appear to be inquisitive. We believe that the teaching and learning system needs to be modified to increase, or at least sustain, the students' interest in science and to capitalize on students' inquisitiveness. Although the results obtained are interesting and give an insight into secondary 1 students' attitudes towards science, we intend to carry out a more rigorous study to identify correlations between students' responses to different attitude questions in order to understand their attitudes towards science more deeply.
Bermúdez, M Paz; Ramiro, M Teresa; Teva, Inmaculada; Ramiro-Sánchez, Tamara; Buela-Casal, Gualberto
To analyse sexual behaviour, HIV testing, HIV testing intentions and reasons for not testing for HIV in university students from Cuzco (Peru). The sample comprised 1,377 university students from several institutions in Cuzco (Peru). The sample size was set according to a maximum 3% estimation error and a 97% confidence interval. Ages ranged from 16 to 30 years. The data were collected through a self-administered, anonymous and voluntary questionnaire regarding sexual behaviour and HIV testing. The data were collected in classrooms during teaching hours. A higher percentage of males than females reported having had vaginal, anal and oral sex, a higher number of sexual partners and an earlier age at first vaginal and oral sex. A higher percentage of females than males did not use condoms when they first had anal sex and had a higher anal sex-risk index. Most of the participants had never been HIV tested. The main reason was that they were sure that they were not HIV infected. It seems that there was low HIV risk perception among these participants despite the fact that they had engaged in sexual risk behaviours. Prevention campaigns focused on the general population as well as at-risk populations and young people are needed. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
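The stated design (3% maximum error, 97% confidence) corresponds to a standard proportion-based sample size calculation. A worked sketch, assuming the conservative p = 0.5 (the paper does not state its exact formula):

```python
from scipy.stats import norm

# Cochran's formula for estimating a proportion: n = z^2 * p * (1 - p) / e^2.
confidence, error = 0.97, 0.03
z = norm.ppf(1 - (1 - confidence) / 2)   # ~2.17 for 97% confidence
n = z**2 * 0.5 * 0.5 / error**2
print(round(n))   # ~1308; the study's 1,377 participants exceed this minimum
```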
NASA Astrophysics Data System (ADS)
Al-Aidaroos, Ali M.; Mantha, Gopikrishna
2018-06-01
Monthly abundance of the subclass Copepoda was analyzed from zooplankton samples collected at Obhur Creek, Jeddah, Saudi Arabia, from December 2011 to December 2012. Zooplankton samples were collected through surface horizontal tows with a modified WP2 net (mouth diameter 50 cm, length 180 cm, 150 μm mesh size). The order Calanoida dominated the abundance, with a mean annual contribution of 75.29%. We observed abnormal protuberances on copepods, known as tumour-like anomalies (TLAs). Calanoida showed the most frequent and prominent TLAs on the dorsal surface, with the highest mean percentage occurring in June 2012 (1.64%). The percentage prevalence of TLAs on Copepoda was highest in June 2012 (1.36%) and lowest in November 2012 (0.03%). These TLAs might be caused by potentially high levels of toxic substances, which weaken the exoskeleton and make the animals more susceptible to infection; by wounds from parasites; by the occurrence of symbiotic tantulocarids; or by radiation stress used as a control measure. Whatever the reason, these TLAs have become a serious emerging threat to the aquatic food web. Our investigation is the first of its kind in the coastal waters of the Saudi Red Sea and calls for further work to elucidate the possible reasons for these abnormalities.
Factors Related to Smoking Habits of Male Adolescents
Naing, Nyi Nyi; Ahmad, Zulkifli; Musa, Razlan; Hamid, Farique Rizal Abdul; Ghazali, Haslan; Bakar, Mohd Hilmi Abu
2004-01-01
A cross-sectional study was conducted to identify the factors related to smoking habits of adolescents among secondary school boys in Kelantan state, Malaysia. A total of 451 upper secondary male students from day, boarding and vocational schools were investigated using a structured questionnaire. Cluster sampling was applied to achieve the required sample size. The significant findings included: 1) the highest prevalence of smoking was found among schoolboys from the vocational school; 2) mean duration of smoking was 2.5 years; 3) there were significant associations between smoking status and parents' smoking history, academic performance, perception of the health hazards of smoking, and type of school attended. Peer influence was the major reason students gave for taking up the habit. Religion was most often indicated by non-smokers as their reason for not smoking. Approximately 3/5 of the smokers had considered quitting and 45% of them had tried at least once to stop smoking. Mass media was indicated as the best information source for the students to acquire knowledge about negative aspects of the smoking habit. The authors believe an epidemic of tobacco use is imminent if drastic action is not taken, and recommend that anti-smoking campaigns with an emphasis on the religious aspect should start as early as in primary school. Intervention programs to encourage behavior modification of adolescents are also recommended. PMID:19570279
First report of ciliate (Protozoa) epibionts on deep-sea harpacticoid copepods
NASA Astrophysics Data System (ADS)
Sedlacek, Linda; Thistle, David; Fernandez-Leborans, Gregorio; Carman, Kevin R.; Barry, James P.
2013-08-01
We report the first observations of ciliate epibionts on deep-sea, benthic harpacticoid copepods. One ciliate epibiont species belonged to the class Karyorelictea, one to the subclass Suctoria, and one to the subclass Peritrichia. Our samples came from the continental rise off central California (36.709°N, 123.523°W, 3607 m depth). We found that adult harpacticoids carried ciliate epibionts significantly more frequently than did subadult copepodids. The reason for this pattern is unknown, but it may involve differences between adults and subadult copepodids in size or in time spent swimming. We also found that ciliate epibiont species occurred unusually frequently on the adults of two species of harpacticoid copepod; a third harpacticoid species narrowly failed the significance test. When we ranked the 57 harpacticoid species in our samples in order of abundance, these three species were, as a group, significantly more abundant than expected by chance under the assumption that the abundance of the group and the presence of ciliate epibionts on them were uncorrelated. High abundance may thus be among the reasons a harpacticoid species carries a ciliate epibiont species disproportionately frequently. For the combinations of harpacticoid and ciliate epibiont species identified, we found one in which males and females differed significantly in the proportion carrying epibionts. Such a sex bias has also been reported for shallow-water, calanoid copepods.
Esker, D.; Sheridan, R.E.; Ashley, G.M.; Waldner, J.S.; Hall, D.W.
1996-01-01
A new technique, which uses empirical relationships between median grain size, density, and velocity to calculate proxy values for density and velocity, avoids many of the problems associated with using well logs and shipboard measurements to construct synthetic seismograms. The method was used to ground-truth and correlate both analog and digital shallow high-resolution seismic data on the New Jersey shelf. Sampling dry vibracores to determine median grain size eliminates the detrimental effects that coring disturbance and preservation variables have on the sediment and water content of the core, so the link between seismic response, lithology, and bed spacing is more exact. The frequency content of the field seismic data can be realistically simulated by a 10-20 cm sampling interval along the vibracores. The estimated error inherent in this technique, 12% for acoustic impedance and 24% for reflection amplitude (one standard deviation), is within a reasonable limit for such a procedure. Synthetic seismograms from two cores, 4-6 m long, were used to tie specific sedimentary deposits to specific seismic reflection responses. Because the technique is applicable to unconsolidated sediments, it is ideal for upper Pleistocene and Holocene strata. Copyright © 1996, SEPM (Society for Sedimentary Geology).
Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin
2017-06-01
A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of a type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationships between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (the equation is given in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of evidence: 3.
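To make the power bands above concrete, the standard normal-approximation formula gives the per-group sample size needed to detect a given standardized mean difference (SMD) in a two-sided two-group comparison. A minimal sketch, with conventional alpha and power as illustrative choices rather than parameters taken from the review:

```python
# Sketch: per-group n needed to detect a standardized mean difference d
# with a two-sided two-sample test (normal approximation).
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = norm.ppf(power)           # quantile for desired power
    return 2 * ((z_a + z_b) / d) ** 2

for d in (0.3, 0.5, 0.8):
    print(f"SMD={d}: ~{n_per_group(d):.0f} per group")
```

At d = 0.5 this gives roughly 63 per group (about 126 in total), which helps explain why trials averaging 153 participants were mostly unable to detect effects below 0.5.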
ERIC Educational Resources Information Center
Pfannkuch, Maxine; Arnold, Pip; Wild, Chris J.
2015-01-01
Currently, instruction pays little attention to the development of students' sampling variability reasoning in relation to statistical inference. In this paper, we briefly discuss the especially designed sampling variability learning experiences students aged about 15 engaged in as part of a research project. We examine assessment and…
Pina, Violeta; Castillo, Alejandro; Cohen Kadosh, Roi; Fuentes, Luis J.
2015-01-01
Previous studies have suggested that numerical processing relates to mathematical performance, but such a relationship seems more evident for intentional than for automatic numerical processing. In the present study we assessed the relationship between the two types of numerical processing and specific mathematical abilities in a sample of 109 children in grades 1-6. Participants were tested on a wide range of mathematical tests and also performed both a numerical and a size comparison task. The results showed that numerical processing related to mathematical performance only when inhibitory control was involved in the comparison tasks. Specifically, we found that intentional numerical processing, as indexed by the numerical distance effect in the numerical comparison task, was related to mathematical reasoning skills only when the task-irrelevant dimension (physical size) was incongruent, whereas automatic numerical processing, indexed by the congruency effect in the size comparison task, was related to mathematical calculation skills only when digits were separated by a small distance. The observed double dissociation highlights the relevance of both intentional and automatic numerical processing to mathematical skill, but only when inhibitory control is also involved. PMID:25873909
Evaluation of a formula that categorizes female gray wolf breeding status by nipple size
Barber-Meyer, Shannon M.; Mech, L. David
2015-01-01
The proportion by age class of wild Canis lupus (Gray Wolf) females that reproduce in any given year remains unclear; thus, we evaluated the applicability to our long-term (1972–2013) data set of the Mech et al. (1993) formula that categorizes female Gray Wolf breeding status by nipple size and time of year. We used the formula to classify Gray Wolves from 68 capture events into 4 categories (yearling, adult non-breeder, former breeder, current breeder). To address issues with small sample size and variance, we created an ambiguity index to allow some Gray Wolves to be classed into 2 categories. We classified 20 nipple measurements ambiguously: 16 current or former breeder, 3 former or adult non-breeder, and 1 yearling or adult non-breeder. The formula unambiguously classified 48 (71%) of the nipple measurements; based on supplemental field evidence, at least 5 (10%) of these were incorrect. When used in conjunction with an ambiguity index we developed and with corrections made for classifications involving very large nipples, and supplemented with available field evidence, the Mech et al. (1993) formula provided reasonably reliable classification of breeding status in wild female Gray Wolves.
Surgeons' motivation for choice of workplace.
Kähler, Lena; Kristiansen, Maria; Rudkjøbing, Andreas; Strandberg-Larsen, Martin
2012-09-01
To ensure qualified health care professionals at public hospitals in the future, it is important to understand which factors attract health care professionals to certain positions. The aim of this study was to explore motives for choosing employment at either public or private hospitals in a group of Danish surgeons, and to examine whether organizational characteristics had an effect on motivation. Eight qualitative interviews were conducted with surgeons from both public and private hospitals, sampled using the snowball method. The interviews were based on a semi-structured interview guide and analyzed by means of phenomenological theory. Motivational factors such as personal influence on the job, the opportunity to provide the best possible patient care, challenging work tasks, colleagues, and ideological reasons were emphasized by the surgeons as important reasons for their choice of employment. Motivational factors appeared to be strongly connected to the structure of the organization; in particular, the size of the organization was perceived to be essential. It is worth noting that salary, contrary to the general belief, was considered a secondary benefit rather than a primary motivational factor for employment. The study revealed that motivational factors are multidimensional and rooted in organizational structure; i.e., organizational size, rather than whether the organization is public or private, is crucial. There is a need for further research on the topic, but it seems clear that future health care planning may benefit from taking into account the implications that large organizational structures have for the staff working within these organizations.
Estimated Mid-Infrared (200-2000 cm-1) Optical Constants of Some Silica Polymorphs
NASA Astrophysics Data System (ADS)
Glotch, Timothy; Rossman, G. R.; Michalski, J. R.
2006-09-01
We use Lorentz-Lorenz dispersion analysis to model the mid-infrared (200-2000 cm-1) optical constants of opal-A, opal-CT, and tridymite. These minerals, which are all polymorphs of silica (SiO2), are potentially important in the analysis of thermal emission spectra acquired by the Mars Global Surveyor Thermal Emission Spectrometer (MGS-TES) and Mars Exploration Rover Mini-TES instruments in orbit and on the surface of Mars, as well as emission spectra acquired by telescopes of planetary disks and of dust and debris clouds in young solar systems. Mineral samples were crushed, washed, and sieved, and emissivity spectra of the >100 μm size fraction were acquired at Arizona State University's emissivity spectroscopy laboratory. The spectra and optical constants are therefore representative of all crystal orientations. Ideally, emissivity or reflectance measurements of single polished crystals, or of fine powders pressed into compact disks, are used for the determination of mid-infrared optical constants, because such surfaces eliminate or minimize multiple reflections, providing a specular surface. Our measurements, however, likely produce a reasonable approximation of specular emissivity or reflectance, as the minimum particle size is greater than the maximum wavelength of light measured. Future work will include measurement of pressed disks of powdered samples in emission and reflection and, when possible, of small single crystals under an IR reflectance microscope, which will allow us to assess the variability of spectra and optical constants under different sample preparation and measurement conditions.
Kinetically inert Cu in coastal waters.
Kogut, Megan B; Voelker, Bettina M
2003-02-01
Many studies have shown that Cu and other metals in natural waters are mostly bound by unidentified compounds interpreted to be strong ligands reversibly complexing a given metal. However, commonly applied analytical techniques are not capable of distinguishing strongly but reversibly complexed metal from metal bound in kinetically inert compounds. In this work, we use a modified competitive ligand exchange adsorptive cathodic stripping voltammetry method combined with size fractionation to show that most if not all of the apparently very strongly (log K ≥ 13) bound Cu in samples from five New England coastal waters (1-18 nM, 10-60% of total Cu) is actually present as kinetically inert compounds. In three of the five samples examined by ultrafiltration, a significant portion of the 0.2-μm-filterable inert Cu was retained by a 0.02-μm-pore-size filter, suggesting that at least some of the Cu was kinetically inert because it was physically sequestered in colloidal material. The rest of the ambient Cu, and Cu added in titrations, were reversibly bound in complexes that could be modeled as having conditional stability constants of 10^10-10^13. The Cu-binding ability of these complexes was equivalent to that of seawater containing reasonable concentrations of humic substances from terrestrial sources, approximately 0.15-0.45 mg of C/L. Both the inert compounds and the reversible ligands were important for determining [Cu2+] at ambient Cu levels in our samples.
Understanding the role of conscientiousness in healthy aging: where does the brain come in?
Patrick, Christopher J
2014-05-01
In reviewing this impressive series of articles, I was struck by 2 points in particular: (a) the fact that the empirically oriented articles focused on analyses of data from very large samples, with the articles by Friedman, Kern, Hampson, and Duckworth (2014) and Kern, Hampson, Goldberg, and Friedman (2014) highlighting an approach to merging existing data sets through use of "metric bridges" to address key questions not addressable through 1 data set alone, and (b) the fact that the articles as a whole included limited mention of neuroscientific (i.e., brain research) concepts, methods, and findings. One likely reason for the lack of reference to brain-oriented work is the persisting gap between smaller-sample lab-experimental and larger-sample multivariate-correlational approaches to psychological research. As a strategy for addressing this gap and bringing a distinct neuroscientific component to the National Institute on Aging's conscientiousness and health initiative, I suggest that the metric bridging approach highlighted by Friedman and colleagues could be used to connect existing large-scale data sets containing both neurophysiological variables and measures of individual difference constructs to other data sets containing richer arrays of nonphysiological variables, including data from longitudinal or twin studies focusing on personality and health-related outcomes (e.g., the Terman Life Cycle study and Hawaii longitudinal studies, as described in the article by Kern et al., 2014). (PsycINFO Database Record © 2014 APA, all rights reserved.)
Gilbert, Christopher C; Grine, Frederick E
2010-03-01
Papionin monkeys are widespread, relatively common members of Plio-Pleistocene faunal assemblages across Africa. For these reasons, papionin taxa have been used as biochronological indicators by which to infer the ages of the South African karst cave deposits. A recent morphometric study of South African fossil papionin muzzle shape concluded that its variation attests to a substantial and greater time depth for these sites than is generally estimated. This inference is significant, because accurate dating of the South African cave sites is critical to our knowledge of hominin evolution and mammalian biogeographic history. We here report the results of a comparative analysis of extant papionin monkeys by which variability of the South African fossil papionins may be assessed. The muzzles of 106 specimens representing six extant papionin genera were digitized and interlandmark distances were calculated. Results demonstrate that the overall amount of morphological variation present within the fossil assemblage fits comfortably within the range exhibited by the extant sample. We also performed a statistical experiment to assess the limitations imposed by small sample sizes, such as typically encountered in the fossil record. Results suggest that 15 specimens are sufficient to accurately represent the population mean for a given phenotype, but small sample sizes are insufficient to permit the accurate estimation of the population standard deviation, variance, and range. The suggestion that the muzzle morphology of fossil papionins attests to a considerable and previously unrecognized temporal depth of the South African karst cave sites is unwarranted.
McKnight, Katherine K.; Wellons, Melissa F.; Sites, Cynthia K.; Roth, David L.; Szychowski, Jeff M.; Halanych, Jewell H.; Cushman, Mary; Safford, Monika M.
2011-01-01
Objectives To examine regional and Black-White differences in mean age at self-reported menopause among community-dwelling women in the US. Study Design Cross-sectional survey conducted in the context of the REasons for Geographic And Racial Differences in Stroke and Myocardial Infarction study. Results We studied 22,484 menopausal women. After controlling for covariates, Southern women reported menopause 10.8 months earlier than Northeastern women, 8.4 months earlier than Midwestern women, and 6.0 months earlier than Western women (p<0.05 for all). No difference was observed in menopausal age between Black and White women after controlling for covariates (p=0.69). Conclusions Women in the South report earlier menopause than those in other regions, but the cause remains unclear. Our study's large sample size and adjustment for multiple confounders lends weight to our finding of no racial difference in age at menopause. More study is needed of the implications of these findings with regard to vascular health. PMID:21663888
Corticosteroids for severe influenza pneumonia: A critical appraisal
Nedel, Wagner Luis; Nora, David Garcia; Salluh, Jorge Ibrain Figueira; Lisboa, Thiago; Póvoa, Pedro
2016-01-01
Influenza pneumonia is associated with a high number of severe cases requiring hospital and intensive care unit (ICU) admission, with high mortality. Systemic steroids have been proposed as a valid therapeutic option even though their effects are still controversial. Heterogeneity of the published data regarding study design, population demographics, severity of illness, and the dosing, type, and timing of corticosteroids administered constitutes an important limitation for drawing robust conclusions. However, since no advantage of corticosteroid therapy was found across such diverse conditions, it is reasonable to conclude that such beneficial effects do not exist at all. Corticosteroid administration is likely to increase overall mortality, and this trend is consistent regardless of the quality or the sample size of the studies. Moreover, corticosteroids were shown to be associated with a higher incidence of hospital-acquired pneumonia and longer duration of mechanical ventilation and ICU stay. In conclusion, corticosteroids have failed to demonstrate any beneficial effect in the treatment of patients with severe influenza infection, so their current use in severe influenza pneumonia should be restricted to very selected cases and to the setting of clinical trials. PMID:26855898
Cranford, James A; McCabe, Sean Esteban; Boyd, Carol J; Slayden, Janie; Reed, Mark B; Ketchie, Julie M; Lange, James E; Scott, Marcia S
2008-01-01
This study conducted a follow-up telephone survey of a probability sample of college students who did not respond to a Web survey to determine correlates of and reasons for nonresponse. A stratified random sample of 2502 full-time first-year undergraduate students was invited to participate in a Web-based survey. A random sample of 221 students who did not respond to the original Web survey completed an abbreviated version of the original survey by telephone. Nonresponse did not vary by gender, but it was higher among Blacks and Hispanics compared to Whites, and among Blacks compared to Asians. Nonresponders reported a lower frequency of drinking in the past 28 days, lower levels of past-year and past-28-day heavy episodic drinking, and more time spent preparing for classes than responders. The most common reasons for nonresponse were "too busy" (45.7%), "not interested" (18.1%), and "forgot to complete survey" (18.1%). Reasons for nonresponse to Web surveys among college students are similar to reasons for nonresponse to mail and telephone surveys, and some nonresponse reasons vary as a function of alcohol involvement.
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
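The mechanism behind that negative correlation is easy to demonstrate by simulation: if only significant results reach print, small studies can only contribute large observed effects. A minimal sketch under invented parameters, not the authors' analysis pipeline:

```python
# Sketch: selective publication induces a negative correlation between
# observed effect size and sample size, even with one small true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
effects, ns = [], []
while len(effects) < 1000:
    n = rng.integers(20, 200)          # per-group sample size
    x = rng.normal(0.1, 1, n)          # small true effect, d = 0.1
    y = rng.normal(0.0, 1, n)
    t, p = stats.ttest_ind(x, y)
    d = (x.mean() - y.mean()) / np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
    if p < 0.05:                       # only "significant" results are kept
        effects.append(abs(d))
        ns.append(n)

r, _ = stats.pearsonr(effects, ns)
print(f"r(effect size, n) = {r:.2f}")  # clearly negative, as in the paper
```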
Forest Fuels Management in Europe
Gavriil Xanthopoulos; David Caballero; Miguel Galante; Daniel Alexandrian; Eric Rigolot; Raffaella Marzano
2006-01-01
Current fuel management practices vary considerably between European countries. Topography, forest and forest fuel characteristics, size and compartmentalization of forests, forest management practices, land uses, land ownership, size of properties, legislation, and, of course, tradition are reasons for these differences. Firebreak construction,...
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
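As a rough illustration of the cost-based allocation idea above, the classical normal-theory result allocates observations in proportion to each group's standard deviation and inversely to the square root of its unit sampling cost. The sketch below implements that textbook rule, not Guo and Luh's formulas for Yuen's trimmed-means test; all numbers are illustrative assumptions:

```python
# Sketch: cost-optimal allocation for a two-group mean comparison
# (normal theory), minimizing Var(mean1 - mean2) for a fixed budget.
import math

def optimal_ratio(sd1, sd2, cost1, cost2):
    """n1/n2 that minimizes the variance of the mean difference."""
    return (sd1 / sd2) * math.sqrt(cost2 / cost1)

def allocate(total_cost, sd1, sd2, cost1, cost2):
    r = optimal_ratio(sd1, sd2, cost1, cost2)   # r = n1/n2
    n2 = total_cost / (cost1 * r + cost2)       # budget: c1*n1 + c2*n2
    return round(r * n2), round(n2)

# Group 1 twice as variable and twice as costly to sample:
print(allocate(total_cost=1000, sd1=2.0, sd2=1.0, cost1=2.0, cost2=1.0))
```

Unequal allocation of this kind is what lets the same budget buy more power, which is the trade-off the paper optimizes for the trimmed-means setting.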
Vacchiano, Giuseppe; Luna Maldonado, Aurelio; Matas Ros, Maria; Fiorenza, Elisa; Silvestre, Angela; Simonetti, Biagio; Pieri, Maria
2018-06-01
The study reports the evolution of the demyelinization process based on cholesterol ([CHOL]) levels quantified in median nerve samples collected at different times from death from both right and left wrists. The statistical data show that the phenomenon evolves differently in the right and left nerves. Such a difference can reasonably be attributed to a multicentric evolution of the demyelinization that differs between the two nerves. For data analysis, the enrolled subjects were grouped by similar postmortem intervals (PMIs), considering 3 intervals: PMI < 48 hours, 48 hours < PMI < 78 hours, and PMI > 78 hours. Data obtained from tissue dissected within 48 hours of death allowed for a PMI estimation according to the following equations: PMI = 0.000 + 0.7623 [CHOL]right (R = 0.581) for the right wrist and PMI = 0.000 + 0.8911 [CHOL]left (R = 0.794) for the left wrist. At present, this correlation cannot be considered definitive because of the small size of the samples analyzed and because differences in sampling time, as well as inter- and intra-individual variation, may influence the demyelinization process.
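For readers who want to plug in numbers, the reported regressions reduce to a one-line calculation. The sketch below simply encodes them; the [CHOL] value is invented, and the equations apply only to tissue dissected within 48 h of death, per the abstract:

```python
# Minimal sketch of the reported PMI regressions (intercepts = 0.000).
def pmi_hours(chol, side="right"):
    slope = 0.7623 if side == "right" else 0.8911
    return slope * chol

print(pmi_hours(30.0, side="left"))  # 30.0 is an illustrative [CHOL] value
```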
ERIC Educational Resources Information Center
Cooke, Jason; Henderson, Eric J.
2009-01-01
Experiments are presented that demonstrate the size-exclusion properties of zeolites and reveal the reason for naming zeolites "molecular sieves". If an IR spectrometer is available, the adsorption or exclusion of alcohols of varying sizes from dichloromethane or chloroform solutions can be readily demonstrated by monitoring changes in the…
Twelve- to 14-Month-Old Infants Can Predict Single-Event Probability with Large Set Sizes
ERIC Educational Resources Information Center
Denison, Stephanie; Xu, Fei
2010-01-01
Previous research has revealed that infants can reason correctly about single-event probabilities with small but not large set sizes (Bonatti, 2008; Teglas "et al.", 2007). The current study asks whether infants can make predictions regarding single-event probability with large set sizes using a novel procedure. Infants completed two trials: A…
Perceived Benefits of an Undergraduate Degree
ERIC Educational Resources Information Center
Norton, Cole; Martini, Tanya
2017-01-01
Canadian university students tend to endorse employment-related reasons for attending university ahead of other reasons such as personal satisfaction or intellectual growth. In the present study, first- and fourth-year students from a mid-sized Canadian university reported on the benefits they expected to receive from their degree and rated their…
7 CFR 3550.117 - WWD grant purposes.
Code of Federal Regulations, 2013 CFR
2013-01-01
... (48 square feet) in size. (f) Pay reasonable costs for closing abandoned septic tanks and water wells... for individuals to: (a) Extend service lines from the system to their residence. (b) Connect service lines to residence's plumbing. (c) Pay reasonable charges or fees for connecting to a system. (d) Pay...
7 CFR 3550.117 - WWD grant purposes.
Code of Federal Regulations, 2012 CFR
2012-01-01
... (48 square feet) in size. (f) Pay reasonable costs for closing abandoned septic tanks and water wells... for individuals to: (a) Extend service lines from the system to their residence. (b) Connect service lines to residence's plumbing. (c) Pay reasonable charges or fees for connecting to a system. (d) Pay...
Multiscale Analysis of Soil Porosity from Hg Injection Curves in Soils from Minas Gerais, Brazil
NASA Astrophysics Data System (ADS)
Vidal Vázquez, E.; Miranda, J. G. V.; Paz-Ferreiro, J.
2012-04-01
The soil pore space is a continuum that is extremely variable in size, including structures smaller than nanometres and features as large as macropores or cracks of millimetre or even centimetre size. Pore size distributions (PSDs) affect important soil functions, such as the transmission and storage of water, and root growth. Direct and indirect measurements of PSDs are currently used to characterize soil structure. Mercury injection porosimetry is useful for assessing equivalent pore diameters in the range from about 0.5 nm to 100 μm. Here, the multifractal formalism was employed to describe Hg injection curves measured in duplicate samples collected from 54 horizons of 19 profiles in Minas Gerais state, Brazil. Ten of the studied profiles were classified as Ferralsols (Latosols, Oxisols). In addition, widely different soil groups were sampled, including Nitisol, Acrisol, Alisol, Luvisol, Planosol, Cambisol, Andosol and Leptosol. Clay content varied from 4 to 86%, and pore volume in the range from 100 to 0.005 μm was between 5.52 and 53.76 cm3 per 100 g. All the horizons from the Ferralsols and Nitisols, as well as the Bt argic horizons from the Acrisol, Alisol, Luvisol and Planosol, clearly showed a bimodal pore size distribution. Pore volume in the range from 100 to 0.005 μm and microporosity (0.2-0.005 μm) showed a significant relationship with clay content and Al2O3. All the Hg injection data sets showed remarkably good scaling trends and could be fitted reasonably well with multifractal models. The capacity dimension, D0, was not significantly different from the Euclidean dimension. The entropy dimension, D1, varied from 0.590 to 0.946, whereas the Hölder exponent of order zero, α0, was between 1.027 and 1.451; these two parameters showed the expected negative linear relationship. The highest D1 values, ranging from 0.913 to 0.980, were obtained for the Leptosol, whereas the lowest, ranging from 0.641 to 0.766, corresponded to the Nitisol. These results reflect that most of the measure is concentrated in a small size domain for the horizons sampled from the Nitisol, whereas for the Leptosol the measure is more evenly distributed. In general, multifractal indices were found to be useful for assessing differences in the pore size distributions of the studied soil types.
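Since the entropy dimension D1 carries much of the interpretive weight here, a short sketch of how it is estimated from a normalized measure may help: partition the measure at successively finer scales and take the slope of the Shannon sum against log scale. The binomial measure below is a synthetic illustration, not the authors' Hg-injection data:

```python
# Sketch: entropy dimension D1 of a normalized measure via dyadic
# coarse-graining; D1 is the slope of sum(p*log p) versus log(eps).
import numpy as np

def binomial_measure(levels, m=0.7):
    mu = np.array([1.0])
    for _ in range(levels):
        mu = np.concatenate([mu * m, mu * (1 - m)])
    return mu

mu = binomial_measure(12)                   # 4096 cells, total mass 1
log_eps, entropies = [], []
for k in range(2, 11):                      # coarse-grain at scale 2**-k
    cells = mu.reshape(2 ** k, -1).sum(axis=1)
    p = cells[cells > 0]
    log_eps.append(np.log(2.0 ** -k))
    entropies.append(np.sum(p * np.log(p)))

d1 = np.polyfit(log_eps, entropies, 1)[0]   # slope = D1
print(f"D1 = {d1:.3f}")  # theory for m=0.7: about 0.881
```

A D1 well below D0, as for the Nitisol horizons, signals that the measure concentrates in a narrow part of the size domain.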
Study of magnetic and electrical properties of nanocrystalline Mn doped NiO.
Raja, S Philip; Venkateswaran, C
2011-03-01
Diluted magnetic semiconductors (DMS) have been intensively explored in recent years for their applications in spintronics, which is expected to revolutionize present-day information technology. Nanocrystalline Mn-doped NiO samples were prepared using a chemical co-precipitation method with the aim of realizing room-temperature ferromagnetism. Phase formation of the samples was studied using X-ray diffraction with Rietveld analysis. Scanning electron microscopy and energy-dispersive X-ray analysis reveal the nanocrystalline nature of the samples, agglomeration of the particles, a considerable particle size distribution, and near stoichiometry. Thermomagnetic curves confirm the single-phase formation of the samples up to 1% Mn doping. Vibrating sample magnetometer measurements indicate the absence of ferromagnetism at room temperature. This may be due to the low concentration of Mn2+ ions having only weak indirect coupling with Ni2+ ions. The lack of free carriers may also explain the absence of ferromagnetism, in agreement with the results of resistivity measurements using impedance spectroscopy. The Arrhenius plot shows the presence of two thermally activated regions, and the activation energy for the nanocrystalline Mn-doped sample was found to be greater than that of undoped NiO, which is attributed to the doping effect of Mn. However, the dielectric constant of the samples was found to be of the same order of magnitude as, and very much comparable with, that of undoped NiO.
Random sampling of elementary flux modes in large-scale metabolic networks.
Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel
2012-09-15
The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
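The key to keeping the computation tractable is the filtering step: each newly generated combination survives a given round with the same probability, so the working set stays bounded without biasing which modes end up in the sample. A minimal sketch of just that step, with placeholder candidates standing in for the mode combinations produced by the canonical basis approach (this is not the emsampler code itself):

```python
# Sketch: unbiased per-round filtering of candidate elementary modes.
# Every candidate has the same survival probability, so no mode is
# favored; only the expected working-set size is reduced.
import random

def filter_candidates(candidates, keep_fraction):
    return [c for c in candidates if random.random() < keep_fraction]

random.seed(1)
candidates = list(range(10_000))   # stand-in for new mode combinations
kept = filter_candidates(candidates, keep_fraction=0.05)
print(len(kept))                   # roughly 500, uniform over the candidates
```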
NASA Astrophysics Data System (ADS)
Moufti, Asaad M. B.
2014-11-01
Mineralogical studies revealed that the stream sediments in northwestern Saudi Arabia between Duba and Al Wajh on the Red Sea coast are auriferous and can represent a potential source of easily recoverable placer gold. A detailed ore microscopic study, supported by fire assay data, of stream sediments in the southern sector of the Duba-Al Wajh area (Wadi Al Miyah, Wadi Haramil and Wadi Thalbah) in NW Saudi Arabia shows economic concentrations of gold in the silt fraction (40-63 μm). However, particles of extremely fine “dusty” gold (≤40 μm in size) are identified at most stations as independent grains. The maximum gold content in the samples of Wadi Al Miyah is 13.61 wt%, which is reported for the light fraction (≤40 μm). The maximum gold content in the heavy fractions of the Wadi Haramil stream sediments amounts to 6.90 g/t Au in a relatively coarse fraction (63-125 μm). This size still falls within the silt fraction, but the coarsening of the gold can be correlated with the original size of native gold in the Neoproterozoic mineralized zone and/or the distance of transportation. The most gold-rich fractions of the analyzed samples appear to be those from Wadi Thalbah. They have the highest index figure, which suggests that their placer gold may be economically exploitable. Gold content in the heavy fractions of samples from Wadi Thalbah is high and lies within a wide range (6.27-28.83 g/t), except for a single sample collected upstream with only 0.77 g/t Au. Fire assay data for samples from three wadis in the northern sector show that their gold content is clearly lower than in the samples from the southern sector. Only a few samples from Wadi South Marwah are promising because they contain reasonable gold content (3.10-3.60 g/t) before heavy liquid separation. These two samples give gold contents of up to 11.03 g/t in their heavy mineral concentrates. The heavy fractions from both Wadi Al Amud and Wadi Salma are poor in gold, with maximum contents of 1.32 and 1.17 g/t, respectively. Generally, the heavy mineral concentrates of both wadis contain ≤1 g/t Au, which is presently uneconomic. Overall, the fire assay data for gold show that samples from the wadis in the southern sector are more promising for future gold exploration and exploitation.
Fast and precise dense grid size measurement method based on coaxial dual optical imaging system
NASA Astrophysics Data System (ADS)
Guo, Jiping; Peng, Xiang; Yu, Jiping; Hao, Jian; Diao, Yan; Song, Tao; Li, Ameng; Lu, Xiaowei
2015-10-01
Test sieves with dense grid structures are widely used in many fields, and accurate grid size calibration is critical to the success of grading analysis and test sieving. Traditional calibration methods, however, suffer from low measurement efficiency and sample too few grids, which creates a risk of incorrect quality judgments. Here, a fast and precise test sieve inspection method is presented. First, a coaxial imaging system with low- and high-magnification optical probes is designed to capture grid images of the test sieve. A scaling ratio between the low- and high-magnification probes is then obtained from the corresponding grids in the captured images. With this ratio, all grid dimensions in the low-magnification image can be obtained with high accuracy by measuring a few corresponding grids in the high-magnification image. Finally, by scanning the stage of the tri-axis platform of the measuring apparatus, the whole surface of the test sieve can be quickly inspected. Experimental results show that the proposed method measures test sieves more efficiently than traditional methods: it can measure 0.15 million grids (grid size 0.1 mm) within only 60 seconds, and it can measure grid sizes from 20 μm to 5 mm precisely. In short, the presented method calibrates the grid size of a test sieve automatically with high efficiency and accuracy, so surface evaluation based on statistical methods can be effectively implemented and quality judgments become more reasonable.
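The scale-transfer idea reduces to calibrating a pixel-to-millimetre factor from a few grids seen at high magnification and applying it to every grid detected at low magnification. A minimal sketch under invented pixel counts (the paper's image-processing pipeline is not reproduced here):

```python
# Sketch: transfer a high-magnification calibration to the coaxial
# low-magnification image via the measured scaling ratio.
import numpy as np

high_mag_px = np.array([412, 409, 415])   # same grids, high-mag image (pixels)
low_mag_px = np.array([103, 102, 104])    # corresponding grids, low-mag image
scaling_ratio = high_mag_px.mean() / low_mag_px.mean()

known_size_mm = 0.100                     # grid size measured at high mag
mm_per_high_px = known_size_mm / high_mag_px.mean()

# Every grid detected in the low-mag field of view (invented values):
all_low_mag_px = np.array([101, 103, 99, 104, 102])
sizes_mm = all_low_mag_px * scaling_ratio * mm_per_high_px
print(np.round(sizes_mm, 4))
```

Because only a handful of grids need the slow high-magnification measurement, the bulk of the sieve can be sized from the fast low-magnification scan.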
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
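A quick way to see how a re-estimation design behaves across an effect-size range is Monte Carlo simulation. The sketch below is a naive two-stage design under invented constants, not one of the optimal designs evaluated in the paper, and it deliberately ignores the type-I-error control a real adaptive trial would require:

```python
# Sketch: power of a naive two-stage sample size re-estimation design.
# Plan optimistically, look at the interim effect, and grow n (up to a
# cap) if the effect looks smaller than planned.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def trial_power(true_d, n_plan=64, n_cap=200, reps=2000):
    rejections = 0
    for _ in range(reps):
        n1 = n_plan // 2                          # interim look at half the plan
        x, y = rng.normal(true_d, 1, n1), rng.normal(0, 1, n1)
        d_hat = max(x.mean() - y.mean(), 0.1)     # crude interim effect estimate
        n_final = int(min(max(n_plan, 2 * (2.8 / d_hat) ** 2), n_cap))
        x2 = np.concatenate([x, rng.normal(true_d, 1, n_final - n1)])
        y2 = np.concatenate([y, rng.normal(0, 1, n_final - n1)])
        if stats.ttest_ind(x2, y2).pvalue < 0.05:
            rejections += 1
    return rejections / reps

for d in (0.3, 0.5):
    print(f"true d={d}: power = {trial_power(d):.2f}")
```

Running the same loop over a grid of true effects is essentially how robustness of power "across the effect size range of interest" can be compared between candidate designs.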
Trauma-Related Predictors of Deontic Reasoning: A Pilot Study in a Community Sample of Children
ERIC Educational Resources Information Center
DePrince, Anne P.; Chu, Ann T.; Combs, Melody D.
2008-01-01
Objective: Deontic reasoning (i.e., reasoning about duties and obligations) is essential to navigating interpersonal relationships. Though previous research demonstrates links between deontic reasoning abilities and trauma-related factors (i.e., dissociation, exposure to multiple victimizations) in adults, studies have yet to examine deontic…
NASA Astrophysics Data System (ADS)
Shen, Zhong; Zhong, Jin-Yi; Chai, Na-Na; He, Xin; Zang, Jian-Zheng; Xu, Hui; Han, Xiao-Yuan; Zhang, Peng
2017-06-01
Zr4+- and Ge4+-doped and co-doped TiO2 nanoparticles were prepared by a 'one-pot' homogeneous precipitation method. The photocatalytic reaction kinetics of DMMP and the decontamination efficiency for HD, GD and VX on the samples were investigated. Using a variety of characterization methods, especially positron annihilation lifetime spectroscopy, the changes in the structure and properties of TiO2 upon doping were studied. The results show that rational engineering design of novel photocatalysts for CWA decontamination can be achieved by adjusting the bulk-to-surface defect ratio, in addition to the crystal structure, specific surface area, pore size distribution and light utilization.
Structure and kinematics of the broad-line regions in active galaxies from IUE variability data
NASA Technical Reports Server (NTRS)
Koratkar, Anuradha P.; Gaskell, C. Martin
1991-01-01
IUE archival data are used here to investigate the structure and kinematics of the broad-line regions (BLRs) in nine AGN. It is found that the centroid of the line-continuum cross-correlation functions (CCFs) can be determined with reasonable reliability. The errors in BLR size estimates from CCFs for irregularly sampled light curves are fairly well understood. BLRs are found to have small luminosity-weighted radii, and lines of high ionization tend to be emitted closer to the central source than lines of low ionization, especially for low-luminosity objects. The motion of the gas is gravity-dominated, with both pure inflow and pure outflow of high-velocity gas being excluded at a high confidence level for certain geometries.
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and fields of the studies. The results show that in fields such as perception, cognition, or learning, effect sizes were relatively large even though sample sizes were small; at the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, even trivial effects could be detected. This implies that researchers who cannot obtain large effect sizes tend to use larger samples in order to obtain significant results.
Sample Size Estimation: The Easy Way
ERIC Educational Resources Information Center
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education
ERIC Educational Resources Information Center
Slavin, Robert; Smith, Dewi
2009-01-01
Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…
Synchrotron-based XRD from rat bone of different age groups.
Rao, D V; Gigante, G E; Cesareo, R; Brunetti, A; Schiavon, N; Akatsuka, T; Yuasa, T; Takeda, T
2017-05-01
Synchrotron-based XRD spectra were acquired with 15 keV X-rays from rat bone of different age groups (8 w, 56 w and 78 w), from lumbar vertebrae at early stages of bone formation, and from calcium hydroxyapatite (HAp) [Ca10(PO4)6(OH)2] bone fill of varying composition (60% and 70%) and bone cream (35-48%). Experiments were performed at DESY, Hamburg, Germany, utilizing the resonant and diffraction beamline (P9) with 15 keV X-rays (λ = 0.82666 Å). Diffraction data were quantitatively analyzed using the Rietveld refinement approach, which allowed us to characterize the structure of these samples at their early stages. Hydroxyapatite has received considerable attention in the medical and materials sciences, since hard tissues such as bone and teeth are built from this material. The high bioactivity of these samples is of interest for biological applications and for bone tissue repair in oral surgery and orthopedics. The results obtained from these samples, such as the phase data, the crystallite size of the phases, and the degree of crystallinity, confirm the apatite family crystallizing in a hexagonal system, space group P63/m, with lattice parameters a = 9.4328 Å and c = 6.8842 Å (JCPDS card #09-0432). The synchrotron-based XRD patterns are relatively sharp and well resolved and can be attributed to the hexagonal crystal form of hydroxyapatite. All samples were examined with a scanning electron microscope at an accelerating voltage of 15 kV. Large globules of different sizes are observed in the young rat bone (8 w) and lumbar vertebra (LV) samples, as distinguished from the older age groups (56 and 78 w), at different magnifications, reflecting an amorphous phase without significant traces of crystalline phases. Scanning electron microscopy (SEM) was used to characterize the morphology and crystalline properties of HAp for all samples at resolutions from 2 to 100 μm. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kostadinova-Avramova, M.; Kovacheva, M.
2015-10-01
Archaeological baked clay remains provide valuable information about the geomagnetic field in the historical past, but determination of the geomagnetic field characteristics, especially intensity, is often a difficult task. This study was undertaken to elucidate the reasons for unsuccessful intensity determination experiments on material from two different Bulgarian archaeological sites (Nessebar, Early Byzantine period, and Malenovo, Early Iron Age). With this aim, artificial clay samples were formed and investigated in the laboratory. The clay used to prepare the artificial samples differed in its initial state: the Nessebar clay was baked in antiquity, whereas the Malenovo clay was raw, taken from the clay deposit near the site. The artificial samples were heated eight times in a known magnetic field to 700 °C. X-ray diffraction analyses and rock-magnetic experiments were performed to obtain information about the mineralogical content and magnetic properties of the initial and laboratory-heated clays. Two different protocols were applied for the intensity determination: the Coe version of the Thellier and Thellier method and the multispecimen parallel differential pTRM protocol. Various combinations of laboratory fields and of mutual positions between the laboratory field direction and the carried thermoremanence were used in the Coe experiment. The obtained results indicate that the failure of this experiment is probably related to unfavourable grain sizes of the prevailing magnetic carriers combined with the chosen experimental conditions. The multispecimen parallel differential pTRM protocol in its original form gives excellent results for the artificial samples but failed for the real samples (samples coming from previously studied kilns at the Nessebar and Malenovo sites). Evidently, the strong dependence of this method on the homogeneity of the subsamples hinders its implementation in its original form for archaeomaterials, which are often heterogeneous due to variable heating conditions in different parts of the archaeological structures. The study draws attention to the importance of multiple heating for the stabilization of grain size distribution in baked clay materials and to the need to elucidate this question.
Sample Acquisition and Caching architecture for the Mars Sample Return mission
NASA Astrophysics Data System (ADS)
Zacny, K.; Chu, P.; Cohen, J.; Paulsen, G.; Craft, J.; Szwarc, T.
This paper presents a Mars Sample Return (MSR) Sample Acquisition and Caching (SAC) study developed for three rover platforms: MER, MER+, and MSL. The study took into account 26 SAC requirements provided by the NASA Mars Exploration Program Office. For this SAC architecture, we gave the reduction of mission risk greater priority than mass or volume. For this reason, we selected a “One Bit per Core” approach. The enabling technology for this architecture is Honeybee Robotics' “eccentric tubes” core breakoff approach. The breakoff approach allows the drill bits to be relatively small in diameter and in turn lightweight. Hence, the bits can be returned to Earth with the cores inside them with only a modest increase in the total returned mass, but a significant decrease in complexity. Having dedicated bits allows a reduction in the number of core transfer steps and actuators. It also alleviates the bit life problem, eliminates cross contamination, and aids in hermetic sealing. Added advantages are faster drilling time, lower power, lower energy, and lower weight on bit (which reduces arm preload requirements). Drill bits are based on the BigTooth bit concept, which allows re-use of the same bit multiple times, if necessary. The proposed SAC consists of 1) a rotary-percussive core drill, 2) a bit storage carousel, 3) a cache, 4) a robotic arm, and 5) a Rock Abrasion and Brushing Bit (RABBit), which is deployed using the drill. The system also includes PreView bits (for viewing cores prior to caching) and powder bits for acquisition of regolith or cuttings. The total SAC system mass is less than 22 kg for MER and MER+ size rovers and less than 32 kg for the MSL-size rover.
Dissolution Rates of Biogenic Carbonate Sediments from the Bermuda Platform
NASA Astrophysics Data System (ADS)
Finlay, A. J.; Andersson, A. J.
2016-02-01
The contribution of biogenic carbonate sediment dissolution rates to overall net reef accretion/erosion (under both present and future oceanic pCO2 levels) has been strikingly neglected, despite experimental results indicating that sediment dissolution might be more sensitive to ocean acidification (OA) than calcification. Dissolution of carbonate sediments could impact net reef accretion rates as well as the formation and preservation of valuable marine and terrestrial ecosystems. Bulk sediment dissolution rates of samples from the Bermuda carbonate platform were measured in natural seawater at pCO2 values ranging from approximately 3500 μatm to 9000 μatm. This range of pCO2 levels incorporates values currently observed in porewaters on the Bermuda carbonate platform as well as a potential future increase in porewater pCO2 levels due to OA. Sediment samples from two different stations on the reef platform were analyzed for grain size and mineralogy. Dissolution rates of sediments in the dominant grain size fraction of the platform (500-1000 μm) from both stations ranged between 16.25 and 47.19 (± 0.27 to 0.79) μmol g⁻¹ h⁻¹ and are comparable to rates previously obtained from laboratory experiments on other natural carbonate sediments. At a pCO2 of 3500 μatm, rates from both samples were similar, despite their differing mineralogy. However, at pCO2 levels above 3500 μatm, the sediment sample with a greater weight percent of Mg-calcite had slightly higher dissolution rates. Despite many laboratory studies on biogenic carbonate dissolution, a significant disparity still exists between laboratory measurements and field observations. Performing additional controlled laboratory experiments on natural sediment may help to elucidate the reasons for this disparity.
Determination of Survivable Fires
NASA Technical Reports Server (NTRS)
Dietrich, D. L.; Niehaus, J. E.; Ruff, G. A.; Urban, D. L.; Takahashi, F.; Easton, J. W.; Abbott, A. A.; Graf, J. C.
2012-01-01
At NASA, there exists no standardized design or testing protocol for spacecraft fire suppression systems (either handheld or total-flooding designs). An extinguisher's efficacy in safely suppressing any reasonable or conceivable fire is the primary benchmark. That concept, however, leads to the question of what a reasonable or conceivable fire is. While there is a temptation to 'over-size' the fire extinguisher, weight and volume considerations on spacecraft will always (justifiably) push for the minimum size extinguisher required. This paper attempts to address the question of extinguisher size by examining how large a fire a crew member could successfully survive and extinguish in the confines of a spacecraft. The hazards to the crew and equipment during an accidental fire include excessive pressure rise resulting in a catastrophic rupture of the vehicle skin, excessive temperatures that burn or incapacitate the crew (due to hyperthermia), and carbon dioxide build-up or accumulation of other combustion products (e.g., carbon monoxide). Estimates of these quantities are determined as a function of fire size and mass of material burned. This then becomes the basis for determining the maximum size of a target fire for future fire extinguisher testing.
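As a rough illustration of how the pressure-rise hazard scales with the mass of material burned, the sketch below applies a constant-volume energy balance with the ideal-gas law; the cabin volume, heat of combustion, and neglect of wall losses are hypothetical assumptions, not values from the paper.

```python
# Crude constant-volume estimate of cabin pressure rise from a small fire.
# All numbers are illustrative assumptions, not values from the study.
V = 100.0                   # cabin free volume, m^3
T0, P0 = 294.0, 101_325.0   # initial temperature (K) and pressure (Pa)
rho_air, cv = 1.2, 718.0    # air density (kg/m^3), specific heat at const. volume (J/kg/K)
dHc = 20e6                  # effective heat of combustion of burned material, J/kg
m_burned = 0.05             # mass of material burned, kg

Q = m_burned * dHc                 # heat released, J (assumes no losses to walls)
dT = Q / (rho_air * V * cv)        # bulk temperature rise, K
dP = P0 * dT / T0                  # ideal gas at constant volume: dP/P0 = dT/T0
print(f"dT ~ {dT:.0f} K, dP ~ {dP/1000:.1f} kPa")
```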
Altschuler, Justin; Margolius, David; Bodenheimer, Thomas; Grumbach, Kevin
2012-01-01
PURPOSE Primary care faces the dilemma of excessive patient panel sizes in an environment of a primary care physician shortage. We aimed to estimate primary care panel sizes under different models of task delegation to nonphysician members of the primary care team. METHODS We used published estimates of the time it takes for a primary care physician to provide preventive, chronic, and acute care for a panel of 2,500 patients, and modeled how panel sizes would change if portions of preventive and chronic care services were delegated to nonphysician team members. RESULTS Using 3 assumptions about the degree of task delegation that could be achieved (77%, 60%, and 50% of preventive care, and 47%, 30%, and 25% of chronic care), we estimated that a primary care team could reasonably care for a panel of 1,947, 1,523, or 1,387 patients. CONCLUSIONS If portions of preventive and chronic care services are delegated to nonphysician team members, primary care practices can provide recommended preventive and chronic care with panel sizes that are achievable with the available primary care workforce. PMID:22966102
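The panel-size arithmetic in this abstract can be made concrete with a short calculation: scale the reference panel of 2,500 by the ratio of physician time available to physician time still required after delegation. The per-service hour figures below are hypothetical placeholders, so the output will not exactly reproduce the published panels.

```python
# Hypothetical annual physician hours required to serve a 2,500-patient panel.
hours_preventive, hours_chronic, hours_acute = 1700.0, 2300.0, 900.0
hours_available = 2000.0  # assumed physician clinical hours per year

def panel_size(delegate_prev: float, delegate_chronic: float) -> float:
    """Panel supportable when fractions of preventive/chronic work are delegated."""
    required = (hours_preventive * (1 - delegate_prev)
                + hours_chronic * (1 - delegate_chronic)
                + hours_acute)
    return 2500.0 * hours_available / required

for dp, dc in [(0.77, 0.47), (0.60, 0.30), (0.50, 0.25)]:
    print(f"delegate {dp:.0%} preventive, {dc:.0%} chronic -> panel ~ {panel_size(dp, dc):,.0f}")
```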
Correlated evolution of host and parasite body size: tests of Harrison's rule using birds and lice.
Johnson, Kevin P; Bush, Sarah E; Clayton, Dale H
2005-08-01
Large-bodied species of hosts often harbor large-bodied parasites, a pattern known as Harrison's rule. Harrison's rule has been documented for a variety of animal parasites and herbivorous insects, yet the adaptive basis of the body-size correlation is poorly understood. We used phylogenetically independent methods to test for Harrison's rule across a large assemblage of bird lice (Insecta: Phthiraptera). The analysis revealed a significant relationship between louse and host size, despite considerable variation among taxa. We explored factors underlying this variation by testing Harrison's rule within two groups of feather-specialist lice that share hosts (pigeons and doves). The two groups, wing lice (Columbicola spp.) and body lice (Physconelloidinae spp.), have similar life histories, despite spending much of their time on different feather tracts. Wing lice showed strong support for Harrison's rule, whereas body lice showed no significant correlation with host size. Wing louse size was correlated with wing feather size, which was in turn correlated with overall host size. In contrast, body louse size showed no correlation with body feather size, which also was not correlated with overall host size. The reason why body lice did not fit Harrison's rule may be related to the fact that different species of body lice use different microhabitats within body feathers. More detailed measurements of body feathers may be needed to explore the precise relationship of body louse size to relevant components of feather size. Whatever the reason, Harrison's rule does not hold in body lice, possibly because selection on body size is mediated by community-level interactions between body lice.
Ciriello, Rosanna; Iallorenzi, Pina Teresa; Laurita, Alessandro; Guerrieri, Antonio
2017-03-01
A novel capillary zone electrophoresis (CZE) method was developed for improved separation and size characterization of pristine gold nanoparticles (AuNP), using uncoated fused-silica capillaries with UV-Vis detection at 520 nm. To avoid colloid aggregation and/or adsorption during runs, poly(sodium 4-styrenesulfonate) (PSS) was added (1%, w/v) to the running buffer (CAPS 10 mM, pH 11). This polyelectrolyte conferred enhanced steric and electrostatic stabilization on the AuNP while at the same time enhancing their differences in electrophoretic mobility. Resolution was further improved through a stepwise field-strength gradient: 25 kV applied for the first 5 min and then 10 kV. Migration times varied linearly with particle diameter, with relative standard deviations better than 1% for daily experiments and 3% for interday experiments. A comparison with the size distribution obtained by transmission electron microscopy (TEM) showed that the electrophoretic profile can reasonably be considered representative of the effective size heterogeneity of each colloid. Finally, the practical utility of the proposed method was demonstrated by measuring the core diameter of a gold colloid sample produced by chemical synthesis, which was in good agreement with the value obtained by TEM measurements. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Phylogenetic effective sample size.
Bartoszek, Krzysztof
2016-10-21
In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions with an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or the effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades, and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
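To give intuition for why phylogenetically correlated tips carry less information than independent ones, the sketch below applies one standard effective-sample-size measure for correlated Gaussian data, n_e = 1ᵀR⁻¹1, to a hypothetical three-taxon tree; this is an illustration of the general idea, not the paper's regression ESS itself.

```python
import numpy as np

# Brownian motion on a tree makes tip values correlated: the correlation of two
# tips equals the fraction of the root-to-tip path they share. Hypothetical
# 3-taxon tree of depth 1.0 in which taxa A and B share a branch of length 0.6.
R = np.array([[1.0, 0.6, 0.0],
              [0.6, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# A standard effective-sample-size measure for correlated Gaussian data:
# n_e = 1' R^{-1} 1 (equals n for independent tips, 1 for perfectly correlated).
ones = np.ones(R.shape[0])
n_e = ones @ np.linalg.solve(R, ones)
print(f"effective sample size: {n_e:.2f} of n = {R.shape[0]}")
```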
Miranda, J Jaime; Gilman, Robert H; García, Héctor H; Smeeth, Liam
2009-01-01
Background Mass migration observed in Peru from the 1970s occurred because of the need to escape politically motivated violence as well as for work-related reasons. The majority of the migrant population, mostly Andean peasants from the mountainous areas, tends to settle in clusters in certain parts of the capital, and their rural environment could not be more different from the urban one. Because the key driver for migration was not the usual economic and work-related reasons, the selection effects whereby migrants differ from non-migrants are likely to be less prominent in Peru. Thus the Peruvian context offers a unique opportunity to test the effects of migration. Methods/Design The PERU MIGRANT (PEru's Rural to Urban MIGRANTs) study was designed to investigate the magnitude of differences between rural-to-urban migrant and non-migrant groups in specific CVD risk factors. For this, three groups were selected: rural, people who have always lived in a rural environment; rural-urban, people who migrated from rural to urban areas; and urban, people who have always lived in an urban environment. Discussion The overall response rate at enrolment was 73.2% and the overall response rate at completion of the study was 61.6%. A rejection form was obtained from 282/323 people who refused to take part in the study (87.3%). Refusals did not differ by sex in the rural and migrant groups, but 70% of refusals in the urban group were male. In terms of age, most refusals were observed in the oldest age group (>60 years old) in all study groups. The final total sample size achieved was 98.9% of the target sample size (989/1000). Of these, 52.8% (522/989) were female. The final sizes of the rural, migrant and urban study groups were 201, 589 and 199 people, respectively. Migrants' average age at first migration and years lived in an urban environment were 14.4 years (IQR 10–17) and 32 years (IQR 25–39), respectively. This paper describes the PERU MIGRANT study design together with a critical analysis of the potential for bias and confounding in migrant studies, and strategies for reducing these problems. A discussion of the potential advantages provided by the case of migration in Peru to the field of migration and health is also presented. PMID:19505331
Developing Systems of Notation as a Trace of Reasoning
ERIC Educational Resources Information Center
Tillema, Erik; Hackenberg, Amy
2011-01-01
In this paper, we engage in a thought experiment about how students might notate their reasoning for composing fractions multiplicatively (taking a fraction of a fraction and determining its size in relation to the whole). In the thought experiment we differentiate between two levels of a fraction composition scheme, which have been identified in…
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, using a method developed in our laboratory. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells (the samples). The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. Endothelial samples need to include more cells for examinations to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
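The "customized sample size" logic described here, i.e. how many cells must be counted so the mean is estimated within a chosen relative error at a given reliability, can be sketched with the standard formula n = (z·CV/RE)²; the coefficient of variation below is a hypothetical value, not one reported in the paper.

```python
from math import ceil
from scipy.stats import norm

def cells_needed(cv: float, rel_error: float = 0.05, reliability: float = 0.95) -> int:
    """Cells to count so the mean is within rel_error at the given reliability."""
    z = norm.ppf(1 - (1 - reliability) / 2)   # two-sided critical value
    return ceil((z * cv / rel_error) ** 2)

# Hypothetical between-cell coefficient of variation of 0.30:
print(cells_needed(cv=0.30))   # -> 139 cells
```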
Accounting for twin births in sample size calculations for randomised trials.
Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J
2018-05-04
Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
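A minimal sketch of the kind of adjustment the authors describe: inflate an independent-outcomes sample size by a design effect reflecting the ICC among twins and the expected proportion of twin infants. The design-effect formula is the usual cluster-design approximation, and the inputs are hypothetical.

```python
from math import ceil

def adjusted_n(n_independent: int, icc: float, prop_twin_infants: float) -> int:
    """Inflate a sample size for clustering of outcomes within twin pairs.

    Uses the usual design-effect approximation DE = 1 + (m - 1) * icc,
    where m is the average number of infants per mother.
    """
    m = 1.0 / (1.0 - prop_twin_infants / 2.0)   # average cluster size
    return ceil(n_independent * (1.0 + (m - 1.0) * icc))

# Hypothetical trial: 300 infants needed if outcomes were independent,
# ICC = 0.5 between twins, 20% of enrolled infants expected to be twins.
print(adjusted_n(300, icc=0.5, prop_twin_infants=0.20))
```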
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations, and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method, and the bootstrap method. Among the three methods of testing mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by their relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation): a larger ICC typically required a larger sample size. The simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice have also been published for convenient use. Extensive simulation studies showed that the distribution of the product method and the bootstrap method perform better than Sobel's method, but the product method is recommended for use in practice because it requires less computation time than bootstrapping. An R package has been developed for sample size determination by the product method in longitudinal mediation study designs.
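For orientation, Sobel's test referenced here combines the two path estimates of a mediation model into a single z statistic; a sketch of the statistic and a normal-approximation power calculation follows, with hypothetical effect sizes and standard errors.

```python
from math import sqrt
from scipy.stats import norm

def sobel_power(a, b, se_a, se_b, alpha=0.05):
    """Approximate power of Sobel's test for the mediated effect a*b."""
    se_ab = sqrt(b**2 * se_a**2 + a**2 * se_b**2)  # first-order delta method
    z = abs(a * b) / se_ab
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(z - z_crit)

# Hypothetical standardized paths a = b = 0.3 with SEs typical of n ~ 100:
print(f"power ~ {sobel_power(0.3, 0.3, 0.09, 0.09):.2f}")
```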
Public Opinion Polls, Chicken Soup and Sample Size
ERIC Educational Resources Information Center
Nguyen, Phung
2005-01-01
Cooking and tasting chicken soup in three pots of very different sizes serves to demonstrate that it is the absolute sample size that matters most in determining the accuracy of a poll's findings, not the relative sample size, i.e. the size of the sample in relation to its population.
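The point of the chicken-soup analogy, that accuracy depends on the absolute sample size and not on the sample-to-population ratio, drops straight out of the margin-of-error formula; the sketch below compares a hypothetical small-city and national "pot".

```python
from math import sqrt

def margin_of_error(n, N=None, p=0.5, z=1.96):
    """95% margin of error for a proportion; optional finite-population correction."""
    moe = z * sqrt(p * (1 - p) / n)
    if N is not None:
        moe *= sqrt((N - n) / (N - 1))  # finite-population correction
    return moe

# Same n = 1000 respondents drawn from very different "pots":
print(f"city of 50,000:        +/- {margin_of_error(1000, 50_000):.3f}")
print(f"nation of 300 million: +/- {margin_of_error(1000, 300_000_000):.3f}")
```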
Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.
Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto
2007-07-01
To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of the literature published in 2005. The frequency of reporting sample size calculations and the sample sizes used were extracted from the published literature. A manual search of the five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study; another reported consideration of sample size without a calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies consider sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
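For context on what such a calculation looks like, a commonly used approach (in the spirit of Buderer's method) sizes a diagnostic accuracy study so that sensitivity is estimated within a desired confidence-interval half-width, then inflates for disease prevalence; the inputs below are hypothetical.

```python
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(sens: float, half_width: float, prevalence: float,
                      conf: float = 0.95) -> int:
    """Total subjects needed to estimate sensitivity within +/- half_width."""
    z = norm.ppf(1 - (1 - conf) / 2)
    n_diseased = (z**2 * sens * (1 - sens)) / half_width**2
    return ceil(n_diseased / prevalence)   # inflate: only diseased subjects count

# Hypothetical: expected sensitivity 0.85, CI half-width 0.05, prevalence 50%:
print(n_for_sensitivity(0.85, 0.05, 0.50))  # -> 392
```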
Bateman, James; Allen, Maggie E; Kidd, Jane; Parsons, Nick; Davies, David
2012-08-01
Virtual Patients (VPs) are web-based representations of realistic clinical cases. They are proposed as an optimal method for teaching clinical reasoning skills. International standards exist which define precisely what constitutes a VP. There are multiple design possibilities for VPs; however, there is little formal evidence to support individual design features. The purpose of this trial is to explore the effect of two different, potentially important design features on clinical reasoning skills and the student experience: branching case pathways (present or absent) and structured clinical reasoning feedback (present or absent). This is a multi-centre randomised 2 x 2 factorial design study evaluating two independent variables of VP design, branching (present or absent) and structured clinical reasoning feedback (present or absent). The study will be carried out with medical student volunteers from one year group at three university medical schools in the United Kingdom: Warwick, Keele and Birmingham. There are four core musculoskeletal topics. Each case can be designed in four different ways, equating to 16 VPs required for the research. Students will be randomised to four groups, completing the four VP topics in the same order, but with each group exposed to a different VP design sequentially. All students will be exposed to all four designs. The primary outcomes are performance for each case design in a standardised fifteen-item clinical reasoning assessment, integrated into each VP and identical for each topic. Additionally, a fifteen-item self-reported evaluation, based on the widely used EViP tool, is completed for each VP. Student patterns of use of the VPs will be recorded. In one centre, formative clinical and examination performance will be recorded, along with a self-reported pre- and post-intervention reasoning score (the DTI). Our power calculations indicate a sample size of 112 is required for both primary outcomes. This trial will provide robust evidence to support the effectiveness of different designs of virtual patients, based on student performance and evaluation. The cases and all learning materials will be open access and available under a Creative Commons Attribution-Share-Alike licence.
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine the choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimation of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine the population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) statement was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimum and 25th-percentile sample SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes of 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than a 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when the sample SD was used to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, the 80% upper confidence limit (UCL) of the SD, the 70% UCL, and the 60% UCL to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th-percentile and the maximum SD from 10 samples were used. A greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of the SD, the maximum SD, the 80th-percentile SD, and the 75th-percentile SD to calculate the sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or the average SD to calculate the sample size should be avoided.
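The authors' remedy, using an upper confidence limit of the sample SD rather than the SD itself, can be sketched as follows: the UCL comes from the chi-square distribution of (n−1)s²/σ², and a quick simulation shows how often a raw sample SD understates a population SD of 44. The simulation settings are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
sigma, n = 44.0, 20                      # population SD and pilot-sample size

def sd_ucl(s, n, level=0.80):
    """One-sided upper confidence limit for sigma from a sample SD s."""
    return s * np.sqrt((n - 1) / chi2.ppf(1 - level, n - 1))

samples = rng.normal(0.0, sigma, size=(10_000, n))
s = samples.std(axis=1, ddof=1)
print(f"P(sample SD < sigma)     = {(s < sigma).mean():.2f}")           # over half
print(f"P(80% UCL of SD < sigma) = {(sd_ucl(s, n) < sigma).mean():.2f}")  # far fewer
```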
Cooke, Richard; French, David P
2008-01-01
Meta-analysis was used to quantify how well the Theories of Reasoned Action and Planned Behaviour have predicted intentions to attend screening programmes and actual attendance behaviour. Systematic literature searches identified 33 studies that were included in the review. Across the studies as a whole, attitudes had a large-sized relationship with intention, while subjective norms and perceived behavioural control (PBC) possessed medium-sized relationships with intention. Intention had a medium-sized relationship with attendance, whereas the PBC-attendance relationship was small sized. Due to heterogeneity in results between studies, moderator analyses were conducted. The moderator variables were (a) type of screening test, (b) location of recruitment, (c) screening cost and (d) invitation to screen. All moderators affected theory of planned behaviour relationships. Suggestions for future research emerging from these results include targeting attitudes to promote intention to screen, a greater use of implementation intentions in screening information and examining the credibility of different screening providers.
Three-dimensional templating arthroplasty of the humeral head.
Cho, Sung Won; Jharia, Trambak K; Moon, Young Lae; Sim, Sung Woo; Shin, Dong Sun; Bigliani, Louis U
2013-10-01
No anatomical study had previously been conducted on an Asian population to inform the design of humeral head prostheses for that population. This study was done to evaluate the accuracy of commercially available humeral head prosthetic designs in replicating humeral head anatomy. CT scan data of 48 patients were taken and their 3D CAD models were generated. The humeral head prosthetic design of a commercially available system (the Zimmer BF shoulder system) was then used for templating shoulder arthroplasty, and the best-fitting humeral head size was assessed. These data were compared with the available data in the literature. All the humeral heads were perfectly matched by one of the available sizes. The average head size was 48.5 mm and the average head thickness was 23.5 mm. The results matched reasonably well with the available data in the literature. The humeral head anatomy can be recreated reasonably well by the commercially available humeral head prosthetic designs and sizes. Their dimensions are similar to those in the published literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weaver, Jordan S.; Khosravani, Ali; Castillo, Andrew
2016-06-14
Recent spherical nanoindentation protocols have proven robust at capturing the local elastic-plastic response of polycrystalline metal samples at length scales much smaller than the grain size. In this work, we extend these protocols to length scales that include multiple grains to recover microindentation stress-strain curves. These new protocols are first established in this paper and then demonstrated for Al-6061 by comparing the measured indentation stress-strain curves with the corresponding measurements from uniaxial tension tests. More specifically, the scaling factor between the uniaxial yield strength and the indentation yield strength was determined to be about 1.9, which is significantly lower than the value of 2.8 commonly used in the literature. The reasons for this difference are discussed. Second, the benefits of these new protocols in facilitating high-throughput exploration of process-property relationships are demonstrated through a simple case study.
Metallurgical investigation of wire breakage of tyre bead grade.
Palit, Piyas; Das, Souvik; Mathur, Jitendra
2015-10-01
Tyre bead grade wire is used in tyre-making applications, where it serves as reinforcement inside the polymer of the tyre. The wire is available in different sizes/sections, such as 1.6-0.80 mm thin Cu-coated wire. During tyre-making operations at the tyre manufacturer, the wire failed frequently. In the present study, different broken/defective wire samples were collected from the wire mill for detailed investigation of the defect. The defects were localized and similar in nature, and the fracture surface was of the finger-nail type. Crow's-foot-like defects, including button-like surface abnormalities, were also observed on the broken wire samples. The defect was studied in different orientations under the microscope, and several advanced metallographic techniques were used for the detailed investigation. The analysis revealed that a white layer of surface martensite had formed and caused the final breakage of the wire. The possible reasons for the formation of this kind of surface martensite (hard phase) are also discussed.
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. Such models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore relationships in nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood in order to investigate the relationship between stock market prices and rubber prices in the sampled countries. The results indicate a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia.
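A minimal sketch of how maximum likelihood fits a two-component normal mixture in practice: the EM algorithm alternates responsibilities (E-step) and weighted parameter updates (M-step). The data are synthetic, not the stock-market and rubber-price series used in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 1.5, 600)])  # synthetic

# Initial guesses for weights, means, and standard deviations
w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(200):                        # EM iterations
    # E-step: responsibility of each component for each point
    dens = w * norm.pdf(x[:, None], mu, sd)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood updates
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(f"weights {w.round(2)}, means {mu.round(2)}, sds {sd.round(2)}")
```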
Time-resolved Fast Neutron Radiography of Air-water Two-phase Flows
NASA Astrophysics Data System (ADS)
Zboray, Robert; Dangendorf, Volker; Mor, Ilan; Tittelmeier, Kai; Bromberger, Benjamin; Prasser, Horst-Michael
Neutron imaging, in general, is a useful technique for visualizing low-Z materials (such as water or plastics) obscured by high-Z materials. However, when significant amounts of both materials are present and full-bodied samples have to be examined, cold and thermal neutrons rapidly reach their applicability limit as the samples become opaque. In such cases one can benefit from the high penetrating power of fast neutrons. In this work we demonstrate the feasibility of time-resolved fast neutron radiography of generic air-water two-phase flows in a 1.5-cm-thick flow channel with aluminum walls and a rectangular cross section. The experiments were carried out at the high-intensity, white-beam facility of the Physikalisch-Technische Bundesanstalt, Germany. Exposure times down to 3.33 ms were achieved with reasonable image quality and acceptable motion artifacts. Different two-phase flow regimes such as bubbly, slug and churn flows were examined. Two-phase flow parameters such as the volumetric gas fraction, bubble size and bubble velocity were measured.
An engineering method for estimating notch-size effect in fatigue tests on steel
NASA Technical Reports Server (NTRS)
Kuhn, Paul; Hardrath, Herbert F
1952-01-01
Neuber's proposed method of calculating a practical factor of stress concentration for parts containing notches of arbitrary size depends on the knowledge of a "new material constant" which can be established only indirectly. In this paper, the new constant has been evaluated for a large variety of steels from fatigue tests reported in the literature, attention being confined to stresses near the endurance limit. Reasonably satisfactory results were obtained with the assumption that the constant depends only on the tensile strength of the steel. Even in cases where the notches were cracks of which only the depth was known, reasonably satisfactory agreement was found between calculated and experimental factors. It is also shown that the material constant can be used in an empirical formula to estimate the size effect on unnotched specimens tested in bending fatigue.
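The calculation the paper builds on is Neuber's relation between the theoretical stress concentration factor K_t and the effective fatigue notch factor K_f, moderated by the material constant ρ′ (which, per the paper, can be taken to depend only on tensile strength); the numeric values below are hypothetical, shown only to make the arithmetic concrete.

```python
from math import sqrt

def fatigue_notch_factor(kt: float, notch_radius_mm: float, rho_prime_mm: float) -> float:
    """Neuber's relation: K_f = 1 + (K_t - 1) / (1 + sqrt(rho' / r))."""
    return 1.0 + (kt - 1.0) / (1.0 + sqrt(rho_prime_mm / notch_radius_mm))

# Hypothetical steel with rho' = 0.25 mm (in Neuber's scheme rho' falls as
# tensile strength rises); notch of radius 0.5 mm with elastic K_t = 3.0.
print(f"K_f = {fatigue_notch_factor(3.0, 0.5, 0.25):.2f}")   # < K_t = 3.0
```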
Small-sized microplastics and pigmented particles in bottled mineral water.
Oßmann, Barbara E; Sarau, George; Holtmannspötter, Heinrich; Pischetsrieder, Monika; Christiansen, Silke H; Dicke, Wilhelm
2018-09-15
Up to now, only a few studies of microparticle contamination in bottled mineral water have been published, with 5 μm as the smallest analysed particle size. For toxicological reasons, however, microparticles smaller than 1.5 μm are of particular concern. Therefore, in the present study, 32 samples of bottled mineral water were investigated for contamination by microplastics, pigment and additive particles. Owing to the use of aluminium-coated polycarbonate membrane filters and micro-Raman spectroscopy, a smallest analysed particle size of 1 μm was achieved. Microplastics were found in water from all bottle types: in single-use and reusable bottles made of poly(ethylene terephthalate) (PET) as well as in glass bottles. The amount of microplastics in mineral water varied from 2649 ± 2857 per litre in single-use PET bottles up to 6292 ± 10521 per litre in glass bottles. While in plastic bottles the predominant polymer type was PET, in glass bottles various polymers such as polyethylene or styrene-butadiene copolymer were found. Hence, besides the packaging itself, other contamination sources have to be considered. Pigment particles were detected in high amounts in reusable, paper-labelled bottles (195047 ± 330810 pigment particles per litre in glass and 23594 ± 25518 pigment particles per litre in reusable paper-labelled PET bottles). The pigment types found in the water samples were the same as those used for label printing, indicating the bottle cleaning process as a possible contamination route. Furthermore, on average 708 ± 1024 particles per litre of the additive Tris(2,4-di-tert-butylphenyl)phosphite were found in reusable PET bottles. This additive might be leached from the bottle material itself. Over 90% of the detected microplastics and pigment particles were smaller than 5 μm and thus not covered by previous studies. In summary, this is the first study reporting microplastics, pigment and additive particles in bottled mineral water samples with a smallest analysed particle size of 1 μm. Copyright © 2018 Elsevier Ltd. All rights reserved.
Sociomoral Reasoning in Congenitally Deaf Children as a Function of Cognitive Maturity.
ERIC Educational Resources Information Center
Markoulis, Diomedes; Christoforou, Maria
1991-01-01
Compares the operational and sociomoral reasoning maturity of 70 deaf children with that of a sensory unimpaired control sample. Tests subjects individually on three Piagetian tasks, story pairs, and the concept of justice. Finds slower development of operational reasoning in the deaf children but comparable development in sociomoral reasoning.…
ERIC Educational Resources Information Center
Sumpter, Lovisa
2016-01-01
This study examines Swedish upper secondary school teachers' gendered conceptions about students' mathematical reasoning: whether reasoning was considered gendered and, if so, which type of reasoning was attributed to girls and boys. The sample consisted of 62 teachers from six different schools from four different locations in Sweden. The results…
Inverse reasoning processes in obsessive-compulsive disorder.
Wong, Shiu F; Grisham, Jessica R
2017-04-01
The inference-based approach (IBA) is one cognitive model that aims to explain the aetiology and maintenance of obsessive-compulsive disorder (OCD). The model proposes that certain reasoning processes lead an individual with OCD to confuse an imagined possibility with an actual probability, a state termed inferential confusion. One such reasoning process is inverse reasoning, in which hypothetical causes form the basis of conclusions about reality. Although previous research has found associations between a self-report measure of inferential confusion and OCD symptoms, evidence of a specific association between inverse reasoning and OCD symptoms is lacking. In the present study, we developed a task-based measure of inverse reasoning in order to investigate whether performance on this task is associated with OCD symptoms in an online sample. The results provide some evidence for the IBA assertion: greater endorsement of inverse reasoning was significantly associated with OCD symptoms, even when controlling for general distress and OCD-related beliefs. Future research is needed to replicate this result in a clinical sample and to investigate a potential causal role for inverse reasoning in OCD. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
King, James A.
1987-01-01
The goal is to explain Case-Based Reasoning as a vehicle for building knowledge-based systems based on experiential reasoning, with a view to possible space applications. This goal will be accomplished through an examination of reasoning based on prior experience in a sample domain, and also through a presentation of proposed space applications that could utilize Case-Based Reasoning techniques.
Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.
Rochon, K; Scoles, G A; Lysyk, T J
2012-03-01
A fixed-precision sampling plan was developed for off-host populations of the adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10-m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10-m2 quadrat. Samples taken when abundance was < 0.04 ticks per 10 m2 were more likely not to depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the ten abundance classes. Both the Taylor and Iwao mean-variance relationships were fitted and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate them. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
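The fixed-precision plan follows from Taylor's power law s² = a·m^b: setting precision D = SE/mean and solving for the number of quadrats gives n = a·m^(b−2)/D². The Taylor coefficients below are hypothetical stand-ins for the fitted values, used only to show the shape of the relationship.

```python
from math import ceil

def quadrats_needed(mean_density: float, a: float, b: float, precision: float = 0.25) -> int:
    """Quadrats required for fixed precision D = SE/mean under Taylor's power law."""
    variance = a * mean_density ** b        # Taylor: s^2 = a * m^b
    return ceil(variance / (precision ** 2 * mean_density ** 2))

# Hypothetical Taylor coefficients for adult D. andersoni drag samples:
a, b = 2.0, 1.4
for m in (0.02, 0.1, 0.5):                  # ticks per 10-m^2 quadrat
    print(f"mean {m:>4} -> {quadrats_needed(m, a, b)} quadrats")
```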
RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.
Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu
2018-05-30
One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests and the widely varying read counts and dispersions of different genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as those in The Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions are estimated from the reference distribution; using that information, the power and sample size are estimated and summarized. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
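To make the negative-binomial power idea concrete, here is a Monte-Carlo sketch for a single gene: simulate NB read counts under a given fold change and dispersion and count how often a test rejects. This is a simplified stand-in for the package's own calculations (it uses a rank test rather than exact NB tests), and all parameters are hypothetical.

```python
import numpy as np
from scipy.stats import nbinom, mannwhitneyu

def nb_counts(rng, mean, dispersion, size):
    """Draw NB counts parameterized by mean and dispersion (var = mu + disp*mu^2)."""
    r = 1.0 / dispersion
    p = r / (r + mean)
    return nbinom.rvs(r, p, size=size, random_state=rng)

def power(n_per_group=10, mean=50, fold=2.0, disp=0.2, alpha=0.001, sims=2000):
    rng = np.random.default_rng(2)
    hits = 0
    for _ in range(sims):
        a = nb_counts(rng, mean, disp, n_per_group)
        b = nb_counts(rng, mean * fold, disp, n_per_group)
        # alpha stands in for a stringent per-gene cutoff (multiple testing)
        if mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
            hits += 1
    return hits / sims

print(f"power ~ {power():.2f}")
```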
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many investigators use an exact McNemar test to calculate sample size. We reviewed approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions and estimate the sample size based on GEE. We solved for the interior cell proportions from the correlation and the marginal proportions to estimate sample sizes based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
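For reference, the asymptotic unconditional McNemar sample size the authors recommend can be written directly in terms of the hypothesized discordant-cell proportions p01 and p10; a sketch with hypothetical values follows.

```python
from math import ceil, sqrt
from scipy.stats import norm

def mcnemar_n(p01: float, p10: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Pairs needed for the asymptotic unconditional McNemar test."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pd, diff = p01 + p10, p10 - p01          # total discordance and its imbalance
    n = (za * sqrt(pd) + zb * sqrt(pd - diff**2)) ** 2 / diff**2
    return ceil(n)

# Hypothetical paired binary outcome: discordant proportions 0.15 vs 0.25.
print(mcnemar_n(0.15, 0.25))   # -> pairs required
```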
de Campos, Francisco Ferreira; Enzweiler, Jacinta
2016-05-01
The concentrations of rare earth elements (REE), measured in water samples from Atibaia River and its tributary Anhumas Creek, Brazil, present excess of dissolved gadolinium. Such anthropogenic anomalies of Gd in water, already described in other parts of the world, result from the use of stable and soluble Gd chelates as contrast agents in magnetic resonance imaging. Atibaia River constitutes the main water supply of Campinas Metropolitan area, and its basin receives wastewater effluents. The REE concentrations in water samples were determined in 0.22-μm pore size filtered samples, without and after preconcentration by solid-phase extraction with bis-(2-ethyl-hexyl)-phosphate. This preconcentration method was unable to retain the anthropogenic Gd quantitatively. The probable reason is that the Gd chelates dissociate slowly in acidic media to produce the free ion that is retained by the phosphate ester. Strong correlations between Gd and constituents or parameters associated with effluents confirmed the source of most Gd in water samples as anthropogenic. The shale-normalized REE patterns of Atibaia River and Anhumas Creek water samples showed light and heavy REE enrichment trends, respectively. Also, positive Ce anomalies in many Atibaia River samples, as well as the strong correlations of the REE (except Gd) with terrigenous elements, imply that inorganic colloidal particles contributed to the REE measured values.
Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.
McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M
2015-03-01
Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.
Johnson, Eric D; Tubau, Elisabet
2017-06-01
Presenting natural frequencies facilitates Bayesian inferences relative to using percentages. Nevertheless, many people, including highly educated and skilled reasoners, still fail to provide Bayesian responses to these computationally simple problems. We show that the complexity of relational reasoning (e.g., the structural mapping between the presented and requested relations) can help explain the remaining difficulties. With a non-Bayesian inference that required identical arithmetic but afforded a more direct structural mapping, performance was universally high. Furthermore, reducing the relational demands of the task through questions that directed reasoners to use the presented statistics, as compared with questions that prompted the representation of a second, similar sample, also significantly improved reasoning. Distinct error patterns were also observed between these presented- and similar-sample scenarios, which suggested differences in relational-reasoning strategies. On the other hand, while higher numeracy was associated with better Bayesian reasoning, higher-numerate reasoners were not immune to the relational complexity of the task. Together, these findings validate the relational-reasoning view of Bayesian problem solving and highlight the importance of considering not only the presented task structure, but also the complexity of the structural alignment between the presented and requested relations.
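To illustrate the contrast the authors study, the sketch below computes the same posterior probability two ways: from percentages via Bayes' rule and from a natural-frequency breakdown of 1,000 cases. The diagnostic-test numbers are hypothetical.

```python
# One Bayesian textbook problem, two formats (hypothetical numbers).
base_rate, sensitivity, false_pos = 0.01, 0.80, 0.096

# Percentage format: Bayes' rule.
p_pos = base_rate * sensitivity + (1 - base_rate) * false_pos
posterior = base_rate * sensitivity / p_pos

# Natural-frequency format: out of 1000 people, 10 have the condition,
# 8 of them test positive; ~95 of the 990 without it also test positive.
true_pos, all_pos = 8, 8 + 95
print(f"Bayes' rule: {posterior:.3f}   natural frequencies: {true_pos/all_pos:.3f}")
```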
Type and location of findings in dental panoramic tomographs in 7-12-year-old orthodontic patients.
Pakbaznejad Esmaeili, Elmira; Ekholm, Marja; Haukka, Jari; Waltimo-Sirén, Janna
2016-01-01
The Radiation and Nuclear Safety Authority in Finland has paid attention to the large numbers of dental panoramic tomographs (DPTs), particularly in 7-12-year-old children. The majority of these radiographs are taken for orthodontic reasons. Because of the high radiosensitivity of children, the size of the irradiated field should be carefully chosen to yield the necessary diagnostic information at the lowest possible dose. The purpose of the present study was, therefore, to assess the outcome of DPTs within this age group in terms of the type and location of pathological findings. It was also hypothesized that DPTs of orthodontic patients rarely display unrestored caries. Four hundred and forty-one DPTs, taken of 7-12-year-old children in 2010-2014, were randomly sampled. The 413 of them (94%) that had been taken for orthodontic reasons were analysed. All pathological findings were restricted to the tooth-bearing area; there was no pathology in the bone structure and no incidental findings in the region of the temporomandibular joint. Contrary to the hypothesis, 27% of the orthodontic DPTs showed caries in deciduous teeth and 16% in permanent teeth. A sub-sample of 229 DPTs, analysed for developmental dental and occlusal problems, most commonly displayed crowding (50%), positional anomalies and local problems with tooth eruption (32%), and hyperodontia (15%). Inclusion of only the actual area of interest in the image field should be considered case-specifically as a means to reduce the radiation dose.
Terribile, L C; Diniz-Filho, J A F; De Marco, P
2010-05-01
The use of ecological niche models (ENM) to generate potential geographic distributions of species has rapidly increased in ecology, conservation and evolutionary biology. Many methods are available; the most widely used are the Maximum Entropy Method (MAXENT) and the Genetic Algorithm for Rule Set Production (GARP). Recent studies have shown that MAXENT performs better than GARP. Here we used ROC-AUC statistics (area under the receiver operating characteristic curve) and bootstrapping to evaluate the performance of GARP and MAXENT in generating potential distribution models for 39 species of New World coral snakes. We found that AUC values for GARP ranged from 0.923 to 0.999, whereas those for MAXENT ranged from 0.877 to 0.999. On the whole, the differences in AUC were very small, but for 10 species GARP outperformed MAXENT. Means and standard deviations of 100 bootstrapped samples, with sample sizes ranging from 3 to 30 species, did not show any trend away from a zero difference between GARP and MAXENT AUC values. Our results suggest that further studies are still necessary to establish under which circumstances the statistical performance of the methods varies. However, it is also important to consider the possibility that this empirical inductive reasoning may ultimately fail, because we almost certainly cannot establish all the potential scenarios generating variation in the relative performance of the models.
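A sketch of the kind of bootstrap comparison used here: resample the evaluation set, recompute each model's AUC, and inspect the distribution of the difference. The labels and scores below are synthetic placeholders, not outputs of GARP or MAXENT.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 200
y = rng.integers(0, 2, n)                       # synthetic presence/absence labels
score_garp = y + rng.normal(0, 1.0, n)          # synthetic model outputs
score_maxent = y + rng.normal(0, 0.9, n)

diffs = []
for _ in range(1000):                           # bootstrap over sites
    idx = rng.integers(0, n, n)
    if len(np.unique(y[idx])) < 2:              # need both classes to score AUC
        continue
    diffs.append(roc_auc_score(y[idx], score_garp[idx])
                 - roc_auc_score(y[idx], score_maxent[idx]))

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC(GARP) - AUC(MAXENT): mean {np.mean(diffs):+.3f}, 95% CI [{lo:+.3f}, {hi:+.3f}]")
```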
ERIC Educational Resources Information Center
Frank, Andrew J.; Cathcart, Nicole; Maly, Kenneth E.; Kitaev, Vladimir
2010-01-01
A robust and reasonably simple experiment is described that introduces students to the visualization of nanoscale properties and is intended for a first-year laboratory. Silver nanoprisms (NPs) that display different colors due to variation of their plasmonic absorption with respect to size are prepared. Control over the size of the silver…
Geng, Elvin H; Bangsberg, David R; Musinguzi, Nicolas; Emenyonu, Nneka; Bwana, Mwebesa Bosco; Yiannoutsos, Constantin T; Glidden, David V; Deeks, Steven G; Martin, Jeffrey N
2010-03-01
Losses to follow-up after initiation of antiretroviral therapy (ART) are common in Africa and are a considerable obstacle to understanding the effectiveness of nascent treatment programs. We sought to characterize, through a sampling-based approach, reasons for and outcomes of patients who become lost to follow-up. Cohort study. We searched for and interviewed a representative sample of lost patients or close informants in the community to determine reasons for and outcomes among lost patients. Three thousand six hundred twenty-eight HIV-infected adults initiated ART between January 1, 2004 and September 30, 2007 in Mbarara, Uganda. Eight hundred twenty-nine became lost to follow-up (cumulative incidence at 1, 2, and 3 years of 16%, 30%, and 39%). We sought a representative sample of 128 lost patients in the community and ascertained vital status in 111 (87%). Top reasons for loss included lack of transportation or money and work/child care responsibilities. Among the 111 lost patients who had their vital status ascertained through tracking, 32 deaths occurred (cumulative 1-year incidence 36%); mortality was highest shortly after the last clinic visit. Lower pre-ART CD4 T-cell count, older age, low blood pressure, and a central nervous system syndrome at the last clinic visit predicted deaths. Of patients directly interviewed, 83% were in care at another clinic and 71% were still using ART. Sociostructural factors are the primary reasons for loss to follow-up. Outcomes among the lost are heterogeneous: both deaths and transfers to other clinics were common. Tracking a sample of lost patients is an efficient means for programs to understand site-specific reasons for and outcomes among patients lost to follow-up.
Magnetic and dielectric properties of Fe3BO6 nanoplates prepared through self-combustion method
NASA Astrophysics Data System (ADS)
Kumari, Kalpana
In the present investigation, a facile synthesis method is explored involving self-combustion of a solid precursor mixture of iron oxide (Fe2O3) and boric acid (H3BO3), using camphor (C10H16O) as fuel in ambient air, to form single-phase Fe3BO6 crystallites. X-ray diffraction (XRD), field-emission scanning electron microscopy (FESEM), magnetic, and dielectric properties of the as-prepared sample are studied. The XRD pattern shows a single-phase compound with an orthorhombic crystal structure (Pnma space group) and an average crystallite size of 42 nm. A reasonably uniform size distribution of the plates and their self-assemblies is retained in the sample. A magnetic transition is observed in the dielectric permittivity (at ~445 K) and power loss (at ~435 K) when plotted against temperature. A weak peak occurs near 330 K due to charge reordering in the sample. At temperatures above the transition temperature, a sharp increase of the dielectric loss is observed, which arises from thermally activated charge carriers. A canted antiferromagnetic Fe3+ ordering in the Fe3BO6 lattice with a localized surface charge layer is an apparent source of the ferroelectric behaviour in this unique example of a centrosymmetric compound. An induced spin current over the Fe sites could thus give rise to a polarization hysteresis loop. Owing to the presence of both ferromagnetic and polarization ordering, Fe3BO6 behaves like a single-phase multiferroic ceramic.
Snowfall Retrievals Using a Video Disdrometer
NASA Astrophysics Data System (ADS)
Newman, A. J.; Kucera, P. A.
2004-12-01
A video disdrometer has recently been developed at NASA/Wallops Flight Facility in an effort to improve surface precipitation measurements. One of the goals of the upcoming Global Precipitation Measurement (GPM) mission is to provide improved satellite-based measurements of snowfall in mid-latitudes. Also, with the planned dual-polarization upgrade of US National Weather Service weather radars, there is potential for significant improvements in radar-based estimates of snowfall. The video disdrometer, referred to as the Rain Imaging System (RIS), was deployed in eastern North Dakota during the 2003-2004 winter season to measure size distributions, precipitation rate, and density estimates of snowfall. The RIS uses a CCD grayscale video camera with a zoom lens to observe hydrometeors in a sample volume located 2 meters from the end of the lens and approximately 1.5 meters from an independent light source. The design of the RIS may eliminate sampling errors from wind flow around the instrument. The RIS operated almost continuously in the adverse conditions often observed in the Northern Plains. Preliminary analysis of an extended winter snowstorm has shown encouraging results. The RIS was able to provide crystal habit information, variability of particle size distributions over the lifecycle of the storm, snowfall rates, and estimates of snow density. Comparisons with coincident snow core samples and measurements from the nearby NWS Forecast Office indicate the RIS provides reasonable snowfall measurements. WSR-88D radar observations over the RIS were used to generate a snowfall-reflectivity relationship for the storm. These results, along with several other cases, will be shown during the presentation.
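Snowfall-reflectivity (Z-S) relationships of this kind are conventionally fitted as a power law Z = a·S^b by least squares in log-log space. The sketch below illustrates only that fitting step; the coefficients, noise model, and values are synthetic assumptions, not the RIS/WSR-88D result.

```python
# Illustrative fit of a power-law snowfall-reflectivity relation Z = a * S^b
# by linear regression in log-log space. All values are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(5)
S = rng.uniform(0.2, 3.0, 50)                      # snowfall rate (mm/h, assumed)
Z = 100.0 * S**2.0 * rng.lognormal(0.0, 0.3, 50)   # reflectivity (mm^6 m^-3)

b, log_a = np.polyfit(np.log(S), np.log(Z), 1)     # slope, then intercept
print(f"Z = {np.exp(log_a):.0f} * S^{b:.2f}")
```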
Liu, Bing; Mei, Hua; DesMarteau, Darryl; Creager, Stephen E
2014-12-11
A monoprotic [(trifluoromethyl)benzenesulfonyl]imide (SI) superacid electrolyte was used to covalently modify a mesoporous carbon xerogel (CX) support via reaction of the corresponding trifluoromethyl aryl sulfonimide diazonium zwitterion with the carbon surface. Electrolyte attachment was demonstrated by elemental analysis, acid-base titration, and thermogravimetric analysis. The ion-exchange capacity of the fluoroalkyl-aryl-sulfonimide-grafted carbon xerogel (SI-CX) was ∼0.18 mequiv g(-1), as indicated by acid-base titration. Platinum nanoparticles were deposited onto the SI-grafted carbon xerogel samples by the impregnation and reduction method, and these materials were employed to fabricate polyelectrolyte membrane fuel-cell (PEMFC) electrodes by the decal transfer method. The SI-grafted carbon-xerogel-supported platinum (Pt/SI-CX) was characterized by X-ray diffraction and transmission electron microscopy to determine platinum nanoparticle size and distribution, and the findings are compared with CX-supported platinum catalyst without the grafted SI electrolyte (Pt/CX). Platinum nanoparticle sizes are consistently larger on Pt/SI-CX than on Pt/CX. The electrochemically active surface area (ESA) of platinum catalyst on the Pt/SI-CX and Pt/CX samples was measured with ex situ cyclic voltammetry (CV) using both hydrogen adsorption/desorption and carbon monoxide stripping methods and by in situ CV within membrane electrode assemblies (MEAs). The ESA values for Pt/SI-CX are consistently lower than those for Pt/CX. Some possible reasons for the behavior of samples with and without grafted SI layers and implications for the possible use of SI-grafted carbon layers in PEMFC devices are discussed.
Frömke, Cornelia; Hothorn, Ludwig A; Kropf, Siegfried
2008-01-27
In many research areas it is necessary to find differences between treatment groups with several variables. For example, studies of microarray data seek, for each variable, a significant difference in location parameters from zero (or from one, for ratios). However, in some studies a significant deviation of the difference in locations from zero (or 1 in terms of the ratio) is biologically meaningless. A relevant difference or ratio is sought in such cases. This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered. Hence, the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses achieving exact control of the familywise error rate. Whereas the shift of the null hypothesis alone would give straightforward solutions, the problems that motivate the empirical considerations discussed here arise from the fact that the shift is considered in both directions and the whole parameter space between these two limits has to be accepted as the null hypothesis. The first algorithm to be discussed uses a permutation approach and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes. In that case the second procedure may be more appropriate; there, multiplicity is corrected according to a concept of data-driven ordering of hypotheses.
A review of reporting of participant recruitment and retention in RCTs in six major journals
Toerien, Merran; Brookes, Sara T; Metcalfe, Chris; de Salis, Isabel; Tomlin, Zelda; Peters, Tim J; Sterne, Jonathan; Donovan, Jenny L
2009-01-01
Background Poor recruitment and retention of participants in randomised controlled trials (RCTs) is problematic but common. Clear and detailed reporting of participant flow is essential to assess the generalisability and comparability of RCTs. Despite improved reporting since the implementation of the CONSORT statement, important problems remain. This paper aims: (i) to update and extend previous reviews evaluating reporting of participant recruitment and retention in RCTs; (ii) to quantify the level of participation throughout RCTs. Methods We reviewed all reports of RCTs of health care interventions and/or processes with individual randomisation, published July–December 2004 in six major journals. Short, secondary or interim reports, and Phase I/II trials were excluded. Data recorded were: general RCT details; inclusion of flow diagram; participant flow throughout trial; reasons for non-participation/withdrawal; target sample sizes. Results 133 reports were reviewed. Overall, 79% included a flow diagram, but over a third were incomplete. The majority reported the flow of participants at each stage of the trial after randomisation. However, 40% failed to report the numbers assessed for eligibility. Percentages of participants retained at each stage were high: for example, 90% of eligible individuals were randomised, and 93% of those randomised were outcome assessed. On average, trials met their sample size targets. However, there were some substantial shortfalls: for example 21% of trials reporting a sample size calculation failed to achieve adequate numbers at randomisation, and 48% at outcome assessment. Reporting of losses to follow up was variable and difficult to interpret. Conclusion The majority of RCTs reported the flow of participants well after randomisation, although only two-thirds included a complete flow chart and there was great variability over the definition of "lost to follow up". Reporting of participant eligibility was poor, making assessments of recruitment practice and external validity difficult. Reporting of participant flow throughout RCTs could be improved by small changes to the CONSORT chart. PMID:19591685
Automated particle identification through regression analysis of size, shape and colour
NASA Astrophysics Data System (ADS)
Rodriguez Luna, J. C.; Cooper, J. M.; Neale, S. L.
2016-04-01
Rapid point-of-care diagnostic tests and tests to provide therapeutic information are now available for a range of specific conditions, from the measurement of blood glucose levels for diabetes to card agglutination tests for parasitic infections. Due to a lack of specificity, these tests are often backed up by more conventional lab-based diagnostic methods; for example, a card agglutination test may be carried out for a suspected parasitic infection in the field and, if positive, a blood sample can then be sent to a lab for confirmation. The eventual diagnosis is often achieved by microscopic examination of the sample. In this paper we propose a computerized vision system to aid in the diagnostic process; this system uses a novel particle recognition algorithm to improve specificity and speed during the diagnostic process. We show the detection and classification of different types of cells in a diluted blood sample using regression analysis of their size, shape and colour. The first step is to define the objects to be tracked, using a Gaussian Mixture Model for background subtraction and binary opening and closing for noise suppression. After subtracting the objects of interest from the background, the next challenge is to predict whether a given object belongs to a certain category or not. This is a classification problem, and the output of the algorithm is a Boolean value (true/false). As such, the computer program should be able to "predict", with a reasonable level of confidence, whether a given particle belongs to the kind we are looking for. We show the use of a binary logistic regression analysis with three continuous predictors: size, shape and colour histogram. The results suggest these variables could be very useful in a logistic regression equation, as they proved to have relatively high predictive value on their own.
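As a concrete illustration of that classification step, the sketch below fits a binary logistic regression on three continuous per-particle predictors. The feature definitions, synthetic data, and decision threshold are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of binary logistic classification on three continuous
# per-particle predictors (size, shape, colour). Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
size = rng.normal(120, 30, n)        # hypothetical area in pixels^2
shape = rng.uniform(0.3, 1.0, n)     # hypothetical circularity in [0, 1]
colour = rng.normal(0.5, 0.15, n)    # hypothetical mean-hue statistic
X = np.column_stack([size, shape, colour])
# Hypothetical ground truth: the target cell type is larger, rounder, redder.
y = ((size > 110) & (shape > 0.6) & (colour > 0.45)).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
proba = clf.predict_proba(X)[:, 1]   # P(particle belongs to the target kind)
is_target = proba > 0.5              # Boolean output, as described above
print(clf.coef_, clf.intercept_, is_target.mean())
```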
Physical pretreatment of biogenic-rich trommel fines for fast pyrolysis.
Eke, Joseph; Onwudili, Jude A; Bridgwater, Anthony V
2017-12-01
Energy from Waste (EfW) technologies, such as fluidized bed fast pyrolysis, are beneficial for both energy generation and waste management. Such technologies, however, face significant challenges due to the heterogeneous nature, particularly the high ash contents, of some municipal solid waste types, e.g. trommel fines. A study of the physical/mechanical and thermal characteristics of these complex wastes is important for two main reasons: (a) to inform the design and operation of pyrolysis systems to handle the characteristics of such waste; (b) to control/modify the characteristics of the waste to fit existing EfW technologies via appropriate feedstock preparation methods. In this study, the preparation and detailed characterisation of a sample of biogenic-rich trommel fines has been carried out with a view to making the feedstock suitable for fast pyrolysis in an existing fluidized bed reactor. Results indicate that control of feed particle size was very important to prevent problems of dust entrainment in the fluidizing gas as well as feeder hardware problems caused by large stones and aggregates. After physical separation and size reduction, nearly 70 wt% of the trommel fines was obtained within the size range suitable for energy recovery using the existing fast pyrolysis system. This pyrolyzable fraction could account for about 83% of the energy content of the 'as received' trommel fines sample. Moreover, there were no significant differences in the thermochemical properties of the raw and pre-treated feedstocks, indicating that suitably prepared trommel fines samples can be used for energy recovery, with significant reduction in mass and volume of the original waste. Consequently, this can lead to more than 90% reduction in the present costs of disposal of trommel fines in landfills. In addition, the recovered plastics and textile materials could be used as refuse derived fuel. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pourchez, Jérémie; Forest, Valérie; Boumahdi, Najih; Boudard, Delphine; Tomatis, Maura; Fubini, Bice; Herlin-Boime, Nathalie; Leconte, Yann; Guilhot, Bernard; Cottier, Michèle; Grosseau, Philippe
2012-10-01
Silicon carbide is an extremely hard, wear-resistant, and thermally stable material with particular photoluminescence and interesting biocompatibility properties. For this reason, it is widely employed in industrial applications such as ceramics. More recently, nano-sized SiC particles have been expected to extend its use to several fields such as composite supports, power electronics, and biomaterials. However, their large-scale development is restricted by the potential toxicity of nanoparticles related to their manipulation and inhalation. This study aimed to synthesize (by laser pyrolysis or sol-gel methods) six samples of SiC nanopowders, characterize their physico-chemical properties, and then determine their in vitro biological impacts. Using a macrophage cell line, toxicity was assessed in terms of cell membrane damage (LDH release), inflammatory effect (TNF-α production), and oxidative stress (reactive oxygen species generation). None of the six samples showed cytotoxicity, while remarkable pro-oxidative reactions and inflammatory responses were recorded, whose intensity appears related to the physico-chemical features of the nano-sized SiC particles. In vitro data clearly showed an impact of the extent of nanoparticle surface area and the nature of the crystalline phases (α-SiC vs. β-SiC) on TNF-α production, a role of surface iron in free radical release, and of the oxidation state of the surface in cellular H2O2 production.
ERIC Educational Resources Information Center
Hamed, Kastro
2008-01-01
To address the confusion resulting from difficulties with proportional reasoning among preservice physical science students, a cube-assembly activity was used to bring a sense of concreteness to abstract ideas. The activity took students from the concrete step of assembling cubes of various sizes and directly measuring their properties to slightly…
ERIC Educational Resources Information Center
Snodgrass, Suzanne
2011-01-01
Health professionals use critical thinking, a key problem solving skill, for clinical reasoning which is defined as the use of knowledge and reflective inquiry to diagnose a clinical problem. Teaching these skills in traditional settings with growing class sizes is challenging, and students increasingly expect learning that is flexible and…
What Is a Reasonable Answer? Ways for Students to Investigate and Develop Their Number Sense
ERIC Educational Resources Information Center
Muir, Tracey
2012-01-01
Although number sense is difficult to define, it involves having a good intuition about numbers and their relationships, including the ability to have a "feel" for the relative size of numbers and to make reasonable estimations. Students with good number sense typically recognise the relative magnitude of numbers, appreciate the effect…
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sample sizes sufficient for screening and diagnostic studies. Although formulae for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation may not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulae for sensitivity and specificity tests using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power, and effect size. Approaches to using the tables are also discussed. PMID:27891446
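For readers who prefer a formula to a lookup table, the sketch below implements the standard normal-approximation (Buderer-type) calculation, inflating the required number of diseased (or healthy) subjects by the expected prevalence. The precision d, prevalence, and target values are illustrative assumptions.

```python
# Normal-approximation sample size for estimating sensitivity/specificity
# within +/- d at confidence 1 - alpha (Buderer-type formula). Illustrative.
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(se, prev, d=0.05, alpha=0.05):
    """Total subjects so that sensitivity is estimated within +/- d."""
    z = norm.ppf(1 - alpha / 2)
    n_diseased = z**2 * se * (1 - se) / d**2
    return ceil(n_diseased / prev)          # inflate for disease prevalence

def n_for_specificity(sp, prev, d=0.05, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    n_healthy = z**2 * sp * (1 - sp) / d**2
    return ceil(n_healthy / (1 - prev))

print(n_for_sensitivity(0.90, prev=0.10))   # ~1383 subjects in this example
print(n_for_specificity(0.85, prev=0.10))   # ~218 subjects in this example
```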
Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M
2018-06-01
Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
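The calculations the review attempted to replicate are, for a continuous outcome, instances of the standard two-arm formula sketched below; the effect sizes and SD in the example are hypothetical, chosen to show how an optimistic assumed effect shrinks the apparent sample size requirement.

```python
# Standard two-arm sample size for a continuous outcome (two-sided alpha,
# equal allocation). Numbers are illustrative, not from the reviewed RCTs.
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Participants per arm to detect a mean difference `delta`."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(2 * (z_a + z_b) ** 2 * (sd / delta) ** 2)

# An assumed effect of 10 units (SD 20) needs far fewer participants than
# a more modest effect of 5 units: optimism shrinks the planned trial.
print(n_per_group(delta=10, sd=20))   # 63 per arm
print(n_per_group(delta=5, sd=20))    # 252 per arm
```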
ERIC Educational Resources Information Center
Morren, Mattijn; Muris, Peter; Kindt, Merel; Schouten, Erik; van den Hout, Marcel
2008-01-01
Emotional and parent-based reasoning refer to the tendency to rely on personal or parental anxiety response information rather than on objective danger information when estimating the dangerousness of a situation. This study investigated the prospective relationships of emotional and parent-based reasoning with anxiety symptoms in a sample of…
ERIC Educational Resources Information Center
Rich, John D., Jr.; Fullard, William; Overton, Willis
2011-01-01
One hundred and twelve Latino students from Philadelphia participated in this study, which examined the development of deductive reasoning across adolescence, and the relation of reasoning to test anxiety and standardized test scores. As predicted, 11th and 9th graders demonstrated significantly more advanced reasoning than 7th graders.…
Male circumcision decreases penile sensitivity as measured in a large cohort.
Bronselaer, Guy A; Schober, Justine M; Meyer-Bahlburg, Heino F L; T'Sjoen, Guy; Vlietinck, Robert; Hoebeke, Piet B
2013-05-01
WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: The sensitivity of the foreskin and its importance in erogenous sensitivity is widely debated and controversial. This is part of the current public debate on circumcision for non-medical reasons. Some studies on the effect of circumcision on sexual function are available today; however, they vary widely in outcome. The present study shows, in a large cohort of men and based on self-assessment, that the foreskin has erogenous sensitivity. It is shown that the foreskin is more sensitive than the uncircumcised glans mucosa, which means that after circumcision genital sensitivity is lost. In the debate on clitoral surgery, the proven loss of sensitivity has been the strongest argument for changing medical practice. The present study provides strong evidence on the erogenous sensitivity of the foreskin. This knowledge can hopefully help doctors and patients in their decisions on circumcision for non-medical reasons. To test the hypothesis that sensitivity of the foreskin is a substantial part of male penile sensitivity. To determine the effects of male circumcision on penile sensitivity in a large sample. The study aimed at a sample size of ≈1000 men. Given the intimate nature of the questions and the intended large sample size, the authors decided to create an online survey. Respondents were recruited by means of leaflets and advertising. The analysis sample consisted of 1059 uncircumcised and 310 circumcised men. For the glans penis, circumcised men reported decreased sexual pleasure and lower orgasm intensity. They also stated more effort was required to achieve orgasm, and a higher percentage of them experienced unusual sensations (burning, prickling, itching, or tingling and numbness of the glans penis). For the penile shaft, a higher percentage of circumcised men described discomfort and pain, numbness and unusual sensations. In comparison to men circumcised before puberty, men circumcised during adolescence or later indicated less sexual pleasure at the glans penis, and a higher percentage of them reported discomfort or pain and unusual sensations at the penile shaft. This study confirms the importance of the foreskin for penile sensitivity, overall sexual satisfaction, and penile functioning. Furthermore, this study shows that a higher percentage of circumcised men experience discomfort or pain and unusual sensations as compared with the uncircumcised population. Before circumcision without medical indication, adult men, and parents considering circumcision of their sons, should be informed of the importance of the foreskin in male sexuality. © 2013 BJU International.
Modelling Furrow Irrigation-Induced Erosion on a Sandy Loam Soil in Samaru, Northern Nigeria
Dibal, Jibrin M.; Igbadun, H. E.; Ramalan, A. A.; Mudiare, O. J.
2014-01-01
Assessment of soil erosion and sediment yield in furrow irrigation is limited in Samaru-Zaria. Data were collected in 2009 and 2010 and used to develop a dimensionless model for predicting furrow irrigation-induced erosion (FIIE) using the dimensional analysis approach, considering stream size, furrow length, furrow width, soil infiltration rate, hydraulic shear stress, soil erodibility, and time of water flow in the furrows as the building components. One-liter water-sediment samples were collected from the furrows during irrigations, from which sediment concentrations and soil erosion per furrow were calculated. Stream sizes Q (2.5, 1.5, and 0.5 l/s), furrow lengths X (90 and 45 m), and furrow widths W (0.75 and 0.9 m) constituted the experimental factors, randomized in a split-plot design with four replications. Water flow into and out of the furrows was measured using cutthroat flumes. The model produced reasonable predictions relative to field measurements, with a coefficient of determination R² of about 0.8, a model prediction efficiency NSE of 0.7000, a high index of agreement (0.9408), and a low coefficient of variability (0.4121). The model is most sensitive to water stream size. The variables in the model are easily measurable, which makes it readily adoptable. PMID:27471748
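The three fit statistics quoted (R², Nash-Sutcliffe efficiency NSE, and the index of agreement, here assumed to be Willmott's d) can be computed directly from paired observed and predicted values; the sketch below uses hypothetical numbers, not the paper's data.

```python
# Goodness-of-fit statistics: R^2, Nash-Sutcliffe efficiency (NSE), and
# Willmott's index of agreement d. Erosion values are placeholders.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def willmott_d(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    num = np.sum((obs - sim) ** 2)
    den = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

obs = np.array([0.8, 1.4, 2.1, 3.0, 4.2])   # measured erosion (hypothetical)
sim = np.array([0.9, 1.3, 2.4, 2.8, 4.0])   # model predictions (hypothetical)
r2 = np.corrcoef(obs, sim)[0, 1] ** 2
print(f"R^2 = {r2:.3f}, NSE = {nse(obs, sim):.3f}, d = {willmott_d(obs, sim):.3f}")
```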
Ripple, Dean C; Montgomery, Christopher B; Hu, Zhishang
2015-02-01
Accurate counting and sizing of protein particles has been limited by discrepancies of counts obtained by different methods. To understand the bias and repeatability of techniques in common use in the biopharmaceutical community, the National Institute of Standards and Technology has conducted an interlaboratory comparison for sizing and counting subvisible particles from 1 to 25 μm. Twenty-three laboratories from industry, government, and academic institutions participated. The circulated samples consisted of a polydisperse suspension of abraded ethylene tetrafluoroethylene particles, which closely mimic the optical contrast and morphology of protein particles. For restricted data sets, agreement between data sets was reasonably good: relative standard deviations (RSDs) of approximately 25% for light obscuration counts with lower diameter limits from 1 to 5 μm, and approximately 30% for flow imaging with specified manufacturer and instrument setting. RSDs of the reported counts for unrestricted data sets were approximately 50% for both light obscuration and flow imaging. Differences between instrument manufacturers were not statistically significant for light obscuration but were significant for flow imaging. We also report a method for accounting for differences in the reported diameter for flow imaging and electrical sensing zone techniques; the method worked well for diameters greater than 15 μm. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Development and analysis of a finite element model to simulate pulmonary emphysema in CT imaging.
Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo
2015-01-01
In CT imaging, pulmonary emphysema appears as lung regions with Low-Attenuation Areas (LAA). In this study we propose a finite element (FE) model of lung parenchyma, based on a 2-D grid of beam elements, which simulates smoking-related pulmonary emphysema in CT imaging. Simulated LAA images were generated through spatial sampling of the model output. We employed two measurements of emphysema extent: Relative Area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes. The model was used to compare RA and D computed on the simulated LAA images with those computed on the model's output. Different mesh element sizes and various model parameters, simulating different physiological/pathological conditions, were considered and analyzed. A proper mesh element size was determined as the best trade-off between reliable results and reasonable computational cost. Both RA and D computed on simulated LAA images were underestimated with respect to those calculated on the model's output. The underestimation was larger for RA (≈ −44% to −26%) than for D (≈ −16% to −2%). Our FE model could be useful for generating standard test images and for designing realistic physical phantoms of LAA images to assess the accuracy of descriptors for quantifying emphysema in CT imaging.
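Both descriptors are straightforward to compute from a thresholded CT slice: RA is the fraction of lung pixels below a low-attenuation cutoff, and D is the log-log slope of the cumulative LAA cluster-size distribution. The sketch below uses a synthetic image and an assumed −950 HU threshold; neither reproduces the paper's model.

```python
# Relative Area (RA) and cluster-size exponent D from a thresholded image.
# The synthetic "CT slice" and the -950 HU cutoff are assumptions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
hu = rng.normal(-870, 60, size=(256, 256))   # synthetic lung CT slice (HU)
laa = hu < -950                              # low-attenuation mask

ra = laa.mean() * 100                        # RA as a percentage
labels, n_clusters = ndimage.label(laa)      # connected LAA clusters
sizes = np.bincount(labels.ravel())[1:]      # cluster sizes in pixels

# Cumulative distribution Y(s) = fraction of clusters of size >= s;
# D is minus the slope of log Y versus log s.
s = np.sort(sizes)
y = 1.0 - np.arange(len(s)) / len(s)
mask = (s > 1) & (y > 0)
D = -np.polyfit(np.log(s[mask]), np.log(y[mask]), 1)[0]
print(f"RA = {ra:.1f}%, D = {D:.2f} ({n_clusters} clusters)")
```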
Size resolved fog water chemistry and its atmospheric implications
NASA Astrophysics Data System (ADS)
Chakraborty, Abhishek; Gupta, Tarun; Tripathi, Sachchida; Ervens, Barbara; Bhattu, Deepika
2015-04-01
Fog is a natural meteorological phenomenon that occurs throughout the world. It usually contains a substantial quantity of liquid water and results in severe visibility reduction, leading to disruption of normal life. Fog is generally seen as a natural cleansing agent, but it also has the potential to form Secondary Organic Aerosol (SOA) via aqueous processing of ambient aerosols. Size-resolved fog water chemistry for inorganics was reported in previous studies, but the processing of organics inside fog water and the quantification of aqSOA remained a challenge. To assess organics processing via fog aqueous chemistry, size-resolved fog water samples were collected in two consecutive winter seasons (2012-13, 2013-14) at Kanpur, a heavily polluted urban area of India. A Caltech 3-stage fog collector was used to collect the fog droplets in 3 size fractions: coarse (droplet diameter > 22 µm), medium (22 > droplet diameter > 16 µm) and fine (16 > droplet diameter > 4 µm). Collected samples were atomized into various instruments such as an Aerosol Mass Spectrometer (AMS), a Cloud Condensation Nucleus Counter (CCNc), a Total Organic Carbon (TOC) analyzer and a thermodenuder (TD) for the physico-chemical characterization of soluble constituents. Fine droplets were found to be more enriched in aerosol species and, interestingly, contained more aged and less volatile organics than the coarser sizes. Organics inside fine droplets had an average O/C = 0.87, compared to O/C values of 0.67 and 0.74 for coarse and medium droplets. Metal chemistry and the higher residence time of fine droplets seem to be the two most likely reasons for this outcome, as the results of comprehensive modeling carried out on the observed data indicate. CCN activities of the aerosols from fine droplets were also much higher than those of coarse or medium droplets. Fine droplets also contained light-absorbing material, as was obvious from their 'yellowish' solution. Source apportionment of fog water organics via PMF (Positive Matrix Factorization) revealed the presence of some very highly oxidized OA inside the fog water samples. From the PMF results, a method for aqSOA estimation was developed, and aqSOA was found to contribute substantially to total SOA. These findings indicate that light fog with a large number of fine droplets can process ambient aerosols more efficiently than very dense fog with larger droplets, where scavenging becomes more important. They also highlight the need to incorporate size-resolved fog chemistry, along with metal chemistry, into global models for accurately predicting aqSOA formation and its contribution to total organic aerosol loading.
Value for money? A contingent valuation study of the optimal size of the Swedish health care budget.
Eckerlund, I; Johannesson, M; Johansson, P O; Tambour, M; Zethraeus, N
1995-11-01
The contingent valuation method was developed in the environmental field to measure willingness to pay for environmental changes using survey methods. In this exploratory study, the contingent valuation method was used to analyse how much individuals are willing to spend in total, in the form of taxes, on health care in Sweden, i.e. to analyse the optimal size of the 'health care budget' in Sweden. A binary contingent valuation question was included in a telephone survey of a random sample of 1260 households in Sweden. With a conservative interpretation of the data, the results show that 50% of the respondents would accept an increased tax payment to health care of about SEK 60 per month ($1 = SEK 8). It is concluded that the population overall thinks that current spending on health care in Sweden is at a reasonable level. There seems to be a willingness to increase tax payments somewhat, but major increases do not seem acceptable to a majority of the population.
NASA Astrophysics Data System (ADS)
Fischetti, Massimo V.; Vandenberghe, William G.
2016-04-01
We show that the electron mobility in ideal, free-standing two-dimensional "buckled" crystals with broken horizontal mirror (σh) symmetry and Dirac-like dispersion (such as silicene and germanene) is dramatically affected by scattering with the acoustic flexural modes (ZA phonons). This is caused both by the broken σh symmetry and by the diverging number of long-wavelength ZA phonons, consistent with the Mermin-Wagner theorem. Non-σh-symmetric, "gapped" 2D crystals (such as semiconducting transition-metal dichalcogenides with a tetragonal crystal structure) are affected less severely by the broken σh symmetry, but equally seriously by the large population of the acoustic flexural modes. We speculate that reasonable long-wavelength cutoffs needed to stabilize the structure (finite sample size, grain size, wrinkles, defects) or the anharmonic coupling between flexural and in-plane acoustic modes (shown to be effective in mirror-symmetric crystals, like free-standing graphene) may not be sufficient to raise the electron mobility to satisfactory values. Additional effects (such as clamping and phonon stiffening by the substrate and/or gate insulator) may be required.
Strength statistics of single crystals and metallic glasses under small stressed volumes
Gao, Yanfei; Bei, Hongbin
2016-05-13
It has been well documented that plastic deformation of crystalline and amorphous metals/alloys shows a general trend of "smaller is stronger". The majority of experimental and modeling studies along this line have focused on finding and explaining the scaling slope or exponent in the logarithmic plot of strength versus size. In contrast to this view, here we show that the universal picture should be the thermally activated nucleation mechanisms in small stressed volumes, the stochastic, weakest-link behavior at intermediate stressed-volume sizes, and the convolution of these two mechanisms with respect to variables such as indenter radius in nanoindentation pop-in, crystallographic orientation, pre-strain level, sample length as in uniaxial tests, and others. Furthermore, experiments that cover the entire spectrum of length scales, and a unified model that treats both thermal activation and spatial stochasticity, have opened new perspectives for understanding and correlating the strength statistics across a vast range of observations in nanoindentation, micro-pillar compression, and fiber/whisker tension tests of single crystals and metallic glasses.
NASA Astrophysics Data System (ADS)
Dixit, Saurabh; Singhal, Sonal; Vankar, V. D.; Shukla, A. K.
2017-10-01
In this article, a size-dependent correlation of acoustic states is established for the radial breathing mode (RBM). Single-walled carbon nanotubes (SWCNTs) are synthesized along with carbon-encapsulated iron nanoparticles by pulsed laser deposition at room temperature. Ferrocene is used as a catalyst for the growth of SWCNTs. Various studies such as HR-TEM, X-ray diffraction (XRD), Raman spectroscopy and NIR absorption spectroscopy are used to confirm the presence of SWCNTs in the as-synthesized and purified samples. The RBM of SWCNTs can be distinguished from the Raman modes of carbon-encapsulated iron nanoparticles by comparing their line shape asymmetry as well as oscillator strength. Furthermore, a quantum confinement model is proposed for the RBM: it is argued that the RBM is a manifestation of quantum confinement of acoustic phonons. The well-known analytical relation for the RBM is used to explore the nature of the phonons responsible for it on the basis of the quantum confinement model. Diameters of SWCNTs estimated from the Raman studies are found to be in reasonably good agreement with those from the NIR absorption studies.
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes of around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
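The study's design can be mimicked with a short Monte Carlo experiment: draw exposed and unexposed samples, apply the two-sample t-test at p = 5%, and tabulate false positives (Type I) and missed effects (Type II) by sample size. The normal model, the 0.8 SD "weak" effect, and the replication count below are illustrative assumptions, not the paper's exact protocol.

```python
# Monte Carlo sketch: Type I and Type II error rates of the two-sample
# t-test (p = 5%) as a function of sample size. Assumptions are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
reps, effect = 10_000, 0.8

for n in (3, 5, 6, 9):
    type1 = type2 = 0
    for _ in range(reps):
        a = rng.normal(0, 1, n)                 # control sample
        b_null = rng.normal(0, 1, n)            # exposure with no effect
        b_alt = rng.normal(effect, 1, n)        # exposure with a true effect
        type1 += stats.ttest_ind(a, b_null).pvalue < 0.05   # false positive
        type2 += stats.ttest_ind(a, b_alt).pvalue >= 0.05   # missed effect
    print(f"n = {n}: Type I = {type1 / reps:.3f}, Type II = {type2 / reps:.3f}")
```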
Strong smoker interest in 'setting an example to children' by quitting: national survey data.
Thomson, George; Wilson, Nick; Weerasekera, Deepa; Edwards, Richard
2011-02-01
To further explore smoker views on reasons to quit. As part of the multi-country ITC Project, a national sample of 1,376 New Zealand adult (18+ years) smokers was surveyed in 2007/08. This sample included boosted sampling of Māori, Pacific and Asian New Zealanders. 'Setting an example to children' was given as 'very much' a reason to quit by 51%, compared to 45% giving personal health concerns. However, the 'very much' and 'somewhat' responses (combined) were greater for personal health (81%) than 'setting an example to children' (74%). Price was the third ranked reason (67%). In a multivariate analysis, women were significantly more likely to state that 'setting an example to children' was 'very much' or 'somewhat' a reason to quit; as were Māori, or Pacific compared to European; and those suffering financial stress. The relatively high importance of 'example to children' as a reason to quit is an unusual finding, and may have arisen as a result of social marketing campaigns encouraging cessation to protect families in New Zealand. The policy implications could include a need for a greater emphasis on social reasons (e.g. 'example to children'), in pack warnings, and in social marketing for smoking cessation. © 2011 The Authors. ANZJPH © 2010 Public Health Association of Australia.
Dong, Ting; Durning, Steven J; Artino, Anthony R; van der Vleuten, Cees; Holmboe, Eric; Lipner, Rebecca; Schuwirth, Lambert
2015-04-01
Clinical reasoning is essential for the practice of medicine. Dual process theory conceptualizes reasoning as falling into two general categories: nonanalytic reasoning (pattern recognition) and analytic reasoning (active comparing and contrasting of alternatives). The debate continues regarding how expert performance develops and how individuals make the best use of analytic and nonanalytic processes. Several investigators have identified the unexpected finding that intermediates tend to perform better on licensing examination items than experts, which has been termed the "intermediate effect." We explored differences between faculty and residents on multiple-choice questions (MCQs) using dual process measures (both reading and answering times) to inform this ongoing debate. Faculty (board-certified internists; experts) and residents (internal medicine interns; intermediates) answered live licensing examination MCQs (U.S. Medical Licensing Examination Step 2 Clinical Knowledge and American Board of Internal Medicine Certifying Examination) while being timed. We conducted repeated analyses of variance to compare the two groups on average reading time, answering time, and accuracy on various types of items. Faculty and residents did not differ significantly in reading time [F(1,35) = 0.01, p = 0.93], answering time [F(1,35) = 0.60, p = 0.44], or accuracy [F(1,35) = 0.24, p = 0.63], regardless of whether items were easy or hard. Dual process theory was not supported in this study. However, the lack of difference between faculty and residents may have been affected by the small sample size, and MCQs may not reflect how physicians make decisions in actual practice settings. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.
Sepúlveda, Nuno; Drakeley, Chris
2015-04-03
In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and the seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference for parasite rates, not for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first consists of transforming the confidence interval for SP into the corresponding one for SCR, given a known seroreversion rate (SRR). The second extends the first to the most common situation, where SRR is unknown. In this situation, data simulation was used together with linear regression to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤ 250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). Correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision; the opposite holds for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes are increased relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
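The core of the first calculator can be sketched as follows: size the survey for a target precision in SP using the normal approximation, then map SP to SCR (λ) by inverting the reverse catalytic model SP(a) = λ/(λ+ρ)·(1 − e^{−(λ+ρ)a}) at a representative age, assuming a known seroreversion rate ρ. All numbers below are illustrative, and this is only a simplified rendering of the paper's idea.

```python
# Sketch: sample size for a target SP precision, then SP -> SCR via the
# reverse catalytic model with known seroreversion rate rho. Illustrative.
from math import ceil, exp
from scipy.optimize import brentq
from scipy.stats import norm

def n_for_proportion(p, d=0.03, alpha=0.05):
    """Normal-approximation sample size to estimate SP within +/- d."""
    z = norm.ppf(1 - alpha / 2)
    return ceil(z**2 * p * (1 - p) / d**2)

def scr_from_sp(sp, age, rho):
    """Invert SP(a) = lam/(lam+rho) * (1 - exp(-(lam+rho)*a)) for lam."""
    f = lambda lam: lam / (lam + rho) * (1 - exp(-(lam + rho) * age)) - sp
    return brentq(f, 1e-8, 10.0)

n = n_for_proportion(0.30)                  # survey sized for SP = 30% +/- 3%
lam = scr_from_sp(0.30, age=20, rho=0.01)   # corresponding SCR at age 20
print(n, round(lam, 4))
```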
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
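As a minimal illustration of two of the estimators compared above, the sketch below draws a simple random sample without replacement from a synthetic, clumped population and contrasts the simple expansion estimator with the ratio estimator that uses sampling-unit area as the auxiliary variable. The population values are invented, not the pronghorn data.

```python
# Expansion vs. ratio estimation of a population total under SRS without
# replacement, with unit area as the auxiliary variable. Synthetic data.
import numpy as np

rng = np.random.default_rng(7)
N = 200                                            # sampling units in the area
area = rng.uniform(5.0, 15.0, N)                   # unit areas (km^2, assumed)
counts = rng.poisson(0.2, N) * rng.poisson(8, N)   # clumped animal counts
total_area = area.sum()

n = 40                                             # ~20% sampling intensity
idx = rng.choice(N, size=n, replace=False)         # SRS without replacement
y, x = counts[idx], area[idx]

simple_est = N * y.mean()                          # expansion (simple) estimator
ratio_est = total_area * y.sum() / x.sum()         # ratio estimator
print(counts.sum(), round(simple_est), round(ratio_est))
```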
Code of Federal Regulations, 2012 CFR
2012-01-01
... than the type size of the principal text on the same page, but in no event smaller than 12 point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (B) On the front side of the...
Code of Federal Regulations, 2014 CFR
2014-01-01
... than the type size of the principal text on the same page, but in no event smaller than 12 point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (B) On the front side of the...
Code of Federal Regulations, 2013 CFR
2013-01-01
... than the type size of the principal text on the same page, but in no event smaller than 12 point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (B) On the front side of the...
Phuong, Nam Ngoc; Zalouk-Vergnoux, Aurore; Poirier, Laurence; Kamari, Abderrahmane; Châtel, Amélie; Mouneyrac, Catherine; Lagarde, Fabienne
2016-04-01
The ubiquitous presence and persistence of microplastics (MPs) in aquatic environments are of particular concern, since they represent an increasing threat to marine organisms and ecosystems. Great differences in concentrations and/or quantities in field samples have been observed depending on geographical location around the world. The main types reported have been polyethylene, polypropylene, and polystyrene. The presence of MPs in marine wildlife has been shown in many studies focusing on ingestion and accumulation in different tissues, whereas studies of the biological effects of MPs in the field are scarce. Where the nature and abundance/concentrations of MPs have not been systematically determined in field samples, this is because the identification of MPs from environmental samples requires mastery and execution of several steps and techniques. For this reason, and due to differences in sampling techniques and sample preparation, it remains difficult to compare the published studies. Most laboratory experiments have been performed with MP concentrations of a higher order of magnitude than those found in the field. Consequently, the ingestion and associated effects observed in exposed organisms have corresponded to great contaminant stress, which does not mimic the natural environment. Experimental media are contaminated with only one type of polymer of a precise size and homogeneous shape, whereas the MPs present in the field are known to be a mix of many types, sizes and shapes of plastic. Moreover, MPs in marine environments can be colonized by organisms and constitute a sorption support for many organic compounds present in the environment, conditions that are not easily reproduced in the laboratory. Determination of the mechanical and chemical effects of MPs on organisms is still a challenging area of research. Among the potential chemical effects, it is necessary to differentiate those related to polymer properties from those due to the sorption/desorption of organic compounds. Copyright © 2015 Elsevier Ltd. All rights reserved.
Burnt clay magnetic properties and palaeointensity determination
NASA Astrophysics Data System (ADS)
Avramova, Mariya; Lesigyarski, Deyan
2014-05-01
Burnt clay structures found in situ are the most valuable materials for archaeomagnetic studies. From these materials the full geomagnetic field vector, described by inclination, declination and intensity, can be retrieved. The reliability of the obtained directional results is related to the precision of sample orientation and the accuracy of characteristic remanence determination. Palaeointensity evaluations depend on much more complex factors: the stability of the carried remanent magnetization, the grain-size distribution of magnetic particles and mineralogical transformations during heating. In the last decades many efforts have been made to shed light on the reasons for the poor success rate of palaeointensity experiments. Nevertheless, explaining bad archaeointensity results by the magnetic properties of the studied materials is sometimes quite unsatisfactory. In order to show how difficult it is to apply a priori strict criteria for the suitability of a given collection of archaeomagnetic materials, artificial samples formed from four different baked clays were examined. Two of the examined clay types were taken from clay deposits in different parts of Bulgaria, and two clays were taken from ancient archaeological baked clay structures from the central part of Bulgaria and the Black Sea coast, respectively. The samples formed from these clays were repeatedly heated in a known magnetic field to 700°C. Different analyses were performed to obtain information about the mineralogical content and magnetic properties of the samples. The results indicate that all clays reached stable magnetic mineralogy after the repeated heating to 700°C, that the main magnetic mineral is of titano/magnetite type, and that the magnetic particles predominantly have pseudo-single-domain grain sizes. Although the magnetic properties of the studied clays seem very similar, reliable palaeointensity results were obtained only from the clays coming from clay deposits; the palaeointensity experiments on the samples formed from the ancient baked clays completely failed to give reliable results.
NASA Astrophysics Data System (ADS)
Remsen, Andrew; Hopkins, Thomas L.; Samson, Scott
2004-01-01
Zooplankton and suspended particles were sampled in the upper 100 m of the Gulf of Mexico with the High Resolution Sampler. This towed platform can concurrently sample zooplankton with plankton nets, an Optical Plankton Counter (OPC) and the Shadowed Image Particle Profiling and Evaluation Recorder (SIPPER), a zooplankton imaging system. This allowed for direct comparison of mesozooplankton abundance, biomass, taxonomic composition and size distributions between simultaneously collected net samples, OPC data, and digital imagery. While the net data were numerically and taxonomically similar to those of previous studies in the region, analysis of the SIPPER imagery revealed that nets significantly underestimated larvacean, doliolid, protoctist and cnidarian/ctenophore abundance by 300%, 379%, 522% and 1200%, respectively. The inefficiency of the nets in sampling the fragile and gelatinous zooplankton groups led to a dry-weight biomass estimate less than half that of the SIPPER total and suggests that this component of the zooplankton assemblage is more important than previously determined for this region. Additionally, using the SIPPER data we determined that more than 29% of all mesozooplankton-sized particles occurred within 4 mm of another particle and therefore would not be separately counted by the OPC. This suggests that coincident counting is a major problem for the OPC even at the low zooplankton abundances encountered in low-latitude oligotrophic systems like the Gulf. Furthermore, we found that the colonial cyanobacterium Trichodesmium was the most abundant recognizable organism in the SIPPER dataset, while it was difficult to quantify with the nets. For these reasons, the traditional method of using net samples to ground truth OPC data would not be adequate for characterizing the particle assemblage described here. Consequently we suggest that in situ imaging sensors be included in any comprehensive study of mesozooplankton.
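The coincidence-counting check reported above (particles within 4 mm of a neighbor) amounts to a fixed-radius neighbor search over particle positions. The sketch below does this with a k-d tree on synthetic coordinates; the sample volume and particle count are assumptions, so the printed fraction will not reproduce the 29% figure.

```python
# Fraction of particles with a neighbor within 4 mm, via a fixed-radius
# k-d tree query. Particle positions and density are invented.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(11)
pos = rng.uniform(0, 100, size=(2000, 3))   # positions in a 100 mm cube (assumed)

tree = cKDTree(pos)
pairs = tree.query_pairs(r=4.0)             # all pairs closer than 4 mm
close = {i for pair in pairs for i in pair}
print(f"{100 * len(close) / len(pos):.1f}% of particles within 4 mm of another")
```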
NASA Astrophysics Data System (ADS)
Vandendriessche, Sofie; Messiaen, Marlies; O'Flynn, Sarah; Vincx, Magda; Degraer, Steven
2007-02-01
Floating seaweed is considered to be an important habitat for juvenile fishes due to the provision of food, shelter, a visual orientation point and passive transport. The importance of the highly dynamic seaweed clumps of the North Sea to juvenile neustonic fishes was investigated by analysing both neuston samples (without seaweed) and seaweed samples for fish community structure, length-frequency distributions, and the feeding habits of five associated fish species. While the neustonic fish community was mainly seasonally structured, the seaweed-associated fish community was more complex: the response of the associated fish species to environmental variables was species-specific and probably influenced by species interactions, resulting in a large multivariate distance between the samples dominated by Chelon labrosus and the samples dominated by Cyclopterus lumpus, Trachurus trachurus and Ciliata mustela. The results of the stomach analysis confirmed that C. lumpus is a weedpatch specialist that has a close spatial affinity with the seaweed and feeds intensively on the seaweed-associated invertebrate fauna. Similarly, C. mustela juveniles also fed on the seaweed fauna, but in a more opportunistic way. The shape of the size-frequency distribution suggested enhanced growth when associated with floating seaweed. Chelon labrosus and T. trachurus juveniles were generally large in seaweed samples, but large individuals were also encountered in the neuston. The proportion of associated invertebrate fauna in their diet was of minor importance compared to the proportions in C. lumpus. Individuals of Syngnathus rostellatus mainly fed on planktonic invertebrates but had a discontinuous size-frequency distribution, suggesting that some of the syngnathids were carried with the seaweed upon detachment and stayed associated. Floating seaweeds can therefore be regarded as ephemeral habitats shared by several fish species (mainly juveniles) that use them for different reasons and with varying intensity.
Human Rights-Based Approaches to Mental Health: A Review of Programs.
Porsdam Mann, Sebastian; Bradley, Valerie J; Sahakian, Barbara J
2016-06-01
The incidence of human rights violations in mental health care across nations has been described as a "global emergency" and an "unresolved global crisis." The relationship between mental health and human rights is complex and bidirectional. Human rights violations can negatively impact mental health. Conversely, respecting human rights can improve mental health. This article reviews cases where an explicitly human rights-based approach was used in mental health care settings. Although the included studies did not exhibit a high level of methodological rigor, the qualitative information obtained was considered useful and informative for future studies. All studies reviewed suggest that human rights-based approaches can lead to clinical improvements at relatively low costs. Human rights-based approaches should be utilized for legal and moral reasons, since human rights are fundamental pillars of justice and civilization. The fact that such approaches can contribute to positive therapeutic outcomes and, potentially, cost savings, is additional reason for their implementation. However, the small sample size and lack of controlled, quantitative measures limit the strength of conclusions drawn from included studies. More objective, high quality research is needed to ascertain the true extent of benefits to service users and providers. PMID:27781015
Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao
2014-01-01
The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions was developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for variogram modeling, to represent the variability, and another for improving spatial prediction, to evaluate remote sensing products. The reasonableness of the optimized EHWSN is validated with respect to representativeness, variogram modeling and spatial prediction accuracy using 15 types of simulated fields generated by unconditional geostatistical stochastic simulation. The sampling design shows good representativeness, and variograms estimated from the samples have less than 3% mean error relative to the true variograms. Fields are then predicted at multiple scales; as the scale increases, the estimated fields show higher similarity to the simulated fields at block sizes exceeding 240 m. These validations show that the hybrid sampling method is effective for both objectives when the characteristics of the target variable are unknown. PMID:25317762
Characteristics of Qualitative Descriptive Studies: A Systematic Review
Kim, Hyejin; Sefcik, Justine S.; Bradway, Christine
2016-01-01
Qualitative description (QD) is a term that is widely used to describe qualitative studies of health care and nursing-related phenomena. However, limited discussions regarding QD are found in the existing literature. In this systematic review, we identified characteristics of methods and findings reported in research articles published in 2014 whose authors identified the work as QD. After searching and screening, data were extracted from the sample of 55 QD articles and examined to characterize research objectives, design justification, theoretical/philosophical frameworks, sampling and sample size, data collection and sources, data analysis, and presentation of findings. Three primary findings were identified. First, despite inconsistencies, most articles included characteristics consistent with the limited available QD definitions and descriptions. Next, flexibility or variability of methods was common and desirable for obtaining rich data and achieving understanding of a phenomenon. Finally, justification for how a QD approach was chosen and why it would be an appropriate fit for a particular study was limited in the sample and, therefore, in need of increased attention. Based on these findings, we encourage researchers to provide as many details as possible regarding the methods of their QD studies so that readers can determine whether the methods used were reasonable and effective in producing useful findings. PMID:27686751
Sugar markers in aerosol particles from an agro-industrial region in Brazil
NASA Astrophysics Data System (ADS)
Urban, R. C.; Alves, C. A.; Allen, A. G.; Cardoso, A. A.; Queiroz, M. E. C.; Campos, M. L. A. M.
2014-06-01
This work aimed to better understand how aerosol particles from sugar cane burning contribute to the chemical composition of the lower troposphere in an agro-industrial region of São Paulo State (Brazil) affected by sugar and ethanol fuel production. During a period of 21 months, we collected 105 samples and quantified 20 saccharides by GC-MS. The average concentrations of levoglucosan (L), mannosan (M), and galactosan (G) for 24-h sampling were 116, 16, and 11 ng m⁻³, respectively. The three anhydrosugars had higher and more variable concentrations in the nighttime and during the sugar cane harvest period, due to more intense biomass burning practices. The calculated L/M ratio, which may serve as a signature for sugar cane smoke particles, was 9 ± 5. Although the total concentrations of the anhydrosugars varied greatly among samples, the relative mass size distributions of the saccharides were reasonably constant. Emissions due to biomass burning were estimated to correspond to 69% (mass) of the sugars quantified in the harvest samples, whereas biogenic emissions corresponded to 10%. In the non-harvest period, these values were 44 and 27%, respectively, indicating that biomass burning is an important source of aerosol to the regional atmosphere during the whole year.
Dong, Nan; Yang, Xiaohuan; Cai, Hongyan; Xu, Fengjiao
2017-01-01
Research on grid size suitability is important for improving the accuracy of gridded population distributions and helps reveal the actual spatial distribution of population. However, little research has been done in this area to date, and many well-modeled gridded population datasets are built at a single grid scale. If the grid cell size is not appropriate, spatial information is lost or data become redundant. Therefore, to capture the desired spatial variation of population within the area of interest, research on grid size suitability is necessary. This study identified three expression levels for analyzing grid size suitability: location, numeric information, and spatial relationship. It also set out the reasons for choosing five indexes to explore expression suitability: the consistency measure, shape index rate, standard deviation of population density, patch diversity index, and average local variance. The suitable grid size was determined by constructing grid size-indicator value curves and a suitable grid size scheme. Results revealed that all three expression levels are satisfied at the 10 m grid scale, and population distribution raster data with a 10 m grid size provide excellent accuracy without information loss. The 10 m grid size is therefore recommended as the appropriate scale for generating a high-quality gridded population distribution in our study area. This preliminary study indicates that the five indexes are mutually consistent and are reasonable and effective for assessing grid size suitability. We suggest using these five indexes, from the three perspectives of expression level, in future research on the grid size suitability of gridded population distributions. PMID:28122050
ERIC Educational Resources Information Center
Wang, Winnie W.; Chang, June C.; Lew, Jonathan W.
2009-01-01
This study examined how the academic aspirations of Asian Pacific Americans (APAs) attending community colleges are influenced by their demographic and educational background, reasons for attending, and obstacles they expect to encounter. The sample consisted of 846 APAs out of a total student sample of 5,000 in an urban community college…
Chai, Feng; Xu, Ling; Liao, Yun-mao; Chao, Yong-lie
2003-07-01
The fabrication of all-ceramic dental restorations is challenged by ceramics' relatively low flexural strength and intrinsically poor resistance to fracture. This paper investigated the relationships between powder-size gradation and the mechanical properties of a zirconia-toughened, glass-infiltrated nanometer-ceramic composite (Al(2)O(3)-nZrO(2)). Al(2)O(3)-nZrO(2) ceramic powder (W) was processed by a combination of chemical co-precipitation and ball milling, with addition of ZrO(2) of different powder sizes. Field-emission scanning electron microscopy was used to determine the particle size distribution and characterize the particle morphology of the powders. The matrix compacts were made by a slip-casting technique and sintered at 1,450 degrees C, and their flexural strength and fracture toughness were measured. (1) The particle distribution of the Al(2)O(3)-nZrO(2) ceramic powder ranged from 0.02 to 3.5 µm, with superfine particles accounting for almost 20%. (2) The ceramic matrix samples with added nZrO(2) (W) showed much higher flexural strength (115.434 +/- 5.319 MPa) and fracture toughness (2.04 +/- 0.10 MPa m(1/2)) than pure Al(2)O(3) ceramics (62.763 +/- 7.220 MPa; 1.16 +/- 0.02 MPa m(1/2)). The particle size of the added ZrO(2) may influence the mechanical properties of the Al(2)O(3)-nZrO(2) ceramic matrix. Good homogeneity and reasonable powder-size gradation of the ceramic powder can improve the mechanical properties of the material.
NASA Astrophysics Data System (ADS)
Vu, T. H. Y.; Ramjauny, Y.; Rizza, G.; Hayoun, M.
2016-01-01
We investigate the dissolution law of metallic nanoparticles (NPs) under sustained irradiation. The system is composed of isolated spherical gold NPs (4-100 nm) embedded in an amorphous silica host matrix. Samples are irradiated at room temperature in the nuclear stopping power regime with 4 MeV Au ions for fluences up to 8 × 10¹⁶ cm⁻². Experimentally, the dependence of the dissolution kinetics on the irradiation fluence is linear for large NPs (45-100 nm) and exponential for small NPs (4-25 nm). A lattice-based kinetic Monte Carlo (KMC) code, which includes atomic diffusion and ballistic displacement events, is used to simulate the dynamical competition between irradiation effects and thermal healing. The KMC simulations allow for a qualitative description of the NP dissolution in two main stages, in good agreement with the experiment. Moreover, the perfect correlation obtained between the evolution of the simulated flux of ejected atoms and the two-stage dissolution rate implies that NP size affects dissolution and that there is a critical size for the transition between the two stages. The Frost-Russell model, which provides an analytical solution for the dissolution rate, accounts well for the first dissolution stage but fails to reproduce the data for the second stage. An improved model, obtained by including a size-dependent recoil generation rate, fully describes the dissolution for any NP size. This proves, in particular, that the size effect on the generation rate is the principal reason for the existence of the two regimes. Finally, our results also demonstrate that a unidirectional approximation is justified for describing NP dissolution under irradiation, because the solute concentration is particularly low in metal-glass nanocomposites.
NASA Astrophysics Data System (ADS)
Griffin, Leslie Little
The purpose of this study was to determine the relationship of selected cognitive abilities and physical science misconceptions held by preservice elementary teachers. The cognitive abilities under investigation were: formal reasoning ability as measured by the Lawson Classroom Test of Formal Reasoning (Lawson, 1978); working memory capacity as measured by the Figural Intersection Test (Burtis & Pascual-Leone, 1974); verbal intelligence as measured by the Acorn National Academic Aptitude Test: Verbal Intelligence (Kobal, Wrightstone, & Kunze, 1944); and field dependence/independence as measured by the Group Embedded Figures Test (Witkin, Oltman, & Raskin, 1971). The number of physical science misconceptions held by preservice elementary teachers was measured by the Misconceptions in Science Questionnaire (Franklin, 1992). The data utilized in this investigation were obtained from 36 preservice elementary teachers enrolled in two sections of a science methods course at a small regional university in the southeastern United States. Multiple regression techniques were used to analyze the collected data. The following conclusions were reached after analysis of the data. The variables of formal reasoning ability and verbal intelligence were identified as having significant relationships, both individually and in combination, to the dependent variable of selected physical science misconceptions. Though the correlations were not high enough to yield strong predictors of physical science misconceptions or strong relationships, they were of sufficient magnitude to warrant further investigation. It is recommended that this study be replicated with a larger sample size. In addition, experimental research should be implemented to explore the relationships suggested in this study between the cognitive variables of formal reasoning ability and verbal intelligence and the dependent variable of selected physical science misconceptions. Further research should also focus on the detection of a broad range of science misconceptions among preservice elementary teachers.
Palacios, Julia A; Minin, Vladimir N
2013-03-01
Changes in population size influence genetic diversity of the population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as a glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating the conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human influenza A viruses. In both cases, we recover more of the generally accepted features of the viral demographic histories than the GMRF approach does. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method. Copyright © 2013, The International Biometric Society.
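The link between population size and coalescent times can be made concrete with a small simulation: with k active lineages and effective population size N(t), coalescences occur at instantaneous rate k(k-1)/(2N(t)). The Python sketch below generates coalescent times under a hypothetical piecewise-constant trajectory (the trajectory and all parameter values are illustrative, not from the paper); it exploits the memorylessness of the exponential distribution to redraw the waiting time whenever an epoch boundary is crossed.

```python
import math
import random

def pop_size(t, epochs):
    """Piecewise-constant N(t); epochs = [(start, N), ...] with start[0] = 0."""
    N = epochs[0][1]
    for start, size in epochs:
        if t >= start:
            N = size
    return N

def next_boundary(t, epochs):
    """Start time of the next epoch after time t (inf if none)."""
    for start, _ in epochs:
        if start > t:
            return start
    return math.inf

def simulate_coalescent(n_samples, epochs, seed=0):
    """Coalescent times (backwards in time) for n_samples lineages."""
    rng = random.Random(seed)
    times, t, k = [], 0.0, n_samples
    while k > 1:
        rate = k * (k - 1) / (2.0 * pop_size(t, epochs))
        wait = rng.expovariate(rate)
        boundary = next_boundary(t, epochs)
        if t + wait > boundary:
            t = boundary  # rate changes here; redraw in the next epoch
            continue
        t += wait
        times.append(t)
        k -= 1
    return times

# Hypothetical bottleneck: N = 1000 until time 5, then N = 100.
print(simulate_coalescent(10, [(0.0, 1000.0), (5.0, 100.0)]))
```

In this toy example, coalescences cluster rapidly once the small-N epoch is reached, which is exactly the signal that intensity-based estimators invert to recover N(t).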
Children's Concepts of the Shape and Size of the Earth, Sun and Moon
NASA Astrophysics Data System (ADS)
Bryce, T. G. K.; Blown, E. J.
2013-02-01
Children's understandings of the shape and relative sizes of the Earth, Sun and Moon have been extensively researched and in a variety of ways. Much is known about the confusions which arise as young people try to grasp ideas about the world and our neighbouring celestial bodies. Despite this, there remain uncertainties about the conceptual models which young people use and how they theorise in the process of acquiring more scientific conceptions. In this article, the relevant published research is reviewed critically and in depth in order to frame a series of investigations using semi-structured interviews carried out with 248 participants aged 3-18 years from China and New Zealand. Analysis of qualitative and quantitative data concerning the reasoning of these subjects (involving cognitive categorisations and their rank ordering) confirmed that (a) concepts of Earth shape and size are embedded in a 'super-concept' or 'Earth notion' embracing ideas of physical shape, 'ground' and 'sky', habitation of and identity with Earth; (b) conceptual development is similar in cultures where teachers hold a scientific world view and (c) children's concepts of shape and size of the Earth, Sun and Moon can be usefully explored within an ethnological approach using multi-media interviews combined with observational astronomy. For these young people, concepts of the shape and size of the Moon and Sun were closely correlated with their Earth notion concepts, and there were few differences between the cultures despite their contrasts. Statistical analyses used Kolmogorov-Smirnov two-sample tests, with hypotheses confirmed at the K-S alpha level of 0.05, and rank correlations (r_s) significant at p < 0.01.
Strength of Zerodur® for mirror applications
NASA Astrophysics Data System (ADS)
Béhar-Lafenêtre, S.; Cornillon, Laurence; Ait-Zaid, Sonia
2015-09-01
Zerodur® is a well-known glass-ceramic used for optical components because of its unequalled dimensional stability under thermal environments. In particular, it has been used for decades in Thales Alenia Space's optical payloads for space telescopes, especially for mirrors. The drawback of Zerodur®, however, is its rather low strength; the relatively small size of past mirrors made it unnecessary to investigate this aspect further, although elementary tests have always shown higher failure strengths. As the performance of space telescopes increases, the size of mirrors increases accordingly, and an optimization of the design is necessary, mainly for mass saving. The question of the effective strength of Zerodur® has therefore become a real issue. In 2014, under CNES funding, Thales Alenia Space investigated the application of the Weibull law and the associated size effects to Zerodur® through a thorough test campaign with a large number of samples (300) of various types. The purpose was to accurately determine the parameters of the Weibull law for Zerodur® machined in the same conditions as mirrors. This paper discusses the results obtained in the light of Weibull theory. The applicability of the 2-parameter and 3-parameter (with threshold strength) laws is compared. The expected size effect was not evidenced, so investigations are being conducted to determine the reasons for this result, from the quality of the test implementation to the data post-processing methodology. This test campaign has nevertheless already provided enough data to safely increase the allowable value for mirror sizing.
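The 2-parameter versus 3-parameter comparison described above can be sketched with standard tools. The snippet below, using synthetic strength data rather than the campaign's measurements, fits both laws with scipy (fixing the location at zero yields the 2-parameter law; freeing it estimates a threshold strength) and derives a high-survival allowable stress. All values are illustrative only, not Zerodur® properties.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic fracture strengths (MPa), for illustration only
strengths = stats.weibull_min.rvs(c=8, scale=60, size=300, random_state=rng)

# 2-parameter Weibull fit: threshold (location) fixed at zero
m2, _, s2 = stats.weibull_min.fit(strengths, floc=0)
# 3-parameter fit: threshold strength estimated from the data
m3, thr, s3 = stats.weibull_min.fit(strengths)

print(f"2-par: modulus m = {m2:.1f}, scale = {s2:.1f} MPa")
print(f"3-par: modulus m = {m3:.1f}, threshold = {thr:.1f} MPa, scale = {s3:.1f} MPa")

# Stress with 99% survival probability under the 2-parameter law:
# P_s = exp(-(sigma/s)^m)  =>  sigma = s * (-ln P_s)^(1/m)
print("99% survival stress:", s2 * (-np.log(0.99)) ** (1 / m2))
```

Under Weibull theory the size effect enters through the stressed volume (failure probability scales with V/V0 for the same stress), which is why the absence of an observed size effect in the campaign is notable.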
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, specific guidance for making sample size decisions is lacking. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
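The multiplier calculation itself is compact enough to sketch. Below is a minimal Python illustration of the estimate N = M/P with a delta-method confidence interval; the values of M, P, the survey size, and the design effect are hypothetical, and this is not necessarily the authors' exact variance procedure.

```python
import math

def multiplier_estimate(M, p_hat, n, design_effect=2.0, z=1.96):
    """Population size estimate N = M / P with a delta-method CI.

    M: count of unique objects distributed (assumed known and fixed)
    p_hat: proportion of survey respondents reporting receipt
    n: respondent-driven sampling survey size
    design_effect: inflation of the binomial variance due to RDS
    """
    N_hat = M / p_hat
    # Variance of p_hat under the survey design
    var_p = design_effect * p_hat * (1 - p_hat) / n
    # Delta method: Var(M/P) ~= (M / p^2)^2 * Var(P)
    se_N = (M / p_hat**2) * math.sqrt(var_p)
    return N_hat, (N_hat - z * se_N, N_hat + z * se_N)

# Hypothetical numbers: 1200 objects distributed, 30% of 500
# respondents report receiving one.
N_hat, ci = multiplier_estimate(M=1200, p_hat=0.30, n=500)
print(f"N = {N_hat:.0f}, 95% CI ({ci[0]:.0f}, {ci[1]:.0f})")
```

Rerunning with a smaller p_hat shows the interval widening rapidly, which echoes the abstract's advice to choose reference periods and object counts that push P upward.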
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account either by adjusting the total sample size using a designated design effect or by adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test; we use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment of mean cluster size alone or simultaneous adjustment of mean cluster size and number of clusters. Under some conditions the relative efficiency defined here is greater than the measure in the literature; under other conditions it may be smaller, in which case the relative efficiency would be underestimated. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter thus suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
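A widely used textbook approximation (distinct from the noncentrality-based relative efficiency proposed in this article) captures the same inflation of required sample size from variable cluster sizes: DE = 1 + ((1 + CV²)·m̄ − 1)·ICC, where m̄ is the mean cluster size and CV its coefficient of variation. A minimal sketch with illustrative numbers:

```python
def design_effect(mean_cluster_size, cv_cluster_size, icc):
    """Design effect for a cluster randomized trial with variable
    cluster sizes (standard approximation from the literature):
    DE = 1 + ((1 + CV^2) * m_bar - 1) * ICC
    """
    m_bar = mean_cluster_size
    return 1 + ((1 + cv_cluster_size**2) * m_bar - 1) * icc

# An individually randomized trial would need 200 per arm; clusters
# average 20 people with 40% size variation and ICC = 0.05.
n_individual = 200
de = design_effect(mean_cluster_size=20, cv_cluster_size=0.4, icc=0.05)
print(f"design effect = {de:.2f}, "
      f"cluster-trial n per arm = {n_individual * de:.0f}")
```

With these inputs the design effect is about 2.1, so roughly 420 participants per arm are needed, which is the kind of adjustment the noncentrality-based approach refines.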
Debray, Thomas P A; Moons, Karel G M; Riley, Richard D
2018-03-01
Small-study effects are a common threat in systematic reviews and may indicate publication bias. Their existence is often verified by visual inspection of the funnel plot. Formal tests to assess the presence of funnel plot asymmetry typically estimate the association between the reported effect size and its standard error, the total sample size, or the inverse of the total sample size. In this paper, we demonstrate that the application of these tests may be less appropriate in meta-analysis of survival data, where censoring influences the statistical significance of the hazard ratio. We subsequently propose 2 new tests that are based on the total number of observed events and adopt a multiplicative variance component. We compare the performance of the various funnel plot asymmetry tests in an extensive simulation study in which we varied the true hazard ratio (0.5 to 1), the number of published trials (N=10 to 100), the degree of censoring within trials (0% to 90%), and the mechanism leading to participant dropout (noninformative versus informative). Results demonstrate that previous well-known tests for detecting funnel plot asymmetry suffer from low power or excessive type-I error rates in meta-analysis of survival data, particularly when trials are affected by participant dropout. Because our novel test (adopting estimates of the asymptotic precision as study weights) yields reasonable power and maintains appropriate type-I error rates, we recommend its use to evaluate funnel plot asymmetry in meta-analysis of survival data. The use of funnel plot asymmetry tests should, however, be avoided when there are few trials available for any meta-analysis. © 2017 The Authors. Research Synthesis Methods Published by John Wiley & Sons, Ltd.
Paramonova, Ekaterina; Zerfoss, Erica L.; Logan, Bruce E.
2006-01-01
Point-of-use filters containing granular activated carbon (GAC) are an effective method for removing certain chemicals from water, but their ability to remove bacteria and viruses has been relatively untested. Collision efficiencies (α) were determined using clean-bed filtration theory for two bacteria (Raoultella terrigena 33257 and Escherichia coli 25922), a bacteriophage (MS2), and latex microspheres for four GAC samples. These GAC samples had particle size distributions that were bimodal, but only a single particle diameter can be used in the filtration equation. Therefore, consistent with previous reports, we used a particle diameter based on the smallest particles (derived from the projected areas of the smallest 10% of particles). The bacterial collision efficiencies calculated using the filtration model were high (0.8 ≤ α ≤ 4.9), indicating that GAC was an effective capture material. Collision efficiencies greater than unity reflect an underestimation of the collision frequency, likely as a result of particle roughness and wide GAC size distributions. The collision efficiencies for microspheres (0.7 ≤ α ≤ 3.5) were similar to those obtained for bacteria, suggesting that the microspheres were a reasonable surrogate for the bacteria. The bacteriophage collision efficiencies ranged from ≥0.2 to ≤0.4. The predicted levels of removal for 1-cm-thick carbon beds ranged from 0.8 to 3 log for the bacteria and from 0.3 to 1.0 log for the phage. These tests demonstrated that GAC can be an effective material for removal of bacteria and phage and that GAC particle size is a more important factor than relative stickiness for effective particle removal. PMID:16885264
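The clean-bed filtration calculation behind these α values can be sketched directly. Assuming the standard colloid filtration relation ln(C/C0) = −(3/2)·((1−ε)/d_c)·α·η₀·L, the Python snippet below inverts it for α and predicts log removal for a 1-cm bed; every parameter value is illustrative, not one of the paper's measurements.

```python
import math

def collision_efficiency(C_over_C0, d_c, L, porosity, eta0):
    """Invert the clean-bed filtration equation
    ln(C/C0) = -1.5 * (1 - eps) / d_c * alpha * eta0 * L
    to recover the collision efficiency alpha."""
    return (-2.0 * d_c * math.log(C_over_C0)
            / (3.0 * (1 - porosity) * eta0 * L))

def log_removal(alpha, d_c, L, porosity, eta0):
    """Predicted log10 removal for a bed of depth L."""
    ln_C = -1.5 * (1 - porosity) / d_c * alpha * eta0 * L
    return -ln_C / math.log(10)

# Hypothetical column: 0.2 mm collector grains, 1 cm bed, porosity
# 0.4, single-collector efficiency eta0 = 0.05, 90% removal observed.
alpha = collision_efficiency(0.10, d_c=2e-4, L=0.01, porosity=0.4, eta0=0.05)
print(f"alpha = {alpha:.2f}")                                  # ~1.0
print(f"log removal: {log_removal(alpha, 2e-4, 0.01, 0.4, 0.05):.1f}")  # 1.0
```

Because α scales linearly with the assumed collector diameter d_c, the choice of a single diameter for a bimodal GAC distribution directly drives α above or below unity, which is the sensitivity the abstract describes.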
Influence of Aluminum Content on Grain Refinement and Strength of AZ31 Magnesium GTA Weld Metal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Babu, N. Kishore; Cross, Carl E.
2012-06-28
The goal is to characterize the effect of Al content on AZ31 weld metal grain size and strength, and to examine the role of Al in grain refinement. The approach is to systematically vary the aluminum content of AZ31 weld metal, measure the average grain size in the weld metal, and measure cross-weld tensile properties and hardness. The conclusions are that: (1) increased Al content in AZ31 weld metal results in grain refinement, owing to higher undercooling during solidification; (2) weld metal grain refinement results in increased strength and hardness, owing to grain boundary strengthening; and (3) weld metal strength can be raised to wrought base metal levels.
ERIC Educational Resources Information Center
Schlechter, Melissa; Milevsky, Avidan
2010-01-01
The purpose of the current study is to determine the interconnection between parental level of education, psychological well-being, academic achievement and reasons for pursuing higher education in adolescents. Participants included 439 college freshmen from a mid-size state university in the northeastern USA. A survey, including indices of…
Markert, Ronald J; O'Neill, Sally C; Bhatia, Subhash C
2003-01-01
The objectives of continuing medical education (CME) programs include knowledge acquisition, skill development, clinical reasoning and decision making, and health care outcomes. We conducted a year-long medical education research study in which knowledge acquisition in our CME programs was assessed. A randomized separate-sample pretest/post-test design, a quasi-experimental technique, was used. Nine CME programs with a sufficient number of participants were identified a priori. Knowledge acquisition was compared between the control group and the intervention group for the nine individual programs and for the combined programs. A total of 667 physicians, nurses, and other health professionals participated. Significant gain in knowledge was found for six programs: Perinatology, Pain Management, Fertility Care 2, Pediatrics, Colorectal Diseases, and Alzheimer's Disease (each p < .001). Also, the intervention group differed from the control group when the nine programs were combined (p < .001), with an effect size of .84. The use of sound quasi-experimental research methodology (separate-sample pretest/post-test design), the inclusion of a representative sample of CME programs, and the analysis of nearly 700 subjects led us to have confidence in concluding that our CME participants acquired a meaningful amount of new knowledge.
Atmospheric particulate measurements in Norfolk, Virginia
NASA Technical Reports Server (NTRS)
Storey, R. W., Jr.; Sentell, R. J.; Woods, D. C.; Smith, J. R.; Harris, F. S., Jr.
1975-01-01
Characterization of atmospheric particulates was conducted at a site near the center of Norfolk, Virginia. Air quality was measured in terms of atmospheric mass loading, particle size distribution, and particulate elemental composition for a period of 2 weeks. The objectives of this study were (1) to establish a mean level of air quality and deviations about this mean, (2) to ascertain diurnal changes or special events in air quality, and (3) to evaluate instrumentation and sampling schedules. Simultaneous measurements were made with the following instruments: a quartz crystal microbalance particulate monitor, a light-scattering multirange particle counter, a high-volume air sampler, and polycarbonate membrane filters. To assess the impact of meteorological conditions on air quality variations, continuous data on temperature, relative humidity, wind speed, and wind direction were recorded. Particulate elemental composition was obtained from neutron activation and scanning electron microscopy analyses of polycarbonate membrane filter samples. The measured average mass loading agrees reasonably well with the mass loadings determined by the Virginia State Air Pollution Control Board. There are consistent diurnal increases in atmospheric mass loading in the early morning and a sample time resolution of 1/2 hour seems necessary to detect most of the significant events.
A comparison of two gears for quantifying abundance of lotic-dwelling crayfish
Williams, Kristi; Brewer, Shannon K.; Ellersieck, Mark R.
2014-01-01
Catch of crayfish (saddlebacked crayfish, Orconectes medius) was compared between a kick seine applied in two different ways and a 1-m² quadrat sampler (with known efficiency and bias in riffles) in three small streams in the Missouri Ozarks. Triplicate samples (one of each technique) were taken from two creeks and one headwater stream (n=69 sites) over a two-year period. General linear mixed models showed that the number of crayfish collected using the quadrat sampler was greater than the number collected using either of the two seine techniques. However, there was no significant interaction with gear, suggesting that year, stream size, and channel unit type did not relate to differences in crayfish catch by gear type. Variation in catch among gears was similar, as was the proportion of young-of-year individuals across samples taken with different gears or techniques. Negative binomial linear regression provided the appropriate relation between the gears, which allows correction factors to be applied, if necessary, to relate catches by the kick seine to those of the quadrat sampler. The kick seine appears to be a reasonable substitute for the quadrat sampler in these shallow streams, with the advantages of ease of use and a shorter time required per sample.
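As a rough illustration of the correction-factor idea, the sketch below fits a negative binomial regression relating simulated catches from two gears. The data are fabricated, and fixing the dispersion parameter alpha at 1.0 is a simplifying assumption rather than the authors' fitted value.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical paired samples at 69 sites: quadrat counts and
# kick-seine counts, with the seine catching fewer on average.
quadrat = rng.poisson(8, size=69)
seine = rng.poisson(quadrat * 0.6)

# Model seine catch as a function of (log) quadrat catch.
X = sm.add_constant(np.log(quadrat + 1))
model = sm.GLM(seine, X, family=sm.families.NegativeBinomial(alpha=1.0))
fit = model.fit()
print(fit.params)  # intercept and slope define the gear correction
```

Inverting a fitted relation of this kind is what allows kick-seine counts to be re-expressed on the quadrat-sampler scale.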
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dharmarajan, Guha; Beasley, James C.; Beatty, William S.
Many aspects of parasite biology critically depend on their hosts, and understanding how host-parasite populations are co-structured can help improve our understanding of the ecology of parasites, their hosts, and host-parasite interactions. This study utilized genetic data collected from raccoons (Procyon lotor) and a specialist parasite, the raccoon tick (Ixodes texanus), to test for genetic co-structuring of host-parasite populations at both landscape and host scales. At the landscape scale, our analyses revealed a significant correlation between genetic and geographic distance matrices (i.e., isolation by distance) in ticks, but not in their hosts. While there are several mechanisms that could lead to a stronger pattern of isolation by distance in the tick versus the raccoon datasets, our analyses suggest that at least one reason for this pattern is the substantial increase in statistical power (due to the ≈8-fold increase in sample size) afforded by sampling parasites. Host-scale analyses indicated higher relatedness between ticks sampled from related versus unrelated raccoons trapped within the same habitat patch, a pattern likely driven by increased contact rates between related hosts. Lastly, by utilizing fine-scale genetic data from both parasites and hosts, our analyses help improve understanding of epidemiology and host ecology.
Study on the Factors Affecting the Mechanical Behavior of Electron Beam Melted Ti6Al4V
NASA Astrophysics Data System (ADS)
Pirozzi, Carmine; Franchitti, Stefania; Borrelli, Rosario; Caiazzo, Fabrizia; Alfieri, Vittorio; Argenio, Paolo
2017-09-01
In this study, a mechanical characterization was performed on Ti-6Al-4V tensile samples built by electron beam melting (EBM). The tensile tests revealed a different behavior between two sets of specimens: as-built and machined ones. Supporting investigations were carried out to physically explain the statistical difference in mechanical performance. Cylindrical samples representing the tensile specimen geometry were manufactured by EBM and then investigated in the as-built condition from a macrostructural and microstructural point of view. To make the study robust, the cylindrical samples were manufactured by EBM in different sizes and at different heights from the build plate. This choice arose from the need to understand whether other factors, such as massivity and build location, could affect microstructure and defect generation and consequently influence the mechanical behavior of EBM components. The results of this study proved that the irregularity of the external circular surfaces of the examined cylinders, which significantly reduces the true cross-section withstanding the applied load, gives a comprehensive physical explanation of the different tensile behavior of the two sets of tensile specimens.
Investigation of electronic and magnetic properties of Ni0.5Cu0.5Fe2O4: theoretical and experimental
NASA Astrophysics Data System (ADS)
Sharma, Uma Shankar; Shah, Rashmi
2018-05-01
In the present study, Ni0.5Cu0.5Fe2O4 was synthesized by the co-precipitation method, and the prepared samples were annealed at 300°C and 500°C. The single-phase formation of the nickel ferrite was confirmed through powder X-ray diffraction (XRD). The presence of various functional groups was confirmed through FTIR analysis. The effects of the annealing temperature on the particle sizes and magnetic properties of the ferrite samples were investigated and interpreted. The structural and magnetic properties of the ferrite samples were strongly affected by the annealing temperature: as the annealing temperature increases, the coercivity and saturation magnetization values increase continuously. Spin-polarized calculations were performed on the Ni0.5Cu0.5Fe2O4 compound within density functional theory (DFT), yielding an equilibrium lattice constant of 8.2 Å; the density of states shows a large spin splitting between the spin-up and spin-down channels near the Fermi level, confirming p-d hybridization. The theoretically calculated magnetic moments are slightly higher than our experimental results. The other results are discussed in detail.
NASA Astrophysics Data System (ADS)
Martin, Sabrina; Bange, Jens
2014-01-01
Crawford et al. (Boundary-Layer Meteorol 66:237-245, 1993) showed that the time average is inappropriate for airborne eddy-covariance flux calculations. The aircraft's ground speed through a turbulent field is not constant; one reason can be a correlation with vertical air motion, so that some types of structures are sampled more densely than others. To avoid this, the time-sampled data are adjusted for the varying ground speed so that the modified estimates are equivalent to spatially sampled data. A comparison of sensible heat-flux calculations using temporal and spatial averaging methods is presented and discussed. Data from three airborne measurement systems, a small unmanned aerial vehicle (UAV), the helicopter-borne Helipod turbulence probe, and the manned Dornier 128-6 research aircraft, are used for the analysis; these systems vary in size, weight and aerodynamic characteristics. The systematic bias anticipated in covariance computations due to speed variations was found neither when averaging over Dornier, nor Helipod, nor UAV flight legs. However, the random differences between spatially and temporally averaged fluxes were found to be up to 30% on individual flight legs.
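The difference between temporal and ground-speed-weighted ("spatial") averaging can be demonstrated in a few lines. The sketch below uses synthetic data in which ground speed is deliberately correlated with vertical wind; the weighting scheme follows the idea summarized above and is illustrative, not the exact Crawford et al. formulation.

```python
import numpy as np

def flux_time_avg(w, T):
    """Conventional temporal eddy-covariance flux: cov(w, T)."""
    return np.mean((w - w.mean()) * (T - T.mean()))

def flux_space_avg(w, T, U):
    """Ground-speed-weighted flux: each time sample is weighted by
    the distance flown during it, i.e. by the ground speed U."""
    wgt = U / U.sum()
    w_bar = np.sum(wgt * w)
    T_bar = np.sum(wgt * T)
    return np.sum(wgt * (w - w_bar) * (T - T_bar))

rng = np.random.default_rng(0)
n = 5000
w = rng.normal(0, 1, n)               # vertical wind fluctuation
T = 0.3 * w + rng.normal(0, 1, n)     # temperature, correlated with w
U = 25 + 2 * w + rng.normal(0, 1, n)  # ground speed correlated with w

print(flux_time_avg(w, T), flux_space_avg(w, T, U))
```

When U and w are uncorrelated, the two estimates agree on average; the synthetic correlation above is what would make the plain time average systematically misweight updrafts.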
DOE Office of Scientific and Technical Information (OSTI.GOV)
Randriamanakoto, Z.; Väisänen, P.; Escala, A.
2013-10-01
We have established a relation between the brightest super star cluster (SSC) magnitude in a galaxy and the host star formation rate (SFR) for the first time in the near-infrared (NIR). The data come from a statistical sample of ∼40 luminous IR galaxies (LIRGs) and starbursts utilizing K-band adaptive optics imaging. While expanding the observed relation to longer wavelengths, less affected by extinction effects, it also pushes to higher SFRs. The relation we find, M_K ∼ −2.6 log SFR, is similar to that derived previously in the optical and at lower SFRs. It does not, however, fit the optical relation with a single optical-to-NIR color conversion, suggesting systematic extinction and/or age effects. While the relation is broadly consistent with a size-of-sample explanation, we argue physical reasons for the relation are likely as well. In particular, the scatter in the relation is smaller than expected from pure random sampling, strongly suggesting physical constraints. We also derive a quantifiable relation tying together cluster-internal effects and host SFR properties to possibly explain the observed brightest SSC magnitude versus SFR dependency.
Tarescavage, Anthony M; Corey, David M; Ben-Porath, Yossef S
2015-02-01
The purpose of this study was to investigate the predictive validity of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) in a sample of law enforcement officers. MMPI-2-RF scores were collected from preemployment psychological evaluations of 136 male police officers, and supervisor ratings of performance and problem behavior were subsequently obtained during the initial probationary period. The sample produced meaningfully lower and less variant substantive scale scores than the general population and the MMPI-2-RF Police Candidate comparison group, which significantly affected effect sizes for the zero-order correlations. After applying a correction for range restriction, MMPI-2-RF substantive scales demonstrated moderate to strong associations with criteria, particularly in the Emotional Dysfunction and Interpersonal Functioning domains. Relative risk ratio analyses showed that cutoffs of 45T and 50T maintained reasonable selection ratios because of the exceptionally low scores in this sample and were associated with significantly increased risk for problematic behavior. These results provide support for the predictive validity of the MMPI-2-RF substantive scales in this setting. Implications of these findings and limitations of these results are discussed. © The Author(s) 2014.
Children Prefer Diverse Samples for Inductive Reasoning in the Social Domain.
Noyes, Alexander; Christie, Stella
2016-07-01
Not all samples of evidence are equally conclusive: diverse evidence is more representative than narrow evidence. Prior research showed that children did not use sample diversity in evidence selection tasks, indiscriminately choosing diverse or narrow sets (tiger-mouse; tiger-lion) to learn about animals. This failure is not due to a general deficit of inductive reasoning, but reflects children's beliefs about the category and property at test. Five- to 7-year-olds' inductive reasoning (n = 65) was tested with two categories (animals, people) and two properties (toy preference, biological property). Consistent with prior research, children ignored diverse evidence when learning about animals' biological properties. When learning about people's toy preferences, however, children selected the diverse samples, providing the most compelling evidence to date of spontaneous selection of diverse evidence. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.
Opsahl, Stephen P.; Crow, Cassi L.
2014-01-01
During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.
HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE
NASA Technical Reports Server (NTRS)
De, Salvo L. J.
1994-01-01
HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
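The underlying calculation is easy to reproduce. Below is a minimal re-implementation of the zero-acceptance hypergeometric sample size search (not the distributed spreadsheet itself); it recovers the abstract's worked example of n = 273 for a lot of 400 with 1% nonconforming at 99% confidence.

```python
def hypergeometric_sample_size(lot_size, defectives, consumer_risk):
    """Smallest n such that the probability of drawing zero
    defectives from the lot is at most consumer_risk
    (acceptance number c = 0).

    P(X = 0) = C(N-D, n) / C(N, n)
             = prod_{i=0}^{n-1} (N - D - i) / (N - i)
    The running product avoids large factorials entirely.
    """
    N, D = lot_size, defectives
    p_zero = 1.0
    for n in range(1, N + 1):
        p_zero *= (N - D - n + 1) / (N - n + 1)
        if p_zero <= consumer_risk:
            return n
    return N

# Abstract's example: lot of 400, 1% nonconforming (4 units),
# 99% confidence -> consumer's risk of 0.01.
print(hypergeometric_sample_size(400, 4, 0.01))  # prints 273
```

The binomial plan for the same inputs needs (0.99)^n ≤ 0.01, i.e. n ≥ 459, which exceeds the lot and so forces 100% inspection, matching the abstract's comparison.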
Study samples are too small to produce sufficiently precise reliability coefficients.
Charter, Richard A
2003-04-01
In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
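To see why small samples produce imprecise reliability coefficients, one can bootstrap a confidence interval for Cronbach's alpha at several of the sample sizes reported above. The sketch below uses simulated item data, so the numbers are illustrative; the point is the widening of the interval as N falls toward the survey's median values.

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an (n examinees x k items) score matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def bootstrap_ci(X, reps=2000, seed=0):
    """Percentile bootstrap 95% CI for alpha, resampling examinees."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    boots = [cronbach_alpha(X[rng.integers(0, n, n)]) for _ in range(reps)]
    return np.percentile(boots, [2.5, 97.5])

rng = np.random.default_rng(1)
true_score = rng.normal(size=(1000, 1))
items = true_score + rng.normal(scale=1.0, size=(1000, 10))  # 10-item test

for n in (36, 90, 260):  # the survey's interjudge, median, and mean Ns
    print(n, np.round(bootstrap_ci(items[:n]), 2))
```

Running this shows the interval at N = 36 spanning several times the width of the interval at N = 260, which is the imprecision, and the resulting imprecision in true-score confidence intervals, that the article warns about.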
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
Hydration entropy change from the hard sphere model.
Graziano, Giuseppe; Lee, Byungkook
2002-12-10
The gas-to-liquid transfer entropy change for a pure non-polar liquid can be calculated quite accurately using a hard sphere model that obeys the Carnahan-Starling equation of state. The same procedure fails to produce a reasonable value for hydrogen-bonding liquids such as water, methanol and ethanol. However, the size of the molecules increases when the hydrogen bonds are turned off to produce the hard sphere system, and the volume packing density rises. We show here that the hard sphere system with this increased packing density reproduces the experimental transfer entropy values rather well. The gas-to-water transfer entropy values for small non-polar hydrocarbons are also not reproduced by a hard sphere model, whether one uses the normal (2.8 Å diameter) or the increased (3.2 Å) size for water. At least part of the reason that the hard sphere model with 2.8 Å water produces too small an entropy change is that this size of water is too small for a system without hydrogen bonds. The reason that the 3.2 Å model also produces too small entropy values is that this is an overly crowded system, and the free volume introduced by the addition of a solute molecule provides too much relief from this crowding. A hard sphere model in which the free volume increase is limited, by requiring that the average surface-to-surface distance between the solute and water molecules equal that between the increased-size water molecules, does approximately reproduce the experimental hydration entropy values. Copyright 2002 Elsevier Science B.V.
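The Carnahan-Starling route to the entropy can be made concrete. For hard spheres the configurational energy vanishes, so the excess entropy follows from the Carnahan-Starling free energy as S_ex/(N·k_B) = −η(4 − 3η)/(1 − η)², where η is the packing fraction. The sketch below evaluates this at the packing fractions implied by the two water diameters discussed above, using the ordinary number density of liquid water (an assumption on our part, for illustration); it shows how strongly the result depends on the assumed diameter.

```python
import math

def cs_excess_entropy(eta):
    """Excess entropy per particle (in units of k_B) for a hard-sphere
    fluid from the Carnahan-Starling equation of state:
    S_ex / (N k_B) = -eta * (4 - 3*eta) / (1 - eta)**2
    """
    return -eta * (4 - 3 * eta) / (1 - eta) ** 2

rho = 0.0334  # number density of liquid water, molecules per cubic angstrom
for d in (2.8, 3.2):  # hard-sphere diameters (angstrom) discussed above
    eta = math.pi / 6 * rho * d ** 3
    print(f"d = {d} A: eta = {eta:.3f}, S_ex/NkB = {cs_excess_entropy(eta):.2f}")
```

With these inputs η rises from about 0.38 to about 0.57 when the diameter grows from 2.8 to 3.2 Å, and the excess entropy magnitude more than doubles, which is the "overly crowded system" behavior described in the abstract.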
Neutron depolarization effects in a high-Tc superconductor (abstract)
NASA Astrophysics Data System (ADS)
Nunes, A. C.; Pickart, S. J.; Crow, L.; Goyette, R.; McGuire, T. R.; Shinde, S.; Shaw, T. M.
1988-11-01
Using the polarized beam small-angle neutron scattering spectrometer at the Rhode Island Nuclear Science Center Reactor, we have observed significant depolarization of a neutron beam by passage through polycrystalline high-Tc superconductors, specifically 123 Y-Ba-Cu-O prepared and characterized at the IBM Watson Research Center. We believe that this technique will prove useful in studying aspects of these materials, such as the penetration depth of shielding currents, the presence and structure of trapped flux vortices, and grain size effects on the supercurrent distribution in polycrystalline samples. The two samples showed sharp transitions at 87 and 89 K, and have been studied at temperatures of 77 K; the second sample has also been studied at 4 K. The transition to the superconducting state was monitored by the shift in resonant frequency of a coil surrounding the sample. No measurable depolarization was observed in either sample at 77 K in both the field-cooled and zero-field-cooled states, using applied fields of 0 (nominal), 54, and 1400 Oe. This negative result may be connected with the fact that the material is still in the reversible region as indicated by susceptibility measurements, but it allows an estimate of the upper bound of possible inhomogeneous internal fields, assuming a distance scale for the superconducting regions. For the 10-μm grain size suggested by photomicrographs, this upper bound for the field turns out to be 1.2 kOe, which seems reasonable. At 4 K a significant depolarization was observed when the sample was cooled in low fields and a field of 1400 Oe was subsequently applied. This result suggests that flux lines are penetrating the sample. Further investigations are being carried out to determine the field and temperature dependence of the depolarization, and attempts will be made to model it quantitatively in terms of possible internal field distributions. We are also searching for possible diffraction effects from ordered vortex arrays and plan to extend the measurements to Bi and Tl compositions. These results will be reported in detail elsewhere.
Fahl Mar, Kaysee; Schilling, Joshua; Brown, Walter A.
2018-01-01
Background: Recent studies show that placebo response has grown significantly over time in clinical trials for antidepressants, ADHD medications, antiepileptics, and antidiabetics. Contrary to expectations, trial outcome measures and success rates have not been impacted. This study aimed to see if this trend of increasing placebo response and stable efficacy outcome measures is unique to the conditions previously studied or if it occurs in trials for conditions with physiologically measured symptoms, such as hypertension. Method: We evaluated the efficacy data reported in the US Food and Drug Administration Medical and Statistical reviews for 23 antihypertensive programs (32,022 patients, 63 trials, 142 treatment arms). Placebo and medication response, effect sizes, and drug-placebo differences were calculated for each treatment arm and examined over time using meta-regression. We also explored the relationship of sample size, trial duration, baseline blood pressure, and number of treatment arms to placebo/drug response and efficacy outcome measures. Results: Like trials of other conditions, placebo response has risen significantly over time (R2 = 0.093, p = 0.018), while effect size (R2 = 0.013, p = 0.187), drug-placebo difference (R2 = 0.013, p = 0.182) and success rate (134/142, 94.4%) have remained unaffected, likely due to a significant compensatory increase in antihypertensive response (R2 = 0.086, p<0.001). Treatment arms are likely overpowered, with sample sizes increasing over time (R2 = 0.387, p<0.0001) and stable, large effect sizes (0.78 ± 0.37). The exploratory analysis of sample size, trial duration, baseline blood pressure, and number of treatment arms yielded mixed results unlikely to explain the pattern of placebo response and efficacy outcomes over time. The magnitude of placebo response had no relationship to effect size (p = 0.877), antihypertensive-placebo differences (p = 0.752), or p-values (p = 0.963) but was correlated with antihypertensive response (R2 = 0.347, p<0.0001). Conclusions: As hypothesized, this study shows that placebo response is increasing in clinical trials for hypertension without any evidence of this increase impacting trial outcomes. Attempting to control placebo response in clinical trials for hypertension may not be necessary for successful efficacy outcomes. In exploratory analysis, we noted that despite finding significant relationships, none of the trial or patient characteristics we examined offered a clear explanation of the rise in placebo response and the stability of outcome measures over time. Collectively, these data suggest that the phenomenon of increasing placebo response and stable efficacy outcomes may be a general trend, occurring across trials for various psychiatric and medical conditions with physiological and non-physiological endpoints. PMID:29489874
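To make the meta-regression approach described above concrete, here is a minimal sketch assuming fabricated trial-arm data (the years, arm sizes, and placebo-arm blood pressure drops are placeholders, not values from the study): placebo response is regressed on trial year, with each arm weighted by its sample size.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
year = rng.integers(1990, 2015, size=60)           # trial year (hypothetical)
n_arm = rng.integers(50, 400, size=60)             # placebo-arm sample sizes
bp_drop = 5 + 0.15 * (year - 1990) + rng.normal(0, 3, 60)  # placebo response, mmHg

# Weighted least squares: larger arms get more weight, as in meta-regression
fit = sm.WLS(bp_drop, sm.add_constant(year), weights=n_arm).fit()
print(fit.params, fit.rsquared, fit.pvalues[1])    # positive slope: rising placebo response
```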
Grain size of loess and paleosol samples: what are we measuring?
NASA Astrophysics Data System (ADS)
Varga, György; Kovács, János; Szalai, Zoltán; Újvári, Gábor
2017-04-01
Particle size falling into a particularly narrow range is among the most important properties of windblown mineral dust deposits. Therefore, various aspects of aeolian sedimentation and post-depositional alterations can be reconstructed only from precise grain size data. The present study aims at (1) reviewing grain size data obtained from different measurements, (2) discussing the major reasons for disagreements between data obtained by frequently applied particle sizing techniques, and (3) assessing the importance of particle shape in particle sizing. Grain size data of terrestrial aeolian dust deposits (loess and paleosol) were determined by laser scattering instruments (Fritsch Analysette 22 Microtec Plus, Horiba Partica La-950 v2 and Malvern Mastersizer 3000 with a Hydro Lv unit), while particle size and shape distributions were acquired by a Malvern Morphologi G3-ID. Laser scattering results reveal that the optical parameter settings of the measurements have significant effects on the grain size distributions, especially for the fine-grained fractions (<5 µm). Significant differences between the Mie and Fraunhofer approaches were found for the finest grain size fractions, while only slight discrepancies were observed for the medium to coarse silt fractions. It should be noted that the different instruments provided different grain size distributions even with exactly the same optical settings. Image analysis-based grain size data indicated underestimation of the clay and fine silt fractions compared to laser measurements. The circle-equivalent diameter measured by image analysis is calculated from the acquired two-dimensional image of the particle. It is assumed that the instantaneous pulse of compressed air disperses the sedimentary particles onto the glass slide with a consistent orientation, with their largest area facing the camera. However, this is only one of infinitely many possible projections of a three-dimensional object and cannot be regarded as representative. The third (height) dimension of the particles remains unknown, so the volume-based weightings are fairly dubious in the case of platy particles. Support of the National Research, Development and Innovation Office (Hungary) under contract NKFI 120620 is gratefully acknowledged. It was additionally supported (for G. Varga) by the Bolyai János Research Scholarship of the Hungarian Academy of Sciences.
ERIC Educational Resources Information Center
Strazzeri, Kenneth Charles
2013-01-01
The purposes of this study were to investigate (a) undergraduate students' reasoning about the concepts of confidence intervals, (b) undergraduate students' interactions with "well-designed" screencast videos on sampling distributions and confidence intervals, and (c) how screencast videos improve undergraduate students' reasoning ability…
Reasons for quitting among emerging adults and adolescents in substance-use-disorder treatment.
Smith, Douglas C; Cleeland, Leah; Dennis, Michael L
2010-05-01
Understanding developmental differences in reasons for quitting substance use may assist clinicians in tailoring treatments to different clinical populations. This study investigates whether alcohol-disordered and problem-drinking emerging adults (i.e., ages 18-25 years) have different reasons for quitting than younger adolescents (i.e., ages 13-17 years). Using a large clinical sample of emerging adults and adolescents, we compared endorsement rates for 26 separate reasons for quitting between emerging adults and adolescents who were matched on clinical severity. Then age group was regressed on total, interpersonal, and personal reasons for quitting, and mediation tests were conducted with variables proposed to be developmentally salient to emerging adults. Among both age groups, self-control reasons were the most highly endorsed. Emerging adults had significantly fewer interpersonal reasons for quitting (Cohen's d = 0.20), and this association was partially mediated by days of being in trouble with one's family. There were no differences in personal reasons or total number of reasons for quitting. Our findings are consistent with developmental theory suggesting that emerging adults experience less social control, which here leads to less interpersonal motivation to refrain from alcohol and drug use. As emerging adults in clinical samples may indicate few interpersonal reasons for quitting, one challenge to tailoring treatments for them will be identifying innovative ways of leveraging social supports and altering existing social networks.
Reducing Class Size in New York City: Promise vs. Practice
ERIC Educational Resources Information Center
Farrie, Danielle; Johnson, Monete; Lecker, Wendy; Luhm, Theresa
2016-01-01
In the landmark school funding litigation, "Campaign for Fiscal Equity v. State" ("CFE"), the highest Court in New York recognized that reasonable class sizes are an essential element of a constitutional "sound basic education." In response to the rulings in the case, in 2007, the Legislature adopted a law mandating…
Superlinear scaling for innovation in cities.
Arbesman, Samuel; Kleinberg, Jon M; Strogatz, Steven H
2009-01-01
Superlinear scaling in cities, which appears in sociological quantities such as economic productivity and creative output relative to urban population size, has been observed but has not been given a satisfactory theoretical explanation. Here we provide a network model for the superlinear relationship between population size and innovation found in cities, with a reasonable range for the exponent.
24 CFR 884.219 - Overcrowded and underoccupied units.
Code of Federal Regulations, 2010 CFR
2010-04-01
... assisted under this part is not Decent, Safe, and Sanitary by reason of increase in Family size, or that a Contract unit is larger than appropriate for the size of the Family in occupancy, housing assistance payments with respect to such unit will not be abated, unless the Owner fails to offer the Family a...
24 CFR 886.125 - Overcrowded and underoccupied units.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Sanitary by reason of increase in Family size or that a Contract unit is larger than appropriate for the size of the Family in occupancy, housing assistance payments with respect to such unit will not be abated, unless the Owner fails to offer the Family a suitable unit as soon as one becomes vacant and...
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...
Surface facial modelling and allometry in relation to sexual dimorphism.
Velemínská, J; Bigoni, L; Krajíček, V; Borský, J; Šmahelová, D; Cagáňová, V; Peterka, M
2012-04-01
Sexual dimorphism is responsible for a substantial part of human facial variability, the study of which is essential for many scientific fields ranging from evolution to special biomedical topics. Our aim was to analyse the relationship between the size variability and shape variability of sexually dimorphic facial traits in the young adult Central European population and to construct average surface models of adult males and females. The method of geometric morphometrics allowed not only the identification of dimorphic traits, but also the evaluation of static allometry and the visualisation of sexual facial differences. Facial variability in the studied sample was characterised by a strong relationship between facial size and the shape of sexually dimorphic traits. A large facial size was associated with facial elongation and vice versa. Regarding shape-related sexually dimorphic traits, a wide, vaulted and high forehead in combination with a narrow and gracile lower face were typical for females. Variability in shape-related dimorphic traits was smaller in females than in males. For female classification, shape-related sexually dimorphic traits are more important, while for males the stronger association is with facial size. Males generally had a closer inter-orbital distance and a deeper position of the eyes in relation to the facial plane, a larger and wider straight nose and nostrils, and a more massive lower face. Using pseudo-colour maps to provide a detailed schematic representation of the geometrical differences between the sexes, we attempted to clarify the reasons underlying the development of such differences. Copyright © 2012 Elsevier GmbH. All rights reserved.
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of information essential for replication of sample size calculations, as well as on the accuracy of those calculations. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed, and examined the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
Connection between the growth rate distribution and the size dependent crystal growth
NASA Astrophysics Data System (ADS)
Mitrović, M. M.; Žekić, A. A.; Ilić, Z. Z.
2002-07-01
The results of investigations of the connection between growth rate dispersion and the size-dependent crystal growth of potassium dihydrogen phosphate (KDP), Rochelle salt (RS) and sodium chlorate (SC) are presented. A possible way out of the existing confusion in size-dependent crystal growth investigations is suggested. It is shown that size-independent growth exists if the crystals belonging to one growth rate distribution maximum are considered separately. The investigations suggest a possible reason for the observed widths of the distribution maxima, and for the high scatter of the data in the growth rate versus crystal size dependence.
Timely and complete publication of economic evaluations alongside randomized controlled trials.
Thorn, Joanna C; Noble, Sian M; Hollingworth, William
2013-01-01
Little is known about the extent and nature of publication bias in economic evaluations. Our objective was to determine whether economic evaluations are subject to publication bias by considering whether economic data are as likely to be reported, and reported as promptly, as effectiveness data. Trials that intended to conduct an economic analysis and ended before 2008 were identified in the International Standard Randomised Controlled Trial Number (ISRCTN) register; a random sample of 100 trials was retrieved. Fifty comparator trials were randomly drawn from those not identified as intending to conduct an economic study. The trial start and end dates, estimated sample size and funder type were extracted. For trials planning economic evaluations, effectiveness and economic publications were sought; publication dates and journal impact factors were extracted. Effectiveness abstracts were assessed for whether they reached a firm conclusion that one intervention was most effective. Primary investigators were contacted about reasons for non-publication of results, or reasons for differential publication strategies for effectiveness and economic results. Trials planning an economic study were more likely to be funded by government (p = 0.01) and larger (p = 0.003) than other trials. The trials planning an economic evaluation had a mean of 6.5 (range 2.7-13.2) years since the trial end in which to publish their results. Effectiveness results were reported by 70 %, while only 43 % published economic evaluations (p < 0.001). Reasons for non-publication of economic results included the intervention being ineffective, and staffing issues. Funding source, time since trial end and length of study were not associated with a higher probability of publishing the economic evaluation. However, studies that were small or of unknown size were significantly less likely to publish economic evaluations than large studies (p < 0.001). The authors' confidence in labelling one intervention clearly most effective did not affect the probability of publication. The mean time to publication was 0.7 years longer for cost-effectiveness data than for effectiveness data where both were published (p = 0.001). The median journal impact factor was 1.6 points higher for effectiveness publications than for the corresponding economic publications (p = 0.01). Reasons for publishing in different journals included editorial decision making and the additional time that economic evaluation takes to conduct. Trials that intend to conduct an economic analysis are less likely to report economic data than effectiveness data. Where economic results do appear, they are published later, and in journals with lower impact factors. These results suggest that economic output may be more susceptible than effectiveness data to publication bias. Funders, grant reviewers and trialists themselves should ensure economic evaluations are prioritized and adequately staffed to avoid potential problems with bias.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
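A minimal simulation sketch of the blinded re-estimation procedure the abstract describes, under illustrative assumptions (the planning effect delta, per-arm pilot size n1, and normal data model are placeholders, not the paper's settings): the interim variance is estimated with the arms pooled (blinded), the final per-arm size is recomputed from it, and the standard two-sample t-test is applied at the end.

```python
import numpy as np
from scipy import stats

def trial(delta=0.5, sigma=1.0, n1=30, alpha=0.05, power=0.8, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    # Internal pilot: n1 per arm; variance estimated blinded (arms pooled)
    a, b = rng.normal(0, sigma, n1), rng.normal(delta, sigma, n1)
    s2_blinded = np.var(np.concatenate([a, b]), ddof=1)
    n = max(n1, int(np.ceil(2 * s2_blinded * (za + zb) ** 2 / delta ** 2)))
    # Second stage: recruit up to the re-estimated per-arm size n
    a = np.concatenate([a, rng.normal(0, sigma, n - n1)])
    b = np.concatenate([b, rng.normal(delta, sigma, n - n1)])
    return stats.ttest_ind(b, a).pvalue < alpha   # final two-sample t-test

rng = np.random.default_rng(1)
print(np.mean([trial(rng=rng) for _ in range(2000)]))  # empirical power
```

Setting the data-generating delta to 0 while keeping the planning delta gives the empirical type I error instead of the power.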
Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz
2017-01-01
Sample size determination is usually taught based on theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than using lectures only. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus a lecture combined with using a smartphone application to calculate sample sizes, explored factors affecting the level of post-test score after training in sample size calculation, and investigated participants' attitude toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups: 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures using a smartphone application were supplied to the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum 10 points, 95% CI: 2.4 - 2.9) than the participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not have a plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.
Public School Center vs. Family Home Day Care: Single Parents' Reasons for Selection.
ERIC Educational Resources Information Center
Rothschild, Maria Stupp
This study investigates the reasons single parents in San Diego had for choosing either a public day care center or a licensed day care home for their children. A sample of 30 single parents with children in school district administered children's centers was drawn and matched by a similarly geographically distributed sample of 23 parents with…
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Sample Size Determination for Regression Models Using Monte Carlo Methods in R
ERIC Educational Resources Information Center
Beaujean, A. Alexander
2014-01-01
A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters usually is not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
Reasoning about Shape as a Pattern in Variability
ERIC Educational Resources Information Center
Bakker, Arthur
2004-01-01
This paper examines ways in which coherent reasoning about key concepts such as variability, sampling, data, and distribution can be developed as part of statistics education. Instructional activities that could support such reasoning were developed through design research conducted with students in grades 7 and 8. Results are reported from a…
Children's and Their Friends' Moral Reasoning: Relations with Aggressive Behavior
ERIC Educational Resources Information Center
Gasser, Luciano; Malti, Tina
2012-01-01
Friends' moral characteristics such as their moral reasoning represent an important social contextual factor for children's behavioral socialization. Guided by this assumption, we compared the effects of children's and friends' moral reasoning on their aggressive behavior in a low-risk sample of elementary school children. Peer nominations and…
Sample size determination in group-sequential clinical trials with two co-primary endpoints
Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi
2014-01-01
We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable in cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power that is generally superior to that of the approximate t test. A numerical example is provided.
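SciPy implements Yuen's trimmed-mean test through the trim argument of ttest_ind (available in SciPy 1.7 and later), so the power achieved by a candidate sample size can be checked by simulation. The scenario below (20% trimming, unequal variances and unequal group sizes) is illustrative only, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n1, n2, reps = 40, 60, 5000      # candidate (unequal) sample sizes
rejections = 0
for _ in range(reps):
    a = rng.normal(0.0, 1.0, n1)             # group 1
    b = rng.normal(0.6, 2.0, n2)             # group 2: shifted, noisier
    # trim=0.2 -> Yuen's test on 20%-trimmed means (Welch-type df)
    p = stats.ttest_ind(a, b, equal_var=False, trim=0.2).pvalue
    rejections += p < 0.05
print("empirical power:", rejections / reps)
```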
Managed care's Achilles heel: ethical immaturity.
Thompson, R E
2000-01-01
How can physician executives determine the prevailing values in the managed care arena? What are the consequences when values statements are ignored during decision-making? These questions can be answered using a process called ethical reasoning, which is different and more productive than making moral judgments, such as "is managed care good or bad?" Failing to include ethical reasoning in executive offices and boardrooms is a form of ethical immaturity. It fuels public suspicion that managed care's goal may be maximizing profit at all costs, as opposed to seeking reasonable profit through provision of dependable and accessible health care services. One outcome of ethical reasoning is rediscovering the basic truth that running one's business on competitive rather than altruistic principles is ethical whenever greater efficiencies and economic growth enlarge the size of the pie for everyone. Reasonable self-interest is a perfectly acceptable reason to act ethically. The time has come for physician executives to develop a basic understanding of pragmatic ethics, and to appreciate the value of adding ethical reasoning to the decision-making process.
Dust-bathing behavior of laying hens in enriched colony housing systems and an aviary system
Louton, H.; Bergmann, S.; Reese, S.; Erhard, M. H.; Rauch, E.
2016-01-01
The dust-bathing behavior of Lohmann Selected Leghorn hens was compared in 4 enriched colony housing systems and in an aviary system. The enriched colony housing systems differed especially in the alignment and division of the functional areas dust bath, nest, and perches. Forty-eight-hour video recordings were performed at 3 time-points during the laying period, and focal animal sampling and behavior sampling methods were used to analyze the dust-bathing behavior. Focal animal data included the relative fractions of dust-bathing hens overall, of hens bathing in the dust-bath area, and of those bathing on the wire floor throughout the day. Behavior data included the number of dust-bathing bouts within a predefined time range, the duration of 1 bout, the number of and reasons for interruptions, and the number of and reasons for the termination of dust-bathing bouts. Results showed that the average duration of dust bathing varied between the 4 enriched colony housing systems compared with the aviary system. The duration of dust-bathing bouts was shorter than reported under natural conditions. A positive correlation between dust-bathing activity and size of the dust-bath area was observed. Frequently, dust baths were interrupted and terminated by disturbing influences such as pecking by other hens. This was especially observed in the enriched colony housing systems. In none of the observed systems, neither in the enriched colony housing nor in the aviary system, were all of the observed dust baths terminated “normally.” Dust bathing behavior on the wire mesh rather than in the provided dust-bath area generally was observed at different frequencies in all enriched colony housing systems during all observation periods, but never in the aviary system. The size and design of the dust-bath area influenced the prevalence of dust-bathing behavior in that small and subdivided dust-bath areas reduced the number of dust-bathing bouts but increased the incidence of sham dust bathing on the wire mesh. PMID:27044875
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir
Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, and BJH and BET tests. The overall results showed that: (1) the mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1–2.5 nm) compared to conventionally synthesized ZIF-8 samples; (2) an exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm; (3) applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples; (4) both an increase in temperature and a decrease in the molar ratio of MeIM/Zn²⁺ had an increasing effect on the particle size, pore size, pore volume, crystallinity and BET surface area of all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • Increase in temperature had an increasing effect on textural properties of ZIF-8 samples. • Decrease in MeIM/Zn²⁺ had an increasing effect on textural properties of ZIF-8 samples.
ERIC Educational Resources Information Center
Gil, Einat; Gibbs, Alison L.
2017-01-01
In this study, we follow students' modeling and covariational reasoning in the context of learning about big data. A three-week unit was designed to allow 12th grade students in a mathematics course to explore big and mid-size data using concepts such as trend and scatter to describe the relationships between variables in multivariate settings.…
ERIC Educational Resources Information Center
Sahin, Alper; Weiss, David J.
2015-01-01
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
Sampling algorithms for validation of supervised learning models for Ising-like systems
NASA Astrophysics Data System (ADS)
Portman, Nataliya; Tamblyn, Isaac
2017-12-01
In this paper, we build and explore supervised learning models of ferromagnetic system behavior, using Monte-Carlo sampling of the spin configuration space generated by the 2D Ising model. Given the enormous size of the space of all possible Ising model realizations, the question arises as to how to choose a reasonable number of samples that will form physically meaningful and non-intersecting training and testing datasets. Here, we propose a sampling technique called "ID-MH" that uses the Metropolis-Hastings algorithm to create a Markov process across energy levels within the predefined configuration subspace. We show that application of this method retains phase transitions in both training and testing datasets and serves the purpose of validation of a machine learning algorithm. For larger lattice dimensions, ID-MH is not feasible as it requires knowledge of the complete configuration space. As such, we develop a new "block-ID" sampling strategy: it decomposes the given structure into square blocks with lattice dimension N ≤ 5 and uses ID-MH sampling of candidate blocks. Further comparison of the performance of commonly used machine learning methods such as random forests, decision trees, k nearest neighbors and artificial neural networks shows that the PCA-based Decision Tree regressor is the most accurate predictor of magnetizations of the Ising model. For energies, however, the accuracy of prediction is not satisfactory, highlighting the need to consider more algorithmically complex methods (e.g., deep learning).
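For reference, the building block underlying the schemes above is standard single-spin-flip Metropolis sampling of the 2D Ising model. This is a minimal sketch of that baseline only; the paper's ID-MH (level-wise) and block-ID constructions are not reproduced here.

```python
import numpy as np

def metropolis_ising(L=16, beta=0.44, steps=100_000, rng=None):
    """Sample a 2D Ising configuration by single-spin-flip Metropolis."""
    if rng is None:
        rng = np.random.default_rng()
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(steps):
        i, j = rng.integers(L, size=2)
        # Energy change of flipping spin (i, j), periodic boundaries
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] *= -1                     # accept the flip
    return s

config = metropolis_ising()
print("magnetization per spin:", config.mean())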
Sample size calculations for case-control studies
This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
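The package's own interface is not reproduced here. As a hedged stand-in, the classical unmatched case-control calculation for a binary exposure (a Schlesselman-type two-proportion formula, without confounder adjustment) illustrates the inputs involved: exposure prevalence in controls, target odds ratio, alpha, and power.

```python
import math
from scipy.stats import norm

def cc_sample_size(p0, odds_ratio, alpha=0.05, power=0.8):
    """Cases (= controls) needed to detect a given odds ratio for a
    binary exposure, with control-group exposure prevalence p0."""
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))   # exposure in cases
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    var = p1 * (1 - p1) + p0 * (1 - p0)
    return math.ceil((za + zb) ** 2 * var / (p1 - p0) ** 2)

print(cc_sample_size(p0=0.30, odds_ratio=2.0))   # per-group size
```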
Crows spontaneously exhibit analogical reasoning.
Smirnova, Anna; Zorina, Zoya; Obozova, Tanya; Wasserman, Edward
2015-01-19
Analogical reasoning is vital to advanced cognition and behavioral adaptation. Many theorists deem analogical thinking to be uniquely human and to be foundational to categorization, creative problem solving, and scientific discovery. Comparative psychologists have long been interested in the species generality of analogical reasoning, but they initially found it difficult to obtain empirical support for such thinking in nonhuman animals (for pioneering efforts, see [2, 3]). Researchers have since mustered considerable evidence and argument that relational matching-to-sample (RMTS) effectively captures the essence of analogy, in which the relevant logical arguments are presented visually. In RMTS, choice of test pair BB would be correct if the sample pair were AA, whereas choice of test pair EF would be correct if the sample pair were CD. Critically, no items in the correct test pair physically match items in the sample pair, thus demanding that only relational sameness or differentness is available to support accurate choice responding. Initial evidence suggested that only humans and apes can successfully learn RMTS with pairs of sample and test items; however, monkeys have subsequently done so. Here, we report that crows too exhibit relational matching behavior. Even more importantly, crows spontaneously display relational responding without ever having been trained on RMTS; they had only been trained on identity matching-to-sample (IMTS). Such robust and uninstructed relational matching behavior represents the most convincing evidence yet of analogical reasoning in a nonprimate species, as apes alone have spontaneously exhibited RMTS behavior after only IMTS training. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
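A minimal sketch of the 'basic' two-stage scheme described above, assuming hypothetical numbers (the 20% pass/fail threshold and 7-percentage-point stopping margin are placeholders, not the study's calibrated values): score half the usual sample, stop early if the interim prevalence is decisively above or below the threshold, and otherwise score the second half.

```python
import numpy as np

def two_stage_classify(herd, n_full, threshold=0.20, margin=0.07, rng=None):
    """Return (classification, animals actually scored)."""
    if rng is None:
        rng = np.random.default_rng()
    sample = rng.choice(herd, n_full, replace=False)
    p1 = sample[: n_full // 2].mean()          # interim prevalence
    if abs(p1 - threshold) > margin:           # confident: stop early
        return ("bad" if p1 > threshold else "good"), n_full // 2
    p = sample.mean()                          # full-sample decision
    return ("bad" if p > threshold else "good"), n_full

rng = np.random.default_rng(2)
herd = (rng.random(200) < 0.25).astype(int)    # herd with 25% truly lame
print(two_stage_classify(herd, n_full=60, rng=rng))
```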
The effect of creative problem solving on students’ mathematical adaptive reasoning
NASA Astrophysics Data System (ADS)
Muin, A.; Hanifah, S. H.; Diwidian, F.
2018-01-01
This research was conducted to analyse the effect of the creative problem solving (CPS) learning model on students' mathematical adaptive reasoning. The method used in this study was quasi-experimental with a randomized post-test only control group design. Two classes were sampled by a cluster random sampling technique: an experimental class (CPS) of 40 students and a control class (conventional) of 40 students. Based on the result of hypothesis testing with the t-test at the 5% significance level, the obtained significance value of 0.0000 was less than α = 0.05. This shows that the mathematical adaptive reasoning skills of students taught with the CPS model were higher than those of students taught with the conventional model. The results showed that the most prominent aspect of adaptive reasoning that could be developed through CPS was the inductive intuitive aspect. The two aspects of adaptive reasoning, inductive intuitive and deductive intuitive, were mostly balanced; the difference between them was small. The CPS model can develop students' mathematical adaptive reasoning skills and can facilitate their development thoroughly.
McLennan, J D
2001-06-01
The objectives of this study were to determine: 1) whether mothers' perceptions of typical community practice for breast-feeding duration influence their personal practices and 2) whether the mothers' reports of community reasons for terminating breast-feeding identify barriers not elicited through self-report. The study was conducted in 1997 in a sample of poor neighborhoods in a periurban district of Santo Domingo, the capital of the Dominican Republic. A representative sample of 220 mothers from these neighborhoods was interviewed with a structured questionnaire. While the duration of breast-feeding was similar for self-report and for mothers' perceptions of typical community practice, there was no statistically significant correlation between these two variables. "Mother-driven" reasons for early termination of breast-feeding, such as "fear of loss of figure or of breast shape" and "not wanting to breast-feed," were frequently perceived as community reasons but rarely given as personal reasons. Personal reasons were predominately "child-driven," including "the child not wanting the breast," or reasons beyond the mother's control such as having "insufficient" milk. Maternal report of community reasons for early termination may be a useful way to identify factors that would not otherwise be revealed on self-report. These additional reasons may guide health promotion efforts aimed at increasing breast-feeding duration.
Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat
2018-03-01
To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ES^, ES = ES^L and ES = ES^U. We aimed to provide point and interval estimates of projected sample sizes for future studies reflecting the uncertainty in our study ES^ values. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample sizes for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
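A sketch of the post hoc calculation described, with illustrative numbers: given an estimated Cohen's effect size and its CI limits (the values below are placeholders, not the study's estimates), the one-sample t-test n for 80% power at alpha = 0.05 can be solved with statsmodels. The CI on ES^ maps directly to an interval on n, with smaller effect sizes giving larger sample sizes.

```python
import math
from statsmodels.stats.power import TTestPower

solver = TTestPower()                      # one-sample (or paired) t-test
for es in (0.62, 1.05, 0.29):              # ES-hat and hypothetical CI limits
    n = solver.solve_power(effect_size=es, alpha=0.05, power=0.8)
    print(f"ES = {es:.2f} -> n = {math.ceil(n)}")
```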
Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi
2010-05-01
To estimate forest stand-scale water use, we assessed how sample sizes affect the confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 m subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. Likewise, the optimal sample sizes for JS did not change under different vapor pressure deficit conditions. In terms of E estimates, these results suggest that tree-to-tree variations in Fd differ among plots, and that the plot size used to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
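The Monte Carlo subsampling logic can be sketched as follows, with fabricated per-tree sap flux values standing in for the 58-tree census: draw subsamples of k trees and track how far the subsample mean strays from the full-census mean as k grows.

```python
import numpy as np

rng = np.random.default_rng(3)
fd = rng.lognormal(mean=0.0, sigma=0.4, size=58)   # per-tree sap flux (fabricated)
true_mean = fd.mean()                              # full-census "truth"

for k in (5, 10, 15, 25, 40):
    means = np.array([rng.choice(fd, k, replace=False).mean()
                      for _ in range(10_000)])
    err = np.percentile(np.abs(means / true_mean - 1), 95)
    print(f"k={k:2d}: 95th-percentile relative error = {err:.1%}")
```

The sample size beyond which this error curve flattens plays the role of the "optimal sample size" discussed in the abstract.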
Henderson, Peter A; Magurran, Anne E
2010-05-22
Species abundance distributions (SADs) are widely used as a tool for summarizing ecological communities but may have different shapes, depending on the currency used to measure species importance. We develop a simple plotting method that links SADs in the alternative currencies of numerical abundance and biomass and is underpinned by testable predictions about how organisms occupy physical space. When log numerical abundance is plotted against log biomass, the species lie within an approximately triangular region. Simple energetic and sampling constraints explain the triangular form. The dispersion of species within this triangle is the key to understanding why SADs of numerical abundance and biomass can differ. Given regular or random species dispersion, we can predict the shape of the SAD for both currencies under a variety of sampling regimes. We argue that this dispersion pattern will lie between regular and random for the following reasons. First, regular dispersion patterns will result if communities are comprised groups of organisms that use different components of the physical space (e.g. open water, the sea bed surface or rock crevices in a marine fish assemblage), and if the abundance of species in each of these spatial guilds is linked to the way individuals of varying size use the habitat. Second, temporal variation in abundance and sampling error will tend to randomize this regular pattern. Data from two intensively studied marine ecosystems offer empirical support for these predictions. Our approach also has application in environmental monitoring and the recognition of anthropogenic disturbance, which may change the shape of the triangular region by, for example, the loss of large body size top predators that occur at low abundance.
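A minimal sketch of the plotting method itself, using simulated placeholder species values (the lognormal abundances and body masses below are assumptions): plot log numerical abundance against log biomass per species and inspect the roughly triangular region the authors predict.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
n = rng.lognormal(3, 2, 120)              # individuals per species
body_mass = rng.lognormal(0, 1.5, 120)    # mean individual mass per species
biomass = n * body_mass                   # species biomass

plt.scatter(np.log10(n), np.log10(biomass), s=12)
plt.xlabel("log10 numerical abundance")
plt.ylabel("log10 biomass")
plt.show()
```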
ERIC Educational Resources Information Center
Alshamali, Mahmoud A.; Daher, Wajeeh M.
2016-01-01
This study aimed at identifying the levels of scientific reasoning of upper primary stage (grades 4-7) science teachers based on their use of a problem-solving strategy. The study sample (N = 138; 32 % male and 68 % female) was randomly selected using stratified sampling from an original population of 437 upper primary school teachers. The…
Project EDDIE: Improving Big Data skills in the classroom
NASA Astrophysics Data System (ADS)
Soule, D. C.; Bader, N.; Carey, C.; Castendyk, D.; Fuller, R.; Gibson, C.; Gougis, R.; Klug, J.; Meixner, T.; Nave, L. E.; O'Reilly, C.; Richardson, D.; Stomberg, J.
2015-12-01
High-frequency sensor-based datasets are driving a paradigm shift in the study of environmental processes. The online availability of high-frequency data creates an opportunity to engage undergraduate students in primary research by using large, long-term, and sensor-based, datasets for science courses. Project EDDIE (Environmental Data-Driven Inquiry & Exploration) is developing flexible classroom activity modules designed to (1) improve quantitative and reasoning skills; (2) develop the ability to engage in scientific discourse and argument; and (3) increase students' engagement in science. A team of interdisciplinary faculty from private and public research universities and undergraduate institutions have developed these modules to meet a series of pedagogical goals that include (1) developing skills required to manipulate large datasets at different scales to conduct inquiry-based investigations; (2) developing students' reasoning about statistical variation; and (3) fostering accurate student conceptions about the nature of environmental science. The modules cover a wide range of topics, including lake physics and metabolism, stream discharge, water quality, soil respiration, seismology, and climate change. Assessment data from questionnaires and recordings collected during the 2014-2015 academic year show that our modules are effective at making students more comfortable analyzing data. Continued development is focused on improving student learning outcomes with statistical concepts like variation, randomness and sampling, and fostering scientific discourse during module engagement. In the coming year, increased sample size will expand our assessment opportunities to comparison groups in upper division courses and allow for evaluation of module-specific conceptual knowledge learned. This project is funded by an NSF TUES grant (NSF DEB 1245707).
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by whether the change point is assumed known or unknown. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but those invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
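The core simulation step can be sketched as follows, assuming the usual reverse catalytic model and illustrative rates (the seroconversion rates l1 and l2, seroreversion rate rho, and change point tau below are placeholders): the seropositivity probability by age is computed piecewise around the change point, and Bernoulli serostatus data are drawn from it.

```python
import numpy as np

def seroprev(age, l1, l2, rho, tau):
    """P(seropositive | age) under a reverse catalytic model whose SCR
    dropped from l1 to l2 at `tau` years before the survey."""
    eq2 = l2 / (l2 + rho)                                   # new equilibrium
    young = eq2 * (1 - np.exp(-(l2 + rho) * age))           # born after change
    # Older individuals: seroprevalence at the change point under l1,
    # then relaxation toward the new equilibrium for tau years
    p_change = (l1 / (l1 + rho)) * (1 - np.exp(-(l1 + rho) * (age - tau)))
    old = eq2 + (p_change - eq2) * np.exp(-(l2 + rho) * tau)
    return np.where(age <= tau, young, old)

rng = np.random.default_rng(5)
ages = rng.integers(1, 80, size=500)                        # survey sample
status = rng.random(500) < seroprev(ages, 0.1, 0.02, 0.01, 10)
print("observed seroprevalence:", status.mean())
```

Repeating such draws at a grid of sample sizes, fitting models with and without the SCR reduction, and counting rejections traces out the power curves the calculator approximates.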
Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology
Vavrek, Matthew J.
2015-01-01
Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
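The subsampling experiment can be sketched as follows, using simulated log-log measurements (the true slope of 0.85 and the noise level are assumptions, not the alligator data): refit the regression on subsamples of size n and count how often the test of isometry (slope = 1) fails to detect the real allometry, i.e., the Type II error rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
log_skull = rng.uniform(1, 3, 200)
log_snout = 0.85 * log_skull + rng.normal(0, 0.08, 200)   # true (negative) allometry

def misses_allometry(k):
    idx = rng.choice(200, k, replace=False)
    fit = stats.linregress(log_skull[idx], log_snout[idx])
    t = (fit.slope - 1) / fit.stderr          # test H0: slope == 1 (isometry)
    p = 2 * stats.t.sf(abs(t), df=k - 2)
    return p >= 0.05                          # failed to detect allometry

for k in (10, 20, 50, 100):
    rate = np.mean([misses_allometry(k) for _ in range(2000)])
    print(f"n={k:3d}: Type II error rate = {rate:.2f}")
```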
Matters of Size: Obesity as a Diversity Issue in the Field of Early Childhood.
ERIC Educational Resources Information Center
Jalongo, Mary Renck
1999-01-01
Notes that obesity is the primary reason for peer rejection in America; examines effects of obesity on wellness, self-esteem, peer relationships, and social status of children/families and early childhood teachers. Suggests that early childhood educators: (1) educate all stakeholders about nutrition and body size issues; (2) speak out against…
Use of Statistical Heuristics in Everyday Inductive Reasoning.
ERIC Educational Resources Information Center
Nisbett, Richard E.; And Others
1983-01-01
In everyday reasoning, people use statistical heuristics (judgmental tools that are rough intuitive equivalents of statistical principles). Use of statistical heuristics is more likely when (1) sampling is clear, (2) the role of chance is clear, (3) statistical reasoning is normative for the event, or (4) the subject has had training in…
Development and Validation of the Self-Harm Reasons Questionnaire
ERIC Educational Resources Information Center
Lewis, Stephen P.; Santor, Darcy A.
2008-01-01
Understanding the reasons for self-harm (SH) may be paramount for the identification and treatment of SH behavior. Presently, the psychometric properties for SH reason questionnaires are generally unknown or tested only in non-inpatient samples. Existing inpatient measures may have limited generalizability and do not examine SH apart from an…
Moral Reasoning: Its Relation to Logical Thinking and Role-Taking.
ERIC Educational Resources Information Center
Smith, Marion E.
1978-01-01
In a sample of 100 children, aged 8-14, there was a clear association between consolidated concrete operational thinking and Kohlberg's Stage 2 moral reasoning, and some evidence that, in order of development, logical thinking precedes role-taking, which precedes moral reasoning, at corresponding levels of conceptual complexity. (Author/SJL)
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes, such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging of under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy improves remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply at a lower spatial scale should be further investigated.
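The interpolation step can be illustrated with a minimal inverse-distance-weighted (IDW) scheme; the study's actual interpolator is not specified here, and the coordinates and cattle counts below are invented for illustration.

```python
import numpy as np

def idw(xy_known, z_known, xy_missing, power=2.0):
    """Inverse-distance-weighted estimate at unsampled locations."""
    d = np.linalg.norm(xy_missing[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

sampled_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sampled_cattle = np.array([120.0, 300.0, 80.0, 250.0])  # per sampled parish
unsampled_xy = np.array([[0.5, 0.5], [0.9, 0.2]])       # parish centroids
print(idw(sampled_xy, sampled_cattle, unsampled_xy))    # filled-in estimates
```

The filled-in values can then be area-weighted during aggregation to higher administrative levels.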
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probabilities of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
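For the two-group comparison of means these principles describe, the standard normal-approximation formula is n per group = 2σ²(z₁₋α/₂ + z₁₋β)²/δ²; a minimal implementation follows, with illustrative blood-pressure numbers.

```python
import math
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Two-group comparison of means, normal approximation."""
    z_a = norm.ppf(1 - alpha / 2)  # two-sided Type 1 error criterion
    z_b = norm.ppf(power)          # power = 1 - beta
    return math.ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

# Detect a 5 mmHg difference, SD 12 mmHg, 80% power, alpha = 0.05:
print(n_per_group(delta=5, sigma=12))  # -> 91 per group
```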
Sample size and power for cost-effectiveness analysis (part 1).
Glick, Henry A
2011-03-01
Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed, and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables, such as changes in blood pressure or weight, are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described, and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the ways in which the data for these calculations may be derived are discussed.
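One common formulation (an assumption here, not necessarily the article's notation) works with incremental net monetary benefit, NMB = λ·ΔE − ΔC, and powers the trial to detect NMB > 0; the inputs below are invented.

```python
import math
from scipy.stats import norm

def n_per_arm(d_effect, d_cost, sd_e, sd_c, rho, wtp, alpha=0.05, power=0.80):
    """Sample size per arm to detect positive incremental net monetary benefit."""
    var_nmb = wtp**2 * sd_e**2 + sd_c**2 - 2 * wtp * rho * sd_e * sd_c
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * var_nmb * z**2 / (wtp * d_effect - d_cost) ** 2)

# QALY gain 0.1, extra cost $2,000, willingness to pay $50,000 per QALY:
print(n_per_arm(0.1, 2000, sd_e=0.3, sd_c=6000, rho=0.1, wtp=50000))  # ~424
```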
Estimation of sample size and testing power (Part 4).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-01-01
Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size, based on known information and statistical knowledge, is of great significance. This article introduces methods of sample size estimation for difference tests under a one-factor, two-level design, covering both quantitative and qualitative data; it presents the estimation formulas and their realization, both directly from the formulas and through the POWER procedure of SAS software. In addition, this article presents worked examples to guide researchers in implementing the repetition principle during the research design phase.
Mayer, B; Muche, R
2013-01-01
Animal studies are highly relevant for basic medical research, although their use is publicly controversial. From a biometrical point of view, an optimal sample size should therefore be sought for these projects. Statistical sample size calculation is usually the appropriate methodology when planning medical research projects. However, the required information is often not valid, or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.
Rare high-impact disease variants: properties and identifications.
Park, Leeyoung; Kim, Ju Han
2016-03-21
Although many genome-wide association studies have been performed, the identification of disease polymorphisms remains important. It is now suspected that many rare disease variants induce the association signals of common variants in linkage disequilibrium (LD). Based on recent developments in genetic models, the current study provides explanations for the existence of rare variants with high impacts and common variants with low impacts. Disease variants are neither necessary nor sufficient due to gene-gene or gene-environment interactions. A new method was developed, based on theoretical aspects, to identify both rare and common disease variants by their genotypes. Common disease variants were identified with relatively small odds ratios and relatively small sample sizes, except for specific situations in which the disease variants were in strong LD with a variant of higher frequency. Rare disease variants with small impacts were difficult to identify without increasing sample sizes; however, the method was reasonably accurate for rare disease variants with high impacts. For rare variants, dominant variants generally showed better Type II error rates than recessive variants; however, the trend was reversed for common variants. Type II error rates increased in gene regions containing more than two disease variants, because the more common variant, rather than both disease variants, was usually identified. The proposed method would be useful for identifying common disease variants with small impacts and rare disease variants with large impacts when disease variants have the same effects on disease presentation.
Optical design considerations when imaging the fundus with an adaptive optics correction
NASA Astrophysics Data System (ADS)
Wang, Weiwei; Campbell, Melanie C. W.; Kisilak, Marsha L.; Boyd, Shelley R.
2008-06-01
Adaptive Optics (AO) technology has been used in confocal scanning laser ophthalmoscopes (CSLO), which are analogous to confocal scanning laser microscopes (CSLM) with the advantages of real-time imaging, increased image contrast, resistance to image degradation by scattered light, and improved optical sectioning. With AO, the instrument-eye system can have low enough aberrations for the optical quality to be limited primarily by diffraction. Diffraction-limited, high-resolution imaging would be beneficial in the understanding and early detection of eye diseases such as diabetic retinopathy. However, to maintain diffraction-limited imaging, sufficient pixel sampling over the field of view is required, resulting in the need for increased data acquisition rates for larger fields. Imaging over smaller fields may be a disadvantage with clinical subjects because of fixation instability and the need to examine larger areas of the retina. Reduction in field size also reduces the amount of light sampled per pixel, increasing photon noise. For these reasons, we considered an instrument design with a larger field of view. When choosing scanners to be used in an AOCSLO, the ideal frame rate should be above the flicker fusion rate for the human observer and would also allow user control of targets projected onto the retina. In our AOCSLO design, we have studied the tradeoffs between field size, frame rate and factors affecting resolution. We will outline optical approaches to overcome some of these tradeoffs and still allow detection of the earliest changes in the fundus in diabetic retinopathy.
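The field-size/data-rate tradeoff can be made concrete with a back-of-envelope calculation: at the diffraction limit, Nyquist sampling fixes the pixel pitch, so the pixel rate grows with the square of the field. The wavelength, pupil size and frame rate below are assumptions for illustration, not the design's parameters.

```python
import math

WAVELENGTH = 840e-9  # m, near-infrared imaging beam (assumed)
PUPIL = 6e-3         # m, dilated pupil diameter (assumed)
FRAME_RATE = 30.0    # Hz, above flicker fusion (assumed)

res_rad = 1.22 * WAVELENGTH / PUPIL  # diffraction-limited angular resolution
for field_deg in (1.0, 2.0, 4.0):
    field_rad = math.radians(field_deg)
    pixels = 2 * field_rad / res_rad  # Nyquist: 2 samples per resolution element
    print(f"{field_deg} deg field: {pixels**2 * FRAME_RATE:.2e} pixels/s")
```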
Cramer, Holger; Haller, Heidemarie; Dobos, Gustav; Lauche, Romy
2016-01-01
A reasonable estimation of expected dropout rates is vital for adequate sample size calculations in randomized controlled trials (RCTs). Underestimating expected dropout rates increases the risk of false negative results, while overestimating rates results in overly large sample sizes, raising both ethical and economic issues. To estimate expected dropout rates in RCTs on yoga interventions, MEDLINE/PubMed, Scopus, IndMED, and the Cochrane Library were searched through February 2014; a total of 168 RCTs were meta-analyzed. Overall dropout rate was 11.42% (95% confidence interval [CI] = 10.11%, 12.73%) in the yoga groups; rates were comparable in usual care and psychological control groups and were slightly higher in exercise control groups (rate = 14.53%; 95% CI = 11.56%, 17.50%; odds ratio = 0.82; 95% CI = 0.68, 0.98; p = 0.03). For RCTs with durations above 12 weeks, dropout rates in yoga groups increased to 15.23% (95% CI = 11.79%, 18.68%). The upper border of 95% CIs for dropout rates commonly was below 20% regardless of study origin, health condition, gender, age groups, and intervention characteristics; however, it exceeded 40% for studies on HIV patients or heterogeneous age groups. In conclusion, dropout rates can be expected to be less than 15 to 20% for most RCTs on yoga interventions. Yet dropout rates beyond 40% are possible depending on the participants' sociodemographic and health condition. PMID:27413387
Compression fatigue behavior and failure mechanism of porous titanium for biomedical applications.
Li, Fuping; Li, Jinshan; Huang, Tingting; Kou, Hongchao; Zhou, Lian
2017-01-01
Porous titanium and its alloys are believed to be among the most attractive biomaterials for orthopedic implant applications. In the present work, porous pure titanium with 50-70% porosity and different pore sizes was fabricated by diffusion bonding. Compression fatigue behavior was systematically studied along the out-of-plane direction. The results show that the porous pure titanium has an anisotropic pore structure, and the microstructure consists of fine-grained equiaxed α phase with a few twins in some α grains. Porosity and pore size have some effect on the S-N curve, but this effect is negligible when the fatigue strength is normalized by the yield stress. The relationship between normalized fatigue strength and fatigue life conforms to a power law. The compression fatigue behavior is characterized by strain accumulation. Porous titanium experiences uniform deformation throughout the entire sample when the fatigue cycle count is lower than a critical value (N_T). When fatigue cycles exceed N_T, strain accumulates rapidly and a single collapse band forms at a certain angle to the loading direction, leading to the sudden failure of the testing sample. Both cyclic ratcheting and fatigue crack growth contribute to the fatigue failure mechanism, with cyclic ratcheting being the dominant one. Porous titanium possesses a higher normalized fatigue strength, in the range of 0.5-0.55 at 10^6 cycles. The reasons for the higher normalized fatigue strength were analyzed based on the microstructure and fatigue failure mechanism. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lampit, Amit; Ebster, Claus; Valenzuela, Michael
2014-01-01
Cognitive skills are important predictors of job performance, but the extent to which computerized cognitive training (CCT) can improve job performance in healthy adults is unclear. We report, for the first time, that a CCT program aimed at attention, memory, reasoning and visuo-spatial abilities can enhance productivity in healthy younger adults on bookkeeping tasks with high relevance to real-world job performance. 44 business students (77.3% female, mean age 21.4 ± 2.6 years) were assigned to either (a) 20 h of CCT, or (b) 20 h of computerized arithmetic training (active control) by a matched sampling procedure. Both interventions were conducted over a period of 6 weeks, with 3-4 one-hour sessions per week. Transfer of skills to performance on a 60-min paper-based bookkeeping task was measured at three time points: baseline, after 10 h, and after 20 h of training. Repeated measures ANOVA found a significant Group × Time effect on productivity (F = 7.033, df = 1.745, 73.273, p = 0.003), with a significant interaction at both the 10-h (relative Cohen's effect size = 0.38, p = 0.014) and 20-h time points (relative Cohen's effect size = 0.40, p = 0.003). No significant effects were found on accuracy or on Conners' Continuous Performance Test, a measure of sustained attention. The results are discussed in reference to previous findings on the relationship between brain plasticity and job performance. Generalization of results requires further study.
A sequential bioequivalence design with a potential ethical advantage.
Fuglsang, Anders
2014-07-01
This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.
NASA Astrophysics Data System (ADS)
Imamura, M.; Kubo, T.; Takumi, K.
2016-12-01
Rheology of the lower mantle largely depends on the grain-size evolution in constituent minerals. The pioneering work on grain growth kinetics in MgSiO3 bridgmanite and MgO periclase (Yamazaki et al., 1996) raised the problem that the grain growth rate is too slow to explain the lower-mantle viscosity. This inconsistency may arise from effects of elastic stress due to the eutectoid transformation (e.g., Solomatov et al., 2002), and it may be difficult to extrapolate the slow kinetics obtained to geological timescales. We conducted grain growth experiments on pyrolytic material at 25-27 GPa and 1600-1950°C for 30-3000 min using a Kawai-type high pressure apparatus at Kyushu University. Four phases of bridgmanite, ferro-periclase, Ca-perovskite and majoritic garnet were present in recovered samples annealed at 25 GPa. To avoid the effects of the eutectoid texture, we took the grain growth data only from samples exhibiting a relatively homogeneous equi-granular texture. That was achieved after annealing for 30 minutes at 1800-1950°C (these grain sizes were used as d0), and not achieved even after annealing for 3000 minutes at 1600°C. We preliminarily obtained kinetic parameters of n = 4.9 and H* ≈ 420 kJ/mol for bridgmanite, and n = 4.7 and H* ≈ 160 kJ/mol for ferro-periclase. The ratio of grain sizes of bridgmanite and ferro-periclase is almost constant during the grain growth process. These results indicate faster kinetics compared to the previous study, and can be reasonably interpreted as grain growth by Ostwald ripening. On the other hand, three phases without majoritic garnet were present at the higher pressure of 27 GPa and 1800°C, in which the grain size was slightly larger, probably due to the smaller proportion of the secondary phases. When extrapolating the grain growth kinetics obtained in the four phases, the grain size of bridgmanite is roughly estimated to be 4-50 µm at 800-1200°C and 200-600 µm at 1600-2000°C over 10^8 years. These grain sizes may explain the lower-mantle viscosity in the diffusion creep regime if we consider the effects of deformation-induced grain growth in the convecting mantle (Hiraga et al., 2010).
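As a sketch of how such kinetic parameters extrapolate, the standard rate law d^n − d0^n = k0·exp(−H*/RT)·t can be evaluated over geological time; n and H* below follow the abstract's preliminary bridgmanite values, but k0 and d0 are hypothetical placeholders, so the printed sizes are only illustrative.

```python
import math

R = 8.314          # J/(mol K)
N, H = 4.9, 420e3  # growth exponent and activation enthalpy (bridgmanite)
K0 = 1e-20         # m^N / s, HYPOTHETICAL pre-exponential constant
D0 = 1e-6          # m, initial grain size (assumed)

def grain_size(T_kelvin, t_seconds):
    """Grain size from the rate law d^N - D0^N = K0 * exp(-H/(R*T)) * t."""
    k = K0 * math.exp(-H / (R * T_kelvin))
    return (D0**N + k * t_seconds) ** (1 / N)

t = 1e8 * 3.156e7  # 10^8 years in seconds
for T_c in (1000, 1800):
    print(f"{T_c} degC: {grain_size(T_c + 273.15, t) * 1e6:.0f} um")
```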
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adekola, A.S.; Colaresi, J.; Douwen, J.
2015-07-01
Environmental scientific research requires a detector with sensitivity low enough to reveal the presence of any contaminant in the sample within a reasonable counting time. Canberra developed the germanium detector geometry called the Small Anode Germanium (SAGe) Well detector, which is now available commercially. The SAGe Well detector is a new type of low-capacitance germanium well detector, manufactured using small anode technology, capable of advancing many environmental scientific research applications. The performance of this detector has been evaluated for a range of sample sizes and geometries counted inside the well and on the end cap of the detector. The detector has energy resolution performance similar to semi-planar detectors, and offers significant improvement over existing coaxial and well detectors. Energy resolution of 750 eV Full Width at Half Maximum (FWHM) at 122 keV γ-ray energy and 2.0-2.3 keV FWHM at 1332 keV γ-ray energy is guaranteed for detector volumes up to 425 cm³. The SAGe Well detector offers an optional 28 mm well diameter with the same energy resolution as the standard 16 mm well. Such outstanding resolution performance will benefit environmental applications in revealing the detailed radionuclide content of samples, particularly at low energy, and will enhance detection sensitivity, resulting in reduced counting time. The detector is compatible with electric coolers without any sacrifice in performance and supports the Canberra mathematical efficiency calibration methods (In Situ Object Calibration Software, or ISOCS, and Laboratory Source-less Calibration Software, or LABSOCS). In addition, the SAGe Well detector supports the true coincidence summing available in the ISOCS/LABSOCS framework. The improved resolution performance greatly enhances the detection sensitivity of this new detector for a range of sample sizes and geometries counted inside the well, resulting in lower minimum detectable concentrations compared to traditional well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications.
Sample Size Determination for One- and Two-Sample Trimmed Mean Tests
ERIC Educational Resources Information Center
Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng
2008-01-01
Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…
The cost of large numbers of hypothesis tests on power, effect size and sample size.
Lazzeroni, L C; Ray, A
2012-01-01
Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
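The quoted 13% and 70% figures can be recovered in a few lines under a Bonferroni correction and normal approximation, where the required sample size scales with (z_{α/(2m)} + z_β)²; this sketch assumes that setup rather than the authors' exact calculator.

```python
from scipy.stats import norm

def rel_n(m, alpha=0.05, power=0.80):
    """Relative sample size for 80% power with m Bonferroni-corrected tests."""
    return (norm.isf(alpha / (2 * m)) + norm.ppf(power)) ** 2

print(rel_n(10) / rel_n(1))      # ~1.70: ten tests vs a single test
print(rel_n(1e7) / rel_n(1e6))   # ~1.13: ten million vs one million tests
```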
SignalPlant: an open signal processing software platform.
Plesinger, F; Jurco, J; Halamek, J; Jurak, P
2016-07-01
The growing technical standard of acquisition systems allows the capture of large records, often reaching gigabytes or more in size, as is the case with whole-day electroencephalograph (EEG) recordings, for example. Although current 64-bit software for signal processing is able to process (e.g. filter, analyze) such data, visual inspection and labeling will probably suffer from rather long latency during the rendering of large portions of recorded signals. For this reason, we have developed SignalPlant, a stand-alone application for signal inspection, labeling and processing. The main motivation was to supply investigators with a tool allowing fast and interactive work with large multichannel records produced by EEG, electrocardiograph and similar devices. The rendering latency was compared with EEGLAB and proved significantly faster when displaying an image from a large number of samples (e.g. 163 times faster for 75 × 10^6 samples). The presented SignalPlant software is available free and does not depend on any other computation software. Furthermore, it can be extended with plugins by third parties, ensuring its adaptability to future research tasks and new data formats.
The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.
Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S
2016-10-01
The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
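The conical structure is easy to see numerically: in the hedged sketch below, data are drawn from a single-spike covariance model and the angle between the leading sample eigenvector and its population counterpart is tracked as the dimension grows. The settings are illustrative, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def angle_deg(n, d, spike):
    """Angle between sample and population leading eigenvectors, single spike."""
    sd = np.ones(d)
    sd[0] = np.sqrt(spike)                 # population eigenvector is e1
    X = rng.standard_normal((n, d)) * sd   # rows are observations
    G = X @ X.T / n                        # n x n Gram matrix (cheap when d >> n)
    _, U = np.linalg.eigh(G)
    v = X.T @ U[:, -1]                     # leading sample eigenvector, unnormalized
    v /= np.linalg.norm(v)
    return np.degrees(np.arccos(min(abs(v[0]), 1.0)))

for d in (100, 1000, 10000):
    print(d, angle_deg(n=50, d=d, spike=100.0))  # angle grows with d/(n*spike)
```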
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm²), distribution of benthic items, and item density affected the bias and precision of density estimates, the detection probability of items, and the time-costs. Whether items were distributed randomly or clumped, bias decreased and precision increased with increasing sample size, and improved slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500-1,000 items/m²). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
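A minimal Monte Carlo version of this kind of simulation is sketched below; the plot size, item density, core area and the clumping mechanism (a Thomas-like parent-offspring process) are all assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_prob(density, core_area_cm2, clumped, n_cores=2000):
    """P(core captures >= 1 item) on a simulated 5 m x 5 m plot."""
    core_r = np.sqrt(core_area_cm2 / 1e4 / np.pi)  # core radius in metres
    side = 5.0
    n_items = int(density * side**2)               # items/m^2 times plot area
    if clumped:
        parents = rng.uniform(0, side, (20, 2))    # cluster centres
        pts = parents[rng.integers(0, 20, n_items)]
        pts = pts + rng.normal(0, 0.2, (n_items, 2))
    else:
        pts = rng.uniform(0, side, (n_items, 2))
    centers = rng.uniform(1, side - 1, (n_cores, 2))
    hits = sum(((pts - c) ** 2).sum(axis=1).min() < core_r**2 for c in centers)
    return hits / n_cores

for clumped in (False, True):
    print("clumped" if clumped else "random", detection_prob(500, 45, clumped))
```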
Jeffrey H. Gove
2003-01-01
Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from the statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only a single layer is of interest, a simple random sampling procedure should be used, based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained with simple random sampling procedures.
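The optimal allocation in conclusion (3) is the classical Neyman rule, n_h = n·N_h·S_h / Σ N_k·S_k; a short sketch with invented depth strata follows.

```python
import numpy as np

def neyman_allocation(n_total, stratum_sizes, stratum_sds):
    """Optimal (Neyman) allocation of n_total samples across strata."""
    w = np.asarray(stratum_sizes, float) * np.asarray(stratum_sds, float)
    return np.round(n_total * w / w.sum()).astype(int)

# Three depth strata of equal extent with decreasing moisture variability:
print(neyman_allocation(60, [1, 1, 1], [4.0, 2.5, 1.0]))  # -> [32 20  8]
```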
NASA Astrophysics Data System (ADS)
Lin, Che-Tseng; Huang, Tzu-Yang; Huang, Jau-Jiun; Wu, Nae-Lih; Leung, Man-kit
2016-10-01
Multifunctional co-poly(amic acid) (PAmA) containing pyrene and carboxylic acid side-chains is developed as a binder for the recycled kerf-loss Si-Ni-SiC composite anode, markedly enhancing the capacity retention of the lithium-ion battery. In a long-cycle test of 300 lithiation/delithiation cycles, 79% capacity retention is achieved. Considering that the recycled kerf-loss Si sample contains 38 wt% inactive micro-sized SiC abrasive particles, the achieved capacity of 648 mAh g⁻¹ is reasonably high in comparison to other reported values. A small anode thickness expansion of 43% is found in a 100-cycle test, reflecting that the PAmA binder creates strong interconnections among the silicon particles, conductive carbons and the copper electrode.
Formation of Minor Phases in a Nickel-Based Disk Superalloy
NASA Technical Reports Server (NTRS)
Gabb, T. P.; Garg, A.; Miller, D. R.; Sudbrack, C. K.; Hull, D. R.; Johnson, D.; Rogers, R. B.; Gayda, J.; Semiatin, S. L.
2012-01-01
The minor phases of the powder metallurgy disk superalloy LSHR were studied. Samples were consistently heat treated at three different temperatures for long times to approximate equilibrium. Additional heat treatments were also performed for shorter times, to assess non-equilibrium conditions. Minor phases including MC carbides, M23C6 carbides, M3B2 borides, and sigma were identified. Their transformation temperatures, lattice parameters, compositions, average sizes and total area fractions were determined, and compared to estimates from an existing phase-prediction software package. Parameters measured at equilibrium sometimes agreed reasonably well with the software estimates, with potential for further improvement. Results for shorter times, representing non-equilibrium conditions, indicated significant potential for extending the software to such conditions, which are more commonly observed during heat treatments and service at high temperatures in disk applications.