Sample records for realistic sample sizes

  1. Realistic weight perception and body size assessment in a racially diverse community sample of dieters.

    PubMed

    Cachelin, F M; Striegel-Moore, R H; Elder, K A

    1998-01-01

    Recently, a shift in obesity treatment away from emphasizing ideal weight loss goals to establishing realistic weight loss goals has been proposed; yet, what constitutes "realistic" weight loss for different populations is not clear. This study examined notions of realistic shape and weight as well as body size assessment in a large community-based sample of African-American, Asian, Hispanic, and white men and women. Participants were 1893 survey respondents who were all dieters and primarily overweight. Groups were compared on various variables of body image assessment using silhouette ratings. No significant race differences were found in silhouette ratings, nor in perceptions of realistic shape or reasonable weight loss. Realistic shape and weight ratings by both women and men were smaller than current shape and weight but larger than ideal shape and weight ratings. Compared with male dieters, female dieters considered greater weight loss to be realistic. Implications of the findings for the treatment of obesity are discussed.

  2. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
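
    A minimal Monte Carlo sketch of the approach described in this record (written in Python rather than the article's R, with made-up effect sizes, predictor correlation, and candidate sample sizes): simulate data from an assumed regression model at each candidate N, refit the model many times, and take the proportion of significant tests on the coefficient of interest as the estimated power.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    def power_for_n(n, beta1=0.25, beta2=0.10, n_sims=1000, alpha=0.05):
        """Estimated power for the test on x1's coefficient at sample size n."""
        hits = 0
        for _ in range(n_sims):
            x1 = rng.normal(size=n)
            x2 = 0.3 * x1 + rng.normal(size=n)      # mildly correlated predictors
            y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)
            X = sm.add_constant(np.column_stack([x1, x2]))
            fit = sm.OLS(y, X).fit()
            hits += fit.pvalues[1] < alpha          # coefficient of interest
        return hits / n_sims

    for n in (50, 100, 150, 200):
        print(n, round(power_for_n(n), 3))
    ```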

  3. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
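
    The bootstrap step described here can be illustrated with a short, hedged sketch (synthetic prey libraries, diet proportions, and bootstrap sample sizes; this is not Bromaghin's algorithm for choosing those sizes): pseudo-predator signatures are formed by resampling each prey library with replacement and mixing the bootstrap means according to an assumed diet.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_fa = 8                                    # number of fatty acids tracked
    prey = {                                    # synthetic prey signature libraries
        "seal":   rng.dirichlet(np.ones(n_fa) * 5, size=60),
        "walrus": rng.dirichlet(np.ones(n_fa) * 3, size=40),
        "whale":  rng.dirichlet(np.ones(n_fa) * 4, size=50),
    }
    diet = {"seal": 0.6, "walrus": 0.3, "whale": 0.1}   # assumed "true" diet
    boot_n = {"seal": 20, "walrus": 15, "whale": 15}    # bootstrap sample sizes

    def pseudo_predator():
        sig = np.zeros(n_fa)
        for sp, lib in prey.items():
            idx = rng.integers(0, len(lib), size=boot_n[sp])  # resample with replacement
            sig += diet[sp] * lib[idx].mean(axis=0)
        return sig / sig.sum()                  # renormalize to a proportion vector

    pseudo = np.array([pseudo_predator() for _ in range(500)])
    print(pseudo.mean(axis=0).round(3))
    ```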

  4. Global Sensitivity Analysis with Small Sample Sizes: Ordinary Least Squares Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Michael J.; Liu, Wei; Sivaramakrishnan, Raghu

    2016-12-21

    A new version of global sensitivity analysis is developed in this paper. This new version, coupled with tools from statistics, machine learning, and optimization, can devise small sample sizes that allow for the accurate ordering of sensitivity coefficients for the first 10-30 most sensitive chemical reactions in complex chemical-kinetic mechanisms, and is particularly useful for studying the chemistry in realistic devices. A key part of the paper is calibration of these small samples. Because these small sample sizes are developed for use in realistic combustion devices, the calibration is done over the ranges of conditions in such devices, with a test case being the operating conditions of a compression ignition engine studied earlier. Compression ignition engines operate under low-temperature combustion conditions with quite complicated chemistry, making this calibration difficult and leading to the possibility of false positives and false negatives in the ordering of the reactions. An important aspect of the paper is therefore showing how to handle the trade-off between false positives and false negatives using ideas from the multiobjective optimization literature. The combination of the new global sensitivity method and the calibration yields sample sizes approximately a factor of 10 smaller than were available with our previous algorithm.
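
    As a hedged illustration of the general idea (ordering sensitivity coefficients from an ordinary least squares fit to a small sample), not the authors' calibrated procedure, the sketch below samples perturbed inputs, evaluates a toy stand-in for an expensive kinetics model, and ranks "reactions" by their standardized regression coefficients.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_reactions, n_samples = 30, 60             # small sample relative to the input count

    true_sens = np.zeros(n_reactions)
    true_sens[:6] = [2.0, -1.5, 1.0, 0.8, -0.6, 0.4]   # only a few reactions matter

    X = rng.uniform(-1, 1, size=(n_samples, n_reactions))  # perturbed rate multipliers
    y = X @ true_sens + 0.1 * rng.normal(size=n_samples)   # toy response, e.g. log ignition delay

    Xs = (X - X.mean(0)) / X.std(0)             # standardize the inputs
    design = np.column_stack([np.ones(n_samples), Xs])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    order = np.argsort(-np.abs(coef[1:]))       # most sensitive reactions first
    print(order[:10])
    ```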

  5. Bayesian Modal Estimation of the Four-Parameter Item Response Model in Real, Realistic, and Idealized Data Sets.

    PubMed

    Waller, Niels G; Feuerstahler, Leah

    2017-01-01

    In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler item response theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in mirt (Chalmers, 2012).
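
    For reference, the four-parameter logistic item response function used by the 4PM can be written out directly (the parameter values below are illustrative, not estimates from the study): P(theta) = c + (d - c) / (1 + exp(-a(theta - b))), with discrimination a, difficulty b, lower asymptote c, and upper asymptote d.

    ```python
    import numpy as np

    def irf_4pl(theta, a, b, c, d):
        """Probability of a keyed response under the four-parameter logistic model."""
        return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

    theta = np.linspace(-3, 3, 7)
    print(irf_4pl(theta, a=1.5, b=0.0, c=0.10, d=0.95).round(3))
    ```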

  6. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature.

    PubMed

    Szucs, Denes; Ioannidis, John P A

    2017-03-01

    We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64-1.46) for nominally statistically significant results and D = 0.24 (0.11-0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.
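
    To make the power figures above concrete, the sketch below computes the analytic power of a two-sided, two-sample t-test for Cohen's d of 0.2, 0.5, and 0.8; the group size of 24 is an illustrative assumption, not a figure from the paper.

    ```python
    import numpy as np
    from scipy import stats

    def t_test_power(d, n_per_group, alpha=0.05):
        """Power of a two-sided two-sample t-test with equal group sizes."""
        df = 2 * n_per_group - 2
        ncp = d * np.sqrt(n_per_group / 2.0)        # noncentrality parameter
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

    for d in (0.2, 0.5, 0.8):
        print(d, round(t_test_power(d, n_per_group=24), 2))
    ```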

  7. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    PubMed

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
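
    A rough planning sketch in the spirit of this record (it is not the second-order PQL correction derived in the paper): compute the number of clusters per arm for a binary outcome from the usual design-effect formula, then inflate for unequal cluster sizes; the 14 per cent inflation mirrors the rule of thumb quoted in the abstract, and the proportions, mean cluster size, and ICC are assumptions.

    ```python
    from scipy import stats

    def clusters_per_arm(p1, p2, m_bar, icc, alpha=0.05, power=0.80,
                         varying_size_inflation=1.14):
        z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
        deff = 1 + (m_bar - 1) * icc                 # design effect for equal cluster sizes
        var_sum = p1 * (1 - p1) + p2 * (1 - p2)
        k = z**2 * var_sum * deff / (m_bar * (p1 - p2)**2)
        return k * varying_size_inflation            # allowance for varying cluster sizes

    print(round(clusters_per_arm(p1=0.30, p2=0.45, m_bar=20, icc=0.05), 1))
    ```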

  8. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature

    PubMed Central

    Szucs, Denes; Ioannidis, John P. A.

    2017-01-01

    We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience. PMID:28253258

  9. Simulation of Powder Layer Deposition in Additive Manufacturing Processes Using the Discrete Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herbold, E. B.; Walton, O.; Homel, M. A.

    2015-10-26

    This document serves as a final report to a small effort in which several improvements were added to the LLNL code GEODYN-L to develop Discrete Element Method (DEM) algorithms coupled to Lagrangian Finite Element (FE) solvers to investigate powder-bed formation problems for additive manufacturing. The results from these simulations will be assessed for inclusion as the initial conditions for Direct Metal Laser Sintering (DMLS) simulations performed with ALE3D. The algorithms were written and performed on parallel computing platforms at LLNL. The total funding level was 3-4 weeks of an FTE split amongst two staff scientists and one post-doc. The DEM simulations emulated, as much as was feasible, the physical process of depositing a new layer of powder over a bed of existing powder. The DEM simulations utilized truncated size distributions spanning realistic size ranges with a size distribution profile consistent with a realistic sample set. A minimum simulation sample size on the order of 40 particles square by 10 particles deep was utilized in these scoping studies in order to evaluate the potential effects of size segregation variation with distance displaced in front of a screed blade. A reasonable method for evaluating the problem was developed and validated. Several simulations were performed to show the viability of the approach. Future investigations will focus on running various simulations investigating powder particle sizing and screen geometries.

  10. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters usually is not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
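
    The cost-effectiveness formulas in this paper involve ICCs and cost-effect correlations; as a hedged illustration of the underlying optimal-design idea only, the sketch below uses the classic single-outcome result that, for a fixed budget, the efficient cluster size is sqrt((cost per cluster / cost per person) * (1 - ICC) / ICC). All numbers are assumptions.

    ```python
    import math

    def optimal_design(budget, cost_cluster, cost_person, icc):
        """Classic budget-constrained design: cluster size, then number of clusters."""
        n_per_cluster = math.sqrt((cost_cluster / cost_person) * (1 - icc) / icc)
        n_clusters = budget / (cost_cluster + n_per_cluster * cost_person)
        return n_per_cluster, n_clusters

    n, k = optimal_design(budget=100_000, cost_cluster=500, cost_person=25, icc=0.05)
    print(round(n, 1), round(k, 1))
    ```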

  11. Design and Calibration of a High Volume Cascade Impactor

    ERIC Educational Resources Information Center

    Gussman, R. A.; And Others

    1973-01-01

    The purpose of this study was to develop an air sampling device capable of classifying large quantities of airborne particulate matter into discrete size fractions. Such fractionation will facilitate chemical analysis of the various particulate pollutants and thereby provide a more realistic assessment of the effects of particulate matter on human beings. (BL)

  12. THE POWER TO DETECT A DIFFERENCE: DETERMINING SAMPLE SIZE REQUIREMENTS FOR EVALUATION OF REPRODUCTIVE/DEVELOPMENTAL EFFECTS FROM EXPOSURE TO COMPLEX MIXTURES OF DISINFECTION BYPRODUCTS

    EPA Science Inventory

    Toxicological assessment of environmentally-realistic complex mixtures of drinking-water disinfection byproducts (DBPs) are needed to address concerns raised by some epidemiological studies showing associations between exposure to chemically disinfected water and adverse reproduc...

  13. Meta-analysis of multiple outcomes: a multilevel approach.

    PubMed

    Van den Noortgate, Wim; López-López, José Antonio; Marín-Martínez, Fulgencio; Sánchez-Meca, Julio

    2015-12-01

    In meta-analysis, dependent effect sizes are very common. An example is where in one or more studies the effect of an intervention is evaluated on multiple outcome variables for the same sample of participants. In this paper, we evaluate a three-level meta-analytic model to account for this kind of dependence, extending the simulation results of Van den Noortgate, López-López, Marín-Martínez, and Sánchez-Meca Behavior Research Methods, 45, 576-594 (2013) by allowing for a variation in the number of effect sizes per study, in the between-study variance, in the correlations between pairs of outcomes, and in the sample size of the studies. At the same time, we explore the performance of the approach if the outcomes used in a study can be regarded as a random sample from a population of outcomes. We conclude that although this approach is relatively simple and does not require prior estimates of the sampling covariances between effect sizes, it gives appropriate mean effect size estimates, standard error estimates, and confidence interval coverage proportions in a variety of realistic situations.

  14. A Monte Carlo Approach to Unidimensionality Testing in Polytomous Rasch Models

    ERIC Educational Resources Information Center

    Christensen, Karl Bang; Kreiner, Svend

    2007-01-01

    Many statistical tests are designed to test the different assumptions of the Rasch model, but only few are directed at detecting multidimensionality. The Martin-Lof test is an attractive approach, the disadvantage being that its null distribution deviates strongly from the asymptotic chi-square distribution for most realistic sample sizes. A Monte…

  15. Sampling errors in the measurement of rain and hail parameters

    NASA Technical Reports Server (NTRS)

    Gertzman, H. S.; Atlas, D.

    1977-01-01

    Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution permitting FSD estimation of any parameters from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
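
    A worked illustration of the quantity discussed here (the exponential-distribution parameters and sample volume are placeholders, not values from the paper): for particles counted in size bins with Poisson fluctuations, the fractional standard deviation of X = sum_i N_i * c * D_i^n is sqrt(sum_i m_i x_i^2) / sum_i m_i x_i, where m_i is the expected count in bin i and x_i = c * D_i^n.

    ```python
    import numpy as np

    def fsd(n_exponent, lam=2.0, n0=8000.0, volume=1.0, d_max=6.0, bins=200):
        d = np.linspace(0.01, d_max, bins)                 # particle diameters (mm)
        dd = d[1] - d[0]
        m = n0 * np.exp(-lam * d) * dd * volume            # expected Poisson count per bin
        x = d ** n_exponent                                # per-particle contribution (c = 1)
        return np.sqrt(np.sum(m * x**2)) / np.sum(m * x)

    for n_exp in (0, 3, 6):      # e.g. number concentration, mass, radar reflectivity
        print(n_exp, round(fsd(n_exp), 3))
    ```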

  16. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
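
    A small simulation in the spirit of this study (a toy two-stage matrix, not the authors' demographic data) shows the Jensen's-inequality effect: vital rates are re-estimated from binomial and Poisson samples of n individuals, and the mean dominant eigenvalue (lambda) of the resulting matrices is compared with the value from the true rates.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def build_matrix(s_juv, g, s_adult, fec):
        # Two-stage model: juveniles survive (s_juv) and may mature (g);
        # adults survive (s_adult) and reproduce (fec).
        return np.array([[s_juv * (1 - g), fec],
                         [s_juv * g,       s_adult]])

    true = dict(s_juv=0.5, g=0.3, s_adult=0.8, fec=1.2)
    lam_true = np.max(np.real(np.linalg.eigvals(build_matrix(**true))))

    for n in (10, 25, 50, 100, 500):
        lams = []
        for _ in range(2000):
            s_juv = rng.binomial(n, true["s_juv"]) / n
            g = rng.binomial(n, true["g"]) / n
            s_adult = rng.binomial(n, true["s_adult"]) / n
            fec = rng.poisson(true["fec"] * n) / n
            lams.append(np.max(np.real(np.linalg.eigvals(
                build_matrix(s_juv, g, s_adult, fec)))))
        print(n, round(np.mean(lams) - lam_true, 4))       # bias in estimated lambda
    ```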

  17. Analysis of the Impact of Realistic Wind Size Parameter on the Delft3D Model

    NASA Astrophysics Data System (ADS)

    Washington, M. H.; Kumar, S.

    2017-12-01

    The wind size parameter, which is the distance from the center of the storm to the location of the maximum winds, is currently a constant in the Delft3D model. As a result, the Delft3D model's output predictions of water levels during a storm surge are inaccurate compared to the observed data. To address this issue, an algorithm to calculate a realistic wind size parameter for a given hurricane was designed and implemented using the observed water-level data for Hurricane Matthew. A performance evaluation experiment was conducted to demonstrate the accuracy of the model's prediction of water levels using the realistic wind size input parameter compared to the default constant wind size parameter for Hurricane Matthew, with the water level data observed from October 4th, 2016 to October 9th, 2016 from the National Oceanic and Atmospheric Administration (NOAA) as a baseline. The experimental results demonstrate that the Delft3D water level output for the realistic wind size parameter, compared to the default constant size parameter, matches the NOAA reference water level data more accurately.

  18. Stratospheric CCN sampling program

    NASA Technical Reports Server (NTRS)

    Rogers, C. F.

    1981-01-01

    When Mt. St. Helens produced several major eruptions in the late spring of 1980, there was a strong interest in the characterization of the cloud condensation nuclei (CCN) activity of the material that was injected into the troposphere and stratosphere. The scientific value of CCN measurements is twofold: CCN counts may be directly applied to calculations of the interaction of the aerosol (enlargement) at atmospherically-realistic relative humidities or supersaturations; and if the chemical constituency of the aerosol can be assumed, the number-versus-critical supersaturation spectrum may be converted into a dry aerosol size spectrum covering a size region not readily measured by other methods. The sampling method is described along with the instrumentation used in the experiments.

  19. Limited-Information Goodness-of-Fit Testing of Diagnostic Classification Item Response Theory Models. CRESST Report 840

    ERIC Educational Resources Information Center

    Hansen, Mark; Cai, Li; Monroe, Scott; Li, Zhen

    2014-01-01

    It is a well-known problem in testing the fit of models to multinomial data that the full underlying contingency table will inevitably be sparse for tests of reasonable length and for realistic sample sizes. Under such conditions, full-information test statistics such as Pearson's X² and the likelihood ratio statistic…

  20. IndeCut evaluates performance of network motif discovery algorithms.

    PubMed

    Ansariola, Mitra; Megraw, Molly; Koslicki, David

    2018-05-01

    Genomic networks represent a complex map of molecular interactions which are descriptive of the biological processes occurring in living cells. Identifying the small over-represented circuitry patterns in these networks helps generate hypotheses about the functional basis of such complex processes. Network motif discovery is a systematic way of achieving this goal. However, a reliable network motif discovery outcome requires generating random background networks which are the result of a uniform and independent graph sampling method. To date, there has been no method to numerically evaluate whether any network motif discovery algorithm performs as intended on realistically sized datasets; thus, it was not possible to assess the validity of resulting network motifs. In this work, we present IndeCut, the first method to date that characterizes network motif finding algorithm performance in terms of uniform sampling on realistically sized networks. We demonstrate that it is critical to use IndeCut prior to running any network motif finder for two reasons. First, IndeCut indicates the number of samples needed for a tool to produce an outcome that is both reproducible and accurate. Second, IndeCut allows users to choose the tool that generates samples in the most independent fashion for their network of interest among many available options. The open source software package is available at https://github.com/megrawlab/IndeCut. Contact: megrawm@science.oregonstate.edu or david.koslicki@math.oregonstate.edu. Supplementary data are available at Bioinformatics online.

  1. "Magnitude-based inference": a statistical review.

    PubMed

    Welsh, Alan H; Knight, Emma J

    2015-04-01

    We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.
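
    To make the review's point concrete, the "chances of benefit/trivial effect/harm" reported by such spreadsheets can be reproduced as normal tail areas around the observed difference, i.e. a flat-prior Bayesian (or nonstandard-test) calculation; the observed difference, its standard error, and the smallest worthwhile change below are made up.

    ```python
    from scipy import stats

    def mbi_chances(diff, se, swc):
        """Tail probabilities of the 'true' effect relative to the smallest worthwhile change."""
        p_benefit = 1 - stats.norm.cdf(swc, loc=diff, scale=se)
        p_harm = stats.norm.cdf(-swc, loc=diff, scale=se)
        p_trivial = 1 - p_benefit - p_harm
        return p_benefit, p_trivial, p_harm

    print([round(p, 3) for p in mbi_chances(diff=1.0, se=0.9, swc=0.5)])
    ```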

  2. Vocational interests in the United States: Sex, age, ethnicity, and year effects.

    PubMed

    Morris, Michael L

    2016-10-01

    Vocational interests predict educational and career choices, job performance, and career success (Rounds & Su, 2014). Although sex differences in vocational interests have long been observed (Thorndike, 1911), an appropriate overall measure has been lacking from the literature. Using a cross-sectional sample of United States residents aged 14 to 63 who completed the Strong Interest Inventory assessment between 2005 and 2014 (N = 1,283,110), I examined sex, age, ethnicity, and year effects on work related interest levels using both multivariate and univariate effect size estimates of individual dimensions (Holland's Realistic, Investigative, Artistic, Social, Enterprising, and Conventional). Men scored higher on Realistic (d = -1.14), Investigative (d = -.32), Enterprising (d = -.22), and Conventional (d = -.23), while women scored higher on Artistic (d = .19) and Social (d = .38), mostly replicating previous univariate findings. Multivariate, overall sex differences were very large (disattenuated Mahalanobis' D = 1.61; 27% overlap). Interest levels were slightly lower and overall sex differences larger in younger samples. Overall sex differences have narrowed slightly for 18-22 year-olds in more recent samples. Generally very small ethnicity effects included relatively higher Investigative and Enterprising scores for Asians, Indians, and Middle Easterners, lower Realistic scores for Blacks and Native Americans, higher Realistic, Artistic, and Social scores for Pacific Islanders, and lower Conventional scores for Whites. Using Prediger's (1982) model, women were more interested in people (d = 1.01) and ideas (d = .18), while men were more interested in things and data. These results, consistent with previous reviews showing large sex differences and small year effects, suggest that large sex differences in work related interests will continue to be observed for decades. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Using variance components to estimate power in a hierarchically nested sampling design improving monitoring of larval Devils Hole pupfish

    USGS Publications Warehouse

    Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsmore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey

    2013-01-01

    We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). Sampling design for larval DHP included surveys (5 days each spring 2007–2009), events, and plots. Each survey was comprised of three counting events, where DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, fixed plot designs had greater power than random plot designs, and the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.

  4. Kinetics of phase transformation in glass forming systems

    NASA Technical Reports Server (NTRS)

    Ray, Chandra S.

    1994-01-01

    The objectives of this research were to (1) develop computer models for realistic simulations of nucleation and crystal growth in glasses, which would also have the flexibility to accommodate the different variables related to sample characteristics and experimental conditions, and (2) design and perform nucleation and crystallization experiments using calorimetric measurements, such as differential scanning calorimetry (DSC) and differential thermal analysis (DTA), to verify these models. The variables related to sample characteristics mentioned in (1) above include size of the glass particles, nucleating agents, and the relative concentration of the surface and internal nuclei. A change in any of these variables changes the mode of the transformation (crystallization) kinetics. A variation in experimental conditions includes isothermal and nonisothermal DSC/DTA measurements. This research would lead to the development of improved, more realistic methods for analysis of the DSC/DTA peak profiles to determine the kinetic parameters for nucleation and crystal growth as well as to assess the relative merits and demerits of the thermoanalytical models presently used to study the phase transformation in glasses.

  5. A multi-particle crushing apparatus for studying rock fragmentation due to repeated impacts

    NASA Astrophysics Data System (ADS)

    Huang, S.; Mohanty, B.; Xia, K.

    2017-12-01

    Rock crushing is a common process in mining and related operations. Although a number of particle crushing tests have been proposed in the literature, most of them are concerned with single-particle crushing, i.e., a single rock sample is crushed in each test. Considering the realistic scenario in crushers where many fragments are involved, a laboratory crushing apparatus is developed in this study. This device consists of a Hopkinson pressure bar system and a piston-holder system. The Hopkinson pressure bar system is used to apply calibrated dynamic loads to the piston-holder system, and the piston-holder system is used to hold rock samples and to recover fragments for subsequent particle size analysis. The rock samples are subjected to three to seven impacts under three impact velocities (2.2, 3.8, and 5.0 m/s), with the feed size of the rock particle samples limited between 9.5 and 12.7 mm. Several key parameters are determined from this test, including particle size distribution parameters, impact velocity, loading pressure, and total work. The results show that the total work correlates well with resulting fragmentation size distribution, and the apparatus provides a useful tool for studying the mechanism of crushing, which further provides guidelines for the design of commercial crushers.

  6. “Magnitude-based Inference”: A Statistical Review

    PubMed Central

    Welsh, Alan H.; Knight, Emma J.

    2015-01-01

    ABSTRACT Purpose We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387

  7. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist

    NASA Astrophysics Data System (ADS)

    Reveil, Mardochee; Sorg, Victoria C.; Cheng, Emily R.; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O.

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.

  8. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist.

    PubMed

    Reveil, Mardochee; Sorg, Victoria C; Cheng, Emily R; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.

  9. Accuracy in parameter estimation for targeted effects in structural equation modeling: sample size planning for narrow confidence intervals.

    PubMed

    Lai, Keke; Kelley, Ken

    2011-06-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association
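
    The general flavor of the accuracy-in-parameter-estimation approach can be sketched as follows (a simplified illustration for a single mean with a known population SD, not the MBESS routines for SEM parameters): choose N so that the width of the confidence interval is no larger than a target width.

    ```python
    from scipy import stats

    def n_for_ci_width(sd, target_width, conf=0.95):
        """Smallest N whose t-based CI for a mean has width <= target_width."""
        n = 4
        while True:
            t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
            if 2 * t * sd / n**0.5 <= target_width:
                return n
            n += 1

    print(n_for_ci_width(sd=1.0, target_width=0.3))   # N for a CI 0.3 SD units wide
    ```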

  10. Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.

    PubMed

    Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael

    2014-10-01

    Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication about the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence mechanism of transition probabilities and steady states differ widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address is about the consequences of estimating the probability of a cell being in a particular state from measurements of small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
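
    A sketch of the kind of estimation problem described here (simulated data and a naive estimator, not the authors' MMSE/ML estimators): recover a Markov transition matrix from successive, noisy measurements of cell-state proportions by solving x_{t+1} ≈ x_t P in the least-squares sense and projecting the result back onto row-stochastic matrices.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    P_true = np.array([[0.90, 0.08, 0.02],
                       [0.05, 0.85, 0.10],
                       [0.00, 0.05, 0.95]])

    x = np.array([0.7, 0.2, 0.1])
    traj = [x]
    for _ in range(60):
        x = x @ P_true
        traj.append(x + rng.normal(scale=0.01, size=3))    # noisy FACS-like readout
    traj = np.clip(np.array(traj), 1e-6, None)
    traj /= traj.sum(axis=1, keepdims=True)

    A, B = traj[:-1], traj[1:]
    P_hat, *_ = np.linalg.lstsq(A, B, rcond=None)          # unconstrained estimate
    P_hat = np.clip(P_hat, 0, None)
    P_hat /= P_hat.sum(axis=1, keepdims=True)              # crude projection to row-stochastic
    print(np.round(P_hat, 3))
    ```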

  11. Methods for specifying the target difference in a randomised controlled trial: the Difference ELicitation in TriAls (DELTA) systematic review.

    PubMed

    Hislop, Jenni; Adewuyi, Temitope E; Vale, Luke D; Harrild, Kirsten; Fraser, Cynthia; Gurung, Tara; Altman, Douglas G; Briggs, Andrew H; Fayers, Peter; Ramsay, Craig R; Norrie, John D; Harvey, Ian M; Buckley, Brian; Cook, Jonathan A

    2014-05-01

    Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation. A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966 or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were also consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified: anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size. A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts.

  12. Efficient Bayesian mixed model analysis increases association power in large cohorts

    PubMed Central

    Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L

    2014-01-01

    Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN²) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633

  13. Novel application of DEM to modelling comminution processes

    NASA Astrophysics Data System (ADS)

    Delaney, Gary W.; Cleary, Paul W.; Sinnott, Matt D.; Morrison, Rob D.

    2010-06-01

    Comminution processes in which grains are broken down into smaller and smaller sizes represent a critical component in many industries including mineral processing, cement production, food processing and pharmaceuticals. We present a novel DEM implementation capable of realistically modelling such comminution processes. This extends on a previous implementation of DEM particle breakage that utilized spherical particles. Our new extension uses super-quadric particles, where daughter fragments with realistic size and shape distributions are packed inside a bounding parent super-quadric. We demonstrate the flexibility of our approach in different particle breakage scenarios and examine the effect of the chosen minimum resolved particle size. This incorporation of the effect of particle shape in the breakage process allows for more realistic DEM simulations to be performed, that can provide additional fundamental insights into comminution processes and into the behaviour of individual pieces of industrial machinery.

  14. DESCARTES' RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA.

    PubMed

    Bhaskar, Anand; Song, Yun S

    2014-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.

  15. DESCARTES’ RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA

    PubMed Central

    Bhaskar, Anand; Song, Yun S.

    2016-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the “folded” SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes’ rule of signs for polynomials to the Laplace transform of piecewise continuous functions. PMID:28018011

  16. Laser-induced incandescence of titania nanoparticles synthesized in a flame

    NASA Astrophysics Data System (ADS)

    Cignoli, F.; Bellomunno, C.; Maffi, S.; Zizak, G.

    2009-09-01

    Laser induced incandescence experiments were carried out in a flame reactor during titania nanoparticle synthesis. The structure of the reactor employed allowed for a rather smooth particle growth along the flame axis, with limited mixing of different size particles. Particle incandescence was excited by the 4th harmonic of a Nd:YAG laser. The radiation emitted from the particles was recorded in time and checked by spectral analysis. Results were compared with measurements from transmission electron microscopy of samples taken at the same locations probed by incandescence. This was done covering a portion of the flame length within which a particle size growth of a factor of about four was detected. The incandescence decay time was found to increase monotonically with particle size. The attainment of a process control tool in nanoparticle flame synthesis appears to be realistic.

  17. Methods for Specifying the Target Difference in a Randomised Controlled Trial: The Difference ELicitation in TriAls (DELTA) Systematic Review

    PubMed Central

    Hislop, Jenni; Adewuyi, Temitope E.; Vale, Luke D.; Harrild, Kirsten; Fraser, Cynthia; Gurung, Tara; Altman, Douglas G.; Briggs, Andrew H.; Fayers, Peter; Ramsay, Craig R.; Norrie, John D.; Harvey, Ian M.; Buckley, Brian; Cook, Jonathan A.

    2014-01-01

    Background Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation. Methods and Findings A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966 or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were also consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified—anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size. Conclusions A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts. Please see later in the article for the Editors' Summary PMID:24824338

  18. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

    In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference for parasite rates and not for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to possible problems of under- or over-coverage for sample sizes ≤ 250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method highlighted the prior expectation that, when SRR is not known, sample sizes must be increased relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out on varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
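
    The first calculator's idea can be illustrated numerically under the reversible catalytic model commonly used in this literature (the seroreversion rate, reference age, and seroprevalence limits below are made up): seroprevalence at age a is SP(a) = SCR / (SCR + SRR) * (1 - exp(-(SCR + SRR) * a)), so a confidence limit for SP at a reference age maps to a confidence limit for SCR when SRR is assumed known.

    ```python
    import math
    from scipy.optimize import brentq

    def sp_from_scr(scr, srr, age):
        rate = scr + srr
        return scr / rate * (1 - math.exp(-rate * age))

    def scr_from_sp(sp, srr, age):
        # Invert the seroprevalence curve numerically for a given SP value.
        return brentq(lambda scr: sp_from_scr(scr, srr, age) - sp, 1e-8, 5.0)

    srr, age = 0.01, 10.0                    # assumed seroreversion rate and reference age
    for sp in (0.25, 0.35, 0.45):            # e.g. lower limit, point estimate, upper limit
        print(sp, round(scr_from_sp(sp, srr, age), 4))
    ```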

  19. Experimental and computational analysis of sound absorption behavior in needled nonwovens

    NASA Astrophysics Data System (ADS)

    Soltani, Parham; Azimian, Mehdi; Wiegmann, Andreas; Zarrebini, Mohammad

    2018-07-01

    In this paper, the application of X-ray micro-computed tomography (μCT) together with fluid simulation techniques to predict the sound absorption characteristics of needled nonwovens is discussed. Melt-spun polypropylene fibers of different fineness were made on an industrial-scale compact melt spinning line. A conventional batt forming-needling line was used to prepare the needled samples. The normal-incidence sound absorption coefficients were measured using the impedance tube method. Realistic 3D images of the samples at micron-level spatial resolution were obtained using μCT. The morphology of the fabrics was characterized in terms of porosity, fiber diameter distribution, fiber curliness and pore size distribution from the high-resolution realistic 3D images using the GeoDict software. In order to calculate the permeability and flow resistivity of the media, fluid flow was simulated by numerically solving incompressible laminar Newtonian flow through the 3D pore space of the realistic structures. Based on the flow resistivity, the frequency-dependent acoustic absorption coefficient of the needled nonwovens was predicted using the empirical model of Delany and Bazley (1970) and its associated modified models, and the predictions were compared with and validated against the corresponding experimental results. Morphological analysis showed that, for a given weight per unit area, finer fibers result in a higher number of fibers in the sample; this leads to the formation of smaller and more tortuous pores, which in turn increases the flow resistivity of the media. It was established that, among the empirical models, the Mechel modification to the Delany and Bazley model had superior predictive ability compared with the original Delany and Bazley model over the frequency range of 100-5000 Hz and is well suited to polypropylene needled nonwovens.
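
    For context on how such empirical models turn flow resistivity into an absorption spectrum, here is a minimal sketch of the original Delany and Bazley relations in one commonly cited SI-unit form (the Mechel-modified coefficients used in the paper differ); the layer thickness, flow resistivity value, and frequency grid are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    def delany_bazley_alpha(freq, sigma, thickness, rho0=1.21, c0=343.0):
        """Normal-incidence absorption coefficient of a rigid-backed porous layer,
        using the empirical Delany-Bazley characteristic impedance and wavenumber."""
        X = rho0 * freq / sigma                      # dimensionless frequency parameter
        Zc = rho0 * c0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
        kc = (2 * np.pi * freq / c0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
        Zs = -1j * Zc / np.tan(kc * thickness)       # surface impedance, rigid backing
        R = (Zs - rho0 * c0) / (Zs + rho0 * c0)      # pressure reflection coefficient
        return 1 - np.abs(R)**2

    # Illustrative values: 20 mm thick sample, flow resistivity 20 kPa.s/m^2.
    freq = np.linspace(100, 5000, 50)
    print(delany_bazley_alpha(freq, sigma=20_000, thickness=0.02).round(2))
    ```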

  20. Application of nonlinear ultrasonics to inspection of stainless steel for dry storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulrich, Timothy James II; Anderson, Brain E.; Remillieux, Marcel C.

    This report summarizes technical work conducted by LANL staff and international collaborators in support of the UFD Storage Experimentation effort. The focus of the current technical work is on the detection and imaging of a failure mechanism known as stress corrosion cracking (SCC) in stainless steel using the nonlinear ultrasonic technique known as TREND. One of the difficulties faced in previous work is finding samples that contain realistically sized SCC. This year such samples were obtained from EPRI. Reported here are measurements made on these samples. One of the key findings is the ability to detect subsurface changes to the direction in which a crack is penetrating into the sample. This result follows from last year's report that demonstrated the ability of TREND techniques to image features below the sample surface. A new collaboration was established with AGH University of Science and Technology, Krakow, Poland.

  1. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    PubMed

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population towards a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker-positive and marker-negative subgroups and the prevalence of marker-positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker-negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.

  2. Design of an occulter testbed at flight Fresnel numbers

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Kasdin, N. Jeremy; Kim, Yunjong; Vanderbei, Robert J.

    2015-01-01

    An external occulter is a spacecraft flown along the line-of-sight of a space telescope to suppress starlight and enable high-contrast direct imaging of exoplanets. Laboratory verification of occulter designs is necessary to validate the optical models used to design and predict occulter performance. At Princeton, we are designing and building a testbed that allows verification of scaled occulter designs whose suppressed shadow is mathematically identical to that of space occulters. Here, we present a sample design that operates at a flight Fresnel number and is thus representative of a realistic space mission. We present calculations of experimental limits arising from the finite size and propagation distance available in the testbed, limitations due to manufacturing feature size, and a non-ideal input beam. We demonstrate how the testbed is designed to be feature-size limited, and provide an estimate of the expected performance.

  3. Modified Distribution-Free Goodness-of-Fit Test Statistic.

    PubMed

    Chun, So Yeon; Browne, Michael W; Shapiro, Alexander

    2018-03-01

    Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.

  4. Development of size-selective sampling of Bacillus anthracis surrogate spores from simulated building air intake mixtures for analysis via laser-induced breakdown spectroscopy.

    PubMed

    Gibb-Snyder, Emily; Gullett, Brian; Ryan, Shawn; Oudejans, Lukas; Touati, Abderrahmane

    2006-08-01

    Size-selective sampling of Bacillus anthracis surrogate spores from realistic, common aerosol mixtures was developed for analysis by laser-induced breakdown spectroscopy (LIBS). A two-stage impactor was found to be the preferential sampling technique for LIBS analysis because it was able to concentrate the spores in the mixtures while decreasing the collection of potentially interfering aerosols. Three common spore/aerosol scenarios were evaluated: diesel truck exhaust (to simulate a truck running outside of a building air intake), urban outdoor aerosol (to simulate common building air), and finally a protein aerosol (to simulate either an agent mixture (ricin/anthrax) or a contaminated anthrax sample). Two statistical methods, linear correlation and principal component analysis, were assessed for differentiation of surrogate spore spectra from other common aerosols. Criteria for determining percentages of false positives and false negatives via correlation analysis were evaluated. A single laser shot analysis of approximately 4 percent of the spores in a mixture of 0.75 m³ urban outdoor air doped with approximately 1.1 × 10⁵ spores resulted in a 0.04 proportion of false negatives. For that same sample volume of urban air without spores, the proportion of false positives was 0.08.

  5. Inadequacy of Conventional Grab Sampling for Remediation Decision-Making for Metal Contamination at Small-Arms Ranges.

    PubMed

    Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A

    2018-01-01

    Research shows grab sampling is inadequate for evaluating military ranges contaminated with energetics because of their highly heterogeneous distribution. Similar studies assessing the heterogeneous distribution of metals at small-arms ranges (SARs) are lacking. To address this, we evaluated whether grab sampling provides appropriate data for performing risk analysis at metal-contaminated SARs characterized with 30-48 grab samples. We evaluated the extractable metal content of Cu, Pb, Sb, and Zn in the field data using a Monte Carlo random resampling with replacement (bootstrapping) simulation approach. Results indicate the 95% confidence interval of the mean for Pb (432 mg/kg) at one site was 200-700 mg/kg, with a data range of 5-4500 mg/kg. Considering that the U.S. Environmental Protection Agency screening level for lead is 400 mg/kg, the necessity of cleanup at this site is unclear. Resampling based on populations of 7 and 15 samples, sample sizes more realistic for the area, yielded high false-negative rates.
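
    A minimal sketch of the resampling-with-replacement approach described above, using made-up Pb concentrations rather than the study's field data; the number of grab samples, bootstrap iterations, and lognormal parameters are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical Pb grab-sample results (mg/kg) from one decision unit.
    pb = rng.lognormal(mean=5.5, sigma=1.2, size=40)

    def bootstrap_ci_mean(x, n_boot=10_000, alpha=0.05, subsample=None):
        """Percentile bootstrap CI of the mean, optionally resampling a smaller
        number of grabs (e.g. 7 or 15) to mimic a more realistic field effort."""
        k = len(x) if subsample is None else subsample
        means = np.array([rng.choice(x, size=k, replace=True).mean()
                          for _ in range(n_boot)])
        return np.quantile(means, [alpha / 2, 1 - alpha / 2])

    for k in (None, 15, 7):
        lo, hi = bootstrap_ci_mean(pb, subsample=k)
        label = "all grabs" if k is None else f"{k} grabs"
        print(f"{label:>9}: 95% CI of mean Pb = {lo:6.0f}-{hi:6.0f} mg/kg "
              f"(screening level 400 mg/kg)")
    ```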

  6. A nonuniform popularity-similarity optimization (nPSO) model to efficiently generate realistic complex networks with communities

    NASA Astrophysics Data System (ADS)

    Muscoloni, Alessandro; Vittorio Cannistraci, Carlo

    2018-05-01

    The investigation of the hidden metric space behind complex network topologies is a fervid topic in current network science, and the hyperbolic space is one of the most studied, because it seems associated with the structural organization of many real complex systems. The popularity-similarity-optimization (PSO) model simulates how random geometric graphs grow in the hyperbolic space, generating realistic networks with clustering, small-worldness, scale-freeness and rich-clubness. However, it fails to reproduce an important feature of real complex networks: community organization. The geometrical-preferential-attachment (GPA) model was recently developed to give the PSO a soft community structure as well, obtained by forcing different angular regions of the hyperbolic disk to have variable levels of attractiveness. However, the number and size of the communities cannot be explicitly controlled in the GPA, which is a clear limitation for real applications. Here, we introduce the nonuniform PSO (nPSO) model. Differently from GPA, the nPSO generates synthetic networks in the hyperbolic space where heterogeneous angular node attractiveness is forced by sampling the angular coordinates from a tailored nonuniform probability distribution (for instance a mixture of Gaussians). The nPSO differs from GPA in three other respects: it allows one to explicitly fix the number and size of communities; it allows one to tune their mixing property by means of the network temperature; it is efficient at generating networks with high clustering. Several tests on the detectability of the community structure in nPSO synthetic networks and wide investigations on their structural properties confirm that the nPSO is a valid and efficient model to generate realistic complex networks with communities.
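
    To illustrate the one ingredient that distinguishes the nPSO from the PSO, the sketch below draws angular coordinates from a mixture of Gaussians wrapped onto [0, 2π), with one component per intended community. The number of communities, mixture weights, and spread are illustrative assumptions, and the rest of the model (radial coordinates, preferential attachment, temperature) is not reproduced.

    ```python
    import numpy as np

    def sample_npso_angles(n_nodes, n_communities, spread=0.15, weights=None, seed=0):
        """Draw node angular coordinates from a Gaussian mixture on the angular
        axis of the hyperbolic disk, one component per community, wrapped to [0, 2*pi)."""
        rng = np.random.default_rng(seed)
        weights = np.full(n_communities, 1 / n_communities) if weights is None else weights
        centres = 2 * np.pi * np.arange(n_communities) / n_communities  # evenly spaced
        comp = rng.choice(n_communities, size=n_nodes, p=weights)       # community labels
        theta = rng.normal(loc=centres[comp], scale=spread)
        return np.mod(theta, 2 * np.pi), comp

    angles, communities = sample_npso_angles(n_nodes=500, n_communities=8)
    print(np.bincount(communities))   # community sizes are explicitly controllable
    ```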

  7. The widespread misuse of effect sizes.

    PubMed

    Dankel, Scott J; Mouser, J Grant; Mattocks, Kevin T; Counts, Brittany R; Jessee, Matthew B; Buckner, Samuel L; Loprinzi, Paul D; Loenneke, Jeremy P

    2017-05-01

    Studies comparing multiple groups (i.e., experimental and control) often examine the efficacy of an intervention by calculating within-group effect sizes using Cohen's d. This method is inappropriate and largely impacted by the pre-test variability as opposed to the variability in the intervention itself. Furthermore, the percentage change is often analyzed, but this is highly impacted by the baseline values and can be potentially misleading. Thus, the objective of this study was to illustrate the common misuse of the effect size and percent change measures. Here we provide a realistic sample data set comparing two resistance training groups with the same pre-test to post-test change. Statistical tests that are commonly performed within the literature were computed. Analyzing the within-group effect size favors the control group, while the percent change favors the experimental group. The most appropriate way to present the data would be to plot the individual responses or, for larger samples, provide the mean change and 95% confidence intervals of the mean change. This details the magnitude and variability within the response to the intervention itself in units that are easily interpretable. This manuscript demonstrates the common misuse of the effect size and details the importance of investigators always reporting raw values, even when alternative statistics are performed. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
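
    A small numerical sketch of the contrast the authors describe, using fabricated data in which both groups improve by the same true amount but the control group has much less pre-test variability; the group sizes, means, and noise levels are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 20
    gain = 5.0                                    # identical true pre-to-post change
    pre_ctrl = rng.normal(100, 5, n)              # control: higher baseline, low variability
    pre_expt = rng.normal(80, 15, n)              # experimental: lower baseline, high variability
    post_ctrl = pre_ctrl + gain + rng.normal(0, 3, n)
    post_expt = pre_expt + gain + rng.normal(0, 3, n)

    def within_group_d(pre, post):
        """Within-group Cohen's d: mean change divided by pooled pre/post SD."""
        sd_pooled = np.sqrt((pre.std(ddof=1)**2 + post.std(ddof=1)**2) / 2)
        return (post.mean() - pre.mean()) / sd_pooled

    def mean_change_ci(pre, post, alpha=0.05):
        """Mean change with a 95% CI -- the presentation the authors recommend."""
        diff = post - pre
        half = stats.t.ppf(1 - alpha / 2, len(diff) - 1) * stats.sem(diff)
        return diff.mean(), diff.mean() - half, diff.mean() + half

    print("within-group d :", within_group_d(pre_ctrl, post_ctrl),
          within_group_d(pre_expt, post_expt))     # favours the low-variability group
    print("percent change :", 100 * gain / pre_ctrl.mean(), 100 * gain / pre_expt.mean())
    print("mean change CI :", mean_change_ci(pre_ctrl, post_ctrl))
    ```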

  8. (Sample) Size Matters: Best Practices for Defining Error in Planktic Foraminiferal Proxy Records

    NASA Astrophysics Data System (ADS)

    Lowery, C.; Fraass, A. J.

    2016-02-01

    Paleoceanographic research is a vital tool to extend modern observational datasets and to study the impact of climate events for which there is no modern analog. Foraminifera are one of the most widely used tools for this type of work, both as paleoecological indicators and as carriers for geochemical proxies. However, the use of microfossils as proxies for paleoceanographic conditions brings about a unique set of problems. This is primarily due to the fact that groups of individual foraminifera, which usually live about a month, are used to infer average conditions for time periods ranging from hundreds to tens of thousands of years. Because of this, adequate sample size is very important for generating statistically robust datasets, particularly for stable isotopes. In the early days of stable isotope geochemistry, instrumental limitations required hundreds of individual foraminiferal tests to return a value. This had the fortunate side-effect of smoothing any seasonal to decadal changes within the planktic foram population. With the advent of more sensitive mass spectrometers, smaller sample sizes have now become standard. While this has many advantages, the use of smaller numbers of individuals to generate a data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens will result in larger variation, and therefore error, and less precise values for each sample. Unfortunately, most (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample sizes under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB, or 1°C. Here, we demonstrate the use of this tool to quantify error in micropaleontological datasets, and suggest best practices for minimizing error when generating stable isotope data with foraminifera.
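
    A toy version of the kind of resampling the authors' R model performs, written here in Python rather than R: individual tests are drawn from a seasonal δ18O distribution, a fixed fraction are offset to mimic diagenetic alteration, and the bias and spread of the sample mean are tracked as a function of the number of specimens picked. All distribution parameters and the 0.3‰ offset are illustrative assumptions, not the model's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def simulated_sample_error(n_tests, true_mean=0.0, seasonal_sd=0.5,
                               frac_altered=0.1, alteration_offset=0.3,
                               n_trials=5000):
        """Monte Carlo bias and spread of the sample-mean d18O versus sample size,
        with a fraction of specimens shifted by diagenetic alteration."""
        errors = np.empty(n_trials)
        for i in range(n_trials):
            vals = rng.normal(true_mean, seasonal_sd, n_tests)
            altered = rng.random(n_tests) < frac_altered
            vals[altered] += alteration_offset
            errors[i] = vals.mean() - true_mean
        return errors.mean(), errors.std()

    for n in (5, 10, 30, 100):
        bias, spread = simulated_sample_error(n)
        print(f"n={n:3d}: bias {bias:+.3f} permil, 1-sigma spread {spread:.3f} permil")
    ```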

  9. Perspective: Size selected clusters for catalysis and electrochemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halder, Avik; Curtiss, Larry A.; Fortunelli, Alessandro

    We report that size-selected clusters containing a handful of atoms may possess noble catalytic properties different from nano-sized or bulk catalysts. Size- and composition-selected clusters can also serve as models of the catalytic active site, where an addition or removal of a single atom can have a dramatic effect on their activity and selectivity. In this Perspective, we provide an overview of studies performed under both ultra-high vacuum and realistic reaction conditions aimed at the interrogation, characterization and understanding of the performance of supported size-selected clusters in heterogeneous and electrochemical reactions, which address the effects of cluster size, cluster composition, cluster-support interactions and reaction conditions, the key parameters for the understanding and control of catalyst functionality. Computational modelling based on density functional theory sampling of local minima and energy barriers or ab initio Molecular Dynamics simulations is an integral part of this research by providing fundamental understanding of the catalytic processes at the atomic level, as well as by predicting new materials compositions which can be validated in experiments. Lastly, we discuss approaches which aim at the scale up of the production of well-defined clusters for use in real world applications.

  10. Perspective: Size selected clusters for catalysis and electrochemistry

    DOE PAGES

    Halder, Avik; Curtiss, Larry A.; Fortunelli, Alessandro; ...

    2018-03-15

    We report that size-selected clusters containing a handful of atoms may possess noble catalytic properties different from nano-sized or bulk catalysts. Size- and composition-selected clusters can also serve as models of the catalytic active site, where an addition or removal of a single atom can have a dramatic effect on their activity and selectivity. In this Perspective, we provide an overview of studies performed under both ultra-high vacuum and realistic reaction conditions aimed at the interrogation, characterization and understanding of the performance of supported size-selected clusters in heterogeneous and electrochemical reactions, which address the effects of cluster size, cluster composition, cluster-support interactions and reaction conditions, the key parameters for the understanding and control of catalyst functionality. Computational modelling based on density functional theory sampling of local minima and energy barriers or ab initio Molecular Dynamics simulations is an integral part of this research by providing fundamental understanding of the catalytic processes at the atomic level, as well as by predicting new materials compositions which can be validated in experiments. Lastly, we discuss approaches which aim at the scale up of the production of well-defined clusters for use in real world applications.

  11. Perspective: Size selected clusters for catalysis and electrochemistry

    NASA Astrophysics Data System (ADS)

    Halder, Avik; Curtiss, Larry A.; Fortunelli, Alessandro; Vajda, Stefan

    2018-03-01

    Size-selected clusters containing a handful of atoms may possess noble catalytic properties different from nano-sized or bulk catalysts. Size- and composition-selected clusters can also serve as models of the catalytic active site, where an addition or removal of a single atom can have a dramatic effect on their activity and selectivity. In this perspective, we provide an overview of studies performed under both ultra-high vacuum and realistic reaction conditions aimed at the interrogation, characterization, and understanding of the performance of supported size-selected clusters in heterogeneous and electrochemical reactions, which address the effects of cluster size, cluster composition, cluster-support interactions, and reaction conditions, the key parameters for the understanding and control of catalyst functionality. Computational modeling based on density functional theory sampling of local minima and energy barriers or ab initio molecular dynamics simulations is an integral part of this research by providing fundamental understanding of the catalytic processes at the atomic level, as well as by predicting new materials compositions which can be validated in experiments. Finally, we discuss approaches which aim at the scale up of the production of well-defined clusters for use in real world applications.

  12. Linking Different Exposure Patterns to Internal Lung Dose for Heterogeneous Ambient Aerosols

    EPA Science Inventory

    Particulate matter (PM) in the ambient air is a complex mixture of particles with different sizes and chemical compositions. Because potential health effects are known to be different for different size particles, specific dose of size-fractionated PM under realistic exposure con...

  13. Spontaneous emission in the presence of a realistically sized cylindrical waveguide

    NASA Astrophysics Data System (ADS)

    Dung, Ho Trung

    2016-02-01

    Various quantities characterizing the spontaneous emission process of a dipole emitter including the emission rate and the emission pattern can be expressed in terms of the Green tensor of the surrounding environment. By expanding the Green tensor around some analytically known background one as a Born series, and truncating it under appropriate conditions, complicated boundaries can be tackled with ease. However, when the emitter is embedded in the medium, even the calculation of the first-order term in the Born series is problematic because of the presence of a singularity. We show how to eliminate this singularity for a medium of arbitrary size and shape by expanding around the bulk medium rather than vacuum. In the highly symmetric configuration of an emitter located on the axis of a realistically sized cylinder, it is shown that the singularity can be removed by changing the integral variables and then the order of integration. Using both methods, we investigate the spontaneous emission rate of an initially excited two-level dipole emitter, embedded in a realistically sized cylinder, which can be a common optical fiber in the long-length limit and a disk in the short-length limit. The spatial distribution of the emitted light is calculated using the Born-expansion approach, and local-field corrections to the spontaneous emission rate are briefly discussed.

  14. Problem Posing with Realistic Mathematics Education Approach in Geometry Learning

    NASA Astrophysics Data System (ADS)

    Mahendra, R.; Slamet, I.; Budiyono

    2017-09-01

    One of the difficulties students face in learning geometry is the topic of the plane, which requires them to grasp abstract material. The aim of this research is to determine the effect of the Problem Posing learning model with a Realistic Mathematics Education approach on geometry learning. This quasi-experimental study was conducted in a junior high school in Karanganyar, Indonesia. The sample was taken using a stratified cluster random sampling technique. The results indicate that the Problem Posing learning model with a Realistic Mathematics Education approach can significantly improve students' conceptual understanding in geometry learning, especially on plane topics. This is because, under Problem Posing with a Realistic Mathematics Education approach, students become active in constructing their knowledge, posing problems, and solving problems in realistic contexts, which makes it easier for them to understand concepts and solve problems. Therefore, the Problem Posing learning model with a Realistic Mathematics Education approach is appropriate for mathematics learning, especially for geometry material, and its application can improve student achievement.

  15. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations.

    PubMed

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-07

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated to nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while contextually correcting for finite size effects. We demonstrate our approach by studying the condensation of argon, and showing that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.

  16. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations

    NASA Astrophysics Data System (ADS)

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-01

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated to nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while contextually correcting for finite size effects. We demonstrate our approach by studying the condensation of argon, and showing that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.

  17. Particle-Size-Grouping Model of Precipitation Kinetics in Microalloyed Steels

    NASA Astrophysics Data System (ADS)

    Xu, Kun; Thomas, Brian G.

    2012-03-01

    The formation, growth, and size distribution of precipitates greatly affect the microstructure and properties of microalloyed steels. Computational particle-size-grouping (PSG) kinetic models based on population balances are developed to simulate precipitate particle growth resulting from collision and diffusion mechanisms. First, the generalized PSG method for collision is explained clearly and verified. Then, a new PSG method is proposed to model diffusion-controlled precipitate nucleation, growth, and coarsening with complete mass conservation and no fitting parameters. Compared with the original population-balance models, this PSG method saves significant computation and preserves enough accuracy to model a realistic range of particle sizes. Finally, the new PSG method is combined with an equilibrium phase fraction model for plain carbon steels and is applied to simulate the precipitated fraction of aluminum nitride and the size distribution of niobium carbide during isothermal aging processes. Good matches are found with experimental measurements, suggesting that the new PSG method offers a promising framework for the future development of realistic models of precipitation.

  18. Spatial variability in plankton biomass and hydrographic variables along an axial transect in Chesapeake Bay

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Roman, M.; Kimmel, D.; McGilliard, C.; Boicourt, W.

    2006-05-01

    High-resolution, axial sampling surveys were conducted in Chesapeake Bay during April, July, and October from 1996 to 2000 using a towed sampling device equipped with sensors for depth, temperature, conductivity, oxygen, fluorescence, and an optical plankton counter (OPC). The results suggest that the axial distribution and variability of hydrographic and biological parameters in Chesapeake Bay were primarily influenced by the source and magnitude of freshwater input. Bay-wide spatial trends in the water column-averaged values of salinity were linear functions of distance from the main source of freshwater, the Susquehanna River, at the head of the bay. However, spatial trends in the water column-averaged values of temperature, dissolved oxygen, chlorophyll-a and zooplankton biomass were nonlinear along the axis of the bay. Autocorrelation analysis and the residuals of linear and quadratic regressions between each variable and latitude were used to quantify the patch sizes for each axial transect. The patch sizes of each variable depended on whether the data were detrended, and the detrending techniques applied. However, the patch size of each variable was generally larger using the original data compared to the detrended data. The patch sizes of salinity were larger than those for dissolved oxygen, chlorophyll-a and zooplankton biomass, suggesting that more localized processes influence the production and consumption of plankton. This high-resolution quantification of the zooplankton spatial variability and patch size can be used for more realistic assessments of the zooplankton forage base for larval fish species.

  19. Power analysis to detect treatment effect in longitudinal studies with heterogeneous errors and incomplete data.

    PubMed

    Vallejo, Guillermo; Ato, Manuel; Fernández García, Paula; Livacic Rojas, Pablo E; Tuero Herrero, Ellián

    2016-08-01

    S. Usami (2014) describes a method to realistically determine sample size in longitudinal research using a multilevel model. The present research extends the aforementioned work to situations where it is likely that the assumption of homogeneity of the errors across groups is not met and the error term does not follow a scaled identity covariance structure. For this purpose, we followed a procedure based on transforming the variance components of the linear growth model and the parameter related to the treatment effect into specific and easily understandable indices. At the same time, we provide the appropriate statistical machinery for researchers to use when data loss is unavoidable and changes in the expected value of the observed responses are not linear. The empirical powers based on unknown variance components were virtually the same as the theoretical powers derived from the use of the statistically processed indices. The main conclusion of the study is that the proposed method accurately calculates sample size in the described situations under the stipulated power criteria.

  20. Family Relationships in Realistic Young Adult Fiction, 1987 to 1991.

    ERIC Educational Resources Information Center

    Sampson, Cathie

    The purpose of this study was to determine how parents and family relationships are characterized in realistic young adult fiction. A random sample of 20 realistic young adult novels was selected from the American Library Association's Best Lists for the years 1987-1991. A content analysis of the novels focused on the following: (1) whether…

  1. Snow particles extracted from X-ray computed microtomography imagery and their single-scattering properties

    NASA Astrophysics Data System (ADS)

    Ishimoto, Hiroshi; Adachi, Satoru; Yamaguchi, Satoru; Tanikawa, Tomonori; Aoki, Teruo; Masuda, Kazuhiko

    2018-04-01

    Sizes and shapes of snow particles were determined from X-ray computed microtomography (micro-CT) images, and their single-scattering properties were calculated at visible and near-infrared wavelengths using a Geometrical Optics Method (GOM). We analyzed seven snow samples including fresh and aged artificial snow and natural snow obtained from field samples. Individual snow particles were numerically extracted, and the shape of each snow particle was defined by applying a rendering method. The size distribution and specific surface area distribution were estimated from the geometrical properties of the snow particles, and an effective particle radius was derived for each snow sample. The GOM calculations at wavelengths of 0.532 and 1.242 μm revealed that the realistic snow particles had scattering phase functions similar to those of previously modeled irregularly shaped particles. Furthermore, distinct dendritic particles had a characteristic scattering phase function and asymmetry factor. The single-scattering properties of particles of effective radius reff were compared with the size-averaged single-scattering properties. We found that the particles of reff could be used as representative particles for calculating the average single-scattering properties of the snow. Furthermore, the single-scattering properties of the micro-CT particles were compared to those of particle shape models used in our current snow retrieval algorithm. For the single-scattering phase function, the results of the micro-CT particles were consistent with those of a conceptual two-shape model. However, the particle size dependence differed for the single-scattering albedo and asymmetry factor.

  2. Potential for adult-based epidemiological studies to characterize overall cancer risks associated with a lifetime of CT scans.

    PubMed

    Shuryak, Igor; Lubin, Jay H; Brenner, David J

    2014-06-01

    Recent epidemiological studies have suggested that radiation exposure from pediatric CT scanning is associated with small excess cancer risks. However, the majority of CT scans are performed on adults, and most radiation-induced cancers appear during middle or old age, in the same age range as background cancers. Consequently, a logical next step is to investigate the effects of CT scanning in adulthood on lifetime cancer risks by conducting adult-based, appropriately designed epidemiological studies. Here we estimate the sample size required for such studies to detect CT-associated risks. This was achieved by incorporating different age-, sex-, time- and cancer type-dependent models of radiation carcinogenesis into an in silico simulation of a population-based cohort study. This approach simulated individual histories of chest and abdominal CT exposures, deaths and cancer diagnoses. The resultant sample sizes suggest that epidemiological studies of realistically sized cohorts can detect excess lifetime cancer risks from adult CT exposures. For example, retrospective analysis of CT exposure and cancer incidence data from a population-based cohort of 0.4 to 1.3 million (depending on the carcinogenic model) CT-exposed UK adults, aged 25-65 in 1980 and followed until 2015, provides 80% power for detecting cancer risks from chest and abdominal CT scans.
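
    The sketch below is not the authors' carcinogenesis-model simulation; it only illustrates the generic logic of estimating the cohort size needed to detect a small excess lifetime cancer risk, by comparing simulated case counts in exposed and unexposed halves of a cohort with a two-proportion z-test. The baseline lifetime risk, excess relative risk, and exposed fraction are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)

    def simulated_power(cohort_size, baseline_risk=0.20, excess_rr=1.02,
                        exposed_frac=0.5, alpha=0.05, n_sims=2000):
        """Fraction of simulated cohorts in which the exposed group's cancer incidence
        is detectably higher than the unexposed group's (one-sided two-proportion z-test)."""
        n_exp = int(cohort_size * exposed_frac)
        n_unexp = cohort_size - n_exp
        z_crit = norm.ppf(1 - alpha)                 # one-sided test for an excess
        hits = 0
        for _ in range(n_sims):
            cases_exp = rng.binomial(n_exp, baseline_risk * excess_rr)
            cases_unexp = rng.binomial(n_unexp, baseline_risk)
            p_pool = (cases_exp + cases_unexp) / cohort_size
            se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_exp + 1 / n_unexp))
            z = (cases_exp / n_exp - cases_unexp / n_unexp) / se
            hits += z > z_crit
        return hits / n_sims

    for size in (100_000, 400_000, 1_300_000):
        print(f"cohort {size:>9,}: power ~ {simulated_power(size):.2f}")
    ```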

  3. Measuring the X-shaped structures in edge-on galaxies

    NASA Astrophysics Data System (ADS)

    Savchenko, S. S.; Sotnikova, N. Ya.; Mosenkov, A. V.; Reshetnikov, V. P.; Bizyaev, D. V.

    2017-11-01

    We present a detailed photometric study of a sample of 22 edge-on galaxies with clearly visible X-shaped structures. We propose a novel method to derive geometrical parameters of these features, along with the parameters of their host galaxies based on the multi-component photometric decomposition of galactic images. To include the X-shaped structure into our photometric model, we use the imfit package, in which we implement a new component describing the X-shaped structure. This method is applied for a sample of galaxies with available Sloan Digital Sky Survey and Spitzer IRAC 3.6 μm observations. In order to explain our results, we perform realistic N-body simulations of a Milky Way-type galaxy and compare the observed and the model X-shaped structures. Our main conclusions are as follows: (1) galaxies with strong X-shaped structures reside in approximately the same local environments as field galaxies; (2) the characteristic size of the X-shaped structures is about 2/3 of the bar size; (3) there is a correlation between the X-shaped structure size and its observed flatness: the larger structures are more flattened; (4) our N-body simulations qualitatively confirm the observational results and support the bar-driven scenario for the X-shaped structure formation.

  4. Cell wall microstructure, pore size distribution and absolute density of hemp shiv

    PubMed Central

    Lawrence, M.; Ansell, M. P.; Hussain, A.

    2018-01-01

    This paper, for the first time, fully characterizes the intrinsic physical parameters of hemp shiv including cell wall microstructure, pore size distribution and absolute density. Scanning electron microscopy revealed microstructural features similar to hardwoods. Confocal microscopy revealed three major layers in the cell wall: middle lamella, primary cell wall and secondary cell wall. Computed tomography improved the visualization of pore shape and pore connectivity in three dimensions. Mercury intrusion porosimetry (MIP) showed that the average accessible porosity was 76.67 ± 2.03% and pore size classes could be distinguished into micropores (3–10 nm) and macropores (0.1–1 µm and 20–80 µm). The absolute density was evaluated by helium pycnometry, MIP and Archimedes' methods. The results show that these methods can lead to misinterpretation of absolute density. The MIP method showed a realistic absolute density (1.45 g cm−3) consistent with the density of the known constituents, including lignin, cellulose and hemi-cellulose. However, helium pycnometry and Archimedes’ methods gave falsely low values owing to 10% of the volume being inaccessible pores, which require sample pretreatment in order to be filled by liquid or gas. This indicates that the determination of the cell wall density is strongly dependent on sample geometry and preparation. PMID:29765652

  5. Cell wall microstructure, pore size distribution and absolute density of hemp shiv

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Lawrence, M.; Ansell, M. P.; Hussain, A.

    2018-04-01

    This paper, for the first time, fully characterizes the intrinsic physical parameters of hemp shiv including cell wall microstructure, pore size distribution and absolute density. Scanning electron microscopy revealed microstructural features similar to hardwoods. Confocal microscopy revealed three major layers in the cell wall: middle lamella, primary cell wall and secondary cell wall. Computed tomography improved the visualization of pore shape and pore connectivity in three dimensions. Mercury intrusion porosimetry (MIP) showed that the average accessible porosity was 76.67 ± 2.03% and pore size classes could be distinguished into micropores (3-10 nm) and macropores (0.1-1 µm and 20-80 µm). The absolute density was evaluated by helium pycnometry, MIP and Archimedes' methods. The results show that these methods can lead to misinterpretation of absolute density. The MIP method showed a realistic absolute density (1.45 g cm-3) consistent with the density of the known constituents, including lignin, cellulose and hemi-cellulose. However, helium pycnometry and Archimedes' methods gave falsely low values owing to 10% of the volume being inaccessible pores, which require sample pretreatment in order to be filled by liquid or gas. This indicates that the determination of the cell wall density is strongly dependent on sample geometry and preparation.

  6. Active and realistic passive marijuana exposure tested by three immunoassays and GC/MS in urine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mule, S.J.; Lomax, P.; Gross, S.J.

    Human urine samples obtained before and after active and passive exposure to marijuana were analyzed by immune kits (Roche, Amersham, and Syva) and gas chromatography/mass spectrometry (GC/MS). Seven of eight subjects were positive for the entire five-day test period with one immune kit. The latter correlated with GC/MS in 98% of the samples. Passive inhalation experiments under conditions likely to reflect realistic exposure resulted consistently in less than 10 ng/mL of cannabinoids. The 10-100-ng/mL cannabinoid concentration range essential for detection of occasional and moderate marijuana users is thus unaffected by realistic passive inhalation.

  7. Inferring the photometric and size evolution of galaxies from image simulations. I. Method

    NASA Astrophysics Data System (ADS)

    Carassou, Sébastien; de Lapparent, Valérie; Bertin, Emmanuel; Le Borgne, Damien

    2017-09-01

    Context. Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in non-trivial ways, and that can lead to contradictory predictions if not taken into account carefully. Aims: To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. Methods: We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified, and minimized with a custom sampling process based on adaptive Markov chain Monte Carlo methods. Results: Using synthetic data matching most of the properties of a Canada-France-Hawaii Telescope Legacy Survey Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolution for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach. Conclusions: Our pipeline efficiently infers the luminosity and size distribution and evolution parameters with a very limited number of observables (three photometric bands). When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from systematic biases.

  8. Predictive accuracy of combined genetic and environmental risk scores.

    PubMed

    Dudbridge, Frank; Pashayan, Nora; Yang, Jian

    2018-02-01

    The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores. © 2017 WILEY PERIODICALS, INC.
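
    As a toy illustration of the comparison described above (not the authors' analytical derivations), the sketch below simulates correlated polygenic and environmental scores under a liability-threshold-like model and compares the discrimination (AUC) of each score alone with a simple sum and a weighted sum whose weights are estimated in a training half, which implicitly accounts for the correlation. All variance, effect, and correlation values are invented.

    ```python
    import numpy as np
    from scipy.stats import rankdata

    rng = np.random.default_rng(11)
    n = 200_000

    # Correlated polygenic (G) and environmental (E) scores contributing to liability.
    rho = 0.2
    G, E = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T
    liability = 0.4 * G + 0.5 * E + rng.normal(0, 1, n)
    disease = liability > np.quantile(liability, 0.90)        # ~10% prevalence

    def auc(score, case):
        """Mann-Whitney AUC: probability a random case outscores a random control."""
        r = rankdata(score)
        n1 = case.sum()
        return (r[case].sum() - n1 * (n1 + 1) / 2) / (n1 * (len(score) - n1))

    # Estimate combination weights in a training half, evaluate in the held-out half.
    half = n // 2
    X_train = np.column_stack([G[:half], E[:half]])
    w, *_ = np.linalg.lstsq(X_train, disease[:half].astype(float), rcond=None)
    test = slice(half, None)
    scores = {"G only": G[test], "E only": E[test],
              "simple sum": G[test] + E[test],
              "weighted sum": w[0] * G[test] + w[1] * E[test]}
    for name, s in scores.items():
        print(f"{name:>12}: AUC = {auc(s, disease[test]):.3f}")
    ```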

  9. Predictive accuracy of combined genetic and environmental risk scores

    PubMed Central

    Pashayan, Nora; Yang, Jian

    2017-01-01

    ABSTRACT The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores. PMID:29178508

  10. Design of a digital phantom population for myocardial perfusion SPECT imaging research.

    PubMed

    Ghaly, Michael; Du, Yong; Fung, George S K; Tsui, Benjamin M W; Links, Jonathan M; Frey, Eric

    2014-06-21

    Digital phantoms and Monte Carlo (MC) simulations have become important tools for optimizing and evaluating instrumentation, acquisition and processing methods for myocardial perfusion SPECT (MPS). In this work, we designed a new adult digital phantom population and generated corresponding Tc-99m and Tl-201 projections for use in MPS research. The population is based on the three-dimensional XCAT phantom with organ parameters sampled from the Emory PET Torso Model Database. Phantoms included three variations each in body size, heart size, and subcutaneous adipose tissue level, for a total of 27 phantoms of each gender. The SimSET MC code and angular response functions were used to model interactions in the body and the collimator-detector system, respectively. We divided each phantom into seven organs, each simulated separately, allowing use of post-simulation summing to efficiently model uptake variations. Also, we adapted and used a criterion based on the relative Poisson effective count level to determine the required number of simulated photons for each simulated organ. This technique provided a quantitative estimate of the true noise in the simulated projection data, including residual MC simulation noise. Projections were generated in 1 keV wide energy windows from 48-184 keV assuming perfect energy resolution to permit study of the effects of window width, energy resolution, and crosstalk in the context of dual isotope MPS. We have developed a comprehensive method for efficiently simulating realistic projections for a realistic population of phantoms in the context of MPS imaging. The new phantom population and realistic database of simulated projections will be useful in performing mathematical and human observer studies to evaluate various acquisition and processing methods such as optimizing the energy window width, investigating the effect of energy resolution on image quality and evaluating compensation methods for degrading factors such as crosstalk in the context of single and dual isotope MPS.

  11. Design of a digital phantom population for myocardial perfusion SPECT imaging research

    NASA Astrophysics Data System (ADS)

    Ghaly, Michael; Du, Yong; Fung, George S. K.; Tsui, Benjamin M. W.; Links, Jonathan M.; Frey, Eric

    2014-06-01

    Digital phantoms and Monte Carlo (MC) simulations have become important tools for optimizing and evaluating instrumentation, acquisition and processing methods for myocardial perfusion SPECT (MPS). In this work, we designed a new adult digital phantom population and generated corresponding Tc-99m and Tl-201 projections for use in MPS research. The population is based on the three-dimensional XCAT phantom with organ parameters sampled from the Emory PET Torso Model Database. Phantoms included three variations each in body size, heart size, and subcutaneous adipose tissue level, for a total of 27 phantoms of each gender. The SimSET MC code and angular response functions were used to model interactions in the body and the collimator-detector system, respectively. We divided each phantom into seven organs, each simulated separately, allowing use of post-simulation summing to efficiently model uptake variations. Also, we adapted and used a criterion based on the relative Poisson effective count level to determine the required number of simulated photons for each simulated organ. This technique provided a quantitative estimate of the true noise in the simulated projection data, including residual MC simulation noise. Projections were generated in 1 keV wide energy windows from 48-184 keV assuming perfect energy resolution to permit study of the effects of window width, energy resolution, and crosstalk in the context of dual isotope MPS. We have developed a comprehensive method for efficiently simulating realistic projections for a realistic population of phantoms in the context of MPS imaging. The new phantom population and realistic database of simulated projections will be useful in performing mathematical and human observer studies to evaluate various acquisition and processing methods such as optimizing the energy window width, investigating the effect of energy resolution on image quality and evaluating compensation methods for degrading factors such as crosstalk in the context of single and dual isotope MPS.

  12. Size-Dictionary Interpolation for Robot's Adjustment.

    PubMed

    Daneshmand, Morteza; Aabloo, Alvo; Anbarjafari, Gholamreza

    2015-01-01

    This paper describes the classification and size-dictionary interpolation of three-dimensional data obtained by a laser scanner for use in a realistic virtual fitting room. When several mannequin robots of different genders and sizes are simultaneously connected to the same computer, the chosen robot is activated automatically so that it mimics the scanned body shape and size instantly. The classification process consists of two layers, dealing, respectively, with gender and size. The interpolation procedure seeks the set of positions of the biologically inspired actuators that makes a mannequin robot resemble the scanned person's body as closely as possible. It does so by linearly mapping the distances between successive size templates to the corresponding actuator position sets and then calculating control values that preserve the same distance proportions; the nearest templates are identified by minimizing the Euclidean distance between the size-dictionary template vectors and the vector of desired body sizes. The experimental results of implementing the proposed method on Fits.me's mannequin robots are visually illustrated, and the remaining steps toward completion of the whole realistic online fitting package are explained.
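
    A schematic of the nearest-template and linear-interpolation idea as described in the abstract, not the Fits.me implementation: body measurements are matched to a dictionary of size templates by Euclidean distance, and actuator positions are interpolated between the two nearest templates in proportion to those distances. The measurement names, template values, and actuator counts are invented.

    ```python
    import numpy as np

    # Hypothetical size dictionary: each row is a size template
    # (chest, waist, hip in cm) with the actuator positions that reproduce it.
    templates = np.array([[88,  74,  94],
                          [96,  82, 100],
                          [104, 90, 108],
                          [112, 98, 116]], dtype=float)
    actuator_positions = np.array([[0.10, 0.15, 0.20, 0.12],
                                   [0.35, 0.40, 0.38, 0.33],
                                   [0.60, 0.65, 0.62, 0.58],
                                   [0.85, 0.90, 0.88, 0.80]])

    def interpolate_actuators(body, templates, positions):
        """Pick the two templates closest (Euclidean distance) to the scanned body
        measurements and linearly interpolate their actuator settings, weighting
        each template inversely to its distance."""
        d = np.linalg.norm(templates - body, axis=1)
        i, j = np.argsort(d)[:2]                       # two nearest size templates
        if d[i] == 0:
            return positions[i]
        w_i = d[j] / (d[i] + d[j])                     # closer template gets more weight
        return w_i * positions[i] + (1 - w_i) * positions[j]

    print(interpolate_actuators(np.array([100.0, 86.0, 104.0]),
                                templates, actuator_positions))
    ```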

  13. Using Realist Synthesis to Develop an Evidence Base from an Identified Data Set on Enablers and Barriers for Alcohol and Drug Program Implementation

    ERIC Educational Resources Information Center

    Hunter, Barbara; MacLean, Sarah; Berends, Lynda

    2012-01-01

    The purpose of this paper is to show how "realist synthesis" methodology (Pawson, 2002) was adapted to review a large sample of community based projects addressing alcohol and drug use problems. Our study drew on a highly varied sample of 127 projects receiving funding from a national non-government organisation in Australia between 2002…

  14. Vehicle Anthropometric Specification

    DTIC Science & Technology

    2013-04-01

    acquisitions and upgrades when no suitable data on the relevant ADF population is available. ...Given the increasing size of the military population, 36 year old body size data realistically no longer accurately reflects the size and shape of the...American and European Surface Anthropometry Resource (CAESAR) anthropometric dataset to represent the dimensions of these groups (18-50 year old white

  15. The role of ingroup threat and conservative ideologies on prejudice against immigrants in two samples of Italian adults.

    PubMed

    Caricati, Luca; Mancini, Tiziana; Marletta, Giuseppe

    2017-01-01

    This research investigated the relationship among perception of ingroup threats (realistic and symbolic), conservative ideologies (social dominance orientation [SDO] and right-wing authoritarianism [RWA]), and prejudice against immigrants. Data were collected with a cross-sectional design in two samples: non-student Italian adults (n = 223) and healthcare professionals (n = 679). Results were similar in both samples and indicated that symbolic and realistic threats, as well as SDO and RWA, positively and significantly predicted anti-immigrant prejudice. Moreover, the model considering SDO and RWA as mediators of threats' effects on prejudice showed a better fit than the model in which ingroup threats mediated the effects of SDO and RWA on prejudice against immigrants. Accordingly, SDO and RWA partially mediated the effect of both symbolic and realistic threats, which maintained a significant effect on prejudice against immigrants, however.

  16. Method for obtaining structure and interactions from oriented lipid bilayers

    PubMed Central

    Lyatskaya, Yulia; Liu, Yufeng; Tristram-Nagle, Stephanie; Katsaras, John; Nagle, John F.

    2009-01-01

    Precise calculations are made of the scattering intensity I(q) from an oriented stack of lipid bilayers using a realistic model of fluctuations. The quantities of interest include the bilayer bending modulus Kc , the interbilayer interaction modulus B, and bilayer structure through the form factor F(qz). It is shown how Kc and B may be obtained from data at large qz where fluctuations dominate. Good estimates of F(qz) can be made over wide ranges of qz by using I(q) in q regions away from the peaks and for qr≠0 where details of the scattering domains play little role. Rough estimates of domain sizes can also be made from smaller qz data. Results are presented for data taken on fully hydrated, oriented DOPC bilayers in the Lα phase. These results illustrate the advantages of oriented samples compared to powder samples. PMID:11304287

  17. Quantitative radiomics: impact of stochastic effects on textural feature analysis implies the need for standards

    PubMed Central

    Nyflot, Matthew J.; Yang, Fei; Byrd, Darrin; Bowen, Stephen R.; Sandison, George A.; Kinahan, Paul E.

    2015-01-01

    Abstract. Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes. PMID:26251842

  18. Quantitative radiomics: impact of stochastic effects on textural feature analysis implies the need for standards.

    PubMed

    Nyflot, Matthew J; Yang, Fei; Byrd, Darrin; Bowen, Stephen R; Sandison, George A; Kinahan, Paul E

    2015-10-01

    Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes.

  19. Practical Advice on Calculating Confidence Intervals for Radioprotection Effects and Reducing Animal Numbers in Radiation Countermeasure Experiments

    PubMed Central

    Landes, Reid D.; Lensing, Shelly Y.; Kodell, Ralph L.; Hauer-Jensen, Martin

    2014-01-01

    The dose of a substance that causes death in P% of a population is called an LDP, where LD stands for lethal dose. In radiation research, a common LDP of interest is the radiation dose that kills 50% of the population by a specified time, i.e., lethal dose 50 or LD50. When comparing LD50 between two populations, relative potency is the parameter of interest. In radiation research, this is commonly known as the dose reduction factor (DRF). Unfortunately, statistical inference on dose reduction factor is seldom reported. We illustrate how to calculate confidence intervals for dose reduction factor, which may then be used for statistical inference. Further, most dose reduction factor experiments use hundreds, rather than tens of animals. Through better dosing strategies and the use of a recently available sample size formula, we also show how animal numbers may be reduced while maintaining high statistical power. The illustrations center on realistic examples comparing LD50 values between a radiation countermeasure group and a radiation-only control. We also provide easy-to-use spreadsheets for sample size calculations and confidence interval calculations, as well as SAS® and R code for the latter. PMID:24164553

  20. Development of a Probabilistic Dynamic Synthesis Method for the Analysis of Nondeterministic Structures

    NASA Technical Reports Server (NTRS)

    Brown, A. M.

    1998-01-01

    Accounting for the statistical geometric and material variability of structures in analysis has been a topic of considerable research for the last 30 years. The determination of quantifiable measures of statistical probability of a desired response variable, such as natural frequency, maximum displacement, or stress, to replace experience-based "safety factors" has been a primary goal of these studies. There are, however, several problems associated with their satisfactory application to realistic structures, such as bladed disks in turbomachinery. These include the accurate definition of the input random variables (rv's), the large size of the finite element models frequently used to simulate these structures, which makes even a single deterministic analysis expensive, and accurate generation of the cumulative distribution function (CDF) necessary to obtain the probability of the desired response variables. The research presented here applies a methodology called probabilistic dynamic synthesis (PDS) to solve these problems. The PDS method uses dynamic characteristics of substructures measured from modal test as the input rv's, rather than "primitive" rv's such as material or geometric uncertainties. These dynamic characteristics, which are the free-free eigenvalues, eigenvectors, and residual flexibility (RF), are readily measured and for many substructures, a reasonable sample set of these measurements can be obtained. The statistics for these rv's accurately account for the entire random character of the substructure. Using the RF method of component mode synthesis, these dynamic characteristics are used to generate reduced-size sample models of the substructures, which are then coupled to form system models. These sample models are used to obtain the CDF of the response variable by either applying Monte Carlo simulation or by generating data points for use in the response surface reliability method, which can perform the probabilistic analysis with an order of magnitude less computational effort. Both free- and forced-response analyses have been performed, and the results indicate that, while there is considerable room for improvement, the method produces usable and more representative solutions for the design of realistic structures with a substantial savings in computer time.

  1. Spatial design and strength of spatial signal: Effects on covariance estimation

    USGS Publications Warehouse

    Irvine, Kathryn M.; Gitelman, Alix I.; Hoeting, Jennifer A.

    2007-01-01

    In a spatial regression context, scientists are often interested in a physical interpretation of components of the parametric covariance function. For example, spatial covariance parameter estimates in ecological settings have been interpreted to describe spatial heterogeneity or “patchiness” in a landscape that cannot be explained by measured covariates. In this article, we investigate the influence of the strength of spatial dependence on maximum likelihood (ML) and restricted maximum likelihood (REML) estimates of covariance parameters in an exponential-with-nugget model, and we also examine these influences under different sampling designs—specifically, lattice designs and more realistic random and cluster designs—at differing intensities of sampling (n=144 and 361). We find that neither ML nor REML estimates perform well when the range parameter and/or the nugget-to-sill ratio is large—ML tends to underestimate the autocorrelation function and REML produces highly variable estimates of the autocorrelation function. The best estimates of both the covariance parameters and the autocorrelation function come under the cluster sampling design and large sample sizes. As a motivating example, we consider a spatial model for stream sulfate concentration.
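    As a schematic illustration of the estimation problem studied above, the sketch below simulates a zero-mean Gaussian random field under a random design and maximizes the likelihood of an exponential-with-nugget covariance with scipy. The true parameter values, the zero-mean simplification and the optimizer settings are assumptions for illustration; the paper's comparison also covers REML and additional sampling designs, which are not reproduced here.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.spatial.distance import cdist

        def neg_log_lik(params, coords, z):
            """Negative log-likelihood of a zero-mean Gaussian field with
            exponential-with-nugget covariance (log-parameterized for positivity)."""
            sill, rho, nugget = np.exp(params)
            h = cdist(coords, coords)
            C = sill * np.exp(-h / rho) + nugget * np.eye(len(z))
            _, logdet = np.linalg.slogdet(C)
            return 0.5 * (logdet + z @ np.linalg.solve(C, z))

        gen = np.random.default_rng(1)
        coords = gen.uniform(0, 10, size=(144, 2))            # random design, n = 144 as in the paper
        h = cdist(coords, coords)
        C_true = 1.0 * np.exp(-h / 2.0) + 0.2 * np.eye(144)   # sill = 1, range = 2, nugget = 0.2 (assumed)
        z = np.linalg.cholesky(C_true) @ gen.standard_normal(144)

        fit = minimize(neg_log_lik, x0=np.log([0.5, 1.0, 0.1]), args=(coords, z), method="Nelder-Mead")
        print("ML estimates (sill, range, nugget):", np.exp(fit.x))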

  2. A phantom design for assessment of detectability in PET imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollenweber, Scott D., E-mail: scott.wollenweber@g

    2016-09-15

    Purpose: The primary clinical role of positron emission tomography (PET) imaging is the detection of anomalous regions of (18)F-FDG uptake, which are often indicative of malignant lesions. The goal of this work was to create a task-configurable fillable phantom for realistic measurements of detectability in PET imaging. Design goals included simplicity, adjustable feature size, realistic size and contrast levels, and inclusion of a lumpy (i.e., heterogeneous) background. Methods: The detection targets were hollow 3D-printed dodecahedral nylon features. The exostructure sphere-like features created voids in a background of small, solid non-porous plastic (acrylic) spheres inside a fillable tank. The features filled at full concentration while the background concentration was reduced due to filling only between the solid spheres. Results: Multiple iterations of feature size and phantom construction were used to determine a configuration at the limit of detectability for a PET/CT system. A full-scale design used a 20 cm uniform cylinder (head-size) filled with a fixed pattern of features at a contrast of approximately 3:1. Known signal-present and signal-absent PET sub-images were extracted from multiple scans of the same phantom and with detectability in a challenging (i.e., useful) range. These images enabled calculation and comparison of the quantitative observer detectability metrics between scanner designs and image reconstruction methods. The phantom design has several advantages including filling simplicity, wall-less contrast features, the control of the detectability range via feature size, and a clinically realistic lumpy background. Conclusions: This phantom provides a practical method for testing and comparison of lesion detectability as a function of imaging system, acquisition parameters, and image reconstruction methods and parameters.

  3. Parking simulation of three-dimensional multi-sized star-shaped particles

    NASA Astrophysics Data System (ADS)

    Zhu, Zhigang; Chen, Huisu; Xu, Wenxiang; Liu, Lin

    2014-04-01

    The shape and size of particles may have a great impact on the microstructure as well as the physical properties of particulate composites. However, it is challenging to configure a parking system of particles whose geometrical shape is close to that of realistic grains in particulate composites. In this work, with the assistance of x-ray tomography and a spherical harmonic series, we present a star-shaped particle that closely approximates realistic, arbitrarily shaped grains. To realize such a hard-particle parking structure, an inter-particle overlap detection algorithm is introduced. A serial sectioning approach is employed to visualize the particle parking structure in order to verify the reliability of the overlap detection algorithm. Furthermore, the area and perimeter of solids in any arbitrary planar section calculated using a numerical method are verified by comparison with those obtained using an image analysis approach. This contribution helps further the understanding of how the microstructure and physical properties of star-shaped particles depend on the realistic geometrical shape.

  4. Radiation-Spray Coupling for Realistic Flow Configurations

    NASA Technical Reports Server (NTRS)

    El-Asrag, Hossam; Iannetti, Anthony C.

    2011-01-01

    Three Large Eddy Simulations (LES) of a lean-direct injection (LDI) combustor are performed and compared. In addition to the cold flow simulation, the effect of radiation coupling with the multi-physics reactive flow is analyzed. The flamelet progress variable approach is used as a subgrid combustion model, combined with a stochastic subgrid model for spray atomization and an optically thin radiation model. For accurate chemistry modeling, a detailed Jet-A surrogate mechanism is utilized. To achieve realistic inflow, a simple recycling technique is applied at the inflow section upstream of the swirler. Good agreement is shown with the mean and root-mean-square profiles of the experimental data. Combustion is found to change the shape and size of the central recirculation zone. Radiation is found to change the spray dynamics and atomization by altering the heat release distribution and the local temperature values, which affect the evaporation process. The simulation with radiation modeling shows a wider range of droplet size distribution owing to the altered evaporation rate. The current study demonstrates the importance of radiation modeling for accurate prediction in realistic spray combustion configurations, even for low-pressure systems.

  5. Design, analysis and presentation of factorial randomised controlled trials

    PubMed Central

    Montgomery, Alan A; Peters, Tim J; Little, Paul

    2003-01-01

    Background The evaluation of more than one intervention in the same randomised controlled trial can be achieved using a parallel group design. However this requires increased sample size and can be inefficient, especially if there is also interest in considering combinations of the interventions. An alternative may be a factorial trial, where for two interventions participants are allocated to receive neither intervention, one or the other, or both. Factorial trials require special considerations, however, particularly at the design and analysis stages. Discussion Using a 2 × 2 factorial trial as an example, we present a number of issues that should be considered when planning a factorial trial. The main design issue is that of sample size. Factorial trials are most often powered to detect the main effects of interventions, since adequate power to detect plausible interactions requires greatly increased sample sizes. The main analytical issues relate to the investigation of main effects and the interaction between the interventions in appropriate regression models. Presentation of results should reflect the analytical strategy with an emphasis on the principal research questions. We also give an example of how baseline and follow-up data should be presented. Lastly, we discuss the implications of the design, analytical and presentational issues covered. Summary Difficulties in interpreting the results of factorial trials if an influential interaction is observed is the cost of the potential for efficient, simultaneous consideration of two or more interventions. Factorial trials can in principle be designed to have adequate power to detect realistic interactions, and in any case they are the only design that allows such effects to be investigated. PMID:14633287
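    As a quick numerical companion to the sample-size point above, the sketch below uses statsmodels to compute the per-group sample size needed to detect a hypothetical standardized main effect, and then the approximate increase needed for an interaction of the same magnitude (in a balanced 2 x 2 factorial the interaction contrast is estimated with roughly twice the standard error of a main effect, which behaves like halving the effect size). The effect size, alpha and power values are illustrative assumptions, not figures from the paper.

        from statsmodels.stats.power import TTestIndPower

        solver = TTestIndPower()
        d = 0.3   # hypothetical standardized main effect (Cohen's d)
        n_main = solver.solve_power(effect_size=d, alpha=0.05, power=0.8, alternative="two-sided")
        print(f"per-group n for a main effect of d = {d}: {n_main:.1f}")

        # An interaction of the same magnitude is estimated with about twice the standard
        # error of a main effect, so it behaves roughly like an effect of size d/2,
        # which is why adequately powering for interactions inflates the sample size.
        n_inter = solver.solve_power(effect_size=d / 2, alpha=0.05, power=0.8, alternative="two-sided")
        print(f"approximate per-group n to detect an interaction of the same size: {n_inter:.1f}")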

  6. An evaluation of inferential procedures for adaptive clinical trial designs with pre-specified rules for modifying the sample size.

    PubMed

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2014-09-01

    Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.

  7. Merging Marine Ecosystem Models and Genomics

    NASA Astrophysics Data System (ADS)

    Coles, V.; Hood, R. R.; Stukel, M. R.; Moran, M. A.; Paul, J. H.; Satinsky, B.; Zielinski, B.; Yager, P. L.

    2015-12-01

    One of the grand challenges of oceanography is to develop modeling techniques to more effectively incorporate genomic information. As one approach, we developed an ecosystem model whose community is determined by randomly assigning functional genes to build each organism's "DNA". Microbes are assigned a size that sets their baseline environmental responses using allometric response curves. These responses are modified by the costs and benefits conferred by each gene in an organism's genome. The microbes are embedded in a general circulation model where environmental conditions shape the emergent population. This model is used to explore whether organisms constructed from randomized combinations of metabolic capability alone can self-organize to create realistic oceanic biogeochemical gradients. Realistic community size spectra and chlorophyll-a concentrations emerge in the model. The model is run repeatedly with randomly-generated microbial communities and each time realistic gradients in community size spectra, chlorophyll-a, and forms of nitrogen develop. This supports the hypothesis that the metabolic potential of a community rather than the realized species composition is the primary factor setting vertical and horizontal environmental gradients. Vertical distributions of nitrogen and transcripts for genes involved in nitrification are broadly consistent with observations. Modeled gene and transcript abundance for nitrogen cycling and processing of land-derived organic material match observations along the extreme gradients in the Amazon River plume, and they help to explain the factors controlling observed variability.

  8. Unified Static and Dynamic Recrystallization Model for the Minerals of Earth's Mantle Using Internal State Variable Model

    NASA Astrophysics Data System (ADS)

    Cho, H. E.; Horstemeyer, M. F.; Baumgardner, J. R.

    2017-12-01

    In this study, we present an internal state variable (ISV) constitutive model developed to describe static and dynamic recrystallization and grain size progression in a unified manner. The method accurately captures the effects of temperature, pressure and strain rate on recrystallization and grain size. Because the ISV approach treats dislocation density, recrystallized volume fraction and grain size as internal variables, the model can simultaneously track their history during deformation with unprecedented realism. Based on this deformation history, the method can capture realistic mechanical properties, such as stress-strain behavior, within the microstructure-property relationship. Also, both the transient grain size during deformation and the steady-state grain size of dynamic recrystallization can be predicted from the history variable of recrystallized volume fraction. Furthermore, because the model handles plasticity and creep simultaneously (unified creep-plasticity), the dislocation-related mechanisms of static recovery (or diffusion creep), dynamic recovery (or dislocation creep) and hardening can also be captured. To model this comprehensive mechanical behavior, the mathematical formulation includes elasticity for evaluating the yield stress, work hardening for plasticity, creep, and the unified recrystallization and grain size progression. Because pressure sensitivity is especially important for mantle minerals, we developed a yield function combining Drucker-Prager shear failure and von Mises yield surfaces to model the pressure-dependent yield stress, together with pressure-dependent work hardening and creep terms. Using these formulations, we calibrated the model against experimental data for the minerals taken from the literature, and also against experimental data for metals to show its general applicability. Understanding of realistic mantle dynamics can only be acquired once the various deformation regimes and mechanisms are comprehensively modeled. The results of this study demonstrate that this ISV model is a good candidate to help reveal the realistic dynamics of the Earth's mantle.

  9. Kalman filter approach for uncertainty quantification in time-resolved laser-induced incandescence.

    PubMed

    Hadwin, Paul J; Sipkens, Timothy A; Thomson, Kevin A; Liu, Fengshan; Daun, Kyle J

    2018-03-01

    Time-resolved laser-induced incandescence (TiRe-LII) data can be used to infer spatially and temporally resolved volume fractions and primary particle size distributions of soot-laden aerosols, but these estimates are corrupted by measurement noise as well as uncertainties in the spectroscopic and heat transfer submodels used to interpret the data. Estimates of the temperature, concentration, and size distribution of soot primary particles within a sample aerosol are typically made by nonlinear regression of modeled spectral incandescence decay, or effective temperature decay, to experimental data. In this work, we employ nonstationary Bayesian estimation techniques to infer aerosol properties from simulated and experimental LII signals, specifically the extended Kalman filter and Schmidt-Kalman filter. These techniques exploit the time-varying nature of both the measurements and the models, and they reveal how uncertainty in the estimates computed from TiRe-LII data evolves over time. Both techniques perform better when compared with standard deterministic estimates; however, we demonstrate that the Schmidt-Kalman filter produces more realistic uncertainty estimates.
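    As generic background for the filtering approach above, the sketch below runs a minimal scalar Kalman filter on a simulated, noisy exponential temperature decay. It is a schematic stand-in only: the decay constant, noise variances and one-dimensional state are assumptions, not the extended or Schmidt-Kalman filters built on the paper's spectroscopic and heat-transfer submodels.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, tau = 1e-9, 50e-9                    # 1 ns steps, 50 ns cooling time constant (assumed)
        F = np.exp(-dt / tau)                    # state transition for the excess temperature T - T_gas
        Q, R = 5.0**2, 50.0**2                   # process and measurement noise variances (assumed)

        T_true = 3000.0
        x_hat, P = 2500.0, 500.0**2              # deliberately poor prior estimate and variance
        for _ in range(200):
            T_true = F * T_true + rng.normal(0, np.sqrt(Q))   # simulate the "true" decay
            z = T_true + rng.normal(0, np.sqrt(R))            # noisy pyrometric measurement
            # predict
            x_hat, P = F * x_hat, F * P * F + Q
            # update
            K = P / (P + R)                                   # Kalman gain
            x_hat, P = x_hat + K * (z - x_hat), (1 - K) * P
        print(f"final estimate {x_hat:.0f} K, true {T_true:.0f} K, posterior std {np.sqrt(P):.0f} K")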

  10. Validation of PCR methods for quantitation of genetically modified plants in food.

    PubMed

    Hübner, P; Waiblinger, H U; Pietsch, K; Brodmann, P

    2001-01-01

    For enforcement of the recently introduced labeling threshold for genetically modified organisms (GMOs) in food ingredients, quantitative detection methods such as quantitative competitive (QC-PCR) and real-time PCR are applied by official food control laboratories. The experiences of 3 European food control laboratories in validating such methods were compared to describe realistic performance characteristics of quantitative PCR detection methods. The limit of quantitation (LOQ) of GMO-specific, real-time PCR was experimentally determined to reach 30-50 target molecules, which is close to theoretical prediction. Starting PCR with 200 ng genomic plant DNA, the LOQ depends primarily on the genome size of the target plant and ranges from 0.02% for rice to 0.7% for wheat. The precision of quantitative PCR detection methods, expressed as relative standard deviation (RSD), varied from 10 to 30%. Using test samples containing Bt176 corn and applying Bt176-specific QC-PCR, mean values deviated from true values by -7 to 18%, with an average of 2 ± 10%. Ruggedness of real-time PCR detection methods was assessed in an interlaboratory study analyzing commercial, homogeneous food samples. Roundup Ready soybean DNA contents were determined in the range of 0.3 to 36%, relative to soybean DNA, with RSDs of about 25%. Taking the precision of quantitative PCR detection methods into account, suitable sample plans and sample sizes for GMO analysis are suggested. Because quantitative GMO detection methods measure GMO contents of samples in relation to reference material (calibrants), high priority must be given to international agreements and standardization on certified reference materials.
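    As a rough order-of-magnitude check of the genome-size dependence described above, the sketch below converts 200 ng of template DNA into haploid genome copies and expresses an absolute LOQ of about 40 target molecules as a percentage. The 1C genome masses (rice roughly 0.5 pg, wheat roughly 17 pg) are assumed values, not taken from the record, so the results only reproduce the quoted LOQ range to within about a factor of two.

        # Assumed 1C genome masses; 40 copies taken as representative of the 30-50 range above.
        template_ng = 200.0
        loq_copies = 40.0
        genome_pg = {"rice": 0.5, "wheat": 17.0}

        for plant, pg in genome_pg.items():
            genome_copies = template_ng * 1e3 / pg      # 1 ng = 1000 pg
            relative_loq = 100.0 * loq_copies / genome_copies
            print(f"{plant}: ~{genome_copies:,.0f} genome copies in 200 ng -> LOQ ~ {relative_loq:.2f}%")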

  11. Improving Mathematics Teaching in Kindergarten with Realistic Mathematical Education

    ERIC Educational Resources Information Center

    Papadakis, Stamatios; Kalogiannakis, Michail; Zaranis, Nicholas

    2017-01-01

    The present study investigates and compares the influence of teaching Realistic Mathematics on the development of mathematical competence in kindergarten. The sample consisted of 231 Greek kindergarten students. For the implementation of the survey, we conducted an intervention, which included one experimental and one control group. Children in…

  12. Optimized methods for epilepsy therapy development using an etiologically realistic model of focal epilepsy in the rat

    PubMed Central

    Eastman, Clifford L.; Fender, Jason S.; Temkin, Nancy R.; D’Ambrosio, Raimondo

    2015-01-01

    Conventionally developed antiseizure drugs fail to control epileptic seizures in about 30% of patients, and no treatment prevents epilepsy. New etiologically realistic, syndrome-specific epilepsy models are expected to identify better treatments by capturing currently unknown ictogenic and epileptogenic mechanisms that operate in the corresponding patient populations. Additionally, the use of electrocorticography permits better monitoring of epileptogenesis and the full spectrum of acquired seizures, including focal nonconvulsive seizures that are typically difficult to treat in humans. Thus, the combined use of etiologically realistic models and electrocorticography may improve our understanding of the genesis and progression of epilepsy, and facilitate discovery and translation of novel treatments. However, this approach is labor intensive and must be optimized. To this end, we used an etiologically realistic rat model of posttraumatic epilepsy, in which the initiating fluid percussion injury closely replicates contusive closed-head injury in humans, and has been adapted to maximize epileptogenesis and focal non-convulsive seizures. We obtained week-long 5-electrode electrocorticography 1 month post-injury, and used a Monte-Carlo-based non-parametric bootstrap strategy to test the impact of electrode montage design, duration-based seizure definitions, group size and duration of recordings on the assessment of posttraumatic epilepsy, and on statistical power to detect antiseizure and antiepileptogenic treatment effects. We found that use of seizure definition based on clinical criteria rather than event duration, and of recording montages closely sampling the activity of epileptic foci, maximize the power to detect treatment effects. Detection of treatment effects was marginally improved by prolonged recording, and 24 h recording epochs were sufficient to provide 80% power to detect clinically interesting seizure control or prevention of seizures with small groups of animals. We conclude that appropriate electrode montage and clinically relevant seizure definition permit convenient deployment of fluid percussion injury and electrocorticography for epilepsy therapy development. PMID:25523813

  13. Effect of reaction-step-size noise on the switching dynamics of stochastic populations

    NASA Astrophysics Data System (ADS)

    Be'er, Shay; Heller-Algazi, Metar; Assaf, Michael

    2016-05-01

    In genetic circuits, when the messenger RNA lifetime is short compared to the cell cycle, proteins are produced in geometrically distributed bursts, which greatly affects the cellular switching dynamics between different metastable phenotypic states. Motivated by this scenario, we study a general problem of switching or escape in stochastic populations, where influx of particles occurs in groups or bursts, sampled from an arbitrary distribution. The fact that the step size of the influx reaction is a priori unknown and, in general, may fluctuate in time with a given correlation time and statistics, introduces an additional nondemographic reaction-step-size noise into the system. Employing the probability-generating function technique in conjunction with Hamiltonian formulation, we are able to map the problem in the leading order onto solving a stationary Hamilton-Jacobi equation. We show that compared to the "usual case" of single-step influx, bursty influx exponentially decreases the population's mean escape time from its long-lived metastable state. In particular, close to bifurcation we find a simple analytical expression for the mean escape time which solely depends on the mean and variance of the burst-size distribution. Our results are demonstrated on several realistic distributions and compare well with numerical Monte Carlo simulations.

  14. A combined experimental and numerical study on upper airway dosimetry of inhaled nanoparticles from an electrical discharge machine shop.

    PubMed

    Tian, Lin; Shang, Yidan; Chen, Rui; Bai, Ru; Chen, Chunying; Inthavong, Kiao; Tu, Jiyuan

    2017-07-12

    Exposure to nanoparticles in the workplace is a health concern for occupational workers, with increased risk of developing respiratory, cardiovascular, and neurological disorders. Based on animal inhalation studies and human lung tumor risk extrapolation, current authoritative recommendations on exposure limits are expressed either as total mass or as number concentrations. The effects of particle size distribution and their implications for regional airway dosages are not elaborated. Real-time particle concentrations and size distributions in the range from 5.52 to 98.2 nm were recorded in a wire-cut electrical discharge machining (WEDM) shop during a typical working day. Under this realistic exposure condition, human inhalation simulations were performed in a physiologically realistic nasal and upper airway replica. The combined experimental and numerical study is the first to establish a realistic exposure condition under which detailed dose metric studies can be performed. In addition to mass-concentration-guided exposure limits, inhalation risks from nano-pollutants were reexamined accounting for the actual particle size distribution and deposition statistics. Detailed dosimetries of the inhaled nano-pollutants in the human nasal and upper airways with respect to particle number, mass and surface area were discussed, and empirical equations were developed. A striking enhancement of human airway dosages was detected by the combined experimental and numerical study in the WEDM machine shop: increases of up to 33-fold in mass, 27-fold in surface area and 8-fold in number dosages were found during working hours in comparison to the background dosimetry measured at midnight. The real-time particle concentration measurements showed substantial emission of nano-pollutants by WEDM machining activity, and the combined experimental and numerical study provided extraordinary detail on human inhalation dosimetry. It was found that human inhalation dosimetry is extremely sensitive to the real-time particle concentration and size distribution. Particle concentrations averaged over a 24-h period will inevitably misrepresent information critical for realistic inhalation risk assessment. Particle size distribution carries very important information for determining human airway dosimetry; a recommendation on the workplace exposure limit based purely on number or mass concentration is insufficient. A particle size distribution, together with the deposition equations, is critical for recognizing the actual exposure risks. In addition, human airway dosimetry in number, mass and surface area varies significantly, and a complete inhalation risk assessment requires knowledge of the toxicity mechanisms in response to each individual metric. Further improvements in these areas are needed.

  15. From grid cells to place cells with realistic field sizes

    PubMed Central

    2017-01-01

    While grid cells in the medial entorhinal cortex (MEC) of rodents have multiple, regularly arranged firing fields, place cells in the cornu ammonis (CA) regions of the hippocampus mostly have single spatial firing fields. Since there are extensive projections from MEC to the CA regions, many models have suggested that a feedforward network can transform grid cell firing into robust place cell firing. However, these models generate place fields that are consistently too small compared to those recorded in experiments. Here, we argue that it is implausible that grid cell activity alone can be transformed into place cells with robust place fields of realistic size in a feedforward network. We propose two solutions to this problem. Firstly, weakly spatially modulated cells, which are abundant throughout EC, provide input to downstream place cells along with grid cells. This simple model reproduces many place cell characteristics as well as results from lesion studies. Secondly, the recurrent connections between place cells in the CA3 network generate robust and realistic place fields. Both mechanisms could work in parallel in the hippocampal formation and this redundancy might account for the robustness of place cell responses to a range of disruptions of the hippocampal circuitry. PMID:28750005

  16. Quantum state discrimination bounds for finite sample size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R.; Mosonyi, Milan; Mathematical Institute, Budapest University of Technology and Economics, Egry Jozsef u 1., Budapest 1111

    2012-12-15

    In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of the two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.

  17. Dynamical stability of the one-dimensional rigid Brownian rotator: the role of the rotator’s spatial size and shape

    NASA Astrophysics Data System (ADS)

    Jeknić-Dugić, Jasmina; Petrović, Igor; Arsenijević, Momir; Dugić, Miroljub

    2018-05-01

    We investigate dynamical stability of a single propeller-like shaped molecular cogwheel modelled as the fixed-axis rigid rotator. In the realistic situations, rotation of the finite-size cogwheel is subject to the environmentally-induced Brownian-motion effect that we describe by utilizing the quantum Caldeira-Leggett master equation. Assuming the initially narrow (classical-like) standard deviations for the angle and the angular momentum of the rotator, we investigate the dynamics of the first and second moments depending on the size, i.e. on the number of blades of both the free rotator as well as of the rotator in the external harmonic field. The larger the standard deviations, the less stable (i.e. less predictable) rotation. We detect the absence of the simple and straightforward rules for utilizing the rotator’s stability. Instead, a number of the size-related criteria appear whose combinations may provide the optimal rules for the rotator dynamical stability and possibly control. In the realistic situations, the quantum-mechanical corrections, albeit individually small, may effectively prove non-negligible, and also revealing subtlety of the transition from the quantum to the classical dynamics of the rotator. As to the latter, we detect a strong size-dependence of the transition to the classical dynamics beyond the quantum decoherence process.

  18. Synthetic seismograms from vibracores: A case study in correlating the late quaternary seismic stratigraphy of the New Jersey inner continental shelf

    USGS Publications Warehouse

    Esker, D.; Sheridan, R.E.; Ashley, G.M.; Waldner, J.S.; Hall, D.W.

    1996-01-01

    A new technique, which uses empirical relationships between median grain size and both density and velocity to calculate proxy values for these properties, avoids many of the problems associated with the use of well logs and shipboard measurements to construct synthetic seismograms. This method was used to ground-truth and correlate across both analog and digital shallow high-resolution seismic data on the New Jersey shelf. Sampling dry vibracores to determine median grain size eliminates the detrimental effects that coring disturbances and preservation variables have on the sediment and water content of the core, and the link between seismic response, lithology and bed spacing becomes more exact. The frequency of the field seismic data can be realistically simulated by a 10-20 cm sampling interval of the vibracores. The estimate of the percentage error inherent in this technique, 12% for acoustic impedance and 24% for reflection amplitude, is calculated to one standard deviation and is within a reasonable limit for such a procedure. The synthetic seismograms of two cores, 4-6 m long, were used to correlate specific sedimentary deposits to specific seismic reflection responses. Because this technique is applicable to unconsolidated sediments, it is ideal for upper Pleistocene and Holocene strata. Copyright © 1996, SEPM (Society for Sedimentary Geology).
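    The core of such an approach is converting grain-size-derived density and velocity proxies into acoustic impedance and then into normal-incidence reflection coefficients. A minimal sketch is shown below; the layer densities and velocities are invented placeholders, since the empirical grain-size regressions themselves are not given in the record.

        import numpy as np

        def reflection_coefficients(density, velocity):
            """Normal-incidence reflection coefficients from layer densities (kg/m^3)
            and velocities (m/s), via acoustic impedance Z = rho * v."""
            z = np.asarray(density) * np.asarray(velocity)
            return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

        # Hypothetical proxy values for a three-layer unconsolidated succession.
        density = [1900.0, 2050.0, 1800.0]
        velocity = [1600.0, 1750.0, 1550.0]
        print(reflection_coefficients(density, velocity))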

  19. Modeling and simulation of the deposition/relaxation processes of polycrystalline diatomic structures of metallic nitride films

    NASA Astrophysics Data System (ADS)

    García, M. F.; Restrepo-Parra, E.; Riaño-Rojas, J. C.

    2015-05-01

    This work develops a model that mimics the growth of diatomic, polycrystalline thin films by artificially splitting the growth into two stages, deposition and relaxation: (1) deposition is simulated with a grain-based stochastic method (grain orientations chosen randomly) using a non-standard version of the Kinetic Monte Carlo method known as Constant Time Stepping, in which the adsorption of adatoms is accepted or rejected depending on the neighborhood conditions and desorption is not included; and (2) diffusion is simulated with the Monte Carlo method combined with the Metropolis algorithm. The model accounts for parameters that determine the morphology of the film, such as the growth temperature, the interacting atomic species, the binding energy and the material crystal structure. The modeled samples exhibited an FCC structure with grains oriented along the < 111 >, < 200 > and < 220 > family planes. The grain size and film roughness were analyzed: by construction, the grain size decreased, and the roughness increased, as the growth temperature increased. Although deposition and relaxation occur simultaneously during the growth of real materials, this method may nevertheless be valid for building realistic polycrystalline samples.
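    The relaxation stage described above rests on the standard Metropolis acceptance rule, in which a trial move is accepted with probability min(1, exp(-dE/kT)). The sketch below illustrates that rule on a deliberately simplified two-dimensional occupation lattice with a toy bond-counting energy; the lattice, energies and temperature are illustrative assumptions and not the FCC nitride film model of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        kT = 0.1                                          # assumed reduced temperature
        L = 32
        lattice = rng.integers(0, 2, size=(L, L))         # toy occupation grid
        moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

        def site_energy(lat, i, j):
            """Toy bond-counting energy: -1 per occupied nearest neighbour (periodic boundaries)."""
            if not lat[i, j]:
                return 0.0
            return -float(lat[(i + 1) % L, j] + lat[(i - 1) % L, j]
                          + lat[i, (j + 1) % L] + lat[i, (j - 1) % L])

        for _ in range(20000):                            # Metropolis relaxation attempts
            i, j = rng.integers(0, L, size=2)
            if not lattice[i, j]:
                continue
            di, dj = moves[rng.integers(4)]
            ni, nj = (i + di) % L, (j + dj) % L
            if lattice[ni, nj]:                           # target site occupied: no hop
                continue
            e_old = site_energy(lattice, i, j)
            lattice[i, j], lattice[ni, nj] = 0, 1         # trial hop
            e_new = site_energy(lattice, ni, nj)
            if rng.random() >= np.exp(-(e_new - e_old) / kT):
                lattice[i, j], lattice[ni, nj] = 1, 0     # reject: undo the hop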

  20. Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers.

    PubMed

    Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew

    2014-12-26

    Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems. In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation.
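    For reference, a standard form of the nonlinear Poisson-Boltzmann equation that such mean-field calculations solve is sketched below in LaTeX. This is generic background rather than the specific parameter sets tested in the record; the size-modified variant replaces the ionic Boltzmann factors with concentrations corrected for finite ion size, which caps the ionic density near highly charged surfaces.

        \nabla \cdot \left[ \varepsilon(\mathbf{r}) \, \nabla \phi(\mathbf{r}) \right]
          = -\rho_{f}(\mathbf{r})
            - \sum_{i} q_{i} \, c_{i}^{\infty}
              \exp\!\left( -\frac{q_{i} \, \phi(\mathbf{r})}{k_{B} T} \right)

    Here ε(r) is the position-dependent dielectric coefficient, φ(r) the electrostatic potential, ρ_f(r) the fixed solute charge density, and c_i^∞ and q_i the bulk concentration and charge of mobile ion species i.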

  1. The Development of Midlatitude Cirrus Models for MODIS Using FIRE-I, FIRE-II, and ARM In Situ Data

    NASA Technical Reports Server (NTRS)

    Nasiri, Shaima L.; Baum, Bryan A.; Heymsfield, Andrew J.; Yang, Ping; Poellot, Michael R.; Kratz, David P.; Hu, Yong-Xiang

    2002-01-01

    Detailed in situ data from cirrus clouds have been collected during dedicated field campaigns, but the use of the size and habit distribution data has been lagging in the development of more realistic cirrus scattering models. In this study, the authors examine the use of in situ cirrus data collected during three field campaigns to develop more realistic midlatitude cirrus microphysical models. Data are used from the First International Satellite Cloud Climatology Project (ISCCP) Regional Experiment (FIRE)-I (1986) and FIRE-II (1991) campaigns and from a recent Atmospheric Radiation Measurement (ARM) Program campaign held in March-April of 2000. The microphysical models are based on measured vertical distributions of both particle size and particle habit and are used to develop new scattering models for a suite of Moderate Resolution Imaging Spectroradiometer (MODIS) bands spanning visible, near-infrared, and infrared wavelengths. The sensitivity of the resulting scattering properties to the assumed particle size and habit distributions is examined. It is found that the near-infrared bands are sensitive not only to the discretization of the size distribution but also to the assumed habit distribution. In addition, the results indicate that the effective diameter calculated from a given size distribution tends to be sensitive to the number of size bins that are used to discretize the data and also to the ice-crystal habit distribution.

  2. Assessing the Application of a Geographic Presence-Only Model for Land Suitability Mapping

    PubMed Central

    Heumann, Benjamin W.; Walsh, Stephen J.; McDaniel, Phillip M.

    2011-01-01

    Recent advances in ecological modeling have focused on novel methods for characterizing the environment that use presence-only data and machine-learning algorithms to predict the likelihood of species occurrence. These novel methods may have great potential for land suitability applications in the developing world where detailed land cover information is often unavailable or incomplete. This paper assesses the adaptation and application of the presence-only geographic species distribution model, MaxEnt, for agricultural crop suitability mapping in rural Thailand, where lowland paddy rice and upland field crops predominate. To assess this modeling approach, three independent crop presence datasets were used, including a social-demographic survey of farm households, a remote sensing classification of land use/land cover, and ground control points, used for geodetic and thematic reference, that vary in their geographic distribution and sample size. Disparate environmental data were integrated to characterize environmental settings across Nang Rong District, a region of approximately 1,300 sq. km in size. Results indicate that the MaxEnt model is capable of modeling crop suitability for upland and lowland crops, including rice varieties, although model results varied between datasets due to the high sensitivity of the model to the distribution of observed crop locations in geographic and environmental space. Accuracy assessments indicate that model outcomes were influenced by the sample size and the distribution of sample points in geographic and environmental space. The need for further research into accuracy assessments of presence-only models lacking true absence data is discussed. We conclude that the MaxEnt model can provide good estimates of crop suitability, but many areas need to be carefully scrutinized, including the geographic distribution of input data and the assessment methods, to ensure realistic modeling results. PMID:21860606

  3. Visual difference metric for realistic image synthesis

    NASA Astrophysics Data System (ADS)

    Bolin, Mark R.; Meyer, Gary W.

    1999-05-01

    An accurate and efficient model of human perception has been developed to control the placement of samples in a realistic image synthesis algorithm. Previous sampling techniques have sought to spread the error equally across the image plane. However, this approach neglects the fact that the renderings are intended to be displayed for a human observer. The human visual system has a varying sensitivity to error that is based upon the viewing context. This means that equivalent optical discrepancies can be very obvious in one situation and imperceptible in another. It is ultimately the perceptibility of this error that governs image quality and should be used as the basis of a sampling algorithm. This paper focuses on a simplified version of the Lubin Visual Discrimination Metric (VDM) that was developed for insertion into an image synthesis algorithm. The sampling VDM makes use of a Haar wavelet basis for the cortical transform and a less severe spatial pooling operation. The model was extended to color, including the effects of chromatic aberration. Comparisons are made between the execution time and visual difference map for the original Lubin and simplified visual difference metrics. Results for the realistic image synthesis algorithm are also presented.

  4. Simulation of particle size distributions in Polar Mesospheric Clouds from Microphysical Models

    NASA Astrophysics Data System (ADS)

    Thomas, G. E.; Merkel, A.; Bardeen, C.; Rusch, D. W.; Lumpe, J. D.

    2009-12-01

    The size distribution of ice particles is perhaps the most important observable aspect of microphysical processes in Polar Mesospheric Cloud (PMC) formation and evolution. A conventional technique to derive such information is from optical observation of scattering, either passive solar scattering from photometric or spectrometric techniques, or active backscattering by lidar. We present simulated size distributions from two state-of-the-art models using CARMA sectional microphysics: WACCM/CARMA, in which CARMA is interactively coupled with WACCM3 (Bardeen et al, 2009), and stand-alone CARMA forced by WACCM3 meteorology (Merkel et al, this meeting). Both models provide well-resolved size distributions of ice particles as a function of height, location and time for realistic high-latitude summertime conditions. In this paper we present calculations of the UV scattered brightness at multiple scattering angles as viewed by the AIM Cloud Imaging and Particle Size (CIPS) satellite experiment. These simulations are then considered discretely-sampled “data” for the scattering phase function, which are inverted using a technique (Lumpe et al, this meeting) to retrieve particle size information. We employ a T-matrix scattering code which applies to a wide range of non-sphericity of the ice particles, using the conventional idealized prolate/oblate spheroidal shape. This end-to-end test of the relatively new scattering phase function technique provides insight into both the retrieval accuracy and the information content in passive remote sensing of PMC.

  5. Small Body GN and C Research Report: G-SAMPLE - An In-Flight Dynamical Method for Identifying Sample Mass [External Release Version

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Bayard, David S.

    2006-01-01

    G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
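    A deliberately simplified illustration of the underlying idea, that the collected mass changes the rigid-body response to known thruster forces, is sketched below. It is not the G-SAMPLE estimator itself (which is a maximum-likelihood method built on a full spacecraft dynamics model with force-sensor measurements); the F = m*a model, noise level and masses are assumptions chosen only to show the regression step and a rough confidence interval.

        import numpy as np

        rng = np.random.default_rng(2)
        m_dry, m_sample = 500.0, 1.0                  # kg, assumed dry mass and collected sample mass
        thrust = rng.uniform(5.0, 20.0, size=200)     # N, known commanded thruster forces
        accel = thrust / (m_dry + m_sample) + rng.normal(0, 5e-5, size=200)  # measured, with sensor noise

        # Least-squares slope of a = F / m, then invert to estimate the total mass.
        b_hat = np.sum(thrust * accel) / np.sum(thrust**2)
        m_hat = 1.0 / b_hat

        resid = accel - b_hat * thrust
        se_b = resid.std(ddof=1) / np.sqrt(np.sum(thrust**2))
        se_m = se_b / b_hat**2                        # delta method for the inverse of the slope
        print(f"estimated sample mass: {m_hat - m_dry:+.3f} kg "
              f"(~95% CI half-width {1.96 * se_m:.3f} kg)")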

  6. Chained Bell Inequality Experiment with High-Efficiency Measurements

    NASA Astrophysics Data System (ADS)

    Tan, T. R.; Wan, Y.; Erickson, S.; Bierhorst, P.; Kienzler, D.; Glancy, S.; Knill, E.; Leibfried, D.; Wineland, D. J.

    2017-03-01

    We report correlation measurements on two 9Be+ ions that violate a chained Bell inequality obeyed by any local-realistic theory. The correlations can be modeled as derived from a mixture of a local-realistic probabilistic distribution and a distribution that violates the inequality. A statistical framework is formulated to quantify the local-realistic fraction allowable in the observed distribution without the fair-sampling or independent-and-identical-distributions assumptions. We exclude models of our experiment whose local-realistic fraction is above 0.327 at the 95% confidence level. This bound is significantly lower than 0.586, the minimum fraction derived from a perfect Clauser-Horne-Shimony-Holt inequality experiment. Furthermore, our data provide a device-independent certification of the deterministically created Bell states.

  7. Engendering Anthropocentrism: Lessons from Children's Realistic Animal Stories.

    ERIC Educational Resources Information Center

    Johnson, Kathleen R.

    In children's realistic stories about animals, a number of wholly and unambiguously anthropocentric assumptions are at work. For instance, in one study most of the books (81%) in a sample of 50 stories involve a pet or the process of domesticating a wild animal. In most cases the primary animal character is a dog or horse. The predominance of…

  8. A Comparison of Techniques for Scheduling Earth-Observing Satellites

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2004-01-01

    Scheduling observations by coordinated fleets of Earth Observing Satellites (EOS) involves large search spaces, complex constraints and poorly understood bottlenecks, conditions where evolutionary and related algorithms are often effective. However, there are many such algorithms and the best one to use is not clear. Here we compare multiple variants of the genetic algorithm: stochastic hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling on ten realistically sized EOS scheduling problems. Schedules are represented by a permutation (non-temporal ordering) of the observation requests. A simple deterministic scheduler assigns times and resources to each observation request in the order indicated by the permutation, discarding those that violate the constraints created by previously scheduled observations. Simulated annealing performs best. Random mutation outperforms a more 'intelligent' mutator. Furthermore, the best mutator, by a small margin, was a novel approach we call temperature-dependent random sampling that makes large changes in the early stages of evolution and smaller changes towards the end of search.
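    A minimal sketch of the permutation representation and the greedy deterministic decoder described above is given below. The request tuples, the single-capacity constraint and the hill-climbing loop are invented for illustration and do not reproduce the paper's EOS problem instances or its mutation operators.

        import random
        from typing import List, Tuple

        # Each request is (id, earliest_start, latest_start, duration); a single shared
        # sensor can serve one observation at a time (an illustrative assumption).
        Request = Tuple[int, float, float, float]

        def decode(permutation: List[int], requests: List[Request]) -> List[Tuple[int, float]]:
            """Walk the permutation, schedule each request at the earliest feasible time,
            and discard requests that no longer fit."""
            schedule, busy_until = [], 0.0
            for idx in permutation:
                rid, earliest, latest, duration = requests[idx]
                start = max(earliest, busy_until)
                if start <= latest:
                    schedule.append((rid, start))
                    busy_until = start + duration
            return schedule

        def hill_climb(requests: List[Request], iterations: int = 1000) -> List[Tuple[int, float]]:
            """Stochastic hill climbing over permutations with a random swap mutation."""
            perm = list(range(len(requests)))
            random.shuffle(perm)
            best = decode(perm, requests)
            for _ in range(iterations):
                i, j = random.sample(range(len(perm)), 2)
                perm[i], perm[j] = perm[j], perm[i]
                candidate = decode(perm, requests)
                if len(candidate) >= len(best):          # more scheduled observations is better
                    best = candidate
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # reject: undo the swap
            return best

        requests = [(k, 10.0 * k % 70, 10.0 * k % 70 + 30.0, 12.0) for k in range(20)]
        print(hill_climb(requests))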

  9. Freight Terminals Operating Environment

    DOT National Transportation Integrated Search

    1981-06-01

    The research analysis has been directed toward (1) developing a realistic, quantitative description of the structure of the economic zones that are centered upon medium-size urban areas, (2) determining the nature of traffic in manufactured goods whi...

  10. Radon decay products in realistic living rooms and their activity distributions in human respiratory system.

    PubMed

    Mohery, M; Abdallah, A M; Baz, S S; Al-Amoudi, Z M

    2014-12-01

    In this study, the individual activity concentrations of attached short-lived radon decay products ((218)Po, (214)Pb and (214)Po) in aerosol particles were measured in ten poorly ventilated realistic living rooms. Using standard methodologies, the samples were collected with a filter holder technique combined with alpha spectrometry. The mean air activity concentrations of these radionuclides were found to be 5.3±0.8, 4.5±0.5 and 3.9±0.4 Bq m(-3), respectively. Based on the physical properties of the attached decay products and the physiological parameters of light work activity for an adult human male recommended by ICRP 66, and considering the parameters of the activity size distribution (AMD = 0.25 μm and σ(g) = 2.5) given by the NRC, the total and regional deposition fractions in each airway generation could be evaluated. Moreover, the total and regional equivalent doses in the human respiratory tract could be estimated. In addition, the surface activity distribution per generation is calculated for the bronchial region (BB) and the bronchiolar region (bb) of the respiratory system. The maximum values of these activities were found in the upper bronchial airway generations.

  11. A New Aerodynamic Data Dispersion Method for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Pinier, Jeremy T.

    2011-01-01

    A novel method for implementing aerodynamic data dispersion analysis is herein introduced. A general mathematical approach combined with physical modeling tailored to the aerodynamic quantity of interest enables the generation of more realistically relevant dispersed data and, in turn, more reasonable flight simulation results. The method simultaneously allows for the aerodynamic quantities and their derivatives to be dispersed given a set of non-arbitrary constraints, which stresses the controls model in more ways than with the traditional bias up or down of the nominal data within the uncertainty bounds. The adoption and implementation of this new method within the NASA Ares I Crew Launch Vehicle Project has resulted in significant increases in predicted roll control authority, and lowered the induced risks for flight test operations. One direct impact on launch vehicles is a reduced size for auxiliary control systems, and the possibility of an increased payload. This technique has the potential of being applied to problems in multiple areas where nominal data together with uncertainties are used to produce simulations using Monte Carlo type random sampling methods. It is recommended that a tailored physics-based dispersion model be delivered with any aerodynamic product that includes nominal data and uncertainties, in order to make flight simulations more realistic and allow for leaner spacecraft designs.

  12. Quantifying the potential impact of measurement error in an investigation of autism spectrum disorder (ASD).

    PubMed

    Heavner, Karyn; Newschaffer, Craig; Hertz-Picciotto, Irva; Bennett, Deborah; Burstyn, Igor

    2014-05-01

    The Early Autism Risk Longitudinal Investigation (EARLI), an ongoing study of a risk-enriched pregnancy cohort, examines genetic and environmental risk factors for autism spectrum disorders (ASDs). We simulated the potential effects of both measurement error (ME) in exposures and misclassification of ASD-related phenotype (assessed as Autism Observation Scale for Infants (AOSI) scores) on measures of association generated under this study design. We investigated the impact on the power to detect true associations with exposure and the false positive rate (FPR) for a non-causal correlate of exposure (X2, r=0.7) for continuous AOSI score (linear model) versus dichotomised AOSI (logistic regression) when the sample size (n), degree of ME in exposure, and strength of the expected (true) OR (eOR) between exposure and AOSI varied. Exposure was a continuous variable in all linear models and was dichotomised at one SD above the mean in logistic models. Simulations reveal complex patterns and suggest that: (1) there was attenuation of associations that increased with eOR and ME; (2) the FPR was considerable under many scenarios; and (3) the FPR has a complex dependence on the eOR, ME and model choice, but was greater for logistic models. The findings will stimulate work examining cost-effective strategies to reduce the impact of ME at realistic sample sizes and affirm the importance for EARLI of investment in biological samples that help precisely quantify a wide range of environmental exposures.
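
    A minimal Monte Carlo sketch of this kind of simulation is given below: exposure is measured with error, a correlated non-causal variable X2 is carried along, and power and the FPR are estimated from repeated fits. The sample size, effect size and error variance are illustrative values, not EARLI parameters.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def one_run(n=500, beta=0.3, me_sd=1.0, rho=0.7):
    x = rng.normal(size=n)                                       # true exposure
    x2 = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)      # non-causal correlate
    x_obs = x + rng.normal(scale=me_sd, size=n)                  # exposure measured with error
    y = beta * x + rng.normal(size=n)                            # continuous AOSI-like score
    # Linear models on the mismeasured exposure and on the correlate.
    p_x = sm.OLS(y, sm.add_constant(x_obs)).fit().pvalues[1]
    p_x2 = sm.OLS(y, sm.add_constant(x2)).fit().pvalues[1]
    return p_x < 0.05, p_x2 < 0.05

runs = [one_run() for _ in range(1000)]
power = np.mean([r[0] for r in runs])
fpr = np.mean([r[1] for r in runs])
print(f"power = {power:.2f}, false-positive rate for X2 = {fpr:.2f}")
```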

  13. Polarization-resolved simulations of multiple-order rainbows using realistic raindrop shapes

    NASA Astrophysics Data System (ADS)

    Haußmann, Alexander

    2016-05-01

    This paper presents selected results of a simulation study of the first five (primary to quinary) rainbow orders based on a realistic, size-dependent shape model for falling raindrops, taking into account that the drops' bottom part is flattened to a higher degree than the dome-like top part. Moreover, broad drop size distributions are included in the simulations, as one goal of this paper is to analyze whether the predicted amplification and attenuation patterns for higher-order rainbows, as derived from previous simulations with monodisperse drop sizes, will still be pronounced under the conditions of natural rainfall. Secondly, deviations of the multiple rainbow orders' polarization state from the reference case of spherical drops are discussed. It is shown that each rainbow order may contain a small amount of circularly polarized light due to total internal reflections. Thirdly, it is investigated how the conditions that generate twinned primary rainbows will affect the higher orders. For the simulations, geometric-optic ray tracing of the full Stokes vector as well as an approximate approach using appropriately shifted Debye series data is applied.

  14. A battery model that enables consideration of realistic anisotropic environment surrounding an active material particle and its application

    NASA Astrophysics Data System (ADS)

    Lin, Xianke; Lu, Wei

    2017-07-01

    This paper proposes a model that enables consideration of the realistic anisotropic environment surrounding an active material particle by incorporating both diffusion and migration of lithium ions and electrons in the particle. This model makes it possible to quantitatively evaluate effects such as fracture on capacity degradation. In contrast, the conventional model assumes isotropic environment and only considers diffusion in the active particle, which cannot capture the effect of fracture since it would predict results contradictory to experimental observations. With the developed model we have investigated the effects of active material electronic conductivity, particle size, and State of Charge (SOC) swing window when fracture exists. The study shows that the low electronic conductivity of active material has a significant impact on the lithium ion pattern. Fracture increases the resistance for electron transport and therefore reduces lithium intercalation/deintercalation. Particle size plays an important role in lithium ion transport. Smaller particle size is preferable for mitigating capacity loss when fracture happens. The study also shows that operating at high SOC reduces the impact of fracture.

  15. Inter-Individual Variability in Human Response to Low-Dose Ionizing Radiation, Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocke, David

    2016-08-01

    In order to investigate inter-individual variability in response to low-dose ionizing radiation, we are working with three models: 1) in-vivo irradiated human skin, for which we have a realistic model but few subjects, all from a previous project; 2) ex-vivo irradiated human skin, for which we also have a realistic model, though with the limitations involved in keeping skin pieces alive in media; and 3) MatTek EpiDermFT skin plugs, which provide a more realistic model than cell lines while being more controllable than human samples.

  16. Effects of nasal drug delivery device and its orientation on sprayed particle deposition in a realistic human nasal cavity.

    PubMed

    Tong, Xuwen; Dong, Jingliang; Shang, Yidan; Inthavong, Kiao; Tu, Jiyuan

    2016-10-01

    In this study, the effects of the nasal drug delivery device and the spray nozzle orientation on sprayed droplet deposition in a realistic human nasal cavity were numerically studied. Prior to performing the numerical investigation, an in-house designed automated actuation system representing the mean adult actuation force was developed to produce a realistic spray plume. The spray plume development was then filmed by a high-speed photography system, and spray characteristics such as spray cone angle, break-up length, and average droplet velocity were obtained through off-line image analysis. Continuing studies utilizing those experimental data as boundary conditions were applied in the subsequent numerical spray simulations using a commercially available nasal spray device, which was inserted into a realistic adult nasal passage with external facial features. By varying the particle releasing direction, the deposition fractions of selected particle sizes on the main nasal passage for targeted drug delivery were compared. The results demonstrated that the middle spray direction showed superior spray efficiency compared with the upper or lower directions, and that 10 µm agents were the most suitable particle size, as the majority of sprayed agents could be delivered to the targeted area, the main passage. This study elaborates a comprehensive approach to better understand the nasal spray mechanism and evaluate its performance for existing nasal delivery practices. Results of this study can assist the pharmaceutical industry in improving the current design of nasal drug delivery devices and ultimately benefit more patients through optimized medication delivery.

  17. The Effects of 3D Computer Simulation on Biology Students' Achievement and Memory Retention

    ERIC Educational Resources Information Center

    Elangovan, Tavasuria; Ismail, Zurida

    2014-01-01

    A quasi experimental study was conducted for six weeks to determine the effectiveness of two different 3D computer simulation based teaching methods, that is, realistic simulation and non-realistic simulation on Form Four Biology students' achievement and memory retention in Perak, Malaysia. A sample of 136 Form Four Biology students in Perak,…

  18. Can interface features affect aggression resulting from violent video game play? An examination of realistic controller and large screen size.

    PubMed

    Kim, Ki Joon; Sundar, S Shyam

    2013-05-01

    Aggressiveness attributed to violent video game play is typically studied as a function of the content features of the game. However, can interface features of the game also affect aggression? Guided by the General Aggression Model (GAM), we examine the controller type (gun replica vs. mouse) and screen size (large vs. small) as key technological aspects that may affect the state aggression of gamers, with spatial presence and arousal as potential mediators. Results from a between-subjects experiment showed that a realistic controller and a large screen display induced greater aggression, presence, and arousal than a conventional mouse and a small screen display, respectively, and confirmed that trait aggression was a significant predictor of gamers' state aggression. Contrary to GAM, however, arousal showed no effects on aggression; instead, presence emerged as a significant mediator.

  19. An Experimental Study of Launch Vehicle Propellant Tank Fragmentation

    NASA Technical Reports Server (NTRS)

    Richardson, Erin; Jackson, Austin; Hays, Michael; Bangham, Mike; Blackwood, James; Skinner, Troy; Richman, Ben

    2014-01-01

    In order to better understand launch vehicle abort environments, Bangham Engineering Inc. (BEi) built a test assembly that fails sample materials (steel and aluminum plates of various alloys and thicknesses) under quasi-realistic vehicle failure conditions. Samples are exposed to pressures similar to those expected in vehicle failure scenarios and filmed at high speed to increase understanding of complex fracture mechanics. After failure, the fragments of each test sample are collected, catalogued and reconstructed for further study. Post-test analysis shows that aluminum samples consistently produce fewer fragments than steel samples of similar thickness and at similar failure pressures. Video analysis shows that there are several failure 'patterns' that can be observed for all test samples based on configuration. Fragment velocities are also measured from high speed video data. Sample thickness and material are analyzed for trends in failure pressure. Testing is also done with cryogenic and noncryogenic liquid loading on the samples. It is determined that liquid loading and cryogenic temperatures can decrease material fragmentation for sub-flight thicknesses. A method is developed for capture and collection of fragments that is greater than 97 percent effective in recovering sample mass, addressing the generation of tiny fragments. Currently, samples tested do not match actual launch vehicle propellant tank material thicknesses because of size constraints on test assembly, but test findings are used to inform the design and build of another, larger test assembly with the purpose of testing actual vehicle flight materials that include structural components such as iso-grid and friction stir welds.

  20. Does the foveal shape influence the image formation in human eyes?

    NASA Astrophysics Data System (ADS)

    Frey, Katharina; Zimmerling, Beatrice; Scheibe, Patrick; Rauscher, Franziska G.; Reichenbach, Andreas; Francke, Mike; Brunner, Robert

    2017-10-01

    In human eyes, the maximum visual acuity correlates locally with the fovea, a shallow depression in the retina. Previous examinations have been reduced to simple geometrical fovea models derived from postmortem preparations and considering only a few superficial ray propagation aspects. In the current study, an extended and realistic analysis of ray-optical simulations is presented for a comprehensive, anatomically realistic eye model of the anterior part together with realistic aspherical human foveal topographical profiles deduced from in vivo optical coherence tomography (OCT); the refractive index step at the transition from vitreous to retinal tissue is taken into account. The optical effect of a commonly shaped (averaged) and an extraordinarily shaped foveal pit were both compared to the analysis of an assumed purely spherical boundary layer. The influence of the aperture size, wavelength, and incident angle on the spot size and shape, as well as the axial focal and lateral centroid position, is investigated, and a lateral displacement of about 2 μm and an axial shift of the best focal position of less than 4 μm are found. These findings indicate only small optical effects that are laterally in the range of inter-receptor distances and axially less than the photoreceptor outer segment dimension.

  1. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    NASA Astrophysics Data System (ADS)

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    We explore optimization methods for planning the placement, sizing and operations of flexible alternating current transmission system (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to series compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of linear programs (LP) that are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically sized networks that suffer congestion from a range of causes, including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically sized network.
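
    The reduction of an l1-norm objective to a linear program, the core of the LP succession described above, can be sketched as follows. The toy constraint matrix stands in for the linearized thermal-limit constraints; it is not the MatPower Polish grid.

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: choose inductance modifications x (one per line) that satisfy a
# set of linear flow-relief constraints A @ x <= b while minimizing ||x||_1.
rng = np.random.default_rng(1)
n_lines, n_constraints = 8, 5
A = rng.normal(size=(n_constraints, n_lines))
b = -np.ones(n_constraints)          # constraints that force some nonzero action

# Standard l1 trick: minimize sum(t) subject to -t <= x <= t,
# written as x - t <= 0 and -x - t <= 0.
I = np.eye(n_lines)
A_ub = np.block([
    [A,  np.zeros((n_constraints, n_lines))],
    [I,  -I],
    [-I, -I],
])
b_ub = np.concatenate([b, np.zeros(2 * n_lines)])
c = np.concatenate([np.zeros(n_lines), np.ones(n_lines)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n_lines + [(0, None)] * n_lines)
x = res.x[:n_lines]
print("SC devices placed on lines:", np.flatnonzero(np.abs(x) > 1e-6))
```

    As in the paper's observation, the minimizer of the l1 objective tends to be sparse, so only a few lines receive a device.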

  2. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    DOE PAGES

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-24

    We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.

  3. Size effects on insect hovering aerodynamics: an integrated computational study.

    PubMed

    Liu, H; Aono, H

    2009-03-01

    Hovering is a remarkable ability observed in flying insects of all sizes. The effect of size on flapping-wing aerodynamics in insect hovering is of interest to the micro-air-vehicle (MAV) community and of importance to comparative morphologists. In this study, we present an integrated computational study of such size effects on insect hovering aerodynamics, performed using a biology-inspired dynamic flight simulator that integrates the modelling of realistic wing-body morphology, the modelling of flapping-wing and body kinematics and an in-house Navier-Stokes solver. Results of four typical insect hovering flights, including a hawkmoth, a honeybee, a fruit fly and a thrips, over a wide range of Reynolds numbers from O(10^4) to O(10^1), are presented, which demonstrate the feasibility of the present integrated computational methods in quantitatively modelling and evaluating the unsteady aerodynamics in insect flapping flight. Our results, based on realistic modelling of insect hovering, therefore offer an integrated understanding of the near-field vortex dynamics, the far-field wake and downwash structures, and their correlation with force production in terms of size and Reynolds number as well as wing kinematics. Our results not only give an integrated interpretation of the similarity and discrepancy of the near- and far-field vortex structures in insect hovering but also demonstrate that our methods can be an effective tool in MAV design.

  4. Evaluation of Classifier Performance for Multiclass Phenotype Discrimination in Untargeted Metabolomics.

    PubMed

    Trainor, Patrick J; DeFilippis, Andrew P; Rai, Shesh N

    2017-06-21

    Statistical classification is a critical component of utilizing metabolomics data for examining the molecular determinants of phenotypes. Despite this, a comprehensive and rigorous evaluation of the accuracy of classification techniques for phenotype discrimination given metabolomics data has not been conducted. We conducted such an evaluation using both simulated and real metabolomics datasets, comparing Partial Least Squares-Discriminant Analysis (PLS-DA), Sparse PLS-DA, Random Forests, Support Vector Machines (SVM), Artificial Neural Network, k-Nearest Neighbors (k-NN), and Naïve Bayes classification techniques for discrimination. We evaluated the techniques on simulated data generated to mimic global untargeted metabolomics data by incorporating realistic block-wise correlation and partial correlation structures for mimicking the correlations and metabolite clustering generated by biological processes. Over the simulation studies, covariance structures, means, and effect sizes were stochastically varied to provide consistent estimates of classifier performance over a wide range of possible scenarios. The effects of the presence of non-normal error distributions, the introduction of biological and technical outliers, unbalanced phenotype allocation, missing values due to abundances below a limit of detection, and the effect of prior-significance filtering (dimension reduction) were evaluated via simulation. In each simulation, classifier parameters, such as the number of hidden nodes in a Neural Network, were optimized by cross-validation to minimize the probability of detecting spurious results due to poorly tuned classifiers. Classifier performance was then evaluated using real metabolomics datasets of varying sample medium, sample size, and experimental design. We report that in the most realistic simulation studies, which incorporated non-normal error distributions, unbalanced phenotype allocation, outliers, missing values, and dimension reduction, classifier performance (least to greatest error) was ranked as follows: SVM, Random Forest, Naïve Bayes, sPLS-DA, Neural Networks, PLS-DA and k-NN classifiers. When non-normal error distributions were introduced, the performance of PLS-DA and k-NN classifiers deteriorated further relative to the remaining techniques. Over the real datasets, a trend of better performance for the SVM and Random Forest classifiers was observed.
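
    A condensed sketch of such a comparison, using scikit-learn classifiers on data simulated with block-wise correlation, is shown below. The block structure, effect size and cross-validation setup are illustrative and much smaller than the study's simulation design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def simulate(n_per_class=40, n_blocks=10, block_size=5, shift=0.6):
    """Two phenotypes; metabolites come in correlated blocks (shared latent factor)."""
    p = n_blocks * block_size
    X, y = [], []
    for label in (0, 1):
        latent = rng.normal(size=(n_per_class, n_blocks))
        noise = rng.normal(scale=0.5, size=(n_per_class, p))
        blocks = np.repeat(latent, block_size, axis=1) + noise
        blocks[:, :block_size] += label * shift        # phenotype effect in one block
        X.append(blocks)
        y.append(np.full(n_per_class, label))
    return np.vstack(X), np.concatenate(y)

X, y = simulate()
classifiers = {
    "SVM": SVC(C=1.0, kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:>13s}: CV accuracy = {acc:.2f}")
```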

  5. Galaxies in the Illustris simulation as seen by the Sloan Digital Sky Survey - II. Size-luminosity relations and the deficit of bulge-dominated galaxies in Illustris at low mass

    NASA Astrophysics Data System (ADS)

    Bottrell, Connor; Torrey, Paul; Simard, Luc; Ellison, Sara L.

    2017-05-01

    The interpretive power of the newest generation of large-volume hydrodynamical simulations of galaxy formation rests upon their ability to reproduce the observed properties of galaxies. In this second paper in a series, we employ bulge+disc decompositions of realistic dust-free galaxy images from the Illustris simulation in a consistent comparison with galaxies from the Sloan Digital Sky Survey (SDSS). Examining the size-luminosity relations of each sample, we find that galaxies in Illustris are roughly twice as large and 0.7 mag brighter on average than galaxies in the SDSS. The trend of increasing slope and decreasing normalization of size-luminosity as a function of bulge fraction is qualitatively similar to observations. However, the size-luminosity relations of Illustris galaxies are quantitatively distinguished by higher normalizations and smaller slopes than for real galaxies. We show that this result is linked to a significant deficit of bulge-dominated galaxies in Illustris relative to the SDSS at stellar masses log(M*/M⊙) ≲ 11. We investigate this deficit by comparing bulge fraction estimates derived from photometry and internal kinematics. We show that photometric bulge fractions are systematically lower than the kinematic fractions at low masses, but with increasingly good agreement as the stellar mass increases.

  6. Splitting CO2 with a ceria‐based redox cycle in a solar‐driven thermogravimetric analyzer

    PubMed Central

    Takacs, M.; Ackermann, S.; Bonk, A.; Neises‐von Puttkamer, M.; Haueter, Ph.; Scheffe, J. R.; Vogt, U. F.

    2016-01-01

    Thermochemical splitting of CO2 via a ceria‐based redox cycle was performed in a solar‐driven thermogravimetric analyzer. Overall reaction rates, including heat and mass transport, were determined under concentrated irradiation mimicking realistic operation of solar reactors. Reticulated porous ceramic (RPC) structures and fibers made of undoped and Zr4+‐doped CeO2 were endothermally reduced under radiative fluxes of 1280 suns in the temperature range 1200–1950 K and subsequently re‐oxidized with CO2 at 950–1400 K. Rapid and uniform heating was observed for the 8 ppi ceria RPC with mm‐sized porosity due to its low optical thickness and volumetric radiative absorption, while ceria fibers with μm‐sized porosity performed poorly due to their opacity to incident irradiation. The 10 ppi RPC exhibited higher fuel yield because of its higher sample density. Zr4+‐doped ceria showed increasing reduction extents with dopant concentration but decreasing specific CO yield due to unfavorable oxidation thermodynamics and slower kinetics. AIChE J, 63: 1263–1271, 2017 PMID:28405030

  7. Investigating Compaction by Intergranular Pressure Solution Using the Discrete Element Method

    NASA Astrophysics Data System (ADS)

    van den Ende, M. P. A.; Marketos, G.; Niemeijer, A. R.; Spiers, C. J.

    2018-01-01

    Intergranular pressure solution creep is an important deformation mechanism in the Earth's crust. The phenomenon has been frequently studied and several analytical models have been proposed that describe its constitutive behavior. These models require assumptions regarding the geometry of the aggregate and the grain size distribution in order to solve for the contact stresses and often neglect shear tractions. Furthermore, analytical models tend to overestimate experimental compaction rates at low porosities, an observation for which the underlying mechanisms remain to be elucidated. Here we present a conceptually simple, 3-D discrete element method (DEM) approach for simulating intergranular pressure solution creep that explicitly models individual grains, relaxing many of the assumptions that are required by analytical models. The DEM model is validated against experiments by direct comparison of macroscopic sample compaction rates. Furthermore, the sensitivity of the overall DEM compaction rate to the grain size and applied stress is tested. The effects of the interparticle friction and of a distributed grain size on macroscopic strain rates are subsequently investigated. Overall, we find that the DEM model is capable of reproducing realistic compaction behavior, and that the strain rates produced by the model are in good agreement with uniaxial compaction experiments. Characteristic features, such as the dependence of the strain rate on grain size and applied stress, as predicted by analytical models, are also observed in the simulations. DEM results show that interparticle friction and a distributed grain size affect the compaction rates by less than half an order of magnitude.

  8. How to develop and write a case for technical writing

    NASA Technical Reports Server (NTRS)

    Couture, B.; Goldstein, J.

    1981-01-01

    Cases of different sizes and shapes for teaching technical writing to engineers at Wayne State University have been developed. The case approach was adopted for some assignments because sophomores and juniors lacked technical expertise and professional knowledge of the engineering world. Cases were found to be good exercises, providing realistic practice in specific writing tasks or isolating particular skills in the composing process. A special kind of case, which narrates the experiences of one technical person engaged in the problem-solving process in a professional rhetorical situation, was developed. This type of long, realistic fiction is called an "holistic" case. Rather than asking students to role-play a character, an holistic case realistically encompasses the whole of the technical writing process. It allows students to experience the total communication act, in which the technical task and data are fully integrated into the rhetorical situation, and gives them an opportunity to perform in a realistic context, using skills and knowledge required in communication on the job. It is believed that the holistic case most fully exploits the advantages of the case method for students of professional communication.

  9. Simulating patient-specific heart shape and motion using SPECT perfusion images with the MCAT phantom

    NASA Astrophysics Data System (ADS)

    Faber, Tracy L.; Garcia, Ernest V.; Lalush, David S.; Segars, W. Paul; Tsui, Benjamin M.

    2001-05-01

    The spline-based Mathematical Cardiac Torso (MCAT) phantom is a realistic software simulation designed to simulate single photon emission computed tomographic (SPECT) data. It incorporates a heart model of known size and shape; thus, it is invaluable for measuring the accuracy of acquisition, reconstruction, and post-processing routines. New functionality has been added by replacing the standard heart model with left ventricular (LV) epicardial and endocardial surface points detected from actual patient SPECT perfusion studies. LV surfaces detected by standard post-processing quantitation programs are converted through interpolation in space and time into new B-spline models. Perfusion abnormalities are added to the model based on the results of standard perfusion quantification. The new LV is translated and rotated to fit within the existing atria and right ventricular models, which are scaled based on the size of the LV. Simulations were created for five different patients with myocardial infarctions who had undergone SPECT perfusion imaging. The shape, size, and motion of the resulting activity map were compared visually to the original SPECT images. In all cases, the size, shape and motion of the simulated LVs matched well with the original images. Thus, realistic simulations with known physiologic and functional parameters can be created for evaluating the efficacy of processing algorithms.
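
    The conversion of detected surface points into a smooth B-spline boundary, one ingredient of the approach above, might look like the following sketch, which fits a closed spline through a hypothetical short-axis contour rather than real SPECT-derived surfaces.

```python
import numpy as np
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(0)

# Hypothetical short-axis endocardial contour points (a noisy circle standing in
# for LV surface points detected by a SPECT quantitation program), in mm.
theta = np.linspace(0, 2 * np.pi, 24, endpoint=False)
r = 30 + rng.normal(scale=1.0, size=theta.size)
x, y = r * np.cos(theta), r * np.sin(theta)
x, y = np.r_[x, x[0]], np.r_[y, y[0]]          # close the contour

# Fit a periodic cubic B-spline through the detected points and resample it
# densely to obtain a smooth boundary model.
tck, _ = splprep([x, y], s=5.0, per=True)
xs, ys = splev(np.linspace(0, 1, 200), tck)
print("resampled boundary points:", len(xs))
```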

  10. An Experimental Study of Team Size and Performance on a Complex Task.

    PubMed

    Mao, Andrew; Mason, Winter; Suri, Siddharth; Watts, Duncan J

    2016-01-01

    The relationship between team size and productivity is a question of broad relevance across economics, psychology, and management science. For complex tasks, however, where both the potential benefits and costs of coordinated work increase with the number of workers, neither theoretical arguments nor empirical evidence consistently favor larger vs. smaller teams. Experimental findings, meanwhile, have relied on small groups and highly stylized tasks, hence are hard to generalize to realistic settings. Here we narrow the gap between real-world task complexity and experimental control, reporting results from an online experiment in which 47 teams of size ranging from n = 1 to 32 collaborated on a realistic crisis mapping task. We find that individuals in teams exerted lower overall effort than independent workers, in part by allocating their effort to less demanding (and less productive) sub-tasks; however, we also find that individuals in teams collaborated more with increasing team size. Directly comparing these competing effects, we find that the largest teams outperformed an equivalent number of independent workers, suggesting that gains to collaboration dominated losses to effort. Importantly, these teams also performed comparably to a field deployment of crisis mappers, suggesting that experiments of the type described here can help solve practical problems as well as advancing the science of collective intelligence.

  11. An Experimental Study of Team Size and Performance on a Complex Task

    PubMed Central

    Mao, Andrew; Mason, Winter; Suri, Siddharth; Watts, Duncan J.

    2016-01-01

    The relationship between team size and productivity is a question of broad relevance across economics, psychology, and management science. For complex tasks, however, where both the potential benefits and costs of coordinated work increase with the number of workers, neither theoretical arguments nor empirical evidence consistently favor larger vs. smaller teams. Experimental findings, meanwhile, have relied on small groups and highly stylized tasks, hence are hard to generalize to realistic settings. Here we narrow the gap between real-world task complexity and experimental control, reporting results from an online experiment in which 47 teams of size ranging from n = 1 to 32 collaborated on a realistic crisis mapping task. We find that individuals in teams exerted lower overall effort than independent workers, in part by allocating their effort to less demanding (and less productive) sub-tasks; however, we also find that individuals in teams collaborated more with increasing team size. Directly comparing these competing effects, we find that the largest teams outperformed an equivalent number of independent workers, suggesting that gains to collaboration dominated losses to effort. Importantly, these teams also performed comparably to a field deployment of crisis mappers, suggesting that experiments of the type described here can help solve practical problems as well as advancing the science of collective intelligence. PMID:27082239

  12. Power and instrument strength requirements for Mendelian randomization studies using multiple genetic variants.

    PubMed

    Pierce, Brandon L; Ahsan, Habibul; Vanderweele, Tyler J

    2011-06-01

    Mendelian Randomization (MR) studies assess the causality of an exposure-disease association using genetic determinants [i.e. instrumental variables (IVs)] of the exposure. Power and IV strength requirements for MR studies using multiple genetic variants have not been explored. We simulated cohort data sets consisting of a normally distributed disease trait, a normally distributed exposure which affects this trait, and a biallelic genetic variant that affects the exposure. We estimated power to detect an effect of exposure on disease for varying allele frequencies, effect sizes and sample sizes, using two-stage least squares regression on 10,000 data sets (Stage 1 is a regression of exposure on the variant; Stage 2 is a regression of disease on the fitted exposure). Similar analyses were conducted using multiple genetic variants (5, 10, 20) as independent or combined IVs. We assessed IV strength using the first-stage F statistic. Simulations of realistic scenarios indicate that MR studies will require large (n > 1000), often very large (n > 10,000), sample sizes. In many cases, so-called 'weak IV' problems arise when using multiple variants as independent IVs (even with as few as five), resulting in biased effect estimates. Combining genetic factors into fewer IVs results in modest power decreases, but alleviates weak IV problems. Ideal methods for combining genetic factors depend upon knowledge of the genetic architecture underlying the exposure. The feasibility of well-powered, unbiased MR studies will depend upon the amount of variance in the exposure that can be explained by known genetic factors and the 'strength' of the IV set derived from these genetic factors.
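
    The two-stage least squares procedure described above can be sketched as a small power simulation. The allele frequency, variance explained and effect size below are illustrative, and the naive second-stage p-value is used only for power estimation, not for valid inference.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

def mr_power(n=5000, maf=0.3, var_explained=0.02, beta_xy=0.1, n_sim=500):
    """Monte Carlo power for single-variant MR via two-stage least squares."""
    hits, f_stats = 0, []
    for _ in range(n_sim):
        g = rng.binomial(2, maf, size=n)                       # biallelic variant
        beta_gx = np.sqrt(var_explained / (2 * maf * (1 - maf)))
        x = beta_gx * g + rng.normal(size=n)                   # exposure
        y = beta_xy * x + rng.normal(size=n)                   # disease trait
        stage1 = sm.OLS(x, sm.add_constant(g)).fit()           # stage 1: exposure ~ variant
        f_stats.append(stage1.fvalue)                          # instrument strength
        stage2 = sm.OLS(y, sm.add_constant(stage1.fittedvalues)).fit()  # stage 2
        hits += stage2.pvalues[1] < 0.05
    return hits / n_sim, np.mean(f_stats)

power, mean_f = mr_power()
print(f"power = {power:.2f}, mean first-stage F = {mean_f:.1f}")
```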

  13. Evaluation of respondent-driven sampling.

    PubMed

    McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling method, and caution is required when interpreting findings based on the sampling method.
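
    For context, the kind of design-based inference being evaluated can be illustrated with the inverse-degree (RDS-II, Volz-Heckathorn) estimator; the recruit degrees and trait prevalence below are fabricated for illustration.

```python
import numpy as np

# Hypothetical RDS recruits: reported network degree and a binary trait.
rng = np.random.default_rng(3)
degrees = rng.integers(1, 30, size=300)
# Make the trait more common among high-degree recruits to mimic recruitment bias.
trait = rng.random(300) < (0.2 + 0.3 * degrees / degrees.max())

naive = trait.mean()                                   # raw sample proportion
weights = 1.0 / degrees                                # inverse-degree weights
rds_ii = np.sum(weights * trait) / np.sum(weights)     # RDS-II (Volz-Heckathorn) estimate
print(f"sample proportion = {naive:.3f}, RDS-II estimate = {rds_ii:.3f}")
```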

  14. Development and analysis of a finite element model to simulate pulmonary emphysema in CT imaging.

    PubMed

    Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo

    2015-01-01

    In CT imaging, pulmonary emphysema appears as lung regions with Low-Attenuation Areas (LAA). In this study we propose a finite element (FE) model of lung parenchyma, based on a 2-D grid of beam elements, which simulates smoking-related pulmonary emphysema in CT imaging. Simulated LAA images were generated through spatial sampling of the model output. We employed two measurements of emphysema extent: Relative Area (RA) and the exponent D of the cumulative distribution function of LAA cluster size. The model was used to compare RA and D computed on the simulated LAA images with those computed on the model's output. Different mesh element sizes and various model parameters, simulating different physiological/pathological conditions, were considered and analyzed. A suitable mesh element size was determined as the best trade-off between reliable results and reasonable computational cost. Both RA and D computed on simulated LAA images were underestimated with respect to those calculated on the model's output. These underestimations were larger for RA (approximately -44% to -26%) than for D (approximately -16% to -2%). Our FE model could be useful for generating standard test images and for designing realistic physical phantoms of LAA images for assessing the accuracy of descriptors for quantifying emphysema in CT imaging.
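
    The two emphysema measurements used above can be computed from a binary LAA mask roughly as follows; the random mask is a stand-in for a thresholded CT slice or a sampled model output.

```python
import numpy as np
from scipy import ndimage

def emphysema_indices(lowatt_mask):
    """Relative area (RA) and the exponent D of the cumulative LAA cluster-size
    distribution, Y(s) ~ s**(-D), estimated by a log-log least-squares fit."""
    ra = lowatt_mask.mean() * 100.0                       # % of pixels that are LAA
    labels, _ = ndimage.label(lowatt_mask)
    sizes = np.bincount(labels.ravel())[1:]               # cluster sizes (label 0 = background)
    s = np.sort(np.unique(sizes))
    y = np.array([(sizes >= v).sum() for v in s])         # cumulative count of clusters >= s
    slope, _ = np.polyfit(np.log(s), np.log(y), 1)
    return ra, -slope

# Toy binary LAA image standing in for a thresholded CT slice.
rng = np.random.default_rng(0)
mask = rng.random((256, 256)) < 0.08
ra, d = emphysema_indices(mask)
print(f"RA = {ra:.1f}%, D = {d:.2f}")
```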

  15. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    NASA Astrophysics Data System (ADS)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.

    2017-08-01

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  16. "Size Matters": Women in High Tech Start-Ups

    NASA Astrophysics Data System (ADS)

    Lackritz, Hilary

    2001-03-01

    For those who want constant excitement, change, and rapid opportunities to have an impact in the technical world, start-up companies offer wonderful challenges. This talk will focus realistically on rewards and risks in the start-up world. An outline of the differences between the high tech start-ups and the academic and consulting worlds from a personal viewpoint will be presented. Size usually does matter, and in this case, small size can equal independence, entrepreneurship, and other advantages that are hard to come by in Dilbert’s corporate world.

  17. Relative importance of column and adsorption parameters on the productivity in preparative liquid chromatography II: Investigation of separation systems with competitive Langmuir adsorption isotherms.

    PubMed

    Forssén, Patrik; Samuelsson, Jörgen; Fornstedt, Torgny

    2014-06-20

    In this study we investigated how the maximum productivity for a commonly used, realistic separation system with a competitive Langmuir adsorption isotherm is affected by changes in column length, packing particle size, mobile phase viscosity, maximum allowed column pressure, column efficiency, sample concentration/solubility, selectivity, monolayer saturation capacity and retention factor of the first-eluting compound. The study was performed by generating 1000 random separation systems whose optimal injection volume was determined, i.e., the injection volume that gives the largest achievable productivity. The relative change in the largest achievable productivity when one of the parameters above changes was then studied for each system, and the productivity changes for all systems were presented as distributions. We found that it is almost always beneficial to use shorter columns with high pressure drops over the column, and that the selectivity should be greater than 2. However, the sample concentration and column efficiency have a very limited effect on the maximum productivity. The effect of packing particle size depends on the flow-rate-limiting factor: if the pump's maximum flow rate is the limiting factor, use smaller packing; but if the pressure of the system is the limiting factor, use larger packing, up to about 40 μm.

  18. A pilot study to characterize fine particles in the environment of an automotive machining facility.

    PubMed

    Sioutas, C

    1999-04-01

    The main goal of this study was to characterize fine particles (e.g., smaller than about 3 microns) in an automotive machining environment. The Toledo Machining Plant of Chrysler Corporation was selected for this purpose. The effect of local mechanical processes as aerosol sources was a major part of this investigation. To determine the size-dependent mass concentration of particles in the plant, the Micro-Orifice Uniform Deposit Impactor (MOUDI Model 100, MSP Corp., Minneapolis, Minnesota) was used. The MOUDI was placed at central locations in departments with sources inside the plant, so that the obtained information on the size distribution realistically represents the aerosol to which plant workers are exposed. Sampling was conducted over a 4-day period, and during three periods per day, each matching the work shifts. A special effort was made to place the MOUDI at a central location of a department with relatively homogeneous particle sources. The selected sampling sites included welding, grinding, steel machining, and heat treating processes. The average 24-hour mass concentrations of particles smaller than 3.2 microns in aerodynamic diameter were 167.8, 103.9, 201.7, and 112.7 micrograms/m3 for welding, grinding, mild steel, and heat treating processes, respectively. Finally, the mass median diameters of welding, heat treatment, machining, and grinding operations were approximately 0.5, 0.5, 0.6, and 0.8 micron, respectively.

  19. Competing risks regression for clustered data

    PubMed Central

    Zhou, Bingqing; Fine, Jason; Latouche, Aurelien; Labopin, Myriam

    2012-01-01

    A population average regression model is proposed to assess the marginal effects of covariates on the cumulative incidence function when there is dependence across individuals within a cluster in the competing risks setting. This method extends the Fine–Gray proportional hazards model for the subdistribution to situations, where individuals within a cluster may be correlated due to unobserved shared factors. Estimators of the regression parameters in the marginal model are developed under an independence working assumption where the correlation across individuals within a cluster is completely unspecified. The estimators are consistent and asymptotically normal, and variance estimation may be achieved without specifying the form of the dependence across individuals. A simulation study evidences that the inferential procedures perform well with realistic sample sizes. The practical utility of the methods is illustrated with data from the European Bone Marrow Transplant Registry. PMID:22045910
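
    As background for the competing-risks setting, the quantity being modelled, the cumulative incidence function, can be estimated nonparametrically as in the sketch below; this ignores clustering and covariates, which are the paper's actual contribution, and assumes untied event times.

```python
import numpy as np

def cumulative_incidence(time, cause, target_cause=1):
    """Nonparametric cumulative incidence for one cause in the presence of
    competing risks (cause=0 means censored). Returns event times and CIF."""
    order = np.argsort(time)
    time, cause = time[order], cause[order]
    n = len(time)
    at_risk = n - np.arange(n)          # size of the risk set just before each time
    surv = 1.0                          # overall survival S(t-)
    cif, t_out, c_out = 0.0, [], []
    for i in range(n):
        if cause[i] == target_cause:
            cif += surv / at_risk[i]    # S(t-) * dN_1(t) / Y(t)
        if cause[i] != 0:               # any event reduces overall survival
            surv *= 1.0 - 1.0 / at_risk[i]
        t_out.append(time[i])
        c_out.append(cif)
    return np.array(t_out), np.array(c_out)

# Made-up data: cause 1 = relapse, cause 2 = death in remission, 0 = censored.
rng = np.random.default_rng(1)
t = rng.exponential(12, size=200)
c = rng.choice([0, 1, 2], size=200, p=[0.3, 0.4, 0.3])
times, cif = cumulative_incidence(t, c)
print(f"estimated 12-month cumulative incidence of relapse: {np.interp(12, times, cif):.2f}")
```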

  20. Optimal background matching camouflage.

    PubMed

    Michalis, Constantine; Scott-Samuel, Nicholas E; Gibson, David P; Cuthill, Innes C

    2017-07-12

    Background matching is the most familiar and widespread camouflage strategy: avoiding detection by having a similar colour and pattern to the background. Optimizing background matching is straightforward in a homogeneous environment, or when the habitat has very distinct sub-types and there is divergent selection leading to polymorphism. However, most backgrounds have continuous variation in colour and texture, so what is the best solution? Not all samples of the background are likely to be equally inconspicuous, and laboratory experiments on birds and humans support this view. Theory suggests that the most probable background sample (in the statistical sense), at the size of the prey, would, on average, be the most cryptic. We present an analysis, based on realistic assumptions about low-level vision, that estimates the distribution of background colours and visual textures, and predicts the best camouflage. We present data from a field experiment that tests and supports our predictions, using artificial moth-like targets under bird predation. Additionally, we present analogous data for humans, under tightly controlled viewing conditions, searching for targets on a computer screen. These data show that, in the absence of predator learning, the best single camouflage pattern for heterogeneous backgrounds is the most probable sample. © 2017 The Authors.
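
    The core prediction, that the most probable background sample at the prey's scale is on average the best camouflage, can be sketched by density estimation over randomly sampled patches; the mean/standard-deviation features below are crude stand-ins for the low-level vision model used in the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def most_probable_patch(background, patch=16, n_samples=2000, seed=0):
    """Pick the background patch whose simple colour/texture features have the
    highest estimated probability density over random samples of the scene."""
    rng = np.random.default_rng(seed)
    h, w = background.shape
    feats, corners = [], []
    for _ in range(n_samples):
        r = rng.integers(0, h - patch)
        c = rng.integers(0, w - patch)
        p = background[r:r + patch, c:c + patch]
        feats.append([p.mean(), p.std()])        # crude colour and texture features
        corners.append((r, c))
    feats = np.array(feats).T                    # gaussian_kde expects (n_dims, n_points)
    density = gaussian_kde(feats)(feats)
    return corners[int(np.argmax(density))]

# Toy greyscale "background" with smooth spatial variation plus noise.
yy, xx = np.mgrid[0:200, 0:200]
bg = np.sin(xx / 15.0) + 0.3 * np.cos(yy / 9.0)
bg += np.random.default_rng(1).normal(0, 0.1, bg.shape)
print("most probable patch at (row, col):", most_probable_patch(bg))
```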

  1. Evaluation of three-dimensional virtual perception of garments

    NASA Astrophysics Data System (ADS)

    Aydoğdu, G.; Yeşilpinar, S.; Erdem, D.

    2017-10-01

    In recent years, three-dimensional design, dressing and simulation programs have come into prominence in the textile industry. With these programs, the need to produce a clothing sample for every design during the design process has been eliminated. Clothing fit, design, pattern, fabric and accessory details and fabric drape features can be evaluated easily. Also, the body size of the virtual mannequin can be adjusted, so more realistic simulations can be created. Moreover, three-dimensional virtual garment images created by these programs can be used when presenting the product to the end-user instead of two-dimensional photographic images. In this study, a survey was carried out to investigate the visual perception of consumers. The survey was conducted for three different garment types separately. Participants were asked questions about gender, profession, etc., and were expected to compare real samples with artworks or three-dimensional virtual images of garments. When the survey results were analyzed statistically, it was seen that the demographic characteristics of participants do not affect visual perception and that three-dimensional virtual garment images reflect the real sample characteristics better than artworks for each garment type. Also, no perception difference depending on garment type was reported between the t-shirt, sweatshirt and tracksuit bottom.

  2. Sample design effects in landscape genetics

    USGS Publications Warehouse

    Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.

    2012-01-01

    An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), numbers of alleles per locus (5 and 10), numbers of individuals sampled (10-300), and generational times after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that the random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and numbers of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration of sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.

  3. Towards a more realistic population of bright spiral galaxies in cosmological simulations

    NASA Astrophysics Data System (ADS)

    Aumer, Michael; White, Simon D. M.; Naab, Thorsten; Scannapieco, Cecilia

    2013-10-01

    We present an update to the multiphase smoothed particle hydrodynamics galaxy formation code by Scannapieco et al. We include a more elaborate treatment of the production of metals, cooling rates based on individual element abundances and a scheme for the turbulent diffusion of metals. Our supernova feedback model now transfers energy to the interstellar medium (ISM) in kinetic and thermal form, and we include a prescription for the effects of radiation pressure from massive young stars on the ISM. We calibrate our new code on the well-studied Aquarius haloes and then use it to simulate a sample of 16 galaxies with halo masses between 1 × 10^11 and 3 × 10^12 M⊙. In general, the stellar masses of the sample agree well with the stellar mass to halo mass relation inferred from abundance matching techniques for redshifts z = 0-4. There is however a tendency to overproduce stars at z > 4 and to underproduce them at z < 0.5 in the least massive haloes. Overly high star formation rates (SFRs) at z < 1 for the most massive haloes are likely connected to the lack of active galactic nuclei feedback in our model. The simulated sample also shows reasonable agreement with observed SFRs, sizes, gas fractions and gas-phase metallicities at z = 0-3. Remaining discrepancies can be connected to deviations from predictions for star formation histories from abundance matching. At z = 0, the model galaxies show realistic morphologies, stellar surface density profiles, circular velocity curves and stellar metallicities, but overly flat metallicity gradients. 15 out of 16 of our galaxies contain disc components with kinematic disc fraction ranging between 15 and 65 per cent. The disc fraction depends on the time of the last destructive merger or misaligned infall event. Considering the remaining shortcomings of our simulations we conclude that even higher kinematic disc fractions may be possible for Λ cold dark matter haloes with quiet merger histories, such as the Aquarius haloes.

  4. Screening study of four environmentally relevant microplastic pollutants: Uptake and effects on Daphnia magna and Artemia franciscana.

    PubMed

    Kokalj, Anita Jemec; Kunej, Urban; Skalar, Tina

    2018-06-08

    This study investigated four different environmentally relevant microplastic (MP) pollutants, which were derived from two facial cleansers, a plastic bag and polyethylene textile fleece. The mean size range of the particles (according to number distribution) was 20-250 μm when measured as a powder and 0.02-200 μm in suspension. In all MP exposures, plastic particles were found inside the guts of D. magna and A. franciscana, but only in the case of daphnids was a clear exponential correlation between MP uptake in the gut and the size of the MP identified. Exposure tests in which the majority of the MP particles were below 100 μm in size also had higher numbers of daphnids displaying evidence of MP ingestion. As the average MP particle size increased, the percentage of daphnids with MP in their gut decreased. Using a number distribution value to measure particle size in suspension is more experimentally relevant, as it provides a more realistic particle size than when samples are measured as a powder. Generally, artemias had fewer MP particles in the gut than the daphnids, which could be explained by their different food size preferences. No acute effects on D. magna were found, but the growth of A. franciscana was affected. We conclude that zooplankton crustaceans can ingest various MPs but that none of the exposures tested were highly acutely hazardous to the test species. In addition, no delayed lethal effects were found in a 24 h post-exposure period.

  5. Building test data from real outbreaks for evaluating detection algorithms.

    PubMed

    Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
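
    A minimal sketch of the homothetic rescaling followed by inverse transform sampling, one of the resampling schemes listed above, is given below with a made-up historical outbreak curve.

```python
import numpy as np

def simulate_outbreak(historical_counts, target_days, target_cases, seed=0):
    """Rescale a historical epidemic curve to a new duration (homothetic transform
    of the time axis) and redraw the daily counts by inverse transform sampling."""
    rng = np.random.default_rng(seed)
    hist = np.asarray(historical_counts, dtype=float)
    # Stretch or compress the historical shape onto the target number of days.
    src_t = np.linspace(0, 1, hist.size)
    tgt_t = np.linspace(0, 1, target_days)
    shape = np.interp(tgt_t, src_t, hist)
    pmf = shape / shape.sum()                      # daily probability of a case
    cdf = np.cumsum(pmf)
    # Inverse transform sampling: place each simulated case on a day.
    u = rng.random(target_cases)
    days = np.minimum(np.searchsorted(cdf, u), target_days - 1)
    return np.bincount(days, minlength=target_days)

historical = [1, 3, 7, 15, 22, 18, 9, 4, 2, 1]     # made-up signal from a past outbreak
print(simulate_outbreak(historical, target_days=14, target_cases=60))
```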

  6. Building test data from real outbreaks for evaluating detection algorithms

    PubMed Central

    Texier, Gaetan; Jackson, Michael L.; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method—ITSM, Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals. PMID:28863159

  7. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    NASA Astrophysics Data System (ADS)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

    Development of pass-through superconducting rock magnetometers (SRM) has greatly promoted collection of paleomagnetic data from continuous long-core samples. The output of pass-through measurement is smoothed and distorted due to convolution of magnetization with the magnetometer sensor response. Although several studies could restore high-resolution paleomagnetic signal through deconvolution of pass-through measurement, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired reliable sensor response of an SRM at the Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization, and successfully restored fine-scale magnetization variations including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects deconvolution estimation, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
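
    The record above centres on deconvolving the sensor response from pass-through measurements. The sketch below illustrates only the generic ingredients: a convolution matrix built from an assumed Gaussian sensor response and a recovery step with a second-difference (Tikhonov-style) roughness penalty whose weight is fixed by hand, whereas the paper uses a measured response and selects the trade-off by ABIC minimization. All values and names are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200

    # Assumed Gaussian sensor response (the real SRM response is measured, not Gaussian)
    kernel = np.exp(-0.5 * (np.arange(-30, 31) / 8.0) ** 2)
    kernel /= kernel.sum()

    # Convolution matrix G so that measured = G @ magnetization
    G = np.column_stack(
        [np.convolve(np.eye(n)[:, i], kernel, mode="same") for i in range(n)]
    )

    true = np.zeros(n)
    true[60:120] = 1.0
    true[80:90] = -0.8            # a short synthetic "excursion"
    measured = G @ true + 0.02 * rng.standard_normal(n)

    # Second-difference roughness penalty; the weight lam is fixed by hand here
    D2 = np.diff(np.eye(n), n=2, axis=0)
    lam = 0.5
    recovered = np.linalg.solve(G.T @ G + lam * (D2.T @ D2), G.T @ measured)
    print("max abs deviation from true signal:", np.max(np.abs(recovered - true)))
    ```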

  8. Random-Walk Model of Diffusion in Three Dimensions in Brain Extracellular Space: Comparison with Microfiberoptic Photobleaching Measurements

    PubMed Central

    Jin, Songwan; Zador, Zsolt; Verkman, A. S.

    2008-01-01

    Diffusion through the extracellular space (ECS) in brain is important in drug delivery, intercellular communication, and extracellular ionic buffering. The ECS comprises ∼20% of brain parenchymal volume and contains cell-cell gaps ∼50 nm. We developed a random-walk model to simulate macromolecule diffusion in brain ECS in three dimensions using realistic ECS dimensions. Model inputs included ECS volume fraction (α), cell size, cell-cell gap geometry, intercellular lake (expanded regions of brain ECS) dimensions, and molecular size of the diffusing solute. Model output was relative solute diffusion in water versus brain ECS (Do/D). Experimental Do/D for comparison with model predictions was measured using a microfiberoptic fluorescence photobleaching method involving stereotaxic insertion of a micron-size optical fiber into mouse brain. Do/D for the small solute calcein in different regions of brain was in the range 3.0–4.1, and increased with brain cell swelling after water intoxication. Do/D also increased with increasing size of the diffusing solute, particularly in deep brain nuclei. Simulations of measured Do/D using realistic α, cell size and cell-cell gap required the presence of intercellular lakes at multicell contact points, and the contact length of cell-cell gaps to be at least 50-fold smaller than cell size. The model accurately predicted Do/D for different solute sizes. Also, the modeling showed unanticipated effects on Do/D of changing ECS and cell dimensions that implicated solute trapping by lakes. Our model establishes the geometric constraints to account quantitatively for the relatively modest slowing of solute and macromolecule diffusion in brain ECS. PMID:18469079

  9. Random-walk model of diffusion in three dimensions in brain extracellular space: comparison with microfiberoptic photobleaching measurements.

    PubMed

    Jin, Songwan; Zador, Zsolt; Verkman, A S

    2008-08-01

    Diffusion through the extracellular space (ECS) in brain is important in drug delivery, intercellular communication, and extracellular ionic buffering. The ECS comprises approximately 20% of brain parenchymal volume and contains cell-cell gaps approximately 50 nm. We developed a random-walk model to simulate macromolecule diffusion in brain ECS in three dimensions using realistic ECS dimensions. Model inputs included ECS volume fraction (alpha), cell size, cell-cell gap geometry, intercellular lake (expanded regions of brain ECS) dimensions, and molecular size of the diffusing solute. Model output was relative solute diffusion in water versus brain ECS (D(o)/D). Experimental D(o)/D for comparison with model predictions was measured using a microfiberoptic fluorescence photobleaching method involving stereotaxic insertion of a micron-size optical fiber into mouse brain. D(o)/D for the small solute calcein in different regions of brain was in the range 3.0-4.1, and increased with brain cell swelling after water intoxication. D(o)/D also increased with increasing size of the diffusing solute, particularly in deep brain nuclei. Simulations of measured D(o)/D using realistic alpha, cell size and cell-cell gap required the presence of intercellular lakes at multicell contact points, and the contact length of cell-cell gaps to be at least 50-fold smaller than cell size. The model accurately predicted D(o)/D for different solute sizes. Also, the modeling showed unanticipated effects on D(o)/D of changing ECS and cell dimensions that implicated solute trapping by lakes. Our model establishes the geometric constraints to account quantitatively for the relatively modest slowing of solute and macromolecule diffusion in brain ECS.
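
    A minimal random-walk sketch of the quantity estimated in the two records above (Do/D) is given below, assuming a periodic array of cubic cells separated by narrow gaps. The cell size, gap width, walker counts and the step-rejection rule are all assumptions made for illustration; the published model uses measured ECS parameters and explicitly includes intercellular lakes.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    CELL, GAP = 20, 2              # assumed cell size and cell-cell gap (lattice units)
    PERIOD = CELL + GAP
    MOVES = np.vstack([np.eye(3, dtype=int), -np.eye(3, dtype=int)])

    def in_cell(pos):
        # a lattice point lies inside a cell body only if all three coordinates
        # fall within the cell (not the gap) of their periodic unit
        return np.all(pos % PERIOD < CELL)

    def squared_displacement(n_steps, obstructed):
        pos = np.full(3, CELL + GAP // 2)     # start inside the extracellular gap
        start = pos.copy()
        for _ in range(n_steps):
            trial = pos + MOVES[rng.integers(6)]
            if obstructed and in_cell(trial):
                continue                      # reject steps that enter a cell
            pos = trial
        return np.sum((pos - start) ** 2)

    walkers, steps = 200, 4000
    msd_free = np.mean([squared_displacement(steps, False) for _ in range(walkers)])
    msd_ecs = np.mean([squared_displacement(steps, True) for _ in range(walkers)])
    print("Do/D estimate:", msd_free / msd_ecs)
    ```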

  10. X-ray peak broadening analysis of AA 6061{sub 100-x} - x wt.% Al{sub 2}O{sub 3} nanocomposite prepared by mechanical alloying

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sivasankaran, S., E-mail: sivasankarangs1979@gmail.com; Sivaprasad, K., E-mail: ksp@nitt.edu; Narayanasamy, R., E-mail: narayan@nitt.edu

    2011-07-15

    Nanocrystalline AA 6061 alloy reinforced with alumina (0, 4, 8, and 12 wt.%) in an amorphized state composite powder was synthesized by mechanical alloying and consolidated by a conventional powder metallurgy route. The as-milled and as-sintered (573 K and 673 K) nanocomposites were characterized by X-ray diffraction (XRD) and transmission electron microscopy (TEM). The peaks corresponding to the fine alumina were not observed in the XRD patterns due to amorphization. High-resolution transmission electron microscopy confirmed the presence of amorphized alumina in the Al lattice fringes. The crystallite size, lattice strain, deformation stress, and strain energy density of the AA 6061 matrix were determined precisely from the first five most intense reflections of XRD using simple Williamson-Hall models: the uniform deformation model, the uniform stress deformation model, and the uniform energy density deformation model. Among the developed models, the uniform energy density deformation model was observed to be the best-fitting and most realistic model for mechanically alloyed powders. This model evidenced the more anisotropic nature of the ball-milled powders. The XRD peaks of the as-milled powder samples demonstrated a considerable broadening with the percentage of reinforcement due to grain refinement and lattice distortions during the same milling time (40 h). The as-sintered (673 K) unreinforced AA 6061 matrix crystallite size from the well-fitted uniform energy density deformation model was 98 nm. The as-milled and as-sintered (673 K) nanocrystalline matrix sizes for 12 wt.% Al{sub 2}O{sub 3}, well fitted by the uniform energy density deformation model, were 38 nm and 77 nm respectively, which indicates that the fine Al{sub 2}O{sub 3} pinned the matrix grain boundaries and prevented grain growth during sintering. Finally, the lattice parameter of the Al matrix in the as-milled and as-sintered conditions was also investigated. Research highlights: Integral breadth methods using various Williamson-Hall models were investigated for line profile analysis. The uniform energy density deformation model is observed to be the best and most realistic model. The present analysis is used for understanding the stress and the strain present in the nanocomposites.
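
    The crystallite sizes and strains quoted above come from Williamson-Hall analysis. As a sketch of the simplest variant (the uniform deformation model), the snippet below fits beta*cos(theta) = K*lambda/D + 4*epsilon*sin(theta) to a handful of hypothetical peak positions and integral breadths; the numbers are made up and do not correspond to the AA 6061 data.

    ```python
    import numpy as np

    # Hypothetical peak data: Bragg angles 2-theta (degrees) and integral breadths (radians)
    two_theta = np.array([38.5, 44.7, 65.1, 78.2, 82.4])
    beta = np.array([0.0042, 0.0048, 0.0061, 0.0070, 0.0074])
    wavelength = 1.5406e-10   # Cu K-alpha wavelength in metres
    K = 0.9                   # Scherrer constant

    theta = np.radians(two_theta / 2.0)
    y = beta * np.cos(theta)          # left-hand side of the UDM equation
    x = 4.0 * np.sin(theta)           # strain regressor

    slope, intercept = np.polyfit(x, y, 1)    # slope = strain, intercept = K*lambda/D
    crystallite_size = K * wavelength / intercept
    print(f"crystallite size ~ {crystallite_size * 1e9:.1f} nm, strain ~ {slope:.2e}")
    ```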

  11. Visualizing 3D Food Microstructure Using Tomographic Methods: Advantages and Disadvantages.

    PubMed

    Wang, Zi; Herremans, Els; Janssen, Siem; Cantre, Dennis; Verboven, Pieter; Nicolaï, Bart

    2018-03-25

    X-ray micro-computed tomography (micro-CT) provides the unique ability to capture intact internal microstructure data without significant preparation of the sample. The fundamentals of micro-CT technology are briefly described along with a short introduction to basic image processing, quantitative analysis, and derivative computational modeling. The applications and limitations of micro-CT in industries such as meat, dairy, postharvest, and bread/confectionary are discussed to serve as a guideline to the plausibility of utilizing the technique for detecting features of interest. Component volume fractions, their respective size/shape distributions, and connectivity, for example, can be utilized for product development, manufacturing process tuning and/or troubleshooting. In addition to determining structure-function relations, micro-CT can be used for foreign material detection to further ensure product quality and safety. In most usage scenarios, micro-CT in its current form is perfectly adequate for determining microstructure in a wide variety of food products. However, in low-contrast and low-stability samples, emphasis is placed on the shortcomings of the current systems to set realistic expectations for the intended users.

  12. [Ionic liquid based ultrasonication-assisted extraction of essential oil from the leaves of Persicaria minor and conductor-like screening model for realistic solvents study].

    PubMed

    Habib, Ullah; Cecilia, D Wilfred; Maizatul, S Shaharun

    2017-06-08

    An ionic liquid (IL)-based ultrasound-assisted extraction was applied to obtain essential oil from Persicaria minor leaves. The effects of temperature, sonication time, and particle size of the plant material on the yield of essential oil were investigated. Among the different ILs employed, 1-ethyl-3-methylimidazolium acetate was the most effective, providing a 9.55% yield of the essential oil under optimum conditions (70 ℃, 25 min, IL:hexane ratio of 7:10 (v/v), particle size 60-80 mesh). The performance of 1-ethyl-3-methylimidazolium acetate in the extraction was attributed to its low viscosity and its ability to disintegrate the structural matrix of the plant material. This ability was also confirmed using the conductor-like screening model for realistic solvents. This research proves that ILs can be used to extract essential oils from lignocellulosic biomass.

  13. Porosity, permeability and 3D fracture network characterisation of dolomite reservoir rock samples

    PubMed Central

    Voorn, Maarten; Exner, Ulrike; Barnhoorn, Auke; Baud, Patrick; Reuschlé, Thierry

    2015-01-01

    With fractured rocks making up an important part of hydrocarbon reservoirs worldwide, detailed analysis of fractures and fracture networks is essential. However, common analyses on drill core and plug samples taken from such reservoirs (including hand specimen analysis, thin section analysis and laboratory porosity and permeability determination) suffer from various problems, such as having a limited resolution, providing only 2D and no internal structure information, being destructive on the samples and/or not being representative for full fracture networks. In this paper, we therefore explore the use of an additional method – non-destructive 3D X-ray micro-Computed Tomography (μCT) – to obtain more information on such fractured samples. Seven plug-sized samples were selected from narrowly fractured rocks of the Hauptdolomit formation, taken from wellbores in the Vienna basin, Austria. These samples span a range of different fault rocks in a fault zone interpretation, from damage zone to fault core. We process the 3D μCT data in this study by a Hessian-based fracture filtering routine and can successfully extract porosity, fracture aperture, fracture density and fracture orientations – in bulk as well as locally. Additionally, thin sections made from selected plug samples provide 2D information with a much higher detail than the μCT data. Finally, gas- and water permeability measurements under confining pressure provide an important link (at least in order of magnitude) towards more realistic reservoir conditions. This study shows that 3D μCT can be applied efficiently on plug-sized samples of naturally fractured rocks, and that although there are limitations, several important parameters can be extracted. μCT can therefore be a useful addition to studies on such reservoir rocks, and provide valuable input for modelling and simulations. Also permeability experiments under confining pressure provide important additional insights. Combining these and other methods can therefore be a powerful approach in microstructural analysis of reservoir rocks, especially when applying the concepts that we present (on a small set of samples) in a larger study, in an automated and standardised manner. PMID:26549935

  14. Porosity, permeability and 3D fracture network characterisation of dolomite reservoir rock samples.

    PubMed

    Voorn, Maarten; Exner, Ulrike; Barnhoorn, Auke; Baud, Patrick; Reuschlé, Thierry

    2015-03-01

    With fractured rocks making up an important part of hydrocarbon reservoirs worldwide, detailed analysis of fractures and fracture networks is essential. However, common analyses on drill core and plug samples taken from such reservoirs (including hand specimen analysis, thin section analysis and laboratory porosity and permeability determination) suffer from various problems, such as having a limited resolution, providing only 2D and no internal structure information, being destructive on the samples and/or not being representative for full fracture networks. In this paper, we therefore explore the use of an additional method - non-destructive 3D X-ray micro-Computed Tomography (μCT) - to obtain more information on such fractured samples. Seven plug-sized samples were selected from narrowly fractured rocks of the Hauptdolomit formation, taken from wellbores in the Vienna basin, Austria. These samples span a range of different fault rocks in a fault zone interpretation, from damage zone to fault core. We process the 3D μCT data in this study by a Hessian-based fracture filtering routine and can successfully extract porosity, fracture aperture, fracture density and fracture orientations - in bulk as well as locally. Additionally, thin sections made from selected plug samples provide 2D information with a much higher detail than the μCT data. Finally, gas- and water permeability measurements under confining pressure provide an important link (at least in order of magnitude) towards more realistic reservoir conditions. This study shows that 3D μCT can be applied efficiently on plug-sized samples of naturally fractured rocks, and that although there are limitations, several important parameters can be extracted. μCT can therefore be a useful addition to studies on such reservoir rocks, and provide valuable input for modelling and simulations. Also permeability experiments under confining pressure provide important additional insights. Combining these and other methods can therefore be a powerful approach in microstructural analysis of reservoir rocks, especially when applying the concepts that we present (on a small set of samples) in a larger study, in an automated and standardised manner.

  15. Integrated approach for quantification of fractured tight reservoir rocks: Porosity, permeability analyses and 3D fracture network characterisation on fractured dolomite samples

    NASA Astrophysics Data System (ADS)

    Voorn, Maarten; Barnhoorn, Auke; Exner, Ulrike; Baud, Patrick; Reuschlé, Thierry

    2015-04-01

    Fractured reservoir rocks make up an important part of the hydrocarbon reservoirs worldwide. A detailed analysis of fractures and fracture networks in reservoir rock samples is thus essential to determine the potential of these fractured reservoirs. However, common analyses on drill core and plug samples taken from such reservoirs (including hand specimen analysis, thin section analysis and laboratory porosity and permeability determination) suffer from various problems, such as having a limited resolution, providing only 2D and no internal structure information, being destructive on the samples and/or not being representative for full fracture networks. In this study, we therefore explore the use of an additional method - non-destructive 3D X-ray micro-Computed Tomography (μCT) - to obtain more information on such fractured samples. Seven plug-sized samples were selected from narrowly fractured rocks of the Hauptdolomit formation, taken from wellbores in the Vienna Basin, Austria. These samples span a range of different fault rocks in a fault zone interpretation, from damage zone to fault core. 3D μCT data is used to extract porosity, fracture aperture, fracture density and fracture orientations - in bulk as well as locally. The 3D analyses are complemented with thin sections made to provide some 2D information with a much higher detail than the μCT data. Finally, gas- and water permeability measurements under confining pressure provide an important link (at least in order of magnitude) of the µCT results towards more realistic reservoir conditions. Our results show that 3D μCT can be applied efficiently on plug-sized samples of naturally fractured rocks, and that several important parameters can be extracted. μCT can therefore be a useful addition to studies on such reservoir rocks, and provide valuable input for modelling and simulations. Also permeability experiments under confining pressure provide important additional insights. Combining these and other methods can therefore be a powerful approach in microstructural analysis of reservoir rocks, especially when applying the concepts that we present (on a small set of samples) in a larger study, in an automated and standardised manner.
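
    The records above describe a Hessian-based fracture filtering routine for picking out thin, sheet-like fractures in 3D μCT volumes. The snippet below is a generic sketch of that idea, not the authors' routine: it assembles the Hessian of a smoothed toy volume from Gaussian second derivatives and uses the largest eigenvalue to highlight a dark planar feature in a brighter matrix. The toy volume, smoothing scale and contrast convention are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sheet_filter(volume, sigma=1.5):
        # Six unique second derivatives of the Gaussian-smoothed volume form the Hessian
        orders = {(0, 0): (2, 0, 0), (1, 1): (0, 2, 0), (2, 2): (0, 0, 2),
                  (0, 1): (1, 1, 0), (0, 2): (1, 0, 1), (1, 2): (0, 1, 1)}
        H = np.zeros(volume.shape + (3, 3))
        for (i, j), order in orders.items():
            d = gaussian_filter(volume.astype(float), sigma, order=order)
            H[..., i, j] = d
            H[..., j, i] = d
        eigvals = np.linalg.eigvalsh(H)    # ascending eigenvalues per voxel
        # A dark sheet in a bright matrix gives one large positive eigenvalue
        # across the sheet, so the largest eigenvalue acts as a planar-feature score
        return eigvals[..., 2]

    # Hypothetical toy volume with a single thin dark "fracture" plane
    vol = np.ones((40, 40, 40))
    vol[:, 19:21, :] = 0.0
    response = sheet_filter(vol)
    print("max response on the fracture plane:", response[:, 20, :].max())
    ```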

  16. Automated sizing of large structures by mixed optimization methods

    NASA Technical Reports Server (NTRS)

    Sobieszczanski, J.; Loendorf, D.

    1973-01-01

    A procedure for automating the sizing of wing-fuselage airframes was developed and implemented in the form of an operational program. The program combines fully stressed design to determine an overall material distribution with mass-strength and mathematical programming methods to design structural details accounting for realistic design constraints. The practicality and efficiency of the procedure is demonstrated for transport aircraft configurations. The methodology is sufficiently general to be applicable to other large and complex structures.

  17. Field-scale fluorescence fingerprinting of biochar-borne dissolved organic carbon

    USDA-ARS?s Scientific Manuscript database

    Biochar continues to receive worldwide enthusiasm as means of augmenting recalcitrant organic carbon in agricultural soils. Realistic biochar amendment rate (typically less than 1 wt%) in the field scale, and loss by sizing, rain, and other transport events demand reliable methods to quantify the r...

  18. A Pragmatic Approach to Sales Training

    ERIC Educational Resources Information Center

    Buzzotta, V. R.; And Others

    1974-01-01

    A systematic ten-step approach to behavioral sales training is offered: (1) sales-behavior training goals, (2) cognitive maps, (3) sizing-up of skills, (4) selling techniques, (5) realistic practice, (6) feedback, (7) individual business goals, (8) plan of action, (9) review of results, and (10) research results. (MW)

  19. CHARACTERIZATION OF AEROSOLS FROM A WATER-BASED CLEANER APPLIED WITH A HAND-PUMP SPRAYER

    EPA Science Inventory

    The paper gives results of tests that were performed in a controlled-environment test room to measure particle concentrations and size distributions and concentrations of selected volatile organic compounds during, and following, application of water-based cleaners to realistic s...

  20. A Comparison of Techniques for Scheduling Fleets of Earth-Observing Satellites

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    Earth observing satellite (EOS) scheduling is a complex real-world domain representative of a broad class of over-subscription scheduling problems. Over-subscription problems are those where requests for a facility exceed its capacity. These problems arise in a wide variety of NASA and terrestrial domains and are an important class of scheduling problems because such facilities often represent large capital investments. We have run experiments comparing multiple variants of the genetic algorithm, hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling on two variants of a realistically-sized model of the EOS scheduling problem. These are implemented as permutation-based methods: methods that search in the space of priority orderings of observation requests and evaluate each permutation by using it to drive a greedy scheduler. Simulated annealing performs best and random mutation operators outperform our squeaky (more intelligent) operator. Furthermore, taking smaller steps towards the end of the search improves performance.
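
    The permutation-based search described above can be illustrated with a toy over-subscription instance: a greedy scheduler consumes requests in priority order, and simulated annealing with a random swap mutation searches over priority orderings. The instance, cooling schedule and scoring below are all hypothetical and far smaller than a realistic EOS problem.

    ```python
    import math
    import random

    random.seed(0)

    # Hypothetical over-subscription instance: more requests than slots.
    N_SLOTS = 20
    requests = [{"value": random.randint(1, 10),
                 "slots": random.sample(range(N_SLOTS), k=3)} for _ in range(60)]

    def greedy_schedule(order):
        # Grant each request, in priority order, the first free slot it can use
        free = [True] * N_SLOTS
        total = 0
        for idx in order:
            for s in requests[idx]["slots"]:
                if free[s]:
                    free[s] = False
                    total += requests[idx]["value"]
                    break
        return total

    def anneal(n_iters=20000, t0=5.0):
        order = list(range(len(requests)))
        best = current = greedy_schedule(order)
        best_order = order[:]
        for k in range(n_iters):
            t = t0 * (1.0 - k / n_iters) + 1e-9
            i, j = random.sample(range(len(order)), 2)
            order[i], order[j] = order[j], order[i]        # random swap mutation
            score = greedy_schedule(order)
            if score >= current or random.random() < math.exp((score - current) / t):
                current = score
                if score > best:
                    best, best_order = score, order[:]
            else:
                order[i], order[j] = order[j], order[i]    # undo the swap
        return best, best_order

    print("best scheduled value:", anneal()[0])
    ```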

  1. Continuous-Variable Instantaneous Quantum Computing is Hard to Sample.

    PubMed

    Douce, T; Markham, D; Kashefi, E; Diamanti, E; Coudreau, T; Milman, P; van Loock, P; Ferrini, G

    2017-02-17

    Instantaneous quantum computing is a subuniversal quantum complexity class, whose circuits have proven to be hard to simulate classically in the discrete-variable realm. We extend this proof to the continuous-variable (CV) domain by using squeezed states and homodyne detection, and by exploring the properties of postselected circuits. In order to treat postselection in CVs, we consider finitely resolved homodyne detectors, corresponding to a realistic scheme based on discrete probability distributions of the measurement outcomes. The unavoidable errors stemming from the use of finitely squeezed states are suppressed through a qubit-into-oscillator Gottesman-Kitaev-Preskill encoding of quantum information, which was previously shown to enable fault-tolerant CV quantum computation. Finally, we show that, in order to render postselected computational classes in CVs meaningful, a logarithmic scaling of the squeezing parameter with the circuit size is necessary, translating into a polynomial scaling of the input energy.

  2. Integrative assessment of multiple pesticides as risk factors for non-Hodgkin's lymphoma among men

    PubMed Central

    De Roos, A J; Zahm, S; Cantor, K; Weisenburger, D; Holmes, F; Burmeister, L; Blair, A

    2003-01-01

    Methods: During the 1980s, the National Cancer Institute conducted three case-control studies of NHL in the midwestern United States. These pooled data were used to examine pesticide exposures in farming as risk factors for NHL in men. The large sample size (n = 3417) allowed analysis of 47 pesticides simultaneously, controlling for potential confounding by other pesticides in the model, and adjusting the estimates based on a prespecified variance to make them more stable. Results: Reported use of several individual pesticides was associated with increased NHL incidence, including organophosphate insecticides coumaphos, diazinon, and fonofos, insecticides chlordane, dieldrin, and copper acetoarsenite, and herbicides atrazine, glyphosate, and sodium chlorate. A subanalysis of these "potentially carcinogenic" pesticides suggested a positive trend of risk with exposure to increasing numbers. Conclusion: Consideration of multiple exposures is important in accurately estimating specific effects and in evaluating realistic exposure scenarios. PMID:12937207

  3. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's nextgeneration advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.

  4. A dust spectral energy distribution model with hierarchical Bayesian inference - I. Formalism and benchmarking

    NASA Astrophysics Data System (ADS)

    Galliano, Frédéric

    2018-05-01

    This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performances of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.

  5. Epidemic predictions in an imperfect world: modelling disease spread with partial data

    PubMed Central

    Dawson, Peter M.; Werkman, Marleen; Brooks-Pollock, Ellen; Tildesley, Michael J.

    2015-01-01

    ‘Big-data’ epidemic models are being increasingly used to influence government policy to help with control and eradication of infectious diseases. In the case of livestock, detailed movement records have been used to parametrize realistic transmission models. While livestock movement data are readily available in the UK and other countries in the EU, in many countries around the world, such detailed data are not available. By using a comprehensive database of the UK cattle trade network, we implement various sampling strategies to determine the quantity of network data required to give accurate epidemiological predictions. It is found that by targeting nodes with the highest number of movements, accurate predictions on the size and spatial spread of epidemics can be made. This work has implications for countries such as the USA, where access to data is limited, and developing countries that may lack the resources to collect a full dataset on livestock movements. PMID:25948687

  6. Regression analysis of sparse asynchronous longitudinal data.

    PubMed

    Cao, Hongyuan; Zeng, Donglin; Fine, Jason P

    2015-09-01

    We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus.
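
    For the linear, time-invariant-coefficient case, the kernel-weighted estimating equation described above reduces to a weighted least-squares problem in which every response time is paired with every covariate time within a subject and weighted by a kernel in the time lag. The sketch below illustrates that reduction on synthetic asynchronous data; the kernel, bandwidth and data-generating model are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def kernel(u, h):
        # Epanechnikov kernel with bandwidth h
        z = u / h
        return np.where(np.abs(z) <= 1, 0.75 * (1 - z ** 2) / h, 0.0)

    # Hypothetical asynchronous data: responses Y at times t, a scalar covariate X at times s
    subjects = []
    beta_true = np.array([1.0, 2.0])              # intercept and slope
    for _ in range(100):
        t = np.sort(rng.uniform(0, 1, rng.integers(2, 6)))
        s = np.sort(rng.uniform(0, 1, rng.integers(2, 6)))
        x = np.sin(2 * np.pi * s) + rng.normal(0, 0.2, s.size)
        y = beta_true[0] + beta_true[1] * np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)
        subjects.append((t, y, s, x))

    h = 0.1
    XtWX = np.zeros((2, 2))
    XtWy = np.zeros(2)
    for t, y, s, x in subjects:
        w = kernel(t[:, None] - s[None, :], h)    # weight for every (t_j, s_k) pair
        design = np.stack([np.ones_like(x), x], axis=1)
        for j in range(t.size):
            Wj = w[j]
            XtWX += design.T @ (Wj[:, None] * design)
            XtWy += design.T @ (Wj * y[j])
    print("kernel-weighted estimate:", np.linalg.solve(XtWX, XtWy))
    ```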

  7. Exploring the Effects of Stellar Multiplicity on Exoplanet Occurrence Rates

    NASA Astrophysics Data System (ADS)

    Barclay, Thomas; Shabram, Megan

    2017-06-01

    Determining the frequency of habitable worlds is a key goal of the Kepler mission. During Kepler's four year investigation it detected thousands of transiting exoplanets with sizes varying from smaller than Mercury to larger than Jupiter. Finding planets was just the first step to determining frequency, and for the past few years the mission team has been modeling the reliability and completeness of the Kepler planet sample. One effect that has not typically been built into occurrence rate statistics is that of stellar multiplicity. If a planet orbits the primary star in a binary or triple star system then the transit depth will be somewhat diluted resulting in a modest underestimation in the planet size. However, if a detected planet orbits a fainter star then the error in measured planet radius can be very significant. We have taken a hypothetical star and planet population and passed that through a Kepler detection model. From this we have derived completeness corrections for a realistic case of a Universe with binary stars and compared that with a model Universe where all stars are single. We report on the impact that binaries have on exoplanet population statistics.

  8. Analysis of Crystallization Kinetics

    NASA Technical Reports Server (NTRS)

    Kelton, Kenneth F.

    1997-01-01

    A realistic computer model for polymorphic crystallization (i.e., initial and final phases with identical compositions), which includes time-dependent nucleation and cluster-size-dependent growth rates, is developed and tested by fits to experimental data. Model calculations are used to assess the validity of two of the more common approaches for the analysis of crystallization data. The effects of particle size on transformation kinetics, important for the crystallization of many systems of limited dimension including thin films, fine powders, and nanoparticles, are examined.
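
    A standard textbook analysis of isothermal transformation-fraction data, of the kind such model calculations are used to test, is the classical JMAK (Avrami) plot. The sketch below generates synthetic data of that form and recovers the Avrami exponent from the linearity of ln(-ln(1-X)) in ln(t); the rate constant, exponent and noise level are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic isothermal transformation data following X(t) = 1 - exp(-(k t)^n)
    k_true, n_true = 0.05, 2.5
    t = np.linspace(5, 120, 30)
    X = 1 - np.exp(-(k_true * t) ** n_true)
    X = np.clip(X + rng.normal(0, 0.01, t.size), 1e-4, 1 - 1e-4)

    # Avrami analysis: ln(-ln(1 - X)) is linear in ln(t) with slope n
    y = np.log(-np.log(1 - X))
    slope, intercept = np.polyfit(np.log(t), y, 1)
    n_fit = slope
    k_fit = np.exp(intercept / slope)
    print(f"fitted Avrami exponent n = {n_fit:.2f}, rate constant k = {k_fit:.3f}")
    ```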

  9. Shipbuilding Docks as Experimental Systems for Realistic Assessments of Anthropogenic Stressors on Marine Organisms

    PubMed Central

    Harding, Harry R.; Bunce, Tom; Birch, Fiona; Lister, Jessica; Spiga, Ilaria; Benson, Tom; Rossington, Kate; Jones, Diane; Tyler, Charles R.; Simpson, Stephen D.

    2017-01-01

    Empirical investigations of the impacts of anthropogenic stressors on marine organisms are typically performed under controlled laboratory conditions, onshore mesocosms, or via offshore experiments with realistic (but uncontrolled) environmental variation. These approaches have merits, but onshore setups are generally small sized and fail to recreate natural stressor fields, whereas offshore studies are often compromised by confounding factors. We suggest the use of flooded shipbuilding docks to allow studying realistic exposure to stressors and their impacts on the intra- and interspecific responses of animals. Shipbuilding docks permit the careful study of groups of known animals, including the evaluation of their behavioral interactions, while enabling full control of the stressor and many environmental conditions. We propose that this approach could be used for assessing the impacts of prominent anthropogenic stressors, including chemicals, ocean warming, and sound. Results from shipbuilding-dock studies could allow improved parameterization of predictive models relating to the environmental risks and population consequences of anthropogenic stressors. PMID:29599545

  10. Shipbuilding Docks as Experimental Systems for Realistic Assessments of Anthropogenic Stressors on Marine Organisms.

    PubMed

    Bruintjes, Rick; Harding, Harry R; Bunce, Tom; Birch, Fiona; Lister, Jessica; Spiga, Ilaria; Benson, Tom; Rossington, Kate; Jones, Diane; Tyler, Charles R; Radford, Andrew N; Simpson, Stephen D

    2017-09-01

    Empirical investigations of the impacts of anthropogenic stressors on marine organisms are typically performed under controlled laboratory conditions, onshore mesocosms, or via offshore experiments with realistic (but uncontrolled) environmental variation. These approaches have merits, but onshore setups are generally small sized and fail to recreate natural stressor fields, whereas offshore studies are often compromised by confounding factors. We suggest the use of flooded shipbuilding docks to allow studying realistic exposure to stressors and their impacts on the intra- and interspecific responses of animals. Shipbuilding docks permit the careful study of groups of known animals, including the evaluation of their behavioral interactions, while enabling full control of the stressor and many environmental conditions. We propose that this approach could be used for assessing the impacts of prominent anthropogenic stressors, including chemicals, ocean warming, and sound. Results from shipbuilding-dock studies could allow improved parameterization of predictive models relating to the environmental risks and population consequences of anthropogenic stressors.

  11. Modifications Of Hydrostatic-Bearing Computer Program

    NASA Technical Reports Server (NTRS)

    Hibbs, Robert I., Jr.; Beatty, Robert F.

    1991-01-01

    Several modifications made to enhance utility of HBEAR, computer program for analysis and design of hydrostatic bearings. Modifications make program applicable to more realistic cases and reduce time and effort necessary to arrive at a suitable design. Uses search technique to iterate on size of orifice to obtain required pressure ratio.

  12. A Theory of Eye Movements during Target Acquisition

    ERIC Educational Resources Information Center

    Zelinsky, Gregory J.

    2008-01-01

    The gaze movements accompanying target localization were examined via human observers and a computational model (target acquisition model [TAM]). Search contexts ranged from fully realistic scenes to toys in a crib to Os and Qs, and manipulations included set size, target eccentricity, and target-distractor similarity. Observers and the model…

  13. Understanding the Listening Process: Rethinking the "One Size Fits All" Model

    ERIC Educational Resources Information Center

    Wolvin, Andrew

    2013-01-01

    Robert Bostrom's seminal contributions to listening theory and research represent an impressive legacy and provide listening scholars with important perspectives on the complexities of listening cognition and behavior. Bostrom's work provides a solid foundation on which to build models that more realistically explain how listeners function…

  14. What does it mean to be pseudo single domain? Demystifying the PSD state

    NASA Astrophysics Data System (ADS)

    Lascu, I.; Harrison, R. J.; Einsle, J. F.; Ball, M.

    2016-12-01

    Until recently, non-interacting stable single domain grains were thought to be the sole reliable paleomagnetic recorders. However most natural samples contain so-called "non-ideal" paleomagnetic recorders, which are either interacting single domain particles, or magnetic grains larger than single domain grains, but smaller than proper multi domain grains, which are poor paleomagnetic recorders. The grain size range for these recorders, which for magnetite comprises grains from 100 nm to a few μm in size, is known as the pseudo single domain (PSD) state. Natural samples containing abundant PSD grains have been shown time and again to reliably record thermomagnetic remanent magnetizations that are stable over billions of years. Here we attempt to shed new light on the PSD state by investigating obsidian varieties found at Glass Butte, Oregon, which present the opportunity to study simple cases of magnetic grains encapsulated in volcanic glass. We do this by combining rock magnetism, scanning electron microscopy (SEM) nanotomography, and finite-element micromagnetic modeling. Using rock magnetism we have identified PSD signatures in these samples via their fingerprint in first-order reversal curve (FORC) diagrams. Tomographic reconstructions obtained by stacking SEM images acquired via sequential milling through sample volumes of a few tens of cubic μm reveal the presence of abundant grains that span the PSD grain size interval. These grains have a variety of shapes, from simple ellipsoidal particles, to more complex morphologies attained through the coalescence of neighboring grains during crystallization, to intricate "rolling snowball" morphologies in larger grains that contain appendices formed as a result of particle growth in a dynamic environment as the flowing lava cooled. Micromagnetic modeling of the simplest morphologies reveals that these grains are in single vortex states, with the remanence controlled by irregularities in grain morphology. Coalesced grains present extreme cases of shape anisotropy, which will control the remanence. The remanence of the largest grains is controlled by the collection of PSD states from areas of the grain with pronounced shape anisotropy. Finally, micromagnetic modeling of realistic grain shapes allows the understanding of PSD signatures in FORC diagrams.

  15. Coulomb Mechanics And Landscape Geometry Explain Landslide Size Distribution

    NASA Astrophysics Data System (ADS)

    Jeandet, L.; Steer, P.; Lague, D.; Davy, P.

    2017-12-01

    It is generally observed that the dimensions of large bedrock landslides follow power-law scaling relationships. In particular, the non-cumulative frequency distribution (PDF) of bedrock landslide area is well characterized by a negative power-law above a critical size, with an exponent of 2.4. However, the respective roles of bedrock mechanical properties, landscape shape and triggering mechanisms on the scaling properties of landslide dimensions are still poorly understood. Yet, unravelling the factors that control this distribution is required to better estimate the total volume of landslides triggered by large earthquakes or storms. To tackle this issue, we develop a simple probabilistic 1D approach to compute the PDF of rupture depths in a given landscape. The model is applied to randomly sampled points along hillslopes of studied digital elevation models. At each point location, the model determines the range of depths and angles leading to unstable rupture planes, by applying a simple Mohr-Coulomb rupture criterion only to the rupture planes that intersect the downhill surface topography. This model therefore accounts for both rock mechanical properties, friction and cohesion, and landscape shape. We show that this model leads to realistic landslide depth distributions, with a power-law arising when the number of samples is high enough. The modeled PDFs of landslide size obtained for several landscapes match those from earthquake-driven landslide catalogues for the same landscapes. In turn, this allows us to invert effective landslide mechanical parameters, friction and cohesion, associated with those specific events, including the Chi-Chi, Wenchuan, Niigata and Gorkha earthquakes. The friction and cohesion ranges (25-35 degrees and 5-20 kPa) are in good agreement with previously inverted values. Our results demonstrate that reduced-complexity mechanics is efficient for modeling the distribution of unstable depths, and show the role of landscape variability in landslide size distribution.
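
    The 1D probabilistic approach described above can be caricatured with an infinite-slope Mohr-Coulomb test: sample candidate rupture depths and slope angles, and keep the combinations whose shear stress exceeds the Coulomb strength. The density, friction angle, cohesion and sampling ranges below are assumptions, and the geometry is far simpler than the surface-intersection test used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    RHO, G = 2700.0, 9.81          # rock density (kg/m^3) and gravity (m/s^2)
    PHI = np.radians(30.0)         # assumed effective friction angle
    COHESION = 10e3                # assumed effective cohesion (Pa)

    def unstable(depth, slope):
        # Infinite-slope test: shear stress on a plane parallel to the surface
        # at the given depth exceeds the Mohr-Coulomb strength
        sigma_n = RHO * G * depth * np.cos(slope) ** 2
        tau = RHO * G * depth * np.sin(slope) * np.cos(slope)
        return tau > COHESION + sigma_n * np.tan(PHI)

    # Sample hypothetical hillslope points: local slope angles and candidate rupture depths
    slopes = np.radians(rng.uniform(10, 60, 100000))
    depths = rng.uniform(0.1, 50.0, 100000)
    mask = unstable(depths, slopes)
    print("fraction of unstable samples:", mask.mean())
    print("median unstable depth (m):", np.median(depths[mask]))
    ```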

  16. Models for integrated and differential scattering optical properties of encapsulated light absorbing carbon aggregates.

    PubMed

    Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa

    2013-04-08

    Optical properties of light absorbing carbon (LAC) aggregates encapsulated in a shell of sulfate are computed for realistic model geometries based on field measurements. Computations are performed for wavelengths from the UV-C to the mid-IR. Both climate- and remote sensing-relevant optical properties are considered. The results are compared to commonly used simplified model geometries, none of which gives a realistic representation of the distribution of the LAC mass within the host material and, as a consequence, fail to predict the optical properties accurately. A new core-gray shell model is introduced, which accurately reproduces the size- and wavelength dependence of the integrated and differential optical properties.

  17. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    DOE PAGES

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; ...

    2017-08-25

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment’s beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  18. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment’s beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  19. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Bianchini, Federico

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  20. Mechanical Degradation of Graphite/PVDF Composite Electrodes: A Model-Experimental Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, Kenji; Higa, Kenneth; Mair, Sunil

    2015-12-11

    Mechanical failure modes of a graphite/polyvinylidene difluoride (PVDF) composite electrode for lithium-ion batteries were investigated by combining realistic stress-strain tests and mathematical model predictions. Samples of PVDF mixed with conductive additive were prepared in a similar way to graphite electrodes and tested while submerged in electrolyte solution. Young's modulus and tensile strength values of wet samples were found to be approximately one-fifth and one-half of those measured for dry samples. Simulations of graphite particles surrounded by binder layers given the measured material property values suggest that the particles are unlikely to experience mechanical damage during cycling, but that the fate of the surrounding composite of PVDF and conductive additive depends completely upon the conditions under which its mechanical properties were obtained. Simulations using realistic property values produced results that were consistent with earlier experimental observations.

  1. The Direct Lighting Computation in Global Illumination Methods

    NASA Astrophysics Data System (ADS)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem into an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection, on Monte Carlo sampling methods, and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment which for the first time makes ray tracing feasible for highly complex environments.
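
    The direct lighting term discussed above is the single-bounce integral over the light source. A minimal Monte Carlo estimator for an unoccluded square area light above a Lambertian point is sketched below; the light size, radiance, albedo and geometry are all assumed, and visibility testing is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical setup: a square area light in the plane z = 4 illuminating a
    # Lambertian surface point at the origin with upward normal (0, 0, 1).
    LIGHT_MIN, LIGHT_MAX, LIGHT_Z = -0.5, 0.5, 4.0
    LIGHT_AREA = (LIGHT_MAX - LIGHT_MIN) ** 2
    RADIANCE = 10.0                      # emitted radiance of the light
    ALBEDO = 0.7
    POINT = np.zeros(3)
    NORMAL = np.array([0.0, 0.0, 1.0])

    def direct_lighting(n_samples):
        # Uniformly sample points on the light and average the geometry term:
        # L = (albedo/pi) * Le * A * mean( cos(theta) * cos(theta_light) / r^2 )
        xy = rng.uniform(LIGHT_MIN, LIGHT_MAX, size=(n_samples, 2))
        light_pts = np.column_stack([xy, np.full(n_samples, LIGHT_Z)])
        d = light_pts - POINT
        r2 = np.sum(d * d, axis=1)
        wi = d / np.sqrt(r2)[:, None]
        cos_surface = np.clip(wi @ NORMAL, 0.0, None)
        cos_light = np.clip(-wi @ np.array([0.0, 0.0, -1.0]), 0.0, None)  # light faces down
        geom = cos_surface * cos_light / r2
        return (ALBEDO / np.pi) * RADIANCE * LIGHT_AREA * geom.mean()

    for n in (16, 256, 4096):
        print(n, direct_lighting(n))
    ```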

  2. Biomass particle models with realistic morphology and resolved microstructure for simulations of intraparticle transport phenomena

    DOE PAGES

    Ciesielski, Peter N.; Crowley, Michael F.; Nimlos, Mark R.; ...

    2014-12-09

    Biomass exhibits a complex microstructure of directional pores that impact how heat and mass are transferred within biomass particles during conversion processes. However, models of biomass particles used in simulations of conversion processes typically employ oversimplified geometries such as spheres and cylinders and neglect intraparticle microstructure. In this study, we develop 3D models of biomass particles with size, morphology, and microstructure based on parameters obtained from quantitative image analysis. We obtain measurements of particle size and morphology by analyzing large ensembles of particles that result from typical size reduction methods, and we delineate several representative size classes. Microstructural parameters, including cell wall thickness and cell lumen dimensions, are measured directly from micrographs of sectioned biomass. A general constructive solid geometry algorithm is presented that produces models of biomass particles based on these measurements. Next, we employ the parameters obtained from image analysis to construct models of three different particle size classes from two different feedstocks representing a hardwood poplar species (Populus tremuloides, quaking aspen) and a softwood pine (Pinus taeda, loblolly pine). Finally, we demonstrate the utility of the models and the effects of explicit microstructure by performing finite-element simulations of intraparticle heat and mass transfer, and the results are compared to similar simulations using traditional simplified geometries. In conclusion, we show how the behavior of particle models with more realistic morphology and explicit microstructure departs from that of spherical models in simulations of transport phenomena and that species-dependent differences in microstructure impact simulation results in some cases.

  3. Statistical power and effect sizes of depression research in Japan.

    PubMed

    Okumura, Yasuyuki; Sakamoto, Shinji

    2011-06-01

    Few studies have been conducted on the rationales for using interpretive guidelines for effect size, and most of the previous statistical power surveys have covered broad research domains. The present study aimed to estimate the statistical power and to obtain realistic target effect sizes of depression research in Japan. We systematically reviewed 18 leading journals of psychiatry and psychology in Japan and identified 974 depression studies that were mentioned in 935 articles published between 1990 and 2006. In 392 studies, logistic regression analyses revealed that using clinical populations was independently associated with being a statistical power of <0.80 (odds ratio 5.9, 95% confidence interval 2.9-12.0) and of <0.50 (odds ratio 4.9, 95% confidence interval 2.3-10.5). Of the studies using clinical populations, 80% did not achieve a power of 0.80 or more, and 44% did not achieve a power of 0.50 or more to detect the medium population effect sizes. A predictive model for the proportion of variance explained was developed using a linear mixed-effects model. The model was then used to obtain realistic target effect sizes in defined study characteristics. In the face of a real difference or correlation in population, many depression researchers are less likely to give a valid result than simply tossing a coin. It is important to educate depression researchers in order to enable them to conduct an a priori power analysis. © 2011 The Authors. Psychiatry and Clinical Neurosciences © 2011 Japanese Society of Psychiatry and Neurology.
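
    An a priori power analysis of the kind recommended above can be run in a few lines, for example with statsmodels; the two-sample t-test design, effect size, alpha and group size below are purely illustrative.

    ```python
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Required group size to detect a medium effect (Cohen's d = 0.5) with
    # 80% power at alpha = 0.05 in a two-sample comparison.
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
    print(f"participants needed per group: {n_per_group:.0f}")

    # Achieved power for a hypothetical clinical sample of 30 patients per group.
    achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=30)
    print(f"power with 30 per group: {achieved:.2f}")
    ```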

  4. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    PubMed

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R(2)  = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.

  5. Faunal Communities Are Invariant to Fragmentation in Experimental Seagrass Landscapes

    PubMed Central

    Marion, Scott R.; Lombana, Alfonso V.; Orth, Robert J.

    2016-01-01

    Human-driven habitat fragmentation is cited as one of the most pressing threats facing many coastal ecosystems today. Many experiments have explored the consequences of fragmentation on fauna in one foundational habitat, seagrass beds, but have either surveyed along a gradient of existing patchiness, used artificial materials to mimic a natural bed, or sampled over short timescales. Here, we describe faunal responses to constructed fragmented landscapes varying from 4–400 m2 in two transplant garden experiments incorporating live eelgrass (Zostera marina L.). In experiments replicated within two subestuaries of the Chesapeake Bay, USA across multiple seasons and non-consecutive years, we comprehensively censused mesopredators and epifaunal communities using complementary quantitative methods. We found that community properties, including abundance, species richness, Simpson and functional diversity, and composition were generally unaffected by the number of patches and the size of the landscape, or the intensity of sampling. Additionally, an index of competition based on species co-occurrences revealed no trends with increasing patch size, contrary to theoretical predictions. We extend conclusions concerning the invariance of animal communities to habitat fragmentation from small-scale observational surveys and artificial experiments to experiments conducted with actual living plants and at more realistic scales. Our findings are likely a consequence of the rapid life histories and high mobility of the organisms common to eelgrass beds, and have implications for both conservation and restoration, suggesting that even small patches can rapidly promote abundant and diverse faunal communities. PMID:27244652

  6. Faunal Communities Are Invariant to Fragmentation in Experimental Seagrass Landscapes.

    PubMed

    Lefcheck, Jonathan S; Marion, Scott R; Lombana, Alfonso V; Orth, Robert J

    2016-01-01

    Human-driven habitat fragmentation is cited as one of the most pressing threats facing many coastal ecosystems today. Many experiments have explored the consequences of fragmentation on fauna in one foundational habitat, seagrass beds, but have either surveyed along a gradient of existing patchiness, used artificial materials to mimic a natural bed, or sampled over short timescales. Here, we describe faunal responses to constructed fragmented landscapes varying from 4-400 m2 in two transplant garden experiments incorporating live eelgrass (Zostera marina L.). In experiments replicated within two subestuaries of the Chesapeake Bay, USA across multiple seasons and non-consecutive years, we comprehensively censused mesopredators and epifaunal communities using complementary quantitative methods. We found that community properties, including abundance, species richness, Simpson and functional diversity, and composition were generally unaffected by the number of patches and the size of the landscape, or the intensity of sampling. Additionally, an index of competition based on species co-occurrences revealed no trends with increasing patch size, contrary to theoretical predictions. We extend conclusions concerning the invariance of animal communities to habitat fragmentation from small-scale observational surveys and artificial experiments to experiments conducted with actual living plants and at more realistic scales. Our findings are likely a consequence of the rapid life histories and high mobility of the organisms common to eelgrass beds, and have implications for both conservation and restoration, suggesting that even small patches can rapidly promote abundant and diverse faunal communities.

  7. The Application of Elliptic Cylindrical Phantom in Brachytherapy Dosimetric Study of HDR 192Ir Source

    NASA Astrophysics Data System (ADS)

    Ahn, Woo Sang; Park, Sung Ho; Jung, Sang Hoon; Choi, Wonsik; Do Ahn, Seung; Shin, Seong Soo

    2014-06-01

    The purpose of this study is to determine the radial dose function of an HDR 192Ir source based on Monte Carlo simulation using an elliptic cylindrical phantom, similar to the realistic shape of the pelvis, for brachytherapy dosimetric studies. The elliptic phantom size and shape were determined by analyzing the dimensions of the pelvis on CT images of 20 patients treated with brachytherapy for cervical cancer. The radial dose function obtained using the elliptic cylindrical water phantom was compared with radial dose functions for different spherical phantom sizes, including the Williamson data loaded into a conventional planning system. The differences in the radial dose function for the different spherical water phantoms increase with radial distance, r, and the largest differences appear for the smallest phantom size. The radial dose function of the elliptic cylindrical phantom decreased significantly with radial distance in the vertical direction owing to the different scatter conditions compared with the Williamson data. Considering doses to the ICRU rectum and bladder points, doses to reference points can be underestimated by up to 1-2% at distances from 3 to 6 cm. The radial dose function obtained in this study could be used as realistic data for calculating brachytherapy dosimetry for cervical cancer.
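
    For reference, a radial dose function can be extracted from transverse-axis dose-rate estimates using the TG-43 definition g(r) = [D(r) G(r0)] / [D(r0) G(r)]. The sketch below (not the authors' code) uses the point-source approximation, where the geometry-function ratio reduces to (r/r0)², with r0 = 1 cm and hypothetical tallied dose rates.

```python
import numpy as np

def radial_dose_function(r_cm, dose_rate, r0=1.0):
    """Point-source geometry function G(r) = 1/r**2, so G(r0)/G(r) = (r/r0)**2."""
    r_cm = np.asarray(r_cm, dtype=float)
    dose_rate = np.asarray(dose_rate, dtype=float)
    d0 = np.interp(r0, r_cm, dose_rate)          # dose rate at the reference distance
    return (dose_rate / d0) * (r_cm / r0) ** 2

# Hypothetical tallied dose rates (arbitrary units) at transverse-axis distances in cm.
r = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])
d = np.array([4.1, 1.0, 0.24, 0.104, 0.057, 0.024])
print(radial_dose_function(r, d))
```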

  8. Environmentally relevant concentrations of polyethylene microplastics negatively impact the survival, growth and emergence of sediment-dwelling invertebrates.

    PubMed

    Ziajahromi, Shima; Kumar, Anupama; Neale, Peta A; Leusch, Frederic D L

    2018-05-01

    Microplastics are a widespread environmental pollutant in aquatic ecosystems and have the potential to eventually sink to the sediment, where they may pose a risk to sediment-dwelling organisms. While the impacts of exposure to microplastics have been widely reported for marine biota, the effects of microplastics on freshwater organisms at environmentally realistic concentrations are largely unknown, especially for benthic organisms. Here we examined the effects of a realistic concentration of polyethylene microplastics in sediment on the growth and emergence of the freshwater organism Chironomus tepperi. We also assessed the influence of microplastic size by exposing C. tepperi larvae to four different size ranges of polyethylene microplastics (1-4, 10-27, 43-54 and 100-126 μm). Exposure to an environmentally relevant concentration of microplastics, 500 particles/kg sediment, negatively affected the survival, growth (i.e. body length and head capsule) and emergence of C. tepperi. The observed effects were strongly dependent on microplastic size, with exposure to particles in the size range of 10-27 μm inducing more pronounced effects. While growth and survival of C. tepperi were not affected by the larger microplastics (100-126 μm), a significant reduction in the number of emerged adults was observed after exposure to the largest microplastics, with the delayed emergence attributed to exposure to a stressor. While scanning electron microscopy showed a significant reduction in the size of the head capsule and antenna of C. tepperi exposed to microplastics in the 10-27 μm size range, no deformities to the external structure of the antenna and mouth parts were observed in organisms exposed to the same size range of microplastics. These results indicate that environmentally relevant concentrations of microplastics in sediment induce harmful effects on the development and emergence of C. tepperi, with effects greatly dependent on particle size. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Enhanced conformational sampling of nucleic acids by a new Hamiltonian replica exchange molecular dynamics approach.

    PubMed

    Curuksu, Jeremy; Zacharias, Martin

    2009-03-14

    Although molecular dynamics (MD) simulations have been applied frequently to study flexible molecules, the sampling of conformational states separated by barriers is limited due to currently possible simulation time scales. Replica-exchange (Rex)MD simulations that allow for exchanges between simulations performed at different temperatures (T-RexMD) can achieve improved conformational sampling. However, in the case of T-RexMD the computational demand grows rapidly with system size. A Hamiltonian RexMD method that specifically enhances coupled dihedral angle transitions has been developed. The method employs added biasing potentials as replica parameters that destabilize available dihedral substates and was applied to study coupled dihedral transitions in nucleic acid molecules. The biasing potentials can be either fixed at the beginning of the simulation or optimized during an equilibration phase. The method was extensively tested and compared to conventional MD simulations and T-RexMD simulations on an adenine dinucleotide system and on a DNA abasic site. The biasing potential RexMD method showed improved sampling of conformational substates compared to conventional MD simulations similar to T-RexMD simulations but at a fraction of the computational demand. It is well suited to study systematically the fine structure and dynamics of large nucleic acids under realistic conditions including explicit solvent and ions and can be easily extended to other types of molecules.
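
    The core of any Hamiltonian replica-exchange scheme is the Metropolis swap test between neighbouring replicas. The sketch below is a generic illustration (not the authors' biasing-potential implementation): two replicas share a temperature but carry different biasing potentials, and a configuration swap is accepted with probability min(1, exp(-βΔ)).

```python
# Generic Hamiltonian replica-exchange acceptance test; the potentials here are hypothetical.
import math
import random

def attempt_exchange(U_i, U_j, x_i, x_j, beta):
    """U_i, U_j: callables returning the (biased) potential energy of a configuration."""
    delta = (U_i(x_j) + U_j(x_i)) - (U_i(x_i) + U_j(x_j))
    if delta <= 0.0 or random.random() < math.exp(-beta * delta):
        return x_j, x_i, True       # configurations swapped between the two replicas
    return x_i, x_j, False

# Hypothetical 1-D example: two harmonic biasing potentials on a dihedral-like coordinate.
U0 = lambda x: 0.5 * x ** 2
U1 = lambda x: 0.5 * (x - 1.0) ** 2
print(attempt_exchange(U0, U1, 0.1, 0.9, beta=1.0))
```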

  10. Validity of strong lensing statistics for constraints on the galaxy evolution model

    NASA Astrophysics Data System (ADS)

    Matsumoto, Akiko; Futamase, Toshifumi

    2008-02-01

    We examine the usefulness of strong lensing statistics for constraining the evolution of the number density of lensing galaxies, adopting the cosmological parameters determined by recent Wilkinson Microwave Anisotropy Probe observations. For this purpose, we employ the lens-redshift test proposed by Kochanek and constrain the parameters of two evolution models: a simple power-law model characterized by the power-law indexes νn and νv, and the evolution model of Mitchell et al. based on the cold dark matter structure formation scenario. We use the well-defined lens sample from the Sloan Digital Sky Survey (SDSS), which is similar in size to the samples used in previous studies. Furthermore, we adopt the velocity dispersion function of early-type galaxies based on SDSS DR1 and DR5. The indexes of the power-law model are consistent with previous studies; thus our results indicate mild evolution in the number and velocity dispersion of early-type galaxies out to z = 1. However, we find that the values of p and q used by Mitchell et al. are inconsistent with the presently available observational data. A more complete sample is necessary to obtain a more realistic determination of these parameters.

  11. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However, in shales, substantial hydrogen content is associated with both solid and fluid phases, and both signals may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in material, medical and food sciences.
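
    As a toy illustration of mixed Gaussian and exponential relaxation (not the published multi-component SGE inversion), the sketch below fits one Gaussian plus one exponential decay component to a synthetic signal by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def sge_model(t, a_g, t2g, a_e, t2e):
    """One Gaussian plus one exponential decay component."""
    return a_g * np.exp(-(t / t2g) ** 2) + a_e * np.exp(-t / t2e)

# Hypothetical synthetic signal: fast solid-like Gaussian plus slower fluid-like exponential.
t = np.linspace(0.01, 15.0, 600)                     # time, arbitrary units (e.g. ms)
rng = np.random.default_rng(1)
y = sge_model(t, 1.0, 0.5, 0.5, 3.0) + rng.normal(scale=0.01, size=t.size)

popt, _ = curve_fit(sge_model, t, y, p0=[0.8, 0.3, 0.6, 2.0], bounds=(0.0, np.inf))
print(popt)   # recovered amplitudes and characteristic decay times
```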

  12. Statistical multi-path exposure method for assessing the whole-body SAR in a heterogeneous human body model in a realistic environment.

    PubMed

    Vermeeren, Günter; Joseph, Wout; Martens, Luc

    2013-04-01

    Assessing the whole-body absorption in a human in a realistic environment requires a statistical approach covering all possible exposure situations. This article describes the development of a statistical multi-path exposure method for heterogeneous realistic human body models. The method is applied for the 6-year-old Virtual Family boy (VFB) exposed to the GSM downlink at 950 MHz. It is shown that the whole-body SAR does not differ significantly over the different environments at an operating frequency of 950 MHz. Furthermore, the whole-body SAR in the VFB for multi-path exposure exceeds the whole-body SAR for worst-case single-incident plane wave exposure by 3.6%. Moreover, the ICNIRP reference levels are not conservative with the basic restrictions in 0.3% of the exposure samples for the VFB at the GSM downlink of 950 MHz. The homogeneous spheroid with the dielectric properties of the head suggested by the IEC underestimates the absorption compared to realistic human body models. Moreover, the variation in the whole-body SAR for realistic human body models is larger than for homogeneous spheroid models. This is mainly due to the heterogeneity of the tissues and the irregular shape of the realistic human body model compared to homogeneous spheroid human body models. Copyright © 2012 Wiley Periodicals, Inc.

  13. Efficient thermal diode with ballistic spacer

    NASA Astrophysics Data System (ADS)

    Chen, Shunda; Donadio, Davide; Benenti, Giuliano; Casati, Giulio

    2018-03-01

    Thermal rectification is of importance not only for fundamental physics, but also for potential applications in thermal manipulation and thermal management. However, the thermal rectification effect usually decays rapidly with system size. Here, we show that a mass-graded system, with two diffusive leads separated by a ballistic spacer, can exhibit a large thermal rectification effect, with the rectification factor independent of system size. The underlying mechanism is explained in terms of the effective size-independent thermal gradient and the match or mismatch of the phonon bands. We also show the robustness of the thermal diode upon variation of the model's parameters. Our finding suggests a promising way of designing realistic, efficient thermal diodes.

  14. Advances in the simulation and automated measurement of well-sorted granular material: 1. Simulation

    USGS Publications Warehouse

    Daniel Buscombe,; Rubin, David M.

    2012-01-01

    In this, the first of a pair of papers which address the simulation and automated measurement of well-sorted natural granular material, a method is presented for simulation of two-phase (solid, void) assemblages of discrete non-cohesive particles. The purpose is to have a flexible, yet computationally and theoretically simple, suite of tools with well constrained and well known statistical properties, in order to simulate realistic granular material as a discrete element model with realistic size and shape distributions, for a variety of purposes. The stochastic modeling framework is based on three-dimensional tessellations with variable degrees of order in particle-packing arrangement. Examples of sediments with a variety of particle size distributions and spatial variability in grain size are presented. The relationship between particle shape and porosity conforms to published data. The immediate application is testing new algorithms for automated measurements of particle properties (mean and standard deviation of particle sizes, and apparent porosity) from images of natural sediment, as detailed in the second of this pair of papers. The model could also prove useful for simulating specific depositional structures found in natural sediments, the result of physical alterations to packing and grain fabric, using discrete particle flow models. While the principal focus here is on naturally occurring sediment and sedimentary rock, the methods presented might also be useful for simulations of similar granular or cellular material encountered in engineering, industrial and life sciences.

  15. Evaluation of Respondent-Driven Sampling

    PubMed Central

    McCreesh, Nicky; Frost, Simon; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda Ndagire; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Background Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex-workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total-population data. Methods Total-population data on age, tribe, religion, socioeconomic status, sexual activity and HIV status were available on a population of 2402 male household-heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, employing current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). Results We recruited 927 household-heads. Full and small RDS samples were largely representative of the total population, but both samples under-represented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven-sampling statistical-inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven-sampling bootstrap 95% confidence intervals included the population proportion. Conclusions Respondent-driven sampling produced a generally representative sample of this well-connected non-hidden population. However, current respondent-driven-sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience-sampling method, and caution is required when interpreting findings based on the sampling method. PMID:22157309
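
    The two evaluation criteria quoted above can be computed directly once estimates, raw sample proportions, and bootstrap intervals are in hand; the sketch below uses hypothetical numbers, not the study data.

```python
# Hypothetical proportions for four characteristics; not the Ugandan cohort data.
import numpy as np

truth    = np.array([0.30, 0.55, 0.10, 0.42])   # true population proportions
sample_p = np.array([0.34, 0.50, 0.13, 0.40])   # raw RDS sample proportions
estimate = np.array([0.37, 0.52, 0.12, 0.45])   # model-adjusted RDS estimates
ci_low   = np.array([0.28, 0.45, 0.07, 0.38])   # bootstrap 95% interval bounds
ci_high  = np.array([0.46, 0.59, 0.17, 0.52])

closer  = np.abs(estimate - truth) < np.abs(sample_p - truth)
covered = (ci_low <= truth) & (truth <= ci_high)
print(f"estimates closer than sample proportions: {closer.mean():.0%}")
print(f"bootstrap CIs covering the truth: {covered.mean():.0%}")
```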

  16. Face and construct validity of a computer-based virtual reality simulator for ERCP.

    PubMed

    Bittner, James G; Mellinger, John D; Imam, Toufic; Schade, Robert R; Macfadyen, Bruce V

    2010-02-01

    Currently, little evidence supports computer-based simulation for ERCP training. The objective was to determine the face and construct validity of a computer-based simulator for ERCP and to assess its perceived utility as a training tool. Novice and expert endoscopists completed 2 simulated ERCP cases using the GI Mentor II at the Virtual Education and Surgical Simulation Laboratory, Medical College of Georgia. Outcomes included times to complete the procedure, reach the papilla, and use fluoroscopy; attempts to cannulate the papilla, pancreatic duct, and common bile duct; and the number of contrast injections and complications. Subjects assessed simulator graphics, procedural accuracy, difficulty, haptics, overall realism, and training potential. Only when performance data from cases A and B were combined did the GI Mentor II differentiate novices and experts, based on times to complete the procedure, reach the papilla, and use fluoroscopy. Across skill levels, overall opinions were similar regarding graphics (moderately realistic), accuracy (similar to clinical ERCP), difficulty (similar to clinical ERCP), overall realism (moderately realistic), and haptics. Most participants (92%) claimed that the simulator has definite training potential or should be required for training. Limitations include the small sample size and single institution. The GI Mentor II demonstrated construct validity for ERCP based on select metrics. Most subjects thought that the simulated graphics, procedural accuracy, and overall realism exhibit face validity, and deemed it a useful training tool. Study repetition involving more participants and cases may help confirm results and establish the simulator's ability to differentiate skill levels based on ERCP-specific metrics.

  17. Digital 3D holographic display using scattering layers for enhanced viewing angle and image size

    NASA Astrophysics Data System (ADS)

    Yu, Hyeonseung; Lee, KyeoReh; Park, Jongchan; Park, YongKeun

    2017-05-01

    In digital 3D holographic displays, the generation of realistic 3D images has been hindered by limited viewing angle and image size. Here we demonstrate a digital 3D holographic display using volume speckle fields produced by scattering layers in which both the viewing angle and the image size are greatly enhanced. Although volume speckle fields exhibit random distributions, the transmitted speckle fields have a linear and deterministic relationship with the input field. By modulating the incident wavefront with a digital micro-mirror device, volume speckle patterns are controlled to generate 3D images of micrometer-size optical foci with 35° viewing angle in a volume of 2 cm × 2 cm × 2 cm.

  18. African Baobabs with False Inner Cavities: The Radiocarbon Investigation of the Lebombo Eco Trail Baobab

    PubMed Central

    Patrut, Adrian; Woodborne, Stephan; von Reden, Karl F.; Hall, Grant; Hofmeyr, Michele; Lowy, Daniel A.; Patrut, Roxana T.

    2015-01-01

    The article reports the radiocarbon investigation results of the Lebombo Eco Trail tree, a representative African baobab from Mozambique. Several wood samples collected from the large inner cavity and from the outer part of the tree were investigated by AMS radiocarbon dating. According to dating results, the age values of all samples increase from the sampling point with the distance into the wood. For samples collected from the cavity walls, the increase of age values with the distance into the wood (up to a point of maximum age) represents a major anomaly. The only realistic explanation for this anomaly is that such inner cavities are, in fact, natural empty spaces between several fused stems disposed in a ring-shaped structure. We named them false cavities. Several important differences between normal cavities and false cavities are presented. Eventually, we dated other African baobabs with false inner cavities. We found that this new architecture enables baobabs to reach large sizes and old ages. The radiocarbon date of the oldest sample was 1425 ± 24 BP, which corresponds to a calibrated age of 1355 ± 15 yr. The dating results also show that the Lebombo baobab consists of five fused stems, with ages between 900 and 1400 years; these five stems build the complete ring. The ring and the false cavity closed 800–900 years ago. The results also indicate that the stems stopped growing toward the false cavity over the past 500 years. PMID:25621989

  19. Quantifying introgression risk with realistic population genetics.

    PubMed

    Ghosh, Atiyo; Meirmans, Patrick G; Haccou, Patsy

    2012-12-07

    Introgression is the permanent incorporation of genes from the genome of one population into another. This can have severe consequences, such as extinction of endemic species, or the spread of transgenes. Quantification of the risk of introgression is an important component of genetically modified crop regulation. Most theoretical introgression studies aimed at such quantification disregard one or more of the most important factors concerning introgression: realistic genetical mechanisms, repeated invasions and stochasticity. In addition, the use of linkage as a risk mitigation strategy has not been studied properly yet with genetic introgression models. Current genetic introgression studies fail to take repeated invasions and demographic stochasticity into account properly, and use incorrect measures of introgression risk that can be manipulated by arbitrary choices. In this study, we present proper methods for risk quantification that overcome these difficulties. We generalize a probabilistic risk measure, the so-called hazard rate of introgression, for application to introgression models with complex genetics and small natural population sizes. We illustrate the method by studying the effects of linkage and recombination on transgene introgression risk at different population sizes.

  20. Quantifying introgression risk with realistic population genetics

    PubMed Central

    Ghosh, Atiyo; Meirmans, Patrick G.; Haccou, Patsy

    2012-01-01

    Introgression is the permanent incorporation of genes from the genome of one population into another. This can have severe consequences, such as extinction of endemic species, or the spread of transgenes. Quantification of the risk of introgression is an important component of genetically modified crop regulation. Most theoretical introgression studies aimed at such quantification disregard one or more of the most important factors concerning introgression: realistic genetical mechanisms, repeated invasions and stochasticity. In addition, the use of linkage as a risk mitigation strategy has not been studied properly yet with genetic introgression models. Current genetic introgression studies fail to take repeated invasions and demographic stochasticity into account properly, and use incorrect measures of introgression risk that can be manipulated by arbitrary choices. In this study, we present proper methods for risk quantification that overcome these difficulties. We generalize a probabilistic risk measure, the so-called hazard rate of introgression, for application to introgression models with complex genetics and small natural population sizes. We illustrate the method by studying the effects of linkage and recombination on transgene introgression risk at different population sizes. PMID:23055068

  1. Radio pulsar glitches as a state-dependent Poisson process

    NASA Astrophysics Data System (ADS)

    Fulgenzi, W.; Melatos, A.; Hughes, B. D.

    2017-10-01

    Gross-Pitaevskii simulations of vortex avalanches in a neutron star superfluid are limited computationally to ≲102 vortices and ≲102 avalanches, making it hard to study the long-term statistics of radio pulsar glitches in realistically sized systems. Here, an idealized, mean-field model of the observed Gross-Pitaevskii dynamics is presented, in which vortex unpinning is approximated as a state-dependent, compound Poisson process in a single random variable, the spatially averaged crust-superfluid lag. Both the lag-dependent Poisson rate and the conditional distribution of avalanche-driven lag decrements are inputs into the model, which is solved numerically (via Monte Carlo simulations) and analytically (via a master equation). The output statistics are controlled by two dimensionless free parameters: α, the glitch rate at a reference lag, multiplied by the critical lag for unpinning, divided by the spin-down rate; and β, the minimum fraction of the lag that can be restored by a glitch. The system evolves naturally to a self-regulated stationary state, whose properties are determined by α/αc(β), where αc(β) ≈ β-1/2 is a transition value. In the regime α ≳ αc(β), one recovers qualitatively the power-law size and exponential waiting-time distributions observed in many radio pulsars and Gross-Pitaevskii simulations. For α ≪ αc(β), the size and waiting-time distributions are both power-law-like, and a correlation emerges between size and waiting time until the next glitch, contrary to what is observed in most pulsars. Comparisons with astrophysical data are restricted by the small sample sizes available at present, with ≤35 events observed per pulsar.
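
    A minimal Monte Carlo sketch of a state-dependent Poisson glitch model is given below; the rate law λ(x) = αx/(1 - x), which diverges at the critical lag, and the uniform release fraction are illustrative assumptions rather than the paper's exact prescription.

```python
import numpy as np

def simulate_glitches(alpha=5.0, beta=0.3, t_max=50.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x, t, t_last = 0.0, 0.0, 0.0                    # dimensionless lag, time, time of last glitch
    sizes, waits = [], []
    while t < t_max:
        x = min(x + dt, 0.999)                      # lag grows at the (unit) spin-down rate
        rate = alpha * x / (1.0 - x)                # hypothetical lag-dependent unpinning rate
        if rng.random() < rate * dt:                # a glitch is triggered in this step
            frac = rng.uniform(beta, 1.0)           # fraction of the lag that is released
            sizes.append(frac * x)
            waits.append(t - t_last)
            t_last = t
            x *= 1.0 - frac
        t += dt
    return np.array(sizes), np.array(waits)

sizes, waits = simulate_glitches()
print(len(sizes), sizes.mean(), waits.mean())       # size and waiting-time statistics
```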

  2. Smsynth: AN Imagery Synthesis System for Soil Moisture Retrieval

    NASA Astrophysics Data System (ADS)

    Cao, Y.; Xu, L.; Peng, J.

    2018-04-01

    Soil moisture (SM) is an important variable in various research areas, such as weather and climate forecasting, agriculture, drought and flood monitoring and prediction, and human health. An ongoing challenge in estimating SM via synthetic aperture radar (SAR) is the development of SM retrieval methods; in particular, empirical models need as training samples a large number of measurements of SM and soil roughness parameters, which are very difficult to acquire. It is therefore difficult to develop empirical models from real SAR imagery alone, and methods for synthesizing SAR imagery are needed. To tackle this issue, an SM-based SAR imagery synthesis system named SMSynth is presented, which can simulate radar signals that are as realistic as possible relative to real SAR imagery. In SMSynth, SAR backscatter coefficients for each soil type are simulated via the Oh model within a Bayesian framework, where spatial correlation is modeled by a Markov random field (MRF). The backscatter coefficients, simulated from the designed soil and sensor parameters, enter the Bayesian framework through the data likelihood; the soil and sensor parameters are set as close as possible to realistic ground conditions and within the validity range of the Oh model. In this way, a complete and coherent Bayesian probabilistic framework is established. Experimental results show that SMSynth is capable of generating realistic SAR images that meet the need for large numbers of training samples for empirical models.

  3. Distributional assumptions in food and feed commodities- development of fit-for-purpose sampling protocols.

    PubMed

    Paoletti, Claudia; Esbensen, Kim H

    2015-01-01

    Material heterogeneity influences the effectiveness of sampling procedures. Most sampling guidelines used for assessment of food and/or feed commodities are based on classical statistical distribution requirements (the normal, binomial, and Poisson distributions) and almost universally rely on the assumption of randomness. However, this is unrealistic. The scientific food and feed community recognizes a strong preponderance of non-random distribution within commodity lots, which should be a more realistic prerequisite for the definition of effective sampling protocols. Nevertheless, these heterogeneity issues are overlooked, as the prime focus is often placed only on financial, time, equipment, and personnel constraints instead of mandating acquisition of documented representative samples under realistic heterogeneity conditions. This study shows how the principles promulgated in the Theory of Sampling (TOS) and practically tested over 60 years provide an effective framework for dealing with the complete set of adverse aspects of both compositional and distributional heterogeneity (material sampling errors), as well as with the errors incurred by the sampling process itself. The results of an empirical European Union study on genetically modified soybean heterogeneity, the Kernel Lot Distribution Assessment, are summarized, as they have a strong bearing on the issue of proper sampling protocol development. TOS principles apply universally in the food and feed realm and must therefore be considered the only basis for development of valid sampling protocols free from distributional constraints.
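
    The central point, that clustering breaks the random-mixing assumption behind binomial-type sampling guidelines, can be seen in a few lines of simulation; the lot below is hypothetical, not the KeLDA data.

```python
# Same overall GM fraction, very different increment-to-increment variability.
import numpy as np

rng = np.random.default_rng(2)
n_kernels, p_gm, n_increments = 100_000, 0.01, 100
inc = n_kernels // n_increments

random_lot = rng.random(n_kernels) < p_gm                      # well-mixed lot
clustered_lot = np.zeros(n_kernels, dtype=bool)
start = rng.integers(0, n_kernels - int(n_kernels * p_gm))
clustered_lot[start:start + int(n_kernels * p_gm)] = True      # all positives in one pocket

for name, lot in [("random", random_lot), ("clustered", clustered_lot)]:
    fractions = lot.reshape(n_increments, inc).mean(axis=1)    # one increment = one sample
    print(f"{name:9s} mean={fractions.mean():.4f}  sd={fractions.std():.4f}")
```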

  4. Rare events in stochastic populations under bursty reproduction

    NASA Astrophysics Data System (ADS)

    Be'er, Shay; Assaf, Michael

    2016-11-01

    Recently, a first step was made by the authors towards a systematic investigation of the effect of reaction-step-size noise—uncertainty in the step size of the reaction—on the dynamics of stochastic populations. This was done by investigating the effect of bursty influx on the switching dynamics of stochastic populations. Here we extend this formalism to account for bursty reproduction processes, and improve the accuracy of the formalism to include subleading-order corrections. Bursty reproduction appears in various contexts, where notable examples include bursty viral production from infected cells, and reproduction of mammals involving varying number of offspring. The main question we quantitatively address is how bursty reproduction affects the overall fate of the population. We consider two complementary scenarios: population extinction and population survival; in the former a population gets extinct after maintaining a long-lived metastable state, whereas in the latter a population proliferates despite undergoing a deterministic drift towards extinction. In both models reproduction occurs in bursts, sampled from an arbitrary distribution. Using the WKB approach, we show in the extinction problem that bursty reproduction broadens the quasi-stationary distribution of population sizes in the metastable state, which results in a drastic reduction of the mean time to extinction compared to the non-bursty case. In the survival problem, it is shown that bursty reproduction drastically increases the survival probability of the population. Close to the bifurcation limit our analytical results simplify considerably and are shown to depend solely on the mean and variance of the burst-size distribution. Our formalism is demonstrated on several realistic distributions which all compare well with numerical Monte-Carlo simulations.
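
    A direct Gillespie-type simulation gives a feel for how bursty reproduction shapes extinction times; the rates and burst distribution below are hypothetical choices, not the paper's WKB calculation.

```python
import numpy as np

def time_to_extinction(n0=5, b=1.0, N=5, mean_burst=2.0, rng=None):
    """Reproduction events occur at rate b*n and add a geometric burst of offspring;
    deaths occur at the density-dependent rate n*(1 + n/N)."""
    rng = rng if rng is not None else np.random.default_rng()
    n, t = n0, 0.0
    while n > 0:
        birth_rate = b * n
        death_rate = n * (1.0 + n / N)
        total = birth_rate + death_rate
        t += rng.exponential(1.0 / total)           # Gillespie waiting time
        if rng.random() < birth_rate / total:
            n += rng.geometric(1.0 / mean_burst)    # burst size >= 1 with mean mean_burst
        else:
            n -= 1
    return t

rng = np.random.default_rng(3)
print(np.mean([time_to_extinction(rng=rng) for _ in range(100)]))   # mean time to extinction
```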

  5. Grain size dependent magnetic discrimination of Iceland and South Greenland terrestrial sediments in the northern North Atlantic sediment record

    NASA Astrophysics Data System (ADS)

    Hatfield, Robert G.; Stoner, Joseph S.; Reilly, Brendan T.; Tepley, Frank J.; Wheeler, Benjamin H.; Housen, Bernard A.

    2017-09-01

    We use isothermal and temperature dependent in-field and magnetic remanence methods together with electron microscopy to characterize different sieved size fractions from terrestrial sediments collected in Iceland and southern Greenland. The magnetic fraction of Greenland silts (3-63 μm) and sands (>63 μm) is primarily composed of near-stoichiometric magnetite that may be oxidized in the finer clay (<3 μm) fractions. In contrast, all Icelandic fractions dominantly contain titanomagnetite of a range of compositions. Ferrimagnetic minerals preferentially reside in the silt-size fraction and exist as fine single-domain (SD) and pseudo-single-domain (PSD) size inclusions in Iceland samples, in contrast to coarser PSD and multi-domain (MD) discrete magnetites from southern Greenland. We demonstrate the potential of using magnetic properties of the silt fraction for source unmixing by creating known endmember mixtures and by using naturally mixed marine sediments from the Eirik Ridge south of Greenland. We develop a novel approach to ferrimagnetic source unmixing by using low temperature magnetic susceptibility curves that are sensitive to the different crystallinity and cation substitution characteristics of the different source regions. Covariation of these properties with hysteresis parameters suggests sediment source changes have driven the magnetic mineral variations observed in Eirik Ridge sediments since the last glacial maximum. These observations assist the development of a routine method and interpretative framework to quantitatively determine provenance in a geologically realistic and meaningful way and assess how different processes combine to drive magnetic variation in the North Atlantic sediment record.
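
    A generic two-endmember linear unmixing step (not the authors' exact workflow) can be sketched with non-negative least squares, here applied to hypothetical low-temperature susceptibility curves.

```python
import numpy as np
from scipy.optimize import nnls

temps = np.linspace(10, 300, 60)                              # temperature, K
# Hypothetical endmember curve shapes standing in for Iceland- and Greenland-type sources.
iceland   = 1.0 + 0.8 * np.exp(-(temps - 50.0) ** 2 / 800.0)
greenland = 1.0 + 0.3 * (temps < 120)                          # toy Verwey-like step

mixed = 0.6 * iceland + 0.4 * greenland + np.random.default_rng(4).normal(0, 0.01, temps.size)

A = np.column_stack([iceland, greenland])
coeffs, residual = nnls(A, mixed)                              # non-negative mixing coefficients
print(coeffs / coeffs.sum())                                   # estimated source proportions
```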

  6. LUNAR DUST GRAIN CHARGING BY ELECTRON IMPACT: COMPLEX ROLE OF SECONDARY ELECTRON EMISSIONS IN SPACE ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbas, M. M.; Craven, P. D.; LeClair, A. C.

    2010-08-01

    Dust grains in various astrophysical environments are generally charged electrostatically by photoelectric emissions with radiation from nearby sources, or by electron/ion collisions by sticking or secondary electron emissions (SEEs). The high vacuum environment on the lunar surface leads to some unusual physical and dynamical phenomena involving dust grains with high adhesive characteristics, and levitation and transportation over long distances. Knowledge of the dust grain charges and equilibrium potentials is important for understanding a variety of physical and dynamical processes in the interstellar medium, and heliospheric, interplanetary/planetary, and lunar environments. It has been well recognized that the charging properties of individual micron-/submicron-size dust grains are expected to be substantially different from the corresponding values for bulk materials. In this paper, we present experimental results on the charging of individual 0.2-13 μm size dust grains selected from Apollo 11 and 17 dust samples, and spherical silica particles by exposing them to mono-energetic electron beams in the 10-200 eV energy range. The dust charging process by electron impact involving the SEEs discussed is found to be a complex charging phenomenon with strong particle size dependence. The measurements indicate substantial differences between the polarity and magnitude of the dust charging rates of individual small-size dust grains, and the measurements and model properties of corresponding bulk materials. A more comprehensive plan of measurements of the charging properties of individual dust grains for developing a database for realistic models of dust charging in astrophysical and lunar environments is in progress.

  7. Lunar Dust Grain Charging by Electron Impact: Complex Role of Secondary Electron Emissions in Space Environments

    NASA Technical Reports Server (NTRS)

    Abbas, M. M.; Tankosic, D.; Crave, P. D.; LeClair, A.; Spann, J. F.

    2010-01-01

    Dust grains in various astrophysical environments are generally charged electrostatically by photoelectric emissions with radiation from nearby sources, or by electron/ion collisions by sticking or secondary electron emissions (SEEs). The high vacuum environment on the lunar surface leads to some unusual physical and dynamical phenomena involving dust grains with high adhesive characteristics, and levitation and transportation over long distances. Knowledge of the dust grain charges and equilibrium potentials is important for understanding a variety of physical and dynamical processes in the interstellar medium, and heliospheric, interplanetary/planetary, and lunar environments. It has been well recognized that the charging properties of individual micron-/submicron-size dust grains are expected to be substantially different from the corresponding values for bulk materials. In this paper, we present experimental results on the charging of individual 0.2-13 μm size dust grains selected from Apollo 11 and 17 dust samples, and spherical silica particles by exposing them to mono-energetic electron beams in the 10-200 eV energy range. The dust charging process by electron impact involving the SEEs discussed is found to be a complex charging phenomenon with strong particle size dependence. The measurements indicate substantial differences between the polarity and magnitude of the dust charging rates of individual small-size dust grains, and the measurements and model properties of corresponding bulk materials. A more comprehensive plan of measurements of the charging properties of individual dust grains for developing a database for realistic models of dust charging in astrophysical and lunar environments is in progress.

  8. Self-assessed driver competence among novice drivers--a comparison of driving test candidate assessments and examiner assessments in a Dutch and Finnish sample.

    PubMed

    Mynttinen, Sami; Sundström, Anna; Vissers, Jan; Koivukoski, Marita; Hakuli, Kari; Keskinen, Esko

    2009-01-01

    This study examined novice drivers' overconfidence by comparing their self-assessed driver competence with the assessments made by driving examiners. A Finnish sample (n=2,739) and a Dutch sample (n=239) of driver's license candidates assessed their driver competence in six areas and took the driving test. In contrast to previous studies where drivers have assessed their skill in comparison to the average driver, a smaller proportion overestimated and a larger proportion made realistic self-assessments of their driver competence in the present study, where self-assessments were compared with examiner assessments. Between 40% and 50% of the candidates in both samples made realistic assessments and 30% to 40% overestimated their competence. The proportion of overestimation was greater in the Dutch than in the Finnish sample, which might be explained by greater possibilities for practicing self-assessment in the Finnish driver education system. Similar to other self-assessment studies that indicate that incompetence is related to overestimation, a larger proportion of candidates that failed the test overestimated their skill compared to those who passed. In contrast to other studies, males did not overestimate their skills more than females, and younger driver candidates were not more overconfident than older drivers. Although a large proportion of the candidates made a realistic assessment of their own driver competence, overestimation is still a problem that needs to be dealt with. To improve the accuracy of novice drivers' self-assessment, methods for self-assessment training should be developed and implemented in the driver licensing process.

  9. Monte Carlo simulation of ferroelectric domain growth

    NASA Astrophysics Data System (ADS)

    Li, B. L.; Liu, X. P.; Fang, F.; Zhu, J. L.; Liu, J.-M.

    2006-01-01

    The kinetics of two-dimensional isothermal domain growth in a quenched ferroelectric system is investigated using Monte Carlo simulation based on a realistic Ginzburg-Landau ferroelectric model with cubic-tetragonal (square-rectangle) phase transitions. The evolution of the domain pattern and domain size with annealing time is simulated, and the stability of trijunctions and tetrajunctions of domain walls is analyzed. It is found that in this more realistic model with strong dipole alignment anisotropy and long-range Coulomb interaction, the power law for normal domain growth still applies. Towards the late stage of domain growth, both the average domain area and the reciprocal density of domain wall junctions increase linearly with time, and the one-parameter dynamic scaling of the domain growth is demonstrated.

  10. Simulations of X-ray diffraction of shock-compressed single-crystal tantalum with synchrotron undulator sources.

    PubMed

    Tang, M X; Zhang, Y Y; E, J C; Luo, S N

    2018-05-01

    Polychromatic synchrotron undulator X-ray sources are useful for ultrafast single-crystal diffraction under shock compression. Here, simulations of X-ray diffraction of shock-compressed single-crystal tantalum with realistic undulator sources are reported, based on large-scale molecular dynamics simulations. Purely elastic deformation, elastic-plastic two-wave structure, and severe plastic deformation under different impact velocities are explored, as well as an edge release case. Transmission-mode diffraction simulations consider crystallographic orientation, loading direction, incident beam direction, X-ray spectrum bandwidth and realistic detector size. Diffraction patterns and reciprocal space nodes are obtained from atomic configurations for different loading (elastic and plastic) and detection conditions, and interpretation of the diffraction patterns is discussed.

  11. Design of helicopter rotor blades for optimum dynamic characteristics

    NASA Technical Reports Server (NTRS)

    Peters, D. A.; Ko, T.; Korn, A. E.; Rossow, M. P.

    1982-01-01

    The possibilities and the limitations of tailoring blade mass and stiffness distributions to give an optimum blade design in terms of weight, inertia, and dynamic characteristics are investigated. Changes in mass or stiffness distribution used to place rotor frequencies at desired locations are determined. Theoretical limits to the amount of frequency shift are established. Realistic constraints on blade properties based on weight, mass moment of inertia, size, strength, and stability are formulated. The extent to which hub loads can be minimized by proper choice of the EI distribution is determined. Configurations that are simple enough to yield clear, fundamental insights into the structural mechanisms, but sufficiently complex to give a realistic result for an optimum rotor blade, are emphasized.

  12. Simulations of X-ray diffraction of shock-compressed single-crystal tantalum with synchrotron undulator sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, M. X.; Zhang, Y. Y.; E, J. C.

    Polychromatic synchrotron undulator X-ray sources are useful for ultrafast single-crystal diffraction under shock compression. Here, simulations of X-ray diffraction of shock-compressed single-crystal tantalum with realistic undulator sources are reported, based on large-scale molecular dynamics simulations. Purely elastic deformation, elastic–plastic two-wave structure, and severe plastic deformation under different impact velocities are explored, as well as an edge release case. Transmission-mode diffraction simulations consider crystallographic orientation, loading direction, incident beam direction, X-ray spectrum bandwidth and realistic detector size. Diffraction patterns and reciprocal space nodes are obtained from atomic configurations for different loading (elastic and plastic) and detection conditions, and interpretation of the diffraction patterns is discussed.

  13. A comparison of microwave versus direct solar heating for lunar brick production

    NASA Technical Reports Server (NTRS)

    Yankee, S. J.; Strenski, D. G.; Pletka, B. J.; Patil, D. S.; Mutsuddy, B. C.

    1990-01-01

    Two processing techniques considered suitable for producing bricks from lunar regolith are examined: direct solar heating and microwave heating. An analysis was performed to compare the two processes in terms of the amount of power and time required to fabricate bricks of various sizes. Microwave heating was shown to be significantly faster than solar heating for rapid production of realistic-size bricks. However, the relative simplicity of the solar collector(s) used for the solar furnace compared to the equipment necessary for microwave generation may present an economic tradeoff.

  14. Generating Virtual Patients by Multivariate and Discrete Re-Sampling Techniques.

    PubMed

    Teutonico, D; Musuamba, F; Maas, H J; Facius, A; Yang, S; Danhof, M; Della Pasqua, O

    2015-10-01

    Clinical Trial Simulations (CTS) are a valuable tool for decision-making during drug development. However, to obtain realistic simulation scenarios, the patients included in the CTS must be representative of the target population. This is particularly important when covariate effects exist that may affect the outcome of a trial. The objective of our investigation was to evaluate and compare CTS results using re-sampling from a population pool and multivariate distributions to simulate patient covariates. COPD was selected as the paradigm disease for the purposes of our analysis, FEV1 was used as the response measure, and the effects of a hypothetical intervention were evaluated in different populations in order to assess the predictive performance of the two methods. Our results show that the multivariate distribution method produces realistic covariate correlations, comparable to the real population. Moreover, it allows simulation of patient characteristics beyond the limits of inclusion and exclusion criteria in historical protocols. Both methods, discrete re-sampling and multivariate distribution, generate realistic pools of virtual patients. However, the use of a multivariate distribution enables more flexible simulation scenarios since it is not necessarily bound to the existing covariate combinations in the available clinical data sets.
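
    The two covariate-generation strategies compared above can be sketched in a few lines; the COPD-like covariates below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical observed covariates: age (yr), weight (kg), baseline FEV1 (L),
# drawn with built-in correlations so the two strategies can be compared.
mu_obs = [65.0, 75.0, 1.4]
cov_obs = [[64.0, 40.0, -1.0],
           [40.0, 144.0, -1.2],
           [-1.0, -1.2, 0.16]]
observed = rng.multivariate_normal(mu_obs, cov_obs, size=300)

# 1) Discrete re-sampling: bootstrap whole records, so only observed combinations occur.
resampled = observed[rng.integers(0, len(observed), size=1000)]

# 2) Multivariate distribution: fit mean/covariance, then draw new (possibly unobserved) patients.
mu, cov = observed.mean(axis=0), np.cov(observed, rowvar=False)
simulated = rng.multivariate_normal(mu, cov, size=1000)

print(np.corrcoef(observed, rowvar=False)[0, 1],    # age-weight correlation in the "data"
      np.corrcoef(simulated, rowvar=False)[0, 1])   # preserved in the simulated patients
```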

  15. Random sphere packing model of heterogeneous propellants

    NASA Astrophysics Data System (ADS)

    Kochevets, Sergei Victorovich

    It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large scale computations is a realistic model of industrial propellants which retains the true morphology, a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion. In addition, a method of analysis of the morphology of heterogeneous propellants is developed which uses the concept of multi-point correlation functions. A set of intrinsic length scales of local density fluctuations in random heterogeneous propellants is identified by performing a Monte-Carlo study of the correlation functions. This method of analysis shows great promise for understanding the origins of the combustion instability of heterogeneous propellants, and is believed to become a valuable tool for the development of safe and reliable rocket engines.
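
    A much simpler random-sequential-addition scheme (not the growth-based algorithm developed in the dissertation) already illustrates packing of polydisperse spheres in a box:

```python
import numpy as np

def pack_spheres(n_target=300, box=10.0, mean_r=0.4, sd_r=0.1, max_tries=200_000, seed=6):
    rng = np.random.default_rng(seed)
    centers, radii = [], []
    tries = 0
    while len(radii) < n_target and tries < max_tries:
        tries += 1
        r = max(rng.normal(mean_r, sd_r), 0.05)        # polydisperse radius, truncated
        c = rng.uniform(r, box - r, size=3)            # keep the sphere fully inside the box
        if centers:
            d = np.linalg.norm(np.array(centers) - c, axis=1)
            if np.any(d < np.array(radii) + r):
                continue                                # overlap: reject and retry
        centers.append(c)
        radii.append(r)
    volume_fraction = (4.0 / 3.0) * np.pi * np.sum(np.array(radii) ** 3) / box ** 3
    return np.array(centers), np.array(radii), volume_fraction

centers, radii, phi = pack_spheres()
print(len(radii), round(phi, 3))                       # number packed and packing fraction
```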

  16. Random species loss underestimates dilution effects of host diversity on foliar fungal diseases under fertilization.

    PubMed

    Liu, Xiang; Chen, Fei; Lyu, Shengman; Sun, Dexin; Zhou, Shurong

    2018-02-01

    With increasing attention being paid to the consequences of global biodiversity losses, several recent studies have demonstrated that realistic species losses can have larger impacts than random species losses on community productivity and resilience. However, little is known about the effects of the order in which species are lost on biodiversity-disease relationships. Using a multiyear nitrogen addition and artificial warming experiment in natural assemblages of alpine meadow vegetation on the Qinghai-Tibetan Plateau, we inferred the sequence of plant species losses under fertilization/warming. Then the sequence of species losses under fertilization/warming was used to simulate the species loss orders (both realistic and random) in an adjacently novel removal experiment manipulating plot-level plant diversity. We explicitly compared the effect sizes of random versus realistic species losses simulated from fertilization/warming on plant foliar fungal diseases. We found that realistic species losses simulated from fertilization had greater effects than random losses on fungal diseases, and that species identity drove the diversity-disease relationship. Moreover, the plant species most prone to foliar fungal diseases were also the least vulnerable to extinction under fertilization, demonstrating the importance of protecting low competence species (the ability to maintain and transmit fungal infections was low) to impede the spread of infectious disease. In contrast, there was no difference between random and realistic species loss scenarios simulated from experimental warming (or the combination of warming and fertilization) on the diversity-disease relationship, indicating that the functional consequences of species losses may vary under different drivers.

  17. Realistic micromechanical modeling and simulation of two-phase heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Sreeranganathan, Arun

    This dissertation research focuses on micromechanical modeling and simulations of two-phase heterogeneous materials exhibiting anisotropic and non-uniform microstructures with long-range spatial correlations. Completed work involves development of methodologies for realistic micromechanical analyses of materials using a combination of stereological techniques, two- and three-dimensional digital image processing, and finite element based modeling tools. The methodologies are developed via its applications to two technologically important material systems, namely, discontinuously reinforced aluminum composites containing silicon carbide particles as reinforcement, and boron modified titanium alloys containing in situ formed titanium boride whiskers. Microstructural attributes such as the shape, size, volume fraction, and spatial distribution of the reinforcement phase in these materials were incorporated in the models without any simplifying assumptions. Instrumented indentation was used to determine the constitutive properties of individual microstructural phases. Micromechanical analyses were performed using realistic 2D and 3D models and the results were compared with experimental data. Results indicated that 2D models fail to capture the deformation behavior of these materials and 3D analyses are required for realistic simulations. The effect of clustering of silicon carbide particles and associated porosity on the mechanical response of discontinuously reinforced aluminum composites was investigated using 3D models. Parametric studies were carried out using computer simulated microstructures incorporating realistic microstructural attributes. The intrinsic merit of this research is the development and integration of the required enabling techniques and methodologies for representation, modeling, and simulations of complex geometry of microstructures in two- and three-dimensional space facilitating better understanding of the effects of microstructural geometry on the mechanical behavior of materials.

  18. Processing ultrasonic inspection data from multiple scan patterns for turbine rotor weld build-up evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Xuefei; Zhou, S. Kevin; Rasselkorde, El Mahjoub

    The study presents a data processing methodology for weld build-up using multiple scan patterns. To achieve an overall high probability of detection for flaws with different orientations, an inspection procedure with three different scan patterns is proposed. The three scan patterns are radial-tangential longitude wave pattern, axial-radial longitude wave pattern, and tangential shear wave pattern. Scientific fusion of the inspection data is implemented using volume reconstruction techniques. The idea is to perform spatial domain forward data mapping for all sampling points. A conservative scheme is employed to handle the case that multiple sampling points are mapped to one grid location. The scheme assigns the maximum value for the grid location to retain the largest equivalent reflector size for the location. The methodology is demonstrated and validated using a realistic ring of weld build-up. Tungsten balls and bars are embedded to the weld build-up during manufacturing process to represent natural flaws. Flat bottomed holes and side drilled holes are installed as artificial flaws. Automatic flaw identification and extraction are demonstrated. Results indicate the inspection procedure with multiple scan patterns can identify all the artificial and natural flaws.
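
    The conservative voxel-fusion rule described above, keeping the maximum response when several sampling points map to one grid location, can be sketched as follows (hypothetical geometry, not the authors' software):

```python
import numpy as np

def fuse_scans(scans, grid_shape, origin, spacing):
    """scans: list of (points Nx3, amplitudes N) tuples in a common physical frame."""
    volume = np.zeros(grid_shape)
    for points, amps in scans:
        idx = np.floor((points - origin) / spacing).astype(int)     # forward mapping to voxels
        inside = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
        for (i, j, k), a in zip(idx[inside], amps[inside]):
            volume[i, j, k] = max(volume[i, j, k], a)   # conservative: keep the maximum response
    return volume

rng = np.random.default_rng(7)
scan_a = (rng.uniform(0, 100, (5000, 3)), rng.random(5000))   # two hypothetical scan patterns
scan_b = (rng.uniform(0, 100, (5000, 3)), rng.random(5000))
vol = fuse_scans([scan_a, scan_b], grid_shape=(50, 50, 50), origin=np.zeros(3), spacing=2.0)
print(vol.max(), (vol > 0).mean())
```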

  19. Toward a theory of patient and consumer activation.

    PubMed

    Hibbard, Judith H; Mahoney, Eldon

    2010-03-01

    The purpose of this study is to begin the process of developing a theory of activation, to inform educational efforts and the design of interventions. Because the experience of positive emotions in daily life tends to widen the individual's array of behavioral responses and increase their openness to new information, we examine how emotions relate to activation levels. A web survey was carried out in 2008 with a national sample of respondents between the ages of 25 and 75. The study achieved a 63% response rate with a final sample size of 843. The findings indicate that activation is linked with the experience of positive and negative emotion in daily life. Those low in activation are weighted down by negative affect and negative self-perception. Bringing about change in activation likely means breaking this cycle of negative self-perception and emotions. Experiencing success can start a positive upward cycle, just as failure produces the opposite. By encouraging small steps toward improving health, ones that are realistic given the individual's level of activation, it is possible to start that positive cycle. Effective educational efforts should focus on improving self-efficacy and the individual's self-concept as a self-manager. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  20. ForceGen 3D structure and conformer generation: from small lead-like molecules to macrocyclic drugs

    NASA Astrophysics Data System (ADS)

    Cleves, Ann E.; Jain, Ajay N.

    2017-05-01

    We introduce the ForceGen method for 3D structure generation and conformer elaboration of drug-like small molecules. ForceGen is novel, avoiding use of distance geometry, molecular templates, or simulation-oriented stochastic sampling. The method is primarily driven by the molecular force field, implemented using an extension of MMFF94s and a partial charge estimator based on electronegativity-equalization. The force field is coupled to algorithms for direct sampling of realistic physical movements made by small molecules. Results are presented on a standard benchmark from the Cambridge Crystallographic Database of 480 drug-like small molecules, including full structure generation from SMILES strings. Reproduction of protein-bound crystallographic ligand poses is demonstrated on four carefully curated data sets: the ConfGen Set (667 ligands), the PINC cross-docking benchmark (1062 ligands), a large set of macrocyclic ligands (182 total with typical ring sizes of 12-23 atoms), and a commonly used benchmark for evaluating macrocycle conformer generation (30 ligands total). Results compare favorably to alternative methods, and performance on macrocyclic compounds approaches that observed on non-macrocycles while yielding a roughly 100-fold speed improvement over alternative MD-based methods with comparable performance.

  1. Processing ultrasonic inspection data from multiple scan patterns for turbine rotor weld build-up evaluations

    NASA Astrophysics Data System (ADS)

    Guan, Xuefei; Rasselkorde, El Mahjoub; Abbasi, Waheed; Zhou, S. Kevin

    2015-03-01

    The study presents a data processing methodology for weld build-up using multiple scan patterns. To achieve an overall high probability of detection for flaws with different orientations, an inspection procedure with three different scan patterns is proposed. The three scan patterns are radial-tangential longitude wave pattern, axial-radial longitude wave pattern, and tangential shear wave pattern. Scientific fusion of the inspection data is implemented using volume reconstruction techniques. The idea is to perform spatial domain forward data mapping for all sampling points. A conservative scheme is employed to handle the case in which multiple sampling points are mapped to one grid location. The scheme assigns the maximum value for the grid location to retain the largest equivalent reflector size for the location. The methodology is demonstrated and validated using a realistic ring of weld build-up. Tungsten balls and bars are embedded in the weld build-up during the manufacturing process to represent natural flaws. Flat-bottomed holes and side-drilled holes are installed as artificial flaws. Automatic flaw identification and extraction are demonstrated. Results indicate that the inspection procedure with multiple scan patterns can identify all the artificial and natural flaws.

  2. Stochastic, adaptive sampling of information by microvilli in fly photoreceptors.

    PubMed

    Song, Zhuoyi; Postma, Marten; Billings, Stephen A; Coca, Daniel; Hardie, Roger C; Juusola, Mikko

    2012-08-07

    In fly photoreceptors, light is focused onto a photosensitive waveguide, the rhabdomere, consisting of tens of thousands of microvilli. Each microvillus is capable of generating elementary responses, quantum bumps, in response to single photons using a stochastically operating phototransduction cascade. Whereas much is known about the cascade reactions, less is known about how the concerted action of the microvilli population encodes light changes into neural information and how the ultrastructure and biochemical machinery of photoreceptors of flies and other insects evolved in relation to the information sampling and processing they perform. We generated biophysically realistic fly photoreceptor models, which accurately simulate the encoding of visual information. By comparing stochastic simulations with single cell recordings from Drosophila photoreceptors, we show how adaptive sampling by 30,000 microvilli captures the temporal structure of natural contrast changes. Following each bump, individual microvilli are rendered briefly (~100-200 ms) refractory, thereby reducing quantum efficiency with increasing intensity. The refractory period opposes saturation, dynamically and stochastically adjusting availability of microvilli (bump production rate: sample rate), whereas intracellular calcium and voltage adapt bump amplitude and waveform (sample size). These adapting sampling principles result in robust encoding of natural light changes, which both approximates perceptual contrast constancy and enhances novel events under different light conditions, and predict information processing across a range of species with different visual ecologies. These results clarify why fly photoreceptors are structured the way they are and function as they do, linking sensory information to sensory evolution and revealing benefits of stochasticity for neural information processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
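
    The sampling principle summarized above can be caricatured in a few lines: photons land on randomly chosen microvilli, each microvillus that produces a bump becomes refractory for on the order of 100-200 ms, and quantum efficiency therefore falls as intensity rises. The toy simulation below is only meant to illustrate that relationship; it is not the authors' biophysical model, and all rates and constants are placeholders.

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_bumps(photon_rate, n_microvilli=30000, dt=1e-3, t_end=1.0,
                         refractory_mean=0.15):
          """Count bumps per time step for a microvillus population with refractoriness."""
          refractory_until = np.zeros(n_microvilli)   # time each microvillus is next available
          times = np.arange(0.0, t_end, dt)
          bumps = np.zeros_like(times)
          for i, t in enumerate(times):
              hits = rng.poisson(photon_rate * dt)                  # photons in this time step
              targets = rng.integers(0, n_microvilli, size=hits)    # microvilli that are hit
              available = np.unique(targets[refractory_until[targets] <= t])
              bumps[i] = available.size                             # one bump per available microvillus
              refractory_until[available] = t + rng.exponential(refractory_mean, size=available.size)
          return times, bumps

      # approximate quantum efficiency (bumps per photon over 1 s) declines as intensity rises
      for rate in (1e4, 1e5, 1e6):
          t, b = simulate_bumps(rate)
          print(f"photon rate {rate:.0e}/s -> bumps per photon {b.sum() / rate:.3f}")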

  3. Stochastic, Adaptive Sampling of Information by Microvilli in Fly Photoreceptors

    PubMed Central

    Song, Zhuoyi; Postma, Marten; Billings, Stephen A.; Coca, Daniel; Hardie, Roger C.; Juusola, Mikko

    2012-01-01

    Summary Background In fly photoreceptors, light is focused onto a photosensitive waveguide, the rhabdomere, consisting of tens of thousands of microvilli. Each microvillus is capable of generating elementary responses, quantum bumps, in response to single photons using a stochastically operating phototransduction cascade. Whereas much is known about the cascade reactions, less is known about how the concerted action of the microvilli population encodes light changes into neural information and how the ultrastructure and biochemical machinery of photoreceptors of flies and other insects evolved in relation to the information sampling and processing they perform. Results We generated biophysically realistic fly photoreceptor models, which accurately simulate the encoding of visual information. By comparing stochastic simulations with single cell recordings from Drosophila photoreceptors, we show how adaptive sampling by 30,000 microvilli captures the temporal structure of natural contrast changes. Following each bump, individual microvilli are rendered briefly (∼100–200 ms) refractory, thereby reducing quantum efficiency with increasing intensity. The refractory period opposes saturation, dynamically and stochastically adjusting availability of microvilli (bump production rate: sample rate), whereas intracellular calcium and voltage adapt bump amplitude and waveform (sample size). These adapting sampling principles result in robust encoding of natural light changes, which both approximates perceptual contrast constancy and enhances novel events under different light conditions, and predict information processing across a range of species with different visual ecologies. Conclusions These results clarify why fly photoreceptors are structured the way they are and function as they do, linking sensory information to sensory evolution and revealing benefits of stochasticity for neural information processing. PMID:22704990

  4. Spline Laplacian estimate of EEG potentials over a realistic magnetic resonance-constructed scalp surface model.

    PubMed

    Babiloni, F; Babiloni, C; Carducci, F; Fattorini, L; Onorati, P; Urbano, A

    1996-04-01

    This paper presents a realistic Laplacian (RL) estimator based on a tensorial formulation of the surface Laplacian (SL) that uses the 2-D thin plate spline function to obtain a mathematical description of a realistic scalp surface. Because of this tensorial formulation, the RL does not need an orthogonal reference frame placed on the realistic scalp surface. In simulation experiments the RL was estimated with an increasing number of "electrodes" (up to 256) on a mathematical scalp model, the analytic Laplacian being used as a reference. Second and third order spherical spline Laplacian estimates were examined for comparison. Noise of increasing magnitude and spatial frequency was added to the simulated potential distributions. Movement-related potentials and somatosensory evoked potentials sampled with 128 electrodes were used to estimate the RL on a realistically shaped, MR-constructed model of the subject's scalp surface. The RL was also estimated on a mathematical spherical scalp model computed from the real scalp surface. Simulation experiments showed that the performances of the RL estimator were similar to those of the second and third order spherical spline Laplacians. Furthermore, the information content of scalp-recorded potentials was clearly better when the RL estimator computed the SL of the potential on an MR-constructed scalp surface model.

  5. Using the realist perspective to link theory from qualitative evidence synthesis to quantitative studies: Broadening the matrix approach.

    PubMed

    van Grootel, Leonie; van Wesel, Floryt; O'Mara-Eves, Alison; Thomas, James; Hox, Joop; Boeije, Hennie

    2017-09-01

    This study describes an approach for the use of a specific type of qualitative evidence synthesis in the matrix approach, a mixed studies reviewing method. The matrix approach compares quantitative and qualitative data on the review level by juxtaposing concrete recommendations from the qualitative evidence synthesis against interventions in primary quantitative studies. However, types of qualitative evidence syntheses that are associated with theory building generate theoretical models instead of recommendations. Therefore, the output from these types of qualitative evidence syntheses cannot directly be used for the matrix approach but requires transformation. This approach allows for the transformation of these types of output. The approach enables the inference of moderation effects instead of direct effects from the theoretical model developed in a qualitative evidence synthesis. Recommendations for practice are formulated on the basis of interactional relations inferred from the qualitative evidence synthesis. In doing so, we apply the realist perspective to model variables from the qualitative evidence synthesis according to the context-mechanism-outcome configuration. A worked example shows that it is possible to identify recommendations from a theory-building qualitative evidence synthesis using the realist perspective. We created subsets of the interventions from primary quantitative studies based on whether they matched the recommendations or not and compared the weighted mean effect sizes of the subsets. The comparison shows a slight difference in effect sizes between the groups of studies. The study concludes that the approach enhances the applicability of the matrix approach. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Collective behaviour in vertebrates: a sensory perspective

    PubMed Central

    Collignon, Bertrand; Fernández-Juricic, Esteban

    2016-01-01

    Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions in terms of the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and constrain unrealistically their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
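
    The metric and topological interaction rules mentioned above differ only in how an individual's neighbours are chosen, which the sketch below makes explicit; the perceptual range r and neighbour count k stand in for the visual-coverage and acuity parameters and are illustrative values, not measurements from the paper.

      import numpy as np

      def metric_neighbours(positions, i, r):
          """All conspecifics within perceptual range r of individual i (metric rule)."""
          d = np.linalg.norm(positions - positions[i], axis=1)
          return np.where((d > 0) & (d <= r))[0]

      def topological_neighbours(positions, i, k):
          """The k nearest conspecifics of individual i, regardless of distance (topological rule)."""
          d = np.linalg.norm(positions - positions[i], axis=1)
          order = np.argsort(d)
          return order[1:k + 1]                 # skip index 0, which is i itself

      positions = np.random.default_rng(1).uniform(0, 50, size=(100, 2))
      print(metric_neighbours(positions, 0, r=5.0))
      print(topological_neighbours(positions, 0, k=7))

    Widening the perceptual range r (as the realistic sensory parameters do) changes which conspecifics enter the metric rule, which is why group size and polarity shift in the simulations.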

  7. Practical theories for service life prediction of critical aerospace structural components

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Monaghan, Richard C.; Jackson, Raymond H.

    1992-01-01

    A new second-order theory was developed for predicting the service lives of aerospace structural components. The predictions based on this new theory were compared with those based on the Ko first-order theory and the classical theory of service life predictions. The new theory gives very accurate service life predictions. An equivalent constant-amplitude stress cycle method was proposed for representing the random load spectrum for crack growth calculations. This method predicts the most conservative service life. The proposed use of the minimum detectable crack size, instead of the proof-load-established crack size, as the initial crack size for crack growth calculations could give a more realistic service life.
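
    The abstract does not state the crack-growth law used, so purely to illustrate how strongly the assumed initial crack size drives a predicted service life, the sketch below integrates a generic Paris-type relation da/dN = C(ΔK)^m under an equivalent constant-amplitude stress range; all constants, geometry factors, and units are placeholders, not values from the report.

      import numpy as np

      def cycles_to_failure(a0, a_crit, delta_sigma, C=1e-12, m=3.0, Y=1.0, steps=20000):
          """Integrate a Paris-type law da/dN = C*(Y*delta_sigma*sqrt(pi*a))**m
          from initial crack size a0 to critical size a_crit (placeholder units)."""
          a_grid = np.linspace(a0, a_crit, steps)
          dK = Y * delta_sigma * np.sqrt(np.pi * a_grid)    # stress intensity range per crack size
          dN_da = 1.0 / (C * dK**m)
          return np.trapz(dN_da, a_grid)                    # total cycles to grow from a0 to a_crit

      # the predicted life depends strongly on the assumed initial crack size a0
      print(cycles_to_failure(a0=0.5e-3, a_crit=25e-3, delta_sigma=100.0))
      print(cycles_to_failure(a0=2.0e-3, a_crit=25e-3, delta_sigma=100.0))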

  8. Five instruments for measuring tree height: an evaluation

    Treesearch

    Michael S. Williams; William A. Bechtold; V.J. LaBau

    1994-01-01

    Five instruments were tested for reliability in measuring tree heights under realistic conditions. Four linear models were used to determine if tree height can be measured unbiasedly over all tree sizes and if any of the instruments were more efficient in estimating tree height. The laser height finder was the only instrument to produce unbiased estimates of the true...

  9. Realistic dust and water cycles in the MarsWRF GCM using coupled two-moment microphysics

    NASA Astrophysics Data System (ADS)

    Lee, Christopher; Richardson, Mark Ian; Mischna, Michael A.; Newman, Claire E.

    2017-10-01

    Dust and water ice aerosols significantly complicate the Martian climate system because the evolution of the two aerosol fields is coupled through microphysics and because both aerosols strongly interact with visible and thermal radiation. The combination of strong forcing feedback and coupling has led to various problems in understanding and modeling of the Martian climate: in reconciling cloud abundances at different locations in the atmosphere, in generating a stable dust cycle, and in preventing numerical instability within models. Using a new microphysics model inside the MarsWRF GCM, we show that fully coupled simulations produce a more realistic simulation of the Martian climate system than dry, dust-only simulations. In the coupled simulations, interannual variability and intra-annual variability are increased, strong 'solstitial pause' features are produced in both winter high-latitude regions, and dust storm seasons are more varied, with early southern summer (Ls 180) dust storms and/or more than one storm occurring in some seasons. A new microphysics scheme was developed as part of this work and has been included in the MarsWRF model. The scheme uses split spectral/spatial size distribution numerics with adaptive bin sizes to track particle size evolution. Significantly, this scheme is highly accurate, numerically stable, and is capable of running with time steps commensurate with those of the parent atmospheric model.

  10. A generic framework to simulate realistic lung, liver and renal pathologies in CT imaging

    NASA Astrophysics Data System (ADS)

    Solomon, Justin; Samei, Ehsan

    2014-11-01

    Realistic three-dimensional (3D) mathematical models of subtle lesions are essential for many computed tomography (CT) studies focused on performance evaluation and optimization. In this paper, we develop a generic mathematical framework that describes the 3D size, shape, contrast, and contrast-profile characteristics of a lesion, as well as a method to create lesion models based on CT data of real lesions. Further, we implemented a technique to insert the lesion models into CT images in order to create hybrid CT datasets. This framework was used to create a library of realistic lesion models and corresponding hybrid CT images. The goodness of fit of the models was assessed using the coefficient of determination (R2) and the visual appearance of the hybrid images was assessed with an observer study using images of both real and simulated lesions and receiver operator characteristic (ROC) analysis. The average R2 of the lesion models was 0.80, implying that the models provide a good fit to real lesion data. The area under the ROC curve was 0.55, implying that the observers could not readily distinguish between real and simulated lesions. Therefore, we conclude that the lesion-modeling framework presented in this paper can be used to create realistic lesion models and hybrid CT images. These models could be instrumental in performance evaluation and optimization of novel CT systems.

  11. Teaching problem solving using non-routine tasks

    NASA Astrophysics Data System (ADS)

    Chong, Maureen Siew Fang; Shahrill, Masitah; Putri, Ratu Ilma Indra; Zulkardi

    2018-04-01

    Non-routine problems are related to real-life context and require some realistic considerations and real-world knowledge in order to resolve them. This study examines several activity tasks incorporated with non-routine problems through the use of an emerging mathematics framework, at two junior colleges in Brunei Darussalam. The three sampled teachers in this study assisted in selecting the topics and the lesson plan designs. They also recommended the development of the four activity tasks: incorporating the use of technology; simulation of a reality television show; designing real-life sized car park spaces for the school; and a classroom activity to design a real-life sized dustpan. Data collected from all four of the activity tasks were analyzed based on the students' group work. The findings revealed that the most effective activity task in teaching problem solving was to design a real-life sized car park. This was because the use of real data gave students the opportunity to explore, gather information and give or receive feedback on the effect of their reasons and proposed solutions. The second most effective activity task was incorporating the use of technology as it enhanced the students' understanding of the concepts learnt in the classroom. This was followed by the classroom activity that used real data as it allowed students to work and assess the results mathematically. The simulation of a television show was found to be the least effective since it was viewed as not sufficiently challenging to the students.

  12. People--things and data--ideas: bipolar dimensions?

    PubMed

    Tay, Louis; Su, Rong; Rounds, James

    2011-07-01

    We examined a longstanding assumption in vocational psychology that people-things and data-ideas are bipolar dimensions. Two minimal criteria for bipolarity were proposed and examined across 3 studies: (a) The correlation between opposite interest types should be negative; (b) after correcting for systematic responding, the correlation should be greater than -.40. In Study 1, a meta-analysis using 26 interest inventories with a sample size of 1,008,253 participants showed that meta-analytic correlations between opposite RIASEC (realistic, investigative, artistic, social, enterprising, conventional) types ranged from -.03 to .18 (corrected meta-analytic correlations ranged from -.23 to -.06). In Study 2, structural equation models (SEMs) were fit to the Interest Finder (IF; Wall, Wise, & Baker, 1996) and the Interest Profiler (IP; Rounds, Smith, Hubert, Lewis, & Rivkin, 1999) with sample sizes of 13,939 and 1,061, respectively. The correlations of opposite RIASEC types were positive, ranging from .17 to .53. No corrected correlation met the criterion of -.40 except for investigative-enterprising (r = -.67). Nevertheless, a direct estimate of the correlation between data-ideas end poles using targeted factor rotation did not reveal bipolarity. Furthermore, bipolar SEMs fit substantially worse than a multiple-factor representation of vocational interests. In Study 3, a two-way clustering solution on IF and IP respondents and items revealed a substantial number of individuals with interests in both people and things. We discuss key theoretical, methodological, and practical implications such as the structure of vocational interests, interpretation and scoring of interest measures for career counseling, and expert RIASEC ratings of occupations.

  13. Compression experiments on artificial, alpine and marine ice: implications for ice-shelf/continental interactions

    NASA Astrophysics Data System (ADS)

    Dierckx, Marie; Goossens, Thomas; Samyn, Denis; Tison, Jean-Louis

    2010-05-01

    Antarctic ice shelves are important components of continental ice dynamics, in that they control grounded ice flow towards the ocean. As such, Antarctic ice shelves are a key parameter to the stability of the Antarctic ice sheet in the context of global change. Marine ice, formed by sea water accretion beneath some ice shelves, displays distinct physical (grain textures, bubble content, ...) and chemical (salinity, isotopic composition, ...) characteristics as compared to glacier ice and sea ice. The aim is to refine Glen's flow relation (generally used for ice behaviour in deformation) under various parameters (temperature, salinity, debris, grain size ...) to improve deformation laws used in dynamic ice shelf models, which would then give more accurate and/or realistic predictions of ice shelf stability. To better understand the mechanical properties of natural ice, deformation experiments were performed on ice samples in the laboratory, using a pneumatic compression device. To do so, we developed a custom-built compression rig operated by pneumatic drives. It has been designed for performing uniaxial compression tests at constant load and under unconfined conditions. The operating pressure ranges from about 0.5 to 10 bar. This allows modifying the experimental conditions to match the conditions found at the grounding zone (in the 1 bar range). To maintain the ice at low temperature, the samples are immersed in a silicone oil bath connected to an external refrigeration system. During the experiments, the vertical displacement of the piston and the applied force are measured by sensors which are connected to a digital acquisition system. We started our experiments with artificial ice and went on with continental ice samples from glaciers in the Alps. The first results allowed us to acquire realistic mechanical data for natural ice. Ice viscosity was calculated for different types of artificial ice, using Glen's flow law, and showed the importance of impurity content and ice crystallography (grain size, ice fabrics...) on the deformation behaviour. Glacier ice was also used in our experiments. Calculations of the flow parameter A give a value of 3.10 × 10⁻¹⁶ s⁻¹ kPa⁻³ at a temperature of −10 °C. These results are in accordance with previous lab deformation studies. Compression tests show the effectiveness of the deformation unit for uniaxial strain experiments. In the future, deformation of marine ice and of the ice mélange (consisting of a melange of marine ice, broken blocks of continental ice and blown snow further metamorphosed into firn and then ice) will be studied, to obtain a comprehensive understanding of the parameters that influence the behaviour of both ice types and how they affect the overall flow of the ice shelf and potential future sea level rise.
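
    As a small numerical companion to the flow-parameter value quoted above, the sketch below evaluates Glen's flow law, strain rate = A·τⁿ, using the reported A at −10 °C and the commonly assumed exponent n = 3 (the exponent is not stated in the abstract); the stresses are placeholders in the roughly 1 bar range mentioned for the grounding zone.

      # Glen's flow law: strain rate = A * tau**n (tau in kPa, A in s^-1 kPa^-3)
      A = 3.10e-16        # flow parameter at -10 degC reported above, s^-1 kPa^-3
      n = 3               # commonly assumed stress exponent (assumption, not from the abstract)

      for tau_kpa in (50.0, 100.0, 200.0):          # deviatoric stresses, roughly 0.5-2 bar
          strain_rate = A * tau_kpa**n              # s^-1
          print(f"tau = {tau_kpa:6.1f} kPa  ->  strain rate = {strain_rate:.2e} 1/s")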

  14. Small Angle X-ray Scattering for Nanoparticle Research

    DOE PAGES

    Li, Tao; Senesi, Andrew J.; Lee, Byeongdu

    2016-04-07

    X-ray scattering is a structural characterization tool that has impacted diverse fields of study. It is unique in its ability to examine materials in real time and under realistic sample environments, enabling researchers to understand morphology at nanometer and ångström length scales using complementary small and wide angle X-ray scattering (SAXS, WAXS), respectively. Herein, we focus on the use of SAXS to examine nanoscale particulate systems. We provide a theoretical foundation for X-ray scattering, considering both form factor and structure factor, as well as the use of correlation functions, which may be used to determine a particle’s size, size distribution, shape, and organization into hierarchical structures. The theory is expanded upon with contemporary use cases. Both transmission and reflection (grazing incidence) geometries are addressed, as well as the combination of SAXS with other X-ray and non-X-ray characterization tools. Furthermore, we conclude with an examination of several key areas of research where X-ray scattering has played a pivotal role, including in situ nanoparticle synthesis, nanoparticle assembly, and in operando studies of catalysts and energy storage materials. Throughout this review we highlight the unique capabilities of X-ray scattering for structural characterization of materials in their native environment.
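
    A minimal illustration of the form-factor side of the theory outlined above: the analytical form factor of a homogeneous sphere, the simplest model used to extract a particle size from SAXS data. The radius and q-range below are arbitrary examples, not values from the review.

      import numpy as np

      def sphere_form_factor(q, radius):
          """Normalized scattering intensity P(q) of a homogeneous sphere:
          P(q) = [3 (sin(qR) - qR cos(qR)) / (qR)^3]^2."""
          qr = q * radius
          amplitude = 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3
          return amplitude**2

      q = np.linspace(1e-3, 0.5, 500)                    # scattering vector, 1/angstrom
      intensity = sphere_form_factor(q, radius=50.0)     # 50 angstrom (5 nm) particle
      print(intensity[:3])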

  16. Impact of doping on the carrier dynamics in graphene

    PubMed Central

    Kadi, Faris; Winzer, Torben; Knorr, Andreas; Malic, Ermin

    2015-01-01

    We present a microscopic study on the impact of doping on the carrier dynamics in graphene, in particular focusing on its influence on the technologically relevant carrier multiplication in realistic, doped graphene samples. Treating the time- and momentum-resolved carrier-light, carrier-carrier, and carrier-phonon interactions on the same microscopic footing, the appearance of Auger-induced carrier multiplication up to a Fermi level of 300 meV is revealed. Furthermore, we show that doping favors the so-called hot carrier multiplication occurring within one band. Our results are directly compared to recent time-resolved ARPES measurements and exhibit an excellent agreement on the temporal evolution of the hot carrier multiplication for n- and p-doped graphene. The gained insights shed light on the ultrafast carrier dynamics in realistic, doped graphene samples. PMID:26577536

  17. The VIIRS Ocean Data Simulator Enhancements and Results

    NASA Technical Reports Server (NTRS)

    Robinson, Wayne D.; Patt, Fredrick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.

    2011-01-01

    The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.

  18. The VIIRS ocean data simulator enhancements and results

    NASA Astrophysics Data System (ADS)

    Robinson, Wayne D.; Patt, Frederick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.

    2011-10-01

    The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.

  19. Nondestructive assessment of pore size in foam-based hybrid composite materials

    NASA Astrophysics Data System (ADS)

    Chen, M. Y.; Ko, R. T.

    2012-05-01

    In-situ non-destructive evaluation (NDE) during processing of high temperature polymer based hybrids offers great potential to gain close control and achieve the desired level of pore size, with low overall development cost. During the polymer curing cycle, close control over the evolution of volatiles would be beneficial to avoid the presence of pores or at least control their sizes. Traditional NDE methods cannot realistically be expected to evaluate individual pores in such components, as each pore evolves and grows during curing. However, NDE techniques offer the potential to detect and quantify the macroscopic response of many pores that are undesirable or intentionally introduced into these advanced materials. In this paper, preliminary results will be presented for nondestructive assessment of pore size in foam-based hybrid composite materials using ultrasonic techniques. Pore size was evaluated through the frequency content of the ultrasonic signal. The effects of pore size on the attenuation of ultrasound were studied. Feasibility of this method was demonstrated on two types of foams with various pore sizes.

  20. Effect of Particle Size Upon Pt/SiO2 Catalytic Cracking of n-Dodecane Under Supercritical Conditions: in situ SAXS and XANES Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sungwon; Lee, Sungsik; Kumbhalkar, Mrunmayi

    The endothermic cracking and dehydrogenation of n-dodecane is investigated over well-defined nanometer-sized platinum catalysts supported on SiO2 to study the particle size effects in the catalytic cracking reaction, with simultaneous in situ monitoring of the particle size and oxidation state of the working catalysts by in situ SAXS (small angle X-ray scattering) and XAS (X-ray absorption spectroscopy). The selectivity toward olefin products was dominant for the 1 nm platinum catalysts, whereas paraffins are dominant for the 2 nm catalysts. This reveals a strong correlation between catalytic performance and catalyst size as well as the stability of the nanoparticles under the supercritical conditions of n-dodecane. The presented results suggest that controlling the size and geometric structure of platinum nanocatalysts could lead to a fundamentally new level of understanding of nanoscale materials by monitoring the catalysts under realistic reaction conditions.

  1. Developing quantitative seed sampling protocols using simulations: A reply to comments from Guja et al. and Guerrant et al.

    USDA-ARS?s Scientific Manuscript database

    The letter is a reply to comments made on a previous publication: Optimal sampling of seeds from plant populations for ex-situ conservation of genetic biodiversity, considering realistic population structure (Hoban and Schlarbaum, 2014). The intent of the reply is to acknowledge some of the practica...

  2. Research on a bimorph piezoelectric deformable mirror for adaptive optics in optical telescope.

    PubMed

    Wang, Hairen

    2017-04-03

    We have proposed a discrete-layout bimorph piezoelectric deformable mirror (DBPDM) and developed its realistic electromechanical model. Compared with the conventional piezoelectric deformable mirror (CPDM) and the bimorph piezoelectric deformable mirror (BPDM), the DBPDM has both a larger stroke and a higher resonance frequency by integrating the strengths of the CPDM and the BPDM. To verify the advancement, a 21-element DBPDM is studied in this paper. The results suggest that the stroke of the DBPDM is larger than 10 microns and its resonance frequency is 53.3 kHz. Furthermore, numerical simulation is conducted on the deformation of the mirror using the realistic electromechanical model, and the dependence of the influence function on the radius of the push pad is analyzed.

  3. Transient finite element analysis of electric double layer using Nernst-Planck-Poisson equations with a modified Stern layer.

    PubMed

    Lim, Jongil; Whitcomb, John; Boyd, James; Varghese, Julian

    2007-01-01

    A finite element implementation of the transient nonlinear Nernst-Planck-Poisson (NPP) and Nernst-Planck-Poisson-modified Stern (NPPMS) models is presented. The NPPMS model uses multipoint constraints to account for finite ion size, resulting in realistic ion concentrations even at high surface potential. The Poisson-Boltzmann equation is used to provide a limited check of the transient models for low surface potential and dilute bulk solutions. The effects of the surface potential and bulk molarity on the electric potential and ion concentrations as functions of space and time are studied. The ability of the models to predict realistic energy storage capacity is investigated. The predicted energy is much more sensitive to surface potential than to bulk solution molarity.

  4. Application of two-component phase Doppler interferometry to the measurement of particle size, mass flux, and velocities in two-phase flows

    NASA Technical Reports Server (NTRS)

    Mcdonell, V. G.; Samuelsen, G. S.

    1989-01-01

    Two-component phase Doppler interferometry is described, along with its application for the spatially-resolved measurements of particle size, velocity, and mass flux as well as continuous phase velocity. This technique measures single particle events at a point in the flow; droplet size is deduced from the spatial phase shift of the Doppler signal. Particle size influence and discrimination of continuous and discrete phases are among issues covered. Applications are presented for four cases: an example of the discrimination of two sizes of glass beads in a jet flow; a demonstration of the discrimination of phases in a spray field; an assessment of atomizer symmetry with respect to fuel distribution; and a characterization of a droplet field in a reacting spray. It is noted that the above technique is especially powerful in delineating droplet interactions in the swirling, complex flows typical of realistic systems.

  5. Children's understanding of maternal breast cancer: A qualitative study.

    PubMed

    Huang, Xiaoyan; O'Connor, Margaret; Hu, Yan; Gao, Hongyun; Lee, Susan

    2018-06-01

    To explore how children understand their mother's diagnosis of and treatment for breast cancer. Interpretive description was adopted as the methodology in this study. Eight children aged 8-18 years, whose mothers had been diagnosed with non-terminal breast cancer, were interviewed individually, and six of them drew a picture to express their understanding of maternal breast cancer. Four themes were identified in this study: "the cancer word is scary" - children's understanding of cancer; "scars and tubes" - children's understanding of surgery; "hair loss" - children's understanding of chemotherapy, and "I can't explain it" - children's understanding of other treatments. Children's understanding of maternal breast cancer and its treatment was relatively realistic, although sometimes inaccurate. Individual evaluation and appropriate explanation are important for furthering children's understanding of their mother's illness. Future studies with larger sample sizes are needed to explore the understanding of children of different ages, in order to provide specific help for these children. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. A new method for constructing networks from binary data

    NASA Astrophysics Data System (ADS)

    van Borkulo, Claudia D.; Borsboom, Denny; Epskamp, Sacha; Blanken, Tessa F.; Boschloo, Lynn; Schoevers, Robert A.; Waldorp, Lourens J.

    2014-08-01

    Network analysis is entering fields where network structures are unknown, such as psychology and the educational sciences. A crucial step in the application of network models lies in the assessment of network structure. Current methods either have serious drawbacks or are only suitable for Gaussian data. In the present paper, we present a method for assessing network structures from binary data. Although models for binary data are infamous for their computational intractability, we present a computationally efficient model for estimating network structures. The approach, which is based on Ising models as used in physics, combines logistic regression with model selection based on a Goodness-of-Fit measure to identify relevant relationships between variables that define connections in a network. A validation study shows that this method succeeds in revealing the most relevant features of a network for realistic sample sizes. We apply our proposed method to estimate the network of depression and anxiety symptoms from symptom scores of 1108 subjects. Possible extensions of the model are discussed.
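
    A rough sketch of the node-wise strategy described above: each binary variable is regressed on all the others and the resulting coefficients define the connections of the network. Here scikit-learn's L1-penalized logistic regression and a simple symmetrization stand in for the paper's Goodness-of-Fit-based model selection, so this is an approximation of the idea rather than the authors' exact procedure; the data below are random placeholders.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def estimate_binary_network(X, C=0.3):
          """Estimate a symmetric connection matrix for binary data X (n_subjects, n_vars)
          by regressing each variable on all others with an L1 penalty."""
          n_vars = X.shape[1]
          W = np.zeros((n_vars, n_vars))
          for j in range(n_vars):
              y = X[:, j]
              others = np.delete(np.arange(n_vars), j)
              model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
              model.fit(X[:, others], y)
              W[j, others] = model.coef_[0]
          return (W + W.T) / 2.0                # symmetrize: average the two directed estimates

      X = (np.random.default_rng(2).random((1108, 10)) < 0.3).astype(int)   # placeholder data
      print(np.round(estimate_binary_network(X), 2))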

  7. Regression analysis of sparse asynchronous longitudinal data

    PubMed Central

    Cao, Hongyuan; Zeng, Donglin; Fine, Jason P.

    2015-01-01

    Summary We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus. PMID:26568699
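
    A simplified sketch of the kernel-weighting idea for a linear model with time-invariant coefficients: every (response time, covariate time) pair within a subject contributes, down-weighted by how far apart the two observation times are. The closed-form least-squares solution below is a special case written for illustration, not the general estimating-equation machinery of the paper, and the data layout is an assumed convention.

      import numpy as np

      def kernel_weighted_lsq(subjects, h):
          """Estimate beta in E[y(t)] = x(s)'beta from asynchronous data.
          subjects: list of dicts with keys 't', 'y' (response times and values) and
          's', 'x' (covariate times and an (n_s, p) covariate matrix; include a
          column of ones in x for an intercept)."""
          p = subjects[0]["x"].shape[1]
          XtX, Xty = np.zeros((p, p)), np.zeros(p)
          for sub in subjects:
              for tj, yj in zip(sub["t"], sub["y"]):
                  u = (tj - np.asarray(sub["s"])) / h
                  w = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)   # Epanechnikov kernel
                  Xw = sub["x"] * w[:, None]                             # weight covariate rows
                  XtX += Xw.T @ sub["x"]
                  Xty += Xw.T @ np.full(len(w), yj)
          return np.linalg.solve(XtX, Xty)

    The bandwidth h controls how asynchronous a response-covariate pair may be and still carry weight; shrinking it trades bias for variance, which is one reason the asynchronous estimators converge more slowly than their synchronous counterparts.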

  8. Chest x-ray generation and data augmentation for cardiovascular abnormality classification

    NASA Astrophysics Data System (ADS)

    Madani, Ali; Moradi, Mehdi; Karargyris, Alexandros; Syeda-Mahmood, Tanveer

    2018-03-01

    Medical imaging datasets are limited in size due to privacy issues and the high cost of obtaining annotations. Augmentation is a widely used practice in deep learning to enrich the data in data-limited scenarios and to avoid overfitting. However, standard augmentation methods that produce new examples of data by varying lighting, field of view, and spatial rigid transformations do not capture the biological variance of medical imaging data and could result in unrealistic images. Generative adversarial networks (GANs) provide an avenue to understand the underlying structure of image data which can then be utilized to generate new realistic samples. In this work, we investigate the use of GANs for producing chest X-ray images to augment a dataset. This dataset is then used to train a convolutional neural network to classify images for cardiovascular abnormalities. We compare our augmentation strategy with traditional data augmentation and show higher accuracy for normal vs abnormal classification in chest X-rays.
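
    A schematic of the augmentation step only, assuming a generator has already been trained on the chest X-ray data: synthetic images are appended to the real training set before the classifier is fit. The `generator` callable and its label handling are hypothetical placeholders, not the authors' implementation.

      import numpy as np

      def augment_with_gan(real_images, real_labels, generator, n_synthetic, latent_dim=100):
          """Append GAN-generated images to a real training set. `generator` is a
          placeholder callable mapping latent vectors to (images, labels) pairs,
          e.g. a conditional generator or one generator per class."""
          rng = np.random.default_rng(0)
          z = rng.normal(size=(n_synthetic, latent_dim))
          synth_images, synth_labels = generator(z)          # hypothetical trained generator
          images = np.concatenate([real_images, synth_images], axis=0)
          labels = np.concatenate([real_labels, synth_labels], axis=0)
          perm = rng.permutation(len(labels))                # shuffle real and synthetic together
          return images[perm], labels[perm]

    The downstream normal-vs-abnormal classifier then sees the mixed set exactly as it would any other training data, which is what distinguishes this strategy from rigid-transform augmentation.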

  9. Integrative assessment of multiple pesticides as risk factors for non-Hodgkin's lymphoma among men.

    PubMed

    De Roos, A J; Zahm, S H; Cantor, K P; Weisenburger, D D; Holmes, F F; Burmeister, L F; Blair, A

    2003-09-01

    An increased rate of non-Hodgkin's lymphoma (NHL) has been repeatedly observed among farmers, but identification of specific exposures that explain this observation has proven difficult. During the 1980s, the National Cancer Institute conducted three case-control studies of NHL in the midwestern United States. These pooled data were used to examine pesticide exposures in farming as risk factors for NHL in men. The large sample size (n = 3417) allowed analysis of 47 pesticides simultaneously, controlling for potential confounding by other pesticides in the model, and adjusting the estimates based on a prespecified variance to make them more stable. Reported use of several individual pesticides was associated with increased NHL incidence, including organophosphate insecticides coumaphos, diazinon, and fonofos, insecticides chlordane, dieldrin, and copper acetoarsenite, and herbicides atrazine, glyphosate, and sodium chlorate. A subanalysis of these "potentially carcinogenic" pesticides suggested a positive trend of risk with exposure to increasing numbers. Consideration of multiple exposures is important in accurately estimating specific effects and in evaluating realistic exposure scenarios.

  10. Uneven-aged management of old-growth spruce-fir forests: Cutting methods and stand structure goals for the initial entry

    Treesearch

    Robert R. Alexander; Carleton B. Edminster

    1977-01-01

    Topics discussed include: (1) cutting methods, (2) stand structure goals, which involve choosing a residual stocking level, selecting a maximum tree size, and establishing a diameter distribution using the "q" technique, and (3) harvesting and removal of trees. Examples illustrate how to determine realistic stand structures for the initial entry for...
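
    A small sketch of the "q" technique named above: with a chosen maximum retained diameter, a ratio q between the numbers of trees in adjacent diameter classes, and a residual basal-area target, the goal stand structure is a declining geometric series of trees per class. The class limits, q value, and basal-area target below are illustrative assumptions, not values from the paper.

      import numpy as np

      def q_structure(q, dbh_classes_cm, residual_ba_m2):
          """Target trees/ha per diameter class for a reverse-J structure with ratio q,
          scaled so the stand carries the chosen residual basal area (m^2/ha)."""
          dbh = np.asarray(dbh_classes_cm, dtype=float)
          relative = q ** (np.arange(len(dbh))[::-1])          # smallest class holds the most trees
          ba_per_tree = np.pi * (dbh / 200.0) ** 2             # basal area (m^2) of one tree per class
          scale = residual_ba_m2 / np.sum(relative * ba_per_tree)
          return relative * scale                              # trees per hectare in each class

      classes = np.arange(10, 61, 5)                # 10-60 cm dbh in 5 cm classes (illustrative)
      print(np.round(q_structure(q=1.3, dbh_classes_cm=classes, residual_ba_m2=25.0), 1))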

  11. What works in implementation of integrated care programs for older adults with complex needs? A realist review

    PubMed Central

    Kirst, Maritt; Im, Jennifer; Burns, Tim; Baker, G. Ross; Goldhar, Jodeme; O'Campo, Patricia; Wojtak, Anne; Wodchis, Walter P

    2017-01-01

    Abstract Purpose A realist review of the evaluative evidence was conducted on integrated care (IC) programs for older adults to identify key processes that lead to the success or failure of these programs in achieving outcomes such as reduced healthcare utilization, improved patient health, and improved patient and caregiver experience. Data sources International academic literature was searched in 12 indexed, electronic databases and gray literature through internet searches, to identify evaluative studies. Study selection Inclusion criteria included evaluative literature on integrated, long-stay health and social care programs, published between January 1980 and July 2015, in English. Data extraction Data were extracted on the study purpose, period, setting, design, population, sample size, outcomes, and study results, as well as explanations of mechanisms and contextual factors influencing outcomes. Results of data synthesis A total of 65 articles, representing 28 IC programs, were included in the review. Two context-mechanism-outcome configurations (CMOcs) were identified: (i) trusting multidisciplinary team relationships and (ii) provider commitment to and understanding of the model. Contextual factors such as strong leadership that sets clear goals and establishes an organizational culture in support of the program, along with joint governance structures, supported team collaboration and subsequent successful implementation. Furthermore, time to build an infrastructure for implementation and flexibility in implementation emerged as key processes instrumental to the success of these programs. Conclusions This review included a wide range of international evidence, and identified key processes for successful implementation of IC programs that should be considered by program planners, leaders and evaluators. PMID:28992156

  12. Effects of in situ dual ion beam (He+ and D+) irradiation with simultaneous pulsed heat loading on surface morphology evolution of tungsten-tantalum alloys

    NASA Astrophysics Data System (ADS)

    Gonderman, S.; Tripathi, J. K.; Sinclair, G.; Novakowski, T. J.; Sizyuk, T.; Hassanein, A.

    2018-02-01

    The strong thermal and mechanical properties of tungsten (W) are well suited for the harsh fusion environment. However, increasing interest in using tungsten as plasma-facing components (PFCs) has revealed several key issues. These potential roadblocks necessitate more investigation of W and other alternative W-based materials exposed to realistic fusion conditions. In this work, W and tungsten-tantalum (W-Ta) alloys were exposed to single (He+) and dual (He+ + D+) ion irradiations with simultaneous pulsed heat loading to elucidate the PFC response under more realistic conditions. Laser-only exposure revealed significantly more damage in W-Ta samples as compared to pure W samples. This was due to the difference in the mechanical properties of the two different materials. Further erosion studies were conducted to evaluate the material degradation due to transient heat loading in both the presence and absence of He+ and/or D+ ions. We concluded that erosion of PFC materials was significantly enhanced due to the presence of ion irradiation. This is important as it demonstrates that there are key synergistic effects resulting from more realistic fusion loading conditions that need to be considered when evaluating the response of plasma facing materials.

  13. Observations of GEO Debris with the Magellan 6.5-m Telescopes

    NASA Technical Reports Server (NTRS)

    Seitzer, Patrick; Burkhardt, Andrew; Cardonna, Tommaso; Lederer, Susan M.; Cowardin, Heather; Barker, Edwin S.; Abercromby, Kira J.

    2012-01-01

    Optical observations of geosynchronous orbit (GEO) debris are important to address two questions: 1. What is the distribution function of objects at GEO as a function of brightness? With some assumptions, this can be used to infer a size distribution. 2. Can we determine what the likely composition of individual GEO debris pieces is from studies of the spectral reflectance of these objects? In this paper we report on optical observations with the 6.5-m Magellan telescopes at Las Campanas Observatory in Chile that attempt to answer both questions. Imaging observations over a 0.5 degree diameter field-of-view have detected a significant population of optically faint debris candidates with R > 19th magnitude, corresponding to a size smaller than 20 cm assuming an albedo of 0.175. Many of these objects show brightness variations larger than a factor of 2, suggesting either irregular shapes or albedo variations or both. The object detection rate (per square degree per hour) shows an increase over the rate measured in the 0.6-m MODEST observations, implying an increase in the population at optically fainter levels. Assuming that the albedo distribution is the same for both samples, this corresponds to an increase in the population of smaller size debris. To study the second issue, calibrated reflectance spectroscopy has been obtained of a sample of GEO and near GEO objects with orbits in the public U.S. Space Surveillance Network catalog. With a 6.5-m telescope, the exposure times are short (30 seconds or less), and provide simultaneous wavelength coverage from 4500 to 8000 Angstroms. If the observed objects are tumbling, then simultaneous coverage and short exposure times are essential for a realistic assessment of the object's spectral signature. We will compare the calibrated spectra with lab-based measurements of simple spacecraft surfaces composed of a single material.
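
    The quoted size limit follows from relating brightness to diameter for an assumed albedo; the sketch below uses the common Lambertian-sphere approximation (with solar magnitude −26.74, GEO range, and zero phase angle as assumptions, none of them spelled out in the abstract) to show that R ≈ 19 mag with albedo 0.175 corresponds to roughly 15-20 cm.

      import numpy as np

      def debris_diameter_m(mag, range_m=3.6e7, albedo=0.175, phase_rad=0.0, m_sun=-26.74):
          """Diameter of a Lambertian sphere producing apparent magnitude `mag` at
          distance `range_m` (an assumed approximation, not the survey's exact model)."""
          phase_fn = (2.0 / (3.0 * np.pi)) * (np.sin(phase_rad)
                      + (np.pi - phase_rad) * np.cos(phase_rad))
          flux_ratio = 10.0 ** (-0.4 * (mag - m_sun))
          return 2.0 * range_m * np.sqrt(flux_ratio / (albedo * phase_fn))

      print(debris_diameter_m(19.0))     # roughly 0.15 m, consistent with "< 20 cm" above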

  14. A pilot study on body image, attractiveness and body size in Gambians living in an urban community.

    PubMed

    Siervo, M; Grey, P; Nyan, O A; Prentice, A M

    2006-06-01

    We investigated the attitudinal and perceptual components of body image and its link with body mass index (BMI) in a sample of urban Gambians. We also looked at cross-cultural differences in body image and views on attractiveness between Gambians and Americans. Four groups of 50 subjects were assessed: men 14-25y (YM); women 14-25y (YW); men 35-50y (OM); women 35-50y (OW). Socio-economic status, education, healthy lifestyle and western influences were investigated. Height and weight were measured. Body dissatisfaction was assessed with the body dissatisfaction scale of the Eating Disorder Inventory. Perceptions of body image and attractiveness were assessed using the Body Image Assessment for Obesity (BIA-O) and Figure Rating Scale (FRS). Different generations of Gambians had very different perceptions and attitudes towards obesity. Current body size was realistically perceived and largely well tolerated. Older women had a higher body discrepancy (current minus ideal body size) than other groups (p<0.001). Regression analysis showed they were not worried about their body size until they were overweight (BMI=27.8 kg/m²), whilst OM, YM and YW started to be concerned at BMIs of 22.9, 19.8 and 21.5 kg/m², respectively. A cross-cultural comparison using published data on FRS showed that Gambians were more obesity-tolerant than black and white Americans. The Gambia is a country in the early stage of demographic transition, but in urban areas there is an increase in obesity prevalence. Inherent tensions between the preservation of cultural values and traditional habits, and raising awareness of the risks of obesity, may limit health interventions to prevent weight gain.

  15. The architecture of Norway spruce ectomycorrhizae: three-dimensional models of cortical cells, fungal biomass, and interface for potential nutrient exchange.

    PubMed

    Stögmann, Bernhard; Marth, Andreas; Pernfuß, Barbara; Pöder, Reinhold

    2013-08-01

    Gathering realistic data on actual fungal biomass in ectomycorrhized fine root systems is still a matter of concern. Thus far, observations on architecture of ectomycorrhizae (ECMs) have been limited to analyses of two-dimensional (2-D) images of tissue sections. This unavoidably causes stereometrical problems that lead to inadequate assumptions about actual size of cells and their arrangement within ECM's functional compartments. Based on extensive morphological investigations of field samples, we modeled the architectural components of an average-sized Norway spruce ECM. In addition to our comprehensive and detailed quantitative data on cell sizes, we studied actual shape and size, in vivo arrangement, and potential nutrient exchange area of plant cortical cells (CCs) using computer-aided three-dimensional (3-D) reconstructions based on semithin serial sections. We extrapolated a factual fungal biomass in ECMs (Hartig net (HN) included) of 1.71 t ha⁻¹ FW (0.36 t ha⁻¹ DW) for the top 5 cm of soil for an autochthonous, montane, optimum Norway spruce stand in the Tyrolean Alps. The corresponding potential nutrient exchange area in ECMs including main axes of ECM systems, which is defined as the sum of interfaces between plant CCs and the HN, amounts to at least 3.2 × 10⁵ m² ha⁻¹. This is the first study that determines the contribution of the HN to the total fungal biomass in ECMs as well as the quantification of its contact area. Our results may stimulate future research on fungal below-ground processes and their impact on the global carbon cycle.

  16. TU-AB-BRA-04: Quantitative Radiomics: Sensitivity of PET Textural Features to Image Acquisition and Reconstruction Parameters Implies the Need for Standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nyflot, MJ; Yang, F; Byrd, D

    Purpose: Despite increased use of heterogeneity metrics for PET imaging, standards for metrics such as textural features have yet to be developed. We evaluated the quantitative variability caused by image acquisition and reconstruction parameters on PET textural features. Methods: PET images of the NEMA IQ phantom were simulated with realistic image acquisition noise. 35 features based on intensity histograms (IH), co-occurrence matrices (COM), neighborhood-difference matrices (NDM), and zone-size matrices (ZSM) were evaluated within lesions (13, 17, 22, 28, 33 mm diameter). Variability in metrics across 50 independent images was evaluated as percent difference from mean for three phantom girths (850, 1030, 1200 mm) and two OSEM reconstructions (2 iterations, 28 subsets, 5 mm FWHM filtration vs 6 iterations, 28 subsets, 8.6 mm FWHM filtration). Also, patient sample size to detect a clinical effect of 30% with Bonferroni-corrected α=0.001 and 95% power was estimated. Results: As a class, NDM features demonstrated greatest sensitivity in means (5–50% difference for medium girth and reconstruction comparisons and 10–100% for large girth comparisons). Some IH features (standard deviation, energy, entropy) had variability below 10% for all sensitivity studies, while others (kurtosis, skewness) had variability above 30%. COM and ZSM features had complex sensitivities; correlation, energy, entropy (COM) and zone percentage, short-zone emphasis, zone-size non-uniformity (ZSM) had variability less than 5% while other metrics had differences up to 30%. Trends were similar for sample size estimation; for example, coarseness, contrast, and strength required 12, 38, and 52 patients to detect a 30% effect for the small girth case but 38, 88, and 128 patients in the large girth case. Conclusion: The sensitivity of PET textural features to image acquisition and reconstruction parameters is large and feature-dependent. Standards are needed to ensure that prospective trials which incorporate textural features are properly designed to detect clinical endpoints. Supported by NIH grants R01 CA169072, U01 CA148131, NCI Contract (SAIC-Frederick) 24XS036-004, and a research contract from GE Healthcare.
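
    The abstract does not give the exact power calculation, so the sketch below shows a generic normal-approximation two-sample computation of the kind implied: the number of patients needed per group to detect a 30% relative effect at Bonferroni-corrected α = 0.001 and 95% power, for a feature whose between-image variability is expressed as a coefficient of variation. The CV values are illustrative, not the measured feature variabilities.

      import numpy as np
      from scipy.stats import norm

      def patients_needed(cv, effect=0.30, alpha=0.001, power=0.95):
          """Normal-approximation two-sample size (per group) to detect a relative
          difference `effect` in a feature with coefficient of variation `cv`
          (a generic calculation, not necessarily the authors' exact method)."""
          z_alpha = norm.ppf(1.0 - alpha / 2.0)      # Bonferroni-corrected two-sided alpha
          z_power = norm.ppf(power)
          return int(np.ceil(2.0 * ((z_alpha + z_power) * cv / effect) ** 2))

      for cv in (0.05, 0.15, 0.30):                  # illustrative feature variabilities
          print(f"CV = {cv:.2f}  ->  {patients_needed(cv)} patients per group")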

  17. Simulation of mixture microstructures via particle packing models and their direct comparison with real mixtures

    NASA Astrophysics Data System (ADS)

    Gulliver, Eric A.

    The objective of this thesis was to identify and develop techniques providing direct comparison between simulated and real packed particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop and roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and were free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The powder processing protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross-sections, and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but otherwise realistic-looking mixture microstructures. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations. Control chart analysis showed that Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered particle-size distributions or mixture composition. Control charts based on tessellation measurements were used for direct, quantitative comparisons between real and simulated mixtures. Four sets of simulated and real mixtures were examined. Data from real mixtures matched simulated data when the samples were well mixed and the particle size distributions and volume fractions of the components were identical. Analysis of mixture components that occupied less than approximately 10 vol% of the mixture was not practical unless the particle size of the component was extremely small and excellent-quality, high-resolution compositional micrographs of the real sample were available. These methods of analysis should allow future researchers to systematically evaluate and predict the impact and importance of variables such as component volume fraction and component particle size distribution as they pertain to the uniformity of powder mixture microstructures.
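
    As a rough illustration of the tessellation-based interrogation described above, the sketch below (Python with numpy/scipy; randomly generated placeholder centroids) computes Voronoi tile areas for a 2-D cross-section and derives simple mean ± 3σ control limits of the kind used to compare real and simulated mixtures. Johnson-Mehl tessellation would require a time-weighted construction not shown here.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
centroids = rng.uniform(0.0, 100.0, size=(500, 2))  # placeholder particle centres

vor = Voronoi(centroids)

def polygon_area(pts):
    """Shoelace formula for a closed 2-D polygon."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Keep only bounded tiles (regions that do not extend to infinity).
areas = []
for region_idx in vor.point_region:
    region = vor.regions[region_idx]
    if region and -1 not in region:
        areas.append(polygon_area(vor.vertices[region]))
areas = np.array(areas)

# Shewhart-style control limits on tile area, used to flag microstructural drift.
mean, sd = areas.mean(), areas.std(ddof=1)
print(f"tiles: {areas.size}, mean area: {mean:.1f}, "
      f"UCL: {mean + 3*sd:.1f}, LCL: {max(mean - 3*sd, 0):.1f}")
```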

  18. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelbaum, R.; Rowe, B.; Armstrong, R.

    2015-05-01

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.
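
    The closing statement, that additive shear biases depend linearly on PSF ellipticity, amounts to fitting c = α·e_PSF + c0 across branches. A minimal sketch with synthetic placeholder values rather than actual GREAT3 submissions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder data: PSF ellipticities and measured additive biases c = <g_obs - g_true>.
e_psf = rng.uniform(0.0, 0.1, size=40)
c_meas = 0.05 * e_psf + rng.normal(0.0, 2e-4, size=e_psf.size)  # synthetic "truth": alpha = 0.05

# Least-squares fit of the linear model c = alpha * e_psf + c0.
alpha, c0 = np.polyfit(e_psf, c_meas, deg=1)
print(f"PSF-leakage coefficient alpha ~ {alpha:.3f}, intercept c0 ~ {c0:.1e}")
```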

  19. Confirmation of saturation equilibrium conditions in crater populations

    NASA Technical Reports Server (NTRS)

    Hartmann, William K.; Gaskell, Robert W.

    1993-01-01

    We have continued work on realistic numerical models of cratered surfaces, as first reported at last year's LPSC. We confirm the saturation equilibrium level with a new, independent test. One of us has developed a realistic computer simulation of a cratered surface. The model starts with a smooth surface or fractal topography, and adds primary craters according to the cumulative power law with exponent -1.83, as observed on lunar maria and Martian plains. Each crater has an ejecta blanket with the volume of the crater, feathering out to a distance of 4 crater radii. We use the model to test the levels of saturation equilibrium reached in naturally occurring systems, by increasing crater density and observing its dependence on various parameters. In particular, we have tested to see if these artificial systems reach the level found by Hartmann on heavily cratered planetary surfaces, hypothesized to be the natural saturation equilibrium level. This year's work gives the first results of a crater population that includes secondaries. Our model 'Gaskell-4' (September, 1992) includes primaries as described above, but also includes a secondary population, defined by exponent -4. We allowed the largest secondary from each primary to be 0.10 times the size of the primary. These parameters will be changed to test their effects in future models. The model gives realistic images of a cratered surface although it appears richer in secondaries than real surfaces are. The effect of running the model toward saturation gives interesting results for the diameter distribution. Our most heavily cratered surface had the input number of primary craters reach about 0.65 times the hypothesized saturation equilibrium, but the input number rises to more than 100 times that level for secondaries below 1.4 km in size.
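
    The crater production described here can be prototyped by inverse-transform sampling of truncated power laws. The sketch below (Python) uses the quoted cumulative exponents (-1.83 for primaries, -4 for secondaries) and the 0.10 size ratio for the largest secondary; the diameter limits and the number of secondaries per primary are arbitrary placeholders, and the ejecta-blanket and topography components of the model are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_powerlaw(n, d_min, d_max, cum_exponent):
    """Draw diameters whose cumulative number N(>D) scales as D**cum_exponent (exponent < 0)."""
    b = cum_exponent
    u = rng.uniform(size=n)
    # Inverse transform for a power law truncated to [d_min, d_max].
    return (d_min**b + u * (d_max**b - d_min**b)) ** (1.0 / b)

# Primary craters: cumulative exponent -1.83, as on lunar maria and Martian plains.
primaries = sample_powerlaw(5000, d_min=1.0, d_max=100.0, cum_exponent=-1.83)

# Secondaries: cumulative exponent -4, largest secondary 0.10 x its parent primary.
secondaries = np.concatenate([
    sample_powerlaw(20, d_min=0.1, d_max=0.10 * d, cum_exponent=-4.0)
    for d in primaries if 0.10 * d > 0.1
])
print(primaries.size, "primaries,", secondaries.size, "secondaries")
```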

  20. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical bayesian approaches.

    PubMed

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm(2) to 30 cm(2), whatever were the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.
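
    Detection accuracy via ROC analysis reduces, for each method and source extent, to sweeping a threshold over per-vertex scores and integrating the resulting true/false-positive curve. A minimal numpy sketch with placeholder scores, not the study's reconstructions:

```python
import numpy as np

def roc_curve(scores, labels):
    """True/false positive rates as the detection threshold sweeps over the scores."""
    order = np.argsort(-scores)
    labels = labels[order].astype(bool)
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(~labels) / (~labels).sum()
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def auc(fpr, tpr):
    """Area under the ROC curve by trapezoidal integration."""
    return np.trapz(tpr, fpr)

rng = np.random.default_rng(3)
# Placeholder: per-vertex "source present" labels and reconstructed amplitudes.
labels = rng.uniform(size=2000) < 0.1
scores = rng.normal(0.0, 1.0, size=2000) + 1.5 * labels
fpr, tpr = roc_curve(scores, labels)
print(f"AUC ~ {auc(fpr, tpr):.2f}")
```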

  1. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    DOE PAGES

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; ...

    2015-05-11

    This study presents first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  2. MEG Source Localization of Spatially Extended Generators of Epileptic Activity: Comparing Entropic and Hierarchical Bayesian Approaches

    PubMed Central

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm2 to 30 cm2, whatever were the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered. PMID:23418485

  3. Image-Based Reverse Engineering and Visual Prototyping of Woven Cloth.

    PubMed

    Schroder, Kai; Zinke, Arno; Klein, Reinhard

    2015-02-01

    Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture cloth models, specifically when considering computer aided design of cloth. Previous methods produce highly realistic images, however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.

  4. Enhancement and degradation of the R2* relaxation rate resulting from the encapsulation of magnetic particles with hydrophilic coatings.

    PubMed

    de Haan, Hendrick W; Paquet, Chantal

    2011-12-01

    The effects of including a hydrophilic coating around the particles are studied across a wide range of particle sizes by performing Monte Carlo simulations of protons diffusing through a system of magnetic particles. A physically realistic methodology of implementing the coating by cross-boundary jump scaling and transition probabilities at the coating surface is developed. Using this formulation, the coating has three distinct impacts on the relaxation rate: an enhancement at small particle sizes, a degradation at intermediate particle sizes, and no effect at large particle sizes. These varied effects are reconciled with the underlying dephasing mechanisms by using the concept of a full dephasing zone to present a physical picture of the dephasing process with and without the coating for all sizes. The enhancement at small particle sizes is studied systematically to demonstrate the existence of an optimal ratio of diffusion coefficients inside/outside the coating to achieve maximal increase in the relaxation rate. Copyright © 2011 Wiley Periodicals, Inc.
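
    The cross-boundary jump scaling and transition probability can be illustrated with a toy 1-D random walk in which the step length is rescaled by the local diffusion coefficient and entries into the coating are accepted with a fixed probability. This is a schematic sketch under assumed parameter values, not the authors' simulation, which tracks the dephasing of many protons in 3-D around magnetized particles:

```python
import numpy as np

rng = np.random.default_rng(4)

D_OUT, D_IN = 2.3e-9, 0.5e-9    # water vs. coating diffusion coefficients (m^2/s), assumed
DT = 1e-9                       # time step (s)
R_COAT = 50e-9                  # coating outer radius (m); particle at the origin (1-D toy)

step_out = np.sqrt(2 * D_OUT * DT)
step_in = np.sqrt(2 * D_IN * DT)
p_enter = np.sqrt(D_IN / D_OUT)  # assumed transition probability at the coating surface

x = 200e-9                       # start outside the coating
for _ in range(100_000):
    inside = abs(x) < R_COAT
    trial = x + rng.choice((-1.0, 1.0)) * (step_in if inside else step_out)
    crossing = (abs(trial) < R_COAT) != inside
    if crossing and not inside and rng.uniform() > p_enter:
        continue                 # entry into the coating rejected; stay put this step
    x = trial
print(f"final position: {x*1e9:.1f} nm from the particle centre")
```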

  5. When Can Clades Be Potentially Resolved with Morphology?

    PubMed Central

    Bapst, David W.

    2013-01-01

    Morphology-based phylogenetic analyses are the only option for reconstructing relationships among extinct lineages, but often find support for conflicting hypotheses of relationships. The resulting lack of phylogenetic resolution is generally explained in terms of data quality and methodological issues, such as character selection. A previous suggestion is that sampling ancestral morphotaxa or sampling multiple taxa descended from a long-lived, unchanging lineage can also yield clades which have no opportunity to share synapomorphies. This lack of character information leads to a lack of ‘intrinsic’ resolution, an issue that cannot be solved with additional morphological data. It is unclear how often we should expect clades to be intrinsically resolvable in realistic circumstances, as intrinsic resolution must increase as taxonomic sampling decreases. Using branching simulations, I quantify intrinsic resolution across several models of morphological differentiation and taxonomic sampling. Intrinsically unresolvable clades are found to be relatively frequent in simulations of both extinct and living taxa under realistic sampling scenarios, implying that intrinsic resolution is an issue for morphology-based analyses of phylogeny. Simulations which vary the rates of sampling and differentiation were tested for their agreement to observed distributions of durations from well-sampled fossil records and also having high intrinsic resolution. This combination only occurs in those datasets when differentiation and sampling rates are both unrealistically high relative to branching and extinction rates. Thus, the poor phylogenetic resolution occasionally observed in morphological phylogenetics may result from a lack of intrinsic resolvability within groups. PMID:23638034

  6. Formation of S0 galaxies through mergers. Antitruncated stellar discs resulting from major mergers

    NASA Astrophysics Data System (ADS)

    Borlaff, Alejandro; Eliche-Moral, M. Carmen; Rodríguez-Pérez, Cristina; Querejeta, Miguel; Tapia, Trinidad; Pérez-González, Pablo G.; Zamorano, Jaime; Gallego, Jesús; Beckman, John

    2014-10-01

    Context. Lenticular galaxies (S0s) are more likely to host antitruncated (Type III) stellar discs than galaxies of later Hubble types. Major mergers are popularly considered too violent to produce these breaks. Aims: We have investigated whether or not major mergers can result in S0-like remnants with realistic antitruncated stellar discs. Methods: We have analysed 67 relaxed S0 and E/S0 remnants resulting from dissipative N-body simulations of major mergers from the GalMer database. We have simulated realistic R-band surface brightness profiles of the remnants to identify those with antitruncated stellar discs. Their inner and outer discs and the breaks have been quantitatively characterized to compare with real data. Results: Nearly 70% of our S0-like remnants are antitruncated, meaning that major mergers that result in S0s have a high probability of producing Type III stellar discs. Our remnants lie on top of the extrapolations of the observational trends (towards brighter magnitudes and higher break radii) in several photometric diagrams, because of the higher luminosities and sizes of the simulations compared to observational samples. In scale-free photometric diagrams, simulations and observations overlap and the remnants reproduce the observational trends, so the physical mechanism behind antitruncations is highly scalable. We have found novel photometric scaling relations between the characteristic parameters of the antitruncations in real S0s, which are also reproduced by our simulations. We show that the trends in all the photometric planes can be derived from three basic scaling relations that real and simulated Type III S0s fulfill: hi ∝ RbrkIII, ho ∝ RbrkIII, and μbrkIII ∝ RbrkIII, where hi and ho are the scalelengths of the inner and outer discs, and μbrkIII and RbrkIII are the surface brightness and radius of the breaks. Bars and antitruncations in real S0s are structurally unrelated phenomena according to the studied photometric planes. Conclusions: Major mergers provide a feasible mechanism to form realistic antitruncated S0 galaxies. Table 3 is available in electronic form at http://www.aanda.org
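
    For readers unfamiliar with antitruncations, a Type III profile is simply two exponential sections joined at the break radius, with the outer scalelength larger than the inner one. A small sketch with illustrative parameter values, not taken from the GalMer remnants:

```python
import numpy as np

def type_iii_profile(r, mu0_i, h_i, r_brk):
    """Surface brightness (mag/arcsec^2) of an antitruncated disc:
    inner exponential with scalelength h_i out to r_brk, then a shallower
    outer exponential with h_o > h_i, matched continuously at the break."""
    h_o = 2.0 * h_i                      # illustrative: outer disc twice as extended
    mu_inner = mu0_i + 1.086 * r / h_i   # exponential disc expressed in magnitudes
    mu_brk = mu0_i + 1.086 * r_brk / h_i
    mu_outer = mu_brk + 1.086 * (r - r_brk) / h_o
    return np.where(r <= r_brk, mu_inner, mu_outer)

r = np.linspace(0.0, 30.0, 301)          # radius in kpc
mu = type_iii_profile(r, mu0_i=20.0, h_i=3.0, r_brk=12.0)
print(f"mu at the break (12 kpc): {mu[r.searchsorted(12.0)]:.2f} mag/arcsec^2")
```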

  7. Modeling the nitrogen cycle one gene at a time

    NASA Astrophysics Data System (ADS)

    Coles, V.; Stukel, M. R.; Hood, R. R.; Moran, M. A.; Paul, J. H.; Satinsky, B.; Zielinski, B.; Yager, P. L.

    2016-02-01

    Marine ecosystem models are lagging the revolution in microbial oceanography. As a result, modeling of the nitrogen cycle has largely failed to leverage new genomic information on nitrogen cycling pathways and the organisms that mediate them. We developed a nitrogen based ecosystem model whose community is determined by randomly assigning functional genes to build each organism's "DNA". Microbes are assigned a size that sets their baseline environmental responses using allometric response curves. These responses are modified by the costs and benefits conferred by each gene in an organism's genome. The microbes are embedded in a general circulation model where environmental conditions shape the emergent population. This model is used to explore whether organisms constructed from randomized combinations of metabolic capability alone can self-organize to create realistic oceanic biogeochemical gradients. Community size spectra and chlorophyll-a concentrations emerge in the model with reasonable fidelity to observations. The model is run repeatedly with randomly-generated microbial communities and each time realistic gradients in community size spectra, chlorophyll-a, and forms of nitrogen develop. This supports the hypothesis that the metabolic potential of a community rather than the realized species composition is the primary factor setting vertical and horizontal environmental gradients. Vertical distributions of nitrogen and transcripts for genes involved in nitrification are broadly consistent with observations. Modeled gene and transcript abundance for nitrogen cycling and processing of land-derived organic material match observations along the extreme gradients in the Amazon River plume, and they help to explain the factors controlling observed variability.

  8. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    PubMed

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
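
    The τ variable referred to here is the ratio of the instantaneous optical angle to its rate of change, which approximates time to contact for a constant-velocity approach. A small sketch with illustrative geometry, not the experiment's stimuli:

```python
import numpy as np

def optical_angle(width, distance):
    """Visual angle (rad) subtended by an object of given physical width."""
    return 2.0 * np.arctan(width / (2.0 * distance))

# Illustrative approach: a 1.8 m wide vehicle closing at 10 m/s, sampled at 60 Hz.
dt, speed, width = 1.0 / 60.0, 10.0, 1.8
distances = 40.0 - speed * np.arange(0, 120) * dt        # metres

theta = optical_angle(width, distances)
theta_dot = np.gradient(theta, dt)
tau = theta / theta_dot                                  # visual tau (s)

true_ttc = distances / speed
i = np.argmin(abs(distances - 30.0))
print(f"tau at 30 m: {tau[i]:.2f} s (true TTC {true_ttc[i]:.2f} s)")
```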

  9. Can a grain size-dependent viscosity help yielding realistic seismic velocities of LLSVPs?

    NASA Astrophysics Data System (ADS)

    Schierjott, J.; Cheng, K. W.; Rozel, A.; Tackley, P. J.

    2017-12-01

    Seismic studies show two antipodal regions of low shear velocity at the core-mantle boundary (CMB), one beneath the Pacific and one beneath Africa. These regions, called Large Low Shear Velocity Provinces (LLSVPs), are thought to be thermally and chemically distinct and thus have a different density and viscosity. Whereas there is some general consensus about the density of the LLSVPs, their viscosity is still a much debated topic. So far, in numerical studies the viscosity is treated as either depth- and/or temperature-dependent, but the potential grain-size dependence of the viscosity is usually neglected. In this study we use a self-consistent convection model which includes a grain size-dependent rheology based on the approach by Rozel et al. (2011) and Rozel (2012). Further, we consider a primordial layer and a time-dependent basalt production at the surface to dynamically form the present-day chemical heterogeneities, similar to earlier studies, e.g. by Nakagawa & Tackley (2014). With this model we perform a parameter study which includes different densities and viscosities of the imposed primordial layer. We detect possible thermochemical piles based on different criteria, compute their average effective viscosity, density, rheology, and grain size, and investigate which detection criterion yields the most realistic results. Our preliminary results show that a higher density and/or viscosity of the piles is needed to keep them at the CMB. Relative to the ambient mantle, grain size is high in the piles, but due to the temperature at the CMB the viscosity is not remarkably different from that of ordinary plumes. We observe that grain size is lower if the density of the LLSVP is lower than that of our MORB material. In that case the average temperature of the LLSVP is also reduced. Interestingly, changing the reference viscosity is responsible for a change in the average viscosity of the LLSVP but not for a different average grain size. Finally, we compare the numerical results with seismological observations by computing 1D seismic velocity profiles (p-wave, shear-wave and bulk velocities) inside and outside our detected piles using thermodynamic data calculated from Perple_X.

  10. Understanding molecular motor walking along a microtubule: a thermosensitive asymmetric Brownian motor driven by bubble formation.

    PubMed

    Arai, Noriyoshi; Yasuoka, Kenji; Koishi, Takahiro; Ebisuzaki, Toshikazu; Zeng, Xiao Cheng

    2013-06-12

    The "asymmetric Brownian ratchet model", a variation of Feynman's ratchet and pawl system, is invoked to understand the kinesin walking behavior along a microtubule. The model system, consisting of a motor and a rail, can exhibit two distinct binding states, namely, the random Brownian state and the asymmetric potential state. When the system is transformed back and forth between the two states, the motor can be driven to "walk" in one direction. Previously, we suggested a fundamental mechanism, that is, bubble formation in a nanosized channel surrounded by hydrophobic atoms, to explain the transition between the two states. In this study, we propose a more realistic and viable switching method in our computer simulation of molecular motor walking. Specifically, we propose a thermosensitive polymer model with which the transition between the two states can be controlled by temperature pulses. Based on this new motor system, the stepping size and stepping time of the motor can be recorded. Remarkably, the "walking" behavior observed in the newly proposed model resembles that of the realistic motor protein. The bubble formation based motor not only can be highly efficient but also offers new insights into the physical mechanism of realistic biomolecule motors.

  11. Details of regional particle deposition and airflow structures in a realistic model of human tracheobronchial airways: two-phase flow simulation.

    PubMed

    Rahimi-Gorji, Mohammad; Gorji, Tahereh B; Gorji-Bandpy, Mofid

    2016-07-01

    In the present investigation, detailed two-phase flow modeling of airflow, transport and deposition of micro-particles (1-10µm) in a realistic tracheobronchial airway geometry based on CT scan images under various breathing conditions (i.e. 10-60l/min) was considered. Lagrangian particle tracking has been used to investigate the particle deposition patterns in a model comprising mouth up to generation G6 of tracheobronchial airways. The results demonstrated that during all breathing patterns, the maximum velocity change occurred in the narrow throat region (Larynx). Due to implementing a realistic geometry for simulations, many irregularities and bending deflections exist in the airways model. Thereby, at higher inhalation rates, these areas are prone to vortical effects which tend to entrap the inhaled particles. According to the results, deposition fraction has a direct relationship with particle aerodynamic diameter (for dp=1-10µm). Enhancing inhalation flow rate and particle size will largely increase the inertial force and consequently, more particle deposition is evident suggesting that inertial impaction is the dominant deposition mechanism in tracheobronchial airways. Copyright © 2016 Elsevier Ltd. All rights reserved.
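
    The dominance of inertial impaction can be made concrete through the particle Stokes number, Stk = ρ_p d_p² U / (18 μ D), which grows with both particle size and inhalation flow rate. The sketch below uses assumed airway dimensions and unit-density particles, not the CT-based geometry of the study:

```python
import numpy as np

RHO_P = 1000.0     # particle density (kg/m^3), assumed unit-density aerosol
MU_AIR = 1.8e-5    # dynamic viscosity of air (Pa*s)
D_AIRWAY = 0.012   # characteristic airway diameter (m), assumed trachea-scale value

def stokes_number(d_particle, flow_lpm):
    """Stk = rho_p * d_p^2 * U / (18 * mu * D) for the mean velocity U in a circular airway."""
    q = flow_lpm / 1000.0 / 60.0                      # L/min -> m^3/s
    u = q / (np.pi * (D_AIRWAY / 2.0) ** 2)           # mean velocity (m/s)
    return RHO_P * d_particle ** 2 * u / (18.0 * MU_AIR * D_AIRWAY)

for dp_um in (1, 5, 10):
    for q in (10, 60):
        print(f"d_p = {dp_um:2d} um, Q = {q:2d} L/min: Stk = {stokes_number(dp_um*1e-6, q):.3f}")
```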

  12. A novel model to assess the efficacy of steam surface pasteurization of cooked surimi gels inoculated with realistic levels of Listeria innocua.

    PubMed

    Skåra, Torstein; Valdramidis, Vasilis P; Rosnes, Jan Thomas; Noriega, Estefanía; Van Impe, Jan F M

    2014-12-01

    Steam surface pasteurization is a promising decontamination technology for reducing pathogenic bacteria in different stages of food production. The effect of the artificial inoculation type and initial microbial load, however, has not been thoroughly assessed in the context of inactivation studies. In order to optimize the efficacy of the technology, the aim of this study was to design and validate a model system for steam surface pasteurization, assessing different inoculation methods and realistic microbial levels. More specifically, the response of Listeria innocua, a surrogate organism of Listeria monocytogenes, on a model fish product, and the effect of different inoculation levels following treatments with a steam surface pasteurization system was investigated. The variation in the resulting inoculation level on the samples was too large (77%) for the contact inoculation procedure to be further considered. In contrast, the variation of a drop inoculation procedure was 17%. Inoculation with high levels showed a rapid 1-2 log decrease after 3-5 s, and then no further inactivation beyond 20 s. A low level inoculation study was performed by analysing the treated samples using a novel contact plating approach, which can be performed without sample homogenization and dilution. Using logistic regression, results from this method were used to model the binary responses of Listeria on surfaces with realistic inoculation levels. According to this model, a treatment time of 23 s will result in a 1 log reduction (for P = 0.1). Copyright © 2014 Elsevier Ltd. All rights reserved.
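
    The logistic model underlying the binary-response analysis can be sketched as follows. The data here are synthetic detections generated from an assumed dose-response curve, and the final line inverts the fitted logit for the treatment time at which the modelled probability falls to P = 0.1; the study's reported value was 23 s for a 1 log reduction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Placeholder binary outcomes (1 = Listeria detected on the treated sample, 0 = not
# detected) for steam treatment times between 0 and 40 s; synthetic, not the study's data.
t = np.repeat(np.arange(0, 41, 5), 20).astype(float)
p_true = 1.0 / (1.0 + np.exp(0.25 * (t - 18.0)))        # assumed underlying dose-response
y = (rng.uniform(size=t.size) < p_true).astype(int)

# Unpenalised fit approximated with a very weak regulariser (large C).
model = LogisticRegression(C=1e6).fit(t.reshape(-1, 1), y)
b1, b0 = model.coef_[0, 0], model.intercept_[0]

# Invert logit(P) = b0 + b1 * t for the treatment time giving probability P = 0.1.
p_target = 0.1
t_target = (np.log(p_target / (1.0 - p_target)) - b0) / b1
print(f"estimated treatment time for P = {p_target}: {t_target:.1f} s")
```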

  13. 3D mapping of buried rocks by the GPR WISDOM/ExoMars 2020

    NASA Astrophysics Data System (ADS)

    Herve, Yann; Ciarletti, Valerie; Le Gall, Alice; Quantin, Cathy; Guiffaut, Christophe; Plettemeier, Dirk

    2017-04-01

    The main objective of ExoMars 2020 is to search for signs of past and/or present life on Mars. Because these signs may be beneath the inhospitable surface of Mars, the ExoMars Rover has on board a suite of instruments aiming at characterizing the subsurface. In particular, the Rover payload includes WISDOM (Water Ice Subsurface Deposits Observation on Mars), a polarimetric ground penetrating radar designed to investigate the shallow subsurface. WISDOM is able to probe down to a depth of few meters with a resolution of few centimeters; its main objective is to provide insights into the geological context of the investigated Martian sites and to determine the most promising location to collect samples for the ExoMars drill. In this paper, we demonstrate the ability of WISDOM to locate buried rocks and to estimate their size distribution. Indeed, the rock distribution is related to the geological processes at play in the past or currently and thus provides clues to understand the geological context of the investigated site. Rocks also represent a hazard for drilling operations that WISDOM is to guide. We use a 3D FDTD code called TEMSI-FD (which takes into account the radiation pattern of the antenna system) to simulate WISDOM operations on a realistic (both in terms of dielectric properties and structure) ground. More specifically, our geoelectrical models of the Martian subsurface take into account realistic values of the complex permittivity relying on published measurements performed in laboratory on Martian analogues. Further, different distributions of buried rocks are considered based on the size-frequency distribution observed at the Mars Pathfinder landing site and on Oxia Planum, the landing site currently selected for ExoMars 2020. We will describe the algorithm we developed to automatically detect the signature of the buried rocks on radargrams. The radargrams are obtained simulating WISDOM operations along parallel and perpendicular profiles as planned for the ExoMars mission. Our ultimate goal is to show that WISDOM observations can be used to build a 3D map of the subsurface. We will also present experimental data obtained with a prototype of WISDOM to test our method.

  14. Deep Generative Models of Galaxy Images for the Calibration of the Next Generation of Weak Lensing Surveys

    NASA Astrophysics Data System (ADS)

    Lanusse, Francois; Ravanbakhsh, Siamak; Mandelbaum, Rachel; Schneider, Jeff; Poczos, Barnabas

    2017-01-01

    Weak gravitational lensing has long been identified as one of the most powerful probes to investigate the nature of dark energy. As such, weak lensing is at the heart of the next generation of cosmological surveys such as LSST, Euclid or WFIRST. One particularly critical source of systematic errors in these surveys comes from the shape measurement algorithms tasked with estimating galaxy shapes. GREAT3, the last community challenge to assess the quality of state-of-the-art shape measurement algorithms, has in particular demonstrated that all current methods are biased to various degrees and, more importantly, that these biases depend on the details of the galaxy morphologies. These biases can be measured and calibrated by generating mock observations where a known lensing signal has been introduced and comparing the resulting measurements to the ground-truth. Producing these mock observations, however, requires input galaxy images of higher resolution and S/N than the simulated survey, which typically implies acquiring extremely expensive space-based observations. The goal of this work is to train a deep generative model on already available Hubble Space Telescope data which can then be used to sample new galaxy images conditioned on parameters such as magnitude, size or redshift and exhibiting complex morphologies. Such a model allows us to inexpensively produce large sets of realistic images for calibration purposes. We implement a conditional generative model based on state-of-the-art deep learning methods and fit it to deep galaxy images from the COSMOS survey. The quality of the model is assessed by computing an extensive set of galaxy morphology statistics on the generated images. Beyond simple second moment statistics such as size and ellipticity, we apply more complex statistics specifically designed to be sensitive to disturbed galaxy morphologies. We find excellent agreement between the morphologies of real and model-generated galaxies. Our results suggest that such deep generative models represent a reliable alternative to the acquisition of expensive high quality observations for generating the calibration data needed by the next generation of weak lensing surveys.

  15. Elliptic generation of composite three-dimensional grids about realistic aircraft

    NASA Technical Reports Server (NTRS)

    Sorenson, R. L.

    1986-01-01

    An elliptic method for generating composite grids about realistic aircraft is presented. A body-conforming grid is first generated about the entire aircraft by the solution of Poisson's differential equation. This grid has relatively coarse spacing, and it covers the entire physical domain. At boundary surfaces, cell size is controlled and cell skewness is nearly eliminated by inhomogeneous terms, which are found automatically by the program. Certain regions of the grid in which high gradients are expected, and which map into rectangular solids in the computational domain, are then designated for zonal refinement. Spacing in the zonal grids is reduced by adding points with a simple, algebraic scheme. Details of the grid generation method are presented along with results of the present application, a wing-body configuration based on the F-16 fighter aircraft.

  16. Realism and Effectiveness of Robotic Moving Targets

    DTIC Science & Technology

    2017-04-01

    scenario or be manually controlled. The targets can communicate with other nearby targets, which means they can move independently, as a group, or...present a realistic three-dimensional human-sized target that can freely move with semi-autonomous control. The U.S. Army Research Institute for...Procedure: Performance and survey data were collected during multiple training exercises from Soldiers who engaged the RHTTs. Different groups

  17. Realism and Perspectivism: a Reevaluation of Rival Theories of Spatial Vision.

    NASA Astrophysics Data System (ADS)

    Thro, E. Broydrick

    1990-01-01

    My study reevaluates two theories of human space perception, a trigonometric surveying theory I call perspectivism and a "scene recognition" theory I call realism. Realists believe that retinal image geometry can supply no unambiguous information about an object's size and distance--and that, as a result, viewers can locate objects in space only by making discretionary interpretations based on familiar experience of object types. Perspectivists, in contrast, think viewers can disambiguate object sizes/distances on the basis of retinal image information alone. More specifically, they believe the eye responds to perspective image geometry with an automatic trigonometric calculation that not only fixes the directions and shapes, but also roughly fixes the sizes and distances of scene elements in space. Today this surveyor theory has been largely superceded by the realist approach, because most vision scientists believe retinal image geometry is ambiguous about the scale of space. However, I show that there is a considerable body of neglected evidence, both past and present, tending to call this scale ambiguity claim into question. I maintain that this evidence against scale ambiguity could hardly be more important, if one considers its subversive implications for the scene recognition theory that is not only today's reigning approach to spatial vision, but also the foundation for computer scientists' efforts to create space-perceiving robots. If viewers were deemed to be capable of automatic surveying calculations, the discretionary scene recognition theory would lose its main justification. Clearly, it would be difficult for realists to maintain that we viewers rely on scene recognition for space perception in spite of our ability to survey. And in reality, as I show, the surveyor theory does a much better job of describing the everyday space we viewers actually see--a space featuring stable, unambiguous relationships among scene elements, and a single horizon and vanishing point for (meter-scale) receding objects. In addition, I argue, the surveyor theory raises fewer philosophical difficulties, because it is more in harmony with our everyday concepts of material objects, human agency and the self.

  18. Rehabilitation of the psychomotor consequences of falling in an elderly population: A pilot study to evaluate feasibility and tolerability of virtual reality training.

    PubMed

    Marivan, Kevin; Boully, Clémence; Benveniste, Samuel; Reingewirtz, Serge; Rigaud, Anne-Sophie; Kemoun, Gilles; Bloch, Frédéric

    2016-01-01

    A fall in elderly subjects can lead to serious psychological consequences. These symptoms can develop into Fear of Falling with behavioural disorders comparable to PTSD that may severely limit autonomy. Virtual reality training (VRT) could be seen as a worthwhile therapeutic approach for this syndrome since it has been shown to be a useful tool for motor rehabilitation or combat-related PTSD. We thus developed a training scenario for VRT with psychomotor therapists. To test the feasibility and acceptability of VRT when used by elderly adults for fall rehabilitation. Our population of 8 patients older than 75 years, with a Mini Mental Score Examination greater than 18/30 performed sessions of VRT and answered a questionnaire on the feasibility and acceptability of it. This sample showed a highly favourable response to the prototype of VRT. They found it easy to use, enjoyed the experience, and thought it realistic and helpful. The conclusions of our study are limited by sample size. However, applications with VRT can offer the potential of an acceptable technique for elderly subjects. The next step will be to show the efficacy of this method in the management of post-fall PTSD.

  19. Quantifying the extent to which index event biases influence large genetic association studies.

    PubMed

    Yaghootkar, Hanieh; Bancks, Michael P; Jones, Sam E; McDaid, Aaron; Beaumont, Robin; Donnelly, Louise; Wood, Andrew R; Campbell, Archie; Tyrrell, Jessica; Hocking, Lynne J; Tuke, Marcus A; Ruth, Katherine S; Pearson, Ewan R; Murray, Anna; Freathy, Rachel M; Munroe, Patricia B; Hayward, Caroline; Palmer, Colin; Weedon, Michael N; Pankow, James S; Frayling, Timothy M; Kutalik, Zoltán

    2017-03-01

    As genetic association studies increase in size to 100 000s of individuals, subtle biases may influence conclusions. One possible bias is 'index event bias' (IEB) that appears due to the stratification by, or enrichment for, disease status when testing associations between genetic variants and a disease-associated trait. We aimed to test the extent to which IEB influences some known trait associations in a range of study designs and provide a statistical framework for assessing future associations. Analyzing data from 113 203 non-diabetic UK Biobank participants, we observed three (near TCF7L2, CDKN2AB and CDKAL1) overestimated (body mass index (BMI) decreasing) and one (near MTNR1B) underestimated (BMI increasing) associations among 11 type 2 diabetes risk alleles (at P < 0.05). IEB became even stronger when we tested a type 2 diabetes genetic risk score composed of these 11 variants (-0.010 standard deviations BMI per allele, P = 5 × 10(-4)), which was confirmed in four additional independent studies. Similar results emerged when examining the effect of blood pressure increasing alleles on BMI in normotensive UK Biobank samples. Furthermore, we demonstrated that, under realistic scenarios, common disease alleles would become associated at P < 5 × 10(-8) with disease-related traits through IEB alone, if disease prevalence in the sample differs appreciably from the background population prevalence. For example, some hypertension and type 2 diabetes alleles will be associated with BMI in sample sizes of >500 000 if the prevalence of those diseases differs by >10% from the background population. In conclusion, IEB may result in false positive or negative genetic associations in very large studies stratified or strongly enriched for/against disease cases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
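
    Index event (collider) bias of this kind can be reproduced with a small simulation: a variant that raises disease risk but has no direct effect on BMI acquires a spurious, BMI-decreasing association once the analysis is restricted to disease-free individuals. The effect sizes and prevalence below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500_000

# Risk allele count (frequency 0.3) with NO direct effect on BMI.
g = rng.binomial(2, 0.3, size=n)
bmi = rng.normal(0.0, 1.0, size=n)                   # standardised BMI

# Disease liability depends on both the allele and BMI (made-up effect sizes).
liability = 0.3 * g + 0.5 * bmi + rng.normal(0.0, 1.0, size=n)
disease = liability > np.quantile(liability, 0.90)   # ~10% prevalence

def beta_g_bmi(mask):
    """Per-allele BMI effect from a univariate least-squares regression."""
    gg, bb = g[mask], bmi[mask]
    return np.cov(gg, bb)[0, 1] / np.var(gg, ddof=1)

print(f"full sample:       beta = {beta_g_bmi(np.ones(n, bool)): .4f} SD/allele")
print(f"disease-free only: beta = {beta_g_bmi(~disease): .4f} SD/allele")
```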

  20. Benchmarking protein classification algorithms via supervised cross-validation.

    PubMed

    Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor

    2008-04-24

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic estimates of the classifier performance than do random cross-validation schemes. A combination of supervised and random sampling was used to construct model datasets, suitable for algorithm comparison.

  1. Percolation galaxy groups and clusters in the sdss redshift survey: identification, catalogs, and the multiplicity function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlind, Andreas A.; Frieman, Joshua A.; Weinberg, David H.

    2006-01-01

    We identify galaxy groups and clusters in volume-limited samples of the SDSS redshift survey, using a redshift-space friends-of-friends algorithm. We optimize the friends-of-friends linking lengths to recover galaxy systems that occupy the same dark matter halos, using a set of mock catalogs created by populating halos of N-body simulations with galaxies. Extensive tests with these mock catalogs show that no combination of perpendicular and line-of-sight linking lengths is able to yield groups and clusters that simultaneously recover the true halo multiplicity function, projected size distribution, and velocity dispersion. We adopt a linking length combination that yields, for galaxy groups with ten or more members: a group multiplicity function that is unbiased with respect to the true halo multiplicity function; an unbiased median relation between the multiplicities of groups and their associated halos; a spurious group fraction of less than ~1%; a halo completeness of more than ~97%; the correct projected size distribution as a function of multiplicity; and a velocity dispersion distribution that is ~20% too low at all multiplicities. These results hold over a range of mock catalogs that use different input recipes of populating halos with galaxies. We apply our group-finding algorithm to the SDSS data and obtain three group and cluster catalogs for three volume-limited samples that cover 3495.1 square degrees on the sky. We correct for incompleteness caused by fiber collisions and survey edges, and obtain measurements of the group multiplicity function, with errors calculated from realistic mock catalogs. These multiplicity function measurements provide a key constraint on the relation between galaxy populations and dark matter halos.
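
    A redshift-space friends-of-friends grouping with separate perpendicular and line-of-sight linking lengths reduces to a union-find over galaxy pairs whose separations fall inside both thresholds. The sketch below uses plain Cartesian coordinates with the z-axis standing in for the line of sight; the survey geometry, anisotropic redshift-space distances, and linking-length values are simplified placeholders:

```python
import numpy as np
from scipy.spatial import cKDTree

def friends_of_friends(xyz, b_perp, b_para):
    """Group points whose transverse separation < b_perp AND line-of-sight
    (z-axis here) separation < b_para; returns a group id per point."""
    n = len(xyz)
    parent = np.arange(n)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    # Candidate pairs within the largest possible 3-D separation, then filter.
    tree = cKDTree(xyz)
    for i, j in tree.query_pairs(r=np.hypot(b_perp, b_para)):
        d_perp = np.hypot(*(xyz[i, :2] - xyz[j, :2]))
        d_para = abs(xyz[i, 2] - xyz[j, 2])
        if d_perp < b_perp and d_para < b_para:
            parent[find(i)] = find(j)       # union the two groups

    return np.array([find(i) for i in range(n)])

rng = np.random.default_rng(7)
xyz = rng.uniform(0.0, 100.0, size=(2000, 3))            # placeholder positions
groups = friends_of_friends(xyz, b_perp=1.0, b_para=3.0)
sizes = np.bincount(groups)
print("multiplicity histogram (groups with N>=2):", np.bincount(sizes[sizes >= 2]))
```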

  2. A Modified Experimental Hut Design for Studying Responses of Disease-Transmitting Mosquitoes to Indoor Interventions: The Ifakara Experimental Huts

    PubMed Central

    Okumu, Fredros O.; Moore, Jason; Mbeyela, Edgar; Sherlock, Mark; Sangusangu, Robert; Ligamba, Godfrey; Russell, Tanya; Moore, Sarah J.

    2012-01-01

    Differences between individual human houses can confound results of studies aimed at evaluating indoor vector control interventions such as insecticide treated nets (ITNs) and indoor residual insecticide spraying (IRS). Specially designed and standardised experimental huts have historically provided a solution to this challenge, with an added advantage that they can be fitted with special interception traps to sample entering or exiting mosquitoes. However, many of these experimental hut designs have a number of limitations, for example: 1) inability to sample mosquitoes on all sides of huts, 2) increased likelihood of live mosquitoes flying out of the huts, leaving mainly dead ones, 3) difficulties of cleaning the huts when a new insecticide is to be tested, and 4) the generally small size of the experimental huts, which can misrepresent actual local house sizes or airflow dynamics in the local houses. Here, we describe a modified experimental hut design - The Ifakara Experimental Huts- and explain how these huts can be used to more realistically monitor behavioural and physiological responses of wild, free-flying disease-transmitting mosquitoes, including the African malaria vectors of the species complexes Anopheles gambiae and Anopheles funestus, to indoor vector control-technologies including ITNs and IRS. Important characteristics of the Ifakara experimental huts include: 1) interception traps fitted onto eave spaces and windows, 2) use of eave baffles (panels that direct mosquito movement) to control exit of live mosquitoes through the eave spaces, 3) use of replaceable wall panels and ceilings, which allow safe insecticide disposal and reuse of the huts to test different insecticides in successive periods, 4) the kit format of the huts allowing portability and 5) an improved suite of entomological procedures to maximise data quality. PMID:22347415

  3. Flocculation kinetics and aggregate structure of kaolinite mixtures in laminar tube flow.

    PubMed

    Vaezi G, Farid; Sanders, R Sean; Masliyah, Jacob H

    2011-03-01

    Flocculation is commonly used in various solid-liquid separation processes in chemical and mineral industries to separate desired products or to treat waste streams. This paper presents an experimental technique to study flocculation processes in laminar tube flow. This approach allows for more realistic estimation of the shear rate to which an aggregate is exposed, as compared to more complicated shear fields (e.g. stirred tanks). A direct sampling method is used to minimize the effect of sampling on the aggregate structure. A combination of aggregate settling velocity and image analysis was used to quantify the structure of the aggregate. Aggregate size, density, and fractal dimension were found to be the most important aggregate structural parameters. The two methods used to determine aggregate fractal dimension were in good agreement. The effects of advective flow through an aggregate's porous structure and transition-regime drag coefficient on the evaluation of aggregate density were considered. The technique was applied to investigate the flocculation kinetics and the evolution of the aggregate structure of kaolin particles with an anionic flocculant under conditions similar to those of oil sands fine tailings. Aggregates were formed using a well controlled two-stage aggregation process. Detailed statistical analysis was performed to investigate the establishment of dynamic equilibrium condition in terms of aggregate size and density evolution. An equilibrium steady state condition was obtained within 90 s of the start of flocculation; after which no further change in aggregate structure was observed. Although longer flocculation times inside the shear field could conceivably cause aggregate structure conformation, statistical analysis indicated that this did not occur for the studied conditions. The results show that the technique and experimental conditions employed here produce aggregates having a well-defined, reproducible structure. Copyright © 2011. Published by Elsevier Inc.

  4. Assessing methane emission estimation methods based on atmospheric measurements from oil and gas production using LES simulations

    NASA Astrophysics Data System (ADS)

    Saide, P. E.; Steinhoff, D.; Kosovic, B.; Weil, J.; Smith, N.; Blewitt, D.; Delle Monache, L.

    2017-12-01

    There are a wide variety of methods that have been proposed and used to estimate methane emissions from oil and gas production by using air composition and meteorology observations in conjunction with dispersion models. Although there has been some verification of these methodologies using controlled releases and concurrent atmospheric measurements, it is difficult to assess the accuracy of these methods for more realistic scenarios considering factors such as terrain, emissions from multiple components within a well pad, and time-varying emissions representative of typical operations. In this work we use a large-eddy simulation (LES) to generate controlled but realistic synthetic observations, which can be used to test multiple source term estimation methods, also known as an Observing System Simulation Experiment (OSSE). The LES is based on idealized simulations of the Weather Research & Forecasting (WRF) model at 10 m horizontal grid-spacing covering an 8 km by 7 km domain with terrain representative of a region located in the Barnett shale. Well pads are setup in the domain following a realistic distribution and emissions are prescribed every second for the components of each well pad (e.g., chemical injection pump, pneumatics, compressor, tanks, and dehydrator) using a simulator driven by oil and gas production volume, composition and realistic operational conditions. The system is setup to allow assessments under different scenarios such as normal operations, during liquids unloading events, or during other prescribed operational upset events. Methane and meteorology model output are sampled following the specifications of the emission estimation methodologies and considering typical instrument uncertainties, resulting in realistic observations (see Figure 1). We will show the evaluation of several emission estimation methods including the EPA Other Test Method 33A and estimates using the EPA AERMOD regulatory model. We will also show source estimation results from advanced methods such as variational inverse modeling, and Bayesian inference and stochastic sampling techniques. Future directions including other types of observations, other hydrocarbons being considered, and assessment of additional emission estimation methods will be discussed.

  5. Computational studies of photoluminescence from disordered nanocrystalline systems

    NASA Astrophysics Data System (ADS)

    John, George

    2000-03-01

    The size (d) dependence of emission energies from semiconductor nanocrystallites has been shown to follow an effective exponent (d^-β) determined by the disorder in the system (V. Ranjan, V. A. Singh and G. C. John, Phys. Rev. B 58, 1158 (1998)). Our earlier calculation was based on a simple quantum confinement model assuming a normal distribution of crystallites. This model is now extended to study the effects of realistic systems with a lognormal distribution in particle size, accounting for carrier hopping and nonradiative transitions. Computer simulations of this model, performed using the Microcal Origin software, can explain several conflicting experimental results reported in the literature.

  6. Characteristics of Teeth: A Review of Size, Shape, Composition, and Appearance of Maxillary Anterior Teeth.

    PubMed

    McGowan, Steve

    2016-03-01

    Although digital technologies play an increasingly integral role in dentistry, there remains a need for dental professionals to understand the fundamentals of tooth anatomy, form, occlusion, and color science. In this article, the size, shape, composition, and appearance of maxillary anterior teeth will be discussed from esthetic and functional perspectives. A total of 600 extracted maxillary incisors were studied: 200 each of central incisors, lateral incisors, and cuspids. The purpose of the article is to exhibit and discuss factors that make teeth unique and diverse. Understanding these aspects of teeth aids dental professionals in more effectively creating realistic and highly esthetic restorations for patients.

  7. Sample similarity analysis of angles of repose based on experimental results for DEM calibration

    NASA Astrophysics Data System (ADS)

    Tan, Yuan; Günthner, Willibald A.; Kessler, Stephan; Zhang, Lu

    2017-06-01

As a fundamental material property, the particle-particle friction coefficient is usually calculated based on the angle of repose, which can be obtained experimentally. In the present study, the bottomless cylinder test was carried out to investigate this friction coefficient for a biomass material, willow chips. Because of its irregular particle shape and varying particle size distribution, calculating a single angle of repose becomes less applicable and less decisive. In previous studies, only one section of the uneven slope is chosen in most cases, although standard methods for defining a representative section are scarce. Hence, we present an efficient and reliable method based on 3D scanning, which digitizes the surface of the heap and generates its point cloud. Two tangent lines of any selected section are then calculated through linear least-squares regression (LLSR), from which the left and right angles of repose of the pile are derived. Next, a number of sections are stochastically selected and the calculations repeated correspondingly to obtain a sample of angles, which is plotted in Cartesian coordinates as a scatter diagram. Subsequently, different samples are acquired through various selections of sections, and similarity and difference analyses of these samples verify the reliability of the proposed method. The results provide a realistic criterion for reducing the deviation between experiment and simulation caused by the random selection of a single angle, and will be compared with simulation results in future work.
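
    The per-section angle calculation lends itself to a short sketch. The Python fragment below, with a hypothetical repose_angles function and synthetic section data, fits the left and right flanks of one 2-D heap section by linear least squares and converts the slopes to angles; the apex-exclusion criterion is an assumption for illustration only.

      import numpy as np

      def repose_angles(x, z, apex_fraction=0.1):
          """Left and right angles of repose (degrees) of one 2-D heap section.
          x, z are horizontal position and height of the surface points. Points
          within apex_fraction of the apex height are excluded so that only the
          roughly linear flanks enter the least-squares fit (an assumed criterion)."""
          x, z = np.asarray(x, float), np.asarray(z, float)
          i_apex = np.argmax(z)
          cut = z.max() * (1.0 - apex_fraction)
          left = (x < x[i_apex]) & (z < cut)
          right = (x > x[i_apex]) & (z < cut)
          angles = []
          for mask in (left, right):
              slope, _ = np.polyfit(x[mask], z[mask], 1)   # linear least-squares fit
              angles.append(np.degrees(np.arctan(abs(slope))))
          return tuple(angles)

      # Synthetic section of a heap with slightly different flank slopes (illustrative only)
      xl = np.linspace(-1.0, 0.0, 50); xr = np.linspace(0.0, 1.0, 50)
      x = np.concatenate([xl, xr])
      z = np.concatenate([0.55 * (xl + 1.0), 0.50 * (1.0 - xr)]) + np.random.normal(0, 0.01, 100)
      print(repose_angles(x, z))   # roughly (29 deg, 27 deg) for these slopes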

  8. Population variability complicates the accurate detection of climate change responses.

    PubMed

    McCain, Christy; Szewczyk, Tim; Bracy Knight, Kevin

    2016-06-01

    The rush to assess species' responses to anthropogenic climate change (CC) has underestimated the importance of interannual population variability (PV). Researchers assume sampling rigor alone will lead to an accurate detection of response regardless of the underlying population fluctuations of the species under consideration. Using population simulations across a realistic, empirically based gradient in PV, we show that moderate to high PV can lead to opposite and biased conclusions about CC responses. Between pre- and post-CC sampling bouts of modeled populations as in resurvey studies, there is: (i) A 50% probability of erroneously detecting the opposite trend in population abundance change and nearly zero probability of detecting no change. (ii) Across multiple years of sampling, it is nearly impossible to accurately detect any directional shift in population sizes with even moderate PV. (iii) There is up to 50% probability of detecting a population extirpation when the species is present, but in very low natural abundances. (iv) Under scenarios of moderate to high PV across a species' range or at the range edges, there is a bias toward erroneous detection of range shifts or contractions. Essentially, the frequency and magnitude of population peaks and troughs greatly impact the accuracy of our CC response measurements. Species with moderate to high PV (many small vertebrates, invertebrates, and annual plants) may be inaccurate 'canaries in the coal mine' for CC without pertinent demographic analyses and additional repeat sampling. Variation in PV may explain some idiosyncrasies in CC responses detected so far and urgently needs more careful consideration in design and analysis of CC responses. © 2016 John Wiley & Sons Ltd.
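
    The core of the argument can be reproduced with a small Monte Carlo sketch: a population with no true trend but lognormal interannual variability is "surveyed" once before and once after a hypothetical change, and the frequency of spurious apparent declines or increases is recorded. The coefficient-of-variation values, the 20% change threshold, and the function name are illustrative assumptions, not the authors' simulation design.

      import numpy as np

      def spurious_trend_probability(cv, n_reps=100000, threshold=0.2, seed=1):
          """Probability that two survey bouts of a population with *no* true trend
          but lognormal interannual variability (coefficient of variation cv)
          differ by more than `threshold` (20%) in either direction."""
          rng = np.random.default_rng(seed)
          sigma = np.sqrt(np.log(1.0 + cv**2))   # lognormal shape parameter
          mu = -0.5 * sigma**2                   # keeps the mean abundance at 1
          pre = rng.lognormal(mu, sigma, n_reps)
          post = rng.lognormal(mu, sigma, n_reps)
          ratio = post / pre
          p_decline = np.mean(ratio < 1.0 - threshold)
          p_increase = np.mean(ratio > 1.0 + threshold)
          return p_decline, p_increase

      for cv in (0.2, 0.5, 1.0):                 # low to high population variability
          print(cv, spurious_trend_probability(cv))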

  9. Subnanometer and nanometer catalysts, method for preparing size-selected catalysts

    DOEpatents

Vajda, Stefan; Pellin, Michael J.; Elam, Jeffrey W. [Elmhurst, IL]; Marshall, Christopher L. [Naperville, IL]; Winans, Randall A. [Downers Grove, IL]; Meiwes-Broer, Karl-Heinz [Roggentin, GR]

    2012-04-03

Highly uniform cluster based nanocatalysts supported on technologically relevant supports were synthesized for reactions of top industrial relevance. The Pt-cluster based catalysts outperformed the very best reported ODHP catalyst in both activity (by up to two orders of magnitude higher turn-over frequencies) and in selectivity. The results clearly demonstrate that highly dispersed ultra-small Pt clusters precisely localized on high-surface area supports can lead to affordable new catalysts for highly efficient and economic propene production, including considerably simplified separation of the final product. The combined GISAXS-mass spectrometry provides an excellent tool to monitor the evolution of size and shape of the nanocatalyst in action under realistic conditions. Also provided are sub-nanometer gold and sub-nanometer to few nm size-selected silver catalysts which possess size dependent tunable catalytic properties in the epoxidation of alkenes. The invented size-selected cluster deposition provides a unique tool to tune material properties in an atom-by-atom fashion; the clusters can be stabilized by protective overcoats.

  10. Subnanometer and nanometer catalysts, method for preparing size-selected catalysts

    DOEpatents

Vajda, Stefan [Lisle, IL]; Pellin, Michael J. [Naperville, IL]; Elam, Jeffrey W. [Elmhurst, IL]; Marshall, Christopher L. [Naperville, IL]; Winans, Randall A. [Downers Grove, IL]; Meiwes-Broer, Karl-Heinz [Roggentin, GR]

    2012-03-27

Highly uniform cluster based nanocatalysts supported on technologically relevant supports were synthesized for reactions of top industrial relevance. The Pt-cluster based catalysts outperformed the very best reported ODHP catalyst in both activity (by up to two orders of magnitude higher turn-over frequencies) and in selectivity. The results clearly demonstrate that highly dispersed ultra-small Pt clusters precisely localized on high-surface area supports can lead to affordable new catalysts for highly efficient and economic propene production, including considerably simplified separation of the final product. The combined GISAXS-mass spectrometry provides an excellent tool to monitor the evolution of size and shape of the nanocatalyst in action under realistic conditions. Also provided are sub-nanometer gold and sub-nanometer to few nm size-selected silver catalysts which possess size dependent tunable catalytic properties in the epoxidation of alkenes. The invented size-selected cluster deposition provides a unique tool to tune material properties in an atom-by-atom fashion; the clusters can be stabilized by protective overcoats.

  11. A stochastic method for stand-alone photovoltaic system sizing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabral, Claudia Valeria Tavora; Filho, Delly Oliveira; Martins, Jose Helvecio

Photovoltaic systems utilize solar energy to generate electrical energy to meet load demands. Optimal sizing of these systems includes the characterization of solar radiation. Solar radiation at the Earth's surface has random characteristics and has been the focus of various academic studies. The objective of this study was to stochastically analyze parameters involved in the sizing of photovoltaic generators and develop a methodology for sizing of stand-alone photovoltaic systems. Energy storage for isolated systems and solar radiation were analyzed stochastically due to their random behavior. For the development of the proposed methodology, stochastic analyses were studied, including the Markov chain and the beta probability density function. The obtained results were compared with those from the deterministic Sandia method for stand-alone sizing, relative to which the stochastic model presented more reliable values. Both models present advantages and disadvantages; however, the stochastic one is more complex and provides more reliable and realistic results. (author)

  12. Resolving Size Distribution of Black Carbon Internally Mixed With Snow: Impact on Snow Optical Properties and Albedo

    NASA Astrophysics Data System (ADS)

    He, Cenlin; Liou, Kuo-Nan; Takano, Yoshi

    2018-03-01

    We develop a stochastic aerosol-snow albedo model that explicitly resolves size distribution of aerosols internally mixed with various snow grains. We use the model to quantify black carbon (BC) size effects on snow albedo and optical properties for BC-snow internal mixing. Results show that BC-induced snow single-scattering coalbedo enhancement and albedo reduction decrease by a factor of 2-3 with increasing BC effective radii from 0.05 to 0.25 μm, while polydisperse BC results in up to 40% smaller visible single-scattering coalbedo enhancement and albedo reduction compared to monodisperse BC with equivalent effective radii. We further develop parameterizations for BC size effects for application to climate models. Compared with a realistic polydisperse assumption and observed shifts to larger BC sizes in snow, respectively, assuming monodisperse BC and typical atmospheric BC effective radii could lead to overestimates of 24% and 40% in BC-snow albedo forcing averaged over different BC and snow conditions.

  13. Particle size and shape distributions of hammer milled pine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westover, Tyler Lott; Matthews, Austin Colter; Williams, Christopher Luke

    2015-04-01

Particle size and shape distributions impact particle heating rates and diffusion of volatized gases out of particles during fast pyrolysis conversion, and consequently must be modeled accurately in order for computational pyrolysis models to produce reliable results for bulk solid materials. For this milestone, lodge pole pine chips were ground using a Thomas-Wiley #4 mill using two screen sizes in order to produce two representative materials that are suitable for fast pyrolysis. For the first material, a 6 mm screen was employed in the mill and for the second material, a 3 mm screen was employed in the mill. Both materials were subjected to RoTap sieve analysis, and the distributions of the particle sizes and shapes were determined using digital image analysis. The results of the physical analysis will be fed into computational pyrolysis simulations to create models of materials with realistic particle size and shape distributions. This milestone was met on schedule.

  14. Experimentally testing and assessing the predictive power of species assembly rules for tropical canopy ants

    PubMed Central

    Fayle, Tom M; Eggleton, Paul; Manica, Andrea; Yusah, Kalsum M; Foster, William A

    2015-01-01

    Understanding how species assemble into communities is a key goal in ecology. However, assembly rules are rarely tested experimentally, and their ability to shape real communities is poorly known. We surveyed a diverse community of epiphyte-dwelling ants and found that similar-sized species co-occurred less often than expected. Laboratory experiments demonstrated that invasion was discouraged by the presence of similarly sized resident species. The size difference for which invasion was less likely was the same as that for which wild species exhibited reduced co-occurrence. Finally we explored whether our experimentally derived assembly rules could simulate realistic communities. Communities simulated using size-based species assembly exhibited diversities closer to wild communities than those simulated using size-independent assembly, with results being sensitive to the combination of rules employed. Hence, species segregation in the wild can be driven by competitive species assembly, and this process is sufficient to generate observed species abundance distributions for tropical epiphyte-dwelling ants. PMID:25622647
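
    A toy version of size-based assembly can be sketched as follows: invaders drawn from a regional species pool establish only if no resident lies within a critical size difference, and the resulting community is compared with size-independent assembly. The pool, the size threshold, and the function name are hypothetical; the published simulations are considerably richer.

      import numpy as np

      def assemble(pool_sizes, n_attempts=500, min_size_diff=0.15, size_based=True, seed=2):
          """Sequentially draw invaders from a regional pool (body sizes on a log scale).
          With size-based assembly an invader establishes only if every resident differs
          in size by at least `min_size_diff`; otherwise invasion always succeeds.
          Returns the list of established species."""
          rng = np.random.default_rng(seed)
          residents = []
          for _ in range(n_attempts):
              s = rng.choice(pool_sizes)
              if s in residents:
                  continue
              if size_based and any(abs(s - r) < min_size_diff for r in residents):
                  continue                      # repelled by a similar-sized resident
              residents.append(s)
          return residents

      pool = np.round(np.random.default_rng(0).uniform(0.0, 2.0, 60), 3)   # hypothetical log10 sizes
      print(len(assemble(pool, size_based=True)), len(assemble(pool, size_based=False)))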

  15. Spatial Pattern of Cell Damage in Tissue from Heavy Ions

    NASA Technical Reports Server (NTRS)

    Ponomarev, Artem L.; Huff, Janice L.; Cucinotta, Francis A.

    2007-01-01

A new Monte Carlo algorithm was developed that can model the passage of heavy ions in tissue, and their action on the cellular matrix, for 2- or 3-dimensional cases. The build-up of secondaries such as projectile fragments, target fragments, other light fragments, and delta-rays was simulated. Cells were modeled as a cell culture monolayer in one example, where the data were taken directly from microscopy (2-d cell matrix). A simple model of tissue was given as abstract spheres closely approximating real cell geometries (3-d cell matrix), and a realistic model of tissue was proposed based on microscopy images. Image segmentation was used to identify cells in an irradiated cell culture monolayer, or in slices of tissue. The cells were then inserted into the model box pixel by pixel. In the case of cell monolayers (2-d), the image size may exceed the modeled box size. Such an image is moved with respect to the box in order to sample as many cells as possible. In the case of the simple tissue (3-d), the tissue box is modeled with periodic boundary conditions, which extrapolate the technique to macroscopic volumes of tissue. For real tissue, specific spatial patterns for cell apoptosis and necrosis are expected. The cell patterns were modeled based on action cross sections for apoptosis and necrosis estimated from BNL data and other experimental data.

  16. Bouncing behavior of microscopic dust aggregates

    NASA Astrophysics Data System (ADS)

    Seizinger, A.; Kley, W.

    2013-03-01

    Context. Bouncing collisions of dust aggregates within the protoplanetary disk may have a significant impact on the growth process of planetesimals. Yet, the conditions that result in bouncing are not very well understood. Existing simulations studying the bouncing behavior used aggregates with an artificial, very regular internal structure. Aims: Here, we study the bouncing behavior of sub-mm dust aggregates that are constructed applying different sample preparation methods. We analyze how the internal structure of the aggregate alters the collisional outcome and we determine the influence of aggregate size, porosity, collision velocity, and impact parameter. Methods: We use molecular dynamics simulations where the individual aggregates are treated as spheres that are made up of several hundred thousand individual monomers. The simulations are run on graphic cards (GPUs). Results: Statistical bulk properties and thus bouncing behavior of sub-mm dust aggregates depend heavily on the preparation method. In particular, there is no unique relation between the average volume filling factor and the coordination number of the aggregate. Realistic aggregates bounce only if their volume filling factor exceeds 0.5 and collision velocities are below 0.1 ms-1. Conclusions: For dust particles in the protoplanetary nebula we suggest that the bouncing barrier may not be such a strong handicap in the growth phase of dust agglomerates, at least in the size range of ≈100 μm.

  17. The Joint Effects of Background Selection and Genetic Recombination on Local Gene Genealogies

    PubMed Central

    Zeng, Kai; Charlesworth, Brian

    2011-01-01

    Background selection, the effects of the continual removal of deleterious mutations by natural selection on variability at linked sites, is potentially a major determinant of DNA sequence variability. However, the joint effects of background selection and genetic recombination on the shape of the neutral gene genealogy have proved hard to study analytically. The only existing formula concerns the mean coalescent time for a pair of alleles, making it difficult to assess the importance of background selection from genome-wide data on sequence polymorphism. Here we develop a structured coalescent model of background selection with recombination and implement it in a computer program that efficiently generates neutral gene genealogies for an arbitrary sample size. We check the validity of the structured coalescent model against forward-in-time simulations and show that it accurately captures the effects of background selection. The model produces more accurate predictions of the mean coalescent time than the existing formula and supports the conclusion that the effect of background selection is greater in the interior of a deleterious region than at its boundaries. The level of linkage disequilibrium between sites is elevated by background selection, to an extent that is well summarized by a change in effective population size. The structured coalescent model is readily extendable to more realistic situations and should prove useful for analyzing genome-wide polymorphism data. PMID:21705759

  18. The joint effects of background selection and genetic recombination on local gene genealogies.

    PubMed

    Zeng, Kai; Charlesworth, Brian

    2011-09-01

    Background selection, the effects of the continual removal of deleterious mutations by natural selection on variability at linked sites, is potentially a major determinant of DNA sequence variability. However, the joint effects of background selection and genetic recombination on the shape of the neutral gene genealogy have proved hard to study analytically. The only existing formula concerns the mean coalescent time for a pair of alleles, making it difficult to assess the importance of background selection from genome-wide data on sequence polymorphism. Here we develop a structured coalescent model of background selection with recombination and implement it in a computer program that efficiently generates neutral gene genealogies for an arbitrary sample size. We check the validity of the structured coalescent model against forward-in-time simulations and show that it accurately captures the effects of background selection. The model produces more accurate predictions of the mean coalescent time than the existing formula and supports the conclusion that the effect of background selection is greater in the interior of a deleterious region than at its boundaries. The level of linkage disequilibrium between sites is elevated by background selection, to an extent that is well summarized by a change in effective population size. The structured coalescent model is readily extendable to more realistic situations and should prove useful for analyzing genome-wide polymorphism data.

  19. How to infer relative fitness from a sample of genomic sequences.

    PubMed

    Dayarian, Adel; Shraiman, Boris I

    2014-07-01

    Mounting evidence suggests that natural populations can harbor extensive fitness diversity with numerous genomic loci under selection. It is also known that genealogical trees for populations under selection are quantifiably different from those expected under neutral evolution and described statistically by Kingman's coalescent. While differences in the statistical structure of genealogies have long been used as a test for the presence of selection, the full extent of the information that they contain has not been exploited. Here we demonstrate that the shape of the reconstructed genealogical tree for a moderately large number of random genomic samples taken from a fitness diverse, but otherwise unstructured, asexual population can be used to predict the relative fitness of individuals within the sample. To achieve this we define a heuristic algorithm, which we test in silico, using simulations of a Wright-Fisher model for a realistic range of mutation rates and selection strength. Our inferred fitness ranking is based on a linear discriminator that identifies rapidly coalescing lineages in the reconstructed tree. Inferred fitness ranking correlates strongly with actual fitness, with a genome in the top 10% ranked being in the top 20% fittest with false discovery rate of 0.1-0.3, depending on the mutation/selection parameters. The ranking also enables us to predict the genotypes that future populations inherit from the present one. While the inference accuracy increases monotonically with sample size, samples of 200 nearly saturate the performance. We propose that our approach can be used for inferring relative fitness of genomes obtained in single-cell sequencing of tumors and in monitoring viral outbreaks. Copyright © 2014 by the Genetics Society of America.

  20. JPRS Report West Europe

    DTIC Science & Technology

    1988-08-08


  1. Use of methods for specifying the target difference in randomised controlled trial sample size calculations: Two surveys of trialists' practice.

    PubMed

    Cook, Jonathan A; Hislop, Jennifer M; Altman, Doug G; Briggs, Andrew H; Fayers, Peter M; Norrie, John D; Ramsay, Craig R; Harvey, Ian M; Vale, Luke D

    2014-06-01

    Central to the design of a randomised controlled trial (RCT) is a calculation of the number of participants needed. This is typically achieved by specifying a target difference, which enables the trial to identify a difference of a particular magnitude should one exist. Seven methods have been proposed for formally determining what the target difference should be. However, in practice, it may be driven by convenience or some other informal basis. It is unclear how aware the trialist community is of these formal methods or whether they are used. To determine current practice regarding the specification of the target difference by surveying trialists. Two surveys were conducted: (1) Members of the Society for Clinical Trials (SCT): participants were invited to complete an online survey through the society's email distribution list. Respondents were asked about their awareness, use of, and willingness to recommend methods; (2) Leading UK- and Ireland-based trialists: the survey was sent to UK Clinical Research Collaboration registered Clinical Trials Units, Medical Research Council UK Hubs for Trial Methodology Research, and the Research Design Services of the National Institute for Health Research. This survey also included questions about the most recent trial developed by the respondent's group. Survey 1: Of the 1182 members on the SCT membership email distribution list, 180 responses were received (15%). Awareness of methods ranged from 69 (38%) for health economic methods to 162 (90%) for pilot study. Willingness to recommend among those who had used a particular method ranged from 56% for the opinion-seeking method to 89% for the review of evidence-base method. Survey 2: Of the 61 surveys sent out, 34 (56%) responses were received. Awareness of methods ranged from 33 (97%) for the review of evidence-base and pilot methods to 14 (41%) for the distribution method. The highest level of willingness to recommend among users was for the anchor method (87%). Based upon the most recent trial, the target difference was usually one viewed as important by a stakeholder group, mostly also viewed as a realistic difference given the interventions under evaluation, and sometimes one that led to an achievable sample size. The response rates achieved were relatively low despite the surveys being short, well presented, and having utilised reminders. Substantial variations in practice exist with awareness, use, and willingness to recommend methods varying substantially. The findings support the view that sample size calculation is a more complex process than would appear to be the case from trial reports and protocols. Guidance on approaches for sample size estimation may increase both awareness and use of appropriate formal methods. © The Author(s), 2014.

  2. Petrophysical and transport parameters evolution during acid percolation through structurally different limestones

    NASA Astrophysics Data System (ADS)

    Martinez Perez, Laura; Luquot, Linda

    2017-04-01

Processes affecting geological media often show complex and unpredictable behavior due to the presence of heterogeneities. This remains problematic when facing contaminant transport problems, in the CO2 storage industry, or when dealing with the mechanisms underlying natural processes in which chemical reactions occur during the percolation of a fluid not equilibrated with the rock (e.g. karst formation, seawater intrusion). To understand the mechanisms taking place in a porous medium as a result of this water-rock interaction, we need to know the flow parameters that control them, and how they evolve with time as a result of that interaction. This is fundamental to ensure realistic predictions of the behavior of natural systems in response to reactive transport processes. We investigate the coupled influence of structural and hydrodynamic heterogeneities in limestone rock samples, tracking their variations during chemical reactions. To do so we use laboratory petrophysical techniques such as helium porosimetry, gas permeability, centrifuge, electrical resistivity and sonic wave measurements to obtain the parameters that characterize flow within the rock matrix (porosity, permeability, retention curve and pore size distribution, electrical conductivity, formation factor, cementation index and tortuosity) before and after percolation experiments. We built an experimental setup that allows us to inject acid brine into core samples under well controlled conditions, monitor changes in hydrodynamic properties and obtain the chemical composition of the injected solution at different stages. 3D rock images were also acquired before and after the experiments using a micro-CT to locate the alteration processes and perform an accurate analysis of the structural changes. Two limestones with distinct textural classification and thus contrasting transport properties have been used in the laboratory experiments: a crinoid limestone and an oolithic limestone. Core samples were 1 inch in diameter and varied from 0.5 to 2 inches in length. Experiments were performed at room temperature, 8 bar of total pressure and 3 bar of PCO2. The acidic fluid was injected at constant flow rates ranging from 0.4 mL/min to 6.7 mL/min, depending on the rock typology and sample length. As expected, limestone dissolution occurred during the different percolation experiments, porosity and permeability increased and sonic wave propagation speed decreased, showing an increase in the degree of heterogeneity of the rocks. The integration of all these parameters measured at different stages of dissolution provides contrasting and realistic geochemical, hydrodynamic and structural parameters to improve numerical simulations.

  3. Prospective power calculations for the Four Lab study of a multigenerational reproductive/developmental toxicity rodent bioassay using a complex mixture of disinfection by-products in the low-response region.

    PubMed

    Dingus, Cheryl A; Teuschler, Linda K; Rice, Glenn E; Simmons, Jane Ellen; Narotsky, Michael G

    2011-10-01

    In complex mixture toxicology, there is growing emphasis on testing environmentally representative doses that improve the relevance of results for health risk assessment, but are typically much lower than those used in traditional toxicology studies. Traditional experimental designs with typical sample sizes may have insufficient statistical power to detect effects caused by environmentally relevant doses. Proper study design, with adequate statistical power, is critical to ensuring that experimental results are useful for environmental health risk assessment. Studies with environmentally realistic complex mixtures have practical constraints on sample concentration factor and sample volume as well as the number of animals that can be accommodated. This article describes methodology for calculation of statistical power for non-independent observations for a multigenerational rodent reproductive/developmental bioassay. The use of the methodology is illustrated using the U.S. EPA's Four Lab study in which rodents were exposed to chlorinated water concentrates containing complex mixtures of drinking water disinfection by-products. Possible experimental designs included two single-block designs and a two-block design. Considering the possible study designs and constraints, a design of two blocks of 100 females with a 40:60 ratio of control:treated animals and a significance level of 0.05 yielded maximum prospective power (~90%) to detect pup weight decreases, while providing the most power to detect increased prenatal loss.
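
    One simple, hedged way to approximate the kind of power calculation described (observations non-independent because pups are nested within litters) is simulation on litter means, as in the Python sketch below. The variance components, effect size, and litter size are placeholders rather than the Four Lab study's actual inputs, and the published analysis uses a different, analytic treatment of non-independence.

      import numpy as np
      from scipy import stats

      def power_pup_weight(n_control=40, n_treated=60, pups_per_litter=10,
                           effect=-0.05, sd_litter=0.06, sd_pup=0.10,
                           alpha=0.05, n_sim=2000, seed=3):
          """Simulation-based power to detect a decrease in mean pup weight when
          pups are clustered within litters. Analysis is a two-sample t-test on
          litter means, one simple way to respect the non-independence of pups.
          All variance components and the effect size are hypothetical."""
          rng = np.random.default_rng(seed)
          hits = 0
          for _ in range(n_sim):
              def litter_means(n, shift):
                  litter = rng.normal(shift, sd_litter, n)[:, None]          # random litter effects
                  pups = litter + rng.normal(0.0, sd_pup, (n, pups_per_litter))
                  return pups.mean(axis=1)
              t, p = stats.ttest_ind(litter_means(n_treated, effect),
                                     litter_means(n_control, 0.0))
              hits += (p < alpha) and (t < 0)                                 # count detected decreases
          return hits / n_sim

      print(power_pup_weight())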

  4. Prospective Power Calculations for the Four Lab Study of A Multigenerational Reproductive/Developmental Toxicity Rodent Bioassay Using A Complex Mixture of Disinfection By-Products in the Low-Response Region

    PubMed Central

    Dingus, Cheryl A.; Teuschler, Linda K.; Rice, Glenn E.; Simmons, Jane Ellen; Narotsky, Michael G.

    2011-01-01

    In complex mixture toxicology, there is growing emphasis on testing environmentally representative doses that improve the relevance of results for health risk assessment, but are typically much lower than those used in traditional toxicology studies. Traditional experimental designs with typical sample sizes may have insufficient statistical power to detect effects caused by environmentally relevant doses. Proper study design, with adequate statistical power, is critical to ensuring that experimental results are useful for environmental health risk assessment. Studies with environmentally realistic complex mixtures have practical constraints on sample concentration factor and sample volume as well as the number of animals that can be accommodated. This article describes methodology for calculation of statistical power for non-independent observations for a multigenerational rodent reproductive/developmental bioassay. The use of the methodology is illustrated using the U.S. EPA’s Four Lab study in which rodents were exposed to chlorinated water concentrates containing complex mixtures of drinking water disinfection by-products. Possible experimental designs included two single-block designs and a two-block design. Considering the possible study designs and constraints, a design of two blocks of 100 females with a 40:60 ratio of control:treated animals and a significance level of 0.05 yielded maximum prospective power (~90%) to detect pup weight decreases, while providing the most power to detect increased prenatal loss. PMID:22073030

  5. Dietary Exposure of Fathead Minnows to the Explosives TNT and RDX and to the Pesticide DDT using Contaminated Invertebrates

    PubMed Central

    Houston, Jerre G.; Lotufo, Guilherme R.

    2005-01-01

    Explosive compounds have been released into the environment during manufacturing, handling, and usage procedures. These compounds have been found to persist in the environment and potentially promote detrimental biological effects. The lack of research on bioaccumulation and bioconcentration and especially dietary transfer on aquatic life has resulted in challenges in assessing ecological risks. The objective of this study was to investigate the potential trophic transfer of the explosive compounds 2,4,6-trinitrotoluene (TNT) and hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) using a realistic freshwater prey/predator model and using dichlorodiphenyltrichloroethane (DDT), a highly bioaccumulative compound, to establish relative dietary uptake potential. The oligochaete worm Lumbriculus variegatus was exposed to 14C-labeled TNT, RDX or DDT for 5 hours in water, frozen in meal-size packages and subsequently fed to individual juvenile fathead minnows (Pimephales promelas). Fish were sampled for body residue determination on days 1, 2, 3, 4, 7, and 14 following an 8-hour gut purging period. Extensive metabolism of the parent compound in worms occurred for TNT but not for RDX and DDT. Fish body residue remained relatively unchanged over time for TNT and RDX, but did not approach steady-state concentration for DDT during the exposure period. The bioaccumulation factor (concentration in fish relative to concentration in worms) was 0.018, 0.010, and 0.422 g/g for TNT, RDX and DDT, respectively, confirming the expected relatively low bioaccumulative potential for TNT and RDX through the dietary route. The experimental design was deemed successful in determining the potential for trophic transfer of organic contaminants via a realistic predator/prey exposure scenario. PMID:16705829

  6. Diffusion tensor tracking of neuronal fiber pathways in the living human brain

    NASA Astrophysics Data System (ADS)

    Lori, Nicolas Francisco

    2001-11-01

    The technique of diffusion tensor tracking (DTT) is described, in which diffusion tensor magnetic resonance imaging (DT-MRI) data are processed to allow the visualization of white matter (WM) tracts in a living human brain. To illustrate the methods, a detailed description is given of the physics of DT-MRI, the structure of the DT-MRI experiment, the computer tools that were developed to visualize WM tracts, the anatomical consistency of the obtained WM tracts, and the accuracy and precision of DTT using computer simulations. When presenting the physics of DT-MRI, a completely quantum-mechanical view of DT-MRI is given where some of the results are new. Examples of anatomical tracts viewed using DTT are presented, including the genu and the splenium of the corpus callosum, the ventral pathway with its amygdala connection highlighted, the geniculo- calcarine tract separated into anterior and posterior parts, the geniculo-calcarine tract defined using functional magnetic resonance imaging (MRI), and U- fibers. In the simulation, synthetic DT-MRI data were constructed that would be obtained for a cylindrical WM tract with a helical trajectory surrounded by gray matter. Noise was then added to the synthetic DT-MRI data, and DTT trajectories were calculated using the noisy data (realistic tracks). Simulated DTT errors were calculated as the vector distance between the realistic tracks and the ideal trajectory. The simulation tested the effects of a comprehensive set of experimental conditions, including voxel size, data sampling, data averaging, type of tract tissue, tract diameter and type of tract trajectory. Simulated DTT accuracy and precision were typically below the voxel dimension, and precision was compatible with the experimental results.

  7. Hardware-In-The-Loop Testing of Continuous Control Algorithms for a Precision Formation Flying Demonstration Mission

    NASA Technical Reports Server (NTRS)

    Naasz, Bo J.; Burns, Richard D.; Gaylor, David; Higinbotham, John

    2004-01-01

    A sample mission sequence is defined for a low earth orbit demonstration of Precision Formation Flying (PFF). Various guidance navigation and control strategies are discussed for use in the PFF experiment phases. A sample PFF experiment is implemented and tested in a realistic Hardware-in-the-Loop (HWIL) simulation using the Formation Flying Test Bed (FFTB) at NASA's Goddard Space Flight Center.

  8. Realistic sampling of amino acid geometries for a multipolar polarizable force field

    PubMed Central

    Hughes, Timothy J.; Cardamone, Salvatore

    2015-01-01

    The Quantum Chemical Topological Force Field (QCTFF) uses the machine learning method kriging to map atomic multipole moments to the coordinates of all atoms in the molecular system. It is important that kriging operates on relevant and realistic training sets of molecular geometries. Therefore, we sampled single amino acid geometries directly from protein crystal structures stored in the Protein Databank (PDB). This sampling enhances the conformational realism (in terms of dihedral angles) of the training geometries. However, these geometries can be fraught with inaccurate bond lengths and valence angles due to artefacts of the refinement process of the X‐ray diffraction patterns, combined with experimentally invisible hydrogen atoms. This is why we developed a hybrid PDB/nonstationary normal modes (NM) sampling approach called PDB/NM. This method is superior over standard NM sampling, which captures only geometries optimized from the stationary points of single amino acids in the gas phase. Indeed, PDB/NM combines the sampling of relevant dihedral angles with chemically correct local geometries. Geometries sampled using PDB/NM were used to build kriging models for alanine and lysine, and their prediction accuracy was compared to models built from geometries sampled from three other sampling approaches. Bond length variation, as opposed to variation in dihedral angles, puts pressure on prediction accuracy, potentially lowering it. Hence, the larger coverage of dihedral angles of the PDB/NM method does not deteriorate the predictive accuracy of kriging models, compared to the NM sampling around local energetic minima used so far in the development of QCTFF. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26235784
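
    The kriging step itself can be illustrated with a minimal Gaussian-process regressor: an RBF kernel over a couple of geometric features (here a hypothetical dihedral angle and bond length) predicts a scalar property standing in for an atomic multipole moment. The kernel, fixed hyperparameters, and training data are assumptions; the actual QCTFF machinery is far more elaborate.

      import numpy as np

      def kriging_fit(X, y, length_scale=1.0, nugget=1e-8):
          """Simple kriging (Gaussian-process) regression with an RBF kernel.
          Returns a predictor for new feature vectors. Hyperparameters are fixed
          here; in practice they would be optimised against the training data."""
          X = np.asarray(X, float); y = np.asarray(y, float)
          def k(A, B):
              d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
              return np.exp(-0.5 * d2 / length_scale**2)
          K = k(X, X) + nugget * np.eye(len(X))
          alpha = np.linalg.solve(K, y)                  # weights of the training points
          return lambda Xnew: k(np.asarray(Xnew, float), X) @ alpha

      # Hypothetical training set: (dihedral angle in rad, bond length in Angstrom) -> property
      X = np.array([[0.1, 1.52], [0.8, 1.53], [1.6, 1.55], [2.4, 1.54], [3.0, 1.52]])
      y = np.array([0.21, 0.25, 0.33, 0.28, 0.22])
      predict = kriging_fit(X, y, length_scale=0.8)
      print(predict([[1.2, 1.54]]))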

  9. Impact of Damping Uncertainty on SEA Model Response Variance

    NASA Technical Reports Server (NTRS)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  10. Bimetallic Ag-Pt Sub-nanometer Supported Clusters as Highly Efficient and Robust Oxidation Catalysts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Negreiros, Fabio R.; Halder, Avik; Yin, Chunrong

A combined experimental and theoretical investigation of Ag-Pt sub-nanometer clusters as heterogeneous catalysts in the CO -> CO2 reaction (COox) is presented. Ag9Pt2 and Ag9Pt3 clusters are size-selected in the gas phase, deposited on an ultrathin amorphous alumina support, and tested as catalysts experimentally under realistic conditions and by first-principles simulations at realistic coverage. In situ GISAXS/TPRx demonstrates that the clusters do not sinter or deactivate even after prolonged exposure to reactants at high temperature, and present comparable, extremely high COox catalytic efficiency. Such high activity and stability are ascribed to a synergic role of Ag and Pt in ultranano-aggregates, in which Pt anchors the clusters to the support and binds and activates two CO molecules, while Ag binds and activates O2, and Ag/Pt surface proximity disfavors poisoning by CO or oxidized species.

  11. The role of starch and saliva in tribology studies and the sensory perception of protein-added yogurts.

    PubMed

    Morell, Pere; Chen, Jianshe; Fiszman, Susana

    2017-02-22

    Increasing the protein content of yogurts would be a good strategy for enhancing their satiating ability. However, the addition of protein can affect product palatability, contributing astringency or an inhomogeneous texture. Increasingly, studies mimicking oral tribology and oral lubrication have been attracting interest among food researchers because of their link with oral texture sensations. In the present study, four double-protein stirred yogurts were prepared by adding extra skimmed milk powder (MP) or whey protein concentrate (WPC) and by adding a physically modified starch to each (samples MPS and WPCS, respectively) to increase the consistency of the yogurts. The lubricating properties of the four yogurts were examined by tribological methods with the aim of relating these properties to the sensory perception described by flash profiling. Samples were also analysed after mixing with saliva. The tribology results clearly showed that addition of starch reduced the friction coefficient values regardless of the type of protein. Saliva addition produced a further decrease in the friction coefficient values in all the samples. Consequently, adding saliva is recommended when performing tribology measurements of foods in order to give a more realistic picture. The sensory results confirmed that the addition of starch reduced the astringent sensation, especially in sample WPC, while the MP and MPS samples were creamier and smoother. On the other hand, the astringency of sample WPC was not explained by the tribology results. Since this sample was described as "grainy", "gritty", "rough", "acid" and "sour", further studies are necessary to investigate the role of the number, size, shape and distribution of particles in yogurt samples, their role in astringency perception and their interaction with the perception of the tastes mentioned. Oral tribology has shown itself to be an in vitro technique that may aid a better understanding of the dynamics of in-mouth lubrication and the physical mechanisms underlying texture and mouthfeel perception.

  12. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit under-estimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
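
    The two strategies compared in the study can be mimicked on a simple two-way contingency test: scale the full-sample chi-square by n_target/n_original, or recompute it on an actual random subsample. The data-generating model and sample sizes below are illustrative assumptions, not the study's simulation design.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)

      # Hypothetical "large sample": two weakly associated binary variables, n = 21,000
      n_full = 21000
      x = rng.integers(0, 2, n_full)
      y = (rng.random(n_full) < (0.48 + 0.04 * x)).astype(int)   # small true association

      def chi2_of(xs, ys):
          table = np.zeros((2, 2))
          np.add.at(table, (xs, ys), 1)                          # build the 2x2 contingency table
          return stats.chi2_contingency(table, correction=False)[0]

      chi2_full = chi2_of(x, y)

      for n_target in (5000, 1000, 200):
          adjusted = chi2_full * n_target / n_full               # sample-size adjustment
          idx = rng.choice(n_full, n_target, replace=False)      # actual random subsample
          subsample = chi2_of(x[idx], y[idx])
          print(n_target, round(adjusted, 2), round(subsample, 2))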

  13. Optimists or realists? How ants allocate resources in making reproductive investments.

    PubMed

    Enzmann, Brittany L; Nonacs, Peter

    2018-04-24

    Parents often face an investment trade-off between either producing many small or fewer large offspring. When environments vary predictably, the fittest parental solution matches available resources by varying only number of offspring and never optimal individual size. However when mismatches occur often between parental expectations and true resource levels, dynamic models like multifaceted parental investment (MFPI) and parental optimism (PO) both predict offspring size can vary significantly. MFPI is a "realist" strategy: parents assume future environments of average richness. When resources exceed expectations and it is too late to add more offspring, the best-case solution increases investment per individual. Brood size distributions therefore track the degree of mismatch from right-skewed around an optimal size (slight underestimation of resources) to left-skewed around a maximal size (gross underestimation). Conversely, PO is an "optimist" strategy: parents assume maximally good resource futures and match numbers to that situation. Normal or lean years do not affect "core" brood as costs primarily fall on excess "marginal" siblings who die or experience stunted growth (producing left-skewed distributions). Investment patterns supportive of both MFPI and PO models have been observed in nature, but studies that directly manipulate food resources to test predictions are lacking. Ant colonies produce many offspring per reproductive cycle and are amenable to experimental manipulation in ways that can differentiate between MFPI and PO investment strategies. Colonies in a natural population of a harvester ant (Pogonomyrmex salinus) were protein-supplemented over 2 years, and mature sexual offspring were collected annually prior to their nuptial flight. Several results support either MFPI or PO in terms of patterns in offspring size distributions and how protein differentially affected male and female production. Unpredicted by either model, however, is that supplementation affected distributions more strongly across years than within (e.g., small females are significantly rarer in the year after colonies receive protein). Parental investment strategies in P. salinus vary dynamically across years and conditions. Finding that past conditions can more strongly affect reproductive decisions than current ones, however, is not addressed by models of parental investment. © 2018 The Authors. Journal of Animal Ecology © 2018 British Ecological Society.

  14. Dynamic statistical optimization of GNSS radio occultation bending angles: advanced algorithm and performance analysis

    NASA Astrophysics Data System (ADS)

    Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.

    2015-08-01

    We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
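
    The statistical-optimization step at the heart of such schemes can be written compactly: the optimized profile is the background plus a gain, built from the background and observation error covariances, applied to the observation-minus-background departure. The exponential correlation model, uncertainty levels, and profile below are assumptions for illustration; the actual processing estimates these quantities from data rather than prescribing them.

      import numpy as np

      def statistically_optimize(y_obs, x_bg, R, B):
          """Optimal (minimum-variance) combination of an observed profile y_obs with
          a background profile x_bg, given observation and background error covariance
          matrices R and B:  x = x_bg + B (B + R)^-1 (y_obs - x_bg)."""
          gain = B @ np.linalg.inv(B + R)
          return x_bg + gain @ (y_obs - x_bg)

      def cov(sigma, corr_len, z):
          """Covariance matrix from a std-dev profile and an exponential vertical
          correlation model (an assumed, simplified shape)."""
          dz = np.abs(z[:, None] - z[None, :])
          return np.outer(sigma, sigma) * np.exp(-dz / corr_len)

      # Hypothetical bending-angle profile on a 50-80 km altitude grid
      z = np.linspace(50e3, 80e3, 31)
      x_bg = 1e-5 * np.exp(-(z - 50e3) / 7000.0)                 # smooth background profile
      y_obs = x_bg * (1 + 0.3 * np.random.default_rng(5).standard_normal(len(z)))
      R = cov(0.3 * x_bg, 2000.0, z)                             # noisy observations
      B = cov(0.1 * x_bg, 6000.0, z)                             # tighter background errors
      print(statistically_optimize(y_obs, x_bg, R, B)[:3])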

  15. Progress on Discrete Fracture Network models with implications on the predictions of permeability and flow channeling structure

    NASA Astrophysics Data System (ADS)

    Darcel, C.; Davy, P.; Le Goc, R.; Maillot, J.; Selroos, J. O.

    2017-12-01

We present progress on Discrete Fracture Network (DFN) flow modeling, including realistic advanced DFN spatial structures and local fracture transmissivity properties, through an application to the Forsmark site in Sweden. DFN models are a framework to combine fracture datasets from different sources and scales and to interpolate them by combining statistical distributions and stereological relations. The resulting DFN upscaling function - the size density distribution - is a model component key to extrapolating fracture size densities between data gaps, from borehole core up to site scale. Another important feature of DFN models lies in the spatial correlations between fractures, with as yet unevaluated consequences for flow predictions. Indeed, although common Poisson (i.e. spatially random) models are widely used, they do not reflect the geological evidence for more complex structures. To model them, we define a DFN growth process from kinematic rules for nucleation, growth and stopping conditions. It mimics in a simplified way the geological fracturing processes and produces DFN characteristics - both the upscaling function and the spatial correlations - fully consistent with field observations. DFN structures are first compared for constant transmissivities. Flow simulations for the kinematic and equivalent Poisson DFN models show striking differences: with the kinematic DFN, connectivity and permeability are significantly smaller, down to a difference of one order of magnitude, and flow is much more channelized. Further flow analyses are performed with more realistic transmissivity distribution conditions (sealed parts, relations to fracture sizes, orientations and in-situ stress field). The relative importance of the overall DFN structure in the final flow predictions is discussed.

  16. Work Reviews Can Reduce Turnover and Improve Performance

    ERIC Educational Resources Information Center

    Raphael, Michael A.

    1975-01-01

    Establishing realistic expectations about the company and the job should be the duty of management. It would appear that in today's social-cultural climate the work sample preview is a good tool for management to "tell it like it is" to the prospective employee. (Author)

  17. Hydrogeophysical Assessment of Aquifer Uncertainty Using Simulated Annealing driven MRF-Based Stochastic Joint Inversion

    NASA Astrophysics Data System (ADS)

    Oware, E. K.

    2017-12-01

Geophysical quantification of hydrogeological parameters typically involves limited noisy measurements coupled with inadequate understanding of the target phenomenon. Hence, a deterministic solution is unrealistic in light of the largely uncertain inputs. Stochastic imaging (SI), in contrast, provides multiple equiprobable realizations that enable probabilistic assessment of aquifer properties in a realistic manner. Generation of geologically realistic prior models is central to SI frameworks. Higher-order statistics for representing prior geological features in SI are, however, usually borrowed from training images (TIs), which may produce undesirable outcomes if the TIs are unrepresentative of the target structures. The Markov random field (MRF)-based SI strategy provides a data-driven alternative to TI-based SI algorithms. In the MRF-based method, the simulation of spatial features is guided by Gibbs energy (GE) minimization. Local configurations with smaller GEs have higher likelihood of occurrence and vice versa. The parameters of the Gibbs distribution for computing the GE are estimated from the hydrogeophysical data, thereby enabling the generation of site-specific structures in the absence of reliable TIs. In Metropolis-like SI methods, the variance of the transition probability controls the jump size. The procedure is a standard Markov chain Monte Carlo (McMC) method when a constant variance is assumed, and becomes simulated annealing (SA) when the variance (cooling temperature) is allowed to decrease gradually with time. We observe that in certain problems, the large variance typically employed at the beginning to hasten burn-in may not be ideal for sampling at the equilibrium state. The power of SA stems from its flexibility to adaptively scale the variance at different stages of the sampling. Degeneration of results was reported in a previous implementation of the MRF-based SI strategy based on a constant variance. Here, we present an updated version of the algorithm based on SA that appears to resolve the degeneration problem with seemingly improved results. We illustrate the performance of the SA version with a joint inversion of time-lapse concentration and electrical resistivity measurements in a hypothetical trinary hydrofacies aquifer characterization problem.
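
    A generic simulated-annealing Metropolis loop of the kind alluded to (proposal acceptance controlled by a temperature that decreases with iteration) is sketched below for a small categorical facies grid with a simple Potts-like pairwise energy. The energy model, cooling schedule, and grid are assumptions; this is not the authors' joint-inversion algorithm.

      import numpy as np

      def gibbs_energy(field, beta=1.0):
          """Pairwise Gibbs-type energy: each pair of unlike horizontal or vertical
          neighbours costs beta (a simple Potts-like penalty, assumed here)."""
          return beta * (np.count_nonzero(field[:, 1:] != field[:, :-1]) +
                         np.count_nonzero(field[1:, :] != field[:-1, :]))

      def simulated_annealing(shape=(20, 20), n_facies=3, n_iter=20000,
                              t0=2.0, t_min=0.05, seed=6):
          """Metropolis updates with a geometric cooling schedule: large 'jumps' are
          accepted early (high temperature), fewer as the temperature decreases."""
          rng = np.random.default_rng(seed)
          field = rng.integers(0, n_facies, shape)
          e = gibbs_energy(field)
          for it in range(n_iter):
              t = max(t_min, t0 * (t_min / t0) ** (it / n_iter))      # cooling temperature
              i, j = rng.integers(0, shape[0]), rng.integers(0, shape[1])
              old = field[i, j]
              field[i, j] = rng.integers(0, n_facies)                 # propose a new facies
              e_new = gibbs_energy(field)
              if e_new <= e or rng.random() < np.exp(-(e_new - e) / t):
                  e = e_new                                           # accept the proposal
              else:
                  field[i, j] = old                                   # reject, restore
          return field, e

      field, energy = simulated_annealing()
      print(energy)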

  18. On coarse projective integration for atomic deposition in amorphous systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chuang, Claire Y., E-mail: yungc@seas.upenn.edu, E-mail: meister@unm.edu, E-mail: zepedaruiz1@llnl.gov; Sinno, Talid, E-mail: talid@seas.upenn.edu; Han, Sang M., E-mail: yungc@seas.upenn.edu, E-mail: meister@unm.edu, E-mail: zepedaruiz1@llnl.gov

    2015-10-07

Direct molecular dynamics simulation of atomic deposition under realistic conditions is notoriously challenging because of the wide range of time scales that must be captured. Numerous simulation approaches have been proposed to address the problem, often requiring a compromise between model fidelity, algorithmic complexity, and computational efficiency. Coarse projective integration, an example application of the "equation-free" framework, offers an attractive balance between these constraints. Here, periodically applied, short atomistic simulations are employed to compute time derivatives of slowly evolving coarse variables that are then used to numerically integrate differential equations over relatively large time intervals. A key obstacle to the application of this technique in realistic settings is the "lifting" operation in which a valid atomistic configuration is recreated from knowledge of the coarse variables. Using Ge deposition on amorphous SiO2 substrates as an example application, we present a scheme for lifting realistic atomistic configurations comprised of collections of Ge islands on amorphous SiO2 using only a few measures of the island size distribution. The approach is shown to provide accurate initial configurations to restart molecular dynamics simulations at arbitrary points in time, enabling the application of coarse projective integration for this morphologically complex system.
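
    The outer loop of coarse projective integration is easy to sketch when lifting and restriction are trivial, which is precisely the part that is hard for the amorphous deposition problem. Below, a cheap noisy relaxation stands in for the expensive fine-scale simulator, short bursts estimate the coarse time derivative, and a projective forward-Euler jump advances the coarse variable; all parameters are illustrative.

      import numpy as np

      def fine_simulator(state, dt, n_steps, rng):
          """Toy stand-in for the expensive fine-scale (e.g. MD) model: noisy
          relaxation of a single coarse variable toward an attractor."""
          x = state
          for _ in range(n_steps):
              x += dt * (1.0 - x) + 0.01 * np.sqrt(dt) * rng.standard_normal()
          return x

      def coarse_projective_integration(x0=0.1, burst_steps=50, dt=1e-3,
                                        projective_dt=0.05, n_outer=40, seed=7):
          """Alternate short fine-scale bursts (to estimate d<coarse>/dt) with large
          projective forward-Euler jumps of the coarse variable."""
          rng = np.random.default_rng(seed)
          x, t, history = x0, 0.0, []
          for _ in range(n_outer):
              x_before = x
              x = fine_simulator(x, dt, burst_steps, rng)     # short burst; "restriction" is trivial here
              slope = (x - x_before) / (burst_steps * dt)     # estimated coarse time derivative
              x = x + projective_dt * slope                   # projective Euler jump; "lifting" is trivial here
              t += burst_steps * dt + projective_dt
              history.append((t, x))
          return history

      print(coarse_projective_integration()[-1])              # approaches the attractor near x = 1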

  19. On Coarse Projective Integration for Atomic Deposition in Amorphous Systems

    DOE PAGES

    Chuang, Claire Y.; Han, Sang M.; Zepeda-Ruiz, Luis A.; ...

    2015-10-02

Direct molecular dynamics simulation of atomic deposition under realistic conditions is notoriously challenging because of the wide range of timescales that must be captured. Numerous simulation approaches have been proposed to address the problem, often requiring a compromise between model fidelity, algorithmic complexity and computational efficiency. Coarse projective integration, an example application of the 'equation-free' framework, offers an attractive balance between these constraints. Here, periodically applied, short atomistic simulations are employed to compute gradients of slowly-evolving coarse variables that are then used to numerically integrate differential equations over relatively large time intervals. A key obstacle to the application of this technique in realistic settings is the 'lifting' operation in which a valid atomistic configuration is recreated from knowledge of the coarse variables. Using Ge deposition on amorphous SiO2 substrates as an example application, we present a scheme for lifting realistic atomistic configurations comprised of collections of Ge islands on amorphous SiO2 using only a few measures of the island size distribution. In conclusion, the approach is shown to provide accurate initial configurations to restart molecular dynamics simulations at arbitrary points in time, enabling the application of coarse projective integration for this morphologically complex system.

  20. Simulating Realistic Test Data for the European Lightning Imager on MTG using Data from Seviri, TRMM-LIS and ISS-LIS

    NASA Astrophysics Data System (ADS)

    Finke, U.; Blakeslee, R. J.; Mach, D. M.

    2017-12-01

    The next generation of European geostationary weather observing satellites (MTG) will operate an optical lightning location instrument (LI) which will be very similar to the Geostationary Lightning Mapper (GLM) on board GOES-R. For the development and verification of the product processing algorithms, realistic test data are necessary. This paper presents a method of test data generation on the basis of optical lightning data from the LIS instrument and cloud image data from the Seviri radiometer. The basis is the lightning data gathered during the 15-year LIS operation time, particularly the empirical distribution functions of the optical pulse size, duration, and radiance as well as the inter-correlation of lightning in space and time. These allow for a realistically structured simulation of lightning test data. Due to its low orbit, the instantaneous field of view of the LIS is limited and moves with time. For the generation of test data that cover the geostationary visible disk, the LIS data have to be extended. This is realized by (1) simulating random lightning pulses according to the established distribution functions of the lightning parameters and (2) using the cloud radiometer data of the Seviri instrument on board the geostationary Meteosat Second Generation (MSG). In particular, the cloud top height product (CTH) identifies convective storm clouds within which the simulation places random lightning pulses. The LIS instrument was recently deployed on the International Space Station (ISS). The ISS orbit reaches higher latitudes, particularly Europe. The ISS-LIS data are analyzed for single observation days. Additionally, the statistical distributions of parameters such as radiance, footprint size, and space-time correlation of the groups are compared against the long-term statistics from TRMM-LIS. Optical lightning detection efficiency from space is affected by the solar radiation reflected from the clouds. This effect changes with day and night areas across the field of view. For a realistic simulation of this cloud background radiance, the Seviri visual channel VIS08 data are used. In addition to the test data study, this paper gives a comparison of the MTG-LI to the GLM and discusses differences in instrument design, product definition and generation, and the merging of data from both geostationary instruments.
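
    A hedged sketch of the placement step described above: synthetic pulses are drawn from placeholder distributions (not the LIS-derived ones) and dropped only into pixels that a cloud-top-height threshold flags as convective. The field, threshold, and distribution parameters below are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(2)

        def simulate_lightning_pulses(cth, cth_threshold_km=9.0, mean_pulses=200):
            """Place synthetic optical pulses in convective pixels of a cloud-top-height field."""
            convective = np.argwhere(cth > cth_threshold_km)    # candidate storm pixels
            n_pulses = rng.poisson(mean_pulses)
            if convective.size == 0 or n_pulses == 0:
                return []
            picks = convective[rng.integers(0, len(convective), size=n_pulses)]
            return [{
                "row": int(r), "col": int(c),
                "radiance": rng.lognormal(mean=1.0, sigma=0.8),       # placeholder distributions,
                "footprint_km2": rng.lognormal(mean=4.0, sigma=0.5),  # not the empirical LIS ones
                "duration_ms": rng.exponential(scale=0.5),
            } for r, c in picks]

        cth = rng.uniform(2.0, 14.0, size=(100, 100))   # synthetic stand-in for a Seviri CTH scene
        test_pulses = simulate_lightning_pulses(cth)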

  1. Admixture analysis in relation to pedigree studies of introgression in a minority British cattle breed: the Lincoln Red.

    PubMed

    Bray, T C; Hall, S J G; Bruford, M W

    2014-02-01

    Investigation of historic population processes using molecular data has been facilitated by the use of approximate Bayesian computation (ABC), which enables the consideration of multiple alternative demographic scenarios. The Lincoln Red cattle breed provides a relatively simple example of two well-documented admixture events. Using molecular data for this breed, we found that structure did not resolve very low (<5%) levels of introgression, possibly due to sampling limitations. We evaluated the performance of two ABC approaches (2BAD and DIYABC) against those of two earlier methodologies, ADMIX and LEADMIX, by comparing their interpretations with the conclusions drawn from herdbook analysis. The ABC methods gave credible values for the proportions of the Lincoln Red genotype that are attributable to Aberdeen Angus and Limousin, although estimates of effective population size and event timing were not realistic. We suggest ABC methods are a valuable supplement to pedigree-based studies but that the accuracy of admixture determination is likely to diminish with increasing complexity of the admixture scenario. © 2013 Blackwell Verlag GmbH.
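
    For readers unfamiliar with ABC, the sketch below is a bare-bones rejection sampler for an admixture proportion under a toy allele-frequency model; it is not 2BAD or DIYABC, and the summary statistic, prior, and reference frequencies are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        # illustrative "reference panel" allele frequencies for the two source breeds
        f_a = rng.uniform(0.1, 0.5, 200)
        f_b = rng.uniform(0.5, 0.9, 200)

        def simulate_summary(p_admix, n_ind=50):
            """Toy model: allele frequencies in the admixed breed are a mixture of the two sources."""
            f_mix = (1 - p_admix) * f_a + p_admix * f_b
            counts = rng.binomial(2 * n_ind, f_mix)
            return np.mean(counts / (2 * n_ind) - f_a)      # mean frequency shift toward breed B

        def abc_rejection(observed_summary, n_sims=20000, tol=0.01):
            """Keep prior draws whose simulated summary lands within `tol` of the observed one."""
            draws = rng.uniform(0, 1, n_sims)               # flat prior on the admixture proportion
            summaries = np.array([simulate_summary(p) for p in draws])
            return draws[np.abs(summaries - observed_summary) < tol]

        posterior = abc_rejection(observed_summary=0.12)
        print(posterior.mean(), np.percentile(posterior, [2.5, 97.5]))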

  2. On the validity of within-nuclear-family genetic association analysis in samples of extended families.

    PubMed

    Bureau, Alexandre; Duchesne, Thierry

    2015-12-01

    Splitting extended families into their component nuclear families to apply a genetic association method designed for nuclear families is a widespread practice in familial genetic studies. Dependence among genotypes and phenotypes of nuclear families from the same extended family arises because of genetic linkage of the tested marker with a risk variant or because of familial specificity of genetic effects due to gene-environment interaction. This raises concerns about the validity of inference conducted under the assumption of independence of the nuclear families. We indeed prove theoretically that, in a conditional logistic regression analysis applicable to disease cases and their genotyped parents, the naive model-based estimator of the variance of the coefficient estimates underestimates the true variance. However, simulations with realistic effect sizes of risk variants and variation of this effect from family to family reveal that the underestimation is negligible. The simulations also show the greater efficiency of the model-based variance estimator compared to a robust empirical estimator. Our recommendation is therefore to use the model-based estimator of variance for inference on the effects of genetic variants.

  3. Disorder effects in topological states: Brief review of the recent developments

    NASA Astrophysics Data System (ADS)

    Wu, Binglan; Song, Juntao; Zhou, Jiaojiao; Jiang, Hua

    2016-11-01

    Disorder inevitably exists in realistic samples, manifesting itself in various exotic properties for the topological states. In this paper, we summarize and briefly review the work completed over the last few years, including our own, regarding recent developments in several topics about disorder effects in topological states. For weak disorder, the robustness of topological states is demonstrated, especially for both quantum spin Hall states with Z2 = 1 and size-induced nontrivial topological insulators with Z2 = 0. For moderate disorder, by increasing the randomness of both the impurity distribution and the impurity-induced potential, the topological insulator states can be created from normal metallic or insulating states. These phenomena and their mechanisms are summarized. For strong disorder, the disorder causes a metal-insulator transition. Due to their topological nature, the phase diagrams are much richer in topological state systems. Finally, the trends in these areas of disorder research are discussed. Project supported by the National Natural Science Foundation of China (Grant Nos. 11374219, 11474085, and 11534001) and the Natural Science Foundation of Jiangsu Province, China (Grant No. BK20160007).

  4. Incorporating measurement error in n = 1 psychological autoregressive modeling.

    PubMed

    Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
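
    The attenuation bias discussed above is easy to reproduce. The sketch below simulates a latent AR(1) process observed with additive white measurement noise and shows that the naive lag-1 regression estimate falls well below the true autoregressive parameter; all values are chosen for illustration only.

        import numpy as np

        rng = np.random.default_rng(4)

        def simulate_ar1_with_noise(n=100, phi=0.5, sigma_proc=1.0, sigma_meas=1.0):
            """Latent AR(1) process observed with additive white measurement noise."""
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = phi * x[t - 1] + sigma_proc * rng.standard_normal()
            return x + sigma_meas * rng.standard_normal(n)

        def naive_ar1_estimate(y):
            """AR(1) coefficient ignoring measurement error (OLS of y_t on y_{t-1})."""
            y0, y1 = y[:-1] - y[:-1].mean(), y[1:] - y[1:].mean()
            return (y0 @ y1) / (y0 @ y0)

        estimates = [naive_ar1_estimate(simulate_ar1_with_noise()) for _ in range(500)]
        print(np.mean(estimates))   # well below the true phi = 0.5: attenuation by measurement error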

  5. Percolation Thresholds in Angular Grain media: Drude Directed Infiltration

    NASA Astrophysics Data System (ADS)

    Priour, Donald

    Pores in many realistic systems are not well-delineated channels but void spaces among the grains, impermeable to charge or fluid flow, that make up the medium. Sparse grain concentrations lead to permeable systems, while concentrations in excess of a critical density block bulk fluid flow. We calculate percolation thresholds in porous materials made up of randomly placed (and oriented) disks, tetrahedra, and cubes. To determine whether randomly generated finite-system samples are permeable, we deploy virtual tracer particles which are scattered (e.g., specularly) by collisions with the impenetrable angular grains. We hasten the rate of exploration (which would otherwise scale as ncoll^(1/2), where ncoll is the number of collisions with grains, if the tracers followed linear trajectories) by considering the tracer particles to be charged in conjunction with a randomly directed uniform electric field. As in the Drude treatment, where a succession of many scattering events leads to a constant drift velocity, tracer displacements on average grow linearly in ncoll. By averaging over many disorder realizations for a variety of system sizes, we calculate the percolation threshold and critical exponent which characterize the phase transition.
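
    The Drude argument can be illustrated with a toy biased random walk (not the authors' tracer-scattering code): without a field, the net displacement grows roughly as the square root of the number of collisions, whereas a small field-induced drift per collision makes it grow linearly.

        import numpy as np

        rng = np.random.default_rng(5)

        def tracer_displacement(n_coll=10000, drift=0.05):
            """Random-walk stand-in for a tracer scattering off grains; `drift` mimics the mean
            displacement per collision imparted by a weak applied field (Drude picture)."""
            angles = rng.uniform(0.0, 2.0 * np.pi, n_coll)
            steps = np.stack([np.cos(angles), np.sin(angles)], axis=1)
            steps[:, 0] += drift                     # field-directed bias along x
            return np.linalg.norm(steps.sum(axis=0))

        print(tracer_displacement(drift=0.0))    # ~ sqrt(n_coll): diffusive exploration
        print(tracer_displacement(drift=0.05))   # ~ drift * n_coll: field-directed exploration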

  6. Design of portable ultraminiature flow cytometers for medical diagnostics

    NASA Astrophysics Data System (ADS)

    Leary, James F.

    2018-02-01

    Design of portable microfluidic flow/image cytometry devices for measurements in the field (e.g., initial medical diagnostics) requires careful design in terms of power requirements and weight to allow for realistic portability. True portability with high-throughput microfluidic systems also requires sampling systems without the need for sheath hydrodynamic focusing, both to avoid the need for sheath fluid and to enable higher volumes of actual sample rather than sheath/sample combinations. Weight/power requirements dictate the use of super-bright LEDs with top-hat excitation beam architectures and very small silicon photodiodes or nanophotonic sensors that can both be powered by small batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and sampling and subtracting noise in between excitation pulses. Microfluidic cytometry also requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging for adequate statistical significance to permit real-time (typically in less than 15 minutes) initial medical decisions for patients in the field. This is not something conventional cytometry traditionally worries about, but it is very important for the development of small, portable microfluidic devices with small-volume throughputs. It also provides a more reasonable alternative to conventional tubes of blood when sampling geriatric and newborn patients for whom a conventional peripheral blood draw can be problematic. Instead, one or two drops of blood obtained by pin-prick should be able to provide statistically meaningful results for use in making real-time medical decisions without the need for blood fractionation, which is not realistic in the doctor's office or field.

  7. Idealized vs. Realistic Microstructures: An Atomistic Simulation Case Study on γ/γ' Microstructures.

    PubMed

    Prakash, Aruna; Bitzek, Erik

    2017-01-23

    Single-crystal Ni-base superalloys, consisting of a two-phase γ/γ′ microstructure, retain high strengths at elevated temperatures and are key materials for high-temperature applications such as turbine blades of aircraft engines. The lattice misfit between the γ and γ′ phases results in internal stresses, which significantly influence the deformation and creep behavior of the material. Large-scale atomistic simulations that are often used to enhance our understanding of the deformation mechanisms in such materials must accurately account for such misfit stresses. In this work, we compare the internal stresses in both idealized and experimentally-informed, i.e., more realistic, γ/γ′ microstructures. The idealized samples are generated by assuming, as is frequently done, a periodic arrangement of cube-shaped γ′ particles with planar γ/γ′ interfaces. The experimentally-informed samples are generated from two different sources to produce three different samples: the scanning electron microscopy micrograph-informed quasi-2D atomistic sample and the atom probe tomography-informed stoichiometric and non-stoichiometric atomistic samples. Additionally, we compare the stress state of an idealized embedded-cube microstructure with finite element simulations incorporating 3D periodic boundary conditions. Subsequently, we study the influence of the resulting stress state on the evolution of dislocation loops in the different samples. The results show that the stresses in the atomistic and finite element simulations are almost identical. Furthermore, quasi-2D boundary conditions lead to a significantly different stress state and, consequently, a different evolution of the dislocation loop, when compared to samples with fully 3D boundary conditions.

  8. Benthic Flux Sampling Device, Prototype Design, Development, and Evaluation

    DTIC Science & Technology

    1993-08-01

    collaboration with Clare Reimers and Matt Christianson at Scripps Institution of Oceanography. Trace metal chemistry was performed by John Andrews and ... realistic levels for coastal and inshore sediments using a sample period of 2-4 days. The resulting flux rates will be useful in evaluating the risks ... sufficient for detecting release rates at significant levels. Operation Depth. A depth capability of 50 m is sufficient to perform studies in most U.S. bays

  9. Asteroid Impact Deflection and Assessment (AIDA) mission - Full-Scale Modeling and Simulation of Ejecta Evolution and Fates

    NASA Astrophysics Data System (ADS)

    Fahnestock, Eugene G.; Yu, Yang; Hamilton, Douglas P.; Schwartz, Stephen; Stickle, Angela; Miller, Paul L.; Cheng, Andy F.; Michel, Patrick; AIDA Impact Simulation Working Group

    2016-10-01

    The proposed Asteroid Impact Deflection and Assessment (AIDA) mission includes NASA's Double Asteroid Redirection Test (DART), whose impact with the secondary of near-Earth binary asteroid 65803 Didymos is expected to liberate large amounts of ejecta. We present efforts within the AIDA Impact Simulation Working Group to comprehensively simulate the behavior of this impact ejecta as it moves through and exits the system. Group members at JPL, OCA, and UMD have been working largely independently, developing their own strategies and methodologies. Ejecta initial conditions may be imported from output of hydrocode impact simulations or generated from crater scaling laws derived from point-source explosion models. We started with the latter approach, using reasonable assumptions for the secondary's density, porosity, surface cohesive strength, and vanishingly small net gravitational/rotational surface acceleration. We adopted DART's planned size, mass, closing velocity, and impact geometry for the cratering event. Using independent N-Body codes, we performed Monte Carlo integration of ejecta particles sampled over reasonable particle size ranges, and over launch locations within the crater footprint. In some cases we scaled the number of integrated particles in various size bins to the estimated number of particles consistent with a realistic size-frequency distribution. Dynamical models used for the particle integration varied, but all included full gravity potential of both primary and secondary, the solar tide, and solar radiation pressure (accounting for shadowing). We present results for the proportions of ejecta reaching ultimate fates of escape, return impact on the secondary, and transfer impact onto the primary. We also present the time history of reaching those outcomes, i.e., ejecta clearing timescales, and the size-frequency distribution of remaining ejecta at given post-impact durations. We find large numbers of particles remain in the system for several weeks after impact. Clearing timescales are nonlinearly dependent on particle size as expected, such that only the largest ejecta persist longest. We find results are strongly dependent on the local surface geometry at the modeled impact locations.

  10. The Hartung-Knapp-Sidik-Jonkman method for random effects meta-analysis is straightforward and considerably outperforms the standard DerSimonian-Laird method

    PubMed Central

    2014-01-01

    Background The DerSimonian and Laird approach (DL) is widely used for random effects meta-analysis, but this often results in inappropriate type I error rates. The method described by Hartung, Knapp, Sidik and Jonkman (HKSJ) is known to perform better when trials of similar size are combined. However, evidence in realistic situations, where one trial might be much larger than the other trials, is lacking. We aimed to evaluate the relative performance of the DL and HKSJ methods when studies of different sizes are combined and to develop a simple method to convert DL results to HKSJ results. Methods We evaluated the performance of the HKSJ versus DL approach in simulated meta-analyses of 2–20 trials with varying sample sizes and between-study heterogeneity, and allowing trials to have various sizes, e.g., 25% of the trials being 10 times larger than the smaller trials. We also compared the number of “positive” (statistically significant at p < 0.05) findings using empirical data of recent meta-analyses with ≥3 studies of interventions from the Cochrane Database of Systematic Reviews. Results The simulations showed that the HKSJ method consistently resulted in more adequate error rates than the DL method. When the significance level was 5%, the HKSJ error rates at most doubled, whereas for DL they could be over 30%. DL, and, far less so, HKSJ had more inflated error rates when the combined studies had unequal sizes and between-study heterogeneity. The empirical data from 689 meta-analyses showed that 25.1% of the significant findings for the DL method were non-significant with the HKSJ method. DL results can be easily converted into HKSJ results. Conclusions Our simulations showed that the HKSJ method consistently results in more adequate error rates than the DL method, especially when the number of studies is small, and can easily be applied routinely in meta-analyses. Even with the HKSJ method, extra caution is needed when there are ≤5 studies of very unequal sizes. PMID:24548571
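
    For concreteness, a minimal sketch of the two variance treatments around the same DerSimonian-Laird point estimate: DL uses a normal quantile with the inverse-sum-of-weights variance, while HKSJ rescales the variance by a weighted residual sum of squares and uses a t quantile with k-1 degrees of freedom. The example data are invented for illustration.

        import numpy as np
        from scipy import stats

        def dl_hksj(y, v):
            """DL point estimate with both DL (normal) and HKSJ (t) confidence intervals.
            y: study effect estimates; v: their within-study variances."""
            y, v = np.asarray(y, float), np.asarray(v, float)
            k = len(y)
            w = 1.0 / v
            ybar = np.sum(w * y) / np.sum(w)
            q = np.sum(w * (y - ybar) ** 2)
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - (k - 1)) / c)              # DL between-study variance
            w_star = 1.0 / (v + tau2)
            mu = np.sum(w_star * y) / np.sum(w_star)
            var_dl = 1.0 / np.sum(w_star)
            var_hksj = np.sum(w_star * (y - mu) ** 2) / ((k - 1) * np.sum(w_star))
            ci_dl = mu + np.array([-1, 1]) * stats.norm.ppf(0.975) * np.sqrt(var_dl)
            ci_hksj = mu + np.array([-1, 1]) * stats.t.ppf(0.975, k - 1) * np.sqrt(var_hksj)
            return mu, ci_dl, ci_hksj

        # one large and four small trials (log odds ratios and variances, invented for illustration)
        print(dl_hksj([0.3, 0.1, 0.5, -0.2, 0.25], [0.01, 0.2, 0.25, 0.3, 0.2]))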

  11. Time Courses of Inflammatory Markers after Aneurysmal Subarachnoid Hemorrhage and Their Possible Relevance for Future Studies.

    PubMed

    Höllig, Anke; Stoffel-Wagner, Birgit; Clusmann, Hans; Veldeman, Michael; Schubert, Gerrit A; Coburn, Mark

    2017-01-01

    Aneurysmal subarachnoid hemorrhage triggers an intense inflammatory response, which is suspected to increase the risk for secondary complications such as delayed cerebral ischemia (DCI). However, to date, the monitoring of the inflammatory response to detect secondary complications such as DCI has not become part of routine clinical diagnostics. Here, we aim to illustrate the time courses of inflammatory parameters after aneurysmal subarachnoid hemorrhage (aSAH) and discuss the problems of inflammatory parameters as biomarkers, but also their possible relevance for a deeper understanding of the pathophysiology after aSAH and sophisticated planning of future studies. In this prospective cohort study, 109 patients with aSAH were initially included; n = 28 patients had to be excluded. Serum and, if possible, cerebrospinal fluid (CSF) samples (n = 48) were retrieved at days 1, 4, 7, 10, and 14 after aSAH. Samples were analyzed for leukocyte count and C-reactive protein (CRP) (serum samples only) as well as matrix metallopeptidase 9 (MMP9), intercellular adhesion molecule 1 (ICAM1), and leukemia inhibitory factor (LIF) (both serum and CSF samples). Time courses of the inflammatory parameters were displayed and related to the occurrence of DCI. We illustrate the time courses of leukocyte count, CRP, MMP9, ICAM1, and LIF in patients' serum samples from the first until the 14th day after aSAH. Time courses of MMP9, ICAM1, and LIF in CSF samples are demonstrated. Furthermore, no significant differences in the time courses were found in relation to the occurrence of DCI. We estimate that the wide range of the measured values hampers their interpretation and usage as biomarkers. However, understanding the inflammatory response after aSAH and generating a multicenter database may facilitate further studies: realistic sample size calculations on the basis of a multicenter database will increase the quality and clinical relevance of the acquired results.

  12. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (equation included in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of evidence: 3.
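
    The power bands discussed above follow from the standard two-arm normal-approximation formula; the sketch below reproduces the per-arm sample sizes needed to detect standardized mean differences of 0.5 and 0.3 at 80% power (a textbook approximation, not the review's own calculation).

        import numpy as np
        from scipy import stats

        def n_per_arm(smd, power=0.80, alpha=0.05):
            """Approximate per-arm size for a two-arm trial detecting a standardized mean
            difference `smd` with a two-sided test (normal approximation to the t-test)."""
            z_a = stats.norm.ppf(1 - alpha / 2)
            z_b = stats.norm.ppf(power)
            return int(np.ceil(2 * ((z_a + z_b) / smd) ** 2))

        print(n_per_arm(0.5))   # ~63 per arm, ~126 in total
        print(n_per_arm(0.3))   # ~175 per arm, ~350 in total: well above the average trial size of 153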

  13. Milky Way mass and potential recovery using tidal streams in a realistic halo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonaca, Ana; Geha, Marla; Küpper, Andreas H. W.

    2014-11-01

    We present a new method for determining the Galactic gravitational potential based on forward modeling of tidal stellar streams. We use this method to test the performance of smooth and static analytic potentials in representing realistic dark matter halos, which have substructure and are continually evolving by accretion. Our FAST-FORWARD method uses a Markov Chain Monte Carlo algorithm to compare, in six-dimensional phase space, an 'observed' stream to models created in trial analytic potentials. We analyze a large sample of streams that evolved in the Via Lactea II (VL2) simulation, which represents a realistic Galactic halo potential. The recovered potential parameters are in agreement with the best fit to the global, present-day VL2 potential. However, merely assuming an analytic potential limits the dark matter halo mass measurement to an accuracy of 5%-20%, depending on the choice of analytic parameterization. Collectively, the mass estimates using streams from our sample reach this fundamental limit, but individually they can be highly biased. Individual streams can both under- and overestimate the mass, and the bias is progressively worse for those with smaller perigalacticons, motivating the search for tidal streams at galactocentric distances larger than 70 kpc. We estimate that the assumption of a static and smooth dark matter potential in modeling of the GD-1- and Pal5-like streams introduces an error of up to 50% in the Milky Way mass estimates.

  14. Assessment of the Performance of Ablative Insulators Under Realistic Solid Rocket Motor Operating Conditions (a Doctoral Dissertation)

    NASA Technical Reports Server (NTRS)

    Martin, Heath Thomas

    2013-01-01

    Ablative insulators are used in the interior surfaces of solid rocket motors to prevent the mechanical structure of the rocket from failing due to intense heating by the high-temperature solid-propellant combustion products. The complexity of the ablation process underscores the need for ablative material response data procured from a realistic solid rocket motor environment, where all of the potential contributions to material degradation are present and in their appropriate proportions. For this purpose, the present study examines ablative material behavior in a laboratory-scale solid rocket motor. The test apparatus includes a planar, two-dimensional flow channel in which flat ablative material samples are installed downstream of an aluminized solid propellant grain and imaged via real-time X-ray radiography. In this way, the in-situ transient thermal response of an ablator to all of the thermal, chemical, and mechanical erosion mechanisms present in a solid rocket environment can be observed and recorded. The ablative material is instrumented with multiple micro-thermocouples, so that in-depth temperature histories are known. Both total heat flux and thermal radiation flux gauges have been designed, fabricated, and tested to characterize the thermal environment to which the ablative material samples are exposed. These tests not only allow different ablative materials to be compared in a realistic solid rocket motor environment but also improve the understanding of the mechanisms that influence the erosion behavior of a given ablative material.

  15. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
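
    A small simulation makes the mechanism concrete: if only significant results are "published", published effect sizes and sample sizes become negatively correlated even when the true effect is constant. The true effect, sample-size range, and publication rule below are illustrative assumptions.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)

        def published_study(true_d=0.2):
            """Simulate two-group studies and 'publish' only those with p < .05."""
            while True:
                n = int(rng.integers(20, 300))               # per-group sample size
                a = rng.standard_normal(n) + true_d
                b = rng.standard_normal(n)
                _, p = stats.ttest_ind(a, b)
                if p < 0.05:
                    d = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
                    return d, n

        effects, sizes = zip(*(published_study() for _ in range(300)))
        print(np.corrcoef(effects, sizes)[0, 1])   # clearly negative under selective publication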

  16. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.

  17. Optimal balance of the striatal medium spiny neuron network.

    PubMed

    Ponzi, Adam; Wickens, Jeffery R

    2013-04-01

    Slowly varying activity in the striatum, the main Basal Ganglia input structure, is important for the learning and execution of movement sequences. Striatal medium spiny neurons (MSNs) form cell assemblies whose population firing rates vary coherently on slow behaviourally relevant timescales. It has been shown that such activity emerges in a model of a local MSN network but only at realistic connectivities of 10 ~ 20% and only when MSN generated inhibitory post-synaptic potentials (IPSPs) are realistically sized. Here we suggest a reason for this. We investigate how MSN network generated population activity interacts with temporally varying cortical driving activity, as would occur in a behavioural task. We find that at unrealistically high connectivity a stable winners-take-all type regime is found where network activity separates into fixed stimulus dependent regularly firing and quiescent components. In this regime only a small number of population firing rate components interact with cortical stimulus variations. Around 15% connectivity a transition to a more dynamically active regime occurs where all cells constantly switch between activity and quiescence. In this low connectivity regime, MSN population components wander randomly and here too are independent of variations in cortical driving. Only in the transition regime do weak changes in cortical driving interact with many population components so that sequential cell assemblies are reproducibly activated for many hundreds of milliseconds after stimulus onset and peri-stimulus time histograms display strong stimulus and temporal specificity. We show that, remarkably, this activity is maximized at striatally realistic connectivities and IPSP sizes. Thus, we suggest the local MSN network has optimal characteristics - it is neither too stable to respond in a dynamically complex temporally extended way to cortical variations, nor is it too unstable to respond in a consistent repeatable way. Rather, it is optimized to generate stimulus dependent activity patterns for long periods after variations in cortical excitation.

  18. Optimal Balance of the Striatal Medium Spiny Neuron Network

    PubMed Central

    Ponzi, Adam; Wickens, Jeffery R.

    2013-01-01

    Slowly varying activity in the striatum, the main Basal Ganglia input structure, is important for the learning and execution of movement sequences. Striatal medium spiny neurons (MSNs) form cell assemblies whose population firing rates vary coherently on slow behaviourally relevant timescales. It has been shown that such activity emerges in a model of a local MSN network but only at realistic connectivities of 10-20% and only when MSN generated inhibitory post-synaptic potentials (IPSPs) are realistically sized. Here we suggest a reason for this. We investigate how MSN network generated population activity interacts with temporally varying cortical driving activity, as would occur in a behavioural task. We find that at unrealistically high connectivity a stable winners-take-all type regime is found where network activity separates into fixed stimulus dependent regularly firing and quiescent components. In this regime only a small number of population firing rate components interact with cortical stimulus variations. Around 15% connectivity a transition to a more dynamically active regime occurs where all cells constantly switch between activity and quiescence. In this low connectivity regime, MSN population components wander randomly and here too are independent of variations in cortical driving. Only in the transition regime do weak changes in cortical driving interact with many population components so that sequential cell assemblies are reproducibly activated for many hundreds of milliseconds after stimulus onset and peri-stimulus time histograms display strong stimulus and temporal specificity. We show that, remarkably, this activity is maximized at striatally realistic connectivities and IPSP sizes. Thus, we suggest the local MSN network has optimal characteristics – it is neither too stable to respond in a dynamically complex temporally extended way to cortical variations, nor is it too unstable to respond in a consistent repeatable way. Rather, it is optimized to generate stimulus dependent activity patterns for long periods after variations in cortical excitation. PMID:23592954

  19. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
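
    As background for the allocation formulas, the sketch below implements Yuen's two-sample test itself (trimmed means with winsorized variances and Welch-type degrees of freedom); the cost- or power-optimal allocation ratios derived in the paper would then be used when choosing the two group sizes. This is a generic textbook construction, not the authors' code.

        import numpy as np
        from scipy import stats

        def yuen_test(x, y, trim=0.2):
            """Yuen's two-sample test on trimmed means with winsorized variances."""
            def trimmed_parts(a):
                a = np.sort(np.asarray(a, float))
                g = int(np.floor(trim * len(a)))
                h = len(a) - 2 * g                                  # effective sample size
                w = a.copy()
                w[:g], w[len(a) - g:] = a[g], a[len(a) - g - 1]     # winsorize the tails
                d = (len(a) - 1) * np.var(w, ddof=1) / (h * (h - 1))
                return np.mean(a[g:len(a) - g]), d, h
            m1, d1, h1 = trimmed_parts(x)
            m2, d2, h2 = trimmed_parts(y)
            t = (m1 - m2) / np.sqrt(d1 + d2)
            df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
            return t, 2 * stats.t.sf(abs(t), df)

        x = np.random.default_rng(8).normal(0.0, 1.0, 30)    # heteroscedastic, unequal-n example
        y = np.random.default_rng(9).normal(0.5, 2.0, 45)
        print(yuen_test(x, y))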

  20. Sci-Thur AM: YIS – 06: A Monte Carlo study of macro- and microscopic dose descriptors and the microdosimetric spread using detailed cellular models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliver, Patricia; Thomson, Rowan

    2016-08-15

    Purpose: To develop Monte Carlo models of cell clusters to investigate the relationships between macro- and microscopic dose descriptors, quantify the microdosimetric spread in energy deposition for subcellular targets, and determine how these results depend on the computational model. Methods: Microscopic tissue structure is modelled as clusters of 13 to 150 cells, with cell (nuclear) radii between 5 and 10 microns (2 and 9 microns). Energy imparted per unit mass (specific energy or dose) is scored in the nucleus (D_nuc) and cytoplasm (D_cyt) for incident photon energies from 20 to 370 keV. Dose-to-water (D_w,m) and dose-to-medium (D_m,m) are compared to D_nuc and D_cyt. Single cells and single nuclear cavities are also simulated. Results: D_nuc and D_cyt are sensitive to the surrounding environment, with deviations of up to 13% for a single nucleus/cell compared with a multicellular cluster. These dose descriptors vary with cell and nucleus size by up to 10%. D_nuc and D_cyt differ from D_w,m and D_m,m by up to 32%. The microdosimetric spread is sensitive to whether cells are arranged randomly or in a hexagonal lattice, and whether subcellular compartment sizes are sampled from a normal distribution or are constant throughout the cluster. Conclusions: D_nuc and D_cyt are sensitive to cell morphology, elemental composition and the presence of surrounding cells. The microdosimetric spread was investigated using realistic elemental compositions for the nucleus and cytoplasm, and depends strongly on subcellular compartment size, source energy and dose.

  1. Temporal variability of the bioaerosol background at a subway station: concentration level, size distribution, and diversity of airborne bacteria.

    PubMed

    Dybwad, Marius; Skogan, Gunnar; Blatny, Janet Martha

    2014-01-01

    Naturally occurring bioaerosol environments may present a challenge to biological detection-identification-monitoring (BIODIM) systems aiming at rapid and reliable warning of bioterrorism incidents. One way to improve the operational performance of BIODIM systems is to increase our understanding of relevant bioaerosol backgrounds. Subway stations are enclosed public environments which may be regarded as potential bioterrorism targets. This study provides novel information concerning the temporal variability of the concentration level, size distribution, and diversity of airborne bacteria in a Norwegian subway station. Three different air samplers were used during a 72-h sampling campaign in February 2011. The results suggested that the airborne bacterial environment was stable between days and seasons, while the intraday variability was found to be substantial, although often following a consistent diurnal pattern. The bacterial levels ranged from not detected to 10^3 CFU m^-3 and generally showed increased levels during the daytime compared to the nighttime levels, as well as during rush hours compared to non-rush hours. The airborne bacterial levels showed rapid temporal variation (up to 270-fold) on some occasions, both consistent and inconsistent with the diurnal profile. Airborne bacterium-containing particles were distributed between different sizes for particles of >1.1 μm, although ∼50% were between 1.1 and 3.3 μm. Anthropogenic activities (mainly passengers) were demonstrated as major sources of airborne bacteria and predominantly contributed 1.1- to 3.3-μm bacterium-containing particles. Our findings contribute to the development of realistic testing and evaluation schemes for BIODIM equipment by providing information that may be used to simulate operational bioaerosol backgrounds during controlled aerosol chamber-based challenge tests with biological threat agents.

  2. Temporal Variability of the Bioaerosol Background at a Subway Station: Concentration Level, Size Distribution, and Diversity of Airborne Bacteria

    PubMed Central

    Dybwad, Marius; Skogan, Gunnar

    2014-01-01

    Naturally occurring bioaerosol environments may present a challenge to biological detection-identification-monitoring (BIODIM) systems aiming at rapid and reliable warning of bioterrorism incidents. One way to improve the operational performance of BIODIM systems is to increase our understanding of relevant bioaerosol backgrounds. Subway stations are enclosed public environments which may be regarded as potential bioterrorism targets. This study provides novel information concerning the temporal variability of the concentration level, size distribution, and diversity of airborne bacteria in a Norwegian subway station. Three different air samplers were used during a 72-h sampling campaign in February 2011. The results suggested that the airborne bacterial environment was stable between days and seasons, while the intraday variability was found to be substantial, although often following a consistent diurnal pattern. The bacterial levels ranged from not detected to 10^3 CFU m^-3 and generally showed increased levels during the daytime compared to the nighttime levels, as well as during rush hours compared to non-rush hours. The airborne bacterial levels showed rapid temporal variation (up to 270-fold) on some occasions, both consistent and inconsistent with the diurnal profile. Airborne bacterium-containing particles were distributed between different sizes for particles of >1.1 μm, although ∼50% were between 1.1 and 3.3 μm. Anthropogenic activities (mainly passengers) were demonstrated as major sources of airborne bacteria and predominantly contributed 1.1- to 3.3-μm bacterium-containing particles. Our findings contribute to the development of realistic testing and evaluation schemes for BIODIM equipment by providing information that may be used to simulate operational bioaerosol backgrounds during controlled aerosol chamber-based challenge tests with biological threat agents. PMID:24162566

  3. Aerosol transport simulations in indoor and outdoor environments using computational fluid dynamics (CFD)

    NASA Astrophysics Data System (ADS)

    Landazuri, Andrea C.

    This dissertation focuses on aerosol transport modeling in occupational environments and mining sites in Arizona using computational fluid dynamics (CFD). The impacts of human exposure in both environments are explored with emphasis on turbulence, wind speed, wind direction, and particle sizes. Final emissions simulations involved the digitization of available elevation contour plots of one of the mining sites to account for realistic topographical features. The digital elevation map (DEM) of one of the sites was imported to COMSOL MULTIPHYSICS for subsequent turbulence and particle simulations. Simulation results that include realistic topography show considerable deviations of wind direction. Inter-element correlation results using metal and metalloid size-resolved concentration data from a Micro-Orifice Uniform Deposit Impactor (MOUDI) under given wind speeds and directions provided guidance on groups of metals that coexist throughout mining activities. Groups between Fe-Mg, Cr-Fe, Al-Sc, Sc-Fe, and Mg-Al are strongly correlated for unrestricted wind directions and speeds, suggesting that the source may be of soil origin (e.g., ore and tailings); also, groups of elements where Cu is present, in the coarse fraction range, may come from mechanical-action mining activities and the saltation phenomenon. In addition, MOUDI data under low wind speeds (<2 m/s) and at night showed a strong correlation for 1 μm particles between the groups Sc-Be-Mg, Cr-Al, Cu-Mn, Cd-Pb-Be, Cd-Cr, Cu-Pb, Pb-Cd, and As-Cd-Pb. The As-Cd-Pb group correlates strongly in almost all ranges of particle sizes. When restricted low wind speeds were imposed, more groups of elements were evident, and this may be explained by the fact that at lower speeds particles are more likely to settle. When these results are linked with CFD simulations and Pb-isotope results, it is concluded that the source of elements found in association with Pb in the fine fraction comes from the ore that is subsequently processed at the smelter site, whereas the source of elements associated with Pb in the coarse fraction is of different origin. CFD simulation results not only provide realistic and quantifiable information in terms of potential deleterious effects, but also show that the application of CFD represents an important contribution to actual dispersion modeling studies; therefore, computational fluid dynamics can be used as a source apportionment tool to identify areas that have an effect on specific sampling points and susceptible regions under certain meteorological conditions, and these conclusions can be supported with inter-element correlation matrices and lead isotope analysis, especially since there is limited access to the mining sites. Additional results concluded that grid adaptation is a powerful tool that allows specific regions requiring a high level of detail to be refined and therefore better resolves the flow, provides a higher number of locations with monotonic convergence than the manual grids, and requires the least computational effort. CFD simulations were approached using the k-epsilon model, with the aid of computer-aided engineering software: ANSYS and COMSOL MULTIPHYSICS. The success of aerosol transport simulations depends on a good simulation of the turbulent flow. A lot of attention was placed on investigating and choosing the best models in terms of convergence, independence, and computational effort.
This dissertation also includes preliminary studies of transient discrete-phase, Eulerian, and species transport modeling, the importance of saltation of particles, information on CFD methods, and strategies for future directions that should be taken.

  4. SU-E-CAMPUS-I-05: Internal Dosimetric Calculations for Several Imaging Radiopharmaceuticals in Preclinical Studies and Quantitative Assessment of the Mouse Size Impact On Them. Realistic Monte Carlo Simulations Based On the 4D-MOBY Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostou, T; Papadimitroulas, P; Kagadis, GC

    2014-06-15

    Purpose: Commonly used radiopharmaceuticals were tested to define the most important dosimetric factors in preclinical studies. Dosimetric calculations were applied in two different whole-body mouse models, with varying organ size, so as to determine their impact on absorbed doses and S-values. Organ mass influence was evaluated with computational models and Monte Carlo (MC) simulations. Methods: MC simulations were executed in GATE to determine the dose distribution in the 4D digital MOBY mouse phantom. Two mouse models, 28 and 34 g respectively, were constructed based on realistic preclinical exams to calculate the absorbed doses and S-values of five radionuclides commonly used in SPECT/PET studies (18F, 68Ga, 177Lu, 111In and 99mTc). Radionuclide biodistributions were obtained from the literature. Realistic statistics (uncertainty lower than 4.5%) were acquired using the standard physical model in Geant4. Comparisons of the dosimetric calculations on the two different phantoms for each radiopharmaceutical are presented. Results: Dose per organ in mGy was calculated for all radiopharmaceuticals. The two models introduced a difference of 0.69% in their brain masses, while the largest differences were observed in the marrow (18.98%) and thyroid (18.65%) masses. Furthermore, S-values of the most important target organs were calculated for each isotope. The source organ was selected to be the whole mouse body. Differences in the S-factors were observed in the 6.0-30.0% range. Tables with all the calculations as reference dosimetric data were developed. Conclusion: Accurate dose per organ and the most appropriate S-values are derived for specific preclinical studies. The impact of the mouse model size is rather high (up to 30% for a 17.65% difference in the total mass), and thus accurate definition of the organ mass is a crucial parameter for self-absorbed S-value calculation. Our goal is to extend the study to accurate estimations in small animal imaging, as it is known that there is a large variety in the anatomy of the organs.

  5. Debating Nuclear Energy: Theories of Risk and Purposes of Communication.

    ERIC Educational Resources Information Center

    Mirel, Barbara

    1994-01-01

    Applies theoretical principles of risk perception and communication (from various psychological, social, political, and cultural dynamics) to a sample risk communication on nuclear energy to determine realistic expectations for persuasive risk communications. Stresses that rhetorical researchers need to explore and test the extent to which written…

  6. Applying the Bootstrap to Taxometric Analysis: Generating Empirical Sampling Distributions to Help Interpret Results

    ERIC Educational Resources Information Center

    Ruscio, John; Ruscio, Ayelet Meron; Meron, Mati

    2007-01-01

    Meehl's taxometric method was developed to distinguish categorical and continuous constructs. However, taxometric output can be difficult to interpret because expected results for realistic data conditions and differing procedural implementations have not been derived analytically or studied through rigorous simulations. By applying bootstrap…

  7. Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework

    PubMed Central

    Kroes, Thomas; Post, Frits H.; Botha, Charl P.

    2012-01-01

    The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, especially the more realistic lighting has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques to direct volume rendering (DVR). With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made available our framework under a permissive open source license. PMID:22768292

  8. Group Size Effect on Cooperation in One-Shot Social Dilemmas II: Curvilinear Effect.

    PubMed

    Capraro, Valerio; Barcelo, Hélène

    2015-01-01

    In a world in which many pressing global issues require large scale cooperation, understanding the group size effect on cooperative behavior is a topic of central importance. Yet, the nature of this effect remains largely unknown, with lab experiments insisting that it is either positive or negative or null, and field experiments suggesting that it is instead curvilinear. Here we shed light on this apparent contradiction by considering a novel class of public goods games inspired by the realistic scenario in which the natural output limits of the public good imply that the benefit of cooperation increases fast for early contributions and then decelerates. We report on a large lab experiment providing evidence that, in this case, group size has a curvilinear effect on cooperation, according to which intermediate-size groups cooperate more than smaller groups and more than larger groups. In doing so, our findings help fill the gap between lab experiments and field experiments and suggest concrete ways to promote large scale cooperation among people.
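
    The decelerating-benefit idea can be made concrete with a toy payoff in which the public good saturates with total contributions; the functional form and parameters below are illustrative only and are not the experimental design used in the study.

        import numpy as np

        def payoff(contributions, group_size, endowment=1.0, b_max=4.0, k=2.0):
            """Toy public-goods payoff with a saturating (decelerating) production function."""
            total = np.sum(contributions)
            public_benefit = b_max * total / (total + k)     # rises fast, then flattens
            return endowment - np.asarray(contributions) + public_benefit / group_size

        # the benefit one full contribution confers on each other member shrinks with group size
        for n in (2, 5, 20, 50):
            baseline = payoff(np.zeros(n), n)[1]
            with_one = payoff(np.r_[1.0, np.zeros(n - 1)], n)[1]   # payoff of a non-contributor
            print(n, with_one - baseline)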

  9. Making inference from wildlife collision data: inferring predator absence from prey strikes

    PubMed Central

    Hosack, Geoffrey R.; Barry, Simon C.

    2017-01-01

    Wildlife collision data are ubiquitous, though challenging for making ecological inference due to typically irreducible uncertainty relating to the sampling process. We illustrate a new approach that is useful for generating inference from predator data arising from wildlife collisions. By simply conditioning on a second prey species sampled via the same collision process, and by using a biologically realistic numerical response function, we can produce a coherent numerical response relationship between predator and prey. This relationship can then be used to make inference on the population size of the predator species, including the probability of extinction. The statistical conditioning enables us to account for unmeasured variation in factors influencing the runway strike incidence for individual airports and to enable valid comparisons. A practical application of the approach for testing hypotheses about the distribution and abundance of a predator species is illustrated using the hypothesized red fox incursion into Tasmania, Australia. We estimate that, conditional on the numerical response between fox and lagomorph runway strikes on mainland Australia, the predictive probability of observing no runway strikes of foxes in Tasmania after observing 15 lagomorph strikes is 0.001. We conclude there is enough evidence to safely reject the null hypothesis that there is a widespread red fox population in Tasmania at a population density consistent with prey availability. The method is novel and has potential wider application. PMID:28243534
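
    A heavily simplified sketch of this style of predictive argument (not the authors' model): if fox strikes are Poisson with a rate proportional to lagomorph strikes and the ratio carries a gamma posterior learned from mainland counts, the predictive probability of zero fox strikes after a given number of lagomorph strikes follows directly. The mainland counts below are hypothetical, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(7)

        def prob_zero_fox_strikes(lagomorph_strikes, mainland_fox, mainland_lagomorph, n_draws=100000):
            """Gamma-Poisson predictive probability of observing no fox strikes."""
            # posterior for the fox-per-lagomorph strike ratio (vague gamma prior assumed)
            ratio = rng.gamma(shape=mainland_fox + 0.5, scale=1.0 / mainland_lagomorph, size=n_draws)
            return np.mean(np.exp(-ratio * lagomorph_strikes))

        # hypothetical mainland strike counts, for illustration only
        print(prob_zero_fox_strikes(lagomorph_strikes=15, mainland_fox=120, mainland_lagomorph=400))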

  10. Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data

    PubMed Central

    CHEN, SHUAI; ZHAO, HONGWEI

    2013-01-01

    Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm, that was originally used for explaining the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs, using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has a quite small bias in many realistic settings. We conduct numerical studies to examine the finite sample property of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869
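
    A minimal sketch of the simple weighted (inverse-probability-weighted) mean-cost estimator to which the discussion refers: uncensored subjects are weighted by the inverse of the Kaplan-Meier estimate of the censoring survivor function at their follow-up time. This is a generic textbook construction under independent censoring, not the authors' code; ties are ignored for brevity.

        import numpy as np

        def km_survival(times, events):
            """Kaplan-Meier survival estimate evaluated just before each subject's own time."""
            times, events = np.asarray(times, float), np.asarray(events, int)
            surv, s, at_risk = np.empty(len(times)), 1.0, len(times)
            for idx in np.argsort(times):
                surv[idx] = s                        # S(t-) just before this subject's time
                if events[idx]:
                    s *= 1.0 - 1.0 / at_risk
                at_risk -= 1
            return surv

        def mean_cost_ipw(costs, follow_up, uncensored):
            """Weight each uncensored cost by 1 / K(T), K = censoring survivor function."""
            uncensored = np.asarray(uncensored, bool)
            k_hat = km_survival(follow_up, ~uncensored)      # treat censoring as the "event"
            weights = np.where(uncensored, 1.0 / k_hat, 0.0)
            return np.mean(weights * np.asarray(costs, float))

        print(mean_cost_ipw(costs=[10., 14., 9., 20., 7.],
                            follow_up=[3., 5., 2., 6., 4.],
                            uncensored=[1, 0, 1, 1, 0]))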

  11. Anisotropic scene geometry resampling with occlusion filling for 3DTV applications

    NASA Astrophysics Data System (ADS)

    Kim, Jangheon; Sikora, Thomas

    2006-02-01

    Image- and video-based rendering technologies are receiving growing attention due to their photo-realistic rendering capability at free viewpoints. However, two major limitations are ghosting and blurring due to their sampling-based mechanism. The scene geometry that supports the selection of accurate sampling positions is estimated using a global method (i.e., an approximate depth plane) and a local method (i.e., disparity estimation). This paper focuses on the local method since it can yield more accurate rendering quality without a large number of cameras. Local scene geometry has two difficulties: geometrical density and uncovered areas containing hidden information. These are serious drawbacks for reconstructing an arbitrary viewpoint without aliasing artifacts. To solve these problems, we propose an anisotropic diffusive resampling method based on tensor theory. Isotropic low-pass filtering accomplishes anti-aliasing in the scene geometry, while anisotropic diffusion prevents the filtering from blurring the visual structures. Apertures in coarse samples are estimated following diffusion on the pre-filtered space, and the nonlinear weighting of gradient directions suppresses the amount of diffusion. Aliasing artifacts from low density are efficiently removed by isotropic filtering, and edge blurring is addressed by the anisotropic method in the same process. Because of the differing sizes of the sampling gaps, the resampling condition is defined by considering the causality between filter scale and edges. Using a partial differential equation (PDE) employing Gaussian scale-space, we iteratively achieve coarse-to-fine resampling. At a large scale, apertures and uncovered holes can be overcome because only strong and meaningful boundaries are selected at that resolution. The coarse-level resampling with a large scale is iteratively refined to recover detailed scene structure. Simulation results show marked improvements in rendering quality.
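
    The edge-preserving smoothing described above is in the spirit of Perona-Malik anisotropic diffusion; the sketch below shows that generic scheme (periodic boundaries via np.roll, illustrative conduction parameter), not the paper's tensor-based, Gaussian scale-space implementation.

        import numpy as np

        def anisotropic_diffusion(img, n_iter=50, kappa=0.1, step=0.2):
            """Perona-Malik-style diffusion: smooth flat regions while preserving strong gradients."""
            u = np.asarray(img, float).copy()
            g = lambda d: np.exp(-(d / kappa) ** 2)      # conduction: small where gradients are large
            for _ in range(n_iter):
                dn = np.roll(u, -1, axis=0) - u          # differences toward the four neighbours
                ds = np.roll(u,  1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u,  1, axis=1) - u
                u = u + step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
            return u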

  12. Body size estimation of self and others in females varying in BMI.

    PubMed

    Thaler, Anne; Geuss, Michael N; Mölbert, Simone C; Giel, Katrin E; Streuber, Stephan; Romero, Javier; Black, Michael J; Mohler, Betty J

    2018-01-01

    Previous literature suggests that a disturbed ability to accurately identify own body size may contribute to overweight. Here, we investigated the influence of personal body size, indexed by body mass index (BMI), on body size estimation in a non-clinical population of females varying in BMI. We attempted to disentangle general biases in body size estimates and attitudinal influences by manipulating whether participants believed the body stimuli (personalized avatars with realistic weight variations) represented their own body or that of another person. Our results show that the accuracy of own body size estimation is predicted by personal BMI, such that participants with lower BMI underestimated their body size and participants with higher BMI overestimated their body size. Further, participants with higher BMI were less likely to notice the same percentage of weight gain than participants with lower BMI. Importantly, these results were only apparent when participants were judging a virtual body that was their own identity (Experiment 1), but not when they estimated the size of a body with another identity and the same underlying body shape (Experiment 2a). The different influences of BMI on accuracy of body size estimation and sensitivity to weight change for self and other identity suggests that effects of BMI on visual body size estimation are self-specific and not generalizable to other bodies.

  13. Body size estimation of self and others in females varying in BMI

    PubMed Central

    Geuss, Michael N.; Mölbert, Simone C.; Giel, Katrin E.; Streuber, Stephan; Romero, Javier; Black, Michael J.; Mohler, Betty J.

    2018-01-01

    Previous literature suggests that a disturbed ability to accurately identify own body size may contribute to overweight. Here, we investigated the influence of personal body size, indexed by body mass index (BMI), on body size estimation in a non-clinical population of females varying in BMI. We attempted to disentangle general biases in body size estimates and attitudinal influences by manipulating whether participants believed the body stimuli (personalized avatars with realistic weight variations) represented their own body or that of another person. Our results show that the accuracy of own body size estimation is predicted by personal BMI, such that participants with lower BMI underestimated their body size and participants with higher BMI overestimated their body size. Further, participants with higher BMI were less likely to notice the same percentage of weight gain than participants with lower BMI. Importantly, these results were only apparent when participants were judging a virtual body that was their own identity (Experiment 1), but not when they estimated the size of a body with another identity and the same underlying body shape (Experiment 2a). The different influences of BMI on accuracy of body size estimation and sensitivity to weight change for self and other identity suggests that effects of BMI on visual body size estimation are self-specific and not generalizable to other bodies. PMID:29425218

  14. Long-range energy transfer in self-assembled quantum dot-DNA cascades

    NASA Astrophysics Data System (ADS)

    Goodman, Samuel M.; Siu, Albert; Singh, Vivek; Nagpal, Prashant

    2015-11-01

    The size-dependent energy bandgaps of semiconductor nanocrystals or quantum dots (QDs) can be utilized in converting broadband incident radiation efficiently into electric current by cascade energy transfer (ET) between layers of different sized quantum dots, followed by charge dissociation and transport in the bottom layer. Self-assembling such cascade structures with angstrom-scale spatial precision is important for building realistic devices, and DNA-based QD self-assembly can provide an important alternative. Here we show long-range Dexter energy transfer in QD-DNA self-assembled single constructs and ensemble devices. Using photoluminescence, scanning tunneling spectroscopy, current-sensing AFM measurements in single QD-DNA cascade constructs, and temperature-dependent ensemble devices using TiO2 nanotubes, we show that Dexter energy transfer, likely mediated by the exciton-shelves formed in these QD-DNA self-assembled structures, can be used for efficient transport of energy across QD-DNA thin films. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr04778a

  15. Multi-scale Visualization of Molecular Architecture Using Real-Time Ambient Occlusion in Sculptor.

    PubMed

    Wahle, Manuel; Wriggers, Willy

    2015-10-01

    The modeling of large biomolecular assemblies relies on an efficient rendering of their hierarchical architecture across a wide range of spatial level of detail. We describe a paradigm shift currently under way in computer graphics towards the use of more realistic global illumination models, and we apply the so-called ambient occlusion approach to our open-source multi-scale modeling program, Sculptor. While there are many other higher quality global illumination approaches going all the way up to full GPU-accelerated ray tracing, they do not provide size-specificity of the features they shade. Ambient occlusion is an aspect of global lighting that offers great visual benefits and powerful user customization. By estimating how other molecular shape features affect the reception of light at some surface point, it effectively simulates indirect shadowing. This effect occurs between molecular surfaces that are close to each other, or in pockets such as protein or ligand binding sites. By adding ambient occlusion, large macromolecular systems look much more natural, and the perception of characteristic surface features is strongly enhanced. In this work, we present a real-time implementation of screen space ambient occlusion that delivers realistic cues about tunable spatial scale characteristics of macromolecular architecture. Heretofore, the visualization of large biomolecular systems, comprising e.g. hundreds of thousands of atoms or Mega-Dalton size electron microscopy maps, did not take into account the length scales of interest or the spatial resolution of the data. Our approach has been uniquely customized with shading that is tuned for pockets and cavities of a user-defined size, making it useful for visualizing molecular features at multiple scales of interest. This is a feature that none of the conventional ambient occlusion approaches provide. Actual Sculptor screen shots illustrate how our implementation supports the size-dependent rendering of molecular surface features.

  16. Regional gray matter correlates of vocational interests

    PubMed Central

    2012-01-01

    Background Previous studies have identified brain areas related to cognitive abilities and personality, respectively. In this exploratory study, we extend the application of modern neuroimaging techniques to another area of individual differences, vocational interests, and relate the results to an earlier study of cognitive abilities salient for vocations. Findings First, we examined the psychometric relationships between vocational interests and abilities in a large sample. The primary relationships between those domains were between Investigative (scientific) interests and general intelligence and between Realistic (“blue-collar”) interests and spatial ability. Then, using MRI and voxel-based morphometry, we investigated the relationships between regional gray matter volume and vocational interests. Specific clusters of gray matter were found to be correlated with Investigative and Realistic interests. Overlap analyses indicated some common brain areas between the correlates of Investigative interests and general intelligence and between the correlates of Realistic interests and spatial ability. Conclusions Two of six vocational-interest scales show substantial relationships with regional gray matter volume. The overlap between the brain correlates of these scales and cognitive-ability factors suggests there are relationships between individual differences in brain structure and vocations. PMID:22591829

  17. Regional gray matter correlates of vocational interests.

    PubMed

    Schroeder, David H; Haier, Richard J; Tang, Cheuk Ying

    2012-05-16

    Previous studies have identified brain areas related to cognitive abilities and personality, respectively. In this exploratory study, we extend the application of modern neuroimaging techniques to another area of individual differences, vocational interests, and relate the results to an earlier study of cognitive abilities salient for vocations. First, we examined the psychometric relationships between vocational interests and abilities in a large sample. The primary relationships between those domains were between Investigative (scientific) interests and general intelligence and between Realistic ("blue-collar") interests and spatial ability. Then, using MRI and voxel-based morphometry, we investigated the relationships between regional gray matter volume and vocational interests. Specific clusters of gray matter were found to be correlated with Investigative and Realistic interests. Overlap analyses indicated some common brain areas between the correlates of Investigative interests and general intelligence and between the correlates of Realistic interests and spatial ability. Two of six vocational-interest scales show substantial relationships with regional gray matter volume. The overlap between the brain correlates of these scales and cognitive-ability factors suggests there are relationships between individual differences in brain structure and vocations.

  18. Monte Carlo simulation of air sampling methods for the measurement of radon decay products.

    PubMed

    Sima, Octavian; Luca, Aurelian; Sahagia, Maria

    2017-08-01

    A stochastic model of the processes involved in the measurement of the activity of the 222Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated using Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the 222Rn decay product concentrations in the air are realistically evaluated. Copyright © 2017 Elsevier Ltd. All rights reserved.
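
    The abstract describes propagating the distributions of the sampling and collection factors through to the final measurement result. A greatly simplified Monte Carlo propagation of this kind is sketched below; the input distributions, their parameters and the simple activity formula are hypothetical placeholders, and a real model would also include decay corrections and the full counting chain.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                   # Monte Carlo trials

# hypothetical input distributions for one air sample (all values illustrative)
flow_rate = rng.normal(2.0, 0.05, N)          # sampling flow rate, L/min
sample_time = rng.normal(10.0, 0.1, N)        # sampling duration, min
collection_eff = rng.uniform(0.90, 0.99, N)   # filter collection efficiency
counting_eff = rng.normal(0.30, 0.02, N)      # detector counting efficiency
count_time_s = 600.0                          # counting interval, s
net_counts = rng.poisson(1500, N).astype(float)

# propagate everything to an activity concentration in Bq per m^3 of sampled air
volume_m3 = flow_rate * sample_time / 1000.0
activity_conc = net_counts / (counting_eff * collection_eff * count_time_s) / volume_m3

lo, hi = np.percentile(activity_conc, [2.5, 97.5])
print(f"mean = {activity_conc.mean():.0f} Bq/m^3, 95% interval = [{lo:.0f}, {hi:.0f}]")
```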

  19. Towards Standardization in Terminal Ballistics Testing: Velocity Representation

    DTIC Science & Technology

    1976-01-01

    [Abstract text garbled in extraction. Recoverable fragments refer to plotter-output samples and to a proposed form that is sufficiently simple and versatile to usefully and realistically model the relationship implicit in sets of (vs, vr) data.]

  20. Electron tomography simulator with realistic 3D phantom for evaluation of acquisition, alignment and reconstruction methods.

    PubMed

    Wan, Xiaohua; Katchalski, Tsvi; Churas, Christopher; Ghosh, Sreya; Phan, Sebastien; Lawrence, Albert; Hao, Yu; Zhou, Ziying; Chen, Ruijuan; Chen, Yu; Zhang, Fa; Ellisman, Mark H

    2017-05-01

    Because of the significance of electron microscope tomography in the investigation of biological structure at nanometer scales, ongoing improvement efforts have been continuous over recent years. This is particularly true in the case of software developments. Nevertheless, verification of improvements delivered by new algorithms and software remains difficult. Current analysis tools do not provide adaptable and consistent methods for quality assessment. This is particularly true with images of biological samples, due to image complexity, variability, low contrast and noise. We report an electron tomography (ET) simulator with accurate ray optics modeling of image formation that includes curvilinear trajectories through the sample, warping of the sample and noise. As a demonstration of the utility of our approach, we have concentrated on providing verification of the class of reconstruction methods applicable to wide field images of stained plastic-embedded samples. Accordingly, we have also constructed digital phantoms derived from serial block face scanning electron microscope images. These phantoms are also easily modified to include alignment features to test alignment algorithms. The combination of more realistic phantoms with more faithful simulations facilitates objective comparison of acquisition parameters, alignment and reconstruction algorithms and their range of applicability. With proper phantoms, this approach can also be modified to include more complex optical models, including distance-dependent blurring and phase contrast functions, such as may occur in cryotomography. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Idealized vs. Realistic Microstructures: An Atomistic Simulation Case Study on γ/γ′ Microstructures

    PubMed Central

    Prakash, Aruna; Bitzek, Erik

    2017-01-01

    Single-crystal Ni-base superalloys, consisting of a two-phase γ/γ′ microstructure, retain high strengths at elevated temperatures and are key materials for high temperature applications, like, e.g., turbine blades of aircraft engines. The lattice misfit between the γ and γ′ phases results in internal stresses, which significantly influence the deformation and creep behavior of the material. Large-scale atomistic simulations that are often used to enhance our understanding of the deformation mechanisms in such materials must accurately account for such misfit stresses. In this work, we compare the internal stresses in both idealized and experimentally-informed, i.e., more realistic, γ/γ′ microstructures. The idealized samples are generated by assuming, as is frequently done, a periodic arrangement of cube-shaped γ′ particles with planar γ/γ′ interfaces. The experimentally-informed samples are generated from two different sources to produce three different samples—the scanning electron microscopy micrograph-informed quasi-2D atomistic sample and atom probe tomography-informed stoichiometric and non-stoichiometric atomistic samples. Additionally, we compare the stress state of an idealized embedded cube microstructure with finite element simulations incorporating 3D periodic boundary conditions. Subsequently, we study the influence of the resulting stress state on the evolution of dislocation loops in the different samples. The results show that the stresses in the atomistic and finite element simulations are almost identical. Furthermore, quasi-2D boundary conditions lead to a significantly different stress state and, consequently, different evolution of the dislocation loop, when compared to samples with fully 3D boundary conditions. PMID:28772453

  2. Robustness of the far-field response of nonlocal plasmonic ensembles.

    PubMed

    Tserkezis, Christos; Maack, Johan R; Liu, Zhaowei; Wubs, Martijn; Mortensen, N Asger

    2016-06-22

    Contrary to classical predictions, the optical response of few-nm plasmonic particles depends on particle size due to effects such as nonlocality and electron spill-out. Ensembles of such nanoparticles are therefore expected to exhibit a nonclassical inhomogeneous spectral broadening due to size distribution. For a normal distribution of free-electron nanoparticles, and within the simple nonlocal hydrodynamic Drude model, both the nonlocal blueshift and the plasmon linewidth are shown to be considerably affected by ensemble averaging. Size-variance effects tend however to conceal nonlocality to a lesser extent when the homogeneous size-dependent broadening of individual nanoparticles is taken into account, either through a local size-dependent damping model or through the Generalized Nonlocal Optical Response theory. The role of ensemble averaging is further explored in realistic distributions of isolated or weakly-interacting noble-metal nanoparticles, as encountered in experiments, while an analytical expression to evaluate the importance of inhomogeneous broadening through measurable quantities is developed. Our findings are independent of the specific nonclassical theory used, thus providing important insight into a large range of experiments on nanoscale and quantum plasmonics.

  3. Body image, eating disorders, and the relationship to adolescent media use.

    PubMed

    Benowitz-Fredericks, Carson A; Garcia, Kaylor; Massey, Meredith; Vasagar, Brintha; Borzekowski, Dina L G

    2012-06-01

    Historically and currently, media messages around body shape and size emphasize the importance of being below-average weight for women and hypermuscular for men. These messages about physical appearance are unrealistic for most people and lead to body dissatisfaction in many adolescents. Interventions designed to mitigate the influence of negative media messages on adolescents' body image are presented; however, most have shown limited success. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Effects of simulation fidelity on user experience in virtual fear of public speaking training - an experimental study.

    PubMed

    Poeschl, Sandra; Doering, Nicola

    2014-01-01

    Realistic models in virtual reality training applications are considered to positively influence presence and performance. The experimental study presented here analyzed the effect of simulation fidelity (static vs. animated audience) on presence, as a prerequisite for performance, in a prototype virtual fear-of-public-speaking application with a sample of N = 40 non-phobic academic users. Contrary to the existing literature, no influence on virtual presence or perceived realism was found, but an animated audience led to significantly higher anxiety while giving a talk. Although these findings could be explained by an application that might not have been realistic enough, they still question the role of presence as a mediating factor in virtual exposure applications.

  5. Statistical Analysis for Collision-free Boson Sampling.

    PubMed

    Huang, He-Liang; Zhong, Han-Sen; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Wang, Xiang; Bao, Wan-Su

    2017-11-10

    Boson sampling is strongly believed to be intractable for classical computers but solvable with photons in linear optics, and it has attracted widespread attention as a rapid way to demonstrate quantum supremacy. However, because its solution is mathematically unverifiable, certifying the experimental results is a major difficulty in boson sampling experiments. Here, we develop a statistical analysis scheme to experimentally certify collision-free boson sampling. Numerical simulations are performed to show the feasibility and practicability of our scheme, and the effects of realistic experimental conditions are also considered, demonstrating that our proposed scheme is experimentally friendly. Moreover, our broad approach is expected to be generally applicable to investigating multi-particle coherent dynamics beyond boson sampling.

  6. Impact of airborne particle size, acoustic airflow and breathing pattern on delivery of nebulized antibiotic into the maxillary sinuses using a realistic human nasal replica.

    PubMed

    Leclerc, Lara; Pourchez, Jérémie; Aubert, Gérald; Leguellec, Sandrine; Vecellio, Laurent; Cottier, Michèle; Durand, Marc

    2014-09-01

    Improvement of clinical outcome in patients with sinus disorders involves targeting delivery of nebulized drug into the maxillary sinuses. We investigated the impact of nebulization conditions (with and without 100 Hz acoustic airflow), particle size (9.9 μm, 2.8 μm, 550 nm and 230 nm) and breathing pattern (nasal vs. no nasal breathing) on enhancement of aerosol delivery into the sinuses using a realistic nasal replica developed by our team. After segmentation of the airways by means of high-resolution computed tomography scans, a well-characterized nasal replica was created using a rapid prototyping technology. A total of 168 intrasinus aerosol depositions were performed with changes of aerosol particle size and breathing pattern under different nebulization conditions, using gentamicin as a marker. The results demonstrate that the fraction of aerosol deposited in the maxillary sinuses is enhanced by use of submicrometric aerosols, e.g. 8.155 ± 1.476 mg/L of gentamicin in the left maxillary sinus for the 2.8 μm particles vs. 2.056 ± 0.0474 for the 550 nm particles. Utilization of 100-Hz acoustic airflow nebulization also produced a 2- to 3-fold increase in drug deposition in the maxillary sinuses (e.g. 8.155 ± 1.476 vs. 3.990 ± 1.690 for the 2.8 μm particles). Our study clearly shows that optimum deposition was achieved using submicrometric particles and 100-Hz acoustic airflow nebulization with no nasal breathing. It is hoped that our new respiratory nasal replica will greatly facilitate the development of more effective delivery systems in the future.

  7. Design of landfill daily cells.

    PubMed

    Panagiotakopoulos, D; Dokas, I

    2001-08-01

    The objective of this paper is to study the behaviour of the landfill soil-to-refuse (S/R) ratio when the size, geometry and operating parameters of the daily cell vary over realistic ranges. A simple procedure is presented (1) for calculating the cell parameter values which minimise the S/R ratio and (2) for studying the sensitivity of this minimum S/R ratio to variations in cell size, final refuse density, working face length, lift height and cover thickness. In countries where daily soil cover is required, savings in landfill space could be realised by following this procedure. The sensitivity of the minimum S/R to variations in cell dimensions decreases with cell size. Working face length and lift height affect the S/R ratio significantly. The procedure also offers the engineer an additional tool for comparing one large daily cell with two or more smaller ones at two different working faces within the same landfill.

  8. Abruptness of Cascade Failures in Power Grids

    NASA Astrophysics Data System (ADS)

    Pahwa, Sakshi; Scoglio, Caterina; Scala, Antonio

    2014-01-01

    Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results on real, realistic and synthetic networks indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into ``super-grids''.

  9. Abruptness of cascade failures in power grids.

    PubMed

    Pahwa, Sakshi; Scoglio, Caterina; Scala, Antonio

    2014-01-15

    Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results on real, realistic and synthetic networks indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into "super-grids".

  10. Droplet size in flow: Theoretical model and application to polymer blends

    NASA Astrophysics Data System (ADS)

    Fortelný, Ivan; Jůza, Josef

    2017-05-01

    The paper is focused on prediction of the average droplet radius, R, in flowing polymer blends where the droplet size is determined by a dynamic equilibrium between droplet breakup and coalescence. Expressions for the droplet breakup frequency in systems with low and high contents of the dispersed phase are derived using available theoretical and experimental results for model blends. Dependences of the coalescence probability, Pc, on system parameters, following from recent theories, are considered, and an approximate equation for Pc in a system with low polydispersity in droplet size is proposed. Equations for R in systems with low and high contents of the dispersed phase are derived. Combining these equations predicts a realistic dependence of R on the volume fraction of dispersed droplets, φ. The theoretical prediction of the ratio of R to the critical droplet radius at breakup agrees fairly well with experimental values for steadily mixed polymer blends.

  11. Abruptness of Cascade Failures in Power Grids

    PubMed Central

    Pahwa, Sakshi; Scoglio, Caterina; Scala, Antonio

    2014-01-01

    Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results on real, realistic and synthetic networks indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into “super-grids”. PMID:24424239

  12. Catalysis by clusters with precise numbers of atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyo, Eric C.; Vajda, Stefan

    2015-07-03

    Clusters that contain only a small number of atoms can exhibit unique and often unexpected properties. The clusters are of particular interest in catalysis because they can act as individual active sites, and minor changes in size and composition – such as the addition or removal of a single atom – can have a substantial influence on the activity and selectivity of a reaction. Here we review recent progress in the synthesis, characterization and catalysis of well-defined sub-nanometre clusters. We examine work on size-selected supported clusters in ultra-high vacuum environments and under realistic reaction conditions, and explore the use of computational methods to provide a mechanistic understanding of their catalytic properties. We also highlight the potential of size-selected clusters to provide insights into important catalytic processes and their use in the development of novel catalytic systems.

  13. Risk assessment of turbine rotor failure using probabilistic ultrasonic non-destructive evaluations

    NASA Astrophysics Data System (ADS)

    Guan, Xuefei; Zhang, Jingdan; Zhou, S. Kevin; Rasselkorde, El Mahjoub; Abbasi, Waheed A.

    2014-02-01

    The study presents a risk assessment methodology, and its application, for turbine rotor fatigue failure using probabilistic ultrasonic nondestructive evaluations. A rigorous probabilistic model for ultrasonic flaw sizing is developed by incorporating the model-assisted probability of detection, and the probability density function (PDF) of the actual flaw size is derived. Two general scenarios, namely ultrasonic inspection with an identified flaw indication and ultrasonic inspection without a flaw indication, are considered in the derivation. To estimate fatigue reliability and remaining useful life, uncertainties from ultrasonic flaw sizing and fatigue model parameters are systematically included and quantified. The model parameter PDF is estimated using Bayesian parameter estimation and actual fatigue testing data. The overall method is demonstrated in a realistic application to a steam turbine rotor, and a risk analysis under given safety criteria is provided to support maintenance planning.
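
    For the scenario with an identified flaw indication, the abstract derives the PDF of the actual flaw size from the measured indication and the probability of detection (POD). A toy grid-based version of that Bayesian combination is sketched below; the prior, POD curve, sizing-error model and all numbers are hypothetical and only illustrate the structure of the calculation, not the paper's model.

```python
import numpy as np

# Combine a log-logistic probability-of-detection (POD) curve with a Gaussian
# sizing error and a prior on flaw size to get the PDF of the true flaw size,
# given that an indication was detected and sized at a_meas.
a = np.linspace(0.05, 10.0, 4000)                    # candidate true sizes, mm
da = a[1] - a[0]
prior = np.exp(-a / 2.0)                             # assumed exponential prior on flaw size
pod = 1.0 / (1.0 + np.exp(-(np.log(a) - np.log(1.0)) / 0.3))   # POD(a), 50% at 1 mm
a_meas, sizing_sd = 2.5, 0.5                         # measured size and sizing error (mm)
likelihood = np.exp(-0.5 * ((a_meas - a) / sizing_sd) ** 2)

posterior = prior * pod * likelihood                 # detected AND measured as a_meas
posterior /= posterior.sum() * da                    # normalize to a density on the grid
print("posterior mean flaw size:", (a * posterior).sum() * da)
```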

  14. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  15. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.
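
    The kind of power calculation described here can be reproduced with statsmodels. The effect sizes and group sizes below are invented to mimic the two patterns reported (large effects with small samples vs. small effects with large samples); they are not taken from the surveyed articles.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# a "perception/learning"-style study: large effect, small groups
print(analysis.power(effect_size=0.8, nobs1=15, alpha=0.05))   # roughly 0.56
# a "social/survey"-style study: small effect, large groups
print(analysis.power(effect_size=0.2, nobs1=300, alpha=0.05))  # roughly 0.69

# sample size per group needed for 80% power at a medium effect (d = 0.5)
print(analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05))  # about 64
```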

  16. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
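
    One of the simplest formulas of the kind the article advocates is the normal-approximation sample size for a two-group comparison of means, n per group = 2(z_{1-alpha/2} + z_{1-beta})^2 / d^2, which reduces to Lehr's rule of thumb n ≈ 16/d^2 for 80% power at alpha = 0.05. A small sketch:

```python
import math
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means with standardized effect size d."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 / d ** 2)

for d in (0.2, 0.5, 0.8):                            # small, medium, large effects
    print(d, n_per_group(d), round(16 / d ** 2))     # exact formula vs. Lehr's 16/d^2 rule
```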

  17. A simulation study of gene-by-environment interactions in GWAS implies ample hidden effects

    PubMed Central

    Marigorta, Urko M.; Gibson, Greg

    2014-01-01

    The switch to a modern lifestyle in recent decades has coincided with a rapid increase in prevalence of obesity and other diseases. These shifts in prevalence could be explained by the release of genetic susceptibility for disease in the form of gene-by-environment (GxE) interactions. Yet, the detection of interaction effects requires large sample sizes, little replication has been reported, and a few studies have demonstrated environmental effects only after summing the risk of GWAS alleles into genetic risk scores (GRSxE). We performed extensive simulations of a quantitative trait controlled by 2500 causal variants to inspect the feasibility to detect gene-by-environment interactions in the context of GWAS. The simulated individuals were assigned either to an ancestral or a modern setting that alters the phenotype by increasing the effect size by 1.05–2-fold at a varying fraction of perturbed SNPs (from 1 to 20%). We report two main results. First, for a wide range of realistic scenarios, highly significant GRSxE is detected despite the absence of individual genotype GxE evidence at the contributing loci. Second, an increase in phenotypic variance after environmental perturbation reduces the power to discover susceptibility variants by GWAS in mixed cohorts with individuals from both ancestral and modern environments. We conclude that a pervasive presence of gene-by-environment effects can remain hidden even though it contributes to the genetic architecture of complex traits. PMID:25101110
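
    A deliberately stripped-down version of this kind of simulation is sketched below: genotypes and per-allele effects are drawn at random, a "modern" environment amplifies the genetic effects, and the GRSxE interaction is tested by ordinary least squares. The SNP count, amplification factor, noise level and the use of the true genetic score as the GRS are all simplifications, not the paper's protocol.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_ind, n_snp = 5000, 500              # scaled down from the paper's 2500 causal variants
maf = rng.uniform(0.05, 0.5, n_snp)
G = rng.binomial(2, maf, size=(n_ind, n_snp)).astype(float)
beta = rng.normal(0, 0.05, n_snp)     # per-allele effects

env = rng.integers(0, 2, n_ind)       # 0 = "ancestral", 1 = "modern" environment
amplify = 1.0 + 0.5 * env             # modern setting inflates genetic effects by 50%
g_value = (G * beta).sum(axis=1)
y = amplify * g_value + rng.normal(0, g_value.std() * 2, n_ind)

# genetic risk score x environment (GRSxE) interaction test
grs = g_value                          # here: the true score; in practice, built from GWAS hits
X = sm.add_constant(np.column_stack([grs, env, grs * env]))
fit = sm.OLS(y, X).fit()
print(fit.pvalues[-1])                 # the interaction term is typically highly significant
```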

  18. Normative changes in interests from adolescence to adulthood: A meta-analysis of longitudinal studies.

    PubMed

    Hoff, Kevin A; Briley, Daniel A; Wee, Colin J M; Rounds, James

    2018-04-01

    Vocational interests predict a variety of important outcomes and are among the most widely applied individual difference constructs in psychology and education. Despite over 90 years of research, little is known about the longitudinal development of interests. In this meta-analysis, the authors investigate normative changes in interests through adolescence and young adulthood. Effect sizes were aggregated from 49 longitudinal studies reporting mean-level changes in vocational interests, containing 98 total samples and 20,639 participants. Random effects meta-analytic regression models were used to assess age-related changes and gender differences across Holland's (1959, 1997) RIASEC categories and composite dimensions (people, things, data, and ideas). Results showed that mean-level interest scores generally increase with age, but effect sizes varied across interest categories and developmental periods. Adolescence was defined by two broad patterns of change: interest scores generally decreased during early adolescence, but then increased during late adolescence. During young adulthood, the most striking changes were found across the people and things orientations. Interests involving people tended to increase (artistic, social, and enterprising), whereas interests involving things either decreased (conventional) or remained constant (realistic and investigative). Gender differences associated with occupational stereotypes reached a lifetime peak during early adolescence, then tended to decrease in all subsequent age periods. Overall findings suggest there are normative changes in vocational interests from adolescence to adulthood, with important implications for developmental theories and the applied use of interests. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  20. Interlaboratory comparison for the determination of the soluble fraction of metals in welding fume samples.

    PubMed

    Berlinger, Balazs; Harper, Martin

    2018-02-01

    There is interest in the bioaccessible metal components of aerosols, but this has been minimally studied because standardized sampling and analytical methods have not yet been developed. An interlaboratory study (ILS) has been carried out to evaluate a method for determining the water-soluble component of realistic welding fume (WF) air samples. Replicate samples were generated in the laboratory and distributed to participating laboratories to be analyzed according to a standardized procedure. Within-laboratory precision of replicate sample analysis (repeatability) was very good. Reproducibility between laboratories was not as good, but within limits of acceptability for the analysis of typical aerosol samples. These results can be used to support the development of a standardized test method.

  1. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes, the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an already present concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or effective number of species. Lastly I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
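
    One way to make the idea concrete: under a generalized-least-squares view of phylogenetically correlated data, an effective sample size can be computed as 1' R^-1 1 for the between-species correlation matrix R implied by the model (for Brownian motion, correlations proportional to shared branch length). The sketch below uses that GLS-style quantity and a toy two-clade correlation matrix; it may differ in detail from the regression effective sample size defined in the paper.

```python
import numpy as np

def regression_ess(R):
    """Effective number of independent observations implied by a
    between-species correlation matrix R (GLS logic: 1' R^-1 1).
    For independent tips R = I and this returns n."""
    ones = np.ones(R.shape[0])
    return ones @ np.linalg.solve(R, ones)

# Brownian-motion-style correlations on a toy ultrametric tree of depth 1:
# two clades of 3 species each, splitting at the root, with species inside
# each clade sharing 80% of their history (and 0% across clades).
within, between = 0.8, 0.0
R = np.full((6, 6), between)
R[:3, :3] = within
R[3:, 3:] = within
np.fill_diagonal(R, 1.0)

print(regression_ess(R))          # ~2.3: six correlated tips act like ~2 independent ones
print(regression_ess(np.eye(6)))  # 6.0 when tips are independent
```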

  2. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. The endothelial sample size (per examination) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.

  3. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
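
    The essential arithmetic behind such a calculator is the design effect for clusters of size two, 1 + ICC, applied to the fraction of infants who are twins. The helper below is a back-of-the-envelope sketch under that assumption, not the paper's tool; the function name and the example numbers are invented.

```python
import math

def inflate_for_twins(n_independent, icc, prop_twin_infants=0.2):
    """Inflate a sample size computed for independent infants to account for
    clustering within twin pairs. Design effect for clusters of size 2:
    DE = 1 + (2 - 1) * icc, applied only to the fraction of infants who are twins
    (an approximation; an exact calculation would model the mixture directly)."""
    de_twins = 1.0 + icc
    average_de = (1 - prop_twin_infants) * 1.0 + prop_twin_infants * de_twins
    return math.ceil(n_independent * average_de)

print(inflate_for_twins(400, icc=0.5, prop_twin_infants=0.3))   # 460 infants needed
print(inflate_for_twins(400, icc=-0.1, prop_twin_infants=0.3))  # a negative ICC can reduce n
```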

  4. Size and habit evolution of PETN crystals - a lattice Monte Carlo study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zepeda-Ruiz, L A; Maiti, A; Gee, R

    2006-02-28

    Starting from an accurate inter-atomic potential we develop a simple scheme for generating an "on-lattice" molecular potential of short range, which is then incorporated into a lattice Monte Carlo code for simulating size and shape evolution of nanocrystallites. As a specific example, we test such a procedure on the morphological evolution of a molecular crystal of interest to us, e.g., Pentaerythritol Tetranitrate, or PETN, and obtain realistic facetted structures in excellent agreement with experimental morphologies. We investigate several interesting effects including the evolution of the initial shape of a "seed" to an equilibrium configuration, and the variation of growth morphology as a function of the rate of particle addition relative to diffusion.

  5. Consolidation of lunar regolith: Microwave versus direct solar heating

    NASA Technical Reports Server (NTRS)

    Kunitzer, J.; Strenski, D. G.; Yankee, S. J.; Pletka, B. J.

    1991-01-01

    The production of construction materials on the lunar surface will require an appropriate fabrication technique. Two processing methods considered as being suitable for producing dense, consolidated products such as bricks are direct solar heating and microwave heating. An analysis was performed to compare the two processes in terms of the amount of power and time required to fabricate bricks of various size. The regolith was considered to be a mare basalt with an overall density of 60 pct. of theoretical. Densification was assumed to take place by vitrification since this process requires moderate amounts of energy and time while still producing dense products. Microwave heating was shown to be significantly faster compared to solar furnace heating for rapid production of realistic-size bricks.

  6. Design and analysis of a supersonic penetration/maneuvering fighter

    NASA Technical Reports Server (NTRS)

    Child, R. D.

    1975-01-01

    The design of three candidate air combat fighters, which would cruise effectively at freestream Mach numbers of 1.6, 2.0, and 2.5 while maintaining good transonic maneuvering capability, is considered. These fighters were designed to deliver aerodynamically controlled dogfight missiles at the design Mach numbers. Studies performed by Rockwell International in May 1974 and guidance from NASA determined the shape and size of these missiles. The principal objective of this study is the aerodynamic design of the vehicles; however, the configurations are sized to have realistic structures, mass properties, and propulsion systems. The results of this study show that air combat fighters in the 15,000 to 23,000 pound class would cruise supersonically on dry power and still maintain good transonic maneuvering performance.

  7. Electronic shift register memory based on molecular electron-transfer reactions

    NASA Technical Reports Server (NTRS)

    Hopfield, J. J.; Onuchic, Jose Nelson; Beratan, David N.

    1989-01-01

    The design of a shift register memory at the molecular level is described in detail. The memory elements are based on a chain of electron-transfer molecules incorporated on a very large scale integrated (VLSI) substrate, and the information is shifted by photoinduced electron-transfer reactions. The design requirements for such a system are discussed, and several realistic strategies for synthesizing these systems are presented. The immediate advantage of such a hybrid molecular/VLSI device would arise from the possible information storage density. The prospect of considerable savings of energy per bit processed also exists. This molecular shift register memory element design solves the conceptual problems associated with integrating molecular size components with larger (micron) size features on a chip.

  8. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is also essential to reliable research results. However, sample size determination is not straightforward for mediation analysis in a longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution-of-the-product method and the bootstrap method. Among the three methods of testing the mediation effect, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution-of-the-product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are also provided for convenient use. The extensive simulation study showed that the distribution-of-the-product and bootstrapping methods have superior performance to Sobel's method; the distribution-of-the-product method is recommended for practical use because it requires less computation than bootstrapping. An R package has been developed for sample size determination with the distribution-of-the-product method in longitudinal mediation study designs.
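
    To make the competing tests concrete, the sketch below implements a single-level version of two of them: the Sobel z statistic for the indirect effect a*b and a percentile bootstrap confidence interval. The paper's setting is a multilevel longitudinal model, so this only illustrates the test statistics being compared, with made-up data.

```python
import numpy as np
import statsmodels.api as sm

def sobel_and_bootstrap(x, m, y, n_boot=2000, seed=0):
    """Indirect effect a*b with a Sobel z-test and a percentile bootstrap CI."""
    def ab_paths(x, m, y):
        fit_a = sm.OLS(m, sm.add_constant(x)).fit()                           # x -> m
        fit_b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit()     # m -> y given x
        return fit_a.params[1], fit_a.bse[1], fit_b.params[1], fit_b.bse[1]

    a, se_a, b, se_b = ab_paths(x, m, y)
    sobel_z = a * b / np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)

    rng = np.random.default_rng(seed)
    n = len(x)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample cases with replacement
        ab = ab_paths(x[idx], m[idx], y[idx])
        boots.append(ab[0] * ab[2])
    ci = np.percentile(boots, [2.5, 97.5])
    return a * b, sobel_z, ci

# toy data with a true indirect effect of 0.5 * 0.4 = 0.2
rng = np.random.default_rng(1)
x = rng.normal(size=300)
m = 0.5 * x + rng.normal(size=300)
y = 0.4 * m + 0.2 * x + rng.normal(size=300)
print(sobel_and_bootstrap(x, m, y))
```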

  9. A Bio-Realistic Analog CMOS Cochlea Filter With High Tunability and Ultra-Steep Roll-Off.

    PubMed

    Wang, Shiwei; Koickal, Thomas Jacob; Hamilton, Alister; Cheung, Rebecca; Smith, Leslie S

    2015-06-01

    This paper presents the design and experimental results of a cochlea filter in analog very large scale integration (VLSI) which closely resembles the physiologically measured response of the mammalian cochlea. The filter consists of three specialized sub-filter stages which respectively provide a passive response at low frequencies, an actively tunable response at mid-band frequencies and an ultra-steep roll-off at the transition from pass-band to stop-band. The sub-filters are implemented in a balanced ladder topology using floating active inductors. Measured results from the fabricated chip show that a wide range of mid-band tuning, including gain tuning of over 20 dB, Q-factor tuning from 2 to 19 and the bio-realistic center frequency shift, is achieved by adjusting only one circuit parameter. In addition, the filter has an ultra-steep roll-off exceeding 300 dB/dec. By changing biasing currents, the filter can be configured to operate with center frequencies from 31 Hz to 8 kHz. The filter is 9th order, consumes 59.5 ∼ 90.0 μW of power and occupies 0.9 mm2 of chip area. A parallel bank of the proposed filters can be used as the front-end in hearing prosthesis devices, speech processors and other bio-inspired auditory systems owing to its bio-realistic behavior, low power consumption and small size.

  10. Bivalves: From individual to population modelling

    NASA Astrophysics Data System (ADS)

    Saraiva, S.; van der Meer, J.; Kooijman, S. A. L. M.; Ruardij, P.

    2014-11-01

    An individual-based population model for bivalves was designed, built and tested in a 0D approach to simulate the population dynamics of a mussel bed located in an intertidal area. The processes at the individual level were simulated following dynamic energy budget theory, whereas initial egg mortality, background mortality, food competition, and predation (including cannibalism) were additional population processes. Model properties were studied through the analysis of theoretical scenarios and by simulating different combinations of mortality parameters in a realistic setup driven by environmental measurements. Realistic criteria were applied to narrow down the possible combinations of parameter values. Field observations obtained in a long-term, multi-station monitoring program were compared with the model scenarios. The realistic scenarios reproduced reasonably well the timing of some peaks in individual abundance in the mussel bed and its size distribution, but the number of individuals was not well predicted. The results suggest that mortality in the early life stages (egg and larvae) plays an important role in population dynamics, whether through initial egg mortality, larval dispersion, settlement failure or shrimp predation. Future steps include coupling the population model with a hydrodynamic and biogeochemical model to improve the simulation of egg/larvae dispersion, settlement probability and food transport, and also to simulate the feedback of the organisms' activity on the water column properties, which will result in an improved characterization of food quantity and quality.

  11. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three pots of very different sizes serves to demonstrate that it is the absolute sample size that matters most in determining the accuracy of a poll's findings, not the relative sample size, i.e. the size of the sample in relation to its population.
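
    The point can be made with one line of arithmetic: for a simple random sample, the 95% margin of error of a proportion is roughly 1.96*sqrt(p(1-p)/n), which does not involve the population (pot) size at all. A minimal sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample proportion.
    The population size does not appear: the 'pot' can be any size."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000, 1600):
    print(n, round(100 * margin_of_error(n), 1), "percentage points")
# roughly 10, 5, 3 and 2.5 points: accuracy depends on n alone
```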

  12. Re-evaluation of groundwater monitoring data for glyphosate and bentazone by taking detection limits into account.

    PubMed

    Hansen, Claus Toni; Ritz, Christian; Gerhard, Daniel; Jensen, Jens Erik; Streibig, Jens Carl

    2015-12-01

    Current regulatory assessment of pesticide contamination of Danish groundwater is based exclusively on samples with pesticide concentrations above the detection limit. Here we demonstrate that a realistic quantification of pesticide contamination requires the inclusion of "non-detect" samples, i.e. samples with concentrations below the detection limit, as left-censored observations. The median calculated pesticide concentrations are shown to be reduced 10^4- to 10^5-fold for two representative herbicides (glyphosate and bentazone) relative to the median concentrations based upon observations above detection limits alone. Copyright © 2015 Elsevier B.V. All rights reserved.
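
    A standard way to include non-detects is to treat them as left-censored observations in a parametric (here lognormal) maximum-likelihood fit: detected values contribute the density, non-detects contribute the cumulative probability below the detection limit. The sketch below uses entirely hypothetical monitoring data and an assumed lognormal model; it is not the statistical procedure of the paper, only an illustration of why detects-only summaries overstate typical concentrations.

```python
import numpy as np
from scipy import stats, optimize

def fit_lognormal_censored(detects, n_nondetect, lod):
    """Maximum-likelihood fit of a lognormal concentration distribution when
    n_nondetect samples are only known to lie below the detection limit lod."""
    def neg_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        # detected values contribute the lognormal density
        ll_det = np.sum(stats.norm.logpdf(np.log(detects), mu, sigma) - np.log(detects))
        # non-detects contribute P(X < lod)
        ll_cens = n_nondetect * stats.norm.logcdf((np.log(lod) - mu) / sigma)
        return -(ll_det + ll_cens)
    res = optimize.minimize(neg_loglik, x0=[np.log(lod), 1.0],
                            method="Nelder-Mead", options={"maxiter": 5000})
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    return np.exp(mu_hat), sigma_hat           # estimated median and log-scale SD

# hypothetical monitoring data: 2000 wells, detection limit 0.01 ug/L
rng = np.random.default_rng(3)
true_conc = rng.lognormal(mean=np.log(1e-4), sigma=2.5, size=2000)
lod = 0.01
detects = true_conc[true_conc >= lod]
median_mle, _ = fit_lognormal_censored(detects, np.sum(true_conc < lod), lod)
print(len(detects), np.median(detects), median_mle)
# the detects-only median sits orders of magnitude above the censored-MLE median
```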

  13. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of literature published in 2005. The frequency of reporting sample size calculations and the sample sizes themselves were extracted from the published literature. A manual search of the five clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study. Another study reported consideration of sample size without a calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies consider sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards of design and reporting of diagnostic studies is warranted.

  14. RNA-seq mixology: designing realistic control experiments to compare protocols and analysis methods

    PubMed Central

    Holik, Aliaksei Z.; Law, Charity W.; Liu, Ruijie; Wang, Zeya; Wang, Wenyi; Ahn, Jaeil; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.

    2017-01-01

    Abstract Carefully designed control experiments provide a gold standard for benchmarking different genomics research tools. A shortcoming of many gene expression control studies is that replication involves profiling the same reference RNA sample multiple times. This leads to low, pure technical noise that is atypical of regular studies. To achieve a more realistic noise structure, we generated a RNA-sequencing mixture experiment using two cell lines of the same cancer type. Variability was added by extracting RNA from independent cell cultures and degrading particular samples. The systematic gene expression changes induced by this design allowed benchmarking of different library preparation kits (standard poly-A versus total RNA with Ribozero depletion) and analysis pipelines. Data generated using the total RNA kit had more signal for introns and various RNA classes (ncRNA, snRNA, snoRNA) and less variability after degradation. For differential expression analysis, voom with quality weights marginally outperformed other popular methods, while for differential splicing, DEXSeq was simultaneously the most sensitive and the most inconsistent method. For sample deconvolution analysis, DeMix outperformed IsoPure convincingly. Our RNA-sequencing data set provides a valuable resource for benchmarking different protocols and data pre-processing workflows. The extra noise mimics routine lab experiments more closely, ensuring any conclusions are widely applicable. PMID:27899618

  15. Pb isotope geochemistry of Piton de la Fournaise historical lavas

    NASA Astrophysics Data System (ADS)

    Vlastélic, Ivan; Deniel, Catherine; Bosq, Chantal; Télouk, Philippe; Boivin, Pierre; Bachèlery, Patrick; Famin, Vincent; Staudacher, Thomas

    2009-07-01

    Variations of Pb isotopes in historical lavas (1927-2007) from Piton de la Fournaise are investigated based on new (116 samples) and published (127 samples) data. The lead isotopic signal exhibits smooth fluctuations (18.87 < ²⁰⁶Pb/²⁰⁴Pb < 18.94) on which unradiogenic spikes (²⁰⁶Pb/²⁰⁴Pb down to 18.70) are superimposed. Lead isotopes are decoupled from ⁸⁷Sr/⁸⁶Sr and ¹⁴³Nd/¹⁴⁴Nd, which display small and barely significant variations, respectively. No significant change of Pb isotope composition occurred during the longest (> 3 years) periods of inactivity of the volcano (1939-1942, 1966-1972, 1992-1998), supporting previous inferences that Pb isotopic variations occur mostly during and not between eruptions. Intermediate compositions (18.904 < ²⁰⁶Pb/²⁰⁴Pb < 18.917) bracket the longest periods of quiescence. In this respect, the highly frequent occurrence of an intermediate composition (18.90 < ²⁰⁶Pb/²⁰⁴Pb < 18.91), which clearly defines an isotopic baseline during the most recent densely sampled period (1975-2007), suggests either direct sampling of plume melts or sampling of a voluminous magma reservoir that buffers Pb isotopic composition. Deviations from this prevalent composition occurred during well-defined time periods, namely 1977-1986 (radiogenic signature), 1986-1990 and 1998-2005 (unradiogenic signatures). The three periods display a progressive isotopic drift ending with a rapid return (mostly during a single eruption) to the isotopic baseline. The isotopic gradients could reflect progressive emptying of small magma reservoirs or magma conduits, which are expected to be more sensitive to wall-rock interactions than the main magma chamber. These gradients provide a lower bound ranging from 0.1 to 0.17 km³ for the size of the shallow magma storage system. The isotopic shifts (March 1986, January 1990 and February 2005) are interpreted as refilling of the plumbing system with deep melts that have not interacted with crustal components. The volume of magma erupted between the two major refilling events of March 1986 and February 2005 (0.28 km³) could provide a realistic estimate of the magma reservoir size. Unradiogenic anomalies appear to be linked, more or less directly, to the eruption of olivine-rich lavas. The related samples have low ²⁰⁶Pb/²⁰⁴Pb and ²⁰⁸Pb/²⁰⁴Pb but normal ²⁰⁷Pb/²⁰⁴Pb, suggesting a recent decrease of U/Pb and Th/Pb, for instance through sequestration of Pb into sulfides. Olivine and sulfides, which are both denser than silicate melts, could be entrained with magma pulses, which give rise to high-flux oceanite eruptions.

  16. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
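
    As a rough illustration of the recommendation above, the following sketch (not the authors' code) computes an upper confidence limit (UCL) of a pilot-sample SD and feeds it into a standard two-arm sample size formula. The function names, pilot data, and effect size are illustrative assumptions.

    ```python
    # Sketch only: plan with a UCL of the pilot SD instead of the sample SD itself.
    import numpy as np
    from scipy import stats

    def ucl_sd(sample_sd, n, level=0.60):
        """One-sided upper confidence limit for the population SD, based on the
        chi-square distribution of (n - 1) * s^2 / sigma^2."""
        df = n - 1
        return sample_sd * np.sqrt(df / stats.chi2.ppf(1.0 - level, df))

    def n_per_group(sd, delta, alpha=0.05, power=0.80):
        """Normal-approximation sample size per arm for comparing two means."""
        z_a = stats.norm.ppf(1.0 - alpha / 2.0)
        z_b = stats.norm.ppf(power)
        return int(np.ceil(2.0 * ((z_a + z_b) * sd / delta) ** 2))

    pilot = np.array([61.0, 55.0, 72.0, 44.0, 58.0])             # hypothetical pilot data
    s = pilot.std(ddof=1)
    print(n_per_group(s, delta=22.0))                            # naive: sample SD
    print(n_per_group(ucl_sd(s, len(pilot), 0.60), delta=22.0))  # 60% UCL of SD
    ```

    The UCL-based plan is deliberately conservative, which is the point of the recommendation: it protects against the tendency of small pilot samples to underestimate the population SD.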

  17. Phonon bottleneck identification in disordered nanoporous materials

    NASA Astrophysics Data System (ADS)

    Romano, Giuseppe; Grossman, Jeffrey C.

    2017-09-01

    Nanoporous materials are a promising platform for thermoelectrics in that they offer high thermal conductivity tunability while preserving good electrical properties, a crucial requirement for high-efficiency thermal energy conversion. Understanding the impact of the pore arrangement on thermal transport is pivotal to engineering realistic materials, where pore disorder is unavoidable. Although there has been considerable progress in modeling thermal size effects in nanostructures, it has remained a challenge to screen such materials over a large phase space due to the slow simulation time required for accurate results. We use density functional theory in connection with the Boltzmann transport equation to perform calculations of thermal conductivity in disordered porous materials. By leveraging graph theory and regressive analysis, we identify the set of pores representing the phonon bottleneck and obtain a descriptor for thermal transport, based on the sum of the pore-pore distances between such pores. This approach provides a simple tool to estimate phonon suppression in realistic porous materials for thermoelectric applications and enhances our understanding of heat transport in disordered materials.

  18. The island coalescence problem: Scaling of reconnection in extended fluid models including higher-order moments

    DOE PAGES

    Ng, Jonathan; Huang, Yi -Min; Hakim, Ammar; ...

    2015-11-05

    As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large scale particle-in-cell simulations of island coalescence have shown that the time averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.

  19. Indirect Reconstruction of Pore Morphology for Parametric Computational Characterization of Unidirectional Porous Iron.

    PubMed

    Kovačič, Aljaž; Borovinšek, Matej; Vesenjak, Matej; Ren, Zoran

    2018-01-26

    This paper addresses the problem of reconstructing realistic, irregular pore geometries of lotus-type porous iron for computer models that allow for simple porosity and pore size variation in computational characterization of their mechanical properties. The presented methodology uses image-recognition algorithms for the statistical analysis of pore morphology in real material specimens, from which a unique fingerprint of pore morphology at a certain porosity level is derived. The representative morphology parameter is introduced and used for the indirect reconstruction of realistic and statistically representative pore morphologies, which can be used for the generation of computational models with an arbitrary porosity. Such models were subjected to parametric computer simulations to characterize the dependence of engineering elastic modulus on the porosity of lotus-type porous iron. The computational results are in excellent agreement with experimental observations, which confirms the suitability of the presented methodology of indirect pore geometry reconstruction for computational simulations of similar porous materials.

  20. Modeling the Earth's magnetospheric magnetic field confined within a realistic magnetopause

    NASA Technical Reports Server (NTRS)

    Tsyganenko, N. A.

    1995-01-01

    Empirical data-based models of the magnetospheric magnetic field have been widely used during recent years. However, the existing models (Tsyganenko, 1987, 1989a) have three serious deficiencies: (1) an unstable de facto magnetopause, (2) a crude parametrization by the Kp index, and (3) inaccuracies in the equatorial magnetotail Bz values. This paper describes a new approach to the problem; the essential new features are (1) a realistic shape and size of the magnetopause, based on fits to a large number of observed crossings (allowing a parametrization by the solar wind pressure), (2) fully controlled shielding of the magnetic field produced by all magnetospheric current systems, (3) new flexible representations for the tail and ring currents, and (4) a new directional criterion for fitting the model field to spacecraft data, providing improved accuracy for field line mapping. Results are presented from initial efforts to create models assembled from these modules and calibrated against spacecraft data sets.

  1. The Key to Successful Achievement as an Undergraduate Student: Confidence and Realistic Expectations?

    ERIC Educational Resources Information Center

    Nicholson, Laura; Putwain, David; Connors, Liz; Hornby-Atkinson, Pat

    2013-01-01

    This study examined how expectations of independent study and academic behavioural confidence predicted end-of-semester marks in a sample of undergraduate students. Students' expectations and academic behavioural confidence were measured near the beginning of the semester, and academic performance was taken from aggregated end-of-semester marks.…

  2. The Role of Contexts and Teacher's Questioning to Enhance Students' Thinking

    ERIC Educational Resources Information Center

    Widjaja, Wanty; Dolk, Maarten; Fauzan, Ahmad

    2010-01-01

    This paper discusses results from design research in line with Realistic Mathematics Education (RME). Daily cycles of design, classroom experiments, and retrospective analysis are enacted in five days of work on division by fractions. Data consist of episodes of video classroom discussions and samples of students' work. The focus of…

  3. Public's and Police Officers' Interpretation and Handling of Domestic Violence Cases: Divergent Realities

    ERIC Educational Resources Information Center

    Stalans, Loretta J.; Finn, Mary A.

    2006-01-01

    The public's and police officers' interpretation and handling of realistic hypothetical domestic violence cases and their stereotypic views about domestic violence are discussed. A sample of 131 experienced officers, 127 novice officers, and 157 adult laypersons were randomly assigned to read a domestic violence case. Experienced officers were…

  4. Gene family evolution: an in-depth theoretical and simulation analysis of non-linear birth-death-innovation models.

    PubMed

    Karev, Georgy P; Wolf, Yuri I; Berezovskaya, Faina S; Koonin, Eugene V

    2004-09-09

    The size distribution of gene families in a broad range of genomes is well approximated by a generalized Pareto function. Evolution of ensembles of gene families can be described with Birth, Death, and Innovation Models (BDIMs). Analysis of the properties of different versions of BDIMs has the potential of revealing important features of genome evolution. In this work, we extend our previous analysis of stochastic BDIMs. In addition to the previously examined rational BDIMs, we introduce potentially more realistic logistic BDIMs, in which birth/death rates are limited for the largest families, and show that their properties are similar to those of models that include no such limitation. We show that the mean time required for the formation of the largest gene families detected in eukaryotic genomes is limited by the mean number of duplications per gene and does not increase indefinitely with the model degree. Instead, this time reaches a minimum value, which corresponds to a non-linear rational BDIM with the degree of approximately 2.7. Even for this BDIM, the mean time of the largest family formation is orders of magnitude greater than any realistic estimates based on the timescale of life's evolution. We employed the embedding chains technique to estimate the expected number of elementary evolutionary events (gene duplications and deletions) preceding the formation of gene families of the observed size and found that the mean number of events exceeds the family size by orders of magnitude, suggesting a highly dynamic process of genome evolution. The variance of the time required for the formation of the largest families was found to be extremely large, with the coefficient of variation > 1. This indicates that some gene families might grow much faster than the mean rate such that the minimal time required for family formation is more relevant for a realistic representation of genome evolution than the mean time. We determined this minimal time using Monte Carlo simulations of family growth from an ensemble of simultaneously evolving singletons. In these simulations, the time elapsed before the formation of the largest family was much shorter than the estimated mean time and was compatible with the timescale of evolution of eukaryotes. The analysis of stochastic BDIMs presented here shows that non-linear versions of such models can well approximate not only the size distribution of gene families but also the dynamics of their formation during genome evolution. The fact that only higher degree BDIMs are compatible with the observed characteristics of genome evolution suggests that the growth of gene families is self-accelerating, which might reflect differential selective pressure acting on different genes.

  5. Local extinction and recolonization, species effective population size, and modern human origins.

    PubMed

    Eller, Elise; Hawks, John; Relethford, John H

    2004-10-01

    A primary objection from a population genetics perspective to a multiregional model of modern human origins is that the model posits a large census size, whereas genetic data suggest a small effective population size. The relationship between census size and effective size is complex, but arguments based on an island model of migration show that if the effective population size reflects the number of breeding individuals and the effects of population subdivision, then an effective population size of 10,000 is inconsistent with the census size of 500,000 to 1,000,000 that has been suggested by archeological evidence. However, these models have ignored the effects of population extinction and recolonization, which increase the expected variance among demes and reduce the inbreeding effective population size. Using models developed for population extinction and recolonization, we show that a large census size consistent with the multiregional model can be reconciled with an effective population size of 10,000, but genetic variation among demes must be high, reflecting low interdeme migration rates and a colonization process that involves a small number of colonists or kin-structured colonization. Ethnographic and archeological evidence is insufficient to determine whether such demographic conditions existed among Pleistocene human populations, and further work needs to be done. More realistic models that incorporate isolation by distance and heterogeneity in extinction rates and effective deme sizes also need to be developed. However, if true, a process of population extinction and recolonization has interesting implications for human demographic history.

  6. From individuals to populations to communities: a dynamic energy budget model of marine ecosystem size-spectrum including life history diversity.

    PubMed

    Maury, Olivier; Poggiale, Jean-Christophe

    2013-05-07

    Individual metabolism, predator-prey relationships, and the role of biodiversity are major factors underlying the dynamics of food webs and their response to environmental variability. Despite their crucial, complementary and interacting influences, they are usually not considered simultaneously in current marine ecosystem models. In an attempt to fill this gap and determine if these factors and their interaction are sufficient to allow realistic community structure and dynamics to emerge, we formulate a mathematical model of the size-structured dynamics of marine communities which integrates mechanistically individual, population and community levels. The model represents the transfer of energy generated in both time and size by an infinite number of interacting fish species spanning from very small to very large species. It is based on standard individual level assumptions of the Dynamic Energy Budget theory (DEB) as well as important ecological processes such as opportunistic size-based predation and competition for food. Resting on the inter-specific body-size scaling relationships of the DEB theory, the diversity of life-history traits (i.e. biodiversity) is explicitly integrated. The stationary solutions of the model as well as the transient solutions arising when environmental signals (e.g. variability of primary production and temperature) propagate through the ecosystem are studied using numerical simulations. It is shown that in the absence of density-dependent feedback processes, the model exhibits unstable oscillations. Density-dependent schooling probability and schooling-dependent predatory and disease mortalities are proposed to be important stabilizing factors allowing stationary solutions to be reached. At the community level, the shape and slope of the obtained quasi-linear stationary spectrum matches well with empirical studies. When oscillations of primary production are simulated, the model predicts that the variability propagates along the spectrum in a given frequency-dependent size range before decreasing for larger sizes. At the species level, the simulations show that small and large species dominate the community successively (small species being more abundant at small sizes and large species being more abundant at large sizes) and that the total biomass of a species decreases with its maximal size which again corroborates empirical studies. Our results indicate that the simultaneous consideration of individual growth and reproduction, size-structured trophic interactions, the diversity of life-history traits and a density-dependent stabilizing process allow realistic community structure and dynamics to emerge without any arbitrary prescription. As a logical consequence of our model construction and a basis for future studies, we define the function Φ as the relative contribution of each species to the total biomass of the ecosystem, for any given size. We argue that this function is a measure of the functional role of biodiversity characterizing the impact of the structure of the community (its species composition) on its function (the relative proportions of losses, dissipation and biological work). Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
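
    The translation from a fitted mean-variance relationship to a required number of quadrats can be sketched as follows; the Taylor coefficients a and b are hypothetical, while k = 0.3742 is the common k reported above.

    ```python
    # Illustrative sketch: number of 10-m^2 quadrats needed for a fixed precision D
    # (standard error divided by the mean), using either a Taylor power law
    # s^2 = a * m^b or a negative binomial distribution with common k.
    import numpy as np

    def n_taylor(mean, a, b, D=0.25):
        return int(np.ceil(a * mean ** (b - 2.0) / D ** 2))

    def n_negbin(mean, k, D=0.25):
        return int(np.ceil((1.0 / mean + 1.0 / k) / D ** 2))

    for m in (0.02, 0.04, 0.10):                 # ticks per 10-m^2 quadrat
        print(m, n_taylor(m, a=1.8, b=1.4), n_negbin(m, k=0.3742))
    ```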

  8. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
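
    A minimal sketch of the two rules, under an assumed linear cost model total_cost(n) = fixed + per_subject * n (the cost figures are invented, not taken from the paper):

    ```python
    import numpy as np

    fixed, per_subject = 200_000.0, 1_500.0
    n = np.arange(10, 2001, dtype=float)

    cost = fixed + per_subject * n
    rule2 = n[np.argmin(cost / np.sqrt(n))]       # minimize total cost / sqrt(n)
    print(rule2, fixed / per_subject)             # matches the closed form n* = fixed / per_subject

    # Rule 1 (minimize average cost per subject) only gives a finite optimum when
    # per-subject costs eventually rise with n, e.g. extra recruiting sites.
    cost_nl = fixed + per_subject * n + 0.5 * n ** 2
    rule1 = n[np.argmin(cost_nl / n)]
    print(rule1)
    ```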

  9. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests, widely distributed read counts and dispersions for different genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-seq data. Datasets from previous, similar experiments such as the Cancer Genome Atlas (TCGA) can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way for power and sample size estimation for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
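
    A generic per-gene sketch of the underlying idea (not the package's algorithm): estimate power for a single gene by simulating negative binomial counts and testing the group effect with a negative binomial GLM. The fold change, dispersion, and read depth are illustrative, and the real method additionally controls the false discovery rate across thousands of genes.

    ```python
    import numpy as np
    import statsmodels.api as sm

    def nb_power(n_per_group, mean_control, fold_change, dispersion,
                 alpha=0.05, n_sim=200, seed=1):
        rng = np.random.default_rng(seed)
        group = np.r_[np.zeros(n_per_group), np.ones(n_per_group)]
        X = sm.add_constant(group)
        mu = np.r_[np.full(n_per_group, mean_control),
                   np.full(n_per_group, mean_control * fold_change)]
        size = 1.0 / dispersion                  # NB2: var = mu + dispersion * mu^2
        p = size / (size + mu)
        hits = 0
        for _ in range(n_sim):
            y = rng.negative_binomial(size, p)
            fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=dispersion)).fit()
            hits += fit.pvalues[1] < alpha       # test of the group coefficient
        return hits / n_sim

    print(nb_power(n_per_group=6, mean_control=50, fold_change=2.0, dispersion=0.2))
    ```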

  10. Wave climate and nearshore lakebed response, Illinois Beach State Park, Lake Michigan

    USGS Publications Warehouse

    Booth, J.S.

    1994-01-01

    Only under these major storm conditions is there a realistic potential for wave-lakebed interaction (and associated wind-driven currents) to cause a significant net modification to the outer nearshore lakebed which, in turn, may promulgate change in the inner nearshore (surf) zone. Analysis of bathymetric and sediment grain-size data, used in conjunction with published wave hindcast data, wave propagation modeling, and previous studies in the area, indicates that this potential occurs, most likely, on a scale of years. -from Author

  11. Temporal Planning for Compilation of Quantum Approximate Optimization Algorithm Circuits

    NASA Technical Reports Server (NTRS)

    Venturelli, Davide; Do, Minh Binh; Rieffel, Eleanor Gilbert; Frank, Jeremy David

    2017-01-01

    We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus our initial experiments on Quantum Approximate Optimization Algorithm (QAOA) circuits that have few ordering constraints and allow highly parallel plans. We report on experiments using several temporal planners to compile circuits of various sizes to realistic hardware. This early empirical evaluation suggests that temporal planning is a viable approach to quantum circuit compilation.

  12. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved for the interior cell proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
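
    A sketch of the asymptotic unconditional McNemar sample size (number of pairs), written from the hypothesized discordant cell proportions of the 2 × 2 table; the function and variable names are mine, not the authors'.

    ```python
    import math
    from scipy.stats import norm

    def mcnemar_pairs(p10, p01, alpha=0.05, power=0.80):
        diff, disc = p10 - p01, p10 + p01
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
        n = (z_a * math.sqrt(disc) + z_b * math.sqrt(disc - diff ** 2)) ** 2 / diff ** 2
        return math.ceil(n)

    print(mcnemar_pairs(p10=0.25, p01=0.10))     # e.g. 25% vs 10% discordant pairs
    ```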

  13. Global scale variability of the mineral dust long-wave refractive index: a new dataset of in situ measurements for climate modeling and remote sensing

    NASA Astrophysics Data System (ADS)

    Di Biagio, Claudia; Formenti, Paola; Balkanski, Yves; Caponi, Lorenzo; Cazaunau, Mathieu; Pangui, Edouard; Journet, Emilie; Nowak, Sophie; Caquineau, Sandrine; Andreae, Meinrat O.; Kandler, Konrad; Saeed, Thuraya; Piketh, Stuart; Seibert, David; Williams, Earle; Doussin, Jean-François

    2017-02-01

    Modeling the interaction of dust with long-wave (LW) radiation is still a challenge because of the scarcity of information on the complex refractive index of dust from different source regions. In particular, little is known about the variability of the refractive index as a function of the dust mineralogical composition, which depends on the specific emission source, and its size distribution, which is modified during transport. As a consequence, to date, climate models and remote sensing retrievals generally use a spatially invariant and time-constant value for the dust LW refractive index. In this paper, the variability of the mineral dust LW refractive index as a function of its mineralogical composition and size distribution is explored by in situ measurements in a large smog chamber. Mineral dust aerosols were generated from 19 natural soils from 8 regions: northern Africa, the Sahel, eastern Africa and the Middle East, eastern Asia, North and South America, southern Africa, and Australia. Soil samples were selected from a total of 137 available samples in order to represent the diversity of sources from arid and semi-arid areas worldwide and to account for the heterogeneity of the soil composition at the global scale. Aerosol samples generated from soils were re-suspended in the chamber, where their LW extinction spectra (3-15 µm), size distribution, and mineralogical composition were measured. The generated aerosol exhibits a realistic size distribution and mineralogy, including both the sub- and super-micron fractions, and represents in typical atmospheric proportions the main LW-active minerals, such as clays, quartz, and calcite. The complex refractive index of the aerosol is obtained by an optical inversion based upon the measured extinction spectrum and size distribution. Results from the present study show that the imaginary LW refractive index (k) of dust varies greatly both in magnitude and spectral shape from sample to sample, reflecting the differences in particle composition. In the 3-15 µm spectral range, k is between ˜ 0.001 and 0.92. The strength of the dust absorption at ˜ 7 and 11.4 µm depends on the amount of calcite within the samples, while the absorption between 8 and 14 µm is determined by the relative abundance of quartz and clays. The imaginary part (k) is observed to vary both from region to region and for varying sources within the same region. Conversely, for the real part (n), which is in the range 0.84-1.94, values are observed to agree for all dust samples across most of the spectrum within the error bars. This implies that while a constant n can be probably assumed for dust from different sources, a varying k should be used both at the global and the regional scale. A linear relationship between the magnitude of the imaginary refractive index at 7.0, 9.2, and 11.4 µm and the mass concentration of calcite and quartz absorbing at these wavelengths was found. We suggest that this may lead to predictive rules to estimate the LW refractive index of dust in specific bands based on an assumed or predicted mineralogical composition, or conversely, to estimate the dust composition from measurements of the LW extinction at specific wavebands. Based on the results of the present study, we recommend that climate models and remote sensing instruments operating at infrared wavelengths, such as IASI (infrared atmospheric sounder interferometer), use regionally dependent refractive indices rather than generic values. 
Our observations also suggest that the refractive index of dust in the LW does not change as a result of the loss of coarse particles by gravitational settling, so that constant values of n and k could be assumed close to sources and following transport. The whole dataset of the dust complex refractive indices presented in this paper is made available to the scientific community in the Supplement.

  14. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.

    PubMed

    McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M

    2015-03-01

    Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.
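
    A minimal sketch of a replicable calculation for a continuous outcome: every input below is an element a report would need to state for the calculated sample size to be reproduced (the numbers themselves are illustrative).

    ```python
    from statsmodels.stats.power import TTestIndPower

    treatment_effect = 1.5      # minimally important difference, in outcome units
    sd = 2.0                    # assumed common standard deviation
    alpha = 0.05                # two-sided type I error rate
    power = 0.90                # 1 - type II error rate
    ratio = 1.0                 # allocation ratio n2 / n1

    n1 = TTestIndPower().solve_power(effect_size=treatment_effect / sd, alpha=alpha,
                                     power=power, ratio=ratio,
                                     alternative='two-sided')
    print(round(n1))            # required size of the first arm
    ```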

  15. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
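
    As a toy illustration of the decision-theoretic framing (my utility function and numbers, not the paper's derivation), a grid search can pick the total trial size n that maximizes the expected gain to the remaining N − n patients, weighted by the trial's power, minus a per-patient trial cost.

    ```python
    import numpy as np
    from statsmodels.stats.power import TTestIndPower

    def optimal_n(N, effect_size=0.3, gain=1.0, cost=0.5, alpha=0.05):
        n_grid = np.arange(10, min(N, 4000), 2, dtype=float)   # total trial size
        power = TTestIndPower().power(effect_size=effect_size,
                                      nobs1=n_grid / 2, alpha=alpha)
        utility = gain * (N - n_grid) * power - cost * n_grid
        return n_grid[np.argmax(utility)]

    for N in (1_000, 10_000, 100_000):
        print(N, optimal_n(N))        # the optimum grows with N, but sublinearly
    ```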

  16. Comparison of texture synthesis methods for content generation in ultrasound simulation for training

    NASA Astrophysics Data System (ADS)

    Mattausch, Oliver; Ren, Elizabeth; Bajka, Michael; Vanhoey, Kenneth; Goksel, Orcun

    2017-03-01

    Navigation and interpretation of ultrasound (US) images require substantial expertise, the training of which can be aided by virtual-reality simulators. However, a major challenge in creating plausible simulated US images is the generation of realistic ultrasound speckle. Since typical ultrasound speckle exhibits many properties of Markov Random Fields, it is conceivable to use texture synthesis for generating plausible US appearance. In this work, we investigate popular classes of texture synthesis methods for generating realistic US content. In a user study, we evaluate their performance for reproducing homogeneous tissue regions in B-mode US images from small image samples of similar tissue and report the best-performing synthesis methods. We further show that regression trees can be used on speckle texture features to learn a predictor for US realism.

  17. Meshless Modeling of Deformable Shapes and their Motion

    PubMed Central

    Adams, Bart; Ovsjanikov, Maks; Wand, Michael; Seidel, Hans-Peter; Guibas, Leonidas J.

    2010-01-01

    We present a new framework for interactive shape deformation modeling and key frame interpolation based on a meshless finite element formulation. Starting from a coarse nodal sampling of an object’s volume, we formulate rigidity and volume preservation constraints that are enforced to yield realistic shape deformations at interactive frame rates. Additionally, by specifying key frame poses of the deforming shape and optimizing the nodal displacements while targeting smooth interpolated motion, our algorithm extends to a motion planning framework for deformable objects. This allows reconstructing smooth and plausible deformable shape trajectories in the presence of possibly moving obstacles. The presented results illustrate that our framework can handle complex shapes at interactive rates and hence is a valuable tool for animators to realistically and efficiently model and interpolate deforming 3D shapes. PMID:24839614

  18. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sample sizes that are sufficient for screening and diagnostic studies. Although formulae for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation may not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches to using the tables are also discussed. PMID:27891446
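
    A sketch of one common precision-based calculation behind such tables (a Buderer-type formula); this is not necessarily the exact PASS formulation used to generate the published tables.

    ```python
    import math
    from scipy.stats import norm

    def n_for_accuracy(se, sp, prevalence, d=0.05, alpha=0.05):
        """Subjects needed to estimate sensitivity and specificity to within a
        margin d, given the expected prevalence of the target condition."""
        z = norm.ppf(1 - alpha / 2)
        n_se = z ** 2 * se * (1 - se) / (d ** 2 * prevalence)
        n_sp = z ** 2 * sp * (1 - sp) / (d ** 2 * (1 - prevalence))
        return math.ceil(max(n_se, n_sp))

    print(n_for_accuracy(se=0.90, sp=0.85, prevalence=0.20))
    ```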

  19. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
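
    A sketch of the comparison described above (illustrative numbers): the power actually achieved when the observed standardized effect is smaller than the effect assumed in the original sample size calculation.

    ```python
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n1 = analysis.solve_power(effect_size=0.50, alpha=0.05, power=0.80)   # planned
    print(round(n1))                                               # about 64 per arm
    print(analysis.power(effect_size=0.35, nobs1=n1, alpha=0.05))  # roughly 0.5
    ```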

  20. Dynamics of entanglement between two atomic samples with spontaneous scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Lisi, Antonio; De Siena, Silvio; Illuminati, Fabrizio

    2004-07-01

    We investigate the effects of spontaneous scattering on the evolution of entanglement of two atomic samples, probed by phase-shift measurements on optical beams interacting with both samples. We develop a formalism of conditional quantum evolutions and present a wave function analysis implemented in numerical simulations of the state vector dynamics. This method allows us to track the evolution of entanglement and to compare it with the predictions obtained when spontaneous scattering is neglected. We provide numerical evidence that the interferometric scheme to entangle atomic samples is only marginally affected by the presence of spontaneous scattering and should thus be robust even in more realistic situations.

  1. Stand-off imaging Raman spectroscopy for forensic analysis of post-blast scenes: trace detection of ammonium nitrate and 2,4,6-trinitrotoluene

    NASA Astrophysics Data System (ADS)

    Ceco, Ema; Önnerud, Hans; Menning, Dennis; Gilljam, John L.; Bââth, Petra; Östmark, Henric

    2014-05-01

    The following paper presents a realistic forensic capability test of an imaging Raman spectroscopy based demonstrator system, developed at FOI, the Swedish Defence Research Agency. The system uses a 532 nm laser to irradiate a surface of 25 × 25 mm. The backscattered radiation from the surface is collected by an 8" telescope with a subsequent optical system, and is finally imaged onto an ICCD camera. We present here an explosives trace analysis study of samples collected from a realistic scenario after a detonation. A left-behind 5 kg IED, based on ammonium nitrate with a TNT (2,4,6-trinitrotoluene) booster, was detonated in a plastic garbage bin. Aluminum sample plates were mounted vertically on a holder approximately 6 m from the point of detonation. Minutes after the detonation, the samples were analyzed with stand-off imaging Raman spectroscopy from a distance of 10 m. Trace amounts of the secondary explosive (ammonium nitrate) could be detected with an analysis time of 1 min. Measurement results also indicated detection of residues from the booster (TNT). The sample plates were subsequently swabbed and analyzed with HPLC and GC-MS analyses to confirm the results from the stand-off imaging Raman system. The presented findings indicate that it is possible to determine the type of explosive used in an IED from a distance, within minutes after the attack, and without tampering with physical evidence at the crime scene.

  2. High temporal resolution dynamic contrast-enhanced MRI using compressed sensing-combined sequence in quantitative renal perfusion measurement.

    PubMed

    Chen, Bin; Zhao, Kai; Li, Bo; Cai, Wenchao; Wang, Xiaoying; Zhang, Jue; Fang, Jing

    2015-10-01

    To demonstrate the feasibility of the improved temporal resolution by using compressed sensing (CS) combined imaging sequence in dynamic contrast-enhanced MRI (DCE-MRI) of kidney, and investigate its quantitative effects on renal perfusion measurements. Ten rabbits were included in the accelerated scans with a CS-combined 3D pulse sequence. To evaluate the image quality, the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were compared between the proposed CS strategy and the conventional full sampling method. Moreover, renal perfusion was estimated by using the separable compartmental model in both CS simulation and realistic CS acquisitions. The CS method showed DCE-MRI images with improved temporal resolution and acceptable image contrast, while presenting significantly higher SNR than the fully sampled images (p<.01) at 2-, 3- and 4-X acceleration. In quantitative measurements, renal perfusion results were in good agreement with the fully sampled one (concordance correlation coefficient=0.95, 0.91, 0.88) at 2-, 3- and 4-X acceleration in CS simulation. Moreover, in realistic acquisitions, the estimated perfusion by the separable compartmental model exhibited no significant differences (p>.05) between each CS-accelerated acquisition and the full sampling method. The CS-combined 3D sequence could improve the temporal resolution for DCE-MRI in kidney while yielding diagnostically acceptable image quality, and it could provide effective measurements of renal perfusion. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, with 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  4. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
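
    A sketch of the kind of simulation described above (the effect size and number of replicates are my own choices, not the authors'): estimate Type I and Type II error rates of a two-sample t-test at p = 0.05 for small samples.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def error_rates(n, effect_sd=1.0, n_sim=20_000, alpha=0.05):
        a = rng.normal(0.0, 1.0, size=(n_sim, n))
        b_null = rng.normal(0.0, 1.0, size=(n_sim, n))        # no effect
        b_alt = rng.normal(effect_sd, 1.0, size=(n_sim, n))   # effect of 1 SD
        type1 = np.mean(stats.ttest_ind(a, b_null, axis=1).pvalue < alpha)
        type2 = np.mean(stats.ttest_ind(a, b_alt, axis=1).pvalue >= alpha)
        return round(type1, 3), round(type2, 3)

    for n in (3, 6, 9):
        print(n, error_rates(n))
    ```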

  5. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1], which includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, robust estimation of Representative Elementary Volume size for arbitrary physics.
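
    A toy multilevel Monte Carlo sketch (not the authors' open-source workflow): estimate the mean of a quantity of interest approximated at increasing resolution levels, using many cheap coarse samples and a few expensive fine-level corrections. The function Q_level simply stands in for a pore-scale solve.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def Q_level(level, theta):
        """Level-l approximation of the quantity of interest: a midpoint-rule
        average on a grid with 2**(level + 2) cells for a random input theta."""
        m = 2 ** (level + 2)
        x = (np.arange(m) + 0.5) / m
        return np.mean(np.exp(-theta * x))

    def mlmc(n_per_level):
        est = 0.0
        for level, n in enumerate(n_per_level):
            thetas = rng.lognormal(mean=0.0, sigma=0.5, size=n)
            fine = np.array([Q_level(level, t) for t in thetas])
            if level == 0:
                est += fine.mean()
            else:
                # same random inputs on coarse and fine grids -> small variance,
                # so few samples are needed at the expensive levels
                coarse = np.array([Q_level(level - 1, t) for t in thetas])
                est += (fine - coarse).mean()
        return est

    print(mlmc([4000, 1000, 250, 60]))
    ```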

  6. Creating Realistic Data Sets with Specified Properties via Simulation

    ERIC Educational Resources Information Center

    Goldman, Robert N.; McKenzie, John D. Jr.

    2009-01-01

    We explain how to simulate both univariate and bivariate raw data sets having specified values for common summary statistics. The first example illustrates how to "construct" a data set having prescribed values for the mean and the standard deviation--for a one-sample t test with a specified outcome. The second shows how to create a bivariate data…
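
    A minimal sketch of the one-sample construction described above: rescale an arbitrary random draw so the simulated data set has exactly the requested mean and standard deviation.

    ```python
    import numpy as np

    def sample_with_stats(n, mean, sd, seed=0):
        x = np.random.default_rng(seed).normal(size=n)
        z = (x - x.mean()) / x.std(ddof=1)       # standardize the raw draw
        return mean + sd * z                     # mean and SD now match exactly

    y = sample_with_stats(25, mean=100.0, sd=15.0)
    print(y.mean(), y.std(ddof=1))
    ```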

  7. Teaching Tip: Active Learning via a Sample Database: The Case of Microsoft's Adventure Works

    ERIC Educational Resources Information Center

    Mitri, Michel

    2015-01-01

    This paper describes the use and benefits of Microsoft's Adventure Works (AW) database to teach advanced database skills in a hands-on, realistic environment. Database management and querying skills are a key element of a robust information systems curriculum, and active learning is an important way to develop these skills. To facilitate active…

  8. Statistical techniques for sampling and monitoring natural resources

    Treesearch

    Hans T. Schreuder; Richard Ernst; Hugo Ramirez-Maldonado

    2004-01-01

    We present the statistical theory of inventory and monitoring from a probabilistic point of view. We start with the basics and show the interrelationships between designs and estimators illustrating the methods with a small artificial population as well as with a mapped realistic population. For such applications, useful open source software is given in Appendix 4....

  9. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
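
    A sketch of the ratio estimator evaluated above (the simulated population and all names are mine): estimate total abundance from a simple random sample of units, with unit area as the auxiliary variable.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N = 120                                            # number of sampling units
    areas = rng.uniform(2.0, 8.0, size=N)              # km^2 per unit (auxiliary)
    mu = 0.3 * areas                                   # expected count per unit
    k = 0.4                                            # clumping parameter
    counts = rng.negative_binomial(k, k / (k + mu))    # clustered animal counts

    sample = rng.choice(N, size=40, replace=False)     # ~33% sampling intensity
    ratio_hat = counts[sample].sum() / areas[sample].sum()
    total_ratio = ratio_hat * areas.sum()              # ratio estimate of the total
    total_simple = counts[sample].mean() * N           # simple expansion estimate
    print(round(total_simple), round(total_ratio), counts.sum())
    ```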

  10. A comparison of 1D and 1.5D arrays for imaging volumetric flaws in small bore pipework

    NASA Astrophysics Data System (ADS)

    Barber, T. S.; Wilcox, P. D.; Nixon, A. D.

    2015-03-01

    1.5D arrays can be seen as a potentially ideal compromise between 1D arrays and 2D matrix arrays in terms of focusing capability, element density, weld coverage and data processing time. This paper presents an initial study of 1D and 1.5D arrays for high frequency (15 MHz) imaging of volumetric flaws in small-bore (30-60 mm outer diameter) thin-walled (3-8 mm) pipework. A combination of 3D modelling and experimental work is used to determine the Signal to Noise Ratio (SNR) improvement, with a strong relationship observed between SNR and the longer dimension of the array element. This behavior is confirmed experimentally: a 1 mm diameter Flat Bottom Hole (FBH) in Copper-Nickel alloy becomes undetectable when a larger array element is used. A 3-5 dB SNR increase is predicted for a 1.5D array assuming a spherical reflector, and a 2 dB increase was observed in experimental trials with a FBH. It is argued that this improvement is likely to be a lower-bound estimate due to the specular behavior of a FBH; future trials are planned on welded samples with realistic flaws.

  11. Oracle estimation of parametric models under boundary constraints.

    PubMed

    Wong, Kin Yau; Goldberg, Yair; Fine, Jason P

    2016-12-01

    In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference which adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the converse is true, the inference is equivalent to that which is obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.

  12. Physical studies of Centaurs and Trans-Neptunian Objects with the Atacama Large Millimeter Array

    NASA Astrophysics Data System (ADS)

    Moullet, Arielle; Lellouch, Emmanuel; Moreno, Raphael; Gurwell, Mark

    2011-05-01

    Once completed, the Atacama Large Millimeter Array (ALMA) will be the most powerful (sub)millimeter interferometer in terms of sensitivity, spatial resolution and imaging. This paper presents the capabilities of ALMA applied to the observation of Centaurs and Trans-Neptunian Objects, and their possible output in terms of physical properties. Realistic simulations were performed to explore the performances of the different frequency bands and array configurations, and several projects are detailed along with their feasibility, their limitations and their possible targets. Determination of diameters and albedos via the radiometric method appears to be possible on ˜500 objects, while sampling of the thermal lightcurve to derive the bodies' ellipticity could be performed on at least 30 bodies that display a significant optical lightcurve. On a limited number of objects, the spatial resolution allows for direct measurement of the size or even surface mapping with a resolution down to 13 milliarcsec. Finally, ALMA could separate members of multiple systems with a separation power comparable to that of the HST. The overall performance of ALMA will make it an invaluable instrument to explore the outer Solar System, complementary to space-based telescopes and spacecraft.

  13. Miniaturized inertial impactor for personal airborne particulate monitoring: Prototyping

    NASA Astrophysics Data System (ADS)

    Pasini, Silvia; Bianchi, Elena; Dubini, Gabriele; Cortelezzi, Luca

    2017-11-01

    Computational fluid dynamics (CFD) simulations allowed us to conceive and design a miniaturized inertial impactor able to collect fine airborne particulate matter (PM10, PM2.5 and PM1). We created, by 3D printing, a prototype of the impactor. We first performed a set of experiments by applying a suction pump to the outlets and sampling the airborne particulate of our laboratory. The analysis of the slide showed a collection of a large number of particles, spanning a wide range of sizes, organized in a narrow band located below the exit of the nozzle. In order to show that our miniaturized inertial impactor can be truly used as a personal air-quality monitor, we performed a second set of experiments where the suction needed to produce the airflow through the impactor is generated by a human being inhaling through the outlets of the prototype. To guarantee a number of particles sufficient to perform a quantitative characterization, we collected particles by performing ten consecutive deep inhalations. Finally, the potential for realistic applications of our miniaturized inertial impactor, used in combination with a miniaturized single-particle detector, is discussed. CARIPLO Foundation - project MINUTE (Grant No. 2011-2118).

  14. An actuarial approach to retrofit savings in buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subbarao, Krishnappa; Etingov, Pavel V.; Reddy, T. A.

    An actuarial method has been developed for determining energy savings from retrofits from energy use data for a number of buildings. This method should be contrasted with the traditional method of using pre- and post-retrofit data on the same building. This method supports the U.S. Department of Energy Building Performance Database of real building performance data and related tools that enable engineering and financial practitioners to evaluate retrofits. The actuarial approach derives, from the database, probability density functions (PDFs) for energy savings from retrofits by creating peer groups for the user's pre- and post-retrofit buildings. From the energy use distribution of the two groups, the savings PDF is derived. This provides the basis for engineering analysis as well as financial risk analysis leading to investment decisions. Several technical issues are addressed: The savings PDF is obtained from the pre- and post-PDF through a convolution. Smoothing using kernel density estimation is applied to make the PDF more realistic. The low data density problem can be mitigated through a neighborhood methodology. Correlations between pre and post buildings are addressed to improve the savings PDF. Sample size effects are addressed through the Kolmogorov-Smirnov tests and quantile-quantile plots.
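
    The convolution step described above can be illustrated with a short sketch: smooth the pre- and post-retrofit energy-use distributions with kernel density estimates, then convolve one with the reflection of the other to obtain the savings PDF. The peer-group numbers are invented for illustration; this is not the Building Performance Database code.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical annual energy use (kWh/m^2) for the two peer groups
pre_group = rng.normal(180, 25, size=500)    # buildings before the retrofit type
post_group = rng.normal(150, 20, size=400)   # comparable buildings after retrofit

# Kernel-density smoothing of each group's energy-use distribution
grid = np.linspace(50, 300, 1000)
pre_pdf = gaussian_kde(pre_group)(grid)
post_pdf = gaussian_kde(post_group)(grid)

# Savings = pre - post; its PDF is the convolution of the pre PDF with the
# reflected post PDF (difference of two independent random variables).
step = grid[1] - grid[0]
savings_pdf = np.convolve(pre_pdf, post_pdf[::-1]) * step
savings_grid = np.arange(len(savings_pdf)) * step + (grid[0] - grid[-1])

mean_savings = np.sum(savings_grid * savings_pdf) * step
print(mean_savings)   # close to 30 kWh/m^2 for these illustrative inputs
```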

  15. Incorporating measurement error in n = 1 psychological autoregressive modeling

    PubMed Central

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
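
    A small simulation, in the spirit of the study described above but not its actual code, makes the bias concrete: fitting a plain AR(1) to an AR(1) series contaminated with white measurement noise attenuates the autoregressive estimate. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi = 200, 0.6                   # series length and true autoregression (illustrative)
sigma_proc, sigma_meas = 1.0, 1.0   # process and measurement noise SDs (illustrative)

# Latent AR(1) process
latent = np.zeros(n)
for t in range(1, n):
    latent[t] = phi * latent[t - 1] + rng.normal(scale=sigma_proc)

observed = latent + rng.normal(scale=sigma_meas, size=n)   # AR(1) + white noise

def lag1_ar_estimate(y):
    """Naive AR(1) estimate: lag-1 autocorrelation of the series."""
    y = y - y.mean()
    return (y[1:] @ y[:-1]) / (y @ y)

print(lag1_ar_estimate(latent))    # close to 0.6
print(lag1_ar_estimate(observed))  # attenuated toward 0 by measurement error
```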

  16. Noise Reduction and Correction in the IPNS Linac ESEM

    NASA Astrophysics Data System (ADS)

    Dooling, J. C.; Brumwell, F. R.; Donley, L.; McMichael, G. E.; Stipp, V. F.

    2004-11-01

    The Energy Spread and Energy Monitor (ESEM) is an on-line, non-intrusive diagnostic used to characterize the output beam from the 200-MHz, 50-MeV IPNS linac. The energy spread is determined from a 3-size, longitudinal emittance measurement; whereas the energy is derived from time of flight (TOF) analysis. Signals are detected on 50-ohm, stripline beam position monitors (BPMs) terminated in their characteristic impedance. Each BPM is constructed with four striplines: top, bottom, left and right. The ESEM signals are taken from the bottom stripline in four separate BPM locations in the 50-MeV transport line between the linac and the synchrotron. Deterministic linac noise is sampled before and after the 70-microsecond macropulse. The noise phasor is vectorially subtracted from the beam signal. Noise subtraction is required at several frequencies, especially the fundamental and fifth harmonics (200 MHz and 1 GHz). It is also necessary to correct for attenuation and dispersion in the co-axial signal cables. Presently, the analysis assumes a single particle distribution to determine energy and energy spread. Work is on-going to allow for more realistic longitudinal distributions to be included in the analysis.

  17. Modelling realistic TiO2 nanospheres: A benchmark study of SCC-DFTB against hybrid DFT

    NASA Astrophysics Data System (ADS)

    Selli, Daniele; Fazio, Gianluca; Di Valentin, Cristiana

    2017-10-01

    TiO2 nanoparticles (NPs) are nowadays considered fundamental building blocks for many technological applications. Morphology is found to play a key role, with spherical NPs presenting higher binding properties and chemical activity. From the experimental point of view, the characterization of these nano-objects is extremely complex, leaving ample room for computational investigation. In this work, TiO2 spherical NPs of different sizes (from 300 to 4000 atoms) have been studied with a two-scale computational approach. Global optimization to obtain stable and equilibrated nanospheres was performed with a self-consistent charge density functional tight-binding (SCC-DFTB) simulated annealing process, causing a considerable atomic rearrangement within the nanospheres. Those SCC-DFTB relaxed structures were then optimized at the DFT(B3LYP) level of theory. We present a systematic and comparative SCC-DFTB vs DFT(B3LYP) study of the structural properties, with particular emphasis on the surface-to-bulk sites ratio, coordination distribution of surface sites, and surface energy. From the electronic point of view, we compare HOMO-LUMO and Kohn-Sham gaps, total and projected density of states. Overall, the comparisons between DFTB and hybrid density functional theory show that DFTB provides a rather accurate geometrical and electronic description of these nanospheres of realistic size (up to a diameter of 4.4 nm) at an extremely reduced computational cost. This opens the way to simulations of very large systems and more extended molecular dynamics.

  18. The formation of disc galaxies in high-resolution moving-mesh cosmological simulations

    NASA Astrophysics Data System (ADS)

    Marinacci, Federico; Pakmor, Rüdiger; Springel, Volker

    2014-01-01

    We present cosmological hydrodynamical simulations of eight Milky Way-sized haloes that have been previously studied with dark matter only in the Aquarius project. For the first time, we employ the moving-mesh code AREPO in zoom simulations combined with a comprehensive model for galaxy formation physics designed for large cosmological simulations. In most of the eight haloes, our simulations form strongly disc-dominated systems with realistic rotation curves, close to exponential surface density profiles, a stellar mass to halo mass ratio that matches expectations from abundance matching techniques, and galaxy sizes and ages consistent with expectations from large galaxy surveys in the local Universe. There is no evidence for any dark matter core formation in our simulations, even though they include repeated baryonic outflows by supernova-driven winds and black hole quasar feedback. For one of our haloes, the object studied in the recent `Aquila' code comparison project, we carried out a resolution study with our techniques, covering a dynamic range of 64 in mass resolution. Without any change in our feedback parameters, the final galaxy properties are reassuringly similar, in contrast to other modelling techniques used in the field that are inherently resolution dependent. This success in producing realistic disc galaxies is reached, in the context of our interstellar medium treatment, without resorting to a high density threshold for star formation, a low star formation efficiency, or early stellar feedback, factors deemed crucial for disc formation by other recent numerical studies.

  19. Hydrodynamic Simulations of the Central Molecular Zone with a Realistic Galactic Potential

    NASA Astrophysics Data System (ADS)

    Shin, Jihye; Kim, Sungsoo S.; Baba, Junichi; Saitoh, Takayuki R.; Hwang, Jeong-Sun; Chun, Kyungwon; Hozumi, Shunsuke

    2017-06-01

    We present hydrodynamic simulations of gas clouds inflowing from the disk to a few hundred parsec region of the Milky Way. A gravitational potential is generated to include realistic Galactic structures by using thousands of multipole expansions (MEs) that describe 6.4 million stellar particles of a self-consistent Galaxy simulation. We find that a hybrid ME model, with two different basis sets and a thick-disk correction, accurately reproduces the overall structures of the Milky Way. Through non-axisymmetric Galactic structures of an elongated bar and spiral arms, gas clouds in the disk inflow to the nuclear region and form a central molecular zone-like nuclear ring. We find that the size of the nuclear ring evolves to ~240 pc at T ~ 1500 Myr, regardless of the initial size. For most simulation runs, the rate of gas inflow to the nuclear region is equilibrated to ~0.02 M⊙ yr⁻¹. The nuclear ring is off-centered, relative to the Galactic center, by the lopsided central mass distribution of the Galaxy model, and thus an asymmetric mass distribution of the nuclear ring arises accordingly. The vertical asymmetry of the Galaxy model also causes the nuclear ring to be tilted along the Galactic plane. During the first ~100 Myr, the vertical frequency of the gas motion is twice that of the orbital frequency, thus the projected nuclear ring shows a twisted, ∞-like shape.

  20. Magnetic drug targeting through a realistic model of human tracheobronchial airways using computational fluid and particle dynamics.

    PubMed

    Pourmehran, Oveis; Gorji, Tahereh B; Gorji-Bandpy, Mofid

    2016-10-01

    Magnetic drug targeting (MDT) is a local drug delivery system which aims to concentrate a pharmacological agent at its site of action in order to minimize undesired side effects due to systemic distribution in the organism. Using magnetic drug particles under the influence of an external magnetic field, the drug particles are navigated toward the target region. Herein, computational fluid dynamics was used to simulate the air flow and magnetic particle deposition in a realistic human airway geometry obtained by CT scan images. Using discrete phase modeling and one-way coupling of particle-fluid phases, a Lagrangian approach for particle tracking in the presence of an external non-uniform magnetic field was applied. Polystyrene (PMS40) particles were utilized as the magnetic drug carrier. A parametric study was conducted, and the influence of particle diameter, magnetic source position, magnetic field strength and inhalation condition on the particle transport pattern and deposition efficiency (DE) was reported. Overall, the results show considerable promise of MDT in deposition enhancement at the target region (i.e., left lung). However, the positive effect of increasing particle size on DE enhancement was evident at smaller magnetic field strengths (Mn ≤ 1.5 T), whereas at higher applied magnetic field strengths increasing particle size has an inverse effect on DE. This implies that for efficient MDT in the human respiratory system, an optimal combination of magnetic drug carrier characteristics and magnetic field strength has to be achieved.

  1. Capture of shrinking targets with realistic shrink patterns.

    PubMed

    Hoffmann, Errol R; Chan, Alan H S; Dizmen, Coskun

    2013-01-01

    Previous research [Hoffmann, E. R. 2011. "Capture of Shrinking Targets." Ergonomics 54 (6): 519-530] reported experiments for capture of shrinking targets where the target decreased in size at a uniform rate. This work extended that research to targets having a shrink-size versus time pattern matching that of an aircraft receding from an observer. In Experiment 1, the time to capture the target in this case was well correlated with Fitts' index of difficulty, measured at the time of capture of the target, a result that is in agreement with the 'balanced' model of Johnson and Hart [Johnson, W. W., and Hart, S. G. 1987. "Step Tracking Shrinking Targets." Proceedings of the human factors society 31st annual meeting, New York City, October 1987, 248-252]. Experiment 2 measured the probability of target capture for varying initial target sizes and target shrink time constants, defined as the time for the target to shrink to half its initial size. Shrink time constant data for 50% probability of capture were related to initial target size, but initial target size did not greatly affect target capture, as the rate of target shrinking decreased rapidly with time.
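
    For reference, the index of difficulty in the classic Fitts formulation, evaluated with the (shrunken) target width at the moment of capture, takes the form below; the paper may instead use a variant such as the Shannon formulation log2(A/W + 1), so this is only the standard textbook expression.

```latex
% A   = movement amplitude to the target
% W_c = target width at the instant of capture (it shrinks over time)
\mathrm{ID}_{c} = \log_{2}\!\left(\frac{2A}{W_{c}}\right)
```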

  2. What works for whom in pharmacist-led smoking cessation support: realist review.

    PubMed

    Greenhalgh, Trisha; Macfarlane, Fraser; Steed, Liz; Walton, Robert

    2016-12-16

    New models of primary care are needed to address funding and staffing pressures. We addressed the research question "what works for whom in what circumstances in relation to the role of community pharmacies in providing lifestyle interventions to support smoking cessation?" This is a realist review conducted according to RAMESES standards. We began with a sample of 103 papers included in a quantitative review of community pharmacy intervention trials identified through systematic searching of seven databases. We supplemented this with additional papers: studies that had been excluded from the quantitative review but which provided rigorous and relevant additional data for realist theorising; citation chaining (pursuing reference lists and Google Scholar forward tracking of key papers); the 'search similar citations' function on PubMed. After mapping what research questions had been addressed by these studies and how, we undertook a realist analysis to identify and refine candidate theories about context-mechanism-outcome configurations. Our final sample consisted of 66 papers describing 74 studies (12 systematic reviews, 6 narrative reviews, 18 RCTs, 1 process detail of a RCT, 1 cost-effectiveness study, 12 evaluations of training, 10 surveys, 8 qualitative studies, 2 case studies, 2 business models, 1 development of complex intervention). Most studies had been undertaken in the field of pharmacy practice (pharmacists studying what pharmacists do) and demonstrated the success of pharmacist training in improving confidence, knowledge and (in many but not all studies) patient outcomes. Whilst a few empirical studies had applied psychological theories to account for behaviour change in pharmacists or people attempting to quit, we found no studies that had either developed or tested specific theoretical models to explore how pharmacists' behaviour may be affected by organisational context. Because of the nature of the empirical data, only a provisional realist analysis was possible, consisting of five mechanisms (pharmacist identity, pharmacist capability, pharmacist motivation, clinician confidence and public trust). We offer hypotheses about how these mechanisms might play out differently in different contexts to account for the success, failure or partial success of pharmacy-based smoking cessation efforts. Smoking cessation support from community pharmacists and their staff has been extensively studied, but few policy-relevant conclusions are possible. We recommend that further research should avoid duplicating existing literature on individual behaviour change; seek to study the organisational and system context and how this may shape, enable and constrain pharmacists' extended role; and develop and test theory.

  3. Gyrokinetic Simulations of Transport Scaling and Structure

    NASA Astrophysics Data System (ADS)

    Hahm, Taik Soo

    2001-10-01

    There is accumulating evidence from global gyrokinetic particle simulations with profile variations and experimental fluctuation measurements that microturbulence, with its time-averaged eddy size which scales with the ion gyroradius, can cause ion thermal transport which deviates from the gyro-Bohm scaling. The physics here can be best addressed by large scale (rho* = rho_i/a = 0.001) full torus gyrokinetic particle-in-cell turbulence simulations using our massively parallel, general geometry gyrokinetic toroidal code with field-aligned mesh. Simulation results from device-size scans for realistic parameters show that the 'wave transport' mechanism is not the dominant contribution for this Bohm-like transport and that transport is mostly diffusive, driven by microscopic scale fluctuations in the presence of self-generated zonal flows. In this work, we analyze the turbulence and zonal flow statistics from simulations and compare to nonlinear theoretical predictions, including the radial decorrelation of the transport events by zonal flows and the resulting probability distribution function (PDF). In particular, possible deviation of the characteristic radial size of transport processes from the time-averaged radial size of the density fluctuation eddies will be critically examined.

  4. The Effect of Ownship Information and NexRad Resolution on Pilot Decision Making in the Use of a Cockpit Weather Information Display

    NASA Technical Reports Server (NTRS)

    Novacek, Paul F.; Burgess, Malcolm A.; Heck, Michael L.; Stokes, Alan F.; Stough, H. Paul, III (Technical Monitor)

    2001-01-01

    A two-phase experiment was conducted to explore the effects of data-link weather displays upon pilot decision performance. The experiment was conducted with 49 instrument rated pilots who were divided into four groups and placed in a simulator with a realistic flight scenario involving weather containing convective activity. The inflight weather display depicted NEXRAD images, with graphical and textual METARs over a moving map display. The experiment explored the effect of weather information, ownship position symbology and NEXRAD cell size resolution. The phase-two experiment compared two groups using the data-linked weather display with ownship position symbology. These groups were compared to the phase-one group that did not have ownship position symbology. The phase-two pilots were presented with either large NEXRAD cell size (8 km) or small cell size (4 km). Observations noted that the introduction of ownship symbology did not appear to significantly impact the decision making process; however, the introduction of ownship did reduce workload. Additionally, NEXRAD cell size resolution did appear to influence the tactical decision making process.

  5. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
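
    A minimal sketch of the core multiplier calculation and a delta-method style standard error, assuming M is known without error and using an invented design effect and invented numbers; it is not the authors' sample size procedure, only an illustration of why a small P inflates uncertainty.

```python
import math

def population_size_estimate(M, p_hat):
    """Multiplier method point estimate: objects distributed / proportion reporting receipt."""
    return M / p_hat

def approx_se(M, p_hat, n, design_effect=2.0):
    """Delta-method style SE for M / P, treating M as fixed.

    var(P) is inflated by an assumed RDS design effect; all numbers are illustrative.
    """
    var_p = design_effect * p_hat * (1 - p_hat) / n
    return M * math.sqrt(var_p) / p_hat**2

# Hypothetical example: 600 unique objects distributed, 30% of an RDS sample of 400 report receipt
M, p_hat, n = 600, 0.30, 400
N_hat = population_size_estimate(M, p_hat)
print(N_hat, approx_se(M, p_hat, n))
```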

  6. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure of relative efficiency might be less than the measure in the literature under some conditions, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
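
    The noncentrality-parameter measure defined in the article is not reproduced here, but the widely used textbook approximation below conveys the same practical point: variation in cluster size inflates the design effect, and hence the required number of individuals, through the squared coefficient of variation of cluster size. All inputs are illustrative.

```python
import math

def design_effect_equal(m_bar, icc):
    """Standard design effect for equal cluster sizes."""
    return 1 + (m_bar - 1) * icc

def design_effect_unequal(m_bar, cv, icc):
    """Common approximation for variable cluster sizes (not the article's
    noncentrality-parameter measure): inflate the mean size by (1 + CV^2)."""
    return 1 + ((1 + cv**2) * m_bar - 1) * icc

def individuals_needed(n_individual, m_bar, cv, icc):
    """Scale an individually randomized sample size by the design effect."""
    return math.ceil(n_individual * design_effect_unequal(m_bar, cv, icc))

# Illustrative numbers only: 128 subjects per arm needed under individual
# randomization, mean cluster size 20, CV of cluster size 0.6, ICC 0.05
print(design_effect_equal(20, 0.05))
print(individuals_needed(128, m_bar=20, cv=0.6, icc=0.05))
```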

  7. The wiper model: avalanche dynamics in an exclusion process

    NASA Astrophysics Data System (ADS)

    Politi, Antonio; Romano, M. Carmen

    2013-10-01

    The exclusion-process model (Ciandrini et al 2010 Phys. Rev. E 81 051904) describing traffic of particles with internal stepping dynamics reveals the presence of strong correlations in realistic regimes. Here we study such a model in the limit of an infinitely fast translocation time, where the evolution can be interpreted as a ‘wiper’ that moves to dry neighbouring sites. We trace back the existence of long-range correlations to the existence of avalanches, where many sites are dried at once. At variance with self-organized criticality, in the wiper model avalanches have a typical size equal to the logarithm of the lattice size. In the thermodynamic limit, we find that the hydrodynamic behaviour is a mixture of stochastic (diffusive) fluctuations and increasingly coherent periodic oscillations that are reminiscent of a collective dynamics.

  8. The Measurement of Hot-spots in Granulated Ammonium Nitrate

    NASA Astrophysics Data System (ADS)

    Proud, William; Field, John

    2001-06-01

    Ammonium Nitrate (AN) is one of the components of the most widely used explosive in the world, ammonium nitrate/fuel oil mixtures (ANFO). By itself, it is an oxygen negative explosive with a large critical diameter. Hot-spots are produced in explosives by various means including gas space collapse, localised shear or friction. If these hot-spots reach critical conditions of size, temperature and duration, reaction can grow. This deflagration stage may eventually transition to detonation. This paper describes a system and presents results where high-speed image intensified photography is used to monitor the number and growth of hot spots in granular AN under a range of different impact pressures. The results can be used in detonation codes to provide a more accurate and realistic description of the initiation process.

  9. The prediction of the cavitation phenomena including population balance modeling

    NASA Astrophysics Data System (ADS)

    Bannari, Rachid; Hliwa, Ghizlane Zineb; Bannari, Abdelfettah; Belghiti, Mly Taib

    2017-07-01

    Cavitation is the principal cause of changes in the behavior of hydraulic turbines. However, experimental observations cannot cover all cases due to limitations in the measurement techniques. The mathematical models that have been implemented use the mixture multiphase framework. Moreover, most of the published work is limited by the assumption of a constant bubble size distribution, which is not realistic. The aim of this article is the implementation and use of a non-homogeneous multiphase model that solves transport equations for the two phases. The evolution of bubble size is accounted for by the population balance equation. This study is based on the Eulerian-Eulerian model, coupled to a cavitation model. All the inter-phase forces, such as drag, lift and virtual mass, are included.

  10. CatSim: a new computer assisted tomography simulation environment

    NASA Astrophysics Data System (ADS)

    De Man, Bruno; Basu, Samit; Chandra, Naveen; Dunham, Bruce; Edic, Peter; Iatrou, Maria; McOlash, Scott; Sainath, Paavana; Shaughnessy, Charlie; Tower, Brendon; Williams, Eugene

    2007-03-01

    We present a new simulation environment for X-ray computed tomography, called CatSim. CatSim provides a research platform for GE researchers and collaborators to explore new reconstruction algorithms, CT architectures, and X-ray source or detector technologies. The main requirements for this simulator are accurate physics modeling, low computation times, and geometrical flexibility. CatSim allows simulating complex analytic phantoms, such as the FORBILD phantoms, including boxes, ellipsoids, elliptical cylinders, cones, and cut planes. CatSim incorporates polychromaticity, realistic quantum and electronic noise models, finite focal spot size and shape, finite detector cell size, detector cross-talk, detector lag or afterglow, bowtie filtration, finite detector efficiency, non-linear partial volume, scatter (variance-reduced Monte Carlo), and absorbed dose. We present an overview of CatSim along with a number of validation experiments.

  11. Biased three-intensity decoy-state scheme on the measurement-device-independent quantum key distribution using heralded single-photon sources.

    PubMed

    Zhang, Chun-Hui; Zhang, Chun-Mei; Guo, Guang-Can; Wang, Qin

    2018-02-19

    At present, most measurement-device-independent quantum key distribution (MDI-QKD) schemes are based on weak coherent sources and are limited in transmission distance under realistic experimental conditions, e.g., when finite-size-key effects are considered. Hence in this paper, we propose a new biased decoy-state scheme using heralded single-photon sources for the three-intensity MDI-QKD, where we prepare the decoy pulses only in the X basis and adopt both collective constraints and joint parameter estimation techniques. Compared with former schemes based on weak coherent sources or heralded single-photon sources, after implementing full parameter optimization, our scheme gives a distinctly reduced quantum bit error rate in the X basis and thus shows excellent performance, especially when the data size is relatively small.

  12. Nuclear forensic analysis of a non-traditional actinide sample

    DOE PAGES

    Doyle, Jamie L.; Kuhn, Kevin John; Byerly, Benjamin; ...

    2016-06-15

    Nuclear forensic publications, performance tests, and research and development efforts typically target the bulk global inventory of intentionally safeguarded materials, such as plutonium (Pu) and uranium (U). Other materials, such as neptunium (Np), pose a nuclear security risk as well. Trafficking leading to recovery of an interdicted Np sample is a realistic concern, especially for materials originating in countries that reprocess fuel. Using complementary forensic methods, potential signatures for an unknown Np oxide sample were investigated. Measurement results were assessed against published Np processes to present hypotheses as to the original intended use, method of production, and origin for this Np oxide.

  13. Nuclear forensic analysis of a non-traditional actinide sample.

    PubMed

    Doyle, Jamie L; Kuhn, Kevin; Byerly, Benjamin; Colletti, Lisa; Fulwyler, James; Garduno, Katherine; Keller, Russell; Lujan, Elmer; Martinez, Alexander; Myers, Steve; Porterfield, Donivan; Spencer, Khalil; Stanley, Floyd; Townsend, Lisa; Thomas, Mariam; Walker, Laurie; Xu, Ning; Tandon, Lav

    2016-10-01

    Nuclear forensic publications, performance tests, and research and development efforts typically target the bulk global inventory of intentionally safeguarded materials, such as plutonium (Pu) and uranium (U). Other materials, such as neptunium (Np), pose a nuclear security risk as well. Trafficking leading to recovery of an interdicted Np sample is a realistic concern, especially for materials originating in countries that reprocess fuel. Using complementary forensic methods, potential signatures for an unknown Np oxide sample were investigated. Measurement results were assessed against published Np processes to present hypotheses as to the original intended use, method of production, and origin for this Np oxide. Published by Elsevier B.V.

  14. How Big is Too Big? The Girth of Bestselling Insertive Sex Toys to Guide Maximal Neophallus Dimensions.

    PubMed

    Isaacson, Dylan; Aghili, Roxana; Wongwittavas, Non; Garcia, Maurice

    2017-11-01

    In our practice we have encountered 4 female-to-male transgender patients seeking neophallus revision surgery for girth precluding penetrative vaginal or anal intercourse. Despite this, there is little evidence available to guide transitioning patients in neophallus sizing. In this work we examined the dimensions of bestselling realistic dildos, presuming that the most popular dimensions would reflect population preferences for penetrative toys and phalluses. To determine a maximal upper limit for girth compatible with penetrative intercourse based on measurements of bestselling realistic dildos and published erect penile dimensions. We collected measurements for "realistic dildos" designated as bestsellers for the top 5 Alexa.com-rated online adult retailers in the United States and for Amazon.com. We compared these with measurements of dildos available at Good Vibrations in San Francisco and with studies of erect natal dimensions. We compared all data with measurements of 4 index patients whose neophallus girth prevented penetrative intercourse. Length and circumference of overall bestselling and largest bestselling realistic dildos as reported on top websites and measured by investigators. The average insertive length of the compiled dildos (16.7 ± 1.6 cm) was 1 SD longer than natal functional erect penile length as reported in the literature (15.7 ± 2.6 cm); however, their average circumference (12.7 ± 0.8 cm) mirrored natal erect penile girth (12.3 ± 1.3). The average girth of vendors' top 3 largest-girth dildos was 15.1 ± 0.9 cm, 2 SD wider than natal erect penile girth. Index patients had an average length of 16.3 ± 3.2 cm and an average girth of 17.6 ± 1.3 cm. Index patient girth was 4 to 5 SD wider than the average natal erect girth. Based on our data, we suggest that a surgically created neophallus should have a girth no wider than 15.1 cm after implantation of an inflatable penile prosthesis. This corresponds to 2 SD wider than the average natal man's erect girth. Strengths include in-person measurements of patients whose girth prevented penetrative intercourse, the large number of dildos assessed, and correlations with in-person measurements. Limitations include the inability to account for the pliability of different materials, whether dildos were used for vaginal and/or anal insertion, the limited sample of 4 transmen for in-person measurement, and the absence of implanted inflatable penile prostheses in index neophalluses. Neophallus girth wider than 15.1 cm could lead to difficulty in penetrative intercourse for many individuals. A conservative recommendation for neophallus girth is 13 to 14 cm, or 0.5 to 1.5 SD wider than natal erect penile girth. Isaacson D, Aghili R, Wongwittavas N, Garcia M. How Big is Too Big? The Girth of Bestselling Insertive Sex Toys to Guide Maximal Neophallus Dimensions. J Sex Med 2017;14:1455-1461. Copyright © 2017. Published by Elsevier Inc.

  15. Concentrations of selected constituents in surface-water and streambed-sediment samples collected from streams in and near an area of oil and natural-gas development, south-central Texas, 2011-13

    USGS Publications Warehouse

    Opsahl, Stephen P.; Crow, Cassi L.

    2014-01-01

    During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.

  16. HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE

    NASA Technical Reports Server (NTRS)

    De, Salvo L. J.

    1994-01-01

    HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
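
    A short sketch of the kind of calculation HYPERSAMP performs, using SciPy's hypergeometric distribution rather than the original spreadsheet: find the smallest zero-acceptance sample size whose probability of passing a lot with the stated number of nonconforming units does not exceed the consumer's risk. The abstract's 400-unit, 1%, 99%-confidence example reproduces the quoted sample size of 273.

```python
from scipy.stats import hypergeom

def min_sample_size(lot_size, defectives, consumer_risk):
    """Smallest n such that a zero-defect sample is unlikely (<= consumer_risk)
    when the lot really contains `defectives` nonconforming units."""
    for n in range(1, lot_size + 1):
        p_zero = hypergeom.pmf(0, lot_size, defectives, n)  # P(no defectives in the sample)
        if p_zero <= consumer_risk:
            return n
    return lot_size

# Example from the abstract: lot of 400, 1% nonconforming (4 units), 99% confidence
print(min_sample_size(lot_size=400, defectives=4, consumer_risk=0.01))  # 273
```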

  17. Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Konik, Arda Bekir

    Positron emission tomography (PET) and single photon emission computed tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrades the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required, employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods on smaller size objects, and what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, Geant4 application for emission tomography (GATE). In the IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of the scatter counts to the total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and human), MOBY (realistic mouse/rat model) and XCAT (realistic human model) digital phantoms. In addition, PET projection files for different sizes of MOBY phantoms were reconstructed under 6 different conditions including attenuation and scatter corrections. Selected regions were analyzed for these different reconstruction conditions and object sizes. Finally, real mouse data from the real version of the same small animal PET scanner we modeled in our simulations were analyzed for similar reconstruction conditions. Both our IDL and GATE simulations showed that, for small animal PET and SPECT, even the smallest size objects (˜2 cm diameter) showed ˜15% error when both attenuation and scatter were not corrected. However, a simple attenuation correction using a uniform attenuation map and an object boundary obtained from emission data significantly reduces this error in non-lung regions (˜1% for the smallest size and ˜6% for the largest size). In lungs, emission values were overestimated when only attenuation correction was performed. In addition, we did not observe any significant improvement between the use of a uniform and the actual attenuation map (e.g., only ˜0.5% for the largest size in PET studies). The scatter correction was not significant for smaller size objects, but became increasingly important for larger objects. These results suggest that for all mouse sizes and most rat sizes, uniform attenuation correction can be performed using emission data only. For smaller sizes up to ˜4 cm, scatter correction is not required even in lung regions. For larger sizes, if accurate quantification is needed, an additional transmission scan may be required to estimate an accurate attenuation map for both attenuation and scatter corrections.

  18. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
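
    To see how imprecise a reliability coefficient can be at the sample sizes reported above, a rough sketch using the Fisher z interval for a correlation-type coefficient (e.g., retest reliability) is given below; it is only an approximation and does not apply directly to internal consistency coefficients.

```python
import math
from scipy.stats import norm

def fisher_z_ci(r, n, conf=0.95):
    """Approximate CI for a correlation-type reliability coefficient (e.g., retest r)."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    crit = norm.ppf(0.5 + conf / 2)
    return math.tanh(z - crit * se), math.tanh(z + crit * se)

# Precision of an assumed r = .80 reliability at the median sample sizes reported
# in the abstract (36, 64, 182), plus a larger comparison point of 400.
for n in (36, 64, 182, 400):
    print(n, fisher_z_ci(0.80, n))
```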

  19. Analysis of Sample Size, Counting Time, and Plot Size from an Avian Point Count Survey on Hoosier National Forest, Indiana

    Treesearch

    Frank R. Thompson; Monica J. Schwalbach

    1995-01-01

    We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...

  20. Postcraniometric sex and ancestry estimation in South Africa: a validation study.

    PubMed

    Liebenberg, Leandi; Krüger, Gabriele C; L'Abbé, Ericka N; Stull, Kyra E

    2018-05-24

    With the acceptance of the Daubert criteria as the standards for best practice in forensic anthropological research, more emphasis is being placed on the validation of published methods. Methods, both traditional and novel, need to be validated, adjusted, and refined for optimal performance within forensic anthropological analyses. Recently, a custom postcranial database of modern South Africans was created for use in Fordisc 3.1. Classification accuracies of up to 85% for ancestry estimation and 98% for sex estimation were achieved using a multivariate approach. To measure the external validity and report more realistic performance statistics, an independent sample was tested. The postcrania from 180 black, white, and colored South Africans were measured and classified using the custom postcranial database. A decrease in accuracy was observed for both ancestry estimation (79%) and sex estimation (95%) of the validation sample. When incorporating both sex and ancestry simultaneously, the method achieved 70% accuracy, and 79% accuracy when sex-specific ancestry analyses were run. Classification matrices revealed that postcrania were more likely to misclassify as a result of ancestry rather than sex. While both sex and ancestry influence the size of an individual, sex differences are more marked in the postcranial skeleton and are therefore easier to identify. The external validity of the postcranial database was verified and therefore shown to be a useful tool for forensic casework in South Africa. While the classification rates were slightly lower than the original method, this is expected when a method is generalized.

  1. AGREEMENT AND COVERAGE OF INDICATORS OF RESPONSE TO INTERVENTION: A MULTI-METHOD COMPARISON AND SIMULATION

    PubMed Central

    Fletcher, Jack M.; Stuebing, Karla K.; Barth, Amy E.; Miciak, Jeremy; Francis, David J.; Denton, Carolyn A.

    2013-01-01

    Purpose Agreement across methods for identifying students as inadequate responders or as learning disabled is often poor. We report (1) an empirical examination of final status (post-intervention benchmarks) and dual-discrepancy growth methods based on growth during the intervention and final status for assessing response to intervention; and (2) a statistical simulation of psychometric issues that may explain low agreement. Methods After a Tier 2 intervention, final status benchmark criteria were used to identify 104 inadequate and 85 adequate responders to intervention, with comparisons of agreement and coverage for these methods and a dual-discrepancy method. Factors affecting agreement were investigated using computer simulation to manipulate reliability, the intercorrelation between measures, cut points, normative samples, and sample size. Results Identification of inadequate responders based on individual measures showed that single measures tended not to identify many members of the pool of 104 inadequate responders. Poor to fair levels of agreement for identifying inadequate responders were apparent between pairs of measures. In the simulation, comparisons across two simulated measures generated indices of agreement (kappa) that were generally low because of multiple psychometric issues inherent in any test. Conclusions Expecting excellent agreement between two correlated tests with even small amounts of unreliability may not be realistic. Assessing outcomes based on multiple measures, such as level of CBM performance and short norm-referenced assessments of fluency, may improve the reliability of diagnostic decisions. PMID:25364090
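
    A toy version of the simulation logic described above, with invented parameter values: two correlated, imperfectly reliable measures are dichotomized at a cut point and Cohen's kappa is computed; the resulting kappa is generally well below the correlation between the measures.

```python
import numpy as np

def cohen_kappa(a, b):
    """Cohen's kappa for two binary classifications."""
    p_o = np.mean(a == b)
    p_e = np.mean(a) * np.mean(b) + (1 - np.mean(a)) * (1 - np.mean(b))
    return (p_o - p_e) / (1 - p_e)

rng = np.random.default_rng(0)
n, rho, cut = 200, 0.75, -1.0   # sample size, correlation between measures, cut point (z units)

# Two observed scores sharing a common true score plus independent error (corr ≈ rho)
true = rng.normal(size=n)
m1 = rho**0.5 * true + (1 - rho)**0.5 * rng.normal(size=n)
m2 = rho**0.5 * true + (1 - rho)**0.5 * rng.normal(size=n)

inadequate1 = m1 < cut   # "inadequate responder" by measure 1
inadequate2 = m2 < cut   # ... and by measure 2
print(cohen_kappa(inadequate1, inadequate2))
```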

  2. Value for money or making the healthy choice: the impact of proportional pricing on consumers' portion size choices.

    PubMed

    Vermeer, Willemijn M; Alting, Esther; Steenhuis, Ingrid H M; Seidell, Jacob C

    2010-02-01

    Large food portion sizes are determinants of a high caloric intake, especially if they have been made attractive through value size pricing (i.e. lower unit prices for large than for small portion sizes). The purpose of the two questionnaire studies that are reported in this article was to assess the impact of proportional pricing (i.e. removing beneficial prices for large sizes) on people's portion size choices of high caloric food and drink items. Both studies employed an experimental design with a proportional pricing condition and a value size pricing condition. Study 1 was conducted in a fast food restaurant (N = 150) and study 2 in a worksite cafeteria (N = 141). Three different food products (i.e. soft drink, chicken nuggets in study 1 and a hot meal in study 2) with corresponding prices were displayed on pictures in the questionnaire. Outcome measures were consumers' intended portion size choices. No main effects of pricing were found. However, when confronted with proportional pricing, overweight fast food restaurant visitors tended to be more likely to choose small portion sizes of chicken nuggets (OR = 4.31, P = 0.07) and less likely to choose large soft drink sizes (OR = 0.07, P = 0.04). Among the general public, proportional pricing did not reduce consumers' size choices. However, pricing strategies can help overweight and obese consumers select appropriate portion sizes of soft drink and high caloric snacks. More research in realistic settings with actual behaviour as the outcome measure is required.

  3. Will Your Battery Survive a World With Fast Chargers?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neubauer, J. S.; Wood, E.

    Fast charging is attractive to battery electric vehicle (BEV) drivers for its ability to enable long-distance travel and quickly recharge depleted batteries on short notice. However, such aggressive charging and the sustained vehicle operation that result could lead to excessive battery temperatures and degradation. Properly assessing the consequences of fast charging requires accounting for disparate cycling, heating, and aging of individual cells in large BEV packs when subjected to realistic travel patterns, usage of fast chargers, and climates over long durations (i.e., years). The U.S. Department of Energy's Vehicle Technologies Office has supported the National Renewable Energy Laboratory's development of BLAST-V (the Battery Lifetime Analysis and Simulation Tool for Vehicles) to create a tool capable of accounting for all of these factors. We present the findings of applying this tool to realistic fast charge scenarios. The effects of different travel patterns, climates, battery sizes, battery thermal management systems, and other factors on battery performance and degradation are presented. We find that the impact of realistic fast charging on battery degradation is minimal for most drivers, due to the low frequency of use. However, in the absence of active battery cooling systems, a driver's desired utilization of a BEV and fast charging infrastructure can result in unsafe peak battery temperatures. We find that active battery cooling systems can control peak battery temperatures to safe limits while allowing the desired use of the vehicle.

  4. 7 CFR 51.1406 - Sample for grade or size determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    United States Standards for Grades of Pecans in the Shell, Sample for Grade or Size Determination, § 51.1406 Sample for grade or size determination: Each sample shall consist of 100 pecans. The...

  5. Larger than life: billboard communication in Southeast Asia.

    PubMed

    Barnard, B

    1983-01-01

    Billboards are widely used in Southeast Asia, and especially in Malaysia, Singapore, Indonesia, and Thailand, for delivering persuasive political and commercial messages and for advertising the cinema. Billboards are a cost-effective way of communicating with all segments of society including illiterate persons, poor people who cannot afford television sets and radios, rural populations, and diverse ethnic and linguistic groups. Billboards are a form of applied art and are used to deliver temporary messages. Each country has its own billboard traditions and styles, and within each country, commercial, cinema, and political boards also have their own styles. In Indonesia and Thailand, almost all billboards are hand painted and gigantic in size. The paintings are highly realistic and detailed. In Thailand billboards are produced in large studios employing many artists, and the boards cost about US$9.00/square meter or more. The Four Art Studio in Bangkok produces commercial boards in Renaissance, Impressionistic, Pop, and Op art styles. Both Indonesia and Thailand were early centers of artistic and cultural influence in Asia, and each country has highly developed art traditions. In Indonesia, the Japanese occupation led to the development of propaganda and nationalistic art. After independence nationalistic art was developed still further. At the present time, socialist realism predominates as an art style, and large air-brushed political billboards are prominently displayed throughout the country. In Malaysia and Singapore billboards are small in size. Most of the boards, except those used to advertise the cinema, are printed rather than painted. Neither country has a strong tradition of art. Realism is stressed neither in their fine arts nor in their art training. The lack of a realistic art tradition probably accounts for the emphasis placed on printed billboards. Cinema boards are painted but they are not produced by applied artists and are generally mediocre in quality. Political boards in Malaysia generally contain only verbal messages. In Singapore there are few political billboards. In Japan billboards are also widely used. They are sophisticated and usually printed with advanced technological methods. An innovative form of billboard, a 20 meter audiovisual screen, is appearing more and more frequently in Japanese cities.

  6. The waiting time problem in a model hominin population.

    PubMed

    Sanford, John; Brewer, Wesley; Smith, Franzine; Baumgardner, John

    2015-09-17

    Functional information is normally communicated using specific, context-dependent strings of symbolic characters. This is true within the human realm (texts and computer programs), and also within the biological realm (nucleic acids and proteins). In biology, strings of nucleotides encode much of the information within living cells. How do such information-bearing nucleotide strings arise and become established? This paper uses comprehensive numerical simulation to understand what types of nucleotide strings can realistically be established via the mutation/selection process, given a reasonable timeframe. The program Mendel's Accountant realistically simulates the mutation/selection process, and was modified so that a starting string of nucleotides could be specified, and a corresponding target string of nucleotides could be specified. We simulated a classic pre-human hominin population of at least 10,000 individuals, with a generation time of 20 years, and with very strong selection (50% selective elimination). Random point mutations were generated within the starting string. Whenever an instance of the target string arose, all individuals carrying the target string were assigned a specified reproductive advantage. When natural selection had successfully amplified an instance of the target string to the point of fixation, the experiment was halted, and the waiting time statistics were tabulated. Using this methodology we tested the effect of mutation rate, string length, fitness benefit, and population size on waiting time to fixation. Biologically realistic numerical simulations revealed that a population of this type required inordinately long waiting times to establish even the shortest nucleotide strings. To establish a string of two nucleotides required on average 84 million years. To establish a string of five nucleotides required on average 2 billion years. We found that waiting times were reduced by higher mutation rates, stronger fitness benefits, and larger population sizes. However, even using the most generous feasible parameter settings, the waiting time required to establish any specific nucleotide string within this type of population was consistently prohibitive. We show that the waiting time problem is a significant constraint on the macroevolution of the classic hominin population. Routine establishment of specific beneficial strings of two or more nucleotides becomes very problematic.
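
    As a rough, standard-theory illustration of why such waits are long (this is not the Mendel's Accountant simulation itself, and the mutation rate and selection coefficient below are assumed values), the expected wait for even a single specified beneficial point mutation to arise in a copy destined to fix is roughly 1/(4*N*mu*s) generations:

        # Back-of-envelope estimate for ONE specified beneficial point mutation
        # (assumed values; the string-of-nucleotides case studied in the paper waits
        # far longer because intermediate single-base states confer no advantage).
        N = 10_000      # population size used in the study
        mu = 1e-8       # per-site, per-generation mutation rate (assumed)
        s = 0.01        # selective advantage of the target variant (assumed)
        years_per_generation = 20

        # Copies destined to fix arise at rate ~ 2*N*mu * 2*s per generation.
        wait_generations = 1 / (4 * N * mu * s)
        print(f"~{wait_generations * years_per_generation / 1e6:.1f} million years")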

  7. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    PubMed

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of the calculation as well as the accuracy of the sample size calculation. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median (inter-quartile range) percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers had provided a targeted sample size in trial registries; about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) showed no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
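
    As a rough illustration of the kind of recalculation such a review involves (a sketch assuming the standard normal-approximation formula for comparing two means; the trial values used here are hypothetical, not drawn from the reviewed papers):

        from math import ceil
        from scipy.stats import norm

        def n_per_arm(alpha, power, sd, mcid):
            """Normal-approximation sample size per arm for comparing two means."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return ceil(2 * (z * sd / mcid) ** 2)

        def percent_difference(reported_n, alpha, power, sd, mcid):
            """Percentage difference between reported and recalculated sample sizes."""
            calc = n_per_arm(alpha, power, sd, mcid)
            return 100.0 * (reported_n - calc) / calc

        # Hypothetical trial: alpha = 0.05, 80% power, SD = 10, MCID = 5.
        print(n_per_arm(0.05, 0.80, 10, 5))                 # 63 per arm
        print(percent_difference(65, 0.05, 0.80, 10, 5))    # ~3.2% discrepancy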

  8. IGBT Switching Characteristic Curve Embedded Half-Bridge MMC Modelling and Real Time Simulation Realization

    NASA Astrophysics Data System (ADS)

    Zhengang, Lu; Hongyang, Yu; Xi, Yang

    2017-05-01

    The Modular Multilevel Converter (MMC) is one of the most attractive topologies in recent years for medium- or high-voltage industrial applications, such as high voltage dc transmission (HVDC) and medium-voltage variable-speed motor drives. The wide adoption of MMCs in industry is mainly due to their flexible expandability, transformer-less configuration, common dc bus, high reliability from redundancy, and so on. However, as the number of submodules in an MMC grows, testing the MMC controller costs more time and effort. Hardware-in-the-loop (HIL) testing based on a real-time simulator can save much of the time and money spent on MMC testing, and because of its flexibility HIL has become more and more popular in industry. The MMC modelling method remains an important issue for the MMC HIL test. Specifically, the VSC model should realistically reflect the nonlinear device switching characteristics, switching and conduction losses, tailing current, and diode reverse recovery behaviour of a realistic converter. In this paper, an IGBT switching characteristic curve embedded half-bridge MMC modelling method is proposed. The method is based on switching-curve referencing and simple circuit calculation, and it is simple to implement. Based on the proposed method, an FPGA real-time simulation is carried out with a 200 ns sample time. The real-time simulation results show that the proposed method is correct.

  9. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
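
    A minimal sketch of the blinded re-estimation step itself (assuming equal allocation, normal data and the simple pooled one-sample variance estimator named above; this is not the paper's exact rule or its exact-distribution results):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)

        def blinded_reestimate(pilot, alpha=0.05, power=0.9, delta=0.5):
            """Re-estimate the total sample size from blinded (pooled) interim data."""
            s2 = np.var(pilot, ddof=1)             # one-sample variance, blind to arm
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            n_per_arm = int(np.ceil(2 * z**2 * s2 / delta**2))
            return max(2 * n_per_arm, len(pilot))  # never shrink below the pilot

        # Internal pilot of 40 blinded observations drawn with a true SD of 1.2.
        print(blinded_reestimate(rng.normal(0.0, 1.2, size=40)))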

  10. ENHANCEMENT OF LEARNING ON SAMPLE SIZE CALCULATION WITH A SMARTPHONE APPLICATION: A CLUSTER-RANDOMIZED CONTROLLED TRIAL.

    PubMed

    Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz

    2017-01-01

    Sample size determination is usually taught on a theoretical basis and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than using lectures only. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus a lecture combined with using a smartphone application to calculate sample sizes, to explore factors affecting the post-test score after training in sample size calculation, and to investigate participants’ attitude toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups, namely, 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given in the control group, while lectures using a smartphone application were given to the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum of 10 points, 95% CI: 2.4 - 2.9) than the participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not have a plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.

  11. Using digital colour to increase the realistic appearance of SEM micrographs of bloodstains.

    PubMed

    Hortolà, Policarp

    2010-10-01

    Although in the scientific-research literature the micrographs from scanning electron microscopes (SEMs) are usually displayed in greyscale, the potential of the colour resources provided by SEM-coupled image-acquiring systems and, subsidiarily, by free image-manipulation software deserves to be explored as a tool for colouring SEM micrographs of bloodstains. After greyscale SEM micrographs of a human blood smear (dark red to the naked eye) on grey chert were acquired, red-tone versions were obtained manually using both the SEM-coupled image-acquiring system and a free image-manipulation program, and thermal-tone versions were generated automatically using the SEM-coupled system. Red images obtained by the SEM-coupled system showed lower visual-discrimination capability than the other coloured images, whereas the red images generated by the free software conveyed more visual information than those generated by the SEM-coupled system. Thermal-tone images, although further from the real sample colour than the red ones, not only increased the realistic appearance over the greyscale images, but also yielded the best visual-discrimination capability among all the coloured SEM micrographs, and fairly enhanced the relief effect of the SEM micrographs over both the greyscale and the red images. The application of digital colour by means of the facilities provided by an SEM-coupled image-acquiring system or, when required, by free image-manipulation software provides a user-friendly, quick and inexpensive way of obtaining coloured SEM micrographs of bloodstains, avoiding sophisticated, time-consuming colouring procedures. Although this work focused on bloodstains, other monochromatic or quasi-monochromatic samples can probably also be given a more realistic appearance by colouring them using the simple methods utilized in this study.
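
    A minimal sketch of the simplest of these colourings (the file name is hypothetical and this is not the SEM-coupled system's own routine): a greyscale micrograph is mapped onto a black-to-red ramp with the Pillow library.

        from PIL import Image, ImageOps

        grey = Image.open("bloodstain_sem.tif").convert("L")   # greyscale micrograph
        red = ImageOps.colorize(grey, black="black", white="red")
        red.save("bloodstain_sem_red.png")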

  12. Simulation of HLNC and NCC measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ming-Shih; Teichmann, T.; De Ridder, P.

    1994-03-01

    This report discusses an automatic method of simulating the results of High Level Neutron Coincidence Counting (HLNC) and Neutron Collar Coincidence Counting (NCC) measurements to facilitate the safeguards inspectors' understanding and use of these instruments under realistic conditions. This would otherwise be expensive and time-consuming, except at sites designed to handle radioactive materials and having the necessary variety of fuel elements and other samples. The simulation must thus include the behavior of the instruments for variably constituted and composed fuel elements (including poison rods and Gd loading), and must display the changes in the count rates as a function of these characteristics, as well as of various instrumental parameters. Such a simulation is an efficient way of accomplishing the required familiarization and training of the inspectors by providing a realistic reproduction of the results of such measurements.

  13. Diet misreporting can be corrected: confirmation of the association between energy intake and fat-free mass in adolescents.

    PubMed

    Vainik, Uku; Konstabel, Kenn; Lätt, Evelin; Mäestu, Jarek; Purge, Priit; Jürimäe, Jaak

    2016-10-01

    Subjective energy intake (sEI) is often misreported, providing unreliable estimates of energy consumed. Therefore, relating sEI data to health outcomes is difficult. Recently, Börnhorst et al. compared various methods to correct sEI-based energy intake estimates. They criticised approaches that categorise participants as under-reporters, plausible reporters and over-reporters based on the sEI:total energy expenditure (TEE) ratio, and thereafter use these categories as statistical covariates or exclusion criteria. Instead, they recommended using external predictors of sEI misreporting as statistical covariates. We sought to confirm and extend these findings. Using a sample of 190 adolescent boys (mean age=14), we demonstrated that dual-energy X-ray absorptiometry-measured fat-free mass is strongly associated with objective energy intake data (onsite weighed breakfast), but the association with sEI (previous 3-d dietary interview) is weak. Comparing sEI with TEE revealed that sEI was mostly under-reported (74 %). Interestingly, statistically controlling for dietary reporting groups or restricting samples to plausible reporters created a stronger-than-expected association between fat-free mass and sEI. However, the association was an artifact caused by selection bias - that is, data re-sampling and simulations showed that these methods overestimated the effect size because fat-free mass was related to sEI both directly and indirectly via TEE. A more realistic association between sEI and fat-free mass was obtained when the model included common predictors of misreporting (e.g. BMI, restraint). To conclude, restricting sEI data only to plausible reporters can cause selection bias and inflated associations in later analyses. Therefore, we further support statistically correcting sEI data in nutritional analyses. The script for running simulations is provided.
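
    A toy simulation (not the authors' released script) makes the selection-bias mechanism concrete: fat-free mass drives TEE, reported intake is TEE times a noisy reporting factor unrelated to fat-free mass, and restricting to "plausible reporters" (sEI/TEE near 1) inflates the fat-free-mass versus sEI correlation.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 5000
        ffm = rng.normal(45, 6, n)                      # kg fat-free mass
        tee = 500 + 30 * ffm + rng.normal(0, 150, n)    # kcal/day, driven by FFM
        report = rng.normal(0.75, 0.25, n)              # reporting factor, FFM-free
        sei = tee * report                              # self-reported intake

        plausible = (sei / tee > 0.8) & (sei / tee < 1.2)
        print("all reporters    r =", round(np.corrcoef(ffm, sei)[0, 1], 2))
        print("plausible subset r =", round(np.corrcoef(ffm[plausible], sei[plausible])[0, 1], 2))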

  14. SDSS-IV MaNGA: stellar angular momentum of about 2300 galaxies: unveiling the bimodality of massive galaxy properties

    NASA Astrophysics Data System (ADS)

    Graham, Mark T.; Cappellari, Michele; Li, Hongyu; Mao, Shude; Bershady, Matthew A.; Bizyaev, Dmitry; Brinkmann, Jonathan; Brownstein, Joel R.; Bundy, Kevin; Drory, Niv; Law, David R.; Pan, Kaike; Thomas, Daniel; Wake, David A.; Weijmans, Anne-Marie; Westfall, Kyle B.; Yan, Renbin

    2018-07-01

    We measure λ _{R_e}, a proxy for galaxy specific stellar angular momentum within one effective radius, and the ellipticity, ɛ, for about 2300 galaxies of all morphological types observed with integral field spectroscopy as part of the Mapping Nearby Galaxies at Apache Point Observatory survey, the largest such sample to date. We use the (λ _{R_e}, ɛ ) diagram to separate early-type galaxies into fast and slow rotators. We also visually classify each galaxy according to its optical morphology and two-dimensional stellar velocity field. Comparing these classifications to quantitative λ _{R_e} measurements reveals tight relationships between angular momentum and galaxy structure. In order to account for atmospheric seeing, we use realistic models of galaxy kinematics to derive a general approximate analytic correction for λ _{R_e}. Thanks to the size of the sample and the large number of massive galaxies, we unambiguously detect a clear bimodality in the (λ _{R_e}, ɛ ) diagram which may result from fundamental differences in galaxy assembly history. There is a sharp secondary density peak inside the region of the diagram with low λ _{R_e} and ɛ < 0.4, previously suggested as the definition for slow rotators. Most of these galaxies are visually classified as non-regular rotators and have high velocity dispersion. The intrinsic bimodality must be stronger, as it tends to be smoothed by noise and inclination. The large sample of slow rotators allows us for the first time to unveil a secondary peak at ±90° in their distribution of the misalignments between the photometric and kinematic position angles. We confirm that genuine slow rotators start appearing above M ≥ 2 × 1011 M⊙ where a significant number of high-mass fast rotators also exist.
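
    For orientation, the quantity being measured is usually computed as a flux-weighted spaxel sum; a minimal sketch using the standard estimator lambda_R = sum(F R |V|) / sum(F R sqrt(V^2 + sigma^2)) follows (the seeing correction described above is not included):

        import numpy as np

        def lambda_r(F, R, V, sigma, r_e):
            """Spin proxy lambda_R summed over spaxels inside one effective radius."""
            inside = R <= r_e
            F, R, V, sigma = F[inside], R[inside], V[inside], sigma[inside]
            num = np.sum(F * R * np.abs(V))
            den = np.sum(F * R * np.sqrt(V**2 + sigma**2))
            return num / den

        # e.g. lambda_r(flux, radius, velocity, dispersion, r_e=5.0) on arrays
        # extracted from an integral-field datacube (units: arcsec, km/s).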

  15. SDSS-IV MaNGA: Stellar angular momentum of about 2300 galaxies: unveiling the bimodality of massive galaxy properties

    NASA Astrophysics Data System (ADS)

    Graham, Mark T.; Cappellari, Michele; Li, Hongyu; Mao, Shude; Bershady, Matthew; Bizyaev, Dmitry; Brinkmann, Jonathan; Brownstein, Joel R.; Bundy, Kevin; Drory, Niv; Law, David R.; Pan, Kaike; Thomas, Daniel; Wake, David A.; Weijmans, Anne-Marie; Westfall, Kyle B.; Yan, Renbin

    2018-03-01

    We measure λ _{R_e}, a proxy for galaxy specific stellar angular momentum within one effective radius, and the ellipticity, ɛ, for about 2300 galaxies of all morphological types observed with integral field spectroscopy as part of the MaNGA survey, the largest such sample to date. We use the (λ _{R_e}, ɛ ) diagram to separate early-type galaxies into fast and slow rotators. We also visually classify each galaxy according to its optical morphology and two-dimensional stellar velocity field. Comparing these classifications to quantitative λ _{R_e} measurements reveals tight relationships between angular momentum and galaxy structure. In order to account for atmospheric seeing, we use realistic models of galaxy kinematics to derive a general approximate analytic correction for λ _{R_e}. Thanks to the size of the sample and the large number of massive galaxies, we unambiguously detect a clear bimodality in the (λ _{R_e}, ɛ ) diagram which may result from fundamental differences in galaxy assembly history. There is a sharp secondary density peak inside the region of the diagram with low λ _{R_e} and ɛ < 0.4, previously suggested as the definition for slow rotators. Most of these galaxies are visually classified as non-regular rotators and have high velocity dispersion. The intrinsic bimodality must be stronger, as it tends to be smoothed by noise and inclination. The large sample of slow rotators allows us for the first time to unveil a secondary peak at ±90○ in their distribution of the misalignments between the photometric and kinematic position angles. We confirm that genuine slow rotators start appearing above M ≥ 2 × 1011M⊙ where a significant number of high-mass fast rotators also exist.

  16. Inter-dot strain field effect on the optoelectronic properties of realistic InP lateral quantum-dot molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barettin, Daniele, E-mail: Daniele.Barettin@uniroma2.it; Auf der Maur, Matthias; De Angelis, Roberta

    2015-03-07

    We report on numerical simulations of InP surface lateral quantum-dot molecules on In0.48Ga0.52P buffer, using a model strictly derived from experimental results by extrapolation of the molecules' shape from atomic force microscopy images. Our study has been inspired by the comparison of a photoluminescence spectrum of a high-density InP surface quantum dot sample with a numerical ensemble average given by a weighted sum of simulated single quantum-dot spectra. A lack of experimental optical response from the smaller dots of the sample is found to be due to strong inter-dot strain fields, which influence the optoelectronic properties of lateral quantum-dot molecules. Continuum electromechanical, k·p bandstructure, and optical calculations are presented for two different molecules, the first composed of two dots of nearly identical dimensions (homonuclear), the second of two dots with rather different sizes (heteronuclear). We show that in the homonuclear molecule the hydrostatic strain raises a potential barrier for the electrons in the connection zone between the dots, while conversely the holes do not experience any barrier, which considerably increases the coupling. Results for the heteronuclear molecule show instead that its dots do not appear as two separate and distinguishable structures, but as a single large dot, and no optical emission is observed in the range of higher energies where the smaller dot is supposed to emit. We believe that in samples of such a high density the smaller dots result as practically incorporated into bigger molecular structures, an effect strongly enforced by the inter-dot strain fields, and consequently it is not possible to experimentally obtain a separate optical emission from the smaller dots.

  17. Inter-dot strain field effect on the optoelectronic properties of realistic InP lateral quantum-dot molecules

    NASA Astrophysics Data System (ADS)

    Barettin, Daniele; Auf der Maur, Matthias; De Angelis, Roberta; Prosposito, Paolo; Casalboni, Mauro; Pecchia, Alessandro

    2015-03-01

    We report on numerical simulations of InP surface lateral quantum-dot molecules on In0.48Ga0.52P buffer, using a model strictly derived from experimental results by extrapolation of the molecules' shape from atomic force microscopy images. Our study has been inspired by the comparison of a photoluminescence spectrum of a high-density InP surface quantum dot sample with a numerical ensemble average given by a weighted sum of simulated single quantum-dot spectra. A lack of experimental optical response from the smaller dots of the sample is found to be due to strong inter-dot strain fields, which influence the optoelectronic properties of lateral quantum-dot molecules. Continuum electromechanical, k·p bandstructure, and optical calculations are presented for two different molecules, the first composed of two dots of nearly identical dimensions (homonuclear), the second of two dots with rather different sizes (heteronuclear). We show that in the homonuclear molecule the hydrostatic strain raises a potential barrier for the electrons in the connection zone between the dots, while conversely the holes do not experience any barrier, which considerably increases the coupling. Results for the heteronuclear molecule show instead that its dots do not appear as two separate and distinguishable structures, but as a single large dot, and no optical emission is observed in the range of higher energies where the smaller dot is supposed to emit. We believe that in samples of such a high density the smaller dots result as practically incorporated into bigger molecular structures, an effect strongly enforced by the inter-dot strain fields, and consequently it is not possible to experimentally obtain a separate optical emission from the smaller dots.

  18. Efficient photocatalytic degradation of rhodamine-B by Fe doped CuS diluted magnetic semiconductor nanoparticles under the simulated sunlight irradiation

    NASA Astrophysics Data System (ADS)

    Sreelekha, N.; Subramanyam, K.; Amaranatha Reddy, D.; Murali, G.; Rahul Varma, K.; Vijayalakshmi, R. P.

    2016-12-01

    The present work describes a simple, inexpensive and efficient approach for the synthesis of Cu1-xFexS (x = 0.00, 0.01, 0.03, 0.05 and 0.07) nanoparticles via a chemical co-precipitation route, using ethylene diamine tetra acetic acid (EDTA) as a capping agent. The as-synthesized nanoparticles were used as catalysts for the degradation of the rhodamine-B organic dye pollutant. The properties of the prepared samples were analyzed with energy dispersive analysis of X-rays (EDAX), X-ray diffraction (XRD), transmission electron microscopy (TEM), UV-visible optical absorption spectroscopy, Fourier transform infrared (FTIR) spectra, Raman spectra and a vibrating sample magnetometer (VSM). EDAX spectra corroborated the existence of Fe in the prepared nanoparticles, in close proximity to the stoichiometric ratio. XRD, FTIR and Raman patterns confirmed the formation of a single-phase hexagonal (P63/mmc) CuS crystal structure, without impurity phases. The average particle size estimated by TEM is in the range of 5-10 nm. UV-visible optical absorption measurements showed band gap narrowing with increasing Fe doping concentration. VSM measurements revealed that the 3% Fe-doped CuS nanoparticles exhibited strong ferromagnetism at room temperature, and a changeover from ferromagnetic to paramagnetic behaviour with further increase of the Fe doping concentration in the CuS host lattice. Among all the Fe-doped CuS nanoparticles, the 3% Fe-doped sample shows the best photocatalytic performance in the decomposition of RhB compared with pristine CuS. Thus the as-synthesized Cu0.97Fe0.03S nanocatalysts are highly promising compounds for the photocatalytic degradation of organic dyes under visible light.

  19. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
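
    A hedged sketch of one common way to approximate the power side of such a calculation (a noncentral-F approximation with Welch's denominator degrees of freedom and a noncentrality parameter built from the weights w_j = n_j / sigma_j^2; this is illustrative and not necessarily the authors' derivation; the planning values are hypothetical):

        import numpy as np
        from scipy.stats import f, ncf

        def welch_anova_power(n, mu, sigma, alpha=0.05):
            n, mu, sigma = map(np.asarray, (n, mu, sigma))
            k = len(n)
            w = n / sigma**2
            mu_w = np.sum(w * mu) / np.sum(w)
            lam = np.sum(w * (mu - mu_w)**2)             # noncentrality parameter
            a = np.sum((1 - w / np.sum(w))**2 / (n - 1))
            df2 = (k**2 - 1) / (3 * a)                   # Welch denominator df
            f_crit = f.ppf(1 - alpha, k - 1, df2)
            return 1 - ncf.cdf(f_crit, k - 1, df2, lam)

        # Hypothetical planning values: three groups with unequal SDs.
        print(welch_anova_power(n=[25, 25, 25], mu=[0.0, 0.4, 0.8], sigma=[1.0, 1.5, 2.0]))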

  20. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward to use in some situations, such as sample size calculation, because of the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence is considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, together with nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
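
    A minimal illustration of the kappa paradox that motivates working with the proportion of agreement (this is plain Cohen's kappa with equal marginals, not the paper's sample-size formula under the common correlation model):

        def kappa_from_agreement(p_o, prevalence):
            """Kappa for two raters with equal marginals and overall agreement p_o."""
            p_e = prevalence**2 + (1 - prevalence)**2   # chance agreement
            return (p_o - p_e) / (1 - p_e)

        for prev in (0.5, 0.9, 0.95):
            print(prev, round(kappa_from_agreement(0.90, prev), 2))
        # 0.5 -> 0.8, 0.9 -> 0.44, 0.95 -> -0.05: same agreement, very different kappa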

  1. Electronic structures of GeSi nanoislands grown on pit-patterned Si(001) substrate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Han, E-mail: Dabombyh@aliyun.com; Yu, Zhongyuan

    2014-11-15

    Patterning pits on a Si(001) substrate prior to Ge deposition is an important approach to achieve GeSi nanoislands with high ordering and size uniformity. In the present work, the electronic structures of realistic uncapped pyramid, dome, barn and cupola nanoislands grown in (105) pits are systematically investigated by solving the Schrödinger equation for heavy holes, incorporating the inhomogeneous strain distribution and nonlinear composition-dependent band parameters. Uniform, partitioned and equilibrium composition profiles (CP) in the nanoisland and the inverted pyramid structure are simulated separately. We demonstrate the huge impact of the composition profile on the localization of the heavy hole: the ground-state wave function is confined near the pit facets for the uniform CP, at the bottom of the nanoisland for the partitioned CP and at the top of the nanoisland for the equilibrium CP. Moreover, such localization is gradually compromised by the size effect as the pit filling ratio or pit size decreases. The results provide a fundamental guideline for designing nanoislands on pit-patterned substrates for desired applications.

  2. Computer Modeling of Non-Isothermal Crystallization

    NASA Technical Reports Server (NTRS)

    Kelton, K. F.; Narayan, K. Lakshmi; Levine, L. E.; Cull, T. C.; Ray, C. S.

    1996-01-01

    A realistic computer model for simulating isothermal and non-isothermal phase transformations proceeding by homogeneous and heterogeneous nucleation and interface-limited growth is presented. A new treatment for particle size effects on the crystallization kinetics is developed and is incorporated into the numerical model. Time-dependent nucleation rates, size-dependent growth rates, and surface crystallization are also included. Model predictions are compared with experimental measurements of DSC/DTA peak parameters for the crystallization of lithium disilicate glass as a function of particle size, Pt doping levels, and water content. The quantitative agreement that is demonstrated indicates that the numerical model can be used to extract key kinetic data from easily obtained calorimetric data. The model can also be used to probe nucleation and growth behavior in regimes that are otherwise inaccessible. Based on a fit to data, an earlier prediction that the time-dependent nucleation rate in a DSC/DTA scan can rise above the steady-state value at a temperature higher than the peak in the steady-state rate is demonstrated.

  3. Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture

    NASA Technical Reports Server (NTRS)

    Desai, Prasun N.; Conway, Bruce A.

    2005-01-01

    Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems is characterized by two distinct timescales in their governing equations, where a subset of the states has high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimum 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for a system with two distinct timescales in the governing states.

  4. A path-oriented knowledge representation system: Defusing the combinatorial system

    NASA Technical Reports Server (NTRS)

    Karamouzis, Stamos T.; Barry, John S.; Smith, Steven L.; Feyock, Stefan

    1995-01-01

    LIMAP is a programming system oriented toward efficient information manipulation over fixed finite domains, and quantification over paths and predicates. A generalization of Warshall's Algorithm to precompute paths in a sparse matrix representation of semantic nets is employed to allow questions involving paths between components to be posed and answered easily. LIMAP's ability to cache all paths between two components in a matrix cell proved to be a computational obstacle, however, when the semantic net grew to realistic size. The present paper describes a means of mitigating this combinatorial explosion to an extent that makes the use of the LIMAP representation feasible for problems of significant size. The technique we describe radically reduces the size of the search space in which LIMAP must operate; semantic nets of more than 500 nodes have been attacked successfully. Furthermore, it appears that the procedure described is applicable not only to LIMAP, but to a number of other combinatorially explosive search space problems found in AI as well.

  5. Accuracy Assessments of Cloud Droplet Size Retrievals from Polarized Reflectance Measurements by the Research Scanning Polarimeter

    NASA Technical Reports Server (NTRS)

    Alexandrov, Mikhail Dmitrievic; Cairns, Brian; Emde, Claudia; Ackerman, Andrew S.; vanDiedenhove, Bastiaan

    2012-01-01

    We present an algorithm for the retrieval of cloud droplet size distribution parameters (effective radius and variance) from the Research Scanning Polarimeter (RSP) measurements. The RSP is an airborne prototype for the Aerosol Polarimetry Sensor (APS), which was on board the NASA Glory satellite. This instrument measures both polarized and total reflectance in 9 spectral channels with central wavelengths ranging from 410 to 2260 nm. The cloud droplet size retrievals use the polarized reflectance in the scattering angle range between 135° and 165°, where they exhibit the sharply defined structure known as the rain- or cloud-bow. The shape of the rainbow is determined mainly by the single scattering properties of cloud particles. This significantly simplifies both forward modeling and inversions, while also substantially reducing uncertainties caused by the aerosol loading and possible presence of undetected clouds nearby. In this study we present the accuracy evaluation of our algorithm based on the results of sensitivity tests performed using realistic simulated cloud radiation fields.

  6. Influence of system size and solvent flow on the distribution of wormlike micelles in a contraction-expansion geometry

    NASA Astrophysics Data System (ADS)

    Stukan, M. R.; Boek, E. S.; Padding, J. T.; Crawshaw, J. P.

    2008-05-01

    Viscoelastic wormlike micelles are formed by surfactants assembling into elongated cylindrical structures. These structures respond to flow by aligning, breaking and reforming. Their response to the complex flow fields encountered in porous media is particularly rich. Here we use a realistic mesoscopic Brownian Dynamics model to investigate the flow of a viscoelastic surfactant (VES) fluid through individual pores idealized as a step expansion-contraction of size around one micron. In a previous study, we assumed the flow field to be Newtonian. Here we extend the work to include the non-Newtonian flow field previously obtained by experiment. The size of the simulations is also increased so that the pore is much larger than the radius of gyration of the micelles. For the non-Newtonian flow field at the higher flow rates in relatively large pores, the density of the micelles becomes markedly non-uniform. In this case, we find that the density in the large, slowly moving entry corner regions is substantially increased.

  7. Solar granulation and statistical crystallography: A modeling approach using size-shape relations

    NASA Technical Reports Server (NTRS)

    Noever, D. A.

    1994-01-01

    The irregular polygonal pattern of solar granulation is analyzed for size-shape relations using statistical crystallography. In contrast to previous work which has assumed perfectly hexagonal patterns for granulation, more realistic accounting of cell (granule) shapes reveals a broader basis for quantitative analysis. Several features emerge as noteworthy: (1) a linear correlation between number of cell-sides and neighboring shapes (called Aboav-Weaire's law); (2) a linear correlation between both average cell area and perimeter and the number of cell-sides (called Lewis's law and a perimeter law, respectively) and (3) a linear correlation between cell area and squared perimeter (called convolution index). This statistical picture of granulation is consistent with a finding of no correlation in cell shapes beyond nearest neighbors. A comparative calculation between existing model predictions taken from luminosity data and the present analysis shows substantial agreements for cell-size distributions. A model for understanding grain lifetimes is proposed which links convective times to cell shape using crystallographic results.
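
    A minimal sketch of fitting one of these size-shape relations, Lewis's law (mean cell area linear in the number of sides), by ordinary least squares; the numbers below are hypothetical, not the granulation measurements:

        import numpy as np

        n_sides   = np.array([4, 5, 6, 7, 8])                 # number of cell sides
        mean_area = np.array([0.61, 0.82, 1.00, 1.21, 1.38])  # normalised mean area
        slope, intercept = np.polyfit(n_sides - 6, mean_area, 1)
        print(f"Lewis's law fit: <A(n)> ~ {intercept:.2f} + {slope:.2f}*(n - 6)")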

  8. Beyond statistical inference: A decision theory for science

    PubMed Central

    KILLEEN, PETER R.

    2008-01-01

    Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests—which place all value on the replicability of an effect and none on its magnitude—as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute. PMID:17201351
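
    One simplified reading of that calculation (a sketch, not Killeen's full theory: the replication probability is approximated from the observed standardized effect and its sampling variance, and the utility function and false-positive cost below are placeholders):

        from math import sqrt
        from scipy.stats import norm

        def p_rep(d, n1, n2):
            """Approximate probability a replication recovers an effect of the same sign."""
            var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))  # sampling variance of d
            return norm.cdf(d / sqrt(2 * var_d))

        def expected_utility(d, n1, n2, utility=lambda d: d, false_pos_cost=1.0):
            p = p_rep(d, n1, n2)
            return p * utility(d) - (1 - p) * false_pos_cost

        print(round(p_rep(0.5, 30, 30), 2))             # ~0.91 for d = 0.5, n = 30 per group
        print(round(expected_utility(0.5, 30, 30), 2))  # ~0.37 with unit cost and u(d) = d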

  9. Beyond statistical inference: a decision theory for science.

    PubMed

    Killeen, Peter R

    2006-08-01

    Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests--which place all value on the replicability of an effect and none on its magnitude--as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute.

  10. Flight Test Evaluation of Synthetic Vision Concepts at a Terrain Challenged Airport

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Prince, Lawrence J., III; Bailey, Randell E.; Arthur, Jarvis J., III; Parrish, Russell V.

    2004-01-01

    NASA's Synthetic Vision Systems (SVS) Project is striving to eliminate poor visibility as a causal factor in aircraft accidents as well as enhance operational capabilities of all aircraft through the display of computer generated imagery derived from an onboard database of terrain, obstacle, and airport information. To achieve these objectives, NASA 757 flight test research was conducted at the Eagle-Vail, Colorado airport to evaluate three SVS display types (Head-up Display, Head-Down Size A, Head-Down Size X) and two terrain texture methods (photo-realistic, generic) in comparison to the simulated Baseline Boeing-757 Electronic Attitude Direction Indicator and Navigation/Terrain Awareness and Warning System displays. The results of the experiment showed significantly improved situation awareness, performance, and workload for SVS concepts compared to the Baseline displays and confirmed the retrofit capability of the Head-Up Display and Size A SVS concepts. The research also demonstrated that the tunnel guidance display concept used within the SVS concepts achieved required navigation performance (RNP) criteria.

  11. Correlation between electronic structure and electron conductivity in MoX2 (X = S, Se, and Te)

    NASA Astrophysics Data System (ADS)

    Muzakir, Saifful Kamaluddin

    2017-12-01

    Layered-structure molybdenum dichalcogenides, MoX2 (X = S, Se, and Te), are in focus as reversible charge storage electrodes for pseudocapacitor applications. A correlation between the number of layers and the bandgap of the materials has been established by previous researchers. This correlation suggests a connection between the bandgap and charge storage properties, i.e., the amount of charge that can be stored and the speed of storage or dissociation. In this work, fundamental parameters, viz. (i) the size-offset between a monolayer and the exciton Bohr radius of MoX2 and (ii) the ground- and excited-state electron density, have been studied. We have identified realistic monolayer models of MoX2 using quantum chemical calculations, which explain a correlation between the size-offset and charge storage properties. We conclude that the smaller the size-offset, the higher the possibility of wave-function overlap between excited-state and ground-state electrons, and therefore the higher the electron mobility and conductivity of the MoX2.

  12. Competing nucleation pathways in a mixture of oppositely charged colloids: out-of-equilibrium nucleation revisited.

    PubMed

    Peters, Baron

    2009-12-28

    Recent simulations of crystal nucleation from a compressed liquid of oppositely charged colloids show that the natural Brownian dynamics results in nuclei of a charge-disordered FCC (DFCC) solid whereas artificially accelerated dynamics with charge swap moves result in charge-ordered nuclei of a CsCl phase. These results were interpreted as a breakdown of the quasiequilibrium assumption for precritical nuclei. We use structure-specific nucleus size coordinates for the CsCl and DFCC structures and equilibrium based sampling methods to understand the dynamical effects on structure selectivity in this system. Nonequilibrium effects observed in previous simulations emerge from a diffusion tensor that dramatically changes when charge swap moves are used. Without the charge swap moves diffusion is strongly anisotropic with very slow motion along the charge-ordered CsCl axis and faster motion along the DFCC axis. Kramers-Langer-Berezhkovskii-Szabo theory predicts that under the realistic dynamics, the diffusion anisotropy shifts the current toward the DFCC axis. The diffusion tensor also varies with location on the free energy landscape. A numerical calculation of the current field with a diffusion tensor that depends on the location in the free energy landscape exacerbates the extent to which the current is skewed toward DFCC structures. Our analysis confirms that quasiequilibrium theories based on equilibrium properties can explain the nonequilibrium behavior of this system. Our analysis also shows that using a structure-specific nucleus size coordinate for each possible nucleation product can provide mechanistic insight on selectivity and competition between nucleation pathways.

  13. Competing nucleation pathways in a mixture of oppositely charged colloids: Out-of-equilibrium nucleation revisited

    NASA Astrophysics Data System (ADS)

    Peters, Baron

    2009-12-01

    Recent simulations of crystal nucleation from a compressed liquid of oppositely charged colloids show that the natural Brownian dynamics results in nuclei of a charge-disordered FCC (DFCC) solid whereas artificially accelerated dynamics with charge swap moves result in charge-ordered nuclei of a CsCl phase. These results were interpreted as a breakdown of the quasiequilibrium assumption for precritical nuclei. We use structure-specific nucleus size coordinates for the CsCl and DFCC structures and equilibrium based sampling methods to understand the dynamical effects on structure selectivity in this system. Nonequilibrium effects observed in previous simulations emerge from a diffusion tensor that dramatically changes when charge swap moves are used. Without the charge swap moves diffusion is strongly anisotropic with very slow motion along the charge-ordered CsCl axis and faster motion along the DFCC axis. Kramers-Langer-Berezhkovskii-Szabo theory predicts that under the realistic dynamics, the diffusion anisotropy shifts the current toward the DFCC axis. The diffusion tensor also varies with location on the free energy landscape. A numerical calculation of the current field with a diffusion tensor that depends on the location in the free energy landscape exacerbates the extent to which the current is skewed toward DFCC structures. Our analysis confirms that quasiequilibrium theories based on equilibrium properties can explain the nonequilibrium behavior of this system. Our analysis also shows that using a structure-specific nucleus size coordinate for each possible nucleation product can provide mechanistic insight on selectivity and competition between nucleation pathways.

  14. Detection of susceptibility genes as modifiers due to subgroup differences in complex disease.

    PubMed

    Bergen, Sarah E; Maher, Brion S; Fanous, Ayman H; Kendler, Kenneth S

    2010-08-01

    Complex diseases invariably involve multiple genes and often exhibit variable symptom profiles. The extent to which disease symptoms, course, and severity differ between affected individuals may result from underlying genetic heterogeneity. Genes with modifier effects may or may not also influence disease susceptibility. In this study, we have simulated data in which a subset of cases differ by some effect size (ES) on a quantitative trait and are also enriched for a risk allele. Power to detect this 'pseudo-modifier' gene in case-only and case-control designs was explored blind to case substructure. Simulations involved 1000 iterations and calculations for 80% power at P<0.01 while varying the risk allele frequency (RAF), sample size (SS), ES, odds ratio (OR), and proportions of the case subgroups. With realistic values for the RAF (0.20), SS (3000) and ES (1), an OR of 1.7 is necessary to detect a pseudo-modifier gene. Unequal numbers of subjects in the case groups result in little decrement in power until the group enriched for the risk allele is <30% or >70% of the total case population. In practice, greater numbers of subjects and selection of a quantitative trait with a large range will provide researchers with greater power to detect a pseudo-modifier gene. However, even under ideal conditions, studies involving alleles with low frequencies or low ORs are usually underpowered for detection of a modifier or susceptibility gene. This may explain some of the inconsistent association results for many candidate gene studies of complex diseases.
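
    A toy re-implementation of that power calculation (the additive genotype coding, normal trait and the specific parameter values are assumptions, not the authors' exact simulation code): one case subgroup is shifted on the quantitative trait by ES and enriched for the risk allele via the odds ratio, and a case-only regression of trait on genotype, blind to subgroup, is tested at P < 0.01 over many iterations.

        import numpy as np
        from scipy.stats import linregress

        rng = np.random.default_rng(42)

        def power(n_cases=3000, prop_b=0.5, raf=0.20, odds_ratio=1.7, es=1.0,
                  n_iter=1000, alpha=0.01):
            n_b = int(n_cases * prop_b)                  # subgroup enriched for the allele
            n_a = n_cases - n_b
            odds_b = odds_ratio * raf / (1 - raf)
            raf_b = odds_b / (1 + odds_b)
            hits = 0
            for _ in range(n_iter):
                geno = np.concatenate([rng.binomial(2, raf, n_a),
                                       rng.binomial(2, raf_b, n_b)])
                trait = np.concatenate([rng.normal(0.0, 1.0, n_a),
                                        rng.normal(es, 1.0, n_b)])
                if linregress(geno, trait).pvalue < alpha:
                    hits += 1
            return hits / n_iter

        print(power())   # roughly the 80%-power regime described above for these values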

  15. Design of association studies with pooled or un-pooled next-generation sequencing data.

    PubMed

    Kim, Su Yeon; Li, Yingrui; Guo, Yiran; Li, Ruiqiang; Holmkvist, Johan; Hansen, Torben; Pedersen, Oluf; Wang, Jun; Nielsen, Rasmus

    2010-07-01

    Most common hereditary diseases in humans are complex and multifactorial. Large-scale genome-wide association studies based on SNP genotyping have only identified a small fraction of the heritable variation of these diseases. One explanation may be that many rare variants (a minor allele frequency, MAF <5%), which are not included in the common genotyping platforms, may contribute substantially to the genetic variation of these diseases. Next-generation sequencing, which would allow the analysis of rare variants, is now becoming so cheap that it provides a viable alternative to SNP genotyping. In this paper, we present cost-effective protocols for using next-generation sequencing in association mapping studies based on pooled and un-pooled samples, and identify optimal designs with respect to total number of individuals, number of individuals per pool, and the sequencing coverage. We perform a small empirical study to evaluate the pooling variance in a realistic setting where pooling is combined with exon-capturing. To test for associations, we develop a likelihood ratio statistic that accounts for the high error rate of next-generation sequencing data. We also perform extensive simulations to determine the power and accuracy of this method. Overall, our findings suggest that with a fixed cost, sequencing many individuals at a more shallow depth with larger pool size achieves higher power than sequencing a small number of individuals in higher depth with smaller pool size, even in the presence of high error rates. Our results provide guidelines for researchers who are developing association mapping studies based on next-generation sequencing. (c) 2010 Wiley-Liss, Inc.
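
    A toy comparison (not the authors' likelihood-ratio framework) of the design trade-off they describe: for a fixed total number of reads carrying a per-base error rate, the allele-frequency estimate is less noisy when the reads are spread shallowly over a large pool than when they are concentrated on a few individuals.

        import numpy as np

        rng = np.random.default_rng(7)

        def rmse(n_ind, reads_total, maf=0.02, err=0.01, n_sim=2000):
            est = np.empty(n_sim)
            for i in range(n_sim):
                alleles = rng.binomial(2 * n_ind, maf)            # minor alleles in the pool
                f_pool = alleles / (2 * n_ind)
                p_read = f_pool * (1 - err) + (1 - f_pool) * err  # per-read minor-allele prob
                minor_reads = rng.binomial(reads_total, p_read)
                est[i] = (minor_reads / reads_total - err) / (1 - 2 * err)  # error-corrected
            return np.sqrt(np.mean((est - maf) ** 2))

        print("1000 individuals, shallow:", rmse(1000, reads_total=4000))
        print("  50 individuals, deep:   ", rmse(50,   reads_total=4000))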

  16. Simplified galaxy formation with mesh-less hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lupi, Alessandro; Volonteri, Marta; Silk, Joseph

    2017-09-01

    Numerical simulations have become a necessary tool to describe the complex interactions among the different processes involved in galaxy formation and evolution, unfeasible via an analytic approach. The last decade has seen a great effort by the scientific community in improving the sub-grid physics modelling and the numerical techniques used to make numerical simulations more predictive. Although the recently publicly available code gizmo has proven to be successful in reproducing galaxy properties when coupled with the model of the MUFASA simulations and the more sophisticated prescriptions of the Feedback In Realistic Environment (FIRE) set-up, it has not yet been tested using delayed cooling supernova feedback, which still represents a reasonable approach for large cosmological simulations, for which detailed sub-grid models are prohibitive. In order to limit the computational cost and to be able to resolve the disc structure in the galaxies we perform a suite of zoom-in cosmological simulations with rather low resolution centred around a sub-L* galaxy with a halo mass of 3 × 1011 M⊙ at z = 0, to investigate the ability of this simple model, coupled with the new hydrodynamic method of gizmo, to reproduce observed galaxy scaling relations (stellar to halo mass, stellar and baryonic Tully-Fisher, stellar mass-metallicity and mass-size). We find that the results are in good agreement with the main scaling relations, except for the total stellar mass, which is larger than that predicted by the abundance matching technique, and the effective sizes for the most massive galaxies in the sample, which are too small.

  17. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    PubMed Central

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799

  18. Perceived realism moderates the relation between sexualized media consumption and permissive sexual attitudes in Dutch adolescents.

    PubMed

    Baams, Laura; Overbeek, Geertjan; Dubas, Judith Semon; Doornwaard, Suzan M; Rommes, Els; van Aken, Marcel A G

    2015-04-01

    This study examined whether the development of sexualized media consumption and permissive sexual attitudes would be more strongly interrelated when adolescents perceived sexualized media images as highly realistic. We used data from a three-wave longitudinal sample of 444 Dutch adolescents aged 13-16 years at baseline. Results from parallel process latent growth modeling multigroup analyses showed that higher initial levels of sexualized media consumption were associated with higher initial level of permissive sexual attitudes. Moreover, increases of sexualized media consumption over time were associated with increases of permissive sexual attitudes over time. Considering the moderation by perceived realism, we found these effects only for those who perceived sexualized media as more realistic. Findings for male and female adolescents were similar except for the relations between initial levels and subsequent development. Among male adolescents who perceived sexualized media images to be realistic, higher initial levels of permissive sexual attitudes were related to subsequent less rapid development of sexualized media consumption. For male adolescents who perceived sexualized media to be less realistic, higher initial levels of sexualized media consumption were related to a subsequent less rapid development of permissive sexual attitudes. These relations were not found for female adolescents. Overall, our results suggest that, in male and female adolescents, those with a high level of perceived realism showed a correlated development of sexualized media consumption and permissive sexual attitudes. These findings point to a need for extended information on how to guide adolescents in interpreting and handling sexualized media in everyday life.

  19. Effect of progressive wear on the contact mechanics of hip replacements--does the realistic surface profile matter?

    PubMed

    Wang, Ling; Yang, Wenjian; Peng, Xifeng; Li, Dichen; Dong, Shuangpeng; Zhang, Shu; Zhu, Jinyu; Jin, Zhongmin

    2015-04-13

    The contact mechanics of artificial metal-on-polyethylene hip joints are believed to affect the lubrication, wear and friction of the articulating surfaces and may lead to joint loosening. Finite element analysis has been widely used for contact mechanics studies and good agreement has been achieved with current experimental data; however, most studies were carried out with idealized spherical geometries of the hip prostheses rather than realistic worn surfaces, either for reasons of simplification or for lack of worn surface profiles. In this study, the worn surfaces of the samples from various stages of hip simulator testing (0 to 5 million cycles) were reconstructed as solid models and were applied in the contact mechanics study. The simulator testing results suggested that the center of the head departs from that of the cup by varying amounts, and that this departure changes with progressively increased wear. This finding was adopted into the finite element study for better evaluation accuracy. Results indicated that the realistic model provided a different evaluation from that of the ideal spherical model. Moreover, with the progressively increased wear, a large increase of the contact pressure (from 12 to 31 MPa) was predicted on the articulating surface, and the predicted maximum von Mises stress increased from 7.47 to 13.26 MPa, indicating the marked effect of the worn surface profiles on the contact mechanics of the joint. This study emphasizes the importance of the realistic worn surface profile of the acetabular cup, especially following large wear volumes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than is given by the conventional formulas. Moreover, given a specified size of sample calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
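
    For reference, a minimal sketch of the Yuen statistic these formulas plan for (20% trimming, winsorized variances and Welch-type degrees of freedom; this is the test itself, not the paper's sample-size formulas):

        import numpy as np
        from scipy import stats
        from scipy.stats.mstats import winsorize

        def yuen_test(x, y, trim=0.2):
            def parts(a):
                a = np.asarray(a, dtype=float)
                n = len(a)
                g = int(np.floor(trim * n))
                h = n - 2 * g                                    # effective sample size
                sw2 = np.var(np.asarray(winsorize(a, limits=(trim, trim))), ddof=1)
                d = (n - 1) * sw2 / (h * (h - 1))
                return stats.trim_mean(a, trim), d, h
            mx, dx, hx = parts(x)
            my, dy, hy = parts(y)
            t = (mx - my) / np.sqrt(dx + dy)
            df = (dx + dy) ** 2 / (dx**2 / (hx - 1) + dy**2 / (hy - 1))
            return t, df, 2 * stats.t.sf(abs(t), df)

        rng = np.random.default_rng(3)
        print(yuen_test(rng.normal(0, 1, 30), rng.normal(0.8, 2, 25)))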
