Accuracy or precision: Implications of sample design and methodology on abundance estimation
Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.
2015-01-01
Sampling by spatially replicated counts (point counts) is an increasingly popular method of estimating the population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of the number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population size using N-mixture models. Scenarios with many sample units of small area provided estimates that were consistently closer to true abundance than scenarios with few sample units of large area. However, scenarios with few sample units of large area provided more precise abundance estimates than those derived from scenarios with many sample units of small area. Accuracy and precision of abundance estimates should be weighed during the sample design process, with study goals and objectives fully recognized; in practice, however, this consideration is often an afterthought that occurs during data analysis.
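To make the accuracy-precision trade-off concrete, here is a minimal simulation sketch in the spirit of the study (not the authors' code): spatially replicated counts are generated under an N-mixture model (site abundance Poisson, detection binomial) and total abundance is estimated by maximum likelihood for a many-small-units design versus a few-large-units design. All parameter values (density, detection probability, unit areas, visit numbers) are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, binom

rng = np.random.default_rng(1)

def simulate(n_units, unit_area, density, p_detect, n_visits):
    """Counts y[i, j]: N_i ~ Poisson(density * area), y[i, j] ~ Binomial(N_i, p_detect)."""
    N = rng.poisson(density * unit_area, n_units)
    return rng.binomial(N[:, None], p_detect, (n_units, n_visits))

def neg_log_lik(params, y, n_max=400):
    """N-mixture likelihood, marginalising the latent site abundances."""
    lam = np.exp(params[0])
    p = 1.0 / (1.0 + np.exp(-params[1]))
    Ns = np.arange(n_max + 1)
    prior = poisson.pmf(Ns, lam)
    ll = 0.0
    for counts in y:
        lik = binom.pmf(counts[:, None], Ns[None, :], p).prod(axis=0)
        ll += np.log((prior * lik).sum() + 1e-300)
    return -ll

def estimate_total(y, unit_area, study_area):
    start = [np.log(max(y.mean(), 0.5) * 2.0), 0.0]          # crude starting values
    fit = minimize(neg_log_lik, start, args=(y,), method="Nelder-Mead")
    return np.exp(fit.x[0]) / unit_area * study_area         # density estimate scaled to the study area

true_density, study_area, p_detect, n_visits = 5.0, 100.0, 0.5, 3
designs = {"many small units": (40, 1.0), "few large units": (5, 8.0)}   # same total area sampled

for name, (n_units, area) in designs.items():
    ests = [estimate_total(simulate(n_units, area, true_density, p_detect, n_visits),
                           area, study_area) for _ in range(30)]
    print(f"{name}: mean estimate {np.mean(ests):.0f}, SD {np.std(ests):.0f} "
          f"(true total {true_density * study_area:.0f})")
```

Repeating the fit over many simulated datasets, as above, is one way to compare designs on both mean closeness to the true total (accuracy) and spread of the estimates (precision).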
(I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.
van Rijnsoever, Frank J
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximum information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimal information scenario.
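As an illustration of the saturation logic described above (a sketch, not the author's simulation code), the following compares the "random chance" and "maximum information" scenarios on a hypothetical population in which each information source holds each code with some mean probability; sampling stops once every code has been observed at least once.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_population(n_sources=500, n_codes=30, mean_p=0.1):
    """Each information source holds each code independently with probability mean_p (simplified)."""
    return rng.random((n_sources, n_codes)) < mean_p

def steps_to_saturation(pop, strategy):
    """Sample sources without replacement until every code has been observed at least once."""
    remaining = list(range(len(pop)))
    seen = np.zeros(pop.shape[1], dtype=bool)
    steps = 0
    while not seen.all() and remaining:
        if strategy == "random chance":                 # probability sampling
            idx = int(rng.integers(len(remaining)))
        else:                                           # "maximum information": greedy pick
            idx = int(np.argmax([(~seen & pop[i]).sum() for i in remaining]))
        seen |= pop[remaining.pop(idx)]
        steps += 1
    return steps

pop = make_population()
print("random chance:", np.mean([steps_to_saturation(pop, "random chance") for _ in range(20)]))
print("maximum information:", steps_to_saturation(pop, "maximum information"))
```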
Soares, André E R; Schrago, Carlos G
2015-01-07
Although taxon sampling is commonly considered an important issue in phylogenetic inference, it is rarely considered in the Bayesian estimation of divergence times. In fact, the studies conducted to date have presented ambiguous results, and the relevance of taxon sampling for molecular dating remains unclear. In this study, we developed a series of simulations that, after six hundred Bayesian molecular dating analyses, allowed us to evaluate the impact of taxon sampling on chronological estimates under three scenarios of among-lineage rate heterogeneity. The first scenario allowed us to examine the influence of the number of terminals on the age estimates based on a strict molecular clock. The second scenario imposed an extreme example of lineage-specific rate variation, and the third scenario permitted extensive rate variation distributed along the branches. We also analyzed empirical data on selected mitochondrial genomes of mammals. Our results showed that in the strict molecular-clock scenario (Case I), taxon sampling had a minor impact on the accuracy of the time estimates, although the precision of the estimates was greater with an increased number of terminals. The effect was similar in the scenario (Case III) based on rate variation distributed among the branches. Only under intensive rate variation among lineages (Case II) did taxon sampling result in biased estimates. The results of an empirical analysis corroborated the simulation findings. We demonstrate that taxonomic sampling affected divergence time inference, but that its impact was significant only if the rates deviated from those derived for the strict molecular clock. Increased taxon sampling improved the precision and accuracy of the divergence time estimates, but the impact on precision is more relevant. On average, biased estimates were obtained only if lineage rate variation was pronounced. Copyright © 2014 Elsevier Ltd. All rights reserved.
[Effects of sampling plot number on tree species distribution prediction under climate change].
Liang, Yu; He, Hong-Shi; Wu, Zhi-Wei; Li, Xiao-Na; Luo, Xu
2013-05-01
Based on neutral landscapes under different degrees of landscape fragmentation, this paper studied the effects of sampling plot number on the prediction of tree species distribution at the landscape scale under climate change. Tree species distribution was predicted with a coupled modeling approach that linked an ecosystem process model with a forest landscape model, and three contingent scenarios and one reference scenario of sampling plot number were assumed. The differences between the three scenarios and the reference scenario under different degrees of landscape fragmentation were tested. The results indicated that the effects of sampling plot number on the prediction of tree species distribution depended on the tree species' life-history attributes. For generalist species, the prediction of their distribution at the landscape scale needed more plots. Except for the extreme specialist, the degree of landscape fragmentation also affected the effects of sampling plot number on the prediction. As the simulation period increased, the effects of sampling plot number on the prediction of tree species distribution at the landscape scale could change. For generalist species, more plots are needed for long-term simulation.
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported results using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods for estimating the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as comprehensive guidance for performing meta-analysis in different situations.
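For readers who want the flavor of such conversions, below is a small sketch of the kind of estimators this literature proposes for two of the reporting scenarios (median with range, and median with interquartile range), using the normal-order-statistic style approximations commonly attributed to this paper; the exact constants should be verified against the original article before use in a meta-analysis.

```python
from scipy.stats import norm

def mean_sd_from_range(a, m, b, n):
    """Given minimum a, median m, maximum b and sample size n."""
    mean = (a + 2 * m + b) / 4.0
    sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
    return mean, sd

def mean_sd_from_iqr(q1, m, q3, n):
    """Given first quartile q1, median m, third quartile q3 and sample size n."""
    mean = (q1 + m + q3) / 3.0
    sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
    return mean, sd

# Hypothetical trial reporting median 12, range 5-25, n = 40
print(mean_sd_from_range(5, 12, 25, 40))
```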
Sampling and counting genome rearrangement scenarios
2015-01-01
Background: Even for moderate-size inputs, there is a tremendous number of optimal rearrangement scenarios, regardless of what the model is and which specific question is to be answered. Therefore, giving one optimal solution might be misleading and cannot be used for statistical inference. Statistically well-founded methods are necessary to sample uniformly from the solution space; then a small number of samples is sufficient for statistical inference. Contribution: In this paper, we give a mini-review of the state of the art in sampling and counting rearrangement scenarios, focusing on the reversal, DCJ and SCJ models. In addition, we give a Gibbs sampler for sampling the most parsimonious labelings of evolutionary trees under the SCJ model. The method has been implemented and tested on real-life data. The software package, together with example data, can be downloaded from http://www.renyi.hu/~miklosi/SCJ-Gibbs/ PMID:26452124
A Novel Method to Handle the Effect of Uneven Sampling Effort in Biodiversity Databases
Pardo, Iker; Pata, María P.; Gómez, Daniel; García, María B.
2013-01-01
How reliable are results on the spatial distribution of biodiversity based on databases? Many studies have demonstrated the uncertainty inherent in this kind of analysis due to sampling-effort bias and the need for its quantification. Although a number of methods are available for this purpose, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent discrimination capability to distinguish between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in a simulated scenario, whereas sampling completeness estimated with other methods showed no relationship due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and therefore, it can be efficiently applied to most databases in order to enhance the reliability of biodiversity analyses. PMID:23326357
NASA Astrophysics Data System (ADS)
Miller, Carla J.; Glenn, D. F.; Hartenstein, Steven D.; Hallowell, Susan F.
1998-12-01
Recent efforts at the Idaho National Engineering and Environmental Laboratory (INEEL) have included mapping explosive contamination resulting from manufacturing and carrying improvised explosive devices (IEDs). Two types of trace detection equipment were used to determine levels of contamination from designated sampling areas. A total of twenty IEDs were constructed: ten using TNT and ten using C-4. Two test scenarios were used. The first scenario tracked the activities of a manufacturer who straps the device onto an independent courier. The courier then performed a series of activities to simulate waiting in an airport. The second scenario tracked the activities of a manufacturer who also served as the courier. A sample set for each test consisted of thirty samples from various locations on each IED manufacturer, thirty from each IED courier, twenty-five from the manufacturing area, and twenty-five from the courier area. Pre-samples and post-samples were collected for analysis with each detection technique. Samples analyzed by GC/chemiluminescence were taken by swiping a Teflon-coated sampling swipe across the surface of the sampling area to pick up any explosive particles. Samples analyzed by ion mobility spectrometry (IMS) were taken from the clothing of the manufacturer and courier by vacuuming the surface and collecting particulates on a fiberglass filter. Samples for IMS analysis from the manufacturing and courier rooms were taken by wiping a cotton sampling swipe across the surface area. Current work on building IEDs and monitoring explosive contamination is directed toward detection with portal monitors.
Uyei, Jennifer; Braithwaite, R Scott
2016-01-01
Despite the benefits of the placebo-controlled trial design, it is limited by its inability to quantify total benefits and harms. Such trials, for example, are not designed to detect an intervention's placebo or nocebo effects, which if detected could alter the benefit-to-harm balance and change a decision to adopt or reject an intervention. In this article, we explore scenarios in which alternative experimental trial designs, which differ in the type of control used, influence expected value across a range of pretest assumptions and study sample sizes. We developed a decision model to compare 3 trial designs and their implications for decision making: 2-arm placebo-controlled trial ("placebo-control"), 2-arm intervention v. do nothing trial ("null-control"), and an innovative 3-arm trial design: intervention v. do nothing v. placebo trial ("novel design"). Four scenarios were explored regarding particular attributes of a hypothetical intervention: 1) all benefits and no harm, 2) no biological effect, 3) only biological effects, and 4) surreptitious harm (no biological benefit or nocebo effect). Scenario 1: When sample sizes were very small, the null-control was preferred, but as sample sizes increased, expected value of all 3 designs converged. Scenario 2: The null-control was preferred regardless of sample size when the ratio of placebo to nocebo effect was >1; otherwise, the placebo-control was preferred. Scenario 3: When sample size was very small, the placebo-control was preferred when benefits outweighed harms, but the novel design was preferred when harms outweighed benefits. Scenario 4: The placebo-control was preferred when harms outweighed placebo benefits; otherwise, preference went to the null-control. Scenarios are hypothetical, study designs have not been tested in a real-world setting, blinding is not possible in all designs, and some may argue the novel design poses ethical concerns. We identified scenarios in which alternative experimental study designs would confer greater expected value than the placebo-controlled trial design. The likelihood and prevalence of such situations warrant further study. © The Author(s) 2015.
Cornuet, Jean-Marie; Santos, Filipe; Beaumont, Mark A; Robert, Christian P; Marin, Jean-Michel; Balding, David J; Guillemaud, Thomas; Estoup, Arnaud
2008-12-01
Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes key methods used in the program and provides its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC. The software DIY ABC is freely available at http://www.montpellier.inra.fr/CBGP/diyabc.
Subsurface Noble Gas Sampling Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrigan, C. R.; Sun, Y.
2017-09-18
The intent of this document is to provide information about best available approaches for performing subsurface soil gas sampling during an On Site Inspection or OSI. This information is based on field sampling experiments, computer simulations and data from the NA-22 Noble Gas Signature Experiment Test Bed at the Nevada Nuclear Security Site (NNSS). The approaches should optimize the gas concentration from the subsurface cavity or chimney regime while simultaneously minimizing the potential for atmospheric radioxenon and near-surface Argon-37 contamination. Where possible, we quantitatively assess differences in sampling practices for the same sets of environmental conditions. We recognize that all sampling scenarios cannot be addressed. However, if this document helps to inform the intuition of the reader about addressing the challenges resulting from the inevitable deviations from the scenario assumed here, it will have achieved its goal.
DNA transfer-a never ending story. A study on scenarios involving a second person as carrier.
Helmus, Janine; Bajanowski, Thomas; Poetsch, Micaela
2016-01-01
The transfer of DNA directly from one item to another has been shown in many studies, with elaborate discussions on the nature of the DNA donor as well as the material and surface of the items or surrounding features. Every DNA transfer scenario one can imagine seems to be possible. This evokes ever more intricate scenarios proposed by lawyers or attorneys searching for an explanation for the presence of a certain person's DNA on a distinct item relevant to a crime. In court, the forensic genetic scientist has to comment on the probability of these scenarios, which calls for extensive studies of such settings. Here, the possibility of an involvement of a second person as a carrier of the donor's DNA was investigated in a variety of different scenarios including three pairs of people and two kinds of items (textiles and plastic bags). All transfer settings were executed with and without gloves on the carrier's hands. DNA left on the items was isolated and analyzed using the Powerplex® ESX17 kit. In 21 out of 180 samples (12%), all alleles of the donor DNA could be recovered from the second item; in eight of these samples, the donor's DNA was dominant compared with all other alleles (38% of samples with a complete donor profile). Additionally, 51 samples displayed more than half of the donor's alleles (28%). The complete DNA profile of the carrier was found in 47 out of 180 samples (42 partial profiles). In summary, it could be shown that a transfer of donor DNA from epithelial cells through a carrier to a second item is possible, even if the carrier does not wear gloves.
FIELD SAMPLING PROTOCOLS AND ANALYSIS
I have been asked to speak again to the environmental science class regarding actual research scenarios related to my work at Kerr Lab. I plan to discuss sampling protocols along with various field analyses performed during sampling activities. Many of the students have never see...
Cornuet, Jean-Marie; Santos, Filipe; Beaumont, Mark A.; Robert, Christian P.; Marin, Jean-Michel; Balding, David J.; Guillemaud, Thomas; Estoup, Arnaud
2008-01-01
Summary: Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes key methods used in the program and provides its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC. Availability: The software DIY ABC is freely available at http://www.montpellier.inra.fr/CBGP/diyabc. Contact: j.cornuet@imperial.ac.uk Supplementary information: Supplementary data are also available at http://www.montpellier.inra.fr/CBGP/diyabc PMID:18842597
Darnaude, Audrey M.
2016-01-01
Background: Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some or all nursery-signatures, may also need to be estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods: We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering five different sampling scenarios, where 0–4 lagoons were excluded from the nursery-source dataset, and six nursery-signature separation scenarios that simulated data separated by 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results: Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery sources were sampled, but exhibited large variability among cohorts and increased with the number of non-sampled sources, up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but these estimates tended to be less biased, and more uncertain, than mixing proportion ones across all sampling scenarios (BI < 0.13, SE < 0.29). Increasing separation among nursery signatures improved the reliability of mixing proportion estimates, but led to non-linear responses in baseline signature parameters. Low uncertainty, but a consistent underestimation bias, affected the estimated number of nursery sources across all incomplete sampling scenarios. Discussion: ML-MM produced reliable estimates of mixing proportions and nursery-signatures under an important range of incomplete sampling and nursery-signature separation scenarios. This method failed, however, in estimating the true number of nursery sources, reflecting a pervasive issue affecting mixture models, within and beyond the ML framework. Large differences in bias and uncertainty found among cohorts were linked to differences in the separation of chemical signatures among nursery habitats. Simulation approaches, such as those presented here, could be useful to evaluate the sensitivity of MM results to separation and variability in nursery-signatures for other species, habitats or cohorts. PMID:27761305
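A minimal sketch of the maximum likelihood idea when baseline nursery signatures are treated as known: mixing proportions are estimated by EM over a finite mixture whose component densities are held fixed. The signature means, covariances, and sample sizes below are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import multivariate_normal

def estimate_mixing_proportions(X, mus, covs, n_iter=200):
    """EM for mixing proportions only; baseline (nursery) signatures are held fixed."""
    K = len(mus)
    dens = np.column_stack([multivariate_normal.pdf(X, mus[k], covs[k]) for k in range(K)])
    theta = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        resp = dens * theta
        resp /= resp.sum(axis=1, keepdims=True)   # E-step: posterior source membership
        theta = resp.mean(axis=0)                 # M-step: update mixing proportions
    return theta

# Invented 2-element otolith signatures for 3 nursery sources and a mixed stock of 300 fish
rng = np.random.default_rng(2)
mus = [np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([0.0, 3.0])]
covs = [np.eye(2)] * 3
true_theta = [0.6, 0.3, 0.1]
X = np.vstack([rng.multivariate_normal(mus[k], covs[k], int(300 * true_theta[k])) for k in range(3)])
print(estimate_mixing_proportions(X, mus, covs).round(2))
```

Shrinking the separation among the invented centroids, or dropping one source from the baseline, reproduces in miniature the bias and uncertainty patterns the abstract describes.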
NASA Astrophysics Data System (ADS)
Lamontagne, J. R.; Reed, P. M.
2017-12-01
Impacts and adaptations to global change largely occur at regional scales, yet they are shaped globally through the interdependent evolution of the climate, energy, agriculture, and industrial systems. It is important for regional actors to account for the impacts of global changes on their systems in a globally consistent but regionally relevant way. This can be challenging because emerging global reference scenarios may not reflect regional challenges. Likewise, regionally specific scenarios may miss important global feedbacks. In this work, we contribute a scenario discovery framework to identify regionally specific, decision-relevant scenarios from an ensemble of scenarios of global change. To this end, we generated a large ensemble of time-evolving regional, multi-sector global change scenarios by a full factorial sampling of the underlying assumptions in the emerging shared socio-economic pathways (SSPs), using the Global Change Assessment Model (GCAM). Statistical and visual analytics were then used to discover which SSP assumptions are particularly consequential for various regions, considering a broad range of time-evolving metrics that encompass multiple spatial scales and sectors. In an illustrative example, we identify the most important global change narratives to inform water resource scenarios for several geographic regions using the proposed scenario discovery framework. Our results highlight the importance of demographic and agricultural evolution compared to technical improvements in the energy sector. We show that narrowly sampling a few canonical reference scenarios provides a very narrow view of the consequence space, increasing the risk of tacitly ignoring major impacts. Even optimistic scenarios contain unintended, disproportionate regional impacts and intergenerational transfers of consequence. Formulating consequential scenarios of deeply and broadly uncertain futures requires a better exploration of which quantitative measures of consequence are important, for whom they are important, where, and when. To this end, we have contributed a large database of climate change futures that can support 'backwards' scenario generation techniques that capture a broader array of consequences than those that emerge from limited sampling of a few reference scenarios.
The relation between statistical power and inference in fMRI
Wager, Tor D.; Yarkoni, Tal
2017-01-01
Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial—especially in regards to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20–30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resembles the weak diffuse scenario much more than the localized strong scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region-of-interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
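A quick way to see the scale of the problem is a Monte Carlo power calculation for a weak brain-behavior correlation at a stringent threshold; this is only a sketch (alpha = 0.001 stands in for a multiple-comparisons-corrected threshold, and r = 0.2 for a "weak diffuse" effect), not the authors' simulation.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

def power_for_correlation(true_r, n, alpha, n_sim=2000):
    """Monte Carlo power for detecting a brain-behavior correlation of size true_r with n subjects."""
    cov = [[1.0, true_r], [true_r, 1.0]]
    hits = 0
    for _ in range(n_sim):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, n).T
        hits += pearsonr(x, y)[1] < alpha
    return hits / n_sim

# Weak, diffuse effect (r ~ 0.2) at a stand-in corrected threshold (alpha = 0.001)
for n in (20, 30, 80):
    print(n, power_for_correlation(0.2, n, alpha=0.001))
```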
2013-01-01
Background The extent to which a genomic test will be used in practice is affected by factors such as ability of the test to correctly predict response to treatment (i.e. sensitivity and specificity of the test), invasiveness of the testing procedure, test cost, and the probability and severity of side effects associated with treatment. Methods Using discrete choice experimentation (DCE), we elicited preferences of the public (Sample 1, N = 533 and Sample 2, N = 525) and cancer patients (Sample 3, N = 38) for different attributes of a hypothetical genomic test for guiding cancer treatment. Samples 1 and 3 considered the test/treatment in the context of an aggressive curable cancer (scenario A) while the scenario for sample 2 was based on a non-aggressive incurable cancer (scenario B). Results In aggressive curable cancer (scenario A), everything else being equal, the odds ratio (OR) of choosing a test with 95% sensitivity was 1.41 (versus a test with 50% sensitivity) and willingness to pay (WTP) was $1331, on average, for this amount of improvement in test sensitivity. In this scenario, the OR of choosing a test with 95% specificity was 1.24 times that of a test with 50% specificity (WTP = $827). In non-aggressive incurable cancer (scenario B), the OR of choosing a test with 95% sensitivity was 1.65 (WTP = $1344), and the OR of choosing a test with 95% specificity was 1.50 (WTP = $1080). Reducing severity of treatment side effects from severe to mild was associated with large ORs in both scenarios (OR = 2.10 and 2.24 in scenario A and B, respectively). In contrast, patients had a very large preference for 95% sensitivity of the test (OR = 5.23). Conclusion The type and prognosis of cancer affected preferences for genomically-guided treatment. In aggressive curable cancer, individuals emphasized more on the sensitivity rather than the specificity of the test. In contrast, for a non-aggressive incurable cancer, individuals put similar emphasis on sensitivity and specificity of the test. While the public expressed strong preference toward lowering severity of side effects, improving sensitivity of the test had by far the largest influence on patients’ decision to use genomic testing. PMID:24176050
Shepard, Michele; Brenner, Sara
2014-01-01
Numerous studies are ongoing in the fields of nanotoxicology and exposure science; however, gaps remain in identifying and evaluating potential exposures from skin contact with engineered nanoparticles in occupational settings. The aim of this study was to identify potential cutaneous exposure scenarios at a workplace using engineered nanoparticles (alumina, ceria, amorphous silica) and evaluate the presence of these materials on workplace surfaces. Process review, workplace observations, and preliminary surface sampling were conducted using microvacuum and wipe sample collection methods and transmission electron microscopy with elemental analysis. Exposure scenarios were identified with potential for incidental contact. Nanoparticles of silica or silica and/or alumina agglomerates (or aggregates) were identified in surface samples from work areas where engineered nanoparticles were used or handled. Additional data are needed to evaluate occupational exposures from skin contact with engineered nanoparticles; precautionary measures should be used to minimize potential cutaneous exposures in the workplace.
Dols, W. Stuart; Persily, Andrew K.; Morrow, Jayne B.; Matzke, Brett D.; Sego, Landon H.; Nuffer, Lisa L.; Pulsipher, Brent A.
2010-01-01
In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority area zones within the building and then sampling designs and strategies could be developed based on those zones. PMID:27134782
Dols, W Stuart; Persily, Andrew K; Morrow, Jayne B; Matzke, Brett D; Sego, Landon H; Nuffer, Lisa L; Pulsipher, Brent A
2010-01-01
In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority area zones within the building and then sampling designs and strategies could be developed based on those zones.
Le, Hoa V; Poole, Charles; Brookhart, M Alan; Schoenbach, Victor J; Beach, Kathleen J; Layton, J Bradley; Stürmer, Til
2013-11-19
The High-Dimensional Propensity Score (hd-PS) algorithm can select and adjust for baseline confounders of treatment-outcome associations in pharmacoepidemiologic studies that use healthcare claims data. How hd-PS performance is affected by aggregating medications or medical diagnoses has not been assessed. We evaluated the effects of aggregating medications or diagnoses on hd-PS performance in an empirical example using resampled cohorts with small sample size, rare outcome incidence, or low exposure prevalence. In a cohort study comparing the risk of upper gastrointestinal complications in initiators of celecoxib or traditional NSAIDs (diclofenac, ibuprofen) with rheumatoid arthritis and osteoarthritis, we (1) aggregated medications and International Classification of Diseases-9 (ICD-9) diagnoses into hierarchies of the Anatomical Therapeutic Chemical (ATC) classification and the Clinical Classification Software (CCS), respectively, and (2) sampled the full cohort using techniques validated by simulations to create 9,600 samples to compare 16 aggregation scenarios across 50% and 20% samples with varying outcome incidence and exposure prevalence. We applied hd-PS to estimate relative risks (RR) using 5 dimensions, predefined confounders, ≤ 500 hd-PS covariates, and propensity score deciles. For each scenario, we calculated: (1) the geometric mean RR; (2) the difference between the scenario mean ln(RR) and the ln(RR) from published randomized controlled trials (RCT); and (3) the proportional difference in the degree of estimated confounding between that scenario and the base scenario (no aggregation). Compared with the base scenario, aggregation of medications into ATC level 4, alone or in combination with aggregation of diagnoses into CCS level 1, improved the hd-PS confounding adjustment in most scenarios, reducing residual confounding compared with the RCT findings by up to 19%. Aggregation of codes using hierarchical coding systems may improve the performance of the hd-PS to control for confounders. The balance of advantages and disadvantages of aggregation is likely to vary across research settings.
Using larval fish community structure to guide long-term monitoring of fish spawning activity
Pritt, Jeremy J.; Roseman, Edward F.; Ross, Jason E.; DeBruyne, Robin L.
2015-01-01
Larval fishes provide a direct indication of spawning activity and may therefore be useful for long-term monitoring efforts in relation to spawning habitat restoration. However, larval fish sampling can be time intensive and costly. We sought to understand the spatial and temporal structure of larval fish communities in the St. Clair–Detroit River system, Michigan–Ontario, to determine whether targeted larval fish sampling can be made more efficient for long-term monitoring. We found that larval fish communities were highly nested, with lower river segments and late-spring samples containing the highest genus richness of larval fish. We created four sampling scenarios for each river system: (1) using all available data, (2) limiting temporal sampling to late spring, (3) limiting spatial sampling to lower river segments only, and (4) limiting both spatial and temporal sampling. By limiting the spatial extent of sampling to lower river sites and/or limiting the temporal extent to the late-spring period, we found that effort could be reduced by more than 50% while maintaining over 75% of the observed and estimated total genus richness. Similarly, limiting the sampling effort to lower river sites and/or the late-spring period maintained between 65% and 93% of the observed richness of lithophilic-spawning genera and invasive genera. In general, community composition remained consistent among sampling scenarios. Targeted sampling offers a lower-cost alternative to exhaustive spatial and temporal sampling and may be more readily incorporated into long-term monitoring.
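The scenario comparison can be reproduced in outline from any sample-by-genus incidence matrix: restrict the samples to a subset (late spring, lower river, or both) and compute the fraction of total genus richness retained. The sketch below uses randomly generated incidence data purely as a stand-in for the St. Clair-Detroit River collections.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical incidence matrix: rows = samples (site, week), columns = genera (True = collected)
n_samples, n_genera = 200, 40
site = rng.choice(["upper", "lower"], n_samples)
week = rng.integers(1, 11, n_samples)                 # weeks 1-10; treat weeks 6-10 as late spring
incidence = rng.random((n_samples, n_genera)) < rng.uniform(0.02, 0.3, n_genera)

def richness(mask):
    """Number of genera observed at least once within the retained samples."""
    return incidence[mask].any(axis=0).sum()

scenarios = {
    "all samples": np.ones(n_samples, dtype=bool),
    "late spring only": week >= 6,
    "lower river only": site == "lower",
    "lower river, late spring": (site == "lower") & (week >= 6),
}
full = richness(scenarios["all samples"])
for name, mask in scenarios.items():
    print(f"{name}: effort {mask.mean():.0%}, richness retained {richness(mask) / full:.0%}")
```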
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milbrath, Brian; Sussman, Aviva Joy
As part of this training course, we have created a scenario at a location that will provide you with an opportunity to practice the techniques you have learned during the week. For the first hour, you will have the opportunity to conduct a Visual Observation (VOB) and use it to determine ideal locations for RN soil sampling, swipe sampling, and in situ measurements. After the VOB and sample locating, you will rotate between soil sampling, swipe sampling, and two in situ activities.
Appendix E - Sample Production Facility Plan
This sample Spill Prevention, Control and Countermeasure (SPCC) Plan in Appendix E is intended to provide examples and illustrations of how a production facility could address a variety of scenarios in its SPCC Plan.
Appendix D - Sample Bulk Storage Facility Plan
This sample Spill Prevention, Control and Countermeasure (SPCC) Plan in Appendix D is intended to provide examples and illustrations of how a bulk storage facility could address a variety of scenarios in its SPCC Plan.
DNA analysis in Disaster Victim Identification.
Montelius, Kerstin; Lindblom, Bertil
2012-06-01
DNA profiling and matching is one of the primary methods to identify missing persons in a disaster, as defined by the Interpol Disaster Victim Identification Guide. The process to identify a victim by DNA includes: the collection of the best possible ante-mortem (AM) samples, the choice of post-mortem (PM) samples, DNA-analysis, matching and statistical weighting of the genetic relationship or match. Each disaster has its own scenario, and each scenario defines its own methods for identification of the deceased.
Study on Aerosol Penetration Through Clothing and Individual Protective Equipment
2009-05-01
8.4 × 10⁻³ mg·m⁻³ (2.57 × 10⁵ particles per cubic meter of air) over a 30-minute period. This scenario represents a very high-end threat with a large... Isokinetic air sampling was applied, and the effect of aerosol losses in sampling lines and other parts of the test rig was incorporated in the analysis...eliminate any "memory" effect. The aerosol sampling (airflow direction control, start of sampling) was operated manually. Isokinetic sampling conditions
Comparisons of discrete and integrative sampling accuracy in estimating pulsed aquatic exposures.
Morrison, Shane A; Luttbeg, Barney; Belden, Jason B
2016-11-01
Most current-use pesticides have short half-lives in the water column, and thus the most relevant exposure scenarios for many aquatic organisms are pulsed exposures. Quantifying exposure using discrete water samples may not be accurate, as few studies are able to sample frequently enough to accurately determine time-weighted average (TWA) concentrations of short aquatic exposures. Integrative sampling methods that continuously sample freely dissolved contaminants over time intervals (such as integrative passive samplers) have been demonstrated to be a promising measurement technique. We conducted several modeling scenarios to test the assumption that integrative methods may require many fewer samples for accurate estimation of peak 96-h TWA concentrations. We compared the accuracies of discrete point samples and integrative samples while varying sampling frequencies and a range of contaminant water half-lives (t₅₀ = 0.5, 2, and 8 d). Differences in the predictive accuracy of discrete point samples and integrative samples were greatest at low sampling frequencies. For example, when the half-life was 0.5 d, discrete point samples required 7 sampling events to ensure median values > 50% of the true 96-h TWA and no sampling events reporting highly inaccurate results (defined as < 10% of the true 96-h TWA). Across all water half-lives investigated, integrative sampling required only two samples to prevent highly inaccurate results and to yield median values > 50% of the true concentration. Regardless, the need for integrative sampling diminished as water half-life increased. For an 8-d water half-life, two discrete samples produced accurate estimates and median values greater than those obtained for two integrative samples. Overall, integrative methods are the more accurate method for monitoring contaminants with short water half-lives due to the reduced frequency of extreme values, especially with uncertainties around the timing of pulsed events. However, the acceptability of discrete sampling methods for providing accurate concentration measurements increases with increasing aquatic half-lives. Copyright © 2016 Elsevier Ltd. All rights reserved.
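The core comparison can be sketched with a single first-order-decay pulse whose timing is unknown to the sampler: an ideal integrative sampler deployed over the 96-h window recovers the TWA directly, whereas a few evenly spaced grab samples can miss the pulse. Pulse size, half-life, and grab counts below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

def concentration(t, c0, half_life_h, t_pulse):
    """Single pulse at t_pulse (hours) that decays first-order afterwards."""
    k = np.log(2) / half_life_h
    return np.where(t >= t_pulse, c0 * np.exp(-k * (t - t_pulse)), 0.0)

def true_twa(c0, half_life_h, t_pulse, window=96.0, n_grid=20000):
    """96-h time-weighted average; an ideal integrative sampler measures this directly."""
    t = np.linspace(0.0, window, n_grid)
    return concentration(t, c0, half_life_h, t_pulse).mean()

def grab_twa(n_grabs, c0, half_life_h, t_pulse, window=96.0):
    """TWA estimated from n evenly spaced discrete (grab) samples."""
    t = np.linspace(0.0, window, n_grabs)
    return concentration(t, c0, half_life_h, t_pulse).mean()

half_life_h = 0.5 * 24.0                       # 0.5-day water half-life
ratios = []
for _ in range(2000):
    t_pulse = rng.uniform(0.0, 96.0)           # pulse timing unknown to the sampler
    ratios.append(grab_twa(4, 1.0, half_life_h, t_pulse) / true_twa(1.0, half_life_h, t_pulse))
ratios = np.array(ratios)
print("median grab/true ratio:", round(float(np.median(ratios)), 2))
print("share of grab estimates below 10% of the true TWA:", round(float((ratios < 0.1).mean()), 2))
```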
Christensen, Jette; Stryhn, Henrik; Vallières, André; El Allaki, Farouk
2011-05-01
In 2008, Canada designed and implemented the Canadian Notifiable Avian Influenza Surveillance System (CanNAISS) with six surveillance activities in a phased-in approach. CanNAISS was a surveillance system because it had more than one surveillance activity or component in 2008: passive surveillance; pre-slaughter surveillance; and voluntary enhanced notifiable avian influenza surveillance. Our objectives were to give a short overview of two active surveillance components in CanNAISS and to describe the CanNAISS scenario tree model and its application to estimating the probability of populations being free of NAI virus infection and to sample size determination. Our data from the pre-slaughter surveillance component included diagnostic test results from 6296 serum samples representing 601 commercial chicken and turkey farms, collected from 25 August 2008 to 29 January 2009. In addition, we included data from a sub-population of farms with high biosecurity standards: 36,164 samples from 55 farms sampled repeatedly over the 24-month study period from January 2007 to December 2008. All submissions were negative for Notifiable Avian Influenza (NAI) virus infection. We developed the CanNAISS scenario tree model so that it will estimate the surveillance component sensitivity and the probability of a population being free of NAI at design prevalences of 0.01 at the farm level and 0.3 within farms. We propose that a general model, such as the CanNAISS scenario tree model, may have a broader application than more detailed models that require disease-specific input parameters, such as relative risk estimates. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
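A stylized scenario-tree-style calculation (not the CanNAISS model itself) illustrates how design prevalences, test sensitivity, and sample sizes combine into a surveillance component sensitivity and a probability of freedom; the per-farm sample size, test sensitivity, and prior used below are assumptions for illustration.

```python
def herd_sensitivity(n_sampled, test_se, design_prev_within):
    """Probability that a sampled infected farm yields at least one positive test."""
    return 1 - (1 - test_se * design_prev_within) ** n_sampled

def component_sensitivity(n_farms_sampled, herd_se, design_prev_farm):
    """Probability the surveillance component detects infection present at the design prevalence."""
    return 1 - (1 - herd_se * design_prev_farm) ** n_farms_sampled

def prob_free_given_all_negative(prior_free, component_se):
    """Posterior probability of freedom after an all-negative surveillance round (Bayes' rule)."""
    return prior_free / (prior_free + (1 - prior_free) * (1 - component_se))

hse = herd_sensitivity(n_sampled=10, test_se=0.95, design_prev_within=0.3)   # ~10 sera per farm
sse = component_sensitivity(n_farms_sampled=601, herd_se=hse, design_prev_farm=0.01)
print(f"herd sensitivity {hse:.3f}, component sensitivity {sse:.3f}, "
      f"P(free | all negative) {prob_free_given_all_negative(0.5, sse):.3f}")
```

Inverting the same formulas (solving for the number of farms that achieves a target component sensitivity) is the usual route to sample size determination in this framework.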
Robotic Mars Sample Return: Risk Assessment and Analysis Report
NASA Technical Reports Server (NTRS)
Lalk, Thomas R.; Spence, Cliff A.
2003-01-01
A comparison of the risk associated with two alternative scenarios for a robotic Mars sample return mission was conducted. Two alternative mission scenarios were identified: the Jet Propulsion Laboratory (JPL) Reference Mission and a mission proposed by Johnson Space Center (JSC). The JPL mission was characterized by two landers and an orbiter, and a Mars orbit rendezvous to retrieve the samples. The JSC mission (Direct/SEP) involves a solar electric propulsion (SEP) return to Earth followed by a rendezvous with the Space Shuttle in Earth orbit. A qualitative risk assessment to identify and characterize the risks, and a risk analysis to quantify the risks, were conducted on these missions. Technical descriptions of the competing scenarios were developed in conjunction with NASA engineers, and the sequence of events for each candidate mission was developed. Risk distributions associated with individual and combinations of events were consolidated using event tree analysis in conjunction with Monte Carlo techniques to develop probabilities of mission success for each of the various alternatives. The results were the probabilities of success of various end states for each candidate scenario. These end states ranged from complete success through various levels of partial success to complete failure. The overall probability of success for the Direct/SEP mission was determined to be 66% for the return of at least one sample, and 58% for the JPL mission for the return of at least one sample cache. Values were also determined for intermediate events and end states as well as for the probability of violation of planetary protection. Overall mission planetary protection event probabilities of occurrence were determined to be 0.002% and 1.3% for the Direct/SEP and JPL Reference missions, respectively.
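The event-tree-plus-Monte-Carlo idea can be sketched as follows: each mission phase succeeds with some probability, the mission terminates at the first failure, and repeated simulation yields the probability of each end state. The phases and probabilities below are invented placeholders, not the values used in the JPL or JSC analyses.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical sequential mission phases and per-phase success probabilities (placeholders only)
phases = {"launch": 0.98, "Mars landing": 0.90, "sample collection": 0.95,
          "ascent and rendezvous": 0.90, "Earth return": 0.95}

def simulate_missions(n_runs=100_000):
    """Walk the event tree repeatedly: each mission ends at its first failed phase."""
    outcomes = {f"failed at {name}": 0 for name in phases}
    outcomes["full success"] = 0
    for _ in range(n_runs):
        for name, p in phases.items():
            if rng.random() > p:
                outcomes[f"failed at {name}"] += 1
                break
        else:
            outcomes["full success"] += 1
    return {k: v / n_runs for k, v in outcomes.items()}

for end_state, prob in simulate_missions().items():
    print(f"{end_state}: {prob:.3f}")
```

Replacing the fixed per-phase probabilities with draws from uncertainty distributions on each pass is the usual way such analyses propagate risk distributions rather than point estimates.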
Noel, Nora E; Ogle, Richard L; Maisto, Stephen A; Jackson, Lee A; Loomis, Randi B; Heaton, Jennifer A
2016-07-01
These three related studies created a set of ecologically valid scenarios for assessing the relative associations of both attraction and sexual coercion risk-recognition in college women's heterosocial situational drinking decisions. The first study constructed nine scenarios using input from heterosexual drinking women in the age cohort (18-30) most likely to experience alcohol-related sexual coercion. In the second study, 50 female undergraduates (ages 18-25) assessed the salience of three important dimensions (attraction, risk, and realism) in these scenarios. The third study was a factor analysis (and a follow-up confirmatory factor analysis) of the elements of coercion risk as perceived by the target group, with two female samples recruited 1 year apart (Sample 1: N = 157, ages 18-29; Sample 2: N = 157, ages 18-30). Results confirmed that the scenarios could be a useful vehicle for assessing how women balance risk and attraction to make in-the-moment heterosocial drinking decisions. The factor analysis showed participants perceived two types of situations, based on whether the male character was "Familiar" or "Just Met," and perceived themselves as happier and more excited with Familiar males. However, in contrast to HIV risk studies, Familiar males were perceived as higher risk for unwanted sex. Future research will use the six scenarios that emerged from the factor analysis to study how attraction and risk perception differentially affect young adult women's social drinking decisions.
Shepard, Michele; Brenner, Sara
2014-01-01
Background: Numerous studies are ongoing in the fields of nanotoxicology and exposure science; however, gaps remain in identifying and evaluating potential exposures from skin contact with engineered nanoparticles in occupational settings. Objectives: The aim of this study was to identify potential cutaneous exposure scenarios at a workplace using engineered nanoparticles (alumina, ceria, amorphous silica) and evaluate the presence of these materials on workplace surfaces. Methods: Process review, workplace observations, and preliminary surface sampling were conducted using microvacuum and wipe sample collection methods and transmission electron microscopy with elemental analysis. Results: Exposure scenarios were identified with potential for incidental contact. Nanoparticles of silica or silica and/or alumina agglomerates (or aggregates) were identified in surface samples from work areas where engineered nanoparticles were used or handled. Conclusions: Additional data are needed to evaluate occupational exposures from skin contact with engineered nanoparticles; precautionary measures should be used to minimize potential cutaneous exposures in the workplace. PMID:25000112
Development of Monitors for Assessing Exposure of Military Personnel to Toxic Chemicals.
2000-01-01
Residues " S ampler Preparation 7 Transfer and Analysis 7 Temperature Effects on PIMS Sampling Rate 8 Environmental Air Sampling 8 Results and...of exposure and potential toxicity to personnel. While progress has been made in improving active water and air sampling technology, such devices...streams, 3) the apparatus is also applicable for use in air sampling deployments in indoor and outdoor scenarios, and 4) the apparatus is commercially
Egger, C; Maurer, M
2015-04-15
Urban drainage design relying on observed precipitation series neglects the uncertainties associated with current and indeed future climate variability. Urban drainage design is further affected by the large stochastic variability of precipitation extremes and sampling errors arising from the short observation periods of extreme precipitation. Stochastic downscaling addresses anthropogenic climate impact by allowing relevant precipitation characteristics to be derived from local observations and an ensemble of climate models. This multi-climate model approach seeks to reflect the uncertainties in the data due to structural errors of the climate models. An ensemble of outcomes from stochastic downscaling allows for addressing the sampling uncertainty. These uncertainties are clearly reflected in the precipitation-runoff predictions of three urban drainage systems. They were mostly due to the sampling uncertainty. The contribution of climate model uncertainty was found to be of minor importance. Under the applied greenhouse gas emission scenario (A1B) and within the period 2036-2065, the potential for urban flooding in our Swiss case study is slightly reduced on average compared to the reference period 1981-2010. Scenario planning was applied to consider urban development associated with future socio-economic factors affecting urban drainage. The impact of scenario uncertainty was to a large extent found to be case-specific, thus emphasizing the need for scenario planning in every individual case. The results represent a valuable basis for discussions of new drainage design standards aiming specifically to include considerations of uncertainty. Copyright © 2015 Elsevier Ltd. All rights reserved.
STELLAR POPULATION AND GAS KINEMATICS OF POST-STARBURST QUASARS
NASA Astrophysics Data System (ADS)
Sanmartim, David; Storchi-Bergmann, Thaisa
2018-01-01
Post-Starburst Quasars (PSQs) are an intriguing set of galaxies that simultaneously host AGNs and post-starburst stellar populations, making them one of the most suitable objects to investigate the nature of the connection between these two components. The simultaneous presence of a post-starburst population and nuclear activity may be explained by two possible scenarios. In the secular evolutionary scenario star formation may cease due to exhaustion of the gas, while in the quenching one it may cease abruptly when the nuclear activity is triggered. In order to test these scenarios we have mapped the star formation history, manifestations of nuclear activity and excitation mechanisms in the central kpc of two nearby PSQs by using GMOS-IFU observations. In these first two exploratory studies, we have found that the young and intermediate age populations are located in a ring at ≈300-500 pc, with some contribution of the intermediate age component also in the central region. In both of them, the gas outflow does not coincide with the young stellar population ring, which suggests that the ring is not being affected by the AGN feedback, but only the innermost regions. The individual study of one of the PSQs of the sample has supported the evolutionary scenario, since the post-starburst population is not located close enough to the nucleus, where the outflow is observed. As a general behaviour, we found that outflow velocities are on the order of ~600-800 km/s and mass outflow rates are ≈0.03-0.1 M⊙/yr, one order of magnitude greater than the AGN accretion rate, which suggests a scenario where the AGN-driven wind has entrained material from the circumnuclear region. In order to increase the statistical significance of our previous results and to distinguish between the proposed scenarios, we are conducting the same analysis on a wider sample of PSQs, which we hope will indicate more conclusively which is the favored scenario. During the meeting, we will present more detailed results of our first two exploratory studies as well as for 3 other PSQs of our sample and compare them to a control sample.
Identifying the potential of changes to blood sample logistics using simulation.
Jørgensen, Pelle; Jacobsen, Peter; Poulsen, Jørgen Hjelm
2013-01-01
Using simulation as an approach to display and improve internal logistics at hospitals has great potential. This study shows how a simulation model displaying the morning blood-taking round at a Danish public hospital can be developed and utilized with the aim of improving the logistics. The focus of the simulation was to evaluate changes made to the transportation of blood samples between wards and the laboratory. The average (AWT) and maximum (MWT) waiting times from when a blood sample was drawn at the ward until it was received at the laboratory, and the distribution of blood sample arrivals at the laboratory, were used as the evaluation criteria. Four different scenarios were tested and compared with the current approach: (1) using AGVs (mobile robots), (2) using a pneumatic tube system, (3) using porters that are called upon, or (4) using porters that come to the wards every 45 minutes. Furthermore, each of the scenarios was tested to determine the amount of resources that would give the optimal result. The simulations showed a large improvement potential in implementing a new technology/means for transporting the blood samples. The pneumatic tube system showed the greatest potential, lowering the AWT and MWT by approximately 36% and 18%, respectively. Additionally, all of the scenarios had a more even distribution of arrivals except for porters coming to the wards every 45 min. As a consequence of the results obtained in the study, the hospital decided to implement a pneumatic tube system.
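As an illustration of the waiting-time criteria used above, the following sketch compares a scheduled-porter scenario with an immediate pneumatic-tube scenario for synthetic draw times. It is not the study's model, which was a full discrete-event simulation of the morning round; all time constants here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
draw_times = np.sort(rng.uniform(0, 180, size=200))  # sample draw times, minutes after round start

# Porter scenario: a porter visits the ward every 45 minutes; a sample waits for
# the next visit and then takes a fixed walking time to reach the laboratory.
porter_interval, porter_walk = 45.0, 10.0
porter_wait = (porter_interval - draw_times % porter_interval) + porter_walk

# Pneumatic tube scenario: each sample is dispatched immediately with a short
# fixed transport time (illustrative value).
tube_wait = np.full_like(draw_times, 4.0)

for name, wait in (("porter every 45 min", porter_wait), ("pneumatic tube", tube_wait)):
    print(f"{name:20s} AWT = {wait.mean():5.1f} min, MWT = {wait.max():5.1f} min")
```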
When Is Rapid On-Site Evaluation Cost-Effective for Fine-Needle Aspiration Biopsy?
Schmidt, Robert L.; Walker, Brandon S.; Cohen, Michael B.
2015-01-01
Background: Rapid on-site evaluation (ROSE) can improve adequacy rates of fine-needle aspiration biopsy (FNAB) but increases operational costs. The performance of ROSE relative to fixed sampling depends on many factors. It is not clear when ROSE is less costly than sampling with a fixed number of needle passes. The objective of this study was to determine the conditions under which ROSE is less costly than fixed sampling. Methods: Cost comparison of sampling with and without ROSE using mathematical modeling. Models were based on a societal perspective and used a mechanistic, micro-costing approach. Sampling policies (ROSE, fixed) were compared using the difference in total expected costs per case. Scenarios were based on procedure complexity (palpation-guided or image-guided), adequacy rates (low, high) and sampling protocols (stopping criteria for ROSE and fixed sampling). One-way, probabilistic, and scenario-based sensitivity analysis was performed to determine which variables had the greatest influence on the cost difference. Results: ROSE is favored relative to fixed sampling under the following conditions: (1) the cytologist is accurate, (2) the total variable cost ($/hr) is low, (3) fixed costs ($/procedure) are high, (4) the setup time is long, (5) the time between needle passes for ROSE is low, (6) the per-pass adequacy rate is low, and (7) ROSE stops after observing one adequate sample. The model is most sensitive to variation in the fixed cost, the per-pass adequacy rate, and the time per needle pass with ROSE. Conclusions: Mathematical modeling can be used to predict the difference in cost between sampling with and without ROSE. PMID:26317785
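To make the cost trade-off concrete, here is a minimal sketch of an expected-cost-per-case comparison of the kind described above. It is not the authors' micro-costing model: the geometric stopping rule for ROSE (stop after one adequate pass) and all dollar and time values are illustrative assumptions.

```python
# Expected cost per case = fixed cost per procedure + variable cost rate x expected time.
def expected_cost_rose(fixed, var_per_hr, setup_hr, pass_hr, adequacy, max_passes=10):
    # ROSE stops after the first adequate pass; the number of passes follows a
    # geometric distribution truncated at max_passes.
    expected_passes = sum(k * (1 - adequacy) ** (k - 1) * adequacy
                          for k in range(1, max_passes)) \
                      + max_passes * (1 - adequacy) ** (max_passes - 1)
    return fixed + var_per_hr * (setup_hr + expected_passes * pass_hr)

def expected_cost_fixed(fixed, var_per_hr, setup_hr, pass_hr, n_passes):
    return fixed + var_per_hr * (setup_hr + n_passes * pass_hr)

# Illustrative numbers only: a long setup and a low per-pass adequacy rate favor ROSE.
print(expected_cost_rose(fixed=400, var_per_hr=300, setup_hr=0.5,
                         pass_hr=0.15, adequacy=0.6))
print(expected_cost_fixed(fixed=400, var_per_hr=300, setup_hr=0.5,
                          pass_hr=0.05, n_passes=5))
```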
Sources of error in estimating truck traffic from automatic vehicle classification data
DOT National Transportation Integrated Search
1998-10-01
Truck annual average daily traffic estimation errors resulting from sample classification counts are computed in this paper under two scenarios. One scenario investigates an improper factoring procedure that may be used by highway agencies. The study...
A Simple and Robust Method for Partially Matched Samples Using the P-Values Pooling Approach
Kuan, Pei Fen; Huang, Bo
2013-01-01
This paper focuses on statistical analyses in scenarios where some samples from the matched pairs design are missing, resulting in partially matched samples. Motivated by the idea of meta-analysis, we recast the partially matched samples as coming from two experimental designs, and propose a simple yet robust approach based on the weighted Z-test to integrate the p-values computed from these two designs. We show that the proposed approach achieves better operating characteristics in simulations and a case study, compared to existing methods for partially matched samples. PMID:23417968
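A minimal sketch of the pooling idea, assuming the split into a paired sub-design (the complete pairs) and an independent two-sample sub-design (the unmatched observations) described above; the square-root-of-sample-size weights and the one-sided testing convention are illustrative choices, not necessarily the exact weights used in the paper.

```python
import numpy as np
from scipy import stats

def pooled_weighted_z(paired_x, paired_y, unpaired_x, unpaired_y):
    # One-sided p-value from the matched pairs (paired t-test).
    p1 = stats.ttest_rel(paired_x, paired_y, alternative="greater").pvalue
    # One-sided p-value from the remaining unmatched observations (Welch t-test).
    p2 = stats.ttest_ind(unpaired_x, unpaired_y, equal_var=False,
                         alternative="greater").pvalue
    # Convert p-values to Z-scores and pool them with square-root-of-n weights.
    z1, z2 = stats.norm.isf(p1), stats.norm.isf(p2)
    w1 = np.sqrt(2 * len(paired_x))
    w2 = np.sqrt(len(unpaired_x) + len(unpaired_y))
    z_pooled = (w1 * z1 + w2 * z2) / np.sqrt(w1 ** 2 + w2 ** 2)
    return stats.norm.sf(z_pooled)   # pooled one-sided p-value

rng = np.random.default_rng(0)
print(pooled_weighted_z(rng.normal(1, 1, 20), rng.normal(0, 1, 20),
                        rng.normal(1, 1, 12), rng.normal(0, 1, 15)))
```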
Kurland, Brenda F; Muzi, Mark; Peterson, Lanell M; Doot, Robert K; Wangerin, Kristen A; Mankoff, David A; Linden, Hannah M; Kinahan, Paul E
2016-02-01
Uptake time (interval between tracer injection and image acquisition) affects the SUV measured for tumors in (18)F-FDG PET images. With dissimilar uptake times, changes in tumor SUVs will be under- or overestimated. This study examined the influence of uptake time on tumor response assessment using a virtual clinical trials approach. Tumor kinetic parameters were estimated from dynamic (18)F-FDG PET scans of breast cancer patients and used to simulate time-activity curves for 45-120 min after injection. Five-minute uptake time frames followed 4 scenarios: the first was a standardized static uptake time (the SUV from 60 to 65 min was selected for all scans), the second was uptake times sampled from an academic PET facility with strict adherence to standardization protocols, the third was a distribution similar to scenario 2 but with greater deviation from standards, and the fourth was a mixture of hurried scans (45- to 65-min start of image acquisition) and frequent delays (58- to 115-min uptake time). The proportion of out-of-range scans (<50 or >70 min, or >15-min difference between paired scans) was 0%, 20%, 44%, and 64% for scenarios 1, 2, 3, and 4, respectively. A published SUV correction based on local linearity of uptake-time dependence was applied in a separate analysis. Influence of uptake-time variation was assessed as sensitivity for detecting response (probability of observing a change of ≥30% decrease in (18)F-FDG PET SUV given a true decrease of 40%) and specificity (probability of observing an absolute change of <30% given no true change). Sensitivity was 96% for scenario 1, and ranged from 73% for scenario 4 (95% confidence interval, 70%-76%) to 92% (90%-93%) for scenario 2. Specificity for all scenarios was at least 91%. Single-arm phase II trials required an 8%-115% greater sample size for scenarios 2-4 than for scenario 1. If uptake time is known, SUV correction methods may raise sensitivity to 87%-95% and reduce the sample size increase to less than 27%. Uptake-time deviations from standardized protocols occur frequently, potentially decreasing the performance of (18)F-FDG PET response biomarkers. Correcting SUV for uptake time improves sensitivity, but algorithm refinement is needed. Stricter uptake-time control and effective correction algorithms could improve power and decrease costs for clinical trials using (18)F-FDG PET endpoints. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
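The abstract refers to a published correction based on the locally linear dependence of SUV on uptake time. The sketch below shows only the general form of such a linear rescaling to a 60-min reference time; the per-minute fractional slope is a placeholder assumption, not the published coefficient.

```python
def correct_suv(suv_measured, uptake_min, ref_min=60.0, frac_slope_per_min=0.01):
    # suv_measured       : SUV observed at uptake_min minutes after injection
    # frac_slope_per_min : assumed fractional SUV change per minute of uptake time
    return suv_measured * (1.0 + frac_slope_per_min * (ref_min - uptake_min))

# A scan acquired late (75 min) is rescaled back to the 60-min standard.
print(correct_suv(suv_measured=6.2, uptake_min=75))   # ~5.3 under these assumptions
```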
NASA Astrophysics Data System (ADS)
Fan, Y. R.; Huang, G. H.; Baetz, B. W.; Li, Y. P.; Huang, K.
2017-06-01
In this study, a copula-based particle filter (CopPF) approach was developed for sequential hydrological data assimilation by considering parameter correlation structures. In CopPF, multivariate copulas are proposed to reflect parameter interdependence before the resampling procedure with new particles then being sampled from the obtained copulas. Such a process can overcome both particle degeneration and sample impoverishment. The applicability of CopPF is illustrated with three case studies using a two-parameter simplified model and two conceptual hydrologic models. The results for the simplified model indicate that model parameters are highly correlated in the data assimilation process, suggesting a demand for full description of their dependence structure. Synthetic experiments on hydrologic data assimilation indicate that CopPF can rejuvenate particle evolution in large spaces and thus achieve good performances with low sample size scenarios. The applicability of CopPF is further illustrated through two real-case studies. It is shown that, compared with traditional particle filter (PF) and particle Markov chain Monte Carlo (PMCMC) approaches, the proposed method can provide more accurate results for both deterministic and probabilistic prediction with a sample size of 100. Furthermore, the sample size would not significantly influence the performance of CopPF. Also, the copula resampling approach dominates parameter evolution in CopPF, with more than 50% of particles sampled by copulas in most sample size scenarios.
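A minimal sketch of the copula-resampling step that distinguishes CopPF from a standard particle filter: fit a copula to the weighted particle cloud and draw fresh particles from it, preserving parameter interdependence while rejuvenating the sample. A Gaussian copula with empirical marginals is used here purely for illustration (and the copula correlation is computed unweighted for brevity); the paper's copula family and fitting details may differ.

```python
import numpy as np
from scipy import stats

def copula_resample(particles, weights, n_new, rng):
    # particles: (n, d) array of parameter vectors; weights: normalized (n,)
    n, d = particles.shape
    # 1. Weighted empirical CDF position of each particle in each dimension.
    u = np.empty_like(particles)
    for j in range(d):
        order = np.argsort(particles[:, j])
        cdf = np.cumsum(weights[order])
        u[order, j] = np.clip(cdf - 0.5 * weights[order], 1e-6, 1 - 1e-6)
    # 2. Gaussian-copula correlation estimated from the normal scores.
    z = stats.norm.ppf(u)
    corr = np.corrcoef(z, rowvar=False)
    # 3. Draw new normal scores with that correlation and map them back through
    #    the weighted empirical quantile function of each marginal.
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_new)
    u_new = stats.norm.cdf(z_new)
    new = np.empty((n_new, d))
    for j in range(d):
        order = np.argsort(particles[:, j])
        cdf = np.cumsum(weights[order])
        new[:, j] = np.interp(u_new[:, j], cdf, particles[order, j])
    return new

rng = np.random.default_rng(1)
parts = rng.multivariate_normal([1.0, 0.5], [[0.04, 0.03], [0.03, 0.09]], size=200)
w = rng.random(200); w /= w.sum()
print(copula_resample(parts, w, n_new=100, rng=rng).mean(axis=0))
```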
Uncertainty in monitoring E. coli concentrations in streams and stormwater runoff
NASA Astrophysics Data System (ADS)
Harmel, R. D.; Hathaway, J. M.; Wagner, K. L.; Wolfe, J. E.; Karthikeyan, R.; Francesconi, W.; McCarthy, D. T.
2016-03-01
Microbial contamination of surface waters, a substantial public health concern throughout the world, is typically identified by fecal indicator bacteria such as Escherichia coli. Thus, monitoring E. coli concentrations is critical to evaluate current conditions, determine restoration effectiveness, and inform model development and calibration. An often overlooked component of these monitoring and modeling activities is understanding the inherent random and systematic uncertainty present in measured data. In this research, a review and subsequent analysis was performed to identify, document, and analyze measurement uncertainty of E. coli data collected in stream flow and stormwater runoff as individual discrete samples or throughout a single runoff event. Data on the uncertainty contributed by sample collection, sample preservation/storage, and laboratory analysis in measured E. coli concentrations were compiled and analyzed, and differences in sampling method and data quality scenarios were compared. The analysis showed that: (1) manual integrated sampling produced the lowest random and systematic uncertainty in individual samples, but automated sampling typically produced the lowest uncertainty when sampling throughout runoff events; (2) sample collection procedures often contributed the highest amount of uncertainty, although laboratory analysis introduced substantial random uncertainty and preservation/storage introduced substantial systematic uncertainty under some scenarios; and (3) the uncertainty in measured E. coli concentrations was greater than that of sediment and nutrients, but the difference was not as great as may be assumed. This comprehensive analysis of uncertainty in E. coli concentrations measured in streamflow and runoff should provide valuable insight for designing E. coli monitoring projects, reducing uncertainty in quality assurance efforts, regulatory and policy decision making, and fate and transport modeling.
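As a concrete illustration of how component uncertainties such as those analyzed above can be rolled up, the sketch below combines random components in quadrature and sums systematic (bias) components; the percentage values are placeholders, not the uncertainties estimated in this study.

```python
import math

def combined_uncertainty(random_pcts, systematic_pcts):
    # Random components are assumed independent and combine in quadrature;
    # systematic components (biases) add directly.
    random_total = math.sqrt(sum(p ** 2 for p in random_pcts))
    systematic_total = sum(systematic_pcts)
    return random_total, systematic_total

rand, syst = combined_uncertainty(
    random_pcts=[15.0, 5.0, 20.0],       # collection, storage, lab (illustrative)
    systematic_pcts=[-10.0, -5.0, 0.0])  # e.g., die-off during holding (illustrative)
print(f"random ±{rand:.1f}%, systematic {syst:+.1f}%")
```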
Mars Sample Return mission: Two alternate scenarios
NASA Technical Reports Server (NTRS)
1991-01-01
Two scenarios for accomplishing a Mars Sample Return mission are presented herein. Mission A is a low-cost, low-mass scenario, while Mission B is a high-technology, high-science alternative. Mission A begins with the launch of one Titan IV rocket with a Centaur G' upper stage. The Centaur performs the trans-Mars injection burn and is then released. The payload consists of two lander packages and the Orbital Transfer Vehicle, which is responsible for supporting the landers during launch and interplanetary cruise. After descending to the surface, the landers deploy small, local rovers to collect samples. Mission B starts with 4 Titan IV launches, used to place the parts of the Planetary Transfer Vehicle (PTV) into orbit. The payload from the fourth launch is able to maneuver and assemble the entire vehicle through simple docking routines. Once complete, the PTV begins a low-thrust trajectory out from low Earth orbit, through interplanetary space, and into low Martian orbit. It deploys a communication satellite into a 1/2 sol orbit and then releases the lander package at 500 km altitude. The lander package contains the lander, the Mars Ascent Vehicle (MAV), two lighter-than-air rovers (called Aereons), and one conventional land rover. The entire package is contained within a biconic aeroshell. After release from the PTV, the lander package descends to the surface, where all three rovers are released to collect samples and map the terrain.
Directed Diffusion Modelling for Tesso Nilo National Parks Case Study
NASA Astrophysics Data System (ADS)
Yasri, Indra; Safrianti, Ery
2018-01-01
Directed Diffusion (DD) has the ability to achieve energy efficiency in Wireless Sensor Networks (WSN). This paper proposes Directed Diffusion (DD) modelling for the Tesso Nilo National Parks (TNNP) case study. There are 4 stages of scenarios involved in this modelling. It starts by designating the sampling area through GPS coordinates. The sampling area is determined by an optimization process from 500 m x 500 m up to 1000 m x 1000 m with 100 m increments in between. The next stage is sensor node placement. Sensor nodes are distributed in the sampling area with three different quantities, i.e. 20 nodes, 30 nodes and 40 nodes. One of these quantities is chosen as the optimized sensor node placement. The third stage is to implement all scenarios from stages 1 and 2 in the DD model. In the last stage, the evaluation process identifies the most energy-efficient combination of optimized sampling area and optimized sensor node placement under the Directed Diffusion (DD) routing protocol. The result shows that the combination of a 500 m x 500 m sampling area and 20 nodes is able to achieve the energy efficiency needed to support a forest fire prevention system at Tesso Nilo National Parks.
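A minimal sketch of the kind of per-round energy comparison behind such an evaluation, using the common first-order radio model and a single-hop-to-sink simplification; the radio constants, packet size, and node placements are illustrative assumptions, not the parameters of the TNNP study or of full Directed Diffusion gradient routing.

```python
import math, random

E_ELEC = 50e-9      # J/bit, electronics energy (typical textbook value, assumed)
EPS_AMP = 100e-12   # J/bit/m^2, free-space amplifier energy (assumed)
PACKET_BITS = 2000

def round_energy(side_m, n_nodes, seed=0):
    random.seed(seed)
    sink = (side_m / 2.0, side_m / 2.0)                  # sink at field centre
    energy = 0.0
    for _ in range(n_nodes):
        x, y = random.uniform(0, side_m), random.uniform(0, side_m)
        d = math.hypot(x - sink[0], y - sink[1])
        energy += PACKET_BITS * (E_ELEC + EPS_AMP * d ** 2)   # transmit cost
        energy += PACKET_BITS * E_ELEC                        # receive cost at sink
    return energy

for side in (500, 700, 1000):
    for n in (20, 30, 40):
        print(side, n, f"{round_energy(side, n):.2e} J per round")
```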
NASA Astrophysics Data System (ADS)
Booth, B. B. B.; Bernie, D.; McNeall, D.; Hawkins, E.; Caesar, J.; Boulton, C.; Friedlingstein, P.; Sexton, D. M. H.
2013-04-01
We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from an emission-driven rather than concentration-driven perturbed parameter ensemble of a global climate model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, land carbon cycle, ocean physics and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses arising when considering emission-driven rather than concentration-driven simulations (with 10-90th percentile ranges of 1.7 K for the aggressive mitigation scenario, up to 3.9 K for the high-end, business as usual scenario). A small minority of simulations resulting from combinations of strong atmospheric feedbacks and carbon cycle responses show temperature increases in excess of 9 K (RCP8.5) and, even under aggressive mitigation (RCP2.6), temperatures in excess of 4 K. While the simulations point to much larger temperature ranges for emission-driven experiments, they do not change existing expectations (based on previous concentration-driven experiments) on the timescales over which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. In the case of both SRES A1B and the Representative Concentration Pathways (RCPs), the concentration scenarios used to drive GCM ensembles lie towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration-driven experiments to under-sample strong feedback responses in future projections. Our ensemble of emission-driven simulations spans the global temperature response of the CMIP5 emission-driven simulations, except at the low end. Combinations of low climate sensitivity and low carbon cycle feedbacks lead a number of CMIP5 responses to lie below our ensemble range. The ensemble simulates a number of high-end responses which lie above the CMIP5 carbon cycle range. These high-end simulations can be linked to sampling a number of stronger carbon cycle feedbacks and to sampling climate sensitivities above 4.5 K. This latter aspect highlights the priority of identifying real-world climate-sensitivity constraints which, if achieved, would lead to reductions in the upper bound of projected global mean temperature change. The ensembles of simulations presented here provide a framework to explore relationships between present-day observables and future changes, while the large spread of future-projected changes highlights the ongoing need for such work.
Who Recommends Long-Term Care Matters
ERIC Educational Resources Information Center
Kane, Robert L.; Bershadsky, Boris; Bershadsky, Julie
2006-01-01
Purpose: Making good consumer decisions requires having good information. This study compared long-term-care recommendations among various types of health professionals. Design and Methods: We gave randomly varied scenarios to a convenience national sample of 211 professionals from varying disciplines and work locations. For each scenario, we…
Computer-aided testing of pilot response to critical in-flight events
NASA Technical Reports Server (NTRS)
Giffin, W. C.; Rockwell, T. H.
1984-01-01
This research on pilot response to critical in-flight events employs a unique methodology including an interactive computer-aided scenario-testing system. Navigation displays, instrument-panel displays, and assorted textual material are presented on a touch-sensitive CRT screen. Problem diagnosis scenarios, destination-diversion scenarios and combined destination/diagnostic tests are available. A complete time history of all data inquiries and responses is maintained. Sample results of diagnosis scenarios obtained from testing 38 licensed pilots are presented and discussed.
Methodology Series Module 5: Sampling Strategies.
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice and a population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
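A minimal sketch contrasting the sampling methods named above on a hypothetical sampling frame; the clinic labels and sizes are illustrative only, and the convenience sample is included precisely to emphasize that it is not a probability sample.

```python
import random
random.seed(42)

# Hypothetical sampling frame of 300 subjects spread over two clinics.
frame = [{"id": i, "clinic": "A" if i % 3 else "B"} for i in range(300)]

# Simple random sample: every subject has the same chance of selection.
srs = random.sample(frame, k=30)

# Stratified random sample: sample within each clinic in proportion to its size.
strata = {"A": [s for s in frame if s["clinic"] == "A"],
          "B": [s for s in frame if s["clinic"] == "B"]}
stratified = [s for name, members in strata.items()
              for s in random.sample(members, k=round(30 * len(members) / len(frame)))]

# Convenience sample: simply the first subjects who happen to be accessible.
# This is NOT a random sample and should not be reported as one.
convenience = frame[:30]

print(len(srs), len(stratified), len(convenience))
```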
An evaluation of methods for estimating decadal stream loads
NASA Astrophysics Data System (ADS)
Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-11-01
Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
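For concreteness, here is a minimal sketch of Beale's ratio estimator, one of the flexible methods highlighted above: the mean sampled load is scaled by the ratio of mean discharge over all days to mean discharge on sampled days, with a bias-correction factor involving the load-discharge covariance. The synthetic rating curve and data values are illustrative assumptions, not the study's data.

```python
import numpy as np

def beale_load(sample_load, sample_q, all_q):
    # sample_load : daily loads on sampled days (e.g., kg/day)
    # sample_q    : discharge on those sampled days
    # all_q       : discharge on every day of the estimation period
    n = len(sample_load)
    l_bar, q_bar, mu_q = np.mean(sample_load), np.mean(sample_q), np.mean(all_q)
    s_lq = np.cov(sample_load, sample_q, ddof=1)[0, 1]
    s_qq = np.var(sample_q, ddof=1)
    bias_corr = (1 + s_lq / (n * l_bar * q_bar)) / (1 + s_qq / (n * q_bar ** 2))
    return len(all_q) * l_bar * (mu_q / q_bar) * bias_corr   # total load for the period

rng = np.random.default_rng(3)
all_q = rng.lognormal(3.0, 0.8, 365)                       # synthetic daily discharge
sampled_days = rng.choice(365, size=24, replace=False)     # ~2 samples per month
sample_q = all_q[sampled_days]
sample_load = 0.05 * sample_q ** 1.3 * rng.lognormal(0, 0.2, 24)  # synthetic rating curve
print(f"estimated annual load: {beale_load(sample_load, sample_q, all_q):.0f}")
```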
DOE Office of Scientific and Technical Information (OSTI.GOV)
Udey, R. N.; Corzett, T. H.; Alcaraz, A.
Following the successful completion of the 3rd biomedical confidence building exercise (February 2013 – March 2013), which included the analysis of plasma and urine samples spiked at low ppb levels as part of the exercise scenario, another confidence building exercise was targeted to be conducted in 2014. In this 4th exercise, it was desired to focus specifically on the analysis of plasma samples. The scenario was designed as an investigation of an alleged use of chemical weapons where plasma samples were collected, as plasma has been reported to contain CWA adducts which remain present in the human body for several weeks (Solano et al. 2008). In the 3rd exercise most participants used the fluoride regeneration method to analyze for the presence of nerve agents in plasma samples. For the 4th biomedical exercise it was decided to evaluate the analysis of human plasma samples for the presence/absence of the VX adducts and aged adducts to blood proteins (e.g., VX-butyrylcholinesterase (BuChE) and aged BuChE adducts using a pepsin digest technique to yield nonapeptides; or equivalent). As the aging of VX-BuChE adducts is relatively slow (t1/2 = 77 hr at 37 °C [Aurbek et al. 2009]), soman (GD), which ages much more quickly (t1/2 = 9 min at 37 °C [Masson et al. 2010]), was used to simulate an aged VX sample. Additional objectives of this exercise included having laboratories assess novel OP-adducted plasma sample preparation techniques and analytical instrumentation methodologies, as well as refining/designating the reporting formats for these new techniques.
An evaluation of methods for estimating decadal stream loads
Lee, Casey; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-01-01
Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen – lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale’s ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
NASA Astrophysics Data System (ADS)
Booth, B. B. B.; Bernie, D.; McNeall, D.; Hawkins, E.; Caesar, J.; Boulton, C.; Friedlingstein, P.; Sexton, D.
2012-09-01
We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from an emission-driven rather than concentration-driven perturbed parameter ensemble of a Global Climate Model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, land carbon cycle, ocean physics and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses arising when considering emission-driven rather than concentration-driven simulations (with 10-90 percentile ranges of 1.7 K for the aggressive mitigation scenario, up to 3.9 K for the high-end business as usual scenario). A small minority of simulations resulting from combinations of strong atmospheric feedbacks and carbon cycle responses show temperature increases in excess of 9 degrees (RCP8.5) and, even under aggressive mitigation (RCP2.6), temperatures in excess of 4 K. While the simulations point to much larger temperature ranges for emission-driven experiments, they do not change existing expectations (based on previous concentration-driven experiments) on the timescale over which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. In the case of both SRES A1B and the Representative Concentration Pathways (RCPs), the concentration pathways used to drive GCM ensembles lie towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration-driven experiments to under-sample strong feedback responses in concentration-driven projections. Our ensemble of emission-driven simulations spans the global temperature response of other multi-model frameworks except at the low end, where combinations of low climate sensitivity and low carbon cycle feedbacks lead to responses outside our ensemble range. The ensemble simulates a number of high-end responses which lie above the CMIP5 carbon cycle range. These high-end simulations can be linked to sampling a number of stronger carbon cycle feedbacks and to sampling climate sensitivities above 4.5 K. This latter aspect highlights the priority of identifying real-world climate sensitivity constraints which, if achieved, would lead to reductions in the upper bound of projected global mean temperature change. The ensembles of simulations presented here provide a framework to explore relationships between present-day observables and future changes, while the large spread of future projected changes highlights the ongoing need for such work.
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943
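A minimal sketch of the simulation logic behind the direct-truncation result described above: draw correlated pretest/posttest scores, select the lowest quartile on the pretest, and observe the attenuation of the pretest-posttest correlation that drives the loss of covariate-adjusted power. The population correlation and selection cutoff are illustrative, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(7)
rho = 0.7                                    # population pretest-posttest correlation (assumed)
cov = [[1.0, rho], [rho, 1.0]]
scores = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)
pre, post = scores[:, 0], scores[:, 1]

selected = pre < np.quantile(pre, 0.25)      # direct truncation: lowest 25% on the pretest

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"unrestricted r = {corr(pre, post):.2f}")                       # close to 0.70
print(f"truncated   r = {corr(pre[selected], post[selected]):.2f}")    # attenuated
# The attenuated correlation means the pretest covariate explains less posttest
# variance in the selected sample, so a larger sample is needed for the same power.
```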
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power.
Miciak, Jeremy; Taylor, W Pat; Stuebing, Karla K; Fletcher, Jack M; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%-155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%-71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power.
NASA Technical Reports Server (NTRS)
Rockwell, T. H.; Giffin, W. C.
1982-01-01
Computer displays using PLATO are illustrated. Diagnostic scenarios are described. A sample of subject data is presented. Destination diversion displays, a combined destination, diversion scenario, and critical in-flight event (CIFE) data collection/subject testing system are presented.
29 CFR 2590.715-2715 - Summary of benefits and coverage and uniform glossary.
Code of Federal Regulations, 2014 CFR
2014-07-01
... definitions of standard insurance terms and medical terms so that consumers may compare health coverage and... scenarios (including pregnancy and serious or chronic medical conditions) in accordance with this paragraph... scenario is a hypothetical situation, consisting of a sample treatment plan for a specified medical...
29 CFR 2590.715-2715 - Summary of benefits and coverage and uniform glossary.
Code of Federal Regulations, 2012 CFR
2012-07-01
... definitions of standard insurance terms and medical terms so that consumers may compare health coverage and... scenarios (including pregnancy and serious or chronic medical conditions) in accordance with this paragraph... scenario is a hypothetical situation, consisting of a sample treatment plan for a specified medical...
29 CFR 2590.715-2715 - Summary of benefits and coverage and uniform glossary.
Code of Federal Regulations, 2013 CFR
2013-07-01
... definitions of standard insurance terms and medical terms so that consumers may compare health coverage and... scenarios (including pregnancy and serious or chronic medical conditions) in accordance with this paragraph... scenario is a hypothetical situation, consisting of a sample treatment plan for a specified medical...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-13
... irradiation scenarios? F. How should the impact of delays in sampling, delays in testing, combined injury, and... biodosimeter for use in a mass exposure scenario, the development of proper radiation biodosimetry tools is a... clinical animal model testing might be necessary to demonstrate radiation biodosimeter performance? D...
Methodology Series Module 5: Sampling Strategies
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘Sampling Method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling – based on the researcher's choice and a population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of these results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438
Mir, Taskia; Dirks, Peter; Mason, Warren P; Bernstein, Mark
2014-10-01
This is a qualitative study designed to examine patient acceptability of re-sampling surgery for glioblastoma multiforme (GBM) electively post-therapy or at asymptomatic relapse. Thirty patients were selected using the convenience sampling method and interviewed. Patients were presented with hypothetical scenarios including a scenario in which the surgery was offered to them routinely and a scenario in which the surgery was in a clinical trial. The results of the study suggest that about two thirds of the patients offered the surgery on a routine basis would be interested, and half of the patients would agree to the surgery as part of a clinical trial. Several overarching themes emerged, some of which include: patients expressed ethical concerns about offering financial incentives or compensation to the patients or surgeons involved in the study; patients were concerned about appropriate communication and full disclosure about the procedures involved, the legalities of tumor ownership and the use of the tumor post-surgery; patients may feel alone or vulnerable when they are approached about the surgery; patients and their families expressed immense trust in their surgeon and indicated that this trust is a major determinant of their agreeing to surgery. The overall positive response to re-sampling surgery suggests that this procedure, if designed with all the ethical concerns attended to, would be welcomed by most patients. This approach of asking patients beforehand if a treatment innovation is acceptable would appear to be more practical and ethically desirable than previous practice.
Hofmann, Jennifer; Ruch, Willibald; Proyer, René T.; Platt, Tracey; Gander, Fabian
2017-01-01
The current paper addresses the measurement of three dispositions toward ridicule and laughter; i.e., gelotophobia (the fear of being laughed at), gelotophilia (the joy of being laughed at), and katagelasticism (the joy of laughing at others). These traits explain inter-individual differences in responses to humor, laughter, and social situations related to humorous encounters. First, an ultra-short form of the PhoPhiKat-45 (Ruch and Proyer, 2009) was adapted in two independent samples (Construction Sample N = 157; Replication Sample N = 1,774). Second, we tested the validity of the PhoPhiKat-9 in two further independent samples. Results showed that the psychometric properties of the ultra-short form were acceptable and the proposed factor structure could be replicated. In Validation Sample 1 (N = 246), we investigated the relation of the three traits to responses in a ridicule and teasing scenario questionnaire. The results replicated findings from earlier studies by showing that gelotophobes assigned the same emotions to friendly teasing and malicious ridicule (predominantly low joy, high fear, and shame). Gelotophilia was mainly predicted by relating joy to both, teasing and ridicule scenarios, while katagelasticism was predicted by assigning joy and contempt to ridicule scenarios. In Validation Sample 2 (N = 1,248), we investigated whether the fear of being laughed at is a vulnerability at the workplace: If friendly teasing and laughter of co-workers, superiors, or customers are misperceived as being malicious, individuals may feel less satisfied and more stressed. The results from a representative sample of Swiss employees showed that individuals with a fear of being laughed at are generally less satisfied with life and work and experience more work stress. Moreover, gelotophilia went along with positive evaluations of one's life and work, while katagelasticism was negatively related to work satisfaction and positively related to work stress. In order to establish good work practices and build procedures against workplace bullying, one needs to consider that individual differences impact on a person's perception of being bullied and assessing the three dispositions may give important insights into team processes. PMID:28553241
NASA Astrophysics Data System (ADS)
Kozlova, Tatiana; Seweryn, Karol; Grygorczuk, Jerzy; Kozlov, Oleg
Sample return missions have made very significant progress in the understanding of geology, of extra-terrestrial materials, of processes occurring at the surface and subsurface level, as well as of the interactions between such materials and the mechanisms operating there. The various sample return missions of the past (e.g. the Apollo missions, Luna missions and Hayabusa mission) have provided scientists with samples of extra-terrestrial materials, allowing them to answer critical scientific questions concerning the origin and evolution of the Solar System. Several new missions are currently planned: sample return missions, e.g. the Russian Luna-28, ESA Phootprint and MarcoPolo-R, as well as both robotic and manned exploration missions to the Moon and Mars. One of the key challenges in such missions is a reliable sampling process, which can be achieved using many different techniques, e.g. a static excavating technique (scoop), core drilling, sampling using dynamic mechanisms (penetrators), brushes and pneumatic systems. The effectiveness of any sampling strategy depends on many factors, including the required sample size, the mechanical and chemical soil properties (cohesive, hard or porous regolith, stones), and the environmental conditions (gravity, temperature, pressure, radiation). Many sampling mechanisms have been studied, designed and built in the past; two techniques to collect regolith samples were chosen for the Phobos-Grunt mission. The proposed system consisted of a robotic arm with a 1.2 m reach beyond the lander (IKI RAN); a tubular sampling device designed for collecting both regolith and small rock fragments (IKI RAN); and the CHOMIK device (CBK PAN), a low-velocity penetrator with a single-sample container for collecting samples from the rocky surface. The functional tests were an essential step in the development of the robotic arm, sampling device and CHOMIK device in the frame of the Phobos-Grunt mission. Three major results were achieved: (i) an operation scenario for autonomous sampling; (ii) technical characteristics of both devices, i.e. progress cycles of the CHOMIK device in different materials and torque in the manipulator joints during sampling operations; and (iii) confirmation of the applicability of both devices to perform such tasks. The phases in the operational scenario were prepared to meet mission and system requirements mainly connected with: (i) the environment (near-zero gravity, vacuum, dust), (ii) safety, and (iii) avoiding simultaneous operation of both devices.
Green, Mark B; Campbell, John L; Yanai, Ruth D; Bailey, Scott W; Bailey, Amey S; Grant, Nicholas; Halm, Ian; Kelsey, Eric P; Rustad, Lindsey E
2018-01-01
The design of a precipitation monitoring network must balance the demand for accurate estimates with the resources needed to build and maintain the network. If there are changes in the objectives of the monitoring or the availability of resources, network designs should be adjusted. At the Hubbard Brook Experimental Forest in New Hampshire, USA, precipitation has been monitored with a network established in 1955 that has grown to 23 gauges distributed across nine small catchments. This high sampling intensity allowed us to simulate reduced sampling schemes and thereby evaluate the effect of decommissioning gauges on the quality of precipitation estimates. We considered all possible scenarios of sampling intensity for the catchments on the south-facing slope (2047 combinations) and the north-facing slope (4095 combinations), from the current scenario with 11 or 12 gauges to only 1 gauge remaining. Gauge scenarios differed by as much as 6.0% from the best estimate (based on all the gauges), depending on the catchment, but 95% of the scenarios gave estimates within 2% of the long-term average annual precipitation. The insensitivity of precipitation estimates and the catchment fluxes that depend on them under many reduced monitoring scenarios allowed us to base our reduction decision on other factors such as technician safety, the time required for monitoring, and co-location with other hydrometeorological measurements (snow, air temperature). At Hubbard Brook, precipitation gauges could be reduced from 23 to 10 with a change of <2% in the long-term precipitation estimates. The decision-making approach illustrated in this case study is applicable to the redesign of monitoring networks when reduction of effort seems warranted.
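A minimal sketch of the subset evaluation described above: for each possible number of retained gauges, enumerate every combination, estimate catchment precipitation as the mean of the retained gauges, and record the worst-case deviation from the all-gauge estimate. Gauge values are synthetic, and the simple unweighted mean stands in for the network's actual estimation procedure.

```python
import itertools
import numpy as np

rng = np.random.default_rng(11)
annual_precip = rng.normal(1400, 60, size=11)   # 11 gauges on one slope (mm/yr, synthetic)
best_estimate = annual_precip.mean()            # "best estimate" from all gauges

worst_dev_by_size = {}
for k in range(1, len(annual_precip) + 1):
    devs = [abs(np.mean(annual_precip[list(keep)]) - best_estimate) / best_estimate
            for keep in itertools.combinations(range(len(annual_precip)), k)]
    worst_dev_by_size[k] = 100 * max(devs)

for k, dev in worst_dev_by_size.items():
    print(f"{k:2d} gauges kept: worst-case deviation {dev:.1f}%")
```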
Collecting cometary soil samples? Development of the ROSETTA sample acquisition system
NASA Technical Reports Server (NTRS)
Coste, P. A.; Fenzi, M.; Eiden, Michael
1993-01-01
In the reference scenario of the ROSETTA CNSR (Comet Nucleus Sample Return) mission, the Sample Acquisition System is mounted on the Comet Lander. Its tasks are to acquire three kinds of cometary samples and to transfer them to the Earth Return Capsule. Operations are to be performed in vacuum and microgravity, on a probably rough and dusty surface, in a largely unknown material, at temperatures on the order of 100 K. The concept and operation of the Sample Acquisition System are presented. The design of the prototype corer and surface sampling tool, and of the equipment for testing them at cryogenic temperatures in ambient conditions and in vacuum in various materials representing cometary soil, is described. Results of recent preliminary tests performed in low-temperature thermal vacuum in a cometary analog ice-dust mixture are provided.
Adaptation of G-TAG Software for Validating Touch-and-Go Comet Surface Sampling Design Methodology
NASA Technical Reports Server (NTRS)
Mandic, Milan; Acikmese, Behcet; Blackmore, Lars
2011-01-01
The G-TAG software tool was developed under the R&TD on Integrated Autonomous Guidance, Navigation, and Control for Comet Sample Return, and represents a novel, multi-body dynamics simulation software tool for studying TAG sampling. The G-TAG multi-body simulation tool provides a simulation environment in which a Touch-and-Go (TAG) sampling event can be extensively tested. TAG sampling requires the spacecraft to descend to the surface, contact the surface with a sampling collection device, and then to ascend to a safe altitude. The TAG event lasts only a few seconds but is mission-critical with potentially high risk. Consequently, there is a need for the TAG event to be well characterized and studied by simulation and analysis in order for the proposal teams to converge on a reliable spacecraft design. This adaptation of the G-TAG tool was developed to support the Comet Odyssey proposal effort, and is specifically focused to address comet sample return missions. In this application, the spacecraft descends to and samples from the surface of a comet. Performance of the spacecraft during TAG is assessed based on survivability and sample collection performance. For the adaptation of the G-TAG simulation tool to comet scenarios, models are developed that accurately describe the properties of the spacecraft, approach trajectories, and descent velocities, as well as the models of the external forces and torques acting on the spacecraft. The adapted models of the spacecraft, descent profiles, and external sampling forces/torques were more sophisticated and customized for comets than those available in the basic G-TAG simulation tool. Scenarios implemented include the study of variations in requirements, spacecraft design (size, locations, etc. of the spacecraft components), and the environment (surface properties, slope, disturbances, etc.). The simulations, along with their visual representations using G-View, contributed to the Comet Odyssey New Frontiers proposal effort by indicating problems and/or benefits of different approaches and designs.
NASA Astrophysics Data System (ADS)
Hurst, A.; Bowden, S. A.; Parnell, J.; Burchell, M. J.; Ball, A. J.
2007-12-01
There are a number of measurements relevant to planetary geology that can only be adequately performed by physically contacting a sample. This necessitates landing on the surface of a moon or planetary body or returning samples to Earth. The need to physically contact a sample is particularly important in the case of measurements that could detect medium to low concentrations of large organic molecules present in surface materials. Large organic molecules, although a trace component of many meteoritic materials and rocks on the surface of the Earth, carry crucial information concerning the processing of meteoritic material in the surface and subsurface environments, and can be crucial indicators for the presence of life. Unfortunately, landing on the surface of a small planetary body or moon is complicated, particularly if the surface topography is only poorly characterised and the atmosphere is thin, thus requiring a propulsion system for a soft landing. One alternative to a surface landing may be to use an impactor launched from an orbiting spacecraft to launch material from the planet's surface and shallow sub-surface into orbit. Ejected material could then be collected by a follow-up spacecraft and analyzed. The mission scenario considered in the Europa-Ice Clipper mission proposal included both sample return and the analysis of captured particles. Employing such a sampling procedure to analyse large organic molecules is only viable if large organic molecules present in ices survive hypervelocity impacts (HVIs). To investigate the survival of large organic molecules in HVIs with icy bodies, a two-stage light gas gun was used to fire steel projectiles (1-1.5 mm diameter) at samples of water ice containing large organic molecules (amino acids, anthracene and beta-carotene, a biological pigment) at velocities > 4.8 km/s. UV-VIS spectroscopy of ejected material detected beta-carotene, indicating that large organic molecules can survive hypervelocity impacts. These preliminary results are yet to be scaled up to a point where they can be accurately interpreted in the context of a likely mission scenario. However, they strongly indicate that in a low-mass payload mission scenario where a lander has been considered unfeasible, such a sampling strategy merits further consideration.
Sample size considerations when groups are the appropriate unit of analyses
Sadler, Georgia Robins; Ko, Celine Marie; Alisangco, Jennifer; Rosbrook, Bradley P.; Miller, Eric; Fullerton, Judith
2007-01-01
This paper discusses issues to be considered by nurse researchers when groups should be used as a unit of randomization. Advantages and disadvantages are presented, with statistical calculations needed to determine effective sample size. Examples of these concepts are presented using data from the Black Cosmetologists Promoting Health Program. Different hypothetical scenarios and their impact on sample size are presented. Given the complexity of calculating sample size when using groups as a unit of randomization, it’s advantageous for researchers to work closely with statisticians when designing and implementing studies that anticipate the use of groups as the unit of randomization. PMID:17693219
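A minimal sketch of the sample-size inflation at the heart of the group-randomization issue discussed above: the individually randomized sample size is multiplied by the design effect 1 + (m - 1) x ICC, where m is the average group size and ICC is the intraclass correlation. Effect size, group size, and ICC values are illustrative, not those of the Black Cosmetologists Promoting Health Program.

```python
import math
from scipy import stats

def n_per_arm_individual(effect_size, power=0.80, alpha=0.05):
    # Two-sample comparison of means with equal allocation (normal approximation).
    z_alpha = stats.norm.isf(alpha / 2)      # ~1.96 for alpha = 0.05
    z_power = stats.norm.isf(1 - power)      # ~0.84 for 80% power
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

def n_per_arm_cluster(effect_size, group_size, icc, power=0.80, alpha=0.05):
    deff = 1 + (group_size - 1) * icc        # design effect for group randomization
    return math.ceil(n_per_arm_individual(effect_size, power, alpha) * deff)

# With 20 people per group and ICC = 0.05, the required n per arm nearly doubles.
print(n_per_arm_cluster(effect_size=0.3, group_size=20, icc=0.05))
```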
DRME: Count-based differential RNA methylation analysis at small sample size scenario.
Liu, Lian; Zhang, Shao-Wu; Gao, Fan; Zhang, Yixin; Huang, Yufei; Chen, Runsheng; Meng, Jia
2016-04-15
Differential methylation, which concerns the difference in the degree of epigenetic regulation via methylation between two conditions, has been formulated as a beta or beta-binomial distribution to address the within-group biological variability in sequencing data. However, a beta or beta-binomial model is usually difficult to infer in small-sample-size scenarios with discrete read counts in sequencing data. On the other hand, as an emerging research field, RNA methylation has drawn more and more attention recently, and the differential analysis of RNA methylation is significantly different from that of DNA methylation due to the impact of transcriptional regulation. We developed DRME to better address the differential RNA methylation problem. The proposed model can effectively describe within-group biological variability in small-sample-size scenarios and handles the impact of transcriptional regulation on RNA methylation. We tested the newly developed DRME algorithm on simulated data and on 4 MeRIP-Seq case-control studies and compared it with Fisher's exact test. It is in principle widely applicable to several other RNA-related data types as well, including RNA bisulfite sequencing and PAR-CLIP. The code, together with a MeRIP-Seq dataset, is available online (https://github.com/lzcyzm/DRME) for evaluation and reproduction of the figures shown in this article. Copyright © 2016 Elsevier Inc. All rights reserved.
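For reference, the sketch below shows the Fisher's exact test baseline that DRME is compared against in the abstract: replicate counts are pooled within each condition and a single 2x2 table is tested, which is exactly why within-group biological variability is not captured. The read counts are illustrative.

```python
from scipy import stats

# Replicate-level (IP, input) read counts at one site (illustrative values).
condition_a = [(30, 120), (25, 110)]
condition_b = [(60, 100), (70, 95)]

# Pool replicates within each condition, discarding within-group variability.
ip_a, input_a = map(sum, zip(*condition_a))
ip_b, input_b = map(sum, zip(*condition_b))

odds_ratio, p_value = stats.fisher_exact([[ip_a, input_a], [ip_b, input_b]])
print(f"Fisher's exact p = {p_value:.3g} (no within-group variance modeled)")
```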
Asteroid exploration and utilization: The Hawking explorer
NASA Technical Reports Server (NTRS)
Carlson, Alan; Date, Medha; Duarte, Manny; Erian, Neil; Gafka, George; Kappler, Peter; Patano, Scott; Perez, Martin; Ponce, Edgar; Radovich, Brian
1991-01-01
The Earth is nearing depletion of its natural resources at a time when human beings are rapidly expanding the frontiers of space. The resources which may exist on asteroids could have enormous potential for aiding and enhancing human space exploration as well as life on Earth. With the possibly limitless opportunities that exist, it is clear that asteroids are the next step for human existence in space. This report comprises the efforts of NEW WORLDS, Inc. to develop a comprehensive design for an asteroid exploration/sample return mission. This mission is a precursor to proof-of-concept missions that will investigate the validity of mining and materials processing on an asteroid. Project STONER (Systematic Transfer of Near Earth Resources) is based on two utilization scenarios: (1) moving an asteroid to an advantageous location for use by Earth; and (2) mining an asteroid and transporting raw materials back to Earth. The asteroid explorer/sample return mission is designed in the context of both scenarios and is the first phase of a long-range plan for humans to utilize asteroid resources. The report concentrates specifically on the selection of the most promising asteroids for exploration and the development of an exploration scenario. Future utilization as well as subsystem requirements of an asteroid sample return probe are also addressed.
Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M
2011-03-01
The study evaluated the power of the randomized placebo-phase design (RPPD), a new design for randomized clinical trials (RCTs), compared with the traditional parallel groups design, assuming various response time distributions. In the RPPD, at some point, all subjects receive the experimental therapy, and the exposure to placebo is for only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed across the response time distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel groups RCT had higher power compared with the RPPD. The sample size requirement varies depending on the underlying hazard distribution. The RPPD requires more subjects to achieve a similar power to the parallel groups design. Copyright © 2011 Elsevier Inc. All rights reserved.
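The original simulation program was written in R; the sketch below reproduces only the response-time generation for the exponential scenario, in Python, with rates chosen so that the medians match the 355-day (placebo) and 42-day (experimental drug) values quoted above. The trial-design and analysis layers that sit on top of these draws are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
MEDIAN_PLACEBO, MEDIAN_DRUG = 355.0, 42.0        # days, as in the abstract

def exponential_response_times(n, median, rng):
    rate = np.log(2.0) / median                  # exponential rate implied by the median
    return rng.exponential(scale=1.0 / rate, size=n)

placebo_times = exponential_response_times(1000, MEDIAN_PLACEBO, rng)
drug_times = exponential_response_times(1000, MEDIAN_DRUG, rng)
print(np.median(placebo_times), np.median(drug_times))   # roughly 355 and 42
```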
Asteroid exploration and utilization: The Hawking explorer
NASA Astrophysics Data System (ADS)
Carlson, Alan; Date, Medha; Duarte, Manny; Erian, Neil; Gafka, George; Kappler, Peter; Patano, Scott; Perez, Martin; Ponce, Edgar; Radovich, Brian
1991-12-01
The Earth is nearing depletion of its natural resources at a time when human beings are rapidly expanding the frontiers of space. The resources which may exist on asteroids could have enormous potential for aiding and enhancing human space exploration as well as life on Earth. With the possibly limitless opportunities that exist, it is clear that asteroids are the next step for human existence in space. This report comprises the efforts of NEW WORLDS, Inc. to develop a comprehensive design for an asteroid exploration/sample return mission. This mission is a precursor to proof-of-concept missions that will investigate the validity of mining and materials processing on an asteroid. Project STONER (Systematic Transfer of Near Earth Resources) is based on two utilization scenarios: (1) moving an asteroid to an advantageous location for use by Earth; and (2) mining an asteroid and transporting raw materials back to Earth. The asteroid explorer/sample return mission is designed in the context of both scenarios and is the first phase of a long-range plan for humans to utilize asteroid resources. The report concentrates specifically on the selection of the most promising asteroids for exploration and the development of an exploration scenario. Future utilization as well as subsystem requirements of an asteroid sample return probe are also addressed.
Shuhama, R; Del-Ben, C M; Loureiro, S R; Graeff, F G
2008-04-01
A previous scenario-based study conducted in Hawaii suggested that humans share with non-human mammals the same basic defensive strategies: risk assessment, freezing, defensive threat, defensive attack, and flight. The selection of the most adaptive strategy is strongly influenced by features of the threat stimulus: magnitude, escapability, distance, ambiguity, and availability of a hiding place. To verify whether these strategies are consistent in a different culture, 12 defensive scenarios were translated into Portuguese and adapted to the Brazilian culture. The sample consisted of male and female undergraduate students divided into two groups: 76 students, who evaluated the five dimensions of each scenario, and 248 medical students, who chose the most likely response for each scenario. In agreement with the findings from studies of non-human mammal species, the scenarios were able to elicit different defensive behavioral responses, depending on features of the threat. "Flight" was chosen as the most likely response in scenarios evaluated as an unambiguous and intense threat, but with an available route of escape, whereas "attack" was chosen in an unambiguous, intense and close dangerous situation without an escape route. Less urgent behaviors, such as "check out", were chosen in scenarios evaluated as less intense, more distant and more ambiguous. Moreover, the results from the Brazilian sample were similar to the results obtained in the original study with Hawaiian students. These data suggest that a basic repertoire of defensive strategies has been conserved across mammalian evolution because these strategies share similar functional benefits in maintaining fitness.
An optimized network for phosphorus load monitoring for Lake Okeechobee, Florida
Gain, W.S.
1997-01-01
Phosphorus load data were evaluated for Lake Okeechobee, Florida, for water years 1982 through 1991. Standard errors for load estimates were computed from available phosphorus concentration and daily discharge data. Components of error were associated with uncertainty in concentration and discharge data and were calculated for existing conditions and for 6 alternative load-monitoring scenarios for each of 48 distinct inflows. Benefit-cost ratios were computed for each alternative monitoring scenario at each site by dividing estimated reductions in load uncertainty by the 5-year average costs of each scenario in 1992 dollars. Absolute and marginal benefit-cost ratios were compared in an iterative optimization scheme to determine the most cost-effective combination of discharge and concentration monitoring scenarios for the lake. If the current (1992) discharge-monitoring network around the lake is maintained, the water-quality sampling at each inflow site twice each year is continued, and the nature of loading remains the same, the standard error of computed mean-annual load is estimated at about 98 metric tons per year compared to an absolute loading rate (inflows and outflows) of 530 metric tons per year. This produces a relative uncertainty of nearly 20 percent. The standard error in load can be reduced to about 20 metric tons per year (4 percent) by adopting an optimized set of monitoring alternatives at a cost of an additional $200,000 per year. The final optimized network prescribes changes to improve both concentration and discharge monitoring. These changes include the addition of intensive sampling with automatic samplers at 11 sites, the initiation of event-based sampling by observers at another 5 sites, the continuation of periodic sampling 12 times per year at 1 site, the installation of acoustic velocity meters to improve discharge gaging at 9 sites, and the improvement of a discharge rating at 1 site.
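The iterative optimization described here amounts to ranking candidate monitoring upgrades by the reduction in load uncertainty they buy per dollar and selecting greedily within a budget; the sketch below illustrates only that bookkeeping, with hypothetical site names, benefits, and costs.

```python
# Greedy benefit-cost selection sketch for monitoring upgrades (hypothetical data).
# benefit = reduction in uncertainty of the load estimate; cost = average $/yr.
candidates = [
    {"site": "S-191", "option": "automatic sampler",      "benefit": 900.0, "cost": 25000.0},
    {"site": "S-154", "option": "event-based sampling",   "benefit": 300.0, "cost": 8000.0},
    {"site": "S-65E", "option": "acoustic velocity meter","benefit": 450.0, "cost": 15000.0},
    {"site": "S-84",  "option": "improved rating",        "benefit": 60.0,  "cost": 2000.0},
]

budget = 40000.0
selected, spent = [], 0.0
# pick the highest benefit/cost option that still fits the budget, then repeat
for cand in sorted(candidates, key=lambda c: c["benefit"] / c["cost"], reverse=True):
    if spent + cand["cost"] <= budget:
        selected.append(cand)
        spent += cand["cost"]

for cand in selected:
    print(f'{cand["site"]}: {cand["option"]} (B/C = {cand["benefit"] / cand["cost"]:.3f})')
print(f"Total cost: ${spent:,.0f}")
```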
The Adequacy of Different Robust Statistical Tests in Comparing Two Independent Groups
ERIC Educational Resources Information Center
Pero-Cebollero, Maribel; Guardia-Olmos, Joan
2013-01-01
In the current study, we evaluated various robust statistical methods for comparing two independent groups. Two scenarios for simulation were generated: one of equality and another of population mean differences. In each of the scenarios, 33 experimental conditions were used as a function of sample size, standard deviation and asymmetry. For each…
Location tests for biomarker studies: a comparison using simulations for the two-sample case.
Scheinhardt, M O; Ziegler, A
2013-01-01
Gene, protein, or metabolite expression levels are often non-normally distributed, heavy-tailed and contain outliers. Standard statistical approaches may fail as location tests in this situation. In three Monte-Carlo simulation studies, we aimed to compare the type I error levels and empirical power of standard location tests and three adaptive tests [O'Gorman, Can J Stat 1997; 25: 269-279; Keselman et al., Brit J Math Stat Psychol 2007; 60: 267-293; Szymczak et al., Stat Med 2013; 32: 524-537] for a wide range of distributions. We simulated two-sample scenarios using the g-and-k-distribution family to systematically vary tail length and skewness with identical and varying variability between groups. All tests kept the type I error level when groups did not vary in their variability. The standard non-parametric U-test performed well in all simulated scenarios. It was outperformed by the two non-parametric adaptive methods in the case of heavy tails or large skewness. Most tests did not keep the type I error level for skewed data in the case of heterogeneous variances. The standard U-test was a powerful and robust location test for most of the simulated scenarios except for very heavy-tailed or heavily skewed data, and it is thus to be recommended except for these cases. The non-parametric adaptive tests were powerful for both normal and non-normal distributions under sample variance homogeneity. But when sample variances differed, they did not keep the type I error level. The parametric adaptive test lacks power for skewed and heavy-tailed distributions.
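The g-and-k family used in these simulations is defined through its quantile function, so samples can be drawn by pushing uniform variates through it; the sketch below (with arbitrary parameter values, not those of the study, and scipy assumed) generates two shifted samples and applies the Mann-Whitney U test as the standard non-parametric location test.

```python
# Sketch: simulate two samples from the g-and-k distribution (defined via its quantile
# function) and apply the Mann-Whitney U test. Parameter values are arbitrary examples.
import numpy as np
from scipy import stats

def gk_sample(n, A=0.0, B=1.0, g=0.0, k=0.0, c=0.8, rng=None):
    """Draw n values from a g-and-k distribution by inverting uniforms through Q(u)."""
    rng = rng or np.random.default_rng()
    z = stats.norm.ppf(rng.uniform(size=n))
    skew = (1.0 - np.exp(-g * z)) / (1.0 + np.exp(-g * z))
    return A + B * (1.0 + c * skew) * (1.0 + z ** 2) ** k * z

rng = np.random.default_rng(2)
x = gk_sample(40, A=0.0, g=0.5, k=0.2, rng=rng)   # skewed, heavy-tailed group
y = gk_sample(40, A=0.5, g=0.5, k=0.2, rng=rng)   # same shape, shifted location
u_stat, p = stats.mannwhitneyu(x, y, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p:.3f}")
```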
Extreme Magnitude Earthquakes and their Economical Consequences
NASA Astrophysics Data System (ADS)
Chavez, M.; Cabrera, E.; Ashworth, M.; Perea, N.; Emerson, D.; Salazar, A.; Moulinec, C.
2011-12-01
The frequency of occurrence of extreme magnitude earthquakes varies from tens to thousands of years, depending on the seismotectonic region of the world considered. However, when their hypocenters are located in the neighborhood of heavily populated and/or industrialized regions, the human and economic losses can be very large, as recently observed for the 1985 Mw 8.01 Michoacan, Mexico, and the 2011 Mw 9 Tohoku, Japan, earthquakes. Here, a methodology is proposed to estimate the probability of exceedance of the intensities of extreme magnitude earthquakes (PEI) and of their direct economic consequences (PEDEC). The PEI are obtained by using supercomputing facilities to generate samples of the 3D propagation of plausible extreme earthquake scenarios and by enlarging those samples through Monte Carlo simulation. The PEDEC are computed by combining appropriate vulnerability functions with the scenario intensity samples, again using Monte Carlo simulation. An example of the application of the methodology to Mexico City, for the potential occurrence of extreme Mw 8.5 subduction earthquakes, is presented.
A hard-to-read font reduces the framing effect in a large sample.
Korn, Christoph W; Ries, Juliane; Schalk, Lennart; Oganian, Yulia; Saalbach, Henrik
2018-04-01
How can apparent decision biases, such as the framing effect, be reduced? Intriguing findings within recent years indicate that foreign language settings reduce framing effects, which has been explained in terms of deeper cognitive processing. Because hard-to-read fonts have been argued to trigger deeper cognitive processing, so-called cognitive disfluency, we tested whether hard-to-read fonts reduce framing effects. We found no reliable evidence for an effect of hard-to-read fonts on four framing scenarios in a laboratory (final N = 158) and an online study (N = 271). However, in a preregistered online study with a rather large sample (N = 732), a hard-to-read font reduced the framing effect in the classic "Asian disease" scenario (in a one-sided test). This suggests that hard-to-read fonts can modulate decision biases, albeit with rather small effect sizes. Overall, our findings stress the importance of large samples for the reliability and replicability of modulations of decision biases.
Revised direct radiocarbon dating of the Vindija G1 Upper Paleolithic Neandertals.
Higham, Tom; Ramsey, Christopher Bronk; Karavanić, Ivor; Smith, Fred H; Trinkaus, Erik
2006-01-17
The 1998/1999 direct dating of two Neandertal specimens from level G(1) of Vindija Cave in Croatia to approximately 28,000 and approximately 29,000 radiocarbon ((14)C) years ago has led to interpretations concerning the late survival of Neandertals in south-central Europe, patterns of interaction between Neandertals and in-dispersing early modern humans in Europe, and complex biocultural scenarios for the earlier phases of the Upper Paleolithic. Given improvements, particularly in sample pretreatment techniques for bone radiocarbon samples, especially ultrafiltration of collagen samples, these Vindija G(1) Neandertal fossils are redated to approximately 32,000-33,000 (14)C years ago and possibly earlier. These results and the recent redating of a number of purportedly old modern human skeletal remains in Europe to younger time periods highlight the importance of fine chronological control when studying this biocultural time period and the tenuous nature of monolithic scenarios for the establishment of modern humans and earlier phases of the Upper Paleolithic in Europe.
2D Models for the evolving distribution of impact melt at the lunar near-surface
NASA Astrophysics Data System (ADS)
Liu, T.; Michael, G. G.; Oberst, J.
2017-09-01
This study aims to investigate the cumulative effect of the impact gardening process. The lateral distribution of melt of diverse ages is traced in this model. Using the observed distribution of melt ages in lunar samples and meteorites, possible scenarios of the lunar impact history can be discriminated. The record is also helpful for future lunar sampling, guiding the choice of sites to obtain samples from different impact basins and to understand the mixture of melt ages observed at any one site.
Physician-assisted death. Opinions of a sample of Mexican physicians.
Lisker, Rubén; Alvarez Del Rio, Asunción; Villa, Antonio R; Carnevale, Alessandra
2008-05-01
There is insufficient information on what Mexicans think of physician-assisted death, a problem that is currently being discussed in our legislative bodies. This paper discusses the findings among a sample of physicians. The sample comprised 2097 physicians from several specialties employed by a Mexican government health system, distributed throughout the country. Each physician received a structured questionnaire exploring what they thought of two different scenarios related to physician-assisted death: 1) intolerable suffering of patients; and 2) persistent vegetative state (PVS). Questions included data on several personal characteristics of the respondents and two open-ended questions asking the reasons why they answered the main questions as they did. There was an overall response rate of 47.3%. Approximately 40% agreed with physicians helping terminally ill patients who request to die because of intolerable suffering caused by incurable diseases, whereas 44% said no and the rest were undecided. This was statistically different from the answers to the scenario in which the relatives of a patient in a PVS ask their physician to help him or her die, where 48% of respondents said yes and 35% said no. The main reasons to say yes in both scenarios were respect for patient or family autonomy and the avoidance of suffering, whereas those opposed cited other ethical and mainly religious considerations. The variable most strongly associated with approval of both scenarios was of a legal nature, whereas strong religious beliefs were associated with rejecting physician-assisted death. The group was roughly evenly divided, with approximately 40% each for and against the idea of helping a patient die and approximately 20% undecided.
DECHADE: DEtecting slight Changes with HArd DEcisions in Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Ciuonzo, D.; Salvo Rossi, P.
2018-07-01
This paper focuses on the problem of change detection through a Wireless Sensor Network (WSN) whose nodes report only binary decisions (on the presence/absence of a certain event to be monitored), due to bandwidth/energy constraints. The resulting problem can be modelled as testing the equality of samples drawn from independent Bernoulli probability mass functions, when the bit probabilities under both hypotheses are not known. Both One-Sided (OS) and Two-Sided (TS) tests are considered, with reference to: (i) identical bit probability (a homogeneous scenario), (ii) different per-sensor bit probabilities (a non-homogeneous scenario) and (iii) regions with identical bit probability (a block-homogeneous scenario) for the observed samples. The goal is to provide a systematic framework collecting a plethora of viable detectors (designed via theoretically founded criteria) which can be used for each instance of the problem. Finally, verification of the derived detectors in two relevant WSN-related problems is provided to show the appeal of the proposed framework.
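In the homogeneous case the problem reduces to testing whether two sets of binary sensor decisions share the same bit probability when that probability is unknown under both hypotheses; a generalized likelihood ratio sketch for this reduced two-sided problem is given below (it illustrates the testing problem, not the specific detectors derived in the paper, and scipy is assumed).

```python
# Sketch: GLRT for equality of two Bernoulli samples with unknown bit probabilities
# (the homogeneous, two-sided instance of the problem; not the paper's detectors).
import numpy as np
from scipy import stats

def bernoulli_loglik(k, n, p):
    """Log-likelihood of k ones out of n Bernoulli trials with success probability p."""
    if p in (0.0, 1.0):
        return 0.0 if k in (0, n) else -np.inf
    return k * np.log(p) + (n - k) * np.log(1.0 - p)

def glrt_equal_bernoulli(bits_ref, bits_test):
    k1, n1 = int(np.sum(bits_ref)), len(bits_ref)
    k2, n2 = int(np.sum(bits_test)), len(bits_test)
    p1, p2 = k1 / n1, k2 / n2
    p0 = (k1 + k2) / (n1 + n2)                      # pooled estimate under H0
    llr = (bernoulli_loglik(k1, n1, p1) + bernoulli_loglik(k2, n2, p2)
           - bernoulli_loglik(k1, n1, p0) - bernoulli_loglik(k2, n2, p0))
    stat = 2.0 * llr
    p_value = stats.chi2.sf(stat, df=1)             # asymptotic null distribution
    return stat, p_value

rng = np.random.default_rng(3)
before = rng.binomial(1, 0.10, size=200)            # decisions before the change
after = rng.binomial(1, 0.25, size=200)             # decisions after a slight change
print(glrt_equal_bernoulli(before, after))
```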
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first approach is to apply sample average approximation (SAA) to approximate the two-stage stochastic problem via sampling. Then, by applying the multiple-cuts Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
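Sample average approximation replaces the expectation over an exponentially large scenario set with an average over a modest random sample of scenarios; the sketch below applies the idea to a toy two-stage (newsvendor-style) problem rather than the hub-location model, with all costs and the demand distribution chosen arbitrarily.

```python
# Generic SAA sketch on a toy two-stage problem (newsvendor-style), not the hub model:
# choose a first-stage decision x to minimize cost + E[recourse], approximating the
# expectation by an average over N sampled demand scenarios.
import numpy as np

rng = np.random.default_rng(9)
order_cost, shortage_cost, salvage = 1.0, 4.0, 0.2
scenarios = rng.lognormal(mean=3.0, sigma=0.4, size=500)   # sampled demand scenarios

def saa_objective(x, demands):
    shortage = np.maximum(demands - x, 0.0)
    leftover = np.maximum(x - demands, 0.0)
    recourse = shortage_cost * shortage - salvage * leftover
    return order_cost * x + recourse.mean()                 # sample average of recourse

candidates = np.linspace(0, 60, 601)
best_x = min(candidates, key=lambda x: saa_objective(x, scenarios))
print(f"SAA solution x* ~ {best_x:.1f}, objective ~ {saa_objective(best_x, scenarios):.2f}")
```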
Aerobraking strategies for the sample of comet coma earth return mission
NASA Astrophysics Data System (ADS)
Abe, Takashi; Kawaguchi, Jun'ichiro; Uesugi, Kuninori; Yen, Chen-Wan L.
The results of a study to validate the applicability of the aerobraking concept to the SOCCER (sample of comet coma earth return) mission using a six-DOF computer simulation of the aerobraking process are presented. The SOCCER spacecraft and the aerobraking scenario and power supply problem are briefly described. Results are presented for the spin effect, payload exposure problem, and sun angle effect.
Aerobraking strategies for the sample of comet coma earth return mission
NASA Technical Reports Server (NTRS)
Abe, Takashi; Kawaguchi, Jun'ichiro; Uesugi, Kuninori; Yen, Chen-Wan L.
1990-01-01
The results of a study to validate the applicability of the aerobraking concept to the SOCCER (sample of comet coma earth return) mission using a six-DOF computer simulation of the aerobraking process are presented. The SOCCER spacecraft and the aerobraking scenario and power supply problem are briefly described. Results are presented for the spin effect, payload exposure problem, and sun angle effect.
Ali, Nadeem; Ismail, Iqbal Mohammad Ibrahim; Khoder, Mamdouh; Shamy, Magdy; Alghamdi, Mansour; Costa, Max; Ali, Lulwa Naseer; Wang, Wei; Eqani, Syed Ali Musstjab Akber Shah
2016-12-15
This study reports levels and profiles of polycyclic aromatic hydrocarbons (PAHs) in dust samples collected from three different microenvironments (cars, air conditioner (AC) filters and household floors) in Jeddah, Saudi Arabia (KSA) and Kuwait. To the best of our knowledge, this is the first study reporting PAHs in indoor microenvironments of KSA, which makes these findings important. Benzo(b)fluoranthene (BbF), benzo(a)pyrene (BaP), phenanthrene (Phe), and pyrene (Pyr) were found to be the major chemicals in dust samples from all selected microenvironments. ΣPAHs occurred at median concentrations (ng/g) of 3450, 2200, and 2650 in Saudi AC filter, car and household floor dust, respectively. The median levels (ng/g) of ΣPAHs in Kuwaiti car (950) and household floor (1675) dust samples were lower than in Saudi dust. The PAH profile in Saudi dust was dominated by high molecular weight (HMW, 4-5 ring) PAHs, while in Kuwaiti dust 3-ring PAHs made a marked contribution. BaP equivalent, a marker for carcinogenic PAHs, was high in Saudi household floor and AC filter dust, with median levels (ng/g) of 370 and 455, respectively. Different exposure scenarios, using 5th percentile, median, mean, and 95th percentile levels, were estimated for adults and toddlers. For Saudi and Kuwaiti toddlers, the worst exposure scenario for ΣPAHs was calculated at 175 and 85 ng/kg body weight/day (ng/kg bw/d), respectively. For Saudi toddlers, the calculated worst exposure scenarios for carcinogenic BaP (27.7) and BbF (29.3 ng/kg bw/d) were 2-4 times higher than for Kuwaiti toddlers. This study is based on a small number of samples, which necessitates more detailed studies for a better understanding of the dynamics of PAHs in the indoor environments of this region. Nevertheless, our findings support ongoing exposure of the population to organic pollutants that accumulate indoors. Copyright © 2016. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Svejkosky, Joseph
The spectral signatures of vehicles in hyperspectral imagery exhibit temporal variations due to the preponderance of surfaces with material properties that display non-Lambertian bi-directional reflectance distribution functions (BRDFs). These temporal variations are caused by changing illumination conditions, changing sun-target-sensor geometry, changing road surface properties, and changing vehicle orientations. To quantify these variations and determine their relative importance in a sub-pixel vehicle reacquisition and tracking scenario, a hyperspectral vehicle BRDF sampling experiment was conducted in which four vehicles were rotated at different orientations and imaged over a six-hour period. The hyperspectral imagery was calibrated using novel in-scene methods and converted to reflectance imagery. The resulting BRDF sampled time-series imagery showed a strong vehicle-level BRDF dependence on vehicle shape in off-nadir imaging scenarios and a strong dependence on vehicle color in simulated nadir imaging scenarios. The imagery also exhibited spectral features characteristic of sampling the BRDF of non-Lambertian targets, which were subsequently verified with simulations. In addition, the imagery demonstrated that the illumination contribution from vehicle-adjacent horizontal surfaces significantly altered the shape and magnitude of the vehicle reflectance spectrum. The results of the BRDF sampling experiment illustrate the need for a target vehicle BRDF model and detection scheme that incorporates non-Lambertian BRDFs. A new detection algorithm called Eigenvector Loading Regression (ELR) is proposed that learns a hyperspectral vehicle BRDF from a series of BRDF measurements using regression in a lower dimensional space and then applies the learned BRDF to make test spectrum predictions. In cases of non-Lambertian vehicle BRDF, this detection methodology performs favorably when compared to subspace detection algorithms and graph-based detection algorithms that do not account for the target BRDF. The algorithms are compared using a test environment in which observed spectral reflectance signatures from the BRDF sampling experiment are implanted into aerial hyperspectral imagery that contains large quantities of vehicles.
NASA Astrophysics Data System (ADS)
Dwyer, Linnea; Yadav, Kamini; Congalton, Russell G.
2017-04-01
Providing adequate food and water for a growing global population continues to be a major challenge. Mapping and monitoring crops are useful tools for estimating the extent of crop productivity. GFSAD30 (Global Food Security Analysis Data at 30 m) is a NASA-funded program that is producing global cropland maps using field measurements and remote sensing images. This program studies 8 major crop types and includes information on cropland area/extent, whether crops are irrigated or rainfed, and cropping intensities. Using results from the US and the extensive reference data available in the USDA Cropland Data Layer (CDL), we will experiment with various sampling simulations to determine optimal sampling for thematic map accuracy assessment. These simulations will include varying the sampling unit, the sampling strategy, and the sample number. Results of these simulations will allow us to recommend assessment approaches to handle different cropping scenarios.
Voelz, David G; Roggemann, Michael C
2009-11-10
Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
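A minimal transfer-function Fresnel propagator shows where the chirp sampling issue enters: the quadratic-phase factor H below must be adequately sampled on the chosen grid. The grid size, wavelength, and propagation distance are arbitrary illustration values, not prescriptions from the paper.

```python
# Minimal FFT transfer-function Fresnel propagator (illustrative grid/wavelength values).
import numpy as np

def fresnel_tf_propagate(u_in, wavelength, dx, z):
    """Propagate field u_in (N x N, sample spacing dx) a distance z using the Fresnel
    transfer function H(fx, fy) = exp(i k z) * exp(-i pi lambda z (fx^2 + fy^2))."""
    n = u_in.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    k = 2.0 * np.pi / wavelength
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(u_in) * H)

# Example: square aperture, 0.5 um light, 10 um samples, 0.2 m propagation
n, dx, wl, z = 512, 10e-6, 0.5e-6, 0.2
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
aperture = ((np.abs(X) < 0.5e-3) & (np.abs(Y) < 0.5e-3)).astype(complex)
u_out = fresnel_tf_propagate(aperture, wl, dx, z)
print(np.abs(u_out).max())
```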
Norström, Madelaine; Jonsson, Malin E; Åkerstedt, Johan; Whist, Anne Cathrine; Kristoffersen, Anja Bråthen; Sviland, Ståle; Hopp, Petter; Wahlström, Helene
2014-09-01
Disease caused by Bovine virus diarrhoea virus (BVDV) is notifiable in Norway. An eradication programme started in 1992. The number of herds with restrictions decreased from 2950 in 1994 to zero at the end of 2006. From 2007, the aim of the programme has been surveillance in order to document freedom from the infection. To estimate the probability of freedom from BVDV infection in the Norwegian cattle population by the end of 2011, a scenario tree model of the surveillance programme during the years 2007-2011 was used. Three surveillance system components (SSCs) were included in the model: dairy, beef suckler sampled at farms (2007-2010) and beef suckler sampled at slaughterhouses (2011). The design prevalence was set to 0.2% at herd level and to 30% at within-herd level for the whole cattle population. The median probability of freedom from BVDV in Norway at the end of 2011 was 0.996 (0.995-0.997, credibility interval). The results from the scenario tree model support the conclusion that the Norwegian cattle population is free from BVDV. The highest estimate of the annual sensitivity for the beef suckler SSCs originated from the surveillance at the slaughterhouses in 2011. The change to sampling at the slaughterhouse level further increased the sensitivity of the surveillance. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Ironmonger, Dean; Edeghere, Obaghe; Gossain, Savita; Hawkey, Peter M
2016-05-24
There is a marked variation in both antibiotic prescribing practice and urine sampling rates for diagnostic microbiology across general practices in England. To help understand factors driving this variation, we undertook a survey in 2012/13 to determine sampling protocols and antibiotic formularies used by general practitioners (GPs) for managing urinary tract infections (UTIs) in the West Midlands region of England. Cross-sectional survey of all eligible general practices in the West Midlands region of England undertaken in November 2012. GPs were invited to complete an online survey questionnaire to gather information on policies used within the practice for urine sampling for microbiological examination, and the source of antibiotic formularies used to guide treatment of UTIs. The questionnaire also gathered information on how they would manage five hypothetical clinical scenarios encountered in the community. The response rate was 11.3% (409/3635 GPs), equivalent to a practice response rate of 26% (248/950). Only 50% of GPs reported having a practice policy for urine sampling. Although there was good agreement from GPs regarding collecting specimens in scenarios symbolising treatment failure (98%), UTI in an adult male (98%) and asymptomatic UTI in pregnancy (97%), there was variation in GPs requesting a specimen for the scenarios involving a suspected uncomplicated urinary tract infection (UTI) and an asymptomatic catheterised elderly patient, with 40% and 38%, respectively, indicating they would collect a specimen for microbiological examination. Standardised evidence-based clinical management policies and antibiotic formularies for GPs should be readily available. This will promote the rational use of diagnostic microbiology services, improve antimicrobial stewardship and aid the interpretation of ongoing antimicrobial resistance surveillance.
Radon Sampling, Building 54, Nellis AFB, NV
2012-07-13
BEF performed radon testing in response to a concern of elevated radon levels in Building 54. The building was previously remediated to reduce the... Testing methodology, test scenario: Building 54 was chosen to test for radon gas levels; radon detectors were placed in the test... Dates covered: 22-24 March 2012. Consultative letter.
NASA Technical Reports Server (NTRS)
Tsou, P.; Albee, A.
1985-01-01
The results of a joint JPL/CSFC feasibility study of a low-cost comet sample return flyby mission are presented. It is shown that the mission could be undertaken using current Earth orbiter spacecraft technology in conjunction with pathfinder or beacon spacecraft. Detailed scenarios of missions to comet Honda-Mrkos-Pajdusakova (HMP), comet Kopff, and comet Giacobini-Zinner (GZ) are given, and some cross-sectional diagrams of the spacecraft designs are provided.
Bouffard, Jeff; Bry, Jeff; Smith, Shamayne; Bry, Rhonda
2008-12-01
Much of the criminological literature testing rational choice theory has utilized hypothetical scenarios presented to university students. Although this research generally supports rational choice theory, a common criticism is that conclusions from these studies may not generalize to samples of actual offenders. This study proceeds to examine this issue in two steps. First, a traditional sample of university students is examined to determine how various costs and benefits relate to their hypothetical likelihood of offending. Then the same data collection procedures are employed with a somewhat different sample of younger, adjudicated, and institutionalized offenders to determine whether the conclusions drawn from the student sample generalize to this offender sample. Results generally suggest that the content and process of hypothetical criminal decision making differ in the sample of known offenders relative to the university students. Limitations of the current study, as well as suggestions for future research, are discussed.
Braeye, Toon; Verheagen, Jan; Mignon, Annick; Flipse, Wim; Pierard, Denis; Huygen, Kris; Schirvel, Carole; Hens, Niel
2016-01-01
Introduction Surveillance networks are often neither exhaustive nor completely complementary. In such situations, capture-recapture methods can be used for incidence estimation. The choice of estimator and the estimators' robustness with respect to the homogeneity and independence assumptions are, however, not well documented. Methods We investigated the performance of five different capture-recapture estimators in a simulation study. Eight different scenarios were used to detect and combine case information. The scenarios increasingly violated assumptions of independence of samples and homogeneity of detection probabilities. Belgian datasets on invasive pneumococcal disease (IPD) and pertussis provided motivating examples. Results No estimator was unbiased in all scenarios. Performance of the parametric estimators depended on how much of the dependency and heterogeneity were correctly modelled. Model building was limited by parameter estimability, availability of additional information (e.g. covariates) and the possibilities inherent to the method. In the most complex scenario, methods that allowed for detection probabilities conditional on previous detections estimated the total population size within a 20–30% error range. Parametric estimators remained stable if individual data sources lost up to 50% of their data. The investigated non-parametric methods were more susceptible to data loss and their performance was linked to the dependence between samples; overestimating in scenarios with little dependence, underestimating in others. Issues with parameter estimability made it impossible to model all suggested relations between samples for the IPD and pertussis datasets. For IPD, the estimates for the Belgian incidence for cases aged 50 years and older ranged from 44 to 58 per 100,000 in 2010. The estimates for pertussis (all ages, Belgium, 2014) ranged from 24.2 to 30.8 per 100,000. Conclusion We encourage the use of capture-recapture methods, but epidemiologists should preferably include datasets for which the underlying dependency structure is not too complex, a priori investigate this structure, compensate for it within the model and interpret the results with the remaining unmodelled heterogeneity in mind. PMID:27529167
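For two overlapping surveillance sources, the simplest of these ideas is the two-sample Lincoln-Petersen estimator with Chapman's correction; the sketch below uses hypothetical detection counts and is far simpler than the models compared in the study.

```python
# Two-source capture-recapture sketch: Chapman-corrected Lincoln-Petersen estimator.
# n1, n2 = cases detected by each surveillance source; m = cases detected by both.
# Counts are hypothetical; the study compares more general estimators.
def chapman_estimate(n1, n2, m):
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1          # estimated total cases
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)
           / ((m + 1) ** 2 * (m + 2)))                  # approximate variance
    return n_hat, var ** 0.5

n_hat, se = chapman_estimate(n1=320, n2=410, m=190)
print(f"Estimated total cases: {n_hat:.0f} (SE ~ {se:.0f})")
```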
Contamination of food crops grown on soils with elevated heavy metals content.
Dziubanek, Grzegorz; Piekut, Agata; Rusin, Monika; Baranowska, Renata; Hajok, Ilona
2015-08-01
The exposure of inhabitants of 13 cities of the Upper Silesia Industrial Region to cadmium and lead was estimated on the basis of the heavy metal content of commonly consumed vegetables. The samples were collected from agricultural fields, allotments and home gardens in these cities. Cadmium and lead concentrations in samples of soil and vegetables (cabbage, carrots and potatoes) were determined. The high content of heavy metals in the arable layer of soil in Upper Silesia (max. 48.8 and 2470 mg kg(-1) d.w. for Cd and Pb, respectively) explained the high Cd and Pb concentrations in locally cultivated vegetables, which are well above the permissible level. Three exposure scenarios with different concentrations of Pb and Cd in vegetables were taken into consideration. In Scenario I, where the content of heavy metals was set equal to the maximum permissible level, the hazard quotient (HQ) for Pb and Cd was 0.530 and 0.704, respectively. In the scenarios that assumed consumption of contaminated vegetables from Upper Silesia, the HQ for Pb and Cd was 0.755 and 1.337 in Scenario II and 1.806 and 4.542 in Scenario III. The study showed that consumption of vegetables cultivated in the Upper Silesia Region on agricultural fields, in allotments and in home gardens may pose a significant health risk. Copyright © 2015 Elsevier Inc. All rights reserved.
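The hazard quotient arithmetic behind these scenarios is the estimated daily dose divided by a tolerable daily intake; the sketch below reproduces that calculation with hypothetical concentration, consumption, and reference values rather than the study's data.

```python
# Hazard quotient sketch: HQ = estimated daily dose / tolerable daily intake.
# All numbers are hypothetical placeholders, not the study's measured values.
def hazard_quotient(conc_mg_per_kg, intake_kg_per_day, body_weight_kg, tdi_mg_per_kg_bw):
    daily_dose = conc_mg_per_kg * intake_kg_per_day / body_weight_kg   # mg/kg bw/day
    return daily_dose / tdi_mg_per_kg_bw

# e.g. cadmium in home-garden vegetables under an assumed consumption pattern
hq_cd = hazard_quotient(conc_mg_per_kg=0.25, intake_kg_per_day=0.3,
                        body_weight_kg=70.0, tdi_mg_per_kg_bw=0.00083)
print(f"HQ(Cd) = {hq_cd:.2f}  ({'potential concern' if hq_cd > 1 else 'below 1'})")
```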
On the Prediction of Ground Motion
NASA Astrophysics Data System (ADS)
Lavallee, D.; Schmedes, J.; Archuleta, R. J.
2012-12-01
Using a slip-weakening dynamic model of rupture, we generated earthquake scenarios that provided the spatio-temporal evolution of the slip on the fault and the radiated field at the free surface. We observed scenarios where the rupture propagates at a supershear speed on some parts of the fault while remaining subshear for other parts of the fault. For some scenarios with nearly identical initial conditions, the rupture speed was always subshear. For both types of scenarios (mixture of supershear and subshear speeds, and only subshear), we compute the peak ground accelerations (PGA) regularly distributed over the Earth's surface. We then calculate the probability density functions (PDF) of the PGA. For both types of scenarios, the PDF curves are asymmetrically shaped and asymptotically attenuated according to a power law. This behavior of the PDF is similar to that observed for the PDF curves of PGA recorded during earthquakes. The main difference between scenarios with a supershear rupture speed and scenarios with only subshear rupture speed is the range of PGA values. Based on these results, we investigate three issues fundamental for the prediction of ground motion. It is important to recognize that ground motions recorded during an earthquake sample a small fraction of the radiation field. It is not obvious that such sampling will capture the largest ground motion generated during an earthquake, nor that the number of stations is large enough to properly infer the statistical properties associated with the radiation field. To quantify the effect of under (or low) sampling of the radiation field, we design three experiments. For a scenario where the rupture speed is only subshear, we construct multiple sets of observations. Each set comprises 100 randomly selected PGA values from all of the PGAs calculated at the Earth's surface. In the first experiment, we evaluate how the distributions of PGA in the sets compare with the distribution of all the PGA. For this experiment, we used different statistical tests (e.g. chi-square). This experiment quantifies the likelihood that a random set of PGA can be used to infer the statistical properties of all the PGA. In the second experiment, we fit the PDF of the PGA of every set with probability laws used in the literature to describe the PDF of recorded PGA: the lognormal law, the generalized extreme value law, and the Levy law. For each set, the probability laws are then used to compute the probability of observing a PGA value that will cause "moderate to heavy" potential damage according to the Instrumental Intensity scale developed by the USGS. For each probability law, we compare predictions based on the set with the prediction estimated from all the PGA. This experiment quantifies the reliability and uncertainty in predicting an outcome due to undersampling of the radiation field. The third experiment uses the same sets and repeats the two investigations above, this time comparing with a scenario where the rupture has a supershear speed over part of the fault. The objective here is to assess the additional uncertainty in predicting PGA and damage resulting from ruptures that have supershear speeds.
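The second experiment, fitting candidate probability laws to a random subset of PGA values and computing an exceedance probability, can be sketched as follows; the synthetic PGA values and the 0.34 g threshold are illustrative stand-ins for the scenario PGAs and the USGS damage level, and scipy is assumed.

```python
# Sketch: fit lognormal and generalized extreme value laws to a random subset of PGA
# values and compute an exceedance probability. Synthetic data and an illustrative
# 0.34 g threshold stand in for the scenario PGAs and the damage level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
all_pga = rng.lognormal(mean=np.log(0.08), sigma=0.9, size=20000)   # stand-in "full field"
subset = rng.choice(all_pga, size=100, replace=False)               # 100 "stations"

threshold = 0.34   # hypothetical PGA (g) for "moderate to heavy" damage
for name, dist in [("lognormal", stats.lognorm), ("GEV", stats.genextreme)]:
    params = dist.fit(subset)
    p_exceed = dist.sf(threshold, *params)
    print(f"{name:10s} P(PGA > {threshold} g) = {p_exceed:.4f}")
print(f"empirical (all PGA)     = {(all_pga > threshold).mean():.4f}")
```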
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bessac, Julie; Constantinescu, Emil; Anitescu, Mihai
We propose a statistical space-time model for predicting atmospheric wind speed based on deterministic numerical weather predictions and historical measurements. We consider a Gaussian multivariate space-time framework that combines multiple sources of past physical model outputs and measurements in order to produce a probabilistic wind speed forecast within the prediction window. We illustrate this strategy on wind speed forecasts during several months in 2012 for a region near the Great Lakes in the United States. The results show that the prediction is improved in the mean-squared sense relative to the numerical forecasts as well as in probabilistic scores. Moreover, the samples are shown to produce realistic wind scenarios based on sample spectra and space-time correlation structure.
Bessac, Julie; Constantinescu, Emil; Anitescu, Mihai
2018-03-01
We propose a statistical space-time model for predicting atmospheric wind speed based on deterministic numerical weather predictions and historical measurements. We consider a Gaussian multivariate space-time framework that combines multiple sources of past physical model outputs and measurements in order to produce a probabilistic wind speed forecast within the prediction window. We illustrate this strategy on wind speed forecasts during several months in 2012 for a region near the Great Lakes in the United States. The results show that the prediction is improved in the mean-squared sense relative to the numerical forecasts as well as in probabilistic scores. Moreover, the samples are shown to produce realistic wind scenarios based on sample spectra and space-time correlation structure.
Alignment of cD-galaxies with their surroundings
NASA Technical Reports Server (NTRS)
Vankampen, Eelco; Rhee, George
1990-01-01
For a sample of 122 rich Abell clusters the authors find a strong correlation of the position angle (orientation) of the first-ranked galaxy and its parent cluster. This alignment effect is strongest for cD-galaxies. Formation scenarios for cD galaxies, like the merging scenario, must produce such a strong alignment effect. The authors show some N-body simulations done for this purpose.
Nougadère, Alexandre; Sirot, Véronique; Kadar, Ali; Fastier, Antony; Truchot, Eric; Vergnet, Claude; Hommet, Frédéric; Baylé, Joëlle; Gros, Philippe; Leblanc, Jean-Charles
2012-09-15
Chronic dietary exposure to pesticide residues was assessed for the French population using a total diet study (TDS) to take into account realistic levels in foods as consumed at home (table-ready). Three hundred and twenty-five pesticides and their transformation products, grouped into 283 pesticides according to their residue definition, were sought in 1235 composite samples corresponding to 194 individual food items that cover 90% of the adult and child diet. To make up the composite samples, about 19,000 food products were bought during different seasons from 2007 to 2009 in 36 French cities and prepared according to the food preparation practices recorded in the individual and national consumption survey (INCA2). The results showed that 37% of the samples contained one or more residues. Seventy-three pesticides were detected and 55 quantified at levels ranging from 0.003 to 8.7 mg/kg. The most frequently detected pesticides, identified as monitoring priorities in 2006, were the post-harvest insecticides pirimiphos-methyl and chlorpyrifos-methyl (particularly in wheat-based products), together with chlorpyrifos, iprodione, carbendazim and imazalil, mainly in fruit and fruit juices. Dietary intakes were estimated for each subject of the INCA2 survey under two contamination scenarios to handle left-censored data: a lower-bound (LB) scenario where undetected results were set to zero, and an upper-bound (UB) scenario where undetected results were set to the detection limit. For 90% of the pesticides, exposure levels were below the acceptable daily intake (ADI) under the two scenarios. Under the LB scenario, which tends to underestimate exposure levels, only dimethoate intakes exceeded the ADI for high-level consumers of cherries (0.6% of children and 0.4% of adults). This pesticide, authorised in Europe, and its metabolite were detected in both cherries and endives. Under the UB scenario, which overestimates exposure, a chronic risk could not be excluded for nine other pesticides (dithiocarbamates, ethoprophos, carbofuran, diazinon, methamidophos, disulfoton, dieldrin, endrin and heptachlor). For these pesticides, more sensitive analyses of the main food contributors are needed in order to refine exposure assessment. Copyright © 2012 Elsevier Ltd. All rights reserved.
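The lower-bound/upper-bound treatment of non-detects amounts to substituting zero or the detection limit before summing intake over foods; a minimal sketch of that substitution, with hypothetical concentrations, detection limits, and consumption amounts, is shown below.

```python
# Sketch of lower-bound (LB) vs upper-bound (UB) intake under left-censoring:
# non-detects are set to 0 (LB) or to the detection limit (UB) before summing.
# Concentrations, detection limits and consumption figures are hypothetical.
import numpy as np

foods = ["wheat bread", "apple juice", "cherries"]
conc = np.array([0.012, np.nan, 0.050])        # mg/kg; NaN = not detected
lod = np.array([0.005, 0.005, 0.010])          # detection limits (mg/kg)
consumption = np.array([0.15, 0.20, 0.05])     # kg/day
body_weight = 60.0                             # kg

conc_lb = np.where(np.isnan(conc), 0.0, conc)
conc_ub = np.where(np.isnan(conc), lod, conc)
intake_lb = (conc_lb * consumption).sum() / body_weight   # mg/kg bw/day
intake_ub = (conc_ub * consumption).sum() / body_weight
print(f"LB intake = {intake_lb:.5f}, UB intake = {intake_ub:.5f} mg/kg bw/day")
```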
Raupach-Rosin, Heike; Duddeck, Arne; Gehrlich, Maike; Helmke, Charlotte; Huebner, Johannes; Pletz, Mathias W; Mikolajczyk, Rafael; Karch, André
2017-08-01
Blood culture (BC) sampling rates in Germany are considerably lower than recommended. The aim of our study was to assess the knowledge, attitudes, and practice of physicians in Germany regarding BC diagnostics. We conducted a cross-sectional mixed-methods study among physicians working in inpatient care in Germany. Based on the results of qualitative focus groups, a questionnaire-based quantitative study was conducted in 2015-2016. In total, 706 medical doctors and final-year medical students from 11 out of 16 federal states in Germany participated. BC sampling was considered an important diagnostic tool by 95% of the participants. However, only 23% of them would collect BCs in all three scenarios for which BC ordering is recommended by present guidelines in Germany; almost one out of ten physicians would not have taken blood cultures in any of the three scenarios. The majority of participants (74%) reported not adhering to the guideline recommendation that blood culture sampling should include at least two blood culture sets from two different injection sites. Greater routine in blood culture sampling, the perceived importance of blood culture diagnostics, the availability of an in-house microbiological lab, and the department in which the physician worked were identified as predictors of good blood culture practice. Our study suggests that there are substantial deficits in BC ordering and the application of guidelines for good BC practice in Germany. Based on these findings, multimodal interventions appear necessary for improving BC diagnostics.
NASA Astrophysics Data System (ADS)
Benkler, Erik; Telle, Harald R.
2007-06-01
An improved phase-locked loop (PLL) for versatile synchronization of a sampling pulse train to an optical data stream is presented. It enables optical sampling of the true waveform of repetitive high bit-rate optical time division multiplexed (OTDM) data words such as pseudorandom bit sequences. Visualization of the true waveform can reveal details that cause systematic bit errors. Such errors cannot be inferred from eye diagrams and require word-synchronous sampling. The programmable direct-digital-synthesis circuit used in our novel PLL approach allows flexible adaptation to virtually any problem-specific synchronization scenario, including those required for waveform sampling, for jitter measurements by slope detection, and for classical eye diagrams. Phase comparison of the PLL is performed at the 10-GHz OTDM base clock rate, leading to a residual synchronization jitter of less than 70 fs.
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.
Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P
2015-01-01
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
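The second scenario, recovering physical structure from a connectivity matrix alone, is in spirit a dimensionality-reduction problem; the toy spectral-embedding sketch below (not the paper's algorithm; all parameters are arbitrary) recovers an approximate 1-D ordering of points from a distance-dependent adjacency matrix.

```python
# Toy sketch: recover approximate 1-D positions from a connectivity matrix alone,
# using a Laplacian eigenmap style embedding (illustrative, not the paper's method).
import numpy as np

rng = np.random.default_rng(5)
true_pos = np.sort(rng.uniform(0, 10, size=200))                    # hidden 1-D coordinates
dist = np.abs(true_pos[:, None] - true_pos[None, :])
adj = (rng.uniform(size=dist.shape) < np.exp(-dist)).astype(float)  # nearby -> more links
adj = np.triu(adj, 1)
adj = adj + adj.T                                                    # symmetric, no self-loops

deg = adj.sum(axis=1)
laplacian = np.diag(deg) - adj
vals, vecs = np.linalg.eigh(laplacian)
embedding = vecs[:, 1]                                               # Fiedler vector ~ ordering

corr = abs(np.corrcoef(true_pos, embedding)[0, 1])
print(f"|correlation| between true positions and embedding: {corr:.2f}")
```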
Lin, Chingju; Hsu, Jing-Fang; Liao, Pao-Chi
2012-02-29
The consumption of free-range eggs is becoming more popular worldwide. We analyzed the levels of 12 dioxin-like polychlorinated biphenyls (dl-PCBs) and their congener profiles from 6 free-range and 12 caged egg samples. The mean levels of dl-PCBs in the free-range samples were 5.4 times higher than those in caged eggs. All egg samples exhibited at least two characteristic dl-PCB congener patterns, which reflected distinctive contamination sources. Additionally, for the first time, we demonstrated that the dl-PCB levels in the free-range eggs were highly correlated with elevated levels of 17 polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) (r = 0.986; p < 0.001), indicating a coexposure scenario in free-range hens. Cluster analysis of congener patterns implied that this coexposure scenario could be attributed to distinct dl-PCB and PCDD/F sources. This congener profile information provides insights from a different perspective for further identifying potential dl-PCB and PCDD/F sources in the polluted free-range eggs.
Working group session report: Neutron beam line shielding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russell, G. J.; Ikedo, Y.
2001-01-01
We have examined the differences between a 2-D model and a 3-D model for designing the beam-line shield for the HIPPO instrument at the Lujan Center at the Los Alamos National Laboratory. We have calculated the total (neutron and gamma ray) dose equivalent rate coming out of the personal access ports from the HIPPO instrument experiment cave. In order to answer this question, we have investigated two possible worst-case scenarios: (a) failure of the T0-chopper and no sample at the sample position; and (b) failure of the T0-chopper with a thick sample (a piece of Inconel-718, 10 cm diam by 30 cm long) at the sample position.
Sub-sampling genetic data to estimate black bear population size: A case study
Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.
2007-01-01
Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.
ERIC Educational Resources Information Center
Hayel Al-Srour, Nadia; Al-Ali, Safa M.; Al-Oweidi, Alia
2016-01-01
The present study aims to detect the impact of teacher training on creative writing and problem-solving using both a futuristic-scenarios program for solving problems creatively and creative problem solving. To achieve the objectives of the study, the sample was divided into two groups, the first consisting of 20 teachers and the second of 23 teachers…
Adaptive Peer Sampling with Newscast
NASA Astrophysics Data System (ADS)
Tölgyesi, Norbert; Jelasity, Márk
The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates, which can result in very poor random sampling, especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters: without involving failure detectors, nodes passively monitor local protocol events and use them as feedback in a local control loop that self-tunes the protocol parameters. The proposed solution is evaluated by simulation experiments.
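The gossip exchange underlying Newscast-style peer sampling can be sketched in a few lines: each node keeps a small cache of (peer, age) descriptors, periodically swaps caches with a peer drawn from its own cache, and both keep the freshest entries. The cache size, network size, and round count below are arbitrary, and the paper's adaptive self-tuning is omitted.

```python
# Minimal Newscast-style peer sampling sketch: each node keeps a cache of
# (peer_id, age) descriptors, gossips with a random cached peer, and both keep
# the freshest entries. The adaptive self-tuning from the paper is omitted.
import random

CACHE_SIZE = 8

class Node:
    def __init__(self, node_id, initial_peers):
        self.id = node_id
        self.cache = {p: 0 for p in initial_peers}      # peer_id -> age

    def sample_peer(self):
        return random.choice(list(self.cache))          # the peer sampling service call

    def gossip_with(self, other):
        # exchange caches, add fresh self-descriptors, keep the CACHE_SIZE freshest
        merged = dict(self.cache)
        merged.update(other.cache)
        merged[self.id] = 0
        merged[other.id] = 0
        for node in (self, other):
            entries = {p: a for p, a in merged.items() if p != node.id}
            fresh = sorted(entries.items(), key=lambda kv: kv[1])[:CACHE_SIZE]
            node.cache = {p: a + 1 for p, a in fresh}   # age entries by one round

random.seed(6)
nodes = [Node(i, random.sample(range(100), CACHE_SIZE)) for i in range(100)]
for _ in range(50):                                     # gossip rounds
    for n in nodes:
        n.gossip_with(nodes[n.sample_peer()])
print("example random sample at node 0:", nodes[0].sample_peer())
```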
Effect of separate sampling on classification accuracy.
Shahrokh Esfahani, Mohammad; Dougherty, Edward R
2014-01-15
Measurements are commonly taken from two phenotypes to build a classifier, where the number of data points from each class is predetermined, not random. In this 'separate sampling' scenario, the data cannot be used to estimate the class prior probabilities. Moreover, predetermined class sizes can severely degrade classifier performance, even for large samples. We employ simulations using both synthetic and real data to show the detrimental effect of separate sampling on a variety of classification rules. We establish propositions related to the effect on the expected classifier error owing to a sampling ratio different from the population class ratio. From these we derive a sample-based minimax sampling ratio and provide an algorithm for approximating it from the data. We also extend to arbitrary distributions the classical population-based Anderson linear discriminant analysis minimax sampling ratio derived from the discriminant form of the Bayes classifier. All the codes for synthetic data and real data examples are written in MATLAB. A function called mmratio, whose output is an approximation of the minimax sampling ratio of a given dataset, is also written in MATLAB. All the codes are available at: http://gsp.tamu.edu/Publications/supplementary/shahrokh13b.
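The effect of a training class ratio that differs from the population prior can be seen with a small plug-in LDA simulation; this numpy-only sketch (not the authors' MATLAB code, with hypothetical means and priors) trains once with balanced 50/50 class sizes and once at the population ratio and compares test errors on a 0.9/0.1 population.

```python
# Sketch: effect of separate sampling (fixed 50/50 training class sizes) on plug-in LDA
# when the true population prior is 0.9/0.1. Numpy only; not the authors' MATLAB code.
import numpy as np

rng = np.random.default_rng(7)
d, mu0, mu1 = 2, np.zeros(2), np.array([1.5, 0.0])
prior1 = 0.1                                   # true population proportion of class 1

def train_lda(n0, n1):
    x0 = rng.normal(mu0, 1.0, size=(n0, d))
    x1 = rng.normal(mu1, 1.0, size=(n1, d))
    m0, m1 = x0.mean(0), x1.mean(0)
    pooled = ((x0 - m0).T @ (x0 - m0) + (x1 - m1).T @ (x1 - m1)) / (n0 + n1 - 2)
    w = np.linalg.solve(pooled, m1 - m0)
    # the plug-in prior comes from the training class sizes, as in separate sampling
    thresh = 0.5 * w @ (m0 + m1) + np.log(n0 / n1)
    return lambda x: (x @ w > thresh).astype(int)

def test_error(clf, n=200000):
    y = (rng.uniform(size=n) < prior1).astype(int)
    x = np.where(y[:, None] == 1, rng.normal(mu1, 1.0, (n, d)), rng.normal(mu0, 1.0, (n, d)))
    return np.mean(clf(x) != y)

print("error, 50/50 training sizes  :", test_error(train_lda(50, 50)))
print("error, population-ratio sizes:", test_error(train_lda(90, 10)))
```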
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization
Glaser, Joshua I.; Zamft, Bradley M.; Church, George M.; Kording, Konrad P.
2015-01-01
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, “puzzle imaging,” that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples. PMID:26192446
Excreta Sampling as an Alternative to In Vivo Measurements at the Hanford Site.
Carbaugh, Eugene H; Antonio, Cheryl L; Lynch, Timothy P
2015-08-01
The capabilities of indirect radiobioassay by urine and fecal sample analysis were compared with the direct radiobioassay methods of whole body counting and lung counting for the most common radionuclides and inhalation exposure scenarios encountered by Hanford workers. Radionuclides addressed by in vivo measurement included 137Cs, 60Co, 154Eu, and 241Am as an indicator for plutonium mixtures. The same radionuclides were addressed using gamma energy analysis of urine samples, augmented by radiochemistry and alpha spectrometry methods for plutonium in urine and fecal samples. It was concluded that in vivo whole body counting and lung counting capability should be maintained at the Hanford Site for the foreseeable future; however, urine and fecal sample analysis could provide adequate, though degraded, monitoring capability for workers as a short-term alternative should in vivo capability be lost due to planned or unplanned circumstances.
Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S
2017-12-01
To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain the most optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
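The parametric and non-parametric calculations being compared reduce to mean ± 1.96 SD versus the 2.5th and 97.5th sample percentiles; a minimal sketch on a skewed synthetic data set (analyte values drawn from an assumed lognormal) shows how the two can diverge and how a transformation helps.

```python
# Sketch: parametric (mean +/- 1.96 SD) vs non-parametric (2.5th/97.5th percentile)
# reference intervals on a skewed synthetic analyte distribution.
import numpy as np

rng = np.random.default_rng(8)
values = rng.lognormal(mean=1.0, sigma=0.5, size=120)      # skewed "healthy" results

param = (values.mean() - 1.96 * values.std(ddof=1),
         values.mean() + 1.96 * values.std(ddof=1))
nonparam = tuple(np.percentile(values, [2.5, 97.5]))
log_param = tuple(np.exp(np.log(values).mean() + s * 1.96 * np.log(values).std(ddof=1))
                  for s in (-1, 1))                        # parametric after log-transform

print(f"parametric      : {param[0]:.2f} - {param[1]:.2f}")
print(f"non-parametric  : {nonparam[0]:.2f} - {nonparam[1]:.2f}")
print(f"log-transformed : {log_param[0]:.2f} - {log_param[1]:.2f}")
```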
Lucernoni, Federico; Rizzotto, Matteo; Tapparo, Federica; Capelli, Laura; Sironi, Selena; Busini, Valentina
2016-11-01
The work focuses on the principles for the design of a specific static hood and on the definition of an optimal sampling procedure for the assessment of landfill gas (LFG) surface emissions. This is carried out by means of computational fluid dynamics (CFD) simulations to investigate the fluid dynamics conditions of the hood. The study proves that understanding the fluid dynamic conditions is fundamental in order to understand the sampling results and correctly interpret the measured concentration values by relating them to a suitable LFG emission model, and therefore to estimate emission rates. For this reason, CFD is a useful tool for the design and evaluation of sampling systems, among others, to verify the fundamental hypotheses on which the mass balance for the sampling hood is defined. The procedure here discussed, which is specific for the case of the investigated landfill, can be generalized to be applied also to different scenarios, where hood sampling is involved. Copyright © 2016 Elsevier Ltd. All rights reserved.
Exploring revictimization risk in a community sample of sexual assault survivors.
Chu, Ann T; Deprince, Anne P; Mauss, Iris B
2014-01-01
Previous research points to links between risk detection (the ability to detect danger cues in various situations) and sexual revictimization in college women. Given important differences between college and community samples that may be relevant to revictimization risk (e.g., the complexity of trauma histories), the current study explored the link between risk detection and revictimization in a community sample of women. Community-recruited women (N = 94) reported on their trauma histories in a semistructured interview. In a laboratory session, participants listened to a dating scenario involving a woman and a man that culminated in sexual assault. Participants were instructed to press a button "when the man had gone too far." Unlike in college samples, revictimized community women (n = 47) did not differ in terms of risk detection response times from women with histories of no victimization (n = 10) or single victimization (n = 15). Data from this study point to the importance of examining revictimization in heterogeneous community samples where risk mechanisms may differ from college samples.
Finding a Second Sample of Life on Earth
NASA Astrophysics Data System (ADS)
Davies, P. C. W.; Lineweaver, Charles H.
2005-06-01
If life emerges readily under Earth-like conditions, the possibility arises of multiple terrestrial genesis events. We seek to quantify the probability of this scenario using estimates of the Archean bombardment rate and the fact that life established itself fairly rapidly on Earth once conditions became favorable. We find a significant likelihood that at least one more sample of life, referred to here as alien life, may have emerged on Earth, and could have coexisted with known life. Indeed, it is difficult to rule out the possibility of extant alien life. We offer some suggestions for how an alternative sample of life might be detected.
Petrology of lunar rocks and implication to lunar evolution
NASA Technical Reports Server (NTRS)
Ridley, W. I.
1976-01-01
Recent advances in lunar petrology, based on studies of lunar rock samples available through the Apollo program, are reviewed. Samples of bedrock from both maria and terra have been collected where micrometeorite impact penetrated the regolith and brought bedrock to the surface, but no in situ cores have been taken. Lunar petrogenesis and lunar thermal history supported by studies of the rock samples are discussed, and a tentative evolutionary scenario is constructed. Mare basalts, terra assemblages of breccias, soils, rocks, and regolith are subjected to elemental, mineralogical, and trace-element analyses, together with studies of texture, age, and isotopic composition. Probable sources of mare basalts are indicated.
Fries, Michael; Williams, Pamela R D; Ovesen, Jerald; Maier, Andrew
2018-04-19
Many petroleum-based products are used for degreasing and cleaning purposes during vehicle maintenance and repairs. Although prior studies have evaluated chemical exposures associated with this type of work, most of these have focused on gasoline and exhaust emissions, with few samples collected solely during the use of an aerosol cleaning product. In this case study, we assess the type of airborne exposures that would be expected from the typical use of an aerosol brake cleaner during vehicle repair work. Eight exposure scenarios were evaluated over a two-day study in which the benzene content of the brake cleaner and potential for dilution ventilation and air flow varied. Both short-term (15 min) and task-based (≥1 hr) charcoal tube samples were collected in the breathing zone and adjacent work area and analyzed for total hydrocarbons (THCs), toluene, and benzene. The majority of personal (N = 48) and area (N = 47) samples had detectable levels of THC and toluene, but no detections of benzene were found. For the personal short-term samples, average airborne concentrations ranged from 3.1-61.5 ppm (13.8-217.5 mg/m³) for THC and 2.2-44.0 ppm (8.2-162.5 mg/m³) for toluene, depending on the scenario. Compared to the personal short-term samples, average concentrations were generally 2 to 3 times lower for the personal task-based samples and 2 to 5 times lower for the area short-term samples. The highest exposures occurred when the garage bay doors were closed, floor fan was turned off, or greatest amount of brake cleaner was used. These findings add to the limited dataset on this topic and can be used to bound or approximate worker or consumer exposures from use of aerosol cleaning products with similar compositions and use patterns.
NASA Astrophysics Data System (ADS)
Carrigan, Charles R.; Sun, Yunwei
2014-03-01
The development of a technically sound approach to detecting the subsurface release of noble gas radionuclides is a critical component of the on-site inspection (OSI) protocol under the Comprehensive Nuclear Test Ban Treaty. In this context, we are investigating a variety of technical challenges that have a significant bearing on policy development and technical guidance regarding the detection of noble gases and the creation of a technically justifiable OSI concept of operation. The work focuses on optimizing the ability to capture radioactive noble gases subject to the constraints of possible OSI scenarios. This focus results from recognizing the difficulty of detecting gas releases in geologic environments—a lesson we learned previously from the non-proliferation experiment (NPE). Most of our evaluations of a sampling or transport issue necessarily involve computer simulations. This is partly due to the lack of OSI-relevant field data, such as that provided by the NPE, and partly a result of the ability of computer-based models to test a range of geologic and atmospheric scenarios far beyond what could ever be studied by field experiments, making this approach highly cost-effective. We review some highlights of the transport and sampling issues we have investigated and complete the discussion of these issues with a description of a preliminary design for subsurface sampling that addresses some of the sampling challenges discussed here.
The complexities of DNA transfer during a social setting.
Goray, Mariya; van Oorschot, Roland A H
2015-03-01
When questions arise about how a touch DNA sample from a specific individual got to where it was sampled, one has limited data available to assess the likelihood of specific transfer events within a proposed scenario. These data mainly concern the impact of some key variables affecting transfer and are derived from structured experiments. Here we consider the effects of unstructured social interactions on the transfer of touch DNA. Unscripted social exchanges of three individuals having a drink together while sitting at a table were video recorded, and DNA samples were collected and profiled from all relevant items touched during each sitting. Attempts were made to analyze when and how DNA was transferred from one object to another. The analyses demonstrate that simple, minor, everyday interactions involving only a few items can in some instances lead to detectable DNA being transferred among individuals and objects without them having contacted each other, through secondary and further transfer. Transfer was also observed to be bi-directional. Furthermore, DNA of unknown source on hands or objects can be transferred and interfere with the interpretation of profiles generated from targeted touched surfaces. This study provides further insight into the transfer of DNA that may be useful when considering the likelihood of alternate scenarios of how a DNA sample got to where it was found. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
van den Berge, M; Ozcanhan, G; Zijlstra, S; Lindenbergh, A; Sijen, T
2016-03-01
Especially when minute evidentiary traces are analysed, background cell material unrelated to the crime may contribute to detectable levels in the genetic analyses. To gain understanding on the composition of human cell material residing on surfaces contributing to background traces, we performed DNA and mRNA profiling on samplings of various items. Samples were selected by considering events contributing to cell material deposits in exemplary activities (e.g. dragging a person by the trouser ankles), and can be grouped as public objects, private samples, transfer-related samples and washing machine experiments. Results show that high DNA yields do not necessarily relate to an increased number of contributors or to the detection of other cell types than skin. Background cellular material may be found on any type of public or private item. When a major contributor can be deduced in DNA profiles from private items, this can be a different person than the owner of the item. Also when a specific activity is performed and the areas of physical contact are analysed, the "perpetrator" does not necessarily represent the major contributor in the STR profile. Washing machine experiments show that transfer and persistence during laundry is limited for DNA and cell type dependent for RNA. Skin conditions such as the presence of sebum or sweat can promote DNA transfer. Results of this study, which encompasses 549 samples, increase our understanding regarding the prevalence of human cell material in background and activity scenarios. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Kovačević, Mira; Burazin, Jelena; Pavlović, Hrvoje; Kopjar, Mirela; Piližota, Vlasta
2013-04-01
Minimally processed and refrigerated vegetables can be contaminated with Listeria species, including Listeria monocytogenes, due to extensive handling during processing or by cross contamination from the processing environment. The objective of this study was to examine the microbiological quality of ready-to-eat minimally processed and refrigerated vegetables from supermarkets in Osijek, Croatia. 100 samples of ready-to-eat vegetables collected from different supermarkets in Osijek, Croatia, were analyzed for the presence of Listeria species and Listeria monocytogenes. The collected samples were cut iceberg lettuces (24 samples), other leafy vegetables (11 samples), delicatessen salads (23 samples), cabbage salads (19 samples), and salads from mixed (17 samples) and root vegetables (6 samples). Listeria species were found in 20 samples (20%) and Listeria monocytogenes was detected in only 1 sample (1%) of cut red cabbage (less than 100 CFU/g). According to Croatian and EU microbiological criteria these results are satisfactory. However, the presence of Listeria species and Listeria monocytogenes indicates poor hygiene quality. The study showed that these products are often improperly labeled, since 24% of analyzed samples lacked information about shelf life, and 60% of samples lacked information about storage conditions. With regard to these facts, cold-chain disruption combined with extended use after the expiration date is a probable scenario. Therefore, the microbiological risk for consumers of ready-to-eat minimally processed and refrigerated vegetables is not completely eliminated.
Buffer AVL Alone Does Not Inactivate Ebola Virus in a Representative Clinical Sample Type.
Smither, Sophie J; Weller, Simon A; Phelps, Amanda; Eastaugh, Lin; Ngugi, Sarah; O'Brien, Lyn M; Steward, Jackie; Lonsdale, Steve G; Lever, Mark S
2015-10-01
Rapid inactivation of Ebola virus (EBOV) is crucial for high-throughput testing of clinical samples in low-resource, outbreak scenarios. The EBOV inactivation efficacy of Buffer AVL (Qiagen) was tested against marmoset serum (EBOV concentration of 1 × 10^8 50% tissue culture infective dose per milliliter [TCID50·ml^-1]) and murine blood (EBOV concentration of 1 × 10^7 TCID50·ml^-1) at 4:1 vol/vol buffer/sample ratios. Posttreatment cell culture and enzyme-linked immunosorbent assay (ELISA) analysis indicated that treatment with Buffer AVL did not inactivate EBOV in 67% of samples, indicating that Buffer AVL, which is designed for RNA extraction and not virus inactivation, cannot be guaranteed to inactivate EBOV in diagnostic samples. Murine blood samples treated with ethanol (4:1 [vol/vol] ethanol/sample) or heat (60°C for 15 min) also showed no viral inactivation in 67% or 100% of samples, respectively. However, combined Buffer AVL and ethanol or Buffer AVL and heat treatments showed total viral inactivation in 100% of samples tested. The Buffer AVL plus ethanol and Buffer AVL plus heat treatments were also shown not to affect the extraction of PCR-quality RNA from EBOV-spiked murine blood samples. © Crown copyright 2015.
ERIC Educational Resources Information Center
Hirca, Necati
2013-01-01
The objective of this study is to help pre-service teachers develop an awareness of first aid knowledge and skills related to electrical shock and safety through a scenario-based animation grounded in a constructivist 5E model. The sample of the study was composed of 78 (46 girls and 32 boys) pre-service classroom teachers from two faculties of…
CyberCIEGE Scenario Development Tool User’s Guide
2010-04-01
also required. Play some sample scenarios and browse the CyberCIEGE Encyclopedia to learn about the game. If game behavior is relevant to the...basic structure of the tool. Then follow the tutorial within this guide to learn some of the mechanics of using the SDT. SDT Layout Reusable... machine keeps crashing. 8) Trigger player feedback You should have noticed the computer was crashing, and maybe you noticed its availability went
TSCA Chemical Data Reporting Fact Sheet: Imported Articles
This fact sheet provides guidance and sample reporting scenarios on the reporting exemption for the import of a chemical substance as part of an article, for purposes of the Chemical Data Reporting (CDR) rule.
Aigner, Annette; Grittner, Ulrike; Becher, Heiko
2018-01-01
Low response rates in epidemiologic research potentially lead to the recruitment of a non-representative sample of controls in case-control studies. Problems in the unbiased estimation of odds ratios arise when characteristics causing the probability of participation are associated with exposure and outcome. This is a specific setting of selection bias and a realistic hazard in many case-control studies. This paper formally describes the problem and shows its potential extent, reviews existing approaches for bias adjustment applicable under certain conditions, compares and applies them. We focus on two scenarios: a characteristic C causing differential participation of controls is linked to the outcome through its association with risk factor E (scenario I), and C is additionally a genuine risk factor itself (scenario II). We further assume external data sources are available which provide an unbiased estimate of C in the underlying population. Given these scenarios, we (i) review available approaches and their performance in the setting of bias due to differential participation; (ii) describe two existing approaches to correct for the bias in both scenarios in more detail; (iii) present the magnitude of the resulting bias by simulation if the selection of a non-representative sample is ignored; and (iv) demonstrate the approaches' application via data from a case-control study on stroke. The bias of the effect measure for variable E in scenario I and C in scenario II can be large and should therefore be adjusted for in any analysis. It is positively associated with the difference in response rates between groups of the characteristic causing differential participation, and inversely associated with the total response rate in the controls. Adjustment in a standard logistic regression framework is possible in both scenarios if the population distribution of the characteristic causing differential participation is known or can be approximated well.
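One way to picture the scenario I adjustment is to reweight the control series so that the distribution of the participation-driving characteristic C matches the external population estimate before computing the odds ratio for exposure E. The sketch below is a simplified illustration with hypothetical counts, not the exact correction procedure evaluated in the study:

```python
import numpy as np

# Hypothetical 2x2 counts per stratum of C: controls over-represent C=1 because
# C drives participation. counts[c][e] = number with characteristic C=c, exposure E=e.
controls = np.array([[300.0, 60.0],    # C=0: unexposed, exposed
                     [200.0, 100.0]])  # C=1: unexposed, exposed
cases = np.array([[150.0, 70.0],
                  [110.0, 120.0]])

# External (population) distribution of C in the source population of the controls
p_pop = np.array([0.75, 0.25])                 # assumed known without bias
p_sample = controls.sum(axis=1) / controls.sum()

# Reweight controls so their C distribution matches the population distribution
weights = p_pop / p_sample
controls_w = controls * weights[:, None]

def odds_ratio(case_tab, control_tab):
    a, b = case_tab[:, 1].sum(), case_tab[:, 0].sum()        # exposed / unexposed cases
    c, d = control_tab[:, 1].sum(), control_tab[:, 0].sum()  # exposed / unexposed controls
    return (a * d) / (b * c)

print("naive OR:    ", round(odds_ratio(cases, controls), 2))
print("weighted OR: ", round(odds_ratio(cases, controls_w), 2))
```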
Rutten, Niels; Gonzales, José L.; Elbers, Armin R. W.; Velthuis, Annet G. J.
2012-01-01
Background As low pathogenic avian influenza viruses can mutate into highly pathogenic viruses, the Dutch poultry sector implemented a surveillance system for low pathogenic avian influenza (LPAI) based on blood samples. It has been suggested that egg yolk samples could be collected instead of blood samples to survey egg layer farms. To support future decision making about AI surveillance, economic criteria are important. Therefore a cost analysis was performed on systems that use either blood or eggs as sampled material. Methodology/Principal Findings The effectiveness of surveillance using egg or blood samples was evaluated using scenario tree models. Then an economic model was developed that calculates the total costs for eight surveillance systems that have equal effectiveness. The model considers costs for sampling, sample preparation, sample transport, testing, communication of test results and for the confirmation test on false positive results. The surveillance systems varied in sampled material (eggs or blood), sampling location (farm or packing station) and location of sample preparation (laboratory or packing station). It is shown that a hypothetical system in which eggs are sampled at the packing station and samples are prepared in a laboratory had the lowest total costs (i.e., €273,393 per year). Compared to this, a hypothetical system in which eggs are sampled at the farm and samples are prepared at a laboratory, and the currently implemented system in which blood is sampled at the farm and samples are prepared at a laboratory, have 6% and 39% higher costs, respectively. Conclusions/Significance This study shows that surveillance for avian influenza on egg yolk samples can be done at lower costs than surveillance based on blood samples. The model can be used in future comparisons of surveillance systems for different pathogens and hazards. PMID:22523543
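As a rough illustration of the kind of cost accounting such a model performs, the sketch below sums hypothetical per-sample cost components for three surveillance systems assumed to have equal effectiveness; the component costs and sample numbers are invented for illustration and are not the figures from the study:

```python
# Hypothetical per-sample cost components (EUR) and annual sample numbers.
# All values are illustrative only, not taken from the study.
systems = {
    "eggs @ packing station, lab prep": dict(n=30000, sampling=1.0, prep=2.0, transport=0.5,
                                             test=4.5, comms=0.3, false_pos=0.2),
    "eggs @ farm, lab prep":            dict(n=30000, sampling=1.6, prep=2.0, transport=0.8,
                                             test=4.5, comms=0.3, false_pos=0.2),
    "blood @ farm, lab prep":           dict(n=30000, sampling=3.0, prep=2.5, transport=0.8,
                                             test=5.0, comms=0.3, false_pos=0.2),
}

for name, s in systems.items():
    per_sample = s["sampling"] + s["prep"] + s["transport"] + s["test"] + s["comms"] + s["false_pos"]
    print(f"{name}: total ~ EUR {s['n'] * per_sample:,.0f} per year")
```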
Xun-Ping, W; An, Z
2017-07-27
Objective To optimize and simplify the survey method of Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which took plant abundance as an auxiliary variable, was explored in an experimental study in a 50 m×50 m plot in a marshland in the Poyang Lake region. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required numbers of optimal sampling points for each layer were calculated through the Hammond McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison study was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results The method (SOPA) proposed in this study had the minimal absolute error of 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.
A Survey of Extended H_{2} Emission Towards a Sample of Massive YSOs
NASA Astrophysics Data System (ADS)
Navarete, F.; Damineli, A.; Barbosa, C. L.; Blum, R. D.
2014-10-01
Very few massive stars in early formation stages have been clearly identified in the Milky Way and, moreover, the formation processes of such objects lack observational evidence. Two theories predict the formation of massive stars: i) by the merging of low-mass stars or ii) by an accretion disk. One of the most prominent pieces of evidence for the accretion scenario is the presence of bipolar outflows associated with the central sources. Those structures were found in both intermediate- and low-mass YSOs, but there was no evidence for associations with MYSOs. Based on that, a survey was designed to investigate the earliest stages of massive star formation through the molecular hydrogen transition at 2.12μm. A sample of ~300 MYSO candidates was selected from the Red MSX Source program and the sources were observed with the IR cameras Spartan (SOAR, Chile) and WIRCam (CFHT, Hawaii). Extended H_{2} emission was found toward 55% of the sample and 30% of the positive detections (50 sources) have bipolar morphology, suggesting collimated outflows. These results support the accretion scenario, since the merging of low-mass stars would not produce jet-like structures.
An Analysis on the Detection of Biological Contaminants Aboard Aircraft
Hwang, Grace M.; DiCarlo, Anthony A.; Lin, Gene C.
2011-01-01
The spread of infectious disease via commercial airliner travel is a significant and realistic threat. To shed some light on the feasibility of detecting airborne pathogens, a sensor integration study has been conducted and computational investigations of contaminant transport in an aircraft cabin have been performed. Our study took into consideration sensor sensitivity as well as the time-to-answer, size, weight and the power of best available commercial off-the-shelf (COTS) devices. We conducted computational fluid dynamics simulations to investigate three types of scenarios: (1) nominal breathing (up to 20 breaths per minute) and coughing (20 times per hour); (2) nominal breathing and sneezing (4 times per hour); and (3) nominal breathing only. Each scenario was implemented with one or seven infectious passengers expelling air and sneezes or coughs at the stated frequencies. Scenario 2 was implemented with two additional cases in which one infectious passenger expelled 20 and 50 sneezes per hour, respectively. All computations were based on 90 minutes of sampling using specifications from a COTS aerosol collector and biosensor. Only biosensors that could provide an answer in under 20 minutes without any manual preparation steps were included. The principal finding was that the steady-state bacteria concentrations in aircraft would be high enough to be detected in the case where seven infectious passengers are exhaling under scenarios 1 and 2 and where one infectious passenger is actively exhaling in scenario 2. Breathing alone failed to generate sufficient bacterial particles for detection, and none of the scenarios generated sufficient viral particles for detection to be feasible. These results suggest that more sensitive sensors than the COTS devices currently available and/or sampling of individual passengers would be needed for the detection of bacteria and viruses in aircraft. PMID:21264266
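The feasibility question reduces to whether a collector running for 90 minutes would accumulate enough particles to exceed a biosensor's limit of detection. A back-of-envelope sketch, with every parameter an illustrative assumption rather than a value from the study:

```python
# Rough feasibility check: particles collected vs. sensor limit of detection.
# All numbers below are illustrative assumptions, not values from the study.
cabin_conc_bacteria = 5e3   # steady-state airborne bacteria (particles per m^3)
cabin_conc_virus    = 5e1   # steady-state airborne virions (particles per m^3)
flow_rate           = 0.1   # collector air flow (m^3 per minute)
sample_time         = 90    # minutes of sampling
collection_eff      = 0.5   # fraction of aspirated particles retained
sensor_lod          = 1e4   # particles required for a positive call

for label, conc in [("bacteria", cabin_conc_bacteria), ("virus", cabin_conc_virus)]:
    collected = conc * flow_rate * sample_time * collection_eff
    verdict = "detectable" if collected >= sensor_lod else "below LOD"
    print(f"{label}: ~{collected:.0f} particles collected -> {verdict}")
```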
2008-09-01
related scenarios related to US armed forces around the world. In the civilian setting, complete decontamination is the only accepted criterion before a ... observation, a door for sample introduction, and four ports on the front panel for sensor placement. All glass surfaces were covered when used with CD gas ...
1997-02-01
application with a strong resemblance to a video game, concern has been raised that prior video game experience might have a moderating effect on scores. Much...such as spatial ability. The effects of computer or video game experience on work sample scores have not been systematically investigated. The purpose...of this study was to evaluate the incremental validity of prior video game experience over that of general aptitude as a predictor of work sample test
NASA Astrophysics Data System (ADS)
Cescutti, G.; Chiappini, C.
2014-05-01
Context. Thanks to the heroic observational campaigns carried out in recent years we now have large samples of metal-poor stars for which measurements of detailed abundances exist. In particular, large samples of stars with metallicities -5 < [Fe/H] <-1 and measured abundances of Sr, Ba, Y, and Eu are now available. These data hold important clues on the nature of the contribution of the first stellar generations to the enrichment of our Galaxy. Aims: We aim to explain the scatter in Sr, Ba, Y, and Eu abundance ratio diagrams unveiled by the metal-poor halo stars. Methods: We computed inhomogeneous chemical evolution models for the Galactic halo assuming different scenarios for the r-process site: the electron-capture (EC) supernovae and the magnetorotationally driven (MRD) supernovae scenarios. We also considered models with and without the contribution of fast-rotating massive stars (spinstars) to an early enrichment by the s-process. A detailed comparison with the now large sample of stars with measured abundances of Sr, Ba, Y, Eu, and Fe is provided (both in terms of scatter plots and number distributions for several abundance ratios). Results: The scatter observed in these abundance ratios of the very metal-poor stars (with [Fe/H] <-2.5) can be explained by combining the s-process production in spinstars, and the r-process contribution coming from massive stars. For the r-process we have developed models for both the EC and the MRD scenarios that match the observations. Conclusions: With the present observational and theoretical constraints we cannot distinguish between the EC and the MRD scenarios in the Galactic halo. Independently of the r-process scenarios adopted, the production of elements by an s-process in spinstars is needed to reproduce the spread in abundances of the light neutron capture elements (Sr and Y) over heavy neutron capture elements (Ba and Eu). We provide a way to test our suggestions by means of the distribution of the Ba isotopic ratios in a [Ba/Fe] or [Sr/Ba] vs. [Fe/H] diagram. Appendix A is available in electronic form at http://www.aanda.org
The price elasticity of demand for heroin: matched longitudinal and experimental evidence
Olmstead, Todd A.; Alessi, Sheila M.; Kline, Brendan; Pacula, Rosalie Liccardo; Petry, Nancy M.
2015-01-01
This paper reports estimates of the price elasticity of demand for heroin based on a newly constructed dataset. The dataset has two matched components concerning the same sample of regular heroin users: longitudinal information about real-world heroin demand (actual price and actual quantity at daily intervals for each heroin user in the sample) and experimental information about laboratory heroin demand (elicited by presenting the same heroin users with scenarios in a laboratory setting). Two empirical strategies are used to estimate the price elasticity of demand for heroin. The first strategy exploits the idiosyncratic variation in the price experienced by a heroin user over time that occurs in markets for illegal drugs. The second strategy exploits the experimentally-induced variation in price experienced by a heroin user across experimental scenarios. Both empirical strategies result in the estimate that the conditional price elasticity of demand for heroin is approximately −0.80. PMID:25702687
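The constant-elasticity specification behind such estimates is a log-log regression: log quantity on log price, with the slope read off as the elasticity. A minimal sketch on simulated daily purchase data (all values invented):

```python
import numpy as np

rng = np.random.default_rng(0)
true_elasticity = -0.80

# Simulated daily observations for one user: price varies, quantity responds.
price = rng.lognormal(mean=np.log(20), sigma=0.3, size=365)
quantity = np.exp(1.5 + true_elasticity * np.log(price) + rng.normal(0, 0.2, size=365))

# Constant-elasticity model: log(q) = a + b*log(p); the slope b is the elasticity.
slope, intercept = np.polyfit(np.log(price), np.log(quantity), deg=1)
print(f"estimated price elasticity: {slope:.2f}")
```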
Asteroid exploration and utilization
NASA Technical Reports Server (NTRS)
Radovich, Brian M.; Carlson, Alan E.; Date, Medha D.; Duarte, Manny G.; Erian, Neil F.; Gafka, George K.; Kappler, Peter H.; Patano, Scott J.; Perez, Martin; Ponce, Edgar
1992-01-01
The Earth is nearing depletion of its natural resources at a time when human beings are rapidly expanding the frontiers of space. The resources possessed by asteroids have enormous potential for aiding and enhancing human space exploration as well as life on Earth. Project STONER (Systematic Transfer of Near Earth Resources) is based on mining an asteroid and transporting raw materials back to Earth. The asteroid explorer/sample return mission is designed in the context of both scenarios and is the first phase of a long range plan for humans to utilize asteroid resources. Project STONER is divided into two parts: asteroid selection and explorer spacecraft design. The spacecraft design team is responsible for the selection and integration of the subsystems: GNC, communications, automation, propulsion, power, structures, thermal systems, scientific instruments, and mechanisms used on the surface to retrieve and store asteroid regolith. The sample return mission scenario consists of eight primary phases that are critical to the mission.
Sampling of temporal networks: Methods and biases
NASA Astrophysics Data System (ADS)
Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter
2017-11-01
Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions on hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
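Uniform node sampling, the strategy that performed best in most scenarios, amounts to keeping a random subset of nodes and only the contacts between them. A minimal sketch on a toy temporal edge list (the contact data are invented):

```python
import random

# A temporal network as a list of (time, node_u, node_v) contacts.
contacts = [(1, "a", "b"), (2, "b", "c"), (3, "a", "c"), (4, "c", "d"), (5, "d", "e")]

def uniform_node_sample(contacts, fraction, seed=0):
    """Keep a uniform random fraction of nodes and the contacts between them."""
    rng = random.Random(seed)
    nodes = {n for _, u, v in contacts for n in (u, v)}
    kept = set(rng.sample(sorted(nodes), k=max(1, int(fraction * len(nodes)))))
    return [c for c in contacts if c[1] in kept and c[2] in kept]

print(uniform_node_sample(contacts, fraction=0.6))
```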
NASA Technical Reports Server (NTRS)
Hudson, Nicolas; Lin, Ying; Barengoltz, Jack
2010-01-01
A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario where multiple core samples would be acquired using a rotary percussive coring tool, deployed from an arm on a MER-class rover, is analyzed. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps, and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, as well as making the analysis tractable by breaking the process down into small analyzable steps.
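A minimal sketch of the expected-count propagation described above: expected VEM counts per component are multiplied by a transport matrix at each discrete sampling step. The components, matrix entries, and initial counts are hypothetical placeholders, not values from the analysis:

```python
import numpy as np

# Components: 0 = tool bit, 1 = sample container, 2 = sample itself.
# T[i, j] = probability that a VEM on component i ends up on component j
# during one sampling step (rows sum to 1). Values are illustrative only.
T = np.array([[0.90, 0.07, 0.03],
              [0.02, 0.95, 0.03],
              [0.00, 0.00, 1.00]])    # VEMs already in the sample stay there

expected = np.array([10.0, 2.0, 0.0])  # assumed initial expected VEM counts

for step in range(5):                   # five discrete sampling steps
    expected = expected @ T             # propagate expected counts one step

print("expected VEMs on [tool, container, sample]:", np.round(expected, 3))
print("expected VEMs entering the sample chain:", round(expected[2], 3))
```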
NASA Technical Reports Server (NTRS)
Franz, H. B.; Mahaffy, P. R.; Stern, J.; Archer, P., Jr.; Conrad, P.; Eigenbrode, J.; Freissinet, C.; Glavin, D.; Grotzinger, J. P.; Jones, J.;
2015-01-01
In October 2014, the Mars Science Laboratory (MSL) "Curiosity" rover drilled into the sediment at the base of Mount Sharp in a location named Confidence Hills (CH). CH marked the fifth sample processed by the Sample Analysis at Mars (SAM) instrument suite since Curiosity arrived in Gale Crater, with previous analyses performed at Rocknest (RN), John Klein (JK), Cumberland (CB), and Windjana (WJ). Evolved gas analysis (EGA) of all samples has indicated H2O as well as O-, C- and S-bearing phases in the samples, often at abundances that would be below the detection limit of the CheMin instrument. By examining the temperatures at which gases are evolved from samples, SAM EGA data can help provide clues to the mineralogy of volatile-bearing phases when their identities are unclear to CheMin. SAM may also detect gases evolved from amorphous material in solid samples, which is not suitable for analysis by CheMin. Finally, the isotopic composition of these gases may suggest possible formation scenarios and relationships between phases. We will discuss C isotope ratios of CO2 evolved from the CH sample as measured with SAM's quadrupole mass spectrometer (QMS) and draw comparisons to samples previously analyzed by SAM.
Rückerl, I; Muhterem-Uyar, M; Muri-Klinger, S; Wagner, K-H; Wagner, M; Stessl, B
2014-10-17
The aim of this study was to analyze the changing patterns of Listeria monocytogenes contamination in a cheese processing facility manufacturing a wide range of ready-to-eat products. Characterization of L. monocytogenes isolates included genotyping by pulsed-field gel electrophoresis (PFGE) and multi-locus sequence typing (MLST). Disinfectant-susceptibility tests and the assessment of L. monocytogenes survival in fresh cheese were also conducted. During the sampling period between 2010 and 2013, a total of 1284 environmental samples were investigated. Overall occurrence rates of Listeria spp. and L. monocytogenes were 21.9% and 19.5%, respectively. Identical L. monocytogenes genotypes were found in the food processing environment (FPE), raw materials and in products. Interventions after the sampling events changed contamination scenarios substantially. The high diversity of globally, widely distributed L. monocytogenes genotypes was reduced by identifying the major sources of contamination. Although susceptible to a broad range of disinfectants and cleaners, one dominant L. monocytogenes sequence type (ST) 5 could not be eradicated from drains and floors. Significantly, intense humidity and steam could be observed in all rooms and water residues were visible on floors due to increased cleaning strategies. This could explain the high L. monocytogenes contamination of the FPE (drains, shoes and floors) throughout the study (15.8%). The outcome of a challenge experiment in fresh cheese showed that L. monocytogenes could survive after 14 days of storage at insufficient cooling temperatures (8 and 16°C). All efforts to reduce L. monocytogenes environmental contamination eventually led to a transition from dynamic to stable contamination scenarios. Consequently, implementation of systematic environmental monitoring via in-house systems should either aim for total avoidance of FPE colonization, or, as a first step, aim to restrict L. monocytogenes to sites where contamination of the processed product is unlikely. Drying of surfaces after cleaning is highly recommended to facilitate L. monocytogenes eradication. Copyright © 2014 Elsevier B.V. All rights reserved.
Simulation on Poisson and negative binomial models of count road accident modeling
NASA Astrophysics Data System (ADS)
Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.
2016-11-01
Accident count data have often been shown to exhibit overdispersion. On the other hand, the data might contain zero counts (excess zeros). A simulation study was conducted to create scenarios in which an accident happens at a T-junction, with the assumption that the dependent variable of the generated data follows a certain distribution, namely the Poisson or negative binomial distribution, with sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting a Poisson regression, a negative binomial regression, and a hurdle negative binomial model to the simulated data. Model fit was compared, and the simulation results show that, for each sample size, not all models fit the data well even when the data were generated from their own distribution, especially when the sample size is larger. Furthermore, larger sample sizes produce more zero accident counts in the dataset.
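A small sketch of the core comparison, assuming simulated overdispersed counts: the sample variance exceeds the mean, and a negative binomial log-likelihood beats a Poisson fit. The parameter values are arbitrary and the negative binomial is evaluated at its simulation parameters rather than refit, so this is only an illustration of the idea:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate overdispersed accident counts: negative binomial with mean 2, dispersion r = 1.5.
r, mean = 1.5, 2.0
p = r / (r + mean)
y = rng.negative_binomial(r, p, size=200)
print("mean:", y.mean().round(2), "variance:", y.var().round(2))  # variance > mean

# Poisson fit (the MLE of the rate is the sample mean)
ll_pois = stats.poisson.logpmf(y, mu=y.mean()).sum()

# Negative binomial evaluated at the simulation parameters (a crude stand-in for an MLE fit)
ll_nb = stats.nbinom.logpmf(y, r, p).sum()

print("log-likelihood Poisson:          ", round(ll_pois, 1))
print("log-likelihood negative binomial:", round(ll_nb, 1))
```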
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
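For orientation, the fixed parallel-group benchmark can be sized on the prevalence-weighted average of the two subgroup effects. A minimal sketch for a normally distributed outcome with unit standard deviation; the effect sizes and prevalence below are illustrative, not the scenarios evaluated in the paper:

```python
from scipy.stats import norm

def fixed_design_n(delta_pos, delta_neg, prevalence, alpha=0.05, power=0.9):
    """Per-group n for a fixed parallel-group trial (normal outcome, SD = 1),
    powering on the prevalence-weighted average of the subgroup effects."""
    delta = prevalence * delta_pos + (1 - prevalence) * delta_neg
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z / delta) ** 2

# Illustrative scenario: strong effect in marker-positives, weak in marker-negatives.
print(round(fixed_design_n(delta_pos=0.5, delta_neg=0.2, prevalence=0.4)))
```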
Yang, Bing; Zhou, Lingli; Xue, Nandong; Li, Fasheng; Wu, Guanglong; Ding, Qiong; Yan, Yunzhong; Liu, Bo
2013-10-01
Scarce data are available so far on emissions in a given scenario for excavation and thermal desorption, a common practice for soils contaminated with polychlorinated biphenyls (PCBs). As part of the China action "Cleanup Plan for PCBs Burial Sites", this study roughly estimated PCB emissions in the scenario for a capacitor-burial site. The concentrations of total PCBs (22 congeners) in soils were in the range of 2.1-16,000 μg/g with a mean of 2300 μg/g, within the same order of magnitude as the highest values obtained in various PCB-contaminated sites. Only six congeners belonging to Di-, Tri-, and Tetra-CBs were observed above the limits of detection in air samples in the scenario, which can be partially estimated by the USEPA air emission model. Comparing the concentrations and composition profiles of PCBs in the soil and air samples further indicated a leaked source of commercial PCB formulations of trichlorobiphenyl (China PCB no. 1). The measures taken, if any, to mitigate the volatilization and movement of PCBs and to minimize worker exposure were discussed with a view to improving the excavation practice. Copyright © 2013 Elsevier Inc. All rights reserved.
Schiwy, Sabrina; Bräunig, Jennifer; Alert, Henriette; Hollert, Henner; Keiter, Steffen H
2015-11-01
The European Water Framework Directive aims to achieve a good ecological and chemical status in surface waters by 2015. Sediment toxicology plays a major role in this intention, as sediments can act as a secondary source of pollution. In order to fulfill this legal obligation, there is an urgent need to develop whole-sediment exposure protocols, since sediment contact assays represent the most realistic scenario to simulate in situ exposure conditions. Therefore, in the present study, a vertebrate sediment contact assay to determine aryl hydrocarbon receptor (AhR)-mediated activity of particle-bound pollutants was developed. Furthermore, the activity and the expression of the CYP1 family in early life stages of zebrafish after exposure to freeze-dried sediment samples were investigated. In order to validate the developed protocol, effects of β-naphthoflavone and three selected sediments on zebrafish embryos were investigated. Results clearly documented AhR-mediated toxicity after exposure to β-naphthoflavone (β-NF) and to the sediment from the Vering canal. Upregulation of mRNA levels was observed for all investigated sediment samples. The highest levels of all investigated cyp genes (cyp1a, cyp1b1, cyp1c1, and cyp1c2) were recorded after exposure to the sediment sample of the Vering canal. In conclusion, the newly developed sediment contact assay can be recommended for the investigation of dioxin-like activities of single substances and the bioavailable fraction of complex environmental samples. Moreover, the exposure of whole zebrafish embryos to native (freeze-dried) sediment samples represents a highly realistic and ecologically relevant exposure scenario.
Gärtner, Fania R; de Bekker-Grob, Esther W; Stiggelbout, Anne M; Rijnders, Marlies E; Freeman, Liv M; Middeldorp, Johanna M; Bloemenkamp, Kitty W M; de Miranda, Esteriek; van den Akker-van Marle, M Elske
2015-09-01
The aim of this study was to calculate preference weights for the Labor and Delivery Index (LADY-X) to make it suitable as a utility measure for perinatal care studies. In an online discrete choice experiment, 18 pairs of hypothetical scenarios were presented to respondents, from which they had to choose a preferred option. The scenarios describe the birth experience in terms of the seven LADY-X attributes. A D-efficient discrete choice experiment design with priors based on a small sample (N = 110) was applied. Two samples were gathered, women who had recently given birth and subjects from the general population. Both samples were analyzed separately using a panel mixed logit (MMNL) model. Using the panel mixed multinomial logit (MMNL) model results and accounting for preference heterogeneity, we calculated the average preference weights for LADY-X attribute levels. These were transformed to represent a utility score between 0 and 1, with 0 representing the worst and 1 representing the best birth experience. In total, 1097 women who had recently given birth and 367 subjects from the general population participated. Greater value was placed on differences between bottom and middle attribute levels than on differences between middle and top levels. The attributes that resulted in larger utility increases than the other attributes were "feeling of safety" in the sample of women who had recently given birth and "feeling of safety" and "availability of professionals" in the general population sample. By using the derived preference weights, LADY-X has the potential to be used as a utility measure for perinatal (cost-) effectiveness studies. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Milliren, Carly E; Evans, Clare R; Richmond, Tracy K; Dunn, Erin C
2018-06-06
Recent advances in multilevel modeling allow for modeling non-hierarchical levels (e.g., youth in non-nested schools and neighborhoods) using cross-classified multilevel models (CCMM). Current practice is to cluster samples from one context (e.g., schools) and utilize the observations however they are distributed from the second context (e.g., neighborhoods). However, it is unknown whether an uneven distribution of sample size across these contexts leads to incorrect estimates of random effects in CCMMs. Using the school and neighborhood data structure in Add Health, we examined the effect of neighborhood sample size imbalance on the estimation of variance parameters in models predicting BMI. We differentially assigned students from a given school to neighborhoods within that school's catchment area using three scenarios of (im)balance. 1000 random datasets were simulated for each of five combinations of school- and neighborhood-level variance and imbalance scenarios, for a total of 15,000 simulated data sets. For each simulation, we calculated 95% CIs for the variance parameters to determine whether the true simulated variance fell within the interval. Across all simulations, the "true" school and neighborhood variance parameters were estimated 93-96% of the time. Only 5% of models failed to capture neighborhood variance; 6% failed to capture school variance. These results suggest that there is no systematic bias in the ability of CCMM to capture the true variance parameters regardless of the distribution of students across neighborhoods. Ongoing efforts to use CCMM are warranted and can proceed without concern for the sample imbalance across contexts. Copyright © 2018 Elsevier Ltd. All rights reserved.
Orphan therapies: making best use of postmarket data.
Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling
2014-08-01
Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.
Sample size for post-marketing safety studies based on historical controls.
Wu, Yu-te; Makuch, Robert W
2010-08-01
As part of a drug's entire life cycle, post-marketing studies play an important part in the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is our outcome of interest. Performance of the exact method is compared to its approximate large-sample counterpart. The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design satisfies the advantages and rationale of the two-group design with smaller sample sizes generally required. 2010 John Wiley & Sons, Ltd.
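The paper's hybrid-design formulas are not reproduced here, but the underlying idea of an exact Poisson calculation for rare events can be sketched as a direct search: find the smallest cohort whose exact one-sided Poisson test reaches the target power. All rates and the follow-up time below are illustrative assumptions:

```python
from scipy.stats import poisson

def exact_poisson_n(rate0, rate_ratio, follow_up, alpha=0.05, power=0.8, n_max=100000):
    """Smallest cohort size n for which an exact one-sided Poisson test of
    H0: rate = rate0 against H1: rate = rate0 * rate_ratio reaches the target power."""
    for n in range(10, n_max, 10):
        mu0 = n * follow_up * rate0
        c = int(poisson.ppf(1 - alpha, mu0)) + 1       # critical event count
        while poisson.sf(c - 1, mu0) > alpha:           # guard against ppf edge cases
            c += 1
        achieved = poisson.sf(c - 1, n * follow_up * rate0 * rate_ratio)
        if achieved >= power:
            return n, c
    return None

# Illustrative only: background rate of 1 event per 1000 person-years,
# a 3-fold excess under the alternative, and 2 years of follow-up per subject.
print(exact_poisson_n(rate0=0.001, rate_ratio=3.0, follow_up=2.0))
```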
Defendant mental illness and juror decision-making: A comparison of sample types.
Mossière, Annik; Maeder, Evelyn M
2015-01-01
Two studies were conducted with separate student and community samples to explore the effect of sample types and the influence of defendant mental illness on juror decision-making. Following the completion of a pre-trial questionnaire in which jurors' attitudes towards mental illness were assessed, participants were provided with a robbery trial transcript, wherein the mental illness of the defendant was manipulated. Participants then answered a questionnaire to assess their knowledge of the scenario, their verdict, verdict confidence, and sentencing decision. Limited relationships were found between the variables in both Study 1 and Study 2. Neither attitude ratings nor mental illness type had a significant effect on juror decisions. Samples differed in terms of the paths through which juror decisions were achieved. Findings suggest that sample type may be particularly relevant for this topic of study, and that future research is required on legal proceedings for cases involving a defendant with a mental illness. Copyright © 2015 Elsevier Ltd. All rights reserved.
Aerial surveillance based on hierarchical object classification for ground target detection
NASA Astrophysics Data System (ADS)
Vázquez-Cervantes, Alberto; García-Huerta, Juan-Manuel; Hernández-Díaz, Teresa; Soto-Cajiga, J. A.; Jiménez-Hernández, Hugo
2015-03-01
Unmanned aerial vehicles have become important in surveillance applications due to their flexibility and ability to inspect and move between different regions of interest. The instrumentation and autonomy of these vehicles have increased; for example, the camera sensor is now integrated. Mounted cameras allow the flexibility to monitor several regions of interest by displacing and changing the camera view. A common task performed by this kind of vehicle is object localization and tracking. This work presents a novel hierarchical algorithm to detect and locate objects. The algorithm is based on a detection-by-example approach; that is, the target evidence is provided at the beginning of the vehicle's route. Afterwards, the vehicle inspects the scenario, detecting all similar objects through UTM-GPS coordinate references. The detection process consists of sampling information from the target object. The sampling process is encoded in a hierarchical tree with different sampling densities. The coding space corresponds to a huge binary space, in which properties such as independence and associative operators are defined to construct a relation between the target object and a set of selected features. Different sampling densities are used to discriminate from general to particular features that correspond to the target. The hierarchy is used as a way to adapt the complexity of the algorithm to the optimized battery duty cycle of the aerial device. Finally, this approach is tested in several outdoor scenarios, showing that the hierarchical algorithm works efficiently under several conditions.
Cognitive deficits are associated with poorer simulated driving in older adults with heart failure
2013-01-01
Background Cognitive impairment is prevalent in older adults with heart failure (HF) and associated with reduced functional independence. HF patients appear at risk for reduced driving ability, as past work in other medical samples has shown cognitive dysfunction to be an important contributor to driving performance. The current study examined whether cognitive dysfunction was independently associated with reduced driving simulation performance in a sample of HF patients. Methods 18 persons with HF (mean age = 67.72 years, SD = 8.56) completed an echocardiogram and a brief neuropsychological test battery assessing global cognitive function, attention/executive function, memory and motor function. All participants then completed the Kent Multidimensional Assessment Driving Simulation (K-MADS), a driving simulator scenario with good psychometric properties. Results The sample exhibited an average Mini Mental State Examination (MMSE) score of 27.83 (SD = 2.09). Independent sample t-tests showed that HF patients performed worse than healthy adults on the driving simulation scenario. Finally, partial correlations showed that worse attention/executive and motor function were independently associated with poorer driving simulation performance across several indices reflective of driving ability (i.e., centerline crossings, number of collisions, % of time over the speed limit, among others). Conclusion The current findings showed that reduced cognitive function was associated with poor simulated driving performance in older adults with HF. If replicated using behind-the-wheel testing, HF patients may be at elevated risk for unsafe driving and routine driving evaluations in this population may be warranted. PMID:24499466
NASA Astrophysics Data System (ADS)
Benassai, Mario; Cotronei, Vittorio
The Mice Drawer System (MDS) is a scientific payload developed by the Italian Space Agency (ASI); it hosted 6 mice on the International Space Station (ISS) and re-entered on November 28, 2009, with STS-129 at KSC. Linked to the MDS experiment, a Tissue Sharing Program (TSP) was developed in order to make available to 16 Payload Investigators (PIs) (located in the USA, Canada, the EU (Italy, Belgium and Germany) and Japan) the biological samples coming from the mice. ALTEC SpA (a PPP owned by ASI, TAS-I and local institutions) was responsible for supporting the logistics aspects of the MDS samples for the first MDS mission, in the frame of the Italian Space Agency (ASI) OSMA program (OSteoporosis and Muscle Atrophy). The TSP resulted in a complex scenario, as ASI progressively extended the original OSMA Team also to researchers from other ASI programs and from other Agencies (ESA, NASA, JAXA). The science coordination was performed by the University of Genova (UNIGE). ALTEC managed the whole logistics process with the support of a specialized freight forwarder agent during all shipping operation phases. ALTEC formalized all the steps from the handover of samples by the dissection Team to the packaging and shipping process in a dedicated procedure. ALTEC approached all the work in a structured way, performing: a study of the aspects connected to international shipments of biological samples; cooperative work with UNIGE/ASI/PIs to identify all the needs of the various researchers and their compatibility; a complete revision and integration of shipment requirements (addresses, temperatures, samples, materials and so on); a complete definition of the final shipment scenario in terms of boxes, content, refrigerant and requirements; a formal approach to identification and selection of the most suited and specialized freight forwarder; a clear identification of all the processes from sample dissection by the PI Team, sample processing, freezing, tube preparation, labeling, packaging, shipment and inspection at receiving sites, by introducing and using forms and procedures; a clear identification of roles, by introducing a board composed of NASA, ASI, PIs and ALTEC to drive and certify the critical points; support to the Team also for accessory aspects like badging or other kinds of shipments; support to meetings and teleconferences to cooperate in the definition of the mission scenario, which also resulted in the need to split the teams between KSC and Dryden; support to dry run tests and meetings to tune the procedures and activities to the most refined detail; support at both landing sites, with final shipment from KSC to all the final destinations; and support to data collection about the status of the various incoming inspections at receiving sites. All the above mentioned activities will be detailed in the paper.
Introduction to Sample Plan Package for Farms
An example of a completed and self-certified Tier I Qualified Facility SPCC Plan using the template found in Appendix G of the SPCC rule (40 CFR part 112). This example illustrates how to develop an SPCC Plan using a farm scenario.
ASBESTOS EXPOSURE RESEARCH - AIR, SOIL AND BULK MATERIAL SCENARIOS
Presently, asbestos and other mineral fibers are monitored in the workplace and in the environment using several basic analytical techniques, based primarily upon observing the fiber by either optical or electron microscopy. EPA is conducting research to determine which sampling ...
Probabilistic Asteroid Impact Risk Assessment for the Hypothetical PDC17 Impact Exercise
NASA Technical Reports Server (NTRS)
Wheeler, Lorien; Mathias, Donovan
2017-01-01
This work performs impact risk assessment for the 2017 Planetary Defense Conference (PDC17) hypothetical impact exercise, to take place at the PDC17 conference, May 15-20, 2017. Impact scenarios and trajectories are developed and provided by NASA's Near Earth Objects Office at JPL (Paul Chodas). These results represent purely hypothetical impact scenarios and do not reflect any known asteroid threat. Risk assessment was performed using the Probabilistic Asteroid Impact Risk (PAIR) model developed by the Asteroid Threat Assessment Project (ATAP) at NASA Ames Research Center. This presentation includes sample results that may be presented or used in discussions during the various stages of the impact exercise. Some cases represent alternate scenario options that may not be used during the actual impact exercise at the PDC17 conference. Updates to these initial assessments and/or additional scenario assessments may be performed throughout the impact exercise as different scenario options unfold.
MARCO POLO: near earth object sample return mission
NASA Astrophysics Data System (ADS)
Barucci, M. A.; Yoshikawa, M.; Michel, P.; Kawagushi, J.; Yano, H.; Brucato, J. R.; Franchi, I. A.; Dotto, E.; Fulchignoni, M.; Ulamec, S.
2009-03-01
MARCO POLO is a joint European-Japanese sample return mission to a Near-Earth Object. This Euro-Asian mission will go to a primitive Near-Earth Object (NEO), which we anticipate will contain primitive materials without any known meteorite analogue, scientifically characterize it at multiple scales, and bring samples back to Earth for detailed scientific investigation. Small bodies, as primitive leftover building blocks of the Solar System formation process, offer important clues to the chemical mixture from which the planets formed some 4.6 billion years ago. Current exobiological scenarios for the origin of Life invoke an exogenous delivery of organic matter to the early Earth: it has been proposed that primitive bodies could have brought these complex organic molecules capable of triggering the pre-biotic synthesis of biochemical compounds. Moreover, collisions of NEOs with the Earth pose a finite hazard to life. For all these reasons, the exploration of such objects is particularly interesting and urgent. The scientific objectives of MARCO POLO will therefore contribute to a better understanding of the origin and evolution of the Solar System, the Earth, and possibly Life itself. Moreover, MARCO POLO provides important information on the volatile-rich (e.g. water) nature of primitive NEOs, which may be particularly important for future space resource utilization as well as providing critical information for the security of Earth. MARCO POLO is a proposal offering several options, leading to great flexibility in the actual implementation. The baseline mission scenario is based on a launch with a Soyuz-type launcher and consists of a Mother Spacecraft (MSC) carrying a possible Lander named SIFNOS, small hoppers, sampling devices, a re-entry capsule and scientific payloads. The MSC leaves Earth orbit, cruises toward the target with ion engines, rendezvous with the target, conducts a global characterization of the target to select a sampling site, and delivers small hoppers (MINERVA type, JAXA) and SIFNOS. The latter, if added, will perform a soft landing, anchor to the target surface, and make various in situ measurements of surface/subsurface materials near the sampling site. Two surface samples will be collected by the MSC using “touch and go” manoeuvres. Two complementary sample collection devices will be used in this phase: one developed by ESA and another provided by JAXA, mounted on a retractable extension arm. After the completion of the sampling and ascent of the MSC, the arm will be retracted to transfer the sample containers into the MSC. The MSC will then make its journey back to Earth and release the re-entry capsule into the Earth’s atmosphere.
NG09 And CTBT On-Site Inspection Noble Gas Sampling and Analysis Requirements
NASA Astrophysics Data System (ADS)
Carrigan, Charles R.; Tanaka, Junichi
2010-05-01
A provision of the Comprehensive Test Ban Treaty (CTBT) allows on-site inspections (OSIs) of suspect nuclear sites to determine if the occurrence of a detected event is nuclear in origin. For an underground nuclear explosion (UNE), the potential success of an OSI depends significantly on the containment scenario of the alleged event as well as the application of air and soil-gas radionuclide sampling techniques in a manner that takes into account both the suspect site geology and the gas transport physics. UNE scenarios may be broadly divided into categories involving the level of containment. The simplest to detect is a UNE that vents a significant portion of its radionuclide inventory and is readily detectable at distance by the International Monitoring System (IMS). The most well contained subsurface events will only be detectable during an OSI. In such cases, 37Ar and radioactive xenon cavity gases may reach the surface through either "micro-seepage" or the barometric pumping process, and only the careful siting of sampling locations, timing of sampling and application of the most site-appropriate atmospheric and soil-gas capturing methods will result in a confirmatory signal. The OSI noble gas field test NG09 was recently held in Stupava, Slovakia to consider, in addition to other field sampling and analysis techniques, drilling and subsurface noble gas extraction methods that might be applied during an OSI. One of the experiments focused on challenges to soil-gas sampling near the soil-atmosphere interface. During withdrawal of soil gas from shallow, subsurface sample points, atmospheric dilution of the sample and the potential for introduction of unwanted atmospheric gases were considered. Tests were designed to evaluate surface infiltration and the ability of inflatable well-packers to seal out atmospheric gases during sample acquisition. We discuss these tests along with some model-based predictions regarding infiltration under different near-surface hydrologic conditions. We also consider how naturally occurring as well as introduced (e.g., SF6) soil-gas tracers might be used to guard against the possibility of atmospheric contamination of soil gases while sampling during an actual OSI. The views expressed here do not necessarily reflect the opinion of the United States Government, the United States Department of Energy, or Lawrence Livermore National Laboratory. This work has been performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-418791
Determination of Inorganic Ion Profiles of Illicit Drugs by Capillary Electrophoresis.
Evans, Elizabeth; Costrino, Carolina; do Lago, Claudimir L; Garcia, Carlos D; Roux, Claude; Blanes, Lucas
2016-11-01
A portable capillary electrophoresis instrument with dual capacitively coupled contactless conductivity detection (C4D) was used to determine the inorganic ionic profiles of three pharmaceutical samples and precursors of two illicit drugs (contemporary samples of methylone and para-methoxymethamphetamine). The LODs ranged from 0.10 μmol/L to 1.25 μmol/L for the 10 selected cations, and from 0.13 μmol/L to 1.03 μmol/L for the eight selected anions. All separations were performed in less than 6 min with migration times and peak area RSD values ranging from 2 to 7%. The results demonstrate the potential of the analysis of inorganic ionic species to aid in the identification and/or differentiation of unknown tablets and real samples found in illicit drug manufacture scenarios. From the resulting ionic fingerprint, the unknown tablets and samples can be further classified. © 2016 American Academy of Forensic Sciences.
Optimizing Integrated Terminal Airspace Operations Under Uncertainty
NASA Technical Reports Server (NTRS)
Bosson, Christabelle; Xue, Min; Zelinski, Shannon
2014-01-01
In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.
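For context, the sample average approximation (SAA) step described above can be summarized generically as follows; this is the textbook formulation, not necessarily the exact model used in the paper. The expected cost over the uncertain scenario variable ξ is replaced by an average over N sampled scenarios:

\[
\min_{x \in X} \; \mathbb{E}_{\xi}\!\left[F(x,\xi)\right]
\;\approx\;
\min_{x \in X} \; \hat{f}_N(x) = \frac{1}{N}\sum_{i=1}^{N} F(x,\xi_i).
\]

A statistical optimality gap for a candidate solution \(\hat{x}\) is then typically estimated by comparing an upper-bound estimate \(\hat{f}_{N'}(\hat{x})\), evaluated on an independent sample, with a lower-bound estimate obtained by averaging the optimal values of several independent SAA replications.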
Optical sectioning in induced coherence tomography with frequency-entangled photons
NASA Astrophysics Data System (ADS)
Vallés, Adam; Jiménez, Gerard; Salazar-Serrano, Luis José; Torres, Juan P.
2018-02-01
We demonstrate a different scheme to perform optical sectioning of a sample based on the concept of induced coherence [Zou et al., Phys. Rev. Lett. 67, 318 (1991), 10.1103/PhysRevLett.67.318]. This can be viewed as a different type of optical coherence tomography scheme where the varying reflectivity of the sample along the direction of propagation of an optical beam translates into changes of the degree of first-order coherence between two beams. As a practical advantage the scheme allows probing the sample with one wavelength and measuring photons with another wavelength. In a bio-imaging scenario, this would result in a deeper penetration into the sample because of probing with longer wavelengths, while still using the optimum wavelength for detection. The scheme proposed here could achieve submicron axial resolution by making use of nonlinear parametric sources with broad spectral bandwidth emission.
NASA Astrophysics Data System (ADS)
Stanley, F. E.; Byerly, Benjamin L.; Thomas, Mariam R.; Spencer, Khalil J.
2016-06-01
Actinide isotope measurements are a critical signature capability in the modern nuclear forensics "toolbox", especially when interrogating anthropogenic constituents in real-world scenarios. Unfortunately, established methodologies, such as traditional total evaporation via thermal ionization mass spectrometry, struggle to confidently measure low abundance isotope ratios (<10^-6) within already limited quantities of sample. Herein, we investigate the application of static, mixed array total evaporation techniques as a straightforward means of improving plutonium minor isotope measurements, which have been resistant to enhancement in recent years because of elevated radiologic concerns. Results are presented for small sample (~20 ng) applications involving a well-known plutonium isotope reference material, CRM-126a, and compared with traditional total evaporation methods.
Stanley, F E; Byerly, Benjamin L; Thomas, Mariam R; Spencer, Khalil J
2016-06-01
Actinide isotope measurements are a critical signature capability in the modern nuclear forensics "toolbox", especially when interrogating anthropogenic constituents in real-world scenarios. Unfortunately, established methodologies, such as traditional total evaporation via thermal ionization mass spectrometry, struggle to confidently measure low abundance isotope ratios (<10^-6) within already limited quantities of sample. Herein, we investigate the application of static, mixed array total evaporation techniques as a straightforward means of improving plutonium minor isotope measurements, which have been resistant to enhancement in recent years because of elevated radiologic concerns. Results are presented for small sample (~20 ng) applications involving a well-known plutonium isotope reference material, CRM-126a, and compared with traditional total evaporation methods.
Colalillo, Sara; Williamson, David; Johnston, Charlotte
2014-12-01
Attributions for parents' behavior were examined in a sample of boys with and without Attention-Deficit/Hyperactivity Disorder (ADHD). Sixty-six boys (mean age = 9.75 years) rated attributions for their mothers' and their fathers' behavior, across positive and negative scenarios, and along four attribution dimensions (parent ability, parent effort, task difficulty, and child responsibility). Three-way interactions emerged among child ADHD status, parent gender, and attribution type, and among scenario valence, parent gender, and attribution type. All children rated attributions higher in the positive scenarios, and attributions of child responsibility higher for fathers than mothers. Children rated task-related attributions higher for mothers in negative scenarios, but higher for fathers in positive scenarios. Boys with ADHD rated child responsibility attributions higher than controls, across all scenarios. Results highlight important differences in children's perceptions of their parents' behavior that may have implications for understanding parent-child relationships in families of children with and without ADHD.
Using Estimations of Entropy to Optimize Complex Human Dynamic Networks under Stress
2013-12-30
[Table residue from the report: subject demographics (gender, age, race, weight (lb.), height (in.), BMI, body fat %, class year, cadet rank).] ...mission acts as a failed mission and concludes with a "walk of shame". Blood samples were taken before CSH1 and after CSH3; saliva samples were taken... In the individual scenarios, one subject did not have their body fat % recorded, therefore N=15 for individual correlations involving body fat %.
Kuijpers, Laura Maria Francisca; Maltha, Jessica; Guiraud, Issa; Kaboré, Bérenger; Lompo, Palpouguini; Devlieger, Hugo; Van Geet, Chris; Tinto, Halidou; Jacobs, Jan
2016-06-02
Plasmodium falciparum infection may cause severe anaemia, particularly in children. When planning a diagnostic study on children suspected of severe malaria in sub-Saharan Africa, it was questioned how much blood could be safely sampled; intended blood volumes (blood cultures and EDTA blood) were 6 mL (children aged <6 years) and 10 mL (6-12 years). A previous review [Bull World Health Organ. 89: 46-53. 2011] recommended not to exceed 3.8 % of total blood volume (TBV). In a simulation exercise using data of children previously enrolled in a study about severe malaria and bacteraemia in Burkina Faso, the impact of this 3.8 % safety guideline was evaluated. For a total of 666 children aged >2 months to <12 years, data of age, weight and haemoglobin value (Hb) were available. For each child, the estimated TBV (TBVe) (mL) was calculated by multiplying the body weight (kg) by the factor 80 (ml/kg). Next, TBVe was corrected for the degree of anaemia to obtain the functional TBV (TBVf). The correction factor consisted of the rate 'Hb of the child divided by the reference Hb'; both the lowest ('best case') and highest ('worst case') reference Hb values were used. Next, the exact volume that a 3.8 % proportion of this TBVf would present was calculated and this volume was compared to the blood volumes that were intended to be sampled. When applied to the Burkina Faso cohort, the simulation exercise pointed out that in 5.3 % (best case) and 11.4 % (worst case) of children the blood volume intended to be sampled would exceed the volume as defined by the 3.8 % safety guideline. Highest proportions would be in the age groups 2-6 months (19.0 %; worst scenario) and 6 months-2 years (15.7 %; worst case scenario). A positive rapid diagnostic test for P. falciparum was associated with an increased risk of violating the safety guideline in the worst case scenario (p = 0.016). Blood sampling in children for research in P. falciparum endemic settings may easily violate the proposed safety guideline when applied to TBVf. Ethical committees and researchers should be wary of this and take appropriate precautions.
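The safety check described above reduces to simple arithmetic. The following sketch illustrates the calculation with hypothetical input values; the function name and the example numbers are illustrative only.

```python
# Illustrative sketch of the safety check described above (hypothetical values).
# TBVe = weight_kg * 80 mL/kg; TBVf = TBVe * (child Hb / reference Hb);
# the intended sample volume is then compared against 3.8% of TBVf.

def max_safe_volume_ml(weight_kg, hb_child, hb_reference, safety_fraction=0.038):
    tbv_estimated = weight_kg * 80.0                              # estimated total blood volume, mL
    tbv_functional = tbv_estimated * (hb_child / hb_reference)    # anaemia correction
    return safety_fraction * tbv_functional

# Example: a 9 kg child with Hb 7 g/dL against a reference Hb of 11 g/dL
limit = max_safe_volume_ml(9, 7.0, 11.0)
print(f"Safety limit: {limit:.1f} mL")   # ~17.4 mL, so an intended 6 mL draw would be within the guideline
```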
Städler, Thomas; Haubold, Bernhard; Merino, Carlos; Stephan, Wolfgang; Pfaffelhuber, Peter
2009-01-01
Using coalescent simulations, we study the impact of three different sampling schemes on patterns of neutral diversity in structured populations. Specifically, we are interested in two summary statistics based on the site frequency spectrum as a function of migration rate, demographic history of the entire substructured population (including timing and magnitude of specieswide expansions), and the sampling scheme. Using simulations implementing both finite-island and two-dimensional stepping-stone spatial structure, we demonstrate strong effects of the sampling scheme on Tajima's D (DT) and Fu and Li's D (DFL) statistics, particularly under specieswide (range) expansions. Pooled samples yield average DT and DFL values that are generally intermediate between those of local and scattered samples. Local samples (and to a lesser extent, pooled samples) are influenced by local, rapid coalescence events in the underlying coalescent process. These processes result in lower proportions of external branch lengths and hence lower proportions of singletons, explaining our finding that the sampling scheme affects DFL more than it does DT. Under specieswide expansion scenarios, these effects of spatial sampling may persist up to very high levels of gene flow (Nm > 25), implying that local samples cannot be regarded as being drawn from a panmictic population. Importantly, many data sets on humans, Drosophila, and plants contain signatures of specieswide expansions and effects of sampling scheme that are predicted by our simulation results. This suggests that validating the assumption of panmixia is crucial if robust demographic inferences are to be made from local or pooled samples. However, future studies should consider adopting a framework that explicitly accounts for the genealogical effects of population subdivision and empirical sampling schemes. PMID:19237689
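For reference, the Tajima's D statistic mentioned above is conventionally defined from the two standard estimators of the scaled mutation rate (this is the textbook definition, not a result specific to this study):

\[
D_T = \frac{\hat{\theta}_{\pi} - \hat{\theta}_{W}}{\sqrt{\widehat{\operatorname{Var}}\!\left(\hat{\theta}_{\pi} - \hat{\theta}_{W}\right)}},
\qquad
\hat{\theta}_{W} = \frac{S}{\sum_{i=1}^{n-1} 1/i},
\]

where \(\hat{\theta}_{\pi}\) is the mean number of pairwise differences, S is the number of segregating sites, and n is the sample size. Population expansions typically produce an excess of rare variants (long external branches), pushing both D_T and, even more strongly, Fu and Li's D_FL toward negative values.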
Sample design effects in landscape genetics
Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.
2012-01-01
An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially-explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), numbers of alleles per locus (5 and 10), numbers of individuals sampled (10-300), and generational time after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and number of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration for sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.
McDonald, Catherine C; Curry, Allison E; Kandadai, Venk; Sommers, Marilyn S; Winston, Flaura K
2014-11-01
Motor vehicle crashes are the leading cause of death and acquired disability during the first four decades of life. While teen drivers have the highest crash risk, few studies examine the similarities and differences in teen and adult driver crashes. We aimed to: (1) identify and compare the most frequent crash scenarios-integrated information on a vehicle's movement prior to crash, immediate pre-crash event, and crash configuration-for teen and adult drivers involved in serious crashes, and (2) for the most frequent scenarios, explore whether the distribution of driver critical errors differed for teens and adult drivers. We analyzed data from the National Motor Vehicle Crash Causation Survey, a nationally representative study of serious crashes conducted by the U.S. National Highway Traffic Safety Administration from 2005 to 2007. Our sample included 642 16- to 19-year-old and 1167 35- to 54-year-old crash-involved drivers (weighted n=296,482 and 439,356, respectively) who made a critical error that led to their crash's critical pre-crash event (i.e., event that made the crash inevitable). We estimated prevalence ratios (PR) and 95% confidence intervals (CI) to compare the relative frequency of crash scenarios and driver critical errors. The top five crash scenarios among teen drivers, accounting for 37.3% of their crashes, included: (1) going straight, other vehicle stopped, rear end; (2) stopped in traffic lane, turning left at intersection, turn into path of other vehicle; (3) negotiating curve, off right edge of road, right roadside departure; (4) going straight, off right edge of road, right roadside departure; and (5) stopped in lane, turning left at intersection, turn across path of other vehicle. The top five crash scenarios among adult drivers, accounting for 33.9% of their crashes, included the same scenarios as the teen drivers with the exception of scenario (3) and the addition of going straight, crossing over an intersection, and continuing on a straight path. For two scenarios ((1) and (3) above), teens were more likely than adults to make a critical decision error (e.g., traveling too fast for conditions). Our findings indicate that among those who make a driver critical error in a serious crash, there are few differences in the scenarios or critical driver errors for teen and adult drivers. Copyright © 2014 Elsevier Ltd. All rights reserved.
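As an illustration of the comparison statistic used above, a prevalence ratio with a log-scale 95% confidence interval can be computed as in the following sketch. The counts are hypothetical and unweighted; the study itself analyzed weighted national survey data, so its exact estimator differs.

```python
# Hypothetical counts: a_teen of n_teen teen drivers and a_adult of n_adult adult drivers
# exhibit a given crash scenario. The PR compares the two proportions; the CI is the
# usual log-transformed Wald interval (unweighted illustration only).
import math

def prevalence_ratio(a_teen, n_teen, a_adult, n_adult, z=1.96):
    p_teen, p_adult = a_teen / n_teen, a_adult / n_adult
    pr = p_teen / p_adult
    se_log = math.sqrt(1/a_teen - 1/n_teen + 1/a_adult - 1/n_adult)
    lo, hi = pr * math.exp(-z * se_log), pr * math.exp(z * se_log)
    return pr, (lo, hi)

print(prevalence_ratio(80, 642, 95, 1167))   # hypothetical example: PR ~ 1.53, CI ~ (1.16, 2.03)
```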
The TESIS Project: Revealing Massive Early-Type Galaxies at z > 1
NASA Astrophysics Data System (ADS)
Saracco, P.; Longhetti, M.; Severgnini, P.; Della Ceca, R.; Braito, V.; Bender, R.; Drory, N.; Feulner, G.; Hopp, U.; Mannucci, F.; Maraston, C.
How and when present-day massive early-type galaxies were built up, and what type of evolution has characterized their growth (star formation and/or merging), still remain open issues. The different competing scenarios of galaxy formation predict markedly different properties of early-type galaxies at z > 1. The "monolithic" collapse predicts that massive spheroids formed at high redshift (z > 2.5-3) and that their comoving density is constant at z < 2.5-3 since they evolve only in luminosity. On the contrary, in the hierarchical scenario massive spheroids are built up through subsequent mergers, reaching their final masses at z < 1.5 [3,5]. As a consequence, massive systems are very rare at z > 1, their comoving density decreases from z = 0 to z ~ 1.5 and they should experience their last burst of star formation at z < 1.5, concurrent with the merging event(s) of their formation. These opposing predictions for the properties of early-types at z > 1 can be probed observationally once a well-defined sample of massive early-types at z > 1 is available. We are constructing such a sample through a dedicated near-IR very low resolution (λ/Δλ≃50) spectroscopic survey (TNG EROs Spectroscopic Identification Survey, TESIS, [6]) of a complete sample of 30 bright (K < 18.5) Extremely Red Objects (EROs).
The Evolution of the Galactic Thick Disk with the LAMOST Survey
NASA Astrophysics Data System (ADS)
Li, Chengdong; Zhao, Gang
2017-11-01
We select giant stars from LAMOST data release 3 (hereafter DR3) based on their spectral properties and atmospheric parameters in order to detect the structure and kinematic properties of the Galactic thick disk. The spatial motions of our sample stars are calculated. We obtain 2035 thick-disk giant stars by using a kinematic criterion. We confirm the existence of the metal-weak thick disk. The most metal-deficient star in our sample has [Fe/H] = -2.34. We derive the radial and vertical metallicity gradients, which are +0.035 ± 0.010 and -0.164 ± 0.010 dex kpc^-1, respectively. Then we estimate the scale length and scale height of the thick disk using the Jeans equation, and the results are h_R = 3.0 ± 0.1 kpc and h_Z = 0.9 ± 0.1 kpc. The scale length of the thick disk is approximately equal to that of the thin disk from several previous works. Finally, we calculate the orbital parameters of our sample stars, and discuss the formation scenario of the thick disk. Our result for the distribution of stellar orbital eccentricity excludes the accretion scenario. We conclude that the thick disk stars are mainly born inside the Milky Way.
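The quoted scale length and scale height correspond to the standard double-exponential parameterization of a disk's stellar density, shown here for context; the Jeans-equation analysis used in the paper is more involved than this closed form suggests.

\[
\rho(R,z) \;\propto\; \exp\!\left(-\frac{R}{h_R}\right)\exp\!\left(-\frac{|z|}{h_Z}\right),
\qquad h_R \approx 3.0\ \mathrm{kpc},\quad h_Z \approx 0.9\ \mathrm{kpc}.
\]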
Oremus, Mark; Tarride, Jean-Eric; Clayton, Natasha; Raina, Parminder
2009-12-29
Public drug insurance plans provide limited reimbursement for Alzheimer's disease (AD) medications in many jurisdictions, including Canada and the United Kingdom. This study was conducted to assess Canadians' level of support for an increase in annual personal income taxes to fund a public program of unrestricted access to AD medications. A telephone survey was administered to a national sample of 500 adult Canadians. The survey contained four scenarios describing a hypothetical, new AD medication. Descriptions varied across scenarios: the medication was alternatively described as being capable of treating the symptoms of cognitive decline or of halting the progression of cognitive decline, with either no probability of adverse effects or a 30% probability of primarily gastrointestinal adverse effects. After each scenario, participants were asked whether they would support a tax increase to provide unrestricted access to the drug. Participants who responded affirmatively were asked whether they would pay an additional $75, $150, or $225 per annum in taxes. Multivariable logistic regression analysis was conducted to examine the determinants of support for a tax increase. Eighty percent of participants supported a tax increase for at least one scenario. Support was highest (67%) for the most favourable scenario (halt progression - no adverse effects) and lowest (49%) for the least favourable scenario (symptom treatment - 30% chance of adverse effects). The odds of supporting a tax increase under at least one scenario were approximately 55% less for participants who attached higher ratings to their health state under the assumption that they had moderate AD and almost five times greater if participants thought family members or friends would somewhat or strongly approve of their decision to support a tax increase. A majority of participants would pay an additional $150 per annum in taxes, regardless of scenario. Less than 50% would pay $225. Four out of five persons in a sample of adult Canadians reported they would support a tax increase to fund unrestricted access to a hypothetical, new AD medication. These results signal a willingness to pay for at least some relaxation of reimbursement restrictions on AD medications.
2009-01-01
Background Public drug insurance plans provide limited reimbursement for Alzheimer's disease (AD) medications in many jurisdictions, including Canada and the United Kingdom. This study was conducted to assess Canadians' level of support for an increase in annual personal income taxes to fund a public program of unrestricted access to AD medications. Methods A telephone survey was administered to a national sample of 500 adult Canadians. The survey contained four scenarios describing a hypothetical, new AD medication. Descriptions varied across scenarios: the medication was alternatively described as being capable of treating the symptoms of cognitive decline or of halting the progression of cognitive decline, with either no probability of adverse effects or a 30% probability of primarily gastrointestinal adverse effects. After each scenario, participants were asked whether they would support a tax increase to provide unrestricted access to the drug. Participants who responded affirmatively were asked whether they would pay an additional $75, $150, or $225 per annum in taxes. Multivariable logistic regression analysis was conducted to examine the determinants of support for a tax increase. Results Eighty percent of participants supported a tax increase for at least one scenario. Support was highest (67%) for the most favourable scenario (halt progression - no adverse effects) and lowest (49%) for the least favourable scenario (symptom treatment - 30% chance of adverse effects). The odds of supporting a tax increase under at least one scenario were approximately 55% less for participants who attached higher ratings to their health state under the assumption that they had moderate AD and almost five times greater if participants thought family members or friends would somewhat or strongly approve of their decision to support a tax increase. A majority of participants would pay an additional $150 per annum in taxes, regardless of scenario. Less than 50% would pay $225. Conclusions Four out of five persons in a sample of adult Canadians reported they would support a tax increase to fund unrestricted access to a hypothetical, new AD medication. These results signal a willingness to pay for at least some relaxation of reimbursement restrictions on AD medications. PMID:20040110
NASA Astrophysics Data System (ADS)
Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.
2013-12-01
Underground nuclear tests may be first detected by seismic or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that, in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and methods used for surface and subsurface environmental sampling of gases can be used for an OSI scenario, on-site sampling conditions, required sampling volumes and establishment of background concentrations of noble gases require development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion site located in welded volcanic tuff. A mixture of SF6, 127Xe and 37Ar was metered into 4400 m3 of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. Effectiveness of various sampling approaches and the results of tracer gas measurements will be presented.
An Experimental Study of Launch Vehicle Propellant Tank Fragmentation
NASA Technical Reports Server (NTRS)
Richardson, Erin; Jackson, Austin; Hays, Michael; Bangham, Mike; Blackwood, James; Skinner, Troy; Richman, Ben
2014-01-01
In order to better understand launch vehicle abort environments, Bangham Engineering Inc. (BEi) built a test assembly that fails sample materials (steel and aluminum plates of various alloys and thicknesses) under quasi-realistic vehicle failure conditions. Samples are exposed to pressures similar to those expected in vehicle failure scenarios and filmed at high speed to increase understanding of complex fracture mechanics. After failure, the fragments of each test sample are collected, catalogued and reconstructed for further study. Post-test analysis shows that aluminum samples consistently produce fewer fragments than steel samples of similar thickness and at similar failure pressures. Video analysis shows that there are several failure 'patterns' that can be observed for all test samples based on configuration. Fragment velocities are also measured from high speed video data. Sample thickness and material are analyzed for trends in failure pressure. Testing is also done with cryogenic and noncryogenic liquid loading on the samples. It is determined that liquid loading and cryogenic temperatures can decrease material fragmentation for sub-flight thicknesses. A method is developed for capture and collection of fragments that is greater than 97 percent effective in recovering sample mass, addressing the generation of tiny fragments. Currently, samples tested do not match actual launch vehicle propellant tank material thicknesses because of size constraints on test assembly, but test findings are used to inform the design and build of another, larger test assembly with the purpose of testing actual vehicle flight materials that include structural components such as iso-grid and friction stir welds.
Abandoning the dead donor rule? A national survey of public views on death and organ donation
Nair-Collins, Michael; Green, Sydney R; Sutin, Angelina R
2015-01-01
Brain dead organ donors are the principal source of transplantable organs. However, it is controversial whether brain death is the same as biological death. Therefore, it is unclear whether organ removal in brain death is consistent with the ‘dead donor rule’, which states that organ removal must not cause death. Our aim was to evaluate the public's opinion about organ removal if explicitly described as causing the death of a donor in irreversible apneic coma. We conducted a cross-sectional internet survey of the American public (n=1096). Questionnaire domains included opinions about a hypothetical scenario of organ removal described as causing the death of a patient in irreversible coma, and items measuring willingness to donate organs after death. Some 71% of the sample agreed that it should be legal for patients to donate organs in the scenario described and 67% agreed that they would want to donate organs in a similar situation. Of the 85% of the sample who agreed that they were willing to donate organs after death, 76% agreed that they would donate in the scenario of irreversible coma with organ removal causing death. There appears to be public support for organ donation in a scenario explicitly described as violating the dead donor rule. Further, most but not all people who would agree to donate when organ removal is described as occurring after death would also agree to donate when organ removal is described as causing death in irreversible coma. PMID:25260779
Sampling ARG of multiple populations under complex configurations of subdivision and admixture.
Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi
2016-04-01
Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions relating to population genomics. Apart from the population samples, the underlying Ancestral Recombinations Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs, which dramatically improve on the time and space requirements of the classical single-population algorithm. Using the underlying random graphs model, we also derive closed forms of the expected values of the ARG characteristics, i.e., height of the graph, number of recombinations, number of mutations and population diversity, in terms of its defining parameters. This is crucial in aiding the user to specify meaningful parameters for the complex scenario simulations, not through trial-and-error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge this is the first time closed-form expressions have been computed for the ARG properties. We show that the expected values closely match the empirical values through simulations. Finally, we demonstrate that SimRA produces the ARG in compact forms without compromising any accuracy. We demonstrate the compactness and accuracy through extensive experiments. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available for downloading at: https://github.com/ComputationalGenomics/SimRA CONTACT: parida@us.ibm.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
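For comparison with the closed forms mentioned above, the classical single-population coalescent without recombination or admixture already gives a simple expression for the expected tree height; in coalescent time units, for a sample of size n,

\[
\mathbb{E}\left[T_{\mathrm{MRCA}}\right] \;=\; \sum_{k=2}^{n} \frac{2}{k(k-1)} \;=\; 2\left(1-\frac{1}{n}\right).
\]

The SimRA derivations extend this kind of expectation to ARG height and related quantities under multiple populations with admixture; the expression above is only the standard baseline, not a result from the paper.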
Reproducibility of apatite fission-track length data and thermal history reconstruction
NASA Astrophysics Data System (ADS)
Ketcham, Richard A.; Donelick, Raymond A.; Balestrieri, Maria Laura; Zattin, Massimiliano
2009-07-01
The ability to derive detailed thermal history information from apatite fission-track analysis is predicated on the reliability of track length measurements. However, insufficient attention has been given to whether and how these measurements should be standardized. In conjunction with a fission-track workshop we conducted an experiment in which 11 volunteers measured ~ 50 track lengths on one or two samples. One mount contained Durango apatite with unannealed induced tracks, and one contained apatite from a crystalline rock containing spontaneous tracks with a broad length distribution caused by partial resetting. Results for both mounts showed scatter indicative of differences in measurement technique among the individual analysts. The effects of this variability on thermal history inversion were tested using the HeFTy computer program to model the spontaneous track measurements. A cooling-only scenario and a reheating scenario more consistent with the sample's geological history were posed. When a uniform initial length value from the literature was used, results among analysts were very inconsistent in both scenarios, although normalizing for track angle by projecting all lengths to a c-axis parallel crystallographic orientation improved some aspects of congruency. When the induced track measurement was used as the basis for thermal history inversion, congruency among analysts, and agreement with inversions based on previously collected data, improved significantly. Further improvement was obtained by using c-axis projection. Differences among inversions that persisted could be traced to differential sampling of long- and short-track populations among analysts. The results of this study, while demonstrating the robustness of apatite fission-track thermal history inversion, nevertheless point to the necessity for a standardized length calibration schema that accounts for analyst variation.
Mars Sample Handling Protocol Workshop Series: Workshop 2a (Sterilization)
NASA Technical Reports Server (NTRS)
Rummel, John D. (Editor); Brunch, Carl W. (Editor); Setlow, Richard B. (Editor); DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
The Space Studies Board of the National Research Council provided a series of recommendations to NASA on planetary protection requirements for future Mars sample return missions. One of the Board's key findings suggested that, although current evidence of the martian surface suggests that life as we know it would not tolerate the planet's harsh environment, there remain 'plausible scenarios for extant microbial life on Mars.' Based on this conclusion, all samples returned from Mars should be considered potentially hazardous until it has been demonstrated that they are not. In response to the National Research Council's findings and recommendations, NASA has undertaken a series of workshops to address issues regarding NASA's proposed sample return missions. Work was previously undertaken at the Mars Sample Handling and Protocol Workshop 1 (March 2000) to formulate recommendations on effective methods for life detection and/or biohazard testing on returned samples. The NASA Planetary Protection Officer convened the Mars Sample Sterilization Workshop, the third in the Mars Sample Handling Protocol Workshop Series, on November 28-30, 2000 at the Holiday Inn Rosslyn Westpark, Arlington, Virginia. Because of the short timeframe between this Workshop and the second Workshop in the Series, which was convened in October 2000 in Bethesda, Maryland, the two were developed in parallel, and the Sterilization Workshop and its report have therefore been designated as '2a'. The focus of Workshop 2a was to make recommendations for effective sterilization procedures for all phases of Mars sample return missions, and to answer the question of whether we can sterilize samples in such a way that the geological characteristics of the samples are not significantly altered.
Robotic sampling system for an unmanned Mars mission
NASA Technical Reports Server (NTRS)
Chun, Wendell
1989-01-01
A major robotics opportunity for NASA will be the Mars Rover/Sample Return Mission, which could be launched as early as the 1990s. The exploratory portion of this mission will include two autonomous subsystems: the rover vehicle and a sample handling system. The sample handling system is the key to the process of collecting Martian soils. This system could include a core drill, a general-purpose manipulator, tools, containers, a return canister, certification hardware and a labeling system. Integrated into a functional package, the sample handling system is analogous to a complex robotic workcell. Discussed here are the different components of the system, their interfaces, foreseeable problem areas and many options based on the scientific goals of the mission. The various interfaces in the sample handling process (component to component and handling system to rover) will be a major engineering effort. Two critical evaluation criteria that will be imposed on the system are flexibility and reliability. It needs to be flexible enough to adapt to different scenarios and environments and acquire the most desirable specimens for return to Earth. Scientists may decide to change the distribution and ratio of core samples to rock samples in the canister. The long distance and duration of this planetary mission places a reliability burden on the hardware. The communication time delay between Earth and Mars minimizes operator interaction (teleoperation, supervisory modes) with the sample handler. An intelligent system will be required to plan the actions, make sample choices, interpret sensor inputs, and query unknown surroundings. A combination of autonomous functions and supervised movements will be integrated into the sample handling system.
Creation of the selection list for the Experiment Scheduling Program (ESP)
NASA Technical Reports Server (NTRS)
Deuermeyer, B. L.; Shannon, R. E.; Underbrink, A. J., Jr.
1986-01-01
The efforts to develop a procedure to construct selection groups to augment the Experiment Scheduling Program (ESP) are summarized. Included is a User's Guide and a sample scenario to guide in the use of the software system that implements the developed procedures.
Energy-Water Modeling and Analysis | Energy Analysis | NREL
NASA Astrophysics Data System (ADS)
Klemm, Richard; Schattschneider, Sebastian; Jahn, Tobias; Hlawatsch, Nadine; Julich, Sandra; Becker, Holger; Gärtner, Claudia
2013-05-01
The ability to integrate complete assays on a microfluidic chip helps to greatly simplify instrument requirements and allows the use of lab-on-a-chip technology in the field. A core application for such field-portable systems is the detection of pathogens in a CBRNE scenario, such as permanent monitoring of airborne pathogens, e.g. in metro stations or hospitals. Enzymatic assays were chosen as one assay methodology for pathogen identification. In order to evaluate different detection strategies, the realized on-chip enzyme assay module has been designed as a general platform chip. In all application cases, the assays are based on immobilized probes located in microfluidic channels. Therefore a microfluidic chip was realized containing a set of three individually addressable channels, not only for detection of the sample itself but also to provide a set of references for quantitative analysis. It furthermore includes two turning valves and a waste container for clear and sealed storage of potentially pathogenic liquids to avoid contamination of the environment. All liquids remain in the chip and can be disposed of in a proper way after the analysis. The chip design includes four inlet ports consisting of one sample port (Luer interface) and three mini Luer interfaces for fluidic supply of e.g. washing buffer, substrate and enzyme solution. The sample can be applied via a special, sealable sampling vessel with an integrated female Luer interface, so that pre-analytical contamination of the environment can also be prevented. Other reagents that are required for analysis will be stored off chip.
NASA Astrophysics Data System (ADS)
Mølgaard, Lasse L.; Buus, Ole T.; Larsen, Jan; Babamoradi, Hamid; Thygesen, Ida L.; Laustsen, Milan; Munk, Jens Kristian; Dossi, Eleftheria; O'Keeffe, Caroline; Lässig, Lina; Tatlow, Sol; Sandström, Lars; Jakobsen, Mogens H.
2017-05-01
We present a data-driven machine learning approach to detect drug and explosives precursors using colorimetric sensor technology for air-sampling. The sensing technology has been developed in the context of the CRIM-TRACK project. At present a fully integrated portable prototype for air sampling with disposable sensing chips and automated data acquisition has been developed. The prototype allows for fast, user-friendly sampling, which has made it possible to produce large datasets of colorimetric data for different target analytes in laboratory and simulated real-world application scenarios. To make use of the highly multi-variate data produced from the colorimetric chip, a number of machine learning techniques are employed to provide reliable classification of target analytes from confounders found in the air streams. We demonstrate that a data-driven machine learning method using dimensionality reduction in combination with a probabilistic classifier makes it possible to produce informative features and a high detection rate of analytes. Furthermore, the probabilistic machine learning approach provides a means of automatically identifying unreliable measurements that could produce false predictions. The robustness of the colorimetric sensor has been evaluated in a series of experiments focusing on the amphetamine precursor phenylacetone as well as the improvised explosives precursor hydrogen peroxide. The analysis demonstrates that the system is able to detect analytes in clean air and mixed with substances that occur naturally in real-world sampling scenarios. The technology under development in CRIM-TRACK has the potential to serve as an effective tool for controlling trafficking of illegal drugs, for explosives detection, and in other law enforcement applications.
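A generic sketch of the analysis strategy described above (dimensionality reduction followed by a probabilistic classifier, with low-confidence predictions flagged) is given below. It uses synthetic data and standard scikit-learn components for illustration; it is not the CRIM-TRACK pipeline itself, and the feature dimensions, threshold and names are assumptions.

```python
# Illustrative sketch (not the CRIM-TRACK pipeline): PCA dimensionality reduction
# followed by a probabilistic classifier, with unsure predictions flagged.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))          # hypothetical colorimetric features per sensing chip
y = rng.integers(0, 2, size=200)         # 1 = target analyte present, 0 = confounder only

model = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
model.fit(X, y)

proba = model.predict_proba(X[:5])[:, 1]     # probability of "analyte present"
unreliable = np.abs(proba - 0.5) < 0.1       # flag measurements the model is unsure about
print(proba, unreliable)
```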
NASA Astrophysics Data System (ADS)
Blanco, Yolanda; Gallardo-Carreño, Ignacio; Ruiz-Bermejo, Marta; Puente-Sánchez, Fernando; Cavalcante-Silva, Erika; Quesada, Antonio; Prieto-Ballesteros, Olga; Parro, Víctor
2017-10-01
The search for biomarkers of present or past life is one of the major challenges for in situ planetary exploration. Multiple constraints limit the performance and sensitivity of remote in situ instrumentation. In addition, the structure, chemical, and mineralogical composition of the sample may complicate the analysis and interpretation of the results. The aim of this work is to highlight the main constraints, performance, and complementarity of several techniques that have already been implemented or are planned to be implemented on Mars for detection of organic and molecular biomarkers on a best-case sample scenario. We analyzed a 1000-year-old desiccated and mummified microbial mat from Antarctica by Raman and IR (infrared) spectroscopies (near- and mid-IR), thermogravimetry (TG), differential thermal analysis, mass spectrometry (MS), and immunological detection with a life detector chip. In spite of the high organic content (ca. 20% wt/wt) of the sample, the Raman spectra only showed the characteristic spectral peaks of the remaining beta-carotene biomarker and faint peaks of phyllosilicates over a strong fluorescence background. IR spectra complemented the mineralogical information from Raman spectra and showed the main molecular vibrations of the humic acid functional groups. The TG-MS system showed the release of several volatile compounds attributed to biopolymers. An antibody microarray for detecting cyanobacteria (CYANOCHIP) detected biomarkers from Chroococcales, Nostocales, and Oscillatoriales orders. The results highlight limitations of each technique and suggest the necessity of complementary approaches in the search for biomarkers because some analytical techniques might be impaired by sample composition, presentation, or processing.
The price elasticity of demand for heroin: Matched longitudinal and experimental evidence.
Olmstead, Todd A; Alessi, Sheila M; Kline, Brendan; Pacula, Rosalie Liccardo; Petry, Nancy M
2015-05-01
This paper reports estimates of the price elasticity of demand for heroin based on a newly constructed dataset. The dataset has two matched components concerning the same sample of regular heroin users: longitudinal information about real-world heroin demand (actual price and actual quantity at daily intervals for each heroin user in the sample) and experimental information about laboratory heroin demand (elicited by presenting the same heroin users with scenarios in a laboratory setting). Two empirical strategies are used to estimate the price elasticity of demand for heroin. The first strategy exploits the idiosyncratic variation in the price experienced by a heroin user over time that occurs in markets for illegal drugs. The second strategy exploits the experimentally induced variation in price experienced by a heroin user across experimental scenarios. Both empirical strategies result in the estimate that the conditional price elasticity of demand for heroin is approximately -0.80. Copyright © 2015 Elsevier B.V. All rights reserved.
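For context, the price elasticity quoted above is the percentage change in quantity demanded per one percent change in price. In a generic log-log demand specification (not necessarily the authors' exact estimating equation) it appears directly as the slope coefficient:

\[
\varepsilon \;=\; \frac{\partial \ln Q}{\partial \ln P},
\qquad
\ln Q_{it} \;=\; \alpha_i + \varepsilon \,\ln P_{it} + u_{it},
\]

so \(\varepsilon \approx -0.80\) implies that a 10% increase in price is associated with roughly an 8% reduction in the quantity of heroin demanded.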
NASA Technical Reports Server (NTRS)
Fries, M. D.; Steele, Andrew; Hynek, B. M.
2015-01-01
We present the hypothesis that halite may play a role in methane sequestration on the martian surface. In terrestrial examples, halite deposits sequester large volumes of methane and chloromethane. Also, examples of chloromethane-bearing, approximately 4.5 Ga old halite from the Monahans meteorite show that this system is very stable unless the halite is damaged. On Mars, methane may be generated from carbonaceous material trapped in ancient halite deposits and sequestered. The methane may be released by damaging its halite host; either by aqueous alteration, aeolian abrasion, heating, or impact shock. Such a scenario may help to explain the appearance of short-lived releases of methane on the martian surface. The methane may be of either biogenic or abiogenic origin. If this scenario plays a significant role on Mars, then martian halite deposits may contain samples of organic compounds dating to the ancient desiccation of the planet, accessible at the surface for future sample return missions.
Win-Stay, Lose-Sample: a simple sequential algorithm for approximating Bayesian inference.
Bonawitz, Elizabeth; Denison, Stephanie; Gopnik, Alison; Griffiths, Thomas L
2014-11-01
People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might use a similar algorithm. These studies use a "mini-microgenetic method", investigating how people sequentially update their beliefs as they encounter new evidence. Experiment 1 investigates a deterministic causal learning scenario and Experiments 2 and 3 examine how people make inferences in a stochastic scenario. The behavior of adults and preschoolers in these experiments is consistent with our Bayesian version of the WSLS principle. This algorithm provides both a practical method for performing Bayesian inference and a new way to understand people's judgments. Copyright © 2014 Elsevier Inc. All rights reserved.
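A minimal sketch of one common formulation of a Win-Stay, Lose-Sample learner is given below; the acceptance rule and the toy coin-bias example are assumptions for illustration, not necessarily the authors' exact algorithm. The learner keeps its current hypothesis with probability equal to the likelihood of the new observation under that hypothesis, and otherwise resamples a hypothesis from the posterior over all data seen so far.

```python
# Sketch of a Win-Stay, Lose-Sample learner (assumed formulation, toy example).
import random

def posterior(hypotheses, prior, likelihood, data):
    weights = [prior[h] for h in hypotheses]
    for i, h in enumerate(hypotheses):
        for d in data:
            weights[i] *= likelihood(d, h)
    total = sum(weights)
    return [w / total for w in weights]

def wsls_update(current, hypotheses, prior, likelihood, data, new_obs):
    data = data + [new_obs]
    if random.random() < likelihood(new_obs, current):
        return current, data                                        # "win": stay
    probs = posterior(hypotheses, prior, likelihood, data)
    return random.choices(hypotheses, weights=probs)[0], data       # "lose": sample

# Toy example: infer a coin's bias from flips (1 = heads).
hypotheses = [0.2, 0.5, 0.8]
prior = {h: 1 / 3 for h in hypotheses}
likelihood = lambda d, h: h if d == 1 else 1 - h

h, seen = random.choice(hypotheses), []
for flip in [1, 1, 0, 1, 1, 1]:
    h, seen = wsls_update(h, hypotheses, prior, likelihood, seen, flip)
print("current hypothesis:", h)
```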
Electrostatic sampling of trace DNA from clothing.
Zieger, Martin; Defaux, Priscille Merciani; Utz, Silvia
2016-05-01
During acts of physical aggression, offenders frequently come into contact with clothes of the victim, thereby leaving traces of DNA-bearing biological material on the garments. Since tape-lifting and swabbing, the currently established methods for non-destructive trace DNA sampling from clothing, both have their shortcomings in collection efficiency and handling, we thought about a new collection method for these challenging samples. Testing two readily available electrostatic devices for their potential to sample biological material from garments made of different fabrics, we found one of them, the electrostatic dust print lifter (DPL), to perform comparable to well-established sampling with wet cotton swabs. In simulated aggression scenarios, we had the same success rate for the establishment of single aggressor profiles, suitable for database submission, with both the DPL and wet swabbing. However, we lost a substantial amount of information with electrostatic sampling, since almost no mixed aggressor-victim profiles suitable for database entry could be established, compared to conventional swabbing. This study serves as a proof of principle for electrostatic DNA sampling from items of clothing. The technique still requires optimization before it might be used in real casework. But we are confident that in the future it could be an efficient and convenient contribution to the toolbox of forensic practitioners.
NASA Technical Reports Server (NTRS)
Franz, H. B.; Mahaffy, P. R.; Stern, J. C.; Eigenbrode, J. L.; Steele, A.; Ming, D. W.; McAdam, A. C.; Freissinet, C.; Glavin, D. P.; Archer, P. D.;
2014-01-01
Since landing at Gale Crater in August 2012, the Sample Analysis at Mars (SAM) instrument suite on the Mars Science Laboratory (MSL) "Curiosity" rover has analyzed solid samples from the martian regolith in three locations, beginning with a scoop of aeolian deposits from the Rocknest (RN) sand shadow. Curiosity subsequently traveled to Yellowknife Bay, where SAM analyzed samples from two separate holes drilled into the Sheepbed Mudstone, designated John Klein (JK) and Cumberland (CB). Evolved gas analysis (EGA) of all samples revealed the presence of H2O as well as O-, C- and S-bearing phases, in most cases at abundances below the detection limit of the CheMin instrument. In the absence of definitive mineralogical identification by CheMin, SAM EGA data can help provide clues to the mineralogy of volatile-bearing phases through examination of temperatures at which gases are evolved from solid samples. In addition, the isotopic composition of these gases may be used to identify possible formation scenarios and relationships between phases. Here we report C and S isotope ratios for CO2 and SO2 evolved from the JK and CB mudstone samples as measured with SAM's quadrupole mass spectrometer (QMS) and draw comparisons to RN.
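Isotope ratios of this kind are conventionally reported in delta notation relative to a standard (shown here for reference; carbon is usually referenced to VPDB and sulfur to VCDT):

\[
\delta^{13}\mathrm{C} \;=\; \left(\frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}} - 1\right)\times 10^{3}\ \text{(per mil)},
\]

with \(\delta^{34}\mathrm{S}\) defined analogously from the \({}^{34}\mathrm{S}/{}^{32}\mathrm{S}\) ratio.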
Self-averaging and weak ergodicity breaking of diffusion in heterogeneous media
NASA Astrophysics Data System (ADS)
Russian, Anna; Dentz, Marco; Gouze, Philippe
2017-08-01
Diffusion in natural and engineered media is quantified in terms of stochastic models for the heterogeneity-induced fluctuations of particle motion. However, fundamental properties such as ergodicity and self-averaging and their dependence on the disorder distribution are often not known. Here, we investigate these questions for diffusion in quenched disordered media characterized by spatially varying retardation properties, which account for particle retention due to physical or chemical interactions with the medium. We link self-averaging and ergodicity to the disorder sampling efficiency Rn, which quantifies the number of disorder realizations a noise ensemble may sample in a single disorder realization. Diffusion for disorder scenarios characterized by a finite mean transition time is ergodic and self-averaging for any dimension. The strength of the sample-to-sample fluctuations decreases with increasing spatial dimension. For an infinite mean transition time, particle motion exhibits weak ergodicity breaking in any dimension because single particles cannot sample the heterogeneity spectrum in finite time. However, even though the noise ensemble is not representative of the single-particle time statistics, subdiffusive motion in q ≥ 2 dimensions is self-averaging, which means that the noise ensemble in a single realization samples a representative part of the heterogeneity spectrum.
Elevated PCDD/F levels and distinctive PCDD/F congener profiles in free range eggs.
Hsu, Jing-Fang; Chen, Chun; Liao, Pao-Chi
2010-07-14
Chicken eggs are one of the most important foods in the human diet all over the world, and the demand for eggs from free range hens has steadily increased. Congener-specific analyses of 17 polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) were performed on 6 free range and 12 caged chicken egg samples collected in Taiwan. The mean level of PCDD/Fs in the free range egg samples was 5.7 (1.79/0.314) times higher than that in the caged egg samples. Principal component analysis revealed that at least three characteristic PCDD/F congener patterns were observed among the 18 egg samples. The different PCDD/F congener patterns between free range and caged egg samples may reflect distinctive exposure scenarios among the free range and caged hens. We suggest that the differences in PCDD/F levels and congener patterns between free range and caged egg samples raise issues related to the safety of eating free range chicken eggs. The present data may provide useful information for further investigation of the possible PCDD/F sources in the contaminated free range eggs.
Cahú, Graziela Pontes Ribeiro; Rosenstock, Karelline Izaltemberg Vasconcelos; da Costa, Solange Fátima Geraldo; Leite, Alice Iana Tavares; Costa, Isabelle Cristinne Pinto; Gomes e Claudino, Hellen
2011-09-01
The discussion about the practice of psychological harassment has gained notoriety due to the intensification and the severity of the phenomenon in various scenarios, such as educational institutions, public agencies and private companies. This study aims to characterize the scientific production about the scenarios of psychological harassment in online journals in the areas of Health Sciences, Social Sciences and Humanities from 2002 to 2010. This is an integrative literature review. Its sample consists of 24 publications. The survey revealed that most of the publications appeared from 2005 to 2010. Highlighted areas of knowledge were Health Sciences and Humanities. We identified three scenarios of psychological harassment: educational institutions, public agencies and private companies. We conclude that psychological harassment affects workers employed in various work scenarios, and that the discussion about this phenomenon stands in the interdisciplinary field.
Zimmer-Faust, Amity G; Ambrose, Richard F; Tamburri, Mario N
2014-01-01
With the maturation and certification of several ballast water management systems that employ chlorine as a biocide to prevent the spread of invasive species, there is a clear need for accurate and reliable total residual oxidants (TRO) technology to monitor treatment dose and assure the environmental safety of treated water discharged from ships. In this study, instruments used to measure TRO in wastewater and drinking water applications were evaluated for their performance in scenarios mimicking a ballast water treatment application (e.g., diverse hold times, temperatures, and salinities). Parameters chosen for testing these technologies in the past do not reflect conditions expected during ballast water treatment. Salinity, temperature, and oxidant concentration all influenced the response of amperometric sensors. Oxidation reduction potential (ORP) sensors performed more consistently than amperometric sensors under different conditions, but it may be difficult to correlate ORP and TRO measurements for the multitude of biogeochemical conditions found naturally in ballast water. N,N-diethyl-p-phenylenediamine (DPD) analyzers and amperometric sensors were also tested under intermittent sampling conditions mimicking a ballasting scenario, with cyclical dosage and discharge operations. When sampling was intermittent, amperometric sensors required excessive response and conditioning times, whereas DPD analyzers provided reasonable estimates of TRO under the ballasting scenario.
Chen, Peichen; Liu, Shih-Chia; Liu, Hung-I; Chen, Tse-Wei
2011-01-01
For quarantine sampling, it is of fundamental importance to determine the probability of finding an infestation when a specified number of units are inspected. In general, current sampling procedures assume a 100% (perfect) probability of detecting a pest if it is present within a unit. Ideally, a nematode extraction method should remove all stages of all species with 100% efficiency regardless of season, temperature, or other environmental conditions; in practice, however, no method approaches these criteria. In this study we determined the probability of detecting nematode infestations for quarantine sampling with imperfect extraction efficacy. The required sample size and the risk involved in detecting nematode infestations with imperfect extraction efficacy are also presented. Moreover, we developed a computer program to calculate confidence levels for different scenarios with varying proportions of infestation and efficacy of detection. In addition, a case study, presenting the extraction efficacy of the modified Baermann's Funnel method on Aphelenchoides besseyi, is used to exemplify the use of our program to calculate the probability of detecting nematode infestations in quarantine sampling with imperfect extraction efficacy. The result has important implications for quarantine programs and highlights the need for a very large number of samples if perfect extraction efficacy is not achieved in such programs. We believe that the results of the study will be useful for the determination of realistic goals in the implementation of quarantine sampling. PMID:22791911
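The detection-probability argument in this abstract follows a standard binomial calculation once extraction efficacy is folded in. A minimal sketch is given below, assuming a large lot and independent units; the function names and the example rates are hypothetical and are not taken from the authors' program.

```python
import math

def detection_probability(n_units, infestation_rate, extraction_efficacy):
    """P(detect >= 1 infested unit) under a binomial approximation.

    Each inspected unit is infested with probability `infestation_rate` and,
    if infested, is actually detected with probability `extraction_efficacy`
    (i.e. detection is imperfect).
    """
    p_hit = infestation_rate * extraction_efficacy
    return 1.0 - (1.0 - p_hit) ** n_units

def required_sample_size(confidence, infestation_rate, extraction_efficacy):
    """Smallest n giving at least the requested detection confidence."""
    p_hit = infestation_rate * extraction_efficacy
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_hit))

# example: 2% infestation, 60% extraction efficacy vs. a perfect method
print(required_sample_size(0.95, 0.02, 0.6))   # imperfect extraction -> many more samples
print(required_sample_size(0.95, 0.02, 1.0))   # perfect extraction
```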
Penetrator role in Mars sample strategy
NASA Technical Reports Server (NTRS)
Boynton, William; Dwornik, Steve; Eckstrom, William; Roalstad, David A.
1988-01-01
The application of the penetrator to a Mars Return Sample Mission (MRSM) offers direct advantages for meeting science objectives and ensuring mission safety. Based on engineering data and work currently conducted at Ball Aerospace Systems Division, the concept of penetrators as scientific instruments is entirely practical. The primary utilization of a penetrator for MRSM would be to optimize the selection of the sample site location and to help in selection of the actual sample to be returned to Earth. It is recognized that the amount of sample to be returned is very limited; therefore, the selection of the sample site is critical to the success of the mission. The following mission scenario is proposed. The site selection of a sample to be acquired will be performed by science working groups. A decision will be reached and a set of target priorities established based on data that provide geochemical, geophysical and geological information. The first task of a penetrator will be to collect data at 4 to 6 possible landing sites. The penetrator can include geophysical, geochemical, geological and engineering instruments to confirm that scientific data requirements at that site will be met. This in situ near real-time data, collected prior to final targeting of the lander, will ensure that the sample site is both scientifically valuable and reachable within the limits of the capability of the lander.
Huang, Yunda; Huang, Ying; Moodie, Zoe; Li, Sue; Self, Steve
2014-01-01
In biomedical research such as the development of vaccines for infectious diseases or cancer, measures from the same assay are often collected from multiple sources or laboratories. Measurement error that may vary between laboratories needs to be adjusted for when combining samples across laboratories. We incorporate such adjustment in comparing and combining independent samples from different labs via integration of external data, collected on paired samples from the same two laboratories. We propose: 1) normalization of individual-level data from two laboratories to the same scale via the expectation of true measurements conditional on the observed; 2) comparison of mean assay values between two independent samples in the Main study accounting for inter-source measurement error; and 3) sample size calculations for the paired-sample study so that hypothesis testing error rates are appropriately controlled in the Main study comparison. Because the goal is not to estimate the true underlying measurements but to combine data on the same scale, our proposed methods do not require that the true values for the error-prone measurements are known in the external data. Simulation results under a variety of scenarios demonstrate satisfactory finite sample performance of our proposed methods when measurement errors vary. We illustrate our methods using real ELISpot assay data generated by two HIV vaccine laboratories. PMID:22764070
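The normalization step described here, mapping one laboratory's values onto the other's scale via a conditional expectation estimated from external paired samples, can be illustrated with a simple linear-calibration sketch. Under joint normality the conditional expectation is linear, which is what the fit below assumes; the paired counts are invented, and the paper's full estimator additionally propagates the measurement-error variances.

```python
import numpy as np

def calibrate_to_reference(ref_paired, other_paired, other_new):
    """Put assay values from lab B on lab A's scale via E[A-value | B-value],
    estimated as a simple linear fit to external paired samples.

    Sketch of the normalization idea only, not the paper's estimator.
    """
    slope, intercept = np.polyfit(other_paired, ref_paired, deg=1)
    return intercept + slope * np.asarray(other_new)

# hypothetical paired ELISpot counts measured by both laboratories
lab_a = np.array([120., 180., 260., 310., 400.])
lab_b = np.array([100., 150., 220., 270., 350.])
print(calibrate_to_reference(lab_a, lab_b, [130., 300.]))   # lab-B values mapped to lab-A scale
```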
McCulloch, G; Dawson, L A; Ross, J M; Morgan, R M
2018-07-01
There is a need to develop a wider empirical research base to expand the scope for utilising the organic fraction of soil in forensic geoscience, and to demonstrate the capability of the analytical techniques used in forensic geoscience to discriminate samples at close proximity locations. The determination of wax markers from soil samples by GC analysis has been used extensively in court and is known to be effective in discriminating samples from different land use types. A new HPLC method for the analysis of the organic fraction of forensic sediment samples has also been shown recently to add value in conjunction with existing inorganic techniques for the discrimination of samples derived from close proximity locations. This study compares the ability of these two organic techniques to discriminate samples derived from close proximity locations and finds the GC technique to provide good discrimination at this scale, providing quantification of known compounds, whilst the HPLC technique offered a shorter and simpler sample preparation method and provided very good discrimination between groups of samples of different provenance in most cases. The use of both data sets together gave further improved accuracy rates in some cases, suggesting that a combined organic approach can provide added benefits in certain case scenarios and crime reconstruction contexts. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Franz, H. B.; McAdam, C.; Stern, J. C.; Archer, P. D., Jr.; Sutter, B.; Grotzinger, J. P.; Jones, J. H.; Leshin, L. A.; Mahaffy, P. R.; Ming, D. W.;
2013-01-01
The Sample Analysis at Mars (SAM) instrument suite on the Mars Science Laboratory (MSL) Curiosity rover got its first taste of solid Mars in the form of loose, unconsolidated materials (soil) acquired from an aeolian bedform designated Rocknest. Evolved gas analysis (EGA) revealed the presence of H2O as well as O-, C- and S-bearing phases in these samples. CheMin did not detect crystalline phases containing these gaseous species but did detect the presence of X-ray amorphous materials. In the absence of definitive mineralogical identification by CheMin, SAM EGA data can provide clues to the nature and/or mineralogy of volatile-bearing phases through examination of temperatures at which gases are evolved from solid samples. In addition, the isotopic composition of these gases, particularly when multiple sources contribute to a given EGA curve, may be used to identify possible formation scenarios and relationships between phases. Here we report C and S isotope ratios for CO2 and SO2 evolved from Rocknest soil samples as measured with SAM's quadrupole mass spectrometer (QMS).
Estimating acreage by double sampling using LANDSAT data
NASA Technical Reports Server (NTRS)
Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)
1982-01-01
Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling which included the fictional perfect procedure resulted in a more cost-effective combination when it was used with the lower-cost/higher-variance representative of the existing procedures.
EVA Suit Microbial Leakage Investigation Project
NASA Technical Reports Server (NTRS)
Falker, Jay; Baker, Christopher; Clayton, Ronald; Rucker, Michelle
2016-01-01
The objective of this project is to collect microbial samples from various EVA suits to determine how much microbial contamination is typically released during simulated planetary exploration activities. Data will be released to the planetary protection and science communities, and to advanced EVA system designers. In the best-case scenario, we will discover that very little microbial contamination leaks from our current or prototype suit designs; in the worst-case scenario, we will identify leak paths and learn more about what affects leakage--and in either case we will have a new, flight-certified swab tool for our EVA toolbox.
Stanley, F. E.; Byerly, Benjamin L.; Thomas, Mariam R.; ...
2016-03-31
Actinide isotope measurements are a critical signature capability in the modern nuclear forensics “toolbox”, especially when interrogating anthropogenic constituents in real-world scenarios. Unfortunately, established methodologies, such as traditional total evaporation via thermal ionization mass spectrometry, struggle to confidently measure low abundance isotope ratios (<10^-6) within already limited quantities of sample. Herein, we investigate the application of static, mixed array total evaporation techniques as a straightforward means of improving plutonium minor isotope measurements, which have been resistant to enhancement in recent years because of elevated radiologic concerns. Furthermore, results are presented for small sample (~20 ng) applications involving a well-known plutonium isotope reference material, CRM-126a, and compared with traditional total evaporation methods.
Agnihotri, S.; Rostam-Abadi, M.; Mota, J.P.B.; Rood, M.J.
2005-01-01
Hexane adsorption on single-walled carbon nanotube (SWNT) bundles was studied. Hexane adsorption capacities of two purified SWNT samples were gravimetrically determined at isothermal conditions of 25, 37, and 50 °C for 10^-4 < p/po < 0.9, where p/po is the hexane vapor pressure relative to its saturation pressure. Simulations of hexane adsorption under similar temperature and pressure conditions were performed on the external and internal sites of nanotube bundles with the same diameters as those in the experimental samples. The simulations could predict isotherms for a hypothetical scenario in which all nanotubes in a sample would be open. This is an abstract of a paper presented at the AIChE Annual Meeting and Fall Showcase (Cincinnati, OH 10/30/2005-11/4/2005).
Gehrig, Nicolas; Dragotti, Pier Luigi
2009-03-01
In this paper, we study the sampling and the distributed compression of the data acquired by a camera sensor network. The effective design of these sampling and compression schemes requires, however, the understanding of the structure of the acquired data. To this end, we show that the a priori knowledge of the configuration of the camera sensor network can lead to an effective estimation of such structure and to the design of effective distributed compression algorithms. For idealized scenarios, we derive the fundamental performance bounds of a camera sensor network and clarify the connection between sampling and distributed compression. We then present a distributed compression algorithm that takes advantage of the structure of the data and that outperforms independent compression algorithms on real multiview images.
Maddalena, Damian; Hoffman, Forrest; Kumar, Jitendra; Hargrove, William
2014-08-01
Sampling networks rarely conform to spatial and temporal ideals, often consisting of sampling points that are unevenly distributed and located in less than ideal locations due to access constraints, budget limitations, or political conflict. Quantifying the global, regional, and temporal representativeness of these networks by quantifying the coverage of network infrastructure highlights the capabilities and limitations of the data collected, facilitates upscaling and downscaling for modeling purposes, and improves the planning efforts for future infrastructure investment under current conditions and future modeled scenarios. The work presented here utilizes multivariate spatiotemporal clustering analysis and representativeness analysis for quantitative landscape characterization and assessment of the Fluxnet, RAINFOR, and ForestGEO networks. Results include ecoregions that highlight patterns of bioclimatic, topographic, and edaphic variables and quantitative representativeness maps of individual and combined networks.
Lopes Antunes, Ana Carolina; Dórea, Fernanda; Halasa, Tariq; Toft, Nils
2016-05-01
Surveillance systems are critical for accurate, timely monitoring and effective disease control. In this study, we investigated the performance of univariate process monitoring control algorithms in detecting changes in seroprevalence for endemic diseases. We also assessed the effect of sample size (number of sentinel herds tested in the surveillance system) on the performance of the algorithms. Three univariate process monitoring control algorithms were compared: the Shewhart p chart (PSHEW), the cumulative sum (CUSUM) and the exponentially weighted moving average (EWMA). Increases in seroprevalence were simulated from 0.10 to 0.15 and 0.20 over 4, 8, 24, 52 and 104 weeks. Each epidemic scenario was run with 2000 iterations. The cumulative sensitivity (CumSe) and timeliness were used to evaluate the algorithms' performance with a 1% false alarm rate. Using these performance evaluation criteria, it was possible to assess the accuracy and timeliness of the surveillance system working in real-time. The results showed that EWMA and PSHEW had higher CumSe (when compared with the CUSUM) from week 1 until the end of the period for all simulated scenarios. Changes in seroprevalence from 0.10 to 0.20 were more easily detected (higher CumSe) than changes from 0.10 to 0.15 for all three algorithms. Similar results were found with EWMA and PSHEW, based on the median time to detection. Changes in the seroprevalence were detected later with CUSUM, compared to EWMA and PSHEW, for the different scenarios. Increasing the sample size 10-fold halved the time to detection (CumSe=1), whereas increasing the sample size 100-fold reduced the time to detection by a factor of 6. This study investigated the performance of three univariate process monitoring control algorithms in monitoring endemic diseases. It was shown that automated systems based on these detection methods identified changes in seroprevalence at different times. Increasing the number of tested herds would lead to faster detection. However, the practical implications of increasing the sample size (such as the costs associated with the disease) should also be taken into account. Copyright © 2016 Elsevier B.V. All rights reserved.
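As an illustration of the kind of monitoring algorithm compared in this study, the sketch below implements an EWMA chart with time-varying control limits applied to weekly seroprevalence proportions. The smoothing weight, control-limit multiplier, baseline, and simulated step change are assumed values for demonstration and do not reproduce the study's simulation design.

```python
import numpy as np

def ewma_alarms(prevalence, baseline, herds_tested, lam=0.2, L=2.7):
    """Flag weeks where an EWMA of observed seroprevalence exceeds its upper control limit.

    prevalence   : weekly observed proportions of seropositive herds
    baseline     : in-control seroprevalence (e.g. 0.10)
    herds_tested : number of sentinel herds tested per week
    lam, L       : EWMA smoothing weight and limit multiplier (illustrative values)
    """
    sigma = np.sqrt(baseline * (1 - baseline) / herds_tested)   # binomial SD of a weekly proportion
    z = baseline
    alarms = []
    for t, p in enumerate(prevalence):
        z = lam * p + (1 - lam) * z
        # time-varying upper control limit for the EWMA statistic
        ucl = baseline + L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        alarms.append(z > ucl)
    return alarms

# simulated step change from 0.10 to 0.20 seroprevalence after week 20, 100 herds per week
rng = np.random.default_rng(1)
true_prevalence = np.concatenate([np.full(20, 0.10), np.full(20, 0.20)])
observed = rng.binomial(100, true_prevalence) / 100
print(ewma_alarms(observed, baseline=0.10, herds_tested=100))
```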
Gustavsson, Mikael; Kreuger, Jenny; Bundschuh, Mirco; Backhaus, Thomas
2017-11-15
This paper presents the ecotoxicological assessment and environmental risk evaluation of complex pesticide mixtures occurring in freshwater ecosystems in southern Sweden. The evaluation is based on exposure data collected between 2002 and 2013 by the Swedish pesticide monitoring program and includes 1308 individual samples, detecting mixtures of up to 53 pesticides (mode = 8). Pesticide mixture risks were evaluated using three different scenarios for non-detects (best-case, worst-case and using the Kaplan-Meier method). The risk of each scenario was analyzed using Swedish Water Quality Objectives (WQO) and trophic-level specific environmental thresholds. Using the Kaplan-Meier method, the environmental risk of 73% of the samples exceeded acceptable levels, based on an assessment using Concentration-Addition and WQOs for the individual pesticides. Algae were the most sensitive organism group. However, analytical detection limits, especially for insecticides, were insufficient to analyze concentrations at or near their WQOs. Thus, the risk of the analyzed pesticide mixtures to crustaceans and fish is systematically underestimated. Treating non-detects as being present at their individual limit of detection increased the estimated risk by a factor of 100 or more, compared to the best-case or the Kaplan-Meier scenario. Pesticide mixture risks are often driven by only 1-3 compounds. However, the risk-drivers (i.e., individual pesticides explaining the largest share of potential effects) differ substantially between sites and samples, and 83 of the 141 monitored pesticides need to be included in the assessment to account for 95% of the risk at all sites and years. Single-substance oriented risk mitigation measures that would ensure that each individual pesticide is present at a maximum of 95% of its individual WQO would also reduce the mixture risk, but only from a median risk quotient of 2.1 to a median risk quotient of 1.8. Also, acceptable total risk levels would still be exceeded in more than 70% of the samples. Copyright © 2017 Elsevier B.V. All rights reserved.
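The Concentration-Addition summation and the notion of risk drivers used in this assessment can be sketched in a few lines. The pesticide names, concentrations, and thresholds below are invented for illustration; only the arithmetic (summing toxic units and ranking compounds by their contribution) reflects the approach described.

```python
def mixture_risk_quotient(concentrations, thresholds):
    """Concentration Addition: sum of toxic units c_i / threshold_i.

    A quotient > 1 means the mixture exceeds acceptable levels even if every
    individual pesticide is below its own threshold (e.g. its WQO).
    """
    return sum(c / t for c, t in zip(concentrations, thresholds))

def risk_drivers(sample, share=0.95):
    """Return the smallest set of compounds explaining `share` of the summed risk."""
    units = sorted(((c / t, name) for name, (c, t) in sample.items()), reverse=True)
    total = sum(u for u, _ in units)
    kept, cum = [], 0.0
    for u, name in units:
        kept.append(name)
        cum += u
        if cum >= share * total:
            break
    return kept

# hypothetical monitoring sample: (concentration, water-quality objective) in ug/L
sample = {"diflufenican": (0.04, 0.01), "metribuzin": (0.05, 0.08), "MCPA": (0.2, 1.0)}
print(mixture_risk_quotient([c for c, _ in sample.values()], [t for _, t in sample.values()]))
print(risk_drivers(sample))   # a handful of compounds usually carry most of the risk
```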
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation). A larger value of the ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice have also been published for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method, but the distribution of the product method is recommended for use in practice because of its lower computational burden compared to the bootstrapping method. An R package has been developed for sample size determination with the product method in longitudinal mediation study designs.
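A hedged sketch of how power-by-simulation for the Sobel test works is given below. It uses a single-level, cross-sectional mediation model for brevity, whereas the study uses a multilevel longitudinal model, so the effect sizes, sample sizes, and resulting power values are purely illustrative.

```python
import numpy as np

def sobel_power(n, a=0.3, b=0.3, n_sim=2000, seed=0):
    """Monte Carlo power of the Sobel test for a simple X -> M -> Y mediation model.

    Simplified cross-sectional sketch (direct effect set to zero, b-path fit
    without adjusting for X); not the multilevel longitudinal model of the paper.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        a_hat, se_a = _slope_se(x, m)
        b_hat, se_b = _slope_se(m, y)
        se_ab = np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)   # Sobel standard error
        z = (a_hat * b_hat) / se_ab
        rejections += abs(z) > 1.96
    return rejections / n_sim

def _slope_se(x, y):
    """OLS slope of y on x and its standard error."""
    x = x - x.mean()
    beta = (x @ y) / (x @ x)
    resid = y - y.mean() - beta * x
    se = np.sqrt(resid @ resid / (len(x) - 2) / (x @ x))
    return beta, se

for n in (50, 100, 200):
    print(n, sobel_power(n))   # power grows with the number of subjects
```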
Egenberg, Signe; Masenga, Gileard; Bru, Lars Edvin; Eggebø, Torbjørn Moe; Mushi, Cecilia; Massay, Deodatus; Øian, Pål
2017-09-05
Tanzania has a relatively high maternal mortality ratio of 410 per 100,000 live births. Severe postpartum hemorrhage (PPH) is a major cause of maternal deaths, but in most cases, it is preventable. However, most pregnant women that develop PPH have no known risk factors. Therefore, preventive measures must be offered to all pregnant women. This study investigated the effects of multi-professional, scenario-based training on the prevention and management of PPH at a Tanzanian zonal consultant hospital. We hypothesized that scenario-based training could contribute to improved competence in PPH management, which would result in improved team efficiency and patient outcome. This quasi-experimental, pre- vs. post-interventional study involved on-site multi-professional, scenario-based PPH training, conducted in a two-week period in October 2013 and another 2 weeks in November 2014. Training teams included nurses, midwives, doctors, and medical attendants in the Department of Obstetrics and Gynecology. After technical skill training on the birthing simulator MamaNatalie®, the teams practiced in realistic scenarios on PPH. Each scenario was followed by debriefing and a repeated scenario. Afterwards, the group swapped roles and the observers became the participants. To evaluate the effects of training, we measured patient outcomes by determining blood transfusion rates. Patient data were collected by randomly sampling Medical birth registry files from the pre-training and post-training study periods (n = 1667 and 1641 files, respectively). Data were analyzed with the Chi-square test, Mann-Whitney U-test, and binary logistic regression. The random patient samples (n = 3308) showed that, compared to pre-training, post-training patients had a 47% drop in whole blood transfusion rates and significant increases in cesarean section rates, birth weights, and vacuum deliveries. The logistic regression analysis showed that transfusion rates were significantly associated with the time period (pre- vs. post-training), cesarean section, patients transferred from other hospitals, maternal age, and female genital mutilation and cutting. We found that multi-professional, scenario-based training was associated with a significant, 47% reduction in whole blood transfusion rates. These results suggested that training that included all levels of maternity staff, repeated sessions with realistic scenarios, and debriefing may have contributed to reduced blood transfusion rates in this high-risk maternity setting.
Orbiting Quarantine Facility. The Antaeus report, summary
NASA Technical Reports Server (NTRS)
1981-01-01
Requirements for handling extraterrestrial samples in an orbiting quarantine facility are examined. The major concepts and findings of the study are outlined. One approach that could be taken for receiving, containing, and analyzing samples returned from the surface of Mars in a mission analogous to the lunar return missions of the late 1960s and early 1970s is described. It constructs a general mission scenario and presents an overall systems design, including an approach to cost assessment. Particular attention is paid to the design of system hardware components and to the elaboration of an experimental protocol.
Information Foraging and Change Detection for Automated Science Exploration
NASA Technical Reports Server (NTRS)
Furlong, P. Michael; Dille, Michael
2016-01-01
This paper presents a new algorithm for autonomous on-line exploration in unknown environments. The objective is to free remote scientists from the possibly infeasible task of extensive preliminary site investigation prior to sending robotic agents. We simulate a common exploration task for an autonomous robot sampling the environment at various locations and compare performance against simpler control strategies. An extension is proposed and evaluated that further permits operation in the presence of environmental variability, in which the robot encounters a change in the distribution underlying sampling targets. Experimental results indicate a strong improvement in performance across varied parameter choices for the scenario.
EM in high-dimensional spaces.
Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim
2005-06-01
This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.
A lower bound on the number of cosmic ray events required to measure source catalogue correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolci, Marco; Romero-Wolf, Andrew; Wissel, Stephanie, E-mail: marco.dolci@polito.it, E-mail: Andrew.Romero-Wolf@jpl.nasa.gov, E-mail: swissel@calpoly.edu
2016-10-01
Recent analyses of cosmic ray arrival directions have resulted in evidence for a positive correlation with active galactic nuclei positions that has weak significance against an isotropic source distribution. In this paper, we explore the sample size needed to measure a highly statistically significant correlation to a parent source catalogue. We compare several scenarios for the directional scattering of ultra-high energy cosmic rays given our current knowledge of the galactic and intergalactic magnetic fields. We find significant correlations are possible for a sample of >1000 cosmic ray protons with energies above 60 EeV.
NASA Astrophysics Data System (ADS)
Bondi, M.; Marchã, M. J. M.; Dallacasa, D.; Stanghellini, C.
2001-08-01
The 200-mJy sample, defined by Marchã et al., contains about 60 nearby, northern, flat-spectrum radio sources. In particular, the sample has proved effective at finding nearby radio-selected BL Lac objects with radio luminosities comparable to those of X-ray-selected objects, and low-luminosity flat-spectrum weak emission-line radio galaxies (WLRGs). The 200-mJy sample contains 23 BL Lac objects (including 6 BL Lac candidates) and 19 WLRGs. We will refer to these subsamples as the 200-mJy BL Lac sample and the 200-mJy WLRG sample, respectively. We have started a systematic analysis of the morphological pc-scale properties of the 200-mJy radio sources using VLBI observations. This paper presents VLBI observations at 5 and 1.6GHz of 14 BL Lac objects and WLRGs selected from the 200-mJy sample. The pc-scale morphology of these objects is briefly discussed. We derive the radio beaming parameters of the 200-mJy BL Lac objects and WLRGs and compare them with those of other BL Lac samples and with a sample of FR I radio galaxies. The overall broad-band radio, optical and X-ray properties of the 200-mJy BL Lac sample are discussed and compared with those of other BL Lac samples, radio- and X-ray-selected. We find that the 200-mJy BL Lac objects fill the gap between HBL and LBL objects in the colour-colour plot, and have intermediate αXOX as expected in the spectral energy distribution unification scenario. Finally, we briefly discuss the role of the WLRGs.
Sample selection via angular distance in the space of the arguments of an artificial neural network
NASA Astrophysics Data System (ADS)
Fernández Jaramillo, J. M.; Mayerle, R.
2018-05-01
In the construction of an artificial neural network (ANN), a proper splitting of the available samples plays a major role in the training process. This selection of subsets for training, testing and validation affects the generalization ability of the neural network. The number of samples also has an impact on the time required for the design of the ANN and for the training. This paper introduces an efficient and simple method for reducing the set of samples used for training a neural network. The method reduces the time required to calculate the network coefficients, while keeping the diversity and avoiding overtraining the ANN due to the presence of similar samples. The proposed method is based on the calculation of the angle between two vectors, each one representing one input of the neural network. When the angle formed among samples is smaller than a defined threshold, only one input is accepted for the training. The accepted inputs are scattered throughout the sample space. Tidal records are used to demonstrate the proposed method. The results of a cross-validation show that with few inputs the outputs are not accurate and depend on the selection of the first sample, but as the number of inputs increases the accuracy improves and the differences among scenarios with different starting samples are greatly reduced. A comparison with the K-means clustering algorithm shows that for this application the proposed method, with a smaller number of samples, produces a more accurate network.
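The selection rule described here (accept a training sample only if its angle to every previously accepted sample exceeds a threshold) can be written down directly. The threshold and the toy near-duplicate data below are assumptions for illustration, not the paper's tidal records or tuning.

```python
import numpy as np

def select_by_angle(samples, max_angle_deg=5.0):
    """Greedy training-set reduction: keep a sample only if its angle to every
    already-accepted sample exceeds the threshold.

    `samples` is an (n_samples, n_features) array of ANN input vectors.
    """
    accepted = []
    for x in samples:
        keep = True
        for y in accepted:
            cos = np.clip(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)), -1.0, 1.0)
            if np.degrees(np.arccos(cos)) < max_angle_deg:
                keep = False      # too similar to a sample already in the training set
                break
        if keep:
            accepted.append(x)
    return np.array(accepted)

# toy example with near-duplicate input windows appended to a base set
rng = np.random.default_rng(0)
base = rng.normal(size=(20, 24))
samples = np.vstack([base, base + 0.001 * rng.normal(size=base.shape)])  # 20 near-duplicates
print(select_by_angle(samples).shape)   # the near-duplicates are filtered out
```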
Burger, Emily A; Sy, Stephen; Nygård, Mari; Kim, Jane J
2017-01-01
Human papillomavirus (HPV) testing allows women to self-collect cervico-vaginal cells at home (i.e., self-sampling). Using primary data from a randomized pilot study, we evaluated the long-term consequences and cost-effectiveness of using self-sampling to improve participation in routine cervical cancer screening in Norway. We compared a strategy reflecting current screening participation (using reminder letters) to strategies that involved mailing self-sampling device kits to women noncompliant with screening within a 5- or 10-year period under two scenarios: (A) self-sampling respondents had moderate under-screening histories, or (B) respondents to self-sampling had moderate and severe under-screening histories. Model outcomes included quality-adjusted life-years (QALY) and lifetime costs. The "most cost-effective" strategy was identified as the strategy just below $100,000 per QALY gained. Mailing self-sampling device kits to all women noncompliant with screening within a 5- or 10-year period can be more effective and less costly than the current reminder letter policy; however, the optimal self-sampling strategy was dependent on the profile of self-sampling respondents. For example, "10-yearly self-sampling" is preferred ($95,500 per QALY gained) if "5-yearly self-sampling" could only attract moderate under-screeners; however, "5-yearly self-sampling" is preferred if this strategy could additionally attract severe under-screeners. Targeted self-sampling of noncompliers likely represents good value-for-money; however, the preferred strategy is contingent on the screening histories and compliance of respondents. The magnitude of the health benefit and the optimal self-sampling strategy are dependent on the profile and behavior of respondents. Health authorities should understand these factors prior to selecting and implementing a self-sampling policy. Cancer Epidemiol Biomarkers Prev; 26(1); 95-103. ©2016 American Association for Cancer Research.
Gao, Yi-Xiong; Zhang, Hongxia; Yu, Xinwei; He, Jia-lu; Shang, Xiaohong; Li, Xiaowei; Zhao, Yunfeng; Wu, Yongning
2014-06-04
The aim of this study was to assess the net neurodevelopmental effect of maternal consumption of marine fish. A total of thirty-one species were collected from Zhoushan, China. The net IQ point gain was assessed by the FAO/WHO deterministic approach and, where necessary, by probabilistic computation. Results of the deterministic assessment of two samples belonging to Scoliodon sorrakowah showed negative IQ point gain in both the common and extreme consumption scenarios (175 and 450 g/week, respectively); the net IQ gain for both consumption scenarios of the other species was positive. Both consumption scenarios of Scoliodon sorrakowah showed a beneficial neurodevelopmental effect according to the probabilistic computation (95% CI for mean net IQ gain: 0.0536-0.0554 and 0.1377-0.1425, respectively). Except for Scoliodon sorrakowah, this study indicates that both consumption scenarios of the other studied species would be recommended according to the FAO/WHO approach. As a precaution, neither consumption scenario of Scoliodon sorrakowah would be recommended.
Offredy, Maxine; Kendall, Sally; Goodman, Claire
2008-06-01
Nurses have been involved in prescribing in England since 1996, and to date over 41,000 nurses are registered with the Nursing and Midwifery Council as prescribers. The majority of evaluative research on nurse prescribing is descriptive and relies on self-report and assessment of patient satisfaction. To explore and test nurse prescribers' pharmacological knowledge and decision-making. An exploratory approach was utilised to test the usefulness of patient scenarios in addressing the reasons why nurses decide whether or not to prescribe. Semi-structured interviews with nurse prescribers using patient scenarios were used as proxy methods of assessment of how nurses made their prescribing decisions. Two primary care trusts in the southeast of England were the settings for this study. Purposive sampling to ensure a mixed group of prescribers was used to enable detailed exploration of the research objectives and to obtain in-depth understanding of the complex activities involved in nurse prescribing. Interviews and case scenarios. The use of cognitive continuum theory guided the analysis. The majority of participants were unable to identify the issues involved in all the scenarios; they also failed to provide an acceptable solution to the problem, suggesting that they would refer the patient to the general practitioner. A similar number described themselves as 'very confident', while seven participants felt that they were 'not confident' in dealing with medication issues, four of whom were practising prescribers. The effects of social and institutional factors are important in the decision-making process. A lack of appropriate pharmacological knowledge coupled with a lack of confidence in prescribing was demonstrated. The scenarios used in this study indicate that nurses are perhaps knowledgeable in their small area of practice but flounder outside it. Further research could be conducted with a larger sample and with more scenarios to explore the decision-making and the pharmacological knowledge base of nurse prescribers, particularly in the light of government policy to extend prescribing rights to non-medical prescribers, including pharmacists.
Wolf-Rayet spin at low metallicity and its implication for black hole formation channels
NASA Astrophysics Data System (ADS)
Vink, Jorick S.; Harries, Tim J.
2017-07-01
Context. The spin of Wolf-Rayet (WR) stars at low metallicity (Z) is most relevant for our understanding of gravitational wave sources, such as GW 150914, and of the incidence of long-duration gamma-ray bursts (GRBs). Two scenarios have been suggested for both phenomena: one of them involves rapid rotation and quasi-chemically homogeneous evolution (CHE), and the other invokes classical evolution through mass loss in single and binary systems. Aims: The stellar spin of WR stars might enable us to test these two scenarios. In order to obtain empirical constraints on black hole progenitor spin, we infer wind asymmetries in all 12 known WR stars in the Small Magellanic Cloud (SMC) at Z = 1/5 Z⊙, and within a significantly enlarged sample of single and binary WR stars in the Large Magellanic Cloud (LMC, at Z = 1/2 Z⊙), thereby tripling the sample of Vink from 2007. This brings the total LMC sample to 39, making it appropriate for comparison to the Galactic sample. Methods: We measured WR wind asymmetries with VLT-FORS linear spectropolarimetry, a tool that is uniquely poised to perform such tasks in extragalactic environments. Results: We report the detection of new line effects in the LMC WN star BAT99-43 and the WC star BAT99-70, along with the well-known WR LBV HD 5980 in the SMC, which might be undergoing chemically homogeneous evolution. With the previously reported line effects in the late-type WNL (Ofpe/WN9) objects BAT99-22 and BAT99-33, this brings the total number of LMC WR stars with line effects to four, i.e., a frequency of 10%. Perhaps surprisingly, the incidence of line effects amongst low-Z WR stars is not found to be any higher than amongst the Galactic WR sample, challenging the rotationally induced CHE model. Conclusions: As WR mass loss is likely Z-dependent, our Magellanic Cloud line-effect WR stars may maintain their surface rotation and fulfill the basic conditions for producing long GRBs, either via the classical post-red supergiant or luminous blue variable channel, or resulting from CHE due to physics specific to very massive stars.
Health risk for children and adults consuming apples with pesticide residue.
Lozowicka, Bozena
2015-01-01
The presence of pesticide residues in apples raises serious health concerns, especially when the fresh fruits are consumed by children, who are particularly vulnerable to pesticide hazards. This study presents the results from nine years of investigation (2005-2013) of 696 samples of Polish apples for 182 pesticides using gas and liquid chromatography and spectrophotometric techniques. Only 33.5% of the samples did not contain residues above the limit of detection. In 66.5% of the samples, 34 pesticides were detected, of which the maximum residue level (MRL) was exceeded in 3%. Multiple residues were present in 35% of the samples, with two to six pesticides, and one sample contained seven compounds. A study of the health risk for children, adults and the general population consuming apples with these pesticides was performed. The pesticide residue data were combined with apple consumption at the 97.5th percentile and the mean of the diet. A deterministic model was used to assess the chronic and acute exposures based on the average and high concentrations of residues. Additionally, a "worst-case scenario" and an "optimistic-case scenario" were used to assess the chronic risk. In certain cases, the total dietary pesticide intake calculated from the residue levels observed in apples exceeds the toxicological criteria. Children were the group most exposed to the pesticides, and the greatest short-term hazard stemmed from flusilazole at 624%, dimethoate at 312%, tebuconazole at 173%, and chlorpyrifos-methyl and captan at 104% of the Acute Reference Dose (ARfD) each. In the cumulative chronic exposure, among the 17 groups of compounds studied, organophosphate insecticides constituted 99% of the acceptable daily intake (ADI). The results indicate that the occurrence of pesticide residues in apples could not be considered a serious public health problem. Nevertheless, continuous monitoring and tighter regulation of pesticide residues are recommended. Copyright © 2014 Elsevier B.V. All rights reserved.
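The short-term hazard percentages quoted here are ratios of estimated intake to the Acute Reference Dose. A simplified, IESTI-style worked example is sketched below; the residue level, portion size, body weight, and ARfD value are hypothetical and are not the study's data.

```python
def percent_arfd(residue_mg_per_kg, portion_kg, body_weight_kg, arfd_mg_per_kg_bw):
    """Acute exposure expressed as a percentage of the Acute Reference Dose.

    Simplified calculation: intake = residue x large portion / body weight.
    All inputs in the example below are illustrative assumptions.
    """
    intake = residue_mg_per_kg * portion_kg / body_weight_kg   # mg per kg body weight per day
    return 100.0 * intake / arfd_mg_per_kg_bw

# hypothetical child scenario: 0.3 kg apple portion, 16 kg body weight
print(round(percent_arfd(residue_mg_per_kg=0.02, portion_kg=0.3,
                         body_weight_kg=16.0, arfd_mg_per_kg_bw=0.005), 1))  # % of ARfD
```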
NASA Astrophysics Data System (ADS)
Sadegh, M.; Moftakhari, H.; AghaKouchak, A.
2017-12-01
Many natural hazards are driven by multiple forcing variables, and concurrent/consecutive extreme events significantly increase the risk of infrastructure/system failure. It is a common practice to use univariate analysis based upon a perceived ruling driver to estimate design quantiles and/or return periods of extreme events. A multivariate analysis, however, permits modeling the simultaneous occurrence of multiple forcing variables. In this presentation, we introduce the Multi-hazard Assessment and Scenario Toolbox (MhAST), which comprehensively analyzes marginal and joint probability distributions of natural hazards. MhAST also offers a wide range of scenarios of return periods and design levels and their likelihoods. The contribution of this study is four-fold: 1. comprehensive analysis of the marginal and joint probability of multiple drivers through 17 continuous distributions and 26 copulas, 2. multiple scenario analysis of concurrent extremes based upon the most likely joint occurrence, one ruling variable, and weighted random sampling of joint occurrences with similar exceedance probabilities, 3. weighted average scenario analysis based on an expected event, and 4. uncertainty analysis of the most likely joint occurrence scenario using a Bayesian framework.
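One of the scenario types mentioned, a joint ("AND") return period for two drivers exceeding their design levels simultaneously, can be sketched with a copula. The example below uses a Gaussian copula from scipy as a stand-in for whichever of the toolbox's 17 distributions and 26 copulas fits best; the probabilities and correlation are illustrative, and this is not MhAST code.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def and_return_period(u, v, rho, interarrival_years=1.0):
    """Joint "AND" return period of exceeding two design levels simultaneously.

    u, v : marginal non-exceedance probabilities of the two drivers at their design levels
    rho  : Gaussian-copula correlation parameter (assumed here for illustration)
    """
    cop = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    c_uv = cop.cdf([norm.ppf(u), norm.ppf(v)])     # copula value C(u, v)
    p_and = 1.0 - u - v + c_uv                     # P(X > x AND Y > y)
    return interarrival_years / p_and

# e.g. river discharge and coastal water level, each at its univariate 100-year level
print(and_return_period(0.99, 0.99, rho=0.5))      # the joint event is much rarer than 100 years
```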
ECCM Scheme against Interrupted Sampling Repeater Jammer Based on Parameter-Adjusted Waveform Design
Wei, Zhenhua; Peng, Bo; Shen, Rui
2018-01-01
Interrupted sampling repeater jamming (ISRJ) is an effective way of deceiving coherent radar sensors, especially for linear frequency modulated (LFM) radar. In this paper, for a simplified scenario with a single jammer, we propose a dynamic electronic counter-countermeasure (ECCM) scheme based on jammer parameter estimation and transmitted signal design. Firstly, the LFM waveform is transmitted to estimate the main jamming parameters by investigating the discontinuities of the ISRJ’s time-frequency (TF) characteristics. Then, a parameter-adjusted intra-pulse frequency coded signal, whose ISRJ signal after matched filtering forms only a single false target, is designed adaptively according to the estimated parameters, i.e., sampling interval, sampling duration and repeater times. Ultimately, for typical jamming scenes with different jamming-to-signal ratio (JSR) and duty cycle, we propose two particular ISRJ suppression approaches. Simulation results validate the effective performance of the proposed scheme for countering the ISRJ, and the trade-off relationship between the two approaches is demonstrated. PMID:29642508
Enhancement of low-temperature thermometry by strong coupling
NASA Astrophysics Data System (ADS)
Correa, Luis A.; Perarnau-Llobet, Martí; Hovhannisyan, Karen V.; Hernández-Santana, Senaida; Mehboudi, Mohammad; Sanpera, Anna
2017-12-01
We consider the problem of estimating the temperature T of a very cold equilibrium sample. The temperature estimates are drawn from measurements performed on a quantum Brownian probe strongly coupled to it. We model this scenario by resorting to the canonical Caldeira-Leggett Hamiltonian and find analytically the exact stationary state of the probe for arbitrary coupling strength. In general, the probe does not reach thermal equilibrium with the sample, due to their nonperturbative interaction. We argue that this is advantageous for low-temperature thermometry, as we show in our model that (i) the thermometric precision at low T can be significantly enhanced by strengthening the probe-sample coupling, (ii) the variance of a suitable quadrature of our Brownian thermometer can yield temperature estimates with nearly minimal statistical uncertainty, and (iii) the spectral density of the probe-sample coupling may be engineered to further improve thermometric performance. These observations may find applications in practical nanoscale thermometry at low temperatures—a regime which is particularly relevant to quantum technologies.
Pearson-Readhead survey from space
NASA Astrophysics Data System (ADS)
Preston, R. A.; Lister, M. L.; Tingay, S. J.; Piner, B. G.; Murphy, D. W.; Jones, D. L.; Meier, D. L.; Pearson, T. J.; Readhead, A. C. S.; Hirabayashi, H.; Kobayashi, H.; Inoue, M.
2001-01-01
We are using the VSOP space VLBI mission to observe a complete sample of Pearson-Readhead survey sources at 4.8 GHz to determine core brightness temperatures and pc-scale jet properties. The Pearson-Readhead sample has been used for extensive ground-based VLBI survey studies, and is ideal for a VSOP survey because the sources are strong, the VSOP u-v coverages are especially good above +35° declination, and multi-epoch ground-based VLBI data and other existing supporting data exceed that of any other sample. To date we have imaged 27 of the 31 objects in our sample. Our preliminary results show that the majority of objects contain strong core components that remain unresolved on baselines of ~30,000 km. The brightness temperatures of several cores significantly exceed 10^12 K, which is indicative of highly relativistically beamed emission. We discuss correlations with several other beaming indicators, such as variability and spectral index, that support this scenario. This research was performed in part at the Jet Propulsion Laboratory, California Institute of Technology, under contract to NASA.
The ARIEL mission reference sample
NASA Astrophysics Data System (ADS)
Zingales, Tiziano; Tinetti, Giovanna; Pillitteri, Ignazio; Leconte, Jérémy; Micela, Giuseppina; Sarkar, Subhajit
2018-02-01
The ARIEL (Atmospheric Remote-sensing Exoplanet Large-survey) mission concept is one of the three M4 mission candidates selected by the European Space Agency (ESA) for a Phase A study, competing for a launch in 2026. ARIEL has been designed to study the physical and chemical properties of a large and diverse sample of exoplanets and, through those, understand how planets form and evolve in our galaxy. Here we describe the assumptions made to estimate an optimal sample of exoplanets - including already known exoplanets and expected ones yet to be discovered - observable by ARIEL and define a realistic mission scenario. To achieve the mission objectives, the sample should include gaseous and rocky planets with a range of temperatures around stars of different spectral type and metallicity. The current ARIEL design enables the observation of ~1000 planets, covering a broad range of planetary and stellar parameters, during its four-year mission lifetime. This nominal list of planets is expected to evolve over the years depending on new exoplanet discoveries.
Poverty among Elderly in India
ERIC Educational Resources Information Center
Srivastava, Akanksha; Mohanty, Sanjay K.
2012-01-01
Using consumption expenditure data of the National Sample Survey 2004-2005, this paper estimates the size of elderly poor and tests the hypotheses that elderly households are not economically better-off compared to non-elderly households in India. Poverty estimates are derived under three scenarios--by applying the official cut-off point of the…
This research addresses both the effects and mechanisms by which current and future climate conditions affect the risk factors related to allergic airway disease in humans. Our intensive sampling of pollen production, output, and potency in ecologically distinct ragweed popul...
Profiling Local Optima in K-Means Clustering: Developing a Diagnostic Technique
ERIC Educational Resources Information Center
Steinley, Douglas
2006-01-01
Using the cluster generation procedure proposed by D. Steinley and R. Henson (2005), the author investigated the performance of K-means clustering under the following scenarios: (a) different probabilities of cluster overlap; (b) different types of cluster overlap; (c) varying sample sizes, clusters, and dimensions; (d) different multivariate…
Flight dynamics system software development environment (FDS/SDE) tutorial
NASA Technical Reports Server (NTRS)
Buell, John; Myers, Philip
1986-01-01
A sample development scenario using the Flight Dynamics System Software Development Environment (FDS/SDE) is presented. The SDE uses a menu-driven, fill-in-the-blanks format that provides online help at all steps, thus eliminating lengthy training and allowing immediate use of this new software development tool.
Fear of Failure, Self-Handicapping, and Negative Emotions in Response to Failure
ERIC Educational Resources Information Center
Bartels, Jared M.; Herman, William E.
2011-01-01
Research suggests that students who fear failure are likely to utilize cognitive strategies such as self-handicapping that serve to perpetuate failure. Such devastating motivational dispositions clearly limit academic success. The present study examined negative emotional responses to scenarios involving academic failure among a sample of…
Better for Both--Thoughts on Teacher-Pupil Interaction.
ERIC Educational Resources Information Center
Kilburn, John
1978-01-01
To remove the adversary emphasis from pupil-teacher interactions, the author presents a simple model, showing how an intervention can potentially make a situation better, worse, or unchanged for the pupil and the teacher. A sample scenario is provided of two teachers dealing with a misbehaving child. (SJL)
Verdoodt, F; Jentschke, M; Hillemanns, P; Racey, C S; Snijders, P J F; Arbyn, M
2015-11-01
Population coverage for cervical cancer screening is an important determinant explaining differences in the incidence of cervical cancer between countries. Offering devices for self-sampling has the potential to increase participation of hard-to-reach women. A systematic review and meta-analysis were performed to evaluate the participation after an invitation including a self-sampling device (self-sampling arm) versus an invitation to have a sample taken by a health professional (control arm), sent to under-screened women. Sixteen randomised studies were found eligible. In an intention-to-treat analysis, the pooled participation in the self-sampling arm was 23.6% (95% confidence interval (CI)=20.2-27.3%), when self-sampling kits were sent by mail to all women, versus 10.3% (95% CI=6.2-15.2%) in the control arm (participation difference: 12.6% [95% CI=9.3-15.9]). When women had to opt-in to receive the self-sampling device, as used in three studies, the pooled participation was not higher in the self-sampling compared to the control arm (participation difference: 0.2% [95% CI=-4.5-4.9%]). An increased participation was observed in the self-sampling arm compared to the control arm, if self-sampling kits were sent directly to women at their home address. However, the size of the effect varied substantially among studies. Since participation was similar in both arms when women had to opt-in, future studies are warranted to discern opt-in scenarios that are most acceptable to women. Copyright © 2015 Elsevier Ltd. All rights reserved.
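The pooled participation differences reported here come from a meta-analysis of arm-level proportions. A minimal DerSimonian-Laird random-effects sketch is shown below; the three trials and their counts are invented, and this is not the meta-analysis code used in the review.

```python
import numpy as np

def pooled_risk_difference(events_self, n_self, events_ctrl, n_ctrl):
    """DerSimonian-Laird random-effects pooling of participation differences.

    events_self/n_self: respondents and invitees in the self-sampling arms,
    events_ctrl/n_ctrl: respondents and invitees in the control (recall) arms.
    Illustrative re-implementation with invented data.
    """
    p1 = np.asarray(events_self, float) / np.asarray(n_self, float)
    p0 = np.asarray(events_ctrl, float) / np.asarray(n_ctrl, float)
    d = p1 - p0                                             # per-study participation difference
    v = p1 * (1 - p1) / np.asarray(n_self) + p0 * (1 - p0) / np.asarray(n_ctrl)
    w = 1 / v
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)                      # Cochran's Q
    tau2 = max(0.0, (q - (len(d) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1 / (v + tau2)                                 # random-effects weights
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# three hypothetical mail-to-all trials (self-sampling arm vs. recall-letter arm)
print(pooled_risk_difference([300, 150, 90], [1200, 800, 400],
                             [120, 70, 45], [1200, 800, 400]))
```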
Tank 241-AY-101 Privatization Push Mode Core Sampling and Analysis Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
TEMPLETON, A.M.
2000-05-19
This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for samples obtained from tank 241-AY-101. The purpose of this sampling event is to obtain information about the characteristics of the contents of 241-AY-101 required to satisfy "Data Quality Objectives For RPP Privatization Phase I: Confirm Tank T Is An Appropriate Feed Source For High-Level Waste Feed Batch X (HLW DQO)" (Nguyen 1999a), "Data Quality Objectives For TWRS Privatization Phase I: Confirm Tank T Is An Appropriate Feed Source For Low-Activity Waste Feed Batch X (LAW DQO)" (Nguyen 1999b), "Low Activity Waste and High-Level Waste Feed Data Quality Objectives (L&H DQO)" (Patello et al. 1999), and "Characterization Data Needs for Development, Design, and Operation of Retrieval Equipment Developed through the Data Quality Objective Process (Equipment DQO)" (Bloom 1996). Special instructions regarding support to the LAW and HLW DQOs are provided by Baldwin (1999). Push mode core samples will be obtained from risers 15G and 150 to provide sufficient material for the chemical analyses and tests required to satisfy these data quality objectives. The 222-S Laboratory will extrude core samples; composite the liquids and solids; perform chemical analyses on composite and segment samples; archive half-segment samples; and provide sub-samples to the Process Chemistry Laboratory. The Process Chemistry Laboratory will prepare test plans and perform process tests to evaluate the behavior of the 241-AY-101 waste undergoing the retrieval and treatment scenarios defined in the applicable DQOs. Requirements for analyses of samples originating in the process tests will be documented in the corresponding test plans and are not within the scope of this SAP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annette Rohr
2004-12-02
This report documents progress made on the subject project during the period of March 1, 2004 through August 31, 2004. The TERESA Study is designed to investigate the role played by specific emissions sources and components in the induction of adverse health effects by examining the relative toxicity of coal combustion and mobile source (gasoline and/or diesel engine) emissions and their oxidative products. The study involves on-site sampling, dilution, and aging of coal combustion emissions at three coal-fired power plants, as well as mobile source emissions, followed by animal exposures incorporating a number of toxicological endpoints. The DOE-EPRI Cooperative Agreement (henceforth referred to as "the Agreement") for which this technical progress report has been prepared covers the analysis and interpretation of the field data collected at the first power plant (henceforth referred to as Plant 0, and located in the Upper Midwest), followed by the performance and analysis of similar field experiments at two additional coal-fired power plants (Plants 1 and 2) utilizing different coal types and with different plant configurations. Significant progress was made on the Project during this reporting period, with field work being initiated at Plant 0. Initial testing of the stack sampling system and reaction apparatus revealed that primary particle concentrations were lower than expected in the emissions entering the mobile chemical laboratory. Initial animal exposures to primary emissions were carried out (Scenario 1) to ensure successful implementation of all study methodologies and toxicological assessments. Results indicated no significant toxicological effects in response to primary emissions exposures. Exposures were then carried out to diluted, oxidized, neutralized emissions with the addition of secondary organic aerosol (Scenario 5), both during the day and also at night when primary particle concentrations in the sampled stack emissions tended to be slightly higher. Exposure concentrations were about 249 µg/m³ PM, of which 87 µg/m³ was sulfate and approximately 110 µg/m³ was secondary organic material (≈44%). Results indicated subtle differences in breathing pattern between exposed and control (sham) animals, but no differences in other endpoints (in vivo chemiluminescence, blood cytology, bronchoalveolar lavage fluid analysis). It was suspected that primary particle losses may have been occurring in the venturi aspirator/orifice sampler; therefore, the stack sampling system was redesigned. The modified system resulted in no substantial increase in particle concentration in the emissions, leading us to conclude that the electrostatic precipitator at the power plant has high efficiency, and that the sampled emissions are representative of those exiting the stack into the atmosphere. This is important, since the objective of the Project is to carry out exposures to realistic coal combustion-derived secondary PM arising from power plants. During the next reporting period, we will document and describe the remainder of the fieldwork at Plant 0, which we expect to be complete by mid-November 2004. This report will include detailed Phase I toxicological findings for all scenarios run, and Phase II toxicological findings for one selected scenario. Depending upon the outcome of the ongoing fieldwork at Plant 0 (i.e. the biological effects observed), not all the proposed scenarios may be evaluated.
The next report is also expected to include preliminary field data for Plant 1, located in the Southeast.
Boo, Gianluca; Leyk, Stefan; Fabrikant, Sara Irina; Pospischil, Andreas; Graf, Ramona
2017-05-11
Epidemiological research on canine cancers could inform comparative studies of environmental determinants for a number of human cancers. However, such an approach is currently limited because canine cancer data sources are still few in number and often incomplete. Incompleteness is typically due to under-ascertainment of canine cancers, mainly because dog owners commonly do not seek the veterinary care that would lead to a diagnosis. Deeper knowledge of under-ascertainment is critical for modelling canine cancer incidence, as an indication of zero incidence might merely reflect the absence of diagnostic examinations within a given sample unit. In the present case study, we investigated the effects of such structural zeros on models of canine cancer incidence. In doing so, we contrasted two scenarios for modelling incidence data retrieved from the Swiss Canine Cancer Registry. The first scenario was based on the complete enumeration of incidence data for all Swiss municipal units. The second scenario was based on a filtered sample that systematically discarded structural zeros in those municipal units where no diagnostic examination had been performed. By means of cross-validation, we assessed and contrasted the statistical performance and predictive power of the two modelling scenarios. This analytical step allowed us to demonstrate that structural zeros impact on the generalisability of the model of canine cancer incidence, thus challenging future comparative studies of canine and human cancers. The results of this case study show that increased awareness about the effects of structural zeros is critical to epidemiological research.
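To make the role of structural zeros concrete, the following minimal simulation (with hypothetical municipal counts and an assumed true incidence) shows how zero counts that merely reflect missing diagnostic examinations drag a naive incidence estimate downward, which is essentially the contrast between the two modelling scenarios described above.

```python
import numpy as np

rng = np.random.default_rng(42)

n_units = 2000                               # hypothetical municipal units
dogs = rng.integers(50, 500, n_units)        # dogs per unit
true_rate = 0.004                            # assumed true cancer incidence per dog-year

# Units without any diagnostic examination report zero cases by construction.
examined = rng.random(n_units) < 0.6         # 60% of units have veterinary diagnostics
cases = np.where(examined, rng.poisson(true_rate * dogs), 0)

# Scenario 1: complete enumeration, structural zeros included.
rate_all = cases.sum() / dogs.sum()

# Scenario 2: filtered sample, structural zeros discarded.
rate_filtered = cases[examined].sum() / dogs[examined].sum()

print(f"true rate      : {true_rate:.4f}")
print(f"all units      : {rate_all:.4f}   (biased low by structural zeros)")
print(f"examined only  : {rate_filtered:.4f}")
```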
Gaspar, Paulo; Seixas, Susana; Rocha, Jorge
2004-04-01
The genetic variation at a compound nonrecombining haplotype system, consisting of the previously reported SB19.3 Alu insertion polymorphism and a newly identified adjacent short tandem repeat (STR), was studied in population samples from Portugal and São Tomé (Gulf of Guinea, West Africa). Age estimates based on the linked microsatellite variation suggest that the Alu insertion occurred about 190,000 years ago. In accordance with the global patterns of distribution of human genetic variation, the highest haplotype diversity was found in the African sample. This excess in African diversity was due to both a substantial reduction in heterozygosity at the Alu polymorphism and a lower STR variability associated with the predominant Alu insertion allele in the Portuguese sample. The high level of interpopulation differentiation observed at the Alu locus (F(ST) = 0.43) was interpreted under alternative selective and demographic scenarios. The need for compatibility between patterns of variation at the STR and Alu loci could be used to restrict the range of selection coefficients in selection-driven genetic hitchhiking frameworks and to favor demographic scenarios dominated by larger pre-expansion African population sizes. Taken together, the data show that the SB19.3 Alu-STR system is an informative marker that can be included in more extended batteries of compound haplotypes used in human evolutionary studies.
Li, Yanhe; Guo, Xianwu; Chen, Liping; Bai, Xiaohui; Wei, Xinlan; Zhou, Xiaoyun; Huang, Songqian; Wang, Weimin
2015-01-01
Identifying the dispersal pathways of an invasive species is useful for adopting the appropriate strategies to prevent and control its spread. However, these processes are exceedingly complex. So, it is necessary to apply new technology and collect representative samples for analysis. This study used Approximate Bayesian Computation (ABC) in combination with traditional genetic tools to examine extensive sample data and historical records to infer the invasion history of the red swamp crayfish, Procambarus clarkii, in China. The sequences of the mitochondrial control region and the proPOx intron in the nuclear genome of samples from 37 sites (35 in China and one each in Japan and the USA) were analyzed. The results of combined scenarios testing and historical records revealed a much more complex invasion history in China than previously believed. P. clarkii was most likely originally introduced into China from Japan from an unsampled source, and the species then expanded its range primarily into the middle and lower reaches and, to a lesser extent, into the upper reaches of the Changjiang River in China. No transfer was observed from the upper reaches to the middle and lower reaches of the Changjiang River. Human-mediated jump dispersal was an important dispersal pathway for P. clarkii. The results provide a better understanding of the evolutionary scenarios involved in the rapid invasion of P. clarkii in China. PMID:26132567
Cheng, Keding; Sloan, Angela; McCorrister, Stuart; Peterson, Lorea; Chui, Huixia; Drebot, Mike; Nadon, Celine; Knox, J David; Wang, Gehua
2014-12-01
The need for rapid and accurate H typing is evident during Escherichia coli outbreak situations. This study explores the transition of MS-H, a method originally developed for rapid H antigen typing of E. coli using LC-MS/MS of flagella digests of reference strains and some clinical strains, to E. coli isolates in a clinical scenario through quantitative analysis and method validation. Motile and nonmotile strains were examined in batches to simulate a clinical sample scenario. Various LC-MS/MS batch run procedures and MS-H typing rules were compared and summarized through quantitative analysis of MS-H data output for standard method development. Label-free quantitative data analysis of MS-H typing proved very useful for examining the quality of MS-H results and the effects of sample carryover from motile E. coli isolates. Based on this, a refined procedure and protein identification rule specific for clinical MS-H typing were established and validated. With an LC-MS/MS batch run procedure and database search parameters unique to E. coli MS-H typing, the standard procedure maintained high accuracy and specificity in clinical situations, and its potential to be used in a clinical setting was clearly established. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.
2011-01-01
Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part, because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling, but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and probability of sampling a unit occupied by the rare species. Efficiency measures the precision of population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species was approximately 1.5 times higher for TSSAV compared to SRS and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive designs, is often case-specific. Efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend that simulations tailored to the application of interest are highly useful for evaluating designs in preparation for sampling rare and clustered populations.
Methods for sample size determination in cluster randomized trials
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-01-01
Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
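The "simplest approach" mentioned in the abstract, inflating an individually randomized sample size by a design effect, can be written down directly. The sketch below uses the usual design effect 1 + (m - 1)·ICC, with an optional adjustment for unequal cluster sizes based on their coefficient of variation; the effect size, ICC, and cluster sizes are illustrative assumptions, not values from the paper.

```python
import math
from scipy import stats

def n_individual(delta, sd, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-sided z-test comparing two means."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return (z_a + z_b) ** 2 * 2 * sd ** 2 / delta ** 2

def design_effect(m, icc, cv=0.0):
    """Inflation factor for cluster randomization.
    m: average cluster size, icc: intracluster correlation,
    cv: coefficient of variation of cluster sizes (0 for equal clusters)."""
    return 1 + ((1 + cv ** 2) * m - 1) * icc

m = 20                                            # assumed average cluster size
n_ind = n_individual(delta=0.3, sd=1.0)           # hypothetical effect size in SD units
de = design_effect(m=m, icc=0.05, cv=0.4)
n_clusters = math.ceil(n_ind * de / m)            # clusters needed per arm
print(f"individually randomized n/arm = {math.ceil(n_ind)}")
print(f"design effect = {de:.2f}; clusters per arm = {n_clusters}")
```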
The micron- to kilometer-scale Moon: linking samples to orbital observations, Apollo to LRO
NASA Astrophysics Data System (ADS)
Crites, S.; Lucey, P. G.; Taylor, J.; Martel, L.; Sun, L.; Honniball, C.; Lemelin, M.
2017-12-01
The Apollo missions have shaped the field of lunar science and our understanding of the Moon, from global-scale revelations like the magma ocean hypothesis, to providing ground truth for compositional remote sensing and absolute ages to anchor cratering chronologies. While lunar meteorite samples can provide a global- to regional-level view of the Moon, samples returned from known locations are needed to directly link orbital-scale observations with laboratory measurements-a link that can be brought to full fruition with today's extremely high spatial resolution observations from Lunar Reconnaissance Orbiter and other recent missions. Korotev et al. (2005) described a scenario of the Moon without Apollo to speculate about our understanding of the Moon if our data were confined to lunar meteorites and remote sensing. I will review some of the major points discussed by Korotev et al. (2005), and focus on some of the ways in which spectroscopic remote sensing in particular has benefited from the Apollo samples. For example, could the causes and effects of lunar-style space weathering have been unraveled without the Apollo samples? What would be the limitations on remote sensing compositional measurements that rely on Apollo samples for calibration and validation? And what new opportunities to bring together orbital and sample analyses now exist, in light of today's high spatial and spectral resolution remote sensing datasets?
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) which can be yielded when sample size and allocation rate to the treatment arms can be modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
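A Monte Carlo sketch of the worst case described above is given below for the simpler situation in which only the second-stage sample size (not the allocation rate) is modified and the naive pooled z-test with the conventional critical value is applied at the end; the interim size, planned size, and grid of allowed second-stage sizes are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
alpha, sigma = 0.025, 1.0            # one-sided level, known common SD
n1 = 50                              # per-arm interim sample size
n2_grid = np.arange(10, 401, 10)     # allowed per-arm second-stage sizes

z_a = norm.ppf(1 - alpha)
reject = 0
n_sim = 100_000
for _ in range(n_sim):
    # Interim difference in arm means under H0 (both arms have n1 observations).
    d1 = rng.normal(0.0, np.sqrt(2 * sigma**2 / n1))
    # Conditional type 1 error of the naive pooled z-test for each candidate n2.
    n_tot = n1 + n2_grid
    crit_d2 = (z_a * np.sqrt(2 * sigma**2 * n_tot) - n1 * d1) / n2_grid
    ce = 1 - norm.cdf(crit_d2 / np.sqrt(2 * sigma**2 / n2_grid))
    # 'Worst case': the experimenter picks the n2 maximizing the conditional error,
    # so the trial then rejects with exactly that probability under H0.
    reject += rng.random() < ce.max()

print(f"maximum attainable type 1 error ~ {reject / n_sim:.4f} (nominal {alpha})")
```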
Groups of two galaxies in SDSS: implications of colours on star formation quenching time-scales
NASA Astrophysics Data System (ADS)
Trinh, Christopher Q.; Barton, Elizabeth J.; Bullock, James S.; Cooper, Michael C.; Zentner, Andrew R.; Wechsler, Risa H.
2013-11-01
We have devised a method to select galaxies that are isolated in their dark matter halo (N = 1 systems) and galaxies that reside in a group of exactly two (N = 2 systems). Our N = 2 systems are widely separated (up to ~200 h⁻¹ kpc), where close galaxy-galaxy interactions are not dominant. We apply our selection criteria to two volume-limited samples of galaxies from Sloan Digital Sky Survey Data Release 6 (SDSS DR6) with M_r - 5 log10 h ≤ -19 and -20 to study the effects of the environment of very sparse groups on galaxy colour. For satellite galaxies in a group of two, we find a red excess attributed to star formation quenching of 0.15 ± 0.01 and 0.14 ± 0.01 for the -19 and -20 samples, respectively, relative to isolated galaxies of the same stellar mass. Assuming N = 1 systems are the progenitors of N = 2 systems, an immediate-rapid star formation quenching scenario is inconsistent with these observations. A delayed-then-rapid star formation quenching scenario with a delay time of 3.3 and 3.7 Gyr for the -19 and -20 samples, respectively, yields a red excess prediction in agreement with the observations. The observations also reveal that central galaxies in a group of two have a slight blue excess of 0.06 ± 0.02 and 0.02 ± 0.01 for the -19 and -20 samples, respectively, relative to N = 1 populations of the same stellar mass. Our results demonstrate that even the environment of very sparse groups of luminous galaxies influences galaxy evolution, and in-depth studies of these simple systems are an essential step towards understanding galaxy evolution in general.
NASA Astrophysics Data System (ADS)
Zhang, Hong-Xin; Puzia, Thomas H.; Peng, Eric W.; Liu, Chengze; Côté, Patrick; Ferrarese, Laura; Duc, Pierre-Alain; Eigenthaler, Paul; Lim, Sungsoon; Lançon, Ariane; Muñoz, Roberto P.; Roediger, Joel; Sánchez-Janssen, Ruben; Taylor, Matthew A.; Yu, Jincheng
2018-05-01
We derive stellar population parameters for a representative sample of ultracompact dwarfs (UCDs) and a large sample of massive globular clusters (GCs) with stellar masses ≳ 10⁶ M⊙ in the central galaxy M87 of the Virgo galaxy cluster, based on model fitting to the Lick-index measurements from both the literature and new observations. After necessary spectral stacking of the relatively faint objects in our initial sample of 40 UCDs and 118 GCs, we obtain 30 sets of Lick-index measurements for UCDs and 80 for GCs. The M87 UCDs have ages ≳ 8 Gyr and [α/Fe] ≃ 0.4 dex, in agreement with previous studies based on smaller samples. The literature UCDs, located in lower-density environments than M87, extend to younger ages and smaller [α/Fe] (at given metallicities) than M87 UCDs, resembling the environmental dependence of the stellar nuclei of dwarf elliptical galaxies (dEs) in the Virgo cluster. The UCDs exhibit a positive mass–metallicity relation (MZR), which flattens and connects compact ellipticals at stellar masses ≳ 10⁸ M⊙. The Virgo dE nuclei largely follow the average MZR of UCDs, whereas most of the M87 GCs are offset toward higher metallicities for given stellar masses. The difference between the mass–metallicity distributions of UCDs and GCs may be qualitatively understood as a result of their different physical sizes at birth in a self-enrichment scenario or of galactic nuclear cluster star formation efficiency being relatively low in a tidal stripping scenario for UCD formation. The existing observations provide the necessary but not sufficient evidence for tidally stripped dE nuclei being the dominant contributors to the M87 UCDs.
An Indoor Location-Based Control System Using Bluetooth Beacons for IoT Systems.
Huh, Jun-Ho; Seo, Kyungryong
2017-12-19
The indoor location-based control system estimates the indoor position of a user to provide the service he/she requires. The major elements involved in the system are the localization server, the service-provision client, the user application, and the positioning technology. The localization server controls access of terminal devices (e.g., Smart Phones and other wireless devices) to determine their locations within a specified space first, and then the service-provision client initiates required services such as indoor navigation and monitoring/surveillance. The user application provides necessary data to let the server localize the devices or to allow the user to receive various services from the client. The major technological elements involved in this system are the indoor space partition method, Bluetooth 4.0, RSSI (Received Signal Strength Indication), and trilateration. The system also employs BLE communication technology when determining the position of the user in an indoor space. The position information obtained is then used to control a specific device(s). These technologies are fundamental in achieving "Smart Living". An indoor location-based control system that provides services by estimating a user's indoor location has been implemented in this study (First scenario). The algorithm introduced in this study (Second scenario) is effective in extracting valid samples from the RSSI dataset, but it has some drawbacks as well. Although we used a range-average algorithm that measures the shortest distance, there are some limitations because the measurement results depend on the sample size and the sample efficiency depends on sampling speeds and environmental changes. However, the Bluetooth system can be implemented at a relatively low cost, so that once the problem of precision is solved, it can be applied to various fields.
An Indoor Location-Based Control System Using Bluetooth Beacons for IoT Systems
Huh, Jun-Ho; Seo, Kyungryong
2017-01-01
The indoor location-based control system estimates the indoor position of a user to provide the service he/she requires. The major elements involved in the system are the localization server, the service-provision client, the user application, and the positioning technology. The localization server controls access of terminal devices (e.g., Smart Phones and other wireless devices) to determine their locations within a specified space first, and then the service-provision client initiates required services such as indoor navigation and monitoring/surveillance. The user application provides necessary data to let the server localize the devices or to allow the user to receive various services from the client. The major technological elements involved in this system are the indoor space partition method, Bluetooth 4.0, RSSI (Received Signal Strength Indication), and trilateration. The system also employs BLE communication technology when determining the position of the user in an indoor space. The position information obtained is then used to control a specific device(s). These technologies are fundamental in achieving “Smart Living”. An indoor location-based control system that provides services by estimating a user’s indoor location has been implemented in this study (First scenario). The algorithm introduced in this study (Second scenario) is effective in extracting valid samples from the RSSI dataset, but it has some drawbacks as well. Although we used a range-average algorithm that measures the shortest distance, there are some limitations because the measurement results depend on the sample size and the sample efficiency depends on sampling speeds and environmental changes. However, the Bluetooth system can be implemented at a relatively low cost, so that once the problem of precision is solved, it can be applied to various fields. PMID:29257044
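The two core technical ingredients named in the abstracts above, converting averaged RSSI readings to distances and combining several beacon distances into a position fix, can be sketched as follows. The path-loss constants, beacon layout, and RSSI values are hypothetical, and the least-squares trilateration shown here is a generic linearization rather than the authors' range-average algorithm.

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: distance in metres from an RSSI reading.
    tx_power: RSSI measured at 1 m (calibration constant), n: path-loss exponent."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(beacons, distances):
    """Linearized least-squares position fix from >= 3 beacons at known (x, y) in metres."""
    beacons = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0, d0 = beacons[0, 0], beacons[0, 1], d[0]
    # Subtract the first circle equation from the others to obtain a linear system.
    A = 2 * (beacons[1:] - beacons[0])
    b = (d0**2 - d[1:]**2
         + beacons[1:, 0]**2 - x0**2
         + beacons[1:, 1]**2 - y0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

beacons = [(0.0, 0.0), (6.0, 0.0), (0.0, 8.0)]        # hypothetical beacon layout
rssi = [-65.0, -72.0, -70.0]                           # averaged RSSI samples per beacon
dists = [rssi_to_distance(r) for r in rssi]
print("estimated position:", trilaterate(beacons, dists))
```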
Fromme, H; Nitschke, L; Boehmer, S; Kiranoglu, M; Göen, T
2013-03-01
Glycol ethers are a class of semi-volatile substances used as solvents in a variety of consumer products such as cleaning agents, paints and cosmetics, as well as chemical intermediates. We determined 11 metabolites of ethylene and propylene glycol ethers in 44 urine samples of German residents (background level study) and in urine samples of individuals after exposure to glycol ethers during cleaning activities (exposure study). In the study on background exposure, methoxyacetic acid and phenoxyacetic acid (PhAA) could be detected in each urine sample with median (95th percentile) values of 0.11 mg L⁻¹ (0.30 mg L⁻¹) and 0.80 mg L⁻¹ (23.6 mg L⁻¹), respectively. The other metabolites were found in a limited number of samples or in none. In the exposure study, 5-8 rooms were cleaned with a cleaner containing ethylene glycol monobutyl ether (EGBE), propylene glycol monobutyl ether (PGBE), or ethylene glycol monopropyl ether (EGPE). During cleaning, the mean levels in the indoor air were 7.5 mg m⁻³ (EGBE), 3.0 mg m⁻³ (PGBE), and 3.3 mg m⁻³ (EGPE), respectively. The related metabolite levels analysed in the urine of the residents of the rooms on the day of cleaning were 2.4 mg L⁻¹ for butoxyacetic acid, 0.06 mg L⁻¹ for 2-butoxypropionic acid, and 2.3 mg L⁻¹ for n-propoxyacetic acid. Overall, our study indicates that the exposure of the population to glycol ethers is generally low, with the exception of PhAA. Moreover, the results of the cleaning scenarios demonstrate that the use of indoor cleaning agents containing glycol ethers can lead to a detectable internal exposure of residents. Copyright © 2012 Elsevier Ltd. All rights reserved.
Blanco, Yolanda; Gallardo-Carreño, Ignacio; Ruiz-Bermejo, Marta; Puente-Sánchez, Fernando; Cavalcante-Silva, Erika; Quesada, Antonio; Prieto-Ballesteros, Olga
2017-01-01
The search for biomarkers of present or past life is one of the major challenges for in situ planetary exploration. Multiple constraints limit the performance and sensitivity of remote in situ instrumentation. In addition, the structure, chemical, and mineralogical composition of the sample may complicate the analysis and interpretation of the results. The aim of this work is to highlight the main constraints, performance, and complementarity of several techniques that have already been implemented or are planned to be implemented on Mars for detection of organic and molecular biomarkers on a best-case sample scenario. We analyzed a 1000-year-old desiccated and mummified microbial mat from Antarctica by Raman and IR (infrared) spectroscopies (near- and mid-IR), thermogravimetry (TG), differential thermal analysis, mass spectrometry (MS), and immunological detection with a life detector chip. In spite of the high organic content (ca. 20% wt/wt) of the sample, the Raman spectra only showed the characteristic spectral peaks of the remaining beta-carotene biomarker and faint peaks of phyllosilicates over a strong fluorescence background. IR spectra complemented the mineralogical information from Raman spectra and showed the main molecular vibrations of the humic acid functional groups. The TG-MS system showed the release of several volatile compounds attributed to biopolymers. An antibody microarray for detecting cyanobacteria (CYANOCHIP) detected biomarkers from Chroococcales, Nostocales, and Oscillatoriales orders. The results highlight limitations of each technique and suggest the necessity of complementary approaches in the search for biomarkers because some analytical techniques might be impaired by sample composition, presentation, or processing. Key Words: Planetary exploration—Life detection—Microbial mat—Life detector chip—Thermogravimetry—Raman spectroscopy—NIR—DRIFTS. Astrobiology 17, 984–996. PMID:29016195
Cau, Andrea
2017-01-01
Bayesian phylogenetic methods that simultaneously integrate morphological and stratigraphic information have been applied increasingly by paleontologists. Most of these studies have used Bayesian methods as an alternative to the widely used parsimony analysis, to infer macroevolutionary patterns and relationships among species-level or higher taxa. Among recently introduced Bayesian methodologies, the Fossilized Birth-Death (FBD) model allows incorporation of hypotheses on ancestor-descendant relationships in phylogenetic analyses including fossil taxa. Here, the FBD model is used to infer the relationships among an ingroup formed exclusively by fossil individuals, i.e., dipnoan tooth plates from four localities in the Ain el Guettar Formation of Tunisia. Previous analyses of this sample compared the results of phylogenetic analysis using parsimony with stratigraphic methods, inferred a high diversity (five or more genera) in the Ain el Guettar Formation, and interpreted it as an artifact inflated by depositional factors. In the analysis performed here, the uncertainty in the chronostratigraphic relationships among the specimens was included among the prior settings. The results of the analysis confirm the referral of most of the specimens to the taxa Asiatoceratodus, Equinoxiodus, Lavocatodus and Neoceratodus, but reject the referrals to Ceratodus and Ferganoceratodus. The resulting phylogeny constrained the evolution of the Tunisian sample exclusively to the Early Cretaceous, contrasting with the previous scenario inferred by the stratigraphically calibrated topology resulting from parsimony analysis. The phylogenetic framework also suggests that (1) the sampled localities are laterally equivalent, but (2) three localities are restricted to the youngest part of the section; both results are in agreement with previous stratigraphic analyses of these localities. The FBD model applied to specimen-level units provides a novel tool for phylogenetic inference among fossils, but also for independent tests of stratigraphic scenarios.
Blanco, Yolanda; Gallardo-Carreño, Ignacio; Ruiz-Bermejo, Marta; Puente-Sánchez, Fernando; Cavalcante-Silva, Erika; Quesada, Antonio; Prieto-Ballesteros, Olga; Parro, Víctor
2017-10-01
The search for biomarkers of present or past life is one of the major challenges for in situ planetary exploration. Multiple constraints limit the performance and sensitivity of remote in situ instrumentation. In addition, the structure, chemical, and mineralogical composition of the sample may complicate the analysis and interpretation of the results. The aim of this work is to highlight the main constraints, performance, and complementarity of several techniques that have already been implemented or are planned to be implemented on Mars for detection of organic and molecular biomarkers on a best-case sample scenario. We analyzed a 1000-year-old desiccated and mummified microbial mat from Antarctica by Raman and IR (infrared) spectroscopies (near- and mid-IR), thermogravimetry (TG), differential thermal analysis, mass spectrometry (MS), and immunological detection with a life detector chip. In spite of the high organic content (ca. 20% wt/wt) of the sample, the Raman spectra only showed the characteristic spectral peaks of the remaining beta-carotene biomarker and faint peaks of phyllosilicates over a strong fluorescence background. IR spectra complemented the mineralogical information from Raman spectra and showed the main molecular vibrations of the humic acid functional groups. The TG-MS system showed the release of several volatile compounds attributed to biopolymers. An antibody microarray for detecting cyanobacteria (CYANOCHIP) detected biomarkers from Chroococcales, Nostocales, and Oscillatoriales orders. The results highlight limitations of each technique and suggest the necessity of complementary approaches in the search for biomarkers because some analytical techniques might be impaired by sample composition, presentation, or processing. Key Words: Planetary exploration-Life detection-Microbial mat-Life detector chip-Thermogravimetry-Raman spectroscopy-NIR-DRIFTS. Astrobiology 17, 984-996.
Galea, Karen S.; McGonagle, Carolyn; Sleeuwenhoek, Anne; Todd, David; Jiménez, Araceli Sánchez
2014-01-01
Dermal exposure to drilling fluids and crude oil is an exposure route of concern. However, there have been no published studies describing sampling methods or reporting dermal exposure measurements. We describe a study that aimed to evaluate a wipe sampling method to assess dermal exposure to an oil-based drilling fluid and crude oil, as well as to investigate the feasibility of using an interception cotton glove sampler for exposure on the hands/wrists. A direct comparison of the wipe and interception methods was also completed using pigs’ trotters as a surrogate for human skin and a direct surface contact exposure scenario. Overall, acceptable recovery and sampling efficiencies were reported for both methods, and both methods had satisfactory storage stability at 1 and 7 days, although there appeared to be some loss over 14 days. The methods’ comparison study revealed significantly higher removal of both fluids from the metal surface with the glove samples compared with the wipe samples (on average 2.5 times higher). Both evaluated sampling methods were found to be suitable for assessing dermal exposure to oil-based drilling fluids and crude oil; however, the comparison study clearly illustrates that glove samplers may overestimate the amount of fluid transferred to the skin. Further comparison of the two dermal sampling methods using additional exposure situations such as immersion or deposition, as well as a field evaluation, is warranted to confirm their appropriateness and suitability in the working environment. PMID:24598941
When Can Clades Be Potentially Resolved with Morphology?
Bapst, David W.
2013-01-01
Morphology-based phylogenetic analyses are the only option for reconstructing relationships among extinct lineages, but often find support for conflicting hypotheses of relationships. The resulting lack of phylogenetic resolution is generally explained in terms of data quality and methodological issues, such as character selection. A previous suggestion is that sampling ancestral morphotaxa or sampling multiple taxa descended from a long-lived, unchanging lineage can also yield clades which have no opportunity to share synapomorphies. This lack of character information leads to a lack of ‘intrinsic’ resolution, an issue that cannot be solved with additional morphological data. It is unclear how often we should expect clades to be intrinsically resolvable in realistic circumstances, as intrinsic resolution must increase as taxonomic sampling decreases. Using branching simulations, I quantify intrinsic resolution across several models of morphological differentiation and taxonomic sampling. Intrinsically unresolvable clades are found to be relatively frequent in simulations of both extinct and living taxa under realistic sampling scenarios, implying that intrinsic resolution is an issue for morphology-based analyses of phylogeny. Simulations that varied the rates of sampling and differentiation were tested for agreement with observed distributions of durations from well-sampled fossil records and for having high intrinsic resolution. This combination only occurs in those datasets when differentiation and sampling rates are both unrealistically high relative to branching and extinction rates. Thus, the poor phylogenetic resolution occasionally observed in morphological phylogenetics may result from a lack of intrinsic resolvability within groups. PMID:23638034
Khan, Latifa B; Read, Hannah M; Ritchie, Stephen R; Proft, Thomas
2017-01-01
Dipstick urinalysis is an informative, quick, cost-effective and non-invasive diagnostic tool that is useful in clinical practice for the diagnosis of urinary tract infections (UTIs), kidney diseases, and diabetes. We used dipstick urinalysis as a hands-on microbiology laboratory exercise to reinforce student learning about UTIs with a particular focus on cystitis, which is a common bacterial infection. To avoid exposure to potentially contaminated human urine samples, we prepared artificial urine using easily acquired and affordable ingredients, which allowed less-experienced students to perform urinalysis without the risk of exposure to pathogenic organisms and ensured reliable availability of the urine samples. This practical class taught medical students how to use urinalysis data in conjunction with medical history to diagnose diseases from urine samples and to determine a treatment plan for clinical scenarios.
Elastic anisotropy effects on the electrical responses of a thin sample of nematic liquid crystal.
Gomes, O A; Yednak, C A R; Ribeiro de Almeida, R R; Teixeira-Souza, R T; Evangelista, L R
2017-03-01
The electrical responses of a nematic liquid crystal cell are investigated by means of the elastic continuum theory. The nematic medium is considered as a parallel circuit of a resistance and a capacitance, and the electric current profile across the sample is determined as a function of the elastic constants. In the reorientation process of the nematic director, the resistance and capacitance of the sample are determined by taking into account the elastic anisotropy. A nonmonotonic profile for the current is observed, in which a minimum value of the current may be used to estimate the elastic constant values. This scenario suggests a theoretical method to determine the values of the bulk elastic constants in a single planar aligned cell just by changing the direction of the applied electric field and measuring the resulting electric current.
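Since the cell is modelled as a parallel resistance-capacitance element, the measured current amplitude for a sinusoidal drive follows directly from the complex admittance. The short sketch below illustrates that relation with hypothetical R and C values for two director orientations; it is not the continuum-theory calculation of the paper.

```python
import numpy as np

def current_amplitude(v0, freq, R, C):
    """Peak current through a parallel RC element driven by v(t) = v0 * sin(2*pi*freq*t)."""
    omega = 2 * np.pi * freq
    admittance = 1.0 / R + 1j * omega * C      # parallel combination of R and C
    return v0 * abs(admittance)

# Hypothetical values for two limiting director orientations of the cell.
print(current_amplitude(v0=1.0, freq=1e3, R=5e6, C=1e-10))   # planar-like state
print(current_amplitude(v0=1.0, freq=1e3, R=2e6, C=3e-10))   # reoriented state
```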
NASA Astrophysics Data System (ADS)
Pillaca, E. J. D. M.; Ueda, M.; Oliveira, R. M.; Pichon, L.
2014-08-01
The effects of E × B fields as a mechanism for carbon-nitrogen plasma immersion ion implantation (PIII) have been investigated. When used in PIII, this magnetic configuration allows a high nitrogen plasma density to be obtained close to the ion implantation region. Consequently, a higher ion dose on the target can be achieved compared with standard PIII. In this scenario, nitrogen and carbon ions were implanted simultaneously on stainless steel, as measured by GDOES and detected by X-ray diffraction. Carbon tape placed on the sample holder was sputtered by the intense bombardment of nitrogen ions, being the source of carbon atoms in this experiment. The implantation of both N and C caused changes in sample morphology and an improvement of the tribological properties of the stainless steel.
Enumerating Sparse Organisms in Ships’ Ballast Water: Why Counting to 10 Is Not So Easy
2011-01-01
To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships’ ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed. PMID:21434685
Detection of resistance to linezolid in Staphylococcus aureus infecting orthopedic patients.
Thool, Vaishali U; Bhoosreddy, Girish L; Wadher, Bharat J
2012-01-01
In today's medical scenario, medicine is battling a highly adaptable enemy that has seemingly unending ways of resisting the potent agents produced against it. The aim was to study resistance to linezolid among Staphylococcus aureus isolated from pus samples of orthopedic patients. Pus samples were collected from dirty wounds of orthopedic patients undergoing long antimicrobial treatment programs. The sampling period was from July 2010 to June 2011. The samples were collected from different orthopedic hospitals of Nagpur (central India), representing a mixed sample of patients. One hundred pus samples were screened for S. aureus by growth on mannitol salt agar (MSA) and Baird-Parker agar (BPA), the deoxyribonuclease test, the tube coagulase test, and the HiStaph latex agglutination test. Fifty-one S. aureus isolates were obtained, which were further subjected to antimicrobial susceptibility testing by the Kirby-Bauer disc diffusion method (DDM). Minimal inhibitory concentrations (MICs) were determined by an automated system, the VITEK 2 system. In addition, the Ezy MIC strip method was carried out in accordance with Clinical and Laboratory Standards Institute (CLSI) guidelines. Twelve linezolid-resistant S. aureus (LRSA) isolates were recovered from the 51 S. aureus cultures tested for susceptibility to linezolid using the DDM, the VITEK 2 system, and the Ezy MIC strip method. The emergence of resistance suggests nosocomial spread and antibiotic misuse.
Monitoring of pesticide residues in vegetarian diet.
Kumari, Beena; Kathpal, T S
2009-04-01
Samples (28) of a complete vegetarian diet consumed from morning till night (i.e., tea, milk, breakfast, lunch, snacks, dinner, sweet dish, etc.) were collected periodically from homes, hostels and hotels in Hisar and analysed to detect residues of organochlorine, synthetic pyrethroid, organophosphate and carbamate insecticides. The estimation was carried out using a multi-residue analytical technique employing gas chromatograph (GC)-electron capture detector and GC-nitrogen phosphorus detector systems equipped with capillary columns. The whole diet sample was macerated in a mixer grinder and a representative sample was analyzed for residues in duplicate, taking the average daily diet of an adult to be 1,300 g. On comparing the data, it was found that the actual daily intake (µg/person/day) of lindane in two samples and of endosulfan in four samples exceeded the acceptable daily intake. Residues of other pesticides in all the diet samples were lower than the acceptable daily intake (ADI) of the respective pesticides. The study concluded that although all the diet samples were found contaminated with one or the other pesticide, the actual daily intake of only a few pesticides was higher than their respective ADI. A more extensive study covering other localities of Haryana has been suggested to establish the overall scenario of contamination of the vegetarian diet.
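The comparison between actual daily intake and the acceptable daily intake (ADI) is simple arithmetic: intake equals the residue concentration times the 1,300 g daily diet, and the allowed intake equals the ADI times body weight. The sketch below assumes a 60 kg adult and uses illustrative residue and ADI values rather than the study's measurements.

```python
def exceeds_adi(residue_mg_per_kg, diet_g=1300, adi_mg_per_kg_bw=0.005, body_weight_kg=60):
    """Compare the actual daily intake with the acceptable daily intake (ADI).
    residue_mg_per_kg: pesticide residue in the whole-diet sample (mg/kg).
    Returns the intake in micrograms per person per day and an exceedance flag."""
    intake_mg = residue_mg_per_kg * diet_g / 1000.0       # mg/person/day
    allowed_mg = adi_mg_per_kg_bw * body_weight_kg        # mg/person/day allowed
    return intake_mg * 1000.0, intake_mg > allowed_mg

# Hypothetical residue levels (mg/kg) and ADIs (mg/kg body weight) for two pesticides.
for name, residue, adi in [("lindane", 0.4, 0.005), ("endosulfan", 0.5, 0.006)]:
    intake_ug, exceeded = exceeds_adi(residue, adi_mg_per_kg_bw=adi)
    print(f"{name}: intake = {intake_ug:.0f} ug/person/day, exceeds ADI: {exceeded}")
```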
Enumerating sparse organisms in ships' ballast water: why counting to 10 is not so easy.
Miller, A Whitman; Frazier, Melanie; Smith, George E; Perry, Elgin S; Ruiz, Gregory M; Tamburri, Mario N
2011-04-15
To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships' ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed.
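Assuming organisms are randomly (Poisson) distributed in well-mixed ballast water, the trade-off between sampled volume and the power to detect a noncompliant discharge can be sketched as below. The discharge standard, the true concentration, and the tolerated false-alarm rate are illustrative assumptions, and the full model in the paper additionally treats recovery errors and burden-of-proof choices.

```python
from scipy.stats import poisson

standard = 10.0        # organisms per cubic metre allowed by the discharge standard
true_conc = 30.0       # hypothetical noncompliant concentration (organisms per cubic metre)
alpha = 0.05           # tolerated false-alarm rate against a just-compliant discharge

for volume in [0.1, 0.5, 1.0, 3.0, 7.0]:               # cubic metres of ballast water sampled
    mu_compliant = standard * volume
    mu_true = true_conc * volume
    # Smallest count threshold that keeps the false-alarm rate at or below alpha.
    c = poisson.ppf(1 - alpha, mu_compliant)
    power = poisson.sf(c, mu_true)                      # P(count > c | noncompliant discharge)
    print(f"volume = {volume:4.1f} m^3  declare noncompliant if count > {int(c):3d}  power = {power:.3f}")
```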
The OSIRIS-Rex Asteroid Sample Return: Mission Operations Design
NASA Technical Reports Server (NTRS)
Gal-Edd, Jonathan; Cheuvront, Allan
2014-01-01
The OSIRIS-REx mission employs a methodical, phased approach to ensure success in meeting the mission's science requirements. OSIRIS-REx launches in September 2016, with a backup launch period occurring one year later. Sampling occurs in 2019. The departure burn from Bennu occurs in March 2021. On September 24, 2023, the SRC lands at the Utah Test and Training Range (UTTR). Stardust heritage procedures are followed to transport the SRC to Johnson Space Center, where the samples are removed and delivered to the OSIRIS-REx curation facility. After a six-month preliminary examination period the mission will produce a catalog of the returned sample, allowing the worldwide community to request samples for detailed analysis. Traveling to and returning a sample from an asteroid that has not been explored before requires unique operations considerations. The Design Reference Mission (DRM) ties together spacecraft, instrument and operations scenarios. The project implemented lessons learned from other small-body missions: APL NEAR, JPL Dawn and ESA Rosetta. The key lesson learned was to expect the unexpected and to implement planning tools early in the lifecycle. In preparation for PDR, the project changed the asteroid arrival date to arrive one year earlier, providing additional time margin. STK is used for mission design and STK Scheduler for instrument coverage analysis.
NASA Astrophysics Data System (ADS)
Kawaguchi, J.; Mori, O.; Shirasawa, Y.; Yoshikawa, M.
2014-07-01
The science and engineering communities around the world are seeking what comes next. This is especially true for asteroids and comets: those objects lie relatively far out in our solar system, and new engineering solutions are essential to explore them. JAXA has studied the next-step mission since 2000, a solar-power sail demonstrator combining photon propulsion with electric propulsion (ion thrusters), targeting the untrodden challenge of a sample return from a Trojan asteroid around the libration points in the Sun-Jupiter system. The Ikaros spacecraft was developed and launched as a preliminary technology demonstration. The mission will perform in-situ measurement and on-site analysis of the samples in addition to the sample return to Earth, and will also deploy a small lander on the surface to collect surface samples and convey them to the mother spacecraft. From a scientific point of view, there is an enormous reward in the most primitive samples, which contain information about the ancient solar system and about the origin of life in our solar system. JAXA is presently looking for international partners to develop and build the lander. The presentation will elaborate the current mission scenario as well as what we expect the international collaboration to be.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annette Rohr
2005-03-31
This report documents progress made on the subject project during the period of September 1, 2004 through February 28, 2005. The TERESA Study is designed to investigate the role played by specific emissions sources and components in the induction of adverse health effects by examining the relative toxicity of coal combustion and mobile source (gasoline and/or diesel engine) emissions and their oxidative products. The study involves on-site sampling, dilution, and aging of coal combustion emissions at three coal-fired power plants, as well as mobile source emissions, followed by animal exposures incorporating a number of toxicological endpoints. The DOE-EPRI Cooperative Agreement (henceforth referred to as "the Agreement") for which this technical progress report has been prepared covers the performance and analysis of field experiments at the first TERESA plant, located in the Upper Midwest and henceforth referred to as Plant 0, and at two additional coal-fired power plants (Plants 1 and 2) utilizing different coal types and with different plant configurations. During this reporting period, all fieldwork at Plant 0 was completed. Stack sampling was conducted in October to determine if there were significant differences between the in-stack PM concentrations and the diluted concentrations used for the animal exposures. Results indicated no significant differences and therefore confidence that the revised stack sampling methodology described in the previous semiannual report is appropriate for use in the Project. Animal exposures to three atmospheric scenarios were carried out. From October 4-7, we conducted exposures to oxidized emissions with the addition of secondary organic aerosol (SOA). Later in October, exposures to the most complex scenario (oxidized, neutralized emissions plus SOA) were repeated to ensure comparability with the results of the June/July exposures where a different stack sampling setup was employed. In November, exposures to oxidized emissions were performed. Stage I toxicological assessments were carried out in Sprague-Dawley rats. Biological endpoints included breathing pattern/pulmonary function; in vivo chemiluminescence (an indicator of oxidative stress); blood cytology; bronchoalveolar lavage (BAL) fluid analysis; and histopathology. No significant differences between exposed animals and sham animals (exposed to filtered air) were observed for any of the endpoints; histopathological results are pending and will be reported in the next semiannual report. The scenarios evaluated during this reporting period were slightly modified from those originally proposed. We substituted a new scenario, secondary aerosol + SOA, to investigate the effects of a strongly acidic aerosol with a biogenic component. Since we did not observe any biological response to this scenario, the neutralized secondary aerosol scenario (i.e., oxidized emissions + ammonia) was deemed unnecessary. Moreover, in light of the lack of response observed in the Stage I assessment, it was decided that a Stage II assessment (evaluation of cardiac function in a compromised rat model) was unlikely to provide useful information. However, this model will be employed at Plant 1 and/or 2. During this reporting period, significant progress was made in planning for fieldwork at Plant 1. Stack sampling was carried out at the plant in mid-December to determine the concentration of primary particles.
It was found that PM2.5 mass concentrations were approximately three times higher than those observed at Plant 0. In mid-February, installation and setup for the mobile laboratories began. Animal exposures are scheduled to begin at this plant on March 21, 2005. During the next reporting period, we will initiate fieldwork at Plant 1. At either or both Plants 1 and 2, a detailed Stage II assessment will be performed, even if no significant findings are observed in Stage I. The next semiannual report is expected to include a detailed description of the fieldwork at Plant 1, including toxicological findings and interpretation.
Leitão, Sara; Moreira-Santos, Matilde; Van den Brink, Paul J; Ribeiro, Rui; José Cerejeira, M; Sousa, José Paulo
2014-05-01
The present study aimed to assess the environmental fate of the insecticide and nematicide ethoprophos in the soil-water interface following the pesticide application in simulated maize and potato crops under Mediterranean agricultural conditions, particularly of irrigation. Focus was given to the soil-water transfer pathways (leaching and runoff), to the pesticide transport in soil between pesticide application (crop row) and non-application areas (between crop rows), as well as to toxic effects of the various matrices on terrestrial and aquatic biota. A semi-field methodology mimicking a "worst-case" ethoprophos application (twice the recommended dosage for maize and potato crops: 100% concentration v/v) in agricultural field situations was used, in order to mimic a possible misuse by the farmer under realistic conditions. A rainfall was simulated under a slope of 20° for both crop-based scenarios. Soil and water samples were collected for the analysis of pesticide residues. Ecotoxicity of soil and aquatic samples was assessed by performing lethal and sublethal bioassays with organisms from different trophic levels: the collembolan Folsomia candida, the earthworm Eisenia andrei and the cladoceran Daphnia magna. Although the majority of ethoprophos sorbed to the soil application area, pesticide concentrations were detected in all water matrices, illustrating pesticide transfer pathways of water contamination between environmental compartments. Leaching to groundwater proved to be an important transfer pathway of ethoprophos under both crop-based scenarios, as it resulted in high pesticide concentrations in leachates from the Maize (130 µg L⁻¹) and Potato (630 µg L⁻¹) crop scenarios, respectively. Ethoprophos application at the Potato crop scenario caused more toxic effects on terrestrial and aquatic biota than at the Maize scenario at the recommended dosage and lower concentrations. In both crop-based scenarios, ethoprophos moved with the irrigation water flow to the soil between the crop rows where no pesticide was applied, causing toxic effects on terrestrial organisms. The two simulated agricultural crop-based scenarios had the merit of illustrating the importance of transfer pathways of pesticides from soil to groundwater through leaching and from crop rows to the surrounding soil areas in a soil-water interface environment, which is representative for irrigated agricultural crops under Mediterranean conditions. Copyright © 2014 Elsevier Inc. All rights reserved.
Assessment of the effect of homogenized soil on soil hydraulic properties and soil water transport
NASA Astrophysics Data System (ADS)
Mohawesh, O.; Janssen, M.; Maaitah, O.; Lennartz, B.
2017-09-01
Soil hydraulic properties play a crucial role in simulating water flow and contaminant transport. Soil hydraulic properties are commonly measured using homogenized soil samples. However, soil structure has a significant effect on the soil's ability to retain and to conduct water, particularly in aggregated soils. In order to determine the effect of soil homogenization on soil hydraulic properties and soil water transport, undisturbed soil samples were carefully collected. Five different soil structures were identified: angular-blocky, crumble, angular-blocky (different soil texture), granular, and subangular-blocky. The soil hydraulic properties were determined for undisturbed and homogenized soil samples for each soil structure. The soil hydraulic properties were used to model soil water transport using HYDRUS-1D. The homogenized soil samples showed a significant increase in wide pores (wCP) and a decrease in narrow pores (nCP). The wCP increased by 95.6, 141.2, 391.6, 3.9, and 261.3%, and the nCP decreased by 69.5, 10.5, 33.8, 72.7, and 39.3% for homogenized soil samples compared to undisturbed soil samples. The soil water retention curves exhibited a significant decrease in water-holding capacity for homogenized soil samples compared with the undisturbed soil samples. The homogenized soil samples also showed a decrease in soil hydraulic conductivity. The simulated results showed that water movement and distribution were affected by soil homogenization. Moreover, soil homogenization affected soil hydraulic properties and soil water transport. However, field studies are needed to determine the effect of these differences on water, chemical, and pollutant transport under several scenarios.
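HYDRUS-1D describes retention with the van Genuchten model, so the reported loss of water-holding capacity after homogenization can be visualized by comparing two parameter sets. The parameters below are hypothetical (chosen so that the homogenized sample has more wide pores), not the fitted values from the study.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at suction head h (cm, positive) via the van Genuchten model."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

heads = np.array([1, 10, 33, 100, 330, 1000, 15000])      # cm suction, roughly pF 0 to 4.2

# Hypothetical parameter sets; the homogenized set has a larger alpha (more wide pores).
undisturbed = dict(theta_r=0.08, theta_s=0.46, alpha=0.015, n=1.35)
homogenized = dict(theta_r=0.08, theta_s=0.46, alpha=0.040, n=1.45)

for h in heads:
    tu = van_genuchten_theta(h, **undisturbed)
    th = van_genuchten_theta(h, **homogenized)
    print(f"h = {h:6d} cm   undisturbed = {tu:.3f}   homogenized = {th:.3f}")
```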
Scripting Scenarios for the Human Patient Simulator
NASA Technical Reports Server (NTRS)
Bacal, Kira; Miller, Robert; Doerr, Harold
2004-01-01
The Human Patient Simulator (HPS) is particularly useful in providing scenario-based learning which can be tailored to fit specific scenarios and modified in real time to enhance the teaching environment. Scripting these scenarios so as to maximize learning requires certain skills, in order to ensure a change in student performance, understanding, critical thinking, and/or communication skills. Methods: A "good" scenario can be defined in terms of applicability, learning opportunities, student interest, and clearly associated metrics. Obstacles to such a scenario include a lack of understanding of the applicable environment by the scenario author(s), a desire (common among novices) to cover too many topics, failure to define learning objectives, mutually exclusive or confusing learning objectives, unskilled instructors, poor preparation, a disorganized approach, or an inappropriate teaching philosophy (such as "trial by fire" or education through humiliation). Results: Descriptions of several successful teaching programs used in the military, civilian, and NASA medical environments will be provided, along with sample scenarios. Discussion: Simulator-based lessons have proven to be a time- and cost-efficient manner by which to educate medical personnel. Particularly when training for medical care in austere environments (pre-hospital, aeromedical transport, International Space Station, military operations), the HPS can enhance the learning experience.
Pinsent, Amy; Blake, Isobel M; White, Michael T; Riley, Steven
2014-08-01
Both high and low pathogenic subtype A avian influenza remain ongoing threats to the commercial poultry industry globally. The emergence of a novel low pathogenic H7N9 lineage in China presents a new concern to both human and animal health and may necessitate additional surveillance in commercial poultry operations in affected regions. Sampling data were simulated using a mechanistic model of H7N9 influenza transmission within commercial poultry barns together with a stochastic observation process. Parameters were estimated using maximum likelihood. We assessed the probability of detecting an outbreak at the time of slaughter using both real-time polymerase chain reaction (rt-PCR) and a hemagglutinin inhibition (HI) assay, before considering more intense sampling prior to slaughter. The day of virus introduction and R0 were estimated jointly from weekly flock sampling data. For scenarios where R0 was known, we estimated the day of virus introduction into a barn under different sampling frequencies. If birds were tested at the time of slaughter, there was a higher probability of detecting evidence of an outbreak using an HI assay compared to rt-PCR, except when the virus was introduced <2 weeks before the time of slaughter. Prior to the initial detection of infection, N_sample = 50 birds (1%) were sampled once per week, but after infection was detected, N_sample = 2000 birds (40%) were sampled to estimate both parameters. We accurately estimated the day of virus introduction in isolation with weekly and 2-weekly sampling. A strong sampling effort would be required to infer both the day of virus introduction and R0; such an effort would not be required to estimate the day of virus introduction alone once R0 was known, and sampling N_sample = 50 birds in the flock on a weekly or 2-weekly basis would be sufficient.
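As a rough illustration of the kind of simulation-plus-observation setup described (not the authors' actual model), the sketch below runs a chain-binomial SIR outbreak within a single barn and estimates the probability that a sample of N_sample birds taken at slaughter contains at least one infectious bird; all parameter values are assumptions, and detection here ignores serology.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_prevalence(flock=20000, beta=0.8, gamma=0.25, days=35, intro_day=14):
    """Daily chain-binomial SIR in one barn; returns infectious fraction per day."""
    s, i, r = flock, 0, 0
    prev = np.zeros(days)
    for d in range(days):
        if d == intro_day:
            s, i = s - 1, i + 1                       # single introduced infection
        new_inf = rng.binomial(s, 1 - np.exp(-beta * i / flock)) if i > 0 else 0
        new_rec = rng.binomial(i, 1 - np.exp(-gamma))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        prev[d] = i / flock
    return prev

def detected_at_slaughter(n_sample=50, sims=2000, slaughter_day=34, **kw):
    """Probability that at least one sampled bird is infectious at slaughter."""
    hits = 0
    for _ in range(sims):
        p = simulate_prevalence(**kw)[slaughter_day]
        hits += rng.binomial(n_sample, p) > 0
    return hits / sims

print("P(detect at slaughter) ~", detected_at_slaughter())
```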
A space transportation system operations model
NASA Technical Reports Server (NTRS)
Morris, W. Douglas; White, Nancy H.
1987-01-01
Presented is a description of a computer program which permits assessment of the operational support requirements of space transportation systems functioning in both a ground- and space-based environment. The scenario depicted provides for the delivery of payloads from Earth to a space station and beyond using upper stages based at the station. Model results are scenario dependent and rely on the input definitions of delivery requirements, task times, and available resources. Output is in terms of flight rate capabilities, resource requirements, and facility utilization. A general program description, program listing, input requirements, and sample output are included.
Digital Curation of Marine Physical Samples at Ocean Networks Canada
NASA Astrophysics Data System (ADS)
Jenkyns, R.; Tomlin, M. C.; Timmerman, R.
2015-12-01
Ocean Networks Canada (ONC) has collected hundreds of geological, biological and fluid samples from the water column and seafloor during its maintenance expeditions. These samples have been collected by Remotely Operated Vehicles (ROVs), divers, networked and autonomously deployed instruments, and rosettes. Subsequent measurements are used for scientific experiments, calibration of in-situ and remote sensors, monitoring of Marine Protected Areas, and environment characterization. Tracking the life cycles of these samples from collection to dissemination of results with all the pertinent documents (e.g., protocols, imagery, reports), metadata (e.g., location, identifiers, purpose, method) and data (e.g., measurements, taxonomic classification) is a challenge. The initial collection of samples is normally documented in SeaScribe (an ROV dive logging tool within ONC's Oceans 2.0 software) for which ONC has defined semantics and syntax. Next, samples are often sent to individual scientists and institutions (e.g., Royal BC Museum) for processing and storage, making acquisition of results and life cycle metadata difficult. Finally, this information needs to be retrieved and collated such that multiple user scenarios can be addressed. ONC aims to improve and extend its digital infrastructure for physical samples to support this complex array of samples, workflows and applications. However, in order to promote effective data discovery and exchange, interoperability and community standards must be an integral part of the design. Thus, integrating recommendations and outcomes of initiatives like the EarthCube iSamples working groups is essential. Use cases, existing tools, schemas and identifiers are reviewed, while remaining gaps and challenges are identified. The current status, selected approaches and possible future directions to enhance ONC's digital infrastructure for each sample type are presented.
Tank 241-AY-101 Privatization Push Mode Core Sampling and Analysis Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
TEMPLETON, A.M.
2000-01-12
This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for samples obtained from tank 241-AY-101. The purpose of this sampling event is to obtain information about the characteristics of the contents of 241-AY-101 required to satisfy Data Quality Objectives For RPP Privatization Phase I: Confirm Tank T Is An Appropriate Feed Source For High-Level Waste Feed Batch X (HLW DQO) (Nguyen 1999a), Data Quality Objectives For TWRS Privatization Phase I: Confirm Tank T Is An Appropriate Feed Source For Low-Activity Waste Feed Batch X (LAW DQO) (Nguyen 1999b), Low Activity Waste and High-Level Waste Feed Data Quality Objectives (L and H DQO) (Patello et al. 1999), and Characterization Data Needs for Development, Design, and Operation of Retrieval Equipment Developed through the Data Quality Objective Process (Equipment DQO) (Bloom 1996). Special instructions regarding support to the LAW and HLW DQOs are provided by Baldwin (1999). Push mode core samples will be obtained from risers 15G and 150 to provide sufficient material for the chemical analyses and tests required to satisfy these data quality objectives. The 222-S Laboratory will extrude core samples; composite the liquids and solids; perform chemical analyses on composite and segment samples; archive half-segment samples; and provide subsamples to the Process Chemistry Laboratory. The Process Chemistry Laboratory will prepare test plans and perform process tests to evaluate the behavior of the 241-AY-101 waste undergoing the retrieval and treatment scenarios defined in the applicable DQOs. Requirements for analyses of samples originating in the process tests will be documented in the corresponding test plans and are not within the scope of this SAP.
GeoLab: A Geological Workstation for Future Missions
NASA Technical Reports Server (NTRS)
Evans, Cynthia; Calaway, Michael; Bell, Mary Sue; Li, Zheng; Tong, Shuo; Zhong, Ye; Dahiwala, Ravi
2014-01-01
The GeoLab glovebox was, until November 2012, fully integrated into NASA's Deep Space Habitat (DSH) Analog Testbed. The conceptual design for GeoLab came from several sources, including current research instruments (Microgravity Science Glovebox) used on the International Space Station, existing Astromaterials Curation Laboratory hardware and clean room procedures, and mission scenarios developed for earlier programs. GeoLab allowed NASA scientists to test science operations related to contained sample examination during simulated exploration missions. The team demonstrated science operations that enhance the early scientific returns from future missions and ensure that the best samples are selected for Earth return. The facility was also designed to foster the development of instrument technology. Since 2009, when GeoLab design and construction began, the GeoLab team [a group of scientists from the Astromaterials Acquisition and Curation Office within the Astromaterials Research and Exploration Science (ARES) Directorate at JSC] has progressively developed and reconfigured the GeoLab hardware and software interfaces and developed test objectives, which were to 1) determine requirements and strategies for sample handling and prioritization for geological operations on other planetary surfaces, 2) assess the scientific contribution of selective in-situ sample characterization for mission planning, operations, and sample prioritization, 3) evaluate analytical instruments and tools for providing efficient and meaningful data in advance of sample return, and 4) identify science operations that leverage human presence with robotic tools. In the first year of tests (2010), GeoLab examined basic glovebox operations performed by one and two crewmembers and science operations performed by a remote science team. The 2010 tests also examined the efficacy of basic sample characterization [descriptions, microscopic imagery, X-ray fluorescence (XRF) analyses] and feedback to the science team. In year 2 (2011), the GeoLab team tested enhanced software and interfaces for the crew and science team (including Web-based and mobile device displays) and demonstrated laboratory configurability with a new diagnostic instrument (the Multispectral Microscopic Imager from the JPL and Arizona State University). In year 3 (2012), the GeoLab team installed and tested a robotic sample manipulator and evaluated robotic-human interfaces for science operations.
Locatello, Lisa; Rasotto, Maria B
2017-08-01
Emerging evidence suggests the occurrence of comparative decision-making processes in mate choice, questioning the traditional idea of female choice based on rules of absolute preference. In such a scenario, females are expected to use a typical best-of-n sampling strategy, being able to recall previously sampled males based on memory of their quality and location. Accordingly, the quality of the preferred mate is expected to be unrelated to both the number and the sequence of female visits. We found support for these predictions in the peacock blenny, Salaria pavo, a fish where females have the opportunity to evaluate the attractiveness of many males in a short time period and in a restricted spatial range. Indeed, even considering the variability in preference among females, most of them returned to previously sampled males for further evaluations; thus, the preferred male did not represent the last one in the sequence of visited males. Moreover, there was no relationship between the attractiveness of the preferred male and the number of further visits assigned to the other males. Our results suggest the occurrence of a best-of-n mate sampling strategy in the peacock blenny.
NASA Astrophysics Data System (ADS)
Locatello, Lisa; Rasotto, Maria B.
2017-08-01
Emerging evidence suggests the occurrence of comparative decision-making processes in mate choice, questioning the traditional idea of female choice based on rules of absolute preference. In such a scenario, females are expected to use a typical best-of-n sampling strategy, being able to recall previously sampled males based on memory of their quality and location. Accordingly, the quality of the preferred mate is expected to be unrelated to both the number and the sequence of female visits. We found support for these predictions in the peacock blenny, Salaria pavo, a fish where females have the opportunity to evaluate the attractiveness of many males in a short time period and in a restricted spatial range. Indeed, even considering the variability in preference among females, most of them returned to previously sampled males for further evaluations; thus, the preferred male did not represent the last one in the sequence of visited males. Moreover, there was no relationship between the attractiveness of the preferred male and the number of further visits assigned to the other males. Our results suggest the occurrence of a best-of-n mate sampling strategy in the peacock blenny.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckert-Gallup, Aubrey Celia; Lewis, John R.; Brooks, Dusty Marie
This report describes the methods, results, and conclusions of the analysis of 11 scenarios defined to exercise various options available in the xLPR (Extremely Low Probability of Rupture) Version 2.0 code. The scope of the scenario analysis is threefold: (i) exercise the various options and components comprising xLPR v2.0 and defining each scenario; (ii) develop and exercise methods for analyzing and interpreting xLPR v2.0 outputs; and (iii) exercise the various sampling options available in xLPR v2.0. The simulation workflow template developed during the course of this effort helps to form a basis for the application of the xLPR code to problems with similar inputs and probabilistic requirements, and addresses in a systematic manner the three points covered by the scope.
African-American Undergraduates' Perceptions and Attributions of Child Sexual Abuse
ERIC Educational Resources Information Center
Hestick, Henrietta; Perrino, Carrol S.
2009-01-01
This study examined perceptions of child sexual abuse and attributions of responsibility in a cross-sectional convenience sample of 384 African-American undergraduates using a scenario manipulating the age of the victim, gender of the victim, and gender of the perpetrator. Multiple interactions of respondent, victim, and perpetrator gender on…
Identity Orientation, Voice, and Judgments of Procedural Justice during Late Adolescence
ERIC Educational Resources Information Center
Fondacaro, Mark R.; Brank, Eve M.; Stuart, Jennifer; Villanueva-Abraham, Sara; Luescher, Jennifer; McNatt, Penny S.
2006-01-01
This study focused on the relationship between voice and judgments of procedural justice in a sample of older adolescents and examined potential moderating and mediating influences of identity orientation (personal, social, and collective) and negative emotional response. Participants read 1 of 2 different family conflict scenarios (voice and no…
Barriers to Helpseeking among New Zealand Prison Inmates
ERIC Educational Resources Information Center
Skogstad, Philip; Deane, Frank P.; Spicer, John
2005-01-01
Treatment avoidance or help-negation has been described in clinical and non-clinical samples, in response to real or imagined suicidal scenarios (Carlton & Deane, 2000; Rudd, Joiner & Rajab, 1995). The aims of the present study were to describe the process of seeking psychological help in prison based on inmate interviews and to assess the…
The Role of Emotion Expectancies in Adolescents' Moral Decision Making
ERIC Educational Resources Information Center
Krettenauer, Tobias; Jia, Fanli; Mosleh, Maureen
2011-01-01
This study investigated the impact of emotion expectancies on adolescents' moral decision making in hypothetical situations. The sample consisted of 160 participants from three different grade levels (mean age=15.79 years, SD=2.96). Participants were confronted with a set of scenarios that described various emotional outcomes of (im)moral actions…
Differential Item Functioning: Its Consequences. Research Report. ETS RR-10-01
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; Zhang, Jinming
2010-01-01
This report examines the consequences of differential item functioning (DIF) using simulated data. Its impact on total score, item response theory (IRT) ability estimate, and test reliability was evaluated in various testing scenarios created by manipulating the following four factors: test length, percentage of DIF items per form, sample sizes of…
A Database Design and Development Case: Home Theater Video
ERIC Educational Resources Information Center
Ballenger, Robert; Pratt, Renee
2012-01-01
This case consists of a business scenario of a small video rental store, Home Theater Video, which provides background information, a description of the functional business requirements, and sample data. The case provides sufficient information to design and develop a moderately complex database to assist Home Theater Video in solving their…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mor, Rivay; Netzer, Hagai; Trakhtenbrot, Benny
We report new Herschel observations of 25 z ≈ 4.8 extremely luminous optically selected active galactic nuclei (AGNs). Five of the sources have extremely large star-forming (SF) luminosities, L_SF, corresponding to SF rates (SFRs) of 2800-5600 M_Sun yr^-1 assuming a Salpeter initial mass function. The remaining sources have only upper limits on their SFRs, but stacking their Herschel images results in a mean SFR of 700 ± 150 M_Sun yr^-1. The higher SFRs in our sample are comparable to the highest observed values so far at any redshift. Our sample does not contain obscured AGNs, which enables us to investigate several evolutionary scenarios connecting supermassive black holes and SF activity in the early universe. The most probable scenario is that we are witnessing the peak of SF activity in some sources and the beginning of the post-starburst decline in others. We suggest that all 25 sources, which are at their peak AGN activity, are in large mergers. AGN feedback may be responsible for diminishing the SF activity in 20 of them, but is not operating efficiently in 5 others.
Uncertainty analyses of CO2 plume expansion subsequent to wellbore CO2 leakage into aquifers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Bacon, Diana H.; Engel, David W.
2014-08-01
In this study, we apply an uncertainty quantification (UQ) framework to CO2 sequestration problems. In one scenario, we look at the risk of wellbore leakage of CO2 into a shallow unconfined aquifer in an urban area; in another scenario, we study the effects of reservoir heterogeneity on CO2 migration. We combine various sampling approaches (quasi-Monte Carlo, probabilistic collocation, and adaptive sampling) in order to reduce the number of forward calculations while trying to fully explore the input parameter space and quantify the input uncertainty. The CO2 migration is simulated using the PNNL-developed simulator STOMP-CO2e (the water-salt-CO2 module). For computationally demanding simulations with 3D heterogeneity fields, we combined the framework with a scalable version of the simulator, eSTOMP, as the forward modeling tool. We built response curves and response surfaces of model outputs with respect to input parameters to examine individual and combined effects and to identify and rank the significance of the input parameters.
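The sampling-plus-response-surface workflow can be sketched as follows, with a toy algebraic function standing in for a STOMP-CO2e/eSTOMP forward run and hypothetical parameter names and ranges; this illustrates only the quasi-Monte Carlo option, not probabilistic collocation or adaptive sampling.

```python
import numpy as np
from scipy.stats import qmc

# Quasi-Monte Carlo (Sobol) design over two hypothetical input parameters:
# reservoir permeability (log10 m^2) and injection rate (kg/s).
sampler = qmc.Sobol(d=2, scramble=True, seed=42)
unit_sample = sampler.random_base2(m=7)                 # 2^7 = 128 runs
lo, hi = [-14.0, 1.0], [-12.0, 10.0]
X = qmc.scale(unit_sample, lo, hi)

# Toy forward model standing in for a full simulator run: plume radius response.
def toy_plume_radius(x):
    logk, rate = x
    return 50.0 * np.sqrt(rate) * 10 ** (0.3 * (logk + 13.0))

y = np.apply_along_axis(toy_plume_radius, 1, X)

# Quadratic response surface fitted by least squares.
A = np.column_stack([np.ones(len(X)), X, X ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("response-surface coefficients:", np.round(coef, 3))
```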
Lunar placement of Mars quarantine facility
NASA Technical Reports Server (NTRS)
Davidson, James E.; Mitchell, W. F.
1988-01-01
Advanced mission scenarios are currently being contemplated that would call for the retrieval of surface samples from Mars, from a comet, and from other places in the solar system. An important consideration for all of these sample return missions is quarantine. Quarantine facilities on the Moon offer unique advantages over other locations. The Moon offers gravity, distance, and vacuum. It is sufficiently near the Earth to allow rapid resupply and easy communication. It is sufficiently distant to lessen the psychological impact of a quarantine facility on Earth's human inhabitants. Finally, the Moon is airless, and seems to be devoid of life. It is, therefore, more suited to contamination control efforts.
Higher order correlations of IRAS galaxies
NASA Technical Reports Server (NTRS)
Meiksin, Avery; Szapudi, Istvan; Szalay, Alexander
1992-01-01
The higher order irreducible angular correlation functions are derived up to the eight-point function, for a sample of 4654 IRAS galaxies, flux-limited at 1.2 Jy in the 60 microns band. The correlations are generally found to be somewhat weaker than those for the optically selected galaxies, consistent with the visual impression of looser clusters in the IRAS sample. It is found that the N-point correlation functions can be expressed as the symmetric sum of products of N - 1 two-point functions, although the correlations above the four-point function are consistent with zero. The coefficients are consistent with the hierarchical clustering scenario as modeled by Hamilton and by Schaeffer.
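For reference, the hierarchical form referred to above is usually written, for the three-point case, as a symmetric sum of products of two two-point functions with a single amplitude (here denoted Q; this notation is assumed, not taken from the paper):

```latex
w_3(\theta_{12},\theta_{23},\theta_{31}) \,=\, Q\left[\, w(\theta_{12})\,w(\theta_{23}) + w(\theta_{23})\,w(\theta_{31}) + w(\theta_{31})\,w(\theta_{12}) \,\right]
```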
Single-arm phase II trial design under parametric cure models.
Wu, Jianrong
2015-01-01
The current practice of designing single-arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single-arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single-arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
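A minimal sketch of drawing single-arm trial data from one member of this model class, the exponential mixture cure model S(t) = π + (1 − π)·exp(−λt); the design values below are hypothetical, and the paper's test statistic and sample size formula are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_cure_model(n, cure_frac, hazard, follow_up):
    """Draw (time, event) pairs from a mixture cure model:
    S(t) = pi + (1 - pi) * exp(-lambda * t), censored at end of follow-up."""
    cured = rng.random(n) < cure_frac
    latency = rng.exponential(1.0 / hazard, size=n)     # times for non-cured patients
    t = np.where(cured, np.inf, latency)
    event = t <= follow_up
    time = np.minimum(t, follow_up)
    return time, event

# Hypothetical design values: 30% cure fraction, 12-month median survival
# among non-cured patients, 36 months of follow-up.
time, event = sample_cure_model(n=60, cure_frac=0.30,
                                hazard=np.log(2) / 12.0, follow_up=36.0)
print(f"observed events: {event.sum()} / {len(event)}")
```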
Kang, Shuli; Li, Qingjiao; Chen, Quan; Zhou, Yonggang; Park, Stacy; Lee, Gina; Grimes, Brandon; Krysan, Kostyantyn; Yu, Min; Wang, Wei; Alber, Frank; Sun, Fengzhu; Dubinett, Steven M; Li, Wenyuan; Zhou, Xianghong Jasmine
2017-03-24
We propose a probabilistic method, CancerLocator, which exploits the diagnostic potential of cell-free DNA by determining not only the presence but also the location of tumors. CancerLocator simultaneously infers the proportions and the tissue-of-origin of tumor-derived cell-free DNA in a blood sample using genome-wide DNA methylation data. CancerLocator outperforms two established multi-class classification methods on simulations and real data, even in scenarios with a low proportion of tumor-derived DNA in the cell-free DNA. CancerLocator also achieves promising results on patient plasma samples with low DNA methylation sequencing coverage.
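CancerLocator itself is a probabilistic model; purely as an illustration of the underlying idea of resolving a plasma methylation profile into tissue contributions, the sketch below uses non-negative least squares against hypothetical reference profiles (all names, sizes, and mixing proportions are assumptions).

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Hypothetical reference methylation profiles (rows: CpG markers, columns: tissues),
# e.g. columns = [normal blood, lung tumor, liver tumor].
R = rng.uniform(0.0, 1.0, size=(500, 3))

# Simulated plasma profile: 92% blood, 8% lung-tumor DNA, plus measurement noise.
truth = np.array([0.92, 0.08, 0.00])
plasma = R @ truth + rng.normal(0.0, 0.02, size=500)

# Non-negative least squares followed by renormalisation to proportions.
w, _ = nnls(R, plasma)
w /= w.sum()
print("estimated tissue proportions:", np.round(w, 3))
```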
Large Ensemble Analytic Framework for Consequence-Driven Discovery of Climate Change Scenarios
NASA Astrophysics Data System (ADS)
Lamontagne, Jonathan R.; Reed, Patrick M.; Link, Robert; Calvin, Katherine V.; Clarke, Leon E.; Edmonds, James A.
2018-03-01
An analytic scenario generation framework is developed based on the idea that the same climate outcome can result from very different socioeconomic and policy drivers. The framework builds on the Scenario Matrix Framework's abstraction of "challenges to mitigation" and "challenges to adaptation" to facilitate the flexible discovery of diverse and consequential scenarios. We combine visual and statistical techniques for interrogating a large factorial data set of 33,750 scenarios generated using the Global Change Assessment Model. We demonstrate how the analytic framework can aid in identifying which scenario assumptions are most tied to user-specified measures of policy-relevant outcomes of interest, in our example high or low mitigation costs. We show that the current approach for selecting reference scenarios can miss policy-relevant scenario narratives that often emerge as hybrids of optimistic and pessimistic scenario assumptions. We also show that the same scenario assumption can be associated with both high and low mitigation costs depending on the climate outcome of interest and the mitigation policy context. In the illustrative example, we show that agricultural productivity, population growth, and economic growth are most predictive of the level of mitigation costs. Formulating policy-relevant scenarios of deeply and broadly uncertain futures benefits from large ensemble-based exploration of quantitative measures of consequences. To this end, we have contributed a large database of climate change futures that can support "bottom-up" scenario generation techniques that capture a broader array of consequences than those that emerge from limited sampling of a few reference scenarios.
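A toy sketch of the "which assumptions drive the consequence of interest" screening step, using a small synthetic factorial ensemble rather than Global Change Assessment Model output; the factor names, level coding, and outcome function are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy stand-in for a factorial scenario ensemble: three assumption dimensions,
# each with three levels (0 = pessimistic, 1 = central, 2 = optimistic).
factors = ["ag_productivity", "population_growth", "economic_growth"]
levels = np.array(np.meshgrid(*[range(3)] * 3)).T.reshape(-1, 3)
X = np.repeat(levels, 50, axis=0)                       # 27 cells x 50 replicates

# Hypothetical mitigation-cost outcome, driven mostly by the first two factors.
cost = 100 - 20 * X[:, 0] - 10 * X[:, 1] - 2 * X[:, 2] + rng.normal(0, 5, len(X))
high_cost = cost > np.percentile(cost, 75)              # consequence of interest

# Rank factors by how much the share of high-cost scenarios varies across levels.
for j, name in enumerate(factors):
    shares = [high_cost[X[:, j] == lev].mean() for lev in range(3)]
    print(f"{name:18s} high-cost share by level: "
          + "  ".join(f"{s:.2f}" for s in shares))
```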
Modeling Compound Flood Hazards in Coastal Embayments
NASA Astrophysics Data System (ADS)
Moftakhari, H.; Schubert, J. E.; AghaKouchak, A.; Luke, A.; Matthew, R.; Sanders, B. F.
2017-12-01
Coastal cities around the world are built on lowland topography adjacent to coastal embayments and river estuaries, where multiple factors threaten to increase flood hazards (e.g. sea level rise and river flooding). Quantitative risk assessment is required for administration of flood insurance programs and the design of cost-effective flood risk reduction measures. This demands a characterization of extreme water levels such as 100- and 500-year return period events. Furthermore, hydrodynamic flood models are routinely used to characterize localized flood intensities (i.e., local depth and velocity) based on boundary forcing sampled from extreme value distributions. For example, extreme flood discharges in the U.S. are estimated from measured flood peaks using the Log-Pearson Type III distribution. However, configuring hydrodynamic models for coastal embayments is challenging because of compound extreme flood events: events caused by a combination of extreme sea levels, extreme river discharges, and possibly other factors such as extreme waves and precipitation causing pluvial flooding in urban developments. Here, we present an approach for flood risk assessment that coordinates multivariate extreme analysis with hydrodynamic modeling of coastal embayments. First, we evaluate the significance of the correlation structure between terrestrial freshwater inflow and oceanic variables; second, this correlation structure is described using copula functions in the unit joint probability domain; and third, we choose a series of compound design scenarios for hydrodynamic modeling based on their occurrence likelihood. The design scenarios include the most likely compound event (with the highest joint probability density), a preferred marginal scenario, and reproduced time series of ensembles based on Monte Carlo sampling of the bivariate hazard domain. The comparison between the resulting extreme water dynamics under the compound hazard scenarios described above provides insight into the strengths and weaknesses of each approach and helps modelers choose the scenario that best fits the needs of their project. The proposed risk assessment approach can help flood hazard modeling practitioners achieve a more reliable estimate of risk by cautiously reducing the dimensionality of the hazard analysis.
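A minimal sketch of the copula step, assuming a Gaussian copula with hypothetical lognormal and GEV marginals for river discharge and storm tide; picking a single design event as the joint sample closest to both empirical 99th percentiles is one simple option, not the paper's full scenario set.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical marginals for the two drivers of a compound coastal flood:
# peak river discharge (lognormal, m^3/s) and storm-tide level (GEV, m).
discharge_marginal = stats.lognorm(s=0.6, scale=300.0)
tide_marginal = stats.genextreme(c=-0.1, loc=1.2, scale=0.25)

# Gaussian copula: correlated normals mapped to uniforms, then to the marginals.
rho = 0.5
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=10000)
u = stats.norm.cdf(z)
discharge = discharge_marginal.ppf(u[:, 0])
tide = tide_marginal.ppf(u[:, 1])

# Simple "design scenario" choice: joint sample closest to both 99th percentiles.
q_d, q_t = np.quantile(discharge, 0.99), np.quantile(tide, 0.99)
idx = np.argmin((discharge - q_d) ** 2 / q_d ** 2 + (tide - q_t) ** 2 / q_t ** 2)
print(f"design event: discharge ~ {discharge[idx]:.0f} m^3/s, tide ~ {tide[idx]:.2f} m")
```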
Assessing Discriminative Performance at External Validation of Clinical Prediction Models
Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.
2016-01-01
Introduction: External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods: We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated them in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results: The permutation test indicated that the validation and development set were homogenous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion: The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753
Assessing Discriminative Performance at External Validation of Clinical Prediction Models.
Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W
2016-01-01
External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated them in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogenous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
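One way such a permutation test can be set up is sketched below with synthetic data: the c-statistic is computed in a development and a validation set, and the observed drop is compared with drops obtained after randomly re-assigning observations to the two sets. The data-generating assumptions and test details here are illustrative and may differ from the study's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def c_statistic(y, p):
    """Concordance (c-statistic / AUC): probability that a random event has a
    higher predicted risk than a random non-event (ties count one half)."""
    pos, neg = p[y == 1], p[y == 0]
    diffs = pos[:, None] - neg[None, :]
    return (np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)) / (len(pos) * len(neg))

# Hypothetical linear predictors and outcomes for development and validation sets;
# the validation set has a less heterogeneous case-mix (smaller predictor spread).
lp_dev = rng.normal(0.0, 1.0, 400)
y_dev = rng.binomial(1, 1 / (1 + np.exp(-lp_dev)))
lp_val = rng.normal(0.0, 0.6, 400)
y_val = rng.binomial(1, 1 / (1 + np.exp(-lp_val)))

observed_drop = c_statistic(y_dev, lp_dev) - c_statistic(y_val, lp_val)

# Permutation reference distribution: re-assign set labels at random.
lp_all, y_all = np.concatenate([lp_dev, lp_val]), np.concatenate([y_dev, y_val])
drops = []
for _ in range(500):
    perm = rng.permutation(len(lp_all))
    a, b = perm[:400], perm[400:]
    drops.append(c_statistic(y_all[a], lp_all[a]) - c_statistic(y_all[b], lp_all[b]))
p_value = np.mean(np.abs(drops) >= abs(observed_drop))
print(f"observed drop in c: {observed_drop:.3f}, permutation p ~ {p_value:.2f}")
```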
Thompson, Katy-Anne; Paton, Susan; Pottage, Thomas; Bennett, Allan
2018-05-09
Four commercially available robotic vacuum cleaners were assessed for sampling efficiency of wet disseminated Bacillus atrophaeus spores on carpet, Polyvinyl Chloride (PVC) and laminate flooring. Furthermore, their operability was evaluated and decontamination efficiency of one robot was assessed using a sodium hypochlorite solution. In an environmental chamber, robots self-navigated around 4 m 2 of flooring containing a single contaminated 0.25 m 2 tile (ca. 10 4 spores per cm 2 ). Contamination levels at pre-determined locations were assessed by macrofoam swabs (PVC and laminate) or water soluble tape (carpet), before and after sampling. Robots were dismantled post-sampling and spore recoveries assessed. Aerosol contamination was also measured during sampling. Robot sampling efficiencies were variable, however, robots recovered most spores from laminate (up to 17.1%), then PVC, and lastly carpet. All robots spread contamination from the 'hotspot' (all robots spread < 0.6% of the contamination to other areas) and became surface contaminated. Spores were detected at low levels during air sampling (<5.6 spores l -1 ). Liquid decontamination inactivated 99.1% of spores from PVC. Robotic vacuum cleaners show promise for both sampling and initial decontamination of indoor flooring. In the event of a bioterror incident, e.g. deliberate release of Bacillus anthracis spores, areas require sampling to determine the magnitude and extent of contamination, and to establish decontamination efficacy. In this study we investigate robotic sampling methods against high concentrations of bacterial spores applied by wet deposition to different floorings, contamination spread to other areas, potential transfer of spores to the operators and assessment of a wet vacuum robot for spore inactivation. The robots' usability was evaluated and how they can be employed in real life scenarios. This will help to reduce the economic cost of sampling and the risk to sampling/decontamination teams. This article is protected by copyright. All rights reserved. This article is protected by copyright. All rights reserved.
Burger, Emily A; Sy, Stephen; Nygård, Mari; Kim, Jane J
2016-01-01
Background: Human papillomavirus (HPV) testing allows women to self-collect cervico-vaginal cells at home (i.e., self-sampling). Using primary data from a randomized pilot study, we evaluated the long-term consequences and cost-effectiveness of using self-sampling to improve participation in routine cervical cancer screening in Norway. Methods: We compared a strategy reflecting screening participation (using reminder letters) to strategies that involved mailing self-sampling device kits to women non-compliant to screening within a 5-year or 10-year period under two scenarios: A) self-sampling respondents had moderate under-screening histories, or B) respondents to self-sampling had moderate and severe under-screening histories. Model outcomes included quality-adjusted life-years (QALY) and lifetime costs. The ‘most cost-effective’ strategy was identified as the strategy just below $100,000 per QALY gained. Results: Mailing self-sampling device kits to all women non-compliant to screening within a 5-year or 10-year period can be more effective and less costly than the current reminder letter policy; however, the optimal self-sampling strategy was dependent on the profile of self-sampling respondents. For example, ‘10-yearly self-sampling’ is preferred ($95,500 per QALY gained) if ‘5-yearly self-sampling’ could only attract moderate under-screeners; however, ‘5-yearly self-sampling’ is preferred if this strategy could additionally attract severe under-screeners. Conclusions: Targeted self-sampling of non-compliers likely represents good value-for-money; however, the preferred strategy is contingent on the screening histories and compliance of respondents. Impact: The magnitude of the health benefit and optimal self-sampling strategy is dependent on the profile and behavior of respondents. Health authorities should understand these factors prior to selecting and implementing a self-sampling policy. PMID:27624639
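The decision rule described (a strategy is preferred if it sits just below $100,000 per QALY gained) amounts to comparing incremental cost-effectiveness ratios against the threshold; a minimal sketch with entirely hypothetical costs and QALYs (and without formal dominance checks) follows.

```python
# Minimal cost-effectiveness comparison sketch with hypothetical numbers;
# the study's actual costs and QALYs are not reproduced here.
threshold = 100_000  # USD per QALY gained

strategies = {
    "reminder letters":        {"cost": 1_000.0, "qaly": 20.000},
    "10-yearly self-sampling": {"cost": 1_050.0, "qaly": 20.004},
    "5-yearly self-sampling":  {"cost": 1_150.0, "qaly": 20.005},
}

# Compare each strategy with the next-less-effective one (simplified sequential ICERs).
ordered = sorted(strategies.items(), key=lambda kv: kv[1]["qaly"])
base_name, base = ordered[0]
for name, s in ordered[1:]:
    icer = (s["cost"] - base["cost"]) / (s["qaly"] - base["qaly"])
    verdict = "cost-effective" if icer < threshold else "not cost-effective"
    print(f"{name} vs {base_name}: ICER = {icer:,.0f} $/QALY ({verdict})")
    base_name, base = name, s
```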
NASA Astrophysics Data System (ADS)
Wu, A. M.; Nater, E. A.; Dalzell, B. J.; Perry, C. H.
2014-12-01
The USDA Forest Service's Forest Inventory and Analysis (FIA) program is a national effort assessing current forest resources to ensure sustainable management practices, to assist planning activities, and to report critical status and trends. For example, estimates of carbon stocks and stock change in FIA are reported as the official United States submission to the United Nations Framework Convention on Climate Change. While the main effort in FIA has been focused on aboveground biomass, soil is a critical component of this system. FIA sampled forest soils in the early 2000s and remeasurement is now underway. However, soil sampling is repeated on a 10-year interval (or longer), and it is uncertain what magnitude of change in soil organic carbon (SOC) may be detectable with the current sampling protocol. We aim to identify the sensitivity and variability of SOC in the FIA database, and to determine the amount of SOC change that can be detected with the current sampling scheme. For this analysis, we attempt to answer the following questions: 1) What is the sensitivity (power) of SOC data in the current FIA database? 2) How does the minimum detectable change in forest SOC respond to changes in sampling intervals and/or sample point density? Soil samples in the FIA database represent 0-10 cm and 10-20 cm depth increments with a 10-year sampling interval. We are investigating the variability of SOC and its change over time for composite soil data in each FIA region (Pacific Northwest, Interior West, Northern, and Southern). To guide future sampling efforts, we are employing statistical power analysis to examine the minimum detectable change in SOC storage. We are also investigating the sensitivity of SOC storage changes under various scenarios of sample size and/or sample frequency. This research will inform the design of future FIA soil sampling schemes and improve the information available to international policy makers, university and industry partners, and the public.
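The power question posed above can be illustrated with the standard normal-approximation formula for the minimum detectable change of a paired (remeasurement) design; the standard deviation of the 10-year SOC change used below is an assumed value, not an FIA estimate.

```python
from scipy.stats import norm

def minimum_detectable_change(sd_change, n, alpha=0.05, power=0.8):
    """Smallest mean SOC change (same units as sd_change) detectable with a
    two-sided paired test, using the normal-approximation power formula."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) * sd_change / n ** 0.5

# Hypothetical value: SD of the plot-level 10-year SOC change of 8 Mg C/ha.
for n in (100, 500, 1000, 5000):
    print(f"n = {n:5d} plots -> MDC ~ {minimum_detectable_change(8.0, n):.2f} Mg C/ha")
```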
Minorczyk, Maria; Góralczyk, Katarzyna; Struciński, Paweł; Hernik, Agnieszka; Czaja, Katarzyna; Łyczewska, Monika; Korcz, Wojciech; Starski, Andrzej; Ludwicki, Jan K
2012-01-01
Thermal processing and long storage of food lead to reactions between reducing sugars and amino acids, or with ascorbic acid, carbohydrates or polyunsaturated fatty acids. As a result of these reactions, new compounds are created; one of these, with an adverse effect on human health, is furan. The aim of this paper was to estimate infants' exposure to furan found in thermally processed jarred food products, and to characterize the risk by comparing the exposure to the reference dose (RfD) and calculating margins of exposure. The material consisted of 301 samples of thermally processed food for infants taken from the Polish market in the years 2008-2010. The samples included vegetable-meat, vegetable and fruit jarred meals for infants and young children, in which furan levels were analyzed by a GC/MS technique. The exposure to furan was assessed for infants aged 3, 4, 6, 9 and 12 months using different consumption scenarios. The levels of furan ranged from <1 µg/kg (LOQ) to 166.9 µg/kg, and the average furan concentration in all samples was 40.2 µg/kg. The estimated exposures, calculated with different nutrition scenarios, were in the range from 0.03 to 3.56 µg/kg bw/day and in some cases exceeded the RfD set at 1 µg/kg bw/day. Margins of exposure (MOE) reached values even below 300 for scenarios assuming higher consumption of vegetable and vegetable-meat products. The magnitude of exposure to furan present in ready-to-eat meals among Polish infants is similar to data reported previously in other European countries but slightly higher than indicated in the recent EFSA report. As in some cases the estimated intake exceeds the RfD, and MOE values are much lower than 10000, indicating a potential health concern, it is necessary to continue monitoring of furan in jarred food and to estimate its intake by infants.
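The exposure and margin-of-exposure arithmetic follows the usual pattern (exposure = concentration × consumption / body weight; MOE = benchmark dose / exposure); the sketch below uses the study's mean concentration and RfD, but the consumption figure, body weight, and benchmark dose are assumed values, not figures from the study or from EFSA.

```python
# Back-of-the-envelope exposure and margin-of-exposure sketch.
furan_conc = 40.2          # mean furan concentration, ug/kg product (from the study)
portion = 0.190            # assumed daily intake of jarred food, kg/day
body_weight = 8.0          # assumed infant body weight, kg
rfd = 1.0                  # reference dose used in the study, ug/kg bw/day
bmdl_assumed = 960.0       # assumed benchmark dose lower bound, ug/kg bw/day

exposure = furan_conc * portion / body_weight      # ug/kg bw/day
moe = bmdl_assumed / exposure
print(f"exposure ~ {exposure:.2f} ug/kg bw/day "
      f"({'above' if exposure > rfd else 'below'} the RfD), MOE ~ {moe:.0f}")
```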
Two-step adaptive management for choosing between two management actions
Moore, Alana L.; Walker, Leila; Runge, Michael C.; McDonald-Madden, Eve; McCarthy, Michael A
2017-01-01
Adaptive management is widely advocated to improve environmental management. Derivations of optimal strategies for adaptive management, however, tend to be case specific and time consuming. In contrast, managers might seek relatively simple guidance, such as insight into when a new potential management action should be considered, and how much effort should be expended on trialing such an action. We constructed a two-time-step scenario where a manager is choosing between two possible management actions. The manager has a total budget that can be split between a learning phase and an implementation phase. We use this scenario to investigate when and how much a manager should invest in learning about the management actions available. The optimal investment in learning can be understood intuitively by accounting for the expected value of sample information, the benefits that accrue during learning, the direct costs of learning, and the opportunity costs of learning. We find that the optimal proportion of the budget to spend on learning is characterized by several critical thresholds that mark a jump from spending a large proportion of the budget on learning to spending nothing. For example, as sampling variance increases, it is optimal to spend a larger proportion of the budget on learning, up to a point: if the sampling variance passes a critical threshold, it is no longer beneficial to invest in learning. Similar thresholds are observed as a function of the total budget and the difference in the expected performance of the two actions. We illustrate how this model can be applied using a case study of choosing between alternative rearing diets for hihi, an endangered New Zealand passerine. Although the model presented is a simplified scenario, we believe it is relevant to many management situations. Managers often have relatively short time horizons for management, and might be reluctant to consider further investment in learning and monitoring beyond collecting data from a single time period.
An assessment of climate change impacts on maize yields in Hebei Province of China.
Chen, Yongfu; Han, Xinru; Si, Wei; Wu, Zhigang; Chien, Hsiaoping; Okamoto, Katsuo
2017-03-01
The climate change impacts on maize yields are quantified in this paper using statistical models with panel data from 3731 farmers' observations across nine sample villages in Hebei Province of China. The marginal impacts of climate change and the simulated impacts on maize yields based on scenarios of Representative Concentration Pathways 2.6, 4.5, 6.0, and 8.5 from the global climate models MIROC5 (Model for Interdisciplinary Research on Climate version 5) and MRI-CGCM3 (Meteorological Research Institute Coupled General Circulation Model version 3) were then calculated, analyzed, and explained. The results indicate, first, that climate change impacts on maize yields were significant: a 1°C warming or a 1 mm decrease in precipitation resulted in a 150.255 kg or a 1.941 kg loss in maize yield per hectare, respectively. Second, villages in Hebei Province with latitudes below 39.832° and longitudes above 114.839° suffered losses due to warm weather. Third, the simulated impacts for the full sample are all negative based on scenarios from MIROC5, and their magnitudes are greater than those based on MRI-CGCM3. Under the 2050s scenarios, the largest loss in maize yield per hectare for the full sample accounts for about one-tenth of the mean maize yield from 2004 to 2010, and all of the villages are impacted. Hence, it is important to help farms adopt adaptation strategies to tackle the risk of maize yield losses from climate change, and it is necessary to develop agricultural synthesis services as a public adaptation policy at the village level to interact with the adaptation strategy at the farm level. Copyright © 2016 Elsevier B.V. All rights reserved.
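The reported marginal effects translate into simple yield-loss arithmetic; the warming and drying increments below are illustrative assumptions, not scenario values taken from MIROC5 or MRI-CGCM3.

```python
# Marginal-impact arithmetic using the coefficients reported in the abstract.
loss_per_degree = 150.255      # kg/ha per 1 degC of warming
loss_per_mm_drier = 1.941      # kg/ha per 1 mm decrease in precipitation

delta_t = 2.0                  # assumed warming, degC
delta_p_decrease = 30.0        # assumed decrease in precipitation, mm

yield_loss = loss_per_degree * delta_t + loss_per_mm_drier * delta_p_decrease
print(f"projected yield loss ~ {yield_loss:.0f} kg/ha")
```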
Two-step adaptive management for choosing between two management actions.
Moore, Alana L; Walker, Leila; Runge, Michael C; McDonald-Madden, Eve; McCarthy, Michael A
2017-06-01
Adaptive management is widely advocated to improve environmental management. Derivations of optimal strategies for adaptive management, however, tend to be case specific and time consuming. In contrast, managers might seek relatively simple guidance, such as insight into when a new potential management action should be considered, and how much effort should be expended on trialing such an action. We constructed a two-time-step scenario where a manager is choosing between two possible management actions. The manager has a total budget that can be split between a learning phase and an implementation phase. We use this scenario to investigate when and how much a manager should invest in learning about the management actions available. The optimal investment in learning can be understood intuitively by accounting for the expected value of sample information, the benefits that accrue during learning, the direct costs of learning, and the opportunity costs of learning. We find that the optimal proportion of the budget to spend on learning is characterized by several critical thresholds that mark a jump from spending a large proportion of the budget on learning to spending nothing. For example, as sampling variance increases, it is optimal to spend a larger proportion of the budget on learning, up to a point: if the sampling variance passes a critical threshold, it is no longer beneficial to invest in learning. Similar thresholds are observed as a function of the total budget and the difference in the expected performance of the two actions. We illustrate how this model can be applied using a case study of choosing between alternative rearing diets for hihi, an endangered New Zealand passerine. Although the model presented is a simplified scenario, we believe it is relevant to many management situations. Managers often have relatively short time horizons for management, and might be reluctant to consider further investment in learning and monitoring beyond collecting data from a single time period. © 2017 by the Ecological Society of America.
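A stylized version of the two-phase budget question can be simulated directly: spend a fraction of the budget trialling the new action, update a normal prior on its mean performance, then commit the remainder to whichever action looks better. All priors, costs, and observation-noise values below are assumptions, and this sketch does not reproduce the paper's analytical thresholds.

```python
import numpy as np

rng = np.random.default_rng(9)

def expected_return(learn_frac, budget=100.0, unit_cost=1.0,
                    mu_old=1.00, mu_new_prior=(1.05, 0.20), sigma_obs=0.5,
                    sims=4000):
    """Expected total benefit when a fraction of the budget trials the new action
    (normal prior on its mean) before committing the rest to the better-looking one."""
    n_trial = int(learn_frac * budget / unit_cost)
    n_implement = int((1 - learn_frac) * budget / unit_cost)
    mu0, tau0 = mu_new_prior
    total = 0.0
    for _ in range(sims):
        mu_new = rng.normal(mu0, tau0)                       # true (unknown) mean
        if n_trial > 0:
            obs = rng.normal(mu_new, sigma_obs / np.sqrt(n_trial))  # sample mean
            post_prec = 1 / tau0 ** 2 + n_trial / sigma_obs ** 2
            post_mean = (mu0 / tau0 ** 2 + obs * n_trial / sigma_obs ** 2) / post_prec
        else:
            post_mean = mu0
        choose_new = post_mean > mu_old
        total += n_trial * mu_new + n_implement * (mu_new if choose_new else mu_old)
    return total / sims

for f in (0.0, 0.1, 0.3, 0.5):
    print(f"learning fraction {f:.1f}: expected benefit ~ {expected_return(f):.1f}")
```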
Treatment decision-making and the form of risk communication: results of a factorial survey.
Hembroff, Larry A; Holmes-Rovner, Margaret; Wills, Celia E
2004-11-16
Prospective users of preventive therapies often must evaluate complex information about therapeutic risks and benefits. The purpose of this study was to evaluate the effect of relative and absolute risk information on patient decision-making in scenarios typical of health information for patients. Factorial experiments were embedded within a telephone survey of the Michigan adult, non-institutionalized, English-speaking population; the average interview lasted 23 minutes. Subjects and sample design: 952 randomly selected adults within a random-digit dial sample of Michigan households; the completion rate was 54.3%. When presented with hypothetical information regarding additional risks of breast cancer from a medication to prevent a bone disease, respondents reduced their willingness to recommend that a female friend take the medication compared to the baseline rate (66.8% = yes). The decrease was significantly greater with relative risk information. Additional benefit information regarding prevention of heart disease by the medication increased willingness to recommend the medication to a female friend relative to the baseline scenario, but did not differ between absolute and relative risk formats. When information about both the increased risk of breast cancer and the reduced risk of heart disease was provided, typical respondents appeared to make rational decisions consistent with Expected Utility Theory, but the information presentation format affected choices. The 11%-33% making decisions contrary to the medical indications were more likely to be Hispanic, older, more educated, smokers, and to have children in the home. In scenarios typical of health risk information, relative risk information led respondents to make non-normative decisions that were "corrected" when the frame used absolute risk information. This population sample made generally rational decisions when presented with absolute risk information, even in the context of a telephone interview requiring respondents to remember the rates given. The lack of effect of gender and race suggests that a standard strategy of presenting absolute risk information may improve patient decision-making.
Abandoning the dead donor rule? A national survey of public views on death and organ donation.
Nair-Collins, Michael; Green, Sydney R; Sutin, Angelina R
2015-04-01
Brain dead organ donors are the principal source of transplantable organs. However, it is controversial whether brain death is the same as biological death. Therefore, it is unclear whether organ removal in brain death is consistent with the 'dead donor rule', which states that organ removal must not cause death. Our aim was to evaluate the public's opinion about organ removal if explicitly described as causing the death of a donor in irreversible apneic coma. We conducted a cross-sectional internet survey of the American public (n=1096). Questionnaire domains included opinions about a hypothetical scenario of organ removal described as causing the death of a patient in irreversible coma, and items measuring willingness to donate organs after death. Some 71% of the sample agreed that it should be legal for patients to donate organs in the scenario described and 67% agreed that they would want to donate organs in a similar situation. Of the 85% of the sample who agreed that they were willing to donate organs after death, 76% agreed that they would donate in the scenario of irreversible coma with organ removal causing death. There appears to be public support for organ donation in a scenario explicitly described as violating the dead donor rule. Further, most but not all people who would agree to donate when organ removal is described as occurring after death would also agree to donate when organ removal is described as causing death in irreversible coma. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Infrastructure to Support Ultra High Throughput Biodosimetry Screening after a Radiological Event
Garty, G.; Karam, P.A.; Brenner, D. J.
2011-01-01
Purpose: After a large-scale radiological event, there will be a pressing need to assess, within a few days, the radiation doses received by tens or hundreds of thousands of individuals. This is for triage, to prevent treatment locations from being overwhelmed in what is sure to be a resource-limited scenario, as well as to facilitate dose-dependent treatment decisions. In addition there are psychosocial considerations, in that active reassurance of minimal exposure is a potentially effective antidote to mass panic, as well as long-term considerations, to facilitate later studies of cancer and other long-term disease risks. Materials and Methods: As described elsewhere in this issue, we are developing a Rapid Automated Biodosimetry Tool (RABiT). The RABiT allows high throughput analysis of thousands of blood samples per day, providing a dose estimate that can be used to support clinical triage and treatment decisions. Results: Development of the RABiT has motivated us to consider the logistics of incorporating such a system into the existing emergency response scenarios of a large metropolitan area. We present here a view of how one or more centralized biodosimetry readout devices might be incorporated into an infrastructure in which fingerstick blood samples are taken at many distributed locations within an affected city or region and transported to centralized locations. Conclusions: High throughput biodosimetry systems offer the opportunity to perform biodosimetric assessments on a large number of persons. As such systems reach a high level of maturity, emergency response scenarios will need to be tweaked to make use of these powerful tools. This can be done relatively easily within the framework of current scenarios. PMID:21675819
Riparian vegetation structure under desertification scenarios
NASA Astrophysics Data System (ADS)
Rosário Fernandes, M.; Segurado, Pedro; Jauch, Eduardo; Ferreira, M. Teresa
2015-04-01
Riparian areas are responsible for many ecological and ecosystem services, including a filtering function, that are considered crucial to the preservation of water quality and to social benefits. The main goal of this study is to quantify and understand riparian variability under desertification scenario(s) and to identify the optimal riparian indicators for water scarcity and droughts (WS&D), thereby improving river basin management. The study was performed in the Iberian Tâmega basin, using riparian woody patches mapped by visual interpretation of Google Earth imagery along 130 sampling units of 250 m long river stretches. Eight riparian structural indicators, related to the lateral dimension, weighted area and shape complexity of riparian patches, were calculated using the Patch Analyst extension for ArcGIS 10. A set of 29 hydrological, climatic, and hydrogeomorphological variables were computed by a water modelling system (MOHID) using monthly meteorological data between 2008 and 2014. Land-use classes were also calculated, in a 250 m buffer surrounding each sampling unit, using a classification system based on Corine Land Cover. Boosted Regression Trees identified mean width (MW) as the optimal riparian indicator for water scarcity and drought, followed by the weighted class area (WCA) (classification accuracy = 0.79 and 0.69, respectively). Average flow and Strahler number were consistently selected by all boosted models as the most important explanatory variables. However, a combined effect of hydrogeomorphology and land use can explain the high variability found in riparian width, mainly in the Tâmega tributaries. Riparian patches are larger towards the Tâmega river mouth, although with lower shape complexity, probably related to more continuous and almost monospecific stands. Climatic, hydrological and land-use scenarios, singly and combined, were used to quantify the riparian variability in response to these changes and to assess the loss of riparian functions such as nutrient incorporation and sediment flux alteration.
Stratification-Based Outlier Detection over the Deep Web.
Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming
2016-01-01
For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection has not considered the context of the deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over the deep web. In the context of the deep web, users must submit queries through a query interface to retrieve corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribution of this paper is to develop a new data mining method for outlier detection over the deep web. In our approach, the query space of a deep web data source is stratified based on a pilot sample. Neighborhood sampling and uncertainty sampling are developed in this paper with the goal of improving recall and precision based on stratification. Finally, a careful performance evaluation of our algorithm confirms that our approach can effectively detect outliers over the deep web.
Mars rover sample return: An exobiology science scenario
NASA Technical Reports Server (NTRS)
Rosenthal, D. A.; Sims, M. H.; Schwartz, Deborah E.; Nedell, S. S.; Mckay, Christopher P.; Mancinelli, Rocco L.
1988-01-01
A mission designed to collect and return samples from Mars will provide information regarding its composition, history, and evolution. At the same time, a sample return mission poses a technical challenge: sophisticated, semi-autonomous, robotic spacecraft systems must be developed in order to carry out complex operations at the surface of a very distant planet. An interdisciplinary effort was conducted to consider how such a mission can be realistically structured to maximize the planetary science return. The focus was to concentrate on a particular set of scientific objectives (exobiology), to determine the instrumentation and analyses required to search for biological signatures, and to evaluate what analyses and decision making can be effectively performed by the rover in order to minimize the overhead of constant communication between Mars and the Earth. Investigations were also begun in the area of machine vision to determine whether layered sedimentary structures can be recognized autonomously, and preliminary results are encouraging.
Stratification-Based Outlier Detection over the Deep Web
Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S.; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming
2016-01-01
For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work on outlier detection does not consider the context of the deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over the deep web. In the deep web context, users must submit queries through a query interface to retrieve the corresponding data. Therefore, traditional data mining methods cannot be applied directly. The primary contribution of this paper is to develop a new data mining method for outlier detection over the deep web. In our approach, the query space of a deep web data source is stratified based on a pilot sample. Neighborhood sampling and uncertainty sampling are developed in this paper with the goal of improving recall and precision based on stratification. Finally, a careful performance evaluation of our algorithm confirms that our approach can effectively detect outliers in the deep web. PMID:27313603
Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning
Ichnowski, Jeffrey; Prins, Jan F.; Alterovitz, Ron
2014-01-01
We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU’s cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot’s configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot. PMID:25419474
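A toy illustration of the underlying idea, not CARRT* itself: bucketing configuration samples into small spatial regions so that a nearest-neighbor query touches only a small, cache-sized subset of points rather than the whole sample set. All parameters are arbitrary, and the ring search is approximate near cell boundaries.

```python
# Toy illustration: grid-bucketed nearest-neighbor search over 2-D samples.
import numpy as np

class GridNN:
    def __init__(self, cell=0.1):
        self.cell = cell
        self.buckets = {}  # (i, j) cell index -> list of points in that cell

    def insert(self, p):
        key = (int(p[0] // self.cell), int(p[1] // self.cell))
        self.buckets.setdefault(key, []).append(p)

    def nearest(self, q):
        qi, qj = int(q[0] // self.cell), int(q[1] // self.cell)
        best, best_d = None, np.inf
        for r in range(3):  # grow the searched square of cells until a hit
            cells = [(qi + di, qj + dj)
                     for di in range(-r, r + 1) for dj in range(-r, r + 1)]
            for key in cells:
                for p in self.buckets.get(key, []):
                    d = np.hypot(p[0] - q[0], p[1] - q[1])
                    if d < best_d:
                        best, best_d = p, d
            if best is not None:  # approximate: stop at the first square with a point
                return best, best_d
        return best, best_d

rng = np.random.default_rng(2)
nn = GridNN()
for p in rng.uniform(0, 1, size=(5000, 2)):
    nn.insert(tuple(p))
print(nn.nearest((0.5, 0.5)))
```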
Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning.
Ichnowski, Jeffrey; Prins, Jan F; Alterovitz, Ron
2014-05-01
We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU's cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot's configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot.
Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qin; Florita, Anthony R; Krishnan, Venkat K
Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power and are currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs on power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
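The scenario-generation step can be illustrated with a short sketch; the forecast-error history, the point forecast, and the grid-based inverse-transform implementation below are assumptions for illustration, not the paper's data or code.

```python
# Sketch of the scenario-generation step: fit a Gaussian mixture to historical
# wind power forecast errors, build a numerical CDF, and draw a large number of
# error scenarios by inverse-transform sampling.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Hypothetical history: forecast errors in MW (a mixture of small and large misses).
errors = np.concatenate([rng.normal(0, 5, 4000), rng.normal(-15, 10, 1000)])

gmm = GaussianMixture(n_components=3, random_state=0).fit(errors.reshape(-1, 1))

# Numerical CDF of the fitted mixture on a fine grid, then inverse-transform sampling.
grid = np.linspace(errors.min() - 20, errors.max() + 20, 20001)
pdf = np.exp(gmm.score_samples(grid.reshape(-1, 1)))
cdf = np.cumsum(pdf)
cdf /= cdf[-1]
u = rng.uniform(size=100_000)
error_scenarios = np.interp(u, cdf, grid)

point_forecast = 120.0                       # MW, hypothetical
scenarios = point_forecast + error_scenarios
print("5th/95th percentile scenarios (MW):", np.percentile(scenarios, [5, 95]))
```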
Reeder, Blaine; Hills, Rebecca A.; Turner, Anne M.; Demiris, George
2014-01-01
Objectives The objectives of the study were to use persona-driven and scenario-based design methods to create a conceptual information system design to support public health nursing. Design and Sample We enrolled 19 participants from two local health departments to conduct an information needs assessment, create a conceptual design, and conduct a preliminary design validation. Measures Interviews and thematic analysis were used to characterize information needs and solicit design recommendations from participants. Personas were constructed from participant background information, and scenario-based design was used to create a conceptual information system design. Two focus groups were conducted as a first iteration validation of information needs, personas, and scenarios. Results Eighty-nine information needs were identified. Two personas and 89 scenarios were created. Public health nurses and nurse managers confirmed the accuracy of information needs, personas, scenarios, and the perceived usefulness of proposed features of the conceptual design. Design artifacts were modified based on focus group results. Conclusion Persona-driven design and scenario-based design are feasible methods to design for common work activities in different local health departments. Public health nurses and nurse managers should be engaged in the design of systems that support their work. PMID:24117760
Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qin; Florita, Anthony R; Krishnan, Venkat K
2017-08-31
Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power, and they are currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
Fall scenarios in causing older women's hip fractures.
Hägvide, Mona-Lisa; Larsson, Tore J; Borell, Lena
2013-01-01
Falls and fall-related injuries among older women constitute a major public health problem with huge costs for society and personal suffering. The aim of this study was to describe and illustrate how a number of circumstances, conceptualized as a scenario, that were related to the individual, the environment, and the ongoing occupation contributed to a fall that led to a hip fracture among women. The sample included 48 women over 55 years old. Interviews were conducted during home visits, and the analysis provided a descriptive picture of circumstances in the shape of a scenario related to the risk of falling. A number of scenarios were developed based on the data and named to provide an understanding of the interplay between the individual, the environment, and the ongoing occupation at the time of the fall. By applying the concept of a scenario, occupational therapists can increase awareness of fall risks among older people; the scenarios are also relevant for interior designers, architects, and town planners to consider when designing the local environment as well as furniture and other objects.
NASA Astrophysics Data System (ADS)
Clark, Katherine; van Tongeren, Martie; Christensen, Frans M.; Brouwer, Derk; Nowack, Bernd; Gottschalk, Fadri; Micheletti, Christian; Schmid, Kaspar; Gerritsen, Rianda; Aitken, Rob; Vaquero, Celina; Gkanis, Vasileios; Housiadas, Christos; de Ipiña, Jesús María López; Riediker, Michael
2012-09-01
The aim of this paper is to describe the process and challenges in building exposure scenarios for engineered nanomaterials (ENM), using an exposure scenario format similar to that used for the European Chemicals regulation (REACH). Over 60 exposure scenarios were developed based on information from publicly available sources (literature, books, and reports), publicly available exposure estimation models, occupational sampling campaign data from partnering institutions, and industrial partners regarding their own facilities. The primary focus was on carbon-based nanomaterials, nano-silver (nano-Ag) and nano-titanium dioxide (nano-TiO2), and included occupational and consumer uses of these materials with consideration of the associated environmental release. The process of building exposure scenarios illustrated the availability and limitations of existing information and exposure assessment tools for characterizing exposure to ENM, particularly as it relates to risk assessment. This article describes the gaps in the information reviewed, recommends future areas of ENM exposure research, and proposes types of information that should, at a minimum, be included when reporting the results of such research, so that the information is useful in a wider context.
Galea, Karen S; McGonagle, Carolyn; Sleeuwenhoek, Anne; Todd, David; Jiménez, Araceli Sánchez
2014-06-01
Dermal exposure to drilling fluids and crude oil is an exposure route of concern. However, there have been no published studies describing sampling methods or reporting dermal exposure measurements. We describe a study that aimed to evaluate a wipe sampling method to assess dermal exposure to an oil-based drilling fluid and crude oil, as well as to investigate the feasibility of using an interception cotton glove sampler for exposure on the hands/wrists. A direct comparison of the wipe and interception methods was also completed using pigs' trotters as a surrogate for human skin and a direct surface contact exposure scenario. Overall, acceptable recovery and sampling efficiencies were reported for both methods, and both methods had satisfactory storage stability at 1 and 7 days, although there appeared to be some loss over 14 days. The methods' comparison study revealed significantly higher removal of both fluids from the metal surface with the glove samples compared with the wipe samples (on average 2.5 times higher). Both evaluated sampling methods were found to be suitable for assessing dermal exposure to oil-based drilling fluids and crude oil; however, the comparison study clearly illustrates that glove samplers may overestimate the amount of fluid transferred to the skin. Further comparison of the two dermal sampling methods using additional exposure situations such as immersion or deposition, as well as a field evaluation, is warranted to confirm their appropriateness and suitability in the working environment. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Neeser, Rudolph; Ackermann, Rebecca Rogers; Gain, James
2009-09-01
Various methodological approaches have been used for reconstructing fossil hominin remains in order to increase sample sizes and to better understand morphological variation. Among these, morphometric quantitative techniques for reconstruction are increasingly common. Here we compare the accuracy of three approaches--mean substitution, thin plate splines, and multiple linear regression--for estimating missing landmarks of damaged fossil specimens. Comparisons are made varying the number of missing landmarks, sample sizes, and the reference species of the population used to perform the estimation. The testing is performed on landmark data from individuals of Homo sapiens, Pan troglodytes and Gorilla gorilla, and nine hominin fossil specimens. Results suggest that when a small, same-species fossil reference sample is available to guide reconstructions, thin plate spline approaches perform best. However, if no such sample is available (or if the species of the damaged individual is uncertain), estimates of missing morphology based on a single individual (or even a small sample) of close taxonomic affinity are less accurate than those based on a large sample of individuals drawn from more distantly related extant populations using a technique (such as a regression method) able to leverage the information (e.g., variation/covariation patterning) contained in this large sample. Thin plate splines also show an unexpectedly large amount of error in estimating landmarks, especially over large areas. Recommendations are made for estimating missing landmarks under various scenarios. Copyright 2009 Wiley-Liss, Inc.
Design tradeoffs in long-term research for stream salamanders
Brand, Adrianne B.; Grant, Evan H. Campbell
2017-01-01
Long-term research programs can benefit from early and periodic evaluation of their ability to meet stated objectives. In particular, consideration of the spatial allocation of effort is key. We sampled 4 species of stream salamanders intensively for 2 years (2010–2011) in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA to evaluate alternative distributions of sampling locations within stream networks, and then evaluated via simulation the ability of multiple survey designs to detect declines in occupancy and to estimate dynamic parameters (colonization, extinction) over 5 years for 2 species. We expected that fine-scale microhabitat variables (e.g., cobble, detritus) would be the strongest determinants of occupancy for each of the 4 species; however, we found greater support for all species for models including variables describing position within the stream network, stream size, or stream microhabitat. A monitoring design focused on headwater sections had greater power to detect changes in occupancy and the dynamic parameters in each of 3 scenarios for the dusky salamander (Desmognathus fuscus) and red salamander (Pseudotriton ruber). Results for transect length were more variable, but across all species and scenarios, 25-m transects are most suitable as a balance between maximizing detection probability and describing colonization and extinction. These results inform sampling design and provide a general framework for setting appropriate goals, effort, and duration in the initial planning stages of research programs on stream salamanders in the eastern United States.
What Matters to Women When Making Decisions About Breast Cancer Chemoprevention?
Martinez, Kathryn A; Fagerlin, Angela; Witteman, Holly O; Holmberg, Christine; Hawley, Sarah T
2016-04-01
Despite the effectiveness of chemoprevention (tamoxifen and raloxifene) in preventing breast cancer among women at high risk for the disease, uptake is low. The objective of this study was to determine the tradeoff preferences for various attributes associated with chemoprevention among women not currently taking the drugs. We used rating-based conjoint analysis to evaluate the relative importance of a number of attributes associated with chemoprevention, including risk of side effects, drug effectiveness, time needed to take the drugs, and availability of a blood test to see if the drugs were working in an Internet sample of women. We generated mean importance values and part-worth utilities for all attribute levels associated with taking chemoprevention. We then used multivariable linear regression to examine attribute importance scores controlling for participant age, race, Hispanic ethnicity, educational level, and a family history of breast cancer. Overall interest in taking chemoprevention was low among the 1094 women included in the analytic sample, even for the scenario in which participants would receive the greatest benefit and fewest risks associated with taking the drugs. Time needed to take the pill for it to work and 5-year risk of breast cancer were the most important attributes driving tradeoff preferences between the chemoprevention scenarios. Interest in taking chemoprevention among this sample of women at average risk was low. Addressing women's concerns about the time needed to take chemoprevention for it to work may help clinicians improve uptake of the drugs among those likely to benefit.
Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz
2014-07-01
Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
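A small simulation illustrating the inflation mechanism for a single treatment-control comparison follows; the adaptation rule and all parameters are illustrative assumptions, not the paper's computations. The stage-2 sample size is chosen after looking at the interim z-statistic, but the final test pools all data as if the design were fixed.

```python
# Illustrative simulation: type 1 error of a naive final z-test under a
# data-driven stage-2 sample size rule.  One-sided alpha = 0.025, true effect = 0.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
alpha, n1, n_sim = 0.025, 50, 50_000
zcrit = norm.ppf(1 - alpha)
rejections = 0

for _ in range(n_sim):
    t1, c1 = rng.normal(0, 1, n1), rng.normal(0, 1, n1)
    z1 = (t1.mean() - c1.mean()) / np.sqrt(2 / n1)

    # Naive adaptation rule: protect a lucky interim result with a small second
    # stage, otherwise buy "another chance" with a large one.
    n2 = 10 if z1 > 1.5 else 200
    t2, c2 = rng.normal(0, 1, n2), rng.normal(0, 1, n2)

    # Final analysis pools all data as if the design had been fixed.
    t, c = np.concatenate([t1, t2]), np.concatenate([c1, c2])
    z = (t.mean() - c.mean()) / np.sqrt(2 / (n1 + n2))
    rejections += z > zcrit

print(f"empirical type 1 error: {rejections / n_sim:.4f} (nominal {alpha})")
```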
Selebalo-Bereng, Lebohang; Patel, Cynthia Joan
2018-01-17
This study focused on the relationship between religion, religiosity/spirituality (R/S), and attitudes of a sample of South African male secondary school youth toward women's rights to legal abortion in different situations. We distributed 400 self-administered questionnaires assessing the main variables (attitudes toward reasons for abortion and R/S) to the target sample in six different secondary schools in KwaZulu-Natal, South Africa. The responses of a final sample of 327 learners were then analyzed using the Statistical Package for the Social Sciences (SPSS) software. The findings revealed that religion and R/S play a role in the youths' attitudes toward abortion. While the Hindu subsample indicated higher overall support across the different scenarios, the Muslim subsample reported greater disapproval than the other groups on 'Elective reasons' and in instances of 'Objection by significant others.' The Christian youth had the most negative attitudes to abortion for 'Traumatic reasons' and 'When women's health/life' was threatened. Across the sample, higher R/S levels were linked with more negative attitudes toward reasons for abortion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amblard, A.; Riguccini, L.; Temi, P.
We compute the properties of a sample of 221 local, early-type galaxies with a spectral energy distribution (SED) modeling software, CIGALEMC. Concentrating on the star-forming (SF) activity and dust contents, we derive parameters such as the specific star formation rate (sSFR), the dust luminosity, dust mass, and temperature. In our sample, 52% is composed of elliptical (E) galaxies and 48% of lenticular (S0) galaxies. We find a larger proportion of S0 galaxies among galaxies with a large sSFR and large specific dust emission. The stronger activity of S0 galaxies is confirmed by larger dust masses. We investigate the relative proportion of active galactic nuclei (AGNs) and SF galaxies in our sample using spectroscopic Sloan Digital Sky Survey data and near-infrared selection techniques, and find a larger proportion of AGN-dominated galaxies in the S0 sample than in the E one. This could corroborate a scenario where blue galaxies evolve into red ellipticals by passing through an S0 AGN-active period while quenching their star formation. Finally, we find good agreement when comparing our estimates with color indicators.
Conceptual study and key technology development for Mars Aeroflyby sample collection
NASA Astrophysics Data System (ADS)
Fujita, K.; Ozawa, T.; Okudaira, K.; Mikouchi, T.; Suzuki, T.; Takayanagi, H.; Tsuda, Y.; Ogawa, N.; Tachibana, S.; Satoh, T.
2014-01-01
A conceptual study of Mars Aeroflyby Sample Collection (MASC) is conducted as part of the next Mars exploration mission currently being considered at the Japan Aerospace Exploration Agency. In the mission scenario, an atmospheric entry vehicle is flown into the Martian atmosphere, collects Martian dust particles as well as atmospheric gases during the guided hypersonic flight, exits the Martian atmosphere, and is inserted into a parking orbit from which a return system departs for Earth to deliver the dust and gas samples. In order to accomplish a controlled flight and a successful orbit insertion, aeroassist orbit transfer technologies are introduced into the guidance and control system. System analysis is conducted to assess the feasibility and to make a conceptual design, finding that the MASC system is feasible at a minimum system mass of approximately 600 kg. The aerogel, which is one of the candidates for the dust sample collector, is assessed by arcjet heating tests to examine its behavior when exposed to high-temperature gases, as well as by particle impingement tests to evaluate its dust capturing capability.
[Mission oriented diagnostic real-time PCR].
Tomaso, Herbert; Scholz, Holger C; Al Dahouk, Sascha; Splettstoesser, Wolf D; Neubauer, Heinrich; Pfeffer, Martin; Straube, Eberhard
2007-01-01
In out-of-area military missions, soldiers are potentially exposed to bacteria that are endemic in tropical areas and can be used as biological agents. It can be difficult to culture these bacteria due to sample contamination, low numbers of bacteria, or pretreatment with antibiotics. Commercial biochemical identification systems are not optimized for these agents, which can result in misidentification. Immunological assays are often not commercially available or not specific. Real-time PCR assays are very specific and sensitive and can markedly shorten the time required to establish a diagnosis. Therefore, real-time PCRs for the identification of Bacillus anthracis, Brucella spp., Burkholderia mallei and Burkholderia pseudomallei, Francisella tularensis and Yersinia pestis have been developed. PCR results can be false negative due to inadequate clinical samples, low numbers of bacteria in samples, DNA degradation, inhibitory substances and inappropriate DNA preparation. Hence, it is crucial to cultivate the organisms as a prerequisite for adequate antibiotic therapy and typing of the agent. In a bioterrorist scenario, samples have to be treated according to the rules applied in forensic medicine and documentation has to be flawless.
A Sampling Based Approach to Spacecraft Autonomous Maneuvering with Safety Specifications
NASA Technical Reports Server (NTRS)
Starek, Joseph A.; Barbee, Brent W.; Pavone, Marco
2015-01-01
This paper presents a method for safe spacecraft autonomous maneuvering that applies robotic motion-planning techniques to spacecraft control. Specifically, the scenario we consider is an in-plane rendezvous of a chaser spacecraft in proximity to a target spacecraft at the origin of the Clohessy-Wiltshire-Hill frame. The trajectory for the chaser spacecraft is generated in a receding-horizon fashion by executing a sampling-based robotic motion planning algorithm named Fast Marching Tree (FMT), which efficiently grows a tree of trajectories over a set of probabilistically drawn samples in the state space. To enforce safety, the tree is only grown over actively safe samples, for which there exists a one-burn collision avoidance maneuver that circularizes the spacecraft orbit along a collision-free coasting arc and that can be executed under potential thruster failures. The overall approach establishes a provably correct framework for the systematic encoding of safety specifications into the spacecraft trajectory generation process and appears amenable to real-time implementation on orbit. Simulation results are presented for a two-fault-tolerant spacecraft during autonomous approach to a single client in Low Earth Orbit.
Effect of exit beam phase aberrations on coherent x-ray reconstructions of Au nanocrystals
NASA Astrophysics Data System (ADS)
Hruszkewycz, Stephan; Harder, Ross; Fuoss, Paul
2010-03-01
Current studies in coherent x-ray diffractive imaging (CXDI) are focusing on in-situ imaging under a variety of environmental conditions. Such studies often involve environmental sample chambers through which the x-ray beam must pass before and after interacting with the sample: i.e. cryostats or high pressure cells. Such sample chambers usually contain polycrystalline x-ray windows with structural imperfections that can in turn interact with the diffracted beam. A phase object in the near field that interacts with the beam exiting the sample can introduce distortions at the detector plane that may affect coherent reconstructions. We investigate the effects of a thin beryllium membrane on the coherent exit beam of a gold nanoparticle. We compare three dimensional reconstructions from experimental diffraction patterns measured with and without a 380 micron thick Be dome and find that the reconstructions are reproducible within experimental errors. Simulated near-field distortions of the exit beam consistent with micron-sized voids in Be establish a "worst case scenario" where distorted diffraction patterns inhibit accurate inversions.
Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.
Bühler, Jonas; von Lieres, Eric; Huber, Gregor J
2018-01-01
Studies of long-distance transport of tracer isotopes in plants offer a high potential for functional phenotyping, but so far measurement time is a bottleneck because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples h⁻¹. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition of each plant sample and measuring multiple plants one after the other in a rotating scheme. In contrast to common time series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to complete time series. A case study was set up to systematically investigate different experimental schedules for different throughput scenarios ranging from 1 to 12 samples h⁻¹. Selected designs with only a small number of data points were found to be sufficient for adequate parameter estimation, implying that the presented approach enables a substantial increase in sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule depending on the required statistical reliability of data acquired by future experiments.
Schlee, Claudia; Markova, Mariya; Schrank, Julia; Laplagne, Fanette; Schneider, Rüdiger; Lachenmeier, Dirk W
2013-05-15
Substituted imidazoles recently came under scrutiny as they may be indirectly introduced into cola beverages via the use of class IV (E150d) caramel colours and may pose health hazards. An LC/MS/MS method was developed for determining 2- and 4-methylimidazole (2-MI, 4-MI) and 2-acetyl-4-(1,2,3,4)-tetrahydroxybutylimidazole (THI) in beverages and caramel colours. The method is very rapid and easy to conduct as it requires only dilution in eluent for sample preparation. For 4-MI, the recovery was between 94 and 102% for spiked cola samples. The limit of detection was 2 μg/L in the measuring solution (corresponding to 40 μg/L for cola samples diluted 1:20 during sample preparation). 97 cola samples and 13 caramel colours from Germany and France were analysed. Of the 3 analytes, only 4-MI was found in the samples, with widely varying concentrations (non-quantifiable traces to 0.6 mg/L in colas and 175-658 mg/kg in E150d). The exposure for cola drinkers in worst-case scenarios is estimated to be 2-5 μg/kg bodyweight/day, which is judged as posing only a low risk for public health. Copyright © 2012 Elsevier B.V. All rights reserved.
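A back-of-the-envelope check of the worst-case exposure figure follows; the daily cola intake and body weight are assumed values for illustration, since the abstract reports only the resulting 2-5 μg/kg bodyweight/day range.

```python
# Worked arithmetic for the worst-case exposure estimate (intake and body
# weight below are assumptions, not values stated in the abstract).
max_conc_ug_per_l = 0.6 * 1000      # 0.6 mg/L 4-MI expressed in ug/L
daily_intake_l = 0.5                # assumed heavy cola consumption, L/day
body_weight_kg = 60.0               # assumed body weight

exposure = max_conc_ug_per_l * daily_intake_l / body_weight_kg
print(f"worst-case exposure ~ {exposure:.1f} ug/kg bodyweight/day")  # ~5
```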
Wickramasinghe, Kremlin; Rayner, Mike; Goldacre, Michael; Townsend, Nick; Scarborough, Peter
2017-01-01
Objectives The aim of this modelling study was to estimate the expected changes in the nutritional quality and greenhouse gas emissions (GHGEs) of primary school meals due to the adoption of new mandatory food-based standards for school meals. Setting Nationally representative random sample of 136 primary schools in England was selected for the Primary School Food Survey (PSFS) with 50% response rate. Participants A sample of 6690 primary students from PSFS who consumed school meals. Outcome measures Primary School Food Plan (SFP) nutritional impact was assessed using both macronutrient and micronutrient quality. The environmental impact was measured by GHGEs. Methods The scenario tested was one in which every meal served in schools met more than half of the food-based standards mentioned in the SFP (SFP scenario). We used findings from a systematic review to assign GHGE values for each food item in the data set. The GHGE value and nutritional quality of SFP scenario meals was compared with the average primary school meal in the total PSFS data set (pre-SFP scenario). Prior to introduction of the SFP (pre-SFP scenario), the primary school meals had mandatory nutrient-based guidelines. Results The percentage of meals that met the protein standard increased in the SFP scenario and the proportion of meals that met the standards for important micronutrients (eg, iron, calcium, vitamin A and C) also increased. However, the SFP scenario did not improve the salt, saturated fat and free sugar levels. The mean GHGE value of meals which met the SFP standards was 0.79 (95% CI 0.77 to 0.81) kgCO2e compared with a mean value of 0.72 (0.71 to 0.74) kgCO2e for all meals. Adopting the SFP would increase the total emissions associated with primary school meals by 22 000 000 kgCO2e per year. Conclusions The universal adoption of the new food-based standards, without reformulation would result in an increase in the GHGEs of school meals and improve some aspects of the nutritional quality, but it would not improve the average salt, sugar and saturated fat content levels. PMID:28381419
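A quick arithmetic check of the figures reported above; the implied number of meals per year is an inference from these numbers, not a value stated in the abstract.

```python
# Consistency check of the reported per-meal and total GHGE figures.
mean_ghge_sfp = 0.79        # kgCO2e per meal meeting the SFP standards
mean_ghge_pre = 0.72        # kgCO2e per average pre-SFP meal
extra_per_meal = mean_ghge_sfp - mean_ghge_pre            # ~0.07 kgCO2e

total_extra_per_year = 22_000_000                          # kgCO2e, as reported
implied_meals_per_year = total_extra_per_year / extra_per_meal
print(f"implied primary school meals per year: {implied_meals_per_year:.2e}")  # ~3e8
```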
Wickramasinghe, Kremlin; Rayner, Mike; Goldacre, Michael; Townsend, Nick; Scarborough, Peter
2017-04-05
The aim of this modelling study was to estimate the expected changes in the nutritional quality and greenhouse gas emissions (GHGEs) of primary school meals due to the adoption of new mandatory food-based standards for school meals. Nationally representative random sample of 136 primary schools in England was selected for the Primary School Food Survey (PSFS) with 50% response rate. A sample of 6690 primary students from PSFS who consumed school meals. Primary School Food Plan (SFP) nutritional impact was assessed using both macronutrient and micronutrient quality. The environmental impact was measured by GHGEs. The scenario tested was one in which every meal served in schools met more than half of the food-based standards mentioned in the SFP (SFP scenario). We used findings from a systematic review to assign GHGE values for each food item in the data set. The GHGE value and nutritional quality of SFP scenario meals was compared with the average primary school meal in the total PSFS data set (pre-SFP scenario). Prior to introduction of the SFP (pre-SFP scenario), the primary school meals had mandatory nutrient-based guidelines. The percentage of meals that met the protein standard increased in the SFP scenario and the proportion of meals that met the standards for important micronutrients (eg, iron, calcium, vitamin A and C) also increased. However, the SFP scenario did not improve the salt, saturated fat and free sugar levels. The mean GHGE value of meals which met the SFP standards was 0.79 (95% CI 0.77 to 0.81) kgCO 2 e compared with a mean value of 0.72 (0.71 to 0.74) kgCO 2 e for all meals. Adopting the SFP would increase the total emissions associated with primary school meals by 22 000 000 kgCO 2 e per year. The universal adoption of the new food-based standards, without reformulation would result in an increase in the GHGEs of school meals and improve some aspects of the nutritional quality, but it would not improve the average salt, sugar and saturated fat content levels. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Does Gender Matter? An Exploratory Study of Perspectives across Genders, Age and Education
ERIC Educational Resources Information Center
Carinci, Sherrie; Wong, Pia Lindquist
2009-01-01
Using a convenience sample and survey research methods, the authors seek to better understand how perspectives on gender are shaped by individuals' age, level of education and gender. Study participants responded in writing to scenarios and survey questions, revealing their personal views on gender as an identity category and as a marker in the…
ERIC Educational Resources Information Center
Clarke, Allyson K.; Stermac, Lana
2011-01-01
The present study explored the influence of survivor weight and participant gender, rape myth acceptance, and antifat attitudes on perceptions of sexual assault. Using an online survey tool, a community sample of 413 adult Canadian residents reviewed a hypothetical sexual assault scenario and completed a series of evaluations and attitudinal…
ERIC Educational Resources Information Center
Garver, Michael S.; Divine, Richard L.
2008-01-01
An adaptive conjoint analysis was performed on the study abroad preferences of a sample of undergraduate college students. The results indicate that trip location, cost, and time spent abroad are the three most important determinants of student preference for different study abroad trip scenarios. The analysis also uncovered four different study…
The Role of the New "Date Rape Drugs" in Attributions about Date Rape
ERIC Educational Resources Information Center
Girard, April L.; Senn, Charlene Y.
2008-01-01
This study investigates the effect of voluntary and involuntary drug use on attributions about sexual assault. The sample was composed of 280 randomly selected male and female undergraduate students. The type of drug used (GHB, alcohol, or none) and the voluntariness of the administration were varied in an unambiguous date rape scenario.…
Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique
NASA Astrophysics Data System (ADS)
Mahootchi, M.; Fattahi, M.; Khakbazan, E.
2011-11-01
This paper extends two models for the stochastic multi-commodity facility location problem. The problem is formulated as a two-stage stochastic program. As the main point of this study, a new algorithm is applied to efficiently generate scenarios for uncertain, correlated customer demands. This algorithm uses Latin Hypercube Sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in Model I. A risk measure based on Conditional Value-at-Risk (CVaR) is embedded into the optimization in Model II. Here, the structure of the network contains three facility layers: plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacities of distribution centers. In the second stage, the decisions are the production amounts and the volumes of transportation between plants and customers.
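The scenario-generation idea can be sketched as follows; the demand means, the covariance matrix, and the k-means-based scenario reduction are illustrative assumptions, not the paper's model.

```python
# Sketch: Latin Hypercube Sampling of correlated, normally distributed customer
# demands, followed by a crude scenario reduction via k-means clustering.
import numpy as np
from scipy.stats import norm, qmc
from sklearn.cluster import KMeans

n_scenarios, n_customers = 1000, 3

mean = np.array([100.0, 80.0, 120.0])                    # assumed mean demands
cov = np.array([[400., 120.,  60.],
                [120., 225.,  45.],
                [ 60.,  45., 900.]])                     # assumed covariance

# LHS points in [0,1]^3 -> standard normals -> correlated demands via Cholesky.
u = qmc.LatinHypercube(d=n_customers, seed=0).random(n_scenarios)
z = norm.ppf(u)
L = np.linalg.cholesky(cov)
demands = mean + z @ L.T

# Scenario reduction: keep 20 representative scenarios with probability weights.
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(demands)
weights = np.bincount(km.labels_, minlength=20) / n_scenarios
print(km.cluster_centers_[:3])
print(weights[:3])
```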
The relationship of extraversion and neuroticism to two measures of assertive behavior.
Vestewig, R E; Moss, M K
1976-05-01
One hundred forty-four college students completed the Eysenck Personality Inventory and the Rathus Assertiveness Schedule (RAS) and wrote their behavioral reactions to five scenarios in which an assertive behavior was an appropriate response. Extraversion showed a significant positive correlation with the RAS in both males and females. Neuroticism was negatively correlated with RAS in both sexes. Extraversion and RAS correlated significantly with rated Assertiveness in the scenarios only in the male sample. The RAS predicted variance in Assertiveness beyond that predicted by Extraversion. Overall low correlations of the measures with rated Assertiveness were discussed in terms of the low internal consistency reliability of that scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, W; Yin, F; Wang, C
Purpose: To develop a technique to estimate on-board VC-MRI using multi-slice sparsely-sampled cine images, patient prior 4D-MRI, motion modeling, and free-form deformation for real-time 3D target verification of lung radiotherapy. Methods: A previous method has been developed to generate on-board VC-MRI by deforming prior MRI images based on a motion model (MM) extracted from prior 4D-MRI and a single-slice on-board 2D cine image. In this study, free-form deformation (FD) was introduced to correct for errors in the MM when large anatomical changes exist. Multiple sparsely-sampled on-board 2D cine slices located within the target are used to improve both the estimation accuracy and temporal resolution of VC-MRI. The on-board 2D cine MRIs are acquired at 20-30 frames/s by sampling only 10% of the k-space on a Cartesian grid, with 85% of that taken at the central k-space. The method was evaluated using XCAT (computerized patient model) simulation of lung cancer patients with various anatomical and respirational changes from prior 4D-MRI to on-board volume. The accuracy was evaluated using Volume-Percent-Difference (VPD) and Center-of-Mass-Shift (COMS) of the estimated tumor volume. Effects of region-of-interest (ROI) selection, 2D cine slice orientation, slice number, and slice location on the estimation accuracy were evaluated. Results: VC-MRI estimated using 10 sparsely-sampled sagittal 2D cine MRIs achieved VPD/COMS of 9.07±3.54%/0.45±0.53mm among all scenarios based on estimation with ROI-MM-ROI-FD. The FD optimization improved estimation significantly for scenarios with anatomical changes. Using ROI-FD achieved better estimation than global FD. Changing the multi-slice orientation to axial, coronal, and axial/sagittal orthogonal reduced the accuracy of VC-MRI to VPD/COMS of 19.47±15.74%/1.57±2.54mm, 20.70±9.97%/2.34±0.92mm, and 16.02±13.79%/0.60±0.82mm, respectively. Reducing the number of cines to 8 enhanced the temporal resolution of VC-MRI by 25% while maintaining the estimation accuracy. Estimation using slices sampled uniformly through the tumor achieved better accuracy than slices sampled non-uniformly. Conclusions: Preliminary studies showed that it is feasible to generate VC-MRI from multi-slice sparsely-sampled 2D cine images for real-time 3D target verification. This work was supported by the National Institutes of Health under Grant No. R01-CA184173 and a research grant from Varian Medical Systems.
Backhouse, Martin E
2002-01-01
A number of approaches to conducting economic evaluations could be adopted. However, some decision makers have a preference for wholly stochastic cost-effectiveness analyses, particularly if the sampled data are derived from randomised controlled trials (RCTs). Formal requirements for cost-effectiveness evidence have heightened concerns in the pharmaceutical industry that development costs and times might be increased if formal requirements increase the number, duration or costs of RCTs. Whether this proves to be the case or not will depend upon the timing, nature and extent of the cost-effectiveness evidence required. To illustrate how different requirements for wholly stochastic cost-effectiveness evidence could have a significant impact on two of the major determinants of new drug development costs and times, namely RCT sample size and study duration. Using data collected prospectively in a clinical evaluation, sample sizes were calculated for a number of hypothetical cost-effectiveness study design scenarios. The results were compared with a baseline clinical trial design. The sample sizes required for the cost-effectiveness study scenarios were mostly larger than those for the baseline clinical trial design. Circumstances can be such that a wholly stochastic cost-effectiveness analysis might not be a practical proposition even though its clinical counterpart is. In such situations, alternative research methodologies would be required. For wholly stochastic cost-effectiveness analyses, the importance of prior specification of the different components of study design is emphasised. However, it is doubtful whether all the information necessary for doing this will typically be available when product registration trials are being designed. Formal requirements for wholly stochastic cost-effectiveness evidence based on the standard frequentist paradigm have the potential to increase the size, duration and number of RCTs significantly and hence the costs and timelines associated with new product development. Moreover, it is possible to envisage situations where such an approach would be impossible to adopt. Clearly, further research is required into the issue of how to appraise the economic consequences of alternative economic evaluation research strategies.
Evaluation of swabs and transport media for the recovery of Yersinia pestis.
Gilbert, Sarah E; Rose, Laura J; Howard, Michele; Bradley, Meranda D; Shah, Sanjiv; Silvestri, Erin; Schaefer, Frank W; Noble-Wang, Judith
2014-01-01
The Government Accountability Office report investigating the surface sampling methods used during the 2001 mail contamination with Bacillus anthracis brought to light certain knowledge gaps that existed regarding environmental sampling with biothreat agents. Should a contamination event occur that involves non-spore forming biological select agents, such as Yersinia pestis, surface sample collection and processing protocols specific for these organisms will be needed. Two Y. pestis strains (virulent and avirulent), four swab types (polyester, macrofoam, rayon, and cotton), two pre-moistening solutions, six transport media, three temperatures, two levels of organic load, and four processing methods (vortexing, sonicating, combined sonicating and vortexing, no agitation) were evaluated to determine the conditions that would yield the highest percent of cultivable Y. pestis cells after storage. The optimum pre-moistening agent/transport media combination varied with the Y. pestis strain and swab type. Directly inoculated macrofoam swabs released the highest percent of cells into solution (93.9% recovered by culture) and rayon swabs were considered the second best swab option (77.0% recovered by culture). Storage at 4°C was found to be optimum for all storage times and transport media. In a worst case scenario, where the Y. pestis strain is not known and sample processing and analyses could not occur until 72h after sampling, macrofoam swabs pre-moistened with PBS supplemented with 0.05% Triton X-100 (PBSTX), stored at 4°C in neutralizing buffer (NB) as a transport medium (PBSTX/NB) or pre-moistened with NB and stored in PBSTX as a transport medium (NB/PBSTX), then vortexed 3min in the transport medium, performed significantly better than all other conditions for macrofoam swabs, regardless of strain tested (mean 12 - 72h recovery of 85.9-105.1%, p<0.001). In the same scenario, two combinations of pre-moistening medium/transport medium were found to be optimal for rayon swabs stored at 4°C (p<0.001), then sonicated 3min in the transport medium; PBSTX/PBSTX and NB/PBSTX (mean 12-72h recovery of 83.7-110.1%). © 2013.
US neurologists: attitudes on rationing.
Holloway, R G; Ringel, S P; Bernat, J L; Keran, C M; Lawyer, B L
2000-11-28
To assess neurologists' attitudes on rationing health care and to determine whether neurologists would set healthcare priorities in ways that are consistent with cost-effectiveness research. Cost-effectiveness research can suggest ways to maximize health benefits within fixed budgets but is currently being underused in resource allocation decisions. The authors surveyed a random sample of neurologists practicing in the United States (response rate, 44.4%) with three hypothetical scenarios. Two scenarios were designed to address general attitudes on allocating finite resources with emphasis on formulary decisions for costly drugs. The third scenario was designed to assess whether neurologists would optimize the allocation of a fixed budget as recommended by cost-effectiveness analysis. Three-quarters of respondents thought that neurologists make daily decisions that effectively ration healthcare resources, and 60% felt a professional responsibility to consider the financial impact of individualized treatment decisions on other patients. Only 25% of respondents thought that there should be no restrictions placed on any of the five newer antiepileptic agents. In a 1995 survey, 75% of similarly sampled neurologists agreed that no restrictions should be placed on the availability of FDA-approved medications. Nearly half (46%) of respondents favored a less effective test and would be willing to let patients die to ensure the offering of a more equitable alternative. Most neurologists recognize the need to ration health care, and although they think cost-effectiveness research is one method to achieve efficient distribution of resources, many think that considerable attention should also be given to equity.
Amendola, Alessandra; Coen, Sabrina; Belladonna, Stefano; Pulvirenti, F Renato; Clemens, John M; Capobianchi, M Rosaria
2011-08-01
Diagnostic laboratories need automation that facilitates efficient processing and workflow management to meet today's challenges for expanding services and reducing cost, yet maintaining the highest levels of quality. Processing efficiency of two commercially available automated systems for quantifying HIV-1 and HCV RNA, Abbott m2000 system and Roche COBAS Ampliprep/COBAS TaqMan 96 (docked) systems (CAP/CTM), was evaluated in a mid/high throughput workflow laboratory using a representative daily workload of 24 HCV and 72 HIV samples. Three test scenarios were evaluated: A) one run with four batches on the CAP/CTM system, B) two runs on the Abbott m2000 and C) one run using the Abbott m2000 maxCycle feature (maxCycle) for co-processing these assays. Cycle times for processing, throughput and hands-on time were evaluated. Overall processing cycle time was 10.3, 9.1 and 7.6 h for Scenarios A), B) and C), respectively. Total hands-on time for each scenario was, in order, 100.0 (A), 90.3 (B) and 61.4 min (C). The interface of an automated analyzer to the laboratory workflow, notably system set up for samples and reagents and clean up functions, are as important as the automation capability of the analyzer for the overall impact to processing efficiency and operator hands-on time.
The Use of a Binary Composite Endpoint and Sample Size Requirement: Influence of Endpoints Overlap.
Marsal, Josep-Ramon; Ferreira-González, Ignacio; Bertran, Sandra; Ribera, Aida; Permanyer-Miralda, Gaietà; García-Dorado, David; Gómez, Guadalupe
2017-05-01
Although composite endpoints (CE) are common in clinical trials, the impact of the relationship between the components of a binary CE on the sample size requirement (SSR) has not been addressed. We performed a computational study considering 2 treatments and a CE with 2 components: the relevant endpoint (RE) and the additional endpoint (AE). We assessed the strength of the components' interrelation by the degree of relative overlap between them, which was stratified into 5 groups. Within each stratum, SSR was computed for multiple scenarios by varying the events proportion and the effect of the therapy. A lower SSR using CE was defined as the best scenario for using the CE. In 25 of 66 scenarios the degree of relative overlap determined the benefit of using CE instead of the RE. Adding an AE with greater effect than the RE leads to lower SSR using the CE regardless of the AE proportion and the relative overlap. The influence of overlapping decreases when the effect on RE increases. Adding an AE with lower effect than the RE constitutes the most uncertain situation. In summary, the interrelationship between CE components, assessed by the relative overlap, can help to define the SSR in specific situations and it should be considered for SSR computation. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
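An illustrative sketch of how the components' overlap changes the sample size requirement follows; the overlap definition, event proportions, and treatment effects below are assumptions for illustration, not necessarily the paper's exact formulation.

```python
# Illustrative sketch: build the composite endpoint (CE) proportion from its
# components and compare per-group sample sizes for detecting a treatment
# effect on the relevant endpoint (RE) alone versus on the CE.
import numpy as np
from scipy.stats import norm

def n_two_proportions(p_ctrl, p_trt, alpha=0.05, power=0.80):
    """Per-group sample size for a two-proportion z-test (simple approximation)."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    var = p_ctrl * (1 - p_ctrl) + p_trt * (1 - p_trt)
    return np.ceil((za + zb) ** 2 * var / (p_ctrl - p_trt) ** 2)

def composite(p_re, p_ae, overlap):
    """P(RE or AE) when P(RE and AE) = overlap * min(p_re, p_ae)."""
    return p_re + p_ae - overlap * min(p_re, p_ae)

# Control-arm proportions and relative risks (all values are hypothetical).
p_re, p_ae = 0.10, 0.15
rr_re, rr_ae = 0.70, 0.85          # additional endpoint treated less effectively

for overlap in (0.0, 0.5, 1.0):
    p_ce_c = composite(p_re, p_ae, overlap)
    p_ce_t = composite(p_re * rr_re, p_ae * rr_ae, overlap)
    print(f"overlap={overlap:.1f}: n/group RE only={n_two_proportions(p_re, p_re * rr_re):.0f}, "
          f"CE={n_two_proportions(p_ce_c, p_ce_t):.0f}")
```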
Microrover Research for Exploration of Mars
NASA Technical Reports Server (NTRS)
Hayati, Samad; Volpe, Richard; Backes, Paul; Balaram, J.; Welch, Richard
1996-01-01
There is great interest in the science community to explore Mars by using microrovers that carry several science instruments and are capable of traversing long distances [1]. NASA has planned six additional missions to Mars for 2001, 2003, and 2005. There is an excellent chance that rovers will be utilized in some of these missions. Such rovers would traverse to sites separated by several kilometers and place instruments against outcrops or loose rocks, search an area for a sample of interest, and collect rock and soil samples for return to Earth (2005 mission). Our research objectives are to develop technologies that enable such scenarios within the mission constraints of mass, power, volume, and cost.
Carey, A.E.; Prudic, David E.
1996-01-01
Documentation is provided of model input and sample output used in a previous report for analysis of ground-water flow and simulated pumping scenarios in Paradise Valley, Humboldt County, Nevada. Documentation includes files containing input values and listings of sample output. The files, in American International Standard Code for Information Interchange (ASCII) or binary format, are compressed and put on a 3-1/2-inch diskette. The decompressed files require approximately 8.4 megabytes of disk space on an International Business Machine (IBM)-compatible microcomputer using the MicroSoft Disk Operating System (MS-DOS) operating system version 5.0 or greater.
Forsgren, Eva; Locke, Barbara; Semberg, Emilia; Laugen, Ane T; Miranda, Joachim R de
2017-08-01
Viral infections in managed honey bees are numerous, and most of them are caused by viruses with an RNA genome. Since RNA degrades rapidly, appropriate sample management and RNA extraction methods are imperative to get high quality RNA for downstream assays. This study evaluated the effect of various sampling-transport scenarios (combinations of temperature, RNA stabilizers, and duration) of transport on six RNA quality parameters; yield, purity, integrity, cDNA synthesis efficiency, target detection and quantification. The use of water and extraction buffer were also compared for a primary bee tissue homogenate prior to RNA extraction. The strategy least affected by time was preservation of samples at -80°C. All other regimens turned out to be poor alternatives unless the samples were frozen or processed within 24h. Chemical stabilizers have the greatest impact on RNA quality and adding an extra homogenization step (a QIAshredder™ homogenizer) to the extraction protocol significantly improves the RNA yield and chemical purity. This study confirms that RIN values (RNA Integrity Number), should be used cautiously with bee RNA. Using water for the primary homogenate has no negative effect on RNA quality as long as this step is no longer than 15min. Copyright © 2017 Elsevier B.V. All rights reserved.
Gibb-Snyder, Emily; Gullett, Brian; Ryan, Shawn; Oudejans, Lukas; Touati, Abderrahmane
2006-08-01
Size-selective sampling of Bacillus anthracis surrogate spores from realistic, common aerosol mixtures was developed for analysis by laser-induced breakdown spectroscopy (LIBS). A two-stage impactor was found to be the preferential sampling technique for LIBS analysis because it was able to concentrate the spores in the mixtures while decreasing the collection of potentially interfering aerosols. Three common spore/aerosol scenarios were evaluated: diesel truck exhaust (to simulate a truck running outside of a building air intake), urban outdoor aerosol (to simulate common building air), and finally a protein aerosol (to simulate either an agent mixture (ricin/anthrax) or a contaminated anthrax sample). Two statistical methods, linear correlation and principal component analysis, were assessed for differentiation of surrogate spore spectra from other common aerosols. Criteria for determining percentages of false positives and false negatives via correlation analysis were evaluated. A single laser shot analysis of approximately 4 percent of the spores in a mixture of 0.75 m³ urban outdoor air doped with approximately 1.1 × 10⁵ spores resulted in a 0.04 proportion of false negatives. For that same sample volume of urban air without spores, the proportion of false positives was 0.08.
Assessing respondent-driven sampling.
Goel, Sharad; Salganik, Matthew J
2010-04-13
Respondent-driven sampling (RDS) is a network-based technique for estimating traits in hard-to-reach populations, for example, the prevalence of HIV among drug injectors. In recent years RDS has been used in more than 120 studies in more than 20 countries and by leading public health organizations, including the Centers for Disease Control and Prevention in the United States. Despite the widespread use and growing popularity of RDS, there has been little empirical validation of the methodology. Here we investigate the performance of RDS by simulating sampling from 85 known, network populations. Across a variety of traits we find that RDS is substantially less accurate than generally acknowledged and that reported RDS confidence intervals are misleadingly narrow. Moreover, because we model a best-case scenario in which the theoretical RDS sampling assumptions hold exactly, it is unlikely that RDS performs any better in practice than in our simulations. Notably, the poor performance of RDS is driven not by the bias but by the high variance of estimates, a possibility that had been largely overlooked in the RDS literature. Given the consistency of our results across networks and our generous sampling conditions, we conclude that RDS as currently practiced may not be suitable for key aspects of public health surveillance where it is now extensively applied.
Assessing respondent-driven sampling
Goel, Sharad; Salganik, Matthew J.
2010-01-01
Respondent-driven sampling (RDS) is a network-based technique for estimating traits in hard-to-reach populations, for example, the prevalence of HIV among drug injectors. In recent years RDS has been used in more than 120 studies in more than 20 countries and by leading public health organizations, including the Centers for Disease Control and Prevention in the United States. Despite the widespread use and growing popularity of RDS, there has been little empirical validation of the methodology. Here we investigate the performance of RDS by simulating sampling from 85 known, network populations. Across a variety of traits we find that RDS is substantially less accurate than generally acknowledged and that reported RDS confidence intervals are misleadingly narrow. Moreover, because we model a best-case scenario in which the theoretical RDS sampling assumptions hold exactly, it is unlikely that RDS performs any better in practice than in our simulations. Notably, the poor performance of RDS is driven not by the bias but by the high variance of estimates, a possibility that had been largely overlooked in the RDS literature. Given the consistency of our results across networks and our generous sampling conditions, we conclude that RDS as currently practiced may not be suitable for key aspects of public health surveillance where it is now extensively applied. PMID:20351258
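To make the kind of simulation described above more concrete, the following minimal sketch draws a chain-referral sample from a synthetic network and estimates trait prevalence; the scale-free network model, single-referral recruitment chain, seed choice, and RDS-II-style inverse-degree weighting are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: RDS-style chain-referral sampling on a synthetic network,
# with prevalence estimated by inverse-degree (RDS-II-type) weighting.
# Graph model, seed, and single-referral chains are illustrative assumptions.
import random
import networkx as nx

random.seed(1)
G = nx.barabasi_albert_graph(n=2000, m=3)         # stand-in "known" network
trait = {v: (random.random() < 0.3) for v in G}   # 30% true prevalence

def rds_sample(G, seed, n_sample):
    """Follow a referral chain: each recruit refers one random neighbour."""
    sample, current = [], seed
    while len(sample) < n_sample:
        sample.append(current)
        current = random.choice(list(G.neighbors(current)))
    return sample

sample = rds_sample(G, seed=0, n_sample=300)
w = [1.0 / G.degree(v) for v in sample]           # weight respondents by 1/degree
est = sum(wi for wi, v in zip(w, sample) if trait[v]) / sum(w)
true = sum(trait.values()) / len(trait)
print(f"true prevalence {true:.3f}, RDS-style estimate {est:.3f}")
```

Repeating such draws over many networks and traits, and comparing the spread of the resulting estimates with nominal confidence intervals, is essentially the variance question the study highlights.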
Wu, Zhichao; Medeiros, Felipe A
2018-03-20
Visual field testing is an important endpoint in glaucoma clinical trials, and the testing paradigm used can have a significant impact on the sample size requirements. To investigate this, the study included 353 eyes of 247 glaucoma patients seen over a 3-year period, from which real-world visual field rates of change and variability estimates were extracted to provide sample size estimates from computer simulations. The clinical trial scenario assumed that a new treatment was added to one of two groups that were both under routine clinical care, with various treatment effects examined. Three different visual field testing paradigms were evaluated: a) evenly spaced testing, b) the United Kingdom Glaucoma Treatment Study (UKGTS) follow-up scheme, which adds clustered tests at the beginning and end of follow-up in addition to evenly spaced testing, and c) a clustered testing paradigm, with clusters of tests at the beginning and end of the trial period and two intermediary visits. The sample size requirements were reduced by 17-19% and 39-40% using the UKGTS and clustered testing paradigms, respectively, when compared to the evenly spaced approach. These findings highlight how the clustered testing paradigm can substantially reduce sample size requirements and improve the feasibility of future glaucoma clinical trials.
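The advantage of clustering tests at the start and end of follow-up can be illustrated with a generic calculation that is not the study's simulation: for a linear trend with independent measurement noise, the variance of the estimated rate of change depends only on how the visit times are spread around their mean.

```python
# Hedged illustration (not the study's simulation): for a linear trend with
# i.i.d. measurement noise, Var(slope) = sigma^2 / sum((t_i - t_bar)^2), so
# moving the same number of tests toward the ends of follow-up shrinks the
# slope variance and, with it, sample size requirements.
import numpy as np

def slope_variance(times, sigma=1.0):
    t = np.asarray(times, dtype=float)
    return sigma**2 / np.sum((t - t.mean())**2)

n_tests, years = 10, 2.0
even = np.linspace(0.0, years, n_tests)                        # evenly spaced visits
clustered = np.r_[np.zeros(4), [0.9, 1.1], np.full(4, years)]  # clusters at start/end + 2 mid visits

for name, sched in [("evenly spaced", even), ("clustered", clustered)]:
    print(f"{name:14s} relative Var(slope) = {slope_variance(sched):.4f}")
```

The clustered schedule concentrates visits at the extremes of follow-up, which enlarges the denominator and roughly halves the slope variance in this toy case; this is the mechanism behind the reported sample-size savings.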
NASA Technical Reports Server (NTRS)
Billings, Marcus Dwight; Fasanella, Edwin L. (Technical Monitor)
2002-01-01
Nonlinear dynamic finite element simulations were performed to aid in the design of an energy-absorbing impact sphere for a passive Earth Entry Vehicle (EEV) that is a possible architecture for the Mars Sample Return (MSR) mission. The MSR EEV concept uses an entry capsule and energy-absorbing impact sphere designed to contain and limit the acceleration of collected samples during Earth impact without a parachute. The impact sphere is composed of solid hexagonal and pentagonal foam-filled cells with hybrid composite, graphite-epoxy/Kevlar cell walls. Collected Martian samples will fit inside a smaller spherical sample container at the center of the EEV's cellular structure. Comparisons were made of analytical results obtained using MSC.Dytran with test results obtained from impact tests performed at NASA Langley Research Center for impact velocities from 30 to 40 m/s. Acceleration, velocity, and deformation results compared well with the test results. The correlated finite element model was then used for simulations of various off-nominal impact scenarios. Off-nominal simulations at an impact velocity of 40 m/s included a rotated cellular structure impact onto a flat surface, a cellular structure impact onto an angled surface, and a cellular structure impact onto the corner of a step.
Design and analysis of three-arm trials with negative binomially distributed endpoints.
Mütze, Tobias; Munk, Axel; Friede, Tim
2016-02-20
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations will be derived. The performance of the proposed test will be assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.
Sampling designs matching species biology produce accurate and affordable abundance indices
Farley, Sean; Russell, Gareth J.; Butler, Matthew J.; Selinger, Jeff
2013-01-01
Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption that all individuals have an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Sampling only where resources attract animals (i.e., targeted sampling) might provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on geographic positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams, which provided a scenario for testing the targeted approach. Grid and targeted sampling designs varied in the number of traps, trap placement (random, systematic, or by expert opinion), and whether traps were stationary or moved between capture sessions. We began by identifying when to sample and whether bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, encounter rates, and probabilities of capture and recapture. One grid (49 km² cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture sessions, which raised capture probabilities. The grid design was least biased (−10.5%) but imprecise (CV 21.2%), and used the most effort (16,100 trap-nights). The targeted configuration was more biased (−17.3%) but most precise (CV 12.3%), with the least effort (7,000 trap-nights). Targeted sampling generated encounter rates four times higher, and capture and recapture probabilities 11% and 60% higher, than grid sampling, in a sampling frame 88% smaller. Bears had unequal probability of capture with both sampling designs, partly because some bears never had traps available to sample them. Hence, grid and targeted sampling generated abundance indices, not estimates. Overall, targeted sampling provided the most accurate and affordable design to index abundance. Targeted sampling may offer an alternative method to index the abundance of other species inhabiting expansive and inaccessible landscapes elsewhere, provided they are attracted to resource concentrations. PMID:24392290
Fung, Tak; Keenan, Kevin
2014-01-01
The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
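As a simplified illustration of why more than 30 individuals are often needed, the sketch below computes a Clopper-Pearson interval for a sample allele frequency under the assumption of a very large (effectively infinite) diploid population; the paper's method additionally handles sampling without replacement from a finite population, which this sketch does not.

```python
# Hedged sketch: Clopper-Pearson confidence interval for a sample allele
# frequency, assuming an effectively infinite diploid population (binomial
# sampling of 2n gene copies). The cited method also corrects for sampling
# without replacement from a finite population, which is omitted here.
from scipy.stats import beta

def allele_freq_ci(k, n_individuals, conf=0.95):
    """k = copies of the allele observed among 2*n_individuals gene copies."""
    n = 2 * n_individuals
    a = (1 - conf) / 2
    lo = 0.0 if k == 0 else beta.ppf(a, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - a, k + 1, n - k)
    return lo, hi

# e.g. 18 copies of an allele among 30 diploid individuals (sample frequency 0.30):
lo, hi = allele_freq_ci(18, 30)
print(round(lo, 3), round(hi, 3))   # interval half-width well above 0.05
```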
NASA Astrophysics Data System (ADS)
Dariush, A.; Dib, S.; Hony, S.; Smith, D. J. B.; Zhukovska, S.; Dunne, L.; Eales, S.; Andrae, E.; Baes, M.; Baldry, I.; Bauer, A.; Bland-Hawthorn, J.; Brough, S.; Bourne, N.; Cava, A.; Clements, D.; Cluver, M.; Cooray, A.; De Zotti, G.; Driver, S.; Grootes, M. W.; Hopkins, A. M.; Hopwood, R.; Kaviraj, S.; Kelvin, L.; Lara-Lopez, M. A.; Liske, J.; Loveday, J.; Maddox, S.; Madore, B.; Michałowski, M. J.; Pearson, C.; Popescu, C.; Robotham, A.; Rowlands, K.; Seibert, M.; Shabani, F.; Smith, M. W. L.; Taylor, E. N.; Tuffs, R.; Valiante, E.; Virdee, J. S.
2016-02-01
We combine Herschel/SPIRE submillimetre (submm) observations with existing multiwavelength data to investigate the characteristics of low-redshift, optically red galaxies detected in submm bands. We select a sample of galaxies in the redshift range 0.01 ≤ z ≤ 0.2, having >5σ detections in the SPIRE 250 μm submm waveband. Sources are then divided into two sub-samples of red and blue galaxies, based on their UV-optical colours. Galaxies in the red sample account for ≈4.2 per cent of the total number of sources with stellar masses M* ≳ 10¹⁰ M⊙. Following visual classification of the red galaxies, we find that ≳30 per cent of them are early-type galaxies and ≳40 per cent are spirals. The colour of the red-spiral galaxies could be the result of their highly inclined orientation and/or a strong contribution of the old stellar population. It is found that irrespective of their morphological types, red and blue sources occupy environments with more or less similar densities (i.e. the Σ5 parameter). From the analysis of the spectral energy distributions of galaxies in our samples based on MAGPHYS, we find that galaxies in the red sample (of any morphological type) have dust masses similar to those in the blue sample (i.e. normal spiral/star-forming systems). However, in comparison to the red-spirals and in particular blue systems, red-ellipticals have lower mean dust-to-stellar mass ratios. In addition, galaxies in the red-elliptical sample have much lower mean star formation/specific star formation rates than their counterparts in the blue sample. Our results support a scenario where dust in early-type systems is likely to be of an external origin.
A multi-stage drop-the-losers design for multi-arm clinical trials.
Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher
2017-02-01
Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
Aymerich, I; Acuña, V; Ort, C; Rodríguez-Roda, I; Corominas, Ll
2017-11-15
The growing awareness of the relevance of organic microcontaminants on the environment has led to a growing number of studies on attenuation of these compounds in wastewater treatment plants (WWTP) and rivers. However, the effects of the sampling strategies (frequency and duration of composite samples) on the attenuation estimates are largely unknown. Our goal was to assess how frequency and duration of composite samples influence uncertainty of the attenuation estimates in WWTPs and rivers. Furthermore, we also assessed how compound consumption rate and degradability influence uncertainty. The assessment was conducted through simulating the integrated wastewater system of Puigcerdà (NE Iberian Peninsula) using a sewer pattern generator and a coupled model of WWTP and river. Results showed that the sampling strategy is especially critical at the influent of WWTP, particularly when the number of toilet flushes containing the compound of interest is small (≤100 toilet flushes with compound·day⁻¹), and less critical at the effluent of the WWTP and in the river due to the mixing effects of the WWTP. For example, at the WWTP, when evaluating a compound that is present in 50 pulses·d⁻¹ using a sampling frequency of 15-min to collect a 24-h composite sample, the attenuation uncertainty can range from 94% (0% degradability) to 9% (90% degradability). The estimation of attenuation in rivers is less critical than in WWTPs, as the attenuation uncertainty was lower than 10% for all evaluated scenarios. Interestingly, the errors in the estimates of attenuation are usually lower than those of loads for most sampling strategies and compound characteristics (e.g. consumption and degradability), although the opposite occurs for compounds with low consumption and inappropriate sampling strategies at the WWTP. Hence, when designing a sampling campaign, one should consider the influence of compounds' consumption and degradability as well as the desired level of accuracy in attenuation estimations. Copyright © 2017 Elsevier Ltd. All rights reserved.
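The interaction between pulse frequency and composite-sampling frequency can be illustrated with a crude Monte Carlo sketch; the Gaussian pulse shape, constant flow, and the absence of sewer dispersion and WWTP dynamics are simplifying assumptions, so only the qualitative behaviour is meaningful.

```python
# Hedged Monte Carlo sketch: how grab-sampling frequency for a 24-h composite
# interacts with the number of discrete pulses (e.g. toilet flushes) per day.
# A crude Gaussian pulse and constant flow replace the study's sewer/WWTP/
# river model, so only the qualitative trend in sampling error is illustrated.
import numpy as np

rng = np.random.default_rng(0)

def relative_load_error(pulses_per_day, grab_interval_min, n_rep=300):
    t = np.arange(0, 24 * 60)                     # 1-min resolution over 24 h
    errors = []
    for _ in range(n_rep):
        starts = rng.uniform(0, 24 * 60, pulses_per_day)
        conc = np.zeros_like(t, dtype=float)
        for s in starts:                          # each pulse: short Gaussian peak
            conc += np.exp(-0.5 * ((t - s) / 5.0) ** 2)
        true_load = conc.mean()                   # flow assumed constant
        grabs = conc[::grab_interval_min]         # composite = mean of grab samples
        errors.append(abs(grabs.mean() - true_load) / true_load)
    return float(np.mean(errors))

for pulses in (50, 500):
    print(pulses, "pulses/day, 15-min grabs:",
          round(relative_load_error(pulses, 15), 3))
```

With few pulses per day the composite sample can easily miss or over-represent individual pulses, inflating the load (and hence attenuation) uncertainty, which mirrors the sensitivity to low flush numbers reported above.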
Woolfenden, Elizabeth
2010-04-16
Sorbent tubes/traps are widely used in combination with gas chromatographic (GC) analytical methods to monitor the vapour-phase fraction of organic compounds in air. Target compounds range in volatility from acetylene and freons to phthalates and PCBs and include apolar, polar and reactive species. Airborne vapour concentrations will vary depending on the nature of the location, nearby pollution sources, weather conditions, etc. Levels can range from low percent concentrations in stack and vent emissions to low part per trillion (ppt) levels in ultra-clean outdoor locations. Hundreds, even thousands of different compounds may be present in any given atmosphere. GC is commonly used in combination with mass spectrometry (MS) detection especially for environmental monitoring or for screening uncharacterised workplace atmospheres. Given the complexity and variability of organic vapours in air, no one sampling approach suits every monitoring scenario. A variety of different sampling strategies and sorbent media have been developed to address specific applications. Key sorbent-based examples include: active (pumped) sampling onto tubes packed with one or more sorbents held at ambient temperature; diffusive (passive) sampling onto sorbent tubes/cartridges; on-line sampling of air/gas streams into cooled sorbent traps; and transfer of air samples from containers (canisters, Tedlar bags, etc.) into cooled sorbent focusing traps. Whichever sampling approach is selected, subsequent analysis almost always involves either solvent extraction or thermal desorption (TD) prior to GC(/MS) analysis. The overall performance of the air monitoring method will depend heavily on appropriate selection of key sampling and analytical parameters. This comprehensive review of air monitoring using sorbent tubes/traps is divided into two parts: (1) sorbent-based air sampling options, and (2) sorbent selection and other aspects of optimizing sorbent-based air monitoring methods. The paper presents current state-of-the-art and recent developments in relevant areas such as sorbent research, sampler design, enhanced approaches to analytical quality assurance and on-tube derivatisation. Copyright 2009 Elsevier B.V. All rights reserved.
Mengual, Lourdes; Burset, Moisès; Ribal, María José; Ars, Elisabet; Marín-Aguilera, Mercedes; Fernández, Manuel; Ingelmo-Torres, Mercedes; Villavicencio, Humberto; Alcaraz, Antonio
2010-05-01
To develop an accurate and noninvasive method for bladder cancer diagnosis and prediction of disease aggressiveness based on the gene expression patterns of urine samples. Gene expression patterns of 341 urine samples from bladder urothelial cell carcinoma (UCC) patients and 235 controls were analyzed via TaqMan Arrays. In a first phase of the study, three consecutive gene selection steps were done to identify a gene set expression signature to detect and stratify UCC in urine. Subsequently, those genes more informative for UCC diagnosis and prediction of tumor aggressiveness were combined to obtain a classification system of bladder cancer samples. In a second phase, the obtained gene set signature was evaluated in a routine clinical scenario analyzing only voided urine samples. We have identified a 12+2 gene expression signature for UCC diagnosis and prediction of tumor aggressiveness on urine samples. Overall, this gene set panel had 98% sensitivity (SN) and 99% specificity (SP) in discriminating between UCC and control samples and 79% SN and 92% SP in predicting tumor aggressiveness. The translation of the model to the clinically applicable format corroborates that the 12+2 gene set panel described maintains a high accuracy for UCC diagnosis (SN = 89% and SP = 95%) and tumor aggressiveness prediction (SN = 79% and SP = 91%) in voided urine samples. The 12+2 gene expression signature described in urine is able to identify patients suffering from UCC and predict tumor aggressiveness. We show that a panel of molecular markers may improve the schedule for diagnosis and follow-up in UCC patients. Copyright 2010 AACR.
Sample size requirements for indirect association studies of gene-environment interactions (G x E).
Hein, Rebecca; Beckmann, Lars; Chang-Claude, Jenny
2008-04-01
Association studies accounting for gene-environment interactions (G x E) may be useful for detecting genetic effects. Although current technology enables very dense marker spacing in genetic association studies, the true disease variants may not be genotyped. Thus, causal genes are searched for by indirect association using genetic markers in linkage disequilibrium (LD) with the true disease variants. Sample sizes needed to detect G x E effects in indirect case-control association studies depend on the true genetic main effects, disease allele frequencies, whether marker and disease allele frequencies match, LD between loci, main effects and prevalence of environmental exposures, and the magnitude of interactions. We explored variables influencing sample sizes needed to detect G x E, compared these sample sizes with those required to detect genetic marginal effects, and provide an algorithm for power and sample size estimations. Required sample sizes may be heavily inflated if LD between marker and disease loci decreases. More than 10,000 case-control pairs may be required to detect G x E. However, given weak true genetic main effects, moderate prevalence of environmental exposures, as well as strong interactions, G x E effects may be detected with smaller sample sizes than those needed for the detection of genetic marginal effects. Moreover, in this scenario, rare disease variants may only be detectable when G x E is included in the analyses. Thus, the analysis of G x E appears to be an attractive option for the detection of weak genetic main effects of rare variants that may not be detectable in the analysis of genetic marginal effects only.
Barberena, Ramiro; Durán, Víctor A; Novellino, Paula; Winocur, Diego; Benítez, Anahí; Tessone, Augusto; Quiroga, María N; Marsh, Erik J; Gasco, Alejandra; Cortegoso, Valeria; Lucero, Gustavo; Llano, Carina; Knudson, Kelly J
2017-10-01
The goal of this article is to assess the scale of human paleomobility and ecological complementarity between the lowlands and highlands in the southern Andes during the last 2,300 years. By providing isotope results for human bone and teeth samples, we assess a hypothesis of "high residential mobility" suggested on the basis of oxygen isotopes from human remains. We develop an isotopic assessment of human mobility in a mountain landscape combining strontium and oxygen isotopes. We analyze bone and teeth samples as an approach to life-history changes in spatial residence. Human samples from the main geological units and periods within the last two millennia are selected. We present a framework for the analysis of bioavailable strontium based on the combination of the geological data with isotope results for rodent samples. The ⁸⁷Sr/⁸⁶Sr values from human samples indicate residential stability within geological regions along life history. When comparing strontium and oxygen values for the same human samples, we record a divergent pattern: while δ¹⁸O values for samples from distant regions overlap widely, there are important differences in ⁸⁷Sr/⁸⁶Sr values. Despite the large socio-economic changes recorded, ⁸⁷Sr/⁸⁶Sr values indicate a persisting scenario of low systematic mobility between the different geological regions. Our results suggest that strontium isotope values provide the most germane means to track patterns of human occupation of distinct regions in complex geological landscapes, offering a much higher spatial resolution than oxygen isotopes in the southern Andes. © 2017 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Basow, Susan A.; Thompson, Janelle
2012-01-01
In this online vignette study, a national sample of domestic violence shelter service providers (N = 282) completed a 10-item questionnaire about a woman experiencing intimate partner violence (IPV). Scenarios varied in terms of couple sexual orientation (heterosexual or lesbian) and type of abuse (physical or nonphysical). Results indicate that…
ERIC Educational Resources Information Center
Puhan, Gautam
2013-01-01
The purpose of this study was to demonstrate that the choice of sample weights when defining the target population under poststratification equating can be a critical factor in determining the accuracy of the equating results under a unique equating scenario, known as "rater comparability scoring and equating." The nature of data…
ERIC Educational Resources Information Center
Krettenauer, Tobias; Eichler, Dana
2006-01-01
The study investigates adolescents' self-attributed moral emotions following a moral transgression by expanding research with children on the happy-victimizer phenomenon. In a sample of 200 German adolescents from Grades 7, 9, 11, and 13 (M=16.18 years, SD=2.41), participants were confronted with various scenarios describing different moral rule…
ERIC Educational Resources Information Center
King, Alan; Grieves, Julie; Opp, Dean
2007-01-01
In a brief survey, the authors solicited professional opinions regarding the probable impact of performing arts on adolescent mood stability using a hypothetical scenario where 20 moderately depressed 15-year-olds agreed to participate in a high school play, musical, or other singing performance. The results of the survey indicated that clinicians…
Self-organization of the reticular structure of polyurethane
NASA Astrophysics Data System (ADS)
Kiselev, M. R.; Roldugin, V. I.
2010-08-01
The morphology of block samples and coatings of reticular polyurethane were studied by transmission electron microscopy. The morphology was correlated with the internal stresses that appeared in the coatings during their formation. A scenario of the self-assembly of complex structures in reticular polymers was suggested. The boundary between the structural elements of the supermolecular level was found to be strained.
Bioaccessibility of metals and human health risk assessment in community urban gardens.
Izquierdo, M; De Miguel, E; Ortega, M F; Mingot, J
2015-09-01
Pseudo-total (i.e. aqua regia extractable) and gastric-bioaccessible (i.e. glycine+HCl extractable) concentrations of Ca, Co, Cr, Cu, Fe, Mn, Ni, Pb and Zn were determined in a total of 48 samples collected from six community urban gardens of different characteristics in the city of Madrid (Spain). Calcium carbonate appears to be the soil property that determines the bioaccessibility of a majority of those elements, and the lack of influence of organic matter, pH and texture can be explained by their low levels in the samples (organic matter) or their narrow range of variation (pH and texture). A conservative risk assessment with bioaccessible concentrations in two scenarios, i.e. adult urban farmers and children playing in urban gardens, revealed acceptable levels of risk, but with large differences between urban gardens depending on their history of land use and their proximity to busy areas in the city center. Only in a worst-case scenario in which children who use urban gardens as recreational areas also eat the produce grown in them would the risk exceed the limits of acceptability. Copyright © 2015 Elsevier Ltd. All rights reserved.
Berger, Jérôme; Bawab, Noura; De Mooij, Jeremy; Sutter Widmer, Denise; Szilas, Nicolas; De Vriese, Carine; Bugnon, Olivier
2018-03-01
To compare two online learning tools, a looped, branching serious game (SG) and a linear text-based scenario (TBS), among a sample of Belgian and Swiss pharmacy students. Open randomized controlled study. The lesson was based on the case of a benign cough in a healthy child. A randomized sample of 117 students participated; only the Swiss students had attended a previous lecture on coughs. Participation rate, pre- and post-experience Likert scales, and students' clinical knowledge were measured. Our primary hypothesis was confirmed: students favored the SG even though navigation was rated as more complex, and students who completed the SG better understood the aim of pharmacist triage in the case of cough. The influence of the SG appeared to be linked to the presence of a previous lecture in the curriculum. SG and TBS are effective for teaching pharmacist triage. The higher complexity of the SG should be used to teach the aim of pharmacist triage in the case of a specific disease and could be an alternative to simulated patients. A simpler TBS does not require a previous lecture and a debriefing to be fully effective. Copyright © 2017 Elsevier Inc. All rights reserved.
Generating Virtual Patients by Multivariate and Discrete Re-Sampling Techniques.
Teutonico, D; Musuamba, F; Maas, H J; Facius, A; Yang, S; Danhof, M; Della Pasqua, O
2015-10-01
Clinical Trial Simulations (CTS) are a valuable tool for decision-making during drug development. However, to obtain realistic simulation scenarios, the patients included in the CTS must be representative of the target population. This is particularly important when covariate effects exist that may affect the outcome of a trial. The objective of our investigation was to evaluate and compare CTS results using re-sampling from a population pool and multivariate distributions to simulate patient covariates. COPD was selected as the paradigm disease for the purposes of our analysis, FEV1 was used as the response measure, and the effects of a hypothetical intervention were evaluated in different populations in order to assess the predictive performance of the two methods. Our results show that the multivariate distribution method produces realistic covariate correlations, comparable to the real population. Moreover, it allows simulation of patient characteristics beyond the limits of inclusion and exclusion criteria in historical protocols. Both methods, discrete resampling and multivariate distribution sampling, generate realistic pools of virtual patients. However, the use of a multivariate distribution enables more flexible simulation scenarios since it is not necessarily bound to the existing covariate combinations in the available clinical data sets.
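A minimal sketch of the two covariate-generation strategies, using a toy two-covariate "observed" population in place of the real COPD data, might look as follows; the covariate names and values are illustrative assumptions.

```python
# Hedged sketch of the two covariate-generation strategies compared above:
# (a) discrete re-sampling of whole patient records and (b) sampling from a
# multivariate normal fitted to the observed covariates. The toy "observed"
# data below stand in for the real COPD population.
import numpy as np

rng = np.random.default_rng(42)

# Toy observed covariates: age (years) and baseline FEV1 (L), correlated.
age = rng.normal(65, 8, 200)
fev1 = 3.5 - 0.025 * age + rng.normal(0, 0.3, 200)
observed = np.column_stack([age, fev1])

# (a) Discrete re-sampling: draw whole rows with replacement.
rows = rng.integers(0, len(observed), size=1000)
virtual_discrete = observed[rows]

# (b) Multivariate distribution: fit mean and covariance, then sample.
mu, cov = observed.mean(axis=0), np.cov(observed, rowvar=False)
virtual_mvn = rng.multivariate_normal(mu, cov, size=1000)

print("discrete corr:", np.corrcoef(virtual_discrete.T)[0, 1].round(2))
print("mvn      corr:", np.corrcoef(virtual_mvn.T)[0, 1].round(2))
```

Strategy (b) can also produce covariate combinations outside the observed ranges, which is the extra flexibility noted above, although in practice such draws may need truncation to remain physiologically plausible.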
A quantitative estimate of schema abnormality in socially anxious and non-anxious individuals.
Wenzel, Amy; Brendle, Jennifer R; Kerr, Patrick L; Purath, Donna; Ferraro, F Richard
2007-01-01
Although cognitive theories of anxiety suggest that anxious individuals are characterized by abnormal threat-relevant schemas, few empirical studies have estimated the nature of these cognitive structures using quantitative methods that lend themselves to inferential statistical analysis. In the present study, socially anxious (n = 55) and non-anxious (n = 62) participants completed 3 Q-Sort tasks to assess their knowledge of events that commonly occur in social or evaluative scenarios. Participants either sorted events according to how commonly they personally believe the events occur (i.e. "self" condition), or to how commonly they estimate that most people believe they occur (i.e. "other" condition). Participants' individual Q-Sorts were correlated with mean sorts obtained from a normative sample to obtain an estimate of schema abnormality, with lower correlations representing greater levels of abnormality. Relative to non-anxious participants, socially anxious participants' sorts were less strongly associated with sorts of the normative sample, particularly in the "self" condition, although secondary analyses suggest that some significant results might be explained, in part, by depression and experience with the scenarios. These results provide empirical support for the theoretical notion that threat-relevant self-schemas of anxious individuals are characterized by some degree of abnormality.
Fonneløp, Ane Elida; Ramse, Merete; Egeland, Thore; Gill, Peter
2017-07-01
In court, questions are often raised about how trace DNA was deposited: directly during the crime, or innocently, for instance by secondary transfer. It is therefore of interest to have knowledge of the probability of transfer or secondary transfer in different situations. Factors that could influence transfer probabilities are background DNA and the shedder status of the involved persons. In this study, we have classified participants as high or low DNA shedders. We observed DNA transfer in a simulated attack scenario and demonstrated that shedder status has a significant influence on transfer rates. We have examined the background DNA in samples from T-shirts worn in an area with frequent human traffic and detected multiple contributors. We further demonstrated that DNA from co-workers of a T-shirt wearer can be secondarily transferred from the environment and detected in samples, and that the composition of background DNA is correlated with the shedder status of the wearer. Finally, we have illustrated the inference with the results of transfer probabilities and a fictive case with the use of a Bayesian network. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Angrisano, Antonio; Maratea, Antonio; Gaglione, Salvatore
2018-01-01
In the absence of obstacles, a GPS device is generally able to provide continuous and accurate estimates of position, while in urban scenarios buildings can generate multipath and echo-only phenomena that severely affect the continuity and accuracy of the provided estimates. Receiver autonomous integrity monitoring (RAIM) techniques are able to reduce the negative consequences of large blunders in urban scenarios, but require both good redundancy and low contamination to be effective. In this paper, a resampling strategy based on the bootstrap is proposed as an alternative to RAIM, in order to estimate position accurately in the case of low redundancy and multiple blunders: starting from the pseudorange measurement model, at each epoch the available measurements are bootstrapped, that is, randomly sampled with replacement, and the generated a posteriori empirical distribution is exploited to derive the final position. Compared to standard bootstrap, in this paper the sampling probabilities are not uniform but vary according to an indicator of the measurement quality. The proposed method has been compared with two different RAIM techniques on a data set collected in critical conditions, resulting in a clear improvement on all considered figures of merit.
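A minimal sketch of the quality-weighted bootstrap idea follows; the linear toy measurement model stands in for the linearized pseudorange equations, and the residual-based quality indicator and median aggregation are illustrative assumptions rather than the paper's method.

```python
# Hedged sketch of quality-weighted bootstrap positioning: measurements are
# resampled with replacement using non-uniform probabilities derived from a
# quality indicator, a position is estimated from each bootstrap replicate,
# and the replicate solutions are aggregated. The linear toy model, quality
# indicator and median aggregation are assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(7)
x_true = np.array([3.0, -2.0])

# Toy linearized measurement model y = A @ x + noise, with two large blunders.
A = rng.normal(size=(8, 2))
y = A @ x_true + rng.normal(0, 0.1, 8)
y[[1, 5]] += 20.0                                    # multipath-like blunders

plain = np.linalg.lstsq(A, y, rcond=None)[0]
quality = 1.0 / (1.0 + np.abs(y - A @ plain))        # low quality = large residual
p = quality / quality.sum()                          # non-uniform sampling weights

boot = []
for _ in range(500):
    idx = rng.choice(len(y), size=len(y), replace=True, p=p)
    boot.append(np.linalg.lstsq(A[idx], y[idx], rcond=None)[0])
x_boot = np.median(np.array(boot), axis=0)           # robust aggregate of replicates

print("plain LSQ :", plain.round(2))
print("bootstrap :", x_boot.round(2), " true:", x_true)
```

Because blundered measurements receive small resampling probabilities, most bootstrap replicates exclude or downweight them, and the aggregated solution moves closer to the true position than the plain least-squares fit.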
Pedroza, Claudia; Truong, Van Thi Thanh
2017-11-02
Analyses of multicenter studies often need to account for center clustering to ensure valid inference. For binary outcomes, it is particularly challenging to properly adjust for center when the number of centers or total sample size is small, or when there are few events per center. Our objective was to evaluate the performance of generalized estimating equation (GEE) log-binomial and Poisson models, generalized linear mixed models (GLMMs) assuming binomial and Poisson distributions, and a Bayesian binomial GLMM to account for center effect in these scenarios. We conducted a simulation study with few centers (≤30) and 50 or fewer subjects per center, using both a randomized controlled trial and an observational study design to estimate relative risk. We compared the GEE and GLMM models with a log-binomial model without adjustment for clustering in terms of bias, root mean square error (RMSE), and coverage. For the Bayesian GLMM, we used informative neutral priors that are skeptical of large treatment effects that are almost never observed in studies of medical interventions. All frequentist methods exhibited little bias, and the RMSE was very similar across the models. The binomial GLMM had poor convergence rates, ranging from 27% to 85%, but performed well otherwise. The results show that both GEE models need to use small sample corrections for robust SEs to achieve proper coverage of 95% CIs. The Bayesian GLMM had similar convergence rates but resulted in slightly more biased estimates for the smallest sample sizes. However, it had the smallest RMSE and good coverage across all scenarios. These results were very similar for both study designs. For the analyses of multicenter studies with a binary outcome and few centers, we recommend adjustment for center with either a GEE log-binomial or Poisson model with appropriate small sample corrections or a Bayesian binomial GLMM with informative priors.
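For orientation, a modified-Poisson GEE of the kind evaluated above can be fitted with statsmodels roughly as follows; the simulated data and the omission of small-sample corrections to the robust standard errors are assumptions of this sketch.

```python
# Hedged sketch: a modified-Poisson GEE with exchangeable working correlation
# to estimate a relative risk for a binary outcome while clustering on center.
# Simulated data stand in for a real multicenter trial; the small-sample
# corrections to the robust SEs discussed above are not applied here.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_centers, n_per_center = 20, 30
center = np.repeat(np.arange(n_centers), n_per_center)
treat = rng.integers(0, 2, n_centers * n_per_center)
center_effect = rng.normal(0, 0.3, n_centers)[center]
p = 1 / (1 + np.exp(-(-1.2 + 0.5 * treat + center_effect)))
y = rng.binomial(1, p)

df = pd.DataFrame({"y": y, "treat": treat, "center": center})
X = sm.add_constant(df[["treat"]])
model = sm.GEE(df["y"], X, groups=df["center"],
               family=sm.families.Poisson(),
               cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
rr = np.exp(res.params["treat"])   # log link: exp(coefficient) is a risk ratio
print(f"estimated relative risk: {rr:.2f}")
```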
Cumulative uncertainty in measured streamflow and water quality data for small watersheds
Harmel, R.D.; Cooper, R.J.; Slade, R.M.; Haney, R.L.; Arnold, J.G.
2006-01-01
The scientific community has not established an adequate understanding of the uncertainty inherent in measured water quality data, which is introduced by four procedural categories: streamflow measurement, sample collection, sample preservation/storage, and laboratory analysis. Although previous research has produced valuable information on relative differences in procedures within these categories, little information is available that compares the procedural categories or presents the cumulative uncertainty in resulting water quality data. As a result, quality control emphasis is often misdirected, and data uncertainty is typically either ignored or accounted for with an arbitrary margin of safety. Faced with the need for scientifically defensible estimates of data uncertainty to support water resource management, the objectives of this research were to: (1) compile selected published information on uncertainty related to measured streamflow and water quality data for small watersheds, (2) use a root mean square error propagation method to compare the uncertainty introduced by each procedural category, and (3) use the error propagation method to determine the cumulative probable uncertainty in measured streamflow, sediment, and nutrient data. Best case, typical, and worst case "data quality" scenarios were examined. Averaged across all constituents, the calculated cumulative probable uncertainty (±%) contributed under typical scenarios ranged from 6% to 19% for streamflow measurement, from 4% to 48% for sample collection, from 2% to 16% for sample preservation/storage, and from 5% to 21% for laboratory analysis. Under typical conditions, errors in storm loads ranged from 8% to 104% for dissolved nutrients, from 8% to 110% for total N and P, and from 7% to 53% for TSS. Results indicated that uncertainty can increase substantially under poor measurement conditions and limited quality control effort. This research provides introductory scientific estimates of uncertainty in measured water quality data. The results and procedures presented should also assist modelers in quantifying the "quality" of calibration and evaluation data sets, determining model accuracy goals, and evaluating model performance.
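The root mean square error propagation referred to above combines the probable error from each (assumed independent) procedural category in quadrature; a one-line sketch with made-up category values follows.

```python
# Hedged sketch of root mean square error propagation: if the procedural
# categories contribute independent probable errors E1..E4 (in percent),
# the cumulative probable uncertainty is their quadrature sum.
import math

def cumulative_uncertainty(*category_errors_pct):
    return math.sqrt(sum(e**2 for e in category_errors_pct))

# Illustrative "typical scenario" values (assumed, not taken from the paper):
# streamflow 10%, sample collection 15%, preservation/storage 5%, lab analysis 10%
print(round(cumulative_uncertainty(10, 15, 5, 10), 1), "%")
```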
Issues Involving The OSI Concept of Operation For Noble Gas Radionuclide Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrigan, C R; Sun, Y
2011-01-21
The development of a technically sound protocol for detecting the subsurface release of noble gas radionuclides is critical to the successful operation of an on site inspection (OSI) under the CTBT and has broad ramifications for all aspects of the OSI regime including the setting of specifications for both sampling and analysis equipment used during an OSI. With NA-24 support, we are investigating a variety of issues and concerns that have significant bearing on policy development and technical guidance regarding the detection of noble gases and the creation of a technically justifiable OSI concept of operation. The work at LLNL focuses on optimizing the ability to capture radioactive noble gases subject to the constraints of possible OSI scenarios. This focus results from recognizing the difficulty of detecting gas releases in geologic environments - a lesson we learned previously from the LLNL Non-Proliferation Experiment (NPE). Evaluation of a number of important noble gas detection issues, potentially affecting OSI policy, has awaited the US re-engagement with the OSI technical community. Thus, there have been numerous issues to address during the past 18 months. Most of our evaluations of a sampling or transport issue necessarily involve computer simulations. This is partly due to the lack of OSI-relevant field data, such as that provided by the NPE, and partly a result of the ability of LLNL computer-based models to test a range of geologic and atmospheric scenarios far beyond what could ever be studied in the field, making this approach very highly cost effective. We review some highlights of the transport and sampling issues we have investigated during the past year. We complete the discussion of these issues with a description of a preliminary design for subsurface sampling that is intended to be a practical solution to most if not all the challenges addressed here.
Holt, Ashley C; Salkeld, Daniel J; Fritz, Curtis L; Tucker, James R; Gong, Peng
2009-01-01
Background: Plague, caused by the bacterium Yersinia pestis, is a public and wildlife health concern in California and the western United States. This study explores the spatial characteristics of positive plague samples in California and tests Maxent, a machine-learning method that can be used to develop niche-based models from presence-only data, for mapping the potential distribution of plague foci. Maxent models were constructed using geocoded seroprevalence data from surveillance of California ground squirrels (Spermophilus beecheyi) as case points and Worldclim bioclimatic data as predictor variables, and compared and validated using area under the receiver operating curve (AUC) statistics. Additionally, model results were compared to locations of positive and negative coyote (Canis latrans) samples, in order to determine the correlation between Maxent model predictions and areas of plague risk as determined via wild carnivore surveillance. Results: Models of plague activity in California ground squirrels, based on recent climate conditions, accurately identified case locations (AUC of 0.913 to 0.948) and were significantly correlated with coyote samples. The final models were used to identify potential plague risk areas based on an ensemble of six future climate scenarios. These models suggest that by 2050, climate conditions may reduce plague risk in the southern parts of California and increase risk along the northern coast and Sierras. Conclusion: Because different modeling approaches can yield substantially different results, care should be taken when interpreting future model predictions. Nonetheless, niche modeling can be a useful tool for exploring and mapping the potential response of plague activity to climate change. The final models in this study were used to identify potential plague risk areas based on an ensemble of six future climate scenarios, which can help public managers decide where to allocate surveillance resources. In addition, Maxent model results were significantly correlated with coyote samples, indicating that carnivore surveillance programs will continue to be important for tracking the response of plague to future climate conditions. PMID:19558717
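The AUC validation step, taken in isolation from the Maxent fitting itself, can be sketched as follows with simulated suitability scores; the scores and class sizes are assumptions, not the study's outputs.

```python
# Hedged sketch of the AUC validation step only: given model-predicted
# suitability scores at presence and background locations, the area under
# the ROC curve measures how well the model ranks presences above background.
# The scores below are simulated; they do not come from the study's model.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
suitability_presence = rng.beta(5, 2, 200)     # presences tend to score high
suitability_background = rng.beta(2, 5, 1000)  # background points score lower

scores = np.concatenate([suitability_presence, suitability_background])
labels = np.concatenate([np.ones(200), np.zeros(1000)])
print("AUC:", round(roc_auc_score(labels, scores), 3))
```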
NASA Astrophysics Data System (ADS)
Aamir, Muhammad; Liao, Qiang; Hong, Wang; Xun, Zhu; Song, Sihong; Sajid, Muhammad
2017-02-01
High heat transfer performance of spray cooling on a structured surface might be an additional measure to increase the safety of an installation against any threat caused by a rapid increase in temperature. The purpose of the present experimental study is to explore the heat transfer performance of a structured surface under different spray conditions and surface temperatures. Two cylindrical stainless steel samples were used, one with a pyramid-pin structured surface and the other with a smooth surface. Surface heat fluxes of 3.60, 3.46, 3.93, and 4.91 MW/m² were estimated for sample initial average temperatures of 600, 700, 800, and 900 °C, respectively, at an inlet pressure of 1.0 MPa. A maximum cooling rate of 507 °C/s was estimated at an inlet pressure of 0.7 MPa and 900 °C for the structured surface, while for the smooth surface a maximum cooling rate of 356 °C/s was attained at 1.0 MPa and 700 °C. The structured surface performed better at exchanging heat during spray cooling at an initial sample temperature of 900 °C, with a relative increase in surface heat flux by factors of 1.9, 1.56, 1.66, and 1.74 relative to the smooth surface for inlet pressures of 0.4, 0.7, 1.0, and 1.3 MPa, respectively. For the smooth surface, a decreasing trend in estimated heat flux was observed when the initial sample temperature was increased from 600 to 900 °C. A temperature-based function specification method was utilized to estimate surface heat flux and surface temperature. Limited published work is available on the application of structured-surface spray cooling techniques for the safety of stainless steel structures in very high temperature scenarios such as nuclear safety vessels and liquefied natural gas storage tanks.
The Taxonomy of Blue Amorphous Galaxies. I. Hα and UBVI Data
NASA Astrophysics Data System (ADS)
Marlowe, Amanda T.; Meurer, Gerhardt R.; Heckman, Timothy M.; Schommer, Robert
1997-10-01
Dwarf galaxies play an important role in our understanding of galaxy formation and evolution. We have embarked on a systematic study of 12 nearby dwarf galaxies (most of which have been classified as amorphous) selected preferentially by their blue colors. The properties of the galaxies in the sample suggest that they are in a burst or postburst state. It seems likely that these amorphous galaxies are closely related to other "starburst" dwarfs such as blue compact dwarfs (BCDs) and H II galaxies but are considerably closer and therefore easier to study. If so, these galaxies may offer important insights into dwarf galaxy evolution. In an effort to clarify the role of starbursts in evolutionary scenarios for dwarf galaxies, we present Hα and UBVI data for our sample. Blue amorphous galaxies, like BCDs and H II galaxies, have surface brightness profiles that are exponential in the outer regions (r ≳ 1.5 r_e) but have a predominantly blue central excess, which suggests a young burst in an older, redder galaxy. Seven of the galaxies have the bubble or filamentary Hα morphology and double-peaked emission lines that are the signature of superbubbles or superwind activity. These galaxies are typically the ones with the strongest central excesses. The underlying exponential galaxies are very similar to those found in BCDs and H II galaxies. How amorphous galaxies fit into the dwarf irregular-"starburst dwarf"-dwarf elliptical evolutionary debate is less clear. In this paper, we present our data and make some preliminary comparisons between amorphous galaxies and other classes of dwarf galaxies. In a future companion paper, we will compare this sample more quantitatively with other dwarf galaxy samples in an effort to determine if amorphous galaxies are a physically different class of object from other starburst dwarfs such as BCDs and H II galaxies and also investigate their place in dwarf galaxy evolution scenarios.
Flowe, Heather D; Ebbesen, Ebbe B; Putcha-Bhagavatula, Anila
2007-04-01
Rape shield laws, which limit the introduction of sexual history evidence in rape trials, challenge the view that women with extensive sexual histories more frequently fabricate charges of rape than other women. The present study examined the relationship between women's actual sexual history and their reporting rape in hypothetical scenarios. Female participants (college students and a community sample, which included women working as prostitutes and topless dancers, and women living in a drug and alcohol rehabilitation center) imagined themselves in dating scenarios that described either a legally definable act of rape or consensual sexual intercourse. Additionally, within the rape scenarios, level of consensual intimate contact (i.e., foreplay) preceding rape was examined to determine its influence on rape reporting. Women were less likely to say that they would take legal action in response to the rape scenarios if they had extensive sexual histories, or if they had consented to an extensive amount of intimate contact before the rape. In response to the consensual sexual intercourse scenarios, women with more extensive sexual histories were not more likely to say that they would report rape, even when the scenario provided them with a motive for seeking revenge against their dating partner.
Ultra faint dwarf galaxies: an arena for testing dark matter versus modified gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Weikang; Ishak, Mustapha, E-mail: wxl123830@utdallas.edu, E-mail: mishak@utdallas.edu
2016-10-01
The scenario consistent with a wealth of observations for the missing mass problem is that of weakly interacting dark matter particles. However, arguments or proposals for a Newtonian or relativistic modified gravity scenario continue to be made. A distinguishing characteristic between the two scenarios is that dark matter particles can produce a gravitational effect, in principle, without the need of baryons while this is not the case for the modified gravity scenario where such an effect must be correlated with the amount of baryonic matter. We consider here ultra-faint dwarf (UFD) galaxies as a promising arena to test the two scenarios based on the above assertion. We compare the correlation of the luminosity with the velocity dispersion between samples of UFD and non-UFD galaxies, finding a significant loss of correlation for UFD galaxies. For example, we find for 28 non-UFD galaxies a strong correlation coefficient of −0.688 which drops to −0.077 for the 23 UFD galaxies. Incoming and future data will determine whether the observed stochasticity for UFD galaxies is physical or due to systematics in the data. Such a loss of correlation (if it is to persist) is possible and consistent with the dark matter scenario for UFD galaxies but would constitute a new challenge for the modified gravity scenario.
Development of a safety decision-making scenario to measure worker safety in agriculture.
Mosher, G A; Keren, N; Freeman, S A; Hurburgh, C R
2014-04-01
Human factors play an important role in the management of occupational safety, especially in high-hazard workplaces such as commercial grain-handling facilities. Employee decision-making patterns represent an essential component of the safety system within a work environment. This research describes the process used to create a safety decision-making scenario to measure the process that grain-handling employees used to make choices in a safety-related work task. A sample of 160 employees completed safety decision-making simulations based on a hypothetical but realistic scenario in a grain-handling environment. Their choices and the information they used to make their choices were recorded. Although the employees emphasized safety information in their decision-making process, not all of their choices were safe choices. Factors influencing their choices are discussed, and implications for industry, management, and workers are shared.
You got a problem with that? Exploring evaluators' disagreements about ethics.
Morris, M; Jacobs, L R
2000-08-01
A random sample of American Evaluation Association (AEA) members were surveyed for their reactions to three case scenarios--informed consent, impartial reporting, and stakeholder involvement--in which an evaluator acts in a way that could be deemed ethically problematic. Significant disagreement among respondents was found for each of the scenarios, in terms of respondents' views of whether the evaluator had behaved unethically. Respondents' explanations of their judgments support the notion that general guidelines for professional behavior (such as AEA's Guiding Principles for Evaluators) can encompass sharply conflicting interpretations of how evaluators should behave in specific situations. Respondents employed in private business/consulting were less likely than those in other settings to believe that the scenarios portrayed unethical behavior by the evaluator, a finding that underscores the importance of taking contextual variables into account when analyzing evaluators' ethical perceptions. The need for increased dialogue among evaluators who represent varied perspectives on ethical issues is addressed.
Phobos Sample Return: Next Approach
NASA Astrophysics Data System (ADS)
Zelenyi, Lev; Martynov, Maxim; Zakharov, Alexander; Korablev, Oleg; Ivanov, Alexey; Karabadzak, George
The Martian moons still remain a mystery after numerous studies by Mars orbiting spacecraft. Their study covers three major topics related to (1) the Solar system in general (formation and evolution, origin of planetary satellites, origin and evolution of life); (2) small bodies (captured asteroid, or remnants of Mars formation, or reaccreted Mars ejecta); (3) Mars (formation and evolution of Mars; Mars ejecta at the satellites). As reviewed by Galimov [2010], most of the above questions require sample return from a Martian moon, while some (e.g. the characterization of the organic matter) could also be answered by in situ experiments. There is the possibility to obtain a sample of Mars material by sampling Phobos: following Chappaz et al. [2012], a 200-g sample could contain 10⁻⁷ g of Mars surface material launched during the past 1 million years, or 5×10⁻⁵ g of Mars material launched during the past 10 million years, or 5×10¹⁰ individual particles from Mars, quantities suitable for accurate laboratory analyses. The studies of Phobos have been of high priority in the Russian program on planetary research for many years. The Phobos-88 mission consisted of two spacecraft (Phobos-1, Phobos-2) and aimed at an approach to Phobos at 50 m and remote studies, and also the release of small landers (long-living stations DAS). This mission implemented its program incompletely; it returned information about the Martian environment and atmosphere. The next project, Phobos Sample Return (Phobos-Grunt), initially planned in the early 2000s, was delayed several times owing to budget difficulties; the spacecraft failed to leave near-Earth orbit in 2011. The recovery of the science goals of this mission and the delivery of samples of Phobos to Earth remain of highest priority for the Russian scientific community. The next Phobos sample return mission, named Boomerang, was postponed following the ExoMars cooperation, but is considered the next in the line of planetary exploration, suitable for launch around 2022. A possible scenario of the Boomerang mission includes an approach to Deimos prior to the landing on Phobos. The needed excess ΔV with respect to the simple scenario (elliptical orbit → near-Phobos orbit) amounts to 0.67 km s⁻¹ (1.6 vs 0.93 km s⁻¹). The Boomerang mission basically repeats the Phobos-SR (2011) architecture, in which the transfer-orbiting spacecraft lands on the Phobos surface and a small return vehicle launches the return capsule to Earth. We consider the Boomerang mission an important step in Mars exploration and a direct precursor of Mars Sample Return. The following elements of the Boomerang mission might be directly employed, or serve as prototypes, for Mars sample return in the future: the return vehicle, the Earth descent module, and the transfer-orbital spacecraft. We urge the development of this project for its high science value and recognize its elements as a potential national contribution to an international Mars Sample Return project. Galimov E.M., Phobos sample return mission: scientific substantiation, Solar System Research, v. 44, no. 1, pp. 5-14, 2010. Chappaz L., H.J. Melosh, M. Vaguero, and K.C. Howell, Material transfer from the surface of Mars to Phobos and Deimos, 43rd Lunar and Planetary Science Conference, paper 1422, 2012.
NASA Astrophysics Data System (ADS)
Yang, Z. L.; McClelland, J. W.; Su, H.; Cai, X.; Lin, P.; Tavakoly, A. A.; Griffin, C. G.; Turner, E.; Maidment, D. R.; Montagna, P.
2014-12-01
This study seeks to improve our understanding of how upland landscapes and coastal waters, which are connected by watersheds, respond to changes in hydrological and biogeochemical cycles resulting from changes in climate, local weather patterns, and land use. This paper reports our progress in the following areas. (1) The Noah-MP land surface model is augmented to include soil nitrogen leaching and plant fixation and uptake of nitrogen. (2) We have evaluated temperature, precipitation, and runoff change (2039-2048 relative to 1989-1998) patterns in Texas under the A2 emission scenario using the North American Regional Climate Change Assessment Program (NARCCAP) product. (3) We have linked a GIS-based river routing model (RAPID) and a GIS-based nitrogen input dataset (TX-ANB). The modeling framework was applied to total nitrogen (TN) load estimation in the San Antonio and Guadalupe basins. (4) Beginning in July 2011, the Colorado, Guadalupe, San Antonio, and Nueces rivers were sampled on a monthly basis; sampling continued until November 2013. We have also established an ongoing citizen science sampling program and have contacted the Lower Colorado River Authority and the Texas Stream Team at Texas State University to solicit participation in our program. (5) We have tested multiple scenarios of nutrient contribution to South Texas bays. We are modeling the behavior of these systems under stress due to climate change, such as less overall freshwater inflow, increased inorganic nutrient loading, and more frequent large storms.
NASA Astrophysics Data System (ADS)
Wu, Puxun; Yu, Hongwei
2007-04-01
Constraints from the Gold sample Type Ia supernova (SN Ia) data, the Supernova Legacy Survey (SNLS) SN Ia data, and the size of the baryonic acoustic oscillation (BAO) peak found in the Sloan Digital Sky Survey (SDSS) on the generalized Chaplygin gas (GCG) model, proposed as a candidate for the unified dark matter-dark energy scenario (UDME), are examined in the cases of both a spatially flat and a spatially curved universe. Our results reveal that the GCG model is consistent with a flat universe up to the 68% confidence level, and the model parameters are within the allowed parameter ranges of the GCG as a candidate for UDME. Meanwhile, we find that in the flat case, both the Gold sample + SDSS BAO data and the SNLS sample + SDSS BAO data break the degeneracy of A_s and α and allow for the scenario of a cosmological constant plus dark matter (α = 0) at the 68% confidence level, although they rule out the standard Chaplygin gas model (α = 1) at the 99% confidence level. However, for the case without a flat prior, the SNLS SN Ia + SDSS BAO data do not break the degeneracy between A_s and α, and they allow for ΛCDM (α = 0) and the standard Chaplygin gas model (α = 1) at the 68% confidence level, while the Gold SN Ia + SDSS BAO data break the degeneracy of A_s and α and rule out ΛCDM at the 68% confidence level and the standard Chaplygin gas model at the 99% confidence level.
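A minimal numerical sketch of the GCG parameterization discussed above, assuming the standard energy-density law rho(a)/rho_0 = [A_s + (1 - A_s) a^(-3(1+alpha))]^(1/(1+alpha)); the A_s value and scale factors below are illustrative only and are not the constrained values from the Gold, SNLS, or SDSS BAO fits:

import numpy as np

def gcg_density_ratio(a, A_s, alpha):
    """Generalized Chaplygin gas energy density relative to today (a = 1):
    rho(a)/rho_0 = [A_s + (1 - A_s) * a**(-3*(1 + alpha))]**(1/(1 + alpha))."""
    return (A_s + (1.0 - A_s) * a ** (-3.0 * (1.0 + alpha))) ** (1.0 / (1.0 + alpha))

# alpha = 0 mimics a cosmological constant plus dark matter (LambdaCDM-like),
# alpha = 1 is the standard Chaplygin gas; A_s = 0.75 is an illustrative value.
a = np.linspace(0.1, 1.0, 4)
for alpha in (0.0, 0.5, 1.0):
    print(alpha, np.round(gcg_density_ratio(a, A_s=0.75, alpha=alpha), 3))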
Dépraz, A; Cordellier, M; Hausser, J; Pfenninger, M
2008-05-01
The localization of Last Glacial Maximum (LGM) refugia is crucial information to understand a species' history and predict its reaction to future climate changes. However, many phylogeographical studies often lack sampling designs intensive enough to precisely localize these refugia. The hairy land snail Trochulus villosus has a small range centred on Switzerland, which could be intensively covered by sampling 455 individuals from 52 populations. Based on mitochondrial DNA sequences (COI and 16S), we identified two divergent lineages with distinct geographical distributions. Bayesian skyline plots suggested that both lineages expanded at the end of the LGM. To find where the origin populations were located, we applied the principles of ancestral character reconstruction and identified a candidate refugium for each mtDNA lineage: the French Jura and Central Switzerland, both ice-free during the LGM. Additional refugia, however, could not be excluded, as suggested by the microsatellite analysis of a population subset. Modelling the LGM niche of T. villosus, we showed that suitable climatic conditions were expected in the inferred refugia, but potentially also in the nunataks of the alpine ice shield. In a model selection approach, we compared several alternative recolonization scenarios by estimating the Akaike information criterion for their respective maximum-likelihood migration rates. The 'two refugia' scenario received by far the best support given the distribution of genetic diversity in T. villosus populations. Provided that fine-scale sampling designs and various analytical approaches are combined, it is possible to refine our necessary understanding of species responses to environmental changes.
NASA Astrophysics Data System (ADS)
Sapkota, A.; Das, P.; Böhmer, A. E.; Ueland, B. G.; Abernathy, D. L.; Bud'ko, S. L.; Canfield, P. C.; Kreyssig, A.; Goldman, A. I.; McQueeney, R. J.
2018-05-01
Results of inelastic neutron scattering measurements are reported for two annealed compositions of Ca(Fe1-xCox)2As2, x = 0.026 and 0.030, which possess stripe-type antiferromagnetically ordered and superconducting ground states, respectively. In the AFM ground state, well-defined and gapped spin waves are observed for x = 0.026, similar to the parent CaFe2As2 compound. We conclude that the well-defined spin waves are likely to be present for all x corresponding to the AFM state. This behavior is in contrast to the smooth evolution to overdamped spin dynamics observed in Ba(Fe1-xCox)2As2, wherein the crossover corresponds to microscopically coexisting AFM order and SC at low temperature. The smooth evolution is likely absent in Ca(Fe1-xCox)2As2 due to the mutual exclusion of the AFM ordered and SC states. Overdamped spin dynamics characterize the paramagnetism of the x = 0.030 sample and the high-temperature x = 0.026 sample. A sizable loss of magnetic intensity is observed over a wide energy range upon cooling the x = 0.030 sample, at temperatures just above and within the superconducting phase. This phenomenon is unique amongst the iron-based superconductors and is consistent with a temperature-dependent reduction in the fluctuating moment. One possible scenario ascribes this loss of moment to a sensitivity to the c-axis lattice parameter in proximity to the nonmagnetic collapsed tetragonal phase, and another scenario ascribes the loss to the formation of a pseudogap.
Scarborough, Peter; Harrington, Richard A.; Mizdrak, Anja; Zhou, Lijuan Marissa; Doherty, Aiden
2014-01-01
Noncommunicable disease (NCD) scenario models are an essential part of the public health toolkit, allowing for an estimate of the health impact of population-level interventions that are not amenable to assessment by standard epidemiological study designs (e.g., health-related food taxes and physical infrastructure projects) and extrapolating results from small samples to the whole population. The PRIME (Preventable Risk Integrated ModEl) is an openly available NCD scenario model that estimates the effect of population-level changes in diet, physical activity, and alcohol and tobacco consumption on NCD mortality. The structure and methods employed in the PRIME are described here in detail, including the development of open source code that will support a PRIME web application to be launched in 2015. This paper reviews scenario results from eleven papers that have used the PRIME, including estimates of the impact of achieving government recommendations for healthy diets, health-related food taxes and subsidies, and low-carbon diets. Future challenges for NCD scenario modelling, including the need for more comparisons between models and the improvement of future prediction of NCD rates, are also discussed. PMID:25328757
An inverse approach to perturb historical rainfall data for scenario-neutral climate impact studies
NASA Astrophysics Data System (ADS)
Guo, Danlu; Westra, Seth; Maier, Holger R.
2018-01-01
Scenario-neutral approaches are being used increasingly for climate impact assessments, as they allow water resource system performance to be evaluated independently of climate change projections. An important element of these approaches is the generation of perturbed series of hydrometeorological variables that form the inputs to hydrologic and water resource assessment models, with most scenario-neutral studies to-date considering only shifts in the average and a limited number of other statistics of each climate variable. In this study, a stochastic generation approach is used to perturb not only the average of the relevant hydrometeorological variables, but also attributes such as the intermittency and extremes. An optimization-based inverse approach is developed to obtain hydrometeorological time series with uniform coverage across the possible ranges of rainfall attributes (referred to as the 'exposure space'). The approach is demonstrated on a widely used rainfall generator, WGEN, for a case study at Adelaide, Australia, and is shown to be capable of producing evenly-distributed samples over the exposure space. The inverse approach expands the applicability of the scenario-neutral approach in evaluating a water resource system's sensitivity to a wider range of plausible climate change scenarios.
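A toy sketch of the optimization-based inverse step described above, assuming a simple two-parameter stand-in generator (Bernoulli wet days, exponential depths) rather than WGEN itself, and a single illustrative target point in the exposure space (target mean rainfall and wet-day fraction); the actual study covers the full space of rainfall attributes uniformly:

import numpy as np
from scipy.optimize import minimize

def simulate_rainfall(p_wet, mean_depth, n_days=10000, seed=1):
    """Stand-in stochastic daily rainfall generator: Bernoulli occurrence and
    exponential wet-day depths, with common random numbers (fixed seed) so the
    objective below is smooth enough for a simplex search."""
    rng = np.random.default_rng(seed)
    wet = rng.random(n_days) < p_wet
    depth = rng.exponential(mean_depth, n_days)
    return np.where(wet, depth, 0.0)

def attribute_error(params, target_mean, target_wet_frac):
    p_wet = min(max(params[0], 0.0), 1.0)
    mean_depth = max(params[1], 1e-6)
    series = simulate_rainfall(p_wet, mean_depth)
    return ((series.mean() - target_mean) ** 2
            + ((series > 0).mean() - target_wet_frac) ** 2)

# Invert the generator: find parameters whose simulated series reproduces one
# perturbed point of the exposure space (the targets below are illustrative).
res = minimize(attribute_error, x0=[0.3, 5.0], args=(1.8, 0.25), method="Nelder-Mead")
print(np.round(res.x, 3))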
Peel, D; Waples, R S; Macbeth, G M; Do, C; Ovenden, J R
2013-03-01
Theoretical models are often applied to population genetic data sets without fully considering the effect of missing data. Researchers can deal with missing data by removing individuals that have failed to yield genotypes and/or by removing loci that have failed to yield allelic determinations, but despite their best efforts, most data sets still contain some missing data. As a consequence, realized sample size differs among loci, and this poses a problem for unbiased methods that must explicitly account for random sampling error. One commonly used solution for the calculation of contemporary effective population size (Ne) is to calculate the effective sample size as an unweighted mean or harmonic mean across loci. This is not ideal because it fails to account for the fact that loci with different numbers of alleles have different information content. Here we consider this problem for genetic estimators of contemporary effective population size (Ne). To evaluate bias and precision of several statistical approaches for dealing with missing data, we simulated populations with known Ne and various degrees of missing data. Across all scenarios, one method of correcting for missing data (fixed-inverse variance-weighted harmonic mean) consistently performed the best for both single-sample and two-sample (temporal) methods of estimating Ne and outperformed some methods currently in widespread use. The approach adopted here may be a starting point to adjust other population genetics methods that include per-locus sample size components. © 2012 Blackwell Publishing Ltd.
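A small sketch of the harmonic-mean idea discussed above; the weighting by (number of alleles - 1) is used here only as an illustrative proxy for per-locus information content and is not the exact fixed-inverse-variance weighting evaluated in the study:

import numpy as np

def weighted_harmonic_mean(sample_sizes, weights):
    """Weighted harmonic mean: sum(w) / sum(w / n)."""
    n = np.asarray(sample_sizes, float)
    w = np.asarray(weights, float)
    return w.sum() / (w / n).sum()

# Per-locus realized sample sizes (individuals genotyped) and allele counts;
# all values are illustrative.
n_per_locus = [48, 50, 37, 45]
alleles_per_locus = [8, 3, 12, 5]

unweighted = len(n_per_locus) / sum(1.0 / n for n in n_per_locus)
weighted = weighted_harmonic_mean(n_per_locus, [k - 1 for k in alleles_per_locus])
print(round(unweighted, 2), round(weighted, 2))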
Augner, Christoph; Hacker, Gerhard W; Oberfeld, Gerd; Florian, Matthias; Hitzl, Wolfgang; Hutter, Jörg; Pauser, Gernot
2010-06-01
The present study aimed to test whether exposure to radiofrequency electromagnetic fields (RF-EMF) emitted by mobile phone base stations may have effects on salivary alpha-amylase, immunoglobulin A (IgA), and cortisol levels. Fifty-seven participants were randomly allocated to one of three different experimental scenarios (22 participants to scenario 1, 26 to scenario 2, and 9 to scenario 3). Each participant went through five 50-minute exposure sessions. The main RF-EMF source was a GSM-900-MHz antenna located at the outer wall of the building. In scenarios 1 and 2, the first, third, and fifth sessions were "low" (median power flux density 5.2 µW/m²) exposure. The second session was "high" (2126.8 µW/m²) and the fourth session was "medium" (153.6 µW/m²) in scenario 1, and vice versa in scenario 2. Scenario 3 had four "low" exposure conditions, followed by a "high" exposure condition. Biomedical parameters were collected by saliva samples three times per session. Exposure levels were created by shielding curtains. In scenario 3, from session 4 to session 5 (from "low" to "high" exposure), an increase of cortisol was detected, while in scenarios 1 and 2, a higher concentration of alpha-amylase relative to the baseline was identified as compared to that in scenario 3. IgA concentration was not significantly related to the exposure. RF-EMF at field densities considerably lower than the ICNIRP guidelines may influence certain psychobiological stress markers. Copyright © 2010 The Editorial Board of Biomedical and Environmental Sciences. Published by Elsevier B.V. All rights reserved.
Gerbasi, David; Shapiro, Moshe; Brumer, Paul
2006-02-21
Enantiomeric control of 1,3 dimethylallene in a collisional environment is examined. Specifically, our previous "laser distillation" scenario wherein three perpendicular linearly polarized light fields are applied to excite a set of vib-rotational eigenstates of a randomly oriented sample is considered. The addition of internal conversion, dissociation, decoherence, and collisional relaxation mimics experimental conditions and molecular decay processes. Of greatest relevance is internal conversion which, in the case of dimethylallene, is followed by molecular dissociation. For various rates of internal conversion, enantiomeric control is maintained in this scenario by a delicate balance between collisional relaxation of excited dimethylallene that enhances control and collisional dephasing, which diminishes control.
Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.
Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael
2014-10-01
Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication of the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. Here we consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data from FACS measurements, we develop minimum mean-square error (MMSE) and maximum-likelihood (ML) estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the transition probabilities and steady states to which the estimates converge differ widely from the real values if one uses the standard deterministic approach with noisy measurements. This provides support for our argument that, for the analysis of FACS data, one should consider the observed state as a random variable. The second problem we address concerns the consequences of estimating the probability of a cell being in a particular state from measurements of a small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
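A simplified sketch of the setting described above, assuming a three-state cell population with made-up transition probabilities and Gaussian measurement noise on the counts; the estimator shown is the naive deterministic least-squares baseline (which the abstract argues is biased under noise), not the MMSE or ML estimators developed in the paper:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state transition matrix (rows sum to 1).
P_true = np.array([[0.80, 0.15, 0.05],
                   [0.10, 0.80, 0.10],
                   [0.05, 0.25, 0.70]])

def simulate_counts(P, n_cells=200, n_steps=50, noise_sd=5.0):
    """Simulate per-state cell counts over time and add measurement noise,
    mimicking noisy FACS population-size readouts."""
    state = rng.integers(0, 3, size=n_cells)
    counts = []
    for _ in range(n_steps):
        state = np.array([rng.choice(3, p=P[s]) for s in state])
        noisy = np.bincount(state, minlength=3) + rng.normal(0, noise_sd, 3)
        counts.append(np.clip(noisy, 0, None))
    return np.array(counts)

def naive_transition_estimate(counts):
    """Deterministic baseline: least-squares fit of counts[t+1] ~ counts[t] @ P,
    with rows then projected back onto the probability simplex.  Treating the
    noisy counts as exact is precisely what the paper cautions against."""
    X, Y = counts[:-1], counts[1:]
    P_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    P_hat = np.clip(P_hat, 0, None)
    return P_hat / P_hat.sum(axis=1, keepdims=True)

print(np.round(naive_transition_estimate(simulate_counts(P_true)), 2))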
Ait Kaci Azzou, Sadoune; Larribe, Fabrice; Froda, Sorana
2015-01-01
The effective population size over time (demographic history) can be retraced from a sample of contemporary DNA sequences. In this paper, we propose a novel methodology based on importance sampling (IS) for exploring such demographic histories. Our starting point is the generalized skyline plot, with the main difference being that our procedure, the skywis plot, uses a large number of genealogies. The information provided by these genealogies is combined according to the IS weights. Thus, we compute a weighted average of the effective population sizes on specific time intervals (epochs), where the genealogies that agree more with the data are given more weight. We illustrate by a simulation study that the skywis plot correctly reconstructs the recent demographic history under the scenarios most commonly considered in the literature. In particular, our method can capture a change point in the effective population size, and its overall performance is comparable with that of the Bayesian skyline plot. We also introduce the case of serially sampled sequences and illustrate that it is possible to improve the performance of the skywis plot in the case of an exponential expansion of the effective population size. PMID:26300910
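The central averaging step can be sketched in a few lines; the genealogy weights and per-epoch estimates below are synthetic placeholders, whereas in the actual method they come from the importance-sampling procedure applied to the sequence data:

import numpy as np

rng = np.random.default_rng(2)

n_genealogies, n_epochs = 1000, 5
weights = rng.exponential(1.0, n_genealogies)              # stand-in IS weights
ne_estimates = rng.lognormal(np.log(5000), 0.5, size=(n_genealogies, n_epochs))

# Skywis-style summary: a weighted average of effective population size per
# epoch, so genealogies that agree better with the data contribute more.
weighted_ne = (weights[:, None] * ne_estimates).sum(axis=0) / weights.sum()
print(np.round(weighted_ne))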
NASA Astrophysics Data System (ADS)
Becker, Holger; Schattschneider, Sebastian; Klemm, Richard; Hlawatsch, Nadine; Gärtner, Claudia
2015-03-01
The continuous monitoring of the environment for lethal pathogens is a central task in the field of biothreat detection. Typical scenarios involve air sampling in locations such as public transport systems or large public events and a subsequent analysis of the samples by a portable instrument. Lab-on-a-chip technologies are one of the promising technological candidates for such a system. We have developed an integrated microfluidic system with automatic sampling for the detection of CBRNE-related pathogens. The chip implements a two-pronged analysis strategy: on the one hand, an immunological track using antibodies immobilized on a frit with subsequent photometric detection; on the other hand, a molecular biology approach using continuous-flow PCR with fluorescence end-point detection. The cartridge contains a two-component molded rotary valve to allow active fluid control and switching between channels. The accompanying instrument contains all elements for fluidic and valve actuation and thermal control, as well as the two detection modalities. Reagents are stored in dedicated reagent packs which are connected directly to the cartridge. With this system, we have been able to demonstrate the detection of a variety of pathogen species.
Radiation-induced melting in coherent X-ray diffractive imaging at the nanoscale
Ponomarenko, O.; Nikulin, A. Y.; Moser, H. O.; Yang, P.; Sakata, O.
2011-01-01
Coherent X-ray diffraction techniques play an increasingly significant role in the imaging of nanoscale structures, ranging from metallic and semiconductor to biological objects. In materials science, X-rays are usually considered to be relatively non-destructive, but under certain conditions they can cause significant radiation damage and heat loading on the samples. The qualitative literature data concerning the tolerance of nanostructured samples to synchrotron radiation in coherent diffraction imaging experiments are scarce. In this work, experimental evidence of a complete destruction of polymer and gold nanosamples by the synchrotron beam is reported in the case of imaging at 1–10 nm spatial resolution. Numerical simulations based on a heat-transfer model demonstrate the high sensitivity of the temperature distribution in samples to macroscopic experimental parameters such as the conduction properties of materials, radiation heat transfer, and convection. However, for realistic experimental conditions the calculated rates of temperature rise alone cannot explain the melting transitions observed in the nanosamples. Comparison of these results with the literature data allows a specific scenario of the sample destruction in each particular case to be presented, and a strategy for damage reduction to be proposed. PMID:21685675
Armstrong, Alacia; Valverde, Angel; Ramond, Jean-Baptiste; Makhalanyane, Thulani P.; Jansson, Janet K.; Hopkins, David W.; Aspray, Thomas J.; Seely, Mary; Trindade, Marla I.; Cowan, Don A.
2016-01-01
The temporal dynamics of desert soil microbial communities are poorly understood. Given the implications for ecosystem functioning under a global change scenario, a better understanding of desert microbial community stability is crucial. Here, we sampled soils in the central Namib Desert on sixteen different occasions over a one-year period. Using Illumina-based amplicon sequencing of the 16S rRNA gene, we found that α-diversity (richness) was more variable at a given sampling date (spatial variability) than over the course of one year (temporal variability). Community composition remained essentially unchanged across the first 10 months, indicating that spatial sampling might be more important than temporal sampling when assessing β-diversity patterns in desert soils. However, a major shift in microbial community composition was found following a single precipitation event. This shift in composition was associated with a rapid increase in CO2 respiration and productivity, supporting the view that desert soil microbial communities respond rapidly to re-wetting and that this response may be the result of both taxon-specific selection and changes in the availability or accessibility of organic substrates. Recovery to quasi pre-disturbance community composition was achieved within one month after rainfall. PMID:27680878
Armstrong, Alacia; Valverde, Angel; Ramond, Jean-Baptiste; Makhalanyane, Thulani P; Jansson, Janet K; Hopkins, David W; Aspray, Thomas J; Seely, Mary; Trindade, Marla I; Cowan, Don A
2016-09-29
The temporal dynamics of desert soil microbial communities are poorly understood. Given the implications for ecosystem functioning under a global change scenario, a better understanding of desert microbial community stability is crucial. Here, we sampled soils in the central Namib Desert on sixteen different occasions over a one-year period. Using Illumina-based amplicon sequencing of the 16S rRNA gene, we found that α-diversity (richness) was more variable at a given sampling date (spatial variability) than over the course of one year (temporal variability). Community composition remained essentially unchanged across the first 10 months, indicating that spatial sampling might be more important than temporal sampling when assessing β-diversity patterns in desert soils. However, a major shift in microbial community composition was found following a single precipitation event. This shift in composition was associated with a rapid increase in CO2 respiration and productivity, supporting the view that desert soil microbial communities respond rapidly to re-wetting and that this response may be the result of both taxon-specific selection and changes in the availability or accessibility of organic substrates. Recovery to quasi pre-disturbance community composition was achieved within one month after rainfall.
Porosity estimation by semi-supervised learning with sparsely available labeled samples
NASA Astrophysics Data System (ADS)
Lima, Luiz Alberto; Görnitz, Nico; Varella, Luiz Eduardo; Vellasco, Marley; Müller, Klaus-Robert; Nakajima, Shinichi
2017-09-01
This paper addresses the porosity estimation problem from seismic impedance volumes and porosity samples located in a small group of exploratory wells. Regression methods, trained on the impedance as inputs and the porosity as output labels, generally suffer from extremely expensive (and hence sparsely available) porosity samples. To make optimal use of the valuable porosity data, a semi-supervised machine learning method, Transductive Conditional Random Field Regression (TCRFR), was proposed, showing good performance (Görnitz et al., 2017). TCRFR, however, still requires more labeled data than are usually available, which creates a gap when applying the method to the porosity estimation problem in realistic situations. In this paper, we aim to fill this gap by introducing two graph-based preprocessing techniques, which adapt the original TCRFR to extremely weakly supervised scenarios. Our new method outperforms the previous automatic estimation methods on synthetic data and provides a result comparable to the manual, labor-intensive, time-consuming geostatistics approach on real data, proving its potential as a practical industrial tool.
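A generic graph-based semi-supervised baseline, included only to illustrate how sparse well labels can be propagated over an impedance-similarity graph; this is harmonic Laplacian interpolation on synthetic 1-D data, not the TCRFR model or the preprocessing techniques of the paper:

import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: impedance values along a profile and porosity labels known
# only at a few "well" locations (the real problem involves 3-D seismic volumes).
impedance = np.sort(rng.normal(6000, 800, 200))
porosity = 0.35 - 3.0e-5 * (impedance - 5000) + rng.normal(0, 0.01, 200)
labeled = rng.choice(200, size=8, replace=False)

# Gaussian similarity graph on impedance; harmonic interpolation forces similar
# impedances to take similar porosity values.
W = np.exp(-((impedance[:, None] - impedance[None, :]) / 200.0) ** 2)
L = np.diag(W.sum(axis=1)) - W

f = np.zeros(200)
mask = np.zeros(200, dtype=bool)
mask[labeled] = True
f[mask] = porosity[labeled]
# Solve L_uu f_u = -L_ul f_l for the unlabeled nodes.
f[~mask] = np.linalg.solve(L[~mask][:, ~mask], -L[~mask][:, mask] @ f[mask])
print(round(float(np.abs(f - porosity).mean()), 4))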
Acrylamide content in French fries prepared in households: A pilot study in Spanish homes.
Mesias, Marta; Delgado-Andrade, Cristina; Holgado, Francisca; Morales, Francisco J
2018-09-15
An observational cross-sectional pilot study in 73 Spanish households was conducted to evaluate the impact of consumer practices on the formation of acrylamide during the preparation of French fries from fresh potatoes using one-stage frying. 45.2% of samples presented acrylamide concentrations above the benchmark level for French fries (500 µg/kg), 6.9% of samples exceeded 2000 µg/kg, and the 95th percentile was 2028 µg/kg. The median and average values were significantly higher than those in the EFSA report for this food category, suggesting that the total exposure to acrylamide by the population could be underestimated. In this randomised scenario of cooking practices, the content of reducing sugars and asparagine did not explain the acrylamide levels. However, the chromatic parameter a* of the fried potato was a powerful tool to classify the samples according to the acrylamide benchmark level, regardless of the agronomical characteristics of the potato or the consumer practices. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Stoker, C. R.; Clarke, J. D. A.; Direito, S.; Foing, B.
2011-01-01
The DOMEX program is a NASA-MMAMA funded project featuring simulations of human crews on Mars focused on science activities that involve collecting samples from the subsurface, using both manual methods and robotic equipment, and analyzing them in the field and post-mission. A crew simulating a human mission to Mars performed activities focused on subsurface science for 2 weeks in November 2009 at Mars Desert Research Station near Hanksville, Utah -- an important chemical and morphological Mars analog site. Activities performed included 1) survey of the area to identify geologic provinces; 2) obtaining soil and rock samples from each province and characterizing their mineralogy, chemistry, and biology; 3) site selection and reconnaissance for a future drilling mission; 4) deployment and testing of the Mars Underground Mole, a percussive robotic soil sampling device; and 5) recording and analyzing how crew time was used to accomplish these tasks. This paper summarizes results from analysis of soil cores
Solar Sail Application to Comet Nucleus Sample Return
NASA Technical Reports Server (NTRS)
Taylor, Travis S.; Moton, Tryshanda T.; Robinson, Don; Anding, R. Charles; Matloff, Gregory L.; Garbe, Gregory; Montgomery, Edward
2003-01-01
Many comets have perihelions at distances within 1.0 Astronomical Unit (AU) from the Sun. These comets are typically inclined out of the ecliptic. We propose that a solar sail spacecraft could be used to increase the inclination of its orbit to match that of these 1.0 AU comets. The solar sail spacecraft would match the orbit velocity for a short period of time, which would be long enough for a container to be injected into the comet's nucleus. The container would be extended on a long, durable tether so that the solar sail would not be required to enter the potentially degrading environment of the comet's atmosphere. Once the container has been filled with sample material, the tether is retracted. The solar sail would then lower its inclination and fly back to Earth for the sample return. In this paper, we describe the selection of cometary targets, the mission design, and the solar sailcraft design suitable for sail-comet rendezvous, as well as possible rendezvous scenarios.
Wilson, Tamara; Sleeter, Benjamin M.; Cameron, D. Richard
2017-01-01
With growing demand and highly variable inter-annual water supplies, California’s water use future is fraught with uncertainty. Climate change projections, anticipated population growth, and continued agricultural intensification, will likely stress existing water supplies in coming decades. Using a state-and-transition simulation modeling approach, we examine a broad suite of spatially explicit future land use scenarios and their associated county-level water use demand out to 2062. We examined a range of potential water demand futures sampled from a 20-year record of historical (1992–2012) data to develop a suite of potential future land change scenarios, including low/high change scenarios for urbanization and agriculture as well as “lowest of the low” and “highest of the high” anthropogenic use. Future water demand decreased 8.3 billion cubic meters (Bm3) in the lowest of the low scenario and decreased 0.8 Bm3 in the low agriculture scenario. The greatest increased water demand was projected for the highest of the high land use scenario (+9.4 Bm3), high agricultural expansion (+4.6 Bm3), and high urbanization (+2.1 Bm3) scenarios. Overall, these scenarios show agricultural land use decisions will likely drive future demand more than increasing municipal and industrial uses, yet improved efficiencies across all sectors could lead to potential water use savings. Results provide water managers with information on diverging land use and water use futures, based on historical, observed land change trends and water use histories.
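The idea of sampling water-use futures from a historical record can be sketched as a small Monte Carlo exercise; all rates and areas below are invented placeholders and are unrelated to the California figures reported above:

import numpy as np

rng = np.random.default_rng(4)

# Twenty "historical" years of per-hectare use rates (m3/ha/yr), drawn here at
# random purely for illustration.
historical_ag_rate = rng.normal(9000, 900, 20)
historical_urban_rate = rng.normal(1500, 150, 20)

def project_demand(ag_ha, urban_ha, n_draws=1000):
    """Resample historical rates and apply them to projected land-use areas,
    returning demand in billion cubic meters."""
    ag = rng.choice(historical_ag_rate, n_draws) * ag_ha
    urban = rng.choice(historical_urban_rate, n_draws) * urban_ha
    return (ag + urban) / 1e9

low = project_demand(ag_ha=3.0e6, urban_ha=1.0e6)
high = project_demand(ag_ha=3.6e6, urban_ha=1.3e6)
print(round(low.mean(), 1), round(high.mean(), 1))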
Sleeter, Benjamin M.; Cameron, D. Richard
2017-01-01
With growing demand and highly variable inter-annual water supplies, California’s water use future is fraught with uncertainty. Climate change projections, anticipated population growth, and continued agricultural intensification, will likely stress existing water supplies in coming decades. Using a state-and-transition simulation modeling approach, we examine a broad suite of spatially explicit future land use scenarios and their associated county-level water use demand out to 2062. We examined a range of potential water demand futures sampled from a 20-year record of historical (1992–2012) data to develop a suite of potential future land change scenarios, including low/high change scenarios for urbanization and agriculture as well as “lowest of the low” and “highest of the high” anthropogenic use. Future water demand decreased 8.3 billion cubic meters (Bm3) in the lowest of the low scenario and decreased 0.8 Bm3 in the low agriculture scenario. The greatest increased water demand was projected for the highest of the high land use scenario (+9.4 Bm3), high agricultural expansion (+4.6 Bm3), and high urbanization (+2.1 Bm3) scenarios. Overall, these scenarios show agricultural land use decisions will likely drive future demand more than increasing municipal and industrial uses, yet improved efficiencies across all sectors could lead to potential water use savings. Results provide water managers with information on diverging land use and water use futures, based on historical, observed land change trends and water use histories. PMID:29088254
NASA Technical Reports Server (NTRS)
Goldberg, Benjamin E.
1986-01-01
An initial investigation exploring the effects of gravity on the crystallization of macromolecular systems has been completed. Monodisperse poly(ethylene) of molecular weight 48,000 was melted and recrystallized under three gravitational conditions: 0, 1, and 2 g. No correlations with gravitational environment were noted for the 20 °C/min melt, as monitored with a photodensitometer system. However, post-crystallization testing of the recrystallized samples revealed thicker samples with more regions of large, well-defined spherulites for the zero-gravity crystallization environment. The results of the post-crystallization analysis have been reviewed and related to nucleation concerns. Finally, birefringence data consistent with, but not explained by, the nucleation scenarios are detailed, and further investigations are proposed.
Shocked materials from the Dutch Peak diamictite, Utah
NASA Technical Reports Server (NTRS)
Hoerz, F.; Bunch, T. E.; Oberbeck, V. R.
1994-01-01
Evidence of shock metamorphism in the Dutch Peak diamictite in the Sheeprock Mountains, Utah, is reported. The Dutch Peak diamictite is of Proterozoic age and is a minor part of the Dutch Peak formation. A shocked sample, specimen A250, was collected during a brief visit of the Harker Canyon area of the Sheeprock Mountains. This sample consists of equant, anhedral grains of quartz, K-feldspar, and plagioclase. The crystallographic orientation of 244 lamellae systems in 106 grains was measured. It is presently difficult to evaluate the significance of this single specimen. Without additional and substantial field work, and petrographic characterization of this formation, a number of scenarios for the presence of a shocked clast and the emplacement of the entire formation remain viable.
Migliori, G. B.; Ambrosetti, M.; Besozzi, G.; Farris, B.; Nutini, S.; Saini, L.; Casali, L.; Nardini, S.; Bugiani, M.; Neri, M.; Raviglione, M. C.
1999-01-01
Although in developing countries the treatment of tuberculosis (TB) cases is among the most cost-effective health interventions, few studies have evaluated the cost-effectiveness of TB control in low-prevalence countries. The aim of the present study was to carry out an economic analysis in Italy that takes into account both the perspective of the resource-allocating authority (i.e., the Ministry of Health) and the broader social perspective. It includes a cost description based on current outcomes applied to a representative sample of TB patients nationwide (admission and directly observed treatment (DOT) during the initial intensive phase of treatment), and a cost-comparison analysis of two alternative programmes: the current policy based on available data (scenario 1) and a hypothetical policy oriented more towards outpatient care (scenario 2). Both scenarios included the option of including or not including DOT outside hospital admission, and incentives, and were compared in terms of cost per case treated successfully. Indirect costs (such as loss of productivity) were included in the broader social perspective. The study was designed as a prospective monitoring activity based on the supervised collection of forms from a representative sample of Italian TB units. Individual data were collected and analysed to obtain a complete economic profile of the patients enrolled and to evaluate the effectiveness of the intervention. A separate analysis was done for each scenario to determine the end-point at different levels of cure rate (50-90%). The mean length of treatment was 6.6 months (patients were hospitalized during the intensive phase; length of stay was significantly higher in smear-positive patients and in human immunodeficiency virus (HIV) seropositive patients). Roughly six direct smear and culture examinations were performed during hospital admission and three during ambulatory treatment. The cost of a single bed day was US$186.90, whereas that of a single outpatient visit ranged, according to the different options, from US$2.50 to US$11. Scenario 2 was consistently less costly than scenario 1. The cost per case cured for smear-positive cases was US$16,703 in scenario 1 and US$5946 in scenario 2. The difference in cost between the cheapest option (no DOT) and the more expensive option (DOT, additional staff, incentives) ranged from US$1407 (scenario 1, smear-negative and extrapulmonary cases) to US$1814 (scenario 2, smear-positive cases). The additional cost to society, including indirect costs, ranged from US$1800 to US$4200. The possible savings at the national level were on the order of US$50 million per year. In conclusion, the cost-comparison analysis showed that a relatively minor change in policy can result in significant savings and that the adoption of DOT would represent a relatively modest economic burden, although the real gain in effectiveness resulting from DOT in Italy requires further evaluation. PMID:10427931
Cassim, Naseem; Coetzee, Lindi Marie; Schnippel, Kathryn; Glencross, Deborah Kim
2017-01-01
During 2016, the National Health Laboratory Service (NHLS) introduced laboratory-based reflexed Cryptococcal antigen (CrAg) screening to detect early Cryptococcal disease in immunosuppressed HIV+ patients with a confirmed CD4 count of 100 cells/μl or less. The aim of this study was to assess the cost-per-result of a national screening program across different tiers of laboratory service, with variable daily CrAg test volumes. The impact of potential ART treatment guideline and treatment target changes on CrAg volumes, platform choice, and laboratory workflow is considered. CD4 data (with counts <= 100 cells/μl) from the fiscal year 2015/16 were extracted from the NHLS Corporate Data Warehouse and used to project anticipated daily CrAg testing volumes, with appropriately matched CrAg testing platforms allocated at each of 52 NHLS CD4 laboratories. A cost-per-result was calculated for four scenarios, including the existing service status quo (Scenario-I) and three other settings (Scenarios II-IV) based on information from recent antiretroviral (ART) guidelines, District Health Information System (DHIS) data, and UNAIDS 90/90/90 HIV/AIDS treatment targets. Scenario-II forecast CD4 testing offered only to new ART initiates recorded at DHIS. Scenario-III projected all patients notified as HIV+ but not yet on ART (recorded at DHIS), and Scenario-IV forecast CrAg screening in 90% of estimated HIV+ patients across South Africa (also DHIS). Stata was used to assess daily CrAg volumes at the 5th, 10th, 25th, 50th, 75th, 90th and 95th percentiles across the 52 CD4 laboratories. Daily volumes were used to determine technical effort/operator staff costs (% full-time equivalent) and cost-per-result for all scenarios. Daily volumes ranged between 3 and 64 samples for Scenario-I at the 5th and 95th percentiles. Similarly, daily volume ranges of 1-12, 2-45 and 5-100 CrAg-directed samples were noted for Scenarios II, III and IV respectively. A cut-off of 30 CrAg tests per day defined use of either the LFA or EIA platform. LFA cost-per-result ranged from $8.24 to $5.44 and EIA cost-per-result between $5.58 and $4.88 across the range of test volumes. The technical effort across scenarios ranged from 3.2-27.6% depending on test volumes and platform used. The study reports the impact of programmatic testing requirements on varying CrAg test volumes, which subsequently influenced choice of testing platform, laboratory workflow, and cost-per-result. A novel percentiles approach is described that enables an overview of the cost-per-result across a national program. This approach facilitates cross-subsidisation of more expensive lower-volume sites with cost-efficient, more centralized higher-volume laboratories, mitigating against the risk of costing tests at a single site.
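A small sketch of the percentiles approach and the volume-driven platform choice; the simulated laboratory volumes and the cost interpolation are placeholders (only the 30-tests/day cut-off and the quoted cost ranges come from the study):

import numpy as np

rng = np.random.default_rng(5)

# Illustrative daily CrAg-eligible volumes for 52 laboratories (synthetic,
# skewed; the study derives these from CD4 counts <= 100 cells/ul).
daily_volumes = rng.lognormal(mean=2.5, sigma=1.0, size=52)
percentiles = [5, 10, 25, 50, 75, 90, 95]
print(dict(zip(percentiles, np.round(np.percentile(daily_volumes, percentiles), 1))))

def platform_and_cost(volume_per_day):
    """Pick LFA below the reported 30 tests/day cut-off, EIA otherwise; the
    linear interpolation between the quoted cost extremes is an assumption."""
    if volume_per_day < 30:
        return "LFA", round(8.24 - (8.24 - 5.44) * volume_per_day / 30.0, 2)
    return "EIA", round(5.58 - (5.58 - 4.88) * min(volume_per_day / 100.0, 1.0), 2)

for v in (3, 12, 45, 100):
    print(v, platform_and_cost(v))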
NASA Astrophysics Data System (ADS)
Lin, Y.-H.; Knipping, E. M.; Edgerton, E. S.; Shaw, S. L.; Surratt, J. D.
2013-08-01
Filter-based PM2.5 samples were chemically analyzed to investigate secondary organic aerosol (SOA) formation from isoprene in a rural atmosphere of the southeastern US influenced by both anthropogenic sulfur dioxide (SO2) and ammonia (NH3) emissions. Daytime PM2.5 samples were collected during summer 2010 using conditional sampling approaches based on pre-defined high and low SO2 or NH3 thresholds. Known molecular-level tracers for isoprene SOA formation, including 2-methylglyceric acid, 3-methyltetrahydrofuran-3,4-diols, 2-methyltetrols, C5-alkene triols, dimers, and organosulfate derivatives, were identified and quantified by gas chromatography coupled to electron ionization mass spectrometry (GC/EI-MS) and ultra performance liquid chromatography coupled to electrospray ionization high-resolution quadrupole time-of-flight mass spectrometry (UPLC/ESI-HR-Q-TOFMS). Mass concentrations of six isoprene low-NOx SOA tracers contributed 12-19% of total organic matter (OM) in PM2.5 samples collected during the sampling period, indicating the importance of the hydroxyl radical (OH)-initiated oxidation (so-called photooxidation) of isoprene under low-NOx conditions, which leads to SOA formation through reactive uptake of gaseous isoprene epoxydiols (IEPOX) in this region. The contribution of the IEPOX-derived SOA tracers to total organic matter was enhanced by 1.4% (p = 0.012) under high-SO2 sampling scenarios, although only weak associations between aerosol acidity and mass of IEPOX SOA tracers were observed. This suggests that IEPOX-derived SOA formation might be modulated by other factors simultaneously, rather than by aerosol acidity alone. No clear associations between isoprene SOA formation and high or low NH3 conditional samples were found. Positive correlations between sulfate aerosol loadings and IEPOX-derived SOA tracers for samples collected under all conditions indicate that sulfate aerosol could be a surrogate for surface accommodation in the uptake of IEPOX onto preexisting aerosols.
The OSIRIS-REx Asteroid Sample Return Mission Operations Design
NASA Technical Reports Server (NTRS)
Gal-Edd, Jonathan S.; Cheuvront, Allan
2015-01-01
OSIRIS-REx is an acronym that captures the scientific objectives: Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer. OSIRIS-REx will thoroughly characterize near-Earth asteroid Bennu (previously known as 101955 (1999 RQ36)). The OSIRIS-REx Asteroid Sample Return Mission delivers its science using five instruments and radio science along with the Touch-And-Go Sample Acquisition Mechanism (TAGSAM). All of the instruments and data analysis techniques have direct heritage from flown planetary missions. The OSIRIS-REx mission employs a methodical, phased approach to ensure success in meeting the mission's science requirements. OSIRIS-REx launches in September 2016, with a backup launch period occurring one year later. Sampling occurs in 2019. The departure burn from Bennu occurs in March 2021. On September 24, 2023, the Sample Return Capsule (SRC) lands at the Utah Test and Training Range (UTTR). Stardust heritage procedures are followed to transport the SRC to Johnson Space Center, where the samples are removed and delivered to the OSIRIS-REx curation facility. After a six-month preliminary examination period, the mission will produce a catalog of the returned sample, allowing the worldwide community to request samples for detailed analysis. Traveling to and returning a sample from an asteroid that has not been explored before requires unique operations considerations. The Design Reference Mission (DRM) ties together spacecraft, instrument, and operations scenarios. Asteroid Touch and Go (TAG) has various options, varying from ground-only to fully automated (natural feature tracking). Spacecraft constraints such as thermal limits and high-gain antenna pointing impact the timeline. The mission is sensitive to navigation errors, so a late command update has been implemented. The project implemented lessons learned from other "small body" missions. The key lesson learned was to 'expect the unexpected' and implement planning tools early in the lifecycle. This paper summarizes the ground and spacecraft design as presented at the OSIRIS-REx Critical Design Review (CDR) held in April 2014.
Kim, Sang M; Brannan, Kevin M; Zeckoski, Rebecca W; Benham, Brian L
2014-01-01
The objective of this study was to develop bacteria total maximum daily loads (TMDLs) for the Hardware River watershed in the Commonwealth of Virginia, USA. The TMDL program is an integrated watershed management approach required by the Clean Water Act. The TMDLs were developed to meet Virginia's water quality standard for bacteria at the time, which stated that the calendar-month geometric mean concentration of Escherichia coli should not exceed 126 cfu/100 mL, and that no single sample should exceed a concentration of 235 cfu/100 mL. The bacteria impairment TMDLs were developed using the Hydrological Simulation Program-FORTRAN (HSPF). The hydrology and water quality components of HSPF were calibrated and validated using data from the Hardware River watershed to ensure that the model adequately simulated runoff and bacteria concentrations. The calibrated and validated HSPF model was used to estimate the contributions from the various bacteria sources in the Hardware River watershed to the in-stream concentration. Bacteria loads were estimated through an extensive source characterization process. Simulation results for existing conditions indicated that the majority of the bacteria came from livestock and wildlife direct deposits and pervious lands. Different source reduction scenarios were evaluated to identify scenarios that meet both the geometric mean and single sample maximum E. coli criteria with zero violations. The resulting scenarios required extreme and impractical reductions from livestock and wildlife sources. Results from studies similar to this across Virginia partially contributed to a reconsideration of the standard's applicability to TMDL development.
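The two-part bacteria criterion described above translates directly into a simple check that can be applied to simulated or observed concentration series; the short series below is invented for illustration:

import numpy as np

def bacteria_criteria_violations(ecoli_cfu, month_index):
    """Count violations of the criterion quoted in the abstract: calendar-month
    geometric mean of E. coli <= 126 cfu/100 mL and no single sample above
    235 cfu/100 mL."""
    conc = np.asarray(ecoli_cfu, float)
    months = np.asarray(month_index)
    geo_mean_violations = sum(
        np.exp(np.log(conc[months == m]).mean()) > 126.0 for m in np.unique(months))
    single_sample_violations = int((conc > 235.0).sum())
    return geo_mean_violations, single_sample_violations

conc = [90, 150, 200, 110, 400, 120, 95, 80, 250, 130]
months = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
print(bacteria_criteria_violations(conc, months))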
Kærsgaard, S; Meluken, I; Kessing, L V; Vinberg, M; Miskowiak, K W
2018-05-01
Abnormalities in affective cognition are putative endophenotypes for bipolar and unipolar disorders, but it is unclear whether some abnormalities are disorder-specific. We therefore investigated affective cognition in monozygotic twins at familial risk of bipolar disorder relative to those at risk of unipolar disorder and to low-risk twins. Seventy monozygotic twins with a co-twin history of bipolar disorder (n = 11), of unipolar disorder (n = 38), or without a co-twin history of affective disorder (n = 21) were included. Variables of interest were recognition of and vigilance to emotional faces, emotional reactivity and regulation in social scenarios, and non-affective cognition. Twins at familial risk of bipolar disorder showed increased recognition of low to moderate intensities of happy facial expressions relative to both unipolar disorder high-risk twins and low-risk twins. Bipolar disorder high-risk twins also displayed supraliminal attentional avoidance of happy faces compared with unipolar disorder high-risk twins, and greater emotional reactivity in positive and neutral social scenarios and less reactivity in negative social scenarios than low-risk twins. In contrast with our hypothesis, there was no negative bias in unipolar disorder high-risk twins. There were no differences between the groups in demographic characteristics or non-affective cognition. The modest sample size limited the statistical power of the study. Increased sensitivity and reactivity to positive social stimuli may be a neurocognitive endophenotype that is specific to bipolar disorder. If replicated in larger samples, this 'positive endophenotype' could potentially aid future diagnostic differentiation between unipolar and bipolar disorder. Copyright © 2018 Elsevier B.V. All rights reserved.
Higher moments of net-proton multiplicity distributions in a heavy-ion event pile-up scenario
NASA Astrophysics Data System (ADS)
Garg, P.; Mishra, D. K.
2017-10-01
High-luminosity modern accelerators, like the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) and the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN), inherently have event pile-up scenarios which contribute significantly to physics events as a background. While state-of-the-art tracking algorithms and detector concepts take care of these event pile-up scenarios, several offline analytical techniques are used to remove such events from the physics analysis. It is still difficult to identify the remaining pile-up events in an event sample for physics analysis. Since the fraction of these events is significantly small, it may not be as serious an issue for other analyses as it is for an event-by-event analysis; particularly when the observables are characteristics of the multiplicity distribution itself, one needs to be very careful. In the present work, we demonstrate how a small fraction of residual pile-up events can change the moments, and their ratios, of an event-by-event net-proton multiplicity distribution, which are sensitive to the dynamical fluctuations due to the QCD critical point. For this study, we assume that the individual event-by-event proton and antiproton multiplicity distributions follow Poisson, negative binomial, or binomial distributions. We observe a significant effect in the cumulants and their ratios of net-proton multiplicity distributions due to pile-up events, particularly at lower energies. It might be crucial to estimate the fraction of pile-up events in the data sample while interpreting the experimental observables for the critical point.
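A minimal Monte Carlo sketch of the pile-up effect, assuming independent Poisson proton and antiproton multiplicities and modelling a residual pile-up event as the sum of two ordinary events; the means and pile-up fractions are illustrative, and the negative binomial and binomial cases of the paper are not reproduced here:

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def net_proton_sample(n_events, mu_p=10.0, mu_pbar=3.0, pileup_fraction=0.0):
    """Event-by-event net-proton number; a fraction of events is replaced by the
    sum of two events to mimic residual pile-up."""
    net = rng.poisson(mu_p, n_events) - rng.poisson(mu_pbar, n_events)
    n_pile = int(pileup_fraction * n_events)
    if n_pile:
        net[:n_pile] += rng.poisson(mu_p, n_pile) - rng.poisson(mu_pbar, n_pile)
    return net

def cumulant_ratios(net):
    c1, c2 = net.mean(), net.var()
    c3 = stats.moment(net, moment=3)
    c4 = stats.moment(net, moment=4) - 3.0 * c2 ** 2
    return c2 / c1, c3 / c2, c4 / c2   # C2/C1, C3/C2 (S*sigma), C4/C2 (kappa*sigma^2)

for f in (0.0, 0.01, 0.05):
    print(f, np.round(cumulant_ratios(net_proton_sample(200000, pileup_fraction=f)), 3))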
Perceiving Children's Behavior and Reaching Limits in a Risk Environment
ERIC Educational Resources Information Center
Cordovil, Rita; Santos, Carlos; Barreiros, Joao
2012-01-01
The purpose of this study was to investigate the accuracy of parents' perception of children's reaching limits in a risk scenario. A sample of 68 parents of 1- to 4-year-olds were asked to make a prior estimate of their children's behavior and action limits in a task that involved retrieving a toy out of the water. The action modes used for…
Nebular Phase Spectra of SNe Ia from the CSP2 Sample
NASA Astrophysics Data System (ADS)
Diamond, Tiara; Carnegie Supernova Project II
2018-06-01
We present a comparison of late-time spectra in the near-infrared for some of the Type Ia supernovae from the Carnegie Supernova Project II. Particular attention is paid to the shape and evolution of several emission features, including the [Fe II] line at 1.6440 μm. We put our findings in context of several explosion scenarios and progenitor systems.
Developing an Adaptability Training Strategy and Policy for the DoD
2008-10-01
might include monitoring of trainees using electroencephalogram (EEG) technology to gain neurofeedback during scenario performance. In order to...group & adequate sample; pre and post iii. Possibly including EEG monitoring (and even neurofeedback) 4. Should seek to determine general...Dr. John Cowan has developed a system called the Peak Achievement Trainer (PAT) EEG, which traces electrical activity in the brain and provides
ERIC Educational Resources Information Center
Pfeiffer, Vanessa D. I.; Scheiter, Katharina; Gemballa, Sven
2012-01-01
This study investigated the effectiveness of three different instructional materials for learning how to identify fish at the species level in a blended classroom and out-of-classroom scenario. A sample of 195 first-year students of biology or geoecology at the University of Tuebingen participated in a course on identification of European…
ERIC Educational Resources Information Center
Estrada, Fernando; Rigali-Oiler, Marybeth
2016-01-01
Using a scenario-based analogue experiment embedded within an online survey, 174 masters-level counseling students located at a university on the Southwest Coast of the United States provided data to test the notion that the teaching alliance--a framework for enhancing the quality of the student-instructor relationship--is (a) important in…
NASA Astrophysics Data System (ADS)
Brouwer, Derk H.; van Duuren-Stuurman, Birgit; Berges, Markus; Bard, Delphine; Jankowska, Elzbieta; Moehlmann, Carsten; Pelzer, Johannes; Mark, Dave
2013-11-01
Manufactured nano-objects, agglomerates, and aggregates (NOAA) may have adverse effects on human health, but little is known about occupational risks since actual estimates of exposure are lacking. In a large-scale workplace air-monitoring campaign, 19 enterprises were visited and 120 potential exposure scenarios were measured. A multi-metric exposure assessment approach was followed and a decision logic was developed to afford analysis of all results in concert. The overall evaluation was classified by categories of likelihood of exposure. At the task level, about 53% of scenarios showed increased particle number or surface area concentration compared to the "background" level, whereas 72% of the TEM samples revealed an indication that NOAA were present in the workplace. For 54 out of the 120 task-based exposure scenarios, an overall evaluation could be made based on all parameters of the decision logic. For only 1 exposure scenario (approximately 2%) was the highest level of potential likelihood assigned, whereas in total, in 56% of the exposure scenarios, the overall evaluation revealed the lowest level of likelihood. However, for the remaining 42%, exposure to NOAA could not be excluded.
Turkish and Japanese Mycobacterium tuberculosis sublineages share a remote common ancestor.
Refrégier, Guislaine; Abadia, Edgar; Matsumoto, Tomoshige; Ano, Hiromi; Takashima, Tetsuya; Tsuyuguchi, Izuo; Aktas, Elif; Cömert, Füsun; Gomgnimbou, Michel Kireopori; Panaiotov, Stefan; Phelan, Jody; Coll, Francesc; McNerney, Ruth; Pain, Arnab; Clark, Taane G; Sola, Christophe
2016-11-01
Two geographically distant M. tuberculosis sublineages, Tur from Turkey and T3-Osaka from Japan, exhibit partially identical genotypic signatures (identical 12-locus MIRU-VNTR profiles, distinct spoligotyping patterns). We investigated the characteristics and potential genetic relatedness of the T3-Osaka and Tur sublineages, first using MIRU-VNTR locus analysis on 21 and 25 samples of each sublineage respectively, and second comparing whole genome sequences of 8 new samples to public data from 45 samples covering human tuberculosis diversity. We then tried to date their most recent common ancestor (MRCA) using three calibrations of the SNP accumulation rate (long-term = 0.03 SNP/genome/year, derived from a tuberculosis ancestor around 70,000 years old; intermediate = 0.2 SNP/genome/year, derived from a Peruvian mummy; short-term = 0.5 SNP/genome/year). To disentangle these scenarios, we confronted the corresponding divergence times with major events in human history and knowledge of human genetic divergence. We identified relatively high intrasublineage diversity for both T3-Osaka and Tur. We definitively proved their monophyly; the corresponding super-sublineage (referred to as "T3-Osa-Tur") shares a common ancestor with the T3-Ethiopia and Ural sublineages but is only remotely related to other Euro-American sublineages such as X, LAM, Haarlem and S. The evolutionary scenario based on the long-term evolution rate being valid until the T3-Osa-Tur MRCA was not supported by Japanese fossil data. The evolutionary scenario relying on the short-term evolution rate since the T3-Osa-Tur MRCA was contradicted by human history and potential traces of past epidemics. The T3-Osaka and Tur sublineages were found likely to have diverged between 800 and 2000 years ago, potentially at the time of the Mongol Empire. Altogether, this study definitively proves a strong genetic link between Turkish and Japanese tuberculosis. It provides a first hypothesis for calibrating the molecular clock of the TB Euro-American lineage; additional studies are needed to reliably date events corresponding to intermediate depths in the tuberculosis phylogeny. Copyright © 2016 Elsevier B.V. All rights reserved.
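The dependence of the inferred MRCA age on the chosen calibration can be made concrete with a one-line calculation, t ≈ d / (2 × rate), since both lineages accumulate SNPs after the split; the pairwise SNP distance used below is hypothetical, and only the three rates are taken from the abstract:

# SNP accumulation rates (SNP/genome/year) quoted in the abstract.
rates = {"long-term": 0.03, "intermediate": 0.2, "short-term": 0.5}
d_snps = 400  # hypothetical pairwise SNP distance between a Tur and a T3-Osaka genome

for name, rate in rates.items():
    print(name, round(d_snps / (2.0 * rate)), "years to the MRCA")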
Gibson, Elaine; Brazil, Kevin; Coughlin, Michael D; Emerson, Claudia; Fournier, Francois; Schwartz, Lisa; Szala-Meneok, Karen V; Weisbaum, Karen M; Willison, Donald J
2008-11-14
The amount of research utilizing health information has increased dramatically over the last ten years. Many institutions have extensive biobank holdings collected over a number of years for clinical and teaching purposes, but are uncertain as to the proper circumstances in which to permit research uses of these samples. Research Ethics Boards (REBs) in Canada and elsewhere in the world are grappling with these issues, but lack clear guidance regarding their role in the creation of and access to registries and biobanks. Chairs of 34 REBs and/or REB administrators affiliated with Faculties of Medicine in Canadian universities were interviewed. Interviews consisted of structured questions dealing with diabetes-related scenarios, with open-ended responses and probing for rationales. The two scenarios involved the development of a diabetes registry using clinical encounter data across several physicians' practices, and the addition of biological samples to the registry to create a biobank. There was a wide range of responses given for the questions raised in the scenarios, indicating a lack of clarity about the role of REBs in registries and biobanks. With respect to the creation of a registry, a minority of sites felt that consent was not required for the information to be entered into the registry. Whether patient consent was required for information to be entered into the registry and the duration for which the consent would be operative differed across sites. With respect to the creation of a biobank linked to the registry, a majority of sites viewed biobank information as qualitatively different from other types of personal health information. All respondents agreed that patient consent was needed for blood samples to be placed in the biobank, but the duration of consent again varied. Participants were more attuned to issues surrounding biobanks as compared to registries and demonstrated a higher level of concern regarding biobanks. As registries and biobanks expand, there is a need for critical analysis of suitable roles for REBs and subsequent guidance on these topics. The authors conclude by recommending REB participation in the creation of registries and biobanks and the eventual drafting of comprehensive legislation.
Hyde, Embriette R.; Haarmann, Daniel P.; Lynne, Aaron M.; Bucheli, Sibyl R.; Petrosino, Joseph F.
2013-01-01
Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition. PMID:24204941
Boyacı, Ezel; Bojko, Barbara; Reyes-Garcés, Nathaly; Poole, Justen J; Gómez-Ríos, Germán Augusto; Teixeira, Alexandre; Nicol, Beate; Pawliszyn, Janusz
2018-01-18
In vitro high-throughput non-depletive quantitation of chemicals in biofluids is of growing interest in many areas. Some of the challenges facing researchers include the limited volume of biofluids, rapid and high-throughput sampling requirements, and the lack of reliable methods. Coupled to the above, growing interest in the monitoring of kinetics and dynamics of miniaturized biosystems has spurred the demand for development of novel and revolutionary methodologies for analysis of biofluids. The applicability of solid-phase microextraction (SPME) is investigated as a potential technology to fulfill the aforementioned requirements. As analytes with sufficient diversity in their physicochemical features, nicotine, N,N-Diethyl-meta-toluamide, and diclofenac were selected as test compounds for the study. The objective was to develop methodologies that would allow repeated non-depletive sampling from 96-well plates, using 100 µL of sample. Initially, thin film-SPME was investigated. Results revealed substantial depletion and consequent disruption in the system. Therefore, new ultra-thin coated fibers were developed. The applicability of this device to the described sampling scenario was tested by determining the protein binding of the analytes. Results showed good agreement with rapid equilibrium dialysis. The presented method allows high-throughput analysis using small volumes, enabling fast reliable free and total concentration determinations without disruption of system equilibrium.
Hyde, Embriette R; Haarmann, Daniel P; Lynne, Aaron M; Bucheli, Sibyl R; Petrosino, Joseph F
2013-01-01
Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition.
Survival of Salmonella and Staphylococcus aureus in Mexican red salsa in a food service setting.
Franco, Wendy; Hsu, Wei-Yea; Simonne, Amarat H
2010-06-01
Mexican red salsa is one of the most common side dishes in Mexican cuisine. According to data on foodborne illnesses collected by the Centers for Disease Control and Prevention, salsa was associated with 70 foodborne illness outbreaks between 1990 and 2006. Salsa ingredients such as tomatoes, cilantro, and onions often have been implicated in foodborne illness outbreaks. Mexican-style restaurants commonly prepare a large batch of red salsa, store it at refrigeration temperatures, and then serve it at room temperature. Salmonella is one of the top etiologies in foodborne illness outbreaks associated with salsa, and our preliminary studies revealed the consistent presence of Staphylococcus aureus in restaurant salsa. In the present study, we evaluated the survival of Salmonella Enteritidis and S. aureus inoculated into restaurant-made salsa samples stored at ambient (20 degrees C) and refrigeration (4 degrees C) temperatures. These test temperature conditions represent best-case and worst-case scenarios in restaurant operations. Salmonella survived in all samples stored at room temperature, but S. aureus populations significantly decreased after 24 h of storage at room temperature. No enterotoxin was detected in samples inoculated with S. aureus at 6.0 log CFU/g. Both microorganisms survived longer in refrigerated samples than in samples stored at room temperature. Overall, both Salmonella and S. aureus survived a sufficient length of time in salsa to pose a food safety risk.
Estimating the Size of a Large Network and its Communities from a Random Sample
Chen, Lin; Karbasi, Amin; Crawford, Forrest W.
2017-01-01
Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios. PMID:28867924
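The estimation problem described here can be made concrete with a toy version of the sampling design: generate a stochastic block model graph, sample a vertex subset W, and observe the induced subgraph together with each sampled vertex's total degree. The snippet below is not the PULSE algorithm; it only illustrates the observed data and a naive moment-style estimator (the ratio of within-sample degree to total degree estimates the sampling fraction), with all graph parameters chosen arbitrarily.

```python
# Toy illustration of the sampling design (not the PULSE algorithm):
# an SBM graph is subsampled, and total degrees of sampled vertices are observed.
import numpy as np

rng = np.random.default_rng(0)

def sbm_adjacency(block_sizes, p_in=0.05, p_out=0.005):
    """Symmetric adjacency matrix for a simple stochastic block model."""
    n = sum(block_sizes)
    labels = np.repeat(np.arange(len(block_sizes)), block_sizes)
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = rng.random((n, n)) < probs
    adj = np.triu(upper, k=1)
    return (adj | adj.T).astype(int)

adj = sbm_adjacency([600, 300, 100])
n_true = adj.shape[0]

# Sample a uniform random subset W and record the observed quantities.
sample = rng.choice(n_true, size=120, replace=False)
total_degree = adj[sample].sum(axis=1)                    # observed for each sampled vertex
within_degree = adj[np.ix_(sample, sample)].sum(axis=1)   # degree inside the induced subgraph

# Naive estimator: the within/total degree ratio estimates the sampling fraction |W|/|V|.
positive = total_degree > 0
frac_hat = (within_degree[positive] / total_degree[positive]).mean()
print(f"true N = {n_true}, naive estimate = {len(sample) / frac_hat:.0f}")
```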
Estimating the Size of a Large Network and its Communities from a Random Sample.
Chen, Lin; Karbasi, Amin; Crawford, Forrest W
2016-01-01
Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios.
Sklyar, Oleg; Träuble, Markus; Zhao, Chuan; Wittstock, Gunther
2006-08-17
The BEM algorithm developed earlier for steady-state experiments in the scanning electrochemical microscopy (SECM) feedback mode has been expanded to allow for the treatment of more than one independently diffusing species. This allows the treatment of substrate-generation/tip-collection SECM experiments. The simulations revealed the interrelation of sample layout, local kinetics, imaging conditions, and the quality of the obtained SECM images. Resolution in the SECM SG/TC images has been evaluated, and it depends on several factors. For most practical situations, the resolution is limited by the diffusion profiles of the sample. When a dissolved compound is converted at the sample (e.g., oxygen reduction or enzymatic reaction at the sample), the working distance should be significantly larger than in SECM feedback experiments (ca. 3 r_T for RG = 5) in order to avoid diffusional shielding of the active regions on the sample by the UME body. The resolution ability also depends on the kinetics of the active regions. The best resolution can be expected if all the active regions cause the same flux. In one simulated example, which might mimic a possible scenario of a low-density protein array, considerable compromises in the resolving power were noted when the flux from two neighboring spots differs by more than a factor of 2.
In situ AFM investigation of slow crack propagation mechanisms in a glassy polymer
NASA Astrophysics Data System (ADS)
George, M.; Nziakou, Y.; Goerke, S.; Genix, A.-C.; Bresson, B.; Roux, S.; Delacroix, H.; Halary, J.-L.; Ciccotti, M.
2018-03-01
A novel experimental technique based on in situ AFM monitoring of the mechanisms of damage and the strain fields associated to the slow steady-state propagation of a fracture in glassy polymers is presented. This micron-scale investigation is complemented by optical measurements of the sample deformation up to the millimetric macroscopic scale of the sample in order to assess the proper crack driving conditions. These multi-scale observations provide important insights towards the modeling of the fracture toughness of glassy polymers and its relationship with the macromolecular structure and non-linear rheological properties. This novel technique is first tested on a standard PMMA thermoplastic in order to both evaluate its performance and the richness of this new kind of observations. Although the fracture propagation in PMMA is well known to proceed through crazing in the bulk of the samples, our observations provide a clear description and quantitative evaluation of a change of fracture mechanism towards shear yielding fracture accompanied by local necking close to the free surface of the sample, which can be explained by the local change of stress triaxiality. Moreover, this primary surface necking mechanism is shown to be accompanied by a network of secondary grooves that can be related to surface crazes propagating towards the interior of the sample. This overall scenario is validated by post-mortem fractographic investigations by scanning electron microscopy.
Gomes, Iva; Pereira, Plácido J P; Harms, Sonja; Oliveira, Andréa M; Schneider, Peter M; Brehm, António
2017-11-01
A male West African sample from Guinea-Bissau (West-African coast) was genetically analyzed using 12 X chromosomal short tandem repeats that are grouped into four haplotype groups. Linkage disequilibrium was tested (p≤0.0008) and association was detected for the majority of markers in three out of the four studied haplotype clusters. The sample of 332 unrelated individuals analyzed in this study belonged to several recognized ethnic groups (n=18) which were used to evaluate the genetic variation of Guinea-Bissau's population. Pairwise genetic distances (F ST ) did not reveal significant differences among the majority of groups. An additional 110 samples from other countries also belonging to West Africa were as well compared with the sample of Guinea-Bissau. No significant differences were found between these two groups of West African individuals, supporting the genetic homogeneity of this region on the X chromosome level. The generation of over 100 DNA West African sequences provided new insights into the repeat sequence structure of some of the present X-STRs. Parameters for forensic evaluation were also calculated for each X-STR, supporting the potential application of these markers in typical kinship scenarios. Also, the high power of discrimination values for samples of female and male origin observed in this study, confirms the usefulness of the present X-STRs in identification analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
The Effect of Salts on Electrospray Ionization of Amino Acids in the Negative Mode
NASA Technical Reports Server (NTRS)
Kim, H. I.; Johnson, P. V.; Beegle, L. W.; Kanik, I.
2004-01-01
The continued search for organics on Mars will require the development of simplified procedures for handling and processing of soil or rock core samples prior to analysis by onboard instrumentation. Extraction of certain organic molecules such as amino acids from rock and soil samples using a liquid solvent (H2O) has been shown to be more efficient (by approximately an order of magnitude) than heat extraction methods. As such, liquid extraction (using H2O) of amino acid molecules from rock cores or regolith material is a prime candidate for the required processing. In this scenario, electrospray ionization (ESI) of the liquid extract would be a natural choice for ionization of the analyte prior to interrogation by one of a variety of potential analytical separation techniques (mass spectroscopy, ion mobility spectroscopy, etc.). Aside from the obvious compatibility of ESI and liquid samples, ESI offers simplicity and a soft ionization capability. In order to demonstrate that liquid extraction and ESI can work as part of an in situ instrument on Mars, we must better understand and quantify the effect salts have on the ESI process. In the current work, we have endeavored to investigate the feasibility and limitations of negative mode ESI of Martian surface samples in the context of sample salt content using ion mobility spectroscopy (IMS).
Oliver, David M; Porter, Kenneth D H; Heathwaite, A Louise; Zhang, Ting; Quilliam, Richard S
2015-07-01
Understanding the role of different rainfall scenarios on faecal indicator organism (FIO) dynamics under variable field conditions is important to strengthen the evidence base on which regulators and land managers can base informed decisions regarding diffuse microbial pollution risks. We sought to investigate the impact of low intensity summer rainfall on Escherichia coli-discharge (Q) patterns at the headwater catchment scale in order to provide new empirical data on FIO concentrations observed during baseflow conditions. In addition, we evaluated the potential impact of using automatic samplers to collect and store freshwater samples for subsequent microbial analysis during summer storm sampling campaigns. The temporal variation of E. coli concentrations with Q was captured during six events throughout a relatively dry summer in central Scotland. The relationship between E. coli concentration and Q was complex with no discernible patterns of cell emergence with Q that were repeated across all events. On several occasions, an order of magnitude increase in E. coli concentrations occurred even with slight increases in Q, but responses were not consistent and highlighted the challenges of attempting to characterise temporal responses of E. coli concentrations relative to Q during low intensity rainfall. Cross-comparison of E. coli concentrations determined in water samples using simultaneous manual grab and automated sample collection was undertaken with no difference in concentrations observed between methods. However, the duration of sample storage within the autosampler unit was found to be more problematic in terms of impacting on the representativeness of microbial water quality, with unrefrigerated autosamplers exhibiting significantly different concentrations of E. coli relative to initial samples after 12-h storage. The findings from this study provide important empirical contributions to the growing evidence base in the field of catchment microbial dynamics.
Murray, Kris A; Skerratt, Lee F; Garland, Stephen; Kriticos, Darren; McCallum, Hamish
2013-01-01
The pandemic amphibian disease chytridiomycosis often exhibits strong seasonality in both prevalence and disease-associated mortality once it becomes endemic. One hypothesis that could explain this temporal pattern is that simple weather-driven pathogen proliferation (population growth) is a major driver of chytridiomycosis disease dynamics. Despite various elaborations of this hypothesis in the literature for explaining amphibian declines (e.g., the chytrid thermal-optimum hypothesis) it has not been formally tested on infection patterns in the wild. In this study we developed a simple process-based model to simulate the growth of the pathogen Batrachochytrium dendrobatidis (Bd) under varying weather conditions to provide an a priori test of a weather-linked pathogen proliferation hypothesis for endemic chytridiomycosis. We found strong support for several predictions of the proliferation hypothesis when applied to our model species, Litoria pearsoniana, sampled across multiple sites and years: the weather-driven simulations of pathogen growth potential (represented as a growth index in the 30 days prior to sampling; GI30) were positively related to both the prevalence and intensity of Bd infections, which were themselves strongly and positively correlated. In addition, a machine-learning classifier achieved ~72% success in classifying positive qPCR results when utilising just three informative predictors 1) GI30, 2) frog body size and 3) rain on the day of sampling. Hence, while intrinsic traits of the individuals sampled (species, size, sex) and nuisance sampling variables (rainfall when sampling) influenced infection patterns obtained when sampling via qPCR, our results also strongly suggest that weather-linked pathogen proliferation plays a key role in the infection dynamics of endemic chytridiomycosis in our study system. Predictive applications of the model include surveillance design, outbreak preparedness and response, climate change scenario modelling and the interpretation of historical patterns of amphibian decline.
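The central predictor in this abstract, a weather-driven 30-day growth index (GI30), can be illustrated with a short sketch. The thermal performance curve and temperatures below are hypothetical placeholders; the paper's actual process-based Bd growth model is not reproduced here.

```python
# Sketch of a 30-day growth index (GI30) driven by daily temperature.
# The thermal response parameters (t_min, t_opt, t_max) are hypothetical.
import numpy as np

def daily_growth_rate(temp_c, t_opt=21.0, t_min=4.0, t_max=28.0):
    """Hypothetical unimodal thermal response: zero outside (t_min, t_max),
    rising linearly to a peak at t_opt and falling to zero at t_max."""
    if temp_c <= t_min or temp_c >= t_max:
        return 0.0
    if temp_c <= t_opt:
        return (temp_c - t_min) / (t_opt - t_min)
    return (t_max - temp_c) / (t_max - t_opt)

def gi30(daily_temps):
    """Growth index: summed daily growth potential over the 30 days before sampling."""
    return sum(daily_growth_rate(t) for t in daily_temps[-30:])

rng = np.random.default_rng(1)
temps = 18 + 4 * rng.standard_normal(30)   # hypothetical daily mean temperatures (degrees C)
print(f"GI30 = {gi30(temps):.2f}")
```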
NASA Astrophysics Data System (ADS)
Malherbe, C.; Hutchinson, I. B.; Ingley, R.; Boom, A.; Carr, A. S.; Edwards, H.; Vertruyen, B.; Gilbert, B.; Eppe, G.
2017-11-01
In 2020, the ESA ExoMars and NASA Mars 2020 missions will be launched to Mars to search for evidence of past and present life. In preparation for these missions, terrestrial analog samples of rock formations on Mars are studied in detail in order to optimize the scientific information that the analytical instrumentation will return. Desert varnishes are thin mineral coatings found on rocks in arid and semi-arid environments on Earth that are recognized as analog samples. During the formation of desert varnishes (which takes many hundreds of years), organic matter is incorporated, and microorganisms may also play an active role in the formation process. During this study, four complementary analytical techniques proposed for Mars missions (X-ray diffraction [XRD], Raman spectroscopy, elemental analysis, and pyrolysis-gas chromatography-mass spectrometry [Py-GC-MS]) were used to interrogate samples of desert varnish and describe their capacity to sustain life under extreme scenarios. For the first time, both the geochemistry and the organic compounds associated with desert varnish are described with the use of identical sets of samples. XRD and Raman spectroscopy measurements were used to nondestructively interrogate the mineralogy of the samples. In addition, the use of Raman spectroscopy instruments enabled the detection of β-carotene, a highly Raman-active biomarker. The content and the nature of the organic material in the samples were further investigated with elemental analysis and methylated Py-GC-MS, and a bacterial origin was determined to be likely. In the context of planetary exploration, we describe the habitable nature of desert varnish based on the biogeochemical composition of the samples. Possible interference of the geological substrate on the detectability of pyrolysis products is also suggested.
Baldwin, Andrew H; Jensen, Kai; Schönfeldt, Marisa
2014-03-01
Atmospheric warming may influence plant productivity and diversity and induce poleward migration of species, altering communities across latitudes. Complicating the picture is that communities from different continents deviate in evolutionary histories, which may modify responses to warming and migration. We used experimental wetland plant communities grown from seed banks as model systems to determine whether effects of warming on biomass production and species richness are consistent across continents, latitudes, and migration scenarios. We collected soil samples from each of three tidal freshwater marshes in estuaries at three latitudes (north, middle, south) on the Atlantic coasts of Europe and North America. In one experiment, we exposed soil seed bank communities from each latitude and continent to ambient and elevated (+2.8 °C) temperatures in the greenhouse. In a second experiment, soil samples were mixed either within each estuary (limited migration) or among estuaries from different latitudes in each continent (complete migration). Seed bank communities of these migration scenarios were also exposed to ambient and elevated temperatures and contrasted with a no-migration treatment. In the first experiment, warming overall increased biomass (+16%) and decreased species richness (-14%) across latitudes in Europe and North America. Species richness and evenness of south-latitude communities were less affected by warming than those of middle and north latitudes. In the second experiment, warming also stimulated biomass and lowered species richness. In addition, complete migration led to increased species richness (+60% in North America, + 100% in Europe), but this higher diversity did not translate into increased biomass. Species responded idiosyncratically to warming, but Lythrum salicaria and Bidens sp. increased significantly in response to warming in both continents. These results reveal for the first time consistent impacts of warming on biomass and species richness for temperate wetland plant communities across continents, latitudes, and migration scenarios. © 2013 John Wiley & Sons Ltd.
New Lunar Paleointensity Measurements, Ancient Lunar Dynamo or Lunar Dud?
NASA Astrophysics Data System (ADS)
Lawrence, K. P.; Johnson, C. L.; Tauxe, L.; Gee, J. S.
2007-12-01
We analyze published and new paleointensity data from Apollo samples to reexamine the hypothesis of an early (3.9 to 3.6 Ga) lunar dynamo. Our new paleointensity experiments on four Apollo samples use modern absolute and relative measurement techniques. Our samples (60015, 76535, 72215, 62235) have ages ranging from 3.3 to 4.2 Ga, bracketing the putative period of a lunar dynamo. Samples 60015 (anorthosite) and 76535 (troctolite) failed during absolute paleointensity experiments, using the IZZI-modified Thellier-Thellier method. Samples 72215 and 62235 recorded a complicated, multi-component magnetic history that includes a low temperature (< 500°C) component with a high intensity (~90 μT), and a high temperature (> 500°C) component with a low intensity (~2 μT). These two samples were also subjected to a relative paleointensity experiment (sIRM), from which neither provided unambiguous evidence for a thermal origin of the recorded remanent magnetization. We found similar multi-component behavior in several published experiments on lunar samples. We test and present several magnetization scenarios in an attempt to explain the complex magnetization recorded in lunar samples. Specifically, an overprint from exposure to a small magnetic field (i.e. IRM) results in multi-component behavior (similar to lunar sample results), from which we could not recover the correct magnitude of the original TRM. The non-unique interpretation of these multi-component results combined with IRM (isothermal remanent magnetization) contamination during Apollo sample return ( Strangway et al., 1973), indicates that techniques incapable of distinguishing between single- and multi-component records (e.g., sIRM), cannot be reliably used to infer magnetic conditions of the early Moon. In light of these new experiments and a thorough reevaluation of existing paleointensity measurements, we conclude that there is a paucity of lunar samples that demonstrate a primary thermal remanent magnetization. As relative paleointensity measurements for lunar samples are calibrated using absolute paleointensities, the lack of acceptable absolute paleointensity measurements renders the interpretation of relative paleointensity measurements unreliable. Consequently, current lunar paleointensity measurements are inadequate to determine the existence and strength of an early lunar magnetic field. Surface magnetometry measurements and the return of magnetically uncontaminated samples from future missions are much needed for further progress in understanding the characteristics and origin of lunar crustal remanent magnetization.
Threshold-dependent sample sizes for selenium assessment with stream fish tissue
Hitt, Nathaniel P.; Smith, David R.
2015-01-01
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites for estimating mean conditions. However, low sample sizes (<5 fish) did not achieve 80% power to detect near-threshold values (i.e., <1 mg Se/kg) under any scenario we evaluated. This analysis can assist the sampling design and interpretation of Se assessments from fish tissue by accounting for natural variation in stream fish populations.
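The parametric-bootstrap power calculation described above can be sketched in a few lines: draw repeated samples of n fish from a gamma distribution with a given true mean, test against the management threshold, and count rejections. The coefficient of variation used below as the mean-to-variance relationship is a hypothetical stand-in for the empirical relationship fitted in the paper, and the one-sided t-test is one reasonable choice of test, not necessarily the one the authors used.

```python
# Sketch of a parametric-bootstrap power calculation for tissue Se thresholds.
# The mean-variance relationship (constant CV) is an assumption for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def power(true_mean, threshold, n_fish, alpha=0.05, cv=0.4, n_sim=5000):
    """Probability of rejecting H0: mean <= threshold with a one-sided t-test,
    when fish tissue Se follows a gamma with the given mean and CV."""
    var = (cv * true_mean) ** 2          # hypothetical mean-variance relationship
    shape = true_mean ** 2 / var
    scale = var / true_mean
    rejections = 0
    for _ in range(n_sim):
        sample = rng.gamma(shape, scale, size=n_fish)
        t_stat, p_two = stats.ttest_1samp(sample, popmean=threshold)
        p_one = p_two / 2 if t_stat > 0 else 1 - p_two / 2
        rejections += p_one < alpha
    return rejections / n_sim

# e.g., power of an 8-fish sample to detect a true mean 1 mg/kg above a 4 mg/kg threshold
print(power(true_mean=5.0, threshold=4.0, n_fish=8))
```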
Tank 241-AZ-102 Privatization Push Mode Core Sampling and Analysis Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
RASMUSSEN, J.H.
1999-08-02
This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for samples obtained from tank 241-AZ-102. The purpose of this sampling event is to obtain information about the characteristics of the contents of 241-AZ-102 required to satisfy the Data Quality Objectives For TWRS Privatization Phase I: Confirm Tank T Is An Appropriate Feed Source For High-Level Waste Feed Batch X (HLW DQO) (Nguyen 1999a), Data Quality Objectives For TWRS Privatization Phase 1: Confirm Tank T Is An Appropriate Feed Source For Low-Activity Waste Feed Batch X (LAW DQO) (Nguyen 1999b), Low Activity Waste and High Level Waste Feed Data Quality Objectives (L&H DQO) (Patello et al. 1999) and Characterization Data Needs for Development, Design, and Operation of Retrieval Equipment Developed through the Data Quality Objective Process (Equipment DQO) (Bloom 1996). The Tank Characterization Technical Sampling Basis document (Brown et al. 1998) indicates that these issues, except the Equipment DQO, apply to tank 241-AZ-102 for this sampling event. The Equipment DQO is applied for shear strength measurements of the solids segments only. Poppiti (1999) requires additional americium-241 analyses of the sludge segments. Brown et al. (1998) also identify safety screening, regulatory issues and provision of samples to the Privatization Contractor(s) as applicable issues for this tank. However, these issues will not be addressed via this sampling event. Reynolds et al. (1999) concluded that information from previous sampling events was sufficient to satisfy the safety screening requirements for tank 241-AZ-102. Push mode core samples will be obtained from risers 15C and 24A to provide sufficient material for the chemical analyses and tests required to satisfy these data quality objectives. The 222-S Laboratory will extrude core samples, composite the liquids and solids, perform chemical analyses, and provide subsamples to the Process Chemistry Laboratory. The Process Chemistry Laboratory will prepare test plans and perform process tests to evaluate the behavior of the 241-AZ-102 waste undergoing the retrieval and treatment scenarios defined in the applicable DQOs. Requirements for analyses of samples originating in the process tests will be documented in the corresponding test plan.
Observations of the Hot Horizontal Branch Stars in the Metal-Rich Bulge Globular Cluster NGC 6388
NASA Technical Reports Server (NTRS)
Moehler, S.; Sweigart, A. V.
2006-01-01
The metal-rich bulge globular cluster NGC 6388 shows a distinct blue horizontal-branch tail in its colour-magnitude diagram (Rich et al. 1997) and is thus a strong case of the well-known 2nd Parameter Problem. In addition, its horizontal branch (HB) shows an upward tilt toward bluer colours, which cannot be explained by canonical evolutionary models. Several non-canonical scenarios have been proposed to explain these puzzling observations. In order to test the predictions of these scenarios, we have obtained medium resolution spectra to determine the atmospheric parameters of a sample of the blue HB stars in NGC 6388. Using the medium resolution spectra, we determine effective temperatures, surface gravities and helium abundances by fitting the observed Balmer and helium lines with appropriate theoretical stellar spectra. As we know the distance to the cluster, we can verify our results by determining masses for the stars. During the data reduction we took special care to correctly subtract the background, which is dominated by the overlapping spectra of cool stars. The cool blue tail stars in our sample with T_eff approximately 10000 K have lower than canonical surface gravities, suggesting that these stars are, on average, approximately 0.4 mag brighter than canonical HB stars, in agreement with the observed upward slope of the HB in NGC 6388. Moreover, the mean mass of these stars agrees well with theoretical predictions. In contrast, the hot blue tail stars in our sample with T_eff greater than or equal to 12000 K show significantly lower surface gravities than predicted by any scenario which can reproduce the photometric observations. Their masses are also too low by about a factor of 2 compared to theoretical predictions. The physical parameters of the blue HB stars at about 10,000 K support the helium pollution scenario. The low gravities and masses of the hot blue tail stars, however, are probably caused by problems with the data reduction, most likely due to remaining background light in the spectra, which would affect the fainter hot blue tail stars much more strongly than the brighter cool blue tail stars. Our study of the hot blue tail stars in NGC 6388 illustrates the obstacles which are encountered when attempting to determine the atmospheric parameters of hot HB stars in very crowded fields using ground-based observations. We discuss these obstacles and offer possible solutions for future projects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritz, Brad G.; Dirkes, Roger L.; Napier, Bruce A.
The Hanford Reach National Monument (HRNM) was created by presidential proclamation in 2000. It is located along the Columbia River in south central Washington and consists of five distinct units. The McGee Ranch-Riverlands and the North Slope units are addressed in this report. North Slope refers to two of the HRNM units: the Saddle Mountain Unit and the Wahluke Slope Unit. The Saddle Mountain and Wahluke Slope Units are located north of the Columbia River, while the McGee Ranch-Riverlands Unit is located south of the Columbia River and north and west of Washington State Highway 24. To fulfill internal U.S.more » Department of Energy (DOE) requirements prior to any radiological clearance of land, the DOE must evaluate the potential for residual radioactive contamination on this land and determine compliance with the requirements of DOE Order 5400.5. Authorized limits for residual radioactive contamination were developed based on the DOE annual exposure limit to the public (100 mrem) using future potential land-use scenarios. The DOE Office of Environmental Management approved these authorized limits on March 1, 2004. Historical soil monitoring conducted on and around the HRNM indicated soil concentrations of radionuclides were well below the authorized limits (Fritz et al. 2003). However, the historical sampling was done at a limited number of sampling locations. Therefore, additional soil sampling was conducted to determine if the concentrations of radionuclides in soil on the McGee Ranch-Riverlands and North Slope units were below the authorized limits. Sixty-seven soil samples were collected from the McGee Ranch-Riverlands and North Slope units. A software package (Visual Sample Plan) was used to plan the collection to assure an adequate number of samples were collected. The number of samples necessary to decide with a high level of confidence (99%) that the soil concentrations of radionuclides on the North Slope and McGee Ranch-Riverlands units did not exceed the authorized limits was determined to be 27. Additional soil samples were collected from areas suspected to have a potential for accumulation of radionuclides. This included samples collected from the riparian zone along the Columbia River, Savage Island, and other locations across the North Slope and McGee Ranch-Riverlands units. The 67 soil samples collected from the McGee Ranch-Riverlands and North Slope units all had concentrations of radionuclides far below the authorized limits established by the DOE. Statistical analysis of the results concluded that the Authorized Limits were not exceeded when total uncertainty was considered. The calculated upper confidence limit for each radionuclide measured in this study (which represents the value at which 99% of the measurements reside below with a 99% confidence level) was lower than the Authorized Limit for each radionuclide. The maximum observed soil concentrations for the radionuclides included in the authorized limits would result in a potential annual dose of 0.23 mrem assuming the most probable use scenario, a recreational visitor. This potential dose is well below the DOE 100-mrem/year dose limit for members of the public. Furthermore, the results of the biota dose assessment screen, which used the RESRAD biota code, indicated that the sum of fractions is less than one. This assumed soil concentrations equal to the maximum concentrations of radionuclides measured on the McGee Ranch-Riverlands and North Slope units’ in this study. 
Since the sum of fractions was less than 1, dose to terrestrial biota will not exceed the recommended biota dose limit for the soil concentrations measured in this study.« less
An Efficient Approach for Mars Sample Return Using Emerging Commercial Capabilities
NASA Technical Reports Server (NTRS)
Gonzales, Andrew A.; Stoker, Carol R.
2016-01-01
Mars Sample Return is the highest priority science mission for the next decade as recommended by the 2011 Decadal Survey of Planetary Science. This article presents the results of a feasibility study for a Mars Sample Return mission that efficiently uses emerging commercial capabilities expected to be available in the near future. The motivation of our study was the recognition that emerging commercial capabilities might be used to perform Mars Sample Return with an Earth-direct architecture, and that this may offer a simpler and lower cost approach. The objective of the study was to determine whether these capabilities can be used to optimize the number of mission systems and launches required to return the samples, with the goal of achieving the desired simplicity. All of the major elements required for the Mars Sample Return mission are described. Mission system elements were analyzed with either direct techniques or by using parametric mass estimating relationships. The analysis shows the feasibility of a complete and closed Mars Sample Return mission design based on the following scenario: A SpaceX Falcon Heavy launch vehicle places a modified version of a SpaceX Dragon capsule, referred to as "Red Dragon", onto a Trans Mars Injection trajectory. The capsule carries all the hardware needed to return to Earth Orbit samples collected by a prior mission, such as the planned NASA Mars 2020 sample collection rover. The payload includes a fully fueled Mars Ascent Vehicle; a fueled Earth Return Vehicle, support equipment, and a mechanism to transfer samples from the sample cache system onboard the rover to the Earth Return Vehicle. The Red Dragon descends to land on the surface of Mars using Supersonic Retropropulsion. After collected samples are transferred to the Earth Return Vehicle, the single-stage Mars Ascent Vehicle launches the Earth Return Vehicle from the surface of Mars to a Mars phasing orbit. After a brief phasing period, the Earth Return Vehicle performs a Trans Earth Injection burn. Once near Earth, the Earth Return Vehicle performs Earth and lunar swing-bys and is placed into a Lunar Trailing Orbit, an Earth orbit at lunar distance. A retrieval mission then performs a rendezvous with the Earth Return Vehicle, retrieves the sample container, and breaks the chain of contact with Mars by transferring the sample into a sterile and secure container. With the sample contained, the retrieving spacecraft makes a controlled Earth re-entry preventing any unintended release of Martian materials into the Earth's biosphere. The mission can start in any one of three Earth to Mars launch opportunities, beginning in 2022.
Mi, Michael Y; Betensky, Rebecca A
2013-04-01
Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample-size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Because the basic SPCD already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD and whether we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample-size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to an 8% reduction in person-days with negligible loss of power. In the third design using sample-size re-estimation, up to 25% power was recovered from underestimated sample-size scenarios. Given the numerous possible test parameters that could have been chosen for the simulations, the study's results are limited to situations described by the parameters that were used and may not generalize to all possible scenarios. Furthermore, dropout of patients is not considered in this study. It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments.
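The basic SPCD logic (re-randomizing phase 1 placebo non-responders and pooling the two phase estimates) can be illustrated with a short simulation. The sketch below is a minimal binary-endpoint version with assumed response rates, a fixed 50/50 phase weighting, and a simple pooled z-test; it is not one of the three adaptive designs evaluated in the paper.

```python
# Minimal simulation sketch of an SPCD trial with a binary endpoint.
# Response rates, weighting, and the pooled z-test are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

def spcd_power(n=300, p_drug=0.45, p_placebo1=0.35, p_drug2=0.35, p_placebo2=0.15,
               w=0.5, n_sim=2000):
    """Empirical power of a weighted two-phase comparison of response rates."""
    z_crit = 1.96
    hits = 0
    for _ in range(n_sim):
        # Phase 1: 1:1 randomization to drug or placebo.
        drug1 = rng.random(n // 2) < p_drug
        plac1 = rng.random(n // 2) < p_placebo1
        d1 = drug1.mean() - plac1.mean()
        v1 = drug1.var(ddof=1) / len(drug1) + plac1.var(ddof=1) / len(plac1)
        # Phase 2: placebo non-responders from phase 1 are re-randomized 1:1.
        n2 = int((~plac1).sum()) // 2
        if n2 < 2:
            continue
        drug2 = rng.random(n2) < p_drug2
        plac2 = rng.random(n2) < p_placebo2
        d2 = drug2.mean() - plac2.mean()
        v2 = drug2.var(ddof=1) / n2 + plac2.var(ddof=1) / n2
        # Weighted combination of the two phase estimates.
        z = (w * d1 + (1 - w) * d2) / np.sqrt(w**2 * v1 + (1 - w) ** 2 * v2)
        hits += z > z_crit
    return hits / n_sim

print(f"empirical power ~ {spcd_power():.2f}")
```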
Mi, Michael Y.; Betensky, Rebecca A.
2013-01-01
Background Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Purpose Because the basic SPCD design already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD and if we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Methods Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. Results From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to an 8% reduction in person-days with negligible loss of power. In the third design using sample size re-estimation, up to 25% power was recovered from underestimated sample size scenarios. Limitations Given the numerous possible test parameters that could have been chosen for the simulations, the study’s results are limited to situations described by the parameters that were used, and may not generalize to all possible scenarios. Furthermore, drop-out of patients is not considered in this study. Conclusions It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments. PMID:23283576
Emotional and sexual jealousy as a function of sex and sexual orientation in a Brazilian sample.
de Souza, Altay Alves Lino; Verderane, Michele Pereira; Taira, Juliana Tieme; Otta, Emma
2006-04-01
The goal of the present study was to compare the relative distress of homosexual and heterosexual Brazilian men and women on scenarios in which they imagined their partners sexually or emotionally involved with another person, using a forced-choice paradigm and continuous measures. Participants were 68 heterosexual men, 72 heterosexual women, 42 homosexual men, and 35 homosexual women. On the forced-choice questions heterosexual men (39 on one question and 37 on the other) were more upset than their female counterparts (21 on one question and 15 on the other) by scenarios of sexual infidelity than those of emotional infidelity. On questions using continuous measures no significant difference was found between pleasurable sex and attachment scenarios for heterosexual women or heterosexual men. On the highly upsetting scenarios heterosexual men discriminated between flirting and both pleasurable sex and attachment scenarios, being less disturbed by the former. In contrast, heterosexual women were equally distressed by the three scenarios. Scores for the homosexual men and homosexual women fell in between those of the heterosexual men and heterosexual women and did not show a clear cut preference for the sexual infidelity or the emotional alternative on the forced-choice paradigm. However, on the continuous measures of jealousy homosexuals resembled heterosexuals of the opposite sex. There was no evidence that jealousy would be less intense among homosexuals although reproductive outcomes were not at risk.
Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.
Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E
2014-02-28
The complexity of system biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. With an account of any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
Boitard, Simon; Rodríguez, Willy; Jay, Flora; Mona, Stefano; Austerlitz, Frédéric
2016-01-01
Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles. PMID:26943927
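One of the two summary-statistic classes used by PopSizeABC, the folded allele frequency spectrum, is straightforward to compute from unphased, unpolarized genotypes, which is what makes it attractive here. The sketch below shows that computation on a random placeholder genotype matrix; it is only the summary-statistic step, not the ABC inference itself.

```python
# Sketch: folded allele frequency spectrum from an unphased 0/1/2 genotype matrix.
# The random genotypes below are placeholders for real SNP data.
import numpy as np

rng = np.random.default_rng(3)

def folded_afs(genotypes):
    """Folded AFS from a (n_individuals x n_snps) matrix of alt-allele counts (0/1/2).
    Entry k counts SNPs whose minor-allele count across the sample equals k."""
    n_alleles = 2 * genotypes.shape[0]
    counts = genotypes.sum(axis=0)                   # alt-allele count per SNP
    minor = np.minimum(counts, n_alleles - counts)   # fold: keep the minor allele count
    return np.bincount(minor, minlength=n_alleles // 2 + 1)

genotypes = rng.integers(0, 3, size=(25, 1000))      # 25 diploid genomes, 1000 SNPs
print(folded_afs(genotypes))
```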
Posadas-Domínguez, Rodolfo Rogelio; Del Razo-Rodríguez, Oscar Enrique; Almaraz-Buendía, Isaac; Pelaez-Acero, Armando; Espinosa-Muñoz, Verónica; Rebollar-Rebollar, Samuel; Salinas-Martínez, Jesús Armando
2018-06-01
This article combines a Policy Analysis Matrix with a sensitivity and poverty line analysis with the objective of evaluating the economic contribution of comparative advantages to the private profitability and competitiveness of small-scale dairy systems. For 1 year, socioeconomic data were collected from 82 farms selected from four strata via statistical sampling. Two scenarios were established to determine the quantitative contribution of comparative advantages: (1) a simulated scenario, which accounted for the cost of purchasing the total food and the opportunity cost of the family labour force (FLF), and (2) an actual production scenario, which accounted for the cost of producing food and eliminating the payment of the FLF and included other income. The E3 and E4 producers were the most profitable and competitive in the simulated scenario and actual production scenario. Of the four scales evaluated, the E2 and E1 producers were the most efficient in taking advantage of the economic contribution provided by the comparative advantages in their own production of food and employment of the FLF, in addition to accounting for other income, a condition that increased their profitability by 171 and 144% and competitiveness by 346 and 273%, respectively. The poverty results indicated that only E3 and E4 producers were non-vulnerable in the simulated scenario and actual production scenario. The purchase of food was the comparative advantage with the greatest sensitivity to cost increases in the two scenarios analysed, which exacerbated the effect on the E1 and E2 producers.
Dandautiya, Rahul; Singh, Ajit Pratap; Kundu, Sanghamitra
2018-05-01
Fly ash, generated at coal-based thermal power plants, is a persistent cause of concern to environmentalists owing to its adverse impact on air, water and land. It poses a high environmental risk when disposed of in the environment. Thus, two different types of fly ash samples (FA-1 and FA-2) have been considered in this study to examine the leaching potential of the elements magnesium, aluminium, silicon, calcium, titanium, vanadium, chromium, manganese, iron, nickel, cobalt, copper, zinc, arsenic, selenium, strontium, cadmium, barium and lead for different types of leachant. Toxicity characteristic leaching procedure and ASTM tests have been performed in the laboratory to simulate different natural leaching scenarios. Characterisation of samples has been done through X-ray diffraction and field emission gun scanning electron microscopy. The effect of different liquid to solid ratios (i.e. 5, 10, 20 and 50) on the mobilisation of elements has been analysed. The results indicated that the maximum leaching of all elements occurred at a liquid to solid ratio of 5, except for arsenic, barium and silicon. Groundwater analysis has also been done to understand the actual effects of leachate. The elements presenting the highest leachability in the two fly ash samples under all tested conditions were magnesium, aluminium, silicon and calcium. It has been observed that calcium exhibits greater leaching effects than all other constituents. The study presented here is useful for assessing contamination levels in groundwater owing to the leaching effects of fly ash under different scenarios, and can help prevent the spread of contaminants through efficient management of fly ash.
NASA Astrophysics Data System (ADS)
van Velzen, S.
2018-01-01
The tidal disruption of a star by a massive black hole is expected to yield a luminous flare of thermal emission. About two dozen of these stellar tidal disruption flares (TDFs) may have been detected in optical transient surveys. However, explaining the observed properties of these events within the tidal disruption paradigm is not yet possible. This theoretical ambiguity has led some authors to suggest that optical TDFs are due to a different process, such as a nuclear supernova or accretion disk instabilities. Here we present a test of a fundamental prediction of the tidal disruption event scenario: a suppression of the flare rate due to the direct capture of stars by the black hole. Using a recently compiled sample of candidate TDFs with black hole mass measurements, plus a careful treatment of selection effects in this flux-limited sample, we confirm that the dearth of observed TDFs from high-mass black holes is statistically significant. All the TDF impostor models we consider fail to explain the observed mass function; the only scenario that fits the data is a suppression of the rate due to direct captures. We find that this suppression can explain the low volumetric rate of the luminous TDF candidate ASASSN-15lh, thus supporting the hypothesis that this flare belongs to the TDF family. Our work is the first to present the optical TDF luminosity function. A steep power law is required to explain the observed rest-frame g-band luminosity function, dN/dL_g ∝ L_g^-2.5. The mean event rate of the flares in our sample is ≈ 1 × 10^-4 galaxy^-1 yr^-1, consistent with the theoretically expected tidal disruption rate.
Characteristics of Excitable Dog Behavior Based on Owners’ Report from a Self-Selected Study
Shabelansky, Anastasia; Dowling-Guyer, Seana
2016-01-01
Simple Summary This study provides information about owners’ experiences with their dogs’ excitable behavior. We found that certain daily scenarios tended to prompt excitable behavior. The majority of owners in this self-selected sample were very frustrated with their excitable dog. Many dogs in the sample had other behavior problems. Abstract Past research has found that excitable dog behavior is prevalent among sheltered and owned dogs and many times is a reason for canine relinquishment. In spite of its prevalence in the canine population, excitable behavior is relatively unstudied in the scientific literature. The intent of this research was to understand the experience of owners of excitable dogs through the analysis of self-administered online questionnaires completed by owners as part of another study. We found that certain daily scenarios tended to prompt excitable behavior, with excitability most common when the owner or other people came to the dog’s home. All owners experienced some level of frustration with their dog’s excitable behavior, with the majority being very frustrated. Many dogs in the sample had other behavior problems, with disobedient, destructive, chasing and barking behaviors being the most commonly reported. Other characteristics of excitable dogs also are discussed. Although the ability to generalize from these results is likely limited, due to targeted recruitment and selection of owners of more excitable dogs, this research provides valuable insights into the owner’s experience of excitable behavior. We hope this study prompts more research into canine excitable behavior which would expand our understanding of this behavior and help behaviorists, veterinarians, and shelters develop tools for managing it, as well as provide better education to owners of excitable dogs. PMID:26999222
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sapkota, A.; Das, P.; Bohmer, A. E.
Results of inelastic neutron scattering measurements are reported for two annealed compositions of Ca(Fe1–xCox)2As2, x = 0.026 and 0.030, which possess stripe-type antiferromagnetically ordered and superconducting ground states, respectively. In the AFM ground state, well-defined and gapped spin waves are observed for x = 0.026, similar to the parent CaFe2As2 compound. We conclude that the well-defined spin waves are likely to be present for all x corresponding to the AFM state. This behavior is in contrast to the smooth evolution to overdamped spin dynamics observed in Ba(Fe1–xCox)2As2, wherein the crossover corresponds to microscopically coexisting AFM order and SC at low temperature. The smooth evolution is likely absent in Ca(Fe1–xCox)2As2 due to the mutual exclusion of AFM ordered and SC states. Overdamped spin dynamics characterize paramagnetism of the x = 0.030 sample and high-temperature x = 0.026 sample. A sizable loss of magnetic intensity is observed over a wide energy range upon cooling the x = 0.030 sample, at temperatures just above and within the superconducting phase. This phenomenon is unique amongst the iron-based superconductors and is consistent with a temperature-dependent reduction in the fluctuating moment. In conclusion, one possible scenario ascribes this loss of moment to a sensitivity to the c-axis lattice parameter in proximity to the nonmagnetic collapsed tetragonal phase and another scenario ascribes the loss to a formation of a pseudogap.
Demidyuk, Ilya V; Shubin, Andrey V; Gasanov, Eugene V; Kurinov, Alexander M; Demkin, Vladimir V; Vinogradova, Tatyana V; Zinovyeva, Marina V; Sass, Alexander V; Zborovskaya, Irina B; Kostrov, Sergey V
2013-01-01
Proprotein convertases (PCs) are a protein family that includes nine highly specific subtilisin-like serine endopeptidases in mammals. The PC system is involved in carcinogenesis, and levels of PC mRNAs are altered in cancer, suggesting the expression status of PCs as a possible marker for cancer typing and prognosis. The goal of this work was to assess the information value of expression profiling of PC genes. Quantitative polymerase chain reaction was used for the first time to analyze mRNA levels of all PC genes as well as matrix metalloproteinase genes MMP2 and MMP14, which are substrates of PCs, in 30 matched pairs of samples of human lung cancer tumor and adjacent tissues without pathology. Significant changes in the expression of PCs have been revealed in tumor tissues: increased FURIN mRNA level (p<0.00005) and decreased mRNA levels of PCSK2 (p<0.007), PCSK5 (p<0.0002), PCSK7 (p<0.002), PCSK9 (p<0.00008), and MBTPS1 (p<0.00004) as well as a tendency to increase in the level of PCSK1 mRNA. Four distinct groups of samples have been identified by cluster analysis of the expression patterns of PC genes in tumor vs. normal tissue. Three of these groups, covering 80% of the samples, feature a strong elevation in the expression of a single gene in cancer: FURIN, PCSK1, or PCSK6. Thus, the changes in the expression of PC genes have a limited number of scenarios, which may reflect different pathways of tumor development and cryptic features of tumors. This finding allows the mRNAs of PC genes to be considered potentially important tumor markers.
When are pathogen genome sequences informative of transmission events?
Ferguson, Neil; Jombart, Thibaut
2018-01-01
Recent years have seen the development of numerous methodologies for reconstructing transmission trees in infectious disease outbreaks from densely sampled whole genome sequence data. However, a fundamental and as yet poorly addressed limitation of such approaches is the requirement for genetic diversity to arise on epidemiological timescales. Specifically, the position of infected individuals in a transmission tree can only be resolved by genetic data if mutations have accumulated between the sampled pathogen genomes. To quantify and compare the useful genetic diversity expected from genetic data in different pathogen outbreaks, we introduce here the concept of ‘transmission divergence’, defined as the number of mutations separating whole genome sequences sampled from transmission pairs. Using parameter values obtained by literature review, we simulate outbreak scenarios alongside sequence evolution with two models from the literature to characterise the transmission divergence of ten major outbreak-causing pathogens. We find that while mean values vary significantly between the pathogens considered, their transmission divergence is generally very low, with many outbreaks characterised by large numbers of genetically identical transmission pairs. We describe the impact of transmission divergence on our ability to reconstruct outbreaks using two outbreak reconstruction tools, the R packages outbreaker and phybreak, and demonstrate that, in agreement with previous observations, genetic sequence data of rapidly evolving pathogens such as RNA viruses can provide valuable information on individual transmission events. Conversely, sequence data of pathogens with lower mean transmission divergence, including Streptococcus pneumoniae, Shigella sonnei and Clostridium difficile, provide little to no information about individual transmission events. Our results highlight the informational limitations of genetic sequence data in certain outbreak scenarios, and demonstrate the need to expand the toolkit of outbreak reconstruction tools to integrate other types of epidemiological data. PMID:29420641
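As a rough illustration of the quantity defined above, the sketch below draws the number of mutations separating a transmission pair as a Poisson count whose mean is the per-genome mutation rate times the time separating the two sampled genomes. It is a minimal toy, not the paper's simulation framework, and the mutation rates, genome lengths and generation intervals are illustrative placeholders rather than the literature values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_transmission_divergence(mu_per_site_per_year, genome_length,
                                     mean_generation_days, n_pairs=10_000):
    """Mutations separating genomes sampled from transmission pairs.

    Divergence per pair is Poisson with mean = per-genome mutation rate times
    an exponentially distributed time between the two sampled genomes.
    """
    rate_per_genome_per_day = mu_per_site_per_year * genome_length / 365.0
    dt = rng.exponential(mean_generation_days, size=n_pairs)
    return rng.poisson(rate_per_genome_per_day * dt)

# Hypothetical settings contrasting a fast-evolving RNA virus with a bacterium
for name, mu, L, gen in [("RNA virus-like", 2e-3, 13_000, 15),
                         ("bacterium-like", 1e-6, 4_500_000, 60)]:
    d = simulate_transmission_divergence(mu, L, gen)
    print(f"{name}: mean divergence {d.mean():.2f}, "
          f"identical pairs {np.mean(d == 0):.1%}")
```

Even this crude model reproduces the qualitative point of the abstract: slowly evolving pathogens yield mostly genetically identical transmission pairs, leaving little signal for tree reconstruction.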
NASA Astrophysics Data System (ADS)
Sawakuchi, A. O.; Hartmann, G. A.; Sawakuchi, H. O.; Pupim, F. N.; Bertassoli, D. J.; Parra, M.; Antinao, J. L.; Sousa, L. M.; Sabaj Pérez, M. H.; Oliveira, P. E.; Santos, R. A.; Savian, J. F.; Grohmann, C. H.; Medeiros, V. B.; McGlue, M. M.; Bicudo, D. C.; Faustino, S. B.
2015-12-01
The Xingu River is a large clearwater river in eastern Amazonia and its downstream sector, known as the Volta Grande do Xingu ("Xingu Great Bend"), is a unique fluvial landscape that plays an important role in the biodiversity, biogeochemistry and prehistoric and historic peopling of Amazonia. The sedimentary dynamics of the Xingu River in the Volta Grande and its downstream sector will be shifted in the next few years due to the construction of dams associated with the Belo Monte hydropower project. Impacts on river biodiversity and carbon cycling are anticipated, especially due to likely changes in sedimentation and riverbed characteristics. This research project aims to define the geological and climate factors responsible for the development of the Volta Grande landscape and to track its environmental changes during the Holocene, using the modern system as a reference. In this context, sediment cores, riverbed rock and sediment samples and greenhouse gas (GHG) samples were collected in the Volta Grande do Xingu and adjacent upstream and downstream sectors. The reconstruction of past conditions in the Volta Grande is necessary for forecasting future scenarios and defining biodiversity conservation strategies under the operation of Belo Monte dams. This paper describes the scientific questions of the project and the sampling surveys performed by an international team of Earth scientists and biologists during the dry seasons of 2013 and 2014. Preliminary results are presented and a future workshop is planned to integrate results, present data to the scientific community and discuss possibilities for deeper drilling in the Xingu ria to extend the sedimentary record of the Volta Grande do Xingu.
Sapkota, A.; Das, P.; Bohmer, A. E.; ...
2018-05-29
Results of inelastic neutron scattering measurements are reported for two annealed compositions of Ca(Fe1–xCox)2As2, x = 0.026 and 0.030, which possess stripe-type antiferromagnetically ordered and superconducting ground states, respectively. In the AFM ground state, well-defined and gapped spin waves are observed for x = 0.026, similar to the parent CaFe2As2 compound. We conclude that the well-defined spin waves are likely to be present for all x corresponding to the AFM state. This behavior is in contrast to the smooth evolution to overdamped spin dynamics observed in Ba(Fe1–xCox)2As2, wherein the crossover corresponds to microscopically coexisting AFM order and SC at low temperature. The smooth evolution is likely absent in Ca(Fe1–xCox)2As2 due to the mutual exclusion of AFM ordered and SC states. Overdamped spin dynamics characterize paramagnetism of the x = 0.030 sample and high-temperature x = 0.026 sample. A sizable loss of magnetic intensity is observed over a wide energy range upon cooling the x = 0.030 sample, at temperatures just above and within the superconducting phase. This phenomenon is unique amongst the iron-based superconductors and is consistent with a temperature-dependent reduction in the fluctuating moment. In conclusion, one possible scenario ascribes this loss of moment to a sensitivity to the c-axis lattice parameter in proximity to the nonmagnetic collapsed tetragonal phase and another scenario ascribes the loss to a formation of a pseudogap.
Galaxy Zoo: secular evolution of barred galaxies from structural decomposition of multiband images
NASA Astrophysics Data System (ADS)
Kruk, Sandor J.; Lintott, Chris J.; Bamford, Steven P.; Masters, Karen L.; Simmons, Brooke D.; Häußler, Boris; Cardamone, Carolin N.; Hart, Ross E.; Kelvin, Lee; Schawinski, Kevin; Smethurst, Rebecca J.; Vika, Marina
2018-02-01
We present the results of two-component (disc+bar) and three-component (disc+bar+bulge) multiwavelength 2D photometric decompositions of barred galaxies in five Sloan Digital Sky Survey (SDSS) bands (ugriz). This sample of ∼3500 nearby (z < 0.06) galaxies with strong bars selected from the Galaxy Zoo citizen science project is the largest sample of barred galaxies to be studied using photometric decompositions that include a bar component. With detailed structural analysis, we obtain physical quantities such as the bar- and bulge-to-total luminosity ratios, effective radii, Sérsic indices and colours of the individual components. We observe a clear difference in the colours of the components, the discs being bluer than the bars and bulges. An overwhelming fraction of bulge components have Sérsic indices consistent with being pseudo-bulges. By comparing the barred galaxies with a mass-matched and volume-limited sample of unbarred galaxies, we examine the connection between the presence of a large-scale galactic bar and the properties of discs and bulges. We find that the discs of unbarred galaxies are significantly bluer compared to the discs of barred galaxies, while there is no significant difference in the colours of the bulges. We find possible evidence of secular evolution via bars that leads to the build-up of pseudo-bulges and to the quenching of star formation in the discs. We identify a subsample of unbarred galaxies with an inner lens/oval and find that their properties are similar to barred galaxies, consistent with an evolutionary scenario in which bars dissolve into lenses. This scenario deserves further investigation through both theoretical and observational work.
NASA Astrophysics Data System (ADS)
Khodabakhshi, M.; Jafarpour, B.
2013-12-01
Characterization of complex geologic patterns that create preferential flow paths in certain reservoir systems requires higher-order geostatistical modeling techniques. Multipoint statistics (MPS) provides a flexible grid-based approach for simulating such complex geologic patterns from a conceptual prior model known as a training image (TI). In this approach, a stationary TI that encodes the higher-order spatial statistics of the expected geologic patterns is used to represent the shape and connectivity of the underlying lithofacies. While MPS is quite powerful for describing complex geologic facies connectivity, the nonlinear and complex relation between the flow data and facies distribution makes flow data conditioning quite challenging. We propose an adaptive technique for conditioning facies simulation from a prior TI to nonlinear flow data. Non-adaptive strategies for conditioning facies simulation to flow data can involve many forward flow model solutions and can be computationally very demanding. To improve the conditioning efficiency, we develop an adaptive sampling approach through a data feedback mechanism based on the sampling history. In this approach, after a short period of sampling burn-in time during which unconditional samples are generated and passed through an acceptance/rejection test, an ensemble of accepted samples is identified and used to generate a facies probability map. This facies probability map contains the common features of the accepted samples and provides conditioning information about facies occurrence in each grid block, which is used to guide the conditional facies simulation process. As the sampling progresses, the initial probability map is updated according to the collective information about the facies distribution in the chain of accepted samples to increase the acceptance rate and efficiency of the conditioning. This conditioning process can be viewed as an optimization approach where each new sample is proposed based on the sampling history to improve the data mismatch objective function. We extend the application of this adaptive conditioning approach to the case where multiple training images are proposed to describe the geologic scenario in a given formation. We discuss the advantages and limitations of the proposed adaptive conditioning scheme and use numerical experiments from fluvial channel formations to demonstrate its applicability and performance compared to non-adaptive conditioning techniques.
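The feedback loop described in the abstract can be sketched in a few lines. In the toy below, the MPS simulator and the forward flow solve are replaced by trivial placeholders (independent per-cell draws and a facies match at a few hypothetical "well" cells), so it illustrates only the accept/reject plus probability-map update mechanism, not the authors' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
GRID = (20, 20)
wells = [(2, 3), (10, 15), (17, 5)]            # hypothetical observation locations
true_facies_at_wells = {w: 1 for w in wells}   # channel facies observed at each well
BURN_IN = 20                                   # accepted samples before the map is used

def simulate_facies(prob_map):
    # Placeholder for MPS simulation from a training image: each cell is drawn
    # as channel facies (1) with the probability given by the current map.
    return (rng.random(GRID) < prob_map).astype(int)

def data_mismatch(facies):
    # Placeholder for a forward flow solve and misfit: number of well locations
    # where the simulated facies disagrees with the observation.
    return sum(facies[w] != true_facies_at_wells[w] for w in wells)

prob_map = np.full(GRID, 0.3)      # prior channel proportion, initially uniform
accepted = []

for step in range(2000):
    sample = simulate_facies(prob_map)
    if data_mismatch(sample) == 0:                 # acceptance/rejection test
        accepted.append(sample)
        if len(accepted) >= BURN_IN:
            # feedback: the facies probability map is rebuilt from the chain of
            # accepted samples, guiding later proposals toward consistent facies
            prob_map = np.mean(accepted, axis=0)

print(f"accepted {len(accepted)} of 2000 proposals; "
      f"P(channel) at well {wells[0]}: {prob_map[wells[0]]:.2f}")
```

As accepted samples accumulate, the probability map pins the facies at the conditioning cells, which raises the acceptance rate of subsequent proposals; this is the efficiency gain the adaptive scheme targets.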
Avoiding treatment bias of REDD+ monitoring by sampling with partial replacement.
Köhl, Michael; Scott, Charles T; Lister, Andrew J; Demon, Inez; Plugge, Daniel
2015-12-01
Implementing REDD+ renders the development of a measurement, reporting and verification (MRV) system necessary to monitor carbon stock changes. MRV systems generally apply a combination of remote sensing techniques and in-situ field assessments. In-situ assessments can be based on 1) permanent plots, which are assessed on all successive occasions, 2) temporary plots, which are assessed only once, and 3) a combination of both. The current study focuses on in-situ assessments and addresses the effect of treatment bias, which is introduced by managing permanent sampling plots differently than the surrounding forests. Temporary plots are not subject to treatment bias, but are associated with large sampling errors and low cost-efficiency. Sampling with partial replacement (SPR) utilizes both permanent and temporary plots. We apply a scenario analysis with different intensities of deforestation and forest degradation to show that SPR combines cost-efficiency with the handling of treatment bias. Without treatment bias, permanent plots generally provide lower sampling errors for change estimates than SPR and temporary plots, but they do not provide reliable estimates if treatment bias occurs. SPR allows for change estimates that are comparable to those provided by permanent plots, offers the flexibility to adjust sample sizes over time, and allows data from permanent and temporary plots to be compared for detecting treatment bias. Equivalence of biomass or carbon stock estimates between permanent and temporary plots serves as an indication for the absence of treatment bias, while differences suggest that there is evidence for treatment bias. SPR is a flexible tool for estimating emission factors from successive measurements. It does not entirely depend on sample plots that are installed at the first occasion but allows for the adjustment of sample sizes and placement of new plots at any occasion. This ensures that in-situ samples provide representative estimates over time. SPR offers the possibility to increase sampling intensity in areas with high degradation intensities or to establish new plots in areas where permanent plots are lost due to deforestation. SPR is also an ideal approach to mitigate concerns about treatment bias.
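To make the idea of combining remeasured and fresh plots concrete, the sketch below implements a simplified, textbook-style two-occasion SPR estimator (a regression estimate from matched permanent plots combined with the unmatched temporary-plot mean using inverse-variance weights). It is not the exact estimator used in the paper, and all plot values are simulated placeholders.

```python
import numpy as np

def spr_change_estimate(perm_t1, perm_t2, temp_t2, occ1_mean):
    """Simplified sampling-with-partial-replacement estimate of current biomass.

    perm_t1, perm_t2 : remeasured (permanent/matched) plots at occasions 1 and 2
    temp_t2          : new (temporary/unmatched) plots at occasion 2
    occ1_mean        : mean of all occasion-1 plots
    """
    b = np.polyfit(perm_t1, perm_t2, 1)[0]                       # regression slope
    y_reg = perm_t2.mean() + b * (occ1_mean - perm_t1.mean())    # matched-plot estimate
    rho2 = np.corrcoef(perm_t1, perm_t2)[0, 1] ** 2
    var_reg = np.var(perm_t2, ddof=1) * (1 - rho2) / len(perm_t2)
    y_new = temp_t2.mean()                                       # unmatched-plot estimate
    var_new = np.var(temp_t2, ddof=1) / len(temp_t2)
    w = var_new / (var_reg + var_new)                            # inverse-variance weight
    y2 = w * y_reg + (1 - w) * y_new                             # combined current mean
    return y2, y2 - occ1_mean                                    # current mean and change

rng = np.random.default_rng(2)
t1 = rng.normal(100, 20, 200)                       # occasion-1 biomass, all plots
perm_idx = rng.choice(200, 60, replace=False)       # plots retained as permanent
perm_t2 = t1[perm_idx] * 0.95 + rng.normal(0, 5, 60)  # degradation on permanent plots
temp_t2 = rng.normal(95, 20, 80)                    # fresh temporary plots, occasion 2
current, change = spr_change_estimate(t1[perm_idx], perm_t2, temp_t2, t1.mean())
print(f"current mean ≈ {current:.1f}, change ≈ {change:.1f}")
```

Comparing the permanent-plot and temporary-plot components of such an estimator is exactly the kind of check the abstract proposes for detecting treatment bias.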
Geoscience and a Lunar Base: A Comprehensive Plan for Lunar Exploration
NASA Technical Reports Server (NTRS)
Taylor, G. Jeffrey (Editor); Spudis, Paul D. (Editor)
1990-01-01
This document represents the proceedings of the Workshop on Geoscience from a Lunar Base. It describes a comprehensive plan for the geologic exploration of the Moon. The document begins by explaining the scientific importance of studying the Moon and outlines the many unsolved problems in lunar science. Subsequent chapters detail different, complementary approaches to geologic studies: global surveys, including orbiting spacecraft such as Lunar Observer and installation of a global geophysical network; reconnaissance sample return mission, by either automated rovers or landers, or by piloted forays; detailed field studies, which involve astronauts and teleoperated robotic field geologists. The document then develops a flexible scenario for exploration and sketches the technological developments needed to carry out the exploration scenario.
ERIC Educational Resources Information Center
Harding, Hilary G.; Zinzow, Heidi M.; Burns, Erin E.; Jackson, Joan L.
2010-01-01
Previous research suggests that similarity to a victim may influence attributions of responsibility in hypothetical child sexual abuse scenarios. One aspect of similarity receiving mixed support in the literature is respondent child sexual abuse history. Using a sample of 1,345 college women, the present study examined child sexual abuse history,…
ERIC Educational Resources Information Center
Suzuki, Yumi E.; Bonner, Heidi S.
2017-01-01
Few studies examine the role of friends in victims' decisions to seek help from health professionals. This study used a sample of college students (N = 637) to examine the factors that may influence whether students would advise a friend to seek help from health professionals. After providing an open-ended response to a vignette, students answered…
Leveraging prior quantitative knowledge in guiding pediatric drug development: a case study.
Jadhav, Pravin R; Zhang, Jialu; Gobburu, Jogarao V S
2009-01-01
The manuscript presents the FDA's focus on leveraging prior knowledge in designing an informative pediatric trial through this case study. In developing the written request for Drug X, an anti-hypertensive for immediate blood pressure (BP) control, the sponsor and FDA conducted clinical trial simulations (CTS) to design a trial with an appropriate sample size and to support the choice of dose range. The objective was to effectively use prior knowledge from adult patients for drug X, pediatric data from the Corlopam trial (approved for a similar indication) and general experience in developing anti-hypertensive agents. Different scenarios governing the exposure response relationship in the pediatric population were simulated to perturb model assumptions. The choice of scenarios was based on the past observation that the pediatric population is less responsive and sensitive compared with adults. The conceptual framework presented here should serve as an example of how industry and FDA scientists can collaborate in designing the pediatric exclusivity trial. Using CTS, inter-disciplinary scientists with the sponsor and FDA can objectively discuss the choice of dose range, sample size, endpoints and other design elements. These efforts are believed to yield a plausible trial design, rational dosing recommendations and useful labeling information in pediatrics. Published in 2009 by John Wiley & Sons, Ltd.
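The following sketch shows the kind of scenario-based clinical trial simulation the abstract describes: power to detect a dose-response in BP reduction is estimated under hypothetical scenarios in which pediatric sensitivity is a fraction of the adult value. The dose levels, slope, variability and analysis (a simple linear-regression test with a normal approximation) are all illustrative assumptions, not the actual models used by the sponsor or FDA.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_trial(n_per_arm, slope, sd=8.0, doses=(0.0, 0.1, 0.3, 0.8)):
    """One simulated dose-ranging trial: BP reduction = slope * dose + noise.
    Returns True if the dose-response slope is 'detected' (|z| > 1.96)."""
    d = np.repeat(doses, n_per_arm)
    y = slope * d + rng.normal(0, sd, d.size)
    X = np.column_stack([np.ones_like(d), d])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = res[0] / (d.size - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return abs(beta[1] / se) > 1.96

adult_slope = 12.0      # hypothetical mmHg reduction per unit dose in adults
scenarios = [("adult-like", 1.0), ("half-sensitive", 0.5), ("quarter-sensitive", 0.25)]
for name, factor in scenarios:
    for n in (15, 30, 60):
        power = np.mean([simulate_trial(n, adult_slope * factor) for _ in range(500)])
        print(f"{name:17s} n/arm={n:3d}  power≈{power:.2f}")
```

Running such scenarios side by side is what lets the design team see how much larger the pediatric sample must be if children respond less strongly than adults.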
Baine, Katherine; Jones, Michael P; Cox, Sherry; Martín-Jiménez, Tomás
2015-09-01
Neuropathic pain is a manifestation of chronic pain that arises with damage to the somatosensory system. Pharmacologic treatment recommendations for alleviation of neuropathic pain are often multimodal, and the few reports communicating treatment of suspected neuropathic pain in avian patients describe the use of gabapentin as part of the therapeutic regimen. To determine the pharmacokinetics of gabapentin in Hispaniolan Amazon parrots (Amazona ventralis), compounded gabapentin suspensions were administered at 30 mg/kg IV to 2 birds, 10 mg/kg PO to 3 birds, and 30 mg/kg PO to 3 birds. Blood samples were collected immediately before and at 9 different time points after drug administration. Plasma samples were analyzed for gabapentin concentration, and pharmacokinetic parameters were calculated with both a nonlinear mixed-effect approach and a noncompartmental analysis. The best compartmental, oral model was used to simulate the concentration-time profiles resulting from different dosing scenarios. Mild sedation was observed in both study birds after intravenous injection. Computer simulation of different dosing scenarios with the mean parameter estimates showed that 15 mg/kg every 8 hours would be a starting point for oral dosing in Hispaniolan Amazon parrots based on effective plasma concentrations reported for human patients; however, additional studies need to be performed to establish a therapeutic dose.
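A minimal sketch of the kind of dosing simulation mentioned above is given below: a one-compartment oral model with first-order absorption, with repeated doses built up by superposition. The absorption rate, elimination rate and volume used here are invented placeholders for illustration only; they are not the parameter estimates reported for the parrots in the study.

```python
import numpy as np

def conc_oral_1cpt(t, dose, ka, ke, V):
    """Single oral dose, one-compartment model with first-order absorption."""
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def multiple_dose_profile(dose_mg_per_kg, interval_h, n_doses, ka, ke, V, dt=0.1):
    """Superposition of repeated doses into a concentration-time profile."""
    t = np.arange(0, interval_h * n_doses, dt)
    c = np.zeros_like(t)
    for k in range(n_doses):
        after = t - k * interval_h
        mask = after >= 0
        c[mask] += conc_oral_1cpt(after[mask], dose_mg_per_kg, ka, ke, V)
    return t, c

# Placeholder parameters (per-kg basis), NOT the study's estimates
ka, ke, V = 1.2, 0.35, 1.5        # 1/h, 1/h, L/kg
t, c = multiple_dose_profile(dose_mg_per_kg=15, interval_h=8, n_doses=6,
                             ka=ka, ke=ke, V=V)
c_last = c[t >= 40]               # roughly the last dosing interval
print(f"trough ≈ {c_last.min():.1f} mg/L, peak ≈ {c_last.max():.1f} mg/L")
```

Simulating peaks and troughs at steady state in this way is how a candidate regimen such as 15 mg/kg every 8 hours can be checked against a target plasma concentration range.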
Lehmann, Edouard; Turrero, Nuria; Kolia, Marius; Konaté, Yacouba; de Alencastro, Luiz Felippe
2017-12-01
Vegetable and water samples were collected around the lake of Loumbila in Burkina Faso. Pesticide residues in food commodities were analyzed using a modified QuEChERS extraction method prior to analysis of 31 pesticides by GC-MS and UPLC-MS/MS. Maximum Residue Limits (MRLs) were exceeded in 36% of the samples for seven pesticides: acetamiprid, carbofuran, chlorpyrifos, lambda-cyhalothrin, dieldrin, imidacloprid and profenofos. Exceedance of MRLs suggests a risk for the consumers and limits the opportunities of exportation. In order to define estimated daily intake, dietary surveys were conducted on 126 gardeners using a 24-hour recall method. Single pesticide and cumulative exposure risks were assessed for children and adults. Risk was identified for chlorpyrifos and lambda-cyhalothrin in both acute and chronic exposure scenarios. Hazardous chronic exposure to the endocrine disruptor and probable carcinogen dieldrin was also detected. In the studied population, cumulative dietary exposure presented a risk (acute and chronic) for children and adults in >17% and 4% of the cases, respectively, when considering the worst-case scenarios. The processing factor largely influenced the estimated risk, suggesting that simple washing of vegetables with water considerably reduces the risk of hazardous exposure. Copyright © 2017 Elsevier B.V. All rights reserved.
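The core of such a dietary risk assessment is a short chain of arithmetic: an estimated daily intake (EDI) from residue level, food consumption, processing factor and body weight, compared with a health-based reference value. The sketch below shows that calculation with illustrative placeholder numbers, not the survey's measured values.

```python
# Minimal sketch of a chronic dietary risk calculation; all numbers are
# illustrative placeholders, not the Loumbila survey data.
residue_mg_per_kg = 0.12        # pesticide residue measured in the vegetable
consumption_kg_per_day = 0.15   # vegetable intake from the 24-hour recall survey
body_weight_kg = 60.0           # adult; a child would use a lower value
processing_factor = 0.4         # fraction of residue remaining after washing
adi_mg_per_kg_bw = 0.001        # acceptable daily intake for the pesticide

for label, pf in [("unwashed", 1.0), ("washed", processing_factor)]:
    edi = residue_mg_per_kg * consumption_kg_per_day * pf / body_weight_kg
    hq = edi / adi_mg_per_kg_bw          # hazard quotient; values > 1 flag a risk
    print(f"{label:8s} EDI = {edi:.5f} mg/kg bw/day, HQ = {hq:.2f}")
```

The processing factor enters the EDI multiplicatively, which is why simple washing can move a hazard quotient from above 1 to below 1.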
Can People Guess What Happened to Others from Their Reactions?
Pillai, Dhanya; Sheppard, Elizabeth; Mitchell, Peter
2012-01-01
Are we able to infer what happened to a person from a brief sample of his/her behaviour? It has been proposed that mentalising skills can be used to retrodict as well as predict behaviour, that is, to determine what mental states of a target have already occurred. The current study aimed to develop a paradigm to explore these processes, which takes into account the intricacies of real-life situations in which reasoning about mental states, as embodied in behaviour, may be utilised. A novel task was devised which involved observing subtle and naturalistic reactions of others in order to determine the event that had previously taken place. Thirty-five participants viewed videos of real individuals reacting to the researcher behaving in one of four possible ways, and were asked to judge which of the four ‘scenarios’ they thought the individual was responding to. Their eye movements were recorded to establish the visual strategies used. Participants were able to deduce successfully from a small sample of behaviour which scenario had previously occurred. Surprisingly, looking at the eye region was associated with poorer identification of the scenarios, and eye movement strategy varied depending on the event experienced by the person in the video. This suggests people flexibly deploy their attention using a retrodictive mindreading process to infer events. PMID:23226227
Guna, Jože; Jakus, Grega; Pogačnik, Matevž; Tomažič, Sašo; Sodnik, Jaka
2014-02-21
We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot currently be used as a professional tracking system.
Guna, Jože; Jakus, Grega; Pogačnik, Matevž; Tomažič, Sašo; Sodnik, Jaka
2014-01-01
We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot currently be used as a professional tracking system. PMID:24566635
Phobos-Grunt: Russian Sample Return Mission
NASA Astrophysics Data System (ADS)
Marov, M.
As an important milestone in Mars exploration, the new-generation spacecraft "Phobos-Grunt" is planned to be launched by the Russian Aviation and Space Agency. The project is optimized around a Phobos sample return mission and follow-up missions targeted to study some main asteroid belt bodies, near-Earth objects (NEOs), and short-period comets. The principal constraint is the utilization of the "Soyuz-Fregat" launcher, rather than "Proton", to accomplish these challenging goals. The vehicle design incorporates innovative SEP technology involving electrojet engines, which significantly increases the missions' energetic capabilities, as well as highly autonomous on-board systems. Basic criteria underlying the "Phobos-Grunt" mission scenario, its scientific objectives and rationale, involving Mars observations during the vehicle's insertion into Mars orbit and the Phobos approach manoeuvres, are discussed, and an opportunity for international cooperation is suggested.
Silva, Catarina; Cavaco, Carina; Perestrelo, Rosa; Pereira, Jorge; Câmara, José S.
2014-01-01
For a long time, sample preparation was unrecognized as a critical issue in the analytical methodology, thus limiting the performance that could be achieved. However, the improvement of microextraction techniques, particularly microextraction by packed sorbent (MEPS) and solid-phase microextraction (SPME), completely modified this scenario by introducing unprecedented control over this process. Urine is a biological fluid that is very interesting for metabolomics studies, allowing human health and disease characterization in a minimally invasive form. In this manuscript, we will critically review the most relevant and promising works in this field, highlighting how the metabolomic profiling of urine can be an extremely valuable tool for the early diagnosis of highly prevalent diseases, such as cardiovascular, oncologic and neurodegenerative ones. PMID:24958388
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vandersall, K S; Tarver, C M; Garcia, F
Shock initiation experiments on the HMX based explosives LX-10 (95% HMX, 5% Viton by weight) and LX-07 (90% HMX, 10% Viton by weight) were performed to obtain in-situ pressure gauge data, run-distance-to-detonation thresholds, and Ignition and Growth modeling parameters. A 101 mm diameter propellant driven gas gun was utilized to initiate the explosive samples with manganin piezoresistive pressure gauge packages placed between sample slices. The run-distance-to-detonation points on the Pop-plot for these experiments and prior experiments on another HMX based explosive, LX-04 (85% HMX, 15% Viton by weight), will be shown, discussed, and compared as a function of the binder content. This parameter set will provide additional information to ensure accurate code predictions for safety scenarios involving HMX explosives with different percent binder content additions.
A fortran program for Monte Carlo simulation of oil-field discovery sequences
Bohling, Geoffrey C.; Davis, J.C.
1993-01-01
We have developed a program for performing Monte Carlo simulation of oil-field discovery histories. A synthetic parent population of fields is generated as a finite sample from a distribution of specified form. The discovery sequence then is simulated by sampling without replacement from this parent population in accordance with a probabilistic discovery process model. The program computes a chi-squared deviation between synthetic and actual discovery sequences as a function of the parameters of the discovery process model, the number of fields in the parent population, and the distributional parameters of the parent population. The program employs the three-parameter log gamma model for the distribution of field sizes and employs a two-parameter discovery process model, allowing the simulation of a wide range of scenarios. © 1993.
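The essential mechanics of such a simulation are easy to sketch: draw a parent population of field sizes, simulate discovery by sampling without replacement with size-biased probabilities, and score candidate process parameters against an "actual" sequence with a chi-squared-style deviation. The Python toy below uses a lognormal size distribution as a stand-in for the three-parameter log gamma and a single size-bias exponent as the discovery process model; it mirrors the workflow, not the Fortran program itself.

```python
import numpy as np

rng = np.random.default_rng(4)

def discovery_sequence(sizes, beta, n_discoveries):
    """Sample fields without replacement with probability ∝ size**beta,
    mimicking a size-biased discovery process."""
    sizes = np.asarray(sizes, dtype=float)
    remaining = list(range(len(sizes)))
    seq = []
    for _ in range(n_discoveries):
        w = sizes[remaining] ** beta
        idx = rng.choice(len(remaining), p=w / w.sum())
        seq.append(sizes[remaining.pop(idx)])
    return np.array(seq)

def chi2_deviation(simulated, actual):
    """Chi-squared-style deviation between synthetic and actual sequences."""
    return np.sum((simulated - actual) ** 2 / np.maximum(actual, 1e-9))

# Parent population: lognormal stands in for the three-parameter log gamma
parent = rng.lognormal(mean=3.0, sigma=1.2, size=300)
actual = discovery_sequence(parent, beta=1.0, n_discoveries=50)   # pretend record

for beta in (0.0, 0.5, 1.0, 1.5):       # candidate discovery-process exponents
    dev = np.mean([chi2_deviation(discovery_sequence(parent, beta, 50), actual)
                   for _ in range(20)])
    print(f"beta = {beta:.1f}  mean deviation = {dev:,.0f}")
```

Scanning the deviation over the process and population parameters, as in the last loop, is how such a program identifies scenarios consistent with an observed discovery history.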
Characterizing sampling and quality screening biases in infrared and microwave limb sounding
NASA Astrophysics Data System (ADS)
Millán, Luis F.; Livesey, Nathaniel J.; Santee, Michelle L.; von Clarmann, Thomas
2018-03-01
This study investigates orbital sampling biases and evaluates the additional impact caused by data quality screening for the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) and the Aura Microwave Limb Sounder (MLS). MIPAS acts as a proxy for typical infrared limb emission sounders, while MLS acts as a proxy for microwave limb sounders. These biases were calculated for temperature and several trace gases by interpolating model fields to real sampling patterns and, additionally, screening those locations as directed by their corresponding quality criteria. Both instruments have dense uniform sampling patterns typical of limb emission sounders, producing almost identical sampling biases. However, there is a substantial difference between the number of locations discarded. MIPAS, as a mid-infrared instrument, is very sensitive to clouds, and measurements affected by them are thus rejected from the analysis. For example, in the tropics, the MIPAS yield is strongly affected by clouds, while MLS is mostly unaffected. The results show that upper-tropospheric sampling biases in zonally averaged data, for both instruments, can be up to 10 to 30 %, depending on the species, and up to 3 K for temperature. For MIPAS, the sampling reduction due to quality screening worsens the biases, leading to values as large as 30 to 100 % for the trace gases and expanding the 3 K bias region for temperature. This type of sampling bias is largely induced by the geophysical origins of the screening (e.g. clouds). Further, analysis of long-term time series reveals that these additional quality screening biases may affect the ability to accurately detect upper-tropospheric long-term changes using such data. In contrast, MLS data quality screening removes sufficiently few points that no additional bias is introduced, although its penetration is limited to the upper troposphere, while MIPAS may cover well into the mid-troposphere in cloud-free scenarios. We emphasize that the results of this study refer only to the representativeness of the respective data, not to their intrinsic quality.
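The methodology described above reduces to subsampling a model field at the instrument's measurement locations, optionally discarding quality-screened scenes, and comparing the resulting zonal means with the full-field truth. The sketch below does this with a synthetic field; the screening is made to correlate with the field itself (as cloud rejection correlates with humidity), because it is that correlation which induces the additional bias. Grid sizes, yields and rejection rates are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "model field" of a trace gas on a (lat, lon, time) grid
lats = np.linspace(-90, 90, 37)
field = 50 + 30 * np.cos(np.deg2rad(lats))[:, None, None] \
        + rng.normal(0, 5, (37, 72, 100))

# Hypothetical orbital sampling: a random subset of grid points is observed
sampled = rng.random(field.shape) < 0.05

# Hypothetical quality screening correlated with the field anomaly
anomaly = field - field.mean(axis=(1, 2), keepdims=True)
rejected = (anomaly > 3.0) & (rng.random(field.shape) < 0.8)
screened = sampled & ~rejected

def zonal_mean(f, mask=None):
    """Zonal (latitude-band) mean, optionally restricted to a sampling mask."""
    if mask is None:
        return f.mean(axis=(1, 2))
    return np.nanmean(np.where(mask, f, np.nan), axis=(1, 2))

truth = zonal_mean(field)
bias_sampling = 100 * (zonal_mean(field, sampled) - truth) / truth
bias_screening = 100 * (zonal_mean(field, screened) - truth) / truth
print(f"max |bias|, sampling only : {np.nanmax(np.abs(bias_sampling)):.2f} %")
print(f"max |bias|, with screening: {np.nanmax(np.abs(bias_screening)):.2f} %")
```

In this toy, as in the study, the sampling-only bias is small for a dense pattern, while screening that preferentially removes particular geophysical conditions adds a systematic offset.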
NASA Technical Reports Server (NTRS)
Zolensky, Michael E.
2011-01-01
I describe lessons learned from my participation on the Hayabusa Mission, which returned regolith grains from asteroid Itokawa in 2010 [1], comparing this with the recently returned Stardust Spacecraft, which sampled the Jupiter Family comet Wild 2. Spacecraft Recovery Operations: The mission Science and Curation teams must actively participate in planning, testing and implementing spacecraft recovery operations. The crash of the Genesis spacecraft underscored the importance of thinking through multiple contingency scenarios and practicing field recovery for these potential circumstances. Having the contingency supplies on-hand was critical, and at least one full year of planning for Stardust and Hayabusa recovery operations was necessary. Care must be taken to coordinate recovery operations with local organizations and inform relevant government bodies well in advance. Recovery plans for both Stardust and Hayabusa had to be adjusted for unexpectedly wet landing site conditions. Documentation of every step of spacecraft recovery and deintegration was necessary, and collection and analysis of launch and landing site soils was critical. We found the operation of the Woomera Test Range (South Australia) to be excellent in the case of Hayabusa, and in many respects this site is superior to the Utah Test and Training Range (used for Stardust) in the USA. Recovery operations for all recovered spacecraft suffered from the lack of a hermetic seal for the samples. Mission engineers should be pushed to provide hermetic seals for returned samples. Sample Curation Issues: More than two full years were required to prepare curation facilities for Stardust and Hayabusa. Despite this seemingly adequate lead time, major changes to curation procedures were required once the actual state of the returned samples became apparent. Sample databases must be fully implemented before sample return; for Stardust we did not adequately think through all of the possible sub-sampling and analytical activities before settling on a database design - Hayabusa has done a better job of this. Also, analysis teams must not be permitted to devise their own sample naming schemes. The sample handling and storage facilities for Hayabusa are the finest that exist, and we are now modifying Stardust curation to take advantage of the Hayabusa facilities. Remote storage of a sample subset is desirable. Preliminary Examination (PE) of Samples: There must be some determination of the state and quantity of the returned samples, to provide a necessary guide to persons requesting samples and oversight committees tasked with sample curation oversight. Hayabusa's sample PE, which is called HASPET, was designed so that late additions to the analysis protocols were possible, as new analytical techniques became available. A small but representative number of recovered grains are being subjected to in-depth characterization. The bulk of the recovered samples are being left untouched, to limit contamination. The HASPET plan takes maximum advantage of the unique strengths of sample return missions.
Volumetric CT with sparse detector arrays (and application to Si-strip photon counters).
Sisniega, A; Zbijewski, W; Stayman, J W; Xu, J; Taguchi, K; Fredenberg, E; Lundqvist, Mats; Siewerdsen, J H
2016-01-07
Novel x-ray medical imaging sensors, such as photon counting detectors (PCDs) and large area CCD and CMOS cameras can involve irregular and/or sparse sampling of the detector plane. Application of such detectors to CT involves undersampling that is markedly different from the commonly considered case of sparse angular sampling. This work investigates volumetric sampling in CT systems incorporating sparsely sampled detectors with axial and helical scan orbits and evaluates performance of model-based image reconstruction (MBIR) with spatially varying regularization in mitigating artifacts due to sparse detector sampling. Volumetric metrics of sampling density and uniformity were introduced. Penalized-likelihood MBIR with a spatially varying penalty that homogenized resolution by accounting for variations in local sampling density (i.e. detector gaps) was evaluated. The proposed methodology was tested in simulations and on an imaging bench based on a Si-strip PCD (total area 5 cm × 25 cm) consisting of an arrangement of line sensors separated by gaps of up to 2.5 mm. The bench was equipped with translation/rotation stages allowing a variety of scanning trajectories, ranging from a simple axial acquisition to helical scans with variable pitch. Statistical (spherical clutter) and anthropomorphic (hand) phantoms were considered. Image quality was compared to that obtained with a conventional uniform penalty in terms of structural similarity index (SSIM), image uniformity, spatial resolution, contrast, and noise. Scan trajectories with intermediate helical width (~10 mm longitudinal distance per 360° rotation) demonstrated optimal tradeoff between the average sampling density and the homogeneity of sampling throughout the volume. For a scan trajectory with 10.8 mm helical width, the spatially varying penalty resulted in significant visual reduction of sampling artifacts, confirmed by a 10% reduction in minimum SSIM (from 0.88 to 0.8) and a 40% reduction in the dispersion of SSIM in the volume compared to the constant penalty (both penalties applied at optimal regularization strength). Images of the spherical clutter and wrist phantoms confirmed the advantages of the spatially varying penalty, showing a 25% improvement in image uniformity and 1.8 × higher CNR (at matched spatial resolution) compared to the constant penalty. The studies elucidate the relationship between sampling in the detector plane, acquisition orbit, sampling of the reconstructed volume, and the resulting image quality. They also demonstrate the benefit of spatially varying regularization in MBIR for scenarios with irregular sampling patterns. Such findings are important and integral to the incorporation of a sparsely sampled Si-strip PCD in CT imaging.
Volumetric CT with sparse detector arrays (and application to Si-strip photon counters)
NASA Astrophysics Data System (ADS)
Sisniega, A.; Zbijewski, W.; Stayman, J. W.; Xu, J.; Taguchi, K.; Fredenberg, E.; Lundqvist, Mats; Siewerdsen, J. H.
2016-01-01
Novel x-ray medical imaging sensors, such as photon counting detectors (PCDs) and large area CCD and CMOS cameras can involve irregular and/or sparse sampling of the detector plane. Application of such detectors to CT involves undersampling that is markedly different from the commonly considered case of sparse angular sampling. This work investigates volumetric sampling in CT systems incorporating sparsely sampled detectors with axial and helical scan orbits and evaluates performance of model-based image reconstruction (MBIR) with spatially varying regularization in mitigating artifacts due to sparse detector sampling. Volumetric metrics of sampling density and uniformity were introduced. Penalized-likelihood MBIR with a spatially varying penalty that homogenized resolution by accounting for variations in local sampling density (i.e. detector gaps) was evaluated. The proposed methodology was tested in simulations and on an imaging bench based on a Si-strip PCD (total area 5 cm × 25 cm) consisting of an arrangement of line sensors separated by gaps of up to 2.5 mm. The bench was equipped with translation/rotation stages allowing a variety of scanning trajectories, ranging from a simple axial acquisition to helical scans with variable pitch. Statistical (spherical clutter) and anthropomorphic (hand) phantoms were considered. Image quality was compared to that obtained with a conventional uniform penalty in terms of structural similarity index (SSIM), image uniformity, spatial resolution, contrast, and noise. Scan trajectories with intermediate helical width (~10 mm longitudinal distance per 360° rotation) demonstrated optimal tradeoff between the average sampling density and the homogeneity of sampling throughout the volume. For a scan trajectory with 10.8 mm helical width, the spatially varying penalty resulted in significant visual reduction of sampling artifacts, confirmed by a 10% reduction in minimum SSIM (from 0.88 to 0.8) and a 40% reduction in the dispersion of SSIM in the volume compared to the constant penalty (both penalties applied at optimal regularization strength). Images of the spherical clutter and wrist phantoms confirmed the advantages of the spatially varying penalty, showing a 25% improvement in image uniformity and 1.8 × higher CNR (at matched spatial resolution) compared to the constant penalty. The studies elucidate the relationship between sampling in the detector plane, acquisition orbit, sampling of the reconstructed volume, and the resulting image quality. They also demonstrate the benefit of spatially varying regularization in MBIR for scenarios with irregular sampling patterns. Such findings are important and integral to the incorporation of a sparsely sampled Si-strip PCD in CT imaging.
Volumetric CT with sparse detector arrays (and application to Si-strip photon counters)
Sisniega, A; Zbijewski, W; Stayman, J W; Xu, J; Taguchi, K; Fredenberg, E; Lundqvist, Mats; Siewerdsen, J H
2016-01-01
Novel x-ray medical imaging sensors, such as photon counting detectors (PCDs) and large area CCD and CMOS cameras can involve irregular and/or sparse sampling of the detector plane. Application of such detectors to CT involves undersampling that is markedly different from the commonly considered case of sparse angular sampling. This work investigates volumetric sampling in CT systems incorporating sparsely sampled detectors with axial and helical scan orbits and evaluates performance of model-based image reconstruction (MBIR) with spatially varying regularization in mitigating artifacts due to sparse detector sampling. Volumetric metrics of sampling density and uniformity were introduced. Penalized-likelihood MBIR with a spatially varying penalty that homogenized resolution by accounting for variations in local sampling density (i.e. detector gaps) was evaluated. The proposed methodology was tested in simulations and on an imaging bench based on a Si-strip PCD (total area 5 cm × 25 cm) consisting of an arrangement of line sensors separated by gaps of up to 2.5 mm. The bench was equipped with translation/rotation stages allowing a variety of scanning trajectories, ranging from a simple axial acquisition to helical scans with variable pitch. Statistical (spherical clutter) and anthropomorphic (hand) phantoms were considered. Image quality was compared to that obtained with a conventional uniform penalty in terms of structural similarity index (SSIM), image uniformity, spatial resolution, contrast, and noise. Scan trajectories with intermediate helical width (~10 mm longitudinal distance per 360° rotation) demonstrated optimal tradeoff between the average sampling density and the homogeneity of sampling throughout the volume. For a scan trajectory with 10.8 mm helical width, the spatially varying penalty resulted in significant visual reduction of sampling artifacts, confirmed by a 10% reduction in minimum SSIM (from 0.88 to 0.8) and a 40% reduction in the dispersion of SSIM in the volume compared to the constant penalty (both penalties applied at optimal regularization strength). Images of the spherical clutter and wrist phantoms confirmed the advantages of the spatially varying penalty, showing a 25% improvement in image uniformity and 1.8 × higher CNR (at matched spatial resolution) compared to the constant penalty. The studies elucidate the relationship between sampling in the detector plane, acquisition orbit, sampling of the reconstructed volume, and the resulting image quality. They also demonstrate the benefit of spatially varying regularization in MBIR for scenarios with irregular sampling patterns. Such findings are important and integral to the incorporation of a sparsely sampled Si-strip PCD in CT imaging. PMID:26611740
A multi-particle crushing apparatus for studying rock fragmentation due to repeated impacts
NASA Astrophysics Data System (ADS)
Huang, S.; Mohanty, B.; Xia, K.
2017-12-01
Rock crushing is a common process in mining and related operations. Although a number of particle crushing tests have been proposed in the literature, most of them are concerned with single-particle crushing, i.e., a single rock sample is crushed in each test. Considering the realistic scenario in crushers where many fragments are involved, a laboratory crushing apparatus is developed in this study. This device consists of a Hopkinson pressure bar system and a piston-holder system. The Hopkinson pressure bar system is used to apply calibrated dynamic loads to the piston-holder system, and the piston-holder system is used to hold rock samples and to recover fragments for subsequent particle size analysis. The rock samples are subjected to three to seven impacts under three impact velocities (2.2, 3.8, and 5.0 m/s), with the feed size of the rock particle samples limited to between 9.5 and 12.7 mm. Several key parameters are determined from this test, including particle size distribution parameters, impact velocity, loading pressure, and total work. The results show that the total work correlates well with the resulting fragmentation size distribution, and the apparatus provides a useful tool for studying the mechanism of crushing, which further provides guidelines for the design of commercial crushers.
Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data.
Li, Johnson Ching-Hong
2016-12-01
In psychological science, the "new statistics" refer to the new statistical practices that focus on effect size (ES) evaluation instead of conventional null-hypothesis significance testing (Cumming, Psychological Science, 25, 7-29, 2014). In a two-independent-samples scenario, Cohen's (1988) standardized mean difference (d) is the most popular ES, but its accuracy relies on two assumptions: normality and homogeneity of variances. Five other ESs-the unscaled robust d (d_r*; Hogarty & Kromrey, 2001), scaled robust d (d_r; Algina, Keselman, & Penfield, Psychological Methods, 10, 317-328, 2005), point-biserial correlation (r_pb; McGrath & Meyer, Psychological Methods, 11, 386-401, 2006), common-language ES (CL; Cliff, Psychological Bulletin, 114, 494-509, 1993), and nonparametric estimator for CL (A_w; Ruscio, Psychological Methods, 13, 19-30, 2008)-may be robust to violations of these assumptions, but no study has systematically evaluated their performance. Thus, in this simulation study the performance of these six ESs was examined across five factors: data distribution, sample, base rate, variance ratio, and sample size. The results showed that A_w and d_r were generally robust to these violations, and A_w slightly outperformed d_r. Implications for the use of A_w and d_r in real-world research are discussed.
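Two of the effect sizes contrasted above are straightforward to compute. The sketch below implements Cohen's d with a pooled standard deviation and a nonparametric common-language effect size of the A type (the probability that a random observation from one group exceeds one from the other, ties counted as one half), applied to skewed, heteroscedastic samples of the kind that motivate the robust measures; the data and brute-force computation are illustrative rather than the simulation design of the paper.

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference with the pooled standard deviation."""
    nx, ny = len(x), len(y)
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                 / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / sp

def a_measure(x, y):
    """Nonparametric common-language effect size: P(X > Y) with ties as 1/2."""
    x, y = np.asarray(x), np.asarray(y)
    greater = (x[:, None] > y[None, :]).sum()
    ties = (x[:, None] == y[None, :]).sum()
    return (greater + 0.5 * ties) / (len(x) * len(y))

rng = np.random.default_rng(6)
# Skewed, unequal-variance samples: the situation in which d is least trustworthy
x = rng.lognormal(0.4, 1.0, 80)
y = rng.lognormal(0.0, 0.5, 40)
print(f"Cohen's d = {cohens_d(x, y):.2f},  A = {a_measure(x, y):.2f}")
```

Because A depends only on the ordering of observations, it is unaffected by the skew and variance heterogeneity that distort d.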
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armstrong, Alacia; Valverde, Angel; Ramond, Jean-Baptiste
The temporal dynamics of desert soil microbial communities are poorly understood. Given the implications for ecosystem functioning under a global change scenario, a better understanding of desert microbial community stability is crucial. Here, we sampled soils in the central Namib Desert on sixteen different occasions over a one-year period. Using Illumina-based amplicon sequencing of the 16S rRNA gene, we found that α-diversity (richness) was more variable at a given sampling date (spatial variability) than over the course of one year (temporal variability). Community composition remained essentially unchanged across the first 10 months, indicating that spatial sampling might be more important than temporal sampling when assessing β-diversity patterns in desert soils. However, a major shift in microbial community composition was found following a single precipitation event. This shift in composition was associated with a rapid increase in CO2 respiration and productivity, supporting the view that desert soil microbial communities respond rapidly to re-wetting and that this response may be the result of both taxon-specific selection and changes in the availability or accessibility of organic substrates. Recovery to quasi pre-disturbance community composition was achieved within one month after rainfall.
Di Giorgio, Marina; Radl, Analía; Taja, María R; Bubniak, Ruth; Deminge, Mayra; Sapienza, Carla; Vázquez, Marina; Baciu, Florian; Kenny, Pat
2014-06-01
It has been observed that victims of accidental overexposures show a better chance of survival if they receive medical treatment early. The increased risk of scenarios involving mass casualties has stimulated the scientific community to develop tools that would help the medical doctors to treat victims. Biological dosimetry has become a routine test to estimate the dose, supplementing physical and clinical dosimetry. In case of radiation emergencies, in order to provide timely and effective biological dosimetry assistance it is essential to guarantee adequate transport of blood samples, in particular for providing support to countries that do not have biodosimetry laboratories. The objective of the present paper is to provide general guidelines, summarised in 10 points, for timely and proper receiving and sending of blood samples under national and international regulations, for safe and expeditious international transport. These guidelines cover the classification, packaging, marking, labelling, refrigeration and documentation requirements for the international shipping of blood samples and pellets, to provide assistance missions with a tool that would contribute to preparedness for an effective biodosimetric response in cases of radiological or nuclear emergencies. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Simultaneous determination of specific alpha and beta emitters by LSC-PLS in water samples.
Fons-Castells, J; Tent-Petrus, J; Llauradó, M
2017-01-01
Liquid scintillation counting (LSC) is a commonly used technique for the determination of alpha and beta emitters. However, LSC has poor resolution, and the continuous spectra of beta emitters hinder the simultaneous determination of several alpha and beta emitters from the same spectrum. In this paper, the feasibility of multivariate calibration by partial least squares (PLS) models for the determination of several alpha (natU, 241Am and 226Ra) and beta emitters (40K, 60Co, 90Sr/90Y, 134Cs and 137Cs) in water samples is reported. A set of alpha and beta spectra from radionuclide calibration standards were used to construct three PLS models. Experimentally mixed radionuclides and intercomparison materials were used to validate the models. The results had a maximum relative bias of 25% when all the radionuclides in the sample were included in the calibration set; otherwise the relative bias was over 100% for some radionuclides. The results obtained show that LSC-PLS is a useful approach for the simultaneous determination of alpha and beta emitters in multi-radionuclide samples. However, to obtain useful results, it is important to include all the radionuclides expected in the studied scenario in the calibration set. Copyright © 2016 Elsevier Ltd. All rights reserved.
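The PLS step of such a method can be sketched with scikit-learn's PLSRegression: mixed spectra from calibration standards are regressed on the known activities, and the fitted model is then applied to an unknown mixture. The "spectra" below are synthetic Gaussian stand-ins for real alpha peaks and beta continua, so the sketch illustrates the multivariate calibration idea only, not the paper's three models or its data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
channels = np.arange(1024)

def peak(center, width):
    return np.exp(-0.5 * ((channels - center) / width) ** 2)

# Synthetic single-radionuclide "spectra": narrow alpha peaks, broad beta continua
components = np.vstack([peak(600, 15),     # alpha emitter 1
                        peak(650, 18),     # alpha emitter 2 (overlapping)
                        peak(250, 120),    # beta continuum 1
                        peak(400, 150)])   # beta continuum 2

# Calibration set: random activity mixtures of the four radionuclides
activities = rng.uniform(0, 10, (60, 4))
spectra = activities @ components + rng.normal(0, 0.2, (60, len(channels)))

pls = PLSRegression(n_components=6)
pls.fit(spectra, activities)

# "Unknown" sample containing all calibrated radionuclides
true = np.array([2.0, 5.0, 1.0, 7.0])
unknown = true @ components + rng.normal(0, 0.2, len(channels))
print("estimated activities:", np.round(pls.predict(unknown[None, :])[0], 2))
```

The same mechanism explains the caveat in the abstract: a radionuclide absent from the calibration set contributes spectral shape the model cannot attribute, so its activity is smeared onto the calibrated components.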
Prediction of tropospheric ozone concentrations by using the design system approach.
Abdul-Wahab, Sabah A; Abdo, Jamil
2007-01-01
Data on the concentrations of non-methane hydrocarbons (NMHC), nitric oxide (NO), nitrogen dioxide (NO2), carbon monoxide (CO), and meteorological parameters (air temperature and solar radiation) were used to predict the concentration of tropospheric ozone using the Design-Ease software. These data were collected on an hourly basis over a 12-month period. Sampling of the data was conducted automatically. The effect of the NMHC, NO, NO2, CO, temperature and solar radiation variables in predicting ozone concentrations was examined under two scenarios: (i) when NO is included in the absence of NO2; and (ii) when NO2 is included in the absence of NO. The results of these two scenarios were validated against actual ozone data. The predicted concentration of ozone in the second scenario (i.e., when NO2 is included) was in better agreement with the real observations. In addition, the paper indicated that statistical models of hourly surface ozone concentrations require interactions and non-linear relationships between predictor variables in order to accurately capture the ozone behavior.
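The closing point of the abstract, that interaction and non-linear terms are needed, can be illustrated with a small second-order regression. The sketch below builds a factorial-style design matrix (main effects plus all pairwise products, including squares) and fits it by least squares to synthetic hourly data; the predictor ranges and the "true" ozone relationship are invented for illustration and are not the Design-Ease model of the study.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500

# Synthetic hourly predictors (illustrative ranges only)
nmhc = rng.uniform(0.1, 2.0, n)      # ppm
no2  = rng.uniform(5, 60, n)         # ppb
co   = rng.uniform(0.2, 3.0, n)      # ppm
temp = rng.uniform(20, 45, n)        # deg C
rad  = rng.uniform(0, 1000, n)       # W/m2

# Synthetic "true" ozone with a temperature * radiation interaction
o3 = 10 + 0.02 * rad + 0.6 * temp + 8 * nmhc - 0.15 * no2 \
     + 0.0005 * temp * rad + rng.normal(0, 4, n)

def design(*cols):
    """Second-order design matrix: intercept, main effects, x_i * x_j (i <= j)."""
    cols = list(cols)
    X = [np.ones(n)] + cols
    X += [a * b for i, a in enumerate(cols) for b in cols[i:]]
    return np.column_stack(X)

X = design(nmhc, no2, co, temp, rad)
beta, *_ = np.linalg.lstsq(X, o3, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((o3 - pred) ** 2) / np.sum((o3 - o3.mean()) ** 2)
print(f"second-order model fit: R^2 = {r2:.3f}")
```

Refitting with the interaction columns removed and comparing R^2 is a quick way to see how much of the ozone behavior the cross-terms carry.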
Walter, Steffen; Wendt, Cornelia; Böhnke, Jan; Crawcour, Stephen; Tan, Jun-Wen; Chan, Andre; Limbrecht, Kerstin; Gruss, Sascha; Traue, Harald C
2014-01-01
Cognitive-technical intelligence is envisioned to be constantly available and capable of adapting to the user's emotions. However, the question is: what specific emotions should be reliably recognised by intelligent systems? Hence, in this study, we have attempted to identify similarities and differences of emotions between human-human (HHI) and human-machine interactions (HMI). We focused on which emotions are retrospectively reported in experienced HMI scenarios as compared with HHI. The sample consisted of N = 145 participants, who were divided into two groups. Positive and negative scenario descriptions of HMI and HHI were given by the first and second groups, respectively. Subsequently, the participants evaluated their respective scenarios with the help of 94 adjectives relating to emotions. The correlations between the occurrences of emotions in HMI versus HHI were very high. The results do not support the statement that only a few emotions in HMI are relevant.
Prudent layperson definition of an emergent pediatric medical condition.
Huang, Craig J; Poirier, Michael P; Cantwell, John R; Ermis, Peter R; Isaacman, Daniel J
2006-03-01
This study was designed to assess how well parents rated pediatric medical conditions based on their perceived degree of urgency so as to determine if the "Prudent Layperson Standard" is reasonable. A self-administered, supervised survey was given to a convenience sample of 340 caregivers in the emergency department of an urban children's hospital. Respondents were asked to rank the urgency of 15 scenarios. A caregiver response within 1 point of the physician score was considered concordant with medical opinion. A 2-week-old infant with a rectal temperature of 103.7 degrees F was the only emergent scenario underestimated by caregivers. A 1 1/2-yr-old child with an upper respiratory tract infection, a 7-year-old child with ringworm, an 8-month-old infant with a simple forehead contusion, and a 4-year-old child with conjunctivitis were the non-urgent scenarios overestimated by caregivers. Laypeople are able to identify cases constructed to represent obvious pediatric medical emergencies. Several patient subgroups frequently overestimate medical urgency.
Does warm debris dust stem from asteroid belts?
NASA Astrophysics Data System (ADS)
Geiler, Fabian; Krivov, Alexander V.
2017-06-01
Many debris discs reveal a two-component structure, with a cold outer and a warm inner component. While the former are likely massive analogues of the Kuiper belt, the origin of the latter is still a matter of debate. In this work, we investigate whether the warm dust may be a signature of asteroid belt analogues. In the scenario tested here, the current two-belt architecture stems from an originally extended protoplanetary disc, in which planets have opened a gap separating it into the outer and inner discs which, after the gas dispersal, experience a steady-state collisional decay. This idea is explored with an analytic collisional evolution model for a sample of 225 debris discs from a Spitzer/IRS catalogue that are likely to possess a two-component structure. We find that the vast majority of systems (220 out of 225, or 98 per cent) are compatible with this scenario. For their progenitors, original protoplanetary discs, we find an average surface density slope of -0.93 ± 0.06 and an average initial mass of 3.3 (+0.4/−0.3) × 10^−3 solar masses, both of which are in agreement with the values inferred from submillimetre surveys. However, dust production by short-period comets and - more rarely - inward transport from the outer belts may be viable, and not mutually excluding, alternatives to the asteroid belt scenario. The remaining five discs (2 per cent of the sample: HIP 11486, HIP 23497, HIP 57971, HIP 85790, HIP 89770) harbour inner components that appear inconsistent with dust production in an 'asteroid belt.' Warm dust in these systems must either be replenished from cometary sources or represent an aftermath of a recent rare event, such as a major collision or planetary system instability.
Ollson, Christopher A; Knopper, Loren D; Whitfield Aslund, Melissa L; Jayasinghe, Ruwan
2014-01-01
The regions of Durham and York in Ontario, Canada have partnered to construct an energy-from-waste thermal treatment facility as part of a long-term strategy for the management of their municipal solid waste. This paper presents the results of a comprehensive human health risk assessment for this facility. This assessment was based on extensive sampling of baseline environmental conditions (e.g., collection and analysis of air, soil, water, and biota samples) as well as detailed site-specific modeling to predict facility-related emissions of 87 identified contaminants of potential concern. Emissions were estimated for both the approved initial operating design capacity of the facility (140,000 tonnes per year) and for the maximum design capacity (400,000 tonnes per year). For the 140,000 tonnes per year scenario, this assessment indicated that facility-related emissions are unlikely to cause adverse health risks to local residents, farmers, or other receptors (e.g., recreational users). For the 400,000 tonnes per year scenario, slightly elevated risks were noted with respect to inhalation (hydrogen chloride) and infant consumption of breast milk (dioxins and furans), but only during predicted 'upset conditions' (i.e., facility start-up, shutdown, and loss of air pollution control) that represent unusual and/or transient occurrences. However, current provincial regulations require additional environmental screening prior to any expansion of the facility beyond the initial approved capacity (140,000 tonnes per year). Therefore, the potential risks due to upset conditions for the 400,000 tonnes per year scenario should be more closely investigated if future expansion is pursued. © 2013.
Rawashdeh, Nathir A.
2018-01-01
Visual inspection through image processing of welded and shot-peened surfaces is necessary to overcome equipment limitations, avoid measurement errors, and accelerate processing to attain certain surface properties such as surface roughness. It is therefore important to design an algorithm that quantifies surface properties and overcomes the aforementioned limitations. In this study, a proposed systematic algorithm is used to generate and compare the surface roughness of Tungsten Inert Gas (TIG) welded aluminum 6061-T6 alloy treated by two levels of shot-peening, high-intensity and low-intensity. This project is industrial in nature, and the proposed solution was originally requested by local industry to overcome equipment limitations. In particular, surface roughness measurements are usually only possible on flat surfaces but not on other areas treated by shot-peening after welding, such as the heat-affected zone and weld beads; those critical areas therefore lie beyond the capability of conventional measurement. Using the proposed technique, surface roughness measurements could be obtained for weld beads and for high-intensity and low-intensity shot-peened surfaces. In addition, a 3D surface topography was generated and dimple size distributions were calculated for the three tested scenarios: control sample (TIG-welded only), high-intensity shot-peened, and low-intensity shot-peened TIG-welded Al6061-T6 samples. Finally, cross-sectional hardness profiles were measured for the three scenarios; in all scenarios, hardness in the heat-affected zone and in the weld beads remained lower than that of the base metal alloy, even after shot-peening treatments. PMID:29748520
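The abstract above does not spell out the image-processing algorithm, so the following is only a minimal sketch, under the assumption that a calibrated grayscale image can be treated as a height map: it computes standard areal roughness statistics (Sa, Sq, Sz) with NumPy. The function name and the synthetic data are hypothetical, not taken from the study.

```python
# Minimal sketch (not the authors' algorithm): estimate areal roughness
# parameters from a 2-D height map, assuming pixel intensity has already been
# calibrated to surface height.
import numpy as np

def roughness_from_heightmap(z: np.ndarray) -> dict:
    """Compute simple areal roughness statistics from a 2-D height map."""
    z = z.astype(float)
    z = z - z.mean()                      # remove the mean height
    sa = np.mean(np.abs(z))               # arithmetic mean deviation (Sa)
    sq = np.sqrt(np.mean(z ** 2))         # RMS roughness (Sq)
    sz = z.max() - z.min()                # peak-to-valley height (Sz)
    return {"Sa": sa, "Sq": sq, "Sz": sz}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic rough surface standing in for a shot-peened sample.
    height = rng.normal(0.0, 1.0, size=(256, 256))
    print(roughness_from_heightmap(height))
```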
The Nature and Origin of UCDs in the Coma Cluster
NASA Astrophysics Data System (ADS)
Chiboucas, Kristin; Tully, R. Brent; Madrid, Juan; Phillipps, Steven; Carter, David; Peng, Eric
2018-01-01
UCDs are super massive star clusters found largely in dense regions, but they have also been found around individual galaxies and in smaller groups. Their origin is still under debate, but currently favored scenarios include formation as giant star clusters, either as the brightest globular clusters or through mergers of super star clusters, themselves formed during major galaxy mergers, or as remnant nuclei from tidal stripping of nucleated dwarf ellipticals. Establishing the nature of these enigmatic objects has important implications for our understanding of star formation, star cluster formation, the missing satellite problem, and galaxy evolution. We are attempting to disentangle these competing formation scenarios with a large survey of UCDs in the Coma cluster. Using ACS two-passband imaging from the HST/ACS Coma Cluster Treasury Survey, we identify UCD cluster members by their colors and sizes. With a large, size-limited sample of the UCD population within the core region of the Coma cluster, we are investigating the population size, properties, and spatial distribution, and comparing these with the Coma globular cluster and nuclear star cluster populations to discriminate between the threshing and globular cluster scenarios. In previous work, we had found a possible correlation of UCD colors with host galaxy and a possible excess of UCDs around a non-central giant galaxy with an unusually large globular cluster population, both suggestive of a globular cluster origin. With a larger sample size and additional imaging fields that encompass the regions around these giant galaxies, we find that the color correlation with host persists and that the giant galaxy with the unusually large globular cluster population does appear to host a large UCD population as well. We present the current status of the survey.
A challenge to dSph formation models: are the most isolated Local Group dSph galaxies truly old?
NASA Astrophysics Data System (ADS)
Monelli, Matteo
2017-08-01
What is the origin of the different dwarf galaxy types? The classification into dwarf irregular (dIrr), spheroidal (dSph), and transition (dT) types is based on their present-day properties. However, star formation histories (SFHs) reconstructed from deep color-magnitude diagrams (CMDs) provide details on the early evolution of galaxies of all these types, and indicate only two basic evolutionary paths. One is characterized by a vigorous but brief initial star-forming event, and little or no star formation thereafter (fast evolution), and the other one by roughly continuous star formation until (nearly) the present time (slow evolution). These two paths do not map directly onto the dIrr, dT and dSph types. Thus, the present galaxy properties do not reflect their lifetime evolution. Since there are some indications that slow dwarfs were assembled in lower-density environments than fast dwarfs, Gallart et al (2015) proposed that the distinction between fast and slow dwarfs reflects the characteristic density of the environment where they formed. This scenario, and more generally scenarios where dSph galaxies formed through the interaction with a massive galaxy, are challenged by a small sample of extremely isolated dSph/dT in the outer fringes of the Local Group. This proposal targets two of these objects (VV124, KKR25) for which we will infer their SFH - through a novel technique that combines the information from their RR Lyrae stars and deep CMDs sampling the intermediate-age population - in order to test these scenarios. This is much less demanding on observing time than classical SFH derivation using full depth CMDs.
Atieh, Anas M; Rawashdeh, Nathir A; AlHazaa, Abdulaziz N
2018-05-10
Visual inspection through image processing of welded and shot-peened surfaces is necessary to overcome equipment limitations, avoid measurement errors, and accelerate processing to attain certain surface properties such as surface roughness. It is therefore important to design an algorithm that quantifies surface properties and overcomes the aforementioned limitations. In this study, a proposed systematic algorithm is used to generate and compare the surface roughness of Tungsten Inert Gas (TIG) welded aluminum 6061-T6 alloy treated by two levels of shot-peening, high-intensity and low-intensity. This project is industrial in nature, and the proposed solution was originally requested by local industry to overcome equipment limitations. In particular, surface roughness measurements are usually only possible on flat surfaces but not on other areas treated by shot-peening after welding, such as the heat-affected zone and weld beads; those critical areas therefore lie beyond the capability of conventional measurement. Using the proposed technique, surface roughness measurements could be obtained for weld beads and for high-intensity and low-intensity shot-peened surfaces. In addition, a 3D surface topography was generated and dimple size distributions were calculated for the three tested scenarios: control sample (TIG-welded only), high-intensity shot-peened, and low-intensity shot-peened TIG-welded Al6061-T6 samples. Finally, cross-sectional hardness profiles were measured for the three scenarios; in all scenarios, hardness in the heat-affected zone and in the weld beads remained lower than that of the base metal alloy, even after shot-peening treatments.
Challenges of DNA-based mark-recapture studies of American black bears
Settlage, K.E.; Van Manen, F.T.; Clark, J.D.; King, T.L.
2008-01-01
We explored whether genetic sampling would be feasible to provide a region-wide population estimate for American black bears (Ursus americanus) in the southern Appalachians, USA. Specifically, we determined whether adequate capture probabilities (p >0.20) and population estimates with a low coefficient of variation (CV <20%) could be achieved given typical agency budget and personnel constraints. We extracted DNA from hair collected from baited barbed-wire enclosures sampled over a 10-week period on 2 study areas: a high-density black bear population in a portion of Great Smoky Mountains National Park and a lower density population on National Forest lands in North Carolina, South Carolina, and Georgia. We identified individual bears by their unique genotypes obtained from 9 microsatellite loci. We sampled 129 and 60 different bears in the National Park and National Forest study areas, respectively, and applied closed mark–recapture models to estimate population abundance. Capture probabilities and precision of the population estimates were acceptable only for sampling scenarios for which we pooled weekly sampling periods. We detected capture heterogeneity biases, probably because of inadequate spatial coverage by the hair-trapping grid. The logistical challenges of establishing and checking a sufficiently high density of hair traps make DNA-based estimates of black bears impractical for the southern Appalachian region. Alternatives are to estimate population size for smaller areas, estimate population growth rates or survival using mark–recapture methods, or use independent marking and recapturing techniques to reduce capture heterogeneity.
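The study above fits closed mark–recapture models that account for capture heterogeneity; those models are not reproduced here. As a much simpler illustration of the same idea, the sketch below applies the Chapman-corrected Lincoln–Petersen estimator to two pooled hair-sampling sessions with hypothetical capture numbers and reports the coefficient of variation against the CV < 20% target mentioned above.

```python
# Illustrative sketch only: Chapman-corrected Lincoln-Petersen estimate for two
# pooled hair-sampling sessions. The numbers below are hypothetical, not data
# from the study.
def chapman_estimate(n1: int, n2: int, m2: int) -> tuple[float, float]:
    """n1 bears marked in session 1, n2 captured in session 2, m2 recaptures."""
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
    return n_hat, var ** 0.5

if __name__ == "__main__":
    n_hat, se = chapman_estimate(n1=70, n2=80, m2=35)
    cv = se / n_hat
    print(f"N-hat = {n_hat:.0f}, SE = {se:.0f}, CV = {cv:.1%}")  # target CV < 20%
```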
Appannanavar, Suma B; Biswal, Manisha; Rajkumari, Nonika; Mohan, Balvinder; Taneja, Neelam
2013-01-01
Urine culture is a gold standard in the diagnosis of urinary tract infection. Clean-catch midstream urine collection and prompt transportation are essential for appropriate diagnosis. Improper collection and delay in transportation lead to diagnostic dilemmas. In developing countries, higher ambient temperatures further complicate the scenario. Here, we have evaluated the role of boric acid as a preservative for urine samples prior to culture in female patients attending the outpatient department at our center. A total of 104 consecutive urine samples were cultured simultaneously in plain uricol (control group, C) and in boric acid-containing tubes from a Becton Dickinson urine culture kit (boric acid group, BA). In the real-time evaluation, we found that in almost 57% (59/104) of the urine samples tested, boric acid was more effective in maintaining the number of organisms as compared with samples in the container without any preservative. Our in vitro study of simulated urine cultures revealed that urine samples could be kept for up to 12 h before culture in the preservative without any inhibitory effect of boric acid. Though the use of the boric acid kit may marginally increase the initial cost, it has indirect benefits such as preventing delays in treatment and avoiding false prescription of antibiotics. If the man-hours spent on repeat investigations are also taken into consideration, then the economic cost borne by the laboratory would also decrease manifold with the use of these containers.
Microbial health risks associated with exposure to stormwater in a water plaza.
Sales-Ortells, Helena; Medema, Gertjan
2015-05-01
Climate change scenarios predict an increase of intense rainfall events in summer in Western Europe. Current urban drainage systems cannot cope with such intense precipitation events. Cities are constructing stormwater storage facilities to prevent pluvial flooding. Combining storage with other functions, such as recreation, may lead to exposure to contaminants. This study assessed the microbial quality of rainwater collected in a water plaza and the health risks associated with recreational exposure. The water plaza collects street run-off, diverts the first flush to the sewer system and stores the rest of the run-off in the plaza as open water. Campylobacter, Cryptosporidium and Legionella pneumophila were the pathogens investigated. Microbial source tracking tools were used to determine the origin (human, animal) of the intestinal pathogens. Cryptosporidium was not found in any sample. Campylobacter was found in all samples, with higher concentrations in samples containing human Bacteroides than in samples with zoonotic contamination (15 vs 3.7 gc (genomic copies)/100 mL). In both cases, the estimated disease risk associated with Campylobacter and recreational exposure was higher than the Dutch national incidence. This indicates that the health risk associated with recreational exposure to the water plaza is significant. L. pneumophila was found only in two out of ten pond samples. Legionnaires' disease risks were lower than the Dutch national incidence. Presence of human Bacteroides indicates possible cross-connections with the CSS that should be identified and removed. Copyright © 2015 Elsevier Ltd. All rights reserved.
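The abstract does not give the dose-response model used, so the sketch below is only a hedged illustration of how such a recreational risk estimate is typically assembled: an approximate beta-Poisson dose-response curve for Campylobacter (alpha and N50 set to commonly cited literature values), combined with an assumed ingestion volume and an assumed probability of illness given infection. It ignores, among other things, the difference between genomic copies and infectious units.

```python
# Hedged illustration (not the paper's exact QMRA): approximate beta-Poisson
# dose-response for Campylobacter with an assumed swallowed volume. alpha and
# N50 are commonly cited literature values; the exposure volume and the
# illness-given-infection factor are assumptions for illustration.
def beta_poisson_p_infection(dose: float, alpha: float = 0.145,
                             n50: float = 896.0) -> float:
    """Approximate beta-Poisson probability of infection for a mean dose."""
    return 1.0 - (1.0 + dose * (2 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

if __name__ == "__main__":
    conc_per_100ml = 15.0      # Campylobacter gc/100 mL (value quoted above)
    ingested_ml = 30.0         # assumed volume swallowed per play event
    dose = conc_per_100ml * ingested_ml / 100.0
    p_inf = beta_poisson_p_infection(dose)
    p_ill = 0.33 * p_inf       # assumed P(illness | infection)
    print(f"dose = {dose:.1f}, P(infection) = {p_inf:.3f}, P(illness) = {p_ill:.3f}")
```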
Cao, Yiping; Sivaganesan, Mano; Kelty, Catherine A; Wang, Dan; Boehm, Alexandria B; Griffith, John F; Weisberg, Stephen B; Shanks, Orin C
2018-01-01
Human fecal pollution of recreational waters remains a public health concern worldwide. As a result, there is a growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality research and management. However, there are currently no standardized approaches for field implementation and interpretation of qPCR data. In this study, a standardized HF183/BacR287 qPCR method was combined with a water sampling strategy and a novel Bayesian weighted average approach to establish a human fecal contamination score (HFS) that can be used to prioritize sampling sites for remediation based on measured human waste levels. The HFS was then used to investigate 975 study design scenarios utilizing different combinations of sites with varying sampling intensities (daily to once per week) and number of qPCR replicates per sample (2-14 replicates). Findings demonstrate that site prioritization with HFS is feasible and that both sampling intensity and number of qPCR replicates influence reliability of HFS estimates. The novel data analysis strategy presented here provides a prescribed approach for the implementation and interpretation of human-associated HF183/BacR287 qPCR data with the goal of site prioritization based on human fecal pollution levels. In addition, information is provided for future users to customize study designs for optimal HFS performance. Published by Elsevier Ltd.
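The Bayesian weighted average behind the published HFS is not detailed in the abstract, so the sketch below substitutes a deliberately simple stand-in: average log10 HF183/BacR287 concentrations over qPCR replicates and sampling days, then rank sites by that score. All data are synthetic and the function is illustrative only.

```python
# Simplified stand-in for the human fecal contamination score (HFS): average
# log10 concentrations over qPCR replicates and sampling days, then rank sites.
# The published method uses a Bayesian weighted average; data here are synthetic.
import numpy as np

def simple_site_score(replicate_conc: np.ndarray) -> float:
    """replicate_conc: array of shape (samples, qpcr_replicates), copies/reaction."""
    detected = np.clip(replicate_conc, 1.0, None)   # floor non-detects at 1 copy
    return float(np.mean(np.log10(detected)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sites = {
        "site_A": rng.lognormal(mean=4.0, sigma=0.5, size=(7, 4)),
        "site_B": rng.lognormal(mean=2.0, sigma=0.5, size=(7, 4)),
        "site_C": rng.lognormal(mean=3.0, sigma=0.5, size=(7, 4)),
    }
    ranked = sorted(sites, key=lambda s: simple_site_score(sites[s]), reverse=True)
    for s in ranked:                       # highest score = highest remediation priority
        print(s, round(simple_site_score(sites[s]), 2))
```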
Calibration and use of the polar organic chemical integrative sampler--a critical review.
Harman, Christopher; Allan, Ian John; Vermeirssen, Etiënne L M
2012-12-01
The implementation of strict environmental quality standards for polar organic priority pollutants poses a challenge for monitoring programs. The polar organic chemical integrative sampler (POCIS) may help to address the challenge of measuring low and fluctuating trace concentrations of such organic contaminants, offering significant advantages over traditional sampling. In the present review, the authors evaluate POCIS calibration methods and factors affecting sampling rates together with reported environmental applications. Over 300 compounds have been shown to accumulate in POCIS, including pesticides, pharmaceuticals, hormones, and industrial chemicals. Polar organic chemical integrative sampler extracts have been used for both chemical and biological analyses. Several different calibration methods have been described, which makes it difficult to directly compare sampling rates. In addition, despite the fact that some attempts to correlate sampling rates with the properties of target compounds such as log K_OW have been met with varying success, an overall model that can predict uptake is lacking. Furthermore, temperature, water flow rates, salinity, pH, and fouling have all been shown to affect uptake; however, there is currently no robust method available for adjusting for these differences. Overall, POCIS has been applied to a wide range of sampling environments and scenarios and has been proven to be a useful screening tool. However, based on the existing literature, a more mechanistic approach is required to increase understanding and thus improve the quantitative nature of the measurements. Copyright © 2012 SETAC.
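For readers unfamiliar with integrative passive sampling, the quantitative step the review refers to is the standard time-weighted average calculation, C_TWA = N / (Rs * t). The sketch below applies it with hypothetical numbers; the compound- and condition-specific sampling rate Rs is exactly the calibration problem discussed above.

```python
# Standard time-weighted average (TWA) calculation used with integrative
# passive samplers such as POCIS: C_TWA = N / (Rs * t), where N is the mass of
# analyte accumulated on the sorbent, Rs the sampling rate, and t the
# deployment time. Rs values are compound- and condition-specific; the numbers
# below are hypothetical.
def twa_concentration(mass_ng: float, sampling_rate_l_per_day: float,
                      deployment_days: float) -> float:
    """Return the TWA water concentration in ng/L."""
    return mass_ng / (sampling_rate_l_per_day * deployment_days)

if __name__ == "__main__":
    # e.g. 120 ng accumulated, assumed Rs = 0.24 L/day, 21-day deployment
    print(f"C_TWA = {twa_concentration(120.0, 0.24, 21.0):.2f} ng/L")
```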
Global effects of land use on local terrestrial biodiversity.
Newbold, Tim; Hudson, Lawrence N; Hill, Samantha L L; Contu, Sara; Lysenko, Igor; Senior, Rebecca A; Börger, Luca; Bennett, Dominic J; Choimes, Argyrios; Collen, Ben; Day, Julie; De Palma, Adriana; Díaz, Sandra; Echeverria-Londoño, Susy; Edgar, Melanie J; Feldman, Anat; Garon, Morgan; Harrison, Michelle L K; Alhusseini, Tamera; Ingram, Daniel J; Itescu, Yuval; Kattge, Jens; Kemp, Victoria; Kirkpatrick, Lucinda; Kleyer, Michael; Correia, David Laginha Pinto; Martin, Callum D; Meiri, Shai; Novosolov, Maria; Pan, Yuan; Phillips, Helen R P; Purves, Drew W; Robinson, Alexandra; Simpson, Jake; Tuck, Sean L; Weiher, Evan; White, Hannah J; Ewers, Robert M; Mace, Georgina M; Scharlemann, Jörn P W; Purvis, Andy
2015-04-02
Human activities, especially conversion and degradation of habitats, are causing global biodiversity declines. How local ecological assemblages are responding is less clear--a concern given their importance for many ecosystem functions and services. We analysed a terrestrial assemblage database of unprecedented geographic and taxonomic coverage to quantify local biodiversity responses to land use and related changes. Here we show that in the worst-affected habitats, these pressures reduce within-sample species richness by an average of 76.5%, total abundance by 39.5% and rarefaction-based richness by 40.3%. We estimate that, globally, these pressures have already slightly reduced average within-sample richness (by 13.6%), total abundance (10.7%) and rarefaction-based richness (8.1%), with changes showing marked spatial variation. Rapid further losses are predicted under a business-as-usual land-use scenario; within-sample richness is projected to fall by a further 3.4% globally by 2100, with losses concentrated in biodiverse but economically poor countries. Strong mitigation can deliver much more positive biodiversity changes (up to a 1.9% average increase) that are less strongly related to countries' socioeconomic status.
External Standards or Standard Addition? Selecting and Validating a Method of Standardization
NASA Astrophysics Data System (ADS)
Harvey, David T.
2002-05-01
A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and obtain a practical experience in the difference between performing an external standardization and a standard addition.
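A minimal numerical illustration of the choice the experiment teaches, with hypothetical signals: external calibration underestimates the analyte when the sample matrix suppresses sensitivity, while standard addition (extrapolation to the x-intercept) compensates for the proportional matrix effect.

```python
# Sketch of the two standardization strategies with hypothetical signals.
# External calibration fits standards prepared in a clean matrix; standard
# addition spikes the sample itself and extrapolates to the x-intercept.
import numpy as np

def external_standard(conc_std, signal_std, signal_sample):
    slope, intercept = np.polyfit(conc_std, signal_std, 1)
    return (signal_sample - intercept) / slope

def standard_addition(conc_added, signal_spiked):
    slope, intercept = np.polyfit(conc_added, signal_spiked, 1)
    return intercept / slope            # |x-intercept| = analyte concentration

if __name__ == "__main__":
    # External standards (no matrix effect in these solutions)
    c_std = np.array([0.0, 2.0, 4.0, 6.0])
    s_std = np.array([0.02, 2.01, 4.02, 6.05])
    # Standard additions to the sample; matrix suppresses sensitivity by ~20%,
    # sample contains ~3.0 concentration units of analyte (synthetic values).
    c_add = np.array([0.0, 2.0, 4.0, 6.0])
    s_spk = np.array([2.40, 4.02, 5.58, 7.21])
    print("external estimate     :", round(external_standard(c_std, s_std, 2.40), 2))
    print("std-addition estimate :", round(standard_addition(c_add, s_spk), 2))
```

Running the sketch shows the external-standard result biased low by the matrix effect, while the standard-addition result recovers the true concentration.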
Levecke, Bruno; Anderson, Roy M; Berkvens, Dirk; Charlier, Johannes; Devleesschauwer, Brecht; Speybroeck, Niko; Vercruysse, Jozef; Van Aelst, Stefan
2015-03-01
In the present study, we present a hierarchical model based on faecal egg counts (FECs; expressed in eggs per gram of stool) in which we first describe the variation in FECs between individuals in a particular population, followed by the variance due to counting eggs under a microscope separately for each stool sample. From this general framework, we discuss how to calculate a sample size for assessing a population mean FEC and the impact of an intervention, measured as a reduction in FECs, for any scenario of soil-transmitted helminth (STH) epidemiology (the intensity and aggregation of FECs within a population) and diagnostic strategy (the amount of stool examined (∼sensitivity of the diagnostic technique) and examination of individual or pooled stool samples), and how to estimate the prevalence of STH in the absence of a gold standard. To give these applications the widest possible relevance, we illustrate each of them with hypothetical examples. Copyright © 2015 Elsevier Ltd. All rights reserved.
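The exact hierarchical model is not given in the abstract, so the following gamma-Poisson simulation is only a sketch of the structure it describes: between-host variation in true FECs plus Poisson counting error that scales with the amount of stool examined. The parameter values (mean EPG, aggregation k, 1/24 g per Kato-Katz slide) are assumptions for illustration, not the authors' values.

```python
# Gamma-Poisson sketch of the hierarchical structure described above:
# individual "true" FECs vary between hosts (gamma), and the microscope count
# adds Poisson error that depends on the amount of stool examined.
import numpy as np

def simulate_mean_fec(n_subjects: int, mean_fec: float, k: float,
                      grams_examined: float, rng) -> float:
    true_fec = rng.gamma(shape=k, scale=mean_fec / k, size=n_subjects)
    eggs_counted = rng.poisson(true_fec * grams_examined)
    return eggs_counted.mean() / grams_examined        # estimated mean EPG

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    mean_fec, k, grams = 200.0, 0.5, 1.0 / 24.0        # ~1/24 g per Kato-Katz slide
    for n in (50, 100, 200, 400):
        est = [simulate_mean_fec(n, mean_fec, k, grams, rng) for _ in range(2000)]
        rel_se = np.std(est) / mean_fec
        print(f"n = {n:3d}: relative SE of mean EPG = {rel_se:.2f}")
```

Raising the amount of stool examined or the number of subjects in this toy setup shows directly how the two variance components trade off, which is the sample-size question the framework addresses.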
Evolution of London penetration depth with scattering in single crystals of K1-xNaxFe2As2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Tanatar, M A; Liu, Yong
2014-05-01
London penetration depth, λ(T), was measured in single crystals of K1-xNaxFe2As2, x=0 and 0.07, down to temperatures of 50 mK, ~Tc/50. Isovalent substitution of Na for K significantly increases impurity scattering, with ρ(Tc) rising from 0.2 to 2.2 μΩ cm, and leads to a suppression of Tc from 3.5 to 2.8 K. At the same time, a close to T-linear Δλ(T) in pure samples changes to almost T^2 in the substituted samples. The behavior never becomes exponential, as would be expected for accidental nodes, as opposed to the T^2 dependence of superconductors with symmetry-imposed line nodes. The superfluid density over the full temperature range follows a simple clean and dirty d-wave dependence for pure and substituted samples, respectively. This result contradicts suggestions of multiband scenarios with strongly different gap structure on the four sheets of the Fermi surface.
Global effects of land use on local terrestrial biodiversity
NASA Astrophysics Data System (ADS)
Newbold, Tim; Hudson, Lawrence N.; Hill, Samantha L. L.; Contu, Sara; Lysenko, Igor; Senior, Rebecca A.; Börger, Luca; Bennett, Dominic J.; Choimes, Argyrios; Collen, Ben; Day, Julie; de Palma, Adriana; Díaz, Sandra; Echeverria-Londoño, Susy; Edgar, Melanie J.; Feldman, Anat; Garon, Morgan; Harrison, Michelle L. K.; Alhusseini, Tamera; Ingram, Daniel J.; Itescu, Yuval; Kattge, Jens; Kemp, Victoria; Kirkpatrick, Lucinda; Kleyer, Michael; Correia, David Laginha Pinto; Martin, Callum D.; Meiri, Shai; Novosolov, Maria; Pan, Yuan; Phillips, Helen R. P.; Purves, Drew W.; Robinson, Alexandra; Simpson, Jake; Tuck, Sean L.; Weiher, Evan; White, Hannah J.; Ewers, Robert M.; Mace, Georgina M.; Scharlemann, Jörn P. W.; Purvis, Andy
2015-04-01
Human activities, especially conversion and degradation of habitats, are causing global biodiversity declines. How local ecological assemblages are responding is less clear--a concern given their importance for many ecosystem functions and services. We analysed a terrestrial assemblage database of unprecedented geographic and taxonomic coverage to quantify local biodiversity responses to land use and related changes. Here we show that in the worst-affected habitats, these pressures reduce within-sample species richness by an average of 76.5%, total abundance by 39.5% and rarefaction-based richness by 40.3%. We estimate that, globally, these pressures have already slightly reduced average within-sample richness (by 13.6%), total abundance (10.7%) and rarefaction-based richness (8.1%), with changes showing marked spatial variation. Rapid further losses are predicted under a business-as-usual land-use scenario; within-sample richness is projected to fall by a further 3.4% globally by 2100, with losses concentrated in biodiverse but economically poor countries. Strong mitigation can deliver much more positive biodiversity changes (up to a 1.9% average increase) that are less strongly related to countries' socioeconomic status.
Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates
Bartroff, Jay; Song, Jinlin
2014-01-01
This paper addresses the following general scenario: a scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but may also be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm's (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed-sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
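The sequential procedure itself is not reproduced here; as background, the sketch below implements the fixed-sample Holm (1979) step-down procedure it generalizes, which controls the type I familywise error rate for m simultaneously tested hypotheses.

```python
# Holm's (1979) fixed-sample step-down procedure, for reference. Given p-values
# for m hypotheses, it controls the type I familywise error rate at level alpha.
def holm(p_values, alpha=0.05):
    """Return a list of booleans: True where the hypothesis is rejected."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for step, i in enumerate(order):
        # Compare the (step+1)-th smallest p-value with alpha / (m - step).
        if p_values[i] <= alpha / (m - step):
            reject[i] = True
        else:
            break                      # stop at the first non-rejection
    return reject

if __name__ == "__main__":
    print(holm([0.001, 0.01, 0.04, 0.20]))   # -> [True, True, False, False]
```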
Large-scale structure in the Southern Sky Redshift Survey
NASA Technical Reports Server (NTRS)
Park, Changbom; Gott, J. R., III; Da Costa, L. N.
1992-01-01
The power spectrum from the Southern Sky Redshift Survey and the CfA samples is measured in order to explore the amplitude of fluctuations in the galaxy density. At scales λ ≤ 30/h Mpc the observed power spectrum is quite consistent with the standard CDM model. At larger scales the data indicate an excess of power over the standard CDM model. The observed power spectrum from these optical galaxy samples is in good agreement with that drawn from the sparsely sampled IRAS galaxies. The shape of the power spectrum is also studied by examining the relation between the genus per unit volume and the smoothing length. It is found that, over Gaussian smoothing scales from 6 to 14/h Mpc, the power spectrum has a slope of about -1. The topology of the galaxy density field is studied by measuring the shift of the genus curve from the Gaussian case. Over all smoothing scales studied, the observed genus curves are consistent with a random-phase distribution of the galaxy density field, as predicted by inflationary scenarios.
Nour, Svetlana; LaRosa, Jerry; Inn, Kenneth G W
2011-08-01
The present challenge for the international emergency radiobioassay community is to analyze contaminated samples rapidly while maintaining high quality results. The National Institute of Standards and Technology (NIST) runs a radiobioassay measurement traceability testing program to evaluate the radioanalytical capabilities of participating laboratories. The NIST Radiochemistry Intercomparison Program (NRIP) started more than 10 years ago, and emergency performance testing was added to the program seven years ago. Radiobioassay turnaround times under the NRIP program for routine production and under emergency response scenarios are 60 d and 8 h, respectively. Because measurement accuracy and sample turnaround time are critical in a radiological emergency, response laboratories' analytical systems are best evaluated and improved through traceable Performance Testing (PT) programs. The NRIP provides participating laboratories with metrology tools to evaluate and improve their performance. The program motivates the laboratories to optimize their methodologies and minimize the turnaround time of their results. Likewise, NIST has to make adjustments and periodic changes to the bioassay test samples in order to continually challenge the participating laboratories. With practice, radioanalytical measurement turnaround times can be reduced to 3-4 h.
Hard choices in assessing survival past dams — a comparison of single- and paired-release strategies
Zydlewski, Joseph D.; Stich, Daniel S.; Sigourney, Douglas B.
2017-01-01
Mark–recapture models are widely used to estimate survival of salmon smolts migrating past dams. Paired releases have been used to improve estimate accuracy by removing components of mortality not attributable to the dam. This method is accompanied by reduced precision because (i) sample size is reduced relative to a single, large release; and (ii) variance calculations inflate error. We modeled an idealized system with a single dam to assess trade-offs between accuracy and precision and compared methods using root mean squared error (RMSE). Simulations were run under predefined conditions (dam mortality, background mortality, detection probability, and sample size) to determine scenarios when the paired release was preferable to a single release. We demonstrate that a paired-release design provides a theoretical advantage over a single-release design only at large sample sizes and high probabilities of detection. At release numbers typical of many survival studies, paired release can result in overestimation of dam survival. Failures to meet model assumptions of a paired release may result in further overestimation of dam-related survival. Under most conditions, a single-release strategy was preferable.
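A minimal Monte Carlo sketch of the trade-off described above, not the authors' simulation: it assumes a single downstream detection array with a known detection probability and hypothetical survival parameters, and compares the root mean squared error of the single- and paired-release estimators of dam passage survival.

```python
# Minimal Monte Carlo sketch of the single- vs paired-release trade-off. A
# single downstream detection array with known detection probability is assumed
# (real studies estimate detection from multiple arrays); all parameters are
# hypothetical.
import numpy as np

def simulate(n_total, s_dam, s_background, p_detect, n_sims, rng):
    se_single, se_paired = [], []
    for _ in range(n_sims):
        # Single release: all fish released above the dam.
        d_all = rng.binomial(n_total, s_dam * s_background * p_detect)
        s_single = d_all / (n_total * p_detect)        # confounds background loss
        # Paired release: half above the dam, half below as controls.
        n_half = n_total // 2
        d_above = rng.binomial(n_half, s_dam * s_background * p_detect)
        d_below = rng.binomial(n_half, s_background * p_detect)
        s_paired = d_above / d_below if d_below > 0 else np.nan
        se_single.append((s_single - s_dam) ** 2)
        se_paired.append((s_paired - s_dam) ** 2)
    return np.sqrt(np.nanmean(se_single)), np.sqrt(np.nanmean(se_paired))

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    for n in (100, 500, 2000):
        single, paired = simulate(n, s_dam=0.90, s_background=0.85,
                                  p_detect=0.8, n_sims=5000, rng=rng)
        print(f"n = {n:4d}: RMSE single = {single:.3f}, paired = {paired:.3f}")
```

In this toy setup the single-release estimator is precise but biased by background mortality, while the paired-release estimator is unbiased but noisy at small release sizes, mirroring the accuracy-versus-precision trade-off reported above.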