Sample records for experience sampling method

  1. Job Performance as Multivariate Dynamic Criteria: Experience Sampling and Multiway Component Analysis.

    PubMed

    Spain, Seth M; Miner, Andrew G; Kroonenberg, Pieter M; Drasgow, Fritz

    2010-08-06

    Questions about the dynamic processes that drive behavior at work have been the focus of increasing attention in recent years. Models describing behavior at work and research on momentary behavior indicate that substantial variation exists within individuals. This article examines the rationale behind this body of work and explores a method of analyzing momentary work behavior using experience sampling methods. The article also examines a previously unused set of methods for analyzing data produced by experience sampling. These methods are known collectively as multiway component analysis. Two archetypal techniques of multimode factor analysis, the parallel factor analysis (PARAFAC) and Tucker3 models, are used to analyze data from Miner, Glomb, and Hulin's (2010) experience sampling study of work behavior. The efficacy of these techniques for analyzing experience sampling data is discussed, as are the substantive multimode component models obtained.

  2. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that the use of samples no larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects because their sample sizes are smaller than six. It is also often unknown whether data collected from biomedical experiments satisfy the mathematical requirements incorporated in the sample comparison methods. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test with p = 5% ensured errors smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is obtained with the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment.
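
    The simulated-experiment design described above can be sketched in a few lines. A minimal Monte Carlo sketch follows (the normal populations, unit variances, and effect size are illustrative assumptions, not the paper's settings):

    ```python
    # Sketch: Monte Carlo estimate of Type I and Type II error rates of the
    # two-sample t-test at small sample sizes. Populations and effect size
    # are assumptions for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def error_rates(n, effect, trials=20_000, alpha=0.05):
        type1 = type2 = 0
        for _ in range(trials):
            control = rng.normal(0.0, 1.0, n)
            null_sample = rng.normal(0.0, 1.0, n)   # no real effect
            treated = rng.normal(effect, 1.0, n)    # true effect present
            if stats.ttest_ind(control, null_sample).pvalue < alpha:
                type1 += 1                          # false positive
            if stats.ttest_ind(control, treated).pvalue >= alpha:
                type2 += 1                          # missed effect
        return type1 / trials, type2 / trials

    for n in (3, 6, 9):
        t1, t2 = error_rates(n, effect=1.0)
        print(f"n={n}: Type I ~ {t1:.3f}, Type II ~ {t2:.3f}")
    ```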

  3. Experience-Sampling Research Methods and Their Potential for Education Research

    ERIC Educational Resources Information Center

    Zirkel, Sabrina; Garcia, Julie A.; Murphy, Mary C.

    2015-01-01

    Experience-sampling methods (ESM) enable us to learn about individuals' lives in context by measuring participants' feelings, thoughts, actions, context, and/or activities as they go about their daily lives. By capturing experience, affect, and action "in the moment" and with repeated measures, ESM approaches allow researchers…

  4. The experience sampling method: Investigating students' affective experience

    NASA Astrophysics Data System (ADS)

    Nissen, Jayson M.; Stetzer, MacKenzie R.; Shemwell, Jonathan T.

    2013-01-01

    Improving non-cognitive outcomes such as attitudes, efficacy, and persistence in physics courses is an important goal of physics education. This investigation implemented an in-the-moment surveying technique called the Experience Sampling Method (ESM) [1] to measure students' affective experience in physics. Measurements included: self-efficacy, cognitive efficiency, activation, intrinsic motivation, and affect. Data are presented that show contrasts in students' experiences (e.g., in physics vs. non-physics courses).

  5. A survey method for characterizing daily life experience: the day reconstruction method.

    PubMed

    Kahneman, Daniel; Krueger, Alan B; Schkade, David A; Schwarz, Norbert; Stone, Arthur A

    2004-12-03

    The Day Reconstruction Method (DRM) assesses how people spend their time and how they experience the various activities and settings of their lives, combining features of time-budget measurement and experience sampling. Participants systematically reconstruct their activities and experiences of the preceding day with procedures designed to reduce recall biases. The DRM's utility is shown by documenting close correspondences between the DRM reports of 909 employed women and established results from experience sampling. An analysis of the hedonic treadmill shows the DRM's potential for well-being research.

  6. Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah

    2014-01-01

    Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…

  7. Obtaining Self-Samples to Diagnose Curable Sexually Transmitted Infections: A Systematic Review of Patients’ Experiences

    PubMed Central

    Paudyal, Priyamvada; Llewellyn, Carrie; Lau, Jason; Mahmud, Mohammad; Smith, Helen

    2015-01-01

    Background Routine screening is key to sexually transmitted infection (STI) prevention and control. Previous studies suggest that clinic-based screening programmes capture only a small proportion of people with STIs. Self-sampling using non- or minimally invasive techniques may be beneficial for those reluctant to actively engage with conventional sampling methods. We systematically reviewed studies of patients’ experiences of obtaining self-samples to diagnose curable STIs. Methods We conducted an electronic search of MEDLINE, EMBASE, CINAHL, PsycINFO, BNI, and the Cochrane Database of Systematic Reviews to identify relevant articles published in English between January 1980 and March 2014. Studies were included if participants self-sampled for the diagnosis of a curable STI and the study had specifically sought participants’ opinions of their experience, acceptability, preferences, or willingness to self-sample. Results The initial search yielded 558 references. Of these, 45 studies met the inclusion criteria. Thirty-six studies assessed patients’ acceptability and experiences of self-sampling. Pooled results from these studies show that self-sampling is a highly acceptable method, with 85% of patients reporting the method to be well received and acceptable. Twenty-eight studies reported on ease of self-sampling; the majority of patients (88%) in these studies found self-sampling an “easy” procedure. Self-sampling was favoured over clinician sampling, and home sampling was preferred to clinic-based sampling. Females and older participants were more accepting of self-sampling. Only a small minority of participants (13%) reported pain during self-sampling. Participants were willing to undergo self-sampling and to recommend it to others. Privacy and safety were the most common concerns. Conclusion Self-sampling for diagnostic testing is well accepted, with the majority having a positive experience and a willingness to use it again. Standardization of self-sampling procedures and rigorous validation of outcome measurement will lead to better comparability across studies. Future studies need to conduct rigorous economic evaluations of self-sampling to inform policy development for the management of STIs. PMID:25909508

  8. A new sampling method for fibre length measurement

    NASA Astrophysics Data System (ADS)

    Wu, Hongyan; Li, Xianghong; Zhang, Junying

    2018-06-01

    This paper presents a new sampling method for fibre length measurement. The new method meets the three features of an effective sampling method, and it produces a beard with two symmetrical ends that can be scanned from the holding line to obtain two full fibrograms for each sample. The methodology was introduced and experiments were performed to investigate the effectiveness of the new method. The results show that the new sampling method is effective.

  9. The Social Context of Anger among Violent Forensic Patients: An Analysis via Experience Sampling Method.

    ERIC Educational Resources Information Center

    Hillbrand, Marc; Waite, Bradley M.

    1992-01-01

    Used Experience Sampling Method to investigate experiences of anger in 10 patients at maximum security forensic institute who had histories of severe, violent behavior. Found severity of anger influenced by type of activity in which subject was engaged and by emotional valence of preceding events but not by time of day nor by type of interpersonal…

  10. A Novel Analysis Method for Paired-Sample Microbial Ecology Experiments.

    PubMed

    Olesen, Scott W; Vora, Suhani; Techtmann, Stephen M; Fortney, Julian L; Bastidas-Oyanedel, Juan R; Rodríguez, Jorge; Hazen, Terry C; Alm, Eric J

    2016-01-01

    Many microbial ecology experiments use sequencing data to measure a community's response to an experimental treatment. In a common experimental design, two units, one control and one experimental, are sampled before and after the treatment is applied to the experimental unit. The four resulting samples contain information about the dynamics of organisms that respond to the treatment, but there are no analytical methods designed to extract exactly this type of information from this configuration of samples. Here we present an analytical method specifically designed to visualize and generate hypotheses about microbial community dynamics in experiments that have paired samples and few or no replicates. The method is based on the Poisson lognormal distribution, long studied in macroecology, which we found accurately models the abundance distribution of taxa counts from 16S rRNA surveys. To demonstrate the method's validity and potential, we analyzed an experiment that measured the effect of crude oil on ocean microbial communities in microcosm. Our method identified known oil degraders as well as two clades, Maricurvus and Rhodobacteraceae, that responded to amendment with oil but do not include known oil degraders. Our approach is sensitive to organisms that increased in abundance only in the experimental unit but less sensitive to organisms that increased in both control and experimental units, thus mitigating the role of "bottle effects".
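
    The model underlying the method can be sketched with a minimal generative simulation, assuming illustrative lognormal parameters (the paper's actual estimation and visualization procedure is more involved): taxon log-abundances are drawn from a normal distribution and observed counts are Poisson draws from the corresponding abundances.

    ```python
    # Sketch of the generative model the method rests on: lognormal taxon
    # abundances with Poisson-distributed observed counts. All parameter
    # values are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n_taxa = 500
    mu, sigma = 2.0, 1.5                      # lognormal parameters (assumed)

    log_abundance = rng.normal(mu, sigma, n_taxa)
    before = rng.poisson(np.exp(log_abundance))

    # A "responder" taxon increases tenfold in the experimental unit only.
    log_after = log_abundance.copy()
    log_after[0] += np.log(10.0)
    after = rng.poisson(np.exp(log_after))

    # Naive log fold-change screen (pseudocount avoids log of zero).
    lfc = np.log((after + 0.5) / (before + 0.5))
    print("top candidate taxon:", int(np.argmax(lfc)),
          "log fold-change:", round(float(lfc.max()), 2))
    ```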

  11. The Positive Psychology of Interested Adolescents.

    ERIC Educational Resources Information Center

    Hunter, Jeremy P.; Csikszentmihalyi, Mihaly

    2003-01-01

    Using the experience sampling method with a diverse national sample of 1,215 high school students, this study identified 2 groups of adolescents: those who experience chronic interest in everyday life experiences and those who experience widespread boredom. The study suggests that a generalized chronic experience of interest can be a signal of psychological health.…

  12. A novel quality by design approach for developing an HPLC method to analyze herbal extracts: A case study of sugar content analysis.

    PubMed

    Shao, Jingyuan; Cao, Wen; Qu, Haibin; Pan, Jianyang; Gong, Xingchu

    2018-01-01

    The aim of this study was to present a novel analytical quality by design (AQbD) approach for developing an HPLC method to analyze herbal extracts. In this approach, critical method attributes (CMAs) and critical method parameters (CMPs) of the analytical method were determined using the same data collected from screening experiments. The HPLC-ELSD method for separation and quantification of sugars in Codonopsis Radix extract (CRE) samples and Astragali Radix extract (ARE) samples was developed as an example using this AQbD approach. Potential CMAs and potential CMPs were identified from the Analytical Target Profile. After the screening experiments, the retention time of the D-glucose peak of CRE samples, the signal-to-noise ratio of the D-glucose peak of CRE samples, and the retention time of the sucrose peak in ARE samples were considered CMAs. The initial and final composition of the mobile phase, the flow rate, and the column temperature were found to be CMPs using a standard partial regression coefficient method. The probability-based design space was calculated using a Monte-Carlo simulation method and verified by experiments. The optimized method was validated to be accurate and precise, and it was then applied in the analysis of CRE and ARE samples. The present AQbD approach is efficient and suitable for analysis objects with complex compositions.
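
    The Monte-Carlo design space step can be illustrated with a short sketch; the regression model, its coefficients, and the acceptance limits below are hypothetical stand-ins for the fitted screening models:

    ```python
    # Sketch of a probability-based design space via Monte Carlo: sample the
    # predicted CMA under model uncertainty at a candidate CMP setting and
    # estimate the probability that the CMA criterion is met. The linear
    # model and limits are hypothetical.
    import numpy as np

    rng = np.random.default_rng(2)

    def predict_retention(flow_rate, temp):
        # hypothetical regression from screening experiments, with noise
        return 12.0 - 4.0 * flow_rate - 0.05 * temp + rng.normal(0, 0.2)

    def meets_criteria(flow_rate, temp, n=5_000):
        hits = sum(6.0 <= predict_retention(flow_rate, temp) <= 10.0
                   for _ in range(n))
        return hits / n

    # probability that the CMA criterion holds at one operating point
    print(meets_criteria(flow_rate=0.8, temp=30.0))
    ```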

  13. Comparison of four sampling methods for the detection of Salmonella in broiler litter.

    PubMed

    Buhr, R J; Richardson, L J; Cason, J A; Cox, N A; Fairchild, B D

    2007-01-01

    Experiments were conducted to compare litter sampling methods for the detection of Salmonella. In experiment 1, chicks were challenged orally with a suspension of nalidixic acid-resistant Salmonella and wing-banded, and additional nonchallenged chicks were placed into each of 2 challenge pens. Nonchallenged chicks were placed into each nonchallenge pen located adjacent to the challenge pens. At 7, 8, 10, and 11 wk of age the litter was sampled using 4 methods: fecal droppings, litter grab, drag swab, and sock. For the challenge pens, Salmonella-positive samples were detected in 3 of 16 fecal samples, 6 of 16 litter grab samples, 7 of 16 drag swab samples, and 7 of 16 sock samples. Samples from the nonchallenge pens were Salmonella positive in 2 of 16 litter grab samples, 9 of 16 drag swab samples, and 9 of 16 sock samples. In experiment 2, chicks were challenged with Salmonella, and the litter in the challenge and adjacent nonchallenge pens was sampled at 4, 6, and 8 wk of age with broilers remaining in all pens. For the challenge pens, Salmonella was detected in 10 of 36 fecal samples, 20 of 36 litter grab samples, 14 of 36 drag swab samples, and 26 of 36 sock samples. Samples from the adjacent nonchallenge pens were positive for Salmonella in 6 of 36 fecal droppings samples, 4 of 36 litter grab samples, 7 of 36 drag swab samples, and 19 of 36 sock samples. Sock samples had the highest rates of Salmonella detection. In experiment 3, the litter from a Salmonella-challenged flock was sampled at 7, 8, and 9 wk by socks and drag swabs. In addition, comparisons were made with drag swabs that were stepped on during sampling. Both socks (24 of 36, 67%) and drag swabs that were stepped on (25 of 36, 69%) showed significantly more Salmonella-positive samples than the traditional drag swab method (16 of 36, 44%). Drag swabs that were stepped on had a Salmonella detection level comparable to that of socks. Litter sampling methods that incorporate stepping on the sample material while it is in contact with the litter appear to detect Salmonella at a greater incidence than the traditional method of dragging swabs over the litter surface.
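
    For illustration, two of the detection rates reported above (experiment 2, challenge pens: socks 26/36 vs. fecal droppings 10/36) can be compared with a chi-square test on the 2×2 table; the paper's own statistical procedure may differ:

    ```python
    # Sketch: chi-square comparison of two litter sampling methods using
    # counts taken from the abstract (illustrative; not the paper's test).
    from scipy.stats import chi2_contingency

    table = [[26, 36 - 26],   # sock samples: positive, negative
             [10, 36 - 10]]   # fecal droppings: positive, negative
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
    ```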

  14. Sampling strategies for square and boll-feeding plant bugs (Hemiptera: Miridae) occurring on cotton

    USDA-ARS?s Scientific Manuscript database

    Six sampling methods targeting square and boll-feeding plant bugs on cotton were compared during three cotton growth periods (early-season squaring, early bloom, and peak through late bloom) by samplers differing in experience (with prior years of sampling experience or no experience) along the coas...

  15. Statistical inference from multiple iTRAQ experiments without using common reference standards.

    PubMed

    Herbrich, Shelley M; Cole, Robert N; West, Keith P; Schulze, Kerry; Yager, James D; Groopman, John D; Christian, Parul; Wu, Lee; O'Meally, Robert N; May, Damon H; McIntosh, Martin W; Ruczinski, Ingo

    2013-02-01

    Isobaric tags for relative and absolute quantitation (iTRAQ) is a prominent mass spectrometry technology for protein identification and quantification that is capable of analyzing multiple samples in a single experiment. Frequently, iTRAQ experiments are carried out using an aliquot from a pool of all samples, or "masterpool", in one of the channels as a reference sample standard to estimate protein relative abundances in the biological samples and to combine abundance estimates from multiple experiments. In this manuscript, we show that using a masterpool is counterproductive. We obtain more precise estimates of protein relative abundance by using the available biological data instead of the masterpool and do not need to occupy a channel that could otherwise be used for another biological sample. In addition, we introduce a simple statistical method to associate proteomic data from multiple iTRAQ experiments with a numeric response and show that this approach is more powerful than the conventionally employed masterpool-based approach. We illustrate our methods using data from four replicate iTRAQ experiments on aliquots of the same pool of plasma samples and from a 406-sample project designed to identify plasma proteins that covary with nutrient concentrations in chronically undernourished children from South Asia.

  16. The impact of science methods courses on preservice elementary teachers' science teaching self-efficacy beliefs: Case studies from Turkey and the United States

    NASA Astrophysics Data System (ADS)

    Bursal, Murat

    Four case studies in two American and two Turkish science methods classrooms were conducted to investigate the changes in preservice elementary teachers' personal science teaching efficacy (PSTE) beliefs during their course periods. The findings indicated that while Turkish preservice elementary teachers (TR sample) started the science methods course semester with higher PSTE than their American peers (US sample), due to a significant increase in the US sample's and an insignificant decline in the TR sample's PSTE scores, both groups completed the science methods course with similar PSTE levels. Consistent with Bandura's social cognitive theory, which describes four major sources of self-efficacy, the inclusion of mastery experiences (inquiry activities and elementary school micro-teaching experiences) and vicarious experiences (observation of the course instructor and supervising elementary teacher) in the science methods course, the provision of positive social persuasion (positive appraisal from the instructor and classmates), and improved physiological states (reduced science anxiety and positive attitudes toward becoming elementary school teachers) were found to contribute to the significant enhancement of the US sample's PSTE beliefs. For the TR sample, although some of the above sources were present, the lack of student teaching experiences and inservice teacher observations, as well as the TR sample's negative attitudes toward becoming elementary school teachers and a lack of positive classroom support, made Turkish preservice teachers rely mostly on their mastery of science concepts and therefore prevented them from benefiting from their science methods course in terms of enhancing their PSTE beliefs. Reforms are called for in the Turkish education system that would include more mastery experiences in science methods courses and give students more flexibility to choose their high school majors and college programs, and to switch between them. In addition to the mastery experiences contributing to PSTE beliefs, this study found that preservice elementary teachers' unawareness of their science misconceptions also enhances their self-efficacy, which is troubling. Revisions of science content courses to employ inquiry activities designed to address and correct students' misconceptions are recommended to overcome teacher candidates' lack of science competency and negative attitudes toward science.

  17. Further Evidence of an Engagement-Achievement Paradox among U.S. High School Students

    ERIC Educational Resources Information Center

    Shernoff, David J.; Schmidt, Jennifer A.

    2008-01-01

    Achievement, engagement, and students' quality of experience were compared by racial and ethnic group in a sample of students (N = 586) drawn from 13 high schools with diverse ethnic and socioeconomic student populations. Using the Experience Sampling Method (ESM), 3,529 samples of classroom experiences were analyzed along with self-reported…

  18. Methods of human body odor sampling: the effect of freezing.

    PubMed

    Lenochova, Pavlina; Roberts, S Craig; Havlicek, Jan

    2009-02-01

    Body odor sampling is an essential tool in human chemical ecology research. However, the methodologies of individual studies vary widely in terms of sampling material, length of sampling, and sample processing. Although these differences might have a critical impact on the results obtained, almost no studies test the validity of current methods. Here, we focused on the effect of freezing samples between collection and their use in experiments involving body odor perception. In 2 experiments, we tested whether axillary odors were perceived differently by raters when presented fresh or having been frozen and whether several freeze-thaw cycles affected sample quality. In the first experiment, samples were frozen for 2 weeks, 1 month, or 4 months. We found no differences in ratings of pleasantness, attractiveness, or masculinity between fresh and frozen samples. Similarly, almost no differences between repeatedly thawed and fresh samples were found. We found some variation in intensity; however, this was unrelated to length of storage. The second experiment tested differences between fresh samples and those frozen for 6 months. Again no differences in subjective ratings were observed. These results suggest that freezing has no significant effect on perceived odor hedonicity and that samples can be reliably used after storage for relatively long periods.

  19. Fast acquisition of multidimensional NMR spectra of solids and mesophases using alternative sampling methods.

    PubMed

    Lesot, Philippe; Kazimierczuk, Krzysztof; Trébosc, Julien; Amoureux, Jean-Paul; Lafon, Olivier

    2015-11-01

    Unique information about the atom-level structure and dynamics of solids and mesophases can be obtained by the use of multidimensional nuclear magnetic resonance (NMR) experiments. Nevertheless, these experiments often require long acquisition times. We review here alternative sampling methods, which have been proposed to circumvent this issue in the case of solids and mesophases. Compared to the spectra of solutions, those of solids and mesophases present some specificities because they usually display lower signal-to-noise ratios, non-Lorentzian line shapes, lower spectral resolutions and wider spectral widths. We highlight herein the advantages and limitations of these alternative sampling methods. A first route to accelerate the acquisition time of multidimensional NMR spectra consists in the use of sparse sampling schemes, such as truncated, radial or random sampling ones. These sparsely sampled datasets are generally processed by reconstruction methods differing from the Discrete Fourier Transform (DFT). A host of non-DFT methods have been applied for solids and mesophases, including the G-matrix Fourier transform, the linear least-square procedures, the covariance transform, the maximum entropy and the compressed sensing. A second class of alternative sampling consists in departing from the Jeener paradigm for multidimensional NMR experiments. These non-Jeener methods include Hadamard spectroscopy as well as spatial or orientational encoding of the evolution frequencies. The increasing number of high field NMR magnets and the development of techniques to enhance NMR sensitivity will contribute to widen the use of these alternative sampling methods for the study of solids and mesophases in the coming years.
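
    A minimal sketch of one ingredient of the sparse schemes reviewed here, a randomly generated sampling schedule for the indirect dimension; the sampling fraction and exponential weighting constant are illustrative assumptions:

    ```python
    # Sketch: exponentially weighted random sparse-sampling schedule for the
    # indirect dimension of a multidimensional NMR experiment.
    import numpy as np

    rng = np.random.default_rng(3)

    def nus_schedule(n_full, fraction=0.25, decay=2.0):
        # favour early increments, where the signal envelope is strongest
        weights = np.exp(-decay * np.arange(n_full) / n_full)
        probs = weights / weights.sum()
        n_keep = int(round(fraction * n_full))
        return np.sort(rng.choice(n_full, size=n_keep, replace=False, p=probs))

    print(nus_schedule(256))  # indices of the t1 increments actually acquired
    ```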

  20. Using the Experience Sampling Method in the Context of Contingency Management for Substance Abuse Treatment

    ERIC Educational Resources Information Center

    Husky, Mathilde M.; Mazure, Carolyn M.; Carroll, Kathleen M.; Barry, Danielle; Petry, Nancy M.

    2008-01-01

    Contingency management (CM) treatments have been shown to be effective in reducing substance use. This manuscript illustrates how the experience sampling method (ESM) can depict behavior and behavior change and can be used to explore CM treatment mechanisms. ESM characterizes idiosyncratic patterns of behavior and offers the potential to determine…

  1. Inner experience in the scanner: can high fidelity apprehensions of inner experience be integrated with fMRI?

    PubMed Central

    Kühn, Simone; Fernyhough, Charles; Alderson-Day, Benjamin; Hurlburt, Russell T.

    2014-01-01

    To provide full accounts of human experience and behavior, research in cognitive neuroscience must be linked to inner experience, but introspective reports of inner experience have often been found to be unreliable. The present case study aimed at providing proof of principle that introspection using one method, descriptive experience sampling (DES), can be reliably integrated with fMRI. A participant was trained in the DES method, followed by nine sessions of sampling within an MRI scanner. During moments where the DES interview revealed ongoing inner speaking, fMRI data reliably showed activation in classic speech processing areas including left inferior frontal gyrus. Further, the fMRI data validated the participant’s DES observations of the experiential distinction between inner speaking and innerly hearing her own voice. These results highlight the precision and validity of the DES method as a technique of exploring inner experience and the utility of combining such methods with fMRI. PMID:25538649

  2. Student Teachers' Emotional Teaching Experiences in Relation to Different Teaching Methods

    ERIC Educational Resources Information Center

    Timoštšuk, I.; Kikas, E.; Normak, M.

    2016-01-01

    The role of emotional experiences in teacher training is acknowledged, but the role of emotions during first experiences of classroom teaching has not been examined in large samples. This study examines the teaching methods used by student teachers in early teaching practice and the relationship between these methods and emotions experienced. We…

  3. Reading in Class & out of Class: An Experience Sampling Method Study

    ERIC Educational Resources Information Center

    Shumow, Lee; Schmidt, Jennifer A.; Kackar, Hayal

    2008-01-01

    This study described and compared the reading of sixth and eighth grade students both in and out of school using a unique data set collected with the Experience Sampling Method (ESM). On average, students read forty minutes a day out of class and seventeen minutes a day in class, indicating that reading is a common leisure practice for…

  4. Temperament, Parenting, and Depressive Symptoms in a Population Sample of Preadolescents

    ERIC Educational Resources Information Center

    Oldehinkel, Albertine J.; Veenstra, Rene; Ormel, Johan; De Winter, Andrea F.; Verhulst, Frank C.

    2006-01-01

    Background: Depressive symptoms can be triggered by negative social experiences and individuals' processing of these experiences. This study focuses on the interaction between temperament, perceived parenting, and gender in relation to depressive problems in a Dutch population sample of preadolescents. Methods: The sample consisted of 2230…

  5. Modified slanted-edge method for camera modulation transfer function measurement using nonuniform fast Fourier transform technique

    NASA Astrophysics Data System (ADS)

    Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin

    2018-01-01

    The ISO 12233 slanted-edge method suffers errors when the fast Fourier transform (FFT) is used in camera modulation transfer function (MTF) measurement, because tilt-angle errors in the knife-edge result in nonuniform sampling of the edge spread function (ESF). To resolve this problem, a modified slanted-edge method using the nonuniform fast Fourier transform (NUFFT) for camera MTF measurement is proposed. Theoretical simulations for images with noise at different nonuniform sampling rates of the ESF are performed using the proposed modified slanted-edge method. It is shown that the proposed method successfully eliminates the error due to the nonuniform sampling of the ESF. An experimental setup for camera MTF measurement is established to verify the accuracy of the proposed method. The experimental results show that under different nonuniform sampling rates of the ESF, the proposed modified slanted-edge method has improved accuracy for camera MTF measurement compared to the ISO 12233 slanted-edge method.
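
    The processing chain can be sketched as follows. For brevity, this sketch substitutes linear interpolation onto a uniform grid for the NUFFT step the paper proposes (interpolation is exactly the kind of approximation NUFFT is meant to avoid):

    ```python
    # Sketch of the slanted-edge pipeline on a nonuniformly sampled ESF:
    # resample to a uniform grid (stand-in for the NUFFT step), differentiate
    # to the line spread function, then FFT to the MTF.
    import numpy as np

    # nonuniform sample positions of the edge spread function (simulated)
    x = np.sort(np.random.default_rng(4).uniform(-4, 4, 400))
    esf = 0.5 * (1 + np.tanh(2.0 * x))          # idealized edge profile

    # uniform resampling via linear interpolation (NUFFT would avoid this)
    xu = np.linspace(-4, 4, 512)
    esf_u = np.interp(xu, x, esf)

    lsf = np.gradient(esf_u, xu)                # line spread function
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                               # normalize to unity at DC
    print("MTF at first few frequencies:", np.round(mtf[:5], 3))
    ```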

  6. Using experience sampling methods/ecological momentary assessment (ESM/EMA) in clinical assessment and clinical research: introduction to the special section.

    PubMed

    Trull, Timothy J; Ebner-Priemer, Ulrich W

    2009-12-01

    This article introduces the special section on experience sampling methods and ecological momentary assessment in clinical assessment. We review the conceptual basis for experience sampling methods (ESM; Csikszentmihalyi & Larson, 1987) and ecological momentary assessment (EMA; Stone & Shiffman, 1994). Next, we highlight several advantageous features of ESM/EMA as applied to psychological assessment and clinical research. We provide a brief overview of the articles in this special section, each of which focuses on 1 of the following major classes of psychological disorders: mood disorders and mood dysregulation (Ebner-Priemer & Trull, 2009), anxiety disorders (Alpers, 2009), substance use disorders (Shiffman, 2009), and psychosis (Oorschot, Kwapil, Delespaul, & Myin-Germeys, 2009). Finally, we discuss prospects, future challenges, and limitations of ESM/EMA.

  7. A novel analysis method for paired-sample microbial ecology experiments

    DOE PAGES

    Olesen, Scott W.; Vora, Suhani; Techtmann, Stephen M.; ...

    2016-05-06

    Many microbial ecology experiments use sequencing data to measure a community's response to an experimental treatment. In a common experimental design, two units, one control and one experimental, are sampled before and after the treatment is applied to the experimental unit. The four resulting samples contain information about the dynamics of organisms that respond to the treatment, but there are no analytical methods designed to extract exactly this type of information from this configuration of samples. Here we present an analytical method specifically designed to visualize and generate hypotheses about microbial community dynamics in experiments that have paired samples and few or no replicates. The method is based on the Poisson lognormal distribution, long studied in macroecology, which we found accurately models the abundance distribution of taxa counts from 16S rRNA surveys. To demonstrate the method's validity and potential, we analyzed an experiment that measured the effect of crude oil on ocean microbial communities in microcosm. Our method identified known oil degraders as well as two clades, Maricurvus and Rhodobacteraceae, that responded to amendment with oil but do not include known oil degraders. Furthermore, our approach is sensitive to organisms that increased in abundance only in the experimental unit but less sensitive to organisms that increased in both control and experimental units, thus mitigating the role of "bottle effects".

  8. A novel analysis method for paired-sample microbial ecology experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olesen, Scott W.; Vora, Suhani; Techtmann, Stephen M.

    Many microbial ecology experiments use sequencing data to measure a community's response to an experimental treatment. In a common experimental design, two units, one control and one experimental, are sampled before and after the treatment is applied to the experimental unit. The four resulting samples contain information about the dynamics of organisms that respond to the treatment, but there are no analytical methods designed to extract exactly this type of information from this configuration of samples. Here we present an analytical method specifically designed to visualize and generate hypotheses about microbial community dynamics in experiments that have paired samples and few or no replicates. The method is based on the Poisson lognormal distribution, long studied in macroecology, which we found accurately models the abundance distribution of taxa counts from 16S rRNA surveys. To demonstrate the method's validity and potential, we analyzed an experiment that measured the effect of crude oil on ocean microbial communities in microcosm. Our method identified known oil degraders as well as two clades, Maricurvus and Rhodobacteraceae, that responded to amendment with oil but do not include known oil degraders. Furthermore, our approach is sensitive to organisms that increased in abundance only in the experimental unit but less sensitive to organisms that increased in both control and experimental units, thus mitigating the role of "bottle effects".

  9. Fiducial marker application method for position alignment of in situ multimodal X-ray experiments and reconstructions

    DOE PAGES

    Shade, Paul A.; Menasche, David B.; Bernier, Joel V.; ...

    2016-03-01

    An evolving suite of X-ray characterization methods is presently available to the materials community, providing a great opportunity to gain new insight into material behavior and provide critical validation data for materials models. Two critical and related issues are sample repositioning during an in situ experiment and registration of multiple data sets after the experiment. To address these issues, a method is described which utilizes a focused ion-beam scanning electron microscope equipped with a micromanipulator to apply gold fiducial markers to samples for X-ray measurements. The method is demonstrated with a synchrotron X-ray experiment involving in situ loading of a titanium alloy tensile specimen.

  10. A novel method for high-pressure annealing experiments in a water-rich environment: hydrogen solubility and speciation in natural, gem-quality diopside

    NASA Astrophysics Data System (ADS)

    Bromiley, G. D.; Keppler, H.; Bromiley, F. A.; Jacobsen, S. D.

    2003-04-01

    Previous experimental investigations on the incorporation of structurally-bound hydrogen in nominally anhydrous minerals have either involved synthesis experiments or annealing of natural samples under hydrothermal conditions. For investigation of hydrogen incorporation using FTIR, large, good-quality crystals are required. Because of experimental difficulties, synthesis experiments are limited to the investigation of end-member systems. Annealing experiments may be used to investigate chemically more complex systems. However, in previous investigations problems have arisen due to reaction of samples with chemical buffers and fluids at elevated pressures and temperatures, and run times have been limited to less than 48 hours, raising questions regarding attainment of equilibrium. In the present study, a novel method for conducting long-duration (100s of hours) annealing experiments to investigate hydrogen incorporation in samples at high pressure has been developed. The method relies on the use of a semi-permeable platinum membrane, which protects the sample during the experiment. Samples, cut into 1×2×3 mm blocks, are surrounded by a thin platinum jacket, which is "shrink-wrapped" around the samples. The samples are then loaded into larger Pt10%Rh capsules with a buffer mixture of the same composition as the Cr-diopside, a large amount of excess water, excess silica and a Ni-NiO buffer to control oxygen fugacity. At elevated pressures and temperatures, hydrogen can diffuse freely through the platinum membrane, but the samples are protected from reaction with the surrounding buffer material and fluid. Capsules are loaded into specially designed low-friction NaCl cells for use in piston-cylinder apparatus. Samples are recovered completely intact and crack-free. Several experiments have been performed at 1.5 GPa, with increasing run duration, to demonstrate the attainment of equilibrium hydrogen contents in the sample. Experiments have been performed at pressures from 0.5 to 4.0 GPa and temperatures from 1000 to 1100 °C, with run times of several hundred hours. The effects of increasing pressure and oxygen fugacity on hydrogen solubility, and hydrogen speciation in the diopside, have been fully characterised using polarised FTIR spectroscopy. The high quality of the recovered samples means that further investigations on the effects of increasing water contents on other physical properties of the samples should be possible.

  11. External Standards or Standard Addition? Selecting and Validating a Method of Standardization

    NASA Astrophysics Data System (ADS)

    Harvey, David T.

    2002-05-01

    A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and gain practical experience of the difference between performing an external standardization and a standard addition.
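
    The standard-addition calculation at the heart of the experiment reduces to a linear fit whose x-intercept magnitude estimates the unknown concentration; the data values below are hypothetical:

    ```python
    # Sketch of a standard-addition calculation: fit signal vs. added standard
    # concentration; the unknown's concentration is the magnitude of the
    # x-intercept. Data are invented for illustration.
    import numpy as np

    added = np.array([0.0, 1.0, 2.0, 3.0])      # added standard, e.g. ppm
    signal = np.array([0.42, 0.80, 1.17, 1.56]) # instrument response

    slope, intercept = np.polyfit(added, signal, 1)
    c_unknown = intercept / slope               # |x-intercept| of the fit
    print(f"estimated analyte concentration: {c_unknown:.2f} ppm")
    ```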

  12. Latex samples for RAMSES electrophoresis experiment on IML 2

    NASA Technical Reports Server (NTRS)

    Seaman, Geoffrey V. F.; Knox, Robert J.

    1994-01-01

    The objectives of these reported studies were to provide ground-based support services for the flight experiment team for the RAMSES experiment to be flown aboard IML-2. The specific areas of support included consultation on the performance of particle-based electrophoresis studies, development of methods for the preparation of suitable samples for the flight hardware, the screening of particles to obtain suitable candidates for the flight experiment, and the electrophoretic characterization of sample particle preparations. The first phases of these studies were performed under this contract, while the follow-on work was performed under grant number NAG8 1081, 'Preparation and Characterization of Latex Samples for RAMSES Experiment on IML 2.' During this first phase of the experiment the following benchmarks were achieved: Methods were tested for the concentration and resuspension of latex samples in the greater than 0.4 micron diameter range to provide moderately high solids content samples free of the particle aggregation which interfered with the normal functioning of the RAMSES hardware. Various candidate latex preparations were screened and two candidate types of latex were identified for use in the flight experiments: carboxylate-modified latex (CML) and acrylic acid-acrylamide modified latex (AAM). These latexes have relatively hydrophilic surfaces, are not prone to aggregate, and display sufficiently low electrophoretic mobilities in the flight buffer that they can be used to make mixtures to test the resolving power of the flight hardware.

  13. Assessing the Relationship between Family Mealtime Communication and Adolescent Emotional Well-Being Using the Experience Sampling Method

    ERIC Educational Resources Information Center

    Offer, Shira

    2013-01-01

    While most prior research has focused on the frequency of family meals, the issue of which elements of family mealtime are most salient for adolescents' well-being has remained overlooked. The current study used the experience sampling method, a unique form of time diary, and survey data drawn from the 500 Family Study (N = 237 adolescents with…

  14. Alternative sample sizes for verification dose experiments and dose audits

    NASA Astrophysics Data System (ADS)

    Taylor, W. A.; Hansen, J. M.

    1999-01-01

    ISO 11137 (1995), "Sterilization of Health Care Products—Requirements for Validation and Routine Control—Radiation Sterilization", provides sampling plans for performing initial verification dose experiments and quarterly dose audits. Alternative sampling plans are presented which provide equivalent protection. These sampling plans can significantly reduce the cost of testing. These alternative sampling plans have been included in a draft ISO Technical Report (type 2). This paper examines the rationale behind the proposed alternative sampling plans. The protection provided by the current verification and audit sampling plans is first examined. Then methods for identifying equivalent plans are highlighted. Finally, methods for comparing the costs associated with the different plans are provided. This paper includes additional guidance, not included in the technical report, for selecting between the original and alternative sampling plans.
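
    Sampling plans of this kind can be compared through their operating characteristics, i.e., the probability of passing as a function of the true positive fraction; the plan parameters (n items tested, c positives allowed) below are illustrative, not those of ISO 11137:

    ```python
    # Sketch: operating characteristic of an attributes sampling plan under a
    # binomial model. Plan parameters are illustrative assumptions.
    from scipy.stats import binom

    def accept_prob(n, c, p_positive):
        # P(number of positives <= c) for n items tested
        return binom.cdf(c, n, p_positive)

    for n, c in [(100, 2), (40, 0)]:
        print(f"plan n={n}, c={c}:",
              [round(accept_prob(n, c, p), 3) for p in (0.01, 0.05, 0.10)])
    ```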

  15. Parenthood and the Quality of Experience in Daily Life: A Longitudinal Study

    ERIC Educational Resources Information Center

    Fave, Antonella Delle; Massimini, Fausto

    2004-01-01

    This longitudinal study analyzes the time budget and the quality of experience reported by new parents. Five primiparous couples were repeatedly administered the Experience Sampling Method. They carried pagers sending random signals 6-8 times a day; at signal reception, they filled out forms sampling current thoughts, activities, and the quality…

  16. Analysis of pharmaceutical and other organic wastewater compounds in filtered and unfiltered water samples by gas chromatography/mass spectrometry

    USGS Publications Warehouse

    Zaugg, Steven D.; Phillips, Patrick J.; Smith, Steven G.

    2014-01-01

    Research on the effects of exposure of stream biota to complex mixtures of pharmaceuticals and other organic compounds associated with wastewater requires the development of additional analytical capabilities for these compounds in water samples. Two gas chromatography/mass spectrometry (GC/MS) analytical methods used at the U.S. Geological Survey National Water Quality Laboratory (NWQL) to analyze organic compounds associated with wastewater were adapted to include additional pharmaceutical and other organic compounds beginning in 2009. This report includes a description of method performance for 42 additional compounds for the filtered-water method (hereafter referred to as the filtered method) and 46 additional compounds for the unfiltered-water method (hereafter referred to as the unfiltered method). The method performance for the filtered method described in this report has been published for seven of these compounds; however, the addition of several other compounds to the filtered method and the addition of the compounds to the unfiltered method resulted in the need to document method performance for both of the modified methods. Most of these added compounds are pharmaceuticals or pharmaceutical degradates, although two nonpharmaceutical compounds are included in each method. The main pharmaceutical compound classes added to the two modified methods include muscle relaxants, opiates, analgesics, and sedatives. These types of compounds were added to the original filtered and unfiltered methods largely in response to the tentative identification of a wide range of pharmaceutical and other organic compounds in samples collected from wastewater-treatment plants. Filtered water samples are extracted by vacuum through disposable solid-phase cartridges that contain modified polystyrene-divinylbenzene resin. Unfiltered samples are extracted by using continuous liquid-liquid extraction with dichloromethane. The compounds of interest for filtered and unfiltered sample types were determined by use of capillary-column gas chromatography/mass spectrometry. The performance of each method was assessed by using data on recoveries of compounds in fortified surface-water, wastewater, and reagent-water samples. These experiments (referred to as spike experiments) consist of fortifying (or spiking) samples with known amounts of target analytes. Surface-water spike experiments were performed by using samples obtained from a stream in Colorado (unfiltered method) and a stream in New York (filtered method). Wastewater spike experiments for both the filtered and unfiltered methods were performed by using treated wastewater obtained from a single wastewater treatment plant in New York. Surface-water and wastewater spike experiments were fortified at both low and high concentrations, termed low- and high-level spikes, respectively. Reagent-water spikes were assessed in three ways: (1) set spikes, (2) a low-concentration fortification experiment, and (3) a high-concentration fortification experiment. Set spike samples have been determined since 2009 and consist of analysis of fortified reagent water for target compounds included with each group of 10 to 18 environmental samples analyzed at the NWQL. The low-concentration and high-concentration reagent spike experiments, by contrast, represent a one-time assessment of method performance.
    For each spike experiment, mean recoveries ranging from 60 to 130 percent indicate low bias, and low relative standard deviations (RSDs) indicate low variability. Of the compounds included in the filtered method, 21 had mean recoveries ranging from 63 to 129 percent for the low-level and high-level surface-water spikes, with low RSDs. For wastewater spikes, 24 of the compounds included in the filtered method had recoveries ranging from 61 to 130 percent for the low-level and high-level spikes, with low RSDs; the remaining compounds had high or variable recoveries (RSDs >30 percent) for low-level wastewater spikes, or low recoveries. Of the compounds included in the unfiltered method, 17 had mean spike recoveries ranging from 74 to 129 percent and RSDs ranging from 5 to 25 percent for low-level and high-level surface-water spikes; the remaining compounds had poor mean recoveries or high RSDs (>29 percent) for these spikes. For wastewater, 14 of the compounds included in the unfiltered method had mean recoveries ranging from 62 to 127 percent with low RSDs; the remaining compounds had high or low mean recoveries or high RSDs for the low-level wastewater spikes. Of the compounds found in wastewater, 24 had mean set spike recoveries ranging from 64 to 104 percent with low RSDs. Separate method detection limits (MDLs) were computed for surface water and wastewater for both the filtered and unfiltered methods. Filtered method MDLs ranged from 0.007 to 0.14 microgram per liter (μg/L) for the surface-water matrix and from 0.004 to 0.62 μg/L for the wastewater matrix. Unfiltered method MDLs ranged from 0.014 to 0.33 μg/L for the surface-water matrix and from 0.008 to 0.36 μg/L for the wastewater matrix.
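
    As an illustration of an MDL calculation from replicate low-level spikes, a sketch using the common t-statistic formula MDL = t(n-1, 0.99) × s; the replicate values are hypothetical and the report's exact procedure may differ:

    ```python
    # Sketch: EPA-style method detection limit from replicate low-level
    # spikes, MDL = t(n-1, 0.99) * s. Replicate values are hypothetical.
    import numpy as np
    from scipy.stats import t

    replicates = np.array([0.051, 0.047, 0.055, 0.049, 0.052, 0.046, 0.050])  # ug/L
    s = replicates.std(ddof=1)
    mdl = t.ppf(0.99, df=len(replicates) - 1) * s
    print(f"MDL ~ {mdl:.3f} ug/L")
    ```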

  17. [Application of automatic photography in Schistosoma japonicum miracidium hatching experiments].

    PubMed

    Ming-Li, Zhou; Ai-Ling, Cai; Xue-Feng, Wang

    2016-05-20

    To explore the value of automatic photography in the observation of results of Schistosoma japonicum miracidium hatching experiments, fresh S. japonicum eggs were added to cow feces, and the fecal samples were divided into a low-infestation experimental group and a high-infestation group (40 samples per group). In addition, there was a negative control group with 40 samples of cow feces without S. japonicum eggs. The conventional nylon-bag S. japonicum miracidium hatching experiments were performed. The process was observed with a flashlight and magnifying glass combined with automatic video (the automatic photography method) and, at the same time, with the naked-eye observation method, and the results were compared. In the low-infestation group, the miracidium positive detection rates were 57.5% and 85.0% by the naked-eye observation method and the automatic photography method, respectively (χ2 = 11.723, P < 0.05). In the high-infestation group, the positive detection rates were 97.5% and 100% by the naked-eye observation method and the automatic photography method, respectively (χ2 = 1.253, P > 0.05). Across the two infested groups, the average positive detection rates were 77.5% and 92.5% by the naked-eye observation method and the automatic photography method, respectively (χ2 = 6.894, P < 0.05). Automatic photography can effectively improve the positive detection rate in S. japonicum miracidium hatching experiments.

  18. An improved sampling method of complex network

    NASA Astrophysics Data System (ADS)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Sampling a subnet is an important topic of complex network research, and the sampling method influences the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method preserves the similarity between the sampled subnet and the original network in degree distribution, connectivity rate and average shortest path. This method is applicable to situations where prior knowledge about the degree distribution of the original network is insufficient.
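
    One building block of the proposed scheme, a basic snowball pass over a graph, can be sketched with networkx; the Cohen-process component is described in the paper and not reproduced here, and the seed count and number of rounds are assumptions:

    ```python
    # Sketch: basic snowball sampling pass, one ingredient of schemes like
    # the one above. Seed count and rounds are illustrative assumptions.
    import random
    import networkx as nx

    def snowball_sample(G, n_seeds=5, rounds=2, seed=5):
        rng = random.Random(seed)
        sampled = set(rng.sample(list(G.nodes), n_seeds))
        frontier = set(sampled)
        for _ in range(rounds):
            frontier = {nbr for node in frontier
                        for nbr in G.neighbors(node)} - sampled
            sampled |= frontier
        return G.subgraph(sampled)

    G = nx.barabasi_albert_graph(1000, 3)
    sub = snowball_sample(G)
    print(sub.number_of_nodes(), "nodes,", sub.number_of_edges(), "edges")
    ```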

  19. Fast 2D NMR Spectroscopy for In vivo Monitoring of Bacterial Metabolism in Complex Mixtures.

    PubMed

    Dass, Rupashree; Grudziąż, Katarzyna; Ishikawa, Takao; Nowakowski, Michał; Dębowska, Renata; Kazimierczuk, Krzysztof

    2017-01-01

    The biological toolbox is full of techniques developed originally for analytical chemistry. Among them, spectroscopic experiments are a very important source of atomic-level structural information. Nuclear magnetic resonance (NMR) spectroscopy, although very advanced in chemical and biophysical applications, has been used in microbiology only in a limited manner. So far, mostly one-dimensional ¹H experiments have been reported in studies of bacterial metabolism monitored in situ. However, low spectral resolution and limited information on molecular topology limit the usability of these methods. These problems are particularly evident in the case of complex mixtures, where spectral peaks originating from many compounds overlap and make the interpretation of changes in a spectrum difficult or even impossible. Often a suite of two-dimensional (2D) NMR experiments is used to improve resolution and extract structural information from internuclear correlations. However, for a dynamically changing sample, like a bacterial culture, the time-consuming sampling of so-called indirect time dimensions in 2D experiments is inefficient. Here, we propose a technique known from analytical chemistry and structural biology of proteins, i.e., time-resolved non-uniform sampling. The method allows application of 2D (and multi-D) experiments in the case of quickly varying samples. The indirect dimension is sampled sparsely, resulting in a significant reduction of experimental time. Compared to the conventional approach based on a series of 1D measurements, this method provides extraordinary resolution and is a real-time approach to process monitoring. In this study, we demonstrate the usability of the method on a sample of Escherichia coli culture affected by ampicillin and on a sample of Propionibacterium acnes, an acne-causing bacterium, mixed with a dose of face tonic, which is a complicated, multi-component mixture providing a complex NMR spectrum. Through our experiments we determine the exact concentration and time at which the anti-bacterial agents affect the bacterial metabolism. We show that it is worthwhile to extend the NMR toolbox for microbiology with techniques such as 2D z-TOCSY, for total "fingerprinting" of a sample, and 2D ¹³C-edited HSQC, to monitor changes in concentration of metabolites in selected metabolic pathways.

  20. How Generalizable Is Your Experiment? An Index for Comparing Samples and Populations

    ERIC Educational Resources Information Center

    Tipton, Elizabeth

    2013-01-01

    Recent research on the design of social experiments has highlighted the effects of different design choices on research findings. Since experiments rarely collect their samples using random selection, in order to address these external validity problems and design choices, recent research has focused on two areas. The first area is on methods for…

  1. The Expression of Adult ADHD Symptoms in Daily Life: An Application of Experience Sampling Methodology

    ERIC Educational Resources Information Center

    Knouse, Laura E.; Mitchell, John T.; Brown, Leslie H.; Silvia, Paul J.; Kane, Michael J.; Myin-Germeys, Inez; Kwapil, Thomas R.

    2008-01-01

    Objective: To use the experience sampling method (ESM) to examine the impact of inattentive and hyperactive-impulsive ADHD symptoms on emotional well-being, activities and distress, cognitive impairment, and social functioning assessed in the daily lives of young adults. The impact of subjective appraisals on their experiences is also examined.…

  2. A method for cone fitting based on certain sampling strategy in CMM metrology

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Guo, Chaopeng

    2018-04-01

    A method of cone fitting in engineering is explored and implemented to overcome the shortcomings of the current fitting method, in which the calculations of the initial geometric parameters are imprecise and cause poor accuracy in surface fitting. A geometric distance function of the cone is constructed first, then a certain sampling strategy is defined to calculate the initial geometric parameters, and afterwards the nonlinear least-squares method is used to fit the surface. An experiment was designed to verify the accuracy of the method. The experimental data show that the proposed method can obtain the initial geometric parameters simply and efficiently, fit the surface precisely, and provide a new, accurate approach to cone fitting in coordinate measurement.
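
    The fitting stage can be sketched with a signed geometric distance residual for an infinite cone with apex a, unit axis d and half-angle α: for v = p − a, t = v·d and r = ‖v − t·d‖, the residual is r·cos α − t·sin α. The naive initialization below stands in for the paper's sampling-strategy initialization:

    ```python
    # Sketch: nonlinear least-squares cone fitting with a signed geometric
    # distance residual. Synthetic data and naive initialization are
    # illustrative; the paper derives the initial parameters differently.
    import numpy as np
    from scipy.optimize import least_squares

    def residuals(params, pts):
        apex, axis, alpha = params[:3], params[3:6], params[6]
        d = axis / np.linalg.norm(axis)                    # unit axis
        v = pts - apex
        t_ax = v @ d                                       # axial component
        r = np.linalg.norm(v - np.outer(t_ax, d), axis=1)  # radial component
        return r * np.cos(alpha) - t_ax * np.sin(alpha)    # signed distance

    # synthetic cone points: apex at origin, axis +z, half-angle 20 degrees
    rng = np.random.default_rng(6)
    t_s = rng.uniform(1, 5, 300)
    phi = rng.uniform(0, 2 * np.pi, 300)
    r_s = t_s * np.tan(np.radians(20))
    pts = np.column_stack([r_s * np.cos(phi), r_s * np.sin(phi), t_s])
    pts += rng.normal(0, 0.01, pts.shape)

    x0 = np.array([0.1, 0.1, -0.1, 0.0, 0.0, 1.0, np.radians(25)])
    fit = least_squares(residuals, x0, args=(pts,))
    print("fitted half-angle (deg):", np.degrees(fit.x[6]))
    ```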

  3. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
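
    The simplest approach summarized above reduces to a single formula: inflate the individually randomized sample size by the design effect 1 + (m − 1)·ICC, where m is the average cluster size and ICC the intra-cluster correlation:

    ```python
    # Sketch: cluster-trial sample size via the design effect
    # 1 + (m - 1) * ICC. Example values are illustrative.
    import math

    def cluster_sample_size(n_individual, cluster_size, icc):
        design_effect = 1 + (cluster_size - 1) * icc
        n_total = math.ceil(n_individual * design_effect)
        n_clusters = math.ceil(n_total / cluster_size)
        return n_total, n_clusters

    # e.g., 300 individuals needed under individual randomization,
    # clusters of 20, ICC = 0.05
    print(cluster_sample_size(300, 20, 0.05))   # -> (585, 30)
    ```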

  4. Pristine Inner Experience and Descriptive Experience Sampling: Implications for Psychology

    PubMed Central

    Lapping-Carr, Leiszle R.; Heavey, Christopher L.

    2017-01-01

    Pristine inner experience is that which is directly present in awareness before it is distorted by attempts at observation or interpretation. Many psychological methods, including most introspective methods, attempt to measure some aspect of pristine inner experience (thoughts, feelings, mental imagery, sensations, etc.). We believe, however, that these methods produce unspecifiable combinations of pristine inner experience, beliefs about the self, beliefs about what inner experience should be like, inaccurate recollections, miscommunications, and other confounding influences. We argue that descriptive experience sampling (DES) can produce high fidelity descriptions of pristine inner experience. These descriptions are used to create idiographic profiles, carefully crafted, in-depth characterizations of the pristine inner experience of individuals. We believe these profiles, because they are built from moments apprehended via a method that confronts the challenges inherent in examining inner experience, are uniquely valuable in advancing the science of inner experience and psychology broadly. For example, DES observations raise important questions about the veracity of results gathered via questionnaires and other introspective methods, like casual introspection. DES findings also provide high fidelity phenomenological data that can be useful for those developing psychological theories, such as theories of emotional processing. Additionally, DES procedures may allow clinicians and clients to practice valuable skills, like bracketing presuppositions and attending to internal experiences. This paper will describe difficulties inherent in the study of pristine inner experience and discuss implications of high fidelity descriptions of pristine inner experience for psychological research, theory development, and clinical practice. PMID:29312047

  5. Anorexia Nervosa in the Context of Daily Experience.

    ERIC Educational Resources Information Center

    Larson, Reed; Johnson, Craig

    1981-01-01

    This study investigated the anorectic's experience in daily living using the Experience Sampling Method. Results suggest that anorectics spend more time alone and experience lower average affect than other young single women. (Author/GK)

  6. Analysing neutron scattering data using McStas virtual experiments

    NASA Astrophysics Data System (ADS)

    Udby, L.; Willendrup, P. K.; Knudsen, E.; Niedermayer, Ch.; Filges, U.; Christensen, N. B.; Farhi, E.; Wells, B. O.; Lefmann, K.

    2011-04-01

    With the intention of developing a new data analysis method using virtual experiments we have built a detailed virtual model of the cold triple-axis spectrometer RITA-II at PSI, Switzerland, using the McStas neutron ray-tracing package. The parameters characterising the virtual instrument were carefully tuned against real experiments. In the present paper we show that virtual experiments reproduce experimentally observed linewidths within 1-3% for a variety of samples. Furthermore we show that the detailed knowledge of the instrumental resolution found from virtual experiments, including sample mosaicity, can be used for quantitative estimates of linewidth broadening resulting from, e.g., finite domain sizes in single-crystal samples.

  7. Autonomous spatially adaptive sampling in experiments based on curvature, statistical error and sample spacing with applications in LDA measurements

    NASA Astrophysics Data System (ADS)

    Theunissen, Raf; Kadosh, Jesse S.; Allen, Christian B.

    2015-06-01

    Spatially varying signals are typically sampled by collecting uniformly spaced samples irrespective of the signal content. For signals with inhomogeneous information content, this leads to unnecessarily dense sampling in regions of low interest, insufficient sample density at important features, or both. A new adaptive sampling technique is presented that directs sample collection in proportion to local information content, adequately capturing short-period features while sparsely sampling less dynamic regions. The proposed method incorporates a data-adapted sampling strategy based on signal curvature, sample space-filling, variable experimental uncertainty, and iterative improvement. Numerical assessment indicated a reduction in the number of samples required to achieve a predefined overall uncertainty level while improving local accuracy for important features. The potential of the proposed method is further demonstrated with Laser Doppler Anemometry experiments examining the wake behind a NACA0012 airfoil and the boundary layer characterisation of a flat plate.

  8. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests and the widely distributed read counts and dispersions of different genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as the Cancer Genome Atlas (TCGA), can be used as a point of reference: read counts and their dispersions are estimated from the reference's distribution, and this information is used to estimate and summarize the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphic interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
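
    The underlying idea, estimating power from negative binomial counts with gene-level means and dispersions, can be illustrated by simulation. This sketch is not the RnaSeqSampleSize algorithm: it uses a rank-sum test on simulated counts and a single low alpha as a crude stand-in for genome-wide FDR control:

```python
import numpy as np
from scipy.stats import nbinom, mannwhitneyu

def nb_counts(mean, dispersion, n, rng):
    # scipy's nbinom uses (r, p); for mean mu and dispersion phi
    # (Var = mu + phi * mu^2), r = 1/phi and p = r / (r + mu)
    r = 1.0 / dispersion
    p = r / (r + mean)
    return nbinom.rvs(r, p, size=n, random_state=rng)

def power_by_simulation(mu, phi, fold_change, n_per_group,
                        alpha=0.001, n_sim=2000, seed=1):
    """Fraction of simulated two-group comparisons detected at level alpha."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        a = nb_counts(mu, phi, n_per_group, rng)
        b = nb_counts(mu * fold_change, phi, n_per_group, rng)
        if mannwhitneyu(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sim

# hypothetical gene: mean count 50, dispersion 0.2, two-fold change
print(power_by_simulation(mu=50, phi=0.2, fold_change=2.0, n_per_group=10))
```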

  9. Abortion experiences among Zanzibari women: a chain-referral sampling study.

    PubMed

    Norris, Alison; Harrington, Bryna J; Grossman, Daniel; Hemed, Maryam; Hindin, Michelle J

    2016-03-11

    In Zanzibar, a semi-autonomous region of Tanzania, induced abortion is illegal but common, and fewer than 12% of married reproductive-aged women use modern contraception. As part of a multi-method study about contraception and consequences of unwanted pregnancies, the objective of this study was to understand the experiences of Zanzibari women who terminated pregnancies. The cross-sectional study was set in Zanzibar, Tanzania. Participants were a community-based sample of women who had terminated pregnancies. We carried out semi-structured interviews with 45 women recruited via chain-referral sampling. We report the characteristics of women who have had abortions, the reasons they had abortions, and the methods used to terminate their pregnancies. Women in Zanzibar terminate pregnancies that are unwanted for a range of reasons, at various points in their reproductive lives, and using multiple methods. While clinical methods were most effective, nearly half of our participants successfully terminated a pregnancy using non-clinical methods and very few had complications requiring post abortion care (PAC). Even in settings where abortion is illegal, some women experience illegal abortions without adverse health consequences, what we might call 'safer' unsafe abortions; these kinds of abortion experiences can be missed in studies about abortion conducted among women seeking PAC in hospitals.

  10. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    NASA Astrophysics Data System (ADS)

    Carpy, R.; Picker, G.; Amann, B.; Ranebo, H.; Vincent-Bonnieu, S.; Minster, O.; Winter, J.; Dettmann, J.; Castiglione, L.; Höhler, R.; Langevin, D.

    2011-12-01

    At the end of 2009 and in early 2010, a sealed cell for foam generation and observation was designed and manufactured at the Astrium Friedrichshafen facilities. Using this cell, different sample compositions of "wet foams" were optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development was performed within the breadboarding development activities of the Experiment Container FOAM-C for operation in the Fluid Science Laboratory on the ISS. The sample cell supports multiple observation methods, such as diffusing-wave and diffuse-transmission spectrometry, time-resolved correlation spectroscopy [1], and microscope observation; all of these methods are applied in the cell with a relatively small experiment volume (<3 cm³). These on-orbit replaceable units will allow the processing of multiple sample compositions (more than 40).

  11. The performance of diphoton primary vertex reconstruction methods in H → γγ+Met channel of ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Tomiwa, K. G.

    2017-09-01

    The search for new physics in the H → γγ+Met channel relies on how well the missing transverse energy is reconstructed. The Met algorithm used by the ATLAS experiment in turn uses input variables such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of two di-photon vertex reconstruction algorithms: the hardest vertex method and the Neural Network method. Comparing these algorithms for the nominal Standard Model sample and a Beyond the Standard Model sample, the Neural Network method of primary vertex selection performed better overall than the hardest vertex method.

  12. Usage of CT data in biomechanical research

    NASA Astrophysics Data System (ADS)

    Safonov, Roman A.; Golyadkina, Anastasiya A.; Kirillova, Irina V.; Kossovich, Leonid Y.

    2017-02-01

    Object of study: The investigation focuses on the development of personalized medicine, considering the determination of mechanical properties of bone tissues from in vivo data. Methods: CT, MRI, natural experiments on the versatile test machine Instron 5944, and numerical experiments using Python programs. Results: Medical diagnostic methods were developed that allow the determination of mechanical properties of bone tissues from in vivo data, together with a series of experiments to define the values of mechanical parameters of bone tissues. For one and the same sample, computed tomography (CT), magnetic resonance imaging (MRI), ultrasonic investigations, and mechanical experiments on the single-column test machine Instron 5944 were carried out. A computer program for the comparison of CT and MRI images was created, and the grayscale values at the same points of the samples were determined on both CT and MRI images. The Hounsfield grayscale values were used to determine the rigidity (Young's modulus) and tensile strength of the samples. The obtained data were compared with the natural experiment results for verification.

  13. Experiments to Evaluate and Implement Passive Tracer Gas Methods to Measure Ventilation Rates in Homes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lunden, Melissa; Faulkner, David; Heredia, Elizabeth

    2012-10-01

    This report documents experiments performed in three homes to assess the methodology used to determine air exchange rates using passive tracer techniques. The experiments used four different tracer gases emitted simultaneously but implemented with different spatial coverage in the home. Two different tracer gas sampling methods were used. The results characterize the factors of the execution and analysis of the passive tracer technique that affect the uncertainty in the calculated air exchange rates. These factors include uncertainties in tracer gas emission rates, differences in measured concentrations for different tracer gases, temporal and spatial variability of the concentrations, the comparison between different gas sampling methods, and the effect of different ventilation conditions.
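
    At steady state, the constant-emission passive tracer technique reduces to a mass balance: the air exchange rate is the emission rate divided by the product of the average tracer concentration and the house volume. A minimal sketch with hypothetical numbers:

```python
def air_exchange_rate(emission_ug_h, conc_ug_m3, volume_m3):
    """Steady-state passive tracer estimate: AER = E / (C * V), in 1/h."""
    return emission_ug_h / (conc_ug_m3 * volume_m3)

# e.g. a 12 ug/h tracer source, 0.08 ug/m^3 average concentration, 300 m^3 home
print(air_exchange_rate(12, 0.08, 300))   # 0.5 air changes per hour
```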

  14. Archeointensity estimates of a tenth-century kiln: first application of the Tsunakawa-Shaw paleointensity method to archeological relics

    NASA Astrophysics Data System (ADS)

    Kitahara, Yu; Yamamoto, Yuhji; Ohno, Masao; Kuwahara, Yoshihiro; Kameda, Shuichi; Hatakeyama, Tadahiro

    2018-05-01

    Paleomagnetic information reconstructed from archeological materials can be utilized to estimate the archeological age of excavated relics, in addition to revealing the geomagnetic secular variation and core dynamics. The direction and intensity of the Earth's magnetic field (archeodirection and archeointensity) can be ascertained using different methods, many of which have been proposed over the past decade. Among the new experimental techniques for archeointensity estimates is the Tsunakawa-Shaw method. This study demonstrates the validity of the Tsunakawa-Shaw method for reconstructing archeointensity from samples of baked clay from archeological relics. The validity of the approach was tested by comparison with the IZZI-Thellier method. The intensity values obtained coincided at the standard deviation (1σ) level. A total of 8 specimens for the Tsunakawa-Shaw method and 16 specimens for the IZZI-Thellier method, taken from 8 baked clay blocks collected from the surface of the kiln, were used in these experiments. Among them, 8 specimens (for the Tsunakawa-Shaw method) and 3 specimens (for the IZZI-Thellier method) passed a set of strict selection criteria used in the final evaluation of validity. Additionally, we performed rock magnetic experiments, mineral analysis, and paleodirection measurement to evaluate the suitability of the baked clay samples for paleointensity experiments, and hence confirmed that the sample properties were ideal for performing paleointensity experiments. It is notable that the newly estimated archeomagnetic intensity values are lower than those in previous studies that used other paleointensity methods for the tenth century in Japan.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor-Pashow, K.; Fondeur, F.; White, T.

    Savannah River National Laboratory (SRNL) was tasked with identifying and developing at least one, but preferably two, methods for quantifying the suppressor in the Next Generation Solvent (NGS) system. The suppressor is a guanidine derivative, N,N',N"-tris(3,7-dimethyloctyl)guanidine (TiDG). A list of 10 possible methods was generated, and screening experiments were performed for 8 of the 10 methods. After completion of the screening experiments, the non-aqueous acid-base titration was determined to be the most promising and was selected for further development as the primary method. ¹H NMR also showed promising results in the screening experiments, and this method was selected for further development as the secondary method. Other methods, including ³⁶Cl radiocounting and ion chromatography, also showed promise; however, due to the similarity to the primary method (titration) and the inability to differentiate between TiDG and TOA (tri-n-octylamine) in the blended solvent, ¹H NMR was selected over these methods. Analysis of radioactive samples obtained from real waste ESS (extraction, scrub, strip) testing using the titration method showed good results. Based on these results, the titration method was selected as the method of choice for TiDG measurement. ¹H NMR has been selected as the secondary (back-up) method, and additional work is planned to further develop this method and to verify it using radioactive samples. Procedures for analyzing radioactive samples of both pure NGS and blended solvent were developed and issued for both methods.

  16. Estimating the circuit delay of FPGA with a transfer learning method

    NASA Astrophysics Data System (ADS)

    Cui, Xiuhai; Liu, Datong; Peng, Yu; Peng, Xiyuan

    2017-10-01

    With the increase in FPGA (Field Programmable Gate Array) functionality, the FPGA has become an on-chip system platform, and its growing complexity makes estimating its delay very challenging. To address this problem, we propose a transfer learning estimation delay (TLED) method to simplify the delay estimation of FPGAs of different speed grades. FPGAs of the same style but different speed grades come from the same process and layout, so their delays are correlated. Therefore, one speed grade of FPGA is chosen as the basic training sample in this paper; training samples for other speed grades can be derived from the basic training samples through transfer learning. At the same time, a few target FPGA samples are also selected as training samples. A general predictive model is trained from these samples, so that a single estimation model can estimate the circuit delay of FPGAs of different speed grades. The framework of TLED includes three phases: 1) building a basic circuit delay library, which includes multipliers, adders, shifters, and so on; these circuits are used to train and build the predictive model; 2) selecting the random forest algorithm to train the predictive model, based on contrast experiments among different algorithms; 3) predicting the target circuit delay with the predictive model. The Artix-7, Kintex-7, and Virtex-7 families, each including the -1, -2, -2l, and -3 speed grades, were selected for the experiments. The experiments show a delay estimation accuracy score of more than 92% with the TLED method, indicating that TLED is a feasible, efficient, and effective delay assessment method, especially in the high-level synthesis stage of FPGA tools.
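
    A toy version of this idea, pooling rescaled base-grade samples with a few measured target-grade samples and training a single random forest, is sketched below. The features, the synthetic delay model, and the mean-ratio rescaling are our illustrative assumptions, not the TLED implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def synth_circuits(n, grade_scale):
    """Hypothetical benchmark circuits: features = LUT count, depth,
    fanout, routing demand (normalized); delay scales with speed grade."""
    X = rng.uniform(0, 1, (n, 4))
    delay = grade_scale * (2.0 * X[:, 1] + 0.5 * X[:, 0] + 0.2 * X[:, 3])
    return X, delay + rng.normal(0, 0.02, n)

X_base, y_base = synth_circuits(300, grade_scale=1.00)  # well-sampled base grade
X_tgt, y_tgt = synth_circuits(12, grade_scale=1.15)     # few target-grade samples

# Naive transfer: estimate the grade ratio from the few target samples,
# rescale the base-grade delays, and pool everything for training.
ratio = y_tgt.mean() / y_base.mean()
X_train = np.vstack([X_base, X_tgt])
y_train = np.concatenate([y_base * ratio, y_tgt])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

X_test, y_test = synth_circuits(100, grade_scale=1.15)
print("R^2 on target grade:", model.score(X_test, y_test))
```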

  17. Comparative Efficiency of the Fenwick Can and Schuiling Centrifuge in Extracting Nematode Cysts from Different Soil Types

    PubMed Central

    Bellvert, Joaquim; Crombie, Kieran; Horgan, Finbarr G.

    2008-01-01

    The Fenwick can and Schuiling centrifuge are widely used to extract nematode cysts from soil samples. The comparative efficiencies of these two methods during cyst extraction have not been determined for different soil types under different cyst densities. Such information is vital for statutory laboratories that must choose a method for routine, high-throughput soil monitoring. In this study, samples of different soil types seeded with varying densities of potato cyst nematode (Globodera rostochiensis) cysts were processed using both methods. In one experiment, with 200 ml samples, recovery was similar between methods. In a second experiment with 500 ml samples, cyst recovery was higher using the Schuiling centrifuge. For each method and soil type, cyst extraction efficiency was similar across all densities tested. Extraction was efficient from pure sand (Fenwick 72%, Schuiling 84%) and naturally sandy soils (Fenwick 62%, Schuiling 73%), but was significantly less efficient from clay-soil (Fenwick 42%, Schuiling 44%) and peat-soil with high organic matter content (Fenwick 35%, Schuiling 33%). Residual moisture (<10% w/w) in samples prior to analyses reduced extraction efficiency, particularly for sand and sandy soils. For each soil type and method, there were significant linear relationships between the number of cysts extracted and the numbers of cysts in the samples. We discuss the advantages and disadvantages of each extraction method for cyst extraction in statutory soil laboratories. PMID:19259516

  18. Method of Analysis by the U.S. Geological Survey California District Sacramento Laboratory: Determination of Trihalomethane Formation Potential, Method Validation, and Quality-Control Practices

    USGS Publications Warehouse

    Crepeau, Kathryn L.; Fram, Miranda S.; Bush, Noel

    2004-01-01

    An analytical method for the determination of the trihalomethane formation potential of water samples has been developed. The trihalomethane formation potential is measured by dosing samples with chlorine under specified conditions of pH, temperature, incubation time, darkness, and residual-free chlorine, and then analyzing the resulting trihalomethanes by purge and trap/gas chromatography equipped with an electron capture detector. Detailed explanations of the method and quality-control practices are provided. Method validation experiments showed that the trihalomethane formation potential varies as a function of time between sample collection and analysis, residual-free chlorine concentration, method of sample dilution, and the concentration of bromide in the sample.

  19. Comparison of Thellier-type and multispecimen absolute paleointensities obtained on Miocene to historical lava flows from Lanzarote (Canary Islands, Spain)

    NASA Astrophysics Data System (ADS)

    Calvo-Rathert, M.; Morales, J.; Carrancho, Á.; Gogichaishvili, A.

    2015-12-01

    A paleomagnetic, rock-magnetic and paleointensity study has been carried out on 16 Miocene, Pleistocene, Quaternary and historical lava flows from Lanzarote (Canary Islands, Spain) with two main goals: (i) to compare paleointensity results obtained with two different techniques (Thellier-type and multispecimen) and (ii) to obtain new paleointensity data. Initial rock-magnetic experiments on selected samples from each site were carried out to identify the carriers of remanence and to determine their thermal stability and grain size. These included the measurement of thermomagnetic curves, hysteresis parameters and IRM acquisition curves. Mostly reversible, but also some non-reversible, curves were recorded in the thermomagnetic experiments, with low-Ti titanomagnetite being the main carrier of remanence in most of the studied flows. Paleomagnetic analysis showed in most cases a single component, and a characteristic component could be determined in 15 flows, all displaying normal polarity. 83 samples from 13 flows were chosen for paleointensity experiments. In order to compare paleointensity results from exactly the same samples, they were cut into smaller specimens so that in each case one specimen was available for a Thellier-type paleointensity determination, another for a multispecimen paleointensity experiment, and another for rock-magnetic experiments. Thermomagnetic curves could therefore be measured on all samples subjected to paleointensity experiments. Thellier-type paleointensity determinations were performed with the Coe method between room temperature and 581°C on small specimens (0.9 cm diameter and 1 to 2.5 cm length). After heating, samples were left to cool down naturally for several hours. Multispecimen paleointensity determinations were carried out using the method of Dekkers and Böhnel. The aforementioned sub-samples were cut into 8 specimens and pressed into salt pellets in order to obtain standard cylindrical specimens. A set of eight experiments was performed using laboratory fields from 10 to 80 μT, in increments of 10 μT. Samples were oriented in such a way that the NRM directions of each sub-specimen lay parallel to the axis of the heating chamber, and were heated at a temperature of 450°C. Results obtained with both methods are compared and discussed.

  20. Experiment, monitoring, and gradient methods used to infer climate change effects on plant communities yield consistent patterns

    Treesearch

    Sarah C. Elmendorf; Gregory H.R. Henry; Robert D. Hollisterd; Anna Maria Fosaa; William A. Gould; Luise Hermanutz; Annika Hofgaard; Ingibjorg I. Jonsdottir; Janet C. Jorgenson; Esther Levesque; Borgbor Magnusson; Ulf Molau; Isla H. Myers-Smith; Steven F. Oberbauer; Christian Rixen; Craig E. Tweedie; Marilyn Walkers

    2015-01-01

    Inference about future climate change impacts typically relies on one of three approaches: manipulative experiments, historical comparisons (broadly defined to include monitoring the response to ambient climate fluctuations using repeat sampling of plots, dendroecology, and paleoecology techniques), and space-for-time substitutions derived from sampling along...

  1. Disadvantaged Youth Report Less Negative Emotion to Minor Stressors When with Peers: An Experience Sampling Study

    ERIC Educational Resources Information Center

    Uink, Bep Norma; Modecki, Kathryn Lynn; Barber, Bonnie L.

    2017-01-01

    Previous Experience Sampling Method (ESM) studies demonstrate that adolescents' daily emotional states are heavily influenced by their immediate social context. However, despite adolescence being a risk period for exposure to daily stressors, research has yet to examine the influence of peers on adolescents' emotional responses to stressors…

  2. Classical boson sampling algorithms with superior performance to near-term experiments

    NASA Astrophysics Data System (ADS)

    Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony

    2017-12-01

    It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
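
    The flavour of a Metropolised independence sampler for boson sampling can be shown at toy scale. The sketch below targets |Perm(U_S)|² over collision-free output patterns, but it uses a uniform proposal rather than the distinguishable-particle proposal of the paper, and a brute-force permanent, so it is only illustrative for a few photons:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def permanent(M):
    # brute-force permanent; fine for the few-photon sketch below
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

def haar_unitary(m):
    z = (rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix phases -> Haar measure

n, m = 4, 16                        # photons, modes
U = haar_unitary(m)
inp = list(range(n))                # photons enter the first n modes

def prob(out):                      # unnormalized probability of an output pattern
    sub = U[np.ix_(inp, out)]
    return abs(permanent(sub)) ** 2

def propose():                      # uniform over collision-free patterns
    return tuple(sorted(rng.choice(m, size=n, replace=False)))

# Metropolised independence sampling: with a uniform proposal,
# accept candidate y' with probability min(1, p(y') / p(y)).
state, p_state = propose(), 0.0
while p_state == 0.0:
    state = propose()
    p_state = prob(state)
samples = []
for _ in range(2000):
    cand = propose()
    p_cand = prob(cand)
    if rng.random() < p_cand / p_state:
        state, p_state = cand, p_cand
    samples.append(state)
print(samples[-5:])
```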

  3. Anthrax Sampling and Decontamination: Technology Trade-Offs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, Phillip N.; Hamachi, Kristina; McWilliams, Jennifer

    2008-09-12

    The goal of this project was to answer the following questions concerning response to a future anthrax release (or suspected release) in a building: 1. Based on past experience, what rules of thumb can be determined concerning: (a) the amount of sampling that may be needed to determine the extent of contamination within a given building; (b) what portions of a building should be sampled; (c) the cost per square foot to decontaminate a given type of building using a given method; (d) the time required to prepare for, and perform, decontamination; (e) the effectiveness of a given decontamination method in a given type of building? 2. Based on past experience, what resources will be spent on evaluating the extent of contamination, performing decontamination, and assessing the effectiveness of the decontamination in a building of a given type and size? 3. What are the trade-offs between cost, time, and effectiveness for the various sampling plans, sampling methods, and decontamination methods that have been used in the past?

  4. Calculation of Debye-Scherrer diffraction patterns from highly stressed polycrystalline materials

    DOE PAGES

    MacDonald, M. J.; Vorberger, J.; Gamboa, E. J.; ...

    2016-06-07

    Calculations of Debye-Scherrer diffraction patterns from polycrystalline materials have typically been done in the limit of small deviatoric stresses. Although these methods are well suited for experiments conducted near hydrostatic conditions, more robust models are required to diagnose the large strain anisotropies present in dynamic compression experiments. A method to predict Debye-Scherrer diffraction patterns for arbitrary strains has been presented in the Voigt (iso-strain) limit. Here, we present a method to calculate Debye-Scherrer diffraction patterns from highly stressed polycrystalline samples in the Reuss (iso-stress) limit. This analysis uses elastic constants to calculate lattice strains for all initial crystallite orientations, enabling elastic anisotropy and sample texture effects to be modeled directly. Furthermore, the effects of probing geometry, deviatoric stresses, and sample texture are demonstrated and compared to Voigt limit predictions. An example of shock-compressed polycrystalline diamond is presented to illustrate how this model can be applied and demonstrates the importance of including material strength when interpreting diffraction in dynamic compression experiments.

  5. Polarisation in spin-echo experiments: Multi-point and lock-in measurements

    NASA Astrophysics Data System (ADS)

    Tamtögl, Anton; Davey, Benjamin; Ward, David J.; Jardine, Andrew P.; Ellis, John; Allison, William

    2018-02-01

    Spin-echo instruments are typically used to measure diffusive processes and the dynamics and motion in samples on ps and ns time scales. A key aspect of the spin-echo technique is to determine the polarisation of a particle beam. We present two methods for measuring the spin polarisation in spin-echo experiments. The current method in use is based on taking a number of discrete readings. The new method involves continuously rotating the spin and measuring its polarisation after scattering from the sample. A control system running on a microcontroller is used to perform the spin rotation and to calculate the polarisation of the scattered beam based on a lock-in amplifier. First experimental tests of the method on a helium spin-echo spectrometer show that it works as intended and that it has advantages over the discrete approach, i.e., it can track changes of the beam properties throughout the experiment. Moreover, we show that real-time numerical simulations can accurately describe a complex experiment and can easily be used to develop improved experimental methods prior to a first hardware implementation.

  6. Shear wave speed estimation by adaptive random sample consensus method.

    PubMed

    Lin, Haoming; Wang, Tianfu; Chen, Siping

    2014-01-01

    This paper describes a new method for shear wave velocity estimation that is capable of excluding outliers automatically without a preset threshold. The proposed method is an adaptive random sample consensus (ARANDSAC), and the criterion used here is finding a certain percentage of inliers according to the closest-distance criterion. To evaluate the method, the simulation and phantom experiment results were compared with linear regression with all points (LRWAP) and the Radon sum transform (RS) method. The assessment reveals that the relative biases of the mean estimate are 20.00%, 4.67% and 5.33% for LRWAP, ARANDSAC and RS, respectively, for the simulation, and 23.53%, 4.08% and 1.08% for the phantom experiment. The results suggest that the proposed ARANDSAC algorithm is accurate in shear wave speed estimation.
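
    The essence of the approach, fitting the arrival-time-versus-lateral-position line while discarding outliers by consensus, can be sketched with a plain RANSAC loop. Unlike the paper's adaptive version, this sketch uses a fixed inlier tolerance, and all the numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def ransac_speed(x_mm, t_ms, n_iter=500, tol=0.05):
    """Fit arrival time vs. lateral position with RANSAC; the inverse of the
    slope is the shear wave speed (mm/ms == m/s)."""
    best_inliers = np.zeros(len(x_mm), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x_mm), size=2, replace=False)
        if x_mm[i] == x_mm[j]:
            continue
        slope = (t_ms[j] - t_ms[i]) / (x_mm[j] - x_mm[i])
        icept = t_ms[i] - slope * x_mm[i]
        inliers = np.abs(t_ms - (slope * x_mm + icept)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    slope, _ = np.polyfit(x_mm[best_inliers], t_ms[best_inliers], 1)  # refit on inliers
    return 1.0 / slope

x = np.linspace(0, 10, 40)                    # lateral positions, mm
t = x / 3.0 + rng.normal(0, 0.01, 40)         # true speed 3 m/s plus noise
t[::9] += rng.uniform(0.5, 1.0, len(t[::9]))  # gross outliers
print(ransac_speed(x, t))                     # ~3.0
```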

  7. Cone beam x-ray luminescence computed tomography: a feasibility study.

    PubMed

    Chen, Dongmei; Zhu, Shouping; Yi, Huangjian; Zhang, Xianghan; Chen, Duofang; Liang, Jimin; Tian, Jie

    2013-03-01

    The appearance of x-ray luminescence computed tomography (XLCT) opens new possibilities for performing molecular imaging with x rays. In the previous XLCT system, the sample was irradiated by a sequence of narrow x-ray beams and the x-ray luminescence was measured by a highly sensitive charge coupled device (CCD) camera, resulting in a relatively long sampling time and relatively low utilization of the x-ray beam. In this paper, a novel cone beam x-ray luminescence computed tomography strategy is proposed, which can fully utilize the x-ray dose and shorten the scanning time. The imaging model and reconstruction method are described, and the validity of the imaging strategy is studied. In the cone beam XLCT system, a cone beam x ray illuminates the sample and a highly sensitive CCD camera acquires the luminescent photons emitted from the sample. Photon scattering in biological tissues makes reconstructing the 3D distribution of the x-ray luminescent sample an ill-posed problem in cone beam XLCT. To overcome this issue, the authors used the diffusion approximation model to describe photon propagation in tissues and employed a sparse regularization method for reconstruction, with an incomplete variables truncated conjugate gradient method and a permissible region strategy. Meanwhile, traditional x-ray CT imaging can also be performed in this system. The x-ray attenuation effect is considered in the imaging model, which helps improve the reconstruction accuracy. First, simulation experiments with cylinder phantoms were carried out to illustrate the validity of the proposed compensated method. The experimental results showed that the location error of the compensated algorithm was smaller than that of the uncompensated method. The permissible region strategy was applied and reduced the reconstruction error to less than 2 mm. The robustness and stability were then evaluated for different view numbers, different regularization parameters, different measurement noise levels, and optical parameter mismatch; the reconstruction results showed that these settings had a small effect on the reconstruction. A nonhomogeneous phantom simulation was also carried out to simulate a more complex experimental situation and evaluate the proposed method. Second, physical cylinder phantom experiments showed similar results in the authors' prototype XLCT system. Utilizing numerical simulation and physical experiments, the authors demonstrated the validity of the new cone beam XLCT method and showed that it is feasible for the general case and actual experiments. Furthermore, compared with the previous narrow beam XLCT, the cone beam XLCT more fully utilizes the x-ray dose and greatly shortens the scanning time.

  8. Wastewater Sampling Methodologies and Flow Measurement Techniques.

    ERIC Educational Resources Information Center

    Harris, Daniel J.; Keffer, William J.

    This document provides a ready source of information about water/wastewater sampling activities using various commercial sampling and flow measurement devices. The report consolidates the findings and summarizes the activities, experiences, sampling methods, and field measurement techniques conducted by the Environmental Protection Agency (EPA),…

  9. Comparison of methods to determine methane emissions from dairy cows in farm conditions.

    PubMed

    Huhtanen, P; Cabezas-Garcia, E H; Utsumi, S; Zimmerman, S

    2015-05-01

    Nutritional and animal-selection strategies to mitigate enteric methane (CH4) depend on accurate, cost-effective methods to determine emissions from a large number of animals. The objective of the present study was to compare 2 spot-sampling methods to determine CH4 emissions from dairy cows, using gas quantification equipment installed in concentrate feeders or automatic milking stalls. In the first method (sniffer method), CH4 and carbon dioxide (CO2) concentrations were measured in close proximity to the muzzle of the animal, and average CH4 concentrations or the CH4/CO2 ratio was calculated. In the second method (flux method), measurement of CH4 and CO2 concentration was combined with an active airflow inside the feed troughs for capture of emitted gas and measurements of CH4 and CO2 fluxes. A muzzle sensor was used, allowing data to be filtered when the muzzle was not near the sampling inlet. In a laboratory study, a model cow head was built that emitted CO2 at a constant rate. It was found that CO2 concentrations using the sniffer method decreased up to 39% when the distance of the muzzle from the sampling inlet increased to 30cm, but no muzzle-position effects were observed for the flux method. The methods were compared in 2 on-farm studies conducted using 32 (experiment 1) or 59 (experiment 2) cows in a switch-back design of 5 (experiment 1) or 4 (experiment 2) periods for replicated comparisons between methods. Between-cow coefficient of variation (CV) in CH4 was smaller for the flux than the sniffer method (experiment 1, CV=11.0 vs. 17.5%, and experiment 2, 17.6 vs. 28.0%). Repeatability of the measurements from both methods was high (0.72-0.88), but the relationship between the sniffer and flux methods was weak (R²=0.09 in both experiments). With the flux method, CH4 was found to be correlated with dry matter intake or body weight, but this was not the case with the sniffer method. The CH4/CO2 ratio was more highly correlated between the flux and sniffer methods (R²=0.30), and CV was similar (6.4-8.8%). In experiment 2, cow muzzle position was highly repeatable (0.82) and influenced sniffer and flux method results when not filtered for muzzle position. It was concluded that the flux method provides more reliable estimates of CH4 emissions than the sniffer method. The sniffer method appears to be affected by variable air-mixing conditions created by the geometry of the feed trough, muzzle movement, and muzzle position. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  10. Sample Dimensionality Effects on d' and Proportion of Correct Responses in Discrimination Testing.

    PubMed

    Bloom, David J; Lee, Soo-Yeun

    2016-09-01

    Products in the food and beverage industry have varying levels of dimensionality ranging from pure water to multicomponent food products, which can modify sensory perception and possibly influence discrimination testing results. The objectives of the study were to determine the impact of (1) sample dimensionality and (2) complex formulation changes on the d' and proportion of correct response of the 3-AFC and triangle methods. Two experiments were conducted using 47 prescreened subjects who performed either triangle or 3-AFC test procedures. In Experiment I, subjects performed 3-AFC and triangle tests using model solutions with different levels of dimensionality. Samples increased in dimensionality from 1-dimensional sucrose in water solution to 3-dimensional sucrose, citric acid, and flavor in water solution. In Experiment II, subjects performed 3-AFC and triangle tests using 3-dimensional solutions. Sample pairs differed in all 3 dimensions simultaneously to represent complex formulation changes. Two forms of complexity were compared: dilution, where all dimensions decreased in the same ratio, and compensation, where a dimension was increased to compensate for a reduction in another. The proportion of correct responses decreased for both methods when the dimensionality was increased from 1- to 2-dimensional samples. No reduction in correct responses was observed from 2- to 3-dimensional samples. No significant differences in d' were demonstrated between the 2 methods when samples with complex formulation changes were tested. Results reveal an impact on proportion of correct responses due to sample dimensionality and should be explored further using a wide range of sample formulations. © 2016 Institute of Food Technologists®
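
    The link between proportion correct and d' that such comparisons rely on can be computed numerically. For the 3-AFC task, a response is correct when the signal sample exceeds both noise samples; the triangle method requires a different psychometric function and is not shown. A minimal sketch:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import norm

def pc_3afc(d):
    # P(signal sample exceeds both noise samples), signal ~ N(d, 1)
    f = lambda x: norm.pdf(x - d) * norm.cdf(x) ** 2
    return quad(f, -np.inf, np.inf)[0]

def dprime_3afc(pc):
    # invert numerically; chance level for 3-AFC is 1/3
    return brentq(lambda d: pc_3afc(d) - pc, 1e-6, 10)

print(pc_3afc(1.0))        # ~0.63
print(dprime_3afc(0.63))   # ~1.0
```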

  11. A modified method for diffusive monitoring of 3-ethenylpyridine as a specific marker of environmental tobacco smoke

    NASA Astrophysics Data System (ADS)

    Kuusimäki, Leea; Peltonen, Kimmo; Vainiotalo, Sinikka

    A previously introduced method for monitoring environmental tobacco smoke (ETS) was further validated. The method is based on diffusive sampling of a vapour-phase marker, 3-ethenylpyridine (3-EP), with 3M passive monitors (type 3500). Experiments were done in a dynamic chamber to assess diffusive sampling in comparison with active sampling in charcoal tubes or XAD-4 tubes. The sampling rate for 3-EP collected on the diffusive sampler was 23.1±0.6 mL min⁻¹. The relative standard deviation for parallel samples (n=6) ranged from 4% to 14% among experiments (n=9). No marked reverse diffusion of 3-EP was detected, nor any significant effect of relative humidity at 20%, 50% or 80%. The diffusive sampling of 3-EP was validated in field measurements in 15 restaurants in comparison with 3-EP and nicotine measurements using active sampling. The 3-EP concentration in the restaurants ranged from 0.01 to 9.8 μg m⁻³, and the uptake rate for 3-EP based on 92 parallel samples was 24.0±0.4 mL min⁻¹. A linear correlation (r=0.98) was observed between 3-EP and nicotine concentrations, the average ratio of 3-EP to nicotine being 1:8. Active sampling of 3-EP and nicotine in charcoal tubes provided more reliable results than sampling in XAD-4 tubes. All samples were analysed using gas chromatography-mass spectrometry after elution with a 15% solution of pyridine in toluene. For nicotine, the limit of quantification of the charcoal tube method was 4 ng per sample, corresponding to 0.04 μg m⁻³ for an air sample of 96 L. For 3-EP, the limit of quantification of the diffusive method was 0.5-1.0 ng per sample, corresponding to 0.04-0.09 μg m⁻³ for 8 h sampling. The diffusive method proved suitable for ETS monitoring, even at low levels of ETS.
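
    Converting a diffusive sampler's collected mass to an airborne concentration treats the uptake rate as an equivalent sampling flow: C = m/(U·t). The sketch below reproduces the abstract's 8-h limit-of-quantification figure:

```python
def diffusive_concentration_ugm3(mass_ng, uptake_ml_min, minutes):
    """Time-weighted average concentration from a diffusive sampler:
    C = m / (U * t); sampled volume U*t in mL, converted to m^3."""
    volume_m3 = uptake_ml_min * minutes / 1e6
    return mass_ng / 1000.0 / volume_m3    # ng -> ug, then per m^3

# LOQ check from the abstract: 0.5 ng over 8 h at 23.1 mL/min
print(diffusive_concentration_ugm3(0.5, 23.1, 8 * 60))  # ~0.045 ug/m^3
```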

  12. Evaluation of Derivative Ultraviolet Spectrometry for Determining Saccharin in Cola and Other Matrices: An Instrumental Methods Experiment.

    ERIC Educational Resources Information Center

    Stolzberg, Richard J.

    1986-01-01

    Background information and experimental procedures are provided for an experiment in which three samples of saccharin (a nickel plating solution, a dilute cola drink, and a more concentrated cola drink) are analyzed and the data interpreted using five methods. Precision and accuracy are evaluated and the best method is selected. (JN)

  13. Relationships between Discretionary Time Activities, Emotional Experiences, Delinquency and Depressive Symptoms among Urban African American Adolescents

    ERIC Educational Resources Information Center

    Bohnert, Amy M.; Richards, Maryse; Kohl, Krista; Randall, Edin

    2009-01-01

    Using the Experience Sampling Method (ESM), this cross-sectional study examined mediated and moderated associations between different types of discretionary time activities and depressive symptoms and delinquency among a sample of 246 (107 boys, 139 girls) fifth through eighth grade urban African American adolescents. More time spent in passive…

  14. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
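
    A greedy, grid-based simplification of this idea, scoring each candidate experiment by the Shannon entropy of its predicted outcome distribution under a set of probable models and choosing the maximum, is sketched below (a logistic toy model of our own devising, not the paper's nested entropy sampling):

```python
import numpy as np

rng = np.random.default_rng(3)

def bernoulli_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Posterior model samples for a binary response y ~ Bernoulli(sigmoid(a*x + b))
a = rng.normal(1.0, 0.5, 500)
b = rng.normal(0.0, 0.5, 500)

candidates = np.linspace(-4, 4, 81)        # parameterized space of experiments

def predictive_entropy(x):
    p_models = 1 / (1 + np.exp(-(a * x + b)))   # each model's prediction at x
    return bernoulli_entropy(p_models.mean())   # entropy of the mixture prediction

scores = [predictive_entropy(x) for x in candidates]
best = candidates[int(np.argmax(scores))]
print("most informative setting:", best)
```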

  15. Containerless processing of single crystals in low-G environment

    NASA Technical Reports Server (NTRS)

    Walter, H. U.

    1974-01-01

    Experiments on containerless crystal growth from the melt were conducted during Skylab missions SL3 and SL4 (Skylab Experiment M-560). Six samples of InSb were processed, one of them heavily doped with selenium. The concept of the experiment is discussed and related to general crystal growth methods and their merits as techniques for containerless processing in space. The morphology of the crystals obtained is explained in terms of volume changes associated with solidification and wetting conditions during solidification. All samples exhibit extremely well developed growth facets. Analysis by X-ray topographical methods and chemical etching shows that the crystals are of high structural perfection. Average dislocation density as revealed by etching is of the order of 100 per sq cm; no dislocation clusters could be observed in the space-grown samples. A sequence of striations that is observed in the first half of the selenium-doped sample is explained as being caused by periodic surface breakdown.

  16. Uniform Sampling Table Method and its Applications II--Evaluating the Uniform Sampling by Experiment.

    PubMed

    Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei

    2015-01-01

    A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling-site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes by four kinds of sampling methods. Gray correlation analysis was adopted to make the comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in a row and column was infinity, the relative accuracy was 99.50-99.89%, the reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method is easy to operate, and the selected samples are distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility and can be put into use in drug analysis.

  17. Piloting the use of experience sampling method to investigate the everyday social experiences of children with Asperger syndrome/high functioning autism.

    PubMed

    Cordier, Reinie; Brown, Nicole; Chen, Yu-Wei; Wilkes-Gillan, Sarah; Falkmer, Torbjorn

    2016-01-01

    This pilot study explored the nature and quality of social experiences of children with Asperger Syndrome/High Functioning Autism (AS/HFA) through experience sampling method (ESM) while participating in everyday activities. ESM was used to identify the contexts and content of daily life experiences. Six children with AS/HFA (aged 8-12) wore an iPod Touch on seven consecutive days, while being signalled to complete a short survey. Participants were in the company of others 88.3% of their waking time, spent 69.0% of their time with family and 3.8% with friends, but only conversed with others 26.8% of the time. Participants had more positive experiences and emotions when they were with friends compared with other company. Participating in leisure activities was associated with enjoyment, interest in the occasion, and having positive emotions. ESM was found to be helpful in identifying the nature and quality of social experiences of children with AS/HFA from their perspective.

  18. Improving the performances of autofocus based on adaptive retina-like sampling model

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Xiao, Yuqing; Cao, Jie; Cheng, Yang; Sun, Ce

    2018-03-01

    An adaptive retina-like sampling model (ARSM) is proposed to balance autofocusing accuracy and efficiency. Based on the model, we carry out comparative experiments between the proposed method and the traditional method in terms of accuracy, full width at half maximum (FWHM), and time consumption. Results show that our method outperforms the traditional method. Meanwhile, typical autofocus functions, including the sum-modified-Laplacian (SML), Laplacian (LAP), mid-frequency DCT (MDCT), and absolute Tenengrad (ATEN), are compared through experiments. The smallest FWHM is obtained with LAP, which is therefore more suitable for evaluating accuracy than the other autofocus functions, while MDCT is the most suitable for evaluating real-time ability.
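
    The focus measures being compared are simple image statistics. A sketch of the Laplacian (LAP) and sum-modified-Laplacian (SML) measures, checked on a synthetic image against its blurred copy (our own toy validation, not the paper's experiments):

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def focus_lap(img):
    """Laplacian (LAP) focus measure: sum of absolute Laplacian responses."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    return np.abs(convolve(img.astype(float), k)).sum()

def focus_sml(img):
    """Sum-modified-Laplacian (SML): |d2f/dx2| + |d2f/dy2| summed over the image."""
    f = img.astype(float)
    mlx = np.abs(2 * f - np.roll(f, 1, axis=1) - np.roll(f, -1, axis=1))
    mly = np.abs(2 * f - np.roll(f, 1, axis=0) - np.roll(f, -1, axis=0))
    return (mlx + mly).sum()

# A sharper image should score higher than its blurred copy.
rng = np.random.default_rng(5)
img = rng.uniform(0, 1, (64, 64))
blurred = gaussian_filter(img, sigma=2.0)
print(focus_lap(img) > focus_lap(blurred))   # True
print(focus_sml(img) > focus_sml(blurred))   # True
```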

  19. Recent archaeomagnetic studies in Slovakia: Comparison of methodological approaches

    NASA Astrophysics Data System (ADS)

    Kubišová, Lenka

    2016-03-01

    We review the recent archaeomagnetic studies carried out on the territory of Slovakia, focusing on the comparison of methodological approaches and discussing the pros and cons of the individual applied methods from the perspective of our experience. The most widely used methods for the determination of the intensity and direction of the archaeomagnetic field by demagnetisation of the sample material are alternating field (AF) demagnetisation and the Thellier double-heating method. These methods are used not only for archaeomagnetic studies but also help to solve some geological problems. The two methods were applied to samples collected recently at several sites in Slovakia, where archaeological prospection prompted by earthworks or reconstruction within development projects demanded archaeomagnetic dating. We then discuss the advantages and weaknesses of the investigated methods from different perspectives, based on several examples and our recent experience.

  20. Comparison of direct observational methods for measuring stereotypic behavior in children with autism spectrum disorders.

    PubMed

    Gardenier, Nicole Ciotti; MacDonald, Rebecca; Green, Gina

    2004-01-01

    We compared partial-interval recording (PIR) and momentary time sampling (MTS) estimates against continuous measures of the actual durations of stereotypic behavior in young children with autism or pervasive developmental disorder-not otherwise specified. Twenty-two videotaped samples of stereotypy were scored using a low-tech duration recording method, and relative durations (i.e., proportions of observation periods consumed by stereotypy) were calculated. Then 10-, 20-, and 30-s MTS and 10-s PIR estimates of relative durations were derived from the raw duration data. Across all samples, PIR was found to grossly overestimate the relative duration of stereotypy. Momentary time sampling both over- and underestimated the relative duration of stereotypy, but with much smaller errors than PIR (Experiment 1). These results were replicated across 27 samples of low, moderate, and high levels of stereotypy (Experiment 2).
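
    The bias pattern reported here is easy to reproduce by simulation: score a synthetic behavior timeline with PIR (an interval counts if the behavior occurs at any point in it) and MTS (only the final moment of each interval is checked), then compare both against the true relative duration. All timeline parameters below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_behavior(total_s=600, n_bouts=12, mean_bout=8.0):
    """Random bouts of behavior on a timeline; returns per-second on/off."""
    on = np.zeros(total_s, dtype=bool)
    for s in rng.choice(total_s, n_bouts, replace=False):
        on[s:s + int(rng.exponential(mean_bout)) + 1] = True
    return on

on = simulate_behavior()
true_rel = on.mean()                              # true relative duration

interval = 10
n_int = len(on) // interval
chunks = on[:n_int * interval].reshape(n_int, interval)
pir = chunks.any(axis=1).mean()                   # scored if behavior occurs at all
mts = chunks[:, -1].mean()                        # scored only at the interval's end

print(f"true {true_rel:.2f}  PIR {pir:.2f}  MTS {mts:.2f}")  # PIR overestimates
```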

  1. Experiences using IAEA Code of practice for radiation sterilization of tissue allografts: Validation and routine control

    NASA Astrophysics Data System (ADS)

    Hilmy, N.; Febrida, A.; Basril, A.

    2007-11-01

    The problem in applying the International Standard (ISO) 11137 for validation of the radiation sterilization dose (RSD) of tissue allografts is the limited and low number of uniform samples per production batch, i.e., products obtained from one donor. An allograft is a graft transplanted between two different individuals of the same species. The minimum number of uniform samples needed for a verification dose (VD) experiment at the selected sterility assurance level (SAL) per production batch according to the IAEA Code is 20: 10 for bioburden determination and the remaining 10 for the sterility test. Three methods of the IAEA Code have been used for validation of the RSD: method A1, which is a modification of method 1 of ISO 11137:1995; method B (ISO 13409:1996); and method C (AAMI TIR 27:2001). This paper describes VD experiments using uniform products obtained from one cadaver donor (cancellous bones and demineralized bone powders) and amnion grafts from one living donor. The results of the verification dose experiments show that the RSD is 15.4 kGy for cancellous and demineralized bone grafts and 19.2 kGy for amnion grafts according to method A1, and 25 kGy according to methods B and C.

  2. Adolescent Sexuality Related Beliefs and Differences by Sexual Experience Status

    ERIC Educational Resources Information Center

    Tolma, Eleni L.; Oman, Roy F.; Vesely, Sara K.; Aspy, Cheryl B.; Rodine, Sharon; Marshall, LaDonna; Fluhr, Janene

    2007-01-01

    Purpose: To examine if attitudes toward premarital sex, beliefs about peer influence, and family communication about sexual relationships differ by sexual experience status. Methods: Data were collected from a randomly selected ethnically diverse youth sample (N = 1,318) residing in two Midwestern cities. The primary method used in data analysis…

  3. Site Selection in Experiments: An Assessment of Site Recruitment and Generalizability in Two Scale-Up Studies

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Fellers, Lauren; Caverly, Sarah; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Ruiz de Castilla, Veronica

    2016-01-01

    Recently, statisticians have begun developing methods to improve the generalizability of results from large-scale experiments in education. This work has included the development of methods for improved site selection when random sampling is infeasible, including the use of stratification and targeted recruitment strategies. This article provides…

  4. Solvent signal suppression for high-resolution MAS-DNP

    NASA Astrophysics Data System (ADS)

    Lee, Daniel; Chaudhari, Sachin R.; De Paëpe, Gaël

    2017-05-01

    Dynamic nuclear polarization (DNP) has become a powerful tool to substantially increase the sensitivity of high-field magic angle spinning (MAS) solid-state NMR experiments. The addition of dissolved hyperpolarizing agents usually results in the presence of solvent signals that can overlap and obscure those of interest from the analyte. Here, two methods are proposed to suppress DNP solvent signals: a Forced Echo Dephasing experiment (FEDex) and TRAnsfer of Populations in DOuble Resonance Echo Dephasing (TRAPDORED) NMR. These methods reintroduce a heteronuclear dipolar interaction that is specific to the solvent, thereby forcing a dephasing of recoupled solvent spins and leaving acquired NMR spectra free of associated resonance overlap with the analyte. The potency of these methods is demonstrated on sample types common to MAS-DNP experiments, namely a frozen solution (of L-proline) and a powdered solid (progesterone), both containing deuterated glycerol as a DNP solvent. The proposed methods are efficient, simple to implement, compatible with other NMR experiments, and extendable past spectral editing for just DNP solvents. The sensitivity gains from MAS-DNP in conjunction with FEDex or TRAPDORED then permits rapid and uninterrupted sample analysis.

  5. Shiga toxin-producing Escherichia coli in meat: a preliminary simulation study on detection capabilities for three sampling methods

    USDA-ARS?s Scientific Manuscript database

    The objective of this simulation study is to determine which of three sampling methods (Cozzini core sampler, core drill shaving, and N-60 surface excision) best detects Shiga toxin-producing Escherichia coli (STEC) at varying levels of contamination in meat. 1000 simulated experiments...

  6. Loneliness in the Daily Lives of Adolescents: An Experience Sampling Study Examining the Effects of Social Contexts

    ERIC Educational Resources Information Center

    van Roekel, Eeske; Scholte, Ron H. J.; Engels, Rutger C. M. E.; Goossens, Luc; Verhagen, Maaike

    2015-01-01

    The main aim of the present study was to examine state levels of loneliness in adolescence. Both concurrent associations and temporal dynamics between social contexts and state levels of loneliness were examined. Data were collected from 286 adolescents (M[subscript age] = 14.19 years, 59% girls) by using the Experience Sampling Method. Results…

  7. Survey Response in a Statewide Social Experiment: Differences in Being Located and Collaborating, by Race and Hispanic Origin

    ERIC Educational Resources Information Center

    Nam, Yunju; Mason, Lisa Reyes; Kim, Youngmi; Clancy, Margaret; Sherraden, Michael

    2013-01-01

    This study examined whether and how survey response differs by race and Hispanic origin, using data from birth certificates and survey administrative data for a large-scale statewide experiment. The sample consisted of mothers of infants selected from Oklahoma birth certificates using a stratified random sampling method (N = 7,111). This study…

  8. A Membrane Gas Separation Experiment for the Undergraduate Laboratory.

    ERIC Educational Resources Information Center

    Davis, Richard A.; Sandall, Orville C.

    1991-01-01

    Described is a membrane experiment that provides students with experience in fundamental engineering skills such as mass balances, modeling, and using the computer as a research tool. Included are the experimental design, theory, method of solution, sample calculations, and conclusions. (KR)

  9. Evaluation of sample preservation methods for space mission

    NASA Technical Reports Server (NTRS)

    Schubert, W.; Rohatgi, N.; Kazarians, G.

    2002-01-01

    For interplanetary spacecraft that will travel to destinations where future life detection experiments may be conducted or samples are to be returned to earth, we should archive and preserve relevant samples from the spacecraft and cleanrooms for evaluation at a future date.

  10. Economic evaluation of an experience sampling method intervention in depression compared with treatment as usual using data from a randomized controlled trial.

    PubMed

    Simons, Claudia J P; Drukker, Marjan; Evers, Silvia; van Mastrigt, Ghislaine A P G; Höhn, Petra; Kramer, Ingrid; Peeters, Frenk; Delespaul, Philippe; Menne-Lothmann, Claudia; Hartmann, Jessica A; van Os, Jim; Wichers, Marieke

    2017-12-29

    Experience sampling, a method for real-time self-monitoring of affective experiences, holds opportunities for person-tailored treatment. By focussing on dynamic patterns of positive affect, experience sampling method interventions (ESM-I) accommodate strategies to enhance personalized treatment of depression, at potentially low cost. This study aimed to investigate the cost-effectiveness of an experience sampling method intervention in patients with depression, from a societal perspective. Participants were recruited between January 2010 and February 2012 from out-patient mental health care facilities in or near the Dutch cities of Eindhoven and Maastricht, and through local advertisements. Out-patients diagnosed with major depression (n = 101) receiving pharmacotherapy were randomized into: (i) ESM-I consisting of six weeks of ESM combined with weekly feedback regarding the individual's positive affective experiences, (ii) six weeks of ESM without feedback, or (iii) treatment as usual only. Alongside this randomised controlled trial, an economic evaluation was conducted consisting of a cost-effectiveness and a cost-utility analysis, using the Hamilton Depression Rating Scale (HDRS) and quality-adjusted life years (QALYs) as outcomes, with the willingness-to-pay threshold for a QALY set at €50,000 (based on Dutch guidelines for moderately severe to severe illnesses). The economic evaluation showed that ESM-I is an optimal strategy only when willingness to pay is around €3000 per unit HDRS and around €40,500 per QALY. ESM-I was the least favourable treatment when willingness to pay was lower than €30,000 per QALY. However, at the €50,000 willingness-to-pay threshold, ESM-I was, with a 46% probability, the most favourable treatment (base-case analysis). Sensitivity analyses confirmed the robustness of these results. We may tentatively conclude that ESM-I is a cost-effective add-on intervention to pharmacotherapy in outpatients with major depression. Netherlands Trial Register, NTR1974.
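
    For readers unfamiliar with the decision rule behind such cost-utility statements, here is a minimal sketch of the two standard quantities involved, with purely hypothetical numbers (not the trial's data):

    ```python
    # Purely hypothetical numbers, for illustration only (not the trial's data).
    delta_cost = 1800.0   # incremental cost of the add-on vs. usual care (EUR)
    delta_qaly = 0.05     # incremental QALYs gained
    wtp = 50_000.0        # willingness-to-pay threshold per QALY (EUR)

    icer = delta_cost / delta_qaly       # cost per extra QALY gained
    nmb = wtp * delta_qaly - delta_cost  # intervention favoured when NMB > 0
    print(f"ICER = EUR {icer:,.0f} per QALY; NMB = EUR {nmb:,.0f}")
    ```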

  11. Properties of pTRM in Multidomain Grains and Their Implications for Palaeointensity Measurements

    NASA Astrophysics Data System (ADS)

    Biggin, A. J.; Michalk, D. M.

    2009-05-01

    As a consequence of their ubiquity in natural materials, much effort has been expended on trying to understand how 'multidomain' (sensu lato) grains behave in palaeointensity experiments. The known properties of multidomain thermoremanence (MD TRM) will be reviewed here and their implications for various types of palaeointensity experiments will be considered. The Dekkers-Boehnel and (quasi-) perpendicular palaeointensity methods tend to produce more accurate measurements from samples containing MD remanences than do Thellier-Thellier protocols. This is because they apply only a single type of thermal remagnetisation treatment and avoid the interleaving of demagnetisation and remagnetisation treatments which always produces non-ideal behaviour when MD grains are present in the sample. However, this benefit of using a single-heating technique does not apply if the TRM of the sample being measured carries a secondary (e.g. viscous) overprint. A kinematic model of MD TRM predicts that, if a substantial demagnetisation treatment is required to isolate the primary TRM of a sample, then even single-heating methods will produce non-ideal behaviour in the experiment. This effect probably explains why some recently made palaeointensity measurements performed using the Dekkers-Boehnel method on Mexican lavas appeared to produce overly high results. One way around this problem might be to perform the measurements of the remanence in the experiment at temperature instead of always cooling the sample to room temperature. This could enable the optimal experimental behaviour to be preserved in spite of a significant overprint but requires specialist equipment which is not available in all labs. In many palaeointensity experiments, it is simply not possible to avoid all the non-ideal effects associated with MD grains. Furthermore, there is the potential for sources of bias other than MD effects to impact on a palaeointensity experiment (thermochemical alteration being the most obvious) and the design of the experiment should also take these into account. Nonetheless, there are some steps that can be followed in any experiment in order to reduce the amount of bias that MD effects might have on the palaeointensity and these will be outlined.

  12. The Problem of Sample Contamination in a Fluvial Geochemistry Research Experience for Undergraduates.

    ERIC Educational Resources Information Center

    Andersen, Charles B.

    2001-01-01

    Introduces the analysis of a river as an excellent way to teach geochemical techniques because of the relative ease of sample collection and speed of sample analysis. Focuses on the potential sources of sample contamination during sampling, filtering, and bottle cleaning processes, and reviews methods to reduce and detect contamination. Includes…

  13. Sample selection in foreign similarity regions for multicrop experiments

    NASA Technical Reports Server (NTRS)

    Malin, J. T. (Principal Investigator)

    1981-01-01

    The selection of sample segments in the U.S. foreign similarity regions for development of proportion estimation procedures and error modeling for Argentina, Australia, Brazil, and USSR in AgRISTARS is described. Each sample was chosen to be similar in crop mix to the corresponding indicator region sample. Data sets, methods of selection, and resulting samples are discussed.

  14. [Research on fast classification based on LIBS technology and principal component analysis].

    PubMed

    Yu, Qi; Ma, Xiao-Hong; Wang, Rui; Zhao, Hua-Feng

    2014-11-01

    Laser-induced breakdown spectroscopy (LIBS) and principal component analysis (PCA) were combined to study aluminum alloy classification in the present article. Classification experiments were performed on thirteen standard aluminum alloy samples belonging to 4 different types, and the results suggest that the LIBS-PCA method can be used for fast classification of aluminum alloys. PCA was applied to the spectral data from the LIBS experiments; the three principal components contributing the most were identified, the principal component scores of the spectra were calculated, and the scores were plotted in three-dimensional coordinates. The spectral sample points showed clear clustering according to the type of aluminum alloy they belong to. This result established the three principal components and a preliminary zoning of aluminum alloy types. To verify its accuracy, 20 different aluminum alloy samples were subjected to the same experiments to test the type zoning. The spectral sample points all fell within the corresponding regions of their aluminum alloy types, confirming the correctness of the type zoning established from the standard samples. On this basis, unknown aluminum alloys can be identified. All experimental results showed that the accuracy of the principal component analysis method based on laser-induced breakdown spectroscopy exceeds 97.14%, and that it can classify the different types effectively. Compared with commonly used chemical methods, laser-induced breakdown spectroscopy can detect samples in situ and quickly, with little sample preparation; therefore, combining LIBS and PCA in areas such as quality testing and on-line industrial control can save considerable time and cost and greatly improve detection efficiency.
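
    As a rough illustration of this workflow, the sketch below runs PCA on stand-in "spectra" and assigns an unknown to the nearest class centroid in principal-component score space; the data, dimensions, and the nearest-centroid rule are assumptions for illustration, not the paper's actual pipeline:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Stand-in data: 40 spectra x 1024 wavelength channels, 4 alloy types
    spectra = rng.normal(size=(40, 1024))
    labels = np.repeat(np.arange(4), 10)

    pca = PCA(n_components=3)             # keep the three strongest components
    scores = pca.fit_transform(spectra)   # PC scores used for the 3-D plot
    print(pca.explained_variance_ratio_)  # contribution of each component

    # An unknown spectrum is projected into the same PC space and assigned
    # to the nearest class centroid in score space.
    centroids = np.array([scores[labels == k].mean(axis=0) for k in range(4)])
    unknown = pca.transform(rng.normal(size=(1, 1024)))
    pred = int(np.argmin(np.linalg.norm(centroids - unknown, axis=1)))
    print("assigned type:", pred)
    ```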

  15. Hierarchical Bayesian modelling of gene expression time series across irregularly sampled replicates and clusters.

    PubMed

    Hensman, James; Lawrence, Neil D; Rattray, Magnus

    2013-08-20

    Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in Python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
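
    The key construction is a covariance that sums a cluster-level (shared) kernel and a replicate-level (block-diagonal) kernel, which handles irregularly and differently sampled replicates naturally. Below is a minimal NumPy sketch of that structure with made-up hyperparameters; it illustrates the idea, not the authors' published code:

    ```python
    import numpy as np

    def rbf(x1, x2, variance=1.0, lengthscale=1.0):
        """Squared-exponential covariance between two sets of time points."""
        d = x1[:, None] - x2[None, :]
        return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

    # Irregular sampling: each replicate is observed at its own time points.
    t1 = np.sort(np.random.default_rng(1).uniform(0, 10, 8))
    t2 = np.sort(np.random.default_rng(2).uniform(0, 10, 5))
    t = np.concatenate([t1, t2])

    # The cluster-level kernel couples all observations; the replicate-level
    # term is block-diagonal (independent deviations per replicate).
    K_cluster = rbf(t, t, variance=1.0, lengthscale=2.0)
    K_rep = np.zeros_like(K_cluster)
    K_rep[:8, :8] = rbf(t1, t1, variance=0.3, lengthscale=1.0)
    K_rep[8:, 8:] = rbf(t2, t2, variance=0.3, lengthscale=1.0)

    K = K_cluster + K_rep + 1e-8 * np.eye(len(t))  # jitter for stability
    draw = np.random.default_rng(3).multivariate_normal(np.zeros(len(t)), K)
    print(draw.round(2))
    ```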

  16. Calculation of Debye-Scherrer diffraction patterns from highly stressed polycrystalline materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonald, M. J., E-mail: macdonm@umich.edu; SLAC National Accelerator Laboratory, Menlo Park, California 94025; Vorberger, J.

    2016-06-07

    Calculations of Debye-Scherrer diffraction patterns from polycrystalline materials have typically been done in the limit of small deviatoric stresses. Although these methods are well suited for experiments conducted near hydrostatic conditions, more robust models are required to diagnose the large strain anisotropies present in dynamic compression experiments. A method to predict Debye-Scherrer diffraction patterns for arbitrary strains has been presented in the Voigt (iso-strain) limit [Higginbotham, J. Appl. Phys. 115, 174906 (2014)]. Here, we present a method to calculate Debye-Scherrer diffraction patterns from highly stressed polycrystalline samples in the Reuss (iso-stress) limit. This analysis uses elastic constants to calculate lattice strains for all initial crystallite orientations, enabling elastic anisotropy and sample texture effects to be modeled directly. The effects of probing geometry, deviatoric stresses, and sample texture are demonstrated and compared to Voigt limit predictions. An example of shock-compressed polycrystalline diamond is presented to illustrate how this model can be applied and demonstrates the importance of including material strength when interpreting diffraction in dynamic compression experiments.

  17. Age and Gender Differences in Adolescents' Homework Experiences

    ERIC Educational Resources Information Center

    Kackar, Hayal Z.; Shumow, Lee; Schmidt, Jennifer A.; Grzetich, Janel

    2011-01-01

    Extant data collected through the Experience Sampling Method were analyzed to describe adolescents' subjective experiences of homework. Analyses explored age and gender differences in the time adolescents spend doing homework, and the situational variations (location and companions) in adolescents' reported concentration, effort, interest,…

  18. An Investigation of Milk Sugar.

    ERIC Educational Resources Information Center

    Smith, Christopher A.; Dawson, Maureen M.

    1987-01-01

    Describes an experiment to identify lactose and estimate the concentration of lactose in a sample of milk. Gives a background of the investigation. Details the experimental method, results and calculations. Discusses the implications of the experiment to students. Suggests further experiments using the same technique used in…

  19. Effectiveness of Winkler Litter Extraction and Pitfall Traps in Sampling Ant Communities and Functional Groups in a Temperate Forest.

    PubMed

    Mahon, Michael B; Campbell, Kaitlin U; Crist, Thomas O

    2017-06-01

    Selection of proper sampling methods for measuring a community of interest is essential whether the study goals are to conduct a species inventory, environmental monitoring, or a manipulative experiment. Insect diversity studies often employ multiple collection methods at the expense of researcher time and funding. Ants (Formicidae) are widely used in environmental monitoring owing to their sensitivity to ecosystem changes. When sampling ant communities, two passive techniques are recommended in combination: pitfall traps and Winkler litter extraction. These recommendations are often based on studies from highly diverse tropical regions or when a species inventory is the goal. Studies in temperate regions often focus on measuring consistent community response along gradients of disturbance or among management regimes; therefore, multiple sampling methods may be unnecessary. We compared the effectiveness of pitfalls and Winkler litter extraction in an eastern temperate forest for measuring ant species richness, composition, and occurrence of ant functional groups in response to experimental manipulations of two key forest ecosystem drivers, white-tailed deer and an invasive shrub (Amur honeysuckle). We found no significant effect of sampling method on the outcome of the ecological experiment; however, we found differences between the two sampling methods in the resulting ant species richness and functional group occurrence. Litter samples approximated the overall combined species richness and composition, but pitfalls were better at sampling large-bodied (Camponotus) species. We conclude that employing both methods is essential only for species inventories or monitoring ants in the Cold-climate Specialists functional group.

  20. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
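
    One simple member of this family, shown purely as a generic illustration (the report evaluates its own specific methods), is the classical one-sided normal tolerance bound: the sample standard deviation is inflated by a factor k chosen so that a stated quantile is bounded with stated confidence even for very small n:

    ```python
    import numpy as np
    from scipy import stats

    x = np.array([2.1, 2.9, 2.4, 3.3, 2.6])  # sparse replicate results (made up)
    n, p, conf = len(x), 0.95, 0.90          # bound the 95th percentile, 90% confidence

    # Exact one-sided normal tolerance factor via the noncentral t distribution.
    z_p = stats.norm.ppf(p)
    k = stats.nct.ppf(conf, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)
    upper = x.mean() + k * x.std(ddof=1)
    print(f"conservative bound on the {p:.0%} response quantile: {upper:.2f}")
    ```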

  1. Signal Sampling for Efficient Sparse Representation of Resting State FMRI Data

    PubMed Central

    Ge, Bao; Makkie, Milad; Wang, Jin; Zhao, Shijie; Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Shu; Zhang, Wei; Han, Junwei; Guo, Lei; Liu, Tianming

    2015-01-01

    As the size of brain imaging data such as fMRI grows explosively, it provides us with unprecedented and abundant information about the brain. How to reduce the size of fMRI data without losing much information is an increasingly pressing issue. Recent studies have tried to address it with dictionary learning and sparse representation methods; however, their computational complexity is still high, which hampers the wider application of sparse representation methods to large-scale fMRI datasets. To effectively address this problem, this work proposes to represent the resting state fMRI (rs-fMRI) signals of a whole brain via a statistical-sampling-based sparse representation. First, the whole brain's signals were sampled via different sampling methods; then the sampled signals were aggregated into an input data matrix to learn a dictionary; finally, this dictionary was used to sparsely represent the whole brain's signals and identify the resting state networks. Comparative experiments demonstrate that the proposed signal sampling framework can speed up the reconstruction of concurrent brain networks by ten times without losing much information. Experiments on the 1000 Functional Connectomes Project further demonstrate its effectiveness and superiority. PMID:26646924
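
    The sample-then-learn idea can be sketched schematically with scikit-learn, random data standing in for rs-fMRI time series (this mirrors the recipe, not the authors' implementation):

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20_000, 120))   # voxels x time points (stand-in data)

    # Step 1: learn the dictionary from a random subset of voxel signals.
    idx = rng.choice(X.shape[0], size=2_000, replace=False)
    learner = MiniBatchDictionaryLearning(n_components=40, random_state=0)
    learner.fit(X[idx])

    # Step 2: sparsely code ALL voxel signals with the learned dictionary;
    # the code matrix maps voxels onto network components.
    codes = learner.transform(X)
    print(codes.shape)   # (20000, 40)
    ```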

  2. Noninvasive methods for dynamic mapping of microbial populations across the landscape

    NASA Astrophysics Data System (ADS)

    Meredith, L. K.; Sengupta, A.; Troch, P. A.; Volkmann, T. H. M.

    2017-12-01

    Soil microorganisms drive key ecosystem processes, and yet characterizing their distribution and activity in soil has been notoriously difficult. This is due, in part, to the heterogeneous nature of their response to changing environmental and nutrient conditions across time and space. These dynamics are challenging to constrain in both natural and experimental systems because of sampling difficulties and constraints. For example, soil microbial sampling at the Landscape Evolution Observatory (LEO) infrastructure in Biosphere 2 is limited, in an effort to minimize soil disruption to the long-term experiment that aims to characterize the interacting biological, hydrological, and geochemical processes driving soil evolution. In this and other systems, new methods are needed to monitor soil microbial communities and their genetic potential over time. In this study, we take advantage of the well-defined boundary conditions on hydrological flow at LEO to develop a new method to nondestructively characterize in situ microbial populations. In our approach, we sample microbes from the seepage flow at the base of each of three replicate LEO hillslopes and use hydrological models to 'map back' in situ microbial populations. Over the course of a 3-month periodic rainfall experiment, we collected samples from the LEO outflow for DNA extraction and microbial community composition analysis. These data will be used to describe changes in microbial community composition over the course of the experiment. In addition, we will use hydrological flow models to identify the changing source region of discharge water over the course of periodic rainfall pulses, thereby mapping microbial populations back onto their geographic origin in the slope. These predictions of in situ microbial populations will be ground-truthed against those derived from destructive soil sampling at the beginning and end of the rainfall experiment. Our results will show the suitability of this method for long-term, non-destructive monitoring of the microbial communities that contribute to soil evolution in this large-scale model system. Furthermore, this method may be useful for other study systems with limitations on destructive sampling, including other model infrastructures and natural landscapes.

  3. A method for direct, semi-quantitative analysis of gas phase samples using gas chromatography-inductively coupled plasma-mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, Kimberly E; Gerdes, Kirk

    2013-07-01

    A new and complete GC–ICP-MS method is described for direct analysis of trace metals in a gas phase process stream. The proposed method is derived from standard analytical procedures developed for ICP-MS, which are regularly exercised in standard ICP-MS laboratories. In order to implement the method, a series of empirical factors were generated to calibrate detector response with respect to a known concentration of an internal standard analyte. Calibrated responses are ultimately used to determine the concentration of metal analytes in a gas stream using a semi-quantitative algorithm. The method was verified using a traditional gas injection from a GC sampling valve and a standard gas mixture containing either a 1 ppm Xe + Kr mix with helium balance or 100 ppm Xe with helium balance. Data collected for Xe and Kr gas analytes revealed that agreement of 6–20% with the actual concentration can be expected for various experimental conditions. To demonstrate the method using a relevant “unknown” gas mixture, experiments were performed for continuous 4- and 7-hour periods using a Hg-containing sample gas that was co-introduced into the GC sample loop with the xenon gas standard. System performance and detector response to the dilute concentration of the internal standard were pre-determined, which allowed semi-quantitative evaluation of the analyte. The calculated analyte concentrations varied during the course of the 4-hour experiment, particularly during the first hour of the analysis, where the actual Hg concentration was under-predicted by up to 72%. Calculated concentration improved to within 30–60% for data collected after the first hour of the experiment. Similar results were seen during the 7-hour test, with the deviation from the actual concentration being 11–81% during the first hour and then decreasing for the remaining period. The method detection limit (MDL) was determined for mercury by injecting the sample gas into the system following a period of equilibration. The MDL for Hg was calculated as 6.8 μg·m⁻³. This work describes the first complete GC–ICP-MS method to directly analyze gas phase samples, and detailed sample calculations and comparisons to conventional ICP-MS methods are provided.
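
    The semi-quantitative step amounts to an internal-standard ratio calculation with an empirically calibrated relative response factor (RRF). The helper below is a hypothetical sketch of that arithmetic; the function name and all numbers are illustrative, not taken from the paper:

    ```python
    def semi_quant(analyte_counts, istd_counts, istd_conc, rrf):
        """Estimate analyte concentration from the internal-standard response.

        rrf: empirically calibrated ratio of (analyte counts per unit
        concentration) to (internal-standard counts per unit concentration).
        """
        return (analyte_counts / istd_counts) * istd_conc / rrf

    # e.g. an Hg signal read against a 1 ppm Xe internal standard, with an
    # RRF determined in a prior calibration run (all values hypothetical):
    print(semi_quant(analyte_counts=5.2e4, istd_counts=8.0e5,
                     istd_conc=1.0, rrf=0.12))
    ```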

  4. Application of focused-beam flat-sample method to synchrotron powder X-ray diffraction with anomalous scattering effect

    NASA Astrophysics Data System (ADS)

    Tanaka, M.; Katsuya, Y.; Matsushita, Y.

    2013-03-01

    The focused-beam flat-sample method (FFM), which is a method for high-resolution and rapid synchrotron X-ray powder diffraction measurements by combination of beam focusing optics, a flat shape sample and an area detector, was applied to diffraction experiments with the anomalous scattering effect. The advantages of FFM for anomalous diffraction were absorption correction without approximation, rapid data collection by an area detector and good signal-to-noise ratio data by focusing optics. In the X-ray diffraction experiments of CoFe2O4 and Fe3O4 (by FFM) using X-rays near the Fe K absorption edge, the anomalous scattering effect between Fe/Co or Fe2+/Fe3+ can be clearly detected, due to the change of diffraction intensity. The change of observed diffraction intensity with incident X-ray energy was consistent with the calculation. The FFM is expected to be a useful method for anomalous powder diffraction.

  5. Rietveld analysis using powder diffraction data with anomalous scattering effect obtained by focused beam flat sample method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanaka, Masahiko, E-mail: masahiko@spring8.or.jp; Katsuya, Yoshio, E-mail: katsuya@spring8.or.jp; Sakata, Osami, E-mail: SAKATA.Osami@nims.go.jp

    2016-07-27

    The focused-beam flat-sample method (FFM) is a new approach to synchrotron powder diffraction that combines beam focusing optics, a flat powder sample and area detectors. The method has advantages for X-ray diffraction experiments applying the anomalous scattering effect (anomalous diffraction) because of (1) absorption correction without approximation, (2) high-intensity focused incident beams and a high signal-to-noise ratio of the diffracted X-rays, and (3) rapid data collection with area detectors. We applied the FFM to anomalous diffraction experiments and collected synchrotron X-ray powder diffraction data of CoFe2O4 (inverse spinel structure) using X-rays near the Fe K absorption edge, which can distinguish Co and Fe by the anomalous scattering effect. We conducted Rietveld analyses with the obtained powder diffraction data and successfully determined the distribution of Co and Fe ions in the CoFe2O4 crystal structure.

  6. Economic Intervention and Parenting: A Randomized Experiment of Statewide Child Development Accounts

    ERIC Educational Resources Information Center

    Nam, Yunju; Wikoff, Nora; Sherraden, Michael

    2016-01-01

    Objective: We examine the effects of Child Development Accounts (CDAs) on parenting stress and practices. Methods: We use data from the SEED for Oklahoma Kids (SEED OK) experiment. SEED OK selected caregivers of infants from Oklahoma birth certificates using a probability sampling method, randomly assigned caregivers to the treatment (n = 1,132)…

  7. Teaching Methods and Their Impact on Students' Emotions in Mathematics: An Experience-Sampling Approach

    ERIC Educational Resources Information Center

    Bieg, Madeleine; Goetz, Thomas; Sticca, Fabio; Brunner, Esther; Becker, Eva; Morger, Vinzenz; Hubbard, Kyle

    2017-01-01

    Various theoretical approaches propose that emotions in the classroom are elicited by appraisal antecedents, with subjective experiences of control playing a crucial role in this context. Perceptions of control, in turn, are expected to be influenced by the classroom social environment, which can include the teaching methods being employed (e.g.,…

  8. Isotope dilution technique for quantitative analysis of endogenous trace element species in biological systems

    NASA Astrophysics Data System (ADS)

    Schaumlöffel, Dirk; Lobinski, Ryszard

    2005-04-01

    The aim of this study was to develop an inductively coupled plasma mass spectrometry (ICPMS) method for the determination of enriched species-specific mercury tracers at ng L-1 levels (ppt) in zooplankton and aquatic samples from biological tracer experiments. Applying a cold vapor sector field ICPMS method, high sensitivity was obtained, i.e., 10^6 cps for 1 μg L-1 of natural mercury measured on (202)Hg+, which in turn enabled the measurement of mercury isotope ratios with a precision of 0.6-1.4% RSD for a 50 ng L-1 standard. This method was used to quantify CH3(201)Hg+ and (200)Hg2+ tracers in zooplankton from a biological tracer experiment investigating the effects of algal density and zooplankton density on mercury bioaccumulation in zooplankton in a freshwater system. For quantification purposes, a known amount of (199)Hg+ was added to the zooplankton samples before digestion. The digested samples were analyzed and the resulting ICPMS spectra were split into four spectra, one for each of the four sources of mercury present in the sample (CH3(201)Hg+, (200)Hg2+, (199)Hg2+ and natural mercury), using algebraic deconvolution. The CH3(201)Hg+ and (200)Hg2+ tracers were quantified using an isotope dilution approach with the added (199)Hg+. Detection limits were 0.6 and 0.2 ng L-1 for (200)Hg+ and CH3(201)Hg+, respectively. The coefficient of variation on the tracer determinations was approximately 18%, estimated from the analysis of real samples with tracer concentrations in the <0.1-100 ng L-1 range. The developed method was successfully applied for the determination of species-specific mercury tracers in zooplankton samples from a biological tracer experiment.
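
    For context, one common single-spike isotope dilution expression (a textbook form, not necessarily the exact deconvolution used here) relates the measured blend ratio to the amount of analyte:

    ```latex
    % N_x: amount of analyte in the sample, N_s: amount of spike added,
    % A, B: abundances of isotopes a and b in the spike (s) and sample (x),
    % R_m: measured a/b isotope ratio in the blend.
    \[
      N_x \;=\; N_s \,\frac{A_s - R_m B_s}{R_m B_x - A_x}
    \]
    ```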

  9. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    NASA Astrophysics Data System (ADS)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as a concrete manifestation of financial time series rich in potential information, are often used in the study of financial time series. In this paper, we use stock data to recognize patterns through a dissimilarity matrix based on modified cross-sample entropy, and three-dimensional perceptual maps of the results are then provided through multidimensional scaling. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed as a reference for comparison. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. This implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups, which analysis shows correspond to five regions: Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions than MDSC.
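
    The general recipe, pairwise dissimilarities fed to metric MDS, can be sketched as follows; Chebyshev distance stands in for the entropy-based measures (mirroring the paper's MDSC baseline), so this is an illustration rather than the authors' code:

    ```python
    import numpy as np
    from sklearn.manifold import MDS

    def dissimilarity(a, b):
        # Placeholder for a cross-sample-entropy-based measure; Chebyshev
        # distance is used here as a simple stand-in.
        return float(np.max(np.abs(a - b)))

    rng = np.random.default_rng(0)
    series = rng.normal(size=(18, 500))   # 18 stand-in index series

    D = np.array([[dissimilarity(a, b) for b in series] for a in series])
    mds = MDS(n_components=3, dissimilarity='precomputed', random_state=0)
    coords = mds.fit_transform(D)         # coordinates for a 3-D perceptual map
    print(coords.shape)
    ```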

  10. The Shell Seeker: What Is the Quantity of Shell in the Lido di Venezia Sand? A Calibration DRIFTS Experiment

    ERIC Educational Resources Information Center

    Pezzolo, Alessandra De Lorenzi

    2011-01-01

    In this experiment, students are given a fanciful application of the standard addition method to evaluate the approximate quantity of the shell component in a sample of sand collected on the Lido di Venezia seashore. Several diffuse reflectance infrared Fourier transform (DRIFT) spectra are recorded from a sand sample before and after addition of…
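
    The arithmetic at the heart of any standard-addition experiment is a linear fit whose x-intercept magnitude estimates the analyte originally present. A generic sketch with made-up numbers (not data from this experiment):

    ```python
    import numpy as np

    added = np.array([0.0, 5.0, 10.0, 15.0])   # amount of standard added
    signal = np.array([1.8, 3.1, 4.3, 5.6])    # measured response

    slope, intercept = np.polyfit(added, signal, 1)
    original = intercept / slope               # |x-intercept| of the fitted line
    print(f"estimated analyte in sample: {original:.2f} (units of 'added')")
    ```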

  11. Mobbing Experiences of Instructors: Causes, Results, and Solution Suggestions

    ERIC Educational Resources Information Center

    Celep, Cevat; Konakli, Tugba

    2013-01-01

    This study aimed to investigate possible mobbing problems in universities, their causes and results, and to draw attention to precautions that can be taken. Phenomenology, one of the qualitative research methods, was used in the study. The sample group was selected through the criterion sampling method, and eight instructors…

  12. Sample Preparation and Extraction in Small Sample Volumes Suitable for Pediatric Clinical Studies: Challenges, Advances, and Experiences of a Bioanalytical HPLC-MS/MS Method Validation Using Enalapril and Enalaprilat

    PubMed Central

    Burckhardt, Bjoern B.; Laeer, Stephanie

    2015-01-01

    In the USA and Europe, medicines agencies are mandating the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods that can deal with small sample volumes, as the trial-related blood loss permitted in children is very restricted. The broadly used HPLC-MS/MS, while able to cope with small volumes, is susceptible to matrix effects, which hamper precise drug quantification by, for example, causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction was applied to reduce and control matrix effects. A scale-up from a vacuum manifold to a positive-pressure manifold was conducted to meet the demands of high throughput in a clinical setting. The challenges faced, advances made, and experience gained in solid-phase extraction are presented through the example of bioanalytical method development and validation for low-volume samples (50 μL serum). Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effects to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method comprising sample extraction by solid-phase extraction was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers. PMID:25873972

  13. Factors associated with experiences of stigma in a sample of HIV-positive, methamphetamine-using men who have sex with men

    PubMed Central

    Semple, Shirley J.; Strathdee, Steffanie A.; Zians, Jim; Patterson, Thomas L.

    2012-01-01

    Background While methamphetamine users report high rates of internalized or self-stigma, few studies have examined experiences of stigma (i.e., stigmatization by others) and its correlates. Methods This study identified correlates of stigma experiences in a sample of 438 HIV-positive men who have sex with men (MSM) who were enrolled in a sexual risk reduction intervention in San Diego, CA. Results Approximately 96% of the sample reported experiences of stigma related to their use of methamphetamine. In multiple regression analysis, experiences of stigma were associated with binge use of methamphetamine, injection drug use, increased anger symptoms, reduced emotional support, and lifetime treatment for methamphetamine use. Conclusions These findings suggest that experiences of stigma are common among methamphetamine users and that interventions to address this type of stigma and its correlates may offer social, psychological, and health benefits to HIV-positive methamphetamine-using MSM. PMID:22572209

  14. Experiences of Christian Clients in Secular Psychotherapy: A Mixed-Methods Investigation

    ERIC Educational Resources Information Center

    Cragun, Carrie L.; Friedlander, Myrna L.

    2012-01-01

    Eleven Christian former clients were sampled to uncover factors contributing to positive versus negative experiences in secular psychotherapy. The qualitative results indicated that although many participants felt hesitant to discuss their faith due to uncertainty about their therapists' reactions, positive experiences were reportedly facilitated…

  15. RNA-seq mixology: designing realistic control experiments to compare protocols and analysis methods

    PubMed Central

    Holik, Aliaksei Z.; Law, Charity W.; Liu, Ruijie; Wang, Zeya; Wang, Wenyi; Ahn, Jaeil; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.

    2017-01-01

    Abstract Carefully designed control experiments provide a gold standard for benchmarking different genomics research tools. A shortcoming of many gene expression control studies is that replication involves profiling the same reference RNA sample multiple times. This leads to low, pure technical noise that is atypical of regular studies. To achieve a more realistic noise structure, we generated a RNA-sequencing mixture experiment using two cell lines of the same cancer type. Variability was added by extracting RNA from independent cell cultures and degrading particular samples. The systematic gene expression changes induced by this design allowed benchmarking of different library preparation kits (standard poly-A versus total RNA with Ribozero depletion) and analysis pipelines. Data generated using the total RNA kit had more signal for introns and various RNA classes (ncRNA, snRNA, snoRNA) and less variability after degradation. For differential expression analysis, voom with quality weights marginally outperformed other popular methods, while for differential splicing, DEXSeq was simultaneously the most sensitive and the most inconsistent method. For sample deconvolution analysis, DeMix outperformed IsoPure convincingly. Our RNA-sequencing data set provides a valuable resource for benchmarking different protocols and data pre-processing workflows. The extra noise mimics routine lab experiments more closely, ensuring any conclusions are widely applicable. PMID:27899618

  16. Sampling techniques for thrips (Thysanoptera: Thripidae) in preflowering tomato.

    PubMed

    Joost, P Houston; Riley, David G

    2004-08-01

    Sampling techniques for thrips (Thysanoptera: Thripidae) were compared in preflowering tomato plants at the Coastal Plain Experiment Station in Tifton, GA, in 2000 and 2003, to determine the most effective method for estimating thrips abundance on tomato foliage early in the growing season. Three relative sampling techniques (a standard insect aspirator, a 946-ml beat cup, and an insect vacuum device) were compared with an absolute method for accuracy, and with one another for precision and efficiency in sampling thrips. Thrips counts from all relative sampling methods were highly correlated (R > 0.92) with the absolute method. The aspirator method was the most accurate compared with the absolute sample according to regression analysis in 2000. In 2003, all sampling methods were considered accurate according to Dunnett's test, but thrips numbers were lower and sample variation was greater than in 2000. In 2000, the beat cup method had the lowest relative variation (RV), or best precision, at 1 and 8 d after transplant (DAT). Only the beat cup method had RV values <25 for all sampling dates. In 2003, the beat cup method had the lowest RV value at 15 and 21 DAT. The beat cup method also was the most efficient method for all sample dates in both years. Frankliniella fusca (Pergande) was the most abundant thrips species on the foliage of preflowering tomato in both years of study at this location. Overall, the best thrips sampling technique tested was the beat cup method in terms of precision and sampling efficiency.
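
    Relative variation, as conventionally defined in sampling-precision studies of this kind, expresses the standard error as a percentage of the mean:

    ```latex
    % s: sample standard deviation, n: number of samples, \bar{x}: sample mean
    \[
      \mathrm{RV} \;=\; 100 \times \frac{s/\sqrt{n}}{\bar{x}}
    \]
    ```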

  17. Note: A simple image processing based fiducial auto-alignment method for sample registration.

    PubMed

    Robertson, Wesley D; Porto, Lucas R; Ip, Candice J X; Nantel, Megan K T; Tellkamp, Friedjof; Lu, Yinfei; Miller, R J Dwayne

    2015-08-01

    A simple method for the location and auto-alignment of sample fiducials for sample registration using widely available MATLAB/LabVIEW software is demonstrated. The method is robust, easily implemented, and applicable to a wide variety of experiment types for improved reproducibility and increased setup speed. The software uses image processing to locate and measure the diameter and center point of circular fiducials for distance self-calibration and iterative alignment and can be used with most imaging systems. The method is demonstrated to be fast and reliable in locating and aligning sample fiducials, provided here by a nanofabricated array, with accuracy within the optical resolution of the imaging system. The software was further demonstrated to register, load, and sample the dynamically wetted array.
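
    One way to implement such circle finding with widely available tools is a Hough-circle search. The OpenCV sketch below mirrors the idea only (the authors used MATLAB/LabVIEW, and the file name here is hypothetical):

    ```python
    import cv2
    import numpy as np

    img = cv2.imread("fiducial_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    assert img is not None, "image not found"
    blur = cv2.medianBlur(img, 5)

    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=30, minRadius=10, maxRadius=80)
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            # The center offset from the optical axis drives the alignment
            # iteration; the known fiducial diameter self-calibrates distance.
            print(f"fiducial center=({x},{y}), diameter={2 * r}")
    ```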

  18. Preview of the NASA NNWG NDE Sample Preparation Handbook

    NASA Technical Reports Server (NTRS)

    2010-01-01

    This viewgraph presentation provides step-by-step fabrication documentation for every kind of sample fabricated for MSFC by UA Huntsville, including photos and illustrations. It covers a tabulation of which samples are fabricated for which NDE method; detailed instructions and documentation for the inclusion or creation of defects; detailed specifications for materials, processes, and equipment; case histories and experiences with the different fabrication methods and defect-inclusion techniques; a discussion of pitfalls and difficulties associated with sample fabrication and defect inclusion; and a discussion of why certain fabrication techniques are needed for specific NDE methods.

  19. Short communication: Analytical method and amount of preservative added to milk samples may alter milk urea nitrogen measurements.

    PubMed

    Weeks, Holley L; Hristov, Alexander N

    2017-02-01

    Milk urea N (MUN) is used by dairy nutritionists and producers to monitor dietary protein intake and is indicative of N utilization in lactating dairy cows. Two experiments were conducted to explore discrepancies in MUN results provided by 3 milk processing laboratories using different methods. An additional experiment was conducted to evaluate the effect of 2-bromo-2-nitropropane-1,3-diol (bronopol) on MUN analysis. In experiment 1, 10 replicates of bulk tank milk samples, collected from the Pennsylvania State University's Dairy Center over 5 consecutive days, were sent to 3 milk processing laboratories in Pennsylvania. Average MUN differed between laboratory A (14.9 ± 0.40 mg/dL; analyzed on MilkoScan 4000; Foss, Hillerød, Denmark), laboratory B (6.5 ± 0.17 mg/dL; MilkoScan FT + 6000), and laboratory C (7.4 ± 0.36 mg/dL; MilkoScan 6000). In experiment 2, milk samples were spiked with urea at 0 (7.3 to 15.0 mg/dL, depending on the laboratory analyzing the samples), 17.2, 34.2, and 51.5 mg/dL of milk. Two 35-mL samples from each urea level were sent to the 3 laboratories used in experiment 1. Average analyzed MUN was greater than predicted (calculated for each laboratory based on the control; 0 mg of added urea): for laboratory A (23.2 vs. 21.0 mg/dL), laboratory B (18.0 vs. 13.3 mg/dL), and laboratory C (20.6 vs. 15.2 mg/dL). In experiment 3, replicated milk samples were preserved with 0 to 1.35 mg of bronopol/mL of milk and submitted to one milk processing laboratory that analyzed MUN using 2 different methods. Milk samples with increasing amounts of bronopol ranged in MUN concentration from 7.7 to 11.9 mg/dL and from 9.0 to 9.3 mg/dL when analyzed on MilkoScan 4000 or CL 10 (EuroChem, Moscow, Russia), respectively. In conclusion, measured MUN concentrations varied due to the analytical procedure used by milk processing laboratories and were affected by the amount of bronopol used to preserve the milk sample when milk was analyzed using a mid-infrared analyzer. Thus, it is important to maintain consistency in milk sample preservation and analysis to ensure precision of MUN results.

  20. Beyond Fourier

    NASA Astrophysics Data System (ADS)

    Hoch, Jeffrey C.

    2017-10-01

    Non-Fourier methods of spectrum analysis are gaining traction in NMR spectroscopy, driven by their utility for processing nonuniformly sampled data. These methods afford new opportunities for optimizing experiment time, resolution, and sensitivity of multidimensional NMR experiments, but they also pose significant challenges not encountered with the discrete Fourier transform. A brief history of non-Fourier methods in NMR serves to place different approaches in context. Non-Fourier methods reflect broader trends in the growing importance of computation in NMR, and offer insights for future software development.

  1. Numerical modeling of NI-monitored 3D infiltration experiment

    NASA Astrophysics Data System (ADS)

    Dohnal, Michal; Dusek, Jaromir; Snehota, Michal; Sacha, Jan; Vogel, Tomas; Votrubova, Jana

    2014-05-01

    It is well known that the temporal changes of saturated hydraulic conductivity caused by the occurrence of air phase discontinuities often play an important role in water flow and solute transport experiments. In the present study, a series of infiltration-outflow experiments was conducted to test several working hypotheses about the mechanism of air phase trapping. The experiments were performed on a porous sample with artificial internal structure, using three sandy materials with contrasting hydraulic properties. The sample was axially symmetric with continuous preferential pathways and separate porous matrix blocks (the sample was 3.4 cm in diameter and 8.8 cm high). The infiltration experiments were monitored by neutron imaging (NI). The NI data were then used to quantify the water content of the selected sample regions. The flow regime in the sample was studied using a three-dimensional model based on Richards' equation. The equation was solved by the finite element method. The results of the numerical simulations of the infiltration experiments were compared with the measured outflow rates and with the spatial distribution of water content determined by NI. The research was supported by the Czech Science Foundation Project No. 14-03691S.
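
    For reference, the mixed form of Richards' equation that such unsaturated-flow models typically solve is:

    ```latex
    % theta: volumetric water content, h: pressure head,
    % K: unsaturated hydraulic conductivity, z: vertical coordinate
    \[
      \frac{\partial \theta(h)}{\partial t}
      \;=\; \nabla \cdot \bigl[ K(h)\, \nabla (h + z) \bigr]
    \]
    ```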

  2. Filtrates & Residues: An Experiment on the Molar Solubility and Solubility Product of Barium Nitrate.

    ERIC Educational Resources Information Center

    Wruck, Betty; Reinstein, Jesse

    1989-01-01

    Provides a two-hour experiment using direct gravimetric methods to determine solubility constants. Includes methodology and sample results. Discusses the effect of the common ion on the solubility constant. (MVL)
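
    The stoichiometric relation students apply for this 1:2 salt: if s is the molar solubility of barium nitrate, then

    ```latex
    \[
      \mathrm{Ba(NO_3)_2(s)} \rightleftharpoons \mathrm{Ba^{2+}} + 2\,\mathrm{NO_3^-},
      \qquad K_{sp} = [\mathrm{Ba^{2+}}][\mathrm{NO_3^-}]^2 = s\,(2s)^2 = 4s^3
    \]
    ```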

  3. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    PubMed

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

    Cardiovascular nursing research covers a wide array of topics, from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion in the research study and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.

  4. Rapid determination of molar mass in modified Archibald experiments using direct fitting of the Lamm equation.

    PubMed

    Schuck, P; Millar, D B

    1998-05-15

    A new method is described that allows measurement of the molar mass of the solute within 15 to 30 min after the start of a conventional long-column sedimentation equilibrium experiment. A series of scans of the concentration distribution in close vicinity of the meniscus, taken in rapid succession after the start of the centrifuge run, is analyzed by direct fitting using the Lamm equation and the Svedberg equation. In the case of a single solute, this analysis of the initial depletion at the meniscus reveals its buoyant molar mass and sedimentation coefficient with an accuracy of approximately 10% and provides gross information about sample heterogeneity. This method can be used to study macromolecules that do not possess the prolonged stability needed in conventional sedimentation equilibrium experiments and it can increase the efficiency of sedimentation equilibrium experiments on previously uncharacterized samples.
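
    The Svedberg equation referenced here, in its standard form, connects the fitted sedimentation and diffusion coefficients to the molar mass (and to the buoyant molar mass the method reports):

    ```latex
    % s: sedimentation coefficient, D: diffusion coefficient, R: gas constant,
    % T: temperature, \bar{v}: partial specific volume, \rho: solvent density
    \[
      M \;=\; \frac{s\,R\,T}{D\,(1-\bar{v}\rho)},
      \qquad M_b \;\equiv\; M(1-\bar{v}\rho) \;=\; \frac{s\,R\,T}{D}
    \]
    ```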

  5. Sampling Participants’ Experience in Laboratory Experiments: Complementary Challenges for More Complete Data Collection

    PubMed Central

    McAuliffe, Alan; McGann, Marek

    2016-01-01

    Speelman and McGann’s (2013) examination of the uncritical way in which the mean is often used in psychological research raises questions both about the average’s reliability and its validity. In the present paper, we argue that interrogating the validity of the mean involves, amongst other things, a better understanding of the person’s experiences, the meaning of their actions, at the time that the behavior of interest is carried out. Recently emerging approaches within Psychology and Cognitive Science have argued strongly that experience should play a more central role in our examination of behavioral data, but the relationship between experience and behavior remains very poorly understood. We outline some of the history of the science on this fraught relationship, as well as arguing that contemporary methods for studying experience fall into one of two categories. “Wide” approaches tend to incorporate naturalistic behavior settings, but sacrifice accuracy and reliability in behavioral measurement. “Narrow” approaches maintain controlled measurement of behavior, but involve too specific a sampling of experience, which obscures crucial temporal characteristics. We therefore argue for a novel, mid-range sampling technique, that extends Hurlburt’s descriptive experience sampling, and adapts it for the controlled setting of the laboratory. This controlled descriptive experience sampling may be an appropriate tool to help calibrate both the mean and the meaning of an experimental situation with one another. PMID:27242588

  6. Appearance-based representative samples refining method for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Wen, Jiajun; Chen, Yan

    2012-07-01

    Sparse representation can deal with the lack-of-samples problem because it utilizes all of the training samples. However, discrimination ability degrades when more training samples are used for representation. We propose a novel appearance-based palmprint recognition method that seeks a compromise between discrimination ability and the lack-of-samples problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select the representative training samples according to their contributions. We then further refine the training samples by an iterative procedure, each time excluding the training sample with the least contribution to the test sample. Experiments on the PolyU multispectral palmprint database and a two-dimensional and three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we explore the principles governing the usage of the key parameters in the proposed algorithm, which facilitates obtaining high recognition accuracy.
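
    A schematic reading of the contribution-based refinement, with synthetic data (a sketch of the general idea, not the authors' exact algorithm): represent the test sample over the kept training samples by least squares, then repeatedly drop the sample whose term contributes least.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    train = rng.normal(size=(200, 60))    # 60 training samples as columns
    test = train[:, :3] @ np.array([0.5, 0.3, 0.2]) \
           + 0.01 * rng.normal(size=200)  # test sample built from 3 of them

    keep = list(range(train.shape[1]))
    for _ in range(40):                   # refine 60 samples down to 20
        A = train[:, keep]
        coef, *_ = np.linalg.lstsq(A, test, rcond=None)
        # Contribution of sample j: magnitude of its term coef_j * a_j.
        contrib = np.abs(coef) * np.linalg.norm(A, axis=0)
        keep.pop(int(np.argmin(contrib)))

    print("kept training samples:", keep)
    ```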

  7. Analysis of Whiskey by Dispersive Liquid-Liquid Microextraction Coupled with Gas Chromatography/Mass Spectrometry: An Upper Division Analytical Chemistry Experiment Guided by Green Chemistry

    ERIC Educational Resources Information Center

    Owens, Janel E.; Zimmerman, Laura B.; Gardner, Michael A.; Lowe, Luis E.

    2016-01-01

    Analysis of whiskey samples prepared by a green microextraction technique, dispersive liquid-liquid microextraction (DLLME), before analysis by a qualitative gas chromatography-mass spectrometry (GC/MS) method, is described as a laboratory experiment for an upper division instrumental methods of analysis laboratory course. Here, aroma compounds in…

  8. [Fused sample preparation after nitric acid treatment for the determination of major and minor elements in manganese ore by X-ray fluorescence spectrometry].

    PubMed

    Song, Yi; Guo, Fen; Gu, Song-hai

    2007-02-01

    Eight components, i.e., Mn, SiO2, Fe, P, Al2O3, CaO, MgO and S, in manganese ore were determined by X-ray fluorescence spectrometry. Because manganese ore samples release many air bubbles during fusion, which affect the accuracy and reproducibility of the determination, nitric acid was added to the sample to destroy organic matter before fusion with the mixed flux at 1000 degrees C. This solved the problem of flux splashing during fusion caused by the air bubbles released as the organic matter volatilized, eliminated particle-size and mineralogical effects, and prevented the volatilization of sulfur during fusion. Experiments were carried out to select the sample preparation conditions, i.e., fusion flux, fusion time and volume of HNO3. Absorption and enhancement matrix effects were corrected with variable theoretical alpha coefficients to extend the range of determination. Moreover, precision and accuracy experiments were performed. In comparison with the chemical analysis method, the quantitative analytical results for each component are satisfactory. The method has proven rapid, precise and simple.

  9. Extraction of organic contaminants from marine sediments and tissues using microwave energy.

    PubMed

    Jayaraman, S; Pruell, R J; McKinney, R

    2001-07-01

    In this study, we compared microwave solvent extraction (MSE) to conventional methods for extracting organic contaminants from marine sediments and tissues with high and varying moisture content. The organic contaminants measured were polychlorinated biphenyl (PCB) congeners, chlorinated pesticides, and polycyclic aromatic hydrocarbons (PAHs). Initial experiments were conducted on dry standard reference materials (SRMs) and field-collected marine sediments. The moisture content of the samples greatly influenced the recovery of the analytes of interest. When wet sediments were included in a sample batch, low recoveries were often encountered in other samples in the batch, including the dry SRM. Experiments were conducted to test the effect of standardizing the moisture content in all samples in a batch prior to extraction. SRM1941a (marine sediment), SRM1974a (mussel tissue), as well as QA96SED6 (marine sediment) and QA96TIS7 (marine tissue), both from the 1996 NIST Intercalibration Exercise, were extracted using microwave and conventional methods. Moisture levels were adjusted in the SRMs to match those of the marine sediment and tissue samples before microwave extraction. The results demonstrated that it is crucial to standardize the moisture content in all samples, including the dry reference material, to ensure good recovery of organic contaminants. MSE yielded equivalent or superior recoveries compared to conventional methods for the majority of the compounds evaluated. The advantages of MSE over conventional methods are reduced solvent usage, higher sample throughput, and the elimination of halogenated solvent usage.

  10. Determination of platinum in waste platinum-loaded carbon catalyst samples using microwave-assisted sample digestion and ICP-OES

    NASA Astrophysics Data System (ADS)

    Ma, Yinbiao; Wei, Xiaojuan

    2017-04-01

    A novel method for the determination of platinum in waste platinum-loaded carbon catalyst samples was established using inductively coupled plasma optical emission spectrometry after the samples were digested in a microwave oven with aqua regia. Experimental conditions, including the sample digestion method, digestion time, digestion temperature, and interfering ions, were investigated for their influence on the determination. Under the optimized conditions, the linear range of the calibration graph for Pt was 0-200.00 mg L-1, and the recovery was 95.67%-104.29%. The relative standard deviation (RSD) for Pt was 1.78%. The proposed method was applied to the same samples analyzed by atomic absorption spectrometry, with consistent results, showing that it is suitable for the determination of platinum in waste platinum-loaded carbon catalyst samples.

  11. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE PAGES

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    2017-10-26

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
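
    For intuition about the SPDE route to sampling correlated fields, here is a minimal 1-D finite-difference stand-in for the paper's mixed finite element formulation: solving a reaction-diffusion equation with a white-noise right-hand side yields a Matérn-like Gaussian field without any dense eigendecomposition. All sizes and the value of kappa are illustrative assumptions:

```python
# 1-D sketch of SPDE-based Gaussian field sampling: solve
# (kappa^2 - Laplacian) u = white noise on a grid with a sparse solver.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
h = 1.0 / n
kappa = 10.0                                  # controls correlation length
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
A = kappa**2 * sp.identity(n) - lap           # discretized SPDE operator
rng = np.random.default_rng(1)
w = rng.normal(size=n) / np.sqrt(h)           # discretized white noise
u = spla.spsolve(A.tocsc(), w)                # one sample of the random field
print(u[:5])
```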

  13. Least squares polynomial chaos expansion: A review of sampling strategies

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Doostan, Alireza

    2018-04-01

    As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison of the empirical performance of the selected sampling methods on three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms the other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
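
    The core of least squares PCE is small enough to sketch. Below is a toy 1-D version with plain Monte Carlo sampling (the review's baseline strategy): probabilists' Hermite polynomials form the orthogonal basis for a Gaussian input, and the coefficients come from an ordinary least squares solve. The model function and sizes are assumptions for illustration:

```python
# Least-squares PCE in 1-D with Monte Carlo sampling.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)
f = lambda x: np.exp(0.5 * x)          # model to build a surrogate for
order, n_samples = 5, 50               # oversampling ratio ~ n/(order+1)
x = rng.normal(size=n_samples)         # Monte Carlo sample of the Gaussian input
Psi = hermevander(x, order)            # design matrix of Hermite polynomials
coef, *_ = np.linalg.lstsq(Psi, f(x), rcond=None)
x_test = rng.normal(size=1000)
err = np.abs(hermevander(x_test, order) @ coef - f(x_test)).max()
print(coef, err)                       # PCE coefficients and max surrogate error
```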

  14. Quadrature demodulation based circuit implementation of pulse stream for ultrasonic signal FRI sparse sampling

    NASA Astrophysics Data System (ADS)

    Shoupeng, Song; Zhou, Jiang

    2017-03-01

    Converting an ultrasonic signal to an ultrasonic pulse stream is the key step of finite rate of innovation (FRI) sparse sampling. At present, ultrasonic pulse-stream-forming techniques are mainly based on digital algorithms; no hardware circuit that can achieve this has been reported. This paper proposes a new quadrature demodulation (QD) based circuit implementation method for forming an ultrasonic pulse stream. After elaborating on FRI sparse sampling theory, the processing of the ultrasonic signal is explained, followed by a discussion and analysis of ultrasonic pulse-stream-forming methods. In contrast to ultrasonic signal envelope extraction techniques, a quadrature demodulation method (QDM) is proposed. Simulation experiments were performed to determine its performance at various signal-to-noise ratios (SNRs). The circuit was then designed, with a mixing module, oscillator, low-pass filter (LPF), and root-of-square-sum module. Finally, application experiments were carried out on ultrasonic flaw testing of a pipeline sample. The experimental results indicate that the QDM can accurately convert an ultrasonic signal to an ultrasonic pulse stream and recover the original signal information, such as pulse width, amplitude, and time of arrival. This technique lays the foundation for performing ultrasonic signal FRI sparse sampling directly with hardware circuitry.
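
    The signal-processing content of QD is easy to sketch in software, even though the paper's contribution is a hardware circuit. In this minimal simulation (carrier and sampling frequencies are assumed values), the signal is mixed with cosine and sine at the carrier frequency, both branches are low-pass filtered, and the root of the squared sum recovers the pulse envelope and time of arrival:

```python
# Quadrature demodulation of a synthetic ultrasonic echo.
import numpy as np
from scipy.signal import butter, filtfilt

fs, fc = 50e6, 5e6                        # sampling and carrier frequencies (Hz)
t = np.arange(0, 20e-6, 1 / fs)
env = np.exp(-((t - 8e-6) ** 2) / (2 * (0.5e-6) ** 2))  # Gaussian pulse envelope
sig = env * np.cos(2 * np.pi * fc * t)                   # received echo
i_br = sig * np.cos(2 * np.pi * fc * t)                  # in-phase branch
q_br = sig * np.sin(2 * np.pi * fc * t)                  # quadrature branch
b, a = butter(4, 2e6 / (fs / 2))                         # LPF removes 2*fc terms
stream = 2 * np.hypot(filtfilt(b, a, i_br), filtfilt(b, a, q_br))
print(np.argmax(stream) / fs)             # recovered time of arrival (~8 us)
```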

  15. Environmental analysis of higher brominated diphenyl ethers and decabromodiphenyl ethane.

    PubMed

    Kierkegaard, Amelie; Sellström, Ulla; McLachlan, Michael S

    2009-01-16

    Methods for environmental analysis of higher brominated diphenyl ethers (PBDEs), in particular decabromodiphenyl ether (BDE209), and the recently discovered environmental contaminant decabromodiphenyl ethane (deBDethane) are reviewed. The extensive literature on analysis of BDE209 has identified several critical issues, including contamination of the sample, degradation of the analyte during sample preparation and GC analysis, and the selection of appropriate detection methods and surrogate standards. The limited experience with the analysis of deBDethane suggests that there are many commonalities with BDE209. The experience garnered from the analysis of BDE209 over the last 15 years will greatly facilitate progress in the analysis of deBDethane.

  16. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

    We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
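
    One standard ingredient that such hybrid schemes build on is importance sampling of a forward-peaked phase function. As a hedged, self-contained fragment (the paper's truncation and automatic parameter tuning are not reproduced here), the standard Henyey-Greenstein inversion below draws scattering cosines whose sample mean recovers the asymmetry parameter g:

```python
# Importance sampling of scattering angles from the
# Henyey-Greenstein phase function via inverse-CDF sampling.
import numpy as np

def sample_hg_costheta(g, xi):
    """Draw cos(theta) from the Henyey-Greenstein phase function."""
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0                     # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

rng = np.random.default_rng(0)
g = 0.9                                            # strongly forward-peaked
samples = [sample_hg_costheta(g, rng.random()) for _ in range(100000)]
print(np.mean(samples))                            # mean cosine -> g
```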

  17. Results from the FIN-2 formal comparison

    NASA Astrophysics Data System (ADS)

    Connolly, Paul; Hoose, Corinna; Liu, Xiaohong; Moehler, Ottmar; Cziczo, Daniel; DeMott, Paul

    2017-04-01

    During the Fifth International Ice Nucleation Workshop (FIN-2) at the AIDA Ice Nucleation facility in Karlsruhe, Germany in March 2015, a formal comparison of ice nucleation measurement methods was conducted. During the experiments the samples of ice nucleating particles were not revealed to the instrument scientists; hence this was referred to as a "blind comparison". The two samples used were later revealed to be Arizona Test Dust and an Argentina soil sample. For these two samples, seven mobile ice nucleating particle counters sampled directly from the AIDA chamber or from the aerosol preparation chamber at specified temperatures, whereas filter samples were taken for two offline deposition nucleation instruments. Wet suspension methods for determining INP concentrations were also used, with 10 different methods employed. For the wet suspension methods, experiments were conducted using INPs collected from the air inside the chambers (impinger sampling) and INPs taken from the bulk samples (vial sampling). Direct comparisons of the ice nucleating particle concentrations are reported, as well as derived ice nucleation active site densities. The study highlights the difficulties in performing such analyses, but generally indicates that there is reasonable agreement between the wet suspension techniques. It is noted that the ice nucleation efficiency derived from the AIDA chamber (quantified using the ice active surface site density approach) is higher than that for the cold stage techniques. This is true both for the Argentina soil sample and, to a lesser extent, for the Arizona Test Dust sample. Other interesting effects were noted: for the ATD, the impinger sampling demonstrated higher INP efficiency at higher temperatures (>255 K) than the vial sampling but agreed at the lower temperatures (<255 K), whereas the opposite was true for the Argentina soil sample. The results are analysed to better understand the performance of the various techniques and to address any size-sorting effects and/or sampling line losses.

  18. Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli.

    PubMed

    Westfall, Jacob; Kenny, David A; Judd, Charles M

    2014-10-01

    Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.
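
    The paper's central point, that power plateaus as participants increase because stimulus variability does not average away, can be seen in a small simulation. All variance components below are illustrative, and the by-stimulus aggregation is a simplification of a full mixed-model analysis:

```python
# Simulated power for a between-condition comparison with stimuli
# as a random factor: adding participants shrinks residual noise
# but leaves stimulus-sampling noise untouched.
import numpy as np
from scipy.stats import ttest_ind

def power(n_part, n_stim, d=0.5, stim_sd=0.4, resid_sd=1.0, reps=2000):
    rng = np.random.default_rng(0)
    hits = 0
    for _ in range(reps):
        stim_a = rng.normal(d, stim_sd, n_stim)    # condition A stimulus means
        stim_b = rng.normal(0.0, stim_sd, n_stim)  # condition B stimulus means
        # averaging over participants reduces only the residual component
        a = stim_a + rng.normal(0, resid_sd / np.sqrt(n_part), n_stim)
        b = stim_b + rng.normal(0, resid_sd / np.sqrt(n_part), n_stim)
        hits += ttest_ind(a, b).pvalue < 0.05
    return hits / reps

for n_part in (10, 100, 10000):
    print(n_part, power(n_part, n_stim=8))  # power saturates well below 1
```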

  19. The Social Experiences of High School Students with Visual Impairments

    ERIC Educational Resources Information Center

    Jessup, Glenda; Bundy, Anita C.; Broom, Alex; Hancock, Nicola

    2017-01-01

    Introduction: This study explores the social experiences in high school of students with visual impairments. Methods: Experience sampling methodology was used to examine (a) how socially included students with visual impairments feel, (b) the internal qualities of their activities, and (c) the factors that influence a sense of inclusion. Twelve…

  20. Putting Biology Students Out to Grass: the Nettlecombe Experiment After Thirteen Years.

    ERIC Educational Resources Information Center

    Crothers, J. H.; Lucas, A. M.

    1982-01-01

    The importance of examining both the natural history of organisms being investigated and numerical data from long-term field experiments is illustrated by describing a long-running field experiment at an English Field Study Council Centre. Sample results are discussed and alternative methods of using field studies in biology instruction are…

  1. The EIPeptiDi tool: enhancing peptide discovery in ICAT-based LC MS/MS experiments.

    PubMed

    Cannataro, Mario; Cuda, Giovanni; Gaspari, Marco; Greco, Sergio; Tradigo, Giuseppe; Veltri, Pierangelo

    2007-07-15

    Isotope-coded affinity tagging (ICAT) is a method for quantitative proteomics based on differential isotopic labeling, sample digestion and mass spectrometry (MS). The method allows the identification and relative quantification of proteins present in two samples and consists of the following phases. First, cysteine residues are labeled using either the ICAT Light or the ICAT Heavy reagent (which have identical chemical properties but different masses). Then, after whole-sample digestion, the labeled peptides are captured selectively using the biotin tag contained in both ICAT reagents. Finally, the simplified peptide mixture is analyzed by nanoscale liquid chromatography-tandem mass spectrometry (LC-MS/MS). Nevertheless, the ICAT LC-MS/MS method still suffers from insufficient sample-to-sample reproducibility in peptide identification. In particular, the number and the type of peptides identified in different experiments can vary considerably and, thus, the statistical (comparative) analysis of sample sets is very challenging. Low information overlap at the peptide and, consequently, at the protein level is very detrimental in situations where the number of samples to be analyzed is high. We designed a method for improving the data processing and peptide identification in sample sets subjected to ICAT labeling and LC-MS/MS analysis, based on cross-validating MS/MS results. This method has been implemented in a tool, called EIPeptiDi, which boosts ICAT data analysis by improving peptide identification throughout the input data set. Heavy/Light (H/L) pairs quantified but not identified by the MS/MS routine are assigned to peptide sequences identified in other samples, using similarity criteria based on chromatographic retention time and Heavy/Light mass attributes. EIPeptiDi significantly improves the number of identified peptides per sample, proving that the proposed method has a considerable impact on the protein identification process and, consequently, on the amount of potentially critical information in clinical studies. The EIPeptiDi tool is available at http://bioingegneria.unicz.it/~veltri/projects/eipeptidi/ with a demo data set. EIPeptiDi significantly increases the number of peptides identified and quantified in analyzed samples, thus reducing the number of unassigned H/L pairs and allowing a better comparative analysis of sample data sets.
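
    The cross-assignment step can be sketched in a few lines. Tolerances, field layout, and the example peptides below are placeholders, not EIPeptiDi's actual values: an H/L pair quantified but unidentified in one run inherits a sequence identified in another run when retention time and mass both match within tolerance:

```python
# Toy cross-run assignment of quantified-but-unidentified H/L pairs.
RT_TOL, MASS_TOL = 0.5, 0.02   # minutes, Da (illustrative tolerances)

identified = [  # (sequence, retention time, light-chain mass) from other runs
    ("PEPTIDER", 35.2, 944.47),
    ("ACDEFGHK", 41.8, 893.39),
]

def assign(rt, mass):
    """Return a sequence whose RT and mass match within tolerance, if any."""
    for seq, rt_i, m_i in identified:
        if abs(rt - rt_i) <= RT_TOL and abs(mass - m_i) <= MASS_TOL:
            return seq
    return None

print(assign(35.4, 944.48))   # -> PEPTIDER
print(assign(60.0, 500.00))   # -> None (no confident match)
```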

  2. New Light Sources and Concepts for Electro-Optic Sampling

    DTIC Science & Technology

    1994-03-01

    Research to improve electro-optic sampling led to the development of several high performance optical phase modulators. These phase modulators serve...method of optical pulse shape measurement was demonstrated with 3 ps time resolution, excellent power sensitivity and relative system simplicity. These experiments have opened up the field of temporal optics. Electro-optic sampling.

  3. How Depressive Levels Are Related to the Adults' Experiences of Lower-Limb Amputation: A Mixed Methods Pilot Study

    ERIC Educational Resources Information Center

    Senra, Hugo

    2013-01-01

    The current pilot study aims to explore whether different adults' experiences of lower-limb amputation could be associated with different levels of depression. To achieve these study objectives, a convergent parallel mixed methods design was used in a convenience sample of 42 adult amputees (mean age of 61 years; SD = 13.5). All of them had…

  4. Determination of the Effect of Various Modes of Cooking on the Vitamin C Content of a Common Food, Green Pepper: An Introductory Biochemistry Experiment.

    ERIC Educational Resources Information Center

    Johnson, Eric R.

    1988-01-01

    Describes a laboratory experiment that measures the amount of ascorbic acid destroyed by food preparation methods (boiling and steaming). Points out that aqueous extracts of cooked green pepper samples can be analyzed for ascorbic acid by a relatively simple redox titration. Lists experimental procedure for four methods of preparation. (MVL)

  5. Striving and Thriving in a Foreign Culture: A Mixed Method Approach on Adult International Students' Experience in U.S.A.

    ERIC Educational Resources Information Center

    Chen, Dianbing; Yang, Xinxiao

    2014-01-01

    In this mixed method study, we examined the experience of a sample of international students in four American universities to identify the factors that might enhance their ability in surviving and thriving in a foreign country within the context of university internationalization. The research explored the concepts of cultural values, behaviors,…

  6. Examining Latina College Experiences

    ERIC Educational Resources Information Center

    Romero, Amanda R.

    2012-01-01

    The purposes of this qualitative narrative study were to explore the potential areas of conflict Latina college students experience between their educational goals and traditional cultural gender roles and expectations. Participants were selected utilizing purposeful sampling methods. All participants were first-generation college students.…

  7. An Improved Experiment to Illustrate the Effect of Electronegativity on Chemical Shift.

    ERIC Educational Resources Information Center

    Boggess, Robert K.

    1988-01-01

    Describes a method for using nuclear magnetic resonance to observe the effect of electronegativity on the chemical shift of protons in similar compounds. Suggests the use of 1,3-dihalopropanes as samples. Includes sample questions. (MVL)

  8. A modified experimental setup for sedimentation equilibrium experiments with gels. Part 2: Technical developments.

    PubMed

    Cölfen, H; Borchard, W

    1994-06-01

    This part of the paper trilogy describes technical developments for an efficient experimental setup to investigate gels with equilibrium analytical ultracentrifugation. New 10-channel centerpieces for the Schlieren optics, a new programmable multiplexer, a modified Schlieren optical system, and a photo pickup with impulse transformer are introduced as the major developments. Also, some new centerpieces suitable for equilibrium experiments with solutions using the Rayleigh interference and the UV-absorption optics are presented. These centerpieces allow the investigation of 10, 12, or even 26 samples per centerpiece. The problem of finding suitable materials for cell centerpieces and windows in the case of adhering samples is discussed for the system gelatin/water. A phase volume calculation for circular sample channels, as a correction for the case of broadened menisci, is presented. The method described allows an accurate measurement of up to 70 samples simultaneously in an equilibrium experiment if the 8-hole rotor presented in part 1 of the trilogy is used. This number of samples is sufficient to characterize a gel/solvent system in the experimentally accessible range under identical conditions, which is not possible by means of any of the methods known before. All parts described are also applicable to the investigation of solutions.

  9. Comparison method for uranium determination in ore sample by inductively coupled plasma optical emission spectrometry (ICP-OES).

    PubMed

    Sert, Şenol

    2013-07-01

    A comparison of methods for the determination of uranium in ore (without sample pre-concentration) by inductively coupled plasma optical emission spectrometry (ICP-OES) has been performed. The experiments were conducted using three procedures: matrix matching, plasma optimization, and internal standardization, for three emission lines of uranium. Three wavelengths of Sm were tested as internal standards for the internal standardization method. Robust conditions were evaluated by varying the applied radiofrequency power, nebulizer argon gas flow rate, and sample uptake flow rate, and by considering the intensity ratio of the Mg(II) 280.270 nm and Mg(I) 285.213 nm lines. The analytical characteristics of the method were assessed by limit of detection and relative standard deviation values. The certified reference soil sample IAEA S-8 was analyzed, and uranium determination at 367.007 nm with internal standardization using Sm at 359.260 nm was shown to improve accuracy compared with the other methods. The developed method was used for real uranium ore sample analysis.

  10. Correcting for intra-experiment variation in Illumina BeadChip data is necessary to generate robust gene-expression profiles.

    PubMed

    Kitchen, Robert R; Sabine, Vicky S; Sims, Andrew H; Macaskill, E Jane; Renshaw, Lorna; Thomas, Jeremy S; van Hemert, Jano I; Dixon, J Michael; Bartlett, John M S

    2010-02-24

    Microarray technology is a popular means of producing whole genome transcriptional profiles, however high cost and scarcity of mRNA has led many studies to be conducted based on the analysis of single samples. We exploit the design of the Illumina platform, specifically multiple arrays on each chip, to evaluate intra-experiment technical variation using repeated hybridisations of universal human reference RNA (UHRR) and duplicate hybridisations of primary breast tumour samples from a clinical study. A clear batch-specific bias was detected in the measured expressions of both the UHRR and clinical samples. This bias was found to persist following standard microarray normalisation techniques. However, when mean-centering or empirical Bayes batch-correction methods (ComBat) were applied to the data, inter-batch variation in the UHRR and clinical samples were greatly reduced. Correlation between replicate UHRR samples improved by two orders of magnitude following batch-correction using ComBat (ranging from 0.9833-0.9991 to 0.9997-0.9999) and increased the consistency of the gene-lists from the duplicate clinical samples, from 11.6% in quantile normalised data to 66.4% in batch-corrected data. The use of UHRR as an inter-batch calibrator provided a small additional benefit when used in conjunction with ComBat, further increasing the agreement between the two gene-lists, up to 74.1%. In the interests of practicalities and cost, these results suggest that single samples can generate reliable data, but only after careful compensation for technical bias in the experiment. We recommend that investigators appreciate the propensity for such variation in the design stages of a microarray experiment and that the use of suitable correction methods become routine during the statistical analysis of the data.
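
    Of the two corrections discussed, mean-centering is simple enough to sketch directly (ComBat additionally applies empirical-Bayes shrinkage to batch means and variances). A minimal version with a synthetic batch offset, assuming a genes-by-samples expression matrix:

```python
# Per-batch mean-centering of a gene expression matrix.
import numpy as np

def mean_center_batches(expr, batches):
    """expr: (genes, samples) array; batches: per-sample batch labels."""
    out = expr.copy()
    grand = expr.mean(axis=1, keepdims=True)
    for b in set(batches):
        cols = [i for i, lab in enumerate(batches) if lab == b]
        # remove the batch-specific offset per gene, keep the grand mean
        out[:, cols] += grand - expr[:, cols].mean(axis=1, keepdims=True)
    return out

rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 6)) + np.array([1, 1, 1, 0, 0, 0])  # batch shift
corrected = mean_center_batches(expr, ["A", "A", "A", "B", "B", "B"])
print(corrected[:, :3].mean(), corrected[:, 3:].mean())  # batches now comparable
```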

  12. Efficient method of image edge detection based on FSVM

    NASA Astrophysics Data System (ADS)

    Cai, Aiping; Xiong, Xiaomei

    2013-07-01

    For efficient object edge detection in digital images, this paper studies traditional methods and an algorithm based on SVM. Analysis shows that the Canny edge detection algorithm produces some pseudo-edges and has poor anti-noise capability. In order to provide a reliable edge extraction method, a new detection algorithm based on FSVM is proposed. It contains several steps: first, classified training samples are used, with a different membership function assigned to each sample. Then, a new training sample set is formed by increasing the punishment on misclassified sub-samples, and the new FSVM classification model is trained and tested on it. Finally, the edges of the object image are extracted using the model. Experimental results show that good edge detection images are obtained, and experiments with added noise show that the method has good anti-noise capability.
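
    A rough stand-in for the fuzzy-membership idea, using per-sample weights in a standard soft-margin SVM rather than the paper's exact FSVM formulation: a first pass identifies misclassified samples, whose punishment weight is then increased before retraining:

```python
# Weighted-SVM sketch of increasing the penalty on misclassified samples.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, random_state=0)
w = np.ones(len(y))                        # initial membership weights
clf = SVC(kernel="rbf").fit(X, y, sample_weight=w)
wrong = clf.predict(X) != y
w[wrong] *= 2.0                            # increase punishment on errors
clf2 = SVC(kernel="rbf").fit(X, y, sample_weight=w)
print((clf2.predict(X) == y).mean())       # training accuracy after reweighting
```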

  13. Optimal color design of psychological counseling room by design of experiments and response surface methodology.

    PubMed

    Liu, Wenjuan; Ji, Jianlin; Chen, Hua; Ye, Chenyu

    2014-01-01

    Color is one of the most powerful aspects of a psychological counseling environment. Little scientific research has been conducted on color design, and much of the existing literature is based on observational studies. Using design of experiments and response surface methodology, this paper proposes an optimal color design approach for transforming patients' perception into color elements. Six indices, pleasant-unpleasant, interesting-uninteresting, exciting-boring, relaxing-distressing, safe-fearful, and active-inactive, were used to assess patients' impressions. A total of 75 patients participated, 42 in Experiment 1 and 33 in Experiment 2. Twenty-seven representative color samples were designed in Experiment 1, and the color sample (L = 75, a = 0, b = -60) was the most preferred. In Experiment 2, this color sample was set as the 'central point', and the three color attributes were optimized to maximize the patients' satisfaction. The experimental results show that the proposed method can obtain an optimal solution for the color design of a counseling room.
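
    The response-surface step can be illustrated compactly: fit a full quadratic model of satisfaction over (L, a, b) by least squares, then maximize it over a grid. The ratings below are synthetic (the study's data are not reproduced here), constructed so the optimum sits near the reported preferred color:

```python
# Quadratic response-surface fit and grid maximization over color attributes.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
# 3-level full factorial design over (L, a, b) with simulated mean ratings
pts = np.array(list(product([60, 75, 90], [-20, 0, 20], [-60, -30, 0])), float)
truth = lambda p: -((p[0] - 75) ** 2) / 200 - (p[1] ** 2) / 100 \
                  - ((p[2] + 60) ** 2) / 400
ratings = np.array([truth(p) for p in pts]) + rng.normal(0, 0.1, len(pts))

def features(p):  # full quadratic model: 1, linear, squares, interactions
    L, a, b = p
    return [1, L, a, b, L * L, a * a, b * b, L * a, L * b, a * b]

beta, *_ = np.linalg.lstsq(np.array([features(p) for p in pts]),
                           ratings, rcond=None)
grid = np.array(list(product(range(60, 91), range(-20, 21), range(-60, 1))),
                float)
best = grid[np.argmax([np.dot(features(p), beta) for p in grid])]
print(best)   # should land near (75, 0, -60)
```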

  14. Split-plot microarray experiments: issues of design, power and sample size.

    PubMed

    Tsai, Pi-Wen; Lee, Mei-Ling Ting

    2005-01-01

    This article focuses on microarray experiments with two or more factors in which treatment combinations of the factors corresponding to the samples paired together onto arrays are not completely random. A main effect of one (or more) factor(s) is confounded with arrays (the experimental blocks). This is called a split-plot microarray experiment. We utilise an analysis of variance (ANOVA) model to assess differentially expressed genes for between-array and within-array comparisons that are generic under a split-plot microarray experiment. Instead of standard t- or F-test statistics that rely on mean square errors of the ANOVA model, we use a robust method, referred to as 'a pooled percentile estimator', to identify genes that are differentially expressed across different treatment conditions. We illustrate the design and analysis of split-plot microarray experiments based on a case application described by Jin et al. A brief discussion of power and sample size for split-plot microarray experiments is also presented.

  16. An Undergraduate Field Experiment for Measuring Exposure to Environmental Tobacco Smoke in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Marsella, Adam M.; Huang, Jiping; Ellis, David A.; Mabury, Scott A.

    1999-12-01

    An undergraduate field experiment is described for the measurement of nicotine and various carbonyl compounds arising from environmental tobacco smoke. Students are introduced to practical techniques in HPLC-UV and GC-NPD. Also introduced are current methods in personal air sampling using small and portable field sampling pumps. Carbonyls (formaldehyde, acetaldehyde, acrolein, and acetone) are sampled with silica solid-phase extraction cartridges impregnated with 2,4-dinitrophenylhydrazine, eluted, and analyzed by HPLC-UV (360-380 nm). Nicotine is sampled using XAD-2 cartridges, extracted, and analyzed by GC-NPD. Students gain an appreciation for the problems associated with measuring ubiquitous pollutants such as formaldehyde, as well as the issue of chromatographic peak resolution when trying to resolve closely eluting peaks. By allowing the students to formulate their own hypothesis and sampling scheme, critical thinking and problem solving are developed in addition to analysis skills. As an experiment in analytical environmental chemistry, this laboratory introduces the application of field sampling and analysis techniques to the undergraduate lab.

  17. Beyond Fourier.

    PubMed

    Hoch, Jeffrey C

    2017-10-01

    Non-Fourier methods of spectrum analysis are gaining traction in NMR spectroscopy, driven by their utility for processing nonuniformly sampled data. These methods afford new opportunities for optimizing experiment time, resolution, and sensitivity of multidimensional NMR experiments, but they also pose significant challenges not encountered with the discrete Fourier transform. A brief history of non-Fourier methods in NMR serves to place different approaches in context. Non-Fourier methods reflect broader trends in the growing importance of computation in NMR, and offer insights for future software development. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. In Situ 3D Coherent X-ray Diffraction Imaging of Shock Experiments: Possible?

    NASA Astrophysics Data System (ADS)

    Barber, John

    2011-03-01

    In traditional coherent X-ray diffraction imaging (CXDI), a 2D or quasi-2D object is illuminated by a beam of coherent X-rays to produce a diffraction pattern, which is then manipulated via a process known as iterative phase retrieval to reconstruct an image of the original 2D sample. Recently, there have been dramatic advances in methods for performing fully 3D CXDI of a sample from a single diffraction pattern [Raines et al, Nature 463 214-7 (2010)], and these methods have been used to image samples tens of microns in size using soft X-rays. In this work, I explore the theoretical possibility of applying 3D CXDI techniques to the in situ imaging of the interaction between a shock front and a polycrystal, a far more stringent problem. A delicate trade-off is required between photon energy, spot size, imaging resolution, and the dimensions of the experimental setup. In this talk, I will outline the experimental and computational requirements for performing such an experiment, and I will present images and movies from simulations of one such hypothetical experiment, including both the time-resolved X-ray diffraction patterns and the time-resolved sample imagery.
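
    For readers unfamiliar with iterative phase retrieval, a bare-bones 2-D error-reduction loop (far simpler than the single-shot 3-D CXDI discussed, and with an assumed exactly known support) alternates between enforcing the measured Fourier magnitudes and real-space constraints:

```python
# Error-reduction (Gerchberg-Saxton-type) phase retrieval in 2-D.
import numpy as np

rng = np.random.default_rng(0)
obj = np.zeros((64, 64))
obj[24:40, 24:40] = rng.random((16, 16))   # true sample (nonnegative)
meas = np.abs(np.fft.fft2(obj))            # measured diffraction magnitudes
support = obj > 0                          # assumed known support

guess = rng.random(obj.shape) * support
for _ in range(500):
    F = np.fft.fft2(guess)
    F = meas * np.exp(1j * np.angle(F))    # impose measured magnitudes
    guess = np.real(np.fft.ifft2(F))
    guess[~support] = 0                    # impose support constraint
    guess[guess < 0] = 0                   # and non-negativity
print(np.linalg.norm(guess - obj) / np.linalg.norm(obj))  # relative error
```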

  19. ALE: automated label extraction from GEO metadata.

    PubMed

    Giles, Cory B; Brown, Chase A; Ripperger, Michael; Dennis, Zane; Roopnarinesingh, Xiavan; Porter, Hunter; Perz, Aleksandra; Wren, Jonathan D

    2017-12-28

    NCBI's Gene Expression Omnibus (GEO) is a rich community resource containing millions of gene expression experiments from human, mouse, rat, and other model organisms. However, information about each experiment (metadata) is in the format of an open-ended, non-standardized textual description provided by the depositor. Thus, classification of experiments for meta-analysis by factors such as gender, age of the sample donor, and tissue of origin is not feasible without assigning labels to the experiments. Automated approaches are preferable for this, primarily because of the size and volume of the data to be processed, but also because they ensure standardization and consistency. While some of these labels can be extracted directly from the textual metadata, much of the available data do not contain explicit text informing the researcher about the age and gender of the subjects within the study. To bridge this gap, machine-learning methods can be trained to use the gene expression patterns associated with the text-derived labels to refine label-prediction confidence. Our analysis shows that only 26% of metadata text contains information about gender and 21% about age. In order to ameliorate the lack of available labels for these data sets, we first extract labels from the textual metadata for each GEO RNA dataset and evaluate the performance against a gold standard of manually curated labels. We then use machine-learning methods to predict labels based upon the gene expression of the samples and compare this to the text-based method. Here we present an automated method to extract labels for age, gender, and tissue from textual metadata and GEO data using both a heuristic approach and machine learning. We show the two methods together improve the accuracy of label assignment to GEO samples.
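
    The heuristic half of such a pipeline amounts to pattern matching over the free-text metadata. A toy sketch with made-up patterns (these are illustrative, not ALE's actual rules):

```python
# Heuristic extraction of age and gender labels from GEO-style metadata text.
import re

def extract_labels(text):
    labels = {}
    m = re.search(r"\bage[:=\s]+(\d+(?:\.\d+)?)\s*(?:years?|yrs?)?", text, re.I)
    if m:
        labels["age"] = float(m.group(1))
    m = re.search(r"\b(male|female)\b", text, re.I)
    if m:
        labels["gender"] = m.group(1).lower()
    return labels

print(extract_labels("characteristics: gender: Female; age: 63 yrs; tissue: liver"))
# -> {'age': 63.0, 'gender': 'female'}
```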

  20. Chemometric and biological validation of a capillary electrophoresis metabolomic experiment of Schistosoma mansoni infection in mice.

    PubMed

    Garcia-Perez, Isabel; Angulo, Santiago; Utzinger, Jürg; Holmes, Elaine; Legido-Quigley, Cristina; Barbas, Coral

    2010-07-01

    Metabonomic and metabolomic studies are increasingly utilized for biomarker identification in different fields, including the biology of infection. The confluence of improved analytical platforms and the availability of powerful multivariate analysis software have rendered the multiparameter profiles generated by these omics platforms a user-friendly alternative to the established analysis methods, where the quality and practice of a procedure are well defined. However, unlike traditional assays, validation methods for these new multivariate profiling tools have yet to be established. We propose a validation for models obtained by CE fingerprinting of urine from mice infected with the blood fluke Schistosoma mansoni. We have analysed urine samples from two sets of mice infected in an inter-laboratory experiment where different infection methods and animal husbandry procedures were employed, in order to establish the core biological response to a S. mansoni infection. CE data were analysed using principal component analysis. Validation of the scores consisted of permutation scrambling (100 repetitions) and a manual validation method, using a third of the samples (not included in the model) as a test or prediction set. The validation yielded 100% specificity and 100% sensitivity, demonstrating the robustness of these models with respect to deciphering metabolic perturbations in the mouse due to a S. mansoni infection. A total of 20 metabolites across the two experiments were identified that significantly discriminated between S. mansoni-infected and noninfected control samples. Only one of these metabolites, allantoin, was identified as manifesting different behaviour in the two experiments. This study shows the reproducibility of CE-based metabolic profiling methods for disease characterization and screening and highlights the importance of much-needed validation strategies in the emerging field of metabolomics.
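
    Permutation scrambling of the kind described is straightforward to reproduce on synthetic data: if cross-validated class separation on the PCA scores survives label permutation, the model is overfit. The data sizes and classifier below are assumptions, not the paper's exact workflow:

```python
# Permutation-scrambling validation of a PCA-based classification model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1, (20, 200)),      # "control" profiles
               rng.normal(0.8, 1, (20, 200))])     # "infected" profiles
y = np.repeat([0, 1], 20)
scores = PCA(n_components=5).fit_transform(X)      # PCA scores as features
real = cross_val_score(LinearDiscriminantAnalysis(), scores, y, cv=3).mean()
perm = [cross_val_score(LinearDiscriminantAnalysis(), scores,
                        rng.permutation(y), cv=3).mean() for _ in range(100)]
print(real, np.mean(perm))   # real accuracy should far exceed permuted accuracy
```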

  1. Simulation Studies as Designed Experiments: The Comparison of Penalized Regression Models in the “Large p, Small n” Setting

    PubMed Central

    Chaibub Neto, Elias; Bare, J. Christopher; Margolin, Adam A.

    2014-01-01

    New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field experiences a lack of rigorous methodology aimed at systematically and objectively evaluating competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Oftentimes, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well-established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in the planning of experiments we are better able to understand the strengths and weaknesses of competing algorithms, leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where “omics” features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting and our simulations corroborate well-established results concerning the conditions under which each one of these methods is expected to perform best while providing several novel insights. PMID:25289666
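
    A compact version of the kind of comparison such a simulation study designs more rigorously, on a synthetic "large p, small n" problem (all settings, including the regularization strengths, are illustrative rather than tuned):

```python
# Ridge vs. lasso vs. elastic net on a sparse high-dimensional problem.
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p, k = 200, 5000, 20                        # samples, features, true nonzeros
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k] = rng.normal(size=k)                  # sparse true coefficients
y = X @ beta + rng.normal(scale=1.0, size=n)   # noise scale sets SNR
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for name, model in [("ridge", Ridge(alpha=1.0)),
                    ("lasso", Lasso(alpha=0.1, max_iter=5000)),
                    ("enet", ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=5000))]:
    r2 = model.fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: test R^2 = {r2:.3f}")
```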

  2. Transport of explosives I: TNT in soil and its equilibrium vapor

    NASA Astrophysics Data System (ADS)

    Baez, Bibiana; Correa, Sandra N.; Hernandez-Rivera, Samuel P.; de Jesus, Maritza; Castro, Miguel E.; Mina, Nairmen; Briano, Julio G.

    2004-09-01

    Landmine detection is an important task for military operations and for humanitarian demining. Conventional methods for landmine detection involve measurements of physical properties; several of these methods fail to detect modern mines with plastic enclosures. Methods based on the detection of signature explosive chemicals such as TNT and DNT are specific to landmines and explosive devices. However, such methods involve measurements of the vapor trace, which can be misleading as to the actual mine location because of the complex transport phenomena that occur in the soil neighboring the buried landmine. We report the results of a study of explosives subject to environmental conditions similar to those of actual mines. Soil samples containing TNT were used to study the effects of aging, temperature, and moisture under controlled conditions. The soil used in the investigation was Ottawa sand. A JEOL GCMate II gas chromatograph-mass spectrometer coupled to a tunable electron energy monochromator (TEEM-GC/MS) was used to develop the method of analysis of explosives under enhanced detection conditions. Simultaneously, a GC with a 63Ni micro-cell electron capture detector (μECD) was used for the analysis of TNT in sand. Both techniques were coupled with solid-phase microextraction (SPME) methodology to collect TNT-doped sand samples. The experiments were done in both headspace and immersion modes of SPME for the sampling of explosives. In the headspace experiments it was possible to detect appreciable TNT vapors as early as 1 hour after preparing the samples, even at room temperature (20 °C). In the immersion experiments, the I-SPME technique allowed the detection of concentrations as low as 0.010 mg of explosive per kilogram of soil.

  3. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    PubMed

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

    Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. Regarding summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are experimented with, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
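
    The three pooling operators are one-liners over a frames-by-features activation matrix; the sketch below (with random activations standing in for learned sparse codes) shows how a clip-level feature vector is assembled:

```python
# Max-, average-, and standard-deviation pooling over per-frame activations.
import numpy as np

rng = np.random.default_rng(0)
acts = rng.random((500, 128))        # (frames, features) sparse-code activations

max_pool = acts.max(axis=0)          # max-pooling
avg_pool = acts.mean(axis=0)         # average-pooling
std_pool = acts.std(axis=0)          # standard-deviation pooling
clip_feature = np.concatenate([max_pool, avg_pool, std_pool])
print(clip_feature.shape)            # (384,) summary vector for the whole clip
```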

  4. A fast learning method for large scale and multi-class samples of SVM

    NASA Astrophysics Data System (ADS)

    Fan, Yu; Guo, Huiming

    2017-06-01

    A fast learning method for multi-class support vector machine (SVM) classification based on a binary tree is presented to address the low learning efficiency of SVM when processing large-scale, multi-class samples. A bottom-up method is adopted to set up the binary tree hierarchy; according to the achieved hierarchy, a sub-classifier learns from the corresponding samples of each node. During learning, several class clusters are generated after the first clustering of the training samples. First, central points are extracted from those class clusters that contain only one type of sample. For those that contain two types of samples, the cluster numbers of their positive and negative samples are set according to their degree of mixture, and a secondary clustering is undertaken, after which central points are extracted from the resulting sub-class clusters. Sub-classifiers are obtained by learning from the reduced sample set formed by the integration of the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, can guarantee high classification accuracy while greatly reducing the number of samples and effectively improving learning efficiency.
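
    The sample-reduction idea, taken on its own without the binary-tree hierarchy, can be sketched with off-the-shelf clustering: train the SVM on per-class cluster centers instead of all points. Everything below (sizes, cluster counts, kernel) is an illustrative assumption:

```python
# Cluster-center sample reduction before SVM training.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=6000, centers=4, cluster_std=2.0, random_state=0)
Xr, yr = [], []
for c in np.unique(y):
    km = KMeans(n_clusters=20, n_init=5, random_state=0).fit(X[y == c])
    Xr.append(km.cluster_centers_)   # 20 central points per class
    yr += [c] * 20
Xr = np.vstack(Xr)
clf = SVC(kernel="rbf").fit(Xr, np.array(yr))   # trained on 80 points, not 6000
print((clf.predict(X) == y).mean())             # accuracy on the full set
```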

  5. Preservation of Multiple Mammalian Tissues to Maximize Science Return from Ground Based and Spaceflight Experiments.

    PubMed

    Choi, Sungshin; Ray, Hami E; Lai, San-Huei; Alwood, Joshua S; Globus, Ruth K

    2016-01-01

    Even with recent scientific advancements, challenges posed by limited resources and capabilities at the time of sample dissection continue to limit the collection of high quality tissues from experiments that can be conducted only infrequently and at high cost, such as in space. The resources and time it takes to harvest tissues post-euthanasia, and the methods and duration of long-duration storage, potentially have negative impacts on sample quantity and quality, thereby limiting the scientific outcome that can be achieved. The goals of this study were to optimize methods for both sample recovery and science return from rodent experiments, with possible relevance to both ground based and spaceflight studies. The first objective was to determine the impacts of tissue harvest time post-euthanasia, preservation methods, and storage duration, focusing on RNA quality and enzyme activities in liver and spleen as indices of sample quality. The second objective was to develop methods that will maximize science return by dissecting multiple tissues after long-duration storage in situ at -80°C. Tissues of C57Bl/6J mice were dissected and preserved at various time points post-euthanasia and stored at -80°C for up to 11 months. In some experiments, tissues were recovered from frozen carcasses that had been stored at -80°C for up to 7 months. RNA quantity and quality were assessed by measuring RNA Integrity Number (RIN) values using an Agilent Bioanalyzer. Additionally, the quality of tissues was assessed by measuring activities of hepatic enzymes (catalase, glutathione reductase and GAPDH). Fresh tissues were collected up to one hour post-euthanasia, and stored up to 11 months at -80°C, with minimal adverse effects on the RNA quality of either livers or RNAlater-preserved spleens. Liver enzyme activities were similar to those of positive controls, with no significant effect observed at any time point. Tissues dissected from frozen carcasses that had been stored for up to 7 months at -80°C had variable results, depending on the specific tissue analyzed. RNA quality of liver, heart, and kidneys was minimally affected after 6-7 months of storage at -80°C, whereas RNA degradation was evident in tissues such as small intestine, bone, and bone marrow when they were collected from the carcasses frozen for 2.5 months. These results demonstrate that 1) the protocols developed for spaceflight experiments with on-orbit dissections support the retrieval of high quality samples for RNA expression and some protein analyses, despite delayed preservation post-euthanasia or prolonged storage, and 2) many additional tissues for gene expression analysis can be obtained by dissection even following prolonged storage of the tissue in situ at -80°C. These findings have relevance both to high value, ground-based experiments when sample collection capability is severely constrained, and to spaceflight experiments that entail on-orbit sample recovery by astronauts.

  6. Random phase detection in multidimensional NMR.

    PubMed

    Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C

    2011-10-04

    Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.

  7. Influences of sampling volume and sample concentration on the analysis of atmospheric carbonyls by 2,4-dinitrophenylhydrazine cartridge.

    PubMed

    Pal, Raktim; Kim, Ki-Hyun

    2008-03-10

    In this study, the analytical bias involved in the application of the 2,4-dinitrophenylhydrazine (2,4-DNPH)-coated cartridge sampling method was investigated for the analysis of five atmospheric carbonyl species (i.e., acetaldehyde, propionaldehyde, butyraldehyde, isovaleraldehyde, and valeraldehyde). In order to evaluate the potential bias of the sampling technique, a series of laboratory experiments was conducted to cover a wide range of volumes (1-20 L) and concentration levels (approximately 100-2000 ppb in the case of acetaldehyde). The results of these experiments were then evaluated in terms of the recovery rate (RR) for each carbonyl species. The detection properties of these carbonyls were clearly distinguished between light and heavy species in terms of RR and its relative standard error (R.S.E.). The results also indicate that the studied analytical approach yields the most reliable pattern for light carbonyls, especially acetaldehyde. When these experimental results were tested further by a two-factor analysis of variance (ANOVA), the analysis based on the cartridge sampling method was found to be affected more by the concentration levels of the samples than by the sampling volume.

  8. Molecular cancer classification using a meta-sample-based regularized robust coding method.

    PubMed

    Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen

    2014-01-01

    Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension-reduction-based methods.
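
    A simplified classifier in the spirit of MSRC/MRRCC can be sketched as follows; plain lasso coding with an l2 residual stands in for the paper's more elaborate regularized robust coding, and k-means centroids stand in for its meta-sample extraction:

```python
# Meta-sample coding classification: code a test sample over per-class
# meta-samples, then pick the class with the smallest reconstruction residual.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso

def fit_meta(X, y, k=3):
    metas, labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(X[y == c])
        metas.append(km.cluster_centers_)
        labels += [c] * k
    return np.vstack(metas).T, np.array(labels)   # (features, n_meta), labels

def classify(D, labels, x):
    code = Lasso(alpha=0.01, max_iter=5000).fit(D, x).coef_   # sparse coding
    resid = [np.linalg.norm(x - D[:, labels == c] @ code[labels == c])
             for c in np.unique(labels)]
    return np.unique(labels)[int(np.argmin(resid))]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, (30, 50)) for m in (0.0, 2.0)])  # toy GEP data
y = np.repeat([0, 1], 30)
D, lab = fit_meta(X, y)
print(classify(D, lab, X[0]), classify(D, lab, X[45]))   # expect 0 and 1
```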

  9. Study of the influence of the parameters of an experiment on the simulation of pole figures of polycrystalline materials using electron microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antonova, A. O., E-mail: aoantonova@mail.ru; Savyolova, T. I.

    2016-05-15

A two-dimensional mathematical model of a polycrystalline sample and an experiment on electron backscattering diffraction (EBSD) is considered. The measurement parameters are taken to be the scanning step and the threshold grain-boundary angle. Discrete pole figures for materials with hexagonal symmetry have been calculated based on the results of the model experiment. Discrete and smoothed (by the kernel method) pole figures of the model sample and the samples in the model experiment are compared using the χ² homogeneity criterion, an estimate of the pole figure maximum and its coordinate, the deviation of the pole figures of the model in the experiment from those of the sample in the space of L1 measurable functions, and the RP-criterion for estimating the pole figure errors. It is shown that the problem of calculating pole figures is ill-posed and that their determination is not reliable with respect to the measurement parameters.

  10. A visual detection of protein content based on titration of moving reaction boundary electrophoresis.

    PubMed

    Wang, Hou-Yu; Guo, Cheng-Ye; Guo, Chen-Gang; Fan, Liu-Yin; Zhang, Lei; Cao, Cheng-Xi

    2013-04-24

A visual electrophoretic titration method was first developed from the concept of the moving reaction boundary (MRB) for protein content analysis. In the developed method, when the voltage was applied, the hydroxide ions in the cathodic vessel moved towards the anode and neutralized the carboxyl groups of protein immobilized in a highly cross-linked polyacrylamide gel (PAG), generating a MRB between the alkali and the immobilized protein. The boundary moving velocity (V(MRB)) was a function of protein content, and an acid-base indicator was used to denote the boundary displacement. As a proof of concept, standard model proteins and biological samples were chosen for experiments to study the feasibility of the developed method. The experiments revealed good linear calibration functions between V(MRB) and protein content (correlation coefficients R>0.98). The experiments further demonstrated the following merits of the developed method: (1) weak influence of non-protein nitrogen additives (e.g., melamine) adulterated in protein samples, (2) good agreement with the classic Kjeldahl method (R=0.9945), (3) fast measuring speed in total protein analysis of large samples from the same source, and (4) low limit of detection (0.02-0.15 mg mL(-1) for protein content), good precision (R.S.D. of intra-day less than 1.7% and inter-day less than 2.7%), and high recoveries (105-107%). Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.
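
    The calibration-and-inversion step described here is a simple linear fit. As an editorial illustration (all velocity readings below are invented, not the paper's data), one can fit V(MRB) against protein content and invert the line to estimate an unknown sample:

        import numpy as np

        # Hypothetical calibration: boundary velocity (mm/min) vs content (mg/mL).
        content = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
        v_mrb = np.array([0.42, 0.81, 1.63, 2.39, 3.21])

        slope, intercept = np.polyfit(content, v_mrb, 1)
        r = np.corrcoef(content, v_mrb)[0, 1]
        print(f"V_MRB = {slope:.3f} * content + {intercept:.3f}, R = {r:.4f}")

        # Invert the calibration to estimate an unknown sample's protein content.
        v_unknown = 1.95
        print("estimated content:", (v_unknown - intercept) / slope, "mg/mL")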

  11. Designing dipolar recoupling and decoupling experiments for biological solid-state NMR using interleaved continuous wave and RF pulse irradiation.

    PubMed

    Bjerring, Morten; Jain, Sheetal; Paaske, Berit; Vinther, Joachim M; Nielsen, Niels Chr

    2013-09-17

Rapid developments in solid-state NMR methodology have boosted this technique into a highly versatile tool for structural biology. The invention of increasingly advanced rf pulse sequences that take advantage of better hardware and sample preparation has played an important part in these advances. In the development of these new pulse sequences, researchers have taken advantage of analytical tools, such as average Hamiltonian theory, or, more recently, numerical methods based on optimal control theory. In this Account, we focus on the interplay between these strategies in the systematic development of simple pulse sequences that combine continuous wave (CW) irradiation with short pulses to obtain improved rf pulse, recoupling, sampling, and decoupling performance. Our initial work on this problem focused on the challenges associated with the increasing use of fully or partly deuterated proteins to obtain high-resolution, liquid-state-like solid-state NMR spectra. Here we exploit the overwhelming presence of (2)H in such samples as a source of polarization and of structural information. The (2)H nuclei possess dominant quadrupolar couplings which complicate even the simplest operations, such as rf pulses and polarization transfer to surrounding nuclei. Using optimal control and easy analytical adaptations, we demonstrate that a series of rotor-synchronized short pulses may form the basis for essentially ideal rf pulse performance. Using similar approaches, we design (2)H to (13)C polarization transfer experiments that increase the efficiency by one order of magnitude over standard cross polarization experiments. We demonstrate how advanced optimal control waveforms can be translated into simple interleaved CW and rf pulse methods that form a new cross polarization experiment. This experiment significantly improves (1)H-(15)N and (15)N-(13)C transfers, which are key elements in the vast majority of biological solid-state NMR experiments. In addition, we demonstrate how interleaved sampling of spectra exploiting polarization from (1)H and (2)H nuclei can substantially enhance the sensitivity of such experiments. Finally, we present the systematic development of (1)H decoupling methods in which CW irradiation of moderate amplitude is interleaved with strong rotor-synchronized refocusing pulses. We show that these sequences remove residual cross terms between the dipolar coupling and the chemical shielding anisotropy more effectively and improve the spectral resolution over current state-of-the-art methods.

  12. Combining CO2 sequestration and CH4 production by means of guest exchange in a gas hydrate reservoir: two pilot scale experiments

    NASA Astrophysics Data System (ADS)

    Heeschen, Katja U.; Spangenberg, Erik; Schicks, Judith M.; Deusner, Christian; Priegnitz, Mike; Strauch, Bettina; Bigalke, Nikolaus; Luzi-Helbing, Manja; Kossel, Elke; Haeckel, Matthias; Wang, Yi

    2017-04-01

Methane (CH4) hydrates are considered a potential player in the field of energy supply and, if exploited as such, a possible sink for the greenhouse gas carbon dioxide (CO2). In addition to the more conventional production methods of depressurization and thermal stimulation, the extraction of CH4 by means of CO2 injection is being investigated. The method is based on the chemical potential gradient between the CH4 hydrate phase and the injected CO2 phase. Results from small-scale laboratory experiments on the replacement method indicate recovery ratios of up to 66% CH4 but also reveal major discrepancies in conversion rates. So far it has not been demonstrated with certainty that the process rates are sufficient for an energy- and cost-effective production of CH4 with concurrent sequestration of CO2. In a cooperation between GFZ and GEOMAR we used LARS (Large Scale Reservoir Simulator) to investigate the CO2-CH4 replacement method combined with thermal stimulation. LARS accommodates a sample volume of 210 l and allows for the simulation of in situ conditions typically found in gas hydrate reservoirs. Owing to the sample size, diverse transport mechanisms could be simulated, which are assumed to significantly alter process yields. Temperature and pressure data, complemented by high-resolution electrical resistivity tomography (ERT), gas chromatography, and flow measurements, serve to interpret the experiments. In two experiments, 50 kg of heated CO2 was injected into sediments with CH4 hydrate saturations of 50%. While in the first experiment the CO2 was injected discontinuously in a so-called "huff'n puff" manner, the second experiment saw a continuous injection. Conditions within LARS were set to 13 MPa and 8 °C, which allow for the stability of pure CO2 and CH4 hydrates as well as mixed hydrates. The CO2 was heated and entered the sediment sample at temperatures of approximately 30 °C. In this presentation we will discuss the results from the large-scale experiments and compare them with data from small-scale experiments.

  13. Experimental Monitoring of Cr(VI) Bio-reduction Using Electrochemical Geophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birsen Canan; Gary R. Olhoeft; William A. Smith

    2007-09-01

Many Department of Energy (DOE) sites are contaminated with highly carcinogenic hexavalent chromium (Cr(VI)). In this research, we explore the feasibility of applying complex resistivity to the detection and monitoring of microbially induced reduction of hexavalent chromium (Cr(VI)) to a less toxic form (Cr(III)). We hope to measure the change in ionic concentration that occurs during this reduction reaction. This form of reduction promises to be an attractive alternative to more expensive remedial treatment methods. The specific goal of this research is to define the minimum and maximum concentrations of the chemical and biological compounds in contaminated samples for which the Cr(VI)-Cr(III) reduction processes could be detected via complex resistivity. There are three sets of experiments, each comprising three sample columns. The first experiment compares three concentrations of Cr(VI) at the same bacterial cell concentration. The second experiment establishes background samples with, and without, Cr(VI) and bacterial cells. The third experiment examines the influence of three different bacterial cell counts on the same concentration of Cr(VI). A polarization relaxation mechanism was observed between 10 and 50 Hz. The polarization mechanism, unfortunately, was not unique to bio-chemically active samples. Spectral analysis of the complex resistivity data, however, showed that the frequency at which the phase minimum occurred was not constant for bio-chemically active samples throughout the experiment: a significant shift in the phase minimum, from 10 to 20 Hz, occurred between the initiation and completion of Cr(VI) reduction. This phenomenon was quantified using the Cole-Cole model and the Marquardt-Levenberg nonlinear least-squares minimization method. The data suggest that the relaxation time and the time constant of this relaxation are the Cole-Cole parameters most sensitive to changes in biologically induced reduction of Cr(VI).
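
    The Cole-Cole fitting step named in the abstract can be sketched compactly. Below is an editorial Python illustration (synthetic spectrum, invented parameter values; scipy's bounded least_squares standing in for the Marquardt-Levenberg minimization) of fitting a Pelton-style Cole-Cole complex resistivity model and recovering the relaxation parameters.

        import numpy as np
        from scipy.optimize import least_squares

        def cole_cole(omega, rho0, m, tau, c):
            # Pelton-style Cole-Cole complex resistivity model.
            return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

        def residuals(p, omega, data):
            diff = cole_cole(omega, *p) - data
            return np.concatenate([diff.real, diff.imag])

        # Synthetic spectrum with a relaxation near 15 Hz plus noise.
        rng = np.random.default_rng(2)
        f = np.logspace(-1, 3, 40)
        omega = 2 * np.pi * f
        true = (100.0, 0.2, 1 / (2 * np.pi * 15.0), 0.5)
        data = cole_cole(omega, *true) + rng.normal(0, 0.05, f.size)

        fit = least_squares(residuals, x0=(80.0, 0.1, 0.05, 0.4),
                            bounds=([1, 0, 1e-6, 0.1], [1e4, 1, 10, 1.0]),
                            args=(omega, data))
        rho0, m, tau, c = fit.x
        print(f"rho0={rho0:.1f}, m={m:.3f}, tau={tau:.4g} s, c={c:.2f}")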

  14. Recording 2-D Nutation NQR Spectra by Random Sampling Method

    PubMed Central

    Sinyavsky, Nikolaj; Jadzyn, Maciej; Ostafin, Michal; Nogaj, Boleslaw

    2010-01-01

The method of random sampling was introduced for the first time into nutation nuclear quadrupole resonance (NQR) spectroscopy, where the nutation spectra show characteristic singularities in the form of shoulders. Analytic formulae for complex two-dimensional (2-D) nutation NQR spectra (I = 3/2) were obtained, and the condition for resolving the spectral singularities for small values of the asymmetry parameter η was determined. Our results show that random sampling of a nutation interferogram allows a significant reduction of the time required to perform a 2-D nutation experiment without worsening the spectral resolution. PMID:20949121

  15. A reverse engineering approach to optimize experiments for the construction of biological regulatory networks.

    PubMed

    Zhang, Xiaomeng; Shao, Bin; Wu, Yangle; Qi, Ouyang

    2013-01-01

One of the major objectives in systems biology is to understand the relation between the topological structures and the dynamics of biological regulatory networks. In this context, various mathematical tools have been developed to deduce the structures of regulatory networks from microarray expression data. In general, one cannot deduce the whole network structure from a single data set; additional expression data are usually needed. Thus, how to design a microarray expression experiment so as to obtain the most information is a practical problem in systems biology. Here we propose three methods, namely the maximum distance method, the trajectory entropy method, and the sampling method, to derive optimal initial conditions for experiments. The performance of these methods is tested and evaluated in three well-known regulatory networks (budding yeast cell cycle, fission yeast cell cycle, and the E. coli SOS network). Based on this evaluation, we propose an efficient strategy for the design of microarray expression experiments.

  16. Metal–organic complexation in the marine environment

    PubMed Central

    Luther, George W; Rozan, Timothy F; Witter, Amy; Lewis, Brent

    2001-01-01

    We discuss the voltammetric methods that are used to assess metal–organic complexation in seawater. These consist of titration methods using anodic stripping voltammetry (ASV) and cathodic stripping voltammetry competitive ligand experiments (CSV-CLE). These approaches and a kinetic approach using CSV-CLE give similar information on the amount of excess ligand to metal in a sample and the conditional metal ligand stability constant for the excess ligand bound to the metal. CSV-CLE data using different ligands to measure Fe(III) organic complexes are similar. All these methods give conditional stability constants for which the side reaction coefficient for the metal can be corrected but not that for the ligand. Another approach, pseudovoltammetry, provides information on the actual metal–ligand complex(es) in a sample by doing ASV experiments where the deposition potential is varied more negatively in order to destroy the metal–ligand complex. This latter approach gives concentration information on each actual ligand bound to the metal as well as the thermodynamic stability constant of each complex in solution when compared to known metal–ligand complexes. In this case the side reaction coefficients for the metal and ligand are corrected. Thus, this method may not give identical information to the titration methods because the excess ligand in the sample may not be identical to some of the actual ligands binding the metal in the sample. PMID:16759421

  17. Detection of Mycoplasma hyopneumoniae by polymerase chain reaction in swine presenting respiratory problems

    PubMed Central

    Yamaguti, M.; Muller, E.E.; Piffer, A.I.; Kich, J.D.; Klein, C.S.; Kuchiishi, S.S.

    2008-01-01

Since isolation of Mycoplasma hyopneumoniae in appropriate media is a difficult task and impractical for daily routine diagnostics, Nested-PCR (N-PCR) techniques are currently used to improve the direct diagnostic sensitivity of Swine Enzootic Pneumonia. In a first experiment, this paper describes an N-PCR optimization based on three variables (different sampling sites, sample transport media, and DNA extraction methods) using eight pigs. Based on the optimization results, a second experiment was conducted to test validity using 40 animals. In conclusion, the results of the N-PCR optimization and validation allow us to recommend this test as a routine monitoring diagnostic method for Mycoplasma hyopneumoniae infection in swine herds. PMID:24031248

  18. Experiences of Opium Dependents from Performance of Methadone Centers of Kerman, Iran

    PubMed Central

    Banazadeh Mahani, Nabi; Kheradmand, Ali; Abedi, Heidarali

    2009-01-01

Background: To assess patients' satisfaction and to evaluate the methadone therapy program, it is important to understand the experiences of opium dependents during the treatment period in methadone centers, determine the quality of the program, and revise standards accordingly. This study aimed to describe the nature and structure of patients' experiences during treatment in methadone centers. Methods: This was a qualitative study using a phenomenological approach. Sampling was purposive, and the participants were selected from opium dependents referred to Kerman methadone centers during 2007. Sampling continued until data saturation; the sample size was 32. Colaizzi's method was applied for data analysis. Findings: The findings of this study comprised 27 codes categorized in four main groups: experiences of structure, personnel, patients, and regulations. These four categories showed the main structure of experiences in methadone centers. Conclusion: The lack of treatment centers in nearby cities, and the problems of the existing ones, suggest that it is necessary to establish new centers or to solve those centers' problems. The type of patients referred to the centers plays a role in the treatment process. The regular presence of physicians and other personnel, their concern and care for patients, and longer working hours of the centers all contribute to patients' satisfaction and adherence to treatment. Discussing the rules and regulations of the center with patients, including the obligation to attend the center daily to obtain medication and injections, seems necessary. Ways of solving the problems with urine testing also need to be found. PMID:24494087

  19. Acoustic levitation and manipulation for space applications

    NASA Technical Reports Server (NTRS)

    Wang, T. G.

    1979-01-01

A wide spectrum of experiments to be performed in space in a microgravity environment require levitation and manipulation of liquid or molten samples. A novel acoustic method has been developed at JPL for controlling liquid samples without physical contact. This method utilizes the static pressure generated by three orthogonal acoustic standing waves excited within an enclosure. Furthermore, the method allows the sample to be rotated and/or oscillated by modifying the phase angles and/or the amplitude of the acoustic field. The technique has been proven both in our laboratory and in the microgravity environment provided by KC-135 flights. Samples placed within our chamber driven at the (1,0,0), (0,1,0), and (0,0,1) modes were indeed levitated, rotated, and oscillated.

  20. Fluoride glass: Crystallization, surface tension

    NASA Technical Reports Server (NTRS)

    Doremus, R. H.

    1988-01-01

Fluoride glass was levitated acoustically in the ACES apparatus on STS-11, and the recovered sample had a different microstructure from samples cooled in a container. Further experiments on levitated samples of fluoride glass are proposed. These include nucleation, crystallization, melting observations, measurement of the surface tension of molten glass, and observation of bubbles in the glass. Ground experiments are required on sample preparation, outgassing, and surface reactions. The results should help in the development and evaluation of containerless processing, especially of glass; in the development of a contaminant-free method of measuring the surface tensions of melts; in extending knowledge of gas and bubble behavior in fluoride glasses; and in increasing insight into the processing and properties of fluoride glasses.

  1. Everyday Health Communication Experiences of College Students

    ERIC Educational Resources Information Center

    Baxter, Leslie; Egbert, Nichole; Ho, Evelyn

    2008-01-01

    Objective: The authors examined college students' day-to-day health communication experiences. Participants: A convenience sample of 109 midwestern university students participated in the study. Methods: The participants completed health communication diaries for 2 weeks, generating 2,185 records. Frequent health topics included nutrition and…

  2. Recovery of Sublethally Injured Bacteria Using Selective Agar Overlays.

    ERIC Educational Resources Information Center

    McKillip, John L.

    2001-01-01

    This experiment subjects bacteria in a food sample and an environmental sample to conditions of sublethal stress in order to assess the effectiveness of the agar overlay method to recover sublethally injured cells compared to direct plating onto the appropriate selective medium. (SAH)

  3. Measurement of the Thermal Properties of a Metal Using a Relaxation Method

    ERIC Educational Resources Information Center

    Fox, John N.; McMaster, Richard H.

    1975-01-01

    An undergraduate experiment is described which employs a relaxation method for the measurement of the thermal conductivity and specific heat of a metallic sample in a temperature range of 0-100 degrees centigrade. (Author/CP)

  4. Domain Regeneration for Cross-Database Micro-Expression Recognition

    NASA Astrophysics Data System (ADS)

    Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying

    2018-05-01

In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples come from two different micro-expression databases. Under this setting, the training and testing samples have different feature distributions, and hence the performance of most existing micro-expression recognition methods may degrade greatly. To solve this problem, we propose a simple yet effective method called the Target Sample Re-Generator (TSRG). Using TSRG, we are able to re-generate the samples from the target micro-expression database such that the re-generated target samples share the same or similar feature distributions with the original source samples. For this reason, we can then use the classifier learned on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments designed on the SMIC and CASME II databases were conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.

  5. Quantifying Uncertainties from Presence Data Sampling Methods for Species Distribution Modeling: Focused on Vegetation.

    NASA Astrophysics Data System (ADS)

    Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.

    2016-12-01

The impact of climate change has been observed throughout the globe, and ecosystems experience rapid changes such as vegetation shifts and species extinction. In this context, the Species Distribution Model (SDM) is one of the most popular methods for projecting the impact of climate change on ecosystems. An SDM is fundamentally based on the niche of a given species, which means that presence point data are essential for running it. Running an SDM for plants requires certain considerations regarding the characteristics of vegetation. Normally, vegetation data over large areas are produced using remote sensing techniques; in other words, the exact presence points carry high uncertainty because presence data are selected from polygon and raster datasets. Thus, sampling methods for selecting vegetation presence data should be chosen carefully. In this study, we used three different sampling methods for the selection of vegetation presence data: random sampling, stratified sampling, and site-index-based sampling. We used the R package BIOMOD2 to assess the uncertainty from modeling, and included BioCLIM variables and other environmental variables as input data. Despite differences among the 10 SDMs, the sampling methods showed clear differences in ROC values: random sampling showed the lowest ROC value, while site-index-based sampling showed the highest. As a result of this study, the uncertainties arising from presence data sampling methods and SDMs can be quantified.
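
    To illustrate how presence-sampling choices propagate into model skill, here is a toy editorial sketch (not the paper's BIOMOD2 pipeline, which is in R): a synthetic niche is sampled by random versus stratified presence selection, a crude logistic-regression SDM stand-in is fit, and the two AUCs are compared. All data and the niche function are invented.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)

        # Synthetic environment: two gridded variables and a "true" niche probability.
        n = 4000
        env = rng.uniform(0, 1, size=(n, 2))          # e.g. temperature, elevation
        p_true = np.exp(-((env[:, 0] - 0.6) ** 2 + (env[:, 1] - 0.4) ** 2) / 0.02)
        present = rng.uniform(size=n) < p_true

        def fit_auc(idx):
            # Train on sampled presences plus random background; score on all cells.
            bg = rng.choice(np.where(~present)[0], size=idx.size, replace=False)
            X = np.vstack([env[idx], env[bg]])
            y = np.r_[np.ones(idx.size), np.zeros(bg.size)]
            clf = LogisticRegression().fit(X, y)
            return roc_auc_score(present, clf.predict_proba(env)[:, 1])

        pres_idx = np.where(present)[0]
        random_idx = rng.choice(pres_idx, size=100, replace=False)

        # Stratified: sample presences evenly across bins of the first variable.
        bins = np.digitize(env[pres_idx, 0], np.linspace(0, 1, 5))
        strat_idx = np.concatenate(
            [rng.choice(pres_idx[bins == b], size=min(25, (bins == b).sum()),
                        replace=False) for b in np.unique(bins)])

        print("random sampling AUC:    ", fit_auc(random_idx))
        print("stratified sampling AUC:", fit_auc(strat_idx))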

  6. [Efficacy of the keyword mnemonic method in adults].

    PubMed

    Campos, Alfredo; Pérez-Fabello, María José; Camino, Estefanía

    2010-11-01

Two experiments were used to assess the efficacy of the keyword mnemonic method in adults. In Experiment 1, immediate and delayed recall (at a one-day interval) were assessed by comparing the results obtained by a group of adults using the keyword mnemonic method with those of a group using the repetition method. The mean age of the sample under study was 59.35 years. Subjects were required to learn a list of 16 words translated from Latin into Spanish. Participants who used keyword mnemonics that had been devised by other experimental participants with the same characteristics obtained significantly higher immediate and delayed recall scores than participants using the repetition method. In Experiment 2, other participants had to learn a list of 24 Latin words translated into Spanish using the keyword mnemonic method reinforced with pictures. Immediate and delayed recall were significantly greater in the keyword mnemonic group than in the repetition group.

  7. Mind Wandering in Chinese Daily Lives – An Experience Sampling Study

    PubMed Central

    Song, Xiaolan; Wang, Xiao

    2012-01-01

Mind wandering has recently received extensive research attention because it reveals an important characteristic of our consciousness: conscious experience can arise internally and involuntarily. As the first attempt to examine mind wandering in a non-western population, the present study used the experience-sampling method to collect daily momentary mind wandering episodes in a Chinese sample. The results showed that mind wandering was a ubiquitous experience in the Chinese population as well and that, instead of emerging out of nowhere, it was often elicited by external or internal cues. Furthermore, most of the mind wandering episodes involved prospective thinking and were closely related to one's personal life. Finally, the frequency of mind wandering was influenced by certain contextual factors. Taken together, these results suggest that mind wandering plays an important role in helping people maintain a continuous feeling of "self" and in preparing them to cope with upcoming events. PMID:22957071

  8. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    PubMed

    Wang, Rong

    2015-01-01

In real-world applications, face images vary with illumination, facial expression, and pose, and more training samples can better reveal the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples: mirror faces are generated from the original training samples, and both kinds of samples are combined into a new training set. Face recognition experiments show that our method obtains high classification accuracy.
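
    The two ingredients here, mirror augmentation and MSE classification, fit in a few lines. Below is an editorial NumPy sketch on tiny random "images" (not the paper's datasets or exact formulation): each training image is augmented with its horizontal mirror, and a ridge-stabilized least-squares classifier maps pixels to one-hot labels.

        import numpy as np

        def mse_classifier(X, Y, lam=1e-3):
            # Minimum squared error classification: W minimizing ||X W - Y||^2
            # with a small ridge term; X rows are samples, Y rows one-hot labels.
            return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

        rng = np.random.default_rng(4)
        h, w, n_per = 16, 16, 5                       # tiny "faces", 5 per subject
        faces = rng.uniform(size=(2, n_per, h, w))    # two hypothetical subjects

        # Augment each training image with its horizontal mirror (virtual sample).
        imgs = [img for subj in faces for img in subj]
        mirrored = [np.fliplr(img) for img in imgs]
        X = np.array([im.ravel() for im in imgs + mirrored])
        labels = np.array([i for i in range(2) for _ in range(n_per)] * 2)
        Y = np.eye(2)[labels]

        W = mse_classifier(X, Y)
        test = faces[1, 0] + 0.05 * rng.normal(size=(h, w))   # noisy view, subject 1
        print("predicted subject:", np.argmax(test.ravel() @ W))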

  9. Measuring herbicide volatilization from bare soil.

    PubMed

    Yates, S R

    2006-05-15

    A field experiment was conducted to measure surface dissipation and volatilization of the herbicide triallate after application to bare soil using micrometeorological, chamber, and soil-loss methods. The volatilization rate was measured continuously for 6.5 days and the range in the daily peak values for the integrated horizontal flux method was from 32.4 (day 5) to 235.2 g ha(-1) d(-1) (day 1), for the theoretical profile shape method was from 31.5 to 213.0 g ha(-1) d(-1), and for the flux chamber was from 15.7 to 47.8 g ha(-1) d(-1). Soil samples were taken within 30 min after application and the measured mass of triallate was 8.75 kg ha(-1). The measured triallate mass in the soil at the end of the experiment was approximately 6 kg ha(-1). The triallate dissipation rate, obtained by soil sampling, was approximately 334 g ha(-1) d(-1) (98 g d(-1)) and the average rate of volatilization was 361 g ha(-1) d(-1). Soil sampling at the end of the experiment showed that approximately 31% (0.803 kg/2.56 kg) of the triallate mass was lost from the soil. Significant volatilization of triallate is possible when applied directly to the soil surface without incorporation.

  10. High-resolution nuclear magnetic resonance measurements in inhomogeneous magnetic fields: A fast two-dimensional J-resolved experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yuqing; Cai, Shuhui; Yang, Yu

    2016-03-14

High spectral resolution in nuclear magnetic resonance (NMR) is a prerequisite for obtaining accurate information on molecular structures and composition assignments. The continuous development of superconducting magnets guarantees strong and homogeneous static magnetic fields for satisfactory spectral resolution. However, there exist circumstances, such as measurements on biological tissues and heterogeneous chemical samples, where the field homogeneity is degraded and spectral line broadening seems inevitable. Here we propose an NMR method, named intermolecular zero-quantum coherence J-resolved spectroscopy (iZQC-JRES), to face the challenge of field inhomogeneity and obtain the desired high-resolution two-dimensional J-resolved spectra with fast acquisition. Theoretical analyses for this method are given according to the intermolecular multiple-quantum coherence treatment. Experiments on (a) a simple chemical solution and (b) an aqueous solution of mixed metabolites under externally deshimmed fields, and on (c) a table grape sample with intrinsic field inhomogeneity from magnetic susceptibility variations, demonstrate the feasibility and applicability of the iZQC-JRES method. The application of this method to inhomogeneous chemical and biological samples, perhaps even in vivo samples, appears promising.

  11. Frequency-Modulated Continuous Flow Analysis Electrospray Ionization Mass Spectrometry (FM-CFA-ESI-MS) for Sample Multiplexing.

    PubMed

    Filla, Robert T; Schrell, Adrian M; Coulton, John B; Edwards, James L; Roper, Michael G

    2018-02-20

    A method for multiplexed sample analysis by mass spectrometry without the need for chemical tagging is presented. In this new method, each sample is pulsed at unique frequencies, mixed, and delivered to the mass spectrometer while maintaining a constant total flow rate. Reconstructed ion currents are then a time-dependent signal consisting of the sum of the ion currents from the various samples. Spectral deconvolution of each reconstructed ion current reveals the identity of each sample, encoded by its unique frequency, and its concentration encoded by the peak height in the frequency domain. This technique is different from other approaches that have been described, which have used modulation techniques to increase the signal-to-noise ratio of a single sample. As proof of concept of this new method, two samples containing up to 9 analytes were multiplexed. The linear dynamic range of the calibration curve was increased with extended acquisition times of the experiment and longer oscillation periods of the samples. Because of the combination of the samples, salt had little effect on the ability of this method to achieve relative quantitation. Continued development of this method is expected to allow for increased numbers of samples that can be multiplexed.
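
    The frequency-encoding idea is easy to demonstrate numerically. The sketch below is an editorial illustration with invented flow fractions and concentrations (and it ignores the constant-total-flow plumbing described in the abstract): two samples are pulsed at distinct frequencies, their contributions sum into one ion current, and an FFT recovers each sample's peak at its own frequency, with height proportional to concentration.

        import numpy as np

        rng = np.random.default_rng(5)
        fs, T = 100.0, 60.0                  # detector sampling rate (Hz), time (s)
        t = np.arange(0, T, 1 / fs)

        # Each sample pulsed at a unique modulation frequency (hypothetical values).
        f1, f2 = 0.20, 0.35
        frac1 = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * f1 * t))
        frac2 = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * f2 * t))

        c1, c2 = 3.0, 7.0                    # analyte "concentrations" per sample
        ion_current = c1 * frac1 + c2 * frac2 + rng.normal(0, 0.2, t.size)

        # Spectral deconvolution: each sample appears at its own frequency.
        spec = np.abs(np.fft.rfft(ion_current - ion_current.mean())) / t.size
        freqs = np.fft.rfftfreq(t.size, 1 / fs)
        for f, name in [(f1, "sample 1"), (f2, "sample 2")]:
            k = np.argmin(np.abs(freqs - f))
            print(f"{name}: peak height at {freqs[k]:.2f} Hz = {spec[k]:.3f}")

    The two peak heights come out in the ratio c1:c2, which is the relative-quantitation mechanism the abstract describes.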

  12. Evaluation of the furosine and homoarginine methods for determining reactive lysine in rumen-undegraded protein.

    PubMed

    Boucher, S E; Pedersen, C; Stein, H H; Schwab, C G

    2009-08-01

Three samples of soybean meal (SBM), 3 samples of expeller SBM (SoyPlus, West Central Cooperative, Ralston, IA), 5 samples of distillers dried grains with solubles (DDGS), and 5 samples of fish meal were used to evaluate the furosine and homoarginine procedures for estimating reactive Lys in the rumen-undegraded protein fraction (RUP-Lys). One sample each of SBM, expeller SBM, and DDGS was subjected to additional heat treatment in the lab to ensure a wide range in reactive RUP-Lys content among the samples. Furosine is a secondary product of the initial stages of the Maillard reaction and can be used to calculate blocked Lys. Homoarginine is formed via the reaction of reactive Lys with O-methylisourea and can be used to calculate the concentration of reactive Lys. In previous experiments, each sample was ruminally incubated in situ for 16 h, and the standardized RUP-Lys digestibility of the samples was determined in cecectomized roosters. All rumen-undegraded residue (RUR) samples were analyzed for furosine and Lys; however, only 9 of the 16 samples contained furosine, and only the 4 unheated DDGS samples contained appreciable amounts of furosine. Blocked RUP-Lys was calculated from the furosine and Lys concentrations of the RUR. Both the intact feed and RUR samples were evaluated using the homoarginine method. All samples were incubated with an O-methylisourea/BaOH solution for 72 h and analyzed for Lys and homoarginine concentrations. Reactive Lys concentrations of the intact feeds and RUR were calculated. Results of the experiment indicate that blocked RUP-Lys determined via the furosine method was negatively correlated with standardized RUP-Lys digestibility, and reactive RUP-Lys determined via the guanidination method was positively correlated with standardized RUP-Lys digestibility. Reactive Lys concentrations of the intact samples were also highly correlated with RUP-Lys digestibility. In conclusion, the furosine assay is useful in predicting the RUP-Lys digestibility of DDGS samples, and the guanidination procedure can be used to predict the RUP-Lys digestibility of SBM, expeller SBM, DDGS, and fish meal samples.

  13. Multilattice sampling strategies for region of interest dynamic MRI.

    PubMed

    Rilling, Gabriel; Tao, Yuehui; Marshall, Ian; Davies, Mike E

    2013-08-01

A multilattice sampling approach is proposed for dynamic MRI with Cartesian trajectories. It relies on the use of sampling patterns composed of several different lattices and exploits an image model where only some parts of the image are dynamic, whereas the rest is assumed static. Given the parameters of such an image model, the methodology followed for the design of a multilattice sampling pattern adapted to the model is described. The multilattice approach is compared to single-lattice sampling, as used by traditional acceleration methods such as UNFOLD (UNaliasing by Fourier-Encoding the Overlaps using the temporal Dimension) or k-t BLAST, and to the random sampling used by modern compressed-sensing-based methods. On the considered image model, it allows more flexibility and higher accelerations than lattice sampling and better performance than random sampling. The method is illustrated on a phase-contrast carotid blood velocity mapping MR experiment. Combining the multilattice approach with the KEYHOLE technique allows acceleration factors of up to 12x. Simulation and in vivo undersampling results validate the method. © 2012 Wiley Periodicals, Inc.
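
    As a toy editorial illustration of what a pattern "composed of several different lattices" can look like (not the authors' design methodology), the sketch below builds a Cartesian k-t mask as the union of a sparse time-sheared lattice and a denser one, then reports the resulting acceleration factor.

        import numpy as np

        def lattice(ny, nt, acc, offset=0):
            # Cartesian k-t lattice: every acc-th phase-encode line, sheared in time.
            mask = np.zeros((ny, nt), dtype=bool)
            for f in range(nt):
                mask[(offset + f) % acc::acc, f] = True
            return mask

        ny, nt = 128, 32             # phase encodes x time frames
        # Union of two different lattices: a sparse one for the static background
        # plus a denser one intended for the dynamic region of interest.
        mask = lattice(ny, nt, acc=8) | lattice(ny, nt, acc=4, offset=1)
        print(f"acceleration factor: {mask.size / mask.sum():.1f}x")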

  14. Consideration of sample return and the exploration strategy for Mars

    NASA Technical Reports Server (NTRS)

    Bogard, D. C.; Duke, M. B.; Gibson, E. K.; Minear, J. W.; Nyquist, L. E.; Phinney, W. C.

    1979-01-01

The scientific rationale and requirements for a Mars surface sample return were examined, and the experience gained from the analysis and study of the returned lunar samples was incorporated into the science requirements and engineering design for the Mars sample return mission. The necessary data sets for characterizing Mars are presented. If further analyses of surface samples are to be made, the best available method is for the analysis to be conducted in terrestrial laboratories.

  15. Application of the experimental design of experiments (DoE) for the determination of organotin compounds in water samples using HS-SPME and GC-MS/MS.

    PubMed

    Coscollà, Clara; Navarro-Olivares, Santiago; Martí, Pedro; Yusà, Vicent

    2014-02-01

When attempting to discover the important factors and then optimise a response by tuning those factors, experimental design (design of experiments, DoE) provides a powerful suite of statistical methodology: it identifies the significant factors and then optimises a response with respect to them during method development. In this work, a headspace solid-phase micro-extraction (HS-SPME) methodology combined with gas chromatography tandem mass spectrometry (GC-MS/MS) for the simultaneous determination of six important organotin compounds, namely monobutyltin (MBT), dibutyltin (DBT), tributyltin (TBT), monophenyltin (MPhT), diphenyltin (DPhT), and triphenyltin (TPhT), has been optimized using a statistical design of experiments. The analytical method is based on ethylation with NaBEt4 and simultaneous headspace solid-phase micro-extraction of the derivatized compounds followed by GC-MS/MS analysis. The main experimental parameters influencing the extraction efficiency selected for optimization were pre-incubation time, incubation temperature, agitator speed, extraction time, desorption temperature, buffer (pH, concentration and volume), headspace volume, sample salinity, preparation of standards, ultrasonic time and desorption time in the injector. The main factors (excitation voltage, excitation time, ion source temperature, isolation time and electron energy) affecting the GC-IT-MS/MS response were also optimized using the same statistical design of experiments. The proposed method presented good linearity (coefficient of determination R(2)>0.99) and repeatability (1-25%) for all the compounds under study. The accuracy of the method, measured as the average percentage recovery of the compounds in spiked surface and marine waters, was higher than 70% for all compounds studied. Finally, the optimized methodology was applied to real aqueous samples, enabling the simultaneous determination of all compounds under study in surface and marine water samples obtained from the Valencia region (Spain). © 2013 Elsevier B.V. All rights reserved.
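
    The screening step of a DoE reduces to generating a coded design matrix and estimating main effects. As an editorial sketch (factor names follow the abstract, but the levels, response model, and effect sizes are invented), a two-level full factorial over four of the HS-SPME factors looks like this:

        import itertools
        import numpy as np

        # Two-level full factorial screening design, factors coded -1/+1.
        factors = ["extraction_time", "incubation_temp", "pH", "salinity"]
        design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

        # Hypothetical response (peak area) with a strong extraction-time effect.
        rng = np.random.default_rng(6)
        response = (100 + 15 * design[:, 0] + 4 * design[:, 2]
                    + rng.normal(0, 2, len(design)))

        # Main effect of each factor: mean(high level) - mean(low level).
        for j, name in enumerate(factors):
            effect = (response[design[:, j] == 1].mean()
                      - response[design[:, j] == -1].mean())
            print(f"{name:16s} main effect = {effect:+.1f}")

    Factors with large main effects are the ones carried forward into the response-optimization stage.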

  16. Experimental Null Method to Guide the Development of Technical Procedures and to Control False-Positive Discovery in Quantitative Proteomics.

    PubMed

    Shen, Xiaomeng; Hu, Qiang; Li, Jun; Wang, Jianmin; Qu, Jun

    2015-10-02

Comprehensive and accurate evaluation of data quality and false-positive biomarker discovery is critical to direct method development/optimization for quantitative proteomics, which nonetheless remains challenging, largely due to the high complexity and unique features of proteomic data. Here we describe an experimental null (EN) method to address this need. Because the method experimentally measures the null distribution (either technical or biological replicates) using the same proteomic samples, the same procedures, and the same batch as the case-vs-control experiment, it correctly reflects the collective effects of technical variability (e.g., variation/bias in sample preparation, LC-MS analysis, and data processing) and project-specific features (e.g., characteristics of the proteome and biological variation) on the performance of quantitative analysis. As a proof of concept, we employed the EN method to assess the quantitative accuracy and precision and the ability to quantify subtle ratio changes between groups using different experimental and data-processing approaches and in various cellular and tissue proteomes. It was found that choices of quantitative features, sample size, experimental design, data-processing strategies, and quality of chromatographic separation can profoundly affect the quantitative precision and accuracy of label-free quantification. The EN method was also demonstrated as a practical tool to determine the optimal experimental parameters and a rational ratio cutoff for reliable protein quantification in specific proteomic experiments, for example, to identify the number of technical/biological replicates per group that affords sufficient power for discovery. Furthermore, we assessed the ability of the EN method to estimate levels of false positives in the discovery of altered proteins, using two concocted sample sets mimicking proteomic profiling with technical and biological replicates, respectively, where the true positives/negatives are known and span a wide concentration range. It was observed that the EN method correctly reflects the null distribution in a proteomic system and accurately measures the false altered-protein discovery rate (FADR). In summary, the EN method provides a straightforward, practical, and accurate alternative to statistics-based approaches for the development and evaluation of proteomic experiments and can be universally adapted to various types of quantitative techniques.
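
    The core of the EN idea, comparing discoveries in a replicate-vs-replicate (null) comparison against those in the case-vs-control comparison at the same ratio cutoff, can be sketched as follows. This is an editorial illustration on invented log-ratio data, not the paper's datasets or exact FADR definition.

        import numpy as np

        rng = np.random.default_rng(8)
        n_prot = 2000
        # Simulated log2 ratios: an experimental null (replicate vs replicate) and
        # a case-vs-control set in which 5% of proteins truly change 1.5-fold.
        null_ratio = rng.normal(0, 0.25, n_prot)
        case_ratio = rng.normal(0, 0.25, n_prot)
        changed = rng.choice(n_prot, size=100, replace=False)
        case_ratio[changed] += np.log2(1.5)

        # Scan ratio cutoffs; null discoveries estimate the false-positive level.
        for cutoff in (0.3, np.log2(1.5), 0.8):
            n_null = int((np.abs(null_ratio) > cutoff).sum())
            n_case = int((np.abs(case_ratio) > cutoff).sum())
            fadr = n_null / max(n_case, 1)
            print(f"cutoff {cutoff:.2f}: case hits={n_case}, "
                  f"null hits={n_null}, FADR ~ {fadr:.2f}")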

  17. Understanding environmental DNA detection probabilities: A case study using a stream-dwelling char Salvelinus fontinalis

    Treesearch

    Taylor M. Wilcox; Kevin S. McKelvey; Michael K. Young; Adam J. Sepulveda; Bradley B. Shepard; Stephen F. Jane; Andrew R. Whiteley; Winsor H. Lowe; Michael K. Schwartz

    2016-01-01

    Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive...

  18. High-density grids for efficient data collection from multiple crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baxter, Elizabeth L.; Aguila, Laura; Alonso-Mori, Roberto

Higher throughput methods to mount and collect data from multiple small and radiation-sensitive crystals are important to support challenging structural investigations using microfocus synchrotron beamlines. Furthermore, efficient sample-delivery methods are essential to carry out productive femtosecond crystallography experiments at X-ray free-electron laser (XFEL) sources such as the Linac Coherent Light Source (LCLS). To address these needs, a high-density sample grid useful as a scaffold for both crystal growth and diffraction data collection has been developed and utilized for efficient goniometer-based sample delivery at synchrotron and XFEL sources. A single grid contains 75 mounting ports and fits inside an SSRL cassette or uni-puck storage container. The use of grids with an SSRL cassette expands the cassette capacity up to 7200 samples. Grids may also be covered with a polymer film or sleeve for efficient room-temperature data collection from multiple samples. New automated routines have been incorporated into the Blu-Ice/DCSS experimental control system to support grids, including semi-automated grid alignment, fully automated positioning of grid ports, rastering and automated data collection. Specialized tools have been developed to support crystallization experiments on grids, including a universal adaptor, which allows grids to be filled by commercial liquid-handling robots, as well as incubation chambers, which support vapor-diffusion and lipidic cubic phase crystallization experiments. Experiments in which crystals were loaded into grids or grown on grids using liquid-handling robots and incubation chambers are described. As a result, crystals were screened at LCLS-XPP and SSRL BL12-2 at room temperature and cryogenic temperatures.

  19. High-density grids for efficient data collection from multiple crystals

    PubMed Central

    Baxter, Elizabeth L.; Aguila, Laura; Alonso-Mori, Roberto; Barnes, Christopher O.; Bonagura, Christopher A.; Brehmer, Winnie; Brunger, Axel T.; Calero, Guillermo; Caradoc-Davies, Tom T.; Chatterjee, Ruchira; Degrado, William F.; Fraser, James S.; Ibrahim, Mohamed; Kern, Jan; Kobilka, Brian K.; Kruse, Andrew C.; Larsson, Karl M.; Lemke, Heinrik T.; Lyubimov, Artem Y.; Manglik, Aashish; McPhillips, Scott E.; Norgren, Erik; Pang, Siew S.; Soltis, S. M.; Song, Jinhu; Thomaston, Jessica; Tsai, Yingssu; Weis, William I.; Woldeyes, Rahel A.; Yachandra, Vittal; Yano, Junko; Zouni, Athina; Cohen, Aina E.

    2016-01-01

    Higher throughput methods to mount and collect data from multiple small and radiation-sensitive crystals are important to support challenging structural investigations using microfocus synchrotron beamlines. Furthermore, efficient sample-delivery methods are essential to carry out productive femtosecond crystallography experiments at X-ray free-electron laser (XFEL) sources such as the Linac Coherent Light Source (LCLS). To address these needs, a high-density sample grid useful as a scaffold for both crystal growth and diffraction data collection has been developed and utilized for efficient goniometer-based sample delivery at synchrotron and XFEL sources. A single grid contains 75 mounting ports and fits inside an SSRL cassette or uni-puck storage container. The use of grids with an SSRL cassette expands the cassette capacity up to 7200 samples. Grids may also be covered with a polymer film or sleeve for efficient room-temperature data collection from multiple samples. New automated routines have been incorporated into the Blu-Ice/DCSS experimental control system to support grids, including semi-automated grid alignment, fully automated positioning of grid ports, rastering and automated data collection. Specialized tools have been developed to support crystallization experiments on grids, including a universal adaptor, which allows grids to be filled by commercial liquid-handling robots, as well as incubation chambers, which support vapor-diffusion and lipidic cubic phase crystallization experiments. Experiments in which crystals were loaded into grids or grown on grids using liquid-handling robots and incubation chambers are described. Crystals were screened at LCLS-XPP and SSRL BL12-2 at room temperature and cryogenic temperatures. PMID:26894529

  20. High-density grids for efficient data collection from multiple crystals

    DOE PAGES

    Baxter, Elizabeth L.; Aguila, Laura; Alonso-Mori, Roberto; ...

    2015-11-03

Higher throughput methods to mount and collect data from multiple small and radiation-sensitive crystals are important to support challenging structural investigations using microfocus synchrotron beamlines. Furthermore, efficient sample-delivery methods are essential to carry out productive femtosecond crystallography experiments at X-ray free-electron laser (XFEL) sources such as the Linac Coherent Light Source (LCLS). To address these needs, a high-density sample grid useful as a scaffold for both crystal growth and diffraction data collection has been developed and utilized for efficient goniometer-based sample delivery at synchrotron and XFEL sources. A single grid contains 75 mounting ports and fits inside an SSRL cassette or uni-puck storage container. The use of grids with an SSRL cassette expands the cassette capacity up to 7200 samples. Grids may also be covered with a polymer film or sleeve for efficient room-temperature data collection from multiple samples. New automated routines have been incorporated into the Blu-Ice/DCSS experimental control system to support grids, including semi-automated grid alignment, fully automated positioning of grid ports, rastering and automated data collection. Specialized tools have been developed to support crystallization experiments on grids, including a universal adaptor, which allows grids to be filled by commercial liquid-handling robots, as well as incubation chambers, which support vapor-diffusion and lipidic cubic phase crystallization experiments. Experiments in which crystals were loaded into grids or grown on grids using liquid-handling robots and incubation chambers are described. As a result, crystals were screened at LCLS-XPP and SSRL BL12-2 at room temperature and cryogenic temperatures.

  1. Acceptability of self-collection sampling for HPV-DNA testing in low-resource settings: a mixed methods approach.

    PubMed

    Bansil, Pooja; Wittet, Scott; Lim, Jeanette L; Winkler, Jennifer L; Paul, Proma; Jeronimo, Jose

    2014-06-12

Vaginal self-sampling with HPV-DNA tests is a promising primary screening method for cervical cancer. However, women's experiences, concerns and the acceptability of such tests in low-resource settings remain unknown. In India, Nicaragua, and Uganda, a mixed-method design was used to collect data from surveys (N = 3,863), qualitative interviews (N = 72; 20 providers and 52 women) and focus groups (N = 30 women) on women's and providers' experiences with self-sampling, women's opinions of sampling at home, and their future needs. Among surveyed women, 90% provided a self-collected sample. Of these, 75% reported it was easy, although 52% were initially concerned about hurting themselves and 24% were worried about not getting a good sample. Most surveyed women preferred self-sampling (78%). However, it was not clear if they responded to the privacy of self-sampling or the convenience of avoiding a pelvic examination, or both. In follow-up interviews, most women reported that they didn't mind self-sampling, but many preferred to have a provider collect the vaginal sample. Most women also preferred clinic-based screening (as opposed to home-based self-sampling), because the sample could be collected by a provider, women could receive treatment if needed, and the clinic was sanitary and provided privacy. Self-sampling acceptability was higher when providers prepared women through education, allowed women to examine the collection brush, and were present during the self-collection process. Among survey respondents, aids that would facilitate self-sampling in the future were: staff help (53%), additional images in the illustrated instructions (31%), and a chance to practice beforehand with a doll/model (26%). Self-sampling and vaginal sampling are widely acceptable among women in low-resource settings. Providers have a unique opportunity to educate and prepare women for self-sampling and to be flexible in accommodating women's preference for self-sampling.

  2. Kinetics and mechanism of catalytic hydroprocessing of components of coal-derived liquids. Sixteenth quarterly report, February 16, 1983-May 15, 1983.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gates, B. C.; Olson, H. H.; Schuit, G. C.A.

    1983-08-22

    A new method of structural analysis is applied to a group of hydroliquefied coal samples. The method uses elemental analysis and NMR data to estimate the concentrations of functional groups in the samples. The samples include oil and asphaltene fractions obtained in a series of hydroliquefaction experiments, and a set of 9 fractions separated from a coal-derived oil. The structural characterization of these samples demonstrates that estimates of functional group concentrations can be used to provide detailed structural profiles of complex mixtures and to obtain limited information about reaction pathways. 11 references, 1 figure, 7 tables.

  3. Effect of cryopreservation methods and precryopreservation storage on bottlenose dolphin (Tursiops truncatus) spermatozoa.

    PubMed

    Robeck, T R; O'Brien, J K

    2004-05-01

Research was conducted to develop an effective method for cryopreserving bottlenose dolphin (Tursiops truncatus) semen processed immediately after collection or after 24-h liquid storage. In each of two experiments, four ejaculates were collected from three males. In experiment 1, three cryopreservation methods (CM1, CM2, and CM3), two straw sizes (0.25 and 0.5 ml), and three thawing rates (slow, medium, and fast) were evaluated. Evaluations were conducted at collection, prefreeze, and 0-, 3-, and 6-h postthaw. A sperm motility index (SMI; total motility [TM] x % progressive motility [PPM] x kinetic rating [KR, scale of 0-5]) was calculated and expressed as a percentage of the initial ejaculate value. For all ejaculates, initial TM and PPM were greater than 85%, and KR was five. At 0-h postthaw, differences in SMI among cryopreservation methods and thaw rates were observed (P < 0.05), but no effect of straw size was observed. In experiment 2, ejaculates were divided into four aliquots for dilution (1:1) and storage at 4 degrees C with a skim milk-glucose or a N-tris(hydroxymethyl)methyl-2-aminoethane sulfonic acid (TES)-TRIS egg yolk solution, and at 21 degrees C with a Hepes-Tyrode balanced salt solution (containing bovine albumin and HEPES) (TALP) medium or no dilution. After 24 h, samples were frozen and thawed (CM3, 0.5-ml straws, fast thawing rate) at 20 x 10(6) spermatozoa ml(-1) (low concentration) or at 100 x 10(6) spermatozoa ml(-1) (standard concentration). The SMI at 0-h postthaw was higher for samples stored at 4 degrees C than for samples stored at 21 degrees C (P < 0.001), and at 6-h postthaw the SMI was higher for samples frozen at the standard concentration than for samples frozen at the low concentration (P < 0.05). For both experiments, acrosome integrity was similar across treatments. In summary, a semen cryopreservation protocol applied to fresh or liquid-stored semen maintained high levels of initial ejaculate sperm characteristics.

  4. Blood transport method for chromosome analysis of residents living near Semipalatinsk nuclear test site.

    PubMed

    Rodzi, Mohd; Ihda, Shozo; Yokozeki, Masako; Takeichi, Nobuo; Tanaka, Kimio; Hoshi, Masaharu

    2009-12-01

A study was conducted to compare storage conditions and transportation periods for blood samples collected from residents living in areas near the Semipalatinsk nuclear test site (SNTS). Experiments were performed to simulate storage and shipping environments. Phytohaemagglutinin (PHA)-stimulated blood was stored either in 15-ml tubes without medium (condition A: current transport method) or in 50-ml flasks with RPMI-1640 and 20% fetal bovine serum (FBS) (condition B: previous transport method). Samples were kept refrigerated at 4 degrees C, and cell viability was assessed after 3, 8, 12 and 14 days of storage. For culture, RPMI-1640, 20% FBS and further PHA were added to the blood samples from condition A in 50-ml flasks. Whole-blood samples under condition B were incubated directly, without a further sub-culturing step (neither media nor PHA were added), to adopt a protocol similar to that employed in the previous transport method. Samples in conditions A and B were incubated for 48 hr at 37 degrees C and their mitotic index was determined. The results showed that viable lymphocytes were consistent in both storage conditions, but the mitotic index was higher in condition A than in condition B. Although further confirmation studies have to be carried out, previous chromosomal studies and the present experiment have shown that PHA-stimulated blood can be stored without culture medium for up to 8 days under condition A. The present results will be useful for the cytogenetic analysis of blood samples that have been transported long distances from wherever a radiation accident has occurred.

  5. Numerical simulation and analysis for low-frequency rock physics measurements

    NASA Astrophysics Data System (ADS)

    Dong, Chunhui; Tang, Genyang; Wang, Shangxu; He, Yanxiao

    2017-10-01

    In recent years, several experimental methods have been introduced to measure the elastic parameters of rocks in the relatively low-frequency range, such as differential acoustic resonance spectroscopy (DARS) and stress-strain measurement. It is necessary to verify the validity and feasibility of the applied measurement method and to quantify the sources and levels of measurement error. Relying solely on the laboratory measurements, however, we cannot evaluate the complete wavefield variation in the apparatus. Numerical simulations of elastic wave propagation, on the other hand, are used to model the wavefield distribution and physical processes in the measurement systems, and to verify the measurement theory and analyze the measurement results. In this paper we provide a numerical simulation method to investigate the acoustic waveform response of the DARS system and the quasi-static responses of the stress-strain system, both of which use axisymmetric apparatus. We applied this method to parameterize the properties of the rock samples, the sample locations and the sensor (hydrophone and strain gauges) locations and simulate the measurement results, i.e. resonance frequencies and axial and radial strains on the sample surface, from the modeled wavefield following the physical experiments. Rock physical parameters were estimated by inversion or direct processing of these data, and showed a perfect match with the true values, thus verifying the validity of the experimental measurements. Error analysis was also conducted for the DARS system with 18 numerical samples, and the sources and levels of error are discussed. In particular, we propose an inversion method for estimating both density and compressibility of these samples. The modeled results also showed fairly good agreement with the real experiment results, justifying the effectiveness and feasibility of our modeling method.

  6. Evaluation of methods for measuring relative permeability of anhydrite from the Salado Formation: Sensitivity analysis and data reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christiansen, R.L.; Kalbus, J.S.; Howarth, S.M.

    This report documents, demonstrates, evaluates, and provides theoretical justification for methods used to convert experimental data into relative permeability relationships. The report facilitates accurate determination of relative permeabilities of anhydrite rock samples from the Salado Formation at the Waste Isolation Pilot Plant (WIPP). Relative permeability characteristic curves are necessary for WIPP Performance Assessment (PA) predictions of the potential for flow of waste-generated gas from the repository and brine flow into the repository. This report follows Christiansen and Howarth (1995), a comprehensive literature review of methods for measuring relative permeability. It focuses on unsteady-state experiments, which were recommended for relative permeability measurements of low-permeability anhydrite rock samples from the Salado Formation because these tests produce accurate relative permeability information and take significantly less time to complete than steady-state tests. Five methods for obtaining relative permeability relationships from unsteady-state experiments are described: the Welge method, the Johnson-Bossler-Naumann method, the Jones-Roszelle method, the Ramakrishnan-Cappiello method, and the Hagoort method. A summary, an example of the calculations, and a theoretical justification are provided for each of the five methods. Displacements in porous media are numerically simulated for the calculation examples. The simulated production data were processed using the methods, and the relative permeabilities obtained were compared with those input to the numerical model. A variety of operating conditions were simulated to show the sensitivity of production behavior to rock-fluid properties.

  7. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
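
    A minimal sketch of one simple flavor of out-of-sample extension in the spirit described above, assuming the learned nonparametric kernel is available as a matrix over the training points: each column of that matrix is regressed onto a base parametric kernel (an RBF here, a stand-in choice), and the fitted coefficients carry the kernel to unseen points. All names and parameter values are illustrative, not the authors' implementation.

        import numpy as np

        def rbf(X, Y, gamma=0.5):
            """Base (parametric) kernel used to carry the extension."""
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def fit_extension(X_train, K_learned, lam=1e-3):
            """Regress each column of the learned kernel matrix on the base
            kernel at the training points: K_learned ~ K0 @ A (ridge fit)."""
            K0 = rbf(X_train, X_train)
            return np.linalg.solve(K0 + lam * np.eye(len(X_train)), K_learned)

        def extend(X_train, A, X_new):
            """Approximate kernel values between new and training points."""
            return rbf(X_new, X_train) @ A   # shape (n_new, n_train)

        # Illustrative usage with a stand-in "learned" kernel matrix.
        X = np.random.default_rng(0).standard_normal((50, 3))
        K_learned = rbf(X, X, gamma=0.2)
        A = fit_extension(X, K_learned)
        K_new = extend(X, A, np.zeros((1, 3)))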

  8. Emissivity measurements of shocked tin using a multi-wavelength integrating sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seifter, A; Holtkamp, D B; Iverson, A J

    Pyrometric measurements of radiance to determine temperature have been performed on shock physics experiments for decades. However, multi-wavelength pyrometry schemes sometimes fail to provide credible temperatures in experiments, which incur unknown changes in sample emissivity, because an emissivity change also affects the spectral radiance. Hence, for shock physics experiments using pyrometry to measure temperatures, it is essential to determine the dynamic sample emissivity. The most robust way to determine the normal spectral emissivity is to measure the spectral normal-hemispherical reflectance using an integrating sphere. In this paper we describe a multi-wavelength (1.6–5.0 μm) integrating sphere system that utilizes a “reversed” scheme, which we use for shock physics experiments. The sample to be shocked is illuminated uniformly by scattering broadband light from inside a sphere onto the sample. A portion of the light reflected from the sample is detected at a point 12° from normal to the sample surface. For this experiment, we used the system to measure emissivity of shocked tin at four wavelengths for shock stress values between 17 and 33 GPa. The results indicate a large increase in effective emissivity upon shock release from tin when the shock is above 24–25 GPa, a shock stress that partially melts the sample. We also recorded an IR image of one of the shocked samples through the integrating sphere, and the emissivity inferred from the image agreed well with the integrating-sphere, pyrometer-detector data. Here, we discuss experimental data, uncertainties, and a data analysis process. We also describe unique emissivity-measurement problems arising from shock experiments and methods to overcome such problems.

  9. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    NASA Astrophysics Data System (ADS)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach in comparison with three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
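
    A simplified two-level surrogate in the spirit of hierarchical kriging, sketched with scikit-learn Gaussian processes: a dense low-fidelity fit is scaled and corrected by a second GP trained on the sparse high-fidelity residuals. The fixed scalar rho stands in for the polynomial scaling of the improved hierarchical kriging model, the toy functions are invented, and the adaptive sampling loop of ASM-IHK is not reproduced.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # Toy low- and high-fidelity versions of the same 1-D function.
        f_lo = lambda x: 0.5 * np.sin(8 * x) + 0.2 * x
        f_hi = lambda x: np.sin(8 * x) + x

        X_lo = np.linspace(0, 1, 40)[:, None]   # cheap, dense LF samples
        X_hi = np.linspace(0, 1, 6)[:, None]    # expensive, sparse HF samples

        gp_lo = GaussianProcessRegressor(RBF(0.1)).fit(X_lo, f_lo(X_lo).ravel())

        # Scale the LF trend (a constant rho here; IHK uses a polynomial)
        # and model the remaining HF discrepancy with a second GP.
        rho = 2.0
        resid = f_hi(X_hi).ravel() - rho * gp_lo.predict(X_hi)
        gp_d = GaussianProcessRegressor(RBF(0.1)).fit(X_hi, resid)

        def predict(X):
            return rho * gp_lo.predict(X) + gp_d.predict(X)

        print(predict(np.array([[0.5]])))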

  10. Statistical inference for tumor growth inhibition T/C ratio.

    PubMed

    Wu, Jianrong

    2010-09-01

    The tumor growth inhibition T/C ratio is commonly used to quantify treatment effects in drug screening tumor xenograft experiments. The T/C ratio is converted to an antitumor activity rating using an arbitrary cutoff point and often without any formal statistical inference. Here, we applied a nonparametric bootstrap method and a small sample likelihood ratio statistic to make a statistical inference of the T/C ratio, including both hypothesis testing and a confidence interval estimate. Furthermore, sample size and power are also discussed for statistical design of tumor xenograft experiments. Tumor xenograft data from an actual experiment were analyzed to illustrate the application.
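
    The percentile bootstrap for the T/C ratio is straightforward to sketch; the following is an illustrative implementation (the tumor-volume numbers are invented, and the paper's small-sample likelihood ratio statistic is not reproduced).

        import numpy as np

        rng = np.random.default_rng(0)

        def tc_ratio_ci(treated, control, n_boot=10000, alpha=0.05):
            """Percentile bootstrap CI for the tumor growth inhibition
            T/C ratio (mean treated volume / mean control volume)."""
            treated, control = np.asarray(treated), np.asarray(control)
            ratios = np.empty(n_boot)
            for b in range(n_boot):
                t = rng.choice(treated, size=treated.size, replace=True)
                c = rng.choice(control, size=control.size, replace=True)
                ratios[b] = t.mean() / c.mean()
            lo, hi = np.quantile(ratios, [alpha / 2, 1 - alpha / 2])
            return treated.mean() / control.mean(), (lo, hi)

        # Example: tumor volumes (mm^3) at the evaluation day.
        print(tc_ratio_ci([310, 280, 450, 390], [820, 900, 760, 1010]))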

  11. Computer Literacy Learning Emotions of ODL Teacher-Students

    ERIC Educational Resources Information Center

    Esterhuizen, Hendrik D.; Blignaut, A. Seugnet; Els, Christo J.; Ellis, Suria M.

    2012-01-01

    This paper addresses the affective human experiences in terms of the emotions of South African teacher-students while attaining computer competencies for teaching and learning, and for ODL. The full mixed-method study investigated how computers contribute towards affective experiences of disadvantaged teacher-students. The purposive sample related…

  12. School Nurses and Health Education: The Classroom Experience

    ERIC Educational Resources Information Center

    Klein, Julie; Sendall, Marguerite C.; Fleming, Marylou; Lidstone, John; Domocol, Michelle

    2013-01-01

    Objective: The aim of the study is to explore school nurses' experience of health education. Design: A qualitative approach, phenomenology, was used to answer the question. Method: Sixteen participants were recruited through purposeful and snowball sampling. Participants undertook an audio-recorded interview which was transcribed and analysed.…

  13. Towards practical time-of-flight secondary ion mass spectrometry lignocellulolytic enzyme assays

    PubMed Central

    2013-01-01

    Background Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS) is a surface sensitive mass spectrometry technique with potential strengths as a method for detecting enzymatic activity on solid materials. In particular, ToF-SIMS has been applied to detect the enzymatic degradation of woody lignocellulose. Proof-of-principle experiments previously demonstrated the detection of both lignin-degrading and cellulose-degrading enzymes on solvent-extracted hardwood and softwood. However, these preliminary experiments suffered from low sample throughput and were restricted to samples which had been solvent-extracted in order to minimize the potential for mass interferences between low molecular weight extractive compounds and polymeric lignocellulose components. Results The present work introduces a new, higher-throughput method for processing powdered wood samples for ToF-SIMS, while also exploring likely sources of sample contamination. Multivariate analysis (MVA), including Principal Component Analysis (PCA) and Multivariate Curve Resolution (MCR), was regularly used to check for sample contamination as well as to detect extractives and enzyme activity. New data also demonstrate successful ToF-SIMS analysis of unextracted samples, placing an emphasis on identifying the low-mass secondary ion peaks related to extractives, revealing how extractives change previously established peak ratios used to describe enzyme activity, and elucidating peak intensity patterns for better detection of cellulase activity in the presence of extractives. The sensitivity of ToF-SIMS to a range of cellulase doses is also shown, along with preliminary experiments augmenting the cellulase cocktail with other proteins. Conclusions These new procedures increase the throughput of sample preparation for ToF-SIMS analysis of lignocellulose and expand the applications of the method to include unextracted lignocellulose. These are important steps towards the practical use of ToF-SIMS as a tool to screen for changes in plant composition, whether the transformation of the lignocellulose is achieved through enzyme application, plant mutagenesis, or other treatments. PMID:24034438

  14. Testing sample stability using four storage methods and the macroalgae Ulva and Gracilaria

    EPA Science Inventory

    Concern over the relative importance of different sample preparation and storage techniques frequently used in stable isotope analysis of particulate nitrogen (δ15N) and carbon (δ13C) prompted an experiment to determine how important such factors were to measured values in marine...

  15. 76 FR 28786 - Proposed Data Collections Submitted for Public Comment and Recommendations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    .... The sample size is based on recommendations related to qualitative interview methods and the research... than 10 employees (CPWR, 2007), and this establishment size experiences the highest fatality rate... out occupational safety and health training. This interview will be administered to a sample of...

  16. What Canadian Youth Tell Us about Disclosing Abuse

    ERIC Educational Resources Information Center

    Ungar, Michael; Tutty, Leslie M.; McConnell, Sheri; Barter, Ken; Fairholm, Judi

    2009-01-01

    Objective: To report findings from a study of anonymous disclosures of abuse experiences among a national sample of youth in Canada who participated in violence prevention programming. Methods: A qualitative analysis was done of a purposeful sample of 1,099 evaluation forms completed following Red Cross RespectED violence prevention programming…

  17. Cleaning of nanopillar templates for nanoparticle collection using PDMS

    NASA Astrophysics Data System (ADS)

    Merzsch, S.; Wasisto, H. S.; Waag, A.; Kirsch, I.; Uhde, E.; Salthammer, T.; Peiner, E.

    2011-05-01

    Nanoparticles are easily attracted by surfaces, and this sticking behavior makes it difficult to clean contaminated samples. Some complex approaches have already achieved cleaning efficiencies in the range of 90%; however, a simple and cost-efficient method was still missing. PDMS, a silicone commonly used for soft lithography, is able to mold a given surface. This property was used to enclose surface-bonded particles from all exposed sides. After the PDMS hardens, the particles remain embedded in it, so separating the silicone from the sample also detaches the particles from the surface. After this procedure, the samples are clean again. The method was first tested with carbon particles on Si surfaces and on Si pillar samples with aspect ratios up to 10. Experiments were done using 2 inch wafers, which, however, is not a size limitation of the method.

  18. A factorial design experiment as a pilot study for noninvasive genetic sampling.

    PubMed

    Renan, Sharon; Speyer, Edith; Shahar, Naama; Gueta, Tomer; Templeton, Alan R; Bar-David, Shirli

    2012-11-01

    Noninvasive genetic sampling has increasingly been used in ecological and conservation studies during the last decade. A major part of the noninvasive genetic literature is dedicated to the search for optimal protocols, by comparing different methods of collection, preservation and extraction of DNA from noninvasive materials. However, the lack of quantitative comparisons among these studies and the possibility that different methods are optimal for different systems make it difficult to decide which protocol to use. Moreover, most studies that have compared different methods focused on a single factor - collection, preservation or extraction - while there could be interactions between these factors. We designed a factorial experiment, as a pilot study, aimed at exploring the effect of several collection, preservation and extraction methods, and the interactions between them, on the quality and amplification success of DNA obtained from Asiatic wild ass (Equus hemionus) faeces in Israel. The amplification success rates of one mitochondrial DNA and four microsatellite markers differed substantially as a function of collection, preservation and extraction methods and their interactions. The most efficient combination for our system integrated the use of swabs as a collection method with preservation at -20 °C and with the Qiagen DNA Stool Kit with modifications as the DNA extraction method. The significant interaction found between the collection, preservation methods and the extraction methods reinforces the importance of conducting a factorial design experiment, rather than examining each factor separately, as a pilot study before initiating a full-scale noninvasive research project. © 2012 Blackwell Publishing Ltd.
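
    The factorial analysis the authors argue for can be sketched as a linear model with main effects and two-way interactions. The data frame below is hypothetical (factor levels and success rates are invented for illustration), and an OLS ANOVA stands in for whatever model the study actually fitted.

        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        # Hypothetical pilot data: per-sample amplification success rate
        # (fraction of 5 markers amplified) for each factor combination.
        df = pd.DataFrame({
            "collection":   ["swab", "swab", "scoop", "scoop"] * 6,
            "preservation": ["-20C", "ethanol"] * 12,
            "extraction":   ["kitA"] * 12 + ["kitB"] * 12,
            "success":      [0.8, 0.4, 0.6, 0.2, 1.0, 0.6, 0.4, 0.2,
                             0.8, 0.6, 0.6, 0.4, 0.6, 0.8, 0.2, 0.4,
                             0.8, 0.6, 0.4, 0.6, 0.6, 1.0, 0.2, 0.6],
        })

        # Main effects plus all two-way interactions, as in a factorial design.
        model = smf.ols("success ~ (collection + preservation + extraction) ** 2",
                        data=df).fit()
        print(anova_lm(model))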

  19. Interpolating seismic data via the POCS method based on shearlet transform

    NASA Astrophysics Data System (ADS)

    Jicheng, Liu; Yongxin, Chou; Jianjiang, Zhu

    2018-06-01

    A method based on the shearlet transform and projection onto convex sets (POCS) with an L0-norm constraint is proposed to interpolate irregularly sampled 2D and 3D seismic data. The 2D directional filter of the shearlet transform is constructed by modulating a low-pass diamond filter pair to minimize the effect of additional edges introduced by the missing traces. In order to abate spatial aliasing and control the maximal gap between missing traces for a 3D data cube, a 2D separable jittered sampling strategy is discussed. Finally, numerical experiments on 2D and 3D synthetic and real data with different under-sampling rates prove the validity of the proposed method.
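
    A minimal POCS iteration of this kind alternates between a sparsity projection in a transform domain and a data-consistency projection. In the sketch below a 2-D FFT is substituted for the shearlet transform purely to keep the code self-contained; the threshold schedule and toy data are assumptions, not the authors' settings.

        import numpy as np

        def pocs_interpolate(data, mask, n_iter=100):
            """POCS interpolation of irregularly sampled seismic data.
            `mask` is 1 where a trace was recorded, 0 where it is missing."""
            x = data * mask
            amax = np.abs(np.fft.fft2(x)).max()
            for k in range(n_iter):
                coef = np.fft.fft2(x)
                thr = amax * 0.95 ** k                 # decaying hard threshold
                coef[np.abs(coef) < thr] = 0.0         # sparsity (L0-style) projection
                x_hat = np.real(np.fft.ifft2(coef))
                x = data * mask + x_hat * (1 - mask)   # data-consistency projection
            return x

        # Toy example: a dipping event with roughly 30% of traces removed.
        nt, nx = 64, 64
        t, h = np.meshgrid(np.arange(nt), np.arange(nx), indexing="ij")
        clean = np.sin(0.2 * (t - 0.5 * h))
        mask = (np.random.default_rng(1).random(nx) > 0.3).astype(float)[None, :]
        recovered = pocs_interpolate(clean, np.broadcast_to(mask, clean.shape))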

  20. EVALUATION OF METHODS FOR SAMPLING, RECOVERY, AND ENUMERATION OF BACTERIA APPLIED TO THE PHYLLOPLANE

    EPA Science Inventory

    Determining the fate and survival of genetically engineered microorganisms released into the environment requires the development and application of accurate and practical methods of detection and enumeration. Several experiments were performed to examine quantitative recovery met...

  1. Field substitution of nonresponders can maintain sample size and structure without altering survey estimates-the experience of the Italian behavioral risk factors surveillance system (PASSI).

    PubMed

    Baldissera, Sandro; Ferrante, Gianluigi; Quarchioni, Elisa; Minardi, Valentina; Possenti, Valentina; Carrozzi, Giuliano; Masocco, Maria; Salmaso, Stefania

    2014-04-01

    Field substitution of nonrespondents can be used to maintain the planned sample size and structure in surveys but may introduce additional bias. Sample weighting is suggested as the preferable alternative; however, limited empirical evidence exists comparing the two methods. We wanted to assess the impact of substitution on surveillance results using data from Progressi delle Aziende Sanitarie per la Salute in Italia-Progress by Local Health Units towards a Healthier Italy (PASSI). PASSI is conducted by Local Health Units (LHUs) through telephone interviews of stratified random samples of residents. Nonrespondents are replaced with substitutes randomly preselected in the same LHU stratum. We compared the weighted estimates obtained in the original PASSI sample (used as a reference) and in the substitutes' sample. The differences were evaluated using a Wald test. In 2011, 50,697 units were selected: 37,252 were from the original sample and 13,445 were substitutes; 37,162 persons were interviewed. The initially planned size and demographic composition were restored. No significant differences in the estimates between the original and the substitutes' sample were found. In our experience, field substitution is an acceptable method for dealing with nonresponse, maintaining the characteristics of the original sample without affecting the results. This evidence can support appropriate decisions about planning and implementing a surveillance system. Copyright © 2014 Elsevier Inc. All rights reserved.
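
    Comparing two weighted estimates with a Wald test, as done here, reduces to a z statistic on the difference. The numbers below are illustrative, not PASSI results, and the original and substitutes' samples are treated as independent.

        import numpy as np
        from scipy import stats

        def wald_test(p1, se1, p2, se2):
            """Wald test for the difference between two weighted survey
            estimates (e.g., a risk-factor prevalence in the original
            sample versus the substitutes' sample)."""
            z = (p1 - p2) / np.sqrt(se1 ** 2 + se2 ** 2)
            return z, 2 * stats.norm.sf(abs(z))

        # Illustrative prevalences: 28.1% (SE 0.4%) vs 28.9% (SE 0.6%).
        z, p = wald_test(0.281, 0.004, 0.289, 0.006)
        print(f"z = {z:.2f}, p = {p:.3f}")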

  2. The Contribution of Qualitative Methods for Identifying the Educational Needs of Adults

    ERIC Educational Resources Information Center

    Boz, Hayat; Dagli, Yakup

    2017-01-01

    This study addresses the contribution of applying qualitative research methods for identifying the educational activities planned for adults. The paper is based on the experience gained during in-depth interviews with 39 elderly and 33 middle-aged participants, by purposive sampling method and maximum variation technique, within a needs analysis…

  3. Effect of an ultrafast laser induced plasma on a relativistic electron beam to determine temporal overlap in pump-probe experiments.

    PubMed

    Scoby, Cheyne M; Li, R K; Musumeci, P

    2013-04-01

    In this paper we report on a simple and robust method to measure the absolute temporal overlap of the laser and the electron beam at the sample based on the effect of a laser induced plasma on the electron beam transverse distribution, successfully extending a similar method from keV to MeV electron beams. By pumping a standard copper TEM grid to form the plasma, we gain timing information independent of the sample under study. In experiments discussed here the optical delay to achieve temporal overlap between the pump electron beam and probe laser can be determined with ~1 ps precision. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Reproducibility experiments on measuring acoustical properties of rigid-frame porous media (round-robin tests).

    PubMed

    Horoshenkov, Kirill V; Khan, Amir; Bécot, François-Xavier; Jaouen, Luc; Sgard, Franck; Renault, Amélie; Amirouche, Nesrine; Pompoli, Francesco; Prodi, Nicola; Bonfiglio, Paolo; Pispola, Giulio; Asdrubali, Francesco; Hübelt, Jörn; Atalla, Noureddine; Amédin, Celse K; Lauriks, Walter; Boeckx, Laurens

    2007-07-01

    This paper reports the results of reproducibility experiments on the interlaboratory characterization of the acoustical properties of three types of consolidated porous media: granulated porous rubber, reticulated foam, and fiberglass. The measurements are conducted in several independent laboratories in Europe and North America. The studied acoustical characteristics are the surface complex acoustic impedance at normal incidence and plane wave absorption coefficient which are determined using the standard impedance tube method. The paper provides detailed procedures related to sample preparation and installation and it discusses the dispersion in the acoustical material property observed between individual material samples and laboratories. The importance of the boundary conditions, homogeneity of the porous material structure, and stability of the adopted signal processing method are highlighted.
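
    The standard impedance tube (two-microphone transfer function) method referred to above computes the reflection coefficient, absorption coefficient and normalized surface impedance from the measured transfer function between the microphones. A sketch following the usual ISO 10534-2 formulation is shown below; the variable names are ours, not those of the paper.

        import numpy as np

        def absorption_two_mic(H12, f, s, x1, c=343.0):
            """Normal-incidence reflection coefficient, absorption and
            normalized surface impedance from the two-microphone method.
            H12 : complex transfer function between mic 1 and mic 2
            f   : frequency (Hz); s: mic spacing (m)
            x1  : distance from the sample face to the farther mic (m)."""
            k = 2 * np.pi * f / c
            H_i = np.exp(-1j * k * s)            # incident-wave transfer function
            H_r = np.exp(1j * k * s)             # reflected-wave transfer function
            R = (H12 - H_i) / (H_r - H12) * np.exp(2j * k * x1)
            alpha = 1 - np.abs(R) ** 2
            Z = (1 + R) / (1 - R)
            return R, alpha, Z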

  5. Separation of antibody drug conjugate species by RPLC: A generic method development approach.

    PubMed

    Fekete, Szabolcs; Molnár, Imre; Guillarme, Davy

    2017-04-15

    This study reports the use of modelling software for the successful method development of an IgG1 cysteine-conjugated antibody drug conjugate (ADC) in RPLC. The goal of such a method is to be able to calculate the average drug-to-antibody ratio (DAR) of an ADC product. A generic method development strategy was proposed, including the optimization of mobile phase temperature, gradient profile and mobile phase ternary composition. For the first time, 3D retention modelling was presented for a large therapeutic protein. Based on a limited number of preliminary experiments, a fast and efficient separation of the DAR species of a commercial ADC sample, namely brentuximab vedotin, was achieved. The prediction offered by the retention model was found to be highly reliable, with an average error of retention time prediction always lower than 0.5% using the 2D or 3D retention models. For routine purposes, four to six initial experiments were required to build the 2D retention models, while 12 experiments were recommended to create the 3D model. RPLC can therefore be considered a good method for estimating the average DAR of an ADC, based on the observed peak area ratios in the RPLC chromatogram of the reduced ADC sample. Copyright © 2017 Elsevier B.V. All rights reserved.
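
    Once the DAR species are resolved, an average DAR can be computed as an area-weighted mean of the peak areas, for example as below. The species areas are invented for illustration (they are not brentuximab vedotin data), and the bookkeeping for reduced light/heavy-chain fragments used in practice is more involved than this intact-species sketch.

        import numpy as np

        def average_dar(areas_by_dar):
            """Average drug-to-antibody ratio as an area-weighted mean.
            Keys are DAR values of the separated species; values are
            their chromatographic peak areas."""
            dar = np.array(list(areas_by_dar.keys()), dtype=float)
            area = np.array(list(areas_by_dar.values()), dtype=float)
            return (dar * area).sum() / area.sum()

        # Illustrative peak areas for DAR 0/2/4/6/8 species of a
        # cysteine conjugate (made-up numbers).
        print(average_dar({0: 5.0, 2: 20.0, 4: 35.0, 6: 25.0, 8: 15.0}))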

  6. Feature Selection for Ridge Regression with Provable Guarantees.

    PubMed

    Paul, Saurabh; Drineas, Petros

    2016-04-01

    We introduce single-set spectral sparsification as a deterministic sampling-based feature selection technique for regularized least-squares classification, which is the classification analog to ridge regression. The method is unsupervised and gives worst-case guarantees of the generalization power of the classification function after feature selection with respect to the classification function obtained using all features. We also introduce leverage-score sampling as an unsupervised randomized feature selection method for ridge regression. We provide risk bounds for both single-set spectral sparsification and leverage-score sampling on ridge regression in the fixed design setting and show that the risk in the sampled space is comparable to the risk in the full-feature space. We perform experiments on synthetic and real-world data sets (a subset of the TechTC-300 data sets) to support our theory. Experimental results indicate that the proposed methods perform better than the existing feature selection methods.
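
    Leverage-score sampling for features can be sketched in a few lines: score each feature by its squared mass in the top right singular vectors, then sample features with probability proportional to the scores. The rank parameter and data are illustrative, and the column rescaling used in the theoretical analysis is omitted from this sketch.

        import numpy as np

        def leverage_score_sample(X, k, n_keep, rng=np.random.default_rng(0)):
            """Unsupervised leverage-score feature sampling: score each
            column of X by the squared row norms of the top-k right
            singular vectors, then sample columns proportionally."""
            _, _, Vt = np.linalg.svd(X, full_matrices=False)
            scores = (Vt[:k] ** 2).sum(axis=0)          # one score per feature
            probs = scores / scores.sum()
            idx = rng.choice(X.shape[1], size=n_keep, replace=False, p=probs)
            return X[:, idx], idx

        X = np.random.default_rng(1).standard_normal((100, 500))
        X_small, kept = leverage_score_sample(X, k=10, n_keep=50)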

  7. Using the experience-sampling method to examine the psychological mechanisms by which participatory art improves wellbeing.

    PubMed

    Holt, Nicola J

    2018-01-01

    To measure the immediate impact of art-making in everyday life on diverse indices of wellbeing ('in the moment' and longer term) in order to improve understanding of the psychological mechanisms by which art may improve mental health. Using the experience-sampling method, 41 artists were prompted (with a 'beep' on a handheld computer) at random intervals (10 times a day, for one week) to answer a short questionnaire. The questionnaire tracked art-making and enquired about mood, cognition and state of consciousness. This resulted in 2,495 sampled experiences, with a high response rate in which 89% of questionnaires were completed. Multi-level modelling was used to evaluate the impact of art-making on experience, with 2,495 'experiences' (experiential-level) nested within 41 participants (person-level). Recent art-making was significantly associated with experiential shifts: improvement in hedonic tone, vivid internal imagery and the flow state. Furthermore, the frequency of art-making across the week was associated with person-level measures of wellbeing: eudemonic happiness and self-regulation. Cross-level interactions, between experiential and person-level variables, suggested that hedonic tone improved more for those scoring low on eudemonic happiness, and further that those high in eudemonic happiness were more likely to experience phenomenological features of the flow state and to experience inner dialogue while art-making. Art-making has both immediate and long-term associations with wellbeing. At the experiential level, art-making affects multiple dimensions of conscious experience: affective, cognitive and state factors. This suggests that there are multiple routes to wellbeing (improving hedonic tone, making meaning through inner dialogue and experiencing the flow state). Recommendations are made to consider these factors when both developing and evaluating public health interventions that involve participatory art.
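
    The nesting described here (experiences within persons) is what a random-intercept multilevel model captures. A sketch with statsmodels follows; the data frame is invented for illustration, and the variable names (art_recent, hedonic) are ours, not the study's.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical ESM records: repeated beeps nested within participants.
        df = pd.DataFrame({
            "participant": [1]*4 + [2]*4 + [3]*4 + [4]*4 + [5]*4 + [6]*4,
            "art_recent":  [1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0,
                            0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0],
            "hedonic":     [5.9, 5.1, 6.0, 4.8, 4.4, 4.6, 5.5, 5.2,
                            6.9, 6.7, 5.8, 6.1, 4.2, 4.9, 3.8, 5.0,
                            6.4, 5.3, 5.6, 6.2, 4.8, 5.9, 6.0, 5.1],
        })

        # Random intercept per participant: experiences (level 1)
        # nested within persons (level 2).
        model = smf.mixedlm("hedonic ~ art_recent", df,
                            groups="participant").fit()
        print(model.summary())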

  8. Novel approaches to estimating the turbulent kinetic energy dissipation rate from low- and moderate-resolution velocity fluctuation time series

    NASA Astrophysics Data System (ADS)

    Wacławczyk, Marta; Ma, Yong-Feng; Kopeć, Jacek M.; Malinowski, Szymon P.

    2017-11-01

    In this paper we propose two approaches to estimating the turbulent kinetic energy (TKE) dissipation rate, based on the zero-crossing method of Sreenivasan et al. (1983). The original formulation requires a fine resolution of the measured signal, down to the smallest dissipative scales. However, due to finite sampling frequency, as well as measurement errors, velocity time series obtained from airborne experiments are characterized by the presence of effective spectral cutoffs. In contrast to the original formulation, the new approaches are suitable for use with signals originating from airborne experiments. Their suitability is tested using measurement data obtained during the Physics of Stratocumulus Top (POST) airborne research campaign as well as synthetic turbulence data. They appear useful and complementary to existing methods. We show that the number-of-crossings-based approaches respond differently to errors due to finite sampling and finite averaging than the classical power spectral method. Hence, their application to short signals and small sampling frequencies is particularly interesting, as it can increase the robustness of turbulent kinetic energy dissipation rate retrieval.
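
    The classical zero-crossing estimate that these approaches build on can be sketched as follows: count sign changes of the velocity fluctuation, convert to crossings per unit length via Taylor's hypothesis, take the resulting Liepmann scale as a proxy for the Taylor microscale, and apply the isotropic dissipation relation. The paper's corrections for finite sampling and spectral cutoffs are not reproduced here.

        import numpy as np

        def dissipation_zero_crossing(u, fs, U, nu=1.5e-5):
            """TKE dissipation rate from the zero-crossing (Liepmann) method.
            u  : velocity fluctuation time series (m/s)
            fs : sampling frequency (Hz)
            U  : mean advection speed for Taylor's hypothesis (m/s)
            nu : kinematic viscosity of air (m^2/s)."""
            u = np.asarray(u, float)
            u = u - u.mean()
            s = np.signbit(u)
            crossings = np.count_nonzero(s[1:] != s[:-1])
            n_per_m = crossings / (len(u) / fs) / U    # crossings per metre
            lam = 1.0 / (np.pi * n_per_m)              # Liepmann ~ Taylor scale
            return 15.0 * nu * u.var() / lam ** 2      # isotropic estimate

        rng = np.random.default_rng(0)
        u = np.convolve(rng.standard_normal(200000), np.ones(20) / 20, "same")
        print(dissipation_zero_crossing(u, fs=1000.0, U=10.0))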

  9. Getting DNA copy numbers without control samples

    PubMed Central

    2012-01-01

    Background The selection of the reference to scale the data in a copy number analysis has paramount importance to achieve accurate estimates. Usually this reference is generated using control samples included in the study. However, these control samples are not always available and in these cases, an artificial reference must be created. A proper generation of this signal is crucial in terms of both noise and bias. We propose NSA (Normality Search Algorithm), a scaling method that works with and without control samples. It is based on the assumption that genomic regions enriched in SNPs with identical copy numbers in both alleles are likely to be normal. These normal regions are predicted for each sample individually and used to calculate the final reference signal. NSA can be applied to any CN data regardless of the microarray technology and preprocessing method. It also finds an optimal weighting of the samples, minimizing possible batch effects. Results Five human datasets (a subset of HapMap samples, Glioblastoma Multiforme (GBM), Ovarian, Prostate and Lung Cancer experiments) have been analyzed. It is shown that using only tumoral samples, NSA is able to remove the bias in the copy number estimation, to reduce the noise and, therefore, to increase the ability to detect copy number aberrations (CNAs). These improvements allow NSA to detect recurrent aberrations more accurately than other state-of-the-art methods. Conclusions NSA provides a robust and accurate reference for scaling probe signal data to CN values without the need for control samples. It minimizes the problems of bias, noise and batch effects in the estimation of CNs. Therefore, the NSA scaling approach helps to detect recurrent CNAs better than current methods. The automatic selection of references makes it useful for bulk analysis of many GEO or ArrayExpress experiments without the need to develop a parser to find the normal samples or possible batches within the data. The method is available in the open-source R package NSA, which is an add-on to the aroma.cn framework. http://www.aroma-project.org/addons. PMID:22898240

  10. Getting DNA copy numbers without control samples.

    PubMed

    Ortiz-Estevez, Maria; Aramburu, Ander; Rubio, Angel

    2012-08-16

    The selection of the reference to scale the data in a copy number analysis has paramount importance to achieve accurate estimates. Usually this reference is generated using control samples included in the study. However, these control samples are not always available and in these cases, an artificial reference must be created. A proper generation of this signal is crucial in terms of both noise and bias. We propose NSA (Normality Search Algorithm), a scaling method that works with and without control samples. It is based on the assumption that genomic regions enriched in SNPs with identical copy numbers in both alleles are likely to be normal. These normal regions are predicted for each sample individually and used to calculate the final reference signal. NSA can be applied to any CN data regardless of the microarray technology and preprocessing method. It also finds an optimal weighting of the samples, minimizing possible batch effects. Five human datasets (a subset of HapMap samples, Glioblastoma Multiforme (GBM), Ovarian, Prostate and Lung Cancer experiments) have been analyzed. It is shown that using only tumoral samples, NSA is able to remove the bias in the copy number estimation, to reduce the noise and, therefore, to increase the ability to detect copy number aberrations (CNAs). These improvements allow NSA to detect recurrent aberrations more accurately than other state-of-the-art methods. NSA provides a robust and accurate reference for scaling probe signal data to CN values without the need for control samples. It minimizes the problems of bias, noise and batch effects in the estimation of CNs. Therefore, the NSA scaling approach helps to detect recurrent CNAs better than current methods. The automatic selection of references makes it useful for bulk analysis of many GEO or ArrayExpress experiments without the need to develop a parser to find the normal samples or possible batches within the data. The method is available in the open-source R package NSA, which is an add-on to the aroma.cn framework. http://www.aroma-project.org/addons.

  11. Relations among questionnaire and experience sampling measures of inner speech: a smartphone app study

    PubMed Central

    Alderson-Day, Ben; Fernyhough, Charles

    2015-01-01

    Inner speech is often reported to be a common and central part of inner experience, but its true prevalence is unclear. Many questionnaire-based measures appear to lack convergent validity and it has been claimed that they overestimate inner speech in comparison to experience sampling methods (which involve collecting data at random timepoints). The present study compared self-reporting of inner speech collected via a general questionnaire and experience sampling, using data from a custom-made smartphone app (Inner Life). Fifty-one university students completed a generalized self-report measure of inner speech (the Varieties of Inner Speech Questionnaire, VISQ) and responded to at least seven random alerts to report on incidences of inner speech over a 2-week period. Correlations and pairwise comparisons were used to compare generalized endorsements and randomly sampled scores for each VISQ subscale. Significant correlations were observed between general and randomly sampled measures for only two of the four VISQ subscales, and endorsements of inner speech with evaluative or motivational characteristics did not correlate at all across different measures. Endorsement of inner speech items was significantly lower for random sampling compared to generalized self-report, for all VISQ subscales. Exploratory analysis indicated that specific inner speech characteristics were also related to anxiety and future-oriented thinking. PMID:25964773

  12. Effect of pasteurization on survival of Mycobacterium paratuberculosis in milk.

    PubMed

    Gao, A; Mutharia, L; Chen, S; Rahn, K; Odumeru, J

    2002-12-01

    Mycobacterium paratuberculosis (Mptb) is the causative agent of Johne's disease of ruminant animals including cattle, goats, and sheep. It has been suggested that this organism is associated with Crohn's disease in humans, and milk is a potential source of human exposure to this organism. A total of 18 pasteurization experiments, comprising 7 regular batch and 11 high-temperature short-time (HTST) trials, were conducted in this study. Raw milk or ultra-high-temperature pasteurized milk samples were spiked at levels of 10^3, 10^5, and 10^7 cfu of Mptb/ml. Escherichia coli and Mycobacterium bovis BCG strains at 10^7 cfu/ml were used as controls. Pasteurization experiments were conducted using the time and temperature standards specified in the Canadian National Dairy Code: 63 °C for 30 min for the regular batch method and 72 °C for 15 s for the HTST method. The death curve of this organism was assessed at 63 °C; no survivors were detected after 15 min. Each spiked sample was cultured in Middlebrook 7H9 culture broth and on Middlebrook 7H11 agar slants. Samples selected from 15 experiments were also subjected to the BACTEC culture procedure. Survival of Mptb was confirmed by IS900-based PCR of colonies recovered on slants. No survivors were detected from any of the slants or broths corresponding to the seven regular batch pasteurization trials. Mptb survivors were detected in two of the 11 HTST experiments: one by both slant and broth culture for the sample spiked to 10^7 cfu/ml, the other by BACTEC for the sample spiked to 10^5 cfu/ml. These results indicate that Mptb may survive HTST pasteurization when present at ≥ 10^5 cfu/ml in milk. A total of 710 retail milk samples collected from retail stores and dairy plants in southwest Ontario were tested by nested IS900 PCR for the presence of Mptb. Fifteen percent of these samples (n = 110) were positive. However, no survivors were isolated from the broth and agar cultures of 44 PCR-positive and 200 PCR-negative retail milk samples. The lack of recovery of live Mptb from the retail milk samples may be due either to the absence of live Mptb in the samples tested or to the presence of a low number of viable Mptb undetected by the culture method used in this study.

  13. High energy PIXE: A tool to characterize multi-layer thick samples

    NASA Astrophysics Data System (ADS)

    Subercaze, A.; Koumeir, C.; Métivier, V.; Servagent, N.; Guertin, A.; Haddad, F.

    2018-02-01

    High energy PIXE is a useful and non-destructive tool to characterize multi-layer thick samples such as cultural heritage objects. In a previous work, we demonstrated the possibility of performing quantitative analysis of simple multi-layer samples using high energy PIXE, without any assumption on their composition. In this work, an in-depth study of the parameters involved in the previously published method is proposed. Its extension to more complex samples with a repeated layer is also presented. Experiments have been performed at the ARRONAX cyclotron using 68 MeV protons. The thicknesses and sequences of a multi-layer sample including two different layers of the same element have been determined. Performances and limits of this method are presented and discussed.

  14. [Influence of sample surface roughness on mathematical model of NIR quantitative analysis of wood density].

    PubMed

    Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun

    2007-09-01

    Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques consist of regression methods used to build prediction models; however, the accuracy of the analysis results is affected by many factors. In the present paper, the influence of different sample surface roughness on the mathematical model for NIR quantitative analysis of wood density was studied. The experiments showed that when the roughness of the predicted samples was consistent with that of the calibration samples, the results were good; otherwise, the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughness, and its prediction ability was much better than that of the single-roughness model.
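
    A common multivariate calibration for NIR data of this kind is partial least squares; the sketch below pools spectra from two simulated roughness classes into a single ('roughness-mixed') calibration. The synthetic spectra and the way roughness enters them are assumptions for illustration only, not the paper's data or model.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)

        def spectra(n, roughness):
            """Toy NIR spectra: a density-linked band plus a
            roughness-dependent baseline (crude scattering stand-in)."""
            density = rng.uniform(0.4, 0.8, n)
            base = rng.standard_normal((n, 200)) * 0.02 + roughness
            base[:, 40:60] += density[:, None]      # 'absorption band'
            return base, density

        X_smooth, y_smooth = spectra(40, roughness=0.0)
        X_rough, y_rough = spectra(40, roughness=0.3)

        # Roughness-mixed calibration: pool both roughness classes.
        X = np.vstack([X_smooth, X_rough])
        y = np.concatenate([y_smooth, y_rough])
        pls = PLSRegression(n_components=4).fit(X, y)

        X_test, y_test = spectra(20, roughness=0.3)
        print("R^2 on rough test samples:", pls.score(X_test, y_test))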

  15. Contour temperature programmed desorption for monitoring multiple chemical reaction products

    NASA Astrophysics Data System (ADS)

    Chusuei, C. C.; de la Peña, J. V.; Schreifels, J. A.

    1999-09-01

    A simple method for obtaining a comprehensive overview of major compounds desorbing from the surface during temperature programmed desorption (TPD) experiments is outlined. Standard commercially available equipment is used to perform the experiment. The method is particularly valuable when high molecular mass compounds are being studied. The acquisition of contour temperature programmed desorption (CTPD) spectra, sampling 50-dalton mass ranges at a time in the thermal desorption experiments, is described and demonstrated for the interaction of benzotriazole adsorbed on a Ni(111) surface. Conventional two-dimensional TPD spectra can be extracted from the CTPD by taking vertical slices of the contour.

  16. Experiments on Nucleation in Different Flow Regimes

    NASA Technical Reports Server (NTRS)

    Bayuzick, R. J.; Hofmeister, W. H.; Morton, C. M.; Robinson, M. B.

    1998-01-01

    The vast majority of metallic engineering materials are solidified from the liquid phase. Understanding the solidification process is essential to control microstructure, which in turn determines the properties of materials. The genesis of solidification is nucleation, where the first stable solid forms from the liquid phase. Nucleation kinetics determine the degree of undercooling and phase selection. As such, it is important to understand nucleation phenomena in order to control solidification or glass formation in metals and alloys. Early experiments in nucleation kinetics were accomplished by droplet dispersion methods. Dilatometry was used by Turnbull and others, and more recently differential thermal analysis and differential scanning calorimetry have been used for kinetic studies. These techniques have enjoyed success; however, there are difficulties with these experiments. Since materials are dispersed in a medium, the character of the emulsion/metal interface affects the nucleation behavior. Statistics are derived from the large number of particles observed in a single experiment, but dispersions have a finite size distribution, which adds to the uncertainty of the kinetic determinations. Even though temperature can be controlled quite well before the onset of nucleation, the release of the latent heat of fusion during nucleation of particles complicates the assumption of isothermality during these experiments. Containerless processing has enabled another approach to the study of nucleation kinetics. With levitation techniques it is possible to undercool a single sample to nucleation repeatedly in a controlled manner, such that the statistics of the nucleation process can be derived from multiple experiments on one sample. The authors have fully developed the analysis of nucleation experiments on single samples following the suggestions of Skripov. The advantage of these experiments is that the samples are directly observable. The nucleation temperature can be measured by noncontact optical pyrometry, the mass of the sample is known, and post-processing analysis can be conducted on the sample. The disadvantages are that temperature measurement must have exceptionally high precision, and it is not possible to isolate specific heterogeneous sites as in droplet dispersions. Levitation processing of refractory materials in ultra-high vacuum provides an avenue to conduct these kinetic studies on single samples. Two experimental methods have been identified where ultra-high-vacuum experiments are possible: electrostatic levitation in ground-based experiments and electromagnetic processing in low earth orbit on TEMPUS. Such experiments, reported here, were conducted on zirconium. Liquid zirconium is an excellent solvent and has a high solubility for contaminants contained in the bulk material as well as those contaminants found in the vacuum environment. Oxides, nitrides, and carbides do not exist in the melt, and do not form on the surface of molten zirconium, for the materials and vacuum levels used in this study. Ground-based experiments with electrostatic levitation have shown that the statistical nucleation kinetic experiments are viable and yield results which are consistent with classical nucleation theory. The advantage of low earth orbit experiments is the ability to vary the flow conditions in the liquid prior to nucleation. The purpose of nucleation experiments in TEMPUS was to examine.

  17. Media Influence on Sexual Activity and Contraceptive Use: A Cross Sectional Survey among Young Women in Urban Nigeria.

    PubMed

    Bajoga, Ummulkhulthum A; Atagame, Ken L; Okigbo, Chinelo C

    2015-09-01

    This study assessed the relationship between recent exposure to family planning (FP) messages in the media (newspaper, radio, television, and mobile phones) and the use of modern contraceptive methods among women aged 15-24 years living in six cities in Nigeria. Logistic regression models were used to estimate the association of recent media exposure to FP messages with sexual experience and modern contraceptive method use. About 45% of our sample had ever had sex, and only a quarter of those women were using a modern contraceptive method at the time of the survey. Approximately 71% of our sample had been exposed to FP messages in the media within the three months preceding the survey. The main sources of media exposure were mobile phones (48%), radio (37%), and television (29%). Controlling for relevant factors, recent media exposure to FP messages predicted both sexual experience and use of modern contraceptive methods, although there were city-level differences.

  18. A rapid low-cost high-density DNA-based multi-detection test for routine inspection of meat species.

    PubMed

    Lin, Chun Chi; Fung, Lai Ling; Chan, Po Kwok; Lee, Cheuk Man; Chow, Kwok Fai; Cheng, Shuk Han

    2014-02-01

    The increasing occurrence of food fraud suggests that species identification should be part of food authentication. Current molecular-based species identification methods have their own limitations or drawbacks, such as relatively time-consuming experimental steps and expensive equipment; in particular, these methods cannot identify mixed species in a single experiment. This project proposes an improved method involving PCR amplification of the COI gene and detection of species-specific sequences by hybridisation. The major innovation is the detection of multiple species, including pork, beef, lamb, horse, cat, dog and mouse, from a mixed sample within a single experiment. The probes used are species-specific in both single-species and mixed-species samples. As little as 5 pg of DNA template in the PCR is detectable with the proposed method. By designing species-specific probes and adopting reverse dot blot hybridisation and flow-through hybridisation, a low-cost high-density DNA-based multi-detection test suitable for routine inspection of meat species was developed. © 2013.

  19. Manifold Regularized Experimental Design for Active Learning.

    PubMed

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples for alleviating the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.

  20. Measurement of Passive Uptake Rates for Volatile Organic Compounds on Commercial Thermal Desorption Tubes and the Effect of Ozone on Sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddalena, Randy; Parra, Amanda; Russell, Marion

    Diffusive or passive sampling methods using commercially filled axial-sampling thermal desorption tubes are widely used for measuring volatile organic compounds (VOCs) in air. The passive sampling method provides a robust, cost effective way to measure air quality with time-averaged concentrations spanning up to a week or more. Sampling rates for VOCs can be calculated using tube geometry and Fick’s Law for ideal diffusion behavior or measured experimentally. There is evidence that uptake rates deviate from ideal and may not be constant over time. Therefore, experimentally measured sampling rates are preferred. In this project, a calibration chamber with a continuous stirred tank reactor design and constant VOC source was combined with active sampling to generate a controlled dynamic calibration environment for passive samplers. The chamber air was augmented with a continuous source of 45 VOCs ranging from pentane to diethyl phthalate representing a variety of chemical classes and physiochemical properties. Both passive and active samples were collected on commercially filled Tenax TA thermal desorption tubes over an 11-day period and used to calculate passive sampling rates. A second experiment was designed to determine the impact of ozone on passive sampling by using the calibration chamber to passively load five terpenes on a set of Tenax tubes and then exposing the tubes to different ozone environments with and without ozone scrubbers attached to the tube inlet. During the sampling rate experiment, the measured diffusive uptake was constant for up to seven days for most of the VOCs tested but deviated from linearity for some of the more volatile compounds between seven and eleven days. In the ozone experiment, both exposed and unexposed tubes showed a similar decline in terpene mass over time indicating back diffusion when uncapped tubes were transferred to a clean environment but there was no indication of significant loss by ozone reaction.
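
    The ideal Fick's-law sampling rate mentioned above follows directly from tube geometry, as in the sketch below. The numerical values are representative of a standard axial TD tube and a toluene-like diffusion coefficient, not measured values from this study.

        def ideal_uptake_rate(D, A, L):
            """Ideal diffusive sampling rate from Fick's law for an axial
            thermal desorption tube: U = D * A / L.
            D : diffusion coefficient of the VOC in air, cm^2/min
            A : cross-sectional area of the diffusion path, cm^2
            L : diffusion path length, cm
            Returns the uptake rate in cm^3/min (mL/min)."""
            return D * A / L

        # Representative values: D ~ 0.08 cm^2/s = 4.8 cm^2/min (toluene-like),
        # tube ID ~5 mm so A ~ 0.196 cm^2, diffusion gap L ~ 1.5 cm.
        print(ideal_uptake_rate(4.8, 0.196, 1.5), "mL/min")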

  1. Low acetaldehyde collection efficiencies for 24-hour sampling with 2,4-dinitrophenylhydrazine (DNPH)-coated solid sorbents.

    PubMed

    Herrington, Jason S; Fan, Zhi-Hua Tina; Lioy, Paul J; Zhang, Junfeng Jim

    2007-01-15

    Airborne aldehyde and ketone (carbonyl) sampling methodologies based on derivatization with 2,4-dinitrophenylhydrazine (DNPH)-coated solid sorbents could unequivocally be considered the "gold" standard. Originally developed in the late 1970s, these methods have been extensively evaluated and developed up to the present day. However, these methods have been inadequately evaluated for the long-term (i.e., 24 h or greater) sampling collection efficiency (CE) of carbonyls other than formaldehyde. The current body of literature fails to demonstrate that DNPH-coated solid sorbent sampling methods have acceptable CEs for the long-term sampling of carbonyls other than formaldehyde. Despite this, such methods are widely used to report the concentrations of multiple carbonyls from long-term sampling, assuming approximately 100% CEs. Laboratory experiments were conducted in this study to evaluate the long-term formaldehyde and acetaldehyde sampling CEs for several commonly used DNPH-coated solid sorbents. Results from sampling known concentrations of formaldehyde and acetaldehyde generated in a dynamic atmosphere generation system demonstrate that the 24-hour formaldehyde sampling CEs ranged from 83 to 133%, confirming the findings made in previous studies. However, the 24-hour acetaldehyde sampling CEs ranged from 1 to 62%. Attempts to increase the acetaldehyde CEs by adding acid to the samples post sampling were unsuccessful. These results indicate that assuming approximately 100% CEs for 24-hour acetaldehyde sampling, as commonly done with DNPH-coated solid sorbent methods, would substantially underestimate acetaldehyde concentrations.

  2. Weighted analysis of paired microarray experiments.

    PubMed

    Kristiansson, Erik; Sjögren, Anders; Rudemo, Mats; Nerman, Olle

    2005-01-01

    In microarray experiments quality often varies, for example between samples and between arrays. The need for quality control is therefore strong. A statistical model and a corresponding analysis method are suggested for experiments with pairing, including designs with individuals observed before and after treatment and many experiments with two-colour spotted arrays. The model is of mixed type with some parameters estimated by an empirical Bayes method. Differences in quality are modelled by individual variances and correlations between repetitions. The method is applied to three real and several simulated datasets. Two of the real datasets are of Affymetrix type with patients profiled before and after treatment, and the third dataset is of two-colour spotted cDNA type. In all cases, the patients or arrays had different estimated variances, leading to distinctly unequal weights in the analysis. We also suggest plots that illustrate the variances and correlations affecting the weights computed by our analysis method. For simulated data the improvement relative to previously published methods without weighting is shown to be substantial.

  3. EPA Method 1615. Measurement of Enterovirus and Norovirus Occurrence in Water by Culture and RT-qPCR. I. Collection of Virus Samples

    PubMed Central

    Fout, G. Shay; Cashdollar, Jennifer L.; Varughese, Eunice A.; Parshionikar, Sandhya U.; Grimm, Ann C.

    2015-01-01

    EPA Method 1615 was developed with a goal of providing a standard method for measuring enteroviruses and noroviruses in environmental and drinking waters. The standardized sampling component of the method concentrates viruses that may be present in water by passage of a minimum specified volume of water through an electropositive cartridge filter. The minimum specified volumes for surface and finished/ground water are 300 L and 1,500 L, respectively. A major method limitation is the tendency for the filters to clog before meeting the sample volume requirement. Studies using two different, but equivalent, cartridge filter options showed that filter clogging was a problem with 10% of the samples with one of the filter types compared to 6% with the other filter type. Clogging tends to increase with turbidity, but cannot be predicted based on turbidity measurements only. From a cost standpoint one of the filter options is preferable over the other, but the water quality and experience with the water system to be sampled should be taken into consideration in making filter selections. PMID:25867928

  4. Determining significant material properties: A discovery approach

    NASA Technical Reports Server (NTRS)

    Karplus, Alan K.

    1992-01-01

    The following is a laboratory experiment designed to further understanding of materials science. The experiment itself can be informative for persons of any age past elementary school, and even for some in elementary school. The preparation of the plastic samples is readily accomplished by persons with reasonable dexterity in the cutting of paper designs. The completion of the statistical Design of Experiments, which uses Yates' Method, requires basic math (addition and subtraction). Interpretive work requires plotting of data and making observations. Knowledge of statistical methods would be helpful. The purpose of this experiment is to acquaint students with the seven classes of recyclable plastics, and provide hands-on learning about the response of these plastics to mechanical tensile loading.
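
    Yates' Method itself is mechanical enough to state as code: with responses in standard order, k rounds of pairwise sums followed by pairwise differences yield the factorial contrasts. A sketch follows; the tensile-load numbers are invented for illustration.

        def yates(responses):
            """Yates' algorithm for a full 2^k factorial experiment.
            `responses` must be in standard (Yates) order, e.g. for k=2:
            (1), a, b, ab. Returns the contrast column; dividing entry
            i > 0 by 2**(k-1) gives the corresponding effect estimate."""
            n = len(responses)
            k = n.bit_length() - 1
            assert n == 2 ** k, "length must be a power of two"
            col = list(responses)
            for _ in range(k):
                sums = [col[i] + col[i + 1] for i in range(0, n, 2)]
                diffs = [col[i + 1] - col[i] for i in range(0, n, 2)]
                col = sums + diffs
            return col

        # 2^2 example: tensile loads for (1), a, b, ab.
        contrasts = yates([60, 72, 54, 68])
        effects = [c / 2 for c in contrasts[1:]]   # divide by 2**(k-1), k=2
        print(contrasts, effects)                  # A=13, B=-5, AB=1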

  5. Improved lossless intra coding for H.264/MPEG-4 AVC.

    PubMed

    Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J

    2006-09-01

    A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
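
    The samplewise DPCM idea can be sketched independently of the codec: predict each sample from its left neighbour (top neighbour for the first column) and keep the residuals, which decode losslessly. This simplification ignores the multidirectional prediction modes and the entropy coding of H.264/AVC.

        import numpy as np

        def dpcm_encode(block):
            """Horizontal sample-by-sample DPCM: each sample is predicted
            by its left neighbour; the first column is predicted by the
            sample above (a simplification of the samplewise scheme)."""
            resid = block.astype(int)
            resid[:, 1:] -= block[:, :-1].astype(int)   # left-neighbour prediction
            resid[1:, 0] -= block[:-1, 0].astype(int)   # first column: top neighbour
            return resid

        def dpcm_decode(resid):
            rec = resid.copy()
            for r in range(rec.shape[0]):
                if r > 0:
                    rec[r, 0] += rec[r - 1, 0]
                for c in range(1, rec.shape[1]):
                    rec[r, c] += rec[r, c - 1]
            return rec

        block = np.random.default_rng(0).integers(0, 256, (4, 4))
        assert np.array_equal(dpcm_decode(dpcm_encode(block)), block)  # lossless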

  6. A dynamic programming approach for the alignment of signal peaks in multiple gas chromatography-mass spectrometry experiments.

    PubMed

    Robinson, Mark D; De Souza, David P; Keen, Woon Wai; Saunders, Eleanor C; McConville, Malcolm J; Speed, Terence P; Likić, Vladimir A

    2007-10-29

    Gas chromatography-mass spectrometry (GC-MS) is a robust platform for the profiling of certain classes of small molecules in biological samples. When multiple samples are profiled, including replicates of the same sample and/or different sample states, one needs to account for retention time drifts between experiments. This can be achieved either by aligning chromatographic profiles prior to peak detection, or by matching signal peaks after they have been extracted from the chromatogram data matrices. Automated retention time correction is particularly important in non-targeted profiling studies. A new approach for matching signal peaks based on dynamic programming is presented. The proposed approach relies on both peak retention times and mass spectra. The alignment of more than two peak lists involves three steps: (1) all possible pairs of peak lists are aligned, and the similarity of each pair is estimated; (2) a guide tree is built based on the similarity between the peak lists; (3) peak lists are progressively aligned, starting with the two most similar, following the guide tree until all peak lists are exhausted. When experiments are performed on two or more different sample states, each consisting of multiple replicates, peak lists within each set of replicate experiments are aligned first (within-state alignment), and the resulting alignments are subsequently aligned themselves (between-state alignment). When more than two sets of replicate experiments are present, the between-state alignment also employs the guide tree. We demonstrate the usefulness of this approach on GC-MS metabolic profiling experiments acquired on wild-type and mutant Leishmania mexicana parasites. We propose a progressive method to match signal peaks across multiple GC-MS experiments based on dynamic programming. A sensitive peak similarity function is proposed to balance peak retention time and peak mass spectra similarities. This approach can produce the optimal alignment between an arbitrary number of peak lists, and explicitly models within-state and between-state peak alignment. The accuracy of the proposed method was close to that of manually curated peak matching, which required tens of man-hours for the analyzed data sets. The proposed approach may offer significant advantages for processing high-throughput metabolomics data, especially when large numbers of experimental replicates and multiple sample states are analyzed.
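
    A minimal sketch of the pairwise step (global alignment of two peak lists by dynamic programming, with a score that balances spectral similarity against retention-time distance) is shown below. The peak representation, the Gaussian RT weighting, and the gap penalty value are illustrative assumptions, not the exact scoring used in the published method.

    ```python
    import numpy as np

    def peak_similarity(p, q, rt_scale=2.5):
        """Cosine similarity of mass spectra, down-weighted by a Gaussian
        in the retention-time difference (rt_scale in RT units)."""
        cos = np.dot(p["spec"], q["spec"]) / (
            np.linalg.norm(p["spec"]) * np.linalg.norm(q["spec"]))
        return cos * np.exp(-0.5 * ((p["rt"] - q["rt"]) / rt_scale) ** 2)

    def align_peak_lists(a, b, gap=0.3):
        """Needleman-Wunsch-style global alignment of two peak lists;
        returns the list of matched (index-in-a, index-in-b) pairs."""
        n, m = len(a), len(b)
        S = np.zeros((n + 1, m + 1))
        S[:, 0] = -gap * np.arange(n + 1)
        S[0, :] = -gap * np.arange(m + 1)
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                S[i, j] = max(S[i - 1, j - 1] + peak_similarity(a[i - 1], b[j - 1]),
                              S[i - 1, j] - gap, S[i, j - 1] - gap)
        pairs, i, j = [], n, m  # trace the optimal path back to the origin
        while i > 0 and j > 0:
            if np.isclose(S[i, j], S[i - 1, j - 1] + peak_similarity(a[i - 1], b[j - 1])):
                pairs.append((i - 1, j - 1)); i -= 1; j -= 1
            elif np.isclose(S[i, j], S[i - 1, j] - gap):
                i -= 1
            else:
                j -= 1
        return pairs[::-1]
    ```

    Progressive alignment would then apply this pairwise routine repeatedly along the guide tree, treating already-merged alignments as single peak lists.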

  7. A multitask clustering approach for single-cell RNA-seq analysis in Recessive Dystrophic Epidermolysis Bullosa

    PubMed Central

    Petegrosso, Raphael; Tolar, Jakub

    2018-01-01

    Single-cell RNA sequencing (scRNA-seq) has been widely applied to discover new cell types by detecting sub-populations in a heterogeneous group of cells. Since scRNA-seq experiments have lower read coverage/tag counts and introduce more technical biases than bulk RNA-seq experiments, the limited number of sampled cells combined with the experimental biases and other dataset-specific variations presents a challenge to cross-dataset analysis and discovery of relevant biological variations across multiple cell populations. In this paper, we introduce a method for variance-driven multitask clustering of single-cell RNA-seq data (scVDMC) that utilizes multiple single-cell populations from biological replicates or different samples. scVDMC clusters single cells in multiple scRNA-seq experiments with similar cell types and markers but varying expression patterns, so that the scRNA-seq data are better integrated than in typical pooled analyses, which only increase the sample size. By controlling the variance among the cell clusters within each dataset and across all the datasets, scVDMC detects cell sub-populations in each individual experiment with shared cell-type markers but varying cluster centers among all the experiments. Applied to two real scRNA-seq datasets with several replicates and one large-scale droplet-based dataset on three patient samples, scVDMC detected cell populations and known cell markers more accurately than pooled clustering and other recently proposed scRNA-seq clustering methods. In a case study applied to in-house Recessive Dystrophic Epidermolysis Bullosa (RDEB) scRNA-seq data, scVDMC revealed several new cell types and unknown markers validated by flow cytometry. MATLAB/Octave code available at https://github.com/kuanglab/scVDMC. PMID:29630593
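
    The published scVDMC objective is more involved, but the flavour of multitask clustering with controlled center variance can be conveyed with a toy sketch: each dataset keeps its own cluster centres, and a coupling weight lam shrinks them toward shared consensus centres. The function name, penalty form, and update rule below are illustrative assumptions, not the authors' algorithm.

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    def multitask_kmeans(datasets, k, lam=1.0, iters=50):
        """Toy multitask k-means over a list of (n_cells x n_genes) arrays.
        Dataset-specific centres are shrunk toward shared consensus centres."""
        d = datasets[0].shape[1]
        shared = rng.normal(size=(k, d))
        centers = [shared.copy() for _ in datasets]
        labels = [None] * len(datasets)
        for _ in range(iters):
            for t, X in enumerate(datasets):
                dist = ((X[:, None, :] - centers[t][None]) ** 2).sum(-1)
                labels[t] = dist.argmin(axis=1)
            for t, X in enumerate(datasets):
                for c in range(k):
                    pts = X[labels[t] == c]
                    # empty clusters fall back to the shared centre
                    centers[t][c] = (pts.sum(0) + lam * shared[c]) / (len(pts) + lam)
            shared = np.mean(centers, axis=0)  # consensus across datasets
        return centers, labels
    ```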

  8. [Determination of LF-VD refining furnace slag by X ray fluorescence spectrometry].

    PubMed

    Kan, Bin; Cheng, Jian-ping; Song, Zu-feng

    2004-10-01

    Eight components, i.e. TFe, CaO, MgO, Al2O3, SiO2, TiO2, MnO and P2O5, in refining furnace slag were determined by X-ray fluorescence spectrometry. Because the content of CaO was high, the authors selected 12 national and departmental grade slag standard samples and prepared a series of synthetic standard samples by adding spectrally pure reagents to them. The calibration curve is suitable for the analysis of CaO, MgO and SiO2 over a wide concentration range, and the points on the curve are evenly distributed. The samples were prepared at high temperature with Li2B4O7 added as flux. Experiments were carried out to select sample preparation conditions, including stripping reagents, melting temperature and dilution ratio. Matrix absorption and enhancement effects were corrected by means of the PH model and theoretical alpha coefficients. Moreover, precision and accuracy experiments were performed. In comparison with the chemical analysis method, the quantitative analytical results for each component are satisfactory. The method has proven rapid, precise and simple.

  9. Instrumental Analysis of Biodiesel Content in Commercial Diesel Blends: An Experiment for Undergraduate Analytical Chemistry

    ERIC Educational Resources Information Center

    Feng, Z. Vivian; Buchman, Joseph T.

    2012-01-01

    The potential of replacing petroleum fuels with renewable biofuels has drawn significant public interest. Many states have imposed biodiesel mandates or incentives to use commercial biodiesel blends. We present an inquiry-driven experiment where students are given the tasks to gather samples, develop analytical methods using various instrumental…

  10. Lived Experience of Women Suffering from Vitiligo: A Phenomenological Study

    ERIC Educational Resources Information Center

    Borimnejad, Leili; Yekta, Zohreh Parsa; Nasrabadi, Alireza Nikbakht

    2006-01-01

    Vitiligo is a chronic skin disease, which through change of appearance and body image, exerts a devastating effect on people, especially women. The objective of this study is to explore lived experience of women with Vitiligo by the hermeneutic phenomenology method. The purposive sample consisted of 16 Iranian women. Data analysis followed…

  11. Identification of Phosphorylated Proteins on a Global Scale.

    PubMed

    Iliuk, Anton

    2018-05-31

    Liquid chromatography (LC) coupled with tandem mass spectrometry (MS/MS) has enabled researchers to analyze complex biological samples with unprecedented depth. It facilitates the identification and quantification of modifications within thousands of proteins in a single large-scale proteomic experiment. Analysis of phosphorylation, one of the most common and important post-translational modifications, has particularly benefited from such progress in the field. Here, detailed protocols are provided for a few well-regarded, common sample preparation methods for an effective phosphoproteomic experiment. © 2018 by John Wiley & Sons, Inc.

  12. Whole-rock uranium analysis by fission track activation

    NASA Technical Reports Server (NTRS)

    Weiss, J. R.; Haines, E. L.

    1974-01-01

    We report a whole-rock uranium method in which the polished sample and track detector are separated in a vacuum chamber. Irradiation with thermal neutrons induces uranium fission in the sample, and the detector records the integrated fission track density. Detection efficiency and geometric factors are calculated and compared with calibration experiments.

  13. Cognitive Anxiety: A Method of Content Analysis for Verbal Samples

    ERIC Educational Resources Information Center

    Viney, Linda L.; Westbrook, Mary

    1976-01-01

    Five groups--second-year students, psychiatric inpatients, incoming students, mothers, and relocated women--were tested with verbal samples to examine the effects of cognitive anxiety as a construct implying a reaction to being unable to anticipate and integrate experiences meaningfully. The measure used was found to be valid. (Author/DEP)

  14. An approach for evaluating the repeatability of rapid wetland assessment methods: The effects of training and experience

    EPA Science Inventory

    We sampled 92 wetlands from four different basins in the United States to quantify observer repeatability in rapid wetland condition assessment using the Delaware Rapid Assessment Protocol (DERAP). In the Inland Bays basin of Delaware, 58 wetland sites were sampled by multiple ob...

  15. A unified method to process biosolids samples for the recovery of bacterial, viral, and helminths pathogens.

    PubMed

    Alum, Absar; Rock, Channah; Abbaszadegan, Morteza

    2014-01-01

    For land application, biosolids are classified as Class A or Class B based on the levels of bacterial, viral, and helminth pathogens in residual biosolids. The current EPA methods for the detection of these groups of pathogens in biosolids include discrete steps; a separate sample is therefore processed independently to quantify the number of each group of pathogens. The aim of the study was to develop a unified method for simultaneous processing of a single biosolids sample to recover bacterial, viral, and helminth pathogens. In the first stage of developing the simultaneous method, nine eluents were compared for their efficiency in recovering viruses from a 100 g spiked biosolids sample. In the second stage, the three top-performing eluents were thoroughly evaluated for the recovery of bacteria, viruses, and helminths. For all three groups of pathogens, the glycine-based eluent provided higher recovery than the beef extract-based eluent. Additional experiments were performed to optimize the performance of the glycine-based eluent under various procedural factors such as solids-to-eluent ratio, stir time, and centrifugation conditions. Finally, the new method was directly compared with the EPA methods for the recovery of the three groups of pathogens spiked in duplicate samples of biosolids collected from different sources. For viruses, the new method yielded up to 10% higher recoveries than the EPA method. For bacteria and helminths, recoveries were 74% and 83% by the new method compared to 34% and 68% by the EPA method, respectively. The unified sample processing method significantly reduces the time required for processing biosolids samples for different groups of pathogens; it is less affected by the intrinsic variability of samples, while providing higher yields (P = 0.05) and greater consistency than the current EPA methods.

  16. Comparison of depth-specific groundwater sampling methods and their influence on hydrochemistry, isotopy and dissolved gases - Experiences from the Fuhrberger Feld, Germany

    NASA Astrophysics Data System (ADS)

    Houben, Georg J.; Koeniger, Paul; Schloemer, Stefan; Gröger-Trampe, Jens; Sültenfuß, Jürgen

    2018-02-01

    Depth-specific sampling of groundwater is important for a variety of hydrogeological applications. Several sampling methods are available, but comparatively little is known about how their results compare. Therefore, samples from regular observation wells (short screen), micro-filters and direct push were compared for two sites with differing hydrogeological conditions and land use, both located in the Fuhrberger Feld, Germany. The encountered hydrochemical zonation requires a high resolution of 1 m or better, which the available small number of regular observation wells could only roughly mirror. Because the three methods employ significantly different pumping rates, and therefore draw samples from different spatial origins, individual concentrations at similar depths may differ significantly. In a hydrologically and chemically dynamic environment such as the agricultural site, this effect is more pronounced than at the more stable forest site. The micro-filters are probably the most depth-specific, but showed distinctly lower concentrations for dissolved gases than the other two methods, due to degassing during sampling. They should thus not be used for any method that relies on dissolved gas analysis.

  17. Simple Sodium Dodecyl Sulfate-Assisted Sample Preparation Method for LC-MS-based Proteomic Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Jianying; Dann, Geoffrey P.; Shi, Tujin

    2012-03-10

    Sodium dodecyl sulfate (SDS) is one of the most popular laboratory reagents used for highly efficient biological sample extraction; however, SDS presents a significant challenge to LC-MS-based proteomic analyses due to its severe interference with reversed-phase LC separations and electrospray ionization interfaces. This study reports a simple SDS-assisted proteomic sample preparation method facilitated by a novel peptide-level SDS removal protocol. After SDS-assisted protein extraction and digestion, SDS was effectively (>99.9%) removed from peptides through ion substitution-mediated DS- precipitation with potassium chloride (KCl) followed by approximately 10 min of centrifugation. Excellent peptide recovery (>95%) was observed for less than 20 μg of peptides. Further experiments demonstrated the compatibility of this protocol with LC-MS/MS analyses. The resulting proteome coverage from this SDS-assisted protocol was comparable to or better than those obtained from other standard proteomic preparation methods in both mammalian tissues and bacterial samples. These results suggest that this SDS-assisted protocol is a practical, simple, and broadly applicable proteomic sample processing method, which can be particularly useful when dealing with samples difficult to solubilize by other methods.

  18. Random Numbers and Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by preferentially sampling the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
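
    A compact illustration of the chapter's central ingredient, the Metropolis accept/reject rule, applied to a one-dimensional Boltzmann-like target; the proposal step size and chain length are arbitrary choices for this sketch.

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    def metropolis(logp, x0, n_steps, step=0.5):
        """Random-walk Metropolis sampler for an unnormalized log-density."""
        x, lp = x0, logp(x0)
        samples = np.empty(n_steps)
        for i in range(n_steps):
            prop = x + rng.normal(scale=step)        # symmetric proposal
            lp_prop = logp(prop)
            if np.log(rng.random()) < lp_prop - lp:  # accept with min(1, ratio)
                x, lp = prop, lp_prop
            samples[i] = x
        return samples

    # thermodynamic average <x^2> under exp(-x^2/2): exact value is 1
    chain = metropolis(lambda x: -0.5 * x * x, 0.0, 50_000)
    print(chain[5_000:].var())  # ~1.0 after discarding burn-in
    ```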

  19. Analysis of host response to bacterial infection using error model based gene expression microarray experiments

    PubMed Central

    Stekel, Dov J.; Sarti, Donatella; Trevino, Victor; Zhang, Lihong; Salmon, Mike; Buckley, Chris D.; Stevens, Mark; Pallen, Mark J.; Penn, Charles; Falciani, Francesco

    2005-01-01

    A key step in the analysis of microarray data is the selection of genes that are differentially expressed. Ideally, such experiments should be properly replicated in order to infer both technical and biological variability, and the data should be subjected to rigorous hypothesis tests to identify the differentially expressed genes. However, in microarray experiments involving the analysis of very large numbers of biological samples, replication is not always practical. Therefore, there is a need for a method to select differentially expressed genes in a rational way from insufficiently replicated data. In this paper, we describe a simple method that uses bootstrapping to generate an error model from a replicated pilot study that can be used to identify differentially expressed genes in subsequent large-scale studies on the same platform, but in which there may be no replicated arrays. The method builds a stratified error model that includes array-to-array variability, feature-to-feature variability and the dependence of error on signal intensity. We apply this model to the characterization of the host response in a model of bacterial infection of human intestinal epithelial cells. We demonstrate the effectiveness of error model based microarray experiments and propose this as a general strategy for a microarray-based screening of large collections of biological samples. PMID:15800204
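
    The gist of the strategy can be sketched as follows: log-ratios between technical replicates in the replicated pilot study supply an empirical null distribution of expression differences, against which a fold change observed on an unreplicated array is assigned an empirical two-sided p-value. The intensity and feature stratification of the full error model is omitted for brevity; all names and array shapes are illustrative.

    ```python
    import numpy as np
    rng = np.random.default_rng(1)

    def bootstrap_null(pilot, n_boot=10_000):
        """pilot: (genes x replicate arrays) log-intensities from the pilot
        study.  Samples random gene/replicate pairs to build a null of
        log-ratios attributable to technical variability alone."""
        genes, arrays = pilot.shape
        g = rng.integers(genes, size=n_boot)
        a = rng.integers(arrays, size=n_boot)
        b = (a + 1 + rng.integers(arrays - 1, size=n_boot)) % arrays  # b != a
        return pilot[g, a] - pilot[g, b]

    def empirical_p(null, observed_log_ratio):
        # two-sided tail probability under the replicate-noise null
        return float(np.mean(np.abs(null) >= abs(observed_log_ratio)))

    pilot = rng.normal(0.0, 0.15, size=(500, 4))  # synthetic pilot data
    null = bootstrap_null(pilot)
    print(empirical_p(null, 1.0))  # a 2-fold (log2 = 1) change: p near 0
    ```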

  20. Design of experiments for amino acid extraction from tobacco leaves and their subsequent determination by capillary zone electrophoresis.

    PubMed

    Hodek, Ondřej; Křížek, Tomáš; Coufal, Pavel; Ryšlavá, Helena

    2017-03-01

    In this study, we optimized a method for the determination of free amino acids in Nicotiana tabacum leaves. Capillary electrophoresis with a contactless conductivity detector was used for the separation of 20 proteinogenic amino acids in an acidic background electrolyte. Subsequently, the conditions of extraction with HCl were optimized for the highest extraction yield of the amino acids, because sample treatment of plant materials brings some specific challenges. A central composite face-centered design with a fractional factorial design was used to evaluate the significance of selected factors (HCl volume, HCl concentration, sonication, shaking) on the extraction process. In addition, the composite design helped us find the optimal values for each factor using the response surface method. The limits of detection and limits of quantification for the 20 proteinogenic amino acids were found to be on the order of 10⁻⁵ and 10⁻⁴ mol l⁻¹, respectively. Addition of acetonitrile to the sample was tested as a method commonly used to decrease limits of detection. Ambiguous results of this experiment pointed out some features of plant extract samples, which often require specific approaches. Suitability of the method for metabolomic studies was tested by analysis of a real sample, in which all amino acids, except for L-methionine and L-cysteine, were successfully detected. The optimized extraction process together with the capillary electrophoresis method can be used for the determination of proteinogenic amino acids in plant materials. The resulting inexpensive, simple, and robust method is well suited for various metabolomic studies in plants. As such, the method represents a valuable tool for research and practical application in the fields of biology, biochemistry, and agriculture.
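
    For illustration, the run matrix of a face-centred central composite design (axial points on the faces, coded +/-1) can be generated as below; the coded levels must still be mapped onto the physical factor ranges (HCl volume, HCl concentration, sonication, shaking), and the number of centre points is a free choice.

    ```python
    from itertools import product
    import numpy as np

    def ccf_design(k, n_center=3):
        """Face-centred central composite design in coded units (-1, 0, +1)."""
        factorial = np.array(list(product([-1.0, 1.0], repeat=k)))  # 2**k corners
        axial = np.zeros((2 * k, k))
        for i in range(k):
            axial[2 * i, i], axial[2 * i + 1, i] = -1.0, 1.0  # face centres
        center = np.zeros((n_center, k))  # replicated centre points
        return np.vstack([factorial, axial, center])

    print(ccf_design(4).shape)  # (16 + 8 + 3, 4) runs for four factors
    ```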

  1. Selection of species and sampling areas: The importance of inference

    Treesearch

    Paul Stephen Corn

    2009-01-01

    Inductive inference, the process of drawing general conclusions from specific observations, is fundamental to the scientific method. Platt (1964) termed conclusions obtained through rigorous application of the scientific method as "strong inference" and noted the following basic steps: generating alternative hypotheses; devising experiments, the...

  2. Who Set the Fire? Determination of Arson Accelerants by GC-MS in an Instrumental Methods Course

    NASA Astrophysics Data System (ADS)

    Sodeman, David A.; Lillard, Sheri J.

    2001-09-01

    Forensic scenarios have advantages over traditional experiments in the instrumental laboratory from the perspectives of both teaching and learning. First, students feel that they are calculating more than just a number from their experiments and that their results have meaning. Second, we are teaching techniques that are used in the real world and students can no longer complain, "This is not how it is done in the real world." This experiment is designed for upper-division chemistry and chemical engineering majors taking an instrumental methods course. The experimental approach simulates the steps an arson investigator would take to determine if arson was the cause of a fire. Charred (unknown) samples of wood and five standards of liquid accelerants are prepared in sealed containers and presented to the students for headspace gas chromatography (GC) with quadrupole mass spectrometric (MS) detection. Students interpret the standards and the charred samples using chromatographic retention times and MS data. From this information, they determine which accelerant was used to start the fire. They are also asked to discuss differences between the chromatograms of the charred sample and the corresponding liquid accelerant.

  3. Validation of RNAi Silencing Efficiency Using Gene Array Data shows 18.5% Failure Rate across 429 Independent Experiments.

    PubMed

    Munkácsy, Gyöngyi; Sztupinszki, Zsófia; Herman, Péter; Bán, Bence; Pénzváltó, Zsófia; Szarvas, Nóra; Győrffy, Balázs

    2016-09-27

    No independent cross-validation of the success rate of studies utilizing small interfering RNA (siRNA) for gene silencing has been completed before. Assessing the influence of experimental parameters such as cell line, transfection technique, validation method, and type of control requires validation across a large set of studies. We utilized gene chip data published for siRNA experiments to assess success rate and to compare the methods used in these experiments. We searched NCBI GEO for samples with whole transcriptome analysis before and after gene silencing and evaluated the efficiency for the target and off-target genes using the array-based expression data. The Wilcoxon signed-rank test was used to assess silencing efficacy, and Kruskal-Wallis tests and Spearman rank correlation were used to evaluate study parameters. Altogether 1,643 samples representing 429 experiments published in 207 studies were evaluated. The fold change (FC) of down-regulation of the target gene was above 0.7 in 18.5% of experiments and above 0.5 in 38.7%. Silencing efficiency was lowest in MCF7 and highest in SW480 cells (FC = 0.59 and FC = 0.30, respectively, P = 9.3E-06). Studies utilizing Western blot for validation performed better than those with quantitative polymerase chain reaction (qPCR) or microarray (FC = 0.43, FC = 0.47, and FC = 0.55, respectively, P = 2.8E-04). There was no correlation between type of control, transfection method, publication year, and silencing efficiency. Although gene silencing is a robust technique successfully cross-validated in the majority of experiments, efficiency remained insufficient in a significant proportion of studies. Selection of the cell line model and validation method had the highest influence on silencing proficiency.

  4. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    PubMed

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially in computing WLP models with a hard-limiting weighting function. A sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted from its past as well as its future samples, thereby utilizing the available samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
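
    The forward-backward idea can be sketched independently of the quasi-closed-phase weighting: stack forward prediction equations (each sample predicted from its past) together with backward ones (each sample predicted from its future) and solve the combined least-squares problem. The unweighted formulation below is a simplification; the published method applies a temporal weighting to these equations.

    ```python
    import numpy as np

    def fb_lp_coefficients(x, order):
        """Forward-backward linear prediction by least squares: every sample
        contributes one equation predicting it from `order` past samples and
        one from `order` future samples, roughly doubling the usable data."""
        x = np.asarray(x, dtype=float)
        rows, targets = [], []
        for n in range(order, len(x)):           # forward equations
            rows.append(x[n - order:n][::-1]); targets.append(x[n])
        for n in range(len(x) - order):          # backward equations
            rows.append(x[n + 1:n + order + 1]); targets.append(x[n])
        coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
        return coeffs
    ```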

  5. Experiments and modeling of variably permeable carbonate reservoir samples in contact with CO₂-acidified brines

    DOE PAGES

    Smith, Megan M.; Hao, Yue; Mason, Harris E.; ...

    2014-12-31

    Reactive experiments were performed to expose sample cores from the Arbuckle carbonate reservoir to CO₂-acidified brine under reservoir temperature and pressure conditions. The samples consisted of dolomite with varying quantities of calcite and silica/chert. The timescales of monitored pressure decline across each sample in response to CO₂ exposure, as well as the amount and nature of dissolution features, varied widely among these three experiments. For all sample cores, the experimentally measured initial permeability was at least one order of magnitude lower than the values estimated from downhole methods. Nondestructive X-ray computed tomography (XRCT) imaging revealed dissolution features including "wormholes," removal of fracture-filling crystals, and widening of pre-existing pore spaces. In the injection zone sample, multiple fractures may have contributed to the high initial permeability of this core and restricted the distribution of CO₂-induced mineral dissolution. In contrast, the pre-existing porosity of the baffle zone sample was much lower and less connected, leading to a lower initial permeability and contributing to the development of a single dissolution channel. While calcite may make up only a small percentage of the overall sample composition, its location and the effects of its dissolution have an outsized effect on permeability responses to CO₂ exposure. The XRCT data presented here are informative for building the model domain for numerical simulations of these experiments but require calibration by higher resolution means to confidently evaluate different porosity-permeability relationships.

  6. Generation of monodisperse cell-sized microdroplets using a centrifuge-based axisymmetric co-flowing microfluidic device.

    PubMed

    Yamashita, Hitoyoshi; Morita, Masamune; Sugiura, Haruka; Fujiwara, Kei; Onoe, Hiroaki; Takinoue, Masahiro

    2015-04-01

    We report an easy-to-use generation method of biologically compatible monodisperse water-in-oil microdroplets using a glass-capillary-based microfluidic device in a tabletop mini-centrifuge. This device does not require complicated microfabrication; furthermore, only a small sample volume is required in experiments. Therefore, we believe that this method will assist biochemical and cell-biological experiments. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  7. The Development of an Officer Training School Board Score Prediction Method Using a Multi-Board Approach

    DTIC Science & Technology

    1991-03-01

    forms: ". ..application blanks, biographical inventories , interviews, work sample tests, and intelligence, aptitude, and personality tests" (1:11...the grouping method, 3) the task method, and 4) the knowledge , skills, abilities (KSA) method. The point method of measuring training/experience assigns... knowledge , skills, abilities, and other characteristics which relate specifically to each job element (3:131). Interview. According to N. Schmitt

  8. Valley splitting of single-electron Si MOS quantum dots

    DOE PAGES

    Gamble, John King; Harvey-Collard, Patrick; Jacobson, N. Tobias; ...

    2016-12-19

    Here, silicon-based metal-oxide-semiconductor quantum dots are prominent candidates for high-fidelity, manufacturable qubits. Due to silicon's band structure, additional low-energy states persist in these devices, presenting both challenges and opportunities. Although the physics governing these valley states has been the subject of intense study, quantitative agreement between experiment and theory remains elusive. Here, we present data from an experiment probing the valley states of quantum dot devices and develop a theory that is in quantitative agreement with both this and a recently reported experiment. Through sampling millions of realistic cases of interface roughness, our method provides evidence that the valley physics between the two samples is essentially the same.

  9. Understanding Transgender Men's Experiences with and Preferences for Cervical Cancer Screening: A Rapid Assessment Survey.

    PubMed

    Seay, Julia; Ranck, Atticus; Weiss, Roy; Salgado, Christopher; Fein, Lydia; Kobetz, Erin

    2017-08-01

    Transgender men are less likely than cisgender women to receive cervical cancer screening. The purpose of the current study was to understand experiences with and preferences for cervical cancer screening among transgender men. Ninety-one transgender men ages 21-63 completed the survey. The survey evaluated experiences with and preferences for screening, including opinions regarding human papillomavirus (HPV) self-sampling as a primary cervical cancer screening. Half (50.5%) of participants did not have Pap smear screening within the past 3 years. The majority (57.1%) of participants preferred HPV self-sampling over provider-collected Pap smear screening. Participants who reported discrimination were more likely to prefer HPV self-sampling (odds ratio = 3.29, 95% confidence interval 1.38-7.84, P = 0.007). Primary HPV testing via HPV self-sampling may improve cervical cancer screening uptake among transgender men. Future work should pilot this innovative cervical cancer screening method within this population.

  10. Inferring Strength of Tantalum from Hydrodynamic Instability Recovery Experiments

    NASA Astrophysics Data System (ADS)

    Sternberger, Z.; Maddox, B.; Opachich, Y.; Wehrenberg, C.; Kraus, R.; Remington, B.; Randall, G.; Farrell, M.; Ravichandran, G.

    2018-05-01

    Hydrodynamic instability experiments allow access to material properties at extreme conditions, where strain rates exceed 10⁵ s⁻¹ and pressures reach 100 GPa. Current hydrodynamic instability experimental methods require in-flight radiography to image the instability growth at high pressure and high strain rate, limiting the facilities where these experiments can be performed. An alternate approach, recovering the sample after loading, allows measurement of the instability growth with profilometry. Tantalum samples were manufactured with different 2D and 3D initial perturbation patterns and dynamically compressed by a blast wave generated by laser ablation. The samples were recovered from peak pressures between 30 and 120 GPa and strain rates on the order of 10⁷ s⁻¹, providing a record of the growth of the perturbations due to hydrodynamic instability. These records are useful validation points for hydrocode simulations using models of material strength at high strain rate. Recovered tantalum samples were analyzed, providing an estimate of the strength of the material at high pressure and strain rate.

  11. X-ray-induced photo-chemistry and X-ray absorption spectroscopy of biological samples

    PubMed Central

    George, Graham N.; Pickering, Ingrid J.; Pushie, M. Jake; Nienaber, Kurt; Hackett, Mark J.; Ascone, Isabella; Hedman, Britt; Hodgson, Keith O.; Aitken, Jade B.; Levina, Aviva; Glover, Christopher; Lay, Peter A.

    2012-01-01

    As synchrotron light sources and optics deliver greater photon flux on samples, X-ray-induced photo-chemistry is increasingly encountered in X-ray absorption spectroscopy (XAS) experiments. The resulting problems are particularly pronounced for biological XAS experiments. This is because biological samples are very often quite dilute and therefore require signal averaging to achieve adequate signal-to-noise ratios, with correspondingly greater exposures to the X-ray beam. This paper reviews the origins of photo-reduction and photo-oxidation, the impact that they can have on active site structure, and the methods that can be used to provide relief from X-ray-induced photo-chemical artifacts. PMID:23093745

  12. A non-uniformly sampled 4D HCC(CO)NH-TOCSY experiment processed using maximum entropy for rapid protein sidechain assignment

    PubMed Central

    Mobli, Mehdi; Stern, Alan S.; Bermel, Wolfgang; King, Glenn F.; Hoch, Jeffrey C.

    2010-01-01

    One of the stiffest challenges in structural studies of proteins using NMR is the assignment of sidechain resonances. Typically, a panel of lengthy 3D experiments is acquired in order to establish connectivities and resolve ambiguities due to overlap. We demonstrate that these experiments can be replaced by a single 4D experiment that is time-efficient, yields excellent resolution, and captures unique carbon-proton connectivity information. The approach is made practical by the use of non-uniform sampling in the three indirect time dimensions and maximum entropy reconstruction of the corresponding 3D frequency spectrum. This 4D method will facilitate automated resonance assignment procedures and it should be particularly beneficial for increasing throughput in NMR-based structural genomics initiatives. PMID:20299257

  13. Combined micro-droplet and thin-film-assisted pre-concentration of lead traces for on-line monitoring using anodic stripping voltammetry.

    PubMed

    Belostotsky, Inessa; Gridin, Vladimir V; Schechter, Israel; Yarnitzky, Chaim N

    2003-02-01

    An improved analytical method for airborne lead traces is reported. It is based on a Venturi scrubber sampling device for simultaneous thin-film stripping and droplet entrapment of aerosol influxes. At least a threefold enhancement of the lead-trace pre-concentration is achieved. The sampled traces are analyzed by square-wave anodic stripping voltammetry. The method was tested in a series of pilot experiments performed using contaminant-controlled air intakes. Reproducible calibration plots were obtained. The data were validated by traditional analysis using filter sampling. LODs are comparable with those of conventional techniques. The method was successfully applied to on-line and in situ environmental monitoring of lead.

  14. Model of medicines sales forecasting taking into account factors of influence

    NASA Astrophysics Data System (ADS)

    Kravets, A. G.; Al-Gunaid, M. A.; Loshmanov, V. I.; Rasulov, S. S.; Lempert, L. B.

    2018-05-01

    The article describes a method for forecasting sales of medicines when the available data sample is insufficient for building a model from historical data alone. The developed method is applicable mainly to new drugs that are already licensed and released for sale but do not yet have stable sales performance in the market. The purpose of this study is to demonstrate the effectiveness of the suggested method for forecasting drug sales, taking into account the selected influence factors identified during the review of existing solutions and the analysis of the specifics of the area under study. Three experiments were performed on samples of different volumes, which showed improved accuracy of sales forecasting on small samples.

  15. Methodology Series Module 10: Qualitative Health Research

    PubMed Central

    Setia, Maninder Singh

    2017-01-01

    Although quantitative designs are commonly used in clinical research, some studies require qualitative methods. These designs are different from quantitative methods; thus, researchers should be aware of data collection methods and analyses for qualitative research. Qualitative methods are particularly useful to understand patient experiences with the treatment or new methods of management or to explore issues in detail. These methods are useful in social and behavioral research. In qualitative research, often, the main focus is to understand the issue in detail rather than generalizability; thus, the sampling methods commonly used are purposive sampling; quota sampling; and snowball sampling (for hard to reach groups). Data can be collected using in-depth interviews (IDIs) or focus group discussions (FGDs). IDI is a one-to-one interview with the participant. FGD is a method of group interview or discussion, in which more than one participant is interviewed at the same time and is usually led by a facilitator. The commonly used methods for data analysis are: thematic analysis; grounded theory analysis; and framework analysis. Qualitative data collection and analysis require special expertise. Hence, if the reader plans to conduct qualitative research, they should team up with a qualitative researcher. PMID:28794545

  16. Enterocin M and its Beneficial Effects in Horses: a Pilot Experiment.

    PubMed

    Lauková, Andrea; Styková, Eva; Kubašová, Ivana; Gancarčíková, Soňa; Plachá, Iveta; Mudroňová, Dagmar; Kandričáková, Anna; Miltko, Renata; Belzecki, Grzegorz; Valocký, Igor; Strompfová, Viola

    2018-02-07

    Probiotic bacteria and their antimicrobial proteinaceous substances called bacteriocins (enterocins) hold promising prophylactic potential for animal breeding. This study presents the results of the application of Enterocin M in horses; Enterocin M had never been applied to horses before. Ten clinically healthy horses were involved in this pilot experiment. They were placed in the stables of the University of Veterinary Medicine and Pharmacy, Košice, Slovakia, with the approval of the University Ethics Committee. The animals were fed twice a day with hay and oats, or alternatively grazed with access to water ad libitum. The experiment lasted 6 weeks. Sampling was performed at the start of the experiment (day 0-1), at day 21 (after 3 weeks of Enterocin M application), and at day 42 (after 3 weeks of cessation). Feces were sampled directly from the rectum and blood from the vena jugularis; the samples were immediately treated and/or stored for analyses. Each horse served as its own control (compared to its status at the start of the experiment, day 0-1). After initial sampling, the horses were administered 100 μl of Ent M (precipitate, 12,800 AU/ml) in a small feed bolus to ensure it was consumed; Ent M was applied for 3 weeks (21 days). Fecal samples were treated using the standard microbial dilution method; phagocytic activity (PA) was assessed with standard and flow cytometry; biochemistry and metabolic profiles were tested using commercial kits and standard methods. Administration of Ent M led to a numerical reduction of coliforms and of campylobacters (a,b: P < 0.05), and a significant reduction of Clostridium spp. (a,b: P < 0.001; b,c: P < 0.001); an increase in PA values was noted (P < 0.05, P < 0.0001); no negative influence on the hydrolytic enzyme profile or biochemical blood parameters was observed.

  17. In vivo serial sampling of epididymal sperm in mice.

    PubMed

    Del Val, Gonzalo Moreno; Robledano, Patricia Muñoz

    2013-07-01

    This study was undertaken to refine techniques for in vivo collection of sperm in the mouse. The principal objective was to offer a viable, safe, and reliable method for serial collection of in vivo epididymal sperm through direct puncture of the epididymis. Six C57Bl/6J males were subjected to the whole experiment. First, we obtained a sperm sample from the right epididymis and performed a vasectomy on the left side. This sample was used in an in vitro fertilization (IVF) experiment while the males were individually housed for 10 days to recover from the surgery; their fertility was then tested with natural matings until each produced a litter. After that, the animals were subjected to the same process again (sampling, recovery, and natural mating). These experiments yielded an average fertilization rate of 56.7%, and all the males produced a litter in the first month after the natural matings. This study documents the feasibility of the epididymal puncture technique for serial in vivo sampling of sperm in the mouse.

  18. Follow-up of the fate of imazalil from post-harvest lemon surface treatment to a baking experiment.

    PubMed

    Vass, Andrea; Korpics, Evelin; Dernovics, Mihály

    2015-01-01

    Imazalil is one of the most widespread fungicides used for the post-harvest treatment of citrus species. The separate use of peel during food preparation and processing may therefore concentrate most of the imazalil into food products for which specific maximum residue limits hardly exist. In order to monitor the path of imazalil comprehensively, our study covered the efficiency of several washing treatments, the comparison of operative and related sample preparation methods for the lemon samples, the validation of a sample preparation technique for a fatty cake matrix, the preparation of a model cake sample made separately either with imazalil-containing lemon peel or with imazalil spiking, the monitoring of imazalil degradation into α-(2,4-dichlorophenyl)-1H-imidazole-1-ethanol during the baking process, and finally the mass balance of imazalil throughout the washing experiments and the baking process. Quantification of imazalil was carried out with an LC-ESI-MS/MS set-up, while LC-QTOF was used to monitor imazalil degradation. Concerning washing, none of the five washing protocols addressed could remove more than 30% of the imazalil from the surface of the lemon samples. The study revealed a significant difference between the extraction efficiency of imazalil by the EN 15662:2008 and AOAC 2007.1 methods, with the advantage of the former. The use of the model cake sample helped to validate a modified version of the EN 15662:2008 method that included a freeze-out step to efficiently recover imazalil (>90%) from the fatty cake matrix. The degradation of imazalil during baking was significantly higher when the analyte was spiked into the cake matrix than when the cake was prepared with imazalil-containing lemon peel (52% vs. 22%). This observation calls attention to the careful evaluation of pesticide stability data that are based on solution-spiking experiments.

  19. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    NASA Astrophysics Data System (ADS)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
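
    The classical double-loop estimator that the paper accelerates is easy to state for a toy linear-Gaussian experiment, where the exact answer 0.5*log(1 + 1/sigma^2) is available for checking. In the sketch below the inner loop is the brute-force prior average that the Laplace-based importance sampling is designed to replace; all parameter values are arbitrary.

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    def eig_double_loop(n_outer=2000, n_inner=2000, sigma=0.2):
        """Double-loop Monte Carlo expected information gain for the model
        y = theta + eps, theta ~ N(0, 1), eps ~ N(0, sigma**2)."""
        theta = rng.normal(size=n_outer)
        y = theta + sigma * rng.normal(size=n_outer)
        log_norm = -np.log(sigma * np.sqrt(2.0 * np.pi))
        loglik = log_norm - 0.5 * ((y - theta) / sigma) ** 2
        # inner loop: evidence p(y_i) as an average over fresh prior draws;
        # with few draws this mean underflows, motivating importance sampling
        theta_in = rng.normal(size=(n_inner, 1))
        dens = np.exp(log_norm - 0.5 * ((y[None, :] - theta_in) / sigma) ** 2)
        log_evidence = np.log(dens.mean(axis=0))
        return float(np.mean(loglik - log_evidence))

    print(eig_double_loop(), 0.5 * np.log(1.0 + 1.0 / 0.2**2))  # est. vs exact
    ```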

  1. Electrospray ionization and time-of-flight mass spectrometric method for simultaneous determination of spermidine and spermine.

    PubMed

    Samejima, Keijiro; Otani, Masahiro; Murakami, Yasuko; Oka, Takami; Kasai, Misao; Tsumoto, Hiroki; Kohda, Kohfuku

    2007-10-01

    A sensitive method for the determination of polyamines in mammalian cells using electrospray ionization and a time-of-flight mass spectrometer is described. This method is 50-fold more sensitive than the previous method using ionspray ionization and a quadrupole mass spectrometer. The method employs partial purification and derivatization of polyamines, but allows measurement of multiple samples containing picomole amounts of polyamines. The time required for data acquisition of one sample is approximately 2 min. The method was successfully applied to the determination of reduced spermidine and spermine contents in cultured cells under inhibition of aminopropyltransferases. In addition, a new, suitable internal standard was proposed for tracer experiments using (15)N-labeled polyamines.

  2. Effective modern methods of protecting metal road structures from corrosion

    NASA Astrophysics Data System (ADS)

    Panteleeva, Margarita

    2017-10-01

    The article considers ways of protecting road barrier structures from external influences that cause the development of irreversible corrosion processes. The author studied modern methods of corrosion protection and chose the most effective of them: direct treatment of the metal structures themselves. This method was studied in more detail in an experiment. The article describes the experiment with a three-layer polymer coating, which comprises a thermally activated primer, an elastomeric thermoplastic layer with a spatial structure, and a strong outer polyolefin layer. The experiment revealed the ingredient ratios that produce treated-metal samples with the best corrosion resistance, elasticity, and strength. The author constructed a regression equation describing the main properties of the protective polymer coating using the simplex-lattice planning method on composition-property diagrams.

  3. Photochemical methods to assay DNA photocleavage using supercoiled pUC18 DNA and LED or xenon arc lamp excitation.

    PubMed

    Prussin, Aaron J; Zigler, David F; Jain, Avijita; Brown, Jared R; Winkel, Brenda S J; Brewer, Karen J

    2008-04-01

    Methods for the study of DNA photocleavage are illustrated using a mixed-metal supramolecular complex [{(bpy)(2)Ru(dpp)}(2)RhCl(2)]Cl(5). The methods use supercoiled pUC18 plasmid as a DNA probe and either filtered light from a xenon arc lamp source or monochromatic light from a newly designed, high-intensity light-emitting diode (LED) array. Detailed methods for performing the photochemical experiments and analysis of the DNA photoproduct are delineated. Detailed methods are also given for building an LED array to be used for DNA photolysis experiments. The Xe arc source has a broad spectral range and high light flux. The LEDs have a high-intensity, nearly monochromatic output. Arrays of LEDs have the advantage of allowing tunable, accurate output to multiple samples for high-throughput photochemistry experiments at relatively low cost.

  4. New Laboratory Technique to Determine Thermal Conductivity of Complex Regolith Simulants Under High Vacuum

    NASA Astrophysics Data System (ADS)

    Ryan, A. J.; Christensen, P. R.

    2016-12-01

    Laboratory measurements have been necessary to interpret thermal data of planetary surfaces for decades. We present a novel radiometric laboratory method to determine the temperature-dependent thermal conductivity of complex regolith simulants under high vacuum and across a wide range of temperatures, along with our measurement strategy and initial results. This method relies on radiometric temperature measurements instead of contact measurements, eliminating the need to disturb the sample with thermal probes. We intend to determine the conductivity of grains up to 2 cm in diameter and to parameterize the effects of angularity, sorting, layering, composition, and cementation. These results will support the efforts of the OSIRIS-REx team in selecting a site on asteroid Bennu that is safe and meets grain size requirements for sampling. Our system consists of a cryostat vacuum chamber with an internal liquid nitrogen dewar. A granular sample is contained in a cylindrical cup that is 4 cm in diameter and 1 to 6 cm deep. The surface of the sample is exposed to vacuum and is surrounded by a black liquid nitrogen cold shroud. Once the system has equilibrated at 80 K, the base of the sample cup is rapidly heated to 450 K. An infrared camera observes the sample from above to monitor its temperature change over time. We have built a time-dependent finite element model of the experiment in COMSOL Multiphysics. Boundary temperature conditions and all known material properties (including surface emissivities) are included to replicate the experiment as closely as possible. The Optimization module in COMSOL is specifically designed for parameter estimation. Sample thermal conductivity is assumed to be a quadratic or cubic polynomial function of temperature. We thus use gradient-based optimization methods in COMSOL to vary the polynomial coefficients to reduce the least-squares error between the measured and modeled sample surface temperature.

  5. Semi-Supervised Marginal Fisher Analysis for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Huang, H.; Liu, J.; Pan, Y.

    2012-07-01

    The problem of learning with both labeled and unlabeled examples arises frequently in hyperspectral image (HSI) classification. Marginal Fisher analysis, however, is a supervised method and cannot be directly applied to semi-supervised classification. In this paper, we propose a novel method, called semi-supervised marginal Fisher analysis (SSMFA), to process HSI of natural scenes, which uses a combination of semi-supervised learning and manifold learning. In SSMFA, a new difference-based optimization objective function with unlabeled samples has been designed. SSMFA preserves the manifold structure of labeled and unlabeled samples in addition to separating labeled samples of different classes from each other. The semi-supervised method has an analytic form of the globally optimal solution, which can be computed by eigendecomposition. Classification experiments with a challenging HSI task demonstrate that this method outperforms current state-of-the-art HSI-classification methods.

  6. Component-based subspace linear discriminant analysis method for face recognition with one training sample

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.

    2005-05-01

    Many face recognition algorithms/systems have been developed in the last decade, and excellent performance has been reported when there is a sufficient number of representative training samples. In many real-life applications, such as passport identification, only one well-controlled frontal sample image is available for training. Under this situation, the performance of existing algorithms degrades dramatically, or the algorithms may not be applicable at all. We propose a component-based linear discriminant analysis (LDA) method to solve the one-training-sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples with lower dimension than the original image, but also account for face detection localization error during training. We then propose a subspace LDA method, tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experimental results show that our proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to reach the recognition decision. The FERET database is used to evaluate the proposed method, and the results are encouraging.

  7. [Research on NIR equivalent spectral measurement].

    PubMed

    Wang, Zhi-Hong; Liu, Jie; Sun, Yu-Yang; Teng, Fei; Lin, Jun

    2013-04-01

    When the diffuse reflectance spectra of low-reflectivity samples or the transmittance spectra of low-transmissivity samples are measured by a portable near-infrared (NIR) spectrometer, spectrometer noise means that the smaller the reflectance or transmittance of the sample, the lower the SNR of its spectrum. Even after denoising, such spectra cannot meet the requirements of NIR analysis, so an equivalent-spectrum measurement method was investigated. Based on the intensity of the signal reflected or transmitted by the sample under traditional measurement conditions, the light current of the spectrometer was increased so as to raise the signal of the measured sample, while the reflected or transmitted light of the measurement reference was reduced to keep the reference signal within range. The equivalent spectrum of the sample was then calculated so as to be identical with the spectrum measured by the traditional method, thus improving the NIR spectral SNR. The results of theoretical analysis and experiments show that if the light signal of the spectrometer is properly increased according to the reflected or transmitted signal of the low-reflectivity or low-transmissivity sample, the equivalent spectrum is the same as the spectrum measured by the traditional method, with improved SNR.
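
    In essence the correction is a per-channel rescaling: if the sample signal was boosted by a known factor and the reference beam attenuated by a known factor, dividing both factors out recovers the ratio that a traditional measurement would produce, but with the sample signal acquired far above the noise floor. A minimal sketch, with hypothetical gain and attenuation parameters:

    ```python
    import numpy as np

    def equivalent_spectrum(sample_counts, ref_counts, sample_gain, ref_atten):
        """Equivalent reflectance/transmittance from a boosted-sample,
        attenuated-reference measurement.

        sample_gain: factor by which the sample signal was increased (> 1)
        ref_atten:   factor by which the reference beam was reduced (< 1)
        """
        sample = np.asarray(sample_counts, dtype=float) / sample_gain
        reference = np.asarray(ref_counts, dtype=float) / ref_atten
        return sample / reference
    ```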

  8. Analysis of problems and failures in the measurement of soil-gas radon concentration.

    PubMed

    Neznal, Martin; Neznal, Matěj

    2014-07-01

    Long-term experience in the field of soil-gas radon concentration measurements makes it possible to describe and explain the most frequent causes of failures that can appear in practice when various types of measurement methods and soil-gas sampling techniques are used. The concept of minimal sampling depth, which depends on the volume of the soil-gas sample and on the soil properties, is shown in detail. Considering the minimal sampling depth when planning measurements helps avoid the most common mistakes. Ways to identify influencing parameters, avoid dilution of soil-gas samples by atmospheric air, and recognise inappropriate sampling methods are discussed. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. User Experience Evaluation in the Mobile Context

    NASA Astrophysics Data System (ADS)

    Obrist, Marianna; Meschtscherjakov, Alexander; Tscheligi, Manfred

    Multimedia services on mobile devices are becoming increasingly popular. Whereas the mobile phone is the most likely platform for mobile TV, PDAs, portable game consoles, and music players are attractive alternatives. Mobile TV consumption on mobile phones allows new kinds of user experiences, but it also confronts designers and researchers with new challenges. On the one hand, designers have to take these novel experience potentials into account. On the other hand, the right methods have to be applied to collect user feedback and further improve services for the mobile context. In this chapter the importance of user experience research for mobile TV within the mobile context is highlighted. We present how different experience levels can be evaluated, taking different mobile context categories into account. In particular, we discuss the Experience Sampling Method (ESM), which seems to be a fruitful approach for investigating user TV experiences.

  10. Teaching Method and Effect on Learning Piagetian Concepts

    ERIC Educational Resources Information Center

    Swiderski, David J.; Amadio, Dean M.

    2013-01-01

    Instructors of psychology typically use a variety of methods to teach concepts. The present double-blind experiment is intended to determine the effectiveness of popular television clips as exemplars of Piagetian concepts compared to verbal descriptions of the same exemplars among a sample of 86 undergraduate students enrolled in an introductory…

  11. Surface Cleaning of Iron Artefacts by Lasers

    NASA Astrophysics Data System (ADS)

    Koh, Y. S.; Sárady, I.

    In this paper the general method and ethics of the laser cleaning technique for conservation are presented. The results of two experiments are also presented: experiment 1 compares the cleaning of rust by an Nd:YAG laser and by micro-blasting, whilst experiment 2 deals with removing the wax coating from iron samples with a TEA CO2 laser. The first experiment showed that cleaning with a pulsed laser of higher photon energy produced a better surface structure than micro-blasting. The second experiment showed how differences in energy density affect the same surface.

  12. Numerical simulations of regolith sampling processes

    NASA Astrophysics Data System (ADS)

    Schäfer, Christoph M.; Scherrer, Samuel; Buchwald, Robert; Maindl, Thomas I.; Speith, Roland; Kley, Wilhelm

    2017-07-01

    We present recent improvements in the simulation of regolith sampling processes in microgravity using the numerical particle method smoothed particle hydrodynamics (SPH). We use an elastic-plastic soil constitutive model for large-deformation and failure flows to describe the dynamical behaviour of regolith. In the context of projected small-body (asteroid or small-moon) sample return missions, we investigate the efficiency and feasibility of a particular material sampling method: brushes sweep material from the asteroid's surface into a collecting tray. We analyze the influence of different material parameters of regolith, such as cohesion and angle of internal friction, on the sampling rate. Furthermore, we study the sampling process in two environments by varying the surface gravity (Earth's and Phobos') and we apply different rotation rates for the brushes. We find good agreement of our sampling simulations on Earth with experiments and provide estimates of the influence of the material properties on the collecting rate.

  13. Design-based and model-based inference in surveys of freshwater mollusks

    USGS Publications Warehouse

    Dorazio, R.M.

    1999-01-01

    Well-known concepts in statistical inference and sampling theory are used to develop recommendations for planning and analyzing the results of quantitative surveys of freshwater mollusks. Two methods of inference commonly used in survey sampling (design-based and model-based) are described and illustrated using examples relevant in surveys of freshwater mollusks. The particular objectives of a survey and the type of information observed in each unit of sampling can be used to help select the sampling design and the method of inference. For example, the mean density of a sparsely distributed population of mollusks can be estimated with higher precision by using model-based inference or by using design-based inference with adaptive cluster sampling than by using design-based inference with conventional sampling. More experience with quantitative surveys of natural assemblages of freshwater mollusks is needed to determine the actual benefits of different sampling designs and inferential procedures.

  14. Microwave resonances in dielectric samples probed in Corbino geometry: simulation and experiment.

    PubMed

    Felger, M Maximilian; Dressel, Martin; Scheffler, Marc

    2013-11-01

    The Corbino approach, where the sample of interest terminates a coaxial cable, is a well-established method for microwave spectroscopy. If the sample is dielectric and if the probe geometry basically forms a conductive cavity, this combination can sustain well-defined microwave resonances that are detrimental for broadband measurements. Here, we present detailed simulations and measurements to investigate the resonance frequencies as a function of sample and probe size and of sample permittivity. This allows a quantitative optimization to increase the frequency of the lowest-lying resonance.

  15. Win-Stay, Lose-Sample: a simple sequential algorithm for approximating Bayesian inference.

    PubMed

    Bonawitz, Elizabeth; Denison, Stephanie; Gopnik, Alison; Griffiths, Thomas L

    2014-11-01

    People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might use a similar algorithm. These studies use a "mini-microgenetic method", investigating how people sequentially update their beliefs as they encounter new evidence. Experiment 1 investigates a deterministic causal learning scenario and Experiments 2 and 3 examine how people make inferences in a stochastic scenario. The behavior of adults and preschoolers in these experiments is consistent with our Bayesian version of the WSLS principle. This algorithm provides both a practical method for performing Bayesian inference and a new way to understand people's judgments. Copyright © 2014 Elsevier Inc. All rights reserved.
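
    As a concrete illustration of the algorithm's logic, here is a minimal sketch of a Win-Stay, Lose-Sample learner over a discrete hypothesis space (our own toy implementation of the principle described above; the interface names are assumptions, not the authors' code):

    ```python
    import random

    def win_stay_lose_sample(hypotheses, prior, likelihood, data_stream):
        """Win-Stay, Lose-Sample sketch: keep the current hypothesis with
        probability equal to its likelihood of the new datum ("win-stay");
        otherwise resample a hypothesis from the current posterior
        ("lose-sample"). likelihood(d, h) must return a probability in
        [0, 1]; exact Bayes is used here only to define the resampling
        distribution for this toy discrete case."""
        h = random.choices(hypotheses, weights=[prior[x] for x in hypotheses])[0]
        posterior = dict(prior)
        for d in data_stream:
            for x in hypotheses:                      # posterior update
                posterior[x] *= likelihood(d, x)
            z = sum(posterior.values())
            for x in hypotheses:
                posterior[x] /= z
            if random.random() > likelihood(d, h):    # "lose": resample
                h = random.choices(hypotheses,
                                   weights=[posterior[x] for x in hypotheses])[0]
            yield h
    ```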

  16. Preparation of magnetic molecularly imprinted polymers by atom transfer radical polymerization for the rapid extraction of avermectin from fish samples.

    PubMed

    You, Xiaoxiao; Gao, Lei; Qin, Dongli; Chen, Ligang

    2017-01-01

    A novel and highly efficient approach to obtaining magnetic molecularly imprinted polymers for the detection of avermectin in fish samples is described. The magnetic molecularly imprinted polymers were synthesized by surface imprinting polymerization using magnetic multiwalled carbon nanotubes as the support material, atom transfer radical polymerization as the polymerization method, avermectin as the template, acrylamide as the functional monomer, and ethylene glycol dimethacrylate as the crosslinker. The polymers were characterized by transmission electron microscopy, Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy, vibrating sample magnetometry, X-ray diffraction, and thermogravimetric analysis. Their binding characteristics were investigated through isothermal adsorption, adsorption kinetics, and selectivity experiments. Coupled with ultra-high-performance liquid chromatography and tandem mass spectrometry, the extraction conditions for the magnetic molecularly imprinted polymers as adsorbents for avermectin were investigated in detail. The recovery of avermectin was 84.2-97.0%, and the limit of detection was 0.075 μg/kg. Relative standard deviations of intra- and inter-day precision were in the ranges 1.7-2.9% and 3.4-5.6%, respectively. The results demonstrate that the extraction method not only has high selectivity and accuracy, but is also convenient for the determination of avermectin in fish samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Spike-In Normalization of ChIP Data Using DNA-DIG-Antibody Complex.

    PubMed

    Eberle, Andrea B

    2018-01-01

    Chromatin immunoprecipitation (ChIP) is a widely used method to determine the occupancy of specific proteins within the genome, helping to unravel the function and activity of specific genomic regions. In ChIP experiments, normalization of the obtained data by a suitable internal reference is crucial. However, particularly when comparing differently treated samples, such a reference is difficult to identify. Here, a simple method to improve the accuracy and reliability of ChIP experiments by the help of an external reference is described. An artificial molecule, composed of a well-defined digoxigenin (DIG) labeled DNA fragment in complex with an anti-DIG antibody, is synthesized and added to each chromatin sample before immunoprecipitation. During the ChIP procedure, the DNA-DIG-antibody complex undergoes the same treatments as the chromatin and is therefore purified and quantified together with the chromatin of interest. This external reference compensates for variability during the ChIP routine and improves the similarity between replicates, thereby emphasizing the biological differences between samples.
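
    In analysis terms, the external reference acts as a per-sample recovery factor. A hedged sketch of how such a spike-in correction could look (the variable names and the simple ratio model are our illustration, not the published protocol):

    ```python
    def spikein_normalize(target_ip_signal, spike_ip_signal, spike_input_amount):
        """Toy spike-in correction: the fraction of the DNA-DIG-antibody
        reference recovered after immunoprecipitation estimates the technical
        efficiency of that ChIP sample; dividing the target signal by this
        recovery puts replicates on a common scale."""
        recovery = spike_ip_signal / spike_input_amount
        return target_ip_signal / recovery
    ```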

  18. Effect of silver additive on physicochemical properties of hydroxyapatite applied to reconstructive surgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhuk, I. V., E-mail: zhukiv1993@mail.ru; Rasskazova, L. A., E-mail: ly-2207@mail.ru; Korotchenko, N. M., E-mail: korotch@mail.ru

    The effect of adding silver to hydroxyapatite (HA) on its solubility in physiological solution and on its biological activity was investigated. Samples of silver-containing HA (AgHA) were obtained by a liquid-phase method under microwave exposure. The solubility of the AgHA powders (C(Ca2+)·10^3, mol/l) was determined chemically by trilonometric titration of calcium ions in physiological solution at 25 and 37 °C. To investigate the biological activity of the samples, a series of experiments on the formation of a calcium-phosphate layer on the sample surface in SBF solution at 37 °C over 28 days was carried out. Electron micrographs of samples, taken at the end of every 7 days of the experiment, indicate the formation of a calcium-phosphate layer (CPL) on the samples; its growth kinetics are presented as the cumulative concentrations of calcium and magnesium ions versus time.

  19. Volume Measurements of Laser-generated Pits for In Situ Geochronology using KArLE (Potassium-Argon Laser Experiment)

    NASA Technical Reports Server (NTRS)

    French, R. A.; Cohen, B. A.; Miller, J. S.

    2014-01-01

    The Potassium-Argon Laser Experiment (KArLE) is composed of two main instruments: a spectrometer used for laser-induced breakdown spectroscopy (LIBS) and a mass spectrometer (MS). The LIBS laser ablates a sample and creates a plasma cloud, generating a pit in the sample. The LIBS plasma is analyzed for K abundance in weight percent, and the released gas is measured by the MS, which yields the Ar abundance in moles. To relate the K and Ar measurements, the total mass of the ablated sample is needed but can be difficult to measure directly. Instead, density and volume are used to calculate mass, where density is derived from the elemental composition of the rock (from the emission spectrum) and volume is determined from the pit morphology. This study aims to reduce the uncertainty of KArLE by analyzing pit volume relationships in several analog materials and comparing methods of pit volume measurement and their associated uncertainties.
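
    The bookkeeping described above can be written down compactly. A sketch using the standard K-Ar relations (the decay constants and 40K abundance are the usual textbook values; the function is our illustration, not the KArLE pipeline):

    ```python
    import math

    LAMBDA_TOTAL = 5.543e-10     # total 40K decay constant, 1/yr
    LAMBDA_EC = 0.581e-10        # 40K -> 40Ar (electron capture) branch, 1/yr
    K40_ATOM_FRACTION = 1.17e-4  # atomic fraction of 40K in natural K
    K_MOLAR_MASS = 39.0983       # g/mol

    def k_ar_age(k_wt_percent, ar40_mol, density_g_cc, pit_volume_cc):
        """Mass from density x pit volume, K from the LIBS weight percent,
        radiogenic 40Ar from the MS, and age from the standard K-Ar equation
        t = (1/lambda) * ln(1 + (lambda/lambda_ec) * 40Ar/40K)."""
        mass_g = density_g_cc * pit_volume_cc
        k40_mol = mass_g * (k_wt_percent / 100.0) / K_MOLAR_MASS * K40_ATOM_FRACTION
        ratio = ar40_mol / k40_mol
        return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ratio) / LAMBDA_TOTAL
    ```

    This also makes the abstract's point concrete: any relative error in the pit volume propagates directly into the 40Ar/40K ratio and hence into the derived age.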

  20. Linear sampling method applied to non destructive testing of an elastic waveguide: theory, numerics and experiments

    NASA Astrophysics Data System (ADS)

    Baronian, Vahan; Bourgeois, Laurent; Chapuis, Bastien; Recoquillay, Arnaud

    2018-07-01

    This paper presents an application of the linear sampling method to ultrasonic non destructive testing of an elastic waveguide. In particular, the NDT context implies that both the solicitations and the measurements are located on the surface of the waveguide and are given in the time domain. Our strategy consists in using a modal formulation of the linear sampling method at multiple frequencies, such modal formulation being justified theoretically in Bourgeois et al (2011 Inverse Problems 27 055001) for rigid obstacles and in Bourgeois and Lunéville (2013 Inverse Problems 29 025017) for cracks. Our strategy requires the inversion of some emission and reception matrices which deserve some special attention due to potential ill-conditioning. The feasibility of our method is proved with the help of artificial data as well as real data.

  1. First oxygen from lunar basalt

    NASA Technical Reports Server (NTRS)

    Gibson, M. A.; Knudsen, C. W.; Brueneman, D. J.; Kanamori, H.; Ness, R. O.; Sharp, L. L.; Brekke, D. W.; Allen, C. C.; Morris, R. V.; Keller, L. P.

    1993-01-01

    The Carbotek/Shimizu process to produce oxygen from lunar soils has been successfully demonstrated on actual lunar samples in laboratory facilities at Carbotek with Shimizu funding and support. Apollo sample 70035, containing approximately 25 percent ilmenite (FeTiO3), was used in seven separate reactions with hydrogen at varying temperature and pressure: FeTiO3 + H2 → Fe + TiO2 + H2O. The experiments gave extremely encouraging results, as all ilmenite was reduced in every experiment. The lunar ilmenite was found to be about twice as reactive as terrestrial ilmenite samples. Analytical techniques applied to the lunar and terrestrial ilmenite experiments by NASA Johnson Space Center include iron Mössbauer spectroscopy (FeMS), optical microscopy, SEM, TEM, and XRD. The Energy and Environmental Research Center at the University of North Dakota performed three SEM techniques (point counting, morphology determination, and elemental mapping), XRD, and optical microscopy.

  2. Simulating and assessing boson sampling experiments with phase-space representations

    NASA Astrophysics Data System (ADS)

    Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.

    2018-04-01

    The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.

  3. CORNAS: coverage-dependent RNA-Seq analysis of gene expression data without biological replicates.

    PubMed

    Low, Joel Z B; Khang, Tsung Fei; Tammi, Martti T

    2017-12-28

    In current statistical methods for calling differentially expressed genes in RNA-Seq experiments, the assumption is that an adjusted observed gene count represents an unknown true gene count. This adjustment usually consists of a normalization step to account for heterogeneous sample library sizes, and the resulting normalized gene counts are then used as input for parametric or non-parametric differential gene expression tests. A distribution of true gene counts, each with a different probability, can result in the same observed gene count. Importantly, sequencing coverage information is currently not explicitly incorporated into any of the statistical models used for RNA-Seq analysis. We developed a fast Bayesian method which uses the sequencing coverage information, determined from the concentration of an RNA sample, to estimate the posterior distribution of a true gene count. Our method has performance better than or comparable to NOISeq and GFOLD, according to results from simulations and from experiments with real unreplicated data. We incorporated a previously unused sequencing coverage parameter into a procedure for differential gene expression analysis with RNA-Seq data. Our results suggest that our method can be used to overcome analytical bottlenecks in experiments with a limited number of replicates and low sequencing coverage. The method is implemented in CORNAS (Coverage-dependent RNA-Seq), and is available at https://github.com/joel-lzb/CORNAS .
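
    To make the coverage idea concrete, here is a toy Bayesian model in the same spirit (our own simplification for illustration, not the published CORNAS model): each true transcript is assumed to be observed independently with probability equal to the coverage, so the observed count is Binomial in the true count.

    ```python
    from math import comb

    def true_count_posterior(observed, coverage, n_max):
        """Posterior over the true gene count n given an observed count k,
        assuming k ~ Binomial(n, coverage) and a flat prior on n
        (toy model, for intuition only)."""
        weights = {n: comb(n, observed)
                      * coverage ** observed
                      * (1.0 - coverage) ** (n - observed)
                   for n in range(observed, n_max + 1)}
        z = sum(weights.values())
        return {n: w / z for n, w in weights.items()}
    ```

    For example, true_count_posterior(50, 0.25, 400) spreads its mass around a true count of roughly 200, whereas high coverage concentrates the posterior near the observed count.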

  4. Kernel Wiener filter and its application to pattern recognition.

    PubMed

    Yoshino, Hirokazu; Dong, Chen; Washizawa, Yoshikazu; Yamashita, Yukihiko

    2010-11-01

    The Wiener filter (WF) is widely used for inverse problems. From an observed signal, it provides the best estimate of the original signal, among linear operators, with respect to the squared error averaged over the original and observed signals. The kernel WF (KWF), extended directly from the WF, has the drawback that additive noise has to be handled through samples. Since the computational complexity of kernel methods depends on the number of samples, this entails a huge computational cost. By using a first-order approximation of the kernel functions, we realize a KWF that can handle such noise not through samples but as a random variable. We also propose an error estimation method for kernel filters using these approximations. To show the advantages of the proposed methods, we conducted experiments on image denoising and error estimation. We also apply the KWF to classification, since the KWF can provide an approximation of the maximum a posteriori classifier, which gives the best recognition accuracy. The noise term in the criterion can be used for classification in the presence of noise, or as a new regularization that suppresses changes in the input space, whereas ordinary regularization for kernel methods suppresses changes in the feature space. To demonstrate these advantages, we conducted binary and multiclass classification experiments, as well as classification in the presence of noise.

  5. Batch experiments versus soil pore water extraction--what makes the difference in isoproturon (bio-)availability?

    PubMed

    Folberth, Christian; Suhadolc, Metka; Scherb, Hagen; Munch, Jean Charles; Schroll, Reiner

    2009-10-01

    Two approaches to determining pesticide (bio-)availability in soils, (i) batch experiments with "extraction with an excess of water" (EEW) and (ii) the recently introduced "soil pore water (PW) extraction" of pesticide-incubated soil samples, were compared with regard to the sorption behavior of the model compound isoproturon in soils. A significant correlation between TOC and the adsorbed pesticide amount was found when using the EEW approach. In contrast, there was no correlation between TOC and adsorbed isoproturon when using the in situ PW extraction method. Furthermore, when comparing the distribution coefficients (K(d)) for both methods, sorption was higher at all concentrations in the EEW method. Overall, sorption in incubated soil samples at an identical water tension (-15 kPa) and soil density (1.3 g cm(-3)) appears to be controlled by a complex combination of sorption-driving soil parameters. Isoproturon bioavailability was found to be governed in different soils by binding strength and availability of sorption sites as well as by water content, whereas the dominance of any one of these factors seems to depend on the individual composition and characteristics of the respective soil sample. Using multiple linear regression analysis, we furthermore obtained indications that the soil pore structure is affected by the EEW method through disaggregation, resulting in a higher availability of pesticide sorption sites than in undisturbed soil samples. It can therefore be concluded that isoproturon sorption is overestimated when using the EEW method, which should be taken into account when using data from this approach or similar batch techniques for risk assessment analysis.

  6. Constructing a Reward-Related Quality of Life Statistic in Daily Life—a Proof of Concept Study Using Positive Affect

    PubMed Central

    Verhagen, Simone J. W.; Simons, Claudia J. P.; van Zelst, Catherine; Delespaul, Philippe A. E. G.

    2017-01-01

    Background: Mental healthcare needs person-tailored interventions. The Experience Sampling Method (ESM) can provide daily life monitoring of personal experiences. This study aims to operationalize and test a measure of momentary reward-related Quality of Life (rQoL). Intuitively, quality of life improves by spending more time on rewarding experiences. ESM clinical interventions can use this information to coach patients to find a realistic, optimal balance of positive experiences (maximizing reward) in daily life. rQoL combines the frequency of engaging in a relevant context (a 'behavior setting') with concurrent (positive) affect. High rQoL occurs when the most frequent behavior settings are combined with positive affect, or infrequent behavior settings co-occur with low positive affect. Methods: Resampling procedures (Monte Carlo experiments) were applied to assess the reliability of rQoL using various behavior-setting definitions under different sampling circumstances, for real or virtual subjects with low, average, and high contextual variability. Furthermore, resampling was used to assess whether rQoL is a concept distinct from positive affect. Virtual ESM beep datasets were extracted from 1,058 valid ESM observations for virtual and real subjects. Results: Behavior settings defined by Who-What contextual information were most informative. Simulations of at least 100 ESM observations are needed for reliable assessment. Virtual ESM beep datasets of a real subject can be defined by Who-What-Where behavior-setting combinations. Large sample sizes are necessary for reliable rQoL assessment, except for subjects with low contextual variability. rQoL is distinct from positive affect. Conclusion: rQoL is a feasible concept. Monte Carlo experiments should be used to assess the reliable implementation of an ESM statistic. Future research in ESM should assess the behavior of summary statistics under different sampling situations. This exploration is especially relevant for clinical implementation, where often only small datasets are available. PMID:29163294

  7. Constructing a Reward-Related Quality of Life Statistic in Daily Life-a Proof of Concept Study Using Positive Affect.

    PubMed

    Verhagen, Simone J W; Simons, Claudia J P; van Zelst, Catherine; Delespaul, Philippe A E G

    2017-01-01

    Background: Mental healthcare needs person-tailored interventions. The Experience Sampling Method (ESM) can provide daily life monitoring of personal experiences. This study aims to operationalize and test a measure of momentary reward-related Quality of Life (rQoL). Intuitively, quality of life improves by spending more time on rewarding experiences. ESM clinical interventions can use this information to coach patients to find a realistic, optimal balance of positive experiences (maximizing reward) in daily life. rQoL combines the frequency of engaging in a relevant context (a 'behavior setting') with concurrent (positive) affect. High rQoL occurs when the most frequent behavior settings are combined with positive affect, or infrequent behavior settings co-occur with low positive affect. Methods: Resampling procedures (Monte Carlo experiments) were applied to assess the reliability of rQoL using various behavior-setting definitions under different sampling circumstances, for real or virtual subjects with low, average, and high contextual variability. Furthermore, resampling was used to assess whether rQoL is a concept distinct from positive affect. Virtual ESM beep datasets were extracted from 1,058 valid ESM observations for virtual and real subjects. Results: Behavior settings defined by Who-What contextual information were most informative. Simulations of at least 100 ESM observations are needed for reliable assessment. Virtual ESM beep datasets of a real subject can be defined by Who-What-Where behavior-setting combinations. Large sample sizes are necessary for reliable rQoL assessment, except for subjects with low contextual variability. rQoL is distinct from positive affect. Conclusion: rQoL is a feasible concept. Monte Carlo experiments should be used to assess the reliable implementation of an ESM statistic. Future research in ESM should assess the behavior of summary statistics under different sampling situations. This exploration is especially relevant for clinical implementation, where often only small datasets are available.
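
    The statistic itself is easy to state in code. A toy version of rQoL and the resampling check, based on our reading of the definition above (not the authors' implementation):

    ```python
    import random
    from collections import Counter

    def rqol(beeps):
        """Frequency-weighted positive affect: weight each behavior setting's
        mean positive affect by that setting's relative frequency across beeps.
        beeps: list of (behavior_setting, positive_affect) tuples."""
        freq = Counter(setting for setting, _ in beeps)
        mean_pa = {s: sum(pa for x, pa in beeps if x == s) / n
                   for s, n in freq.items()}
        return sum(n / len(beeps) * mean_pa[s] for s, n in freq.items())

    def rqol_reliability(beeps, n_beeps, n_sim=1000, seed=0):
        """Monte Carlo check: draw virtual ESM datasets of n_beeps observations
        (resampling with replacement) and return the spread of rQoL values."""
        rng = random.Random(seed)
        return [rqol([rng.choice(beeps) for _ in range(n_beeps)])
                for _ in range(n_sim)]
    ```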

  8. Further studies on the problems of geomagnetic field intensity determination from archaeological baked clay materials

    NASA Astrophysics Data System (ADS)

    Kostadinova-Avramova, M.; Kovacheva, M.

    2015-10-01

    Archaeological baked clay remains provide valuable information about the geomagnetic field in the historical past, but determination of the geomagnetic field characteristics, especially intensity, is often a difficult task. This study was undertaken to elucidate the reasons for unsuccessful intensity determination experiments on material from two different Bulgarian archaeological sites (Nessebar, Early Byzantine period, and Malenovo, Early Iron Age). To this end, artificial clay samples were formed in the laboratory and investigated. The clays used to prepare the artificial samples differ in their initial state: the Nessebar clay had been baked in antiquity, whereas the Malenovo clay was raw, taken from the clay deposit near the site. The artificial samples were repeatedly heated eight times to 700 °C in a known magnetic field. X-ray diffraction analyses and rock-magnetic experiments were performed to obtain information about the mineralogical content and magnetic properties of the initial and laboratory-heated clays. Two different protocols were applied for the intensity determination: the Coe version of the Thellier and Thellier method and the multispecimen parallel differential pTRM protocol. Various combinations of laboratory field strengths, and of mutual orientations between the laboratory field and the carried thermoremanence, were used in the Coe experiment. The results indicate that the failure of this experiment is probably related to unfavourable grain sizes of the prevailing magnetic carriers combined with the chosen experimental conditions. The multispecimen parallel differential pTRM protocol in its original form gives excellent results for the artificial samples, but failed for the real samples (samples from previously studied kilns at the Nessebar and Malenovo sites). Evidently, the strong dependence of this method on the homogeneity of the subsamples hinders its implementation in its original form for archaeomaterials, which are often heterogeneous owing to variable heating conditions in different parts of the archaeological structures. The study draws attention to the importance of multiple heating for the stabilization of the grain-size distribution in baked clay materials and to the need to elucidate this question further.

  9. Family Carers' Experiences Using Support Services in Europe: Empirical Evidence from the EUROFAMCARE Study

    ERIC Educational Resources Information Center

    Lamura, Giovanni; Mnich, Eva; Nolan, Mike; Wojszel, Beata; Krevers, Barbro; Mestheneos, Liz; Dohner, Hanneli

    2008-01-01

    Purpose: This article explores the experiences of family carers of older people in using support services in six European countries: Germany, Greece, Italy, Poland, Sweden, and the UK. Design and Methods: Following a common protocol, data were collected from national samples of approximately 1,000 family carers per country and clustered into…

  10. Sequential-Injection Analysis: Principles, Instrument Construction, and Demonstration by a Simple Experiment

    ERIC Educational Resources Information Center

    Economou, A.; Tzanavaras, P. D.; Themelis, D. G.

    2005-01-01

    Sequential-injection analysis (SIA) is an approach to sample handling that enables the automation of manual wet-chemistry procedures in a rapid, precise and efficient manner. Experiments using SIA fit well into the course of Instrumental Chemical Analysis, and especially into the section on automatic methods of analysis provided by chemistry…

  11. A Phenomenological Study of the Lived Experiences of Social Studies Teachers: Constructing Ideas about Democratic Citizenship and Teaching

    ERIC Educational Resources Information Center

    Thapa, Om Kumar

    2016-01-01

    The purpose of the study was to explore how social studies teachers conceptualized democracy, developed ideas about democratic citizenship, and implemented their perspectives and experiences in teaching. The study used a phenomenological approach within a qualitative research design. Six participants were selected using a convenience sampling method with…

  12. Supplemental US/Canada wheat and barley exploratory experiment implementation plan: Evaluation of a procedure 1A technology

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A plan is presented for a supplemental experiment to evaluate a sample allocation technique for selecting picture elements from remotely sensed multispectral imagery for labeling in connection with a new crop proportion estimation technique. The method of evaluating an improved allocation and proportion estimation technique is also provided.

  13. Adverse Childhood Experiences of Referred Children Exposed to Intimate Partner Violence: Consequences for their Wellbeing

    ERIC Educational Resources Information Center

    Lamers-Winkelman, Francien; Willemen, Agnes M.; Visser, Margreet

    2012-01-01

    Objective: This study investigated the relationships among Adverse Childhood Experiences (ACEs) in a high risk clinical sample of Dutch children whose mothers were abused by an intimate partner, and the severity of behavioral and emotional problems and trauma symptoms. Methods: The study population comprised 208 children (M = 7.81 years, SD =…

  14. The Long-Term Effects of War Experiences on Children's Depression in the Republic of Croatia

    ERIC Educational Resources Information Center

    Brajsa-Zganec, A.

    2005-01-01

    Objective: The aim of the study was to investigate whether different levels of depressive symptoms in early adolescent boys and girls could be predicted on the basis of war experiences, perceived available social support (instrumental support, support to self-esteem, belonging and acceptance) and extraversion. Methods: The sample consisted of…

  15. Striking the Right Balance: Police Experience, Perceptions and Use of Independent Support Persons during Interviews Involving People with Intellectual Disability

    ERIC Educational Resources Information Center

    Henshaw, Marie; Spivak, Benjamin; Thomas, Stuart D. M.

    2018-01-01

    Background: Several jurisdictions mandate the presence of an independent support person during police interviews with vulnerable people. The current study investigated police officers' experiences and perceptions of these volunteers during interviews with people with intellectual disability(ies) (ID). Methods: The sample comprised 229 police…

  16. The College Student as Mother: A Phenomenological Examination of Community College Student Experiences

    ERIC Educational Resources Information Center

    Erk, Tiffany

    2013-01-01

    The purpose of this study is to identify how low-SES women who are providing primary childcare for children ages 0-10 experience higher education. In-depth phenomenological interviewing combined with document analysis were the methods utilized. This exploration used a purposive/snowball sample of low-SES mothers who were making satisfactory…

  17. Student Teachers' Experiences of Teaching Practice at Open and Distance Learning Institution in South Africa

    ERIC Educational Resources Information Center

    Mokoena, Sello

    2017-01-01

    This small-scale study focused on the experiences of student teachers with teaching practice at an open and distance learning (ODL) institution in South Africa. The sample consisted of 65 fourth-year students enrolled for a Bachelor of Education, specializing in secondary school teaching. The mixed-method research design consisting of…

  18. Phenomenological Analysis of Teachers' Organizational Deviance Experiences in a Rural Primary School in Turkey

    ERIC Educational Resources Information Center

    Anasiz, Burcu Türkkas; Püsküllüoglu, Elif Iliman

    2018-01-01

    The purpose of this study was to analyze the organizational deviance experiences of teachers. The study used a phenomenological design, one of the qualitative research methods, together with a convenience sampling technique. The research was conducted in a rural primary school in Mugla province in Turkey. Nine teachers participated in the study,…

  19. Technology Experiences of Student Interns in a One to One Mobile Program

    ERIC Educational Resources Information Center

    Cullen, Theresa A.; Karademir, Tugra

    2018-01-01

    This article describes how a group of student intern teachers (n = 51) in a one to one teacher education iPad program were asked to reflect using Experience Sampling Method (ESM) on their use of technology in the classroom during internship. Interns also completed summative reflections and class discussions. Data collected both in online and…

  20. Atmospheric Transformation of Volatile Organic Compounds

    DTIC Science & Technology

    2008-03-01

    Study Analysis: Reactant mixtures and standards from product identification experiments were sampled by exposing a 100% polydimethylsiloxane solid... later using the DNPH derivatization method described above and confirmed against a commercial standard. HPLC analysis of the DNPH cartridges also... reaction mixture for a combined total photolysis time of approximately 50 seconds. 2.3. Kinetic Study Analysis: Samples from kinetic studies were

  1. Resilience to Adult Psychopathology Following Childhood Maltreatment: Evidence from a Community Sample

    ERIC Educational Resources Information Center

    Collishaw, Stephan; Pickles, Andrew; Messer, Julie; Rutter, Michael; Shearer, Christina; Maughan, Barbara

    2007-01-01

    Objective: Child abuse is an important risk for adult psychiatric morbidity. However, not all maltreated children experience mental health problems as adults. The aims of the present study were to address the extent of resilience to adult psychopathology in a representative community sample, and to explore predictors of a good prognosis. Methods:…

  2. A novel bi-level meta-analysis approach: applied to biological pathway analysis.

    PubMed

    Nguyen, Tin; Tagett, Rebecca; Donato, Michele; Mitrea, Cristina; Draghici, Sorin

    2016-02-01

    The accumulation of high-throughput data in public repositories creates a pressing need for integrative analysis of multiple datasets from independent experiments. However, study heterogeneity, study bias, outliers and the lack of power of available methods present a real challenge in integrating genomic data. One practical drawback of many P-value-based meta-analysis methods, including Fisher's, Stouffer's, minP and maxP, is that they are sensitive to outliers. Another drawback is that, because they perform just one statistical test for each individual experiment, they may not fully exploit the potentially large number of samples within each study. We propose a novel bi-level meta-analysis approach that employs the additive method and the Central Limit Theorem within each individual experiment and also across multiple experiments. We prove that the bi-level framework is robust against bias, less sensitive to outliers than other methods, and more sensitive to small changes in signal. For comparative analysis, we demonstrate that the intra-experiment analysis has more power than the equivalent statistical test performed on a single large experiment. For pathway analysis, we compare the proposed framework with classical meta-analysis approaches (Fisher's, Stouffer's and the additive method) as well as with a dedicated pathway meta-analysis package (MetaPath), using 1,252 samples from 21 datasets related to three human diseases: acute myeloid leukemia (9 datasets), type II diabetes (5 datasets) and Alzheimer's disease (7 datasets). Our framework outperforms its competitors in correctly identifying pathways relevant to the phenotypes. The framework is sufficiently general to be applied to any type of statistical meta-analysis. The R scripts are available on demand from the authors. sorin@wayne.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
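
    For readers unfamiliar with the additive method, the within-experiment building block is a one-line Central Limit Theorem argument: under the null, p-values are i.i.d. Uniform(0,1), so their sum has mean n/2 and variance n/12. A sketch of this classical combination (our illustration, not the authors' R scripts):

    ```python
    import math

    def additive_combined_p(p_values):
        """Combine p-values by standardizing their sum: small individual
        p-values drag the sum below its null mean of n/2, yielding a small
        one-sided combined p-value via the normal approximation."""
        n = len(p_values)
        z = (sum(p_values) - n / 2.0) / math.sqrt(n / 12.0)
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # Phi(z)
    ```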

  3. Unbiased, scalable sampling of protein loop conformations from probabilistic priors.

    PubMed

    Zhang, Yajia; Hauser, Kris

    2013-01-01

    Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.

  4. Unbiased, scalable sampling of protein loop conformations from probabilistic priors

    PubMed Central

    2013-01-01

    Background Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Results Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Conclusion Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion. PMID:24565175

  5. Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhagwat, Nikhil V.

    2005-01-01

    In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions method provided near-optimal solutions (optimal in some cases). Moreover, the execution time was quite short compared to the Branch and Bound algorithm. However, for larger data sets the algorithm took significantly longer to execute. One reason could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself, which requires the assignment of tasks to various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks; thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time of the samples could be used as another performance index. The ULINO method is known to use a combination of bounds to arrive at good solutions; this approach of combining different performance indices could be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be significant for industries in which the cost associated with creating a new station is not uniform; for such industries, the results obtained using the present approach will not be of much value. Labor costs, task-incompletion costs, or a combination of these can be effectively used as alternative objective functions.

  6. Automated space processing payloads study. Volume 2, book 2: Technical report, appendices A through E. [instrument packages and space shuttles

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Experiment hardware and operational requirements for space shuttle experiments are discussed along with payload and system concepts. Appendixes are included in which experiment data sheets, chamber environmental control and monitoring, method for collection and storage of electrophoretically-separated samples, preliminary thermal evaluation of electromagnetic levitation facilities L1, L2, and L3, and applicable industrial automation equipment are discussed.

  7. Non-uniform sampling: post-Fourier era of NMR data collection and processing.

    PubMed

    Kazimierczuk, Krzysztof; Orekhov, Vladislav

    2015-11-01

    The invention of multidimensional techniques in the 1970s revolutionized NMR, making it the general tool of structural analysis of molecules and materials. In the most straightforward approach, the signal sampling in the indirect dimensions of a multidimensional experiment is performed in the same manner as in the direct dimension, i.e. with a grid of equally spaced points. This results in lengthy experiments with a resolution often far from optimum. To circumvent this problem, numerous sparse-sampling techniques have been developed in the last three decades, including two traditionally distinct approaches: the radial sampling and non-uniform sampling. This mini review discusses the sparse signal sampling and reconstruction techniques from the point of view of an underdetermined linear algebra problem that arises when a full, equally spaced set of sampled points is replaced with sparse sampling. Additional assumptions that are introduced to solve the problem, as well as the shape of the undersampled Fourier transform operator (visualized as so-called point spread function), are shown to be the main differences between various sparse-sampling methods. Copyright © 2015 John Wiley & Sons, Ltd.
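
    The underdetermined-problem view can be made concrete in a few lines. A minimal sketch of one common additional assumption, spectral sparsity, enforced by iterative soft thresholding (a generic compressed-sensing scheme for illustration, not any particular NMR package):

    ```python
    import numpy as np

    def nus_reconstruct(samples, sample_idx, n, lam=0.05, n_iter=200):
        """Recover a sparse spectrum x whose inverse FT matches the measured
        time-domain points, via ISTA. samples: complex FID values measured at
        grid positions sample_idx out of a full grid of length n."""
        x = np.zeros(n, dtype=complex)                 # spectrum estimate
        for _ in range(n_iter):
            fid = np.fft.ifft(x)                       # current time-domain signal
            residual = np.zeros(n, dtype=complex)
            residual[sample_idx] = samples - fid[sample_idx]
            x = x + np.fft.fft(residual)               # gradient step on the data fit
            mag = np.abs(x)
            safe = np.maximum(mag, 1e-12)
            x = np.where(mag > lam, x * (1.0 - lam / safe), 0)  # soft threshold
        return x
    ```

    The soft-thresholding step is exactly where the sparsity assumption enters; substituting a different prior at that point gives a different member of the sparse-sampling family discussed above.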

  8. Laboratory Testing of Volcanic Gas Sampling Techniques

    NASA Astrophysics Data System (ADS)

    Kress, V. C.; Green, R.; Ortiz, M.; Delmelle, P.; Fischer, T.

    2003-12-01

    A series of laboratory experiments was performed to calibrate several commonly used methods for field measurement of volcanic gas composition. H2, CO2, SO2 and CHCl2F gases were mixed through carefully calibrated rotameters to form mixtures representative of the types of volcanic compositions encountered at Kilauea and Showa-Shinzan. Gas mixtures were passed through a horizontal furnace at 700 °C to break down CHCl2F and form an equilibrium high-temperature mixture. With the exception of Giggenbach bottle samples, all gas sampling was performed adjacent to the furnace exit in order to roughly simulate the air-contaminated samples encountered in nature. Giggenbach bottle samples were taken from just beyond the hot spot, 10 cm down the furnace tube, to minimize atmospheric contamination. Alkali-trap measurements were performed by passing gases over, or bubbling gases through, 6 N KOH, NaOH or LiOH solution for 10 minutes. Results were highly variable, with errors in measured S/Cl varying from +1600% to -19%. In general, reduced Kilauea compositions showed smaller errors than the more oxidized Showa-Shinzan compositions. Results were not resolvably different in experiments where gas was bubbled through the alkaline solution. In a second set of experiments, 25 mm circles of Whatman 42 filter paper were impregnated with NaHCO3 or KHCO3 alkaline solutions stabilized with glycerol. Some filters also included alizarin (pH 5.6-7.2) and neutral red (pH 6.8-8.0) indicators to provide a visual monitor of gas absorption. Filters were mounted in individual holders and used in stacks of three. Durations were adjusted to maximize reaction in the first filter in the stack and minimize reaction in the final filter. Errors in filter-pack measurements were smaller and more systematic than those of the alkali-trap measurements: S/Cl was overestimated in oxidized gas mixtures and underestimated in reduced mixtures. Alkali-trap methods allow extended unattended monitoring of volcanic gases, but our results suggest that they are poor recorders of gas composition. Filter-pack methods are somewhat better, but are more difficult to interpret than previously recognized. We suggest several refinements to the filter-pack technique that can improve accuracy. Giggenbach bottles remain the best method for volcanic gas sampling, despite the inherent difficulty and danger of obtaining samples in active volcanic environments. The relative merits of different alkali solutions and indicators are discussed.

  9. Concentration and Detection of Cryptosporidium Oocysts in Surface Water Samples by Method 1622 Using Ultrafiltration and Capsule Filtration

    USGS Publications Warehouse

    Simmons, O. D.; Sobsey, M.D.; Heaney, C.D.; Schaefer, F. W.; Francy, D.S.

    2001-01-01

    The protozoan parasite Cryptosporidium parvum is known to occur widely in both source and drinking water and has caused waterborne outbreaks of gastroenteritis. To improve monitoring, the U.S. Environmental Protection Agency developed method 1622 for isolation and detection of Cryptosporidium oocysts in water. Method 1622 is performance-based and involves filtration, concentration, immunomagnetic separation, fluorescent-antibody staining and 4′,6-diamidino-2-phenylindole (DAPI) counterstaining, and microscopic evaluation. The capsule filter system currently recommended for method 1622 was compared to a hollow-fiber ultrafilter system for primary concentration of C. parvum oocysts in seeded reagent water and untreated surface waters. Samples were otherwise processed according to method 1622. Rates of C. parvum oocyst recovery from seeded 10-liter volumes of reagent water in precision and recovery experiments with filter pairs were 42% (standard deviation [SD], 24%) and 46% (SD, 18%) for hollow-fiber ultrafilters and capsule filters, respectively. Mean oocyst recovery rates in experiments testing both filters on seeded surface water samples were 42% (SD, 27%) and 15% (SD, 12%) for hollow-fiber ultrafilters and capsule filters, respectively. Although C. parvum oocysts were recovered from surface waters by using the approved filter of method 1622, the recovery rates were significantly lower and more variable than those from reagent grade water. In contrast, the disposable hollow-fiber ultrafilter system was compatible with subsequent method 1622 processing steps, and it recovered C. parvum oocysts from seeded surface waters with significantly greater efficiency and reliability than the filter suggested for use in the version of method 1622 tested.

  10. Rapid-viability PCR method for detection of live, virulent Bacillus anthracis in environmental samples.

    PubMed

    Létant, Sonia E; Murphy, Gloria A; Alfaro, Teneile M; Avila, Julie R; Kane, Staci R; Raber, Ellen; Bunt, Thomas M; Shah, Sanjiv R

    2011-09-01

    In the event of a biothreat agent release, hundreds of samples would need to be rapidly processed to characterize the extent of contamination and determine the efficacy of remediation activities. Current biological agent identification and viability determination methods are both labor- and time-intensive such that turnaround time for confirmed results is typically several days. In order to alleviate this issue, automated, high-throughput sample processing methods were developed in which real-time PCR analysis is conducted on samples before and after incubation. The method, referred to as rapid-viability (RV)-PCR, uses the change in cycle threshold after incubation to detect the presence of live organisms. In this article, we report a novel RV-PCR method for detection of live, virulent Bacillus anthracis, in which the incubation time was reduced from 14 h to 9 h, bringing the total turnaround time for results below 15 h. The method incorporates a magnetic bead-based DNA extraction and purification step prior to PCR analysis, as well as specific real-time PCR assays for the B. anthracis chromosome and pXO1 and pXO2 plasmids. A single laboratory verification of the optimized method applied to the detection of virulent B. anthracis in environmental samples was conducted and showed a detection level of 10 to 99 CFU/sample with both manual and automated RV-PCR methods in the presence of various challenges. Experiments exploring the relationship between the incubation time and the limit of detection suggest that the method could be further shortened by an additional 2 to 3 h for relatively clean samples.
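
    The viability call itself reduces to a simple rule on the change in cycle threshold before and after incubation. A toy sketch of the decision logic (the threshold value is illustrative, not taken from the paper):

    ```python
    def rv_pcr_viable(ct_before, ct_after, min_ct_drop=6.0):
        """RV-PCR-style call: live organisms multiply during incubation, so
        the post-incubation cycle threshold drops; a sufficiently large Ct
        decrease is called viable. None encodes 'no amplification detected'."""
        if ct_after is None:
            return False            # no signal even after growth
        if ct_before is None:
            return True             # signal appeared only after incubation
        return (ct_before - ct_after) >= min_ct_drop
    ```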

  11. In vitro test of external Qigong

    PubMed Central

    Yount, Garret; Solfvin, Jerry; Moore, Dan; Schlitz, Marilyn; Reading, Melissa; Aldape, Ken; Qian, Yifang

    2004-01-01

    Background Practitioners of the alternative medical practice 'external Qigong' generally claim the ability to emit or direct "healing energy" to treat patients. We investigated the ability of experienced Qigong practitioners to enhance the healthy growth of cultured human cells in a series of studies, each following a rigorously designed protocol with randomization, blinding and controls for variability. Methods Qigong practitioners directed healing intentionality toward normal brain cell cultures in a basic science laboratory. Qigong treatments were delivered for 20 minutes from a minimum distance of 10 centimeters. Cell proliferation was measured by a standard colony-forming efficiency (CFE) assay and a CFE ratio (CFE for treated samples/CFE for sham samples) was the dependent measure for each experiment. Results During a pilot study (8 experiments), a trend of increased cell proliferation in Qigong-treated samples (CFE Qigong/sham ratios > 1.0) was observed (P = 0.162). In a formal study (28 experiments), a similar trend was observed, with Qigong-treated samples showing on average more colony formation than sham samples (P = 0.036). In a replication study (60 experiments), no significant difference between Qigong-treated samples and sham samples was observed (P = 0.465). Conclusion We observed an apparent increase in the proliferation of cultured cells following external Qigong treatment by practitioners under strictly controlled conditions, but we did not observe this effect in a replication study. These results suggest the need for more controlled and thorough investigation of external Qigong before scientific validation is claimed. PMID:15102336

  12. Comparative study on sample stacking by moving reaction boundary formed with weak acid and weak or strong base in capillary electrophoresis: II. Experiments.

    PubMed

    Zhang, Wei; Fan, Liuyin; Shao, Jing; Li, Si; Li, Shan; Cao, Chengxi

    2011-04-15

    To demonstrate the theoretical method for the stacking of zwitterions with a moving reaction boundary (MRB) presented in the accompanying paper, the relevant experiments were performed. The experimental results quantitatively show that (1) the MRB velocity, and its comparison with the zwitterionic velocity, is of key importance to the design of MRB stacking; (2) a long front alkaline plug without sample should be injected before the sample injection for complete stacking of the zwitterion if the sample buffer is prepared with a strong base, whereas no such plug is needed if a weak base of proper concentration and pH value is used as the sample buffer; (3) the presence of salt in the MRB system has a dramatic effect on MRB stacking if the sample solution is a strong base, but has no effect if a weak alkali is used; and (4) all of the experiments of this paper, including the previous work, quantitatively confirm the theory and predictions given in the accompanying paper. In addition, so-called derivative MRB-induced re-stacking and transient FASI-induced re-stacking were observed during the experiments, and the relevant mechanisms are briefly demonstrated with the results. The theory and calculation procedures developed in the accompanying paper can thus be used to predict the MRB stacking of zwitterions in CE. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. [Phenomenology and phenomenological method: their usefulness for nursing knowledge and practice].

    PubMed

    Vellone, E; Sinapi, N; Rastelli, D

    2000-01-01

    Phenomenology is a movement of thought whose main aim is to study human phenomena as they are experienced and lived. Key concepts of phenomenology are the study of the lived experience and subjectivity of human beings, the intentionality of consciousness, perception, and interpretation. The phenomenological research method has nine steps: definition of the research topic; preliminary literature searching; sample selection; gathering of lived experiences; analysis of lived experiences; written synthesis of lived experiences; validation of the written synthesis; in-depth literature searching; and writing of the scientific document. Phenomenology and the phenomenological method are useful to nursing both for developing knowledge and for guiding practice. Qualitative-phenomenological and quantitative-positivistic research are complementary: the first guides clinicians towards a person-centered approach, while the second allows the manipulation of phenomena that can damage health, worsen illness or decrease the quality of life of people who rely on nursing care.

  14. A non-uniformly under-sampled blade tip-timing signal reconstruction method for blade vibration monitoring.

    PubMed

    Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun

    2015-01-22

    High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damage to the blade. Blade tip-timing (BTT) methods have become a promising way to monitor blade vibrations. However, synchronous vibrations cannot be adequately monitored by uniform BTT sampling, so non-equally mounted probes are used, which makes the sampled signal non-uniform. Since under-sampling is an intrinsic drawback of BTT methods, analyzing non-uniformly under-sampled BTT signals is a major challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. First, a mathematical model of the non-uniform BTT sampling process is built; it can be treated as the sum of several uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Second, simultaneous equations for all interpolating functions in each sub-band are assembled, and the corresponding solutions are derived to remove the unwanted replicas of the original signal caused by the sampling, which would otherwise overlap the original signal. Finally, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate that the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signals can be reconstructed from non-uniform BTT data acquired from only two probes.
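
    A rough numerical illustration of the underlying idea (periodically non-uniform samples from a few offset probes can pin down a vibration frequency that uniform once-per-revolution sampling would alias) is sketched below. It uses a generic least-squares fit over a Fourier dictionary rather than the paper's interpolating-function solution, and all parameter values (rotor speed, probe offsets, vibration frequency, band of interest) are invented for illustration.

```python
import numpy as np

# Two probes fire once per revolution at different angular offsets,
# giving periodically non-uniform sample times (a sum of two uniform
# streams, as in the paper's model).
rng = np.random.default_rng(0)
f_rev = 50.0                      # assumed rotor speed, rev/s
offsets = [0.0, 0.13]             # assumed probe offsets, fraction of a rev
n_revs = 400

t = np.sort(np.concatenate([(np.arange(n_revs) + d) / f_rev
                            for d in offsets]))

f_vib = 37.3                      # "true" blade vibration frequency, Hz
x = np.sin(2 * np.pi * f_vib * t) + 0.05 * rng.standard_normal(t.size)

# Least-squares fit of cos/sin atoms over the assumed vibration band.
# 37.3 Hz lies above the 25 Hz Nyquist limit of a single 50-sample/s
# probe; the combined two-probe stream plus the band assumption
# nevertheless recovers it.
freqs = np.arange(30.0, 45.0, 0.1)
A = np.hstack([np.cos(2 * np.pi * t[:, None] * freqs),
               np.sin(2 * np.pi * t[:, None] * freqs)])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)

amp = np.hypot(coef[:freqs.size], coef[freqs.size:])
print(f"estimated vibration frequency: {freqs[amp.argmax()]:.1f} Hz")
```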

  15. A Non-Uniformly Under-Sampled Blade Tip-Timing Signal Reconstruction Method for Blade Vibration Monitoring

    PubMed Central

    Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun

    2015-01-01

    High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damage to the blade. Blade tip-timing (BTT) methods have become a promising way to monitor blade vibrations. However, synchronous vibrations cannot be adequately monitored by uniform BTT sampling, so non-equally mounted probes are used, which makes the sampled signal non-uniform. Since under-sampling is an intrinsic drawback of BTT methods, analyzing non-uniformly under-sampled BTT signals is a major challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. First, a mathematical model of the non-uniform BTT sampling process is built; it can be treated as the sum of several uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Second, simultaneous equations for all interpolating functions in each sub-band are assembled, and the corresponding solutions are derived to remove the unwanted replicas of the original signal caused by the sampling, which would otherwise overlap the original signal. Finally, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate that the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signals can be reconstructed from non-uniform BTT data acquired from only two probes. PMID:25621612

  16. Determination of lipophilic toxins by LC/MS/MS: single-laboratory validation.

    PubMed

    Villar-González, Adriano; Rodríguez-Velasco, María Luisa; Gago-Martínez, Ana

    2011-01-01

    An LC/MS/MS method has been developed, assessed, and intralaboratory-validated for the analysis of the lipophilic toxins currently regulated by European Union legislation: okadaic acid (OA) and dinophysistoxins 1 and 2, including their ester forms; azaspiracids 1, 2, and 3; pectenotoxins 1 and 2; yessotoxin (YTX) and the analogs 45-OH-YTX, homo-YTX, and 45-OH-homo-YTX; as well as 13-desmethyl spirolide C. The method consists of duplicate sample extraction with methanol and direct analysis of the crude extract without further cleanup or concentration. Ester forms of OA and the dinophysistoxins are detected as the parent ions after alkaline hydrolysis of the extract. The validation of this method was performed using both fortified and naturally contaminated samples, with experiments designed according to International Organization for Standardization, International Union of Pure and Applied Chemistry, and AOAC guidelines. With the exception of YTX in fortified samples, RSDr values were below 15% and RSDR values below 25%. Recovery values were between 77 and 95%, and LOQs were below 60 μg/kg. These data, together with validation experiments for recovery, selectivity, robustness, traceability, and linearity, as well as uncertainty calculations, are presented in this paper.

  17. Effect of different drying methods on moisture ratio and rehydration of pumpkin slices.

    PubMed

    Seremet Ceclu, Liliana; Botez, Elisabeta; Nistor, Oana-Viorela; Andronoiu, Doina Georgeta; Mocanu, Gabriel-Danut

    2016-03-15

    This study was carried out to determine the influence of hot air drying and combined drying methods on the physicochemical properties of pumpkin (Cucurbita moschata) samples. The hot air chamber experiments were conducted at 50, 60 and 70 °C. The combined method consists of a triple combination of the main drying techniques: in the first stage the samples were dried by hot air convection at 60 °C, followed by hot air ventilation at 40 °C simultaneous with microwave treatment. The time required to reduce the moisture content to any given level was highly dependent on the drying conditions. The longest drying time in hot air was 540 min at 50 °C, while the shortest was 189 min for hot air ventilation at 40 °C combined with microwave at a power of 315 W. The samples dried by hot air showed a higher rehydration capacity than the samples dried by the combined method. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Experiments on Nucleation in Different Flow Regimes

    NASA Technical Reports Server (NTRS)

    Bayuzick, R. J.; Hofmeister, W. H.; Morton, C. M.; Robinson, M. B.

    1999-01-01

    The vast majority of metallic engineering materials are solidified from the liquid phase. Understanding the solidification process is essential to control microstructure, which in turn, determines the properties of materials. The genesis of solidification is nucleation, where the first stable solid forms from the liquid phase. Nucleation kinetics determine the degree of undercooling and phase selection. As such, it is important to understand nucleation phenomena in order to control solidification or glass formation in metals and alloys. Early experiments in nucleation kinetics were accomplished by droplet dispersion methods. Dilatometry was used by Turnbull and others, and more recently differential thermal analysis and differential scanning calorimetry have been used for kinetic studies. These techniques have enjoyed success; however, there are difficulties with these experiments. Since materials are dispersed in a medium, the character of the emulsion/metal interface affects the nucleation behavior. Statistics are derived from the large number of particles observed in a single experiment, but dispersions have a finite size distribution which adds to the uncertainty of the kinetic determinations. Even though temperature can be controlled quite well before the onset of nucleation, the release of the latent heat of fusion during nucleation of particles complicates the assumption of isothermality during these experiments. Containerless processing has enabled another approach to the study of nucleation kinetics. With levitation techniques it is possible to undercool one sample to nucleation repeatedly in a controlled manner, such that the statistics of the nucleation process can be derived from multiple experiments on a single sample. The authors have fully developed the analysis of nucleation experiments on single samples following the suggestions of Skripov. The advantage of these experiments is that the samples are directly observable. The nucleation temperature can be measured by noncontact optical pyrometry, the mass of the sample is known, and post processing analysis can be conducted on the sample. The disadvantages are that temperature measurement must have exceptionally high precision, and it is not possible to isolate specific heterogeneous sites as in droplet dispersions.

  19. Application of Enhanced Sampling Monte Carlo Methods for High-Resolution Protein-Protein Docking in Rosetta

    PubMed Central

    Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin

    2015-01-01

    The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
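
    As a hedged, minimal illustration of the exchange step behind these approaches, the sketch below runs plain temperature replica-exchange Monte Carlo on a toy one-dimensional energy surface. It is not the Rosetta docking protocol or its WTE-H-REMC variant; the energy function, temperature ladder and move sizes are invented.

```python
import numpy as np

# Toy temperature replica-exchange Monte Carlo: one walker per
# temperature, local Metropolis moves, plus periodic swap attempts
# between neighbouring replicas with the standard exchange criterion.

rng = np.random.default_rng(1)

def energy(x):
    return 0.1 * x**2 - 2.0 * np.cos(3.0 * x)   # rugged, many local minima

temps = np.array([0.2, 0.5, 1.0, 2.0])          # assumed temperature ladder
x = rng.uniform(-3.0, 3.0, size=temps.size)     # one configuration per replica

for step in range(20000):
    # local Metropolis move within every replica
    prop = x + 0.3 * rng.standard_normal(temps.size)
    dE = energy(prop) - energy(x)
    accept = rng.random(temps.size) < np.exp(np.minimum(0.0, -dE / temps))
    x = np.where(accept, prop, x)

    # swap attempt between one random neighbouring pair of replicas
    i = rng.integers(temps.size - 1)
    arg = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
    if rng.random() < np.exp(min(0.0, arg)):
        x[i], x[i + 1] = x[i + 1], x[i]

print("coldest replica settled near x =", round(float(x[0]), 2))
```

    The high-temperature replicas cross barriers easily and feed good configurations down the ladder, which is the efficiency gain the enhanced variants build on.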

  20. Coding of DNA samples and data in the pharmaceutical industry: current practices and future directions--perspective of the I-PWG.

    PubMed

    Franc, M A; Cohen, N; Warner, A W; Shaw, P M; Groenen, P; Snapir, A

    2011-04-01

    DNA samples collected in clinical trials and stored for future research are valuable to pharmaceutical drug development. Given the perceived higher risk associated with genetic research, industry has implemented complex coding methods for DNA. Following years of experience with these methods and with addressing questions from institutional review boards (IRBs), ethics committees (ECs) and health authorities, the industry has started reexamining the extent of the added value offered by these methods. With the goal of harmonization, the Industry Pharmacogenomics Working Group (I-PWG) conducted a survey to gain an understanding of company practices for DNA coding and to solicit opinions on their effectiveness at protecting privacy. The results of the survey and the limitations of the coding methods are described. The I-PWG recommends dialogue with key stakeholders regarding coding practices such that equal standards are applied to DNA and non-DNA samples. The I-PWG believes that industry standards for privacy protection should provide adequate safeguards for DNA and non-DNA samples/data and suggests a need for more universal standards for samples stored for future research.

  1. DNA Everywhere. A Guide for Simplified Environmental Genomic DNA Extraction Suitable for Use in Remote Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabrielle N. Pecora; Francine C. Reid; Lauren M. Tom

    2016-05-01

    Collecting field samples from remote or geographically distant areas can be financially and logistically challenging. With the participation of a local organization near where the samples originate, gDNA can be extracted in the field and shipped to a research institution for further processing and analysis. The ability to set up gDNA extraction capabilities in the field can drastically reduce cost and time when running long-term microbial studies with a large sample set. The approach outlined here provides a compact and affordable method for setting up a "laboratory" and extracting and shipping gDNA samples from anywhere in the world. This white paper explains the process of setting up the "laboratory", choosing and training individuals with no prior scientific experience to perform gDNA extractions, and safe methods for shipping extracts to any research institution. All methods have been validated by the Andersen group at Lawrence Berkeley National Laboratory using the Berkeley Lab PhyloChip.

  2. Image sampling in static telepathology for frozen section diagnosis.

    PubMed

    Della Mea, V; Cataldi, P; Boi, S; Finato, N; Dalla Palma, P; Beltrami, C A

    1999-10-01

    A frozen section diagnostic service is often not directly available in small rural or mountain hospitals. In these cases, it could be possible to provide frozen section diagnosis through telepathology systems. Telepathology is based on two main methods: static and dynamic. The former is less expensive, but involves the crucial problem of image sampling. The aim was to characterise the differences in image sampling for static telepathology when undertaken by pathologists with different experience. As a test field, a previously studied telepathology method based on multimedia email was adopted. Using this method, three pathologists with different levels of experience sampled images from 155 routine frozen sections and sent them to a distant pathology institute, where diagnoses were made on the digital images. After the telepathology diagnoses, the glass slides of both the frozen sections and the definitive sections were sent to the remote pathologists for review. Four of the 155 transmissions were considered inadequate by the remote pathologist. In the remaining 151 cases, the telepathology diagnosis agreed with the gold standard in 146 (96.7%). There was no significant divergence between the three pathologists in their sampling of the images. Each case comprised five images on average, acquired in four minutes, and the overall time for transmission was about 19 minutes. The results suggest that in routine frozen section diagnosis an inexperienced pathologist can sample images sufficiently well to permit remote diagnosis. However, as expected, the internet is too unreliable for such a time-dependent task. An improvement to the system would be integrated real-time features allowing interaction between the two pathologists.

  3. [High Precision Identification of Igneous Rock Lithology by Laser Induced Breakdown Spectroscopy].

    PubMed

    Wang, Chao; Zhang, Wei-gang; Yan, Zhi-quan

    2015-09-01

    In the field of petroleum exploration, lithology identification of fine cuttings samples, especially high-precision identification of igneous rocks with similar properties, has become a significant geological problem. To solve it, a new method is proposed based on elemental analysis by laser-induced breakdown spectroscopy (LIBS) and the Total Alkali versus Silica (TAS) diagram. Using an independent LIBS system, factors influencing the spectral signal, such as pulse energy, acquisition time delay, spectrum acquisition method and pre-ablation, were systematically investigated through comparative experiments. The best analysis conditions for igneous rock were determined: a pulse energy of 50 mJ, an acquisition time delay of 2 μs, and averaging over 20 different points on the sample surface; pre-ablation proved unsuitable for igneous rock samples. The repeatability of the spectral data was thereby effectively improved. Characteristic lines of the 7 elements commonly used for lithology identification of igneous rock (Na, Mg, Al, Si, K, Ca, Fe) were determined, and igneous rock samples of different lithologies were analyzed and compared. Calibration curves for Na, K and Si were generated using a national standard series of rock samples, and all linear correlation coefficients were greater than 0.9. The accuracy of the quantitative analysis was verified with national standard samples. The elemental content of igneous rock was analyzed quantitatively with the calibration curves, and its lithology was identified accurately by the TAS diagram method, with an accuracy rate of 90.7%. The study indicates that LIBS can achieve high-precision identification of igneous rock lithology.

  4. The software peculiarities of pattern recognition in track detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Starkov, N.

    Different kinds of nuclear track recognition algorithms are presented. Several complex examples of their use in physics experiments are considered, and some methods for processing complicated images are described.

  5. A two-dimensional matrix image based feature extraction method for classification of sEMG: A comparative analysis based on SVM, KNN and RBF-NN.

    PubMed

    Wen, Tingxi; Zhang, Zhongnan; Qiu, Ming; Zeng, Ming; Luo, Weizhen

    2017-01-01

    The computer mouse is an important human-computer interaction device, but patients with physical finger disabilities are unable to operate it. Surface EMG (sEMG) can be monitored by electrodes on the skin surface and reflects neuromuscular activity; sEMG classification can therefore be used to control auxiliary equipment and help physically disabled patients operate a mouse. The objective was to develop a new method to extract sEMG generated by finger motion and apply novel features to classify it. A window-based data acquisition method was presented to extract signal samples from the sEMG electrodes. A two-dimensional matrix image based feature extraction method, which differs from the classical methods based on the time or frequency domain, was then employed to transform signal samples into feature maps used for classification. In the experiments, sEMG data samples produced by the index and middle fingers at the click of a mouse button were acquired separately. Characteristics of the samples were analyzed to generate a feature map for each sample. Finally, machine learning classification algorithms (SVM, KNN, RBF-NN) were employed to classify these feature maps on a GPU. The study demonstrated that all classifiers can identify and classify sEMG samples effectively; in particular, the accuracy of the SVM classifier reached 100%. The signal separation method is a convenient, efficient and quick way to extract the sEMG samples produced by the fingers. In addition, unlike the classical methods, the new method extracts features by appropriately enlarging the energy of the sample signals. The classical machine learning classifiers all performed well using these features.
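
    The overall pipeline (window the raw signal, arrange each window as a two-dimensional matrix, classify the resulting maps) can be illustrated with a minimal sketch. The signal models, window sizes and labels below are invented stand-ins, and the "feature map" here is a plain reshape rather than the authors' energy-enlarging transform.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Window a 1-D signal, arrange each 256-sample window as a 16x16
# matrix "image", and classify the flattened maps with an SVM.

rng = np.random.default_rng(2)

def windows(sig, size=256, step=128):
    """Window-based acquisition: overlapping fixed-length segments."""
    return np.array([sig[i:i + size]
                     for i in range(0, len(sig) - size + 1, step)])

def feature_map(w):
    """Arrange a window as a 16x16 matrix, then flatten for the classifier."""
    return w.reshape(16, 16).ravel()

# synthetic two-class signals standing in for index/middle-finger sEMG
n = 20000
sig_a = np.sin(2 * np.pi * 0.05 * np.arange(n)) + 0.3 * rng.standard_normal(n)
sig_b = np.sign(np.sin(2 * np.pi * 0.02 * np.arange(n))) + 0.3 * rng.standard_normal(n)

X = np.vstack([np.apply_along_axis(feature_map, 1, windows(s))
               for s in (sig_a, sig_b)])
y = np.repeat([0, 1], X.shape[0] // 2)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print(f"test accuracy: {clf.score(Xte, yte):.2f}")
```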

  6. Protocol: high throughput silica-based purification of RNA from Arabidopsis seedlings in a 96-well format

    PubMed Central

    2011-01-01

    The increasing popularity of systems-based approaches to plant research has resulted in a demand for high throughput (HTP) methods to be developed. RNA extraction from multiple samples in an experiment is a significant bottleneck in performing systems-level genomic studies. Therefore we have established a high throughput method of RNA extraction from Arabidopsis thaliana to facilitate gene expression studies in this widely used plant model. We present optimised manual and automated protocols for the extraction of total RNA from 9-day-old Arabidopsis seedlings in a 96 well plate format using silica membrane-based methodology. Consistent and reproducible yields of high quality RNA are isolated averaging 8.9 μg total RNA per sample (~20 mg plant tissue). The purified RNA is suitable for subsequent qPCR analysis of the expression of over 500 genes in triplicate from each sample. Using the automated procedure, 192 samples (2 × 96 well plates) can easily be fully processed (samples homogenised, RNA purified and quantified) in less than half a day. Additionally, we demonstrate that plant samples can be stored in RNAlater at -20°C (but not 4°C) for 10 months prior to extraction with no significant effect on RNA yield or quality. Furthermore, disrupted samples can be stored in the lysis buffer at -20°C for at least 6 months prior to completion of the extraction procedure, providing a flexible sampling and storage scheme to facilitate complex time series experiments. PMID:22136293

  7. Protocol: high throughput silica-based purification of RNA from Arabidopsis seedlings in a 96-well format.

    PubMed

    Salvo-Chirnside, Eliane; Kane, Steven; Kerr, Lorraine E

    2011-12-02

    The increasing popularity of systems-based approaches to plant research has resulted in a demand for high throughput (HTP) methods to be developed. RNA extraction from multiple samples in an experiment is a significant bottleneck in performing systems-level genomic studies. Therefore we have established a high throughput method of RNA extraction from Arabidopsis thaliana to facilitate gene expression studies in this widely used plant model. We present optimised manual and automated protocols for the extraction of total RNA from 9-day-old Arabidopsis seedlings in a 96 well plate format using silica membrane-based methodology. Consistent and reproducible yields of high quality RNA are isolated averaging 8.9 μg total RNA per sample (~20 mg plant tissue). The purified RNA is suitable for subsequent qPCR analysis of the expression of over 500 genes in triplicate from each sample. Using the automated procedure, 192 samples (2 × 96 well plates) can easily be fully processed (samples homogenised, RNA purified and quantified) in less than half a day. Additionally, we demonstrate that plant samples can be stored in RNAlater at -20°C (but not 4°C) for 10 months prior to extraction with no significant effect on RNA yield or quality. Furthermore, disrupted samples can be stored in the lysis buffer at -20°C for at least 6 months prior to completion of the extraction procedure, providing a flexible sampling and storage scheme to facilitate complex time series experiments.

  8. Surface, Water, and Air Biocharacterization (SWAB) Flight Experiment

    NASA Technical Reports Server (NTRS)

    Castro, V. A.; Ott, C. M.; Pierson, D. L.

    2012-01-01

    The determination of risk from infectious disease during spaceflight missions is composed of several factors including both the concentration and characteristics of the microorganisms to which the crew are exposed. Thus, having a good understanding of the microbial ecology aboard spacecraft provides the necessary information to mitigate health risks to the crew. While preventive measures are taken to minimize the presence of pathogens on spacecraft, medically significant organisms have been isolated from both the Mir and International Space Station (ISS). Historically, the method for isolation and identification of microorganisms from spacecraft environmental samples depended upon their growth on culture media. Unfortunately, only a fraction of the organisms may grow on a specific culture medium, potentially omitting those microorganisms whose nutritional and physical requirements for growth are not met. To address this bias in our understanding of the ISS environment, the Surface, Water, and Air Biocharacterization (SWAB) Flight Experiment was designed to investigate and develop monitoring technology to provide better microbial characterization. For the SWAB flight experiment, we hypothesized that environmental analysis using non-culture-based technologies would reveal microorganisms, allergens, and microbial toxins not previously reported in spacecraft, allowing for a more complete health assessment. Key findings during this experiment included: a) Generally, advanced molecular techniques were able to reveal a few organisms not recovered using culture-based methods; however, there is no indication that current monitoring is "missing" any medically significant bacteria or fungi. b) Molecular techniques have tremendous potential for microbial monitoring, however, sample preparation and data analysis present challenges for spaceflight hardware. c) Analytical results indicate that some molecular techniques, such as denaturing gradient gel electrophoresis (DGGE), can be much less sensitive than culture-based methods. d) More sensitive molecular techniques, such as quantitative polymerase chain reaction (QPCR), were able to identify viral DNA from ISS environments, suggesting potential transfer of the organism between crewmembers. In addition, the hardware selected for this experiment represented advances for next-generation sample collection. The advanced nature of this collection hardware was noted, when the Sartorius MD8 Air Port air sampler from the SWAB experiment remained on board ISS at the request of JAXA investigators, who intend to use it in completion of their microbial ecology experiment.

  9. Overview of qualitative research.

    PubMed

    Grossoehme, Daniel H

    2014-01-01

    Qualitative research methods are a robust tool for chaplaincy research questions. Similar to much of chaplaincy clinical care, qualitative research generally works with written texts, often transcriptions of individual interviews or focus group conversations, and seeks to understand the meaning of experience in a study sample. This article describes three common methodologies: ethnography, grounded theory, and phenomenology. Issues to consider relating to the study sample, design, and analysis are discussed. Approaches to enhancing the validity of the data, as well as reliability and ethical issues in qualitative research, are described. Qualitative research is an accessible way for chaplains to contribute new knowledge about the sacred dimension of people's lived experience.

  10. Life Sciences Research Facility automation requirements and concepts for the Space Station

    NASA Technical Reports Server (NTRS)

    Rasmussen, Daryl N.

    1986-01-01

    An evaluation is made of the methods and preliminary results of a study on prospects for the automation of the NASA Space Station's Life Sciences Research Facility. In order to remain within current Space Station resource allocations, approximately 85 percent of planned life science experiment tasks must be automated; these tasks encompass specimen care and feeding, cage and instrument cleaning, data acquisition and control, sample analysis, waste management, instrument calibration, materials inventory and management, and janitorial work. Task automation will free crews for specimen manipulation, tissue sampling, data interpretation and communication with ground controllers, and experiment management.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Rearden, Bradley T

    The validation of neutron transport methods used in nuclear criticality safety analyses is required by consensus American National Standards Institute/American Nuclear Society (ANSI/ANS) standards. In the last decade, there has been an increased interest in correlations among critical experiments used in validation that have shared physical attributes and which impact the independence of each measurement. The statistical methods included in many of the frequently cited guidance documents on performing validation calculations incorporate the assumption that all individual measurements are independent, so little guidance is available to practitioners on the topic. Typical guidance includes recommendations to select experiments from multiple facilities and experiment series in an attempt to minimize the impact of correlations or common-cause errors in experiments. Recent efforts have been made both to determine the magnitude of such correlations between experiments and to develop and apply methods for adjusting the bias and bias uncertainty to account for the correlations. This paper describes recent work performed at Oak Ridge National Laboratory using the Sampler sequence from the SCALE code system to develop experimental correlations using a Monte Carlo sampling technique. Sampler will be available for the first time with the release of SCALE 6.2, and a brief introduction to the methods used to calculate experiment correlations within this new sequence is presented in this paper. Techniques to utilize these correlations in the establishment of upper subcritical limits are the subject of a companion paper and will not be discussed here. Example experimental uncertainties and correlation coefficients are presented for a variety of low-enriched uranium water-moderated lattice experiments selected for use in a benchmark exercise by the Working Party on Nuclear Criticality Safety Subgroup on Uncertainty Analysis in Criticality Safety Analyses. The results include studies on the effect of fuel rod pitch on the correlations, and some observations are also made regarding difficulties in determining experimental correlations using the Monte Carlo sampling technique.
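
    The Monte Carlo idea behind such experiment correlations can be shown in toy form: if two experiments share an uncertain parameter (a common-cause error such as a shared fuel-enrichment uncertainty), sampling that parameter induces correlated responses. The sketch below is purely illustrative, not the SCALE/Sampler implementation, and all numbers are invented.

```python
import numpy as np

# Two "experiments" respond to one shared uncertainty plus their own
# independent uncertainties; sampling all three yields correlated
# simulated responses whose correlation coefficient can be estimated.

rng = np.random.default_rng(3)
n = 1000

shared = rng.normal(0.0, 1.0, n)      # common-cause uncertainty, sampled
indep1 = rng.normal(0.0, 0.5, n)      # experiment-specific uncertainties
indep2 = rng.normal(0.0, 0.5, n)

keff_1 = 1.0 + 0.002 * shared + 0.001 * indep1   # sampled responses
keff_2 = 1.0 + 0.002 * shared + 0.001 * indep2

corr = np.corrcoef(keff_1, keff_2)[0, 1]
print(f"sampled correlation coefficient: {corr:.2f}")
```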

  12. Validation of odor concentration from mechanical-biological treatment piles using static chamber and wind tunnel with different wind speed values.

    PubMed

    Szyłak-Szydłowski, Mirosław

    2017-09-01

    The basic principle of odor sampling from surface sources is based primarily on the amount of air obtained from a specific area of the ground, which acts as a source of malodorous compounds. Wind tunnels and flux chambers are often the only available direct methods for evaluating odor fluxes from small area sources. There are currently no widely accepted chamber-based methods; thus, standardization is still needed to ensure accuracy and comparability. Previous research has established that there is a significant difference between the odor concentration values obtained using the Lindvall chamber and those obtained by a dynamic flow chamber. The present study therefore compares sampling methods using a streaming chamber modeled on the Lindvall cover (at different wind speeds), a static chamber, and a direct sampling method without any screens. The volumes of the chambers in the current work were similar, ~0.08 m³. The study was conducted at a mechanical-biological treatment plant in Poland, with samples taken from a pile covered by a membrane. Measured odor concentration values were between 2 and 150 ouE/m³. The results demonstrated that both chambers can be used interchangeably under the following conditions: the odor concentration is below 60 ouE/m³, the wind speed inside the Lindvall chamber is below 0.2 m/sec, and the flow value is below 0.011 m³/sec. Increasing the wind speed above these values results in significant differences between the methods. In all experiments, the odor concentrations measured with the static chamber were consistently higher than those measured in the Lindvall chamber. Lastly, the experimental results were used to determine a model function relating wind speed to odor concentration, and the practical usefulness of the results lies in showing that both examined chambers can be used interchangeably within the stated limits.

  13. Method of analysis at the U.S. Geological Survey California Water Science Center, Sacramento Laboratory - determination of haloacetic acid formation potential, method validation, and quality-control practices

    USGS Publications Warehouse

    Zazzi, Barbara C.; Crepeau, Kathryn L.; Fram, Miranda S.; Bergamaschi, Brian A.

    2005-01-01

    An analytical method for the determination of haloacetic acid formation potential of water samples has been developed by the U.S. Geological Survey California Water Science Center Sacramento Laboratory. The haloacetic acid formation potential is measured by dosing water samples with chlorine under specified conditions of pH, temperature, incubation time, darkness, and residual-free chlorine. The haloacetic acids formed are bromochloroacetic acid, bromodichloroacetic acid, dibromochloroacetic acid, dibromoacetic acid, dichloroacetic acid, monobromoacetic acid, monochloroacetic acid, tribromoacetic acid, and trichloroacetic acid. They are extracted, methylated, and then analyzed using a gas chromatograph equipped with an electron capture detector. Method validation experiments were performed to determine the method accuracy, precision, and detection limit for each of the compounds. Method detection limits for these nine haloacetic acids ranged from 0.11 to 0.45 microgram per liter. Quality-control practices include the use of blanks, quality-control samples, calibration verification standards, surrogate recovery, internal standard, matrix spikes, and duplicates.

  14. Laboratory Noble Gas Migration Experiments through Rock

    NASA Astrophysics Data System (ADS)

    Broome, S.; Cashion, A.; Feldman, J.; Sussman, A. J.; Swanson, E.; Wilson, J.

    2016-12-01

    The Underground Nuclear Explosion Signatures Experiment (UNESE) was created to address science and research and development aspects associated with nuclear explosion verification and nuclear nonproliferation with a focus on non-prompt signals. A critical component of the UNESE program is a realistic understanding of the post-detonation processes and changes in the environment that produce observable physical and radio-chemical signatures. As such, an understanding of noble gas migration properties through various lithologies is essential. Here we present an empirical methodology to measure tortuosity on well-characterized rhyolitic tuffs and lavas. Tortuosity is then compared with microfracture networks characterized by microscopy. To quantify tortuosity, a pressurized (1500 mbar) fixed volume of argon is expanded into a sample under high vacuum (0.200 mbar). A quadrupole mass spectrometer (QMS) is used to measure argon downstream of the sample in real time, allowing the time-series gas arrival curve to be characterized for each sample. To evaluate the method, blank samples have been machined to correspond with tortuosities of 1, 2, and 4 in conjunction with a restricted-flow valve to mimic rock sample permeability. Data from the blanks are analyzed with this system to correct for system effects on gas arrival. High vacuum is maintained in the QMS system during sampling by precise metering of the gas through a leak valve with active feedback control which allows arrival time and concentration of argon to be established in real time. Along with a comprehensive characterization of the rock and fracture properties, the parameters derived from these experiments will provide invaluable insight into the three-dimensional structure of damage zones, the production of temporally variable signatures and the methods to best detect underground nuclear explosion signatures. SAND2016-7309 A

  15. Bioanalytical method development and validation for the determination of glycine in human cerebrospinal fluid by ion-pair reversed-phase liquid chromatography-tandem mass spectrometry.

    PubMed

    Jiang, Jian; James, Christopher A; Wong, Philip

    2016-09-05

    An LC-MS/MS method has been developed and validated for the determination of glycine in human cerebrospinal fluid (CSF). The validated method used artificial cerebrospinal fluid as a surrogate matrix for calibration standards. The calibration curve range for the assay was 100-10,000 ng/mL, and ¹³C₂,¹⁵N-glycine was used as the internal standard (IS). Pre-validation experiments were performed to demonstrate parallelism with the surrogate matrix and standard addition methods. The mean endogenous glycine concentration in pooled human CSF, determined on three days using artificial CSF as a surrogate matrix and by the method of standard addition, was 748±30.6 and 768±18.1 ng/mL, respectively. A percentage difference of -2.6% indicated that artificial CSF could be used as a surrogate calibration matrix for the determination of glycine in human CSF. Quality control (QC) samples, except the lower limit of quantitation (LLOQ) QC and low QC samples, were prepared by spiking glycine into aliquots of a pooled human CSF sample. The low QC sample was prepared from a separate pooled human CSF sample containing low endogenous glycine concentrations, while the LLOQ QC sample was prepared in artificial CSF. Standard addition was used extensively to evaluate matrix effects during validation. The validated method was used to determine endogenous glycine concentrations in human CSF samples. Incurred sample reanalysis demonstrated the reproducibility of the method. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Monte Carlo source simulation technique for solution of interference reactions in INAA experiments: a preliminary report

    NASA Astrophysics Data System (ADS)

    Allaf, M. Athari; Shahriari, M.; Sohrabpour, M.

    2004-04-01

    A new method using Monte Carlo source simulation of interference reactions in neutron activation analysis experiments has been developed. The neutron spectrum at the sample location was simulated using the Monte Carlo code MCNP, and the contributions of different elements to a specified gamma line were determined. The resulting response matrix, together with the measured peak areas, was used to determine the masses of the elements of interest in the sample. A number of benchmark experiments were performed and the calculated results verified against known values. The good agreement obtained between the calculated and known values suggests that this technique may be useful for correcting for interference reactions in neutron activation analysis.
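
    As a hedged illustration of how a response matrix resolves interfering contributions: if each row is a gamma line and each column an element, with entries giving counts per unit mass, the element masses follow from the measured peak areas by solving a linear system. The matrix and peak-area values below are invented.

```python
import numpy as np

# Rows: gamma lines; columns: elements A and B.
# Entries: counts per unit mass contributed by each element to each line.
R = np.array([[120.0,  15.0],     # line 1: mostly element A, some B
              [  8.0, 140.0]])    # line 2: mostly element B, some A

peak_areas = np.array([2500.0, 4300.0])    # measured counts per line
masses = np.linalg.solve(R, peak_areas)    # unfold the element masses
print("estimated masses:", masses.round(2))
```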

  17. "He Never Did Anything You Typically Think of as Abuse": Experiences With Violence in Controlling and Non-Controlling Relationships in a Non-Agency Sample of Women.

    PubMed

    Velonis, Alisa J

    2016-08-01

    Traditionally, any physical aggression within intimate relationships has been labeled "domestic violence," even as researchers and advocates continue to disagree about the nature of that phenomenon vis-à-vis gender and control. As part of a larger mixed-methods study, 22 women from a non-agency, community-based sample who reported experience with relationship violence were interviewed. The existence of patterned coercive and controlling behaviors substantially differentiated experiences with violence, suggesting this dynamic is at least as important to identify as physical violence. Although preliminary, the impact of these findings on intervention and prevention strategies and on the debate surrounding gender symmetry is discussed. © The Author(s) 2016.

  18. Economic consequences of improved temperature forecasts: An experiment with the Florida citrus growers (an update of control group results)

    NASA Technical Reports Server (NTRS)

    Braen, C.

    1978-01-01

    The economic experiment, the results obtained to date, and the work which still remains to be done are summarized. Specifically, the experiment design is described in detail, as are the data collection methodology and procedures, the sampling plan, the data reduction techniques, the cost and loss models, the establishment of frost severity measures, and the data obtained from citrus growers, the National Weather Service, and the Federal Crop Insurance Corp. Also discussed are the resulting protection costs and crop losses for the control group sample, the extrapolation of the control group results to the Florida citrus industry, and the method for normalizing these results to a normal or average frost season so that they may be compared with anticipated similar results from the test group measurements.

  19. Addiction and treatment experiences among active methamphetamine users recruited from a township community in Cape Town, South Africa: a mixed-methods study

    PubMed Central

    Meade, Christina S.; Towe, Sheri L.; Watt, Melissa H.; Lion, Ryan R.; Myers, Bronwyn; Skinner, Donald; Kimani, Stephen; Pieterse, Desiree

    2015-01-01

    Background Since 2000, there has been a dramatic increase in methamphetamine use in South Africa, but little is known about the experiences of out-of-treatment users. This mixed-methods study describes the substance use histories, addiction symptoms, and treatment experiences of a community-recruited sample of methamphetamine users in Cape Town. Methods Using respondent driven sampling, 360 methamphetamine users (44% female) completed structured clinical interviews to assess substance abuse and treatment history and computerized surveys to assess drug-related risks. A sub-sample of 30 participants completed in-depth interviews to qualitatively explore experiences with methamphetamine use and drug treatment. Results Participants had used methamphetamine for an average of 7.06 years (SD=3.64). They reported using methamphetamine on an average of 23.49 of the past 30 days (SD=8.90); 60% used daily. The majority (90%) met ICD-10 criteria for dependence, and many reported severe social, financial, and legal consequences. While only 10% had ever received drug treatment, 90% reported that they wanted treatment. In the qualitative interviews, participants reported multiple barriers to treatment, including beliefs that treatment is ineffective and relapse is inevitable in their social context. They also identified important motivators, including desires to be drug free and improve family functioning. Conclusion This study yields valuable information to more effectively respond to emerging methamphetamine epidemics in South Africa and other low- and middle-income countries. Interventions to increase uptake of evidence-based services must actively seek out drug users and build motivation for treatment, and offer continuing care services to prevent relapse. Community education campaigns are also needed. PMID:25977205

  20. The determination of density and molecular weight distributions of lipoproteins by sedimentation equilibrium.

    PubMed

    Jeffrey, P D; Nichol, L W; Smith, G D

    1975-01-25

    A method is presented by which an experimental record of total concentration as a function of radial distance, obtained in a sedimentation equilibrium experiment conducted with a noninteracting mixture in the absence of a density gradient, may be analyzed to obtain the unimodal distributions of molecular weight and of partial molar volume when these vary concomitantly and continuously. Particular attention is given to the characterization of classes of lipoproteins exhibiting Gaussian distributions of these quantities, although the analysis is applicable to other types of unimodal distribution. Equations are also formulated permitting the definition of the corresponding distributions of partial specific volume and of density. The analysis procedure is based on a method (employing Laplace transforms) developed previously, but differs from it in that it avoids the necessity of differentiating experimental results, which introduces error. The method offers certain advantages over other procedures used to characterize and compare lipoprotein samples (exhibiting unimodal distributions) with regard to the duration of the experiment, economy of the sample, and, particularly, the ability to define in principle all of the relevant distributions from one sedimentation equilibrium experiment and an external measurement of the weight average partial specific volume. These points and the steps in the analysis procedure are illustrated with experimental results obtained in the sedimentation equilibrium of a sample of human serum low density lipoprotein. The experimental parameters (such as solution density, column height, and angular velocity) used in the conduct of these experiments were selected on the basis of computer-simulated examples, which are also presented. These provide a guide for other workers interested in characterizing lipoproteins of this class.

  1. Sample preparation for radiocarbon (14C) measurements of carbonyl compounds in the atmosphere: quantifying the biogenic contribution

    NASA Astrophysics Data System (ADS)

    Larsen, B. R.; Brussol, C.; Kotzias, D.; Veltkamp, T.; Zwaagstra, O.; Slanina, J.

    A method has been developed for the preparation of samples for radiocarbon (14C) measurements of carbonyl compounds in the atmosphere. Sampling on 25 ml 2,4-dinitrophenylhydrazine (DNPH)-coated silica gel cartridges can be carried out with up to 10,000 L of ambient air with no adverse effects on sample integrity. Methods for the selective clean-up of the extracts have been investigated; this is a necessary step in preparing ambient carbonyl samples for a measurement of the radiocarbon (14C) content. The method which gave the best results includes extraction of the DNPH cartridge with CH3CN and purification of the carbonyl hydrazones over activated silica gel to remove excess DNPH and non-target compounds. The method has been validated with laboratory samples and proved to give reliable results. The radiocarbon data from the first field experiment showed that ambient air over a semi-rural test site in Ispra, Italy on a late summer day contained mainly five carbonyls (formaldehyde > acetaldehyde > acetone > propanal > butanal) of mixed biogenic (41-57%) and anthropogenic (43-59%) origin. The method will be used in future monitoring of radiocarbon (14C) at a number of test sites in Europe.
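
    The biogenic/anthropogenic split rests on a simple isotope mass balance: fossil-derived carbon is 14C-free, so the measured fraction of modern carbon scales directly with the biogenic share. A minimal sketch with invented percent-modern-carbon values:

```python
# Isotope mass balance: fossil carbon contributes no 14C, so the
# biogenic fraction is the sample's 14C level divided by the 14C level
# of contemporary biogenic carbon. Both values below are illustrative.
pmc_sample = 52.0       # measured percent modern carbon of a carbonyl
pmc_biogenic = 110.0    # assumed 14C level of contemporary biogenic carbon

f_bio = pmc_sample / pmc_biogenic
print(f"biogenic fraction: {100 * f_bio:.0f}%")
```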

  2. A Compressed Sensing Based Method for Reducing the Sampling Time of A High Resolution Pressure Sensor Array System

    PubMed Central

    Sun, Chenglu; Li, Wei; Chen, Wei

    2017-01-01

    To extract the pressure distribution image and respiratory waveform unobtrusively and comfortably, we proposed a smart mat which utilizes a flexible pressure sensor array, printed electrodes and a novel soft seven-layer structure to monitor this physiological information. However, obtaining a high-resolution pressure distribution and a more accurate respiratory waveform requires more time to acquire the signals of all the pressure sensors embedded in the smart mat. In order to reduce the sampling time while keeping the same resolution and accuracy, a novel method based on compressed sensing (CS) theory was proposed. With the CS based method, the sampling time can be reduced by 40% while acquiring only about one-third of the original sampling points. Several experiments were carried out to validate the performance of the CS based method. While fewer than one-third of the original sampling points were measured, the correlation coefficient between the reconstructed respiratory waveform and the original waveform reached 0.9078, and the accuracy of the respiratory rate (RR) extracted from the reconstructed waveform reached 95.54%. The experimental results demonstrated that the novel method fits the high-resolution smart mat system and is a viable option for reducing the sampling time of the pressure sensor array. PMID:28796188
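
    The compressed-sensing principle invoked here (recovering a signal from far fewer measurements than samples, provided the signal is sparse in some basis) can be illustrated independently of the smart-mat hardware. The sketch below recovers a synthetic sparse vector from one-third as many random measurements using orthogonal matching pursuit; it is a generic stand-in, not the authors' reconstruction algorithm.

```python
import numpy as np

# Recover a k-sparse vector of length n from m << n random linear
# measurements with orthogonal matching pursuit (OMP).

rng = np.random.default_rng(4)
n, m, k = 96, 32, 4                  # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(0.0, 1.0, k)

Phi = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # random sensing matrix
y = Phi @ x                                       # m measurements (~n/3)

support, resid = [], y.copy()
for _ in range(k):
    # pick the column most correlated with the residual, then refit
    support.append(int(np.argmax(np.abs(Phi.T @ resid))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    resid = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print(f"relative reconstruction error: "
      f"{np.linalg.norm(x_hat - x) / np.linalg.norm(x):.3f}")
```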

  3. Using Virtual Social Networks for Case Finding in Clinical Studies: An Experiment from Adolescence, Brain, Cognition, and Diabetes Study.

    PubMed

    Pourabbasi, Ata; Farzami, Jalal; Shirvani, Mahbubeh-Sadat Ebrahimnegad; Shams, Amir Hossein; Larijani, Bagher

    2017-01-01

    One of the main uses of social networks in clinical studies is facilitating the process of sampling and case finding for scientists. The main focus of this study is a comparison of two sampling methods, phone calls and a virtual social network, for study purposes. One of the researchers called 214 families of children with diabetes over 90 days. After this period, phone calls stopped, and the team communicated with families through Telegram, a virtual social network, for 30 days. The number of children who participated in the study was then evaluated. Although the Telegram recruitment period was 60 days shorter than the phone call period, the researchers found that the proportion of participants recruited through Telegram (17.6%) did not differ significantly from the proportion recruited by phone calls (12.9%). Using social networks can therefore be suggested as a beneficial method for local researchers looking for easier sampling, winning their samples' trust, following up on procedures, and maintaining an easy-access database.

  4. Student Teachers' Views about Assessment and Evaluation Methods in Mathematics

    ERIC Educational Resources Information Center

    Dogan, Mustafa

    2011-01-01

    This study aimed to find out assessment and evaluation approaches in a Mathematics Teacher Training Department based on the views and experiences of student teachers. The study used a descriptive survey method, with the research sample consisting of 150 third- and fourth-year Primary Mathematics student teachers. Data were collected using a…

  5. The Quantitative Determination of Food Dyes in Powdered Drink Mixes: A High School or General Science Experiment

    ERIC Educational Resources Information Center

    Sigmann, Samuella B.; Wheeler, Dale E.

    2004-01-01

    These investigations focus on the development of a simple spectrophotometric method to quantitatively determine the FD&C color additives present in powdered drink mixes. Samples containing single dyes or binary mixtures of dyes can be analyzed using this method.
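
    Such an analysis typically rests on Beer's law: for a binary mixture, absorbances at two wavelengths give two linear equations in the two dye concentrations. A minimal sketch with invented molar absorptivities (the article does not specify its calibration values):

```python
import numpy as np

# Beer's law for a two-dye mixture: A(lambda) = path * sum_i eps_i * c_i.
# Rows: two wavelengths; columns: illustrative dyes (e.g. a red, a blue).
E = np.array([[2.1e4, 1.5e3],      # molar absorptivities, L/(mol*cm)
              [9.0e2, 1.3e5]])
path = 1.0                         # cuvette path length, cm

A = np.array([0.45, 0.62])         # measured absorbances at the two lambdas
c = np.linalg.solve(E * path, A)   # dye concentrations, mol/L
print(c)
```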

  6. Reflexion on linear regression trip production modelling method for ensuring good model quality

    NASA Astrophysics Data System (ADS)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model still has to be used, for which a good trip production model is essential. A good model can only be obtained from a good sample. Two basic principles of good sampling are that the sample must be capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. These principles do not yet seem to be well understood or applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method that ensures model quality. The research results are as follows. Statistics provides a method for calculating the span of predicted values at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that the sample composition can significantly change the model; hence a good R2 value does not always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method, as sketched below. The quality measure is defined as having a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must use random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
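
    The proposed quality measure, the confidence interval of the predicted value, can be computed as sketched below for simple linear regression. The data and variable names are invented for illustration.

```python
import numpy as np
from scipy import stats

# 95% confidence interval of the predicted (mean) value at x0 in
# simple linear regression, using the standard textbook formula.

rng = np.random.default_rng(5)
x = rng.uniform(100.0, 1000.0, 30)        # e.g. households per zone
y = 0.8 * x + rng.normal(0.0, 40.0, 30)   # e.g. trips produced per zone

n = x.size
b1, b0 = np.polyfit(x, y, 1)              # fitted slope and intercept
resid = y - (b0 + b1 * x)
s = np.sqrt(resid @ resid / (n - 2))      # residual standard error

x0 = 600.0                                # point of prediction
se = s * np.sqrt(1.0 / n + (x0 - x.mean())**2 / ((x - x.mean())**2).sum())
t = stats.t.ppf(0.975, n - 2)
y0 = b0 + b1 * x0
print(f"prediction {y0:.1f}, 95% CI [{y0 - t * se:.1f}, {y0 + t * se:.1f}]")
```

    A model whose predicted values carry a narrow interval at the required confidence level is then judged adequate, regardless of how flattering its R2 happens to be.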

  7. Concurrent Ultrasonic Tomography and Acoustic Emission in Solid Materials

    NASA Astrophysics Data System (ADS)

    Chow, Thomas M.

    A series of experiments were performed to detect stress-induced changes in the elastic properties of various solid materials. A technique was developed in which these changes were monitored concurrently by two methods, ultrasonic tomography and acoustic emission monitoring. This thesis discusses experiments in which acoustic emission (AE) and ultrasonic tomography were performed on various samples of solid materials including rocks, concrete, metals, and fibre-reinforced composites. Three separate techniques were used to induce stress in these samples. Disk-shaped samples were subjected to stress via diametral loading using an indirect tensile test geometry, cylindrical samples of rocks and concrete were subjected to hydraulic fracture tests, and rectangular samples of fibre-reinforced composite were subjected to direct tensile loading. The majority of the samples were elastically anisotropic. Full-waveform acoustic emission and tomographic data were collected while these samples were under load to give information concerning changes in the structure of the material as it was undergoing stress change and/or failure. Analysis of these data indicates that AE and tomographic techniques mutually complement each other to give a view of the stress-induced elastic changes in the tested samples.

  8. Non-Contact Temperature Requirements (NCTM) for drop and bubble physics

    NASA Technical Reports Server (NTRS)

    Hmelo, Anthony B.; Wang, Taylor G.

    1989-01-01

    Many of the materials research experiments to be conducted in the Space Processing program require a non-contaminating method of manipulating and controlling weightless molten materials. In these experiments, the melt is positioned and formed within a container without physically contacting the container's wall. An acoustic method, developed by Professor Taylor G. Wang before coming to Vanderbilt University from the Jet Propulsion Laboratory, has demonstrated the capability of positioning and manipulating room temperature samples. This was accomplished in an earth-based laboratory with a zero-gravity environment of short duration. However, many important facets of high temperature containerless processing technology have not yet been established, nor can they be established from the room temperature studies, because the details of the interaction between an acoustic field and a molten sample are largely unknown. Drop dynamics, bubble dynamics, coalescence behavior of drops and bubbles, electromagnetic and acoustic levitation methods applied to molten metals, and thermal streaming are among the topics discussed.

  9. Recording 13C-15N HMQC 2D sparse spectra in solids in 30 s

    NASA Astrophysics Data System (ADS)

    Kupče, Ēriks; Trébosc, Julien; Perrone, Barbara; Lafon, Olivier; Amoureux, Jean-Paul

    2018-03-01

    We propose a dipolar HMQC Hadamard-encoded (D-HMQC-Hn) experiment for fast 2D correlations of abundant nuclei in solids. The main limitation of the Hadamard methods resides in the length of the encoding pulses, which results from a compromise between the selectivity and the sensitivity due to losses. For this reason, these methods should mainly be used with sparse spectra, and they profit from the increased separation of the resonances at high magnetic fields. In the case of the D-HMQC-Hn experiments, we give a simple rule that allows directly setting the optimum length of the selective pulses, versus the minimum separation of the resonances in the indirect dimension. The demonstration has been performed on a fully 13C,15N labelled f-MLF sample, and it allowed recording the build-up curves of the 13C-15N cross-peaks within 10 min. However, the method could also be used in the case of less sensitive samples, but with more accumulations.
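
    The encoding principle can be shown schematically: n scans record sign-encoded combinations of n resonances, and multiplying by the Hadamard matrix separates them again. The sketch below is a toy amplitude-level illustration, not a pulse-sequence simulation, and the resonance amplitudes are invented.

```python
import numpy as np
from scipy.linalg import hadamard

# Hadamard encoding in schematic form: 4 scans record sign-encoded sums
# of 4 resonance amplitudes; decoding multiplies by the Hadamard matrix
# (entries +/-1, satisfying H @ H.T = 4*I).

rng = np.random.default_rng(6)
H = hadamard(4)

amps = np.array([1.0, 0.3, 0.0, 0.7])            # "true" resonance amplitudes
scans = H @ amps + 0.05 * rng.standard_normal(4) # encoded, noisy scans

decoded = H.T @ scans / 4                        # separate the resonances
print(decoded.round(2))
```

    Because every scan carries signal from every resonance, the noise per decoded resonance is lower than in n one-at-a-time selective experiments, which is the multiplex advantage the method trades against encoding-pulse length.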

  10. Effects of sterilization treatments on the analysis of TOC in water samples.

    PubMed

    Shi, Yiming; Xu, Lingfeng; Gong, Dongqin; Lu, Jun

    2010-01-01

    Decomposition experiments conducted with and without microbial processes are commonly used to study the effects of environmental microorganisms on the degradation of organic pollutants. However, the effects of biological pretreatment (sterilization) on organic matter often have a negative impact on such experiments. Based on the principle of water total organic carbon (TOC) analysis, the effects of physical sterilization treatments on the determination of TOC and other water quality parameters were investigated. The results revealed that two conventional physical sterilization treatments, autoclaving and 60Co gamma-radiation sterilization, led to the direct decomposition of some organic pollutants, resulting in considerable errors in the analysis of TOC in water samples. Furthermore, the extent of the errors varied with the intensity and duration of the sterilization treatments. Accordingly, a novel sterilization method for water samples, 0.45 μm micro-filtration coupled with ultraviolet radiation (MCUR), was developed in the present study. The results indicated that the MCUR method exerts a high bactericidal effect on the water sample while significantly decreasing the negative impact on the analysis of TOC and other water quality parameters. Before and after sterilization, the relative errors of TOC determination could be kept below 3% for water samples with different categories and concentrations of organic pollutants by using MCUR.

  11. Clean access, measurement, and sampling of Ellsworth Subglacial Lake: A method for exploring deep Antarctic subglacial lake environments

    NASA Astrophysics Data System (ADS)

    Siegert, Martin J.; Clarke, Rachel J.; Mowlem, Matt; Ross, Neil; Hill, Christopher S.; Tait, Andrew; Hodgson, Dominic; Parnell, John; Tranter, Martyn; Pearce, David; Bentley, Michael J.; Cockell, Charles; Tsaloglou, Maria-Nefeli; Smith, Andy; Woodward, John; Brito, Mario P.; Waugh, Ed

    2012-01-01

    Antarctic subglacial lakes are thought to be extreme habitats for microbial life and may contain important records of ice sheet history and climate change within their lake floor sediments. To find whether or not this is true, and to answer the science questions that would follow, direct measurement and sampling of these environments are required. Ever since the water depth of Vostok Subglacial Lake was shown to be >500 m, attention has been given to how these unique, ancient, and pristine environments may be entered without contamination and adverse disturbance. Several organizations have offered guidelines on the desirable cleanliness and sterility requirements for direct sampling experiments, including the U.S. National Academy of Sciences and the Scientific Committee on Antarctic Research. Here we summarize the scientific protocols and methods being developed for the exploration of Ellsworth Subglacial Lake in West Antarctica, planned for 2012-2013, which we offer as a guide to future subglacial environment research missions. The proposed exploration involves accessing the lake using a hot-water drill and deploying a sampling probe and sediment corer to allow sample collection. We focus here on how this can be undertaken with minimal environmental impact while maximizing scientific return without compromising the environment for future experiments.

  12. [Fast optimization of stepwise gradient conditions for ternary mobile phase in reversed-phase high performance liquid chromatography].

    PubMed

    Shan, Yi-chu; Zhang, Yu-kui; Zhao, Rui-huan

    2002-07-01

    In high performance liquid chromatography, multi-composition gradient elution is necessary for the separation of complex samples such as environmental and biological samples. Multivariate stepwise gradient elution is one of the most efficient elution modes because it combines the high selectivity of a multi-composition mobile phase with the shorter analysis time of gradient elution. In practical separations, the separation selectivity can be adjusted effectively by using a ternary mobile phase. To optimize these parameters, the retention equation of each solute must first be obtained. Traditionally, several isocratic experiments are used to obtain the retention equation, but this is time-consuming, especially for complex samples spanning a wide range of polarity. A new method for the fast optimization of ternary stepwise gradient elution was therefore proposed, based on the migration of the solute along the column. First, the coefficients of the retention equation of each solute are obtained by running several linear gradient experiments; the optimal separation conditions are then searched using a hierarchical chromatography response function as the optimization criterion. For each organic modifier, two initial linear gradient experiments yield the primary coefficients of the retention equation of each solute, so for a ternary mobile phase only four linear gradient runs are needed. The retention times of the solutes under an arbitrary mobile phase composition can then be predicted. The initial optimal mobile phase composition is obtained by resolution mapping over all solutes. In the subsequent optimization, the migration distance of each solute in the column is used to decide the mobile phase composition and duration of the later steps until all solutes are eluted, yielding the first predicted stepwise gradient conditions. If the resolution under the predicted optimal conditions is satisfactory, the procedure stops; otherwise, the coefficients of the retention equation are adjusted according to the experimental results under the previously predicted conditions, and new stepwise gradient conditions are predicted repeatedly until satisfactory resolution is obtained. Normally, satisfactory separation conditions can be found after only six experiments with the proposed method, greatly reducing the time needed compared with the traditional optimization approach. The method has been validated by application to the separation of several samples, such as amino acid derivatives and aromatic amines, for which satisfactory separations with the predicted resolution were obtained.
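
    To make the retention-equation step concrete, the sketch below fits a two-parameter linear solvent strength model, ln k = ln k0 - S*phi, for one solute and predicts its retention factor at an arbitrary mobile-phase composition. This is a minimal illustration only: the paper derives its coefficients from linear gradient runs rather than the isocratic-style data points assumed here, and the solute values are invented.

```python
import numpy as np

def fit_lss(phi, k):
    """Fit the linear solvent strength model ln k = ln k0 - S*phi
    from retention factors k measured at organic fractions phi."""
    slope, intercept = np.polyfit(phi, np.log(k), 1)
    return np.exp(intercept), -slope          # k0, S

def predict_k(k0, S, phi):
    """Predict the retention factor at an arbitrary composition."""
    return k0 * np.exp(-S * phi)

# Hypothetical solute: k = 20.0 at 30% organic, k = 4.0 at 50% organic
k0, S = fit_lss(np.array([0.30, 0.50]), np.array([20.0, 4.0]))
print(f"k0 = {k0:.1f}, S = {S:.1f}, predicted k at 40%: {predict_k(k0, S, 0.40):.2f}")
```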

  13. Improved selenium recovery from tissue with modified sample decomposition

    USGS Publications Warehouse

    Brumbaugh, W. G.; Walther, M.J.

    1991-01-01

    The present paper describes a simple modification of a recently reported decomposition method for the determination of selenium in biological tissue by hydride-generation atomic absorption. The modified method yielded slightly higher selenium recoveries (3-4%) for selected reference tissues and for fish tissue spiked with selenomethionine. Radiotracer experiments indicated that adding a small volume of hydrochloric acid to the wet digestate reduced the slight losses of selenium that occurred as the sample initially went to dryness before ashing. With this modification, selenium spiked as selenomethionine behaved more like the selenium in reference tissues than did the inorganic spike forms.

  14. A Differential Scanning Calorimetry Method for Construction of Continuous Cooling Transformation Diagram of Blast Furnace Slag

    NASA Astrophysics Data System (ADS)

    Gan, Lei; Zhang, Chunxia; Shangguan, Fangqin; Li, Xiuping

    2012-06-01

    The continuous cooling crystallization of a blast furnace slag was studied by differential scanning calorimetry (DSC). A kinetic model describing the evolution of the degree of crystallization with time was obtained. Bulk cooling experiments on the molten slag, coupled with numerical simulation of heat transfer, were conducted to validate the results of the DSC method. The degrees of crystallization of the samples from the bulk cooling experiments were estimated by means of X-ray diffraction (XRD) and DSC. The results from the DSC cooling and bulk cooling experiments were found to be in good agreement. The continuous cooling transformation (CCT) diagram of the blast furnace slag was constructed from the crystallization kinetic model and the experimental data. The obtained CCT diagram is characterized by two crystallization noses in different temperature ranges.
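
    The abstract does not state the functional form of the kinetic model; a common assumption for crystallization kinetics of this kind is the Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation, sketched below with invented parameters purely for illustration.

```python
import numpy as np

def jmak(t, k, n):
    """JMAK isothermal crystallization: X(t) = 1 - exp(-(k*t)**n)."""
    return 1.0 - np.exp(-(k * t) ** n)

# Hypothetical parameters; real values would be regressed from DSC exotherms.
t = np.linspace(0.0, 600.0, 7)          # time, s
X = jmak(t, k=1.0 / 200.0, n=2.5)       # degree of crystallization
for ti, Xi in zip(t, X):
    print(f"t = {ti:5.0f} s   X = {Xi:.3f}")
```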

  15. Arrogance analysis of several typical pattern recognition classifiers

    NASA Astrophysics Data System (ADS)

    Jing, Chen; Xia, Shengping; Hu, Weidong

    2007-04-01

    Various kinds of classification methods have been developed. However, most classical methods, such as Back-Propagation (BP), Bayesian methods, the Support Vector Machine (SVM), and the Self-Organizing Map (SOM), are arrogant. Arrogance, for a human, means that his decisions, even mistaken ones, overstate his actual experience; accordingly, we say he is arrogant if he frequently makes such decisions. Some classical pattern classifiers exhibit a similar characteristic. Given an input feature vector, we say a classifier is arrogant in its classification if its veracity is high yet its experience is low. Typically, when presented with a new sample that is distinguishable from the original training samples, traditional classifiers still recognize it as one of the known targets. Clearly, arrogance in classification is an undesirable attribute. Conversely, a classifier is non-arrogant in its classification if there is a reasonable balance between its veracity and its experience. Inquisitiveness is, in many ways, the opposite of arrogance: an eagerness for knowledge characterized by the drive to question and to seek a deeper understanding. The human capacity to doubt present beliefs allows us to acquire new experiences and to learn from our mistakes. Within the discrete world of computers, inquisitive pattern recognition is the constructive investigation and exploitation of conflict in information. We quantify this balance and discuss new techniques that detect arrogance in a classifier.
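
    A minimal sketch of the phenomenon being described: a standard classifier assigns near-certain probability to a point far outside its training data. The "experience" proxy below (proximity to the nearest training sample) is our assumption for illustration, not the paper's definition.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two training classes near the origin
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)

# A far-away point the classifier has never "experienced"
x_new = np.array([[40.0, 40.0]])
veracity = clf.predict_proba(x_new).max()                       # confidence
experience = np.exp(-np.linalg.norm(X - x_new, axis=1).min())   # proximity proxy
print(f"confidence = {veracity:.3f}, experience proxy = {experience:.2e}")
# High confidence paired with near-zero experience: an "arrogant" decision.
```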

  16. The challenge of on-tissue digestion for MALDI MSI- a comparison of different protocols to improve imaging experiments.

    PubMed

    Diehl, Hanna C; Beine, Birte; Elm, Julian; Trede, Dennis; Ahrens, Maike; Eisenacher, Martin; Marcus, Katrin; Meyer, Helmut E; Henkel, Corinna

    2015-03-01

    Mass spectrometry imaging (MSI) has become a powerful and successful tool for biomarker detection, especially in recent years. The technique combines the histological information of a tissue with its spatially resolved mass spectrometric information. The identification of differentially expressed protein peaks between samples is still the method's bottleneck; peptide MSI is therefore closer to the goal of identification than protein MSI, since peptides are easier to measure than proteins. Nevertheless, processing peptide imaging samples is challenging due to experimental complexity. To address this issue, a method development study for peptide MSI using cryoconserved and formalin-fixed paraffin-embedded (FFPE) rat brain tissue is provided. Different digestion times, matrices, and proteases were tested to define an optimal workflow for peptide MSI. All practical experiments were performed in triplicate and analyzed in the SCiLS Lab software, using structures derived from myelin basic protein (MBP) peaks, principal component analysis (PCA), and probabilistic latent semantic analysis (pLSA) to rate the quality of the experiments. Blinded evaluation of countable structures in the datasets was performed by three individuals. Such extensive method development for peptide matrix-assisted laser desorption/ionization (MALDI) imaging experiments has not been performed before, and the resulting problems and consequences are analyzed and discussed.

  17. Headspace profiling of cocaine samples for intelligence purposes.

    PubMed

    Dujourdy, Laurence; Besacier, Fabrice

    2008-08-06

    A method for the determination of residual solvents in illicit cocaine hydrochloride samples using static headspace gas chromatography (HS-GC), coupled with a computerized storage procedure, is described for the profiling and comparison of seizures. The system involves gas chromatographic separation of 18 occluded solvents followed by fully automatic data analysis and transfer to a PHP/MySQL database. First, a fractional factorial design was used to evaluate, with a minimum of experiments, the main effects of several critical method parameters (salt choice, vial agitation intensity, oven temperature, pressurization, and loop equilibration). The method was then validated for tactical intelligence purposes (batch comparison) via several studies: selection of solvents and of the mathematical comparison tool, reproducibility, and the influence of "cutting". The decision threshold for declaring two samples similar was set, and false positives and negatives were evaluated. Finally, application of the method to distinguishing geographical origins is discussed.
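
    The abstract does not reproduce the selected mathematical comparison tool or the threshold value; the sketch below uses a Pearson-correlation similarity between residual-solvent profiles with a hypothetical decision threshold, a common setup in chemical profiling.

```python
import numpy as np

def similarity(profile_a, profile_b):
    """Pearson correlation between two residual-solvent peak-area profiles."""
    return float(np.corrcoef(profile_a, profile_b)[0, 1])

# Hypothetical 18-solvent profiles for two seizures from the same batch
rng = np.random.default_rng(1)
batch1 = rng.random(18)
batch2 = batch1 * rng.normal(1.0, 0.05, 18)   # same batch, small noise
THRESHOLD = 0.95                              # hypothetical decision threshold

score = similarity(batch1, batch2)
print(f"similarity = {score:.3f} -> {'linked' if score >= THRESHOLD else 'not linked'}")
```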

  18. Scanning tunneling spectroscopy under large current flow through the sample.

    PubMed

    Maldonado, A; Guillamón, I; Suderow, H; Vieira, S

    2011-07-01

    We describe a method for scanning tunneling microscopy/spectroscopy imaging at very low temperatures while driving a constant electric current of up to some tens of mA through the sample. It provides a new local probe, which we term current-driven scanning tunneling microscopy/spectroscopy. We show spectroscopic and topographic measurements under an applied current in superconducting Al and NbSe₂ at 100 mK. Prospective applications of this local imaging method include local vortex motion experiments and Doppler-shift studies of the local density of states.

  19. Evaluation of Normalization Methods to Pave the Way Towards Large-Scale LC-MS-Based Metabolomics Profiling Experiments

    PubMed Central

    Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya

    2013-01-01

    Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data have been model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data analysis originating from multiple experimental runs. In the second part, we apply cyclic loess normalization to a Leishmania sample. This normalization method removes systematic variability between two measurement blocks over time while maintaining the differential metabolites. In conclusion, normalization allows datasets from different measurement blocks to be pooled over time and increases the statistical power of the analysis, hence paving the way to larger-scale LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven ones when only a few internal standards were used; data-driven methods are also the best option for normalizing datasets from untargeted LC-MS experiments. PMID:23808607
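
    A minimal sketch of cyclic loess normalization as it is commonly implemented (pairwise MA-plot trends removed iteratively); the data, cycle count, and smoothing fraction are illustrative, not the paper's settings.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def cyclic_loess(log_X, n_cycles=3, frac=0.4):
    """Cyclic loess normalization of a (features x samples) matrix of
    log intensities: pairwise MA-plot loess trends are removed iteratively."""
    X = log_X.copy()
    n = X.shape[1]
    for _ in range(n_cycles):
        for i in range(n):
            for j in range(i + 1, n):
                M = X[:, i] - X[:, j]                 # log ratio
                A = (X[:, i] + X[:, j]) / 2.0         # average log intensity
                trend = lowess(M, A, frac=frac, return_sorted=False)
                X[:, i] -= trend / 2.0
                X[:, j] += trend / 2.0
    return X

# Hypothetical demo: 500 features, 4 runs with systematic intensity drift
rng = np.random.default_rng(2)
base = rng.normal(10, 2, (500, 1))
data = base + rng.normal(0, 0.1, (500, 4)) + np.array([0.0, 0.3, 0.6, 0.9])
print("column means before:", data.mean(axis=0).round(2))
print("column means after: ", cyclic_loess(data).mean(axis=0).round(2))
```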

  20. Bacteria holding times for fecal coliform by mFC agar method and total coliform and Escherichia coli by Colilert®-18 Quanti-Tray® method

    USGS Publications Warehouse

    Aulenbach, Brent T.

    2010-01-01

    Bacteria holding-time experiments of up to 62 h were performed on five surface-water samples from four urban stream sites in the vicinity of Atlanta, GA, USA that had relatively high densities of coliform bacteria (Escherichia coli densities were all well above the US Environmental Protection Agency criterion of 126 colonies per 100 mL for recreational waters). Holding-time experiments were done for fecal coliform using the membrane filtration modified fecal coliform (mFC) agar method, and for total coliform and E. coli using the Colilert®-18 Quanti-Tray® method. The precisions of these analytical methods were quantified. The precisions determined for fecal coliform indicated that the upper bound of the ideal range of counts could reasonably be extended upward, which would improve precision. For the Colilert®-18 method, analytical precisions were similar to the theoretical precisions for this method. Fecal and total coliform densities did not change significantly with holding times up to about 27 h. Limited information indicated that fecal coliform densities might be stable for holding times of up to 62 h, whereas total coliform densities might not be stable for holding times greater than about 27 h. E. coli densities were stable for holding times of up to 18 h, a shorter period than indicated by previous studies. These results should be applicable to non-regulatory monitoring sampling designs for similar urban surface-water sample types.

  1. Coherent amplification of X-ray scattering from meso-structures

    DOE PAGES

    Lhermitte, Julien R.; Stein, Aaron; Tian, Cheng; ...

    2017-07-10

    Small-angle X-ray scattering (SAXS) often includes an unwanted background, which increases the measurement time required to resolve the sample structure. This is undesirable in all experiments and may make measurement of dynamic or radiation-sensitive samples impossible. Here, we demonstrate a new technique, applicable when the scattering signal is background-dominated, which reduces the requisite exposure time. Our method exploits coherent interference between the sample and a designed, strongly scattering 'amplifier'. A modified angular correlation function is used to extract the symmetry of the interference term, that is, the scattering arising from the interference between the amplifier and the sample. This enables reconstruction of the sample's symmetry even though the sample scattering itself is well below the intensity of the background scattering. Thus, coherent amplification is used to generate a strong scattering term (well above background), from which the sample scattering is inferred. We validate this method using lithographically defined test samples.
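
    The paper's modified angular correlation function is not reproduced in the abstract; the sketch below shows the plain version of the idea, an angular autocorrelation of ring intensity whose Fourier spectrum exposes the n-fold symmetry, using synthetic data.

```python
import numpy as np

def angular_correlation(I_phi):
    """Angular autocorrelation C(d) = <I(phi) I(phi + d)>_phi on a detector ring."""
    I = I_phi - I_phi.mean()
    return np.array([np.mean(I * np.roll(I, d)) for d in range(len(I))])

# Synthetic ring intensity: 6-fold symmetric signal buried in noise
rng = np.random.default_rng(3)
phi = np.linspace(0, 2 * np.pi, 360, endpoint=False)
I_phi = 1.0 + 0.5 * np.cos(6 * phi) + rng.normal(0, 0.5, phi.size)

C = angular_correlation(I_phi)
spectrum = np.abs(np.fft.rfft(C))
print("dominant symmetry order:", int(np.argmax(spectrum[1:20]) + 1))  # -> 6
```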

  2. Use of experience sampling method to understand the wilderness experience

    Treesearch

    Lynn Anderson

    2002-01-01

    There is a growing body of research documenting the benefits of outdoor adventure and wilderness-based programs with a variety of special populations. Criticisms of this body of research are that it is not grounded in theory and it is outcome-based, with no investigation into the processes causing the behavior change in individuals. This study attempted to investigate...

  3. Missouri Ozark Forest Ecosystem Project: site history, soils, landforms, woody and herbaceous vegetation, down wood, and inventory methods for the landscape experiment.

    Treesearch

    Stephen R. Shifley; Brian L., eds. Brookshire

    2000-01-01

    Describes vegetation and physical site conditions at the initiation (1991-1995) of the Missouri Ozark Forest Ecosystem Project (MOFEP) in the southeastern Missouri Ozarks. Provides detailed information on sampling protocols and summarizes initial conditions of the landscape experiment prior to harvest treatments. Summaries are by plot, by ~800-acre...

  4. Child Maltreatment and Perceived Family Environment as Risk Factors for Adult Rape: Is Child Sexual Abuse the Most Salient Experience?

    ERIC Educational Resources Information Center

    Messman-Moore, T.L.; Brown, A.L.

    2004-01-01

    Objective: Child maltreatment and family functioning were examined as predictors of adult rape in a sample of 925 college women. Method: Information was obtained from retrospective self-report questionnaires. Child sexual abuse (CSA) was assessed with the Life Experiences Questionnaire, child emotional abuse (CEA) and physical abuse (CPA) were…

  5. Preparing for Exit from Sport: A Phenomenological Examination of the Pre-Transition Experiences of Division I Female Intercollegiate Athletes

    ERIC Educational Resources Information Center

    Archer, David Eric

    2010-01-01

    Scope and Method of Study: The purpose of this study was to discover the meanings female intercollegiate athletes ascribe to their experiences preceding exit from NCAA Division I competition. The study sample included five Division I female intercollegiate athletes. Four of these attended a large public research institution in the Southern Plains…

  6. A comparison of moment-based methods of estimation for the log Pearson type 3 distribution

    NASA Astrophysics Data System (ADS)

    Koutrouvelis, I. A.; Canavos, G. C.

    2000-06-01

    The log Pearson type 3 distribution is a very important model in statistical hydrology, especially for modeling annual flood series. In this paper we compare the various methods based on moments for estimating quantiles of this distribution. Besides the methods of direct and mixed moments which were found most successful in previous studies and the well-known indirect method of moments, we develop generalized direct moments and generalized mixed moments methods and a new method of adaptive mixed moments. The last method chooses the orders of two moments for the original observations by utilizing information contained in the sample itself. The results of Monte Carlo experiments demonstrated the superiority of this method in estimating flood events of high return periods when a large sample is available and in estimating flood events of low return periods regardless of the sample size. In addition, a comparison of simulation and asymptotic results shows that the adaptive method may be used for the construction of meaningful confidence intervals for design events based on the asymptotic theory even with small samples. The simulation results also point to the specific members of the class of generalized moments estimates which maintain small values for bias and/or mean square error.
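
    For orientation, here is a minimal sketch of the classical indirect method of moments, one of the estimators compared in the paper: fit a Pearson type 3 distribution to the log-transformed sample and read off a quantile. The flood series is synthetic, and scipy's parameterization of pearson3 is assumed.

```python
import numpy as np
from scipy import stats

def lp3_quantile_indirect(flows, return_period):
    """Indirect method of moments for the log Pearson type 3 distribution:
    fit Pearson III to log-flows via their sample mean, std, and skew."""
    y = np.log10(flows)
    skew = stats.skew(y, bias=False)
    q = stats.pearson3.ppf(1.0 - 1.0 / return_period, skew,
                           loc=y.mean(), scale=y.std(ddof=1))
    return 10.0 ** q

# Hypothetical annual flood series (m^3/s)
rng = np.random.default_rng(4)
flows = 10.0 ** rng.normal(2.0, 0.25, 60)
print(f"estimated 100-year flood: {lp3_quantile_indirect(flows, 100):.0f} m^3/s")
```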

  7. Emotional Experience in Negative Symptoms of Schizophrenia—No Evidence for a Generalized Hedonic Deficit

    PubMed Central

    Oorschot, Margreet; Lataster, Tineke; Thewissen, Viviane; Lardinois, Mariëlle; Wichers, Marieke; van Os, Jim; Delespaul, Philippe; Myin-Germeys, Inez

    2013-01-01

    Background: Deficits in emotion processing are thought to underlie the key negative symptoms flat affect and anhedonia observed in psychotic disorders. This study investigated emotional experience and social behavior in the realm of daily life in a sample of patients with schizophrenia and schizoaffective disorder, stratified by level of negative symptoms. Methods: Emotional experience and behavior of 149 patients with schizophrenia and schizoaffective disorder and 143 controls were explored using the Experience Sampling Method. Results: Patients reported lower levels of positive and higher levels of negative affect compared with controls. High negative symptom patients reported similar emotional stability and capacity to generate positive affect as controls, whereas low negative symptom patients reported increased instability. All participants displayed roughly comparable emotional responses to the company of other people. However, in comparison with controls, patients showed more social withdrawal and preference to be alone while in company, particularly the high negative symptom group. Conclusions: This study revealed no evidence for a generalized hedonic deficit in patients with psychotic spectrum disorders. Lower rather than higher levels of negative symptoms were associated with a pattern of emotional processing which was different from healthy controls. PMID:22021660

  8. The Use of ATR-FTIR in Conjunction with Thermal Analysis Methods for Efficient Identification of Polymer Samples: A Qualitative Multiinstrument Instrumental Analysis Laboratory Experiment

    ERIC Educational Resources Information Center

    Dickson-Karn, Nicole M.

    2017-01-01

    A multi-instrument approach has been applied to the efficient identification of polymers in an upper-division undergraduate instrumental analysis laboratory course. Attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR) is used in conjunction with differential scanning calorimetry (DSC) to identify 18 polymer samples and…

  9. The Nurturant Fathering Scale: A Confirmatory Factor Analysis with an African American Sample of College Students

    ERIC Educational Resources Information Center

    Doyle, Otima; Pecukonis, Edward; Harrington, Donna

    2011-01-01

    Objective: The objective of this study was to test the factor structure of the "Nurturant Fathering Scale" (NFS) among an African American sample in the mid-Atlantic region that have neither Caribbean heritage nor immigration experiences but who do have diverse family structures (N = 212). Method: A confirmatory factor analysis (CFA) was conducted…

  10. The Relationship between Child Abuse, Parental Divorce, and Lifetime Mental Disorders and Suicidality in a Nationally Representative Adult Sample

    ERIC Educational Resources Information Center

    Afifi, Tracie O.; Boman, Jonathan; Fleisher, William; Sareen, Jitender

    2009-01-01

    Objectives: To determine how the experiences of child abuse and parental divorce are related to long-term mental health outcomes using a nationally representative adult sample after adjusting for sociodemographic variables and parental psychopathology. Methods: Data were drawn from the National Comorbidity Survey (NCS, n=5,877; age 15-54 years;…

  11. Some experiences with the viscous-inviscid interaction approach

    NASA Technical Reports Server (NTRS)

    Vandalsem, W. R.; Steger, J. L.; Rao, K. V.

    1987-01-01

    Methods for simulating compressible viscous flow using the viscous-inviscid interaction approach are described. The formulations presented range from the more familiar full-potential/boundary-layer interaction schemes to a method for coupling Euler/Navier-Stokes and boundary-layer algorithms. An effort is made to describe the advantages and disadvantages of each formulation. Sample results are presented which illustrate the applicability of the methods.

  12. Monitoring larval populations of the Douglas-fir tussock moth and the western spruce budworm on permanent plots: sampling methods and statistical properties of data

    Treesearch

    A.R. Mason; H.G. Paul

    1994-01-01

    Procedures for monitoring larval populations of the Douglas-fir tussock moth and the western spruce budworm are recommended based on many years' experience in sampling these species in eastern Oregon and Washington. It is shown that statistically reliable estimates of larval density can be made for a population by sampling host trees in a series of permanent plots in a...

  13. STS 131 Return Samples: Assessment of Air Quality Aboard the Shuttle (STS-131) and International Space Station (19A)

    NASA Technical Reports Server (NTRS)

    James, John T.

    2010-01-01

    The toxicological assessments of 1 grab sample canister (GSC) from the Shuttle are reported in Table 1. Analytical methods have not changed from earlier reports. The recoveries of the 3 surrogates (C-13-acetone, fluorobenzene, and chlorobenzene) from the Shuttle GSC were 100%, 93%, and 101%, respectively. Based on the historical experience using end-of-mission samples, the Shuttle atmosphere was acceptable for human respiration.

  14. Research on the principle and experimentation of optical compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Ji, Yiqun; Shen, Weimin

    2013-12-01

    Optical compressive spectral imaging is a novel spectral imaging technique inspired by compressed sensing, offering advantages such as a reduced volume of acquired data, snapshot imaging, and an increased signal-to-noise ratio. Because sampling quality influences the ultimate imaging quality, previously reported systems matched the sampling interval to the modulation interval, but the depressed sampling rate sacrificed the original spectral resolution. To overcome this defect, the requirement that the sampling interval match the modulation interval is dropped, and the number of spectral channels of the designed experimental device increases more than threefold compared with the previous method. Imaging experiments were carried out with the experimental installation, and the spectral data cube of the target was reconstructed from the acquired compressed image using two-step iterative shrinkage/thresholding algorithms. The experimental results indicate that the number of spectral channels increases effectively and the reconstructed data remain high-fidelity. The images and spectral curves accurately reflect the spatial and spectral character of the target.
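
    The reconstruction step uses two-step iterative shrinkage/thresholding (TwIST); for brevity, the sketch below implements plain ISTA, its one-step ancestor, recovering a sparse spectrum from compressive measurements y = Ax on synthetic data.

```python
import numpy as np

def ista(A, y, lam=0.05, step=None, n_iter=500):
    """Iterative shrinkage/thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)              # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x

# Synthetic sparse spectrum measured through a random coded aperture
rng = np.random.default_rng(5)
n, m = 128, 48
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.random(5) + 0.5
A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))
y = A @ x_true

x_hat = ista(A, y)
print(f"reconstruction error: {np.linalg.norm(x_hat - x_true):.3f}")
```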

  15. Evaluation of airborne asbestos exposure from routine handling of asbestos-containing wire gauze pads in the research laboratory.

    PubMed

    Garcia, Ediberto; Newfang, Daniel; Coyle, Jayme P; Blake, Charles L; Spencer, John W; Burrelli, Leonard G; Johnson, Giffe T; Harbison, Raymond D

    2018-07-01

    Three independent asbestos exposure evaluations were conducted using wire gauze pads, following standard practice in the laboratory setting. All testing occurred in a controlled atmosphere inside an enclosed chamber simulating a laboratory. Separate teams, consisting of a laboratory technician or a technician and an assistant, simulated common tasks involving wire gauze pads, including heating and direct gauze manipulation. Area and personal air samples were collected and evaluated for asbestos consistent with National Institute for Occupational Safety and Health (NIOSH) methods 7400 and 7402 and the Asbestos Hazard Emergency Response Act (AHERA) method. Bulk gauze pad samples were analyzed by polarized light microscopy and transmission electron microscopy to determine asbestos content. Among the air samples, chrysotile was the only asbestos fiber found in the first and third experiments, and tremolite the only one found in the second. None of the air samples contained asbestos in concentrations above the current permissible regulatory levels promulgated by OSHA. These findings indicate that the level of asbestos exposure when working with wire gauze pads in the laboratory setting is much lower than the levels associated with asbestosis or asbestos-related lung cancer and mesothelioma. Copyright © 2018. Published by Elsevier Inc.

  16. Detection of sex chromosome aneuploidies using quantitative fluorescent PCR in the Hungarian population.

    PubMed

    Nagy, Balint; Nagy, Richard Gyula; Lazar, Levente; Schonleber, Julianna; Papp, Csaba; Rigo, Janos

    2015-05-20

    Aneuploidies are the most frequent chromosomal abnormalities at birth. Autosomal aneuploidies cause serious malformations such as trisomy 21, trisomy 18 and trisomy 13, whereas sex chromosome aneuploidies cause less severe syndromes. For the detection of these aneuploidies, the "gold standard" method is the cytogenetic analysis of fetal cells: karyograms show all numerical and structural abnormalities, but reports take 2-4 weeks. Molecular biological methods, namely FISH and quantitative fluorescent PCR, were introduced to overcome the long culture time. In this work we present our experience with a commercial kit for the detection of sex chromosome aneuploidies. We analyzed 20,173 amniotic fluid samples in our department over the period 2006-2013. Conventional cytogenetic analysis was performed on the samples, and we checked the reliability of quantitative fluorescent PCR and DNA fragment analysis on those samples in which a sex chromosomal aneuploidy was diagnosed. Among the 20,173 amniotic fluid samples we found 50 with sex chromosome aneuploidy: 19 samples with 45,X, 17 with 47,XXY, 9 with 47,XXX and 5 with 47,XYY karyotypes. The applied quantitative fluorescent PCR and DNA fragment analysis methods were able to detect all of these sex chromosome aneuploidies. Quantitative fluorescent PCR is a fast and reliable method for the detection of sex chromosome aneuploidies. Copyright © 2015. Published by Elsevier B.V.

  17. A method for three-dimensional quantitative observation of the microstructure of biological samples

    NASA Astrophysics Data System (ADS)

    Wang, Pengfei; Chen, Dieyan; Ma, Wanyun; Wu, Hongxin; Ji, Liang; Sun, Jialin; Lv, Danyu; Zhang, Lu; Li, Ying; Tian, Ning; Zheng, Jinggao; Zhao, Fengying

    2009-07-01

    Contemporary biology has entered the era of cell and molecular biology, and researchers now study the mechanisms of all kinds of biological phenomena at the microscopic level. Accurate description of the microstructure of biological samples is an urgent need in many biomedical experiments. This paper introduces a method for three-dimensional quantitative observation of the microstructure of living biological samples based on two-photon laser scanning microscopy (TPLSM). TPLSM is a novel kind of fluorescence microscopy that excels in low optical damage, high resolution, deep penetration, and suitability for three-dimensional (3D) imaging. Fluorescently stained samples were observed by TPLSM, and their original shapes were then obtained through 3D image reconstruction. The spatial distribution of all objects in the samples, as well as their volumes, could be derived by image segmentation and mathematical calculation, finally yielding a 3D, quantitative depiction of the samples' microstructure. We applied this method to quantitative analysis of the spatial distribution of chromosomes in meiotic mouse oocytes at metaphase, with excellent results.
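
    A minimal sketch of the segmentation-and-volume step under simple assumptions: Otsu thresholding and connected-component labeling stand in for whatever segmentation the authors actually used, and the stack is synthetic.

```python
import numpy as np
from skimage import filters, measure

def object_volumes(stack, voxel_volume_um3):
    """Threshold a 3D fluorescence stack, label connected objects,
    and return each object's centroid and volume."""
    mask = stack > filters.threshold_otsu(stack)
    labels = measure.label(mask)
    return [(p.centroid, p.area * voxel_volume_um3)
            for p in measure.regionprops(labels)]

# Synthetic stack with two bright blobs on a noisy background
rng = np.random.default_rng(6)
stack = rng.normal(10, 2, (32, 64, 64))
stack[10:14, 20:26, 20:26] += 50
stack[20:25, 40:47, 40:47] += 50

for centroid, vol in object_volumes(stack, voxel_volume_um3=0.1):
    print(f"centroid (z, y, x) = {np.round(centroid, 1)}, volume = {vol:.1f} um^3")
```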

  18. Development of a Thermal Desorption Gas Chromatography-Mass Spectrometry Analysis Method for Airborne Dichlorodiphenyltrichloroethane

    DTIC Science & Technology

    2013-05-28

    …span of 1-250 ng DDT. Furthermore, laboratory and field experiments utilizing this method confirmed that significant DDT concentration differences… different between the two sample introduction methods when comparing the same DDT mass, which may be due to differences in the precision of split… degradation of DDT was significantly different between the liquid and TD methods (t-test; p < 0.001). For TD analyses the relative percent…

  19. Aerodynamic laser-heated contactless furnace for neutron scattering experiments at elevated temperatures

    NASA Astrophysics Data System (ADS)

    Landron, Claude; Hennet, Louis; Coutures, Jean-Pierre; Jenkins, Tudor; Alétru, Chantal; Greaves, Neville; Soper, Alan; Derbyshire, Gareth

    2000-04-01

    Conventional radiative furnaces require sample containment that encourages contamination at elevated temperatures, and they generally need windows that restrict the entrance and exit solid angles required for diffraction and scattering measurements. We describe a contactless, windowless furnace based on aerodynamic levitation and laser heating, designed for high-temperature neutron scattering experiments. Data from initial experiments are reported for crystalline and amorphous oxides at temperatures up to 1900 °C, using the spallation neutron source ISIS together with our laser-heated aerodynamic levitator. Accurate reproduction of thermal expansion coefficients and radial distribution functions has been obtained, demonstrating the utility of aerodynamic levitation for neutron scattering methods.

  20. Free zone electrophoresis simulation of static column electrophoresis in microgravity on shuttle flight STS-3

    NASA Technical Reports Server (NTRS)

    Todd, P. W.; Hjerten, S.

    1985-01-01

    Experiments were designed to replicate, as closely as possible in 1-G, the conditions of the STS-3 red blood cell (RBC) experiments. Free zone electrophoresis was the method of choice, since it minimizes the role of gravity in cell migration. The physical conditions of the STS-3 experiments were used, and human and rabbit RBCs fixed by the same method were the test particles. The effects of cell concentration, electroosmotic mobility, and sample composition were tested in order to seek explanations for the STS-3 results and to provide data on cell concentration effects for future zero-G separations with the continuous-flow zero-G electrophoresis separator.

  1. Meta-Stable Magnetic Domain States That Prevent Reliable Absolute Palaeointensity Experiments Revealed By Magnetic Force Microscopy

    NASA Astrophysics Data System (ADS)

    de Groot, L. V.; Fabian, K.; Bakelaar, I. A.; Dekkers, M. J.

    2014-12-01

    Obtaining reliable estimates of the absolute palaeointensity of the Earth's magnetic field is notoriously difficult. Many methods to obtain paleointensities from suitable records such as lavas and archeological artifacts involve heating the samples. These heating steps are believed to induce 'magnetic alteration' - a process that is still poorly understood but prevents obtaining correct paleointensity estimates. To observe this magnetic alteration directly, we imaged the magnetic domain state of titanomagnetite particles - a common carrier of the magnetic remanence in samples used for paleointensity studies. We selected samples from the 1971 flow of Mt. Etna, from a site that systematically yields underestimates of the known intensity of the paleofield, in spite of rigorous testing by various groups. Magnetic force microscope images were taken before and after a heating step typically used in absolute palaeointensity experiments. Before heating, the samples feature distinct, blocky domains that sometimes resemble a classical magnetite domain structure. After imparting a partial thermoremanent magnetization at a temperature often critical to paleointensity experiments (250 °C), the domain state of the same titanomagnetite grains changes into curvier, wavy domains. Furthermore, these structures appeared to be unstable over time: after one year of storage in a magnetic field-free environment, the domain states evolved into a viscous remanent magnetization state. Our observations may qualitatively explain reported underestimates from technically successful paleointensity experiments for this site and for other sites reported previously. They may also (partially) explain intriguing observations such as the 'drawer storage effect' of Shaar et al. (EPSL, 2011) and the viscous magnetizations observed by Muxworthy and Williams (JGR, 2006). The implications of our study extend to all palaeointensity methods that involve heating.

  2. TargetSearch--a Bioconductor package for the efficient preprocessing of GC-MS metabolite profiling data.

    PubMed

    Cuadros-Inostroza, Alvaro; Caldana, Camila; Redestig, Henning; Kusano, Miyako; Lisec, Jan; Peña-Cortés, Hugo; Willmitzer, Lothar; Hannah, Matthew A

    2009-12-16

    Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data.

  3. TargetSearch - a Bioconductor package for the efficient preprocessing of GC-MS metabolite profiling data

    PubMed Central

    2009-01-01

    Background: Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. Results: We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. Conclusions: TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data. PMID:20015393

  4. NASA Bioculture System: From Experiment Definition to Flight Payload

    NASA Technical Reports Server (NTRS)

    Sato, Kevin Y.; Almeida, Eduardo; Austin, Edward M.

    2014-01-01

    Starting in 2015, the NASA Bioculture System will be available to the science community for conducting cell biology and microbiology experiments on the ISS. The Bioculture System carries ten environmentally independent Cassettes, which house the experiments. The closed-loop fluid flow path in each Cassette provides a perfusion-based method for maintaining specimen cultures in a shear-free environment, using a biochamber based on porous hollow-fiber bioreactor technology. Each Cassette contains an incubator and a separate insulated refrigerator compartment for storage of media, samples, nutrients and additives. The hardware is capable of fully automated or manual specimen culturing and processing, including in-flight experiment initiation, sampling and fixation, culturing of specimens up to BSL-2, and running up to 10 independent cultures in parallel for statistical analysis. Incubation and culturing of specimens in the Bioculture System departs from standard laboratory culturing methods, so it is critical that the PI understand the pre-flight testing required to use the Bioculture System successfully for an on-orbit experiment. Overall, the PI will conduct a series of ground tests to define flight experiment and on-orbit implementation requirements, verify biocompatibility, and determine baseline bioreactor conditions. The ground test processes for utilization of the Bioculture System, from experiment selection to flight, will be reviewed, along with pre-flight test schedules and the use of COTS ground test equipment (CellMax and FiberCell systems) and the Bioculture System itself.

  5. Multi-locus analysis of genomic time series data from experimental evolution.

    PubMed

    Terhorst, Jonathan; Schlötterer, Christian; Song, Yun S

    2015-04-01

    Genomic time series data generated by evolve-and-resequence (E&R) experiments offer a powerful window into the mechanisms that drive evolution. However, standard population genetic inference procedures do not account for sampling serially over time, and new methods are needed to make full use of modern experimental evolution data. To address this problem, we develop a Gaussian process approximation to the multi-locus Wright-Fisher process with selection over a time course of tens of generations. The mean and covariance structure of the Gaussian process are obtained by computing the corresponding moments in discrete-time Wright-Fisher models conditioned on the presence of a linked selected site. This enables our method to account for the effects of linkage and selection, both along the genome and across sampled time points, in an approximate but principled manner. We first use simulated data to demonstrate the power of our method to correctly detect, locate and estimate the fitness of a selected allele from among several linked sites. We study how this power changes for different values of selection strength, initial haplotypic diversity, population size, sampling frequency, experimental duration, number of replicates, and sequencing coverage depth. In addition to providing quantitative estimates of selection parameters from experimental evolution data, our model can be used by practitioners to design E&R experiments with requisite power. We also explore how our likelihood-based approach can be used to infer other model parameters, including effective population size and recombination rate. Then, we apply our method to analyze genome-wide data from a real E&R experiment designed to study the adaptation of D. melanogaster to a new laboratory environment with alternating cold and hot temperatures.
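
    A minimal sketch of the forward model underlying such analyses: a discrete Wright-Fisher population with a selected allele, "sequenced" at a few time points the way an E&R time course samples replicate populations. Parameters are illustrative, and the paper's Gaussian process approximation is not reproduced here.

```python
import numpy as np

def wright_fisher(N, s, p0, generations, rng):
    """Simulate allele frequency under the Wright-Fisher model with selection:
    each generation, selection shifts p, then binomial drift resamples 2N copies."""
    p, traj = p0, [p0]
    for _ in range(generations):
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))   # haploid selection
        p = rng.binomial(2 * N, p_sel) / (2 * N)
        traj.append(p)
    return np.array(traj)

rng = np.random.default_rng(7)
traj = wright_fisher(N=300, s=0.05, p0=0.1, generations=60, rng=rng)
# "Sequence" every 10 generations at 50x coverage, as in an E&R time course
for g in range(0, 61, 10):
    reads = rng.binomial(50, traj[g])
    print(f"gen {g:2d}: true p = {traj[g]:.3f}, observed = {reads}/50")
```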

  6. Women's Awareness of, Interest in, and Experiences with Long-acting Reversible and Permanent Contraception.

    PubMed

    Burns, Bridgit; Grindlay, Kate; Dennis, Amanda

    2015-01-01

    Long-acting reversible contraception (LARC) and sterilization are popular contraceptive methods. However, they have been associated with safety concerns and coercive practices. We aimed to understand women's opinions and experiences related to these methods, including whether the methods' fraught histories influence use or interest. Between May and July 2013, we conducted an online survey with a convenience sample of 520 women aged 14 to 45. We used quota sampling to ensure women of color were at least 60% of our sample. Descriptive statistics, χ² tests, and multivariable logistic regression were used to estimate participants' awareness of, interest in, and experiences with LARCs and sterilization. Overall, 30% of women reported current LARC use and 67% interest in future LARC use. Four percent reported sterilization use and 48% interest in future sterilization. In multivariate analyses, current LARC use was lower among Asian women versus White women (odds ratio [OR], 0.24), and interest in future use was higher among women aged 14 to 24 versus 35 to 45 (OR, 5.49). Interest in sterilization was higher among women aged 14 to 24 and 25 to 34 versus 35 to 45 (ORs, 3.29-3.66) and women with disabilities (OR, 1.64), and lower among Black compared with White women (OR, 0.41). Method misperceptions were evident, and concerns about contraceptive coercion were reported. Concerns about contraceptive coercion were not predominant reasons for noninterest in LARCs and sterilization, but were reported by some participants. Lower sterilization interest among Black women and higher sterilization interest among women with disabilities warrant further research. Efforts to address misperceptions about LARCs and sterilization, including their safety and efficacy, are needed. Copyright © 2015 Jacobs Institute of Women's Health. Published by Elsevier Inc. All rights reserved.

  7. A novel membrane inlet mass spectrometer method to measure ¹⁵NH₄⁺ for isotope-enrichment experiments in aquatic ecosystems.

    PubMed

    Yin, Guoyu; Hou, Lijun; Liu, Min; Liu, Zhanfei; Gardner, Wayne S

    2014-08-19

    Nitrogen (N) pollution in aquatic ecosystems has attracted much attention over the past decades, but the dynamics of this bioreactive element are difficult to measure in aquatic oxygen-transition environments. Nitrogen-transformation experiments often require measurement of ¹⁵N-ammonium (¹⁵NH₄⁺) ratios in small-volume ¹⁵N-enriched samples. Published methods to determine N isotope ratios of dissolved ammonium require large samples and/or costly equipment and effort. We present a novel ("OX/MIMS") method to determine N isotope ratios for ¹⁵NH₄⁺ in experimental waters previously enriched with ¹⁵N compounds. Dissolved reduced ¹⁵N (dominated by ¹⁵NH₄⁺) is oxidized with hypobromite-iodine to nitrogen gas (²⁹N₂ and/or ³⁰N₂) and analyzed by membrane inlet mass spectrometry (MIMS) to quantify ¹⁵NH₄⁺ concentrations. The N isotope ratios, obtained by comparing the ¹⁵NH₄⁺ concentrations to total ammonium concentrations (via autoanalyzer), are compared to the ratios of prepared standards. The OX/MIMS method requires only small sample volumes of water (ca. 12 mL) or sediment slurries and is rapid, convenient, accurate, and precise (R² = 0.9994, p < 0.0001) over a range of salinities and ¹⁵N/¹⁴N ratios. It can provide the data needed to quantify rates of ammonium regeneration, potential ammonium uptake, and dissimilatory nitrate reduction to ammonium (DNRA). Isotope ratio results agreed closely (R = 0.998, P = 0.001) with those determined independently by isotope ratio mass spectrometry for DNRA measurements or by ammonium isotope retention time shift liquid chromatography for water-column N-cycling experiments. Application of OX/MIMS should simplify experimental approaches and improve understanding of N-cycling rates and fate in a variety of freshwater and marine environments.

  8. Using machine learning tools to model complex toxic interactions with limited sampling regimes.

    PubMed

    Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W

    2013-03-19

    A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by only one or a few stressors in natural systems. Thus, linking laboratory experiments, which are limited by practical considerations to a few stressors at a few levels, to real-world conditions is constrained. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of those interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled within an experimental design of practical size. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon the selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
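
    A minimal sketch of the two-step process on synthetic data: stressor space is sampled uniformly at random to set experimental conditions, and a small neural network then extracts a model of the interactions. The response surface and network architecture below are our stand-ins, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)

# Step 1: randomly sample a 3-stressor "hyperspace" to set experimental conditions
X = rng.uniform(0.0, 1.0, (400, 3))

# Stand-in bioassay: nonlinear response with a stressor-stressor interaction term
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(0, 0.05, 400)

# Step 2: extract a model of the complex interactions with an ANN
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(X[:300], y[:300])
print(f"held-out R^2 = {net.score(X[300:], y[300:]):.3f}")
```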

  9. Methods for processing microarray data.

    PubMed

    Ares, Manuel

    2014-02-01

    Quality control must be maintained at every step of a microarray experiment, from RNA isolation through statistical evaluation. Here we provide suggestions for analyzing microarray data. Because the utility of the results depends directly on the design of the experiment, the first critical step is to ensure that the experiment can be properly analyzed and interpreted. What is the biological question? What is the best way to perform the experiment? How many replicates will be required to obtain the desired statistical resolution? Next, the samples must be prepared, pass quality controls for integrity and representation, and be hybridized and scanned. Also, slides with defects, missing data, high background, or weak signal must be rejected. Data from individual slides must be normalized and combined so that the data are as free of systematic bias as possible. The third phase is to apply statistical filters and tests to the data to determine genes (1) expressed above background, (2) whose expression level changes in different samples, and (3) whose RNA-processing patterns or protein associations change. Next, a subset of the data should be validated by an alternative method, such as reverse transcription-polymerase chain reaction (RT-PCR). Provided that this endorses the general conclusions of the array analysis, gene sets whose expression, splicing, polyadenylation, protein binding, etc. change in different samples can be classified with respect to function, sequence motif properties, as well as other categories to extract hypotheses for their biological roles and regulatory logic.
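
    One concrete example of the normalization step, shown here as quantile normalization across arrays; this particular algorithm is our assumption for illustration, not a method prescribed by the abstract.

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize a (genes x arrays) matrix: force every array to share
    the same intensity distribution (the row-wise mean of sorted values)."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    mean_sorted = np.sort(X, axis=0).mean(axis=1)
    return mean_sorted[ranks]

# Hypothetical demo: three arrays, one with a global intensity bias
rng = np.random.default_rng(9)
X = rng.lognormal(6, 1, (1000, 3))
X[:, 2] *= 1.8                        # systematic bias on array 3
print("medians before:", np.median(X, axis=0).round(0))
print("medians after: ", np.median(quantile_normalize(X), axis=0).round(0))
```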

  10. Robust and transferable quantification of NMR spectral quality using IROC analysis

    NASA Astrophysics Data System (ADS)

    Zambrello, Matthew A.; Maciejewski, Mark W.; Schuyler, Adam D.; Weatherby, Gerard; Hoch, Jeffrey C.

    2017-12-01

    Non-Fourier methods are increasingly utilized in NMR spectroscopy because of their ability to handle nonuniformly-sampled data. However, non-Fourier methods present unique challenges due to their nonlinearity, which can produce nonrandom noise and render conventional metrics for spectral quality such as signal-to-noise ratio unreliable. The lack of robust and transferable metrics (i.e. applicable to methods exhibiting different nonlinearities) has hampered comparison of non-Fourier methods and nonuniform sampling schemes, preventing the identification of best practices. We describe a novel method, in situ receiver operating characteristic analysis (IROC), for characterizing spectral quality based on the Receiver Operating Characteristic curve. IROC utilizes synthetic signals added to empirical data as "ground truth", and provides several robust scalar-valued metrics for spectral quality. This approach avoids problems posed by nonlinear spectral estimates, and provides a versatile quantitative means of characterizing many aspects of spectral quality. We demonstrate applications to parameter optimization in Fourier and non-Fourier spectral estimation, critical comparison of different methods for spectrum analysis, and optimization of nonuniform sampling schemes. The approach will accelerate the discovery of optimal approaches to nonuniform sampling experiment design and non-Fourier spectrum analysis for multidimensional NMR.
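
    A hedged sketch of the core idea: with synthetic signals injected at known positions serving as ground truth, spectral quality can be scored by how well the spectrum separates signal bins from empty bins, summarized as an area under the ROC curve. The actual IROC metrics differ in detail.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def iroc_style_score(spectrum, injected_idx):
    """Score spectral quality as the AUC for discriminating bins that contain
    injected synthetic signals ("ground truth") from empty bins."""
    labels = np.zeros(spectrum.size, dtype=int)
    labels[injected_idx] = 1
    return roc_auc_score(labels, np.abs(spectrum))

# Synthetic 1D "spectrum": noise plus peaks at known injected positions
rng = np.random.default_rng(10)
spectrum = rng.normal(0, 1, 2048)
injected = rng.choice(2048, 20, replace=False)
spectrum[injected] += 5.0
print(f"spectral quality (AUC) = {iroc_style_score(spectrum, injected):.3f}")
```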

  11. Rapid-Viability PCR Method for Detection of Live, Virulent Bacillus anthracis in Environmental Samples

    PubMed Central

    Létant, Sonia E.; Murphy, Gloria A.; Alfaro, Teneile M.; Avila, Julie R.; Kane, Staci R.; Raber, Ellen; Bunt, Thomas M.; Shah, Sanjiv R.

    2011-01-01

    In the event of a biothreat agent release, hundreds of samples would need to be rapidly processed to characterize the extent of contamination and determine the efficacy of remediation activities. Current biological agent identification and viability determination methods are both labor- and time-intensive such that turnaround time for confirmed results is typically several days. In order to alleviate this issue, automated, high-throughput sample processing methods were developed in which real-time PCR analysis is conducted on samples before and after incubation. The method, referred to as rapid-viability (RV)-PCR, uses the change in cycle threshold after incubation to detect the presence of live organisms. In this article, we report a novel RV-PCR method for detection of live, virulent Bacillus anthracis, in which the incubation time was reduced from 14 h to 9 h, bringing the total turnaround time for results below 15 h. The method incorporates a magnetic bead-based DNA extraction and purification step prior to PCR analysis, as well as specific real-time PCR assays for the B. anthracis chromosome and pXO1 and pXO2 plasmids. A single laboratory verification of the optimized method applied to the detection of virulent B. anthracis in environmental samples was conducted and showed a detection level of 10 to 99 CFU/sample with both manual and automated RV-PCR methods in the presence of various challenges. Experiments exploring the relationship between the incubation time and the limit of detection suggest that the method could be further shortened by an additional 2 to 3 h for relatively clean samples. PMID:21764960
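
    The viability call in RV-PCR rests on the change in cycle threshold (Ct) across incubation: growth of live organisms makes amplification occur markedly earlier. A minimal sketch follows; the cutoff value is a hypothetical placeholder, since the validated threshold is not stated in this abstract.

```python
def rv_pcr_viable(ct_before, ct_after, delta_ct_cutoff=6.0):
    """Call a sample positive for live organisms when amplification occurs
    markedly earlier after incubation (growth lowers Ct).
    The 6-cycle cutoff is a hypothetical placeholder."""
    if ct_after is None:            # no amplification after incubation
        return False
    if ct_before is None:           # below detection before, positive after
        return True
    return (ct_before - ct_after) >= delta_ct_cutoff

print(rv_pcr_viable(ct_before=34.2, ct_after=22.8))  # True: ~11.4-cycle drop
print(rv_pcr_viable(ct_before=30.1, ct_after=29.5))  # False: dead/residual DNA
```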

  12. Systematic random sampling of the comet assay.

    PubMed

    McArt, Darragh G; Wasson, Gillian R; McKerr, George; Saetzler, Kurt; Reed, Matt; Howard, C Vyvyan

    2009-07-01

    The comet assay is a technique used to quantify DNA damage and repair at a cellular level. In the assay, cells are embedded in agarose and the cellular content is stripped away leaving only the DNA trapped in an agarose cavity which can then be electrophoresed. The damaged DNA can enter the agarose and migrate while the undamaged DNA cannot and is retained. DNA damage is measured as the proportion of the migratory 'tail' DNA compared to the total DNA in the cell. The fundamental basis of these arbitrary values is obtained in the comet acquisition phase using fluorescence microscopy with a stoichiometric stain in tandem with image analysis software. Current methods deployed in such an acquisition are expected to be both objectively and randomly obtained. In this paper we examine the 'randomness' of the acquisition phase and suggest an alternative method that offers both objective and unbiased comet selection. In order to achieve this, we have adopted a survey sampling approach widely used in stereology, which offers a method of systematic random sampling (SRS). This is desirable as it offers an impartial and reproducible method of comet analysis that can be used both manually and in automated systems. By making use of an unbiased sampling frame and using microscope verniers, we are able to increase the precision of estimates of DNA damage. Results obtained from a multiple-user pooled variation experiment showed that the SRS technique attained a lower variability than the traditional approach. A single-user repetition experiment showed greater individual variances while not being detrimental to overall averages. This would suggest that the SRS method offers a better reflection of DNA damage for a given slide and also offers better user reproducibility.
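
    The sampling scheme itself is simple to state: choose one random start within the sampling interval, then take every k-th field of view. A sketch, with hypothetical parameter values:

      import numpy as np

      def systematic_random_sample(n_fields, step, rng=None):
          # Systematic random sampling: one random start, then a fixed stride.
          # Every field of view has equal inclusion probability, so comet
          # selection is unbiased yet fully reproducible given the start.
          rng = rng or np.random.default_rng()
          start = rng.integers(0, step)
          return np.arange(start, n_fields, step)

      fields = systematic_random_sample(n_fields=200, step=10)  # ~20 fields per slide
      print(fields)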

  13. New Insights into Prebiotic Chemistry from Old Archived Miller Extracts

    NASA Technical Reports Server (NTRS)

    Parker, Eric T.; Cleaves, H. James; Dworkin, Jason P.; Glavin, Daniel P.; Callahan, Michael P.; Aubrey, Andrew D.; Lazcano, Antonio; Bada, Jeffrey L.

    2011-01-01

    Following the discovery of an archived set of samples from Stanley Miller's early experiments, analyses were undertaken to better understand the diversity of compounds produced from electric discharges acting on reducing gas mixtures. The paper chromatography methods that Miller used in the 1950s were only capable of detecting a few amino acids and were unable to provide substantial quantitative data relative to today's techniques. Current analytical techniques are much more sensitive and selective, and are capable of precisely quantifying a much larger range of amino acids and their enantiomeric abundances. In one study, preserved dried samples produced by Miller using a lesser-known volcanic apparatus were analyzed; this apparatus differed from Miller's classic design in that it utilized an aspirator that injected steam into the electric discharge chamber, simulating a volcanic eruption. The volcanic apparatus produced a wider variety of amino acids than the classic configuration. Prebiotic compounds synthesized in these environments may have locally accumulated where they could have undergone further processing. An additional preserved set of samples, from an experiment conducted in 1958, was also found in Miller's archived collection. These samples, which had been generated using a mixture of CH4, NH3, H2S and CO2, were collected, catalogued, and stored by Miller, but for unknown reasons were never studied. In our analyses a total of 23 amino acids and 4 amines, including 7 organosulfur compounds, were detected in these samples. The major amino acids with chiral centers are racemic within the accuracy of the measurements, indicating that they are not contaminants introduced during sample storage. This experiment marks the first synthesis of sulfur amino acids from spark discharge experiments designed to imitate primordial environments. The relative yields of some amino acids, in particular the isomers of aminobutyric acid, are the highest ever found in a spark discharge experiment. The simulated primordial conditions used by Miller in these experiments may serve as a model for early volcanic plume chemistry and provide insight into the possible roles such plumes may have played in abiotic organic synthesis. Additionally, the overall abundances of the synthesized amino acids in the presence of H2S are very similar to the abundances found in some carbonaceous meteorites, suggesting that H2S may have played an important role in prebiotic reactions in early solar system environments. Although experiments using a variety of gases as components of the primordial Earth's atmospheric composition and a spark discharge apparatus configured according to Miller's original or volcanic design can be readily carried out, the unique opportunity to investigate samples prepared by the pioneer in abiotic synthesis using state-of-the-art analytical methods is of considerable historic interest.

  14. Modeling steady-state experiments with a scanning electrochemical microscope involving several independent diffusing species using the boundary element method.

    PubMed

    Sklyar, Oleg; Träuble, Markus; Zhao, Chuan; Wittstock, Gunther

    2006-08-17

    The BEM algorithm developed earlier for steady-state experiments in the scanning electrochemical microscopy (SECM) feedback mode has been expanded to allow for the treatment of more than one independently diffusing species. This allows the treatment of substrate-generation/tip-collection (SG/TC) SECM experiments. The simulations revealed the interrelation of sample layout, local kinetics, imaging conditions, and the quality of the obtained SECM images. Resolution in the SECM SG/TC images has been evaluated, and it depends on several factors. For most practical situations, the resolution is limited by the diffusion profiles of the sample. When a dissolved compound is converted at the sample (e.g., oxygen reduction or enzymatic reaction at the sample), the working distance should be significantly larger than in SECM feedback experiments (ca. 3 r_T for RG = 5) in order to avoid diffusional shielding of the active regions on the sample by the UME body. The resolution ability also depends on the kinetics of the active regions. The best resolution can be expected if all the active regions cause the same flux. In one simulated example, which might mimic a possible scenario of a low-density protein array, considerable compromises in the resolving power were noted when the fluxes from two neighboring spots differ by more than a factor of 2.

  15. Empirical entropic contributions in computational docking: evaluation in APS reductase complexes.

    PubMed

    Chang, Max W; Belew, Richard K; Carroll, Kate S; Olson, Arthur J; Goodsell, David S

    2008-08-01

    The results from reiterated docking experiments may be used to evaluate an empirical vibrational entropy of binding in ligand-protein complexes. We have tested several methods for evaluating the vibrational contribution to binding of 22 nucleotide analogues to the enzyme APS reductase. These include two cluster size methods that measure the probability of finding a particular conformation, a method that estimates the extent of the local energetic well by looking at the scatter of conformations within clustered results, and an RMSD-based method that uses the overall scatter and clustering of all conformations. We have also directly characterized the local energy landscape by randomly sampling around docked conformations. The simple cluster size method shows the best performance, improving the identification of correct conformations in multiple docking experiments. 2008 Wiley Periodicals, Inc.
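
    One way to read the cluster-size idea, as a hedged sketch rather than the authors' actual scoring function: treat cluster occupancy across reiterated dockings as a probability and penalize rarely visited conformations by an entropic term of order RT ln p (all numbers illustrative):

      import numpy as np

      # Hypothetical outcome of 100 reiterated docking runs: how many runs
      # landed in each conformational cluster, and each cluster's docked energy.
      cluster_sizes = np.array([52, 23, 14, 7, 4])
      energies = np.array([-9.1, -9.6, -8.8, -9.4, -8.5])   # kcal/mol, illustrative

      RT = 0.593                                 # kcal/mol near 298 K
      p = cluster_sizes / cluster_sizes.sum()    # occupancy as a probability
      adjusted = energies - RT * np.log(p)       # smaller penalty for large clusters

      best = np.argmin(adjusted)
      print(f"cluster {best}: E={energies[best]} -> E_adj={adjusted[best]:.2f} kcal/mol")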

  16. Use of the experience sampling method in the context of clinical trials

    PubMed Central

    Verhagen, Simone J W; Hasmi, Laila; Drukker, Marjan; van Os, J; Delespaul, Philippe A E G

    2016-01-01

    Objective The experience sampling method (ESM) is a structured diary technique to appraise subjective experiences in daily life. It is applied in psychiatric patients, as well as in patients with somatic illness. Despite the potential of ESM assessment, the improved logistics and its increased administration in research, its use in clinical trials remains limited. This paper introduces ESM for clinical trials in psychiatry and beyond. Methods ESM is an ecologically valid method that yields a comprehensive view of an individual's daily life. It allows the assessment of various constructs (eg, quality of life, psychopathology) and psychological mechanisms (eg, stress-sensitivity, coping). These constructs are difficult to assess using cross-sectional questionnaires. ESM can be applied in treatment monitoring, as an ecological momentary intervention, in clinical trials, or in single case clinical trials. Technological advances (eg, smartphone applications) make its implementation easier. Results Advantages of ESM are highlighted and disadvantages are discussed. Furthermore, the ecological nature of ESM data and its consequences are explored, including the potential pitfalls of ambiguously formulated research questions and the specificities of ESM in statistical analyses. The last section focuses on ESM in relation to clinical trials and discusses its future use in optimising clinical decision-making. Conclusions ESM can be a valuable asset in clinical trial research and should be used more often to study the benefits of treatment in psychiatry and somatic health. PMID:27443678
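
    As a concrete illustration of the sampling logistics (not taken from this paper), a common ESM design emits one beep at a random moment within each fixed block of the waking day, so prompts are unpredictable to the participant yet evenly spread; a sketch with hypothetical scheduling parameters:

      import random

      def esm_schedule(n_blocks=10, block_minutes=90, day_start=7 * 60 + 30, seed=None):
          # One random beep per fixed block ("stratified random" sampling):
          # unpredictable moments, but coverage across the whole day.
          rng = random.Random(seed)
          beeps = [day_start + b * block_minutes + rng.randrange(block_minutes)
                   for b in range(n_blocks)]
          return [f"{t // 60:02d}:{t % 60:02d}" for t in beeps]

      print(esm_schedule(seed=1))  # e.g. ['07:48', '09:25', ...]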

  17. Teachers' emotional experiences and exhaustion as predictors of emotional labor in the classroom: an experience sampling study.

    PubMed

    Keller, Melanie M; Chang, Mei-Lin; Becker, Eva S; Goetz, Thomas; Frenzel, Anne C

    2014-01-01

    Emotional exhaustion (EE) is the core component in the study of teacher burnout, with significant impact on teachers' professional lives. Yet, its relation to teachers' emotional experiences and emotional labor (EL) during instruction remains unclear. Thirty-nine German secondary teachers were surveyed about their EE (trait), and via the experience sampling method on their momentary (state; N = 794) emotional experiences (enjoyment, anxiety, anger) and momentary EL (suppression, faking). Teachers reported that in 99 and 39% of all lessons, they experienced enjoyment and anger, respectively, whereas they experienced anxiety less frequently. Teachers reported suppressing or faking their emotions during roughly a third of all lessons. Furthermore, EE was reflected in teachers' decreased experiences of enjoyment and increased experiences of anger. On an intra-individual level, all three emotions predict EL, whereas on an inter-individual level, only anger evokes EL. Explained variances in EL (within: 39%, between: 67%) stress the relevance of emotions in teaching and within the context of teacher burnout. Beyond implying the importance of reducing anger, our findings suggest the potential of enjoyment lessening EL and thereby reducing teacher burnout.

  18. Uncertainties in Air Exchange using Continuous-Injection, Long-Term Sampling Tracer-Gas Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sherman, Max H.; Walker, Iain S.; Lunden, Melissa M.

    2013-12-01

    The PerFluorocarbon Tracer (PFT) method is a low-cost approach commonly used for measuring air exchange in buildings using tracer gases. It is a specific application of the more general Continuous-Injection, Long-Term Sampling (CILTS) method. The technique is widely used but there has been little work on understanding the uncertainties (both precision and bias) associated with its use, particularly given that it is typically deployed by untrained or lightly trained people to minimize experimental costs. In this article we conduct a first-principles error analysis to estimate the uncertainties and then compare that analysis to CILTS measurements that were over-sampled, through the use of multiple tracers and emitter and sampler distribution patterns, in three houses. We find that the CILTS method can have an overall uncertainty of 10-15% in ideal circumstances, but that even in highly controlled field experiments done by trained experimenters expected uncertainties are about 20%. In addition, there are many field conditions (such as open windows) where CILTS is not likely to provide any quantitative data. Even avoiding the worst situations of assumption violations, CILTS should be considered as having something like a "factor of two" uncertainty for the broad field trials in which it is typically used. We provide guidance on how to deploy CILTS and design the experiment to minimize uncertainties.
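
    The flavor of such a first-principles analysis can be sketched as follows: the steady-state CILTS estimate is airflow = injection rate / tracer concentration, so fractional errors combine in quadrature; the component uncertainties below are illustrative, not the paper's values:

      import math

      def cilts_airflow(emission_rate, concentration):
          # Steady-state CILTS estimate: outdoor airflow Q = injection rate / concentration.
          return emission_rate / concentration

      def relative_uncertainty(u_emission, u_concentration, u_steady_state):
          # First-order propagation for a quotient, plus a term standing in for
          # the steady-state assumption; all inputs are fractional errors.
          return math.sqrt(u_emission**2 + u_concentration**2 + u_steady_state**2)

      u = relative_uncertainty(0.05, 0.07, 0.10)        # illustrative component errors
      print(f"overall ~{100 * u:.0f}% relative uncertainty")  # ~13%, near the ideal case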

  19. Genetic algorithms and MCML program for recovery of optical properties of homogeneous turbid media

    PubMed Central

    Morales Cruzado, Beatriz; y Montiel, Sergio Vázquez; Atencio, José Alberto Delgado

    2013-01-01

    In this paper, we present and validate a new method for recovering the optical properties of turbid media with slab geometry. This is an iterative method that compares diffuse reflectance and transmittance, measured using integrating spheres, with those obtained using the well-known MCML algorithm. The search procedure is based on the evolution of a population through selection of the best individuals, i.e., a genetic algorithm. The new method includes several corrections, such as non-linear effects in integrating-sphere measurements and loss of light due to the finite size of the sample. As a potential application and proof-of-principle experiment, we use the new algorithm to recover the optical properties of blood samples at different degrees of coagulation. PMID:23504404
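
    A toy sketch of the search loop, with a stand-in forward model in place of MCML (which is a full Monte Carlo photon transport code) and hypothetical parameter ranges:

      import numpy as np

      rng = np.random.default_rng(0)

      def forward_model(mu_a, mu_s):
          # Stand-in for an MCML run: maps optical properties (absorption and
          # scattering coefficients) to (reflectance, transmittance). In the
          # real method each evaluation is a Monte Carlo simulation.
          return np.array([np.exp(-mu_a) * mu_s / (1 + mu_s),
                           np.exp(-(mu_a + mu_s) * 0.1)])

      measured = forward_model(0.3, 8.0)   # pretend these came from integrating spheres

      def fitness(pop):
          return np.array([np.sum((forward_model(*ind) - measured) ** 2) for ind in pop])

      pop = rng.uniform([0.01, 1.0], [1.0, 20.0], size=(40, 2))  # (mu_a, mu_s) individuals
      for _ in range(60):
          f = fitness(pop)
          parents = pop[np.argsort(f)[:10]]                      # selection: keep the best
          children = parents[rng.integers(0, 10, 30)] \
              + rng.normal(0, [0.02, 0.3], (30, 2))              # mutate around parents
          pop = np.vstack([parents, np.clip(children, [0.01, 1.0], [1.0, 20.0])])

      best = pop[np.argmin(fitness(pop))]
      print(f"recovered mu_a={best[0]:.2f} /mm, mu_s={best[1]:.1f} /mm")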

  20. Intelligent Detection of Structure from Remote Sensing Images Based on Deep Learning Method

    NASA Astrophysics Data System (ADS)

    Xin, L.

    2018-04-01

    Utilizing high-resolution remote sensing images for earth observation has become the common method of land use monitoring. Traditional image interpretation requires extensive human participation, which is inefficient and makes accuracy difficult to guarantee. At present, artificial-intelligence methods such as deep learning have substantial advantages in image recognition. By means of a large number of remote sensing image samples and deep neural network models, we can rapidly extract objects of interest such as buildings. In terms of both efficiency and accuracy, deep learning is the more powerful approach. This paper describes research on the deep learning method using a large number of remote sensing image samples and verifies the feasibility of building extraction through experiments.

  1. fMRI capture of auditory hallucinations: Validation of the two-steps method.

    PubMed

    Leroy, Arnaud; Foucher, Jack R; Pins, Delphine; Delmaire, Christine; Thomas, Pierre; Roser, Mathilde M; Lefebvre, Stéphanie; Amad, Ali; Fovet, Thomas; Jaafari, Nemat; Jardri, Renaud

    2017-10-01

    Our purpose was to validate a reliable method to capture brain activity concomitant with hallucinatory events, which constitute frequent and disabling experiences in schizophrenia. Capturing hallucinations using functional magnetic resonance imaging (fMRI) remains very challenging. We previously developed a method based on a two-steps strategy including (1) multivariate data-driven analysis of per-hallucinatory fMRI recording and (2) selection of the components of interest based on a post-fMRI interview. However, two tests still need to be conducted to rule out critical pitfalls of conventional fMRI capture methods before this two-steps strategy can be adopted in hallucination research: replication of these findings on an independent sample and assessment of the reliability of the hallucination-related patterns at the subject level. To do so, we recruited a sample of 45 schizophrenia patients suffering from frequent hallucinations, 20 schizophrenia patients without hallucinations and 20 matched healthy volunteers; all participants underwent four different experiments. The main findings are (1) high accuracy in reporting unexpected sensory stimuli in an MRI setting; (2) good detection concordance between hypothesis-driven and data-driven analysis methods (as used in the two-steps strategy) when controlled unexpected sensory stimuli are presented; (3) good agreement of the two-steps method with the online button-press approach to capture hallucinatory events; (4) high spatial consistency of hallucinatory-related networks detected using the two-steps method on two independent samples. By validating the two-steps method, we advance toward the possible transfer of such technology to new image-based therapies for hallucinations. Hum Brain Mapp 38:4966-4979, 2017. © 2017 Wiley Periodicals, Inc.

  2. Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN

    NASA Astrophysics Data System (ADS)

    Talbot, Paul W.

    As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We extend existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
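
    For orientation, the generic SCgPC construction (our notation, not necessarily the thesis's) expands the response u in orthonormal polynomials of the random inputs ξ and computes coefficients by quadrature at collocation nodes:

      u(\xi) \approx \sum_{k \in \Lambda} c_k \, \Psi_k(\xi),
      \qquad
      c_k \approx \sum_{j} u\!\left(\xi^{(j)}\right) \Psi_k\!\left(\xi^{(j)}\right) w_j

    where the truncated index set Λ is, e.g., Total Degree {k : Σ_n k_n ≤ L} or Hyperbolic Cross {k : Π_n (k_n + 1) ≤ L + 1}, and (ξ^(j), w_j) are quadrature nodes and weights.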

  3. Testing paleointensity determinations on recent lava flows and scorias from Miyakejima, Japan

    NASA Astrophysics Data System (ADS)

    Fukuma, K.

    2013-12-01

    No consensus has yet been reached on paleointensity methods. Even the classical Thellier method has not been fully tested on recent lava flows with known geomagnetic field intensity using a systematic sampling scheme. In this study, the Thellier method was applied to the 1983, 1962 and 1940 basaltic lava flows and scorias from Miyakejima, Japan. Several vertical lava sections and quenched scorias, which are quite variable in magnetic mineralogy and grain size, provide an unparalleled opportunity to test paleointensity methods. Thellier experiments were conducted on a completely automated three-component spinner magnetometer with thermal demagnetizer 'tspin'. Specimens were heated in air, the applied laboratory field was 45 microT, and pTRM checks were performed at every two heating steps. Curie points and hysteresis properties were obtained on small fragments removed from cylindrical specimens. For lava flows, sigmoidal curves were commonly observed on the Arai diagrams. In particular, the interior part of lava flows always revealed sigmoidal patterns and sometimes resulted in erroneously blurred behaviors. The directions after zero-field heating were not necessarily stable in the course of the Thellier experiments. It was very difficult, for the interior part, to ascertain linear segments on Arai diagrams corresponding to the geomagnetic field intensity at the eruption. Upper and lower clinker samples also generally revealed sigmoidal or upward-concave curves on Arai diagrams. Neither the lower nor the higher temperature portions of the sigmoids or concaves gave the expected geomagnetic field intensities. However, there were two exceptional cases of lava flows giving correct field intensities: upper clinkers with relatively low unblocking temperatures (< 400 deg.C) and lower clinkers with broad unblocking temperature ranges from room temperature to 600 deg.C. The most promising target for paleointensity experiments within the volcanic rocks is scoria. Scoria samples always carry single Curie temperatures higher than 500 deg.C, and ratios of saturation remanence to saturation magnetization (Mr/Ms) of about 0.5 are indicative of truly single-domain low-titanium titanomagnetite. Unambiguous straight lines were always observed on Arai diagrams covering broad temperature ranges, as for the lower clinker samples, and the gradients gave the expected field values within a few percent error. Thellier experiments applied to the recent lava flows did not successfully recover the expected field intensity from most samples. Either no linear segment was recognized, or incorrect paleointensity values were obtained from short segments with limited temperature ranges. In Thellier and other types of paleointensity experiments, laboratory alteration is checked in detail, but once a sample passes the alteration check, the TRM/NRM ratios of any limited temperature or field range are accepted as reflecting paleointensity. Previously published paleointensity data from lava flows may therefore include many such dubious data. Generally, lava flows are not suitable for paleointensity determinations in light of their large grain sizes and mixed magnetic mineralogy, except for scoria and clinker.
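
    The quantity ultimately extracted from a successful Arai diagram is the slope of the chosen linear segment; the paleointensity estimate is |slope| times the laboratory field. A sketch with made-up, ideally linear data:

      import numpy as np

      # Illustrative Arai-diagram data: NRM remaining vs pTRM gained (same units).
      ptrm_gained = np.array([0.00, 0.12, 0.25, 0.41, 0.58, 0.72])
      nrm_left    = np.array([1.00, 0.88, 0.74, 0.59, 0.41, 0.27])

      slope = np.polyfit(ptrm_gained, nrm_left, 1)[0]  # best-fit line on the segment
      B_lab = 45.0                                     # microtesla, as in the experiments
      print(f"paleointensity ~ {abs(slope) * B_lab:.1f} uT")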

  4. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
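
    For reference, the uncorrected four-probe sheet resistance of an infinite thin sheet is (π/ln 2)·V/I; the RCF studied in the paper replaces this ideal factor for finite samples. A sketch with illustrative readings:

      import math

      def sheet_resistance(voltage, current, rcf=math.pi / math.log(2)):
          # Four-probe sheet resistance. The default RCF (pi/ln 2 ~ 4.532) assumes
          # an infinite thin sheet; finite samples need a corrected factor.
          return rcf * voltage / current

      def resistivity(voltage, current, thickness_m, rcf=math.pi / math.log(2)):
          # Bulk resistivity follows from sheet resistance times film thickness.
          return sheet_resistance(voltage, current, rcf) * thickness_m

      print(f"{sheet_resistance(1.2e-3, 1.0e-3):.2f} ohm/sq")     # ~5.44 ohm/sq
      print(f"{resistivity(1.2e-3, 1.0e-3, 200e-9):.2e} ohm*m")   # e.g. a 200 nm film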

  5. Determination of Calcium in Dietary Supplements: Statistical Comparison of Methods in the Analytical Laboratory

    ERIC Educational Resources Information Center

    Garvey, Sarah L.; Shahmohammadi, Golbon; McLain, Derek R.; Dietz, Mark L.

    2015-01-01

    A laboratory experiment is described in which students compare two methods for the determination of the calcium content of commercial dietary supplement tablets. In a two-week sequence, the sample tablets are first analyzed via complexometric titration with ethylenediaminetetraacetic acid and then, following ion exchange of the calcium ion present…

  6. Children's Text Messaging: Abbreviations, Input Methods and Links with Literacy

    ERIC Educational Resources Information Center

    Kemp, N.; Bushnell, C.

    2011-01-01

    This study investigated the effects of mobile phone text-messaging method (predictive and multi-press) and experience (in texters and non-texters) on children's textism use and understanding. It also examined popular claims that the use of text-message abbreviations, or "textese" spelling, is associated with poor literacy skills. A sample of 86…

  7. Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.

    PubMed

    Sidek, Khairul Azami; Khalil, Ibrahim

    2013-01-01

    Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency have been proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely, Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved higher matching percentage values, of up to 4% for CC, 3% for PRD and 94% for WDM, compared with the existing method using ECG recordings with lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improved the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, higher classification accuracy, of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, verifies the study claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
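
    Both interpolators used in the study are available in SciPy; a minimal sketch of upsampling a low-frequency recording (synthetic stand-in signal, illustrative rates):

      import numpy as np
      from scipy.interpolate import CubicSpline, PchipInterpolator

      fs_low, fs_high = 128, 512                     # Hz: recorded vs target rate
      t_low = np.arange(0, 1, 1 / fs_low)
      ecg_low = np.sin(2 * np.pi * 1.2 * t_low) + 0.1 * np.sin(2 * np.pi * 25 * t_low)

      t_high = np.arange(0, t_low[-1], 1 / fs_high)  # stay inside the original support
      ecg_pchip = PchipInterpolator(t_low, ecg_low)(t_high)  # shape-preserving, no overshoot
      ecg_spline = CubicSpline(t_low, ecg_low)(t_high)       # smoother, may ring at QRS edges

      print(ecg_pchip.shape, ecg_spline.shape)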

  8. [An Introduction to Methods for Evaluating Health Care Technology].

    PubMed

    Lee, Ting-Ting

    2015-06-01

    The rapid and continual advance of healthcare technology makes ensuring that this technology is used effectively to achieve its original goals a critical issue. This paper presents three methods that may be applied by healthcare professionals in the evaluation of healthcare technology. These methods include: the perception/experiences of users, user work-pattern changes, and chart review or data mining. The first method includes two categories: using interviews to explore the user experience and using theory-based questionnaire surveys. The second method applies work sampling to observe the work pattern changes of users. The last method conducts chart reviews or data mining to analyze the designated variables. In conclusion, while evaluative feedback may be used to improve the design and development of healthcare technology applications, the informatics competency and informatics literacy of users may be further explored in future research.

  9. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    NASA Astrophysics Data System (ADS)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  10. Solid Phase Micro-extraction (SPME) with In Situ Transesterification: An Easy Method for the Detection of Non-volatile Fatty Acid Derivatives on the Insect Cuticle.

    PubMed

    Kühbandner, Stephan; Ruther, Joachim

    2015-06-01

    Triacylglycerides (TAGs) and other non-volatile fatty acid derivatives (NFADs) occur in large amounts in the internal tissues of insects, but their presence on the insect cuticle remains controversial. Most studies investigating cuticular lipids of insects involve solvent extraction, which implies the risk of extracting lipids from internal tissues. Here, we present a new method that overcomes this problem. The method employs solid phase micro-extraction (SPME) to sample NFADs by rubbing the SPME fiber over the insect cuticle. Subsequently, the sampled NFADs are transesterified in situ with trimethyl sulfonium hydroxide (TMSH) into more volatile fatty acid methyl esters (FAMEs), which can be analyzed by standard GC/MS. We performed two types of control experiments to enable significant conclusions: (1) to rule out contamination of the GC/MS system with NFADs, and (2) to exclude the presence of free fatty acids on the insect cuticle, which would also furnish FAMEs after TMSH treatment, and thus might simulate the presence of NFADs. In combination with these two essential control experiments, the described SPME technique can be used to detect TAGs and/or other NFADs on the insect cuticle. We analyzed six insect species from four insect orders with our method and compared the results with conventional solvent extraction followed by ex situ transesterification. Several fatty acids typically found as constituents of TAGs were detected by the SPME method on the cuticle of all species analyzed. A comparison of the two methods revealed differences in the fatty acid compositions of the samples. Saturated fatty acids tended to show higher relative abundances when sampled with the SPME method, while several minor FAMEs were detected only in the solvent extracts. Our study suggests that TAGs and perhaps other NFADs are far more common on the insect cuticle than usually thought.

  11. X-Ray microtomography analysis of the impact of pCO2 on serpentinization reactions: A reactive percolation experimental approach

    NASA Astrophysics Data System (ADS)

    Escario, Sofia; Godard, Marguerite; Gouze, Philippe; Smal, Pavel; Rodriguez, Olivier; Leprovost, Richard

    2017-04-01

    Serpentinization is the main hydrothermal process driving the alteration of the mantle lithosphere by seawater at ridges. It consists of the alteration of olivine to serpentine and is associated with processes such as oxidation, as well as carbonation when CO2 is present. The sustainability and efficiency of the reaction require penetration and renewal of fluids at the mineral-fluid interface. Yet the secondary low-density minerals can fill the porous network, efficiently clogging flow paths. This study aims at better understanding the coupled hydrodynamic and chemical processes driving the earliest stages of alteration of the ultramafic basement, when seawater-derived hydrothermal fluids penetrate and interact with exposed mantle rocks at slow-spreading ridges. We investigate the structural changes of the rock in relation to dissolution-precipitation reactions triggered by the injection of CO2-rich seawater, using an experimental approach. The experiments simulate open conditions and were performed using the reactive percolation bench ICARE Lab 3 - Géosciences Montpellier. ICARE 3 allows permeability changes to be measured continuously during experiments and the outlet fluids passing through the sample to be collected. We analysed the samples before and after the experiments using a combination of geochemical (TGA-MS) and high-resolution X-ray microtomography (ESRF ID19 synchrotron beamline, Grenoble) approaches. A series of experiments was carried out at 190°C and 25 MPa. CO2-enriched natural seawater (XCO2 5.24 mmol/kg) was injected into titanium capsules (2 mm diameter, 6 mm length) filled with pressed powdered San Carlos olivine (Fo90; grains 150-200 µm). The outlet section of the samples was analysed at 0.65 µm resolution using microtomography before and after the experiments, and the reacted powdered sample was analysed by TGA-MS. Comparison of microtomography images of reacted and unreacted samples shows evidence of olivine dissolution and secondary mineral precipitation during the 14-day experiments. A new method based on image registration shows that dissolution is mainly localized at grain borders, while precipitation is localized in fractures and at grain borders. Dissolution appears to be dominant in the outlet section of the sample. The progress of the reaction toward equilibrium will be determined by comparing thermogravimetric (TGA-MS) analysis of the reacted sample with thermodynamic modelling of the experiment at equilibrium.

  12. Quantitative Assessment of Molecular Dynamics Sampling for Flexible Systems.

    PubMed

    Nemec, Mike; Hoffmann, Daniel

    2017-02-14

    Molecular dynamics (MD) simulation is a natural method for the study of flexible molecules but at the same time is limited by the large size of the conformational space of these molecules. We ask by how much the MD sampling quality for flexible molecules can be improved by two means: the use of diverse sets of trajectories starting from different initial conformations to detect deviations between samples, and sampling with enhanced methods such as accelerated MD (aMD) or scaled MD (sMD) that distort the energy landscape in controlled ways. To this end, we test the effects of these approaches on MD simulations of two flexible biomolecules in aqueous solution, Met-Enkephalin (5 amino acids) and HIV-1 gp120 V3 (a cycle of 35 amino acids). We assess the convergence of the sampling quantitatively with known, extensive measures of cluster number N_c and cluster distribution entropy S_c and with two new quantities, conformational overlap O_conf and density overlap O_dens, both conveniently ranging from 0 to 1. These new overlap measures quantify self-consistency of sampling in multitrajectory MD experiments, a necessary condition for converged sampling. A comprehensive assessment of sampling quality of MD experiments identifies the combination of diverse trajectory sets and aMD as the most efficient approach among those tested. However, analysis of O_dens between conventional and aMD trajectories also reveals that we have not completely corrected aMD sampling for the distorted energy landscape. Moreover, for V3, the courses of N_c and O_dens indicate that much higher resources than those generally invested today will probably be needed to achieve convergence. The comparative analysis also shows that conventional MD simulations with insufficient sampling can be easily misinterpreted as being converged.
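
    The paper defines its own overlap measures; as a hedged illustration of the general idea, a histogram overlap Σ min(p_i, q_i) between two trajectory sets also ranges from 0 to 1 and reaches 1 only when both sets sample the same density:

      import numpy as np

      def density_overlap(sample_a, sample_b, bins=50, limits=(-np.pi, np.pi)):
          # Histogram overlap sum(min(p_i, q_i)) in [0, 1]; 1 means the two
          # trajectory sets visited this coordinate with identical densities.
          # (Illustrative form; the paper's O_dens is defined over clusters.)
          p, _ = np.histogram(sample_a, bins=bins, range=limits)
          q, _ = np.histogram(sample_b, bins=bins, range=limits)
          p = p / p.sum()
          q = q / q.sum()
          return float(np.minimum(p, q).sum())

      rng = np.random.default_rng(3)
      phi_run1 = rng.normal(-1.0, 0.4, 5000)  # e.g., a dihedral from trajectory set 1
      phi_run2 = rng.normal(-0.9, 0.5, 5000)  # the same dihedral from set 2
      print(f"O_dens ~ {density_overlap(phi_run1, phi_run2):.2f}")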

  13. Challenges to be overcome using population-based sampling methods to recruit veterans for a study of post-traumatic stress disorder and traumatic brain injury.

    PubMed

    Bayley, Peter J; Kong, Jennifer Y; Helmer, Drew A; Schneiderman, Aaron; Roselli, Lauren A; Rosse, Stephanie M; Jackson, Jordan A; Baldwin, Janet; Isaac, Linda; Nolasco, Michael; Blackman, Marc R; Reinhard, Matthew J; Ashford, John Wesson; Chapman, Julie C

    2014-04-08

    Many investigators are interested in recruiting veterans from recent conflicts in Afghanistan and Iraq with Traumatic Brain Injury (TBI) and/or Post Traumatic Stress Disorder (PTSD). Researchers pursuing such studies may experience problems in recruiting sufficient numbers unless effective strategies are used. Currently, there is very little information on recruitment strategies for individuals with TBI and/or PTSD. It is known that groups of patients with medical conditions may be less likely to volunteer for clinical research. This study investigated the feasibility of recruiting veterans returning from recent military conflicts--Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF)--using a population-based sampling method. Individuals were sampled from a previous epidemiological study. Three study sites focused on recruiting survey respondents (n = 445) who lived within a 60 mile radius of one of the sites. Overall, the successful recruitment of veterans using a population-based sampling method was dependent on the ability to contact potential participants following mass mailing. Study enrollment of participants with probable TBI and/or PTSD had a recruitment yield (enrolled/total identified) of 5.4%. We were able to contact 146 individuals, representing a contact rate of 33%. Sixty-six of the individuals contacted were screened. The major reasons for not screening included a stated lack of interest in the study (n = 37), a failure to answer screening calls after initial contact (n = 30), and an unwillingness or inability to travel to a study site (n = 10). Based on the phone screening, 36 veterans were eligible for the study. Twenty-four veterans were enrolled, (recruitment yield = 5.4%) and twelve were not enrolled for a variety of reasons. Our experience with a population-based sampling method for recruitment of recent combat veterans illustrates the challenges encountered, particularly contacting and screening potential participants. The screening and enrollment data will help guide recruitment for future studies using population-based methods.

  14. Anomalous electrical conductivity of a gold thin film percolation system

    NASA Astrophysics Data System (ADS)

    Tao, Xiang-Ming; Ye, Gao-Xiang; Ye, Quan-Lin; Jin, Jin-Sheng; Lao, Yan-Feng; Jiao, Zheng-Kuan

    2002-09-01

    A gold thin film percolation system, deposited on a glass surface by the vapor deposition method, has been fabricated. By using the expansive and mobile properties of the silicone oil drop, a characteristic wedge-shaped film system with a slope of ~10^-5 naturally forms during deposition. The electrical conductivity of the bandlike film, i.e., the uniform part of the wedge-shaped film with a fixed thickness, is measured with the four-probe method. It is found that hopping and tunneling effects are stronger in these films than in the other films studied. The dependence of the dc sheet resistance R_0 on temperature T shows that the samples exhibit a negative coefficient dR_0/dT below a temperature T*. According to our experiment, it is suggested that all the anomalous behaviors of the system are related to the characteristic microstructure of the samples, which results from rapid quenching by the oil drop during deposition. The experiment indicates that the relaxation period of the microstructure of the samples may be longer than 30 min.

  15. Conducting Internet Research With the Transgender Population: Reaching Broad Samples and Collecting Valid Data

    PubMed Central

    Miner, Michael H.; Bockting, Walter O.; Romine, Rebecca Swinburne; Raman, Sivakumaran

    2013-01-01

    Health research on transgender people has been hampered by the challenges inherent in studying a hard-to-reach, relatively small, and geographically dispersed population. The Internet has the potential to facilitate access to transgender samples large enough to permit examination of the diversity and syndemic health disparities found among this population. In this article, we describe the experiences of a team of investigators using the Internet to study HIV risk behaviors of transgender people in the United States. We developed an online instrument, recruited participants exclusively via websites frequented by members of the target population, and collected data using online quantitative survey and qualitative synchronous and asynchronous interview methods. Our experiences indicate that the Internet environment presents the investigator with some unique challenges and that commonly expressed criticisms about Internet research (e.g., lack of generalizable samples, invalid study participants, and multiple participation by the same subject) can be overcome with careful method design, usability testing, and pilot testing. The importance of both usability and pilot testing are described with respect to participant engagement and retention and the quality of data obtained online. PMID:24031157

  17. Comparison of thermal and microwave paleointensity estimates in specimens that violate Thellier's laws

    NASA Astrophysics Data System (ADS)

    Grappone, J. M., Jr.; Biggin, A. J.; Barrett, T. J.; Hill, M. J.

    2017-12-01

    Deep in the Earth, thermodynamic behavior drives the geodynamo and creates the Earth's magnetic field. Determining how the strength of the field, its paleointensity (PI), varies with time, is vital to our understanding of Earth's evolution. Thellier-style paleointensity experiments assume the presence of non-interacting, single domain (SD) magnetic particles, which follow Thellier's laws. Most natural rocks however, contain larger, multi-domain (MD) or interacting single domain (ISD) particles that often violate these laws and cause experiments to fail. Even for samples that pass reliability criteria designed to minimize the impact of MD or ISD grains, different PI techniques can give systematically different estimates, implying violation of Thellier's laws. Our goal is to identify any disparities in PI results that may be explainable by protocol-specific MD and ISD behavior and determine optimum methods to maximize accuracy. Volcanic samples from the Hawai'ian SOH1 borehole previously produced method-dependent PI estimates. Previous studies showed consistently lower PI values when using a microwave (MW) system and the perpendicular method than using the original thermal Thellier-Thellier (OT) technique. However, the data were ambiguous regarding the cause of the discrepancy. The diverging estimates appeared to be either the result of using OT instead of the perpendicular method or the result of using MW protocols instead of thermal protocols. Comparison experiments were conducted using the thermal perpendicular method and microwave OT technique to bridge the gap. Preliminary data generally show that the perpendicular method gives lower estimates than OT for comparable H_lab values. MW estimates are also generally lower than thermal estimates using the same protocol.

  18. The Relationship between Experiences of Discrimination and Mental Health among Lesbians and Gay Men: An Examination of Internalized Homonegativity and Rejection Sensitivity as Potential Mechanisms

    ERIC Educational Resources Information Center

    Feinstein, Brian A.; Goldfried, Marvin R.; Davila, Joanne

    2012-01-01

    Objective: The current study used path analysis to examine potential mechanisms through which experiences of discrimination influence depressive and social anxiety symptoms. Method: The sample included 218 lesbians and 249 gay men (total N = 467) who participated in an online survey about minority stress and mental health. The proposed model…

  19. Child Sexual Abuse Is Largely Hidden from the Adult Society: An Epidemiological Study of Adolescents' Disclosures

    ERIC Educational Resources Information Center

    Priebe, Gisela; Svedin, Carl Goran

    2008-01-01

    Objectives: The aim of this study was to investigate disclosure rates and disclosure patterns and to examine predictors of non-disclosure in a sample of male and female adolescents with self-reported experiences of sexual abuse. Method: A sample of 4,339 high school seniors (2,324 girls, 2,015 boys) was examined with a questionnaire concerning…

  20. Exploiting Multi-Step Sample Trajectories for Approximate Value Iteration

    DTIC Science & Technology

    2013-09-01

    Approximate value iteration methods for reinforcement learning (RL) generalize experience from limited samples across large state-action spaces. The function approximators…
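
    For context, a generic one-step fitted value iteration on a batch of sampled transitions looks like the sketch below (a toy 1-D task with linear function approximation; the report's contribution, exploiting multi-step trajectories, would modify the target computation):

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical batch of transitions (s, a, r, s') from a 1-D task, two actions.
      S = rng.uniform(0, 1, 500)
      A = rng.integers(0, 2, 500)
      S2 = np.clip(S + np.where(A == 1, 0.05, -0.05) + rng.normal(0, 0.01, 500), 0, 1)
      R = (S2 > 0.95).astype(float)               # reward for reaching the right edge

      def features(s):
          # Coarse radial-basis features over the unit interval.
          centers = np.linspace(0, 1, 9)
          return np.exp(-((s[:, None] - centers) ** 2) / 0.02)

      gamma, W = 0.95, np.zeros((2, 9))           # one weight vector per action
      for _ in range(50):                         # approximate value iteration
          q_next = np.max([features(S2) @ W[a] for a in (0, 1)], axis=0)
          target = R + gamma * q_next             # one-step bootstrapped target
          for a in (0, 1):                        # refit each action's approximator
              X = features(S[A == a])
              W[a] = np.linalg.lstsq(X, target[A == a], rcond=None)[0]

      print(np.round(features(np.array([0.1, 0.9])) @ W[1], 2))  # higher value near goal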

  1. Determination of Pb in Biological Samples by Graphite Furnace Atomic Absorption Spectrophotometry: An Exercise in Common Interferences and Fundamental Practices in Trace Element Determination

    ERIC Educational Resources Information Center

    Spudich, Thomas M.; Herrmann, Jennifer K.; Fietkau, Ronald; Edwards, Grant A.

    2004-01-01

    An experiment is conducted to determine trace-level Pb in samples of bovine liver or muscle by graphite furnace atomic absorption spectrophotometry (GFAAS). The primary objective is to demonstrate the effects of physical and spectral interferences in trace element determination, and to present the usual methods employed to minimize accuracy errors…

  2. Gender and Racial/Ethnic Differences in Self-Reported Levels of Engagement in High School Math and Science Courses

    ERIC Educational Resources Information Center

    Martinez, Sylvia; Guzman, Stephanie

    2013-01-01

    While gender and racial/ethnic performance gaps in math and science have been well documented, we know little about how students feel while they are in these courses. Using a sample of 793 high school students who participated in the Experience Sampling Method of the Study of Youth and Social Development, this study examines the gender and…

  3. Characteristics of speaking style and implications for speech recognition.

    PubMed

    Shinozaki, Takahiro; Ostendorf, Mari; Atlas, Les

    2009-09-01

    Differences in speaking style are associated with more or less spectral variability, as well as different modulation characteristics. The greater variation in some styles (e.g., spontaneous speech and infant-directed speech) poses challenges for recognition but possibly also opportunities for learning more robust models, as evidenced by prior work and motivated by child language acquisition studies. In order to investigate this possibility, this work proposes a new method for characterizing speaking style (the modulation spectrum), examines spontaneous, read, adult-directed, and infant-directed styles in this space, and conducts pilot experiments in style detection and sampling for improved speech recognizer training. Speaking style classification is improved by using the modulation spectrum in combination with standard pitch and energy variation. Speech recognition experiments on a small vocabulary conversational speech recognition task show that sampling methods for training with a small amount of data benefit from the new features.
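
    A hedged sketch of one common way to compute a modulation spectrum (the paper's exact front end may differ): take the Hilbert envelope of the waveform, downsample it, and estimate its power spectrum; syllable-rate energy appears around 2-8 Hz:

      import numpy as np
      from scipy.signal import hilbert, welch

      def modulation_spectrum(x, fs, env_fs=100):
          # Spectrum of the slow amplitude envelope: extract the envelope with
          # the Hilbert transform, downsample it, estimate its power spectrum.
          envelope = np.abs(hilbert(x))
          step = int(fs // env_fs)
          env = envelope[::step]
          return welch(env - env.mean(), fs=fs / step, nperseg=min(256, len(env)))

      fs = 16000
      t = np.arange(0, 2.0, 1 / fs)
      speechlike = np.sin(2 * np.pi * 500 * t) * (1 + 0.8 * np.sin(2 * np.pi * 4 * t))
      freqs, power = modulation_spectrum(speechlike, fs)
      print(freqs[np.argmax(power)])  # ~4 Hz modulation peak for this toy signal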

  4. Following the dynamics of matter with femtosecond precision using the X-ray streaking method

    DOE PAGES

    David, C.; Karvinen, P.; Sikorski, M.; ...

    2015-01-06

    X-ray Free Electron Lasers (FELs) can produce extremely intense and very short pulses, down to below 10 femtoseconds (fs). Among the key applications are ultrafast time-resolved studies of dynamics of matter by observing responses to fast excitation pulses in a pump-probe manner. Detectors with sufficient time resolution for observing these processes are not available. Therefore, such experiments typically measure a sample's full dynamics by repeating multiple pump-probe cycles at different delay times. This conventional method assumes that the sample returns to an identical or very similar state after each cycle. Here we describe a novel approach that can provide a time trace of responses following a single excitation pulse, jitter-free, with fs timing precision. We demonstrate, in an X-ray diffraction experiment, how it can be applied to the investigation of ultrafast irreversible processes.

  5. Measurement of the total spectrum of electrons and positrons in the energy range of 300–1500 GeV in the PAMELA experiment with the aid of a sampling calorimeter and a neutron detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karelin, A. V., E-mail: karelin@hotbox.ru; Voronov, S. A.; Galper, A. M.

    2015-03-15

    A method based on the use of a sampling calorimeter was developed for measuring the total energy spectrum of electrons and positrons from high-energy cosmic rays in the PAMELA satellite-borne experiment. This made it possible to extend the range of energies accessible to measurements by the magnetic system of the PAMELA spectrometer. The method involves a procedure for selecting electrons on the basis of features of a secondary-particle shower in the calorimeter. The results obtained by measuring the total spectrum of cosmic-ray electrons and positrons in the energy range of 300–1500 GeV by the method in question are presented on the basis of data accumulated over a period spanning 2006 and 2013.

  6. Correlation Between Hot Spots and 3-d Defect Structure in Single and Polycrystalline High-explosive Materials

    NASA Astrophysics Data System (ADS)

    Hawkins, Cameron; Tschuaner, Oliver; Fussell, Zachary; Smith, Jesse

    2017-06-01

    A novel approach that spatially identifies inhomogeneities from microscale (defects, conformational disorder) to mesoscale (voids, inclusions) is developed using synchrotron x-ray methods: tomography, Lang topography, and micro-diffraction mapping. These techniques provide a non-destructive method for characterization of mm-sized samples prior to shock experiments. These characterization maps can be used to correlate continuum level measurements in shock compression experiments to the mesoscale and microscale structure. Specifically examined is a sample of C4. We show extensive conformational disorder in gamma-RDX, which is the main component. Further, we observe that the minor HMX component in C4 contains at least two different phases: alpha- and beta-HMX. This work was supported by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy and by the Site-Directed Research and Development Program. DOE/NV/25946-3071.

  7. Assessing the feasibility of low temperature XAFS experiments at Indus-2, India: First results

    NASA Astrophysics Data System (ADS)

    Ramanan, Nitya; Rajput, Parasmani; Jha, S. N.; Lahiri, Debdutta

    2015-05-01

    In this work, we report the installation of a displex cryostat XAFS sample holder at the XAFS beamline (BL-09) of the Indus-2 synchrotron facility, India, and make a critical assessment of the feasibility of low-temperature XAFS experiments in terms of data quality and reproducibility, temperature range, calibration and attainable resolution. We adopted a Debye-model-based calibration method by measuring XAFS of a standard Au foil with known Debye temperature (Θ_Debye)_Au^theory = 165 K. The data are of good quality and reproducible against international data. By fitting the Debye-Waller factor σ_expt^2(T), we deduced (Θ_Debye)_Au^expt = 163 K, which implies calibration within 2 K. Error bars for σ_expt^2(T) correspond to a temperature uncertainty ΔT ≤ 5 K, which defines the temperature resolution for low-temperature XAFS experiments. Thus, from both calibration and resolution points of view, this work demonstrates the feasibility of low-temperature XAFS experiments at BL-09, Indus-2. The feasibility of extending XAFS experiments to lower temperatures and unknown samples is discussed.
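
    For orientation, a simplified (uncorrelated) Debye form of the mean-square displacement is shown below; EXAFS analyses typically use the correlated-Debye variant with the absorber-scatterer pair's reduced mass μ, but the calibration logic is the same: fit the measured σ²(T) with Θ_D as the free parameter.

      \sigma^2(T) \approx \frac{3\hbar^2}{\mu k_B \Theta_D}
      \left[ \frac{1}{4} + \left(\frac{T}{\Theta_D}\right)^{2}
      \int_0^{\Theta_D/T} \frac{x\,\mathrm{d}x}{e^{x}-1} \right]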

  8. Research Methodology: Overview of Qualitative Research

    PubMed Central

    GROSSOEHME, DANIEL H.

    2015-01-01

    Qualitative research methods are a robust tool for chaplaincy research questions. Similar to much of chaplaincy clinical care, qualitative research generally works with written texts, often transcriptions of individual interviews or focus group conversations, and seeks to understand the meaning of experience in a study sample. This article describes three common methodologies: ethnography, grounded theory, and phenomenology. Issues to consider relating to the study sample, design, and analysis are discussed. Approaches to enhancing the validity of the data are described, along with reliability and ethical issues in qualitative research. Qualitative research is an accessible way for chaplains to contribute new knowledge about the sacred dimension of people's lived experience. PMID:24926897

  9. Experiment requirements: Vitamin D metabolites and bone demineralization, Spacelab 2, experiment no. 1

    NASA Technical Reports Server (NTRS)

    Schnoes, H. K.; Holton, E. M.; Thirolf, R. G.

    1978-01-01

    As a contribution toward an understanding of the molecular basis of bone loss, mineral imbalance, and increasing fecal calcium under conditions of prolonged space flight, the blood levels of biologically active vitamin D metabolites of flight crew members will be quantitatively measured. Prior to the mission, the refinement of existing and the development of new techniques for the assay of all vitamin D metabolites will provide an arsenal of methods suitable for a wide range of metabolite levels. In terms of practical application, the analysis of human and animal plasma samples, Spacelab crew plasma samples, and flight hardware are envisioned.

  10. Spectral and correlation analysis with applications to middle-atmosphere radars

    NASA Technical Reports Server (NTRS)

    Rastogi, Prabhat K.

    1989-01-01

    The correlation and spectral analysis methods for uniformly sampled stationary random signals, estimation of their spectral moments, and problems arising due to nonstationarity are reviewed. Some of these methods are already in routine use in atmospheric radar experiments. Other methods based on the maximum entropy principle and time series models have been used in analyzing data, but are just beginning to receive attention in the analysis of radar signals. These methods are also briefly discussed.
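
    The spectral-moment estimation mentioned above reduces, in its simplest periodogram form, to the zeroth, first and second moments of the power spectrum; a sketch on a synthetic radar-like echo:

      import numpy as np

      def spectral_moments(x, fs):
          # First three moments of the Doppler spectrum: total power (signal
          # strength), mean frequency (radial velocity), spectral width.
          spec = np.abs(np.fft.fft(x)) ** 2 / len(x)
          freqs = np.fft.fftfreq(len(x), d=1 / fs)
          power = spec.sum()
          mean_f = (freqs * spec).sum() / power
          width = np.sqrt(((freqs - mean_f) ** 2 * spec).sum() / power)
          return power, mean_f, width

      rng = np.random.default_rng(1)
      fs, n = 1000.0, 4096
      t = np.arange(n) / fs
      echo = np.exp(2j * np.pi * 83.0 * t) \
          + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))
      print(spectral_moments(echo, fs))  # mean frequency close to +83 Hz
                                         # (noise pulls it slightly toward zero)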

  11. Integrating conventional and inverse representation for face recognition.

    PubMed

    Xu, Yong; Li, Xuelong; Yang, Jian; Lai, Zhihui; Zhang, David

    2014-10-01

    Representation-based classification methods are all built on the conventional representation, which first expresses the test sample as a linear combination of the training samples and then exploits the deviation between the test sample and the reconstruction for each class to perform classification. However, this deviation does not always reflect the difference between the test sample and each class well. In this paper, we propose a novel representation-based classification method for face recognition. The method integrates conventional and inverse representation-based classification for better face recognition. It first produces the conventional representation of the test sample, i.e., it uses a linear combination of the training samples to represent the test sample. It then obtains the inverse representation, i.e., an approximate representation of each training sample of a subject obtained by exploiting the test sample and the training samples of the other subjects. Finally, the proposed method exploits the conventional and inverse representations to generate two kinds of scores of the test sample with respect to each class and combines them to recognize the face. The paper presents the theoretical foundation and rationale of the proposed method. Moreover, this paper shows for the first time that a basic property of the human face, its symmetry, can be exploited to generate new training and test samples. Because these new samples reflect possible appearances of the face, using them enables higher accuracy. The experiments show that the proposed conventional and inverse representation-based linear regression classification (CIRLRC), an improvement to linear regression classification (LRC), can obtain very high accuracy and greatly outperforms naive LRC and other state-of-the-art conventional representation-based face recognition methods. The accuracy of CIRLRC can be 10% greater than that of LRC.
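
    For orientation, the naive LRC baseline that CIRLRC improves upon is compact: each class representation is a least-squares fit of the test sample by that class's training samples, and classification picks the class with the smallest reconstruction residual. A minimal sketch with random stand-in "face vectors" (not the paper's data or its CIRLRC scoring):

```python
import numpy as np

def lrc_predict(X_train, y_train, x_test):
    """Naive LRC: represent x_test as a linear combination of each class's
    training samples (least squares) and assign the class whose
    reconstruction leaves the smallest residual."""
    best_class, best_residual = None, np.inf
    for c in np.unique(y_train):
        A = X_train[y_train == c].T               # columns are class-c samples
        beta, *_ = np.linalg.lstsq(A, x_test, rcond=None)
        residual = np.linalg.norm(x_test - A @ beta)
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class

# toy usage: 2 classes, 5 samples each, 100-dimensional feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (5, 100)), rng.normal(3, 1, (5, 100))])
y = np.repeat([0, 1], 5)
print(lrc_predict(X, y, rng.normal(3, 1, 100)))  # expected: 1
```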

  12. Bridging the gap between formal and experience-based knowledge for context-aware laparoscopy.

    PubMed

    Katić, Darko; Schuck, Jürgen; Wekerle, Anna-Laura; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie

    2016-06-01

    Computer assistance is increasingly common in surgery. However, the amount of information is bound to overload the processing abilities of surgeons. We propose methods to recognize the current phase of a surgery for context-aware information filtering. The purpose is to select the most suitable subset of information for surgical situations that require special assistance. We combine formal knowledge, represented by an ontology, and experience-based knowledge, represented by training samples, to recognize phases. For this purpose, we have developed two different methods. Firstly, we use formal knowledge about possible phase transitions to create a composition of random forests. Secondly, we propose a method based on cultural optimization to infer formal rules from experience to recognize phases. The proposed methods are compared with a purely formal knowledge-based approach using rules and a purely experience-based one using regular random forests. The comparative evaluation on laparoscopic pancreas resections and adrenalectomies employs a consistent set of quality criteria on clean and noisy input. The rule-based approaches proved best with noise-free data. The random forest-based ones were more robust in the presence of noise. Formal and experience-based knowledge can be successfully combined for robust phase recognition.

  13. Determination of total sulfur content of sedimentary rocks by a combustion method

    USGS Publications Warehouse

    Coller, M.E.; Leininger, R.K.

    1955-01-01

    Total sulfur has been determined in common sedimentary rocks by a combustion method. Sulfur contents range from 0.001 to 5.0%. Experiments show that the combustion method can be used in analyzing sedimentary rocks in which sulfur is present as sulfide, sulfate, or both. Pulverized samples weighing from 0.100 to 0.500 gram are used in this method. Each sample is placed in a No. 6 Leco combustion boat and covered with two fluxes: 0.50 gram of standard ingot iron and approximately 1.0 gram of 30-mesh granular tin. The boat with the sample is then placed in the combustion tube of a Burrell Unit Package Model T29A tube furnace controlled at a temperature of 1310° to 1320° C. After the sample has been heated for 1 minute, oxygen is admitted at a rate of about 1 liter per minute. The sulfur dioxide formed is absorbed in a starch solution and titrated with standard potassium iodate in a Leco sulfur determinator. Thirteen values obtained for National Bureau of Standards standard sample 1a, argillaceous limestone, range from 0.273 to 0.276% sulfur (certificate value 0.27% by calculation).

  14. A method for measuring the local gas pressure within a gas-flow stage in situ in the transmission electron microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colby, Robert J.; Alsem, Daan H.; Liyu, Andrey V.

    2015-06-01

    The development of environmental transmission electron microscopy (TEM) has enabled in situ experiments in a gaseous environment with high-resolution imaging and spectroscopy. Addressing scientific challenges in areas such as catalysis, corrosion, and geochemistry can require pressures much higher than the ~20 mbar achievable with a differentially pumped, dedicated environmental TEM. Gas-flow stages, in which the environment is contained between two semi-transparent thin membrane windows, have been demonstrated at pressures of several atmospheres. While this constitutes significant progress towards operando measurements, the design of many current gas-flow stages is such that the pressure at the sample cannot necessarily be inferred directly from the pressure differential across the system. Small differences in the setup and design of the gas-flow stage can lead to very different sample pressures. We demonstrate a method for measuring the gas pressure directly, using a combination of electron energy loss spectroscopy and TEM imaging. The method requires only two energy-filtered TEM images, limiting the measurement time to a few seconds, and can be performed during an ongoing experiment at the region of interest. This approach provides a means to ensure reproducibility between different experiments, and even between very differently designed gas-flow stages.
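
    The measurement rests on the EELS log-ratio relation, t/λ = ln(I_total/I_zero-loss), combined with the ideal gas law. A sketch of that arithmetic, with an assumed inelastic cross-section and window gap, and neglecting window and contamination contributions (all parameter values are hypothetical):

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def gas_pressure_pa(i_unfiltered, i_zero_loss, gap_m, sigma_inel_m2, temp_k=293.0):
    """Estimate local gas pressure from two energy-filtered TEM images.
    The log-ratio method gives t/lambda = ln(I_total / I_zero-loss); for a
    gas, 1/lambda = n * sigma, so with a known window gap t the number
    density n follows, and the ideal gas law converts it to pressure."""
    t_over_lambda = np.log(i_unfiltered / i_zero_loss)
    n = t_over_lambda / (sigma_inel_m2 * gap_m)  # molecules per m^3
    return n * KB * temp_k

# hypothetical pixel intensities, gap, and cross-section (assumed values)
print(gas_pressure_pa(i_unfiltered=1.05e4, i_zero_loss=1.0e4,
                      gap_m=5e-6, sigma_inel_m2=1e-21))  # ~4e4 Pa
```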

  15. Data acquisition system for chemical kinetic studies

    PubMed Central

    Zhu, Yu-zhen; Zhou, Xin; Zang, Xiang-sheng

    1989-01-01

    A microcomputer-interfaced data acquisition system for chemical kinetics (interfacing with laboratory analogue instruments) has been developed. Analogue signals from instruments used in kinetics experiments are amplified by a wide-range adjustable high-gain operational amplifier and smoothed by an op-amp-based filter, and then digitized at rates of up to 10⁴ samples per channel by an ADC 0816 digitizer. The ADC data transfer and manipulation routine was written in Assembler and in a high-level language; the graphics package and data treatment package are in Basic. Depending on the sampling speed, the program can be written in mixed Basic-Assembler, or entirely in Assembler if a high sampling rate is needed. Several numerical treatment methods for chemical kinetics have been utilized to smooth the experimental data. The computer-interfaced system for second-order chemical kinetic studies was applied to the determination of the rate constant of the saponification of ethyl acetate at 35°C. For this specific problem, an averaging treatment, which can be called an interval method, was used; it avoids the difficulty of measuring the starting time of the reaction. Two groups of experimental data and results were used to evaluate the system's performance. All of the results obtained are in agreement with the reference value. PMID:18925219
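
    The rate-constant extraction for equal initial concentrations follows the integrated second-order rate law 1/c = 1/c₀ + kt, so k is the slope of 1/c against t. A minimal sketch with hypothetical sampled concentrations (not the published data, which were derived via the interval method):

```python
import numpy as np

# hypothetical ester concentrations (mol/L) at times t (s), with equal
# initial concentrations of ethyl acetate and NaOH
t = np.array([60.0, 120.0, 240.0, 480.0, 960.0])
c = np.array([0.0086, 0.0076, 0.0061, 0.0044, 0.0028])

# integrated second-order rate law: 1/c = 1/c0 + k*t, so k is the slope
k, inv_c0 = np.polyfit(t, 1.0 / c, 1)
print(f"k = {k:.3f} L mol^-1 s^-1, c0 = {1.0 / inv_c0:.4f} mol/L")
```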

  16. Sugar composition of French royal jelly for comparison with commercial and artificial sugar samples.

    PubMed

    Daniele, Gaëlle; Casabianca, Hervé

    2012-09-15

    A gas chromatographic method was developed to quantify the major and minor sugars of 400 royal jellies (RJs). Their contents were compared in relation to geographical origin and production method. A reliable database was established from the analysis of 290 RJs harvested in different French areas, taking into account the diversity of geographical origin, harvesting season, and the forage sources available in the environment corresponding to the natural food of the bees: pollen and nectar. Around 30 RJ samples produced by Italian beekeepers, about sixty from the French market, and around thirty derived from feeding experiments were analysed and compared with our database. Fructose and glucose contents are in the ranges 2.3-7.8% and 3.4-7.7%, respectively, whatever the RJ's origin. By contrast, differences in minor sugar composition are observed. Indeed, sucrose and erlose contents in French RJs are less than 1.7% and 0.3%, respectively, whereas they reach 3.9% and 2.0% in some commercial samples and 5.1% and 1.7% in RJs produced from feeding experiments. This study could be used to discriminate between production methods and provides an additional tool for identifying unknown commercial RJs. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Observing system simulation experiments with multiple methods

    NASA Astrophysics Data System (ADS)

    Ishibashi, Toshiyuki

    2014-11-01

    An Observing System Simulation Experiment (OSSE) is a method to evaluate the impact of hypothetical observing systems on analysis and forecast accuracy in numerical weather prediction (NWP) systems. Since an OSSE requires simulation of hypothetical observations, the uncertainty of OSSE results is generally larger than that of observing system experiments (OSEs). To reduce such uncertainty, OSSEs for existing observing systems are often carried out to calibrate the OSSE system. The purpose of this study is to achieve reliable OSSE results based on OSSEs with multiple methods. There are three types of OSSE methods. The first is the sensitivity observing system experiment (SOSE)-based OSSE (SOSE-OSSE). The second is the ensemble of data assimilation cycles (ENDA)-based OSSE (ENDA-OSSE). The third is the nature-run (NR)-based OSSE (NR-OSSE). These three OSSE methods have very different properties. The NR-OSSE evaluates hypothetical observations in a virtual (hypothetical) world, the NR. The ENDA-OSSE is a very simple method but suffers from a sampling-error problem due to the small ensemble size. The SOSE-OSSE requires a highly accurate analysis field as a pseudo-truth of the real atmosphere. We construct these three types of OSSE methods in the Japan Meteorological Agency (JMA) global 4D-Var experimental system. At the conference, we will present initial results from these OSSE systems and their comparison.

  18. Ultra-high performance liquid chromatographic determination of levofloxacin in human plasma and prostate tissue with use of experimental design optimization procedures.

    PubMed

    Szerkus, O; Jacyna, J; Wiczling, P; Gibas, A; Sieczkowski, M; Siluk, D; Matuszewski, M; Kaliszan, R; Markuszewski, M J

    2016-09-01

    Fluoroquinolones are considered the gold standard for the prevention of bacterial infections after transrectal ultrasound-guided prostate biopsy. However, recent studies have reported that fluoroquinolone-resistant bacterial strains are responsible for a gradually increasing number of infections after transrectal prostate biopsy. In daily clinical practice, antibacterial efficacy is evaluated only in vitro, by measuring the reaction of bacteria with an antimicrobial agent in culture media (i.e., calculation of the minimal inhibitory concentration). Such an approach, however, bears no relation to the characteristics of the treated tissue and might be highly misleading. Thus, the objective of this study was to develop, using a Design of Experiments approach, a reliable, specific and sensitive ultra-high performance liquid chromatography-diode array detection method for the quantitative analysis of levofloxacin in plasma and prostate tissue samples obtained from patients undergoing prostate biopsy. Moreover, a correlation study between concentrations observed in plasma samples and prostatic tissue samples was performed, allowing better understanding, evaluation and optimization of fluoroquinolone-based antimicrobial prophylaxis during transrectal ultrasound-guided prostate biopsy. A Box-Behnken design was employed to optimize the chromatographic conditions of the isocratic elution program in order to obtain the desired retention time, peak symmetry and resolution of the levofloxacin and ciprofloxacin (internal standard) peaks. A fractional factorial design 2^(4-1) with four center points was used for screening of significant factors affecting levofloxacin extraction from the prostatic tissue. Due to the limited number of tissue samples, the prostatic sample preparation procedure was further optimized using a central composite design. The Design of Experiments approach was also utilized to evaluate parameter robustness. The method was found linear over the range of 0.030-10 μg/mL for human plasma and 0.300-30 μg/g for human prostate tissue samples. The intra-day and inter-day variability for levofloxacin in both plasma and prostate samples was less than 10%, with accuracies between 93 and 108% of the nominal values. The limit of detection and the limit of quantification for human plasma were 0.01 μg/mL and 0.03 μg/mL, respectively. For prostate tissue, the limit of detection and the limit of quantification were 0.1 μg/g and 0.3 μg/g, respectively. The average recoveries of levofloxacin ranged from 99 to 106%. The method also fulfills robustness requirements, as determined and proved by the Design of Experiments approach. The developed method was successfully applied to examine prostate tissue and plasma samples from 140 hospitalized patients enrolled in the clinical study, 12 h after oral administration of LVF at a dose of 500 mg. The mean (±SD) LVF concentration was 6.22±3.52 μg/g in prostate and 2.54±1.14 μg/mL in plasma. Owing to the simplicity of the method and the relatively small amount of sample needed for the assay, the method can be applied in clinical practice for monitoring LVF concentrations in plasma and the prostate gland. Copyright © 2016 Elsevier B.V. All rights reserved.
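
    A coded Box-Behnken design of the kind used for the chromatographic optimization is easy to generate: every ±1 combination for each pair of factors, the remaining factors held at their center level, plus replicated center points. A sketch under those standard rules (the assignment of chromatographic factors to columns is up to the experimenter):

```python
from itertools import combinations, product

def box_behnken(k, center_points=3):
    """Coded Box-Behnken design: all +/-1 combinations for each pair of
    factors with every other factor at its center level (0), plus
    replicated center points."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs.extend([[0] * k for _ in range(center_points)])
    return runs

for run in box_behnken(3):  # 12 edge runs + 3 center points
    print(run)
```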

  19. A Universal Method for Species Identification of Mammals Utilizing Next Generation Sequencing for the Analysis of DNA Mixtures

    PubMed Central

    Tillmar, Andreas O.; Dell'Amico, Barbara; Welander, Jenny; Holmlund, Gunilla

    2013-01-01

    Species identification is of interest in a wide range of areas, for example in forensic applications, food monitoring and archeology. The vast majority of existing DNA typing methods developed for species determination focus on a single-species source. There are, however, many instances where all species from mixed sources need to be determined, even when a minority species constitutes less than 1% of the sample. The introduction of next generation sequencing opens new possibilities for such challenging samples. In this study we present a universal deep sequencing method using 454 GS Junior sequencing of a target on the mitochondrial 16S rRNA gene. The method was designed through phylogenetic analyses of DNA reference sequences from more than 300 mammal species. Experiments were performed on artificial species-species mixture samples in order to verify the method's robustness and its ability to detect all species within a mixture. The method was also tested on samples from authentic forensic casework. The results were promising, with discrimination of over 99.9% of mammal species and the ability to detect multiple donors within a mixture, including minor components as low as 1% of a mixed sample. PMID:24358309

  20. Differentiation of organic and non-organic winter wheat cultivars from a controlled field trial by crystallization patterns.

    PubMed

    Kahl, Johannes; Busscher, Nicolaas; Mergardt, Gaby; Mäder, Paul; Torp, Torfinn; Ploeger, Angelika

    2015-01-01

    There is a need for authentication tools in order to verify the existing certification system. Recently, markers for the analytical authentication of organic products were evaluated. Crystallization with additives has been described as an interesting fingerprint approach, but one that needs further evidence based on a standardized method and well-documented sample origin. The fingerprint of wheat cultivars from a controlled field trial is generated from structure-analysis variables of the crystal patterns. Method performance was tested with respect to factors such as crystallization chamber, day of experiment and region of interest of the patterns. Two different organic treatments and two different treatments of the non-organic regime could be grouped together in each of three consecutive seasons. When the k-nearest-neighbor classification method was applied, approximately 84% of Runal samples and 95% of Titlis samples were classified correctly into organic and non-organic origin using cross-validation. Crystallization with additives offers an interesting complementary fingerprint method for organic wheat samples. When the method is applied to winter wheat from the DOK trial, organically and non-organically treated samples can be differentiated significantly based on pattern recognition. Crystallization with additives therefore seems to be a promising tool in organic wheat authentication. © 2014 Society of Chemical Industry.
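
    The classification step is a standard cross-validated k-nearest-neighbor analysis. A sketch of that step with scikit-learn, using random arrays as stand-ins for the structure-analysis variables computed from the crystal patterns:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# random stand-in for structure-analysis variables of crystal patterns
X = rng.normal(size=(60, 8))
y = np.repeat(["organic", "non-organic"], 30)

clf = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
print(f"cross-validated accuracy: {scores.mean():.2f}")  # ~0.5 on pure noise
```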

  1. DoE optimization of a mercury isotope ratio determination method for environmental studies.

    PubMed

    Berni, Alex; Baschieri, Carlo; Covelli, Stefano; Emili, Andrea; Marchetti, Andrea; Manzini, Daniela; Berto, Daniela; Rampazzo, Federico

    2016-05-15

    Using the design of experiments (DoE) technique, we optimized an analytical method for the determination of mercury isotope ratios by cold-vapor multicollector ICP-MS (CV-MC-ICP-MS) to provide absolute Hg isotope ratio measurements with suitable internal precision. By running 32 experiments, the influence of the mercury and thallium (internal standard) concentrations, total measuring time and sample flow rate was evaluated. The method was optimized by varying the Hg concentration between 2 and 20 ng g(-1). The model identifies correlations among the parameters that affect measurement precision and predicts suitable sample measurement precision for Hg concentrations from 5 ng g(-1) upwards. The method was successfully applied to samples of Manila clams (Ruditapes philippinarum) from the Marano and Grado lagoon (NE Italy), a coastal environment affected by long-term mercury contamination mainly due to mining activity. Results show different extents of both mass-dependent fractionation (MDF) and mass-independent fractionation (MIF) in clams according to their size and sampling sites in the lagoon. The method is fit for determinations on real samples, allowing the use of Hg isotope ratios to study mercury biogeochemical cycles in complex ecosystems. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Automated Fast Screening Method for Cocaine Identification in Seized Drug Samples Using a Portable Fourier Transform Infrared (FT-IR) Instrument.

    PubMed

    Mainali, Dipak; Seelenbinder, John

    2016-05-01

    Quick and presumptive identification of seized drug samples without destroying evidence is necessary for law enforcement officials to control the trafficking and abuse of drugs. This work reports an automated screening method to detect the presence of cocaine in seized samples using portable Fourier transform infrared (FT-IR) spectrometers. The method is based on the identification of well-defined characteristic vibrational frequencies related to the functional groups of the cocaine molecule and is fully automated through the use of an expert system. Traditionally, analysts look for key functional group bands in the infrared spectra, and the characterization of the molecules present depends on user interpretation. This implies the need for user expertise, especially for samples that are likely mixtures. As such, this approach is biased and not suitable for non-experts. The method proposed in this work uses the well-established "center of gravity" peak-picking mathematical algorithm and combines it with the conditional reporting feature in MicroLab software to provide an automated method that can be successfully employed by users with varied experience levels. The method reports the confidence level of cocaine presence only when a certain number of cocaine-related peaks are identified by the automated method. Unlike library-search and chemometric methods, which depend on the library database or the training set samples used to build the calibration model, the proposed method is relatively independent of the adulterants and diluents present in the seized mixture. This automated method, in combination with a portable FT-IR spectrometer, provides law enforcement officials, criminal investigators, and forensic experts a quick, field-based prescreening capability for the presence of cocaine in seized drug samples. © The Author(s) 2016.
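
    The core of a "center of gravity" peak pick is the intensity-weighted centroid of the spectrum over a wavenumber window. A minimal sketch under that reading, with a synthetic Gaussian band; the window limits, baseline handling, and band position are illustrative assumptions, not the software's actual parameters:

```python
import numpy as np

def cog_peak_position(wavenumber, absorbance, lo, hi):
    """Center-of-gravity peak pick: the band position is the
    intensity-weighted centroid between lo and hi cm^-1, after
    subtracting a crude local baseline."""
    m = (wavenumber >= lo) & (wavenumber <= hi)
    w, a = wavenumber[m], absorbance[m]
    a = a - a.min()  # remove constant offset within the window
    return np.sum(w * a) / np.sum(a)

# toy spectrum: a Gaussian band centered at 1712 cm^-1 on a flat baseline
w = np.linspace(1600.0, 1800.0, 401)
a = np.exp(-((w - 1712.0) / 8.0) ** 2) + 0.02
print(f"{cog_peak_position(w, a, 1680.0, 1740.0):.1f} cm^-1")  # ~1712.0
```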

  3. Two and three-dimensional quantitative neutron imaging of the water distribution during ponded infiltration

    NASA Astrophysics Data System (ADS)

    Sacha, Jan; Snehota, Michal; Jelinkova, Vladimira

    2016-04-01

    Information on the spatial and temporal distribution of water and air in a soil sample during hydrological processes is important for evaluating current water transport models and developing new ones. Modern imaging techniques such as neutron imaging (NI) allow relatively short acquisition times and high image resolution. At the same time, appropriate data processing has to be applied to obtain results free of bias and artifacts. In this study, ponded infiltration experiments were conducted on two soil samples packed into quartz glass columns with inner diameters of 29 and 34 mm, respectively. The first sample was prepared by packing fine and coarse fractions of sand; the second was packed using coarse sand and disks of fine porous ceramic. The ponded infiltration experiments on both samples were monitored by neutron radiography to produce two-dimensional (2D) projection images during the transient phase of infiltration. During the steady-state flow stage, neutron tomography was used to obtain three-dimensional (3D) information on the gradual water redistribution. The acquired radiographic images were normalized for background noise, spatial inhomogeneity of the detector, fluctuations of the neutron flux in time, and spatial inhomogeneity of the neutron beam. The radiograms of the dry sample were subtracted from all subsequent radiograms to determine the water thickness in the 2D projection images. All projections were corrected for beam hardening and neutron scattering by the empirical method of Kang et al. (2013). The parameters of the correction method were identified by two different approaches. The first was based on fitting the NI-derived water thickness of the water layer above the sample surface to the actual water thickness. In the second approach, the NI-derived volume of water in the entire sample at a given time was fitted to the corresponding gravimetrically determined amount of water in the sample. Tomography images were reconstructed from both the corrected and uncorrected water thickness maps to obtain the 3D spatial distribution of water content within the sample. Without the correction, beam hardening and scattering effects overestimated the water content close to the sample perimeter and underestimated it close to the center of the sample; the total water content of the whole sample was, however, the same in both cases.
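
    The conversion from normalized radiograms to water-thickness maps follows Beer-Lambert attenuation, before the beam-hardening and scattering corrections are applied. A minimal sketch, with an assumed effective attenuation coefficient for water:

```python
import numpy as np

SIGMA_W = 3.5  # effective neutron attenuation coefficient of water, cm^-1 (assumed)

def water_thickness_cm(radiogram, dry_radiogram):
    """Per-pixel water thickness from flat-field-normalized radiograms,
    assuming Beer-Lambert attenuation I = I_dry * exp(-SIGMA_W * t_w).
    Beam-hardening and scattering corrections (Kang et al., 2013) would
    be applied before tomographic reconstruction."""
    return -np.log(radiogram / dry_radiogram) / SIGMA_W

wet = np.array([[0.70, 0.55], [0.83, 0.91]])  # toy normalized intensities
dry = np.ones((2, 2))
print(water_thickness_cm(wet, dry))
```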

  4. Broiler carcass contamination with Campylobacter from feces during defeathering.

    PubMed

    Berrang, M E; Buhr, R J; Cason, J A; Dickens, J A

    2001-12-01

    Three sets of experiments were conducted to explore the increase in recovery of Campylobacter from broiler carcasses after defeathering. In the first set of experiments, live broilers obtained from a commercial processor were transported to a pilot plant, and breast skin was sampled by a sponge wipe method before and after defeathering. One of 120 broiler breast skin samples was positive for Campylobacter before defeathering, and 95 of 120 were positive after defeathering. In the second set of experiments, Campylobacter-free flocks were identified, subjected to feed withdrawal, and transported to the pilot plant. Carcasses were intracloacally inoculated with Campylobacter (10(7) CFU) just prior to entering the scald tank. Breast skin sponge samples were negative for Campylobacter before carcasses entered the picker (0 of 120 samples). After defeathering, 69 of 120 samples were positive for Campylobacter, with an average of log10 2.7 CFU per sample (approximately 30 cm2). The third set of experiments was conducted using Campylobacter-positive broilers obtained at a commercial processing plant and transported live to the pilot plant. Just prior to scalding, the cloacae were plugged with tampons and sutured shut on half of the carcasses. Plugged carcasses were scalded, and breast skin samples taken before and after defeathering were compared with those collected from control broilers from the same flock. Prior to defeathering, 1 of 120 breast skin sponge samples was positive for the control carcasses, and 0 of 120 were positive for the plugged carcasses. After passing through the picker, 120 of 120 control carcasses had positive breast skin sponge samples, with an average of log10 4.2 CFU per sample (approximately 30 cm2). Only 13 of 120 plugged carcasses had detectable numbers of Campylobacter on the breast skin sponge, with an average of log10 2.5 CFU per sample. These data indicate that the increase in recovery of Campylobacter after defeathering can be related to the escape of contaminated feces from the cloaca during defeathering.

  5. Non-parametric methods for cost-effectiveness analysis: the central limit theorem and the bootstrap compared.

    PubMed

    Nixon, Richard M; Wonderling, David; Grieve, Richard D

    2010-03-01

    Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
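
    Both estimators are short to state. A sketch contrasting them on hypothetical skewed per-patient net benefits (gamma-distributed stand-ins, not trial data):

```python
import numpy as np

rng = np.random.default_rng(42)
# hypothetical skewed per-patient net benefits for two trial arms
nb_treat = rng.gamma(shape=2.0, scale=500.0, size=80)
nb_ctrl = rng.gamma(shape=2.0, scale=450.0, size=80)

# CLT estimate: SE of the incremental net benefit (INB) from sample variances
inb = nb_treat.mean() - nb_ctrl.mean()
se_clt = np.sqrt(nb_treat.var(ddof=1) / nb_treat.size
                 + nb_ctrl.var(ddof=1) / nb_ctrl.size)

# non-parametric bootstrap: resample each arm with replacement
boot = [rng.choice(nb_treat, nb_treat.size, replace=True).mean()
        - rng.choice(nb_ctrl, nb_ctrl.size, replace=True).mean()
        for _ in range(5000)]
se_boot = np.std(boot, ddof=1)

print(f"INB = {inb:.1f}, SE(CLT) = {se_clt:.1f}, SE(bootstrap) = {se_boot:.1f}")
```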

  6. Women's Contraceptive Preference-Use Mismatch

    PubMed Central

    He, Katherine; Dalton, Vanessa K.; Zochowski, Melissa K.

    2017-01-01

    Background: Family planning research has not adequately addressed women's preferences for different contraceptive methods and whether women's contraceptive experiences match their preferences. Methods: Data were drawn from the Women's Healthcare Experiences and Preferences Study, an Internet survey of 1,078 women aged 18-55 randomly sampled from a national probability panel. Survey items assessed women's preferences for contraceptive methods, the match between methods preferred and used, and perceived reasons for mismatch. We estimated predictors of contraceptive preference with multinomial logistic regression models. Results: Among women at risk for pregnancy who responded with their preferred method (n = 363), hormonal methods (non-LARC [long-acting reversible contraception]) were the most preferred (34%), followed by no method (23%) and LARC (18%). Sociodemographic differences in contraceptive method preferences were noted (p-values <0.05), generally with minority, married, and older women having higher rates of preferring less effective methods than their counterparts. Thirty-six percent of women reported preference-use mismatch, with the majority preferring more effective methods than those they were using. Rates of match between preferred and usual methods were highest for LARC (76%), hormonal (non-LARC) methods (65%), and no method (65%). The most common reasons for mismatch were cost/insurance (41%), lack of perceived/actual need (34%), and method-specific preference concerns (19%). Conclusion: While preference for effective contraception was common among this sample of women, we found substantial mismatch between preferred and usual methods, notably among women of lower socioeconomic status and women using less effective methods. The findings may have implications for patient-centered contraceptive interventions. PMID:27710196

  7. Note: A new method for directly reducing the sampling jitter noise of the digital phasemeter

    NASA Astrophysics Data System (ADS)

    Liang, Yu-Rong

    2018-03-01

    The sampling jitter noise is one non-negligible noise source of the digital phasemeter used in space gravitational wave detection missions. This note provides a new method for directly reducing the sampling jitter noise of the digital phasemeter by adding a dedicated signal whose frequency, amplitude, and initial phase must be pre-set. In contrast to the phase correction using the pilot tone in the work of Burnett, Gerberding et al., Liang et al., Ales et al., Gerberding et al., and Ware et al. [M.Sc. thesis, Luleå University of Technology, 2010; Classical Quantum Gravity 30, 235029 (2013); Rev. Sci. Instrum. 86, 016106 (2015); Rev. Sci. Instrum. 86, 084502 (2015); Rev. Sci. Instrum. 86, 074501 (2015); and Proceedings of the Earth Science Technology Conference (NASA, USA, 2006)], the new method is intrinsically additive-noise suppression. The experimental results validate that the new method directly reduces the sampling jitter noise without data post-processing and provides the same phase measurement noise level (10^-6 rad/Hz^(1/2) at 0.1 Hz) as the pilot-tone correction.

  8. Face recognition via sparse representation of SIFT feature on hexagonal-sampling image

    NASA Astrophysics Data System (ADS)

    Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong

    2018-04-01

    This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) feature and sparse representation. The approach takes advantage of SIFT, which is a local feature rather than the holistic feature used in the classical Sparse Representation-based Classification (SRC) algorithm, and possesses strong robustness to expression, pose and illumination variations. Since hexagonal images have inherent merits over square images that make the recognition process more efficient, we extract SIFT keypoints from hexagonally sampled images. Instead of matching SIFT features directly, the sparse representation of each SIFT keypoint is first computed with respect to the constructed dictionary; these sparse vectors are then quantized according to the dictionary; finally, each face image is represented by a histogram, and these so-called Bag-of-Words vectors are classified by an SVM. Due to the use of local features, the proposed method achieves good results even when the number of training samples is small. In the experiments, the proposed method gave higher face recognition rates than other methods on the ORL and Yale B face databases; the effectiveness of hexagonal sampling in the proposed method is also verified.
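
    The back half of the pipeline, quantizing per-keypoint vectors against a dictionary, histogramming them into Bag-of-Words vectors, and classifying with an SVM, can be sketched generically. Here random arrays stand in for the SIFT descriptors of hexagonally sampled images, and a k-means codebook plays the role of the constructed dictionary:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# random stand-ins for per-image SIFT descriptors: (n_keypoints, 128)
images = [rng.normal(size=(int(rng.integers(20, 40)), 128)) for _ in range(20)]
labels = np.repeat([0, 1], 10)

# the "dictionary": a k-means codebook over all descriptors
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(images))

def bow_histogram(descriptors, k=16):
    """Quantize descriptors against the codebook and return a
    normalized Bag-of-Words histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(d) for d in images])
clf = SVC().fit(X, labels)
print(clf.score(X, labels))  # training accuracy on the toy data
```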

  9. Development and optimization of the determination of pharmaceuticals in water samples by SPE and HPLC with diode-array detection.

    PubMed

    Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra

    2013-09-01

    This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE, and extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges in extracting the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, the elution solvent, and the sorbent mass were optimized. In addition to optimization of the SPE procedure, the optimal HPLC column was selected from columns with different stationary phases from different manufacturers. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases except ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good intra- and inter-day precision with RSD below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
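
    The reported LODs are the kind of figures produced by an ICH-style calibration-curve calculation: 3.3 and 10 times the residual standard deviation divided by the slope. A sketch with hypothetical calibration points (not the paper's data, and not necessarily the authors' exact LOD formula):

```python
import numpy as np

# hypothetical calibration points: spiked concentration (ug/L) vs peak area
conc = np.array([2.4, 10.0, 25.0, 50.0, 100.0, 200.0])
area = np.array([1.1, 4.9, 12.3, 24.8, 50.2, 99.5])

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
s_y = np.sqrt(np.sum(resid ** 2) / (conc.size - 2))  # residual SD

lod = 3.3 * s_y / slope   # ICH-style limit of detection
loq = 10.0 * s_y / slope  # ICH-style limit of quantification
print(f"LOD = {lod:.2f} ug/L, LOQ = {loq:.2f} ug/L")
```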

  10. Testing the ISP method with the PARIO device: Accuracy of results and influence of homogenization technique

    NASA Astrophysics Data System (ADS)

    Durner, Wolfgang; Huber, Magdalena; Yangxu, Li; Steins, Andi; Pertassek, Thomas; Göttlein, Axel; Iden, Sascha C.; von Unold, Georg

    2017-04-01

    The particle-size distribution (PSD) is one of the main properties of soils. To determine the proportions of the fine fractions silt and clay, sedimentation experiments are used, most commonly the Pipette and Hydrometer methods. Both need manual sampling at specific times; both are thus time-demanding and rely on experienced operators. Durner et al. (Durner, W., S.C. Iden, and G. von Unold (2017): The integral suspension pressure method (ISP) for precise particle-size analysis by gravitational sedimentation, Water Resources Research, doi:10.1002/2016WR019830) recently developed the integral suspension pressure (ISP) method, which is implemented in the METER Group device PARIOTM. This new method estimates continuous PSDs from sedimentation experiments by recording the temporal evolution of the suspension pressure at a certain measurement depth in a sedimentation cylinder. It requires no manual interaction after the start and thus no specialized training of the lab personnel. The aim of this study was to test the precision and accuracy of the new method with a variety of materials and to answer the following research questions: (1) Are the results obtained by PARIO reliable and stable? (2) Are the results affected by the initial mixing technique used to homogenize the suspension, or by the presence of sand in the experiment? (3) Are the results identical to those obtained with the Pipette method as the reference method? The experiments were performed with a pure quartz silt material and four real soil materials. PARIO measurements were performed repeatedly on the same samples in a temperature-controlled lab to characterize the repeatability of the measurements. Subsequently, the samples were investigated by the Pipette method to validate the results. We found that the statistical error of the silt fraction from replicate and repeated measurements was in the range of 1% for the quartz material to 3% for the soil materials. Since the sand fractions, as in any sedimentation method, must be measured explicitly and are used as fixed parameters in the PARIO evaluation, the error of the clay fraction follows by error propagation from the sand and silt fractions. Homogenization of the suspension by overhead shaking gave lower reproducibility and smaller silt fractions than vertical stirring. However, vertical stirring must be performed with sufficient rigour to obtain a fully homogeneous initial distribution. Analysis of material sieved to < 2000 μm and to < 200 μm gave equal results, i.e., there was no indication of dragging effects of large particles. Complete removal of the sand fraction, i.e., sieving to < 63 μm, led to less silt, probably due to a loss of fine material during sieving. The PSDs obtained with the PARIO corresponded very well with the results of the Pipette method.
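
    All the sedimentation methods mentioned here, Pipette, Hydrometer, and ISP alike, rest on Stokes' law: the settling velocity of a particle grows with the square of its diameter. A worked sketch of the standard relation, with typical assumed constants for mineral particles in water at 20 °C:

```python
def stokes_diameter_m(depth_m, time_s, rho_s=2650.0, rho_f=998.0,
                      mu=1.0e-3, g=9.81):
    """Largest particle diameter (m) still suspended above `depth_m` after
    `time_s` of settling, from Stokes' law v = (rho_s - rho_f)*g*d^2/(18*mu).
    Defaults: quartz density, water density and viscosity at ~20 C."""
    v = depth_m / time_s
    return (18.0 * mu * v / ((rho_s - rho_f) * g)) ** 0.5

# particles still above a 20 cm measurement depth after 4 h are finer
# than roughly 4 micrometers
d = stokes_diameter_m(0.20, 4 * 3600.0)
print(f"{d * 1e6:.1f} um")
```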

  11. Lipidic cubic phase injector is a viable crystal delivery system for time-resolved serial crystallography

    DOE PAGES

    Nogly, Przemyslaw; Panneels, Valerie; Nelson, Garrett; ...

    2016-08-22

    Serial femtosecond crystallography (SFX) using X-ray free-electron laser sources is an emerging method with considerable potential for time-resolved pump-probe experiments. Here we present a lipidic cubic phase SFX structure of the light-driven proton pump bacteriorhodopsin (bR) to 2.3 Å resolution and a method to investigate protein dynamics with modest sample requirement. Time-resolved SFX (TR-SFX) with a pump-probe delay of 1 ms yields difference Fourier maps compatible with the dark to M state transition of bR. Importantly, the method is very sample efficient and reduces sample consumption to about 1 mg per collected time point. Accumulation of M intermediate within the crystal lattice is confirmed by time-resolved visible absorption spectroscopy. Furthermore, this study provides an important step towards characterizing the complete photocycle dynamics of retinal proteins and demonstrates the feasibility of a sample efficient viscous medium jet for TR-SFX.

  12. Lipidic cubic phase injector is a viable crystal delivery system for time-resolved serial crystallography

    PubMed Central

    Nogly, Przemyslaw; Panneels, Valerie; Nelson, Garrett; Gati, Cornelius; Kimura, Tetsunari; Milne, Christopher; Milathianaki, Despina; Kubo, Minoru; Wu, Wenting; Conrad, Chelsie; Coe, Jesse; Bean, Richard; Zhao, Yun; Båth, Petra; Dods, Robert; Harimoorthy, Rajiv; Beyerlein, Kenneth R.; Rheinberger, Jan; James, Daniel; DePonte, Daniel; Li, Chufeng; Sala, Leonardo; Williams, Garth J.; Hunter, Mark S.; Koglin, Jason E.; Berntsen, Peter; Nango, Eriko; Iwata, So; Chapman, Henry N.; Fromme, Petra; Frank, Matthias; Abela, Rafael; Boutet, Sébastien; Barty, Anton; White, Thomas A.; Weierstall, Uwe; Spence, John; Neutze, Richard; Schertler, Gebhard; Standfuss, Jörg

    2016-01-01

    Serial femtosecond crystallography (SFX) using X-ray free-electron laser sources is an emerging method with considerable potential for time-resolved pump-probe experiments. Here we present a lipidic cubic phase SFX structure of the light-driven proton pump bacteriorhodopsin (bR) to 2.3 Å resolution and a method to investigate protein dynamics with modest sample requirement. Time-resolved SFX (TR-SFX) with a pump-probe delay of 1 ms yields difference Fourier maps compatible with the dark to M state transition of bR. Importantly, the method is very sample efficient and reduces sample consumption to about 1 mg per collected time point. Accumulation of M intermediate within the crystal lattice is confirmed by time-resolved visible absorption spectroscopy. This study provides an important step towards characterizing the complete photocycle dynamics of retinal proteins and demonstrates the feasibility of a sample efficient viscous medium jet for TR-SFX. PMID:27545823

  13. A computational method for estimating the PCR duplication rate in DNA and RNA-seq experiments.

    PubMed

    Bansal, Vikas

    2017-03-14

    PCR amplification is an important step in the preparation of DNA sequencing libraries prior to high-throughput sequencing. PCR amplification introduces redundant reads in the sequence data and estimating the PCR duplication rate is important to assess the frequency of such reads. Existing computational methods do not distinguish PCR duplicates from "natural" read duplicates that represent independent DNA fragments and therefore, over-estimate the PCR duplication rate for DNA-seq and RNA-seq experiments. In this paper, we present a computational method to estimate the average PCR duplication rate of high-throughput sequence datasets that accounts for natural read duplicates by leveraging heterozygous variants in an individual genome. Analysis of simulated data and exome sequence data from the 1000 Genomes project demonstrated that our method can accurately estimate the PCR duplication rate on paired-end as well as single-end read datasets which contain a high proportion of natural read duplicates. Further, analysis of exome datasets prepared using the Nextera library preparation method indicated that 45-50% of read duplicates correspond to natural read duplicates likely due to fragmentation bias. Finally, analysis of RNA-seq datasets from individuals in the 1000 Genomes project demonstrated that 70-95% of read duplicates observed in such datasets correspond to natural duplicates sampled from genes with high expression and identified outlier samples with a 2-fold greater PCR duplication rate than other samples. The method described here is a useful tool for estimating the PCR duplication rate of high-throughput sequence datasets and for assessing the fraction of read duplicates that correspond to natural read duplicates. An implementation of the method is available at https://github.com/vibansal/PCRduplicates .
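
    The key intuition behind using heterozygous variants can be caricatured in a few lines: a PCR duplicate always carries the same allele as its partner, whereas two independent ("natural") fragments match only about half the time. The toy estimator below illustrates that idea only; it is not the published algorithm, which handles many additional complications:

```python
def pcr_duplication_rate(allele_matches, informative_pairs):
    """Toy estimator from duplicate read pairs overlapping heterozygous
    sites. If a fraction q of duplicates are PCR copies (always matching
    alleles) and the rest are natural (matching ~half the time), the
    observed match fraction is m = q + 0.5*(1 - q), so q = 2m - 1."""
    m = allele_matches / informative_pairs
    return max(0.0, 2.0 * m - 1.0)

# e.g. 900 of 1000 informative duplicate pairs share the same allele
print(pcr_duplication_rate(900, 1000))  # 0.8
```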

  14. Learning from samples of one or fewer*

    PubMed Central

    March, J; Sproull, L; Tamuz, M

    2003-01-01

    

 Organizations learn from experience. Sometimes, however, history is not generous with experience. We explore how organizations convert infrequent events into interpretations of history, and how they balance the need to achieve agreement on interpretations with the need to interpret history correctly. We ask what methods are used, what problems are involved, and what improvements might be made. Although the methods we observe are not guaranteed to lead to consistent agreement on interpretations, valid knowledge, improved organizational performance, or organizational survival, they provide possible insights into the possibilities for and problems of learning from fragments of history. PMID:14645764

  15. Quantum state estimation when qubits are lost: a no-data-left-behind approach

    DOE PAGES

    Williams, Brian P.; Lougovski, Pavel

    2017-04-06

    We present an approach to Bayesian mean estimation of quantum states using hyperspherical parametrization and an experiment-specific likelihood which allows utilization of all available data, even when qubits are lost. With this method, we report the first closed-form Bayesian mean and maximum likelihood estimates for the ideal single qubit. Due to computational constraints, we utilize numerical sampling to determine the Bayesian mean estimate for a photonic two-qubit experiment in which our novel analysis reduces burdens associated with experimental asymmetries and inefficiencies. This method can be applied to quantum states of any dimension and experimental complexity.

  16. Experiments on Nucleation in Different Flow Regimes

    NASA Technical Reports Server (NTRS)

    Bayuzick, Robert J.

    1999-01-01

    The vast majority of metallic engineering materials are solidified from the liquid phase. Understanding the solidification process is essential to control microstructure, which in turn determines the properties of materials. The genesis of solidification is nucleation, where the first stable solid forms from the liquid phase. Nucleation kinetics determine the degree of undercooling and phase selection. As such, it is important to understand nucleation phenomena in order to control solidification or glass formation in metals and alloys. Early experiments in nucleation kinetics were accomplished by droplet dispersion methods [1-6]. Dilatometry was used by Turnbull and others, and more recently differential thermal analysis and differential scanning calorimetry have been used for kinetic studies. These techniques have enjoyed success; however, there are difficulties with these experiments. Since materials are dispersed in a medium, the character of the emulsion/metal interface affects the nucleation behavior. Statistics are derived from the large number of particles observed in a single experiment, but dispersions have a finite size distribution, which adds to the uncertainty of the kinetic determinations. Even though temperature can be controlled quite well before the onset of nucleation, the release of the latent heat of fusion during nucleation of particles complicates the assumption of isothermality during these experiments. Containerless processing has enabled another approach to the study of nucleation kinetics [7]. With levitation techniques it is possible to undercool one sample to nucleation repeatedly in a controlled manner, such that the statistics of the nucleation process can be derived from multiple experiments on a single sample. The authors have fully developed the analysis of nucleation experiments on single samples following the suggestions of Skripov [8]. The advantage of these experiments is that the samples are directly observable. The nucleation temperature can be measured by noncontact optical pyrometry, the mass of the sample is known, and post-processing analysis can be conducted on the sample. The disadvantages are that the temperature measurement must have exceptionally high precision, and it is not possible to isolate specific heterogeneous sites as in droplet dispersions.

  17. Evaluation of sequential extraction procedures for soluble and insoluble hexavalent chromium compounds in workplace air samples.

    PubMed

    Ashley, Kevin; Applegate, Gregory T; Marcy, A Dale; Drake, Pamela L; Pierce, Paul A; Carabin, Nathalie; Demange, Martine

    2009-02-01

    Because toxicities may differ for Cr(VI) compounds of varying solubility, some countries and organizations have promulgated different occupational exposure limits (OELs) for soluble and insoluble hexavalent chromium (Cr(VI)) compounds, and analytical methods are needed to determine these species in workplace air samples. To address this need, international standard methods ASTM D6832 and ISO 16740 have been published that describe sequential extraction techniques for soluble and insoluble Cr(VI) in samples collected from occupational settings. However, no published performance data were previously available for these Cr(VI) sequential extraction procedures. In this work, the sequential extraction methods outlined in the relevant international standards were investigated. The procedures tested involved the use of either deionized water or an ammonium sulfate/ammonium hydroxide buffer solution to target soluble Cr(VI) species. This was followed by extraction in a sodium carbonate/sodium hydroxide buffer solution to dissolve insoluble Cr(VI) compounds. Three-step sequential extraction with (1) water, (2) sulfate buffer and (3) carbonate buffer was also investigated. Sequential extractions were carried out on spiked samples of soluble, sparingly soluble and insoluble Cr(VI) compounds, and analyses were then generally carried out by using the diphenylcarbazide method. Similar experiments were performed on paint pigment samples and on airborne particulate filter samples collected from stainless steel welding. Potential interferences from soluble and insoluble Cr(III) compounds, as well as from Fe(II), were investigated. Interferences from Cr(III) species were generally absent, while the presence of Fe(II) resulted in low Cr(VI) recoveries. Two-step sequential extraction of spiked samples with (first) either water or sulfate buffer, and then carbonate buffer, yielded quantitative recoveries of soluble Cr(VI) and insoluble Cr(VI), respectively. Three-step sequential extraction gave excessively high recoveries of soluble Cr(VI), low recoveries of sparingly soluble Cr(VI), and quantitative recoveries of insoluble Cr(VI). Experiments on paint pigment samples using two-step extraction with water and carbonate buffer yielded varying percentages of relative fractions of soluble and insoluble Cr(VI). Sequential extractions of stainless steel welding fume air filter samples demonstrated the predominance of soluble Cr(VI) compounds in such samples. The performance data obtained in this work support the Cr(VI) sequential extraction procedures described in the international standards.

  18. What’s in a drop? Correlating observations and outcomes to guide macromolecular crystallization experiments

    PubMed Central

    Luft, Joseph R.; Wolfley, Jennifer R.; Snell, Edward H.

    2011-01-01

    Observations of crystallization experiments are classified as specific outcomes and integrated through a phase diagram to visualize solubility and thereby direct subsequent experiments. Specific examples are taken from our high-throughput crystallization laboratory which provided a broad scope of data from 20 million crystallization experiments on 12,500 different biological macromolecules. The methods and rationale are broadly and generally applicable in any crystallization laboratory. Through a combination of incomplete factorial sampling of crystallization cocktails, standard outcome classifications, visualization of outcomes as they relate chemically and application of a simple phase diagram approach we demonstrate how to logically design subsequent crystallization experiments. PMID:21643490

  19. Enabling Self-Monitoring Data Exchange in Participatory Medicine.

    PubMed

    Lopez-Campos, Guillermo; Ofoghi, Bahadorreza; Martin-Sanchez, Fernando

    2015-01-01

    The development of new methods, devices and apps for self-monitoring has enabled the extension of these approaches to consumer health and research purposes. The increase in the number and variety of devices has generated a complex scenario where reporting guidelines and data exchange formats will be needed to ensure the quality of the information and the reproducibility of experimental results. Based on the Minimal Information for Self Monitoring Experiments (MISME) reporting guideline, we have developed an XML format (MISME-ML) to facilitate data exchange for self-monitoring experiments. We have also developed a sample instance to illustrate the concept, and a Java MISME-ML validation tool. The implementation and adoption of these tools should contribute to the consolidation of a set of methods that ensure the reproducibility of self-monitoring experiments for research purposes.
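
    To give a flavor of what a MISME-ML instance might look like, here is a hypothetical example built with Python's standard library. The element and attribute names are invented for illustration and are not taken from the actual MISME-ML schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical MISME-ML instance: one device, one variable, one sample.
root = ET.Element("misme")
ET.SubElement(root, "device", {"name": "wrist-tracker", "model": "X1"})
meas = ET.SubElement(root, "measurement",
                     {"variable": "heart_rate", "unit": "bpm"})
sample = ET.SubElement(meas, "sample",
                       {"timestamp": "2015-06-01T08:00:00Z"})
sample.text = "62"

print(ET.tostring(root, encoding="unicode"))
```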

  1. A multiresidue method by high performance liquid chromatography-based fractionation and gas chromatographic determination of trace levels of pesticides in air and water.

    PubMed

    Seiber, J N; Glotfelty, D E; Lucas, A D; McChesney, M M; Sagebiel, J C; Wehner, T A

    1990-01-01

    A multiresidue analytical method is described for pesticides, transformation products, and related toxicants based upon high performance liquid chromatographic (HPLC) fractionation of extracted residue on a Partisil silica gel normal phase column followed by selective-detector gas chromatographic (GC) determination of components in each fraction. The HPLC mobile phase gradient (hexane to methyl t-butyl ether) gave good chromatographic efficiency, resolution, reproducibility and recovery for 61 test compounds, and allowed for collection in four fractions spanning polarities from low polarity organochlorine compounds (fraction 1) to polar N-methylcarbamates and organophosphorus oxons (fraction 4). The multiresidue method was developed for use with air samples collected on XAD-4 and related trapping agents, and water samples extracted with methylene chloride. Detection limits estimated from spiking experiments were generally 0.3-1 ng/m3 for high-volume air samples, and 0.01-0.1 microgram/L for one-liter water samples. Applications were made to determination of pesticides in fogwater and air samples.

  2. Magnetic Stirrer Method for the Detection of Trichinella Larvae in Muscle Samples.

    PubMed

    Mayer-Scholl, Anne; Pozio, Edoardo; Gayda, Jennifer; Thaben, Nora; Bahn, Peter; Nöckler, Karsten

    2017-03-03

    Trichinellosis is a debilitating disease in humans and is caused by the consumption of raw or undercooked meat of animals infected with the nematode larvae of the genus Trichinella. The most important sources of human infections worldwide are game meat and pork or pork products. In many countries, the prevention of human trichinellosis is based on the identification of infected animals by means of the artificial digestion of muscle samples from susceptible animal carcasses. There are several methods based on the digestion of meat but the magnetic stirrer method is considered the gold standard. This method allows the detection of Trichinella larvae by microscopy after the enzymatic digestion of muscle samples and subsequent filtration and sedimentation steps. Although this method does not require special and expensive equipment, internal controls cannot be used. Therefore, stringent quality management should be applied throughout the test. The aim of the present work is to provide detailed handling instructions and critical control points of the method to analysts, based on the experience of the European Union Reference Laboratory for Parasites and the National Reference Laboratory of Germany for Trichinella.

  3. Local Intrinsic Dimension Estimation by Generalized Linear Modeling.

    PubMed

    Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru

    2017-07-01

    We propose a method for intrinsic dimension estimation. By fitting a regression model that relates the distance from an inspection point to the number of samples contained in a ball of that radius, we estimate the goodness of fit. Then, using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we show experimentally that the proposed method outperforms a conventional local dimension estimation method.
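
    The regression step described above amounts to fitting log N(r) against log r, since the number of samples inside a ball of radius r grows roughly like r^d on a d-dimensional manifold. A minimal sketch of that step (the maximum-likelihood refinement in the paper is omitted):

```python
import numpy as np

def local_dimension(data, point, radii):
    """Regress log N(r) on log r around `point`: the slope estimates the
    local intrinsic dimension, since N(r) ~ r^d inside the data manifold."""
    dists = np.linalg.norm(data - point, axis=1)
    counts = np.array([(dists <= r).sum() for r in radii])
    mask = counts > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(counts[mask]), 1)
    return slope

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3))          # points filling a 3-D space
radii = np.linspace(0.2, 1.0, 10)
print(local_dimension(X, X[0], radii))  # roughly 3
```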

  4. Services provided in support of the planetary quarantine requirements

    NASA Technical Reports Server (NTRS)

    Favero, M. S.

    1972-01-01

    Results are presented of laboratory experiments on the thermal resistance of naturally occurring airborne spores, and of microbiological examinations of space hardware using long-term slit samplers and the Rodac-plate and swab-rinse methods of sampling environmental surfaces.

  5. 3D TOCSY-HSQC NMR for metabolic flux analysis using non-uniform sampling

    DOE PAGES

    Reardon, Patrick N.; Marean-Reardon, Carrie L.; Bukovec, Melanie A.; ...

    2016-02-05

    13C-Metabolic Flux Analysis (13C-MFA) is rapidly being recognized as the authoritative method for determining fluxes through metabolic networks. Site-specific 13C enrichment information obtained using NMR spectroscopy is a valuable input for 13C-MFA experiments. Chemical shift overlaps in the 1D or 2D NMR experiments typically used for 13C-MFA frequently hinder assignment and quantitation of site-specific 13C enrichment. Here we propose the use of a 3D TOCSY-HSQC experiment for 13C-MFA. We employ Non-Uniform Sampling (NUS) to reduce the acquisition time of the experiment to a few hours, making it practical for use in 13C-MFA experiments. Our data show that the NUS experiment is linear and quantitative. Identification of metabolites in complex mixtures, such as a biomass hydrolysate, is simplified by virtue of the 13C chemical shift obtained in the experiment. In addition, the experiment reports 13C-labeling information that reveals the position-specific labeling of subsets of isotopomers. As a result, the information provided by this technique will enable more accurate estimation of metabolic fluxes in larger metabolic networks.
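
    For readers unfamiliar with NUS, the sketch below builds a simple exponentially biased sampling schedule for an indirect NMR dimension: only a fraction of the increments are acquired, weighted toward early points where a decaying signal is strongest. The parameters are illustrative assumptions, not those of the published experiment.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_total = 256      # nominal t1 increments for a fully sampled experiment
    fraction = 0.25    # acquire 25% of increments -> roughly 4x faster
    decay = 64.0       # assumed signal decay constant, in increments

    # Bias the schedule toward early increments, where signal is strongest
    weights = np.exp(-np.arange(n_total) / decay)
    weights /= weights.sum()
    schedule = np.sort(rng.choice(n_total, size=int(fraction * n_total),
                                  replace=False, p=weights))
    print(schedule[:10])   # indices of the increments actually acquired
    ```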

  6. Community‐Based Participatory Research Skills and Training Needs in a Sample of Academic Researchers from a Clinical and Translational Science Center in the Northeast

    PubMed Central

    DiGirolamo, Ann; Geller, Alan C.; Tendulkar, Shalini A.; Patil, Pratima; Hacker, Karen

    2012-01-01

    Abstract Purpose: To determine the community‐based participatory research (CBPR) training interests and needs of researchers interested in CBPR, to inform efforts to build infrastructure for conducting community‐engaged research. Method: A 20‐item survey was completed by 127 academic health researchers at Harvard Medical School, Harvard School of Public Health, and Harvard-affiliated hospitals. Results: Slightly more than half of the participants (58%) reported current or prior experience with CBPR. Across all levels of academic involvement, approximately half of the participants with CBPR experience reported lacking skills in research methods and dissemination, with even fewer reporting skills in training of community partners. Regardless of prior CBPR experience, about half of the respondents reported training needs in funding, partnership development, evaluation, and dissemination of CBPR projects. Among those with CBPR experience, more than one‐third wanted a mentor in CBPR; however, only 19% were willing to act as a mentor. Conclusions: Despite having experience with CBPR, many respondents did not have the comprehensive package of CBPR skills, reporting a need for training in a variety of CBPR skill sets. Further, the apparent mismatch between the need for mentors and their availability in this sample suggests an important area for development. Clin Trans Sci 2012; Volume #: 1–5. PMID: 22686211

  7. Analysis of degraded papers by infrared and Raman spectroscopy for forensic purposes

    NASA Astrophysics Data System (ADS)

    Zięba-Palus, J.; Wesełucha-Birczyńska, A.; Trzcińska, B.; Kowalski, R.; Moskal, P.

    2017-07-01

    Paper, as the substrate of many documents, is often the subject of forensic examination. The growing number of forged or otherwise fraudulently altered documents makes it necessary to identify individual paper sheets and to discriminate between sheets belonging to a questioned document. Frequently it is necessary to distinguish between paper of the same type but of a different age. It is therefore essential to know whether the degradation of paper affects the ability to differentiate between paper samples. Samples of five types of office paper from different manufacturers were artificially aged in a climatic chamber at 65% relative humidity in air at 90 °C for periods of up to 35 days. The conditioned samples were examined by infrared and Raman spectroscopy. Three cards of each paper type were chosen for the experiment, and three different spots on each card were measured to ensure the reproducibility of both spectroscopic methods. The possibility of differentiating between aged samples was evaluated. 2D correlation analysis based on Noda's method was carried out using the ATR FTIR spectra as input data for generating correlation maps. The patterns of the 2D maps made it possible to distinguish the tested paper samples, identify their components, and gain insight into the paper degradation mechanism.
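
    The synchronous and asynchronous maps of Noda's generalized 2D correlation analysis have a compact matrix form, sketched below for a perturbation-ordered series of spectra (here, spectra of increasingly aged samples). This is a generic implementation of the standard formalism; the preprocessing used in the study is not reproduced.

    ```python
    import numpy as np

    def noda_2d_correlation(spectra):
        """spectra: (m, n) array of m perturbation-ordered spectra over n
        wavenumbers. Returns the synchronous and asynchronous maps."""
        m, _ = spectra.shape
        dyn = spectra - spectra.mean(axis=0)       # dynamic spectra
        sync = dyn.T @ dyn / (m - 1)               # synchronous map

        # Hilbert-Noda transformation matrix: 0 on the diagonal,
        # 1 / (pi * (k - j)) elsewhere (the +(j == k) only avoids a
        # divide-by-zero on entries that np.where discards anyway)
        j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
        N = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j + (j == k))))
        async_map = dyn.T @ (N @ dyn) / (m - 1)    # asynchronous map
        return sync, async_map
    ```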

  8. High-performance liquid chromatography (HPLC) as a tool for monitoring the fate of fluazinam in soil.

    PubMed

    Hakala, Kati P; Tuomainen, Päivi M; Yli-Halla, Markku J; Hartikainen, Helinä

    2014-01-01

    Fluazinam is a widely used pesticide employed against the fungal disease late blight in potato cultivation. A specific, repeatable, and rapid high-performance liquid chromatography (HPLC) method using a diode array detector (DAD) was developed to determine fluazinam in soil. The method consists of acetonitrile (ACN) extraction, clean-up with solid-phase extraction (SPE), and separation using a mobile phase of 70% ACN and 30% water (v/v) containing 0.02% acetic acid. HPLC was performed with a C18 column and the detection wavelength was 240 nm. The method was successfully applied to an incubation experiment and to soil samples taken from potato fields where fluazinam had been applied two to three times during the ongoing growing season. In the 90-day incubation experiment, analytical-standard fluazinam and the commercial fungicide Shirlan® were added to soil samples that had never been treated with fluazinam, which were then extracted with ACN and with 0.01 M calcium chloride (CaCl2). Fluazinam was not extractable with CaCl2, indicating that it does not leach to watercourses in dissolved form. Recovery with ACN extraction for sandy soils was 72-95% immediately after application and 53-73% after 90 days of incubation. Of the eight potato field soil samples, fluazinam was found in two, at concentrations of 2.1 mg/kg and 1.9 mg/kg, well above the limit of quantification (0.1 mg/kg).
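
    The recovery figures quoted above follow from the standard spike-recovery calculation, shown here with illustrative numbers rather than data from the study:

    ```python
    # recovery (%) = measured concentration / spiked concentration * 100
    spiked_mg_per_kg = 1.00     # fluazinam added to untreated soil (assumed)
    measured_mg_per_kg = 0.82   # concentration found after extraction (assumed)
    recovery_pct = 100.0 * measured_mg_per_kg / spiked_mg_per_kg
    print(f"recovery = {recovery_pct:.0f}%")   # 82%
    ```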

  9. New methodology for the thermal characterization of thermoelectric liquids

    NASA Astrophysics Data System (ADS)

    Touati, Karim; Depriester, Michael; Kuriakose, Maju; Hadj Sahraoui, Abdelhak

    2015-09-01

    A new and accurate method for the thermal characterization of thermoelectric liquids is proposed. The experiment is based on the voltage self-generated, through the Seebeck effect, by the sample when one of its two faces is thermally excited with a modulated laser. The sample used is a tetradodecylammonium nitrate salt/1-octanol mixture with a high Seebeck coefficient. The thermal properties of this sample (thermal diffusivity, effusivity, and conductivity) are determined and compared to those obtained by other photothermal techniques. In addition, the variation of the electrolyte's thermal parameters with tetradodecylammonium nitrate concentration was also studied. This new method is promising owing to its accuracy and simplicity.
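
    A relation commonly used in such modulated photothermal experiments is the thermal diffusion length mu = sqrt(alpha / (pi * f)), which sets how deeply the temperature oscillation penetrates the sample at modulation frequency f. The values below are illustrative assumptions, not data from the study:

    ```python
    import numpy as np

    alpha = 8.0e-8                            # assumed thermal diffusivity, m^2/s
    freqs = np.array([1.0, 2.0, 5.0, 10.0])   # laser modulation frequencies, Hz
    mu = np.sqrt(alpha / (np.pi * freqs))     # thermal diffusion length, m
    print(mu * 1e6)                           # in micrometers
    ```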

  10. Self-expansion and flow in couples' momentary experiences: an experience sampling study.

    PubMed

    Graham, James M

    2008-09-01

    The self-expansion model of close relationships posits that when couples engage in exciting and activating conjoint activities, they feel connected with their partners and more satisfied with their relationships. In the present study, the experience sampling method was used to examine the predictions of the self-expansion model in couples' momentary experiences. In addition, the author generated several new hypotheses by integrating the self-expansion model with existing research on flow. Over the course of 1 week, 20 couples were signaled at quasi-random intervals to provide data on 1,265 unique experiences. The results suggest that the level of activation experienced during an activity was positively related to experience-level relationship quality. This relationship was consistent across free-time and nonfree-time contexts and was mediated by positive affect. Activation was not found to predict later affect unless the level of activation exceeded what was typical for the individual. Also examined was the influence of interpersonal context and activity type on self-expansion. The results support the self-expansion model and suggest that it could be considered under the broader umbrella of flow.

  11. Quantitative evaluation of analyte transport on microfluidic paper-based analytical devices (μPADs).

    PubMed

    Ota, Riki; Yamada, Kentaro; Suzuki, Koji; Citterio, Daniel

    2018-02-07

    The transport efficiency of capillary flow-driven sample transport on microfluidic paper-based analytical devices (μPADs) made from filter paper has been investigated for a selection of model analytes (Ni2+, Zn2+, Cu2+, PO43-, bovine serum albumin, sulforhodamine B, amaranth) representing metal cations, complex anions, proteins, and anionic molecules. For the first time, the transport of the analytical target compounds, rather than the sample liquid, has been quantitatively evaluated by means of colorimetry and absorption spectrometry-based methods. The experiments revealed that small paperfluidic channel dimensions, additional user operation steps (e.g. control of sample volume, sample dilution, washing steps), and the introduction of sample liquid wicking areas make it possible to increase analyte transport efficiency. It is also shown that the interaction of analytes with the negatively charged cellulosic paper surface is strongly influenced by the physico-chemical properties of the analyte and can in some cases (Cu2+) result in nearly complete analyte depletion during sample transport. The quantitative information gained through these experiments is expected to contribute to the development of more sensitive μPADs.
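
    A minimal version of such a transport-efficiency calculation compares the amount of analyte recovered at the detection zone, via an absorbance calibration, with the amount deposited. The function and numbers below are hypothetical stand-ins, not the paper's protocol:

    ```python
    def transport_efficiency(abs_detected, calib_slope, deposited_nmol):
        """Percent of deposited analyte reaching the detection zone,
        assuming a linear calibration A = calib_slope * amount."""
        recovered_nmol = abs_detected / calib_slope
        return 100.0 * recovered_nmol / deposited_nmol

    print(transport_efficiency(abs_detected=0.42, calib_slope=0.015,
                               deposited_nmol=50.0))  # 56% transported
    ```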

  12. Ellipsometric Analysis of Contaminant Layer on Optical Witness Samples from MISSE

    NASA Technical Reports Server (NTRS)

    Norwood, Joseph K.

    2007-01-01

    Several optical witness samples included in the Materials International Space Station Experiment (MISSE) trays have been analyzed with a variable-angle spectroscopic ellipsometer (VASE). Witness samples of gold or platinum mirrors are extremely useful as collectors of space-borne contamination, owing to the relative inertness of these noble metals in the atomic oxygen-rich environment of LEO. Highly accurate thickness measurements, typically at the sub-nanometer scale, may be achieved with this method, which uses polarized light over a spectral range of 300 to 1300 nanometers at several angles of incidence to the sample surface.

  13. Exposure to Community Violence and Protective and Risky Contexts among Low Income Urban African American Adolescents: A Prospective Study

    ERIC Educational Resources Information Center

    Goldner, Jonathan; Peters, Tracy L.; Richards, Maryse H.; Pearce, Steven

    2011-01-01

    This study examined protective and risky companionship and locations for exposure to community violence among African American young adolescents living in high crime, urban areas. The Experience Sampling Method (ESM), an in vivo data collection method, was employed to gather information from 233 students (62% female) over 3 years, beginning in the…

  14. Development of a near-infrared spectroscopic system for monitoring urine glucose level for the use of long-term home healthcare

    NASA Astrophysics Data System (ADS)

    Tanaka, Shinobu; Hayakawa, Yuuto; Ogawa, Mitsuhiro; Yamakoshi, Ken-ichi

    2010-08-01

    We have been developing a new technique for measuring urine glucose concentration using near-infrared spectroscopy (NIRS) in conjunction with the partial least squares (PLS) method. In a previous study, we reported preliminary experiments assessing the feasibility of this method using an FT-IR spectrometer. In the present study, to improve the practicality of the system, a flow-through cell with an optical path length of 10 mm was introduced. The accuracy of the system was verified in preliminary experiments using urine samples. The results clearly demonstrated that the method is capable of predicting individual urine glucose levels with reasonable accuracy (minimum standard error of prediction: SEP = 22.3 mg/dl) and appears to be a useful means for long-term home healthcare. However, the mean SEP obtained for urine samples from ten subjects was not satisfactorily low (53.7 mg/dl). To improve the accuracy, (1) the mechanical stability of the optical system should be improved, (2) the method for normalizing the spectra should be reconsidered, and (3) the number of subjects should be increased.
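
    A calibration of this kind is commonly built with partial least squares regression; the sketch below fits a PLS model to synthetic spectra and reports the standard error of prediction (SEP), the accuracy metric quoted above. All data and settings are illustrative assumptions, not the authors' calibration.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    glucose = rng.uniform(0, 500, 120)                 # reference values, mg/dL
    response = rng.normal(1.0, 0.1, 200)               # per-wavelength sensitivity
    spectra = (glucose[:, None] * response[None, :] * 1e-3
               + rng.normal(0, 0.05, (120, 200)))      # synthetic NIR spectra

    X_tr, X_te, y_tr, y_te = train_test_split(spectra, glucose, random_state=0)
    pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
    resid = y_te - pls.predict(X_te).ravel()
    sep = np.sqrt(np.mean((resid - resid.mean()) ** 2))  # bias-corrected SEP
    print(f"SEP = {sep:.1f} mg/dL")
    ```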

  15. Theoretical precision analysis of RFM localization of satellite remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqing; Xv, Biao

    2009-11-01

    The traditional method of assessing the precision of the Rational Function Model (RFM) uses a large number of check points, calculating the mean square error by comparing computed coordinates with known coordinates. This approach is grounded in probability theory: the mean square error is estimated statistically from a large sample, and the estimate can be considered to approach the true value when the sample is large enough. This paper instead approaches the problem from the perspective of survey adjustment, taking the law of propagation of error as its theoretical basis to calculate the theoretical precision of RFM localization. SPOT5 three-line-array imagery is then used as experimental data, and the results of the traditional method and the method described in this paper are compared. The comparison confirms that the traditional method is feasible, and answers the question of its theoretical precision from the survey adjustment perspective.
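
    The traditional check-point test the paper starts from is straightforward to state in code: project known ground coordinates through the fitted RFM and accumulate the error against measured image coordinates. Here `rfm_project` is a hypothetical placeholder for an actual rational-function evaluation; the paper's contribution is deriving the same precision analytically via the law of propagation of error instead.

    ```python
    import numpy as np

    def check_point_rmse(rfm_project, ground_pts, image_pts):
        """RMSE of RFM-predicted vs. measured image coordinates over a set
        of check points (the traditional, sample-based precision estimate)."""
        pred = np.array([rfm_project(p) for p in ground_pts])
        resid = pred - np.asarray(image_pts)
        return np.sqrt((resid ** 2).mean(axis=0))   # per-axis RMSE
    ```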

  16. Low-dose, high-resolution and high-efficiency ptychography at STXM beamline of SSRF

    NASA Astrophysics Data System (ADS)

    Xu, Zijian; Wang, Chunpeng; Liu, Haigang; Tao, Xulei; Tai, Renzhong

    2017-06-01

    Ptychography is a diffraction-based X-ray microscopy method that can image extended samples quantitatively while removing the resolution limit imposed by image-forming optical elements. As a natural extension of the scanning transmission X-ray microscopy (STXM) imaging method, we developed a soft X-ray ptychographic coherent diffraction imaging (PCDI) method at the STXM endstation of the BL08U beamline of the Shanghai Synchrotron Radiation Facility. Compared to traditional STXM imaging, the new PCDI method delivers significantly lower-dose, higher-resolution, and higher-efficiency imaging on our platform. In the demonstration experiments shown here, a spatial resolution below 10 nm was obtained for a gold nanowire sample, much better than the 30 nm resolution limit of the STXM method, while the radiation dose was only 1/12 that of STXM.
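
    Ptychographic reconstructions are typically run with an iterative engine such as ePIE; the sketch below shows one generic ePIE update at a single scan position. It illustrates the class of algorithm only and is not the specific reconstruction code used at BL08U.

    ```python
    import numpy as np

    def epie_update(obj, probe, diff_amp, pos, alpha=1.0, beta=1.0):
        """One ePIE iteration at scan position pos (top-left corner of the
        probe window); diff_amp is the measured diffraction amplitude
        (sqrt of intensity). obj and probe are complex arrays."""
        sy, sx = pos
        h, w = probe.shape
        view = obj[sy:sy + h, sx:sx + w]
        psi = probe * view                              # exit wave
        Psi = np.fft.fft2(psi)
        Psi = diff_amp * np.exp(1j * np.angle(Psi))     # enforce measured modulus
        diff = np.fft.ifft2(Psi) - psi                  # correction in real space
        obj[sy:sy + h, sx:sx + w] += (alpha * np.conj(probe) * diff
                                      / (np.abs(probe) ** 2).max())
        probe += beta * np.conj(view) * diff / (np.abs(view) ** 2).max()
        return obj, probe
    ```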

  17. Influence of persistent exchangeable oxygen on biogenic silica δ18O in deep sea cores

    NASA Astrophysics Data System (ADS)

    Menicucci, A. J.; Spero, H. J.

    2016-12-01

    The removal of exchangeable oxygen from biogenic opal prior to IRMS analysis is a critical step in sample preparation. Exchangeable oxygen occurs as hydroxyl groups and at defects within the amorphous silicate lattice. Typical analytical procedures use a variety of dehydroxylation methods to eliminate this exchangeable oxygen, including vacuum dehydroxylation and prefluorination. Such methods are generally considered sufficient to eliminate the non-lattice-bound oxygen that would obscure the environmental oxygen isotopic signals contained within the silicate tetrahedra. The δ18O data are then empirically calibrated against modern hydrographic data and applied downcore in paleoceanographic applications. We have conducted a suite of experiments on purified marine opal samples using the new microfluorination method (Menicucci et al., 2013). Our data demonstrate that the amount of exchangeable oxygen in biogenic opal decreases as sample age/depth in core increases; these changes are not accounted for in current practice. Further, our experimental data indicate that vacuum dehydroxylation does not eliminate all exchangeable oxygen, even after hydroxyl is undetectable. We have conducted experiments to quantify the amount of time necessary to ensure that vacuum dehydroxylation has eliminated exchangeable oxygen, so that opal samples are stable prior to δ18Odiatom analysis. Our experiments suggest that previously generated opal δ18O data may contain a variable down-core offset due to the presence of exchangeable, non-lattice-bound oxygen, and they indicate that diatom silica requires dehydroxylation for ≥ 44 hours at 1060 °C to quantitatively remove all non-lattice-bound oxygen. This variable amount of exchangeable oxygen may also be responsible for some of the disagreement between existing empirical calibrations based on core-top diatom frustule remains. Analysis of δ18Odiatom values after this long vacuum dehydroxylation is necessary for quantitative comparison of stable isotopic values across geologic time periods. Menicucci, A. J., et al. (2013). "Oxygen isotope analyses of biogenic opal and quartz using a novel microfluorination technique." Rapid Communications in Mass Spectrometry 27(16): 1873-1881.

  18. Advancing the Use of Passive Sampling in Risk Assessment and Management of Sediments Contaminated with Hydrophobic Organic Chemicals: Results of an International Ex Situ Passive Sampling Interlaboratory Comparison

    PubMed Central

    2018-01-01

    This work presents the results of an international interlaboratory comparison on ex situ passive sampling in sediments. The main objectives were to map the state of the science in passively sampling sediments, identify sources of variability, provide recommendations and practical guidance for standardized passive sampling, and advance the use of passive sampling in regulatory decision making by increasing confidence in the use of the technique. The study was performed by a consortium of 11 laboratories and included experiments with 14 passive sampling formats on 3 sediments for 25 target chemicals (PAHs and PCBs). The resulting overall interlaboratory variability was large (a factor of ∼10), but standardization of methods halved this variability. The remaining variability was primarily due to factors not related to passive sampling itself, i.e., sediment heterogeneity and analytical chemistry. Excluding the latter source of variability, by performing all analyses in one laboratory, showed that passive sampling results can have a high precision and a very low intermethod variability (

  19. Magnetic constraints on early lunar evolution revisited: Limits on accuracy imposed by methods of paleointensity measurements

    NASA Technical Reports Server (NTRS)

    Banerjee, S. K.

    1984-01-01

    It is impossible to carry out conventional paleointensity experiments, which require repeated heating and cooling to 770 °C, without chemical, physical, or microstructural changes in lunar samples. Non-thermal methods of paleointensity determination have therefore been sought: the two anhysteretic remanent magnetization (ARM) methods and the saturation isothermal remanent magnetization (IRMS) method. The experimental errors inherent in these alternative approaches have been investigated to estimate the accuracy limits on the calculated paleointensities. Results are indicated in this report.

  20. The End-of-Life Experience in Long-Term Care: Five Themes Identified from Focus Groups with Residents, Family Members, and Staff

    ERIC Educational Resources Information Center

    Munn, Jean C.; Dobbs, Debra; Meier, Andrea; Williams, Christianna S.; Biola, Holly; Zimmerman, Sheryl

    2008-01-01

    Purpose: We designed this study to examine the end-of-life (EOL) experience in long-term care (LTC) based on input from key stakeholders. Design and Methods: The study consisted of 10 homogeneous focus groups drawn from a purposive sample of LTC residents (2 groups; total n = 11), family caregivers (2 groups; total n = 19), paraprofessional staff…
