Sample records for sample selection method

  1. Local Feature Selection for Data Classification.

    PubMed

    Armanfard, Narges; Reilly, James P; Komeili, Majid

    2016-06-01

    Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.
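
As a rough, hypothetical illustration of the idea (not the authors' linear-programming formulation), the sketch below builds toy data in which different features are informative in different regions of the sample space, and shows that scoring features per region recovers a different best feature in each region, whereas a single global score cannot. The data, the region split, and the ANOVA-style score are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: in region A (x0 < 0) feature 1 separates the classes;
# in region B (x0 >= 0) feature 2 does.
n = 200
X = rng.normal(size=(n, 3))
region = (X[:, 0] >= 0).astype(int)
y = np.where(region == 0, X[:, 1] > 0, X[:, 2] > 0).astype(int)

def f_score(x, y):
    """One-way ANOVA-style class-separation score for a single feature."""
    x0, x1 = x[y == 0], x[y == 1]
    between = (x0.mean() - x1.mean()) ** 2
    within = x0.var() + x1.var() + 1e-12
    return between / within

# Global selection: one score per feature over all samples.
global_scores = [f_score(X[:, j], y) for j in range(3)]

# Localized selection: score features separately inside each region.
local_best = {}
for r in (0, 1):
    m = region == r
    scores = [f_score(X[m, j], y[m]) for j in range(3)]
    local_best[r] = int(np.argmax(scores))

print("best feature per region:", local_best)
```

Each region ends up with its own optimal feature, which is the core behaviour the LFS approach formalizes.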

  2. A Comparison of EPI Sampling, Probability Sampling, and Compact Segment Sampling Methods for Micro and Small Enterprises

    PubMed Central

    Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B.; Pereira, Nuno Sousa; Behrman, Jere

    2011-01-01

    Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling methods: the traditional probability sampling method, the compact segment sampling method, and the World Health Organization’s Expanded Programme on Immunization (EPI) sampling method. Three mechanisms by which the methods could differ are tested: the proximity selection of respondents, the at-home selection of respondents, and the use of inaccurate probability weights. The results highlight the importance of revisits and accurate probability weights, but the lesser effect of proximity selection on the samples’ statistical properties. PMID:22582004
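
The proximity mechanism can be illustrated with a small, invented simulation (not the study's actual census data or EPI protocol): an EPI-style walk that repeatedly selects the nearest remaining location yields a much more tightly clustered sample than simple random sampling.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "census": 1000 business locations scattered over the unit square.
pts = rng.uniform(size=(1000, 2))

def srs(pts, k, rng):
    """Simple random sample of k locations."""
    idx = rng.choice(len(pts), size=k, replace=False)
    return pts[idx]

def epi_walk(pts, k, rng):
    """EPI-style proximity selection: random start, then repeatedly
    move to the nearest not-yet-selected location."""
    chosen = [int(rng.integers(len(pts)))]
    remaining = set(range(len(pts))) - set(chosen)
    while len(chosen) < k:
        cur = pts[chosen[-1]]
        rem = list(remaining)
        d = np.linalg.norm(pts[rem] - cur, axis=1)
        nxt = rem[int(np.argmin(d))]
        chosen.append(nxt)
        remaining.remove(nxt)
    return pts[chosen]

def mean_spread(sample):
    """Mean pairwise distance within a sample (a simple clustering measure)."""
    d = np.linalg.norm(sample[:, None, :] - sample[None, :, :], axis=-1)
    return d[np.triu_indices(len(sample), 1)].mean()

srs_spread = mean_spread(srs(pts, 30, rng))
epi_spread = mean_spread(epi_walk(pts, 30, rng))
print(f"mean pairwise distance  SRS: {srs_spread:.3f}  EPI-style: {epi_spread:.3f}")
```

The much smaller spread of the EPI-style sample is exactly the proximity-selection effect the simulations probe: nearby respondents are over-represented relative to a probability sample.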

  3. Field-based random sampling without a sampling frame: control selection for a case-control study in rural Africa.

    PubMed

    Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E

    2001-01-01

    Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.
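
A minimal sketch of age/sex frequency matching with entirely hypothetical case and population data (the study's actual field procedure also involved geographically proportional random selection, which is omitted here):

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical cases: (age group, sex) of each case.
cases = [("15-24", "M"), ("15-24", "M"), ("25-34", "F"),
         ("25-34", "F"), ("25-34", "M"), ("35-44", "F")]
target = Counter(cases)  # desired age/sex frequencies for the controls

# Hypothetical population register to draw controls from.
population = [{"id": f"{ag}-{sex}-{i}", "age_group": ag, "sex": sex}
              for ag in ("15-24", "25-34", "35-44", "45-54")
              for sex in ("M", "F")
              for i in range(100)]

# Frequency matching: within each age/sex stratum, draw a simple random
# sample of controls equal in size to the number of cases in that stratum.
controls = []
for (ag, sex), k in target.items():
    pool = [p for p in population if p["age_group"] == ag and p["sex"] == sex]
    controls.extend(random.sample(pool, k))

matched = Counter((c["age_group"], c["sex"]) for c in controls)
print(matched)  # control frequencies mirror the case frequencies
```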

  4. A novel heterogeneous training sample selection method on space-time adaptive processing

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The ground-target detection performance of space-time adaptive processing (STAP) degrades when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. To solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance, so as to reject contaminated training samples. Thirdly, the cell under test (CUT) and the residual training samples are projected into the orthogonal subspace of the target in the CUT, and the mean-Hausdorff distances between the projected CUT and training samples are calculated. Fourthly, the distances are sorted by value, and the training samples with the largest values are preferentially selected to realize dimension reduction. Finally, simulation results with the Mountain-Top data verify the effectiveness of the proposed method.
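
The mean-Hausdorff similarity idea can be sketched as follows; the data are invented and the full STAP chain (GIP analysis, orthogonal-subspace projection, dimension reduction) is omitted. One contaminated training sample is planted and is then rejected as the least similar to the rest:

```python
import numpy as np

def mean_hausdorff(A, B):
    """Mean (average) Hausdorff distance between two point sets A and B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

rng = np.random.default_rng(0)

# Hypothetical training samples: nine draws from the same clutter
# distribution plus one contaminated by a strong target-like component.
samples = [rng.normal(size=(16, 2)) for _ in range(9)]
samples.append(rng.normal(loc=5.0, size=(16, 2)))  # contaminated sample

# Score each sample by its average distance to all the others;
# the contaminated sample should be the least similar (largest score).
scores = [np.mean([mean_hausdorff(samples[i], samples[j])
                   for j in range(len(samples)) if j != i])
          for i in range(len(samples))]

worst = int(np.argmax(scores))
print("most dissimilar (rejected) sample index:", worst)
```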

  5. Evaluation of two outlier-detection-based methods for detecting tissue-selective genes from microarray data.

    PubMed

    Kadota, Koji; Konishi, Tomokazu; Shimizu, Kentaro

    2007-05-01

    Large-scale expression profiling using DNA microarrays enables identification of tissue-selective genes for which expression is considerably higher and/or lower in some tissues than in others. Among numerous possible methods, only two outlier-detection-based methods (an AIC-based method and Sprent's non-parametric method) can treat all types of selective patterns equally, but they produce substantially different results. We investigated the performance of these two methods for different parameter settings and for a reduced number of samples. We focused on their ability to detect selective expression patterns robustly. We applied them to public microarray data collected from 36 normal human tissue samples and analyzed the effects of both changing the parameter settings and reducing the number of samples. The AIC-based method was more robust in both cases. The findings confirm that the use of the AIC-based method in the recently proposed ROKU method for detecting tissue-selective expression patterns is correct and that Sprent's method is not suitable for ROKU.

  6. The predictive validity of selection for entry into postgraduate training in general practice: evidence from three longitudinal studies

    PubMed Central

    Patterson, Fiona; Lievens, Filip; Kerrin, Máire; Munro, Neil; Irish, Bill

    2013-01-01

    Background: The selection methodology for UK general practice is designed to accommodate several thousand applicants per year and targets six core attributes identified in a multi-method job-analysis study. Aim: To evaluate the predictive validity of selection methods for entry into postgraduate training, comprising a clinical problem-solving test, a situational judgement test, and a selection centre. Design and setting: A three-part longitudinal predictive validity study of selection into training for UK general practice. Method: In sample 1, participants were junior doctors applying for training in general practice (n = 6824). In sample 2, participants were GP registrars 1 year into training (n = 196). In sample 3, participants were GP registrars sitting the licensing examination after 3 years, at the end of training (n = 2292). The outcome measures include: assessor ratings of performance in a selection centre comprising job simulation exercises (sample 1); supervisor ratings of trainee job performance 1 year into training (sample 2); and licensing examination results, including an applied knowledge examination and a 12-station clinical skills objective structured clinical examination (OSCE; sample 3). Results: Performance ratings at selection predicted subsequent supervisor ratings of job performance 1 year later. Selection results also significantly predicted performance on both the clinical skills OSCE and applied knowledge examination for licensing at the end of training. Conclusion: In combination, these longitudinal findings provide good evidence of the predictive validity of the selection methods, and are the first reported for entry into postgraduate training. Results show that the best predictor of work performance and training outcomes is a combination of a clinical problem-solving test, a situational judgement test, and a selection centre. Implications for selection methods for all postgraduate specialties are considered. PMID:24267856

  7. Feature Selection for Ridge Regression with Provable Guarantees.

    PubMed

    Paul, Saurabh; Drineas, Petros

    2016-04-01

    We introduce single-set spectral sparsification as a deterministic sampling-based feature selection technique for regularized least-squares classification, which is the classification analog to ridge regression. The method is unsupervised and gives worst-case guarantees on the generalization power of the classification function after feature selection with respect to the classification function obtained using all features. We also introduce leverage-score sampling as an unsupervised randomized feature selection method for ridge regression. We provide risk bounds for both single-set spectral sparsification and leverage-score sampling on ridge regression in the fixed design setting, and show that the risk in the sampled space is comparable to the risk in the full-feature space. We perform experiments on synthetic and real-world data sets (a subset of the TechTC-300 data sets) to support our theory. Experimental results indicate that the proposed methods perform better than existing feature selection methods.
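
A small sketch of leverage-score sampling for feature (column) selection, using the common definition of leverage scores from the top-k right singular vectors; the toy matrix and the choice k = 5 are arbitrary, and this is not the paper's exact procedure or its guarantees:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy design matrix: 100 samples, 20 features, with only rank-5 signal.
n, d, k = 100, 20, 5
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))

# Column leverage scores relative to the top-k right singular subspace.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
lev = np.sum(Vt[:k, :] ** 2, axis=0)   # one score per feature; they sum to k
probs = lev / lev.sum()

# Randomized feature selection: sample columns with probability ∝ leverage.
m = 10
cols = rng.choice(d, size=m, replace=False, p=probs)
X_sampled = X[:, cols]
print("selected feature indices:", sorted(cols))
```

Features lying mostly in the top singular subspace get higher scores and are more likely to survive, which is the intuition behind the risk bounds in the fixed design setting.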

  8. An active learning representative subset selection method using net analyte signal.

    PubMed

    He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi

    2018-05-05

    To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. In general, however, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference in the Euclidean norm of the net analyte signal (NAS) vector between candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vectors, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying the projection matrix by the sample spectra. The scalar value of the NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and the samples with the largest distance are added to the selected set sequentially. Last, the concentration of the analyte is measured so that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced. Copyright © 2018 Elsevier B.V. All rights reserved.
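
A toy sketch of the NAS-based selection loop described above, with an invented analyte profile and interferent spectra; the projector orthogonal to the interferent subspace plays the role of the paper's projection matrix, and the greedy largest-distance rule is a simplified reading of the selection step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spectra: a Gaussian analyte profile plus random interferents.
n_wave = 50
analyte = np.exp(-0.5 * ((np.arange(n_wave) - 25) / 4.0) ** 2)
interf = rng.normal(size=(3, n_wave))

# Projection orthogonal to the interferent subspace gives the net analyte signal.
Q, _ = np.linalg.qr(interf.T)        # orthonormal basis of the interferent space
P = np.eye(n_wave) - Q @ Q.T         # orthogonal projector

def nas_norm(spectrum):
    """Scalar NAS value: norm of the projected spectrum."""
    return np.linalg.norm(P @ spectrum)

# Candidate samples: analyte at varying concentration plus interferent background.
conc = rng.uniform(0.1, 2.0, size=20)
candidates = conc[:, None] * analyte + rng.normal(size=(20, 3)) @ interf * 0.2

# Greedy selection: repeatedly add the candidate whose NAS scalar is farthest
# from every already-selected sample's NAS scalar.
norms = np.array([nas_norm(s) for s in candidates])
selected = [int(np.argmax(norms))]
while len(selected) < 5:
    dist = [min(abs(norms[i] - norms[j]) for j in selected) for i in range(len(norms))]
    for j in selected:
        dist[j] = -1.0                # never re-select an already-chosen sample
    selected.append(int(np.argmax(dist)))
print("selected sample indices:", selected)
```

Only the selected samples would then need reference concentration measurements, which is where the claimed savings come from.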

  9. An active learning representative subset selection method using net analyte signal

    NASA Astrophysics Data System (ADS)

    He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi

    2018-05-01

    To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. In general, however, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference in the Euclidean norm of the net analyte signal (NAS) vector between candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vectors, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying the projection matrix by the sample spectra. The scalar value of the NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and the samples with the largest distance are added to the selected set sequentially. Last, the concentration of the analyte is measured so that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced.

  10. A Mixed Methods Sampling Methodology for a Multisite Case Study

    ERIC Educational Resources Information Center

    Sharp, Julia L.; Mobley, Catherine; Hammond, Cathy; Withington, Cairen; Drew, Sam; Stringfield, Sam; Stipanovic, Natalie

    2012-01-01

    The flexibility of mixed methods research strategies makes such approaches especially suitable for multisite case studies. Yet the utilization of mixed methods to select sites for these studies is rarely reported. The authors describe their pragmatic mixed methods approach to select a sample for their multisite mixed methods case study of a…

  11. Analysis of training sample selection strategies for regression-based quantitative landslide susceptibility mapping methods

    NASA Astrophysics Data System (ADS)

    Erener, Arzu; Sivas, A. Abdullah; Selcuk-Kestel, A. Sevtap; Düzgün, H. Sebnem

    2017-07-01

    All quantitative landslide susceptibility mapping (QLSM) methods require two basic data types, namely a landslide inventory and factors that influence landslide occurrence (landslide influencing factors, LIF). Depending on the type of landslides, the nature of the triggers and the LIF, the accuracy of QLSM methods differs. Moreover, how to balance the number of 0s (non-occurrence) and 1s (occurrence) in the training set obtained from the landslide inventory, and how to select which of the 1s and 0s to include in QLSM models, play a critical role in the accuracy of QLSM. Although the performance of various QLSM methods has been investigated extensively in the literature, the challenge of training set construction has not been adequately investigated. To tackle this challenge, in this study three different training set selection strategies, along with the original data set, are used to test the performance of three different regression methods, namely Logistic Regression (LR), Bayesian Logistic Regression (BLR) and Fuzzy Logistic Regression (FLR). The first sampling strategy is proportional random sampling (PRS), which takes into account a weighted selection of landslide occurrences in the sample set. The second method, non-selective nearby sampling (NNS), includes randomly selected sites and their surrounding neighboring points at certain preselected distances to include the impact of clustering. Selective nearby sampling (SNS) is the third method, which concentrates on the group of 1s and their surrounding neighborhood: a randomly selected group of landslide sites and their neighborhood are considered in the analyses, with parameters similar to NNS. It is found that the LR-PRS, FLR-PRS and BLR-whole-data set-ups, in that order, yield the best fits among the alternatives. The results indicate that in QLSM based on regression models, avoiding spatial correlation in the data set is critical for model performance.
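
One simple way to construct a balanced training set from an inventory (a hypothetical balancing sketch, not the paper's PRS/NNS/SNS strategies) is to keep all occurrence cells and subsample the non-occurrences:

```python
import random

random.seed(0)

# Hypothetical landslide inventory over a grid: 150 occurrence cells (1)
# and 4850 stable cells (0), each cell stored as (cell_id, label).
cells = [(i, 1) for i in range(150)] + [(i, 0) for i in range(150, 5000)]
ones = [c for c in cells if c[1] == 1]
zeros = [c for c in cells if c[1] == 0]

# Balancing sketch: keep all occurrences and draw an equal number of
# stable cells at random, giving a 50/50 training set.
train = ones + random.sample(zeros, len(ones))
random.shuffle(train)

frac_ones = sum(label for _, label in train) / len(train)
print(f"training cells: {len(train)}, landslide fraction: {frac_ones:.2f}")
```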

  12. Evaluation of Two Outlier-Detection-Based Methods for Detecting Tissue-Selective Genes from Microarray Data

    PubMed Central

    Kadota, Koji; Konishi, Tomokazu; Shimizu, Kentaro

    2007-01-01

    Large-scale expression profiling using DNA microarrays enables identification of tissue-selective genes for which expression is considerably higher and/or lower in some tissues than in others. Among numerous possible methods, only two outlier-detection-based methods (an AIC-based method and Sprent’s non-parametric method) can treat all types of selective patterns equally, but they produce substantially different results. We investigated the performance of these two methods for different parameter settings and for a reduced number of samples. We focused on their ability to detect selective expression patterns robustly. We applied them to public microarray data collected from 36 normal human tissue samples and analyzed the effects of both changing the parameter settings and reducing the number of samples. The AIC-based method was more robust in both cases. The findings confirm that the use of the AIC-based method in the recently proposed ROKU method for detecting tissue-selective expression patterns is correct and that Sprent’s method is not suitable for ROKU. PMID:19936074

  13. Methodology Series Module 5: Sampling Strategies.

    PubMed

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice, such as a population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
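
Two common probability-sampling designs can be contrasted in a few lines of code with an invented clinic population; stratified sampling reproduces the population's 80/20 urban/rural split exactly, while a simple random sample only does so on average:

```python
import random

random.seed(0)

# Invented clinic population: 800 urban and 200 rural patients.
population = [("urban", i) for i in range(800)] + [("rural", i) for i in range(200)]

# Simple random sample of 100: the urban/rural split is right only on average.
srs = random.sample(population, 100)

# Stratified random sample with proportional allocation: 80 urban + 20 rural,
# preserving the population's 80/20 split exactly.
urban = [p for p in population if p[0] == "urban"]
rural = [p for p in population if p[0] == "rural"]
stratified = random.sample(urban, 80) + random.sample(rural, 20)

n_rural = sum(1 for p in stratified if p[0] == "rural")
print("rural patients in stratified sample:", n_rural)  # exactly 20 by design
```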

  14. The predictive validity of selection for entry into postgraduate training in general practice: evidence from three longitudinal studies.

    PubMed

    Patterson, Fiona; Lievens, Filip; Kerrin, Máire; Munro, Neil; Irish, Bill

    2013-11-01

    The selection methodology for UK general practice is designed to accommodate several thousand applicants per year and targets six core attributes identified in a multi-method job-analysis study. The aim was to evaluate the predictive validity of selection methods for entry into postgraduate training, comprising a clinical problem-solving test, a situational judgement test, and a selection centre, in a three-part longitudinal predictive validity study of selection into training for UK general practice. In sample 1, participants were junior doctors applying for training in general practice (n = 6824). In sample 2, participants were GP registrars 1 year into training (n = 196). In sample 3, participants were GP registrars sitting the licensing examination after 3 years, at the end of training (n = 2292). The outcome measures include: assessor ratings of performance in a selection centre comprising job simulation exercises (sample 1); supervisor ratings of trainee job performance 1 year into training (sample 2); and licensing examination results, including an applied knowledge examination and a 12-station clinical skills objective structured clinical examination (OSCE; sample 3). Performance ratings at selection predicted subsequent supervisor ratings of job performance 1 year later. Selection results also significantly predicted performance on both the clinical skills OSCE and applied knowledge examination for licensing at the end of training. In combination, these longitudinal findings provide good evidence of the predictive validity of the selection methods, and are the first reported for entry into postgraduate training. Results show that the best predictor of work performance and training outcomes is a combination of a clinical problem-solving test, a situational judgement test, and a selection centre. Implications for selection methods for all postgraduate specialties are considered.

  15. Molecularly imprinted membrane extraction combined with high-performance liquid chromatography for selective analysis of cloxacillin from shrimp samples.

    PubMed

    Du, Wei; Sun, Min; Guo, Pengqi; Chang, Chun; Fu, Qiang

    2018-09-01

    Nowadays, the abuse of antibiotics in aquaculture has generated considerable problems for food safety. It is therefore imperative to develop a simple and selective method for monitoring the illegal use of antibiotics in aquatic products. In this study, a method combining molecularly imprinted membrane (MIM) extraction and liquid chromatography was developed for the selective analysis of cloxacillin in shrimp samples. The MIMs were synthesized by UV photopolymerization and characterized by scanning electron microscopy, Fourier transform infrared spectroscopy, thermogravimetric analysis and a swelling test. The results showed that the MIMs exhibited excellent permselectivity, high adsorption capacity and a fast adsorption rate for cloxacillin. Finally, the method was used to determine cloxacillin in shrimp samples, with good accuracy and acceptable relative standard deviation values for precision. The proposed method is a promising alternative for the selective analysis of cloxacillin in shrimp samples, owing to its easy operation and excellent selectivity. Copyright © 2018. Published by Elsevier Ltd.

  16. A quick method based on SIMPLISMA-KPLS for simultaneously selecting outlier samples and informative samples for model standardization in near infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Li, Li-Na; Ma, Chang-Ming; Chang, Ming; Zhang, Ren-Cheng

    2017-12-01

    A novel method based on SIMPLe-to-use Interactive Self-modeling Mixture Analysis (SIMPLISMA) and Kernel Partial Least Squares (KPLS), named SIMPLISMA-KPLS, is proposed in this paper for the simultaneous selection of outlier samples and informative samples. It is a quick algorithm for model standardization (also called model transfer) in near infrared (NIR) spectroscopy. NIR data from a corn experiment, analysed for protein content, are used to evaluate the proposed method. Piecewise direct standardization (PDS) is employed for model transfer, and SIMPLISMA-PDS-KPLS is compared with KS-PDS-KPLS in terms of the prediction accuracy for protein content and the calculation speed of each algorithm. The conclusions are that SIMPLISMA-KPLS can be used as an alternative sample selection method for model transfer. Although its accuracy is similar to that of Kennard-Stone (KS), it differs from KS in that it employs concentration information in the selection program. This ensures that analyte information is involved in the analysis and that the spectra (X) of the selected samples are correlated with the concentrations (y). It can also be used for outlier elimination simultaneously, through validation of the calibration. The running-time statistics show that the sample selection process is more rapid when using KPLS. The quick SIMPLISMA-KPLS algorithm is beneficial for improving the speed of online measurement using NIR spectroscopy.
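
For comparison, the Kennard-Stone algorithm that SIMPLISMA-KPLS is benchmarked against can be sketched directly (toy data; real NIR spectra would have far more variables):

```python
import numpy as np

def kennard_stone(X, k):
    """Kennard-Stone selection: start from the two most distant samples,
    then repeatedly add the sample farthest from the selected set."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    selected = [int(i), int(j)]
    while len(selected) < k:
        remaining = [r for r in range(len(X)) if r not in selected]
        # Each remaining sample's distance to its nearest selected sample.
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(min_d))])
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))   # toy sample matrix: 30 samples, 4 variables
picked = kennard_stone(X, 8)
print("calibration subset:", picked)
```

Note that KS uses only the spectra (X); the abstract's point is that SIMPLISMA-KPLS additionally exploits the concentration information (y).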

  17. Sample selection in foreign similarity regions for multicrop experiments

    NASA Technical Reports Server (NTRS)

    Malin, J. T. (Principal Investigator)

    1981-01-01

    The selection of sample segments in the U.S. foreign similarity regions for development of proportion estimation procedures and error modeling for Argentina, Australia, Brazil, and USSR in AgRISTARS is described. Each sample was chosen to be similar in crop mix to the corresponding indicator region sample. Data sets, methods of selection, and resulting samples are discussed.

  18. Methods for producing silicon carbide architectural preforms

    NASA Technical Reports Server (NTRS)

    DiCarlo, James A. (Inventor); Yun, Hee (Inventor)

    2010-01-01

    Methods are disclosed for producing architectural preforms and high-temperature composite structures containing high-strength ceramic fibers with reduced preforming stresses within each fiber, with an in-situ grown coating on each fiber surface, with reduced boron within the bulk of each fiber, and with improved tensile creep and rupture resistance properties for each fiber. The methods include the steps of preparing an original sample of a preform formed from a pre-selected high-strength silicon carbide ceramic fiber type, placing the original sample in a processing furnace under a pre-selected preforming stress state and thermally treating the sample in the processing furnace at a pre-selected processing temperature and hold time in a processing gas having a pre-selected composition, pressure, and flow rate. For the high-temperature composite structures, the method includes additional steps of depositing a thin interphase coating on the surface of each fiber and forming a ceramic or carbon-based matrix within the sample.

  19. Development of a Simultaneous Extraction and Cleanup Method for Pyrethroid Pesticides from Indoor House Dust Samples

    EPA Science Inventory

    An efficient and reliable analytical method was developed for the sensitive and selective quantification of pyrethroid pesticides (PYRs) in house dust samples. The method is based on selective pressurized liquid extraction (SPLE) of the dust-bound PYRs into dichloromethane (DCM) wi...

  20. Sample Integrity Evaluation and EPA Method 325b Interlaboratory Comparison for Select Volatile Organic Compounds Collected Diffusively on Carbopack X Sorbent Tubes

    EPA Science Inventory

    Sample integrity evaluations and inter-laboratory comparisons were conducted in application of U.S. Environmental Protection Agency (EPA) Methods 325A/B for monitoring benzene and additional selected volatile organic compounds (VOCs) using passive-diffusive Carbopack X tube sample...

  1. Sampling for Patient Exit Interviews: Assessment of Methods Using Mathematical Derivation and Computer Simulations.

    PubMed

    Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till

    2018-02-01

    (1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
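
The length bias described above is the classic inspection paradox, and a few lines of simulation (with invented consultation times) reproduce it: sampling the consultation in progress at a random time over-represents long consultations.

```python
import bisect
import random

random.seed(0)

# Hypothetical clinic: consultations of 5 or 20 minutes, equally likely,
# running back-to-back with one clinician.
durations = [random.choice([5, 20]) for _ in range(10000)]
ends, t = [], 0
for d in durations:
    t += d
    ends.append(t)
total = ends[-1]

# "Select the next patient exiting": the interviewer becomes free at an
# (effectively) random time and waits for the next exit, so the consultation
# in progress at that moment is sampled -- a length-biased draw.
picked = []
for _ in range(2000):
    u = random.uniform(0, total)
    i = bisect.bisect_left(ends, u)   # first consultation ending after time u
    picked.append(durations[i])

mean_all = sum(durations) / len(durations)   # ~12.5 minutes
mean_picked = sum(picked) / len(picked)      # noticeably larger: long visits over-sampled
print(f"overall mean: {mean_all:.1f}  exit-sample mean: {mean_picked:.1f}")
```

Selecting the next patient *entering* the consultation room instead breaks the link with consultation length, which is the unbiased rule the authors propose.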

  2. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR)

    PubMed Central

    Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard

    2007-01-01

    Background: Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. Methods: We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters which, as a whole, would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. Application: We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. Conclusion: This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy. PMID:17543100

  3. Methodology Series Module 5: Sampling Strategies

    PubMed Central

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘sampling method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling – based on the researcher's choice, such as a population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of the results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438

  4. [The research protocol III. Study population].

    PubMed

    Arias-Gómez, Jesús; Villasís-Keever, Miguel Ángel; Miranda-Novales, María Guadalupe

    2016-01-01

    The study population is defined as a set of cases, determined, limited, and accessible, that will constitute the subjects for the selection of the sample, and it must fulfill several characteristics and distinct criteria. The objectives of this manuscript are focused on specifying each of the elements required to select the participants of a research project during the elaboration of the protocol, including the concepts of study population, sample, selection criteria, and sampling methods. After delineating the study population, the researcher must specify the criteria with which each participant has to comply. The criteria that describe these specific characteristics are called selection or eligibility criteria. These criteria are inclusion, exclusion, and elimination criteria, and they delineate the eligible population. The sampling methods are divided into two large groups: 1) probabilistic or random sampling and 2) non-probabilistic sampling. The difference lies in the use of statistical methods to select the subjects. In every study, it is necessary to establish at the outset the specific number of participants to be included to achieve the objectives of the study. This number is the sample size, and it can be calculated or estimated with mathematical formulas and statistical software.
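    As a sketch of the kind of sample-size formula the abstract mentions, the textbook calculation for comparing two means can be computed directly (the numbers below are illustrative; real protocols should use dedicated software):

```python
import math

def n_per_group(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Sample size per group for comparing two means:
    n = 2 * sigma^2 * (z_alpha + z_beta)^2 / delta^2
    (two-sided alpha = 0.05, power = 0.80), rounded up."""
    return math.ceil(2 * sigma**2 * (z_alpha + z_beta)**2 / delta**2)

# e.g., detect a 5 mmHg difference when the standard deviation is 12 mmHg
n = n_per_group(sigma=12, delta=5)  # 91 participants per group
```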

  5. A mixture model with a reference-based automatic selection of components for disease classification from protein and/or gene expression levels

    PubMed Central

    2011-01-01

    Background Bioinformatics data analysis often uses a linear mixture model that represents each sample as an additive mixture of components. Properly constrained blind matrix factorization methods extract those components using the mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness-constrained factorization on a sample-by-sample basis; existing methods, in contrast, factorize the complete dataset simultaneously. The sample model is composed of a reference sample representing the control and/or case (disease) group and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific, or not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated from each sample directly. Due to the locality of the decomposition, the strength of the expression of each feature can vary across samples, yet the feature will still be allocated to the related disease- and/or control-specific component. Since label information is not used in the selection process, the case- and control-specific components can be used for classification, which is not the case with standard factorization methods. Moreover, the component selected by the proposed method as disease specific can be interpreted as a sub-mode and retained for further analysis to identify potential biomarkers. As opposed to standard matrix factorization methods, this can be achieved on a sample (experiment)-by-sample basis. Postulating one or more components with indifferent features enables their removal from the disease- and control-specific components on a sample-by-sample basis. This yields selected components with reduced complexity and, generally, increased prediction accuracy. PMID:22208882

  6. Aqueous nitrite ion determination by selective reduction and gas phase nitric oxide chemiluminescence

    NASA Technical Reports Server (NTRS)

    Dunham, A. J.; Barkley, R. M.; Sievers, R. E.; Clarkson, T. W. (Principal Investigator)

    1995-01-01

    An improved method of flow injection analysis for aqueous nitrite ion exploits the sensitivity and selectivity of the nitric oxide (NO) chemiluminescence detector. Trace analysis of nitrite ion in a small sample (5-160 microL) is accomplished by conversion of nitrite ion to NO by aqueous iodide in acid. The resulting NO is transported to the gas phase through a semipermeable membrane and subsequently detected by monitoring the photoemission of the reaction between NO and ozone (O3). Chemiluminescence detection is selective for measurement of NO, and, since the detection occurs in the gas phase, neither sample coloration nor turbidity interferes. The detection limit for a 100-microL sample is 0.04 ppb of nitrite ion. The precision at the 10 ppb level is 2% relative standard deviation, and 60-180 samples can be analyzed per hour. Samples of human saliva and food extracts were analyzed; the results from a standard colorimetric measurement are compared with those from the new chemiluminescence method in order to further validate the latter. A high degree of selectivity is obtained owing to the three discriminating steps in the process: (1) the conditions for converting nitrite ion to NO are virtually specific for nitrite ion, (2) only volatile products of the conversion are swept to the gas phase (avoiding the turbidity and color problems of spectrophotometric methods), and (3) the NO chemiluminescence detector selectively detects the emission from the NO + O3 reaction. The method is free of interferences, offers detection limits in the low parts per billion of nitrite ion, and allows the analysis of up to 180 microliter-sized samples per hour, with little sample preparation and no chromatographic separation. Much smaller samples can be analyzed by this method than by previously reported batch analysis methods, which typically require 5 mL or more of sample and often need chromatographic separations as well.

  7. 10 CFR 430.70 - Enforcement.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...), the Secretary may conduct testing of that covered product under this subpart by means of a test notice... be selected for testing, the method of selecting the test sample, the time at which testing shall be... shall select a batch, a batch sample, and test units from the batch sample in accordance with the...

  8. Quantifying Uncertainties from Presence Data Sampling Methods for Species Distribution Modeling: Focused on Vegetation.

    NASA Astrophysics Data System (ADS)

    Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.

    2016-12-01

    The impact of climate change has been observed throughout the globe, and ecosystems are experiencing rapid changes such as vegetation shifts and species extinctions. In this context, species distribution modeling (SDM) is a popular method for projecting the impact of climate change on ecosystems. An SDM is essentially based on the niche of a certain species, which means that presence point data are essential to run it. Running an SDM for plants requires particular consideration of the characteristics of vegetation. Normally, vegetation data over large areas are produced with remote sensing techniques; in other words, the exact presence points carry high uncertainties, because presence data are selected from polygon and raster datasets. Thus, the sampling method for selecting vegetation presence data should be chosen carefully. In this study, we used three different sampling methods to select presence data for vegetation: random sampling, stratified sampling, and site-index-based sampling. We used the R package BIOMOD2 to assess the uncertainty of the modeling, and we included BioCLIM variables and other environmental variables as input data. As a result, despite differences among the 10 SDMs, the sampling methods differed in ROC values: random sampling showed the lowest ROC value, while site-index-based sampling showed the highest. This study shows that the uncertainties arising from presence data sampling methods and SDMs can be quantified.
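    The three presence-data sampling strategies can be sketched on a toy raster grid. Note that the `site_index` weighting below is an assumption about what site-index-based sampling does, not the authors' exact procedure, and all values are illustrative:

```python
import random

# Toy presence cells from a hypothetical vegetation raster:
# (row, col, elevation_band, site_index) -- all values illustrative.
cells = [(r, c, r // 10, (r + c) % 5) for r in range(30) for c in range(30)]
rng = random.Random(42)
n = 48

# 1) Random sampling: uniform over all presence cells.
random_pts = rng.sample(cells, n)

# 2) Stratified sampling: equal draws from each elevation band.
bands = {}
for cell in cells:
    bands.setdefault(cell[2], []).append(cell)
stratified_pts = [p for band in bands.values()
                  for p in rng.sample(band, n // len(bands))]

# 3) Site-index-based sampling (assumed here to mean weighting by site index).
weighted_pts = rng.choices(cells, weights=[cell[3] + 1 for cell in cells], k=n)
```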

  9. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR).

    PubMed

    Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard

    2007-06-01

    Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. 
This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy.
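    A minimal sketch of the deliberate-selection idea, using hypothetical cluster summaries rather than the Vientiane data: enumerate candidate combinations of clusters and keep the one whose aggregate distribution of the chosen determinants is closest to the city-wide distribution.

```python
from itertools import combinations

# Hypothetical clusters with known determinant shares:
# (cluster_id, share_literate, share_majority_nationality).
clusters = [
    ("A", 0.95, 0.90), ("B", 0.60, 0.40), ("C", 0.80, 0.70),
    ("D", 0.70, 0.85), ("E", 0.55, 0.60), ("F", 0.90, 0.50),
]
target = (0.75, 0.65)  # city-wide shares of literacy and nationality

def distance(combo):
    """L1 distance between the combo's mean shares and the population shares."""
    lit = sum(c[1] for c in combo) / len(combo)
    nat = sum(c[2] for c in combo) / len(combo)
    return abs(lit - target[0]) + abs(nat - target[1])

# Deliberately select the 3-cluster combination best matching the population.
best = min(combinations(clusters, 3), key=distance)
```

    With equal-sized clusters a simple mean suffices; unequal cluster sizes would call for population-weighted shares.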

  10. A comparison of selection at list time and time-stratified sampling for estimating suspended sediment loads

    Treesearch

    Robert B. Thomas; Jack Lewis

    1993-01-01

    Time-stratified sampling of sediment for estimating suspended load is introduced and compared to selection at list time (SALT) sampling. Both methods provide unbiased estimates of load and variance. The magnitude of the variance of the two methods is compared using five storm populations of suspended sediment flux derived from turbidity data. Under like conditions,...
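    The time-stratified load estimator (expand each stratum's sample mean by the stratum size) can be sketched as follows; the flux series here is simulated, not one of the storm populations from the paper:

```python
import random

# Hypothetical per-interval sediment flux (kg per 10-min interval) over a storm.
sim = random.Random(7)
flux = [abs(sim.gauss(50, 20)) for _ in range(120)]  # 120 intervals
true_load = sum(flux)

def time_stratified_estimate(flux, n_strata=6, k=4, seed=1):
    """Split the storm into equal time strata, randomly sample k intervals in
    each, and expand each stratum mean by the stratum size (unbiased for the
    total load under this design)."""
    rng = random.Random(seed)
    stratum_len = len(flux) // n_strata
    total = 0.0
    for h in range(n_strata):
        stratum = flux[h * stratum_len:(h + 1) * stratum_len]
        total += stratum_len * sum(rng.sample(stratum, k)) / k
    return total

estimate = time_stratified_estimate(flux)
```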

  11. Passive Sampling Methods for Contaminated Sediments: Practical Guidance for Selection, Calibration, and Implementation

    EPA Science Inventory

    This article provides practical guidance on the use of passive sampling methods(PSMs) that target the freely dissolved concentration (Cfree) for improved exposure assessment of hydrophobic organic chemicals in sediments. Primary considerations for selecting a PSM for a specific a...

  12. Methods for Producing High-Performance Silicon Carbide Fibers, Architectural Preforms, and High-Temperature Composite Structures

    NASA Technical Reports Server (NTRS)

    Yun, Hee-Mann (Inventor); DiCarlo, James A. (Inventor)

    2014-01-01

    Methods are disclosed for producing architectural preforms and high-temperature composite structures containing high-strength ceramic fibers with reduced preforming stresses within each fiber, with an in-situ grown coating on each fiber surface, with reduced boron within the bulk of each fiber, and with improved tensile creep and rupture resistance properties for each fiber. The methods include the steps of preparing an original sample of a preform formed from a pre-selected high-strength silicon carbide ceramic fiber type, placing the original sample in a processing furnace under a pre-selected preforming stress state, and thermally treating the sample in the processing furnace at a pre-selected processing temperature and hold time in a processing gas having a pre-selected composition, pressure, and flow rate. For the high-temperature composite structures, the method includes the additional steps of depositing a thin interphase coating on the surface of each fiber and forming a ceramic or carbon-based matrix within the sample.

  13. Compendium of selected methods for sampling and analysis at geothermal facilities

    NASA Astrophysics Data System (ADS)

    Kindle, C. H.; Pool, K. H.; Ludwick, J. D.; Robertson, D. E.

    1984-06-01

    An independent study of the field has resulted in a compilation of the best methods for sampling, preservation, and analysis of potential pollutants from geothermally fueled electric power plants. These methods were selected as the most usable over the range of applications commonly encountered at the various geothermal plant sampling locations. In addition to plant and well piping, techniques for sampling cooling towers, ambient gases, solids, and surface and subsurface waters are described. Emphasis is placed on the use of sampling probes to extract samples from heterogeneous flows. Certain sampling points, constituents, and phases of plant operation are more amenable than others to quality-assurance improvement in the emission measurements and are identified as such.

  14. Self-contained cryogenic gas sampling apparatus and method

    DOEpatents

    McManus, G.J.; Motes, B.G.; Bird, S.K.; Kotter, D.K.

    1996-03-26

    Apparatus for obtaining a whole gas sample is composed of: a sample vessel having an inlet for receiving a gas sample; a controllable valve mounted for controllably opening and closing the inlet; a valve control coupled to the valve for opening and closing the valve at selected times; a portable power source connected for supplying operating power to the valve control; and a cryogenic coolant in thermal communication with the vessel for cooling the interior of the vessel to cryogenic temperatures. A method is described for obtaining an air sample using the apparatus described above, by: placing the apparatus at a location at which the sample is to be obtained; operating the valve control to open the valve at a selected time and close the valve at a selected subsequent time; and between the selected times maintaining the vessel at a cryogenic temperature by heat exchange with the coolant. 3 figs.

  15. Self-contained cryogenic gas sampling apparatus and method

    DOEpatents

    McManus, Gary J.; Motes, Billy G.; Bird, Susan K.; Kotter, Dale K.

    1996-01-01

    Apparatus for obtaining a whole gas sample, composed of: a sample vessel having an inlet for receiving a gas sample; a controllable valve mounted for controllably opening and closing the inlet; a valve control coupled to the valve for opening and closing the valve at selected times; a portable power source connected for supplying operating power to the valve control; and a cryogenic coolant in thermal communication with the vessel for cooling the interior of the vessel to cryogenic temperatures. A method of obtaining an air sample using the apparatus described above, by: placing the apparatus at a location at which the sample is to be obtained; operating the valve control to open the valve at a selected time and close the valve at a selected subsequent time; and between the selected times maintaining the vessel at a cryogenic temperature by heat exchange with the coolant.

  16. SAM Pathogen Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target pathogen analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select pathogens.

  17. Representativeness of direct observations selected using a work-sampling equation.

    PubMed

    Sharp, Rebecca A; Mudford, Oliver C; Elliffe, Douglas

    2015-01-01

    Deciding on appropriate sampling to obtain representative samples of behavior is important but not straightforward, because the relative duration of the target behavior may affect its observation in a given sampling interval. Work-sampling methods, which offer a way to adjust the frequency of sampling according to a priori or ongoing estimates of the behavior to achieve a preselected level of representativeness, may provide a solution. Full-week observations of 7 behaviors were conducted for 3 students with autism spectrum disorder and intellectual disabilities. Work-sampling methods were used to select momentary time samples from the full time-of-interest, which produced representative samples. However, work sampling required impractically high numbers of time samples to obtain representative samples. More practical momentary time samples produced less representative samples, particularly for low-duration behaviors. The utility and limits of work-sampling methods for applied behavior analysis are discussed. © Society for the Experimental Analysis of Behavior.
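    A common form of the work-sampling equation sets the number of momentary samples from an a priori estimate of the behavior's occurrence p. The sketch below uses the textbook absolute-precision form, which may differ in detail from the equation the authors applied:

```python
import math

def work_samples_needed(p, e=0.05, z=1.96):
    """Observations needed so that an occurrence estimate p is within +/- e
    (absolute) at the confidence given by z: n = z^2 * p * (1 - p) / e^2."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

# A low-duration behavior (p = 0.05) needs fewer absolute-precision samples
# than one near p = 0.5, where the variance term p * (1 - p) peaks.
n_low = work_samples_needed(0.05)  # 73
n_mid = work_samples_needed(0.50)  # 385
```

    If relative rather than absolute precision is required, the count for low-duration behaviors grows instead of shrinking, which matches the abstract's observation that rare behaviors were the hardest to sample representatively.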

  18. Perpendicular distance sampling: an alternative method for sampling downed coarse woody debris

    Treesearch

    Michael S. Williams; Jeffrey H. Gove

    2003-01-01

    Coarse woody debris (CWD) plays an important role in many forest ecosystem processes. In recent years, a number of new methods have been proposed to sample CWD. These methods select individual logs into the sample using some form of unequal probability sampling. One concern with most of these methods is the difficulty in estimating the volume of each log. A new method...

  19. SAM Biotoxin Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select biotoxins.

  20. SAM Radiochemical Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target radiochemical analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select radiochemical analytes.

  1. Sample Selection for Training Cascade Detectors.

    PubMed

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Typically, the positive set has few samples, while the negative set must represent anything except the object of interest and therefore contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on selecting the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average a better partial AUC and a smaller standard deviation than the other cascade detectors compared.
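    The stage-to-stage selection of informative false positives can be sketched with a toy linear stage classifier; the scoring function, weights, and threshold below are illustrative stand-ins, not the paper's detector:

```python
import random

def stage_score(sample, weights):
    """Toy stand-in for a cascade stage: a linear score over features."""
    return sum(w * x for w, x in zip(weights, sample))

def select_hard_negatives(negatives, weights, threshold, k):
    """Keep the false positives (negatives the current stage accepts) and,
    among them, the k with the highest scores -- the most informative
    mistakes to feed the next stage of the cascade."""
    fps = [s for s in negatives if stage_score(s, weights) > threshold]
    fps.sort(key=lambda s: stage_score(s, weights), reverse=True)
    return fps[:k]

rng = random.Random(0)
negatives = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(1000)]
hard = select_hard_negatives(negatives, weights=[0.5, 0.3, 0.2],
                             threshold=0.2, k=50)
```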

  2. Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia.

    PubMed

    Tohka, Jussi; Moradi, Elaheh; Huttunen, Heikki

    2016-07-01

    We present a comparative split-half resampling analysis of various data driven feature selection and classification methods for the whole brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter based feature selection, several embedded feature selection methods and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training sample classification accuracy and the set of selected features due to independent training and test sets have not been previously addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer's disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not vary between different methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones with the difference in the test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification that suggests the utility of the embedded feature selection for this problem when linked with the good generalization performance. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.

  3. External Standards or Standard Addition? Selecting and Validating a Method of Standardization

    NASA Astrophysics Data System (ADS)

    Harvey, David T.

    2002-05-01

    A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and obtain a practical experience in the difference between performing an external standardization and a standard addition.
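    The standard-addition calculation the students perform can be sketched as a least-squares fit followed by extrapolation to the x-intercept; the data below are made up for illustration:

```python
def standard_addition(added, signal):
    """Fit signal vs. added-standard concentration and extrapolate to
    signal = 0; the magnitude of the x-intercept (intercept / slope) is the
    analyte concentration in the spiked aliquot."""
    n = len(added)
    mean_x = sum(added) / n
    mean_y = sum(signal) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(added, signal))
             / sum((x - mean_x) ** 2 for x in added))
    intercept = mean_y - slope * mean_x
    return intercept / slope  # same units as `added`

added = [0.0, 1.0, 2.0, 3.0]        # ppm of standard added
signal = [0.20, 0.30, 0.40, 0.50]   # instrument response
conc = standard_addition(added, signal)  # 2.0 ppm
```

    External standardization would instead fit a calibration curve from pure standards, which is only valid when the sample matrix does not change the slope; the matrix-effect comparison is exactly what the experiment is designed to show.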

  4. A new comprehensive method for detection of livestock-related pathogenic viruses using a target enrichment system.

    PubMed

    Oba, Mami; Tsuchiaka, Shinobu; Omatsu, Tsutomu; Katayama, Yukie; Otomaru, Konosuke; Hirata, Teppei; Aoki, Hiroshi; Murata, Yoshiteru; Makino, Shinji; Nagai, Makoto; Mizutani, Tetsuya

    2018-01-08

    We tested the usefulness of the SureSelect target enrichment system, a comprehensive viral nucleic acid detection method, for rapid identification of viral pathogens in feces samples of cattle, pigs and goats. This system enriches nucleic acids of target viruses in clinical/field samples by using a library of biotinylated RNAs with sequences complementary to the target viruses. The enriched nucleic acids are amplified by PCR and subjected to next-generation sequencing to identify the target viruses. In many samples, the SureSelect target enrichment method increased detection efficiencies for the viruses listed in the biotinylated RNA library. Furthermore, this method enabled us to determine a nearly full-length genome sequence of porcine parainfluenza virus 1 and greatly increased Breadth, a value indicating the ratio of the mapped consensus length to the reference genome, in pig samples. Our data show the usefulness of the SureSelect target enrichment system for comprehensive analysis of the genomic information of various viruses in field samples. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor-Pashow, K.; Fondeur, F.; White, T.

    Savannah River National Laboratory (SRNL) was tasked with identifying and developing at least one, but preferably two, methods for quantifying the suppressor in the Next Generation Solvent (NGS) system. The suppressor is a guanidine derivative, N,N',N"-tris(3,7-dimethyloctyl)guanidine (TiDG). A list of 10 possible methods was generated, and screening experiments were performed for 8 of the 10 methods. After completion of the screening experiments, the non-aqueous acid-base titration was determined to be the most promising and was selected for further development as the primary method. ¹H NMR also showed promising results in the screening experiments, and this method was selected for further development as the secondary method. Other methods, including ³⁶Cl radiocounting and ion chromatography, also showed promise; however, due to the similarity to the primary method (titration) and the inability to differentiate between TiDG and TOA (tri-n-octylamine) in the blended solvent, ¹H NMR was selected over these methods. Analysis of radioactive samples obtained from real waste ESS (extraction, scrub, strip) testing using the titration method showed good results. Based on these results, the titration method was selected as the method of choice for TiDG measurement. ¹H NMR has been selected as the secondary (back-up) method, and additional work is planned to further develop this method and to verify it using radioactive samples. Procedures for analyzing radioactive samples of both pure NGS and blended solvent were developed and issued for both methods.

  6. A Simple Joint Estimation Method of Residual Frequency Offset and Sampling Frequency Offset for DVB Systems

    NASA Astrophysics Data System (ADS)

    Kwon, Ki-Won; Cho, Yongsoo

    This letter presents a simple joint estimation method for the residual frequency offset (RFO) and the sampling frequency offset (SFO) in OFDM-based digital video broadcasting (DVB) systems. The proposed method selects a continual pilot (CP) subset from an unsymmetrically and non-uniformly distributed CP set to obtain an unbiased estimator. Simulation results show that the proposed method using a properly selected CP subset is unbiased and performs robustly.

  7. ROLE OF LABORATORY SAMPLING DEVICES AND LABORATORY SUBSAMPLING METHODS IN OPTIMIZING REPRESENTATIVENESS STRATEGIES

    EPA Science Inventory

    Sampling is the act of selecting items from a specified population in order to estimate the parameters of that population (e.g., selecting soil samples to characterize the properties at an environmental site). Sampling occurs at various levels and times throughout an environmenta...

  8. On using sample selection methods in estimating the price elasticity of firms' demand for insurance.

    PubMed

    Marquis, M Susan; Louis, Thomas A

    2002-01-01

    We evaluate a technique based on sample selection models that has been used by health economists to estimate the price elasticity of firms' demand for insurance. We demonstrate that this technique produces inflated estimates of the price elasticity and show that alternative methods lead to valid estimates.

  9. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
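    A much-simplified sketch of the progressive-sampling idea: score candidate configurations on a small subsample, keep the better half, and repeat on progressively larger subsamples. The paper's actual method couples this with Bayesian optimization over both algorithms and hyper-parameters, which is not reproduced here:

```python
import random

def progressive_selection(candidates, error_fn, data, start=100, rounds=3, seed=0):
    """Successive-halving on growing subsamples: cheap early rounds prune
    most candidates so that only survivors see larger (costlier) samples."""
    rng = random.Random(seed)
    pool, n = list(candidates), start
    for _ in range(rounds):
        subsample = rng.sample(data, min(n, len(data)))
        pool.sort(key=lambda c: error_fn(c, subsample))
        pool = pool[:max(1, len(pool) // 2)]
        n *= 2
    return pool[0]

# Toy search: pick the constant predictor closest to the data mean (about 5).
data_rng = random.Random(1)
data = [data_rng.gauss(5, 2) for _ in range(1000)]
mse = lambda c, d: sum((x - c) ** 2 for x in d) / len(d)
best = progressive_selection([0, 2, 4, 5, 6, 8, 10], mse, data)
```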

  10. Sources and preparation of data for assessing trends in concentrations of pesticides in streams of the United States, 1992–2010

    USGS Publications Warehouse

    Martin, Jeffrey D.; Eberle, Michael; Nakagaki, Naomi

    2011-01-01

    This report updates a previously published water-quality dataset of 44 commonly used pesticides and 8 pesticide degradates suitable for a national assessment of trends in pesticide concentrations in streams of the United States. Water-quality samples collected from January 1992 through September 2010 at stream-water sites of the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) Program and the National Stream Quality Accounting Network (NASQAN) were compiled, reviewed, selected, and prepared for trend analysis. The principal steps in data review for trend analysis were to (1) identify analytical schedule, (2) verify sample-level coding, (3) exclude inappropriate samples or results, (4) review pesticide detections per sample, (5) review high pesticide concentrations, and (6) review the spatial and temporal extent of NAWQA pesticide data and selection of analytical methods for trend analysis. The principal steps in data preparation for trend analysis were to (1) select stream-water sites for trend analysis, (2) round concentrations to a consistent level of precision for the concentration range, (3) identify routine reporting levels used to report nondetections unaffected by matrix interference, (4) reassign the concentration value for routine nondetections to the maximum value of the long-term method detection level (maxLT-MDL), (5) adjust concentrations to compensate for temporal changes in bias of recovery of the gas chromatography/mass spectrometry (GCMS) analytical method, and (6) identify samples considered inappropriate for trend analysis. Samples analyzed at the USGS National Water Quality Laboratory (NWQL) by the GCMS analytical method were the most extensive in time and space and, consequently, were selected for trend analysis. Stream-water sites with 3 or more water years of data with six or more samples per year were selected for pesticide trend analysis. 
The selection criteria described in the report produced a dataset of 21,988 pesticide samples at 212 stream-water sites. Only 21,144 pesticide samples, however, are considered appropriate for trend analysis.
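Two of the data-preparation steps listed above can be sketched in miniature: step (2), rounding concentrations to a precision consistent with their range, and step (4), recoding routine nondetections to the maximum long-term method detection level (maxLT-MDL). The significant-figure rule, field names, and maxLT-MDL value below are assumptions for illustration, not the report's exact conventions.

```python
import math

def round_concentration(conc_ugl):
    """Round to 3 significant figures at >= 1 ug/L, 2 below (assumed rule)."""
    sig = 3 if conc_ugl >= 1.0 else 2
    digits = sig - 1 - math.floor(math.log10(conc_ugl))
    return round(conc_ugl, digits)

def prepare_sample(sample, max_lt_mdl):
    """Recode a routine nondetect to maxLT-MDL; round detected values."""
    out = dict(sample)
    if sample["detected"]:
        out["conc"] = round_concentration(sample["conc"])
    else:
        out["conc"] = max_lt_mdl[sample["pesticide"]]
    return out

max_lt_mdl = {"atrazine": 0.004}  # hypothetical maxLT-MDL, ug/L
samples = [
    {"pesticide": "atrazine", "conc": 0.12345, "detected": True},
    {"pesticide": "atrazine", "conc": None, "detected": False},
]
prepared = [prepare_sample(s, max_lt_mdl) for s in samples]
```

Recoding nondetections to a single consistent value is what makes trends comparable across laboratory reporting-level changes.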

  11. Accounting for animal movement in estimation of resource selection functions: sampling and data analysis.

    PubMed

    Forester, James D; Im, Hae Kyung; Rathouz, Paul J

    2009-12-01

    Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. 
This approach to modeling resource selection is easily implemented using common statistical tools and promises to provide deeper insight into the movement ecology of animals.
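The control-sampling schemes compared above can be sketched in miniature: an empirical scheme resamples observed step lengths (paired here with a uniform random bearing, a simplification), and a fixed-buffer scheme draws uniformly from a disc; the distance of each control point from the previous location is retained as the covariate the authors recommend. Function names and the 2-D setup are illustrative, not the authors' code.

```python
import math
import random

random.seed(1)

def empirical_controls(origin, observed_steps, n):
    """Draw control points by resampling observed step lengths,
    with a uniform random bearing (a simplified empirical scheme)."""
    x0, y0 = origin
    out = []
    for _ in range(n):
        r = random.choice(observed_steps)
        theta = random.uniform(0.0, 2.0 * math.pi)
        out.append((x0 + r * math.cos(theta), y0 + r * math.sin(theta)))
    return out

def uniform_buffer_controls(origin, radius, n):
    """Draw control points uniformly over a disc of fixed radius."""
    x0, y0 = origin
    out = []
    for _ in range(n):
        r = radius * math.sqrt(random.random())  # area-uniform radius
        theta = random.uniform(0.0, 2.0 * math.pi)
        out.append((x0 + r * math.cos(theta), y0 + r * math.sin(theta)))
    return out

origin = (0.0, 0.0)
steps = [120.0, 80.0, 200.0, 50.0]   # observed step lengths (m)
emp = empirical_controls(origin, steps, 10)
buf = uniform_buffer_controls(origin, 250.0, 10)
dist_covariate = [math.hypot(x, y) for x, y in emp]  # distance covariate
```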

  12. Effect of finite sample size on feature selection and classification: a simulation study.

    PubMed

    Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping

    2010-02-01

    The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. 
The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small sample sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
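Of the techniques compared, stepwise/sequential forward selection is the simplest to sketch: greedily add whichever feature most improves a score until no addition helps. The nearest-centroid training score below is an illustrative stand-in for the study's LDA/SVM classifiers and the Az metric, and the toy data are ours.

```python
import statistics

def score(X, y, feats):
    """Training accuracy of a nearest-centroid rule on the chosen features."""
    def centroid(cls):
        rows = [x for x, label in zip(X, y) if label == cls]
        return [statistics.fmean(r[f] for r in rows) for f in feats]
    c0, c1 = centroid(0), centroid(1)
    def dist2(x, c):
        return sum((x[f] - ci) ** 2 for f, ci in zip(feats, c))
    preds = [0 if dist2(x, c0) <= dist2(x, c1) else 1 for x in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

def sfs(X, y, n_features):
    selected, best = [], 0.0
    while True:
        remaining = [f for f in range(n_features) if f not in selected]
        if not remaining:
            return selected, best
        top, f = max((score(X, y, selected + [f]), f) for f in remaining)
        if top <= best:          # stop when no feature improves the score
            return selected, best
        selected, best = selected + [f], top

# Feature 0 separates the two classes; feature 1 is noise.
X = [[0.1, 5.0], [0.2, 1.0], [0.3, 4.0], [2.1, 2.0], [2.2, 5.0], [2.4, 1.0]]
y = [0, 0, 0, 1, 1, 1]
selected, acc = sfs(X, y, n_features=2)
```

With limited samples, scoring on the training set like this is exactly what produces the optimistic resubstitution bias the study quantifies; the hold-out estimate requires an independent test set.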

  13. Compressive strength of human openwedges: a selection method

    NASA Astrophysics Data System (ADS)

    Follet, H.; Gotteland, M.; Bardonnet, R.; Sfarghiu, A. M.; Peyrot, J.; Rumelhart, C.

    2004-02-01

A series of 44 samples of bone wedges of human origin, intended for allograft open-wedge osteotomy and obtained without particular precautions during hip arthroplasty, were re-examined. After chemical viral-inactivation treatment, lyophilisation, and radio-sterilisation (intended to ensure optimal health safety), the compressive strength, independent of age, sex, and the height of the sample (or angle of cut), proved to be too widely dispersed [10-158 MPa] in the first study. We propose a method for selecting samples that takes into account their geometry (width, length, thicknesses, cortical surface area). Statistical methods (Principal Components Analysis (PCA), Hierarchical Cluster Analysis, multilinear regression) allowed final selection of 29 samples having a mean compressive strength σmax = 103 ± 26 MPa, with variation [61-158 MPa]. These results are equivalent to or greater than those of materials currently used in open-wedge osteotomy.

  14. The Cross-Entropy Based Multi-Filter Ensemble Method for Gene Selection.

    PubMed

    Sun, Yingqiang; Lu, Chengbo; Li, Xiaobo

    2018-05-17

Gene expression profiles are characterized by high dimensionality, small sample sizes, and continuous-valued measurements, which makes using them for the classification of tumor samples a great challenge. This paper proposes a cross-entropy based multi-filter ensemble (CEMFE) method for microarray data classification. Firstly, multiple filters are applied to the microarray data, yielding several pre-selected feature subsets with different classification abilities. The top N genes with the highest rank in each subset are integrated to form a new data set. Secondly, the cross-entropy algorithm is used to remove redundant data from this set. Finally, a wrapper method based on forward feature selection is used to select the best feature subset. The experimental results show that the proposed method is more efficient than other gene selection methods and that it achieves higher classification accuracy with fewer characteristic genes.
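The first stage of the pipeline described above can be sketched as follows: several filters each rank the genes, and the top-N genes of each ranking are merged into one pre-selected set. The two filters here (absolute mean difference and a t-like ratio) are illustrative stand-ins for the paper's filters, and the cross-entropy and wrapper stages are omitted.

```python
import statistics

def mean_diff(col, y):
    """Filter 1: absolute difference of class means."""
    a = [v for v, t in zip(col, y) if t == 0]
    b = [v for v, t in zip(col, y) if t == 1]
    return abs(statistics.fmean(a) - statistics.fmean(b))

def t_like(col, y):
    """Filter 2: mean difference scaled by within-class spread."""
    a = [v for v, t in zip(col, y) if t == 0]
    b = [v for v, t in zip(col, y) if t == 1]
    s = statistics.pstdev(a) + statistics.pstdev(b) + 1e-9
    return abs(statistics.fmean(a) - statistics.fmean(b)) / s

def top_n(X, y, filt, n):
    """Indices of the n genes ranked highest by the given filter."""
    scores = [(filt([row[g] for row in X], y), g) for g in range(len(X[0]))]
    return {g for _, g in sorted(scores, reverse=True)[:n]}

# Rows are samples, columns are genes; gene 0 carries the class signal.
X = [[0.0, 5.0, 0.1], [0.2, 1.0, 0.0], [3.0, 4.0, 0.1], [3.2, 2.0, 0.2]]
y = [0, 0, 1, 1]

# Union of the top-1 genes from each filter forms the pre-selected set.
merged = top_n(X, y, mean_diff, 1) | top_n(X, y, t_like, 1)
```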

  15. Comparison of Enterococcus Species Diversity in Marine Water and Wastewater Using Enterolert and EPA Method 1600

    PubMed Central

    Ferguson, Donna M.; Griffith, John F.; McGee, Charles D.; Weisberg, Stephen B.; Hagedorn, Charles

    2013-01-01

    EPA Method 1600 and Enterolert are used interchangeably to measure Enterococcus for fecal contamination of public beaches, but the methods occasionally produce different results. Here we assess whether these differences are attributable to the selectivity for certain species within the Enterococcus group. Both methods were used to obtain 1279 isolates from 17 environmental samples, including influent and effluent of four wastewater treatment plants, ambient marine water from seven different beaches, and freshwater urban runoff from two stream systems. The isolates were identified to species level. Detection of non-Enterococcus species was slightly higher using Enterolert (8.4%) than for EPA Method 1600 (5.1%). E. faecalis and E. faecium, commonly associated with human fecal waste, were predominant in wastewater; however, Enterolert had greater selectivity for E. faecalis, which was also shown using a laboratory-created sample. The same species selectivity was not observed for most beach water and urban runoff samples. These samples had relatively higher proportions of plant associated species, E. casseliflavus (18.5%) and E. mundtii (5.7%), compared to wastewater, suggesting environmental inputs to beaches and runoff. The potential for species selectivity among water testing methods should be considered when assessing the sanitary quality of beaches so that public health warnings are based on indicators representative of fecal sources. PMID:23840233

  16. Basis Selection for Wavelet Regression

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)

    1998-01-01

    A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.

  17. Online selective kernel-based temporal difference learning.

    PubMed

    Chen, Xingguo; Gao, Yang; Wang, Ruili

    2013-12-01

In this paper, an online selective kernel-based temporal difference (OSKTD) learning algorithm is proposed to deal with large-scale and/or continuous reinforcement learning problems. OSKTD includes two online procedures: online sparsification and parameter updating for the selective kernel-based value function. A new sparsification method (i.e., a kernel distance-based online sparsification method) is proposed based on selective ensemble learning, which is computationally less complex compared with other sparsification methods. With the proposed sparsification method, the sparsified dictionary of samples is constructed online by checking if a sample needs to be added to the sparsified dictionary. In addition, based on local validity, a selective kernel-based value function is proposed to select the best samples from the sample dictionary for the selective kernel-based value function approximator. The parameters of the selective kernel-based value function are iteratively updated by using the temporal difference (TD) learning algorithm combined with the gradient descent technique. The complexity of the online sparsification procedure in the OSKTD algorithm is O(n). In addition, two typical experiments (Maze and Mountain Car) are used to compare with both traditional and up-to-date O(n) algorithms (GTD, GTD2, and TDC using the kernel-based value function), and the results demonstrate the effectiveness of our proposed algorithm. In the Maze problem, OSKTD converges to an optimal policy and converges faster than both traditional and up-to-date algorithms. In the Mountain Car problem, OSKTD converges, requires less computation time compared with other sparsification methods, reaches a better local optimum than the traditional algorithms, and converges much faster than the up-to-date algorithms. In addition, OSKTD reaches a final optimum competitive with the up-to-date algorithms.
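The kernel-distance-based online sparsification step can be sketched directly from its description: a new sample joins the dictionary only if its kernel distance to every stored sample exceeds a threshold, giving the O(n) per-sample check mentioned above. The Gaussian kernel width, threshold mu, and data below are illustrative assumptions.

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))

def kernel_distance_sq(x, d, sigma=1.0):
    """||phi(x) - phi(d)||^2 = k(x,x) + k(d,d) - 2 k(x,d) = 2 - 2 k(x,d)."""
    return 2.0 - 2.0 * gaussian_kernel(x, d, sigma)

def sparsify(stream, mu=0.5):
    """Add a sample to the dictionary only if it is novel enough."""
    dictionary = []
    for x in stream:
        if all(kernel_distance_sq(x, d) > mu for d in dictionary):
            dictionary.append(x)   # one O(n) pass over the dictionary
    return dictionary

samples = [0.0, 0.05, 1.0, 1.02, 2.5, 0.1]
dic = sparsify(samples)
```

Near-duplicate samples (0.05, 1.02, 0.1) are rejected, so the dictionary stays small while still covering the visited region.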

  18. [Research on fast detecting tomato seedlings nitrogen content based on NIR characteristic spectrum selection].

    PubMed

    Wu, Jing-zhu; Wang, Feng-zhu; Wang, Li-li; Zhang, Xiao-chao; Mao, Wen-hua

    2015-01-01

In order to improve the accuracy and robustness of detecting tomato seedling nitrogen content by near-infrared spectroscopy (NIR), four characteristic-spectrum selection methods were studied in the present paper: competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE), backward interval partial least squares (BiPLS), and synergy interval partial least squares (SiPLS). In total, 60 tomato seedlings were cultivated at 10 different nitrogen-treatment levels (urea concentration from 0 to 120 mg·L⁻¹), with 6 samples at each level, covering excess-nitrogen, moderate-nitrogen, nitrogen-deficient, and no-nitrogen status. Leaves from each sample were scanned over the near-infrared range from 12 500 to 3 600 cm⁻¹, and quantitative models based on the four selection methods were established. According to the experimental results, the calibration models based on CARS and MCUVE performed better than those based on BiPLS and SiPLS, but their prediction ability was much lower than that of the latter. Among them, the model built with BiPLS had the best prediction performance: the correlation coefficient (r), root mean square error of prediction (RMSEP), and ratio of performance to standard deviation (RPD) were 0.9527, 0.1183, and 3.291, respectively. Therefore, NIR technology combined with characteristic-spectrum selection can improve model performance, but no single selection method is universal. Models built on single-wavelength variable selection are more sensitive and thus better suited to uniform samples, whereas models built on wavelength-interval selection have stronger anti-interference ability and are better suited to uneven samples with poor reproducibility. Characteristic-spectrum selection therefore improves model building only when combined with consideration of the sample state and the model indexes.

  19. Principal Selection: A National Study of Selection Criteria and Procedures

    ERIC Educational Resources Information Center

    Palmer, Brandon

    2017-01-01

    Despite empirical evidence correlating the role of the principal with student achievement, researchers have seldom scrutinized principal selection methods over the past 60 years. This mixed methods study investigated the processes by which school principals are selected. A national sample of top-level school district administrators was used to…

  20. Principal Selection: A National Study of Selection Criteria and Procedures

    ERIC Educational Resources Information Center

    Palmer, Brandon

    2016-01-01

    Despite empirical evidence correlating the role of the principal with student achievement, researchers have seldom scrutinized principal selection methods over the past 60 years. This mixed methods study investigated the processes by which school principals are selected. A national sample of top-level school district administrators was used to…

  1. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    PubMed

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling including simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion into the research study and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
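Three of the probability designs defined above can be illustrated on a toy population of participant IDs; the strata names and sizes are hypothetical, and seeding the generator makes the draw reproducible.

```python
import random

random.seed(7)
population = list(range(100))            # participant IDs 0..99

# Simple random sampling: every element has an equal chance of selection.
simple = random.sample(population, 10)

# Systematic sampling: a random start, then every k-th element.
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: sample proportionally within each stratum.
strata = {"ward_a": population[:60], "ward_b": population[60:]}
stratified = {name: random.sample(members, len(members) // 10)
              for name, members in strata.items()}
```

Stratification guarantees each subgroup is represented in proportion to its size, which simple random sampling only achieves on average.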

  2. Multiplex Real-Time PCR for Detection of Staphylococcus aureus, mecA and Panton-Valentine Leukocidin (PVL) Genes from Selective Enrichments from Animals and Retail Meat

    PubMed Central

    Velasco, Valeria; Sherwood, Julie S.; Rojas-García, Pedro P.; Logue, Catherine M.

    2014-01-01

    The aim of this study was to compare a real-time PCR assay, with a conventional culture/PCR method, to detect S. aureus, mecA and Panton-Valentine Leukocidin (PVL) genes in animals and retail meat, using a two-step selective enrichment protocol. A total of 234 samples were examined (77 animal nasal swabs, 112 retail raw meat, and 45 deli meat). The multiplex real-time PCR targeted the genes: nuc (identification of S. aureus), mecA (associated with methicillin resistance) and PVL (virulence factor), and the primary and secondary enrichment samples were assessed. The conventional culture/PCR method included the two-step selective enrichment, selective plating, biochemical testing, and multiplex PCR for confirmation. The conventional culture/PCR method recovered 95/234 positive S. aureus samples. Application of real-time PCR on samples following primary and secondary enrichment detected S. aureus in 111/234 and 120/234 samples respectively. For detection of S. aureus, the kappa statistic was 0.68–0.88 (from substantial to almost perfect agreement) and 0.29–0.77 (from fair to substantial agreement) for primary and secondary enrichments, using real-time PCR. For detection of mecA gene, the kappa statistic was 0–0.49 (from no agreement beyond that expected by chance to moderate agreement) for primary and secondary enrichment samples. Two pork samples were mecA gene positive by all methods. The real-time PCR assay detected the mecA gene in samples that were negative for S. aureus, but positive for Staphylococcus spp. The PVL gene was not detected in any sample by the conventional culture/PCR method or the real-time PCR assay. Among S. aureus isolated by conventional culture/PCR method, the sequence type ST398, and multi-drug resistant strains were found in animals and raw meat samples. The real-time PCR assay may be recommended as a rapid method for detection of S. aureus and the mecA gene, with further confirmation of methicillin-resistant S. 
aureus (MRSA) using the standard culture method. PMID:24849624

  3. Multiplex real-time PCR for detection of Staphylococcus aureus, mecA and Panton-Valentine Leukocidin (PVL) genes from selective enrichments from animals and retail meat.

    PubMed

    Velasco, Valeria; Sherwood, Julie S; Rojas-García, Pedro P; Logue, Catherine M

    2014-01-01

    The aim of this study was to compare a real-time PCR assay, with a conventional culture/PCR method, to detect S. aureus, mecA and Panton-Valentine Leukocidin (PVL) genes in animals and retail meat, using a two-step selective enrichment protocol. A total of 234 samples were examined (77 animal nasal swabs, 112 retail raw meat, and 45 deli meat). The multiplex real-time PCR targeted the genes: nuc (identification of S. aureus), mecA (associated with methicillin resistance) and PVL (virulence factor), and the primary and secondary enrichment samples were assessed. The conventional culture/PCR method included the two-step selective enrichment, selective plating, biochemical testing, and multiplex PCR for confirmation. The conventional culture/PCR method recovered 95/234 positive S. aureus samples. Application of real-time PCR on samples following primary and secondary enrichment detected S. aureus in 111/234 and 120/234 samples respectively. For detection of S. aureus, the kappa statistic was 0.68-0.88 (from substantial to almost perfect agreement) and 0.29-0.77 (from fair to substantial agreement) for primary and secondary enrichments, using real-time PCR. For detection of mecA gene, the kappa statistic was 0-0.49 (from no agreement beyond that expected by chance to moderate agreement) for primary and secondary enrichment samples. Two pork samples were mecA gene positive by all methods. The real-time PCR assay detected the mecA gene in samples that were negative for S. aureus, but positive for Staphylococcus spp. The PVL gene was not detected in any sample by the conventional culture/PCR method or the real-time PCR assay. Among S. aureus isolated by conventional culture/PCR method, the sequence type ST398, and multi-drug resistant strains were found in animals and raw meat samples. The real-time PCR assay may be recommended as a rapid method for detection of S. aureus and the mecA gene, with further confirmation of methicillin-resistant S. 
aureus (MRSA) using the standard culture method.
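The kappa statistics quoted above measure agreement between the two detection methods beyond what chance alone would produce. A minimal implementation of Cohen's kappa for paired positive/negative calls (the data here are illustrative, not the study's counts):

```python
def cohen_kappa(a, b):
    """Chance-corrected agreement between two raters' paired labels."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    # Expected agreement if the two raters labelled independently.
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1.0 - expected)

culture = [1, 1, 0, 0, 1, 0, 1, 0]   # e.g. conventional culture/PCR calls
realtime = [1, 1, 0, 1, 1, 0, 1, 0]  # e.g. real-time PCR calls
kappa = cohen_kappa(culture, realtime)
```

Values near 1 indicate almost perfect agreement and values near 0 indicate agreement no better than chance, which is how the ranges reported above (e.g. 0.68-0.88 vs. 0-0.49) are interpreted.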

  4. Robust gene selection methods using weighting schemes for microarray data analysis.

    PubMed

    Kang, Suyeon; Song, Jongwoo

    2017-09-02

    A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
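The SAM statistic that the proposed weighting schemes modify is, in essence, a difference of class means divided by a standard error inflated by a small constant s0, which keeps low-variance genes from dominating. A hedged sketch with illustrative data and an arbitrary s0 (not the paper's weighting scheme itself):

```python
import math
import statistics

def sam_statistic(expr, y, s0=0.1):
    """SAM-style moderated statistic for one gene across two classes."""
    a = [v for v, t in zip(expr, y) if t == 0]
    b = [v for v, t in zip(expr, y) if t == 1]
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return (statistics.fmean(b) - statistics.fmean(a)) / (se + s0)

y = [0, 0, 0, 1, 1, 1]
gene_up = [1.0, 1.1, 0.9, 2.0, 2.1, 1.9]      # shifted between classes
gene_flat = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0]  # no real shift
d_up = sam_statistic(gene_up, y)
d_flat = sam_statistic(gene_flat, y)
```

A filter would rank genes by |d| and keep the top-ranked ones; the paper's contribution is re-weighting this ranking to stay robust under noise and small replicate counts.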

  5. Wavelength Selection Method Based on Differential Evolution for Precise Quantitative Analysis Using Terahertz Time-Domain Spectroscopy.

    PubMed

    Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong

    2017-12-01

    Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial component. The raw spectrum consists of signals from the sample and scattering and other random disturbances that can critically influence the quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
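The differential evolution machinery behind the wavelength selection can be sketched with the standard DE/rand/1/bin loop; here it minimizes a toy sphere objective rather than the paper's spectral error criterion, and all parameter values are conventional defaults, not the authors'.

```python
import random

def differential_evolution(obj, dim, pop_size=20, gens=100, f=0.5, cr=0.9):
    """DE/rand/1/bin: mutate with a scaled difference vector, apply
    binomial crossover, then keep the trial only if it is no worse."""
    pop = [[random.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(pop_size)]
    cost = [obj(p) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)  # force at least one mutated gene
            trial = [pop[a][j] + f * (pop[b][j] - pop[c][j])
                     if (random.random() < cr or j == j_rand) else pop[i][j]
                     for j in range(dim)]
            t_cost = obj(trial)
            if t_cost <= cost[i]:           # greedy one-to-one selection
                pop[i], cost[i] = trial, t_cost
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

random.seed(0)
sphere = lambda v: sum(x * x for x in v)    # toy objective
best_vec, best_cost = differential_evolution(sphere, dim=4)
```

For wavelength selection, the objective would instead score a candidate wavelength subset by the resulting quantitative-model error, with the continuous DE vector mapped to a subset by thresholding.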

  6. A new mosaic method for three-dimensional surface

    NASA Astrophysics Data System (ADS)

    Yuan, Yun; Zhu, Zhaokun; Ding, Yongjun

    2011-08-01

Three-dimensional (3-D) data mosaicking is an indispensable step in surface measurement and digital terrain map generation. To address the problem of mosaicking locally unorganized point clouds with only coarse registration and many mismatched points, a new RANSAC-based mosaic method for 3-D surfaces is proposed. Each iteration of the method proceeds through random sampling with an additional shape constraint, point-cloud data normalization, absolute orientation, data denormalization, inlier counting, etc. After N random trials the largest consensus set is selected, and the model is finally re-estimated using all the points in that subset. The minimal subset consists of three non-collinear points forming a triangle, and the triangle's shape is taken into account during random sampling to keep the selection well conditioned. A new coordinate-system transformation algorithm presented in this paper is used to avoid singularity: the full rotation between the two coordinate systems is decomposed into two rotations expressed by Euler angle vectors, each with an explicit physical meaning. Both simulated and real data are used to demonstrate the correctness and validity of the method. It has good noise immunity owing to its robust estimation property, and high accuracy because the shape constraint is added to the random sampling and data normalization to the absolute orientation. The method is applicable to high-precision measurement of 3-D surfaces and to 3-D terrain mosaicking.
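The consensus-set loop summarized above is the standard RANSAC pattern: fit a model to a minimal random subset, count inliers, keep the largest consensus set, then refit on it. Stripped to a 2-D line-fitting toy (the paper's minimal subset is three non-collinear points defining a rigid transform; two points defining a line stand in here):

```python
import random

def ransac_line(points, trials=200, tol=0.2):
    """Keep the model with the largest consensus set, then refit on it."""
    best_inliers = []
    for _ in range(trials):
        (x1, y1), (x2, y2) = random.sample(points, 2)  # minimal subset
        if x1 == x2:
            continue                                   # degenerate sample
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Re-estimate by least squares using all points in the consensus set.
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers)
    sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n, best_inliers

random.seed(3)
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)]  # true line y = 2x + 1
pts += [(1.0, 9.0), (4.0, -3.0), (7.0, 30.0)]         # gross outliers
slope, intercept, inliers = ransac_line(pts)
```

The gross outliers never enter the consensus set, which is why the final least-squares refit recovers the true model; the paper's shape constraint plays the role of rejecting degenerate minimal subsets up front.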

  7. SnagPRO: snag and tree sampling and analysis methods for wildlife

    Treesearch

    Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...

  8. A new approach based on off-line coupling of high-performance liquid chromatography with gas chromatography-mass spectrometry to determine acrylamide in coffee brew.

    PubMed

    Blanch, Gracia Patricia; Morales, Francisco José; Moreno, Fernando de la Peña; del Castillo, María Luisa Ruiz

    2013-01-01

    A new method based on off-line coupling of LC with GC in replacement of conventional sample preparation techniques is proposed to analyze acrylamide in coffee brews. The method involves the preseparation of the sample by LC, the collection of the selected fraction, its concentration under nitrogen, and subsequent analysis by GC coupled with MS. The composition of the LC mobile phase and the flow rate were studied to select those conditions that allowed separation of acrylamide without coeluting compounds. Under the conditions selected recoveries close to 100% were achieved while LODs and LOQs equal to 5 and 10 μg/L for acrylamide in brewed coffee were obtained. The method developed enabled the reliable detection of acrylamide in spiked coffee beverage samples without further clean-up steps or sample manipulation. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Longitudinal study of Escherichia coli O157 shedding and super shedding in dairy heifers.

    PubMed

    Williams, K J; Ward, M P; Dhungyel, O P

    2015-04-01

A longitudinal study was conducted to assess the methods available for detection of Escherichia coli O157 and to investigate the prevalence and occurrence of long-term shedding and super shedding in a cohort of Australian dairy heifers. Samples were obtained at approximately weekly intervals from heifers at pasture under normal management systems. Selective sampling techniques were used with the aim of identifying heifers with a higher probability of shedding or super shedding. Rectoanal mucosal swabs (RAMS) and fecal samples were obtained from each heifer. Direct culture of feces was used for detection and enumeration. Feces and RAMS were tested by enrichment culture. Selected samples were further tested retrospectively by immunomagnetic separation of enriched samples. Of 784 samples obtained, 154 (19.6%) were detected as positive using culture methods. Adjusting for selective sampling, the prevalence was 71 (15.6%) of 454. In total, 66 samples were detected as positive at >10^2 CFU/g, of which 8 were >10^4 CFU/g and classed as super shedding. A significant difference was observed in detection by enriched culture of RAMS and feces. Dairy heifers within this cohort exhibited variable E. coli O157 shedding, consistent with previous estimates of shedding. Super shedding was detected at a low frequency and inconsistently from individual heifers. All detection methods identified some samples as positive that were not detected by any other method, indicating that the testing methods used will influence survey results.

  10. Quantitative analysis of benzodiazepines in vitreous humor by high-performance liquid chromatography

    PubMed Central

    Bazmi, Elham; Behnoush, Behnam; Akhgari, Maryam; Bahmanabadi, Leila

    2016-01-01

Objective: Benzodiazepines are among the most frequently screened drugs in emergency toxicology, drugs-of-abuse testing, and forensic cases. Because variations in benzodiazepine concentrations in biological samples caused by bleeding, postmortem changes, and redistribution can bias forensic examinations, selecting a suitable sample and a validated, accurate method is essential for the quantitative analysis of this major drug category. The aim of this study was to develop a valid method for the determination of four benzodiazepines (flurazepam, lorazepam, alprazolam, and diazepam) in vitreous humor using liquid–liquid extraction and high-performance liquid chromatography. Methods: Sample preparation was carried out by liquid–liquid extraction with n-hexane:ethyl acetate, with subsequent detection by high-performance liquid chromatography coupled to a diode array detector. The method was applied to quantify benzodiazepines in 21 authentic vitreous humor samples. A linear curve for each drug was obtained within the range of 30–3000 ng/mL, with coefficients of correlation higher than 0.99. Results: The limits of detection and quantitation were 30 and 100 ng/mL, respectively, for all four drugs. The method showed appropriate intra- and inter-day precision (coefficient of variation < 10%). Benzodiazepine recoveries were estimated to be over 80%. The method showed high selectivity; no additional peaks due to interfering substances were observed in the samples. Conclusion: The present method is selective, sensitive, accurate, and precise for the quantitative analysis of benzodiazepines in vitreous humor samples in the forensic toxicology laboratory. PMID:27635251
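Limits of detection and quantitation like those reported above are conventionally derived from the calibration line; a sketch under the common LOD = 3.3·s/slope and LOQ = 10·s/slope convention, where s is the residual standard deviation of the fit. The calibration data below are illustrative, not the study's.

```python
import statistics

conc = [30.0, 100.0, 300.0, 1000.0, 3000.0]  # standards, ng/mL
area = [0.9, 3.1, 9.0, 30.5, 89.8]           # detector response (illustrative)

# Ordinary least-squares calibration line: area = slope * conc + intercept.
n = len(conc)
mx, my = statistics.fmean(conc), statistics.fmean(area)
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, area))
         / sum((x - mx) ** 2 for x in conc))
intercept = my - slope * mx

# Residual standard deviation of the calibration fit (n - 2 dof).
resid = [y - (slope * x + intercept) for x, y in zip(conc, area)]
s = (sum(r * r for r in resid) / (n - 2)) ** 0.5

lod = 3.3 * s / slope   # limit of detection, ng/mL
loq = 10.0 * s / slope  # limit of quantitation, ng/mL
```

An unknown sample's concentration is then read back as (area - intercept) / slope, and only values above the LOQ are reported quantitatively.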

  11. Selected Analytical Methods for Environmental Remediation and Recovery (SAM) - Home

    EPA Pesticide Factsheets

    The SAM Home page provides access to all information provided in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM), and includes a query function allowing users to search methods by analyte, sample type and instrumentation.

  12. Methods for purifying carbon materials

    DOEpatents

    Dailly, Anne [Pasadena, CA; Ahn, Channing [Pasadena, CA; Yazami, Rachid [Los Angeles, CA; Fultz, Brent T [Pasadena, CA

    2009-05-26

    Methods of purifying samples are provided that are capable of removing carbonaceous and noncarbonaceous impurities from a sample containing a carbon material having a selected structure. Purification methods are provided for removing residual metal catalyst particles enclosed in multilayer carbonaceous impurities in samples generated by catalytic synthesis methods. Purification methods are provided wherein carbonaceous impurities in a sample are at least partially exfoliated, thereby facilitating subsequent removal of carbonaceous and noncarbonaceous impurities from the sample. Methods of purifying carbon nanotube-containing samples are provided wherein an intercalant is added to the sample and subsequently reacted with an exfoliation initiator to achieve exfoliation of carbonaceous impurities.

  13. Selected Factors Related to Selective Service Rejection and Rejection Rate in Delaware (1967): A Study of the Characteristics of Young Men Failing to Meet Mental Qualifications for Military Service.

    ERIC Educational Resources Information Center

    Price, Jay R.

    This study sought information about selective service rejection in Delaware, specifically rejectee characteristics, reasons for rejection, and the high rejection rate in Delaware. The basic design was a modified case study method in which a sample of individual records was examined. Differences between this sample and national samples were tested…

  14. Arrayed Micro-Ring Spectrometer System and Method of Use

    NASA Technical Reports Server (NTRS)

    Choi, Sang H. (Inventor); Park, Yeonjoon (Inventor); King, Glen C. (Inventor); Elliott, James R. (Inventor)

    2012-01-01

    A spectrometer system includes an array of micro-zone plates (MZP) each having coaxially-aligned ring gratings, a sample plate for supporting and illuminating a sample, and an array of photon detectors for measuring a spectral characteristic at a predetermined wavelength. The sample plate emits an evanescent wave in response to incident light, which excites molecules of the sample to thereby cause an emission of secondary photons. A method of detecting the intensity of a selected wavelength of incident light includes directing the incident light onto an array of MZP, diffracting a selected wavelength of the incident light onto a target focal point using the array of MZP, and detecting the intensity of the selected wavelength using an array of photon detectors. An electro-optic layer positioned adjacent to the array of MZP may be excited via an applied voltage to select the wavelength of the incident light.

  15. Molecularly imprinted covalent organic polymers for the selective extraction of benzoxazole fluorescent whitening agents from food samples.

    PubMed

    Ding, Hui; Wang, Rongyu; Wang, Xiao; Ji, Wenhua

    2018-06-21

    Molecularly imprinted covalent organic polymers were constructed by an imine-linking reaction between 1,3,5-triformylphloroglucinol and 2,6-diaminopyridine and used for the selective solid-phase extraction of benzoxazole fluorescent whitening agents from food samples. Binding experiments showed that the imprinting sites on the molecularly imprinted polymers had higher selectivity for the targets than the corresponding non-imprinted polymers. Parameters affecting the solid-phase extraction procedure were examined. Under optimal conditions, real samples were treated and the eluent was analyzed by high-performance liquid chromatography with diode-array detection. The results showed that the established method offered wide linearity, satisfactory detection and quantification limits, and acceptable recoveries. Thus, the developed method has practical potential for the selective determination of benzoxazole fluorescent whitening agents in complex food samples. This article is protected by copyright. All rights reserved.

  16. An evaluation of flow-stratified sampling for estimating suspended sediment loads

    Treesearch

    Robert B. Thomas; Jack Lewis

    1995-01-01

    Abstract - Flow-stratified sampling is a new method for sampling water quality constituents such as suspended sediment to estimate loads. As with selection-at-list-time (SALT) and time-stratified sampling, flow-stratified sampling is a statistical method requiring random sampling, and yielding unbiased estimates of load and variance. It can be used to estimate event...
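
    The abstract does not spell out the estimator, but the stratified random sampling estimate of a total and its variance on which flow-stratified sampling builds can be sketched as follows; the strata, unit counts, and values here are illustrative, not the SALT or flow-stratified formulas themselves:

```python
# Stratified random sampling estimate of a total (e.g., a sediment
# load) and its variance. Each stratum is (N_h, samples_h), where N_h
# is the number of sampling units in the stratum and samples_h are
# randomly sampled values. Data are illustrative.

def stratified_total(strata):
    """Unbiased estimate of the population total and its variance."""
    total, var = 0.0, 0.0
    for n_units, samples in strata:
        n = len(samples)
        mean = sum(samples) / n
        # within-stratum sample variance
        s2 = sum((x - mean) ** 2 for x in samples) / (n - 1)
        total += n_units * mean
        # finite-population-corrected variance contribution
        var += n_units ** 2 * (1.0 - n / n_units) * s2 / n
    return total, var

# Two flow strata: many low-flow units with small values, few
# high-flow units with large values.
low = (100, [2.0, 2.4, 1.8, 2.2])
high = (10, [50.0, 55.0, 45.0])
est, v = stratified_total([low, high])
print(round(est, 1), round(v, 1))
```

    Stratifying by flow concentrates sampling effort in the high-flow stratum, where most of the load and most of the variance reside.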

  17. A re-evaluation of a case-control model with contaminated controls for resource selection studies

    Treesearch

    Christopher T. Rota; Joshua J. Millspaugh; Dylan C. Kesler; Chad P. Lehman; Mark A. Rumble; Catherine M. B. Jachowski

    2013-01-01

    A common sampling design in resource selection studies involves measuring resource attributes at sample units used by an animal and at sample units considered available for use. Few models can estimate the absolute probability of using a sample unit from such data, but such approaches are generally preferred over statistical methods that estimate a relative probability...

  18. Sample selection via angular distance in the space of the arguments of an artificial neural network

    NASA Astrophysics Data System (ADS)

    Fernández Jaramillo, J. M.; Mayerle, R.

    2018-05-01

    In the construction of an artificial neural network (ANN), proper splitting of the available samples plays a major role in the training process. The selection of subsets for training, testing, and validation affects the generalization ability of the neural network, and the number of samples affects the time required to design and train the ANN. This paper introduces an efficient and simple method for reducing the set of samples used for training a neural network. The method reduces the time required to calculate the network coefficients while preserving diversity and avoiding overtraining of the ANN due to the presence of similar samples. The proposed method is based on the calculation of the angle between two vectors, each representing one input of the neural network. When the angle formed between samples is smaller than a defined threshold, only one input is accepted for training. The accepted inputs are scattered throughout the sample space. Tidal records are used to demonstrate the proposed method. The results of a cross-validation show that with few inputs the outputs are inaccurate and depend on the selection of the first sample, but as the number of inputs increases the accuracy improves and the differences among scenarios with different starting samples are greatly reduced. A comparison with the K-means clustering algorithm shows that, for this application, the proposed method produces a more accurate network with a smaller number of samples.
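
    The angular-distance screening described above can be sketched in a few lines. The greedy acceptance rule, the threshold, and the toy input vectors below are illustrative assumptions, not the authors' exact procedure:

```python
import math

# Sketch of angular-distance sample screening: accept a candidate
# training input only if the angle between its vector and every
# already-accepted vector is at least a threshold.

def angle_deg(u, v):
    """Angle between two vectors, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    # clamp against floating-point drift outside [-1, 1]
    c = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.degrees(math.acos(c))

def select_samples(samples, threshold_deg):
    """Keep a sample only if it is at least threshold_deg away from
    all samples accepted so far."""
    accepted = []
    for s in samples:
        if all(angle_deg(s, a) >= threshold_deg for a in accepted):
            accepted.append(s)
    return accepted

samples = [(1.0, 0.0), (0.999, 0.05), (0.0, 1.0), (0.7, 0.7)]
kept = select_samples(samples, threshold_deg=30.0)
print(len(kept))  # the near-duplicate of (1.0, 0.0) is dropped
```

    Because near-parallel inputs are collapsed to a single representative, the retained set stays scattered over the sample space, as the abstract describes.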

  19. 40 CFR 761.289 - Compositing samples.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Remediation Waste and Porous Surfaces in Accordance With § 761.61(a)(6) § 761.289 Compositing samples. Compositing is a method of combining several samples of a specific type of bulk PCB remediation waste or... compositing bulk PCB remediation waste samples. These procedures are based on the method for selecting...

  20. 40 CFR 761.289 - Compositing samples.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Remediation Waste and Porous Surfaces in Accordance With § 761.61(a)(6) § 761.289 Compositing samples. Compositing is a method of combining several samples of a specific type of bulk PCB remediation waste or... compositing bulk PCB remediation waste samples. These procedures are based on the method for selecting...

  1. Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah

    2014-01-01

    Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…

  2. GeoLab Concept: The Importance of Sample Selection During Long Duration Human Exploration Mission

    NASA Technical Reports Server (NTRS)

    Calaway, M. J.; Evans, C. A.; Bell, M. S.; Graff, T. G.

    2011-01-01

    In the future when humans explore planetary surfaces on the Moon, Mars, and asteroids or beyond, the return of geologic samples to Earth will be a high priority for human spaceflight operations. All future sample return missions will have strict down-mass and volume requirements; methods for in-situ sample assessment and prioritization will be critical for selecting the best samples for return to Earth.

  3. Quantification of six herbicide metabolites in human urine.

    PubMed

    Norrgran, Jessica; Bravo, Roberto; Bishop, Amanda M; Restrepo, Paula; Whitehead, Ralph D; Needham, Larry L; Barr, Dana B

    2006-01-18

    We developed a sensitive, selective and precise method for measuring herbicide metabolites in human urine. Our method uses automated liquid delivery of internal standards and acetate buffer and a mixed polarity polymeric phase solid phase extraction of a 2 mL urine sample. The concentrated eluate is analyzed using high-performance liquid chromatography-tandem mass spectrometry. Isotope dilution calibration is used for quantification of all analytes. The limits of detection of our method range from 0.036 to 0.075 ng/mL. The within- and between-day variations in pooled quality control samples range from 2.5 to 9.0% and from 3.2 to 16%, respectively, for all analytes at concentrations ranging from 0.6 to 12 ng/mL. Precision was similar with samples fortified with 0.1 and 0.25 ng/mL that were analyzed in each run. We validated our selective method against a less selective method used previously in our laboratory by analyzing human specimens using both methods. The methods produced results that were in agreement, with no significant bias observed.

  4. Circular Samples as Objects for Magnetic Resonance Imaging - Mathematical Simulation, Experimental Results

    NASA Astrophysics Data System (ADS)

    Frollo, Ivan; Krafčík, Andrej; Andris, Peter; Přibil, Jiří; Dermek, Tomáš

    2015-12-01

    Circular samples are frequent objects of "in-vitro" investigation using imaging methods based on magnetic resonance principles. The goal of our investigation is the imaging of thin planar layers without using the slice-selection procedure, i.e., purely 2D imaging, as well as the imaging of selected layers of samples in circular vessels, Eppendorf tubes, etc., which necessarily use the "slice selection" procedure. Although standard imaging methods were used, some specific issues arise when mathematical modeling of these procedures is introduced. In the paper, several mathematical models are presented and compared with real experimental results. Circular magnetic samples were placed in the homogeneous magnetic field of a low-field imager based on nuclear magnetic resonance. An MRI 0.178 Tesla ESAOTE Opera imager was used for experimental verification.

  5. Sampling Operations on Big Data

    DTIC Science & Technology

    2015-11-29

    Vijay Gadepally, Taylor Herr, Luke Johnson, Lauren Milechin, Maja Milosavljevic, Benjamin A. Miller (Lincoln...)

    ...process and disseminate information for discovery and exploration under real-time constraints. Common signal processing operations such as sampling and ... categories. These include edge sampling methods, where edges are selected by predetermined criteria; snowball sampling methods, where algorithms start...

  6. SNP selection and classification of genome-wide SNP data using stratified sampling random forests.

    PubMed

    Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K

    2012-09-01

    For high-dimensional genome-wide association (GWA) case-control data on complex diseases, a large portion of the single-nucleotide polymorphisms (SNPs) are usually irrelevant to the disease. A simple random sampling method in random forests, using the default mtry parameter to choose the feature subspace, will select too many subspaces without informative SNPs. An exhaustive search for an optimal mtry is often required to include useful and relevant SNPs and discard the vast number of non-informative SNPs; however, this is too time-consuming for high-dimensional GWA data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection when generating decision trees in a random forest for high-dimensional GWA data. Our idea is to design an equal-width discretization scheme for informativeness that divides the SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace for generating a decision tree. This stratified sampling procedure ensures that each subspace contains enough useful SNPs while avoiding the very high computational cost of an exhaustive search for an optimal mtry, and it maintains the randomness of a random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408 803 SNPs and Alzheimer case-control data comprising 380 157 SNPs) to demonstrate that the proposed stratified sampling method is effective and can generate better random forests, with higher accuracy and a lower error bound, than Breiman's random forest generation method. For the Parkinson data, we also highlight some interesting genes identified by the method that may be associated with neurological disorders and warrant further biological investigation.
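
    The stratified subspace idea can be sketched as follows. The equal-width binning of a toy informativeness score and the per-group draw are simplifications; the group count, draw size, and scores are illustrative:

```python
import random

# Sketch of stratified feature-subspace selection: bin features (SNPs)
# into equal-width groups by an informativeness score, then build each
# tree's subspace by drawing the same number of features per group.

def stratify(scores, n_groups):
    """Equal-width discretization of scores into groups of indices."""
    lo, hi = min(scores), max(scores)
    width = (hi - lo) / n_groups or 1.0  # guard against all-equal scores
    groups = [[] for _ in range(n_groups)]
    for idx, s in enumerate(scores):
        g = min(int((s - lo) / width), n_groups - 1)
        groups[g].append(idx)
    return groups

def sample_subspace(groups, per_group, rng):
    """Draw per_group features from each stratum to form one subspace."""
    subspace = []
    for g in groups:
        subspace.extend(rng.sample(g, min(per_group, len(g))))
    return subspace

rng = random.Random(0)
scores = [0.01 * i for i in range(100)]  # toy informativeness scores
groups = stratify(scores, n_groups=5)
subspace = sample_subspace(groups, per_group=2, rng=rng)
print(len(groups), len(subspace))
```

    Every subspace is guaranteed a quota from the high-informativeness stratum, which is the property the abstract argues plain random subspace selection lacks.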

  7. Application of Bayesian methods to habitat selection modeling of the northern spotted owl in California: new statistical methods for wildlife research

    Treesearch

    Howard B. Stauffer; Cynthia J. Zabel; Jeffrey R. Dunk

    2005-01-01

    We compared a set of competing logistic regression habitat selection models for Northern Spotted Owls (Strix occidentalis caurina) in California. The habitat selection models were estimated, compared, evaluated, and tested using multiple sample datasets collected on federal forestlands in northern California. We used Bayesian methods in interpreting...

  8. GalaxyGPCRloop: Template-Based and Ab Initio Structure Sampling of the Extracellular Loops of G-Protein-Coupled Receptors.

    PubMed

    Won, Jonghun; Lee, Gyu Rie; Park, Hahnbeom; Seok, Chaok

    2018-06-07

    The second extracellular loops (ECL2s) of G-protein-coupled receptors (GPCRs) are often involved in GPCR functions, and their structures have important implications in drug discovery. However, structure prediction of ECL2 is difficult because of its long length and the structural diversity among different GPCRs. In this study, a new ECL2 conformational sampling method involving both template-based and ab initio sampling was developed. Inspired by the observation of similar ECL2 structures of closely related GPCRs, a template-based sampling method employing loop structure templates selected from the structure database was developed. A new metric for evaluating similarity of the target loop to templates was introduced for template selection. An ab initio loop sampling method was also developed to treat cases without highly similar templates. The ab initio method is based on the previously developed fragment assembly and loop closure method. A new sampling component that takes advantage of secondary structure prediction was added. In addition, a conserved disulfide bridge restraining ECL2 conformation was predicted and analytically incorporated into sampling, reducing the effective dimension of the conformational search space. The sampling method was combined with an existing energy function for comparison with previously reported loop structure prediction methods, and the benchmark test demonstrated outstanding performance.

  9. A nonparametric method to generate synthetic populations to adjust for complex sampling design features.

    PubMed

    Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E

    2014-06-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples, and many statistical methods are developed largely in this IID world. Applying these methods to data from complex sample surveys without allowing for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. Extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), both of which are stratified, clustered, unequal-probability-of-selection sample designs.

  10. A nonparametric method to generate synthetic populations to adjust for complex sampling design features

    PubMed Central

    Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples, and many statistical methods are developed largely in this IID world. Applying these methods to data from complex sample surveys without allowing for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. Extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), both of which are stratified, clustered, unequal-probability-of-selection sample designs. PMID:29200608
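
    The core resampling step of such methods can be illustrated with a much-simplified sketch: drawing a synthetic population with probability proportional to survey weights. This omits the Bayesian bootstrap's Pólya-urn weight updating entirely; the data and weights are made up:

```python
import random

# Simplified sketch: resample units with probability proportional to
# their survey weights (inverse selection probabilities) so that the
# synthetic population approximates a simple random sample from the
# implied superpopulation. Not the full finite population Bayesian
# bootstrap; values and weights are illustrative.

def synthetic_population(values, weights, size, rng):
    """Draw a synthetic population by weighted resampling."""
    return rng.choices(values, weights=weights, k=size)

rng = random.Random(42)
values = [10, 20, 30]
weights = [5.0, 3.0, 2.0]  # survey weights
pop = synthetic_population(values, weights, size=10000, rng=rng)

# the synthetic population's mean should approach the weighted mean
weighted_mean = sum(v * w for v, w in zip(values, weights)) / sum(weights)
print(round(weighted_mean, 1))
```

    Once generated, the synthetic population can be analyzed with IID methods, which is the point of the approach described above.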

  11. Eating and Exercising: Nebraska Adolescents' Attitudes and Behaviors. Technical Report 25.

    ERIC Educational Resources Information Center

    Newman, Ian M.

    This report describes selected eating and exercise patterns among a sample of 2,237 Nebraska youth in grades 9-12 selected from a random sample of 24 junior and senior high schools. The eating patterns reported cover food selection, body image, weight management, and weight loss methods. The exercise patterns relate to the frequency of…

  12. Accuracy and suitability of selected sampling methods within conifer dominated riparian zones

    Treesearch

    Theresa Marquardt; Hailemariam Temesgen; Paul D. Anderson

    2010-01-01

    Sixteen sampling alternatives were examined for their performance in quantifying selected attributes of overstory conifers in riparian areas of western Oregon. Each alternative was examined at eight headwater forest locations based on 0.52-ha square stem maps. The alternatives were evaluated for selected stand attributes (trees per hectare, basal area per hectare, and...

  13. Methods of analysis by the U. S. Geological Survey National Water Quality Laboratory - determination of organonitrogen herbicides in water by solid-phase extraction and capillary-column gas chromatography/mass spectrometry with selected-ion monitoring

    USGS Publications Warehouse

    Sandstrom, Mark W.; Wydoski, Duane S.; Schroeder, Michael P.; Zamboni, Jana L.; Foreman, William T.

    1992-01-01

    A method for the isolation of organonitrogen herbicides from natural water samples using solid-phase extraction and analysis by capillary-column gas chromatography/mass spectrometry with selected-ion monitoring is described. Water samples are filtered to remove suspended particulate matter and then are pumped through disposable solid-phase extraction cartridges containing octadecyl-bonded porous silica to remove the herbicides. The cartridges are dried using carbon dioxide, and adsorbed herbicides are removed from the cartridges by elution with 1.8 milliliters of hexane-isopropanol (3:1). Extracts of the eluates are analyzed by capillary-column gas chromatography/mass spectrometry with selected-ion monitoring of at least three characteristic ions. The method detection limits are dependent on sample matrix and each particular herbicide. The method detection limits, based on a 100-milliliter sample size, range from 0.02 to 0.25 microgram per liter. Recoveries averaged 80 to 115 percent for the 23 herbicides and 2 metabolites in 1 reagent-water and 2 natural-water samples fortified at levels of 0.2 and 2.0 micrograms per liter.

  14. Method 1200: Analytical Protocol for Non-Typhoidal Salmonella in Drinking Water and Surface Water

    EPA Pesticide Factsheets

    Method 1200 is used for identification, confirmation and quantitation of non-typhoidal Salmonella in water samples, using selective and non-selective media followed by biochemical and serological confirmation.

  15. Powder Handling Device for Analytical Instruments

    NASA Technical Reports Server (NTRS)

    Sarrazin, Philippe C. (Inventor); Blake, David F. (Inventor)

    2006-01-01

    Method and system for causing a powder sample in a sample holder to undergo at least one of three motions (vibration, rotation and translation) at a selected motion frequency in order to present several views of an individual grain of the sample. One or more measurements of diffraction, fluorescence, spectroscopic interaction, transmission, absorption and/or reflection can be made on the sample, using light in a selected wavelength region.

  16. EVIDENCE FOR THE UNIVERSALITY OF PROPERTIES OF RED-SEQUENCE GALAXIES IN X-RAY- AND RED-SEQUENCE-SELECTED CLUSTERS AT z ∼ 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foltz, R.; Wilson, G.; DeGroot, A.

    We study the slope, intercept, and scatter of the color–magnitude and color–mass relations for a sample of 10 infrared red-sequence-selected clusters at z ∼ 1. The quiescent galaxies in these clusters formed the bulk of their stars above z ≳ 3 with an age spread Δt ≳ 1 Gyr. We compare UVJ color–color and spectroscopic-based galaxy selection techniques, and find a 15% difference in the galaxy populations classified as quiescent by these methods. We compare the color–magnitude relations from our red-sequence-selected sample with X-ray- and photometric-redshift-selected cluster samples of similar mass and redshift. Within uncertainties, we are unable to detect any difference in the ages and star formation histories of quiescent cluster members in clusters selected by different methods, suggesting that the dominant quenching mechanism is insensitive to cluster baryon partitioning at z ∼ 1.

  17. Selectivity in analytical chemistry: two interpretations for univariate methods.

    PubMed

    Dorkó, Zsanett; Verbić, Tatjana; Horvai, George

    2015-01-01

    Selectivity is extremely important in analytical chemistry but its definition is elusive despite continued efforts by professional organizations and individual scientists. This paper shows that the existing selectivity concepts for univariate analytical methods broadly fall into two classes: selectivity concepts based on measurement error and concepts based on response surfaces (the response surface being the 3D plot of the univariate signal as a function of analyte and interferent concentration, respectively). The strengths and weaknesses of the different definitions are analyzed and the contradictions between them unveiled. The error-based selectivity is very general and very safe, but its application to a range of samples (as opposed to a single sample) requires knowledge of some constraint on the possible sample compositions. The selectivity concepts based on the response surface are easily applied to linear response surfaces but may lead to difficulties and counterintuitive results when applied to nonlinear response surfaces. A particular advantage of this class of selectivity is that with linear response surfaces it can provide a concentration-independent measure of selectivity. In contrast, the error-based selectivity concept allows only a yes/no type decision about selectivity. Copyright © 2014 Elsevier B.V. All rights reserved.
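
    For the linear case, the concentration-independent measure mentioned above reduces to a ratio of sensitivities. A small illustration with made-up coefficients, not a formula taken from the paper:

```python
# For a linear univariate response S = k_A * c_A + k_I * c_I, the
# ratio k_A / k_I does not depend on concentration, so it can serve
# as a concentration-independent selectivity measure. Coefficients
# below are made up.

def signal(k_analyte, k_interferent, c_analyte, c_interferent):
    """Linear response surface."""
    return k_analyte * c_analyte + k_interferent * c_interferent

def selectivity_ratio(k_analyte, k_interferent):
    """Sensitivity ratio: analyte slope over interferent slope."""
    return k_analyte / k_interferent

k_a, k_i = 10.0, 0.5
print(selectivity_ratio(k_a, k_i))
print(signal(k_a, k_i, c_analyte=1.0, c_interferent=2.0))
```

    On a nonlinear surface the slopes, and hence this ratio, vary with concentration, which is the difficulty the abstract points to.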

  18. System and method of infrared matrix-assisted laser desorption/ionization mass spectrometry in polyacrylamide gels

    DOEpatents

    Haglund, Jr., Richard F.; Ermer, David R.; Baltz-Knorr, Michelle Lee

    2004-11-30

    A system and method for desorption and ionization of analytes in an ablation medium. In one embodiment, the method includes the steps of preparing a sample having analytes in a medium including at least one component, freezing the sample at a sufficiently low temperature so that at least part of the sample has a phase transition, and irradiating the frozen sample with short-pulse radiation to cause medium ablation and desorption and ionization of the analytes. The method further includes the steps of selecting a resonant vibrational mode of at least one component of the medium and selecting an energy source tuned to emit radiation substantially at the wavelength of the selected resonant vibrational mode. The medium is an electrophoresis medium having polyacrylamide. In one embodiment, the energy source is a laser, where the laser can be a free electron laser tunable to generate short-pulse radiation. Alternatively, the laser can be a solid state laser tunable to generate short-pulse radiation. The laser can emit light at various ranges of wavelength.

  19. 40 CFR 63.1385 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... applicable emission limits: (1) Method 1 (40 CFR part 60, appendix A) for the selection of the sampling port location and number of sampling ports; (2) Method 2 (40 CFR part 60, appendix A) for volumetric flow rate.... Each run shall consist of a minimum run time of 2 hours and a minimum sample volume of 60 dry standard...

  20. An Alternative View of Forest Sampling

    Treesearch

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1993-01-01

    A generalized concept is presented for all of the commonly used methods of forest sampling. The concept views the forest as a two-dimensional picture which is cut up into pieces like a jigsaw puzzle, with the pieces defined by the individual selection probabilities of the trees in the forest. This concept results in a finite number of independently selected sample...
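
    The jigsaw-puzzle view above amounts to selecting trees with unequal, size-driven inclusion probabilities. A minimal probability-proportional-to-size (PPS) draw with a Hansen–Hurwitz estimate of a total can serve as a sketch; the tree sizes and values are made up:

```python
import random

# PPS sampling sketch: draw trees with probability proportional to a
# size measure (with replacement), then estimate the population total
# by averaging value/probability over the draws (Hansen-Hurwitz).

def pps_estimate_total(values, sizes, n_draws, rng):
    """Unbiased Hansen-Hurwitz estimate of the total of `values`."""
    total_size = sum(sizes)
    probs = [s / total_size for s in sizes]
    draws = rng.choices(range(len(values)), weights=sizes, k=n_draws)
    return sum(values[i] / probs[i] for i in draws) / n_draws

rng = random.Random(1)
volumes = [1.0, 2.0, 4.0, 8.0]  # per-tree volumes: quantity of interest
basal = [1.0, 2.0, 4.0, 8.0]    # size measure driving selection
est = pps_estimate_total(volumes, basal, n_draws=500, rng=rng)
print(round(est, 1))  # matches the true total, since value == size here
```

    When the target quantity is exactly proportional to the size measure, every draw returns the true total, which is why size-proportional selection is so effective in forest sampling.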

  1. Identification of solid state fermentation degree with FT-NIR spectroscopy: Comparison of wavelength variable selection methods of CARS and SCARS.

    PubMed

    Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai

    2015-01-01

    The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by FT-NIR spectroscopy was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was then applied to calibrate identification models using the wavelength variables selected by CARS and SCARS. Experimental results showed that CARS and SCARS selected 58 and 47 wavelength variables, respectively, from the 1557 original variables. Compared with full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Moreover, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results demonstrate that a PLS-DA model constructed using wavelength variables selected by a proper variable selection method can identify the solid state fermentation degree more accurately. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Identification of solid state fermentation degree with FT-NIR spectroscopy: Comparison of wavelength variable selection methods of CARS and SCARS

    NASA Astrophysics Data System (ADS)

    Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai

    2015-10-01

    The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by FT-NIR spectroscopy was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was then applied to calibrate identification models using the wavelength variables selected by CARS and SCARS. Experimental results showed that CARS and SCARS selected 58 and 47 wavelength variables, respectively, from the 1557 original variables. Compared with full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Moreover, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results demonstrate that a PLS-DA model constructed using wavelength variables selected by a proper variable selection method can identify the solid state fermentation degree more accurately.
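
    Full CARS re-fits a PLS model on each round and retains wavelengths by an exponentially decreasing schedule combined with adaptive reweighted sampling; the toy sketch below keeps only the core idea of iteratively discarding variables with small absolute coefficients, using fixed, made-up coefficients:

```python
# Simplified, CARS-like variable shrinking: repeatedly rank variables
# by |coefficient| and keep a fixed fraction. In full CARS the PLS
# coefficients would be re-fit on the retained variables each round;
# here they are fixed for brevity, so this is only a sketch.

def cars_like_selection(coefs, n_rounds, keep_ratio):
    """Iteratively shrink the variable set, keeping the variables with
    the largest absolute coefficients each round."""
    kept = list(range(len(coefs)))
    for _ in range(n_rounds):
        kept.sort(key=lambda i: -abs(coefs[i]))
        n_keep = max(1, int(len(kept) * keep_ratio))
        kept = kept[:n_keep]
    return sorted(kept)

coefs = [0.05, 2.1, -1.7, 0.02, 0.9, -0.1, 3.3, 0.01]
selected = cars_like_selection(coefs, n_rounds=3, keep_ratio=0.5)
print(selected)  # indices of the surviving variables
```

    The shrinking schedule is what lets CARS cut 1557 wavelengths down to a few dozen informative ones, as reported above.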

  3. A Hierarchical Feature and Sample Selection Framework and Its Application for Alzheimer’s Disease Diagnosis

    NASA Astrophysics Data System (ADS)

    An, Le; Adeli, Ehsan; Liu, Mingxia; Zhang, Jun; Lee, Seong-Whan; Shen, Dinggang

    2017-03-01

    Classification is one of the most important tasks in machine learning. Due to feature redundancy or outliers in samples, using all available data for training a classifier may be suboptimal. For example, Alzheimer’s disease (AD) is correlated with certain brain regions or single nucleotide polymorphisms (SNPs), and identification of relevant features is critical for computer-aided diagnosis. Many existing methods first select features from structural magnetic resonance imaging (MRI) or SNPs and then use those features to build the classifier. However, in the presence of many redundant features, the most discriminative features are difficult to identify in a single step. Thus, we formulate a hierarchical feature and sample selection framework to gradually select informative features and discard ambiguous samples in multiple steps for improved classifier learning. To positively guide the data manifold preservation process, we utilize both labeled and unlabeled data during training, making our method semi-supervised. For validation, we conduct experiments on AD diagnosis by selecting mutually informative features from both MRI and SNP, and using the most discriminative samples for training. The superior classification results demonstrate the effectiveness of our approach, compared with competing methods.

  4. Method and apparatus for sampling atmospheric mercury

    DOEpatents

    Trujillo, Patricio E.; Campbell, Evan E.; Eutsler, Bernard C.

    1976-01-20

    A method of simultaneously sampling particulate mercury, organic mercurial vapors, and metallic mercury vapor in the working and occupational environment and determining the amount of mercury derived from each such source in the sampled air. A known volume of air is passed through a sampling tube containing a filter for particulate mercury collection, a first adsorber for the selective adsorption of organic mercurial vapors, and a second adsorber for the adsorption of metallic mercury vapor. Carbon black molecular sieves are particularly useful as the selective adsorber for organic mercurial vapors. The amount of mercury adsorbed or collected in each section of the sampling tube is readily quantitatively determined by flameless atomic absorption spectrophotometry.

  5. Compilation of a near-infrared library for the construction of quantitative models of amoxicillin and potassium clavulanate oral dosage forms

    NASA Astrophysics Data System (ADS)

    Zou, Wen-bo; Chong, Xiao-meng; Wang, Yan; Hu, Chang-qin

    2018-05-01

    The accuracy of NIR quantitative models depends on calibration samples with concentration variability. Conventional sample collection methods have shortcomings, especially their time consumption, which remains a bottleneck in the application of NIR models for Process Analytical Technology (PAT) control. A study was performed to solve the problem of sample selection and collection for the construction of NIR quantitative models. Amoxicillin and potassium clavulanate oral dosage forms were used as examples. The aim was to find a general approach to rapidly construct NIR quantitative models using an NIR spectral library, based on the idea of a universal model [2021]. The NIR spectral library of amoxicillin and potassium clavulanate oral dosage forms was defined and consisted of spectra of 377 batches of samples produced by 26 domestic pharmaceutical companies, including tablets, dispersible tablets, chewable tablets, oral suspensions, and granules. The correlation coefficient (rT) was used to indicate the similarity of the spectra. The calibration sets were selected from the spectral library according to the median rT of the samples to be analyzed. The rT of the selected samples was close to the median rT, with a spread of 1.0% to 1.5%. We concluded that sample selection is not a problem when constructing NIR quantitative models from a spectral library, in contrast to conventional methods of building universal models. Sample spectra covering a suitable concentration range for the NIR models were collected quickly. In addition, the models constructed through this method were more easily targeted.
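    The library-based selection amounts to: compute the correlation coefficient rT of each library spectrum against the spectrum of the sample to be analyzed, then keep the spectra whose rT lies near the median. A minimal sketch follows; the library contents and the ±0.015 window (loosely mirroring the 1.0-1.5% rT spread reported above) are assumptions for illustration, not the paper's actual data or parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical spectral library: 377 spectra x 500 wavelength points
# (cumulative sums give smooth-ish curves resembling spectra).
library = rng.normal(size=(377, 500)).cumsum(axis=1)
target = rng.normal(size=500).cumsum()   # spectrum of the sample to analyze

def correlation_to_target(library, target):
    """Pearson correlation rT of each library spectrum with the target."""
    lib_c = library - library.mean(axis=1, keepdims=True)
    tgt_c = target - target.mean()
    return (lib_c @ tgt_c) / (np.linalg.norm(lib_c, axis=1) * np.linalg.norm(tgt_c))

def select_calibration_set(library, target, window=0.015):
    """Keep spectra whose rT falls within +/- window of the median rT."""
    r = correlation_to_target(library, target)
    idx = np.where(np.abs(r - np.median(r)) <= window)[0]
    return idx, r

idx, r = select_calibration_set(library, target)
print(f"selected {idx.size} of {library.shape[0]} library spectra")
```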

  6. Methicillin-Resistant Staphylococcus aureus (MRSA) Detection: Comparison of Two Molecular Methods (IDI-MRSA PCR Assay and GenoType MRSA Direct PCR Assay) with Three Selective MRSA Agars (MRSA ID, MRSASelect, and CHROMagar MRSA) for Use with Infection-Control Swabs

    PubMed Central

    van Hal, S. J.; Stark, D.; Lockwood, B.; Marriott, D.; Harkness, J.

    2007-01-01

    Methicillin-resistant Staphylococcus aureus (MRSA) is an increasing problem. Rapid detection of MRSA-colonized patients has the potential to limit spread of the organism. We evaluated the sensitivities and specificities of MRSA detection by two molecular methods (IDI-MRSA PCR assay and GenoType MRSA Direct PCR assay) and three selective MRSA agars (MRSA ID, MRSASelect, and CHROMagar MRSA), using 205 (101 nasal, 52 groin, and 52 axillary samples) samples from consecutive known MRSA-infected and/or -colonized patients. All detection methods had higher MRSA detection rates for nasal swabs than for axillary and groin swabs. Detection of MRSA by IDI-MRSA was the most sensitive method, independent of the site (94% for nasal samples, 80% for nonnasal samples, and 90% overall). The sensitivities of the GenoType MRSA Direct assay and the MRSA ID, MRSASelect, and CHROMagar MRSA agars with nasal swabs were 70%, 72%, 68%, and 75%, respectively. All detection methods had high specificities (95 to 99%), independent of the swab site. Extended incubation for a further 24 h with selective MRSA agars increased the detection of MRSA, with a corresponding decline in specificity secondary to a significant increase in false-positive results. There was a noticeable difference in test performance of the GenoType MRSA Direct assay in detection of MRSA (28/38 samples [74%]) compared with detection of nonmultiresistant MRSA (17/31 samples [55%]) (susceptible to two or more non-β-lactam antibiotics). This was not observed with selective MRSA agar plates or IDI-MRSA. Although it is more expensive, in addition to rapid turnaround times of 2 to 4 h, IDI-MRSA offers greater detection of MRSA colonization, independent of the swab site, than do conventional selective agars and GenoType MRSA Direct. PMID:17537949

  7. Application of enhanced gas chromatography/triple quadrupole mass spectrometry for monitoring petroleum weathering and forensic source fingerprinting in samples impacted by the Deepwater Horizon oil spill.

    PubMed

    Adhikari, Puspa L; Wong, Roberto L; Overton, Edward B

    2017-10-01

    Accurate characterization of petroleum hydrocarbons in complex and weathered oil residues is analytically challenging. This is primarily due to the chemical compositional complexity of both the oil residues and environmental matrices, and the lack of instrumental selectivity due to co-elution of interferences with the target analytes. To overcome these analytical selectivity issues, we used enhanced-resolution gas chromatography coupled with triple quadrupole mass spectrometry in Multiple Reaction Monitoring (MRM) mode (GC/MS/MS-MRM) to eliminate interferences within the ion chromatograms of target analytes found in environmental samples. This new GC/MS/MS-MRM method was developed and used for forensic fingerprinting of deep-water and marsh sediment samples containing oily residues from the Deepwater Horizon oil spill. The results showed that the GC/MS/MS-MRM method increases selectivity, eliminates interferences, and provides more accurate quantitation and characterization of trace levels of alkyl-PAHs and biomarker compounds from weathered oil residues in complex sample matrices. The higher selectivity of the new method, even at low detection limits, provides greater insights on isomer and homolog compositional patterns and the extent of oil weathering under various environmental conditions. The method also provides flat chromatographic baselines for accurate and unambiguous calculation of petroleum forensic biomarker compound ratios. Thus, this GC/MS/MS-MRM method can be a reliable analytical strategy for more accurate and selective trace level analyses in petroleum forensic studies, and for tracking continuous weathering of oil residues. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. A sampling design framework for monitoring secretive marshbirds

    USGS Publications Warehouse

    Johnson, D.H.; Gibbs, J.P.; Herzog, M.; Lor, S.; Niemuth, N.D.; Ribic, C.A.; Seamans, M.; Shaffer, T.L.; Shriver, W.G.; Stehman, S.V.; Thompson, W.L.

    2009-01-01

    A framework for a sampling plan for monitoring marshbird populations in the contiguous 48 states is proposed here. The sampling universe is the breeding habitat (i.e. wetlands) potentially used by marshbirds. Selection protocols would be implemented within large geographical strata, such as Bird Conservation Regions. Site selection will be done using a two-stage cluster sample. Primary sampling units (PSUs) would be land areas, such as legal townships, and would be selected by a procedure such as systematic sampling. Secondary sampling units (SSUs) will be wetlands or portions of wetlands in the PSUs. SSUs will be selected by a randomized spatially balanced procedure. For analysis, use of a variety of methods is encouraged as a means of increasing confidence in the conclusions reached. Additional effort will be required to work out details and implement the plan.
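    A minimal sketch of the two-stage design described above, with systematic sampling of PSUs (townships) and simple random sampling of SSUs (wetlands) standing in for the randomized spatially balanced procedure. The frame below is invented for illustration, not part of the proposed plan.

```python
import random

random.seed(42)

# Hypothetical frame: 200 townships (PSUs), each holding a list of wetlands (SSUs).
psus = {f"township_{i:03d}": [f"wetland_{i:03d}_{j}" for j in range(random.randint(3, 12))]
        for i in range(200)}

def systematic_sample(items, n):
    """First stage: systematic sample of n PSUs from an ordered frame."""
    step = len(items) / n
    start = random.uniform(0, step)
    return [items[int(start + k * step)] for k in range(n)]

def two_stage_sample(psus, n_psu=20, n_ssu=2):
    """Second stage: simple random sample of SSUs within each selected PSU
    (a simplified stand-in for spatially balanced selection)."""
    selected = systematic_sample(sorted(psus), n_psu)
    return {p: random.sample(psus[p], min(n_ssu, len(psus[p]))) for p in selected}

sample = two_stage_sample(psus)
print(f"{len(sample)} townships, {sum(len(v) for v in sample.values())} wetlands sampled")
```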

  9. Field Sampling and Selecting On-Site Analytical Methods for Explosives in Soil

    EPA Pesticide Factsheets

    The purpose of this issue paper is to provide guidance to Remedial Project Managers regarding field sampling and on-site analytical methods for detecting and quantifying secondary explosive compounds in soils.

  10. Fuzziness-based active learning framework to enhance hyperspectral image classification performance for discriminative and generative classifiers

    PubMed Central

    2018-01-01

    Hyperspectral image classification with a limited number of training samples without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome the said problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publicly available datasets, we showed that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with a small amount of training data, as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of our proposed method, which compares favorably with the state-of-the-art methods. PMID:29304512
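    One simple way to realize the fuzziness idea (an illustrative choice, not the paper's exact formulation) is to derive soft class memberships from distances to class centroids and score each sample by the normalized entropy of its memberships; the most ambiguous samples are then queried for the training set.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two synthetic classes of samples (stand-ins for hyperspectral pixels).
X = np.vstack([rng.normal(-1, 1, (100, 5)), rng.normal(1, 1, (100, 5))])
y = np.repeat([0, 1], 100)

def fuzziness(X, centroids):
    """Per-sample fuzziness: normalized entropy of soft class memberships
    derived from distances to class centroids."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    mu = np.exp(-d)
    mu /= mu.sum(axis=1, keepdims=True)           # soft memberships
    return -(mu * np.log(mu + 1e-12)).sum(axis=1) / np.log(mu.shape[1])

centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
f = fuzziness(X, centroids)

# Active-learning step: query the most ambiguous (highest-fuzziness) samples.
query = np.argsort(f)[-10:]
print(f"mean fuzziness overall {f.mean():.3f}, of queried samples {f[query].mean():.3f}")
```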

  11. Recovery of Sublethally Injured Bacteria Using Selective Agar Overlays.

    ERIC Educational Resources Information Center

    McKillip, John L.

    2001-01-01

    This experiment subjects bacteria in a food sample and an environmental sample to conditions of sublethal stress in order to assess the effectiveness of the agar overlay method to recover sublethally injured cells compared to direct plating onto the appropriate selective medium. (SAH)

  12. Methods for the selective detection of alkyne-presenting molecules and related compositions and systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valdez, Carlos A.; Vu, Alexander K.

    Provided herein are methods for selectively detecting an alkyne-presenting molecule in a sample and related detection reagents, compositions, methods and systems. The methods include contacting a detection reagent with the sample for a time and under a condition to allow binding of the detection reagent to the one or more alkyne-presenting molecules possibly present in the matrix. The detection reagent includes an organic label moiety presenting an azide group. The binding of the azide group to the alkyne-presenting molecules results in emission of a signal from the organic label moiety.

  13. EPA Method 538: Determination of Selected Organic Contaminants in Drinking Water by Direct Aqueous Injection-Liquid Chromatography/Tandem Mass Spectrometry (DAI-LC/MS/MS)

    EPA Pesticide Factsheets

    EPA’s Selected Analytical Methods for Environmental Remediation and Recovery (SAM) lists this method for preparation and analysis of drinking water samples to detect and measure acephate, diisopropyl methylphosphonate (DIMP), methamidophos and thiofanox.

  14. Piecewise SALT sampling for estimating suspended sediment yields

    Treesearch

    Robert B. Thomas

    1989-01-01

    A probability sampling method called SALT (Selection At List Time) has been developed for collecting and summarizing data on delivery of suspended sediment in rivers. It is based on sampling and estimating yield using a suspended-sediment rating curve for high discharges and simple random sampling for low flows. The method gives unbiased estimates of total yield and...

  15. Sampling methods for amphibians in streams in the Pacific Northwest.

    Treesearch

    R. Bruce Bury; Paul Stephen Corn

    1991-01-01

    Methods describing how to sample aquatic and semiaquatic amphibians in small streams and headwater habitats in the Pacific Northwest are presented. We developed a technique that samples 10-meter stretches of selected streams, which was adequate to detect presence or absence of amphibian species and provided sample sizes statistically sufficient to compare abundance of...

  16. Sample integrity evaluation and EPA method 325B interlaboratory comparison for select volatile organic compounds collected diffusively on Carbopack X sorbent tubes

    NASA Astrophysics Data System (ADS)

    Oliver, Karen D.; Cousett, Tamira A.; Whitaker, Donald A.; Smith, Luther A.; Mukerjee, Shaibal; Stallings, Casson; Thoma, Eben D.; Alston, Lillian; Colon, Maribel; Wu, Tai; Henkle, Stacy

    2017-08-01

    A sample integrity evaluation and an interlaboratory comparison were conducted in application of U.S. Environmental Protection Agency (EPA) Methods 325A and 325B for diffusively monitoring benzene and other selected volatile organic compounds (VOCs) using Carbopack X sorbent tubes. To evaluate sample integrity, VOC samples were refrigerated for up to 240 days and analyzed using thermal desorption/gas chromatography-mass spectrometry at the EPA Office of Research and Development laboratory in Research Triangle Park, NC, USA. For the interlaboratory comparison, three commercial analytical laboratories were asked to follow Method 325B when analyzing samples of VOCs that were collected in field and laboratory settings for EPA studies. Overall results indicate that the selected VOCs collected diffusively on sorbent tubes generally were stable for 6 months or longer when samples were refrigerated. This suggests the specified maximum 30-day storage time of VOCs collected diffusively on Carbopack X passive samplers and analyzed using Method 325B could be relaxed. Interlaboratory comparison results were in agreement for the challenge samples collected diffusively in an exposure chamber in the laboratory, with most measurements within ±25% of the theoretical concentration. Statistically significant differences among laboratories for ambient challenge samples were small, less than 1 part per billion by volume (ppbv). Results from all laboratories exhibited good precision and generally agreed well with each other.

  17. Does Self-Selection Affect Samples’ Representativeness in Online Surveys? An Investigation in Online Video Game Research

    PubMed Central

    van Singer, Mathias; Chatton, Anne; Achab, Sophia; Zullino, Daniele; Rothen, Stephane; Khan, Riaz; Billieux, Joel; Thorens, Gabriel

    2014-01-01

    Background The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Objective Our objective was to explore the representativeness of a self-selected sample of online gamers using online players’ virtual characters (avatars). Methods All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars’ characteristics were defined using various games’ scores, reported on the WoW’s official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. Results We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Conclusions Our results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of samples in online surveys is warranted. PMID:25001007

  18. Noise Spectroscopy Used in Biology

    NASA Astrophysics Data System (ADS)

    Žacik, Michal

    This thesis surveys spectroscopic measurement methods over broad frequency bands. An experimental measurement method is designed for noise spectroscopy of simple and biological samples in the frequency range of 0.1-6 GHz, using a broadband noise generator. The measurement workplace was realized, and the method was verified by measurements on selected samples. The measurements are displayed and analyzed.

  19. Developments in Sampling and Analysis Instrumentation for Stationary Sources

    ERIC Educational Resources Information Center

    Nader, John S.

    1973-01-01

    Instrumentation for the measurement of pollutant emissions is considered including sample-site selection, sample transport, sample treatment, sample analysis, and data reduction, display, and interpretation. Measurement approaches discussed involve sample extraction from within the stack and electro-optical methods. (BL)

  20. Assessment of Sample Preparation Bias in Mass Spectrometry-Based Proteomics.

    PubMed

    Klont, Frank; Bras, Linda; Wolters, Justina C; Ongay, Sara; Bischoff, Rainer; Halmos, Gyorgy B; Horvatovich, Péter

    2018-04-17

    For mass spectrometry-based proteomics, the selected sample preparation strategy is a key determinant for information that will be obtained. However, the corresponding selection is often not based on a fit-for-purpose evaluation. Here we report a comparison of in-gel (IGD), in-solution (ISD), on-filter (OFD), and on-pellet digestion (OPD) workflows on the basis of targeted (QconCAT-multiple reaction monitoring (MRM) method for mitochondrial proteins) and discovery proteomics (data-dependent acquisition, DDA) analyses using three different human head and neck tissues (i.e., nasal polyps, parotid gland, and palatine tonsils). Our study reveals differences between the sample preparation methods, for example, with respect to protein and peptide losses, quantification variability, protocol-induced methionine oxidation, and asparagine/glutamine deamidation as well as identification of cysteine-containing peptides. However, none of the methods performed best for all types of tissues, which argues against the existence of a universal sample preparation method for proteome analysis.

  1. Selection of sampling rate for digital control of aircraft

    NASA Technical Reports Server (NTRS)

    Katz, P.; Powell, J. D.

    1974-01-01

    The considerations in selecting the sample rates for digital control of aircraft are identified and evaluated using the optimal discrete method. A high performance aircraft model which includes a bending mode and wind gusts was studied. The following factors which influence the selection of the sampling rates were identified: (1) the time and roughness response to control inputs; (2) the response to external disturbances; and (3) the sensitivity to variations of parameters. It was found that the time response to a control input and the response to external disturbances limit the selection of the sampling rate. The optimal discrete regulator, the steady state Kalman filter, and the mean response to external disturbances are calculated.

  2. Roka Listeria detection method using transcription mediated amplification to detect Listeria species in select foods and surfaces. Performance Tested Method(SM) 011201.

    PubMed

    Hua, Yang; Kaplan, Shannon; Reshatoff, Michael; Hu, Ernie; Zukowski, Alexis; Schweis, Franz; Gin, Cristal; Maroni, Brett; Becker, Michael; Wisniewski, Michele

    2012-01-01

    The Roka Listeria Detection Assay was compared to the reference culture methods for nine select foods and three select surfaces. The Roka method used Half-Fraser Broth for enrichment at 35 +/- 2 degrees C for 24-28 h. Comparison of Roka's method to reference methods requires an unpaired approach. Each method had a total of 545 samples inoculated with a Listeria strain. Each food and surface was inoculated with a different strain of Listeria at two different levels per method. For the dairy products (Brie cheese, whole milk, and ice cream), our method was compared to AOAC Official Method(SM) 993.12. For the ready-to-eat meats (deli chicken, cured ham, chicken salad, and hot dogs) and environmental surfaces (sealed concrete, stainless steel, and plastic), these samples were compared to the U.S. Department of Agriculture/Food Safety and Inspection Service-Microbiology Laboratory Guidebook (USDA/FSIS-MLG) method MLG 8.07. Cold-smoked salmon and romaine lettuce were compared to the U.S. Food and Drug Administration/Bacteriological Analytical Manual, Chapter 10 (FDA/BAM) method. Roka's method had 358 positives out of 545 total inoculated samples compared to 332 positives for the reference methods. Overall, probability-of-detection analysis of the results showed better or equivalent performance compared to the reference methods.

  3. Magnetically separable polymer (Mag-MIP) for selective analysis of biotin in food samples.

    PubMed

    Uzuriaga-Sánchez, Rosario Josefina; Khan, Sabir; Wong, Ademar; Picasso, Gino; Pividori, Maria Isabel; Sotomayor, Maria Del Pilar Taboada

    2016-01-01

    This work presents an efficient method for the preparation of magnetic nanoparticles modified with molecularly imprinted polymers (Mag-MIP) through the core-shell method for the determination of biotin in milk food samples. The functional monomer acrylic acid was selected from molecular modeling, EGDMA was used as cross-linking monomer and AIBN as radical initiator. The Mag-MIP and Mag-NIP were characterized by FTIR, magnetic hysteresis, XRD, SEM and N2-sorption measurements. The capacity of Mag-MIP for biotin adsorption, its kinetics and selectivity were studied in detail. The adsorption data were well described by the Freundlich isotherm model with an adsorption equilibrium constant (KF) of 1.46 mL g(-1). The selectivity experiments revealed that the prepared Mag-MIP had higher selectivity toward biotin compared to other molecules with different chemical structure. The material was successfully applied for the determination of biotin in diverse milk samples using HPLC for quantification of the analyte, obtaining a mean recovery of 87.4%. Copyright © 2015 Elsevier Ltd. All rights reserved.
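    A Freundlich fit of the kind reported above (q = KF · C^(1/n)) is commonly obtained from a linearized least-squares fit of log q on log C. The sketch below uses the abstract's KF = 1.46 mL g(-1); the concentration points, the exponent n, and the noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

C = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])   # equilibrium concentrations (assumed)
q = 1.46 * C ** (1 / 2.2)                        # KF from the abstract; n = 2.2 assumed
q = q * (1 + rng.normal(0, 0.02, C.size))        # 2% simulated measurement noise

def fit_freundlich(C, q):
    """Linearized Freundlich fit: log q = log KF + (1/n) log C."""
    slope, intercept = np.polyfit(np.log(C), np.log(q), 1)
    return np.exp(intercept), 1.0 / slope        # KF, n

KF, n = fit_freundlich(C, q)
print(f"KF = {KF:.2f}, n = {n:.2f}")
```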

  4. Comparison of green sample preparation techniques in the analysis of pyrethrins and pyrethroids in baby food by liquid chromatography-tandem mass spectrometry.

    PubMed

    Petrarca, Mateus Henrique; Ccanccapa-Cartagena, Alexander; Masiá, Ana; Godoy, Helena Teixeira; Picó, Yolanda

    2017-05-12

    A new selective and sensitive liquid chromatography triple quadrupole mass spectrometry method was developed for simultaneous analysis of natural pyrethrins and synthetic pyrethroids residues in baby food. In this study, two sample preparation methods based on ultrasound-assisted dispersive liquid-liquid microextraction (UA-DLLME) and salting-out assisted liquid-liquid extraction (SALLE) were optimized, and then, compared regarding the performance criteria. Appropriate linearity in solvent and matrix-based calibrations, and suitable recoveries (75-120%) and precision (RSD values≤16%) were achieved for selected analytes by any of the sample preparation procedures. Both methods provided the analytical selectivity required for the monitoring of the insecticides in fruit-, cereal- and milk-based baby foods. SALLE, recognized by cost-effectiveness, and simple and fast execution, provided a lower enrichment factor, consequently, higher limits of quantification (LOQs) were obtained. Some of them too high to meet the strict legislation regarding baby food. Nonetheless, the combination of ultrasound and DLLME also resulted in a high sample throughput and environmental-friendly method, whose LOQs were lower than the default maximum residue limit (MRL) of 10μgkg -1 set by European Community for baby foods. In the commercial baby foods analyzed, cyhalothrin and etofenprox were detected in different samples, demonstrating the suitability of proposed method for baby food control. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Molecularly imprinted solid-phase extraction for selective extraction of bisphenol analogues in beverages and canned food.

    PubMed

    Yang, Yunjia; Yu, Jianlong; Yin, Jie; Shao, Bing; Zhang, Jing

    2014-11-19

    This study aimed to develop a selective analytical method for the simultaneous determination of seven bisphenol analogues in beverage and canned food samples by using a new molecularly imprinted polymer (MIP) as a sorbent for solid-phase extraction (SPE). Liquid chromatography coupled to triple-quadrupole tandem mass spectrometry (LC-MS/MS) was used to identify and quantify the target analytes. The MIP-SPE method exhibited a higher level of selectivity and purification than the traditional SPE method. The developed procedures were further validated in terms of accuracy, precision, and sensitivity. The obtained recoveries varied from 50% to 103% at three fortification levels and yielded a relative standard deviation (RSD, %) of less than 15% for all of the analytes. The limits of quantification (LOQ) for the seven analytes varied from 0.002 to 0.15 ng/mL for beverage samples and from 0.03 to 1.5 ng/g for canned food samples. This method was used to analyze real samples that were collected from a supermarket in Beijing. Overall, the results revealed that bisphenol A and bisphenol F were the most frequently detected bisphenols in the beverage and canned food samples and that their concentrations were closely associated with the type of packaging material. This study provides an alternative to traditional SPE extraction for screening bisphenol analogues in food matrices.

  6. A novel one-class SVM based negative data sampling method for reconstructing proteome-wide HTLV-human protein interaction networks.

    PubMed

    Mei, Suyu; Zhu, Hao

    2015-01-26

    Protein-protein interaction (PPI) prediction is generally treated as a problem of binary classification wherein negative data sampling is still an open problem to be addressed. The commonly used random sampling is prone to yield less representative negative data with considerable false negatives. Meanwhile rational constraints are seldom exerted on model selection to reduce the risk of false positive predictions for most of the existing computational methods. In this work, we propose a novel negative data sampling method based on one-class SVM (support vector machine, SVM) to predict proteome-wide protein interactions between HTLV retrovirus and Homo sapiens, wherein one-class SVM is used to choose reliable and representative negative data, and two-class SVM is used to yield proteome-wide outcomes as predictive feedback for rational model selection. Computational results suggest that one-class SVM is more suited to be used as a negative data sampling method than a two-class PPI predictor, and the predictive feedback constrained model selection helps to yield a rational predictive model that reduces the risk of false positive predictions. Some predictions have been validated by the recent literature. Lastly, gene ontology based clustering of the predicted PPI networks is conducted to provide valuable cues for the pathogenesis of HTLV retrovirus.
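    The negative-sampling idea can be sketched with scikit-learn's OneClassSVM: train it on known positive pairs, then treat the unlabeled pairs it rejects as reliable negatives for the downstream two-class model. The feature vectors below are synthetic placeholders, not actual protein-pair features from the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(5)

# Feature vectors for known interacting pairs (positives) and a large pool
# of unlabeled pairs from which negatives must be sampled.
positives = rng.normal(0, 1, (200, 10))
unlabeled = np.vstack([rng.normal(0, 1, (100, 10)),    # hidden positives
                       rng.normal(4, 1, (300, 10))])   # likely true negatives

# One-class SVM learns the support of the positive class; unlabeled pairs it
# rejects (-1) become negatives, reducing false negatives vs. random sampling.
ocsvm = OneClassSVM(nu=0.1, gamma="scale").fit(positives)
negatives = unlabeled[ocsvm.predict(unlabeled) == -1]
print(f"selected {len(negatives)} reliable negatives from {len(unlabeled)} unlabeled pairs")
```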

  7. A two-stage cluster sampling method using gridded population data, a GIS, and Google Earth(TM) imagery in a population-based mortality survey in Iraq.

    PubMed

    Galway, Lp; Bell, Nathaniel; Sae, Al Shatari; Hagopian, Amy; Burnham, Gilbert; Flaxman, Abraham; Weiss, Wiliam M; Rajaratnam, Julie; Takaro, Tim K

    2012-04-27

    Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing a challenge of estimating mortality using retrospective population-based surveys. We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage and Google Earth TM imagery and sampling grids to select households in the second sampling stage. The sampling method is implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Sampling is a challenge in retrospective population-based mortality studies and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context specific challenges of the study setting. This sampling strategy, or variations on it, are adaptable and should be considered and tested in other conflict settings.

  8. A two-stage cluster sampling method using gridded population data, a GIS, and Google EarthTM imagery in a population-based mortality survey in Iraq

    PubMed Central

    2012-01-01

    Background Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing a challenge of estimating mortality using retrospective population-based surveys. Results We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage and Google Earth TM imagery and sampling grids to select households in the second sampling stage. The sampling method is implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Conclusion Sampling is a challenge in retrospective population-based mortality studies and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context specific challenges of the study setting. This sampling strategy, or variations on it, are adaptable and should be considered and tested in other conflict settings. PMID:22540266

  9. Systematic evaluation of matrix effects in hydrophilic interaction chromatography versus reversed phase liquid chromatography coupled to mass spectrometry.

    PubMed

    Periat, Aurélie; Kohler, Isabelle; Thomas, Aurélien; Nicoli, Raul; Boccard, Julien; Veuthey, Jean-Luc; Schappler, Julie; Guillarme, Davy

    2016-03-25

    Reversed phase liquid chromatography (RPLC) coupled to mass spectrometry (MS) is the gold standard technique in bioanalysis. However, hydrophilic interaction chromatography (HILIC) could represent a viable alternative to RPLC for the analysis of polar and/or ionizable compounds, as it often provides higher MS sensitivity and alternative selectivity. Nevertheless, this technique can also be prone to matrix effects (ME). ME are one of the major issues in quantitative LC-MS bioanalysis. To ensure acceptable method performance (i.e., trueness and precision), a careful evaluation and minimization of ME is required. In the present study, the incidence of ME in HILIC-MS/MS and RPLC-MS/MS was compared for plasma and urine samples using two representative sets of 38 pharmaceutical compounds and 40 doping agents, respectively. The optimal generic chromatographic conditions in terms of selectivity with respect to interfering compounds were established in both chromatographic modes by testing three different stationary phases in each mode at different mobile phase pH values. A second step involved the assessment of ME in RPLC and HILIC under the best generic conditions, using the post-extraction addition method. Biological samples were prepared using two different pre-treatments: a non-selective sample clean-up procedure (protein precipitation and simple dilution for plasma and urine samples, respectively) and a selective sample preparation, i.e., solid phase extraction for both matrices. The non-selective pretreatments led to significantly less ME in RPLC than in HILIC, regardless of the matrix. In contrast, HILIC appeared to be a valuable alternative to RPLC for plasma and urine samples treated by a selective sample preparation. Indeed, with selective sample preparation, the compounds affected by ME differed between HILIC and RPLC, and ME occurrence in RPLC was generally lower (urine) or similar (plasma) compared with HILIC. The complementarity of the two chromatographic modes was also demonstrated, as ME was only rarely observed for urine and plasma samples when the more appropriate chromatographic mode was selected. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Advancing Explosives Detection Capabilities: Vapor Detection

    ScienceCinema

    Atkinson, David

    2018-05-11

    A new, PNNL-developed method provides direct, real-time detection of trace amounts of explosives such as RDX, PETN and C-4. The method selectively ionizes a sample before passing the sample through a mass spectrometer to detect explosive vapors. The method could be used at airports to improve aviation security.

  11. A COMPARISON OF BENTHIC MACROINVERTEBRATE SAMPLING METHODS ON SELECTED LARGE RIVER TRIBUTARIES TO THE MISSISSIPPI

    EPA Science Inventory

    We compared three benthic macroinvertebrate sampling methods on the St. Croix, Wisconsin and Scioto Rivers in summer 2004 and 2005. EPA's newly developed, multi-habitat Large River Bioassessment Protocol (LR-BP) was compared to the multi-habitat method of the Minnesota Pollution...

  12. Advancing Explosives Detection Capabilities: Vapor Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atkinson, David

    2012-10-15

    A new, PNNL-developed method provides direct, real-time detection of trace amounts of explosives such as RDX, PETN and C-4. The method selectively ionizes a sample before passing the sample through a mass spectrometer to detect explosive vapors. The method could be used at airports to improve aviation security.

  13. Manifold Regularized Experimental Design for Active Learning.

    PubMed

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to reduce the labeling effort of the user. Many previous studies in active learning select samples one after another in a greedy manner. However, this is not very effective because the classification model has to be retrained after each newly labeled sample. Moreover, many popular active learning approaches select the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate because the classification hyperplane is inaccurate when the training set is small. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel active learning method called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation of the samples selected to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
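
    The general idea of batch-mode selection, i.e., choosing several mutually informative samples at once rather than one at a time, can be illustrated with a simple greedy farthest-point (k-center) heuristic. This is only a generic stand-in for batch selection by coverage/diversity; it is not the MRED objective, and the data points are invented:

```python
import math

def batch_select(X, k):
    """Greedy farthest-point selection: pick k mutually distant samples
    as one labeling batch, so near-duplicates are not labeled twice."""
    # seed with the sample closest to the data mean (a "representative" start)
    mean = [sum(col) / len(X) for col in zip(*X)]
    chosen = [min(range(len(X)), key=lambda i: math.dist(X[i], mean))]
    while len(chosen) < k:
        # next pick: the sample farthest from everything chosen so far
        nxt = max(range(len(X)),
                  key=lambda i: min(math.dist(X[i], X[j]) for j in chosen))
        chosen.append(nxt)
    return chosen

# Two near-duplicate pairs plus one central point: the batch should
# cover distinct regions instead of labeling both members of a pair.
X = [(0, 0), (0, 1), (10, 10), (10, 11), (5, 5)]
batch = batch_select(X, 3)
print(batch)
```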

  14. Collective feature selection to identify crucial epistatic variants.

    PubMed

    Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D

    2018-01-01

    Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex diseases/traits. Detection of epistatic interactions remains a challenge due to the large number of features and relatively small sample size as input, leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features, so it is very important to perform variable selection before searching for epistasis. Many methods have been evaluated and proposed for feature selection, but no single method works best in all scenarios. We therefore propose a collective feature selection approach that selects the "union" of the features chosen by the best-performing methods, and we evaluate it in two separate simulation analyses. We explored various parametric, non-parametric, and data mining approaches to feature selection, chose the top-performing methods, and took the union of the resulting variables, based on a user-defined percentage of variants selected from each method, forward to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criterion for high effect size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and gradient boosting, work best under other simulation criteria. Thus, a collective approach proves more beneficial for selecting variables with epistatic effects, even in low-effect-size datasets and across different genetic architectures. 
Following this, we applied our proposed collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with Body Mass Index (BMI) in ~44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of the DiscovEHR collaboration). Via simulation studies, we showed that collective feature selection recovers true-positive epistatic variables more frequently than any single feature selection method. We demonstrated the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis, and applied our method to identify non-linear networks associated with obesity.
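
    The core of the collective approach, i.e., taking the union of the top-ranked features from several selection methods, can be sketched in a few lines. The method names and importance scores below are invented for illustration:

```python
# Toy importance scores from three hypothetical feature-selection methods
# (higher score = more important).
scores_by_method = {
    "mdr":      {"snp1": 0.9, "snp2": 0.1, "snp3": 0.8, "snp4": 0.2, "snp5": 0.3},
    "ranger":   {"snp1": 0.7, "snp2": 0.6, "snp3": 0.2, "snp4": 0.1, "snp5": 0.3},
    "boosting": {"snp1": 0.2, "snp2": 0.9, "snp3": 0.1, "snp4": 0.8, "snp5": 0.3},
}

def collective_select(scores_by_method, top_fraction):
    """Union of the top `top_fraction` of features from each method."""
    selected = set()
    for scores in scores_by_method.values():
        k = max(1, round(len(scores) * top_fraction))
        ranked = sorted(scores, key=scores.get, reverse=True)
        selected.update(ranked[:k])
    return selected

features = collective_select(scores_by_method, top_fraction=0.4)
print(sorted(features))
```

    Features ranked highly by any one method survive, which is the point: methods that excel under different genetic architectures each contribute their strongest candidates.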

  15. Isolation and identification of Salmonella spp. in environmental water by molecular technology in Taiwan

    NASA Astrophysics Data System (ADS)

    Kuo, Chun Wei; Hao Huang, Kuan; Hsu, Bing Mu; Tsai, Hsien Lung; Tseng, Shao Feng; Shen, Tsung Yu; Kao, Po Min; Shen, Shu Min; Chen, Jung Sheng

    2013-04-01

    Salmonella spp. is one of the most important causal agents of waterborne diseases. The taxonomy of Salmonella is very complicated, and the genus comprises more than 2,500 serotypes. Detection of Salmonella in environmental water samples by routine culture methods, using selective media and characterization of suspect colonies by biochemical tests and serological assays, is generally time consuming. To overcome this drawback, an effective method offering higher discrimination and more rapid identification of Salmonella in environmental water is desirable. The aim of this study was to investigate the occurrence of Salmonella using molecular techniques and to identify the serovars of Salmonella isolates from 70 environmental water samples in Taiwan. The analytical procedure included membrane filtration, non-selective pre-enrichment, and selective enrichment of Salmonella. We then isolated Salmonella strains on selective culture plates. Both the selective enrichments and the culture plates were tested by polymerase chain reaction (PCR). Finally, the serovars of Salmonella were confirmed by biochemical tests and serological assay. In this study, 15 water samples (21.4%) were identified as Salmonella-positive by PCR; the serotypes of the positive water samples will be further identified by culture methods. The presence of Salmonella in environmental water indicates the possibility of waterborne transmission in drinking watersheds. Consequently, the authorities need to provide sufficient source protection and to maintain the system for disease prevention. Keywords: Salmonella spp., serological assay, PCR

  16. Reduction in training time of a deep learning model in detection of lesions in CT

    NASA Astrophysics Data System (ADS)

    Makkinejad, Nazanin; Tajbakhsh, Nima; Zarshenas, Amin; Khokhar, Ashfaq; Suzuki, Kenji

    2018-02-01

    Deep learning (DL) has emerged as a powerful tool for object detection and classification in medical images. Building a well-performing DL model, however, requires a huge number of training images, and it takes days to train a DL model even on a cutting-edge high-performance computing platform. This study aimed to develop a method for selecting a "small" number of representative samples from a large collection of training samples to train a DL model for detecting polyps in CT colonography (CTC), without compromising classification performance. Our proposed method for representative sample selection (RSS) consists of a K-means clustering algorithm. For the performance evaluation, we applied the proposed method to select samples for training a massive-training artificial neural network based DL model for the classification of polyps and non-polyps in CTC. Our results show that the proposed method reduces the training time by a factor of 15 while maintaining classification performance equivalent to that of the model trained on the full training set. Performance was compared using the area under the receiver-operating-characteristic curve (AUC).
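
    The clustering-based subset selection described above can be sketched with a minimal K-means routine that returns the sample nearest each centroid as the representative training subset. This mirrors the general idea (cluster, then keep one exemplar per cluster), not the paper's exact procedure, and the toy 2-D data stand in for image feature vectors:

```python
import math, random

def kmeans_representatives(X, k, iters=20, seed=0):
    """Cluster X with a tiny k-means; return the index of the sample
    nearest each centroid as a 'small' representative training subset."""
    rng = random.Random(seed)
    centroids = [list(x) for x in rng.sample(X, k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in X:                      # assign each sample to nearest centroid
            j = min(range(k), key=lambda c: math.dist(x, centroids[c]))
            groups[j].append(x)
        for j, g in enumerate(groups):   # move centroids to group means
            if g:
                centroids[j] = [sum(col) / len(g) for col in zip(*g)]
    reps = [min(range(len(X)), key=lambda i: math.dist(X[i], c))
            for c in centroids]
    return sorted(set(reps))

# Two well-separated blobs; one representative per blob suffices.
X = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
reps = kmeans_representatives(X, k=2)
print(reps)
```

    Training on the returned exemplars instead of all samples is what yields the reported speed-up; the open design choice is k, the size of the reduced training set.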

  17. Comparison of estimates of hardwood bole volume using importance sampling, the centroid method, and some taper equations

    Treesearch

    Harry V., Jr. Wiant; Michael L. Spangler; John E. Baumgras

    2002-01-01

    Various taper systems and the centroid method were compared to unbiased volume estimates made by importance sampling for 720 hardwood trees selected throughout the state of West Virginia. Only the centroid method consistently gave volume estimates that did not differ significantly from those made by importance sampling, although some taper equations did well for most...
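
    Importance sampling estimates bole volume as the integral of cross-sectional area over height, drawing measurement heights from a proxy taper density and weighting by area/density. The sketch below uses an idealized conic stem as the "true" taper (an assumption for illustration; real stems are measured): because the proxy density here is exactly proportional to the conic integrand, every draw returns the true volume, which is the zero-variance limit. In practice real taper deviates from the proxy, and that deviation is the source of sampling variance:

```python
import math, random

def cone_area(h, H=20.0, d_base=0.4):
    """Cross-sectional area (m^2) at height h of an idealized conic stem:
    diameter shrinks linearly from d_base at the ground to 0 at the tip."""
    d = d_base * (1 - h / H)
    return math.pi * (d / 2) ** 2

def importance_sample_volume(area_fn, H=20.0, n=2000, seed=1):
    """Unbiased volume estimate: draw heights from proxy density
    p(h) = 3(1 - h/H)^2 / H and weight each area by 1/p."""
    rng = random.Random(seed)
    est = 0.0
    for _ in range(n):
        u = rng.random()
        h = H * (1 - (1 - u) ** (1 / 3))   # inverse-CDF sample of p(h)
        p = 3 * (1 - h / H) ** 2 / H
        est += area_fn(h, H) / p
    return est / n

true_vol = math.pi * (0.4 / 2) ** 2 * 20.0 / 3   # cone volume = A_base * H / 3
est = importance_sample_volume(cone_area)
print(true_vol, est)
```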

  18. Selecting a sampling method to aid in vegetation management decisions in loblolly pine plantations

    Treesearch

    David R. Weise; Glenn R. Glover

    1993-01-01

    Objective methods to evaluate hardwood competition in young loblolly pine (Pinustaeda L.) plantations are not widely used in the southeastern United States. Ability of common sampling rules to accurately estimate hardwood rootstock attributes at low sampling intensities and across varying rootstock spatial distributions is unknown. Fixed area plot...

  19. A sampling-based method for ranking protein structural models by integrating multiple scores and features.

    PubMed

    Shi, Xiaohu; Zhang, Jingfen; He, Zhiquan; Shang, Yi; Xu, Dong

    2011-09-01

    One of the major challenges in protein tertiary structure prediction is structure quality assessment. In many cases, protein structure prediction tools generate good structural models, but fail to select the best models from a huge number of candidates as the final output. In this study, we developed a sampling-based machine-learning method to rank protein structural models by integrating multiple scores and features. First, features such as predicted secondary structure, solvent accessibility and residue-residue contact information are integrated by two Radial Basis Function (RBF) models trained from different datasets. Then, the two RBF scores and five selected scoring functions developed by others, i.e., Opus-CA, Opus-PSP, DFIRE, RAPDF, and Cheng Score, are synthesized by a sampling method. Finally, another integrated RBF model ranks the structural models according to the features of the sampling distribution. We tested the proposed method on two different datasets, including the CASP server prediction models of all CASP8 targets and a set of models generated by our in-house software MUFOLD. The test results show that our method outperforms any individual scoring function in both best-model selection and overall correlation between the predicted and actual rankings of structural quality.

  20. Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach

    PubMed Central

    Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio

    2015-01-01

    This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447–2001). The degree of overlapping between "reference founding core" distributions and the distributions obtained from sampling the present-day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8–30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics. PMID:26452043

  1. Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach.

    PubMed

    Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio

    2015-01-01

    This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447-2001). The degree of overlapping between "reference founding core" distributions and the distributions obtained from sampling the present-day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8-30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics.

  2. Sampling considerations for disease surveillance in wildlife populations

    USGS Publications Warehouse

    Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.

    2008-01-01

    Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.
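
    The central point, that convenience samples can be badly biased when disease is spatially clustered while probability samples are not, can be illustrated with a small simulation. All numbers below (zone count, prevalences, sample sizes) are invented, and this is a toy version of the landscape-based simulation idea, not the authors' model:

```python
import random

rng = random.Random(7)

# Simulated landscape: 10 zones; prevalence is 30% in the two zones nearest
# an infection focus and 2% elsewhere.
zones = list(range(10))
prevalence = {z: 0.30 if z < 2 else 0.02 for z in zones}
animals = [(z, rng.random() < prevalence[z]) for z in zones for _ in range(1000)]

true_prev = sum(sick for _, sick in animals) / len(animals)

# Probability sample: simple random sample of animals from the whole population.
prob_sample = rng.sample(animals, 500)
prob_est = sum(sick for _, sick in prob_sample) / len(prob_sample)

# Convenience sample: only animals from the two most accessible zones,
# which here happen to coincide with the infection focus.
convenient = [a for a in animals if a[0] < 2]
conv_sample = rng.sample(convenient, 500)
conv_est = sum(sick for _, sick in conv_sample) / len(conv_sample)

print(true_prev, prob_est, conv_est)
```

    The probability-sample estimate lands near the true prevalence (~7.6% here), while the convenience-sample estimate sits near the focal-zone prevalence (~30%), overstating the population figure several-fold.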

  3. Impacts of sampling design and estimation methods on nutrient leaching of intensively monitored forest plots in the Netherlands.

    PubMed

    de Vries, W; Wieggers, H J J; Brus, D J

    2010-08-05

    Element fluxes through forest ecosystems are generally based on measurements of concentrations in soil solution at regular time intervals at plot locations sampled in a regular grid. Here we present spatially averaged annual element leaching fluxes in three Dutch forest monitoring plots using a new sampling strategy in which both sampling locations and sampling times are selected by probability sampling. Locations were selected by stratified random sampling with compact geographical blocks of equal surface area as strata. In each sampling round, six composite soil solution samples were collected, consisting of five aliquots, one per stratum. The plot-mean concentration was estimated by linear regression, so that the bias due to one or more strata being not represented in the composite samples is eliminated. The sampling times were selected in such a way that the cumulative precipitation surplus of the time interval between two consecutive sampling times was constant, using an estimated precipitation surplus averaged over the past 30 years. The spatially averaged annual leaching flux was estimated by using the modeled daily water flux as an ancillary variable. An important advantage of the new method is that the uncertainty in the estimated annual leaching fluxes due to spatial and temporal variation and resulting sampling errors can be quantified. Results of this new method were compared with the reference approach in which daily leaching fluxes were calculated by multiplying daily interpolated element concentrations by daily water fluxes and then aggregating to a year. Results show that the annual fluxes calculated with the reference method for the period 2003-2005, across all plots, elements, and depths, fall within ±2 standard errors of the new method's average in only 53% of cases. 
Despite the differences in results, both methods indicate comparable N retention and strong Al mobilization in all plots, with Al leaching being nearly equal to the leaching of SO(4) and NO(3) with fluxes expressed in mol(c) ha(-1) yr(-1). This illustrates that Al release, which is the clearest signal of soil acidification, is mainly due to the external input of SO(4) and NO(3).

  4. Selective Distance-Based K+ Quantification on Paper-Based Microfluidics.

    PubMed

    Gerold, Chase T; Bakker, Eric; Henry, Charles S

    2018-04-03

    In this study, paper-based microfluidic devices (μPADs) capable of K+ quantification in aqueous samples, as well as in human serum, using both colorimetric and distance-based methods are described. A lipophilic phase containing potassium ionophore I (valinomycin) was utilized to achieve highly selective quantification of K+ in the presence of Na+, Li+, and Mg2+ ions. Successful addition of a suspended lipophilic phase to a wax-printed paper-based device is described and offers a solution to current approaches that rely on organic solvents, which damage wax barriers. The approach provides an avenue for future alkali/alkaline quantification utilizing μPADs. Colorimetric spot tests allowed for K+ quantification from 0.1-5.0 mM using only 3.00 μL of sample solution. Selective distance-based quantification required small sample volumes (6.00 μL) and gave responses sensitive enough to distinguish between 1.0 and 2.5 mM sample K+. μPADs using distance-based methods were also capable of differentiating between 4.3 and 6.9 mM K+ in human serum samples. Distance-based methods required no digital analysis, electronic hardware, or pumps; any steps required for quantification could be carried out using the naked eye.

  5. Vapor permeation-stepwise injection simultaneous determination of methanol and ethanol in biodiesel with voltammetric detection.

    PubMed

    Shishov, Andrey; Penkova, Anastasia; Zabrodin, Andrey; Nikolaev, Konstantin; Dmitrenko, Maria; Ermakov, Sergey; Bulatov, Andrey

    2016-02-01

    A novel vapor permeation-stepwise injection (VP-SWI) method for the determination of methanol and ethanol in biodiesel samples is discussed. In the current study, stepwise injection analysis was successfully combined with voltammetric detection and vapor permeation. This method is based on the separation of methanol and ethanol from a sample using a vapor permeation module (VPM) with a selective polymer membrane based on poly(phenylene isophtalamide) (PA) containing high amounts of a residual solvent. After the evaporation into the headspace of the VPM, methanol and ethanol were transported, by gas bubbling, through a PA membrane to a mixing chamber equipped with a voltammetric detector. Ethanol was selectively detected at +0.19 V, and both compounds were detected at +1.20 V. Current subtractions (using a correction factor) were used for the selective determination of methanol. A linear range between 0.05 and 0.5% (m/m) was established for each analyte. The limits of detection were estimated at 0.02% (m/m) for ethanol and methanol. The sample throughput was 5 samples h(-1). The method was successfully applied to the analysis of biodiesel samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Analysis of Environmental Contamination resulting from ...

    EPA Pesticide Factsheets

    Catastrophic incidents can generate a large number of samples with analytically diverse types including forensic, clinical, environmental, food, and others. Environmental samples include water, wastewater, soil, air, urban building and infrastructure materials, and surface residue. Such samples may arise not only from contamination from the incident but also from the multitude of activities surrounding the response to the incident, including decontamination. This document summarizes a range of activities to help build laboratory capability in preparation for analysis following a catastrophic incident, including selection and development of fit-for-purpose analytical methods for chemical, biological, and radiological contaminants. Fit-for-purpose methods are those which have been selected to meet project specific data quality objectives. For example, methods could be fit for screening contamination in the early phases of investigation of contamination incidents because they are rapid and easily implemented, but those same methods may not be fit for the purpose of remediating the environment to safe levels when a more sensitive method is required. While the exact data quality objectives defining fitness-for-purpose can vary with each incident, a governing principle of the method selection and development process for environmental remediation and recovery is based on achieving high throughput while maintaining high quality analytical results. This paper illu

  7. Comparison of Control Group Generating Methods.

    PubMed

    Szekér, Szabolcs; Fogarassy, György; Vathy-Fogarassy, Ágnes

    2017-01-01

    Retrospective studies suffer from drawbacks such as selection bias. As the selection of the control group has a significant impact on the evaluation of the results, it is very important to find the proper method to generate the most appropriate control group. In this paper we suggest two nearest-neighbors-based control group selection methods that aim to achieve good matching between the individuals of the case and control groups. The effectiveness of the proposed methods is evaluated by runtime and accuracy tests, and the results are compared to the classical stratified sampling method.
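
    The nearest-neighbor idea behind such control selection can be sketched as greedy 1:1 matching without replacement: each case is paired with the closest unused candidate control in covariate space. This is a generic sketch, not the authors' exact algorithm, and the covariates and records are invented:

```python
import math

def nn_match_controls(cases, candidates, covariates=("age", "bmi")):
    """Greedy 1:1 nearest-neighbor matching without replacement in
    Euclidean covariate space."""
    remaining = list(candidates)
    matched = []
    for case in cases:
        best = min(remaining, key=lambda c: math.dist(
            [case[v] for v in covariates], [c[v] for v in covariates]))
        remaining.remove(best)   # without replacement: a control is used once
        matched.append(best)
    return matched

cases = [{"id": 1, "age": 60, "bmi": 30}, {"id": 2, "age": 40, "bmi": 22}]
pool = [{"id": 10, "age": 61, "bmi": 29}, {"id": 11, "age": 41, "bmi": 23},
        {"id": 12, "age": 70, "bmi": 35}]
controls = nn_match_controls(cases, pool)
print([c["id"] for c in controls])
```

    In practice covariates would be standardized (or a Mahalanobis/propensity distance used) so that no single variable dominates the match.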

  8. Detection of Salmonella spp. in veterinary samples by combining selective enrichment and real-time PCR.

    PubMed

    Goodman, Laura B; McDonough, Patrick L; Anderson, Renee R; Franklin-Guild, Rebecca J; Ryan, James R; Perkins, Gillian A; Thachil, Anil J; Glaser, Amy L; Thompson, Belinda S

    2017-11-01

    Rapid screening for enteric bacterial pathogens in clinical environments is essential for biosecurity. Salmonella found in veterinary hospitals, particularly Salmonella enterica serovar Dublin, can pose unique challenges for culture and testing because of its poor growth. Multiple Salmonella serovars including Dublin are emerging threats to public health given increasing prevalence and antimicrobial resistance. We adapted an automated food testing method to veterinary samples and evaluated the performance of the method in a variety of matrices including environmental samples ( n = 81), tissues ( n = 52), feces ( n = 148), and feed ( n = 29). A commercial kit was chosen as the basis for this approach in view of extensive performance characterizations published by multiple independent organizations. A workflow was established for efficiently and accurately testing veterinary matrices and environmental samples by use of real-time PCR after selective enrichment in Rappaport-Vassiliadis soya (RVS) medium. Using this method, the detection limit for S. Dublin improved by 100-fold over subculture on selective agars (eosin-methylene blue, brilliant green, and xylose-lysine-deoxycholate). Overall, the procedure was effective in detecting Salmonella spp. and provided next-day results.

  9. Paleogene stratigraphy of the Solomons Island, Maryland corehole

    USGS Publications Warehouse

    Gibson, Thomas G.; Bybell, Laurel M.

    1994-01-01

    Purge and trap capillary gas chromatography/mass spectrometry is a rapid, precise, accurate method for determining volatile organic compounds in samples of surface water and ground water. The method can be used to determine 59 selected compounds, including chlorofluorohydrocarbons, aromatic hydrocarbons, and halogenated hydrocarbons. The volatile organic compounds are removed from the sample matrix by actively purging the sample with helium. The volatile organic compounds are collected onto a sorbant trap, thermally desorbed, separated by a Megabore gas chromatographic capillary column, ionized by electron impact, and determined by a full-scan quadrupole mass spectrometer. Compound identification is confirmed by the gas chromatographic retention time and by the resultant mass spectrum. Unknown compounds detected in a sample can be tentatively identified by comparing the unknown mass spectrum to reference spectra in the mass-spectra computer-data system library compiled by the National Institute of Standards and Technology. Method detection limits for the selected compounds range from 0.05 to 0.2 microgram per liter. Recoveries for the majority of the selected compounds ranged from 80 to 120 percent, with relative standard deviations of less than 10 percent.

  10. An improved survivability prognosis of breast cancer by using sampling and feature selection technique to solve imbalanced patient classification data

    PubMed Central

    2013-01-01

    Background Breast cancer is one of the most critical cancers and is a major cause of cancer death among women. It is essential to know the survivability of patients in order to ease the decision making process regarding medical treatment and financial preparation. Recent breast cancer data sets are imbalanced (i.e., survival patients outnumber non-survival patients), whereas standard classifiers are not applicable to imbalanced data sets. Methods to improve the survivability prognosis of breast cancer therefore require study. Methods Two well-known five-year prognosis models/classifiers [i.e., logistic regression (LR) and decision tree (DT)] are constructed by combining the synthetic minority over-sampling technique (SMOTE), the cost-sensitive classifier technique (CSC), under-sampling, bagging, and boosting. The feature selection method is used to select relevant variables, while the pruning technique is applied to obtain low information-burden models. These methods are applied to data obtained from the Surveillance, Epidemiology, and End Results database. The improvements in survivability prognosis of breast cancer are investigated based on the experimental results. Results Experimental results confirm that the DT and LR models combined with SMOTE, CSC, and under-sampling consistently generate higher predictive performance than the original ones. Most of the time, DT and LR models combined with SMOTE and CSC use fewer informative features when a feature selection method and a pruning technique are applied. Conclusions LR is found to have better statistical power than DT in predicting five-year survivability. CSC is superior to SMOTE, under-sampling, bagging, and boosting in improving the prognostic performance of DT and LR. PMID:24207108
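
    Of the rebalancing techniques named above, random under-sampling is the simplest to illustrate: the majority class is shrunk to the size of the minority class so a standard classifier sees balanced data. The sketch below uses an invented toy data set and label name:

```python
import random

def undersample(data, label_key="survived", seed=3):
    """Random under-sampling: keep all minority-class rows and a random
    subset of each other class of the same size."""
    rng = random.Random(seed)
    by_class = {}
    for row in data:
        by_class.setdefault(row[label_key], []).append(row)
    n_min = min(len(rows) for rows in by_class.values())
    balanced = []
    for rows in by_class.values():
        balanced.extend(rng.sample(rows, n_min))
    rng.shuffle(balanced)
    return balanced

# Imbalanced toy set: 8 survivors vs 2 non-survivors (labels invented).
data = [{"survived": 1, "x": i} for i in range(8)] + \
       [{"survived": 0, "x": i} for i in range(2)]
balanced = undersample(data)
counts = {lbl: sum(r["survived"] == lbl for r in balanced) for lbl in (0, 1)}
print(counts)
```

    SMOTE works in the opposite direction, synthesizing new minority-class rows by interpolating between minority neighbors instead of discarding majority rows.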

  11. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ES^, ES = ES^L and ES = ES^U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ES^. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
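
    The sample-size logic described above (one-sample t-test, 80% power, α = 0.05, evaluated at a point estimate ES^ and its interval bounds) can be sketched as follows, assuming SciPy is available; the effect-size values passed at the end are hypothetical, not the study's estimates:

```python
from scipy import stats

def n_one_sample_t(es, alpha=0.05, power=0.80, n_max=10_000):
    """Smallest n giving the requested power for a two-sided one-sample
    t-test of H0: ES = 0 against H1: ES = es (Cohen's effect size)."""
    for n in range(2, n_max):
        df = n - 1
        crit = stats.t.ppf(1 - alpha / 2, df)      # two-sided rejection threshold
        nc = es * n ** 0.5                         # noncentrality parameter
        achieved = (1 - stats.nct.cdf(crit, df, nc)
                    + stats.nct.cdf(-crit, df, nc))
        if achieved >= power:
            return n
    raise ValueError("effect size too small for n_max")

# point estimate and interval bounds on a hypothetical ES^ = 0.6 (0.35, 0.85):
# the interval on ES maps to an interval on the projected sample size
print([n_one_sample_t(e) for e in (0.6, 0.35, 0.85)])
```

    Because required n grows as roughly 1/ES², the lower confidence bound on ES dominates the upper bound on the projected sample size, which is why the reported intervals (e.g. 22 with 95% CI 10-245) are so asymmetric.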

  12. A model of directional selection applied to the evolution of drug resistance in HIV-1.

    PubMed

    Seoighe, Cathal; Ketwaroo, Farahnaz; Pillay, Visva; Scheffler, Konrad; Wood, Natasha; Duffet, Rodger; Zvelebil, Marketa; Martinson, Neil; McIntyre, James; Morris, Lynn; Hide, Winston

    2007-04-01

    Understanding how pathogens acquire resistance to drugs is important for the design of treatment strategies, particularly for rapidly evolving viruses such as HIV-1. Drug treatment can exert strong selective pressures and sites within targeted genes that confer resistance frequently evolve far more rapidly than the neutral rate. Rapid evolution at sites that confer resistance to drugs can be used to help elucidate the mechanisms of evolution of drug resistance and to discover or corroborate novel resistance mutations. We have implemented standard maximum likelihood methods that are used to detect diversifying selection and adapted them for use with serially sampled reverse transcriptase (RT) coding sequences isolated from a group of 300 HIV-1 subtype C-infected women before and after single-dose nevirapine (sdNVP) to prevent mother-to-child transmission. We have also extended the standard models of codon evolution for application to the detection of directional selection. Through simulation, we show that the directional selection model can provide a substantial improvement in sensitivity over models of diversifying selection. Five of the sites within the RT gene that are known to harbor mutations that confer resistance to nevirapine (NVP) strongly supported the directional selection model. There was no evidence that other mutations that are known to confer NVP resistance were selected in this cohort. The directional selection model, applied to serially sampled sequences, also had more power than the diversifying selection model to detect selection resulting from factors other than drug resistance. Because inference of selection from serial samples is unlikely to be adversely affected by recombination, the methods we describe may have general applicability to the analysis of positive selection affecting recombining coding sequences when serially sampled data are available.
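
    Model comparison of this kind (directional versus diversifying selection) typically rests on a likelihood-ratio test between nested codon models. A minimal sketch with hypothetical log-likelihood values, for the one-extra-parameter (df = 1) case only:

```python
import math

def lrt_p_value(ll_null, ll_alt, df=1):
    """Likelihood-ratio test between nested models: 2*(lnL1 - lnL0) is
    compared with a chi-square distribution under the null. The
    closed-form survival function below is valid for df = 1 only."""
    if df != 1:
        raise ValueError("closed form implemented for df = 1 only")
    stat = max(2 * (ll_alt - ll_null), 0.0)
    return math.erfc(math.sqrt(stat / 2))          # chi-square(1) tail

# hypothetical fitted log-likelihoods for a diversifying-selection null
# model and a directional-selection alternative with one extra parameter
p = lrt_p_value(ll_null=-1234.5, ll_alt=-1228.1)
print(p < 0.05)  # small p supports the directional model at this site
```

    In practice the null distribution for such tests can be non-standard (boundary parameters), so simulation, as the authors use, is the safer way to calibrate significance.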

  13. Petroleomics by electrospray ionization FT-ICR mass spectrometry coupled to partial least squares with variable selection methods: prediction of the total acid number of crude oils.

    PubMed

    Terra, Luciana A; Filgueiras, Paulo R; Tose, Lílian V; Romão, Wanderson; de Souza, Douglas D; de Castro, Eustáquio V R; de Oliveira, Mirela S L; Dias, Júlio C M; Poppi, Ronei J

    2014-10-07

    Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled with Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Generally, ESI(-)-FT-ICR mass spectra present a resolving power of ca. 500,000 and a mass accuracy of better than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, which are identified primarily as naphthenic acids, phenols and carbazole analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate the spectral interpretation, three methods of variable selection were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method seems to be more appropriate for selecting important variables, reducing the dimension of the variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables with their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
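
    The PLS-with-VIP workflow can be sketched with a numpy-only PLS1 (NIPALS) fit on toy data; this is an illustration of the VIP selection criterion, not the crude-oil spectra or the authors' software:

```python
import numpy as np

def pls1_nipals(X, y, n_comp):
    """PLS1 via the NIPALS algorithm: returns scores T, unit-norm
    X-weights W and y-loadings q for n_comp latent variables."""
    Xc, yc = X - X.mean(0), y - y.mean()
    T, W, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)        # unit-norm weight vector
        t = Xc @ w                    # score vector
        p = Xc.T @ t / (t @ t)        # X-loading
        q = yc @ t / (t @ t)          # y-loading
        Xc -= np.outer(t, p)          # deflate X
        yc = yc - q * t               # deflate y
        T.append(t); W.append(w); Q.append(q)
    return np.array(T).T, np.array(W).T, np.array(Q)

def vip(T, W, q):
    """Variable Importance in the Projection; VIP > 1 conventionally
    flags a variable as informative."""
    ss = (T ** 2).sum(axis=0) * q ** 2      # y-variance explained per LV
    return np.sqrt(W.shape[0] * ((W ** 2) @ ss) / ss.sum())

# toy "spectra": 60 samples x 12 variables, only variables 0 and 1 predictive
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=60)
T, W, q = pls1_nipals(X, y, n_comp=2)
scores = vip(T, W, q)
print(np.argsort(scores)[-2:])   # indices of the two highest-VIP variables
```

    The VIP scores are normalized so that their squares average to one, which is what makes VIP > 1 a natural keep/discard threshold.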

  14. Laser vaporization of trace explosives for enhanced non-contact detection

    NASA Astrophysics Data System (ADS)

    Furstenberg, Robert; Papantonakis, Michael; Kendziora, Christopher A.; Bubb, Daniel M.; Corgan, Jeffrey; McGill, R. Andrew

    2010-04-01

    Trace explosives contamination is found primarily in the form of solid particulates on surfaces, due to the low vapor pressure of most explosives materials. Today, the standard sampling procedure involves physical removal of particulate matter from surfaces of interest. A variety of collection methods have been used including air-jetting or swabbing surfaces of interest. The sampled particles are typically heated to generate vapor for analysis in hand held, bench top, or portal detection systems. These sampling methods are time-consuming (and hence costly), require a skilled technician for optimal performance, and are inherently non-selective, allowing non-explosives particles to be co-sampled and analyzed. This can adversely affect the sensitivity and selectivity of detectors, especially those with a limited dynamic range. We present a new approach to sampling solid particles on a solid surface that is targeted, non-contact, and which selectively enhances trace explosive signatures thus improving the selectivity and sensitivity of existing detectors. Our method involves the illumination of a surface of interest with infrared laser light with a wavelength that matches a distinctive vibrational mode of an explosive. The resonant coupling of laser energy results in rapid heating of explosive particles and rapid release of a vapor plume. Neighboring particles unrelated to explosives are generally not directly heated as their vibrational modes are not resonant with the laser. As a result, the generated vapor plume includes a higher concentration of explosives than if the particles were heated with a non-selective light source (e.g. heat lamp). We present results with both benchtop infrared lasers as well as miniature quantum cascade lasers.

  15. A Review of Selected Engineered Nanoparticles in the Atmosphere: Sources, Transformations, and Techniques for Sampling and Analysis

    EPA Science Inventory

    A state-of-the-science review was undertaken to identify and assess sampling and analysis methods to detect and quantify selected nanomaterials (NMs) in the ambient atmosphere. The review is restricted to five types of NMs of interest to the Office of Research and Development Nan...

  16. Trends in Turkish Education Studies

    ERIC Educational Resources Information Center

    Varisoglu, Behice; Sahin, Abdullah; Goktas, Yuksel

    2013-01-01

    The purpose of this study was to determine trends in the subject areas, methods, data collection tools, data analysis methods, and sample types used in recent studies on Turkish education, published in journals from 2000-2011. A total of 558 articles from 44 journals were selected from databases by the purposive sampling method and examined using…

  17. Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model

    NASA Astrophysics Data System (ADS)

    Song, Yunquan; Lin, Lu; Jian, Ling

    2016-07-01

    Single-index varying-coefficient model is an important mathematical modeling method to model nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, due to the robustness of the check loss function to outliers in the finite samples, our proposed variable selection method is more robust than the ones based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
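
    The robustness argument can be illustrated on a simplified linear stand-in for the single-index varying-coefficient model: check-loss (quantile) regression with an L1 shrinkage penalty, fitted by proximal subgradient descent. The data, tuning values and solver below are illustrative assumptions, not the authors' estimator:

```python
import numpy as np

def check_loss_lasso(X, y, tau=0.5, lam=0.1, lr=0.05, n_iter=3000):
    """Check-loss (quantile) regression with an L1 penalty. The check
    loss makes the fit robust to outliers; the soft-threshold step
    shrinks irrelevant coefficients toward zero."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ beta
        grad = -X.T @ (tau - (r < 0).astype(float)) / n   # check-loss subgradient
        beta = beta - lr * grad
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)
    return beta

# toy data: 6 covariates, only the first two matter, heavy-tailed noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.standard_t(df=3, size=200)
beta = check_loss_lasso(X, y)
print(np.round(beta, 2))
```

    With τ = 0.5 this is penalized median regression; unlike least squares, a handful of gross outliers in y barely moves the fitted coefficients.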

  18. COMPENDIUM OF SELECTED METHODS FOR SAMPLING AND ANALYSIS AT GEOTHERMAL FACILITIES

    EPA Science Inventory

    The establishment of generally accepted methods for characterizing geothermal emissions has been hampered by the independent natures of both geothermal industrial development and sampling/analysis procedures despite three workshops on the latter (Las Vegas 1975, 1977, 1980). An i...

  19. Method for immunodiagnostic detection of dioxins at low concentrations

    DOEpatents

    Vanderlaan, Martin; Stanker, Larry H.; Watkins, Bruce E.; Petrovic, Peter; Gorbach, Siegbert

    1995-01-01

    A method is described for the use of monoclonal antibodies in a sensitive immunoassay for halogenated dioxins and dibenzofurans in industrial samples which contain impurities. Appropriate sample preparation and selective enzyme amplification of the immunoassay sensitivity permits detection of dioxin contaminants in industrial or environmental samples at concentrations in the range of a few parts per trillion.

  20. Possibility of successive SRXFA use along with chemical-spectral methods for palladium analysis in geological samples

    NASA Astrophysics Data System (ADS)

    Kislov, E. V.; Kulikov, A. A.; Kulikova, A. B.

    1989-10-01

    Samples of basite-ultrabasite rocks and Ni-Cu ores of the Ioko-Dovyren and Chaya massifs were analysed by SRXFA and a chemical-spectral method. SRXFA fully satisfies the requirements of quantitative noble-metal analysis of ore-free rocks, and the combination of SRXFA with chemical-spectral analysis has good prospects. After a large number of samples have been analysed by SRXFA, it is then necessary to select the samples showing the minimal and maximal results for follow-up by the chemical-spectral method.

  1. Determination of Polychlorinated Biphenyls in Soil and Sediment by Selective Pressurized Liquid Extraction with Immunochemical Detection

    EPA Science Inventory

    A selective liquid pressurized extraction (SPLE) method was developed as a streamlined sample preparation/cleanup procedure for determining Aroclors and coplanar polychlorinated biphenyls (PCBs) in soil and sediment matrices. The SPLE method was coupled with an enzyme-linked imm...

  2. Method and apparatus for chemical and topographical microanalysis

    NASA Technical Reports Server (NTRS)

    Kossakovski, Dmitri A. (Inventor); Baldeschwieler, John D. (Inventor); Beauchamp, Jesse L. (Inventor)

    2002-01-01

    A scanning probe microscope is combined with a laser-induced breakdown spectrometer to provide spatially resolved chemical analysis of the surface correlated with the surface topography. Topographical analysis is achieved by scanning a sharp probe across the sample at constant distance from the surface. Chemical analysis is achieved by means of laser-induced breakdown spectroscopy by delivering pulsed laser radiation to the sample surface through the same sharp probe, and subsequent collection and analysis of emission spectra from plasma generated on the sample by the laser radiation. The method comprises performing microtopographical analysis of the sample with a scanning probe, selecting a scanned topological site on the sample, generating a plasma plume at the selected scanned topological site, and measuring a spectrum of optical emission from the plasma at the selected scanned topological site. The apparatus comprises a scanning probe, a pulsed laser optically coupled to the probe, an optical spectrometer, and a controller coupled to the scanner, laser and spectrometer for controlling the operation of the scanner, laser and spectrometer. The probe and scanner are used for topographical profiling of the sample. The probe is also used for laser radiation delivery to the sample for generating a plasma plume from the sample. Optical emission from the plasma plume is collected and delivered to the optical spectrometer, so that analysis of the emission spectrum by the optical spectrometer allows identification of the chemical composition of the sample at user-selected sites.

  3. Preparation of novel alumina nanowire solid-phase microextraction fiber coating for ultra-selective determination of volatile esters and alcohols from complicated food samples.

    PubMed

    Zhang, Zhuomin; Ma, Yunjian; Wang, Qingtang; Chen, An; Pan, Zhuoyan; Li, Gongke

    2013-05-17

    A novel alumina nanowire (ANW) solid-phase microextraction (SPME) fiber coating was prepared by a simple and rapid anodization-chemical etching method for ultra-selective determination of volatile esters and alcohols from complicated food samples. Preparation conditions for the ANW SPME fiber coating, including corrosion solution concentration and corrosion time, were optimized in detail for better surface morphology and higher surface area based on scanning electron microscopy (SEM). Under the optimum conditions, a homogeneous alumina nanowire structure was achieved with an average thickness of around 20 μm. Compared with most commercial SPME fiber coatings, the ANW SPME fiber coating achieved higher extraction capacity and special selectivity for volatile esters and alcohols. Finally, an efficient gas sampling technique based on the ANW SPME fiber coating was established and successfully applied to the ultra-selective determination of trace volatile esters and alcohols from complicated banana and fermented glutinous rice samples coupled with gas chromatography/mass spectrometry (GC/MS) detection. Interestingly, 25 esters and 2 alcohols among the 30 banana volatile organic compounds (VOCs) identified, and 4 esters and 7 alcohols among the 13 identified VOCs of fermented glutinous rice, were selectively sampled by the ANW SPME fiber coating. Furthermore, new analytical methods for the determination of some typical volatile esters and alcohols from banana and fermented glutinous rice samples at specific storage or brewing phases were developed and validated. Good recoveries for banana and fermented glutinous rice samples were achieved in the ranges of 108-115% with relative standard deviations (RSDs) of 2.6-6.7% and 80.0-91.8% with RSDs of 0.3-1.3% (n=3), respectively. This work proposed a novel and efficient gas sampling technique based on ANW SPME which is quite suitable for ultra-selectively sampling trace volatile esters and alcohols from complicated food samples. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Estimating the circuit delay of FPGA with a transfer learning method

    NASA Astrophysics Data System (ADS)

    Cui, Xiuhai; Liu, Datong; Peng, Yu; Peng, Xiyuan

    2017-10-01

    With the increase of FPGA (Field Programmable Gate Array) functionality, the FPGA has become an on-chip system platform, and its growing complexity makes delay estimation a very challenging task. To address this problem, we propose a transfer learning estimation delay (TLED) method to simplify the delay estimation of FPGAs of different speed grades. Devices of the same style but different speed grades come from the same process and layout, so their delays are correlated. Therefore, one speed grade of FPGA is chosen as the source of basic training samples in this paper; training samples for other speed grades are derived from the basic training samples through transfer learning. At the same time, a few samples from the target FPGA are also selected as training samples. A general predictive model is trained on these samples, so that a single estimation model can estimate the circuit delay of FPGAs of different speed grades. The framework of TLED includes three phases: 1) building a basic circuit delay library, which includes multipliers, adders, shifters, and so on; these circuits are used to train and build the predictive model; 2) selecting, through contrasting experiments among different algorithms, the random forest algorithm to train the predictive model; 3) predicting the target circuit delay with the predictive model. The Artix-7, Kintex-7, and Virtex-7 were selected for the experiments, each including the -1, -2, -2l, and -3 speed grades. The experiments show a delay estimation accuracy score of more than 92% with the TLED method. This result shows that TLED is a feasible, efficient and effective delay assessment method, especially in the high-level synthesis stage of FPGA tools.
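
    The transfer step can be illustrated with a toy numerical sketch. The delay values are hypothetical, and a least-squares scaling factor stands in for the random-forest model used in the paper; the point is only how a small overlap of measured target-grade samples lets the base-grade library predict the rest:

```python
import numpy as np

# hypothetical post-synthesis delays (ns) of five library circuits
# (multiplier, adder, shifter, ...) on a base speed grade, e.g. -1
base_delay = np.array([5.2, 7.8, 3.1, 9.4, 6.0])

# a few measured target-grade samples (e.g. -3) for the first three circuits
target_samples = np.array([4.1, 6.2, 2.5])

# transfer step: fit target ~ k * base by least squares on the overlap;
# this linear scaling is a stand-in for the random-forest predictor
overlap = base_delay[:3]
k = (overlap @ target_samples) / (overlap @ overlap)

# estimated target-grade delays for all circuits, including unmeasured ones
est = k * base_delay
print(np.round(est, 2))
```

    A richer model (such as the random forest the paper selects) replaces the single factor k with a learned mapping from circuit features to delay, but the sample-reuse idea is the same.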

  5. Isolation and identification of Salmonella spp. in drinking water, streams, and swine wastewater by molecular techniques in Taiwan

    NASA Astrophysics Data System (ADS)

    Kuo, C.; Hsu, B.; Shen, T.; Tseng, S.; Tsai, J.; Huang, K.; Kao, P.; Chen, J.

    2013-12-01

    Salmonella spp. is a common water-borne pathogen and its genus comprises more than 2,500 serotypes. The major pathogenic serotypes causing typhoid fever, enteritis and other intestinal-type diseases are S. Typhimurium, S. Enteritidis, S. Stanley, S. Agona, S. Albany, S. Schwarzengrund, S. Newport, S. Choleraesuis, and S. Derby; hence, the identification of the serotypes of Salmonella spp. is important. In the present study, the analytical procedures included a direct concentration method, a non-selective pre-enrichment method and a selective enrichment method for Salmonella spp. Both the selective enrichment cultures and the cultured bacteria were tested with Salmonella-specific primers by polymerase chain reaction (PCR). Finally, the serotypes of Salmonella were confirmed by MLST (multilocus sequence typing) with the aroC, dnaN, hemD, hisD, purE, sucA and thrA housekeeping genes to identify the strains in positive samples. This study covered 121 samples from three different types of water sources: drinking water (51), streams (45), and swine wastewater (25). Thirteen samples positive for the invA gene were obtained by the culture method. The strains of these positive samples identified by the MLST method were S. Albany, S. Typhimurium, S. Newport, S. Bareilly, and S. Derby. Some of these serotypes, S. Albany, S. Typhimurium and S. Newport, are highly pathogenic and correlated with human diarrhea. In our results, MLST is a useful method to identify the strains of Salmonella spp. Keywords: Salmonella, PCR, MLST.

  6. The predictive validity of a situational judgement test, a clinical problem solving test and the core medical training selection methods for performance in specialty training.

    PubMed

    Patterson, Fiona; Lopes, Safiatu; Harding, Stephen; Vaux, Emma; Berkin, Liz; Black, David

    2017-02-01

    The aim of this study was to follow up a sample of physicians who began core medical training (CMT) in 2009. This paper examines the long-term validity of CMT and GP selection methods in predicting performance in the Membership of Royal College of Physicians (MRCP(UK)) examinations. We performed a longitudinal study, examining the extent to which the GP and CMT selection methods (T1) predict performance in the MRCP(UK) examinations (T2). A total of 2,569 applicants from 2008-09 who completed CMT and GP selection methods were included in the study. Looking at MRCP(UK) part 1, part 2 written and PACES scores, both CMT and GP selection methods show evidence of predictive validity for the outcome variables, and hierarchical regressions show the GP methods add significant value to the CMT selection process. CMT selection methods predict performance in important outcomes and have good evidence of validity; the GP methods may have an additional role alongside the CMT selection methods. © Royal College of Physicians 2017. All rights reserved.

  7. SAM Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target chemical, radiochemical, pathogen, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation.

  8. Poly(1-vinylimidazole) functionalized magnetic ion imprinted polymer for fast and selective extraction of trace gold in geological, environmental and biological samples followed by graphite furnace atomic absorption spectrometry detection

    NASA Astrophysics Data System (ADS)

    Zhao, Bingshan; He, Man; Chen, Beibei; Xu, Hongrun; Hu, Bin

    2018-05-01

    In this study, poly(1-vinylimidazole) functionalized gold ion imprinted polymer coated magnetic nanoparticles (MNPs@PVIM-Au-IIP) were prepared and characterized. The adsorption behavior of the prepared MNPs@PVIM-Au-IIP toward gold ions (Au(III)) was studied; the material was found to have good selectivity, high adsorption capacity (185.4 mg g-1) and fast adsorption kinetics for Au(III). On this basis, a new method of ion imprinted magnetic solid phase extraction (II-MSPE) coupled with graphite furnace atomic absorption spectrometry (GFAAS) detection was proposed for the analysis of trace Au(III) in real samples with complicated matrices. Factors affecting the MSPE, including sample pH, desorption reagent, elution concentration and volume, elution time, sample volume and adsorption time, were optimized. With a high enrichment factor of 100-fold, the detection limit of the proposed method is 7.9 ng L-1 for Au(III) with a relative standard deviation of 7.4% (c = 50 ng L-1, n = 7). In order to validate the accuracy of the proposed method, the Certified Reference Material GBW07293 geological sample (platinum-palladium ore) was analyzed, and the determined value was in good agreement with the certified value. The proposed II-MSPE-GFAAS method is simple, fast, selective and sensitive, and has been successfully applied to the determination of trace Au in ore, sediment, environmental water and human urine samples with satisfactory results.

  9. The application of mixed methods designs to trauma research.

    PubMed

    Creswell, John W; Zhang, Wanqing

    2009-12-01

    Despite the use of quantitative and qualitative data in trauma research and therapy, mixed methods studies in this field have not been analyzed to help researchers designing investigations. This discussion begins by reviewing four core characteristics of mixed methods research in the social and human sciences. Combining these characteristics, the authors focus on four select mixed methods designs that are applicable in trauma research. These designs are defined and their essential elements noted. Applying these designs to trauma research, a search was conducted to locate mixed methods trauma studies. From this search, one sample study was selected, and its characteristics of mixed methods procedures noted. Finally, drawing on other mixed methods designs available, several follow-up mixed methods studies were described for this sample study, enabling trauma researchers to view design options for applying mixed methods research in trauma investigations.

  10. A Novel Selective Deep Eutectic Solvent Extraction Method for Versatile Determination of Copper in Sediment Samples by ICP-OES.

    PubMed

    Bağda, Esra; Altundağ, Huseyin; Tüzen, Mustafa; Soylak, Mustafa

    2017-08-01

    In the present study, a simple, one-step deep eutectic solvent (DES) extraction was developed for selective extraction of copper from sediment samples. Optimization of all experimental parameters, e.g. DES type, sample/DES ratio, contact time and temperature, was performed using BCR-280 R (lake sediment certified reference material). The limit of detection (LOD) and the limit of quantification (LOQ) were found to be 1.2 and 3.97 µg L-1, respectively. The RSD of the procedure was 7.5%. The proposed extraction method was applied to river and lake sediments sampled from Serpincik, Çeltek, Kızılırmak (Fadl and Tecer region of the river), Sivas-Turkey.

  11. Air and Surface Sampling Method for Assessing Exposures to Quaternary Ammonium Compounds Using Liquid Chromatography Tandem Mass Spectrometry.

    PubMed

    LeBouf, Ryan F; Virji, Mohammed Abbas; Ranpara, Anand; Stefaniak, Aleksandr B

    2017-07-01

    This method was designed for sampling select quaternary ammonium (quat) compounds in air or on surfaces followed by analysis using ultraperformance liquid chromatography tandem mass spectrometry. Target quats were benzethonium chloride, didecyldimethylammonium bromide, benzyldimethyldodecylammonium chloride, benzyldimethyltetradecylammonium chloride, and benzyldimethylhexadecylammonium chloride. For air sampling, polytetrafluoroethylene (PTFE) filters are recommended for 15-min to 24-hour sampling. For surface sampling, Pro-wipe® 880 (PW) media was chosen. Samples were extracted in 60:40 acetonitrile:0.1% formic acid for 1 hour on an orbital shaker. Method detection limits range from 0.3 to 2 ng/ml depending on media and analyte. Matrix effects of media are minimized through the use of multiple reaction monitoring versus selected ion recording. Upper confidence limits on accuracy meet the National Institute for Occupational Safety and Health 25% criterion for PTFE and PW media for all analytes. Using PTFE and PW analyzed with multiple reaction monitoring, the method quantifies levels among the different quats compounds with high precision (<10% relative standard deviation) and low bias (<11%). The method is sensitive enough with very low method detection limits to capture quats on air sampling filters with only a 15-min sample duration with a maximum assessed storage time of 103 days before sample extraction. This method will support future exposure assessment and quantitative epidemiologic studies to explore exposure-response relationships and establish levels of quats exposures associated with adverse health effects. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  12. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of the snail survey, a 50 m × 50 m experimental quadrat was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-covered method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach of lower cost and higher precision for the snail survey.
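
    The three designs compared in the study can be sketched side by side on simulated data. The field below is a hypothetical 2500-quadrat grid with a density gradient standing in for the altitude strata; the comparison of absolute sampling errors is illustrative, not the study's field data:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical 50 m x 50 m field as 2500 one-metre quadrats, with snail
# counts following a density gradient (a stand-in for altitude strata)
field = rng.poisson(lam=np.linspace(0.5, 3.0, 2500))
n = 300

# simple random sampling
srs = rng.choice(field, size=n, replace=False)

# systematic sampling: every k-th quadrat from a random start
k = field.size // n
sys_sample = field[rng.integers(k)::k][:n]

# stratified random sampling: equal allocation over three strata
strata = np.array_split(field, 3)
strat = np.concatenate([rng.choice(s, size=n // 3, replace=False)
                        for s in strata])

truth = field.mean()
for name, s in (("simple random", srs), ("systematic", sys_sample),
                ("stratified", strat)):
    print(f"{name}: absolute sampling error = {abs(s.mean() - truth):.3f}")
```

    Because the strata absorb the density gradient, stratified sampling tends to give the smallest error at a given n, which mirrors the study's finding that stratified sampling needed only 225 quadrats.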

  13. Systematic sampling for suspended sediment

    Treesearch

    Robert B. Thomas

    1991-01-01

    Abstract - Because of high costs or complex logistics, scientific populations cannot be measured entirely and must be sampled. Accepted scientific practice holds that sample selection be based on statistical principles to assure objectivity when estimating totals and variances. Probability sampling--obtaining samples with known probabilities--is the only method that...

  14. Determination of methylmercury in marine biota samples with advanced mercury analyzer: method validation.

    PubMed

    Azemard, Sabine; Vassileva, Emilia

    2015-06-01

    In this paper, we present a simple, fast and cost-effective method for the determination of methylmercury (MeHg) in marine samples. All important parameters influencing the sample preparation process were investigated and optimized. Full validation of the method was performed in accordance with ISO-17025 (ISO/IEC, 2005) and Eurachem guidelines. Blanks, selectivity, working range (0.09-3.0 ng), recovery (92-108%), intermediate precision (1.7-4.5%), traceability, limit of detection (0.009 ng), limit of quantification (0.045 ng) and expanded uncertainty (15%, k=2) were assessed. An estimate of the uncertainty contribution of each parameter and a demonstration of the traceability of the measurement results are provided as well. Furthermore, the selectivity of the method was studied by analyzing the same sample extracts by advanced mercury analyzer (AMA) and gas chromatography-atomic fluorescence spectrometry (GC-AFS). Additional validation of the proposed procedure was provided by participation in the IAEA-461 worldwide inter-laboratory comparison exercise. Copyright © 2014 Elsevier Ltd. All rights reserved.
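
    The validation quantities reported here follow standard conventions that are easy to sketch numerically. The blank readings and uncertainty components below are hypothetical, chosen only to illustrate the 3s/10s limits and a k = 2 expanded uncertainty:

```python
import numpy as np

# hypothetical replicate blank readings (ng of Hg) from the analyser
blanks = np.array([0.002, 0.004, 0.003, 0.002, 0.005,
                   0.003, 0.004, 0.002, 0.003, 0.004])

sd_blank = blanks.std(ddof=1)
lod = 3 * sd_blank        # limit of detection, 3s convention
loq = 10 * sd_blank       # limit of quantification, 10s convention

# expanded uncertainty: combine relative standard uncertainties (assumed
# contributions from precision, recovery, calibration); coverage factor k = 2
u_rel = np.array([0.03, 0.05, 0.04])
U = 2 * np.sqrt(np.sum(u_rel ** 2))
print(f"LOD={lod:.4f} ng, LOQ={loq:.4f} ng, U={U:.1%}")
```

    Summing the component uncertainties in quadrature before applying the coverage factor is the Eurachem/GUM approach; with k = 2 the expanded uncertainty corresponds to roughly 95% coverage.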

  15. Systematic Assessment of Seven Solvent and Solid-Phase Extraction Methods for Metabolomics Analysis of Human Plasma by LC-MS

    NASA Astrophysics Data System (ADS)

    Sitnikov, Dmitri G.; Monnin, Cian S.; Vuckovic, Dajana

    2016-12-01

    The comparison of extraction methods for global metabolomics is usually executed in biofluids only and focuses on metabolite coverage and method repeatability. This limits our detailed understanding of extraction parameters such as recovery and matrix effects and prevents side-by-side comparison of different sample preparation strategies. To address this gap in knowledge, seven solvent-based and solid-phase extraction methods were systematically evaluated using standard analytes spiked into both buffer and human plasma. We compared recovery, coverage, repeatability, matrix effects, selectivity and orthogonality of all methods tested for the non-lipid metabolome in combination with reversed-phase and mixed-mode liquid chromatography mass spectrometry (LC-MS) analysis. Our results confirmed wide selectivity and excellent precision of solvent precipitations, but revealed their high susceptibility to matrix effects. The use of all seven methods showed high overlap and redundancy which resulted in metabolite coverage increases of 34-80% depending on LC-MS method employed as compared to the best single extraction protocol (methanol/ethanol precipitation) despite a 7x increase in MS analysis time and sample consumption. The most orthogonal methods to methanol-based precipitation were ion-exchange solid-phase extraction and liquid-liquid extraction using methyl tert-butyl ether. Our results help facilitate rational design and selection of sample preparation methods and internal standards for global metabolomics.

  16. Systematic Assessment of Seven Solvent and Solid-Phase Extraction Methods for Metabolomics Analysis of Human Plasma by LC-MS

    PubMed Central

    Sitnikov, Dmitri G.; Monnin, Cian S.; Vuckovic, Dajana

    2016-01-01

    The comparison of extraction methods for global metabolomics is usually executed in biofluids only and focuses on metabolite coverage and method repeatability. This limits our detailed understanding of extraction parameters such as recovery and matrix effects and prevents side-by-side comparison of different sample preparation strategies. To address this gap in knowledge, seven solvent-based and solid-phase extraction methods were systematically evaluated using standard analytes spiked into both buffer and human plasma. We compared the recovery, coverage, repeatability, matrix effects, selectivity and orthogonality of all methods tested for the non-lipid metabolome, in combination with reversed-phase and mixed-mode liquid chromatography-mass spectrometry (LC-MS) analysis. Our results confirmed the wide selectivity and excellent precision of solvent precipitations, but revealed their high susceptibility to matrix effects. The use of all seven methods showed high overlap and redundancy, resulting in metabolite coverage increases of 34–80%, depending on the LC-MS method employed, over the best single extraction protocol (methanol/ethanol precipitation), despite a 7x increase in MS analysis time and sample consumption. The methods most orthogonal to methanol-based precipitation were ion-exchange solid-phase extraction and liquid-liquid extraction using methyl-tert-butyl ether. Our results help facilitate the rational design and selection of sample preparation methods and internal standards for global metabolomics. PMID:28000704

  17. Harmonic reduction of Direct Torque Control of six-phase induction motor.

    PubMed

    Taheri, A

    2016-07-01

    In this paper, a new switching method for Direct Torque Control (DTC) of a six-phase induction machine, aimed at reducing current harmonics, is introduced. Selecting a single suitable vector in each sampling period is the ordinary approach in the ST-DTC drive of a six-phase induction machine. The six-phase induction machine has 64 voltage vectors, which are divided into four groups. In the proposed DTC method, the suitable voltage vectors are selected from two of these vector groups. By a suitable selection of two vectors in each sampling period, the harmonic amplitude is decreased further in comparison with that of the ST-DTC drive. The harmonic losses are greatly reduced and the electromechanical energy loss is decreased, while the switching loss shows only a small increase. Spectrum analysis of the phase current in the standard and the new switching-table DTC of the six-phase induction machine, with determination of the amplitude of each harmonic, is presented in this paper. The proposed method requires a shorter sampling time than the ordinary method. Harmonic analyses of the current at low and high speed demonstrate the performance of the presented method. The simplicity of the proposed method and its implementation without any extra hardware are further advantages. The simulation and experimental results show the superiority of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
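
The spectrum analysis used to compare the switching tables amounts to computing the amplitude of each harmonic of the sampled phase current. A hedged sketch with a synthetic waveform (fundamental plus 5th and 7th harmonics; not data from the paper):

```python
# Sketch: DFT amplitude spectrum of a sampled phase current and the resulting
# total harmonic distortion (THD). The waveform is synthetic, for illustration.
import cmath
import math

def dft_magnitudes(x):
    """Single-sided DFT amplitude spectrum of a real signal."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))) * 2 / n
            for k in range(n // 2)]

def thd(mags, fund_bin):
    """THD: RMS of the harmonic amplitudes over the fundamental amplitude."""
    harmonics = [mags[k] for k in range(2 * fund_bin, len(mags), fund_bin)]
    return math.sqrt(sum(h * h for h in harmonics)) / mags[fund_bin]

n = 256
current = [math.sin(2 * math.pi * t / n)                  # fundamental, amplitude 1
           + 0.10 * math.sin(5 * 2 * math.pi * t / n)     # 5th harmonic
           + 0.05 * math.sin(7 * 2 * math.pi * t / n)     # 7th harmonic
           for t in range(n)]
mags = dft_magnitudes(current)
distortion = thd(mags, fund_bin=1)   # ≈ sqrt(0.10**2 + 0.05**2)
```

A lower THD for one switching table than another, computed this way from measured currents, is the comparison the abstract describes.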

  18. A simple and selective method for determination of phthalate biomarkers in vegetable samples by high pressure liquid chromatography-electrospray ionization-tandem mass spectrometry.

    PubMed

    Zhou, Xi; Cui, Kunyan; Zeng, Feng; Li, Shoucong; Zeng, Zunxiang

    2016-06-01

    In the present study, solid-phase extraction cartridges, including silica reversed-phase Isolute C18, polymeric reversed-phase Oasis HLB and mixed-mode anion-exchange Oasis MAX, and liquid-liquid extractions with ethyl acetate, n-hexane, dichloromethane and their mixtures were compared for the clean-up of phthalate monoesters from vegetable samples. The best recoveries and minimised matrix effects were achieved using ethyl acetate/n-hexane liquid-liquid extraction for these target compounds. A simple and selective method for the determination of phthalate monoesters in vegetable samples by liquid chromatography/electrospray ionisation-tandem mass spectrometry, based on sample preparation by ultrasonic extraction and liquid-liquid extraction clean-up, was developed. The method detection limits for phthalate monoesters ranged from 0.013 to 0.120 ng g⁻¹. Good linearity (r² > 0.991) between the MQLs and 1000× the MQLs was achieved. The intra- and inter-day relative standard deviation values were less than 11.8%. The method was successfully used to determine phthalate monoester metabolites in vegetable samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
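
The linearity criterion (r² > 0.991 over the calibration range) can be checked with an ordinary least-squares fit. A sketch with invented calibration points:

```python
# Sketch: coefficient of determination for a straight-line calibration check.
# Concentrations and peak areas below are invented, not the paper's data.
import math

def r_squared(xs, ys):
    """Coefficient of determination of a straight-line fit (squared Pearson r)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

conc = [0.05, 0.1, 0.5, 1.0, 5.0, 10.0]     # spiked level, ng/g (illustrative)
area = [11, 21, 103, 198, 1010, 1995]       # instrument response (illustrative)
r2 = r_squared(conc, area)                  # acceptance criterion: r2 > 0.991
```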

  19. Analysis of environmental contamination resulting from catastrophic incidents: part 2. Building laboratory capability by selecting and developing analytical methodologies.

    PubMed

    Magnuson, Matthew; Campisano, Romy; Griggs, John; Fitz-James, Schatzi; Hall, Kathy; Mapp, Latisha; Mullins, Marissa; Nichols, Tonya; Shah, Sanjiv; Silvestri, Erin; Smith, Terry; Willison, Stuart; Ernst, Hiba

    2014-11-01

    Catastrophic incidents can generate a large number of samples of analytically diverse types, including forensic, clinical, environmental, food, and others. Environmental samples include water, wastewater, soil, air, urban building and infrastructure materials, and surface residue. Such samples may arise not only from contamination from the incident but also from the multitude of activities surrounding the response to the incident, including decontamination. This document summarizes a range of activities to help build laboratory capability in preparation for sample analysis following a catastrophic incident, including selection and development of fit-for-purpose analytical methods for chemical, biological, and radiological contaminants. Fit-for-purpose methods are those which have been selected to meet project specific data quality objectives. For example, methods could be fit for screening contamination in the early phases of investigation of contamination incidents because they are rapid and easily implemented, but those same methods may not be fit for the purpose of remediating the environment to acceptable levels when a more sensitive method is required. While the exact data quality objectives defining fitness-for-purpose can vary with each incident, a governing principle of the method selection and development process for environmental remediation and recovery is based on achieving high throughput while maintaining high quality analytical results. This paper illustrates the result of applying this principle, in the form of a compendium of analytical methods for contaminants of interest. The compendium is based on experience with actual incidents, where appropriate and available. This paper also discusses efforts aimed at adaptation of existing methods to increase fitness-for-purpose and development of innovative methods when necessary. 
The contaminants of interest are primarily those potentially released through catastrophes resulting from malicious activity. However, the same techniques discussed could also have application to catastrophes resulting from other incidents, such as natural disasters or industrial accidents. Further, the high sample throughput enabled by the techniques discussed could be employed for conventional environmental studies and compliance monitoring, potentially decreasing costs and/or increasing the quantity of data available to decision-makers. Published by Elsevier Ltd.

  20. Photoacoustic sample vessel and method of elevated pressure operation

    DOEpatents

    Autrey, Tom; Yonker, Clement R.

    2004-05-04

    An improved photoacoustic vessel and method of photoacoustic analysis. The photoacoustic sample vessel comprises an acoustic detector, an acoustic couplant, and an acoustic coupler having a chamber for holding the acoustic couplant and a sample. The acoustic couplant is selected from the group consisting of liquid, solid, and combinations thereof. Passing electromagnetic energy through the sample generates an acoustic signal within the sample, whereby the acoustic signal propagates through the sample to and through the acoustic couplant to the acoustic detector.

  1. How Many Fish Need to Be Measured to Effectively Evaluate Trawl Selectivity?

    PubMed Central

    Santos, Juan; Sala, Antonello

    2016-01-01

    The aim of this study was to provide practitioners working with trawl selectivity with general and easily understandable guidelines regarding the fish sampling effort necessary during sea trials. In particular, we focused on how many fish would need to be caught and length measured in a trawl haul in order to assess the selectivity parameters of the trawl at a designated uncertainty level. We also investigated the dependency of this uncertainty level on the experimental method used to collect data and on the potential effects of factors such as the size structure in the catch relative to the size selection of the gear. We based this study on simulated data created from two different fisheries: the Barents Sea cod (Gadus morhua) trawl fishery and the Mediterranean Sea multispecies trawl fishery represented by red mullet (Mullus barbatus). We used these two completely different fisheries to obtain results that can be used as general guidelines for other fisheries. We found that the uncertainty in the selection parameters decreased with increasing number of fish measured and that this relationship could be described by a power model. The sampling effort needed to achieve a specific uncertainty level for the selection parameters was always lower for the covered codend method compared to the paired-gear method. In many cases, the number of fish that would need to be measured to maintain a specific uncertainty level was around 10 times higher for the paired-gear method than for the covered codend method. The trends observed for the effect of sampling effort in the two fishery cases investigated were similar; therefore the guidelines presented herein should be applicable to other fisheries. PMID:27560696
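
The reported relationship, uncertainty falling with the number of fish measured according to a power model, can be fitted by linear least squares in log-log space. A sketch with synthetic data (not the study's simulation output):

```python
# Sketch: fitting the power model  uncertainty = a * n**b  relating uncertainty
# in selectivity parameters to the number of fish measured. Data are synthetic.
import math

def fit_power(ns, us):
    """Least-squares fit of log(u) = log(a) + b*log(n); returns (a, b)."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(u) for u in us]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

ns = [50, 100, 200, 400, 800]            # fish measured per haul (synthetic)
us = [2.0 * n ** -0.5 for n in ns]       # synthetic uncertainties: a=2, b=-0.5
a, b = fit_power(ns, us)
```

With such a fit in hand, the sample size needed for a target uncertainty u* follows by inverting the model: n = (u*/a)**(1/b).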

  2. Comparison of statistical methods for detection of serum lipid biomarkers for mesothelioma and asbestos exposure.

    PubMed

    Xu, Rengyi; Mesaros, Clementina; Weng, Liwei; Snyder, Nathaniel W; Vachani, Anil; Blair, Ian A; Hwang, Wei-Ting

    2017-07-01

    We compared three statistical methods for selecting a panel of serum lipid biomarkers for mesothelioma and asbestos exposure. Serum samples from mesothelioma patients, asbestos-exposed subjects and controls (40 per group) were analyzed. Three variable selection methods were considered: top-ranked predictors from a univariate model, stepwise selection and the least absolute shrinkage and selection operator. The cross-validated area under the receiver operating characteristic curve was used to compare prediction performance. Lipids with a high cross-validated area under the curve were identified. The lipid with a mass-to-charge ratio of 372.31 was selected by all three methods when comparing mesothelioma versus control. Lipids with mass-to-charge ratios of 1464.80 and 329.21 were selected by two models for asbestos exposure versus control. The different methods selected a similar set of serum lipids. Combining candidate biomarkers can improve prediction.
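
The univariate-ranking step can be sketched as scoring each candidate lipid by its ROC AUC for cases versus controls and keeping the top-ranked features. Intensities below are invented, not the study's LC-MS data:

```python
# Sketch: rank candidate lipid features by Mann-Whitney AUC (case vs. control).
# Feature names and intensities are invented placeholders.

def auc(cases, controls):
    """Mann-Whitney AUC: probability a case value exceeds a control value (ties count half)."""
    wins = sum((c > k) + 0.5 * (c == k) for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

features = {
    "m/z 372.31": ([5.1, 6.2, 5.8], [3.0, 3.4, 2.9]),   # separates the groups well
    "m/z 329.21": ([4.0, 3.1, 5.0], [3.9, 3.2, 4.8]),   # weak separation
}
ranked = sorted(features, key=lambda f: auc(*features[f]), reverse=True)
```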

  3. Selective counting and sizing of single virus particles using fluorescent aptamer-based nanoparticle tracking analysis.

    PubMed

    Szakács, Zoltán; Mészáros, Tamás; de Jonge, Marien I; Gyurcsányi, Róbert E

    2018-05-30

    The detection and counting of single virus particles in liquid samples are largely limited to viruses with narrow size distributions and to purified formulations. To address these limitations, here we propose a calibration-free method that concurrently enables the selective recognition, counting and sizing of virus particles, as demonstrated through the detection of human respiratory syncytial virus (RSV), an enveloped virus with a broad size distribution, in throat swab samples. RSV particles were selectively labeled through their attachment glycoproteins (G) with fluorescent aptamers, which further enabled their identification, sizing and counting at the single-particle level by fluorescent nanoparticle tracking analysis. The proposed approach seems to be generally applicable to virus detection and quantification. Moreover, it could be successfully applied to detect single RSV particles in swab samples of diagnostic relevance. Since the selective recognition is associated with the sizing of each detected particle, this method makes it possible to discriminate viral elements linked to the virus, as well as various virus forms and associations.
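
In nanoparticle tracking analysis, each tracked particle's diffusion coefficient is converted to a hydrodynamic diameter via the Stokes-Einstein relation. A sketch with constants for water at 25 °C and an illustrative diffusion coefficient:

```python
# Sketch: the sizing step of nanoparticle tracking analysis. The mean squared
# displacement of a tracked particle yields a diffusion coefficient D, and
# Stokes-Einstein converts D to a hydrodynamic diameter. D below is illustrative.
import math

KB = 1.380649e-23        # Boltzmann constant, J/K
ETA = 8.9e-4             # dynamic viscosity of water at 25 °C, Pa*s
T = 298.15               # temperature, K

def diameter_from_d_coeff(d_coeff):
    """Stokes-Einstein: d = kT / (3 * pi * eta * D)."""
    return KB * T / (3 * math.pi * ETA * d_coeff)

d_coeff = 3.27e-12                             # m^2/s, illustrative value
diam_nm = diameter_from_d_coeff(d_coeff) * 1e9  # ~150 nm, in RSV's size range
```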

  4. A cryopreservation method for Pasteurella multocida from wetland samples

    USGS Publications Warehouse

    Moore, Melody K.; Shadduck, D.J.; Goldberg, Diana R.; Samuel, M.D.

    1998-01-01

    A cryopreservation method and improved isolation techniques for detection of Pasteurella multocida from wetland samples were developed. Wetland water samples were collected in the field, diluted in dimethyl sulfoxide (DMSO, final concentration 10%), and frozen at -180 C in a liquid nitrogen vapor shipper. Frozen samples were transported to the laboratory where they were subsequently thawed and processed in Pasteurella multocida selective broth (PMSB) to isolate P. multocida. This method allowed for consistent isolation of 2 to 18 organisms/ml from water seeded with known concentrations of P. multocida. The method compared favorably with the standard mouse inoculation method and allowed for preservation of the samples until they could be processed in the laboratory.

  5. A New Sample Size Formula for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.

    The focus of this research was to determine the efficacy of a new method of selecting sample sizes for multiple linear regression. A Monte Carlo simulation was used to study both empirical predictive power rates and empirical statistical power rates of the new method and seven other methods: those of C. N. Park and A. L. Dudycha (1974); J. Cohen…
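
The Monte Carlo approach described can be sketched as simulating many regression datasets at a candidate sample size and counting how often the slope test rejects. All settings below (effect size, noise, the 1.96 normal-approximation cutoff) are illustrative, not the study's:

```python
# Sketch: empirical power of the slope test in simple linear regression,
# estimated by Monte Carlo simulation. Settings are illustrative placeholders.
import math
import random

def power_slope(n, beta, sigma, reps, rng, crit=1.96):
    """Fraction of simulated datasets in which |t| for the slope exceeds crit."""
    rejections = 0
    for _ in range(reps):
        xs = [rng.gauss(0, 1) for _ in range(n)]
        ys = [beta * x + rng.gauss(0, sigma) for x in xs]
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        resid = [y - my - b * (x - mx) for x, y in zip(xs, ys)]
        se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
        rejections += abs(b / se) > crit
    return rejections / reps

rng = random.Random(1)
p = power_slope(n=50, beta=0.5, sigma=1.0, reps=200, rng=rng)
```

Repeating this over a grid of n values and picking the smallest n whose empirical power clears a target (e.g. 0.80) is the basic sample-size-selection loop.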

  6. ELT Research in Turkey: A Content Analysis of Selected Features of Published Articles

    ERIC Educational Resources Information Center

    Yagiz, Oktay; Aydin, Burcu; Akdemir, Ahmet Selçuk

    2016-01-01

    This study reviews a selected sample of 274 research articles on ELT, published between 2005 and 2015 in Turkish contexts. In the study, 15 journals in the ULAKBIM database and articles from national and international journals accessed according to the convenience sampling method were surveyed and the relevant articles were obtained. A content analysis was…

  7. The Relationship between Teachers Commitment and Female Students Academic Achievements in Some Selected Secondary School in Wolaita Zone, Southern Ethiopia

    ERIC Educational Resources Information Center

    Bibiso, Abyot; Olango, Menna; Bibiso, Mesfin

    2017-01-01

    The purpose of this study was to investigate the relationship between teachers' commitment and female students' academic achievement in selected secondary schools of Wolaita zone, Southern Ethiopia. The research method employed was a survey study and the sampling techniques were purposive, simple random and stratified random sampling. Questionnaire…

  8. COMPENDIUM OF METHODS FOR THE DETERMINATION ...

    EPA Pesticide Factsheets

    This Second Edition of the Compendium has been prepared to provide regional, state and local environmental regulatory agencies with step-by-step sampling and analysis procedures for the determination of selected toxic organic pollutants in ambient air. It is designed to assist those persons responsible for sampling and analysis of toxic organic pollutants in complying with the requirements of Title III of the Clean Air Act. This revised Compendium presents a set of 17 methods in a standardized format with a variety of applicable sampling methods, as well as several analytical techniques, for specific classes of organic pollutants, as appropriate to the specific pollutant compound, its level, and potential interferences. Consequently, this treatment allows the user flexibility in selecting alternatives to complement his or her background and laboratory capability. Information

  9. Evaluation of Scat Deposition Transects versus Radio Telemetry for Developing a Species Distribution Model for a Rare Desert Carnivore, the Kit Fox.

    PubMed

    Dempsey, Steven J; Gese, Eric M; Kluever, Bryan M; Lonsinger, Robert C; Waits, Lisette P

    2015-01-01

    Development and evaluation of noninvasive methods for monitoring species distribution and abundance is a growing area of ecological research. While noninvasive methods have the advantage of a reduced risk of the negative effects associated with capture, comparisons with methods that use more traditional invasive sampling are lacking. Historically, kit foxes (Vulpes macrotis) occupied the desert and semi-arid regions of southwestern North America. Once the most abundant carnivore in the Great Basin Desert of Utah, the species is now considered rare. In recent decades, attempts have been made to model the environmental variables influencing kit fox distribution. Using noninvasive scat deposition surveys to determine kit fox presence, we modeled resource selection functions to predict kit fox distribution using three popular techniques (Maxent, fixed-effects, and mixed-effects generalized linear models) and compared these with similar models developed from invasive sampling (telemetry locations from radio-collared foxes). Resource selection functions were developed using a combination of landscape variables including elevation, slope, aspect, vegetation height, and soil type. All models were tested against subsequent scat collections as a method of model validation. We demonstrate the importance of comparing multiple model types when developing resource selection functions used to predict a species distribution, and of evaluating the importance of environmental variables on species distribution. All models we examined showed a large effect of elevation on kit fox presence, followed by slope and vegetation height. However, the invasive sampling method (i.e., radio-telemetry) appeared to be better at determining resource selection, and therefore may be more robust for predicting kit fox distribution.
In contrast, the distribution maps created from the noninvasive sampling (i.e., scat transects) were significantly different from those of the invasive method; thus, scat transects may be appropriate when used in an occupancy framework to predict species distribution. We concluded that while scat deposition transects may be useful for monitoring kit fox abundance and possibly occupancy, they do not appear to be appropriate for determining resource selection. On our study area, scat transects were biased toward roadways, while the data collected using radio-telemetry were dictated by the movements of the kit foxes themselves. We recommend that future studies applying noninvasive scat sampling consider a more robust random sampling design across the landscape (e.g., random transects or more complete road coverage) that would provide a more accurate and unbiased depiction of resource selection useful for predicting kit fox distribution.
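
A much simpler resource-selection summary than the Maxent and GLM models used in the study, but one that conveys the idea, is a Manly-type selection ratio: the proportion of use locations in each habitat class divided by the proportion of that class available. Counts are invented:

```python
# Sketch: Manly-type selection ratios for habitat classes. A ratio > 1 means
# the class is used more than its availability would predict (selected for).
# Habitat classes and counts are invented placeholders.

def selection_ratios(used, available):
    """Ratio of proportional use to proportional availability per habitat class."""
    tot_used, tot_avail = sum(used.values()), sum(available.values())
    return {h: (used[h] / tot_used) / (available[h] / tot_avail) for h in used}

used      = {"low_elev": 60, "mid_elev": 30, "high_elev": 10}   # e.g. fox locations
available = {"low_elev": 40, "mid_elev": 40, "high_elev": 20}   # landscape cells
ratios = selection_ratios(used, available)
```

The roadway bias described above corrupts exactly the `used` counts in such a calculation, which is why the scat-based selection estimates diverged from telemetry.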

  10. Population genetics inference for longitudinally-sampled mutants under strong selection.

    PubMed

    Lacerda, Miguel; Seoighe, Cathal

    2014-11-01

    Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
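
One generation of the discrete Wright-Fisher model with selection, the model that the diffusion methods approximate, can be sketched directly. The population size, selection coefficient and starting count are illustrative:

```python
# Sketch: the discrete Wright-Fisher model with selection. Each generation,
# the next mutant count is binomial(N, p) where p is the post-selection
# mutant frequency. N, s and the starting count are illustrative.
import random

def wf_step(i, n, s, rng):
    """Sample the next mutant count given current count i; mutant fitness is 1+s."""
    p = (1 + s) * i / ((1 + s) * i + (n - i))        # post-selection frequency
    return sum(rng.random() < p for _ in range(n))   # binomial(n, p) draw

rng = random.Random(42)
n, s = 1000, 0.5          # strong selection, the regime where diffusions struggle
traj = [50]               # starting mutant count
for _ in range(20):
    traj.append(wf_step(traj[-1], n, s, rng))
```

Maximum-likelihood inference under this model means evaluating the binomial transition probabilities along an observed frequency trajectory, which is what becomes expensive for large N and motivates the paper's alternative approximation.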

  11. Quality Control Guidelines for SAM Chemical Methods

    EPA Pesticide Factsheets

    Learn more about quality control guidelines and recommendations for the analysis of samples using the chemistry methods listed in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM).

  12. Quality Control Guidelines for SAM Pathogen Methods

    EPA Pesticide Factsheets

    Learn more about quality control guidelines and recommendations for the analysis of samples using the pathogen methods listed in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM).

  13. SAM Chemical Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target chemical, radiochemical, pathogen, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM).

  14. Quality Control Guidelines for SAM Radiochemical Methods

    EPA Pesticide Factsheets

    Learn more about quality control guidelines and recommendations for the analysis of samples using the radiochemistry methods listed in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM).

  15. General Quality Control (QC) Guidelines for SAM Methods

    EPA Pesticide Factsheets

    Learn more about quality control guidelines and recommendations for the analysis of samples using the methods listed in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM).

  16. Quality Control Guidelines for SAM Biotoxin Methods

    EPA Pesticide Factsheets

    Learn more about quality control guidelines and recommendations for the analysis of samples using the biotoxin methods listed in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM).

  17. Efficient sampling over rough energy landscapes with high barriers: A combination of metadynamics with integrated tempering sampling.

    PubMed

    Yang, Y Isaac; Zhang, Jun; Che, Xing; Yang, Lijiang; Gao, Yi Qin

    2016-03-07

    In order to efficiently overcome high free energy barriers embedded in a complex energy landscape and calculate overall thermodynamics properties using molecular dynamics simulations, we developed and implemented a sampling strategy by combining the metadynamics with (selective) integrated tempering sampling (ITS/SITS) method. The dominant local minima on the potential energy surface (PES) are partially exalted by accumulating history-dependent potentials as in metadynamics, and the sampling over the entire PES is further enhanced by ITS/SITS. With this hybrid method, the simulated system can be rapidly driven across the dominant barrier along selected collective coordinates. Then, ITS/SITS ensures a fast convergence of the sampling over the entire PES and an efficient calculation of the overall thermodynamic properties of the simulation system. To test the accuracy and efficiency of this method, we first benchmarked this method in the calculation of ϕ - ψ distribution of alanine dipeptide in explicit solvent. We further applied it to examine the design of template molecules for aromatic meta-C-H activation in solutions and investigate solution conformations of the nonapeptide Bradykinin involving slow cis-trans isomerizations of three proline residues.
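
The metadynamics ingredient of the hybrid scheme can be caricatured in one dimension: Gaussian hills are deposited at visited values of the collective variable, and the accumulated history-dependent bias grows in the visited well while staying near zero elsewhere. The parameters and the walk below are illustrative:

```python
# Sketch: accumulation of a history-dependent metadynamics bias in 1D.
# Hill height/width and the walk along the collective variable are illustrative;
# a real simulation would couple hill deposition to biased molecular dynamics.
import math

def bias(x, hills, height=0.5, width=0.2):
    """History-dependent bias: a sum of Gaussian hills deposited along the CV."""
    return sum(height * math.exp(-(x - c) ** 2 / (2 * width ** 2)) for c in hills)

hills = []
cv = 0.0                       # collective-variable value, starting in a minimum
for _ in range(50):
    hills.append(cv)           # deposit a hill at the current CV value
    cv += 0.02                 # caricature of the biased walk out of the well
filled_here = bias(0.0, hills)  # large accumulated bias in the visited region
far_away = bias(5.0, hills)     # essentially zero bias in unvisited regions
```

In the hybrid method, this bias flattens the dominant minima so that ITS/SITS can then sample the remaining, gentler landscape efficiently.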

  18. Efficient sampling over rough energy landscapes with high barriers: A combination of metadynamics with integrated tempering sampling

    NASA Astrophysics Data System (ADS)

    Yang, Y. Isaac; Zhang, Jun; Che, Xing; Yang, Lijiang; Gao, Yi Qin

    2016-03-01

    In order to efficiently overcome high free energy barriers embedded in a complex energy landscape and calculate overall thermodynamics properties using molecular dynamics simulations, we developed and implemented a sampling strategy by combining the metadynamics with (selective) integrated tempering sampling (ITS/SITS) method. The dominant local minima on the potential energy surface (PES) are partially exalted by accumulating history-dependent potentials as in metadynamics, and the sampling over the entire PES is further enhanced by ITS/SITS. With this hybrid method, the simulated system can be rapidly driven across the dominant barrier along selected collective coordinates. Then, ITS/SITS ensures a fast convergence of the sampling over the entire PES and an efficient calculation of the overall thermodynamic properties of the simulation system. To test the accuracy and efficiency of this method, we first benchmarked this method in the calculation of ϕ - ψ distribution of alanine dipeptide in explicit solvent. We further applied it to examine the design of template molecules for aromatic meta-C—H activation in solutions and investigate solution conformations of the nonapeptide Bradykinin involving slow cis-trans isomerizations of three proline residues.

  19. Efficient sampling over rough energy landscapes with high barriers: A combination of metadynamics with integrated tempering sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y. Isaac; Zhang, Jun; Che, Xing

    2016-03-07

    In order to efficiently overcome high free energy barriers embedded in a complex energy landscape and calculate overall thermodynamics properties using molecular dynamics simulations, we developed and implemented a sampling strategy by combining the metadynamics with (selective) integrated tempering sampling (ITS/SITS) method. The dominant local minima on the potential energy surface (PES) are partially exalted by accumulating history-dependent potentials as in metadynamics, and the sampling over the entire PES is further enhanced by ITS/SITS. With this hybrid method, the simulated system can be rapidly driven across the dominant barrier along selected collective coordinates. Then, ITS/SITS ensures a fast convergence of the sampling over the entire PES and an efficient calculation of the overall thermodynamic properties of the simulation system. To test the accuracy and efficiency of this method, we first benchmarked this method in the calculation of ϕ − ψ distribution of alanine dipeptide in explicit solvent. We further applied it to examine the design of template molecules for aromatic meta-C—H activation in solutions and investigate solution conformations of the nonapeptide Bradykinin involving slow cis-trans isomerizations of three proline residues.

  20. SAM Companion Documents

    EPA Pesticide Factsheets

    SAM Companion Documents and Sample Collection Procedures provide information intended to complement the analytical methods listed in Selected Analytical Methods for Environmental Remediation and Recovery (SAM).

  1. A comparison of soil organic matter physical fractionation methods

    NASA Astrophysics Data System (ADS)

    Duddigan, Sarah; Alexander, Paul; Shaw, Liz; Collins, Chris

    2017-04-01

    Selecting a suitable physical fractionation method to investigate soil organic matter dynamics from the plethora that are available is a difficult task. An initial investigation of four different physical fractionation methods was conducted: (i) Six et al. (2002); (ii) Zimmermann et al. (2007); (iii) Sohi et al. (2001); and (iv) Plaza et al. (2013). The soils used were from a long-term organic matter field plot study in which a sandy loam soil was subjected to the following treatments: Peat (Pt), Horse Manure (H), Garden Compost (GCf), Garden Compost at half rate (GCh), and a bare plot control (BP). Although each of these methods involved the isolation of unique fractions, in the interest of comparison each fraction was categorised as either (i) physically protected (i.e. in aggregates); (ii) chemically protected (such as in organo-mineral complexes); or (iii) unprotected by either of these mechanisms (so-called 'free' organic matter). Regardless of the fractionation method used, a large amount of the variation in the total C contents of the differently treated soils was accounted for by the differences in unprotected particulate organic matter. When comparing the methods to one another, there were no consistent differences in carbon content in the physically protected, chemically protected, or unprotected fractions as operationally defined across all five organic matter treatments. Fractionation method selection for this research was therefore driven primarily by the practicalities of conducting each method in the laboratory. All of the methods tested had limitations for use in this research. This is not a criticism of the methods themselves but largely a result of their lack of suitability for these particular samples. For example, samples that contain a lot of gravel can cause problems for methods that use size distribution for fractionation.
Problems can also be encountered when free particulate organic matter contributes a large proportion of the sample, leaving insufficient material for further fractionation. This highlights the need to understand the nature of the sample prior to method selection.

  2. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
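
The weighted binary matrix sampling (WBMS) step can be sketched as drawing each variable into a sub-model with its own inclusion probability, so that variables with higher weights appear in more sub-models. The weights and sizes below are illustrative:

```python
# Sketch: weighted binary matrix sampling. Rows are sub-models, columns are
# variables; entry 1 means the variable is included in that sub-model.
# Weights and matrix size are illustrative placeholders.
import random

def wbms(weights, n_models, rng):
    """Generate a binary inclusion matrix with per-variable inclusion weights."""
    return [[1 if rng.random() < w else 0 for w in weights]
            for _ in range(n_models)]

rng = random.Random(7)
weights = [0.9, 0.5, 0.1]          # per-variable inclusion probabilities
matrix = wbms(weights, 1000, rng)
freq = [sum(row[j] for row in matrix) / 1000 for j in range(3)]
# Empirical inclusion frequencies track the weights (~0.9, ~0.5, ~0.1).
```

In VISSA proper, the weights are then updated from the performance of the sub-models, so informative variables are drawn ever more often and the variable space shrinks step by step.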

  3. Crystal-face-selective adsorption of Au nanoparticles onto polycrystalline diamond surfaces.

    PubMed

    Kondo, Takeshi; Aoshima, Shinsuke; Hirata, Kousuke; Honda, Kensuke; Einaga, Yasuaki; Fujishima, Akira; Kawai, Takeshi

    2008-07-15

    Crystal-face-selective adsorption of Au nanoparticles (AuNPs) was achieved on a polycrystalline boron-doped diamond (BDD) surface via the self-assembly method combined with a UV/ozone treatment. To the best of our knowledge, this is the first report of crystal-face-selective adsorption on an inorganic solid surface. Hydrogen-plasma-treated BDD samples, and those given a subsequent UV/ozone treatment for 2 min or longer, showed almost no AuNP adsorption after immersion in an AuNP solution prepared by the citrate reduction method. However, samples treated with UV/ozone for 10 s showed selective AuNP adsorption on their (111) facets after immersion. Moreover, samples treated with UV/ozone for 40-60 s showed AuNP adsorption over the whole surface. These results indicate that the AuNP adsorption behavior can be controlled by the UV/ozone treatment time. This phenomenon was highly reproducible and was applied to a two-step adsorption method, in which AuNPs from different batches were adsorbed on the (111) and (100) surfaces in that order. Our findings may be of great value for the fabrication of advanced nanoparticle-based functional materials via bottom-up approaches with simple macroscale procedures.

  4. Sensitive Detection of Neonicotinoid Insecticides and Other Selected Pesticides in Pollen and Nectar Using Nanoflow Liquid Chromatography Orbitrap Tandem Mass Spectrometry.

    PubMed

    Moreno-González, David; Alcántara-Durán, Jaime; Gilbert-López, Bienvenida; Beneito-Cambra, Miriam; Cutillas, Víctor M; Rajski, Łukasz; Molina-Díaz, Antonio; García-Reyes, Juan F

    2018-03-01

    In this work, a new method based on nanoflow LC with high-resolution MS was developed for the determination of eight pesticides in pollen and nectar samples, including neonicotinoid insecticides and other selected pesticides commonly found in bees and beeswax. Detection was undertaken with a hybrid quadrupole-Orbitrap mass spectrometer (Q Exactive™) equipped with a commercial nanospray ion source. The extraction of pesticides from pollen samples was performed by a modified micro-QuEChERS method scaled down to Eppendorf tubes, whereas nectar samples were simply diluted with a water-methanol (95 + 5, v/v) solution. Good linearity (>0.999 in all cases) was obtained between 0.05 and 500 µg/kg and between 0.04 and 400 µg/kg for pollen and nectar, respectively. Recovery rates in pollen ranged from 85 to 97%, with RSDs <12%. Matrix effects were evaluated and found to be negligible for all studied pesticides. The lowest concentration levels tested and validated were 0.5 and 0.4 µg/kg for the pollen and nectar matrixes, respectively. In addition, selected incurred samples were studied, yielding several positive findings in pollen and nectar samples and demonstrating the sensitivity and applicability of the proposed method.

  5. Computational fragment-based screening using RosettaLigand: the SAMPL3 challenge

    NASA Astrophysics Data System (ADS)

    Kumar, Ashutosh; Zhang, Kam Y. J.

    2012-05-01

    The SAMPL3 fragment-based virtual screening challenge provides a valuable opportunity for researchers to test their programs, methods and screening protocols in a blind testing environment. We participated in the SAMPL3 challenge and evaluated our virtual fragment screening protocol, which involves RosettaLigand as the core component, by screening a 500-fragment Maybridge library against bovine pancreatic trypsin. Our study reaffirmed that the real test for any virtual screening approach is a blind testing environment. The analyses presented in this paper also showed that virtual screening performance can be improved if a set of known active compounds is available and parameters and methods that yield better enrichment are selected. Our study also highlighted that selecting an appropriate method to calculate partial charges is important for achieving accurate orientation and conformation of ligands within a binding site. Another finding is that using multiple receptor ensembles in docking does not always yield better enrichment than individual receptors. On the basis of our results and retrospective analyses from the SAMPL3 fragment screening challenge, we anticipate that the chances of success in a fragment screening process could be increased significantly by careful selection of receptor structures, treatment of protein flexibility, sufficient conformational sampling within the binding pocket, and accurate assignment of ligand and protein partial charges.

  6. Sampling Based Influence Maximization on Linear Threshold Model

    NASA Astrophysics Data System (ADS)

    Jia, Su; Chen, Ling

    2018-04-01

    A sampling-based influence maximization method on the linear threshold (LT) model is presented. The method samples the routes in the possible worlds of the social network, and uses a Chernoff bound to estimate the number of samples needed so that the error can be constrained within a given bound. The active probabilities of the routes in the possible worlds are then calculated and used to compute the influence spread of each node in the network. Our experimental results show that our method can effectively select appropriate seed node sets that spread larger influence than other similar methods.
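
    The role of the Chernoff bound here can be sketched with the standard Hoeffding-type sample-complexity formula (an illustrative sketch; the paper's exact bound may differ): to estimate a [0, 1]-bounded quantity to within ε with confidence 1 − δ, roughly ln(2/δ)/(2ε²) independent samples suffice.

```python
import math

def samples_needed(eps, delta):
    """Monte Carlo samples needed so the estimate of a [0, 1]-bounded
    quantity (e.g. a normalized influence spread) deviates from its true
    value by more than eps with probability at most delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

print(samples_needed(0.05, 0.05))   # -> 738
print(samples_needed(0.01, 0.05))   # tighter error tolerance: ~25x more samples
```

    Tightening the error tolerance or raising the confidence both drive the sample count up, which is why bounding the number of sampled possible worlds in advance matters for efficiency.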

  7. Detection of hyphomycetes in the upper respiratory tract of patients with cystic fibrosis.

    PubMed

    Horré, R; Marklein, G; Siekmeier, R; Reiffert, S-M

    2011-11-01

    The respiratory tract of cystic fibrosis patients is colonised by bacteria and fungi. Although colonisation by slow growing fungi such as Pseudallescheria, Scedosporium and Exophiala species has been studied previously, the colonisation rate differs from study to study. Infections caused by these fungi have been recognised, especially after lung transplants. Monitoring of respiratory tract colonisation in cystic fibrosis patients includes the use of several semi-selective culture media to detect bacteria such as Pseudomonas aeruginosa and Burkholderia cepacia as well as Candida albicans. It is relevant to study whether conventional methods are sufficient for the detection of slow growing hyphomycetes or if additional semi-selective culture media should be used. In total, 589 respiratory specimens from cystic fibrosis patients were examined for the presence of slow growing hyphomycetes. For 439 samples from 81 patients, in addition to conventional methods, erythritol-chloramphenicol agar was used for the selective isolation of Exophiala dermatitidis and paraffin-covered liquid Sabouraud media for the detection of phaeohyphomycetes. For 150 subsequent samples from 42 patients, SceSel+ agar was used for selective isolation of Pseudallescheria and Scedosporium species, and brain-heart infusion bouillon containing a wooden stick for hyphomycete detection. Selective isolation techniques were superior in detecting non-Aspergillus hyphomycetes compared with conventional methods. Although liquid media detected fewer strains of Exophiala, Pseudallescheria and Scedosporium species, additional hyphomycete species not detected by other methods were isolated. Current conventional methods are insufficient to detect non-Aspergillus hyphomycetes, especially Exophiala, Pseudallescheria and Scedosporium species, in sputum samples of cystic fibrosis patients. © 2010 Blackwell Verlag GmbH.

  8. Method and system for providing precise multi-function modulation

    NASA Technical Reports Server (NTRS)

    Davarian, Faramaz (Inventor); Sumida, Joe T. (Inventor)

    1989-01-01

    A method and system is disclosed which provides precise multi-function digitally implementable modulation for a communication system. The invention provides a modulation signal for a communication system in response to an input signal from a data source. A digitized time response is generated from samples of a time domain representation of a spectrum profile of a selected modulation scheme. The invention generates and stores coefficients for each input symbol in accordance with the selected modulation scheme. The output signal is provided by a plurality of samples, each sample being generated by summing the products of a predetermined number of the coefficients and a predetermined number of the samples of the digitized time response. In a specific illustrative implementation, the samples of the output signals are converted to analog signals, filtered and used to modulate a carrier in a conventional manner. The invention is versatile in that it allows for the storage of the digitized time responses and corresponding coefficient lookup table of a number of modulation schemes, any of which may then be selected for use in accordance with the teachings of the invention.
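
    The summation described above — output samples formed from products of per-symbol coefficients and samples of a stored, digitized time response — is essentially table-driven pulse shaping. A minimal sketch under assumed parameters (a windowed-sinc pulse table and BPSK-style ±1 coefficients; the patent's actual tables and schemes are not specified here):

```python
import numpy as np

SAMPLES_PER_SYMBOL = 8
t = np.arange(-3, 3, 1 / SAMPLES_PER_SYMBOL)
pulse = np.sinc(t) * np.hamming(len(t))    # stored, sampled time response

def shape(coeffs, pulse, sps):
    """Output waveform: a sum of coefficient-weighted, time-shifted copies
    of the digitized pulse (one shifted copy per input symbol)."""
    out = np.zeros(len(coeffs) * sps + len(pulse) - 1)
    for i, c in enumerate(coeffs):
        out[i * sps : i * sps + len(pulse)] += c * pulse
    return out

symbols = np.array([1, -1, 1, 1, -1])      # coefficients from a symbol lookup
waveform = shape(symbols, pulse, SAMPLES_PER_SYMBOL)
print(waveform.shape)                       # -> (87,)
```

    Because the sinc zeros fall on neighbouring symbol centers, the waveform at each symbol's center sample equals that symbol's coefficient times the pulse peak; swapping in a different stored pulse table changes the modulation scheme without changing the summation machinery, which is the flexibility the patent describes.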

  9. Obscured AGN at z ~ 1 from the zCOSMOS-Bright Survey. I. Selection and optical properties of a [Ne v]-selected sample

    NASA Astrophysics Data System (ADS)

    Mignoli, M.; Vignali, C.; Gilli, R.; Comastri, A.; Zamorani, G.; Bolzonella, M.; Bongiorno, A.; Lamareille, F.; Nair, P.; Pozzetti, L.; Lilly, S. J.; Carollo, C. M.; Contini, T.; Kneib, J.-P.; Le Fèvre, O.; Mainieri, V.; Renzini, A.; Scodeggio, M.; Bardelli, S.; Caputi, K.; Cucciati, O.; de la Torre, S.; de Ravel, L.; Franzetti, P.; Garilli, B.; Iovino, A.; Kampczyk, P.; Knobel, C.; Kovač, K.; Le Borgne, J.-F.; Le Brun, V.; Maier, C.; Pellò, R.; Peng, Y.; Perez Montero, E.; Presotto, V.; Silverman, J. D.; Tanaka, M.; Tasca, L.; Tresse, L.; Vergani, D.; Zucca, E.; Bordoloi, R.; Cappi, A.; Cimatti, A.; Koekemoer, A. M.; McCracken, H. J.; Moresco, M.; Welikala, N.

    2013-08-01

    Aims: The application of multi-wavelength selection techniques is essential for obtaining a complete and unbiased census of active galactic nuclei (AGN). We present here a method for selecting z ~ 1 obscured AGN from optical spectroscopic surveys. Methods: A sample of 94 narrow-line AGN with 0.65 < z < 1.20 was selected from the 20k-Bright zCOSMOS galaxy sample by detection of the high-ionization [Ne v] λ3426 line. The presence of this emission line in a galaxy spectrum is indicative of nuclear activity, although the selection is biased toward low absorbing column densities on narrow-line region or galactic scales. A similar sample of unobscured (type 1 AGN) was collected applying the same analysis to zCOSMOS broad-line objects. This paper presents and compares the optical spectral properties of the two AGN samples. Taking advantage of the large amount of data available in the COSMOS field, the properties of the [Ne v]-selected type 2 AGN were investigated, focusing on their host galaxies, X-ray emission, and optical line-flux ratios. Finally, a previously developed diagnostic, based on the X-ray-to-[Ne v] luminosity ratio, was exploited to search for the more heavily obscured AGN. Results: We found that [Ne v]-selected narrow-line AGN have Seyfert 2-like optical spectra, although their emission line ratios are diluted by a star-forming component. The ACS morphologies and stellar component in the optical spectra indicate a preference for our type 2 AGN to be hosted in early-type spirals with stellar masses greater than 10^(9.5-10) M⊙, on average higher than those of the galaxy parent sample. The fraction of galaxies hosting [Ne v]-selected obscured AGN increases with the stellar mass, reaching a maximum of about 3% at ≈2 × 10^11 M⊙. 
A comparison with other selection techniques at z ~ 1, namely the line-ratio diagnostics and X-ray detections, shows that the detection of the [Ne v] λ3426 line is an effective method for selecting AGN in the optical band, in particular the most heavily obscured ones, but cannot provide a complete census of type 2 AGN by itself. Finally, the high fraction of [Ne v]-selected type 2 AGN not detected in medium-deep (≈100-200 ks) Chandra observations (67%) is suggestive of the inclusion of Compton-thick (i.e., with N_H > 10^24 cm^-2) sources in our sample. The presence of a population of heavily obscured AGN is corroborated by the X-ray-to-[Ne v] ratio; we estimated, by means of an X-ray stacking technique and simulations, that the Compton-thick fraction in our sample of type 2 AGN is 43 ± 4% (statistical errors only), which agrees well with standard assumptions by XRB synthesis models.

  10. An improved survivability prognosis of breast cancer by using sampling and feature selection technique to solve imbalanced patient classification data.

    PubMed

    Wang, Kung-Jeng; Makond, Bunjira; Wang, Kung-Min

    2013-11-09

    Breast cancer is one of the most critical cancers and a major cause of cancer death among women. Knowing the survivability of patients is essential to ease decision making regarding medical treatment and financial preparation. Breast cancer data sets are typically imbalanced (i.e., the number of surviving patients greatly outnumbers the number of non-surviving patients), and standard classifiers are not applicable to imbalanced data sets, so methods to improve the survivability prognosis of breast cancer need study. Two well-known five-year prognosis models/classifiers [i.e., logistic regression (LR) and decision tree (DT)] are constructed by combining the synthetic minority over-sampling technique (SMOTE), the cost-sensitive classifier technique (CSC), under-sampling, bagging, and boosting. A feature selection method is used to select relevant variables, while a pruning technique is applied to obtain low-information-burden models. These methods are applied to data obtained from the Surveillance, Epidemiology, and End Results database. The improvements in survivability prognosis of breast cancer are investigated based on the experimental results. The experimental results confirm that DT and LR models combined with SMOTE, CSC, and under-sampling consistently generate higher predictive performance than the original models. Most of the time, DT and LR models combined with SMOTE and CSC use fewer features when a feature selection method and a pruning technique are applied. LR is found to have better statistical power than DT in predicting five-year survivability. CSC is superior to SMOTE, under-sampling, bagging, and boosting in improving the prognostic performance of DT and LR.
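
    SMOTE itself is straightforward to sketch (a minimal illustration on synthetic data, not the classifiers or SEER data used in the study): each synthetic minority sample is a random interpolation between an existing minority sample and one of its k nearest minority-class neighbours.

```python
import numpy as np

rng = np.random.default_rng(42)

def smote(X_min, n_new, k=5):
    """Synthetic minority over-sampling: each synthetic point lies on the
    segment between a minority sample and one of its k nearest
    minority-class neighbours (minimal sketch of the SMOTE idea)."""
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1 : k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()                       # interpolation fraction
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synth)

# Imbalanced toy data: 8 minority samples up-sampled with 20 synthetic ones.
X_min = rng.normal(size=(8, 3))
X_new = smote(X_min, 20)
print(X_new.shape)   # -> (20, 3)
```

    Unlike plain duplication, the interpolated points broaden the minority class region, which is why SMOTE can help a standard classifier on imbalanced data.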

  11. Parallel cascade selection molecular dynamics for efficient conformational sampling and free energy calculation of proteins

    NASA Astrophysics Data System (ADS)

    Kitao, Akio; Harada, Ryuhei; Nishihara, Yasutaka; Tran, Duy Phuoc

    2016-12-01

    Parallel Cascade Selection Molecular Dynamics (PaCS-MD) was proposed as an efficient conformational sampling method to investigate conformational transition pathway of proteins. In PaCS-MD, cycles of (i) selection of initial structures for multiple independent MD simulations and (ii) conformational sampling by independent MD simulations are repeated until the convergence of the sampling. The selection is conducted so that protein conformation gradually approaches a target. The selection of snapshots is a key to enhance conformational changes by increasing the probability of rare event occurrence. Since the procedure of PaCS-MD is simple, no modification of MD programs is required; the selections of initial structures and the restart of the next cycle in the MD simulations can be handled with relatively simple scripts with straightforward implementation. Trajectories generated by PaCS-MD were further analyzed by the Markov state model (MSM), which enables calculation of free energy landscape. The combination of PaCS-MD and MSM is reported in this work.
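
    The select-and-restart cycle of PaCS-MD can be caricatured with a 1-D toy in place of the MD engine (everything below is an illustrative stand-in: random walks replace the independent MD runs, and distance to a scalar target replaces the structural similarity measure used to rank snapshots).

```python
import numpy as np

rng = np.random.default_rng(1)
TARGET = 10.0            # stand-in for the target conformation
N_REPLICAS = 8

def short_run(x0, steps=50):
    """A short 'simulation': a 1-D random walk from a selected snapshot."""
    return x0 + np.cumsum(rng.normal(scale=0.3, size=steps))

starts = np.zeros(N_REPLICAS)
for cycle in range(30):
    snaps = np.concatenate([short_run(x) for x in starts])
    # Selection: the snapshots nearest the target seed the next cycle's runs.
    starts = snaps[np.argsort(np.abs(snaps - TARGET))[:N_REPLICAS]]
    if np.abs(starts - TARGET).min() < 0.1:
        break

print(cycle + 1, "cycles; best distance:", np.abs(starts - TARGET).min())
```

    Repeated selection makes the rare drift toward the target compound across cycles, which is the rare-event enhancement the method relies on; real PaCS-MD restarts short MD runs from the selected snapshots, and the resulting trajectories can then be fed to a Markov state model for free energy estimates.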

  12. Development of molecularly imprinted column-on line-two dimensional liquid chromatography for rapidly and selectively monitoring estradiol in cosmetics.

    PubMed

    Guo, Pengqi; Xu, Xinya; Xian, Liang; Ge, Yanhui; Luo, Zhimin; Du, Wei; Jing, Wanghui; Zeng, Aiguo; Chang, Chun; Fu, Qiang

    2016-12-01

    Nowadays, the illegal use of estradiol in cosmetics has caused a series of incidents that seriously endanger public health. It is therefore imperative to establish a simple, fast and specific method for monitoring the illegal use of estradiol in cosmetics. In the current study, we developed a molecularly imprinted monolithic column two-dimensional liquid chromatography method (MIMC-2D-LC) for the rapid and selective determination of estradiol in various cosmetic samples. The polymerization conditions, morphology, structural properties, surface groups, and adsorption performance of the prepared material were investigated. The MIMC-2D-LC method was validated and successfully used for detecting estradiol in cosmetic samples with good selectivity, sensitivity, efficiency and reproducibility. The linear range of the MIMC-2D-LC method for estradiol was 0.5-50 μg g⁻¹, with a limit of detection of 0.08 μg g⁻¹. Finally, six batches of cosmetic samples obtained from local markets were tested by the proposed method. The results showed that the illegal use of estradiol still exists in commercially available samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Evaluation of a gas chromatography method for azelaic acid determination in selected biological samples

    PubMed Central

    Garelnabi, Mahdi; Litvinov, Dmitry; Parthasarathy, Sampath

    2010-01-01

    Background: Azelaic acid (AzA) is the best-known dicarboxylic acid to have pharmaceutical benefits and clinical applications and also to be associated with the pathophysiology of some diseases. Materials and Methods: We extracted and methyl-esterified AzA and determined its concentration in human plasma obtained from healthy individuals and also in mice fed an AzA-containing diet for three months. Results: AzA was detected by gas chromatography (GC) and confirmed by liquid chromatography-mass spectrometry (LC-MS) and gas chromatography-mass spectrometry (GC-MS). Our results have shown that AzA can be determined efficiently in selected biological samples by the GC method, with a 1 nM limit of detection (LoD); the limit of quantification (LoQ) was established at 50 nM. Analytical sensitivity, as assayed in hexane, was 0.050 nM. The method demonstrated 8-10% CV batch repeatability across the sample types and 13-18.9% CV in the within-lab precision analysis. The method showed that AzA can be efficiently recovered from various sample preparations, including liver tissue homogenate (95%) and human plasma (97%). Conclusions: Because of its simplicity and low limit of quantification, the present method provides a useful tool for determining AzA in various biological sample preparations. PMID:22558586

  14. Methods for collection and analysis of water samples

    USGS Publications Warehouse

    Rainwater, Frank Hays; Thatcher, Leland Lincoln

    1960-01-01

    This manual contains methods used by the U.S. Geological Survey to collect, preserve, and analyze water samples. Throughout, the emphasis is on obtaining analytical results that accurately describe the chemical composition of the water in situ. Among the topics discussed are selection of sampling sites, frequency of sampling, field equipment, preservatives and fixatives, analytical techniques of water analysis, and instruments. Seventy-seven laboratory and field procedures are given for determining fifty-three water properties.

  15. METHOD 544. DETERMINATION OF MICROCYSTINS AND ...

    EPA Pesticide Factsheets

    Method 544 is an accurate and precise analytical method to determine six microcystins (including MC-LR) and nodularin in drinking water using solid phase extraction and liquid chromatography tandem mass spectrometry (SPE-LC/MS/MS). The advantage of this SPE-LC/MS/MS method is its sensitivity and its ability to speciate the microcystins. This method development task establishes sample preservation techniques, sample concentration and analytical procedures, aqueous and extract holding time criteria, and quality control procedures. Draft Method 544 has undergone a multi-laboratory verification to ensure that other laboratories can implement the method and achieve the quality control measures specified in the method. It is anticipated that Method 544 may be used in UCMR 4 to collect nationwide occurrence data for selected microcystins in drinking water. The purpose of this research project is to develop an accurate and precise analytical method to concentrate and determine selected MCs and nodularin in drinking water.

  16. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

    We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance, and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within the tree-structured methods, with a strong relationship and equally important explanatory variables, the one-standard-error rule was more likely to choose the correct model than the other tree-selection rules; the same held 1) with weaker relationships and equally important explanatory variables, and 2) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
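
    The 'one standard error' tree-selection rule compared above can be stated compactly (a generic sketch with hypothetical cross-validation numbers, not the study's data): pick the least complex tree whose cross-validated error is within one standard error of the minimum.

```python
import numpy as np

def one_se_choice(complexities, cv_mean, cv_se):
    """One-standard-error rule: among candidate tree sizes, return the least
    complex one whose CV error is within one SE of the minimum CV error."""
    complexities = np.asarray(complexities)
    cv_mean, cv_se = np.asarray(cv_mean), np.asarray(cv_se)
    best = np.argmin(cv_mean)
    threshold = cv_mean[best] + cv_se[best]
    ok = np.flatnonzero(cv_mean <= threshold)
    return complexities[ok[np.argmin(complexities[ok])]]

# Hypothetical CV results for trees of increasing size (terminal nodes).
sizes = [1, 2, 3, 4, 5, 6]
means = [0.90, 0.55, 0.40, 0.38, 0.37, 0.39]
ses   = [0.05, 0.04, 0.04, 0.04, 0.04, 0.05]
print(one_se_choice(sizes, means, ses))   # -> 3
```

    The 'minimum risk' rule would instead return the size-5 tree that minimizes the CV error outright; the one-SE rule trades a little apparent accuracy for a simpler, more stable tree.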

  17. [Study on Application of NIR Spectral Information Screening in Identification of Maca Origin].

    PubMed

    Wang, Yuan-zhong; Zhao, Yan-li; Zhang, Ji; Jin, Hang

    2016-02-01

    The medicinal and edible plant Maca is rich in various nutrients and has great medicinal value. Based on near-infrared diffuse reflectance spectra, 139 Maca samples collected from Peru and Yunnan were used to identify their geographical origins. Multiplicative signal correction (MSC) coupled with the second derivative (SD) and a Norris derivative filter (ND) was employed for spectral pretreatment. The spectral range (7,500-4,061 cm⁻¹) was chosen by the spectral standard deviation. Combined with principal component analysis-Mahalanobis distance (PCA-MD), the appropriate number of principal components was selected as 5. Based on the spectral range and the number of principal components selected, two abnormal samples were eliminated by a modular group iterative singular sample diagnosis method. Then, four methods were used to filter spectral variable information: competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MC-UVE), genetic algorithm (GA) and subwindow permutation analysis (SPA). The filtered spectral variable information was evaluated by model population analysis (MPA). The results showed that RMSECV(SPA) > RMSECV(CARS) > RMSECV(MC-UVE) > RMSECV(GA), at 2.14, 2.05, 2.02, and 1.98, with 250, 240, 250 and 70 spectral variables, respectively. According to the filtered spectral variables, partial least squares discriminant analysis (PLS-DA) was used to build the model, with a random selection of 97 samples as the training set and the other 40 samples as the validation set. The results showed that, for R²: GA > MC-UVE > CARS > SPA; for RMSEC and RMSEP: GA < MC-UVE < CARS

  18. On the enhanced sampling over energy barriers in molecular dynamics simulations.

    PubMed

    Gao, Yi Qin; Yang, Lijiang

    2006-09-21

    We present here calculations of free energies of multidimensional systems using an efficient sampling method. The method uses a transformed potential energy surface, which allows an efficient sampling of both low and high energy spaces and accelerates transitions over barriers. It allows efficient sampling of the configuration space over and only over the desired energy range(s). It does not require predetermined or selected reaction coordinate(s). We apply this method to study the dynamics of slow barrier crossing processes in a disaccharide and a dipeptide system.

  19. Migration monitoring with automated technology

    Treesearch

    Rhonda L. Millikin

    2005-01-01

    Automated technology can supplement ground-based methods of migration monitoring by providing: (1) unbiased and automated sampling; (2) independent validation of current methods; (3) a larger sample area for landscape-level analysis of habitat selection for stopover, and (4) an opportunity to study flight behavior. In particular, radar-acoustic sensor fusion can...

  20. [Selection of reference genes of Siraitia grosvenorii by real-time PCR].

    PubMed

    Tu, Dong-ping; Mo, Chang-ming; Ma, Xiao-jun; Zhao, Huan; Tang, Qi; Huang, Jie; Pan, Li-mei; Wei, Rong-chang

    2015-01-01

    Siraitia grosvenorii is a traditional Chinese medicine that is also used as food. In this study, six candidate reference genes were selected by real-time quantitative PCR, and the expression stability of the candidate reference genes in the different samples was analyzed using the software and methods geNorm, NormFinder, BestKeeper, the Delta CT method and RefFinder; reference genes for S. grosvenorii were selected for the first time. The results showed that 18S rRNA was the most stably expressed gene in all samples and was the best reference gene for expression analysis. This study provides guidance for the analysis of gene expression by qRT-PCR, supplying suitable reference genes to ensure reliable results in studies of differentially expressed genes in synthesis and biological pathways, as well as other genes, of S. grosvenorii.

  1. Apparatus and method for the characterization of respirable aerosols

    DOEpatents

    Clark, Douglas K.; Hodges, Bradley W.; Bush, Jesse D.; Mishima, Jofu

    2016-05-31

    An apparatus for the characterization of respirable aerosols, including: a burn chamber configured to selectively contain a sample that is selectively heated to generate an aerosol; a heating assembly disposed within the burn chamber adjacent to the sample; and a sampling segment coupled to the burn chamber and configured to collect the aerosol such that it may be analyzed. The apparatus also includes an optional sight window disposed in a wall of the burn chamber such that the sample may be viewed during heating. Optionally, the sample includes one of a Lanthanide, an Actinide, and a Transition metal.

  2. Assessment of the Implementation of Continuous Assessment: The Case of METTU University

    ERIC Educational Resources Information Center

    Walde, Getinet Seifu

    2016-01-01

    This paper examines the status of the implementation of continuous assessment (CA) in Mettu University. A stratified random sampling method was used to select 309 students and 29 instructors, and a purposive method was used to select quality assurance and faculty deans. Questionnaires, focus group discussions, interviews and documents were used for data…

  3. Site Selection in Experiments: An Assessment of Site Recruitment and Generalizability in Two Scale-Up Studies

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Fellers, Lauren; Caverly, Sarah; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Ruiz de Castilla, Veronica

    2016-01-01

    Recently, statisticians have begun developing methods to improve the generalizability of results from large-scale experiments in education. This work has included the development of methods for improved site selection when random sampling is infeasible, including the use of stratification and targeted recruitment strategies. This article provides…

  4. Detection of lead(II) ions with a DNAzyme and isothermal strand displacement signal amplification.

    PubMed

    Li, Wenying; Yang, Yue; Chen, Jian; Zhang, Qingfeng; Wang, Yan; Wang, Fangyuan; Yu, Cong

    2014-03-15

    A DNAzyme-based method for the sensitive and selective quantification of lead(II) ions has been developed. A DNAzyme that requires Pb²⁺ for activation was selected. An RNA-containing DNA substrate was cleaved by the DNAzyme in the presence of Pb²⁺. The 2',3'-cyclic phosphate of the cleaved 5'-part of the substrate was efficiently removed by exonuclease III. The remaining part of the single-stranded DNA (9 or 13 bases long) was subsequently used as the primer for a strand displacement amplification reaction (SDAR). The method is highly sensitive: 200 pM lead(II) could be easily detected. A number of interfering ions were tested, and the sensor showed good selectivity. Underground water samples were also tested, demonstrating the feasibility of the current approach for real sample applications. It is feasible that our method could be used to develop new DNAzyme- or aptazyme-based sensing methods for the quantification of other target analytes with high sensitivity and selectivity. © 2013 Elsevier B.V. All rights reserved.

  5. Using a Calendar and Explanatory Instructions to Aid Within-Household Selection in Mail Surveys

    ERIC Educational Resources Information Center

    Stange, Mathew; Smyth, Jolene D.; Olson, Kristen

    2016-01-01

    Although researchers can easily select probability samples of addresses using the U.S. Postal Service's Delivery Sequence File, randomly selecting respondents within households for surveys remains challenging. Researchers often place within-household selection instructions, such as the next or last birthday methods, in survey cover letters to…

  6. [Measurement of Water COD Based on UV-Vis Spectroscopy Technology].

    PubMed

    Wang, Xiao-ming; Zhang, Hai-liang; Luo, Wei; Liu, Xue-mei

    2016-01-01

    Ultraviolet/visible (UV/Vis) spectroscopy was used to measure the COD of water. A total of 135 water samples were collected from Zhejiang province. Raw spectra and three different pretreatment methods (multiplicative scatter correction (MSC), standard normal variate (SNV) and first derivatives) were compared to determine the optimal pretreatment method for analysis. Spectral variable selection is an important strategy in spectrum modeling analysis, because it tends toward a parsimonious data representation and can lead to multivariate models with better performance. In order to simplify the calibration models, the preprocessed spectra were then used to select sensitive wavelengths by the competitive adaptive reweighted sampling (CARS), random frog and genetic algorithm (GA) methods. Different numbers of sensitive wavelengths were selected by the different variable selection methods with the SNV preprocessing method. Partial least squares (PLS) was used to build models with the full spectra, and an extreme learning machine (ELM) was applied to build models with the selected wavelength variables. The overall results showed that the ELM model performed better than the PLS model, and the ELM model with the wavelengths selected by CARS obtained the best results, with a determination coefficient (R²), RMSEP and RPD of 0.82, 14.48 and 2.34 for the prediction set. The results indicated that it is feasible to use UV/Vis spectroscopy with characteristic wavelengths obtained by the CARS variable selection method, combined with ELM calibration, for the rapid and accurate determination of COD in aquaculture water. Moreover, this study lays the foundation for further implementation of online analysis of aquaculture water and rapid determination of other water quality parameters.

  7. Identification and selection of cases and controls in the Pneumonia Etiology Research for Child Health project.

    PubMed

    Deloria-Knoll, Maria; Feikin, Daniel R; Scott, J Anthony G; O'Brien, Katherine L; DeLuca, Andrea N; Driscoll, Amanda J; Levine, Orin S

    2012-04-01

    Methods for the identification and selection of patients (cases) with severe or very severe pneumonia and controls for the Pneumonia Etiology Research for Child Health (PERCH) project were needed. Issues considered include eligibility criteria and sampling strategies, whether to enroll hospital or community controls, whether to exclude controls with upper respiratory tract infection (URTI) or nonsevere pneumonia, and matching criteria, among others. PERCH ultimately decided to enroll community controls and an additional human immunodeficiency virus (HIV)-infected control group at high HIV-prevalence sites matched on age and enrollment date of cases; controls with symptoms of URTI or nonsevere pneumonia will not be excluded. Systematic sampling of cases (when necessary) and random sampling of controls will be implemented. For each issue, we present the options that were considered, the advantages and disadvantages of each, the rationale for the methods selected for PERCH, and remaining implications and limitations.
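
    The two selection schemes named above are standard survey techniques: systematic sampling takes every k-th case after a random start, while simple random sampling draws controls uniformly without replacement. A minimal sketch over hypothetical rosters (the roster sizes and k are illustrative, not from PERCH):

```python
import random

def systematic_sample(population, k):
    """Every k-th element after a random start -- systematic sampling."""
    start = random.randrange(k)
    return population[start::k]

def simple_random_sample(population, n):
    """Simple random sample of n elements without replacement."""
    return random.sample(population, n)

random.seed(0)
cases = list(range(100))            # hypothetical case roster
controls = list(range(100, 400))    # hypothetical community roster
print(len(systematic_sample(cases, 5)), len(simple_random_sample(controls, 20)))
```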

  8. A novel aptamer-based online magnetic solid phase extraction method for the selective determination of 8-hydroxy-2'-deoxyguanosine in human urine.

    PubMed

    Gan, Haijiao; Xu, Hui

    2018-05-30

    In this work, an innovative magnetic aptamer adsorbent (Fe3O4-aptamer MNPs) was synthesized for the selective extraction of 8-hydroxy-2'-deoxyguanosine (8-OHdG). Amino-functionalized Fe3O4 was crosslinked with the 8-OHdG aptamer by glutaraldehyde and fixed into a stainless steel tube as the sorbent for magnetic solid phase extraction (MSPE). After selective extraction by the aptamer adsorbent, the adsorbed 8-OHdG was desorbed dynamically and analyzed online by high performance liquid chromatography-mass spectrometry (HPLC-MS). The synthesized sorbent presented outstanding features, including specific selectivity, high enrichment capacity, stability and biocompatibility. Moreover, the proposed MSPE-HPLC-MS method integrates the adsorption and desorption operations, greatly simplifying the analysis process and reducing human error. When compared with offline MSPE, a sensitivity enhancement of 800 times was obtained for the online method. Experimental parameters such as the amount of sorbent, sample flow rate and sample volume were optimized systematically. Under the optimal conditions, a low limit of detection (0.01 ng mL-1, S/N = 3), a low limit of quantification (0.03 ng mL-1, S/N = 10) and a wide linear range with a satisfactory correlation coefficient (R2 ≥ 0.9992) were obtained, and the recoveries of 8-OHdG in the urine samples varied from 82% to 116%. All these results reveal that the method is simple, rapid, selective, sensitive and automated, and it could become a viable approach for the selective determination of trace 8-OHdG in complex urinary samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Method and apparatus for noble gas atom detection with isotopic selectivity

    DOEpatents

    Hurst, G. Samuel; Payne, Marvin G.; Chen, Chung-Hsuan; Parks, James E.

    1984-01-01

    Apparatus and methods of operation are described for determining, with isotopic selectivity, the number of noble gas atoms in a sample. The analysis is conducted within an evacuated chamber which can be isolated by a valve from a vacuum pumping system capable of producing a pressure of 10^-8 Torr. Provision is made to pass pulses of laser beams through the chamber, these pulses having wavelengths appropriate for the resonance ionization of atoms of the noble gas under analysis. A mass filter within the chamber selects ions of a specific isotope of the noble gas, and means are provided to accelerate these selected ions sufficiently for implantation into a target. Specific types of targets are discussed. An electron measuring device produces a signal relatable to the number of ions implanted into the target and thus to the number of atoms of the selected isotope of the noble gas removed from the gas sample. The measurement can be continued until a substantial fraction, or all, of the atoms in the sample have been counted. Furthermore, additional embodiments of the apparatus are described for bunching the atoms of a noble gas for more rapid analysis, and for changing the target for repetitive cycling of the gas in the chamber. The number of repetitions of the cyclic steps depends upon the concentration of the isotope of interest, the separative efficiency of the mass filter, etc. The cycles are continued until a desired selectivity is achieved. Also described are components and a method of operation for a pre-enrichment step for use when introduction of the total sample would elevate the pressure within the chamber to levels in excess of those for operation of the mass filter, specifically a quadrupole mass filter. Specific examples of three noble gas isotope analyses are described.

  10. Selected problems with boron determination in water treatment processes. Part I: comparison of the reference methods for ICP-MS and ICP-OES determinations.

    PubMed

    Kmiecik, Ewa; Tomaszewska, Barbara; Wątor, Katarzyna; Bodzek, Michał

    2016-06-01

    The aim of the study was to compare two reference methods for the determination of boron in water samples and to assess the impact of the sample preparation method on the results obtained. Samples were collected during different desalination processes: ultrafiltration and the double reverse osmosis system, connected in series. From each point, samples were prepared in four different ways: the first was filtered (through a 0.45 μm membrane filter) and acidified (using 1 mL of ultrapure nitric acid per 100 mL of sample) (FA), the second was unfiltered and not acidified (UFNA), the third was filtered but not acidified (FNA), and the fourth was unfiltered but acidified (UFA). All samples were analysed using two analytical methods: inductively coupled plasma mass spectrometry (ICP-MS) and inductively coupled plasma optical emission spectrometry (ICP-OES). The results obtained were compared and correlated, and the differences between them were studied. The results show that there are statistically significant differences between the concentrations obtained using the ICP-MS and ICP-OES techniques, regardless of the method of sample preparation (sample filtration and preservation). Both the ICP-MS and ICP-OES methods can nevertheless be used for determination of the boron concentration in water. The differences in the boron concentrations obtained using the two methods can be caused by high concentrations of certain elements in selected whole-water digestates and by matrix effects. Higher concentrations of iron (1-20 mg/L) than of chromium (0.02-1 mg/L) in the samples analysed can influence boron determination; when iron concentrations are high, the boron emission spectrum appears as a doubled, overlapping peak.

  11. Comparing State SAT Scores: Problems, Biases, and Corrections.

    ERIC Educational Resources Information Center

    Gohmann, Stephen F.

    1988-01-01

    One method to correct for selection bias in comparing Scholastic Aptitude Test (SAT) scores among states is presented, which is a modification of J. J. Heckman's Selection Bias Correction (1976, 1979). Empirical results suggest that sample selection bias is present in SAT score regressions. (SLD)

  12. Comparison of a real-time PCR method with a culture method for the detection of Salmonella enterica serotype enteritidis in naturally contaminated environmental samples from integrated poultry houses.

    PubMed

    Lungu, Bwalya; Waltman, W Douglas; Berghaus, Roy D; Hofacre, Charles L

    2012-04-01

    Conventional culture methods have traditionally been considered the "gold standard" for the isolation and identification of foodborne bacterial pathogens. However, culture methods are labor-intensive and time-consuming. A Salmonella enterica serotype Enteritidis-specific real-time PCR assay that recently received interim approval by the National Poultry Improvement Plan for the detection of Salmonella Enteritidis was evaluated against a culture method that had also received interim National Poultry Improvement Plan approval for the analysis of environmental samples from integrated poultry houses. The method was validated with 422 field samples collected by either the boot sock or drag swab method. The samples were cultured by selective enrichment in tetrathionate broth followed by transfer onto a modified semisolid Rappaport-Vassiliadis medium and then plating onto brilliant green with novobiocin and xylose lysine brilliant Tergitol 4 plates. One-milliliter aliquots of the selective enrichment broths from each sample were collected for DNA extraction by the commercial PrepSEQ nucleic acid extraction assay and analysis by the Salmonella Enteritidis-specific real-time PCR assay. The real-time PCR assay detected no significant differences between the boot sock and drag swab samples. In contrast, the culture method detected a significantly higher number of positive samples from boot socks. The diagnostic sensitivity of the real-time PCR assay for the field samples was significantly higher than that of the culture method. The kappa value obtained was 0.46, indicating moderate agreement between the real-time PCR assay and the culture method. In addition, the real-time PCR method had a turnaround time of 2 days compared with 4 to 8 days for the culture method. 
The higher sensitivity as well as the reduction in time and labor makes this real-time PCR assay an excellent alternative to conventional culture methods for diagnostic purposes, surveillance, and research studies to improve food safety.
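
    The kappa statistic cited above corrects observed agreement for the agreement expected by chance. A sketch of Cohen's kappa for a 2x2 PCR-versus-culture table; the counts below are hypothetical, not taken from the study:

```python
def cohens_kappa(a, b, c, d):
    """Kappa for a 2x2 agreement table:
    a = both positive, b = PCR+/culture-, c = PCR-/culture+, d = both negative."""
    n = a + b + c + d
    observed = (a + d) / n
    # Expected agreement by chance, from the marginal totals
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical counts for 422 samples
print(round(cohens_kappa(60, 70, 12, 280), 2))
```

    Values between 0.41 and 0.60 are conventionally read as "moderate" agreement, which is how the abstract characterizes its kappa of 0.46.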

  13. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    PubMed

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.
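
    In smoothing spline ANOVA notation, the kind of regularization problem described above takes the following schematic form (the exact component norms and adaptive weights follow the paper, so this is only a sketch of the general shape):

```latex
\min_{f \in \mathcal{F}} \; \frac{1}{n} \sum_{i=1}^{n} \bigl( y_i - f(x_i) \bigr)^2
  \;+\; \lambda \sum_{j=1}^{p} w_j \, \lVert f_j \rVert ,
\qquad f = b + \sum_{j=1}^{p} f_j ,
```

    where the f_j are the functional components of the ANOVA decomposition and the adaptive weights w_j penalize unimportant components more heavily, shrinking them to exactly zero so that they drop out of the selected model.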

  15. Classification of epileptic EEG signals based on simple random sampling and sequential feature selection.

    PubMed

    Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui

    2016-06-01

    Electroencephalogram (EEG) signals are used broadly in the medical field. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease and sleep disorders. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, the simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least squares support vector machine (LS-SVM) classifier to classify the EEG signals. The experimental results show that the method achieves 99.90%, 99.80% and 100% for classification accuracy, sensitivity and specificity, respectively.
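
    Sequential feature selection of the forward kind greedily adds, at each step, the feature that most improves a scoring function. A minimal pure-Python sketch with a hypothetical additive score standing in for the LS-SVM evaluator used in the paper:

```python
def sequential_forward_selection(n_features, score, k):
    """Greedily grow a feature subset until it has k features.
    `score(subset)` returns a quality value to maximize."""
    selected = []
    while len(selected) < k:
        remaining = [f for f in range(n_features) if f not in selected]
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected

# Hypothetical per-feature usefulness; the subset score is simply additive here
usefulness = [0.1, 0.9, 0.3, 0.7, 0.2]
score = lambda subset: sum(usefulness[f] for f in subset)
print(sequential_forward_selection(5, score, 3))  # prints [1, 3, 2]
```

    In practice the score would be a cross-validated classifier accuracy, so each greedy step trains and evaluates one model per remaining feature.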

  16. Targeted Proteomic Quantification on Quadrupole-Orbitrap Mass Spectrometer*

    PubMed Central

    Gallien, Sebastien; Duriez, Elodie; Crone, Catharina; Kellmann, Markus; Moehring, Thomas; Domon, Bruno

    2012-01-01

    There is an immediate need for improved methods to systematically and precisely quantify large sets of peptides in complex biological samples. To date protein quantification in biological samples has been routinely performed on triple quadrupole instruments operated in selected reaction monitoring mode (SRM), and two major challenges remain. Firstly, the number of peptides to be included in one survey experiment needs to be increased to routinely reach several hundreds, and secondly, the degree of selectivity should be improved so as to reliably discriminate the targeted analytes from background interferences. High resolution and accurate mass (HR/AM) analysis on the recently developed Q-Exactive mass spectrometer can potentially address these issues. This instrument presents a unique configuration: it is constituted of an orbitrap mass analyzer equipped with a quadrupole mass filter as the front-end for precursor ion mass selection. This configuration enables new quantitative methods based on HR/AM measurements, including targeted analysis in MS mode (single ion monitoring) and in MS/MS mode (parallel reaction monitoring). The ability of the quadrupole to select a restricted m/z range allows one to overcome the dynamic range limitations associated with trapping devices, and the MS/MS mode provides an additional stage of selectivity. When applied to targeted protein quantification in urine samples and benchmarked with the reference SRM technique, the quadrupole-orbitrap instrument exhibits similar or better performance in terms of selectivity, dynamic range, and sensitivity. This high performance is further enhanced by leveraging the multiplexing capability of the instrument to design novel acquisition methods and apply them to large targeted proteomic studies for the first time, as demonstrated on 770 tryptic yeast peptides analyzed in one 60-min experiment. 
The increased quality of quadrupole-orbitrap data has the potential to improve existing protein quantification methods in complex samples and address the pressing demand of systems biology or biomarker evaluation studies. PMID:22962056

  17. 40 CFR 63.7824 - What test methods and other procedures must I use to establish and demonstrate initial compliance...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... select sampling port locations and the number of traverse points. Sampling ports must be located at the... Method 25 (40 CFR part 60, appendix A), milligrams per dry standard cubic meters (mg/dscm) for each day... = Conversion factor (mg/lb); and K = Daily production rate of sinter, tons/hr. (4) Continue the sampling and...

  18. 40 CFR 63.7824 - What test methods and other procedures must I use to establish and demonstrate initial compliance...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... select sampling port locations and the number of traverse points. Sampling ports must be located at the... Method 25 (40 CFR part 60, appendix A), milligrams per dry standard cubic meters (mg/dscm) for each day... = Conversion factor (mg/lb); and K = Daily production rate of sinter, tons/hr. (4) Continue the sampling and...

  19. Guidelines for Measuring Disease Episodes: An Analysis of the Effects on the Components of Expenditure Growth.

    PubMed

    Dunn, Abe; Liebman, Eli; Rittmueller, Lindsey; Shapiro, Adam Hale

    2017-04-01

    To provide guidelines to researchers measuring health expenditures by disease and compare these methodologies' implied inflation estimates. A convenience sample of commercially insured individuals over the 2003 to 2007 period from Truven Health. Population weights are applied, based on age, sex, and region, to make the sample of over 4 million enrollees representative of the entire commercially insured population. Different methods are used to allocate medical-care expenditures to distinct condition categories. We compare the estimates of disease-price inflation by method. Across a variety of methods, the compound annual growth rate stays within the range 3.1 to 3.9 percentage points. Disease-specific inflation measures are more sensitive to the selected methodology. The selected allocation method impacts aggregate inflation rates, but considering the variety of methods applied, the differences appear small. Future research is necessary to better understand these differences in other population samples and to connect disease expenditures to measures of quality. © Health Research and Educational Trust.
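
    The compound annual growth rate quoted above is the constant yearly rate that carries spending from its starting to its ending level. A minimal sketch with hypothetical spending figures (the 3.1 to 3.9 range in the abstract refers to the study's own data, not to these numbers):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate, returned as a fraction per year."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Hypothetical per-enrollee spending, 2003 vs. 2007 (4 years of growth)
growth = cagr(3000.0, 3450.0, 4)
print(round(100 * growth, 2))  # percent per year
```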

  20. A method for measuring total thiaminase activity in fish tissues

    USGS Publications Warehouse

    Zajicek, James L.; Tillitt, Donald E.; Honeyfield, Dale C.; Brown, Scott B.; Fitzsimons, John D.

    2005-01-01

    An accurate, quantitative, and rapid method for the measurement of thiaminase activity in fish samples is required to provide sufficient information to characterize the role of dietary thiaminase in the onset of thiamine deficiency in Great Lakes salmonines. A radiometric method that uses 14C-thiamine was optimized for substrate and co-substrate (nicotinic acid) concentrations, incubation time, and sample dilution. Total thiaminase activity was successfully determined in extracts of selected Great Lakes fishes and invertebrates. Samples included whole-body and selected tissues of forage fishes. Positive control material prepared from frozen alewives Alosa pseudoharengus collected in Lake Michigan enhanced the development and application of the method. The method allowed improved discrimination of thiaminolytic activity among forage fish species and their tissues. The temperature dependence of the thiaminase activity observed in crude extracts of Lake Michigan alewives followed a Q10 = 2 relationship for the 1-37 °C temperature range, which is consistent with the bacterial-derived thiaminase I protein. © Copyright by the American Fisheries Society 2005.
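
    A Q10 = 2 relationship means the reaction rate doubles for every 10 °C rise in temperature. A minimal sketch of the corresponding rate correction:

```python
def rate_at(t2, rate1, t1, q10=2.0):
    """Q10 temperature correction: the rate scales by a factor of q10
    for every 10 degC increase from t1 to t2."""
    return rate1 * q10 ** ((t2 - t1) / 10.0)

# If activity is 1.0 (arbitrary units) at 17 degC, predict it at 37 degC:
print(rate_at(37.0, 1.0, 17.0))  # prints 4.0 -- the rate doubles twice
```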

  1. Methodological approaches in analysing observational data: A practical example on how to address clustering and selection bias.

    PubMed

    Trutschel, Diana; Palm, Rebecca; Holle, Bernhard; Simon, Michael

    2017-11-01

    Because not every scientific question on effectiveness can be answered with randomised controlled trials, research methods that minimise bias in observational studies are required. Two major concerns influence the internal validity of effect estimates: selection bias and clustering. Hence, to reduce the bias of the effect estimates, more sophisticated statistical methods are needed. The aim is to introduce statistical approaches such as propensity score matching and mixed models into representative real-world analysis, and to present their implementation in the statistical software R so that the results can be reproduced. We perform a two-level analytic strategy to address the problems of bias and clustering: (i) generalised models with different abilities to adjust for dependencies are used to analyse binary data and (ii) the genetic matching and covariate adjustment methods are used to adjust for selection bias. Hence, we analyse the data from two population samples: the sample produced by the matching method and the full sample. The different analysis methods in this article produce different results but still point in the same direction. In our example, the estimate of the probability of receiving a case conference is higher in the treatment group than in the control group. Both strategies, genetic matching and covariate adjustment, have their limitations but complement each other to provide the whole picture. The statistical approaches were feasible for reducing bias but were nevertheless limited by the sample used. For each study and obtained sample, the pros and cons of the different methods have to be weighed. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
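
    Propensity score matching, one of the approaches named above, pairs each treated unit with the untreated unit whose estimated probability of treatment is closest. A self-contained sketch on hypothetical data (the paper uses genetic matching in R; this is plain 1:1 nearest-neighbour matching on a one-covariate logistic score, not the authors' implementation):

```python
import math
import random

def fit_logistic(x, treated, steps=3000, lr=0.1):
    """Logistic regression of treatment on one standardized covariate,
    fitted by plain gradient ascent on the log-likelihood."""
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, ti in zip(x, treated):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += ti - p
            g1 += (ti - p) * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def propensity_match(x, treated):
    """Greedy 1:1 nearest-neighbour matching on the propensity score."""
    mean = sum(x) / len(x)
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / len(x))
    z = [(v - mean) / sd for v in x]          # standardize for a stable fit
    b0, b1 = fit_logistic(z, treated)
    score = [1.0 / (1.0 + math.exp(-(b0 + b1 * zi))) for zi in z]
    ctrl = [i for i, t in enumerate(treated) if t == 0]
    pairs = []
    for i in [k for k, t in enumerate(treated) if t == 1]:
        j = min(ctrl, key=lambda c: abs(score[c] - score[i]))
        pairs.append((i, j))
        ctrl.remove(j)                        # match without replacement
    return pairs

random.seed(1)
# Hypothetical observational data: older residents are more likely treated
age = [random.gauss(70, 5) for _ in range(20)] + [random.gauss(60, 5) for _ in range(40)]
treated = [1] * 20 + [0] * 40
pairs = propensity_match(age, treated)
print(len(pairs))
```

    The matched sample then supports a direct outcome comparison between the paired groups, at the cost of discarding unmatched controls.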

  2. Residues of selected antibiotics in the South Moravian Rivers, Czech Republic.

    PubMed

    Jarova, Katerina; Vavrova, Milada; Koleckarova, Alice

    2015-01-01

    The aim of this study was to assess the contamination level of the aquatic ecosystems of the Oslava and Jihlava Rivers, and of the Nove Mlyny Water Reservoir, situated in the South Moravian Region (Czech Republic), by residues of selected veterinary pharmaceuticals. We isolated and determined 10 sulfonamide antibiotics in samples of surface water and bottom sediments using optimized analytical methods. Representative sampling sites across the entire basins of the selected waters were chosen. Samples were collected particularly near the larger cities in order to assess their possible impact on the aquatic ecosystems. Extraction, pre-concentration and purification of samples were performed using optimized solid phase extraction and pressurized solvent extraction methods. Final identification and quantification were carried out by high-performance liquid chromatography coupled with a diode array detector. The concentrations of sulfonamides in the water samples were all below the limit of detection. Regarding sediment samples, sulfadimidine was found at most sampling sites; its highest values were recorded in the Jihlava River (up to 979.8 µg/kg dry matter). Other frequently detected sulfonamides were sulfamethoxazole and sulfamerazine. Most other sulfonamides were below the limit of detection or limit of quantification. Monitoring of antibiotic residues in the environment, especially in the aquatic ecosystem, is a current topic due to their growing worldwide use in both human and veterinary medicine. The results obtained document the pollution of the selected rivers and water reservoir by particular sulfonamides, which basically reflects their application in veterinary medicine.

  3. Sample Collection Information Document for Chemical & Radiochemical Analytes – Companion to Selected Analytical Methods for Environmental Remediation and Recovery (SAM) 2012

    EPA Pesticide Factsheets

    Sample Collection Information Document is intended to provide sampling information to be used during site assessment, remediation and clearance activities following a chemical or radiological contamination incident.

  4. Novel ion imprinted magnetic mesoporous silica for selective magnetic solid phase extraction of trace Cd followed by graphite furnace atomic absorption spectrometry detection

    NASA Astrophysics Data System (ADS)

    Zhao, Bingshan; He, Man; Chen, Beibei; Hu, Bin

    2015-05-01

    Determination of trace Cd in environmental, biological and food samples is of great significance to toxicological research and environmental pollution monitoring. However, the direct determination of Cd in real-world samples is difficult due to its low concentration and the complex matrix. Herein, a novel Cd(II)-ion imprinted magnetic mesoporous silica (Cd(II)-II-MMS) was prepared and employed as a selective magnetic solid-phase extraction (MSPE) material for the extraction of trace Cd in real-world samples followed by graphite furnace atomic absorption spectrometry (GFAAS) detection. Under the optimized conditions, the detection limit of the proposed method was 6.1 ng L-1 for Cd with a relative standard deviation (RSD) of 4.0% (c = 50 ng L-1, n = 7), and the enrichment factor was 50-fold. To validate the proposed method, the Certified Reference Materials GSBZ 50009-88 environmental water, ZK018-1 lyophilized human urine and NIES10-b rice flour were analyzed, and the determined values were in good agreement with the certified values. The proposed method exhibited a robust anti-interference ability due to the good selectivity of Cd(II)-II-MMS toward Cd(II). It was successfully employed for the determination of trace Cd(II) in environmental water, human urine and rice samples with recoveries of 89.3-116%, demonstrating that the proposed method has good application potential for real-world samples with complex matrices.

  5. Determination of selected neurotoxic insecticides in small amounts of animal tissue utilizing a newly constructed mini-extractor.

    PubMed

    Seifertová, Marta; Čechová, Eliška; Llansola, Marta; Felipo, Vicente; Vykoukalová, Martina; Kočan, Anton

    2017-10-01

    We developed a simple analytical method for the simultaneous determination of representatives of various groups of neurotoxic insecticides (carbaryl, chlorpyrifos, cypermethrin, α- and β-endosulfan, and their metabolite endosulfan sulfate) in limited amounts of animal tissues containing different amounts of lipids. Selected tissues (rodent fat, liver, and brain) were extracted in a special in-house-designed mini-extractor constructed on the basis of the Soxhlet and Twisselmann extractors. A dried tissue sample placed in a small cartridge was extracted, while the nascent extract was simultaneously filtered through a layer of sodium sulfate. The extraction was followed by a combined clean-up, including gel permeation chromatography (in the case of high lipid content), ultrasonication, and solid-phase extraction chromatography using C18 silica and aluminum oxide. Gas chromatography coupled with high-resolution mass spectrometry was used for analyte separation, detection, and quantification. Average recoveries for the individual insecticides ranged from 82 to 111%. Expanded measurement uncertainties were generally lower than 35%. The developed method was successfully applied to rat tissue samples obtained from an animal model of insecticide exposure during brain development. This method may also be applied to the analytical treatment of small amounts of various types of animal and human tissue samples. A significant advantage of this method is its high sample throughput due to the simultaneous treatment of many samples. Graphical abstract: Optimized workflow for the determination of selected insecticides in small amounts of animal tissue, including the newly developed mini-extractor.

  6. Determination of secondary and tertiary amines as N-nitrosamine precursors in drinking water system using ultra-fast liquid chromatography-tandem mass spectrometry.

    PubMed

    Wu, Qihua; Shi, Honglan; Ma, Yinfa; Adams, Craig; Eichholz, Todd; Timmons, Terry; Jiang, Hua

    2015-01-01

    N-Nitrosamines are potent mutagenic and carcinogenic emerging water disinfection by-products (DBPs). The most effective strategy to control the formation of these DBPs is to minimize their precursors in source water. Secondary and tertiary amines are the dominant precursors of N-nitrosamine formation during the drinking water disinfection process. Therefore, screening for and removal of these amines in source water are essential for preventing the formation of N-nitrosamines. A rapid, simple, and sensitive ultrafast liquid chromatography-tandem mass spectrometry (UFLC-MS/MS) method was developed in this study to determine seven amines, including dimethylamine, ethylmethylamine, diethylamine, dipropylamine, trimethylamine, 3-(dimethylaminomethyl)indole, and 4-dimethylaminoantipyrine, as major precursors of N-nitrosamines in drinking water systems. No sample preparation is needed apart from a simple filtration. Separation and detection can be achieved in 11 min per sample. The method detection limits of the selected amines range from 0.02 μg/L to 1 μg/L, except for ethylmethylamine (5 μg/L), and good calibration linearity was achieved. The developed method was applied to determine the selected precursors in source water and drinking water samples collected from the Midwest area of the United States. In most water samples, the concentrations of the selected N-nitrosamine precursors were below their method detection limits. Dimethylamine was detected in some water samples at concentrations up to 25.4 μg/L. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Tabu search and binary particle swarm optimization for feature selection using microarray data.

    PubMed

    Chuang, Li-Yeh; Yang, Cheng-Huei; Yang, Cheng-Hong

    2009-12-01

    Gene expression profiles have great potential as a medical diagnosis tool because they represent the state of a cell at the molecular level. In cancer-type classification research, available training datasets generally have a fairly small sample size compared to the number of genes involved. This fact poses an unprecedented challenge to some classification methodologies due to training data limitations. Therefore, a good method for selecting genes relevant to sample classification is needed to improve predictive accuracy, and to avoid incomprehensibility due to the large number of genes investigated. In this article, we propose to combine tabu search (TS) and binary particle swarm optimization (BPSO) for feature selection. BPSO acts as a local optimizer each time the TS has been run for a single generation. The K-nearest neighbor method with leave-one-out cross-validation and a support vector machine with one-versus-rest serve as evaluators of the TS and BPSO. The proposed method is applied to 11 classification problems taken from the literature and compared with other approaches. Experimental results show that our method simplifies feature sets effectively and either obtains higher classification accuracy or uses fewer features than other feature selection methods.
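
    In binary PSO, each particle encodes a candidate feature subset as a bit mask, and velocities are squashed through a sigmoid to give per-bit selection probabilities. A minimal sketch of plain BPSO (without the tabu search layer) using a toy additive fitness standing in for the KNN/SVM evaluators of the paper:

```python
import math
import random

def bpso_feature_select(n_features, fitness, n_particles=12, iters=40,
                        w=0.7, c1=1.5, c2=1.5):
    """Minimal binary PSO: particles are 0/1 masks over features; velocities
    pass through a sigmoid to give the probability that each bit is 1."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = 1 if random.random() < sig(vel[i][d]) else 0
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

# Hypothetical fitness: reward two "informative" features, penalize subset size
informative = {2, 5}
def fitness(mask):
    return sum(mask[f] for f in informative) - 0.1 * sum(mask)

random.seed(0)
best, best_fit = bpso_feature_select(8, fitness)
print(best, round(best_fit, 2))
```

    In the paper's setup, the fitness would instead be a cross-validated classifier accuracy, and tabu search would restart BPSO from diversified, non-tabu subsets.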

  8. Sorbent-based sampling methods for volatile and semi-volatile organic compounds in air Part 1: Sorbent-based air monitoring options.

    PubMed

    Woolfenden, Elizabeth

    2010-04-16

    Sorbent tubes/traps are widely used in combination with gas chromatographic (GC) analytical methods to monitor the vapour-phase fraction of organic compounds in air. Target compounds range in volatility from acetylene and freons to phthalates and PCBs and include apolar, polar and reactive species. Airborne vapour concentrations will vary depending on the nature of the location, nearby pollution sources, weather conditions, etc. Levels can range from low percent concentrations in stack and vent emissions to low part per trillion (ppt) levels in ultra-clean outdoor locations. Hundreds, even thousands, of different compounds may be present in any given atmosphere. GC is commonly used in combination with mass spectrometry (MS) detection, especially for environmental monitoring or for screening uncharacterised workplace atmospheres. Given the complexity and variability of organic vapours in air, no one sampling approach suits every monitoring scenario. A variety of different sampling strategies and sorbent media have been developed to address specific applications. Key sorbent-based examples include: active (pumped) sampling onto tubes packed with one or more sorbents held at ambient temperature; diffusive (passive) sampling onto sorbent tubes/cartridges; on-line sampling of air/gas streams into cooled sorbent traps; and transfer of air samples from containers (canisters, Tedlar bags, etc.) into cooled sorbent focusing traps. Whichever sampling approach is selected, subsequent analysis almost always involves either solvent extraction or thermal desorption (TD) prior to GC(/MS) analysis. The overall performance of the air monitoring method will depend heavily on appropriate selection of key sampling and analytical parameters. This comprehensive review of air monitoring using sorbent tubes/traps is divided into 2 parts: (1) sorbent-based air sampling options; (2) sorbent selection and other aspects of optimizing sorbent-based air monitoring methods.
The paper presents current state-of-the-art and recent developments in relevant areas such as sorbent research, sampler design, enhanced approaches to analytical quality assurance and on-tube derivatisation. Copyright 2009 Elsevier B.V. All rights reserved.

  9. Determination of Low Concentrations of Acetochlor in Water by Automated Solid-Phase Extraction and Gas Chromatography with Mass-Selective Detection

    USGS Publications Warehouse

    Lindley, C.E.; Stewart, J.T.; Sandstrom, M.W.

    1996-01-01

    A sensitive and reliable gas chromatographic/mass spectrometric (GC/MS) method for determining acetochlor in environmental water samples was developed. The method involves automated extraction of the herbicide from a filtered 1 L water sample through a C18 solid-phase extraction column, elution from the column with hexane-isopropyl alcohol (3 + 1), and concentration of the extract with nitrogen gas. The herbicide is quantitated by capillary-column GC/MS with selected-ion monitoring of 3 characteristic ions. The single-operator method detection limit for reagent water samples is 0.0015 µg/L. Mean recoveries ranged from about 92 to 115% for 3 water matrixes fortified at 0.05 and 0.5 µg/L. Average single-operator precision, over the course of 1 week, was better than 5%.

  10. Development of andrographolide molecularly imprinted polymer for solid-phase extraction

    NASA Astrophysics Data System (ADS)

    Yin, Xiaoying; Liu, Qingshan; Jiang, Yifan; Luo, Yongming

    2011-06-01

    A method employing a molecularly imprinted polymer (MIP) as a selective sorbent for solid-phase extraction (SPE) sample pretreatment was developed. The polymers were prepared by precipitation polymerization with andrographolide as the template molecule. The structure of the MIP was characterized and its static adsorption capacity was measured by the Scatchard equation. In comparison with C18-SPE and non-imprinted polymer (NIP) SPE columns, the MIP-SPE column displays high selectivity and good affinity for andrographolide and dehydroandrographolide in the extract of the herb Andrographis paniculata (Burm.f.) Nees (APN). The MIP-SPE column capacity was 11.9 ± 0.6 μmol/g and 12.1 ± 0.5 μmol/g for andrographolide and dehydroandrographolide, respectively, 2-3 times higher than those of the other two columns. The precision and accuracy of the method were satisfactory, with recoveries between 96.4% and 103.8% (RSD 3.1-4.3%, n = 5) and 96.0% and 104.2% (RSD 2.9-3.7%, n = 5) for andrographolide and dehydroandrographolide, respectively. Various real samples were employed to confirm the feasibility of the method. The developed method demonstrates the potential of molecularly imprinted solid-phase extraction for rapid, selective, and effective sample pretreatment.

  11. A similarity based learning framework for interim analysis of outcome prediction of acupuncture for neck pain.

    PubMed

    Zhang, Gang; Liang, Zhaohui; Yin, Jian; Fu, Wenbin; Li, Guo-Zheng

    2013-01-01

    Chronic neck pain is a common morbid disorder in modern society. Acupuncture has long been administered for treating chronic pain as an alternative therapy, with its effectiveness supported by the latest clinical evidence. However, whether effectiveness differs across syndrome types remains in question, owing to the limits of sample size and of the statistical methods used. We applied machine learning methods in an attempt to solve this problem. Through a multi-objective sorting of subjective measurements, outstanding samples are selected to form the base of our kernel-oriented model. By calculating similarities between the sample of interest and the base samples, we make full use of the information contained in the known samples, which is especially effective in the case of a small sample set. To tackle the parameter selection problem in similarity learning, we propose an ensemble of learners with slightly different parameter settings to obtain a stronger learner. The experimental result on a real data set shows that, compared to some previous well-known methods, the proposed algorithm is capable of discovering the underlying difference among syndrome types and is feasible for predicting the effective tendency in clinical trials with large samples.
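    The core idea, a query scored by its kernel similarity to a set of selected base samples, with an ensemble over parameter settings, can be sketched as follows. This is an illustrative sketch only: the RBF kernel, the mean-similarity voting rule, and all function names are our assumptions, not the paper's exact model.

```python
import numpy as np

def rbf_similarity(x, base, gamma):
    """Similarity of a query sample to each base (outstanding) sample."""
    return np.exp(-gamma * np.sum((np.asarray(base, float) - x) ** 2, axis=1))

def ensemble_predict(x, base, labels, gammas):
    """Average similarity-weighted class scores over several kernel
    widths, so no single parameter setting has to be chosen."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    scores = np.zeros(classes.size)
    for g in gammas:
        s = rbf_similarity(x, base, g)
        for k, c in enumerate(classes):
            scores[k] += s[labels == c].mean()  # per-class mean similarity
    return classes[int(np.argmax(scores))]
```

Averaging over several gamma values is the ensemble trick the abstract describes: each slightly different parameter setting gives a weak learner, and the sum is the stronger combined vote.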

  12. [Study on the method for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples by direct current arc full spectrum direct reading atomic emission spectroscopy (DC-Arc-AES)].

    PubMed

    Hao, Zhi-hong; Yao, Jian-zhen; Tang, Rui-ling; Zhang, Xue-mei; Li, Wen-ge; Zhang, Qin

    2015-02-01

    A method for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples by direct current arc full-spectrum direct reading atomic emission spectroscopy (DC-Arc-AES) was established. The direct current arc full-spectrum direct reading atomic emission spectrometer, with its large-area solid-state detectors, provides full-spectrum direct reading and real-time background correction. New electrodes and a new buffer recipe were proposed in this paper, and a national patent has been applied for. Suitable analytical line pairs and background correction points were selected for each element, the internal standard method was adopted, and Ge was used as the internal standard. In the study of the current program, multistage currents were selected, with a different holding time set for each current step to ensure that each element has a good signal-to-noise ratio. The continuously rising current mode selected can effectively eliminate splashing of the sample. Argon as a shielding gas eliminates CN band generation and reduces the spectral background, and also plays a role in stabilizing the arc; an argon flow of 3.5 L x min(-1) was selected. An evaporation curve was made for each element, showing that the evaporation behavior of the elements is consistent; combined with the effects of different spectrographic times on intensity and background, a spectrographic time of 35 s was selected. National standard substances were selected as the standard series; the series includes standard substances of different natures and contents, meeting the requirements for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples. Under the optimum experimental conditions, the detection limits for B, Mo, Ag, Sn and Pb are 1.1, 0.09, 0.01, 0.41, and 0.56 microg x g(-1) respectively, and the precisions (RSD, n=12) for B, Mo, Ag, Sn and Pb are 4.57%-7.63%, 5.14%-7.75%, 5.48%-12.30%, 3.97%-10.46%, and 4.26%-9.21% respectively.
The analytical accuracy was validated with national standard materials, and the results agree with the certified values. The method is simple and rapid, provides an advanced means of determining trace boron, molybdenum, silver, tin and lead in geochemical samples, and has proved practical.

  13. High redshift galaxies in the ALHAMBRA survey . I. Selection method and number counts based on redshift PDFs

    NASA Astrophysics Data System (ADS)

    Viironen, K.; Marín-Franch, A.; López-Sanjuan, C.; Varela, J.; Chaves-Montero, J.; Cristóbal-Hornillos, D.; Molino, A.; Fernández-Soto, A.; Vilella-Rojo, G.; Ascaso, B.; Cenarro, A. J.; Cerviño, M.; Cepa, J.; Ederoclite, A.; Márquez, I.; Masegosa, J.; Moles, M.; Oteo, I.; Pović, M.; Aguerri, J. A. L.; Alfaro, E.; Aparicio-Villegas, T.; Benítez, N.; Broadhurst, T.; Cabrera-Caño, J.; Castander, J. F.; Del Olmo, A.; González Delgado, R. M.; Husillos, C.; Infante, L.; Martínez, V. J.; Perea, J.; Prada, F.; Quintana, J. M.

    2015-04-01

    Context. Most observational results on high-redshift rest-frame UV-bright galaxies are based on samples pinpointed using the so-called dropout technique or Ly-α selection. However, the availability of multifilter data now allows the dropout selections to be replaced by direct methods based on photometric redshifts. In this paper we present the methodology to select and study the population of high redshift galaxies in the ALHAMBRA survey data. Aims: Our aim is to develop a less biased methodology than the traditional dropout technique to study the high redshift galaxies in ALHAMBRA and other multifilter data. Thanks to the wide area ALHAMBRA covers, we especially aim at contributing to the study of the brightest, least frequent, high redshift galaxies. Methods: The methodology is based on redshift probability distribution functions (zPDFs). It is shown how a clean galaxy sample can be obtained by selecting the galaxies with a high integrated probability of being within a given redshift interval. However, reaching both a complete and clean sample with this method is challenging. Hence, a method to derive statistical properties by summing the zPDFs of all the galaxies in the redshift bin of interest is introduced. Results: Using this methodology we derive the galaxy rest-frame UV number counts in five redshift bins centred at z = 2.5, 3.0, 3.5, 4.0, and 4.5, complete up to the limiting magnitude at mUV(AB) = 24, where mUV refers to the first ALHAMBRA filter redwards of the Ly-α line. With the wide-field ALHAMBRA data we especially contribute to the study of the brightest ends of these counts, accurately sampling the surface densities down to mUV(AB) = 21-22. Conclusions: We show that using the zPDFs it is easy to select a very clean sample of high redshift galaxies.
We also show that it is better to do statistical analysis of the properties of galaxies using a probabilistic approach, which takes into account both the incompleteness and contamination issues in a natural way. Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie (MPIA) at Heidelberg and the Instituto de Astrofísica de Andalucía (CSIC).
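    The two selection modes the abstract describes, a clean sample from integrated probabilities and statistical counts from summed zPDFs, can be sketched as follows. The uniform redshift grid, rectangle-rule integration, and all function names are illustrative assumptions, not the survey pipeline.

```python
import numpy as np

def integrated_prob(zgrid, pdf, zlo, zhi):
    """P(zlo <= z <= zhi) from a normalized zPDF sampled on a
    uniform redshift grid (simple rectangle-rule integration)."""
    dz = zgrid[1] - zgrid[0]
    m = (zgrid >= zlo) & (zgrid <= zhi)
    return float(np.sum(pdf[m]) * dz)

def clean_sample(zgrid, pdfs, zlo, zhi, pmin=0.9):
    """Indices of galaxies whose integrated probability of lying in
    the bin exceeds pmin: a clean, though not complete, sample."""
    return [i for i, p in enumerate(pdfs)
            if integrated_prob(zgrid, p, zlo, zhi) >= pmin]

def statistical_count(zgrid, pdfs, zlo, zhi):
    """Effective number of galaxies in the bin: sum every galaxy's
    zPDF mass, folding incompleteness and contamination in
    probabilistically."""
    return sum(integrated_prob(zgrid, p, zlo, zhi) for p in pdfs)
```

A narrow zPDF contributes ~1 to its own bin, while a broad or multimodal zPDF spreads fractional counts over several bins, which is exactly why the summed-PDF counts behave better than a hard cut.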

  14. Volatile organic compounds: sampling methods and their worldwide profile in ambient air.

    PubMed

    Kumar, Anuj; Víden, Ivan

    2007-08-01

    The atmosphere is a particularly difficult analytical system because of the very low levels of the substances to be analysed, sharp variations in pollutant levels with time and location, and differences in wind, temperature and humidity. This makes the selection of an efficient sampling technique a key step towards reliable air analysis results. Generally, methods for sampling volatile organic compounds involve either collection of whole air or preconcentration of samples on adsorbents. The methods vary from each other in sampling technique, type of sorbent, method of extraction and identification technique. In this review paper we discuss various important aspects of sampling volatile organic compounds by the widely used and advanced sampling methods. Characteristics of the various adsorbents used for VOC sampling are also described. Furthermore, this paper makes an effort to comprehensively review the concentration levels of volatile organic compounds in major cities of the world, along with the methodology used for their analysis.

  15. Microwave synthesis of gibberellin acid 3 magnetic molecularly imprinted polymer beads for the trace analysis of gibberellin acids in plant samples by liquid chromatography-mass spectrometry detection.

    PubMed

    Zhang, Zhuomin; Tan, Wei; Hu, Yuling; Li, Gongke; Zan, Song

    2012-02-21

    In this study, novel GA3 magnetic molecularly imprinted polymer (mag-MIP) beads were synthesized by a microwave irradiation method and applied, coupled with high performance liquid chromatography-mass spectrometry (HPLC-MS), to the trace analysis of gibberellin acids (GAs) in plant samples including rice and cucumber. The microwave synthetic procedure was optimized in detail. In particular, the interaction between GA3 and candidate functional monomers was studied to guide the selection of the optimal functional monomer: the interaction between GA3 and the finally selected acrylamide (AM) was stronger than that between GA3 and the other functional monomers. The GA3 mag-MIP beads were characterized by a series of physical tests and showed a porous and homogeneous surface morphology with stable chemical, thermal and magnetic properties. Moreover, the beads demonstrated selective and specific adsorption behavior for the target compounds during unsaturated extraction, which resulted in a higher extraction capacity (∼708.4 pmol for GA3) and selectivity than GA3 mag-non-imprinted polymer beads. Finally, an analytical method of GA3 mag-AM-MIP bead extraction coupled with HPLC-MS detection was established and applied to the determination of trace GA1, GA3, GA4 and GA7 in rice and cucumber samples; GA4 was found at 121.5 ± 1.4 μg kg(-1) in real rice samples by this method. The recoveries from spiked rice and cucumber samples were 76.0-109.1% and 79.9-93.6%, with RSDs of 2.8-8.8% and 3.1-7.7% (n = 3), respectively. The proposed method is efficient and applicable for the trace analysis of GAs in complicated plant samples.

  16. Reference Value Advisor: a new freeware set of macroinstructions to calculate reference intervals with Microsoft Excel.

    PubMed

    Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine

    2011-03-01

    International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
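    As an illustration of the recommended nonparametric approach for larger reference sample groups (this is our sketch, not the macroinstructions' own code; the bootstrap CI construction and function name are assumptions), reference limits with confidence intervals can be computed as:

```python
import numpy as np

def reference_interval(values, lo=0.025, hi=0.975, n_boot=1000, seed=0):
    """Nonparametric 95% reference limits (2.5th/97.5th percentiles,
    the method recommended when n >= 40) with bootstrap 90% CIs."""
    x = np.asarray(values, float)
    limits = np.quantile(x, [lo, hi])
    rng = np.random.default_rng(seed)
    # Resample with replacement and recompute the limits each time.
    boots = np.array([np.quantile(rng.choice(x, x.size), [lo, hi])
                      for _ in range(n_boot)])
    ci = np.quantile(boots, [0.05, 0.95], axis=0)  # 90% CI per limit
    return limits, ci
```

The width of the bootstrap CIs makes the abstract's central point concrete: with a small reference group the limits are so uncertain that no computation can rescue a poorly selected sample.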

  17. Simultaneous determination of nickel and copper by H-point standard addition method-first-order derivative spectrophotometry in plant samples after separation and preconcentration on modified natural clinoptilolite as a new sorbent.

    PubMed

    Roohparvar, Rasool; Taher, Mohammad Ali; Mohadesi, Alireza

    2008-01-01

    For the simultaneous determination of nickel(II) and copper(II) in plant samples, a rapid and accurate method was developed. In this method, solid-phase extraction (SPE) and first-order derivative spectrophotometry (FDS) are combined, and the result is coupled with the H-point standard addition method (HPSAM). Compared with normal spectrophotometry, derivative spectrophotometry offers the advantages of increased selectivity and sensitivity. As there is no need for carrying out any pretreatment of the sample, the spectrophotometry method is easy, but because of a high detection limit, it is not so practical. In order to decrease the detection limit, it is suggested to combine spectrophotometry with a preconcentration method such as SPE. In the present work, after separation and preconcentration of Ni(II) and Cu(II) on modified clinoptilolite zeolite that is loaded with 2-[1-(2-hydroxy-5-sulfophenyl)-3-phenyl-5-formazano]-benzoic acid monosodium salt (zincon) as a selective chromogenic reagent, FDS-HPSAM, which is a simple and selective spectrophotometric method, has been applied for simultaneous determination of these ions. With optimum conditions, the detection limits in original solutions are 0.7 and 0.5 ng/mL for nickel and copper, respectively. The linear concentration ranges in the proposed method for nickel and copper ions in original solutions are 1.1 to 3.0 x 10(3) and 0.9 to 2.0 x 10(3) ng/mL, respectively. The recommended procedure is applied to the successful determination of Cu(II) and Ni(II) in standard and real samples.
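    The quantitation logic shared by all standard-addition methods can be illustrated with the basic single-wavelength case (HPSAM itself works with signals at two wavelengths to resolve the two analytes; this simplified one-analyte sketch is ours): fit the signal against the added concentration, and the magnitude of the x-intercept estimates the analyte concentration already present in the sample.

```python
import numpy as np

def standard_addition(added_conc, signal):
    """Fit signal = a * C_added + b; the unknown sample concentration
    is the magnitude of the x-intercept, b / a."""
    a, b = np.polyfit(added_conc, signal, 1)
    return b / a
```
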

  18. Improving the collection of knowledge, attitude and practice data with community surveys: a comparison of two second-stage sampling methods.

    PubMed

    Davis, Rosemary H; Valadez, Joseph J

    2014-12-01

    Second-stage sampling techniques, including spatial segmentation, are widely used in community health surveys when reliable household sampling frames are not available. In India, an unresearched technique for household selection is used in eight states, which samples the house with the last marriage or birth as the starting point. Users question whether this last-birth or last-marriage (LBLM) approach introduces bias affecting survey results. We conducted two simultaneous population-based surveys. One used segmentation sampling; the other used LBLM. LBLM sampling required modification before assessment was possible and a more systematic approach was tested using last birth only. We compared coverage proportions produced by the two independent samples for six malaria indicators and demographic variables (education, wealth and caste). We then measured the level of agreement between the caste of the selected participant and the caste of the health worker making the selection. No significant difference between methods was found for the point estimates of six malaria indicators, education, caste or wealth of the survey participants (range of P: 0.06 to >0.99). A poor level of agreement occurred between the caste of the health worker used in household selection and the caste of the final participant, (Κ = 0.185), revealing little association between the two, and thereby indicating that caste was not a source of bias. Although LBLM was not testable, a systematic last-birth approach was tested. If documented concerns of last-birth sampling are addressed, this new method could offer an acceptable alternative to segmentation in India. However, inter-state caste variation could affect this result. Therefore, additional assessment of last birth is required before wider implementation is recommended. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2013; all rights reserved.
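    The agreement statistic quoted above (Κ = 0.185) is Cohen's kappa: observed agreement corrected for the agreement expected by chance. A minimal reference implementation for two categorical ratings:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equally long lists of categorical labels,
    e.g. health-worker caste vs. selected participant caste."""
    n = len(rater_a)
    cats = set(rater_a) | set(rater_b)
    p_obs = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)
```

Values near 0, like the 0.185 reported, mean agreement is barely above chance, which is how the authors conclude caste was not driving household selection.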

  19. Multi-locus analysis of genomic time series data from experimental evolution.

    PubMed

    Terhorst, Jonathan; Schlötterer, Christian; Song, Yun S

    2015-04-01

    Genomic time series data generated by evolve-and-resequence (E&R) experiments offer a powerful window into the mechanisms that drive evolution. However, standard population genetic inference procedures do not account for sampling serially over time, and new methods are needed to make full use of modern experimental evolution data. To address this problem, we develop a Gaussian process approximation to the multi-locus Wright-Fisher process with selection over a time course of tens of generations. The mean and covariance structure of the Gaussian process are obtained by computing the corresponding moments in discrete-time Wright-Fisher models conditioned on the presence of a linked selected site. This enables our method to account for the effects of linkage and selection, both along the genome and across sampled time points, in an approximate but principled manner. We first use simulated data to demonstrate the power of our method to correctly detect, locate and estimate the fitness of a selected allele from among several linked sites. We study how this power changes for different values of selection strength, initial haplotypic diversity, population size, sampling frequency, experimental duration, number of replicates, and sequencing coverage depth. In addition to providing quantitative estimates of selection parameters from experimental evolution data, our model can be used by practitioners to design E&R experiments with requisite power. We also explore how our likelihood-based approach can be used to infer other model parameters, including effective population size and recombination rate. Then, we apply our method to analyze genome-wide data from a real E&R experiment designed to study the adaptation of D. melanogaster to a new laboratory environment with alternating cold and hot temperatures.
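    A toy forward simulation of the underlying model, a Wright-Fisher process with selection sampled at sequencing time points, shows the kind of allele-frequency time series an E&R experiment produces. This is our illustrative sketch under a haploid model; the paper's actual method is a Gaussian-process approximation for inference, not a simulator.

```python
import numpy as np

def wright_fisher_series(n, p0, s, generations, sample_every, seed=0):
    """Allele-frequency trajectory under a haploid Wright-Fisher model:
    deterministic selection with coefficient s, then binomial drift in a
    population of n, recorded every `sample_every` generations."""
    rng = np.random.default_rng(seed)
    p, series = p0, []
    for g in range(generations + 1):
        if g % sample_every == 0:
            series.append(p)
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))  # selection step
        p = rng.binomial(n, p_sel) / n                 # drift step
    return series
```

Running this with s > 0 versus s = 0 makes visible the signal (a systematic upward trend against a background of drift) that the Gaussian-process likelihood is designed to detect and quantify.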

  20. Spectrophotometric determination of low levels arsenic species in beverages after ion-pairing vortex-assisted cloud-point extraction with acridine red.

    PubMed

    Altunay, Nail; Gürkan, Ramazan; Kır, Ufuk

    2016-01-01

    A new, low-cost, micellar-sensitive and selective spectrophotometric method was developed for the determination of inorganic arsenic (As) species in beverage samples. Vortex-assisted cloud-point extraction (VA-CPE) was used for the efficient pre-concentration of As(V) in the selected samples. The method is based on selective and sensitive ion-pairing of As(V) with acridine red (ARH(+)) in the presence of pyrogallol and sequential extraction into the micellar phase of Triton X-45 at pH 6.0. Under the optimised conditions, the calibration curve was highly linear in the range of 0.8-280 µg l(-1) for As(V). The limits of detection and quantification of the method were 0.25 and 0.83 µg l(-1), respectively. The method was successfully applied to the determination of trace As in the pre-treated and digested samples under microwave and ultrasonic power. As(V) and total As levels in the samples were spectrophotometrically determined after pre-concentration with VA-CPE at 494 nm before and after oxidation with acidic KMnO4. The As(III) levels were calculated from the difference between As(V) and total As levels. The accuracy of the method was demonstrated by analysis of two certified reference materials (CRMs) where the measured values for As were statistically within the 95% confidence limit for the certified values.
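    For orientation, detection and quantification limits of the kind quoted above are commonly derived from the calibration line as LOD = 3.3·sd/slope and LOQ = 10·sd/slope, with sd the residual standard deviation (an ICH-style recipe; the paper's exact procedure may differ, and the function name is ours):

```python
import numpy as np

def lod_loq(conc, signal):
    """ICH-style limits of detection and quantification estimated from
    the residual scatter of a linear calibration curve."""
    slope, intercept = np.polyfit(conc, signal, 1)
    resid = signal - (slope * conc + intercept)
    sd = np.std(resid, ddof=2)  # 2 fitted parameters
    return 3.3 * sd / slope, 10 * sd / slope
```
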

  1. A validated ultra-high-performance liquid chromatography-tandem mass spectrometry method for the selective analysis of free and total folate in plasma and red blood cells.

    PubMed

    Kiekens, Filip; Van Daele, Jeroen; Blancquaert, Dieter; Van Der Straeten, Dominique; Lambert, Willy E; Stove, Christophe P

    2015-06-12

    A stable isotope dilution LC-MS/MS method is the method of choice for the selective quantitative determination of several folate species in clinical samples. By implementing an integrated approach to determine both the plasma and red blood cell (RBC) folate status, the use of consumables and time remains limited. Starting from a single 300 μl whole blood sample, the folate status in plasma and RBCs can be determined after separating plasma and RBCs and sequential washing of the latter with isotonic buffer, followed by reproducible lysis using an ammonium-based buffer. Acidification combines both liberation of protein-bound folates and protein precipitation. Sample cleanup is performed using a 96-well reversed-phase solid-phase extraction procedure, similar for both plasma and RBC samples. Analyses are performed by UHPLC-MS/MS. Method validation was successfully performed based on EMA guidelines and encompassed selectivity, carry-over, linearity, accuracy, precision, recovery, matrix effect and stability. Plasma and RBC folates could be quantified in the ranges of 1-150 nmol/l and 5-1500 nmol/l, respectively. This method allows for the determination of 6 folate monoglutamates in both plasma and RBCs. It can be used to determine short and long term folate status in both normal and severely deficient subjects in a single analytical sequence. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. On-line solid-phase microextraction of triclosan, bisphenol A, chlorophenols, and selected pharmaceuticals in environmental water samples by high-performance liquid chromatography-ultraviolet detection.

    PubMed

    Kim, Dalho; Han, Jungho; Choi, Yongwook

    2013-01-01

    A method using on-line solid-phase microextraction (SPME) on a carbowax-templated fiber followed by liquid chromatography (LC) with ultraviolet (UV) detection was developed for the determination of triclosan (TCS) in environmental water samples. Along with triclosan, other selected phenolic compounds, bisphenol A, and acidic pharmaceuticals were studied. Previous SPME/LC and stir-bar sorptive extraction/LC-UV methods for polar analytes showed a lack of sensitivity. In this study, the calculated octanol-water distribution coefficient (log D) values of the target analytes at different pH values were used to estimate the polarity of the analytes. The lack of sensitivity observed in earlier studies is identified as a lack of desorption caused by strong polar-polar interactions between analyte and solid phase. Calculated log D values were useful for understanding and predicting the interaction between analyte and solid phase. Under the optimized conditions, the method detection limits of the selected analytes using the on-line SPME-LC-UV method ranged from 5 to 33 ng L(-1), except for the very polar 3-chlorophenol and 2,4-dichlorophenol, which were obscured in wastewater samples by an interfering substance. This level of detection represents a remarkable improvement over conventional existing methods. The on-line SPME-LC-UV method, which does not require derivatization of analytes, was applied to the determination of TCS, phenolic compounds and acidic pharmaceuticals in tap water, river water and municipal wastewater samples.

  3. Determination of iodine in bread and fish using the iodide ion-selective electrode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steiner, J.B.

    The purpose of this study was to assess the potential for use of the ion-selective electrode (ISE) as a method for measuring the iodine content in bread and fish. Ashing methods, sample preparation and electrode responses were evaluated. The iodine values obtained using the iodide electrode were compared to iodine values obtained by the arsenic-cerium (As-Ce) method. Ashing methods were used in preparing bread and haddock for iodine analysis by the ISE, and the values were compared to unashed samples measured by the ISE. Electrode response to iodide was examined by varying the sample pH, measuring electrode equilibrium times, and comparing direct measurement in ppm to iodide values obtained by the method of known addition. Oyster reference tissue with a known iodine concentration was used to determine rates of recovery. For the As-Ce procedure, an alkaline dry ash for two hours followed by colorimetric analysis at 320 nm was recommended. The study showed that pre-treatment of bread and fish was necessary for ISE measurement. The iodine values obtained by the ISE in the analysis of oyster reference tissue, haddock and bread were not in agreement with their corresponding As-Ce values. Further work needs to be done to determine an ashing procedure with minimal iodide loss and/or to develop sample treatments that will improve the reliability and precision of iodine values obtained using the ion-selective electrode.

  4. Fast and selective pressurized liquid extraction with simultaneous in cell clean up for the analysis of alkylphenols and bisphenol A in bivalve molluscs.

    PubMed

    Salgueiro-González, N; Turnes-Carou, I; Muniategui-Lorenzoa, S; López-Mahía, P; Prada-Rodríguez, D

    2012-12-28

    A novel and green analytical methodology for the determination of alkylphenols (4-tert-octylphenol, 4-n-octylphenol, 4-n-nonylphenol, nonylphenol technical mixture) and bisphenol A in bivalve mollusc samples was developed and validated. The method was based on selective pressurized liquid extraction (SPLE) with simultaneous in-cell clean up, combined with liquid chromatography–electrospray ionization tandem mass spectrometry in negative mode (LC–ESI-MS/MS). Quantitation was performed with standard-addition curves in order to correct for matrix effects. The analytical features of the method were satisfactory: relative recoveries varied between 80 and 107%, and repeatability and intermediate precision were <20% for all compounds. Uncertainty of measurement was estimated on the basis of an in-house validation according to the EURACHEM/CITAC guide. Method quantitation limits (MQL) ranged between 0.34 (4-n-octylphenol) and 3.6 ng g(−1) dry weight (nonylphenol). The main advantages of the method are its sensitivity, selectivity, automation, the low solvent volumes required and the short sample analysis time (in accordance with the principles of green chemistry). The method was applied to the analysis of mussel samples from the Galician coast (NW Spain). Nonylphenol and 4-tert-octylphenol were measured in all samples at concentrations between 9.3 and 372 ng g(−1) dw. As an approach, the human daily intake of these compounds was estimated and no risk to human health was found.

  5. Thermomechanical Methodology for Stabilizing Shape Memory Alloy (SMA) Response

    NASA Technical Reports Server (NTRS)

    Padula, II, Santo A (Inventor)

    2013-01-01

    Methods and apparatuses for stabilizing the strain-temperature response for a shape memory alloy are provided. To perform stabilization of a second sample of the shape memory alloy, a first sample of the shape memory alloy is selected for isobaric treatment and the second sample is selected for isothermal treatment. When applying the isobaric treatment to the first sample, a constant stress is applied to the first sample. Temperature is also cycled from a minimum temperature to a maximum temperature until a strain on the first sample stabilizes. Once the strain on the first sample stabilizes, the isothermal treatment is performed on the second sample. During isothermal treatment, different levels of stress on the second sample are applied until a strain on the second sample matches the stabilized strain on the first sample.

  6. Thermomechanical Methodology for Stabilizing Shape Memory Alloy (SMA) Response

    NASA Technical Reports Server (NTRS)

    Padula, Santo A., II (Inventor)

    2016-01-01

    Methods and apparatuses for stabilizing the strain-temperature response for a shape memory alloy are provided. To perform stabilization of a second sample of the shape memory alloy, a first sample of the shape memory alloy is selected for isobaric treatment and the second sample is selected for isothermal treatment. When applying the isobaric treatment to the first sample, a constant stress is applied to the first sample. Temperature is also cycled from a minimum temperature to a maximum temperature until a strain on the first sample stabilizes. Once the strain on the first sample stabilizes, the isothermal treatment is performed on the second sample. During isothermal treatment, different levels of stress on the second sample are applied until a strain on the second sample matches the stabilized strain on the first sample.

  7. MRM-Lasso: A Sparse Multiview Feature Selection Method via Low-Rank Analysis.

    PubMed

    Yang, Wanqi; Gao, Yang; Shi, Yinghuan; Cao, Longbing

    2015-11-01

    Learning from multiview data arises in many applications, such as video understanding, image classification, and social media. However, when the data dimension increases dramatically, it is important but very challenging to remove redundant features in multiview feature selection. In this paper, we propose a novel feature selection algorithm, multiview rank minimization-based Lasso (MRM-Lasso), which jointly utilizes Lasso for sparse feature selection and rank minimization for learning relevant patterns across views. Instead of simply integrating multiple Lasso models at the view level, we focus on performance at the sample level (sample significance) and introduce pattern-specific weights into MRM-Lasso. The weights measure the contribution of each sample to the labels in the current view. In addition, the latent correlation across different views is captured by learning a low-rank matrix consisting of the pattern-specific weights. The alternating direction method of multipliers is applied to optimize the proposed MRM-Lasso. Experiments on four real-life data sets show that features selected by MRM-Lasso achieve better multiview classification performance than the baselines. Moreover, pattern-specific weights are shown to be significant for learning from multiview data, compared with view-specific weights.
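
    An ADMM solver for an objective like MRM-Lasso's alternates two proximal steps: entry-wise soft-thresholding (the Lasso prox) and singular-value thresholding (the nuclear-norm prox that encourages the low-rank weight matrix). A minimal sketch of those two building blocks, with an invented test matrix:

```python
import numpy as np

def soft_threshold(x, t):
    """Prox of t*||x||_1: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def singular_value_threshold(W, t):
    """Prox of t*||W||_*: soft-threshold the singular values of W."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(soft_threshold(s, t)) @ Vt

# A rank-1 weight matrix plus small noise: thresholding recovers the low rank.
rng = np.random.default_rng(1)
W = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 1.0]) + 0.05 * rng.normal(size=(3, 3))
low_rank = singular_value_threshold(W, 0.5)
print(np.linalg.matrix_rank(low_rank, tol=1e-6))  # noise directions shrink away
```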

  8. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm.

    PubMed

    Mao, Yong; Zhou, Xiao-Bo; Pi, Dao-Ying; Sun, You-Xian; Wong, Stephen T C

    2005-10-01

    In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables, the small number of samples, and the non-linearity of the problem. It is difficult to obtain satisfactory results using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM RFE) is an effective algorithm for gene selection and cancer classification, which are integrated into a consistent framework. In this paper, we propose a new method for selecting the parameters of this algorithm when implemented with Gaussian kernel SVMs: a genetic algorithm searches for the pair of optimal parameters, as a better alternative to the common practice of choosing the apparently best parameters by hand. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets, hereditary breast cancer and acute leukaemia. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.
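
    A toy version of the parameter search can be sketched with a genetic algorithm over (log10 C, log10 γ). To keep the sketch dependency-free, a Gaussian-kernel ridge classifier stands in for the SVM, and the data, encoding, and GA settings are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))                    # toy "expression" matrix
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)   # toy class labels
folds = np.array_split(rng.permutation(60), 3)   # fixed 3-fold split

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fitness(ind):
    """Cross-validated accuracy of a kernel ridge classifier at (C, gamma)."""
    C, gamma = 10.0 ** ind
    accs = []
    for f in folds:
        tr = np.setdiff1d(np.arange(60), f)
        K = rbf(X[tr], X[tr], gamma)
        alpha = np.linalg.solve(K + np.eye(len(tr)) / C, y[tr])
        accs.append((np.sign(rbf(X[f], X[tr], gamma) @ alpha) == y[f]).mean())
    return float(np.mean(accs))

# individuals encode (log10 C, log10 gamma)
pop = rng.uniform([-2.0, -4.0], [3.0, 0.0], size=(12, 2))
for _ in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]                    # selection
    children = parents + rng.normal(0.0, 0.3, parents.shape)  # mutation
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("best CV accuracy:", round(fitness(best), 2))
```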

  9. Current methods for detecting the presence of botulinum neurotoxins in food and other biological samples

    USDA-ARS's Scientific Manuscript database

    Botulinum neurotoxins (BoNTs), the causative agents of botulism, are among the most lethal bacterial toxins known to affect humans. BoNTs are also classified as Select Agents ...

  10. Determination of chlorpyrifos and its metabolites in cells and culture media by liquid chromatography-electrospray ionization tandem mass spectrometry.

    PubMed

    Yang, Xiangkun; Wu, Xian; Brown, Kyle A; Le, Thao; Stice, Steven L; Bartlett, Michael G

    2017-09-15

    A sensitive method to simultaneously quantitate chlorpyrifos, chlorpyrifos oxon and the detoxified product 3,5,6-trichloro-2-pyridinol (TCP) was developed using either liquid-liquid extraction for culture media samples, or protein precipitation for cell samples. Multiple reaction monitoring in positive ion mode was applied for the detection of chlorpyrifos and chlorpyrifos oxon, and selected ion recording in negative mode was applied to detect TCP. The method provided linear ranges of 5-500, 0.2-20 and 20-2000 ng/mL for media samples and 0.5-50, 0.02-2 and 2-200 ng/million cells for CPF, CPO and TCP, respectively. The method was validated using selectivity, linearity, precision, accuracy, recovery, stability and dilution tests. All relative standard deviations (RSDs) and relative errors (REs) for QC samples were within 15% (except for the LLOQ, within 20%). This method has been successfully applied to study the neurotoxicity and metabolism of chlorpyrifos in a human neuronal model. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Comparison of different methods for MP detection: What can we learn from them, and why asking the right question before measurements matters?

    PubMed

    Elert, Anna M; Becker, Roland; Duemichen, Erik; Eisentraut, Paul; Falkenhagen, Jana; Sturm, Heinz; Braun, Ulrike

    2017-12-01

    In recent years, an increasing trend towards investigating and monitoring the contamination of the environment by microplastics (MP) (plastic pieces < 5 mm) has been observed worldwide. Nonetheless, a reliable methodology that would facilitate and automate the monitoring of MP is still lacking. With the goal of selecting practical and standardized methods, and considering the challenges in microplastics detection, we present here a critical evaluation of two vibrational spectroscopies, Raman and Fourier transform infrared (FTIR) spectroscopy, and two extraction methods: thermal extraction desorption gas chromatography mass spectrometry (TED-GC-MS) and liquid extraction with subsequent size exclusion chromatography (SEC) using a soil with known contents of PE, PP, PS and PET as reference material. The obtained results were compared in terms of measurement time, technique handling, detection limits and requirements for sample preparation. The results showed that in designing and selecting the right methodology, the scientific question that determines what needs to be understood is significant, and should be considered carefully prior to analysis. Depending on whether the object of interest is quantification of the MP particles in the sample, or merely a quick estimate of sample contamination with plastics, the appropriate method must be selected. To obtain overall information about MP in environmental samples, the combination of several parallel approaches should be considered. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. [Variable selection methods combined with local linear embedding theory used for optimization of near infrared spectral quantitative models].

    PubMed

    Hao, Yong; Sun, Xu-Dong; Yang, Qiang

    2012-12-01

    A variable selection strategy combined with local linear embedding (LLE) was introduced for the analysis of complex samples by near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE coupled with SPA, were used to eliminate redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used for modeling the complex samples. The results showed that MCUVE can both extract effective informative variables and improve the precision of the models. Compared with PLSR models, LLE-PLSR models achieved more accurate analysis results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.
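
    The core of MCUVE can be sketched in a few lines: fit a regression on many random subsamples and rank each variable by the stability (|mean|/std) of its coefficient across runs, discarding unstable (uninformative) variables. Ordinary least squares stands in for the PLS regression used with NIR spectra, and the toy data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.1, size=n)  # vars 0,1 informative

coefs = []
for _ in range(200):                                   # Monte Carlo subsampling
    idx = rng.choice(n, size=int(0.8 * n), replace=False)
    b, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    coefs.append(b)
coefs = np.array(coefs)

stability = np.abs(coefs.mean(0)) / coefs.std(0)       # MCUVE-style reliability index
keep = np.argsort(stability)[-2:]
print(sorted(int(k) for k in keep))                    # the two informative variables
```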

  13. Evaluation of a gas chromatography method for azelaic acid determination in selected biological samples.

    PubMed

    Garelnabi, Mahdi; Litvinov, Dmitry; Parthasarathy, Sampath

    2010-09-01

    Azelaic acid (AzA) is the best-known dicarboxylic acid with pharmaceutical benefits and clinical applications, and it is also associated with the pathophysiology of some diseases. We extracted and methyl-esterified AzA and determined its concentration in human plasma obtained from healthy individuals and in mice fed an AzA-containing diet for three months. AzA was detected by gas chromatography (GC) and confirmed by liquid chromatography-mass spectrometry (LC-MS) and gas chromatography-mass spectrometry (GC-MS). Our results show that AzA can be determined efficiently in selected biological samples by the GC method, with a 1 nM limit of detection (LoD); the limit of quantification (LoQ) was established at 50 nM. Analytical sensitivity, as assayed with hexane, was 0.050 nM. The method demonstrated 8-10% CV batch repeatability across the sample types and 13-18.9% CV for within-lab precision. The method showed that AzA can be efficiently recovered from various sample preparations, including liver tissue homogenate (95%) and human plasma (97%). Because of its simplicity and low limit of quantification, the present method provides a useful tool for determining AzA in various biological sample preparations.

  14. Sorbent-based sampling methods for volatile and semi-volatile organic compounds in air. Part 2. Sorbent selection and other aspects of optimizing air monitoring methods.

    PubMed

    Woolfenden, Elizabeth

    2010-04-16

    Sorbent tubes/traps are widely used in combination with gas chromatographic (GC) analytical methods to monitor the vapour-phase fraction of organic compounds in air. Applications range from atmospheric research and ambient air monitoring (indoor and outdoor) to occupational hygiene (personal exposure assessment) and measuring chemical emission levels. Part 1 of this paper reviewed the main sorbent-based air sampling strategies including active (pumped) tube monitoring, diffusive (passive) sampling onto sorbent tubes/cartridges plus sorbent trapping/focusing of whole air samples that are either collected in containers (such as canisters or bags) or monitored online. Options for subsequent extraction and transfer to GC(MS) analysis were also summarised and the trend to thermal desorption (TD)-based methods and away from solvent extraction was explained. As a result of this trend, demand for TD-compatible sorbents (alternatives to traditional charcoal) is growing. Part 2 of this paper therefore continues with a summary of TD-compatible sorbents, their respective advantages and limitations and considerations for sorbent selection. Other analytical considerations for optimizing sorbent-based air monitoring methods are also discussed together with recent technical developments and sampling accessories which have extended the application range of sorbent trapping technology generally. Copyright 2010 Elsevier B.V. All rights reserved.

  15. Use of High-Resolution Continuum Source Flame Atomic Absorption Spectrometry (HR-CS FAAS) for Sequential Multi-Element Determination of Metals in Seawater and Wastewater Samples

    NASA Astrophysics Data System (ADS)

    Peña-Vázquez, E.; Barciela-Alonso, M. C.; Pita-Calvo, C.; Domínguez-González, R.; Bermejo-Barrera, P.

    2015-09-01

    The objective of this work was to develop a method for the determination of metals in saline matrices using high-resolution continuum source flame atomic absorption spectrometry (HR-CS FAAS). The SFS 6 module for sample injection was used in manual mode, and flame operating conditions were selected. The main absorption lines were used for all the elements, and the number of selected analytical pixels was 5 (CP±2) for Cd, Cu, Fe, Ni, Pb and Zn, and 3 (CP±1) for Mn. Samples were acidified (0.5% (v/v) nitric acid), and the standard addition method was used for the sequential determination of the analytes in diluted samples (1:2). The method showed good precision (RSD < 4%, except for Pb (6.5%)) and good recoveries. Accuracy was checked by analysis of an SPS-WW2 wastewater reference material diluted with synthetic seawater (dilution 1:2), which showed good agreement between certified and experimental results.

  16. Comparing Standard and Selective Degradation DNA Extraction Methods: Results from a Field Experiment with Sexual Assault Kits.

    PubMed

    Campbell, Rebecca; Pierce, Steven J; Sharma, Dhruv B; Shaw, Jessica; Feeney, Hannah; Nye, Jeffrey; Schelling, Kristin; Fehler-Cabral, Giannina

    2017-01-01

    A growing number of U.S. cities have large numbers of untested sexual assault kits (SAKs) in police property facilities. Testing older kits and maintaining current case work will be challenging for forensic laboratories, creating a need for more efficient testing methods. We evaluated selective degradation methods for DNA extraction using actual case work from a sample of previously unsubmitted SAKs in Detroit, Michigan. We randomly assigned 350 kits to either standard or selective degradation testing methods and then compared DNA testing rates and CODIS entry rates between the two groups. Continuation-ratio modeling showed no significant differences, indicating that the selective degradation method had no decrement in performance relative to customary methods. Follow-up equivalence tests indicated that CODIS entry rates for the two methods could differ by more than ±5%. Selective degradation methods required less personnel time for testing and scientific review than standard testing. © 2016 American Academy of Forensic Sciences.
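
    The follow-up equivalence analysis can be illustrated with a two-one-sided-tests (TOST) procedure for two proportions and a ±5% margin. The counts below are invented for illustration, not the study's data:

```python
import math

def two_prop_z(p1, n1, p2, n2, delta):
    """z statistic for H0: p1 - p2 = delta (unpooled standard error)."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2 - delta) / se

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tost_equivalent(x1, n1, x2, n2, margin=0.05, alpha=0.05):
    """Equivalence is claimed only if BOTH one-sided tests reject."""
    p1, p2 = x1 / n1, x2 / n2
    z_low = two_prop_z(p1, n1, p2, n2, -margin)   # test: p1 - p2 > -margin
    z_high = two_prop_z(p1, n1, p2, n2, margin)   # test: p1 - p2 < +margin
    p_low = 1.0 - norm_cdf(z_low)
    p_high = norm_cdf(z_high)
    return max(p_low, p_high) < alpha

# Similar CODIS-entry rates but modest n: the ±5% margin cannot be ruled out.
print(tost_equivalent(120, 175, 118, 175))
```

    This mirrors the abstract's point: rates can look nearly identical and still fail to demonstrate equivalence within ±5% at these sample sizes.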

  17. Automation of ⁹⁹Tc extraction by LOV prior to ICP-MS detection: application to environmental samples.

    PubMed

    Rodríguez, Rogelio; Leal, Luz; Miranda, Silvia; Ferrer, Laura; Avivar, Jessica; García, Ariel; Cerdà, Víctor

    2015-02-01

    A new, fast, automated and inexpensive sample pre-treatment method for (99)Tc determination by inductively coupled plasma-mass spectrometry (ICP-MS) is presented. The miniaturized approach is based on a lab-on-valve (LOV) system, allowing automatic separation and preconcentration of (99)Tc. Selectivity is provided by the solid phase extraction system used (TEVA resin), which selectively retains the pertechnetate ion in dilute nitric acid solution. The proposed system minimizes sample handling, reduces reagent volumes, and improves intermediate precision and sample throughput, offering a significant decrease in both time and cost per analysis in comparison to other flow techniques and batch methods. The proposed LOV system has been successfully applied to different samples of environmental interest (water and soil) with satisfactory recoveries, between 94% and 98%. The detection limit (LOD) of the developed method is 0.005 ng. The high durability of the resin, the small amount required (32 mg), the good intermediate precision (RSD 3.8%) and repeatability (RSD 2%), and the high extraction frequency (up to 5 h(-1)) make this method an inexpensive, precise and fast tool for monitoring (99)Tc in environmental samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    NASA Astrophysics Data System (ADS)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

    Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valued information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we construct a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert's cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model for each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models, from which the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, realizing selective information fusion based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.

  19. Sampling in epidemiological research: issues, hazards and pitfalls.

    PubMed

    Tyrer, Stephen; Heyman, Bob

    2016-04-01

    Surveys of people's opinions are fraught with difficulties. It is easier to obtain information from those who respond to text messages or to emails than to attempt to obtain a representative sample. Samples of the population that are selected non-randomly in this way are termed convenience samples as they are easy to recruit. This introduces a sampling bias. Such non-probability samples have merit in many situations, but an epidemiological enquiry is of little value unless a random sample is obtained. If a sufficient number of those selected actually complete a survey, the results are likely to be representative of the population. This editorial describes probability and non-probability sampling methods and illustrates the difficulties and suggested solutions in performing accurate epidemiological research.
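
    The sampling-bias point can be demonstrated with a small simulation: when the probability of reaching a respondent depends on a trait correlated with the outcome, a convenience sample is biased while a simple random sample is not. The population and response model below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
age = rng.uniform(18, 80, N)
opinion = (age - 18) / 62                    # outcome correlated with age
reach = 1 / (1 + np.exp((age - 40) / 8))     # younger people easier to reach by text/email

random_sample = rng.choice(N, 500, replace=False)                       # SRS
convenience = rng.choice(N, 500, replace=False, p=reach / reach.sum())  # convenience

print(round(opinion.mean(), 3),                  # population truth
      round(opinion[random_sample].mean(), 3),   # close to truth
      round(opinion[convenience].mean(), 3))     # biased low (young skew)
```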

  20. Sampling in epidemiological research: issues, hazards and pitfalls

    PubMed Central

    Tyrer, Stephen; Heyman, Bob

    2016-01-01

    Surveys of people's opinions are fraught with difficulties. It is easier to obtain information from those who respond to text messages or to emails than to attempt to obtain a representative sample. Samples of the population that are selected non-randomly in this way are termed convenience samples as they are easy to recruit. This introduces a sampling bias. Such non-probability samples have merit in many situations, but an epidemiological enquiry is of little value unless a random sample is obtained. If a sufficient number of those selected actually complete a survey, the results are likely to be representative of the population. This editorial describes probability and non-probability sampling methods and illustrates the difficulties and suggested solutions in performing accurate epidemiological research. PMID:27087985

  1. Screening and confirmation of steroids and nitroimidazoles in urine, blood, and food matrices: Sample preparation methods and liquid chromatography tandem mass spectrometric separations.

    PubMed

    Tölgyesi, Ádám; Barta, Enikő; Simon, Andrea; McDonald, Thomas J; Sharma, Virender K

    2017-10-25

    Veterinary drugs containing synthetic anabolic steroid and nitroimidazole active agents are not permitted for use in livestock in the European Union (EU). This paper presents analyses of twelve selected steroids and six nitroimidazole antibiotics at low levels (1.56-4.95 μg/L and 0.17-2.14 μg/kg, respectively) in body fluids and incurred egg samples. Analyses involved clean-up procedures, high performance liquid chromatography (HPLC) separation, and tandem mass spectrometric screening and confirmatory methods. Target steroids and nitroimidazoles in samples were cleaned up by two independent supported liquid extraction and solid phase extraction procedures. Separation of the selected compounds was conducted on a Kinetex XB C-18 HPLC column using gradient elution. The screening methods utilised supported liquid extraction, which enabled fast and cost-effective clean-up. The confirmatory methods were improved by extending the number of matrices and compounds, and by introducing isotope dilution mass spectrometry for nitroimidazoles. The new methods were validated according to the recommendations of the European Union Reference Laboratories, and the performance characteristics evaluated fully met the criteria. The methods were applied to incurred samples in proficiency tests. The obtained Z-scores demonstrated the applicability of the developed protocols to real samples. The confirmatory methods were applied in the national monitoring program, and natural contamination with prednisolone could be detected in urine at low concentration in a few samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Importance of highly selective LC-MS/MS analysis for the accurate quantification of tamoxifen and its metabolites: focus on endoxifen and 4-hydroxytamoxifen.

    PubMed

    Jager, N G L; Rosing, H; Linn, S C; Schellens, J H M; Beijnen, J H

    2012-06-01

    The antiestrogenic effect of tamoxifen is mainly attributable to the active metabolites endoxifen and 4-hydroxytamoxifen. This effect is assumed to be concentration-dependent, and therefore quantitative analysis of tamoxifen and its metabolites for clinical studies and therapeutic drug monitoring is increasing. We investigated the large discrepancies in reported mean endoxifen and 4-hydroxytamoxifen concentrations. Two published LC-MS/MS methods were used to analyse a set of 75 serum samples from patients treated with tamoxifen. The method of Teunissen et al. (J Chrom B, 879:1677-1685, 2011) separates endoxifen and 4-hydroxytamoxifen from other tamoxifen metabolites with similar masses and fragmentation patterns. The second method, published by Gjerde et al. (J Chrom A, 1082:6-14, 2005), however, lacks this selectivity, resulting in a factor 2-3 overestimation of endoxifen and 4-hydroxytamoxifen levels, respectively. We emphasize the use of highly selective LC-MS/MS methods for the quantification of tamoxifen and its metabolites in biological samples.

  3. [Study of near infrared spectral preprocessing and wavelength selection methods for endometrial cancer tissue].

    PubMed

    Zhao, Li-Ting; Xiang, Yu-Hong; Dai, Yin-Mei; Zhang, Zhuo-Yong

    2010-04-01

    Near infrared spectroscopy was applied to tissue slices of endometrial tissue to collect spectra. A total of 154 spectra were obtained from 154 samples; the numbers of normal, hyperplasia, and malignant samples were 36, 60, and 58, respectively. Original near infrared spectra contain many variables, including interference from instrument errors and from physical effects such as particle size and light scatter. To reduce these influences, the raw spectral data should be treated with spectral preprocessing methods to compress the variables and extract useful information, so spectral preprocessing and wavelength selection play an important role in near infrared spectroscopy. In the present paper the raw spectra were processed using various preprocessing methods, including first derivative, multiplicative scatter correction, the Savitzky-Golay first derivative algorithm, standard normal variate, smoothing, and moving-window median filtering. The standard deviation was used to select the optimal spectral region of 4 000-6 000 cm(-1). Principal component analysis was then used for classification. The results showed that the three types of samples could be discriminated completely, with accuracy reaching almost 100%. This study demonstrated that near infrared spectroscopy combined with chemometrics could be a fast, efficient, and novel means to diagnose cancer. The proposed methods would be a promising and significant diagnostic technique for early stage cancer.
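
    The preprocessing chain described above (scatter correction, derivative, then PCA) can be sketched on synthetic spectra. Standard normal variate and a Savitzky-Golay first derivative are shown; the band positions, window sizes, and the spectra themselves are illustrative, not the paper's data:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
wavenum = np.linspace(4000, 6000, 300)

def spectrum(shift):
    """Toy absorption band plus random scale/offset (scatter-like effects)."""
    scale, offset = rng.uniform(0.8, 1.2), rng.uniform(-0.1, 0.1)
    return scale * np.exp(-((wavenum - 5000 - shift) / 150) ** 2) + offset

spectra = np.array([spectrum(s) for s in [0] * 20 + [80] * 20])  # two "classes"

# Standard normal variate: per-spectrum centering and scaling removes scatter.
snv = (spectra - spectra.mean(1, keepdims=True)) / spectra.std(1, keepdims=True)
# Savitzky-Golay first derivative removes residual baseline structure.
deriv = savgol_filter(snv, window_length=11, polyorder=2, deriv=1, axis=1)

# PCA scores via SVD of the mean-centered matrix.
centered = deriv - deriv.mean(0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[:2].T
print(scores.shape)  # (40, 2); classes separate along the first component
```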

  4. Model for spectral and chromatographic data

    DOEpatents

    Jarman, Kristin [Richland, WA]; Willse, Alan [Richland, WA]; Wahl, Karen [Richland, WA]; Wahl, Jon [Richland, WA]

    2002-11-26

    A method and apparatus using a spectral analysis technique are disclosed. In one form of the invention, probabilities are selected to characterize the presence (and in another form, also a quantification of a characteristic) of peaks in an indexed data set for samples that match a reference species, and other probabilities are selected for samples that do not match the reference species. An indexed data set is acquired for a sample, and a determination is made according to techniques exemplified herein as to whether the sample matches or does not match the reference species. When quantification of peak characteristics is undertaken, the model is appropriately expanded, and the analysis accounts for the characteristic model and data. Further techniques are provided to apply the methods and apparatuses to process control, cluster analysis, hypothesis testing, analysis of variance, and other procedures involving multiple comparisons of indexed data.
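
    The probabilistic peak model can be illustrated as a naive-Bayes-style likelihood ratio: one Bernoulli probability per peak index under "matches the reference species" and another under "does not match", with classification by comparing the two likelihoods. The probability values and peak vectors below are invented:

```python
import numpy as np

p_match = np.array([0.95, 0.90, 0.85, 0.10])     # P(peak i present | match)
p_nonmatch = np.array([0.30, 0.40, 0.20, 0.50])  # P(peak i present | no match)

def log_likelihood_ratio(peaks):
    """peaks: 0/1 vector of observed peak presence at each index."""
    ll_m = np.sum(peaks * np.log(p_match) + (1 - peaks) * np.log(1 - p_match))
    ll_n = np.sum(peaks * np.log(p_nonmatch) + (1 - peaks) * np.log(1 - p_nonmatch))
    return ll_m - ll_n

sample = np.array([1, 1, 1, 0])            # pattern resembling the reference
print(log_likelihood_ratio(sample) > 0)    # positive ratio -> classified as match
```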

  5. Point-Sampling and Line-Sampling Probability Theory, Geometric Implications, Synthesis

    Treesearch

    L.R. Grosenbaugh

    1958-01-01

    Foresters concerned with measuring tree populations on definite areas have long employed two well-known methods of representative sampling. In list or enumerative sampling the entire tree population is tallied with a known proportion being randomly selected and measured for volume or other variables. In area sampling all trees on randomly located plots or strips...
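
    Point sampling (Grosenbaugh's angle-count approach) selects each tree with probability proportional to its basal area: a tree is tallied when it lies within a limiting distance proportional to its diameter, and basal area per hectare is estimated as the basal-area factor times the tally. A simulation sketch with an invented stand (tree sizes, stand dimensions, and the BAF are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
side = 200.0                                   # square stand, metres (4 ha)
xy = rng.uniform(0.0, side, (2000, 2))         # tree positions
dbh = rng.uniform(0.1, 0.5, 2000)              # diameters at breast height, metres
true_ba = (np.pi * (dbh / 2) ** 2).sum() / (side * side / 10_000)  # m^2/ha

BAF = 2.0                                      # basal-area factor, m^2/ha per tallied tree
limit = dbh / (2.0 * np.sqrt(BAF / 10_000))    # limiting distance for each tree

estimates = []
for _ in range(300):
    pt = rng.uniform(50.0, 150.0, 2)           # sample points kept away from edges
    dist = np.hypot(xy[:, 0] - pt[0], xy[:, 1] - pt[1])
    estimates.append(BAF * (dist < limit).sum())

print(round(true_ba, 1), round(float(np.mean(estimates)), 1))  # agree on average
```

    Because the limiting radius scales with diameter, each tree's selection probability is proportional to its basal area, which is what makes the simple count-times-BAF estimator unbiased.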

  6. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
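
    The sampling scheme itself is easy to sketch: pick image pixels uniformly at random as centers, then measure nearby pixels with a probability that decays with distance from the center. The Gaussian decay and all parameter values below are illustrative choices, not the paper's:

```python
import numpy as np

def localized_random_mask(shape, n_centers, sigma=2.0, seed=0):
    """Boolean measurement mask built by localized random sampling."""
    rng = np.random.default_rng(seed)
    H, W = shape
    mask = np.zeros(shape, dtype=bool)
    yy, xx = np.mgrid[0:H, 0:W]
    for _ in range(n_centers):
        cy, cx = rng.integers(0, H), rng.integers(0, W)   # uniformly random center
        d2 = (yy - cy) ** 2 + (xx - cx) ** 2
        p = np.exp(-d2 / (2.0 * sigma ** 2))              # probability decays with distance
        mask |= rng.random(shape) < p                     # sample the neighbourhood
    return mask

mask = localized_random_mask((64, 64), n_centers=30)
print(mask.shape, round(mask.mean(), 3))  # fraction of pixels actually measured
```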

  7. [Comparative studies of methods of salmonella enrichment (author's transl)].

    PubMed

    Pietzsch, O; Kretschmer, F J; Bulling, E

    1975-07-01

    Eight different methods of salmonella enrichment were compared in two series of experiments involving 100 samples of whole-egg powder and 80 samples of frozen whole liquid egg, respectively. 66 out of a total of 100 samples of whole-egg powder had been artificially infected with varying numbers of S. typhi-murium; 60 out of 80 samples of frozen whole liquid egg were found to be naturally infected with various salmonella species. 3 of the 8 methods (Table 1) were compared within an international collaborative study with 14 laboratories in 11 countries participating. A reduction of the pre-enrichment period from 18 to 6 hours and of volumes used in pre-enrichment and selective enrichment from 10 and 100 ml, respectively to 1 and 10 ml, respectively were found to have adverse influence upon the result of isolations, in particular in the case of weakly infected samples. In contrast, extended incubation over 48 hours as well as preparation of two sub-cultures on solid selective media following incubation of enrichment cultures over 18-24 hours and 42-48 hours, respectively always resulted in a certain increase of salmonella yield which, however, exhibited gradual differences for the individual methods examined. Preparation of a 2nd sub-culture meant, in particular, a decisive improvement of the result of isolations from artificially infected samples if selenite-cystine enrichment volumes were 10 and 100 ml, respectively. The best results could be obtained by means of the following methods of enrichment: Pre-enrichment of material in buffered peptone water at 37 degrees C over 18 hours; pipetting of 10 ml inoculated and incubated pre-enriched material into 100 ml selenite-cystine or tetrathionate enrichment medium according to MULLER-KAUFFMANN; onward incubation of the enrichment culture at 43 degrees C over 48 hours; and preparation of sub-cultures on solid selective media after 24 and 48 hours. 
The method using tetrathionate enrichment medium was found to be the most expensive; its results, however, were the most consistent.

  8. Variable selection under multiple imputation using the bootstrap in a prognostic study

    PubMed Central

    Heymans, Martijn W; van Buuren, Stef; Knol, Dirk L; van Mechelen, Willem; de Vet, Henrica CW

    2007-01-01

    Background Missing data is a challenging problem in many prognostic studies. Multiple imputation (MI) accounts for imputation uncertainty and allows for adequate statistical testing. We developed and tested a methodology combining MI with bootstrapping techniques for studying prognostic variable selection. Method In our prospective cohort study we merged data from three different randomized controlled trials (RCTs) to assess prognostic variables for chronicity of low back pain. Among the outcome and prognostic variables, data were missing in the range of 0% to 48.1%. We used four methods to investigate the influence of sampling and imputation variation, respectively: MI only, bootstrap only, and two methods that combine MI and bootstrapping. Variables were selected based on the inclusion frequency of each prognostic variable, i.e. the proportion of times that the variable appeared in the model. The discriminative and calibrative abilities of prognostic models developed by the four methods were assessed at different inclusion levels. Results We found that the effect of imputation variation on the inclusion frequency was larger than the effect of sampling variation. When MI and bootstrapping were combined over the range of 0% (full model) to 90% variable selection, bootstrap-corrected c-index values of 0.70 to 0.71 and slope values of 0.64 to 0.86 were found. Conclusion We recommend accounting for both imputation and sampling variation in data sets with missing values. The new procedure of combining MI with bootstrapping for variable selection results in multivariable prognostic models with good performance and is therefore attractive to apply to data sets with missing values. PMID:17629912
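
    The inclusion-frequency idea is easy to sketch: resample the (imputed) data with the bootstrap, run a selection step on each resample, and record the proportion of resamples in which each variable is selected. For brevity this sketch uses a single mean-fill imputation and a crude coefficient-magnitude rule, where the paper combines multiple imputation with bootstrapping; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=n)  # only vars 0 and 2 matter
X[rng.random((n, p)) < 0.2] = np.nan                    # ~20% missing

# Single mean imputation (stand-in for multiple imputation).
X_imp = np.where(np.isnan(X), np.nanmean(X, axis=0), X)

B = 200
counts = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, n)                         # bootstrap resample
    b, *_ = np.linalg.lstsq(X_imp[idx], y[idx], rcond=None)
    counts += np.abs(b) > 0.5                           # crude selection rule
inclusion = counts / B
print(np.round(inclusion, 2))   # variables 0 and 2 near 1.0, the rest near 0.0
```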

  9. Filtration of water-sediment samples for the determination of organic compounds

    USGS Publications Warehouse

    Sandstrom, Mark W.

    1995-01-01

    This report describes the equipment and procedures used for on-site filtration of surface-water and ground-water samples for determination of organic compounds. Glass-fiber filters and a positive displacement pumping system are suitable for processing most samples for organic analyses. An optional system that uses disposable in-line membrane filters is suitable for a specific gas chromatography/mass spectrometry, selected-ion monitoring analytical method for determination of organonitrogen herbicides. General procedures to minimize contamination of the samples include preparing a clean workspace at the site, selecting appropriate sample-collection materials, and cleaning of the equipment with detergent, tap water, and methanol.

  10. Improved Sparse Multi-Class SVM and Its Application for Gene Selection in Cancer Classification

    PubMed Central

    Huang, Lingkang; Zhang, Hao Helen; Zeng, Zhao-Bang; Bushel, Pierre R.

    2013-01-01

    Background Microarray techniques provide promising tools for cancer diagnosis using gene expression profiles. However, molecular diagnosis based on high-throughput platforms presents great challenges due to the overwhelming number of variables versus the small sample size and the complex nature of multi-type tumors. Support vector machines (SVMs) have shown superior performance in cancer classification due to their ability to handle high dimensional low sample size data. The multi-class SVM algorithm of Crammer and Singer provides a natural framework for multi-class learning. Despite its effective performance, the procedure utilizes all variables without selection. In this paper, we propose to improve the procedure by imposing shrinkage penalties in learning to enforce solution sparsity. Results The original multi-class SVM of Crammer and Singer is effective for multi-class classification but does not conduct variable selection. We improved the method by introducing soft-thresholding type penalties to incorporate variable selection into multi-class classification for high dimensional data. The new methods were applied to simulated data and two cancer gene expression data sets. The results demonstrate that the new methods can select a small number of genes for building accurate multi-class classification rules. Furthermore, the important genes selected by the methods overlap significantly, suggesting general agreement among different variable selection schemes. Conclusions High accuracy and sparsity make the new methods attractive for cancer diagnostics with gene expression data and defining targets of therapeutic intervention. Availability: The source MATLAB code is available from http://math.arizona.edu/~hzhang/software.html. PMID:23966761

  11. Contrast enhanced spectroscopic optical coherence tomography

    NASA Technical Reports Server (NTRS)

    Xu, Chenyang (Inventor); Boppart, Stephen A. (Inventor)

    2010-01-01

    A method of forming an image of a sample includes performing SOCT on a sample. The sample may include a contrast agent, which may include an absorbing agent and/or a scattering agent. A method of forming an image of tissue may include selecting a contrast agent, delivering the contrast agent to the tissue, acquiring SOCT data from the tissue, and converting the SOCT data into an image. The contributions to the SOCT data of an absorbing agent and a scattering agent in a sample may be quantified separately.

  12. Characterisation of a reference site for quantifying uncertainties related to soil sampling.

    PubMed

    Barbizzi, Sabrina; de Zorzi, Paolo; Belli, Maria; Pati, Alessandra; Sansone, Umberto; Stellato, Luisa; Barbina, Maria; Deluisa, Andrea; Menegon, Sandro; Coletti, Valter

    2004-01-01

    The paper reports a methodology adopted to face problems related to quality assurance in soil sampling. The SOILSAMP project, funded by the Environmental Protection Agency of Italy (APAT), is aimed at (i) establishing protocols for soil sampling in different environments; (ii) assessing uncertainties associated with different soil sampling methods in order to select the "fit-for-purpose" method; (iii) qualifying, in terms of trace element spatial variability, a reference site for national and international inter-comparison exercises. Preliminary results and considerations are illustrated.

  13. 40 CFR 761.243 - Standard wipe sample method and size.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., AND USE PROHIBITIONS Determining a PCB Concentration for Purposes of Abandonment or Disposal of Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe.../Rinse Cleanup as Recommended by the Environmental Protection Agency PCB Spill Cleanup Policy,” dated...

  14. Concentration comparison of selected constituents between groundwater samples collected within the Missouri River alluvial aquifer using purge and pump and grab-sampling methods, near the city of Independence, Missouri, 2013

    USGS Publications Warehouse

    Krempa, Heather M.

    2015-10-29

    Relative percent differences between methods were greater than 10 percent for most analyzed trace elements. Barium, cobalt, manganese, and boron had concentrations that were significantly different between sampling methods. Barium, molybdenum, boron, and uranium concentrations indicate a close association between pump and grab samples based on bivariate plots and simple linear regressions. Grab sample concentrations were generally larger than pump sample concentrations for these elements, possibly because a larger pore-size filter was used for the grab samples. Analysis of zinc blank samples suggests zinc contamination in filtered grab samples. Variation in analyzed trace elements between pump and grab samples could reduce the ability to monitor temporal changes and potential groundwater contamination threats. The degree of precision necessary for monitoring potential groundwater threats and the application objectives need to be considered when determining acceptable levels of variation.
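    The 10 percent screening criterion above is based on the relative percent difference (RPD) between paired results from the two methods. A small sketch of that computation, using hypothetical pump/grab concentration pairs rather than the report's actual values:

```python
def relative_percent_difference(a, b):
    """RPD between two paired measurements, expressed as a percentage
    of their mean: |a - b| / ((a + b) / 2) * 100."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

# Hypothetical paired pump vs. grab concentrations (micrograms per liter)
pairs = {"barium": (120.0, 138.0), "boron": (45.0, 49.5)}
for name, (pump, grab) in pairs.items():
    print(name, round(relative_percent_difference(pump, grab), 1))
```

    An RPD above 10 for a given element would flag it as differing notably between the two sampling methods.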

  15. Approaches to sampling and case selection in qualitative research: examples in the geography of health.

    PubMed

    Curtis, S; Gesler, W; Smith, G; Washburn, S

    2000-04-01

    This paper focuses on the question of sampling (or selection of cases) in qualitative research. Although the literature includes some very useful discussions of qualitative sampling strategies, the question of sampling often seems to receive less attention in methodological discussion than questions of how data are collected or analysed. Decisions about sampling are likely to be important in many qualitative studies (although it may not be an issue in some research). There are varying accounts of the principles applicable to sampling or case selection. Those who espouse 'theoretical sampling', based on a 'grounded theory' approach, are in some ways opposed to those who promote forms of 'purposive sampling' suitable for research informed by an existing body of social theory. Diversity also results from the many different methods for drawing purposive samples which are applicable to qualitative research. We explore the value of a framework suggested by Miles and Huberman [Miles, M., Huberman, A., 1994. Qualitative Data Analysis, Sage, London.], to evaluate the sampling strategies employed in three examples of research by the authors. Our examples comprise three studies which respectively involve selection of: 'healing places'; rural places which incorporated national anti-malarial policies; young male interviewees, identified as either chronically ill or disabled. The examples are used to show how in these three studies the (sometimes conflicting) requirements of the different criteria were resolved, as well as the potential and constraints placed on the research by the selection decisions which were made. We also consider how far the criteria Miles and Huberman suggest seem helpful for planning 'sample' selection in qualitative research.

  16. Determination of trichloroanisole and trichlorophenol in wineries' ambient air by passive sampling and thermal desorption-gas chromatography coupled to tandem mass spectrometry.

    PubMed

    Camino-Sánchez, F J; Bermúdez-Peinado, R; Zafra-Gómez, A; Ruíz-García, J; Vílchez-Quero, J L

    2015-02-06

    The present paper describes the calibration of selected passive samplers used in the quantitation of trichlorophenol and trichloroanisole in wineries' ambient air, by calculating the corresponding sampling rates. The method is based on passive sampling with sorbent tubes and involves thermal desorption-gas chromatography-triple quadrupole mass spectrometry analysis. Three commercially available sorbents were tested using sampling cartridges with a radial design instead of axial ones. The best results were found for Tenax TA™. Sampling rates (R-values) for the selected sorbents were determined. Passive sampling was also used for accurately determining the amount of compounds present in the air. Adequate correlation coefficients between the mass of the target analytes and exposure time were obtained. The proposed validated method is a useful tool for the early detection of trichloroanisole and its precursor trichlorophenol in wineries' ambient air while avoiding contamination of wine or winery facilities. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Validating Analytical Protocols to Determine Selected Pesticides and PCBs Using Routine Samples.

    PubMed

    Pindado Jiménez, Oscar; García Alonso, Susana; Pérez Pastor, Rosa María

    2017-01-01

    This study aims at providing recommendations concerning the validation of analytical protocols by using routine samples. It is intended to provide a case study on how to validate analytical methods in different environmental matrices. In order to analyze the selected compounds (pesticides and polychlorinated biphenyls) in two different environmental matrices, the current work performed and validated two analytical procedures by GC-MS. A description is given of the validation of the two protocols by the analysis of more than 30 samples of water and sediments collected over nine months. The present work also addresses the uncertainty associated with both analytical protocols. In detail, the uncertainty for the water samples was estimated through a conventional approach, whereas for the sediment matrices the estimation of proportional/constant bias was also included because of their inhomogeneity. Results for the sediment matrix are reliable, showing a range of 25-35% of analytical variability associated with intermediate conditions. The analytical methodology for the water matrix determines the selected compounds with acceptable recoveries, and the combined uncertainty ranges between 20 and 30%. The analysis of routine samples is rarely used to assess the trueness of novel analytical methods, and up to now this methodology had not been focused on organochlorine compounds in environmental matrices.

  18. Determination of selected quaternary ammonium compounds by liquid chromatography with mass spectrometry. Part II. Application to sediment and sludge samples in Austria.

    PubMed

    Martínez-Carballo, Elena; González-Barreiro, Carmen; Sitka, Andrea; Kreuzinger, Norbert; Scharf, Sigrid; Gans, Oliver

    2007-03-01

    Soxhlet extraction and high-performance liquid chromatography (HPLC) coupled to tandem mass spectrometry detection (MS/MS) were used for the determination of selected quaternary ammonium compounds (QACs) in solid samples. The method was applied for the determination of alkyl benzyl, dialkyl and trialkyl quaternary ammonium compounds in sediment and sludge samples in Austria. The overall method quantification limits range from 0.6 to 3 microg/kg for sediments and from 2 to 5 microg/kg for sewage sludges. Mean recoveries between 67% and 95% were achieved. In general, sediments were especially contaminated by C12 chain benzalkonium chloride (BAC-C12) as well as by the long C-chain dialkyldimethylammonium chloride (DDAC-C18), with maximum concentrations of 3.6 mg/kg and 2.1 mg/kg, respectively. Maxima of 27 mg/kg for DDAC-C10, 25 mg/kg for BAC-C12 and 23 mg/kg for BAC-C14 were determined for sludge samples. The sums of the 12 selected target compounds range from 22 mg/kg to 103 mg/kg in the sludge samples.

  19. Synthesis of surface Cr (VI)-imprinted magnetic nanoparticles for selective dispersive solid-phase extraction and determination of Cr (VI) in water samples.

    PubMed

    Qi, Xue; Gao, Shuang; Ding, Guosheng; Tang, An-Na

    2017-01-01

    A facile, rapid and selective magnetic dispersed solid-phase extraction (dSPE) method for the extraction and enrichment of Cr(VI) prior to flame atomic absorption spectrometry (AAS) was introduced. For highly selective and efficient extraction, magnetic Cr(VI)-imprinted nanoparticles (Fe3O4@Cr(VI)-IIPs) were prepared by hyphenating surface ion-imprinting with sol-gel techniques. In the preparation process, chromate (Cr(VI)) was used as the template ion; vinylimidazole and 3-aminopropyltriethoxysilane were selected as organic functional monomer and co-monomer, respectively. Another reagent, methacryloxypropyltrimethoxysilane, was adopted as coupling agent to form stable covalent bonding between the organic and inorganic phases. The effects of various parameters on the extraction efficiency, such as pH of sample solution, the amount of adsorbent, extraction time, and the type and concentration of eluent, were systematically investigated. Furthermore, the thermodynamic and kinetic properties of the adsorption process were studied to explore the internal adsorption mechanism. Under optimized conditions, the preconcentration factor, limit of detection and linear range of the established dSPE-AAS method for Cr(VI) were found to be 98, 0.29 μg L-1 and 4-140 μg L-1, respectively. The developed method was also successfully applied to the analysis of Cr(VI) in different water samples with satisfactory results, proving its reliability and feasibility in real sample analysis. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Study of The Vector Product using Three Dimensions Vector Card of Engineering in Pathumwan Institute of Technology

    NASA Astrophysics Data System (ADS)

    Mueanploy, Wannapa

    2015-06-01

    The objective of this research was to offer a way to improve engineering students' learning of the vector product topic in physics. The sample for this research was engineering students at Pathumwan Institute of Technology during the first semester of the 2013 academic year. 1) 120 students selected by random sampling were asked to complete a satisfaction questionnaire in order to choose the size of the three-dimensional vector card to apply in the classroom. 2) 60 students selected by random sampling took the achievement test to validate it for classroom use; the test was analyzed with the Kuder-Richardson method (KR-20). The results showed that 12 items of the achievement test were appropriate for classroom use, with Difficulty (P) = 0.40-0.67, Discrimination = 0.33-0.73 and Reliability (r) = 0.70. For the classroom experiment: 3) 60 students selected by random sampling were divided into two groups; group one (the control group) of 30 students studied the vector product lesson by the regular teaching method, and group two (the experimental group) of 30 students learned the vector product lesson with the three-dimensional vector card. 4) Comparing the control group and the experimental group showed that the experimental group scored higher on the achievement test than the control group, significant at the .01 level.
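    The KR-20 reliability statistic used above to vet the achievement test can be computed directly from a matrix of dichotomous item scores. A minimal sketch, using a hypothetical 4-student, 3-item answer matrix rather than the study's data:

```python
def kr20(responses):
    """Kuder-Richardson 20 reliability for dichotomous (0/1) item scores.

    responses: one list per student, with a 0/1 entry per item.
    Uses the population variance of total scores, a common convention.
    """
    k = len(responses[0])                    # number of items
    n = len(responses)                       # number of students
    totals = [sum(student) for student in responses]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n
    sum_pq = 0.0
    for j in range(k):
        p = sum(student[j] for student in responses) / n  # item difficulty P
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_total)

# Hypothetical answer matrix: 4 students x 3 items
data = [[1, 1, 1], [1, 0, 1], [0, 0, 1], [0, 0, 0]]
print(round(kr20(data), 3))  # -> 0.75
```

    The per-item proportion correct computed inside the loop is the same Difficulty (P) index the abstract reports for each test item.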

  1. Capillary-valve-based fabrication of ion-selective membrane junction for electrokinetic sample preconcentration in PDMS chip.

    PubMed

    Liu, Vincent; Song, Yong-Ak; Han, Jongyoon

    2010-06-07

    In this paper, we report a novel method for fabricating ion-selective membranes in poly(dimethylsiloxane) (PDMS)/glass-based microfluidic preconcentrators. Based on the concept of capillary valves, this fabrication method involves filling a lithographically patterned junction between two microchannels with an ion-selective material such as Nafion resin; subsequent curing results in a high aspect-ratio membrane for use in electrokinetic sample preconcentration. To demonstrate the concentration performance of this high-aspect-ratio, ion-selective membrane, we integrated the preconcentrator with a surface-based immunoassay for R-Phycoerythrin (RPE). Using a 1x PBS buffer system, the preconcentrator-enhanced immunoassay showed an approximately 100x improvement in sensitivity within 30 min. This is the first time that an electrokinetic microfluidic preconcentrator based on ion concentration polarization (ICP) has been used in high ionic strength buffer solutions to enhance the sensitivity of a surface-based immunoassay.

  2. Training set optimization under population structure in genomic selection.

    PubMed

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, a sampling method that captures the most phenotypic variation in the TRS is desirable. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.

  3. Uniform Sampling Table Method and its Applications II--Evaluating the Uniform Sampling by Experiment.

    PubMed

    Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei

    2015-01-01

    A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes by four kinds of sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of occupation rate in a row and column was infinity, relative accuracy was 99.50-99.89%, reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method was easy to operate, and the selected samples were distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility, and can be put into use in drug analysis.
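    The relative standard deviation (RSD) reported above as the reproducibility index is straightforward to compute. A short sketch, using hypothetical pill weights in grams rather than the study's measurements:

```python
def rsd_percent(values):
    """Relative standard deviation: sample standard deviation
    expressed as a percentage of the mean."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return sd / mean * 100.0

# Hypothetical repeated pill weights (g) from one sampling method
weights = [0.500, 0.502, 0.498, 0.501, 0.499]
print(round(rsd_percent(weights), 2))  # -> 0.32
```

    A lower RSD across repeated samplings indicates better reproducibility of the sampling method.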

  4. VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS

    PubMed Central

    Huang, Jian; Horowitz, Joel L.; Wei, Fengrong

    2010-01-01

    We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
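    The group Lasso's ability to select whole components comes from its group-wise soft-thresholding (proximal) operator, which shrinks an entire coefficient group toward zero and zeroes it out when the group's Euclidean norm falls below the penalty level. A minimal sketch of that operator only; the paper's B-spline expansion and adaptive weights are omitted:

```python
def group_soft_threshold(beta, lam):
    """Group-wise soft-thresholding: the proximal operator of
    lam * ||beta||_2 applied to one coefficient group."""
    norm = sum(b * b for b in beta) ** 0.5
    if norm <= lam:
        return [0.0] * len(beta)     # whole group (additive component) dropped
    scale = 1.0 - lam / norm
    return [scale * b for b in beta]

print(group_soft_threshold([3.0, 4.0], lam=1.0))  # strong group: shrunk, kept
print(group_soft_threshold([0.1, 0.2], lam=1.0))  # weak group: zeroed out
```

    In the paper's setting, each group is the block of B-spline coefficients for one additive component, so zeroing a group removes that component from the model.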

  5. Romer Labs RapidChek®Listeria monocytogenes Test System for the Detection of L. monocytogenes on Selected Foods and Environmental Surfaces.

    PubMed

    Juck, Gregory; Gonzalez, Verapaz; Allen, Ann-Christine Olsson; Sutzko, Meredith; Seward, Kody; Muldoon, Mark T

    2018-04-27

    The Romer Labs RapidChek ® Listeria monocytogenes test system (Performance Tested Method ℠ 011805) was validated against the U.S. Department of Agriculture-Food Safety and Inspection Service Microbiology Laboratory Guidebook (USDA-FSIS/MLG), U.S. Food and Drug Administration Bacteriological Analytical Manual (FDA/BAM), and AOAC Official Methods of Analysis ℠ (AOAC/OMA) cultural reference methods for the detection of L. monocytogenes on selected foods including hot dogs, frozen cooked breaded chicken, frozen cooked shrimp, cured ham, and ice cream, and environmental surfaces including stainless steel and plastic in an unpaired study design. The RapidChek method uses a proprietary enrichment media system, a 44-48 h enrichment at 30 ± 1°C, and detects L. monocytogenes on an immunochromatographic lateral flow device within 10 min. Different L. monocytogenes strains were used to spike each of the matrices. Samples were confirmed based on the reference method confirmations and an alternate confirmation method. A total of 140 low-level spiked samples were tested by the RapidChek method after enrichment for 44-48 h in parallel with the cultural reference method. There were 88 RapidChek presumptive positives. One of the presumptive positives was not confirmed culturally. Additionally, one of the culturally confirmed samples did not exhibit a presumptive positive. No difference between the alternate confirmation method and reference confirmation method was observed. The respective cultural reference methods (USDA-FSIS/MLG, FDA/BAM, and AOAC/OMA) produced a total of 63 confirmed positive results. Nonspiked samples from all foods were reported as negative for L. monocytogenes by all methods. Probability of detection analysis demonstrated no significant differences in the number of positive samples detected by the RapidChek method and the respective cultural reference method.

  6. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
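    The three interval sampling methods compared in this simulation can be illustrated on a known activity trace, where the characteristic error of each method is directly visible. A minimal sketch (assuming 1-second resolution; this is not the paper's simulator):

```python
def interval_estimates(active, interval):
    """Estimate the proportion of time a behavior is active from a 0/1
    per-second trace, using three interval sampling methods."""
    chunks = [active[i:i + interval] for i in range(0, len(active), interval)]
    # Momentary time sampling: score only the instant at the end of each interval
    mts = sum(c[-1] for c in chunks) / len(chunks)
    # Partial-interval recording: score if the behavior occurs at all
    pir = sum(1 for c in chunks if any(c)) / len(chunks)
    # Whole-interval recording: score only if the behavior fills the interval
    wir = sum(1 for c in chunks if all(c)) / len(chunks)
    return mts, pir, wir

# One 60 s observation: behavior active during seconds 5-24 (true proportion 1/3)
trace = [1 if 5 <= t < 25 else 0 for t in range(60)]
true_p = sum(trace) / len(trace)
mts, pir, wir = interval_estimates(trace, interval=10)
print(true_p, mts, pir, wir)
```

    On this trace the classic pattern appears: partial-interval recording overestimates, whole-interval recording underestimates, and momentary time sampling lands near the true proportion.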

  7. Comparison of methods used for estimating pharmacist counseling behaviors.

    PubMed

    Schommer, J C; Sullivan, D L; Wiederholt, J B

    1994-01-01

    To compare the rates reported for provision of types of information conveyed by pharmacists among studies for which different methods of estimation were used and different dispensing situations were studied. Empiric studies conducted in the US, reported from 1982 through 1992, were selected from International Pharmaceutical Abstracts, MEDLINE, and noncomputerized sources. Empiric studies were selected for review if they reported the provision of at least three types of counseling information. Four components of methods used for estimating pharmacist counseling behaviors were extracted and summarized in a table: (1) sample type and area, (2) sampling unit, (3) sample size, and (4) data collection method. In addition, situations that were investigated in each study were compiled. Twelve studies met our inclusion criteria. Patients were interviewed via telephone in four studies and were surveyed via mail in two studies. Pharmacists were interviewed via telephone in one study and surveyed via mail in two studies. For three studies, researchers visited pharmacy sites for data collection using the shopper method or observation method. Studies with similar methods and situations provided similar results. Data collected by using patient surveys, pharmacist surveys, and observation methods can provide useful estimations of pharmacist counseling behaviors if researchers measure counseling for specific, well-defined dispensing situations.

  8. Classification of 'Chemlali' accessions according to the geographical area using chemometric methods of phenolic profiles analysed by HPLC-ESI-TOF-MS.

    PubMed

    Taamalli, Amani; Arráez Román, David; Zarrouk, Mokhtar; Segura-Carretero, Antonio; Fernández-Gutiérrez, Alberto

    2012-05-01

    The present work describes a classification method for Tunisian 'Chemlali' olive oils based on their phenolic composition and geographical area. For this purpose, the data obtained by HPLC-ESI-TOF-MS from 13 samples of extra virgin olive oils, obtained from different production areas throughout the country, were used for this study, focusing on the 23 phenolic compounds detected. The quantitative results showed significant variability among the analysed oil samples. Factor analysis using principal components was applied to the data in order to reduce the number of factors which explain the variability of the selected compounds. The data matrix constructed was subjected to a canonical discriminant analysis (CDA) in order to classify the oil samples. These results showed that 100% of cross-validated original group cases were correctly classified, which proves the usefulness of the selected variables. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. [Development of selective determination methods for quinones with fluorescence and chemiluminescence detection and their application to environmental and biological samples].

    PubMed

    Kishikawa, Naoya

    2010-10-01

    Quinones are compounds with various roles: biological electron transporters, industrial products, and harmful environmental pollutants. Therefore, an effective determination method for quinones is required in many fields. This review describes the development of sensitive and selective determination methods for quinones based on several detection principles and their application to analyses in environmental, pharmaceutical and biological samples. Firstly, a fluorescence method was developed based on fluorogenic derivatization of quinones and applied to environmental analysis. Secondly, a luminol chemiluminescence method was developed based on generation of reactive oxygen species through the redox cycle of quinones and applied to pharmaceutical analysis. Thirdly, a photo-induced chemiluminescence method was developed based on formation of reactive oxygen species and a fluorophore or chemiluminescence enhancer by the photoreaction of quinones and applied to biological and environmental analyses.

  10. Statistical methods for efficient design of community surveys of response to noise: Random coefficients regression models

    NASA Technical Reports Server (NTRS)

    Tomberlin, T. J.

    1985-01-01

    Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed here provide a basis for the sample design decisions in such studies. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
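    For the allocation question in this abstract, the classic two-stage sampling result gives the optimal number of respondents per study area under a simple linear cost function C = c_area·m + c_resident·m·n. The report's actual cost function and variance components are not reproduced here, so the sketch below uses that textbook formulation with hypothetical numbers:

```python
import math

def optimal_residents_per_area(c_area, c_resident, var_between, var_within):
    """Optimal subsample size n per area in a two-stage design that
    minimizes the variance of the mean for a fixed total cost
    C = c_area*m + c_resident*m*n (classic two-stage sampling result):
    n_opt = sqrt((c_area / c_resident) * (var_within / var_between))."""
    return math.sqrt((c_area / c_resident) * (var_within / var_between))

# Hypothetical costs and variance components
n_opt = optimal_residents_per_area(c_area=200.0, c_resident=8.0,
                                   var_between=1.0, var_within=4.0)
print(round(n_opt, 1))  # -> 10.0
```

    Intuitively, costlier study areas or larger within-area variance push toward interviewing more residents per area, while strong between-area variation pushes toward sampling more areas.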

  11. Improved sample treatment for the determination of insoluble soap in sewage sludge samples by liquid chromatography with fluorescence detection.

    PubMed

    Cantarero, Samuel; Zafra-Gómez, A; Ballesteros, O; Navalón, A; Vílchez, J L; Crovetto, G; Verge, C; de Ferrer, J A

    2010-09-15

    A new selective and sensitive method for the determination of insoluble fatty acid salts (soap) in sewage sludge samples is proposed. The method involves a clean-up of the sample with petroleum ether, the conversion of calcium and magnesium insoluble salts into soluble potassium salts, potassium salt extraction with methanol, and a derivatization procedure prior to liquid chromatography with fluorescence detection (LC-FLD) analysis. Three different extraction techniques (Soxhlet, microwave-assisted extraction and ultrasound) were compared, and microwave-assisted extraction (MAE) was selected as appropriate for our purpose. This reduced the extraction time and solvent waste (50 mL of methanol in contrast with 250 mL for the Soxhlet procedure). The absence of matrix effect was demonstrated with two standards (C(13:0) and C(17:0)) that are not commercial and have not been detected in sewage sludge samples. Therefore, it was possible to evaluate the matrix effect, since both standards have environmental behaviour (adsorption and precipitation) similar to commercial soaps (C(10:0)-C(18:0)). The method was successfully applied to samples from different sources and, consequently, with different compositions. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  12. Comparison of various liquid chromatographic methods involving UV and atmospheric pressure chemical ionization mass spectrometric detection for the efficient trace analysis of phenylurea herbicides in various types of water samples.

    PubMed

    van der Heeft, E; Dijkman, E; Baumann, R A; Hogendoorn, E A

    2000-05-19

    The performance of mass spectrometric (MS) detection and UV detection in combination with reversed-phase liquid chromatography without and with the use of coupled column RPLC (LC-LC) has been compared for the trace analysis of phenylurea herbicides in environmental waters. The selected samples of this comparative study originated from an inter-laboratory study. For both detection modes, a 50 mm x 4.6 mm I.D. column and a 100 mm x 4.6 mm I.D. column packed with 3 microm C18 were used as the first (C-1) and second (C-2) column, respectively. Atmospheric pressure chemical ionization mass spectrometry was performed on a magnetic sector instrument. The LC-LC-MS analysis was carried out on-line by means of direct large volume (11.7 ml) injection (LVI). The performance of both on-line (LVI, 4 ml of sample) and off-line LC-LC-UV (244 nm) analysis was investigated. The latter procedure consisted of a solid-phase extraction (SPE) of 250 ml of water sample on a 500 mg C18 cartridge. The comparative study showed that LC-LC-MS is more selective then LC-LC-UV and, in most cases, more sensitive. The LVI-LC-LC-MS approach combines direct quantification and confirmation of most of the analytes down to a level of 0.01 microg/l in water samples in less then 30 min. As regards LC-LC-UV, the off-line method appeared to be a more viable approach in comparison with the on-line procedure. This method allows the screening of phenylurea's in various types of water samples down to a level of at least 0.05 microg/l. On-line analysis with LVI provided marginal sensitivity (limits of detection of about 0.1 microg/l) and selectivity was sometimes less in case of surface water samples. Both the on-line LVI-LC-LC-MS method and the LC-LC-UV method using off-line SPE were validated by analysing a series of real-life reference samples. These samples were part of an inter-laboratory test and contained residues of herbicides ranging from 0.02 to 0.8 microg/l. 
Besides the good correlation between the methods, the data agreed very well with the true values of the samples.

  13. Duplex sampling apparatus and method

    DOEpatents

    Brown, Paul E.; Lloyd, Robert

    1992-01-01

    An improved apparatus is provided for sampling a gaseous mixture and for measuring mixture components. The apparatus includes two sampling containers connected in series, serving as a duplex sampling apparatus. The apparatus is adapted to independently determine the amounts of condensable and noncondensable gases in admixture from a single sample. More specifically, a first container includes a first port capable of selectively connecting to and disconnecting from a sample source and a second port capable of selectively connecting to and disconnecting from a second container. The second container also includes a first port capable of selectively connecting to and disconnecting from the second port of the first container and a second port capable of selectively connecting to and disconnecting from a differential pressure source. By cooling a mixture sample in the first container, the condensable vapors form a liquid, leaving noncondensable gases either as free gases or dissolved in the liquid. The condensed liquid is heated to drive out dissolved noncondensable gases, and all the noncondensable gases are transferred to the second container. Then the first and second containers are separated from one another in order to separately determine the amount of noncondensable gases and the amount of condensable gases in the sample.

  14. Utility of Work Samples

    ERIC Educational Resources Information Center

    Muchinsky, Paul M.

    1975-01-01

    A work sample test can provide a high degree of content validity, and offers a practical method of screening job applicants in accordance with guidelines on employee selection procedures set forth by the Equal Employment Opportunity Commission. (MW)

  15. Nondestructive equipment study

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Identification of existing nondestructive evaluation (NDE) methods that could be used in a low Earth orbit environment; evaluation of each method with respect to the set of criteria called out in the statement of work; selection of the most promising NDE methods for further evaluation; use of selected NDE methods to test samples of pressure vessel materials in a vacuum; pressure testing of a complex monolithic pressure vessel with known flaws using acoustic emissions in a vacuum; and recommendations for further studies based on analysis and testing are covered.

  16. Towards the harmonization between National Forest Inventory and Forest Condition Monitoring. Consistency of plot allocation and effect of tree selection methods on sample statistics in Italy.

    PubMed

    Gasparini, Patrizia; Di Cosmo, Lucio; Cenni, Enrico; Pompei, Enrico; Ferretti, Marco

    2013-07-01

    In the frame of a process aiming at harmonizing National Forest Inventory (NFI) and ICP Forests Level I Forest Condition Monitoring (FCM) in Italy, we investigated (a) the long-term consistency between FCM sample points (a subsample of the first NFI, 1985, NFI_1) and recent forest area estimates (after the second NFI, 2005, NFI_2) and (b) the effect of tree selection method (tree-based or plot-based) on sample composition and defoliation statistics. The two investigations were carried out on 261 and 252 FCM sites, respectively. Results show that some individual forest categories (larch and stone pine, Norway spruce, other coniferous, beech, temperate oaks and cork oak forests) are over-represented and others (hornbeam and hophornbeam, other deciduous broadleaved and holm oak forests) are under-represented in the FCM sample. This is probably due to a change in forest cover, which has increased by 1,559,200 ha from 1985 to 2005. In the case of a shift from a tree-based to a plot-based selection method, 3,130 (46.7%) of the original 6,703 sample trees will be abandoned, and 1,473 new trees will be selected. The balance between exclusion of former sample trees and inclusion of new ones will be particularly unfavourable for conifers (with only 16.4% of excluded trees replaced by new ones) and less so for deciduous broadleaves (with 63.5% of excluded trees replaced). The total number of tree species surveyed will not be impacted, while the number of trees per species will, and the resulting (plot-based) sample composition will have a much larger frequency of deciduous broadleaved trees. The newly selected trees have, in general, smaller diameters at breast height (DBH) and lower defoliation scores. Given the larger rate of turnover, the deciduous broadleaved part of the sample will be more impacted. 
Our results suggest that both a revision of the FCM network to account for forest area change and a plot-based approach to permit statistical inference and avoid bias in the tree sample composition in terms of DBH (and likely age and structure) are desirable in Italy. As the adoption of a plot-based approach will keep a large share of the trees formerly selected, direct tree-by-tree comparison will remain possible, thus limiting the impact on the time series comparability. In addition, the plot-based design will favour the integration with NFI_2.

  17. An ion-selective electrode method for determination of chlorine in geological materials

    USGS Publications Warehouse

    Aruscavage, P. J.; Campbell, E.Y.

    1983-01-01

    A method is presented for the determination of chlorine in geological materials, in which a chloride-selective ion electrode is used after decomposition of the sample with hydrofluoric acid and separation of chlorine in a gas-diffusion cell. Data are presented for 30 geological standard materials. The relative standard deviation of the method is estimated to be better than 8% for amounts of chloride of 10 µg and greater. © 1983.

  18. Adaptive web sampling.

    PubMed

    Thompson, Steven K

    2006-12-01

    A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
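
    The sequential, mixture-based selection described above can be sketched as follows. This is a rough illustration only: the toy graph, the mixing weight `p_active`, and the function name are assumptions, not Thompson's exact design, and the inference step (averaging over sample paths) is not shown.

```python
import random

def adaptive_web_sample(graph, n_sample, p_active=0.7, seed=0):
    """Sketch of adaptive web sampling: with probability p_active,
    follow a link out of the current active set (adaptive draw);
    otherwise draw uniformly from the unsampled units, which keeps
    control over the proportion of effort allocated adaptively."""
    rng = random.Random(seed)
    nodes = list(graph)
    sample = [rng.choice(nodes)]              # initial seed unit
    active = set(sample)
    while len(sample) < n_sample:
        # links from the active set into unsampled territory
        frontier = {v for u in active for v in graph[u] if v not in active}
        if frontier and rng.random() < p_active:
            nxt = rng.choice(sorted(frontier))     # link-tracing draw
        else:
            remaining = [v for v in nodes if v not in active]
            if not remaining:
                break
            nxt = rng.choice(remaining)            # uniform draw
        sample.append(nxt)
        active.add(nxt)
    return sample

# toy network: one linked cluster, one pair, and isolated units
g = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3], 5: [], 6: []}
print(adaptive_web_sample(g, 5))
```

    When a node in a cluster is hit, the link-tracing draws tend to pull in its neighbours, mimicking how the design concentrates effort around discovered network structure.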

  19. Analysis of arsenical metabolites in biological samples.

    PubMed

    Hernandez-Zavala, Araceli; Drobna, Zuzana; Styblo, Miroslav; Thomas, David J

    2009-11-01

    Quantitation of inorganic arsenic (iAs) and its methylated metabolites in biological samples provides dosimetric information needed to understand dose-response relations. Here, methods are described for separation of inorganic and mono-, di-, and trimethylated arsenicals by thin layer chromatography. This method has been extensively used to track the metabolism of the radionuclide [(73)As] in a variety of in vitro assay systems. In addition, a hydride generation-cryotrapping-gas chromatography-atomic absorption spectrometric method is described for the quantitation of arsenicals in biological samples. This method uses pH-selective hydride generation to differentiate among arsenicals containing trivalent or pentavalent arsenic.

  20. Comparison between Thermal Desorption Tubes and Stainless Steel Canisters Used for Measuring Volatile Organic Compounds in Petrochemical Factories

    PubMed Central

    Chang, Cheng-Ping; Lin, Tser-Cheng; Lin, Yu-Wen; Hua, Yi-Chun; Chu, Wei-Ming; Lin, Tzu-Yu; Lin, Yi-Wen; Wu, Jyun-De

    2016-01-01

    Objective: The purpose of this study was to compare thermal desorption tubes and stainless steel canisters for measuring volatile organic compounds (VOCs) emitted from petrochemical factories. Methods: Twelve petrochemical factories in the Mailiao Industrial Complex were recruited for conducting the measurements of VOCs. Thermal desorption tubes and 6-l specially prepared stainless steel canisters were used to simultaneously perform active sampling of environmental air samples. The sampling time of the environmental air samples was set at 6 h, close to a full work shift of the workers. A total of 94 pairwise air samples were collected by using the thermal desorption tubes and stainless steel canisters in these 12 factories in the petrochemical industrial complex. To maximize the number of comparative data points, all the measurements from all the factories in different sampling times were lumped together to perform a linear regression analysis for each selected VOC. Pearson product–moment correlation coefficient was used to examine the correlation between the pairwise measurements of these two sampling methods. A paired t-test was also performed to examine whether the difference in the concentrations of each selected VOC measured by the two methods was statistically significant. Results: The correlation coefficients of seven compounds, including acetone, n-hexane, benzene, toluene, 1,2-dichloroethane, 1,3-butadiene, and styrene, were >0.80, indicating that the two sampling methods for these VOCs’ measurements had high consistency. The paired t-tests for the measurements of n-hexane, benzene, m/p-xylene, o-xylene, 1,2-dichloroethane, and 1,3-butadiene showed statistically significant differences (P-value < 0.05). This indicated that the two sampling methods had various degrees of systematic error. 
For these six chemicals, the systematic errors probably resulted from differences in the detection limits of the two sampling methods. Conclusions: The comparison between the concentrations of each of the 10 selected VOCs measured by the two sampling methods indicated that the thermal desorption tubes provided high accuracy and precision measurements for acetone, benzene, and 1,3-butadiene. The accuracy and precision of using the thermal desorption tubes for measuring the VOCs can be improved due to new developments in sorbent materials, multi-sorbent designs, and thermal desorption instrumentation. More applications of thermal desorption tubes for measuring occupational and environmental hazardous agents can be anticipated. PMID:26585828

  1. Training sample selection based on self-training for liver cirrhosis classification using ultrasound images

    NASA Astrophysics Data System (ADS)

    Fujita, Yusuke; Mitani, Yoshihiro; Hamamoto, Yoshihiko; Segawa, Makoto; Terai, Shuji; Sakaida, Isao

    2017-03-01

    Ultrasound imaging is a popular and non-invasive tool used in the diagnosis of liver disease. Cirrhosis is a chronic liver disease and it can advance to liver cancer. Early detection and appropriate treatment are crucial to prevent liver cancer. However, ultrasound image analysis is very challenging because of the low signal-to-noise ratio of ultrasound images. To achieve higher classification performance, the selection of training regions of interest (ROIs), which affects classification accuracy, is very important. The purpose of our study is cirrhosis detection with high accuracy using liver ultrasound images. In our previous works, training ROI selection by MILBoost and multiple-ROI classification based on the product rule were proposed to achieve high classification performance. In this article, we propose a self-training method to select training ROIs effectively. Evaluation experiments were performed to assess the effect of self-training, using both manually and automatically selected ROIs. Experimental results show that self-training with manually selected ROIs achieved higher classification performance than other approaches, including our conventional methods. Manual ROI definition and sample selection are important to improve classification accuracy in cirrhosis detection using ultrasound images.
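
    A minimal self-training sketch in the spirit of the approach above, assuming a toy nearest-centroid classifier over two-dimensional feature vectors; the synthetic features, the confidence margin, and the function name are hypothetical stand-ins for the paper's actual ultrasound texture features and classifier.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unl, rounds=3, margin=0.5):
    """Self-training sketch: fit a nearest-centroid classifier on the
    labelled ROIs, then repeatedly pseudo-label the unlabelled ROIs
    whose prediction is confident (large margin between the distances
    to the two class centroids) and add them to the training set."""
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    pool = X_unl.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        c0 = X_lab[y_lab == 0].mean(axis=0)
        c1 = X_lab[y_lab == 1].mean(axis=0)
        d0 = np.linalg.norm(pool - c0, axis=1)
        d1 = np.linalg.norm(pool - c1, axis=1)
        keep = np.abs(d0 - d1) > margin        # confidence = distance margin
        if not keep.any():
            break
        new_y = (d1 < d0).astype(int)[keep]    # pseudo-labels
        X_lab = np.vstack([X_lab, pool[keep]])
        y_lab = np.concatenate([y_lab, new_y])
        pool = pool[~keep]
    return X_lab, y_lab

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 0.3, size=(5, 2))         # class-0-like ROI features
X1 = rng.normal(2.0, 0.3, size=(5, 2))         # class-1-like ROI features
X_lab = np.vstack([X0[:2], X1[:2]])
y_lab = np.array([0, 0, 1, 1])
X_unl = np.vstack([X0[2:], X1[2:]])
X_out, y_out = self_train(X_lab, y_lab, X_unl)
print(len(y_out), "training ROIs after self-training")
```

    The margin test is the key design choice: only confidently classified ROIs enter the training set, which is what keeps label noise from snowballing across rounds.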

  2. Analyzing the Relationship between Positive Psychological Capital and Organizational Commitment of the Teachers

    ERIC Educational Resources Information Center

    Yalcin, Sinan

    2016-01-01

    This study aimed to determine the relationship between teachers' positive psychological capital levels and organisational commitment. The study was conducted as a correlational survey, which is one of the quantitative methods. The sample group consists of 244 teachers selected by using the random sampling method among 1270 teachers working in…

  3. Should We Trust Web-Based Studies? A Comparative Analysis of Six Preconceptions about Internet Questionnaires

    ERIC Educational Resources Information Center

    Gosling, Samuel D.; Vazire, Simine; Srivastava, Sanjay; John, Oliver P.

    2004-01-01

    The rapid growth of the Internet provides a wealth of new research opportunities for psychologists. Internet data collection methods, with a focus on self-report questionnaires from self-selected samples, are evaluated and compared with traditional paper-and-pencil methods. Six preconceptions about Internet samples and data quality are evaluated…

  4. Mobbing Experiences of Instructors: Causes, Results, and Solution Suggestions

    ERIC Educational Resources Information Center

    Celep, Cevat; Konakli, Tugba

    2013-01-01

    In this study, it was aimed to investigate possible mobbing problems in universities, their causes and results, and to draw attention to precautions that can be taken. Phenomenology, one of the qualitative research methods, was used in the study. The sample group of the study was selected through the criterion sampling method, and eight instructors…

  5. Mid-infrared spectroscopy combined with chemometrics to detect Sclerotinia stem rot on oilseed rape (Brassica napus L.) leaves.

    PubMed

    Zhang, Chu; Feng, Xuping; Wang, Jian; Liu, Fei; He, Yong; Zhou, Weijun

    2017-01-01

    Detection of plant diseases in a fast and simple way is crucial for timely disease control. Conventionally, plant diseases are accurately identified by DNA, RNA or serology based methods which are time consuming, complex and expensive. Mid-infrared spectroscopy is a promising technique that simplifies the detection procedure for the disease. Mid-infrared spectroscopy was used to identify the spectral differences between healthy and infected oilseed rape leaves. Two different sample sets from two experiments were used to explore and validate the feasibility of using mid-infrared spectroscopy in detecting Sclerotinia stem rot (SSR) on oilseed rape leaves. The average mid-infrared spectra showed differences between healthy and infected leaves, and the differences varied among different sample sets. Optimal wavenumbers for the 2 sample sets selected by the second derivative spectra were similar, indicating the efficacy of selecting optimal wavenumbers. Chemometric methods were further used to quantitatively detect the oilseed rape leaves infected by SSR, including partial least squares-discriminant analysis, support vector machine and extreme learning machine. The discriminant models using the full spectra and the optimal wavenumbers of the 2 sample sets were effective, with classification accuracies over 80%. The discriminant results for the 2 sample sets varied due to variations in the samples. The use of two sample sets proved and validated the feasibility of using mid-infrared spectroscopy and chemometric methods for detecting SSR on oilseed rape leaves. The similarities among the selected optimal wavenumbers in different sample sets made it feasible to simplify the models and build practical models. Mid-infrared spectroscopy is a reliable and promising technique for SSR control. This study helps in developing practical applications of mid-infrared spectroscopy combined with chemometrics to detect plant disease.
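
    The second-derivative wavenumber selection step mentioned above can be sketched on a synthetic spectrum; the band positions, widths, and the number of wavenumbers selected are illustrative assumptions, not values from the study.

```python
import numpy as np

def second_derivative_peaks(spectrum, wavenumbers, n_select=3):
    """Pick candidate wavenumbers where the second derivative of the
    spectrum has the largest magnitude -- a common band-selection
    heuristic, since curvature peaks at absorption band centres."""
    d2 = np.gradient(np.gradient(spectrum, wavenumbers), wavenumbers)
    idx = np.argsort(np.abs(d2))[-n_select:]
    return np.sort(wavenumbers[idx])

# synthetic mid-IR-like spectrum: two Gaussian bands on a flat baseline
wn = np.linspace(900.0, 1800.0, 901)
spec = (np.exp(-((wn - 1100.0) / 15.0) ** 2)
        + 0.5 * np.exp(-((wn - 1650.0) / 10.0) ** 2))
print(second_derivative_peaks(spec, wn))
```

    On this synthetic spectrum the sharper band near 1650 cm⁻¹ dominates the curvature ranking, illustrating why derivative-based selection favours narrow features over broad ones.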

  6. [Development of sample pretreatment techniques-rapid detection coupling methods for food security analysis].

    PubMed

    Huang, Yichun; Ding, Weiwei; Zhang, Zhuomin; Li, Gongke

    2013-07-01

    This paper summarizes the recent developments of the rapid detection methods for food security, such as sensors, optical techniques, portable spectral analysis, enzyme-linked immunosorbent assay, portable gas chromatograph, etc. Additionally, the applications of these rapid detection methods coupled with sample pretreatment techniques in real food security analysis are reviewed. The coupling technique has the potential to provide references to establish the selective, precise and quantitative rapid detection methods in food security analysis.

  7. Effects of uniformities of deposition of respirable particles on filters on determining their quartz contents by using the direct on-filter X-ray diffraction (DOF XRD) method.

    PubMed

    Chen, Ching-Hwa; Tsai, Perng-Jy; Lai, Chane-Yu; Peng, Ya-Lian; Soo, Jhy-Charm; Chen, Cheng-Yao; Shih, Tung-Sheng

    2010-04-15

    In this study, field samplings were conducted in three workplaces of a foundry plant: molding, demolding, and bead blasting. Three respirable aerosol samplers (a 25-mm aluminum cyclone, a nylon cyclone, and an IOSH cyclone) were used side-by-side to collect samples from each selected workplace. For each collected sample, the uniformity of the deposition of respirable dusts on the filter was measured and its free silica content was determined by both the DOF XRD method and the NIOSH 7500 XRD method (i.e., the reference method). The same trend in measured uniformities was found in all selected workplaces: 25-mm aluminum cyclone>nylon cyclone>IOSH cyclone. Even for samples collected by the sampler with the highest uniformity (i.e., the 25-mm aluminum cyclone), the use of the DOF XRD method led to measured free silica concentrations 1.15-2.89 times higher than those of the reference method. A new filter holder should be developed with a minimum uniformity comparable to that of the NIOSH 7500 XRD method (=0.78) in the future. The use of conversion factors for correcting quartz concentrations obtained from the DOF XRD method based on the measured uniformities could be suitable for the foundry industry at this stage. © 2009 Elsevier B.V. All rights reserved.

  8. A regularized variable selection procedure in additive hazards model with stratified case-cohort design.

    PubMed

    Ni, Ai; Cai, Jianwen

    2018-07-01

    Case-cohort designs are commonly used in large epidemiological studies to reduce the cost associated with covariate measurement. In many such studies the number of covariates is very large. An efficient variable selection method is needed for case-cohort studies where the covariates are only observed in a subset of the sample. Current literature on this topic has been focused on the proportional hazards model. However, in many studies the additive hazards model is preferred over the proportional hazards model either because the proportional hazards assumption is violated or the additive hazards model provides more relevant information to the research question. Motivated by one such study, the Atherosclerosis Risk in Communities (ARIC) study, we investigate the properties of a regularized variable selection procedure in stratified case-cohort design under an additive hazards model with a diverging number of parameters. We establish the consistency and asymptotic normality of the penalized estimator and prove its oracle property. Simulation studies are conducted to assess the finite sample performance of the proposed method with a modified cross-validation tuning parameter selection method. We apply the variable selection procedure to the ARIC study to demonstrate its practical use.

  9. Methods of Analysis by the U.S. Geological Survey National Water Quality Laboratory - Determination of Moderate-Use Pesticides and Selected Degradates in Water by C-18 Solid-Phase Extraction and Gas Chromatography/Mass Spectrometry

    USGS Publications Warehouse

    Sandstrom, Mark W.; Stroppel, Max E.; Foreman, William T.; Schroeder, Michael P.

    2001-01-01

    A method for the isolation and analysis of 21 parent pesticides and 20 pesticide degradates in natural-water samples is described. Water samples are filtered to remove suspended particulate matter and then are pumped through disposable solid-phase-extraction columns that contain octadecyl-bonded porous silica to extract the analytes. The columns are dried by using nitrogen gas, and adsorbed analytes are eluted with ethyl acetate. Extracted analytes are determined by capillary-column gas chromatography/mass spectrometry with selected-ion monitoring of three characteristic ions. The upper concentration limit is 2 micrograms per liter (µg/L) for most analytes. Single-operator method detection limits in reagent-water samples range from 0.001 to 0.057 µg/L. Validation data also are presented for 14 parent pesticides and 20 degradates that were determined to have greater bias or variability, or shorter holding times than the other compounds. The estimated maximum holding time for analytes in pesticide-grade water before extraction was 4 days. The estimated maximum holding time for analytes after extraction on the dry solid-phase-extraction columns was 7 days. An optional on-site extraction procedure allows for samples to be collected and processed at remote sites where it is difficult to ship samples to the laboratory within the recommended pre-extraction holding time. The method complements existing U.S. Geological Survey Method O-1126-95 (NWQL Schedules 2001 and 2010) by using identical sample preparation and comparable instrument analytical conditions so that sample extracts can be analyzed by either method to expand the range of analytes determined from one water sample.

  10. Dielectric breakdown of additively manufactured polymeric materials

    DOE PAGES

    Monzel, W. Jacob; Hoff, Brad W.; Maestas, Sabrina S.; ...

    2016-01-11

    Dielectric strength testing of selected Polyjet-printed polymer plastics was performed in accordance with ASTM D149. This dielectric strength data is compared to manufacturer-provided dielectric strength data for selected plastics printed using the stereolithography (SLA), fused deposition modeling (FDM), and selective laser sintering (SLS) methods. Tested Polyjet samples demonstrated dielectric strengths as high as 47.5 kV/mm for a 0.5 mm thick sample and 32.1 kV/mm for a 1.0 mm sample. As a result, the dielectric strength of the additively manufactured plastics evaluated as part of this study was lower than the majority of non-printed plastics by at least 15% (with the exception of polycarbonate).

  11. Dielectric breakdown of additively manufactured polymeric materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monzel, W. Jacob; Hoff, Brad W.; Maestas, Sabrina S.

    Dielectric strength testing of selected Polyjet-printed polymer plastics was performed in accordance with ASTM D149. This dielectric strength data is compared to manufacturer-provided dielectric strength data for selected plastics printed using the stereolithography (SLA), fused deposition modeling (FDM), and selective laser sintering (SLS) methods. Tested Polyjet samples demonstrated dielectric strengths as high as 47.5 kV/mm for a 0.5 mm thick sample and 32.1 kV/mm for a 1.0 mm sample. As a result, the dielectric strength of the additively manufactured plastics evaluated as part of this study was lower than the majority of non-printed plastics by at least 15% (with the exception of polycarbonate).

  12. Sampling Large Graphs for Anticipatory Analytics

    DTIC Science & Technology

    2015-05-15

    Random area sampling [8] is a “snowball” sampling method in which a set of random seed vertices are selected and areas... (Lauren Edwards, Luke Johnson, Maja Milosavljevic, Vijay Gadepally, Benjamin A. Miller; Lincoln Laboratory) ...systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges.
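
    A minimal sketch of the snowball-style "random area" idea described in the fragment above, assuming an undirected graph stored as an adjacency dict; the seed count, hop radius, and function name are arbitrary illustrative parameters, not the report's exact algorithm.

```python
import random
from collections import deque

def area_sample(graph, n_seeds=2, radius=1, seed=0):
    """'Random area' (snowball) sampling sketch: pick random seed
    vertices, then include every vertex within `radius` hops of a
    seed via a breadth-first expansion."""
    rng = random.Random(seed)
    seeds = rng.sample(sorted(graph), n_seeds)
    sampled = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        v, d = frontier.popleft()
        if d == radius:
            continue                      # stop expanding at the radius
        for w in graph[v]:
            if w not in sampled:
                sampled.add(w)
                frontier.append((w, d + 1))
    return sampled

g = {i: [(i + 1) % 10, (i - 1) % 10] for i in range(10)}  # 10-cycle
print(sorted(area_sample(g)))
```

    Because whole neighbourhoods are swept in at once, area sampling preserves local structure around each seed, at the cost of biasing the sample toward well-connected regions.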

  13. Improving sperm banking efficiency in endangered species through the use of a sperm selection method in brown bear (Ursus arctos) thawed sperm.

    PubMed

    Anel-Lopez, L; Ortega-Ferrusola, C; Álvarez, M; Borragán, S; Chamorro, C; Peña, F J; Morrell, J; Anel, L; de Paz, P

    2017-06-26

    Sperm selection methods such as Single Layer Centrifugation (SLC) have been demonstrated to be a useful tool to improve the quality of sperm samples and therefore to increase the efficiency of other artificial reproductive techniques in several species. This procedure could help to improve the quality of genetic resource banks, which is essential for endangered species. In contrast, these sperm selection methods are optimized for and focused on farm animals, where the recovery task is not as important as in endangered species because of their higher sperm availability. The aim of this study was to evaluate two centrifugation methods (300 x g/20 min and 600 x g/10 min) and three concentrations of SLC media (Androcoll-Bear 80, 65 and 50%) to optimise the procedure in order to recover as many sperm with the highest quality as possible. Sperm morphology could be important in the hydrodynamic relationship between the cell and centrifugation medium, and thus the effect of sperm head morphometry on sperm yield and its hydrodynamic relationship were studied. The samples selected with Androcoll-Bear 65% showed a very good yield (53.1 ± 2.9), although the yield from Androcoll-Bear 80% was lower (19.3 ± 3.3). The latter showed higher values of motility than the control immediately after post-thawing selection. However, both concentrations of colloid (65 and 80%) showed higher values of viable sperm and viable sperm with intact acrosome than the control. After an incubation of 2 h at 37 °C, the samples from Androcoll-Bear 80% had higher kinematics and proportion of viable sperm with intact acrosome. In the morphometric analysis, the sperm selected by the Androcoll-Bear 80% showed a head with a bigger area which was more elongated than the sperm from other treatments. We conclude that sperm selection with Androcoll-Bear at either 65% or 80% is a suitable technique that allows a sperm population with better quality than the initial sample to be obtained. 
We recommend the use of Androcoll-Bear 65%, since its yield is better than that of Androcoll-Bear 80%. Our findings pave the way for further research on the application of sperm selection techniques to sperm banking in the brown bear.

  14. 12 CFR 715.8 - Requirements for verification of accounts and passbooks.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... selection: (ii) A sample which is representative of the population from which it was selected; (iii) An equal chance of selecting each dollar in the population; (iv) Sufficient accounts in both number and... consistent with GAAS if such methods provide for: (i) Sufficient accounts in both number and scope on which...
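
    The "equal chance of selecting each dollar in the population" requirement in the excerpt above describes monetary-unit (dollar-unit) sampling, which can be sketched as follows; the account balances and the function name are hypothetical, for illustration only.

```python
import bisect
import random

def dollar_unit_sample(balances, n, seed=0):
    """Monetary-unit sampling sketch: each whole dollar in the
    population has an equal chance of selection, so accounts are
    effectively drawn with probability proportional to balance."""
    rng = random.Random(seed)
    cum = []
    total = 0
    for b in balances:
        total += b
        cum.append(total)                  # cumulative dollar boundaries
    picks = sorted(rng.randrange(total) for _ in range(n))
    # map each selected dollar back to the account containing it
    return sorted({bisect.bisect_right(cum, d) for d in picks})

accounts = [120, 5000, 75, 2400, 310]      # illustrative share balances
print(dollar_unit_sample(accounts, 3))
```

    Large accounts are hit more often under this scheme, which is exactly the property verification programs want: the sample concentrates on the dollars at risk rather than on account counts.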

  15. State-dependent biasing method for importance sampling in the weighted stochastic simulation algorithm.

    PubMed

    Roh, Min K; Gillespie, Dan T; Petzold, Linda R

    2010-11-07

    The weighted stochastic simulation algorithm (wSSA) was developed by Kuwahara and Mura [J. Chem. Phys. 129, 165101 (2008)] to efficiently estimate the probabilities of rare events in discrete stochastic systems. The wSSA uses importance sampling to enhance the statistical accuracy in the estimation of the probability of the rare event. The original algorithm biases the reaction selection step with a fixed importance sampling parameter. In this paper, we introduce a novel method where the biasing parameter is state-dependent. The new method features improved accuracy, efficiency, and robustness.
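
    A simplified sketch of the weighted SSA idea on a toy birth-death system: the reaction selection step is biased while a likelihood-ratio weight keeps the rare-event estimator unbiased. The rate constants, threshold, and the particular state-dependent bias function below are illustrative assumptions, not the authors' scheme.

```python
import random

def wssa_rare_event(k_prod=1.0, k_deg=1.0, x0=0, theta=8, t_max=5.0,
                    n_runs=2000, bias=lambda x: 1.0 + 0.5 * x, seed=1):
    """Weighted-SSA sketch: estimate P(X reaches theta before t_max)
    for reaction 1: 0 -> X (propensity k_prod) and reaction 2: X -> 0
    (propensity k_deg * x).  Selection uses propensities with the
    production channel multiplied by the state-dependent bias(x);
    each biased choice multiplies the trajectory weight by the
    corresponding likelihood ratio."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        x, t, w = x0, 0.0, 1.0
        while t < t_max and x < theta:
            a1, a2 = k_prod, k_deg * x
            a0 = a1 + a2
            t += rng.expovariate(a0)       # time from the TRUE propensities
            if t >= t_max:
                break
            b1, b2 = a1 * bias(x), a2      # biased selection propensities
            b0 = b1 + b2
            if rng.random() < b1 / b0:
                w *= (a1 / a0) / (b1 / b0)  # weight correction, birth
                x += 1
            else:
                w *= (a2 / a0) / (b2 / b0)  # weight correction, death
                x -= 1
        if x >= theta:
            total += w
    return total / n_runs

p_hat = wssa_rare_event()
print(f"estimated rare-event probability: {p_hat:.4g}")
```

    Passing `bias=lambda x: 1.0` recovers the plain (unweighted) SSA estimator, while a constant bias greater than one corresponds to the fixed-parameter biasing of the original wSSA.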

  16. Rapid Method for Sodium Hydroxide Fusion of Asphalt ...

    EPA Pesticide Factsheets

    Technical Brief--Addendum to Selected Analytical Methods (SAM) 2012. The method will be used for qualitative analysis of americium-241, plutonium-238, plutonium-239, radium-226, strontium-90, uranium-234, uranium-235 and uranium-238 in asphalt matrix samples.

  17. Detection and Identification of Salmonella spp. in Surface Water by Molecular Technology in Taiwan

    NASA Astrophysics Data System (ADS)

    Tseng, S. F.; Hsu, B. M.; Huang, K. H.; Hsiao, H. Y.; Kao, P. M.; Shen, S. M.; Tsai, H. F.; Chen, J. S.

    2012-04-01

    Salmonella spp. is a gram-negative bacterium and one of the most important causal agents of waterborne diseases. The genus Salmonella comprises more than 2,500 serotypes and its taxonomy is very complicated. Traditionally, detection of Salmonella in environmental water samples by routine culture methods using selective media, with characterization of suspicious colonies based on biochemical tests and serological assay, is generally time- and labor-consuming. To overcome this disadvantage, it is desirable to use an effective method which provides higher discrimination and more rapid identification of Salmonella in environmental water. The aim of this study is to investigate the occurrence of Salmonella using novel detection procedures and to identify the serovars of Salmonella isolates from 157 surface water samples in Taiwan. The procedures include membrane filtration, non-selective pre-enrichment, selective enrichment of Salmonella, and then isolation of Salmonella strains on selective culture plates. The selective enrichments and culture plates were both tested by PCR. Finally, we used biochemical tests and serological assay to confirm the serovars of Salmonella, and pulsed-field gel electrophoresis (PFGE) to identify their serovar categories by genetic pattern. In this study, 44 water samples (28%) were identified as Salmonella-positive. The 44 positive water samples were further identified by PFGE as S. Agona (1/44), S. Albany (10/44), S. Bareilly (13/44), S. Choleraesuis (2/44), S. Derby (4/44), S. Isangi (3/44), S. Kedougou (3/44), S. Mbandaka (1/44), S. Newport (3/44), S. Oranienburg (1/44), S. Potsdam (1/44), S. Typhimurium (1/44), and S. Weltevreden (1/44). The presence of Salmonella in surface water indicates the possibility of waterborne transmission in drinking watersheds if water is not adequately treated. 
Therefore, the authorities need operating systems that provide adequate source protection, and need to maintain those systems to prevent disease. Keywords: Salmonella spp.; biochemical tests; serological assay; PCR; PFGE

  18. Ion-selective electrodes in potentiometric titrations; a new method for processing and evaluating titration data.

    PubMed

    Granholm, Kim; Sokalski, Tomasz; Lewenstam, Andrzej; Ivaska, Ari

    2015-08-12

    A new method to convert the potential of an ion-selective electrode to concentration or activity in potentiometric titration is proposed. The advantage of this method is that the electrode standard potential and the slope of the calibration curve do not have to be known. Instead, two activities on the titration curve have to be estimated, e.g. the starting activity before the titration begins and the activity at the end of the titration in the presence of a large excess of titrant. This new method is beneficial when the analyte is in a complex matrix or in a harsh environment which affects the properties of the electrode, so that the traditional calibration procedure with standard solutions cannot be used. The new method was implemented both in a method of linearization based on the Gran plot and in determination of the stability constant of a complex and the concentration of the complexing ligand in the sample. The new method gave accurate results when using titration data from experiments with samples of known composition and with a real industrial harsh black liquor sample. A complexometric titration model was also developed. Copyright © 2015 Elsevier B.V. All rights reserved.
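
    Assuming a Nernstian response E = E0 + S·log10(a), the two estimated activities (start of titration, large titrant excess) determine both E0 and S, so any measured potential can then be converted without a separate calibration, as the abstract describes. The numerical values below are illustrative only.

```python
import math

def two_point_converter(e1, a1, e2, a2):
    """Build a potential -> activity converter from two known points
    on the titration curve, assuming E = E0 + S * log10(a).
    E0 and S are recovered from the two points instead of from a
    calibration with standard solutions."""
    s = (e2 - e1) / (math.log10(a2) - math.log10(a1))   # slope, mV/decade
    e0 = e1 - s * math.log10(a1)                        # standard potential
    return lambda e: 10 ** ((e - e0) / s)

# illustrative numbers for an ideal 59.2 mV/decade electrode
conv = two_point_converter(e1=-59.2, a1=1e-1, e2=-177.6, a2=1e-3)
print(conv(-118.4))   # halfway in potential -> halfway in log activity, ~0.01
```

    Because both anchor points come from the same titration, any matrix-induced shift in E0 or S is absorbed into the fitted constants, which is the practical advantage the abstract claims for harsh samples.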

  19. Methods for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry

    DOEpatents

    Chan, George C. Y. [Bloomington, IN; Hieftje, Gary M [Bloomington, IN

    2010-08-03

    A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed on an unknown sample across a plurality of selected locations in the plasma, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes to create a first dataset. The first dataset is then calibrated with a calibration dataset, creating a calibrated first dataset curve. If the calibrated first dataset curve varies with location in the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted sample, creating a calibrated second dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the calibrated dataset curves yields the corrected value (free from plasma-related errors) for each sought-for analyte.
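The cross-over step can be illustrated with a toy calculation. The plasma locations and concentration values below are hypothetical; the sketch only shows how the intersection of the two calibrated curves is located, not the patented procedure itself.

```python
def crossover_point(x, y1, y2):
    """Locate the crossing of two curves sampled at the same x values.

    Scans for a sign change in (y1 - y2) and linearly interpolates within
    that segment; returns (x_cross, y_cross).
    """
    for i in range(len(x) - 1):
        d0, d1 = y1[i] - y2[i], y1[i + 1] - y2[i + 1]
        if d0 == 0:                         # exact crossing at a sample point
            return x[i], y1[i]
        if d0 * d1 < 0:                     # sign change inside the segment
            t = d0 / (d0 - d1)              # fractional position in segment
            xc = x[i] + t * (x[i + 1] - x[i])
            yc = y1[i] + t * (y1[i + 1] - y1[i])
            return xc, yc
    raise ValueError("curves do not cross")

# Hypothetical calibrated concentrations (mg/L) at five plasma heights (mm):
heights = [6, 8, 10, 12, 14]
undiluted = [5.8, 5.4, 5.0, 4.6, 4.2]       # drifts with position -> error
diluted = [4.4, 4.7, 5.0, 5.3, 5.6]         # dilution-corrected curve
h, c = crossover_point(heights, undiluted, diluted)
```

At the cross-over, the plasma-related error affects both curves equally, so the shared value is taken as the corrected concentration.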

  20. Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method Implementation Research.

    PubMed

    Palinkas, Lawrence A; Horwitz, Sarah M; Green, Carla A; Wisdom, Jennifer P; Duan, Naihua; Hoagwood, Kimberly

    2015-09-01

    Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.

  1. Precision Efficacy Analysis for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.

    When multiple linear regression is used to develop a prediction model, the sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross-validity approach to select sample sizes…

  2. Effect of reference genome selection on the performance of computational methods for genome-wide protein-protein interaction prediction.

    PubMed

    Muley, Vijaykumar Yogesh; Ranjan, Akash

    2012-01-01

    Recent progress in computational methods for predicting physical and functional protein-protein interactions has provided new insights into the complexity of biological processes. Most of these methods assume that functionally interacting proteins are likely to share an evolutionary history. This history can be traced for the protein pairs of a query genome by correlating different evolutionary aspects of their homologs in multiple genomes, known as the reference genomes. These methods include phylogenetic profiling, gene neighborhood, and co-occurrence of the orthologous protein-coding genes in the same cluster or operon, collectively known as genomic context methods. In contrast, the mirrortree method is based on the similarity of the phylogenetic trees of two interacting proteins. Comprehensive performance analyses of these methods have frequently been reported in the literature, but very few studies provide insight into the effect of reference genome selection on the detection of meaningful protein interactions. We analyzed the performance of four methods and their variants to understand this effect. We used six sets of reference genomes, sampled from 565 bacteria in accordance with phylogenetic diversity and the relationships between organisms. With Escherichia coli as the model organism, we compared the prediction methods against gold-standard datasets of interacting proteins from the DIP, EcoCyc and KEGG databases. Higher performance in predicting protein-protein interactions was achievable with even 100-150 of the 565 bacterial genomes, and inclusion of archaeal genomes in the reference set improved performance. We find that, to obtain good performance, it is better to sample a few genomes from related genera of prokaryotes out of the large number available. Moreover, such sampling allows 50-100 genomes to be selected with comparable prediction accuracy when computational resources are limited.
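As a minimal illustration of the phylogenetic profiling idea the study evaluates, the sketch below scores a candidate protein pair by correlating presence/absence profiles across reference genomes. The toy profiles are invented; real pipelines use hundreds of genomes and more elaborate scoring.

```python
from math import sqrt

def phylo_profile_score(p, q):
    """Pearson correlation between two presence/absence profiles.

    Each profile is a 0/1 vector over the chosen reference genomes; a high
    correlation suggests shared evolutionary history and hence a possible
    functional interaction.
    """
    n = len(p)
    mp, mq = sum(p) / n, sum(q) / n
    cov = sum((a - mp) * (b - mq) for a, b in zip(p, q))
    sp = sqrt(sum((a - mp) ** 2 for a in p))
    sq = sqrt(sum((b - mq) ** 2 for b in q))
    return cov / (sp * sq)

# Toy profiles over ten hypothetical reference genomes:
gene_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
gene_b = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]   # identical history
gene_c = [0, 0, 1, 0, 1, 0, 0, 1, 1, 0]   # complementary history
```

The study's central point is visible here: the score depends entirely on which genomes make up the profile, so the choice of reference genomes directly shapes the predictions.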

  3. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    PubMed

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, a requirement that often cannot be met in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated, so comparing reference intervals estimated by different methods can be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and from large subsets of these data, with estimates obtained from small, randomly selected samples. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were the minimum and maximum, the mean +/- 2 SD of native and Box-Cox-transformed values, the 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The estimates closest to the 1439-result reference interval for the 27-result subsamples were obtained by both the parametric and robust methods after Box-Cox transformation, but these were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits obtained by different methods be compared.
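Two of the estimators compared in the study can be roughly sketched as follows. A plain log transform stands in for the Box-Cox transform, and the creatinine values are invented for illustration; the study's actual data and transform parameters differ.

```python
import math
import statistics

def ri_parametric_log(values):
    """Parametric reference interval on log-transformed values.

    A log transform is used here as a simple stand-in for Box-Cox; the
    limits are the back-transformed mean +/- 2 SD.
    """
    logs = [math.log(v) for v in values]
    m, s = statistics.mean(logs), statistics.stdev(logs)
    return math.exp(m - 2 * s), math.exp(m + 2 * s)

def ri_nonparametric(values):
    """2.5th and 97.5th percentiles, as used for large reference samples."""
    v = sorted(values)
    lo = v[max(0, round(0.025 * (len(v) - 1)))]
    hi = v[min(len(v) - 1, round(0.975 * (len(v) - 1)))]
    return lo, hi

# Hypothetical right-skewed creatinine values (umol/L) from 27 dogs:
sample = [44, 50, 55, 58, 60, 62, 65, 66, 68, 70, 72, 74, 75,
          78, 80, 82, 85, 88, 90, 95, 100, 105, 110, 120, 130, 140, 160]
low, high = ri_parametric_log(sample)
```

With only 27 values the nonparametric percentiles are essentially the extremes, which is why the study falls back on transformed parametric and robust estimators for small samples.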

  4. Characterizing lentic freshwater fish assemblages using multiple sampling methods

    USGS Publications Warehouse

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to obtain reliable estimates of indices such as species richness. However, most research on lentic fish sampling methodology has targeted recreationally important species, and little information is available on how the use of multiple methods and their timing (i.e., temporal variation) affect characterization of entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling method and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on the optimal seasons in which to use each gear when targeting multiple species. The evaluation of species richness and number of individuals sampled using multiple gear combinations demonstrated no appreciable benefit over relatively few gears (e.g., two to four) used in their optimal seasons. Specifically, over 90% of the species encountered with all gear type and season combinations (N = 19) from the six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake vs. impoundment). Standardizing data collected with multiple methods and seasons to account for bias is imperative for monitoring lentic ecosystems and will give researchers greater reliability in the interpretations and decisions they make using information on lentic fish assemblages.

  5. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method selects variables automatically, without manual intervention. To illustrate its feasibility and effectiveness, a comparison with a genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods accomplish variable selection effectively, and that FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, variable selection improved the quantification models. The root mean square errors of prediction (RMSEP) of models using the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, prediction performance comparable to that of GA and SPA.

  6. Fluorometric enzymatic assay of L-arginine

    NASA Astrophysics Data System (ADS)

    Stasyuk, Nataliya; Gayda, Galina; Yepremyan, Hasmik; Stepien, Agnieszka; Gonchar, Mykhailo

    2017-01-01

    The enzymes of L-arginine (hereafter Arg) metabolism are promising tools for developing selective methods of quantitative Arg analysis. In our study we propose an enzymatic method for Arg assay based on fluorometric monitoring of ammonia, the final product of Arg cleavage by human liver arginase I (hereafter arginase), isolated from a recombinant yeast strain, and commercial urease. The selective analysis of ammonia (at 415 nm under excitation at 360 nm) is based on its reaction with o-phthalaldehyde (OPA) in the presence of sulfite in alkaline medium; these conditions prevent the reaction of OPA with any amino acid. The linearity range of the fluorometric arginase-urease-OPA method is from 100 nM to 6 μM, with a limit of detection of 34 nM Arg. The method was used for the quantitative determination of Arg in a pooled sample of blood serum, and the results were in good agreement with the reference enzymatic method and with literature data. The proposed arginase-urease-OPA method, being sensitive, economical, selective and suitable for both routine and micro-volume formats, can be used in clinical diagnostics for the simultaneous determination of Arg as well as urea and ammonia in serum samples.

  7. National Health and Nutrition Examination Survey: National Youth Fitness Survey Estimation Procedures, 2012.

    PubMed

    Johnson, Clifford L; Dohrmann, Sylvia M; Kerckove, Van de; Diallo, Mamadou S; Clark, Jason; Mohadjer, Leyla K; Burt, Vicki L

    2014-11-01

    The National Health and Nutrition Examination Survey's (NHANES) National Youth Fitness Survey (NNYFS) was conducted in 2012 by the Centers for Disease Control and Prevention's National Center for Health Statistics (NCHS). NNYFS collected data on physical activity and fitness levels to evaluate the health and fitness of children aged 3-15 in the United States. The survey comprised three levels of data collection: a household screening interview (or screener), an in-home personal interview, and a physical examination. The screener's primary objective was to determine whether any children in the household were eligible for the interview and examination. Eligibility was determined by preset selection probabilities for desired sex-age subdomains. After selection, the in-home personal interview collected demographic, health, physical activity, and nutrition information about the child as well as information about the household. The examination included physical measurements and fitness tests. This report provides background on the NNYFS program and summarizes the survey's sample design specifications. The report presents NNYFS estimation procedures, including the methods used to calculate survey weights for the full sample as well as a combined NHANES/NNYFS sample for 2012 (accessible only through the NCHS Research Data Center). The report also describes appropriate variance estimation methods. Documentation of the sample selection methods, survey content, data collection procedures, and methods to assess nonsampling errors are reported elsewhere. All material appearing in this report is in the public domain and may be reproduced or copied without permission; citation as to source, however, is appreciated.

  8. Analyzing Kernel Matrices for the Identification of Differentially Expressed Genes

    PubMed Central

    Xia, Xiao-Lei; Xing, Huanlai; Liu, Xueqin

    2013-01-01

    One of the most important applications of microarray data is the class prediction of biological samples. For this purpose, statistical tests have often been applied to identify the differentially expressed genes (DEGs), followed by the employment of state-of-the-art learning machines, notably the Support Vector Machine (SVM). The SVM is a typical sample-based classifier whose performance comes down to how discriminative the samples are. However, DEGs identified by statistical tests are not guaranteed to yield a training dataset composed of discriminative samples. To tackle this problem, a novel gene ranking method, Kernel Matrix Gene Selection (KMGS), is proposed; its rationale, rooted in the fundamental ideas of the SVM algorithm, is described. The notion of "the separability of a sample," estimated by computing a t-like statistic on each column of the kernel matrix, is first introduced. The separability of a classification problem is then measured, from which the significance of a specific gene is deduced. Also described is the Kernel Matrix Sequential Forward Selection (KMSFS) method, which shares the KMGS method's essential ideas but proceeds in a greedy manner. On three public microarray datasets, our proposed algorithms achieved noticeably competitive performance in terms of the B.632+ error rate. PMID:24349110
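A simplified sketch of the per-column statistic is below, assuming a linear kernel and a plain two-sample t statistic (the paper's exact statistic may differ); the two-gene dataset is invented. Each kernel-matrix column collects one sample's similarities to all samples, and a large |t| between the two class groups of that column marks a well-separated sample.

```python
import math

def linear_kernel(X):
    """Gram matrix of inner products K[i][j] = <x_i, x_j>."""
    return [[sum(a * b for a, b in zip(u, v)) for v in X] for u in X]

def column_t_statistics(K, labels):
    """t-like statistic on each kernel-matrix column.

    For column j, the entries K[i][j] are split by the class label of row i
    and compared with a Welch-style two-sample t statistic; a large value
    means sample j's similarities separate the classes well.
    """
    stats = []
    for j in range(len(K)):
        col = [K[i][j] for i in range(len(K))]
        a = [v for v, y in zip(col, labels) if y == 1]
        b = [v for v, y in zip(col, labels) if y == -1]
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        va = sum((v - ma) ** 2 for v in a) / (len(a) - 1)
        vb = sum((v - mb) ** 2 for v in b) / (len(b) - 1)
        stats.append((ma - mb) / math.sqrt(va / len(a) + vb / len(b)))
    return stats

# Toy two-gene dataset: gene 1 separates the classes, gene 2 is noise.
X = [[5.1, 0.2], [4.9, 1.1], [5.3, 0.7], [1.0, 0.9], [0.8, 0.3], [1.2, 1.0]]
y = [1, 1, 1, -1, -1, -1]
t_vals = column_t_statistics(linear_kernel(X), y)
```

Gene significance in KMGS is then judged by how removing a gene changes these separabilities; that outer loop is omitted here.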

  9. [Recent advances in sample preparation methods of plant hormones].

    PubMed

    Wu, Qian; Wang, Lus; Wu, Dapeng; Duan, Chunfeng; Guan, Yafeng

    2014-04-01

    Plant hormones are a group of naturally occurring trace substances that play a crucial role in controlling plant development, growth and environmental response. With the development of chromatography and mass spectrometry techniques, chromatographic analysis has become a widely used approach for plant hormone analysis. Among the steps of a chromatographic analysis, sample preparation is undoubtedly the most vital one; a highly selective and efficient sample preparation method is therefore critical for accurate identification and quantification of phytohormones. For the three major classes of plant hormones, namely acidic and basic plant hormones, brassinosteroids, and plant polypeptides, the sample preparation methods are reviewed in turn, with emphasis on recently developed methods. The review covers novel methods, devices, extraction materials and derivatization reagents for sample preparation in phytohormone analysis, including related work from our group. Finally, future developments in this field are discussed.

  10. Rapid Column-Free Enrichment of Mononuclear Cells from Solid Tissues

    PubMed Central

    Scoville, Steven D.; Keller, Karen A.; Cheng, Stephanie; Zhang, Michael; Zhang, Xiaoli; Caligiuri, Michael A.; Freud, Aharon G.

    2015-01-01

    We have developed a rapid negative selection method to enrich rare mononuclear cells from human tissues. Unwanted and antibody-tethered cells are selectively depleted during a Ficoll separation step, and there is no need for magnetic-based reagents and equipment. The new method is fast, customizable, inexpensive, remarkably efficient, and easy to perform, and per sample the overall cost is less than one-tenth the cost associated with a magnetic column-based method. PMID:26223896

  11. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data, whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, this high-dimensional entropy space must be searched, and brute-force search methods are slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold applied while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments but is also more efficient than brute-force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
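The selection criterion can be sketched as a brute-force search, which is exactly what nested entropy sampling is designed to avoid, but the scoring is the same: pick the experiment whose predicted outcomes are most uncertain across the probable models. The threshold models below are invented for illustration.

```python
import math
from collections import Counter

def outcome_entropy(predictions):
    """Shannon entropy (bits) of the distribution of predicted outcomes."""
    counts = Counter(predictions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def most_informative_experiment(experiments, models):
    """Pick the experiment whose predicted outcomes disagree the most.

    Each model maps an experiment setting to a (discretized) outcome;
    maximal entropy across models means maximal expected information gain.
    """
    return max(experiments,
               key=lambda e: outcome_entropy([m(e) for m in models]))

# Three candidate models of a threshold phenomenon (threshold unknown):
models = [lambda x, t=t: x > t for t in (2.0, 5.0, 8.0)]
experiments = [1.0, 4.0, 6.0, 9.0]
best = most_informative_experiment(experiments, models)
```

Settings where all models agree (1.0 and 9.0) have zero outcome entropy and teach us nothing; the informative experiments are those where the models disagree.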

  12. Multi-residue screening of prioritised human pharmaceuticals, illicit drugs and bactericides in sediments and sludge.

    PubMed

    Langford, Katherine H; Reid, Malcolm; Thomas, Kevin V

    2011-08-01

    A robust multi-residue method was developed for the analysis of a selection of pharmaceutical compounds, illicit drugs and personal care product bactericides in sediments and sludges. Human pharmaceuticals were selected for analysis in Scottish sewage sludge and freshwater sediments based on prescription, physico-chemical and occurrence data. The method was suitable for the analysis of the selected illicit drugs amphetamine, benzoylecgonine, cocaine, and methamphetamine, the pharmaceuticals atenolol, bendroflumethiazide, carbamazepine, citalopram, diclofenac, fluoxetine, ibuprofen, and salbutamol, and the bactericides triclosan and triclocarban in sewage sludge and freshwater sediment. The method provided an overall recovery of between 56 and 128%, RSDs of between 2 and 19% and LODs of between 1 and 50 ng g(-1). Using the methodology the human pharmaceuticals atenolol, carbamazepine and citalopram and the bactericides triclosan and triclocarban were detected in Scottish sewage sludge. The illicit drugs cocaine, its metabolite benzoylecgonine, amphetamine and methamphetamine were not detected in any of the samples analysed. Triclosan and triclocarban were present at the highest concentrations with triclocarban detected in all but one sample and showing a pattern of co-occurrence in both sludge and sediment samples.

  13. A selective electromembrane extraction of uranium (VI) prior to its fluorometric determination in water.

    PubMed

    Davarani, Saied Saeed Hosseiny; Moazami, Hamid Reza; Keshtkar, Ali Reza; Banitaba, Mohammad Hossein; Nojavan, Saeed

    2013-06-14

    A novel method for the selective electromembrane extraction (EME) of U(6+) prior to fluorometric determination has been proposed. The effects of extraction conditions, including supported liquid membrane (SLM) composition, extraction time and extraction voltage, were investigated. An SLM composed of 1% di-2-ethylhexyl phosphonic acid in nitrophenyl octyl ether (NPOE) showed good selectivity, recovery and enrichment factor. The best performance was achieved at an extraction potential of 80 V and an extraction time of 14 min. Under the optimized conditions, a linear range from 1 to 1000 ng mL(-1) and an LOD of 0.1 ng mL(-1) were obtained for the determination of U(6+). The EME method performed well in sample cleanup and reduced the interfering effects of Mn(2+), Zn(2+), Cd(2+), Ni(2+), Fe(3+), Co(2+), Cu(2+), Cl(-) and PO4(3-) ions during fluorometric determination of uranium in real water samples. Recoveries above 54% and enrichment factors above 64.7 were obtained by the proposed method for real sample analysis. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Wet-chemistry based selective coatings for concentrating solar power

    NASA Astrophysics Data System (ADS)

    Maimon, Eran; Kribus, Abraham; Flitsanov, Yuri; Shkolnik, Oleg; Feuermann, Daniel; Zwicker, Camille; Larush, Liraz; Mandler, Daniel; Magdassi, Shlomo

    2013-09-01

    Spectrally selective coatings are common in low- and medium-temperature solar applications, from solar water heating collectors to parabolic trough absorber tubes, and are an essential element for high efficiency in higher temperature Concentrating Solar Power (CSP) systems. Selective coatings for CSP are usually prepared using advanced, expensive methods such as sputtering and vapor deposition. In this work, coatings were prepared using low-cost wet-chemistry methods. Solutions based on alumina and silica sol-gel were prepared, and black spinel pigments were dispersed in them. The black dispersions were applied by spray and roll coating methods to stainless steel plates. The spectral emissivity of sample coatings was measured in the temperature range between 200 and 500°C, while the spectral absorptivity was measured at room temperature and at 500°C. Emissivity at wavelengths of 0.4-1.7 μm was evaluated indirectly using multiple measurements of directional reflectivity. Emissivity at wavelengths of 2-14 μm was measured directly using a broadband IR camera that acquires the radiation emitted from the sample, together with a range of spectral filters. Emissivity measurement results for a range of coated samples will be presented, and the impact of coating thickness, pigment loading, and surface preparation will be discussed.

  15. Method for the concentration and separation of actinides from biological and environmental samples

    DOEpatents

    Horwitz, E. Philip; Dietz, Mark L.

    1989-01-01

    A method and apparatus for the quantitative recovery of actinide values from biological and environmental samples by passing appropriately prepared samples in a mineral acid solution through a separation column of a dialkyl(phenyl)-N,N-dialkylcarbamoylmethylphosphine oxide dissolved in tri-n-butyl phosphate on an inert substrate, which selectively extracts the actinide values. The actinide values can be eluted either as a group or individually, and their presence quantitatively detected by alpha counting.

  16. Determination of a flame retardant hydrolysis product in human urine by SPE and LC-MS. Comparison of molecularly imprinted solid-phase extraction with a mixed-mode anion exchanger.

    PubMed

    Möller, Kristina; Crescenzi, Carlo; Nilsson, Ulrika

    2004-01-01

    Diphenyl phosphate is a hydrolysis product and possible metabolite of the flame retardant and plasticiser additive triphenyl phosphate. A molecularly imprinted polymer solid-phase extraction (MISPE) method for extracting diphenyl phosphate from aqueous solutions has been developed and compared with SPE using a commercially available mixed-mode anion exchanger. The imprinted polymer was prepared using 2-vinylpyridine (2-Vpy) as the functional monomer, ethylene glycol dimethacrylate (EGDMA) as the cross-linker, and a structural analogue of the analyte as the template molecule. The imprinted polymer was evaluated for use as a SPE sorbent, in tests with both aqueous standards and spiked urine samples, by comparing recovery and breakthrough data obtained using the imprinted form of the polymer and a non-imprinted form (NIP). Extraction from aqueous solutions resulted in more than 80% recovery. Adsorption by the molecularly imprinted polymer (MIP) was non-selective, but selectivity was achieved by selective desorption in the wash steps. Diphenyl phosphate could also be selectively extracted from urine samples, although the urine matrix reduced the capacity of the MISPE cartridges. Recoveries from urine extraction were higher than 70%. It was important to control pH during sample loading. The MISPE method was found to yield a less complex LC-ESI-MS chromatogram of the urine extracts compared with the mixed-mode anion-exchanger method. An LC-ESI-MS method using a Hypercarb LC column with a graphitised carbon stationary phase was also evaluated for organophosphate diesters. LC-ESI-MS using negative-ion detection in selected ion monitoring (SIM) mode was shown to be linear for diphenyl phosphate in the range 0.08-20 ng microL(-1).

  17. Molecular dynamics simulations using temperature-enhanced essential dynamics replica exchange.

    PubMed

    Kubitzki, Marcus B; de Groot, Bert L

    2007-06-15

    Today's standard molecular dynamics simulations of moderately sized biomolecular systems at full atomic resolution are typically limited to the nanosecond timescale and therefore suffer from limited conformational sampling. Efficient ensemble-preserving algorithms like replica exchange (REX) may alleviate this problem somewhat but are still computationally prohibitive due to the large number of degrees of freedom involved. Aiming at increased sampling efficiency, we present a novel simulation method combining the ideas of essential dynamics and REX. Unlike standard REX, in each replica only a selection of essential collective modes of a subsystem of interest (essential subspace) is coupled to a higher temperature, with the remainder of the system staying at a reference temperature, T(0). This selective excitation along with the replica framework permits efficient approximate ensemble-preserving conformational sampling and allows much larger temperature differences between replicas, thereby considerably enhancing sampling efficiency. Ensemble properties and sampling performance of the method are discussed using dialanine and guanylin test systems, with multi-microsecond molecular dynamics simulations of these test systems serving as references.
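For context, the replica exchange test that TEE-REX builds on can be sketched as a generic Metropolis criterion (this is not the authors' code; TEE-REX applies the analogous test to the essential-subspace degrees of freedom only, which is what permits its larger temperature gaps).

```python
import math

def exchange_accepted(e_i, e_j, t_i, t_j, rng, k_b=0.0083145):
    """Metropolis criterion for swapping configurations between replicas.

    Accept with probability min(1, exp[(1/kT_i - 1/kT_j) * (E_i - E_j)]).
    Energies in kJ/mol, temperatures in K, k_b in kJ/(mol*K); `rng` is a
    callable returning a uniform random number in [0, 1).
    """
    delta = (1.0 / (k_b * t_i) - 1.0 / (k_b * t_j)) * (e_i - e_j)
    return delta >= 0 or rng() < math.exp(delta)

# A favorable swap (cold replica holds the lower energy) is always accepted:
always = exchange_accepted(-90.0, -100.0, 300.0, 400.0, rng=lambda: 1.0)
# A grossly unfavorable swap is essentially always rejected:
never = exchange_accepted(-1000.0, 0.0, 300.0, 400.0, rng=lambda: 0.5)
```

Because standard REX applies this test to the full-system energy, acceptable swap rates force many closely spaced temperatures; restricting the excitation to a small essential subspace shrinks the relevant energy fluctuations and allows far fewer replicas.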

  18. A method for feature selection of APT samples based on entropy

    NASA Astrophysics Data System (ADS)

    Du, Zhenyu; Li, Yihong; Hu, Jinsong

    2018-05-01

    By studying known APT attack events in depth, this paper proposes a feature selection method for APT samples and a logic expression generation algorithm, IOCG (Indicator of Compromise Generate). The algorithm automatically generates machine-readable IOCs (Indicators of Compromise), addressing the limitations of existing IOCs, whose logical relationships are fixed, whose number of logical items cannot change, and which are large in scale and cannot be generated automatically from samples. At the same time, it reduces the time spent processing redundant and useless APT samples, improves the sharing rate of analysis information, and supports an active response to a complex and volatile APT attack situation. The samples were divided into an experimental set and a training set, and the algorithm was then used to generate logical expressions for the training set with the IOC_Aware plug-in; the generated expressions were compared both in form and in detection results. The experimental results show that the algorithm is effective and can improve detection performance.
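An entropy-based feature ranking of the general kind the paper describes can be sketched with information gain over binary indicators. The IOC matrix below is invented; the paper's actual features and scoring may differ.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Entropy reduction in the class labels after splitting on a feature.

    Here the feature is binary (e.g. "sample contains indicator x");
    higher gain means the indicator better distinguishes APT families.
    """
    n = len(labels)
    gain = entropy(labels)
    for value in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == value]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Toy IOC matrix: rows = samples, columns = candidate indicators.
labels = ["apt1", "apt1", "apt1", "apt2", "apt2", "apt2"]
ioc_a = [1, 1, 1, 0, 0, 0]     # perfectly discriminative indicator
ioc_b = [1, 0, 1, 0, 1, 0]     # uninformative indicator
```

Ranking candidate indicators by gain and keeping only the top ones is what shrinks the generated logic expressions and drops redundant items.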

  19. Conceptual data sampling for breast cancer histology image classification.

    PubMed

    Rezk, Eman; Awan, Zainab; Islam, Fahad; Jaoua, Ali; Al Maadeed, Somaya; Zhang, Nan; Das, Gautam; Rajpoot, Nasir

    2017-10-01

    Data analytics has become increasingly complicated as the amount of data has increased. One technique used to enable data analytics on large datasets is data sampling, in which a portion of the data is selected so as to preserve the data's characteristics for use in analysis. In this paper, we introduce a novel data sampling technique rooted in formal concept analysis theory. The technique creates samples that respect the distribution of the data across a set of binary patterns. The proposed sampling technique is applied to classifying regions of breast cancer histology images as malignant or benign, and its performance is compared with that of classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It is also competitive with other sampling methods in terms of sample size and sample quality, as reflected in classification accuracy and F1 measure. Copyright © 2017 Elsevier Ltd. All rights reserved.
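A crude stand-in for the concept-based grouping is to group rows by their exact binary pattern and let each group contribute proportionally. The sketch below illustrates only the distribution-preserving idea, not formal concept analysis itself; the data are invented.

```python
import random
from collections import defaultdict

def pattern_sample(rows, fraction, seed=0):
    """Draw a sample that preserves the distribution of binary patterns.

    Rows are grouped by their binary pattern and each group is sampled
    proportionally, with at least one representative kept per pattern so
    rare patterns are not lost.
    """
    rng = random.Random(seed)
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row)].append(row)
    sample = []
    for members in groups.values():
        k = max(1, round(fraction * len(members)))  # >= 1 per pattern
        sample.extend(rng.sample(members, k))
    return sample

# 90 rows of a common pattern and 10 of a rarer one:
rows = [[1, 0, 1]] * 90 + [[0, 1, 1]] * 10
sub = pattern_sample(rows, 0.2)
```

A uniform 20% random sample could miss the rare pattern entirely; pattern-stratified sampling cannot, which is the property the classifier training benefits from.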

  20. [Study on correction of data bias caused by different missing mechanisms in survey of medical expenditure among students enrolling in Urban Resident Basic Medical Insurance].

    PubMed

    Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong

    2015-05-01

    A study of medical expenditure and its influencing factors among students enrolled in Urban Resident Basic Medical Insurance (URBMI) in Taiyuan indicated that nonresponse bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies, which focused on a single missing-data mechanism, this study proposes a two-stage method that handles both mechanisms simultaneously by combining multiple imputation with a sample selection model. A total of 1,190 questionnaires were returned by the students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, 2.52% of the dependent-variable values were not missing at random (NMAR) and 7.14% were missing at random (MAR). First, multiple imputation of the MAR values was conducted using the complete data; then a sample selection model was used to correct for NMAR within the multiple imputation, and a multi-factor analysis model was established. Based on 1,000 resamplings, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at the observed missing proportion. With this optimal scheme, the two-stage analysis was conducted. It was found that the factors influencing annual medical expenditure among the students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in a hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for some reason, self-medication, and the acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can effectively correct nonresponse bias and selection bias in the dependent variable of survey data.
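The PMM step can be sketched as follows. This is a single-imputation sketch with one hypothetical income/expenditure predictor, not the study's model; multiple imputation repeats the step several times with different parameter draws and pools the results.

```python
def pmm_impute(x, y):
    """Predictive mean matching: fill missing y values (None = missing).

    Fits a least-squares line on the complete cases, then replaces each
    missing y with the *observed* y of the donor whose predicted mean is
    closest to the predicted mean of the incomplete case.
    """
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    n = len(obs)
    mx = sum(xi for xi, _ in obs) / n
    my = sum(yi for _, yi in obs) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in obs)
         / sum((xi - mx) ** 2 for xi, _ in obs))
    a = my - b * mx
    filled = []
    for xi, yi in zip(x, y):
        if yi is None:
            pred = a + b * xi                          # predicted mean
            donor = min(obs, key=lambda p: abs(a + b * p[0] - pred))
            filled.append(donor[1])                    # donor's observed y
        else:
            filled.append(yi)
    return filled

# Hypothetical yearly expenditure (y) vs. household income (x, thousands):
x = [10, 20, 30, 40, 50, 60]
y = [100, 210, None, 390, 510, None]
completed = pmm_impute(x, y)
```

Because every imputed value is an actually observed value, PMM keeps the filled-in data within the realistic range, which is one reason it performed best among the filling schemes compared.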

  1. Ionic liquid-impregnated agarose film two-phase micro-electrodriven membrane extraction (IL-AF-μ-EME) for the analysis of antidepressants in water samples.

    PubMed

    Mohamad Hanapi, Nor Suhaila; Sanagi, Mohd Marsin; Ismail, Abd Khamim; Wan Ibrahim, Wan Aini; Saim, Nor'ashikin; Wan Ibrahim, Wan Nazihah

    2017-03-01

    The aim of this study was to investigate and apply a supported ionic liquid membrane (SILM) in two-phase micro-electrodriven membrane extraction combined with high performance liquid chromatography-ultraviolet detection (HPLC-UV) for the pre-concentration and determination of three selected antidepressant drugs in water samples. A thin agarose film impregnated with 1-hexyl-3-methylimidazolium hexafluorophosphate, [C6MIM][PF6], was prepared and used as the supported ionic liquid membrane between the aqueous sample solution and the acceptor phase for the extraction of imipramine, amitriptyline and chlorpromazine. Under the optimized extraction conditions, the method provided good linearity in the range of 1.0-1000 μg L⁻¹, good coefficients of determination (r² = 0.9974-0.9992) and low limits of detection (0.1-0.4 μg L⁻¹). The method showed high enrichment factors in the range of 110-150 and high relative recoveries of 88.2-111.4% and 90.9-107.0% for river water and tap water samples, respectively, with RSDs of ≤7.6% (n = 3). The method was successfully applied to the determination of the drugs in river and tap water samples. It is envisaged that the SILM improved the perm-selectivity by providing a pathway for the targeted analytes, which resulted in rapid extraction with a high degree of selectivity and a high enrichment factor. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Trace level and highly selective determination of urea in various real samples based upon voltammetric analysis of diacetylmonoxime-urea reaction product on the carbon nanotube/carbon paste electrode.

    PubMed

    Alizadeh, Taher; Ganjali, Mohammad Reza; Rafiei, Faride

    2017-06-29

    In this study, an innovative method was introduced for the selective and precise determination of urea in various real samples, including urine, blood serum, soil and water. The method was based on square wave voltammetric determination of an electroactive product generated during the reaction of diacetylmonoxime with urea. A carbon paste electrode modified with multi-walled carbon nanotubes (MWCNTs) was found to be an appropriate electrochemical transducer for recording the electrochemical signal. It was found that the chemical reaction conditions directly influenced the analytical signal. The calibration graph of the method was linear in the range of 1 × 10⁻⁷ to 1 × 10⁻² mol L⁻¹. The detection limit was calculated to be 52 nmol L⁻¹. The relative standard error of the method was calculated to be 3.9% (n = 3). The developed procedure was applied to urea determination in various real samples, including soil, urine, plasma and water. Copyright © 2017 Elsevier B.V. All rights reserved.
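
    The abstract does not state how the detection limit was derived; one common convention is LOD = 3.3·s/slope from the calibration residuals, sketched here with purely illustrative numbers (not the study's data):

```python
import numpy as np

# Hypothetical calibration data: concentration (mol/L) vs. voltammetric signal (µA).
conc = np.array([1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2])
signal = np.array([0.021, 0.20, 2.1, 19.8, 201.0, 1995.0])

slope, intercept = np.polyfit(conc, signal, 1)   # linear calibration fit
resid = signal - (slope * conc + intercept)
s_resid = resid.std(ddof=2)                      # residual standard deviation
lod = 3.3 * s_resid / slope                      # 3.3*sigma/slope convention
print(f"slope = {slope:.3g}, LOD ≈ {lod:.3g} mol/L")
```

    Other labs use 3·σ of blank replicates divided by the slope; the original paper may have used either convention.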

  3. [Validation of Differential Extraction Kit in forensic sexual assault cases].

    PubMed

    Wu, Dan; Cao, Yu; Xu, Yan; He, Bai-Fang; Bi, Gang; Zhou, Huai-Gu

    2009-12-01

    To evaluate the validity of the Differential Extraction Kit for isolating spermatozoa and epithelial-cell DNA from mixture samples. Selective lysis of sperm and epithelial cells, combined with a paramagnetic-particle method, was applied to extract DNA from mock samples prepared under controlled conditions and from forensic case samples, and the template DNA was analyzed by STR genotyping. The Differential Extraction Kit efficiently yielded high-quality sperm and epithelial-cell DNA from mixture samples with different proportions of sperm to epithelial cells. The Differential Extraction Kit can be applied to DNA extraction from mixed stains in forensic sexual assault cases.

  4. 40 CFR 60.74 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... select the sampling site, and the sampling point shall be the centroid of the stack or duct or at a point... the production rate (P) of 100 percent nitric acid for each run. Material balance over the production...

  5. Selective sampling and measurement of Cr (VI) in water with polyquaternary ammonium salt as a binding agent in diffusive gradients in thin-films technique.

    PubMed

    Chen, Hong; Zhang, Yang-Yang; Zhong, Ke-Li; Guo, Lian-Wen; Gu, Jia-Li; Bo, Le; Zhang, Meng-Han; Li, Jian-Rong

    2014-04-30

    A diffusive gradients in thin films (DGT) device with polyquaternary ammonium salt (PQAS) as a novel binding agent (PQAS DGT), combined with graphite furnace atomic absorption spectrometry (GFAAS), was developed for the selective sampling and measurement of Cr (VI) in water. The performance of PQAS DGT was independent of pH over the range 3-12 and of ionic strength from 1 × 10⁻³ to 1 mol L⁻¹. DGT validation experiments showed that Cr (VI) was measured accurately and selectively by PQAS DGT, whereas Cr (III) was not determined quantitatively. The Cr (VI) measurements obtained with PQAS DGT were in agreement with those of the diphenylcarbazide (DPC) spectrophotometric method for industrial wastewater. The PQAS-DGT device was also successfully deployed in local freshwater. The concentrations of Cr (VI) determined by PQAS DGT coupled with GFAAS in the Nuer River, Ling River and North Lake were 0.73 ± 0.09 μg L⁻¹, 0.50 ± 0.07 μg L⁻¹ and 0.61 ± 0.07 μg L⁻¹, respectively. The results indicate that the PQAS DGT device can be used for the selective sampling and measurement of Cr (VI) in water, and its detection limit is lower than that of the DPC method. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Group Variable Selection Via Convex Log-Exp-Sum Penalty with Application to a Breast Cancer Survivor Study

    PubMed Central

    Geng, Zhigeng; Wang, Sijian; Yu, Menggang; Monahan, Patrick O.; Champion, Victoria; Wahba, Grace

    2017-01-01

    Summary In many scientific and engineering applications, covariates are naturally grouped. When the group structures are available among covariates, people are usually interested in identifying both important groups and important variables within the selected groups. Among existing successful group variable selection methods, some methods fail to conduct the within group selection. Some methods are able to conduct both group and within group selection, but the corresponding objective functions are non-convex. Such a non-convexity may require extra numerical effort. In this article, we propose a novel Log-Exp-Sum(LES) penalty for group variable selection. The LES penalty is strictly convex. It can identify important groups as well as select important variables within the group. We develop an efficient group-level coordinate descent algorithm to fit the model. We also derive non-asymptotic error bounds and asymptotic group selection consistency for our method in the high-dimensional setting where the number of covariates can be much larger than the sample size. Numerical results demonstrate the good performance of our method in both variable selection and prediction. We applied the proposed method to an American Cancer Society breast cancer survivor dataset. The findings are clinically meaningful and may help design intervention programs to improve the qualify of life for breast cancer survivors. PMID:25257196
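
    The convexity claim can be illustrated with a minimal sketch of a log-exp-sum group penalty. The paper's exact parameterization (group weights, tuning constants) is not reproduced here, so treat the form below as an assumption; the convexity argument itself is standard: log-sum-exp is convex and componentwise nondecreasing, and |β_j| is convex, so the composition is convex.

```python
import numpy as np

def les_penalty(beta, groups, lam=1.0):
    """Log-exp-sum style group penalty: lam * sum_g log(sum_{j in g} exp(|beta_j|)).
    Convex in beta (convex nondecreasing log-sum-exp composed with |.|)."""
    beta = np.asarray(beta, dtype=float)
    total = 0.0
    for g in groups:
        a = np.abs(beta[list(g)])
        m = a.max()                              # numerically stable log-sum-exp
        total += m + np.log(np.exp(a - m).sum())
    return lam * total

# Two coefficient groups; unlike the group lasso, the penalty varies with how
# magnitude is distributed inside a group, enabling within-group selection.
beta = np.array([2.0, 0.0, 0.0, -1.0, 0.5])
groups = [(0, 1, 2), (3, 4)]
print(les_penalty(beta, groups))
```
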

  7. Methods for collecting algal samples as part of the National Water-Quality Assessment Program

    USGS Publications Warehouse

    Porter, Stephen D.; Cuffney, Thomas F.; Gurtz, Martin E.; Meador, Michael R.

    1993-01-01

    Benthic algae (periphyton) and phytoplankton communities are characterized in the U.S. Geological Survey's National Water-Quality Assessment Program as part of an integrated physical, chemical, and biological assessment of the Nation's water quality. This multidisciplinary approach provides multiple lines of evidence for evaluating water-quality status and trends, and for refining an understanding of the factors that affect water-quality conditions locally, regionally, and nationally. Water quality can be characterized by evaluating the results of qualitative and quantitative measurements of the algal community. Qualitative periphyton samples are collected to develop a list of taxa present in the sampling reach. Quantitative periphyton samples are collected to measure algal community structure within selected habitats. These samples of benthic algal communities are collected from natural substrates, using the sampling methods that are most appropriate for the habitat conditions. Phytoplankton samples may be collected in large nonwadeable streams and rivers to meet specific program objectives. Estimates of algal biomass (chlorophyll content and ash-free dry mass) also are optional measures that may be useful for interpreting water-quality conditions. A nationally consistent approach provides guidance on site, reach, and habitat selection, as well as information on methods and equipment for qualitative and quantitative sampling. Appropriate quality-assurance and quality-control guidelines are used to maximize the ability to analyze data locally, regionally, and nationally.

  8. Comparison between Thermal Desorption Tubes and Stainless Steel Canisters Used for Measuring Volatile Organic Compounds in Petrochemical Factories.

    PubMed

    Chang, Cheng-Ping; Lin, Tser-Cheng; Lin, Yu-Wen; Hua, Yi-Chun; Chu, Wei-Ming; Lin, Tzu-Yu; Lin, Yi-Wen; Wu, Jyun-De

    2016-04-01

    The purpose of this study was to compare thermal desorption tubes and stainless steel canisters for measuring volatile organic compounds (VOCs) emitted from petrochemical factories. Twelve petrochemical factories in the Mailiao Industrial Complex were recruited for the measurements of VOCs. Thermal desorption tubes and 6-l specially prepared stainless steel canisters were used to perform simultaneous active sampling of environmental air samples. The sampling time was set to 6 h, close to a full work shift of the workers. A total of 94 pairwise air samples were collected using the thermal desorption tubes and stainless steel canisters in these 12 factories in the petrochemical industrial complex. To maximize the number of comparative data points, all the measurements from all the factories at different sampling times were pooled to perform a linear regression analysis for each selected VOC. The Pearson product-moment correlation coefficient was used to examine the correlation between the pairwise measurements of the two sampling methods. A paired t-test was also performed to examine whether the difference in the concentrations of each selected VOC measured by the two methods was statistically significant. The correlation coefficients of seven compounds, including acetone, n-hexane, benzene, toluene, 1,2-dichloroethane, 1,3-butadiene, and styrene, were >0.80, indicating that the two sampling methods had high consistency for these VOCs' measurements. The paired t-tests for the measurements of n-hexane, benzene, m/p-xylene, o-xylene, 1,2-dichloroethane, and 1,3-butadiene showed statistically significant differences (P-value < 0.05), indicating that the two sampling methods had various degrees of systematic error. For these six chemicals, the systematic errors probably resulted from differences in the detection limits of the two sampling methods. The comparison between the concentrations of each of the 10 selected VOCs measured by the two sampling methods indicated that the thermal desorption tubes provided high-accuracy and high-precision measurements for acetone, benzene, and 1,3-butadiene. The accuracy and precision of the thermal desorption tubes for measuring VOCs can be further improved by new developments in sorbent materials, multi-sorbent designs, and thermal desorption instrumentation. More applications of thermal desorption tubes for measuring occupational and environmental hazardous agents can be anticipated. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
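
    The two comparisons used above, Pearson correlation for consistency and a paired t-test for systematic difference, can be sketched as follows with simulated (not the study's) paired measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical paired benzene measurements (ppb) from the two samplers;
# the tube values carry a small systematic offset plus random noise.
rng = np.random.default_rng(42)
canister = rng.lognormal(mean=1.0, sigma=0.5, size=30)
tube = canister * 1.08 + rng.normal(scale=0.2, size=30)

r, p_corr = stats.pearsonr(canister, tube)   # consistency of the two methods
t, p_diff = stats.ttest_rel(tube, canister)  # systematic difference between them

print(f"Pearson r = {r:.2f} (p = {p_corr:.3g})")
print(f"paired t = {t:.2f} (p = {p_diff:.3g})")
```

    Note that the two tests answer different questions: two methods can be almost perfectly correlated (high r) and still differ systematically (significant paired t), which is exactly the pattern reported for several VOCs above.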

  9. Sample similarity analysis of angles of repose based on experimental results for DEM calibration

    NASA Astrophysics Data System (ADS)

    Tan, Yuan; Günthner, Willibald A.; Kessler, Stephan; Zhang, Lu

    2017-06-01

    As a fundamental material property, the particle-particle friction coefficient is usually calibrated against the angle of repose, which can be obtained experimentally. In the present study, the bottomless cylinder test was carried out to investigate this friction coefficient for a biomass material, willow chips. Because of the irregular particle shape and varying particle size distribution, calculating the angle of repose becomes less straightforward and less conclusive. In previous studies, only one section of the uneven slope was chosen in most cases, although standard methods for defining a representative section are barely found. Hence, we present an efficient and reliable method based on 3D scanning, which was used to digitize the surface of the heaps and generate a point cloud. Then, the two tangential lines of any selected section were calculated through linear least-squares regression (LLSR), such that the left and right angles of repose of a pile could be derived. As the next step, a number of sections were stochastically selected, and the calculations were repeated correspondingly to obtain a sample of angles, which was plotted in Cartesian coordinates as a scatter diagram. Subsequently, different samples were acquired through various selections of sections. By analysing the similarities and differences of these samples, the reliability of the proposed method was verified. These initial results provide a realistic criterion for reducing the deviation between experiment and simulation caused by the random selection of a single angle, and will be compared with simulation results in future work.
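
    The tangent-line step can be sketched as follows: fit each flank of a heap cross-section by linear least squares and convert the slopes to angles. The data are synthetic, and the apex position is assumed known; a real pipeline would locate the apex from the point cloud.

```python
import numpy as np

def repose_angles(xz, apex_x):
    """Fit left/right tangent lines of one heap cross-section by linear
    least squares and return the two angles of repose in degrees."""
    left = xz[xz[:, 0] < apex_x]
    right = xz[xz[:, 0] >= apex_x]
    angles = []
    for side in (left, right):
        slope, _ = np.polyfit(side[:, 0], side[:, 1], 1)  # least-squares line
        angles.append(np.degrees(np.arctan(abs(slope))))
    return angles

# Synthetic cross-section of a symmetric heap with ~30 degree flanks plus scan noise
rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 200)
z = (1 - np.abs(x)) * np.tan(np.radians(30)) + rng.normal(scale=0.005, size=x.size)
left_deg, right_deg = repose_angles(np.column_stack([x, z]), apex_x=0.0)
print(left_deg, right_deg)
```

    Repeating this over many stochastically chosen sections yields the sample of angles whose similarity is analysed in the study.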

  10. Application of surface enhanced Raman scattering and competitive adaptive reweighted sampling on detecting furfural dissolved in transformer oil

    NASA Astrophysics Data System (ADS)

    Chen, Weigen; Zou, Jingxin; Wan, Fu; Fan, Zhou; Yang, Dingkun

    2018-03-01

    Detecting furfural dissolved in mineral oil is an essential technique for evaluating the ageing condition of oil-paper insulation and the degradation of its mechanical properties. Compared with traditional detection methods, Raman spectroscopy is markedly more convenient and time-saving in operation. This study explored the application of surface enhanced Raman scattering (SERS) to the quantitative analysis of furfural dissolved in oil. Oil solutions with different concentrations of furfural were prepared and calibrated by high-performance liquid chromatography. Confocal laser Raman spectroscopy (CLRS) and SERS were employed to acquire the Raman spectral data. Monte Carlo cross validation (MCCV) was used to eliminate outliers in the sample set, and competitive adaptive reweighted sampling (CARS) was then applied to select an optimal combination of informative variables that best reflect the chemical properties of concern. Based on the selected Raman spectral features, a support vector machine (SVM) combined with particle swarm optimization (PSO) was used to build a quantitative analysis model for furfural. Finally, the generalization ability and prediction precision of the established method were verified with samples prepared in the laboratory. In summary, a new spectral method is proposed for the rapid detection of furfural in oil, which lays a foundation for evaluating the ageing of oil-paper insulation in oil-immersed electrical equipment.

  11. The Empirical Selection of Anchor Items Using a Multistage Approach

    ERIC Educational Resources Information Center

    Craig, Brandon

    2017-01-01

    The purpose of this study was to determine if using a multistage approach for the empirical selection of anchor items would lead to more accurate DIF detection rates than the anchor selection methods proposed by Kopf, Zeileis, & Strobl (2015b). A simulation study was conducted in which the sample size, percentage of DIF, and balance of DIF…

  12. Matrix Effect Evaluation and Method Validation of Azoxystrobin and Difenoconazole Residues in Red Flesh Dragon Fruit (Hylocereus polyrhizus) Matrices Using QuEChERS Sample Preparation Methods Followed by LC-MS/MS Determination.

    PubMed

    Noegrohati, Sri; Hernadi, Elan; Asviastuti, Syanti

    2018-06-01

    Production of red flesh dragon fruit (Hylocereus polyrhizus) is hampered by Colletotrichum sp. Pre-harvest application of an azoxystrobin and difenoconazole mixture is recommended; therefore, a selective and sensitive multi-residue analytical method is required for monitoring and evaluating the commodity's safety. LC-MS/MS is a well-established technique for qualitative and quantitative determination in complex matrices, but it is hindered by interference from co-eluting co-extractives. This work evaluated the pH effect of acetate-buffered and citrate-buffered QuEChERS sample preparation on their effectiveness in reducing the matrix effect. The citrate-buffered QuEChERS proved to produce a clean final extract with a relative matrix effect of 0.4%-0.7%. Method validation of the selected sample preparation followed by LC-MS/MS for whole dragon fruit, flesh and peel matrices fortified at 0.005, 0.01, 0.1 and 1 μg/g showed recoveries of 75%-119% and intermediate repeatability of 2%-14%. The expanded uncertainties were 7%-48%. Based on the international acceptance criteria, this method is valid.

  13. Selective extraction based on poly(MAA-VB-EGMDA) monolith followed by HPLC for determination of hordenine in plasma and urine samples.

    PubMed

    Chen, Yonggang; Meng, Junhua; Zou, Jili; An, Jing

    2015-06-01

    Hordenine is an active compound found in several foods, herbs and beer. In this work, a novel sorbent was fabricated for the selective solid-phase extraction (SPE) of hordenine from biological samples. The organic polymer sorbent was synthesized in one step in the plastic barrel of a syringe from a pre-polymerization solution consisting of methacrylic acid (MAA), 4-vinylphenylboronic acid (VB) and ethylene glycol dimethacrylate (EGDMA). The preparation conditions were optimized to generate a poly(MAA-VB-EGMDA) monolith with good permeability. The monolith exhibited good enrichment efficiency towards hordenine. Using tyramine as the internal standard, a poly(MAA-VB-EGMDA)-based SPE-HPLC method was established for the analysis of hordenine. Conditions for SPE, including the volume of eluting solvent, pH of the sample solution, sampling rate and sample volume, were optimized. The proposed SPE-HPLC method showed good linearity (R² = 0.9992) within 10-2000 ng/mL, and the detection limit was 3 ng/mL, significantly more sensitive than reported methods. The method was also applied to plasma and urine samples; good matrix-removal capability was observed, while hordenine at low levels was well extracted and enriched. The recoveries were 90.6-94.7% and 89.3-91.5% for the spiked plasma and urine samples, respectively, with relative standard deviations <4.7%. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Rapid screening of selective serotonin re-uptake inhibitors in urine samples using solid-phase microextraction gas chromatography-mass spectrometry.

    PubMed

    Salgado-Petinal, Carmen; Lamas, J Pablo; Garcia-Jares, Carmen; Llompart, Maria; Cela, Rafael

    2005-07-01

    In this paper, a solid-phase microextraction-gas chromatography-mass spectrometry (SPME-GC-MS) method is proposed for the rapid analysis of some frequently prescribed selective serotonin re-uptake inhibitors (SSRIs), namely venlafaxine, fluvoxamine, mirtazapine, fluoxetine, citalopram, and sertraline, in urine samples. The SPME-based method enables simultaneous determination of the target SSRIs after simple in-situ derivatization of some of the target compounds. Calibration curves in water and in urine were validated and statistically compared. This revealed the absence of a matrix effect and, consequently, the possibility of quantifying SSRIs in urine samples by external water calibration. Intra-day and inter-day precision was satisfactory for all the target compounds (relative standard deviation, RSD, <14%) and the detection limits achieved were <0.4 ng mL⁻¹ of urine. The time required for the SPME step and for GC analysis (30 min each) enables high throughput. The method was applied to real urine samples from patients being treated with some of these pharmaceuticals. Some SSRI metabolites were also detected and tentatively identified.

  15. The Impact of Selection, Gene Conversion, and Biased Sampling on the Assessment of Microbial Demography.

    PubMed

    Lapierre, Marguerite; Blin, Camille; Lambert, Amaury; Achaz, Guillaume; Rocha, Eduardo P C

    2016-07-01

    Recent studies have linked demographic changes and epidemiological patterns in bacterial populations using coalescent-based approaches. We identified 26 studies using skyline plots and found that 21 inferred overall population expansion. This surprising result led us to analyze the impact of natural selection, recombination (gene conversion), and sampling biases on demographic inference using skyline plots and site frequency spectra (SFS). Forward simulations based on biologically relevant parameters from Escherichia coli populations showed that theoretical arguments on the detrimental impact of recombination and especially natural selection on the reconstructed genealogies cannot be ignored in practice. In fact, both processes systematically lead to spurious interpretations of population expansion in skyline plots (and in SFS for selection). Weak purifying selection, and especially positive selection, had important effects on skyline plots, showing patterns akin to those of population expansions. State-of-the-art techniques to remove recombination further amplified these biases. We simulated three common sampling biases in microbiological research: uniform, clustered, and mixed sampling. Alone, or together with recombination and selection, they further mislead demographic inferences, producing almost any possible skyline shape or SFS. Interestingly, sampling sub-populations also affected skyline plots and SFS, because the coalescent rates of populations and their sub-populations had different distributions. This study suggests that extreme caution is needed to infer demographic changes solely based on reconstructed genealogies. We suggest that the development of novel sampling strategies and the joint analyses of diverse population genetic methods are strictly necessary to estimate demographic changes in populations where selection, recombination, and biased sampling are present. © The Author 2016. 
Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
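
    A site frequency spectrum of the kind analyzed above simply counts segregating sites per allele-frequency class; a minimal folded-SFS sketch from a 0/1 allele matrix with made-up data:

```python
import numpy as np

def folded_sfs(geno):
    """geno: sites x samples matrix of 0/1 alleles.
    Returns counts of segregating sites per minor-allele count (1 .. n//2)."""
    n = geno.shape[1]
    counts = geno.sum(axis=1)
    minor = np.minimum(counts, n - counts)  # fold: minor-allele count
    minor = minor[minor > 0]                # drop monomorphic sites
    return np.bincount(minor, minlength=n // 2 + 1)[1:]

# 6 haploid samples, 5 segregating sites
geno = np.array([
    [1, 0, 0, 0, 0, 0],   # singleton
    [1, 1, 0, 0, 0, 0],   # doubleton
    [1, 1, 1, 0, 0, 0],   # tripleton
    [1, 1, 1, 1, 1, 0],   # folds to minor count 1
    [0, 1, 0, 0, 0, 0],   # singleton
])
print(folded_sfs(geno))   # prints [3 1 1]
```

    Demographic inference compares the observed shape of this vector with neutral expectations, which is why selection and biased sampling, by reshaping it, can mimic expansion.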

  16. EPA Method 200.8: Determination of Trace Elements in Waters and Wastes by Inductively Coupled Plasma-Mass Spectrometry

    EPA Pesticide Factsheets

    EPA’s Selected Analytical Methods for Environmental Remediation and Recovery (SAM) lists this method for the preparation and analysis of drinking water samples to detect and measure compounds containing arsenic, thallium and vanadium.

  17. Mendelian breeding units versus standard sampling strategies: Mitochondrial DNA variation in southwest Sardinia

    PubMed Central

    Sanna, Daria; Pala, Maria; Cossu, Piero; Dedola, Gian Luca; Melis, Sonia; Fresu, Giovanni; Morelli, Laura; Obinu, Domenica; Tonolo, Giancarlo; Secchi, Giannina; Triunfo, Riccardo; Lorenz, Joseph G.; Scheinfeldt, Laura; Torroni, Antonio; Robledo, Renato; Francalacci, Paolo

    2011-01-01

    We report a sampling strategy based on Mendelian Breeding Units (MBUs), representing an interbreeding group of individuals sharing a common gene pool. The identification of MBUs is crucial for case-control experimental design in association studies. The aim of this work was to evaluate the possible existence of bias in terms of genetic variability and haplogroup frequencies in the MBU sample, due to severe sample selection. In order to reach this goal, the MBU sampling strategy was compared to a standard selection of individuals according to their surname and place of birth. We analysed mitochondrial DNA variation (first hypervariable segment and coding region) in unrelated healthy subjects from two different areas of Sardinia: the area around the town of Cabras and the western Campidano area. No statistically significant differences were observed when the two sampling methods were compared, indicating that the stringent sample selection needed to establish a MBU does not alter original genetic variability and haplogroup distribution. Therefore, the MBU sampling strategy can be considered a useful tool in association studies of complex traits. PMID:21734814

  18. Subrandom methods for multidimensional nonuniform sampling.

    PubMed

    Worley, Bradley

    2016-08-01

    Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics. Copyright © 2016 Elsevier Inc. All rights reserved.
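
    One family of subrandom sequences is the additive recurrence based on the golden ratio, which is deterministic and therefore needs no seed. The sketch below, with a hypothetical grid size and density decay, selects a nonuniform schedule from an exponentially weighted grid; it illustrates the general idea rather than the paper's specific schemes.

```python
import numpy as np

def subrandom_schedule(grid_size, n_points, decay=2.0):
    """Pick n_points grid indices with the golden-ratio additive
    recurrence u_i = frac(i * phi): deterministic, so no seed is needed.
    An exponential density exp(-decay * t / grid_size) biases the
    schedule toward early grid points, as is common in NUS."""
    phi = (np.sqrt(5) - 1) / 2
    u = np.mod(np.arange(1, 10 * grid_size) * phi, 1.0)
    # invert the CDF of p(t) proportional to exp(-decay * t / grid_size)
    t = -grid_size / decay * np.log(1 - u * (1 - np.exp(-decay)))
    picked = []
    for i in t.astype(int):  # collect distinct indices in sequence order
        if 0 <= i < grid_size and i not in picked:
            picked.append(i)
        if len(picked) == n_points:
            break
    return np.sort(np.array(picked))

sched = subrandom_schedule(grid_size=128, n_points=40)
print(len(sched), sched[:8])
```

    Because the sequence is fully determined by the recurrence, two runs with the same parameters give the identical schedule, which is the seed-independence property the paper formalizes.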

  19. [Local Regression Algorithm Based on Net Analyte Signal and Its Application in Near Infrared Spectral Analysis].

    PubMed

    Zhang, Hong-guang; Lu, Jian-gang

    2016-02-01

    To overcome the problems of significant differences among samples and of nonlinearity between the property and the spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method is first used to obtain the net analyte signal of the calibration samples and of the unknown samples; the Euclidean distance between the net analyte signal of an unknown sample and those of the calibration samples is then calculated and used as a similarity index. According to this similarity index, a local calibration set is individually selected for each unknown sample. Finally, a local PLS regression model is built on the local calibration set of each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and of a conventional local regression algorithm based on spectral Euclidean distance.
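
    The local-selection step can be sketched as below, using plain spectral Euclidean distance as the similarity index on simulated data; note the paper instead computes distances between net analyte signals, and a ridge-stabilized linear fit stands in for PLS.

```python
import numpy as np

def local_predict(X_cal, y_cal, x_new, k=15):
    """Select the k calibration samples nearest to x_new (Euclidean distance
    as the similarity index) and fit a small linear model on that local set."""
    d = np.linalg.norm(X_cal - x_new, axis=1)
    near = np.argsort(d)[:k]
    Xl = np.column_stack([np.ones(k), X_cal[near]])  # local design with intercept
    A = Xl.T @ Xl + 1e-8 * np.eye(Xl.shape[1])       # ridge keeps it well-posed
    beta = np.linalg.solve(A, Xl.T @ y_cal[near])
    return float(np.concatenate([[1.0], x_new]) @ beta)

# Toy nonlinear data: a local linear fit adapts where a single global one cannot
rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.01, size=500)
x_q = X[0]
print(local_predict(X[1:], y[1:], x_q, k=25))
```

    Fitting one small model per query trades a single global calibration for many cheap local ones, which is exactly how the algorithm copes with sample-to-sample differences and nonlinearity.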

  20. Estimating the "impact" of out-of-home placement on child well-being: approaching the problem of selection bias.

    PubMed

    Berger, Lawrence M; Bruch, Sarah K; Johnson, Elizabeth I; James, Sigrid; Rubin, David

    2009-01-01

    This study used data on 2,453 children aged 4-17 from the National Survey of Child and Adolescent Well-Being and 5 analytic methods that adjust for selection factors to estimate the impact of out-of-home placement on children's cognitive skills and behavior problems. Methods included ordinary least squares (OLS) regressions and residualized change, simple change, difference-in-difference, and fixed effects models. Models were estimated using the full sample and a matched sample generated by propensity scoring. Although results from the unmatched OLS and residualized change models suggested that out-of-home placement is associated with increased child behavior problems, estimates from models that more rigorously adjust for selection bias indicated that placement has little effect on children's cognitive skills or behavior problems.

  1. Photoacoustic sensor for medical diagnostics

    NASA Astrophysics Data System (ADS)

    Wolff, Marcus; Groninga, Hinrich G.; Harde, Hermann

    2004-03-01

    The development of new optical sensor technologies has a major impact on the progress of diagnostic methods. Of the steadily growing number of non-invasive breath tests, the 13C-Urea Breath Test (UBT) for the detection of Helicobacter pylori is the most prominent, but many recent developments, such as the detection of cancer by breath analysis, go beyond gastroenterological applications. We present a new detection scheme for breath analysis that employs an especially compact and simple set-up. Photoacoustic Spectroscopy (PAS) is an offset-free technique that allows short absorption paths and small sample cells. Using a single-frequency diode laser and taking advantage of the acoustic resonances of the sample cell, we performed extremely sensitive and selective measurements. The data processing method also contributes to the extraordinary sensitivity and selectivity. In addition, the reasonable acquisition cost and low operational cost make this detection scheme attractive for many biomedical applications. The experimental set-up and data processing method are presented, together with exemplary isotope-selective measurements on carbon dioxide.

  2. [Combining speech sample and feature bilateral selection algorithm for classification of Parkinson's disease].

    PubMed

    Zhang, Xiaoheng; Wang, Lirui; Cao, Yao; Wang, Pin; Zhang, Cheng; Yang, Liuyang; Li, Yongming; Zhang, Yanling; Cheng, Oumei

    2018-02-01

    Diagnosis of Parkinson's disease (PD) based on speech data has proved effective in recent years. However, current research focuses on feature extraction and classifier design and does not consider instance selection. Earlier work by the authors showed that instance selection can improve classification accuracy; however, no attention has been paid so far to the relationship between speech samples and features. Therefore, a new PD diagnosis algorithm is proposed in this paper that simultaneously selects speech samples and features, based on a relevant-feature weighting algorithm and a multiple kernel method, so as to exploit their synergy and thereby improve classification accuracy. Experimental results showed that the proposed algorithm clearly improved classification accuracy, achieving a mean accuracy of 82.5%, which was 30.5% higher than that of the related algorithm. Moreover, the proposed algorithm detected synergy effects between speech samples and features, which is valuable for speech-marker extraction.

  3. 40 CFR 90.419 - Raw emission sampling calculations-gasoline fueled engines.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Raw emission sampling calculations... KILOWATTS Gaseous Exhaust Test Procedures § 90.419 Raw emission sampling calculations—gasoline fueled... selected as the basis for mass emission calculations using the raw gas method. ER03JY95.022 Where: WHC...

  4. 40 CFR 90.419 - Raw emission sampling calculations-gasoline fueled engines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Raw emission sampling calculations... KILOWATTS Gaseous Exhaust Test Procedures § 90.419 Raw emission sampling calculations—gasoline fueled... selected as the basis for mass emission calculations using the raw gas method. ER03JY95.022 Where: WHC...

  5. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…
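The bias-reduction role of sampling weights mentioned above can be illustrated with a minimal inverse-probability (Horvitz-Thompson-style) estimator. This is an illustrative sketch only, not the random effects estimators studied in the article; the function name is ours.

```python
def weighted_mean(values, selection_probs):
    """Hajek-style estimator: weight each sampled value by the inverse
    of its selection probability, then normalize by the weight total."""
    weights = [1.0 / p for p in selection_probs]
    return sum(w * y for w, y in zip(weights, values)) / sum(weights)

# A unit sampled with probability 0.1 stands for ten population units,
# so it counts ten times as much as a unit sampled with certainty.
estimate = weighted_mean([10.0, 20.0], [1.0, 0.1])  # (1*10 + 10*20) / 11
```

Ignoring the selection probabilities here (a plain mean) would pull the estimate toward the over-sampled units, which is the bias the weights are designed to remove.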

  6. Biochemical and nutritional components of selected honey samples.

    PubMed

    Chua, Lee Suan; Adnan, Nur Ardawati

    2014-01-01

    The purpose of this study was to investigate the relationship between the biochemical (enzyme) and nutritional components of selected honey samples from Malaysia. This relationship is important for estimating the quality of honey from the concentrations of these nutritious components. Such studies are scarce for honey samples from tropical countries with heavy rainfall throughout the year. Six honey samples commonly consumed by local people were collected for the study. Both the biochemical and nutritional components were analysed using standard methods of the Association of Official Analytical Chemists (AOAC). Individual monosaccharides, disaccharides and 17 amino acids in honey were determined by liquid chromatography. The results showed that the peroxide activity was positively correlated with moisture content (r = 0.8264) but negatively correlated with carbohydrate content (r = 0.7755) in honey. The chromatographic sugar and free amino acid profiles showed that the honey samples could be clustered by type and maturity. Proline accounted for 64.9% of the total variance in principal component analysis (PCA). The correlation between honey components and honey quality was established for the selected honey samples based on their biochemical and nutritional concentrations. The PCA results revealed that the ratio of sucrose to maltose could be used to measure honey maturity, whereas proline was the marker compound used to distinguish floral from honeydew honey.

  7. Cocaine abuse determination by ion mobility spectrometry using molecular imprinting.

    PubMed

    Sorribes-Soriano, A; Esteve-Turrillas, F A; Armenta, S; de la Guardia, M; Herrero-Martínez, J M

    2017-01-20

    A cocaine-based molecularly imprinted polymer (MIP) was produced by bulk polymerization and employed as a selective solid-phase extraction support for the determination of cocaine in saliva samples by ion mobility spectrometry (IMS). The most appropriate conditions for washing and elution of cocaine from the MIPs were studied, and the MIPs were characterized in terms of analyte binding capacity, reusability in water and saliva analysis, imprinting factor, and selectivity, and were compared with non-imprinted polymers. The proposed MIP-IMS method provided an LOD of 18 μg L⁻¹ and quantitative recoveries for blank saliva samples spiked with 75 to 500 μg L⁻¹ cocaine. Oral fluid samples were collected from cocaine consumers and analysed by the proposed MIP-IMS methodology. The results, ranging from below the LOD to 51 ± 2 mg L⁻¹, were statistically comparable to those obtained by a confirmatory gas chromatography-mass spectrometry method. Moreover, the results were compared with a qualitative lateral flow immunoassay procedure, which provided a similar classification of the samples. Thus, MIP-IMS can be considered a useful alternative that provides fast, selective and sensitive results with affordable instrumentation and does not require skilled operators. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. 40 CFR 63.547 - Test methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 9 2010-07-01 2010-07-01 false Test methods. 63.547 Section 63.547... Hazardous Air Pollutants from Secondary Lead Smelting § 63.547 Test methods. (a) The following test methods...), and 63.545(e): (1) Method 1 shall be used to select the sampling port location and the number of...

  9. 40 CFR 63.547 - Test methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 9 2011-07-01 2011-07-01 false Test methods. 63.547 Section 63.547... Hazardous Air Pollutants from Secondary Lead Smelting § 63.547 Test methods. (a) The following test methods...), and 63.545(e): (1) Method 1 shall be used to select the sampling port location and the number of...

  10. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually treated as a source of fluctuation in near-infrared spectral measurement, and chemometric methods have been studied extensively to correct for temperature variations. However, temperature can also be treated as a constructive parameter that provides detailed chemical information when varied systematically during the measurement. Our group has previously studied the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution of the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on MTCS and the relationship between TSVC and normalized squared temperature. We compared the prediction performance of PLS models based on random sampling and on the proposed methods. Experimental results showed that the proposed methods improved prediction performance; the MTCS and DTCS methods are therefore alternatives for improving prediction accuracy in near-infrared spectral measurement.

  11. Effects of Sample Selection Bias on the Accuracy of Population Structure and Ancestry Inference

    PubMed Central

    Shringarpure, Suyash; Xing, Eric P.

    2014-01-01

    Population stratification is an important task in genetic analyses. It provides information about the ancestry of individuals and can be an important confounder in genome-wide association studies. Public genotyping projects have made a large number of datasets available for study. However, practical constraints dictate that, of a geographical/ethnic population, only a small number of individuals are genotyped. The resulting data are a sample from the entire population. If the distribution of sample sizes is not representative of the populations being sampled, the accuracy of population stratification analyses of the data could be affected. We attempt to understand the effect of biased sampling on the accuracy of population structure analysis and individual ancestry recovery. We examined two commonly used methods for analyses of such datasets, ADMIXTURE and EIGENSOFT, and found that the accuracy of recovery of population structure is affected to a large extent by the sample used for analysis and how representative it is of the underlying populations. Using simulated data and real genotype data from cattle, we show that sample selection bias can affect the results of population structure analyses. We develop a mathematical framework for sample selection bias in models for population structure and also propose a correction for sample selection bias using auxiliary information about the sample. We demonstrate that such a correction is effective in practice using simulated and real data. PMID:24637351

  12. Method validation for the determination of ochratoxin A in green and soluble coffee by immunoaffinity column cleanup and liquid chromatography.

    PubMed

    Diaz, G J; Ariza, D; Perilla, N S

    2004-06-01

    A method was validated for the determination of ochratoxin A (OTA) in soluble and green coffee. Performance parameters evaluated included selectivity, accuracy, intermediate precision, linearity, limit of detection, limit of quantitation, and ruggedness. The method was found to be selective for OTA in both matrices tested. Recovery rates ranged from 73.5 to 91.2% for soluble coffee samples and from 68.7 to 84.5% for green coffee samples. The intermediate precision (RSD) was between 9.1 and 9.4% for soluble coffee and between 14.3 and 15.5% for green coffee analysis. The standard calibration curve was linear (r² > 0.999) for OTA levels of 1.0-20.0 μg/kg in coffee samples. The limit of detection was determined to be 0.01 ng of OTA on column, while the limit of quantitation was found to be 0.03 ng on column. The limit of quantitation is equivalent to 0.6 μg/kg in soluble coffee samples and 0.3 μg/kg in green coffee samples. The results of the ruggedness trial showed that two factors are critical for soluble coffee analysis: the extraction method and the flow rate of the mobile phase. For green coffee analysis, the two critical factors were the extraction method and the storage temperature of the immunoaffinity column. Five samples of soluble coffee and 42 of green coffee were analysed using the validated method. All soluble coffee samples contained OTA at levels that ranged from 8.4 to 13.9 μg/kg. Six of the 42 green coffee samples analysed (14.3%) contained OTA at levels ranging from 0.9 to 19.4 μg/kg. The validated method can be used to monitor OTA levels in Colombian coffee for export or for local consumption.

  13. Method for the concentration and separation of actinides from biological and environmental samples

    DOEpatents

    Horwitz, E.P.; Dietz, M.L.

    1989-05-30

    A method and apparatus for the quantitative recovery of actinide values from biological and environmental samples by passing appropriately prepared samples in a mineral acid solution through a separation column of a dialkyl(phenyl)-N,N-dialkylcarbamoylmethylphosphine oxide dissolved in tri-n-butyl phosphate on an inert substrate, which selectively extracts the actinide values. The actinide values can be eluted either as a group or individually, and their presence quantitatively detected by alpha counting. 3 figs.

  14. Using local multiplicity to improve effect estimation from a hypothesis-generating pharmacogenetics study.

    PubMed

    Zou, W; Ouyang, H

    2016-02-01

    We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to jointly model individual effect estimates from maximum likelihood estimation (MLE) in a region and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA, and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced the MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.

  15. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research

    PubMed Central

    Palinkas, Lawrence A.; Horwitz, Sarah M.; Green, Carla A.; Wisdom, Jennifer P.; Duan, Naihua; Hoagwood, Kimberly

    2013-01-01

    Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research. PMID:24193818

  16. Comparison of a specific HPLC determination of toxic aconite alkaloids in processed Radix aconiti with a titration method of total alkaloids.

    PubMed

    Csupor, Dezso; Borcsa, Botond; Heydel, Barbara; Hohmann, Judit; Zupkó, István; Ma, Yan; Widowitz, Ute; Bauer, Rudolf

    2011-10-01

    In traditional Chinese medicine, Aconitum (Ranunculaceae) roots are only applied after processing. Nevertheless, several cases of poisoning by improperly processed aconite roots have been reported. The aim of this study was to develop a reliable analytical method to assess the amount of toxic aconite alkaloids in commercial aconite roots, and to compare this method with the commonly used total alkaloid content determination by titration. The content of mesaconitine, aconitine, and hypaconitine in 16 commercial samples of processed aconite roots was determined by an HPLC method and the total alkaloid content by indirect titration. Five samples were selected for in vivo toxicological investigation. In most of the commercial samples, toxic alkaloids were not detectable, or only traces were found. In four samples, we could detect >0.04% toxic aconite alkaloids, the highest with a content of 0.16%. The results of HPLC analysis were compared with the results obtained by titration, and no correlation was found between the two methods. The in vivo results confirmed the validity of the HPLC determination. Samples with mesaconitine, aconitine, and hypaconitine content below the HPLC detection limit still contained up to 0.2% alkaloids as determined by titration. Since titration gives no selective information on the aconitine-type alkaloid content and toxicity of aconite roots, this method is not appropriate for safety assessment. The HPLC method developed by us provides a quick and reliable assessment of toxicity and should be considered as a purity test in pharmacopoeia monographs.

  17. Microbiological Quality and Food Safety of Plants Grown on ISS Project

    NASA Technical Reports Server (NTRS)

    Wheeler, Raymond M. (Compiler)

    2014-01-01

    The goal of this project is to select and advance methods to enable real-time sampling, microbiological analysis, and sanitation of crops grown on the International Space Station (ISS). These methods would validate the microbiological quality of crops grown for consumption to ensure safe and palatable fresh foods. This would be achieved through the development / advancement of microbiological sample collection, rapid pathogen detection and effective sanitation methods that are compatible with a microgravity environment.

  18. A novel method for the determination of chemical purity and assay of menaquinone-7. Comparison with the methods from the official USP monograph.

    PubMed

    Jedynak, Łukasz; Jedynak, Maria; Kossykowska, Magdalena; Zagrodzka, Joanna

    2017-02-20

    An HPLC method with UV detection and separation with the use of a C30 reversed phase analytical column for the determination of chemical purity and assay of menaquinone-7 (MK7) in one chromatographic run was developed. The method is superior to the methods published in the USP Monograph in terms of selectivity, sensitivity and accuracy, as well as time, solvent and sample consumption. The developed methodology was applied to MK7 samples of active pharmaceutical ingredient (API) purity, MK7 samples of lower quality and crude MK7 samples before purification. The comparison of the results revealed that the use of USP methodology could lead to serious overestimation (up to a few percent) of both purity and MK7 assay in menaquinone-7 samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Local T1-T2 distribution measurements in porous media

    NASA Astrophysics Data System (ADS)

    Vashaee, S.; Li, M.; Newling, B.; MacMillan, B.; Marica, F.; Kwak, H. T.; Gao, J.; Al-harbi, A. M.; Balcom, B. J.

    2018-02-01

    A novel slice-selective T1-T2 measurement is proposed to measure spatially resolved T1-T2 distributions. An adiabatic inversion pulse is employed for slice selection. The slice-selective pulse is able to select a quasi-rectangular slice, on the order of 1 mm, at an arbitrary position within the sample. The method does not employ conventional selective excitation, in which the longitudinal magnetization in the slice of interest is rotated into the transverse plane; instead, it relies on a subtraction of CPMG data acquired with and without adiabatic inversion slice selection. T1 weighting is introduced during recovery from the inversion associated with slice selection. The local T1-T2 distributions measured are of similar quality to bulk T1-T2 measurements. The new method can be employed to characterize oil-water mixtures and other fluids in porous media, and is beneficial when a coarse spatial distribution of the components is of interest.

  20. Maximum entropy PDF projection: A review

    NASA Astrophysics Data System (ADS)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having the highest possible entropy among all distributions consistent with p(z) may readily be found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
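As a hedged sketch of the central construction described above (our notation, not a quotation from the review): with feature map $z = T(x)$ and a maximum entropy reference distribution $p_0(x)$ inducing the feature density $p_0(z)$, the projected PDF is commonly written

```latex
p(x) \;=\; \frac{p\big(T(x)\big)}{p_0\big(T(x)\big)}\, p_0(x)
```

so that $p(x)$ reproduces $p(z)$ on the feature and reverts to the maximum entropy reference within each level set of $T$; sampling then amounts to drawing $z \sim p(z)$ and drawing $x$ from $p_0$ conditioned on $T(x) = z$.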

  1. Rapid thermal processing by stamping

    DOEpatents

    Stradins, Pauls; Wang, Qi

    2013-03-05

    A rapid thermal processing device and methods are provided for thermal processing of samples such as semiconductor wafers. The device has components including a stamp (35) having a stamping surface and a heater or cooler (40) to bring it to a selected processing temperature, a sample holder (20) for holding a sample (10) in position for intimate contact with the stamping surface; and positioning components (25) for moving the stamping surface and the stamp (35) in and away from intimate, substantially non-pressured contact. Methods for using and making such devices are also provided. These devices and methods allow inexpensive, efficient, easily controllable thermal processing.

  2. Quantitative Method for Simultaneous Analysis of Acetaminophen and 6 Metabolites.

    PubMed

    Lammers, Laureen A; Achterbergh, Roos; Pistorius, Marcel C M; Romijn, Johannes A; Mathôt, Ron A A

    2017-04-01

    Hepatotoxicity after ingestion of high-dose acetaminophen [N-acetyl-para-aminophenol (APAP)] is caused by the metabolites of the drug. To gain more insight into factors influencing susceptibility to APAP hepatotoxicity, quantification of APAP and its metabolites is important. A few methods have been developed to simultaneously quantify APAP and its most important metabolites. However, these methods require comprehensive sample preparation and long run times. The aim of this study was to develop and validate a simplified but sensitive method for the simultaneous quantification of acetaminophen, the main metabolites acetaminophen glucuronide and acetaminophen sulfate, and 4 cytochrome P450-mediated metabolites by using liquid chromatography with mass spectrometric (LC-MS) detection. The method was developed and validated for human plasma, and it entailed a single sample-preparation procedure, enabling quick processing of the samples, followed by an LC-MS method with a chromatographic run time of 9 minutes. The method was validated for selectivity, linearity, accuracy, imprecision, dilution integrity, recovery, process efficiency, ionization efficiency, and carryover effect. The method showed good selectivity without matrix interferences. For all analytes, the mean process efficiency was >86%, and the mean ionization efficiency was >94%. Furthermore, the accuracy was between 90.3% and 112% for all analytes, and the within- and between-run imprecision were <20% for the lower limit of quantification and <14.3% for the middle level and upper limit of quantification. The method presented here enables the simultaneous quantification of APAP and 6 of its metabolites. It is less time consuming than previously reported methods because it requires only a single, simple sample-preparation procedure followed by an LC-MS method with a short run time. Therefore, this analytical method is useful for both clinical and research purposes.

  3. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap.

    PubMed

    Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E

    2016-06-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.
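The synthetic-population step described above can be sketched as a weighted Polya urn. The code below is a simplified illustration of the idea under the assumption of known case weights w_i, not the exact procedure or notation of the article; the function name and the selection-mass formula are our simplification.

```python
import random

def weighted_fpbb_population(sample, weights, N, rng=None):
    """Simplified weighted Polya-urn sketch of a finite-population
    Bayesian bootstrap: grow an observed sample of size n into a
    synthetic population of size N.  Each of the N - n extra draws
    picks unit i with probability proportional to
    (w_i - 1 + l_i * (N - n) / n), where l_i counts how many times
    unit i has already been added."""
    rng = rng or random.Random(0)
    n = len(sample)
    counts = [0] * n
    population = list(sample)
    for _ in range(N - n):
        mass = [weights[i] - 1 + counts[i] * (N - n) / n for i in range(n)]
        r = rng.uniform(0, sum(mass))
        acc, pick = 0.0, n - 1
        for i, m in enumerate(mass):
            acc += m
            if r <= acc:
                pick = i
                break
        counts[pick] += 1
        population.append(sample[pick])
    return population

# Units with larger weights are replicated more often on average.
pop = weighted_fpbb_population([1, 2], [2.0, 3.0], 10)
```

The resulting synthetic population can then be treated as a simple random sample at the imputation stage, which is the point of the approach.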

  4. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap

    PubMed Central

    Zhou, Hanzhi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in “Delta-V,” a key crash severity measure. PMID:29226161

  5. Testing for X-Ray–SZ Differences and Redshift Evolution in the X-Ray Morphology of Galaxy Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nurgaliev, D.; McDonald, M.; Benson, B. A.

    We present a quantitative study of the X-ray morphology of galaxy clusters, as a function of their detection method and redshift. We analyze two separate samples of galaxy clusters: a sample of 36 clusters at $0.35 < z < 0.9$ selected in the X-ray with the ROSAT PSPC 400 deg² survey, and a sample of 90 clusters at $0.25 < z < 1.2$ selected via the Sunyaev–Zel'dovich (SZ) effect with the South Pole Telescope. Clusters from both samples have similar-quality Chandra observations, which allow us to quantify their X-ray morphologies via two distinct methods: centroid shifts ($w$) and photon asymmetry ($A_{\mathrm{phot}}$). The latter technique provides nearly unbiased morphology estimates for clusters spanning a broad range of redshift and data quality. We further compare the X-ray morphologies of X-ray- and SZ-selected clusters with those of simulated clusters. We do not find a statistically significant difference in the measured X-ray morphology of X-ray- and SZ-selected clusters over the redshift range probed by these samples, suggesting that the two are probing similar populations of clusters. We find that the X-ray morphologies of simulated clusters are statistically indistinguishable from those of X-ray- or SZ-selected clusters, implying that the most important physics for dictating the large-scale gas morphology (outside of the core) is well-approximated in these simulations. Finally, we find no statistically significant redshift evolution in the X-ray morphology (both for observed and simulated clusters) over the range of $z \sim 0.3$ to $z \sim 1$, seemingly in contradiction with the redshift-dependent halo merger rate predicted by simulations.

  6. Testing for X-Ray–SZ Differences and Redshift Evolution in the X-Ray Morphology of Galaxy Clusters

    DOE PAGES

    Nurgaliev, D.; McDonald, M.; Benson, B. A.; ...

    2017-05-16

    We present a quantitative study of the X-ray morphology of galaxy clusters, as a function of their detection method and redshift. We analyze two separate samples of galaxy clusters: a sample of 36 clusters at $0.35 < z < 0.9$ selected in the X-ray with the ROSAT PSPC 400 deg² survey, and a sample of 90 clusters at $0.25 < z < 1.2$ selected via the Sunyaev–Zel'dovich (SZ) effect with the South Pole Telescope. Clusters from both samples have similar-quality Chandra observations, which allow us to quantify their X-ray morphologies via two distinct methods: centroid shifts ($w$) and photon asymmetry ($A_{\mathrm{phot}}$). The latter technique provides nearly unbiased morphology estimates for clusters spanning a broad range of redshift and data quality. We further compare the X-ray morphologies of X-ray- and SZ-selected clusters with those of simulated clusters. We do not find a statistically significant difference in the measured X-ray morphology of X-ray- and SZ-selected clusters over the redshift range probed by these samples, suggesting that the two are probing similar populations of clusters. We find that the X-ray morphologies of simulated clusters are statistically indistinguishable from those of X-ray- or SZ-selected clusters, implying that the most important physics for dictating the large-scale gas morphology (outside of the core) is well-approximated in these simulations. Finally, we find no statistically significant redshift evolution in the X-ray morphology (both for observed and simulated clusters) over the range of $z \sim 0.3$ to $z \sim 1$, seemingly in contradiction with the redshift-dependent halo merger rate predicted by simulations.

  7. Selecting Feature Subsets Based on SVM-RFE and the Overlapping Ratio with Applications in Bioinformatics.

    PubMed

    Lin, Xiaohui; Li, Chao; Zhang, Yanhui; Su, Benzhe; Fan, Meng; Wei, Hai

    2017-12-26

    Feature selection is an important topic in bioinformatics. Defining informative features from complex high-dimensional biological data is critical in disease study, drug development, and other applications. Support vector machine-recursive feature elimination (SVM-RFE) is an efficient feature selection technique that has shown its power in many applications. It ranks the features according to the recursive feature deletion sequence based on SVM. In this study, we propose a method, SVM-RFE-OA, which combines the classification accuracy rate and the average overlapping ratio of the samples to determine the number of features to be selected from the feature rank of SVM-RFE. Meanwhile, to measure the feature weights more accurately, we propose a modified SVM-RFE-OA (M-SVM-RFE-OA) algorithm that temporarily screens out the samples lying in a heavily overlapping area in each iteration. Experiments on eight public biological datasets show that the discriminative ability of the feature subset can be measured more accurately by combining the classification accuracy rate with the average overlapping degree of the samples than by using the classification accuracy rate alone, and that screening out the samples in the overlapping area makes the calculation of the feature weights more stable and accurate. The methods proposed in this study can also be used with other RFE techniques to define potential biomarkers from big biological data.
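The recursive elimination loop at the core of SVM-RFE can be sketched in a few lines. For self-containment, the sketch below ranks features with an ordinary least-squares linear model standing in for the linear SVM, so it illustrates the recursion only, not the authors' SVM-RFE-OA algorithm; all names are ours.

```python
import numpy as np

def rfe_ranking(X, y):
    """Recursive feature elimination sketch: repeatedly fit a linear
    model on the surviving features and remove the feature with the
    smallest absolute coefficient.  Returns the elimination order,
    least informative feature first (so the last entry is the feature
    the model relied on most)."""
    remaining = list(range(X.shape[1]))
    order = []
    while remaining:
        # Ordinary least squares stands in for the linear SVM here.
        w, *_ = np.linalg.lstsq(X[:, remaining], y, rcond=None)
        worst = int(np.argmin(np.abs(w)))
        order.append(remaining.pop(worst))
    return order

# Feature 0 drives y, so it should survive until the final round.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 3))
y = 3.0 * X[:, 0]
order = rfe_ranking(X, y)
```

The OA variants then use a separate criterion (classification accuracy combined with sample overlap) to decide where to cut this ranking.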

  8. A comparison of microscopic and spectroscopic identification methods for analysis of microplastics in environmental samples.

    PubMed

    Song, Young Kyoung; Hong, Sang Hee; Jang, Mi; Han, Gi Myung; Rani, Manviri; Lee, Jongmyoung; Shim, Won Joon

    2015-04-15

    The analysis of microplastics in various environmental samples requires the identification of microplastics among natural materials, and the identification techniques lack a standardized protocol. Herein, stereomicroscopy and Fourier transform infrared spectroscopy (FT-IR) identification methods for microplastics (<1 mm) were compared using the same samples from the sea surface microlayer (SML) and beach sand. Fragmented microplastics were significantly (p<0.05) underestimated and fibers were significantly overestimated with the stereomicroscope, in both the SML and beach samples. The total abundance by FT-IR was higher than by microscopy in both the SML and beach samples, but the difference was not significant (p>0.05). Depending on the number of samples and the microplastic size range of interest, the appropriate identification method should be determined; selecting a suitable identification method for microplastics is crucial for evaluating microplastic pollution. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Discrimination of whisky brands and counterfeit identification by UV-Vis spectroscopy and multivariate data analysis.

    PubMed

    Martins, Angélica Rocha; Talhavini, Márcio; Vieira, Maurício Leite; Zacca, Jorge Jardim; Braga, Jez Willian Batista

    2017-08-15

    The discrimination of whisky brands and counterfeit identification were performed by UV-Vis spectroscopy combined with partial least squares discriminant analysis (PLS-DA). In the proposed method, all spectra were obtained with no sample preparation. The discrimination models were built with seven whisky brands: Red Label, Black Label, White Horse, Chivas Regal (12 years), Ballantine's Finest, Old Parr and Natu Nobilis. The method was validated with an independent test set of authentic samples belonging to the seven selected brands and another eleven brands not included in the training samples. Furthermore, seventy-three counterfeit samples were also used to validate the method. Results showed correct classification rates for genuine and false samples of over 98.6% and 93.1%, respectively, indicating that the method can be helpful for the forensic analysis of whisky samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Appearance-based representative samples refining method for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Wen, Jiajun; Chen, Yan

    2012-07-01

    Sparse representation can deal with the small-sample problem because it utilizes all the training samples. However, its discrimination ability degrades when more training samples are used for representation. We propose a novel appearance-based palmprint recognition method that seeks a compromise between discrimination ability and the small-sample problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select the representative training samples according to their contributions. We then further refine the training samples by an iterative procedure, at each step excluding the training sample with the least contribution to the test sample. Experiments on the PolyU multispectral palmprint database and a two-dimensional and three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we also explore the principles governing the key parameters of the proposed algorithm, which helps achieve high recognition accuracy.
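    The exclude-the-least-contributor iteration described above can be sketched as follows. This simplified version scores each training sample by cosine similarity to the test sample rather than by its coefficient in the linear representation, purely to show the refinement loop; data and names are hypothetical:

```python
def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def refine_training_set(train, test, keep):
    """Iteratively drop the training sample contributing least to the
    representation of `test` until only `keep` samples remain."""
    selected = list(range(len(train)))
    while len(selected) > keep:
        worst = min(selected, key=lambda i: cosine(train[i], test))
        selected.remove(worst)
    return selected

# Toy palmprint feature vectors: sample 2 points away from the test datum.
train = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
test = [1.0, 0.05]
kept = refine_training_set(train, test, keep=2)
```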

  11. Use of space-filling curves to select sample locations in natural resource monitoring studies

    Treesearch

    Andrew Lister; Charles T. Scott

    2009-01-01

    The establishment of several large area monitoring networks over the past few decades has led to increased research into ways to spatially balance sample locations across the landscape. Many of these methods are well documented and have been used in the past with great success. In this paper, we present a method using geographic information systems (GIS) and fractals...

  12. Item Selection for the Development of Parallel Forms from an IRT-Based Seed Test Using a Sampling and Classification Approach

    ERIC Educational Resources Information Center

    Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan

    2012-01-01

    Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…

  13. Lecturers and Postgraduates Perception of Libraries as Promoters of Teaching, Learning, and Research at the University of Ibadan, Nigeria

    ERIC Educational Resources Information Center

    Oyewole, Olawale; Adetimirin, Airen

    2015-01-01

    Lecturers and postgraduates are among the users of university libraries, and their perception of the libraries influences utilization of the information resources, hence the need for this study. A survey method was adopted for the study, and a simple random sampling method was used to select a sample of 38 lecturers and 233 postgraduates.…

  14. 40 CFR Table 3 to Subpart Yyyy of... - Requirements for Performance Tests and Initial Compliance Demonstrations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the Administrator formaldehyde concentration must be corrected to 15 percent O2, dry basis. Results of... 100 percent load. b. select the sampling port location and the number of traverse points AND Method 1... concentration at the sampling port location AND Method 3A or 3B of 40 CFR part 60, appendix A measurements to...

  15. 40 CFR 63.9914 - What test methods and other procedures must I use to demonstrate initial compliance with chlorine...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... appendix A to 40 CFR part 60: (i) Method 1 to select sampling port locations and the number of traverse points. Sampling ports must be located at the outlet of the control device and prior to any releases to... = Concentration of chlorine or hydrochloric acid in the gas stream, milligrams per dry standard cubic meter (mg...

  16. 40 CFR 63.9914 - What test methods and other procedures must I use to demonstrate initial compliance with chlorine...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... appendix A to 40 CFR part 60: (i) Method 1 to select sampling port locations and the number of traverse points. Sampling ports must be located at the outlet of the control device and prior to any releases to... = Concentration of chlorine or hydrochloric acid in the gas stream, milligrams per dry standard cubic meter (mg...

  17. Selective detection of Co2+ by fluorescent nano probe: Diagnostic approach for analysis of environmental samples and biological activities

    NASA Astrophysics Data System (ADS)

    Mahajan, Prasad G.; Dige, Nilam C.; Desai, Netaji K.; Patil, Shivajirao R.; Kondalkar, Vijay V.; Hong, Seong-Karp; Lee, Ki Hwan

    2018-06-01

    Nowadays, scientists around the world are striving to develop improved fluorescence-based methods for detecting metal ions in aqueous media. A simple, selective and sensitive method is proposed for the detection of the Co2+ ion using fluorescent organic nanoparticles. We synthesized a small fluorescent molecule, 4,4′-{benzene-1,4-diylbis[(Z)methylylidenenitrilo]}dibenzoic acid (BMBA), to explore its suitability as a sensor for Co2+ and its biocompatibility in nanoparticle form. Fluorescent nanoparticles (BMBANPs) were prepared by a simple reprecipitation method. The aggregation-induced enhanced emission of BMBANPs revealed a narrow particle size of 68 nm and spherical morphology. Selective fluorescence quenching was observed upon addition of Co2+ and was not affected by the presence of other coexisting ions. The photophysical properties, viz. UV absorption, fluorescence emission, and lifetime measurements, support a ligand-metal interaction followed by static quenching of the BMBANP emission. Finally, we developed a simple analytical method for the selective and sensitive determination of Co2+ in environmental samples. Cell cultures of E. coli, Bacillus sps., and the M. tuberculosis H37RV strain in the vicinity of BMBANPs indicate good anti-bacterial and anti-tuberculosis activity, an additional novel application of the prepared nanoparticles.
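    Static quenching of the kind reported here is commonly quantified with the Stern-Volmer relation F0/F = 1 + K_SV[Q]. A minimal sketch of extracting K_SV from titration data, with entirely hypothetical numbers:

```python
def stern_volmer_ksv(concs, f0_over_f):
    """Least-squares slope of (F0/F - 1) versus quencher concentration
    [Q], fit through the origin as F0/F = 1 + Ksv*[Q] requires."""
    ys = [r - 1.0 for r in f0_over_f]
    num = sum(c * yv for c, yv in zip(concs, ys))
    den = sum(c * c for c in concs)
    return num / den

# Hypothetical Co2+ titration: ratios generated with Ksv = 2.0e4 M^-1.
concs = [1e-5, 2e-5, 3e-5, 4e-5]
ratios = [1 + 2.0e4 * c for c in concs]
ksv = stern_volmer_ksv(concs, ratios)
```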

  18. Selective enrichment and determination of monoamine neurotransmitters by CU(II) immobilized magnetic solid phase extraction coupled with high-performance liquid chromatography-fluorescence detection.

    PubMed

    He, Maofang; Wang, Chaozhan; Wei, Yinmao

    2016-01-15

    In this paper, iminodiacetic acid-Cu(II) functionalized Fe3O4@SiO2 magnetic nanoparticles were prepared and used as new adsorbents for magnetic solid phase extraction (MSPE) of six monoamine neurotransmitters (MNTs) from rabbit plasma. The selective enrichment of MNTs at pH 5.0 was driven by the specific coordination interaction between the amino groups of the MNTs and the immobilized Cu(II). The weakly acidic extraction condition avoided oxidation of the MNTs, which simplified operation and ensured higher recoveries. Under optimal conditions, the recoveries of the six MNTs from rabbit plasma were in the range of 83.9-109.4%, with RSDs of 2.0-10.0%. When the Cu(II)-immobilized MSPE was coupled with high-performance liquid chromatography-fluorescence detection, the method exhibited lower detection limits than previously reported methods, and it was successfully used to determine the endogenous MNTs in rabbit plasma. The proposed method has potential application for the determination of MNTs in biological samples. Moreover, the use of coordination interactions to improve selectivity might open another way to selectively enrich small alkaloids from complex samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Data-adaptive test statistics for microarray data.

    PubMed

    Mukherjee, Sach; Roberts, Stephen J; van der Laan, Mark J

    2005-09-01

    An important task in microarray data analysis is the selection of genes that are differentially expressed between different tissue samples, such as healthy and diseased. However, microarray data contain an enormous number of dimensions (genes) and very few samples (arrays), a mismatch which poses fundamental statistical problems for the selection process that have defied easy resolution. In this paper, we present a novel approach to the selection of differentially expressed genes in which test statistics are learned from data using a simple notion of reproducibility in selection results as the learning criterion. Reproducibility, as we define it, can be computed without any knowledge of the 'ground-truth', but takes advantage of certain properties of microarray data to provide an asymptotically valid guide to expected loss under the true data-generating distribution. We are therefore able to indirectly minimize expected loss, and obtain results substantially more robust than conventional methods. We apply our method to simulated and oligonucleotide array data. By request to the corresponding author.

  20. Hair-based rapid analyses for multiple drugs in forensics and doping: application of dynamic multiple reaction monitoring with LC-MS/MS.

    PubMed

    Shah, Iltaf; Petroczi, Andrea; Uvacsek, Martina; Ránky, Márta; Naughton, Declan P

    2014-01-01

    Considerable efforts are being expended to develop more effective methods to detect drugs in forensic science, for applications such as preventing doping in sport. The aim of this study was to develop a sensitive and accurate method for analytes of forensic and toxicological relevance in human hair at sub-pg levels. The hair test covers a range of different classes of drugs and metabolites, including selected anabolic steroids, cocaine, amphetamines, cannabinoids, opiates, bronchodilators, phencyclidine and ketamine. For extraction purposes, the hair samples were decontaminated using dichloromethane, ground, treated with 1 M sodium hydroxide, and neutralised with hydrochloric acid and phosphate buffer; the homogenate was then extracted with hexane using liquid-liquid extraction (LLE). Following extraction from the hair samples, drug screening employed liquid chromatography coupled to tandem mass spectrometric (LC-MS/MS) analysis using a dynamic multiple reaction monitoring (DYN-MRM) method with proprietary software. The screening method (for > 200 drugs/metabolites) was calibrated with a tailored drug mixture and was validated for the 20 drugs selected for this study. Using standard additions to hair sample extracts, validation was in line with FDA guidance. A Zorbax Eclipse Plus C18 (2.1 mm internal diameter × 100 mm length × 1.8 μm particle size) column was used for analysis. Total instrument run time was 8 minutes, with no noted matrix interferences. The LODs of the compounds ranged between 0.05 and 0.5 pg/mg of hair. A total of 233 human hair samples were screened using this new method, and samples were confirmed positive for 20 different drugs, mainly steroids and drugs of abuse. This is the first report of the application of this proprietary system to investigate the presence of drugs in human hair samples. The method is selective, sensitive and robust for the screening and confirmation of multiple drugs in a single analysis, and has potential as a very useful tool for the analysis of a large array of controlled substances and drugs of abuse.

  1. A GC-MS method for the detection and quantitation of ten major drugs of abuse in human hair samples.

    PubMed

    Orfanidis, A; Mastrogianni, O; Koukou, A; Psarros, G; Gika, H; Theodoridis, G; Raikos, N

    2017-03-15

    A sensitive analytical method has been developed to identify and quantify major drugs of abuse (DOA), namely morphine, codeine, 6-monoacetylmorphine, cocaine, ecgonine methyl ester, benzoylecgonine, amphetamine, methamphetamine, methylenedioxymethamphetamine and methylenedioxyamphetamine, in human hair. Hair samples were extracted with methanol under ultrasonication at 50°C after a three-step rinsing process to remove external contamination and dirt from the hair. Derivatization with BSTFA was selected to increase the detection sensitivity of the GC/MS analysis. Optimization of the derivatization parameters was based on experiments selecting the derivatization time, temperature and volume of derivatizing agent. Validation of the method included evaluation of linearity, which ranged from 2 to 350 ng/mg of hair for all DOA, as well as evaluation of sensitivity, accuracy, precision and repeatability. Limits of detection ranged from 0.05 to 0.46 ng/mg of hair. The developed method was applied to the analysis of hair samples obtained from three human subjects, which were found positive for cocaine and opiates. Published by Elsevier B.V.

  2. Selective and comprehensive analysis of organohalogen compounds by GC × GC-HRTofMS and MS/MS.

    PubMed

    Hashimoto, Shunji; Zushi, Yasuyuki; Takazawa, Yoshikatsu; Ieda, Teruyo; Fushimi, Akihiro; Tanabe, Kiyoshi; Shibata, Yasuyuki

    2018-03-01

    Thousands of organohalogen compounds, including hazardous chemicals such as polychlorinated biphenyls (PCBs) and other persistent organic pollutants (POPs), were selectively and simultaneously detected and identified, with simple or no purification, from environmental sample extracts by using several advanced methods. The methods used were software extraction from two-dimensional gas chromatography-high-resolution time-of-flight mass spectrometry (GC × GC-HRTofMS) data, measurement by negative chemical ionization with HRTofMS, and neutral loss scanning (NLS) with GC × GC-MS/MS. Global and selective detection of organochlorines and organobromines in environmental samples such as sediments and fly ash was achieved by NLS using GC × GC-MS/MS (QQQ), with the expected neutral losses of 35 (Cl) and 79 (Br). We confirmed that negative chemical ionization was effective for sensitive and selective ionization of organohalogens, even using GC × GC-HRTofMS. The 2D total ion chromatograms obtained by negative chemical ionization and by selective extraction of organohalogens using original software from data measured by electron impact ionization were very similar; the software thus functioned well to extract organohalogens. Combining measurements made by these different methods will help to detect organohalogens selectively and globally. However, to compare the data obtained by individual measurements, the retention times of the peaks on the 2D chromatograms need to match.

  3. Evaluation of two main RNA-seq approaches for gene quantification in clinical RNA sequencing: polyA+ selection versus rRNA depletion.

    PubMed

    Zhao, Shanrong; Zhang, Ying; Gamini, Ramya; Zhang, Baohong; von Schack, David

    2018-03-19

    To allow efficient transcript/gene detection, highly abundant ribosomal RNAs (rRNA) are generally removed from total RNA either by positive polyA+ selection or by rRNA depletion (negative selection) before sequencing. Comparisons between the two methods have been carried out by various groups, but the assessments have relied largely on non-clinical samples. In this study, we evaluated these two RNA sequencing approaches using human blood and colon tissue samples. Our analyses showed that rRNA depletion captured more unique transcriptome features, whereas polyA+ selection outperformed rRNA depletion with higher exonic coverage and better accuracy of gene quantification. For blood- and colon-derived RNAs, we found that 220% and 50% more reads, respectively, would have to be sequenced to achieve the same level of exonic coverage in the rRNA depletion method compared with the polyA+ selection method. Therefore, in most cases we strongly recommend polyA+ selection over rRNA depletion for gene quantification in clinical RNA sequencing. Our evaluation revealed that a small number of lncRNAs and small RNAs made up a large fraction of the reads in the rRNA depletion RNA sequencing data. Thus, we recommend that these RNAs are specifically depleted to improve the sequencing depth of the remaining RNAs.

  4. Computational selection of antibody-drug conjugate targets for breast cancer

    PubMed Central

    Fauteux, François; Hill, Jennifer J.; Jaramillo, Maria L.; Pan, Youlian; Phan, Sieu; Famili, Fazel; O'Connor-McCourt, Maureen

    2016-01-01

    The selection of therapeutic targets is a critical aspect of antibody-drug conjugate research and development. In this study, we applied computational methods to select candidate targets overexpressed in three major breast cancer subtypes as compared with a range of vital organs and tissues. Microarray data corresponding to over 8,000 tissue samples were collected from the public domain. Breast cancer samples were classified into molecular subtypes using an iterative ensemble approach combining six classification algorithms and three feature selection techniques, including a novel kernel density-based method. This feature selection method was used in conjunction with differential expression and subcellular localization information to assemble a primary list of targets. A total of 50 cell membrane targets were identified, including one target for which an antibody-drug conjugate is in clinical use, and six targets for which antibody-drug conjugates are in clinical trials for the treatment of breast cancer and other solid tumors. In addition, 50 extracellular proteins were identified as potential targets for non-internalizing strategies and alternative modalities. Candidate targets linked with the epithelial-to-mesenchymal transition were identified by analyzing differential gene expression in epithelial and mesenchymal tumor-derived cell lines. Overall, these results show that mining human gene expression data has the power to select and prioritize breast cancer antibody-drug conjugate targets, and the potential to lead to new and more effective cancer therapeutics. PMID:26700623

  5. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    USGS Publications Warehouse

    Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.

    2012-01-01

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. 
The benefits of increased precision should be weighed against those risks. The analysis framework presented here will be useful for other species exhibiting heterogeneity by detection method.

  6. Comparison of 7 culture methods for Salmonella serovar Enteritidis and Salmonella serovar Typhimurium isolation in poultry feces.

    PubMed

    Rodríguez, Francisco I; Procura, Francisco; Bueno, Dante J

    2018-06-26

    The present work compared 7 different culture methods and 3 selective-differential plating media for Salmonella ser. Enteritidis (SE) and S. ser. Typhimurium (ST) isolation using artificially contaminated poultry feces. The sensitivity (Se) and accuracy (AC) values increased when Modified Semisolid Rappaport Vassiliadis (MSRV) methods were used in place of the Tetrathionate (TT) or Tetrathionate Hajna broth (TTH) method in the enrichment step. However, there was no significant difference between pre-enrichment incubation at 4 to 6 h and 18 to 24 h for the MSRV5 and MSRV24 methods, respectively. All Salmonella strains were recovered at the lowest dilutions tested (2 to 10 cfu/25 g) with the MSRV24 method, and 3 out of 4 with the MSRV5 method. The TT and TTH methods showed a detection limit between 2.2 × 10(1) and 1.0 × 10(6) cfu/25 g of fecal sample. Agreement between the methods was variable; however, there was very good agreement between the MSRV5 and MSRV24 methods, and between tetrathionate direct (TTD, no pre-enrichment media used) and the buffered peptone water 18 to 24 h Tetrathionate broth combination (TT24 method) for Salmonella strains. The 3 selective-differential plating media showed agreement between fair and excellent, and exhibited high Se and AC with the MSRV methods for Salmonella strains. There was a significant difference between the center and periphery for the MSRV methods, and a fair agreement between them for all strains. The MSRV methods are better than the TT/TTH methods for the isolation of different strains of SE and ST in poultry fecal samples. The MSRV5 method can be used to reduce the time for detection of SE and ST in these samples. Furthermore, a loopful of the periphery of the growth should be streaked onto differential-selective plating media, even in the absence of a halo, to decrease the number of false negative results.
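    The sensitivity (Se), accuracy (AC), and agreement statistics used to compare culture methods like these can be computed as follows; the counts and calls below are hypothetical, not taken from the study:

```python
def sensitivity(tp, fn):
    """Se: fraction of truly positive samples the method detects."""
    return tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    """AC: fraction of all samples classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two methods' binary calls
    (one common way to grade agreement from 'fair' to 'excellent')."""
    n = len(a)
    po = sum(x == yv for x, yv in zip(a, b)) / n      # observed agreement
    pa = sum(a) / n
    pb = sum(b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)                # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical spiked-sample results and two methods' positive/negative calls.
se = sensitivity(tp=18, fn=2)
ac = accuracy(tp=18, tn=70, fp=5, fn=2)
kappa = cohens_kappa([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 0, 0])
```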

  7. Determination of formetanate hydrochloride in fruit samples using liquid chromatography-mass selective detection or -tandem mass spectrometry.

    PubMed

    Podhorniak, Lynda V; Kamel, Alaa; Rains, Diane M

    2010-05-26

    A rapid multiresidue method that captures residues of the insecticide formetanate hydrochloride (FHCl) in selected fruits is described. The method was used to provide residue data for dietary exposure determinations of FHCl. Using an acetonitrile extraction with a dispersive cleanup based on AOAC International method 2007.01, also known as QuEChERS, which was further modified and streamlined, thousands of samples were successfully analyzed for FHCl residues. FHCl levels were determined both by liquid chromatography-single-stage mass spectrometry (LC-MS) and ultraperformance liquid chromatography (UPLC)-tandem mass spectrometry (LC-MS/MS). The target limit of detection (LOD) and the limit of quantitation (LOQ) achieved for FHCl were 3.33 and 10 ng/g, respectively, with LC-MS and 0.1 and 0.3 ng/g, respectively, with LC-MS/MS. Recoveries at these previously unpublished levels ranged from 95 to 109%. A set of 20-40 samples can be prepared in one working day by two chemists.

  8. Estimates of population change in selected species of tropical birds using mark-recapture data

    USGS Publications Warehouse

    Brawn, J.; Nichols, J.D.; Hines, J.E.; Nesbitt, J.

    2000-01-01

    The population biology of tropical birds is known for only a small sample of species, especially in the Neotropics. Robust estimates of parameters such as survival rate and the finite rate of population change (λ) are crucial for conservation purposes and useful for studies of avian life histories. We used methods developed by Pradel (1996, Biometrics 52:703-709) to estimate λ for 10 species of tropical lowland forest birds using data from a long-term (> 20 yr) banding study in Panama. These species constitute an ecologically and phylogenetically diverse sample. We present these estimates and explore whether they are consistent with what we know from selected studies of banded birds and from 5 yr of estimating nesting success (an important component of λ). A major goal of these analyses is to assess whether the mark-recapture methods generate more reliable and reasonably precise estimates of population change than traditional methods that require greater sampling effort.

  9. A multiresidue method for the determination of selected endocrine disrupting chemicals in human breast milk based on a simple extraction procedure.

    PubMed

    Rodríguez-Gómez, R; Jiménez-Díaz, I; Zafra-Gómez, A; Ballesteros, O; Navalón, A

    2014-12-01

    In recent decades, in parallel with industrial development, a large number of new chemicals have emerged that are able to produce disorders in the human endocrine system. This group of substances, the so-called endocrine disrupting chemicals (EDCs), includes many families of compounds, such as parabens, benzophenone-UV filters and bisphenols. Given the demonstrated biological activity of these compounds, it is necessary to develop new analytical procedures to evaluate exposure, with the final objective of accurately establishing relationships between EDC concentrations and the harmful health effects observed in the population. In the present work, a method is proposed and validated that is based on a simplified sample treatment involving precipitation, evaporation and clean-up of the extracts with C18, followed by ultrahigh performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) analysis, for the determination in human breast milk of bisphenol A and its chlorinated derivatives (monochloro-, dichloro-, trichloro- and tetrachlorobisphenol A), parabens (methyl-, ethyl-, propyl- and butylparaben) and benzophenone-UV filters (benzophenone-1, -2, -3, -6, -8 and 4-hydroxybenzophenone). The limits of detection ranged from 0.02 to 0.05 ng mL(-1). The method was validated using matrix-matched standard calibration followed by a recovery assay with spiked samples. Recovery rates ranged from 91% to 110%, and the precision (evaluated as relative standard deviation) was lower than 15% for all compounds, within the acceptable limits of the selected bioanalytical method validation guide. The method was satisfactorily applied to the determination of these compounds in human breast milk samples collected from 10 randomly selected women. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Least squares polynomial chaos expansion: A review of sampling strategies

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Doostan, Alireza

    2018-04-01

    As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE) and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison of the empirical performance of the selected sampling methods on three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for the problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms the other sampling methods, especially when high-order ODEs are employed and/or the oversampling ratio is low.
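    One of the reviewed strategies, Latin hypercube sampling, can be sketched in a few lines: each dimension is split into n equal strata, exactly one point is drawn per stratum, and the strata are shuffled independently per dimension. This is the basic design only, without the optimality criteria the review discusses; the function name and seed are illustrative:

```python
import random

def latin_hypercube(n, d, seed=0):
    """n points in [0, 1)^d with exactly one point in each of the n
    equal-width strata along every dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        # one uniform draw inside each stratum, then shuffle the order
        strata = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)
        cols.append(strata)
    return [tuple(col[i] for col in cols) for i in range(n)]

pts = latin_hypercube(n=10, d=2)
```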

  11. IMMUNOCHEMICAL APPLICATIONS IN ENVIRONMENTAL SCIENCE

    EPA Science Inventory

    Immunochemical methods are based on selective antibodies combining with a particular target analyte or analyte group. The specific binding between antibody and analyte can be used to detect environmental contaminants in a variety of sample matrices. Immunoassay methods provide ...

  12. Determination of Three Organochlorine Pesticides in Aqueous Samples by Solid-Phase Extraction Based on Natural Nano Diatomite in Packed Syringe Coupled to Gas Chromatography-Mass Spectrometry.

    PubMed

    Taghani, Abdollah; Goudarzi, Nasser; Bagherian, Ghadamali; Chamjangali, Mansour Arab

    2017-01-01

    A rapid, simple, and sensitive technique is proposed based on a miniaturized solid-phase extraction method, named microextraction in a packed syringe, coupled with gas chromatography-mass spectrometry for the preconcentration and determination of three organochlorine pesticides: hexachlorobenzene, heptachlor and aldrin in aqueous samples. For the first time, natural nano diatomite is used as the sorbent. In this technique, 6.0 mg of the nano sorbent is inserted in a syringe between two polypropylene frits. The analytes are adsorbed on the solid phase and subsequently eluted using organic solvents. The influence of some important parameters, such as the solution pH, the type and volume of the organic desorption solvent, and the amount of sorbent, on the extraction efficiency of the selected pesticides is investigated. The proposed method shows good linearity in the range of 0.1-40.0 μg L(-1), with low limits of detection in the range of 0.02-0.13 μg L(-1) using the selected ion-monitoring mode. The reproducibility of the method was found to be in the range of 3.5-11.1% for the pesticides under study. To evaluate the matrix effect, the developed method is also applied to the preconcentration and determination of the selected pesticides in different water samples.

  13. Partially Identified Treatment Effects for Generalizability

    ERIC Educational Resources Information Center

    Chan, Wendy

    2017-01-01

    Recent methods to improve generalizations from nonrandom samples typically invoke assumptions such as the strong ignorability of sample selection, which is challenging to meet in practice. Although researchers acknowledge the difficulty in meeting this assumption, point estimates are still provided and used without considering alternative…

  14. Uniform deposition of size-selected clusters using Lissajous scanning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beniya, Atsushi; Watanabe, Yoshihide, E-mail: e0827@mosk.tytlabs.co.jp; Hirata, Hirohito

    2016-05-15

    Size-selected clusters can be deposited on a surface using size-selected cluster ion beams. However, because of the cross-sectional intensity distribution of the ion beam, it is difficult to define the coverage of the deposited clusters. The aggregation probability of the clusters depends on coverage, so the cluster size on the surface depends on position even though size-selected clusters are deposited. It is crucial, therefore, to deposit clusters uniformly on the surface. In this study, size-selected clusters were deposited uniformly on surfaces by scanning the cluster ions in the form of a Lissajous pattern. Two sets of deflector electrodes, set in orthogonal directions, were placed in front of the sample surface. Triangular waves were applied to the electrodes with an irrational frequency ratio to ensure that the ion trajectory filled the sample surface. The advantages of this method are the simplicity and low cost of the setup compared with the raster scanning method. The authors further investigated CO adsorption on size-selected Pt_n (n = 7, 15, 20) clusters uniformly deposited on the Al2O3/NiAl(110) surface and demonstrated the importance of uniform deposition.

  15. The use of integer programming to select bulls across breeding companies with volume price discounts.

    PubMed

    McConnel, M B; Galligan, D T

    2004-10-01

    Optimization programs are currently used to aid in the selection of bulls to be used in herd breeding programs. While these programs offer a systematic approach to the problem of semen selection, they ignore the impact of volume discounts. Volume discounts are discounts that vary depending on the number of straws purchased. The dynamic nature of volume discounts means that, in order to be adequately accounted for, they must be considered in the optimization routine. Failing to do this creates a missed economic opportunity because the potential benefits of optimally selecting and combining breeding company discount opportunities are not captured. To address these issues, an integer program was created which used binary decision variables to incorporate the effects of quantity discounts into the optimization program. A consistent set of trait criteria was used to select a group of bulls from 3 sample breeding companies. Three different selection programs were used to select the bulls, 2 traditional methods and the integer method. After the discounts were applied using each method, the integer program resulted in the lowest cost portfolio of bulls. A sensitivity analysis showed that the integer program also resulted in a low cost portfolio when the genetic trait goals were changed to be more or less stringent. In the sample application, a net benefit of the new approach over the traditional approaches was a 12.3 to 20.0% savings in semen cost.
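
    The core idea above, decision variables that switch discount tiers on and off inside the optimization, can be sketched with a toy brute-force search standing in for a real integer-programming solver. All company names, prices, discount tiers, and merit values below are invented for illustration; they are not from the paper.

```python
from itertools import product

# Toy data: three hypothetical companies, each selling one bull's semen.
# Each company gives a volume discount once the straw count reaches a
# threshold. Numbers are illustrative only.
price = {"A": 20.0, "B": 18.0, "C": 22.0}                       # base price per straw
discount = {"A": (30, 0.85), "B": (40, 0.90), "C": (20, 0.80)}  # (min qty, multiplier)
merit = {"A": 1.2, "B": 1.0, "C": 1.5}                          # net merit per straw

NEED = 60            # total straws required
MIN_AVG_MERIT = 1.2  # trait constraint on the portfolio

def cost(company, qty):
    """Per-company cost with the volume discount applied past the threshold."""
    thr, mult = discount[company]
    unit = price[company] * (mult if qty >= thr else 1.0)
    return unit * qty

best = None
# Enumerate straw allocations in steps of 10 (integer decision variables).
for qa, qb in product(range(0, NEED + 1, 10), repeat=2):
    qc = NEED - qa - qb
    if qc < 0:
        continue
    alloc = {"A": qa, "B": qb, "C": qc}
    avg_merit = sum(merit[c] * q for c, q in alloc.items()) / NEED
    if avg_merit < MIN_AVG_MERIT:
        continue
    total = sum(cost(c, q) for c, q in alloc.items())
    if best is None or total < best[0]:
        best = (total, alloc)

print(best)
```

    The nonlinearity the paper addresses is visible in `cost`: the unit price jumps when a quantity threshold is crossed, which is why the discounts must sit inside the optimization rather than be applied afterwards.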

  16. Selective Laser Melting of Metal Powder of Steel 316L

    NASA Astrophysics Data System (ADS)

    Smelov, V. G.; Sotov, A. V.; Agapovichev, A. V.; Tomilina, T. M.

    2016-08-01

    In this article, the results of an experimental study of the structure and mechanical properties of material produced by selective laser melting (SLM) of 316L steel metal powder are presented. Before growing the samples, as an input control, the morphology of the surface of the powder particles was studied and particle size analysis was carried out. In addition, 3D X-ray quality control of the grown samples was carried out in order to detect hidden defects and to assess them qualitatively and quantitatively. To determine the strength characteristics of the samples synthesized by the SLM method, static tensile tests were conducted. To determine the stresses in the material of the samples, X-ray diffraction analysis was carried out.

  17. Total sulfur determination in residues of crude oil distillation using FT-IR/ATR and variable selection methods

    NASA Astrophysics Data System (ADS)

    Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; Mello, Paola de Azevedo; Ferrão, Marco Flores; dos Santos, Maria de Fátima Pereira; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes

    2012-04-01

    Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. The calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated for the development of the models. Pre-treatment based on multiplicative scatter correction (MSC) and mean-centered data were selected for model construction. The use of siPLS as the variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by a PLS model using all variables. The best model was obtained using the siPLS algorithm with the spectra divided into 20 intervals and combinations of 3 intervals (911-824, 823-736 and 737-650 cm-1). This model produced an RMSECV of 400 mg kg-1 S and an RMSEP of 420 mg kg-1 S, with a correlation coefficient of 0.990.
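
    The interval-combination search at the heart of siPLS can be sketched as follows. This stand-in uses ordinary least squares in place of PLS to stay dependency-free, and the synthetic spectra, interval count, and fold scheme are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Synthetic stand-in for FT-IR spectra: 60 samples x 100 variables split into
# 10 intervals of 10 variables each. Only intervals 2 and 7 carry signal.
n, p, k = 60, 100, 10
X = rng.normal(size=(n, p))
y = X[:, 20:30].sum(axis=1) + 0.5 * X[:, 70:80].sum(axis=1) + 0.1 * rng.normal(size=n)

intervals = [np.arange(i * p // k, (i + 1) * p // k) for i in range(k)]

def rmse_cv(cols, folds=5):
    """Leave-fold-out RMSE of an ordinary least-squares model on given columns.
    (OLS stands in for PLS here to keep the sketch self-contained.)"""
    idx = np.arange(n)
    errs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        A = np.c_[np.ones(len(train)), X[np.ix_(train, cols)]]
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.c_[np.ones(len(test)), X[np.ix_(test, cols)]] @ coef
        errs.append(np.mean((pred - y[test]) ** 2))
    return np.sqrt(np.mean(errs))

# Exhaustively score all 2-interval combinations and keep the best one
# (the paper combined 3 of 20 intervals in the same spirit).
best = min(combinations(range(k), 2),
           key=lambda c: rmse_cv(np.concatenate([intervals[i] for i in c])))
print(best)
```

    The winning combination is the one whose cross-validated error is lowest, which is how siPLS arrives at spectral windows such as the 911-650 cm-1 region reported above.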

  18. A simple aloe vera plant-extracted microwave and conventional combustion synthesis: Morphological, optical, magnetic and catalytic properties of CoFe2O4 nanostructures

    NASA Astrophysics Data System (ADS)

    Manikandan, A.; Sridhar, R.; Arul Antony, S.; Ramakrishna, Seeram

    2014-11-01

    Nanocrystalline magnetic spinel CoFe2O4 was synthesized by a simple microwave combustion method (MCM) using ferric nitrate, cobalt nitrate and Aloe vera plant extract solution. For comparison, it was also prepared by a conventional combustion method (CCM). Powder X-ray diffraction, energy-dispersive X-ray and selected-area electron diffraction results indicate that the as-synthesized samples have a single-phase spinel structure with high crystallinity and no other phase impurities. The crystal structure and morphology of the powders, revealed by high-resolution scanning electron microscopy and transmission electron microscopy, show that the MCM CoFe2O4 samples contain sphere-like nanoparticles (SNPs), whereas the CCM samples consist of flake-like nanoplatelets (FNPs). The band gap of the samples was determined by UV-visible diffuse reflectance and photoluminescence spectroscopy. The magnetization (Ms) results showed a ferromagnetic behavior of the CoFe2O4 nanostructures. The Ms value of CoFe2O4-SNPs (77.62 emu/g) is higher than that of CoFe2O4-FNPs (25.46 emu/g). The higher Ms value suggests that the MCM technique is suitable for preparing high-quality nanostructures for magnetic applications. Both samples were successfully tested as catalysts for the conversion of benzyl alcohol. The resulting spinel ferrites were highly selective for the oxidation of benzyl alcohol and exhibited important differences in their activities. The CoFe2O4-SNPs catalyst showed the best performance, whereby 99.5% selectivity to benzaldehyde was achieved at close to 93.2% conversion.

  19. Determination of hazardous ingredients in personal care products using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Abrar, M.; Iqbal, T.; Fahad, M.; Andleeb, M.; Farooq, Z.; Afsheen, S.

    2018-05-01

    In the present work, the laser-induced breakdown spectroscopy technique is applied to explore the concentrations of toxic elements present in cosmetic materials. Chromium (Cr), magnesium (Mg), cadmium (Cd) and lead (Pb) are selected as major elements, and manganese (Mn), sodium (Na), potassium (K), sulfur (S), silicon (Si) and titanium (Ti) as minor elements in the cosmetic products. In this technique, a plasma plume is generated using an Nd:YAG laser of 532 nm wavelength, and spectral lines for the respective samples are observed. Four different samples of cosmetic products are selected, i.e. two lipstick and two eyeshadow samples. The observed spectral lines of all major and minor elements are used to calculate their concentrations in all samples through the intensity ratio method. Among the selected lipstick and eyeshadow samples, one of each is branded and one is collected from the local market. It is observed that chromium, magnesium and lead have strong spectral lines and consequently show high concentrations. The calculated concentrations are then compared with the permissible limits set by the Food and Drug Administration with regard to the cosmetics industry. The concentrations of these toxic elements in the selected local cosmetic samples exceed the safe permissible limits for human use and could lead to serious health problems.
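
    The abstract does not detail the intensity ratio calculation; as a loudly hypothetical illustration of the general idea, relative abundances estimated from normalized line intensities, consider the following sketch. The intensities are invented and a real analysis would also correct for line strengths and plasma conditions.

```python
import numpy as np

# Hypothetical LIBS line intensities (arbitrary units) for one sample.
# The simple normalization below is one common reading of an "intensity
# ratio" estimate; the authors' exact procedure is not given.
intensity = {"Cr": 5200.0, "Mg": 8100.0, "Cd": 950.0, "Pb": 4700.0}

# Express each element's line intensity as a fraction of the summed signal.
total = sum(intensity.values())
concentration_pct = {el: 100.0 * i / total for el, i in intensity.items()}

for el, c in sorted(concentration_pct.items(), key=lambda kv: -kv[1]):
    print(f"{el}: {c:.1f}%")
```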

  20. Multiresidue method for the determination of 77 pesticides in wine using QuEChERS sample preparation and gas chromatography with mass spectrometry.

    PubMed

    Jiang, Y; Li, X; Xu, J; Pan, C; Zhang, J; Niu, W

    2009-06-01

    A method based on QuEChERS (quick, easy, cheap, effective, rugged, safe) sample preparation and gas chromatography with mass spectrometric detection by selected ion monitoring (GC/MS-SIM) was developed for the simultaneous determination of 77 pesticide residues in wine. Extraction of 10 ml of sample with acetonitrile, followed by liquid-liquid partitioning induced by the addition of 4 g MgSO(4) and 3 g NaCl, was applied in the sample preparation. The clean-up was carried out by dispersive solid-phase extraction with 150 mg MgSO(4) as well as 50 mg primary secondary amine (PSA). One quantitation ion and at least two identification ions were selected in the analytical method for each pesticide compound by GC/MS. The recovery data were obtained by spiking blank samples at two concentration levels (0.05 and 0.2 mg l(-1)). The recoveries of all pesticides were in the range 70-110%, with intra-day precision of less than 15% and inter-day precision of less than 22% and 15% for the 0.05 and 0.2 mg l(-1) fortification levels, respectively. Linearity was between 0.02 and 2 mg l(-1), with determination coefficients (R(2)) greater than 0.98 for all compounds. The limits of quantification (LOQs) for the 77 pesticides ranged from 0.003 to 0.05 mg l(-1). The method was applied to routine analysis of market products.
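
    The recovery and precision figures quoted above are standard validation quantities; a minimal sketch of their computation on invented replicate data (not the paper's measurements):

```python
import numpy as np

# Hypothetical replicate determinations of one pesticide spiked into blank
# wine at 0.05 mg/L (values illustrative, not from the paper).
spike_level = 0.05
found = np.array([0.047, 0.051, 0.049, 0.053, 0.046])  # mg/L, intra-day

recovery = found.mean() / spike_level * 100    # mean recovery, %
rsd = found.std(ddof=1) / found.mean() * 100   # relative standard deviation, %

print(round(recovery, 1), round(rsd, 1))
```

    A compound passes criteria like those above when recovery falls in 70-110% and the RSD stays below the stated precision limit at each fortification level.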

  1. Development of Gold Standard Ion-Selective Electrode-Based Methods for Fluoride Analysis

    PubMed Central

    Martínez-Mier, E.A.; Cury, J.A.; Heilman, J.R.; Katz, B.P.; Levy, S.M.; Li, Y.; Maguire, A.; Margineda, J.; O’Mullane, D.; Phantumvanit, P.; Soto-Rojas, A.E.; Stookey, G.K.; Villa, A.; Wefel, J.S.; Whelton, H.; Whitford, G.M.; Zero, D.T.; Zhang, W.; Zohouri, V.

    2011-01-01

    Background/Aims: Currently available techniques for fluoride analysis are not standardized. Therefore, this study was designed to develop standardized methods for analyzing fluoride in biological and nonbiological samples used in dental research. Methods: A group of nine laboratories analyzed a set of standardized samples for fluoride concentration using their own methods. The group then reviewed existing analytical techniques for fluoride analysis, identified inconsistencies in the use of these techniques, and conducted testing to resolve the differences. Based on the results of the testing undertaken to define the best approaches for the analysis, the group developed recommendations for direct and microdiffusion methods using the fluoride ion-selective electrode. Results: Initial results demonstrated that there was no consensus regarding the choice of analytical techniques for different types of samples. Although for several types of samples the results of the fluoride analyses were similar among some laboratories, greater differences were observed for saliva, food and beverage samples. In spite of these initial differences, precise and true values of fluoride concentration, as well as smaller differences between laboratories, were obtained once the standardized methodologies were used. Intraclass correlation coefficients ranged from 0.90 to 0.93 for the analysis of a certified reference material using the standardized methodologies. Conclusion: The results of this study demonstrate that the development and use of standardized protocols for fluoride analysis significantly decreased differences among laboratories and resulted in more precise and true values. PMID:21160184
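
    The intraclass correlation coefficients reported for inter-laboratory agreement come from a two-way ANOVA decomposition; here is a sketch with invented fluoride readings. The ICC(2,1) absolute-agreement form is assumed, since the abstract does not state which variant was used.

```python
import numpy as np

# Hypothetical fluoride readings (ppm): rows = samples, cols = laboratories.
# Values are illustrative; the study itself reported ICCs of 0.90-0.93.
x = np.array([
    [0.98, 1.02, 1.00],
    [0.51, 0.49, 0.53],
    [2.01, 1.95, 2.05],
    [1.49, 1.52, 1.47],
    [0.25, 0.27, 0.24],
])
n, k = x.shape

# Two-way ANOVA mean squares for ICC(2,1), absolute agreement.
grand = x.mean()
ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # between samples
ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # between labs
resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))

icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
print(round(icc, 3))
```

    When laboratories agree closely, as the standardized protocols achieved, the error and laboratory mean squares are small relative to the between-sample mean square and the ICC approaches 1.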

  2. Development, validation, and application of a method for selected avermectin determination in rural waters using high performance liquid chromatography and fluorescence detection.

    PubMed

    Lemos, Maria Augusta Travassos; Matos, Camila Alves; de Resende, Michele Fabri; Prado, Rachel Bardy; Donagemma, Raquel Andrade; Netto, Annibal Duarte Pereira

    2016-11-01

    Avermectins (AVM) are macrocyclic lactones used in livestock and agriculture. A quantitative method of high performance liquid chromatography with fluorescence detection for the determination of eprinomectin, abamectin, doramectin and ivermectin in rural water samples was developed and validated. The method was employed to study samples collected in the Pito Aceso River microbasin, located in the Bom Jardim municipality, Rio de Janeiro State, Brazil. Samples were extracted by solid phase extraction using a polymeric stationary phase; the eluted fraction was re-concentrated under a gentle N2 flow and derivatized to allow AVM determination using liquid chromatography with fluorescence detection. The excitation and emission wavelengths of the derivatives were 365 and 470 nm, respectively, and a total chromatographic run of 12 min was achieved. Very low limits of quantification (22-58 ng L(-1)) were found after re-concentration using N2. Recovery values varied from 85.7% to 119.2% with standard deviations between 1.2% and 10.2%. The validated method was applied in the determination of AVM in 15 water samples collected in the Pito Aceso River microbasin, but most of them were free of AVM or showed only trace levels of these compounds, except for a sample that contained doramectin (9.11 µg L(-1)). The method is suitable for routine analysis with satisfactory recovery, sensitivity, and selectivity. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. A guide to the use of distance sampling to estimate abundance of Karner blue butterflies

    USGS Publications Warehouse

    Grundel, Ralph

    2015-01-01

    This guide is intended to describe the use of distance sampling as a method for evaluating the abundance of Karner blue butterflies at a location. Other methods for evaluating abundance exist, including mark-release-recapture and index counts derived from Pollard-Yates surveys, for example. Although this guide is not intended to be a detailed comparison of the pros and cons of each type of method, there are important preliminary considerations to think about before selecting any method for evaluating the abundance of Karner blue butterflies.

  4. Optimization of a sensitive method for the determination of nitro musk fragrances in waters by solid-phase microextraction and gas chromatography with micro electron capture detection using factorial experimental design.

    PubMed

    Polo, Maria; Garcia-Jares, Carmen; Llompart, Maria; Cela, Rafael

    2007-08-01

    A solid-phase microextraction (SPME) method followed by gas chromatography with micro electron capture detection for determining trace levels of nitro musk fragrances in residual waters was optimized. Four nitro musks, musk xylene, musk moskene, musk tibetene and musk ketone, were selected for the optimization of the method. Factors affecting the extraction process were studied using a multivariate approach. Two extraction modes (direct SPME and headspace SPME) were tried at different extraction temperatures using two fiber coatings [Carboxen-polydimethylsiloxane (CAR/PDMS) and polydimethylsiloxane-divinylbenzene (PDMS/DVB)] selected among five commercial fibers tested. Sample agitation and the salting-out effect were also studied. The main effects and interactions between the factors were studied for all the target compounds. An extraction temperature of 100 degrees C and sampling the headspace over the sample, using either CAR/PDMS or PDMS/DVB as the fiber coating, were found to be the experimental conditions leading to the most effective extraction. High sensitivity, with detection limits in the low nanogram per liter range, and good linearity and repeatability were achieved for all nitro musks. Since the proposed method performed well for real samples, it was applied to different water samples, including wastewater and sewage, in which some of the target compounds (musk xylene and musk ketone) were detected and quantified.

  5. A robust method of thin plate spline and its application to DEM construction

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan

    2012-11-01

    In order to avoid the ill-conditioning problem of thin plate spline (TPS) interpolation, the orthogonal least squares (OLS) method was introduced, and a modified OLS (MOLS) was developed. The MOLS version of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also easily compute the weights of the knots by back-substitution. For interpolating large numbers of sampling points, we developed a local TPS-M, in which some neighboring sampling points around the point being estimated are selected for computation. Numerical tests indicate that, irrespective of the sampling noise level, the average performance of TPS-M compares favorably with that of smoothing TPS. For the same simulation accuracy, the computational time of TPS-M relative to smoothing TPS decreases as the number of sampling points increases. The smooth fitting results on lidar-derived noisy data indicate that TPS-M has an obvious smoothing effect, on par with smoothing TPS. The example of constructing a series of large-scale DEMs, located in Shandong province, China, was employed to comparatively analyze the estimation accuracies of the two versions of TPS and the classical interpolation methods, including inverse distance weighting (IDW), ordinary kriging (OK) and universal kriging with a second-order drift function (UK). Results show that, regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except against smoothing TPS at the finest sampling interval of 20 m and the two versions of kriging at the spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is a robust method for DEM construction.
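
    A plain, full-knot thin plate spline, the starting point that TPS-M's knot selection is designed to thin out, can be sketched as follows. The smoothing parameter and test surface are assumptions for illustration, and no OLS knot selection is attempted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scattered "elevation" samples of a smooth surface with noise.
pts = rng.uniform(0, 1, size=(80, 2))
z = np.sin(2 * np.pi * pts[:, 0]) * np.cos(2 * np.pi * pts[:, 1]) \
    + 0.05 * rng.normal(size=80)

def tps_kernel(r2):
    # U(r) = r^2 log r^2 (proportional to the classic r^2 log r), with U(0) = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        u = r2 * np.log(r2)
    return np.nan_to_num(u)

def tps_fit(pts, z, smooth=1e-6):
    """Solve the TPS linear system; `smooth` regularizes the kernel matrix,
    which is exactly where ill-conditioning appears for dense data."""
    n = len(pts)
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    K = tps_kernel(d2) + smooth * np.eye(n)
    P = np.c_[np.ones(n), pts]                    # affine part
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.r_[z, np.zeros(3)]
    coef = np.linalg.solve(A, b)
    return coef[:n], coef[n:]

def tps_eval(pts, w, a, q):
    d2 = np.sum((q[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    return tps_kernel(d2) @ w + np.c_[np.ones(len(q)), q] @ a

w, a = tps_fit(pts, z)
pred = tps_eval(pts, w, a, pts)
rmse = np.sqrt(np.mean((pred - z) ** 2))
print(round(rmse, 4))
```

    The dense (n + 3) x (n + 3) solve above is what becomes ill-conditioned and expensive as n grows; TPS-M replaces the full knot set with a small selected subset, and back-substitution in the OLS triangular system yields the weights cheaply.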

  6. Classification of Parkinson's disease utilizing multi-edit nearest-neighbor and ensemble learning algorithms with speech samples.

    PubMed

    Zhang, He-Hua; Yang, Liuyang; Liu, Yuchuan; Wang, Pin; Yin, Jun; Li, Yongming; Qiu, Mingguo; Zhu, Xueru; Yan, Fang

    2016-11-16

    The use of speech-based data in the classification of Parkinson's disease (PD) has been shown in recent years to provide an effective, non-invasive mode of classification. Thus, there has been increased interest in speech pattern analysis methods applicable to parkinsonism for building predictive tele-diagnosis and tele-monitoring models. One of the obstacles in optimizing classification is reducing noise within the collected speech samples, thus ensuring better classification accuracy and stability. While the currently used methods are effective, the ability to invoke instance selection has seldom been examined. In this study, a PD classification algorithm was proposed and examined that combines a multi-edit nearest-neighbor (MENN) algorithm and an ensemble learning algorithm. First, the MENN algorithm is applied to select optimal training speech samples iteratively, thereby obtaining samples with high separability. Next, an ensemble learning algorithm, random forest (RF) or decorrelated neural network ensembles (DNNE), is trained on the selected training samples. Lastly, the trained ensemble learning models are applied to the test samples for PD classification. The proposed method was examined using recently deposited public datasets and compared against other currently used algorithms for validation. Experimental results showed that the proposed algorithm obtained the largest improvement in classification accuracy (29.44%) compared with the other algorithms examined. Furthermore, the MENN algorithm alone was found to improve classification accuracy by as much as 45.72%. Moreover, the proposed algorithm was found to exhibit higher stability, particularly when combining the MENN and RF algorithms. This study showed that the proposed method can improve PD classification when using speech data and can be applied in future studies seeking to improve PD classification methods.
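
    The multi-edit step can be sketched as follows: repeatedly partition the training set, classify each block with a nearest-neighbor rule trained on another block, and discard misclassified samples until an iteration removes nothing. The Gaussian stand-in data, fold count, and stopping rule are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two noisy Gaussian classes standing in for speech-feature vectors.
n = 200
X = np.r_[rng.normal(0, 1, (n, 2)), rng.normal(2.5, 1, (n, 2))]
y = np.r_[np.zeros(n), np.ones(n)]

def knn_predict(Xtr, ytr, Xte, k=3):
    """Brute-force k-nearest-neighbor majority vote."""
    d = np.sum((Xte[:, None, :] - Xtr[None, :, :]) ** 2, axis=-1)
    nn = np.argsort(d, axis=1)[:, :k]
    return (ytr[nn].mean(axis=1) > 0.5).astype(float)

def multi_edit(X, y, folds=3, k=1, max_iter=20):
    """Split into folds, classify fold i with fold (i+1) % folds, and delete
    misclassified samples; repeat until an iteration removes nothing."""
    keep = np.arange(len(y))
    for _ in range(max_iter):
        rng.shuffle(keep)
        parts = np.array_split(keep, folds)
        bad = []
        for i in range(folds):
            tr = parts[(i + 1) % folds]
            te = parts[i]
            pred = knn_predict(X[tr], y[tr], X[te], k)
            bad.extend(te[pred != y[te]])
        if not bad:
            break
        keep = np.setdiff1d(keep, np.array(bad))
    return np.sort(keep)

keep = multi_edit(X, y)
# The edited set should be smaller yet classify the original data well.
pred = knn_predict(X[keep], y[keep], X, k=1)
acc = np.mean(pred == y)
print(len(keep), round(acc, 3))
```

    The surviving samples lie away from the class boundary, which is the "high separability" property the paper exploits before handing the data to the ensemble learner.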

  7. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model.

    PubMed

    Plotnikov, Nikolay V

    2014-08-12

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
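
    The two estimators named above, Zwanzig free-energy perturbation and the linear response approximation, can be illustrated on a toy one-dimensional pair of "coarse" and "fine" potentials where the exact answer is known (0.25 kT by construction). The harmonic forms and parameters are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
kT = 1.0

# Toy 1-D potentials: two harmonic wells with the same force constant but
# shifted minima. With equal force constants the configurational parts cancel,
# so the exact free-energy difference equals the constant offset, 0.25.
def E0(x):  # coarse reference potential
    return 0.5 * (x - 0.0) ** 2

def E1(x):  # fine target potential
    return 0.5 * (x - 0.3) ** 2 + 0.25

# Sample the reference ensemble exactly (its Boltzmann distribution is a
# standard normal here, so no MD/MC run is needed for the sketch).
x0 = rng.normal(0.0, np.sqrt(kT), 200_000)
dE = E1(x0) - E0(x0)

# Zwanzig free-energy perturbation: dF = -kT ln < exp(-dE/kT) >_0
dF_fep = -kT * np.log(np.mean(np.exp(-dE / kT)))

# Linear response approximation: dF ~ 0.5 * (<dE>_0 + <dE>_1)
x1 = rng.normal(0.3, np.sqrt(kT), 200_000)
dE1 = E1(x1) - E0(x1)
dF_lra = 0.5 * (dE.mean() + dE1.mean())

print(round(dF_fep, 3), round(dF_lra, 3))  # both should be near 0.25
```

    For this harmonic pair the LRA is exact, which mirrors the paper's use of averaged multistep LRA perturbations to position fine-physics free-energy segments relative to the coarse reference.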

  9. Biomonitoring of 21 endocrine disrupting chemicals in human hair samples using ultra-high performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Rodríguez-Gómez, R; Martín, J; Zafra-Gómez, A; Alonso, E; Vílchez, J L; Navalón, A

    2017-02-01

    Rapid industrial growth has increased human exposure to a large variety of chemicals with adverse health effects. These industrial chemicals are usually present in the environment, foods, beverages, clothes and personal care products. Among these compounds, endocrine disrupting chemicals (EDCs) have raised concern over recent years. In the present work, the determination of 21 EDCs in human hair samples is proposed. An analytical method based on digestion of the samples with a mixture of acetic acid/methanol (20:80, v/v), followed by solid-liquid microextraction and analysis by ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS), was developed and validated. The most influential parameters affecting the extraction method were optimized. The method was validated using matrix-matched calibration and recovery assays. Limits of detection ranged from 0.2 to 4 ng g(-1), limits of quantification from 0.5 to 12 ng g(-1), and inter- and intra-day variability was under 15% in all cases. Recovery rates for spiked samples ranged from 92.1 to 113.8%. The method was applied for the determination of the selected compounds in human hair. Samples were collected weekly from six randomly selected volunteers (three men and three women) over a three-month period. All the analyzed samples tested positive for at least one of the analyzed compounds. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Elemental Analysis of Beryllium Samples Using a Microzond-EGP-10 Unit

    NASA Astrophysics Data System (ADS)

    Buzoverya, M. E.; Karpov, I. A.; Gorodnov, A. A.; Shishpor, I. V.; Kireycheva, V. I.

    2017-12-01

    Results concerning the structural and elemental analysis of beryllium samples obtained via different technologies, performed on a Microzond-EGP-10 unit with the PIXE and RBS methods, are presented. The overall chemical composition and the nature of inclusions were determined. The mapping method made it possible to reveal the structural features of the beryllium samples: to distinguish grains of the main substance differing in size and chemical composition, to visualize the interfaces between regions of different composition, and to describe the distribution of impurities in the samples.

  11. Facile preparation of magnetic molecularly imprinted polymers for the selective extraction and determination of dexamethasone in skincare cosmetics using HPLC.

    PubMed

    Du, Wei; Zhang, Bilin; Guo, Pengqi; Chen, Guoning; Chang, Chun; Fu, Qiang

    2018-03-15

    Dexamethasone-imprinted polymers were fabricated by reversible addition-fragmentation chain transfer polymerization on the surface of magnetic nanoparticles under mild polymerization conditions, and exhibited narrow polydispersity and high selectivity for dexamethasone extraction. The dexamethasone-imprinted polymers were characterized by scanning electron microscopy, transmission electron microscopy, Fourier transform infrared spectroscopy, X-ray diffraction, energy dispersive spectrometry, and vibrating sample magnetometry. The adsorption performance was evaluated by static adsorption, kinetic adsorption and selectivity tests. The results confirmed the successful construction of an imprinted polymer layer on the surface of the magnetic nanoparticles, which provides high adsorption capacity, fast mass transfer, specific molecular recognition, and simple magnetic separation. Combined with high-performance liquid chromatography, the molecularly imprinted polymers were used as magnetic extraction sorbents for the rapid and selective extraction and determination of dexamethasone in skincare cosmetic samples, with accuracies for the spiked samples ranging from 93.8 to 97.6%. The relative standard deviations were less than 2.7%. The limit of detection and limit of quantification were 0.05 and 0.20 μg/mL, respectively. The developed method was simple, fast and highly selective and could be a promising method for dexamethasone monitoring in cosmetic products. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Miniaturized matrix solid-phase dispersion followed by liquid chromatography-tandem mass spectrometry for the quantification of synthetic dyes in cosmetics and foodstuffs used or consumed by children.

    PubMed

    Guerra, Eugenia; Llompart, Maria; Garcia-Jares, Carmen

    2017-12-22

    Miniaturized matrix solid-phase dispersion (MSPD) followed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) is proposed for the simultaneous analysis of different classes of synthetic dyes in confectionery and cosmetics intended for, or mostly consumed by, children. The selected compounds include most of the dyes permitted as food additives, as well as some of those most frequently used to color cosmetic products, in accordance with the respective European directives. The MSPD procedure was optimized by means of experimental design, allowing an effective, rapid and simple extraction of dyes with low sample and reagent consumption (0.1 g of sample and 2 mL of elution solvent). LC-MS/MS was optimized for good resolution, selectivity and sensitivity using a low ionic strength mobile phase (3 mM NH4Ac-methanol). Method performance was demonstrated in real samples, showing good linearity (R ≥ 0.9928) and intra- and inter-day precision (%RSD ≤ 15%). Method LODs were ≤0.952 μg g(-1) and ≤0.476 μg g(-1) for confectionery and cosmetic samples, respectively. Recoveries of the compounds from nine different matrices were quantitative. The validated method was successfully applied to 24 commercial samples (14 cosmetics and 10 foods), in which 9 of the selected dyes were found at concentrations up to 989 μg g(-1), exceeding in some cases the regulated maximum permitted limits. A non-permitted dye, Acid Orange 7, was found in one candy. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. AEGIS: Demographics of X-ray and Optically Selected Active Galactic Nuclei

    NASA Astrophysics Data System (ADS)

    Yan, Renbin; Ho, Luis C.; Newman, Jeffrey A.; Coil, Alison L.; Willmer, Christopher N. A.; Laird, Elise S.; Georgakakis, Antonis; Aird, James; Barmby, Pauline; Bundy, Kevin; Cooper, Michael C.; Davis, Marc; Faber, S. M.; Fang, Taotao; Griffith, Roger L.; Koekemoer, Anton M.; Koo, David C.; Nandra, Kirpal; Park, Shinae Q.; Sarajedini, Vicki L.; Weiner, Benjamin J.; Willner, S. P.

    2011-02-01

    We develop a new diagnostic method to classify galaxies into active galactic nucleus (AGN) hosts, star-forming galaxies, and absorption-dominated galaxies by combining the [O III]/Hβ ratio with rest-frame U - B color. This can be used to robustly select AGNs in galaxy samples at intermediate redshifts (z < 1). We compare the result of this optical AGN selection with X-ray selection using a sample of 3150 galaxies with 0.3 < z < 0.8 and I(AB) < 22, selected from the DEEP2 Galaxy Redshift Survey and the All-wavelength Extended Groth Strip International Survey. Among the 146 X-ray sources in this sample, 58% are classified optically as emission-line AGNs, the rest as star-forming galaxies or absorption-dominated galaxies. The latter are also known as "X-ray bright, optically normal galaxies" (XBONGs). Analysis of the relationship between optical emission lines and X-ray properties shows that the completeness of optical AGN selection suffers from dependence on the star formation rate and the quality of the observed spectra. It also shows that XBONGs do not appear to be a physically distinct population from other X-ray detected, emission-line AGNs. On the other hand, X-ray AGN selection also has a strong bias. About 2/3 of all emission-line AGNs at L(bol) > 10^44 erg s^-1 in our sample are not detected in our 200 ks Chandra images, most likely due to moderate or heavy absorption by gas near the AGN. The 2-7 keV detection rate of Seyfert 2s at z ~ 0.6 suggests that their column density distribution and Compton-thick fraction are similar to those of local Seyferts. Multiple sample selection techniques are needed to obtain as complete a sample as possible.
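    A diagnostic of this kind amounts to a dividing line in the (U - B, log [O III]/Hβ) plane. The sketch below is a toy version only: the slope and intercept of the line are placeholder values, not the calibration published in the paper.

```python
import math

def classify_galaxy(oiii_hbeta, u_minus_b, has_emission=True,
                    slope=-1.2, intercept=0.4):
    """Toy color-excitation diagnostic: label a galaxy an AGN host if it
    lies above a dividing line in the (U - B, log [O III]/Hbeta) plane,
    star-forming if it lies below, and absorption-dominated if it shows
    no emission lines. Slope and intercept are illustrative placeholders,
    not the values calibrated in the paper."""
    if not has_emission:
        return "absorption-dominated"
    if math.log10(oiii_hbeta) > slope * u_minus_b + intercept:
        return "AGN"
    return "star-forming"
```

    A red galaxy (large U - B) needs a lower [O III]/Hβ ratio to be flagged as an AGN, which is the qualitative behavior such color-excitation diagrams are designed to capture.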

  14. Natural image classification driven by human brain activity

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Peng, Hanyang; Wang, Jinqiao; Tang, Ming; Xue, Rong; Zuo, Zhentao

    2016-03-01

    Natural image classification has been a hot topic in the computer vision and pattern recognition research field. Since the performance of an image classification system can be improved by feature selection, many image feature selection methods have been developed. However, existing supervised feature selection methods are typically driven by class label information that is identical for different samples from the same class, ignoring within-class image variability and therefore degrading feature selection performance. In this study, we propose a novel feature selection method driven by human brain activity signals, collected using fMRI while human subjects viewed natural images of different categories. The fMRI signals associated with subjects viewing different images encode the human perception of natural images, and may therefore capture image variability both within and across categories. We then select image features with the guidance of fMRI signals from brain regions that respond actively to image viewing. In particular, bag-of-words features based on the GIST descriptor are extracted from natural images for classification, and a sparse-regression-based feature selection method is adapted to select the image features that best predict the fMRI signals. Finally, a classification model is built on the selected image features to classify images without fMRI signals. Validation experiments classifying images from 4 categories for two subjects demonstrated that our method achieves much better classification performance than classifiers built on image features selected by traditional feature selection methods.
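    A sparse-regression selection step of the kind described can be sketched as an L1-penalised least-squares fit whose non-zero coefficients pick the retained features. The solver below uses plain iterative soft-thresholding (ISTA) in NumPy as a generic stand-in; the paper's actual solver and penalty weight are not specified here.

```python
import numpy as np

def lasso_select(X, y, lam=0.1, n_iter=1000):
    """Select features (columns of X) that best predict the target y
    via L1-penalised regression, solved with iterative soft-thresholding
    (ISTA). Returns the indices of features with non-zero coefficients.
    A generic stand-in for the sparse regression selection step."""
    n, p = X.shape
    w = np.zeros(p)
    # step size from the Lipschitz constant of the least-squares gradient
    L = np.linalg.norm(X, 2) ** 2
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        z = w - grad / L
        # soft-thresholding enforces exact zeros, i.e. feature selection
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return np.flatnonzero(w)
```

    In the paper's setting, `y` would be an fMRI response from an active brain region and the columns of `X` the bag-of-words image features; features with zero coefficients are discarded before training the final classifier.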

  15. Method and system for near-field spectroscopy using targeted deposition of nanoparticles

    NASA Technical Reports Server (NTRS)

    Anderson, Mark S. (Inventor)

    2012-01-01

    There is provided in one embodiment of the invention a method for analyzing a sample material using surface enhanced spectroscopy. The method comprises the steps of imaging the sample material with an atomic force microscope (AFM) to select an area of interest for analysis, depositing nanoparticles onto the area of interest with an AFM tip, illuminating the deposited nanoparticles with a spectrometer excitation beam, and disengaging the AFM tip and acquiring a localized surface enhanced spectrum. The method may further comprise the step of using the AFM tip to modulate the spectrometer excitation beam above the deposited nanoparticles to obtain improved sensitivity data and higher spatial resolution data from the sample material. The invention further comprises in one embodiment a system for analyzing a sample material using surface enhanced spectroscopy.

  16. A new strategy for surface modification of polysulfone membrane by in situ imprinted sol-gel method for the selective separation and screening of L-Tyrosine as a lung cancer biomarker.

    PubMed

    Moein, Mohammad Mahdi; Javanbakht, Mehran; Karimi, Mohammad; Akbari-adergani, Behrouz; Abdel-Rehim, Mohamed

    2015-03-21

    In this work, a novel method based on an in situ molecularly imprinted sol-gel for the surface modification of a polysulfone membrane (PSM) was developed. A modified molecularly imprinted sol-gel polysulfone membrane (MSM) was placed in a homemade plastic tube and coupled on-line with LC/MS/MS for the selective extraction and screening of L-Tyrosine (Tyr) as a tentative lung cancer biomarker in human plasma samples. The existence of molecularly imprinted sol-gel layers on both sides of the PSM was examined using scanning electron microscopy (SEM). To evaluate the role of the precursor in the extraction performance, repeatability, and selectivity of the developed method, three precursors, 3-(propylmethacrylate) trimethoxysilane (P1), 3-(triethoxysilyl)-propylamine (P2), and tetraethyl orthosilicate (P3), were used individually and together for treatment of the PSM. Our investigation showed that a single-precursor route is more repeatable, straightforward, precise, accurate, and selective for the extraction of Tyr from plasma samples. Moreover, to achieve the best conditions and extraction efficiency, the effects of influential parameters, including the conditioning, washing, and elution solvents, sample flow rate, loading time, desorption time, loading sample volume, salt effect, pH, and adsorption capacity, were thoroughly investigated for the most efficiently prepared membranes. A non-molecularly imprinted sol-gel polysulfone membrane (NSM) was prepared as a blank via the same process but in the absence of Tyr. The LOD (S/N = 3/1) was 0.1 nmol L(-1) and the LOQ (S/N = 10/1) was 0.34 nmol L(-1) for Tyr in the plasma samples. The linearity for Tyr was in the range of 0.34-2000 nmol L(-1) in the plasma samples. The coefficients of determination were ≥0.998 for all runs. The extraction recovery was between 80% and 85% for Tyr in the plasma samples. In addition, the MSM could be used for up to 50 extractions without a significant change in recovery.

  17. Evaluation of DNA extraction methods for the analysis of microbial community in biological activated carbon.

    PubMed

    Zheng, Lu; Gao, Naiyun; Deng, Yang

    2012-01-01

    It is difficult to isolate DNA from biological activated carbon (BAC) samples used in water treatment plants, owing to the scarcity of microorganisms in BAC samples. The aim of this study was to identify DNA extraction methods suitable for a long-term, comprehensive ecological analysis of BAC microbial communities. To identify a procedure that produces high molecular weight DNA, maximizes detectable diversity and is relatively free from contaminants, the microwave extraction method, the cetyltrimethylammonium bromide (CTAB) extraction method, a commercial DNA extraction kit, and the ultrasonic extraction method were used for the extraction of DNA from BAC samples. Spectrophotometry, agarose gel electrophoresis and polymerase chain reaction (PCR)-restriction fragment length polymorphism (RFLP) analysis were conducted to compare the yield and quality of DNA obtained using these methods. The results showed that the CTAB method produced the highest yield and genetic diversity of DNA from BAC samples, but DNA purity was slightly lower than that obtained with the DNA extraction-kit method. This study provides a theoretical basis for establishing and selecting DNA extraction methods for BAC samples.

  18. Exhaustive Search for Sparse Variable Selection in Linear Regression

    NASA Astrophysics Data System (ADS)

    Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato

    2018-04-01

    We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search (AES-K) method for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be reconstructed effectively by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data. Using virtual measurement and analysis, we argue that this is caused by data shortage.
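    The core of an exhaustive K-sparse search can be written directly: enumerate every K-subset of explanatory variables, fit each by least squares, and keep the subset with the lowest residual sum of squares. This sketch omits the density-of-states bookkeeping and the replica-exchange machinery of the AES-K variant.

```python
import itertools
import numpy as np

def exhaustive_k_sparse(X, y, k):
    """Test every k-subset of the columns of X by ordinary least squares
    and return the subset (as a tuple of column indices) with the lowest
    residual sum of squares, together with that RSS."""
    n, p = X.shape
    best_subset, best_rss = None, np.inf
    for subset in itertools.combinations(range(p), k):
        Xs = X[:, subset]
        coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        r = y - Xs @ coef                 # residual of this candidate fit
        rss = float(r @ r)
        if rss < best_rss:
            best_subset, best_rss = subset, rss
    return best_subset, best_rss
```

    The cost grows as C(p, K) least-squares fits, which is exactly the combinatorial explosion that motivates the approximate AES-K method for large p.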

  19. Weak lensing magnification in the Dark Energy Survey Science Verification data

    NASA Astrophysics Data System (ADS)

    Garcia-Fernandez, M.; Sanchez, E.; Sevilla-Noarbe, I.; Suchyta, E.; Huff, E. M.; Gaztanaga, E.; Aleksić, J.; Ponce, R.; Castander, F. J.; Hoyle, B.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Benoit-Lévy, A.; Bernstein, G. M.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Eifler, T. F.; Evrard, A. E.; Fernandez, E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; James, D. J.; Jarvis, M.; Kirk, D.; Krause, E.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Lima, M.; MacCrann, N.; Maia, M. A. G.; March, M.; Marshall, J. L.; Melchior, P.; Miquel, R.; Mohr, J. J.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Rykoff, E. S.; Scarpine, V.; Schubnell, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Tarle, G.; Thomas, D.; Walker, A. R.; Wester, W.; DES Collaboration

    2018-05-01

    In this paper, the effect of weak lensing magnification on galaxy number counts is studied by cross-correlating the positions of two galaxy samples, separated by redshift, using the Dark Energy Survey Science Verification data set. This analysis is carried out for galaxies that are selected only by their photometric redshift. An extensive analysis of the systematic effects, using new methods based on simulations, is performed, including a Monte Carlo sampling of the selection function of the survey.

  20. Bioassay and biomolecular identification, sorting, and collection methods using magnetic microspheres

    DOEpatents

    Kraus, Robert H., Jr.; Zhou, Feng [Los Alamos, NM]; Nolan, John P. [Santa Fe, NM]

    2007-06-19

    The present invention is directed to processes of separating, analyzing and/or collecting selected species within a target sample by use of magnetic microspheres including magnetic particles, the magnetic microspheres adapted for attachment to a receptor agent that can subsequently bind to selected species within the target sample. The magnetic microspheres can be sorted into a number of distinct populations, each population with a specific range of magnetic moments and different receptor agents can be attached to each distinct population of magnetic microsphere.

  1. Accelerating assimilation development for new observing systems using EFSO

    NASA Astrophysics Data System (ADS)

    Lien, Guo-Yuan; Hotta, Daisuke; Kalnay, Eugenia; Miyoshi, Takemasa; Chen, Tse-Chun

    2018-03-01

    To successfully assimilate data from a new observing system, it is necessary to develop appropriate data selection strategies, assimilating only the generally useful data. This development work is usually done by trial and error using observing system experiments (OSEs), which are very time- and resource-consuming. This study proposes a new, efficient methodology to accelerate the development using ensemble forecast sensitivity to observations (EFSO). First, non-cycled assimilation of the new observation data is conducted to compute EFSO diagnostics for each observation within a large sample. Second, the average EFSO impact, conditionally sampled in terms of various factors, is computed. Third, potential data selection criteria are designed based on the non-cycled EFSO statistics and tested in cycled OSEs to verify the actual assimilation impact. The usefulness of this method is demonstrated with the assimilation of satellite precipitation data. It is shown that the EFSO-based method can efficiently suggest data selection criteria that significantly improve the assimilation results.
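    The second and third steps — conditionally averaging per-observation EFSO impacts and turning the averages into candidate rejection rules — can be sketched as follows. The sign convention (negative impact means the observation reduced forecast error) and the group labels are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def conditional_efso(impacts, groups):
    """Average the per-observation EFSO impact within each group
    (e.g. a satellite channel or a rain-rate bin). By the convention
    assumed here, a negative mean impact means the group is beneficial
    (it reduces forecast error on average)."""
    return {g: float(impacts[groups == g].mean()) for g in np.unique(groups)}

def rejection_list(group_means):
    """Candidate data-selection rule: reject groups whose average impact
    is positive (detrimental). In practice such criteria would still be
    verified in cycled OSEs, as the abstract describes."""
    return sorted(g for g, m in group_means.items() if m > 0)
```

    The point of the non-cycled first step is that these statistics come from one inexpensive assimilation pass, so many candidate stratifications can be screened before committing to expensive cycled OSEs.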

  2. Detection of genomic loci associated with environmental variables using generalized linear mixed models.

    PubMed

    Lobréaux, Stéphane; Melodelima, Christelle

    2015-02-01

    We tested the use of Generalized Linear Mixed Models (GLMMs) to detect associations between genetic loci and environmental variables, taking into account the population structure of the sampled individuals. We used a simulation approach to generate datasets under demographically and selectively explicit models. These datasets were used to analyze and optimize the capacity of GLMMs to detect associations between markers and selective coefficients, treated as environmental data, in terms of false and true positive rates. Different sampling strategies were tested, maximizing the number of populations sampled, sites sampled per population, or individuals sampled per site, and the effect of different selective intensities on the efficiency of the method was determined. Finally, we applied these models to an Arabidopsis thaliana SNP dataset from different accessions, looking for loci associated with spring minimal temperature. We identified 25 regions that exhibit unusual correlations with the climatic variable and contain genes with functions related to temperature stress. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Picric acid capped silver nanoparticles as a probe for colorimetric sensing of creatinine in human blood and cerebrospinal fluid samples.

    PubMed

    Parmar, Ankita K; Valand, Nikunj N; Solanki, Kalpesh B; Menon, Shobhana K

    2016-02-21

    Creatinine is the most important parameter to be determined in the diagnosis of renal, muscular and thyroid function. The most common method for the determination of creatinine is Jaffe's reaction, a routine practice for blood and urine analysis. However, in cases of icteric and haemolyzed blood samples, interference from other constituents present in the blood, such as bilirubin, creatine, and urea, occurs during the estimation of creatinine, which leads to misdiagnosis. To overcome this difficulty, we have developed a silver nanoparticle (Ag NPs) based sensor for the selective determination of creatinine. In this study, a new approach has been given to the traditional Jaffe's reaction, by coating Ag NPs with picric acid (PA) to form an assembly that can selectively detect creatinine. The Ag NPs based sensor proficiently and selectively recognizes creatinine owing to the ability of picric acid to bind with it and form a complex. The nanoassembly and the interactions were investigated by transmission electron microscopy (TEM), dynamic light scattering (DLS) analysis, UV-Vis spectroscopy, FT-IR spectroscopy and ESI-MS, which demonstrated the binding affinity of creatinine for PA-capped Ag NPs. A linear correlation was obtained in the range of 0.01 μM-1 μM with an R(2) value of 0.9998 and a lower detection limit of 8.4 nM. The sensor was successfully applied to different types of blood and CSF samples for the determination of creatinine, and the results were compared with those of the Jaffe's method. With the advantages of high sensitivity, selectivity and low sample volume, this method is potentially suitable for the on-site monitoring of creatinine.
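    The calibration workflow behind figures of merit like these — a linear fit, its R², and a signal-to-noise based detection limit — can be sketched generically. The 3σ/slope rule mirrors the S/N = 3 criterion commonly used for an LOD; the numbers in the usage note are illustrative, not the paper's data.

```python
import numpy as np

def calibration(conc, signal):
    """Fit signal = a*conc + b by least squares and return the slope,
    intercept and coefficient of determination R^2."""
    a, b = np.polyfit(conc, signal, 1)
    fit = a * conc + b
    ss_res = np.sum((signal - fit) ** 2)     # residual sum of squares
    ss_tot = np.sum((signal - signal.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

def lod(blank_sd, slope):
    """Detection limit as 3 * sigma_blank / slope, a generic form of
    the S/N = 3 rule."""
    return 3.0 * blank_sd / slope
```

    For example, with a perfectly linear synthetic curve signal = 2·conc + 0.1, the fit recovers a slope of 2 and R² of 1, and a blank standard deviation of 0.01 gives an LOD of 0.015 in concentration units.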

  4. Evaluation of Wet Chemical ICP-AES Elemental Analysis Methods using Simulated Hanford Waste Samples - Phase I Interim Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, Charles J.; Edwards, Thomas B.

    2005-04-30

    The wet chemistry digestion method development for providing process control elemental analyses of the Hanford Tank Waste Treatment and Immobilization Plant (WTP) Melter Feed Preparation Vessel (MFPV) samples is divided into two phases. Phase I consists of: (1) optimizing digestion methods as a precursor to elemental analyses by ICP-AES techniques; (2) selecting methods with the desired analytical reliability and speed to support the nine-hour or less turnaround time requirement of the WTP; and (3) providing a baseline comparison to the laser ablation (LA) sample introduction technique for ICP-AES elemental analyses that is being developed at the Savannah River National Laboratory (SRNL). Phase II consists of: (1) a Time-and-Motion study of the selected methods from Phase I with actual Hanford waste or waste simulants in shielded cell facilities, to ensure that the methods can be performed remotely and maintain the desired characteristics; and (2) digestion of glass samples prepared from actual Hanford waste tank sludge for providing comparative results to the LA Phase II study. Based on the Phase I testing discussed in this report, a tandem digestion approach consisting of sodium peroxide fusion digestions carried out in nickel crucibles and warm mixed-acid digestions carried out in plastic bottles has been selected for the Time-and-Motion study in Phase II. SRNL experience with performing this analytical approach in laboratory hoods indicates that well-trained cell operator teams will be able to perform the tandem digestions in five hours or less. The selected approach will produce two sets of solutions for analysis by ICP-AES techniques. Four hours would then be allocated for performing the ICP-AES analyses and reporting results to meet the nine-hour or less turnaround time requirement. The tandem digestion approach will need to be performed in two separate shielded analytical cells by two separate cell operator teams in order to achieve the nine-hour or less turnaround time. Because of the simplicity of the warm mixed-acid method, a well-trained cell operator team may in time be able to perform both sets of digestions. However, having separate shielded cells for each of the methods is prudent to avoid overcrowding problems that would impede a minimal turnaround time.

  5. Gold Nanoparticles-based Extraction-Free Colorimetric Assay in Organic Media: An Optical Index for Determination of Total Polyphenols in Fat-Rich Samples.

    PubMed

    Della Pelle, Flavio; González, María Cristina; Sergi, Manuel; Del Carlo, Michele; Compagnone, Dario; Escarpa, Alberto

    2015-07-07

    In this work, a rapid and simple gold nanoparticle (AuNPs)-based colorimetric assay is combined with a new type of AuNPs synthesis in organic medium that requires no sample extraction. The extraction-free AuNPs synthesis approach strategically involves the use of dimethyl sulfoxide (DMSO), which acts as an organic solvent for simultaneous sample analyte solubilization and AuNPs stabilization. Moreover, DMSO works as a cryogenic protector, avoiding solidification at the temperatures used to stop the synthesis. In addition, the ability of the sample's endogenous fatty acids to act as AuNPs stabilizers is also exploited, avoiding the use of common surfactant AuNPs stabilizers, which, in an organic/aqueous medium, give rise to the formation of undesirable emulsions. This is controlled by adding an analyte-free fat sample (sample blank). The method was exhaustively applied to the determination of total polyphenols in two selected kinds of fat-rich liquid and solid samples with high antioxidant activity and economic impact: olive oil (n = 28) and chocolate (n = 16) samples. Fatty-sample absorbance is easily followed through the localized surface plasmon resonance (LSPR) absorption band at 540 nm, and quantitation is referred to gallic acid equivalents. A rigorous evaluation of the method was performed by comparison with the traditionally established Folin-Ciocalteu (FC) method, obtaining an excellent correlation for olive oil samples (R = 0.990, n = 28) and for chocolate samples (R = 0.905, n = 16). Additionally, the proposed approach was found to be selective (versus other endogenous sample tocopherols and pigments), fast (15-20 min), cheap and simple (it does not require expensive or complex equipment), needing only a very limited amount of sample (30 μL) and a significantly lower solvent consumption (250 μL in a 500 μL total reaction volume) compared with classical methods.

  6. Spectral feature characterization methods for blood stain detection in crime scene backgrounds

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Mathew, Jobin J.; Dube, Roger R.; Messinger, David W.

    2016-05-01

    Blood stains are one of the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Blood spectral signatures containing unique reflectance or absorption features are important both for forensic on-site investigation and laboratory testing. They can be used for target detection and identification applied to crime scene hyperspectral imagery, and also be utilized to analyze the spectral variation of blood on various backgrounds. Non-blood stains often mislead the detection and can generate false alarms at a real crime scene, especially against dark and red backgrounds. This paper measured the reflectance of liquid blood and 9 kinds of non-blood samples in the range of 350 nm - 2500 nm in various crime scene backgrounds, such as pure samples contained in petri dishes with various thicknesses, mixed samples with fabrics of different colors and materials, and mixed samples with wood, all of which are examined to provide sub-visual evidence for detecting and recognizing blood from non-blood samples in a realistic crime scene. The spectral differences between blood and non-blood samples are examined, and spectral features such as "peaks" and "depths" of reflectance are selected. Two blood stain detection methods are proposed in this paper. The first method uses an index defined as the ratio of "depth" minus "peak" over "depth" plus "peak" within a wavelength range of the reflectance spectrum. The second method uses the relative band depth of selected wavelength ranges of the reflectance spectrum. Results show that the index method is able to discriminate blood from non-blood samples in most tested crime scene backgrounds, but is not able to detect it on black felt, whereas the relative band depth method is able to discriminate blood from non-blood samples on all of the tested background material types and colors.
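    The first index can be written out directly. Here "peak" and "depth" are taken as the maximum and minimum reflectance inside a chosen wavelength window; this is an assumed reading of the abstract's wording, and the window bounds are illustrative rather than the authors' choices.

```python
import numpy as np

def band_index(wavelengths, reflectance, band=(500.0, 700.0)):
    """Compute (depth - peak) / (depth + peak) within a wavelength
    window, where 'peak' is the maximum and 'depth' the minimum
    reflectance in that window. The window (500-700 nm) is illustrative."""
    mask = (wavelengths >= band[0]) & (wavelengths <= band[1])
    r = reflectance[mask]
    peak, depth = r.max(), r.min()
    return (depth - peak) / (depth + peak)
```

    Because depth ≤ peak, the index is always ≤ 0, with more negative values indicating a deeper absorption feature relative to the local continuum.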

  7. Evolution of the Interstellar Gas Fraction Over Cosmic Time

    NASA Astrophysics Data System (ADS)

    Wiklind, Tommy; CANDELS

    2018-01-01

    Galaxies evolve by transforming gas into stars. The gas is acquired through accretion and mergers in a highly intricate process in which feedback plays an important role. Directly measuring the gas content in distant galaxies is, however, both complicated and time consuming. A direct observation involves either observing neutral hydrogen using the 21cm line or observing the molecular gas component using tracer molecules such as CO. The former method is impeded by man-made radio interference, and the latter is time consuming even with sensitive instruments such as ALMA. An indirect method is to observe the Rayleigh-Jeans part of the dust SED and from this infer the gas mass. Here we present the results from a project using ALMA to measure the RJ part of the dust SED in a carefully selected sample of 70 galaxies at redshifts z=2-5. The galaxies are selected solely on the basis of their redshift and stellar mass and therefore represent an unbiased sample. The stellar masses are selected using the MEAM method, and thus the sample corresponds to progenitors of a z=0 galaxy of a particular stellar mass. Preliminary results show that the average gas fraction increases with redshift over the range z=2-3, in accordance with theoretical models, but at z≥4 the observed gas fraction is lower.

  8. New paleomagnetic constraints on the lunar magnetic field evolution

    NASA Astrophysics Data System (ADS)

    Lepaulard, C.; Gattacceca, J.; Weiss, B. P.

    2017-12-01

    In the 1970s, the first paleomagnetic analyses of lunar samples from the Apollo missions allowed a glimpse of the global evolution of the Moon's magnetic field over time, with evidence for past dynamo activity [Fuller and Cisowski, 1987]. During the last decade, a new set of paleomagnetic studies has provided a more refined view of the evolution of the lunar dynamo activity (chronology, intensity) [Weiss and Tikoo, 2014]. The aim of this study is to further refine the knowledge of the lunar dynamo by providing new paleomagnetic data. Based on measurements of the natural remanent magnetization of the main masses of 135 Apollo samples (masses between 50 g and 5 kg) with a portable magnetometer, we have selected nine samples for laboratory analyses. The selected Apollo samples are: 10018, 15505, 61195 (regolith breccias); 61015 (dimict breccia); 14169 (crystalline matrix breccia); 65055 (basaltic impact melt); and 12005, 12021 and 15529 (basalts). Paleointensities of the lunar magnetic field were obtained by alternating field demagnetization and normalization with laboratory magnetizations, as well as by thermal demagnetization under controlled oxygen fugacity (Thellier-Thellier method) for selected samples. Preliminary results indicate that only three samples (10018, 15505, and 15529) possess a stable high-coercivity / high-temperature component of magnetization. We estimated the following paleointensities: 1.5 µT for 15505 and 13 µT for 15529 (both with alternating-field-based methods), and 1 µT for 10018 (thermal demagnetization with the Thellier-Thellier method). The other samples provide only an upper limit on the lunar surface field. These data will be discussed in view of the ages of the samples (ages from the literature, and additional dating in progress). References: Fuller, M., and S.M. Cisowski, 1987. Lunar paleomagnetism. Geomagnetism 2, 307-455. Weiss, B.P., and S.M. Tikoo, 2014. The lunar dynamo. Science, 346, doi:10.1126/science.1246753.
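    The normalization step behind the alternating-field paleointensities can be sketched generically: the ratio of natural remanence lost to laboratory remanence gained over the same demagnetization interval, scaled by the known laboratory field. This is a schematic of the general technique, not the authors' exact procedure, and the calibration factor f is an assumed placeholder.

```python
def rem_paleointensity(nrm_lost, lab_rem_gained, lab_field_uT, f=1.0):
    """Relative paleointensity by normalisation with a laboratory
    magnetisation (e.g. an ARM imparted in a known field): the NRM lost
    over a demagnetisation interval divided by the laboratory remanence
    gained over the same interval, times the lab field. `f` absorbs the
    method-dependent efficiency calibration; f = 1 is an assumption,
    not a measured value."""
    return f * (nrm_lost / lab_rem_gained) * lab_field_uT
```

    For instance, losing 2 units of NRM over an interval in which 4 units of laboratory remanence are acquired in a 50 µT field would yield a nominal paleointensity of 25 µT under these assumptions.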

  9. Individualized statistical learning from medical image databases: application to identification of brain lesions.

    PubMed

    Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos

    2014-04-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
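    A single-subspace version of the idea — fit a PCA model to normative samples and score a test sample by its reconstruction error — can be sketched as follows. The full method iterates this over many sampled subspaces with target-specific feature selection and an estimability criterion, all of which are omitted here.

```python
import numpy as np

def pca_abnormality(train, test, n_components=2):
    """Fit a PCA model to normative samples (rows of `train`), project a
    test sample onto the principal subspace, and return the reconstruction
    error. Large errors flag deviations from normality. A bare-bones
    stand-in for the paper's iterative subspace-sampling procedure."""
    mu = train.mean(axis=0)
    Xc = train - mu
    # principal directions from the SVD of the centred training data
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    basis = vt[:n_components]
    # project onto the subspace and map back to the original space
    recon = (test - mu) @ basis.T @ basis + mu
    return float(np.linalg.norm(test - recon))
```

    A test sample lying within the normative subspace reconstructs almost perfectly (error near zero), while a sample with structure outside that subspace — the analogue of a lesion — incurs a large error.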

  10. One-step displacement dispersive liquid-liquid microextraction coupled with graphite furnace atomic absorption spectrometry for the selective determination of methylmercury in environmental samples.

    PubMed

    Liang, Pei; Kang, Caiyan; Mo, Yajun

    2016-01-01

    A novel method for the selective determination of methylmercury (MeHg) was developed by one-step displacement dispersive liquid-liquid microextraction (D-DLLME) coupled with graphite furnace atomic absorption spectrometry. In the proposed method, Cu(II) reacted with diethyldithiocarbamate (DDTC) to form the Cu-DDTC complex, which was used as the chelating agent instead of DDTC for the dispersive liquid-liquid microextraction (DLLME) of MeHg. Because the stability of MeHg-DDTC is higher than that of Cu-DDTC, MeHg can displace Cu from the Cu-DDTC complex and be preconcentrated in a single DLLME procedure. MeHg could be extracted into the extraction solvent phase at pH 6 while Hg(II) remained in the sample solution. Potential interference from co-existing metal ions with lower DDTC complex stability was largely eliminated without the need for any masking reagent. Under the optimal conditions, the limit of detection of this method was 13.6 ng L(-1) (as Hg), and an enhancement factor of 81 was achieved with a sample volume of 5.0 mL. The proposed method was successfully applied for the determination of trace MeHg in some environmental samples with satisfactory results. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Recognizing ancient papyri by a combination of spectroscopic, diffractional and chromatographic analytical tools

    PubMed Central

    Łojewska, J.; Rabin, I.; Pawcenis, D.; Bagniuk, J.; Aksamit-Koperska, M. A.; Sitarz, M.; Missori, M.; Krutzsch, M.

    2017-01-01

    Ancient papyri are a written heritage of the culture that flourished more than 3000 years ago in Egypt. One of the most significant collections in the world is housed in the Egyptian Museum and Papyrus Collection in Berlin, from where the samples for our investigation come. The papyrologists, curators and conservators of such collections search intensely for the analytical detail that would allow ancient papyri to be distinguished from modern fabrications, in order to detect possible forgeries, assess the papyrus deterioration state, and improve the design of storage conditions and conservation methods. This has become the aim of our investigation. The samples were studied by a number of methods, including spectroscopic (FTIR, fluorescent-FS, Raman), diffractional (XRD) and chromatographic (size exclusion chromatography-SEC) techniques, selected in order to determine degradation parameters: the overall oxidation of the lignocellulosic material, and the degree of polymerization and crystallinity of cellulose. The results were correlated with those obtained from carefully selected model samples, including modern papyri and paper of different composition aged at elevated temperature in humid air. The methods were classified in the order SEC > FS > FTIR > XRD, based on their effectiveness in discriminating the state of papyri degradation. However, the most trustworthy evaluation of the age of papyri samples should rely on several methods. PMID:28382971

  13. Individualized Statistical Learning from Medical Image Databases: Application to Identification of Brain Lesions

    PubMed Central

    Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos

    2014-01-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564
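    As a highly simplified, purely illustrative sketch of the general idea of flagging deviations from a normative set (not the authors' iterative subspace/PCA pipeline), one can estimate per-feature statistics from healthy samples and score a test sample by its standardized deviations, treating features with negligible variance as inestimable:

```python
import statistics

def normative_stats(normals):
    # normals: list of healthy samples, each a list of feature values
    cols = list(zip(*normals))
    return [(statistics.mean(c), statistics.stdev(c)) for c in cols]

def abnormality_scores(test, stats, min_std=1e-9):
    # Absolute z-score per feature; near-constant ("inestimable") features -> None
    return [abs(x - m) / s if s > min_std else None
            for x, (m, s) in zip(test, stats)]

normals = [[1.0, 10.0], [1.2, 10.2], [0.8, 9.8], [1.1, 10.1]]
stats = normative_stats(normals)
print(abnormality_scores([1.0, 14.0], stats))  # second feature deviates strongly
```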

  14. Directional selection in temporally replicated studies is remarkably consistent.

    PubMed

    Morrissey, Michael B; Hadfield, Jarrod D

    2012-02-01

    Temporal variation in selection is a fundamental determinant of evolutionary outcomes. A recent paper presented a synthetic analysis of temporal variation in selection in natural populations. The authors concluded that there is substantial variation in the strength and direction of selection over time, but acknowledged that sampling error would result in estimates of selection that were more variable than the true values. We reanalyze their dataset using techniques that account for the fact that sampling error necessarily inflates apparent levels of variation, and show that directional selection is remarkably constant over time, both in magnitude and direction. Thus we cannot claim that the available data support the existence of substantial temporal heterogeneity in selection. Nonetheless, we conjecture that temporal variation in selection could be important, but that there are good reasons why it may not appear in the available data. These new analyses highlight the importance of applying techniques that estimate parameters of the distribution of selection, rather than parameters of the distribution of estimated selection (which will reflect both sampling error and "real" variation in selection); indeed, despite the availability of methods for the former, focus on the latter has been common in synthetic reviews of aspects of selection in nature, and can lead to serious misinterpretations. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
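    The statistical point at the heart of this reanalysis can be illustrated with a simple method-of-moments correction (a generic estimator, not the authors' exact model): the variance among yearly selection estimates overstates the variance of true selection by roughly the average squared standard error, so subtracting the latter gives a first-pass estimate of real temporal variation. All numbers below are hypothetical.

```python
import statistics

def true_variance_estimate(estimates, std_errors):
    # Observed spread of estimates = true spread + average sampling variance,
    # so subtract the mean squared standard error (clipped at zero).
    var_obs = statistics.variance(estimates)
    mean_se2 = statistics.mean(se * se for se in std_errors)
    return max(0.0, var_obs - mean_se2)

# Hypothetical yearly selection-gradient estimates with large standard errors:
est = [0.30, -0.10, 0.25, -0.05, 0.20]
se = [0.20, 0.25, 0.18, 0.22, 0.19]
print(statistics.variance(est))         # apparent variation among estimates
print(true_variance_estimate(est, se))  # 0.0: sampling error explains the spread
```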

  15. Correcting false positive medium-chain acyl-CoA dehydrogenase deficiency results from newborn screening; synthesis, purification, and standardization of branched-chain C8 acylcarnitines for use in their selective and accurate absolute quantitation by UHPLC-MS/MS.

    PubMed

    Minkler, Paul E; Stoll, Maria S K; Ingalls, Stephen T; Hoppel, Charles L

    2017-04-01

    While selectively quantifying acylcarnitines in thousands of patient samples using UHPLC-MS/MS, we have occasionally observed unidentified branched-chain C8 acylcarnitines. Such observations are not possible with tandem MS methods, which generate pseudo-quantitative acylcarnitine "profiles". Since these "profiles" select for mass alone, they cannot distinguish authentic signal from isobaric and isomeric interferences. For example, some of the samples containing branched-chain C8 acylcarnitines were, in fact, expanded newborn screening false positive "profiles" for medium-chain acyl-CoA dehydrogenase deficiency (MCADD). Using our fast, highly selective, and quantitatively accurate UHPLC-MS/MS acylcarnitine determination method, we corrected the false positive tandem MS results and reported the sample results as normal for octanoylcarnitine (the marker for MCADD). From instances such as these, we decided to investigate further the presence of branched-chain C8 acylcarnitines in patient samples. To accomplish this, we synthesized and chromatographically characterized several branched-chain C8 acylcarnitines (in addition to valproylcarnitine): 2-methylheptanoylcarnitine, 6-methylheptanoylcarnitine, 2,2-dimethylhexanoylcarnitine, 3,3-dimethylhexanoylcarnitine, 3,5-dimethylhexanoylcarnitine, 2-ethylhexanoylcarnitine, and 2,4,4-trimethylpentanoylcarnitine. We then compared their behavior with that of the branched-chain C8 acylcarnitines observed in patient samples and demonstrated our ability to chromatographically resolve, and thus distinguish, octanoylcarnitine from branched-chain C8 acylcarnitines, correcting false positive MCADD results from expanded newborn screening. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Comparisons of NDT Methods to Inspect Cork and Cork filled Epoxy Bands

    NASA Technical Reports Server (NTRS)

    Lingbloom, Mike

    2007-01-01

    Sheet cork and cork-filled epoxy provide external insulation for the Reusable Solid Rocket Motor (RSRM) on the Nation's Space Transportation System (STS). Interest in the reliability of the external insulation bonds has increased since the Columbia incident. A non-destructive test (NDT) method that will provide the best inspection for these bonds has been under evaluation. Electronic shearography has been selected as the primary NDT method for inspection of these bond lines in the RSRM production flow. ATK Launch Systems Group has purchased an electronic shearography system that includes a vacuum chamber used for evaluation of test parts and custom vacuum windows for inspection of full-scale motors. Although electronic shearography has been selected as the primary method for inspection of the external bonds, other existing technologies continue to be investigated. The NASA/Marshall Space Flight Center (MSFC) NDT department has inspected several samples with various inspection systems in its laboratory for comparison with electronic shearography. The systems that were evaluated are X-ray backscatter, terahertz imaging, and microwave imaging. The samples tested have some programmed flaws as well as some flaws that occurred naturally during the sample-making process. These samples provide sufficient flaw variation for the evaluation of the different inspection systems. This paper will describe and compare the basic functionality, test method, and test results, including dissection, for each inspection technology.

  17. Rapid determination of anti-estrogens by gas chromatography/mass spectrometry in urine: Method validation and application to real samples.

    PubMed

    Gerace, E; Salomone, A; Abbadessa, G; Racca, S; Vincenti, M

    2012-02-01

    A fast screening protocol was developed for the simultaneous determination of nine anti-estrogenic agents (aminoglutethimide, anastrozole, clomiphene, drostanolone, formestane, letrozole, mesterolone, tamoxifen, testolactone) plus five of their metabolites in human urine. After an enzymatic hydrolysis, these compounds can be extracted simultaneously from urine by a simple liquid-liquid extraction under alkaline conditions. The analytes were subsequently analyzed by fast gas chromatography/mass spectrometry (fast-GC/MS) after derivatization. The use of a short column, a high carrier-gas velocity and fast temperature ramping produced an efficient separation of all analytes in about 4 min, allowing a processing rate of 10 samples/h. The present analytical method was validated according to UNI EN ISO/IEC 17025 guidelines for qualitative methods. The investigated parameters included the limit of detection, selectivity, linearity, repeatability, robustness and extraction efficiency. A high MS sampling rate, using a benchtop quadrupole mass analyzer, resulted in accurate peak-shape definition under both scan and selected ion monitoring modes, and high sensitivity in the latter mode. The performance of the method is therefore comparable to that obtainable from traditional GC/MS analysis. The method was successfully tested on real samples arising from clinical treatments of hospitalized patients and could profitably be used for clinical studies on anti-estrogenic drug administration.

  19. Detection of Listeria monocytogenes in pork and beef using the VIDAS® LMO2 automated enzyme linked immunoassay method.

    PubMed

    Meyer, Cornelia; Fredriksson-Ahomaa, Maria; Sperner, Brigitte; Märtlbauer, Erwin

    2011-07-01

    Listeria (L.) monocytogenes, a foodborne pathogen, is known to be a possible contaminant of foods during production and processing. Samples (n=985) of raw meat and by-products obtained from beef and pork were first screened by the VIDAS system for the presence of Listeria spp., followed by testing for the presence of L. monocytogenes. Positive L. monocytogenes results were confirmed by plating on selective agars: 14% of the samples were positive for Listeria and 4% tested positive for L. monocytogenes, of which 3% were confirmed on selective agars. In by-products (17%) the contamination with listeriae was higher than in meat cuts (10%). Only samples strongly positive for Listeria spp. by VIDAS were positive for L. monocytogenes. Overall, the prevalence of L. monocytogenes in beef and pork samples was rather low in comparison to most previous studies. The VIDAS system was shown to be a suitable method for screening out Listeria-negative samples; the main advantage being a markedly reduced assay time. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Human papillomavirus detection and typing using a nested-PCR-RFLP assay.

    PubMed

    Coser, Janaina; Boeira, Thaís da Rocha; Fonseca, André Salvador Kazantzi; Ikuta, Nilo; Lunge, Vagner Ricardo

    2011-01-01

    It is clinically important to detect and type human papillomavirus (HPV) in a sensitive and specific manner. Objective: development of a nested-polymerase chain reaction-restriction fragment length polymorphism (nested-PCR-RFLP) assay to detect and type HPV based on analysis of the L1 gene. Methods: published DNA sequences of mucosal HPV types were analyzed to select new primer sequences; an original nested-PCR assay was designed using the newly selected primer pair together with the classical MY09/11 primers; HPV was then detected and typed in cervical samples using the nested-PCR-RFLP assay. Results: the nested-PCR-RFLP assay detected and typed HPV in cervical samples. Of the 128 clinical samples submitted to both simple PCR and nested-PCR for HPV detection, 37 (28.9%) were positive for the virus by both methods, and 25 samples were positive only by nested-PCR (a 67.5% increase in detection rate compared with single PCR). All HPV-positive samples were effectively typed by the RFLP assay. Conclusion: the nested-PCR method proved to be an effective diagnostic tool for HPV detection and typing.

  1. Multivariate calibration on NIR data: development of a model for the rapid evaluation of ethanol content in bakery products.

    PubMed

    Bello, Alessandra; Bianchi, Federica; Careri, Maria; Giannetto, Marco; Mori, Giovanni; Musci, Marilena

    2007-11-05

    A new NIR method based on multivariate calibration for the determination of ethanol in industrially packed wholemeal bread was developed and validated. GC-FID was used as the reference method to determine the actual ethanol concentration of different samples of wholemeal bread with appropriate amounts of added ethanol, ranging from 0 to 3.5% (w/w). Stepwise discriminant analysis was carried out on the NIR dataset in order to reduce the number of original variables by selecting those able to discriminate between samples of different ethanol concentrations. Using the selected variables, a multivariate calibration model was then obtained by multiple linear regression. The predictive power of the linear model was optimized by a new "leave one out" procedure, which further reduced the number of original variables.
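    A toy version of the variable-selection-plus-calibration idea, using univariate regression with leave-one-out validation in place of the paper's stepwise discriminant analysis and multiple linear regression (all data below are hypothetical), could look like:

```python
import math

def fit_line(xs, ys):
    # Ordinary least squares for y = a + b * x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    return my - b * mx, b  # intercept, slope

def loo_rmse(xs, ys):
    # Leave-one-out cross-validation error of the univariate calibration line
    errs = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append((a + b * xs[i] - ys[i]) ** 2)
    return math.sqrt(sum(errs) / len(errs))

def select_best_variable(x_cols, y):
    # Keep the spectral variable whose LOO prediction error is smallest
    return min(range(len(x_cols)), key=lambda j: loo_rmse(x_cols[j], y))

# Hypothetical absorbances at two NIR variables vs. ethanol %(w/w) from GC-FID
y = [0.0, 0.5, 1.0, 2.0, 3.5]
X = [[0.10, 0.21, 0.29, 0.52, 0.88],   # tracks ethanol content
     [0.40, 0.10, 0.35, 0.22, 0.30]]   # uninformative
print(select_best_variable(X, y))  # -> 0
```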

  2. Students' Entrepreneurial Self-Efficacy: Does the Teaching Method Matter?

    ERIC Educational Resources Information Center

    Abaho, Ernest; Olomi, Donath R.; Urassa, Goodluck Charles

    2015-01-01

    Purpose: The purpose of this paper is to examine the various entrepreneurship teaching methods in Uganda and how these methods relate to entrepreneurial self-efficacy (ESE). Design/methodology/approach: A sample of 522 final year students from selected universities and study programs was surveyed using self-reported questionnaires. Findings: There…

  3. COMPARISONS OF BOATING AND WADING METHODS USED TO ASSESS THE STATUS OF FLOWING WATERS

    EPA Science Inventory

    This document has been designed to provide an overview of the biological, physical and chemical methods of selected stream biomonitoring and assessment programs. It was written to satisfy the need to identify current methods that exist for sampling large rivers. The primary focu...

  4. A comparison of QuantStudio™ 3D Digital PCR and ARMS-PCR for measuring plasma EGFR T790M mutations of NSCLC patients

    PubMed Central

    Sang, Yaxiong; Zhang, Jie; Wang, Ping; Wang, Yue; Liu, Bing; Lin, Dongmei; Yu, Yang; Fang, Jian

    2018-01-01

    Background: The AURA3 clinical trial has shown that advanced non-small cell lung cancer (NSCLC) patients with EGFR T790M mutations in circulating tumor DNA (ctDNA) could benefit from osimertinib. Purpose: The aim of this study was to assess the usefulness of the QuantStudio™ 3D Digital PCR System platform for the detection of plasma EGFR T790M mutations in NSCLC patients, and to compare the performances of 3D Digital PCR and ARMS-PCR. Patients and methods: A total of 119 Chinese patients were enrolled in this study. Mutant allele frequency of plasma EGFR T790M was detected by 3D Digital PCR; 25 selected samples were then verified by ARMS-PCR and four of them were verified by next generation sequencing (NGS). Results: In total, 52.94% (69/119) had EGFR T790M mutations detected by 3D Digital PCR. In the 69 positive samples, the median mutant allele frequency (AF) was 1.09% and three cases presented a low concentration (AF < 0.1%). Limited by the amount of plasma DNA, 17 samples (AF < 2.5%) and eight samples (T790M-) were selected for verification by ARMS-PCR. Four of those samples were verified by NGS as a third verification method. Among the selected 17 positive cases, ten samples presented a mutant allele frequency < 0.5%, and seven samples presented an intermediate mutant allele frequency (0.5% ≤ AF < 2.5%). However, only three samples (3/17) were identified as positive by ARMS-PCR, namely P6 (AF = 1.09%), P7 (AF = 2.09%), and P8 (AF = 2.21%). It is worth mentioning that sample P9 (AF = 2.05%, analyzed by 3D Digital PCR) was identified as T790M- by ARMS-PCR. Four samples were identified as T790M+ by both NGS and 3D Digital PCR, and typically three of them (3/4) presented a low ratio (AF < 0.5%). Conclusion: Our study demonstrated that 3D Digital PCR is a novel method with high sensitivity and specificity to detect EGFR T790M mutations in plasma. PMID:29403309
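    For context, digital PCR platforms quantify targets by counting positive partitions with a Poisson correction; a mutant allele fraction can then be computed from the mutant-positive and wild-type-positive counts. The sketch below is a generic dPCR calculation with hypothetical partition counts, not QuantStudio 3D software output:

```python
import math

def copies_per_partition(n_positive, n_total):
    # Poisson correction: some positive partitions contain >1 template copy
    return -math.log(1.0 - n_positive / n_total)

def mutant_allele_fraction(mut_pos, wt_pos, n_total):
    lam_mut = copies_per_partition(mut_pos, n_total)
    lam_wt = copies_per_partition(wt_pos, n_total)
    return lam_mut / (lam_mut + lam_wt)

# Hypothetical chip: 20,000 partitions, 150 mutant-positive, 13,000 WT-positive
af = mutant_allele_fraction(150, 13000, 20000)
print(round(af * 100, 2))  # mutant AF in percent (below 1% here)
```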

  5. Development of a spatial sampling protocol using GIS to measure health disparities in Bobo-Dioulasso, Burkina Faso, a medium-sized African city.

    PubMed

    Kassié, Daouda; Roudot, Anna; Dessay, Nadine; Piermay, Jean-Luc; Salem, Gérard; Fournet, Florence

    2017-04-18

    Many cities in developing countries experience unplanned and rapid growth. Several studies have shown that irregular urbanization and uneven provision of urban infrastructure produce different health risks and uneven exposure to specific diseases. Consequently, health surveys within cities should be carried out at the micro-local scale, and sampling methods should try to capture this urban diversity. This article describes the methodology used to develop a multi-stage sampling protocol to select a population for a demographic survey investigating health disparities in the medium-sized city of Bobo-Dioulasso, Burkina Faso. It is based on a characterization of the typology of Bobo-Dioulasso that takes the city's heterogeneity into account, as determined by analysis of the built environment and of the distribution of urban infrastructures, such as healthcare structures or water fountains, through photo-interpretation of aerial photographs and satellite images. Principal component analysis and hierarchical ascendant classification were then used to generate the city typology. Five groups of spaces with specific profiles were identified according to a set of variables that could be considered proxy indicators of health status. Within these five groups, four sub-spaces were randomly selected for the study. We were then able to survey 1045 households across the selected sub-spaces. The pertinence of this approach is discussed in comparison with classical sampling designs, such as the random walk method. This urban space typology allowed us to select a population living in areas representative of the uneven urbanization process, and to characterize its health status with regard to several indicators (nutritional status, communicable and non-communicable diseases, and anaemia). Although this method should be validated and compared with more established methods, it appears to be an alternative in developing countries where geographic and population data are scarce.
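    The multi-stage logic described above (typology groups, then randomly drawn sub-spaces, then households) can be sketched as a stratified two-stage draw. Group and sub-space names and sizes below are hypothetical, and the typology step itself (PCA plus hierarchical classification) is omitted:

```python
import random

def two_stage_sample(groups, n_subspaces, n_households, seed=0):
    # groups: {group_name: {subspace_name: [household ids]}}
    rng = random.Random(seed)
    chosen = {}
    for gname in sorted(groups):
        subspaces = groups[gname]
        # Stage 1: draw sub-spaces at random within each typology group
        for sname in rng.sample(sorted(subspaces), k=n_subspaces):
            hh = subspaces[sname]
            # Stage 2: draw households within each selected sub-space
            chosen[(gname, sname)] = rng.sample(hh, k=min(n_households, len(hh)))
    return chosen

# Hypothetical typology: two groups of spaces, a few sub-spaces each
groups = {
    "dense_old_core": {"A1": list(range(30)), "A2": list(range(25))},
    "periurban":      {"B1": list(range(40)), "B2": list(range(35))},
}
sample = two_stage_sample(groups, n_subspaces=1, n_households=10)
print(sorted(sample))  # one randomly chosen sub-space per group
```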

  6. Accounting for selection bias in association studies with complex survey data.

    PubMed

    Wirth, Kathleen E; Tchetgen Tchetgen, Eric J

    2014-05-01

    Obtaining representative information from hidden and hard-to-reach populations is fundamental to describe the epidemiology of many sexually transmitted diseases, including HIV. Unfortunately, simple random sampling is impractical in these settings, as no registry of names exists from which to sample the population at random. However, complex sampling designs can be used, as members of these populations tend to congregate at known locations, which can be enumerated and sampled at random. For example, female sex workers may be found at brothels and street corners, whereas injection drug users often come together at shooting galleries. Despite the logistical appeal, complex sampling schemes lead to unequal probabilities of selection, and failure to account for this differential selection can result in biased estimates of population averages and relative risks. However, standard techniques to account for selection can lead to substantial losses in efficiency. Consequently, researchers implement a variety of strategies in an effort to balance validity and efficiency. Some researchers fully or partially account for the survey design, whereas others do nothing and treat the sample as a realization of the population of interest. We use directed acyclic graphs to show how certain survey sampling designs, combined with subject-matter considerations unique to individual exposure-outcome associations, can induce selection bias. Finally, we present a novel yet simple maximum likelihood approach for analyzing complex survey data; this approach optimizes statistical efficiency at no cost to validity. We use simulated data to illustrate this method and compare it with other analytic techniques.
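    The standard design-based correction alluded to above weights each sampled subject by the inverse of its selection probability. A minimal Hájek-type weighted mean (with hypothetical venue-based selection probabilities) illustrates how unequal selection shifts a naive prevalence estimate:

```python
def ipw_mean(values, selection_probs):
    # Hajek estimator: weight each observation by 1 / Pr(selected)
    weights = [1.0 / p for p in selection_probs]
    return sum(w * y for w, y in zip(weights, values)) / sum(weights)

# Hypothetical venue-based sample: venue-A respondents were twice as likely
# to be selected as venue-B respondents.
y     = [1, 1, 0, 1, 0, 0, 0, 0]          # outcome indicator
probs = [0.8, 0.8, 0.8, 0.8, 0.4, 0.4, 0.4, 0.4]
print(ipw_mean(y, probs))   # design-corrected prevalence (lower than naive here)
print(sum(y) / len(y))      # naive prevalence, biased toward the oversampled venue
```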

  7. Determination of selected elements in whole coal and in coal ash from the eight argonne premium coal samples by atomic absorption spectrometry, atomic emission spectrometry, and ion-selective electrode

    USGS Publications Warehouse

    Doughten, M.W.; Gillison, J.R.

    1990-01-01

    Methods for the determination of 24 elements in whole coal and coal ash by inductively coupled argon plasma-atomic emission spectrometry, flame, graphite furnace, and cold vapor atomic absorption spectrometry, and by ion-selective electrode are described. Coal ashes were analyzed in triplicate to determine the precision of the methods. Results of the analyses of NBS Standard Reference Materials 1633, 1633a, 1632a, and 1635 are reported. Accuracy of the methods is determined by comparison of the analysis of standard reference materials to their certified values as well as other values in the literature.

  8. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    PubMed Central

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when classifiers are applied to nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits only from the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and the correction methods perform similarly. We discuss the consequences of inappropriate distribution assumptions and the reasons for the differing behavior of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
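    The flavor of stochastic inverse-probability oversampling can be sketched as resampling with replacement with weights proportional to 1/π before training a classifier. This is a bare-bones illustration with hypothetical selection probabilities, not the sambia implementation (which also addresses the covariance structure):

```python
import random
from collections import Counter

def ip_oversample(rows, selection_probs, n_out, seed=0):
    # Draw with replacement, weight ~ 1/pi, to mimic the unstratified population
    rng = random.Random(seed)
    weights = [1.0 / p for p in selection_probs]
    return rng.choices(rows, weights=weights, k=n_out)

# Hypothetical two-phase study: cases were sampled with pi = 0.5,
# controls with pi = 0.05 (a 10x enrichment of cases in the study data).
rows = [("case", 1)] * 50 + [("ctrl", 0)] * 50
probs = [0.5] * 50 + [0.05] * 50
resampled = ip_oversample(rows, probs, n_out=5000)
print(Counter(label for _, label in resampled))  # roughly 1 case : 10 controls
```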

  9. A rapid method for assessing the accumulation of microplastics in the sea surface microlayer (SML) of estuarine systems.

    PubMed

    Anderson, Zachary T; Cundy, Andrew B; Croudace, Ian W; Warwick, Phillip E; Celis-Hernandez, Omar; Stead, Jessica L

    2018-06-21

    Microplastics are an increasingly important contaminant in the marine environment. Depending on their composition and degree of biofouling, many common microplastics are less dense than seawater and so tend to float at or near the ocean surface. As such, they may exhibit high concentrations in the sea surface microlayer (SML - the upper 1-1000 μm of the ocean) relative to deeper water. This paper examines the accumulation of microplastics, in particular microfibres, in the SML in two contrasting estuarine systems - the Hamble estuary and the Beaulieu estuary, southern U.K., via a novel and rapid SML-selective sampling method using a dipped glass plate. Microplastic concentrations (for identified fibres, of 0.05 to 4.5 mm length) were highest in the SML-selective samples (with a mean concentration of 43 ± 36 fibres/L), compared to <5 fibres/L for surface and sub-surface bulk water samples. Data collected show the usefulness of the dipped glass plate method as a rapid and inexpensive tool for sampling SML-associated microplastics in estuaries, and indicate that microplastics preferentially accumulate at the SML in estuarine conditions (providing a potential transfer mechanism for incorporation into upper intertidal sinks). Fibres are present (and readily sampled) in both developed and more pristine estuarine systems.

  10. Inference on the Strength of Balancing Selection for Epistatically Interacting Loci

    PubMed Central

    Buzbas, Erkan Ozge; Joyce, Paul; Rosenberg, Noah A.

    2011-01-01

    Existing inference methods for estimating the strength of balancing selection in multi-locus genotypes rely on the assumption that there are no epistatic interactions between loci. Complex systems in which balancing selection is prevalent, such as sets of human immune system genes, are known to contain components that interact epistatically. Therefore, current methods may not produce reliable inference on the strength of selection at these loci. In this paper, we address this problem by presenting statistical methods that can account for epistatic interactions in making inference about balancing selection. A theoretical result due to Fearnhead (2006) is used to build a multi-locus Wright-Fisher model of balancing selection, allowing for epistatic interactions among loci. Antagonistic and synergistic types of interactions are examined. The joint posterior distribution of the selection and mutation parameters is sampled by Markov chain Monte Carlo methods, and the plausibility of models is assessed via Bayes factors. As a component of the inference process, an algorithm to generate multi-locus allele frequencies under balancing selection models with epistasis is also presented. Recent evidence on interactions among a set of human immune system genes is introduced as a motivating biological system for the epistatic model, and data on these genes are used to demonstrate the methods. PMID:21277883
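    As background to the model the authors extend, a minimal single-locus Wright-Fisher simulation with heterozygote advantage (symmetric balancing selection, no epistasis) can be sketched as follows; the fitness and population parameters are hypothetical:

```python
import random

def next_freq_deterministic(p, s):
    # Heterozygote advantage: w_AA = w_aa = 1 - s, w_Aa = 1
    q = 1.0 - p
    w_bar = (1 - s) * p * p + 2 * p * q + (1 - s) * q * q
    return (p * p * (1 - s) + p * q) / w_bar

def wright_fisher(p0, s, n, generations, seed=0):
    # Binomial drift (2n gametes) around the deterministic selection update
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        p_sel = next_freq_deterministic(p, s)
        p = sum(rng.random() < p_sel for _ in range(2 * n)) / (2 * n)
        if p in (0.0, 1.0):
            break  # absorbed: allele lost or fixed
    return p

# Symmetric balancing selection pulls the frequency toward 0.5 despite drift
print(wright_fisher(p0=0.2, s=0.3, n=500, generations=200))
```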

  11. The K-selected Butcher-Oemler Effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanford, S A; De Propris, R; Dickinson, M

    2004-03-02

    We investigate the Butcher-Oemler effect using samples of galaxies brighter than observed-frame K* + 1.5 in 33 clusters at 0.1 ≲ z ≲ 0.9. We attempt to duplicate as closely as possible the methodology of Butcher & Oemler. Apart from selecting in the K-band, the most important difference is that we use a brightness limit fixed at 1.5 magnitudes below an observed-frame K* rather than the nominal limit of rest-frame M(V) = -20 used by Butcher & Oemler. For an early-type galaxy at z = 0.1 our sample cutoff is 0.2 magnitudes brighter than rest-frame M(V) = -20, while at z = 0.9 our cutoff is 0.9 magnitudes brighter. If the blue galaxies tend to be faint, then the difference in magnitude limits should result in our measuring lower blue fractions. A more minor difference from the Butcher & Oemler methodology is that the area covered by our galaxy samples has a radius of 0.5 or 0.7 Mpc at all redshifts rather than R_30, the radius containing 30% of the cluster population. In practice our field sizes are generally similar to those used by Butcher & Oemler. We find the fraction of blue galaxies in our K-selected samples to be lower on average than that derived from several optically selected samples, and that it shows little trend with redshift. However, at the redshifts z < 0.6 where our sample overlaps with that of Butcher & Oemler, the difference in f_B as determined from our K-selected samples and those of Butcher & Oemler is much reduced. The large scatter in the measured f_B, even in small redshift ranges, in our study indicates that determining f_B for a much larger sample of clusters from K-selected galaxy samples is important. As a test of our methods, our data allow us to construct optically selected samples down to rest-frame M(V) = -20, as used by Butcher & Oemler, for four clusters that are common between our sample and that of Butcher & Oemler. For these rest-frame V-selected samples, we find similar fractions of blue galaxies to Butcher & Oemler, while the K-selected samples for the same four clusters yield blue fractions which are typically half as large. This comparison indicates that selecting in the K-band is the primary difference between our study and previous optically based studies of the Butcher & Oemler effect. Selecting in the observed K-band is more nearly a process of selecting galaxies by their mass than is the case for optically selected samples. Our results suggest that the Butcher-Oemler effect is at least partly due to low-mass galaxies whose optical luminosities are boosted. These lower-mass galaxies could evolve into the rich dwarf population observed in nearby clusters.

  12. Methods and apparatus for measurement of a dimensional characteristic and methods of predictive modeling related thereto

    DOEpatents

    Robertson, Eric P [Idaho Falls, ID]; Christiansen, Richard L [Littleton, CO]

    2007-05-29

    A method of optically determining a change in magnitude of at least one dimensional characteristic of a sample in response to a selected chamber environment. A magnitude of at least one dimension of the at least one sample may be optically determined subsequent to altering the at least one environmental condition within the chamber. A maximum change in dimension of the at least one sample may be predicted. A dimensional measurement apparatus for indicating a change in at least one dimension of at least one sample. The dimensional measurement apparatus may include a housing with a chamber configured for accommodating pressure changes and an optical perception device for measuring a dimension of at least one sample disposed in the chamber. Methods of simulating injection of a gas into a subterranean formation, injecting gas into a subterranean formation, and producing methane from a coal bed are also disclosed.

  13. Methods for measurement of a dimensional characteristic and methods of predictive modeling related thereto

    DOEpatents

    Robertson, Eric P; Christiansen, Richard L.

    2007-10-23

    A method of optically determining a change in magnitude of at least one dimensional characteristic of a sample in response to a selected chamber environment. A magnitude of at least one dimension of the at least one sample may be optically determined subsequent to altering the at least one environmental condition within the chamber. A maximum change in dimension of the at least one sample may be predicted. A dimensional measurement apparatus for indicating a change in at least one dimension of at least one sample. The dimensional measurement apparatus may include a housing with a chamber configured for accommodating pressure changes and an optical perception device for measuring a dimension of at least one sample disposed in the chamber. Methods of simulating injection of a gas into a subterranean formation, injecting gas into a subterranean formation, and producing methane from a coal bed are also disclosed.

  14. A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.

    PubMed

    Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo

    2018-04-01

    Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with low computational complexity. When the data dimension is high, an approximate, rather than exact, convex hull may be selected by setting an appropriate termination condition, in order to delete more nonimportant samples. In addition, the impact of outliers is considered, and the proposed algorithm is improved by deleting the outliers in an initial procedure. Furthermore, a dimension conversion technique via the kernel trick is used to handle nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines trained on the selected approximate convex hull vertices and on all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.
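
    To illustrate the idea behind CHVS (not the paper's fast algorithm), the sketch below keeps only the 2-D convex hull vertices of a point set and discards interior points, which a max-margin classifier does not need; it uses Andrew's monotone chain, a standard exact method, as a stand-in for the projection-based selection described above.

```python
# Illustrative convex-hull vertex selection in 2-D via Andrew's monotone
# chain (a standard algorithm; not the projection-based method of the
# abstract). Per class, only hull vertices would be kept for training.

def cross(o, a, b):
    """Cross product of vectors o->a and o->b (positive = left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_vertices(points):
    """Return the convex hull vertices in counterclockwise order,
    dropping collinear and interior points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# A 3x3 grid of 9 samples collapses to its 4 corners: the other 5 points
# are interior or collinear and can be deleted without moving the hull.
grid = [(x, y) for x in range(3) for y in range(3)]
print(sorted(hull_vertices(grid)))  # [(0, 0), (0, 2), (2, 0), (2, 2)]
```

    The NP-hardness mentioned in the abstract arises in high dimensions, where exact hull computation blows up; that is where the approximate-hull termination condition becomes valuable.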

  15. Evolution of synchrotron-radiation-based Mössbauer absorption spectroscopy for various isotopes

    NASA Astrophysics Data System (ADS)

    Seto, Makoto; Masuda, Ryo; Kobayashi, Yasuhiro; Kitao, Shinji; Kurokuzu, Masayuki; Saito, Makina; Hosokawa, Shuuich; Ishibashi, Hiroki; Mitsui, Takaya; Yoda, Yoshitaka; Mibu, Ko

    2017-11-01

    Synchrotron-radiation-based Mössbauer spectroscopy, which yields absorption-type Mössbauer spectra, has been applied to various isotopes. This method enables advanced measurements that exploit the excellent features of synchrotron radiation, such as Mössbauer spectroscopy under high pressure. Furthermore, the energy selectivity of synchrotron radiation allows us to measure 40K Mössbauer spectra, whose observation is impossible with ordinary radioactive sources because the first excited state of 40K is not populated by any radioactive parent nuclide. Moreover, the method offers flexibility in the experimental setup: the measured sample can be used as a transmitter or a scatterer, depending on the sample conditions. To enhance the measurement efficiency of the spectroscopy, we developed a detection system in which a windowless avalanche photodiode (APD) detector is combined with a vacuum cryostat to detect internal conversion electrons in addition to the X-rays accompanying nuclear de-excitation. In particular, by selecting the emission from the scatterer sample, depth-selective synchrotron-radiation-based Mössbauer spectroscopy is possible. Furthermore, limiting the time window of the delayed components enables us to obtain narrow linewidths in the Mössbauer spectra. A measurement system that records velocity-dependent time spectra and energy information simultaneously realizes both depth-selective and narrow-linewidth measurements.

  16. Comparison of preprocessing methods and storage times for touch DNA samples

    PubMed Central

    Dong, Hui; Wang, Jing; Zhang, Tao; Ge, Jian-ye; Dong, Ying-qiang; Sun, Qi-fan; Liu, Chao; Li, Cai-xia

    2017-01-01

    Aim: To select appropriate preprocessing methods for different substrates by comparing the effects of four preprocessing methods on touch DNA samples, and to determine the effect of various storage times on the results of touch DNA sample analysis. Method: Hand touch DNA samples were used to investigate the detection and inspection results of DNA on different substrates. Four preprocessing methods were compared: the direct cutting method, the stubbing procedure, the double swab technique, and the vacuum cleaner method. DNA was extracted from mock samples with each of the four preprocessing methods. The best protocol identified in the study was then used to compare performance after various storage times. DNA extracted from all samples was quantified and amplified using standard procedures. Results: The amounts of DNA and the numbers of alleles detected on the porous substrates were greater than those on the non-porous substrates. The performance of the four preprocessing methods varied with the substrate. The direct cutting method displayed advantages for porous substrates, and the vacuum cleaner method was advantageous for non-porous substrates. No significant degradation trend was observed as the storage time increased. Conclusion: Different substrates require different preprocessing methods in order to obtain the highest DNA amount and allele number from touch DNA samples. This study provides a theoretical basis for the exploration of touch DNA samples and may be used as a reference when dealing with touch DNA samples in casework. PMID:28252870

  17. Optimum allocation for a dual-frame telephone survey.

    PubMed

    Wolter, Kirk M; Tao, Xian; Montgomery, Robert; Smith, Philip J

    2015-12-01

    Careful design of a dual-frame random digit dial (RDD) telephone survey requires selecting from among many options that have varying impacts on cost, precision, and coverage in order to obtain the best possible implementation of the study goals. One such consideration is whether to screen cell-phone households in order to interview cell-phone-only (CPO) households and exclude dual-user households, or to take all interviews obtained via the cell-phone sample. We present a framework in which to consider the tradeoffs between these two options and a method to select the optimal design. We derive and discuss the optimum allocation of sample size between the two sampling frames and explore the choice of the optimum p, the mixing parameter for the dual-user domain. We illustrate our methods using the National Immunization Survey, sponsored by the Centers for Disease Control and Prevention.
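
    The general shape of a cost-constrained allocation between two frames can be sketched with the classic Neyman-style result, under which the interview count for frame i is proportional to W_i·S_i/√c_i (population share, standard deviation, and per-interview cost). This is a textbook simplification, not the paper's derivation, and all numbers below are illustrative rather than National Immunization Survey values.

```python
import math

# Hedged sketch of cost-constrained optimum allocation across two sampling
# frames (landline and cell). Minimizing variance for a fixed budget gives
# n_i proportional to W_i * S_i / sqrt(c_i); the constant is set so the
# per-interview costs exactly exhaust the budget.

def optimum_allocation(budget, frames):
    """frames: name -> (W share, S std dev, c unit cost).
    Returns interview counts n_i with sum(c_i * n_i) == budget."""
    denom = sum(W * S * math.sqrt(c) for W, S, c in frames.values())
    return {name: budget * (W * S / math.sqrt(c)) / denom
            for name, (W, S, c) in frames.items()}

# Illustrative inputs: cell interviews cost more per completion, but the
# cell frame is larger and more variable, so it still gets more sample.
alloc = optimum_allocation(
    budget=100_000,
    frames={"landline": (0.4, 0.45, 20.0), "cell": (0.6, 0.50, 35.0)},
)
print({name: round(n) for name, n in alloc.items()})
```

    The screen-versus-take-all decision discussed above changes the inputs to such an allocation: screening raises the effective cost per usable cell interview, while take-all changes S and W for the cell frame through the dual-user domain and its mixing parameter p.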

  18. Methods for collecting benthic invertebrate samples as part of the National Water-Quality Assessment Program

    USGS Publications Warehouse

    Cuffney, Thomas F.; Gurtz, Martin E.; Meador, Michael R.

    1993-01-01

    Benthic invertebrate communities are evaluated as part of the ecological survey component of the U.S. Geological Survey's National Water-Quality Assessment Program. These biological data are collected along with physical and chemical data to assess water-quality conditions and to develop an understanding of the factors that affect water-quality conditions locally, regionally, and nationally. The objectives of benthic invertebrate community characterizations are to (1) develop for each site a list of taxa within the associated stream reach and (2) determine the structure of benthic invertebrate communities within selected habitats of that reach. A nationally consistent approach is used to achieve these objectives. This approach provides guidance on site, reach, and habitat selection, and on methods and equipment for qualitative multihabitat sampling and semi-quantitative single-habitat sampling. Appropriate quality-assurance and quality-control guidelines are used to maximize the ability to analyze data within and among study units.

  19. Origin and Correction of Magnetic Field Inhomogeneity at the Interface in Biphasic NMR Samples

    PubMed Central

    Martin, Bryan T.; Chingas, G. C.

    2012-01-01

    The use of susceptibility matching to minimize spectral distortion of biphasic samples layered in a standard 5 mm NMR tube is described. The approach uses magic angle spinning (MAS) to first extract chemical shift differences by suppressing bulk magnetization. Then, using biphasic coaxial samples, magnetic susceptibilities are matched by titration with a paramagnetic salt. The matched phases are then layered in a standard NMR tube where they can be shimmed and examined. Line widths of two distinct spectral lines, selected to characterize homogeneity in each phase, are simultaneously optimized. Two-dimensional distortion-free, slice-resolved spectra of an octanol/water system illustrate the method. These data are obtained using a 2D stepped-gradient pulse sequence devised for this application. Advantages of this sequence over slice-selective methods are that acquisition efficiency is increased and processing requires only conventional software. PMID:22459062

  20. Methods and devices for high-throughput dielectrophoretic concentration

    DOEpatents

    Simmons, Blake A.; Cummings, Eric B.; Fiechtner, Gregory J.; Fintschenko, Yolanda; McGraw, Gregory J.; Salmi, Allen

    2010-02-23

    Disclosed herein are methods and devices for assaying and concentrating analytes in a fluid sample using dielectrophoresis. As disclosed, the methods and devices utilize substrates having a plurality of pores through which analytes can be selectively prevented from passing, or inhibited, upon application of an appropriate electric field waveform. The pores of the substrate produce nonuniform electric fields with local extrema located near the pores. These nonuniform fields drive the dielectrophoresis that produces the inhibition. Arrangements of electrodes and porous substrates support continuous, bulk, multi-dimensional, and staged selective concentration.
