Methodology Series Module 5: Sampling Strategies
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select an appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers or flipping a coin); and 2) non-probability sampling, based on the researcher's choice or on the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as simple random sampling or stratified random sampling) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (for example, by using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results; in such a scenario, the researcher may want to use purposive sampling for the study. PMID:27688438
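The distinction this module draws between probability and non-probability sampling can be sketched in a few lines of code. This is an illustrative sketch only: the patient population, the sex-based strata and the sample sizes are hypothetical, and `random.sample` stands in for any chance mechanism.

```python
import random

def simple_random_sample(population, n, seed=0):
    """Probability sampling: every element has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def stratified_random_sample(population, stratum_of, n_per_stratum, seed=0):
    """Probability sampling within strata: a random draw from each subgroup."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(stratum_of(unit), []).append(unit)
    return {s: rng.sample(units, min(n_per_stratum, len(units)))
            for s, units in strata.items()}

def convenience_sample(population, n):
    """Non-probability sampling: simply take the most accessible units."""
    return population[:n]

# Hypothetical clinic roster: 100 patients, alternating sex.
patients = [{"id": i, "sex": "F" if i % 2 else "M"} for i in range(100)]
srs = simple_random_sample(patients, 10)
strat = stratified_random_sample(patients, lambda p: p["sex"], 5)
conv = convenience_sample(patients, 10)
```

Note how the convenience sample is fully determined by accessibility (the first ten records), which is exactly why it must not be reported as a 'random sample'.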
Toward cost-efficient sampling methods
NASA Astrophysics Data System (ADS)
Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie
2015-09-01
The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small fraction of high-degree vertices can carry most of the structural information of a complex network. The two proposed sampling methods are efficient at sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method is developed on the basis of the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods in three commonly used simulated networks (a scale-free network, a random network and a small-world network), and also in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
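The snowball idea that the second proposed method builds on can be illustrated with a minimal sketch. The toy graph, seed choice and wave/branching parameters below are hypothetical, and the code shows only plain snowball sampling (SBS), not the paper's improved variant.

```python
import random

def snowball_sample(adj, seeds, waves, k, seed=0):
    """Snowball sampling: start from seed nodes and, for a fixed number of
    waves, add up to k randomly chosen unseen neighbours of each node
    sampled in the previous wave."""
    rng = random.Random(seed)
    sampled = set(seeds)
    frontier = list(seeds)
    for _ in range(waves):
        nxt = []
        for node in frontier:
            unseen = [v for v in adj[node] if v not in sampled]
            for v in rng.sample(unseen, min(k, len(unseen))):
                sampled.add(v)
                nxt.append(v)
        frontier = nxt
    return sampled

# Toy star-like network: node 0 is the hub (the highest-degree node),
# so snowballing from a leaf reaches it in one wave.
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
sample = snowball_sample(adj, seeds=[1], waves=2, k=3)
```

The sketch makes the paper's intuition concrete: chain-referral through the graph pulls in high-degree hubs quickly, even at a low sampling rate.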
40 CFR Appendix I to Part 261 - Representative Sampling Methods
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 25 2010-07-01 2010-07-01 false Representative Sampling Methods I...—Representative Sampling Methods The methods and equipment used for sampling waste materials will vary with the form and consistency of the waste materials to be sampled. Samples collected using the sampling...
40 CFR Appendix I to Part 261 - Representative Sampling Methods
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 26 2011-07-01 2011-07-01 false Representative Sampling Methods I...—Representative Sampling Methods The methods and equipment used for sampling waste materials will vary with the form and consistency of the waste materials to be sampled. Samples collected using the sampling...
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, and to increase the precision, efficiency and economy of the snail survey, a 50 m × 50 m quadrate experimental field was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-covered method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach of lower cost and higher precision for the snail survey.
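As a rough illustration of how absolute and relative sampling errors can be estimated for a quadrat survey like this one, here is a hedged sketch using simple random sampling; the snail counts are simulated, not the study's data, and the error definitions (deviation of the sample mean from the whole-field mean) are a common convention the abstract does not spell out.

```python
import random

def sampling_errors(values, n, seed=0):
    """Absolute and relative sampling error of a simple random sample:
    how far the sample mean falls from the known whole-field mean."""
    rng = random.Random(seed)
    true_mean = sum(values) / len(values)
    sample = rng.sample(values, n)
    sample_mean = sum(sample) / n
    absolute = abs(sample_mean - true_mean)
    relative = absolute / true_mean
    return absolute, relative

# Hypothetical snail counts for the 2500 one-metre quadrats of a
# 50 m x 50 m field (a whole-covered survey gives every count).
rng = random.Random(42)
counts = [rng.randint(0, 5) for _ in range(2500)]
abs_err, rel_err = sampling_errors(counts, n=300)
```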
Archfield, Stacey A.; LeBlanc, Denis R.
2005-01-01
To evaluate diffusion sampling as an alternative method to monitor volatile organic compound (VOC) concentrations in ground water, concentrations in samples collected by traditional pumped-sampling methods were compared to concentrations in samples collected by diffusion-sampling methods for 89 monitoring wells at or near the Massachusetts Military Reservation, Cape Cod. Samples were analyzed for 36 VOCs. There was no substantial difference between the utility of diffusion and pumped samples to detect the presence or absence of a VOC. In wells where VOCs were detected, diffusion-sample concentrations of tetrachloroethene (PCE) and trichloroethene (TCE) were significantly lower than pumped-sample concentrations. Because PCE and TCE concentrations detected in the wells dominated the calculation of many of the total VOC concentrations, when VOC concentrations were summed and compared by sampling method, visual inspection also showed a downward concentration bias in the diffusion-sample concentration. The degree to which pumped- and diffusion-sample concentrations agreed was not a result of variability inherent within the sampling methods or the diffusion process itself. A comparison of the degree of agreement in the results from the two methods to 13 quantifiable characteristics external to the sampling methods offered only well-screen length as being related to the degree of agreement between the methods; however, there is also evidence to indicate that the flushing rate of water through the well screen affected the agreement between the sampling methods. Despite poor agreement between the concentrations obtained by the two methods at some wells, the degree to which the concentrations agree at a given well is repeatable. A one-time, well-by-well comparison between diffusion- and pumped-sampling methods could determine which wells are good candidates for the use of diffusion samplers.
For wells with good method agreement, the diffusion-sampling method is a time-saving and cost-effective alternative to pumped-sampling methods in a long-term monitoring program, such as at the Massachusetts Military Reservation.
Modified electrokinetic sample injection method in chromatography and electrophoresis analysis
Davidson, J. Courtney; Balch, Joseph W.
2001-01-01
A sample injection method for horizontally configured multiple chromatography or electrophoresis units, each containing a number of separation/analysis channels, that enables efficient introduction of analyte samples. This loading method, used in conjunction with horizontal microchannels, allows much reduced sample volumes and provides a means of sample stacking that greatly reduces the amount of sample required. This reduction in the amount of sample can lead to great cost savings in sample preparation, particularly in massively parallel applications such as DNA sequencing. The essence of this method lies in the preparation of the input of the separation channel, the physical sample introduction, and the subsequent removal of excess material. By this method, sample volumes of 100 nanoliters to 2 microliters have been used successfully, compared to the typical 5 microliters of sample required by the prior separation/analysis method.
Evaluation of respondent-driven sampling.
McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G
2012-01-01
Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. 
Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling, and caution is required when interpreting findings based on this method.
A new sampling method for fibre length measurement
NASA Astrophysics Data System (ADS)
Wu, Hongyan; Li, Xianghong; Zhang, Junying
2018-06-01
This paper presents a new sampling method for fibre length measurement. The new method satisfies the three features required of an effective sampling method, and it produces a beard with two symmetrical ends that can be scanned from the holding line to obtain two full fibrograms for each sample. The methodology is introduced, and experiments were performed to investigate the effectiveness of the new method. The results show that the new sampling method is effective.
Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B.; Pereira, Nuno Sousa; Behrman, Jere
2011-01-01
Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling methods: the traditional probability sampling method, the compact segment sampling method, and the World Health Organization's Expanded Programme on Immunization (EPI) sampling method. Three mechanisms by which the methods could differ are tested: the proximity selection of respondents, the at-home selection of respondents, and the use of inaccurate probability weights. The results highlight the importance of revisits and accurate probability weights, but the lesser effect of proximity selection on the samples' statistical properties. PMID:22582004
Log sampling methods and software for stand and landscape analyses.
Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough
2008-01-01
We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...
Methods for purifying carbon materials
Dailly, Anne [Pasadena, CA; Ahn, Channing [Pasadena, CA; Yazami, Rachid [Los Angeles, CA; Fultz, Brent T [Pasadena, CA
2009-05-26
Methods of purifying samples are provided that are capable of removing carbonaceous and noncarbonaceous impurities from a sample containing a carbon material having a selected structure. Purification methods are provided for removing residual metal catalyst particles enclosed in multilayer carbonaceous impurities in samples generated by catalytic synthesis methods. Purification methods are provided wherein carbonaceous impurities in a sample are at least partially exfoliated, thereby facilitating subsequent removal of carbonaceous and noncarbonaceous impurities from the sample. Methods of purifying carbon nanotube-containing samples are provided wherein an intercalant is added to the sample and subsequently reacted with an exfoliation initiator to achieve exfoliation of carbonaceous impurities.
A new design of groundwater sampling device and its application.
Tsai, Yih-jin; Kuo, Ming-ching T
2005-01-01
Compounds in the atmosphere can contaminate samples of groundwater. An inexpensive and simple method for collecting groundwater samples was developed to prevent contamination when the background concentration of contaminants is high. The new design of groundwater sampling device involves a glass sampling bottle with a Teflon-lined valve at each end. A cleaned and dried sampling bottle was connected to a low flow-rate peristaltic pump with Teflon tubing and was filled with water. No headspace volume remained in the sampling bottle. The sampling bottle was then packed in a PVC bag to prevent the target component from infiltrating into the water sample through the valves. In this study, groundwater was sampled at six wells using both the conventional method and the improved method. The analysis of trichlorofluoromethane (CFC-11) concentrations at these six wells indicates that all the groundwater samples obtained by the conventional sampling method were contaminated by CFC-11 from the atmosphere. The improved sampling method largely eliminated the problems of contamination, preservation and quantitative analysis of natural water.
Coes, Alissa L.; Paretti, Nicholas V.; Foreman, William T.; Iverson, Jana L.; Alvarez, David A.
2014-01-01
A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19–23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method.
Evaluating fungal contamination indoors is complicated because of the many different sampling methods utilized. In this study, fungal contamination was evaluated using five sampling methods and four matrices for results. The five sampling methods were a 48 hour indoor air sample ...
Wroble, Julie; Frederick, Timothy; Frame, Alicia; Vallero, Daniel
2017-01-01
Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. The Environmental Protection Agency's (EPA) Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than the others. Samples were collected using both traditional discrete ("grab") samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods, or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations where it was not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos yielded no clear conclusions regarding a preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than in discrete samples.
Sampling techniques for thrips (Thysanoptera: Thripidae) in preflowering tomato.
Joost, P Houston; Riley, David G
2004-08-01
Sampling techniques for thrips (Thysanoptera: Thripidae) were compared in preflowering tomato plants at the Coastal Plain Experiment Station in Tifton, GA, in 2000 and 2003, to determine the most effective method of determining abundance of thrips on tomato foliage early in the growing season. Three relative sampling techniques, including a standard insect aspirator, a 946-ml beat cup, and an insect vacuum device, were compared for accuracy to an absolute method and to themselves for precision and efficiency of sampling thrips. Thrips counts of all relative sampling methods were highly correlated (R > 0.92) to the absolute method. The aspirator method was the most accurate compared with the absolute sample according to regression analysis in 2000. In 2003, all sampling methods were considered accurate according to Dunnett's test, but thrips numbers were lower and sample variation was greater than in 2000. In 2000, the beat cup method had the lowest relative variation (RV) or best precision, at 1 and 8 d after transplant (DAT). Only the beat cup method had RV values <25 for all sampling dates. In 2003, the beat cup method had the lowest RV value at 15 and 21 DAT. The beat cup method also was the most efficient method for all sample dates in both years. Frankliniella fusca (Pergande) was the most abundant thrips species on the foliage of preflowering tomato in both years of study at this location. Overall, the best thrips sampling technique tested was the beat cup method in terms of precision and sampling efficiency.
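Relative variation, the precision measure used to compare the thrips sampling techniques above, is commonly defined as RV = 100 × SE / mean, with values below 25 often taken as adequate precision; the abstract does not spell out its formula, so that definition is assumed here, and the thrips counts below are made up for illustration.

```python
import math

def relative_variation(counts):
    """Relative variation, a common precision measure in insect sampling:
    RV = 100 * SE / mean, where SE is the standard error of the mean.
    (Definition assumed; the abstract does not state its formula.)"""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((x - mean) ** 2 for x in counts) / (n - 1)  # sample variance
    se = math.sqrt(var / n)
    return 100 * se / mean

# Hypothetical thrips counts from 10 beat-cup samples on one date.
counts = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]
rv = relative_variation(counts)
```

A lower RV means a more precise (less variable) sampling technique, which is the sense in which the beat cup method outperformed the aspirator and vacuum methods.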
Comparison of Techniques for Sampling Adult Necrophilous Insects From Pig Carcasses.
Cruise, Angela; Hatano, Eduardo; Watson, David W; Schal, Coby
2018-02-06
Studies of the pre-colonization interval and mechanisms driving necrophilous insect ecological succession depend on effective sampling of adult insects and knowledge of their diel and successional activity patterns. The number of insects trapped, their diversity, and diel periodicity were compared across four sampling methods on neonate pigs. Sampling method, time of day, and decomposition age of the pigs significantly affected the number of insects sampled from pigs. We also found significant interactions between sampling method and decomposition day, and between time of sampling and decomposition day. No single method was superior to the other methods during all three decomposition days. Sampling times after noon yielded the largest samples during the first 2 d of decomposition. On day 3 of decomposition, however, all sampling times were equally effective. Therefore, to maximize insect collections from neonate pigs, the method used to sample must vary by decomposition day. The suction trap collected the most species-rich samples, but sticky trap samples were the most diverse when both species richness and evenness were factored into a Shannon diversity index. Repeated sampling during the noon to 18:00 hours period was most effective to obtain the maximum diversity of trapped insects. The integration of multiple sampling techniques would most effectively sample the necrophilous insect community. However, because all four tested methods were deficient at sampling beetle species, future work should focus on optimizing the most promising methods, alone or in combinations, and incorporate hand-collections of beetles.
Sampling Methods in Cardiovascular Nursing Research: An Overview.
Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie
2014-01-01
Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion in the research study, and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
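Two of the probability designs this overview lists, systematic and cluster sampling, can be sketched as follows; the patient list and ward clusters are hypothetical, and the fixed start point and seed are for reproducibility only (in practice the start and cluster draw would be randomized).

```python
import random

def systematic_sample(population, k, start=0):
    """Systematic sampling: take every k-th element from a chosen start."""
    return population[start::k]

def cluster_sample(clusters, n_clusters, seed=0):
    """Cluster sampling: randomly select whole clusters and keep every
    element inside the chosen clusters."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)
    return [unit for c in chosen for unit in clusters[c]]

# Hypothetical roster of 100 patients and 10 hospital wards of 10 each.
patients = list(range(100))
sys_sample = systematic_sample(patients, k=10, start=3)  # patients 3, 13, 23, ...
wards = {w: [w * 10 + i for i in range(10)] for w in range(10)}
clus_sample = cluster_sample(wards, n_clusters=2)
```

Cluster sampling trades some precision for practicality (only two wards need visiting), which is the kind of advantage/disadvantage trade-off the column asks readers to weigh.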
Face recognition based on symmetrical virtual image and original training image
NASA Astrophysics Data System (ADS)
Ke, Jingcheng; Peng, Yali; Liu, Shigang; Li, Jun; Pei, Zhao
2018-02-01
In face representation-based classification methods, a high recognition rate can be obtained if a face has enough available training samples. In practical applications, however, only limited training samples are available. To obtain enough training samples, many methods simultaneously use the original training samples and corresponding virtual samples to strengthen the ability to represent the test sample. One approach directly uses the original training samples and corresponding mirror samples to recognize the test sample. However, when the test sample is nearly symmetrical while the original training samples are not, the combination of the original training and mirror samples might not represent the test sample well. To tackle this problem, in this paper we propose a novel method that generates virtual samples by averaging the original training samples and the corresponding mirror samples. The original training samples and the virtual samples are then combined to recognize the test sample. Experimental results on five face databases show that the proposed method is able to partly overcome the challenges posed by the various poses, facial expressions and illuminations of the original face images.
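The virtual-sample construction described here, averaging an image with its horizontal mirror, can be sketched with a toy array. Note that the average is symmetric by construction, which matches the paper's motivation of handling nearly symmetrical test faces; the 2×3 array below is made up, not real face data.

```python
import numpy as np

def mirror_image(img):
    """Horizontally mirror a face image (flip left-right)."""
    return img[:, ::-1]

def averaged_virtual_image(img):
    """Virtual sample as described in the abstract: the pixel-wise average
    of an original training image and its mirror image."""
    return (img + mirror_image(img)) / 2.0

# Toy 2x3 'image' standing in for a grayscale face crop.
img = np.array([[1.0, 2.0, 3.0],
                [4.0, 5.0, 6.0]])
virtual = averaged_virtual_image(img)
```

In the full method, both `img` and `virtual` (for every training face) would be stacked into the dictionary used to represent the test sample.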
Comparability of river suspended-sediment sampling and laboratory analysis methods
Groten, Joel T.; Johnson, Gregory D.
2018-03-06
Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low; the difference attributable to laboratory analysis methods was slightly greater than that attributable to field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream, and that the difference is smaller between grab samples analyzed for TSS and the fine fraction of SSC. Even though differences are present, the strong correlations between SSC and TSS concentrations provide the opportunity to develop site-specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.
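The closing point about site-specific relations amounts to fitting a regression of SSC on TSS at each site. The paired values below are hypothetical, invented for demonstration; the study's actual data and fitted coefficients are not reproduced here:

```python
import numpy as np

# Hypothetical paired measurements at one site (mg/L); TSS runs low
# relative to SSC because sand is under-captured by grab/TSS methods.
tss = np.array([20., 35., 50., 80., 120., 150.])
ssc = np.array([30., 52., 78., 125., 190., 240.])

# Ordinary least squares: SSC ~ slope * TSS + intercept.
slope, intercept = np.polyfit(tss, ssc, 1)

# Use the site-specific relation to adjust a new TSS measurement.
predicted_ssc = slope * 100.0 + intercept
print(round(slope, 2), round(predicted_ssc, 1))
```

A slope well above 1 is what the abstract's "biased substantially low" finding would look like in such a relation.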
Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei
2015-01-01
A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites by row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs of different shapes and sizes by four sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate by row and column was infinity, relative accuracy was 99.50-99.89%, reproducibility (RSD) was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method was easy to operate, and the selected samples were distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility and can be put into use in drug analysis.
Gutiérrez-Fonseca, Pablo E; Lorion, Christopher M
2014-04-01
The use of aquatic macroinvertebrates as bio-indicators in water quality studies has increased considerably over the last decade in Costa Rica, and standard biomonitoring methods have now been formulated at the national level. Nevertheless, questions remain about the effectiveness of different methods of sampling freshwater benthic assemblages, and how sampling intensity may influence biomonitoring results. In this study, we compared the results of qualitative sampling using commonly applied methods with a more intensive quantitative approach at 12 sites in small, lowland streams on the southern Caribbean slope of Costa Rica. Qualitative samples were collected following the official protocol using a strainer during a set time period and macroinvertebrates were field-picked. Quantitative sampling involved collecting ten replicate Surber samples and picking out macroinvertebrates in the laboratory with a stereomicroscope. The strainer sampling method consistently yielded fewer individuals and families than quantitative samples. As a result, site scores calculated using the Biological Monitoring Working Party-Costa Rica (BMWP-CR) biotic index often differed greatly depending on the sampling method. Site water quality classifications using the BMWP-CR index differed between the two sampling methods for 11 of the 12 sites in 2005, and for 9 of the 12 sites in 2006. Sampling intensity clearly had a strong influence on BMWP-CR index scores, as well as perceived differences between reference and impacted sites. Achieving reliable and consistent biomonitoring results for lowland Costa Rican streams may demand intensive sampling and requires careful consideration of sampling methods.
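The sensitivity of a BMWP-type score to sampling effort follows directly from how the index is computed: each family detected adds its tolerance score, so a method that field-picks fewer families yields a lower site score. A sketch with illustrative tolerance values (not the official BMWP-CR table):

```python
# Illustrative family tolerance scores (hypothetical, not the official
# BMWP-CR table): sensitive families score high, tolerant families low.
scores = {"Perlidae": 10, "Leptophlebiidae": 9, "Hydropsychidae": 5,
          "Baetidae": 4, "Chironomidae": 2}

def bmwp_score(families_found):
    # The site score sums over families present, regardless of abundance,
    # so detecting fewer families directly lowers the score.
    return sum(scores[f] for f in families_found if f in scores)

quantitative = ["Perlidae", "Leptophlebiidae", "Hydropsychidae",
                "Baetidae", "Chironomidae"]   # intensive Surber sampling
strainer = ["Baetidae", "Chironomidae"]       # fewer families field-picked
print(bmwp_score(quantitative), bmwp_score(strainer))  # 30 6
```

With threshold-based water quality classes, a gap of this size can move a site into a different class, which is the pattern the study reports.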
Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.
2006-02-14
Methods and apparatus for simultaneous or sequential, rapid analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically positioned near the sample cells. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.
Monte Carlo approaches to sampling forested tracts with lines or points
Harry T. Valentine; Jeffrey H. Gove; Timothy G. Gregoire
2001-01-01
Several line- and point-based sampling methods can be employed to estimate the aggregate dimensions of trees standing on a forested tract or pieces of coarse woody debris lying on the forest floor. Line methods include line intersect sampling, horizontal line sampling, and transect relascope sampling; point methods include variable- and fixed-radius plot sampling, and...
Monitoring benthic algal communities: A comparison of targeted and coefficient sampling methods
Edwards, Matthew S.; Tinker, M. Tim
2009-01-01
Choosing an appropriate sample unit is a fundamental decision in the design of ecological studies. While numerous methods have been developed to estimate organism abundance, they differ in cost, accuracy, and precision. Using both field data and computer simulation modeling, we evaluated the costs and benefits associated with two methods commonly used to sample benthic organisms in temperate kelp forests. One of these methods, the Targeted Sampling method, relies on different sample units, each "targeted" for a specific species or group of species, while the other method relies on coefficients that represent ranges of bottom cover obtained from visual estimates within standardized sample units. Both the field data and the computer simulations suggest that the two methods yield remarkably similar estimates of organism abundance and among-site variability, although the Coefficient method slightly underestimates variability among sample units when abundances are low. In contrast, the two methods differ considerably in the effort needed to sample these communities; Targeted Sampling requires more time and twice the personnel to complete. We conclude that the Coefficient Sampling method may be better for environmental monitoring programs where changes in mean abundance are of central concern and resources are limited, but that the Targeted Sampling method may be better for ecological studies where quantitative relationships among species and small-scale variability in abundance are of central concern.
Maes, Sharon; Huu, Son Nguyen; Heyndrickx, Marc; Weyenberg, Stephanie van; Steenackers, Hans; Verplaetse, Alex; Vackier, Thijs; Sampers, Imca; Raes, Katleen; Reu, Koen De
2017-12-01
Biofilms are an important source of contamination in food companies, yet the composition of biofilms in practice is still mostly unknown. The chemical and microbiological characterization of surface samples taken after cleaning and disinfection is very important to distinguish free-living bacteria from the attached bacteria in biofilms. In this study, sampling methods that are potentially useful for both chemical and microbiological analyses of surface samples were evaluated. In the manufacturing facilities of eight Belgian food companies, surfaces were sampled after cleaning and disinfection using two sampling methods: the scraper-flocked swab method and the sponge stick method. Microbiological and chemical analyses were performed on these samples to evaluate the suitability of the sampling methods for the quantification of extracellular polymeric substance components and microorganisms originating from biofilms in these facilities. The scraper-flocked swab method was most suitable for chemical analyses of the samples because the material in these swabs did not interfere with determination of the chemical components. For microbiological enumerations, the sponge stick method was slightly but not significantly more effective than the scraper-flocked swab method. In all but one of the facilities, at least 20% of the sampled surfaces had more than 10² CFU/100 cm². Proteins were found in 20% of the chemically analyzed surface samples, and carbohydrates and uronic acids were found in 15 and 8% of the samples, respectively. When chemical and microbiological results were combined, 17% of the sampled surfaces were contaminated with both microorganisms and at least one of the analyzed chemical components; thus, these surfaces were characterized as carrying biofilm. Overall, microbiological contamination in the food industry is highly variable by food sector and even within a facility at various sampling points and sampling times.
Rapid detection of Salmonella spp. in food by use of the ISO-GRID hydrophobic grid membrane filter.
Entis, P; Brodsky, M H; Sharpe, A N; Jarvis, G A
1982-01-01
A rapid hydrophobic grid-membrane filter (HGMF) method was developed and compared with the Health Protection Branch cultural method for the detection of Salmonella spp. in 798 spiked samples and 265 naturally contaminated samples of food. With the HGMF method, Salmonella spp. were isolated from 618 of the spiked samples and 190 of the naturally contaminated samples. The conventional method recovered Salmonella spp. from 622 spiked samples and 204 unspiked samples. The isolation rates from Salmonella-positive samples for the two methods were not significantly different (94.6% overall for the HGMF method and 96.7% for the conventional approach), but the HGMF results were available in only 2 to 3 days after sample receipt compared with 3 to 4 days by the conventional method. PMID:7059168
Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W
2015-06-01
Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time by sampling method interaction indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggests that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods and sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications, when efficiency is not paramount. © The Authors 2015. 
Perpendicular distance sampling: an alternative method for sampling downed coarse woody debris
Michael S. Williams; Jeffrey H. Gove
2003-01-01
Coarse woody debris (CWD) plays an important role in many forest ecosystem processes. In recent years, a number of new methods have been proposed to sample CWD. These methods select individual logs into the sample using some form of unequal probability sampling. One concern with most of these methods is the difficulty in estimating the volume of each log. A new method...
A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing
2015-09-01
The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
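The CTP construction starts from the one-dimensional Chebyshev zeros, which have a closed form. A sketch of that first step only; the tensor-product and collocation stages of the paper are not reproduced:

```python
import math

def chebyshev_zeros(n):
    # Zeros of the degree-n Chebyshev polynomial of the first kind, T_n,
    # on [-1, 1]: x_k = cos((2k - 1) * pi / (2n)), k = 1..n.
    return [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]

pts = chebyshev_zeros(4)
# The points cluster toward the interval endpoints, which suppresses the
# Runge oscillation seen with equally spaced samples.
print([round(x, 4) for x in pts])
```

Multi-dimensional CTP samples would then be formed as Cartesian products of these one-dimensional point sets, one per input dimension.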
40 CFR 80.8 - Sampling methods for gasoline, diesel fuel, fuel additives, and renewable fuels.
Code of Federal Regulations, 2014 CFR
2014-07-01
§ 80.8 Sampling methods for gasoline, diesel fuel, fuel additives, and renewable fuels (40 CFR, Protection of Environment, General Provisions). The sampling methods specified in this section shall be used to collect samples of gasoline, diesel fuel...
Evaluation of seven aquatic sampling methods for amphibians and other aquatic fauna
Gunzburger, M.S.
2007-01-01
To design effective and efficient research and monitoring programs, researchers must have a thorough understanding of the capabilities and limitations of their sampling methods. Few direct comparative studies exist for aquatic sampling methods for amphibians. The objective of this study was to simultaneously employ seven aquatic sampling methods in 10 wetlands to compare amphibian species richness and the number of individuals detected with each method. Four sampling methods allowed counts of individuals (metal dipnet, D-frame dipnet, box trap, crayfish trap), whereas the other three methods allowed detection of species (visual encounter, aural, and froglogger). Amphibian species richness was greatest with froglogger, box trap, and aural samples. For anuran species, the sampling methods by which each life stage was detected were related to the relative length of the larval and breeding periods and to tadpole size. Detection probability of amphibians varied across sampling methods. Box trap sampling resulted in the most precise amphibian count, but the precision of all four count-based methods was low (coefficient of variation > 145 for all methods). The efficacy of the four count sampling methods at sampling fish and aquatic invertebrates was also analyzed because these predatory taxa are known to be important predictors of amphibian habitat distribution. Species richness and counts were similar for fish with the four methods, whereas invertebrate species richness and counts were greatest in box traps. An effective wetland amphibian monitoring program in the southeastern United States should include multiple sampling methods to obtain the most accurate assessment of species community composition at each site. The combined use of frogloggers, crayfish traps, and dipnets may be the most efficient and effective amphibian monitoring protocol. © 2007 Brill Academic Publishers.
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is also essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations, and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method, and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the most commonly encountered scenarios in practice have also been published for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method; the distribution of the product method is recommended in practice because of its lower computational load compared with bootstrapping. An R package has been developed for sample size determination by the product method in longitudinal mediation study design.
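Of the three tests, Sobel's has the simplest closed form, which makes the power comparison concrete. A sketch with hypothetical path estimates; the standard error shown is the usual first-order delta-method form:

```python
import math

def sobel_z(a, se_a, b, se_b):
    # Sobel's test for the mediated effect a*b:
    # z = a*b / SE(a*b), with SE from the first-order delta method.
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return (a * b) / se_ab

# Hypothetical path estimates: X -> M (a, se_a) and M -> Y (b, se_b).
z = sobel_z(a=0.40, se_a=0.10, b=0.30, se_b=0.12)
print(round(z, 2))  # 2.12
```

Because the product a*b is not normally distributed in small samples, this z statistic is conservative, which is why the distribution-of-the-product and bootstrap tests need smaller samples for the same power.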
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
An improved sampling method of complex network
NASA Astrophysics Data System (ADS)
Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing
2014-12-01
Sampling a subnet is an important topic of complex network research. Sampling methods influence the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method can preserve the similarity between the sampled subnet and the original network in degree distribution, connectivity rate, and average shortest path. This method is applicable to situations where prior knowledge about the degree distribution of the original network is insufficient.
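Plain snowball sampling, one ingredient of the RMSC method, can be sketched as follows; the random/Cohen refinements of the paper are not reproduced, and the toy network is invented:

```python
import random

def snowball_sample(adj, n_seeds, waves, seed=0):
    # Snowball sampling: start from randomly chosen seed nodes, then
    # repeatedly add all neighbors of the current frontier for a fixed
    # number of waves.
    rng = random.Random(seed)
    sampled = set(rng.sample(sorted(adj), n_seeds))
    frontier = set(sampled)
    for _ in range(waves):
        nxt = {v for u in frontier for v in adj[u]} - sampled
        sampled |= nxt
        frontier = nxt
    return sampled

# Toy network: a path 0-1-2-3-4 plus a chord between 1 and 3.
adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
sub = snowball_sample(adj, n_seeds=1, waves=2)
print(sorted(sub))
```

Because every neighbor of a sampled node is pulled in, snowball sampling captures local structure well but over-represents high-degree nodes, which is the bias random restarts are meant to offset.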
This paper covers the basics of passive sampler design, compares passive samplers to conventional methods of air sampling, and discusses considerations when implementing a passive sampling program. The paper also discusses field sampling and sample analysis considerations to ensu...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the • number of samples required to achieve a specified confidence in characterization and clearance decisions • confidence in making characterization and clearance decisions for a specified number of samples for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1.
qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0; 2. qualitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0; 3. quantitative data (e.g., contaminant concentrations expressed as CFU/cm²) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0; 4. quantitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0. For Situation 2, the hotspot sampling approach provides for stating with Z% confidence that a hotspot of specified shape and size with detectable contamination will be found. Also for Situation 2, the CJR approach provides for stating with X% confidence that at least Y% of the decision area does not contain detectable contamination. Forms of these statements for the other three situations are discussed in Section 2.2. Statistical methods that account for FNR > 0 currently exist only for the hotspot sampling approach with qualitative data (or quantitative data converted to qualitative data). This report documents the current status of methods and formulas for the hotspot and CJR sampling approaches. Limitations of these methods are identified. Extensions of the methods that are applicable when FNR = 0 to account for FNR > 0, or to address other limitations, will be documented in future revisions of this report if future funding supports the development of such extensions. For quantitative data, this report also presents statistical methods and formulas for 1. quantifying the uncertainty in measured sample results; 2. estimating the true surface concentration corresponding to a surface sample; 3. quantifying the uncertainty of the estimate of the true surface concentration. All of the methods and formulas discussed in the report were applied to example situations to illustrate application of the methods and interpretation of the results.
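A generic form of the "X% confidence that at least Y% of the decision area is free of detectable contamination" statement has a well-known closed form when all n random samples are negative and FNR = 0. This is the standard compliance-sampling formula, offered as an illustration rather than the report's exact expression:

```python
import math

def n_for_confidence(x, y):
    # Number of all-negative random samples needed to state, with
    # confidence x, that at least a fraction y of the decision area is
    # free of detectable contamination (assumes FNR = 0 and simple
    # random sampling of locations).
    # Derivation: if more than (1 - y) of the area were contaminated,
    # the chance that all n samples are negative would be below y**n,
    # so we require y**n <= 1 - x.
    return math.ceil(math.log(1.0 - x) / math.log(y))

print(n_for_confidence(0.95, 0.99))  # 299, the classic "95/99" sample count
```

The steep growth of n as y approaches 1 is why clearance plans often augment random samples with judgment samples, as in the CJR approach.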
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability: (a) adaptive sampling, (b) local adjustment sampling, and (c) the finite element centroid sampling technique. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this strategy: one uses triangle patches while the other uses rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform sampling and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that initial point sets preprocessed by adaptive sampling using triangle patches are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced by the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches, and the latter was once again shown to be better on different classes of objects.
Representativeness of direct observations selected using a work-sampling equation.
Sharp, Rebecca A; Mudford, Oliver C; Elliffe, Douglas
2015-01-01
Deciding on appropriate sampling to obtain representative samples of behavior is important but not straightforward, because the relative duration of the target behavior may affect its observation in a given sampling interval. Work-sampling methods, which offer a way to adjust the frequency of sampling according to a priori or ongoing estimates of the behavior to achieve a preselected level of representativeness, may provide a solution. Full-week observations of 7 behaviors were conducted for 3 students with autism spectrum disorder and intellectual disabilities. Work-sampling methods were used to select momentary time samples from the full time-of-interest, which produced representative samples. However, work sampling required impractically high numbers of time samples to obtain representative samples. More practical momentary time samples produced less representative samples, particularly for low-duration behaviors. The utility and limits of work-sampling methods for applied behavior analysis are discussed. © Society for the Experimental Analysis of Behavior.
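Momentary time sampling, the sample type selected by the work-sampling methods above, is straightforward to simulate. A sketch with an invented toy behavior whose true relative duration is known, so the estimate can be checked:

```python
import random

def momentary_time_sample(is_occurring, session_len, n_samples, seed=0):
    # Momentary time sampling: at each randomly chosen instant, record
    # whether the behavior is occurring at that moment; the hit rate
    # estimates the behavior's relative duration.
    rng = random.Random(seed)
    times = [rng.uniform(0, session_len) for _ in range(n_samples)]
    hits = sum(is_occurring(t) for t in times)
    return hits / n_samples

# Toy behavior: occurs during the first 20 minutes of a 100-minute session,
# so the true relative duration is 0.2.
estimate = momentary_time_sample(lambda t: t < 20, 100, 2000)
print(round(estimate, 2))
```

Shrinking `n_samples` widens the sampling error around the true duration, which mirrors the article's finding that practical numbers of time samples were less representative, especially for low-duration behaviors.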
Comparability among four invertebrate sampling methods, Fountain Creek Basin, Colorado, 2010-2012
Zuellig, Robert E.; Bruce, James F.; Stogner, Sr., Robert W.; Brown, Krystal D.
2014-01-01
The U.S. Geological Survey, in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, designed a study to determine whether sampling method and sample timing resulted in comparable samples and assessments of biological condition. To accomplish this task, annual invertebrate samples were collected concurrently using four sampling methods at 15 U.S. Geological Survey streamflow gages in the Fountain Creek basin from 2010 to 2012. Collectively, the four methods are used by local (U.S. Geological Survey cooperative monitoring program) and State monitoring programs (Colorado Department of Public Health and Environment) in the Fountain Creek basin to produce two distinct sample types for each program, targeting single and multiple habitats. This study found distinguishable differences between single- and multi-habitat sample types using both community similarities and multi-metric index values, while the methods from each program were comparable within each sample type. This indicates that the Colorado Department of Public Health and Environment methods were compatible with the cooperative monitoring program methods within both multi- and single-habitat sample types. Comparisons between September and October samples found distinguishable differences based on community similarities for both sample types, whereas differences based on multi-metric index values were found only for single-habitat samples. At one site, differences between September and October index values from single-habitat samples resulted in opposing assessments of biological condition. Direct application of the results to inform revision of the existing Fountain Creek basin U.S. Geological Survey cooperative monitoring program is discussed.
Some connections between importance sampling and enhanced sampling methods in molecular dynamics.
Lie, H C; Quer, J
2017-11-21
In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
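The importance-sampling idea the authors connect to enhanced sampling can be shown in its simplest form: estimating a Gaussian tail probability by sampling from a shifted proposal and reweighting by the likelihood ratio. This is a generic illustration, not the Hartmann-Schütte or Valsson-Parrinello method:

```python
import math
import random

def rare_event_prob(threshold, n, shift, seed=0):
    # Importance sampling for P(X > threshold) with X ~ N(0, 1):
    # draw from the shifted proposal N(shift, 1) and reweight each hit by
    # the likelihood ratio w(x) = exp(-shift * x + shift**2 / 2).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            total += math.exp(-shift * x + shift * shift / 2.0)
    return total / n

# Tilting the proposal toward the rare region gives far lower variance
# than naive Monte Carlo at the same sample size.
p = rare_event_prob(threshold=4.0, n=100_000, shift=4.0)
print(f"{p:.2e}")  # true value is about 3.17e-05
```

Naive sampling from N(0, 1) would see essentially no exceedances of 4 in 10^5 draws, whereas the tilted estimator lands near the true tail probability; enhanced sampling methods apply the same reweighting logic to biased molecular dynamics.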
Ng, Ding-Quan; Liu, Shu-Wei; Lin, Yi-Pin
2018-09-15
In this study, a sampling campaign with a total of nine sampling events investigating lead in drinking water was conducted at 7 sampling locations in an old building with lead pipes in service in part of the building on the National Taiwan University campus. This study aims to assess the effectiveness of four different sampling methods, namely first draw sampling, sequential sampling, random daytime sampling and flush sampling, in lead contamination detection. In 3 out of the 7 sampling locations without lead pipe, lead could not be detected (<1.1 μg/L) in most samples regardless of the sampling methods. On the other hand, in the 4 sampling locations where lead pipes still existed, total lead concentrations >10 μg/L were consistently observed in 3 locations using any of the four sampling methods while the remaining location was identified to be contaminated using sequential sampling. High lead levels were consistently measured by the four sampling methods in the 3 locations in which particulate lead was either predominant or comparable to soluble lead. Compared to first draw and random daytime samplings, although flush sampling had a high tendency to reduce total lead in samples in lead-contaminated sites, the extent of lead reduction was location-dependent and not dependent on flush durations between 5 and 10 min. Overall, first draw sampling and random daytime sampling were reliable and effective in determining lead contamination in this study. Flush sampling could reveal the contamination if the extent is severe but tends to underestimate lead exposure risk. Copyright © 2018 Elsevier B.V. All rights reserved.
Popic, Tony J; Davila, Yvonne C; Wardle, Glenda M
2013-01-01
Methods for sampling ecological assemblages strive to be efficient, repeatable, and representative. Unknowingly, common methods may be limited in terms of revealing species function and so of less value for comparative studies. The global decline in pollination services has stimulated surveys of flower-visiting invertebrates, using pan traps and net sampling. We explore the relative merits of these two methods in terms of species discovery, quantifying abundance, function, and composition, and responses of species to changing floral resources. Using a spatially-nested design we sampled across a 5000 km² area of arid grasslands, including 432 hours of net sampling and 1296 pan trap-days, between June 2010 and July 2011. Net sampling yielded 22% more species and 30% higher abundance than pan traps, and better reflected the spatio-temporal variation of floral resources. Species composition differed significantly between methods; from 436 total species, 25% were sampled by both methods, 50% only by nets, and the remaining 25% only by pans. Apart from being less comprehensive, if pan traps do not sample flower-visitors, the link to pollination is questionable. By contrast, net sampling functionally linked species to pollination through behavioural observations of flower-visitation interaction frequency. Netted specimens are also necessary for evidence of pollen transport. Benefits of net-based sampling outweighed minor differences in overall sampling effort. As pan traps and net sampling methods are not equivalent for sampling invertebrate-flower interactions, we recommend net sampling of invertebrate pollinator assemblages, especially if datasets are intended to document declines in pollination and guide measures to retain this important ecosystem service.
Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.; Green, David
2005-03-29
Methods and apparatus for analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically coupled with the vessel body. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.
Comparison of four sampling methods for the detection of Salmonella in broiler litter.
Buhr, R J; Richardson, L J; Cason, J A; Cox, N A; Fairchild, B D
2007-01-01
Experiments were conducted to compare litter sampling methods for the detection of Salmonella. In experiment 1, chicks were challenged orally with a suspension of nalidixic acid-resistant Salmonella and wing-banded, and additional nonchallenged chicks were placed into each of 2 challenge pens. Nonchallenged chicks were placed into each nonchallenge pen located adjacent to the challenge pens. At 7, 8, 10, and 11 wk of age the litter was sampled using 4 methods: fecal droppings, litter grab, drag swab, and sock. For the challenge pens, Salmonella-positive samples were detected in 3 of 16 fecal samples, 6 of 16 litter grab samples, 7 of 16 drag swab samples, and 7 of 16 sock samples. Samples from the nonchallenge pens were Salmonella positive in 2 of 16 litter grab samples, 9 of 16 drag swab samples, and 9 of 16 sock samples. In experiment 2, chicks were challenged with Salmonella, and the litter in the challenge and adjacent nonchallenge pens was sampled at 4, 6, and 8 wk of age with broilers remaining in all pens. For the challenge pens, Salmonella was detected in 10 of 36 fecal samples, 20 of 36 litter grab samples, 14 of 36 drag swab samples, and 26 of 36 sock samples. Samples from the adjacent nonchallenge pens were positive for Salmonella in 6 of 36 fecal droppings samples, 4 of 36 litter grab samples, 7 of 36 drag swab samples, and 19 of 36 sock samples. Sock samples had the highest rates of Salmonella detection. In experiment 3, the litter from a Salmonella-challenged flock was sampled at 7, 8, and 9 wk by socks and drag swabs. In addition, comparisons with drag swabs that were stepped on during sampling were made. Both socks (24 of 36, 67%) and drag swabs that were stepped on (25 of 36, 69%) showed significantly more Salmonella-positive samples than the traditional drag swab method (16 of 36, 44%). Drag swabs that were stepped on had a Salmonella detection level comparable to that for socks.
Litter sampling methods that incorporate stepping on the sample material while in contact with the litter appear to detect Salmonella at a higher incidence than the traditional method of dragging swabs over the litter surface.
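The detection rates quoted for experiment 3 are simple proportions of positive samples; as a quick arithmetic check using the counts from the abstract above:

```python
# Positive samples out of 36 litter samples per method (experiment 3)
positives = {"sock": 24, "drag_swab_stepped_on": 25, "drag_swab": 16}
n_samples = 36

# Detection rate as a rounded percentage for each method
rates = {method: round(100 * k / n_samples) for method, k in positives.items()}
# rates == {"sock": 67, "drag_swab_stepped_on": 69, "drag_swab": 44}
```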
Method and apparatus for data sampling
Odell, Daniel M. C.
1994-01-01
A method and apparatus for sampling radiation detector outputs and determining event data from the collected samples. The method uses high speed sampling of the detector output, the conversion of the samples to digital values, and the discrimination of the digital values so that digital values representing detected events are determined. The high speed sampling and digital conversion is performed by an A/D sampler that samples the detector output at a rate high enough to produce numerous digital samples for each detected event. The digital discrimination identifies those digital samples that are not representative of detected events. The sampling and discrimination also provides for temporary or permanent storage, either serially or in parallel, to a digital storage medium.
[Sampling methods for PM2.5 from stationary sources: a review].
Jiang, Jing-Kun; Deng, Jian-Guo; Li, Zhen; Li, Xing-Hua; Duan, Lei; Hao, Ji-Ming
2014-05-01
The new China national ambient air quality standard was published in 2012 and will be implemented in 2016. To meet the requirements in this new standard, monitoring and controlling PM2.5 emissions from stationary sources are very important. However, so far there is no national standard method for sampling PM2.5 from stationary sources. Different sampling methods for PM2.5 from stationary sources and relevant international standards were reviewed in this study. These include methods for PM2.5 sampling in flue gas and methods for PM2.5 sampling after dilution. Both advantages and disadvantages of these sampling methods were discussed. For environmental management, methods for PM2.5 sampling in flue gas, such as the impactor and virtual impactor, were suggested as a standard to determine filterable PM2.5. To evaluate the environmental and health effects of PM2.5 from stationary sources, a standard dilution method for sampling of total PM2.5 should be established.
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
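The gain from stratified sampling comes from grouping grains by a model-predicted contamination level before sampling. The sketch below is a hypothetical illustration, not the authors' zero-inflated Poisson gene-flow model: the distance-decay curve, field size, and stratum boundary are all assumed values, and distance itself stands in for the auxiliary variable.

```python
import math
import random

rng = random.Random(42)

# Hypothetical field of 10,000 grains; the chance that a grain carries the
# transgene decays with distance (metres) to the neighbouring GM field.
grains = []
for _ in range(10_000):
    d = rng.uniform(0.0, 200.0)
    p_adventitious = 0.04 * math.exp(-d / 40.0)  # assumed gene-flow curve
    grains.append((d, 1 if rng.random() < p_adventitious else 0))

def stratified_estimate(grains, n_per_stratum, rng):
    """Stratified estimate of the transgene presence rate: strata are defined
    by the auxiliary variable (here, distance to the GM field) and stratum
    means are weighted by stratum size."""
    strata = {"near": [g for g in grains if g[0] < 50.0],
              "far":  [g for g in grains if g[0] >= 50.0]}
    total = len(grains)
    estimate = 0.0
    for units in strata.values():
        sample = rng.sample(units, n_per_stratum)
        estimate += (len(units) / total) * (sum(v for _, v in sample) / n_per_stratum)
    return estimate

true_rate = sum(v for _, v in grains) / len(grains)
est_rate = stratified_estimate(grains, 200, rng)
```

Because the "near" stratum concentrates most of the positives, a modest sample per stratum already pins down the field-level rate, which is the mechanism behind the smaller sample sizes reported above.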
NASA Technical Reports Server (NTRS)
Kim, Hyun Jung; Choi, Sang H.; Bae, Hyung-Bin; Lee, Tae Woo
2012-01-01
X-ray diffraction (XRD) methods invented by the National Aeronautics and Space Administration, including the total defect density measurement method and the spatial wafer mapping method, have confirmed super-heteroepitaxial growth of rhombohedral single-crystalline silicon germanium (Si1-xGex) on a c-plane sapphire substrate. However, the XRD method cannot observe the surface morphology or roughness because of the method's limited resolution. Therefore the authors used transmission electron microscopy (TEM), with samples prepared in two ways, the focused ion beam (FIB) method and the tripod method, to study the structure between Si1-xGex and the sapphire substrate and of the Si1-xGex itself. The sample preparation for TEM should be as fast as possible so that the sample contains few or no artifacts induced by the preparation. The standard sample preparation method of mechanical polishing often requires a relatively long ion milling time (several hours), which increases the probability of inducing defects in the sample. TEM sampling of Si1-xGex on sapphire is also difficult because of sapphire's high hardness and mechanical instability. The FIB method and the tripod method eliminate both problems when performing cross-section TEM sampling of Si1-xGex on c-plane sapphire, which reveals the surface morphology, the interface between film and substrate, and the crystal structure of the film. This paper explains the FIB sampling method and the tripod sampling method, and why TEM sampling of Si1-xGex on a sapphire substrate is necessary.
Prevalence of Mixed-Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Collins, Kathleen M. T.
2006-01-01
The purpose of this mixed-methods study was to document the prevalence of sampling designs utilised in mixed-methods research and to examine the interpretive consistency between interpretations made in mixed-methods studies and the sampling design used. Classification of studies was based on a two-dimensional mixed-methods sampling model. This…
Surveying immigrants without sampling frames - evaluating the success of alternative field methods.
Reichel, David; Morales, Laura
2017-01-01
This paper evaluates the sampling methods of an international survey, the Immigrant Citizens Survey, which aimed at surveying immigrants from outside the European Union (EU) in 15 cities in seven EU countries. In five countries, no sampling frame was available for the target population. Consequently, alternative ways to obtain a representative sample had to be found. In three countries 'location sampling' was employed, while in two countries traditional methods were used with adaptations to reach the target population. The paper assesses the main methodological challenges of carrying out a survey among a group of immigrants for whom no sampling frame exists. The samples of the survey in these five countries are compared to results of official statistics in order to assess the accuracy of the samples obtained through the different sampling methods. It can be shown that alternative sampling methods can provide meaningful results in terms of core demographic characteristics, although some estimates differ to some extent from the census results.
Che, W W; Frey, H Christopher; Lau, Alexis K H
2014-12-01
Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m³ for simulated children using the stratified population sampling method, and 12.2 μg/m³ using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m³ due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In case studies evaluated here, the MCC method led to 10% higher estimation of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.
Equilibrium Molecular Thermodynamics from Kirkwood Sampling
2015-01-01
We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys. 2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high-dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, where Kirkwood sampling is used for generating trial Monte Carlo moves. Using this method, equilibrium distributions corresponding to different temperatures and potential energy functions can be generated from a given set of low-order correlations. Since Kirkwood samples are generated independently, this method is ideally suited for massively parallel distributed computing. The second approach is a variant of reservoir replica exchange, where Kirkwood sampling is used to construct a reservoir of conformations, which exchanges conformations with the replicas performing equilibrium sampling corresponding to different thermodynamic states. Coupling with the Kirkwood reservoir enhances sampling by facilitating global jumps in the conformational space. The efficiency of both methods depends on the overlap of the Kirkwood distribution with the target equilibrium distribution. We present proof-of-concept results for a model nine-atom linear molecule and alanine dipeptide. PMID:25915525
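The first variant, biased Monte Carlo with potential-independent trial moves, can be sketched as an independence Metropolis sampler. This is a stand-in, not Kirkwood sampling itself: the "geometry-only" proposal here is just a broad Gaussian, and the double-well target, inverse temperature, and parameters are illustrative.

```python
import math
import random

def independence_metropolis(log_target, propose, log_proposal, n, rng):
    """Metropolis sampler whose trial moves are drawn from a fixed proposal
    distribution, independent of the current state; the acceptance step
    corrects the geometry-only proposal toward the Boltzmann-like target."""
    x = propose(rng)
    out = []
    for _ in range(n):
        y = propose(rng)
        # Acceptance ratio accounts for both target and proposal densities
        log_a = (log_target(y) - log_target(x)) + (log_proposal(x) - log_proposal(y))
        if rng.random() < math.exp(min(0.0, log_a)):
            x = y
        out.append(x)
    return out

rng = random.Random(1)
beta = 1.0                                         # inverse temperature (arbitrary units)
log_target = lambda x: -beta * (x * x - 1.0) ** 2  # toy double-well "potential"
propose = lambda rng: rng.gauss(0.0, 2.0)          # broad, potential-independent proposal
log_proposal = lambda x: -0.5 * (x / 2.0) ** 2
samples = independence_metropolis(log_target, propose, log_proposal, 50_000, rng)
mean_x2 = sum(x * x for x in samples) / len(samples)
```

Because each trial move is drawn afresh from the fixed proposal, the chain hops freely between the two wells, which is the "global jump" behaviour the reservoir coupling is designed to provide.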
Code of Federal Regulations, 2010 CFR
2010-07-01
... will consider a sample obtained using any of the applicable sampling methods specified in appendix I to... appendix I sampling methods are not being formally adopted by the Administrator, a person who desires to employ an alternative sampling method is not required to demonstrate the equivalency of his method under...
Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P
1995-01-01
This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long-term follow-up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling methods, sample size had an impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, obviously due to (unconsciously) not including small and large nuclei. Testing the prognostic value of a series of cut-off points, MNA and SDNA values assessed by the SRS method had significantly stronger prognostic value than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides a better prognostic value in patients with invasive breast cancer.
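Systematic random sampling avoids the selection bias of convenience picking by taking every k-th item from a random starting offset, giving each nucleus the same inclusion probability. A minimal sketch with hypothetical nuclear areas (the lognormal parameters are assumptions for illustration, not the study's data):

```python
import random

def systematic_random_sample(items, n, rng):
    """Select n items by taking every k-th item from a random start offset,
    so each item has equal inclusion probability (unlike convenience picks)."""
    k = len(items) // n
    start = rng.randrange(k)
    return [items[start + i * k] for i in range(n)]

rng = random.Random(7)
# Hypothetical nuclear areas (μm²) for 500 measured nuclei
areas = [rng.lognormvariate(4.0, 0.4) for _ in range(500)]

sample = systematic_random_sample(areas, 50, rng)
mna = sum(sample) / len(sample)  # mean nuclear area of the sample
sdna = (sum((a - mna) ** 2 for a in sample) / (len(sample) - 1)) ** 0.5
```

A convenience selection that skips the smallest and largest nuclei would compress `sdna`, which is exactly the systematic underestimate the study reports for ACS.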
Parajulee, M N; Shrestha, R B; Leser, J F
2006-04-01
A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave similar results to the visual method in detecting adult thrips, but the washing technique detected a significantly higher number of thrips larvae than visual sampling. Visual sampling detected the highest number of fleahoppers followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between the vacuum and sweep net methods. However, based on fixed-precision cost reliability, sweep net sampling was the most cost-effective method followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's power law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For a thrips management decision based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with an increase in fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
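A common way to turn Taylor's power law (s² = a·m^b) into a fixed-precision sample size is Green's formula, n = a·m^(b−2)/D². The parameters below are illustrative values chosen to roughly reproduce the thrips numbers above; the study's fitted a and b are not given here, so this is a sketch of the calculation, not the authors' exact plan.

```python
def green_sample_size(mean_density, a, b, precision):
    """Fixed-precision sample size from Taylor's power law s**2 = a * m**b:
    n = a * m**(b - 2) / D**2, where D is the target SE-to-mean ratio."""
    return a * mean_density ** (b - 2) / precision ** 2

# Illustrative parameters (assumed, not the study's fitted values)
a, b, D = 0.9375, 1.78, 0.25
n_low = green_sample_size(1.0, a, b, D)    # about 15 plants at 1 thrips per plant
n_high = green_sample_size(10.0, a, b, D)  # about 9 plants at 10 thrips per plant
```

Because aggregated populations have 1 < b < 2, the exponent b − 2 is negative, which is why the required sample size falls as density rises, matching the pattern reported for both pests.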
Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.
Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J
2015-06-15
Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four randomly located plots of 0.16 m² with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444) L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha.
The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance component = 6.2), rather than due to pasture (variance component = 0.55) or season (variance component = 0.15). Using the observed distribution of L3, the required sample size (i.e. the number of plots per pasture) for sampling a pasture through random plots with a particular precision was simulated. For the same sample size, a higher relative precision was acquired when estimating pasture larval contamination on pastures with a high larval contamination and a low level of aggregation than on pastures with a low larval contamination. In the future, herbage sampling through random plots across pasture (method 2) seems a promising method to develop further, as no significant difference in counts between the methods was found and this method was less time-consuming. Copyright © 2015 Elsevier B.V. All rights reserved.
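The required-plot simulation can be sketched by drawing aggregated larval counts from a negative binomial (gamma-mixed Poisson) distribution and checking how the relative precision of the per-pasture mean improves with the number of plots. The distributional choice and all parameters below are assumptions for illustration, not the study's fitted values.

```python
import math
import random

def neg_binomial(mean, k, rng):
    """Aggregated larval count: a Poisson count with gamma-distributed rate
    (mean `mean`, aggregation parameter k; smaller k = stronger aggregation)."""
    lam = rng.gammavariate(k, mean / k)
    threshold, count, p = math.exp(-lam), 0, 1.0  # Knuth's Poisson sampler
    while True:
        p *= rng.random()
        if p <= threshold:
            return count
        count += 1

def relative_se(n_plots, mean, k, rng, reps=2000):
    """Monte Carlo estimate of the relative standard error (SE/mean) of the
    per-pasture mean larval count when n_plots plots are sampled."""
    means = [sum(neg_binomial(mean, k, rng) for _ in range(n_plots)) / n_plots
             for _ in range(reps)]
    grand = sum(means) / reps
    var = sum((m - grand) ** 2 for m in means) / (reps - 1)
    return math.sqrt(var) / grand

rng = random.Random(3)
# Assumed contamination: mean 30 L3 per plot, moderate aggregation (k = 1)
rse_4 = relative_se(4, 30.0, 1.0, rng)    # relative SE with 4 plots per pasture
rse_16 = relative_se(16, 30.0, 1.0, rng)  # roughly halved with 4x the plots
```

Raising the mean or the aggregation parameter k in this sketch lowers the relative SE for a fixed number of plots, mirroring the finding that highly contaminated, weakly aggregated pastures are estimated more precisely at the same sample size.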
Luo, Yong; Wu, Dapeng; Zeng, Shaojiang; Gai, Hongwei; Long, Zhicheng; Shen, Zheng; Dai, Zhongpeng; Qin, Jianhua; Lin, Bingcheng
2006-09-01
A novel sample injection method for chip CE was presented. This injection method uses hydrostatic pressure, generated by emptying the sample waste reservoir, for sample loading and electrokinetic force for dispensing. The injection was performed on a double-cross microchip. One cross, created by the sample and separation channels, is used for formation of a sample plug. Another cross, formed by the sample and controlling channels, is used for plug control. By varying the electric field in the controlling channel, the sample plug volume can be linearly adjusted. Hydrostatic pressure takes advantage of its ease of generation on a microfluidic chip, without any electrode or external pressure pump, thus allowing a sample injection with a minimum number of electrodes. The potential of this injection method was demonstrated by a four-separation-channel chip CE system. In this system, parallel sample separation can be achieved with only two electrodes, which is otherwise impossible with conventional injection methods. Hydrostatic pressure maintains the sample composition during the sample loading, allowing the injection to be free of injection bias.
NASA Astrophysics Data System (ADS)
Liu, Xiaodong
2017-08-01
A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far-field operator, we establish an inf-criterion for the characterization of underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude; this further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with the multiple multiscale case, even when the different components are close to each other.
Improved radiation dose efficiency in solution SAXS using a sheath flow sample environment
Kirby, Nigel; Cowieson, Nathan; Hawley, Adrian M.; Mudie, Stephen T.; McGillivray, Duncan J.; Kusel, Michael; Samardzic-Boban, Vesna; Ryan, Timothy M.
2016-01-01
Radiation damage is a major limitation to synchrotron small-angle X-ray scattering analysis of biomacromolecules. Flowing the sample during exposure helps to reduce the problem, but its effectiveness in the laminar-flow regime is limited by slow flow velocity at the walls of sample cells. To overcome this limitation, the coflow method was developed, where the sample flows through the centre of its cell surrounded by a flow of matched buffer. The method permits an order-of-magnitude increase of X-ray incident flux before sample damage, improves measurement statistics and maintains low sample concentration limits. The method also efficiently handles sample volumes of a few microlitres, can increase sample throughput, is intrinsically resistant to capillary fouling by sample and is suited to static samples and size-exclusion chromatography applications. The method unlocks further potential of third-generation synchrotron beamlines to facilitate new and challenging applications in solution scattering. PMID:27917826
Ha, Ji Won; Hahn, Jong Hoon
2017-02-01
Acupuncture sample injection is a simple method to deliver well-defined nanoliter-scale sample plugs in PDMS microfluidic channels. This acupuncture injection method in microchip CE has several advantages, including minimization of sample consumption, the capability of serial injections of different sample solutions into the same microchannel, and the capability of injecting sample plugs into any desired position of a microchannel. Herein, we demonstrate that the simple and cost-effective acupuncture sample injection method can be used for PDMS microchip-based field-amplified sample stacking in the most simplified straight channel by applying a single potential. We achieved an increase in electropherogram signals in the case of sample stacking. Furthermore, we show that microchip CGE of a ΦX174 DNA-HaeIII digest can be performed with the acupuncture injection method on a glass microchip while minimizing sample loss and voltage-control hardware. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi
2016-02-01
Several estimation methods for 24-h sodium excretion using spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan collected multiple 24-h urine and spot urine samples independently. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method' estimates excretion by multiplying the sodium-to-creatinine ratio by predicted 24-h creatinine excretion, whereas the 'regression method' employs linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. The mean sodium excretion by the simple mean method with three spot urine samples was closest to that by 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with an increased number of spot urine samples, at 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. This method with three spot urine samples yielded a higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion. When only one spot urine sample was available, the regression method was preferable.
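The simple mean method reduces to averaging the spot Na/creatinine ratios and scaling by predicted 24-h creatinine excretion. The sketch below uses hypothetical spot measurements and an assumed predicted creatinine value; the study's creatinine prediction equation is not reproduced here.

```python
def sodium_simple_mean(spot_samples, predicted_cr_24h):
    """'Simple mean method': average the spot Na/creatinine ratios across the
    available samples and scale by predicted 24-h creatinine excretion to
    estimate 24-h sodium excretion."""
    ratios = [na / cr for na, cr in spot_samples]
    return (sum(ratios) / len(ratios)) * predicted_cr_24h

# Hypothetical spot samples: (Na mmol/L, creatinine mmol/L) pairs
spots = [(120.0, 8.0), (150.0, 10.0), (100.0, 9.0)]
predicted_cr_24h = 11.0  # mmol/day, assumed from a prediction equation

est = sodium_simple_mean(spots, predicted_cr_24h)  # ~150.7 mmol/day
```

Averaging over three spot samples damps the within-day variability of any single ratio, which is consistent with the reported rise in CCC from one to three samples.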
40 CFR Appendix I to Part 261 - Representative Sampling Methods
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 27 2013-07-01 2013-07-01 false Representative Sampling Methods I Appendix I to Part 261 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES...—Representative Sampling Methods The methods and equipment used for sampling waste materials will vary with the...
Method and apparatus for data sampling
Odell, D.M.C.
1994-04-19
A method and apparatus for sampling radiation detector outputs and determining event data from the collected samples is described. The method uses high speed sampling of the detector output, the conversion of the samples to digital values, and the discrimination of the digital values so that digital values representing detected events are determined. The high speed sampling and digital conversion is performed by an A/D sampler that samples the detector output at a rate high enough to produce numerous digital samples for each detected event. The digital discrimination identifies those digital samples that are not representative of detected events. The sampling and discrimination also provides for temporary or permanent storage, either serially or in parallel, to a digital storage medium. 6 figures.
Comparison of methods for sampling plant bugs on cotton in South Texas (2010)
USDA-ARS?s Scientific Manuscript database
A total of 26 cotton fields were sampled by experienced and inexperienced samplers at 3 growth stages using 5 methods to compare the most efficient and accurate method for sampling plant bugs in cotton. Each of the 5 methods had its own distinct advantages and disadvantages as a sampling method (too...
Apparatus and method for handheld sampling
Staab, Torsten A.
2005-09-20
The present invention includes an apparatus, and corresponding method, for taking a sample. The apparatus is built around a frame designed to be held in at least one hand. A sample media is used to secure the sample. A sample media adapter for securing the sample media is operated by a trigger mechanism connectively attached within the frame to the sample media adapter.
Goeman, Valerie R; Tinkler, Stacy H; Hammac, G Kenitra; Ruple, Audrey
2018-04-01
Environmental surveillance for Salmonella enterica can be used for early detection of contamination; thus routine sampling is an integral component of infection control programs in hospital environments. At the Purdue University Veterinary Teaching Hospital (PUVTH), the technique regularly employed in the large animal hospital for sample collection uses sterile gauze sponges for environmental sampling, which has proven labor-intensive and time-consuming. Alternative sampling methods use Swiffer brand electrostatic wipes for environmental sample collection, which are reportedly effective and efficient. It was hypothesized that use of Swiffer wipes for sample collection would be more efficient and less costly than the use of gauze sponges. A head-to-head comparison between the 2 sampling methods was conducted in the PUVTH large animal hospital and relative agreement, cost-effectiveness, and sampling efficiency were compared. There was fair agreement in culture results between the 2 sampling methods, but Swiffer wipes required less time and less physical effort to collect samples and were more cost-effective.
Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal
2017-11-24
Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, owing to the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches, represented by three analytical methods, in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. The methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method are used for this comparison as the most widely used methods for lipidomic analysis, together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomics analyses. The nontargeted analysis of pooled samples is performed using all tested methods, and 610 lipid species within 23 lipid classes are identified. The DI method provides the most comprehensive results owing to the identification of some polar lipid classes that are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput, with a 10 min method time. The sample consumption of the DI method is 125 times higher than for the other methods, although only 40 μL of organic solvent is used for one sample analysis, compared to 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. The methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. The results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements.
Copyright © 2017 Elsevier B.V. All rights reserved.
Validated Test Method 5030C: Purge-and-Trap for Aqueous Samples
This method describes a purge-and-trap procedure for the analysis of volatile organic compounds in aqueous samples and water-miscible liquid samples. It also describes the analysis of high-concentration soil and waste sample extracts prepared in Method 5035.
Appearance-based representative samples refining method for palmprint recognition
NASA Astrophysics Data System (ADS)
Wen, Jiajun; Chen, Yan
2012-07-01
The sparse representation can deal with the lack of sample problem by utilizing all the training samples. However, the discrimination ability degrades when more training samples are used for representation. We propose a novel appearance-based palmprint recognition method. We aim to find a compromise between the discrimination ability and the lack of sample problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select the representative training samples according to the contributions of the samples. Then we further refine the training samples by an iterative procedure, excluding the training sample with the least contribution to the test sample at each step. Experiments on the PolyU multispectral palmprint database and a two-dimensional and three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we also explore the principles governing the key parameters of the proposed algorithm, which helps achieve high recognition accuracy.
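The select-then-refine idea above — drop the training sample contributing least to the representation of the test sample at each iteration — can be sketched with an ordinary least-squares representation. This is a simplified stand-in for the paper's procedure; the contribution measure and all names are our assumptions.

```python
import numpy as np

def refine_representatives(X_train, y_test, keep):
    """Iteratively drop the training sample with the smallest contribution
    to the least-squares representation of the test sample.
    X_train: (d, n) array whose columns are training samples; y_test: (d,)
    test sample; keep: number of representatives to retain.
    A simplified sketch of the select-and-refine idea, not the paper's code."""
    idx = list(range(X_train.shape[1]))
    while len(idx) > keep:
        A = X_train[:, idx]
        coef, *_ = np.linalg.lstsq(A, y_test, rcond=None)
        # contribution of sample j: magnitude of its term in the reconstruction
        contrib = [abs(c) * np.linalg.norm(A[:, j]) for j, c in enumerate(coef)]
        idx.pop(int(np.argmin(contrib)))
    return idx
```

In the full method the retained columns would then be used to classify the test sample; that step is omitted here.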
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extrema points of the metamodel and the minimum points of the density function. Repeating this procedure yields progressively more accurate metamodels. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
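A minimal one-dimensional sketch of sequential sampling with a radial basis function metamodel is given below, assuming a Gaussian kernel and using the metamodel's minimizer on a candidate grid as the next sample point. The paper's criterion also uses extrema and density-function minima; those details are omitted, and all names are illustrative.

```python
import numpy as np

def rbf_fit(x, y, eps=1.0):
    """Fit a Gaussian RBF interpolant through sampled points (1-D sketch)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    K = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    w = np.linalg.solve(K, y)
    return lambda t: np.exp(
        -(eps * (np.asarray(t, float)[:, None] - x[None, :])) ** 2) @ w

def next_sample_point(x, y, grid):
    """One sequential step: evaluate the metamodel on a candidate grid and
    return the point where it is minimal (a stand-in for the paper's
    extrema/density criterion)."""
    model = rbf_fit(x, y)
    return grid[int(np.argmin(model(grid)))]
```

Evaluating the expensive simulator at the returned point and refitting closes the loop of the sequential procedure.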
Arnold, Mark E; Mueller-Doblies, Doris; Gosling, Rebecca J; Martelli, Francesca; Davies, Robert H
2015-01-01
Reports of Salmonella in ducks in the UK currently rely upon voluntary submissions from the industry, and as there is no harmonized statutory monitoring and control programme, it is difficult to compare data from different years in order to evaluate any trends in Salmonella prevalence in relation to sampling methodology. Therefore, the aim of this project was to assess the sensitivity of a selection of environmental sampling methods, including the sampling of faeces, dust and water troughs or bowls, for the detection of Salmonella in duck flocks; a range of sampling methods was applied to 67 duck flocks. Bayesian methods in the absence of a gold standard were used to provide estimates of the sensitivity of each of the sampling methods relative to the within-flock prevalence. There was a large influence of the within-flock prevalence on the sensitivity of all sample types, with sensitivity falling as the within-flock prevalence decreased. Boot swabs (individual and pool of four), swabs of faecally contaminated areas and whole house hand-held fabric swabs showed the overall highest sensitivity for low-prevalence flocks and are recommended for use to detect Salmonella in duck flocks. The sample type with the highest proportion positive was a pool of four hair nets used as boot swabs, but this was not the most sensitive sample for low-prevalence flocks. All the environmental sampling types (faeces swabs, litter pinches, drag swabs, water trough samples and dust) had higher sensitivity than individual faeces sampling. None of the methods consistently identified all the positive flocks, and at least 10 samples would be required for even the most sensitive method (pool of four boot swabs) to detect a 5% prevalence. The sampling of dust had a low sensitivity and is not recommended for ducks.
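The observation that at least 10 samples would be required to detect a 5% prevalence follows the shape of a standard back-of-envelope calculation, sketched below under the illustrative assumption of independent samples with a fixed per-sample detection probability (the study itself used a Bayesian model without a gold standard):

```python
import math

def samples_needed(detect_prob_per_sample, confidence=0.95):
    """Number of independent samples needed so that at least one is positive
    with the given confidence, when each sample detects with the stated
    per-sample probability. Back-of-envelope only, not the paper's model."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - detect_prob_per_sample))
```

For example, with a per-sample detection probability of about 26%, ten samples are needed for 95% confidence of at least one positive.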
7 CFR 58.812 - Methods of sample analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Methods of sample analysis. 58.812 Section 58.812... Procedures § 58.812 Methods of sample analysis. Samples shall be tested according to the applicable methods of laboratory analysis contained in either DA Instruction 918-RL, as issued by the USDA, Agricultural...
7 CFR 58.245 - Method of sample analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Method of sample analysis. 58.245 Section 58.245... Procedures § 58.245 Method of sample analysis. Samples shall be tested according to the applicable methods of laboratory analysis contained in either DA Instruction 918-RL as issued by the USDA, Agricultural Marketing...
This is a sampling and analysis method for the determination of asbestos in air. Samples are analyzed by transmission electron microscopy (TEM). Although a small subset of samples are to be prepared using a direct procedure, the majority of samples analyzed using this method wil...
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples of size no larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects because of sample sizes smaller than six. For data collected from biomedical experiments, it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8, the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is given by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
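The simulated-experiment approach can be sketched as a Monte Carlo estimate of Type I and Type II error rates for a pooled two-sample t test. This is our illustrative reconstruction (normal populations, equal variances, a fixed critical value), not the authors' simulation code:

```python
import numpy as np

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

def error_rates(n, effect, crit, trials=2000, rng=None):
    """Monte Carlo Type I (effect absent) and Type II (effect present) error
    rates for per-group sample size n; `crit` is the two-sided |t| threshold.
    An illustrative sketch in the spirit of the simulations described above."""
    rng = rng or np.random.default_rng(0)
    type1 = type2 = 0
    for _ in range(trials):
        a, b = rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.0, n)
        if abs(two_sample_t(a, b)) > crit:       # false positive under H0
            type1 += 1
        a, b = rng.normal(0.0, 1.0, n), rng.normal(effect, 1.0, n)
        if abs(two_sample_t(a, b)) <= crit:      # missed effect under H1
            type2 += 1
    return type1 / trials, type2 / trials
```

Shrinking `n` in such a simulation shows the Type II error for weak effects growing quickly, which is the pattern behind the recommendation of sample size 9.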
Comparison of individual and pooled sampling methods for detecting bacterial pathogens of fish
Mumford, Sonia; Patterson, Chris; Evered, J.; Brunson, Ray; Levine, J.; Winton, J.
2005-01-01
Examination of finfish populations for viral and bacterial pathogens is an important component of fish disease control programs worldwide. Two methods are commonly used for collecting tissue samples for bacteriological culture, the currently accepted standards for detection of bacterial fish pathogens. The method specified in the Office International des Epizooties Manual of Diagnostic Tests for Aquatic Animals permits combining renal and splenic tissues from as many as 5 fish into pooled samples. The American Fisheries Society (AFS) Blue Book/US Fish and Wildlife Service (USFWS) Inspection Manual specifies the use of a bacteriological loop for collecting samples from the kidney of individual fish. An alternative would be to more fully utilize the pooled samples taken for virology. If implemented, this approach would provide substantial savings in labor and materials. To compare the relative performance of the AFS/USFWS method and this alternative approach, cultures of Yersinia ruckeri were used to establish low-level infections in groups of rainbow trout (Oncorhynchus mykiss) that were sampled by both methods. Yersinia ruckeri was cultured from 22 of 37 groups by at least 1 method. The loop method yielded 18 positive groups, with 1 group positive in the loop samples but negative in the pooled samples. The pooled samples produced 21 positive groups, with 4 groups positive in the pooled samples but negative in the loop samples. There was statistically significant agreement (Spearman coefficient 0.80, P < 0.001) in the relative ability of the 2 sampling methods to permit detection of low-level bacterial infections of rainbow trout.
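The trade-off between pooled and individual sampling can be illustrated with a simple probability sketch: if each fish is independently infected with prevalence p, a pool of k fish contains at least one infected fish with probability 1 - (1 - p)^k. The assumption that pooling never dilutes the pathogen below the detection limit is ours and is optimistic:

```python
def pool_detection_probability(p, pool_size, sensitivity=1.0):
    """Probability that a pooled sample tests positive when each of
    `pool_size` fish is independently infected with prevalence p, assuming
    pooling does not dilute below the detection limit (a strong assumption)."""
    return sensitivity * (1 - (1 - p) ** pool_size)
```

At a 10% prevalence, a pool of five already gives roughly a 41% chance of containing an infected fish, which is why pooling can save labor without a large loss of sensitivity.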
Xun-Ping, W; An, Z
2017-07-27
Objective: To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods: A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which took plant abundance as an auxiliary variable, was explored, and an experimental study in a 50 m × 50 m plot in a marshland in the Poyang Lake region was performed. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required numbers of optimal sampling points for each layer were calculated through the Hammond-McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA was performed. Results: The method (SOPA) proposed in this study had the minimal absolute error, 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion: The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.
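The per-layer sample-size step can be illustrated with a generic stratified-allocation sketch. We use Neyman allocation (points proportional to stratum size times stratum standard deviation) as a stand-in; the study itself used the Hammond-McCullagh equation, which is not reproduced here.

```python
import numpy as np

def neyman_allocation(strata_sizes, strata_sds, n_total):
    """Allocate n_total sample points across strata proportionally to
    N_h * S_h (Neyman allocation). A generic stand-in for the per-layer
    sample-size calculation described above."""
    w = np.asarray(strata_sizes, float) * np.asarray(strata_sds, float)
    shares = n_total * w / w.sum()
    alloc = np.floor(shares).astype(int)
    # hand out any remainder to the largest fractional shares
    for i in np.argsort(shares - alloc)[::-1][: n_total - alloc.sum()]:
        alloc[i] += 1
    return alloc
```

Strata where the surveyed quantity is more variable (here, layers with patchier snail counts) receive proportionally more points, which is the intuition behind stratifying on plant abundance.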
Model-based inference for small area estimation with sampling weights
Vandendijck, Y.; Faes, C.; Kirby, R.S.; Lawson, A.; Hens, N.
2017-01-01
Obtaining reliable estimates about health outcomes for areas or domains where only few to no samples are available is the goal of small area estimation (SAE). Often, we rely on health surveys to obtain information about health outcomes. Such surveys are often characterised by a complex design, stratification, and unequal sampling weights as common features. Hierarchical Bayesian models are well recognised in SAE as a spatial smoothing method, but often ignore the sampling weights that reflect the complex sampling design. In this paper, we focus on data obtained from a health survey where the sampling weights of the sampled individuals are the only information available about the design. We develop a predictive model-based approach to estimate the prevalence of a binary outcome for both the sampled and non-sampled individuals, using hierarchical Bayesian models that take into account the sampling weights. A simulation study is carried out to compare the performance of our proposed method with other established methods. The results indicate that our proposed method achieves great reductions in mean squared error when compared with standard approaches. It performs equally well or better when compared with more elaborate methods when there is a relationship between the responses and the sampling weights. The proposed method is applied to estimate asthma prevalence across districts. PMID:28989860
Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E
2014-06-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs.
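The core resampling step — re-weighting a complex-design sample so it can be treated as a simple random sample from a superpopulation — can be sketched with a Dirichlet-weighted (Bayesian bootstrap style) draw. This is a simplified illustration of the idea, not the authors' full finite population Bayesian bootstrap; feeding the sampling weights into the Dirichlet concentration is one simple choice among several.

```python
import numpy as np

def bayesian_bootstrap_population(sample, weights, N, rng=None):
    """Generate one synthetic population of size N from a weighted sample.
    Draws resampling probabilities from a Dirichlet centred on the
    normalized sampling weights, then resamples with replacement.
    A simplified sketch of the idea, not the authors' full procedure."""
    rng = rng or np.random.default_rng(1)
    w = np.asarray(weights, float)
    probs = rng.dirichlet(w / w.sum() * len(w))
    return rng.choice(np.asarray(sample), size=N, replace=True, p=probs)
```

Repeating the draw yields multiple synthetic populations, over which downstream IID-style analyses can be averaged.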
Cox, Jennie; Indugula, Reshmi; Vesper, Stephen; Zhu, Zheng; Jandarov, Roman; Reponen, Tiina
2017-10-18
Evaluating fungal contamination indoors is complicated because of the many different sampling methods utilized. In this study, fungal contamination was evaluated using five sampling methods and four matrices for results. The five sampling methods were a 48-hour indoor air sample collected with a Button™ inhalable aerosol sampler and four types of dust samples: a vacuumed floor dust sample, newly settled dust collected for four weeks onto two types of electrostatic dust cloths (EDCs) in trays, and a wipe sample of dust from above-floor surfaces. The samples were obtained in the bedrooms of asthmatic children (n = 14). Quantitative polymerase chain reaction (qPCR) was used to analyze the dust and air samples for the 36 fungal species that make up the Environmental Relative Moldiness Index (ERMI). The results from the samples were compared by four matrices: total concentration of fungal cells, concentration of fungal species associated with indoor environments, concentration of fungal species associated with outdoor environments, and ERMI values (or ERMI-like values for air samples). The ERMI values for the dust samples and the ERMI-like values for the 48-hour air samples were not significantly different. The total cell concentrations of the 36 species obtained with the four dust collection methods correlated significantly (r = 0.64-0.79, p < 0.05), with the exception of the vacuumed floor dust and newly settled dust. In addition, fungal cell concentrations of indoor-associated species correlated well between all four dust sampling methods (r = 0.68-0.86, p < 0.01). No correlation was found between the fungal concentrations in the air and dust samples, primarily because of differences in concentrations of Cladosporium cladosporioides Type 1 and Epicoccum nigrum. A representative type of dust sample and a 48-hour air sample might both provide useful information about fungal exposures.
Evaluating performance of stormwater sampling approaches using a dynamic watershed model.
Ackerman, Drew; Stein, Eric D; Ritter, Kerry J
2011-09-01
Accurate quantification of stormwater pollutant levels is essential for estimating overall contaminant discharge to receiving waters. Numerous sampling approaches exist that attempt to balance accuracy against the costs associated with the sampling method. This study employs a novel and practical approach of evaluating the accuracy of different stormwater monitoring methodologies using stormflows and constituent concentrations produced by a fully validated continuous simulation watershed model. A major advantage of using a watershed model to simulate pollutant concentrations is that a large number of storms representing a broad range of conditions can be applied in testing the various sampling approaches. Seventy-eight distinct methodologies were evaluated by "virtual samplings" of 166 simulated storms of varying size, intensity and duration, representing 14 years of storms in Ballona Creek near Los Angeles, California. The 78 methods can be grouped into four general strategies: volume-paced compositing, time-paced compositing, pollutograph sampling, and microsampling. The performance of each sampling strategy was evaluated by comparing (1) the median relative error between the virtually sampled and the true modeled event mean concentration (EMC) of each storm (accuracy), (2) the median absolute deviation about the median, or "MAD", of the relative error (precision), and (3) the percentage of storms where sampling methods were within 10% of the true EMC (combined measure of accuracy and precision). Finally, costs associated with site setup, sampling, and laboratory analysis were estimated for each method. Pollutograph sampling consistently outperformed the other three methods both in terms of accuracy and precision, but was the most costly method evaluated. Time-paced sampling consistently underestimated, while volume-paced sampling overestimated, the storm EMCs. Microsampling performance approached that of pollutograph sampling at a substantial cost savings.
The most efficient method for routine stormwater monitoring in terms of a balance between performance and cost was volume-paced microsampling, with variable sample pacing to ensure that the entirety of the storm was captured. Pollutograph sampling is recommended if the data are to be used for detailed analysis of runoff dynamics.
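The quantity all of these strategies estimate is the event mean concentration (EMC), the flow-weighted mean concentration over the storm. Below is a minimal sketch of the true EMC and a volume-paced composite estimate of it; the function names, equal-aliquot compositing, and uniform time steps are our illustrative assumptions.

```python
import numpy as np

def event_mean_concentration(flow, conc):
    """True EMC: total pollutant mass divided by total runoff volume."""
    flow, conc = np.asarray(flow, float), np.asarray(conc, float)
    return (flow * conc).sum() / flow.sum()

def volume_paced_composite(flow, conc, pace, dt=1.0):
    """EMC estimate from equal aliquots triggered every `pace` units of
    cumulative runoff volume, mimicking a volume-paced autosampler."""
    flow, conc = np.asarray(flow, float), np.asarray(conc, float)
    cumvol = np.cumsum(flow * dt)
    triggers = np.searchsorted(cumvol, np.arange(pace, cumvol[-1], pace))
    return conc[triggers].mean()
```

Because aliquots are taken per unit of volume rather than per unit of time, high-flow periods are sampled more often, which is why volume pacing tracks the flow-weighted EMC more closely than time pacing does.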
Chen, Meilian; Lee, Jong-Hyeon; Hur, Jin
2015-10-01
Despite literature evidence suggesting the importance of sampling methods for the properties of sediment pore waters, their effects on the dissolved organic matter (PW-DOM) have been unexplored to date. Here, we compared the effects of two commonly used sampling methods (i.e., centrifuge and Rhizon sampler) on the characteristics of PW-DOM for the first time. The bulk dissolved organic carbon (DOC), ultraviolet-visible (UV-Vis) absorption, and excitation-emission matrices coupled with parallel factor analysis (EEM-PARAFAC) of the PW-DOM samples were compared for the two sampling methods with sediments from minimally to severely contaminated sites. The centrifuged samples were found to have higher average values of DOC, UV absorption, and protein-like EEM-PARAFAC components. The samples collected with the Rhizon sampler, however, exhibited generally more humified characteristics than the centrifuged ones, implying a preferential collection of PW-DOM with respect to the sampling methods. Furthermore, the differences between the two sampling methods seem more pronounced in relatively more polluted sites. Our observations were possibly explained by either the filtration effect resulting from the smaller pore size of the Rhizon sampler or the desorption of DOM molecules loosely bound to minerals during centrifugation, or both. Our study suggests that consistent use of one sampling method is crucial for PW-DOM studies, and also that caution should be taken in the comparison of data collected with different sampling methods.
Volatile organic compounds: sampling methods and their worldwide profile in ambient air.
Kumar, Anuj; Víden, Ivan
2007-08-01
The atmosphere is a particularly difficult analytical system because of the very low levels of substances to be analysed, sharp variations in pollutant levels with time and location, and differences in wind, temperature and humidity. This makes the selection of an efficient sampling technique for air analysis a key step toward reliable results. Generally, methods for volatile organic compound sampling include collection of whole air or preconcentration of samples on adsorbents. The methods vary from each other according to the sampling technique, type of sorbent, method of extraction and identification technique. In this review paper we discuss various important aspects of sampling volatile organic compounds by the widely used and advanced sampling methods. Characteristics of various adsorbents used for VOC sampling are also described. Furthermore, this paper makes an effort to comprehensively review the concentration levels of volatile organic compounds, along with the methodology used for analysis, in major cities of the world.
Comparisons of discrete and integrative sampling accuracy in estimating pulsed aquatic exposures.
Morrison, Shane A; Luttbeg, Barney; Belden, Jason B
2016-11-01
Most current-use pesticides have short half-lives in the water column, and thus the most relevant exposure scenarios for many aquatic organisms are pulsed exposures. Quantifying exposure using discrete water samples may not be accurate, as few studies are able to sample frequently enough to accurately determine time-weighted average (TWA) concentrations of short aquatic exposures. Integrative sampling methods that continuously sample freely dissolved contaminants over time intervals (such as integrative passive samplers) have been demonstrated to be a promising measurement technique. We conducted several modeling scenarios to test the assumption that integrative methods may require many fewer samples for accurate estimation of peak 96-h TWA concentrations. We compared the accuracies of discrete point samples and integrative samples while varying sampling frequencies across a range of contaminant water half-lives (t50 = 0.5, 2, and 8 d). Differences in the predictive accuracy of discrete point samples and integrative samples were greatest at low sampling frequencies. For example, when the half-life was 0.5 d, discrete point samples required 7 sampling events to ensure median values > 50% and no sampling events reporting highly inaccurate results (defined as < 10% of the true 96-h TWA). Across all water half-lives investigated, integrative sampling required only two samples to prevent highly inaccurate results and to yield median values > 50% of the true concentration. Regardless, the need for integrative sampling diminished as water half-life increased. For an 8-d water half-life, two discrete samples produced accurate estimates and median values greater than those obtained for two integrative samples. Overall, integrative methods are the more accurate method for monitoring contaminants with short water half-lives due to reduced frequency of extreme values, especially with uncertainties around the timing of pulsed events.
However, the acceptability of discrete sampling methods for providing accurate concentration measurements increases with increasing aquatic half-lives. Copyright © 2016 Elsevier Ltd. All rights reserved.
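The comparison can be reproduced in miniature for a pulse that decays first-order in the water column: the 96-h TWA has a closed form, and a discrete-grab estimate approaches it only as the sampling frequency rises. The single-pulse exponential model and the function names are our illustrative assumptions.

```python
import numpy as np

def true_96h_twa(c0, half_life_d, hours=96.0):
    """Analytic 96-h time-weighted average of C(t) = c0 * exp(-k t)."""
    k = np.log(2) / (half_life_d * 24.0)   # first-order rate, per hour
    return c0 * (1 - np.exp(-k * hours)) / (k * hours)

def discrete_twa(c0, half_life_d, n_samples, hours=96.0):
    """TWA estimated from n equally spaced discrete grab samples."""
    k = np.log(2) / (half_life_d * 24.0)
    t = np.linspace(0.0, hours, n_samples)
    return c0 * np.exp(-k * t).mean()
```

With a 0.5-d half-life, two grab samples overestimate the 96-h TWA severely, while a dense sampling schedule converges toward the analytic value, mirroring the modeled advantage of integrative samplers, which accumulate the integral directly.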
Drummond, A; Rodrigo, A G
2000-12-01
Reconstruction of evolutionary relationships from noncontemporaneous molecular samples provides a new challenge for phylogenetic reconstruction methods. With recent biotechnological advances there has been an increase in molecular sequencing throughput, and the potential to obtain serial samples of sequences from populations, including rapidly evolving pathogens, is fast being realized. A new method called the serial-sample unweighted pair grouping method with arithmetic means (sUPGMA) is presented that reconstructs a genealogy or phylogeny of sequences sampled serially in time using a matrix of pairwise distances. The resulting tree depicts the terminal lineages of each sample ending at a different level consistent with the sample's temporal order. Since sUPGMA is a variant of UPGMA, it will perform best when sequences have evolved at a constant rate (i.e., according to a molecular clock). On simulated data, this new method performs better than standard cluster analysis under a variety of longitudinal sampling strategies. Serial-sample UPGMA is particularly useful for analysis of longitudinal samples of viruses and bacteria, as well as ancient DNA samples, with the minimal requirement that samples of sequences be ordered in time.
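Since sUPGMA is a variant of UPGMA, the underlying agglomeration is worth sketching. The following implements plain average-linkage UPGMA on a pairwise distance matrix and returns a nested-tuple tree; the serial-sample correction that lets tips end at different temporal levels is the paper's contribution and is omitted here.

```python
import numpy as np

def upgma(D, labels):
    """Plain UPGMA: repeatedly merge the closest pair of clusters and update
    distances by the size-weighted average (average linkage). Returns the
    tree topology as nested tuples; branch lengths are omitted."""
    D = np.asarray(D, float).copy()
    clusters = {i: (labels[i], 1) for i in range(len(labels))}  # id -> (tree, size)
    while len(clusters) > 1:
        keys = sorted(clusters)
        # find the closest pair among active clusters
        _, a, b = min(((D[i, j], i, j) for ii, i in enumerate(keys)
                       for j in keys[ii + 1:]), key=lambda x: x[0])
        (ta, na), (tb, nb) = clusters[a], clusters[b]
        # average-linkage update of distances to the merged cluster
        for c in keys:
            if c not in (a, b):
                D[a, c] = D[c, a] = (na * D[a, c] + nb * D[b, c]) / (na + nb)
        clusters[a] = ((ta, tb), na + nb)
        del clusters[b]
    return next(iter(clusters.values()))[0]
```

Under a molecular clock, this average-linkage scheme recovers the correct topology, which is why sUPGMA inherits the clock assumption noted in the abstract.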
7 CFR 29.110 - Method of sampling.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Method of sampling. 29.110 Section 29.110 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Regulations Inspectors, Samplers, and Weighers § 29.110 Method of sampling. In sampling tobacco...
7 CFR 29.110 - Method of sampling.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Method of sampling. 29.110 Section 29.110 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Regulations Inspectors, Samplers, and Weighers § 29.110 Method of sampling. In sampling tobacco...
Zhang, Heng; Lan, Fang; Shi, Yupeng; Wan, Zhi-Gang; Yue, Zhen-Feng; Fan, Fang; Lin, Yan-Kui; Tang, Mu-Jin; Lv, Jing-Zhang; Xiao, Tan; Yi, Changqing
2014-06-15
VitaFast(®) test kits designed for the microbiological assay in microtiter plate format can be applied to quantitative determination of B-group water-soluble vitamins such as vitamin B12, folic acid and biotin. Compared to traditional microbiological methods, VitaFast(®) kits significantly reduce sample processing time and provide greater reliability, higher productivity and better accuracy. Recently, simultaneous determination of vitamin B12, folic acid and biotin in one sample has become urgently required when evaluating the quality of infant formulae in our practical work. However, the present sample preparation protocols, which were developed for individual test systems, are incompatible with simultaneous determination of several analytes. To solve this problem, a novel "three-in-one" sample preparation method is herein developed for simultaneous determination of B-group water-soluble vitamins using VitaFast(®) kits. The performance of this novel "three-in-one" sample preparation method was systematically evaluated through comparison with the individual sample preparation protocols. The experimental results of the assays that employed the "three-in-one" sample preparation method were in good agreement with those obtained from conventional VitaFast(®) extraction methods, indicating that the proposed "three-in-one" sample preparation method is applicable to the present three VitaFast(®) vitamin test systems, thus offering a promising alternative to the three independent sample preparation methods. The proposed new sample preparation method will significantly improve the efficiency of infant formulae inspection. Copyright © 2013 Elsevier Ltd. All rights reserved.
Krempa, Heather M.
2015-10-29
Relative percent differences between methods were greater than 10 percent for most analyzed trace elements. Barium, cobalt, manganese, and boron had concentrations that were significantly different between sampling methods. Barium, molybdenum, boron, and uranium method concentrations indicate a close association between pump and grab samples based on bivariate plots and simple linear regressions. Grab sample concentrations were generally larger than pump concentrations for these elements, which may be due to the use of a larger-pore-size filter for grab samples. Analysis of zinc blank samples suggests zinc contamination in filtered grab samples. Variations of analyzed trace elements between pump and grab samples could reduce the ability to monitor temporal changes and potential groundwater contamination threats. The degree of precision necessary for monitoring potential groundwater threats, as well as application objectives, needs to be considered when determining acceptable levels of variation.
Oral sampling methods are associated with differences in immune marker concentrations.
Fakhry, Carole; Qeadan, Fares; Gilman, Robert H; Yori, Pablo; Kosek, Margaret; Patterson, Nicole; Eisele, David W; Gourin, Christine G; Chitguppi, Chandala; Marks, Morgan; Gravitt, Patti
2018-06-01
To determine whether the concentration and distribution of immune markers in paired oral samples were similar. Clinical research. Cross-sectional study. Paired saliva and oral secretions (OS) samples were collected. The concentration of immune markers was estimated using Luminex multiplex assay (Thermo Fisher Scientific, Waltham, MA). For each sample, the concentration of respective immune markers was normalized to total protein present and log-transformed. Median concentrations of immune markers were compared between both types of samples. Intermarker correlation in each sampling method and across sampling methods was evaluated. There were 90 study participants. Concentrations of immune markers in saliva samples were significantly different from concentrations in OS samples. Oral secretions samples showed higher concentrations of immunoregulatory markers, whereas the saliva samples contained proinflammatory markers in higher concentration. The immune marker profile in saliva samples is distinct from the immune marker profile in paired OS samples. 2b. Laryngoscope, 128:E214-E221, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
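The normalization described above (each marker concentration divided by total protein, then log-transformed, with medians compared between sample types) can be sketched as follows; the numbers and units are hypothetical, not data from the study.

```python
import math
import statistics

def normalize_and_log(marker_conc, total_protein):
    """Normalize marker concentrations to total protein and log-transform,
    as described for the paired oral samples (values illustrative)."""
    return [math.log(m / p) for m, p in zip(marker_conc, total_protein)]

# Hypothetical paired measurements for one marker (pg/mL over mg/mL)
saliva = normalize_and_log([120.0, 80.0, 95.0], [1.2, 0.8, 1.0])
os_samp = normalize_and_log([40.0, 55.0, 35.0], [1.0, 1.1, 0.7])
print("saliva median:", statistics.median(saliva))
print("OS median:    ", statistics.median(os_samp))
```

Median log-normalized concentrations would then be compared between the two sampling methods, marker by marker.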
An evaluation of flow-stratified sampling for estimating suspended sediment loads
Robert B. Thomas; Jack Lewis
1995-01-01
Abstract - Flow-stratified sampling is a new method for sampling water quality constituents such as suspended sediment to estimate loads. As with selection-at-list-time (SALT) and time-stratified sampling, flow-stratified sampling is a statistical method requiring random sampling, and yielding unbiased estimates of load and variance. It can be used to estimate event...
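Flow-stratified sampling yields a standard stratified estimator of load and its variance. Below is a generic sketch, assuming flow classes as strata and illustrative load values; it shows only the estimation step, not the method's random selection machinery.

```python
import statistics

def stratified_total(strata):
    """Unbiased stratified estimate of a total and its variance.
    strata: list of (N_h, sample_h) where N_h is the number of sampling
    units in stratum h and sample_h the measured loads (same units)."""
    total = 0.0
    var = 0.0
    for n_units, sample in strata:
        n = len(sample)
        mean = statistics.fmean(sample)
        s2 = statistics.variance(sample) if n > 1 else 0.0
        total += n_units * mean
        # finite population correction applied per stratum
        var += n_units ** 2 * (s2 / n) * (1 - n / n_units)
    return total, var

# Illustrative: two flow strata (low flow, high flow), loads in tonnes
low = (40, [0.2, 0.3, 0.25, 0.35])
high = (10, [2.0, 3.5, 2.5])
total, var = stratified_total([low, high])
print(f"estimated load = {total:.1f} t, SE = {var ** 0.5:.2f} t")
```

Because high-flow periods carry most of the sediment, stratifying by flow concentrates sampling effort where variance is largest.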
Mechanisms of fracture of ring samples made of FCC metals on loading with magnetic-pulse method
NASA Astrophysics Data System (ADS)
Morozov, Viktor; Kats, Victor; Savenkov, Georgiy; Lukin, Anton
2018-05-01
Results of a study of the deformation and fracture of ring-shaped samples made of thin strips of copper, aluminum and steel over a wide range of loading velocities are presented. Three schemes of the magnetic-pulse method developed by us are used for loading the samples. A method for fracturing samples with high electrical resistance (e.g. steel) is proposed. The crack velocity at sample fracture is estimated. Fracture surfaces are inspected, and mechanisms of dynamic fracture of the samples are discussed.
Conceptual data sampling for breast cancer histology image classification.
Rezk, Eman; Awan, Zainab; Islam, Fahad; Jaoua, Ali; Al Maadeed, Somaya; Zhang, Nan; Das, Gautam; Rajpoot, Nasir
2017-10-01
Data analytics have become increasingly complicated as the amount of data has increased. One technique that is used to enable data analytics in large datasets is data sampling, in which a portion of the data is selected to preserve the data characteristics for use in data analytics. In this paper, we introduce a novel data sampling technique that is rooted in formal concept analysis theory. This technique is used to create samples that rely on the data distribution across a set of binary patterns. The proposed sampling technique is applied in classifying the regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It is also competitive with other sampling methods in terms of sample size and sample quality, as represented by classification accuracy and F1 measure. Copyright © 2017 Elsevier Ltd. All rights reserved.
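The idea of sampling according to the data's distribution across a set of binary patterns can be illustrated with a much simpler proportional-allocation scheme; the authors' actual method is based on formal concept analysis and is not reproduced here. All data below are synthetic.

```python
import random
from collections import defaultdict

def pattern_sample(rows, fraction, seed=0):
    """Sample rows while roughly preserving the distribution of their
    binary attribute patterns (a proportional-allocation sketch, not the
    paper's formal-concept-analysis method)."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row)].append(row)   # group rows by binary pattern
    sample = []
    for pattern, members in groups.items():
        k = max(1, round(fraction * len(members)))  # keep rare patterns
        sample.extend(rng.sample(members, min(k, len(members))))
    return sample

# Synthetic dataset: 100 rows over three binary patterns
rows = [(1, 0)] * 60 + [(0, 1)] * 30 + [(1, 1)] * 10
sub = pattern_sample(rows, 0.2)
print(len(sub), "rows kept")
```

A 20% sample drawn this way keeps the 60/30/10 pattern proportions intact, which is the property the classifier-training experiments above depend on.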
NASA Astrophysics Data System (ADS)
Sydoff, Marie; Stenström, Kristina
2010-04-01
The Department of Physics at Lund University is participating in a European Union project called EUMAPP (European Union Microdose AMS Partnership Programme), in which sample preparation and accelerator mass spectrometry (AMS) measurements of biological samples from microdosing studies have been made. This paper describes a simplified method of converting biological samples to solid graphite for 14C analysis with AMS. The method is based on online combustion of the samples, and reduction of CO2 in septa-sealed vials. The septa-sealed vials and disposable materials are used to eliminate sample cross-contamination. Measurements of ANU and Ox I standards show deviations of 2% and 3%, respectively, relative to reference values. This level of accuracy is sufficient for biological samples from microdosing studies. Since the method has very few handling steps from sample to graphite, the risk of failure during the sample preparation process is minimized, making the method easy to use in routine preparation of samples.
Zainathan, S C; Carson, J; Crane, M St J; Nowak, B F
2013-04-01
The use of swabs relative to organs as a sample collection method for the detection of Tasmanian salmon reovirus (TSRV) in farmed Tasmanian Atlantic salmon, Salmo salar L., was evaluated by RT-qPCR. Evaluation of individual and pooled sample collection (organs vs swabs) was carried out to determine the sensitivity of the collection methods and the effect of pooling of samples for the detection of TSRV. Detection of TSRV in individual samples was as sensitive when organs were sampled compared to swabs, and in pooled samples, organs demonstrated a sensitivity of one 10-fold dilution higher than sampling of pooled swabs. Storage of swabs at 4 °C for t = 24 h demonstrated results similar to those at t = 0. Advantages of using swabs as a preferred sample collection method for the detection of TSRV compared to organ samples are evident from these experimental trials. © 2012 Blackwell Publishing Ltd.
Single-view phase retrieval of an extended sample by exploiting edge detection and sparsity
Tripathi, Ashish; McNulty, Ian; Munson, Todd; ...
2016-10-14
We propose a new approach to robustly retrieve the exit wave of an extended sample from its coherent diffraction pattern by exploiting sparsity of the sample's edges. This approach enables imaging of an extended sample with a single view, without ptychography. We introduce nonlinear optimization methods that promote sparsity, and we derive update rules to robustly recover the sample's exit wave. We test these methods on simulated samples by varying the sparsity of the edge-detected representation of the exit wave. Finally, our tests illustrate the strengths and limitations of the proposed method in imaging extended samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, William H.
2017-09-15
The objectives for this presentation are to describe the method that the IAEA uses to determine a sampling plan for nuclear material measurements; describe the terms detection probability and significant quantity; list the three nuclear materials measurement types; describe the sampling method applied to an item facility; and describe multiple method sampling.
Comparison of preprocessing methods and storage times for touch DNA samples
Dong, Hui; Wang, Jing; Zhang, Tao; Ge, Jian-ye; Dong, Ying-qiang; Sun, Qi-fan; Liu, Chao; Li, Cai-xia
2017-01-01
Aim: To select appropriate preprocessing methods for different substrates by comparing the effects of four different preprocessing methods on touch DNA samples and to determine the effect of various storage times on the results of touch DNA sample analysis. Method: Hand touch DNA samples were used to investigate the detection and inspection results of DNA on different substrates. Four preprocessing methods, including the direct cutting method, stubbing procedure, double swab technique, and vacuum cleaner method, were used in this study. DNA was extracted from mock samples with four different preprocessing methods. The best preprocessing protocol determined from the study was further used to compare performance after various storage times. DNA extracted from all samples was quantified and amplified using standard procedures. Results: The amounts of DNA and the number of alleles detected on the porous substrates were greater than those on the non-porous substrates. The performances of the four preprocessing methods varied with different substrates. The direct cutting method displayed advantages for porous substrates, and the vacuum cleaner method was advantageous for non-porous substrates. No significant degradation trend was observed as the storage times increased. Conclusion: Different substrates require the use of different preprocessing methods in order to obtain the highest DNA amount and allele number from touch DNA samples. This study provides a theoretical basis for explorations of touch DNA samples and may be used as a reference when dealing with touch DNA samples in case work. PMID:28252870
Inoue, Hiroaki; Takama, Tomoko; Yoshizaki, Miwa; Agata, Kunio
2015-01-01
We detected Legionella species in 111 bath water samples and 95 cooling tower water samples by using a combination of conventional plate culture, quantitative polymerase chain reaction (qPCR) and qPCR combined with ethidium monoazide treatment (EMA-qPCR) methods. In the case of bath water samples, Legionella spp. were detected in 30 samples by plate culture, in 85 samples by qPCR, and in 49 samples by EMA-qPCR. Of 81 samples determined to be Legionella-negative by plate culture, 56 and 23 samples were positive by qPCR and EMA-qPCR, respectively. Therefore, EMA treatment decreased the number of Legionella-positive bath water samples detected by qPCR. In contrast, EMA treatment had no effect on cooling tower water samples. We therefore expect that EMA-qPCR is a useful method for the rapid detection of viable Legionella spp. from bath water samples.
Intra prediction using face continuity in 360-degree video coding
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; He, Yuwen; Ye, Yan
2017-09-01
This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.
Sommer, D; Enderlein, D; Antakli, A; Schönenbrücher, H; Slaghuis, J; Redmann, T; Lierz, M
2012-01-01
The efficiency of two commercial PCR methods based on real-time technology, the foodproof® Salmonella detection system and the BAX® PCR Assay Salmonella system, was compared to standardized culture methods (EN ISO 6579:2002 - Annex D) for the detection of Salmonella spp. in poultry samples. Four sample matrices (feed, dust, boot swabs, feces) obtained directly from poultry flocks, as well as artificially spiked samples of the same matrices, were used. All samples were first tested for Salmonella spp. using culture methods as the gold standard. In addition, samples spiked with Salmonella Enteritidis were tested to evaluate the sensitivity of both PCR methods. Furthermore, all methods were evaluated in an annual ring-trial of the National Salmonella Reference Laboratory of Germany. Salmonella detection in the matrices feed, dust and boot swabs was comparable in both PCR systems, whereas the results from feces differed markedly. The quality, especially the freshness, of the fecal samples had an influence on the sensitivity of the real-time PCR and the results of the culture methods. In fresh fecal samples, an initial spiking level of 100 cfu/25 g Salmonella Enteritidis was detected. Fecal samples dried for two days allowed the detection of 14 cfu/25 g. Both real-time PCR protocols appear to be suitable for the detection of Salmonella spp. in all four matrices. The foodproof® system detected eight more samples as positive compared to the BAX® system, but had a potential false positive result in one case. In samples dried for seven days, neither method was able to detect Salmonella, likely owing to lethal cell damage. In general, the advantage of PCR analysis over the culture method is the reduction of working time from 4-5 days to only 2 days. However, especially for the analysis of fecal samples, official validation should be conducted according to the requirements of EN ISO 6579:2002 - Annex D.
DuPont Qualicon BAX System polymerase chain reaction assay. Performance Tested Method 100201.
Tice, George; Andaloro, Bridget; Fallon, Dawn; Wallace, F Morgan
2009-01-01
A recent outbreak of Salmonella in peanut butter has highlighted the need for validation of rapid detection methods. A multilaboratory study for detecting Salmonella in peanut butter was conducted as part of the AOAC Research Institute Emergency Response Validation program for methods that detect outbreak threats to food safety. Three sites tested spiked samples from the same master mix according to the U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA-BAM) method and the BAX System method. Salmonella Typhimurium (ATCC 14028) was grown in brain heart infusion for 24 h at 37 degrees C, then diluted to appropriate levels for sample inoculation. Master samples of peanut butter were spiked at high and low target levels, mixed, and allowed to equilibrate at room temperature for 2 weeks. Spike levels were low [1.08 most probable number (MPN)/25 g] and high (11.5 MPN/25 g), with unspiked samples serving as negative controls. Each master sample was divided into 25 g portions and coded to blind the samples. Twenty portions of each spiked master sample and five portions of the unspiked sample were tested at each site. At each testing site, samples were blended in 25 g portions with 225 mL prewarmed lactose broth until thoroughly homogenized, then allowed to remain at room temperature for 55-65 min. Samples were adjusted to a pH of 6.8 +/- 0.2, if necessary, and incubated for 22-26 h at 35 degrees C. Across the three reporting laboratories, the BAX System detected Salmonella in 10/60 low-spike samples and 58/60 high-spike samples. The reference FDA-BAM method yielded positive results for 11/60 low-spike and 58/60 high-spike samples. Neither method demonstrated positive results for any of the 15 unspiked samples.
Ruple-Czerniak, A; Bolte, D S; Burgess, B A; Morley, P S
2014-07-01
Nosocomial salmonellosis is an important problem in veterinary hospitals that treat horses and other large animals. Detection and mitigation of outbreaks and prevention of healthcare-associated infections often require detection of Salmonella enterica in the hospital environment. To compare 2 previously published methods for detecting environmental contamination with S. enterica in a large animal veterinary teaching hospital. Hospital-based comparison of environmental sampling techniques. A total of 100 pairs of environmental samples were collected from stalls used to house large animal cases (horses, cows or New World camelids) that were confirmed to be shedding S. enterica by faecal culture. Stalls were cleaned and disinfected prior to sampling, and the same areas within each stall were sampled for the paired samples. One method of detection used sterile, premoistened sponges that were cultured using thioglycolate enrichment before plating on XLT-4 agar. The other method used electrostatic wipes that were cultured using buffered peptone water, tetrathionate and Rappaport-Vassiliadis R10 broths before plating on XLT-4 agar. Salmonella enterica was recovered from 14% of samples processed using the electrostatic wipe sampling and culture procedure, whereas S. enterica was recovered from only 4% of samples processed using the sponge sampling and culture procedure. There was test agreement for 85 pairs of culture-negative samples and 3 pairs of culture-positive samples. However, the remaining 12 pairs of samples with discordant results created significant disagreement between the 2 detection methods (P<0.01). Persistence of Salmonella in the environment of veterinary hospitals can occur even with rigorous cleaning and disinfection. Use of sensitive methods for detection of environmental contamination is critical when detecting and mitigating this problem in veterinary hospitals. 
These results suggest that the electrostatic wipe sampling and culture method was more sensitive than the sponge sampling and culture method. © 2013 EVJ Ltd.
Filter forensics: microbiota recovery from residential HVAC filters.
Maestre, Juan P; Jennings, Wiley; Wylie, Dennis; Horner, Sharon D; Siegel, Jeffrey; Kinney, Kerry A
2018-01-30
Establishing reliable methods for assessing the microbiome within the built environment is critical for understanding the impact of biological exposures on human health. High-throughput DNA sequencing of dust samples provides valuable insights into the microbiome present in human-occupied spaces. However, the effect that different sampling methods have on the microbial community recovered from dust samples is not well understood across sample types. Heating, ventilation, and air conditioning (HVAC) filters hold promise as long-term, spatially integrated, high volume samplers to characterize the airborne microbiome in homes and other climate-controlled spaces. In this study, the effect that dust recovery method (i.e., cut and elution, swabbing, or vacuuming) has on the microbial community structure, membership, and repeatability inferred by Illumina sequencing was evaluated. The results indicate that vacuum samples captured higher quantities of total, bacterial, and fungal DNA than swab or cut samples. Repeated swab and vacuum samples collected from the same filter were less variable than cut samples with respect to both quantitative DNA recovery and bacterial community structure. Vacuum samples captured substantially greater bacterial diversity than the other methods, whereas fungal diversity was similar across all three methods. Vacuum and swab samples of HVAC filter dust were repeatable and generally superior to cut samples. Nevertheless, the contribution of environmental and human sources to the bacterial and fungal communities recovered via each sampling method was generally consistent across the methods investigated. Dust recovery methodologies have been shown to affect the recovery, repeatability, structure, and membership of microbial communities recovered from dust samples in the built environment. The results of this study are directly applicable to indoor microbiota studies utilizing the filter forensics approach. 
More broadly, this study provides a better understanding of the microbial community variability attributable to sampling methodology and helps inform interpretation of data collected from other types of dust samples collected from indoor environments.
Configurations and calibration methods for passive sampling techniques.
Ouyang, Gangfeng; Pawliszyn, Janusz
2007-10-19
Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.
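The three calibration regimes named in the review map onto simple closed-form expressions: linear (integrative) uptake, C_w = M/(R_s·t); equilibrium extraction, C_w = C_sampler/K_sw; and first-order kinetic uptake, M(t) = K_sw·V·C_w·(1 − e^(−k_e·t)). A hedged sketch, with all symbols and numbers illustrative rather than taken from any specific sampler:

```python
import math

def cw_linear(mass_ng, rs_l_per_day, days):
    """TWA water concentration (ng/L) under linear (integrative) uptake:
    C_w = M / (R_s * t). Valid while uptake is far from equilibrium."""
    return mass_ng / (rs_l_per_day * days)

def cw_equilibrium(c_sampler, k_sw):
    """Equilibrium extraction: C_w = C_sampler / K_sw."""
    return c_sampler / k_sw

def cw_kinetic(mass_ng, k_sw_v_l, ke_per_day, days):
    """Kinetic calibration with first-order uptake:
    M(t) = K_sw*V * C_w * (1 - exp(-ke*t)), solved for C_w."""
    return mass_ng / (k_sw_v_l * (1 - math.exp(-ke_per_day * days)))

# Illustrative numbers only: 50 ng accumulated over 10 d at R_s = 0.5 L/d
print("linear-regime C_w:", cw_linear(50, 0.5, 10), "ng/L")
```

In practice, k_e is often estimated in situ by loading the sampler with a calibrant and measuring its first-order loss during deployment.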
19 CFR 151.83 - Method of sampling.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Method of sampling. 151.83 Section 151.83 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Cotton § 151.83 Method of sampling. For...
Comparing three sampling techniques for estimating fine woody down dead biomass
Robert E. Keane; Kathy Gray
2013-01-01
Designing woody fuel sampling methods that quickly, accurately and efficiently assess biomass at relevant spatial scales requires extensive knowledge of each sampling method's strengths, weaknesses and tradeoffs. In this study, we compared various modifications of three common sampling methods (planar intercept, fixed-area microplot and photoload) for estimating...
NASA Technical Reports Server (NTRS)
Carson, John M., III; Bayard, David S.
2006-01-01
G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
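The estimation idea, inferring collected mass from thruster forces and the resulting dynamics, can be illustrated in one dimension. This toy least-squares version assumes F = m·a for each firing and a known dry mass; the actual G-SAMPLE estimator uses a maximum-likelihood formulation over a full spacecraft dynamics model with force-sensor measurements, and every number below is hypothetical.

```python
import random

def estimate_sample_mass(thrusts_n, accels_mps2, dry_mass_kg):
    """Least-squares total-mass estimate from paired thrust/acceleration
    measurements, assuming F = m*a per firing, minus the known dry mass.
    A 1-D toy version of the idea, not the flight algorithm."""
    num = sum(f * a for f, a in zip(thrusts_n, accels_mps2))
    den = sum(a * a for a in accels_mps2)
    return num / den - dry_mass_kg

# Hypothetical numbers: 500 kg dry spacecraft, 1 kg collected sample,
# 0.05% multiplicative noise on the measured accelerations
rng = random.Random(1)
true_total = 501.0
thrusts = [10.0, 12.0, 15.0, 9.0, 11.0]
accels = [f / true_total * (1 + rng.gauss(0, 0.0005)) for f in thrusts]
est_kg = estimate_sample_mass(thrusts, accels, 500.0)
print(f"estimated sample mass: {est_kg * 1000:.0f} g")
```

Averaging over multiple firings shrinks the mass uncertainty, which is why the error budget above depends on thrust profile error, sensor noise, and data rate.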
[Respondent-Driven Sampling: a new sampling method to study visible and hidden populations].
Mantecón, Alejandro; Juan, Montse; Calafat, Amador; Becoña, Elisardo; Román, Encarna
2008-01-01
The paper introduces a variant of chain-referral sampling: respondent-driven sampling (RDS). This sampling method shows that methods based on network analysis can be combined with the statistical validity of standard probability sampling methods. In this sense, RDS appears to be a mathematical improvement of snowball sampling oriented to the study of hidden populations. However, we try to prove its validity with populations that are not within a sampling frame but can nonetheless be contacted without difficulty. The basics of RDS are explained through our research on young people (aged 14 to 25) who go clubbing, consume alcohol and other drugs, and have sex. Fieldwork was carried out between May and July 2007 in three Spanish regions: Baleares, Galicia and Comunidad Valenciana. The presentation of the study shows the utility of this type of sampling when the population is accessible but a difficulty arises from the lack of a sampling frame. However, the sample obtained is not a statistically representative random sample of the target population. It must be acknowledged that the final sample is representative of a 'pseudo-population' that approximates the target population but is not identical to it.
Rothrock, Michael J.; Hiett, Kelli L.; Gamble, John; Caudill, Andrew C.; Cicconi-Hogan, Kellie M.; Caporaso, J. Gregory
2014-01-01
The efficacy of DNA extraction protocols can be highly dependent upon both the type of sample being investigated and the types of downstream analyses performed. Considering that the use of new bacterial community analysis techniques (e.g., microbiomics, metagenomics) is becoming more prevalent in the agricultural and environmental sciences and many environmental samples within these disciplines can be physiochemically and microbiologically unique (e.g., fecal and litter/bedding samples from the poultry production spectrum), appropriate and effective DNA extraction methods need to be carefully chosen. Therefore, a novel semi-automated hybrid DNA extraction method was developed specifically for use with environmental poultry production samples. This method is a combination of the two major types of DNA extraction: mechanical and enzymatic. A two-step intense mechanical homogenization step (using bead-beating specifically formulated for environmental samples) was added to the beginning of the “gold standard” enzymatic DNA extraction method for fecal samples to enhance the removal of bacteria and DNA from the sample matrix and improve the recovery of Gram-positive bacterial community members. Once the enzymatic extraction portion of the hybrid method was initiated, the remaining purification process was automated using a robotic workstation to increase sample throughput and decrease sample processing error. In comparison to the strict mechanical and enzymatic DNA extraction methods, this novel hybrid method provided the best overall combined performance when considering quantitative (using 16S rRNA qPCR) and qualitative (using microbiomics) estimates of the total bacterial communities when processing poultry feces and litter samples. PMID:25548939
Myatt, Mark; Mai, Nguyen Phuong; Quynh, Nguyen Quang; Nga, Nguyen Huy; Tai, Ha Huy; Long, Nguyen Hung; Minh, Tran Hung; Limburg, Hans
2005-10-01
To report on the use of lot quality-assurance sampling (LQAS) surveys undertaken within an area-sampling framework to identify priority areas for intervention with trachoma control activities in Viet Nam. The LQAS survey method for the rapid assessment of the prevalence of active trachoma was adapted for use in Viet Nam with the aim of classifying individual communes by the prevalence of active trachoma among children in primary school. School-based sampling was used; school sites to be sampled were selected using an area-sampling approach. A total of 719 communes in 41 districts in 18 provinces were surveyed. Survey staff found the LQAS survey method both simple and rapid to use after initial problems with area-sampling methods were identified and remedied. The method yielded a finer spatial resolution of prevalence than had been previously achieved in Viet Nam using semiquantitative rapid assessment surveys and multistage cluster-sampled surveys. When used with area-sampling techniques, the LQAS survey method has the potential to form the basis of survey instruments that can be used to efficiently target resources for interventions against active trachoma. With additional work, such methods could provide a generally applicable tool for effective programme planning and for the certification of the elimination of trachoma as a blinding disease.
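An LQAS classification rule is a simple binomial decision: sample n children and classify the commune as high-prevalence if more than d of them have active trachoma. The sketch below computes the operating characteristic of such a rule; n = 50 and d = 7 are illustrative choices, not the values used in the Viet Nam surveys.

```python
from math import comb

def prob_classified_high(n, d, p):
    """Probability a lot is classified 'high prevalence' under an LQAS
    rule 'more than d cases among n sampled children', assuming cases
    follow a binomial(n, p) distribution (n and d are illustrative)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(d + 1, n + 1))

# Operating characteristic of an example rule: n = 50, decision value d = 7
for p in (0.05, 0.10, 0.20, 0.30):
    print(f"true prevalence {p:.0%}: "
          f"P(classified high) = {prob_classified_high(50, 7, p):.2f}")
```

Choosing n and d amounts to fixing acceptable misclassification risks at a low and a high prevalence threshold, which is what makes LQAS fast: no prevalence estimate is needed, only a pass/fail count.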
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
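The pre-sampling-plus-simulation approach can be sketched directly: resample the pre-sampling data to estimate, for each candidate sample size n, the probability that a sample mean lands within a tolerance of the pre-sample mean. The counts and tolerance below are invented for illustration, not the midge data.

```python
import random
import statistics

def coverage(presample, n, tol=0.25, reps=2000, seed=0):
    """Fraction of simulated samples of size n (drawn with replacement
    from pre-sampling data) whose mean falls within +/- tol of the
    pre-sample mean. A sketch of the simulation approach, not the
    authors' software."""
    rng = random.Random(seed)
    mu = statistics.fmean(presample)
    hits = 0
    for _ in range(reps):
        m = statistics.fmean(rng.choices(presample, k=n))
        if abs(m - mu) <= tol * mu:
            hits += 1
    return hits / reps

# Illustrative clumped counts (e.g., galls per tree at one site)
counts = [0, 0, 1, 0, 2, 5, 0, 1, 0, 8, 0, 3, 1, 0, 2, 0, 0, 4, 1, 0]
for n in (10, 25, 40):
    print(f"n={n}: P(within 25% of mean) = {coverage(counts, n):.2f}")
```

A practitioner would pick the smallest n whose coverage meets a preset target, sidestepping the distributional assumptions that sequential sampling requires.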
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-23
... Alimentarius Commission: Meeting of the Codex Committee on Methods of Analysis and Sampling AGENCY: Office of... discussed at the 33rd Session of the Codex Committee on Methods of Analysis and Sampling (CCMAS) of the... the criteria appropriate to Codex Methods of Analysis and Sampling; serving as a coordinating body for...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
... Alimentarius Commission: Meeting of the Codex Committee on Methods of Analysis and Sampling AGENCY: Office of... discussed at the 32nd session of the Codex Committee on Methods of Analysis and Sampling (CCMAS) of the... appropriate to Codex Methods of Analysis and Sampling; serving as a coordinating body for Codex with other...
Myatt, Mark; Mai, Nguyen Phuong; Quynh, Nguyen Quang; Nga, Nguyen Huy; Tai, Ha Huy; Long, Nguyen Hung; Minh, Tran Hung; Limburg, Hans
2005-01-01
OBJECTIVE: To report on the use of lot quality-assurance sampling (LQAS) surveys undertaken within an area-sampling framework to identify priority areas for intervention with trachoma control activities in Viet Nam. METHODS: The LQAS survey method for the rapid assessment of the prevalence of active trachoma was adapted for use in Viet Nam with the aim of classifying individual communes by the prevalence of active trachoma among children in primary school. School-based sampling was used; school sites to be sampled were selected using an area-sampling approach. A total of 719 communes in 41 districts in 18 provinces were surveyed. FINDINGS: Survey staff found the LQAS survey method both simple and rapid to use after initial problems with area-sampling methods were identified and remedied. The method yielded a finer spatial resolution of prevalence than had been previously achieved in Viet Nam using semiquantitative rapid assessment surveys and multistage cluster-sampled surveys. CONCLUSION: When used with area-sampling techniques, the LQAS survey method has the potential to form the basis of survey instruments that can be used to efficiently target resources for interventions against active trachoma. With additional work, such methods could provide a generally applicable tool for effective programme planning and for the certification of the elimination of trachoma as a blinding disease. PMID:16283052
Observational studies of patients in the emergency department: a comparison of 4 sampling methods.
Valley, Morgan A; Heard, Kennon J; Ginde, Adit A; Lezotte, Dennis C; Lowenstein, Steven R
2012-08-01
We evaluate the ability of 4 sampling methods to generate representative samples of the emergency department (ED) population. We analyzed the electronic records of 21,662 consecutive patient visits at an urban, academic ED. From this population, we simulated different models of study recruitment in the ED by using 2 sample sizes (n=200 and n=400) and 4 sampling methods: true random, random 4-hour time blocks by exact sample size, random 4-hour time blocks by a predetermined number of blocks, and convenience or "business hours." For each method and sample size, we obtained 1,000 samples from the population. Using χ² tests, we measured the number of statistically significant differences between the sample and the population for 8 variables (age, sex, race/ethnicity, language, triage acuity, arrival mode, disposition, and payer source). Then, for each variable, method, and sample size, we compared the proportion of the 1,000 samples that differed from the overall ED population to the expected proportion (5%). Only the true random samples represented the population with respect to sex, race/ethnicity, triage acuity, mode of arrival, language, and payer source in at least 95% of the samples. Patient samples obtained using random 4-hour time blocks and business hours sampling systematically differed from the overall ED patient population for several important demographic and clinical variables. However, the magnitude of these differences was not large. Common sampling strategies selected for ED-based studies may affect parameter estimates for several representative population variables. However, the potential for bias for these variables appears small. Copyright © 2012. Published by Mosby, Inc.
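The sample-versus-population comparison used in this study can be sketched as a χ² goodness-of-fit statistic over one categorical variable. The triage-acuity proportions and counts below are invented for illustration, not the study's data.

```python
def chi_square_stat(observed, pop_props):
    """Chi-square goodness-of-fit statistic comparing a sample's category
    counts with the full-population proportions."""
    n = sum(observed)
    stat = 0.0
    for obs, p in zip(observed, pop_props):
        expected = n * p
        stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical triage-acuity mix: population vs. two n=200 samples.
pop = [0.10, 0.30, 0.40, 0.20]    # acuity levels 1-4
biz_hours = [10, 70, 90, 30]      # business-hours sample, skewed
random_like = [21, 59, 81, 39]    # random sample, close to population
# With df = 3, a statistic above 7.81 differs significantly at the 0.05 level.
```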
A Typology of Mixed Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.
2007-01-01
This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…
Sampling bee communities using pan traps: alternative methods increase sample size
USDA-ARS?s Scientific Manuscript database
Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...
Neutron activation analysis of certified samples by the absolute method
NASA Astrophysics Data System (ADS)
Kadem, F.; Belouadah, N.; Idiri, Z.
2015-07-01
The nuclear reaction analysis technique is mainly based on the relative method or on the use of activation cross sections. In order to validate nuclear data for cross sections evaluated from systematic studies, we used the neutron activation analysis (NAA) technique to determine the concentrations of the various constituents of certified samples of animal blood, milk and hay. In this analysis, the absolute method is used. The neutron activation technique involves irradiating the sample and subsequently measuring its activity. The fundamental activation equation connects several physical parameters, including the cross section, which is essential for the quantitative determination of the different elements composing the sample without resorting to a standard sample. This so-called absolute method allows measurements as accurate as the relative method. The results showed that values obtained by the absolute method are as precise as those from the relative method, which requires a standard sample for each element to be quantified.
Evaluation of the Biological Sampling Kit (BiSKit) for Large-Area Surface Sampling
Buttner, Mark P.; Cruz, Patricia; Stetzenbach, Linda D.; Klima-Comba, Amy K.; Stevens, Vanessa L.; Emanuel, Peter A.
2004-01-01
Current surface sampling methods for microbial contaminants are designed to sample small areas and utilize culture analysis. The total number of microbes recovered is low because a small area is sampled, making detection of a potential pathogen more difficult. Furthermore, sampling of small areas requires a greater number of samples to be collected, which delays the reporting of results, taxes laboratory resources and staffing, and increases analysis costs. A new biological surface sampling method, the Biological Sampling Kit (BiSKit), designed to sample large areas and to be compatible with testing with a variety of technologies, including PCR and immunoassay, was evaluated and compared to other surface sampling strategies. In experimental room trials, wood laminate and metal surfaces were contaminated by aerosolization of Bacillus atrophaeus spores, a simulant for Bacillus anthracis, into the room, followed by settling of the spores onto the test surfaces. The surfaces were sampled with the BiSKit, a cotton-based swab, and a foam-based swab. Samples were analyzed by culturing, quantitative PCR, and immunological assays. The results showed that the large surface area (1 m²) sampled with the BiSKit resulted in concentrations of B. atrophaeus in samples that were up to 10-fold higher than the concentrations obtained with the other methods tested. A comparison of wet and dry sampling with the BiSKit indicated that dry sampling was more efficient (efficiency, 18.4%) than wet sampling (efficiency, 11.3%). The sensitivities of detection of B. atrophaeus on metal surfaces were 42 ± 5.8 CFU/m² for wet sampling and 100.5 ± 10.2 CFU/m² for dry sampling. These results demonstrate that the use of a sampling device capable of sampling larger areas results in higher sensitivity than that obtained with currently available methods and has the advantage of sampling larger areas, thus requiring collection of fewer samples per site. PMID:15574898
Nonprobability and probability-based sampling strategies in sexual science.
Catania, Joseph A; Dolcini, M Margaret; Orellana, Roberto; Narayanan, Vasudah
2015-01-01
With few exceptions, much of sexual science builds upon data from opportunistic nonprobability samples of limited generalizability. Although probability-based studies are considered the gold standard in terms of generalizability, they are costly to apply to many of the hard-to-reach populations of interest to sexologists. The present article discusses recent conclusions by sampling experts that have relevance to sexual science that advocates for nonprobability methods. In this regard, we provide an overview of Internet sampling as a useful, cost-efficient, nonprobability sampling method of value to sex researchers conducting modeling work or clinical trials. We also argue that probability-based sampling methods may be more readily applied in sex research with hard-to-reach populations than is typically thought. In this context, we provide three case studies that utilize qualitative and quantitative techniques directed at reducing limitations in applying probability-based sampling to hard-to-reach populations: indigenous Peruvians, African American youth, and urban men who have sex with men (MSM). Recommendations are made with regard to presampling studies, adaptive and disproportionate sampling methods, and strategies that may be utilized in evaluating nonprobability and probability-based sampling methods.
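The disproportionate sampling strategy mentioned above — oversample a rare, hard-to-reach stratum, then reweight stratum means by population shares — can be sketched in stdlib Python. The strata, outcome values, and allocations below are invented for illustration.

```python
import random

def disproportionate_sample(strata, alloc, seed=7):
    """Draw a fixed allocation from each stratum; rare strata get a far
    larger sampling fraction than their population share."""
    rng = random.Random(seed)
    return {name: rng.sample(units, alloc[name]) for name, units in strata.items()}

def weighted_mean(sample, strata):
    """Reweight stratum means by population shares so the estimate refers
    to the whole population, not the lopsided sample."""
    total = sum(len(units) for units in strata.values())
    return sum((len(strata[name]) / total) * (sum(vals) / len(vals))
               for name, vals in sample.items())

# Hypothetical outcome scores: a large general stratum (mean 2) and a
# small hard-to-reach stratum (mean 9).
strata = {"general": [1, 2, 3] * 300, "hard_to_reach": [8, 9, 10] * 10}
alloc = {"general": 30, "hard_to_reach": 15}   # half the rare stratum
sample = disproportionate_sample(strata, alloc)
estimate = weighted_mean(sample, strata)
```

Without the reweighting step, the oversampled rare stratum would pull the naive sample mean far above the population mean.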
NASA Astrophysics Data System (ADS)
Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.
2016-12-01
The impact of climate change has been observed throughout the globe. Ecosystems are experiencing rapid changes such as vegetation shifts and species extinctions. In this context, the Species Distribution Model (SDM) is a popular method for projecting the impact of climate change on an ecosystem. Because an SDM is based on the niche of a particular species, presence point data are essential for running one. To run an SDM for plants, certain characteristics of vegetation must be considered. Normally, remote sensing techniques are used to map vegetation over large areas. In other words, the exact presence points carry high uncertainty, since presence data are selected from polygon and raster datasets. Thus, sampling methods for generating vegetation presence data should be chosen carefully. In this study, we used three different sampling methods to select vegetation presence data: random sampling, stratified sampling, and site-index-based sampling. We used the R package BIOMOD2 to assess uncertainty from modeling, and included BioCLIM variables and other environmental variables as input data. As a result, despite differences among the 10 SDMs, the sampling methods produced different ROC values: random sampling gave the lowest ROC value, while site-index-based sampling gave the highest. This study shows that the uncertainties arising from presence-data sampling methods and SDMs can be quantified.
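The contrast between random and stratified selection of presence points can be sketched as follows: group candidate raster cells by an environmental stratum and draw a fixed number from each, so no single zone dominates the training data. The toy raster cells and elevation bands are invented, and this does not reproduce the study's site-index method.

```python
import random

def stratified_presence_sample(cells, n_per_stratum, key, seed=0):
    """Pick presence points per environmental stratum instead of purely at
    random, so every stratum is represented in the SDM training data."""
    rng = random.Random(seed)
    strata = {}
    for cell in cells:
        strata.setdefault(key(cell), []).append(cell)
    picked = []
    for name in sorted(strata):
        picked.extend(rng.sample(strata[name], min(n_per_stratum, len(strata[name]))))
    return picked

# Hypothetical 10x10 raster: (x, y, elevation_band)
cells = [(x, y, "low" if y < 5 else "high") for x in range(10) for y in range(10)]
sample_pts = stratified_presence_sample(cells, 3, key=lambda c: c[2])
```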
Duyvejonck, Hans; Cools, Piet; Decruyenaere, Johan; Roelens, Kristien; Noens, Lucien; Vermeulen, Stefan; Claeys, Geert; Decat, Ellen; Van Mechelen, Els; Vaneechoutte, Mario
2015-01-01
Candida species are known as opportunistic pathogens and a possible cause of invasive infections. Because of their species-specific antimycotic resistance patterns, reliable techniques for their detection, quantification and identification are needed. We validated a DNA amplification method for direct detection of Candida spp. from clinical samples, namely ITS2 High Resolution Melting Analysis (the direct method), by comparing it with a culture- and MALDI-TOF mass spectrometry-based method (the indirect method) to establish the presence of Candida species in three different types of clinical samples. A total of 347 clinical samples, i.e. throat swabs, rectal swabs and vaginal swabs, were collected from the gynaecology/obstetrics, intensive care and haematology wards at the Ghent University Hospital, Belgium. For the direct method, ITS2-HRM was preceded by NucliSENS easyMAG DNA extraction directly on the clinical samples. For the indirect method, clinical samples were cultured on Candida ID and individual colonies were identified by MALDI-TOF. For 83.9% of the samples there was complete concordance between the two techniques, i.e. the same Candida species were detected in 31.1% of the samples or no Candida species were detected in 52.8% of the samples. In 16.1% of the clinical samples, discrepant results were obtained, of which only 6.01% were considered major discrepancies. Discrepancies occurred mostly when overall numbers of Candida cells in the samples were low and/or when multiple species were present in the sample. Most of the discrepancies could be resolved in favour of the direct method. This was due to samples in which no yeast could be cultured whereas low amounts could be detected by the direct method, and to samples in which high quantities of Candida robusta according to ITS2-HRM were missed by culture on Candida ID agar. It remains to be decided whether the diagnostic advantages of the direct method compensate for its disadvantages.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathi, Ashish; McNulty, Ian; Munson, Todd
We propose a new approach to robustly retrieve the exit wave of an extended sample from its coherent diffraction pattern by exploiting sparsity of the sample's edges. This approach enables imaging of an extended sample with a single view, without ptychography. We introduce nonlinear optimization methods that promote sparsity, and we derive update rules to robustly recover the sample's exit wave. We test these methods on simulated samples by varying the sparsity of the edge-detected representation of the exit wave. Finally, our tests illustrate the strengths and limitations of the proposed method in imaging extended samples.
Herrington, Jason S; Fan, Zhi-Hua Tina; Lioy, Paul J; Zhang, Junfeng Jim
2007-01-15
Airborne aldehyde and ketone (carbonyl) sampling methodologies based on derivatization with 2,4-dinitrophenylhydrazine (DNPH)-coated solid sorbents could unequivocally be considered the "gold" standard. Originally developed in the late 1970s, these methods have been extensively evaluated and developed up to the present day. However, these methods have been inadequately evaluated for the long-term (i.e., 24 h or greater) sampling collection efficiency (CE) of carbonyls other than formaldehyde. The current body of literature fails to demonstrate that DNPH-coated solid sorbent sampling methods have acceptable CEs for the long-term sampling of carbonyls other than formaldehyde. Despite this, such methods are widely used to report the concentrations of multiple carbonyls from long-term sampling, assuming approximately 100% CEs. Laboratory experiments were conducted in this study to evaluate the long-term formaldehyde and acetaldehyde sampling CEs for several commonly used DNPH-coated solid sorbents. Results from sampling known concentrations of formaldehyde and acetaldehyde generated in a dynamic atmosphere generation system demonstrate that the 24-hour formaldehyde sampling CEs ranged from 83 to 133%, confirming the findings made in previous studies. However, the 24-hour acetaldehyde sampling CEs ranged from 1 to 62%. Attempts to increase the acetaldehyde CEs by adding acid to the samples after sampling were unsuccessful. These results indicate that assuming approximately 100% CEs for 24-hour acetaldehyde sampling, as commonly done with DNPH-coated solid sorbent methods, would substantially underestimate acetaldehyde concentrations.
Evaluating Composite Sampling Methods of Bacillus Spores at Low Concentrations
Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.; Hutchison, Janine R.
2016-01-01
Restoring all facility operations after the 2001 Amerithrax attacks took years to complete, highlighting the need to reduce remediation time. Some of the most time-intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite (SM-SPC): a single cellulose sponge samples multiple coupons with a single pass across each coupon; 2) single medium multi-pass composite (SM-MPC): a single cellulose sponge samples multiple coupons with multiple passes across each coupon; and 3) multi-medium post-sample composite (MM-MPC): a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm²). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted dry wallboard) and three grime-coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p < 0.0001) and coupon material (p = 0.0006). Recovery efficiency (RE) was higher overall using the MM-MPC method compared to the SM-SPC and SM-MPC methods. RE with the MM-MPC method for concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, dry wall, and stainless steel for clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but lower for ceramic tile.
These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event of clean or dirty surfaces. PMID:27736999
Method for chromium analysis and speciation
Aiken, Abigail M.; Peyton, Brent M.; Apel, William A.; Petersen, James N.
2004-11-02
A method of detecting a metal in a sample comprising a plurality of metals is disclosed. The method comprises providing the sample comprising a metal to be detected. The sample is added to a reagent solution comprising an enzyme and a substrate, where the enzyme is inhibited by the metal to be detected. An array of chelating agents is used to eliminate the inhibitory effects of additional metals in the sample. An enzymatic activity in the sample is determined and compared to an enzymatic activity in a control solution to detect the metal to be detected. A method of determining a concentration of the metal in the sample is also disclosed. A method of detecting a valence state of a metal is also disclosed.
40 CFR 80.8 - Sampling methods for gasoline and diesel fuel.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Sampling methods for gasoline and... gasoline and diesel fuel. The sampling methods specified in this section shall be used to collect samples of gasoline and diesel fuel for purposes of determining compliance with the requirements of this part...
40 CFR 80.8 - Sampling methods for gasoline and diesel fuel.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Sampling methods for gasoline and... gasoline and diesel fuel. The sampling methods specified in this section shall be used to collect samples of gasoline and diesel fuel for purposes of determining compliance with the requirements of this part...
40 CFR 80.8 - Sampling methods for gasoline and diesel fuel.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Sampling methods for gasoline and... gasoline and diesel fuel. The sampling methods specified in this section shall be used to collect samples of gasoline and diesel fuel for purposes of determining compliance with the requirements of this part...
Flexible sampling large-scale social networks by self-adjustable random walk
NASA Astrophysics Data System (ADS)
Xu, Xiao-Ke; Zhu, Jonathan J. H.
2016-12-01
Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity, and often the unavailability, of OSN population data. Sampling thus becomes perhaps the only feasible solution to these problems. How to draw samples that can represent the underlying OSNs has remained a formidable task for a number of conceptual and methodological reasons. In particular, most empirically driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real and complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method is able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a highly needed sampling tool, for the methodological development of large-scale network sampling through comparative evaluation of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge and assumptions about large-scale real OSN data.
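Of the baseline methods named above, MHRW is the easiest to sketch: a plain random walk oversamples high-degree nodes in proportion to their degree, and the Metropolis-Hastings correction accepts a step from v to a neighbour w with probability min(1, deg(v)/deg(w)), making the stationary distribution uniform over nodes. The toy graph below is invented, and this sketches MHRW only, not the paper's SARW method.

```python
import random

def mhrw_sample(adj, start, steps, seed=3):
    """Metropolis-Hastings random walk over an undirected graph given as an
    adjacency dict; rejected moves re-record the current node."""
    rng = random.Random(seed)
    v, visited = start, []
    for _ in range(steps):
        w = rng.choice(adj[v])
        if rng.random() < min(1.0, len(adj[v]) / len(adj[w])):
            v = w
        visited.append(v)
    return visited

# Hypothetical hub-and-chain graph: node 0 has degree 5, the rest 2-3.
adj = {0: [1, 2, 3, 4, 5], 1: [0, 2], 2: [0, 1, 3],
       3: [0, 2, 4], 4: [0, 3, 5], 5: [0, 4]}
walk = mhrw_sample(adj, 0, 5000)
# A plain RW would visit the hub ~deg/sum(deg) = 25% of the time;
# MHRW should pull this toward the uniform share of 1/6.
hub_share = walk.count(0) / len(walk)
```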
Performance of Traditional and Molecular Methods for Detecting Biological Agents in Drinking Water
Francy, Donna S.; Bushon, Rebecca N.; Brady, Amie M.G.; Bertke, Erin E.; Kephart, Christopher M.; Likirdopulos, Christina A.; Mailot, Brian E.; Schaefer, Frank W.; Lindquist, H.D. Alan
2009-01-01
To reduce the impact from a possible bioterrorist attack on drinking-water supplies, analytical methods are needed to rapidly detect the presence of biological agents in water. To this end, 13 drinking-water samples were collected at 9 water-treatment plants in Ohio to assess the performance of a molecular method in comparison to traditional analytical methods that take longer to perform. Two 100-liter samples were collected at each site during each sampling event; one was seeded in the laboratory with six biological agents - Bacillus anthracis (B. anthracis), Burkholderia cepacia (as a surrogate for Bu. pseudomallei), Francisella tularensis (F. tularensis), Salmonella Typhi (S. Typhi), Vibrio cholerae (V. cholerae), and Cryptosporidium parvum (C. parvum). The seeded and unseeded samples were processed by ultrafiltration and analyzed by use of quantitative polymerase chain reaction (qPCR), a molecular method, and by culture methods for bacterial agents or the immunomagnetic separation/fluorescent antibody (IMS/FA) method for C. parvum as traditional methods. Six replicate seeded samples were also processed and analyzed. For traditional methods, recoveries were highly variable between samples and even between some replicate samples, ranging from below detection to greater than 100 percent. Recoveries were significantly related to water pH, specific conductance, and dissolved organic carbon (DOC) for all bacteria combined by culture methods, but none of the water-quality characteristics tested were related to recoveries of C. parvum by IMS/FA. Recoveries were not determined by qPCR because of problems in quantifying organisms by qPCR in the composite seed. Instead, qPCR results were reported as detected, not detected (no qPCR signal), or +/- detected (cycle threshold, or 'Ct', values were greater than 40). Several sample results by qPCR were omitted from the dataset because of possible problems with qPCR reagents, primers, and probes.
For the remaining 14 qPCR results (including some replicate samples), F. tularensis and V. cholerae were detected in all samples after ultrafiltration, B. anthracis was detected in 13 and +/- detected in 1 sample, and C. parvum was detected in 9 and +/- detected in 4 samples. Bu. cepacia was detected in nine samples, +/- detected in two samples, and not detected in three samples (for two out of three samples not detected, a different strain was used). The qPCR assay for V. cholerae provided two false positive - but late - signals in one unseeded sample. Numbers found by qPCR after ultrafiltration were significantly or nearly significantly related to those found by traditional methods for B. anthracis, F. tularensis, and V. cholerae but not for Bu. cepacia and C. parvum. A qPCR assay for S. Typhi was not available. The qPCR method can be used to rapidly detect B. anthracis, F. tularensis, and V. cholerae with some certainty in drinking-water samples, but additional work would be needed to optimize and test qPCR for Bu. cepacia and C. parvum and establish relations to traditional methods. The specificity for the V. cholerae assay needs to be further investigated. Evidence is provided that ultrafiltration and qPCR are promising methods to rapidly detect biological agents in the Nation's drinking-water supplies and thus reduce the impact and consequences from intentional bioterrorist events. To our knowledge, this is the first study to compare the use of traditional and qPCR methods to detect biological agents in large-volume drinking-water samples.
Systematic Evaluation of Aggressive Air Sampling for Bacillus ...
Report The primary objectives of this project were to evaluate the Aggressive Air Sampling (AAS) method compared to currently used surface sampling methods and to determine if AAS is a viable option for sampling Bacillus anthracis spores.
Evaluation of Surface Sampling for Bacillus Spores Using ...
Report The primary objectives of this project were to evaluate the Aggressive Air Sampling (AAS) method compared to currently used surface sampling methods and to determine if AAS is a viable option for sampling Bacillus anthracis spores.
Effects of Sample Preparation on the Infrared Reflectance Spectra of Powders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brauer, Carolyn S.; Johnson, Timothy J.; Myers, Tanya L.
2015-05-22
While reflectance spectroscopy is a useful tool in identifying molecular compounds, laboratory measurement of solid (particularly powder) samples often is confounded by sample preparation methods. For example, both the packing density and surface roughness can have an effect on the quantitative reflectance spectra of powdered samples. Recent efforts in our group have focused on developing standard methods for measuring reflectance spectra that account for sample preparation, as well as other factors such as particle size and provenance. In this work, the effect of preparation method on sample reflectivity was investigated by measuring the directional-hemispherical spectra of samples that were hand-packed as well as pressed into pellets, using an integrating sphere attached to a Fourier transform infrared spectrometer. The results show that the methods used to prepare the sample have a substantial effect on the measured reflectance spectra, as do other factors such as particle size.
Effects of sample preparation on the infrared reflectance spectra of powders
NASA Astrophysics Data System (ADS)
Brauer, Carolyn S.; Johnson, Timothy J.; Myers, Tanya L.; Su, Yin-Fong; Blake, Thomas A.; Forland, Brenda M.
2015-05-01
While reflectance spectroscopy is a useful tool for identifying molecular compounds, laboratory measurement of solid (particularly powder) samples often is confounded by sample preparation methods. For example, both the packing density and surface roughness can have an effect on the quantitative reflectance spectra of powdered samples. Recent efforts in our group have focused on developing standard methods for measuring reflectance spectra that account for sample preparation, as well as other factors such as particle size and provenance. In this work, the effect of preparation method on sample reflectivity was investigated by measuring the directional-hemispherical spectra of samples that were hand-loaded as well as pressed into pellets, using an integrating sphere attached to a Fourier transform infrared spectrometer. The results show that the methods used to prepare the sample can have a substantial effect on the measured reflectance spectra, as do other factors such as particle size.
Subrandom methods for multidimensional nonuniform sampling.
Worley, Bradley
2016-08-01
Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics. Copyright © 2016 Elsevier Inc. All rights reserved.
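A minimal seed-free subrandom schedule can be built from the golden-ratio additive recurrence x_k = frac(k·φ), whose points are low-discrepancy on [0, 1). The grid size and point count below are arbitrary, and this sketch maps onto a uniform rather than a weighted Nyquist grid.

```python
import math

def subrandom_schedule(grid_size, n):
    """Seed-free subrandom (additive-recurrence) sampling schedule: map
    the low-discrepancy sequence frac(k*phi) onto grid indices. No
    pseudorandom seed is needed, so the schedule is fully reproducible."""
    phi = (math.sqrt(5) - 1) / 2   # conjugate golden ratio
    points = []
    for k in range(1, n + 1):
        x = (k * phi) % 1.0
        idx = int(x * grid_size)
        if idx not in points:      # guard against bin collisions
            points.append(idx)
    return sorted(points)

sched = subrandom_schedule(128, 48)   # 48 of 128 grid points, evenly spread
```

Because consecutive golden-ratio points are never closer than the three-distance bound, the 48 points here land in 48 distinct bins of the 128-point grid.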
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and of nonlinearity between sample properties and spectra in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and the unknown samples; the Euclidean distance between the net analyte signal of each unknown sample and those of the calibration samples was then calculated and used as a similarity index. According to this similarity index, a local calibration set was selected individually for each unknown sample. Finally, a local PLS regression model was built on each local calibration set for each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and of a conventional local regression algorithm based on spectral Euclidean distance.
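The workflow the abstract describes (compute a similarity index, select a local calibration set per unknown sample, fit a local model) can be sketched as below. For simplicity, ordinary least squares stands in for PLS and raw spectra stand in for net analyte signals; both substitutions, and all parameter values, are illustrative assumptions.

```python
import numpy as np

def local_calibration_predict(X_cal, y_cal, x_new, k=15):
    """Predict the property of one unknown sample from a local calibration set.

    The Euclidean distance between the unknown sample and every calibration
    sample serves as the similarity index (the paper applies it to net
    analyte signals; raw spectra are used here for simplicity), and a local
    model is fitted to the k most similar samples. Ordinary least squares
    stands in for the local PLS model of the abstract.
    """
    d = np.linalg.norm(X_cal - x_new, axis=1)   # similarity index: Euclidean distance
    local = np.argsort(d)[:k]                   # k nearest -> local calibration set
    A = np.c_[X_cal[local], np.ones(k)]         # design matrix with intercept column
    coef, *_ = np.linalg.lstsq(A, y_cal[local], rcond=None)
    return float(np.r_[x_new, 1.0] @ coef)
```

With a truly linear relation between spectra and property, the local fit recovers the prediction essentially exactly; the benefit over a single global model appears when the relation is nonlinear, as the abstract argues.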
Molecular cancer classification using a meta-sample-based regularized robust coding method.
Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen
2014-01-01
Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to that of existing MSRC-based methods and better than that of other state-of-the-art dimension-reduction-based methods.
Valid statistical inference methods for a case-control study with missing data.
Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun
2018-04-01
The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion of the Wald test for testing independence under the two existing sampling distributions could be completely different (even contradictory) from that of the Wald test for testing the equality of the success probabilities in the control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.
Extending the solvent-free MALDI sample preparation method.
Hanton, Scott D; Parees, David M
2005-01-01
Matrix-assisted laser desorption/ionization (MALDI) mass spectrometry is an important technique to characterize many different materials, including synthetic polymers. MALDI mass spectral data can be used to determine the polymer average molecular weights, repeat units, and end groups. One of the key issues in traditional MALDI sample preparation is making good solutions of the analyte and the matrix. Solvent-free sample preparation methods have been developed to address these issues. Previous results for solvent-free or dry-prepared samples show some advantages over traditional wet sample preparation methods. Although the published solvent-free sample preparation methods produced excellent mass spectra, we found the method to be very time-consuming, with significant tool cleaning, which presents a significant possibility of cross contamination. To address these issues, we developed an extension of the solvent-free method that replaces mortar-and-pestle grinding with ball milling the sample in a glass vial with two small steel balls. This new method generates mass spectra of equal quality to the previous methods, but has significant advantages in productivity, eliminates cross contamination, and is applicable to liquid and soft or waxy analytes.
Combinatorial Screening Of Inorganic And Organometallic Materials
Li, Yi; Li, Jing; Britton, Ted W.
2002-06-25
A method for differentiating and enumerating nucleated red blood cells in a blood sample is described. The method includes the steps of lysing red blood cells of a blood sample with a lytic reagent, measuring nucleated blood cells by DC impedance measurement in a non-focused flow aperture, differentiating nucleated red blood cells from other cell types, and reporting nucleated red blood cells in the blood sample. The method further includes subtracting nucleated red blood cells and other interference materials from the count of remaining blood cells, and reporting a corrected white blood cell count of the blood sample. Additionally, the method further includes measuring spectrophotometric absorbance of the sample mixture at a predetermined wavelength of a hemoglobin chromogen formed upon lysing the blood sample, and reporting hemoglobin concentration of the blood sample.
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. The Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RIs. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
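The core of the simulation design above, repeatedly drawing small samples from a known non-Gaussian population and counting how often a normality test flags them, can be sketched in a few lines. The lognormal parameters, trial count, and fixed random generator below are illustrative assumptions, not the study's exact simulation.

```python
import numpy as np
from scipy import stats

def shapiro_specificity(n=30, trials=1000, alpha=0.05):
    """Monte Carlo sketch: the fraction of lognormal samples of size n
    that the Shapiro-Wilk test correctly flags as non-Gaussian (the
    test's specificity in the abstract's sense). The lognormal sigma
    and trial count are illustrative assumptions.
    """
    rng = np.random.default_rng(0)               # fixed generator for reproducibility
    rejected = 0
    for _ in range(trials):
        x = rng.lognormal(mean=0.0, sigma=0.5, size=n)
        if stats.shapiro(x).pvalue < alpha:      # flagged as non-Gaussian
            rejected += 1
    return rejected / trials
```

Running this for a range of n shows the behavior the abstract reports: at n = 30 a substantial fraction of clearly skewed samples pass the normality test, which is exactly the failure mode that leads to erroneous parametric RIs.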
Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin
2017-08-15
Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method was criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an 18O-labeled reference sample that was created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility in protein identification results across samples. In the present study, a method combining the 18O-reference strategy and a quantitation-and-identification-decoupled strategy was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy had greater accuracy and reliability than other previously used comparison methods based on transferring comparison or label-free strategies. By the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differentially expressed proteins, according to retention time and accurate mass. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, and allowed peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kuusimäki, Leea; Peltonen, Kimmo; Vainiotalo, Sinikka
A previously introduced method for monitoring environmental tobacco smoke (ETS) was further validated. The method is based on diffusive sampling of a vapour-phase marker, 3-ethenylpyridine (3-EP), with 3M passive monitors (type 3500). Experiments were done in a dynamic chamber to assess diffusive sampling in comparison with active sampling in charcoal tubes or XAD-4 tubes. The sampling rate for 3-EP collected on the diffusive sampler was 23.1±0.6 mL min⁻¹. The relative standard deviation for parallel samples (n=6) ranged from 4% to 14% among experiments (n=9). No marked reverse diffusion of 3-EP was detected, nor any significant effect of relative humidity at 20%, 50% or 80%. The diffusive sampling of 3-EP was validated in field measurements in 15 restaurants in comparison with 3-EP and nicotine measurements using active sampling. The 3-EP concentration in restaurants ranged from 0.01 to 9.8 μg m⁻³, and the uptake rate for 3-EP based on 92 parallel samples was 24.0±0.4 mL min⁻¹. A linear correlation (r=0.98) was observed between 3-EP and nicotine concentrations, the average ratio of 3-EP to nicotine being 1:8. Active sampling of 3-EP and nicotine in charcoal tubes provided more reliable results than sampling in XAD-4 tubes. All samples were analysed using gas chromatography-mass spectrometry after elution with a 15% solution of pyridine in toluene. For nicotine, the limit of quantification of the charcoal tube method was 4 ng per sample, corresponding to 0.04 μg m⁻³ for an air sample of 96 L. For 3-EP, the limit of quantification of the diffusive method was 0.5-1.0 ng per sample, corresponding to 0.04-0.09 μg m⁻³ for 8 h sampling. The diffusive method proved suitable for ETS monitoring, even at low levels of ETS.
Application of work sampling technique to analyze logging operations.
Edwin S. Miyata; Helmuth M. Steinhilb; Sharon A. Winsauer
1981-01-01
Discusses the advantages and disadvantages of various time study methods for determining efficiency and productivity in logging. The work sampling method is compared with the continuous time-study method. Gives the feasibility, capabilities, and limitations of the work sampling method.
Liu, Gui-Long; Huang, Shi-Hong; Shi, Che-Si; Zeng, Bin; Zhang, Ke-Shi; Zhong, Xian-Ci
2018-02-10
Using copper thin-walled tubular specimens, the subsequent yield surfaces under pre-tension, pre-torsion and pre-combined tension-torsion are measured, where the single-sample and multi-sample methods are applied respectively to determine the yield stresses at a specified offset strain. The rule and characteristics of the evolution of the subsequent yield surface are investigated. Under the conditions of different pre-strains, the influence of the number of test points, the test sequence and the specified offset strain on the measurement of the subsequent yield surface, as well as the concave phenomenon of the measured yield surface, are studied. Moreover, the feasibility and validity of the two methods are compared. The main conclusions are drawn as follows: (1) for the single- or multi-sample method, the measured subsequent yield surfaces are remarkably different from the cylindrical yield surfaces proposed by the classical plasticity theory; (2) there are apparent differences between the test results from the two kinds of methods: the multi-sample method is not influenced by the number of test points, the test order, or the cumulative effect of residual plastic strain resulting from the other test points, while these are very influential in the single-sample method; and (3) the measured subsequent yield surface may appear concave, which can be transformed to convex for the single-sample method by changing the test sequence. For the multi-sample method, however, the concave phenomenon disappears when a larger offset strain is specified.
Espino, L; Way, M O; Wilson, L T
2008-02-01
Commercial rice, Oryza sativa L., fields in southeastern Texas were sampled during 2003 and 2004, and visual samples were compared with sweep net samples. Fields were sampled at different stages of panicle development, at different times of day, and by different operators. Significant differences were found between perimeter and within-field sweep net samples, indicating that samples taken 9 m from the field margin overestimate within-field Oebalus pugnax (F.) (Hemiptera: Pentatomidae) populations. Time of day did not significantly affect the number of O. pugnax caught with the sweep net; however, there was a trend to capture more insects during the morning than the afternoon. For all sampling methods evaluated during this study, O. pugnax was found to have an aggregated spatial pattern at most densities. When comparing sweep net with visual sampling methods, one sweep of the "long stick" and two sweeps of the "sweep stick" correlated well with the sweep net (r² = 0.639 and r² = 0.815, respectively). This relationship was not affected by time of day of sampling, stage of panicle development, type of planting, or operator. Relative cost-reliability, which incorporates probability of adoption, indicates the visual methods are more cost-reliable than the sweep net for sampling O. pugnax.
Yeheyis, Likawent; Kijora, Claudia; Wink, Michael; Peters, Kurt J
2011-01-01
The effect of a traditional Ethiopian lupin processing method on the chemical composition of lupin seed samples was studied. Two sampling districts, namely Mecha and Sekela, representing the mid- and high-altitude areas of north-western Ethiopia, respectively, were randomly selected. Different types of traditionally processed and marketed lupin seed samples (raw, roasted, and finished) were collected in six replications from each district. Raw samples are unprocessed, and roasted samples are roasted using firewood. Finished samples are those ready for human consumption as a snack. Thousand-seed weight for raw and roasted samples within a study district was similar (P > 0.05), but it was lower (P < 0.01) for finished samples compared to raw and roasted samples. The crude fibre content of the finished lupin seed sample from Mecha was lower (P < 0.01) than that of raw and roasted samples. However, the different lupin samples from Sekela had similar crude fibre content (P > 0.05). The crude protein and crude fat contents of finished samples within a study district were higher (P < 0.01) than those of raw and roasted samples, respectively. Roasting had no effect on the crude protein content of lupin seed samples. The crude ash content of raw and roasted lupin samples within a study district was higher (P < 0.01) than that of finished lupin samples of the respective study districts. The content of quinolizidine alkaloids of finished lupin samples was lower than that of raw and roasted samples. There was also an interaction effect between location and lupin sample type. The traditional processing method of lupin seeds in Ethiopia contributes positively by improving the crude protein and crude fat contents and lowering the alkaloid content of the finished product.
The study showed the possibility of adopting the traditional processing method to process bitter white lupin for use as a protein supplement in livestock feed in Ethiopia, but further work has to be done on the processing method and on animal evaluation.
Method and apparatus for measuring nuclear magnetic properties
Weitekamp, D.P.; Bielecki, A.; Zax, D.B.; Zilm, K.W.; Pines, A.
1987-12-01
A method for studying the chemical and structural characteristics of materials is disclosed. The method includes placement of a sample material in a high strength polarizing magnetic field to order the sample nuclei. The condition used to order the sample is then removed abruptly and the ordering of the sample allowed to evolve for a time interval. At the end of the time interval, the ordering of the sample is measured by conventional nuclear magnetic resonance techniques. 5 figs.
Method and apparatus for measuring nuclear magnetic properties
Weitekamp, Daniel P.; Bielecki, Anthony; Zax, David B.; Zilm, Kurt W.; Pines, Alexander
1987-01-01
A method for studying the chemical and structural characteristics of materials is disclosed. The method includes placement of a sample material in a high strength polarizing magnetic field to order the sample nuclei. The condition used to order the sample is then removed abruptly and the ordering of the sample allowed to evolve for a time interval. At the end of the time interval, the ordering of the sample is measured by conventional nuclear magnetic resonance techniques.
A LITERATURE REVIEW OF WIPE SAMPLING METHODS ...
Wipe sampling is an important technique for the estimation of contaminant deposition in buildings, homes, or outdoor surfaces as a source of possible human exposure. Numerous methods of wipe sampling exist, and each method has its own specification for the type of wipe, wetting solvent, and determinative step to be used, depending upon the contaminant of concern. The objective of this report is to concisely summarize the findings of a literature review that was conducted to identify the state-of-the-art wipe sampling techniques for a target list of compounds. This report describes the methods used to perform the literature review; a brief review of wipe sampling techniques in general; an analysis of physical and chemical properties of each target analyte; an analysis of wipe sampling techniques for the target analyte list; and a summary of the wipe sampling techniques for the target analyte list, including existing data gaps. In general, no overwhelming consensus can be drawn from the current literature on how to collect a wipe sample for the chemical warfare agents, organophosphate pesticides, and other toxic industrial chemicals of interest to this study. Different methods, media, and wetting solvents have been recommended and used by various groups and different studies. For many of the compounds of interest, no specific wipe sampling methodology has been established for their collection. Before a wipe sampling method (or methods) can be established for the co
NASA Astrophysics Data System (ADS)
Aspinall, M. D.; Joyce, M. J.; Mackin, R. O.; Jarrah, Z.; Boston, A. J.; Nolan, P. J.; Peyton, A. J.; Hawkes, N. P.
2009-01-01
A unique, digital time pick-off method, known as sample-interpolation timing (SIT), is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replica-analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa s⁻¹. Events arising from the ⁷Li(p, n)⁷Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals compared with analogue time pick-off methods replicated digitally, especially for fast signals that are sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential.
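The idea of recovering timing information finer than the sampling interval can be sketched with the simplest digital pick-off: leading-edge timing with linear interpolation between the two samples that straddle the threshold. This is a generic illustration of interpolation timing on digitized pulses, not the SIT algorithm of the paper.

```python
import numpy as np

def interpolated_threshold_time(samples, dt, threshold):
    """Sub-sample time pick-off: linearly interpolate between the two
    samples that straddle the threshold on the leading edge. A generic
    sketch of interpolation timing, not the paper's exact SIT method.
    """
    above = np.nonzero(samples >= threshold)[0]
    i = above[0]                              # first sample at/above threshold
    if i == 0:
        return 0.0                            # pulse already above threshold at t=0
    y0, y1 = samples[i - 1], samples[i]
    frac = (threshold - y0) / (y1 - y0)       # fractional position between samples
    return (i - 1 + frac) * dt
```

On a linear leading edge the interpolated crossing time is exact; on real pulses its accuracy depends on how well a straight line approximates the edge between two samples, which is why the paper observes a sampling-rate threshold beyond which resolution stops improving.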
Comparing two sampling methods to engage hard-to-reach communities in research priority setting.
Valerio, Melissa A; Rodriguez, Natalia; Winkler, Paula; Lopez, Jaime; Dennison, Meagen; Liang, Yuanyuan; Turner, Barbara J
2016-10-28
Effective community-partnered and patient-centered outcomes research needs to address community priorities. However, optimal sampling methods to engage stakeholders from hard-to-reach, vulnerable communities to generate research priorities have not been identified. In two similar rural, largely Hispanic communities, a community advisory board guided recruitment of stakeholders affected by chronic pain using a different method in each community: 1) snowball sampling, a chain-referral method, or 2) purposive sampling to recruit diverse stakeholders. In both communities, three groups of stakeholders attended a series of three facilitated meetings to orient, brainstorm, and prioritize ideas (9 meetings/community). Using mixed methods analysis, we compared stakeholder recruitment and retention as well as priorities from both communities' stakeholders on mean ratings of their ideas based on importance and feasibility for implementation in their community. Of 65 eligible stakeholders in one community recruited by snowball sampling, 55 (85%) consented, 52 (95%) attended the first meeting, and 36 (65%) attended all 3 meetings. In the second community, the purposive sampling method was supplemented by convenience sampling to increase recruitment. Of 69 stakeholders recruited by this combined strategy, 62 (90%) consented, 36 (58%) attended the first meeting, and 26 (42%) attended all 3 meetings. Snowball sampling recruited more Hispanics and disabled persons (all P < 0.05). Despite differing recruitment strategies, stakeholders from the two communities identified largely similar ideas for research, focusing on non-pharmacologic interventions for management of chronic pain.
Ratings of importance and feasibility for community implementation differed only on the importance of massage services (P = 0.045), which was rated higher by the purposive/convenience sampling group, and of city improvements/transportation services (P = 0.004), which was rated higher by the snowball sampling group. In each of the two similar hard-to-reach communities, a community advisory board partnered with researchers to implement a different sampling method to recruit stakeholders. The snowball sampling method achieved greater participation, with more Hispanics but also more individuals with disabilities, than the purposive-convenience sampling method. However, priorities for research on chronic pain from both stakeholder groups were similar. Although utilizing a snowball sampling method appears to be superior, further research is needed on implementation costs and resources.
SnagPRO: snag and tree sampling and analysis methods for wildlife
Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough
2008-01-01
We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...
ERIC Educational Resources Information Center
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.
2007-01-01
A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…
Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till
2018-02-01
(1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
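The length bias the authors derive for "next exiting patient" selection is an instance of the inspection paradox, and a small Monte Carlo in the spirit of their simulations makes it visible: the patient whose consultation is in progress when the interviewer becomes free is disproportionately a long one. The single-clinician queue, exponential consultation durations, and fixed interview length below are illustrative assumptions.

```python
import numpy as np

def simulate_exit_sampling(n_patients=20000, interview_time=5.0):
    """One clinician seeing patients back-to-back; after finishing each
    interview, the interviewer takes the next patient to EXIT the
    consultation. Returns (mean consultation length of interviewed
    patients, mean consultation length of all patients). Exponential
    durations (mean 10) and the interview length are assumptions.
    """
    rng = np.random.default_rng(1)
    dur = rng.exponential(10.0, n_patients)   # consultation lengths
    exits = np.cumsum(dur)                    # exit time of each patient
    free_at, sampled = 0.0, []
    for i in range(n_patients):
        if exits[i] >= free_at:               # first exit after interviewer is free
            sampled.append(dur[i])
            free_at = exits[i] + interview_time
    return float(np.mean(sampled)), float(np.mean(dur))
```

Under these assumptions the mean consultation length among interviewed patients comes out well above the population mean, matching the overrepresentation the abstract describes; selecting the next patient to enter the consultation room instead removes the bias.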
Recommendations for level-determined sampling in wells
NASA Astrophysics Data System (ADS)
Lerner, David N.; Teutsch, Georg
1995-10-01
Level-determined samples of groundwater are increasingly important for hydrogeological studies. The techniques for collecting them range from the use of purpose-drilled wells, sometimes with sophisticated dedicated multi-level samplers in them, to a variety of methods used in open wells. Open, often existing, wells are frequently used on cost grounds, but there are risks of obtaining poor and unrepresentative samples. Alternative approaches to level-determined sampling incorporate seven concepts: depth sampling; packer systems; individual wells; dedicated multi-level systems; separation pumping; baffle systems; multi-port sock samplers. These are outlined and evaluated in terms of the environment to be sampled, and the features and performance of the methods. Recommendations are offered to match methods to sampling problems.
Compendium of selected methods for sampling and analysis at geothermal facilities
NASA Astrophysics Data System (ADS)
Kindle, C. H.; Pool, K. H.; Ludwick, J. D.; Robertson, D. E.
1984-06-01
An independent study of the field has resulted in a compilation of the best methods for sampling, preservation and analysis of potential pollutants from geothermally fueled electric power plants. These methods are selected as the most usable over the range of application commonly experienced in the various geothermal plant sample locations. In addition to plant and well piping, techniques for sampling cooling towers, ambient gases, solids, and surface and subsurface waters are described. Emphasis is placed on the use of sampling probes to extract samples from heterogeneous flows. Certain sampling points, constituents and phases of plant operation are more amenable than others to quality assurance improvement in the emission measurements, and they are so identified.
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method by Pan. The exact McNemar test is too conservative and yields unnecessarily larger sample size estimates than all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
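The asymptotic unconditional McNemar test recommended above has a closed-form sample size expression. The sketch below uses the standard Connor-type formula for hypothesized discordant cell probabilities; it is a textbook version for illustration, not the authors' own derivation or their GEE comparison.

```python
import math

from scipy.stats import norm

def mcnemar_sample_size(p10, p01, alpha=0.05, power=0.80):
    """Number of pairs for a paired binary comparison via the asymptotic
    unconditional McNemar test (standard Connor-type formula).

    p10 and p01 are the hypothesized discordant cell probabilities of
    the 2 x 2 table; this is a generic sketch of the formula only.
    """
    za = norm.ppf(1 - alpha / 2)            # critical value for two-sided alpha
    zb = norm.ppf(power)                    # critical value for target power
    pd = p10 + p01                          # total discordant probability
    diff = p10 - p01                        # difference in discordant cells
    n = (za * math.sqrt(pd) + zb * math.sqrt(pd - diff ** 2)) ** 2 / diff ** 2
    return math.ceil(n)
```

For example, with p10 = 0.2 and p01 = 0.1 the formula gives a requirement in the low hundreds of pairs at 80% power, and the requirement grows as the target power is raised, which is the qualitative behavior any of the compared methods must reproduce.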
THE NEW YORK CITY URBAN DISPERSION PROGRAM MARCH 2005 FIELD STUDY: TRACER METHODS AND RESULTS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
WATSON, T.B.; HEISER, J.; KALB, P.
The Urban Dispersion Program March 2005 Field Study tracer releases, sampling, and analytical methods are described in detail. There were two days on which tracer releases and sampling were conducted. A total of 16.0 g of six tracers was released during the first test day, or Intensive Observation Period (IOP) 1, and 15.7 g during IOP 2. Three types of sampling instruments were used in this study. Sequential air samplers, or SAS, collected six-minute samples, while Brookhaven atmospheric tracer samplers (BATS) and personal air samplers (PAS) collected thirty-minute samples. There were a total of 1300 samples resulting from the two IOPs. Confidence limits in the sampling and analysis method were 20% as determined from 100 duplicate samples. The sample recovery rate was 84%. The integrally averaged 6-minute samples were compared to the 30-minute samples. The agreement was found to be good in most cases. The validity of using a background tracer to calculate sample volumes was examined and also found to have a confidence level of 20%. Methods for improving sampling and analysis are discussed. The data described in this report are available as Excel files. An additional Excel file of quality-assured tracer data for use in model validation efforts is also available. The file consists of extensively quality-assured BATS tracer data with background concentrations subtracted.
Sample processing approach for detection of ricin in surface samples.
Kane, Staci; Shah, Sanjiv; Erler, Anne Marie; Alfaro, Teneile
2017-12-01
With several ricin contamination incidents reported over the past decade, rapid and accurate methods are needed for environmental sample analysis, especially after decontamination. A sample processing method was developed for common surface sampling devices to improve the limit of detection and avoid false negative/positive results in ricin analysis. Potential assay interferents from the sample matrix (bleach residue, sample material, wetting buffer), including reference dust, were tested using a Time-Resolved Fluorescence (TRF) immunoassay. Test results suggested that the sample matrix did not cause the elevated background fluorescence sometimes observed when analyzing post-bleach decontamination samples from ricin incidents. Furthermore, sample particulates (80 mg/mL Arizona Test Dust) did not enhance background fluorescence or interfere with ricin detection by TRF. These results suggested that high background fluorescence in this immunoassay could be due to labeled antibody quality and/or quantity issues. Centrifugal ultrafiltration devices were evaluated for ricin concentration as a part of sample processing. Up to 30-fold concentration of ricin was observed with the devices, which serve to remove soluble interferents and could function as the front-end sample processing step for other ricin analytical methods. The procedure has the potential to be used with a broader range of environmental sample types and other potential interferents and to be followed by other ricin analytical methods, although additional verification studies would be required. Published by Elsevier B.V.
Field efficiency and bias of snag inventory methods
Robert S. Kenning; Mark J. Ducey; John C. Brissette; Jeffery H. Gove
2005-01-01
Snags and cavity trees are important components of forests, but can be difficult to inventory precisely and are not always included in inventories because of limited resources. We tested the application of N-tree distance sampling as a time-saving snag sampling method and compared N-tree distance sampling to fixed-area sampling and modified horizontal line sampling in...
40 CFR 761.292 - Chemical extraction and analysis of individual samples and composite samples.
Code of Federal Regulations, 2011 CFR
2011-07-01
... individual samples and composite samples. 761.292 Section 761.292 Protection of Environment ENVIRONMENTAL... Cleanup and On-Site Disposal of Bulk PCB Remediation Waste and Porous Surfaces in Accordance With § 761... individual and composite samples of PCB remediation waste. Use Method 8082 from SW-846, or a method validated...
Sampling Operations on Big Data
2015-11-29
gories. These include edge sampling methods, where edges are selected by predetermined criteria; snowball sampling methods, where algorithms start... Sampling Operations on Big Data Vijay Gadepally, Taylor Herr, Luke Johnson, Lauren Milechin, Maja Milosavljevic, Benjamin A. Miller Lincoln...process and disseminate information for discovery and exploration under real-time constraints. Common signal processing operations such as sampling and
Systems and methods for separating particles and/or substances from a sample fluid
Mariella, Jr., Raymond P.; Dougherty, George M.; Dzenitis, John M.; Miles, Robin R.; Clague, David S.
2016-11-01
Systems and methods for separating particles and/or toxins from a sample fluid. A method according to one embodiment comprises simultaneously passing a sample fluid and a buffer fluid through a chamber such that a fluidic interface is formed between the sample fluid and the buffer fluid as the fluids pass through the chamber, the sample fluid having particles of interest therein; applying a force to the fluids for urging the particles of interest to pass through the interface into the buffer fluid; and substantially separating the buffer fluid from the sample fluid.
Detecting the sampling rate through observations
NASA Astrophysics Data System (ADS)
Shoji, Isao
2018-09-01
This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rates from data, and their results show good performance in the detection. In addition, the method is applied to a financial time series sampled on a daily basis and shows that the detected sampling rate differs from the conventional rate.
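A toy illustration of the idea, assuming an Ornstein-Uhlenbeck process with known parameters and using a transition-likelihood grid search as a stand-in for the paper's exact Kullback-Leibler criterion (all parameter values are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW (parameters
# assumed known); its transition over an interval d is Gaussian:
#   X_{t+d} | X_t ~ N(X_t*exp(-theta*d), sigma^2*(1 - exp(-2*theta*d))/(2*theta))
theta, sigma, d_true, n = 1.0, 0.5, 0.1, 20000

a_true = np.exp(-theta * d_true)
v_true = sigma**2 * (1 - a_true**2) / (2 * theta)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a_true * x[i - 1] + rng.normal(0.0, np.sqrt(v_true))

def loglik(d):
    """Transition log-likelihood of the series under a candidate interval d."""
    a = np.exp(-theta * d)
    v = sigma**2 * (1 - a**2) / (2 * theta)
    r = x[1:] - a * x[:-1]
    return -0.5 * np.sum(np.log(2 * np.pi * v) + r**2 / v)

grid = np.linspace(0.02, 0.5, 25)              # candidate sampling intervals
d_hat = grid[np.argmax([loglik(d) for d in grid])]
```

With enough observations the likelihood peaks at the true interval, which is the same information the KL criterion extracts.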
Probe Heating Method for the Analysis of Solid Samples Using a Portable Mass Spectrometer
Kumano, Shun; Sugiyama, Masuyuki; Yamada, Masuyoshi; Nishimura, Kazushige; Hasegawa, Hideki; Morokuma, Hidetoshi; Inoue, Hiroyuki; Hashimoto, Yuichiro
2015-01-01
We previously reported on the development of a portable mass spectrometer for the onsite screening of illicit drugs, but our previous sampling system could only be used for liquid samples. In this study, we report on an attempt to develop a probe heating method that also permits solid samples to be analyzed using a portable mass spectrometer. An aluminum rod is used as the sampling probe. The powdered sample is affixed to the sampling probe or a droplet of sample solution is placed on the tip of the probe and dried. The probe is then placed on a heater to vaporize the sample. The vapor is then introduced into the portable mass spectrometer and analyzed. With the heater temperature set to 130°C, the developed system detected 1 ng of methamphetamine, 1 ng of amphetamine, 3 ng of 3,4-methylenedioxymethamphetamine, 1 ng of 3,4-methylenedioxyamphetamine, and 0.3 ng of cocaine. Even from mixtures consisting of clove powder and methamphetamine powder, methamphetamine ions were detected by tandem mass spectrometry. The developed probe heating method provides a simple method for the analysis of solid samples. A portable mass spectrometer incorporating this method would thus be useful for the onsite screening of illicit drugs. PMID:26819909
Filla, Robert T; Schrell, Adrian M; Coulton, John B; Edwards, James L; Roper, Michael G
2018-02-20
A method for multiplexed sample analysis by mass spectrometry without the need for chemical tagging is presented. In this new method, each sample is pulsed at unique frequencies, mixed, and delivered to the mass spectrometer while maintaining a constant total flow rate. Reconstructed ion currents are then a time-dependent signal consisting of the sum of the ion currents from the various samples. Spectral deconvolution of each reconstructed ion current reveals the identity of each sample, encoded by its unique frequency, and its concentration encoded by the peak height in the frequency domain. This technique is different from other approaches that have been described, which have used modulation techniques to increase the signal-to-noise ratio of a single sample. As proof of concept of this new method, two samples containing up to 9 analytes were multiplexed. The linear dynamic range of the calibration curve was increased with extended acquisition times of the experiment and longer oscillation periods of the samples. Because of the combination of the samples, salt had little effect on the ability of this method to achieve relative quantitation. Continued development of this method is expected to allow for increased numbers of samples that can be multiplexed.
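The frequency-encoding scheme can be sketched numerically: each sample's flow is modulated at its own frequency, the reconstructed ion current is the sum, and a Fourier transform separates the contributions. The pulse shapes, frequencies, and concentrations below are invented for illustration:

```python
import numpy as np

fs, n = 100.0, 6000                  # detector sampling rate (Hz), points (60 s)
t = np.arange(n) / fs
f1, f2 = 0.5, 1.2                    # unique pulsing frequencies (Hz)
c1, c2 = 3.0, 7.0                    # relative analyte concentrations

# Reconstructed ion current: sum of two flows, each modulated at its frequency
signal = c1 * (0.5 + 0.5 * np.cos(2 * np.pi * f1 * t)) \
       + c2 * (0.5 + 0.5 * np.cos(2 * np.pi * f2 * t))

spec = np.abs(np.fft.rfft(signal)) / n       # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(n, 1.0 / fs)

# The peak height at each encoding frequency is proportional to concentration
h1 = spec[np.argmin(np.abs(freqs - f1))]     # ~ c1 / 4
h2 = spec[np.argmin(np.abs(freqs - f2))]     # ~ c2 / 4
```

The ratio h2/h1 recovers c2/c1, which is the relative quantitation the abstract describes; longer acquisitions sharpen the spectral peaks, consistent with the reported gain in dynamic range.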
Methods of sampling airborne fungi in working environments of waste treatment facilities.
Černá, Kristýna; Wittlingerová, Zdeňka; Zimová, Magdaléna; Janovský, Zdeněk
2016-01-01
The objective of the present study was to evaluate and compare the efficiency of a filter-based sampling method and a high-volume sampling method for sampling airborne culturable fungi present in waste sorting facilities. The membrane filter (MF) method was compared with the surface air system (SAS) method. The selected sampling methods were modified and tested in 2 plastic waste sorting facilities. The total number of colony-forming units (CFU)/m³ of airborne fungi was dependent on the type of sampling device, on the time of sampling, which was carried out every hour from the beginning of the work shift, and on the type of cultivation medium (p < 0.001). Detected concentrations of airborne fungi ranged from 2×10² to 1.7×10⁶ CFU/m³ when using the MF method, and from 3×10² to 6.4×10⁴ CFU/m³ when using the SAS method. Both methods showed comparable sensitivity to fluctuations in the concentrations of airborne fungi during the work shifts. The SAS method is adequate for a fast, indicative determination of the concentration of airborne fungi. The MF method is suitable for thorough assessment of working environment contamination by airborne fungi. We therefore recommend the MF method for the implementation of a uniform standard methodology of airborne fungi sampling in working environments of waste treatment facilities. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
Methods of analyzing crude oil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooks, Robert Graham; Jjunju, Fred Paul Mark; Li, Anyin
The invention generally relates to methods of analyzing crude oil. In certain embodiments, methods of the invention involve obtaining a crude oil sample, and subjecting the crude oil sample to mass spectrometry analysis. In certain embodiments, the method is performed without any sample pre-purification steps.
Carter, Melissa D.; Crow, Brian S.; Pantazides, Brooke G.; Watson, Caroline M.; deCastro, B. Rey; Thomas, Jerry D.; Blake, Thomas A.; Johnson, Rudolph C.
2017-01-01
A high-throughput prioritization method was developed for use with a validated confirmatory method detecting organophosphorus nerve agent exposure by immunomagnetic separation-HPLC-MS/MS. A ballistic gradient was incorporated into this analytical method in order to profile unadducted butyrylcholinesterase (BChE) in clinical samples. With a Z′-factor of 0.88 ± 0.01 (SD) for control analytes and a Z-factor of 0.25 ± 0.06 (SD) for serum samples (Zhang et al., 1999), the assay is rated an “excellent assay” for the synthetic peptide controls used and a “doable assay” when used to prioritize clinical samples. Hits, defined as samples containing BChE Ser-198 adducts or no BChE present, were analyzed in a confirmatory method for identification and quantitation of the BChE adduct, if present. The ability to prioritize samples by highest exposure for confirmatory analysis is of particular importance in an exposure to cholinesterase inhibitors, such as organophosphorus nerve agents, where a large number of clinical samples may be collected. In an initial blind screen, 67 out of 70 samples were accurately identified, giving an assay accuracy of 96% with no false negatives. The method is the first to provide a high-throughput prioritization assay for profiling adduction of Ser-198 BChE in clinical samples. PMID:23954929
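The Z′-factor cited here follows Zhang et al. (1999); a minimal sketch with made-up control readings (the thresholds in the docstring are the conventional ratings, not values from this abstract):

```python
import numpy as np

def z_factor(pos, neg):
    """Z'-factor of Zhang et al. (1999):
    1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Conventionally, 0.5-1.0 is an 'excellent' assay and 0-0.5 a 'doable' assay."""
    pos = np.asarray(pos, dtype=float)
    neg = np.asarray(neg, dtype=float)
    spread = 3.0 * (pos.std(ddof=1) + neg.std(ddof=1))
    return 1.0 - spread / abs(pos.mean() - neg.mean())
```

Tight, well-separated control distributions drive the ratio toward 1; noisy serum samples widen the spread term and pull it down, which is why the clinical-sample Z-factor here is lower than the synthetic-control Z′.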
Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L
2013-08-01
Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.
Brady, Amie M.G.; Bushon, Rebecca N.; Bertke, Erin E.
2009-01-01
Water quality at beaches is monitored for fecal indicator bacteria by traditional, culture-based methods that can take 18 to 24 hours to obtain results. A rapid detection method that provides estimated concentrations of fecal indicator bacteria within 1 hour from the start of sample processing would allow beach managers to post advisories or close the beach when the conditions are actually considered unsafe instead of a day later, when conditions may have changed. A rapid method that couples immunomagnetic separation with adenosine triphosphate detection (IMS/ATP rapid method) was evaluated through monitoring of Escherichia coli (E. coli) at three Lake Erie beaches in Ohio (Edgewater and Villa Angela in Cleveland and Huntington in Bay Village). Beach water samples were collected between 4 and 5 days per week during the recreational seasons (May through September) of 2006 and 2007. Composite samples were created in the lab from two point samples collected at each beach and were shown to be comparable substitutes for analysis of two individual samples. E. coli concentrations in composite samples, as determined by the culture-based method, ranged from 4 to 24,000 colony-forming units per 100 milliliters during this study across all beaches. Turbidity also was measured for each sample and ranged from 0.8 to 260 nephelometric turbidity ratio units. Environmental variables were noted at the time of sampling, including number of birds at the beach and wave height. Rainfall amounts were measured at National Weather Service stations at local airports. Turbidity, rainfall, and wave height were significantly related to the culture-based method results each year and for both years combined at each beach. The number of birds at the beach was significantly related to the culture-based method results only at Edgewater during 2006 and during both years combined. Results of the IMS/ATP method were compared to results of the culture-based method for samples by year for each beach.
The IMS/ATP method underwent several changes and refinements during the first year, including changes in reagents and antibodies and alterations to the method protocol. Because of the changes in the method, results from the two years of study could not be combined. Kendall's tau correlation coefficients for relations between the IMS/ATP and culture-based methods were significant except for samples collected during 2006 at Edgewater and for samples collected during 2007 at Villa Angela. Further, relations were stronger for samples collected in 2006 than for those collected in 2007, except at Edgewater where the reverse was observed. The 2007 dataset was examined to identify possible reasons for the observed difference in significance of relations by year. By dividing the 2007 data set into groups as a function of sampling date, relations (Kendall's tau) between methods were observed to be stronger for samples collected earlier in the season than for those collected later in the season. At Edgewater and Villa Angela, there were more birds at the beach at time of sampling later in the season compared to earlier in the season. (The number of birds was not examined at Huntington.) Also, more wet days (when rainfall during the 24 hours prior to sampling was greater than 0.05 inch) were sampled later in the season compared to earlier in the season. Differences in the dominant fecal source may explain the change in the relations between the culture-based and IMS/ATP methods.
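The Kendall's tau comparison between the rapid and culture-based methods can be reproduced in outline; the paired readings below are hypothetical, not data from this study:

```python
from scipy.stats import kendalltau

# Hypothetical paired results for the same composite samples:
# culture-based E. coli counts (CFU/100 mL) vs. rapid IMS/ATP readings
culture = [40, 120, 15, 800, 2400, 60, 9, 300]
rapid = [55, 200, 20, 600, 3000, 45, 12, 500]

tau, p_value = kendalltau(culture, rapid)   # rank agreement between methods
```

Because tau depends only on ranks, it tolerates the nonlinear relationship between luminescence-based estimates and colony counts, which is why it suits a method-comparison study like this one.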
Fortes, Esther D; David, John; Koeritzer, Bob; Wiedmann, Martin
2013-05-01
There is a continued need to develop improved rapid methods for detection of foodborne pathogens. The aim of this project was to evaluate the 3M Molecular Detection System (3M MDS), which uses isothermal DNA amplification, and the 3M Molecular Detection Assay Listeria using environmental samples obtained from retail delicatessens and meat, seafood, and dairy processing plants. Environmental sponge samples were tested for Listeria with the 3M MDS after 22 and 48 h of enrichment in 3M Modified Listeria Recovery Broth (3M mLRB); enrichments were also used for cultural detection of Listeria spp. Among 391 samples tested for Listeria, 74 were positive by both the 3M MDS and the cultural method, 310 were negative by both methods, 2 were positive by the 3M MDS and negative by the cultural method, and one sample was negative by the 3M MDS and positive by the cultural method. Four samples were removed from the sample set, prior to statistical analyses, due to potential cross-contamination during testing. Listeria isolates from positive samples represented L. monocytogenes, L. innocua, L. welshimeri, and L. seeligeri. Overall, the 3M MDS and culture-based detection after enrichment in 3M mLRB did not differ significantly at the 0.05 level with regard to the number of positive samples, when chi-square analyses were performed for (i) number of positive samples after 22 h, (ii) number of positive samples after 48 h, and (iii) number of positive samples after 22 and/or 48 h of enrichment in 3M mLRB. Among 288 sampling sites that were tested with duplicate sponges, 67 each tested positive with the 3M MDS and the traditional U.S. Food and Drug Administration Bacteriological Analytical Manual method, further supporting that the 3M MDS performs equivalently to traditional methods when used with environmental sponge samples.
Evaluating Composite Sampling Methods of Bacillus spores at Low Concentrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.
Restoring facility operations after the 2001 Amerithrax attacks took over three months to complete, highlighting the need to reduce remediation time. The most time-intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method for reducing response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single-medium, single-pass composite: a single cellulose sponge samples multiple coupons in a single pass; 2) single-medium, multi-pass composite: a single cellulose sponge is used to sample multiple coupons in multiple passes; and 3) multi-medium, post-sample composite: a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm², respectively). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted wallboard) and three grime-coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p-value < 0.0001) and coupon material (p-value = 0.0008). Recovery efficiency (RE) was higher overall using the post-sample composite (PSC) method compared with single-medium compositing from both clean and grime-coated materials. RE with the PSC method for the concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, painted wallboard, and stainless steel among clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but significantly lower for ceramic tile. These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event on clean or dirty surfaces.
Unconstrained Enhanced Sampling for Free Energy Calculations of Biomolecules: A Review
Miao, Yinglong; McCammon, J. Andrew
2016-01-01
Free energy calculations are central to understanding the structure, dynamics and function of biomolecules. Yet insufficient sampling of biomolecular configurations is often regarded as one of the main sources of error. Many enhanced sampling techniques have been developed to address this issue. Notably, enhanced sampling methods based on biasing collective variables (CVs), including the widely used umbrella sampling, adaptive biasing force and metadynamics, have been discussed in a recent excellent review (Abrams and Bussi, Entropy, 2014). Here, we aim to review enhanced sampling methods that do not require predefined system-dependent CVs for biomolecular simulations and as such do not suffer from the hidden energy barrier problem as encountered in the CV-biasing methods. These methods include, but are not limited to, replica exchange/parallel tempering, self-guided molecular/Langevin dynamics, essential energy space random walk and accelerated molecular dynamics. While it is overwhelming to describe all details of each method, we provide a summary of the methods along with the applications and offer our perspectives. We conclude with challenges and prospects of the unconstrained enhanced sampling methods for accurate biomolecular free energy calculations. PMID:27453631
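Of the unconstrained methods surveyed, replica exchange (parallel tempering) is the simplest to sketch. The toy below runs Metropolis sampling of a 1D double-well potential at several temperatures and swaps adjacent replicas with the standard exchange criterion; the potential, temperature ladder, and step size are illustrative choices, not from any specific study:

```python
import numpy as np

rng = np.random.default_rng(1)

def U(x):
    """Double-well potential with a barrier of height 8 at x = 0."""
    return 8.0 * (x * x - 1.0)**2

temps = [0.2, 0.5, 1.5, 4.0]          # temperature ladder (cold -> hot)
x = np.full(len(temps), -1.0)         # every replica starts in the left well
steps, visits_right = 20000, 0

for _ in range(steps):
    # Metropolis update within each replica at its own temperature
    for i, T in enumerate(temps):
        prop = x[i] + rng.normal(0.0, 0.3)
        if rng.random() < np.exp(min(0.0, -(U(prop) - U(x[i])) / T)):
            x[i] = prop
    # Attempt an exchange between one random adjacent pair of replicas:
    # accept with prob min(1, exp[(beta_j - beta_{j+1}) * (U_j - U_{j+1})])
    j = rng.integers(len(temps) - 1)
    d = (1.0 / temps[j] - 1.0 / temps[j + 1]) * (U(x[j]) - U(x[j + 1]))
    if rng.random() < np.exp(min(0.0, d)):
        x[j], x[j + 1] = x[j + 1], x[j]
    visits_right += int(x[0] > 0.0)

frac_right = visits_right / steps
```

At T = 0.2 the cold replica alone would essentially never cross the barrier (barrier/T = 40), yet it visits both wells via exchanges with the hot replicas, which is the point of the method: enhanced sampling without biasing any collective variable.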
NASA Astrophysics Data System (ADS)
Roether, Wolfgang; Vogt, Martin; Vogel, Sandra; Sültenfuß, Jürgen
2013-06-01
We present a new method to obtain samples for the measurement of helium isotopes and neon in water, to replace the classical sampling procedure using clamped-off Cu-tubing containers that we have been using so far. The new method eliminates the gas extraction step, prior to admission to the mass spectrometer, that the classical method requires. Water is drawn into evacuated glass ampoules with subsequent flame sealing. Approximately 50% headspace is left, from which admission into the mass spectrometer occurs without further treatment. Extensive testing has shown that, with due care and with small corrections applied, the samples represent the gas concentrations in the water within ±0.07% (95% confidence level; ±0.05% with special handling). Fast evacuation is achieved by pumping on a small charge of water placed in the ampoule. The new method was successfully tested at sea in comparison with Cu-tubing sampling. We found that the ampoule samples were superior in data precision and that a lower percentage of samples were lost prior to measurement. Further measurements revealed agreement between the two methods in helium, 3He, and neon within ±0.1%. The new method facilitates handling of large sample sets and minimizes the delay between sampling and measurement. The method is also applicable to gases other than helium and neon.
Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat
2018-03-01
To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method) or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the number of patients needed to provide 80% power at α = 0.05 to reject the null hypothesis H0: ES = 0 in favour of the alternative hypotheses H1: ES = ES^, ES = ES^L, and ES = ES^U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study's ES^. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using the ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample sizes for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
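The sample size calculation behind these numbers can be sketched with a normal approximation to the one-sample t-test power formula (an approximation for illustration, not necessarily the authors' exact procedure):

```python
from math import ceil
from scipy.stats import norm

def n_for_es(es, alpha=0.05, power=0.80):
    """Approximate n for a one-sample t-test to detect a standardized
    effect size es with the given power, via the normal approximation
    n = ((z_{1-alpha/2} + z_{1-beta}) / es)^2."""
    za, zb = norm.ppf(1.0 - alpha / 2.0), norm.ppf(power)
    return ceil(((za + zb) / es)**2)
```

Because n scales as 1/ES², a method that yields a larger effect size on the same patients (as the individualized joint-selection methods did here) shrinks the required trial roughly quadratically.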
Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil
France, Brian; Bell, William; Chang, Emily; Scholten, Trudy
2015-01-01
Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decontamination to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated, and spore recovery with composites was shown to match individual sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analyses that must be performed during anthrax spore remediation. PMID:26714315
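The dilution concern and its filtration remedy amount to simple arithmetic; the numbers below (pool size, concentration, detection limit) are illustrative, not from the study:

```python
# Equal-volume pooling of k samples dilutes a single positive sample
# k-fold; re-concentration (e.g., filtration onto a membrane) restores it.
k = 5                    # number of samples pooled
c_individual = 20.0      # spores/mL in the one positive sample
lod = 10.0               # hypothetical assay limit of detection, spores/mL

c_pooled = c_individual / k               # 4.0 spores/mL, below the LOD
detectable_pooled = c_pooled >= lod       # pooling alone risks a false negative
c_filtered = c_pooled * k                 # k-fold concentration step
detectable_filtered = c_filtered >= lod   # detectability restored
```

This is why the study pairs compositing with a concentration step: the analysis count drops by a factor of k without sacrificing sensitivity.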
Improving the analysis of composite endpoints in rare disease trials.
McMenamin, Martina; Berglind, Anna; Wason, James M S
2018-05-22
Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibit similar power but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints. 
We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.
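Firth's adjustment, mentioned above as the small-sample correction for the binary component models, can be sketched in a few lines. The toy data, the plain Newton iteration, and the stopping rule below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    """Firth-penalized logistic regression (Jeffreys-prior correction).

    Newton iterations on the modified score
    U*(b) = X' (y - p + h * (0.5 - p)), where h are the hat values
    of the weighted design matrix.
    """
    n, k = X.shape
    beta = np.zeros(k)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        w = p * (1.0 - p)
        XtWX = X.T @ (X * w[:, None])
        Wsqrt_X = X * np.sqrt(w)[:, None]
        # hat values: diagonal of W^{1/2} X (X'WX)^{-1} X' W^{1/2}
        h = np.einsum("ij,ij->i", Wsqrt_X @ np.linalg.inv(XtWX), Wsqrt_X)
        score = X.T @ (y - p + h * (0.5 - p))
        step = np.linalg.solve(XtWX, score)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Quasi-separated toy data: one covariate group has all successes.
X = np.column_stack([np.ones(8), [0, 0, 0, 0, 1, 1, 1, 1]])
y = np.array([0, 0, 0, 1, 1, 1, 1, 1])
beta = firth_logistic(X, y)
print(beta)
```

On quasi-separated data like this, ordinary maximum likelihood sends the slope to infinity; the Jeffreys-prior penalty keeps the estimates finite.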
Flow Cytometric Human Leukocyte Antigen-B27 Typing with Stored Samples for Batch Testing
Seo, Bo Young
2013-01-01
Background Flow cytometry (FC) HLA-B27 typing is still used extensively for the diagnosis of spondyloarthropathies. If patient blood samples are stored for a prolonged duration, this testing can be performed in a batch manner, and in-house cellular controls could easily be procured. In this study, we investigated various methods of storing patient blood samples. Methods We compared four storage methods: three methods of analyzing lymphocytes (whole blood stored at room temperature, frozen mononuclear cells, and frozen white blood cells [WBCs] after lysing red blood cells [RBCs]), and one method using frozen platelets (FPLT). We used three ratios associated with mean fluorescence intensities (MFI) for HLA-B27 assignment: the B27 MFI ratio (sample/control) for HLA-B27 fluorescein-5-isothiocyanate (FITC); the B7 MFI ratio for HLA-B7 phycoerythrin (PE); and the ratio of these two ratios, the B7/B27 ratio. Results Comparing the B27 MFI ratios of each storage method for the HLA-B27+ samples and the B7/B27 ratios for the HLA-B7+ samples revealed that FPLT was the best of the four methods. FPLT had a sensitivity of 100% and a specificity of 99.3% for HLA-B27 assignment in DNA-typed samples (N=164) when the two criteria, namely, B27 MFI ratio >4.0 and B7/B27 ratio <1.5, were used. Conclusions The FPLT method was found to offer a simple, economical, and accurate method of FC HLA-B27 typing by using stored patient samples. If stored samples are used, this method has the potential to replace the standard FC typing method when used in combination with a complementary DNA-based method. PMID:23667843
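The two-criterion assignment rule quoted in the Results (B27 MFI ratio >4.0 and B7/B27 ratio <1.5) can be expressed directly; the MFI values in the example calls are made up for illustration:

```python
def assign_hla_b27(b27_mfi_sample, b27_mfi_control, b7_mfi_sample, b7_mfi_control):
    """Apply the two-ratio rule described in the abstract.

    b27_ratio = sample/control MFI for HLA-B27 FITC
    b7_ratio  = sample/control MFI for HLA-B7 PE
    Positive when b27_ratio > 4.0 and (b7_ratio / b27_ratio) < 1.5.
    """
    b27_ratio = b27_mfi_sample / b27_mfi_control
    b7_ratio = b7_mfi_sample / b7_mfi_control
    return b27_ratio > 4.0 and (b7_ratio / b27_ratio) < 1.5

print(assign_hla_b27(500, 50, 60, 40))   # b27 ratio 10, B7/B27 ratio 0.15
print(assign_hla_b27(100, 50, 300, 50))  # b27 ratio only 2, fails first criterion
```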
Evaluation on determination of iodine in coal by energy dispersive X-ray fluorescence
Wang, B.; Jackson, J.C.; Palmer, C.; Zheng, B.; Finkelman, R.B.
2005-01-01
A quick and inexpensive method for determining relatively high iodine concentrations in coal samples was evaluated. Energy dispersive X-ray fluorescence (EDXRF) provided a detection limit of about 14 ppm (3 times the standard deviation of the blank sample), without any complex sample preparation. An analytical relative standard deviation of 16% was readily attainable for coal samples. Under optimum conditions, coal samples with iodine concentrations higher than 5 ppm can be analyzed using this EDXRF method. For the time being, because iodine concentrations in most coal samples are below 5 ppm, except for some high-iodine coals, this method cannot be used effectively for iodine determination. More work is needed before this method can meet the requirements for determining iodine in coal samples. Copyright © 2005 by The Geochemical Society of Japan.
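The quoted detection limit is defined as 3 times the standard deviation of the blank. A minimal sketch of that calculation, with hypothetical replicate blank readings:

```python
import statistics

def detection_limit(blank_ppm, k=3.0):
    """k-sigma detection limit from replicate blank measurements (ppm)."""
    return k * statistics.stdev(blank_ppm)

blanks = [1.2, 0.8, 1.5, 0.9, 1.1, 1.3]  # hypothetical blank readings, ppm
print(round(detection_limit(blanks), 2))
```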
NASA Astrophysics Data System (ADS)
Almirall, Jose R.; Trejos, Tatiana; Hobbs, Andria; Furton, Kenneth G.
2003-09-01
The importance of small amounts of glass and paint evidence as a means to associate a crime event to a suspect or a suspect to another individual has been demonstrated in many cases. Glass is a fragile material that is often found at the scenes of crimes such as burglaries, hit-and-run accidents and violent crime offenses. Previous work has demonstrated the utility of elemental analysis by solution ICP-MS of small amounts of glass for the comparison between a fragment found at a crime scene to a possible source of the glass. The multi-element capability and the sensitivity of ICP-MS combined with the simplified sample introduction of laser ablation prior to ion detection provides for an excellent and relatively non-destructive technique for elemental analysis of glass fragments. The direct solid sample introduction technique of laser ablation (LA) is reported as an alternative to the solution method. Direct solid sampling provides several advantages over solution methods and shows great potential for a number of solid sample analyses in forensic science. The advantages of laser ablation include the simplification of sample preparation, thereby reducing the time and complexity of the analysis, the elimination of handling acid dissolution reagents such as HF and the reduction of sources of interferences in the ionization plasma. Direct sampling also provides for essentially "non-destructive" sampling due to the removal of very small amounts of sample needed for analysis. The discrimination potential of LA-ICP-MS is compared with previously reported solution ICP-MS methods using external calibration with internal standardization and a newly reported solution isotope dilution (ID) method. A total of ninety-one different glass samples were used for the comparison study using the techniques mentioned. One set consisted of forty-five headlamps taken from a variety of automobiles representing a range of twenty years of manufacturing dates. 
A second set, consisting of forty-six automotive glasses (side windows and windshields) representing casework glass from different vehicle manufacturers over several years, was also characterized by RI and elemental composition analysis. The solution sample introduction techniques (external calibration and isotope dilution) provide for excellent sensitivity and precision but have the disadvantages of destroying the sample and of requiring complex sample preparation. The laser ablation method was simpler and faster and produced discrimination comparable to EC-ICP-MS and ID-ICP-MS. LA-ICP-MS can provide for an excellent alternative to solution analysis of glass in forensic casework samples. Paints and coatings are frequently encountered as trace evidence samples submitted to forensic science laboratories. An LA-ICP-MS method has been developed to complement the commonly used techniques in forensic laboratories in order to better characterize these samples for forensic purposes. Time-resolved plots of each sample can be compared to associate samples to each other or to discriminate between samples. Additionally, the concentration of lead and the ratios of other elements have been determined in various automotive paints by the reported method. A sample set of eighteen (18) survey automotive paint samples has been analyzed with the developed method in order to determine the utility of LA-ICP-MS and to compare the method to the more commonly used scanning electron microscopy (SEM) method for elemental characterization of paint layers in forensic casework.
Shakeri, Heman; Volkova, Victoriya; Wen, Xuesong; Deters, Andrea; Cull, Charley; Drouillard, James; Müller, Christian; Moradijamei, Behnaz; Jaberi-Douraki, Majid
2018-05-01
To assess phenotypic bacterial antimicrobial resistance (AMR) in different strata (e.g., host populations, environmental areas, manure, or sewage effluents) for epidemiological purposes, isolates of target bacteria can be obtained from a stratum using various sample types. Also, different sample processing methods can be applied. The MIC of each target antimicrobial drug for each isolate is measured. Statistical equivalence testing of the MIC data for the isolates allows evaluation of whether different sample types or sample processing methods yield equivalent estimates of the bacterial antimicrobial susceptibility in the stratum. We demonstrate this approach on the antimicrobial susceptibility estimates for (i) nontyphoidal Salmonella spp. from ground or trimmed meat versus cecal content samples of cattle in processing plants in 2013-2014 and (ii) nontyphoidal Salmonella spp. from urine, fecal, and blood human samples in 2015 (U.S. National Antimicrobial Resistance Monitoring System data). We found that the sample types for cattle yielded nonequivalent susceptibility estimates for several antimicrobial drug classes and thus may gauge distinct subpopulations of salmonellae. The quinolone and fluoroquinolone susceptibility estimates for nontyphoidal salmonellae from human blood are nonequivalent to those from urine or feces, conjecturally due to the fluoroquinolone (ciprofloxacin) use to treat infections caused by nontyphoidal salmonellae. We also demonstrate statistical equivalence testing for comparing sample processing methods for fecal samples (culturing one versus multiple aliquots per sample) to assess AMR in fecal Escherichia coli. These methods yield equivalent results, except for tetracyclines. Importantly, statistical equivalence testing provides the MIC difference at which the data from two sample types or sample processing methods differ statistically.
Data users (e.g., microbiologists and epidemiologists) may then interpret practical relevance of the difference. IMPORTANCE Bacterial antimicrobial resistance (AMR) needs to be assessed in different populations or strata for the purposes of surveillance and determination of the efficacy of interventions to halt AMR dissemination. To assess phenotypic antimicrobial susceptibility, isolates of target bacteria can be obtained from a stratum using different sample types or employing different sample processing methods in the laboratory. The MIC of each target antimicrobial drug for each of the isolates is measured, yielding the MIC distribution across the isolates from each sample type or sample processing method. We describe statistical equivalence testing for the MIC data for evaluating whether two sample types or sample processing methods yield equivalent estimates of the bacterial phenotypic antimicrobial susceptibility in the stratum. This includes estimating the MIC difference at which the data from the two approaches differ statistically. Data users (e.g., microbiologists, epidemiologists, and public health professionals) can then interpret whether that present difference is practically relevant. Copyright © 2018 Shakeri et al.
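The equivalence-testing idea described above can be sketched with two one-sided tests (TOST) on log2-transformed MIC values. The MIC values, the one-dilution equivalence margin, and the large-sample normal approximation (in place of the t distribution) are all assumptions of this sketch, not the authors' exact procedure:

```python
from math import sqrt, log2
from statistics import NormalDist, mean, stdev

def tost_log2_mic(mic_a, mic_b, margin=1.0):
    """Two one-sided tests (TOST) for equivalence of mean log2 MIC
    between two sample types. margin is the equivalence bound in log2
    units (1.0 = one two-fold dilution step)."""
    a = [log2(m) for m in mic_a]
    b = [log2(m) for m in mic_b]
    diff = mean(a) - mean(b)
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = NormalDist()
    p_lower = 1 - z.cdf((diff + margin) / se)  # H0: diff <= -margin
    p_upper = z.cdf((diff - margin) / se)      # H0: diff >= +margin
    return max(p_lower, p_upper)               # rejecting both => equivalence

mic_meat = [1, 2, 2, 4, 2, 1, 2, 4]    # hypothetical MICs (ug/mL)
mic_cecal = [2, 2, 1, 4, 2, 2, 1, 4]
p = tost_log2_mic(mic_meat, mic_cecal)
print(f"TOST p = {p:.3f}")             # small p supports equivalence
```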
Paudyal, Priyamvada; Llewellyn, Carrie; Lau, Jason; Mahmud, Mohammad; Smith, Helen
2015-01-01
Background Routine screening is key to sexually transmitted infection (STI) prevention and control. Previous studies suggest that clinic-based screening programmes capture only a small proportion of people with STIs. Self-sampling using non- or minimally invasive techniques may be beneficial for those reluctant to actively engage with conventional sampling methods. We systematically reviewed studies of patients’ experiences of obtaining self-samples to diagnose curable STIs. Methods We conducted an electronic search of MEDLINE, EMBASE, CINAHL, PsychINFO, BNI, and the Cochrane Database of Systematic Reviews to identify relevant articles published in English between January 1980 and March 2014. Studies were included if participants self-sampled for the diagnosis of a curable STI and if the study had specifically sought participants’ opinions of their experience, acceptability, preferences, or willingness to self-sample. Results The initial search yielded 558 references. Of these, 45 studies met the inclusion criteria. Thirty-six studies assessed patients’ acceptability and experiences of self-sampling. Pooled results from these studies show that self-sampling is a highly acceptable method, with 85% of patients reporting the method to be well received and acceptable. Twenty-eight studies reported on ease of self-sampling; the majority of patients (88%) in these studies found self-sampling an “easy” procedure. Self-sampling was favoured over clinician sampling, and home sampling was preferred to clinic-based sampling. Females and older participants were more accepting of self-sampling. Only a small minority of participants (13%) reported pain during self-sampling. Participants were willing to undergo self-sampling and to recommend it to others. Privacy and safety were the most common concerns. Conclusion Self-sampling for diagnostic testing is well accepted, with the majority having a positive experience and a willingness to use it again.
Standardization of self-sampling procedures and rigorous validation of outcome measurement will lead to better comparability across studies. Future studies need to conduct rigorous economic evaluations of self-sampling to inform policy development for the management of STIs. PMID:25909508
Feasibility of zero tolerance for Salmonella on raw poultry
USDA-ARS’s Scientific Manuscript database
Ideally, poultry producing countries around the globe should use internationally standardized sampling methods for Salmonella. It is difficult to compare prevalence data from country to country when sampling plans, sample types, sampling frequencies, and laboratory media and methods differ. The Europe...
Jiang, Jia-Jia; Duan, Fa-Jie; Li, Yan-Chao; Hua, Xiang-Ning
2014-03-01
Synchronization sampling is very important in underwater towed array systems, where every acquisition node (AN) samples analog signals with its own analog-digital converter (ADC). In this paper, a simple and effective synchronization sampling method is proposed to ensure synchronized operation among the different ANs of the underwater towed array system. We first present a master-slave synchronization sampling model, and then design a high-accuracy phase-locked loop to synchronize all delta-sigma ADCs to a reference clock. However, when the master-slave synchronization sampling model is used, both the time-delay (TD) of messages traveling along the wired transmission medium and the jitter of the clocks introduce synchronization sampling error (SSE). Therefore, a simple method is proposed to estimate and compensate for the TD of message transmission, and another effective method is presented to overcome the SSE caused by clock jitter. An experimental system with three ANs is set up, and the related experimental results verify the validity of the proposed synchronization sampling method.
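The TD estimation step can be illustrated with an NTP-style timestamp exchange between master and slave nodes; the paper's own estimator may differ, so treat this as a generic sketch under a symmetric-path assumption:

```python
def offset_and_delay(t1, t2, t3, t4):
    """Clock offset and one-way delay from a master-slave timestamp
    exchange (t1: master send, t2: slave receive, t3: slave send,
    t4: master receive), assuming a symmetric transmission path."""
    delay = ((t4 - t1) - (t3 - t2)) / 2.0
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    return offset, delay

# Slave clock running 3 time units ahead, true one-way delay of 5 units:
print(offset_and_delay(0.0, 8.0, 9.0, 11.0))  # → (3.0, 5.0)
```

The master can then subtract the estimated offset and delay when distributing sampling triggers, which is the role the abstract assigns to TD compensation.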
Rapid fusion method for the determination of refractory thorium and uranium isotopes in soil samples
Maxwell, Sherrod L.; Hutchison, Jay B.; McAlister, Daniel R.
2015-02-14
Recently, approximately 80% of participating laboratories failed to accurately determine uranium isotopes in soil samples in the U.S. Department of Energy Mixed Analyte Performance Evaluation Program (MAPEP) Session 30, due to incomplete dissolution of refractory particles in the samples. Failing laboratories employed acid dissolution methods, including hydrofluoric acid, to recover uranium from the soil matrix. The failures illustrate the importance of rugged soil dissolution methods for the accurate measurement of analytes in the sample matrix. A new rapid fusion method has been developed by the Savannah River National Laboratory (SRNL) to prepare 1-2 g soil sample aliquots very quickly, with total dissolution of refractory particles. Soil samples are fused with sodium hydroxide at 600 ºC in zirconium crucibles to enable complete dissolution of the sample. Uranium and thorium are separated on stacked TEVA and TRU extraction chromatographic resin cartridges, prior to isotopic measurements by alpha spectrometry on cerium fluoride microprecipitation sources. Plutonium can also be separated and measured using this method. Batches of 12 samples can be prepared for measurement in <5 hours.
Method and Apparatus for Measuring Near-Angle Scattering of Mirror Coatings
NASA Technical Reports Server (NTRS)
Chipman, Russell A. (Inventor); Daugherty, Brian J. (Inventor); McClain, Stephen C. (Inventor); Macenka, Steven A. (Inventor)
2013-01-01
Disclosed herein is a method of determining the near angle scattering of a sample reflective surface comprising the steps of: a) splitting a beam of light having a coherence length of greater than or equal to about 2 meters into a sample beam and a reference beam; b) frequency shifting both the sample beam and the reference beam to produce a fixed beat frequency between the sample beam and the reference beam; c) directing the sample beam through a focusing lens and onto the sample reflective surface, d) reflecting the sample beam from the sample reflective surface through a detection restriction disposed on a movable stage; e) recombining the sample beam with the reference beam to form a recombined beam, followed by f) directing the recombined beam to a detector and performing heterodyne analysis on the recombined beam to measure the near-angle scattering of the sample reflective surface, wherein the position of the detection restriction relative to the sample beam is varied to occlude at least a portion of the sample beam to measure the near-angle scattering of the sample reflective surface. An apparatus according to the above method is also disclosed.
Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco
2016-01-01
Sampling biodiversity is an essential step in conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now, no study has evaluated how efficiently the sampling methods commonly used in biodiversity surveys estimate the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample the sex ratio. Thus, prior knowledge of movement patterns and detectability for a species is important information to guide field studies aiming to understand sex ratio related patterns.
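The detectability bias described above can be reproduced with a toy Monte Carlo run in the spirit of the virtual ecologist approach; the population sizes and per-sex detection probabilities below are hypothetical, not values from the study:

```python
import random

def simulate_survey(n_males=500, n_females=500, p_detect_male=0.6,
                    p_detect_female=0.3, seed=1):
    """Sex ratio estimated from detections when detectability differs
    between the sexes (all parameter values hypothetical)."""
    rng = random.Random(seed)
    males = sum(rng.random() < p_detect_male for _ in range(n_males))
    females = sum(rng.random() < p_detect_female for _ in range(n_females))
    return males / (males + females)

true_ratio = 0.5
observed = simulate_survey()
print(f"true proportion male {true_ratio:.2f}, observed {observed:.2f}")
```

With equal numbers of males and females but males twice as detectable, the observed proportion of males lands near 0.67, illustrating the bias the study quantifies.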
Methyl-CpG island-associated genome signature tags
Dunn, John J
2014-05-20
Disclosed is a method for analyzing the organismic complexity of a sample through analysis of the nucleic acid in the sample. In the disclosed method, through a series of steps, including digestion with a type II restriction enzyme, ligation of capture adapters and linkers and digestion with a type IIS restriction enzyme, genome signature tags are produced. The sequences of a statistically significant number of the signature tags are determined and the sequences are used to identify and quantify the organisms in the sample. Various embodiments of the invention described herein include methods for using single point genome signature tags to analyze the related families present in a sample, methods for analyzing sequences associated with hyper- and hypo-methylated CpG islands, methods for visualizing organismic complexity change in a sampling location over time and methods for generating the genome signature tag profile of a sample of fragmented DNA.
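The tag-extraction step can be illustrated schematically: a type IIS enzyme cuts at a fixed offset downstream of its recognition site, yielding a constant-length tag. The recognition site, offset, and toy genome below are hypothetical stand-ins (real GST protocols use enzymes such as MmeI, which cut roughly 20 bp from their site):

```python
def signature_tags(genome, site="ACGGC", offset=16):
    """Collect the fixed-length sequence immediately downstream of each
    occurrence of a (hypothetical) type IIS recognition site."""
    tags = []
    start = genome.find(site)
    while start != -1:
        cut = start + len(site) + offset
        tag = genome[start + len(site):cut]
        if len(tag) == offset:       # drop truncated tags at the 3' end
            tags.append(tag)
        start = genome.find(site, start + 1)
    return tags

# Toy genome with two recognition sites and distinct downstream tags:
genome = "AA" + "ACGGC" + "T" * 16 + "GG" + "ACGGC" + "C" * 16 + "AA"
tags = signature_tags(genome)
print(tags)
```

Counting identical tags across a sequenced sample is then a proxy for the abundance of the organisms carrying them, which is how the disclosed method quantifies organismic complexity.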
Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun
2015-01-22
High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damage to the blade. Blade tip-timing (BTT) methods have become a promising way to monitor blade vibrations. However, synchronous vibrations are not suitably monitored by uniform BTT sampling. Therefore, non-equally mounted probes have been used, which results in non-uniformity of the sampled signal. Since under-sampling is an intrinsic drawback of BTT methods, analyzing non-uniformly under-sampled BTT signals is a big challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. First, a mathematical model of the non-uniform BTT sampling process is built; it can be treated as the sum of certain uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Second, simultaneous equations for all interpolating functions in each sub-band are built, and the corresponding solutions are derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. Finally, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate that the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signals can be reconstructed from non-uniform BTT data acquired from only two probes.
Sampling Methods for Detection and Monitoring of the Asian Citrus Psyllid (Hemiptera: Psyllidae).
Monzo, C; Arevalo, H A; Jones, M M; Vanaclocha, P; Croxton, S D; Qureshi, J A; Stansly, P A
2015-06-01
The Asian citrus psyllid (ACP), Diaphorina citri Kuwayama, is a key pest of citrus due to its role as vector of citrus greening disease or "huanglongbing." ACP monitoring is considered an indispensable tool for management of the vector and disease. In the present study, datasets collected between 2009 and 2013 from 245 citrus blocks were used to evaluate the precision, detection sensitivity, and efficiency of five sampling methods. The number of samples needed to reach a 0.25 standard error-to-mean ratio was estimated using Taylor's power law and used to compare precision among sampling methods. Comparison of detection sensitivity and time expenditure (cost) between stem-tap and other sampling methodologies conducted consecutively at the same location was also assessed. Stem-tap sampling was the most efficient sampling method when ACP densities were moderate to high and served as the basis for comparison with all other methods. Protocols that grouped trees near randomly selected locations across the block were more efficient than sampling trees at random across the block. Sweep net sampling was similar to stem-taps in number of captures per sampled unit, but less precise at any ACP density. Yellow sticky traps were 14 times more sensitive than stem-taps but much more time consuming and thus less efficient except at very low population densities. Visual sampling was efficient for detecting and monitoring ACP at low densities. Suction sampling was time consuming and taxing but the most sensitive of all methods for detection of sparse populations. This information can be used to optimize ACP monitoring efforts. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
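The sample-size calculation from Taylor's power law (variance s² = a·mᵇ) used above gives n = a·m^(b−2)/D² for a target SE/mean ratio D. A sketch with hypothetical coefficients, since the fitted a and b values are not given in the abstract:

```python
def samples_needed(mean_density, a, b, precision=0.25):
    """Enumerative sample size from Taylor's power law (s^2 = a * m**b):
    n = a * m**(b - 2) / D**2, where D is the target SE/mean ratio.
    The coefficients a and b here are hypothetical placeholders."""
    return a * mean_density ** (b - 2) / precision ** 2

# e.g. a = 2.0, b = 1.5, mean of 4 ACP per stem-tap sample:
n = samples_needed(4.0, a=2.0, b=1.5)
print(round(n))  # → 16
```

Because b is typically between 1 and 2 for aggregated insect populations, the required n falls as mean density rises, which is consistent with stem-tap sampling being most efficient at moderate-to-high ACP densities.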
Capillary microextraction: A new method for sampling methamphetamine vapour.
Nair, M V; Miskelly, G M
2016-11-01
Clandestine laboratories pose a serious health risk to first responders, investigators, decontamination companies, and the public who may be inadvertently exposed to methamphetamine and other chemicals used in its manufacture. Therefore there is an urgent need for reliable methods to detect and measure methamphetamine at such sites. The most common method for determining methamphetamine contamination at former clandestine laboratory sites is selected surface wipe sampling, followed by analysis with gas chromatography-mass spectrometry (GC-MS). We are investigating the use of sampling for methamphetamine vapour to complement such wipe sampling. In this study, we report the use of capillary microextraction (CME) devices for sampling airborne methamphetamine, and compare their sampling efficiency with a previously reported dynamic SPME method. The CME devices consisted of PDMS-coated glass filter strips inside a glass tube. The devices were used to dynamically sample methamphetamine vapour in the range of 0.42-4.2 μg/m³, generated by a custom-built vapour dosing system, for 1-15 min, and methamphetamine was analysed using a GC-MS fitted with a ChromatoProbe thermal desorption unit. The devices showed good reproducibility (RSD <15%), and a curvilinear pre-equilibrium relationship between sampling time and peak area, which can be utilised for calibration. Under identical sampling conditions, the CME devices were approximately 30 times more sensitive than the dynamic SPME method. The CME devices could be stored for up to 3 days after sampling prior to analysis. Consecutive sampling of methamphetamine and its isotopic substitute, d9-methamphetamine, showed no competitive displacement. This suggests that CME devices, pre-loaded with an internal standard, could be a feasible method for sampling airborne methamphetamine at former clandestine laboratories. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Jeffrey H. Gove
2003-01-01
Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
Voelz, David G; Roggemann, Michael C
2009-11-10
Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
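A minimal numpy sketch of the transfer-function (angular spectrum) propagator discussed above, together with the ideal-sampling distance z = N·dx²/λ at which the frequency-space chirp is critically sampled; the function names and the omission of the constant phase factor exp(jkz) are choices of this sketch, not of the paper.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a sampled field u0 (N x N, grid spacing dx) a distance z
    with the Fresnel transfer-function (angular spectrum) method.
    The constant phase exp(jkz) is dropped."""
    N = u0.shape[0]
    fx = np.fft.fftfreq(N, d=dx)              # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function: a chirp in the frequency domain
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

def critical_distance(N, dx, wavelength):
    """Distance at which the transfer-function chirp is ideally sampled,
    z = N dx^2 / lambda; shorter z is oversampled, longer undersampled."""
    return N * dx**2 / wavelength
```

A uniform plane wave is an easy sanity check: its spectrum sits at zero frequency, where H = 1, so the field should propagate unchanged.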
A fast learning method for large scale and multi-class samples of SVM
NASA Astrophysics Data System (ADS)
Fan, Yu; Guo, Huiming
2017-06-01
A fast learning method for multi-class SVM (Support Vector Machine) classification based on a binary tree is presented to address the low learning efficiency of SVMs when processing large-scale multi-class samples. The method builds the binary-tree hierarchy bottom-up, and a sub-classifier at each node learns from the samples corresponding to that node. During learning, several class clusters are generated by a first clustering of the training samples. Central points are first extracted from clusters that contain only one type of sample. For clusters that contain two types of samples, the numbers of clusters for the positive and negative samples are set according to their degree of mixture, a secondary clustering is performed, and central points are then extracted from the resulting sub-clusters. Sub-classifiers are obtained by learning from the reduced sample set formed by integrating the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, maintains high classification accuracy while greatly reducing the number of samples and effectively improving learning efficiency.
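The sample-reduction idea can be sketched with plain numpy: replace each class's training samples by cluster centroids before handing the reduced set to any SVM trainer. This is an illustrative simplification, not the paper's algorithm; the k-means below is a plain Lloyd iteration and the fixed per-class cluster count stands in for the paper's mixture-degree rule.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: returns k cluster centroids of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = np.argmin(((X[:, None] - centers[None])**2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):                      # guard against empty clusters
                centers[j] = pts.mean(0)
    return centers

def reduce_training_set(X, y, k_per_class):
    """Replace each class's samples by k_per_class centroids; the reduced
    set (much smaller than X) would then be passed to an SVM trainer."""
    Xs, ys = [], []
    for c in np.unique(y):
        C = kmeans(X[y == c], k_per_class)
        Xs.append(C)
        ys.append(np.full(len(C), c))
    return np.vstack(Xs), np.concatenate(ys)
```

Training on centroids trades a small amount of boundary detail for a large reduction in the quadratic-programming problem size, which is the efficiency gain the abstract reports.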
Wells, Beth; Shaw, Hannah; Innocent, Giles; Guido, Stefano; Hotchkiss, Emily; Parigi, Maria; Opsteegh, Marieke; Green, James; Gillespie, Simon; Innes, Elisabeth A; Katzer, Frank
2015-12-15
Waterborne transmission of Toxoplasma gondii is a potential public health risk, and there are currently no agreed optimised methods for the recovery, processing and detection of T. gondii oocysts in water samples. In this study, modified methods of T. gondii oocyst recovery and DNA extraction were applied to 1427 samples collected from 147 public water supplies throughout Scotland. T. gondii DNA was detected, using real-time PCR (qPCR) targeting the 529 bp repeat element, in 8.79% of interpretable samples (124 out of 1411 samples). The positive samples originated from a third of the sampled water sources. The samples which were positive by qPCR, and some of the negative samples, were reanalysed using ITS1 nested PCR (nPCR) and the results compared. The 529 bp qPCR was the more sensitive technique, and a full analysis of assay performance, by Bayesian analysis using a Markov chain Monte Carlo method, was completed, which demonstrated the efficacy of this method for the detection of T. gondii in water samples. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Multilattice sampling strategies for region of interest dynamic MRI.
Rilling, Gabriel; Tao, Yuehui; Marshall, Ian; Davies, Mike E
2013-08-01
A multilattice sampling approach is proposed for dynamic MRI with Cartesian trajectories. It relies on the use of sampling patterns composed of several different lattices and exploits an image model where only some parts of the image are dynamic, whereas the rest is assumed static. Given the parameters of such an image model, the methodology followed for the design of a multilattice sampling pattern adapted to the model is described. The multilattice approach is compared to single-lattice sampling, as used by traditional acceleration methods such as UNFOLD (UNaliasing by Fourier-Encoding the Overlaps using the temporal Dimension) or k-t BLAST, and to the random sampling used by modern compressed sensing-based methods. On the considered image model, it allows more flexibility and higher accelerations than lattice sampling and better performance than random sampling. The method is illustrated on a phase-contrast carotid blood velocity mapping MR experiment. Combining the multilattice approach with the KEYHOLE technique allows up to 12× acceleration factors. Simulation and in vivo undersampling results validate the method. Compared to lattice and random sampling, multilattice sampling provides significant gains at high acceleration factors. © 2012 Wiley Periodicals, Inc.
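A toy numpy sketch of a k-t sampling mask built as the union of two sheared Cartesian lattices; the acceleration factors and offsets below are hypothetical, not the paper's design, and a single lattice corresponds to the sheared-grid patterns used by UNFOLD or k-t BLAST.

```python
import numpy as np

def multilattice_mask(n_pe, n_frames, accels=(4, 6), offsets=(0, 1)):
    """Boolean k-t mask (phase-encode lines x time frames).
    Lattice i keeps line k at frame t when (k - offsets[i] - t) % accels[i]
    == 0, i.e. a Cartesian lattice sheared over time; the union of several
    such lattices gives a multilattice pattern."""
    k, t = np.meshgrid(np.arange(n_pe), np.arange(n_frames), indexing="ij")
    mask = np.zeros((n_pe, n_frames), dtype=bool)
    for a, o in zip(accels, offsets):
        mask |= ((k - o - t) % a == 0)
    return mask
```

With accelerations 4 and 6 and the offsets above, the two lattices are disjoint (their congruences differ mod 2), so the union samples exactly 1/4 + 1/6 = 5/12 of k-t space.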
An Overview of Conventional and Emerging Analytical Methods for the Determination of Mycotoxins
Cigić, Irena Kralj; Prosen, Helena
2009-01-01
Mycotoxins are a group of compounds produced by various fungi and excreted into the matrices on which they grow, often food intended for human consumption or animal feed. The high toxicity and carcinogenicity of these compounds and their ability to cause various pathological conditions has led to widespread screening of foods and feeds potentially polluted with them. Maximum permissible levels in different matrices have also been established for some toxins. As these are quite low, analytical methods for determination of mycotoxins have to be both sensitive and specific. In addition, an appropriate sample preparation and pre-concentration method is needed to isolate analytes from rather complicated samples. In this article, an overview of methods for analysis and sample preparation published in the last ten years is given for the most often encountered mycotoxins in different samples, mainly in food. Special emphasis is on liquid chromatography with fluorescence and mass spectrometric detection, while in the field of sample preparation various solid-phase extraction approaches are discussed. However, an overview of other analytical and sample preparation methods less often used is also given. Finally, different matrices where mycotoxins have to be determined are discussed with the emphasis on their specific characteristics important for the analysis (human food and beverages, animal feed, biological samples, environmental samples). Various issues important for accurate qualitative and quantitative analyses are critically discussed: sampling and choice of representative sample, sample preparation and possible bias associated with it, specificity of the analytical method and critical evaluation of results. PMID:19333436
Usefulness of in-house PCR methods for hepatitis B virus DNA detection.
Portilho, Moyra Machado; Baptista, Marcia Leite; da Silva, Messias; de Sousa, Paulo Sérgio Fonseca; Lewis-Ximenez, Lia Laura; Lampe, Elisabeth; Villar, Livia Melo
2015-10-01
The aim of the present study was to evaluate the performance of three in-house PCR techniques for HBV DNA detection and compare it with commercial quantitative methods to evaluate the usefulness of in-house methods for HBV diagnosis. Three panels of HBsAg reactive sera samples were evaluated: (i) 50 samples were examined using three methods for in-house qualitative PCR and the Cobas Amplicor HBV Monitor Assay; (ii) 87 samples were assayed using in-house semi-nested PCR and the Cobas TaqMan HBV test; (iii) 11 serial samples obtained from 2 HBV-infected individuals were assayed using the Cobas Amplicor HBV test and semi-nested PCR. In panel I, HBV DNA was detected in 44 samples using the Cobas Amplicor HBV test, 42 samples using semi-nested PCR (90% concordance with Cobas Amplicor), 22 samples using PCR for the core gene (63.6% concordance) and 29 samples using single-round PCR for the pre-S/S gene (75% concordance). In panel II, HBV DNA was quantified in 78 of the 87 HBsAg reactive samples using Cobas TaqMan but 52 samples using semi-nested PCR (67.8% concordance). HBV DNA was detected in serial samples until the 17th and 26th week after first donation using in-house semi-nested PCR and the Cobas Amplicor HBV test, respectively. In-house semi-nested PCR presented adequate concordance with commercial methods as an alternative method for HBV molecular diagnosis in low-resource settings. Copyright © 2015 Elsevier B.V. All rights reserved.
Rosenblum, Michael A; Laan, Mark J van der
2009-01-07
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
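One concrete tail bound of the kind the abstract describes is Bernstein's inequality. The sketch below (the function name, the bounded-range assumption, and the variance argument are assumptions of this illustration, not the authors' exact construction) inverts the two-sided bound for i.i.d. variables in [0, M] to get a confidence half-width with guaranteed coverage at every sample size, with no central-limit approximation.

```python
import math

def bernstein_ci(sample_mean, n, M, var, alpha=0.05):
    """Two-sided CI for the mean of n i.i.d. variables bounded in [0, M]
    with variance at most `var`, from Bernstein's inequality:
        P(|Xbar - mu| >= t) <= 2 exp(-n t^2 / (2 var + 2 M t / 3)).
    Setting the right-hand side to alpha and solving the resulting
    quadratic in t gives the half-width."""
    L = math.log(2.0 / alpha)
    b = 2.0 * M * L / 3.0
    t = (b + math.sqrt(b * b + 8.0 * n * var * L)) / (2.0 * n)
    return sample_mean - t, sample_mean + t
```

As the abstract notes, such intervals are wider than CLT-based ones but remain valid for small n; the width shrinks roughly like 1/sqrt(n) once n is large.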
Won, Jonghun; Lee, Gyu Rie; Park, Hahnbeom; Seok, Chaok
2018-06-07
The second extracellular loops (ECL2s) of G-protein-coupled receptors (GPCRs) are often involved in GPCR functions, and their structures have important implications in drug discovery. However, structure prediction of ECL2 is difficult because of its long length and the structural diversity among different GPCRs. In this study, a new ECL2 conformational sampling method involving both template-based and ab initio sampling was developed. Inspired by the observation of similar ECL2 structures of closely related GPCRs, a template-based sampling method employing loop structure templates selected from the structure database was developed. A new metric for evaluating similarity of the target loop to templates was introduced for template selection. An ab initio loop sampling method was also developed to treat cases without highly similar templates. The ab initio method is based on the previously developed fragment assembly and loop closure method. A new sampling component that takes advantage of secondary structure prediction was added. In addition, a conserved disulfide bridge restraining ECL2 conformation was predicted and analytically incorporated into sampling, reducing the effective dimension of the conformational search space. The sampling method was combined with an existing energy function for comparison with previously reported loop structure prediction methods, and the benchmark test demonstrated outstanding performance.
The second extracellular loops (ECL2s) of G-protein-coupled receptors (GPCRs) are often involved in GPCR functions, and their structures have important implications in drug discovery. However, structure prediction of ECL2 is difficult because of its long length and the structural diversity among different GPCRs. In this study, a new ECL2 conformational sampling method involving both template-based and ab initio sampling was developed. Inspired by the observation of similar ECL2 structures among closely related GPCRs, a template-based sampling method employing loop structure templates selected from the structure database was developed. A new metric for evaluating the similarity of the target loop to templates was introduced for template selection. An ab initio loop sampling method was also developed to treat cases without highly similar templates. The ab initio method is based on the previously developed fragment assembly and loop closure method. A new sampling component that takes advantage of secondary structure prediction was added. In addition, a conserved disulfide bridge restraining the ECL2 conformation was predicted and analytically incorporated into sampling, reducing the effective dimension of the conformational search space. The sampling method was combined with an existing energy function for comparison with previously reported loop structure prediction methods, and the benchmark test demonstrated outstanding performance.
[Comparison of the Conventional Centrifuged and Filtrated Preparations in Urine Cytology].
Sekita, Nobuyuki; Shimosakai, Hirofumi; Nishikawa, Rika; Sato, Hiroaki; Kouno, Hiroyoshi; Fujimura, Masaaki; Mikami, Kazuo
2016-03-01
The urine cytology test is one of the most important tools for the diagnosis of malignant urinary tract tumors. This test is also of great value for predicting malignancy. However, the sensitivity of this test is not high enough to screen for malignant cells. In our laboratory, we were able to attain a high sensitivity of urine cytology tests after changing the preparation method of urine samples. The differences in the cytodiagnosis between the two methods are discussed here. From January 2012 to June 2013, 2,031 urine samples were prepared using the conventional centrifuge method (C method); and from September 2013 to March 2015, 2,453 urine samples were prepared using the filtration method (F method) for the cytology test. When samples classified as category 4 or 5 were defined as cytologically positive, the sensitivity of the test for samples prepared using the F method was significantly higher than for samples prepared using the C method (72% vs 28%, p<0.001). The number of cells on the glass slides prepared by the F method was significantly higher than that of the samples prepared by the C method (p<0.001). After introduction of the F method, the number of false-negative cases in the urine cytology test decreased because a larger number of cells was seen and easily detected as atypical or malignant epithelial cells. Therefore, this method has a higher sensitivity than the conventional C method, as the sensitivity of urine cytology tests relies partially on the number of cells visualized in the prepared samples.
Molecular detection of airborne Coccidioides in Tucson, Arizona
Chow, Nancy A.; Griffin, Dale W.; Barker, Bridget M.; Loparev, Vladimir N.; Litvintseva, Anastasia P.
2016-01-01
Environmental surveillance of the soil-dwelling fungus Coccidioides is essential for the prevention of Valley fever, a disease primarily caused by inhalation of the arthroconidia. Methods for collecting and detecting Coccidioides in soil samples are currently in use by several laboratories; however, a method utilizing current air sampling technologies has not been formally demonstrated for the capture of airborne arthroconidia. In this study, we collected air/dust samples at two sites (Site A and Site B) in the endemic region of Tucson, Arizona, and tested a variety of air samplers and membrane matrices. We then employed a single-tube nested qPCR assay for molecular detection. At both sites, numerous soil samples (n = 10 at Site A and n = 24 at Site B) were collected and Coccidioides was detected in two samples (20%) at Site A and in eight samples (33%) at Site B. Of the 25 air/dust samples collected at both sites using five different air sampling methods, we detected Coccidioides in three samples from Site B. All three samples were collected using a high-volume sampler with glass-fiber filters. In this report, we describe these methods and propose the use of these air sampling and molecular detection strategies for environmental surveillance of Coccidioides.
Galway, Lp; Bell, Nathaniel; Sae, Al Shatari; Hagopian, Amy; Burnham, Gilbert; Flaxman, Abraham; Weiss, Wiliam M; Rajaratnam, Julie; Takaro, Tim K
2012-04-27
Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing the challenge of estimating mortality using retrospective population-based surveys. We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method was implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Sampling is a challenge in retrospective population-based mortality studies, and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, is adaptable and should be considered and tested in other conflict settings. PMID:22540266
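The first sampling stage described above amounts to probability-proportional-to-size (PPS) selection of grid cells from gridded population counts. A minimal numpy sketch of systematic PPS selection is shown below; the function name and parameters are illustrative, not taken from the study protocol.

```python
import numpy as np

def pps_systematic(pop_sizes, n_clusters, seed=0):
    """First-stage selection: choose n_clusters grid cells with probability
    proportional to their population counts, via systematic PPS sampling.
    A random start is drawn in the first sampling interval and equally
    spaced points along the cumulative population pick the cells."""
    pop = np.asarray(pop_sizes, dtype=float)
    cum = np.cumsum(pop)
    interval = cum[-1] / n_clusters
    rng = np.random.default_rng(seed)
    start = rng.uniform(0, interval)
    points = start + interval * np.arange(n_clusters)
    return np.searchsorted(cum, points, side="left")   # cell index per draw
```

Cells with large populations can be selected more than once, in which case multiple clusters are allocated to them, which is the standard PPS convention.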
Method and apparatus for imaging a sample on a device
Trulson, Mark; Stern, David; Fiekowsky, Peter; Rava, Richard; Walton, Ian; Fodor, Stephen P. A.
2001-01-01
A method and apparatus for imaging a sample are provided. An electromagnetic radiation source generates excitation radiation which is sized by excitation optics to a line. The line is directed at a sample resting on a support and excites a plurality of regions on the sample. Collection optics collect response radiation reflected from the sample and image the reflected radiation. A detector senses the reflected radiation and is positioned to permit discrimination between radiation reflected from a certain focal plane in the sample and certain other planes within the sample.
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
2017-10-26
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
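For scale, the classical dense approach that SPDE and multilevel samplers are designed to avoid can be sketched in a few lines; the exponential covariance and the function name below are illustrative choices, not from the paper.

```python
import numpy as np

def sample_gaussian_field(coords, corr_len, n_samples, seed=0):
    """Dense baseline: draw samples of a zero-mean Gaussian field with
    exponential covariance exp(-|x - y| / corr_len) via a Cholesky factor
    of the full covariance matrix.  The O(N^3) factorization cost is
    exactly what scalable SPDE/multilevel samplers avoid at large N."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = np.exp(-d / corr_len) + 1e-10 * np.eye(len(coords))  # jitter for PD
    Lc = np.linalg.cholesky(C)
    rng = np.random.default_rng(seed)
    return Lc @ rng.standard_normal((len(coords), n_samples))
```

Each column is one field realization; for a mesh with millions of nodes this matrix cannot even be stored, which motivates the hierarchical sampler in the abstract.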
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least-squares-based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison of the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE criteria are employed and/or the oversampling ratio is low.
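As a concrete instance of the least-squares PCE setup reviewed above, here is a minimal 1-D sketch with Legendre polynomials and plain Monte Carlo sampling, the baseline strategy; the coherence-optimal and design-of-experiments variants discussed in the review change only how the sample points are drawn. Function names are this sketch's own.

```python
import numpy as np

def legendre_design(xi, order):
    """Measurement matrix: column k holds the Legendre polynomial P_k
    evaluated at the sample points xi in [-1, 1]."""
    cols = [np.polynomial.legendre.legval(xi, np.eye(order + 1)[k])
            for k in range(order + 1)]
    return np.column_stack(cols)

def ls_pce(f, order, n_samples, seed=0):
    """Least-squares PCE of f with plain Monte Carlo (uniform) sampling;
    n_samples > order + 1 gives the oversampling the review discusses."""
    rng = np.random.default_rng(seed)
    xi = rng.uniform(-1, 1, n_samples)
    Psi = legendre_design(xi, order)
    coef, *_ = np.linalg.lstsq(Psi, f(xi), rcond=None)
    return coef
```

For a function lying exactly in the basis, such as f(x) = 1 + 2x = P0(x) + 2·P1(x), the least-squares solve recovers the coefficients to machine precision regardless of the sampling strategy; the strategies differ in conditioning and accuracy for functions outside the basis.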
40 CFR 60.496 - Test methods and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Surface Coating Industry § 60.496 Test methods and procedures. (a) The reference methods in appendix A to...) Method 4 for stack gas moisture. (b) For Method 24, the coating sample must be a 1-litre sample collected... volume must be 0.003 dscm except that shorter sampling times or smaller volumes, when necessitated by...
Swezey, Robert; Shinn, Walter; Green, Carol; Drover, David R.; Hammer, Gregory B.; Schulman, Scott R.; Zajicek, Anne; Jett, David A.; Boss, Gerry R.
2013-01-01
Most hospital laboratories do not measure blood cyanide concentrations, and samples must be sent to reference laboratories. A simple method is needed for measuring cyanide in hospitals. The authors previously developed a method to quantify cyanide based on the high binding affinity of the vitamin B12 analog, cobinamide, for cyanide and a major spectral change observed for cyanide-bound cobinamide. This method is now validated in human blood, and the findings include a mean inter-assay accuracy of 99.1%, precision of 8.75% and a lower limit of quantification of 3.27 µM cyanide. The method was applied to blood samples from children treated with sodium nitroprusside and it yielded measurable results in 88 of 172 samples (51%), whereas the reference laboratory yielded results in only 19 samples (11%). In all 19 samples, the cobinamide-based method also yielded measurable results. The two methods showed reasonable agreement when analyzed by linear regression, but not when analyzed by a standard error of the estimate or paired t-test. Differences in results between the two methods may be because samples were assayed at different times on different sample types. The cobinamide-based method is applicable to human blood, and can be used in hospital laboratories and emergency rooms. PMID:23653045
Crepeau, Kathryn L.; Fram, Miranda S.; Bush, Noel
2004-01-01
An analytical method for the determination of the trihalomethane formation potential of water samples has been developed. The trihalomethane formation potential is measured by dosing samples with chlorine under specified conditions of pH, temperature, incubation time, darkness, and residual-free chlorine, and then analyzing the resulting trihalomethanes by purge and trap/gas chromatography equipped with an electron capture detector. Detailed explanations of the method and quality-control practices are provided. Method validation experiments showed that the trihalomethane formation potential varies as a function of time between sample collection and analysis, residual-free chlorine concentration, method of sample dilution, and the concentration of bromide in the sample.
Savoie, Jennifer G.; LeBlanc, Denis R.
2012-01-01
Field tests were conducted near the Impact Area at Camp Edwards on the Massachusetts Military Reservation, Cape Cod, Massachusetts, to determine the utility of no-purge groundwater sampling for monitoring concentrations of ordnance-related explosive compounds and perchlorate in the sand and gravel aquifer. The no-purge methods included (1) a diffusion sampler constructed of rigid porous polyethylene, (2) a diffusion sampler constructed of regenerated-cellulose membrane, and (3) a tubular grab sampler (bailer) constructed of polyethylene film. In samples from 36 monitoring wells, concentrations of perchlorate (ClO4-), hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX), and octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX), the major contaminants of concern in the Impact Area, in the no-purge samples were compared to concentrations of these compounds in samples collected by low-flow pumped sampling with dedicated bladder pumps. The monitoring wells are constructed of 2- and 2.5-inch-diameter polyvinyl chloride pipe and have approximately 5- to 10-foot-long slotted screens. The no-purge samplers were left in place for 13-64 days to ensure that ambient groundwater flow had flushed the well screen and concentrations in the screen represented water in the adjacent formation. The sampling methods were compared first in six monitoring wells. Concentrations of ClO4-, RDX, and HMX in water samples collected by the three no-purge sampling methods and low-flow pumped sampling were in close agreement for all six monitoring wells. There is no evidence of a systematic bias in the concentration differences among the methods on the basis of type of sampling device, type of contaminant, or order in which the no-purge samplers were tested. 
A subsequent examination of vertical variations in concentrations of ClO4- in the 10-foot-long screens of six wells by using rigid porous polyethylene diffusion samplers indicated that concentrations in a given well varied by less than 15 percent and the small variations were unlikely to affect the utility of the various sampling methods. The grab sampler was selected for additional tests in 29 of the 36 monitoring wells used during the study. Concentrations of ClO4-, RDX, HMX, and other minor explosive compounds in water samples collected by using a 1-liter grab sampler and low-flow pumped sampling were in close agreement in field tests in the 29 wells. A statistical analysis based on the sign test indicated that there was no bias in the concentration differences between the methods. There also was no evidence for a systematic bias in concentration differences between the methods related to location of the monitoring wells laterally or vertically in the groundwater-flow system. Field tests in five wells also demonstrated that sample collection by using a 2-liter grab sampler and sequential bailing with the 1-liter grab sampler were options for obtaining sufficient sample volume for replicate and spiked quality assurance and control samples. The evidence from the field tests supports the conclusion that diffusion sampling with the rigid porous polyethylene and regenerated-cellulose membranes and grab sampling with the polyethylene-film samplers provide comparable data on the concentrations of ordnance-related compounds in groundwater at the MMR to that obtained by low-flow pumped sampling. These sampling methods are useful methods for monitoring these compounds at the MMR and in similar hydrogeologic environments.
Su, Xiaoquan; Xu, Jian; Ning, Kang
2012-10-01
Scientists have long been interested in effectively comparing different microbial communities (also referred to as 'metagenomic samples' here) on a large scale: given a set of unknown samples, find similar metagenomic samples from a large repository and examine how similar these samples are. With the metagenomic samples accumulated to date, it is possible to build a database of metagenomic samples of interest, against which any metagenomic sample could then be searched to find the most similar sample(s). However, on one hand, current databases with large numbers of metagenomic samples mostly serve as data repositories that offer few functionalities for analysis; on the other hand, methods to measure the similarity of metagenomic data work well only for small sets of samples by pairwise comparison. It is not yet clear how to efficiently search for metagenomic samples against a large metagenomic database. In this study, we propose a novel method, Meta-Storms, that can systematically and efficiently organize and search metagenomic data. It includes the following components: (i) creating a database of metagenomic samples based on their taxonomical annotations, (ii) efficient indexing of samples in the database based on a hierarchical taxonomy indexing strategy, (iii) searching for a metagenomic sample against the database with a fast scoring function based on quantitative phylogeny and (iv) managing the database by index export, index import, data insertion, data deletion and database merging. We have collected more than 1300 metagenomic datasets from the public domain and in-house facilities, and tested the Meta-Storms method on them. Our experimental results show that Meta-Storms is capable of database creation and effective searching for large numbers of metagenomic samples, and it achieves accuracies similar to those of the currently popular significance testing-based methods.
The Meta-Storms method would serve as a suitable database management and search system to quickly identify similar metagenomic samples from a large pool of samples. Contact: ningkang@qibebt.ac.cn. Supplementary data are available at Bioinformatics online.
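The idea of scoring a query sample against a database of taxonomy profiles can be illustrated with a much simpler stand-in for Meta-Storms' phylogeny-weighted scoring function: a flat Bray-Curtis-style overlap on relative abundances. All sample names and numbers below are invented for illustration:

```python
def similarity(a, b):
    """Overlap similarity between two taxon-abundance profiles.

    a, b -- dicts mapping taxon name to relative abundance (summing to ~1).
    This flat measure merely stands in for Meta-Storms' phylogeny-weighted
    score: 1.0 means identical profiles, 0.0 means no shared taxa.
    """
    taxa = set(a) | set(b)
    return sum(min(a.get(t, 0.0), b.get(t, 0.0)) for t in taxa)

def search(database, query):
    """Return (sample_id, score) of the most similar sample in the database."""
    return max(((sid, similarity(profile, query))
                for sid, profile in database.items()),
               key=lambda pair: pair[1])

# Hypothetical database of three metagenomic samples
db = {
    "gut_A":  {"Bacteroides": 0.6, "Firmicutes": 0.3, "Proteobacteria": 0.1},
    "gut_B":  {"Bacteroides": 0.5, "Firmicutes": 0.4, "Proteobacteria": 0.1},
    "soil_C": {"Actinobacteria": 0.7, "Proteobacteria": 0.3},
}
query = {"Bacteroides": 0.58, "Firmicutes": 0.32, "Proteobacteria": 0.10}
print(search(db, query))
```

A real system would add the hierarchical taxonomy index described above so that the whole database need not be scanned for every query.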
Phytoforensics—Using trees to find contamination
Wilson, Jordan L.
2017-09-28
The water we drink, air we breathe, and soil we come into contact with have the potential to adversely affect our health because of contaminants in the environment. Environmental samples can characterize the extent of potential contamination, but traditional methods for collecting water, air, and soil samples below the ground (for example, well drilling or direct-push soil sampling) are expensive and time consuming. Trees are closely connected to the subsurface and sampling tree trunks can indicate subsurface pollutants, a process called phytoforensics. Scientists at the Missouri Water Science Center were among the first to use phytoforensics to screen sites for contamination before using traditional sampling methods, to guide additional sampling, and to show the large cost savings associated with tree sampling compared to traditional methods.
Exploring high dimensional free energy landscapes: Temperature accelerated sliced sampling
NASA Astrophysics Data System (ADS)
Awasthi, Shalini; Nair, Nisanth N.
2017-03-01
Biased sampling of collective variables is widely used to accelerate rare events in molecular simulations and to explore free energy surfaces. However, computational efficiency of these methods decreases with increasing number of collective variables, which severely limits the predictive power of the enhanced sampling approaches. Here we propose a method called Temperature Accelerated Sliced Sampling (TASS) that combines temperature accelerated molecular dynamics with umbrella sampling and metadynamics to sample the collective variable space in an efficient manner. The presented method can sample a large number of collective variables and is advantageous for controlled exploration of broad and unbound free energy basins. TASS is also shown to achieve quick free energy convergence and is practically usable with ab initio molecular dynamics techniques.
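As a toy illustration of the umbrella-sampling ingredient (one of the three techniques TASS combines), the sketch below biases Metropolis Monte Carlo sampling of a one-dimensional double-well potential with harmonic restraints at several window centers. This is a didactic sketch, not the TASS algorithm, and all parameters are invented:

```python
import math
import random

def umbrella_window(center, kappa=50.0, beta=1.0, steps=20000, seed=0):
    """Metropolis sampling of U(x) = (x^2 - 1)^2 plus a harmonic
    umbrella bias 0.5*kappa*(x - center)^2 restraining x near `center`.
    Returns the sampled positions for this window."""
    rng = random.Random(seed)
    def U(x):
        return (x * x - 1.0) ** 2 + 0.5 * kappa * (x - center) ** 2
    x, samples = center, []
    for _ in range(steps):
        trial = x + rng.uniform(-0.2, 0.2)
        dU = U(trial) - U(x)
        if dU <= 0 or rng.random() < math.exp(-beta * dU):
            x = trial
        samples.append(x)
    return samples

# Windows spanning the collective variable; unbiasing and stitching the
# windows into one free-energy profile (e.g., via WHAM) is omitted here.
means = {c: sum(s) / len(s)
         for c, s in ((c, umbrella_window(c)) for c in (-1.0, 0.0, 1.0))}
print({c: round(m, 2) for c, m in means.items()})
```

Each window explores only the region its restraint selects, which is exactly why overlapping windows let a biased simulation cover a broad basin that unbiased sampling would cross only rarely.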
Quantitative Evaluation of Hard X-ray Damage to Biological Samples using EUV Ptychography
NASA Astrophysics Data System (ADS)
Baksh, Peter; Odstrcil, Michal; Parsons, Aaron; Bailey, Jo; Deinhardt, Katrin; Chad, John E.; Brocklesby, William S.; Frey, Jeremy G.
2017-06-01
Coherent diffractive imaging (CDI) has become a standard method on a variety of synchrotron beam lines. The high brilliance short wavelength radiation from these sources can be used to reconstruct attenuation and relative phase of a sample with nanometre resolution via CDI methods. However, the interaction between the sample and high energy ionising radiation can cause degradation to sample structure. We demonstrate, using a laboratory based high harmonic generation (HHG) based extreme ultraviolet (EUV) source, imaging a sample of hippocampal neurons using the ptychography method. The significant increase in contrast of the sample in the EUV light allows identification of damage induced from exposure to 7.3 keV photons, without causing any damage to the sample itself.
A STANDARDIZED ASSESSMENT METHOD (SAM) FOR RIVERINE MACROINVERTEBRATES
A macroinvertebrate sampling method for large rivers based on desirable characteristics of existing nonwadeable methods was developed and tested. Six sites each were sampled on the Great Miami and Kentucky Rivers, reflecting a human disturbance gradient. Samples were collected ...
CTEPP STANDARD OPERATING PROCEDURE FOR PACKING AND SHIPPING STUDY SAMPLES (SOP-3.11)
This SOP describes the methods for packing and shipping study samples. These methods are for packing and shipping biological and environmental samples. The methods have been tested and used in the previous pilot studies.
Rapid Sampling of Molecules via Skin for Diagnostic and Forensic Applications
Paliwal, Sumit; Ogura, Makoto
2010-01-01
Purpose: Skin provides an excellent portal for diagnostic monitoring of a variety of entities; however, there is a dearth of reliable methods for patient-friendly sampling of skin constituents. This study describes the use of low-frequency ultrasound as a one-step methodology for rapid sampling of molecules from the skin. Methods: Sampling was performed using a brief exposure of 20 kHz ultrasound to skin in the presence of a sampling fluid. In vitro sampling from porcine skin was performed to assess the effectiveness of the method and its ability to sample drugs and endogenous epidermal biomolecules from the skin. Dermal presence of an antifungal drug (fluconazole) and an abused substance (cocaine) was assessed in rats. Results: Ultrasonic sampling captured the native profile of various naturally occurring moisturizing factors in skin. A high sampling efficiency (79 ± 13%) of topically delivered drug was achieved. Ultrasound consistently sampled greater amounts of drug from the skin compared to tape stripping. Ultrasonic sampling also detected sustained presence of cocaine in rat skin for up to 7 days as compared to its rapid disappearance from the urine. Conclusions: Ultrasonic sampling provides significant advantages including enhanced sampling from deeper layers of skin and high temporal sampling sensitivity. PMID:20238151
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fangyan; Zhang, Song; Chung Wong, Pak
Effectively visualizing large graphs and capturing the statistical properties are two challenging tasks. To aid in these two tasks, many sampling approaches for graph simplification have been proposed, falling into three categories: node sampling, edge sampling, and traversal-based sampling. It is still unknown which approach is the best. We evaluate commonly used graph sampling methods through a combined visual and statistical comparison of graphs sampled at various rates. We conduct our evaluation on three graph models: random graphs, small-world graphs, and scale-free graphs. Initial results indicate that the effectiveness of a sampling method is dependent on the graph model, the size of the graph, and the desired statistical property. This benchmark study can be used as a guideline in choosing the appropriate method for a particular graph sampling task, and the results presented can be incorporated into graph visualization and analysis tools.
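Two of the three sampling categories named above can be sketched directly on an edge list (traversal-based sampling, e.g., random walks, is omitted for brevity); the graph and rates below are hypothetical:

```python
import random

def node_sample(edges, rate, seed=0):
    """Node sampling: keep a random subset of nodes, then the induced edges."""
    rng = random.Random(seed)
    nodes = {n for edge in edges for n in edge}
    kept = {n for n in nodes if rng.random() < rate}
    return [(u, v) for u, v in edges if u in kept and v in kept]

def edge_sample(edges, rate, seed=0):
    """Edge sampling: keep each edge independently with probability `rate`."""
    rng = random.Random(seed)
    return [edge for edge in edges if rng.random() < rate]

# A small hypothetical graph as an edge list
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
print(len(node_sample(edges, 0.5)), len(edge_sample(edges, 0.5)))
```

Even this toy shows why the methods diverge statistically: node sampling discards every edge touching a dropped node, so it thins dense regions faster than independent edge sampling does.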
Resampling methods in Microsoft Excel® for estimating reference intervals
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5th and 97.5th percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to the use of Microsoft Excel® 2010 for estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference sample is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366
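The same percentile-bootstrap idea can be sketched outside Excel®; the sketch below (in Python, with invented data) mirrors the approach of resampling with replacement and reading off the 2.5th and 97.5th percentiles:

```python
import random
import statistics

def bootstrap_reference_interval(values, n_boot=1000, seed=1):
    """Percentile-bootstrap estimate of a 95% reference interval.

    Draws n_boot resamples with replacement from the reference sample and
    averages each resample's 2.5th and 97.5th percentile estimates.
    """
    rng = random.Random(seed)
    lows, highs = [], []
    for _ in range(n_boot):
        resample = rng.choices(values, k=len(values))
        qs = statistics.quantiles(resample, n=40, method="inclusive")
        lows.append(qs[0])     # cut point at 1/40 = 2.5th percentile
        highs.append(qs[-1])   # cut point at 39/40 = 97.5th percentile
    return statistics.mean(lows), statistics.mean(highs)

# Hypothetical reference sample of 40 analyte measurements
random.seed(7)
ref = [random.gauss(5.0, 0.8) for _ in range(40)]
lo, hi = bootstrap_reference_interval(ref)
print(round(lo, 2), round(hi, 2))
```

With only 40 reference values, as the abstract notes, this resampling approach is preferable to fitting a parametric distribution that the data may not follow.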
Uran, Harun; Gokoglu, Nalan
2014-04-01
The aim of this study was to determine the nutritional and quality characteristics of anchovy after cooking. The fish were cooked by different methods (frying, baking and grilling) at two different temperatures (160 °C and 180 °C). Crude ash, crude protein and crude fat contents of cooked fish increased owing to the rise in dry matter content. While the cooking method affected the mineral content of anchovy, the cooking temperature did not. The highest values of monounsaturated fatty acids were found in baked samples. Polyunsaturated fatty acid levels in baked samples were also high, and similar to those in fried samples. Fried samples, which were the most preferred, lost their nutritional characteristics more than baked and grilled samples. Grilled and baked fish samples can be recommended for healthy consumption. However, grilled samples had a harder texture owing to greater moisture loss than with the other methods. Therefore, it is concluded that baking is the best cooking method for anchovy.
Cruz, Mutya; Wang, Miao; Frisch-Daiello, Jessica; Han, Xianlin
2016-01-01
Extraction of lipids from biological samples is a critical step in lipidomics, especially for shotgun lipidomics where lipid extracts are directly infused into a mass spectrometer. The butanol-methanol (BUME) extraction method was originally developed to extract lipids from plasma samples with 1% acetic acid. Considering some lipids are sensitive to acidic environments, we modified this protocol by replacing acetic acid with lithium chloride solution and extended the modified extraction to tissue samples. Although no significant reduction of plasmalogen levels in the acidic BUME extracts of rat heart samples was found, the modified method was established to extract various tissue samples, including rat liver, heart, and plasma. Essentially identical profiles of the majority of lipid classes were obtained from the extracts of the modified BUME and traditional Bligh-Dyer methods. However, it was found that neither the original, nor the modified BUME method was suitable for 4-hydroxyalkenal species measurement in biological samples. PMID:27245345
Optical method for the determination of grain orientation in films
Maris, Humphrey J.
2001-01-01
A method for the determination of grain orientation in a film sample is provided comprising the steps of measuring a first transient optical response of the film and determining the contribution to the transient optical response arising from a change in the energy distribution of the electrons in the sample, determining the contribution to the transient optical response arising from a propagating strain pulse within the sample, and determining the contribution to the transient optical response arising from a change in sample temperature of the sample. The grain orientation of the sample may be determined using the contributions to the transient optical response arising from the change in the energy distribution of the electrons, the propagating strain pulse, and the change in sample temperature. Additionally, a method for determination of the thickness of a film sample is provided. The grain orientation of the sample is first determined. The grain orientation, together with the velocity of sound and a propagation time of a strain pulse through the sample are then used to determine the thickness of the film sample.
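The thickness step described above reduces to simple kinematics. Assuming the measured propagation time corresponds to a round trip of the strain pulse (surface to substrate interface and back) — an assumption of this sketch, not a statement from the patent — with invented numbers:

```python
def film_thickness(sound_velocity_nm_per_ps, round_trip_time_ps):
    """Film thickness from acoustic echo timing: d = v * t / 2,
    where t is the round-trip time of the strain pulse.
    The factor of 2 reflects the round-trip assumption noted above."""
    return sound_velocity_nm_per_ps * round_trip_time_ps / 2.0

# Hypothetical aluminum film: longitudinal sound velocity ~6.4 nm/ps
print(film_thickness(6.4, 31.25))  # 100.0 (nm)
```

This is why the grain orientation matters: the sound velocity entering the formula depends on the crystallographic direction of propagation, so orientation must be determined first.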
A novel heterogeneous training sample selection method on space-time adaptive processing
NASA Astrophysics Data System (ADS)
Wang, Qiang; Zhang, Yongshun; Guo, Yiduo
2018-04-01
The ground-target detection performance of space-time adaptive processing (STAP) degrades when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. In order to solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance, so as to reject the contaminated training samples. Thirdly, the cell under test (CUT) and the residual training samples are projected into the orthogonal subspace of the target in the CUT, and mean-Hausdorff distances between the projected CUT and training samples are calculated. Fourthly, the distances are sorted by value, and the training samples with the largest values are preferentially selected to realize the dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
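The mean-Hausdorff distance at the heart of the second and third steps can be sketched for one-dimensional point sets (real STAP snapshots are complex-valued vectors; the data below are invented):

```python
def mean_hausdorff(A, B):
    """Mean Hausdorff distance between two point sets of real numbers.

    directed(X, Y) averages each point's distance to its nearest
    neighbour in the other set; the symmetric distance takes the max
    of the two directed averages.
    """
    def directed(X, Y):
        return sum(min(abs(x - y) for y in Y) for x in X) / len(X)
    return max(directed(A, B), directed(B, A))

# Two hypothetical training-sample "point sets"
clean = [0.0, 1.0, 2.0, 3.0]
contaminated = [0.0, 1.0, 2.0, 9.0]   # one target-like outlier
print(mean_hausdorff(clean, clean), mean_hausdorff(clean, contaminated))
```

Because the measure averages nearest-neighbour distances rather than taking the worst case, a single outlier shifts the score smoothly, which makes it a usable similarity score for ranking and rejecting contaminated samples.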
Fourcade, Yoan; Engler, Jan O.; Rödder, Dennis; Secondi, Jean
2014-01-01
MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one “virtual” derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, this method seems to be the most efficient in correcting sampling bias and should be advised in most cases. PMID:24818607
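The "systematic sampling of records" that performed best above can be sketched as spatial thinning on a regular grid, keeping one occurrence record per cell so that heavily surveyed areas do not dominate the training data (coordinates and cell size below are invented):

```python
def systematic_thin(records, cell_size):
    """Spatially systematic thinning: keep one record per grid cell.

    records -- (longitude, latitude) tuples; cell_size -- grid resolution
    in degrees. Keeps the first record seen in each cell, which evens out
    sampling effort across the study area.
    """
    seen = {}
    for lon, lat in records:
        cell = (int(lon // cell_size), int(lat // cell_size))
        seen.setdefault(cell, (lon, lat))
    return list(seen.values())

# Hypothetical occurrence records clustered near one well-surveyed site
records = [(0.1, 0.1), (0.2, 0.15), (0.12, 0.18), (3.4, 2.2), (7.9, 5.1)]
print(systematic_thin(records, cell_size=1.0))
```

The three clustered records collapse to one, so a model trained on the thinned data no longer treats survey effort as evidence of habitat suitability.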
Tulipan, Rachel J; Phillips, Heidi; Garrett, Laura D; Dirikolu, Levent; Mitchell, Mark A
2017-05-01
OBJECTIVE To characterize long-term elution of platinum from carboplatin-impregnated calcium sulfate hemihydrate (CI-CSH) beads in vitro by comparing 2 distinct sample collection methods designed to mimic 2 in vivo environments. SAMPLES 162 CI-CSH beads containing 4.6 mg of carboplatin (2.4 mg of platinum/bead). PROCEDURES For method 1, which mimicked an in vivo environment with rapid and complete fluid exchange, each of 3 plastic 10-mL conical tubes contained 3 CI-CSH beads and 5 mL of PBS solution. Eluent samples were obtained by evacuation of all fluid at 1, 2, 3, 6, 9, and 12 hours and 1, 2, 3, 6, 9, 12, 15, 18, 22, 26, and 30 days. Five milliliters of fresh PBS solution was then added to each tube. For method 2, which mimicked an in vivo environment with no fluid exchange, each of 51 tubes (ie, 3 tubes/17 sample collection times) contained 3 CI-CSH beads and 5 mL of PBS solution. Eluent samples were obtained from the assigned tubes for each time point. All samples were analyzed for platinum content by inductively coupled plasma-mass spectrometry. RESULTS Platinum was released from CI-CSH beads for 22 to 30 days. Significant differences were found in platinum concentration and percentage of platinum eluted from CI-CSH beads over time for each method. Platinum concentrations and elution percentages in method 2 samples were significantly higher than those of method 1 samples, except for the first hour measurements. CONCLUSIONS AND CLINICAL RELEVANCE Sample collection methods 1 and 2 may provide estimates of the minimum and maximum platinum release, respectively, from CI-CSH beads in vivo.
A cryopreservation method for Pasteurella multocida from wetland samples
Moore, Melody K.; Shadduck, D.J.; Goldberg, Diana R.; Samuel, M.D.
1998-01-01
A cryopreservation method and improved isolation techniques for detection of Pasteurella multocida from wetland samples were developed. Wetland water samples were collected in the field, diluted in dimethyl sulfoxide (DMSO, final concentration 10%), and frozen at -180 C in a liquid nitrogen vapor shipper. Frozen samples were transported to the laboratory where they were subsequently thawed and processed in Pasteurella multocida selective broth (PMSB) to isolate P. multocida. This method allowed for consistent isolation of 2 to 18 organisms/ml from water seeded with known concentrations of P. multocida. The method compared favorably with the standard mouse inoculation method and allowed for preservation of the samples until they could be processed in the laboratory.
Random vs. systematic sampling from administrative databases involving human subjects.
Hagino, C; Lo, R J
1998-09-01
Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes (n = 50, 100, 150, 200, 250, 300, 500, 800). From the profile/characteristics summaries of four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-square tests and unpaired t tests were performed to determine whether any of the differences [descriptively greater than 7% or 7 yr] were also statistically significant. The strength of agreement between the provincial distributions was quantified by calculating the percent agreement for each pairwise comparison of methods. Any percent agreement less than 70% was judged to be unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
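The two techniques compared above are easy to state in code; a minimal sketch over a mock membership frame (names and sizes invented, frame ordered alphabetically as in the database described above):

```python
import random

def simple_random_sample(frame, n, seed=0):
    """Simple random sampling (SRS): every member has an equal chance."""
    return random.Random(seed).sample(frame, n)

def systematic_sample(frame, n, seed=0):
    """Systematic sampling (SS): a random start, then every k-th member
    of the ordered frame, with sampling interval k = len(frame) // n."""
    k = len(frame) // n
    start = random.Random(seed).randrange(k)
    return frame[start::k][:n]

# Mock frame standing in for a membership database listed by surname
frame = [f"member_{i:04d}" for i in range(1000)]
srs = simple_random_sample(frame, 200)
ss = systematic_sample(frame, 200)
print(len(srs), len(ss))
```

SS is only safe when, as the abstract concludes, the ordering of the frame carries no bias: any periodicity in the list that matches the interval k would skew a systematic sample but not a simple random one.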
Guo, Yan; Li, Xiaoming; Fang, Xiaoyi; Lin, Xiuyun; Song, Yan; Jiang, Shuling; Stanton, Bonita
2011-01-01
Sample representativeness remains one of the challenges in effective HIV/STD surveillance and prevention targeting MSM worldwide. Although convenience samples are widely used in studies of MSM, previous studies suggested that these samples might not be representative of the broader MSM population. This issue becomes even more critical in many developing countries where needed resources for conducting probability sampling are limited. We examined variations in HIV and Syphilis infections and sociodemographic and behavioral factors among 307 young migrant MSM recruited using four different convenience sampling methods (peer outreach, informal social network, Internet, and venue-based) in Beijing, China in 2009. The participants completed a self-administered survey and provided blood specimens for HIV/STD testing. Among the four MSM samples using different recruitment methods, rates of HIV infections were 5.1%, 5.8%, 7.8%, and 3.4%; rates of Syphilis infection were 21.8%, 36.2%, 11.8%, and 13.8%; rates of inconsistent condom use were 57%, 52%, 58%, and 38%. Significant differences were found in various sociodemographic characteristics (e.g., age, migration history, education, income, places of employment) and risk behaviors (e.g., age at first sex, number of sex partners, involvement in commercial sex, and substance use) among samples recruited by different sampling methods. The results confirmed the challenges of obtaining representative MSM samples and underscored the importance of using multiple sampling methods to reach MSM from diverse backgrounds and in different social segments and to improve the representativeness of the MSM samples when the use of probability sampling approach is not feasible. PMID:21711162
Lundin, Jessica I; Dills, Russell L; Ylitalo, Gina M; Hanson, M Bradley; Emmons, Candice K; Schorr, Gregory S; Ahmad, Jacqui; Hempelmann, Jennifer A; Parsons, Kim M; Wasser, Samuel K
2016-01-01
Biologic sample collection in wild cetacean populations is challenging. Most information on toxicant levels is obtained from blubber biopsy samples; however, sample collection is invasive and strictly regulated under permit, thus limiting sample numbers. Methods are needed to monitor toxicant levels that increase temporal and repeat sampling of individuals for population health and recovery models. The objective of this study was to optimize measuring trace levels (parts per billion) of persistent organic pollutants (POPs), namely polychlorinated-biphenyls (PCBs), polybrominated-diphenyl-ethers (PBDEs), dichlorodiphenyltrichloroethanes (DDTs), and hexachlorocyclobenzene, in killer whale scat (fecal) samples. Archival scat samples, initially collected, lyophilized, and extracted with 70 % ethanol for hormone analyses, were used to analyze POP concentrations. The residual pellet was extracted and analyzed using gas chromatography coupled with mass spectrometry. Method detection limits ranged from 11 to 125 ng/g dry weight. The described method is suitable for p,p'-DDE, PCBs-138, 153, 180, and 187, and PBDEs-47 and 100; other POPs were below the limit of detection. We applied this method to 126 scat samples collected from Southern Resident killer whales. Scat samples from 22 adult whales also had known POP concentrations in blubber and demonstrated significant correlations (p < 0.01) between matrices across target analytes. Overall, the scat toxicant measures matched previously reported patterns from blubber samples of decreased levels in reproductive-age females and a decreased p,p'-DDE/∑PCB ratio in J-pod. Measuring toxicants in scat samples provides an unprecedented opportunity to noninvasively evaluate contaminant levels in wild cetacean populations; these data have the prospect to provide meaningful information for vital management decisions.
Bruce, James F.; Roberts, James J.; Zuellig, Robert E.
2018-05-24
The U.S. Geological Survey (USGS), in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, analyzed previously collected invertebrate data to determine the comparability among four sampling methods and two versions (2010 and 2017) of the Colorado Benthic Macroinvertebrate Multimetric Index (MMI). For this study, annual macroinvertebrate samples were collected concurrently (in space and time) at 15 USGS surface-water gaging stations in the Fountain Creek Basin from 2010 to 2012 using four sampling methods. The USGS monitoring project in the basin uses two of the methods and the Colorado Department of Public Health and Environment recommends the other two. These methods belong to two distinct sample types, one that targets single habitats and one that targets multiple habitats. The study results indicate that there are significant differences in MMI values obtained from the single-habitat and multihabitat sample types but methods from each program within each sample type produced comparable values. This study also determined that MMI values calculated by different versions of the Colorado Benthic Macroinvertebrate MMI are indistinguishable. This indicates that the Colorado Department of Public Health and Environment methods are comparable with the USGS monitoring project methods for single-habitat and multihabitat sample types. This report discusses the direct application of the study results to inform the revision of the existing USGS monitoring project in the Fountain Creek Basin.
Towards robust and repeatable sampling methods in eDNA based studies.
Dickie, Ian A; Boyer, Stephane; Buckley, Hannah; Duncan, Richard P; Gardner, Paul; Hogg, Ian D; Holdaway, Robert J; Lear, Gavin; Makiola, Andreas; Morales, Sergio E; Powell, Jeff R; Weaver, Louise
2018-05-26
DNA based techniques are increasingly used for measuring the biodiversity (species presence, identity, abundance and community composition) of terrestrial and aquatic ecosystems. While there are numerous reviews of molecular methods and bioinformatic steps, there has been little consideration of the methods used to collect samples upon which these later steps are based. This represents a critical knowledge gap, as methodologically sound field sampling is the foundation for subsequent analyses. We reviewed field sampling methods used for metabarcoding studies of both terrestrial and freshwater ecosystem biodiversity over a nearly three-year period (n = 75). We found that 95% (n = 71) of these studies used subjective sampling methods, inappropriate field methods, and/or failed to provide critical methodological information. It would be possible for researchers to replicate only 5% of the metabarcoding studies in our sample, a poorer level of reproducibility than for ecological studies in general. Our findings suggest greater attention to field sampling methods and reporting is necessary in eDNA-based studies of biodiversity to ensure robust outcomes and future reproducibility. Methods must be fully and accurately reported, and protocols developed that minimise subjectivity. Standardisation of sampling protocols would be one way to help to improve reproducibility, and have additional benefits in allowing compilation and comparison of data from across studies.
Mixed Methods Sampling: A Typology with Examples
ERIC Educational Resources Information Center
Teddlie, Charles; Yu, Fen
2007-01-01
This article presents a discussion of mixed methods (MM) sampling techniques. MM sampling involves combining well-established qualitative and quantitative techniques in creative ways to answer research questions posed by MM research designs. Several issues germane to MM sampling are presented including the differences between probability and…
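As a rough illustration of one such combination, a probability (simple random) first stage followed by a purposive extreme-case second stage, here is a minimal Python sketch; all names, scores, and parameters are hypothetical, not from the article:

```python
import random

def mixed_methods_sample(units, scores, n_random, n_extreme, seed=0):
    """Two-stage mixed-methods sampling sketch:
    (1) probability stage: simple random sample of units for the quantitative strand;
    (2) purposive stage: extreme cases (lowest/highest scores) for the qualitative strand."""
    rng = random.Random(seed)
    quant_sample = rng.sample(units, n_random)               # probability sampling
    ranked = sorted(quant_sample, key=lambda u: scores[u])   # order by outcome score
    qual_sample = ranked[:n_extreme] + ranked[-n_extreme:]   # extreme-case selection
    return quant_sample, qual_sample

# hypothetical example: 100 schools with a numeric performance score
units = [f"school_{i}" for i in range(100)]
scores = {u: i for i, u in enumerate(units)}
quant, qual = mixed_methods_sample(units, scores, n_random=30, n_extreme=3)
```

The qualitative cases are drawn from within the probability sample, one of several sequencing options the MM sampling literature describes.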
40 CFR 761.272 - Chemical extraction and analysis of samples.
Code of Federal Regulations, 2012 CFR
2012-07-01
... samples. 761.272 Section 761.272 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... COMMERCE, AND USE PROHIBITIONS Cleanup Site Characterization Sampling for PCB Remediation Waste in... composite samples of PCB remediation waste. Use Method 8082 from SW-846, or a method validated under subpart...
40 CFR 761.272 - Chemical extraction and analysis of samples.
Code of Federal Regulations, 2011 CFR
2011-07-01
... samples. 761.272 Section 761.272 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... COMMERCE, AND USE PROHIBITIONS Cleanup Site Characterization Sampling for PCB Remediation Waste in... composite samples of PCB remediation waste. Use Method 8082 from SW-846, or a method validated under subpart...
40 CFR 761.272 - Chemical extraction and analysis of samples.
Code of Federal Regulations, 2013 CFR
2013-07-01
... samples. 761.272 Section 761.272 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... COMMERCE, AND USE PROHIBITIONS Cleanup Site Characterization Sampling for PCB Remediation Waste in... composite samples of PCB remediation waste. Use Method 8082 from SW-846, or a method validated under subpart...
40 CFR 761.272 - Chemical extraction and analysis of samples.
Code of Federal Regulations, 2014 CFR
2014-07-01
... samples. 761.272 Section 761.272 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... COMMERCE, AND USE PROHIBITIONS Cleanup Site Characterization Sampling for PCB Remediation Waste in... composite samples of PCB remediation waste. Use Method 8082 from SW-846, or a method validated under subpart...
A Comparison of Two Sampling Strategies to Assess Discomycete Diversity in Wet Tropical Forests
SHARON A. CANTRELL
2004-01-01
Most of the fungal diversity studies that have used a systematic collecting scheme have not included the discomycetes, so optimal sampling methods are not available for this group. In this study, I tested two sampling methods at each of two sites, in the Caribbean National Forest, Puerto Rico, and the Ebano Verde Reserve, Dominican Republic. For a plot-based sampling method, 10 Ã...
Zhang, L; Liu, X J
2016-06-03
With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, existing expression estimation methods usually process each RNA-seq sample separately, ignoring the fact that read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parametric model to capture the general tendency of non-uniform read distribution for all genes across multiple samples. Additionally, our method adds structured sparse regularization, which not only incorporates the sparse relationship between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples and produced more accurate isoform expression estimates, and thus more meaningful biological interpretations.
Taylor, Vivien F; Toms, Andrew; Longerich, Henry P
2002-01-01
The application of open vessel focused microwave acid digestion is described for the preparation of geological and environmental samples for analysis using inductively coupled plasma-mass spectrometry (ICP-MS). The method is compared to conventional closed-vessel high pressure methods, which are limited in the use of HF to break down silicates. Open-vessel acid digestion more conveniently enables the use of HF to remove Si from geological and plant samples as volatile SiF4, as well as evaporation-to-dryness and sequential acid addition during the procedure. Rock reference materials (G-2 granite, MRG-1 gabbro, SY-2 syenite, JA-1 andesite, and JB-2 and SRM-688 basalts) and plant reference materials (BCR and IAEA lichens, peach leaves, apple leaves, durum wheat flour, and pine needles) were digested with results comparable to conventional hotplate digestion. The microwave digestion method gave poor results for granitic samples containing refractory minerals; fusion was therefore the preferred method of preparation for these samples. Sample preparation time was reduced from several days, using the conventional hotplate digestion method, to one hour per sample using our microwave method.
Ikebe, Jinzen; Umezawa, Koji; Higo, Junichi
2016-03-01
Molecular dynamics (MD) simulations using all-atom and explicit solvent models provide valuable information on the detailed behavior of protein-partner substrate binding at the atomic level. As the power of computational resources increase, MD simulations are being used more widely and easily. However, it is still difficult to investigate the thermodynamic properties of protein-partner substrate binding and protein folding with conventional MD simulations. Enhanced sampling methods have been developed to sample conformations that reflect equilibrium conditions in a more efficient manner than conventional MD simulations, thereby allowing the construction of accurate free-energy landscapes. In this review, we discuss these enhanced sampling methods using a series of case-by-case examples. In particular, we review enhanced sampling methods conforming to trivial trajectory parallelization, virtual-system coupled multicanonical MD, and adaptive lambda square dynamics. These methods have been recently developed based on the existing method of multicanonical MD simulation. Their applications are reviewed with an emphasis on describing their practical implementation. In our concluding remarks we explore extensions of the enhanced sampling methods that may allow for even more efficient sampling.
A sequential bioequivalence design with a potential ethical advantage.
Fuglsang, Anders
2014-07-01
This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.
Blekhman, Ran; Tang, Karen; Archie, Elizabeth A; Barreiro, Luis B; Johnson, Zachary P; Wilson, Mark E; Kohn, Jordan; Yuan, Michael L; Gesquiere, Laurence; Grieneisen, Laura E; Tung, Jenny
2016-08-16
Field studies of wild vertebrates are frequently associated with extensive collections of banked fecal samples-unique resources for understanding ecological, behavioral, and phylogenetic effects on the gut microbiome. However, we do not understand whether sample storage methods confound the ability to investigate interindividual variation in gut microbiome profiles. Here, we extend previous work on storage methods for gut microbiome samples by comparing immediate freezing, the gold standard of preservation, to three methods commonly used in vertebrate field studies: lyophilization, storage in ethanol, and storage in RNAlater. We found that the signature of individual identity consistently outweighed storage effects: alpha diversity and beta diversity measures were significantly correlated across methods, and while samples often clustered by donor, they never clustered by storage method. Provided that all analyzed samples are stored the same way, banked fecal samples therefore appear highly suitable for investigating variation in gut microbiota. Our results open the door to a much-expanded perspective on variation in the gut microbiome across species and ecological contexts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, S.; Jones, V.
2009-05-27
A new rapid separation method that allows separation and preconcentration of actinides in urine samples was developed for the measurement of longer lived actinides by inductively coupled plasma mass spectrometry (ICP-MS) and short-lived actinides by alpha spectrometry; a hybrid approach. This method uses stacked extraction chromatography cartridges and vacuum box technology to facilitate rapid separations. Preconcentration, if required, is performed using a streamlined calcium phosphate precipitation. Similar technology has been applied to separate actinides prior to measurement by alpha spectrometry, but this new method has been developed with elution reagents now compatible with ICP-MS as well. Purified solutions are split between ICP-MS and alpha spectrometry so that long- and short-lived actinide isotopes can be measured successfully. The method allows for simultaneous extraction of 24 samples (including QC samples) in less than 3 h. Simultaneous sample preparation can offer significant time savings over sequential sample preparation. For example, sequential sample preparation of 24 samples taking just 15 min each requires 6 h to complete. The simplicity and speed of this new method makes it attractive for radiological emergency response. If preconcentration is applied, the method is applicable to larger sample aliquots for occupational exposures as well. The chemical recoveries are typically greater than 90%, in contrast to other reported methods using flow injection separation techniques for urine samples where plutonium yields were 70-80%. This method allows measurement of both long-lived and short-lived actinide isotopes. 239Pu, 242Pu, 237Np, 243Am, 234U, 235U and 238U were measured by ICP-MS, while 236Pu, 238Pu, 239Pu, 241Am, 243Am and 244Cm were measured by alpha spectrometry.
The method can also be adapted so that the separation of uranium isotopes for assay is not required, if uranium assay by direct dilution of the urine sample is preferred instead. Multiple vacuum box locations may be set up to supply several ICP-MS units with purified sample fractions such that a high sample throughput may be achieved, while still allowing for rapid measurement of short-lived actinides by alpha spectrometry.
An evaluation of methods for estimating decadal stream loads
NASA Astrophysics Data System (ADS)
Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-11-01
Effective management of water resources requires accurate information on the mass, or load of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. 
Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
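Beale's ratio estimator, one of the better-performing methods above, has a standard bias-corrected form: the total load is the period-total flow times the sampled load-to-flow ratio, with a finite-sample correction. A sketch under the usual textbook formulation (variable names and data are illustrative, not from the study):

```python
import numpy as np

def beale_ratio_load(sample_conc, sample_flow, all_flow):
    """Beale's bias-corrected ratio estimator of total constituent load.
    sample_conc, sample_flow: concentration and flow on the n sampled days;
    all_flow: daily flow for every day in the estimation period."""
    l = np.asarray(sample_conc, float) * np.asarray(sample_flow, float)  # sampled daily loads
    q = np.asarray(sample_flow, float)
    n = len(q)
    lbar, qbar = l.mean(), q.mean()
    s_lq = ((l - lbar) * (q - qbar)).sum() / (n - 1)   # load-flow sample covariance
    s_qq = ((q - qbar) ** 2).sum() / (n - 1)           # flow sample variance
    # bias-corrected ratio of load to flow
    ratio = (lbar / qbar) * (1 + s_lq / (n * lbar * qbar)) / (1 + s_qq / (n * qbar ** 2))
    return ratio * np.sum(all_flow)                    # scale by total flow over the period
```

When concentration is constant, the correction factors cancel and the estimator recovers the exact load, which is a useful sanity check.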
Hoffman, G.L.; Fishman, M. J.; Garbarino, J.R.
1996-01-01
Water samples for trace-metal determinations routinely have been prepared in open laboratories. For example, the U.S. Geological Survey method I-3485-85 (Extraction Procedure for Water-Suspended Sediment) is performed in a laboratory hood on a laboratory bench without any special precautions to control airborne contamination. This method tends to be contamination prone for several trace metals, primarily because the samples are transferred, acidified, digested, and filtered in an open laboratory environment. To reduce trace-metal contamination of digested water samples, procedures were established that rely on minimizing sample-transfer steps and using a class-100 clean bench during sample filtration. This new procedure involves the following steps: 1. The sample is acidified with HCl directly in the original water-sample bottle. 2. The water-sample bottle with the cap secured is heated in a laboratory oven. 3. The digestate is filtered in a class-100 laminar-flow clean bench. The exact conditions used (that is, oven temperature, time of heating, and filtration methods) for this digestion procedure are described. Comparisons between the previous U.S. Geological Survey open-beaker method I-3485-85 and the new in-bottle procedure for synthetic and field-collected water samples are given. When the new procedure is used, blank concentrations for most trace metals determined are reduced significantly.
Mellerup, Anders; Ståhl, Marie
2015-01-01
The aim of this article was to define the sampling level and method combination that captures antibiotic resistance at pig herd level, utilizing qPCR antibiotic resistance gene quantification and culture-based quantification of antibiotic-resistant coliform indicator bacteria. Fourteen qPCR assays for commonly detected antibiotic resistance genes were developed and used to quantify antibiotic resistance genes in total DNA from swine fecal samples that were obtained using different sampling and pooling methods. In parallel, the number of antibiotic-resistant coliform indicator bacteria was determined in the same swine fecal samples. The results showed that the qPCR assays were capable of detecting differences in antibiotic resistance levels in individual animals that coliform colony-forming unit (CFU) counts could not. The qPCR assays also more accurately quantified antibiotic resistance genes when comparing individual sampling and pooling methods. qPCR on pooled samples was found to be a good representative of the general resistance level in a pig herd compared to the coliform CFU counts; it had significantly reduced relative standard deviations compared to coliform CFU counts in the same samples, so differences in antibiotic resistance levels between samples were more readily detected. To our knowledge, this is the first study to describe sampling and pooling methods for qPCR quantification of antibiotic resistance genes in total DNA extracted from swine feces. PMID:26114765
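The relative standard deviation comparison described above is simple to reproduce; the sketch below uses hypothetical gene copy-number data (the values are illustrative, not from the study) to show why pooled samples, which average out animal-to-animal variation, yield a lower RSD:

```python
import numpy as np

def rsd_percent(x):
    """Relative standard deviation (coefficient of variation), in percent."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# hypothetical resistance-gene copy numbers per gram of feces
pooled     = [9.8e4, 1.02e5, 9.9e4, 1.01e5]   # pooled samples: tight spread
individual = [2.1e4, 1.8e5, 9.5e4, 3.0e5]     # individual animals: wide spread

pooled_rsd, individual_rsd = rsd_percent(pooled), rsd_percent(individual)
```

A lower RSD between replicate measurements means smaller between-sample differences are statistically detectable, which is the article's argument for pooling.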
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier-method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
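The core arithmetic of the multiplier method (N = M / P) and a delta-method confidence interval of the kind the authors' variance analysis implies can be sketched as follows; the design-effect value and the example numbers are illustrative assumptions, not figures from the study:

```python
import math

def multiplier_estimate(M, p_hat, n, design_effect=2.0, z=1.96):
    """Multiplier-method population size estimate with a delta-method CI.
    M: unique objects distributed (or service users counted);
    p_hat: survey proportion reporting receipt; n: survey sample size;
    design_effect: variance inflation assumed for the RDS survey."""
    N_hat = M / p_hat
    var_p = design_effect * p_hat * (1 - p_hat) / n    # inflated variance of p_hat
    se_N = M * math.sqrt(var_p) / p_hat ** 2           # delta method: |dN/dp| = M / p^2
    return N_hat, (N_hat - z * se_N, N_hat + z * se_N)

# e.g. 1,000 unique keyrings distributed; 25% of 500 surveyed women report one
N, (lo, hi) = multiplier_estimate(M=1000, p_hat=0.25, n=500)
```

Because the standard error scales with 1/P^2, a small P widens the interval sharply, which matches the article's advice to push P upward (longer reference periods, more objects distributed).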
Systems and methods for self-synchronized digital sampling
NASA Technical Reports Server (NTRS)
Samson, Jr., John R. (Inventor)
2008-01-01
Systems and methods for self-synchronized data sampling are provided. In one embodiment, a system for capturing synchronous data samples is provided. The system includes an analog to digital converter adapted to capture signals from one or more sensors and convert the signals into a stream of digital data samples at a sampling frequency determined by a sampling control signal; and a synchronizer coupled to the analog to digital converter and adapted to receive a rotational frequency signal from a rotating machine, wherein the synchronizer is further adapted to generate the sampling control signal, and wherein the sampling control signal is based on the rotational frequency signal.
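The synchronizer's role, deriving the sampling-control frequency from the machine's rotational frequency so that every revolution yields a fixed number of samples, can be illustrated with a small sketch; the samples-per-revolution figure is an arbitrary assumption, not a value from the patent:

```python
def sampling_frequency_hz(rotation_hz, samples_per_rev=64):
    """Sampling-clock frequency locked to the rotational frequency signal,
    so each revolution produces exactly samples_per_rev ADC samples."""
    return rotation_hz * samples_per_rev

def sample_times(rotation_hz, n_revs, samples_per_rev=64):
    """Timestamps (seconds) of the synchronous samples over n_revs revolutions,
    assuming a constant rotational frequency."""
    fs = sampling_frequency_hz(rotation_hz, samples_per_rev)
    return [i / fs for i in range(int(n_revs * samples_per_rev))]
```

The point of such self-synchronization is that samples stay phase-aligned with the shaft even as speed drifts, since the clock tracks the tachometer rather than a fixed oscillator.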
Huffman, Raegan L.
2002-01-01
Ground-water samples were collected in April 1999 at Naval Air Station Whidbey Island, Washington, with passive diffusion samplers and a submersible pump to compare concentrations of volatile organic compounds (VOCs) in water samples collected using the two sampling methods. Single diffusion samplers were installed in wells with 10-foot screened intervals, and multiple diffusion samplers were installed in wells with 20- to 40-foot screened intervals. The diffusion samplers were recovered after 20 days and the wells were then sampled using a submersible pump. VOC concentrations in the 10-foot screened wells in water samples collected with diffusion samplers closely matched concentrations in samples collected with the submersible pump. Analysis of VOC concentrations in samples collected from the 20- to 40-foot screened wells with multiple diffusion samplers indicated vertical concentration variation within the screened interval, whereas the analysis of VOC concentrations in samples collected with the submersible pump indicated mixing during pumping. The results obtained using the two sampling methods indicate that the samples collected with the diffusion samplers were comparable with and can be considerably less expensive than samples collected using a submersible pump.
A METHODS COMPARISON FOR COLLECTING MACROINVERTEBRATES IN THE OHIO RIVER
Collection of representative benthic macroinvertebrate samples from large rivers has been challenging researchers for many years. The objective of our study was to develop an appropriate method(s) for sampling macroinvertebrates from the Ohio River. Four existing sampling metho...
John C. Brissette; Mark J. Ducey; Jeffrey H. Gove
2003-01-01
We field tested a new method for sampling down coarse woody material (CWM) using an angle gauge and compared it with the more traditional line intersect sampling (LIS) method. Permanent sample locations in stands managed with different silvicultural treatments within the Penobscot Experimental Forest (Maine, USA) were used as the sampling locations. Point relascope...
Mavridou, A; Smeti, E; Mandilara, G; Boufa, P; Vagiona-Arvanitidou, M; Vantarakis, A; Vassilandonopoulou, G; Pappa, O; Roussia, V; Tzouanopoulos, A; Livadara, M; Aisopou, I; Maraka, V; Nikolaou, E
2010-01-01
In this study, ten laboratories in Greece compared the performance of the reference method, TTC Tergitol 7 agar (with the additional test of beta-glucuronidase production), with five alternative methods to detect E. coli in water, in line with European Water Directive recommendations. The samples were prepared by spiking drinking water with sewage effluent following a standard protocol. Chlorinated and non-chlorinated samples were used. The statistical analysis was based on the mean relative difference of confirmed counts and was performed in line with ISO 17994. The results showed that, in total, three of the alternative methods (Chromocult Coliform agar, Membrane Lauryl Sulfate agar and Tryptone Bile X-glucuronidase (TBX) medium) were not different from TTC Tergitol 7 agar (TTC Tergitol 7 agar vs Chromocult Coliform agar, 294 samples, mean RD% 5.55; vs MLSA, 302 samples, mean RD% 1; vs TBX, 297 samples, mean RD% -2.78). The other two alternative methods (Membrane Faecal coliform medium and Colilert-18/Quantitray) gave significantly higher counts than TTC Tergitol 7 agar (TTC Tergitol 7 agar vs MFc, 303 samples, mean RD% 8.81; vs Colilert-18/Quantitray, 76 samples, mean RD% 18.91). In other words, the alternative methods generated performance that was as reliable as, or even better than, the reference method. This study will help laboratories in Greece overcome culture and counting problems deriving from the EU reference method for E. coli counts in water samples.
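The mean relative difference statistic used above is commonly computed from the log-ratio of paired confirmed counts. A sketch of one common ISO 17994-style formulation follows; this is an assumption about the calculation, and the full standard includes uncertainty bounds and sample-size checks not shown here:

```python
import math

def mean_relative_difference(alt_counts, ref_counts):
    """Mean relative difference (RD%) between paired confirmed counts,
    taken here as 100 * ln(alternative/reference), averaged over the
    samples with non-zero counts under both methods."""
    diffs = [100.0 * math.log(a / b)
             for a, b in zip(alt_counts, ref_counts) if a > 0 and b > 0]
    return sum(diffs) / len(diffs)
```

A positive RD% means the alternative method recovers higher counts on average, which is how entries like "mean RD% 18.91" for Colilert-18 should be read.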
Yi, Ming; Stephens, Robert M.
2008-01-01
Analysis of microarray and other high-throughput data often involves identification of genes consistently up- or down-regulated across samples as the first step in extracting biological meaning. This gene-level paradigm can be limited by valid sample fluctuations and biological complexity. In this report, we describe a novel method, SLEPR, which eliminates this limitation by relying on pathway-level consistencies. Our method first selects the sample-level differentiated genes from each individual sample, capturing genes missed by other analysis methods, then ascertains the enrichment levels of associated pathways from each of those lists, and finally ranks annotated pathways based on the consistency of enrichment levels of individual samples from both sample classes. As a proof of concept, we used this method to analyze three public microarray datasets, with a direct comparison to the GSEA method, one of the most popular pathway-level analysis methods in the field. We found that our method reproduced the earlier observations with significant improvements in depth of coverage for validated or expected biological themes, and also produced additional insights that make biological sense. This new method extends existing analysis approaches and facilitates integration of different types of high-throughput data. PMID:18818771
Flotemersch, Joseph E; North, Sheila; Blocksom, Karen A
2014-02-01
Benthic macroinvertebrates are sampled in streams and rivers as one of the assessment elements of the US Environmental Protection Agency's National Rivers and Streams Assessment. In a 2006 report, the recommendation was made that different yet comparable methods be evaluated for different types of streams (e.g., low gradient vs. high gradient). Consequently, a research element was added to the 2008-2009 National Rivers and Streams Assessment to conduct a side-by-side comparison of the standard macroinvertebrate sampling method with an alternate method specifically designed for low-gradient wadeable streams and rivers that focused more on stream edge habitat. Samples were collected using each method at 525 sites in five of nine aggregate ecoregions located in the conterminous USA. Methods were compared using the benthic macroinvertebrate multimetric index developed for the 2006 Wadeable Streams Assessment. Statistical analysis did not reveal any trends that would suggest the overall assessment of low-gradient streams on a regional or national scale would change if the alternate method was used rather than the standard sampling method, regardless of the gradient cutoff used to define low-gradient streams. Based on these results, the National Rivers and Streams Survey should continue to use the standard field method for sampling all streams.
Integrating conventional and inverse representation for face recognition.
Xu, Yong; Li, Xuelong; Yang, Jian; Lai, Zhihui; Zhang, David
2014-10-01
Representation-based classification methods are all built on the conventional representation, which first expresses the test sample as a linear combination of the training samples and then exploits the deviation between the test sample and the representation result of each class to perform classification. However, this deviation does not always reflect the difference between the test sample and each class well. In this paper, we propose a novel representation-based classification method for face recognition that integrates conventional and inverse representation-based classification to better recognize the face. It first produces the conventional representation of the test sample, i.e., uses a linear combination of the training samples to represent the test sample. It then obtains the inverse representation, i.e., provides an approximate representation of each training sample of a subject by exploiting the test sample and the training samples of the other subjects. Finally, the proposed method exploits the conventional and inverse representations to generate two kinds of scores of the test sample with respect to each class and combines them to recognize the face. The paper presents the theoretical foundation and rationale of the proposed method. Moreover, this paper shows for the first time that a basic property of the human face, its symmetry, can be exploited to generate new training and test samples. Because these new samples reflect plausible appearances of the face, using them enables higher accuracy. The experiments show that the proposed conventional and inverse representation-based linear regression classification (CIRLRC), an improvement to linear regression classification (LRC), can obtain very high accuracy and greatly outperforms naive LRC and other state-of-the-art conventional representation-based face recognition methods. The accuracy of CIRLRC can be 10% greater than that of LRC.
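The underlying LRC step, representing the test sample with each class's training samples and assigning the class with the smallest reconstruction residual, can be sketched as follows. This shows plain LRC only, not the full CIRLRC combination, and the toy data are illustrative:

```python
import numpy as np

def lrc_classify(test, train_by_class):
    """Linear regression classification (LRC): least-squares-fit the test
    sample on each class's training columns and pick the class whose
    reconstruction has the smallest residual norm."""
    best_class, best_res = None, np.inf
    for label, X in train_by_class.items():          # X: (n_features, n_samples_c)
        beta, *_ = np.linalg.lstsq(X, test, rcond=None)
        residual = np.linalg.norm(test - X @ beta)   # deviation from class subspace
        if residual < best_res:
            best_class, best_res = label, residual
    return best_class

# toy demo: class "A" spans the first two axes, class "B" the third
X_A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
X_B = np.array([[0.0], [0.0], [1.0]])
label = lrc_classify(np.array([0.5, 0.5, 0.0]), {"A": X_A, "B": X_B})
```

CIRLRC adds a second, inverse score per class and fuses the two; the residual computed here is the "conventional" score the abstract refers to.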
GEOSTATISTICAL SAMPLING DESIGNS FOR HAZARDOUS WASTE SITES
This chapter discusses field sampling design for environmental sites and hazardous waste sites with respect to random variable sampling theory, Gy's sampling theory, and geostatistical (kriging) sampling theory. The literature often presents these sampling methods as an adversari...
Photoacoustic sample vessel and method of elevated pressure operation
Autrey, Tom; Yonker, Clement R.
2004-05-04
An improved photoacoustic vessel and method of photoacoustic analysis. The photoacoustic sample vessel comprises an acoustic detector, an acoustic couplant, and an acoustic coupler having a chamber for holding the acoustic couplant and a sample. The acoustic couplant is selected from the group consisting of liquid, solid, and combinations thereof. Passing electromagnetic energy through the sample generates an acoustic signal within the sample, whereby the acoustic signal propagates through the sample to and through the acoustic couplant to the acoustic detector.
Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael
2014-01-01
Objective: Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting: We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period prior to influenza circulation. Results: Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion: Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144
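The reweighting idea can be illustrated with a simple poststratification-style sketch (our own hypothetical rendering, not the study's code): each validation record is upweighted so that, within every stratum, the validation sample stands in for the full sample.

```python
import numpy as np

def validation_weights(in_validation, strata):
    """Reweight validation-sample members to represent the full sample:
    each validation record in stratum s gets weight
    (full-sample count in s) / (validation-sample count in s)."""
    strata = np.asarray(strata)
    in_val = np.asarray(in_validation, bool)
    w = np.zeros(len(strata))
    for s in np.unique(strata):
        m = strata == s
        n_full, n_val = m.sum(), (m & in_val).sum()
        if n_val:
            w[m & in_val] = n_full / n_val
    return w
```

Weighted analyses of the validation cohort (which carries the richer confounder data) then approximate what the full-sample analysis would have given had those confounders been measured everywhere.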
Densitometry By Acoustic Levitation
NASA Technical Reports Server (NTRS)
Trinh, Eugene H.
1989-01-01
"Static" and "dynamic" methods developed for measuring mass density of acoustically levitated solid particle or liquid drop. "Static" method, unknown density of sample found by comparison with another sample of known density. "Dynamic" method practiced with or without gravitational field. Advantages over conventional density-measuring techniques: sample does not have to make contact with container or other solid surface, size and shape of samples do not affect measurement significantly, sound field does not have to be know in detail, and sample can be smaller than microliter. Detailed knowledge of acoustic field not necessary.
Mixture and method for simulating soiling and weathering of surfaces
Sleiman, Mohamad; Kirchstetter, Thomas; Destaillats, Hugo; Levinson, Ronnen; Berdahl, Paul; Akbari, Hashem
2018-01-02
This disclosure provides systems, methods, and apparatus related to simulated soiling and weathering of materials. In one aspect, a soiling mixture may include an aqueous suspension of various amounts of salt, soot, dust, and humic acid. In another aspect, a method may include weathering a sample of material in a first exposure of the sample to ultraviolet light, water vapor, and elevated temperatures, depositing a soiling mixture on the sample, and weathering the sample in a second exposure of the sample to ultraviolet light, water vapor, and elevated temperatures.
Deterministic multidimensional nonuniform gap sampling.
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
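The relationship between the random scheme and its deterministic counterpart can be sketched as follows. This is a simplified, hypothetical rendering of sinusoidally weighted Poisson-gap sampling, not the authors' code: in the random version each gap is a Poisson deviate of a sinusoidally growing mean, and the deterministic variant simply replaces each deviate by its rounded mean.

```python
import math
import numpy as np

def gap_weight(i, n_grid, avg_gap):
    # sinusoidal weighting: small gaps early on the grid, large gaps late
    return avg_gap * math.sin((i + 0.5) * math.pi / (2 * n_grid))

def poisson_gap_schedule(n_grid, avg_gap, seed=0):
    """Random scheme: gaps are Poisson deviates of the weighted mean."""
    rng = np.random.default_rng(seed)
    points, i = [], 0
    while i < n_grid:
        points.append(i)
        i += 1 + int(rng.poisson(gap_weight(i, n_grid, avg_gap)))
    return points

def deterministic_gap_schedule(n_grid, avg_gap):
    """Deterministic counterpart: replace each deviate by its rounded mean."""
    points, i = [], 0
    while i < n_grid:
        points.append(i)
        i += 1 + round(gap_weight(i, n_grid, avg_gap))
    return points
```

The deterministic schedule is reproducible run-to-run, which is the property the paper highlights over fully random draws.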
Henderson, Gemma; Cox, Faith; Kittelmann, Sandra; Miri, Vahideh Heidarian; Zethof, Michael; Noel, Samantha J.; Waghorn, Garry C.; Janssen, Peter H.
2013-01-01
Molecular microbial ecology techniques are widely used to study the composition of the rumen microbiota and to increase understanding of the roles they play. Therefore, sampling and DNA extraction methods that result in adequate yields of microbial DNA that also accurately represents the microbial community are crucial. Fifteen different methods were used to extract DNA from cow and sheep rumen samples. The DNA yield and quality, and its suitability for downstream PCR amplifications varied considerably, depending on the DNA extraction method used. DNA extracts from nine extraction methods that passed these first quality criteria were evaluated further by quantitative PCR enumeration of microbial marker loci. Absolute microbial numbers, determined on the same rumen samples, differed by more than 100-fold, depending on the DNA extraction method used. The apparent compositions of the archaeal, bacterial, ciliate protozoal, and fungal communities in identical rumen samples were assessed using 454 Titanium pyrosequencing. Significant differences in microbial community composition were observed between extraction methods, for example in the relative abundances of members of the phyla Bacteroidetes and Firmicutes. Microbial communities in parallel samples collected from cows by oral stomach-tubing or through a rumen fistula, and in liquid and solid rumen digesta fractions, were compared using one of the DNA extraction methods. Community representations were generally similar, regardless of the rumen sampling technique used, but significant differences in the abundances of some microbial taxa such as the Clostridiales and the Methanobrevibacter ruminantium clade were observed. The apparent microbial community composition differed between rumen sample fractions, and Prevotellaceae were most abundant in the liquid fraction. DNA extraction methods that involved phenol-chloroform extraction and mechanical lysis steps tended to be more comparable. 
However, comparison of data from studies in which different sampling techniques, different rumen sample fractions or different DNA extraction methods were used should be avoided. PMID:24040342
Sampling strategies for estimating brook trout effective population size
Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher
2012-01-01
The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...
40 CFR 761.289 - Compositing samples.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Remediation Waste and Porous Surfaces in Accordance With § 761.61(a)(6) § 761.289 Compositing samples. Compositing is a method of combining several samples of a specific type of bulk PCB remediation waste or... compositing bulk PCB remediation waste samples. These procedures are based on the method for selecting...
40 CFR 761.289 - Compositing samples.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Remediation Waste and Porous Surfaces in Accordance With § 761.61(a)(6) § 761.289 Compositing samples. Compositing is a method of combining several samples of a specific type of bulk PCB remediation waste or... compositing bulk PCB remediation waste samples. These procedures are based on the method for selecting...
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-04-01
In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. 
Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.
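The method-of-moments estimator referred to above is the classical Matheron estimator, gamma(h) = 1/(2 N(h)) * sum over pairs at lag h of (z_i - z_j)^2. A minimal sketch (our own illustration, not the study's code):

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Method-of-moments (Matheron) empirical variogram:
    for each distance bin, half the mean squared difference of
    all point pairs whose separation falls in that bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)   # count each pair once
    d, sq = d[iu], sq[iu]
    gamma, counts = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        m = (d >= lo) & (d < hi)
        counts.append(m.sum())
        gamma.append(sq[m].mean() / 2 if m.any() else np.nan)
    return np.array(gamma), np.array(counts)
```

Because every squared difference enters the average, a few heavy outliers can dominate this estimator, which is why the study compares it against robust and likelihood-based alternatives.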
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. 
Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.
A Novel Method to Handle the Effect of Uneven Sampling Effort in Biodiversity Databases
Pardo, Iker; Pata, María P.; Gómez, Daniel; García, María B.
2013-01-01
How reliable are results on the spatial distribution of biodiversity based on databases? Many studies have evidenced the uncertainty related to this kind of analysis due to sampling effort bias and the need for its quantification. Although a number of methods are available for this quantification, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent capability to discriminate between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in a simulated scenario, whereas sampling completeness estimated with the other methods showed no relationship due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and therefore it can be efficiently applied to most databases in order to enhance the reliability of biodiversity analyses. PMID:23326357
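The species accumulation curves that all three compared methods build on can be computed by averaging, over random orderings of the sampling units, the cumulative number of distinct species observed. A minimal sketch (our own illustration, not the authors' implementation):

```python
import numpy as np

def accumulation_curve(samples, n_perm=200, seed=0):
    """Mean species accumulation curve: average over random orderings
    of the sampling units of the number of distinct species seen after
    1, 2, ... units. `samples` is a list of per-unit species sets."""
    rng = np.random.default_rng(seed)
    n = len(samples)
    curve = np.zeros(n)
    for _ in range(n_perm):
        seen = set()
        for i, j in enumerate(rng.permutation(n)):
            seen |= samples[j]
            curve[i] += len(seen)
    return curve / n_perm
```

A curve that is still rising steeply at the last unit signals an under-sampled area; completeness indices such as those compared here summarize how close the curve is to its asymptote.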
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
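Multiple importance sampling combinations of this kind typically use the balance heuristic, w_i = p_i / sum_k p_k. A small, self-contained sketch of a two-strategy MIS estimate of a 1-D integral (purely illustrative, unrelated to the SSAO implementation itself):

```python
import numpy as np

def mis_estimate(f, n=20000, seed=1):
    """Estimate the integral of f over [0, 1] by combining a uniform
    strategy (p1 = 1) and a linear strategy (p2(x) = 2x, sampled by
    inverse transform) with the balance heuristic."""
    rng = np.random.default_rng(seed)
    x1 = rng.random(n)                      # strategy 1: uniform
    x2 = np.sqrt(rng.random(n))             # strategy 2: p2(x) = 2x
    def w(p_self, p_other):                 # balance heuristic weight
        return p_self / (p_self + p_other)
    est1 = w(1.0, 2 * x1) * f(x1) / 1.0
    est2 = w(2 * x2, 1.0) * f(x2) / (2 * x2)
    return est1.mean() + est2.mean()
```

Because the weights sum to one at every point, the combined estimator stays unbiased while inheriting the lower variance of whichever strategy fits f better in each region.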
THE SCREENING AND RANKING ALGORITHM FOR CHANGE-POINTS DETECTION IN MULTIPLE SAMPLES
Song, Chi; Min, Xiaoyi; Zhang, Heping
2016-01-01
The chromosome copy number variation (CNV) is the deviation of genomic regions from their normal copy number states, which may be associated with many human diseases. Current genetic studies usually collect hundreds to thousands of samples to study the association between CNV and diseases. CNVs can be called by detecting the change-points in mean for sequences of array-based intensity measurements. Although multiple samples are of interest, the majority of the available CNV calling methods are single sample based. Only a few multiple sample methods have been proposed using scan statistics that are computationally intensive and designed toward either common or rare change-points detection. In this paper, we propose a novel multiple sample method by adaptively combining the scan statistic of the screening and ranking algorithm (SaRa), which is computationally efficient and is able to detect both common and rare change-points. We prove that asymptotically this method can find the true change-points with almost certainty and show in theory that multiple sample methods are superior to single sample methods when shared change-points are of interest. Additionally, we report extensive simulation studies to examine the performance of our proposed method. Finally, using our proposed method as well as two competing approaches, we attempt to detect CNVs in the data from the Primary Open-Angle Glaucoma Genes and Environment study, and conclude that our method is faster and requires less information while our ability to detect the CNVs is comparable or better. PMID:28090239
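The SaRa screening statistic for a single sequence is a local two-window mean difference; local maxima of its absolute value flag candidate change-points, which are then ranked. A minimal sketch (our own illustration; the window half-width h is a tuning parameter):

```python
import numpy as np

def sara_diagnostic(y, h):
    """SaRa-style local diagnostic: for each position x, the mean of
    the h points to the right minus the mean of the h points to the
    left. Large |D(x)| suggests a change-point in mean near x."""
    y = np.asarray(y, float)
    c = np.concatenate([[0.0], np.cumsum(y)])   # prefix sums
    n = len(y)
    x = np.arange(h, n - h)                     # valid centre positions
    left = (c[x] - c[x - h]) / h
    right = (c[x + h] - c[x]) / h
    return x, right - left
```

Using prefix sums makes the scan O(n) per sequence, which is what lets a multiple-sample combination of these statistics stay computationally cheap.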
Efficient global biopolymer sampling with end-transfer configurational bias Monte Carlo
NASA Astrophysics Data System (ADS)
Arya, Gaurav; Schlick, Tamar
2007-01-01
We develop an "end-transfer configurational bias Monte Carlo" method for efficient thermodynamic sampling of complex biopolymers and assess its performance on a mesoscale model of chromatin (oligonucleosome) at different salt conditions compared to other Monte Carlo moves. Our method extends traditional configurational bias by deleting a repeating motif (monomer) from one end of the biopolymer and regrowing it at the opposite end using the standard Rosenbluth scheme. The method's sampling efficiency compared to local moves, pivot rotations, and standard configurational bias is assessed by parameters relating to translational, rotational, and internal degrees of freedom of the oligonucleosome. Our results show that the end-transfer method is superior to the other methods in sampling every degree of freedom of the oligonucleosomes at high salt concentrations (weak electrostatics) but worse than pivot rotations in terms of internal and rotational sampling at low-to-moderate salt concentrations (strong electrostatics). Under all conditions investigated, however, the end-transfer method is several orders of magnitude more efficient than the standard configurational bias approach. This is because the characteristic sampling time of the innermost oligonucleosome motif scales quadratically with the length of the oligonucleosomes for the end-transfer method while it scales exponentially for the traditional configurational-bias method. Thus, the method we propose can significantly improve performance for global biomolecular applications, especially in condensed systems with weak nonbonded interactions and may be combined with local enhancements to improve local sampling.
NASA Astrophysics Data System (ADS)
Peselnick, L.
1982-08-01
An ultrasonic method is presented which combines features of the differential path and the phase comparison methods. The proposed differential path phase comparison method, referred to as the `hybrid' method for brevity, eliminates errors resulting from phase changes in the bond between the sample and buffer rod. Define r(P) [and R(P)] as the square of the normalized frequency for cancellation of sample waves for shear [and for compressional] waves. Define N as the number of wavelengths in twice the sample length. The pressure derivatives r'(P) and R' (P) for samples of Alcoa 2024-T4 aluminum were obtained by using the phase comparison and the hybrid methods. The values of the pressure derivatives obtained by using the phase comparison method show variations by as much as 40% for small values of N (N < 50). The pressure derivatives as determined from the hybrid method are reproducible to within ±2% independent of N. The values of the pressure derivatives determined by the phase comparison method for large N are the same as those determined by the hybrid method. Advantages of the hybrid method are (1) no pressure dependent phase shift at the buffer-sample interface, (2) elimination of deviatoric stress in the sample portion of the sample assembly with application of hydrostatic pressure, and (3) operation at lower ultrasonic frequencies (for comparable sample lengths), which eliminates detrimental high frequency ultrasonic problems. A reduction of the uncertainties of the pressure derivatives of single crystals and of low porosity polycrystals permits extrapolation of such experimental data to deeper mantle depths.
Soil sampling kit and a method of sampling therewith
Thompson, Cyril V.
1991-01-01
A soil sampling device and a sample containment device for containing a soil sample are disclosed. In addition, a method for taking a soil sample using the soil sampling device and soil sample containment device to minimize the loss of any volatile organic compounds contained in the soil sample prior to analysis is disclosed. The soil sampling device comprises two close fitting, longitudinal tubular members of suitable length, the inner tube having the outward end closed. With the inner closed tube withdrawn a selected distance, the outer tube can be inserted into the ground or other similar soft material to withdraw a sample of material for examination. The inner closed end tube controls the volume of the sample taken and also serves to eject the sample. The soil sample containment device has a sealing member which is adapted to attach to an analytical apparatus which analyzes the volatile organic compounds contained in the sample. The soil sampling device in combination with the soil sample containment device allows an operator to obtain a soil sample containing volatile organic compounds while minimizing the loss of the volatile organic compounds prior to analysis of the soil sample for the volatile organic compounds.
Almutairy, Meznah; Torng, Eric
2018-01-01
Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989
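Minimizer sampling as described here can be sketched in a few lines (our own illustration): from every window of w consecutive k-mers, keep the lexicographically smallest; fixed sampling would instead simply keep every s-th k-mer. Adjacent windows usually share their minimizer, so far fewer than one k-mer per position is stored.

```python
def minimizers(seq, k, w):
    """Minimizer sampling: for each window of w consecutive k-mers,
    keep the (position, k-mer) pair with the smallest k-mer
    (ties broken by leftmost position); return the distinct picks."""
    kmers = [(i, seq[i:i + k]) for i in range(len(seq) - k + 1)]
    picked = set()
    for j in range(len(kmers) - w + 1):
        picked.add(min(kmers[j:j + w], key=lambda t: (t[1], t[0])))
    return sorted(picked)

def fixed_sample(seq, k, s):
    """Fixed sampling: keep every s-th k-mer, regardless of content."""
    return [(i, seq[i:i + k]) for i in range(0, len(seq) - k + 1, s)]
```

The key contrast from the abstract: fixed sampling cannot sample query k-mers (every query k-mer must be looked up), while minimizer sampling can, but at the cost of more shared k-mer occurrences to process.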
Bouchard, Daniel; Wanner, Philipp; Luo, Hong; McLoughlin, Patrick W; Henderson, James K; Pirkle, Robert J; Hunkeler, Daniel
2017-10-20
The methodology of the solvent-based dissolution method used to sample gas-phase volatile organic compounds (VOC) for compound-specific isotope analysis (CSIA) was optimized to lower the method detection limits for TCE and benzene. The sampling methodology previously evaluated by [1] consists of pulling air through a solvent to dissolve and accumulate the gaseous VOC. After the sampling process, the solvent can be treated in the same way as groundwater samples for routine CSIA, by diluting an aliquot of the solvent into water to reach the required concentration of the targeted contaminant. Among the solvents tested, tetraethylene glycol dimethyl ether (TGDE) showed the best aptitude for the method. TGDE has a great affinity for TCE and benzene, hence efficiently dissolving the compounds during their transit through the solvent. The method detection limit for TCE (5 ± 1 μg/m³) and benzene (1.7 ± 0.5 μg/m³) is lower when using TGDE compared to methanol, which was used previously (385 μg/m³ for TCE and 130 μg/m³ for benzene) [2]. The method detection limit refers to the minimal gas-phase concentration in ambient air required to load sufficient VOC mass into TGDE to perform δ¹³C analysis. Due to a different analytical procedure, the method detection limit associated with δ³⁷Cl analysis was found to be 156 ± 6 μg/m³ for TCE. Furthermore, the experimental results validated the relationship between the gas-phase TCE and the progressive accumulation of dissolved TCE in the solvent during the sampling process. Accordingly, based on the air-solvent partitioning coefficient, the sampling methodology (e.g. sampling rate, sampling duration, amount of solvent) and the final TCE concentration in the solvent, the concentration of TCE in the gas phase prevailing during the sampling event can be determined.
Moreover, the possibility to analyse for TCE concentration in the solvent after sampling (or other targeted VOCs) allows the field deployment of the sampling method without the need to determine the initial gas phase TCE concentration. The simplified field deployment approach of the solvent-based dissolution method combined with the conventional analytical procedure used for groundwater samples substantially facilitates the application of CSIA to gas phase studies. Copyright © 2017 Elsevier B.V. All rights reserved.
Ball assisted device for analytical surface sampling
ElNaggar, Mariam S; Van Berkel, Gary J; Covey, Thomas R
2015-11-03
A system for sampling a surface includes a sampling probe having a housing and a socket, and a rolling sampling sphere within the socket. The housing has a sampling fluid supply conduit and a sampling fluid exhaust conduit. The sampling fluid supply conduit supplies sampling fluid to the sampling sphere. The sampling fluid exhaust conduit has an inlet opening for receiving sampling fluid carried from the surface by the sampling sphere. A surface sampling probe and a method for sampling a surface are also disclosed.
Novel methodology to isolate microplastics from vegetal-rich samples.
Herrera, Alicia; Garrido-Amador, Paloma; Martínez, Ico; Samper, María Dolores; López-Martínez, Juan; Gómez, May; Packard, Theodore T
2018-04-01
Microplastics are small plastic particles distributed globally throughout the oceans. To study them properly, all methodologies for their sampling, extraction, and measurement should be standardized. For heterogeneous samples containing sediments, animal tissues, and zooplankton, several procedures have been described. However, definitive methodologies for samples rich in algae and plant material have not yet been developed. The aim of this study was to find the best extraction protocol for vegetal-rich samples by comparing the efficacies of five previously described digestion methods and a novel density separation method. A protocol using 96% ethanol for density separation was better than the five digestion methods tested, even better than H₂O₂ digestion. As it was the most efficient, simple, safe, and inexpensive method for isolating microplastics from vegetal-rich samples, we recommend it as a standard separation method. Copyright © 2018 Elsevier Ltd. All rights reserved.
Method and system for laser-based formation of micro-shapes in surfaces of optical elements
Bass, Isaac Louis; Guss, Gabriel Mark
2013-03-05
A method of forming a surface feature extending into a sample includes providing a laser operable to emit an output beam and modulating the output beam to form a pulse train having a plurality of pulses. The method also includes a) directing the pulse train along an optical path intersecting an exposed portion of the sample at a position i and b) focusing a first portion of the plurality of pulses to impinge on the sample at the position i. Each of the plurality of pulses is characterized by a spot size at the sample. The method further includes c) ablating at least a portion of the sample at the position i to form a portion of the surface feature and d) incrementing counter i. The method includes e) repeating steps a) through d) to form the surface feature. The sample is free of a rim surrounding the surface feature.
Rapid method for sampling metals for materials identification
NASA Technical Reports Server (NTRS)
Higgins, L. E.
1971-01-01
Nondamaging process similar to electrochemical machining is useful for obtaining metal samples from places inaccessible to conventional sampling methods, or where such methods would be hazardous or would contaminate specimens. Process applies to industries where metals or metal alloys play a vital role.
Methods for making nucleotide probes for sequencing and synthesis
Church, George M; Zhang, Kun; Chou, Joseph
2014-07-08
Compositions and methods for making a plurality of probes for analyzing a plurality of nucleic acid samples are provided. Compositions and methods for analyzing a plurality of nucleic acid samples to obtain sequence information in each nucleic acid sample are also provided.
Reflexion on linear regression trip production modelling method for ensuring good model quality
NASA Astrophysics Data System (ADS)
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport modelling is important. For certain cases the conventional model still has to be used, for which a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are that the sample must be capable of representing the population characteristics and capable of producing an acceptable error at a given confidence level. These principles do not yet seem to be well understood or applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method that ensures model quality. The results of this research are as follows. Statistics provides a method for calculating the span of predicted values at a given confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R² as the principal quality measure, while sampling practice varies and does not always conform to the sampling principles. An experiment indicates that a small sample can already give an excellent R² value, and that sample composition can significantly change the model. Hence, a good R² value does not always mean good model quality. These findings lead to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R² value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
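The confidence interval of the predicted value discussed above can be computed directly for simple linear regression; a minimal sketch of the standard formula for the mean response at a point (the function name, data, and alpha level are illustrative assumptions, not the paper's code):

```python
# Sketch: confidence interval for the mean response at x0 under
# simple linear regression y = b0 + b1*x (illustrative, not from the paper).
import numpy as np
from scipy import stats

def mean_response_ci(x, y, x0, alpha=0.05):
    """Return (lo, hi) for the mean response at x0 at level 1-alpha."""
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)                 # slope, intercept
    resid = y - (b0 + b1 * x)
    s2 = resid @ resid / (n - 2)                 # residual variance
    se = np.sqrt(s2 * (1.0 / n + (x0 - x.mean()) ** 2
                       / ((x - x.mean()) ** 2).sum()))
    t = stats.t.ppf(1 - alpha / 2, df=n - 2)     # t critical value
    yhat = b0 + b1 * x0
    return yhat - t * se, yhat + t * se
```

The interval widens for small n and for x0 far from the sample mean, which is exactly why a model with a good R² on a small, poorly composed sample can still predict poorly.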
Girndt, Antje; Cockburn, Glenn; Sánchez-Tójar, Alfredo; Løvlie, Hanne; Schroeder, Julia
2017-01-01
Birds are model organisms in sperm biology. Previous work in zebra finches suggested that sperm sampled from males' faeces and ejaculates do not differ in size. Here, we tested this assumption in a captive population of house sparrows, Passer domesticus. We compared sperm length in samples from three collection techniques: female dummy, faecal, and abdominal massage samples. We found that sperm were significantly shorter in faecal than in abdominal massage samples, which was explained by shorter heads and midpieces, but not flagella. This result might indicate that faecal-sampled sperm could be less mature than sperm collected by abdominal massage. The female dummy method resulted in an insufficient number of experimental ejaculates because most males ignored it. In light of these results, we recommend abdominal massage as the preferred method for avian sperm sampling. Where avian sperm cannot be collected by abdominal massage alone, we advise controlling for the sperm sampling protocol statistically.
Metzinger, Anikó; Kovács-Széles, Eva; Almási, István; Galbács, Gábor
2014-01-01
The present study describes the development of an analytical method for the determination of cesium in biological fluid samples (human urine and blood samples) by laser-induced breakdown spectroscopy (LIBS). The developed method is based on sample presentation by liquid-to-solid conversion, enhancing the emission signal by drying the liquid into small "pockets" created in a metal support (zinc plate), and allows the analysis to be carried out on as little as 1 μL of sample volume, in a closed sample cell. Absolute detection limits on the Cs I 852.1 nm spectral line were calculated by the IUPAC 3σ method to be 6 ng in the urine sample and 27 ng in the blood serum sample. It is estimated that LIBS may be used to detect highly elevated concentration levels of Cs in fluid samples taken from people potentially exposed to surges of Cs from non-natural sources.
Pipes, W O; Minnigh, H A; Moyer, B; Troy, M A
1986-01-01
A total of 2,601 water samples from six different water systems were tested for coliform bacteria by Clark's presence-absence (P-A) test and by the membrane filter (MF) method. There was no significant difference in the fraction of samples positive for coliform bacteria for any of the systems tested. It was concluded that the two tests are equivalent for monitoring purposes. However, 152 samples were positive for coliform bacteria by the MF method but negative by the P-A test, and 132 samples were positive by the P-A test but negative by the MF method. Many of these differences for individual samples can be explained by random dispersion of bacteria in subsamples when the coliform density is low. However, 15 samples had MF counts greater than 3 and gave negative P-A results. The only apparent explanation for most of these results is that coliform bacteria were present in the P-A test bottles but did not produce acid and gas. Two other studies have reported more samples positive by Clark's P-A test than by the MF method. PMID:3532953
Efficient free energy calculations by combining two complementary tempering sampling methods.
Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun
2017-01-14
Although energy barriers can be crossed efficiently in reaction coordinate (RC) guided sampling, this type of method suffers from the difficulty of identifying the correct RCs, or from the high dimensionality of the RCs required for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height may exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden-barrier situation, here we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD, and the sampling of the remaining DOFs with lower but non-negligible barriers is enhanced by ITS. The performance of ITS-TAMD was examined on three systems with hidden barriers. In comparison with the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five-fold, even in the presence of hidden energy barriers. (2) The canonical distribution can be recovered more accurately, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of the necessary RCs can be reduced. Our work shows further potential applications of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.
Efficient free energy calculations by combining two complementary tempering sampling methods
NASA Astrophysics Data System (ADS)
Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun
2017-01-01
Although energy barriers can be crossed efficiently in reaction coordinate (RC) guided sampling, this type of method suffers from the difficulty of identifying the correct RCs, or from the high dimensionality of the RCs required for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height may exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden-barrier situation, here we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD, and the sampling of the remaining DOFs with lower but non-negligible barriers is enhanced by ITS. The performance of ITS-TAMD was examined on three systems with hidden barriers. In comparison with the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five-fold, even in the presence of hidden energy barriers. (2) The canonical distribution can be recovered more accurately, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of the necessary RCs can be reduced. Our work shows further potential applications of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-06
... Analytic Methods and Sampling Procedures for the United States National Residue Program for Meat, Poultry... implementing several multi-residue methods for analyzing samples of meat, poultry, and egg products for animal.... These modern, high-efficiency methods will conserve resources and provide useful and reliable results...
USDA-ARS?s Scientific Manuscript database
A sample preparation method was evaluated for the determination of polybrominated diphenyl ethers (PBDEs) in mussel samples, by using colorimetric and electrochemical immunoassay-based screening methods. A simple sample preparation in conjunction with a rapid screening method possesses the desired c...
An Improved Manual Method for NOx Emission Measurement.
ERIC Educational Resources Information Center
Dee, L. A.; And Others
The current manual NOx sampling and analysis method was evaluated. Improved time-integrated sampling and rapid analysis methods were developed. In the new method, the sample gas is drawn through a heated bed of uniquely active, crystalline PbO2, where NOx is quantitatively absorbed. Nitrate ion is later extracted with water and the…
Treatment of Nuclear Data Covariance Information in Sample Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Adams, Brian M.; Wieselquist, William
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned, which poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section matrices.
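One common way to draw correlated samples from an ill-conditioned covariance matrix is to repair its spectrum before factorization, since a plain Cholesky decomposition fails on (numerically) indefinite or singular matrices. A hedged sketch of that general idea, not necessarily the specific method the report develops (the clip floor is an assumption):

```python
# Sketch: multivariate-normal sampling from an ill-conditioned covariance
# by eigenvalue clipping (illustrative; not the report's algorithm).
import numpy as np

def sample_mvn_illconditioned(mean, cov, n, floor=0.0, seed=0):
    """Eigendecompose cov, clip eigenvalues below `floor`, then sample."""
    w, v = np.linalg.eigh(cov)        # symmetric eigendecomposition
    w = np.clip(w, floor, None)       # repair tiny negative eigenvalues
    L = v * np.sqrt(w)                # cov ≈ L @ L.T after clipping
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, len(mean)))
    return mean + z @ L.T
```

The clipped factor L reproduces the repaired covariance exactly, so strong correlations between energy groups survive even when the original matrix is numerically singular.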
Elimination of "memory" from sample handling and inlet system of a mass spectrometer
Chastgner, P.
1991-05-08
This paper describes a method for preparing the sample handling and inlet system of a mass spectrometer for analysis of a subsequent sample following analysis of a previous sample, comprising flushing of the system interior with supercritical CO₂ and venting of the interior. The method eliminates the effect of system "memory" on the subsequent analysis, especially following persistent samples such as xenon and krypton.
Schillaci, Michael A; Schillaci, Mario E
2009-02-01
The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
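One simple version of this idea, assuming a normal population and a Student-t treatment of the estimated standard deviation (the authors' exact formulation may differ), converts the fraction f and sample size n into a probability:

```python
# Sketch: P(|xbar - mu| <= f * sigma) for a normal population of size n,
# with sigma estimated from the sample, so a t_{n-1} law applies.
# Illustrative assumption, not necessarily the paper's exact method.
import math
from scipy import stats

def p_mean_within(f, n):
    """Probability the sample mean lies within f standard deviations
    of the true mean, for n observations."""
    return 2.0 * stats.t.cdf(f * math.sqrt(n), df=n - 1) - 1.0
```

The probability grows with both f and n, which matches the intuition that very small samples give usable estimates only when a fairly loose tolerance f is acceptable.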
The efficacy of respondent-driven sampling for the health assessment of minority populations.
Badowski, Grazyna; Somera, Lilnabeth P; Simsiman, Brayan; Lee, Hye-Ryeon; Cassel, Kevin; Yamanaka, Alisha; Ren, JunHao
2017-10-01
Respondent-driven sampling (RDS) is a relatively new network sampling technique typically employed for hard-to-reach populations. Like snowball sampling, initial respondents or "seeds" recruit additional respondents from their network of friends. Under certain assumptions, the method promises to produce a sample independent of the biases that may have been introduced by the non-random choice of "seeds." We conducted a survey on health communication in Guam's general population using the RDS method, the first survey to utilize this methodology in Guam. It was conducted in the hope of identifying a cost-efficient non-probability sampling strategy that could generate reasonable population estimates for both minority and general populations. RDS data were collected in Guam in 2013 (n=511), and population estimates were compared with 2012 BRFSS data (n=2031) and 2010 census data. The estimates were calculated using the unweighted RDS sample and the weighted sample using RDS inference methods, and compared with known population characteristics. The sample size was reached in 23 days, providing evidence that the RDS method is a viable, cost-effective data collection method which can provide reasonable population estimates. However, the results also suggest that the RDS inference methods used to reduce bias, based on self-reported estimates of network sizes, may not always work. Caution is needed when interpreting RDS study findings. For a more diverse sample, data collection should not be conducted in just one location. Fewer questions about network estimates should be asked, and more careful consideration should be given to the kind of incentives offered to participants. Copyright © 2017. Published by Elsevier Ltd.
Strelka: accurate somatic small-variant calling from sequenced tumor-normal sample pairs.
Saunders, Christopher T; Wong, Wendy S W; Swamy, Sajani; Becq, Jennifer; Murray, Lisa J; Cheetham, R Keira
2012-07-15
Whole genome and exome sequencing of matched tumor-normal sample pairs is becoming routine in cancer research. The consequent increased demand for somatic variant analysis of paired samples requires methods specialized to model this problem so as to sensitively call variants at any practical level of tumor impurity. We describe Strelka, a method for somatic SNV and small indel detection from sequencing data of matched tumor-normal samples. The method uses a novel Bayesian approach which represents continuous allele frequencies for both tumor and normal samples, while leveraging the expected genotype structure of the normal. This is achieved by representing the normal sample as a mixture of germline variation with noise, and representing the tumor sample as a mixture of the normal sample with somatic variation. A natural consequence of the model structure is that sensitivity can be maintained at high tumor impurity without requiring purity estimates. We demonstrate that the method has superior accuracy and sensitivity on impure samples compared with approaches based on either diploid genotype likelihoods or general allele-frequency tests. The Strelka workflow source code is available at ftp://strelka@ftp.illumina.com/. csaunders@illumina.com
Air sampling with solid phase microextraction
NASA Astrophysics Data System (ADS)
Martos, Perry Anthony
There is an increasing need for simple yet accurate air sampling methods. The acceptance of new air sampling methods requires compatibility with conventional chromatographic equipment, and the new methods have to be environmentally friendly and simple to use, yet with detection limits, accuracy, and precision equal to or better than those of standard methods. Solid phase microextraction (SPME) satisfies these conditions. Analyte detection limits, accuracy, and precision of analysis with SPME are typically better than with conventional air sampling methods. Yet air sampling with SPME requires no pumps or solvents, is reusable and extremely simple to use, is completely compatible with current chromatographic equipment, and requires only a small capital investment. The first SPME fiber coating used in this study was poly(dimethylsiloxane) (PDMS), a hydrophobic liquid film, used to sample a large range of airborne hydrocarbons such as benzene and octane. Quantification without an external calibration procedure is possible with this coating. The physical and chemical properties of this coating are well understood and are quite similar to those of the siloxane stationary phase used in capillary columns. The log of the analyte distribution coefficients for PDMS is linearly related to chromatographic retention indices and to the inverse of temperature. Therefore, the chromatogram from the analysis of the PDMS air sampler itself yields the calibration parameters used to quantify unknown airborne analyte concentrations (ppbv to ppmv range). The second fiber coating used in this study was PDMS/divinylbenzene (PDMS/DVB), onto which o-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine (PFBHA) was adsorbed for the on-fiber derivatization of gaseous formaldehyde (ppbv range), with and without external calibration. The oxime formed from the reaction can be detected with conventional gas chromatographic detectors.
Typical grab sampling times were as short as 5 seconds. With 300 seconds of sampling, the formaldehyde detection limit was 2.1 ppbv, better than that of any other 5-minute sampling device for formaldehyde. The first-order rate constant for product formation was used to quantify formaldehyde concentrations without a calibration curve. This spot sampler was used to sample the headspace of hair gel, particle board, plant material, and coffee grounds for formaldehyde and other carbonyl compounds, with extremely promising results. The SPME sampling devices were also used for time-weighted average sampling (30 minutes to 16 hours). Finally, the four new SPME air sampling methods were field tested with side-by-side comparisons to standard air sampling methods, demonstrating the great utility of SPME as an air sampler.
Meghdadi, Hossein; Khosravi, Azar D.; Ghadiri, Ata A.; Sina, Amir H.; Alami, Ameneh
2015-01-01
The present study aimed to examine the diagnostic utility of polymerase chain reaction (PCR) and nested PCR techniques for the detection of Mycobacterium tuberculosis (MTB) DNA in samples from patients with extrapulmonary tuberculosis (EPTB). In total, 80 formalin-fixed, paraffin-embedded (FFPE) samples, comprising 70 samples with a definite diagnosis of EPTB and 10 samples from known non-EPTB cases on the basis of histopathology examination, were included in the study. PCR amplification targeting IS6110 and the rpoB gene, and nested PCR targeting the rpoB gene, were performed on the DNAs extracted from the 80 FFPE samples. The strongly positive samples were directly sequenced. For negative samples and those with a weak band in nested-rpoB PCR, TA cloning was performed by cloning the products into a plasmid vector with subsequent sequencing. The 95% confidence intervals (CI) for the estimates of sensitivity and specificity were calculated for each method. Fourteen (20%), 34 (48.6%), and 60 (85.7%) of the 70 positive samples confirmed by histopathology were positive by rpoB-PCR, IS6110-PCR, and nested-rpoB PCR, respectively. By performing TA cloning on samples that yielded weak (n = 8) or negative (n = 10) results in the PCR methods, we were able to improve their quality for later sequencing. All samples with a weak band, and 7 out of 10 negative samples, showed strong positive results after cloning. Thus, nested-rpoB PCR cloning revealed positivity in 67 out of 70 confirmed samples (95.7%). The sensitivity of these combined methods was calculated as 95.7% in comparison with histopathology examination. The CIs for the sensitivity of the PCR methods were calculated as 11.39–31.27% for rpoB-PCR, 36.44–60.83% for IS6110-PCR, 75.29–92.93% for nested-rpoB PCR, and 87.98–99.11% for nested-rpoB PCR cloning. The 10 true EPTB-negative samples by histopathology were negative by all tested methods including cloning, and were used to calculate the specificity of the applied methods.
The CIs for the 100% specificity of each PCR method were calculated as 69.15–100%. Our results indicated that nested-rpoB PCR combined with TA cloning and sequencing is a preferred method for the detection of MTB DNA in EPTB samples, with high sensitivity and specificity, which confirms the histopathology results. PMID:26191059
Meghdadi, Hossein; Khosravi, Azar D; Ghadiri, Ata A; Sina, Amir H; Alami, Ameneh
2015-01-01
The present study aimed to examine the diagnostic utility of polymerase chain reaction (PCR) and nested PCR techniques for the detection of Mycobacterium tuberculosis (MTB) DNA in samples from patients with extrapulmonary tuberculosis (EPTB). In total, 80 formalin-fixed, paraffin-embedded (FFPE) samples, comprising 70 samples with a definite diagnosis of EPTB and 10 samples from known non-EPTB cases on the basis of histopathology examination, were included in the study. PCR amplification targeting IS6110 and the rpoB gene, and nested PCR targeting the rpoB gene, were performed on the DNAs extracted from the 80 FFPE samples. The strongly positive samples were directly sequenced. For negative samples and those with a weak band in nested-rpoB PCR, TA cloning was performed by cloning the products into a plasmid vector with subsequent sequencing. The 95% confidence intervals (CI) for the estimates of sensitivity and specificity were calculated for each method. Fourteen (20%), 34 (48.6%), and 60 (85.7%) of the 70 positive samples confirmed by histopathology were positive by rpoB-PCR, IS6110-PCR, and nested-rpoB PCR, respectively. By performing TA cloning on samples that yielded weak (n = 8) or negative (n = 10) results in the PCR methods, we were able to improve their quality for later sequencing. All samples with a weak band, and 7 out of 10 negative samples, showed strong positive results after cloning. Thus, nested-rpoB PCR cloning revealed positivity in 67 out of 70 confirmed samples (95.7%). The sensitivity of these combined methods was calculated as 95.7% in comparison with histopathology examination. The CIs for the sensitivity of the PCR methods were calculated as 11.39–31.27% for rpoB-PCR, 36.44–60.83% for IS6110-PCR, 75.29–92.93% for nested-rpoB PCR, and 87.98–99.11% for nested-rpoB PCR cloning. The 10 true EPTB-negative samples by histopathology were negative by all tested methods including cloning, and were used to calculate the specificity of the applied methods.
The CIs for the 100% specificity of each PCR method were calculated as 69.15–100%. Our results indicated that nested-rpoB PCR combined with TA cloning and sequencing is a preferred method for the detection of MTB DNA in EPTB samples, with high sensitivity and specificity, which confirms the histopathology results.
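Confidence intervals on proportions such as the sensitivity and specificity reported above can be computed with an exact binomial (Clopper–Pearson) interval; a hedged sketch assuming this interval type (the abstract does not name its CI method):

```python
# Sketch: exact Clopper–Pearson 95% CI for k successes out of n trials
# (e.g. sensitivity or specificity). Illustrative assumption only.
from scipy import stats

def clopper_pearson(k, n, alpha=0.05):
    """Return (lo, hi) exact binomial CI via beta-distribution quantiles."""
    lo = stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi
```

For a perfect specificity of 10/10, the exact lower bound is (0.025)^(1/10) ≈ 69.2%, consistent with the 69.15–100% intervals reported above.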
Methods for estimating the amount of vernal pool habitat in the northeastern United States
Van Meter, R.; Bailey, L.L.; Grant, E.H.C.
2008-01-01
The loss of small, seasonal wetlands is a major concern for a variety of state, local, and federal organizations in the northeastern U.S. Identifying and estimating the number of vernal pools within a given region is critical to developing long-term conservation and management strategies for these unique habitats and their faunal communities. We use three probabilistic sampling methods (simple random sampling, adaptive cluster sampling, and the dual frame method) to estimate the number of vernal pools on protected, forested lands. Overall, these methods yielded similar values of vernal pool abundance for each study area, and suggest that photographic interpretation alone may grossly underestimate the number of vernal pools in forested habitats. We compare the relative efficiency of each method and discuss ways of improving precision. Acknowledging that the objectives of a study or monitoring program ultimately determine which sampling designs are most appropriate, we recommend that some type of probabilistic sampling method be applied. We view the dual-frame method as an especially useful way of combining incomplete remote sensing methods, such as aerial photograph interpretation, with a probabilistic sample of the entire area of interest to provide more robust estimates of the number of vernal pools and a more representative sample of existing vernal pool habitats.
Application of a Permethrin Immunosorbent Assay Method to Residential Soil and Dust Samples
A low-cost, high throughput bioanalytical screening method was developed for monitoring cis/trans-permethrin in dust and soil samples. The method consisted of a simple sample preparation procedure [sonication with dichloromethane followed by a solvent exchange into methanol:wate...
Frison, Severine; Kerac, Marko; Checchi, Francesco; Nicholas, Jennifer
2017-01-01
The assessment of the prevalence of acute malnutrition in children under five is widely used for the detection of emergencies, planning interventions, advocacy, and monitoring and evaluation. This study examined PROBIT methods, which convert the parameters (mean and standard deviation (SD)) of a normally distributed variable into a cumulative probability below any cut-off, to estimate acute malnutrition in children under five using mid-upper arm circumference (MUAC). We assessed the performance of: PROBIT Method I, with the mean MUAC from the survey sample and the MUAC SD from a database of previous surveys; and PROBIT Method II, with the mean and SD of MUAC observed in the survey sample. Specifically, we generated sub-samples from 852 survey datasets, simulating 100 surveys for each of eight sample sizes. Overall, the methods were tested on 681,600 simulated surveys. PROBIT methods relying on sample sizes as small as 50 performed better than the classic method for estimating and classifying the prevalence of acute malnutrition. They had better precision in the estimation of acute malnutrition for all sample sizes and better coverage for smaller sample sizes, while having relatively little bias. They classified situations accurately for a threshold of 5% acute malnutrition. Both PROBIT methods had similar outcomes. PROBIT methods have a clear advantage over the classic method in the assessment of acute malnutrition prevalence based on MUAC. Their use would require much smaller sample sizes, thus enabling great time and resource savings and permitting timely and/or locally relevant prevalence estimates of acute malnutrition for a swift and well-targeted response.
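The PROBIT conversion described above amounts to a normal-CDF calculation; a minimal sketch in the style of PROBIT Method II, where both mean and SD come from the survey sample (the 115 mm cut-off and the example values below are assumptions, not the study's data):

```python
# Sketch of the PROBIT idea: prevalence = P(MUAC < cutoff) under a
# normal model for MUAC (illustrative parameters, not the study's data).
from scipy import stats

def probit_prevalence(mean_muac, sd_muac, cutoff=115.0):
    """Estimated prevalence of MUAC below `cutoff` (all values in mm)."""
    return stats.norm.cdf((cutoff - mean_muac) / sd_muac)
```

Because only two parameters (mean and SD) must be estimated rather than a tail proportion counted directly, this is why much smaller sample sizes can suffice compared with the classic direct-count method.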
Rapid fusion method for the determination of Pu, Np, and Am in large soil samples
Maxwell, Sherrod L.; Culligan, Brian; Hutchison, Jay B.; ...
2015-02-14
A new rapid sodium hydroxide fusion method for the preparation of 10-20 g soil samples has been developed by the Savannah River National Laboratory (SRNL). The method enables lower detection limits for plutonium, neptunium, and americium in environmental soil samples. The method also significantly reduces sample processing time and acid fume generation compared to traditional soil digestion techniques using hydrofluoric acid. Ten gram soil aliquots can be ashed and fused using the new method in 1-2 hours, completely dissolving samples, including refractory particles. Pu, Np and Am are separated using stacked 2 mL cartridges of TEVA and DGA Resin and measured using alpha spectrometry. The method can be adapted for measurement by inductively-coupled plasma mass spectrometry (ICP-MS). Two 10 g soil aliquots of fused soil may be combined prior to chromatographic separations to further improve detection limits. Total sample preparation time, including chromatographic separations and alpha spectrometry source preparation, is less than 8 hours.
Krämer, Nadine; Löfström, Charlotta; Vigre, Håkan; Hoorfar, Jeffrey; Bunge, Cornelia; Malorny, Burkhard
2011-03-01
Salmonella is a major zoonotic pathogen which causes outbreaks and sporadic cases of gastroenteritis in humans worldwide. The primary sources of Salmonella are food-producing animals such as pigs and poultry. For risk assessment and hazard analysis and critical control point (HACCP) concepts, it is essential to produce large amounts of quantitative data, which is currently not achievable with the standard culture-based methods for enumeration of Salmonella. This study presents the development of a novel strategy to enumerate low numbers of Salmonella in cork borer samples taken from pig carcasses, as a first concept and proof of principle for a new sensitive and rapid quantification method based on combined enrichment and real-time PCR. The novelty of the approach lies in the short pre-enrichment step, during which most bacteria are in the log phase of growth. The method consists of an 8 h pre-enrichment of the cork borer sample diluted 1:10 in non-selective buffered peptone water, followed by DNA extraction, and Salmonella detection and quantification by real-time PCR. The limit of quantification was 1.4 colony forming units (CFU)/20 cm(2) (approximately 10 g) of artificially contaminated sample, with a 95% confidence interval of ± 0.7 log CFU/sample. The precision was similar to the standard reference most probable number (MPN) method. A screening of 200 potentially naturally contaminated cork borer samples obtained over seven weeks in a slaughterhouse resulted in 25 Salmonella-positive samples. The analysis of salmonellae within these samples showed that the PCR method had a higher sensitivity for samples with a low contamination level (<6.7 CFU/sample), where 15 samples that were negative by the MPN method were detected by the PCR method and 5 were found to be negative by both methods. For samples with a higher contamination level (6.7-310 CFU/sample), the PCR and MPN methods were in good agreement.
The quantitative real-time PCR method can easily be applied to other food and environmental matrices by adaptation of the pre-enrichment time and media. Copyright © 2010 Elsevier B.V. All rights reserved.
Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...
2014-01-01
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, comprising seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect, and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly.
These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
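The Morris One-At-a-Time screening mentioned above ranks parameters by their elementary effects. A stripped-down sketch of the idea (not the PSUADE implementation; the toy model, bounds, and settings are invented for illustration, and full Morris trajectories are simplified to independent base points):

```python
import random

def morris_screening(f, bounds, n_traj=20, delta=0.1):
    """One-at-a-time elementary-effects screening (Morris-style sketch).
    Perturbs each parameter by a fraction `delta` of its range from
    random base points and returns the mean absolute elementary effect
    per parameter (larger = more influential)."""
    k = len(bounds)
    mu_star = [0.0] * k
    for _ in range(n_traj):
        # Sample a base point that leaves room for the upward step.
        x = [random.uniform(lo, hi - delta * (hi - lo)) for lo, hi in bounds]
        fx = f(x)
        for i, (lo, hi) in enumerate(bounds):
            xp = list(x)
            xp[i] += delta * (hi - lo)
            mu_star[i] += abs((f(xp) - fx) / delta)
    return [m / n_traj for m in mu_star]

# Toy model: output depends strongly on x0, weakly on x1, not at all on x2.
model = lambda x: 10 * x[0] + 0.1 * x[1] + 0 * x[2]
print(morris_screening(model, [(0, 1)] * 3))
```

For this linear toy model the screening recovers the coefficients exactly, which is why MOAT-style methods can separate important from unimportant parameters with few model runs.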
Gyawali, P; Ahmed, W; Jagals, P; Sidhu, J P S; Toze, S
2015-12-01
Hookworm infection accounts for around 700 million infections worldwide, especially in developing nations, due in part to the increased use of wastewater for crop production. The effective recovery of hookworm ova from wastewater matrices is difficult due to their low concentrations and heterogeneous distribution. In this study, we compared the recovery rates of (i) four rapid hookworm ova concentration methods from municipal wastewater, and (ii) two concentration methods from sludge samples. Ancylostoma caninum ova were used as a surrogate for human hookworm (Ancylostoma duodenale and Necator americanus). Known concentrations of A. caninum ova were seeded into wastewater (treated and raw) and sludge samples collected from two wastewater treatment plants (WWTPs) in Brisbane and Perth, Australia. The A. caninum ova were concentrated from treated and raw wastewater samples using centrifugation (Method A), hollow fiber ultrafiltration (HFUF) (Method B), filtration (Method C) and flotation (Method D). For sludge samples, flotation (Method E) and direct DNA extraction (Method F) methods were used. Among the four methods tested, the filtration method (Method C) was able to recover higher concentrations of A. caninum ova consistently from treated wastewater (39-50%) and raw wastewater (7.1-12%) samples collected from both WWTPs. The remaining methods (Methods A, B and D) yielded variable recovery rates ranging from 0.2 to 40% for treated and raw wastewater samples. The recovery rates for sludge samples were poor (0.02-4.7%), although Method F (direct DNA extraction) provided 1-2 orders of magnitude higher recovery than Method E (flotation). Based on our results, it can be concluded that the recovery rates of hookworm ova from wastewater matrices, especially sludge samples, can be poor and highly variable. Therefore, the choice of concentration method is vital for the sensitive detection of hookworm ova in wastewater matrices. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
COST-EFFECTIVE SAMPLING FOR SPATIALLY DISTRIBUTED PHENOMENA
Various measures of sampling plan cost and loss are developed and analyzed as they relate to a variety of multidisciplinary sampling techniques. The sampling choices examined include methods from design-based sampling, model-based sampling, and geostatistics. Graphs and tables ar...
Developments in Sampling and Analysis Instrumentation for Stationary Sources
ERIC Educational Resources Information Center
Nader, John S.
1973-01-01
Instrumentation for the measurement of pollutant emissions is considered including sample-site selection, sample transport, sample treatment, sample analysis, and data reduction, display, and interpretation. Measurement approaches discussed involve sample extraction from within the stack and electro-optical methods. (BL)
GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 1: ASSESSING SOIL SPLITTING PROTOCOLS
Five soil sample splitting methods (riffle splitting, paper cone riffle splitting, fractional shoveling, coning and quartering, and grab sampling) were evaluated with synthetic samples to verify Pierre Gy sampling theory expectations. Individually prepared samples consisting of l...
Methods for Determining Particle Size Distributions from Nuclear Detonations.
1987-03-01
[Abstract unavailable: only table-of-contents fragments survive, listing a summary of sample preparation methods, PCS set parameters, analyses by vendors, Brookhaven analyses using the method of cumulants and the histogram method (samples R-3 and R-8), and TEM particle measurements.]
Alum, Absar; Rock, Channah; Abbaszadegan, Morteza
2014-01-01
For land application, biosolids are classified as Class A or Class B based on the levels of bacterial, viral, and helminth pathogens in residual biosolids. The current EPA methods for the detection of these groups of pathogens in biosolids involve discrete steps, so a separate sample is processed independently to quantify the number of each group of pathogens in biosolids. The aim of the study was to develop a unified method for simultaneous processing of a single biosolids sample to recover bacterial, viral, and helminth pathogens. In the first stage of developing the simultaneous method, nine eluents were compared for their efficiency in recovering viruses from a 100 g spiked biosolids sample. In the second stage, the three top-performing eluents were thoroughly evaluated for the recovery of bacteria, viruses, and helminths. For all three groups of pathogens, the glycine-based eluent provided higher recovery than the beef extract-based eluent. Additional experiments were performed to optimize the performance of the glycine-based eluent under various procedural factors such as solids-to-eluent ratio, stir time, and centrifugation conditions. Last, the new method was directly compared with the EPA methods for the recovery of the three groups of pathogens spiked into duplicate samples of biosolids collected from different sources. For viruses, the new method yielded up to 10% higher recoveries than the EPA method. For bacteria and helminths, recoveries were 74% and 83% by the new method compared to 34% and 68% by the EPA method, respectively. The unified sample processing method significantly reduces the time required for processing biosolids samples for different groups of pathogens; it is less affected by the intrinsic variability of samples, while providing higher yields (P = 0.05) and greater consistency than the current EPA methods.
van der Elst, Kim C. M.; Span, Lambert F. R.; van Hateren, Kai; Vermeulen, Karin M.; van der Werf, Tjip S.; Greijdanus, Ben; Kosterink, Jos G. W.; Uges, Donald R. A.
2013-01-01
Invasive aspergillosis and candidemia are important causes of morbidity and mortality in immunocompromised and critically ill patients. The triazoles voriconazole, fluconazole, and posaconazole are widely used for the treatment and prophylaxis of these fungal infections. Due to the variability of the pharmacokinetics of the triazoles among and within individual patients, therapeutic drug monitoring is important for optimizing the efficacy and safety of antifungal treatment. A dried blood spot (DBS) analysis was developed and was clinically validated for voriconazole, fluconazole, and posaconazole in 28 patients. Furthermore, a questionnaire was administered to evaluate the patients' opinions of the sampling method. The DBS analytical method showed linearity over the concentration range measured for all triazoles. Results for accuracy and precision were within accepted ranges; samples were stable at room temperature for at least 12 days; and different hematocrit values and blood spot volumes had no significant influence. The ratio of the drug concentration in DBS samples to that in plasma was 1.0 for voriconazole and fluconazole and 0.9 for posaconazole. Sixty percent of the patients preferred DBS analysis as a sampling method; 15% preferred venous blood sampling; and 25% had no preferred method. There was significantly less perception of pain with the DBS sampling method (P = 0.021). In conclusion, DBS analysis is a reliable alternative to venous blood sampling and can be used for therapeutic drug monitoring of voriconazole, fluconazole, and posaconazole. Patients were satisfied with DBS sampling and had less pain than with venous sampling. Most patients preferred DBS sampling to venous blood sampling. PMID:23896473
Feng, Yaoyu; Zhao, Xukun; Chen, Jiaxu; Jin, Wei; Zhou, Xiaonong; Li, Na; Wang, Lin; Xiao, Lihua
2011-01-01
Genotyping studies on the source and human infection potential of Cryptosporidium oocysts in water have been almost exclusively conducted in industrialized nations. In this study, 50 source water samples and 30 tap water samples were collected in Shanghai, China, and analyzed by the U.S. Environmental Protection Agency (EPA) Method 1623. To find a cost-effective method to replace the filtration procedure, the water samples were also concentrated by calcium carbonate flocculation (CCF). Of the 50 source water samples, 32% were positive for Cryptosporidium and 18% for Giardia by Method 1623, whereas 22% were positive for Cryptosporidium and 10% for Giardia by microscopy of CCF concentrates. When CCF was combined with PCR for detection, the occurrence of Cryptosporidium (28%) was similar to that obtained by Method 1623. Genotyping of Cryptosporidium in 17 water samples identified the presence of C. andersoni in 14 water samples, C. suis in 7 water samples, C. baileyi in 2 water samples, C. meleagridis in 1 water sample, and C. hominis in 1 water sample. Therefore, farm animals, especially cattle and pigs, were the major sources of water contamination in Shanghai source water, and most oocysts found in source water in the area were not infectious to humans. Cryptosporidium oocysts were found in 2 of 30 tap water samples. The combined use of CCF for concentration and PCR for detection and genotyping provides a less expensive alternative to filtration and fluorescence microscopy for accurate assessment of Cryptosporidium contamination in water, although the results from this method are semiquantitative. PMID:21498768
We examined the effects of using a fixed-count subsample of 300 organisms on metric values using macroinvertebrate samples collected with 3 field sampling methods at 12 boatable river sites. For each sample, we used metrics to compare an initial fixed-count subsample of approxima...
A random spatial sampling method in a rural developing nation
Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas
2014-01-01
Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...
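A stratified random sample of the kind this abstract describes can be sketched in a few lines. The roster, strata, and per-stratum allocation below are hypothetical, invented purely for illustration:

```python
import random

def stratified_sample(population, strata_key, per_stratum, seed=0):
    """Draw a simple stratified random sample: group units by stratum,
    then sample without replacement within each stratum."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(strata_key(unit), []).append(unit)
    sample = []
    for units in strata.values():
        sample.extend(rng.sample(units, min(per_stratum, len(units))))
    return sample

# Hypothetical village roster stratified by region (50 units per region).
roster = [{"id": i, "region": "north" if i % 2 else "south"} for i in range(100)]
picked = stratified_sample(roster, lambda u: u["region"], per_stratum=5)
print(len(picked))  # 10
```

In practice the base population is often incompletely enumerated, which is the core difficulty the paper addresses; this sketch assumes the roster is complete.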
A comparison of fitness-case sampling methods for genetic programming
NASA Astrophysics Data System (ADS)
Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel
2017-11-01
Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
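Of the four fitness-case sampling methods compared, Lexicase Selection is the easiest to sketch. The following is a minimal illustration with a toy population and error matrix (not the paper's benchmarks):

```python
import random

def lexicase_select(population, errors, seed=0):
    """Lexicase parent selection (sketch): stream fitness cases in random
    order, keeping only the candidates with the best (lowest) error on
    each case in turn. `errors[i][j]` is individual i's error on case j."""
    rng = random.Random(seed)
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)
    for c in cases:
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
        if len(candidates) == 1:
            break
    return population[rng.choice(candidates)]

# Toy run: individual "b" has the lowest error on every case, so it always wins.
pop = ["a", "b", "c"]
errs = [[3, 2, 5], [0, 0, 0], [1, 4, 2]]
print(lexicase_select(pop, errs))  # b
```

Because the case order is reshuffled per selection event, specialists that excel on different subsets of cases can all be chosen as parents, which is the mechanism behind lexicase's diversity-preserving behavior.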
Domain Regeneration for Cross-Database Micro-Expression Recognition
NASA Astrophysics Data System (ADS)
Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying
2018-05-01
In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples come from two different micro-expression databases. Under this setting, the training and testing samples have different feature distributions, and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called the Target Sample Re-Generator (TSRG). Using TSRG, we are able to re-generate the samples from the target micro-expression database such that the re-generated target samples share the same or similar feature distributions as the original source samples. For this reason, we can then use the classifier learned on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments designed based on the SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.
Pikkemaat, M G; Rapallini, M L B A; Karp, M T; Elferink, J W A
2010-08-01
Tetracyclines are extensively used in veterinary medicine. For the detection of tetracycline residues in animal products, a broad array of methods is available. Luminescent bacterial biosensors represent an attractive, inexpensive, simple, and fast method for screening large numbers of samples. A previously developed cell-biosensor method was subjected to an evaluation study using over 300 routine poultry samples, and the results were compared with a microbial inhibition test. The cell-biosensor assay yielded many more suspect samples, 10.2% versus 2% with the inhibition test, all of which could be confirmed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Only one sample contained a concentration above the maximum residue limit (MRL) of 100 microg kg(-1), while residue levels in most of the suspect samples were very low (<10 microg kg(-1)). The method appeared to be specific and robust. Using an experimental set-up comprising the analysis of a series of three sample dilutions allowed an appropriate cut-off to be set for confirmatory analysis, keeping the number of samples requiring further analysis to a minimum.
Mass load estimation errors utilizing grab sampling strategies in a karst watershed
Fogle, A.W.; Taraba, J.L.; Dinger, J.S.
2003-01-01
Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
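The grab-sample load estimate described above pairs each sampled concentration with the continuous flow record. A minimal sketch under stated assumptions: the concentrations, flows, and interval below are invented, and each grab concentration is assumed representative of its whole interval (exactly the assumption the diurnal-variability finding warns about):

```python
def mass_load(concentrations_mg_L, flows_L_s, interval_s):
    """Estimate constituent mass load by pairing each grab-sample
    concentration (mg/L) with the mean flow (L/s) over its sampling
    interval (s). Returns total load in kg."""
    total_mg = sum(c * q * interval_s
                   for c, q in zip(concentrations_mg_L, flows_L_s))
    return total_mg / 1e6  # mg -> kg

# Hypothetical weekly grabs over four one-week intervals.
conc = [2.0, 3.5, 1.2, 2.8]         # mg/L at each grab
flow = [150.0, 420.0, 90.0, 200.0]  # mean L/s over each interval
print(f"{mass_load(conc, flow, 7 * 24 * 3600):.1f} kg")
```

Collecting the grab at a time of day far from the daily mean concentration biases `conc`, and hence the load, which is why the study emphasizes sampling time.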
An improved SRC method based on virtual samples for face recognition
NASA Astrophysics Data System (ADS)
Fu, Lijun; Chen, Deyun; Lin, Kezheng; Li, Ao
2018-07-01
The sparse representation classifier (SRC) performs classification by evaluating which class leads to the minimum representation error. However, in real-world settings the number of available training samples is limited due to noise interference, so the training samples cannot accurately represent the test sample linearly. Therefore, in this paper, we first produce virtual samples by exploiting the original training samples, with the aim of increasing the number of training samples. Then, we take the intra-class difference as a data representation of partial noise, and utilize the intra-class differences and training samples simultaneously to represent the test sample linearly according to the theory of the SRC algorithm. Using weighted score-level fusion, the respective representation scores of the virtual samples and the original training samples are fused to obtain the final classification result. The experimental results on multiple face databases show that our proposed method has a very satisfactory classification performance.
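The representation-and-residual core of SRC can be illustrated with a least-squares surrogate. This sketch omits the paper's sparse coding, virtual-sample generation, and weighted score-level fusion, and the class matrices are synthetic, so it shows only the "smallest representation error wins" principle:

```python
import numpy as np

def residual_classify(test_x, class_samples):
    """Nearest-subspace residual classifier: a least-squares stand-in
    for the SRC idea. Each class's training matrix X (dim x n) is used
    to represent the test sample; the class whose representation leaves
    the smallest residual is returned."""
    best_label, best_res = None, float("inf")
    for label, X in class_samples.items():
        coef, *_ = np.linalg.lstsq(X, test_x, rcond=None)
        res = np.linalg.norm(test_x - X @ coef)
        if res < best_res:
            best_label, best_res = label, res
    return best_label

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, (8, 3))   # class "A": 3 training samples in 8-D
B = rng.normal(5.0, 1.0, (8, 3))   # class "B": 3 training samples in 8-D
test = A[:, 0]                     # lies exactly in class A's span
print(residual_classify(test, {"A": A, "B": B}))  # A
```

The paper's contribution sits on top of this step: enlarging each class matrix with virtual samples and intra-class differences, then fusing the two resulting scores.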
Galea, Karen S.; McGonagle, Carolyn; Sleeuwenhoek, Anne; Todd, David; Jiménez, Araceli Sánchez
2014-01-01
Dermal exposure to drilling fluids and crude oil is an exposure route of concern. However, there have been no published studies describing sampling methods or reporting dermal exposure measurements. We describe a study that aimed to evaluate a wipe sampling method to assess dermal exposure to an oil-based drilling fluid and crude oil, as well as to investigate the feasibility of using an interception cotton glove sampler for exposure on the hands/wrists. A direct comparison of the wipe and interception methods was also completed using pigs’ trotters as a surrogate for human skin and a direct surface contact exposure scenario. Overall, acceptable recovery and sampling efficiencies were reported for both methods, and both methods had satisfactory storage stability at 1 and 7 days, although there appeared to be some loss over 14 days. The methods’ comparison study revealed significantly higher removal of both fluids from the metal surface with the glove samples compared with the wipe samples (on average 2.5 times higher). Both evaluated sampling methods were found to be suitable for assessing dermal exposure to oil-based drilling fluids and crude oil; however, the comparison study clearly illustrates that glove samplers may overestimate the amount of fluid transferred to the skin. Further comparison of the two dermal sampling methods using additional exposure situations such as immersion or deposition, as well as a field evaluation, is warranted to confirm their appropriateness and suitability in the working environment. PMID:24598941
Meyer, M.T.; Lee, E.A.; Ferrell, G.M.; Bumgarner, J.E.; Varns, Jerry
2007-01-01
This report describes the performance of an offline tandem solid-phase extraction (SPE) method and an online SPE method that use liquid chromatography/mass spectrometry for the analysis of 23 and 35 antibiotics, respectively, as used in several water-quality surveys conducted since 1999. In the offline tandem SPE method, normalized concentrations for the quinolone, macrolide, and sulfonamide antibiotics in spiked environmental samples averaged from 81 to 139 percent of the expected spiked concentrations. A modified standard-addition technique was developed to improve the quantitation of the tetracycline antibiotics, which had 'apparent' concentrations that ranged from 185 to 1,200 percent of their expected spiked concentrations in matrix-spiked samples. In the online SPE method, normalized concentrations for the quinolone, macrolide, sulfonamide, and tetracycline antibiotics in matrix-spiked samples averaged from 51 to 142 percent of their expected spiked concentrations, and the beta-lactam antibiotics in matrix-spiked samples averaged from 22 to 76 percent of their expected spiked concentrations. Comparison of 44 samples analyzed by both the offline tandem SPE and online SPE methods showed 50 to 100 percent agreement in sample detection for overlapping analytes and 68 to 100 percent agreement in a presence-absence comparison for all analytes. The offline tandem and online SPE methods were compared to an independent method that contains two overlapping antibiotic compounds, sulfamethoxazole and trimethoprim, for 96 and 44 environmental samples, respectively. The offline tandem SPE method showed 86 and 92 percent agreement in sample detection and 96 and 98 percent agreement in a presence-absence comparison for sulfamethoxazole and trimethoprim, respectively. The online SPE method showed 57 and 56 percent agreement in sample detection and 72 and 91 percent agreement in a presence-absence comparison for sulfamethoxazole and trimethoprim, respectively.
A linear regression with an R2 of 0.91 was obtained for trimethoprim concentrations, and an R2 of 0.35 was obtained for sulfamethoxazole concentrations determined from samples analyzed by the offline tandem SPE and online SPE methods. Linear regressions of trimethoprim and sulfamethoxazole concentrations determined from samples analyzed by the offline tandem SPE method and the independent M3 pharmaceutical method yielded R2 of 0.95 and 0.87, respectively. Regressed comparison of the offline tandem SPE method to the online SPE and M3 methods showed that the online SPE method gave higher concentrations for sulfamethoxazole and trimethoprim than were obtained from the offline tandem SPE or M3 methods.
Al, Kait F; Bisanz, Jordan E; Gloor, Gregory B; Reid, Gregor; Burton, Jeremy P
2018-01-01
Increasing interest in the impact of the gut microbiota on health and disease has resulted in multiple human microbiome-related studies emerging. However, multiple sampling methods are being used, making cross-comparison of results difficult. To avoid additional clinic visits and increase patient recruitment to these studies, there is the potential to utilize at-home stool sampling. The aim of this pilot study was to compare simple self-sampling collection and storage methods. To simulate storage conditions, stool samples from three volunteers were freshly collected, placed on toilet tissue, and stored at four temperatures (-80, 7, 22 and 37°C), either dry or in the presence of a stabilization agent (RNAlater®), for 3 or 7 days. Using 16S rRNA gene sequencing by Illumina, the effect of each storage variation was compared to a reference community from fresh, unstored counterparts. Fastq files may be accessed in the NCBI Sequence Read Archive: Bioproject ID PRJNA418287. Microbial diversity and composition were not significantly altered by any storage method. Samples were always separable based on participant, regardless of storage method, suggesting there was no need for sample preservation by a stabilization agent. In summary, if immediate sample processing is not feasible, short-term storage of unpreserved stool samples on toilet paper offers a reliable way to assess the microbiota composition by 16S rRNA gene sequencing. Copyright © 2017 Elsevier B.V. All rights reserved.
Sparse feature learning for instrument identification: Effects of sampling and pooling methods.
Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu
2016-05-01
Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both of the proposed sampling methods. Regarding summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47,000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are experimented with, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
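The three pooling strategies compared in the abstract (max, average, and standard deviation pooling) can be sketched directly; the frame activations below are invented for illustration, and a real system would pool each dictionary atom's activations across all frames of a recording:

```python
import statistics

def pool(activations, method="std"):
    """Aggregate a sequence of per-frame feature activations into one
    summary value using max-, average-, or standard-deviation pooling."""
    if method == "max":
        return max(activations)
    if method == "avg":
        return statistics.fmean(activations)
    if method == "std":
        return statistics.pstdev(activations)  # population std. deviation
    raise ValueError(f"unknown pooling method: {method}")

frames = [0.1, 0.9, 0.2, 0.8, 0.15]  # hypothetical activations of one atom
print(pool(frames, "max"), pool(frames, "avg"), pool(frames, "std"))
```

Standard deviation pooling captures how much an activation fluctuates over time rather than its level, which is the property the paper found most discriminative for instruments.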
40 CFR 761.272 - Chemical extraction and analysis of samples.
Code of Federal Regulations, 2010 CFR
2010-07-01
... COMMERCE, AND USE PROHIBITIONS Cleanup Site Characterization Sampling for PCB Remediation Waste in... composite samples of PCB remediation waste. Use Method 8082 from SW-846, or a method validated under subpart...
Houts, Carrie R; Edwards, Michael C; Wirth, R J; Deal, Linda S
2016-11-01
There has been a notable increase in the advocacy of using small-sample designs as an initial quantitative assessment of item and scale performance during the scale development process. This is particularly true in the development of clinical outcome assessments (COAs), where Rasch analysis has been advanced as an appropriate statistical tool for evaluating the developing COAs using a small sample. We review the benefits such methods are purported to offer from both a practical and statistical standpoint and detail several problematic areas, including both practical and statistical theory concerns, with respect to the use of quantitative methods, including Rasch-consistent methods, with small samples. The feasibility of obtaining accurate information and the potential negative impacts of misusing large-sample statistical methods with small samples during COA development are discussed.
NASA Astrophysics Data System (ADS)
Cheng, T.; Zhou, X.; Jia, Y.; Yang, G.; Bai, J.
2018-04-01
In China's First National Geographic Conditions Census, millions of sample records were collected nationwide for interpreting land cover from remote sensing images; the number of data files exceeds 12,000,000 and has continued to grow in the follow-on National Geographic Conditions Monitoring project. Storing such big data in a database such as Oracle is currently the most effective approach, but a suitable method for managing and applying the sample data is even more important. This paper studies a database construction method based on a relational database combined with a distributed file system, in which the vector data and file data are stored in different physical locations; the key issues and their solutions are discussed. On this basis, the application of the sample data is studied and several kinds of use cases are analyzed, laying the foundation for the sample data's application. In particular, sample data located in Shaanxi province are selected to verify the method. Taking the 10 first-level classes defined in the land cover classification system as an example, the spatial distribution and density characteristics of each kind of sample data are analyzed. The results verify that the database construction method based on a relational database with a distributed file system is useful and applicable for searching, analyzing, and promoting the application of sample data. Furthermore, sample data collected in China's First National Geographic Conditions Census could be useful for Earth observation and land cover quality assessment.
LeBouf, Ryan F; Virji, Mohammed Abbas; Ranpara, Anand; Stefaniak, Aleksandr B
2017-07-01
This method was designed for sampling select quaternary ammonium (quat) compounds in air or on surfaces followed by analysis using ultraperformance liquid chromatography tandem mass spectrometry. Target quats were benzethonium chloride, didecyldimethylammonium bromide, benzyldimethyldodecylammonium chloride, benzyldimethyltetradecylammonium chloride, and benzyldimethylhexadecylammonium chloride. For air sampling, polytetrafluoroethylene (PTFE) filters are recommended for 15-min to 24-hour sampling. For surface sampling, Pro-wipe® 880 (PW) media was chosen. Samples were extracted in 60:40 acetonitrile:0.1% formic acid for 1 hour on an orbital shaker. Method detection limits range from 0.3 to 2 ng/ml depending on media and analyte. Matrix effects of media are minimized through the use of multiple reaction monitoring versus selected ion recording. Upper confidence limits on accuracy meet the National Institute for Occupational Safety and Health 25% criterion for PTFE and PW media for all analytes. Using PTFE and PW analyzed with multiple reaction monitoring, the method quantifies levels among the different quats compounds with high precision (<10% relative standard deviation) and low bias (<11%). The method is sensitive enough with very low method detection limits to capture quats on air sampling filters with only a 15-min sample duration with a maximum assessed storage time of 103 days before sample extraction. This method will support future exposure assessment and quantitative epidemiologic studies to explore exposure-response relationships and establish levels of quats exposures associated with adverse health effects. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Zonta, Marco Antonio; Velame, Fernanda; Gema, Samara; Filassi, Jose Roberto; Longatto-Filho, Adhemar
2014-01-01
Background Breast cancer is the second leading cause of death in women worldwide. Spontaneous breast nipple discharge may contain cells that can be analyzed for malignancy. The Halo® Mamo Cyto Test (HMCT) was recently developed as an automated system for aspirating cells from the breast ducts. The objective of this study was to standardize the methodology of sampling and sample preparation of nipple discharge obtained by the automated Halo breast test method and to perform cytological evaluation of samples preserved in liquid medium (SurePath™). Methods We analyzed 564 nipple fluid samples from women between 20 and 85 years old with no history of breast disease or neoplasia, no pregnancy, and no gynecologic medical history, collected by the HMCT method and preserved in two different transport solutions. Results Of the 306 nipple fluid samples from method 1, 199 (65%) were classified as unsatisfactory (class 0), 104 (34%) as benign findings (class II), and three (1%) as undetermined for neoplastic cells (class III). Of the 258 samples analyzed in method 2, 127 (49%) were classified as class 0, 124 (48%) as class II, and seven (2%) as class III. Conclusion Our study suggests that the quality and quantity of cellular samples improve when the two methodologies, the Halo breast test and the liquid-based medium, are combined. PMID:29147397
Jesse, Stephen [Knoxville, TN; Geohegan, David B [Knoxville, TN; Guillorn, Michael [Brooktondale, NY
2009-02-17
Methods and apparatus are described for SEM imaging and measuring electronic transport in nanocomposites based on electric field induced contrast. A method includes mounting a sample onto a sample holder, the sample including a sample material; wire bonding leads from the sample holder onto the sample; placing the sample holder in a vacuum chamber of a scanning electron microscope; connecting leads from the sample holder to a power source located outside the vacuum chamber; controlling secondary electron emission from the sample by applying a predetermined voltage to the sample through the leads; and generating an image of the secondary electron emission from the sample. An apparatus includes a sample holder for a scanning electron microscope having an electrical interconnect and leads on top of the sample holder electrically connected to the electrical interconnect; a power source and a controller connected to the electrical interconnect for applying voltage to the sample holder to control the secondary electron emission from a sample mounted on the sample holder; and a computer coupled to a secondary electron detector to generate images of the secondary electron emission from the sample.
NASA Astrophysics Data System (ADS)
Yang, Linlin; Sun, Hai; Fu, Xudong; Wang, Suli; Jiang, Luhua; Sun, Gongquan
2014-07-01
A novel method for measuring effective diffusion coefficient of porous materials is developed. The oxygen concentration gradient is established by an air-breathing proton exchange membrane fuel cell (PEMFC). The porous sample is set in a sample holder located in the cathode plate of the PEMFC. At a given oxygen flux, the effective diffusion coefficients are related to the difference of oxygen concentration across the samples, which can be correlated with the differences of the output voltage of the PEMFC with and without inserting the sample in the cathode plate. Compared to the conventional electrical conductivity method, this method is more reliable for measuring non-wetting samples.
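The relation underlying such a measurement is Fick's first law: at steady state, the effective diffusion coefficient follows from the imposed oxygen flux, the sample thickness, and the concentration drop across the sample (inferred in the method above from the fuel cell's output-voltage difference). The sketch below simply evaluates that relation; all numerical values are invented for illustration.

```python
# Illustrative Fick's-law estimate (not the authors' exact procedure):
#   D_eff = J * L / dC
# at a fixed oxygen flux J through a porous sample of thickness L with a
# concentration difference dC across it. Example values are made up.
J = 1.5e-4   # oxygen flux through the sample, mol m^-2 s^-1
L = 2.0e-3   # sample thickness, m
dC = 1.2     # oxygen concentration difference across the sample, mol m^-3

D_eff = J * L / dC
print(f"D_eff = {D_eff:.2e} m^2/s")
```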
Rapid method for the determination of 226Ra in hydraulic fracturing wastewater samples
Maxwell, Sherrod L.; Culligan, Brian K.; Warren, Richard A.; ...
2016-03-24
A new method that rapidly preconcentrates and measures 226Ra from hydraulic fracturing wastewater samples was developed in the Savannah River Environmental Laboratory. The method improves the quality of 226Ra measurements using gamma spectrometry by providing up to 100x preconcentration of 226Ra from this difficult sample matrix, which contains very high levels of calcium, barium, strontium, magnesium and sodium. The high chemical yield, typically 80-90%, facilitates a low detection limit, important for lower level samples, and indicates method ruggedness. Ba-133 tracer is used to determine chemical yield and correct for geometry-related counting issues. The 226Ra sample preparation takes < 2 hours.
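The role of the Ba-133 chemical-yield correction can be illustrated with a generic gamma-counting activity calculation. This is a hedged sketch, not the laboratory's actual data reduction: the counts, efficiency, counting time, yield, and sample volume below are invented; only the 186 keV gamma emission probability of 226Ra (about 3.56%) is a standard nuclear-data value.

```python
# Generic yield-corrected activity calculation for gamma spectrometry of
# 226Ra; all counting numbers are fabricated for illustration.
net_counts = 1200.0      # net counts in the 226Ra gamma peak
eff = 0.25               # detector efficiency at the peak energy
gamma_prob = 0.0356      # 186 keV gamma emission probability of 226Ra
count_time = 3600.0      # counting time, s
chem_yield = 0.85        # chemical recovery, determined from the Ba-133 tracer
volume = 0.5             # volume of wastewater processed, L

activity = net_counts / (eff * gamma_prob * count_time * chem_yield * volume)
print(f"226Ra activity = {activity:.1f} Bq/L")
```

Dividing by the tracer-derived chemical yield corrects for radium lost during the preconcentration chemistry, which is why a high, reproducible yield matters for the detection limit.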
GROUND WATER PURGING AND SAMPLING METHODS: HISTORY VS. HYSTERIA
It has been over 10 years since the low-flow ground water purging and sampling method was initially reported in the literature. The method grew from the recognition that well purging was necessary to collect representative samples, bailers could not achieve well purging, and high...
THE INFLUENCE OF PHYSICAL FACTORS ON COMPARATIVE PERFORMANCE OF SAMPLING METHODS IN LARGE RIVERS
In 1999, we compared five existing benthic macroinvertebrate sampling methods used in boatable rivers. Each sampling protocol was performed at each of 60 sites distributed among four rivers in the Ohio River drainage basin. Initial comparison of methods using key macroinvertebr...
Fernández-Soto, Pedro; Velasco Tirado, Virginia; Carranza Rodríguez, Cristina; Pérez-Arellano, José Luis; Muro, Antonio
2013-01-01
Human schistosomiasis remains a serious worldwide public health problem. At present, a sensitive and specific assay for routine diagnosis of schistosome infection is not yet available. The potential for detecting schistosome-derived DNA by PCR-based methods in human clinical samples is currently being investigated as a diagnostic tool with potential application in routine schistosomiasis diagnosis. Collection of diagnostic samples such as stool or blood is usually difficult in some populations. Urine, by contrast, is a biological sample that can be collected non-invasively, is easy to obtain from people of all ages, and is easy to handle; nevertheless, it is still not widely used as a sample for PCR diagnosis. This could be due to the high variability in the reported efficiency of detection, which results from the high variation in the storage and handling conditions of urine samples and in the DNA preservation and extraction methods. We evaluated different commercial DNA extraction methods on a series of long-term frozen human urine samples from patients with parasitologically confirmed schistosomiasis in order to assess the effectiveness of PCR for Schistosoma spp. detection. Patients' urine samples were frozen for 18 months to 7 years before use. Results were compared with those obtained in PCR assays using fresh healthy human urine artificially contaminated with Schistosoma mansoni DNA and urine samples from mice experimentally infected with S. mansoni cercariae that had been stored frozen for at least 12 months before use. PCR on fresh artificially contaminated human urine was much more effective, whatever the DNA extraction method used, than PCR on long-term frozen human urine as the source of DNA template. Long-term frozen human urine samples are probably not a good source of DNA template for PCR detection of Schistosoma spp., regardless of the extraction method used.
NASA Astrophysics Data System (ADS)
Kautz, M.
2016-12-01
Microplastic research in aquatic environments has quickly evolved over the last decade. To have meaningful inter-study comparisons, it is necessary to define methodological criteria for both the sampling and sorting of microplastics. The most common sampling method used for sea surface samples has traditionally been a neuston net (NN) tow. Originally designed for plankton collection, neuston tows allow for a large volume of water to be sampled and can be coupled with phytoplankton monitoring. The widespread use of surface nets allows for easy comparison between data sets, but the units of measurement for calculating microplastic concentration vary, from surface area (m² and km²) to volume of water sampled (m³). Contamination by the air, equipment, or sampler is a constant concern in microplastic research. Significant in-field contamination concerns for neuston tow sampling include air exposure time, microplastics in rinse water, sampler contact, and plastic net material. Seeking to overcome the lack of contamination control and the intrinsic instrumental size limitation associated with surface tow nets, we developed an alternative sampling method. The whole water (WW) method is a one-liter grab sample of surface water adapted from College of the Atlantic and Sea Education Association (SEA) student, Marina Garland. This is the only WW method that we are aware of being used to sample microplastic. The method addresses the increasing need to explore smaller size domains, to reduce potential contamination and to incorporate citizen scientists into data collection. Less water is analyzed using the WW method, but it allows for targeted sampling of point-source pollution, intertidal, and shallow areas. The WW methodology can easily be integrated into long-term or citizen science monitoring initiatives due to its simplicity and low equipment demands.
The aim of our study was to demonstrate a practical and economically feasible method for sampling microplastic abundance at the micro (10⁻⁶ m) and nano (10⁻⁸ m) scale that can be used in a wide variety of environments, and for assessing spatial and temporal distributions. The method has been employed in a multi-year citizen science collaboration with Adventurers and Scientists for Conservation to study microplastic worldwide.
Comparison of oral fluid collection methods for the molecular detection of hepatitis B virus.
Portilho, M M; Mendonça, Acf; Marques, V A; Nabuco, L C; Villela-Nogueira, C A; Ivantes, Cap; Lewis-Ximenez, L L; Lampe, E; Villar, L M
2017-11-01
This study aimed to compare the efficiency of four oral fluid collection methods (Salivette, FTA Card, spitting and DNA-Sal) for detecting HBV DNA by qualitative PCR. Seventy-four individuals (32 HBV reactive and 42 with no HBV markers) donated serum and oral fluid. An in-house qualitative PCR was used to detect HBV in both sample types, and a commercial quantitative PCR was used for serum. HBV DNA was detected in all serum samples from HBV-infected individuals and in none from the control group. In the HBV group, HBV DNA was detected in 17 samples collected with the Salivette device, 16 samples collected with the FTA Card device, 16 samples collected by spitting and 13 samples collected with the DNA-Sal device. Samples corresponding to a higher viral load in the paired serum sample could be detected using all oral fluid collection methods, but the Salivette collection device yielded the largest number of positive samples and detected the widest range of viral loads. It was possible to detect HBV DNA using all devices tested, but the largest number of positive samples was observed with the Salivette device, which showed high concordance with the viral load observed in the paired serum samples. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd. All rights reserved.
Tack, Pieter; Vekemans, Bart; Laforce, Brecht; Rudloff-Grund, Jennifer; Hernández, Willinton Y; Garrevoet, Jan; Falkenberg, Gerald; Brenker, Frank; Van Der Voort, Pascal; Vincze, Laszlo
2017-02-07
Using X-ray absorption near edge structure (XANES) spectroscopy, information on the local chemical structure and oxidation state of an element of interest can be acquired. Conventionally, this information is obtained in a spatially resolved manner by scanning a sample through a focused X-ray beam. Recently, full-field methods have been developed to obtain direct 2D chemical state information by imaging a large sample area. These methods usually operate in transmission mode, restricting their use to thin, transmitting samples. Here, a fluorescence-mode method is presented using an energy-dispersive pnCCD detector, the SLcam, with measurement times far shorter than are generally achievable. Additionally, this method operates in confocal mode, thus providing direct 3D spatially resolved chemical state information from a selected subvolume of a sample, without the need to rotate the sample. The method is applied to two samples: a gold-supported magnesia catalyst (Au/MgO) and a natural diamond containing Fe-rich inclusions. Both samples provide XANES spectra that can be overlapped with reference XANES spectra, allowing this method to be used for fingerprinting and linear combination analysis of known XANES reference compounds.
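The linear combination analysis mentioned above amounts to an ordinary least-squares fit of a measured spectrum against known reference spectra. The sketch below uses synthetic sigmoid "edges" as stand-ins; real XANES spectra would be loaded from experiment, not generated like this.

```python
import numpy as np

# Synthetic stand-ins for two reference XANES edges and a measured mixture.
energy = np.linspace(0.0, 1.0, 50)
ref_a = 1.0 / (1.0 + np.exp(-20.0 * (energy - 0.4)))   # fake reference edge A
ref_b = 1.0 / (1.0 + np.exp(-20.0 * (energy - 0.6)))   # fake reference edge B
measured = 0.3 * ref_a + 0.7 * ref_b                   # "unknown" sample spectrum

# Linear combination fit: find the reference fractions that best reproduce
# the measured spectrum in the least-squares sense.
refs = np.column_stack([ref_a, ref_b])
coeffs, *_ = np.linalg.lstsq(refs, measured, rcond=None)
print(coeffs)   # recovered mixing fractions of the two references
```

Because the synthetic mixture lies exactly in the span of the references, the fit recovers the fractions essentially exactly; with real data, the residual indicates how well the chosen references explain the sample.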
Rapid method to determine 226Ra in steel samples
Maxwell, Sherrod L.; Culligan, Brian; Hutchison, Jay B.; ...
2017-09-22
The rapid measurement of 226Ra in steel samples is very important in the event of a radiological emergency. 226Ra (T1/2 = 1600 y) is a natural radionuclide present in the environment and a highly toxic alpha-emitter. Due to its long half-life and tendency to concentrate in bones, 226Ra ingestion or inhalation can lead to significant committed dose to individuals. A new method for the determination of 226Ra in steel samples has been developed at the Savannah River Environmental Laboratory. The new method employs a rugged acid digestion method that includes hydrofluoric acid, followed by a single precipitation step to rapidly preconcentrate the radium and remove most of the dissolved steel sample matrix. Radium is then separated using a combination of cation exchange and extraction chromatography, and 226Ra is measured by alpha spectrometry. This approach has a sample preparation time of ~ 8 h for steel samples, has a very high tracer yield (> 88%), and removes interferences effectively. A 133Ba yield tracer is used so that samples can be counted immediately following the separation method, avoiding lengthy ingrowth times that are required in other methods.
Mori, Yoshiharu; Okamoto, Yuko
2013-02-01
A simulated tempering method, which is referred to as simulated-tempering umbrella sampling, for calculating the free energy of chemical reactions is proposed. First principles molecular dynamics simulations with this simulated tempering were performed to study the intramolecular proton transfer reaction of malonaldehyde in an aqueous solution. Conformational sampling in reaction coordinate space can be easily enhanced with this method, and the free energy along a reaction coordinate can be calculated accurately. Moreover, the simulated-tempering umbrella sampling provides trajectory data more efficiently than the conventional umbrella sampling method.
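A toy analogue of the umbrella part of the scheme can make the idea concrete. This sketch substitutes plain Metropolis Monte Carlo on a 1-D double well for the paper's first-principles MD, and uses a single bias window rather than simulated tempering; the potential, force constant, and step sizes are all invented.

```python
import math
import random
from collections import Counter

random.seed(0)
kT = 1.0
k_umb, x0 = 50.0, 0.0                  # harmonic bias centered on the barrier top

def U(x):                              # toy double-well "reaction" potential
    return x**4 - 2.0 * x**2

def w(x):                              # umbrella (bias) potential
    return 0.5 * k_umb * (x - x0) ** 2

# Metropolis sampling of the biased potential U + w: the restraint concentrates
# sampling near x0, a region plain sampling would rarely visit.
x, samples = x0, []
for _ in range(20000):
    trial = x + random.uniform(-0.2, 0.2)
    if random.random() < math.exp(-(U(trial) + w(trial) - U(x) - w(x)) / kT):
        x = trial
    samples.append(x)

# Histogram the biased samples and remove the bias afterwards:
#   F(x) = -kT * ln p_biased(x) - w(x) + const.
bins = Counter(round(s, 1) for s in samples)
for xb in sorted(bins):
    p = bins[xb] / len(samples)
    print(f"x={xb:+.1f}  F(x)={-kT * math.log(p) - w(xb):7.3f}")
```

In the full method, many windows at different bias centers would be combined (for example with WHAM), and the tempering moves would additionally swap among temperatures to improve sampling across barriers.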
NASA Technical Reports Server (NTRS)
Slivon, L. E.; Hernon-Kenny, L. A.; Katona, V. R.; Dejarme, L. E.
1995-01-01
This report describes analytical methods and results obtained from chemical analysis of 31 charcoal samples in five sets. Each set was obtained from a single scrubber used to filter ambient air on board a Spacelab mission. Analysis of the charcoal samples was conducted by thermal desorption followed by gas chromatography/mass spectrometry (GC/MS). All samples were analyzed using identical methods. The method used for these analyses was able to detect compounds independent of their polarity or volatility. In addition to the charcoal samples, analyses of three Environmental Control and Life Support System (ECLSS) water samples were conducted specifically for trimethylamine.
Introduction to Field Water-Quality Methods for the Collection of Metals - 2007 Project Summary
Allen, Monica L.
2008-01-01
The U.S. Geological Survey (USGS), Region VI of the U.S. Environmental Protection Agency (USEPA), and the Osage Nation presented three 3-day workshops, in June-August 2007, entitled "Introduction to Field Water-Quality Methods for the Collection of Metals." The purpose of the workshops was to provide instruction to tribes within USEPA Region VI on various USGS surface-water measurement methods and water-quality sampling protocols for the collection of surface-water samples for metals analysis. Workshop attendees included members from over 22 tribes and pueblos. USGS instructors came from Oklahoma, New Mexico, and Georgia. Workshops were held in eastern and south-central Oklahoma and New Mexico and covered many topics including presampling preparation, water-quality monitors, and sampling for metals in surface water. Attendees spent one full classroom day learning the field methods used by the USGS Water Resources Discipline and learning about the complexity of obtaining valid water-quality and quality-assurance data. Lectures included (1) a description of metal contamination sources in surface water; (2) introduction on how to select field sites, equipment, and laboratories for sample analysis; (3) collection of sediment in surface water; and (4) utilization of proper protocol and methodology for sampling metals in surface water. Attendees also were provided USGS sampling equipment for use during the field portion of the class so they had actual "hands-on" experience to take back to their own organizations. The final 2 days of the workshop consisted of field demonstrations of current USGS water-quality sample-collection methods. The hands-on training ensured that attendees were exposed to and experienced proper sampling procedures. Attendees learned integrated-flow techniques during sample collection, field-property documentation, and discharge measurements and calculations.
They also used enclosed chambers for sample processing and collected quality-assurance samples to verify their techniques. Benefits of integrated water-quality sample-collection methods are varied. Tribal environmental programs now have the ability to collect data that are comparable across watersheds. The use of consistent sample collection, manipulation, and storage techniques will provide consistent quality data that will enhance the understanding of local water resources. The improved data quality also will help the USEPA better document the condition of the region's water. Ultimately, these workshops equipped tribes to use uniform sampling methods and to provide consistent quality data that are comparable across the region.
Hubbell, Joel M.; Sisson, James B.
2001-01-01
A method of determining matric potential of a sample, the method comprising placing the sample in a container, the container having an opening; and contacting the sample with a tensiometer via the opening. An apparatus for determining matric potential of a sample, the apparatus comprising a housing configured to receive a sample; a portable matric potential sensing device extending into the housing and having a porous member; and a wall closing the housing to insulate the sample and at least a portion of the matric potential sensing device including the porous member.
Rapid method to determine actinides and 89/90Sr in limestone and marble samples
Maxwell, Sherrod L.; Culligan, Brian; Hutchison, Jay B.; ...
2016-04-12
A new method for the determination of actinides and radiostrontium in limestone and marble samples has been developed that utilizes a rapid sodium hydroxide fusion to digest the sample. Following rapid pre-concentration steps to remove sample matrix interferences, the actinides and 89/90Sr are separated using extraction chromatographic resins and measured radiometrically. The advantages of sodium hydroxide fusion versus other fusion techniques will be discussed. Lastly, this approach has a sample preparation time for limestone and marble samples of <4 hours.
Jha, Virendra K.; Wydoski, Duane S.
2002-01-01
A method for the isolation of 20 parent organophosphate pesticides and 5 pesticide degradates from filtered natural-water samples is described. Seven of these compounds are reported permanently with an estimated concentration because of performance issues. Water samples are filtered to remove suspended particulate matter, and then 1 liter of filtrate is pumped through disposable solid-phase extraction columns that contain octadecyl-bonded porous silica to extract the compounds. The C-18 columns are dried with nitrogen gas, and method compounds are eluted from the columns with ethyl acetate. The extract is analyzed by dual capillary-column gas chromatography with flame photometric detection. Single-operator method detection limits in all three water-matrix samples ranged from 0.004 to 0.012 microgram per liter. Method performance was validated by spiking all compounds into three different matrices at three different concentrations. Eight replicates were analyzed at each concentration level in each matrix. Mean recoveries of method compounds spiked in surface-water samples ranged from 39 to 149 percent and those in ground-water samples ranged from 40 to 124 percent for all pesticides except dimethoate. Mean recoveries of method compounds spiked in reagent-water samples ranged from 41 to 119 percent for all pesticides except dimethoate. Dimethoate exhibited reduced recoveries (mean of 43 percent in low- and medium-concentration level spiked samples and 20 percent in high-concentration level spiked samples) in all matrices because of incomplete collection on the C-18 column. As a result, concentrations of dimethoate and six other compounds (based on performance issues) in samples are reported in this method with an estimated remark code.
A passive guard for low thermal conductivity measurement of small samples by the hot plate method
NASA Astrophysics Data System (ADS)
Jannot, Yves; Degiovanni, Alain; Grigorova-Moutiers, Veneta; Godefroy, Justine
2017-01-01
Hot plate methods under steady state conditions are based on a 1D model to estimate the thermal conductivity, using measurements of the temperatures T0 and T1 of the two sides of the sample and of the heat flux crossing it. To be consistent with the hypothesis of a 1D heat flux, either a guarded hot plate apparatus is used, or the temperature is measured at the centre of the sample. On one hand, the latter method can be used only if the thickness/width ratio of the sample is sufficiently low; on the other hand, the guarded hot plate method requires large-width samples (typical cross-section of 0.6 × 0.6 m²). That is why neither method can be used for small-width samples. The method presented in this paper is based on an optimal choice of the temperatures T0 and T1 relative to the ambient temperature Ta, enabling the estimation of the thermal conductivity with a centered hot plate method by applying the 1D heat flux model. It is shown that these optimal values do not depend on the size or the thermal conductivity of the samples (in the range 0.015-0.2 W m⁻¹ K⁻¹), but only on Ta. The experimental results obtained validate the method for several reference samples for values of the thickness/width ratio up to 0.3, thus enabling the measurement of the thermal conductivity of samples having a small cross-section, down to 0.045 × 0.045 m².
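In the simplest steady-state case, the 1D model the estimation rests on reduces to λ = q·e/(T0 − T1). The snippet below just evaluates that relation with invented example values; the method above additionally optimizes the choice of T0 and T1 relative to the ambient temperature.

```python
# One-dimensional steady-state hot-plate estimate: thermal conductivity from
# the heat flux density crossing the sample, its thickness, and the
# temperature difference between its faces. Example values are invented.
q = 40.0               # heat flux density through the sample, W m^-2
e = 0.02               # sample thickness, m
T0, T1 = 35.0, 15.0    # hot-side and cold-side temperatures, degrees C

lam = q * e / (T0 - T1)
print(f"thermal conductivity = {lam:.3f} W m^-1 K^-1")
```

The resulting 0.04 W m⁻¹ K⁻¹ falls in the insulating-material range the paper targets.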
Detection and monitoring of invasive exotic plants: a comparison of four sampling methods
Cynthia D. Huebner
2007-01-01
The ability to detect and monitor exotic invasive plants is likely to vary depending on the sampling method employed. Methods with strong qualitative thoroughness for species detection often lack the intensity necessary to monitor vegetation change. Four sampling methods (systematic plot, stratified-random plot, modified Whittaker, and timed meander) in hemlock and red...
NASA Astrophysics Data System (ADS)
Ma, Yinbiao; Wei, Xiaojuan
2017-04-01
A novel method for the determination of platinum in waste platinum-loaded carbon catalyst samples was established by inductively coupled plasma optical emission spectrometry after the samples were digested with aqua regia in a microwave oven. Experimental conditions such as the sample digestion method, digestion time, digestion temperature and the influence of interfering ions on the determination were investigated. Under the optimized conditions, the linear range of the calibration graph for Pt was 0-200.00 mg L⁻¹, and the recovery was 95.67%-104.29%. The relative standard deviation (RSD) for Pt was 1.78%. The proposed method was applied to the same samples determined by atomic absorption spectrometry, with consistent results, and is suitable for the determination of platinum in waste platinum-loaded carbon catalyst samples.
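The external-calibration readout such a determination relies on can be sketched as a straight-line fit through standards followed by back-calculation of the sample concentration. Every intensity, the sample reading, and the spike level below are fabricated for illustration, not taken from the study.

```python
# Minimal external-calibration sketch: fit intensity vs. concentration for
# the standards, then read the sample concentration back from its intensity.
standards = [(0.0, 0.1), (50.0, 250.3), (100.0, 500.1), (200.0, 1000.4)]  # (mg/L, counts)

n = len(standards)
sx = sum(c for c, _ in standards)
sy = sum(i for _, i in standards)
sxx = sum(c * c for c, _ in standards)
sxy = sum(c * i for c, i in standards)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # least-squares slope
intercept = (sy - slope * sx) / n                   # least-squares intercept

sample_intensity = 375.0
conc = (sample_intensity - intercept) / slope       # back-calculated Pt, mg/L
recovery = 100.0 * conc / 75.0                      # vs. a hypothetical 75 mg/L spike
print(f"Pt = {conc:.1f} mg/L, recovery = {recovery:.1f}%")
```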
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a given time constraint. Several simulation methods are currently available. They include logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods; then we propose an improved importance sampling algorithm called the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and the evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
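The likelihood weighting baseline mentioned above can be shown on a deliberately tiny example: a two-node discrete network Cloudy → Rain with invented CPTs (not the paper's 16-node Gaussian or 6-node hybrid models). Instead of rejecting samples that disagree with the evidence, each sample is weighted by the likelihood of the evidence given its parents.

```python
import random

# Likelihood weighting on a two-node network Cloudy -> Rain: estimate
# P(Cloudy=True | Rain=True). CPTs are illustrative assumptions.
random.seed(1)
P_cloudy = 0.5
P_rain_given = {True: 0.8, False: 0.2}   # P(Rain=True | Cloudy)

num, den = 0.0, 0.0
for _ in range(100000):
    cloudy = random.random() < P_cloudy   # sample the non-evidence node
    weight = P_rain_given[cloudy]         # weight = P(evidence | parents)
    den += weight
    if cloudy:
        num += weight

print(f"P(Cloudy | Rain) ~= {num / den:.3f}")   # exact posterior is 0.8
```

By Bayes' rule the exact answer is 0.5·0.8 / (0.5·0.8 + 0.5·0.2) = 0.8, and the weighted estimate converges to it without discarding any samples.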
Electrodeposition as an alternate method for preparation of environmental samples for iodide by AMS
Adamic, M. L.; Lister, T. E.; Dufek, E. J.; ...
2015-03-25
This paper presents an evaluation of an alternate method for preparing environmental samples for 129I analysis by accelerator mass spectrometry (AMS) at Idaho National Laboratory. The optimal sample preparation method is characterized by ease of preparation, capability of processing very small quantities of iodide, and ease of loading into a cathode. Electrodeposition of iodide on a silver wire was evaluated using these criteria. This study indicates that the electrochemically-formed silver iodide deposits produce ion currents similar to those from precipitated silver iodide for the same sample mass. Furthermore, precipitated silver iodide samples are usually mixed with niobium or silver powder prior to loading in a cathode. Using electrodeposition, the silver is already mixed with the sample and can simply be picked up with tweezers, placed in the sample die, and pressed into a cathode. The major advantage of this method is that the silver wire/electrodeposited silver iodide is much easier to load into a cathode.
Evaluation of passive samplers for the collection of dissolved organic matter in streams.
Warner, Daniel L; Oviedo-Vargas, Diana; Royer, Todd V
2015-01-01
Traditional sampling methods for dissolved organic matter (DOM) in streams limit opportunities for long-term studies due to time and cost constraints. Passive DOM samplers were constructed following a design proposed previously which utilizes diethylaminoethyl (DEAE) cellulose as a sampling medium, and they were deployed throughout a temperate stream network in Indiana. Two deployments of the passive samplers were conducted, during which grab samples were frequently collected for comparison. Differences in DOM quality between sites and sampling methods were assessed using several common optical analyses. The analyses revealed significant differences in optical properties between sampling methods, with the passive samplers preferentially collecting terrestrial, humic-like DOM. We assert that the differences in DOM composition from each sampling method were caused by preferential binding of complex humic compounds to the DEAE cellulose in the passive samplers. Nonetheless, the passive samplers may provide a cost-effective, integrated sample of DOM in situations where the bulk DOM pool is composed mainly of terrestrial, humic-like compounds.
Viability qPCR, a new tool for Legionella risk management.
Lizana, X; López, A; Benito, S; Agustí, G; Ríos, M; Piqué, N; Marqués, A M; Codony, F
2017-11-01
Viability quantitative Polymerase Chain Reaction (v-qPCR) is a recent analytical approach for detecting only live microorganisms by DNA amplification-based methods. This approach is based on the use of a reagent that irreversibly fixes the DNA of dead cells. In this study, we evaluate the utility of v-qPCR versus the culture method for legionellosis risk management. The present study was performed on 116 real samples. Water samples were simultaneously analysed by culture, v-qPCR and qPCR methods. Results were compared by means of a non-parametric test. In 11.6% of samples, both methods (culture and v-qPCR) gave positive results; in 50.0% of samples, both methods gave negative results. As expected, equivalence between the methods was not observed in all cases: 32.1% of samples gave positive results by v-qPCR, and all of these were negative by culture. Only in 6.3% of samples, with very low Legionella levels, was the culture positive and the v-qPCR negative. In 3.5% of samples, overgrowth of other bacteria prevented culture. When comparing both methods, significant differences between culture and v-qPCR were found in the samples belonging to the cooling towers-evaporative condensers group, where the v-qPCR method detected a greater presence and higher concentrations of Legionella spp. (p<0.001). Otherwise, no significant differences between methods were found in the rest of the groups. The v-qPCR method can be used as a quick tool to evaluate legionellosis risk, especially in cooling towers-evaporative condensers, where this technique can detect higher levels than culture. The combined interpretation of PCR results along with the ratio of live cells is proposed as a tool for understanding the sample context and estimating the legionellosis risk potential according to 4 levels of hierarchy. Copyright © 2017 Elsevier GmbH. All rights reserved.
[Free crystalline silica: a comparison of methods for its determination in total dust].
Maciejewska, Aleksandra; Szadkowska-Stańczyk, Irena; Kondratowicz, Grzegorz
2005-01-01
The major objective of the study was to compare and investigate the usefulness of quantitative analyses of free crystalline silica (FCS) in the assessment of dust exposure in samples of total dust of varied composition, using three methods: chemical method in common use in Poland; infrared spectrometry; and x-ray powder diffraction. Mineral composition and FCS contents were investigated in 9 laboratory samples of raw materials, materials, and industrial wastes, containing from about 2 to over 80% of crystalline silica and reduced to particles of size corresponding with that of total dust. Sample components were identified using XRD and FT-IR methods. Ten independent determinations of FCS with each of the three study methods were performed in dust samples. An analysis of linear correlation was applied to investigate interrelationship between mean FCS determinations. In analyzed dust samples, along with silica dust there were numerous minerals interfering with silica during the quantitative analysis. Comparison of mean results of FCS determinations showed that the results obtained using the FT-IR method were by 12-13% lower than those obtained with two other methods. However, the differences observed were within the limits of changeability of results associated with their precision and dependence on reference materials used. Assessment of occupational exposure to dusts containing crystalline silica can be performed on the basis of quantitative analysis of FCS in total dusts using each of the compared methods. The FT-IR method is most appropriate for the FCS determination in samples of small amount of silica or collected at low dust concentrations; the XRD method for the analysis of multicomponent samples; and the chemical method in the case of medium and high FCS contents in samples or high concentrations of dusts in the work environment.
Feng, Shu; Gale, Michael J.; Fay, Jonathan D.; Faridi, Ambar; Titus, Hope E.; Garg, Anupam K.; Michaels, Keith V.; Erker, Laura R.; Peters, Dawn; Smith, Travis B.; Pennesi, Mark E.
2015-01-01
Purpose: To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Methods: Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method), the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method), and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Results: Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. Conclusions: We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population. PMID:26325414
Comparing the NIOSH Method 5040 to a Diesel Particulate Matter Meter for Elemental Carbon
NASA Astrophysics Data System (ADS)
Ayers, David Matthew
Introduction: The sampling of elemental carbon has been associated with monitoring exposures in the trucking and mining industries. Recently, in the field of engineered nanomaterials, single-wall and multi-wall carbon nanotubes (MWCNTs) are being produced in ever increasing quantities. The only approved atmospheric sampling method for multi-wall carbon nanotubes is NIOSH Method 5040. Its results are accurate but can take up to 30 days to be received. Objectives: Compare the results of elemental carbon sampling from NIOSH Method 5040 to a Diesel Particulate Matter (DPM) Meter. Methods: MWCNTs were transferred and weighed between several trays placed on a scale. The NIOSH Method 5040 and DPM sampling train was hung 6 inches above the receiving tray. The transferring and weighing of the MWCNTs created an aerosol containing elemental carbon. Twenty-one total samples using both meter types were collected. Results: The assumptions for a Two-Way ANOVA were violated; therefore, Mann-Whitney U Tests and a Kruskal-Wallis Test were performed. The hypotheses for both research questions were rejected. There was a significant difference in the EC concentrations obtained by NIOSH Method 5040 and the DPM meter. There were also significant differences in elemental carbon concentrations when sampled using a DPM meter versus a sampling pump at each of the three concentration levels (low, medium, and high). Conclusions: The differences in the EC concentrations were statistically significant; therefore, the two methods (NIOSH Method 5040 and DPM) are not equivalent. NIOSH Method 5040 should remain the only authorized method of establishing an EC concentration for MWCNTs until a MWCNT-specific method or an instantaneous meter is developed.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
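The three recording rules compared in this simulation study can be sketched in a few lines of Python (an illustrative toy with arbitrary parameter values, not the authors' program):

```python
import random

def merge(intervals):
    """Merge overlapping [start, end) intervals into a disjoint union."""
    merged = []
    for s, e in sorted(intervals):
        if merged and s <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], e)
        else:
            merged.append([s, e])
    return merged

def simulate(obs_period=600.0, interval=10.0, n_events=20, event_dur=3.0, seed=1):
    """Score one observation period with three interval sampling rules."""
    rng = random.Random(seed)
    raw = [(s, s + event_dur)
           for s in (rng.uniform(0, obs_period - event_dur) for _ in range(n_events))]
    events = merge(raw)

    def occupied(t0, t1):
        # total event time falling inside [t0, t1)
        return sum(max(0.0, min(e, t1) - max(s, t0)) for s, e in events)

    n_int = int(obs_period // interval)
    mts = pir = wir = 0
    for i in range(n_int):
        t0, t1 = i * interval, (i + 1) * interval
        occ = occupied(t0, t1)
        # momentary time sampling: is an event ongoing at the end of the interval?
        mts += any(s <= t1 - 1e-9 < e for s, e in events)
        # partial-interval recording: any event time at all within the interval
        pir += occ > 0
        # whole-interval recording: event time fills the entire interval
        wir += occ >= interval - 1e-9

    true_prop = occupied(0, obs_period) / obs_period
    return {"true": true_prop, "MTS": mts / n_int,
            "PIR": pir / n_int, "WIR": wir / n_int}

print(simulate())
```

By construction, whole-interval recording can never exceed the true prevalence and partial-interval recording can never fall below it, which is one of the systematic biases the simulation study quantifies; momentary time sampling is unbiased in expectation but noisier for short observation periods.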
Preparation of bone samples in the Gliwice Radiocarbon Laboratory for AMS radiocarbon dating.
Piotrowska, N; Goslar, T
2002-12-01
In the Gliwice Radiocarbon Laboratory, a system for preparation of samples for AMS dating has been built. At first it was used to produce graphite targets from plant macrofossils and sediments. In this study we extended its capabilities to the preparation of bones. We tested three methods: the first was the classical Longin method of collagen extraction; the second included additional treatment of powdered bone in alkali solution; and in the third, carboxyl carbon was separated from amino acids obtained after hydrolysis of protein. The suitability of the methods was tested on 2 bone samples. Most of our samples gave ages > 40 kyr BP, suggesting good performance of the adapted methods, except for one sample prepared with the simple Longin method. For routine preparation of bones we chose the Longin method with additional alkali treatment.
Local Feature Selection for Data Classification.
Armanfard, Narges; Reilly, James P; Komeili, Majid
2016-06-01
Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.
NASA Astrophysics Data System (ADS)
Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke
2017-08-01
In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consisted of a specially designed sampling cell, including a rigid sample container and flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The LVCC sampling technique allowed a large, alterable and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during the LVCC sampling process were efficiently merged in one step using bromine-thiourea and OPA-NH4+ strategies for ethylene and SO2 respectively, which made the LVCC sampling technique conveniently adaptable to subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to rapid analysis of trace ethylene and SO2 from fruits. Trace ethylene and SO2 in real fruit samples could be accurately quantified by this method. The minor concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were shown to be < 4.3% and 2.1% respectively. Good recoveries for ethylene and sulfur dioxide from fruit samples were achieved in the range of 95.0-101% and 97.0-104% respectively. It is expected that the portable LVCC sampling technique will pave the way for rapid on-site analysis of accurate concentrations of trace gas targets from real samples by SERS.
Ort, Christoph; Lawrence, Michael G; Rieckermann, Jörg; Joss, Adriano
2010-08-15
The analysis of 87 peer-reviewed journal articles reveals that sampling for pharmaceuticals and personal care products (PPCPs) and illicit drugs in sewers and sewage treatment plant influents is mostly carried out according to existing tradition or standard laboratory protocols. Less than 5% of all studies explicitly consider internationally acknowledged guidelines or methods for the experimental design of monitoring campaigns. In the absence of a proper analysis of the system under investigation, the importance of short-term pollutant variations was typically not addressed. Therefore, due to relatively long sampling intervals, potentially inadequate sampling modes, or insufficient documentation, it remains unclear for the majority of reviewed studies whether observed variations can be attributed to "real" variations or if they simply reflect sampling artifacts. Based on results from previous and current work, the present paper demonstrates that sampling errors can lead to overinterpretation of measured data and ultimately, wrong conclusions. Depending on catchment size, sewer type, sampling setup, substance of interest, and accuracy of analytical method, avoidable sampling artifacts can range from "not significant" to "100% or more" for different compounds even within the same study. However, in most situations sampling errors can be reduced greatly, and sampling biases can be eliminated completely, by choosing an appropriate sampling mode and frequency. This is crucial, because proper sampling will help to maximize the value of measured data for the experimental assessment of the fate of PPCPs as well as for the formulation and validation of mathematical models. The trend from reporting presence or absence of a compound in "clean" water samples toward the quantification of PPCPs in raw wastewater requires not only sophisticated analytical methods but also adapted sampling methods. 
With increasing accuracy of chemical analyses, inappropriate sampling increasingly represents the major source of inaccuracy. A condensed step-by-step Sampling Guide is proposed as a starting point for future studies.
Fucci, Nadia; Gambelunghe, Cristiana; Aroni, Kyriaki; Rossi, Riccardo
2014-12-01
Because levamisole has been increasingly found as a component of illicit drugs, a robust method to detect its presence in hair samples is needed. However, no systematic research on the detection of levamisole in hair samples has been published. The method presented here uses direct immersion solid-phase microextraction coupled with gas chromatography and mass spectrometry (DI-SPME-GC/MS) to detect levamisole and minor cocaine congeners in hair samples using a single-extraction method. Fifty hair samples taken in the last 4 years were obtained from cocaine abusers, along with controls taken from drug-free volunteers. Sampling was performed using direct immersion with a 30-μm polydimethylsiloxane fused silica/stainless steel fiber. Calibration curves were prepared by adding known amounts of analytes and deuterated internal standards to the hair samples taken from drug-free volunteers. This study focused on the adulterant levamisole and some minor cocaine congeners (tropococaine, norcocaine, and cocaethylene). Levamisole was detected in 38% of the hair samples analyzed; its concentration ranged from 0.2 to 0.8 ng/mg. The limit of quantification and limit of detection for levamisole, tropococaine, norcocaine, and cocaine were 0.2 and 0.1 ng/mg, respectively. DI-SPME-GC/MS is a sensitive and specific method to detect the presence of levamisole and cocaine congeners in hair samples.
Dümichen, Erik; Eisentraut, Paul; Bannick, Claus Gerhard; Barthel, Anne-Kathrin; Senz, Rainer; Braun, Ulrike
2017-05-01
In order to determine the relevance of microplastic particles in various environmental media, comprehensive investigations are needed. However, no analytical method exists for fast identification and quantification. At present, optical spectroscopy methods like IR and Raman imaging are used. Due to their time-consuming procedures and uncertain extrapolation, reliable monitoring is difficult. For analyzing polymers, Py-GC-MS is a standard method. However, due to a limited sample amount of about 0.5 mg, it is not suited for analysis of complex sample mixtures like environmental samples. Therefore, we developed a new thermoanalytical method as a first step for identifying microplastics in environmental samples. A sample amount of about 20 mg, which assures the homogeneity of the sample, is subjected to complete thermal decomposition. The specific degradation products of the respective polymer are adsorbed on a solid-phase adsorber and subsequently analyzed by thermal desorption gas chromatography mass spectrometry. For certain identification, the specific degradation products for the respective polymer were selected first. Afterwards, real environmental samples from aquatic (three different rivers) and terrestrial (biogas plant) systems were screened for microplastics. Mainly polypropylene (PP), polyethylene (PE) and polystyrene (PS) were identified in the samples from the biogas plant, and PE and PS in those from the rivers. However, this was only the first step, and quantification measurements will follow. Copyright © 2017 Elsevier Ltd. All rights reserved.
Galea, Karen S; McGonagle, Carolyn; Sleeuwenhoek, Anne; Todd, David; Jiménez, Araceli Sánchez
2014-06-01
Dermal exposure to drilling fluids and crude oil is an exposure route of concern. However, there have been no published studies describing sampling methods or reporting dermal exposure measurements. We describe a study that aimed to evaluate a wipe sampling method to assess dermal exposure to an oil-based drilling fluid and crude oil, as well as to investigate the feasibility of using an interception cotton glove sampler for exposure on the hands/wrists. A direct comparison of the wipe and interception methods was also completed using pigs' trotters as a surrogate for human skin and a direct surface contact exposure scenario. Overall, acceptable recovery and sampling efficiencies were reported for both methods, and both methods had satisfactory storage stability at 1 and 7 days, although there appeared to be some loss over 14 days. The methods' comparison study revealed significantly higher removal of both fluids from the metal surface with the glove samples compared with the wipe samples (on average 2.5 times higher). Both evaluated sampling methods were found to be suitable for assessing dermal exposure to oil-based drilling fluids and crude oil; however, the comparison study clearly illustrates that glove samplers may overestimate the amount of fluid transferred to the skin. Further comparison of the two dermal sampling methods using additional exposure situations such as immersion or deposition, as well as a field evaluation, is warranted to confirm their appropriateness and suitability in the working environment. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Method of analysis of asbestiform minerals by thermoluminescence
Fisher, Gerald L.; Bradley, Edward W.
1980-01-01
A method for the qualitative and quantitative analysis of asbestiform minerals, including the steps of subjecting a sample to be analyzed to the thermoluminescent analysis, annealing the sample, subjecting the sample to ionizing radiation, and subjecting the sample to a second thermoluminescent analysis. Glow curves are derived from the two thermoluminescent analyses and their shapes then compared to established glow curves of known asbestiform minerals to identify the type of asbestiform in the sample. Also, during at least one of the analyses, the thermoluminescent response for each sample is integrated during a linear heating period of the analysis in order to derive the total thermoluminescence per milligram of sample. This total is a measure of the quantity of asbestiform in the sample and may also be used to identify the source of the sample.
Approximation of the exponential integral (well function) using sampling methods
NASA Astrophysics Data System (ADS)
Baalousha, Husam Musa
2015-04-01
The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology, such as the leaky aquifer integral.
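The idea of approximating the well function with stratified random draws can be reproduced in miniature (a minimal one-dimensional Latin Hypercube sketch; the paper's OA and OA-LH designs and its exact sampling setup are not reproduced here):

```python
import math
import random

def e1_lhs(u, n=20000, seed=7):
    """Approximate the well function E1(u) = ∫_u^∞ e^(-t)/t dt.

    The substitution t = u/v maps the integral onto the unit interval,
    E1(u) = ∫_0^1 exp(-u/v)/v dv, which is then estimated by
    one-dimensional Latin Hypercube Sampling: one uniform draw from
    each of n equal-width strata of [0, 1].
    """
    rng = random.Random(seed)
    total = 0.0
    for i in range(n):
        v = (i + rng.random()) / n      # one sample per stratum
        total += math.exp(-u / v) / v
    return total / n

print(e1_lhs(1.0))    # E1(1) ≈ 0.21938
```

Because the stratification forces exactly one draw per subinterval, the estimate converges far faster than plain Monte Carlo on the same number of function evaluations, which is the behaviour the paper exploits.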
Accuracy or precision: Implications of sample design and methodology on abundance estimation
Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.
2015-01-01
Sampling by spatially replicated counts (point-counts) is an increasingly popular method of estimating the population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of the number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than those derived from sample scenarios with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; in practice, however, this consideration is often an afterthought that occurs during data analysis.
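The few-large versus many-small design tension can be illustrated with a toy design-based simulation (an illustrative sketch with arbitrary parameter values, not the paper's N-mixture analysis, which additionally models imperfect detection):

```python
import random

def clustered_population(n_clusters=10, per_cluster=20, spread=0.02, seed=3):
    """Scatter organisms in tight clusters on the unit square (toy process)."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n_clusters):
        cx, cy = rng.random(), rng.random()
        for _ in range(per_cluster):
            pts.append(((cx + rng.gauss(0, spread)) % 1.0,
                        (cy + rng.gauss(0, spread)) % 1.0))
    return pts

def estimate(pts, n_units, unit_side, rng):
    """Estimate abundance from randomly placed square sample units.

    Quadrats wrap around the square's edges (toroidal) so every point has
    the same inclusion probability and the estimator is exactly unbiased.
    """
    count = 0
    for _ in range(n_units):
        x0, y0 = rng.random(), rng.random()
        count += sum(1 for x, y in pts
                     if (x - x0) % 1.0 < unit_side and (y - y0) % 1.0 < unit_side)
    return count / (n_units * unit_side ** 2)   # estimated abundance on the unit square

def compare(pts, reps=1000, seed=11):
    """Repeat both designs many times; report mean and spread of the estimates."""
    rng = random.Random(seed)
    designs = {"many-small": (25, 0.04), "few-large": (1, 0.20)}  # equal total area
    out = {}
    for name, (n_units, side) in designs.items():
        ests = [estimate(pts, n_units, side, rng) for _ in range(reps)]
        mean = sum(ests) / reps
        sd = (sum((e - mean) ** 2 for e in ests) / reps) ** 0.5
        out[name] = (round(mean, 1), round(sd, 1))
    return out

pop = clustered_population()        # true abundance: 10 * 20 = 200
results = compare(pop)
print(results)
```

Both designs are unbiased on average here, but their standard deviations differ for a clustered population; which design is more precise depends on the unit size relative to the cluster size, which is the accuracy/precision trade-off the abstract describes.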
Optical method for the characterization of laterally-patterned samples in integrated circuits
Maris, Humphrey J.
2001-01-01
Disclosed is a method for characterizing a sample having a structure disposed on or within the sample, comprising the steps of applying a first pulse of light to a surface of the sample for creating a propagating strain pulse in the sample, applying a second pulse of light to the surface so that the second pulse of light interacts with the propagating strain pulse in the sample, sensing from a reflection of the second pulse a change in optical response of the sample, and relating a time of occurrence of the change in optical response to at least one dimension of the structure.
Maris, Humphrey J.
2008-03-04
Disclosed is a method for characterizing a sample having a structure disposed on or within the sample, comprising the steps of applying a first pulse of light to a surface of the sample for creating a propagating strain pulse in the sample, applying a second pulse of light to the surface so that the second pulse of light interacts with the propagating strain pulse in the sample, sensing from a reflection of the second pulse a change in optical response of the sample, and relating a time of occurrence of the change in optical response to at least one dimension of the structure.
Optical method for the characterization of laterally-patterned samples in integrated circuits
Maris, Humphrey J.
2010-08-24
Disclosed is a method for characterizing a sample having a structure disposed on or within the sample, comprising the steps of applying a first pulse of light to a surface of the sample for creating a propagating strain pulse in the sample, applying a second pulse of light to the surface so that the second pulse of light interacts with the propagating strain pulse in the sample, sensing from a reflection of the second pulse a change in optical response of the sample, and relating a time of occurrence of the change in optical response to at least one dimension of the structure.
Optical method for the characterization of laterally patterned samples in integrated circuits
Maris, Humphrey J [Barrington, RI
2009-03-17
Disclosed is a method for characterizing a sample having a structure disposed on or within the sample, comprising the steps of applying a first pulse of light to a surface of the sample for creating a propagating strain pulse in the sample, applying a second pulse of light to the surface so that the second pulse of light interacts with the propagating strain pulse in the sample, sensing from a reflection of the second pulse a change in optical response of the sample, and relating a time of occurrence of the change in optical response to at least one dimension of the structure.
Maris, Humphrey J [Barrington, RI
2011-02-22
Disclosed is a method for characterizing a sample having a structure disposed on or within the sample, comprising the steps of applying a first pulse of light to a surface of the sample for creating a propagating strain pulse in the sample, applying a second pulse of light to the surface so that the second pulse of light interacts with the propagating strain pulse in the sample, sensing from a reflection of the second pulse a change in optical response of the sample, and relating a time of occurrence of the change in optical response to at least one dimension of the structure.
Palinkas, Lawrence A; Horwitz, Sarah M; Green, Carla A; Wisdom, Jennifer P; Duan, Naihua; Hoagwood, Kimberly
2015-09-01
Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.
Sample normalization methods in quantitative metabolomics.
Wu, Yiman; Li, Liang
2016-01-22
To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each one of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can be significantly different from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real world metabolomics applications. Sample normalization has sometimes been ignored in metabolomics, partially due to the lack of a convenient means of performing it. We show that several methods are now available, and sample normalization should be performed in quantitative metabolomics where the analyzed samples have significant variations in total sample amounts. Copyright © 2015 Elsevier B.V. All rights reserved.
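Two commonly cited normalization approaches of the kind this review discusses can be sketched as follows (an illustrative sketch; the function names are ours, and the review may cover a different set of methods): total-sum normalization, and probabilistic quotient normalization (PQN), which scales each sample by the median ratio of its features to a reference profile:

```python
import statistics

def total_sum_normalize(samples):
    """Scale each sample (a list of feature intensities) to sum to 1."""
    return [[v / sum(s) for v in s] for s in samples]

def pqn_normalize(samples):
    """Probabilistic quotient normalization.

    Each sample is divided by the median ratio of its features to a
    reference profile (here, the feature-wise median across samples),
    a common way to correct for total-amount (dilution) effects.
    """
    n_feat = len(samples[0])
    ref = [statistics.median(s[j] for s in samples) for j in range(n_feat)]
    out = []
    for s in samples:
        q = statistics.median(v / r for v, r in zip(s, ref) if r > 0)
        out.append([v / q for v in s])
    return out

# Two identical biological profiles, one measured at half the concentration:
a = [10.0, 20.0, 30.0, 40.0]
b = [5.0, 10.0, 15.0, 20.0]
print(pqn_normalize([a, b]))   # the dilution factor is removed
```

After PQN the two profiles coincide, because the only difference between them was a constant dilution factor, which is exactly the effect sample normalization is meant to remove.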
Improved Sampling Method Reduces Isokinetic Sampling Errors.
ERIC Educational Resources Information Center
Karels, Gale G.
The particulate sampling system currently in use by the Bay Area Air Pollution Control District, San Francisco, California is described in this presentation for the 12th Conference on Methods in Air Pollution and Industrial Hygiene Studies, University of Southern California, April, 1971. The method represents a practical, inexpensive tool that can…
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2011 CFR
2011-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
Investigations at hazardous waste sites and sites of chemical spills often require on-site measurements and sampling activities to assess the type and extent of contamination. This document is a compilation of sampling methods and materials suitable to address most needs that ari...
Absolute method of measuring magnetic susceptibility
Thorpe, A.; Senftle, F.E.
1959-01-01
An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
Sampling and estimating recreational use.
Timothy G. Gregoire; Gregory J. Buhyoff
1999-01-01
Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.
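The one-stage design described above reduces to the classic expansion estimator: sample days at random, then scale the sample total by the inverse sampling fraction. The sketch below is an illustrative stand-in (the function name and toy counts are hypothetical), not the paper's estimators for multiple-access sites.

```python
import random

def estimate_total_use(daily_counts, sample_size, seed=0):
    """Estimate season-total visits from a simple random sample of days.

    Expansion estimator: (population size / sample size) * sample sum.
    A one-stage sketch; two-stage and multiple-access-site designs
    (covered in the paper) need additional stages of selection.
    """
    rng = random.Random(seed)
    days = rng.sample(range(len(daily_counts)), sample_size)
    return len(daily_counts) / sample_size * sum(daily_counts[d] for d in days)

# Toy season of 90 days with a constant 10 visits per day:
season = [10] * 90
estimate = estimate_total_use(season, sample_size=30)  # 900.0 exactly here
```

With constant daily counts the estimate is exact; with variable counts it is unbiased, and its variance shrinks as the sampled fraction grows.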
REPRESENTATIVE SAMPLING AND ANALYSIS OF HETEROGENEOUS SOILS
Standard sampling and analysis methods for hazardous substances in contaminated soils currently are available and routinely employed. Standard methods inherently assume a homogeneous soil matrix and contaminant distribution; therefore only small sample quantities typically are p...
Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella
2016-12-09
Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset which consists of four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features for reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to address the problem of imbalanced datasets. An ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps. The first step (sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlapping produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results showed that the proposed model performed well in classifying the unknown samples according to all toxic effects in the imbalanced datasets.
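Random Over-Sampling, the simplest of the rebalancing methods named above, just duplicates minority-class examples until the classes have equal size. The sketch below is a generic illustration of that idea, not the proposed ITS method; all names and the toy data are hypothetical.

```python
import random

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples at random until all classes balance.

    Generic Random Over-Sampling sketch; it does not reproduce the
    iterative prior modification or data cleaning steps of ITS.
    """
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    X_out, y_out = [], []
    for label, rows in by_class.items():
        # Top up each class with random (repeated) draws from itself.
        resampled = rows + [rng.choice(rows) for _ in range(target - len(rows))]
        X_out.extend(resampled)
        y_out.extend([label] * len(resampled))
    return X_out, y_out

# Toy imbalanced data: 1 "toxic" vs 5 "safe" examples.
X = [[0], [1], [2], [3], [4], [5]]
y = ["toxic", "safe", "safe", "safe", "safe", "safe"]
Xb, yb = random_oversample(X, y)  # both classes now have 5 examples
```

Random Under-Sampling is the mirror image (discard majority examples down to the minority size); SMOTE instead synthesizes new minority points by interpolating between neighbors.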
A study of active learning methods for named entity recognition in clinical text.
Chen, Yukun; Lasko, Thomas A; Mei, Qiaozhu; Denny, Joshua C; Xu, Hua
2015-12-01
Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large amounts of annotated samples, which are expensive to build due to the requirement of domain experts in annotation. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task to identify concepts of medical problems, treatments, and lab tests from clinical notes. Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge, which contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three categories: uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with passive learning, which uses random sampling. Learning curves plotting the performance of the NER model against the estimated annotation cost (based on the number of sentences or words in the training set) were generated to evaluate the different active learning methods and passive learning, and the area under the learning curve (ALC) score was computed. Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best method based on uncertainty sampling could save 66% of annotations in sentences, as compared to random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC.
To achieve 0.80 in F-measure, in comparison to random sampling, the best uncertainty-based method saved 42% of annotations in words, whereas the best diversity-based method reduced annotation effort by only 7%. In the simulated setting, AL methods, particularly uncertainty-sampling-based approaches, seemed to significantly save annotation cost for the clinical NER task. The actual benefit of active learning in clinical NER should be further evaluated in a real-time setting. Copyright © 2015 Elsevier Inc. All rights reserved.
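The core loop of uncertainty sampling, the family that performed best above, can be sketched generically: score each unlabeled item by the entropy of the model's predicted label distribution and annotate the highest-entropy items first. The toy probability model below stands in for the NER classifier and is purely illustrative.

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted label distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def uncertainty_sample(pool, predict_proba, batch_size):
    """Pick the unlabeled items the model is least certain about.

    `pool` is a list of unlabeled items and `predict_proba` is any
    callable returning class probabilities -- stand-ins for the
    sentence pool and NER model of the study.
    """
    scored = sorted(pool, key=lambda x: entropy(predict_proba(x)), reverse=True)
    return scored[:batch_size]

# Toy binary model: items near 0.5 are the most uncertain.
fake_proba = lambda x: (x, 1 - x)
pool = [0.1, 0.5, 0.9, 0.45]
picked = uncertainty_sample(pool, fake_proba, batch_size=2)  # [0.5, 0.45]
```

In a real AL loop the picked items are sent to annotators, added to the training set, the model is retrained, and the pool is re-scored each round.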
NASA Astrophysics Data System (ADS)
Theunissen, Raf; Kadosh, Jesse S.; Allen, Christian B.
2015-06-01
Spatially varying signals are typically sampled by collecting uniformly spaced samples irrespective of the signal content. For signals with inhomogeneous information content, this leads to unnecessarily dense sampling in regions of low interest or insufficient sample density at important features, or both. A new adaptive sampling technique is presented directing sample collection in proportion to local information content, capturing adequately the short-period features while sparsely sampling less dynamic regions. The proposed method incorporates a data-adapted sampling strategy on the basis of signal curvature, sample space-filling, variable experimental uncertainty and iterative improvement. Numerical assessment has indicated a reduction in the number of samples required to achieve a predefined uncertainty level overall while improving local accuracy for important features. The potential of the proposed method has been further demonstrated on the basis of Laser Doppler Anemometry experiments examining the wake behind a NACA0012 airfoil and the boundary layer characterisation of a flat plate.
Burtis, C.A.; Johnson, W.F.; Walker, W.A.
1985-08-05
A rotor and disc assembly for use in a centrifugal fast analyzer. The assembly is designed to process multiple samples of whole blood followed by aliquoting of the resultant serum into precisely measured samples for subsequent chemical analysis. The assembly requires minimal operator involvement with no mechanical pipetting. The system comprises: (1) a whole blood sample disc; (2) a serum sample disc; (3) a sample preparation rotor; and (4) an analytical rotor. The blood sample disc and serum sample disc are designed with a plurality of precision bore capillary tubes arranged in a spoked array. Samples of blood are loaded into the blood sample disc by capillary action and centrifugally discharged into cavities of the sample preparation rotor where separation of serum and solids is accomplished. The serum is loaded into the capillaries of the serum sample disc by capillary action and subsequently centrifugally expelled into cuvettes of the analytical rotor for analysis by conventional methods. 5 figs.
Burtis, Carl A.; Johnson, Wayne F.; Walker, William A.
1988-01-01
A rotor and disc assembly for use in a centrifugal fast analyzer. The assembly is designed to process multiple samples of whole blood followed by aliquoting of the resultant serum into precisely measured samples for subsequent chemical analysis. The assembly requires minimal operator involvement with no mechanical pipetting. The system comprises (1) a whole blood sample disc, (2) a serum sample disc, (3) a sample preparation rotor, and (4) an analytical rotor. The blood sample disc and serum sample disc are designed with a plurality of precision bore capillary tubes arranged in a spoked array. Samples of blood are loaded into the blood sample disc in capillary tubes filled by capillary action and centrifugally discharged into cavities of the sample preparation rotor where separation of serum and solids is accomplished. The serum is loaded into the capillaries of the serum sample disc by capillary action and subsequently centrifugally expelled into cuvettes of the analytical rotor for analysis by conventional methods.
Piecewise SALT sampling for estimating suspended sediment yields
Robert B. Thomas
1989-01-01
A probability sampling method called SALT (Selection At List Time) has been developed for collecting and summarizing data on delivery of suspended sediment in rivers. It is based on sampling and estimating yield using a suspended-sediment rating curve for high discharges and simple random sampling for low flows. The method gives unbiased estimates of total yield and...
Sampling methods for amphibians in streams in the Pacific Northwest.
R. Bruce Bury; Paul Stephen Corn
1991-01-01
Methods describing how to sample aquatic and semiaquatic amphibians in small streams and headwater habitats in the Pacific Northwest are presented. We developed a technique that samples 10-meter stretches of selected streams, which was adequate to detect presence or absence of amphibian species and provided sample sizes statistically sufficient to compare abundance of...
Sampling estimators of total mill receipts for use in timber product output studies
John P. Brown; Richard G. Oderwald
2012-01-01
Data from the 2001 timber product output study for Georgia was explored to determine new methods for stratifying mills and finding suitable sampling estimators. Estimators for roundwood receipts totals comprised several types: simple random sample, ratio, stratified sample, and combined ratio. Two stratification methods were examined: the Dalenius-Hodges (DH) square...
Pamela J. Edwards; Karl W.J. Williard; James N. Kochenderfer
2004-01-01
Five methods for estimating maximum daily and annual nitrate (NO3) and suspended sediment loads using periodic sampling of varying intensities were compared to actual loads calculated from intensive stormflow and baseflow sampling from small, forested watersheds in north central West Virginia to determine if the less intensive sampling methods were accurate and could...
ERIC Educational Resources Information Center
Kogan, Steven M.; Wejnert, Cyprian; Chen, Yi-fu; Brody, Gene H.; Slater, LaTrina M.
2011-01-01
Obtaining representative samples from populations of emerging adults who do not attend college is challenging for researchers. This article introduces respondent-driven sampling (RDS), a method for obtaining representative samples of hard-to-reach but socially interconnected populations. RDS combines a prescribed method for chain referral with a…
Tao, Guohua; Miller, William H
2011-07-14
An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be applied generally to sampling rare events efficiently while avoiding becoming trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
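The underlying idea of weighting samples drawn from a proposal distribution can be illustrated with plain importance sampling: draw from a convenient distribution q, then reweight each draw by p/q so the average still estimates the target expectation. This sketch shows only that general estimator, not the SC-IVR time-dependent sampling function or global trial moves of the paper.

```python
import math
import random

def importance_estimate(f, sample_q, weight, n=100_000, seed=1):
    """Estimate E_p[f(X)] by drawing x ~ q and averaging f(x) * p(x)/q(x).

    `weight` must return the density ratio p(x)/q(x). Generic
    importance sampling, illustrative only.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample_q(rng)
        total += f(x) * weight(x)
    return total / n

# Estimate E[x^2] = 1 under N(0,1) while sampling from the wider N(0,2).
norm_pdf = lambda x, s: math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
est = importance_estimate(
    f=lambda x: x * x,
    sample_q=lambda rng: rng.gauss(0.0, 2.0),
    weight=lambda x: norm_pdf(x, 1.0) / norm_pdf(x, 2.0),
)
# est is close to 1.0, the variance of N(0,1)
```

A heavier-tailed proposal like this keeps the weights bounded; a proposal narrower than the target would make the estimator variance blow up.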
K-Nearest Neighbor Algorithm Optimization in Text Categorization
NASA Astrophysics Data System (ADS)
Chen, Shufeng
2018-01-01
K-Nearest Neighbor (KNN) classification is one of the simplest methods of data mining. It has been widely used in classification, regression and pattern recognition. The traditional KNN method has some shortcomings, such as a large amount of sample computation and strong dependence on the sample library capacity. In this paper, a method of representative sample optimization based on the CURE algorithm is proposed. On this basis, a quick algorithm, QKNN (Quick K-Nearest Neighbor), is presented to find the k nearest neighbor samples, which greatly reduces the similarity computation. The experimental results show that this algorithm can effectively reduce the number of samples and speed up the search for the k nearest neighbor samples, improving the performance of the algorithm.
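For reference, the brute-force KNN classification that QKNN accelerates looks like this: compute the distance from the query to every training sample and take a majority vote among the k closest. The CURE-based representative-sample reduction described in the abstract is not reproduced here; the data and names are illustrative.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training samples.

    `train` is a list of (point, label) pairs. This is the O(n) scan
    per query that motivates sample-reduction schemes like CURE/QKNN.
    """
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"),
         ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
label = knn_predict(train, (5.2, 5.1))  # "b": all 3 nearest neighbors are class b
```

Reducing `train` to a smaller set of representative samples leaves this function unchanged but shrinks the per-query scan, which is the speed-up the paper targets.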
Production and distribution of dilute species in semiconducting materials
James, Ralph B.; Camarda, Giuseppe; Bolotnikov, Aleksey E.; Hossain, Anwar; Yang, Ge; Kim, Kihyun
2016-09-06
Technologies are described that implement systems and methods of producing a material. The methods comprise receiving a tertiary semiconductor sample with a dilute species. The sample has two ends. The first end of the sample includes a first concentration of the dilute species lower than a second concentration of the dilute species in the second end of the sample. The method further comprises heating the sample in a chamber. The chamber has a first zone and a second zone, the first zone having a first temperature higher than a second temperature in the second zone. The sample is oriented such that the first end is in the first zone and the second end is in the second zone.
Berlinger, Balazs; Harper, Martin
2018-02-01
There is interest in the bioaccessible metal components of aerosols, but this has been minimally studied because standardized sampling and analytical methods have not yet been developed. An interlaboratory study (ILS) has been carried out to evaluate a method for determining the water-soluble component of realistic welding fume (WF) air samples. Replicate samples were generated in the laboratory and distributed to participating laboratories to be analyzed according to a standardized procedure. Within-laboratory precision of replicate sample analysis (repeatability) was very good. Reproducibility between laboratories was not as good, but within limits of acceptability for the analysis of typical aerosol samples. These results can be used to support the development of a standardized test method.
Protocol for Detection of Yersinia pestis in Environmental ...
Methods Report. This is the first open-access, detailed protocol available to all government departments and agencies, and their contractors, to detect Yersinia pestis, the pathogen that causes plague, in multiple environmental sample types including water. Each analytical method includes a step-by-step sample processing procedure for each sample type. The protocol covers real-time PCR, traditional microbiological culture, and the Rapid Viability PCR (RV-PCR) analytical methods. For large-volume water samples it also includes an ultrafiltration-based sample concentration procedure. Because the protocol is available without restriction to all government departments and agencies, and their contractors, the nation will now have increased laboratory capacity to analyze a large number of samples during a wide-area plague incident.
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is often unsatisfactory in practice. Inspired by this, we propose a new estimation method that incorporates the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios. For the first two scenarios, our method greatly improves on existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods for the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature.
We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
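A minimal sketch of the conversion problem described above: recover a usable mean and SD from a trial that reported only the median, minimum, maximum, and sample size. The mean uses the well-known (a + 2m + b)/4 form; for the SD, the code uses a sample-size-aware range divisor of the kind this line of work proposes, 2·Φ⁻¹((n − 0.375)/(n + 0.25)) — the exact constants are an assumption here, so treat this as illustrative rather than as the paper's definitive formula.

```python
from statistics import NormalDist

def estimate_mean_sd(a, m, b, n):
    """Estimate sample mean and SD from min (a), median (m), max (b), size n.

    Mean: (a + 2m + b) / 4.
    SD:   (b - a) divided by the expected standardized range of n normal
          draws, approximated as 2 * Phi^-1((n - 0.375) / (n + 0.25)).
    Assumes roughly normal data; constants are illustrative.
    """
    mean = (a + 2 * m + b) / 4
    xi = 2 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    return mean, (b - a) / xi

mean, sd = estimate_mean_sd(a=0, m=5, b=10, n=25)  # mean 5.0, sd around 2.5
```

The key improvement over a fixed range/4 rule is that the divisor grows with n: larger samples produce wider observed ranges, so the same range implies a smaller SD.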
Durbin, Gregory W; Salter, Robert
2006-01-01
The Ecolite High Volume Juice (HVJ) presence-absence method for a 10-ml juice sample was compared with the U.S. Food and Drug Administration Bacteriological Analytical Manual most-probable-number (MPN) method for analysis of artificially contaminated orange juices. Samples were added to Ecolite-HVJ medium and incubated at 35 degrees C for 24 to 48 h. Fluorescent blue results were positive for glucuronidase- and galactosidase-producing microorganisms, specifically indicative of about 94% of Escherichia coli strains. Four strains of E. coli were added to juices at concentrations of 0.21 to 6.8 CFU/ ml. Mixtures of enteric bacteria (Enterobacter plus Klebsiella, Citrobacter plus Proteus, or Hafnia plus Citrobacter plus Enterobacter) were added to simulate background flora. Three orange juice types were evaluated (n = 10) with and without the addition of the E. coli strains. Ecolite-HVJ produced 90 of 90 (10 of 10 samples of three juice types, each inoculated with three different E. coli strains) positive (blue-fluorescent) results with artificially contaminated E. coli that had MPN concentrations of <0.3 to 9.3 CFU/ml. Ten of 30 E. coli ATCC 11229 samples with MPN concentrations of <0.3 CFU/ml were identified as positive with Ecolite-HVJ. Isolated colonies recovered from positive Ecolite-HVJ samples were confirmed biochemically as E. coli. Thirty (10 samples each of three juice types) negative (not fluorescent) results were obtained for samples contaminated with only enteric bacteria and for uninoculated control samples. A juice manufacturer evaluated citrus juice production with both the Ecolite-HVJ and Colicomplete methods and recorded identical negative results for 95 20-ml samples and identical positive results for 5 20-ml samples artificially contaminated with E. coli. The Ecolite-HVJ method requires no preenrichment and subsequent transfer steps, which makes it a simple and easy method for use by juice producers.
2011-01-01
Background: Bioinformatics data analysis often uses a linear mixture model representing samples as an additive mixture of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results: The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions: We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness-constrained factorization on a sample-by-sample basis. As opposed to that, existing methods factorize the complete dataset simultaneously. The sample model is composed of a reference sample representing control and/or case (disease) groups and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific and not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated from each sample directly. Due to the locality of decomposition, the strength of the expression of each feature across the samples can vary, yet the features will still be allocated to the related disease and/or control specific component. Since label information is not used in the selection process, case and control specific components can be used for classification; that is not the case with standard factorization methods.
Moreover, the component selected by the proposed method as disease specific can be interpreted as a sub-mode and retained for further analysis to identify potential biomarkers. As opposed to standard matrix factorization methods, this can be achieved on a sample (experiment)-by-sample basis. Postulating one or more components with indifferent features enables their removal from disease and control specific components on a sample-by-sample basis. This yields selected components with reduced complexity and generally increases prediction accuracy. PMID:22208882
Active Search on Carcasses versus Pitfall Traps: a Comparison of Sampling Methods.
Zanetti, N I; Camina, R; Visciarelli, E C; Centeno, N D
2016-04-01
The study of insect succession in cadavers and the classification of arthropods have mostly been done by placing a carcass in a cage, protected from vertebrate scavengers, which is then visited periodically. An alternative is to use specific traps. Few studies on carrion ecology and forensic entomology involving the carcasses of large vertebrates have employed pitfall traps. The aims of this study were to compare both sampling methods (active search on a carcass and pitfall trapping) for each coleopteran family, and to establish whether there is a discrepancy (underestimation and/or overestimation) in the presence of each family by either method. A great discrepancy was found for almost all families, with some being more abundant in samples obtained through active search on carcasses and others in samples from traps, whereas two families did not show any bias towards a given sampling method. The fact that families may be underestimated or overestimated by the type of sampling technique highlights the importance of combining both methods, active search on carcasses and pitfall traps, in order to obtain more complete information on decomposition, carrion habitat and cadaveric families or species. Furthermore, a hypothesis is advanced on the reasons why either sampling method shows biases towards certain families. Information about the sampling techniques is provided, indicating which would be more appropriate to detect a particular family.
Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses
Lanfear, Robert; Hua, Xia; Warren, Dan L.
2016-01-01
Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
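For a scalar MCMC trace, the ESS concept described above has a standard estimator, ESS = N / (1 + 2·Σ ρ_k), where ρ_k are the lag-k autocorrelations, commonly truncated at the first non-positive lag. The sketch below implements that basic scalar version; extending ESS to tree topologies (the paper's contribution) requires first mapping topologies to scalar quantities such as pairwise distances.

```python
def effective_sample_size(x):
    """ESS of a scalar MCMC trace: N / (1 + 2 * sum of autocorrelations).

    Uses the common truncation at the first non-positive autocorrelation.
    Illustrative scalar version only; not the tree-topology ESS itself.
    """
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    var = sum(d * d for d in dev) / n
    if var == 0:
        return float(n)
    rho_sum = 0.0
    for lag in range(1, n):
        rho = sum(dev[i] * dev[i + lag] for i in range(n - lag)) / (n * var)
        if rho <= 0:
            break  # truncate once autocorrelation is no longer positive
        rho_sum += rho
    return n / (1 + 2 * rho_sum)

# An anticorrelated trace has ESS equal to its length; a trace stuck in
# one state for half the run has a much smaller ESS.
ess_good = effective_sample_size([0, 1] * 50)          # 100.0
ess_bad = effective_sample_size([0] * 50 + [1] * 50)   # far below 100
```

This is why phylogeneticists check ESS > 200 per parameter: a chain of 10,000 samples with heavy autocorrelation may carry the information of only a few dozen independent draws.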
Stout, P R; Horn, C K; Klette, K L
2001-10-01
In order to facilitate the confirmation analysis of large numbers of urine samples previously screened positive for delta9-tetrahydrocannabinol (THC), an extraction, derivatization, and GC-MS analysis method was developed. This method utilized a positive-pressure-manifold anion-exchange polymer-based solid-phase extraction followed by elution directly into the automated liquid sampling (ALS) vials. Rapid derivatization was accomplished using pentafluoropropionic anhydride/pentafluoropropanol (PFPA/PFPOH). Recoveries averaged 95% with a limit of detection of 0.875 ng/mL with a 3-mL sample volume. The performance of the 11-nor-delta9-tetrahydrocannabinol-9-carboxylic acid (THC-COOH)-d3 and THC-COOH-d9 internal standards was evaluated. The method was linear to 900 ng/mL THC-COOH using THC-COOH-d9, with negligible contribution from the internal standard to very weak samples. Excellent agreement was seen with previous quantitations of human urine samples. More than 1000 human urine samples were analyzed using the method, with 300 samples analyzed using an alternate qualifier ion (m/z 622) after some interference was observed with a qualifier ion (m/z 489). The 622 ion did not exhibit any interference, even in samples with interfering peaks present in the 489 ion. The method resulted in dramatic reductions in processing time, waste production, and exposure hazards to laboratory personnel.
Yebra, M Carmen
2012-01-01
A simple and rapid analytical method was developed for the determination of iron, manganese, and zinc in soluble solid samples. The method is based on continuous ultrasonic water dissolution of the sample (5-30 mg) at room temperature followed by flow injection flame atomic absorption spectrometric determination. A good precision of the whole procedure (1.2-4.6%) and a sample throughput of ca. 25 samples h⁻¹ were obtained. The proposed green analytical method has been successfully applied for the determination of iron, manganese, and zinc in soluble solid food samples (soluble cocoa and soluble coffee) and pharmaceutical preparations (multivitamin tablets). The ranges of concentrations found were 21.4-25.61 μg g⁻¹ for iron, 5.74-18.30 μg g⁻¹ for manganese, and 33.27-57.90 μg g⁻¹ for zinc in soluble solid food samples, and 3.75-9.90 μg g⁻¹ for iron, 0.47-5.05 μg g⁻¹ for manganese, and 1.55-15.12 μg g⁻¹ for zinc in multivitamin tablets. The accuracy of the proposed method was established by comparison with the conventional wet acid digestion method using a paired t-test, indicating the absence of systematic errors.
Galea, Karen S; Mueller, Will; Arfaj, Ayman M; Llamas, Jose L; Buick, Jennifer; Todd, David; McGonagle, Carolyn
2018-05-21
Crude oil may cause adverse dermal effects; therefore, dermal exposure is an exposure route of concern. Galea et al. (2014b) reported on a study comparing recovery (wipe) and interception (cotton glove) dermal sampling methods. The authors concluded that both methods were suitable for assessing dermal exposure to oil-based drilling fluids and crude oil, but that glove samplers may overestimate the amount of fluid transferred to the skin. We describe a study that aimed to further evaluate the wipe sampling method for assessing dermal exposure to crude oil, with extended sample storage periods and with sampling efficiency tests undertaken at environmental conditions mimicking typical outdoor conditions in Saudi Arabia. The wipe sampling method was then used to assess the laboratory technicians' actual exposure to crude oil during typical petroleum laboratory tasks. Overall, acceptable storage efficiencies up to 54 days were reported, with results suggesting storage stability over time. Sampling efficiencies were also satisfactory at both ambient and elevated temperature and relative humidity conditions for surrogate skin spiked with known masses of crude oil and left up to 4 h prior to wiping, though there was an indication of reduced sampling efficiency over time. Nineteen petroleum laboratory technicians provided a total of 35 pre- and 35 post-activity paired hand wipe samples. Ninety-three percent of the pre-exposure paired hand wipes were below the analytical limit of detection (LOD), whereas 46% of the post-activity paired hand wipes were below the LOD. The geometric mean paired post-activity wipe sample measurement was 3.09 µg cm⁻² (range 1.76-35.4 µg cm⁻²). It was considered that dermal exposure most frequently occurred through direct contact with the crude oil (emission) or via deposition.
The findings of this study suggest that the wipe sampling method is satisfactory in quantifying laboratory technicians' dermal exposure to crude oil. It is therefore considered that this wipe sampling method may be suitable to quantify dermal exposure to crude oil for other petroleum workers.
Terré, M; Castells, L; Fàbregas, F; Bach, A
2013-08-01
The objective of this study was to compare rumen samples from young dairy calves obtained via a stomach tube (ST) or a ruminal cannula (RC). Five male Holstein calves (46±4.0 kg of body weight and 11±4.9 d of age) were ruminally cannulated at 15 d of age. Calves received 4 L/d of a commercial milk replacer (25% crude protein and 19.2% fat) at 12.5% dry matter, and were provided concentrate and chopped oats hay ad libitum throughout the study (56 d). In total, 29 paired rumen samples were obtained weekly throughout the study in most of the calves by each extraction method. These samples were used to determine pH and volatile fatty acids (VFA) concentration, and to quantify Prevotella ruminicola and Streptococcus bovis by quantitative PCR. Furthermore, a denaturing gradient gel electrophoresis was performed on rumen samples harvested during wk 8 of the study to determine the degree of similarity between rumen bacteria communities. Rumen pH was 0.30 units greater in ST compared with RC samples. Furthermore, total VFA concentrations were greater in RC than in ST samples. However, when analyzing the proportion of each VFA by ANOVA, no differences were found between the sampling methods. The quantification of S. bovis and P. ruminicola was similar in both extraction methods, and values obtained using different methods were highly correlated (R(2)=0.89 and 0.98 for S. bovis and P. ruminicola, respectively). Fingerprinting analysis showed similar bacteria band profiles between samples obtained from the same calves using different extraction methods. In conclusion, when comparing rumen parameters obtained using different sampling techniques, it is recommended that VFA profiles be used rather than total VFA concentrations, as total VFA concentrations are more affected by the method of collection. 
Furthermore, although comparisons of pH across studies should be avoided when samples are not obtained using the same sampling method, the comparison of fingerprinting of a bacteria community or a specific rumen bacterium is valid. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
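The closing recommendation, comparing VFA profiles rather than total concentrations across sampling methods, can be illustrated with a toy calculation; the numbers below are invented, not data from the study, and stomach-tube dilution is modelled crudely as a uniform factor.

```python
import numpy as np

# Hypothetical paired VFA measurements (mmol/L: acetate, propionate,
# butyrate) from ruminal-cannula (RC) and stomach-tube (ST) sampling.
rc = np.array([[60.0, 20.0, 10.0],
               [66.0, 22.0, 11.0],
               [54.0, 18.0,  9.0]])
st = rc * 0.75            # ST saliva dilution modelled as a uniform factor

rc_total, st_total = rc.sum(axis=1), st.sum(axis=1)
rc_prop = rc / rc_total[:, None]
st_prop = st / st_total[:, None]

print(rc_total.mean(), st_total.mean())   # totals shift with the method
print(np.allclose(rc_prop, st_prop))      # proportions do not
```

Under a purely multiplicative dilution the totals disagree while the molar proportions are identical, which is why profiles travel better across collection methods.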
[Determination of ethylene glycol in biological fluids--propylene glycol interferences].
Gomółka, Ewa; Cudzich-Czop, Sylwia; Sulka, Adrianna
2013-01-01
Many laboratories in Poland do not use gas chromatography (GC) for the determination of ethylene glycol (EG) and methanol in the blood of poisoned patients; instead, they use nonspecific spectrophotometric methods. One of the interfering substances is propylene glycol (PG), a compound present in many medical and cosmetic products: drops, air fresheners, disinfectants, electronic cigarettes, and others. In the Laboratory of Analytical Toxicology and Drug Monitoring in Krakow, EG is determined by a GC method that resolves EG and PG in biological samples. In the years 2011-2012, PG was present in several serum samples from diagnosed patients at concentrations from several mg/dL to more than 100 mg/dL. The aim of the study was to estimate PG interference in serum EG determination by the spectrophotometric method. Serum samples containing PG and EG were used in the study and analyzed by two methods: GC and spectrophotometry. Serum samples spiked with PG but containing no EG gave false-positive results when analyzed by the spectrophotometric method, and these results correlated with the PG concentration in the samples. The calculated cross-reactivity of PG in the method was 42%. Positive EG results obtained by spectrophotometry must therefore be confirmed by the reference GC method; the spectrophotometric method should not be used for the diagnosis and monitoring of patients poisoned with EG.
Deasy, William; Shepherd, Tom; Alexander, Colin J; Birch, A Nicholas E; Evans, K Andrew
2016-11-01
Research on plant root chemical ecology has benefited greatly from recent developments in analytical chemistry. Numerous reports document techniques for sampling root volatiles, although only a limited number describe in situ collection. To demonstrate a new method for non-invasive in situ passive sampling using solid phase micro extraction (SPME), from the immediate vicinity of growing roots. SPME fibres inserted into polytetrafluoroethylene (PTFE) sampling tubes located in situ, which were either perforated, covered with stainless steel mesh or covered with microporous PTFE tubing, were used for non-invasive sub-surface sampling of root volatiles from glasshouse-grown broccoli. Sampling methods were compared with above-surface headspace collection using Tenax TA. The roots were either mechanically damaged or infested with Delia radicum larvae. Principal component analysis (PCA) was used to investigate the effect of damage on the composition of volatiles released by broccoli roots. Analyses by gas chromatography-mass spectrometry (GC-MS) with SPME and automated thermal desorption (ATD) confirmed that sulphur compounds, showing characteristic temporal emission patterns, were the principal volatiles released by roots following insect larval damage. Use of SPME with in situ perforated PTFE sampling tubes was the most robust method for out-of-lab sampling. This study describes a new method for non-invasive passive sampling of volatiles in situ from intact and insect-damaged roots using SPME. The method is highly suitable for remote sampling and has potential for wide application in chemical ecology/root/soil research. Copyright © 2016 John Wiley & Sons, Ltd.
Sample handling for mass spectrometric proteomic investigations of human sera.
West-Nielsen, Mikkel; Høgdall, Estrid V; Marchiori, Elena; Høgdall, Claus K; Schou, Christian; Heegaard, Niels H H
2005-08-15
Proteomic investigations of sera are potentially of value for diagnosis, prognosis, choice of therapy, and disease activity assessment by virtue of discovering new biomarkers and biomarker patterns. Much debate focuses on the biological relevance and the need for identification of such biomarkers while less effort has been invested in devising standard procedures for sample preparation and storage in relation to model building based on complex sets of mass spectrometric (MS) data. Thus, development of standardized methods for collection and storage of patient samples together with standards for transportation and handling of samples are needed. This requires knowledge about how sample processing affects MS-based proteome analyses and thereby how nonbiological biased classification errors are avoided. In this study, we characterize the effects of sample handling, including clotting conditions, storage temperature, storage time, and freeze/thaw cycles, on MS-based proteomics of human serum by using principal components analysis, support vector machine learning, and clustering methods based on genetic algorithms as class modeling and prediction methods. Using spiking to artificially create differentiable sample groups, this integrated approach yields data that--even when working with sample groups that differ more than may be expected in biological studies--clearly demonstrate the need for comparable sampling conditions for samples used for modeling and for the samples that are going into the test set group. Also, the study emphasizes the difference between class prediction and class comparison studies as well as the advantages and disadvantages of different modeling methods.
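As a rough sketch of the class-prediction workflow the abstract describes (dimension reduction followed by supervised classification of artificially spiked sample groups), the snippet below runs PCA plus a linear SVM on synthetic "spectra"; scikit-learn is assumed available, and the data, dimensions, and spike size are all invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, p = 60, 100                     # toy set: 60 spectra, 100 m/z bins
X = rng.normal(size=(n, p))        # baseline "serum" spectra
y = np.repeat([0, 1], n // 2)      # two sample groups
X[y == 1, :10] += 2.0              # artificial spike peaks in group 1

# PCA for dimension reduction, then a linear SVM, under cross-validation
model = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
scores = cross_val_score(model, X, y, cv=5)
print(round(scores.mean(), 2))
```

In the study's terms, the caution is that such a model will classify handling artifacts just as readily as spikes unless training and test samples were processed comparably.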
Boiano, J M; Wallace, M E; Sieber, W K; Groff, J H; Wang, J; Ashley, K
2000-08-01
A field study was conducted with the goal of comparing the performance of three recently developed or modified sampling and analytical methods for the determination of airborne hexavalent chromium (Cr(VI)). The study was carried out in a hard chrome electroplating facility and in a jet engine manufacturing facility where airborne Cr(VI) was expected to be present. The analytical methods evaluated included two laboratory-based procedures (OSHA Method ID-215 and NIOSH Method 7605) and a field-portable method (NIOSH Method 7703). These three methods employ an identical sampling methodology: collection of Cr(VI)-containing aerosol on a polyvinyl chloride (PVC) filter housed in a sampling cassette, which is connected to a personal sampling pump calibrated at an appropriate flow rate. The analysis in all three methods involves extraction of the PVC filter in alkaline buffer solution, chemical isolation of the Cr(VI) ion, complexation of the Cr(VI) ion with 1,5-diphenylcarbazide, and spectrometric measurement of the violet chromium diphenylcarbazone complex at 540 nm. However, there are notable differences in the sample preparation procedures used in the three methods. To assess the comparability of the three measurement protocols, a total of 20 side-by-side air samples were collected, equally divided between a chromic acid electroplating operation and a spray paint operation where water-soluble forms of Cr(VI) were used. A range of Cr(VI) concentrations from 0.6 to 960 microg m(-3), with Cr(VI) mass loadings ranging from 0.4 to 32 microg, was measured at the two operations. The equivalence of the means of the log-transformed Cr(VI) concentrations obtained from the different analytical methods was tested. Based on analysis of variance (ANOVA) results, no statistically significant differences were observed between mean values measured using each of the three methods.
Small but statistically significant differences were observed between results obtained from performance evaluation samples for the NIOSH field method and the OSHA laboratory method.
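The comparison above, ANOVA on log-transformed concentrations from side-by-side samples, can be sketched as follows; the concentration values are invented for illustration and SciPy is assumed available.

```python
import numpy as np
from scipy import stats

# Hypothetical side-by-side Cr(VI) results (microgram per cubic metre)
# from three methods on the same five air samples; occupational
# concentrations are roughly log-normal, hence the log transform.
osha  = np.array([0.8, 5.1, 22.0, 150.0, 890.0])
niosh = np.array([0.7, 4.8, 24.0, 160.0, 920.0])
field = np.array([0.9, 5.5, 21.0, 145.0, 860.0])

f, pval = stats.f_oneway(np.log(osha), np.log(niosh), np.log(field))
print(round(pval, 3))   # large p: no significant difference between methods
```

The log transform matters here: on the raw scale the three orders of magnitude between samples would dominate the variance and mask any between-method differences.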
Generating virtual training samples for sparse representation of face images and face recognition
NASA Astrophysics Data System (ADS)
Du, Yong; Wang, Yu
2016-03-01
There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illuminations, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot convey these possible changes in the training phase sufficiently, and this has become one of the restrictions to improve the face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples really reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more human essential information is retained. Also, uncertainty of the training data is simultaneously reduced with the increase of the training samples, which is beneficial for the training phase. The devised representation-based classifier uses both the original and new generated samples to perform the classification. In the classification phase, we first determine K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
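A minimal sketch of the two ingredients described above, generating virtual samples by multiplying image pairs from the same subject and classifying through a least-squares combination of the K nearest training samples, might look like the following. The 16-pixel "faces", the normalisation, and K = 4 are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def virtual_samples(X, labels):
    """Element-wise products of same-subject training pairs (one reading
    of the paper's image multiplication), normalised to unit length."""
    vx, vy = [], []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        for a in range(len(idx)):
            for b in range(a + 1, len(idx)):
                v = X[idx[a]] * X[idx[b]]
                vx.append(v / (np.linalg.norm(v) + 1e-12))
                vy.append(c)
    return np.array(vx), np.array(vy)

def classify(x, X, labels, k=4):
    """Pick the K nearest training samples, represent x as their linear
    combination (least squares), and assign the class whose members
    leave the smallest reconstruction residual."""
    near = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    A, ys = X[near].T, labels[near]
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    best, best_err = None, np.inf
    for c in np.unique(ys):
        m = ys == c
        err = np.linalg.norm(x - A[:, m] @ coef[m])
        if err < best_err:
            best, best_err = c, err
    return best

rng = np.random.default_rng(1)

def face(c):
    """Toy 16-pixel 'face': bright left half for class 0, right for 1."""
    img = rng.uniform(0.2, 0.4, 16)
    img[:8] += 0.6 if c == 0 else 0.0
    img[8:] += 0.6 if c == 1 else 0.0
    return img / np.linalg.norm(img)

X = np.array([face(c) for c in (0, 0, 0, 1, 1, 1)])
y = np.array([0, 0, 0, 1, 1, 1])
Xv, yv = virtual_samples(X, y)                       # 6 virtual samples
Xa, ya = np.vstack([X, Xv]), np.concatenate([y, yv])
print(classify(face(0), Xa, ya), len(yv))
```

Because pixel intensities are nonnegative, the element-wise product of two same-class images preserves (and sharpens) the class pattern, which is what makes the virtual samples usable for training.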
Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong
2015-12-26
This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
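The claim that averaging N conversions reduces random noise by one over the square root of N can be checked with a quick Monte Carlo sketch; the 848.3 μV figure is taken from the abstract, while the 16-fold sampling count and everything else are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 848.3e-6           # single-conversion random noise from the abstract (V)
n_samp, trials = 16, 20000

single = rng.normal(0.0, sigma, trials)                      # one conversion
multi = rng.normal(0.0, sigma, (trials, n_samp)).mean(axis=1)  # average of 16

ratio = single.std() / multi.std()
print(round(ratio, 2))     # close to sqrt(16) = 4
```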
Determination of polarimetric parameters of honey by near-infrared transflectance spectroscopy.
García-Alvarez, M; Ceresuela, S; Huidobro, J F; Hermida, M; Rodríguez-Otero, J L
2002-01-30
NIR transflectance spectroscopy was used to determine polarimetric parameters (direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides) and sucrose in honey. In total, 156 honey samples were collected during 1992 (45 samples), 1995 (56 samples), and 1996 (55 samples). Samples were analyzed by NIR spectroscopy and polarimetric methods. Calibration (118 samples) and validation (38 samples) sets were made up; honeys from the three years were included in both sets. Calibrations were performed by modified partial least-squares regression, with scatter correction by standard normal variate and detrend methods. For direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides, good statistics (bias, SEV, and R(2)) were obtained for the validation set, and no statistically (p = 0.05) significant differences were found between instrumental and polarimetric methods for these parameters. Statistical data for sucrose were not as good as those of the other parameters. Therefore, NIR spectroscopy is not an effective method for quantitative analysis of sucrose in these honey samples. However, NIR spectroscopy may be an acceptable method for semiquantitative evaluation of sucrose for honeys, such as those in our study, containing up to 3% of sucrose. Further work is necessary to validate the uncertainty at higher levels.
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to alleviate the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel active learning method called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the samples selected to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
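In the spirit of the experimental-design selection described above, the sketch below greedily picks a batch whose members best reconstruct the whole unlabeled pool under ridge regression. It is a simplified stand-in for MRED (no manifold regularizer), with invented data and parameters.

```python
import numpy as np

def select_batch(X, k, lam=0.1):
    """Greedy experimental design (simplified): at each step pick the
    sample whose addition best lets the selected set reconstruct the
    whole pool under ridge regression, so the batch is representative."""
    chosen = []
    for _ in range(k):
        best, best_err = None, np.inf
        for i in range(len(X)):
            if i in chosen:
                continue
            S = X[chosen + [i]]                      # candidate design
            G = S @ S.T + lam * np.eye(len(S))       # regularised Gram matrix
            coef = np.linalg.solve(G, S @ X.T)       # represent pool by S
            err = np.linalg.norm(X - coef.T @ S)
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
A = rng.normal(0.0, 0.5, (10, 2)) + [10.0, -10.0]   # cluster 0: rows 0-9
B = rng.normal(0.0, 0.5, (10, 2)) + [10.0, 10.0]    # cluster 1: rows 10-19
X = np.vstack([A, B])

picks = select_batch(X, 2)
print(sorted(p // 10 for p in picks))   # one sample from each cluster
```

Unlike uncertainty sampling, this criterion needs no trained classifier, which matches the paper's argument that hyperplane-based selection is unreliable when labeled data are scarce.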
Methods for the preparation and analysis of solids and suspended solids for total mercury
Olund, Shane D.; DeWild, John F.; Olson, Mark L.; Tate, Michael T.
2004-01-01
The methods documented in this report are utilized by the Wisconsin District Mercury Lab for analysis of total mercury in solids (soils and sediments) and suspended solids (isolated on filters). Separate procedures are required for the different sample types. For solids, samples are prepared by room-temperature acid digestion and oxidation with aqua regia. The samples are brought up to volume with a 5 percent bromine monochloride solution to ensure complete oxidation and heated at 50 °C in an oven overnight. Samples are then analyzed with an automated flow injection system incorporating a cold vapor atomic fluorescence spectrometer. A method detection limit of 0.3 ng of mercury per digestion bomb was established using multiple analyses of an environmental sample. Based on the range of masses processed, the minimum sample reporting limit varies from 0.6 ng/g to 6 ng/g. Suspended solids samples are oxidized with a 5 percent bromine monochloride solution and held at 50 °C in an oven for 5 days. The samples are then analyzed with an automated flow injection system incorporating a cold vapor atomic fluorescence spectrometer. Using a certified reference material as a surrogate for an environmental sample, a method detection limit of 0.059 ng of mercury per filter was established. The minimum sample reporting limit varies from 0.059 ng/L to 1.18 ng/L, depending on the volume of water filtered.
Mori, Toshifumi; Hamers, Robert J; Pedersen, Joel A; Cui, Qiang
2014-07-17
Motivated by specific applications and the recent work of Gao and co-workers on integrated tempering sampling (ITS), we have developed a novel sampling approach referred to as integrated Hamiltonian sampling (IHS). IHS is straightforward to implement and complementary to existing methods for free energy simulation and enhanced configurational sampling. The method carries out sampling using an effective Hamiltonian constructed by integrating the Boltzmann distributions of a series of Hamiltonians. By judiciously selecting the weights of the different Hamiltonians, one achieves rapid transitions among the energy landscapes that underlie different Hamiltonians and therefore an efficient sampling of important regions of the conformational space. Along this line, IHS shares similar motivations as the enveloping distribution sampling (EDS) approach of van Gunsteren and co-workers, although the ways that distributions of different Hamiltonians are integrated are rather different in IHS and EDS. Specifically, we report efficient ways for determining the weights using a combination of histogram flattening and weighted histogram analysis approaches, which make it straightforward to include many end-state and intermediate Hamiltonians in IHS so as to enhance its flexibility. Using several relatively simple condensed phase examples, we illustrate the implementation and application of IHS as well as potential developments for the near future. The relation of IHS to several related sampling methods such as Hamiltonian replica exchange molecular dynamics and λ-dynamics is also briefly discussed.
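A toy one-dimensional illustration of the core idea, sampling an effective potential built by integrating the Boltzmann weights of several Hamiltonians so that barrier crossings become frequent, is sketched below; the double-well potentials, equal weights, and Metropolis settings are invented, not taken from the paper.

```python
import numpy as np

beta = 4.0
U0 = lambda x: (x**2 - 1.0)**2          # end-state Hamiltonian: high barrier
U1 = lambda x: 0.2 * (x**2 - 1.0)**2    # second Hamiltonian: low barrier
w0, w1 = 0.5, 0.5                       # integration weights

def U_eff(x):
    # effective potential of the integrated distribution:
    # U_eff = -(1/beta) ln [ w0 exp(-beta U0) + w1 exp(-beta U1) ]
    return -np.log(w0 * np.exp(-beta * U0(x)) + w1 * np.exp(-beta * U1(x))) / beta

def crossings(U, steps=20000, width=0.3, seed=0):
    """Metropolis sampling on U; count accepted moves across the barrier."""
    rng = np.random.default_rng(seed)
    x, n = -1.0, 0
    for _ in range(steps):
        y = x + rng.normal(0.0, width)
        if rng.random() < np.exp(-beta * (U(y) - U(x))):
            if x * y < 0.0:
                n += 1
            x = y
    return n

c_plain, c_int = crossings(U0), crossings(U_eff)
print(c_plain, c_int)   # the integrated Hamiltonian crosses far more often
```

The low-barrier Hamiltonian dominates the integrated weight near the barrier top, so the effective landscape is flattened exactly where the original sampling stalls.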
Aladaghlo, Zolfaghar; Fakhari, Alireza; Behbahani, Mohammad
2016-10-01
In this work, an efficient sample preparation method termed solvent-assisted dispersive solid-phase extraction was applied. The method was based on the dispersion of the sorbent (benzophenone) into the aqueous sample to maximize the interaction surface. In this approach, the dispersion of the sorbent at a very low milligram level was achieved by inserting a solution of the sorbent and disperser solvent into the aqueous sample. A cloudy solution formed upon dispersion of the sorbent in the bulk aqueous sample. After preconcentration of the butachlor, the cloudy solution was centrifuged, and the butachlor in the sediment phase was dissolved in ethanol and determined by gas chromatography with flame ionization detection. Under the optimized conditions (solution pH = 7.0; sorbent: benzophenone, 2%; disperser solvent: ethanol, 500 μL; centrifugation at 4000 rpm for 3 min), the method detection limit for butachlor was 2, 3 and 3 μg/L for distilled water, waste water, and urine samples, respectively. Furthermore, the preconcentration factor was 198.8, 175.0, and 174.2 in distilled water, waste water, and urine samples, respectively. Solvent-assisted dispersive solid-phase extraction was successfully used for the trace monitoring of butachlor in urine and waste water samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Button, Mark; Weber, Kela; Nivala, Jaime; Aubron, Thomas; Müller, Roland Arno
2016-03-01
Community-level physiological profiling (CLPP) using BIOLOG® EcoPlates™ has become a popular method for characterizing and comparing the functional diversity, functional potential, and metabolic activity of heterotrophic microbial communities. The method was originally developed for profiling soil communities; however, its usage has expanded into the fields of ecotoxicology, agronomy, and the monitoring and profiling of microbial communities in various wastewater treatment systems, including constructed wetlands for water pollution control. When performing CLPP on aqueous samples from constructed wetlands, a wide variety of sample characteristics can be encountered and challenges may arise due to excessive solids, color, or turbidity. The aim of this study was to investigate the impacts of different sample preparation methods on CLPP performed on a variety of aqueous samples covering a broad range of physical and chemical characteristics. The results show that using filter paper, centrifugation, or settling helped clarify samples for subsequent CLPP analysis, however did not do so as effectively as dilution for the darkest samples. Dilution was able to provide suitable clarity for the darkest samples; however, 100-fold dilution significantly affected the carbon source utilization patterns (CSUPs), particularly with samples that were already partially or fully clear. Ten-fold dilution also had some effect on the CSUPs of samples which were originally clear; however, the effect was minimal. Based on these findings, for this specific set of samples, a 10-fold dilution provided a good balance between ease of use, sufficient clarity (for dark samples), and limited effect on CSUPs. The process and findings outlined here can hopefully serve future studies looking to utilize CLPP for functional analysis of microbial communities and also assist in comparing data from studies where different sample preparation methods were utilized.
Mischnik, Alexander; Mieth, Markus; Busch, Cornelius J; Hofer, Stefan; Zimmermann, Stefan
2012-08-01
Automation of plate streaking is ongoing in clinical microbiological laboratories, but evaluation for routine use is mostly open. In the present study, the recovery of microorganisms from the Previ Isola system plated polyurethane (PU) swab samples is compared to manually plated control viscose swab samples from wounds according to the CLSI procedure M40-A (quality control of microbiological transport systems). One hundred twelve paired samples (224 swabs) were analyzed. In 80/112 samples (71%), concordant culture results were obtained with the two methods. In 32/112 samples (29%), CFU recovery of microorganisms from the two methods was discordant. In 24 (75%) of the 32 paired samples with a discordant result, Previ Isola plated PU swabs were superior. In 8 (25%) of the 32 paired samples with a discordant result, control viscose swabs were superior. The quality of colony growth on culture media for further investigations was superior with Previ Isola inoculated plates compared to manual plating techniques. Gram stain results were concordant between the two methods in 62/112 samples (55%). In 50/112 samples (45%), the results of Gram staining were discordant between the two methods. In 34 (68%) of the 50 paired samples with discordant results, Gram staining of PU swabs was superior to that of control viscose swabs. In 16 (32%) of the 50 paired samples, Gram staining of control viscose swabs was superior to that of PU swabs. We report the first clinical evaluation of Previ Isola automated specimen inoculation for wound swab samples. This study suggests that use of an automated specimen inoculation system has good results with regard to CFU recovery, quality of Gram staining, and accuracy of diagnosis.
Feng, Shu; Gale, Michael J; Fay, Jonathan D; Faridi, Ambar; Titus, Hope E; Garg, Anupam K; Michaels, Keith V; Erker, Laura R; Peters, Dawn; Smith, Travis B; Pennesi, Mark E
2015-09-01
To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method), the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method), and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population.
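The 50 × 50-μm sampling-window calculation can be sketched on a simulated cone mosaic; the coordinates and window positions below are synthetic, and the "peak" search is a simplified version of the paper's peak density method.

```python
import numpy as np

def window_density(coords, cx, cy, w=50.0):
    """Cones per mm^2 inside a w x w micrometre window centred at (cx, cy)."""
    inside = (np.abs(coords[:, 0] - cx) <= w / 2) & \
             (np.abs(coords[:, 1] - cy) <= w / 2)
    return inside.sum() / (w / 1000.0) ** 2

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 500.0, (5000, 2))  # toy mosaic: 5000 cones, 500x500 um

# fixed-interval style: a single window at a predetermined location
fixed = window_density(coords, 250.0, 250.0)

# peak-density style: the best window among candidates along a meridian
centres = np.arange(100.0, 400.0, 25.0)
peak = max(window_density(coords, cx, 250.0) for cx in centres)

print(fixed, peak)   # the peak search can only raise the estimate
```

Because the peak method maximizes over windows, it is biased upward relative to a fixed window on the same mosaic, consistent with the paper's observation that the sampling scheme itself shifts the measured density.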
Yong, Dongeun; Ki, Chang-Seok; Kim, Jae-Seok; Seong, Moon-Woo; Lee, Hyukmin
2016-01-01
Background Real-time reverse transcription PCR (rRT-PCR) of sputum samples is commonly used to diagnose Middle East respiratory syndrome coronavirus (MERS-CoV) infection. Owing to the difficulty of extracting RNA from sputum containing mucus, sputum homogenization is desirable prior to nucleic acid isolation. We determined optimal homogenization methods for isolating viral nucleic acids from sputum. Methods We evaluated the following three sputum-homogenization methods: proteinase K and DNase I (PK-DNase) treatment, phosphate-buffered saline (PBS) treatment, and N-acetyl-L-cysteine and sodium citrate (NALC) treatment. Sputum samples were spiked with inactivated MERS-CoV culture isolates. RNA was extracted from pretreated, spiked samples using the easyMAG system (bioMérieux, France). Extracted RNAs were then subjected to rRT-PCR for MERS-CoV diagnosis (DiaPlex Q MERS-coronavirus, SolGent, Korea). Results While analyzing 15 spiked sputum samples prepared in technical duplicate, false-negative results were obtained with five (16.7%) and four samples (13.3%), respectively, by using the PBS and NALC methods. The range of threshold cycle (Ct) values observed when detecting upE in sputum samples was 31.1–35.4 with the PK-DNase method, 34.7–39.0 with the PBS method, and 33.9–38.6 with the NALC method. Compared with the control, which were prepared by adding a one-tenth volume of 1:1,000 diluted viral culture to PBS solution, the ranges of Ct values obtained by the PBS and NALC methods differed significantly from the mean control Ct of 33.2 (both P<0.0001). Conclusions The PK-DNase method is suitable for homogenizing sputum samples prior to RNA extraction. PMID:27374711
Rosing, H.; Hillebrand, M. J. X.; Blesson, S.; Mengesha, B.; Diro, E.; Hailu, A.; Schellens, J. H. M.; Beijnen, J. H.
2016-01-01
To facilitate future pharmacokinetic studies of combination treatments against leishmaniasis in remote regions in which the disease is endemic, a simple cheap sampling method is required for miltefosine quantification. The aims of this study were to validate a liquid chromatography-tandem mass spectrometry method to quantify miltefosine in dried blood spot (DBS) samples and to validate its use with Ethiopian patients with visceral leishmaniasis (VL). Since hematocrit (Ht) levels are typically severely decreased in VL patients, returning to normal during treatment, the method was evaluated over a range of clinically relevant Ht values. Miltefosine was extracted from DBS samples using a simple method of pretreatment with methanol, resulting in >97% recovery. The method was validated over a calibration range of 10 to 2,000 ng/ml, and accuracy and precision were within ±11.2% and ≤7.0% (≤19.1% at the lower limit of quantification), respectively. The method was accurate and precise for blood spot volumes between 10 and 30 μl and for Ht levels of 20 to 35%, although a linear effect of Ht levels on miltefosine quantification was observed in the bioanalytical validation. DBS samples were stable for at least 162 days at 37°C. Clinical validation of the method using paired DBS and plasma samples from 16 VL patients showed a median observed DBS/plasma miltefosine concentration ratio of 0.99, with good correlation (Pearson's r = 0.946). Correcting for patient-specific Ht levels did not further improve the concordance between the sampling methods. This successfully validated method to quantify miltefosine in DBS samples was demonstrated to be a valid and practical alternative to venous blood sampling that can be applied in future miltefosine pharmacokinetic studies with leishmaniasis patients, without Ht correction. PMID:26787691
de Vries, W; Wieggers, H J J; Brus, D J
2010-08-05
Element fluxes through forest ecosystems are generally based on measurements of concentrations in soil solution at regular time intervals at plot locations sampled in a regular grid. Here we present spatially averaged annual element leaching fluxes in three Dutch forest monitoring plots using a new sampling strategy in which both sampling locations and sampling times are selected by probability sampling. Locations were selected by stratified random sampling with compact geographical blocks of equal surface area as strata. In each sampling round, six composite soil solution samples were collected, consisting of five aliquots, one per stratum. The plot-mean concentration was estimated by linear regression, so that the bias due to one or more strata not being represented in the composite samples is eliminated. The sampling times were selected in such a way that the cumulative precipitation surplus of the time interval between two consecutive sampling times was constant, using an estimated precipitation surplus averaged over the past 30 years. The spatially averaged annual leaching flux was estimated by using the modeled daily water flux as an ancillary variable. An important advantage of the new method is that the uncertainty in the estimated annual leaching fluxes due to spatial and temporal variation and the resulting sampling errors can be quantified. Results of this new method were compared with the reference approach in which daily leaching fluxes were calculated by multiplying daily interpolated element concentrations with daily water fluxes and then aggregating to a year. Results show that the annual fluxes calculated with the reference method for the period 2003-2005, including all plots, elements and depths, lie within the range of the average ±2 standard errors of the new method in only 53% of the cases.
Despite the differences in results, both methods indicate comparable N retention and strong Al mobilization in all plots, with Al leaching being nearly equal to the leaching of SO(4) and NO(3) with fluxes expressed in mol(c) ha(-1) yr(-1). This illustrates that Al release, which is the clearest signal of soil acidification, is mainly due to the external input of SO(4) and NO(3).
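The time-selection rule described in this record, a constant cumulative precipitation surplus between consecutive sampling rounds, can be sketched as follows; the daily surplus series and the number of rounds below are hypothetical:

```python
import numpy as np

def equal_surplus_sampling_days(daily_surplus, n_rounds):
    """Pick sampling days so that the cumulative precipitation surplus
    accumulated between consecutive sampling times is constant.

    daily_surplus: array of daily precipitation-surplus values (mm/day)
    n_rounds: number of sampling rounds in the period
    Returns 0-based day indices at which to sample.
    """
    cum = np.cumsum(daily_surplus)
    total = cum[-1]
    # Target cumulative surplus at each sampling time.
    targets = total * np.arange(1, n_rounds + 1) / n_rounds
    # First day on which each target is reached.
    return np.searchsorted(cum, targets)

# Illustrative: a wet first half-year concentrates sampling early.
surplus = np.r_[np.full(180, 2.0), np.full(185, 0.5)]  # mm/day
print(equal_surplus_sampling_days(surplus, 4))
```

With a uniform surplus the rounds would be equally spaced in time; the skewed series above pushes three of the four sampling dates into the wet half-year.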
Commutability of food microbiology proficiency testing samples.
Abdelmassih, M; Polet, M; Goffaux, M-J; Planchon, V; Dierick, K; Mahillon, J
2014-03-01
Food microbiology proficiency testing (PT) is a useful tool to assess the analytical performances among laboratories. PT items should be close to routine samples to accurately evaluate the acceptability of the methods. However, most PT providers distribute exclusively artificial samples such as reference materials or irradiated foods. This raises the issue of the suitability of these samples, because the equivalence, or 'commutability', between results obtained on artificial vs. authentic food samples has not been demonstrated. In the clinical field, the use of noncommutable PT samples has led to erroneous evaluation of the performances when different analytical methods were used. This study aimed to provide a first assessment of the commutability of samples distributed in food microbiology PT. REQUASUD and IPH organized 13 food microbiology PTs including 10-28 participants. Three types of PT items were used: genuine food samples, sterile food samples and reference materials. The commutability of the artificial samples (reference materials or sterile samples) was assessed by plotting the distribution of the results on natural and artificial PT samples. This comparison highlighted matrix-correlated issues when nonfood matrices, such as reference materials, were used. Artificially inoculated food samples, on the other hand, raised only isolated commutability issues. In the organization of a PT scheme, authentic or artificially inoculated food samples are necessary to accurately evaluate the analytical performances. Reference materials, used as PT items because of their convenience, may present commutability issues leading to inaccurate, penalizing conclusions for methods that would have provided accurate results on food samples. For the first time, the commutability of food microbiology PT samples was investigated. The nature of the samples provided by the organizer turned out to be an important factor because matrix effects can impact the analytical results. 
© 2013 The Society for Applied Microbiology.
NASA Astrophysics Data System (ADS)
Pries, V. V.; Proskuriakov, N. E.
2018-04-01
To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, due to possible failures in the operation of the devices and systems of an automatic rotor line, there is always a real probability of defective (incomplete) products entering the output process stream. Therefore, continuous sampling control of product completeness, based on the use of statistical methods, remains an important element in managing the quality of assembly of multi-element mass products on automatic rotor lines. A particular feature of continuous sampling control of product completeness during assembly is that the inspection is destructive, which excludes the possibility of returning component parts to the process stream after sampling control and leads to a decrease in the actual productivity of the assembly equipment. Therefore, the use of statistical procedures for continuous sampling control of product completeness in assembly on automatic rotor lines requires sampling plans that ensure a minimum control sample size. Comparison of the values of the limit of the average output defect level for the continuous sampling plan (CSP) and for the automated continuous sampling plan (ACSP) shows that the ACSP-1 provides lower limit values for the average output defect level. The average sample size when using the ACSP-1 plan is also smaller than when using the CSP-1 plan. Thus, the application of statistical methods in the assembly quality management of multi-element products on automatic rotor lines, involving the use of the proposed plans and methods for continuous selective control, will allow sampling control procedures to be automated and the required level of quality of assembled products to be maintained while minimizing sample size.
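For context, the classic Dodge CSP-1 rule underlying the plans compared in this record can be simulated by Monte Carlo; the parameters (p, i, f) below are illustrative, and the paper's ACSP-1 modification is not reproduced:

```python
import random

def simulate_csp1(p, i, f, n_items, seed=0):
    """Simulate Dodge's CSP-1 continuous sampling plan.

    Start with 100% inspection; after i consecutive conforming items,
    switch to inspecting a random fraction f of items; on any defect
    found, return to 100% inspection. Found defects are removed
    (rectifying inspection). Returns the average outgoing quality
    (AOQ), approximated as passed defects / total items.
    """
    rng = random.Random(seed)
    screening, run = True, 0
    passed_defects = 0
    for _ in range(n_items):
        defective = rng.random() < p
        inspected = screening or rng.random() < f
        if inspected:
            if defective:
                screening, run = True, 0   # defect removed, back to 100%
            else:
                run += 1
                if screening and run >= i:
                    screening, run = False, 0
        elif defective:
            passed_defects += 1            # uninspected defect escapes
    return passed_defects / n_items

aoq = simulate_csp1(p=0.02, i=50, f=0.1, n_items=200_000)
print(aoq)  # well below the incoming 2% defect rate
```

Sweeping p over a grid and taking the maximum simulated AOQ approximates the plan's average outgoing quality limit (AOQL), the quantity the record uses to compare CSP-1 and ACSP-1.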
Montagna, Maria Teresa; De Giglio, Osvalda; Cristina, Maria Luisa; Napoli, Christian; Pacifico, Claudia; Agodi, Antonella; Baldovin, Tatjana; Casini, Beatrice; Coniglio, Maria Anna; D'Errico, Marcello Mario; Delia, Santi Antonino; Deriu, Maria Grazia; Guida, Marco; Laganà, Pasqualina; Liguori, Giorgio; Moro, Matteo; Mura, Ida; Pennino, Francesca; Privitera, Gaetano; Romano Spica, Vincenzo; Sembeni, Silvia; Spagnolo, Anna Maria; Tardivo, Stefano; Torre, Ida; Valeriani, Federica; Albertini, Roberto; Pasquarella, Cesira
2017-06-22
Healthcare facilities (HF) represent an at-risk environment for legionellosis transmission occurring after inhalation of contaminated aerosols. In general, the control of water is preferred to that of air because, to date, there are no standardized sampling protocols. Legionella air contamination was investigated in the bathrooms of 11 HF by active sampling (Surface Air System and Coriolis®μ) and passive sampling using settling plates. During the 8-hour sampling, hot tap water was sampled three times. All air samples were evaluated using culture-based methods, whereas liquid samples collected using the Coriolis®μ were also analyzed by real-time PCR. Legionella presence in the air and water was then compared by sequence-based typing (SBT) methods. Air contamination was found in four HF (36.4%) by at least one of the culturable methods. The culturable investigation by Coriolis®μ did not yield Legionella in any enrolled HF. However, molecular investigation using Coriolis®μ resulted in eight HF testing positive for Legionella in the air. Comparison of Legionella air and water contamination indicated that Legionella water concentration could be predictive of its presence in the air. Furthermore, a molecular study of 12 L. pneumophila strains confirmed a match between the Legionella strains from air and water samples by SBT for three out of four HF that tested positive for Legionella by at least one of the culturable methods. Overall, our study shows that Legionella air detection cannot replace water sampling because the absence of microorganisms from the air does not necessarily represent their absence from water; nevertheless, air sampling may provide useful information for risk assessment. The liquid impingement technique appears to have the greatest capacity for collecting airborne Legionella if combined with molecular investigations.
Montagna, Maria Teresa; De Giglio, Osvalda; Cristina, Maria Luisa; Napoli, Christian; Pacifico, Claudia; Agodi, Antonella; Baldovin, Tatjana; Casini, Beatrice; Coniglio, Maria Anna; D’Errico, Marcello Mario; Delia, Santi Antonino; Deriu, Maria Grazia; Guida, Marco; Laganà, Pasqualina; Liguori, Giorgio; Moro, Matteo; Mura, Ida; Pennino, Francesca; Privitera, Gaetano; Romano Spica, Vincenzo; Sembeni, Silvia; Spagnolo, Anna Maria; Tardivo, Stefano; Torre, Ida; Valeriani, Federica; Albertini, Roberto; Pasquarella, Cesira
2017-01-01
Healthcare facilities (HF) represent an at-risk environment for legionellosis transmission occurring after inhalation of contaminated aerosols. In general, the control of water is preferred to that of air because, to date, there are no standardized sampling protocols. Legionella air contamination was investigated in the bathrooms of 11 HF by active sampling (Surface Air System and Coriolis®μ) and passive sampling using settling plates. During the 8-hour sampling, hot tap water was sampled three times. All air samples were evaluated using culture-based methods, whereas liquid samples collected using the Coriolis®μ were also analyzed by real-time PCR. Legionella presence in the air and water was then compared by sequence-based typing (SBT) methods. Air contamination was found in four HF (36.4%) by at least one of the culturable methods. The culturable investigation by Coriolis®μ did not yield Legionella in any enrolled HF. However, molecular investigation using Coriolis®μ resulted in eight HF testing positive for Legionella in the air. Comparison of Legionella air and water contamination indicated that Legionella water concentration could be predictive of its presence in the air. Furthermore, a molecular study of 12 L. pneumophila strains confirmed a match between the Legionella strains from air and water samples by SBT for three out of four HF that tested positive for Legionella by at least one of the culturable methods. Overall, our study shows that Legionella air detection cannot replace water sampling because the absence of microorganisms from the air does not necessarily represent their absence from water; nevertheless, air sampling may provide useful information for risk assessment. The liquid impingement technique appears to have the greatest capacity for collecting airborne Legionella if combined with molecular investigations. PMID:28640202
Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli
2017-07-01
As a mineral, the traditional Chinese medicine calamine has a similar shape to many other minerals. Investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated; given the large number of calamine samples, a rapid identification method is therefore needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples including crude products, counterfeits and processed products were collected and correctly identified using the physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back-propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on NIR and MRCC methods was 85%; in addition, the model, which took multiple factors into comprehensive consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients of multiple references as the spectral feature data of samples into the BP-ANN, a BP-ANN model of qualitative identification was established, of which the accuracy rate increased to 95%. The MRCC method can be used as an NIR-based method in the process of BP-ANN modeling.
Apel, William A.; Thompson, Vicki S; Lacey, Jeffrey A.; Gentillon, Cynthia A.
2016-08-09
A method for determining a plurality of proteins for discriminating and positively identifying an individual based on a biological sample. The method may include profiling a biological sample from a plurality of individuals against a protein array including a plurality of proteins. The protein array may include proteins attached to a support in a preselected pattern such that locations of the proteins are known. The biological sample may be contacted with the protein array such that a portion of antibodies in the biological sample reacts with and binds to the proteins, forming immune complexes. A statistical analysis method, such as discriminant analysis, may be performed to determine discriminating proteins for distinguishing individuals. Proteins of interest may be used to form a protein array. Such a protein array may be used, for example, to compare a forensic sample from an unknown source with a sample from a known source.
Thompson, Vicki S; Lacey, Jeffrey A; Gentillon, Cynthia A; Apel, William A
2015-03-03
A method for determining a plurality of proteins for discriminating and positively identifying an individual based on a biological sample. The method may include profiling a biological sample from a plurality of individuals against a protein array including a plurality of proteins. The protein array may include proteins attached to a support in a preselected pattern such that locations of the proteins are known. The biological sample may be contacted with the protein array such that a portion of antibodies in the biological sample reacts with and binds to the proteins, forming immune complexes. A statistical analysis method, such as discriminant analysis, may be performed to determine discriminating proteins for distinguishing individuals. Proteins of interest may be used to form a protein array. Such a protein array may be used, for example, to compare a forensic sample from an unknown source with a sample from a known source.
Song, Young Kyoung; Hong, Sang Hee; Jang, Mi; Han, Gi Myung; Rani, Manviri; Lee, Jongmyoung; Shim, Won Joon
2015-04-15
The analysis of microplastics in various environmental samples requires the identification of microplastics from natural materials. The identification technique lacks a standardized protocol. Herein, stereomicroscope and Fourier transform infrared spectroscopy (FT-IR) identification methods for microplastics (<1 mm) were compared using the same samples from the sea surface microlayer (SML) and beach sand. Fragmented microplastics were significantly (p<0.05) underestimated and fibers were significantly overestimated using the stereomicroscope both in the SML and beach samples. The total abundance by FT-IR was higher than by microscope both in the SML and beach samples, but they were not significantly (p>0.05) different. Depending on the number of samples and the microplastic size range of interest, the appropriate identification method should be determined; selecting a suitable identification method for microplastics is crucial for evaluating microplastic pollution. Copyright © 2015 Elsevier Ltd. All rights reserved.
Floating Ultrasonic Transducer Inspection System and Method for Nondestructive Evaluation
NASA Technical Reports Server (NTRS)
Johnston, Patrick H. (Inventor); Zalameda, Joseph N. (Inventor)
2016-01-01
A method for inspecting a structural sample using ultrasonic energy includes positioning an ultrasonic transducer adjacent to a surface of the sample, and then transmitting ultrasonic energy into the sample. Force pulses are applied to the transducer concurrently with transmission of the ultrasonic energy. A host machine processes ultrasonic return pulses from an ultrasonic pulser/receiver to quantify attenuation of the ultrasonic energy within the sample. The host machine detects a defect in the sample using the quantified level of attenuation. The method may include positioning a dry couplant between an ultrasonic transducer and the surface. A system includes an actuator, an ultrasonic transducer, a dry couplant between the transducer and the sample, a scanning device that moves the actuator and transducer, and a measurement system having a pulsed actuator power supply, an ultrasonic pulser/receiver, and a host machine that executes the above method.
Method and apparatus for generating motor current spectra to enhance motor system fault detection
Linehan, Daniel J.; Bunch, Stanley L.; Lyster, Carl T.
1995-01-01
A method and circuitry for sampling periodic amplitude modulations in a nonstationary periodic carrier wave to determine frequencies in the amplitude modulations. The method and circuit are described in terms of an improved motor current signature analysis. The method ensures that the sampled data set contains an exact whole number of carrier wave cycles by defining the rate at which samples of motor current data are collected. The circuitry ensures that a sampled data set containing stationary carrier waves is recreated from the analog motor current signal containing nonstationary carrier waves by conditioning the actual sampling rate to adjust with the frequency variations in the carrier wave. After the sampled data is transformed to the frequency domain via the Discrete Fourier Transform, the frequency distribution in the discrete spectra of those components due to the carrier wave and its harmonics will be minimized so that signals of interest are more easily analyzed.
Martins, Angélica Rocha; Talhavini, Márcio; Vieira, Maurício Leite; Zacca, Jorge Jardim; Braga, Jez Willian Batista
2017-08-15
The discrimination of whisky brands and counterfeit identification were performed by UV-Vis spectroscopy combined with partial least squares for discriminant analysis (PLS-DA). In the proposed method all spectra were obtained with no sample preparation. The discrimination models were built with the employment of seven whisky brands: Red Label, Black Label, White Horse, Chivas Regal (12years), Ballantine's Finest, Old Parr and Natu Nobilis. The method was validated with an independent test set of authentic samples belonging to the seven selected brands and another eleven brands not included in the training samples. Furthermore, seventy-three counterfeit samples were also used to validate the method. Results showed correct classification rates for genuine and false samples over 98.6% and 93.1%, respectively, indicating that the method can be helpful for the forensic analysis of whisky samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
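A minimal sketch of PLS-DA in the spirit of this record, using a single latent variable on synthetic "spectra" (real applications, including this study, use several components, cross-validation and an independent test set; the data below are simulated, not the whisky spectra):

```python
import numpy as np

def pls1_da_fit(X, y):
    """Fit a one-component PLS1 discriminant.

    X: (n_samples, n_wavelengths) spectra; y: labels in {-1, +1}.
    Returns (x_mean, y_mean, w, q) for use in pls1_da_predict.
    Deliberately minimal: real PLS-DA selects multiple latent
    variables by cross-validation.
    """
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    w = Xc.T @ yc
    w /= np.linalg.norm(w)          # weight vector (loading direction)
    t = Xc @ w                      # scores
    q = (t @ yc) / (t @ t)          # regression of y on the scores
    return x_mean, y_mean, w, q

def pls1_da_predict(model, X):
    x_mean, y_mean, w, q = model
    y_hat = y_mean + q * ((X - x_mean) @ w)
    return np.where(y_hat >= 0, 1, -1)

# Synthetic two-brand example: class +1 has an extra absorbance band.
rng = np.random.default_rng(1)
n, p = 40, 200
X = rng.normal(0, 0.05, (n, p))
y = np.r_[np.ones(n // 2), -np.ones(n // 2)]
X[: n // 2, 80:100] += 0.5        # the discriminating band
model = pls1_da_fit(X, y)
acc = (pls1_da_predict(model, X) == y).mean()
print(acc)
```

Multi-class discrimination, as needed for seven brands, is usually handled by regressing on a one-hot label matrix (PLS2) and taking the arg-max of the predicted columns.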
La2-xSrxCuO4-δ superconducting samples prepared by the wet-chemical method
NASA Astrophysics Data System (ADS)
Loose, A.; Gonzalez, J. L.; Lopez, A.; Borges, H. A.; Baggio-Saitovitch, E.
2009-10-01
In this work, we report on the physical properties of good-quality polycrystalline superconducting samples of La2-xSrxCu1-yZnyO4-δ (y = 0, 0.02) prepared by a wet-chemical method, focusing on the temperature dependence of the critical current. Using the wet-chemical method, we were able to produce samples with improved homogeneity compared to the solid-state method. A complete set of samples with several carrier concentrations, ranging from the underdoped (strontium concentration x ≈ 0.05) to the highly overdoped (x ≈ 0.25) region, were prepared and investigated. The X-ray diffraction analysis, zero-field-cooling magnetization and electrical resistivity measurements were reported on earlier. The structural parameters of the prepared samples seem to be slightly modified by the preparation method and their critical temperatures were lower than reported in the literature. The temperature dependence of the critical current was explained by a theoretical model which took the granular structure of the samples into account.
Correcting for Sample Contamination in Genotype Calling of DNA Sequence Data
Flickinger, Matthew; Jun, Goo; Abecasis, Gonçalo R.; Boehnke, Michael; Kang, Hyun Min
2015-01-01
DNA sample contamination is a frequent problem in DNA sequencing studies and can result in genotyping errors and reduced power for association testing. We recently described methods to identify within-species DNA sample contamination based on sequencing read data, showed that our methods can reliably detect and estimate contamination levels as low as 1%, and suggested strategies to identify and remove contaminated samples from sequencing studies. Here we propose methods to model contamination during genotype calling as an alternative to removal of contaminated samples from further analyses. We compare our contamination-adjusted calls to calls that ignore contamination and to calls based on uncontaminated data. We demonstrate that, for moderate contamination levels (5%–20%), contamination-adjusted calls eliminate 48%–77% of the genotyping errors. For lower levels of contamination, our contamination correction methods produce genotypes nearly as accurate as those based on uncontaminated data. Our contamination correction methods are useful generally, but are particularly helpful for sample contamination levels from 2% to 20%. PMID:26235984
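The core idea of modeling contamination during genotype calling can be sketched as a binomial mixture over the intended sample's and the contaminant's genotypes (an illustrative simplification, not the authors' full model; a real caller would also use allele-frequency priors and per-base qualities):

```python
from math import comb, log

def alt_read_prob(g_sample, g_contam, alpha, eps=0.01):
    """Probability that a read shows the ALT allele at a biallelic site.

    g_sample, g_contam: ALT-allele dosages (0, 1 or 2) of the intended
    sample and the contaminant; alpha: contamination fraction;
    eps: per-base sequencing error rate. Illustrative mixture only.
    """
    p_true = (1 - alpha) * g_sample / 2 + alpha * g_contam / 2
    return p_true * (1 - eps) + (1 - p_true) * eps

def genotype_loglik(k_alt, n_reads, g_sample, g_contam, alpha):
    """Binomial log-likelihood of observing k_alt ALT reads."""
    p = alt_read_prob(g_sample, g_contam, alpha)
    return (log(comb(n_reads, k_alt))
            + k_alt * log(p) + (n_reads - k_alt) * log(1 - p))

def call_genotype(k_alt, n_reads, alpha):
    """Maximum-likelihood sample genotype, handling the contaminant
    genotype crudely by taking the best-scoring pair."""
    best = max(((g_s, g_c) for g_s in (0, 1, 2) for g_c in (0, 1, 2)),
               key=lambda gc: genotype_loglik(k_alt, n_reads, *gc, alpha))
    return best[0]

# 30x coverage, 4 ALT reads: with 10% contamination modelled, the
# ALT reads are attributed to the contaminant and HOM-REF is called.
print(call_genotype(k_alt=4, n_reads=30, alpha=0.10))  # → 0
```

Ignoring contamination (alpha = 0) would push such sites toward spurious heterozygous calls, which is the error mode the contamination-adjusted calls eliminate.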
Flagging versus dragging as sampling methods for nymphal Ixodes scapularis (Acari: Ixodidae)
Rulison, Eric L.; Kuczaj, Isis; Pang, Genevieve; Hickling, Graham J.; Tsao, Jean I.; Ginsberg, Howard S.
2013-01-01
The nymphal stage of the blacklegged tick, Ixodes scapularis (Acari: Ixodidae), is responsible for most transmission of Borrelia burgdorferi, the etiologic agent of Lyme disease, to humans in North America. From 2010 to fall of 2012, we compared two commonly used techniques, flagging and dragging, as sampling methods for nymphal I. scapularis at three sites, each with multiple sampling arrays (grids), in the eastern and central United States. Flagging and dragging collected comparable numbers of nymphs, with no consistent differences between methods. Dragging collected more nymphs than flagging in some samples, but these differences were not consistent among sites or sampling years. The ratio of nymphs collected by flagging vs dragging was not significantly related to shrub density, so habitat type did not have a strong effect on the relative efficacy of these methods. Therefore, although dragging collected more ticks in a few cases, the numbers collected by each method were so variable that neither technique had a clear advantage for sampling nymphal I. scapularis.
Cockburn, Glenn; Sánchez-Tójar, Alfredo; Løvlie, Hanne; Schroeder, Julia
2017-01-01
Birds are model organisms in sperm biology. Previous work in zebra finches suggested that sperm sampled from males' faeces and ejaculates do not differ in size. Here, we tested this assumption in a captive population of house sparrows, Passer domesticus. We compared sperm length in samples from three collection techniques: female dummy, faecal and abdominal massage samples. We found that sperm were significantly shorter in faecal than abdominal massage samples, which was explained by shorter heads and midpieces, but not flagella. This result might indicate that faecal-sampled sperm could be less mature than sperm collected by abdominal massage. The female dummy method resulted in an insufficient number of experimental ejaculates because most males ignored it. In light of these results, we recommend using abdominal massage as the preferred method for avian sperm sampling. Where avian sperm cannot be collected by abdominal massage alone, we advise controlling for the sperm sampling protocol statistically. PMID:28813481
Methods for sample size determination in cluster randomized trials
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-01-01
Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
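The "simplest approach" described in the Background, inflating an individually randomized sample size by the design effect 1 + (m − 1)ρ, can be sketched as follows (assumptions: two-arm comparison of means, two-sided α = 0.05, 80% power, equal cluster sizes m; quantiles are hard-coded):

```python
from math import ceil

Z_ALPHA = 1.959964   # standard normal quantile, two-sided alpha = 0.05
Z_BETA = 0.841621    # standard normal quantile, power = 80%

def n_per_arm_individual(delta, sigma):
    """Per-arm sample size for a two-sample comparison of means."""
    return 2 * ((Z_ALPHA + Z_BETA) * sigma / delta) ** 2

def crt_sample_size(delta, sigma, m, icc):
    """Inflate the individually randomized size by the design effect
    DE = 1 + (m - 1) * icc for clusters of size m; returns
    (participants per arm, clusters per arm), rounded up."""
    de = 1 + (m - 1) * icc
    n_clustered = n_per_arm_individual(delta, sigma) * de
    return ceil(n_clustered), ceil(n_clustered / m)

# Detect a 0.25 SD effect with clusters of 20 and ICC = 0.05:
n_part, n_clust = crt_sample_size(delta=0.25, sigma=1.0, m=20, icc=0.05)
print(n_part, n_clust)  # → 490 25
```

With ICC = 0 the design effect is 1 and the clustered size reduces to the individually randomized one; the paper's point is that this simple inflation breaks down for unequal cluster sizes, attrition, or repeated measures.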
Carter, James L.; Resh, Vincent H.
2001-01-01
A survey of methods used by US state agencies for collecting and processing benthic macroinvertebrate samples from streams was conducted by questionnaire; 90 responses were received and used to describe trends in methods. The responses represented an estimated 13,000-15,000 samples collected and processed per year. Kicknet devices were used in 64.5% of the methods; other sampling devices included fixed-area samplers (Surber and Hess), artificial substrates (Hester-Dendy and rock baskets), grabs, and dipnets. Regional differences existed, e.g., the 1-m kicknet was used more often in the eastern US than in the western US. Mesh sizes varied among programs but 80.2% of the methods used a mesh size between 500 and 600 μm. Mesh size variations within US Environmental Protection Agency regions were large, with size differences ranging from 100 to 700 μm. Most samples collected were composites; the mean area sampled was 1.7 m². Samples rarely were collected using a random method (4.7%); most samples (70.6%) were collected using "expert opinion", which may make data obtained operator-specific. Only 26.3% of the methods sorted all the organisms from a sample; the remainder subsampled in the laboratory. The most common method of subsampling was to remove 100 organisms (range = 100-550). The magnification used for sorting ranged from 1× (sorting by eye) to 30×, which results in inconsistent separation of macroinvertebrates from detritus. In addition to subsampling, 53% of the methods sorted large/rare organisms from a sample. The taxonomic level used for identifying organisms varied among taxa; Ephemeroptera, Plecoptera, and Trichoptera were generally identified to a finer taxonomic resolution (genus and species) than other taxa. Because there currently exists a large range of field and laboratory methods used by state programs, calibration among all programs to increase data comparability would be exceptionally challenging. 
However, because many techniques are shared among methods, limited testing could be designed to evaluate whether procedural differences affect the ability to determine levels of environmental impairment using benthic macroinvertebrate communities.
2015-01-01
Two independent sampling and analytical methods for ortho-phthalaldehyde (OPA) in air have been developed, evaluated and compared: (1) a reagent-coated solid sorbent HPLC-UV method and (2) an impinger-fluorescence method. In the first method, air sampling is conducted at 1.0 L min−1 with a sampler containing 350 mg of silica gel coated with 1 mg of acidified 2,4-dinitrophenylhydrazine (DNPH). After sampling, excess DNPH in ethyl acetate is added to the sampler prior to storage for 68 hours. The OPA-DNPH derivative is eluted with 4.0 mL of dimethyl sulfoxide (DMSO) for measurement by HPLC with a UV detector set at 355 nm. The estimated detection limit is 0.016 µg per sample or 0.067 µg m−3 (0.012 ppb) for a 240 L air sample. Recoveries of vapor spikes at levels of 1.2 to 6.2 µg were 96 to 101%. Recoveries of spikes as mixtures of vapor and condensation aerosols were 97 to 100%. In the second method, air sampling is conducted at 1.0 L min−1 with a midget impinger containing 10 mL of DMSO solution containing N-acetyl-l-cysteine and ethylenediamine. The fluorescence reading is taken 80 min after the completion of air sampling. Since the time of taking the fluorescence reading is critical, the reading is taken with a portable fluorometer. The estimated detection limit is 0.024 µg per sample or 0.1 µg m−3 (0.018 ppb) for a 240 L air sample. Recoveries of OPA vapor spikes at levels of 1.4 to 5.0 µg per sample were 97 to 105%. Recoveries of spikes as mixtures of vapors and condensation aerosols were 95 to 99%. The collection efficiency for a mixture of vapor and condensation aerosol was 99.4%. The two methods were compared side-by-side in a generation system constructed for producing controlled atmospheres of OPA vapor in air. Average air concentrations of OPA vapor found by both methods agreed within ±10%. PMID:26346658
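The reported detection limits follow from simple unit arithmetic: the mass limit divided by the sampled air volume gives µg m−3, and conversion to ppb(v) uses a molar volume of 24.45 L mol−1 at 25 °C and 1 atm together with OPA's molar mass of 134.13 g mol−1 (assumed standard conditions):

```python
MOLAR_VOLUME_L = 24.45     # L/mol of ideal gas at 25 degC, 1 atm
MW_OPA = 134.13            # g/mol, ortho-phthalaldehyde

def air_conc_ug_m3(mass_ug, volume_l):
    """Mass-per-sample detection limit -> airborne concentration."""
    return mass_ug / (volume_l / 1000.0)

def ug_m3_to_ppb(c_ug_m3, mw):
    """Convert ug/m3 to parts-per-billion by volume."""
    return c_ug_m3 * MOLAR_VOLUME_L / mw

c1 = air_conc_ug_m3(0.016, 240)      # HPLC-UV method
c2 = air_conc_ug_m3(0.024, 240)      # impinger-fluorescence method
print(round(c1, 3), round(ug_m3_to_ppb(c1, MW_OPA), 3))  # 0.067 0.012
print(round(c2, 3), round(ug_m3_to_ppb(c2, MW_OPA), 3))  # 0.1 0.018
```

Both pairs of printed values reproduce the limits quoted in the abstract.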
NASA Astrophysics Data System (ADS)
Nasir, N. F.; Mirus, M. F.; Ismail, M.
2017-09-01
Crude glycerol produced from the transesterification reaction has limited usage if it does not undergo a purification process; it also contains excess methanol, catalyst and soap. Conventional purification of crude glycerol involves high costs and complex processes. This study aimed to determine the effects of two purification methods: a direct method (comprising ion exchange and methanol removal steps) and a multistep method (comprising neutralization, filtration, ion exchange and methanol removal steps). Two crude glycerol samples were investigated: a self-produced sample from the transesterification of palm oil, and a sample obtained from a biodiesel plant. Samples were analysed using Fourier Transform Infrared Spectroscopy, Gas Chromatography and High Performance Liquid Chromatography. The results for both samples after purification showed that pure glycerol was successfully produced and fatty acid salts were eliminated. The results also indicated the absence of methanol in both samples after the purification process. In short, the combination of four purification steps contributed to a higher quality of glycerol. The multistep purification method gave a better result than the direct method, as the neutralization and filtration steps helped remove most of the excess salt, fatty acid and catalyst.
PhyloChip™ microarray comparison of sampling methods used for coral microbial ecology
Kellogg, Christina A.; Piceno, Yvette M.; Tom, Lauren M.; DeSantis, Todd Z.; Zawada, David G.; Andersen, Gary L.
2012-01-01
Interest in coral microbial ecology has been increasing steadily over the last decade, yet standardized methods of sample collection still have not been defined. Two methods were compared for their ability to sample coral-associated microbial communities: tissue punches and foam swabs, the latter being less invasive and preferred by reef managers. Four colonies of star coral, Montastraea annularis, were sampled in the Dry Tortugas National Park (two healthy and two with white plague disease). The PhyloChip™ G3 microarray was used to assess microbial community structure of amplified 16S rRNA gene sequences. Samples clustered based on methodology rather than coral colony. Punch samples from healthy and diseased corals were distinct. All swab samples clustered closely together with the seawater control and did not group according to the health state of the corals. Although more microbial taxa were detected by the swab method, there is a much larger overlap between the water control and swab samples than punch samples, suggesting some of the additional diversity is due to contamination from water absorbed by the swab. While swabs are useful for noninvasive studies of the coral surface mucus layer, these results show that they are not optimal for studies of coral disease.
PhyloChip™ microarray comparison of sampling methods used for coral microbial ecology.
Kellogg, Christina A; Piceno, Yvette M; Tom, Lauren M; DeSantis, Todd Z; Zawada, David G; Andersen, Gary L
2012-01-01
Interest in coral microbial ecology has been increasing steadily over the last decade, yet standardized methods of sample collection still have not been defined. Two methods were compared for their ability to sample coral-associated microbial communities: tissue punches and foam swabs, the latter being less invasive and preferred by reef managers. Four colonies of star coral, Montastraea annularis, were sampled in the Dry Tortugas National Park (two healthy and two with white plague disease). The PhyloChip™ G3 microarray was used to assess microbial community structure of amplified 16S rRNA gene sequences. Samples clustered based on methodology rather than coral colony. Punch samples from healthy and diseased corals were distinct. All swab samples clustered closely together with the seawater control and did not group according to the health state of the corals. Although more microbial taxa were detected by the swab method, there is a much larger overlap between the water control and swab samples than punch samples, suggesting some of the additional diversity is due to contamination from water absorbed by the swab. While swabs are useful for noninvasive studies of the coral surface mucus layer, these results show that they are not optimal for studies of coral disease. Published by Elsevier B.V.
Appel, David I.; Brinda, Bryan; Markowitz, John S.; Newcorn, Jeffrey H.; Zhu, Hao-Jie
2012-01-01
A simple, rapid and sensitive method for quantification of atomoxetine by liquid chromatography-tandem mass spectrometry (LC-MS/MS) was developed. This assay represents the first LC-MS/MS quantification method for atomoxetine utilizing electrospray ionization. Deuterated atomoxetine (d3-atomoxetine) was adopted as the internal standard. Direct protein precipitation was utilized for sample preparation. This method was validated for both human plasma and in vitro cellular samples. The lower limit of quantification was 3 ng/ml and 10 nM for human plasma and cellular samples, respectively. The calibration curves were linear within the ranges of 3 ng/ml to 900 ng/ml and 10 nM to 10 μM for human plasma and cellular samples, respectively (r2 > 0.999). The intra- and inter-day assay accuracy and precision were evaluated using quality control samples at 3 different concentrations in both human plasma and cellular lysate. Sample run stability, assay selectivity, matrix effect, and recovery were also successfully demonstrated. The present assay is superior to previously published LC-MS and LC-MS/MS methods in terms of sensitivity or the simplicity of sample preparation. This assay is applicable to the analysis of atomoxetine in both human plasma and in vitro cellular samples. PMID:22275222
Nunes, Rita G; Hajnal, Joseph V
2018-06-01
Point spread function (PSF) mapping enables estimating the displacement fields required for distortion correction of echo planar images. Recently, a highly accelerated approach was introduced for estimating displacements from the phase slope of under-sampled PSF mapping data. Sampling schemes with varying spacing were proposed, requiring stepwise phase unwrapping. To avoid unwrapping errors, an alternative approach applying the concept of finite rate of innovation to PSF mapping (FRIP) is introduced, using a pattern search strategy to locate the PSF peak, and the two methods are compared. Fully sampled PSF data were acquired in six subjects at 3.0 T, and distortion maps were estimated after retrospective under-sampling. The two methods were compared for both previously published and newly optimized sampling patterns. Prospectively under-sampled data were also acquired. Shift maps were estimated and deviations relative to the fully sampled reference map were calculated. The best performance was achieved when using FRIP with a previously proposed sampling scheme. The two methods were comparable for the remaining schemes. The displacement field errors tended to be lower as the number of samples or their spacing increased. A robust method for estimating the position of the PSF peak has been introduced.
Results from the FIN-2 formal comparison
NASA Astrophysics Data System (ADS)
Connolly, Paul; Hoose, Corinna; Liu, Xiaohong; Moehler, Ottmar; Cziczo, Daniel; DeMott, Paul
2017-04-01
During the Fifth International Ice Nucleation Workshop (FIN-2) at the AIDA Ice Nucleation facility in Karlsruhe, Germany in March 2015, a formal comparison of ice nucleation measurement methods was conducted. During the experiments the samples of ice nucleating particles were not revealed to the instrument scientists; hence this was referred to as a "blind comparison". The two samples used were later revealed to be Arizona Test Dust and an Argentina soil sample. For these two samples, seven mobile ice nucleating particle counters sampled directly from the AIDA chamber or from the aerosol preparation chamber at specified temperatures, whereas filter samples were taken for two offline deposition nucleation instruments. Wet suspension methods for determining INP concentrations were also used, with 10 different methods employed. For the wet suspension methods, experiments were conducted using INPs collected from the air inside the chambers (impinger sampling) and INPs taken from the bulk samples (vial sampling). Direct comparisons of the ice nucleating particle concentrations are reported, as well as derived ice nucleation active site densities. The study highlights the difficulties in performing such analyses, but generally indicates that there is reasonable agreement between the wet suspension techniques. It is noted that the ice nucleation efficiency derived from the AIDA chamber (quantified using the ice active surface site density approach) is higher than that for the cold stage techniques. This was true both for the Argentina soil sample and, to a lesser extent, for the Arizona Test Dust sample. Other interesting effects were noted: for the ATD, the impinger sampling demonstrated higher INP efficiency at higher temperatures (>255 K) than the vial sampling, but agreed at lower temperatures (<255 K), whereas the opposite was true for the Argentina soil sample.
The results are analysed to better understand the performance of the various techniques and to address any size-sorting effects and/or sampling line losses.
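The ice active surface site density mentioned above can be derived from droplet-freezing data. Below is a minimal Python sketch using the standard singular description n_s = -ln(1 - f_ice)/A; the exact workup used by each FIN-2 instrument may differ, and the example numbers are purely illustrative:

```python
import math

def active_site_density(frozen_fraction, area_per_droplet_cm2):
    """Ice-active surface site density n_s (sites per cm^2) from the
    frozen fraction at a given temperature, singular description:
    n_s = -ln(1 - f_ice) / A."""
    if not 0.0 <= frozen_fraction < 1.0:
        raise ValueError("frozen fraction must lie in [0, 1)")
    return -math.log(1.0 - frozen_fraction) / area_per_droplet_cm2

# e.g. half the droplets frozen, 1e-6 cm^2 of dust surface per droplet
ns = active_site_density(0.5, 1e-6)
```

Comparing n_s curves rather than raw INP concentrations is what allows instruments sampling very different aerosol loads to be placed on a common scale.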
Gao, Yi Qin
2008-04-07
Here, we introduce a simple self-adaptive computational method to enhance the sampling in energy, configuration, and trajectory spaces. The method makes use of two strategies. It first uses a non-Boltzmann distribution method to enhance the sampling in the phase space, in particular, in the configuration space. The application of this method leads to a broad energy distribution in a large energy range and a quickly converged sampling of molecular configurations. In the second stage of simulations, the configuration space of the system is divided into a number of small regions according to preselected collective coordinates. An enhanced sampling of reactive transition paths is then performed in a self-adaptive fashion to accelerate kinetics calculations.
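The first-stage idea (sampling a flattened, non-Boltzmann distribution and then reweighting back to Boltzmann averages) can be illustrated in a few lines. This is not the authors' algorithm, just a generic sketch on a 1D double-well potential with illustrative parameters:

```python
import math
import random

def U(x):
    # 1D double-well potential with a barrier at x = 0
    return (x * x - 1.0) ** 2

def metropolis(n_steps, beta, scale, seed=0):
    """Metropolis sampling of the scaled potential U/scale.

    scale > 1 flattens barriers (a non-Boltzmann sampling distribution),
    so the walker crosses between wells far more often."""
    rng = random.Random(seed)
    x, samples = -1.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-0.5, 0.5)
        if rng.random() < math.exp(-beta * (U(x_new) - U(x)) / scale):
            x = x_new
        samples.append(x)
    return samples

def reweighted_mean(samples, beta, scale, observable):
    # weight w_i = exp(-beta*U_i) / exp(-beta*U_i/scale) recovers the
    # Boltzmann average from the non-Boltzmann samples
    ws = [math.exp(-beta * U(x) * (1.0 - 1.0 / scale)) for x in samples]
    return sum(w * observable(x) for w, x in zip(ws, samples)) / sum(ws)

samples = metropolis(50000, beta=4.0, scale=5.0)
crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
mean_x2 = reweighted_mean(samples, 4.0, 5.0, lambda x: x * x)
```

The flattened walk crosses the barrier many times where an unbiased walk at beta = 4 would rarely do so, yet the reweighted average still estimates the Boltzmann expectation.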
Analysis of spreadable cheese by Raman spectroscopy and chemometric tools.
Oliveira, Kamila de Sá; Callegaro, Layce de Souza; Stephani, Rodrigo; Almeida, Mariana Ramos; de Oliveira, Luiz Fernando Cappa
2016-03-01
In this work, FT-Raman spectroscopy was explored to evaluate spreadable cheese samples. A partial least squares discriminant analysis (PLS-DA) was employed to identify the spreadable cheese samples containing starch. To build the models, two types of samples were used: commercial samples and samples manufactured in local industries. The supervised classification method PLS-DA was employed to classify the samples as adulterated or starch-free. Multivariate regression was performed using the partial least squares method to quantify the starch in the spreadable cheese. The limit of detection obtained for the model was 0.34% (w/w) and the limit of quantification was 1.14% (w/w). The reliability of the models was evaluated by determining the confidence interval, which was calculated using the bootstrap re-sampling technique. The results show that the classification models can be used to complement classical analysis and as screening methods. Copyright © 2015 Elsevier Ltd. All rights reserved.
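The abstract does not state how its detection and quantification limits were derived; a common convention from a calibration curve, sketched below with hypothetical numbers, is LOD = 3.3·σ/slope and LOQ = 10·σ/slope, where σ is the standard deviation of the blank (or of the regression residuals):

```python
def detection_limits(sigma, slope):
    """Limit of detection and limit of quantification from a calibration
    line, using the common 3.3*sigma/slope and 10*sigma/slope criteria."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# hypothetical blank noise and calibration slope, for illustration only
lod, loq = detection_limits(sigma=0.1, slope=0.95)
```

Note that the paper's reported LOQ/LOD ratio (1.14/0.34 ≈ 3.4) is close to the 10/3.3 ratio this convention implies, but the authors may have used a different criterion.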
Rocha, C F D; Van Sluys, M; Hatano, F H; Boquimpani-Freitas, L; Marra, R V; Marques, R V
2004-11-01
Studies on anurans in restinga habitats are few and, as a result, there is little information on which methods are more efficient for sampling them in this environment. Ten methods are usually used for sampling anuran communities in tropical and sub-tropical areas. In this study we evaluate which methods are more appropriate for this purpose in the restinga environment of Parque Nacional da Restinga de Jurubatiba. We analyzed six methods among those usually used for anuran sampling. For each method, we recorded the total amount of time spent (in min.), the number of researchers involved, and the number of species captured. We calculated a capture efficiency index (the time necessary for a researcher to capture an individual frog) in order to make the data obtained comparable. Of the methods analyzed, the species inventory (9.7 min/searcher/ind. - MSI; richness = 6; abundance = 23) and the breeding site survey (9.5 MSI; richness = 4; abundance = 22) were the most efficient. The visual encounter inventory (45.0 MSI) and patch sampling (65.0 MSI) methods were of comparatively lower efficiency in the restinga, whereas the plot sampling and pit-fall traps with drift-fence methods resulted in no frog captures. We conclude that there is a considerable difference in the efficiency of the methods used in the restinga environment, that the complete species inventory method is highly efficient for sampling frogs in the restinga studied, and that it may be so in other restinga environments. Methods that are usually efficient in forested areas seem to be of little value in open restinga habitats.
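The capture efficiency index above can be read as person-minutes of effort per captured frog; this is one plausible interpretation, since the abstract does not spell out the formula, and the effort figures below are hypothetical:

```python
def capture_efficiency(total_minutes, n_searchers, n_captured):
    """Minutes a single searcher would need to capture one frog (MSI):
    total person-minutes of effort divided by the number of captures.
    Lower values mean a more efficient sampling method."""
    if n_captured == 0:
        return float("inf")  # e.g. plot sampling / pit-fall traps: no captures
    return total_minutes * n_searchers / n_captured

# hypothetical effort figures, for illustration only
msi = capture_efficiency(total_minutes=120, n_searchers=2, n_captured=24)
```

The infinite value for zero captures mirrors the paper's finding that two of the six methods yielded no frogs at all.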
Chuang, Jane C; Emon, Jeanette M Van; Durnford, Joyce; Thomas, Kent
2005-09-15
An enzyme-linked immunosorbent assay (ELISA) method was developed to quantitatively measure 2,4-dichlorophenoxyacetic acid (2,4-D) in human urine. Samples were diluted (1:5) with phosphate-buffered saline containing 0.05% Tween and 0.02% sodium azide, with analysis by a 96-microwell plate immunoassay format. No cleanup was required, as the dilution step minimized sample interferences. Fifty urine samples were received without identifiers from a subset of pesticide applicators and their spouses in an EPA pesticide exposure study (PES) and analyzed by the ELISA method and a conventional gas chromatography/mass spectrometry (GC/MS) procedure. For the GC/MS analysis, urine samples were extracted with acidic dichloromethane (DCM), methylated with diazomethane, and fractionated on a Florisil solid phase extraction (SPE) column prior to GC/MS detection. The percent relative standard deviation (%R.S.D.) of the 96-microwell plate triplicate assays ranged from 1.2 to 22% for the urine samples. Day-to-day variation of the assay results was within +/-20%. Quantitative recoveries (>70%) of 2,4-D were obtained for the spiked urine samples by the ELISA method. Quantitative recoveries (>80%) of 2,4-D were also obtained for these samples by the GC/MS procedure. The overall method precision for these samples was within +/-20% for both the ELISA and GC/MS methods. The estimated quantification limit for 2,4-D in urine was 30 ng/mL by ELISA and 0.2 ng/mL by GC/MS. The higher quantification limit of the ELISA method is partly due to the required 1:5 dilution to remove the urine sample matrix effect. The GC/MS method can accommodate a 10:1 concentration factor (10 mL of urine concentrated into 1 mL of organic solvent for analysis) but requires extraction, methylation, and cleanup on a solid phase column. The immunoassay and GC/MS data were highly correlated, with a correlation coefficient of 0.94 and a slope of 1.00.
Favorable results between the two methods were achieved despite the vast differences in sample preparation. Results indicated that the ELISA method could be used as a high throughput, quantitative monitoring tool for human urine samples to identify individuals with exposure to 2,4-D above the typical background levels.
Ahmed, W; Stewart, J; Gardner, T; Powell, D; Brooks, P; Sullivan, D; Tindale, N
2007-08-01
Library-dependent (LD) (biochemical fingerprinting of Escherichia coli and enterococci) and library-independent (LI) (PCR detection of human-specific biomarkers) methods were used to detect human faecal pollution in three non-sewered catchments. In all, 550 E. coli isolates and 700 enterococci isolates were biochemically fingerprinted from 18 water samples and compared with metabolic fingerprint libraries of 4508 E. coli and 4833 enterococci isolates. E. coli fingerprints identified human unique biochemical phenotypes (BPTs) in nine out of 18 water samples; similarly, enterococci fingerprints identified human faecal pollution in 10 water samples. Seven samples were tested by PCR for the detection of biomarkers. Human-specific HF134 Bacteroides and enterococci surface protein (esp) biomarkers were detected in five samples. Four samples were also positive for the HF183 Bacteroides biomarker. The combination of biomarkers detected human faecal pollution in six out of seven water samples. Of the seven samples analysed for both the indicators/markers, at least one indicator/marker was detected in every sample. Four of the seven PCR-positive samples were also positive for one of the human-specific E. coli or enterococci BPTs. The results indicated human faecal pollution in the studied sub-catchments after storm events. The LD and LI methods used in this study complemented each other and provided additional information regarding the polluting sources when one method failed to detect human faecal pollution. Therefore, it is recommended that a combination of methods be used to identify the source(s) of faecal pollution where possible.
Sample selection via angular distance in the space of the arguments of an artificial neural network
NASA Astrophysics Data System (ADS)
Fernández Jaramillo, J. M.; Mayerle, R.
2018-05-01
In the construction of an artificial neural network (ANN), proper splitting of the available samples plays a major role in the training process. This selection of subsets for training, testing and validation affects the generalization ability of the neural network. The number of samples also has an impact on the time required for the design and training of the ANN. This paper introduces an efficient and simple method for reducing the set of samples used for training a neural network. The method reduces the time required to calculate the network coefficients, while keeping the diversity of the training set and avoiding overtraining of the ANN due to the presence of similar samples. The proposed method is based on the calculation of the angle between two vectors, each one representing one input of the neural network. When the angle formed between two samples is smaller than a defined threshold, only one of them is accepted for training. The accepted inputs are scattered throughout the sample space. Tidal records are used to demonstrate the proposed method. The results of a cross-validation show that with few inputs the outputs are inaccurate and depend on the selection of the first sample, but as the number of inputs increases the accuracy improves and the differences among scenarios with different starting samples are greatly reduced. A comparison with the K-means clustering algorithm shows that, for this application, the proposed method produces a more accurate network with a smaller number of samples.
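The selection rule described above can be sketched in a few lines, assuming each sample is a plain numeric vector; the greedy single-pass acceptance and the 10-degree threshold below are illustrative choices, not necessarily those of the paper:

```python
import math

def angle(u, v):
    """Angle in radians between two sample vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    # clamp to guard against floating-point drift outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, dot / (norm_u * norm_v))))

def select_samples(samples, threshold):
    """Keep a sample only if its angle to every already-accepted
    sample exceeds the threshold (in radians)."""
    accepted = []
    for s in samples:
        if all(angle(s, a) > threshold for a in accepted):
            accepted.append(s)
    return accepted

data = [(1.0, 0.0), (0.99, 0.05), (0.0, 1.0), (0.7, 0.7)]
kept = select_samples(data, threshold=math.radians(10))  # drops near-duplicates
```

In the example the second vector is nearly parallel to the first (about 3 degrees apart) and is discarded, which is exactly the redundancy-removal effect the paper relies on.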
Jantzi, Sarah C; Almirall, José R
2011-07-01
A method for the quantitative elemental analysis of surface soil samples using laser-induced breakdown spectroscopy (LIBS) was developed and applied to the analysis of bulk soil samples for discrimination between specimens. The use of a 266 nm laser for LIBS analysis is reported for the first time in forensic soil analysis. Optimization of the LIBS method is discussed, and the results compared favorably to a laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) method previously developed. Precision for both methods was <10% for most elements. LIBS limits of detection were <33 ppm and bias <40% for most elements. In a proof of principle study, the LIBS method successfully discriminated samples from two different sites in Dade County, FL. Analysis of variance, Tukey's post hoc test and Student's t test resulted in 100% discrimination with no type I or type II errors. Principal components analysis (PCA) resulted in clear groupings of the two sites. A correct classification rate of 99.4% was obtained with linear discriminant analysis using leave-one-out validation. Similar results were obtained when the same samples were analyzed by LA-ICP-MS, showing that LIBS can provide similar information to LA-ICP-MS. In a forensic sampling/spatial heterogeneity study, the variation between sites, between sub-plots, between samples and within samples was examined on three similar Dade sites. The closer the sampling locations, the closer the grouping on a PCA plot and the higher the misclassification rate. These results underscore the importance of careful sampling for geographic site characterization.
Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.
Wang, Zuozhen
2018-01-01
The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively smaller sample. In this paper, sample size estimation by the bootstrap procedure for comparing two parallel-design arms with continuous data is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculation by mathematical formulas (under the normal distribution assumption) for the identical data is also carried out. Consequently, the power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods based on data that violate the normal distribution assumption. To accommodate the features of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the process of bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation at the outset, and that the same statistical method as used in the subsequent statistical analysis be employed for each bootstrap sample during the course of bootstrap sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial plans to extrapolate.
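The core bootstrap power loop can be sketched as follows, here with a plain normal (z) approximation for the two-arm inequality test rather than the exact t or Wilcoxon machinery the paper uses; the pilot data are simulated and all sizes are illustrative:

```python
import random
import statistics

def bootstrap_power(pilot_a, pilot_b, n_per_arm, n_boot=500, seed=1):
    """Estimate power at a candidate per-arm sample size by resampling
    the pilot data with replacement and counting significant comparisons."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_boot):
        a = [rng.choice(pilot_a) for _ in range(n_per_arm)]
        b = [rng.choice(pilot_b) for _ in range(n_per_arm)]
        se = (statistics.variance(a) / n_per_arm
              + statistics.variance(b) / n_per_arm) ** 0.5
        z = abs(statistics.mean(a) - statistics.mean(b)) / se
        if z > 1.96:  # two-sided alpha = 0.05, normal approximation
            hits += 1
    return hits / n_boot

rng = random.Random(0)
pilot_a = [rng.gauss(0.0, 1.0) for _ in range(30)]   # simulated pilot arm A
pilot_b = [rng.gauss(1.5, 1.0) for _ in range(30)]   # simulated pilot arm B
power = bootstrap_power(pilot_a, pilot_b, n_per_arm=20)
```

To find the required sample size, one would call this for increasing `n_per_arm` until the estimated power reaches the target (e.g. 0.8), substituting the intended final analysis (such as the Wilcoxon test) for the z comparison inside the loop.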
[Recent advances in sample preparation methods of plant hormones].
Wu, Qian; Wang, Lus; Wu, Dapeng; Duan, Chunfeng; Guan, Yafeng
2014-04-01
Plant hormones are a group of naturally occurring trace substances which play a crucial role in controlling plant development, growth and environmental response. With the development of chromatography and mass spectrometry techniques, chromatographic analysis has become a widely used approach for plant hormone analysis. Among the steps of chromatographic analysis, sample preparation is undoubtedly the most vital one. Thus, a highly selective and efficient sample preparation method is critical for the accurate identification and quantification of phytohormones. For the three major kinds of plant hormones, namely acidic and basic plant hormones, brassinosteroids, and plant polypeptides, the sample preparation methods are reviewed in sequence, with emphasis on recently developed methods. The review covers novel methods, devices, extraction materials and derivatization reagents for sample preparation in phytohormone analysis, including related work from our own group. Finally, future developments in this field are discussed.
[Standard sample preparation method for quick determination of trace elements in plastic].
Yao, Wen-Qing; Zong, Rui-Long; Zhu, Yong-Fa
2011-08-01
A reference sample containing heavy metals at known concentrations, representative of electronic information products (plastic), was prepared by the masterbatch method; its repeatability and precision were determined, and reference sample preparation procedures were established. X-ray fluorescence (XRF) spectrometry was used to determine the repeatability and uncertainty in the analysis of heavy metals and bromine in the sample. The working curve and the measurement methods for the reference sample were established. The results showed that the method exhibits a very good linear relationship in the 200-2000 mg x kg(-1) concentration range for Hg, Pb, Cr and Br, and in the 20-200 mg x kg(-1) range for Cd, and that the repeatability over six replicate analyses is good. In testing the circuit boards ICB288G and ICB288 from Mitsubishi Heavy Industries, the results agreed with the recommended values.
Vu, Kim-Nhien; Gilbert, Guillaume; Chalut, Marianne; Chagnon, Miguel; Chartrand, Gabriel; Tang, An
2016-05-01
To assess the agreement between published magnetic resonance imaging (MRI)-based regions of interest (ROI) sampling methods using liver mean proton density fat fraction (PDFF) as the reference standard. This retrospective, institutional review board-approved study was conducted in 35 patients with type 2 diabetes. Liver PDFF was measured by magnetic resonance spectroscopy (MRS) using a stimulated-echo acquisition mode sequence and MRI using a multiecho spoiled gradient-recalled echo sequence at 3.0T. ROI sampling methods reported in the literature were reproduced and liver mean PDFF obtained by whole-liver segmentation was used as the reference standard. Intraclass correlation coefficients (ICCs), Bland-Altman analysis, repeated-measures analysis of variance (ANOVA), and paired t-tests were performed. The ICC between MRS and MRI-PDFF was 0.916. Bland-Altman analysis showed excellent intermethod agreement with a bias of -1.5 ± 2.8%. The repeated-measures ANOVA found no systematic variation of PDFF among the nine liver segments. The correlation between liver mean PDFF and ROI sampling methods was very good to excellent (0.873 to 0.975). Paired t-tests revealed significant differences (P < 0.05) with ROI sampling methods that exclusively or predominantly sampled the right lobe. Significant correlations with mean PDFF were found with sampling methods that included a higher number of segments, a total area equal to or larger than 5 cm(2), or sampled both lobes (P = 0.001, 0.023, and 0.002, respectively). MRI-PDFF quantification methods should sample each liver segment in both lobes and include a total surface area equal to or larger than 5 cm(2) to provide a close estimate of the liver mean PDFF. © 2015 Wiley Periodicals, Inc.
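The Bland-Altman statistics reported above are straightforward to compute: the bias is the mean of the pairwise differences and the 95% limits of agreement are bias ± 1.96 SD. A minimal sketch with made-up fat-fraction values:

```python
import statistics

def bland_altman(method_a, method_b):
    """Bland-Altman agreement between two measurement methods:
    mean bias and 95% limits of agreement (bias +/- 1.96 * SD of the
    pairwise differences)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical paired fat-fraction readings (%), for illustration only
bias, (lo, hi) = bland_altman([10.0, 12.0, 14.0, 16.0],
                              [9.0, 13.0, 13.0, 17.0])
```

A small bias with narrow limits of agreement, as in the study's -1.5 ± 2.8%, indicates the two methods can be used interchangeably within that tolerance.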
Kmiecik, Ewa; Tomaszewska, Barbara; Wątor, Katarzyna; Bodzek, Michał
2016-06-01
The aim of the study was to compare two reference methods for the determination of boron in water samples and further assess the impact of the method of sample preparation on the results obtained. Samples were collected during different desalination processes, ultrafiltration and the double reverse osmosis system, connected in series. From each point, samples were prepared in four different ways: the first was filtered (through a membrane filter of 0.45 μm) and acidified (using 1 mL of ultrapure nitric acid for each 100 mL of sample) (FA), the second was unfiltered and not acidified (UFNA), the third was filtered but not acidified (FNA), and finally, the fourth was unfiltered but acidified (UFA). All samples were analysed using two analytical methods: inductively coupled plasma mass spectrometry (ICP-MS) and inductively coupled plasma optical emission spectrometry (ICP-OES). The results obtained were compared and correlated, and the differences between them were studied. The results show that there are statistically significant differences between the concentrations obtained using the ICP-MS and ICP-OES techniques regardless of the method of sample preparation (sample filtration and preservation). Finally, both the ICP-MS and ICP-OES methods can be used for determination of the boron concentration in water. The differences in the boron concentrations obtained using these two methods may be caused by high concentrations of certain elements in whole-water digestates and by matrix effects. Higher concentrations of iron (1 to 20 mg/L) relative to chromium (0.02-1 mg/L) in the analysed samples can influence boron determination. When iron concentrations are high, the boron emission line appears as a joined, overlapping double peak.