Comparability of river suspended-sediment sampling and laboratory analysis methods
Groten, Joel T.; Johnson, Gregory D.
2018-03-06
Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods.Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low. The difference in laboratory analysis methods was slightly greater than field sampling methods.Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream. The results indicate there is less of a difference among samples collected with grab field sampling and analyzed for TSS and concentration of fines in SSC. Even though differences are present, the presence of strong correlations between SSC and TSS concentrations provides the opportunity to develop site specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.
Methods for Determining Particle Size Distributions from Nuclear Detonations.
1987-03-01
Debris . . . 30 IV. Summary of Sample Preparation Method . . . . 35 V. Set Parameters for PCS ... ........... 39 VI. Analysis by Vendors...54 XV. Results From Brookhaven Analysis Using The Method of Cumulants ... ........... . 54 XVI. Results From Brookhaven Analysis of Sample...R-3 Using Histogram Method ......... .55 XVII. Results From Brookhaven Analysis of Sample R-8 Using Histogram Method ........... 56 XVIII.TEM Particle
Wroble, Julie; Frederick, Timothy; Frame, Alicia; Vallero, Daniel
2017-01-01
Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. Environmental Protection Agency (EPA)'s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than other methods. Samples were collected using both traditional discrete ("grab") samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than discrete samples.
2017-01-01
Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. Environmental Protection Agency (EPA)’s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than other methods. Samples were collected using both traditional discrete (“grab”) samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than discrete samples. PMID:28759607
Gutiérrez-Fonseca, Pablo E; Lorion, Christopher M
2014-04-01
The use of aquatic macroinvertebrates as bio-indicators in water quality studies has increased considerably over the last decade in Costa Rica, and standard biomonitoring methods have now been formulated at the national level. Nevertheless, questions remain about the effectiveness of different methods of sampling freshwater benthic assemblages, and how sampling intensity may influence biomonitoring results. In this study, we compared the results of qualitative sampling using commonly applied methods with a more intensive quantitative approach at 12 sites in small, lowland streams on the southern Caribbean slope of Costa Rica. Qualitative samples were collected following the official protocol using a strainer during a set time period and macroinvertebrates were field-picked. Quantitative sampling involved collecting ten replicate Surber samples and picking out macroinvertebrates in the laboratory with a stereomicroscope. The strainer sampling method consistently yielded fewer individuals and families than quantitative samples. As a result, site scores calculated using the Biological Monitoring Working Party-Costa Rica (BMWP-CR) biotic index often differed greatly depending on the sampling method. Site water quality classifications using the BMWP-CR index differed between the two sampling methods for 11 of the 12 sites in 2005, and for 9 of the 12 sites in 2006. Sampling intensity clearly had a strong influence on BMWP-CR index scores, as well as perceived differences between reference and impacted sites. Achieving reliable and consistent biomonitoring results for lowland Costa Rican streams may demand intensive sampling and requires careful consideration of sampling methods.
Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei
2015-01-01
A new method of uniform sampling is evaluated in this paper. The items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling site in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shape and size by four kinds of sampling methods. Gray correlation analysis was adopted to make the comprehensive evaluation by comparing it with the standard method. The experimental results showed that the convenience of uniform sampling method was 1 (100%), odds ratio of occupation rate in a row and column was infinity, relative accuracy was 99.50-99.89%, reproducibility RSD was 0.45-0.89%, and weighted incidence degree exceeded the standard method. Hence, the uniform sampling method was easy to operate, and the selected samples were distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility, which can be put into use in drugs analysis.
Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal
2017-11-24
Lipidomic analysis of biological samples in a clinical research represents challenging task for analytical methods given by the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography - mass spectrometry (MS) lipidomic approaches represented by three analytical methods in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. Methods are compared in one laboratory using the identical analytical protocol to ensure comparable conditions. Ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and DI-MS method are used for this comparison as the most widely used methods for the lipidomic analysis together with ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomics analyses. The nontargeted analysis of pooled samples is performed using all tested methods and 610 lipid species within 23 lipid classes are identified. DI method provides the most comprehensive results due to identification of some polar lipid classes, which are not identified by UHPLC and UHPSFC methods. On the other hand, UHPSFC method provides an excellent sensitivity for less polar lipid classes and the highest sample throughput within 10min method time. The sample consumption of DI method is 125 times higher than for other methods, while only 40μL of organic solvent is used for one sample analysis compared to 3.5mL and 4.9mL in case of UHPLC and UHPSFC methods, respectively. Methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. Results show applicability of all tested methods for the lipidomic analysis of biological samples depending on the analysis requirements. Copyright © 2017 Elsevier B.V. All rights reserved.
Surveying immigrants without sampling frames - evaluating the success of alternative field methods.
Reichel, David; Morales, Laura
2017-01-01
This paper evaluates the sampling methods of an international survey, the Immigrant Citizens Survey, which aimed at surveying immigrants from outside the European Union (EU) in 15 cities in seven EU countries. In five countries, no sample frame was available for the target population. Consequently, alternative ways to obtain a representative sample had to be found. In three countries 'location sampling' was employed, while in two countries traditional methods were used with adaptations to reach the target population. The paper assesses the main methodological challenges of carrying out a survey among a group of immigrants for whom no sampling frame exists. The samples of the survey in these five countries are compared to results of official statistics in order to assess the accuracy of the samples obtained through the different sampling methods. It can be shown that alternative sampling methods can provide meaningful results in terms of core demographic characteristics although some estimates differ to some extent from the census results.
Swezey, Robert; Shinn, Walter; Green, Carol; Drover, David R.; Hammer, Gregory B.; Schulman, Scott R.; Zajicek, Anne; Jett, David A.; Boss, Gerry R.
2013-01-01
Most hospital laboratories do not measure blood cyanide concentrations, and samples must be sent to reference laboratories. A simple method is needed for measuring cyanide in hospitals. The authors previously developed a method to quantify cyanide based on the high binding affinity of the vitamin B12 analog, cobinamide, for cyanide and a major spectral change observed for cyanide-bound cobinamide. This method is now validated in human blood, and the findings include a mean inter-assay accuracy of 99.1%, precision of 8.75% and a lower limit of quantification of 3.27 µM cyanide. The method was applied to blood samples from children treated with sodium nitroprusside and it yielded measurable results in 88 of 172 samples (51%), whereas the reference laboratory yielded results in only 19 samples (11%). In all 19 samples, the cobinamide-based method also yielded measurable results. The two methods showed reasonable agreement when analyzed by linear regression, but not when analyzed by a standard error of the estimate or paired t-test. Differences in results between the two methods may be because samples were assayed at different times on different sample types. The cobinamide-based method is applicable to human blood, and can be used in hospital laboratories and emergency rooms. PMID:23653045
Evaluating fungal contamination indoors is complicated because of the many different sampling methods utilized. In this study, fungal contamination was evaluated using five sampling methods and four matrices for results. The five sampling methods were a 48 hour indoor air sample ...
A new sampling method for fibre length measurement
NASA Astrophysics Data System (ADS)
Wu, Hongyan; Li, Xianghong; Zhang, Junying
2018-06-01
This paper presents a new sampling method for fibre length measurement. This new method can meet the three features of an effective sampling method, also it can produce the beard with two symmetrical ends which can be scanned from the holding line to get two full fibrograms for each sample. The methodology was introduced and experiments were performed to investigate effectiveness of the new method. The results show that the new sampling method is an effective sampling method.
Viability qPCR, a new tool for Legionella risk management.
Lizana, X; López, A; Benito, S; Agustí, G; Ríos, M; Piqué, N; Marqués, A M; Codony, F
2017-11-01
Viability quantitative Polymerase Chain Reaction (v-qPCR) is a recent analytical approach for only detecting live microorganisms by DNA amplification-based methods This approach is based on the use of a reagent that irreversibly fixes dead cells DNA. In this study, we evaluate the utility of v-qPCR versus culture method for Legionellosis risk management. The present study was performed using 116 real samples. Water samples were simultaneously analysed by culture, v-qPCR and qPCR methods. Results were compared by means of a non-parametric test. In 11.6% of samples using both methods (culture method and v-qPCR) results were positive, in 50.0% of samples both methods gave rise to negative results. As expected, equivalence between methods was not observed in all cases, as in 32.1% of samples positive results were obtained by v-qPCR and all of them gave rise to negative results by culture. Only in 6.3% of samples, with very low Legionella levels, was culture positive and v-qPCR negative. In 3.5% of samples, overgrowth of other bacteria did not allow performing the culture. When comparing both methods, significant differences between culture and v-qPCR were in the samples belonging to the cooling towers-evaporative condensers group. The v-qPCR method detected greater presence and obtained higher concentrations of Legionella spp. (p<0.001). Otherwise, no significant differences between methods were found in the rest of the groups. The v-qPCR method can be used as a quick tool to evaluate Legionellosis risk, especially in cooling towers-evaporative condensers, where this technique can detect higher levels than culture. The combined interpretation of PCR results along with the ratio of live cells is proposed as a tool for understanding the sample context and estimating the Legionellosis risk potential according to 4 levels of hierarchy. Copyright © 2017 Elsevier GmbH. All rights reserved.
Comparison of preprocessing methods and storage times for touch DNA samples
Dong, Hui; Wang, Jing; Zhang, Tao; Ge, Jian-ye; Dong, Ying-qiang; Sun, Qi-fan; Liu, Chao; Li, Cai-xia
2017-01-01
Aim To select appropriate preprocessing methods for different substrates by comparing the effects of four different preprocessing methods on touch DNA samples and to determine the effect of various storage times on the results of touch DNA sample analysis. Method Hand touch DNA samples were used to investigate the detection and inspection results of DNA on different substrates. Four preprocessing methods, including the direct cutting method, stubbing procedure, double swab technique, and vacuum cleaner method, were used in this study. DNA was extracted from mock samples with four different preprocessing methods. The best preprocess protocol determined from the study was further used to compare performance after various storage times. DNA extracted from all samples was quantified and amplified using standard procedures. Results The amounts of DNA and the number of alleles detected on the porous substrates were greater than those on the non-porous substrates. The performances of the four preprocessing methods varied with different substrates. The direct cutting method displayed advantages for porous substrates, and the vacuum cleaner method was advantageous for non-porous substrates. No significant degradation trend was observed as the storage times increased. Conclusion Different substrates require the use of different preprocessing method in order to obtain the highest DNA amount and allele number from touch DNA samples. This study provides a theoretical basis for explorations of touch DNA samples and may be used as a reference when dealing with touch DNA samples in case work. PMID:28252870
Comparability among four invertebrate sampling methods, Fountain Creek Basin, Colorado, 2010-2012
Zuellig, Robert E.; Bruce, James F.; Stogner, Sr., Robert W.; Brown, Krystal D.
2014-01-01
The U.S. Geological Survey, in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, designed a study to determine if sampling method and sample timing resulted in comparable samples and assessments of biological condition. To accomplish this task, annual invertebrate samples were collected concurrently using four sampling methods at 15 U.S. Geological Survey streamflow gages in the Fountain Creek basin from 2010 to 2012. Collectively, the four methods are used by local (U.S. Geological Survey cooperative monitoring program) and State monitoring programs (Colorado Department of Public Health and Environment) in the Fountain Creek basin to produce two distinct sample types for each program that target single-and multiple-habitats. This study found distinguishable differences between single-and multi-habitat sample types using both community similarities and multi-metric index values, while methods from each program within sample type were comparable. This indicates that the Colorado Department of Public Health and Environment methods were compatible with the cooperative monitoring program methods within multi-and single-habitat sample types. Comparisons between September and October samples found distinguishable differences based on community similarities for both sample types, whereas only differences were found for single-habitat samples when multi-metric index values were considered. At one site, differences between September and October index values from single-habitat samples resulted in opposing assessments of biological condition. Direct application of the results to inform the revision of the existing Fountain Creek basin U.S. Geological Survey cooperative monitoring program are discussed.
Comparing the NIOSH Method 5040 to a Diesel Particulate Matter Meter for Elemental Carbon
NASA Astrophysics Data System (ADS)
Ayers, David Matthew
Introduction: The sampling of elemental carbon has been associated with monitoring exposures in the trucking and mining industries. Recently, in the field of engineered nanomaterials, single wall and muti-wall carbon nanotubes (MWCNTs) are being produced in ever increasing quantities. The only approved atmospheric sampling for multi-wall carbon nanotubes in NIOSH Method 5040. These results are accurate but can take up to 30 days for sample results to be received. Objectives: Compare the results of elemental carbon sampling from the NIOSH Method 5040 to a Diesel Particulate Matter (DPM) Meter. Methods: MWCNTs were transferred and weighed between several trays placed on a scale. The NIOSH Method 5040 and DPM sampling train was hung 6 inches above the receiving tray. The transferring and weighing of the MWCNTs created an aerosol containing elemental carbon. Twenty-one total samples using both meters type were collected. Results: The assumptions for a Two-Way ANOVA were violated therefore, Mann-Whitney U Tests and a Kruskal-Wallis Test were performed. The hypotheses for both research questions were rejected. There was a significant difference in the EC concentrations obtained by the NIOSH Method 5040 and the DPM meter. There were also significant differences in elemental carbon level concentrations when sampled using a DPM meter versus a sampling pump based upon the three concentration levels (low, medium and high). Conclusions: The differences in the EC concentrations were statistically significant therefore, the two methods (NIOSH Method 5040 and DPM) are not the same. The NIOSH Method 5040 should continue to be the only authorized method of establishing an EC concentration for MWCNTs until a MWCNT specific method or an instantaneous meter is invented.
Approximation of the exponential integral (well function) using sampling methods
NASA Astrophysics Data System (ADS)
Baalousha, Husam Musa
2015-04-01
Exponential integral (also known as well function) is often used in hydrogeology to solve Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods; Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH) have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error RMSE of OA was in the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
Brady, Amie M.G.; Bushon, Rebecca N.; Bertke, Erin E.
2009-01-01
Water quality at beaches is monitored for fecal indicator bacteria by traditional, culture-based methods that can take 18 to 24 hours to obtain results. A rapid detection method that provides estimated concentrations of fecal indicator bacteria within 1 hour from the start of sample processing would allow beach managers to post advisories or close the beach when the conditions are actually considered unsafe instead of a day later, when conditions may have changed. A rapid method that couples immunomagnetic separation with adenosine triphosphate detection (IMS/ATP rapid method) was evaluated through monitoring of Escherichia coli (E. coli) at three Lake Erie beaches in Ohio (Edgewater and Villa Angela in Cleveland and Huntington in Bay Village). Beach water samples were collected between 4 and 5 days per week during the recreational seasons (May through September) of 2006 and 2007. Composite samples were created in the lab from two point samples collected at each beach and were shown to be comparable substitutes for analysis of two individual samples. E. coli concentrations in composite samples, as determined by the culture-based method, ranged from 4 to 24,000 colony-forming units per 100 milliliters during this study across all beaches. Turbidity also was measured for each sample and ranged from 0.8 to 260 neophelometric turbidity ratio units. Environmental variables were noted at the time of sampling, including number of birds at the beach and wave height. Rainfall amounts were measured at National Weather Service stations at local airports. Turbidity, rainfall, and wave height were significantly related to the culture-based method results each year and for both years combined at each beach. The number of birds at the beach was significantly related to the culture-based method results only at Edgewater during 2006 and during both years combined. Results of the IMS/ATP method were compared to results of the culture-based method for samples by year for each beach. The IMS/ATP method underwent several changes and refinements during the first year, including changes in reagents and antibodies and alterations to the method protocol. Because of the changes in the method, results from the two years of study could not be combined. Kendall's tau correlation coefficients for relations between the IMS/ATP and culture-based methods were significant except for samples collected during 2006 at Edgewater and for samples collected during 2007 at Villa Angela. Further, relations were stronger for samples collected in 2006 than for those collected in 2007, except at Edgewater where the reverse was observed. The 2007 dataset was examined to identify possible reasons for the observed difference in significance of relations by year. By dividing the 2007 data set into groups as a function of sampling date, relations (Kendall's tau) between methods were observed to be stronger for samples collected earlier in the season than for those collected later in the season. At Edgewater and Villa Angela, there were more birds at the beach at time of sampling later in the season compared to earlier in the season. (The number of birds was not examined at Huntington.) Also, more wet days (when rainfall during the 24 hours prior to sampling was greater than 0.05 inch) were sampled later in the season compared to earlier in the season. Differences in the dominant fecal source may explain the change in the relations between the culture-based and IMS/ATP methods.
[Determination of ethylene glycol in biological fluids--propylene glycol interferences].
Gomółka, Ewa; Cudzich-Czop, Sylwia; Sulka, Adrianna
2013-01-01
Many laboratories in Poland do not use gas chromatography (GC) method for determination of ethylene glycol (EG) and methanol in blood of poisoned patients, they use non specific spectrophotometry methods. One of the interfering substances is propylene glycol (PG)--compound present in many medical and cosmetic products: drops, air freshens, disinfectants, electronic cigarettes and others. In Laboratory of Analytical Toxicology and Drug Monitoring in Krakow determination of EG is made by GC method. The method enables to distinguish and make resolution of (EG) and (PG) in biological samples. In the years 2011-2012 in several serum samples from diagnosed patients PG was present in concentration from several to higher than 100 mg/dL. The aim of the study was to estimate PG interferences of serum EG determination by spectrophotometry method. Serum samples containing PG and EG were used in the study. The samples were analyzed by two methods: GC and spectrophotometry. Results of serum samples spiked with PG with no EG analysed by spectrophotometry method were improper ("false positive"). The results were correlated to PG concentration in samples. Calculated cross-reactivity of PG in the method was 42%. Positive results of EG measured by spectrophotometry method must be confirmed by reference GC method. Spectrophotometry method shouldn't be used for diagnostics and monitoring of patients poisoned by EG.
Preliminary evaluation of a gel tube agglutination major cross-match method in dogs.
Villarnovo, Dania; Burton, Shelley A; Horney, Barbara S; MacKenzie, Allan L; Vanderstichel, Raphaël
2016-09-01
A major cross-match gel tube test is available for use in dogs yet has not been clinically evaluated. This study compared cross-match results obtained using the gel tube and the standard tube methods for canine samples. Study 1 included 107 canine sample donor-recipient pairings cross-match tested with the RapidVet-H method gel tube test and compared results with the standard tube method. Additionally, 120 pairings using pooled sera containing anti-canine erythrocyte antibody at various concentrations were tested with leftover blood from a hospital population to assess sensitivity and specificity of the gel tube method in comparison with the standard method. The gel tube method had a good relative specificity of 96.1% in detecting lack of agglutination (compatibility) compared to the standard tube method. Agreement between the 2 methods was moderate. Nine of 107 pairings showed agglutination/incompatibility on either test, too few to allow reliable calculation of relative sensitivity. Fifty percent of the gel tube method results were difficult to interpret due to sample spreading in the reaction and/or negative control tubes. The RapidVet-H method agreed with the standard cross-match method on compatible samples, but detected incompatibility in some sample pairs that were compatible with the standard method. Evaluation using larger numbers of incompatible pairings is needed to assess diagnostic utility. The gel tube method results were difficult to categorize due to sample spreading. Weak agglutination reactions or other factors such as centrifuge model may be responsible. © 2016 American Society for Veterinary Clinical Pathology.
Durbin, Gregory W; Salter, Robert
2006-01-01
The Ecolite High Volume Juice (HVJ) presence-absence method for a 10-ml juice sample was compared with the U.S. Food and Drug Administration Bacteriological Analytical Manual most-probable-number (MPN) method for analysis of artificially contaminated orange juices. Samples were added to Ecolite-HVJ medium and incubated at 35 degrees C for 24 to 48 h. Fluorescent blue results were positive for glucuronidase- and galactosidase-producing microorganisms, specifically indicative of about 94% of Escherichia coli strains. Four strains of E. coli were added to juices at concentrations of 0.21 to 6.8 CFU/ ml. Mixtures of enteric bacteria (Enterobacter plus Klebsiella, Citrobacter plus Proteus, or Hafnia plus Citrobacter plus Enterobacter) were added to simulate background flora. Three orange juice types were evaluated (n = 10) with and without the addition of the E. coli strains. Ecolite-HVJ produced 90 of 90 (10 of 10 samples of three juice types, each inoculated with three different E. coli strains) positive (blue-fluorescent) results with artificially contaminated E. coli that had MPN concentrations of <0.3 to 9.3 CFU/ml. Ten of 30 E. coli ATCC 11229 samples with MPN concentrations of <0.3 CFU/ml were identified as positive with Ecolite-HVJ. Isolated colonies recovered from positive Ecolite-HVJ samples were confirmed biochemically as E. coli. Thirty (10 samples each of three juice types) negative (not fluorescent) results were obtained for samples contaminated with only enteric bacteria and for uninoculated control samples. A juice manufacturer evaluated citrus juice production with both the Ecolite-HVJ and Colicomplete methods and recorded identical negative results for 95 20-ml samples and identical positive results for 5 20-ml samples artificially contaminated with E. coli. The Ecolite-HVJ method requires no preenrichment and subsequent transfer steps, which makes it a simple and easy method for use by juice producers.
Multi-laboratory survey of qPCR enterococci analysis method performance
Quantitative polymerase chain reaction (qPCR) has become a frequently used technique for quantifying enterococci in recreational surface waters, but there are several methodological options. Here we evaluated how three method permutations, type of mastermix, sample extract dilution and use of controls in results calculation, affect method reliability among multiple laboratories with respect to sample interference. Multiple samples from each of 22 sites representing an array of habitat types were analyzed using EPA Method 1611 and 1609 reagents with full strength and five-fold diluted extracts. The presence of interference was assessed three ways: using sample processing and PCR amplifications controls; consistency of results across extract dilutions; and relative recovery of target genes from spiked enterococci in water sample compared to control matrices with acceptable recovery defined as 50 to 200%. Method 1609, which is based on an environmental mastermix, was found to be superior to Method 1611, which is based on a universal mastermix. Method 1611 had over a 40% control assay failure rate with undiluted extracts and a 6% failure rate with diluted extracts. Method 1609 failed in only 11% and 3% of undiluted and diluted extracts analyses. Use of sample processing control assay results in the delta-delta Ct method for calculating relative target gene recoveries increased the number of acceptable recovery results. Delta-delta tended to bias recoveries fr
NASA Astrophysics Data System (ADS)
Nasir, N. F.; Mirus, M. F.; Ismail, M.
2017-09-01
Crude glycerol which produced from transesterification reaction has limited usage if it does not undergo purification process. It also contains excess methanol, catalyst and soap. Conventionally, purification method of the crude glycerol involves high cost and complex processes. This study aimed to determine the effects of using different purification methods which are direct method (comprises of ion exchange and methanol removal steps) and multistep method (comprises of neutralization, filtration, ion exchange and methanol removal steps). Two crude glycerol samples were investigated; the self-produced sample through the transesterification process of palm oil and the sample obtained from biodiesel plant. Samples were analysed using Fourier Transform Infrared Spectroscopy, Gas Chromatography and High Performance Liquid Chromatography. The results of this study for both samples after purification have showed that the pure glycerol was successfully produced and fatty acid salts were eliminated. Also, the results indicated the absence of methanol in both samples after purification process. In short, the combination of 4 purification steps has contributed to a higher quality of glycerol. Multistep purification method gave a better result compared to the direct method as neutralization and filtration steps helped in removing most excess salt, fatty acid and catalyst.
Measuring solids concentration in stormwater runoff: comparison of analytical methods.
Clark, Shirley E; Siu, Christina Y S
2008-01-15
Stormwater suspended solids typically are quantified using one of two methods: aliquot/subsample analysis (total suspended solids [TSS]) or whole-sample analysis (suspended solids concentration [SSC]). Interproject comparisons are difficult because of inconsistencies in the methods and in their application. To address this concern, the suspended solids content has been measured using both methodologies in many current projects, but the question remains about how to compare these values with historical water-quality data where the analytical methodology is unknown. This research was undertaken to determine the effect of analytical methodology on the relationship between these two methods of determination of the suspended solids concentration, including the effect of aliquot selection/collection method and of particle size distribution (PSD). The results showed that SSC was best able to represent the known sample concentration and that the results were independent of the sample's PSD. Correlations between the results and the known sample concentration could be established for TSS samples, but they were highly dependent on the sample's PSD and on the aliquot collection technique. These results emphasize the need to report not only the analytical method but also the particle size information on the solids in stormwater runoff.
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, distribution of product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. The sample size tables for most encountered scenarios in practice have also been published for convenient use. Extensive simulations study showed that the distribution of the product method and bootstrapping method have superior performance to the Sobel's method, but the product method was recommended to use in practice in terms of less computation time load compared to the bootstrapping method. A R package has been developed for the product method of sample size determination in mediation longitudinal study design.
Mischnik, Alexander; Mieth, Markus; Busch, Cornelius J; Hofer, Stefan; Zimmermann, Stefan
2012-08-01
Automation of plate streaking is ongoing in clinical microbiological laboratories, but evaluation for routine use is mostly open. In the present study, the recovery of microorganisms from the Previ Isola system plated polyurethane (PU) swab samples is compared to manually plated control viscose swab samples from wounds according to the CLSI procedure M40-A (quality control of microbiological transport systems). One hundred twelve paired samples (224 swabs) were analyzed. In 80/112 samples (71%), concordant culture results were obtained with the two methods. In 32/112 samples (29%), CFU recovery of microorganisms from the two methods was discordant. In 24 (75%) of the 32 paired samples with a discordant result, Previ Isola plated PU swabs were superior. In 8 (25%) of the 32 paired samples with a discordant result, control viscose swabs were superior. The quality of colony growth on culture media for further investigations was superior with Previ Isola inoculated plates compared to manual plating techniques. Gram stain results were concordant between the two methods in 62/112 samples (55%). In 50/112 samples (45%), the results of Gram staining were discordant between the two methods. In 34 (68%) of the 50 paired samples with discordant results, Gram staining of PU swabs was superior to that of control viscose swabs. In 16 (32%) of the 50 paired samples, Gram staining of control viscose swabs was superior to that of PU swabs. We report the first clinical evaluation of Previ Isola automated specimen inoculation for wound swab samples. This study suggests that use of an automated specimen inoculation system has good results with regard to CFU recovery, quality of Gram staining, and accuracy of diagnosis.
Mieth, Markus; Busch, Cornelius J.; Hofer, Stefan; Zimmermann, Stefan
2012-01-01
Automation of plate streaking is ongoing in clinical microbiological laboratories, but evaluation for routine use is mostly open. In the present study, the recovery of microorganisms from the Previ Isola system plated polyurethane (PU) swab samples is compared to manually plated control viscose swab samples from wounds according to the CLSI procedure M40-A (quality control of microbiological transport systems). One hundred twelve paired samples (224 swabs) were analyzed. In 80/112 samples (71%), concordant culture results were obtained with the two methods. In 32/112 samples (29%), CFU recovery of microorganisms from the two methods was discordant. In 24 (75%) of the 32 paired samples with a discordant result, Previ Isola plated PU swabs were superior. In 8 (25%) of the 32 paired samples with a discordant result, control viscose swabs were superior. The quality of colony growth on culture media for further investigations was superior with Previ Isola inoculated plates compared to manual plating techniques. Gram stain results were concordant between the two methods in 62/112 samples (55%). In 50/112 samples (45%), the results of Gram staining were discordant between the two methods. In 34 (68%) of the 50 paired samples with discordant results, Gram staining of PU swabs was superior to that of control viscose swabs. In 16 (32%) of the 50 paired samples, Gram staining of control viscose swabs was superior to that of PU swabs. We report the first clinical evaluation of Previ Isola automated specimen inoculation for wound swab samples. This study suggests that use of an automated specimen inoculation system has good results with regard to CFU recovery, quality of Gram staining, and accuracy of diagnosis. PMID:22692745
Extending the solvent-free MALDI sample preparation method.
Hanton, Scott D; Parees, David M
2005-01-01
Matrix-assisted laser desorption/ionization (MALDI) mass spectrometry is an important technique to characterize many different materials, including synthetic polymers. MALDI mass spectral data can be used to determine the polymer average molecular weights, repeat units, and end groups. One of the key issues in traditional MALDI sample preparation is making good solutions of the analyte and the matrix. Solvent-free sample preparation methods have been developed to address these issues. Previous results of solvent-free or dry prepared samples show some advantages over traditional wet sample preparation methods. Although the results of the published solvent-free sample preparation methods produced excellent mass spectra, we found the method to be very time-consuming, with significant tool cleaning, which presents a significant possibility of cross contamination. To address these issues, we developed an extension of the solvent-free method that replaces the mortar and pestle grinding with ball milling the sample in a glass vial with two small steel balls. This new method generates mass spectra with equal quality of the previous methods, but has significant advantages in productivity, eliminates cross contamination, and is applicable to liquid and soft or waxy analytes.
Archfield, Stacey A.; LeBlanc, Denis R.
2005-01-01
To evaluate diffusion sampling as an alternative method to monitor volatile organic compound (VOC) concentrations in ground water, concentrations in samples collected by traditional pumped-sampling methods were compared to concentrations in samples collected by diffusion-sampling methods for 89 monitoring wells at or near the Massachusetts Military Reservation, Cape Cod. Samples were analyzed for 36 VOCs. There was no substantial difference between the utility of diffusion and pumped samples to detect the presence or absence of a VOC. In wells where VOCs were detected, diffusion-sample concentrations of tetrachloroethene (PCE) and trichloroethene (TCE) were significantly lower than pumped-sample concentrations. Because PCE and TCE concentrations detected in the wells dominated the calculation of many of the total VOC concentrations, when VOC concentrations were summed and compared by sampling method, visual inspection also showed a downward concentration bias in the diffusion-sample concentration. The degree to which pumped- and diffusion-sample concentrations agreed was not a result of variability inherent within the sampling methods or the diffusion process itself. A comparison of the degree of agreement in the results from the two methods to 13 quantifiable characteristics external to the sampling methods offered only well-screen length as being related to the degree of agreement between the methods; however, there is also evidence to indicate that the flushing rate of water through the well screen affected the agreement between the sampling methods. Despite poor agreement between the concentrations obtained by the two methods at some wells, the degree to which the concentrations agree at a given well is repeatable. A one-time, well-bywell comparison between diffusion- and pumped-sampling methods could determine which wells are good candidates for the use of diffusion samplers. For wells with good method agreement, the diffusion-sampling method is a time-saving and cost-effective alternative to pumped-sampling methods in a long-term monitoring program, such as at the Massachusetts Military Reservation.
Methodology Series Module 5: Sampling Strategies.
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ' Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on researcher's choice, population that accessible & available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling method (such as simple random sample or stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and mention this method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ' random sample' when the researcher has used convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ' generalizability' of these results. In such a scenario, the researcher may want to use ' purposive sampling' for the study.
Some connections between importance sampling and enhanced sampling methods in molecular dynamics.
Lie, H C; Quer, J
2017-11-21
In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
Some connections between importance sampling and enhanced sampling methods in molecular dynamics
NASA Astrophysics Data System (ADS)
Lie, H. C.; Quer, J.
2017-11-01
In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
Sommer, D; Enderlein, D; Antakli, A; Schönenbrücher, H; Slaghuis, J; Redmann, T; Lierz, M
2012-01-01
The efficiency of two commercial PCR methods based on real-time technology, the foodproof® Salmonella detection system and the BAX® PCR Assay Salmonella system was compared to standardized culture methods (EN ISO 6579:2002 - Annex D) for the detection of Salmonella spp. in poultry samples. Four sample matrices (feed, dust, boot swabs, feces) obtained directly from poultry flocks, as well as artificially spiked samples of the same matrices, were used. All samples were tested for Salmonella spp. using culture methods first as the gold standard. In addition samples spiked with Salmonella Enteridis were tested to evaluate the sensitivity of both PCR methods. Furthermore all methods were evaluated in an annual ring-trial of the National Salmonella Reference Laboratory of Germany. Salmonella detection in the matrices feed, dust and boot swabs were comparable in both PCR systems whereas the results from feces differed markedly. The quality, especially the freshness, of the fecal samples had an influence on the sensitivity of the real-time PCR and the results of the culture methods. In fresh fecal samples an initial spiking level of 100cfu/25g Salmonella Enteritidis was detected. Two-days-dried fecal samples allowed the detection of 14cfu/25g. Both real- time PCR protocols appear to be suitable for the detection of Salmonella spp. in all four matrices. The foodproof® system detected eight samples more to be positive compared to the BAX® system, but had a potential false positive result in one case. In 7-days-dried samples none of the methods was able to detect Salmonella likely through letal cell damage. In general the advantage of PCR analyses over the culture method is the reduction of working time from 4-5 days to only 2 days. However, especially for the analysis of fecal samples official validation should be conducted according to the requirement of EN ISO6579:2002 - Annex D.
Jantzi, Sarah C; Almirall, José R
2011-07-01
A method for the quantitative elemental analysis of surface soil samples using laser-induced breakdown spectroscopy (LIBS) was developed and applied to the analysis of bulk soil samples for discrimination between specimens. The use of a 266 nm laser for LIBS analysis is reported for the first time in forensic soil analysis. Optimization of the LIBS method is discussed, and the results compared favorably to a laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) method previously developed. Precision for both methods was <10% for most elements. LIBS limits of detection were <33 ppm and bias <40% for most elements. In a proof of principle study, the LIBS method successfully discriminated samples from two different sites in Dade County, FL. Analysis of variance, Tukey's post hoc test and Student's t test resulted in 100% discrimination with no type I or type II errors. Principal components analysis (PCA) resulted in clear groupings of the two sites. A correct classification rate of 99.4% was obtained with linear discriminant analysis using leave-one-out validation. Similar results were obtained when the same samples were analyzed by LA-ICP-MS, showing that LIBS can provide similar information to LA-ICP-MS. In a forensic sampling/spatial heterogeneity study, the variation between sites, between sub-plots, between samples and within samples was examined on three similar Dade sites. The closer the sampling locations, the closer the grouping on a PCA plot and the higher the misclassification rate. These results underscore the importance of careful sampling for geographic site characterization.
Che, W W; Frey, H Christopher; Lau, Alexis K H
2014-12-01
Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5 ) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m(3) for simulated children using the stratified population sampling method, and 12.2 μg/m(3) using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m(3) due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In case studies evaluated here, the MCC method led to 10% higher estimation of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.
Girndt, Antje; Cockburn, Glenn; Sánchez-Tójar, Alfredo; Løvlie, Hanne; Schroeder, Julia
2017-01-01
Birds are model organisms in sperm biology. Previous work in zebra finches, suggested that sperm sampled from males' faeces and ejaculates do not differ in size. Here, we tested this assumption in a captive population of house sparrows, Passer domesticus. We compared sperm length in samples from three collection techniques: female dummy, faecal and abdominal massage samples. We found that sperm were significantly shorter in faecal than abdominal massage samples, which was explained by shorter heads and midpieces, but not flagella. This result might indicate that faecal sampled sperm could be less mature than sperm collected by abdominal massage. The female dummy method resulted in an insufficient number of experimental ejaculates because most males ignored it. In light of these results, we recommend using abdominal massage as a preferred method for avian sperm sampling. Where avian sperm cannot be collected by abdominal massage alone, we advise controlling for sperm sampling protocol statistically.
Evaluation of the Biological Sampling Kit (BiSKit) for Large-Area Surface Sampling
Buttner, Mark P.; Cruz, Patricia; Stetzenbach, Linda D.; Klima-Comba, Amy K.; Stevens, Vanessa L.; Emanuel, Peter A.
2004-01-01
Current surface sampling methods for microbial contaminants are designed to sample small areas and utilize culture analysis. The total number of microbes recovered is low because a small area is sampled, making detection of a potential pathogen more difficult. Furthermore, sampling of small areas requires a greater number of samples to be collected, which delays the reporting of results, taxes laboratory resources and staffing, and increases analysis costs. A new biological surface sampling method, the Biological Sampling Kit (BiSKit), designed to sample large areas and to be compatible with testing with a variety of technologies, including PCR and immunoassay, was evaluated and compared to other surface sampling strategies. In experimental room trials, wood laminate and metal surfaces were contaminated by aerosolization of Bacillus atrophaeus spores, a simulant for Bacillus anthracis, into the room, followed by settling of the spores onto the test surfaces. The surfaces were sampled with the BiSKit, a cotton-based swab, and a foam-based swab. Samples were analyzed by culturing, quantitative PCR, and immunological assays. The results showed that the large surface area (1 m2) sampled with the BiSKit resulted in concentrations of B. atrophaeus in samples that were up to 10-fold higher than the concentrations obtained with the other methods tested. A comparison of wet and dry sampling with the BiSKit indicated that dry sampling was more efficient (efficiency, 18.4%) than wet sampling (efficiency, 11.3%). The sensitivities of detection of B. atrophaeus on metal surfaces were 42 ± 5.8 CFU/m2 for wet sampling and 100.5 ± 10.2 CFU/m2 for dry sampling. These results demonstrate that the use of a sampling device capable of sampling larger areas results in higher sensitivity than that obtained with currently available methods and has the advantage of sampling larger areas, thus requiring collection of fewer samples per site. PMID:15574898
Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification
NASA Astrophysics Data System (ADS)
Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang
2017-12-01
To promptly process the massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearing. Among these studies, such as artificial neural networks, support vector machines, decision trees and other supervised learning methods are used commonly. These methods can detect the failure of rolling bearing effectively, but to achieve better detection results, it often requires a lot of training samples. Based on above, a novel clustering method is proposed in this paper. This novel method is able to find the correct number of clusters automatically the effectiveness of the proposed method is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect the fault types of small samples. Meanwhile, the diagnosis results are also relative high accuracy even for massive samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the • number of samples required to achieve a specified confidence in characterization and clearance decisions • confidence in making characterization and clearance decisions for a specified number of samples for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that amore » decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phrase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0 2. qualitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0 3. quantitative data (e.g., contaminant concentrations expressed as CFU/cm2) when the FNR = 0 or when using statistical sampling methods that account for FNR > 0 4. quantitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0. For Situation 2, the hotspot sampling approach provides for stating with Z% confidence that a hotspot of specified shape and size with detectable contamination will be found. Also for Situation 2, the CJR approach provides for stating with X% confidence that at least Y% of the decision area does not contain detectable contamination. Forms of these statements for the other three situations are discussed in Section 2.2. Statistical methods that account for FNR > 0 currently only exist for the hotspot sampling approach with qualitative data (or quantitative data converted to qualitative data). This report documents the current status of methods and formulas for the hotspot and CJR sampling approaches. Limitations of these methods are identified. Extensions of the methods that are applicable when FNR = 0 to account for FNR > 0, or to address other limitations, will be documented in future revisions of this report if future funding supports the development of such extensions. For quantitative data, this report also presents statistical methods and formulas for 1. quantifying the uncertainty in measured sample results 2. estimating the true surface concentration corresponding to a surface sample 3. quantifying the uncertainty of the estimate of the true surface concentration. All of the methods and formulas discussed in the report were applied to example situations to illustrate application of the methods and interpretation of the results.« less
Reference interval computation: which method (not) to choose?
Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C
2012-07-11
When different methods are applied to reference interval (RI) calculation the results can sometimes be substantially different, especially for small reference groups. If there are no reliable RI data available, there is no way to confirm which method generates results closest to the true RI. We randomly drawn samples obtained from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, results of all 3 methods were within 3% of the true reference value. For other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for sample size of 60, very close to bootstrapping for sample size 120, but in some cases unavailable. We recommend against using parametric calculations to determine RIs. The transformed parametric method utilizing Box-Cox transformation would be preferable way of RI calculation, if it satisfies normality test. If not, the bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
Toward cost-efficient sampling methods
NASA Astrophysics Data System (ADS)
Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie
2015-09-01
The sampling method has been paid much attention in the field of complex network in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small part of vertices with high node degree could possess the most structure information of a complex network. The two proposed sampling methods are efficient in sampling high degree nodes so that they would be useful even if the sampling rate is low, which means cost-efficient. The first new sampling method is developed on the basis of the widely used stratified random sampling (SRS) method and the second one improves the famous snowball sampling (SBS) method. In order to demonstrate the validity and accuracy of two new sampling methods, we compare them with the existing sampling methods in three commonly used simulation networks that are scale-free network, random network, small-world network, and also in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in terms of achieving the true network structure characteristics reflected by clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
Photoacoustic spectroscopy and thermal relaxation method to evaluate corn moisture content
NASA Astrophysics Data System (ADS)
Pedrochi, F.; Medina, A. N.; Bento, A. C.; Baesso, M. L.; Luz, M. L. S.; Dalpasquale, V. A.
2005-06-01
In this study, samples of popcorn with different degrees of moisture were analyzed. The optical absorption bands at the mid infrared were measured using photoacoustic spectroscopy and were correlated to the sample moisture. The results were in agreement with moisture data determined by the well known reference method, the Karl Fischer. In addition, the thermal relaxation method was used to determine the sample specific heat as a function of the moisture content. The results were also in agreement with the two mentioned methods.
A comparison of fitness-case sampling methods for genetic programming
NASA Astrophysics Data System (ADS)
Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel
2017-11-01
Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
An integrated bioanalytical method development and validation approach: case studies.
Xue, Y-J; Melo, Brian; Vallejo, Martha; Zhao, Yuwen; Tang, Lina; Chen, Yuan-Shek; Keller, Karin M
2012-10-01
We proposed an integrated bioanalytical method development and validation approach: (1) method screening based on analyte's physicochemical properties and metabolism information to determine the most appropriate extraction/analysis conditions; (2) preliminary stability evaluation using both quality control and incurred samples to establish sample collection, storage and processing conditions; (3) mock validation to examine method accuracy and precision and incurred sample reproducibility; and (4) method validation to confirm the results obtained during method development. This integrated approach was applied to the determination of compound I in rat plasma and compound II in rat and dog plasma. The effectiveness of the approach was demonstrated by the superior quality of three method validations: (1) a zero run failure rate; (2) >93% of quality control results within 10% of nominal values; and (3) 99% incurred sample within 9.2% of the original values. In addition, rat and dog plasma methods for compound II were successfully applied to analyze more than 900 plasma samples obtained from Investigational New Drug (IND) toxicology studies in rats and dogs with near perfect results: (1) a zero run failure rate; (2) excellent accuracy and precision for standards and quality controls; and (3) 98% incurred samples within 15% of the original values. Copyright © 2011 John Wiley & Sons, Ltd.
Pietilä, Heidi; Perämäki, Paavo; Piispanen, Juha; Starr, Mike; Nieminen, Tiina; Kantola, Marjatta; Ukonmaanaho, Liisa
2015-04-01
Most often, only total mercury concentrations in soil samples are determined in environmental studies. However, the determination of extremely toxic methylmercury (MeHg) in addition to the total mercury is critical to understand the biogeochemistry of mercury in the environment. In this study, N2-assisted distillation and acidic KBr/CuSO4 solvent extraction methods were applied to isolate MeHg from wet peat soil samples collected from boreal forest catchments. Determination of MeHg was performed using a purge and trap GC-ICP-MS technique with a species-specific isotope dilution quantification. Distillation is known to be more prone to artificial MeHg formation compared to solvent extraction which may result in the erroneous MeHg results, especially with samples containing high amounts of inorganic mercury. However, methylation of inorganic mercury during the distillation step had no effect on the reliability of the final MeHg results when natural peat soil samples were distilled. MeHg concentrations determined in peat soil samples after distillation were compared to those determined after the solvent extraction method. MeHg concentrations in peat soil samples varied from 0.8 to 18 μg kg(-1) (dry weight) and the results obtained with the two different methods did not differ significantly (p=0.05). The distillation method with an isotope dilution GC-ICP-MS was shown to be a reliable method for the determination of low MeHg concentrations in unpolluted soil samples. Furthermore, the distillation method is solvent-free and less time-consuming and labor-intensive when compared to the solvent extraction method. Copyright © 2014 Elsevier Ltd. All rights reserved.
[Free crystalline silica: a comparison of methods for its determination in total dust].
Maciejewska, Aleksandra; Szadkowska-Stańczyk, Irena; Kondratowicz, Grzegorz
2005-01-01
The major objective of the study was to compare and investigate the usefulness of quantitative analyses of free crystalline silica (FCS) in the assessment of dust exposure in samples of total dust of varied composition, using three methods: chemical method in common use in Poland; infrared spectrometry; and x-ray powder diffraction. Mineral composition and FCS contents were investigated in 9 laboratory samples of raw materials, materials, and industrial wastes, containing from about 2 to over 80% of crystalline silica and reduced to particles of size corresponding with that of total dust. Sample components were identified using XRD and FT-IR methods. Ten independent determinations of FCS with each of the three study methods were performed in dust samples. An analysis of linear correlation was applied to investigate interrelationship between mean FCS determinations. In analyzed dust samples, along with silica dust there were numerous minerals interfering with silica during the quantitative analysis. Comparison of mean results of FCS determinations showed that the results obtained using the FT-IR method were by 12-13% lower than those obtained with two other methods. However, the differences observed were within the limits of changeability of results associated with their precision and dependence on reference materials used. Assessment of occupational exposure to dusts containing crystalline silica can be performed on the basis of quantitative analysis of FCS in total dusts using each of the compared methods. The FT-IR method is most appropriate for the FCS determination in samples of small amount of silica or collected at low dust concentrations; the XRD method for the analysis of multicomponent samples; and the chemical method in the case of medium and high FCS contents in samples or high concentrations of dusts in the work environment.
Impact of Processing Method on Recovery of Bacteria from Wipes Used in Biological Surface Sampling
Olson, Nathan D.; Filliben, James J.; Morrow, Jayne B.
2012-01-01
Environmental sampling for microbiological contaminants is a key component of hygiene monitoring and risk characterization practices utilized across diverse fields of application. However, confidence in surface sampling results, both in the field and in controlled laboratory studies, has been undermined by large variation in sampling performance results. Sources of variation include controlled parameters, such as sampling materials and processing methods, which often differ among studies, as well as random and systematic errors; however, the relative contributions of these factors remain unclear. The objective of this study was to determine the relative impacts of sample processing methods, including extraction solution and physical dissociation method (vortexing and sonication), on recovery of Gram-positive (Bacillus cereus) and Gram-negative (Burkholderia thailandensis and Escherichia coli) bacteria from directly inoculated wipes. This work showed that target organism had the largest impact on extraction efficiency and recovery precision, as measured by traditional colony counts. The physical dissociation method (PDM) had negligible impact, while the effect of the extraction solution was organism dependent. Overall, however, extraction of organisms from wipes using phosphate-buffered saline with 0.04% Tween 80 (PBST) resulted in the highest mean recovery across all three organisms. The results from this study contribute to a better understanding of the factors that influence sampling performance, which is critical to the development of efficient and reliable sampling methodologies relevant to public health and biodefense. PMID:22706055
Methodology Series Module 5: Sampling Strategies
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘ Sampling Method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling – based on researcher's choice, population that accessible & available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling method (such as simple random sample or stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and mention this method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘ random sample’ when the researcher has used convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘ generalizability’ of these results. In such a scenario, the researcher may want to use ‘ purposive sampling’ for the study. PMID:27688438
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-06
... Analytic Methods and Sampling Procedures for the United States National Residue Program for Meat, Poultry... implementing several multi-residue methods for analyzing samples of meat, poultry, and egg products for animal.... These modern, high-efficiency methods will conserve resources and provide useful and reliable results...
[Standard sample preparation method for quick determination of trace elements in plastic].
Yao, Wen-Qing; Zong, Rui-Long; Zhu, Yong-Fa
2011-08-01
Reference sample was prepared by masterbatch method, containing heavy metals with known concentration of electronic information products (plastic), the repeatability and precision were determined, and reference sample preparation procedures were established. X-Ray fluorescence spectroscopy (XRF) analysis method was used to determine the repeatability and uncertainty in the analysis of the sample of heavy metals and bromine element. The working curve and the metrical methods for the reference sample were carried out. The results showed that the use of the method in the 200-2000 mg x kg(-1) concentration range for Hg, Pb, Cr and Br elements, and in the 20-200 mg x kg(-1) range for Cd elements, exhibited a very good linear relationship, and the repeatability of analysis methods for six times is good. In testing the circuit board ICB288G and ICB288 from the Mitsubishi Heavy Industry Company, results agreed with the recommended values.
Rapid detection of Salmonella spp. in food by use of the ISO-GRID hydrophobic grid membrane filter.
Entis, P; Brodsky, M H; Sharpe, A N; Jarvis, G A
1982-01-01
A rapid hydrophobic grid-membrane filter (HGMF) method was developed and compared with the Health Protection Branch cultural method for the detection of Salmonella spp. in 798 spiked samples and 265 naturally contaminated samples of food. With the HGMF method, Salmonella spp. were isolated from 618 of the spiked samples and 190 of the naturally contaminated samples. The conventional method recovered Salmonella spp. from 622 spiked samples and 204 unspiked samples. The isolation rates from Salmonella-positive samples for the two methods were not significantly different (94.6% overall for the HGMF method and 96.7% for the conventional approach), but the HGMF results were available in only 2 to 3 days after sample receipt compared with 3 to 4 days by the conventional method. Images PMID:7059168
Taylor, Vivien F; Toms, Andrew; Longerich, Henry P
2002-01-01
The application of open vessel focused microwave acid digestion is described for the preparation of geological and environmental samples for analysis using inductively coupled plasma-mass spectrometry (ICP-MS). The method is compared to conventional closed-vessel high pressure methods which are limited in the use of HF to break down silicates. Open-vessel acid digestion more conveniently enables the use of HF to remove Si from geological and plant samples as volatile SiF4, as well as evaporation-to-dryness and sequential acid addition during the procedure. Rock reference materials (G-2 granite, MRG-1 gabbros, SY-2 syenite, JA-1 andesite, and JB-2 and SRM-688 basalts) and plant reference materials (BCR and IAEA lichens, peach leaves, apple leaves, Durham wheat flour, and pine needles) were digested with results comparable to conventional hotplate digestion. The microwave digestion method gave poor results for granitic samples containing refractory minerals, however fusion was the preferred method of preparation for these samples. Sample preparation time was reduced from several days, using conventional hotplate digestion method, to one hour per sample using our microwave method.
Sample processing approach for detection of ricin in surface samples.
Kane, Staci; Shah, Sanjiv; Erler, Anne Marie; Alfaro, Teneile
2017-12-01
With several ricin contamination incidents reported over the past decade, rapid and accurate methods are needed for environmental sample analysis, especially after decontamination. A sample processing method was developed for common surface sampling devices to improve the limit of detection and avoid false negative/positive results for ricin analysis. Potential assay interferents from the sample matrix (bleach residue, sample material, wetting buffer), including reference dust, were tested using a Time-Resolved Fluorescence (TRF) immunoassay. Test results suggested that the sample matrix did not cause the elevated background fluorescence sometimes observed when analyzing post-bleach decontamination samples from ricin incidents. Furthermore, sample particulates (80mg/mL Arizona Test Dust) did not enhance background fluorescence or interfere with ricin detection by TRF. These results suggested that high background fluorescence in this immunoassay could be due to labeled antibody quality and/or quantity issues. Centrifugal ultrafiltration devices were evaluated for ricin concentration as a part of sample processing. Up to 30-fold concentration of ricin was observed by the devices, which serve to remove soluble interferents and could function as the front-end sample processing step to other ricin analytical methods. The procedure has the potential to be used with a broader range of environmental sample types and with other potential interferences and to be followed by other ricin analytical methods, although additional verification studies would be required. Published by Elsevier B.V.
Comparing methods of determining Legionella spp. in complex water matrices.
Díaz-Flores, Álvaro; Montero, Juan Carlos; Castro, Francisco Javier; Alejandres, Eva María; Bayón, Carmen; Solís, Inmaculada; Fernández-Lafuente, Roberto; Rodríguez, Guillermo
2015-04-29
Legionella testing conducted at environmental laboratories plays an essential role in assessing the risk of disease transmission associated with water systems. However, drawbacks of culture-based methodology used for Legionella enumeration can have great impact on the results and interpretation which together can lead to underestimation of the actual risk. Up to 20% of the samples analysed by these laboratories produced inconclusive results, making effective risk management impossible. Overgrowth of competing microbiota was reported as an important factor for culture failure. For quantitative polymerase chain reaction (qPCR), the interpretation of the results from the environmental samples still remains a challenge. Inhibitors may cause up to 10% of inconclusive results. This study compared a quantitative method based on immunomagnetic separation (IMS method) with culture and qPCR, as a new approach to routine monitoring of Legionella. First, pilot studies evaluated the recovery and detectability of Legionella spp using an IMS method, in the presence of microbiota and biocides. The IMS method results were not affected by microbiota while culture counts were significantly reduced (1.4 log) or negative in the same samples. Damage by biocides of viable Legionella was detected by the IMS method. Secondly, a total of 65 water samples were assayed by all three techniques (culture, qPCR and the IMS method). Of these, 27 (41.5%) were recorded as positive by at least one test. Legionella spp was detected by culture in 7 (25.9%) of the 27 samples. Eighteen (66.7%) of the 27 samples were positive by the IMS method, thirteen of them reporting counts below 10(3) colony forming units per liter (CFU l(-1)), six presented interfering microbiota and three presented PCR inhibition. Of the 65 water samples, 24 presented interfering microbiota by culture and 8 presented partial or complete inhibition of the PCR reaction. So the rate of inconclusive results of culture and PCR was 36.9 and 12.3%, respectively, without any inconclusive results reported for the IMS method. The IMS method generally improved the recovery and detectability of Legionella in environmental matrices, suggesting the possibility to use IMS method as valuable indicator of risk. Thus, this method may significantly improve our knowledge about the exposure risk to these bacteria, allowing us to implement evidence-based monitoring and disinfection strategies.
Fernández-Soto, Pedro; Velasco Tirado, Virginia; Carranza Rodríguez, Cristina; Pérez-Arellano, José Luis; Muro, Antonio
2013-01-01
Human schistosomiasis remains a serious worldwide public health problem. At present, a sensitive and specific assay for routine diagnosis of schistosome infection is not yet available. The potential for detecting schistosome-derived DNA by PCR-based methods in human clinical samples is currently being investigated as a diagnostic tool with potential application in routine schistosomiasis diagnosis. Collection of diagnostic samples such as stool or blood is usually difficult in some populations. However, urine is a biological sample that can be collected in a non-invasive method, easy to get from people of all ages and easy in management, but as a sample for PCR diagnosis is still not widely used. This could be due to the high variability in the reported efficiency of detection as a result of the high variation in urine samples' storage or conditions for handling and DNA preservation and extraction methods. We evaluate different commercial DNA extraction methods from a series of long-term frozen storage human urine samples from patients with parasitological confirmed schistosomiasis in order to assess the PCR effectiveness for Schistosoma spp. detection. Patients urine samples were frozen for 18 months up to 7 years until use. Results were compared with those obtained in PCR assays using fresh healthy human urine artificially contaminated with Schistosoma mansoni DNA and urine samples from mice experimentally infected with S. mansoni cercariae stored frozen for at least 12 months before use. PCR results in fresh human artificial urine samples using different DNA based extraction methods were much more effective than those obtained when long-term frozen human urine samples were used as the source of DNA template. Long-term frozen human urine samples are probably not a good source for DNA extraction for use as a template in PCR detection of Schistosoma spp., regardless of the DNA method of extraction used.
Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B.; Pereira, Nuno Sousa; Behrman, Jere
2011-01-01
Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling methods: the traditional probability sampling method, the compact segment sampling method, and the World Health Organization’s Expanded Programme on Immunization (EPI) sampling method. Three mechanisms by which the methods could differ are tested, the proximity selection of respondents, the at-home selection of respondents, and the use of inaccurate probability weights. The results highlight the importance of revisits and accurate probability weights, but the lesser effect of proximity selection on the samples’ statistical properties. PMID:22582004
Local Feature Selection for Data Classification.
Armanfard, Narges; Reilly, James P; Komeili, Majid
2016-06-01
Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, T.F.; Thorne, P.G.; Myers, K.F.
Salting-out solvent extraction (SOE) was compared with cartridge and membrane solid-phase extraction (SPE) for preconcentration of nitroaromatics, nitramines, and aminonitroaromatics prior to determination by reversed-phase high-performance liquid chromatography. The solid phases used were manufacturer-cleaned materials, Porapak RDX for the cartridge method and Empore SDB-RPS for the membrane method. Thirty-three groundwater samples from the Naval Surface Warfare Center, Crane, Indiana, were analyzed using the direct analysis protocol specified in SW846 Method 8330, and the results were compared with analyses conducted after preconcentration using SOE with acetonitrile, cartridge-based SPE, and membrane-based SPE. For high-concentration samples, analytical results from the three preconcentration techniquesmore » were compared with results from the direct analysis protocol; good recovery of all target analytes was achieved by all three pre-concentration methods. For low-concentration samples, results from the two SPE methods were correlated with results from the SOE method; very similar data was obtained by the SOE and SPE methods, even at concentrations well below 1 microgram/L.« less
Cockburn, Glenn; Sánchez-Tójar, Alfredo; Løvlie, Hanne; Schroeder, Julia
2017-01-01
Birds are model organisms in sperm biology. Previous work in zebra finches, suggested that sperm sampled from males' faeces and ejaculates do not differ in size. Here, we tested this assumption in a captive population of house sparrows, Passer domesticus. We compared sperm length in samples from three collection techniques: female dummy, faecal and abdominal massage samples. We found that sperm were significantly shorter in faecal than abdominal massage samples, which was explained by shorter heads and midpieces, but not flagella. This result might indicate that faecal sampled sperm could be less mature than sperm collected by abdominal massage. The female dummy method resulted in an insufficient number of experimental ejaculates because most males ignored it. In light of these results, we recommend using abdominal massage as a preferred method for avian sperm sampling. Where avian sperm cannot be collected by abdominal massage alone, we advise controlling for sperm sampling protocol statistically. PMID:28813481
NASA Astrophysics Data System (ADS)
Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.
2016-12-01
The impact of climate change has been observed throughout the globe. The ecosystem experiences rapid changes such as vegetation shift, species extinction. In these context, Species Distribution Model (SDM) is one of the popular method to project impact of climate change on the ecosystem. SDM basically based on the niche of certain species with means to run SDM present point data is essential to find biological niche of species. To run SDM for plants, there are certain considerations on the characteristics of vegetation. Normally, to make vegetation data in large area, remote sensing techniques are used. In other words, the exact point of presence data has high uncertainties as we select presence data set from polygons and raster dataset. Thus, sampling methods for modeling vegetation presence data should be carefully selected. In this study, we used three different sampling methods for selection of presence data of vegetation: Random sampling, Stratified sampling and Site index based sampling. We used one of the R package BIOMOD2 to access uncertainty from modeling. At the same time, we included BioCLIM variables and other environmental variables as input data. As a result of this study, despite of differences among the 10 SDMs, the sampling methods showed differences in ROC values, random sampling methods showed the lowest ROC value while site index based sampling methods showed the highest ROC value. As a result of this study the uncertainties from presence data sampling methods and SDM can be quantified.
Liu, Gui-Long; Huang, Shi-Hong; Shi, Che-Si; Zeng, Bin; Zhang, Ke-Shi; Zhong, Xian-Ci
2018-02-10
Using copper thin-walled tubular specimens, the subsequent yield surfaces under pre-tension, pre-torsion and pre-combined tension-torsion are measured, where the single-sample and multi-sample methods are applied respectively to determine the yield stresses at specified offset strain. The rule and characteristics of the evolution of the subsequent yield surface are investigated. Under the conditions of different pre-strains, the influence of test point number, test sequence and specified offset strain on the measurement of subsequent yield surface and the concave phenomenon for measured yield surface are studied. Moreover, the feasibility and validity of the two methods are compared. The main conclusions are drawn as follows: (1) For the single or multi-sample method, the measured subsequent yield surfaces are remarkably different from cylindrical yield surfaces proposed by the classical plasticity theory; (2) there are apparent differences between the test results from the two kinds of methods: the multi-sample method is not influenced by the number of test points, test order and the cumulative effect of residual plastic strain resulting from the other test point, while those are very influential in the single-sample method; and (3) the measured subsequent yield surface may appear concave, which can be transformed to convex for single-sample method by changing the test sequence. However, for the multiple-sample method, the concave phenomenon will disappear when a larger offset strain is specified.
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
2017-10-26
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving amore » mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.« less
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving amore » mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fangyan; Zhang, Song; Chung Wong, Pak
Effectively visualizing large graphs and capturing the statistical properties are two challenging tasks. To aid in these two tasks, many sampling approaches for graph simplification have been proposed, falling into three categories: node sampling, edge sampling, and traversal-based sampling. It is still unknown which approach is the best. We evaluate commonly used graph sampling methods through a combined visual and statistical comparison of graphs sampled at various rates. We conduct our evaluation on three graph models: random graphs, small-world graphs, and scale-free graphs. Initial results indicate that the effectiveness of a sampling method is dependent on the graph model, themore » size of the graph, and the desired statistical property. This benchmark study can be used as a guideline in choosing the appropriate method for a particular graph sampling task, and the results presented can be incorporated into graph visualization and analysis tools.« less
Evaluation of 3 dental unit waterline contamination testing methods
Porteous, Nuala; Sun, Yuyu; Schoolfield, John
2015-01-01
Previous studies have found inconsistent results from testing methods used to measure heterotrophic plate count (HPC) bacteria in dental unit waterline (DUWL) samples. This study used 63 samples to compare the results obtained from an in-office chairside method and 2 currently used commercial laboratory HPC methods (Standard Methods 9215C and 9215E). The results suggest that the Standard Method 9215E is not suitable for application to DUWL quality monitoring, due to the detection of limited numbers of heterotrophic organisms at the required 35°C incubation temperature. The results also confirm that while the in-office chairside method is useful for DUWL quality monitoring, the Standard Method 9215C provided the most accurate results. PMID:25574718
Mauté, Carole; Nibourel, Olivier; Réa, Delphine; Coiteux, Valérie; Grardel, Nathalie; Preudhomme, Claude; Cayuela, Jean-Michel
2014-09-01
Until recently, diagnostic laboratories that wanted to report on the international scale had limited options: they had to align their BCR-ABL1 quantification methods through a sample exchange with a reference laboratory to derive a conversion factor. However, commercial methods calibrated on the World Health Organization genetic reference panel are now available. We report results from a study designed to assess the comparability of the two alignment strategies. Sixty follow-up samples from chronic myeloid leukemia patients were included. Two commercial methods calibrated on the genetic reference panel were compared to two conversion factor methods routinely used at Saint-Louis Hospital, Paris, and at Lille University Hospital. Results were matched against concordance criteria (i.e., obtaining at least two of the three following landmarks: 50, 75 and 90% of the patient samples within a 2-fold, 3-fold and 5-fold range, respectively). Out of the 60 samples, more than 32 were available for comparison. Compared to the conversion factor method, the two commercial methods were within a 2-fold, 3-fold and 5-fold range for 53 and 59%, 89 and 88%, 100 and 97%, respectively of the samples analyzed at Saint-Louis. At Lille, results were 45 and 85%, 76 and 97%, 100 and 100%, respectively. Agreements between methods were observed in the four comparisons performed. Our data show that the two commercial methods selected are concordant with the conversion factor methods. This study brings the proof of principle that alignment on the international scale using the genetic reference panel is compatible with the patient sample exchange procedure. We believe that these results are particularly important for diagnostic laboratories wishing to adopt commercial methods. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Mechanisms of fracture of ring samples made of FCC metals on loading with magnetic-pulse method
NASA Astrophysics Data System (ADS)
Morozov, Viktor; Kats, Victor; Savenkov, Georgiy; Lukin, Anton
2018-05-01
Results of study of deformation and fracture of ring-shaped samples made of thin strips of cuprum, aluminum and steel in wide range of loading velocity are presented. Three developed by us schemes of magnetic-pulse method are used for the samples loading. The method of samples fracture with the high electrical resistance (e.g. steel) is proposed. Crack velocity at the sample fracture is estimated. Fracture surfaces are inspected. Mechanisms of dynamic fracture of the sample arere discussed.
Aristov, Alexander; Nosova, Ekaterina
2017-04-01
The paper focuses on research aimed at creating and testing a new approach to evaluate the processes of aggregation and sedimentation of red blood cells for purpose of its use in clinical laboratory diagnostics. The proposed method is based on photometric analysis of blood sample formed as a sessile drop. The results of clinical approbation of this method are given in the paper. Analysis of the processes occurring in the sample in the form of sessile drop during the process of blood cells sedimentation is described. The results of experimental studies to evaluate the effect of the droplet sample focusing properties on light radiation transmittance are presented. It is shown that this method significantly reduces the sample volume and provides sufficiently high sensitivity to the studied processes.
Rapid combined assay for Salmonella detection in food samples.
Gadó, I; Major, P; Király, M; Pláveczky, M G
2000-01-01
A rapid method was developed to detect salmonellae in food samples. The method gave a possibility to obtain results after 28 h 30 min. The preenrichment in buffered peptone water lasted for 6 h, the enrichment in Rappaport-Vassiliadis medium was applied for 18 h followed by PCR with INVA1-INVA2 primer pair, adapting Chiu and Ou's method. This procedure was suitable to demonstrate salmonella contamination at min. 10 cfu/25 g sample. Out of 18 samples there was a good agreement between the results of the conventional and rapid methods in case of 17 samples. PCR with SPVC1-SPVC2 primer pair informing about the presence of virulence plasmid was performed in separate tubes, because decreased sensitivity was observed in case of multiplex PCR.
Lee, Eun Gyung; Magrm, Rana; Kusti, Mohannad; Kashon, Michael L; Guffey, Steven; Costas, Michelle M; Boykin, Carie J; Harper, Martin
2017-01-01
This study was to determine occupational exposures to formaldehyde and to compare concentrations of formaldehyde obtained by active and passive sampling methods. In one pathology and one histology laboratories, exposure measurements were collected with sets of active air samplers (Supelco LpDNPH tubes) and passive badges (ChemDisk Aldehyde Monitor 571). Sixty-six sample pairs (49 personal and 17 area) were collected and analyzed by NIOSH NMAM 2016 for active samples and OSHA Method 1007 (using the manufacturer's updated uptake rate) for passive samples. All active and passive 8-hr time-weighted average (TWA) measurements showed compliance with the OSHA permissible exposure limit (PEL-0.75 ppm) except for one passive measurement, whereas 78% for the active and 88% for the passive samples exceeded the NIOSH recommended exposure limit (REL-0.016 ppm). Overall, 73% of the passive samples showed higher concentrations than the active samples and a statistical test indicated disagreement between two methods for all data and for data without outliers. The OSHA Method cautions that passive samplers should not be used for sampling situations involving formalin solutions because of low concentration estimates in the presence of reaction products of formaldehyde and methanol (a formalin additive). However, this situation was not observed, perhaps because the formalin solutions used in these laboratories included much less methanol (3%) than those tested in the OSHA Method (up to 15%). The passive samplers in general overestimated concentrations compared to the active method, which is prudent for demonstrating compliance with an occupational exposure limit, but occasional large differences may be a result of collecting aerosolized droplets or splashes on the face of the samplers. In the situations examined in this study the passive sampler generally produces higher results than the active sampler so that a body of results from passive samplers demonstrating compliance with the OSHA PEL would be a valid conclusion. However, individual passive samples can show lower results than a paired active sampler so that a single result should be treated with caution.
Comparisons of discrete and integrative sampling accuracy in estimating pulsed aquatic exposures.
Morrison, Shane A; Luttbeg, Barney; Belden, Jason B
2016-11-01
Most current-use pesticides have short half-lives in the water column and thus the most relevant exposure scenarios for many aquatic organisms are pulsed exposures. Quantifying exposure using discrete water samples may not be accurate as few studies are able to sample frequently enough to accurately determine time-weighted average (TWA) concentrations of short aquatic exposures. Integrative sampling methods that continuously sample freely dissolved contaminants over time intervals (such as integrative passive samplers) have been demonstrated to be a promising measurement technique. We conducted several modeling scenarios to test the assumption that integrative methods may require many less samples for accurate estimation of peak 96-h TWA concentrations. We compared the accuracies of discrete point samples and integrative samples while varying sampling frequencies and a range of contaminant water half-lives (t 50 = 0.5, 2, and 8 d). Differences the predictive accuracy of discrete point samples and integrative samples were greatest at low sampling frequencies. For example, when the half-life was 0.5 d, discrete point samples required 7 sampling events to ensure median values > 50% and no sampling events reporting highly inaccurate results (defined as < 10% of the true 96-h TWA). Across all water half-lives investigated, integrative sampling only required two samples to prevent highly inaccurate results and measurements resulting in median values > 50% of the true concentration. Regardless, the need for integrative sampling diminished as water half-life increased. For an 8-d water half-life, two discrete samples produced accurate estimates and median values greater than those obtained for two integrative samples. Overall, integrative methods are the more accurate method for monitoring contaminants with short water half-lives due to reduced frequency of extreme values, especially with uncertainties around the timing of pulsed events. However, the acceptability of discrete sampling methods for providing accurate concentration measurements increases with increasing aquatic half-lives. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fernández-Soto, Pedro; Velasco Tirado, Virginia; Carranza Rodríguez, Cristina; Pérez-Arellano, José Luis; Muro, Antonio
2013-01-01
Background Human schistosomiasis remains a serious worldwide public health problem. At present, a sensitive and specific assay for routine diagnosis of schistosome infection is not yet available. The potential for detecting schistosome-derived DNA by PCR-based methods in human clinical samples is currently being investigated as a diagnostic tool with potential application in routine schistosomiasis diagnosis. Collection of diagnostic samples such as stool or blood is usually difficult in some populations. However, urine is a biological sample that can be collected in a non-invasive method, easy to get from people of all ages and easy in management, but as a sample for PCR diagnosis is still not widely used. This could be due to the high variability in the reported efficiency of detection as a result of the high variation in urine samples’ storage or conditions for handling and DNA preservation and extraction methods. Methodology/Principal Findings We evaluate different commercial DNA extraction methods from a series of long-term frozen storage human urine samples from patients with parasitological confirmed schistosomiasis in order to assess the PCR effectiveness for Schistosoma spp. detection. Patientś urine samples were frozen for 18 months up to 7 years until use. Results were compared with those obtained in PCR assays using fresh healthy human urine artificially contaminated with Schistosoma mansoni DNA and urine samples from mice experimentally infected with S. mansoni cercariae stored frozen for at least 12 months before use. PCR results in fresh human artificial urine samples using different DNA based extraction methods were much more effective than those obtained when long-term frozen human urine samples were used as the source of DNA template. Conclusions/Significance Long-term frozen human urine samples are probably not a good source for DNA extraction for use as a template in PCR detection of Schistosoma spp., regardless of the DNA method of extraction used. PMID:23613907
Lin, Tao; Sun, Huijun; Chen, Zhong; You, Rongyi; Zhong, Jianhui
2007-12-01
Diffusion weighting in MRI is commonly achieved with the pulsed-gradient spin-echo (PGSE) method. When combined with spin-warping image formation, this method often results in ghosts due to the sample's macroscopic motion. It has been shown experimentally (Kennedy and Zhong, MRM 2004;52:1-6) that these motion artifacts can be effectively eliminated by the distant dipolar field (DDF) method, which relies on the refocusing of spatially modulated transverse magnetization by the DDF within the sample itself. In this report, diffusion-weighted images (DWIs) using both DDF and PGSE methods in the presence of macroscopic sample motion were simulated. Numerical simulation results quantify the dependence of signals in DWI on several key motion parameters and demonstrate that the DDF DWIs are much less sensitive to macroscopic sample motion than the traditional PGSE DWIs. The results also show that the dipolar correlation distance (d(c)) can alter contrast in DDF DWIs. The simulated results are in good agreement with the experimental results reported previously.
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy. One uses triangle patches while the other uses rectangle patches. Several real world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform points sets and non-uniform points sets, first preprocessed by the adaptive sampling algorithm on a real world object were then tested using the local adjustment sampling method. The results show that the initial point sets when preprocessed by adaptive sampling using triangle patches, are moved the least amount of distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced from the finite element method. The performance of this algorithm was compared to that of the adaptive sampling using triangular patches. The adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
Influence of ECG sampling rate in fetal heart rate variability analysis.
De Jonckheere, J; Garabedian, C; Charlier, P; Champion, C; Servan-Schreiber, E; Storme, L; Debarge, V; Jeanne, M; Logier, R
2017-07-01
Fetal hypoxia results in a fetal blood acidosis (pH<;7.10). In such a situation, the fetus develops several adaptation mechanisms regulated by the autonomic nervous system. Many studies demonstrated significant changes in heart rate variability in hypoxic fetuses. So, fetal heart rate variability analysis could be of precious help for fetal hypoxia prediction. Commonly used fetal heart rate variability analysis methods have been shown to be sensitive to the ECG signal sampling rate. Indeed, a low sampling rate could induce variability in the heart beat detection which will alter the heart rate variability estimation. In this paper, we introduce an original fetal heart rate variability analysis method. We hypothesize that this method will be less sensitive to ECG sampling frequency changes than common heart rate variability analysis methods. We then compared the results of this new heart rate variability analysis method with two different sampling frequencies (250-1000 Hz).
Bruce, James F.; Roberts, James J.; Zuellig, Robert E.
2018-05-24
The U.S. Geological Survey (USGS), in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, analyzed previously collected invertebrate data to determine the comparability among four sampling methods and two versions (2010 and 2017) of the Colorado Benthic Macroinvertebrate Multimetric Index (MMI). For this study, annual macroinvertebrate samples were collected concurrently (in space and time) at 15 USGS surface-water gaging stations in the Fountain Creek Basin from 2010 to 2012 using four sampling methods. The USGS monitoring project in the basin uses two of the methods and the Colorado Department of Public Health and Environment recommends the other two. These methods belong to two distinct sample types, one that targets single habitats and one that targets multiple habitats. The study results indicate that there are significant differences in MMI values obtained from the single-habitat and multihabitat sample types but methods from each program within each sample type produced comparable values. This study also determined that MMI values calculated by different versions of the Colorado Benthic Macroinvertebrate MMI are indistinguishable. This indicates that the Colorado Department of Public Health and Environment methods are comparable with the USGS monitoring project methods for single-habitat and multihabitat sample types. This report discusses the direct application of the study results to inform the revision of the existing USGS monitoring project in the Fountain Creek Basin.
Humphry, R W; Evans, J; Webster, C; Tongue, S C; Innocent, G T; Gunn, G J
2018-02-01
Antimicrobial resistance is primarily a problem in human medicine but there are unquantified links of transmission in both directions between animal and human populations. Quantitative assessment of the costs and benefits of reduced antimicrobial usage in livestock requires robust quantification of transmission of resistance between animals, the environment and the human population. This in turn requires appropriate measurement of resistance. To tackle this we selected two different methods for determining whether a sample is resistant - one based on screening a sample, the other on testing individual isolates. Our overall objective was to explore the differences arising from choice of measurement. A literature search demonstrated the widespread use of testing of individual isolates. The first aim of this study was to compare, quantitatively, sample level and isolate level screening. Cattle or sheep faecal samples (n=41) submitted for routine parasitology were tested for antimicrobial resistance in two ways: (1) "streak" direct culture onto plates containing the antimicrobial of interest; (2) determination of minimum inhibitory concentration (MIC) of 8-10 isolates per sample compared to published MIC thresholds. Two antibiotics (ampicillin and nalidixic acid) were tested. With ampicillin, direct culture resulted in more than double the number of resistant samples than the MIC method based on eight individual isolates. The second aim of this study was to demonstrate the utility of the observed relationship between these two measures of antimicrobial resistance to re-estimate the prevalence of antimicrobial resistance from a previous study, in which we had used "streak" cultures. Boot-strap methods were used to estimate the proportion of samples that would have tested resistant in the historic study, had we used the isolate-based MIC method instead. Our boot-strap results indicate that our estimates of prevalence of antimicrobial resistance would have been considerably lower in the historic study had the MIC method been used. Finally we conclude that there is no single way of defining a sample as resistant to an antimicrobial agent. The method used greatly affects the estimated prevalence of antimicrobial resistance in a sampled population of animals, thus potentially resulting in misleading results. Comparing methods on the same samples allows us to re-estimate the prevalence from other studies, had other methods for determining resistance been used. The results of this study highlight the importance of establishing what the most appropriate measure of antimicrobial resistance is, for the proposed purpose of the results. Copyright © 2017 Elsevier B.V. All rights reserved.
Masuyama, Kotoka; Shojo, Hideki; Nakanishi, Hiroaki; Inokuchi, Shota; Adachi, Noboru
2017-01-01
Sex determination is important in archeology and anthropology for the study of past societies, cultures, and human activities. Sex determination is also one of the most important components of individual identification in criminal investigations. We developed a new method of sex determination by detecting a single-nucleotide polymorphism in the amelogenin gene using amplified product-length polymorphisms in combination with sex-determining region Y analysis. We particularly focused on the most common types of postmortem DNA damage in ancient and forensic samples: fragmentation and nucleotide modification resulting from deamination. Amplicon size was designed to be less than 60 bp to make the method more useful for analyzing degraded DNA samples. All DNA samples collected from eight Japanese individuals (four male, four female) were evaluated correctly using our method. The detection limit for accurate sex determination was determined to be 20 pg of DNA. We compared our new method with commercial short tandem repeat analysis kits using DNA samples artificially fragmented by ultraviolet irradiation. Our novel method was the most robust for highly fragmented DNA samples. To deal with allelic dropout resulting from deamination, we adopted "bidirectional analysis," which analyzed samples from both sense and antisense strands. This new method was applied to 14 Jomon individuals (3500-year-old bone samples) whose sex had been identified morphologically. We could correctly identify the sex of 11 out of 14 individuals. These results show that our method is reliable for the sex determination of highly degenerated samples.
Masuyama, Kotoka; Shojo, Hideki; Nakanishi, Hiroaki; Inokuchi, Shota; Adachi, Noboru
2017-01-01
Sex determination is important in archeology and anthropology for the study of past societies, cultures, and human activities. Sex determination is also one of the most important components of individual identification in criminal investigations. We developed a new method of sex determination by detecting a single-nucleotide polymorphism in the amelogenin gene using amplified product-length polymorphisms in combination with sex-determining region Y analysis. We particularly focused on the most common types of postmortem DNA damage in ancient and forensic samples: fragmentation and nucleotide modification resulting from deamination. Amplicon size was designed to be less than 60 bp to make the method more useful for analyzing degraded DNA samples. All DNA samples collected from eight Japanese individuals (four male, four female) were evaluated correctly using our method. The detection limit for accurate sex determination was determined to be 20 pg of DNA. We compared our new method with commercial short tandem repeat analysis kits using DNA samples artificially fragmented by ultraviolet irradiation. Our novel method was the most robust for highly fragmented DNA samples. To deal with allelic dropout resulting from deamination, we adopted “bidirectional analysis,” which analyzed samples from both sense and antisense strands. This new method was applied to 14 Jomon individuals (3500-year-old bone samples) whose sex had been identified morphologically. We could correctly identify the sex of 11 out of 14 individuals. These results show that our method is reliable for the sex determination of highly degenerated samples. PMID:28052096
Finck, Rachel; Lui-Deguzman, Carrie; Teng, Shih-Mao; Davis, Rebecca; Yuan, Shan
2013-04-01
Titration is a semiquantitative method used to estimate red blood cell (RBC) alloantibody reactivity. The conventional tube test (CTT) technique is the traditional method for performing titration studies. The gel microcolumn assay (GMA) is also a sensitive method to detect RBC alloantibodies. The aim of this study was to compare a GMA with the CTT technique in the performance of Rh and K alloantibody titration. Patient serum samples that contained an RBC alloantibody with a singular specificity were identified by routine blood bank workflow. Parallel titration studies were performed on these samples by both the CTT method and a GMA (ID-Micro Typing System anti-IgG gel card, Micro Typing Systems, Inc., an Ortho-Clinical Diagnostics Company). Forty-eight samples were included, including 11 anti-D, five anti-c, 13 anti-E, one anti-C, three anti-e, and 15 anti-K. Overall, the two methods generated identical results in 21 of 48 samples. For 42 samples (87.5%) the two methods generated results that were within one serial dilution, and for the remaining six samples, results were within two dilutions. GMA systems may perform comparably to the CTT in titrating alloantibodies to Rh and Kell antigens. © 2012 American Association of Blood Banks.
The effect of sampling techniques used in the multiconfigurational Ehrenfest method
NASA Astrophysics Data System (ADS)
Symonds, C.; Kattirtzi, J. A.; Shalashilin, D. V.
2018-05-01
In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.
The effect of sampling techniques used in the multiconfigurational Ehrenfest method.
Symonds, C; Kattirtzi, J A; Shalashilin, D V
2018-05-14
In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.
Kmiecik, Ewa; Tomaszewska, Barbara; Wątor, Katarzyna; Bodzek, Michał
2016-06-01
The aim of the study was to compare the two reference methods for the determination of boron in water samples and further assess the impact of the method of preparation of samples for analysis on the results obtained. Samples were collected during different desalination processes, ultrafiltration and the double reverse osmosis system, connected in series. From each point, samples were prepared in four different ways: the first was filtered (through a membrane filter of 0.45 μm) and acidified (using 1 mL ultrapure nitric acid for each 100 mL of samples) (FA), the second was unfiltered and not acidified (UFNA), the third was filtered but not acidified (FNA), and finally, the fourth was unfiltered but acidified (UFA). All samples were analysed using two analytical methods: inductively coupled plasma mass spectrometry (ICP-MS) and inductively coupled plasma optical emission spectrometry (ICP-OES). The results obtained were compared and correlated, and the differences between them were studied. The results show that there are statistically significant differences between the concentrations obtained using the ICP-MS and ICP-OES techniques regardless of the methods of sampling preparation (sample filtration and preservation). Finally, both the ICP-MS and ICP-OES methods can be used for determination of the boron concentration in water. The differences in the boron concentrations obtained using these two methods can be caused by several high-level concentrations in selected whole-water digestates and some matrix effects. Higher concentrations of iron (from 1 to 20 mg/L) than chromium (0.02-1 mg/L) in the samples analysed can influence boron determination. When iron concentrations are high, we can observe the emission spectrum as a double joined and overlapping peak.
Investigating Test Equating Methods in Small Samples through Various Factors
ERIC Educational Resources Information Center
Asiret, Semih; Sünbül, Seçil Ömür
2016-01-01
In this study, equating methods for random group design using small samples through factors such as sample size, difference in difficulty between forms, and guessing parameter was aimed for comparison. Moreover, which method gives better results under which conditions was also investigated. In this study, 5,000 dichotomous simulated data…
ABSTRACT: Few studies have addressed the efficacy of composite sampling for measurement of indicator bacteria by QPCR. In this study, composite results were compared to single sample results for culture- and QPCR-based water quality monitoring. Composite results for both methods ...
Zhuang, H; Savage, E M
2008-10-01
Quality assessment results of cooked meat can be significantly affected by sample preparation with different cooking techniques. A combi oven is a relatively new cooking technique in the U.S. market. However, there was a lack of published data about its effect on quality measurements of chicken meat. Broiler breast fillets deboned at 24-h postmortem were cooked with one of the 3 methods to the core temperature of 80 degrees C. Cooking methods were evaluated based on cooking operation requirements, sensory profiles, Warner-Bratzler (WB) shear and cooking loss. Our results show that the average cooking time for the combi oven was 17 min compared with 31 min for the commercial oven method and 16 min for the hot water method. The combi oven did not result in a significant difference in the WB shear force values, although the cooking loss of the combi oven samples was significantly lower than the commercial oven and hot water samples. Sensory profiles of the combi oven samples did not significantly differ from those of the commercial oven and hot water samples. These results demonstrate that combi oven cooking did not significantly affect sensory profiles and WB shear force measurements of chicken breast muscle compared to the other 2 cooking methods. The combi oven method appears to be an acceptable alternative for preparing chicken breast fillets in a quality assessment.
Alfano, Robert R.; Demos, Stavros G.; Zhang, Gang
2003-12-16
Method and an apparatus for examining a tissue using the spectral wing emission therefrom induced by visible to infrared photoexcitation. In one aspect, the method is used to characterize the condition of a tissue sample and comprises the steps of (a) photoexciting the tissue sample with substantially monochromatic light having a wavelength of at least 600 nm; and (b) using the resultant far red and near infrared spectral wing emission (SW) emitted from the tissue sample to characterize the condition of the tissue sample. In one embodiment, the substantially monochromatic photoexciting light is a continuous beam of light, and the resultant steady-state far red and near infrared SW emission from the tissue sample is used to characterize the condition of the tissue sample. In another embodiment, the substantially monochromatic photoexciting light is a light pulse, and the resultant time-resolved far red and near infrared SW emission emitted from the tissue sample is used to characterize the condition of the tissue sample. In still another embodiment, the substantially monochromatic photoexciting light is a polarized light pulse, and the parallel and perpendicular components of the resultant polarized time-resolved SW emission emitted from the tissue sample are used to characterize the condition of the tissue sample.
Determination of Porosity in Shale by Double Headspace Extraction GC Analysis.
Zhang, Chun-Yun; Li, Teng-Fei; Chai, Xin-Sheng; Xiao, Xian-Ming; Barnes, Donald
2015-11-03
This paper reports on a novel method for the rapid determination of the shale porosity by double headspace extraction gas chromatography (DHE-GC). Ground core samples of shale were placed into headspace vials and DHE-GC measurements of released methane gas were performed at a given time interval. A linear correlation between shale porosity and the ratio of consecutive GC signals was established both theoretically and experimentally by comparing with the results from the standard helium pycnometry method. The results showed that (a) the porosity of ground core samples of shale can be measured within 30 min; (b) the new method is not significantly affected by particle size of the sample; (c) the uncertainties of measured porosities of nine shale samples by the present method range from 0.31 to 0.46 p.u.; and (d) the results obtained by the DHE-GC method are in a good agreement with those from the standard helium pycnometry method. In short, the new DHE-GC method is simple, rapid, and accurate, making it a valuable tool for shale gas-related research and applications.
Meghdadi, Hossein; Khosravi, Azar D.; Ghadiri, Ata A.; Sina, Amir H.; Alami, Ameneh
2015-01-01
Present study was aimed to examine the diagnostic utility of polymerase chain reaction (PCR) and nested PCR techniques for the detection of Mycobacterium tuberculosis (MTB) DNA in samples from patients with extra pulmonary tuberculosis (EPTB). In total 80 formalin-fixed, paraffin-embedded (FFPE) samples comprising 70 samples with definite diagnosis of EPTB and 10 samples from known non- EPTB on the basis of histopathology examination, were included in the study. PCR amplification targeting IS6110, rpoB gene and nested PCR targeting the rpoB gene were performed on the extracted DNAs from 80 FFPE samples. The strong positive samples were directly sequenced. For negative samples and those with weak band in nested-rpoB PCR, TA cloning was performed by cloning the products into the plasmid vector with subsequent sequencing. The 95% confidence intervals (CI) for the estimates of sensitivity and specificity were calculated for each method. Fourteen (20%), 34 (48.6%), and 60 (85.7%) of the 70 positive samples confirmed by histopathology, were positive by rpoB-PCR, IS6110-PCR, and nested-rpoB PCR, respectively. By performing TA cloning on samples that yielded weak (n = 8) or negative results (n = 10) in the PCR methods, we were able to improve their quality for later sequencing. All samples with weak band and 7 out of 10 negative samples, showed strong positive results after cloning. So nested-rpoB PCR cloning revealed positivity in 67 out of 70 confirmed samples (95.7%). The sensitivity of these combination methods was calculated as 95.7% in comparison with histopathology examination. The CI for sensitivity of the PCR methods were calculated as 11.39–31.27% for rpoB-PCR, 36.44–60.83% for IS6110- PCR, 75.29–92.93% for nested-rpoB PCR, and 87.98–99.11% for nested-rpoB PCR cloning. The 10 true EPTB negative samples by histopathology, were negative by all tested methods including cloning and were used to calculate the specificity of the applied methods. The CI for 100% specificity of each PCR method were calculated as 69.15–100%. Our results indicated that nested-rpoB PCR combined with TA cloning and sequencing is a preferred method for the detection of MTB DNA in EPTB samples with high sensitivity and specificity which confirm the histopathology results. PMID:26191059
Meghdadi, Hossein; Khosravi, Azar D; Ghadiri, Ata A; Sina, Amir H; Alami, Ameneh
2015-01-01
Present study was aimed to examine the diagnostic utility of polymerase chain reaction (PCR) and nested PCR techniques for the detection of Mycobacterium tuberculosis (MTB) DNA in samples from patients with extra pulmonary tuberculosis (EPTB). In total 80 formalin-fixed, paraffin-embedded (FFPE) samples comprising 70 samples with definite diagnosis of EPTB and 10 samples from known non- EPTB on the basis of histopathology examination, were included in the study. PCR amplification targeting IS6110, rpoB gene and nested PCR targeting the rpoB gene were performed on the extracted DNAs from 80 FFPE samples. The strong positive samples were directly sequenced. For negative samples and those with weak band in nested-rpoB PCR, TA cloning was performed by cloning the products into the plasmid vector with subsequent sequencing. The 95% confidence intervals (CI) for the estimates of sensitivity and specificity were calculated for each method. Fourteen (20%), 34 (48.6%), and 60 (85.7%) of the 70 positive samples confirmed by histopathology, were positive by rpoB-PCR, IS6110-PCR, and nested-rpoB PCR, respectively. By performing TA cloning on samples that yielded weak (n = 8) or negative results (n = 10) in the PCR methods, we were able to improve their quality for later sequencing. All samples with weak band and 7 out of 10 negative samples, showed strong positive results after cloning. So nested-rpoB PCR cloning revealed positivity in 67 out of 70 confirmed samples (95.7%). The sensitivity of these combination methods was calculated as 95.7% in comparison with histopathology examination. The CI for sensitivity of the PCR methods were calculated as 11.39-31.27% for rpoB-PCR, 36.44-60.83% for IS6110- PCR, 75.29-92.93% for nested-rpoB PCR, and 87.98-99.11% for nested-rpoB PCR cloning. The 10 true EPTB negative samples by histopathology, were negative by all tested methods including cloning and were used to calculate the specificity of the applied methods. The CI for 100% specificity of each PCR method were calculated as 69.15-100%. Our results indicated that nested-rpoB PCR combined with TA cloning and sequencing is a preferred method for the detection of MTB DNA in EPTB samples with high sensitivity and specificity which confirm the histopathology results.
König, Gerhard; Miller, Benjamin T; Boresch, Stefan; Wu, Xiongwu; Brooks, Bernard R
2012-10-09
One of the key requirements for the accurate calculation of free energy differences is proper sampling of conformational space. Especially in biological applications, molecular dynamics simulations are often confronted with rugged energy surfaces and high energy barriers, leading to insufficient sampling and, in turn, poor convergence of the free energy results. In this work, we address this problem by employing enhanced sampling methods. We explore the possibility of using self-guided Langevin dynamics (SGLD) to speed up the exploration process in free energy simulations. To obtain improved free energy differences from such simulations, it is necessary to account for the effects of the bias due to the guiding forces. We demonstrate how this can be accomplished for the Bennett's acceptance ratio (BAR) and the enveloping distribution sampling (EDS) methods. While BAR is considered among the most efficient methods available for free energy calculations, the EDS method developed by Christ and van Gunsteren is a promising development that reduces the computational costs of free energy calculations by simulating a single reference state. To evaluate the accuracy of both approaches in connection with enhanced sampling, EDS was implemented in CHARMM. For testing, we employ benchmark systems with analytical reference results and the mutation of alanine to serine. We find that SGLD with reweighting can provide accurate results for BAR and EDS where conventional molecular dynamics simulations fail. In addition, we compare the performance of EDS with other free energy methods. We briefly discuss the implications of our results and provide practical guidelines for conducting free energy simulations with SGLD.
NASA Astrophysics Data System (ADS)
Lusiana, Evellin Dewi
2017-12-01
The parameters of binary probit regression model are commonly estimated by using Maximum Likelihood Estimation (MLE) method. However, MLE method has limitation if the binary data contains separation. Separation is the condition where there are one or several independent variables that exactly grouped the categories in binary response. It will result the estimators of MLE method become non-convergent, so that they cannot be used in modeling. One of the effort to resolve the separation is using Firths approach instead. This research has two aims. First, to identify the chance of separation occurrence in binary probit regression model between MLE method and Firths approach. Second, to compare the performance of binary probit regression model estimator that obtained by MLE method and Firths approach using RMSE criteria. Those are performed using simulation method and under different sample size. The results showed that the chance of separation occurrence in MLE method for small sample size is higher than Firths approach. On the other hand, for larger sample size, the probability decreased and relatively identic between MLE method and Firths approach. Meanwhile, Firths estimators have smaller RMSE than MLEs especially for smaller sample sizes. But for larger sample sizes, the RMSEs are not much different. It means that Firths estimators outperformed MLE estimator.
de Vries, W; Wieggers, H J J; Brus, D J
2010-08-05
Element fluxes through forest ecosystems are generally based on measurements of concentrations in soil solution at regular time intervals at plot locations sampled in a regular grid. Here we present spatially averaged annual element leaching fluxes in three Dutch forest monitoring plots using a new sampling strategy in which both sampling locations and sampling times are selected by probability sampling. Locations were selected by stratified random sampling with compact geographical blocks of equal surface area as strata. In each sampling round, six composite soil solution samples were collected, consisting of five aliquots, one per stratum. The plot-mean concentration was estimated by linear regression, so that the bias due to one or more strata being not represented in the composite samples is eliminated. The sampling times were selected in such a way that the cumulative precipitation surplus of the time interval between two consecutive sampling times was constant, using an estimated precipitation surplus averaged over the past 30 years. The spatially averaged annual leaching flux was estimated by using the modeled daily water flux as an ancillary variable. An important advantage of the new method is that the uncertainty in the estimated annual leaching fluxes due to spatial and temporal variation and resulting sampling errors can be quantified. Results of this new method were compared with the reference approach in which daily leaching fluxes were calculated by multiplying daily interpolated element concentrations with daily water fluxes and then aggregated to a year. Results show that the annual fluxes calculated with the reference method for the period 2003-2005, including all plots, elements and depths, lies only in 53% of the cases within the range of the average +/-2 times the standard error of the new method. Despite the differences in results, both methods indicate comparable N retention and strong Al mobilization in all plots, with Al leaching being nearly equal to the leaching of SO(4) and NO(3) with fluxes expressed in mol(c) ha(-1) yr(-1). This illustrates that Al release, which is the clearest signal of soil acidification, is mainly due to the external input of SO(4) and NO(3).
Effects of Linking Methods on Detection of DIF.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
1992-01-01
Effects of the following methods for linking metrics on detection of differential item functioning (DIF) were compared: (1) test characteristic curve method (TCC); (2) weighted mean and sigma method; and (3) minimum chi-square method. With large samples, results were essentially the same. With small samples, TCC was most accurate. (SLD)
Validation of the ANSR Listeria method for detection of Listeria spp. in environmental samples.
Wendorf, Michael; Feldpausch, Emily; Pinkava, Lisa; Luplow, Karen; Hosking, Edan; Norton, Paul; Biswas, Preetha; Mozola, Mark; Rice, Jennifer
2013-01-01
ANSR Listeria is a new diagnostic assay for detection of Listeria spp. in sponge or swab samples taken from a variety of environmental surfaces. The method is an isothermal nucleic acid amplification assay based on the nicking enzyme amplification reaction technology. Following single-step sample enrichment for 16-24 h, the assay is completed in 40 min, requiring only simple instrumentation. In inclusivity testing, 48 of 51 Listeria strains tested positive, with only the three strains of L. grayi producing negative results. Further investigation showed that L. grayi is reactive in the ANSR assay, but its ability to grow under the selective enrichment conditions used in the method is variable. In exclusivity testing, 32 species of non-Listeria, Gram-positive bacteria all produced negative ANSR assay results. Performance of the ANSR method was compared to that of the U.S. Department of Agriculture-Food Safety and Inspection Service reference culture procedure for detection of Listeria spp. in sponge or swab samples taken from inoculated stainless steel, plastic, ceramic tile, sealed concrete, and rubber surfaces. Data were analyzed using Chi-square and probability of detection models. Only one surface, stainless steel, showed a significant difference in performance between the methods, with the ANSR method producing more positive results. Results of internal trials were supported by findings from independent laboratory testing. The ANSR Listeria method can be used as an accurate, rapid, and simple alternative to standard culture methods for detection of Listeria spp. in environmental samples.
Pipes, W O; Minnigh, H A; Moyer, B; Troy, M A
1986-01-01
A total of 2,601 water samples from six different water systems were tested for coliform bacteria by Clark's presence-absence (P-A) test and by the membrane filter (MF) method. There was no significant difference in the fraction of samples positive for coliform bacteria for any of the systems tested. It was concluded that the two tests are equivalent for monitoring purposes. However, 152 samples were positive for coliform bacteria by the MF method but negative by the P-A test, and 132 samples were positive by the P-A test but negative by the MF method. Many of these differences for individual samples can be explained by random dispersion of bacteria in subsamples when the coliform density is low. However, 15 samples had MF counts greater than 3 and gave negative P-A results. The only apparent explanation for most of these results is that coliform bacteria were present in the P-A test bottles but did not produce acid and gas. Two other studies have reported more samples positive by Clark's P-A test than by the MF method. PMID:3532953
DuPont Qualicon BAX System polymerase chain reaction assay. Performance Tested Method 100201.
Tice, George; Andaloro, Bridget; Fallon, Dawn; Wallace, F Morgan
2009-01-01
A recent outbreak of Salmonella in peanut butter has highlighted the need for validation of rapid detection methods. A multilaboratory study for detecting Salmonella in peanut butter was conducted as part of the AOAC Research Institute Emergency Response Validation program for methods that detect outbreak threats to food safety. Three sites tested spiked samples from the same master mix according to the U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA-BAM) method and the BAX System method. Salmonella Typhimurium (ATCC 14028) was grown in brain heart infusion for 24 h at 37 degrees C, then diluted to appropriate levels for sample inoculation. Master samples of peanut butter were spiked at high and low target levels, mixed, and allowed to equilibrate at room temperature for 2 weeks. Spike levels were low [1.08 most probable number (MPN)/25 g]; high (11.5 MPN/25 g) and unspiked to serve as negative controls. Each master sample was divided into 25 g portions and coded to blind the samples. Twenty portions of each spiked master sample and five portions of the unspiked sample were tested at each site. At each testing site, samples were blended in 25 g portions with 225 mL prewarmed lactose broth until thoroughly homogenized, then allowed to remain at room temperature for 55-65 min. Samples were adjusted to a pH of 6.8 +/- 0.2, if necessary, and incubated for 22-26 h at 35 degrees C. Across the three reporting laboratories, the BAX System detected Salmonella in 10/60 low-spike samples and 58/60 high-spike samples. The reference FDA-BAM method yielded positive results for 11/60 low-spike and 58/60 high-spike samples. Neither method demonstrated positive results for any of the 15 unspiked samples.
Moreno, Yolanda; Ballesteros, Lorena; García-Hernández, Jorge; Santiago, Paula; González, Ana; Ferrús, M Antonia
2011-10-01
Listeria monocytogenes detection in wastewater can be difficult because of the large amount of background microbiota and the presence of viable but non-culturable forms in this environment. The aim of this study was to evaluate a Fluorescent In Situ Hybridization (FISH) assay combined with Direct Viable Count (DVC) method for detecting viable L. monocytogenes in wastewater samples, as an alternative to conventional culture methods. 16S rRNA sequence data were used to design a specific oligonucleotide probe. In order to assess the suitability of the method, the assays were performed on naturally (n=87) and artificially (n=14) contaminated samples and results were compared to those obtained with the isolation of cells on selective media and with a PCR method. The detection limit of FISH and PCR assays was 10(4) cells/mL without enrichment and 10 cells/mL after enrichment. A total of 47 samples, including 3 samples from effluent sites, yielded FISH positive results for L. monocytogenes. Using DVC-FISH technique, the presence of viable L. monocytogenes cells was detected in 23 out of these 47 FISH positive wastewater samples. PCR and culture methods yielded 27 and 23 positive results, respectively. According to these results, FISH technique has the potential to be used as a sensitive method for the detection and enumeration of L. monocytogenes in environmental wastewater samples. Copyright © 2011 Elsevier Ltd. All rights reserved.
Migaszewski, Z.M.; Lamothe, P.J.; Crock, J.G.; Galuszka, A.; Dolegowska, S.
2011-01-01
Trace element concentrations in plant bioindicators are often determined to assess the quality of the environment. Instrumental methods used for trace element determination require digestion of samples. There are different methods of sample preparation for trace element analysis, and the selection of the best method should be fitted for the purpose of a study. Our hypothesis is that the method of sample preparation is important for interpretation of the results. Here we compare the results of 36 element determinations performed by ICP-MS on ashed and on acid-digested (HNO3, H2O2) samples of two moss species (Hylocomium splendens and Pleurozium schreberi) collected in Alaska and in south-central Poland. We found that dry ashing of the moss samples prior to analysis resulted in considerably lower detection limits of all the elements examined. We also show that this sample preparation technique facilitated the determination of interregional and interspecies differences in the chemistry of trace elements. Compared to the Polish mosses, the Alaskan mosses displayed more positive correlations of the major rock-forming elements with ash content, reflecting those elements' geogenic origin. Of the two moss species, P. schreberi from both Alaska and Poland was also highlighted by a larger number of positive element pair correlations. The cluster analysis suggests that the more uniform element distribution pattern of the Polish mosses primarily reflects regional air pollution sources. Our study has shown that the method of sample preparation is an important factor in statistical interpretation of the results of trace element determinations. ?? 2010 Springer-Verlag.
Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks
NASA Astrophysics Data System (ADS)
Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li
2016-06-01
Conventional artificial neural networks used to solve electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To solve these problems, we propose to use a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron to the inversion results. Then, the hyperparameter α k , which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. The inversion of synthetic data and field data suggests that the proposed method suppresses the noise in the neural network training stage and enhances the generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods as well as the conventional least squares inversion.
NASA Astrophysics Data System (ADS)
Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin
2018-01-01
ISO 12233 slanted-edge method experiences errors using fast Fourier transform (FFT) in the camera modulation transfer function (MTF) measurement due to tilt angle errors in the knife-edge resulting in nonuniform sampling of the edge spread function (ESF). In order to resolve this problem, a modified slanted-edge method using nonuniform fast Fourier transform (NUFFT) for camera MTF measurement is proposed. Theoretical simulations for images with noise at a different nonuniform sampling rate of ESF are performed using the proposed modified slanted-edge method. It is shown that the proposed method successfully eliminates the error due to the nonuniform sampling of the ESF. An experimental setup for camera MTF measurement is established to verify the accuracy of the proposed method. The experiment results show that under different nonuniform sampling rates of ESF, the proposed modified slanted-edge method has improved accuracy for the camera MTF measurement compared to the ISO 12233 slanted-edge method.
Real-Time PCR Method for Detection of Salmonella spp. in Environmental Samples.
Kasturi, Kuppuswamy N; Drgon, Tomas
2017-07-15
The methods currently used for detecting Salmonella in environmental samples require 2 days to produce results and have limited sensitivity. Here, we describe the development and validation of a real-time PCR Salmonella screening method that produces results in 18 to 24 h. Primers and probes specific to the gene invA , group D, and Salmonella enterica serovar Enteritidis organisms were designed and evaluated for inclusivity and exclusivity using a panel of 329 Salmonella isolates representing 126 serovars and 22 non- Salmonella organisms. The invA - and group D-specific sets identified all the isolates accurately. The PCR method had 100% inclusivity and detected 1 to 2 copies of Salmonella DNA per reaction. Primers specific for Salmonella -differentiating fragment 1 (Sdf-1) in conjunction with the group D set had 100% inclusivity for 32 S Enteritidis isolates and 100% exclusivity for the 297 non-Enteritidis Salmonella isolates. Single-laboratory validation performed on 1,741 environmental samples demonstrated that the PCR method detected 55% more positives than the V itek i mmuno d iagnostic a ssay s ystem (VIDAS) method. The PCR results correlated well with the culture results, and the method did not report any false-negative results. The receiver operating characteristic (ROC) analysis documented excellent agreement between the results from the culture and PCR methods (area under the curve, 0.90; 95% confidence interval of 0.76 to 1.0) confirming the validity of the PCR method. IMPORTANCE This validated PCR method detects 55% more positives for Salmonella in half the time required for the reference method, VIDAS. The validated PCR method will help to strengthen public health efforts through rapid screening of Salmonella spp. in environmental samples.
Real-Time PCR Method for Detection of Salmonella spp. in Environmental Samples
Drgon, Tomas
2017-01-01
ABSTRACT The methods currently used for detecting Salmonella in environmental samples require 2 days to produce results and have limited sensitivity. Here, we describe the development and validation of a real-time PCR Salmonella screening method that produces results in 18 to 24 h. Primers and probes specific to the gene invA, group D, and Salmonella enterica serovar Enteritidis organisms were designed and evaluated for inclusivity and exclusivity using a panel of 329 Salmonella isolates representing 126 serovars and 22 non-Salmonella organisms. The invA- and group D-specific sets identified all the isolates accurately. The PCR method had 100% inclusivity and detected 1 to 2 copies of Salmonella DNA per reaction. Primers specific for Salmonella-differentiating fragment 1 (Sdf-1) in conjunction with the group D set had 100% inclusivity for 32 S. Enteritidis isolates and 100% exclusivity for the 297 non-Enteritidis Salmonella isolates. Single-laboratory validation performed on 1,741 environmental samples demonstrated that the PCR method detected 55% more positives than the Vitek immunodiagnostic assay system (VIDAS) method. The PCR results correlated well with the culture results, and the method did not report any false-negative results. The receiver operating characteristic (ROC) analysis documented excellent agreement between the results from the culture and PCR methods (area under the curve, 0.90; 95% confidence interval of 0.76 to 1.0) confirming the validity of the PCR method. IMPORTANCE This validated PCR method detects 55% more positives for Salmonella in half the time required for the reference method, VIDAS. The validated PCR method will help to strengthen public health efforts through rapid screening of Salmonella spp. in environmental samples. PMID:28500041
Zhou, Zhi; Pons, Marie Noëlle; Raskin, Lutgarde; Zilles, Julie L
2007-05-01
When fluorescence in situ hybridization (FISH) analyses are performed with complex environmental samples, difficulties related to the presence of microbial cell aggregates and nonuniform background fluorescence are often encountered. The objective of this study was to develop a robust and automated quantitative FISH method for complex environmental samples, such as manure and soil. The method and duration of sample dispersion were optimized to reduce the interference of cell aggregates. An automated image analysis program that detects cells from 4',6'-diamidino-2-phenylindole (DAPI) micrographs and extracts the maximum and mean fluorescence intensities for each cell from corresponding FISH images was developed with the software Visilog. Intensity thresholds were not consistent even for duplicate analyses, so alternative ways of classifying signals were investigated. In the resulting method, the intensity data were divided into clusters using fuzzy c-means clustering, and the resulting clusters were classified as target (positive) or nontarget (negative). A manual quality control confirmed this classification. With this method, 50.4, 72.1, and 64.9% of the cells in two swine manure samples and one soil sample, respectively, were positive as determined with a 16S rRNA-targeted bacterial probe (S-D-Bact-0338-a-A-18). Manual counting resulted in corresponding values of 52.3, 70.6, and 61.5%, respectively. In two swine manure samples and one soil sample 21.6, 12.3, and 2.5% of the cells were positive with an archaeal probe (S-D-Arch-0915-a-A-20), respectively. Manual counting resulted in corresponding values of 22.4, 14.0, and 2.9%, respectively. This automated method should facilitate quantitative analysis of FISH images for a variety of complex environmental samples.
Révész, Kinga M; Landwehr, Jurate M
2002-01-01
A new method was developed to analyze the stable carbon and oxygen isotope ratios of small samples (400 +/- 20 micro g) of calcium carbonate. This new method streamlines the classical phosphoric acid/calcium carbonate (H(3)PO(4)/CaCO(3)) reaction method by making use of a recently available Thermoquest-Finnigan GasBench II preparation device and a Delta Plus XL continuous flow isotope ratio mass spectrometer. Conditions for which the H(3)PO(4)/CaCO(3) reaction produced reproducible and accurate results with minimal error had to be determined. When the acid/carbonate reaction temperature was kept at 26 degrees C and the reaction time was between 24 and 54 h, the precision of the carbon and oxygen isotope ratios for pooled samples from three reference standard materials was =0.1 and =0.2 per mill or per thousand, respectively, although later analysis showed that materials from one specific standard required reaction time between 34 and 54 h for delta(18)O to achieve this level of precision. Aliquot screening methods were shown to further minimize the total error. The accuracy and precision of the new method were analyzed and confirmed by statistical analysis. The utility of the method was verified by analyzing calcite from Devils Hole, Nevada, for which isotope-ratio values had previously been obtained by the classical method. Devils Hole core DH-11 recently had been re-cut and re-sampled, and isotope-ratio values were obtained using the new method. The results were comparable with those obtained by the classical method with correlation = +0.96 for both isotope ratios. The consistency of the isotopic results is such that an alignment offset could be identified in the re-sampled core material, and two cutting errors that occurred during re-sampling then were confirmed independently. This result indicates that the new method is a viable alternative to the classical reaction method. In particular, the new method requires less sample material permitting finer resolution and allows automation of some processes resulting in considerable time savings.
Rocha, C F D; Van Sluys, M; Hatano, F H; Boquimpani-Freitas, L; Marra, R V; Marques, R V
2004-11-01
Studies on anurans in restinga habitats are few and, as a result, there is little information on which methods are more efficient for sampling them in this environment. Ten methods are usually used for sampling anuran communities in tropical and sub-tropical areas. In this study we evaluate which methods are more appropriate for this purpose in the restinga environment of Parque Nacional da Restinga de Jurubatiba. We analyzed six methods among those usually used for anuran samplings. For each method, we recorded the total amount of time spent (in min.), the number of researchers involved, and the number of species captured. We calculated a capture efficiency index (time necessary for a researcher to capture an individual frog) in order to make comparable the data obtained. Of the methods analyzed, the species inventory (9.7 min/searcher /ind.- MSI; richness = 6; abundance = 23) and the breeding site survey (9.5 MSI; richness = 4; abundance = 22) were the most efficient. The visual encounter inventory (45.0 MSI) and patch sampling (65.0 MSI) methods were of comparatively lower efficiency restinga, whereas the plot sampling and the pit-fall traps with drift-fence methods resulted in no frog capture. We conclude that there is a considerable difference in efficiency of methods used in the restinga environment and that the complete species inventory method is highly efficient for sampling frogs in the restinga studied and may be so in other restinga environments. Methods that are usually efficient in forested areas seem to be of little value in open restinga habitats.
Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael
2014-01-01
Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144
Szyłak-Szydłowski, Mirosław
2017-09-01
The basic principle of odor sampling from surface sources is based primarily on the amount of air obtained from a specific area of the ground, which acts as a source of malodorous compounds. Wind tunnels and flux chambers are often the only available, direct method of evaluating the odor fluxes from small area sources. There are currently no widely accepted chamber-based methods; thus, there is still a need for standardization of these methods to ensure accuracy and comparability. Previous research has established that there is a significant difference between the odor concentration values obtained using the Lindvall chamber and those obtained by a dynamic flow chamber. Thus, the present study compares sampling methods using a streaming chamber modeled on the Lindvall cover (using different wind speeds), a static chamber, and a direct sampling method without any screens. The volumes of chambers in the current work were similar, ~0.08 m 3 . This study was conducted at the mechanical-biological treatment plant in Poland. Samples were taken from a pile covered by the membrane. Measured odor concentration values were between 2 and 150 ou E /m 3 . Results of the study demonstrated that both chambers can be used interchangeably in the following conditions: odor concentration is below 60 ou E /m 3 , wind speed inside the Lindvall chamber is below 0.2 m/sec, and a flow value is below 0.011 m 3 /sec. Increasing the wind speed above the aforementioned value results in significant differences in the results obtained between those methods. In all experiments, the results of the concentration of odor in the samples using the static chamber were consistently higher than those from the samples measured in the Lindvall chamber. Lastly, the results of experiments were employed to determine a model function of the relationship between wind speed and odor concentration values. Several researchers wrote that there are no widely accepted chamber-based methods. Also, there is still a need for standardization to ensure full comparability of these methods. The present study compared the existing methods to improve the standardization of area source sampling. The practical usefulness of the results was proving that both examined chambers can be used interchangeably. Statistically similar results were achieved while odor concentration was below 60 ou E /m 3 and wind speed inside the Lindvall chamber was below 0.2 m/sec. Increasing wind speed over these values results in differences between these methods. A model function of relationship between wind speed and odor concentration value was determined.
Hughes, Sarah A; Huang, Rongfu; Mahaffey, Ashley; Chelme-Ayala, Pamela; Klamerth, Nikolaus; Meshref, Mohamed N A; Ibrahim, Mohamed D; Brown, Christine; Peru, Kerry M; Headley, John V; Gamal El-Din, Mohamed
2017-11-01
There are several established methods for the determination of naphthenic acids (NAs) in waters associated with oil sands mining operations. Due to their highly complex nature, measured concentration and composition of NAs vary depending on the method used. This study compared different common sample preparation techniques, analytical instrument methods, and analytical standards to measure NAs in groundwater and process water samples collected from an active oil sands operation. In general, the high- and ultrahigh-resolution methods, namely high performance liquid chromatography time-of-flight mass spectrometry (UPLC-TOF-MS) and Orbitrap mass spectrometry (Orbitrap-MS), were within an order of magnitude of the Fourier transform infrared spectroscopy (FTIR) methods. The gas chromatography mass spectrometry (GC-MS) methods consistently had the highest NA concentrations and greatest standard error. Total NAs concentration was not statistically different between sample preparation of solid phase extraction and liquid-liquid extraction. Calibration standards influenced quantitation results. This work provided a comprehensive understanding of the inherent differences in the various techniques available to measure NAs and hence the potential differences in measured amounts of NAs in samples. Results from this study will contribute to the analytical method standardization for NA analysis in oil sands related water samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hageman, Philip L.; Seal, Robert R.; Diehl, Sharon F.; Piatak, Nadine M.; Lowers, Heather
2015-01-01
A comparison study of selected static leaching and acid–base accounting (ABA) methods using a mineralogically diverse set of 12 modern-style, metal mine waste samples was undertaken to understand the relative performance of the various tests. To complement this study, in-depth mineralogical studies were conducted in order to elucidate the relationships between sample mineralogy, weathering features, and leachate and ABA characteristics. In part one of the study, splits of the samples were leached using six commonly used leaching tests including paste pH, the U.S. Geological Survey (USGS) Field Leach Test (FLT) (both 5-min and 18-h agitation), the U.S. Environmental Protection Agency (USEPA) Method 1312 SPLP (both leachate pH 4.2 and leachate pH 5.0), and the USEPA Method 1311 TCLP (leachate pH 4.9). Leachate geochemical trends were compared in order to assess differences, if any, produced by the various leaching procedures. Results showed that the FLT (5-min agitation) was just as effective as the 18-h leaching tests in revealing the leachate geochemical characteristics of the samples. Leaching results also showed that the TCLP leaching test produces inconsistent results when compared to results produced from the other leaching tests. In part two of the study, the ABA was determined on splits of the samples using both well-established traditional static testing methods and a relatively quick, simplified net acid–base accounting (NABA) procedure. Results showed that the traditional methods, while time consuming, provide the most in-depth data on both the acid generating, and acid neutralizing tendencies of the samples. However, the simplified NABA method provided a relatively fast, effective estimation of the net acid–base account of the samples. Overall, this study showed that while most of the well-established methods are useful and effective, the use of a simplified leaching test and the NABA acid–base accounting method provide investigators fast, quantitative tools that can be used to provide rapid, reliable information about the leachability of metals and other constituents of concern, and the acid-generating potential of metal mining waste.
Herrington, Jason S; Fan, Zhi-Hua Tina; Lioy, Paul J; Zhang, Junfeng Jim
2007-01-15
Airborne aldehyde and ketone (carbonyl) sampling methodologies based on derivatization with 2,4-dinitrophenylhydrazine (DNPH)-coated solid sorbents could unequivocally be considered the "gold" standard. Originally developed in the late 1970s, these methods have been extensively evaluated and developed up to the present day. However, these methods have been inadequately evaluated for the long-term (i.e., 24 h or greater) sampling collection efficiency (CE) of carbonyls other than formaldehyde. The current body of literature fails to demonstrate that DNPH-coated solid sorbent sampling methods have acceptable CEs for the long-term sampling of carbonyls other than formaldehyde. Despite this, such methods are widely used to report the concentrations of multiple carbonyls from long-term sampling, assuming approximately 100% CEs. Laboratory experiments were conducted in this study to evaluate the long-term formaldehyde and acetaldehyde sampling CEs for several commonly used DNPH-coated solid sorbents. Results from sampling known concentrations of formaldehyde and acetaldehyde generated in a dynamic atmosphere generation system demonstrate that the 24-hour formaldehyde sampling CEs ranged from 83 to 133%, confirming the findings made in previous studies. However, the 24-hour acetaldehyde sampling CEs ranged from 1 to 62%. Attempts to increase the acetaldehyde CEs by adding acid to the samples post sampling were unsuccessful. These results indicate that assuming approximately 100% CEs for 24-hour acetaldehyde sampling, as commonly done with DNPH-coated solid sorbent methods, would substantially under estimate acetaldehyde concentrations.
Chen, Xiaowen; Zhao, Luqian; Qin, Hongran; Zhao, Meijia; Zhou, Yirui; Yang, Shuqiang; Su, Xu; Xu, Xiaohua
2014-05-01
The aim of this work was to develop a method to provide rapid results for humans with internal radioactive contamination. The authors hypothesized that valuable information could be obtained from gas proportional counter techniques by screening urine samples from potentially exposed individuals rapidly. Recommended gross alpha and beta activity screening methods generally employ gas proportional counting techniques. Based on International Standards Organization (ISO) methods, improvements were made in the evaporation process to develop a method to provide rapid results, adequate sensitivity, and minimum sample preparation and operator intervention for humans with internal radioactive contamination. The method described by an American National Standards Institute publication was used to calibrate the gas proportional counter, and urine samples from patients with or without radionuclide treatment were measured to validate the method. By improving the evaporation process, the time required to perform the assay was reduced dramatically. Compared with the reference data, the results of the validation samples were very satisfactory with respect to gross-alpha and gross-beta activities. The gas flow proportional counting method described here has the potential for radioactivity monitoring in the body. This method was easy, efficient, and fast, and its application is of great utility in determining whether a sample should be analyzed by a more complicated method, for example radiochemical and/or γ-spectroscopy. In the future, it may be used commonly in medical examination and nuclear emergency treatment.Health Phys. 106(5):000-000; 2014.
Pilliod, David S.; Goldberg, Caren S.; Arkle, Robert S.; Waits, Lisette P.
2013-01-01
Environmental DNA (eDNA) methods for detecting aquatic species are advancing rapidly, but with little evaluation of field protocols or precision of resulting estimates. We compared sampling results from traditional field methods with eDNA methods for two amphibians in 13 streams in central Idaho, USA. We also evaluated three water collection protocols and the influence of sampling location, time of day, and distance from animals on eDNA concentration in the water. We found no difference in detection or amount of eDNA among water collection protocols. eDNA methods had slightly higher detection rates than traditional field methods, particularly when species occurred at low densities. eDNA concentration was positively related to field-measured density, biomass, and proportion of transects occupied. Precision of eDNA-based abundance estimates increased with the amount of eDNA in the water and the number of replicate subsamples collected. eDNA concentration did not vary significantly with sample location in the stream, time of day, or distance downstream from animals. Our results further advance the implementation of eDNA methods for monitoring aquatic vertebrates in stream habitats.
NASA Astrophysics Data System (ADS)
Kowalska, Małgorzata; Janas, Sławomir; Woźniak, Magdalena
2018-04-01
The aim of this work was the presentation of an alternative method of determination of the total dry mass content in processed cheese. The authors claim that the presented method can be used in industry's quality control laboratories for routine testing and for quick in-process control. For the test purposes both reference method of determination of dry mass in processed cheese and moisture analyzer method were used. The tests were carried out for three different kinds of processed cheese. In accordance with the reference method, the sample was placed on a layer of silica sand and dried at the temperature of 102 °C for about 4 h. The moisture analyzer test required method validation, with regard to drying temperature range and mass of the analyzed sample. Optimum drying temperature of 110 °C was determined experimentally. For Hochland cream processed cheese sample, the total dry mass content, obtained using the reference method, was 38.92%, whereas using the moisture analyzer method, it was 38.74%. An average analysis time in case of the moisture analyzer method was 9 min. For the sample of processed cheese with tomatoes, the reference method result was 40.37%, and the alternative method result was 40.67%. For the sample of cream processed cheese with garlic the reference method gave value of 36.88%, and the alternative method, of 37.02%. An average time of those determinations was 16 min. Obtained results confirmed that use of moisture analyzer is effective. Compliant values of dry mass content were obtained for both of the used methods. According to the authors, the fact that the measurement took incomparably less time for moisture analyzer method, is a key criterion of in-process control and final quality control method selection.
NASA Astrophysics Data System (ADS)
Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan
2018-03-01
T-Method is one of the techniques governed under Mahalanobis Taguchi System that developed specifically for multivariate data predictions. Prediction using T-Method is always possible even with very limited sample size. The user of T-Method required to clearly understanding the population data trend since this method is not considering the effect of outliers within it. Outliers may cause apparent non-normality and the entire classical methods breakdown. There exist robust parameter estimate that provide satisfactory results when the data contain outliers, as well as when the data are free of them. The robust parameter estimates of location and scale measure called Shamos Bickel (SB) and Hodges Lehman (HL) which are used as a comparable method to calculate the mean and standard deviation of classical statistic is part of it. Embedding these into T-Method normalize stage feasibly help in enhancing the accuracy of the T-Method as well as analysing the robustness of T-method itself. However, the result of higher sample size case study shows that T-method is having lowest average error percentages (3.09%) on data with extreme outliers. HL and SB is having lowest error percentages (4.67%) for data without extreme outliers with minimum error differences compared to T-Method. The error percentages prediction trend is vice versa for lower sample size case study. The result shows that with minimum sample size, which outliers always be at low risk, T-Method is much better on that, while higher sample size with extreme outliers, T-Method as well show better prediction compared to others. For the case studies conducted in this research, it shows that normalization of T-Method is showing satisfactory results and it is not feasible to adapt HL and SB or normal mean and standard deviation into it since it’s only provide minimum effect of percentages errors. Normalization using T-method is still considered having lower risk towards outlier’s effect.
Recovery and Determination of Adsorbed Technetium on Savannah River Site Charcoal Stack Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lahoda, Kristy G.; Engelmann, Mark D.; Farmer, Orville T.
2008-03-01
Experimental results are provided for the sample analyses for technetium (Tc) in charcoal samples placed in-line with a Savannah River Site (SRS) processing stack effluent stream as a part of an environmental surveillance program. The method for Tc removal from charcoal was based on that originally developed with high purity charcoal. Presented is the process that allowed for the quantitative analysis of 99Tc in SRS charcoal stack samples with and without 97Tc as a tracer. The results obtained with the method using the 97Tc tracer quantitatively confirm the results obtained with no tracer added. All samples contain 99Tc at themore » pg g-1 level.« less
Fan, Xinghua; Kubwabo, Cariton; Rasmussen, Pat E; Wu, Fang
2014-09-01
An analytical method for the simultaneous determination of 13 organophosphate esters (OPEs) in house dust was developed. The method is based on solvent extraction by sonication, sample cleanup by solid phase extraction (SPE), and analysis by gas chromatography-positive chemical ionization-tandem mass spectrometry (GC/PCI-MS/MS). Method detection limits (MDLs) ranged from 0.03 to 0.43 μg/g and recoveries from 60% to 118%. The inter- and intra-day variations ranged from 3% to 23%. The method was applied to dust samples collected using two vacuum sampling techniques from 134 urban Canadian homes: a sample of fresh or "active" dust (FD) collected by technicians and a composite sample taken from the household vacuum cleaner (HD). Results show that the two sampling methods (i.e., FD vs HD) provided comparable results. Tributoxyethyl phosphate (TBEP), triphenyl phosphate (TPhP), tris(chloropropyl) phosphate (TCPP), tri(2-chloroethyl) phosphate (TCEP), tris(dichloro-isopropyl) phosphate (TDCPP), tricresyl phosphate (TCrP), and tri-n-butyl phosphate (TnBP) were detected in the majority of samples. The most predominant OPE was TBEP, with median concentrations of 31.9 μg/g and 22.8 μg/g in FD and HD samples, respectively, 1 to 2 orders of magnitude higher than other OPEs. The method was also applied to the analysis of OPEs in the National Institute of Standards and Technology (NIST) standard reference material (NIST SRM 2585, organic contaminants in house dust). The results from SRM 2585 may contribute to the certification of OPE concentration values in this SRM. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.
Feng, Shu; Gale, Michael J.; Fay, Jonathan D.; Faridi, Ambar; Titus, Hope E.; Garg, Anupam K.; Michaels, Keith V.; Erker, Laura R.; Peters, Dawn; Smith, Travis B.; Pennesi, Mark E.
2015-01-01
Purpose To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Methods Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method), the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method), and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Results Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. Conclusions We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population. PMID:26325414
Intra prediction using face continuity in 360-degree video coding
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; He, Yuwen; Ye, Yan
2017-09-01
This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.
Crepeau, Kathryn L.; Fram, Miranda S.; Bush, Noel
2004-01-01
An analytical method for the determination of the trihalomethane formation potential of water samples has been developed. The trihalomethane formation potential is measured by dosing samples with chlorine under specified conditions of pH, temperature, incubation time, darkness, and residual-free chlorine, and then analyzing the resulting trihalomethanes by purge and trap/gas chromatography equipped with an electron capture detector. Detailed explanations of the method and quality-control practices are provided. Method validation experiments showed that the trihalomethane formation potential varies as a function of time between sample collection and analysis, residual-free chlorine concentration, method of sample dilution, and the concentration of bromide in the sample.
Universal multiplex PCR and CE for quantification of SMN1/SMN2 genes in spinal muscular atrophy.
Wang, Chun-Chi; Chang, Jan-Gowth; Jong, Yuh-Jyh; Wu, Shou-Mei
2009-04-01
We established a universal multiplex PCR and CE to calculate the copy number of survival motor neuron (SMN1 and SMN2) genes for clinical screening of spinal muscular atrophy (SMA). In this study, one universal fluorescent primer was designed and applied for multiplex PCR of SMN1, SMN2 and two internal standards (CYBB and KRIT1). These amplicons were separated by conformation sensitive CE. Mixture of hydroxyethyl cellulose and hydroxypropyl cellulose were used in this CE system. Our method provided the potential to separate two 390-bp PCR products that differ in a single nucleotide. Differentiation and quantification of SMN1 and SMN2 are essential for clinical screening of SMA patients and carriers. The DNA samples included 22 SMA patients, 45 parents of SMA patients (obligatory carriers) and 217 controls. For evaluating accuracy, those 284 samples were blind-analyzed by this method and denaturing high pressure liquid chromatography (DHPLC). Eight of the total samples showed different results. Among them, two samples were diagnosed as having only SMN2 gene by DHPLC, however, they contained both SMN1 and SMN2 by our method. They were further confirmed by DNA sequencing. Our method showed good agreement with the DNA sequencing. The multiplex ligation-dependent probe amplification (MLPA) was used for confirming the other five samples, and showed the same results with our CE method. For only one sample, our CE showed different results with MLPA and DNA sequencing. One out of 284 samples (0.35%) belonged to mismatching. Our method provided a better accurate method and convenient method for clinical genotyping of SMA disease.
de Lima Stebbins, Daniela; Docs, Jon; Lowe, Paula; Cohen, Jason; Lei, Hongxia
2016-05-18
The hormones listed in the screening survey list 2 of the Unregulated Contaminant Monitoring Rule 3 (estrone, 17-β-estradiol, 17-α-ethynylestradiol, 16-α-hydroxyestradiol (estriol), equilin, testosterone and 4-androstene-3,17-dione) were analyzed by liquid chromatography electrospray ionization tandem mass spectrometry (LC-ESI-MS/MS). Two analytical methods were compared: EPA method 539 and the isotope dilution method. EPA method 539 was successfully utilized in river and drinking water matrices with fortified recoveries of 98.9 to 108.5%. Samples from the Hillsborough River reflected levels below the method detection limit (MDL) for the majority of the analytes, except estrone (E1), which was detected at very low concentrations (<0.5 to 1 ng L(-1)) in the majority of samples. No hormones were detected in drinking water samples. The isotope dilution method was used to analyze reclaimed and aquifer storage and recovery (ASR) water samples as a result of strong matrix/solid phase extraction (SPE) losses observed in these more complex matrices. Most of the compounds were not detected or found at relatively low concentrations in the ASR samples. Attenuation of 50 to 99.1% was observed as a result of the ASR recharge/recovery cycles for most of the hormones, except for estriol (E3). Relatively stable concentrations of E3 were found, with only 10% attenuation at one of the sites and no measureable attenuation at another location. These results have substantiated that while EPA method 539 works well for most environmental samples, the isotope dilution method is more robust when dealing with complex matrices such as reclaimed and ASR samples.
Huitema, A. D. R.; Bakker, E. N.; Douma, J. W.; Schimmel, K. J. M.; van Weringh, G.; de Wolf, P. J.; Schellens, J. H. M.; Beijnen, J. H.
2007-01-01
Objective: To develop, validate, and apply a method for the determination of platinum contamination, originating from cisplatinum, oxaliplatinum, and carboplatinum. Methods: Inductively coupled plasma mass spectrometry (ICP-MS) was used to determine platinum in wipe samples. The sampling procedure and the analytical conditions were optimised and the assay was validated. The method was applied to measure surface contamination in seven Dutch hospital pharmacies. Results: The developed method allowed reproducible quantification of 0.50 ng l−1 platinum (5 pg/wipe sample). Recoveries for stainless steel and linoleum surfaces ranged between 50.4 and 81.4% for the different platinum compounds tested. Platinum contamination was reported in 88% of the wipe samples. Although a substantial variation in surface contamination of the pharmacies was noticed, in most pharmacies, the laminar-airflow (LAF) hoods, the floor in front of the LAF hoods, door handles, and handles of service hatches showed positive results. This demonstrates that contamination is spread throughout the preparation rooms. Conclusion: We developed and validated an ultra sensitive and reliable ICP-MS method for the determination of platinum in surface samples. Surface contamination with platinum was observed in all hospital pharmacies sampled. The interpretation of these results is, however, complicated. PMID:17377802
Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li
2010-07-01
The identification of differences in protein expression resulting from methodical variations is an essential component to the interpretation of true, biologically significant results. We used the Lowry and Bradford methods- two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. MATERIAL #ENTITYSTARTX00026; Differential protein expression patterns was assessed by western blot following protein quantification by the Lowry and Bradford methods. We have observed significant variations in protein concentrations following assessment with the Lowry versus Bradford methods, using identical samples. Greater variations in protein concentration readings were observed over time and in samples with higher concentrations, with the Bradford method. Identical samples quantified using both methods yielded significantly different expression patterns on Western blot. We show for the first time that methodical variations observed in these protein assay techniques, can potentially translate into differential protein expression patterns, that can be falsely taken to be biologically significant. Our study therefore highlights the pivotal need to carefully consider methodical approaches to protein quantification in techniques that report quantitative differences.
Kvach, Yuriy; Ondračková, Markéta; Janáč, Michal; Jurajda, Pavel
2016-08-31
In this study, we assessed the impact of sampling method on the results of fish ectoparasite studies. Common roach Rutilus rutilus were sampled from the same gravel pit in the River Dyje flood plain (Czech Republic) using 3 different sampling methods, i.e. electrofishing, beach seining and gill-netting, and were examined for ectoparasites. Not only did fish caught by electrofishing have more of the most abundant parasites (Trichodina spp., Gyrodactylus spp.) than those caught by beach seining or gill-netting, they also had relatively rich parasite infracommunities, resulting in a significantly different assemblage composition, presumably as parasites were lost through handling and 'manipulation' in the net. Based on this, we recommend electrofishing as the most suitable method to sample fish for parasite community studies, as data from fish caught with gill-nets and beach seines will provide a biased picture of the ectoparasite community, underestimating ectoparasite abundance and infracommunity species richness.
Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean
2014-01-01
MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, this method seems to be the most efficient in correcting sampling bias and should be advised in most cases.
Fourcade, Yoan; Engler, Jan O.; Rödder, Dennis; Secondi, Jean
2014-01-01
MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one “virtual” derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, this method seems to be the most efficient in correcting sampling bias and should be advised in most cases. PMID:24818607
Eckner, Karl F.
1998-01-01
A total of 338 water samples, 261 drinking water samples and 77 bathing water samples, obtained for routine testing were analyzed in duplicate by Swedish standard methods using multiple-tube fermentation or membrane filtration and by the Colilert and/or Enterolert methods. Water samples came from a wide variety of sources in southern Sweden (Skåne). The Colilert method was found to be more sensitive than Swedish standard methods for detecting coliform bacteria and of equal sensitivity for detecting Escherichia coli when all drinking water samples were grouped together. Based on these results, Swedac, the Swedish laboratory accreditation body, approved for the first time in Sweden use of the Colilert method at this laboratory for the analysis of all water sources not falling under public water regulations (A-krav). The coliform detection study of bathing water yielded anomalous results due to confirmation difficulties. E. coli detection in bathing water was similar by both the Colilert and Swedish standard methods as was fecal streptococcus and enterococcus detection by both the Enterolert and Swedish standard methods. PMID:9687478
2018-01-01
This work presents the results of an international interlaboratory comparison on ex situ passive sampling in sediments. The main objectives were to map the state of the science in passively sampling sediments, identify sources of variability, provide recommendations and practical guidance for standardized passive sampling, and advance the use of passive sampling in regulatory decision making by increasing confidence in the use of the technique. The study was performed by a consortium of 11 laboratories and included experiments with 14 passive sampling formats on 3 sediments for 25 target chemicals (PAHs and PCBs). The resulting overall interlaboratory variability was large (a factor of ∼10), but standardization of methods halved this variability. The remaining variability was primarily due to factors not related to passive sampling itself, i.e., sediment heterogeneity and analytical chemistry. Excluding the latter source of variability, by performing all analyses in one laboratory, showed that passive sampling results can have a high precision and a very low intermethod variability (
Dalmasso, Marion; Bolocan, Andrei Sorin; Hernandez, Marta; Kapetanakou, Anastasia E; Kuchta, Tomáš; Manios, Stavros G; Melero, Beatriz; Minarovičová, Jana; Muhterem, Meryem; Nicolau, Anca Ioana; Rovira, Jordi; Skandamis, Panagiotis N; Stessl, Beatrix; Wagner, Martin; Jordan, Kieran; Rodríguez-Lázaro, David
2014-03-01
Analysis for Listeria monocytogenes by ISO11290-1 is time-consuming, entailing two enrichment steps and subsequent plating on agar plates, taking five days without isolate confirmation. The aim of this study was to determine if a polymerase chain reaction (PCR) assay could be used for analysis of the first and second enrichment broths, saving four or two days, respectively. In a comprehensive approach involving six European laboratories, PCR and traditional plating of both enrichment broths from the ISO11290-1 method were compared for the detection of L. monocytogenes in 872 food, raw material and processing environment samples from 13 different dairy and meat food chains. After the first and second enrichments, total DNA was extracted from the enriched cultures and analysed for the presence of L. monocytogenes DNA by PCR. DNA extraction by chaotropic solid-phase extraction (spin column-based silica) combined with real-time PCR (RTi-PCR) was required as it was shown that crude DNA extraction applying sonication lysis and boiling followed by traditional gel-based PCR resulted in fewer positive results than plating. The RTi-PCR results were compared to plating, as defined by the ISO11290-1 method. For first and second enrichments, 90% of the samples gave the same results by RTi-PCR and plating, whatever the RTi-PCR method used. For the samples that gave different results, plating was significantly more accurate for detection of positive samples than RTi-PCR from the first enrichment, but RTi-PCR detected a greater number of positive samples than plating from the second enrichment, regardless of the RTi-PCR method used. RTi-PCR was more accurate for non-food contact surface and food contact surface samples than for food and raw material samples especially from the first enrichment, probably because of sample matrix interference. Even though RTi-PCR analysis of the first enrichment showed less positive results than plating, in outbreak scenarios where a rapid result is required, RTi-PCR could be an efficient way to get a preliminary result to be then confirmed by plating. Using DNA extraction from the second enrichment broth followed by RTi-PCR was reliable and a confirmed result could be obtained in three days, as against seven days by ISO11290-1. Copyright © 2014 Elsevier B.V. All rights reserved.
Chuang, Jane C; Emon, Jeanette M Van; Durnford, Joyce; Thomas, Kent
2005-09-15
An enzyme-linked immunosorbent assay (ELISA) method was developed to quantitatively measure 2,4-dichlorophenoxyacetic acid (2,4-D) in human urine. Samples were diluted (1:5) with phosphate-buffered saline containing 0.05% Tween and 0.02% sodium azide, with analysis by a 96-microwell plate immunoassay format. No clean up was required as dilution step minimized sample interferences. Fifty urine samples were received without identifiers from a subset of pesticide applicators and their spouses in an EPA pesticide exposure study (PES) and analyzed by the ELISA method and a conventional gas chromatography/mass spectrometry (GC/MS) procedure. For the GC/MS analysis, urine samples were extracted with acidic dichloromethane (DCM); methylated by diazomethane and fractionated by a Florisil solid phase extraction (SPE) column prior to GC/MS detection. The percent relative standard deviation (%R.S.D.) of the 96-microwell plate triplicate assays ranged from 1.2 to 22% for the urine samples. Day-to-day variation of the assay results was within +/-20%. Quantitative recoveries (>70%) of 2,4-D were obtained for the spiked urine samples by the ELISA method. Quantitative recoveries (>80%) of 2,4-D were also obtained for these samples by the GC/MS procedure. The overall method precision of these samples was within +/-20% for both the ELISA and GC/MS methods. The estimated quantification limit for 2,4-D in urine was 30ng/mL by ELISA and 0.2ng/mL by GC/MS. A higher quantification limit for the ELISA method is partly due to the requirement of a 1:5 dilution to remove the urine sample matrix effect. The GC/MS method can accommodate a 10:1 concentration factor (10mL of urine converted into 1mL organic solvent for analysis) but requires extraction, methylation and clean up on a solid phase column. The immunoassay and GC/MS data were highly correlated, with a correlation coefficient of 0.94 and a slope of 1.00. Favorable results between the two methods were achieved despite the vast differences in sample preparation. Results indicated that the ELISA method could be used as a high throughput, quantitative monitoring tool for human urine samples to identify individuals with exposure to 2,4-D above the typical background levels.
Performance of Traditional and Molecular Methods for Detecting Biological Agents in Drinking Water
Francy, Donna S.; Bushon, Rebecca N.; Brady, Amie M.G.; Bertke, Erin E.; Kephart, Christopher M.; Likirdopulos, Christina A.; Mailot, Brian E.; Schaefer, Frank W.; Lindquist, H.D. Alan
2009-01-01
To reduce the impact from a possible bioterrorist attack on drinking-water supplies, analytical methods are needed to rapidly detect the presence of biological agents in water. To this end, 13 drinking-water samples were collected at 9 water-treatment plants in Ohio to assess the performance of a molecular method in comparison to traditional analytical methods that take longer to perform. Two 100-liter samples were collected at each site during each sampling event; one was seeded in the laboratory with six biological agents - Bacillus anthracis (B. anthracis), Burkholderia cepacia (as a surrogate for Bu. pseudomallei), Francisella tularensis (F. tularensis), Salmonella Typhi (S. Typhi), Vibrio cholerae (V. cholerae), and Cryptospordium parvum (C. parvum). The seeded and unseeded samples were processed by ultrafiltration and analyzed by use of quantiative polymerase chain reaction (qPCR), a molecular method, and culture methods for bacterial agents or the immunomagnetic separation/fluorescent antibody (IMS/FA) method for C. parvum as traditional methods. Six replicate seeded samples were also processed and analyzed. For traditional methods, recoveries were highly variable between samples and even between some replicate samples, ranging from below detection to greater than 100 percent. Recoveries were significantly related to water pH, specific conductance, and dissolved organic carbon (DOC) for all bacteria combined by culture methods, but none of the water-quality characteristics tested were related to recoveries of C. parvum by IMS/FA. Recoveries were not determined by qPCR because of problems in quantifying organisms by qPCR in the composite seed. Instead, qPCR results were reported as detected, not detected (no qPCR signal), or +/- detected (Cycle Threshold or 'Ct' values were greater than 40). Several sample results by qPCR were omitted from the dataset because of possible problems with qPCR reagents, primers, and probes. For the remaining 14 qPCR results (including some replicate samples), F. tularensis and V. cholerae were detected in all samples after ultrafiltration, B. anthracis was detected in 13 and +/- detected in 1 sample, and C. parvum was detected in 9 and +/- detected in 4 samples. Bu. cepacia was detected in nine samples, +/- detected in two samples, and not detected in three samples (for two out of three samples not detected, a different strain was used). The qPCR assay for V. cholerae provided two false positive - but late - signals in one unseeded sample. Numbers found by qPCR after ultrafiltration were significantly or nearly significantly related to those found by traditional methods for B. anthracis, F. tularensis, and V. cholerae but not for Bu. cepacia and C. parvum. A qPCR assay for S. Typhi was not available. The qPCR method can be used to rapidly detect B. anthracis, F. tularensis, and V. cholerae with some certainty in drinking-water samples, but additional work would be needed to optimize and test qPCR for Bu. cepacia and C. parvum and establish relations to traditional methods. The specificity for the V. cholerae assay needs to be further investigated. Evidence is provided that ultrafiltration and qPCR are promising methods to rapidly detect biological agents in the Nation's drinking-water supplies and thus reduce the impact and consequences from intentional bioterrorist events. To our knowledge, this is the first study to compare the use of traditional and qPCR methods to detect biological agents in large-volume drinking-water samples.
Sampling and analysis of hexavalent chromium during exposure to chromic acid mist and welding fumes.
Blomquist, G; Nilsson, C A; Nygren, O
1983-12-01
Sampling and analysis of hexavalent chromium during exposure to chromic acid mist and welding fumes. Scand j work environ & health 9 (1983) 489-495. In view of the serious health effects of hexavalent chromium, the problems involved in its sampling and analysis in workroom air have been the subject of much concern. In this paper, the stability problems arising from the reduction of hexavalent to trivalent chromium during sampling, sample storage, and analysis are discussed. Replacement of sulfuric acid by a sodium acetate buffer (pH 4) as a leaching solution prior to analysis with the diphenylcarbazide (DPC) method is suggested and is demonstrated to be necessary in order to avoid reduction. Field samples were taken from two different industrial processes-manual metal arc welding on stainless steel without shield gas and chromium plating. A comparison was made of the DPC method, acidic dissolution with atomic absorption spectrophotometric (AAS) analysis, and the carbonate method. For chromic acid mist, the DPC method and AAS analysis were shown to give the same results. In the analysis of welding fumes, the modified DPC method gave the same results as the laborious and less sensitive carbonate method.
The report documents the technical approach and results achieved while developing a grab sampling method and an automated, on-line gas chromatography method suitable to characterize nitrous oxide (N2O) emissions from fossil fuel combustion sources. The two methods developed have...
Absolute method of measuring magnetic susceptibility
Thorpe, A.; Senftle, F.E.
1959-01-01
An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample, offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. ?? 1959 The American Institute of Physics.
Nishiuchi, Yukiko; Tamaru, Aki; Suzuki, Yasuhiko; Kitada, Seigo; Maekura, Ryoji; Tateishi, Yoshitaka; Niki, Mamiko; Ogura, Hisashi; Matsumoto, Sohkichi
2014-06-01
We previously demonstrated the colonization of Mycobacterium avium complex in bathrooms by the conventional culture method. In the present study, we aimed to directly detect M. avium organisms in the environment using loop-mediated isothermal amplification (LAMP), and to demonstrate the efficacy of LAMP by comparing the results with those obtained by culture. Our data showed that LAMP analysis has detection limits of 100 fg DNA/reaction for M. avium. Using an FTA(®) elute card, DNA templates were extracted from environmental samples from bathrooms in the residences of 29 patients with pulmonary M. avium disease. Of the 162 environmental samples examined, 143 (88%) showed identical results by both methods; 20 (12%) and 123 (76%) samples were positive and negative, respectively, for M. avium. Of the remaining 19 samples (12%), seven (5%) and 12 (7%) samples were positive by the LAMP and culture methods, respectively. All samples that contained over 20 colony forming units/primary isolation plate, as measured by the culture method, were also positive by the LAMP method. Our data demonstrate that the combination of the FTA elute card and LAMP can facilitate prompt detection of M. avium in the environment.
Efficient method of image edge detection based on FSVM
NASA Astrophysics Data System (ADS)
Cai, Aiping; Xiong, Xiaomei
2013-07-01
For efficient object cover edge detection in digital images, this paper studied traditional methods and algorithm based on SVM. It analyzed Canny edge detection algorithm existed some pseudo-edge and poor anti-noise capability. In order to provide a reliable edge extraction method, propose a new detection algorithm based on FSVM. Which contains several steps: first, trains classify sample and gives the different membership function to different samples. Then, a new training sample is formed by increase the punishment some wrong sub-sample, and use the new FSVM classification model for train and test them. Finally the edges are extracted of the object image by using the model. Experimental result shows that good edge detection image will be obtained and adding noise experiments results show that this method has good anti-noise.
Volatile organic compounds: sampling methods and their worldwide profile in ambient air.
Kumar, Anuj; Víden, Ivan
2007-08-01
The atmosphere is a particularly difficult analytical system because of the very low levels of substances to be analysed, sharp variations in pollutant levels with time and location, differences in wind, temperature and humidity. This makes the selection of an efficient sampling technique for air analysis a key step to reliable results. Generally, methods for volatile organic compounds sampling include collection of the whole air or preconcentration of samples on adsorbents. All the methods vary from each other according to the sampling technique, type of sorbent, method of extraction and identification technique. In this review paper we discuss various important aspects for sampling of volatile organic compounds by the widely used and advanced sampling methods. Characteristics of various adsorbents used for VOCs sampling are also described. Furthermore, this paper makes an effort to comprehensively review the concentration levels of volatile organic compounds along with the methodology used for analysis, in major cities of the world.
Mirante, Clara; Clemente, Isabel; Zambu, Graciette; Alexandre, Catarina; Ganga, Teresa; Mayer, Carlos; Brito, Miguel
2016-09-01
Helminth intestinal parasitoses are responsible for high levels of child mortality and morbidity. Hence, the capacity to diagnose these parasitoses and consequently ensure due treatment represents a factor of great importance. The main objective of this study involves comparing two methods of concentration, parasitrap and Kato-Katz, for the diagnosis of intestinal parasitoses in faecal samples. Sample processing made recourse to two different concentration methods: the commercial parasitrap® method and the Kato-Katz method. We correspondingly collected a total of 610 stool samples from pre-school and school age children. The results demonstrate the incidence of helminth parasites in 32.8% or 32.3% of the sample collected depending on whether the concentration method applied was either the parasitrap method or the Kato-Katz method. We detected a relatively high percentage of samples testing positive for two or more species of helminth parasites. We would highlight that in searching for larvae the Kato-Katz method does not prove as appropriate as the parasitrap method. Both techniques prove easily applicable even in field working conditions and returning mutually agreeing results. This study concludes in favour of the need for deworming programs and greater public awareness among the rural populations of Angola.
Ruple-Czerniak, A; Bolte, D S; Burgess, B A; Morley, P S
2014-07-01
Nosocomial salmonellosis is an important problem in veterinary hospitals that treat horses and other large animals. Detection and mitigation of outbreaks and prevention of healthcare-associated infections often require detection of Salmonella enterica in the hospital environment. To compare 2 previously published methods for detecting environmental contamination with S. enterica in a large animal veterinary teaching hospital. Hospital-based comparison of environmental sampling techniques. A total of 100 pairs of environmental samples were collected from stalls used to house large animal cases (horses, cows or New World camelids) that were confirmed to be shedding S. enterica by faecal culture. Stalls were cleaned and disinfected prior to sampling, and the same areas within each stall were sampled for the paired samples. One method of detection used sterile, premoistened sponges that were cultured using thioglycolate enrichment before plating on XLT-4 agar. The other method used electrostatic wipes that were cultured using buffered peptone water, tetrathionate and Rappaport-Vassiliadis R10 broths before plating on XLT-4 agar. Salmonella enterica was recovered from 14% of samples processed using the electrostatic wipe sampling and culture procedure, whereas S. enterica was recovered from only 4% of samples processed using the sponge sampling and culture procedure. There was test agreement for 85 pairs of culture-negative samples and 3 pairs of culture-positive samples. However, the remaining 12 pairs of samples with discordant results created significant disagreement between the 2 detection methods (P<0.01). Persistence of Salmonella in the environment of veterinary hospitals can occur even with rigorous cleaning and disinfection. Use of sensitive methods for detection of environmental contamination is critical when detecting and mitigating this problem in veterinary hospitals. These results suggest that the electrostatic wipe sampling and culture method was more sensitive than the sponge sampling and culture method. © 2013 EVJ Ltd.
Filter forensics: microbiota recovery from residential HVAC filters.
Maestre, Juan P; Jennings, Wiley; Wylie, Dennis; Horner, Sharon D; Siegel, Jeffrey; Kinney, Kerry A
2018-01-30
Establishing reliable methods for assessing the microbiome within the built environment is critical for understanding the impact of biological exposures on human health. High-throughput DNA sequencing of dust samples provides valuable insights into the microbiome present in human-occupied spaces. However, the effect that different sampling methods have on the microbial community recovered from dust samples is not well understood across sample types. Heating, ventilation, and air conditioning (HVAC) filters hold promise as long-term, spatially integrated, high volume samplers to characterize the airborne microbiome in homes and other climate-controlled spaces. In this study, the effect that dust recovery method (i.e., cut and elution, swabbing, or vacuuming) has on the microbial community structure, membership, and repeatability inferred by Illumina sequencing was evaluated. The results indicate that vacuum samples captured higher quantities of total, bacterial, and fungal DNA than swab or cut samples. Repeated swab and vacuum samples collected from the same filter were less variable than cut samples with respect to both quantitative DNA recovery and bacterial community structure. Vacuum samples captured substantially greater bacterial diversity than the other methods, whereas fungal diversity was similar across all three methods. Vacuum and swab samples of HVAC filter dust were repeatable and generally superior to cut samples. Nevertheless, the contribution of environmental and human sources to the bacterial and fungal communities recovered via each sampling method was generally consistent across the methods investigated. Dust recovery methodologies have been shown to affect the recovery, repeatability, structure, and membership of microbial communities recovered from dust samples in the built environment. The results of this study are directly applicable to indoor microbiota studies utilizing the filter forensics approach. More broadly, this study provides a better understanding of the microbial community variability attributable to sampling methodology and helps inform interpretation of data collected from other types of dust samples collected from indoor environments.
A comparison of moment-based methods of estimation for the log Pearson type 3 distribution
NASA Astrophysics Data System (ADS)
Koutrouvelis, I. A.; Canavos, G. C.
2000-06-01
The log Pearson type 3 distribution is a very important model in statistical hydrology, especially for modeling annual flood series. In this paper we compare the various methods based on moments for estimating quantiles of this distribution. Besides the methods of direct and mixed moments which were found most successful in previous studies and the well-known indirect method of moments, we develop generalized direct moments and generalized mixed moments methods and a new method of adaptive mixed moments. The last method chooses the orders of two moments for the original observations by utilizing information contained in the sample itself. The results of Monte Carlo experiments demonstrated the superiority of this method in estimating flood events of high return periods when a large sample is available and in estimating flood events of low return periods regardless of the sample size. In addition, a comparison of simulation and asymptotic results shows that the adaptive method may be used for the construction of meaningful confidence intervals for design events based on the asymptotic theory even with small samples. The simulation results also point to the specific members of the class of generalized moments estimates which maintain small values for bias and/or mean square error.
Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Burrell, Stephen; Luginbühl, Werner; Vanninen, Paula
2015-01-01
Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods of mussel sample were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within the z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD). PMID:26610567
Özkaya, H; Özkaya, B; Duman, B; Turksoy, S
2017-07-19
Fermentation and hydrothermal methods were tested to reduce the phytic acid (PA) content of oat bran, and the effects of these methods on the dietary fiber (DF) and total phenolic (TP) contents as well as the antioxidant activity (AA) were also investigated. Fermentation with 6% yeast and for 6 h resulted in 88.2% reduction in PA content, while it only resulted in 32.5% reduction in the sample incubated for 6 h without yeast addition. The PA loss in autoclaved oat bran sample (1.5 h, pH 4.0) was 95.2% while it was 41.8% at most in the sample autoclaved without pH adjustment. In both methods, soluble, insoluble, and total DF contents of samples were remarkably higher than the control samples. Also for TP in the oat bran samples, both processes led to 17% and 39% increases, respectively, while AA values were 8% and 15%, respectively. Among all samples, the autoclaving process resulted in the lowest PA and the greatest amount of bioactive compounds.
Development of Gold Standard Ion-Selective Electrode-Based Methods for Fluoride Analysis
Martínez-Mier, E.A.; Cury, J.A.; Heilman, J.R.; Katz, B.P.; Levy, S.M.; Li, Y.; Maguire, A.; Margineda, J.; O’Mullane, D.; Phantumvanit, P.; Soto-Rojas, A.E.; Stookey, G.K.; Villa, A.; Wefel, J.S.; Whelton, H.; Whitford, G.M.; Zero, D.T.; Zhang, W.; Zohouri, V.
2011-01-01
Background/Aims: Currently available techniques for fluoride analysis are not standardized. Therefore, this study was designed to develop standardized methods for analyzing fluoride in biological and nonbiological samples used for dental research. Methods A group of nine laboratories analyzed a set of standardized samples for fluoride concentration using their own methods. The group then reviewed existing analytical techniques for fluoride analysis, identified inconsistencies in the use of these techniques and conducted testing to resolve differences. Based on the results of the testing undertaken to define the best approaches for the analysis, the group developed recommendations for direct and microdiffusion methods using the fluoride ion-selective electrode. Results Initial results demonstrated that there was no consensus regarding the choice of analytical techniques for different types of samples. Although for several types of samples, the results of the fluoride analyses were similar among some laboratories, greater differences were observed for saliva, food and beverage samples. In spite of these initial differences, precise and true values of fluoride concentration, as well as smaller differences between laboratories, were obtained once the standardized methodologies were used. Intraclass correlation coefficients ranged from 0.90 to 0.93, for the analysis of a certified reference material, using the standardized methodologies. Conclusion The results of this study demonstrate that the development and use of standardized protocols for F analysis significantly decreased differences among laboratories and resulted in more precise and true values. PMID:21160184
Boiano, J M; Wallace, M E; Sieber, W K; Groff, J H; Wang, J; Ashley, K
2000-08-01
A field study was conducted with the goal of comparing the performance of three recently developed or modified sampling and analytical methods for the determination of airborne hexavalent chromium (Cr(VI)). The study was carried out in a hard chrome electroplating facility and in a jet engine manufacturing facility where airborne Cr(VI) was expected to be present. The analytical methods evaluated included two laboratory-based procedures (OSHA Method ID-215 and NIOSH Method 7605) and a field-portable method (NIOSH Method 7703). These three methods employ an identical sampling methodology: collection of Cr(VI)-containing aerosol on a polyvinyl chloride (PVC) filter housed in a sampling cassette, which is connected to a personal sampling pump calibrated at an appropriate flow rate. The basis of the analytical methods for all three methods involves extraction of the PVC filter in alkaline buffer solution, chemical isolation of the Cr(VI) ion, complexation of the Cr(VI) ion with 1,5-diphenylcarbazide, and spectrometric measurement of the violet chromium diphenylcarbazone complex at 540 nm. However, there are notable specific differences within the sample preparation procedures used in three methods. To assess the comparability of the three measurement protocols, a total of 20 side-by-side air samples were collected, equally divided between a chromic acid electroplating operation and a spray paint operation where water soluble forms of Cr(VI) were used. A range of Cr(VI) concentrations from 0.6 to 960 microg m(-3), with Cr(VI) mass loadings ranging from 0.4 to 32 microg, was measured at the two operations. The equivalence of the means of the log-transformed Cr(VI) concentrations obtained from the different analytical methods was compared. Based on analysis of variance (ANOVA) results, no statistically significant differences were observed between mean values measured using each of the three methods. Small but statistically significant differences were observed between results obtained from performance evaluation samples for the NIOSH field method and the OSHA laboratory method.
Detecting the sampling rate through observations
NASA Astrophysics Data System (ADS)
Shoji, Isao
2018-09-01
This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rates from data and their results show a good performance in the detection. In addition, the method is applied to a financial time series sampled on daily basis and shows the detected sampling rate is different from the conventional rates.
Validated method for quantification of genetically modified organisms in samples of maize flour.
Kunert, Renate; Gach, Johannes S; Vorauer-Uhl, Karola; Engel, Edwin; Katinger, Hermann
2006-02-08
Sensitive and accurate testing for trace amounts of biotechnology-derived DNA from plant material is the prerequisite for detection of 1% or 0.5% genetically modified ingredients in food products or raw materials thereof. Compared to ELISA detection of expressed proteins, real-time PCR (RT-PCR) amplification has easier sample preparation and detection limits are lower. Of the different methods of DNA preparation CTAB method with high flexibility in starting material and generation of sufficient DNA with relevant quality was chosen. Previous RT-PCR data generated with the SYBR green detection method showed that the method is highly sensitive to sample matrices and genomic DNA content influencing the interpretation of results. Therefore, this paper describes a real-time DNA quantification based on the TaqMan probe method, indicating high accuracy and sensitivity with detection limits of lower than 18 copies per sample applicable and comparable to highly purified plasmid standards as well as complex matrices of genomic DNA samples. The results were evaluated with ValiData for homology of variance, linearity, accuracy of the standard curve, and standard deviation.
Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, İrem Ersöz
2013-01-01
Objective: The aim of study is to introduce method of Soft Independent Modeling of Class Analogy (SIMCA), and to express whether the method is affected from the number of independent variables, the relationship between variables and sample size. Study Design: Simulation study. Material and Methods: SIMCA model is performed in two stages. In order to determine whether the method is influenced by the number of independent variables, the relationship between variables and sample size, simulations were done. Conditions in which sample sizes in both groups are equal, and where there are 30, 100 and 1000 samples; where the number of variables is 2, 3, 5, 10, 50 and 100; moreover where the relationship between variables are quite high, in medium level and quite low were mentioned. Results: Average classification accuracy of simulation results which were carried out 1000 times for each possible condition of trial plan were given as tables. Conclusion: It is seen that diagnostic accuracy results increase as the number of independent variables increase. SIMCA method is a method in which the relationship between variables are quite high, the number of independent variables are many in number and where there are outlier values in the data that can be used in conditions having outlier values. PMID:25207065
Farer, Leslie J; Hayes, John M
2005-01-01
A new method has been developed for the determination of emamectin benzoate in fish feed. The method uses a wet extraction, cleanup by solid-phase extraction, and quantitation and separation by liquid chromatography (LC). In this paper, we compare the performance of this method with that of a previously reported LC assay for the determination of emamectin benzoate in fish feed. Although similar to the previous method, the new procedure uses a different sample pretreatment, wet extraction, and quantitation method. The performance of the new method was compared with that of the previously reported method by analyses of 22 medicated feed samples from various commercial sources. A comparison of the results presented here reveals slightly lower assay values obtained with the new method. Although a paired sample t-test indicates the difference in results is significant, this difference is within the method precision of either procedure.
Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin
2017-08-15
Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method was criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against a 18 O-labeled reference sample that was created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility in protein identification results across samples. In present study, a method combining 18 O-reference strategy and a quantitation and identification-decoupled strategy was investigated with proportion-known protein mixtures. The results obviously demonstrated that the 18 O-reference strategy had greater accuracy and reliability than other previously used comparison methods based on transferring comparison or label-free strategies. By the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differential expressed proteins, according to retention time and accurate mass. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave a good reproducibility in protein identification across multiple samples, and allowed for optimizing peptide identification separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.
Rosenblum, Michael A; Laan, Mark J van der
2009-01-07
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steger, J.L.; Bursey, J.T.; Merrill, R.G.
1999-03-01
This report presents the results of laboratory studies to develop and evaluate a method for the sampling and analysis of phosgene from stationary sources of air emissions using diethylamine (DEA) in toluene as the collection media. The method extracts stack gas from emission sources and stabilizes the reactive gas for subsequent analysis. DEA was evaluated both in a benchtop study and in a laboratory train spiking study. This report includes results for both the benchtop study and the train spiking study. Benchtop studies to evaluate the suitability of DEA for collecting and analyzing phosgene investigated five variables: storage time, DEAmore » concentration, moisture/pH, phosgene concentration, and sample storage temperature. Prototype sampling train studies were performed to determine if the benchtop chemical studies were transferable to a Modified Method 5 sampling train collecting phosgene in the presence of clean air mixed with typical stack gas components. Four conditions, which varied the moisture and phosgene spike were evaluated in triplicate. In addition to research results, the report includes a detailed draft method for sampling and analysis of phosgene from stationary source emissions.« less
Harwood, Valerie J; Boehm, Alexandria B; Sassoubre, Lauren M; Vijayavel, Kannappan; Stewart, Jill R; Fong, Theng-Theng; Caprais, Marie-Paule; Converse, Reagan R; Diston, David; Ebdon, James; Fuhrman, Jed A; Gourmelon, Michele; Gentry-Shields, Jennifer; Griffith, John F; Kashian, Donna R; Noble, Rachel T; Taylor, Huw; Wicki, Melanie
2013-11-15
An inter-laboratory study of the accuracy of microbial source tracking (MST) methods was conducted using challenge fecal and sewage samples that were spiked into artificial freshwater and provided as unknowns (blind test samples) to the laboratories. The results of the Source Identification Protocol Project (SIPP) are presented in a series of papers that cover 41 MST methods. This contribution details the results of the virus and bacteriophage methods targeting human fecal or sewage contamination. Human viruses used as source identifiers included adenoviruses (HAdV), enteroviruses (EV), norovirus Groups I and II (NoVI and NoVII), and polyomaviruses (HPyVs). Bacteriophages were also employed, including somatic coliphages and F-specific RNA bacteriophages (FRNAPH) as general indicators of fecal contamination. Bacteriophage methods targeting human fecal sources included genotyping of FRNAPH isolates and plaque formation on bacterial hosts Enterococcus faecium MB-55, Bacteroides HB-73 and Bacteroides GB-124. The use of small sample volumes (≤50 ml) resulted in relatively insensitive theoretical limits of detection (10-50 gene copies or plaques × 50 ml(-1)) which, coupled with low virus concentrations in samples, resulted in high false-negative rates, low sensitivity, and low negative predictive values. On the other hand, the specificity of the human virus methods was generally close to 100% and positive predictive values were ∼40-70% with the exception of NoVs, which were not detected. The bacteriophage methods were generally much less specific toward human sewage than virus methods, although FRNAPH II genotyping was relatively successful, with 18% sensitivity and 85% specificity. While the specificity of the human virus methods engenders great confidence in a positive result, better concentration methods and larger sample volumes must be utilized for greater accuracy of negative results, i.e. the prediction that a human contamination source is absent. Copyright © 2013 Elsevier Ltd. All rights reserved.
Yong, Dongeun; Ki, Chang-Seok; Kim, Jae-Seok; Seong, Moon-Woo; Lee, Hyukmin
2016-01-01
Background Real-time reverse transcription PCR (rRT-PCR) of sputum samples is commonly used to diagnose Middle East respiratory syndrome coronavirus (MERS-CoV) infection. Owing to the difficulty of extracting RNA from sputum containing mucus, sputum homogenization is desirable prior to nucleic acid isolation. We determined optimal homogenization methods for isolating viral nucleic acids from sputum. Methods We evaluated the following three sputum-homogenization methods: proteinase K and DNase I (PK-DNase) treatment, phosphate-buffered saline (PBS) treatment, and N-acetyl-L-cysteine and sodium citrate (NALC) treatment. Sputum samples were spiked with inactivated MERS-CoV culture isolates. RNA was extracted from pretreated, spiked samples using the easyMAG system (bioMérieux, France). Extracted RNAs were then subjected to rRT-PCR for MERS-CoV diagnosis (DiaPlex Q MERS-coronavirus, SolGent, Korea). Results While analyzing 15 spiked sputum samples prepared in technical duplicate, false-negative results were obtained with five (16.7%) and four samples (13.3%), respectively, by using the PBS and NALC methods. The range of threshold cycle (Ct) values observed when detecting upE in sputum samples was 31.1–35.4 with the PK-DNase method, 34.7–39.0 with the PBS method, and 33.9–38.6 with the NALC method. Compared with the control, which were prepared by adding a one-tenth volume of 1:1,000 diluted viral culture to PBS solution, the ranges of Ct values obtained by the PBS and NALC methods differed significantly from the mean control Ct of 33.2 (both P<0.0001). Conclusions The PK-DNase method is suitable for homogenizing sputum samples prior to RNA extraction. PMID:27374711
Commutability of food microbiology proficiency testing samples.
Abdelmassih, M; Polet, M; Goffaux, M-J; Planchon, V; Dierick, K; Mahillon, J
2014-03-01
Food microbiology proficiency testing (PT) is a useful tool to assess the analytical performances among laboratories. PT items should be close to routine samples to accurately evaluate the acceptability of the methods. However, most PT providers distribute exclusively artificial samples such as reference materials or irradiated foods. This raises the issue of the suitability of these samples because the equivalence-or 'commutability'-between results obtained on artificial vs. authentic food samples has not been demonstrated. In the clinical field, the use of noncommutable PT samples has led to erroneous evaluation of the performances when different analytical methods were used. This study aimed to provide a first assessment of the commutability of samples distributed in food microbiology PT. REQUASUD and IPH organized 13 food microbiology PTs including 10-28 participants. Three types of PT items were used: genuine food samples, sterile food samples and reference materials. The commutability of the artificial samples (reference material or sterile samples) was assessed by plotting the distribution of the results on natural and artificial PT samples. This comparison highlighted matrix-correlated issues when nonfood matrices, such as reference materials, were used. Artificially inoculated food samples, on the other hand, raised only isolated commutability issues. In the organization of a PT-scheme, authentic or artificially inoculated food samples are necessary to accurately evaluate the analytical performances. Reference materials, used as PT items because of their convenience, may present commutability issues leading to inaccurate penalizing conclusions for methods that would have provided accurate results on food samples. For the first time, the commutability of food microbiology PT samples was investigated. The nature of the samples provided by the organizer turned out to be an important factor because matrix effects can impact on the analytical results. © 2013 The Society for Applied Microbiology.
Do placebo based validation standards mimic real batch products behaviour? Case studies.
Bouabidi, A; Talbi, M; Bouklouze, A; El Karbane, M; Bourichi, H; El Guezzar, M; Ziemons, E; Hubert, Ph; Rozet, E
2011-06-01
Analytical methods validation is a mandatory step to evaluate the ability of developed methods to provide accurate results for their routine application. Validation usually involves validation standards or quality control samples that are prepared in placebo or reconstituted matrix made of a mixture of all the ingredients composing the drug product except the active substance or the analyte under investigation. However, one of the main concerns that can be made with this approach is that it may lack an important source of variability that come from the manufacturing process. The question that remains at the end of the validation step is about the transferability of the quantitative performance from validation standards to real authentic drug product samples. In this work, this topic is investigated through three case studies. Three analytical methods were validated using the commonly spiked placebo validation standards at several concentration levels as well as using samples coming from authentic batch samples (tablets and syrups). The results showed that, depending on the type of response function used as calibration curve, there were various degrees of differences in the results accuracy obtained with the two types of samples. Nonetheless the use of spiked placebo validation standards was showed to mimic relatively well the quantitative behaviour of the analytical methods with authentic batch samples. Adding these authentic batch samples into the validation design may help the analyst to select and confirm the most fit for purpose calibration curve and thus increase the accuracy and reliability of the results generated by the method in routine application. Copyright © 2011 Elsevier B.V. All rights reserved.
Comparison of DNA extraction methods for human gut microbial community profiling.
Lim, Mi Young; Song, Eun-Ji; Kim, Sang Ho; Lee, Jangwon; Nam, Young-Do
2018-03-01
The human gut harbors a vast range of microbes that have significant impact on health and disease. Therefore, gut microbiome profiling holds promise for use in early diagnosis and precision medicine development. Accurate profiling of the highly complex gut microbiome requires DNA extraction methods that provide sufficient coverage of the original community as well as adequate quality and quantity. We tested nine different DNA extraction methods using three commercial kits (TianLong Stool DNA/RNA Extraction Kit (TS), QIAamp DNA Stool Mini Kit (QS), and QIAamp PowerFecal DNA Kit (QP)) with or without additional bead-beating step using manual or automated methods and compared them in terms of DNA extraction ability from human fecal sample. All methods produced DNA in sufficient concentration and quality for use in sequencing, and the samples were clustered according to the DNA extraction method. Inclusion of bead-beating step especially resulted in higher degrees of microbial diversity and had the greatest effect on gut microbiome composition. Among the samples subjected to bead-beating method, TS kit samples were more similar to QP kit samples than QS kit samples. Our results emphasize the importance of mechanical disruption step for a more comprehensive profiling of the human gut microbiome. Copyright © 2017 The Authors. Published by Elsevier GmbH.. All rights reserved.
Cox, Jennie; Indugula, Reshmi; Vesper, Stephen; Zhu, Zheng; Jandarov, Roman; Reponen, Tiina
2017-10-18
Evaluating fungal contamination indoors is complicated because of the many different sampling methods utilized. In this study, fungal contamination was evaluated using five sampling methods and four matrices for results. The five sampling methods were a 48 hour indoor air sample collected with a Button™ inhalable aerosol sampler and four types of dust samples: a vacuumed floor dust sample, newly settled dust collected for four weeks onto two types of electrostatic dust cloths (EDCs) in trays, and a wipe sample of dust from above floor surfaces. The samples were obtained in the bedrooms of asthmatic children (n = 14). Quantitative polymerase chain reaction (qPCR) was used to analyze the dust and air samples for the 36 fungal species that make up the Environmental Relative Moldiness Index (ERMI). The results from the samples were compared by four matrices: total concentration of fungal cells, concentration of fungal species associated with indoor environments, concentration of fungal species associated with outdoor environments, and ERMI values (or ERMI-like values for air samples). The ERMI values for the dust samples and the ERMI-like values for the 48 hour air samples were not significantly different. The total cell concentrations of the 36 species obtained with the four dust collection methods correlated significantly (r = 0.64-0.79, p < 0.05), with the exception of the vacuumed floor dust and newly settled dust. In addition, fungal cell concentrations of indoor associated species correlated well between all four dust sampling methods (r = 0.68-0.86, p < 0.01). No correlation was found between the fungal concentrations in the air and dust samples primarily because of differences in concentrations of Cladosporium cladosporioides Type 1 and Epicoccum nigrum. A representative type of dust sample and a 48 hour air sample might both provide useful information about fungal exposures.
Kowalczyk, Marek; Sekuła, Andrzej; Mleczko, Piotr; Olszowy, Zofia; Kujawa, Anna; Zubek, Szymon; Kupiec, Tomasz
2015-01-01
Aim To assess the usefulness of a DNA-based method for identifying mushroom species for application in forensic laboratory practice. Methods Two hundred twenty-one samples of clinical forensic material (dried mushrooms, food remains, stomach contents, feces, etc) were analyzed. ITS2 region of nuclear ribosomal DNA (nrDNA) was sequenced and the sequences were compared with reference sequences collected from the National Center for Biotechnology Information gene bank (GenBank). Sporological identification of mushrooms was also performed for 57 samples of clinical material. Results Of 221 samples, positive sequencing results were obtained for 152 (69%). The highest percentage of positive results was obtained for samples of dried mushrooms (96%) and food remains (91%). Comparison with GenBank sequences enabled identification of all samples at least at the genus level. Most samples (90%) were identified at the level of species or a group of closely related species. Sporological and molecular identification were consistent at the level of species or genus for 30% of analyzed samples. Conclusion Molecular analysis identified a larger number of species than sporological method. It proved to be suitable for analysis of evidential material (dried hallucinogenic mushrooms) in forensic genetic laboratories as well as to complement classical methods in the analysis of clinical material. PMID:25727040
Neutron activation analysis of certified samples by the absolute method
NASA Astrophysics Data System (ADS)
Kadem, F.; Belouadah, N.; Idiri, Z.
2015-07-01
The nuclear reactions analysis technique is mainly based on the relative method or the use of activation cross sections. In order to validate nuclear data for the calculated cross section evaluated from systematic studies, we used the neutron activation analysis technique (NAA) to determine the various constituent concentrations of certified samples for animal blood, milk and hay. In this analysis, the absolute method is used. The neutron activation technique involves irradiating the sample and subsequently performing a measurement of the activity of the sample. The fundamental equation of the activation connects several physical parameters including the cross section that is essential for the quantitative determination of the different elements composing the sample without resorting to the use of standard sample. Called the absolute method, it allows a measurement as accurate as the relative method. The results obtained by the absolute method showed that the values are as precise as the relative method requiring the use of standard sample for each element to be quantified.
THE NEW YORK CITY URBAN DISPERSION PROGRAM MARCH 2005 FIELD STUDY: TRACER METHODS AND RESULTS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
WATSON, T.B.; HEISER, J.; KALB, P.
The Urban Dispersion Program March 2005 Field Study tracer releases, sampling, and analytical methods are described in detail. There were two days where tracer releases and sampling were conducted. A total of 16.0 g of six tracers were released during the first test day or Intensive Observation Period (IOP) 1 and 15.7 g during IOP 2. Three types of sampling instruments were used in this study. Sequential air samplers, or SAS, collected six-minute samples, while Brookhaven atmospheric tracer samplers (BATS) and personal air samplers (PAS) collected thirty-minute samples. There were a total of 1300 samples resulting from the two IOPs.more » Confidence limits in the sampling and analysis method were 20% as determined from 100 duplicate samples. The sample recovery rate was 84%. The integrally averaged 6-minute samples were compared to the 30-minute samples. The agreement was found to be good in most cases. The validity of using a background tracer to calculate sample volumes was examined and also found to have a confidence level of 20%. Methods for improving sampling and analysis are discussed. The data described in this report are available as Excel files. An additional Excel file of quality assured tracer data for use in model validation efforts is also available. The file consists of extensively quality assured BATS tracer data with background concentrations subtracted.« less
Two-dimensional imaging of two types of radicals by the CW-EPR method
NASA Astrophysics Data System (ADS)
Czechowski, Tomasz; Krzyminiewski, Ryszard; Jurga, Jan; Chlewicki, Wojciech
2008-01-01
The CW-EPR method of image reconstruction is based on sample rotation in a magnetic field with a constant gradient (50 G/cm). In order to obtain a projection (radical density distribution) along a given direction, the EPR spectra are recorded with and without the gradient. Deconvolution, then gives the distribution of the spin density. Projection at 36 different angles gives the information that is necessary for reconstruction of the radical distribution. The problem becomes more complex when there are at least two types of radicals in the sample, because the deconvolution procedure does not give satisfactory results. We propose a method to calculate the projections for each radical, based on iterative procedures. The images of density distribution for each radical obtained by our procedure have proved that the method of deconvolution, in combination with iterative fitting, provides correct results. The test was performed on a sample of polymer PPS Br 111 ( p-phenylene sulphide) with glass fibres and minerals. The results indicated a heterogeneous distribution of radicals in the sample volume. The images obtained were in agreement with the known shape of the sample.
Haines, Troy D.; Adlaf, Kevin J.; Pierceall, Robert M.; Lee, Inmok; Venkitasubramanian, Padmesh
2010-01-01
Analysis of MCPD esters and glycidyl esters in vegetable oils using the indirect method proposed by the DGF gave inconsistent results when salting out conditions were varied. Subsequent investigation showed that the method was destroying and reforming MCPD during the analysis. An LC time of flight MS method was developed for direct analysis of both MCPD esters and glycidyl esters in vegetable oils. The results of the LC–TOFMS method were compared with the DGF method. The DGF method consistently gave results that were greater than the LC–TOFMS method. The levels of MCPD esters and glycidyl esters found in a variety of vegetable oils are reported. MCPD monoesters were not found in any oil samples. MCPD diesters were found only in samples containing palm oil, and were not present in all palm oil samples. Glycidyl esters were found in a wide variety of oils. Some processing conditions that influence the concentration of MCPD esters and glycidyl esters are discussed. PMID:21350591
Alles, Susan; Peng, Linda X; Mozola, Mark A
2009-01-01
A modification to Performance-Tested Method (PTM) 070601, Reveal Listeria Test (Reveal), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there was a statistically significant difference in performance between the Reveal and reference culture [U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA/BAM) or U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS)] methods for only a single food in one trial (pasteurized crab meat) at the 27 h enrichment time point, with more positive results obtained with the FDA/BAM reference method. No foods showed statistically significant differences in method performance at the 30 h time point. Independent laboratory testing of 3 foods again produced a statistically significant difference in results for crab meat at the 27 h time point; otherwise results of the Reveal and reference methods were statistically equivalent. Overall, considering both internal and independent laboratory trials, sensitivity of the Reveal method relative to the reference culture procedures in testing of foods was 85.9% at 27 h and 97.1% at 30 h. Results from 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the Reveal method was more productive than the reference USDA-FSIS culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the Reveal method at the 24 h time point. Overall, sensitivity of the Reveal method at 24 h relative to that of the USDA-FSIS method was 153%. The Reveal method exhibited extremely high specificity, with only a single false-positive result in all trials combined for overall specificity of 99.5%.
Mathes, Melvin V.; O'Brien, Tara L.; Strickler, Kriston M.; Hardy, Joshua J.; Schill, William B.; Lukasik, Jerzy; Scott, Troy M.; Bailey, David E.; Fenger, Terry L.
2007-01-01
Several methods were used to determine the sources of fecal contamination in water samples collected during September and October 2004 from four tributaries to the New River Gorge National River -- Arbuckle Creek, Dunloup Creek, Keeney Creek, and Wolf Creek. All four tributaries historically have had elevated levels of fecal coliform bacteria. The source-tracking methods used yielded various results, possibly because one or more methods failed. Sourcing methods used in this study included the detection of several human-specific and animal-specific biological or molecular markers, and library-dependent pulsed-field gel electrophoresis analysis that attempted to associate Escherichia coli bacteria obtained from water samples with animal sources by matching DNA-fragment banding patterns. Evaluation of the results of quality-control analysis indicated that pulsed-field gel electrophoresis analysis was unable to identify known-source bacteria isolates. Increasing the size of the known-source library did not improve the results for quality-control samples. A number of emerging methods, using markers in Enterococcus, human urine, Bacteroidetes, and host mitochondrial DNA, demonstrated some potential in associating fecal contamination with human or animal sources in a limited analysis of quality-control samples. All four of the human-specific markers were detected in water samples from Keeney Creek, a watershed with no centralized municipal wastewater-treatment facilities, thus indicating human sources of fecal contamination. The human-specific Bacteroidetes and host mitochondrial DNA markers were detected in water samples from Dunloup Creek, Wolf Creek, and to a lesser degree Arbuckle Creek. Results of analysis for wastewater compounds indicate that the September 27 sample from Arbuckle Creek contained numerous human tracer compounds likely from sewage. Dog, horse, chicken, and pig host mitochondrial DNA were detected in some of the water samples with the exception of the October 5 sample from Dunloup Creek. Cow, white-tailed deer, and Canada goose DNA were not detected in any of the samples collected from the four tributaries, despite the presence of these animals in the watersheds. Future studies with more rigorous quality-control analyses are needed to investigate the potential applicability and use of these emerging methods. Because many of the detections for the various methods could vary over time and with flow conditions, repeated sampling during both base flow and storm events would be necessary to more definitively determine the sources of fecal contamination for each watershed.
Application of Advanced Nondestructive Evaluation Techniques for Cylindrical Composite Test Samples
NASA Technical Reports Server (NTRS)
Martin, Richard E.; Roth, Donald J.; Salem, Jonathan A.
2013-01-01
Two nondestructive methods were applied to composite cylinder samples pressurized to failure in order to determine manufacturing quality and monitor damage progression under load. A unique computed tomography (CT) image processing methodology developed at NASA Glenn Research was used to assess the condition of the as-received samples while acoustic emission (AE) monitoring was used to identify both the extent and location of damage within the samples up to failure. Results show the effectiveness of both of these methods in identifying potentially critical fabrication issues and their resulting impact on performance.
Equilibrium Molecular Thermodynamics from Kirkwood Sampling
2015-01-01
We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys.2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, where Kirkwood sampling is used for generating trial Monte Carlo moves. Using this method, equilibrium distributions corresponding to different temperatures and potential energy functions can be generated from a given set of low-order correlations. Since Kirkwood samples are generated independently, this method is ideally suited for massively parallel distributed computing. The second approach is a variant of reservoir replica exchange, where Kirkwood sampling is used to construct a reservoir of conformations, which exchanges conformations with the replicas performing equilibrium sampling corresponding to different thermodynamic states. Coupling with the Kirkwood reservoir enhances sampling by facilitating global jumps in the conformational space. The efficiency of both methods depends on the overlap of the Kirkwood distribution with the target equilibrium distribution. We present proof-of-concept results for a model nine-atom linear molecule and alanine dipeptide. PMID:25915525
Spindler, Patrice; Paretti, Nick V.
2007-01-01
The Arizona Department of Environmental Quality (ADEQ) and the U.S. Environmental Protection Agency (USEPA) Ecological Monitoring and Assessment Program (EMAP), use different field methods for collecting macroinvertebrate samples and habitat data for bioassessment purposes. Arizona’s Biocriteria index was developed using a riffle habitat sampling methodology, whereas the EMAP method employs a multi-habitat sampling protocol. There was a need to demonstrate comparability of these different bioassessment methodologies to allow use of the EMAP multi-habitat protocol for both statewide probabilistic assessments for integration of the EMAP data into the national (305b) assessment and for targeted in-state bioassessments for 303d determinations of standards violations and impaired aquatic life conditions. The purpose of this study was to evaluate whether the two methods yield similar bioassessment results, such that the data could be used interchangeably in water quality assessments. In this Regional EMAP grant funded project, a probabilistic survey of 30 sites in the Little Colorado River basin was conducted in the spring of 2007. Macroinvertebrate and habitat data were collected using both ADEQ and EMAP sampling methods, from adjacent reaches within these stream channels.
All analyses indicated that the two macroinvertebrate sampling methods were significantly correlated. ADEQ and EMAP samples were classified into the same scoring categories (meeting, inconclusive, violating the biocriteria standard) 82% of the time. When the ADEQ-IBI was applied to both the ADEQ and EMAP taxa lists, the resulting IBI scores were significantly correlated (r=0.91), even though only 4 of the 7 metrics in the IBI were significantly correlated. The IBI scores from both methods were significantly correlated to the percent of riffle habitat, even though the average percent riffle habitat was only 30% of the stream reach. Multivariate analyses found that the percent riffle was an important attribute for both datasets in classifying IBI scores into assessment categories.
Habitat measurements generated from EMAP and ADEQ methods were also significantly correlated; 13 of 16 habitat measures were significantly correlated (p<0.01). The visual-based percentage estimates of percent riffle and pool habitats, vegetative cover and percent canopy cover, and substrate measurements of percent fine substrate and embeddedness were all remarkably similar, given the different field methods used. A multivariate analysis identified substrate and flow conditions, as well as canopy cover as important combinations of habitat attributes affecting both IBI scores. These results indicate that similar habitat measures can be obtained using two different field sampling protocols. In addition, similar combinations of these habitat parameters were important to macroinvertebrate community condition in multivariate analyses of both ADEQ and EMAP datasets.
These results indicate the two sampling methods for macroinvertebrates and habitat data were very similar in terms of bioassessment results and stressors. While the bioassessment category was not identical for all sites, overall the assessments were significantly correlated, providing similar bioassessment results for the cold water streams used in this study. The findings of this study indicate that ADEQ can utilize either a riffle-based sampling methodology or a multi-habitat sampling approach in cold water streams as both yield similar results relative to the macroinvertebrate assemblage. These results will allow for use of either macroinvertebrate dataset to determine water quality standards compliance with the ADEQ Indexes of Biological Integrity, for which threshold values were just recently placed into the Arizona Surface Water Quality Standards. While this survey did not include warm water desert streams of Arizona, we would predict that EMAP and ADEQ sampling methodologies would provide similar bioassessment results and would not be significantly different, as we have found that the percent riffle habitat in cold and warm water perennial, wadeable streams is not significantly different. However, a comparison study of sampling methodologies in warm water streams should be conducted to confirm the predicted similarity of bioassessment results. ADEQ will continue to implement a monitoring strategy that includes probabilistic monitoring for a statewide ecological assessment of stream conditions. Conclusions from this study will guide decisions regarding the most appropriate sampling methods for future probabilistic monitoring sample plans.
Sampling and sample processing in pesticide residue analysis.
Lehotay, Steven J; Cook, Jo Marie
2015-05-13
Proper sampling and sample processing in pesticide residue analysis of food and soil have always been essential to obtain accurate results, but the subject is becoming a greater concern as approximately 100 mg test portions are being analyzed with automated high-throughput analytical methods by agrochemical industry and contract laboratories. As global food trade and the importance of monitoring increase, the food industry and regulatory laboratories are also considering miniaturized high-throughput methods. In conjunction with a summary of the symposium "Residues in Food and Feed - Going from Macro to Micro: The Future of Sample Processing in Residue Analytical Methods" held at the 13th IUPAC International Congress of Pesticide Chemistry, this is an opportune time to review sampling theory and sample processing for pesticide residue analysis. If collected samples and test portions do not adequately represent the actual lot from which they came and provide meaningful results, then all costs, time, and efforts involved in implementing programs using sophisticated analytical instruments and techniques are wasted and can actually yield misleading results. This paper is designed to briefly review the often-neglected but crucial topic of sample collection and processing and put the issue into perspective for the future of pesticide residue analysis. It also emphasizes that analysts should demonstrate the validity of their sample processing approaches for the analytes/matrices of interest and encourages further studies on sampling and sample mass reduction to produce a test portion.
Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.
Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E
2016-12-20
Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.
Hydrocarbon group type determination in jet fuels by high performance liquid chromatography
NASA Technical Reports Server (NTRS)
Antoine, A. C.
1977-01-01
Results are given for the analysis of some jet and diesel fuel samples which were prepared from oil shale and coal syncrudes. Thirty-two samples of varying chemical composition and physical properties were obtained. Hydrocarbon types in these samples were determined by fluorescent indicator adsorption (FIA) analysis, and the results from three laboratories are presented and compared. Recently, rapid high performance liquid chromatography (HPLC) methods have been proposed for hydrocarbon group type analysis, with some suggestion for their use as a replacement of the FIA technique. Two of these methods were used to analyze some of the samples, and these results are also presented and compared. Two samples of petroleum-based Jet A fuel are similarly analyzed.
Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun
2007-09-01
Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques consist of regression methods used to build prediction models, however, the accuracy of analysis results will be affected by many factors. In the present paper, the influence of different sample roughness on the mathematical model of NIR quantitative analysis of wood density was studied. The result of experiments showed that if the roughness of predicted samples was consistent with that of calibrated samples, the result was good, otherwise the error would be much higher. The roughness-mixed model was more flexible and adaptable to different sample roughness. The prediction ability of the roughness-mixed model was much better than that of the single-roughness model.
Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun
2015-01-22
High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damages to the blade. Blade tip-timing methods (BTT) have become a promising way to monitor blade vibrations. However, synchronous vibrations are unsuitably monitored by uniform BTT sampling. Therefore, non-equally mounted probes have been used, which will result in the non-uniformity of the sampling signal. Since under-sampling is an intrinsic drawback of BTT methods, how to analyze non-uniformly under-sampled BTT signals is a big challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. Firstly, a mathematical model of a non-uniform BTT sampling process is built. It can be treated as the sum of certain uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Secondly, simultaneous equations of all interpolating functions in each sub-band are built and corresponding solutions are ultimately derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. In the end, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signals can be particularly reconstructed by non-uniform BTT data acquired from only two probes.
Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun
2015-01-01
High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damages to the blade. Blade tip-timing methods (BTT) have become a promising way to monitor blade vibrations. However, synchronous vibrations are unsuitably monitored by uniform BTT sampling. Therefore, non-equally mounted probes have been used, which will result in the non-uniformity of the sampling signal. Since under-sampling is an intrinsic drawback of BTT methods, how to analyze non-uniformly under-sampled BTT signals is a big challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. Firstly, a mathematical model of a non-uniform BTT sampling process is built. It can be treated as the sum of certain uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Secondly, simultaneous equations of all interpolating functions in each sub-band are built and corresponding solutions are ultimately derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. In the end, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signals can be particularly reconstructed by non-uniform BTT data acquired from only two probes. PMID:25621612
Quantitative evaluation of the CEEM soil sampling intercomparison.
Wagner, G; Lischer, P; Theocharopoulos, S; Muntau, H; Desaules, A; Quevauviller, P
2001-01-08
The aim of the CEEM soil project was to compare and to test the soil sampling and sample preparation guidelines used in the member states of the European Union and Switzerland for investigations of background and large-scale contamination of soils, soil monitoring and environmental risk assessments. The results of the comparative evaluation of the sampling guidelines demonstrated that, in soil contamination studies carried out with different sampling strategies and methods, comparable results can hardly be expected. Therefore, a reference database (RDB) was established by the organisers, which acted as a basis for the quantitative comparison of the participants' results. The detected deviations were related to the methodological details of the individual strategies. The comparative evaluation concept consisted of three steps: The first step was a comparison of the participants' samples (which were both centrally and individually analysed) between each other, as well as with the reference data base (RDB) and some given soil quality standards on the level of concentrations present. The comparison was made using the example of the metals cadmium, copper, lead and zinc. As a second step, the absolute and relative deviations between the reference database and the participants' results (both centrally analysed under repeatability conditions) were calculated. The comparability of the samples with the RDB was categorised on four levels. Methods of exploratory statistical analysis were applied to estimate the differential method bias among the participants. The levels of error caused by sampling and sample preparation were compared with those caused by the analytical procedures. As a third step, the methodological profiles of the participants were compiled to concisely describe the different procedures used. They were related to the results to find out the main factors leading to their incomparability. The outcome of this evaluation process was a list of strategies and methods, which are problematic with respect to comparability, and should be standardised and/or specified in order to arrive at representative and comparable results in soil contamination studies throughout Europe. Pre-normative recommendations for harmonising European soil sampling guidelines and standard operating procedures have been outlined in Wagner G, Desules A, Muntau H, Theocharopoulos S. Comparative Evaluation of European Methods for Sampling and Sample Preparation of Soils for Inorganic Analysis (CEEM Soil). Final Report of the Contract SMT4-CT96-2085, Sci Total Environ 2001;264:181-186. Wagner G, Desaules A, Munatu H. Theocharopolous S, Quevauvaller Ph. Suggestions for harmonising sampling and sample pre-treatment procedures and improving quality assurance in pre-analytical steps of soil contamination studies. Paper 1.7 Sci Total Environ 2001b;264:103-118.
(PRESENTED NAQC SAN FRANCISCO, CA) COARSE PM METHODS STUDY: STUDY DESIGN AND RESULTS
Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discrete ...
NASA Astrophysics Data System (ADS)
Šantić, Branko; Gracin, Davor
2017-12-01
A new simple Monte Carlo method is introduced for the study of electrostatic screening by surrounding ions. The proposed method is not based on the generally used Markov chain method for sample generation. Each sample is pristine and there is no correlation with other samples. As the main novelty, the pairs of ions are gradually added to a sample provided that the energy of each ion is within the boundaries determined by the temperature and the size of ions. The proposed method provides reliable results, as demonstrated by the screening of ion in plasma and in water.
NASA Astrophysics Data System (ADS)
Kislov, E. V.; Kulikov, A. A.; Kulikova, A. B.
1989-10-01
Samples of basit-ultrabasit rocks and NiCu ores of the Ioko-Dovyren and Chaya massifs were analysed by SRXFA and a chemical-spectral method. SRXFA perfectly satisfies the quantitative noble-metals analysis of ore-free rocks. Combination of SRXFA and chemical-spectral analysis has good prospects. After analysis of a great number of samples by SRXFA it is necessary to select samples which would show minimal and maximal results for the chemical-spectral method.
Karamon, Jacek; Ziomko, Irena; Cencek, Tomasz; Sroka, Jacek
2008-10-01
The modification of flotation method for the examination of diarrhoeic piglet faeces for the detection of Isospora suis oocysts was elaborated. The method was based on removing fractions of fat from the sample of faeces by centrifugation with a 25% Percoll solution. The investigations were carried out in comparison to the McMaster method. From five variants of the Percoll flotation method, the best results were obtained when 2ml of flotation liquid per 1g of faeces were used. The limit of detection in the Percoll flotation method was 160 oocysts per 1g, and was better than with the McMaster method. The efficacy of the modified method was confirmed by results obtained in the examination of the I. suis infected piglets. From all faecal samples, positive samples in the Percoll flotation method were double the results than that of the routine method. Oocysts were first detected by the Percoll flotation method on day 4 post-invasion, i.e. one-day earlier than with the McMaster method. During the experiment (except for 3 days), the extensity of I. suis invasion in the litter examined by the Percoll flotation method was higher than that with the McMaster method. The obtained results show that the modified flotation method with the use of Percoll could be applied in the diagnostics of suckling piglet isosporosis.
Mizinga, Kemmy M; Burnett, Thomas J; Brunelle, Sharon L; Wallace, Michael A; Coleman, Mark R
2018-05-01
The U.S. Department of Agriculture, Food Safety Inspection Service regulatory method for monensin, Chemistry Laboratory Guidebook CLG-MON, is a semiquantitative bioautographic method adopted in 1991. Official Method of AnalysisSM (OMA) 2011.24, a modern quantitative and confirmatory LC-tandem MS method, uses no chlorinated solvents and has several advantages, including ease of use, ready availability of reagents and materials, shorter run-time, and higher throughput than CLG-MON. Therefore, a bridging study was conducted to support the replacement of method CLG-MON with OMA 2011.24 for regulatory use. Using fortified bovine tissue samples, CLG-MON yielded accuracies of 80-120% in 44 of the 56 samples tested (one sample had no result, six samples had accuracies of >120%, and five samples had accuracies of 40-160%), but the semiquantitative nature of CLG-MON prevented assessment of precision, whereas OMA 2011.24 had accuracies of 88-110% and RSDr of 0.00-15.6%. Incurred residue results corroborated these results, demonstrating improved accuracy (83.3-114%) and good precision (RSDr of 2.6-20.5%) for OMA 2011.24 compared with CLG-MON (accuracy generally within 80-150%, with exceptions). Furthermore, χ2 analysis revealed no statistically significant difference between the two methods. Thus, the microbiological activity of monensin correlated with the determination of monensin A in bovine tissues, and OMA 2011.24 provided improved accuracy and precision over CLG-MON.
Fatemeh, Dehghan; Reza, Zolfaghari Mohammad; Mohammad, Arjomandzadegan; Salomeh, Kalantari; Reza, Ahmari Gholam; Hossein, Sarmadian; Maryam, Sadrnia; Azam, Ahmadi; Mana, Shojapoor; Negin, Najarian; Reza, Kasravi Alii; Saeed, Falahat
2014-01-01
Objective To analyse molecular detection of coliforms and shorten the time of PCR. Methods Rapid detection of coliforms by amplification of lacZ and uidA genes in a multiplex PCR reaction was designed and performed in comparison with most probably number (MPN) method for 16 artificial and 101 field samples. The molecular method was also conducted on isolated coliforms from positive MPN samples; standard sample for verification of microbial method certificated reference material; isolated strains from certificated reference material and standard bacteria. The PCR and electrophoresis parameters were changed for reducing the operation time. Results Results of PCR for lacZ and uidA genes were similar in all of standard, operational and artificial samples and showed the 876 bp and 147 bp bands of lacZ and uidA genes by multiplex PCR. PCR results were confirmed by MPN culture method by sensitivity 86% (95% CI: 0.71-0.93). Also the total execution time, with a successful change of factors, was reduced to less than two and a half hour. Conclusions Multiplex PCR method with shortened operation time was used for the simultaneous detection of total coliforms and Escherichia coli in distribution system of Arak city. It's recommended to be used at least as an initial screening test, and then the positive samples could be randomly tested by MPN. PMID:25182727
A STRINGENT COMPARISON OF SAMPLING AND ANALYSIS METHODS FOR VOCS IN AMBIENT AIR
A carefully designed study was conducted during the summer of 1998 to simultaneously collect samples of ambient air by canisters and compare the analysis results to direct sorbent preconcentration results taken at the time of sample collection. A total of 32 1-h sample sets we...
Evaluation of seven aquatic sampling methods for amphibians and other aquatic fauna
Gunzburger, M.S.
2007-01-01
To design effective and efficient research and monitoring programs researchers must have a thorough understanding of the capabilities and limitations of their sampling methods. Few direct comparative studies exist for aquatic sampling methods for amphibians. The objective of this study was to simultaneously employ seven aquatic sampling methods in 10 wetlands to compare amphibian species richness and number of individuals detected with each method. Four sampling methods allowed counts of individuals (metal dipnet, D-frame dipnet, box trap, crayfish trap), whereas the other three methods allowed detection of species (visual encounter, aural, and froglogger). Amphibian species richness was greatest with froglogger, box trap, and aural samples. For anuran species, the sampling methods by which each life stage was detected was related to relative length of larval and breeding periods and tadpole size. Detection probability of amphibians varied across sampling methods. Box trap sampling resulted in the most precise amphibian count, but the precision of all four count-based methods was low (coefficient of variation > 145 for all methods). The efficacy of the four count sampling methods at sampling fish and aquatic invertebrates was also analyzed because these predatory taxa are known to be important predictors of amphibian habitat distribution. Species richness and counts were similar for fish with the four methods, whereas invertebrate species richness and counts were greatest in box traps. An effective wetland amphibian monitoring program in the southeastern United States should include multiple sampling methods to obtain the most accurate assessment of species community composition at each site. The combined use of frogloggers, crayfish traps, and dipnets may be the most efficient and effective amphibian monitoring protocol. ?? 2007 Brill Academic Publishers.
Tian, Guo-Liang; Li, Hui-Qiong
2017-08-01
Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap testing hypothesis methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.
Robust gene selection methods using weighting schemes for microarray data analysis.
Kang, Suyeon; Song, Jongwoo
2017-09-02
A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
Seth, Mayank; Jackson, Karen V.; Winzelberg, Sarah; Giger, Urs
2012-01-01
Objective To compare accuracy and ease of use of a card agglutination assay, an immunochromatographic cartridge method, and a gel-based method for canine blood typing. Sample Blood samples from 52 healthy blood donor dogs, 10 dogs with immune-mediated hemolytic anemia (IMHA), and 29 dogs with other diseases. Procedures Blood samples were tested in accordance with manufacturer guidelines. Samples with low PCVs were created by the addition of autologous plasma to separately assess the effects of anemia on test results. Results Compared with a composite reference standard of agreement between 2 methods, the gel-based method was found to be 100% accurate. The card agglutination assay was 89% to 91% accurate, depending on test interpretation, and the immunochromatographic cartridge method was 93% accurate but 100% specific. Errors were observed more frequently in samples from diseased dogs, particularly those with IMHA. In the presence of persistent autoagglutination, dog erythrocyte antigen (DEA) 1.1 typing was not possible, except with the immunochromatographic cartridge method. Conclusions and Clinical Relevance The card agglutination assay and immunochromatographic cartridge method, performed by trained personnel, were suitable for in-clinic emergency DEA 1.1 blood typing. There may be errors, particularly for samples from dogs with IMHA, and the immunochromatographic cartridge method may have an advantage of allowing typing of samples with persistent autoagglutination. The laboratory gel-based method would be preferred for routine DEA 1.1 typing of donors and patients if it is available and time permits. Current DEA 1.1 typing techniques appear to be appropriately standardized and easy to use. PMID:22280380
Létant, Sonia E; Murphy, Gloria A; Alfaro, Teneile M; Avila, Julie R; Kane, Staci R; Raber, Ellen; Bunt, Thomas M; Shah, Sanjiv R
2011-09-01
In the event of a biothreat agent release, hundreds of samples would need to be rapidly processed to characterize the extent of contamination and determine the efficacy of remediation activities. Current biological agent identification and viability determination methods are both labor- and time-intensive such that turnaround time for confirmed results is typically several days. In order to alleviate this issue, automated, high-throughput sample processing methods were developed in which real-time PCR analysis is conducted on samples before and after incubation. The method, referred to as rapid-viability (RV)-PCR, uses the change in cycle threshold after incubation to detect the presence of live organisms. In this article, we report a novel RV-PCR method for detection of live, virulent Bacillus anthracis, in which the incubation time was reduced from 14 h to 9 h, bringing the total turnaround time for results below 15 h. The method incorporates a magnetic bead-based DNA extraction and purification step prior to PCR analysis, as well as specific real-time PCR assays for the B. anthracis chromosome and pXO1 and pXO2 plasmids. A single laboratory verification of the optimized method applied to the detection of virulent B. anthracis in environmental samples was conducted and showed a detection level of 10 to 99 CFU/sample with both manual and automated RV-PCR methods in the presence of various challenges. Experiments exploring the relationship between the incubation time and the limit of detection suggest that the method could be further shortened by an additional 2 to 3 h for relatively clean samples.
Importance of sample preparation for molecular diagnosis of lyme borreliosis from urine.
Bergmann, A R; Schmidt, B L; Derler, A-M; Aberer, E
2002-12-01
Urine PCR has been used for the diagnosis of Borrelia burgdorferi infection in recent years but has been abandoned because of its low sensitivity and the irreproducibility of the results. Our study aimed to analyze technical details related to sample preparation and detection methods. Crucial for a successful urine PCR were (i) avoidance of the first morning urine sample; (ii) centrifugation at 36,000 x g; and (iii) the extraction method, with only DNAzol of the seven different extraction methods used yielding positive results with patient urine specimens. Furthermore, storage of frozen urine samples at -80 degrees C reduced the sensitivity of a positive urine PCR result obtained with samples from 72 untreated erythema migrans (EM) patients from 85% in the first 3 months to <30% after more than 3 months. Bands were detected at 276 bp on ethidium bromide-stained agarose gels after amplification by a nested PCR. The specificity of bands for 32 of 33 samples was proven by hybridization with a GEN-ETI-K-DEIA kit and for a 10 further positive amplicons by sequencing. By using all of these steps to optimize the urine PCR technique, B. burgdorferi infection could be diagnosed by using urine samples from EM patients with a sensitivity (85%) substantially better than that of serological methods (50%). This improved method could be of future importance as an additional laboratory technique for the diagnosis of unclear, unrecognized borrelia infections and diseases possibly related to Lyme borreliosis.
Zheng, Jinkai; Fang, Xiang; Cao, Yong; Xiao, Hang; He, Lili
2013-01-01
To develop an accurate and convenient method for monitoring the production of citrus-derived bioactive 5-demethylnobiletin from demethylation reaction of nobiletin, we compared surface enhanced Raman spectroscopy (SERS) methods with a conventional HPLC method. Our results show that both the substrate-based and solution-based SERS methods correlated with HPLC method very well. The solution method produced lower root mean square error of calibration and higher correlation coefficient than the substrate method. The solution method utilized an ‘affinity chromatography’-like procedure to separate the reactant nobiletin from the product 5-demthylnobiletin based on their different binding affinity to the silver dendrites. The substrate method was found simpler and faster to collect the SERS ‘fingerprint’ spectra of the samples as no incubation between samples and silver was needed and only trace amount of samples were required. Our results demonstrated that the SERS methods were superior to HPLC method in conveniently and rapidly characterizing and quantifying 5-demethylnobiletin production. PMID:23885986
An improved SRC method based on virtual samples for face recognition
NASA Astrophysics Data System (ADS)
Fu, Lijun; Chen, Deyun; Lin, Kezheng; Li, Ao
2018-07-01
The sparse representation classifier (SRC) performs classification by evaluating which class leads to the minimum representation error. However, in real world, the number of available training samples is limited due to noise interference, training samples cannot accurately represent the test sample linearly. Therefore, in this paper, we first produce virtual samples by exploiting original training samples at the aim of increasing the number of training samples. Then, we take the intra-class difference as data representation of partial noise, and utilize the intra-class differences and training samples simultaneously to represent the test sample in a linear way according to the theory of SRC algorithm. Using weighted score level fusion, the respective representation scores of the virtual samples and the original training samples are fused together to obtain the final classification results. The experimental results on multiple face databases show that our proposed method has a very satisfactory classification performance.
Schwarcz, Sandra; Spindler, Hilary; Scheer, Susan; Valleroy, Linda; Lansky, Amy
2007-07-01
Convenience samples are used to determine HIV-related behaviors among men who have sex with men (MSM) without measuring the extent to which the results are representative of the broader MSM population. We compared results from a cross-sectional survey of MSM recruited from gay bars between June and October 2001 to a random digit dial telephone survey conducted between June 2002 and January 2003. The men in the probability sample were older, better educated, and had higher incomes than men in the convenience sample, the convenience sample enrolled more employed men and men of color. Substance use around the time of sex was higher in the convenience sample but other sexual behaviors were similar. HIV testing was common among men in both samples. Periodic validation, through comparison of data collected by different sampling methods, may be useful when relying on survey data for program and policy development.
Gill, Christina; van de Wijgert, Janneke H H M; Blow, Frances; Darby, Alistair C
2016-01-01
Recent studies on the vaginal microbiota have employed molecular techniques such as 16S rRNA gene sequencing to describe the bacterial community as a whole. These techniques require the lysis of bacterial cells to release DNA before purification and PCR amplification of the 16S rRNA gene. Currently, methods for the lysis of bacterial cells are not standardised and there is potential for introducing bias into the results if some bacterial species are lysed less efficiently than others. This study aimed to compare the results of vaginal microbiota profiling using four different pretreatment methods for the lysis of bacterial samples (30 min of lysis with lysozyme, 16 hours of lysis with lysozyme, 60 min of lysis with a mixture of lysozyme, mutanolysin and lysostaphin and 30 min of lysis with lysozyme followed by bead beating) prior to chemical and enzyme-based DNA extraction with a commercial kit. After extraction, DNA yield did not significantly differ between methods with the exception of lysis with lysozyme combined with bead beating which produced significantly lower yields when compared to lysis with the enzyme cocktail or 30 min lysis with lysozyme only. However, this did not result in a statistically significant difference in the observed alpha diversity of samples. The beta diversity (Bray-Curtis dissimilarity) between different lysis methods was statistically significantly different, but this difference was small compared to differences between samples, and did not affect the grouping of samples with similar vaginal bacterial community structure by hierarchical clustering. An understanding of how laboratory methods affect the results of microbiota studies is vital in order to accurately interpret the results and make valid comparisons between studies. Our results indicate that the choice of lysis method does not prevent the detection of effects relating to the type of vaginal bacterial community one of the main outcome measures of epidemiological studies. However, we recommend that the same method is used on all samples within a particular study.
RAPID SEPARATION METHOD FOR EMERGENCY WATER AND URINE SAMPLES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, S.; Culligan, B.
2008-08-27
The Savannah River Site Environmental Bioassay Lab participated in the 2008 NRIP Emergency Response program administered by the National Institute for Standards and Technology (NIST) in May, 2008. A new rapid column separation method was used for analysis of actinides and {sup 90}Sr the NRIP 2008 emergency water and urine samples. Significant method improvements were applied to reduce analytical times. As a result, much faster analysis times were achieved, less than 3 hours for determination of {sup 90}Sr and 3-4 hours for actinides. This represents a 25%-33% improvement in analysis times from NRIP 2007 and a {approx}100% improvement compared tomore » NRIP 2006 report times. Column flow rates were increased by a factor of two, with no significant adverse impact on the method performance. Larger sample aliquots, shorter count times, faster cerium fluoride microprecipitation and streamlined calcium phosphate precipitation were also employed. Based on initial feedback from NIST, the SRS Environmental Bioassay Lab had the most rapid analysis times for actinides and {sup 90}Sr analyses for NRIP 2008 emergency urine samples. High levels of potential matrix interferences may be present in emergency samples and rugged methods are essential. Extremely high levels of {sup 210}Po were found to have an adverse effect on the uranium results for the NRIP-08 urine samples, while uranium results for NRIP-08 water samples were not affected. This problem, which was not observed for NRIP-06 or NRIP-07 urine samples, was resolved by using an enhanced {sup 210}Po removal step, which will be described.« less
Sampling Based Influence Maximization on Linear Threshold Model
NASA Astrophysics Data System (ADS)
Jia, Su; Chen, Ling
2018-04-01
A sampling based influence maximization on linear threshold (LT) model method is presented. The method samples the routes in the possible worlds in the social networks, and uses Chernoff bound to estimate the number of samples so that the error can be constrained within a given bound. Then the active possibilities of the routes in the possible worlds are calculated, and are used to compute the influence spread of each node in the network. Our experimental results show that our method can effectively select appropriate seed nodes set that spreads larger influence than other similar methods.
Improved ASTM G72 Test Method for Ensuring Adequate Fuel-to-Oxidizer Ratios
NASA Technical Reports Server (NTRS)
Juarez, Alfredo; Harper, Susana A.
2016-01-01
The ASTM G72/G72M-15 Standard Test Method for Autogenous Ignition Temperature of Liquids and Solids in a High-Pressure Oxygen-Enriched Environment is currently used to evaluate materials for the ignition susceptibility driven by exposure to external heat in an enriched oxygen environment. Testing performed on highly volatile liquids such as cleaning solvents has proven problematic due to inconsistent test results (non-ignitions). Non-ignition results can be misinterpreted as favorable oxygen compatibility, although they are more likely associated with inadequate fuel-to-oxidizer ratios. Forced evaporation during purging and inadequate sample size were identified as two potential causes for inadequate available sample material during testing. In an effort to maintain adequate fuel-to-oxidizer ratios within the reaction vessel during test, several parameters were considered, including sample size, pretest sample chilling, pretest purging, and test pressure. Tests on a variety of solvents exhibiting a range of volatilities are presented in this paper. A proposed improvement to the standard test protocol as a result of this evaluation is also presented. Execution of the final proposed improved test protocol outlines an incremental step method of determining optimal conditions using increased sample sizes while considering test system safety limits. The proposed improved test method increases confidence in results obtained by utilizing the ASTM G72 autogenous ignition temperature test method and can aid in the oxygen compatibility assessment of highly volatile liquids and other conditions that may lead to false non-ignition results.
Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W
2015-06-01
Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time by sampling method interaction indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggests that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods and sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications, when efficiency is not paramount. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Elges, Sandra; Arnold, Renate; Liesenfeld, Oliver; Kofla, Grzegorz; Mikolajewska, Agata; Schwartz, Stefan; Uharek, Lutz; Ruhnke, Markus
2017-12-01
We prospectively evaluated a multiplex real-time PCR assay (SeptiFast, SF) in a cohort of patients undergoing allo-BMT in comparison to an in-house PCR method (IH-PCR). Overall 847 blood samples (mean 8 samples/patient) from 104 patients with haematological malignancies were analysed. The majority of patients had acute leukaemia (62%) with a mean age of 52 years (54% female). Pathogens could be detected in 91 of 847 (11%) samples by SF compared to 38 of 205 (18.5%) samples by BC, and 57 of 847 (6.7%) samples by IH-PCR. Coagulase-negative staphylococci (n=41 in SF, n=29 in BC) were the most frequently detected bacteria followed by Escherichia coli (n=9 in SF, n=6 in BC). Candida albicans (n=17 in SF, n=0 in BC, n=24 in IH-PCR) was the most frequently detected fungal pathogen. SF gave positive results in 5% of samples during surveillance vs in 26% of samples during fever episodes. Overall, the majority of blood samples gave negative results in both PCR methods resulting in 93% overall agreement resulting in a negative predictive value of 0.96 (95% CI: 0.95-0.97), and a positive predictive value of 0.10 (95% CI: -0.01 to 0.21). SeptiFast appeared to be superior over BC and the IH-PCR method. © 2017 Blackwell Verlag GmbH.
Methodological integrative review of the work sampling technique used in nursing workload research.
Blay, Nicole; Duffield, Christine M; Gallagher, Robyn; Roche, Michael
2014-11-01
To critically review the work sampling technique used in nursing workload research. Work sampling is a technique frequently used by researchers and managers to explore and measure nursing activities. However, work sampling methods used are diverse making comparisons of results between studies difficult. Methodological integrative review. Four electronic databases were systematically searched for peer-reviewed articles published between 2002-2012. Manual scanning of reference lists and Rich Site Summary feeds from contemporary nursing journals were other sources of data. Articles published in the English language between 2002-2012 reporting on research which used work sampling to examine nursing workload. Eighteen articles were reviewed. The review identified that the work sampling technique lacks a standardized approach, which may have an impact on the sharing or comparison of results. Specific areas needing a shared understanding included the training of observers and subjects who self-report, standardization of the techniques used to assess observer inter-rater reliability, sampling methods and reporting of outcomes. Work sampling is a technique that can be used to explore the many facets of nursing work. Standardized reporting measures would enable greater comparison between studies and contribute to knowledge more effectively. Author suggestions for the reporting of results may act as guidelines for researchers considering work sampling as a research method. © 2014 John Wiley & Sons Ltd.
Hoerner, Rebecca; Feldpausch, Jill; Gray, R Lucas; Curry, Stephanie; Islam, Zahidul; Goldy, Tim; Klein, Frank; Tadese, Theodros; Rice, Jennifer; Mozola, Mark
2011-01-01
Reveal Salmonella 2.0 is an improved version of the original Reveal Salmonella lateral flow immunoassay and is applicable to the detection of Salmonella enterica serogroups A-E in a variety of food and environmental samples. A Performance Tested Method validation study was conducted to compare performance of the Reveal 2.0 method with that of the U.S. Department of Agriculture-Food Safety and Inspection Service or U.S. Food and Drug Administration/Bacteriological Analytical Manual reference culture methods for detection of Salmonella spp. in chicken carcass rinse, raw ground turkey, raw ground beef, hot dogs, raw shrimp, a ready-to-eat meal product, dry pet food, ice cream, spinach, cantaloupe, peanut butter, stainless steel surface, and sprout irrigation water. In a total of 17 trials performed internally and four trials performed in an independent laboratory, there were no statistically significant differences in performance of the Reveal 2.0 and reference culture procedures as determined by Chi-square analysis, with the exception of one trial with stainless steel surface and one trial with sprout irrigation water where there were significantly more positive results by the Reveal 2.0 method. Considering all data generated in testing food samples using enrichment procedures specifically designed for the Reveal method, overall sensitivity of the Reveal method relative to the reference culture methods was 99%. In testing environmental samples, sensitivity of the Reveal method relative to the reference culture method was 164%. For select foods, use of the Reveal test in conjunction with reference method enrichment resulted in overall sensitivity of 92%. There were no unconfirmed positive results on uninoculated control samples in any trials for specificity of 100%. In inclusivity testing, 102 different Salmonella serovars belonging to serogroups A-E were tested and 99 were consistently positive in the Reveal test. In exclusivity testing of 33 strains of non-salmonellae representing 14 genera, 32 were negative when tested with Reveal following nonselective enrichment, and the remaining strain was found to be substantially inhibited by the enrichment media used with the Reveal method. Results of ruggedness testing showed that the Reveal test produces accurate results even with substantial deviation in sample volume or device development time.
Revesz, Kinga M.; Landwehr, Jurate M.
2002-01-01
A new method was developed to analyze the stable carbon and oxygen isotope ratios of small samples (400 ± 20 µg) of calcium carbonate. This new method streamlines the classical phosphoric acid/calcium carbonate (H3PO4/CaCO3) reaction method by making use of a recently available Thermoquest-Finnigan GasBench II preparation device and a Delta Plus XL continuous flow isotope ratio mass spectrometer. Conditions for which the H3PO4/CaCO3 reaction produced reproducible and accurate results with minimal error had to be determined. When the acid/carbonate reaction temperature was kept at 26 °C and the reaction time was between 24 and 54 h, the precision of the carbon and oxygen isotope ratios for pooled samples from three reference standard materials was ≤0.1 and ≤0.2 per mill or ‰, respectively, although later analysis showed that materials from one specific standard required reaction time between 34 and 54 h for δ18O to achieve this level of precision. Aliquot screening methods were shown to further minimize the total error. The accuracy and precision of the new method were analyzed and confirmed by statistical analysis. The utility of the method was verified by analyzing calcite from Devils Hole, Nevada, for which isotope-ratio values had previously been obtained by the classical method. Devils Hole core DH-11 recently had been re-cut and re-sampled, and isotope-ratio values were obtained using the new method. The results were comparable with those obtained by the classical method with correlation = +0.96 for both isotope ratios. The consistency of the isotopic results is such that an alignment offset could be identified in the re-sampled core material, and two cutting errors that occurred during re-sampling then were confirmed independently. This result indicates that the new method is a viable alternative to the classical reaction method. In particular, the new method requires less sample material permitting finer resolution and allows automation of some processes resulting in considerable time savings.
Al, Kait F; Bisanz, Jordan E; Gloor, Gregory B; Reid, Gregor; Burton, Jeremy P
2018-01-01
The increasing interest on the impact of the gut microbiota on health and disease has resulted in multiple human microbiome-related studies emerging. However, multiple sampling methods are being used, making cross-comparison of results difficult. To avoid additional clinic visits and increase patient recruitment to these studies, there is the potential to utilize at-home stool sampling. The aim of this pilot study was to compare simple self-sampling collection and storage methods. To simulate storage conditions, stool samples from three volunteers were freshly collected, placed on toilet tissue, and stored at four temperatures (-80, 7, 22 and 37°C), either dry or in the presence of a stabilization agent (RNAlater®) for 3 or 7days. Using 16S rRNA gene sequencing by Illumina, the effect of storage variations for each sample was compared to a reference community from fresh, unstored counterparts. Fastq files may be accessed in the NCBI Sequence Read Archive: Bioproject ID PRJNA418287. Microbial diversity and composition were not significantly altered by any storage method. Samples were always separable based on participant, regardless of storage method suggesting there was no need for sample preservation by a stabilization agent. In summary, if immediate sample processing is not feasible, short term storage of unpreserved stool samples on toilet paper offers a reliable way to assess the microbiota composition by 16S rRNA gene sequencing. Copyright © 2017 Elsevier B.V. All rights reserved.
Sampling naturally contaminated broiler carcasses for Salmonella by three different methods
USDA-ARS?s Scientific Manuscript database
Postchill neck skin (NS) maceration and whole carcass rinsing (WCR) are frequently used methods to detect salmonellae from commercially processed broilers. These are practical, nondestructive methods, but they are insensitive and may result in frequent false negatives (20 to 40%). NS samples only ...
Compendium of selected methods for sampling and analysis at geothermal facilities
NASA Astrophysics Data System (ADS)
Kindle, C. H.; Pool, K. H.; Ludwick, J. D.; Robertson, D. E.
1984-06-01
An independent study of the field has resulted in a compilation of the best methods for sampling, preservation and analysis of potential pollutants from geothermally fueled electric power plants. These methods are selected as the most usable over the range of application commonly experienced in the various geothermal plant sample locations. In addition to plant and well piping, techniques for sampling cooling towers, ambient gases, solids, surface and subsurface waters are described. Emphasis is placed on the use of sampling proves to extract samples from heterogeneous flows. Certain sampling points, constituents and phases of plant operation are more amenable to quality assurance improvement in the emission measurements than others and are so identified.
Estimation of reference intervals from small samples: an example using canine plasma creatinine.
Geffré, A; Braun, J P; Trumel, C; Concordet, D
2009-12-01
According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often are impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Jianying; Dann, Geoffrey P.; Shi, Tujin
2012-03-10
Sodium dodecyl sulfate (SDS) is one of the most popular laboratory reagents used for highly efficient biological sample extraction; however, SDS presents a significant challenge to LC-MS-based proteomic analyses due to its severe interference with reversed-phase LC separations and electrospray ionization interfaces. This study reports a simple SDS-assisted proteomic sample preparation method facilitated by a novel peptide-level SDS removal protocol. After SDS-assisted protein extraction and digestion, SDS was effectively (>99.9%) removed from peptides through ion substitution-mediated DS- precipitation with potassium chloride (KCl) followed by {approx}10 min centrifugation. Excellent peptide recovery (>95%) was observed for less than 20 {mu}g of peptides.more » Further experiments demonstrated the compatibility of this protocol with LC-MS/MS analyses. The resulting proteome coverage from this SDS-assisted protocol was comparable to or better than those obtained from other standard proteomic preparation methods in both mammalian tissues and bacterial samples. These results suggest that this SDS-assisted protocol is a practical, simple, and broadly applicable proteomic sample processing method, which can be particularly useful when dealing with samples difficult to solubilize by other methods.« less
Determination of fossil carbon content in Swedish waste fuel by four different methods.
Jones, Frida C; Blomqvist, Evalena W; Bisaillon, Mattias; Lindberg, Daniel K; Hupa, Mikko
2013-10-01
This study aimed to determine the content of fossil carbon in waste combusted in Sweden by using four different methods at seven geographically spread combustion plants. In total, the measurement campaign included 42 solid samples, 21 flue gas samples, 3 sorting analyses and 2 investigations using the balance method. The fossil carbon content in the solid samples and in the flue gas samples was determined using (14)C-analysis. From the analyses it was concluded that about a third of the carbon in mixed Swedish waste (municipal solid waste and industrial waste collected at Swedish industry sites) is fossil. The two other methods (the balance method and calculations from sorting analyses), based on assumptions and calculations, gave similar results in the plants in which they were used. Furthermore, the results indicate that the difference between samples containing as much as 80% industrial waste and samples consisting of solely municipal solid waste was not as large as expected. Besides investigating the fossil content of the waste, the project was also established to investigate the usability of various methods. However, it is difficult to directly compare the different methods used in this project because besides the estimation of emitted fossil carbon the methods provide other information, which is valuable to the plant owner. Therefore, the choice of method can also be controlled by factors other than direct determination of the fossil fuel emissions when considering implementation in the combustion plants.
Barbosa, Marcília Medrado; Detmann, Edenio; Rocha, Gabriel Cipriano; de Oliveira Franco, Marcia; de Campos Valadares Filho, Sebastião
2015-01-01
A comparison was made of measurements of neutral detergent fiber concentrations obtained with AOAC Method 2002.04 and modified methods using pressurized environments or direct use of industrial heat-stable α-amylase in samples of forage (n=37), concentrate (n=30), and ruminant feces (n=39). The following method modifications were tested: AOAC Method 2002.04 with replacement of the reflux apparatus with an autoclave or Ankom(220®) extractor and F57 filter bags, and AOAC Method 2002.04 with replacement of the standardization procedures for α-amylase by a single addition of industrial α-amylase [250 μL of Termamyl 2X 240 Kilo Novo Units (KNU)-T/g] prior to heating the neutral detergent solution. For the feces and forage samples, the results obtained with the modified methods with an autoclave or modification of α-amylase use were similar to those obtained using AOAC Method 2002.04, but the use of the Ankom220 extractor resulted in overestimated values. For the concentrate samples, the modified methods using an autoclave or Ankom220 extractor resulted in positive systematic errors. However, the method using industrial α-amylase resulted in systematic error and slope bias despite that the obtained values were close to those obtained with AOAC Method 2002.04.
Cardellicchio, Nicola; Di Leo, Antonella; Giandomenico, Santina; Santoro, Stefania
2006-01-01
Optimization of acid digestion method for mercury determination in marine biological samples (dolphin liver, fish and mussel tissues) using a closed vessel microwave sample preparation is presented. Five digestion procedures with different acid mixtures were investigated: the best results were obtained when the microwave-assisted digestion was based on sample dissolution with HNO3-H2SO4-K2Cr2O7 mixture. A comparison between microwave digestion and conventional reflux digestion shows there are considerable losses of mercury in the open digestion system. The microwave digestion method has been tested satisfactorily using two certified reference materials. Analytical results show a good agreement with certified values. The microwave digestion proved to be a reliable and rapid method for decomposition of biological samples in mercury determination.
Elongation measurement using 1-dimensional image correlation method
NASA Astrophysics Data System (ADS)
Phongwisit, Phachara; Kamoldilok, Surachart; Buranasiri, Prathan
2016-11-01
Aim of this paper was to study, setup, and calibrate an elongation measurement by using 1- Dimensional Image Correlation method (1-DIC). To confirm our method and setup correctness, we need calibration with other methods. In this paper, we used a small spring as a sample to find a result in terms of spring constant. With a fundamental of Image Correlation method, images of formed and deformed samples were compared to understand the difference between deformed process. By comparing the location of reference point on both image's pixel, the spring's elongation were calculated. Then, the results have been compared with the spring constants, which were found from Hooke's law. The percentage of 5 percent error has been found. This DIC method, then, would be applied to measure the elongation of some different kinds of small fiber samples.
Cleveland, Danielle; Brumbaugh, William G.; MacDonald, Donald D.
2017-01-01
Evaluations of sediment quality conditions are commonly conducted using whole-sediment chemistry analyses but can be enhanced by evaluating multiple lines of evidence, including measures of the bioavailable forms of contaminants. In particular, porewater chemistry data provide information that is directly relevant for interpreting sediment toxicity data. Various methods for sampling porewater for trace metals and dissolved organic carbon (DOC), which is an important moderator of metal bioavailability, have been employed. The present study compares the peeper, push point, centrifugation, and diffusive gradients in thin films (DGT) methods for the quantification of 6 metals and DOC. The methods were evaluated at low and high concentrations of metals in 3 sediments having different concentrations of total organic carbon and acid volatile sulfide and different particle-size distributions. At low metal concentrations, centrifugation and push point sampling resulted in up to 100 times higher concentrations of metals and DOC in porewater compared with peepers and DGTs. At elevated metal levels, the measured concentrations were in better agreement among the 4 sampling techniques. The results indicate that there can be marked differences among operationally different porewater sampling methods, and it is unclear if there is a definitive best method for sampling metals and DOC in porewater.
NASA Astrophysics Data System (ADS)
Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.
2005-05-01
Many face recognition algorithms/systems have been developed in the last decade and excellent performances have also been reported when there is a sufficient number of representative training samples. In many real-life applications such as passport identification, only one well-controlled frontal sample image is available for training. Under this situation, the performance of existing algorithms will degrade dramatically or may not even be implemented. We propose a component-based linear discriminant analysis (LDA) method to solve the one training sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples with lower dimension than the original image, but also consider the face detection localization error while training. After that, we propose a subspace LDA method, which is tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experiment results show that our proposed subspace LDA is efficient and overcomes the limitations in existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to draw the recognition decision. A FERET database is used for evaluating the proposed method and results are encouraging.
Tube and column agglutination technology for autocontrol testing.
Courtney, J E; Vincent, J L; Indrikovs, A J
2001-01-01
The incidence of positive autocontrol test results with column agglutination technology is a concern. This study investigates the incidence and significance of positive autocontrols in the ID Micro Typing System (gel) and the Gamma ReACT (ReACT). The study encompassed a total of 1021 randomly selected samples from patients and 95 samples from donors collected during 1 month. The autocontrol testing was carried out according to the manufacturer's instructions for the column agglutination tests. The tube method was carried out using low-ionic-strength solution (LISS). The direct antiglobulin test (DAT) was performed using the tube method, and further investigated with elution studies if warranted. Seventy-nine patient's samples (7.74%) had a positive autocontrol: the gel test, 72 (91.13%); ReACT, 21 (26.58%); and the tube method, 27 (34.18%). Of the 79 positive autocontrols, 44 samples had a negative DAT. Of the samples with positive DAT results, only one possessed a clinically significant antibody, anti-D. Moreover, the same sample also tested positive in all three methods. Column agglutination techniques have increased sensitivity for a positive autocontrol beyond the conventional tube method. However, ReACT and gel tests differ significantly in their frequency of positives. Investigation of the significance of a positive autocontrol in column agglutination technology when the conventional tube method is also positive is suggested.
Determining the 40K radioactivity in rocks using x-ray spectrometry
NASA Astrophysics Data System (ADS)
Pilakouta, M.; Kallithrakas-Kontos, N.; Nikolaou, G.
2017-09-01
In this paper we propose an experimental method for the determination of potassium-40 (40K) radioactivity in commercial granite samples using x-ray fluorescence (XRF). The method correlates the total potassium concentration (yield) in samples deduced by XRF analysis with the radioactivity of the sample due to the 40K radionuclide. This method can be used in an undergraduate student laboratory. A brief theoretical background and description of the method, as well as some results and their interpretation, are presented.
Goeman, Valerie R; Tinkler, Stacy H; Hammac, G Kenitra; Ruple, Audrey
2018-04-01
Environmental surveillance for Salmonella enterica can be used for early detection of contamination; thus routine sampling is an integral component of infection control programs in hospital environments. At the Purdue University Veterinary Teaching Hospital (PUVTH), the technique regularly employed in the large animal hospital for sample collection uses sterile gauze sponges for environmental sampling, which has proven labor-intensive and time-consuming. Alternative sampling methods use Swiffer brand electrostatic wipes for environmental sample collection, which are reportedly effective and efficient. It was hypothesized that use of Swiffer wipes for sample collection would be more efficient and less costly than the use of gauze sponges. A head-to-head comparison between the 2 sampling methods was conducted in the PUVTH large animal hospital and relative agreement, cost-effectiveness, and sampling efficiency were compared. There was fair agreement in culture results between the 2 sampling methods, but Swiffer wipes required less time and less physical effort to collect samples and were more cost-effective.
NASA Technical Reports Server (NTRS)
Slivon, L. E.; Hernon-Kenny, L. A.; Katona, V. R.; Dejarme, L. E.
1995-01-01
This report describes analytical methods and results obtained from chemical analysis of 31 charcoal samples in five sets. Each set was obtained from a single scrubber used to filter ambient air on board a Spacelab mission. Analysis of the charcoal samples was conducted by thermal desorption followed by gas chromatography/mass spectrometry (GC/MS). All samples were analyzed using identical methods. The method used for these analyses was able to detect compounds independent of their polarity or volatility. In addition to the charcoal samples, analyses of three Environmental Control and Life Support System (ECLSS) water samples were conducted specifically for trimethylamine.
Wang, Yan; Ma, Guangkai; An, Le; Shi, Feng; Zhang, Pei; Lalush, David S.; Wu, Xi; Pu, Yifei; Zhou, Jiliu; Shen, Dinggang
2017-01-01
Objective To obtain high-quality positron emission tomography (PET) image with low-dose tracer injection, this study attempts to predict the standard-dose PET (S-PET) image from both its low-dose PET (L-PET) counterpart and corresponding magnetic resonance imaging (MRI). Methods It was achieved by patch-based sparse representation (SR), using the training samples with a complete set of MRI, L-PET and S-PET modalities for dictionary construction. However, the number of training samples with complete modalities is often limited. In practice, many samples generally have incomplete modalities (i.e., with one or two missing modalities) that thus cannot be used in the prediction process. In light of this, we develop a semi-supervised tripled dictionary learning (SSTDL) method for S-PET image prediction, which can utilize not only the samples with complete modalities (called complete samples) but also the samples with incomplete modalities (called incomplete samples), to take advantage of the large number of available training samples and thus further improve the prediction performance. Results Validation was done on a real human brain dataset consisting of 18 subjects, and the results show that our method is superior to the SR and other baseline methods. Conclusion This work proposed a new S-PET prediction method, which can significantly improve the PET image quality with low-dose injection. Significance The proposed method is favorable in clinical application since it can decrease the potential radiation risk for patients. PMID:27187939
Mass load estimation errors utilizing grab sampling strategies in a karst watershed
Fogle, A.W.; Taraba, J.L.; Dinger, J.S.
2003-01-01
Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
Howe, Alan; Musgrove, Darren; Breuer, Dietmar; Gusbeth, Krista; Moritz, Andreas; Demange, Martine; Oury, Véronique; Rousset, Davy; Dorotte, Michel
2011-08-01
Historically, workplace exposure to the volatile inorganic acids hydrochloric acid (HCl) and nitric acid (HNO(3)) has been determined mostly by collection on silica gel sorbent tubes and analysis of the corresponding anions by ion chromatography (IC). However, HCl and HNO(3) can be present in workplace air in the form of mist as well as vapor, so it is important to sample the inhalable fraction of airborne particles. As sorbent tubes exhibit a low sampling efficiency for inhalable particles, a more suitable method was required. This is the first of two articles on "Evaluation of Sampling Methods for Measuring Exposure to Volatile Inorganic Acids in Workplace Air" and describes collaborative sampling exercises carried out to evaluate an alternative method for sampling HCl and HNO(3) using sodium carbonate-impregnated filters. The second article describes sampling capacity and breakthrough tests. The method was found to perform well and a quartz fiber filter impregnated with 500 μL of 1 M Na(2)CO(3) (10% (m/v) Na(2)CO(3)) was found to have sufficient sampling capacity for use in workplace air measurement. A pre-filter is required to remove particulate chlorides and nitrates that when present would otherwise result in a positive interference. A GSP sampler fitted with a plastic cone, a closed face cassette, or a plastic IOM sampler were all found to be suitable for mounting the pre-filter and sampling filter(s), but care has to be taken with the IOM sampler to ensure that the sampler is tightly closed to avoid leaks. HCl and HNO(3) can react with co-sampled particulate matter on the pre-filter, e.g., zinc oxide, leading to low results, and stronger acids can react with particulate chlorides and nitrates removed by the pre-filter to liberate HCl and HNO(3), which are subsequently collected on the sampling filter, leading to high results. However, although there is this potential for both positive and negative interferences in the measurement, these are unavoidable. The method studied has now been published in ISO 21438-2:2009.
Junge, Benjamin; Berghof-Jäger, Kornelia
2006-01-01
A method was developed for the detection of L. monocytogenes in food based on real-time polymerase chain reaction (PCR). This advanced PCR method was designed to reduce the time needed to achieve results from PCR reactions and to enable the user to monitor the amplification of the PCR product simultaneously, in real-time. After DNA isolation using the Roche/BIOTECON Diagnostics ShortPrep foodproof II Kit (formerly called Listeria ShortPrep Kit) designed for the rapid preparation of L. monocytogenes DNA for direct use in PCR, the real-time detection of L. monocytogenes DNA is performed by using the Roche/BIOTECON Diagnostics LightCycler foodproof L. monocytogenes Detection Kit. This kit provides primers and hybridization probes for sequence-specific detection, convenient premixed reagents, and different controls for reliable interpretation of results. For repeatability studies, 20 different foods, covering the 15 food groups recommended from the AOAC Research Institute (AOAC RI) for L. monocytogenes detection were analyzed: raw meats, fresh produce/vegetables, processed meats, seafood, egg and egg products, dairy (cultured/noncultured), spices, dry foods, fruit/juices, uncooked pasta, nuts, confectionery, pet food, food dyes and colorings, and miscellaneous. From each food 20, samples were inoculated with a low level (1-10 colony-forming units (CFU)/25 g) and 20 samples with a high level (10-50 CFU/25 g) of L. monocytogenes. Additionally, 5 uninoculated samples were prepared from each food. The food samples were examined with the test kits and in correlation with the cultural methods according to U.S. Food and Drug Administration (FDA) Bacteriological Analytical Manual (BAM) or U.S. Department of Agriculture (USDA)/Food Safety and Inspection Service (FSIS) Microbiology Laboratory Guidebook. After 48 h of incubation, the PCR method in all cases showed equal or better results than the reference cultural FDA/BAM or USDA/FSIS methods. Fifteen out of 20 tested food types gave exactly the same amount of positive samples for both methods in both inoculation levels. For 5 out of 20 foodstuffs, the PCR method resulted in more positives than the reference method after 48 h of incubation. Following AOAC RI definition, these were false positives because they were not confirmed by the reference method (false-positive rate for low inoculated foodstuffs: 5.4%; for high inoculated foodstuffs: 7.1%). Without calculating these unconfirmed positives, the PCR method showed equal sensitivity results compared to the alternative method. With the unconfirmed PCR-positives included into the calculations, the alternative PCR method showed a higher sensitivity than the microbiological methods (low inoculation level: 100 vs 98.0%; sensitivity rate: 1; high inoculation level: 99.7 vs 97.7%; sensitivity rate, 1). All in-house and independently tested uninoculated food samples were negative for L. monocytogenes. The ruggedness testing of both ShortPrep foodproof II Kit and Roche/BIOTECON LightCycler foodproof L. monocytogenes Detection Kit showed no noteworthy influences to any variation of the parameters component concentration, apparatus comparison, tester comparison, and sample volumes. In total, 102 L. monocytogenes isolates (cultures and pure DNA) were tested and detected for the inclusivity study, including all isolates claimed by the AOAC RI. The exclusivity study included 60 non-L. monocytogenes bacteria. None of the tested isolates gave a false-positive result; specificity was 100%. Three different lots were tested in the lot-to-lot study. All 3 lots gave equal results. The stability study was subdivided into 3 parts: long-term study, stress test, and freeze-defrost test. Three lots were tested in 4 time intervals within a period of 13 months. They all gave comparable results for all test intervals. For the stress test, LightCycler L. monocytogenes detection mixes were stored at different temperatures and tested at different time points during 1 month. Stable results were produced at all storage temperatures. The freeze-defrost analysis showed no noteworthy aggravation of test results. The independent validation study examined by Campden and Chorleywood Food Research Association Group (CCFRA) demonstrated again that the LightCycler L. monocytogenes detection system shows a comparable sensitivity to reference methods. With both the LightCycler PCR and BAM methods, 19 out of 20 inoculated food samples were detected. The 24 h PCR results generated by the LightCycler system corresponded directly with the FDA/BAM culture results. However, the 48 h PCR results did not relate exactly to the FDA/BAM results, as one sample found to be positive by the 48 h PCR could not be culturally confirmed and another sample which was negative by the 48 h PCR was culturally positive.
NASA Astrophysics Data System (ADS)
Liu, Xiaodong
2017-08-01
A sampling method by using scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, thus the novel sampling method is very easy and simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for characterization of underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. While for the sampling points outside the scatterers, we show that the indicator functional decays like the bessel functions as the sampling point goes away from the boundary of the scatterers. We also show that the proposed indicator functional continuously depends on the scattering amplitude, this further implies that the novel sampling method is extremely stable with respect to errors in the data. Different to the classical sampling method such as the linear sampling method or the factorization method, from the numerical point of view, the novel indicator takes its maximum near the boundary of the underlying target and decays like the bessel functions as the sampling points go away from the boundary. The numerical simulations also show that the proposed sampling method can deal with multiple multiscale case, even the different components are close to each other.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
Tallman, Sean D; Winburn, Allysha P
2015-09-01
Ancestry assessment from the postcranial skeleton presents a significant challenge to forensic anthropologists. However, metric dimensions of the femur subtrochanteric region are believed to distinguish between individuals of Asian and non-Asian descent. This study tests the discriminatory power of subtrochanteric shape using modern samples of 128 Thai and 77 White American males. Results indicate that the samples' platymeric index distributions are significantly different (p≤0.001), with the Thai platymeric index range generally lower and the White American range generally higher. While the application of ancestry assessment methods developed from Native American subtrochanteric data results in low correct classification rates for the Thai sample (50.8-57.8%), adapting these methods to the current samples leads to better classification. The Thai data may be more useful in forensic analysis than previously published subtrochanteric data derived from Native American samples. Adapting methods to include appropriate geographic and contemporaneous populations increases the accuracy of femur subtrochanteric ancestry methods. © 2015 American Academy of Forensic Sciences.
Petrova, Darinka Todorova; Cocisiu, Gabriela Ariadna; Eberle, Christoph; Rhode, Karl-Heinz; Brandhorst, Gunnar; Walson, Philip D; Oellerich, Michael
2013-09-01
The aim of this study was to develop a novel method for automated quantification of cell-free hemoglobin (fHb) based on the HI (Roche Diagnostics). The novel fHb method based on the HI was correlated with fHb measured using the triple wavelength methods of both Harboe [fHb, g/L = (0.915 * HI + 2.634)/100] and Fairbanks et al. [fHb, g/L = (0.917 * HI + 2.131)/100]. fHb concentrations were estimated from the HI using the Roche Modular automated platform in self-made and commercially available quality controls, as well as samples from a proficiency testing scheme (INSTAND). The fHb using Roche automated HI results were then compared to results obtained using the traditional spectrophotometric assays for one hundred plasma samples with varying degrees of hemolysis, lipemia and/or bilirubinemia. The novel method using automated HI quantification on the Roche Modular clinical chemistry platform correlated well with results using the classical methods in the 100 patient samples (Harboe: r = 0.9284; Fairbanks et al.: r = 0.9689) and recovery was good for self-made controls. However, commercially available quality controls showed poor recovery due to an unidentified matrix problem. The novel method produced reliable determination of fHb in samples without interferences. However, poor recovery using commercially available fHb quality control samples currently greatly limits its usefulness. © 2013.
Estimating removal rates of bacteria from poultry carcasses using two whole-carcass rinse volumes
USDA-ARS?s Scientific Manuscript database
Rinse sampling is a common method for determining the level of microbial contamination on poultry carcasses. One of the advantages of rinse sampling, over other carcass sampling methods, is that the results can be used for both process control applications and to estimate the total microbial level o...
Zhao, Xiaoyan; Qureshi, Ferhan; Eastman, P Scott; Manning, William C; Alexander, Claire; Robinson, William H; Hesterberg, Lyndal K
2012-04-30
Variability in pre-analytical blood sampling and handling can significantly impact results obtained in quantitative immunoassays. Understanding the impact of these variables is critical for accurate quantification and validation of biomarker measurements. Particularly, in the design and execution of large clinical trials, even small differences in sample processing and handling can have dramatic effects in analytical reliability, results interpretation, trial management and outcome. The effects of two common blood sampling methods (serum vs. plasma) and two widely-used serum handling methods (on the clot with ambient temperature shipping, "traditional", vs. centrifuged with cold chain shipping, "protocol") on protein and autoantibody concentrations were examined. Matched serum and plasma samples were collected from 32 rheumatoid arthritis (RA) patients representing a wide range of disease activity status. Additionally, a set of matched serum samples with two sample handling methods was collected. One tube was processed per manufacturer's instructions and shipped overnight on cold packs (protocol). The matched tube, without prior centrifugation, was simultaneously shipped overnight at ambient temperatures (traditional). Upon delivery, the traditional tube was centrifuged. All samples were subsequently aliquoted and frozen prior to analysis of protein and autoantibody biomarkers. Median correlation between paired serum and plasma across all autoantibody assays was 0.99 (0.98-1.00) with a median % difference of -3.3 (-7.5 to 6.0). In contrast, observed protein biomarker concentrations were significantly affected by sample types, with median correlation of 0.99 (0.33-1.00) and a median % difference of -10 (-55 to 23). When the two serum collection/handling methods were compared, the median correlation between paired samples for autoantibodies was 0.99 (0.91-1.00) with a median difference of 4%. In contrast, significant increases were observed in protein biomarker concentrations among certain biomarkers in samples processed with the 'traditional' method. Autoantibody quantification appears robust to both sample type (plasma vs. serum) and pre-analytical sample collection/handling methods (protocol vs. traditional). In contrast, for non-antibody protein biomarker concentrations, sample type had a significant impact; plasma samples generally exhibit decreased protein biomarker concentrations relative to serum. Similarly, sample handling significantly impacted the variability of protein biomarker concentrations. When biomarker concentrations are combined algorithmically into a single test score such as a multi-biomarker disease activity test for rheumatoid arthritis (MBDA), changes in protein biomarker concentrations may result in a bias of the score. These results illustrate the importance of characterizing pre-analytical methodology, sample type, sample processing and handling procedures for clinical testing in order to ensure test accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.
Ardizzoni, E; Mulders, W; Sanchez-Padilla, E; Varaine, F; de Jong, B C; Rigouts, L
2014-08-01
Long transportation times of samples to culture laboratories can lead to higher contamination rates and significant loss of viability, resulting in lower culture positivity rates. Thin-layer agar (TLA) is a sensitive culture method for the isolation of Mycobacterium tuberculosis that has been optimised with N-acetyl-L-cysteine-sodium hydroxide (NALC-NaOH) decontaminated samples. The combination of the TLA culture method and other decontamination procedures has not been extensively validated. Among 390 smear-positive samples, we compared the culture positivity of samples decontaminated using the Petroff method vs. NALC-NaOH neutralised with phosphate buffer (PBS), applied to samples preserved with cetylpyridinium chloride (CPC) or CPC-free, and then of CPC-preserved samples decontaminated with NALC-NaOH neutralised using Difco neutralising buffer. The sediments were inoculated on TLA, and then on MGIT 960 or Löwenstein-Jensen (LJ) as gold standards. Decontamination with NALC-NaOH yielded higher culture positivity in TLA than in the Petroff method, which was further enhanced by neutralising CPC with the Difco buffer. Surprisingly, culture positivity on LJ also increased after using Difco buffer, suggesting that CPC may not be completely neutralised in egg-based medium. After transportation in CPC, decontamination using NALC-NaOH followed by neutralisation using Difco buffer resulted in the best recovery rates for samples inoculated on TLA and on LJ.
77 FR 27135 - HACCP Systems Validation
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-09
.... Comments may be submitted by either of the following methods: Federal eRulemaking Portal: This Web site... those CCPs and the method of monitoring of them and provides certificates of analysis that specify the sampling method that the supplier uses and the results of that sampling. The receiving establishment should...
This report compares simultaneous results from three woodstove sampling methods and evaluates particulate emission rates of conventional and Oregon-certified catalytic and noncatalytic woodstoves in six Portland, OR, houses. EPA Methods 5G and 5H and the field emission sampler (A...
Sajnóg, Adam; Hanć, Anetta; Barałkiewicz, Danuta
2018-05-15
Analysis of clinical specimens by imaging techniques allows to determine the content and distribution of trace elements on the surface of the examined sample. In order to obtain reliable results, the developed procedure should be based not only on the properly prepared sample and performed calibration. It is also necessary to carry out all phases of the procedure in accordance with the principles of chemical metrology whose main pillars are the use of validated analytical methods, establishing the traceability of the measurement results and the estimation of the uncertainty. This review paper discusses aspects related to sampling, preparation and analysis of clinical samples by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) with emphasis on metrological aspects, i.e. selected validation parameters of the analytical method, the traceability of the measurement result and the uncertainty of the result. This work promotes the introduction of metrology principles for chemical measurement with emphasis to the LA-ICP-MS which is the comparative method that requires studious approach to the development of the analytical procedure in order to acquire reliable quantitative results. Copyright © 2018 Elsevier B.V. All rights reserved.
[Optimized application of nested PCR method for detection of malaria].
Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C
2017-04-28
Objective To optimize the application of the nested PCR method for the detection of malaria according to the working practice, so as to improve the efficiency of malaria detection. Methods Premixing solution of PCR, internal primers for further amplification and new designed primers that aimed at two Plasmodium ovale subspecies were employed to optimize the reaction system, reaction condition and specific primers of P . ovale on basis of routine nested PCR. Then the specificity and the sensitivity of the optimized method were analyzed. The positive blood samples and examination samples of malaria were detected by the routine nested PCR and the optimized method simultaneously, and the detection results were compared and analyzed. Results The optimized method showed good specificity, and its sensitivity could reach the pg to fg level. The two methods were used to detect the same positive malarial blood samples simultaneously, the results indicated that the PCR products of the two methods had no significant difference, but the non-specific amplification reduced obviously and the detection rates of P . ovale subspecies improved, as well as the total specificity also increased through the use of the optimized method. The actual detection results of 111 cases of malarial blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, and those of the optimized method were both 93.48%, and there was no statistically significant difference between the two methods in the sensitivity ( P > 0.05), but there was a statistically significant difference between the two methods in the specificity ( P < 0.05). Conclusion The optimized PCR can improve the specificity without reducing the sensitivity on the basis of the routine nested PCR, it also can save the cost and increase the efficiency of malaria detection as less experiment links.
Effects of Sample Preparation on the Infrared Reflectance Spectra of Powders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brauer, Carolyn S.; Johnson, Timothy J.; Myers, Tanya L.
2015-05-22
While reflectance spectroscopy is a useful tool in identifying molecular compounds, laboratory measurement of solid (particularly powder) samples often is confounded by sample preparation methods. For example, both the packing density and surface roughness can have an effect on the quantitative reflectance spectra of powdered samples. Recent efforts in our group have focused on developing standard methods for measuring reflectance spectra that accounts for sample preparation, as well as other factors such as particle size and provenance. In this work, the effect of preparation method on sample reflectivity was investigated by measuring the directional-hemispherical spectra of samples that were hand-packedmore » as well as pressed into pellets using an integrating sphere attached to a Fourier transform infrared spectrometer. The results show that the methods used to prepare the sample have a substantial effect on the measured reflectance spectra, as do other factors such as particle size.« less
Effects of sample preparation on the infrared reflectance spectra of powders
NASA Astrophysics Data System (ADS)
Brauer, Carolyn S.; Johnson, Timothy J.; Myers, Tanya L.; Su, Yin-Fong; Blake, Thomas A.; Forland, Brenda M.
2015-05-01
While reflectance spectroscopy is a useful tool for identifying molecular compounds, laboratory measurement of solid (particularly powder) samples often is confounded by sample preparation methods. For example, both the packing density and surface roughness can have an effect on the quantitative reflectance spectra of powdered samples. Recent efforts in our group have focused on developing standard methods for measuring reflectance spectra that accounts for sample preparation, as well as other factors such as particle size and provenance. In this work, the effect of preparation method on sample reflectivity was investigated by measuring the directional-hemispherical spectra of samples that were hand-loaded as well as pressed into pellets using an integrating sphere attached to a Fourier transform infrared spectrometer. The results show that the methods used to prepare the sample can have a substantial effect on the measured reflectance spectra, as do other factors such as particle size.
Bakal, Tomas; Janata, Jiri; Sabova, Lenka; Grabic, Roman; Zlabek, Vladimir; Najmanova, Lucie
2018-06-16
A robust and widely applicable method for sampling of aquatic microbial biofilm and further sample processing is presented. The method is based on next-generation sequencing of V4-V5 variable regions of 16S rRNA gene and further statistical analysis of sequencing data, which could be useful not only to investigate taxonomic composition of biofilm bacterial consortia but also to assess aquatic ecosystem health. Five artificial materials commonly used for biofilm growth (glass, stainless steel, aluminum, polypropylene, polyethylene) were tested to determine the one giving most robust and reproducible results. The effect of used sampler material on total microbial composition was not statistically significant; however, the non-plastic materials (glass, metal) gave more stable outputs without irregularities among sample parallels. The bias of the method is assessed with respect to the employment of a non-quantitative step (PCR amplification) to obtain quantitative results (relative abundance of identified taxa). This aspect is often overlooked in ecological and medical studies. We document that sequencing of a mixture of three merged primary PCR reactions for each sample and further evaluation of median values from three technical replicates for each sample enables to overcome this bias and gives robust and repeatable results well distinguishing among sampling localities and seasons.
Face recognition based on symmetrical virtual image and original training image
NASA Astrophysics Data System (ADS)
Ke, Jingcheng; Peng, Yali; Liu, Shigang; Li, Jun; Pei, Zhao
2018-02-01
In face representation-based classification methods, we are able to obtain high recognition rate if a face has enough available training samples. However, in practical applications, we only have limited training samples to use. In order to obtain enough training samples, many methods simultaneously use the original training samples and corresponding virtual samples to strengthen the ability of representing the test sample. One is directly using the original training samples and corresponding mirror samples to recognize the test sample. However, when the test sample is nearly symmetrical while the original training samples are not, the integration of the original training and mirror samples might not well represent the test samples. To tackle the above-mentioned problem, in this paper, we propose a novel method to obtain a kind of virtual samples which are generated by averaging the original training samples and corresponding mirror samples. Then, the original training samples and the virtual samples are integrated to recognize the test sample. Experimental results on five face databases show that the proposed method is able to partly overcome the challenges of the various poses, facial expressions and illuminations of original face image.
Mobbing Experiences of Instructors: Causes, Results, and Solution Suggestions
ERIC Educational Resources Information Center
Celep, Cevat; Konakli, Tugba
2013-01-01
In this study, it was aimed to investigate possible mobbing problems in universities, their causes and results, and to attract attention to precautions that can be taken. Phenomenology as one of the qualitative research methods was used in the study. Sample group of the study was selected through the criteria sampling method and eight instructors…
Elemental Analysis of Beryllium Samples Using a Microzond-EGP-10 Unit
NASA Astrophysics Data System (ADS)
Buzoverya, M. E.; Karpov, I. A.; Gorodnov, A. A.; Shishpor, I. V.; Kireycheva, V. I.
2017-12-01
Results concerning the structural and elemental analysis of beryllium samples obtained via different technologies using a Microzond-EGP-10 unit with the help of the PIXE and RBS methods are presented. As a result, the overall chemical composition and the nature of inclusions were determined. The mapping method made it possible to reveal the structural features of beryllium samples: to select the grains of the main substance having different size and chemical composition, to visualize the interfaces between the regions of different composition, and to describe the features of the distribution of impurities in the samples.
Spectroscopic diagnostics for bacteria in biologic sample
El-Sayed, Mostafa A.; El-Sayed, Ivan H.
2002-01-01
A method to analyze and diagnose specific bacteria in a biologic sample using spectroscopy is disclosed. The method includes obtaining the spectra of a biologic sample of a non-infected patient for use as a reference, subtracting the reference from the spectra of an infected sample, and comparing the fingerprint regions of the resulting differential spectrum with reference spectra of bacteria in saline. Using this diagnostic technique, specific bacteria can be identified sooner and without culturing, bacteria-specific antibiotics can be prescribed sooner, resulting in decreased likelihood of antibiotic resistance and an overall reduction of medical costs.
NASA Astrophysics Data System (ADS)
Miyaoka, Teiji; Isono, Yoshimi; Setani, Kaoru; Sakai, Kumiko; Yamada, Ichimaro; Sato, Yoshiaki; Gunji, Shinobu; Matsui, Takao
2007-06-01
Institute of Accelerator Analysis Ltd. (IAA) is the first Contract Research Organization in Japan providing Accelerator Mass Spectrometry (AMS) analysis services for carbon dating and bioanalysis works. The 3 MV AMS machines are maintained by validated analysis methods using multiple control compounds. It is confirmed that these AMS systems have reliabilities and sensitivities enough for each objective. The graphitization of samples for bioanalysis is prepared by our own purification lines including the measurement of total carbon content in the sample automatically. In this paper, we present the use of AMS analysis in human mass balance and metabolism profiling studies with IAA 3 MV AMS, comparing results obtained from the same samples with liquid scintillation counting (LSC). Human samples such as plasma, urine and feces were obtained from four healthy volunteers orally administered a 14C-labeled drug Y-700, a novel xanthine oxidase inhibitor, of which radioactivity was about 3 MBq (85 μCi). For AMS measurement, these samples were diluted 100-10,000-fold with pure-water or blank samples. The results indicated that AMS method had a good correlation with LSC method (e.g. plasma: r = 0.998, urine: r = 0.997, feces: r = 0.997), and that the drug recovery in the excreta exceeded 92%. The metabolite profiles of plasma, urine and feces obtained with HPLC-AMS corresponded to radio-HPLC results measured at much higher radioactivity level. These results revealed that AMS analysis at IAA is useful to measure 14C-concentration in bioanalysis studies at very low radioactivity level.
Salter, Robert; Holmes, Steven; Legg, David; Coble, Joel; George, Bruce
2012-02-01
Pork tissue samples that tested positive and negative by the Charm II tetracycline test screening method in the slaughter plant laboratory were tested with the modified AOAC International liquid chromatography tandem mass spectrometry (LC-MS-MS) method 995.09 to determine the predictive value of the screening method at detecting total tetracyclines at 10 μg/kg of tissue, in compliance with Russian import regulations. There were 218 presumptive-positive tetracycline samples of 4,195 randomly tested hogs. Of these screening test positive samples, 83% (182) were positive, >10 μg/kg by LC-MS-MS; 12.8% (28) were false violative, greater than limit of detection (LOD) but <10 μg/kg; and 4.2% (8) were not detected at the LC-MS-MS LOD. The 36 false-violative and not-detected samples represent 1% of the total samples screened. Twenty-seven of 30 randomly selected tetracycline screening negative samples tested below the LC-MS-MS LOD, and 3 samples tested <3 μg/kg chlortetracycline. Results indicate that the Charm II tetracycline test is effective at predicting hogs containing >10 μg/kg total tetracyclines in compliance with Russian import regulations.
Garbarino, John R.; Bednar, Anthony J.; Burkhardt, Mark R.
2002-01-01
Analytical methods for the determination of arsenite [As(III)], arsenate [As(V)], dimethylarsinate (DMA), monomethylarsonate (MMA), and roxarsone in filtered natural-water samples are described. Various analytical methods can be used for the determination, depending on the arsenic species being determined. Arsenic concentration is determined by using inductively coupled plasma-mass spectrometry (ICP-MS) as an arsenic-specific detector for all methods. Laboratory-speciation methods are described that use an ion chromatographic column to separate the arsenic species; the column length, column packing, and mobile phase are dependent on the species of interest. Regardless of the separation technique, the arsenic species are introduced into plasma by eithe rpneumatic nebulization or arsine generation. Analysis times range from 2 to 8 minutes and method detection limits range from 0.1 to 0.6 microgram-arsenic per liter (ug-As/L), 10 to 60 picograms absolute (for a 100-microliter injection), depending on the arsenic species determined and the analytical method used. A field-generation specciation method also is described that uses a strong anion exchange cartridge to separate As(III) from As(V) in the field. As(III) in the eluate and the As(V) in the cartridge extract are determined by direct nebulization ICP-MS. Methylated arsenic species that also are retained on the cartridge will positively bias As(V) results without further laboratory separations. The method detection limit for field speciation is 0.3 ug-As/L. The distribution of arsenic species must be preserved in the field to eliminate changes caused by photochemical oxidation or metal oxyhydroxide precipitation. Preservation techniques, such as refrigeration, the addition of acides, or the additoin of ethylene-diaminetetraacetic acid (EDTA) and the effects of ambient light were tested. Of the preservatives evaluated, EDTA was found to work best with the laboratory- and field-speciation methods for all sample matrices tested. Storing the samples in opaque polytethylene bottles eliminated the effects of photochemical oxidation. The percentage change in As(III):As(V) ratios for an EDTA-preserved acid mine drainage (AMD) sample and ground-water sample during a 3-month period was -5 percent and +3 percent, respectively. The bias and variability of the methods were evaluated by comparing results for total arsenic and As(III), As(V), DMA, and MMA concentrations in ground water, AMD, and surface water. Seventy-one ground-water, 10 AMD, and 24 surface-water samples were analyzed. Concentrations in ground-water samples reached 720 ug-As/L for As(III) and 1080 ug-As/L for As(V); AMD samples reached 12800 ug-As/L for As(III) and 7050 ug-As/L for As(V); and surface-water samples reached 5 ug-As/L for As(III) and As(V). Inorganic arsenic species distribution in the samples ranged from 0 to 90 percent As(III). DMA and MMA were present only in surface-water samples from agricultural areas where the herbicide monosodium methylarsonate was applied; concentrations never exceeded 6 ug-As/L. Statistical analyses indicated that the difference between As(III) and As(V) concentrations for samples preserved with EDTA in opaque bottles and field-speciation results were analytically insignificant at the 95-percent confidence interval. There was no significant difference among the methods tested for total arsenic concentration. Percentage recovery for field samples spiked at 50 ug-As/L and analyzed by the laboratory-speciation method (n=2) ranged from 82 to 100 percent for As(III), 97 to 102 percent for As(V), 90 to 104 percent for DMA, and 81 to 96 percent for MMA; recoveries for samples spiked at 100 ug-As/L and analyzed by the field-speciation method ranged from 102 to 107 percent for As(III) and 105 to 106 percent for As(V). Laboratory-speciation results for Environment Canada reference material SLRS-2 closely matched reported concentrations. Laboratory-speciation metho
Cárdenas Valdivia, A; Vereda Alonso, E; López Guerrero, M M; Gonzalez-Rodriguez, J; Cano Pavón, J M; García de Torres, A
2018-03-01
A green and simple method has been proposed in this work for the simultaneous determination of V, Ni and Fe in fuel ash samples by solid sampling high resolution continuum source graphite furnace atomic absorption spectrometry (SS HR CS GFAAS). The application of fast programs in combination with direct solid sampling allows eliminating pretreatment steps, involving minimal manipulation of sample. Iridium treated platforms were applied throughout the present study, enabling the use of aqueous standards for calibration. Correlation coefficients for the calibration curves were typically better than 0.9931. The concentrations found in the fuel ash samples analysed ranged from 0.66% to 4.2% for V, 0.23-0.7% for Ni and 0.10-0.60% for Fe. Precision (%RSD) were 5.2%, 10.0% and 9.8% for V, Ni and Fe, respectively, obtained as the average of the %RSD of six replicates of each fuel ash sample. The optimum conditions established were applied to the determination of the target analytes in fuel ash samples. In order to test the accuracy and applicability of the proposed method in the analysis of samples, five ash samples from the combustion of fuel in power stations, were analysed. The method accuracy was evaluated by comparing the results obtained using the proposed method with the results obtained by ICP OES previous acid digestion. The results showed good agreement between them. The goal of this work has been to develop a fast and simple methodology that permits the use of aqueous standards for straightforward calibration and the simultaneous determination of V, Ni and Fe in fuel ash samples by direct SS HR CS GFAAS. Copyright © 2017 Elsevier B.V. All rights reserved.
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
NASA Astrophysics Data System (ADS)
Oliver, Karen D.; Cousett, Tamira A.; Whitaker, Donald A.; Smith, Luther A.; Mukerjee, Shaibal; Stallings, Casson; Thoma, Eben D.; Alston, Lillian; Colon, Maribel; Wu, Tai; Henkle, Stacy
2017-08-01
A sample integrity evaluation and an interlaboratory comparison were conducted in application of U.S. Environmental Protection Agency (EPA) Methods 325A and 325B for diffusively monitoring benzene and other selected volatile organic compounds (VOCs) using Carbopack X sorbent tubes. To evaluate sample integrity, VOC samples were refrigerated for up to 240 days and analyzed using thermal desorption/gas chromatography-mass spectrometry at the EPA Office of Research and Development laboratory in Research Triangle Park, NC, USA. For the interlaboratory comparison, three commercial analytical laboratories were asked to follow Method 325B when analyzing samples of VOCs that were collected in field and laboratory settings for EPA studies. Overall results indicate that the selected VOCs collected diffusively on sorbent tubes generally were stable for 6 months or longer when samples were refrigerated. This suggests the specified maximum 30-day storage time of VOCs collected diffusively on Carbopack X passive samplers and analyzed using Method 325B might be able to be relaxed. Interlaboratory comparison results were in agreement for the challenge samples collected diffusively in an exposure chamber in the laboratory, with most measurements within ±25% of the theoretical concentration. Statistically significant differences among laboratories for ambient challenge samples were small, less than 1 part per billion by volume (ppbv). Results from all laboratories exhibited good precision and generally agreed well with each other.
Sato, Yuka; Seimiya, Masanori; Yoshida, Toshihiko; Sawabe, Yuji; Hokazono, Eisaku; Osawa, Susumu; Matsushita, Kazuyuki
2017-01-01
Background The indocyanine green retention rate is important for assessing the severity of liver disorders. In the conventional method, blood needs to be collected twice. In the present study, we developed an automated indocyanine green method that does not require blood sampling before intravenous indocyanine green injections and is applicable to an automated biochemical analyser. Methods The serum samples of 471 patients collected before and after intravenous indocyanine green injections and submitted to the clinical laboratory of our hospital were used as samples. The standard procedure established by the Japan Society of Hepatology was used as the standard method. In the automated indocyanine green method, serum collected after an intravenous indocyanine green injection was mixed with the saline reagent containing a surfactant, and the indocyanine green concentration was measured at a dominant wavelength of 805 nm and a complementary wavelength of 884 nm. Results The coefficient of variations of the within- and between-run reproducibilities of this method were 2% or lower, and dilution linearity passing the origin was noted up to 10 mg/L indocyanine green. The reagent was stable for four weeks or longer. Haemoglobin, bilirubin and chyle had no impact on the results obtained. The correlation coefficient between the standard method (x) and this method (y) was r=0.995; however, slight divergence was noted in turbid samples. Conclusion Divergence in turbid samples may have corresponded to false negativity with the standard procedure. Our method may be highly practical because blood sampling before indocyanine green loading is unnecessary and measurements are simple.
Conceptual data sampling for breast cancer histology image classification.
Rezk, Eman; Awan, Zainab; Islam, Fahad; Jaoua, Ali; Al Maadeed, Somaya; Zhang, Nan; Das, Gautam; Rajpoot, Nasir
2017-10-01
Data analytics have become increasingly complicated as the amount of data has increased. One technique that is used to enable data analytics in large datasets is data sampling, in which a portion of the data is selected to preserve the data characteristics for use in data analytics. In this paper, we introduce a novel data sampling technique that is rooted in formal concept analysis theory. This technique is used to create samples reliant on the data distribution across a set of binary patterns. The proposed sampling technique is applied in classifying the regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It is also competing with other sampling methods in terms of sample size and sample quality represented in classification accuracy and F1 measure. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zainathan, S C; Carson, J; Crane, M St J; Nowak, B F
2013-04-01
The use of swabs relative to organs as a sample collection method for the detection of Tasmanian salmon reovirus (TSRV) in farmed Tasmanian Atlantic salmon, Salmo salar L., was evaluated by RT-qPCR. Evaluation of individual and pooled sample collection (organs vs swabs) was carried out to determine the sensitivity of the collection methods and the effect of pooling of samples for the detection of TSRV. Detection of TSRV in individual samples was as sensitive when organs were sampled compared to swabs, and in pooled samples, organs demonstrated a sensitivity of one 10-fold dilution higher than sampling of pooled swabs. Storage of swabs at 4 °C for t = 24 h demonstrated results similar to those at t = 0. Advantages of using swabs as a preferred sample collection method for the detection of TSRV compared to organ samples are evident from these experimental trials. © 2012 Blackwell Publishing Ltd.
Maes, Sharon; Huu, Son Nguyen; Heyndrickx, Marc; Weyenberg, Stephanie van; Steenackers, Hans; Verplaetse, Alex; Vackier, Thijs; Sampers, Imca; Raes, Katleen; Reu, Koen De
2017-12-01
Biofilms are an important source of contamination in food companies, yet the composition of biofilms in practice is still mostly unknown. The chemical and microbiological characterization of surface samples taken after cleaning and disinfection is very important to distinguish free-living bacteria from the attached bacteria in biofilms. In this study, sampling methods that are potentially useful for both chemical and microbiological analyses of surface samples were evaluated. In the manufacturing facilities of eight Belgian food companies, surfaces were sampled after cleaning and disinfection using two sampling methods: the scraper-flocked swab method and the sponge stick method. Microbiological and chemical analyses were performed on these samples to evaluate the suitability of the sampling methods for the quantification of extracellular polymeric substance components and microorganisms originating from biofilms in these facilities. The scraper-flocked swab method was most suitable for chemical analyses of the samples because the material in these swabs did not interfere with determination of the chemical components. For microbiological enumerations, the sponge stick method was slightly but not significantly more effective than the scraper-flocked swab method. In all but one of the facilities, at least 20% of the sampled surfaces had more than 10 2 CFU/100 cm 2 . Proteins were found in 20% of the chemically analyzed surface samples, and carbohydrates and uronic acids were found in 15 and 8% of the samples, respectively. When chemical and microbiological results were combined, 17% of the sampled surfaces were contaminated with both microorganisms and at least one of the analyzed chemical components; thus, these surfaces were characterized as carrying biofilm. Overall, microbiological contamination in the food industry is highly variable by food sector and even within a facility at various sampling points and sampling times.
Field evaluation of personal sampling methods for multiple bioaerosols.
Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine
2015-01-01
Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.
Coes, Alissa L.; Paretti, Nicholas V.; Foreman, William T.; Iverson, Jana L.; Alvarez, David A.
2014-01-01
A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler, that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19–23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method.
Coes, Alissa L; Paretti, Nicholas V; Foreman, William T; Iverson, Jana L; Alvarez, David A
2014-03-01
A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler, that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19-23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method. Published by Elsevier B.V.
Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P
1995-01-01
This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long term follow up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling methods, sample size had impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, obviously due to (unconsciously) not including small and large nuclei. Testing prognostic value of a series of cut off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides a better prognostic value in patients with invasive breast cancer.
Sengüven, Burcu; Baris, Emre; Oygur, Tulin; Berktas, Mehmet
2014-01-01
Aim: Discussing a protocol involving xylene-ethanol deparaffinization on slides followed by a kit-based extraction that allows for the extraction of high quality DNA from FFPE tissues. Methods: DNA was extracted from the FFPE tissues of 16 randomly selected blocks. Methods involving deparaffinization on slides or tubes, enzyme digestion overnight or for 72 hours and isolation using phenol chloroform method or a silica-based commercial kit were compared in terms of yields, concentrations and the amplifiability. Results: The highest yield of DNA was produced from the samples that were deparaffinized on slides, digested for 72 hours and isolated with a commercial kit. Samples isolated with the phenol-chloroform method produced DNA of lower purity than the samples that were purified with kit. The samples isolated with the commercial kit resulted in better PCR amplification. Conclusion: Silica-based commercial kits and deparaffinized on slides should be considered for DNA extraction from FFPE. PMID:24688314
NASA Astrophysics Data System (ADS)
Sarparandeh, Mohammadali; Hezarkhani, Ardeshir
2017-12-01
The use of efficient methods for data processing has always been of interest to researchers in the field of earth sciences. Pattern recognition techniques are appropriate methods for high-dimensional data such as geochemical data. Evaluation of the geochemical distribution of rare earth elements (REEs) requires the use of such methods. In particular, the multivariate nature of REE data makes them a good target for numerical analysis. The main subject of this paper is application of unsupervised pattern recognition approaches in evaluating geochemical distribution of REEs in the Kiruna type magnetite-apatite deposit of Se-Chahun. For this purpose, 42 bulk lithology samples were collected from the Se-Chahun iron ore deposit. In this study, 14 rare earth elements were measured with inductively coupled plasma mass spectrometry (ICP-MS). Pattern recognition makes it possible to evaluate the relations between the samples based on all these 14 features, simultaneously. In addition to providing easy solutions, discovery of the hidden information and relations of data samples is the advantage of these methods. Therefore, four clustering methods (unsupervised pattern recognition) - including a modified basic sequential algorithmic scheme (MBSAS), hierarchical (agglomerative) clustering, k-means clustering and self-organizing map (SOM) - were applied and results were evaluated using the silhouette criterion. Samples were clustered in four types. Finally, the results of this study were validated with geological facts and analysis results from, for example, scanning electron microscopy (SEM), X-ray diffraction (XRD), ICP-MS and optical mineralogy. The results of the k-means clustering and SOM methods have the best matches with reality, with experimental studies of samples and with field surveys. Since only the rare earth elements are used in this division, a good agreement of the results with lithology is considerable. It is concluded that the combination of the proposed methods and geological studies leads to finding some hidden information, and this approach has the best results compared to using only one of them.
NASA Astrophysics Data System (ADS)
Hadi, S.; Artanti, A. N.; Rinanto, Y.; Wahyuni, D. S. C.
2018-04-01
Curcuminoid, consisting of curcumin, demethoxycurcumin and bis demethoxycurcumin, is the major compound in Curcuma longa L. and Curcuma xanthorrhiza rhizome. It has been known to have a potent antioxidants, anticancer, antibacteria activity. Those rhizomes needs to be dried beforehand which influenced the active compounds concentration. The present work was conducted to assess the curcuminoid content of C. longa L. and C. xanthorrhiza based on drying method with Nuclear Magnetic Resonance (NMR) and High Pressure Liquid Chromatography (HPLC)-UVD. Samples were collected and dried using freeze-drying and oven method. The latter is the common method applied in most drying method at herbal medicine preparation procedure. All samples were extracted using 96% ethanol and analyzed using NMR and HPLC-UVD. Curcuminoid as a bioactive compound in the sample exhibited no significant difference and weak significant difference in C. xanthorrhiza and C. longa L., respectively. HLPC-UVD as a reliable analytical method for the quantification is subsequently used to confirm of the data obtained by NMR. It resulted that curcuminoid content showed no significant difference in both samples. This replied that curcuminoids content in both samples were stable into heating process. These results are useful information for simplicia standardization method in pharmaceutical products regarding to preparation procedure.
Laser-Induced Breakdown Spectroscopy Based Protein Assay for Cereal Samples.
Sezer, Banu; Bilge, Gonca; Boyaci, Ismail Hakki
2016-12-14
Protein content is an important quality parameter in terms of price, nutritional value, and labeling of various cereal samples. However, conventional analysis methods, namely, Kjeldahl and Dumas, have major drawbacks such as long analysis time, titration mistakes, and carrier gas dependence with high purity. For this reason, there is an urgent need for rapid, reliable, and environmentally friendly technologies for protein analysis. The present study aims to develop a new method for protein analysis in wheat flour and whole meal by using laser-induced breakdown spectroscopy (LIBS), which is a multielemental, fast, and simple spectroscopic method. Unlike the Kjeldahl and Dumas methods, it has potential to analyze a high number of samples in considerably short time. In the study, nitrogen peaks in LIBS spectra of wheat flour and whole meal samples with different protein contents were correlated with results of the standard Dumas method with the aid of chemometric methods. A calibration graph showed good linearity with the protein content between 7.9 and 20.9% and a 0.992 coefficient of determination (R 2 ). The limit of detection was calculated as 0.26%. The results indicated that LIBS is a promising and reliable method with its high sensitivity for routine protein analysis in wheat flour and whole meal samples.
Recommendations for representative ballast water sampling
NASA Astrophysics Data System (ADS)
Gollasch, Stephan; David, Matej
2017-05-01
Until now, the purpose of ballast water sampling studies was predominantly limited to general scientific interest to determine the variety of species arriving in ballast water in a recipient port. Knowing the variety of species arriving in ballast water also contributes to the assessment of relative species introduction vector importance. Further, some sampling campaigns addressed awareness raising or the determination of organism numbers per water volume to evaluate the species introduction risk by analysing the propagule pressure of species. A new aspect of ballast water sampling, which this contribution addresses, is compliance monitoring and enforcement of ballast water management standards as set by, e.g., the IMO Ballast Water Management Convention. To achieve this, sampling methods which result in representative ballast water samples are essential. We recommend such methods based on practical tests conducted on two commercial vessels also considering results from our previous studies. The results show that different sampling approaches influence the results regarding viable organism concentrations in ballast water samples. It was observed that the sampling duration (i.e., length of the sampling process), timing (i.e., in which point in time of the discharge the sample is taken), the number of samples and the sampled water quantity are the main factors influencing the concentrations of viable organisms in a ballast water sample. Based on our findings we provide recommendations for representative ballast water sampling.
Létant, Sonia E.; Murphy, Gloria A.; Alfaro, Teneile M.; Avila, Julie R.; Kane, Staci R.; Raber, Ellen; Bunt, Thomas M.; Shah, Sanjiv R.
2011-01-01
In the event of a biothreat agent release, hundreds of samples would need to be rapidly processed to characterize the extent of contamination and determine the efficacy of remediation activities. Current biological agent identification and viability determination methods are both labor- and time-intensive such that turnaround time for confirmed results is typically several days. In order to alleviate this issue, automated, high-throughput sample processing methods were developed in which real-time PCR analysis is conducted on samples before and after incubation. The method, referred to as rapid-viability (RV)-PCR, uses the change in cycle threshold after incubation to detect the presence of live organisms. In this article, we report a novel RV-PCR method for detection of live, virulent Bacillus anthracis, in which the incubation time was reduced from 14 h to 9 h, bringing the total turnaround time for results below 15 h. The method incorporates a magnetic bead-based DNA extraction and purification step prior to PCR analysis, as well as specific real-time PCR assays for the B. anthracis chromosome and pXO1 and pXO2 plasmids. A single laboratory verification of the optimized method applied to the detection of virulent B. anthracis in environmental samples was conducted and showed a detection level of 10 to 99 CFU/sample with both manual and automated RV-PCR methods in the presence of various challenges. Experiments exploring the relationship between the incubation time and the limit of detection suggest that the method could be further shortened by an additional 2 to 3 h for relatively clean samples. PMID:21764960
Evaluation of Three Field-Based Methods for Quantifying Soil Carbon
Izaurralde, Roberto C.; Rice, Charles W.; Wielopolski, Lucian; Ebinger, Michael H.; Reeves, James B.; Thomson, Allison M.; Francis, Barry; Mitra, Sudeep; Rappaport, Aaron G.; Etchevers, Jorge D.; Sayre, Kenneth D.; Govaerts, Bram; McCarty, Gregory W.
2013-01-01
Three advanced technologies to measure soil carbon (C) density (g C m−2) are deployed in the field and the results compared against those obtained by the dry combustion (DC) method. The advanced methods are: a) Laser Induced Breakdown Spectroscopy (LIBS), b) Diffuse Reflectance Fourier Transform Infrared Spectroscopy (DRIFTS), and c) Inelastic Neutron Scattering (INS). The measurements and soil samples were acquired at Beltsville, MD, USA and at Centro International para el Mejoramiento del Maíz y el Trigo (CIMMYT) at El Batán, Mexico. At Beltsville, soil samples were extracted at three depth intervals (0–5, 5–15, and 15–30 cm) and processed for analysis in the field with the LIBS and DRIFTS instruments. The INS instrument determined soil C density to a depth of 30 cm via scanning and stationary measurements. Subsequently, soil core samples were analyzed in the laboratory for soil bulk density (kg m−3), C concentration (g kg−1) by DC, and results reported as soil C density (kg m−2). Results from each technique were derived independently and contributed to a blind test against results from the reference (DC) method. A similar procedure was employed at CIMMYT in Mexico employing but only with the LIBS and DRIFTS instruments. Following conversion to common units, we found that the LIBS, DRIFTS, and INS results can be compared directly with those obtained by the DC method. The first two methods and the standard DC require soil sampling and need soil bulk density information to convert soil C concentrations to soil C densities while the INS method does not require soil sampling. We conclude that, in comparison with the DC method, the three instruments (a) showed acceptable performances although further work is needed to improve calibration techniques and (b) demonstrated their portability and their capacity to perform under field conditions. PMID:23383225
Fell, Shari; Bröckl, Stephanie; Büttner, Mathias; Rettinger, Anna; Zimmermann, Pia; Straubinger, Reinhard K
2016-09-15
Bovine tuberculosis (bTB), which is caused by Mycobacterium bovis and M. caprae, is a notifiable animal disease in Germany. Diagnostic procedure is based on a prescribed protocol that is published in the framework of German bTB legislation. In this protocol small sample volumes are used for DNA extraction followed by real-time PCR analyses. As mycobacteria tend to concentrate in granuloma and the infected tissue in early stages of infection does not necessarily show any visible lesions, it is likely that DNA extraction from only small tissue samples (20-40 mg) of a randomly chosen spot from the organ and following PCR testing may result in false negative results. In this study two DNA extraction methods were developed to process larger sample volumes to increase the detection sensitivity of mycobacterial DNA in animal tissue. The first extraction method is based on magnetic capture, in which specific capture oligonucleotides were utilized. These nucleotides are linked to magnetic particles and capture Mycobacterium-tuberculosis-complex (MTC) DNA released from 10 to 15 g of tissue material. In a second approach remaining sediments from the magnetic capture protocol were further processed with a less complex extraction protocol that can be used in daily routine diagnostics. A total number of 100 tissue samples from 34 cattle (n = 74) and 18 red deer (n = 26) were analyzed with the developed protocols and results were compared to the prescribed protocol. All three extraction methods yield reliable results by the real-time PCR analysis. The use of larger sample volume led to a sensitivity increase of DNA detection which was shown by the decrease of Ct-values. Furthermore five samples which were tested negative or questionable by the official extraction protocol were detected positive by real time PCR when the alternative extraction methods were used. By calculating the kappa index, the three extraction protocols resulted in a moderate (0.52; protocol 1 vs 3) to almost perfect agreement (1.00; red deer sample testing with all protocols). Both new methods yielded increased detection rates for MTC DNA detection in large sample volumes and consequently improve the official diagnostic protocol.
NASA Astrophysics Data System (ADS)
Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.
2018-02-01
River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of prescribed β values and gap distributions. The aliasing method, however, does not itself account for sampling irregularity, and this introduces some bias in the result. Nonetheless, the wavelet method is recommended for estimating β in irregular time series until improved methods are developed. Finally, all methods' performances depend strongly on the sampling irregularity, highlighting that the accuracy and precision of each method are data specific. Accurately quantifying the strength of fractal scaling in irregular water-quality time series remains an unresolved challenge for the hydrologic community and for other disciplines that must grapple with irregular sampling.
NASA Astrophysics Data System (ADS)
Ali, Nauman; Ismail, Muhammad; Khan, Adnan; Khan, Hamayun; Haider, Sajjad; Kamal, Tahseen
2018-01-01
In this work, we have developed simple, sensitive and inexpensive methods for the spectrophotometric determination of urea in urine samples using silver nanoparticles (AgNPs). The standard addition and 2nd order derivative methods were adopted for this purpose. AgNPs were prepared by chemical reduction of AgNO3 with hydrazine using 1,3-di-(1H-imidazol-1-yl)-2-propanol (DIPO) as a stabilizing agent in aqueous medium. The proposed methods were based on the complexation of AgNPs with urea. Using this concept, urea in the urine samples was successfully determined spectrophotometric methods. The results showed high percent recovery with ± RSD. The recoveries of urea in the three urine samples by spectrophotometric standard addition were 99.2% ± 5.37, 96.3% ± 4.49, 104.88% ± 4.99 and that of spectrophotometric 2nd order derivative method were 115.3% ± 5.2, 103.4% ± 2.6, 105.93% ± 0.76. The results show that these methods can open doors for a potential role of AgNPs in the clinical determination of urea in urine, blood, biological, non-biological fluids.
Gill, Christina; Blow, Frances; Darby, Alistair C.
2016-01-01
Background Recent studies on the vaginal microbiota have employed molecular techniques such as 16S rRNA gene sequencing to describe the bacterial community as a whole. These techniques require the lysis of bacterial cells to release DNA before purification and PCR amplification of the 16S rRNA gene. Currently, methods for the lysis of bacterial cells are not standardised and there is potential for introducing bias into the results if some bacterial species are lysed less efficiently than others. This study aimed to compare the results of vaginal microbiota profiling using four different pretreatment methods for the lysis of bacterial samples (30 min of lysis with lysozyme, 16 hours of lysis with lysozyme, 60 min of lysis with a mixture of lysozyme, mutanolysin and lysostaphin and 30 min of lysis with lysozyme followed by bead beating) prior to chemical and enzyme-based DNA extraction with a commercial kit. Results After extraction, DNA yield did not significantly differ between methods with the exception of lysis with lysozyme combined with bead beating which produced significantly lower yields when compared to lysis with the enzyme cocktail or 30 min lysis with lysozyme only. However, this did not result in a statistically significant difference in the observed alpha diversity of samples. The beta diversity (Bray-Curtis dissimilarity) between different lysis methods was statistically significantly different, but this difference was small compared to differences between samples, and did not affect the grouping of samples with similar vaginal bacterial community structure by hierarchical clustering. Conclusions An understanding of how laboratory methods affect the results of microbiota studies is vital in order to accurately interpret the results and make valid comparisons between studies. Our results indicate that the choice of lysis method does not prevent the detection of effects relating to the type of vaginal bacterial community one of the main outcome measures of epidemiological studies. However, we recommend that the same method is used on all samples within a particular study. PMID:27643503
Model-based inference for small area estimation with sampling weights
Vandendijck, Y.; Faes, C.; Kirby, R.S.; Lawson, A.; Hens, N.
2017-01-01
Obtaining reliable estimates about health outcomes for areas or domains where only few to no samples are available is the goal of small area estimation (SAE). Often, we rely on health surveys to obtain information about health outcomes. Such surveys are often characterised by a complex design, stratification, and unequal sampling weights as common features. Hierarchical Bayesian models are well recognised in SAE as a spatial smoothing method, but often ignore the sampling weights that reflect the complex sampling design. In this paper, we focus on data obtained from a health survey where the sampling weights of the sampled individuals are the only information available about the design. We develop a predictive model-based approach to estimate the prevalence of a binary outcome for both the sampled and non-sampled individuals, using hierarchical Bayesian models that take into account the sampling weights. A simulation study is carried out to compare the performance of our proposed method with other established methods. The results indicate that our proposed method achieves great reductions in mean squared error when compared with standard approaches. It performs equally well or better when compared with more elaborate methods when there is a relationship between the responses and the sampling weights. The proposed method is applied to estimate asthma prevalence across districts. PMID:28989860
[Comparative studies of methods of salmonella enrichment (author's transl)].
Pietzsch, O; Kretschmer, F J; Bulling, E
1975-07-01
Eight different methods of salmonella enrichment were compared in two series of experiments involving 100 samples of whole-egg powder and 80 samples of frozen whole liquid egg, respectively. 66 out of a total of 100 samples of whole-egg powder had been artificially infected with varying numbers of S. typhi-murium; 60 out of 80 samples of frozen whole liquid egg were found to be naturally infected with various salmonella species. 3 of the 8 methods (Table 1) were compared within an international collaborative study with 14 laboratories in 11 countries participating. A reduction of the pre-enrichment period from 18 to 6 hours and of volumes used in pre-enrichment and selective enrichment from 10 and 100 ml, respectively to 1 and 10 ml, respectively were found to have adverse influence upon the result of isolations, in particular in the case of weakly infected samples. In contrast, extended incubation over 48 hours as well as preparation of two sub-cultures on solid selective media following incubation of enrichment cultures over 18-24 hours and 42-48 hours, respectively always resulted in a certain increase of salmonella yield which, however, exhibited gradual differences for the individual methods examined. Preparation of a 2nd sub-culture meant, in particular, a decisive improvement of the result of isolations from artificially infected samples if selenite-cystine enrichment volumes were 10 and 100 ml, respectively. The best results could be obtained by means of the following methods of enrichment: Pre-enrichment of material in buffered peptone water at 37 degrees C over 18 hours; pipetting of 10 ml inoculated and incubated pre-enriched material into 100 ml selenite-cystine or tetrathionate enrichment medium according to MULLER-KAUFFMANN; onward incubation of the enrichment culture at 43 degrees C over 48 hours; and preparation of sub-cultures on solid selective media after 24 and 48 hours. The method using tetrathionate enrichment medium was found to be most expensive, results, however, were the most consistent ones.
Evaluation of Respondent-Driven Sampling
McCreesh, Nicky; Frost, Simon; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda Ndagire; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G
2012-01-01
Background Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex-workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total-population data. Methods Total-population data on age, tribe, religion, socioeconomic status, sexual activity and HIV status were available on a population of 2402 male household-heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, employing current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). Results We recruited 927 household-heads. Full and small RDS samples were largely representative of the total population, but both samples under-represented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven-sampling statistical-inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven-sampling bootstrap 95% confidence intervals included the population proportion. Conclusions Respondent-driven sampling produced a generally representative sample of this well-connected non-hidden population. However, current respondent-driven-sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience-sampling method, and caution is required when interpreting findings based on the sampling method. PMID:22157309
HITTING THE BULL'S-EYE IN GROUNDWATER SAMPLING
Many of the commonly-used groundwater sampling techniques and procedures have resulted from methods developed for water supply investigations. These methods have persisted, even though the monitoring goals may have changed from water supply development to contaminant source and ...
Xun-Ping, W; An, Z
2017-07-27
Objective To optimize and simplify the survey method of Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA) which took the plant abundance as auxiliary variable was explored and an experimental study in a 50 m×50 m plot in a marshland in the Poyang Lake region was performed. Firstly, the push broom surveyed data was stratified into 5 layers by the plant abundance data; then, the required numbers of optimal sampling points of each layer through Hammond McCullagh equation were calculated; thirdly, every sample point in the line with the Multiple Directional Interpolation (MDI) placement scheme was pinpointed; and finally, the comparison study among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference and SOPA was performed. Results The method (SOPA) proposed in this study had the minimal absolute error of 0.213 8; and the traditional systematic sampling method had the largest estimate, and the absolute error was 0.924 4. Conclusion The snail sampling strategy (SOPA) proposed in this study obtains the higher estimation accuracy than the other four methods.
Razban, Behrooz; Nelson, Kristina Y; McMartin, Dena W; Cullimore, D Roy; Wall, Michelle; Wang, Dunling
2012-01-01
An analytical method to produce profiles of bacterial biomass fatty acid methyl esters (FAME) was developed employing rapid agitation followed by static incubation (RASI) using selective media of wastewater microbial communities. The results were compiled to produce a unique library for comparison and performance analysis at a Wastewater Treatment Plant (WWTP). A total of 146 samples from the aerated WWTP, comprising 73 samples of each secondary and tertiary effluent, were included analyzed. For comparison purposes, all samples were evaluated via a similarity index (SI) with secondary effluents producing an SI of 0.88 with 2.7% variation and tertiary samples producing an SI 0.86 with 5.0% variation. The results also highlighted significant differences between the fatty acid profiles of the tertiary and secondary effluents indicating considerable shifts in the bacterial community profile between these treatment phases. The WWTP performance results using this method were highly replicable and reproducible indicating that the protocol has potential as a performance-monitoring tool for aerated WWTPs. The results quickly and accurately reflect shifts in dominant bacterial communities that result when processes operations and performance change.
Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min
2016-07-22
One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods are different in theoretical bases, means of measurement, and causes of measurement errors. Here, we present a set of porosities measured in Berea Sandstone samples by the multiple methods, in particular with adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlativeness among different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when the accompanied merits such as experimental conveniences are taken into account.
Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min
2016-01-01
One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods are different in theoretical bases, means of measurement, and causes of measurement errors. Here, we present a set of porosities measured in Berea Sandstone samples by the multiple methods, in particular with adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlativeness among different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when the accompanied merits such as experimental conveniences are taken into account. PMID:27445105
Krämer, Nadine; Löfström, Charlotta; Vigre, Håkan; Hoorfar, Jeffrey; Bunge, Cornelia; Malorny, Burkhard
2011-03-01
Salmonella is a major zoonotic pathogen which causes outbreaks and sporadic cases of gastroenteritis in humans worldwide. The primary sources for Salmonella are food-producing animals such as pigs and poultry. For risk assessment and hazard analysis and critical control point (HACCP) concepts, it is essential to produce large amounts of quantitative data, which is currently not achievable with the standard cultural based methods for enumeration of Salmonella. This study presents the development of a novel strategy to enumerate low numbers of Salmonella in cork borer samples taken from pig carcasses as a first concept and proof of principle for a new sensitive and rapid quantification method based on combined enrichment and real-time PCR. The novelty of the approach is in the short pre-enrichment step, where for most bacteria, growth is in the log phase. The method consists of an 8h pre-enrichment of the cork borer sample diluted 1:10 in non-selective buffered peptone water, followed by DNA extraction, and Salmonella detection and quantification by real-time PCR. The limit of quantification was 1.4 colony forming units (CFU)/20 cm(2) (approximately 10 g) of artificially contaminated sample with 95% confidence interval of ± 0.7 log CFU/sample. The precision was similar to the standard reference most probable number (MPN) method. A screening of 200 potentially naturally contaminated cork borer samples obtained over seven weeks in a slaughterhouse resulted in 25 Salmonella-positive samples. The analysis of salmonellae within these samples showed that the PCR method had a higher sensitivity for samples with a low contamination level (<6.7 CFU/sample), where 15 of the samples negative with the MPN method was detected with the PCR method and 5 were found to be negative by both methods. For the samples with a higher contamination level (6.7-310 CFU/sample) a good agreement between the results obtained with the PCR and MPN methods was obtained. The quantitative real-time PCR method can easily be applied to other food and environmental matrices by adaptation of the pre-enrichment time and media. Copyright © 2010 Elsevier B.V. All rights reserved.
Resampling methods in Microsoft Excel® for estimating reference intervals
Theodorsson, Elvar
2015-01-01
Computer- intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose including recommended interpolation procedures for estimating 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general and in using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distributions of observations in the reference samples is Gaussian or can transformed to that distribution even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and in case the number of reference individuals and corresponding samples are in the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366
Resampling methods in Microsoft Excel® for estimating reference intervals.
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose including recommended interpolation procedures for estimating 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general and in using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distributions of observations in the reference samples is Gaussian or can transformed to that distribution even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and in case the number of reference individuals and corresponding samples are in the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.
Method for charging a hydrogen getter
Tracy, C. Edwin; Keyser, Matthew A.; Benson, David K.
1998-01-01
A method for charging a sample of either a permanent or reversible getter material with a high concentration of hydrogen while maintaining a base pressure below 10.sup.-4 torr at room temperature involves placing the sample of hydrogen getter material in a chamber, activating the sample of hydrogen getter material, overcharging the sample of getter material through conventional charging techniques to a high concentration of hydrogen, and then subjecting the sample of getter material to a low temperature vacuum bake-out process. Application of the method results in a reversible hydrogen getter which is highly charged to maximum capacities of hydrogen and which concurrently exhibits minimum hydrogen vapor pressures at room temperatures.
Subrandom methods for multidimensional nonuniform sampling.
Worley, Bradley
2016-08-01
Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics. Copyright © 2016 Elsevier Inc. All rights reserved.
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
Abstract To overcome the problems of significant difference among samples and nonlinearity between the property and spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, net signal analysis method(NAS) was firstly used to obtain the net analyte signal of the calibration samples and unknown samples, then the Euclidean distance between net analyte signal of the sample and net analyte signal of calibration samples was calculated and utilized as similarity index. According to the defined similarity index, the local calibration sets were individually selected for each unknown sample. Finally, a local PLS regression model was built on each local calibration sets for each unknown sample. The proposed method was applied to a set of near infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to global PLS regression method and conventional local regression algorithm based on spectral Euclidean distance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This report provides a detailed summary of the activities carried out to sample groundwater at Waste Area Grouping (WAG) 6. The analytical results for samples collected during Phase 1, Activity 2 of the WAG 6 Resource Conservation and Recovery Act Facility Investigation (RFI) are also presented. In addition, analytical results for Phase 1, activity sampling events for which data were not previously reported are included in this TM. A summary of the groundwater sampling activities of WAG 6, to date, are given in the Introduction. The Methodology section describes the sampling procedures and analytical parameters. Six attachments are included. Attachmentsmore » 1 and 2 provide analytical results for selected RFI groundwater samples and ORNL sampling event. Attachment 3 provides a summary of the contaminants detected in each well sampled for all sampling events conducted at WAG 6. Bechtel National Inc. (BNI)/IT Corporation Contract Laboratory (IT) RFI analytical methods and detection limits are given in Attachment 4. Attachment 5 provides the Oak Ridge National Laboratory (ORNL)/Analytical Chemistry Division (ACD) analytical methods and detection limits and Resource Conservation and Recovery Act (RCRA) quarterly compliance monitoring (1988--1989). Attachment 6 provides ORNL/ACD groundwater analytical methods and detection limits (for the 1990 RCRA semi-annual compliance monitoring).« less
Comparison of analytical methods for the determination of histamine in reference canned fish samples
NASA Astrophysics Data System (ADS)
Jakšić, S.; Baloš, M. Ž.; Mihaljev, Ž.; Prodanov Radulović, J.; Nešić, K.
2017-09-01
Two screening methods for histamine in canned fish, an enzymatic test and a competitive direct enzyme-linked immunosorbent assay (CD-ELISA), were compared with the reversed-phase liquid chromatography (RP-HPLC) standard method. For enzymatic and CD-ELISA methods, determination was conducted according to producers’ manuals. For RP-HPLC, histamine was derivatized with dansyl-chloride, followed by RP-HPLC and diode array detection. Results of analysis of canned fish, supplied as reference samples for proficiency testing, showed good agreement when histamine was present at higher concentrations (above 100 mg kg-1). At a lower level (16.95 mg kg-1), the enzymatic test produced some higher results. Generally, analysis of four reference samples according to CD-ELISA and RP-HPLC showed good agreement for histamine determination (r=0.977 in concentration range 16.95-216 mg kg-1) The results show that the applied enzymatic test and CD-ELISA appeared to be suitable screening methods for the determination of histamine in canned fish.
Chen, Meilian; Lee, Jong-Hyeon; Hur, Jin
2015-10-01
Despite literature evidence suggesting the importance of sampling methods on the properties of sediment pore waters, their effects on the dissolved organic matter (PW-DOM) have been unexplored to date. Here, we compared the effects of two commonly used sampling methods (i.e., centrifuge and Rhizon sampler) on the characteristics of PW-DOM for the first time. The bulk dissolved organic carbon (DOC), ultraviolet-visible (UV-Vis) absorption, and excitation-emission matrixes coupled with parallel factor analysis (EEM-PARAFAC) of the PW-DOM samples were compared for the two sampling methods with the sediments from minimal to severely contaminated sites. The centrifuged samples were found to have higher average values of DOC, UV absorption, and protein-like EEM-PARAFAC components. The samples collected with the Rhizon sampler, however, exhibited generally more humified characteristics than the centrifuged ones, implying a preferential collection of PW-DOM with respect to the sampling methods. Furthermore, the differences between the two sampling methods seem more pronounced in relatively more polluted sites. Our observations were possibly explained by either the filtration effect resulting from the smaller pore size of the Rhizon sampler or the desorption of DOM molecules loosely bound to minerals during centrifugation, or both. Our study suggests that consistent use of one sampling method is crucial for PW-DOM studies and also that caution should be taken in the comparison of data collected with different sampling methods.
Misra, Sambuddha; Lloyd, Nicholas; Elderfield, Henry; Bickle, Mike J.
2017-01-01
Rationale Li and Mg isotopes are increasingly used as a combined tool within the geosciences. However, established methods require separate sample purification protocols utilising several column separation procedures. This study presents a single‐step cation‐exchange method for quantitative separation of trace levels of Li and Mg from multiple sample matrices. Methods The column method utilises the macro‐porous AGMP‐50 resin and a high‐aspect ratio column, allowing quantitative separation of Li and Mg from natural waters, sediments, rocks and carbonate matrices following the same elution protocol. High‐precision isotope determination was conducted by multi‐collector inductively coupled plasma mass spectrometry (MC‐ICPMS) on the Thermo Scientific™ NEPTUNE Plus™ fitted with 1013 Ω amplifiers which allow accurate and precise measurements at ion beams ≤0.51 V. Results Sub‐nanogram Li samples (0.3–0.5 ng) were regularly separated (yielding Mg masses of 1–70 μg) using the presented column method. The total sample consumption during isotopic analysis is <0.5 ng Li and <115 ng Mg with long‐term external 2σ precisions of ±0.39‰ for δ7Li and ±0.07‰ for δ26Mg. The results for geological reference standards and seawater analysed by our method are in excellent agreement with published values despite the order of magnitude lower sample consumption. Conclusions The possibility of eluting small sample masses and the low analytical sample consumption make this method ideal for samples of limited mass or low Li concentration, such as foraminifera, mineral separates or dilute river waters. PMID:29078008
Methods for determination of inorganic substances in water and fluvial sediments
Fishman, Marvin J.; Friedman, Linda C.
1989-01-01
Chapter Al of the laboratory manual contains methods used by the U.S. Geological Survey to analyze samples of water, suspended sediments, and bottom material for their content of inorganic constituents. Included are methods for determining the concentration of dissolved constituents in water, the total recoverable and total of constituents in water-suspended sediment samples, and the recoverable and total concentrations of constituents in samples of bottom material. The introduction to the manual includes essential definitions and a brief discussion of the use of significant figures in calculating and reporting analytical results. Quality control in the water-analysis laboratory is discussed, including the accuracy and precision of analyses, the use of standard-reference water samples, and the operation of an effective quality-assurance program. Methods for sample preparation and pretreatment are given also. A brief discussion of the principles of the analytical techniques involved and their particular application to water and sediment analysis is presented. The analytical methods of these techniques are arranged alphabetically by constituent. For each method, the general topics covered are the application, the principle of the method, the interferences, the apparatus and reagents required, a detailed description of the analytical procedure, reporting results, units and significant figures, and analytical precision data, when available. More than 126 methods are given for the determination of 70 inorganic constituents and physical properties of water, suspended sediment, and bottom material.
Methods for determination of inorganic substances in water and fluvial sediments
Fishman, Marvin J.; Friedman, Linda C.
1985-01-01
Chapter Al of the laboratory manual contains methods used by the Geological Survey to analyze samples of water, suspended sediments, and bottom material for their content of inorganic constituents. Included are methods for determining the concentration of dissolved constituents in water, total recoverable and total of constituents in water-suspended sediment samples, and recoverable and total concentrations of constituents in samples of bottom material. Essential definitions are included in the introduction to the manual, along with a brief discussion of the use of significant figures in calculating and reporting analytical results. Quality control in the water-analysis laboratory is discussed, including accuracy and precision of analyses, the use of standard reference water samples, and the operation of an effective quality assurance program. Methods for sample preparation and pretreatment are given also.A brief discussion of the principles of the analytical techniques involved and their particular application to water and sediment analysis is presented. The analytical methods involving these techniques are arranged alphabetically according to constituent. For each method given, the general topics covered are application, principle of the method, interferences, apparatus and reagents required, a detailed description of the analytical procedure, reporting results, units and significant figures, and analytical precision data, when available. More than 125 methods are given for the determination of 70 different inorganic constituents and physical properties of water, suspended sediment, and bottom material.
Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys
Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello
2015-01-01
Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis. PMID:26125967
Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.
Hund, Lauren; Bedrick, Edward J; Pagano, Marcello
2015-01-01
Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.
Determination of benzylpenicillin in pharmaceuticals by capillary zone electrophoresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoyt, A.M. Jr.; Sepaniak, M.J.
A rapid and direct method is described for the determination of benzylpenicillin (penicillin G) in pharmaceutical preparations. The method involves very little sample preparation and total analysis time for duplicate results is less 30 minutes per sample. The method takes advantage of the speed and separating power of capillary zone electrophoresis (CZE). Detection of penicillin is by absorption at 228 nm. An internal standard is employed to reduce sample injection error. The method was applied successfully to both tablets and injectable preparations. 14 refs., 5 figs., 3 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, T.; Hera, K.; Coleman, C.
2011-12-05
Evaluation of Defense Waste Processing Facility (DWPF) Chemical Process Cell (CPC) cycle time identified several opportunities to improve the CPC processing time. The Mechanical Systems & Custom Equipment Development (MS&CED) Section of the Savannah River National Laboratory (SRNL) recently completed the evaluation of one of these opportunities - the possibility of using an Isolok sampling valve as an alternative to the Hydragard valve for taking DWPF process samples at the Slurry Mix Evaporator (SME). The use of an Isolok for SME sampling has the potential to improve operability, reduce maintenance time, and decrease CPC cycle time. The SME acceptability testingmore » for the Isolok was requested in Task Technical Request (TTR) HLW-DWPF-TTR-2010-0036 and was conducted as outlined in Task Technical and Quality Assurance Plan (TTQAP) SRNLRP-2011-00145. RW-0333P QA requirements applied to the task, and the results from the investigation were documented in SRNL-STI-2011-00693. Measurement of the chemical composition of study samples was a critical component of the SME acceptability testing of the Isolok. A sampling and analytical plan supported the investigation with the analytical plan directing that the study samples be prepared by a cesium carbonate (Cs{sub 2}CO{sub 3}) fusion dissolution method and analyzed by Inductively Coupled Plasma - Optical Emission Spectroscopy (ICP-OES). The use of the cesium carbonate preparation method for the Isolok testing provided an opportunity for an additional assessment of this dissolution method, which is being investigated as a potential replacement for the two methods (i.e., sodium peroxide fusion and mixed acid dissolution) that have been used at the DWPF for the analysis of SME samples. Earlier testing of the Cs{sub 2}CO{sub 3} method yielded promising results which led to a TTR from Savannah River Remediation, LLC (SRR) to SRNL for additional support and an associated TTQAP to direct the SRNL efforts. A technical report resulting from this work was issued that recommended that the mixed acid method be replaced by the Cs{sub 2}CO{sub 3} method for the measurement of magnesium (Mg), sodium (Na), and zirconium (Zr) with additional testing of the method by DWPF Laboratory being needed before further implementation of the Cs{sub 2}CO{sub 3} method at that laboratory. While the SME acceptability testing of the Isolok does not address any of the open issues remaining after the publication of the recommendation for the replacement of the mixed acid method by the Cs{sub 2}CO{sub 3} method (since those issues are to be addressed by the DWPF Laboratory), the Cs{sub 2}CO{sub 3} testing associated with the Isolok testing does provide additional insight into the performance of the method as conducted by SRNL. The performance is to be investigated by looking to the composition measurement data generated by the samples of a standard glass, the Analytical Reference Glass - 1 (ARG-1), that were prepared by the Cs{sub 2}CO{sub 3} method and included in the SME acceptability testing of the Isolok. The measurements of these samples were presented as part of the study results, but no statistical analysis of these measurements was conducted as part of those results. It is the purpose of this report to provide that analysis, which was supported using JMP Version 7.0.2.« less
Measurement of Crystalline Silica Aerosol Using Quantum Cascade Laser-Based Infrared Spectroscopy.
Wei, Shijun; Kulkarni, Pramod; Ashley, Kevin; Zheng, Lina
2017-10-24
Inhalation exposure to airborne respirable crystalline silica (RCS) poses major health risks in many industrial environments. There is a need for new sensitive instruments and methods for in-field or near real-time measurement of crystalline silica aerosol. The objective of this study was to develop an approach, using quantum cascade laser (QCL)-based infrared spectroscopy (IR), to quantify airborne concentrations of RCS. Three sampling methods were investigated for their potential for effective coupling with QCL-based transmittance measurements: (i) conventional aerosol filter collection, (ii) focused spot sample collection directly from the aerosol phase, and (iii) dried spot obtained from deposition of liquid suspensions. Spectral analysis methods were developed to obtain IR spectra from the collected particulate samples in the range 750-1030 cm -1 . The new instrument was calibrated and the results were compared with standardized methods based on Fourier transform infrared (FTIR) spectrometry. Results show that significantly lower detection limits for RCS (≈330 ng), compared to conventional infrared methods, could be achieved with effective microconcentration and careful coupling of the particulate sample with the QCL beam. These results offer promise for further development of sensitive filter-based laboratory methods and portable sensors for near real-time measurement of crystalline silica aerosol.
Gonzales, J L; Loza, A; Chacon, E
2006-03-15
There are several T. vivax specific primers developed for PCR diagnosis. Most of these primers were validated under different DNA extraction methods and study designs leading to heterogeneity of results. The objective of the present study was to validate PCR as a diagnostic test for T. vivax trypanosomosis by means of determining the test sensitivity of different published specific primers with different sample preparations. Four different DNA extraction methods were used to test the sensitivity of PCR with four different primer sets. DNA was extracted directly from whole blood samples, blood dried on filter papers or blood dried on FTA cards. The results showed that the sensitivity of PCR with each primer set was highly dependant of the sample preparation and DNA extraction method. The highest sensitivities for all the primers tested were determined using DNA extracted from whole blood samples, while the lowest sensitivities were obtained when DNA was extracted from filter paper preparations. To conclude, the obtained results are discussed and a protocol for diagnosis and surveillance for T. vivax trypanosomosis is recommended.
Marinelli, L; Cottarelli, A; Solimini, A G; Del Cimmuto, A; De Giusti, M
2017-01-01
In this study we estimated the presence of Legionella species, viable but non-culturable (VBNC), in hospital water networks. We also evaluated the time and load of Legionella appearance in samples found negative using the standard culture method. A total of 42 samples was obtained from the tap water of five hospital buildings. The samples were tested for Legionella by the standard culture method and were monitored for up to 12 months for the appearance of VBNC Legionella. All the 42 samples were negative at the time of collection. Seven of the 42 samples (17.0%) became positive for Legionella at different times of monitoring. The time to the appearance of VBNC Legionella was extremely variable, from 15 days to 9 months from sampling. The most frequent Legionella species observed were Legionella spp and L. anisa and only in one sample L. pneumophila srg.1. Our study confirms the presence of VBNC Legionella in samples resulting negative using the standard culture method and highlights the different time to its appearance that can occur several months after sampling. The results are important for risk assessment and risk management of engineered water systems.
Warm-Up Effect in Panelist-Articulated-2-Alternative Forced Choice Test.
Bloom, David J; Baik, Hwa-Young; Lee, Soo-Yeun
2018-01-01
Panelist performance in discrimination tests has been shown to increase when warm-up samples are provided prior to the actual test. Samples are used prior to the actual test for the attribute articulation process of a panelist-articulated-2-alternative forced choice (PA-2-AFC) procedure; however, it is yet unknown if the pretest articulation phase adds to the power of this testing method as with the warm-up. The goal of the study was to determine if a "warm-up" effect was displayed in the PA-2-AFC test resulting in greater power compared to the researcher-designated-2-AFC (RD-2-AFC) test. A RD-2-AFC test, with and without warm-up samples, and a PA-2-AFC test were performed by 61 panelists. A reduced calorie, citrus-flavored, and carbonated beverage was used in the tests. During RD-2-AFC testing, panelists were asked to identify which sample was more sour. For PA-2-AFC testing, panelists individually articulated the nature and direction of the difference between the 2 samples through a pretesting articulation procedure. The articulated difference was, then, used in standard 2-AFC test procedure. A warm-up effect was observed when comparing the standard RD-2-AFC with and without warm-up samples. The addition of warm up samples significantly increased the power of the test, in addition, the PA-2-AFC method had lower power than the RD-2-AFC method. The increase in power with the addition of warm-up samples for the RD-2-AFC procedure supports literature findings on the benefit of providing warm-up samples. No warm-up effect can be attributed to the PA-2-AFC method evidenced by the overall low power observed, which may be attributed to sample complexity. Selecting a specified discrimination testing method is advantageous and can reduce costs of sensory testing, but has been considered unpractical when samples may differ in unknown ways. This research explores the use of panelist derived terms to circumvent the need for researchers to identify these differences and compares the results to using research designated terms in discrimination testing. Results from this study can be utilized in creating ways to incorporate more powerful methods into sensory discrimination testing plans and provides researchers with a means for selecting terms for use in specified discrimination testing methods. © 2017 Institute of Food Technologists®.
A Data-Centric Strategy for Modern Biobanking.
Quinlan, Philip R; Gardner, Stephen; Groves, Martin; Emes, Richard; Garibaldi, Jonathan
2015-01-01
Biobanking has been in existence for many decades and over that time has developed significantly. Biobanking originated from a need to collect, store and make available biological samples for a range of research purposes. It has changed as the understanding of biological processes has increased and new sample handling techniques have been developed to ensure samples were fit-for-purpose.As a result of these developments, modern biobanking is now facing two substantial new challenges. Firstly, new research methods such as next generation sequencing can generate datasets that are at an infinitely greater scale and resolution than previous methods. Secondly, as the understanding of diseases increases researchers require a far richer data set about the donors from which the sample originate.To retain a sample-centric strategy in a research environment that is increasingly dictated by data will place a biobank at a significant disadvantage and even result in the samples collected going unused. As a result biobanking is required to change strategic focus from a sample dominated perspective to a data-centric strategy.
2012-01-01
Background Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. Results In this work, we will consider the SCFG based approach in order to perform an analysis on how the quality of generated sample sets and the corresponding prediction accuracy changes when different degrees of disturbances are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst-case time requirements of such an SCFG based sampling method without significant accuracy losses. If, on the other hand, the quality of sampled structures can be observed to strongly react to slight disturbances, there is little hope for improving the complexity by heuristic procedures. We hence provide a reliable test for the hypothesis that a heuristic method could be implemented to improve the time scaling of RNA secondary structure prediction in the worst-case – without sacrificing much of the accuracy of the results. Conclusions Our experiments indicate that absolute errors generally lead to the generation of useless sample sets, whereas relative errors seem to have only small negative impact on both the predictive accuracy and the overall quality of resulting structure samples. Based on these observations, we present some useful ideas for developing a time-reduced sampling method guaranteeing an acceptable predictive accuracy. We also discuss some inherent drawbacks that arise in the context of approximation. The key results of this paper are crucial for the design of an efficient and competitive heuristic prediction method based on the increasingly accepted and attractive statistical sampling approach. This has indeed been indicated by the construction of prototype algorithms. PMID:22776037
Blekhman, Ran; Tang, Karen; Archie, Elizabeth A; Barreiro, Luis B; Johnson, Zachary P; Wilson, Mark E; Kohn, Jordan; Yuan, Michael L; Gesquiere, Laurence; Grieneisen, Laura E; Tung, Jenny
2016-08-16
Field studies of wild vertebrates are frequently associated with extensive collections of banked fecal samples-unique resources for understanding ecological, behavioral, and phylogenetic effects on the gut microbiome. However, we do not understand whether sample storage methods confound the ability to investigate interindividual variation in gut microbiome profiles. Here, we extend previous work on storage methods for gut microbiome samples by comparing immediate freezing, the gold standard of preservation, to three methods commonly used in vertebrate field studies: lyophilization, storage in ethanol, and storage in RNAlater. We found that the signature of individual identity consistently outweighed storage effects: alpha diversity and beta diversity measures were significantly correlated across methods, and while samples often clustered by donor, they never clustered by storage method. Provided that all analyzed samples are stored the same way, banked fecal samples therefore appear highly suitable for investigating variation in gut microbiota. Our results open the door to a much-expanded perspective on variation in the gut microbiome across species and ecological contexts.
NASA Astrophysics Data System (ADS)
Larsen, B. R.; Brussol, C.; Kotzias, D.; Veltkamp, T.; Zwaagstra, O.; Slanina, J.
A method has been developed for the preparation of samples for radiocarbon ( 14C) measurements of carbonyl compounds in the atmosphere. Sampling on 25 ml 2,4-dinitrophenylhydrazine (DNPH)- coated silica gel cartridges can be carried out with up to 10.000 ℓ of ambient air with no adverse effects on sample integrity. Methods for the selective clean-up of the extracts have been investigated. This is a necessary step in preparing ambient carbonyl samples for a measurement of the radiocarbon ( 14C) content. The method which gave the best results include extraction of the DNPH cartridge with CH 3CN and purification of the carbonyl hydrazones over activated silica gel to remove excess DNPH and non target compounds. This method has been validated with laboratory samples and has been proved to give reliable results The radiocarbon data from the first field experiment showed that ambient air over a semi-rural test site in Ispra, Italy on a late summer day contained mainly five carbonyls (formaldehyde>acetaldehyde>acetone>propanal>butanal) of a mixed biogenic (41-57%) and anthropogenic (43-59%) origin. The method will be used in future monitoring of radiocarbon ( 14C) on a number of test sites in Europe.
van der Put, Robert M F; de Haan, Alex; van den IJssel, Jan G M; Hamidi, Ahd; Beurret, Michel
2015-11-27
Due to the rapidly increasing introduction of Haemophilus influenzae type b (Hib) and other conjugate vaccines worldwide during the last decade, reliable and robust analytical methods are needed for the quantitative monitoring of intermediate samples generated during fermentation (upstream processing, USP) and purification (downstream processing, DSP) of polysaccharide vaccine components. This study describes the quantitative characterization of in-process control (IPC) samples generated during the fermentation and purification of the capsular polysaccharide (CPS), polyribosyl-ribitol-phosphate (PRP), derived from Hib. Reliable quantitative methods are necessary for all stages of production; otherwise accurate process monitoring and validation is not possible. Prior to the availability of high performance anion exchange chromatography methods, this polysaccharide was predominantly quantified either with immunochemical methods, or with the colorimetric orcinol method, which shows interference from fermentation medium components and reagents used during purification. Next to an improved high performance anion exchange chromatography-pulsed amperometric detection (HPAEC-PAD) method, using a modified gradient elution, both the orcinol assay and high performance size exclusion chromatography (HPSEC) analyses were evaluated. For DSP samples, it was found that the correlation between the results obtained by HPAEC-PAD specific quantification of the PRP monomeric repeat unit released by alkaline hydrolysis, and those from the orcinol method was high (R(2)=0.8762), and that it was lower between HPAEC-PAD and HPSEC results. Additionally, HPSEC analysis of USP samples yielded surprisingly comparable results to those obtained by HPAEC-PAD. In the early part of the fermentation, medium components interfered with the different types of analysis, but quantitative HPSEC data could still be obtained, although lacking the specificity of the HPAEC-PAD method. Thus, the HPAEC-PAD method has the advantage of giving a specific response compared to the orcinol assay and HPSEC, and does not show interference from various components that can be present in intermediate and purified PRP samples. Copyright © 2014 Elsevier Ltd. All rights reserved.
9 CFR 149.6 - Slaughter facilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... result based on an ELISA method and is confirmed positive by further testing using the digestion method... decertified. (C) If a test sample yields a positive test result based on an ELISA method, but is not confirmed...
9 CFR 149.6 - Slaughter facilities.
Code of Federal Regulations, 2013 CFR
2013-01-01
... result based on an ELISA method and is confirmed positive by further testing using the digestion method... decertified. (C) If a test sample yields a positive test result based on an ELISA method, but is not confirmed...
9 CFR 149.6 - Slaughter facilities.
Code of Federal Regulations, 2011 CFR
2011-01-01
... result based on an ELISA method and is confirmed positive by further testing using the digestion method... decertified. (C) If a test sample yields a positive test result based on an ELISA method, but is not confirmed...
9 CFR 149.6 - Slaughter facilities.
Code of Federal Regulations, 2012 CFR
2012-01-01
... result based on an ELISA method and is confirmed positive by further testing using the digestion method... decertified. (C) If a test sample yields a positive test result based on an ELISA method, but is not confirmed...
9 CFR 149.6 - Slaughter facilities.
Code of Federal Regulations, 2014 CFR
2014-01-01
... result based on an ELISA method and is confirmed positive by further testing using the digestion method... decertified. (C) If a test sample yields a positive test result based on an ELISA method, but is not confirmed...
An improved initialization center k-means clustering algorithm based on distance and density
NASA Astrophysics Data System (ADS)
Duan, Yanling; Liu, Qun; Xia, Shuyin
2018-04-01
Aiming at the problem of the random initial clustering center of k means algorithm that the clustering results are influenced by outlier data sample and are unstable in multiple clustering, a method of central point initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average of distance is used to represent the sample density, and the data sample with the larger distance and the higher density are selected as the initial clustering centers to optimize the clustering results. Then, a clustering evaluation method based on distance and density is designed to verify the feasibility of the algorithm and the practicality, the experimental results on UCI data sets show that the algorithm has a certain stability and practicality.
Bird, Susan M.; Fram, Miranda S.; Crepeau, Kathryn L.
2003-01-01
An analytical method has been developed for the determination of dissolved organic carbon concentration in water samples. This method includes the results of the tests used to validate the method and the quality-control practices used for dissolved organic carbon analysis. Prior to analysis, water samples are filtered to remove suspended particulate matter. A Shimadzu TOC-5000A Total Organic Carbon Analyzer in the nonpurgeable organic carbon mode is used to analyze the samples by high temperature catalytic oxidation. The analysis usually is completed within 48 hours of sample collection. The laboratory reporting level is 0.22 milligrams per liter.
Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters
Calfee, M. Worth; Rose, Laura J.; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal
2016-01-01
The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37 mm cassette filter) with regards to their ability to recover Bacillus atrophaeus spores (surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50% of their total holding capacity. The results were analyzed by one-way ANOVA across material types, presence or absence of dust, and sampling device. The extraction method gave higher relative recoveries than the two vacuum methods evaluated (p ≤ 0.001). On average, recoveries obtained by the vacuum methods were about 30% of those achieved by the extraction method. Relative recoveries between the two vacuum methods were not significantly different (p > 0.05). Although extraction methods yielded higher recoveries than vacuum methods, either HVAC filter sampling approach may provide a rapid and inexpensive mechanism for understanding the extent of contamination following a wide-area biological release incident. PMID:24184312
Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters.
Calfee, M Worth; Rose, Laura J; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal
2014-01-01
The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37mm cassette filter) with regards to their ability to recover Bacillus atrophaeus spores (surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50% of their total holding capacity. The results were analyzed by one-way ANOVA across material types, presence or absence of dust, and sampling device. The extraction method gave higher relative recoveries than the two vacuum methods evaluated (p≤0.001). On average, recoveries obtained by the vacuum methods were about 30% of those achieved by the extraction method. Relative recoveries between the two vacuum methods were not significantly different (p>0.05). Although extraction methods yielded higher recoveries than vacuum methods, either HVAC filter sampling approach may provide a rapid and inexpensive mechanism for understanding the extent of contamination following a wide-area biological release incident. Published by Elsevier B.V.
Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian
2012-09-01
This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristic of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, and thus the master systems synchronize with the slave systems. The desired sampled-data controller can be achieved by solving a set of linear matrix inequalitys, which depend upon the maximum sampling interval and the decay rate. The obtained conditions not only have less conservatism but also have less decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.
NASA Astrophysics Data System (ADS)
Trapanese, A.; Batt, C. M.; Schnepp, E.
The aim of this research was to review the relative merits of different methods of taking samples for archaeomagnetic dating. To allow different methods to be investigated, two archaeological structures and one modern fireplace were sampled in Austria. On each structure a variety of sampling methods were used: the tube and disc techniques of Clark et al. (Clark, A.J., Tarling, D.H., Noel, M., 1988. Developments in archaeomagnetic dating in Great Britain. Journal of Archaeological Science 15, 645-667), the drill core technique, the mould plastered hand block method of Thellier, and a modification of it. All samples were oriented with a magnetic compass and sun compass, where weather conditions allowed. Approximately 12 discs, tubes, drill cores or plaster hand blocks were collected from each structure, with one mould plaster hand block being collected and cut into specimens. The natural remanent magnetisation (NRM) of the samples was measured and stepwise alternating field (AF) or thermal demagnetisation was applied. Samples were measured either in the UK or in Austria, which allowed the comparison of results between magnetometers with different sensitivity. The tubes and plastered hand block specimens showed good agreement in directional results, and the samples obtained showed good stability. The discs proved to be unreliable as both NRM and the characteristic remanent magnetisation (ChRM) distribution were very scattered. The failure of some methods may be related to the suitability of the material sampled, for example if it was disturbed before sampling, had been insufficiently heated or did not contain appropriate magnetic minerals to retain a remanent magnetisation. Caution is also recommended for laboratory procedures as the cutting of poorly consolidated specimens may disturb the material and therefore the remanent magnetisation. Criteria and guidelines were established to aid researchers in selecting the most appropriate method for a particular archaeological structure.
Enhanced Detection of Surface-Associated Bacteria in Indoor Environments by Quantitative PCR
Buttner, Mark P.; Cruz-Perez, Patricia; Stetzenbach, Linda D.
2001-01-01
Methods for detecting microorganisms on surfaces are needed to locate biocontamination sources and to relate surface and airborne concentrations. Research was conducted in an experimental room to evaluate surface sampling methods and quantitative PCR (QPCR) for enhanced detection of a target biocontaminant present on flooring materials. QPCR and culture analyses were used to quantitate Bacillus subtilis (Bacillus globigii) endospores on vinyl tile, commercial carpet, and new and soiled residential carpet with samples obtained by four surface sampling methods: a swab kit, a sponge swipe, a cotton swab, and a bulk method. The initial data showed that greater overall sensitivity was obtained with the QPCR than with culture analysis; however, the QPCR results for bulk samples from residential carpet were negative. The swab kit and the sponge swipe methods were then tested with two levels of background biological contamination consisting of Penicillium chrysogenum spores. The B. subtilis values obtained by the QPCR method were greater than those obtained by culture analysis. The differences between the QPCR and culture data were significant for the samples obtained with the swab kit for all flooring materials except soiled residential carpet and with the sponge swipe for commercial carpet. The QPCR data showed that there were no significant differences between the swab kit and sponge swipe sampling methods for any of the flooring materials. Inhibition of QPCR due solely to biological contamination of flooring materials was not evident. However, some degree of inhibition was observed with the soiled residential carpet, which may have been caused by the presence of abiotic contaminants, alone or in combination with biological contaminants. The results of this research demonstrate the ability of QPCR to enhance detection and enumeration of biocontaminants on surface materials and provide information concerning the comparability of currently available surface sampling methods. PMID:11375164
Zhang, Heng; Lan, Fang; Shi, Yupeng; Wan, Zhi-Gang; Yue, Zhen-Feng; Fan, Fang; Lin, Yan-Kui; Tang, Mu-Jin; Lv, Jing-Zhang; Xiao, Tan; Yi, Changqing
2014-06-15
VitaFast(®) test kits designed for the microbiological assay in microtiter plate format can be applied to quantitative determination of B-group water-soluble vitamins such as vitamin B12, folic acid and biotin, et al. Compared to traditional microbiological methods, VitaFast(®) kits significantly reduce sample processing time and provide greater reliability, higher productivity and better accuracy. Recently, simultaneous determination of vitamin B12, folic acid and biotin in one sample is urgently required when evaluating the quality of infant formulae in our practical work. However, the present sample preparation protocols which are developed for individual test systems, are incompatible with simultaneous determination of several analytes. To solve this problem, a novel "three-in-one" sample preparation method is herein developed for simultaneous determination of B-group water-soluble vitamins using VitaFast(®) kits. The performance of this novel "three-in-one" sample preparation method was systematically evaluated through comparing with individual sample preparation protocols. The experimental results of the assays which employed "three-in-one" sample preparation method were in good agreement with those obtained from conventional VitaFast(®) extraction methods, indicating that the proposed "three-in-one" sample preparation method is applicable to the present three VitaFast(®) vitamin test systems, thus offering a promising alternative for the three independent sample preparation methods. The proposed new sample preparation method will significantly improve the efficiency of infant formulae inspection. Copyright © 2013 Elsevier Ltd. All rights reserved.
Methods for collection and analysis of water samples
Rainwater, Frank Hays; Thatcher, Leland Lincoln
1960-01-01
This manual contains methods used by the U.S. Geological Survey to collect, preserve, and analyze water samples. Throughout, the emphasis is on obtaining analytical results that accurately describe the chemical composition of the water in situ. Among the topics discussed are selection of sampling sites, frequency of sampling, field equipment, preservatives and fixatives, analytical techniques of water analysis, and instruments. Seventy-seven laboratory and field procedures are given for determining fifty-three water properties.
Rudkjøbing, Vibeke Børsholt; Thomsen, Trine Rolighed; Xu, Yijuan; Melton-Kreft, Rachael; Ahmed, Azad; Eickhardt, Steffen; Bjarnsholt, Thomas; Poulsen, Steen Seier; Nielsen, Per Halkjær; Earl, Joshua P; Ehrlich, Garth D; Moser, Claus
2016-11-08
Necrotizing soft tissue infections (NSTIs) are a group of infections affecting all soft tissues. NSTI involves necrosis of the afflicted tissue and is potentially life threatening due to major and rapid destruction of tissue, which often leads to septic shock and organ failure. The gold standard for identification of pathogens is culture; however molecular methods for identification of microorganisms may provide a more rapid result and may be able to identify additional microorganisms that are not detected by culture. In this study, tissue samples (n = 20) obtained after debridement of 10 patients with NSTI were analyzed by standard culture, fluorescence in situ hybridization (FISH) and multiple molecular methods. The molecular methods included analysis of microbial diversity by 1) direct 16S and D2LSU rRNA gene Microseq 2) construction of near full-length 16S rRNA gene clone libraries with subsequent Sanger sequencing for most samples, 3) the Ibis T5000 biosensor and 4) 454-based pyrosequencing. Furthermore, quantitative PCR (qPCR) was used to verify and determine the relative abundance of Streptococcus pyogenes in samples. For 70 % of the surgical samples it was possible to identify microorganisms by culture. Some samples did not result in growth (presumably due to administration of antimicrobial therapy prior to sampling). The molecular methods identified microorganisms in 90 % of the samples, and frequently detected additional microorganisms when compared to culture. Although the molecular methods generally gave concordant results, our results indicate that Microseq may misidentify or overlook microorganisms that can be detected by other molecular methods. Half of the patients were found to be infected with S. pyogenes, but several atypical findings were also made including infection by a) Acinetobacter baumannii, b) Streptococcus pneumoniae, and c) fungi, mycoplasma and Fusobacterium necrophorum. The study emphasizes that many pathogens can be involved in NSTIs, and that no specific "NSTI causing" combination of species exists. This means that clinicians should be prepared to diagnose and treat any combination of microbial pathogens. Some of the tested molecular methods offer a faster turnaround time combined with a high specificity, which makes supplemental use of such methods attractive for identification of microorganisms, especially for fulminant life-threatening infections such as NSTI.
Vitamin B12 assays compared by use of patients sera with low vitamin B12 content
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheridan, B.L.; Pearce, L.C.
1985-05-01
The authors compared four radioisotope dilution (RD) methods and a microbiological assay for measuring concentrations of vitamin B12 in a selected panel of serum samples from patients known to be deficient in the vitamin. Low (less than 100 ng/L) and borderline (100-180 ng/L) results were similar between methods, but use of the manufacturers recommended ranges for borderline results would have changed the diagnostic classifications for 22 of 38 samples. Results of all the RD methods inter-correlated well, but less so with the microbiological assay. Borderline, nondiagnostic results were common to all methods, and no apparent advantage was gained from usingmore » the microbiological assay.« less
Evaluation of direct saponification method for determination of cholesterol in meats.
Adams, M L; Sullivan, D M; Smith, R L; Richter, E F
1986-01-01
A gas chromatographic (GC) method has been developed for determination of cholesterol in meats. The method involves ethanolic KOH saponification of the sample material, homogeneous-phase toluene extraction of the unsaponifiables, derivatization of cholesterol to its trimethylsilylether, and quantitation by GC-flame ionization detection using 5-alpha-cholestane as internal standard. This direct saponification method is compared with the current AOAC official method for determination of cholesterol in 20 different meat products. The direct saponification method eliminates the need for initial lipid extraction, thus offering a 30% savings in labor, and requires fewer solvents than the AOAC method. It produced comparable or slightly higher cholesterol results than the AOAC method in all meat samples examined. Precision, determined by assaying a turkey meat sample 16 times over 4 days, was excellent (CV = 1.74%). Average recovery of cholesterol added to meat samples was 99.8%.
Wang, Wei; Zhou, Fang; Zhao, Liang; Zhang, Jian-Rong; Zhu, Jun-Jie
2008-02-01
A simple method of hydrostatic pressure sample injection towards a disposable microchip CE device was developed. The liquid level in the sample reservoir was higher than that in the sample waste reservoir (SWR) by tilting microchip and hydrostatic pressure was generated, the sample was driven to pass through injection channel into SWR. After sample loading, the microchip was levelled for separation under applied high separation voltage. Effects of tilted angle, initial liquid height and injection duration on electrophoresis were investigated. With enough injection duration, the injection result was little affected by tilted angle and initial liquid heights in the reservoirs. Injection duration for obtaining a stable sample plug was mainly dependent on the tilted angle rather than the initial height of liquid. Experimental results were consistent with theoretical prediction. Fluorescence observation and electrochemical detection of dopamine and catechol were employed to verify the feasibility of tilted microchip hydrostatic pressure injection. Good reproducibility of this injection method was obtained. Because the instrumentation was simplified and no additional hardware was needed in this technology, the proposed method would be potentially useful in disposable devices.
Müller, David; Cattaneo, Stefano; Meier, Florian; Welz, Roland; de Vries, Tjerk; Portugal-Cohen, Meital; Antonio, Diana C; Cascio, Claudia; Calzolai, Luigi; Gilliland, Douglas; de Mello, Andrew
2016-04-01
We demonstrate the use of inverse supercritical carbon dioxide (scCO2) extraction as a novel method of sample preparation for the analysis of complex nanoparticle-containing samples, in our case a model sunscreen agent with titanium dioxide nanoparticles. The sample was prepared for analysis in a simplified process using a lab scale supercritical fluid extraction system. The residual material was easily dispersed in an aqueous solution and analyzed by Asymmetrical Flow Field-Flow Fractionation (AF4) hyphenated with UV- and Multi-Angle Light Scattering detection. The obtained results allowed an unambiguous determination of the presence of nanoparticles within the sample, with almost no background from the matrix itself, and showed that the size distribution of the nanoparticles is essentially maintained. These results are especially relevant in view of recently introduced regulatory requirements concerning the labeling of nanoparticle-containing products. The novel sample preparation method is potentially applicable to commercial sunscreens or other emulsion-based cosmetic products and has important ecological advantages over currently used sample preparation techniques involving organic solvents. Copyright © 2016 Elsevier B.V. All rights reserved.
Capers, Patrice L.; Brown, Andrew W.; Dawson, John A.; Allison, David B.
2015-01-01
Background: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high throughput methods (e.g., search heuristics, crowdsourcing) has improved feasibility of large meta-research questions, but possibly at the cost of accuracy. Objective: To evaluate the use of double sampling combined with multiple imputation (DS + MI) to address meta-research questions, using as an example adherence of PubMed entries to two simple consolidated standards of reporting trials guidelines for titles and abstracts. Methods: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT, human, abstract available, and English language (n = 322, 107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower rigor, higher throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher rigor, lower throughput (RHITLO) human rating method. Multiple imputation of the missing-completely at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. Results: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title = 1.00, abstract = 0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS + MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by year: subsample RHITLO 1.050–1.174 vs. DS + MI 1.082–1.151). As evidence of improved accuracy, DS + MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. Conclusion: Our results support our hypothesis that DS + MI would result in improved precision and accuracy. This method is flexible and may provide a practical way to examine large corpora of literature. PMID:25988135
Yi, Ming; Stephens, Robert M.
2008-01-01
Analysis of microarray and other high throughput data often involves identification of genes consistently up or down-regulated across samples as the first step in extraction of biological meaning. This gene-level paradigm can be limited as a result of valid sample fluctuations and biological complexities. In this report, we describe a novel method, SLEPR, which eliminates this limitation by relying on pathway-level consistencies. Our method first selects the sample-level differentiated genes from each individual sample, capturing genes missed by other analysis methods, ascertains the enrichment levels of associated pathways from each of those lists, and then ranks annotated pathways based on the consistency of enrichment levels of individual samples from both sample classes. As a proof of concept, we have used this method to analyze three public microarray datasets with a direct comparison with the GSEA method, one of the most popular pathway-level analysis methods in the field. We found that our method was able to reproduce the earlier observations with significant improvements in depth of coverage for validated or expected biological themes, but also produced additional insights that make biological sense. This new method extends existing analyses approaches and facilitates integration of different types of HTP data. PMID:18818771
Flotemersch, Joseph E; North, Sheila; Blocksom, Karen A
2014-02-01
Benthic macroinvertebrates are sampled in streams and rivers as one of the assessment elements of the US Environmental Protection Agency's National Rivers and Streams Assessment. In a 2006 report, the recommendation was made that different yet comparable methods be evaluated for different types of streams (e.g., low gradient vs. high gradient). Consequently, a research element was added to the 2008-2009 National Rivers and Streams Assessment to conduct a side-by-side comparison of the standard macroinvertebrate sampling method with an alternate method specifically designed for low-gradient wadeable streams and rivers that focused more on stream edge habitat. Samples were collected using each method at 525 sites in five of nine aggregate ecoregions located in the conterminous USA. Methods were compared using the benthic macroinvertebrate multimetric index developed for the 2006 Wadeable Streams Assessment. Statistical analysis did not reveal any trends that would suggest the overall assessment of low-gradient streams on a regional or national scale would change if the alternate method was used rather than the standard sampling method, regardless of the gradient cutoff used to define low-gradient streams. Based on these results, the National Rivers and Streams Survey should continue to use the standard field method for sampling all streams.
Characterization of rock thermal conductivity by high-resolution optical scanning
Popov, Y.A.; Pribnow, D.F.C.; Sass, J.H.; Williams, C.F.; Burkhardt, H.
1999-01-01
We compared thress laboratory methods for thermal conductivity measurements: divided-bar, line-source and optical scanning. These methods are widely used in geothermal and petrophysical studies, particularly as applied to research on cores from deep scientific boreholes. The relatively new optical scanning method has recently been perfected and applied to geophysical problems. A comparison among these methods for determining the thermal conductivity tensor for anisotropic rocks is based on a representative collection of 80 crystalline rock samples from the KTB continental deep borehole (Germany). Despite substantial thermal inhomogeneity of rock thermal conductivity (up to 40-50% variation) and high anisotropy (with ratios of principal values attaining 2 and more), the results of measurements agree very well among the different methods. The discrepancy for measurements along the foliation is negligible (<1%). The component of thermal conductivity normal to the foliation reveals somewhat larger differences (3-4%). Optical scanning allowed us to characterize the thermal inhomogeneity of rocks and to identify a three-dimensional anisotropy in thermal conductivity of some gneiss samples. The merits of optical scanning include minor random errors (1.6%), the ability to record the variation of thermal conductivity along the sample, the ability to sample deeply using a slow scanning rate, freedom from constraints for sample size and shape, and quality of mechanical treatment of the sample surface, a contactless mode of measurement, high speed of operation, and the ability to measure on a cylindrical sample surface. More traditional methods remain superior for characterizing bulk conductivity at elevated temperature.Three laboratory methods including divided-bar, line-source and optical scanning are widely applied in geothermal and petrophysical studies. In this study, these three methods were compared for determining the thermal conductivity tensor for anisotropic rocks. For this study, a representative collection of 80 crystalline rock samples from the KTB continental deep borehole was used. Despite substantial thermal inhomogeneity of rock thermal conductivity and high anisotropy, measurement results were in excellent agreement among the three methods.
TPH detection in groundwater: Identification and elimination of positive interferences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zemo, D.A.; Synowiec, K.A.
1996-01-01
Groundwater assessment programs frequently require total petroleum hydrocarbon (TPH) analyses (Methods 8015M and 418.1). TPH analyses are often unreliable indicators of water quality because these methods are not constituent-specific and are vulnerable to significant sources of positive interferences. These positive interferences include: (a) non-dissolved petroleum constituents; (b) soluble, non-petroleum hydrocarbons (e.g., biodegradation products); and (c) turbidity, commonly introduced into water samples during sample collection. In this paper, we show that the portion of a TPH concentration not directly the result of water-soluble petroleum constituents can be attributed solely to these positive interferences. To demonstrate the impact of these interferences, wemore » conducted a field experiment at a site affected by degraded crude oil. Although TPH was consistently detected in groundwater samples, BTEX was not detected. PNAs were not detected, except for very low concentrations of fluorene (<5 ug/1). Filtering and silica gel cleanup steps were added to sampling and analyses to remove particulates and biogenic by-products. Results showed that filtering lowered the Method 8015M concentrations and reduced the Method 418.1 concentrations to non-detectable. Silica gel cleanup reduced the Method 8015M concentrations to non-detectable. We conclude from this study that the TPH results from groundwater samples are artifacts of positive interferences caused by both particulates and biogenic materials and do not represent dissolved-phase petroleum constituents.« less
TPH detection in groundwater: Identification and elimination of positive interferences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zemo, D.A.; Synowiec, K.A.
1996-12-31
Groundwater assessment programs frequently require total petroleum hydrocarbon (TPH) analyses (Methods 8015M and 418.1). TPH analyses are often unreliable indicators of water quality because these methods are not constituent-specific and are vulnerable to significant sources of positive interferences. These positive interferences include: (a) non-dissolved petroleum constituents; (b) soluble, non-petroleum hydrocarbons (e.g., biodegradation products); and (c) turbidity, commonly introduced into water samples during sample collection. In this paper, we show that the portion of a TPH concentration not directly the result of water-soluble petroleum constituents can be attributed solely to these positive interferences. To demonstrate the impact of these interferences, wemore » conducted a field experiment at a site affected by degraded crude oil. Although TPH was consistently detected in groundwater samples, BTEX was not detected. PNAs were not detected, except for very low concentrations of fluorene (<5 ug/1). Filtering and silica gel cleanup steps were added to sampling and analyses to remove particulates and biogenic by-products. Results showed that filtering lowered the Method 8015M concentrations and reduced the Method 418.1 concentrations to non-detectable. Silica gel cleanup reduced the Method 8015M concentrations to non-detectable. We conclude from this study that the TPH results from groundwater samples are artifacts of positive interferences caused by both particulates and biogenic materials and do not represent dissolved-phase petroleum constituents.« less
NASA Astrophysics Data System (ADS)
Shojaei Zoeram, Ali; Rahmani, Aida; Asghar Akbari Mousavi, Seyed Ali
2017-05-01
The precise controllability of heat input in pulsed Nd:YAG welding method provided by two additional parameters, frequency and pulse duration, has made this method very promising for welding of alloys sensitive to heat input. The poor weldability of Ti-rich nitinol as a result of the formation of Ti2Ni IMC has deprived us of the unique properties of this alloy. In this study, to intensify solidification rate during welding of Ti-rich nitinol, pulsed Nd:YAG laser beam in low frequency was employed in addition to the employment of a copper substrate. Specific microstructure produced in this condition was characterized and the effects of this microstructure on tensile and fracture behavior of samples welded by two different procedures, full penetration and double-sided method with halved penetration depth for each side were investigated. The investigations revealed although the combination of low frequencies, the use of a high thermal conductor substrate and double-sided method eliminated intergranular fracture and increased tensile strength, the particular microstructure, built in the pulsed welding method in low frequencies, results to the formation of the longitudinal cracks during the first stages of tensile test at weld centerline. This degrades tensile strength of welded samples compared to base metal. The results showed samples welded in double-sided method performed much better than samples welded in full penetration mode.
Patterson, Fiona; Lievens, Filip; Kerrin, Máire; Munro, Neil; Irish, Bill
2013-01-01
Background The selection methodology for UK general practice is designed to accommodate several thousand applicants per year and targets six core attributes identified in a multi-method job-analysis study Aim To evaluate the predictive validity of selection methods for entry into postgraduate training, comprising a clinical problem-solving test, a situational judgement test, and a selection centre. Design and setting A three-part longitudinal predictive validity study of selection into training for UK general practice. Method In sample 1, participants were junior doctors applying for training in general practice (n = 6824). In sample 2, participants were GP registrars 1 year into training (n = 196). In sample 3, participants were GP registrars sitting the licensing examination after 3 years, at the end of training (n = 2292). The outcome measures include: assessor ratings of performance in a selection centre comprising job simulation exercises (sample 1); supervisor ratings of trainee job performance 1 year into training (sample 2); and licensing examination results, including an applied knowledge examination and a 12-station clinical skills objective structured clinical examination (OSCE; sample 3). Results Performance ratings at selection predicted subsequent supervisor ratings of job performance 1 year later. Selection results also significantly predicted performance on both the clinical skills OSCE and applied knowledge examination for licensing at the end of training. Conclusion In combination, these longitudinal findings provide good evidence of the predictive validity of the selection methods, and are the first reported for entry into postgraduate training. Results show that the best predictor of work performance and training outcomes is a combination of a clinical problem-solving test, a situational judgement test, and a selection centre. Implications for selection methods for all postgraduate specialties are considered. PMID:24267856
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Yanxia; Ma He; Peng Nanbo
We apply one of the lazy learning methods, the k-nearest neighbor (kNN) algorithm, to estimate the photometric redshifts of quasars based on various data sets from the Sloan Digital Sky Survey (SDSS), the UKIRT Infrared Deep Sky Survey (UKIDSS), and the Wide-field Infrared Survey Explorer (WISE; the SDSS sample, the SDSS-UKIDSS sample, the SDSS-WISE sample, and the SDSS-UKIDSS-WISE sample). The influence of the k value and different input patterns on the performance of kNN is discussed. kNN performs best when k is different with a special input pattern for a special data set. The best result belongs to the SDSS-UKIDSS-WISEmore » sample. The experimental results generally show that the more information from more bands, the better performance of photometric redshift estimation with kNN. The results also demonstrate that kNN using multiband data can effectively solve the catastrophic failure of photometric redshift estimation, which is met by many machine learning methods. Compared with the performance of various other methods of estimating the photometric redshifts of quasars, kNN based on KD-Tree shows superiority, exhibiting the best accuracy.« less
Is it appropriate to composite fish samples for mercury trend monitoring and consumption advisories?
Gandhi, Nilima; Bhavsar, Satyendra P; Gewurtz, Sarah B; Drouillard, Ken G; Arhonditsis, George B; Petro, Steve
2016-03-01
Monitoring mercury levels in fish can be costly because variation by space, time, and fish type/size needs to be captured. Here, we explored if compositing fish samples to decrease analytical costs would reduce the effectiveness of the monitoring objectives. Six compositing methods were evaluated by applying them to an existing extensive dataset, and examining their performance in reproducing the fish consumption advisories and temporal trends. The methods resulted in varying amount (average 34-72%) of reductions in samples, but all (except one) reproduced advisories very well (96-97% of the advisories did not change or were one category more restrictive compared to analysis of individual samples). Similarly, the methods performed reasonably well in recreating temporal trends, especially when longer-term and frequent measurements were considered. The results indicate that compositing samples within 5cm fish size bins or retaining the largest/smallest individuals and compositing in-between samples in batches of 5 with decreasing fish size would be the best approaches. Based on the literature, the findings from this study are applicable to fillet, muscle plug and whole fish mercury monitoring studies. The compositing methods may also be suitable for monitoring Persistent Organic Pollutants (POPs) in fish. Overall, compositing fish samples for mercury monitoring could result in a substantial savings (approximately 60% of the analytical cost) and should be considered in fish mercury monitoring, especially in long-term programs or when study cost is a concern. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Jiang, Jia-Jia; Duan, Fa-Jie; Li, Yan-Chao; Hua, Xiang-Ning
2014-03-01
Synchronization sampling is very important in underwater towed array system where every acquisition node (AN) samples analog signals by its own analog-digital converter (ADC). In this paper, a simple and effective synchronization sampling method is proposed to ensure synchronized operation among different ANs of the underwater towed array system. We first present a master-slave synchronization sampling model, and then design a high accuracy phase-locked loop to synchronize all delta-sigma ADCs to a reference clock. However, when the master-slave synchronization sampling model is used, both the time-delay (TD) of messages traveling along the wired transmission medium and the jitter of the clocks will bring out synchronization sampling error (SSE). Therefore, a simple method is proposed to estimate and compensate the TD of the messages transmission, and then another effective method is presented to overcome the SSE caused by the jitter of the clocks. An experimental system with three ANs is set up, and the related experimental results verify the validity of the synchronization sampling method proposed in this paper.
NASA Astrophysics Data System (ADS)
Jiang, Jia-Jia; Duan, Fa-Jie; Li, Yan-Chao; Hua, Xiang-Ning
2014-03-01
Synchronization sampling is very important in underwater towed array system where every acquisition node (AN) samples analog signals by its own analog-digital converter (ADC). In this paper, a simple and effective synchronization sampling method is proposed to ensure synchronized operation among different ANs of the underwater towed array system. We first present a master-slave synchronization sampling model, and then design a high accuracy phase-locked loop to synchronize all delta-sigma ADCs to a reference clock. However, when the master-slave synchronization sampling model is used, both the time-delay (TD) of messages traveling along the wired transmission medium and the jitter of the clocks will bring out synchronization sampling error (SSE). Therefore, a simple method is proposed to estimate and compensate the TD of the messages transmission, and then another effective method is presented to overcome the SSE caused by the jitter of the clocks. An experimental system with three ANs is set up, and the related experimental results verify the validity of the synchronization sampling method proposed in this paper.
Csupor, Dezso; Borcsa, Botond; Heydel, Barbara; Hohmann, Judit; Zupkó, István; Ma, Yan; Widowitz, Ute; Bauer, Rudolf
2011-10-01
In traditional Chinese medicine, Aconitum (Ranunculaceae) roots are only applied after processing. Nevertheless, several cases of poisoning by improperly processed aconite roots have been reported. The aim of this study was to develop a reliable analytical method to assess the amount of toxic aconite alkaloids in commercial aconite roots, and to compare this method with the commonly used total alkaloid content determination by titration. The content of mesaconitine, aconitine, and hypaconitine in 16 commercial samples of processed aconite roots was determined by an HPLC method and the total alkaloid content by indirect titration. Five samples were selected for in vivo toxicological investigation. In most of the commercial samples, toxic alkaloids were not detectable, or only traces were found. In four samples, we could detect >0.04% toxic aconite alkaloids, the highest with a content of 0.16%. The results of HPLC analysis were compared with the results obtained by titration, and no correlation was found between the two methods. The in vivo results reassured the validity of the HPLC determination. Samples with mesaconitine, aconitine, and hypaconitine content below the HPLC detection limit still contained up to 0.2% alkaloids determined by titration. Since titration of alkaloids gives no information selectively on the aconitine-type alkaloid content and toxicity of aconite roots this method is not appropriate for safety assessment. The HPLC method developed by us provides a quick and reliable assessment of toxicity and should be considered as a purity test in pharmacopoeia monographs.
Double sampling to estimate density and population trends in birds
Bart, Jonathan; Earnst, Susan L.
2002-01-01
We present a method for estimating density of nesting birds based on double sampling. The approach involves surveying a large sample of plots using a rapid method such as uncorrected point counts, variable circular plot counts, or the recently suggested double-observer method. A subsample of those plots is also surveyed using intensive methods to determine actual density. The ratio of the mean count on those plots (using the rapid method) to the mean actual density (as determined by the intensive searches) is used to adjust results from the rapid method. The approach works well when results from the rapid method are highly correlated with actual density. We illustrate the method with three years of shorebird surveys from the tundra in northern Alaska. In the rapid method, surveyors covered ~10 ha h-1 and surveyed each plot a single time. The intensive surveys involved three thorough searches, required ~3 h ha-1, and took 20% of the study effort. Surveyors using the rapid method detected an average of 79% of birds present. That detection ratio was used to convert the index obtained in the rapid method into an essentially unbiased estimate of density. Trends estimated from several years of data would also be essentially unbiased. Other advantages of double sampling are that (1) the rapid method can be changed as new methods become available, (2) domains can be compared even if detection rates differ, (3) total population size can be estimated, and (4) valuable ancillary information (e.g. nest success) can be obtained on intensive plots with little additional effort. We suggest that double sampling be used to test the assumption that rapid methods, such as variable circular plot and double-observer methods, yield density estimates that are essentially unbiased. The feasibility of implementing double sampling in a range of habitats needs to be evaluated.
[Individual Identification of Cartilage by Direct Amplification in Mass Disasters].
Wang, C H; Xu, C; Li, X Q; Wu, Y; Du, Z
2017-06-01
To explore the effectiveness of direct amplification for the STR analysis of cartilage, and to accelerate the effectiveness of disaster victim identification. Eighty-eight cartilage samples were directly amplified by PowerPle® 21 kit, and the results of genotyping were compared with that obtained by the magnetic beads method. In 88 cartilage samples, the STR genotypes were successfully detected from 84 samples by direct amplification and magnetic beads method, and both the results of genotyping by two method were consistent. Direct amplification with PowerPlex® 21 kit can be used for STR genotyping of cartilages. This method is operated easily and promptly, which has a potential application in the individual identification of mass disasters. Copyright© by the Editorial Department of Journal of Forensic Medicine
Alles, Susan; Peng, Linda X; Mozola, Mark A
2009-01-01
A modification to Performance-Tested Method 010403, GeneQuence Listeria Test (DNAH method), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C, and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there were statistically significant differences in method performance between the DNAH method and reference culture procedures for only 2 foods (pasteurized crab meat and lettuce) at the 27 h enrichment time point and for only a single food (pasteurized crab meat) in one trial at the 30 h enrichment time point. Independent laboratory testing with 3 foods showed statistical equivalence between the methods for all foods, and results support the findings of the internal trials. Overall, considering both internal and independent laboratory trials, sensitivity of the DNAH method relative to the reference culture procedures was 90.5%. Results of testing 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the DNAH method was more productive than the reference U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the DNAH method at the 24 h time point. Overall, sensitivity of the DNAH method at 24 h relative to that of the USDA-FSIS method was 152%. The DNAH method exhibited extremely high specificity, with only 1% false-positive reactions overall.
Two research studies funded and overseen by EPA have been conducted since October 2006 on soil gas sampling methods and variations in shallow soil gas concentrations with the purpose of improving our understanding of soil gas methods and data for vapor intrusion applications. Al...
Marucci, Gianluca; Pezzotti, Patrizio; Pozio, Edoardo
2009-02-23
To control Trichinella spp. infection in the European Union, all slaughtered pigs should be tested by one of the approved digestion methods described in EU directive 2075/2005. The aim of the present work was to evaluate, by a ring trial, the sensitivity of the digestion method used at the National Reference Laboratories for Parasites (NRLP). These Laboratories are responsible for the quality of the detection method in their own country. Of the 27 EU countries, only three (Hungary, Luxembourg and Malta) did not participate in the ring trial. Each participating laboratory received 10 samples of 100g of minced pork containing 3-5 larvae (3 samples), 10-20 larvae (3 samples), 30-50 larvae (3 samples), and one negative control. In each positive sample, there were living Trichinella spiralis larvae without the collagen capsule, obtained by partial artificial digestion of muscle tissue from infected mice. No false positive sample was found in any laboratories, whereas nine laboratories (37.5%) failed to detect some positive samples with the percentage of false negatives ranging from 11 to 100%. The variation between expected and reported larval counts observed among the participating laboratories was statistically significant. There was a direct correlation between the consistency of the results and the use of a validated/accredited digestion method. Conversely, there was no correlation between the consistency of the results and the number of digestions performed yearly by the NRLP. These results support the importance of validating the test.
Garelnabi, Mahdi; Litvinov, Dmitry; Parthasarathy, Sampath
2010-01-01
Background: Azelaic acid (AzA) is the best known dicarboxilic acid to have pharmaceutical benefits and clinical applications and also to be associated with some diseases pathophysiology. Materials and Methods: We extracted and methylesterified AzA and determined its concentration in human plasma obtained from healthy individuals and also in mice fed AzA containing diet for three months. Results: AzA was detected in Gas Chromatography (GC) and confirmed by Liquid chromatography mass spectrometry (LCMS), and gas chromatography mass spectrometry (GCMC). Our results have shown that AzA can be determined efficiently in selected biological samples by GC method with 1nM limit of detection (LoD) and the limit of quantification (LoQ); was established at 50nM. Analytical Sensitivity as assayed by hexane demonstrated an analytical sensitivity at 0.050nM. The method has demonstrated 8-10% CV batch repeatability across the sample types and 13-18.9% CV for the Within-Lab Precision analysis. The method has shown that AzA can efficiently be recovered from various sample preparation including liver tissue homogenate (95%) and human plasma (97%). Conclusions: Because of its simplicity and lower limit of quantification, the present method provides a useful tool for determining AzA in various biological sample preparations. PMID:22558586
Potentiometric detection in UPLC as an easy alternative to determine cocaine in biological samples.
Daems, Devin; van Nuijs, Alexander L N; Covaci, Adrian; Hamidi-Asl, Ezat; Van Camp, Guy; Nagels, Luc J
2015-07-01
The analytical methods which are often used for the determination of cocaine in complex biological matrices are a prescreening immunoassay and confirmation by chromatography combined with mass spectrometry. We suggest an ultra-high-pressure liquid chromatography combined with a potentiometric detector, as a fast and practical method to detect and quantify cocaine in biological samples. An adsorption/desorption model was used to investigate the usefulness of the potentiometric detector to determine cocaine in complex matrices. Detection limits of 6.3 ng mL(-1) were obtained in plasma and urine, which is below the maximum residue limit (MRL) of 25 ng mL(-1). A set of seven plasma samples and 10 urine samples were classified identically by both methods as exceeding the MRL or being inferior to it. The results obtained with the UPLC/potentiometric detection method were compared with the results obtained with the UPLC/MS method for samples spiked with varying cocaine concentrations. The intraclass correlation coefficient was 0.997 for serum (n =7) and 0.977 for urine (n =8). As liquid chromatography is an established technique, and as potentiometry is very simple and cost-effective in terms of equipment, we believe that this method is potentially easy, inexpensive, fast and reliable. Copyright © 2014 John Wiley & Sons, Ltd.
Sample preparation of metal alloys by electric discharge machining
NASA Technical Reports Server (NTRS)
Chapman, G. B., II; Gordon, W. A.
1976-01-01
Electric discharge machining was investigated as a noncontaminating method of comminuting alloys for subsequent chemical analysis. Particulate dispersions in water were produced from bulk alloys at a rate of about 5 mg/min by using a commercially available machining instrument. The utility of this approach was demonstrated by results obtained when acidified dispersions were substituted for true acid solutions in an established spectrochemical method. The analysis results were not significantly different for the two sample forms. Particle size measurements and preliminary results from other spectrochemical methods which require direct aspiration of liquid into flame or plasma sources are reported.
Microwave absorption properties of gold nanoparticle doped polymers
NASA Astrophysics Data System (ADS)
Jiang, C.; Ouattara, L.; Ingrosso, C.; Curri, M. L.; Krozer, V.; Boisen, A.; Jakobsen, M. H.; Johansen, T. K.
2011-03-01
This paper presents a method for characterizing microwave absorption properties of gold nanoparticle doped polymers. The method is based on on-wafer measurements at the frequencies from 0.5 GHz to 20 GHz. The on-wafer measurement method makes it possible to characterize electromagnetic (EM) property of small volume samples. The epoxy based SU8 polymer and SU8 doped with gold nanoparticles are chosen as the samples under test. Two types of microwave test devices are designed for exciting the samples through electrical coupling and magnetic coupling, respectively. Measurement results demonstrate that the nanocomposites absorb a certain amount of microwave energy due to gold nanoparticles. Higher nanoparticle concentration results in more significant absorption effect.
Estill, Cheryl Fairfield; Baron, Paul A.; Beard, Jeremy K.; Hein, Misty J.; Larsen, Lloyd D.; Rose, Laura; Schaefer, Frank W.; Noble-Wang, Judith; Hodges, Lisa; Lindquist, H. D. Alan; Deye, Gregory J.; Arduino, Matthew J.
2009-01-01
After the 2001 anthrax incidents, surface sampling techniques for biological agents were found to be inadequately validated, especially at low surface loadings. We aerosolized Bacillus anthracis Sterne spores within a chamber to achieve very low surface loading (ca. 3, 30, and 200 CFU per 100 cm2). Steel and carpet coupons seeded in the chamber were sampled with swab (103 cm2) or wipe or vacuum (929 cm2) surface sampling methods and analyzed at three laboratories. Agar settle plates (60 cm2) were the reference for determining recovery efficiency (RE). The minimum estimated surface concentrations to achieve a 95% response rate based on probit regression were 190, 15, and 44 CFU/100 cm2 for sampling steel surfaces and 40, 9.2, and 28 CFU/100 cm2 for sampling carpet surfaces with swab, wipe, and vacuum methods, respectively; however, these results should be cautiously interpreted because of high observed variability. Mean REs at the highest surface loading were 5.0%, 18%, and 3.7% on steel and 12%, 23%, and 4.7% on carpet for the swab, wipe, and vacuum methods, respectively. Precision (coefficient of variation) was poor at the lower surface concentrations but improved with increasing surface concentration. The best precision was obtained with wipe samples on carpet, achieving 38% at the highest surface concentration. The wipe sampling method detected B. anthracis at lower estimated surface concentrations and had higher RE and better precision than the other methods. These results may guide investigators to more meaningfully conduct environmental sampling, quantify contamination levels, and conduct risk assessment for humans. PMID:19429546
Estill, Cheryl Fairfield; Baron, Paul A; Beard, Jeremy K; Hein, Misty J; Larsen, Lloyd D; Rose, Laura; Schaefer, Frank W; Noble-Wang, Judith; Hodges, Lisa; Lindquist, H D Alan; Deye, Gregory J; Arduino, Matthew J
2009-07-01
After the 2001 anthrax incidents, surface sampling techniques for biological agents were found to be inadequately validated, especially at low surface loadings. We aerosolized Bacillus anthracis Sterne spores within a chamber to achieve very low surface loading (ca. 3, 30, and 200 CFU per 100 cm(2)). Steel and carpet coupons seeded in the chamber were sampled with swab (103 cm(2)) or wipe or vacuum (929 cm(2)) surface sampling methods and analyzed at three laboratories. Agar settle plates (60 cm(2)) were the reference for determining recovery efficiency (RE). The minimum estimated surface concentrations to achieve a 95% response rate based on probit regression were 190, 15, and 44 CFU/100 cm(2) for sampling steel surfaces and 40, 9.2, and 28 CFU/100 cm(2) for sampling carpet surfaces with swab, wipe, and vacuum methods, respectively; however, these results should be cautiously interpreted because of high observed variability. Mean REs at the highest surface loading were 5.0%, 18%, and 3.7% on steel and 12%, 23%, and 4.7% on carpet for the swab, wipe, and vacuum methods, respectively. Precision (coefficient of variation) was poor at the lower surface concentrations but improved with increasing surface concentration. The best precision was obtained with wipe samples on carpet, achieving 38% at the highest surface concentration. The wipe sampling method detected B. anthracis at lower estimated surface concentrations and had higher RE and better precision than the other methods. These results may guide investigators to more meaningfully conduct environmental sampling, quantify contamination levels, and conduct risk assessment for humans.
Capik, Sarah F; White, Brad J; Lubbers, Brian V; Apley, Michael D; DeDonder, Keith D; Larson, Robert L; Harhay, Greg P; Chitko-McKown, Carol G; Harhay, Dayna M; Kalbfleisch, Ted S; Schuller, Gennie; Clawson, Michael L
2017-03-01
OBJECTIVE To compare predictive values, extent of agreement, and gamithromycin susceptibility between bacterial culture results of nasopharyngeal swab (NPS) and bronchoalveolar lavage fluid (BALF) samples obtained from calves with bovine respiratory disease (BRD). ANIMALS 28 beef calves with clinical BRD. PROCEDURES Pooled bilateral NPS samples and BALF samples were obtained for bacterial culture from calves immediately before and at various times during the 5 days after gamithromycin (6 mg/kg, SC, once) administration. For each culture-positive sample, up to 12 Mannheimia haemolytica, 6 Pasteurella multocida, and 6 Histophilus somni colonies underwent gamithromycin susceptibility testing. Whole-genome sequencing was performed on all M haemolytica isolates. For paired NPS and BALF samples collected 5 days after gamithromycin administration, the positive and negative predictive values for culture results of NPS samples relative to those of BALF samples and the extent of agreement between the sampling methods were determined. RESULTS Positive and negative predictive values of NPS samples were 67% and 100% for M haemolytica, 75% and 100% for P multocida, and 100% and 96% for H somni. Extent of agreement between results for NPS and BALF samples was substantial for M haemolytica (κ, 0.71) and H somni (κ, 0.78) and almost perfect for P multocida (κ, 0.81). Gamithromycin susceptibility varied within the same sample and between paired NPS and BALF samples. CONCLUSIONS AND CLINICAL RELEVANCE Results indicated culture results of NPS and BALF samples from calves with BRD should be interpreted cautiously considering disease prevalence within the population, sample collection relative to antimicrobial administration, and limitations of diagnostic testing methods.
Regan, Rainy D; Fenyk-Melody, Judy E; Tran, Sam M; Chen, Guang; Stocking, Kim L
2016-01-01
Nonterminal blood sample collection of sufficient volume and quality for research is complicated in mice due to their small size and anatomy. Large (>100 μL) nonterminal volumes of unhemolyzed or unclotted blood currently are typically collected from the retroorbital sinus or submandibular plexus. We developed a third method—submental blood collection—which is similar in execution to the submandibular method but with minor changes in animal restraint and collection location. Compared with other techniques, submental collection is easier to perform due to the direct visibility of the target vessels, which are located in a sparsely furred region. Compared with the submandibular method, the submental method did not differ regarding weight change and clotting score but significantly decreased hemolysis and increased the overall number of high-quality samples. The submental method was performed with smaller lancets for the majority of the bleeds, yet resulted in fewer repeat collection attempts, fewer insufficient samples, and less extraneous blood loss and was qualitatively less traumatic. Compared with the retroorbital technique, the submental method was similar regarding weight change but decreased hemolysis, clotting, and the number of overall high-quality samples; however the retroorbital method resulted in significantly fewer incidents of insufficient sample collection. Extraneous blood loss was roughly equivalent between the submental and retroorbital methods. We conclude that the submental method is an acceptable venipuncture technique for obtaining large, nonterminal volumes of blood from mice. PMID:27657712
Study on Measuring the Viscosity of Lubricating Oil by Viscometer Based on Hele - Shaw Principle
NASA Astrophysics Data System (ADS)
Li, Longfei
2017-12-01
In order to explore the method of accurately measuring the viscosity value of oil samples using the viscometer based on Hele-Shaw principle, three different measurement methods are designed in the laboratory, and the statistical characteristics of the measured values are compared, in order to get the best measurement method. The results show that the oil sample to be measured is placed in the magnetic field formed by the magnet, and the oil sample can be sucked from the same distance from the magnet. The viscosity value of the sample can be measured accurately.
Tao, Guohua; Miller, William H
2011-07-14
An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate the efficient sampling the phase space of initial conditions. The method can be generally applied to sampling rare events efficiently while avoiding being trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
K-Nearest Neighbor Algorithm Optimization in Text Categorization
NASA Astrophysics Data System (ADS)
Chen, Shufeng
2018-01-01
K-Nearest Neighbor (KNN) classification algorithm is one of the simplest methods of data mining. It has been widely used in classification, regression and pattern recognition. The traditional KNN method has some shortcomings such as large amount of sample computation and strong dependence on the sample library capacity. In this paper, a method of representative sample optimization based on CURE algorithm is proposed. On the basis of this, presenting a quick algorithm QKNN (Quick k-nearest neighbor) to find the nearest k neighbor samples, which greatly reduces the similarity calculation. The experimental results show that this algorithm can effectively reduce the number of samples and speed up the search for the k nearest neighbor samples to improve the performance of the algorithm.
Berlinger, Balazs; Harper, Martin
2018-02-01
There is interest in the bioaccessible metal components of aerosols, but this has been minimally studied because standardized sampling and analytical methods have not yet been developed. An interlaboratory study (ILS) has been carried out to evaluate a method for determining the water-soluble component of realistic welding fume (WF) air samples. Replicate samples were generated in the laboratory and distributed to participating laboratories to be analyzed according to a standardized procedure. Within-laboratory precision of replicate sample analysis (repeatability) was very good. Reproducibility between laboratories was not as good, but within limits of acceptability for the analysis of typical aerosol samples. These results can be used to support the development of a standardized test method.
The Beginner's Guide to the Bootstrap Method of Resampling.
ERIC Educational Resources Information Center
Lane, Ginny G.
The bootstrap method of resampling can be useful in estimating the replicability of study results. The bootstrap procedure creates a mock population from a given sample of data from which multiple samples are then drawn. The method extends the usefulness of the jackknife procedure as it allows for computation of a given statistic across a maximal…
An intercomparison of five ammonia measurement techniques
NASA Technical Reports Server (NTRS)
Williams, E. J.; Sandholm, S. T.; Bradshaw, J. D.; Schendel, J. S.; Langford, A. O.; Quinn, P. K.; Lebel, P. J.; Vay, S. A.; Roberts, P. D.; Norton, R. B.
1992-01-01
Results obtained from five techniques for measuring gas-phase ammonia at low concentration in the atmosphere are compared. These methods are: (1) a photofragmentation/laser-induced fluorescence (PF/LIF) instrument; (2) a molybdenum oxide annular denuder sampling/chemiluminescence detection technique; (3) a tungsten oxide denuder sampling/chemiluminescence detection system; (4) a citric-acid-coated denuder sampling/ion chromatographic analysis (CAD/IC) method; and (5) an oxalic-acid-coated filter pack sampling/colorimetric analysis method. It was found that two of the techniques, the PF/LIF and the CAD/IC methods, measured approximately 90 percent of the calculated ammonia added in the spiking tests and agreed very well with each other in the ambient measurements.
Hügler, Michael; Böckle, Karin; Eberhagen, Ingrid; Thelen, Karin; Beimfohr, Claudia; Hambsch, Beate
2011-01-01
Monitoring of microbiological contaminants in water supplies requires fast and sensitive methods for the specific detection of indicator organisms or pathogens. We developed a protocol for the simultaneous detection of E. coli and coliform bacteria based on the Fluorescence in situ Hybridization (FISH) technology. This protocol consists of two approaches. The first allows the direct detection of single E. coli and coliform bacterial cells on the filter membranes. The second approach includes incubation of the filter membranes on a nutrient agar plate and subsequent detection of the grown micro-colonies. Both approaches were validated using drinking water samples spiked with pure cultures and naturally contaminated water samples. The effects of heat, chlorine and UV disinfection were also investigated. The micro-colony approach yielded very good results for all samples and conditions tested, and thus can be thoroughly recommended for usage as an alternative method to detect E. coli and coliform bacteria in water samples. However, during this study, some limitations became visible for the single cell approach. The method cannot be applied for water samples which have been disinfected by UV irradiation. In addition, our results indicated that green fluorescent dyes are not suitable to be used with chlorine disinfected samples.
Zheng, Lu; Gao, Naiyun; Deng, Yang
2012-01-01
It is difficult to isolate DNA from biological activated carbon (BAC) samples used in water treatment plants, owing to the scarcity of microorganisms in BAC samples. The aim of this study was to identify DNA extraction methods suitable for a long-term, comprehensive ecological analysis of BAC microbial communities. To identify a procedure that can produce high molecular weight DNA, maximizes detectable diversity and is relatively free from contaminants, the microwave extraction method, the cetyltrimethylammonium bromide (CTAB) extraction method, a commercial DNA extraction kit, and the ultrasonic extraction method were used for the extraction of DNA from BAC samples. Spectrophotometry, agarose gel electrophoresis and polymerase chain reaction (PCR)-restriction fragment length polymorphisms (RFLP) analysis were conducted to compare the yield and quality of DNA obtained using these methods. The results showed that the CTAB method produce the highest yield and genetic diversity of DNA from BAC samples, but DNA purity was slightly less than that obtained with the DNA extraction-kit method. This study provides a theoretical basis for establishing and selecting DNA extraction methods for BAC samples.
Drummond, A; Rodrigo, A G
2000-12-01
Reconstruction of evolutionary relationships from noncontemporaneous molecular samples provides a new challenge for phylogenetic reconstruction methods. With recent biotechnological advances there has been an increase in molecular sequencing throughput, and the potential to obtain serial samples of sequences from populations, including rapidly evolving pathogens, is fast being realized. A new method called the serial-sample unweighted pair grouping method with arithmetic means (sUPGMA) is presented that reconstructs a genealogy or phylogeny of sequences sampled serially in time using a matrix of pairwise distances. The resulting tree depicts the terminal lineages of each sample ending at a different level consistent with the sample's temporal order. Since sUPGMA is a variant of UPGMA, it will perform best when sequences have evolved at a constant rate (i.e., according to a molecular clock). On simulated data, this new method performs better than standard cluster analysis under a variety of longitudinal sampling strategies. Serial-sample UPGMA is particularly useful for analysis of longitudinal samples of viruses and bacteria, as well as ancient DNA samples, with the minimal requirement that samples of sequences be ordered in time.
Modeling the Sensitivity of Field Surveys for Detection of Environmental DNA (eDNA)
Schultz, Martin T.; Lance, Richard F.
2015-01-01
The environmental DNA (eDNA) method is the practice of collecting environmental samples and analyzing them for the presence of a genetic marker specific to a target species. Little is known about the sensitivity of the eDNA method. Sensitivity is the probability that the target marker will be detected if it is present in the water body. Methods and tools are needed to assess the sensitivity of sampling protocols, design eDNA surveys, and interpret survey results. In this study, the sensitivity of the eDNA method is modeled as a function of ambient target marker concentration. The model accounts for five steps of sample collection and analysis, including: 1) collection of a filtered water sample from the source; 2) extraction of DNA from the filter and isolation in a purified elution; 3) removal of aliquots from the elution for use in the polymerase chain reaction (PCR) assay; 4) PCR; and 5) genetic sequencing. The model is applicable to any target species. For demonstration purposes, the model is parameterized for bighead carp (Hypophthalmichthys nobilis) and silver carp (H. molitrix) assuming sampling protocols used in the Chicago Area Waterway System (CAWS). Simulation results show that eDNA surveys have a high false negative rate at low concentrations of the genetic marker. This is attributed to processing of water samples and division of the extraction elution in preparation for the PCR assay. Increases in field survey sensitivity can be achieved by increasing sample volume, sample number, and PCR replicates. Increasing sample volume yields the greatest increase in sensitivity. It is recommended that investigators estimate and communicate the sensitivity of eDNA surveys to help facilitate interpretation of eDNA survey results. In the absence of such information, it is difficult to evaluate the results of surveys in which no water samples test positive for the target marker. It is also recommended that invasive species managers articulate concentration-based sensitivity objectives for eDNA surveys. In the absence of such information, it is difficult to design appropriate sampling protocols. The model provides insights into how sampling protocols can be designed or modified to achieve these sensitivity objectives. PMID:26509674
Modeling the Sensitivity of Field Surveys for Detection of Environmental DNA (eDNA).
Schultz, Martin T; Lance, Richard F
2015-01-01
The environmental DNA (eDNA) method is the practice of collecting environmental samples and analyzing them for the presence of a genetic marker specific to a target species. Little is known about the sensitivity of the eDNA method. Sensitivity is the probability that the target marker will be detected if it is present in the water body. Methods and tools are needed to assess the sensitivity of sampling protocols, design eDNA surveys, and interpret survey results. In this study, the sensitivity of the eDNA method is modeled as a function of ambient target marker concentration. The model accounts for five steps of sample collection and analysis, including: 1) collection of a filtered water sample from the source; 2) extraction of DNA from the filter and isolation in a purified elution; 3) removal of aliquots from the elution for use in the polymerase chain reaction (PCR) assay; 4) PCR; and 5) genetic sequencing. The model is applicable to any target species. For demonstration purposes, the model is parameterized for bighead carp (Hypophthalmichthys nobilis) and silver carp (H. molitrix) assuming sampling protocols used in the Chicago Area Waterway System (CAWS). Simulation results show that eDNA surveys have a high false negative rate at low concentrations of the genetic marker. This is attributed to processing of water samples and division of the extraction elution in preparation for the PCR assay. Increases in field survey sensitivity can be achieved by increasing sample volume, sample number, and PCR replicates. Increasing sample volume yields the greatest increase in sensitivity. It is recommended that investigators estimate and communicate the sensitivity of eDNA surveys to help facilitate interpretation of eDNA survey results. In the absence of such information, it is difficult to evaluate the results of surveys in which no water samples test positive for the target marker. It is also recommended that invasive species managers articulate concentration-based sensitivity objectives for eDNA surveys. In the absence of such information, it is difficult to design appropriate sampling protocols. The model provides insights into how sampling protocols can be designed or modified to achieve these sensitivity objectives.
Vogel, Laura J; Edge, Thomas A; O'Carroll, Denis M; Solo-Gabriele, Helena M; Kushnir, Caitlin S E; Robinson, Clare E
2017-09-15
Fecal indicator bacteria (FIB) are known to accumulate in foreshore beach sand and pore water (referred to as foreshore reservoir) where they act as a non-point source for contaminating adjacent surface waters. While guidelines exist for sampling surface waters at recreational beaches, there is no widely-accepted method to collect sand/sediment or pore water samples for FIB enumeration. The effect of different sampling strategies in quantifying the abundance of FIB in the foreshore reservoir is unclear. Sampling was conducted at six freshwater beaches with different sand types to evaluate sampling methods for characterizing the abundance of E. coli in the foreshore reservoir as well as the partitioning of E. coli between different components in the foreshore reservoir (pore water, saturated sand, unsaturated sand). Methods were evaluated for collection of pore water (drive point, shovel, and careful excavation), unsaturated sand (top 1 cm, top 5 cm), and saturated sand (sediment core, shovel, and careful excavation). Ankle-depth surface water samples were also collected for comparison. Pore water sampled with a shovel resulted in the highest observed E. coli concentrations (only statistically significant at fine sand beaches) and lowest variability compared to other sampling methods. Collection of the top 1 cm of unsaturated sand resulted in higher and more variable concentrations than the top 5 cm of sand. There were no statistical differences in E. coli concentrations when using different methods to sample the saturated sand. Overall, the unsaturated sand had the highest amount of E. coli when compared to saturated sand and pore water (considered on a bulk volumetric basis). The findings presented will help determine the appropriate sampling strategy for characterizing FIB abundance in the foreshore reservoir as a means of predicting its potential impact on nearshore surface water quality and public health risk. Copyright © 2017 Elsevier Ltd. All rights reserved.
Magnuson, Matthew; Campisano, Romy; Griggs, John; Fitz-James, Schatzi; Hall, Kathy; Mapp, Latisha; Mullins, Marissa; Nichols, Tonya; Shah, Sanjiv; Silvestri, Erin; Smith, Terry; Willison, Stuart; Ernst, Hiba
2014-11-01
Catastrophic incidents can generate a large number of samples of analytically diverse types, including forensic, clinical, environmental, food, and others. Environmental samples include water, wastewater, soil, air, urban building and infrastructure materials, and surface residue. Such samples may arise not only from contamination from the incident but also from the multitude of activities surrounding the response to the incident, including decontamination. This document summarizes a range of activities to help build laboratory capability in preparation for sample analysis following a catastrophic incident, including selection and development of fit-for-purpose analytical methods for chemical, biological, and radiological contaminants. Fit-for-purpose methods are those which have been selected to meet project specific data quality objectives. For example, methods could be fit for screening contamination in the early phases of investigation of contamination incidents because they are rapid and easily implemented, but those same methods may not be fit for the purpose of remediating the environment to acceptable levels when a more sensitive method is required. While the exact data quality objectives defining fitness-for-purpose can vary with each incident, a governing principle of the method selection and development process for environmental remediation and recovery is based on achieving high throughput while maintaining high quality analytical results. This paper illustrates the result of applying this principle, in the form of a compendium of analytical methods for contaminants of interest. The compendium is based on experience with actual incidents, where appropriate and available. This paper also discusses efforts aimed at adaptation of existing methods to increase fitness-for-purpose and development of innovative methods when necessary. The contaminants of interest are primarily those potentially released through catastrophes resulting from malicious activity. However, the same techniques discussed could also have application to catastrophes resulting from other incidents, such as natural disasters or industrial accidents. Further, the high sample throughput enabled by the techniques discussed could be employed for conventional environmental studies and compliance monitoring, potentially decreasing costs and/or increasing the quantity of data available to decision-makers. Published by Elsevier Ltd.
Numerical simulation and analysis for low-frequency rock physics measurements
NASA Astrophysics Data System (ADS)
Dong, Chunhui; Tang, Genyang; Wang, Shangxu; He, Yanxiao
2017-10-01
In recent years, several experimental methods have been introduced to measure the elastic parameters of rocks in the relatively low-frequency range, such as differential acoustic resonance spectroscopy (DARS) and stress-strain measurement. It is necessary to verify the validity and feasibility of the applied measurement method and to quantify the sources and levels of measurement error. Relying solely on the laboratory measurements, however, we cannot evaluate the complete wavefield variation in the apparatus. Numerical simulations of elastic wave propagation, on the other hand, are used to model the wavefield distribution and physical processes in the measurement systems, and to verify the measurement theory and analyze the measurement results. In this paper we provide a numerical simulation method to investigate the acoustic waveform response of the DARS system and the quasi-static responses of the stress-strain system, both of which use axisymmetric apparatus. We applied this method to parameterize the properties of the rock samples, the sample locations and the sensor (hydrophone and strain gauges) locations and simulate the measurement results, i.e. resonance frequencies and axial and radial strains on the sample surface, from the modeled wavefield following the physical experiments. Rock physical parameters were estimated by inversion or direct processing of these data, and showed a perfect match with the true values, thus verifying the validity of the experimental measurements. Error analysis was also conducted for the DARS system with 18 numerical samples, and the sources and levels of error are discussed. In particular, we propose an inversion method for estimating both density and compressibility of these samples. The modeled results also showed fairly good agreement with the real experiment results, justifying the effectiveness and feasibility of our modeling method.
Long, Ju
2016-05-01
In China, -(SEA), -α(3.7) and -α(4.2) are common deletional α-thalassemia alleles. Gap-PCR is the currently used detection method for these alleles, whose disadvantages include time-consuming procedure and increased potential for PCR product contamination. Therefore, this detection method needs to be improved. Based on identical-primer homologous fragments, a qPCR system was developed for deletional α-thalassemia genotyping, which was composed of a group of quantitatively-related primers and their corresponding probes plus two groups of qualitatively-related primers and their corresponding probes. In order to verify the accuracy of the qPCR system, known genotype samples and random samples are employed. The standard curve result demonstrated that designed primers and probes all yielded good amplification efficiency. In the tests of known genotype samples and random samples, sample detection results were consistent with verification results. In detecting αα, -(SEA), -α(3.7) and -α(4.2) alleles, deletional α-thalassemia alleles are accurately detected by this method. In addition, this method is provided with a wider detection range, greater speed and reduced PCR product contamination risk when compared with current common gap-PCR detection reagents. Copyright © 2016 Elsevier B.V. All rights reserved.
Canon, Abbey J; Lauterbach, Nicholas; Bates, Jessica; Skoland, Kristin; Thomas, Paul; Ellingson, Josh; Ruston, Chelsea; Breuer, Mary; Gerardy, Kimberlee; Hershberger, Nicole; Hayman, Kristen; Buckley, Alexis; Holtkamp, Derald; Karriker, Locke
2017-06-15
OBJECTIVE To develop and evaluate a pyramid training method for teaching techniques for collection of diagnostic samples from swine. DESIGN Experimental trial. SAMPLE 45 veterinary students. PROCEDURES Participants went through a preinstruction assessment to determine their familiarity with the equipment needed and techniques used to collect samples of blood, nasal secretions, feces, and oral fluid from pigs. Participants were then shown a series of videos illustrating the correct equipment and techniques for collecting samples and were provided hands-on pyramid-based instruction wherein a single swine veterinarian trained 2 or 3 participants on each of the techniques and each of those participants, in turn, trained additional participants. Additional assessments were performed after the instruction was completed. RESULTS Following the instruction phase, percentages of participants able to collect adequate samples of blood, nasal secretions, feces, and oral fluid increased, as did scores on a written quiz assessing participants' ability to identify the correct equipment, positioning, and procedures for collection of samples. CONCLUSIONS AND CLINICAL RELEVANCE Results suggested that the pyramid training method may be a feasible way to rapidly increase diagnostic sampling capacity during an emergency veterinary response to a swine disease outbreak.
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
Wang, Yi-Ya; Zhan, Xiu-Chun
2014-04-01
Evaluating uncertainty of analytical results with 165 geological samples by polarized dispersive X-ray fluorescence spectrometry (P-EDXRF) has been reported according to the internationally accepted guidelines. One hundred sixty five pressed pellets of similar matrix geological samples with reliable values were analyzed by P-EDXRF. These samples were divided into several different concentration sections in the concentration ranges of every component. The relative uncertainties caused by precision and accuracy of 27 components were evaluated respectively. For one element in one concentration, the relative uncertainty caused by precision can be calculated according to the average value of relative standard deviation with different concentration level in one concentration section, n = 6 stands for the 6 results of one concentration level. The relative uncertainty caused by accuracy in one concentration section can be evaluated by the relative standard deviation of relative deviation with different concentration level in one concentration section. According to the error propagation theory, combining the precision uncertainty and the accuracy uncertainty into a global uncertainty, this global uncertainty acted as method uncertainty. This model of evaluating uncertainty can solve a series of difficult questions in the process of evaluating uncertainty, such as uncertainties caused by complex matrix of geological samples, calibration procedure, standard samples, unknown samples, matrix correction, overlap correction, sample preparation, instrument condition and mathematics model. The uncertainty of analytical results in this method can act as the uncertainty of the results of the similar matrix unknown sample in one concentration section. This evaluation model is a basic statistical method owning the practical application value, which can provide a strong base for the building of model of the following uncertainty evaluation function. However, this model used a lot of samples which cannot simply be applied to other types of samples with different matrix samples. The number of samples is too large to adapt to other type's samples. We will strive for using this study as a basis to establish a reasonable basis of mathematical statistics function mode to be applied to different types of samples.
Ferreira, L; Sánchez-Juanes, F; Muñoz-Bellido, J L; González-Buitrago, J M
2011-07-01
Matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometry (MS) is a fast and reliable technology for the identification of microorganisms with proteomics approaches. Here, we compare an intact cell method and a protein extraction method before application on the MALDI plate for the direct identification of microorganisms in both urine and blood culture samples from clinical microbiology laboratories. The results show that the intact cell method provides excellent results for urine and is a good initial method for blood cultures. The extraction method complements the intact cell method, improving microorganism identification from blood culture. Thus, we consider that MALDI-TOF MS performed directly on urine and blood culture samples, with the protocols that we propose, is a suitable technique for microorganism identification, as compared with the routine methods used in the clinical microbiology laboratory. © 2010 The Authors. Clinical Microbiology and Infection © 2010 European Society of Clinical Microbiology and Infectious Diseases.
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-04-01
Model based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation based methods for processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse as well as its first and second order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated for testing the functionality of the proposed method, then it was applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. Promising results by the proposed method endorse it for future real-time applications.
Jedynak, Łukasz; Jedynak, Maria; Kossykowska, Magdalena; Zagrodzka, Joanna
2017-02-20
An HPLC method with UV detection and separation with the use of a C30 reversed phase analytical column for the determination of chemical purity and assay of menaquinone-7 (MK7) in one chromatographic run was developed. The method is superior to the methods published in the USP Monograph in terms of selectivity, sensitivity and accuracy, as well as time, solvent and sample consumption. The developed methodology was applied to MK7 samples of active pharmaceutical ingredient (API) purity, MK7 samples of lower quality and crude MK7 samples before purification. The comparison of the results revealed that the use of USP methodology could lead to serious overestimation (up to a few percent) of both purity and MK7 assay in menaquinone-7 samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Sen, Indranil; Zou, Wei; Alvaran, Josephine; Nguyen, Linda; Gajek, Ryszard; She, Jianwen
2015-01-01
In order to better distinguish the different toxic inorganic and organic forms of arsenic (As) exposure in individuals, we have developed and validated a simple and robust analytical method for determining the following six As species in human urine: arsenous (III) acid (As-III), As (V) acid, monomethylarsonic acid, dimethylarsinic acid, arsenobetaine (AsB), and arsenocholine. In this method, human urine is diluted using a pH 5.8 buffer, separation is performed using an anion exchange column with isocratic HPLC, and detection is achieved using inductively coupled plasma-MS. The method uses a single mobile phase consisting of low concentrations of both phosphate buffer (5 mM) and ammonium nitrate salt (5 mM) at pH 9.0; this minimizes the column equilibration time and overcomes challenges with separation between AsB and As-III. In addition, As-III oxidation is prevented by degassing the sample preparation buffer at pH 5.8, degassing the mobile phase online at pH 9.0, and by the use of low temperature (-70 °C) and flip-cap airtight tubes for long term storage of samples. The method was validated using externally provided reference samples. Results were in agreement with target values at varying concentrations and successfully passed external performance test criteria. Internal QC samples were prepared and repeatedly analyzed to assess the method's long-term precision, and further analyses were completed on anonymous donor urine to assess the quality of the method's baseline separation. Results from analyses of external reference samples agreed with target values at varying concentrations, and results from precision studies yielded absolute CV values of 3-14% and recovery from 82 to 115% for the six As species. Analysis of anonymous donor urine confirmed the well-resolved baseline separation capabilities of the method for real participant samples.
Pyschik, Marcelina; Klein-Hitpaß, Marcel; Girod, Sabrina; Winter, Martin; Nowak, Sascha
2017-02-01
In this study, an optimized method using capillary electrophoresis (CE) with a direct contactless conductivity detector (C 4 D) for a new application field is presented for the quantification of fluoride in common used lithium ion battery (LIB) electrolyte using LiPF 6 in organic carbonate solvents and in ionic liquids (ILs) after contacted to Li metal. The method development for finding the right buffer and the suitable CE conditions for the quantification of fluoride was investigated. The results of the concentration of fluoride in different LIB electrolyte samples were compared to the results from the ion-selective electrode (ISE). The relative standard deviations (RSDs) and recovery rates for fluoride were obtained with a very high accuracy in both methods. The results of the fluoride concentration in the LIB electrolytes were in very good agreement for both methods. In addition, the limit of detection (LOD) and limit of quantification (LOQ) values were determined for the CE method. The CE method has been applied also for the quantification of fluoride in ILs. In the fresh IL sample, the concentration of fluoride was under the LOD. Another sample of the IL mixed with Li metal has been investigated as well. It was possible to quantify the fluoride concentration in this sample. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sanchez, J; Dohoo, I R; Markham, F; Leslie, K; Conboy, G
2002-10-16
An indirect enzyme-linked immunosorbent assay (ELISA) for the detection of antibodies against Ostertagia ostertagi using a crude adult worm antigen was evaluated using serum and milk samples from adult cows, as well as from bulk tank milk. Within and between plate repeatabilities were determined. In addition, the effects of factors such as antigen batch, freezing, preserving of the samples and somatic cell counts (SCCs) of the samples were evaluated. Raw optical densities (ODs) and normalized values were compared using the concordance correlation coefficient (CCC), the coefficient of variation (CV), Bland-Altman plots (BA). Based on raw OD values, there was a high repeatability within a plate (CCC approximately 0.96 and CV<10%). Repeatability between plates was evaluated following normalization of OD values by four methods. Computing normalized values as (OD-Nt)/(Pst-Nt), gave the most repeatable results, with the CCC being approximately 0.95 and the CV approximately 11%. When the OD values were higher than 1.2 and 0.3 for the positive and the negative controls, respectively, none of the normalization methods evaluated provided highly repeatable results and it was necessary to repeat the test. Two batches of the crude antigen preparation were evaluated for repeatability, and no difference was found (CCC=0.96). The use of preservative (bronopol) did not affect test results, nor did freezing the samples for up to 8 months. A significant positive relationship between ELISA OD for milk samples and SCC score was found. Therefore, the use of composite milk samples, which have less variable SCC than samples taken from each quarter, would be more suitable when the udder health status is unknown. The analytical methods used to evaluate repeatability provided a practical way to select among normalization procedures.
Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L
2013-08-01
Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.
Method for charging a hydrogen getter
Tracy, C.E.; Keyser, M.A.; Benson, D.K.
1998-09-15
A method for charging a sample of either a permanent or reversible getter material with a high concentration of hydrogen while maintaining a base pressure below 10{sup {minus}4} torr at room temperature involves placing the sample of hydrogen getter material in a chamber, activating the sample of hydrogen getter material, overcharging the sample of getter material through conventional charging techniques to a high concentration of hydrogen, and then subjecting the sample of getter material to a low temperature vacuum bake-out process. Application of the method results in a reversible hydrogen getter which is highly charged to maximum capacities of hydrogen and which concurrently exhibits minimum hydrogen vapor pressures at room temperatures. 9 figs.
NASA Astrophysics Data System (ADS)
Monchau, Jean-Pierre; Hameury, Jacques; Ausset, Patrick; Hay, Bruno; Ibos, Laurent; Candau, Yves
2018-05-01
Accurate knowledge of infrared emissivity is important in applications such as surface temperature measurements by infrared thermography or thermal balance for building walls. A comparison of total hemispherical emissivity measurement was performed by two laboratories: the Laboratoire National de Métrologie et d'Essais (LNE) and the Centre d'Études et de Recherche en Thermique, Environnement et Systèmes (CERTES). Both laboratories performed emissivity measurements on four samples, chosen to cover a large range of emissivity values and angular reflectance behaviors. The samples were polished aluminum (highly specular, low emissivity), bulk PVC (slightly specular, high emissivity), sandblasted aluminum (diffuse surface, medium emissivity), and aluminum paint (slightly specular surface, medium emissivity). Results obtained using five measurement techniques were compared. LNE used a calorimetric method for direct total hemispherical emissivity measurement [1], an absolute reflectometric measurement method [2], and a relative reflectometric measurement method. CERTES used two total hemispherical directional reflectometric measurement methods [3, 4]. For indirect techniques by reflectance measurements, the total hemispherical emissivity values were calculated from directional hemispherical reflectance measurement results using spectral integration when required and directional to hemispherical extrapolation. Results were compared, taking into account measurement uncertainties; an added uncertainty was introduced to account for heterogeneity over the surfaces of the samples and between samples. All techniques gave large relative uncertainties for a low emissive and very specular material (polished aluminum), and results were quite scattered. All the indirect techniques by reflectance measurement gave results within ±0.01 for a high emissivity material. A commercial aluminum paint appears to be a good candidate for producing samples with medium level of emissivity (about 0.4) and with good uniformity of emissivity values (within ±0.015).
Paudyal, Priyamvada; Llewellyn, Carrie; Lau, Jason; Mahmud, Mohammad; Smith, Helen
2015-01-01
Background Routine screening is key to sexually transmitted infection (STI) prevention and control. Previous studies suggest that clinic-based screening programmes capture only a small proportion of people with STIs. Self-sampling using non- or minimally invasive techniques may be beneficial for those reluctant to actively engage with conventional sampling methods. We systematically reviewed studies of patients’ experiences of obtaining self-samples to diagnose curable STIs. Methods We conducted an electronic search of MEDLINE, EMBASE, CINAHL, PsychINFO, BNI, and Cochrane Database of Systematic Reviews to identify relevant articles published in English between January 1980 and March 2014. Studies were included if participants self-sampled for the diagnosis of a curable STI and had specifically sought participants’ opinions of their experience, acceptability, preferences, or willingness to self-sample. Results The initial search yielded 558 references. Of these, 45 studies met the inclusion criteria. Thirty-six studies assessed patients’ acceptability and experiences of self-sampling. Pooled results from these studies shows that self-sampling is a highly acceptable method with 85% of patients reporting the method to be well received and acceptable. Twenty-eight studies reported on ease of self-sampling; the majority of patients (88%) in these studies found self-sampling an “easy” procedure. Self-sampling was favoured compared to clinician sampling, and home sampling was preferred to clinic-based sampling. Females and older participants were more accepting of self-sampling. Only a small minority of participants (13%) reported pain during self-sampling. Participants were willing to undergo self-sampling and recommend others. Privacy and safety were the most common concerns. Conclusion Self-sampling for diagnostic testing is well accepted with the majority having a positive experience and willingness to use again. Standardization of self-sampling procedures and rigorous validation of outcome measurement will lead to better comparability across studies. Future studies need to conduct rigorous economic evaluations of self-sampling to inform policy development for the management of STI. PMID:25909508
Anderson, Annette Carola; Hellwig, Elmar; Vespermann, Robin; Wittmer, Annette; Schmid, Michael; Karygianni, Lamprini; Al-Ahmad, Ali
2012-01-01
Persistence of microorganisms or reinfections are the main reasons for failure of root canal therapy. Very few studies to date have included culture-independent methods to assess the microbiota, including non-cultivable microorganisms. The aim of this study was to combine culture methods with culture-independent cloning methods to analyze the microbial flora of root-filled teeth with periradicular lesions. Twenty-one samples from previously root-filled teeth were collected from patients with periradicular lesions. Microorganisms were cultivated, isolated and biochemically identified. In addition, ribosomal DNA of bacteria, fungi and archaea derived from the same samples was amplified and the PCR products were used to construct clone libraries. DNA of selected clones was sequenced and microbial species were identified, comparing the sequences with public databases. Microorganisms were found in 12 samples with culture-dependent and -independent methods combined. The number of bacterial species ranged from 1 to 12 in one sample. The majority of the 26 taxa belonged to the phylum Firmicutes (14 taxa), followed by Actinobacteria, Proteobacteria and Bacteroidetes. One sample was positive for fungi, and archaea could not be detected. The results obtained with both methods differed. The cloning technique detected several as-yet-uncultivated taxa. Using a combination of both methods 13 taxa were detected that had not been found in root-filled teeth so far. Enterococcus faecalis was only detected in two samples using culture methods. Combining the culture-dependent and –independent approaches revealed new candidate endodontic pathogens and a high diversity of the microbial flora in root-filled teeth with periradicular lesions. Both methods yielded differing results, emphasizing the benefit of combined methods for the detection of the actual microbial diversity in apical periodontitis. PMID:23152922
Martins, Angélica Rocha; Talhavini, Márcio; Vieira, Maurício Leite; Zacca, Jorge Jardim; Braga, Jez Willian Batista
2017-08-15
The discrimination of whisky brands and counterfeit identification were performed by UV-Vis spectroscopy combined with partial least squares for discriminant analysis (PLS-DA). In the proposed method all spectra were obtained with no sample preparation. The discrimination models were built with the employment of seven whisky brands: Red Label, Black Label, White Horse, Chivas Regal (12years), Ballantine's Finest, Old Parr and Natu Nobilis. The method was validated with an independent test set of authentic samples belonging to the seven selected brands and another eleven brands not included in the training samples. Furthermore, seventy-three counterfeit samples were also used to validate the method. Results showed correct classification rates for genuine and false samples over 98.6% and 93.1%, respectively, indicating that the method can be helpful for the forensic analysis of whisky samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Correcting for Sample Contamination in Genotype Calling of DNA Sequence Data
Flickinger, Matthew; Jun, Goo; Abecasis, Gonçalo R.; Boehnke, Michael; Kang, Hyun Min
2015-01-01
DNA sample contamination is a frequent problem in DNA sequencing studies and can result in genotyping errors and reduced power for association testing. We recently described methods to identify within-species DNA sample contamination based on sequencing read data, showed that our methods can reliably detect and estimate contamination levels as low as 1%, and suggested strategies to identify and remove contaminated samples from sequencing studies. Here we propose methods to model contamination during genotype calling as an alternative to removal of contaminated samples from further analyses. We compare our contamination-adjusted calls to calls that ignore contamination and to calls based on uncontaminated data. We demonstrate that, for moderate contamination levels (5%–20%), contamination-adjusted calls eliminate 48%–77% of the genotyping errors. For lower levels of contamination, our contamination correction methods produce genotypes nearly as accurate as those based on uncontaminated data. Our contamination correction methods are useful generally, but are particularly helpful for sample contamination levels from 2% to 20%. PMID:26235984
Comparison of methods of preserving tissues for pesticide analysis
Stickel, W.H.; Stickel, L.F.; Dyrland, R.A.; Hughes, D.L.
1984-01-01
Formalin preservation, freezing, spoiling followed by freezing, and phenoxyethanol were compared in terms of concentrations of DDT, DDD, DDE, endrin, and hepatachlor epoxide measured in brain, liver and carcass of birds fed dietary dosages of pesticides and in spiked egg homogenate. Phenoxyethanol proved to be an unsatisfactory preservative; the amount of 'extractable lipid' was excessive, and measurements of concentrations in replicates were erratic. Concentrations of residues in formalin-preserved and frozen samples did not differ significantly in any tissue. Percentage lipid in brains and eggs, however, were significantly lower in formalin-preserved samples. Samples of muscle and liver that had been spoiled before freezing yielded less DDD, and muscle samples yielded more DDT than formalin-preserved samples. The authors conclude that formalin preservation is a satisfactory method for preservation of field samples and that the warming and spoiling of samples that may occur unavoidably in the field will not result in misleading analytical results.
Comparison of microstickies measurement methods. Part II, Results and discussion
Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Concepcion Monte; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R. A. Venditti; K. Copeland; H.-M. Chang
2003-01-01
In part I of the article we discussed sample preparation procedure and described various methods used for the measurement of microstickies. Some of the important features of different methods are highlighted in Table 1. Temperatures used in the measurement methods vary from room temperature in some cases, 45 °C to 65 °C in other cases. Sample size ranges from as low as...
NASA Technical Reports Server (NTRS)
Dunham, A. J.; Barkley, R. M.; Sievers, R. E.; Clarkson, T. W. (Principal Investigator)
1995-01-01
An improved method of flow injection analysis for aqueous nitrite ion exploits the sensitivity and selectivity of the nitric oxide (NO) chemilluminescence detector. Trace analysis of nitrite ion in a small sample (5-160 microL) is accomplished by conversion of nitrite ion to NO by aqueous iodide in acid. The resulting NO is transported to the gas phase through a semipermeable membrane and subsequently detected by monitoring the photoemission of the reaction between NO and ozone (O3). Chemiluminescence detection is selective for measurement of NO, and, since the detection occurs in the gas-phase, neither sample coloration nor turbidity interfere. The detection limit for a 100-microL sample is 0.04 ppb of nitrite ion. The precision at the 10 ppb level is 2% relative standard deviation, and 60-180 samples can be analyzed per hour. Samples of human saliva and food extracts were analyzed; the results from a standard colorimetric measurement are compared with those from the new chemiluminescence method in order to further validate the latter method. A high degree of selectivity is obtained due to the three discriminating steps in the process: (1) the nitrite ion to NO conversion conditions are virtually specific for nitrite ion, (2) only volatile products of the conversion will be swept to the gas phase (avoiding turbidity or color in spectrophotometric methods), and (3) the NO chemiluminescence detector selectively detects the emission from the NO + O3 reaction. The method is free of interferences, offers detection limits of low parts per billion of nitrite ion, and allows the analysis of up to 180 microL-sized samples per hour, with little sample preparation and no chromatographic separation. Much smaller samples can be analyzed by this method than in previously reported batch analysis methods, which typically require 5 mL or more of sample and often need chromatographic separations as well.
Population clustering based on copy number variations detected from next generation sequencing data.
Duan, Junbo; Zhang, Ji-Gang; Wan, Mingxi; Deng, Hong-Wen; Wang, Yu-Ping
2014-08-01
Copy number variations (CNVs) can be used as significant bio-markers and next generation sequencing (NGS) provides a high resolution detection of these CNVs. But how to extract features from CNVs and further apply them to genomic studies such as population clustering have become a big challenge. In this paper, we propose a novel method for population clustering based on CNVs from NGS. First, CNVs are extracted from each sample to form a feature matrix. Then, this feature matrix is decomposed into the source matrix and weight matrix with non-negative matrix factorization (NMF). The source matrix consists of common CNVs that are shared by all the samples from the same group, and the weight matrix indicates the corresponding level of CNVs from each sample. Therefore, using NMF of CNVs one can differentiate samples from different ethnic groups, i.e. population clustering. To validate the approach, we applied it to the analysis of both simulation data and two real data set from the 1000 Genomes Project. The results on simulation data demonstrate that the proposed method can recover the true common CNVs with high quality. The results on the first real data analysis show that the proposed method can cluster two family trio with different ancestries into two ethnic groups and the results on the second real data analysis show that the proposed method can be applied to the whole-genome with large sample size consisting of multiple groups. Both results demonstrate the potential of the proposed method for population clustering.
Detection of cocaine in cargo containers by high-volume vapor sampling: field test at Port of Miami
NASA Astrophysics Data System (ADS)
Neudorfl, Pavel; Hupe, Michael; Pilon, Pierre; Lawrence, Andre H.; Drolet, Gerry; Su, Chih-Wu; Rigdon, Stephen W.; Kunz, Terry D.; Ulwick, Syd; Hoglund, David E.; Wingo, Jeff J.; Demirgian, Jack C.; Shier, Patrick
1997-02-01
The use of marine containers is a well known smuggling method for large shipments of drugs. Such containers present an ideal method of smuggling as the examination method is time consuming, difficult and expensive for the importing community. At present, various methods are being studied for screening containers which would allow to rapidly distinguish between innocent and suspicious cargo. Air sampling is one such method. Air is withdrawn for the inside of containers and analyzed for telltale vapors uniquely associated with the drug. The attractive feature of the technique is that the containers could be sampled without destuffing and opening, since air could be conveniently withdrawn via ventilation ducts. In the present paper, the development of air sampling methodology for the detection of cocaine hydrochloride will be discussed, and the results from a recent field test will be presented. The results indicated that vapors of cocaine and its decomposition product, ecgonidine methyl ester, could serve as sensitive indicators of the presence of the drug in the containers.
Evaluation of a new automated instrument for pretransfusion testing.
Morelati, F; Revelli, N; Maffei, L M; Poretti, M; Santoro, C; Parravicini, A; Rebulla, P; Cole, R; Sirchia, G
1998-10-01
A number of automated devices for pretransfusion testing have recently become available. This study evaluated a fully automated device based on column agglutination technology (AutoVue System, Ortho, Raritan, NJ). Some 6747 tests including forward and reverse ABO group, Rh type and phenotype, antibody screen, autocontrol, and crossmatch were performed on random samples from 1069 blood donors, 2063 patients, and 98 newborns and cord blood. Also tested were samples from 168 immunized patients and 53 donors expressing weak or variant A and D antigens. Test results and technician times required for their performance were compared with those obtained by standard methods (manual column agglutination technology, slide, semiautomatic handler). No erroneous conclusions were found in regard to the 5028 ABO group and Rh type or phenotype determinations carried out with the device. The device rejected 1.53 percent of tests for sample inadequacy. Of the remaining 18 tests with discrepant results found with the device and not confirmed with the standard methods, 6 gave such results because of mixed-field reactions, 10 gave negative results with A2 RBCs in reverse ABO grouping, and 2 gave very weak positive reactions in antibody screening and crossmatching. In the samples from immunized patients, the device missed one weak anti-K, whereas standard methods missed five weak antibodies. In addition, 48, 34, and 31 of the 53 weak or variant antigens were detected by the device, the slide method, and the semiautomated handler, respectively. Technician time with the standard methods was 1.6 to 7 times higher than that with the device. The technical performance of the device compared favorably with that of standard methods, with a number of advantages, including in particular the saving of technician time. Sample inadequacy was the most common cause of discrepancy, which suggests that standardization of sample collection can further improve the performance of the device.
[Research on fast classification based on LIBS technology and principle component analyses].
Yu, Qi; Ma, Xiao-Hong; Wang, Rui; Zhao, Hua-Feng
2014-11-01
Laser-induced breakdown spectroscopy (LIBS) and the principle component analysis (PCA) were combined to study aluminum alloy classification in the present article. Classification experiments were done on thirteen different kinds of standard samples of aluminum alloy which belong to 4 different types, and the results suggested that the LIBS-PCA method can be used to aluminum alloy fast classification. PCA was used to analyze the spectrum data from LIBS experiments, three principle components were figured out that contribute the most, the principle component scores of the spectrums were calculated, and the scores of the spectrums data in three-dimensional coordinates were plotted. It was found that the spectrum sample points show clear convergence phenomenon according to the type of aluminum alloy they belong to. This result ensured the three principle components and the preliminary aluminum alloy type zoning. In order to verify its accuracy, 20 different aluminum alloy samples were used to do the same experiments to verify the aluminum alloy type zoning. The experimental result showed that the spectrum sample points all located in their corresponding area of the aluminum alloy type, and this proved the correctness of the earlier aluminum alloy standard sample type zoning method. Based on this, the identification of unknown type of aluminum alloy can be done. All the experimental results showed that the accuracy of principle component analyses method based on laser-induced breakdown spectroscopy is more than 97.14%, and it can classify the different type effectively. Compared to commonly used chemical methods, laser-induced breakdown spectroscopy can do the detection of the sample in situ and fast with little sample preparation, therefore, using the method of the combination of LIBS and PCA in the areas such as quality testing and on-line industrial controlling can save a lot of time and cost, and improve the efficiency of detection greatly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Amidan, Brett G.; Krauter, Paula
2011-05-01
Two concerns were raised by the Government Accountability Office following the 2001 building contaminations via letters containing Bacillus anthracis (BA). These included the: 1) lack of validated sampling methods, and 2) need to use statistical sampling to quantify the confidence of no contamination when all samples have negative results. Critical to addressing these concerns is quantifying the false negative rate (FNR). The FNR may depend on the 1) method of contaminant deposition, 2) surface concentration of the contaminant, 3) surface material being sampled, 4) sample collection method, 5) sample storage/transportation conditions, 6) sample processing method, and 7) sample analytical method.more » A review of the literature found 17 laboratory studies that focused on swab, wipe, or vacuum samples collected from a variety of surface materials contaminated by BA or a surrogate, and used culture methods to determine the surface contaminant concentration. These studies quantified performance of the sampling and analysis methods in terms of recovery efficiency (RE) and not FNR (which left a major gap in available information). Quantifying the FNR under a variety of conditions is a key aspect of validating sample and analysis methods, and also for calculating the confidence in characterization or clearance decisions based on a statistical sampling plan. A laboratory study was planned to partially fill the gap in FNR results. This report documents the experimental design developed by Pacific Northwest National Laboratory and Sandia National Laboratories (SNL) for a sponge-wipe method. The testing was performed by SNL and is now completed. The study investigated the effects on key response variables from six surface materials contaminated with eight surface concentrations of a BA surrogate (Bacillus atrophaeus). The key response variables include measures of the contamination on test coupons of surface materials tested, contamination recovered from coupons by sponge-wipe samples, RE, and FNR. The experimental design involves 16 test runs, performed in two blocks of eight runs. Three surface materials (stainless steel, vinyl tile, and ceramic tile) were tested in the first block, while three other surface materials (plastic, painted wood paneling, and faux leather) were tested in the second block. The eight surface concentrations of the surrogate were randomly assigned to test runs within each block. Some of the concentrations were very low and presented challenges for deposition, sampling, and analysis. However, such tests are needed to investigate RE and FNR over the full range of concentrations of interest. In each run, there were 10 test coupons of each of the three surface materials. A positive control sample was generated at the same time as each test sample. The positive control results will be used to 1) calculate RE values for the wipe sampling and analysis method, and 2) fit RE- and FNR-concentration equations, for each of the six surface materials. Data analyses will support 1) estimating the FNR for each combination of contaminant concentration and surface material, 2) estimating the surface concentrations and their uncertainties of the contaminant for each combination of concentration and surface material, 3) estimating RE (%) and their uncertainties for each combination of contaminant concentration and surface material, 4) fitting FNR-concentration and RE-concentration equations for each of the six surface materials, 5) assessing goodness-of-fit of the equations, and 6) quantifying the uncertainty in FNR and RE predictions made with the fitted equations.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Amidan, Brett G.; Krauter, Paula
2010-12-16
Two concerns were raised by the Government Accountability Office following the 2001 building contaminations via letters containing Bacillus anthracis (BA). These included the: 1) lack of validated sampling methods, and 2) need to use statistical sampling to quantify the confidence of no contamination when all samples have negative results. Critical to addressing these concerns is quantifying the probability of correct detection (PCD) (or equivalently the false negative rate FNR = 1 - PCD). The PCD/FNR may depend on the 1) method of contaminant deposition, 2) surface concentration of the contaminant, 3) surface material being sampled, 4) sample collection method, 5)more » sample storage/transportation conditions, 6) sample processing method, and 7) sample analytical method. A review of the literature found 17 laboratory studies that focused on swab, wipe, or vacuum samples collected from a variety of surface materials contaminated by BA or a surrogate, and used culture methods to determine the surface contaminant concentration. These studies quantified performance of the sampling and analysis methods in terms of recovery efficiency (RE) and not PCD/FNR (which left a major gap in available information). Quantifying the PCD/FNR under a variety of conditions is a key aspect of validating sample and analysis methods, and also for calculating the confidence in characterization or clearance decisions based on a statistical sampling plan. A laboratory study was planned to partially fill the gap in PCD/FNR results. This report documents the experimental design developed by Pacific Northwest National Laboratory and Sandia National Laboratories (SNL) for a sponge-wipe method. The study will investigate the effects on key response variables from six surface materials contaminated with eight surface concentrations of a BA surrogate (Bacillus atrophaeus). The key response variables include measures of the contamination on test coupons of surface materials tested, contamination recovered from coupons by sponge-wipe samples, RE, and PCD/FNR. The experimental design involves 16 test runs, to be performed in two blocks of eight runs. Three surface materials (stainless steel, vinyl tile, and ceramic tile) were tested in the first block, while three other surface materials (plastic, painted wood paneling, and faux leather) will be tested in the second block. The eight surface concentrations of the surrogate were randomly assigned to test runs within each block. Some of the concentrations will be very low and may present challenges for deposition, sampling, and analysis. However, such tests are needed to investigate RE and PCD/FNR over the full range of concentrations of interest. In each run, there will be 10 test coupons of each of the three surface materials. A positive control sample will be generated prior to each test sample. The positive control results will be used to 1) calculate RE values for the wipe sampling and analysis method, and 2) fit RE- and PCD-concentration equations, for each of the six surface materials. Data analyses will support 1) estimating the PCD for each combination of contaminant concentration and surface material, 2) estimating the surface concentrations and their uncertainties of the contaminant for each combination of concentration and surface material, 3) estimating RE (%) and their uncertainties for each combination of contaminant concentration and surface material, 4) fitting PCD-concentration and RE-concentration equations for each of the six surface materials, 5) assessing goodness-of-fit of the equations, and 6) quantifying the uncertainty in PCD and RE predictions made with the fitted equations.« less
Flow Cytometric Human Leukocyte Antigen-B27 Typing with Stored Samples for Batch Testing
Seo, Bo Young
2013-01-01
Background Flow cytometry (FC) HLA-B27 typing is still used extensively for the diagnosis of spondyloarthropathies. If patient blood samples are stored for a prolonged duration, this testing can be performed in a batch manner, and in-house cellular controls could easily be procured. In this study, we investigated various methods of storing patient blood samples. Methods We compared four storage methods: three methods of analyzing lymphocytes (whole blood stored at room temperature, frozen mononuclear cells, and frozen white blood cells [WBCs] after lysing red blood cells [RBCs]), and one method using frozen platelets (FPLT). We used three ratios associated with mean fluorescence intensities (MFI) for HLAB27 assignment: the B27 MFI ratio (sample/control) for HLA-B27 fluorescein-5-isothiocyanate (FITC); the B7 MFI ratio for HLA-B7 phycoerythrin (PE); and the ratio of these two ratios, B7/B27 ratio. Results Comparing the B27 MFI ratios of each storage method for the HLA-B27+ samples and the B7/B27 ratios for the HLA-B7+ samples revealed that FPLT was the best of the four methods. FPLT had a sensitivity of 100% and a specificity of 99.3% for HLA-B27 assignment in DNA-typed samples (N=164) when the two criteria, namely, B27 MFI ratio >4.0 and B7/B27 ratio <1.5, were used. Conclusions The FPLT method was found to offer a simple, economical, and accurate method of FC HLA-B27 typing by using stored patient samples. If stored samples are used, this method has the potential to replace the standard FC typing method when used in combination with a complementary DNA-based method. PMID:23667843
NASA Astrophysics Data System (ADS)
Schindler, Matthias; Kretschmer, Wolfgang; Scharf, Andreas; Tschekalinskij, Alexander
2016-05-01
Three new methods to sample and prepare various carbonyl compounds for radiocarbon measurements were developed and tested. Two of these procedures utilized the Strecker synthetic method to form amino acids from carbonyl compounds with either sodium cyanide or trimethylsilyl cyanide. The third procedure used semicarbazide to form crystalline carbazones with the carbonyl compounds. The resulting amino acids and semicarbazones were then separated and purified using thin layer chromatography. The separated compounds were then combusted to CO2 and reduced to graphite to determine 14C content by accelerator mass spectrometry (AMS). All of these methods were also compared with the standard carbonyl compound sampling method wherein a compound is derivatized with 2,4-dinitrophenylhydrazine and then separated by high-performance liquid chromatography (HPLC).
Buttner, Mark P.; Cruz, Patricia; Stetzenbach, Linda D.; Cronin, Tracy
2007-01-01
This research was designed to evaluate surface sampling protocols for use with culture and quantitative PCR (QPCR) amplification assay for detection of the gram-negative bacterial biothreat simulant Erwinia herbicola on a variety of surface materials. Surfaces selected for evaluation were wood laminate, glass and computer monitor screens, metal file cabinets, plastic arena seats, nylon seat cushions, finished concrete flooring, and vinyl tile flooring. Laboratory and test chamber studies were performed to evaluate two sampling methods, a sponge and a macrofoam swab, for detection of E. herbicola on surface materials. In laboratory trials, seven materials were inoculated with a known concentration of E. herbicola cells and samples were collected from the surfaces of the materials to determine sampling efficiencies. Culture analysis was ineffective for assessing E. herbicola collection efficiency because very few culturable cells were obtained from surface samples. QPCR demonstrated that E. herbicola DNA was present in high concentrations on all of the surface samples, and sampling efficiencies ranged from 0.7 to 52.2%, depending on the sampling method and the surface material. The swab was generally more efficient than the sponge for collection of E. herbicola from surfaces. Test chamber trials were also performed in which E. herbicola was aerosolized into the chamber and allowed to settle onto test materials. Surface sampling results supported those obtained in laboratory trials. The results of this study demonstrate the capabilities of QPCR to enhance the detection and enumeration of biocontaminants on surface materials and provide information on the comparability of sampling methods. PMID:17416685
Buttner, Mark P; Cruz, Patricia; Stetzenbach, Linda D; Cronin, Tracy
2007-06-01
This research was designed to evaluate surface sampling protocols for use with culture and quantitative PCR (QPCR) amplification assay for detection of the gram-negative bacterial biothreat simulant Erwinia herbicola on a variety of surface materials. Surfaces selected for evaluation were wood laminate, glass and computer monitor screens, metal file cabinets, plastic arena seats, nylon seat cushions, finished concrete flooring, and vinyl tile flooring. Laboratory and test chamber studies were performed to evaluate two sampling methods, a sponge and a macrofoam swab, for detection of E. herbicola on surface materials. In laboratory trials, seven materials were inoculated with a known concentration of E. herbicola cells and samples were collected from the surfaces of the materials to determine sampling efficiencies. Culture analysis was ineffective for assessing E. herbicola collection efficiency because very few culturable cells were obtained from surface samples. QPCR demonstrated that E. herbicola DNA was present in high concentrations on all of the surface samples, and sampling efficiencies ranged from 0.7 to 52.2%, depending on the sampling method and the surface material. The swab was generally more efficient than the sponge for collection of E. herbicola from surfaces. Test chamber trials were also performed in which E. herbicola was aerosolized into the chamber and allowed to settle onto test materials. Surface sampling results supported those obtained in laboratory trials. The results of this study demonstrate the capabilities of QPCR to enhance the detection and enumeration of biocontaminants on surface materials and provide information on the comparability of sampling methods.
Zarei, Mohammad; Ravanshad, Mehrdad; Bagban, Ashraf; Fallahi, Shahab
2016-07-01
The human immunodeficiency virus (HIV-1) is the etiologic agent of AIDS. The disease can be transmitted via blood in the window period prior to the development of antibodies to the disease. Thus, an appropriate method for the detection of HIV-1 during this window period is very important. This descriptive study proposes a sensitive, efficient, inexpensive, and easy method to detect HIV-1. In this study 25 serum samples of patients under treatment and also 10 positive and 10 negative control samples were studied. Twenty-five blood samples were obtained from HIV-1-infected individuals who were receiving treatment at the acquired immune deficiency syndrome (AIDS) research center of Imam Khomeini hospital in Tehran. The identification of HIV-1-positive samples was done by using reverse transcription to produce copy deoxyribonucleic acid (cDNA) and then optimizing the nested polymerase chain reaction (PCR) method. Two pairs of primers were then designed specifically for the protease gene fragment of the nested real time-PCR (RT-PCR) samples. Electrophoresis was used to examine the PCR products. The results were analyzed using statistical tests, including Fisher's exact test, and SPSS17 software. The 325 bp band of the protease gene was observed in all the positive control samples and in none of the negative control samples. The proposed method correctly identified HIV-1 in 23 of the 25 samples. These results suggest that, in comparison with viral cultures, antibody detection by enzyme linked immunosorbent assay (ELISAs), and conventional PCR methods, the proposed method has high sensitivity and specificity for the detection of HIV-1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor-Pashow, K.; Fondeur, F.; White, T.
Savannah River National Laboratory (SRNL) was tasked with identifying and developing at least one, but preferably two methods for quantifying the suppressor in the Next Generation Solvent (NGS) system. The suppressor is a guanidine derivative, N,N',N"-tris(3,7-dimethyloctyl)guanidine (TiDG). A list of 10 possible methods was generated, and screening experiments were performed for 8 of the 10 methods. After completion of the screening experiments, the non-aqueous acid-base titration was determined to be the most promising, and was selected for further development as the primary method. {sup 1}H NMR also showed promising results from the screening experiments, and this method was selected formore » further development as the secondary method. Other methods, including {sup 36}Cl radiocounting and ion chromatography, also showed promise; however, due to the similarity to the primary method (titration) and the inability to differentiate between TiDG and TOA (tri-n-ocytlamine) in the blended solvent, {sup 1}H NMR was selected over these methods. Analysis of radioactive samples obtained from real waste ESS (extraction, scrub, strip) testing using the titration method showed good results. Based on these results, the titration method was selected as the method of choice for TiDG measurement. {sup 1}H NMR has been selected as the secondary (back-up) method, and additional work is planned to further develop this method and to verify the method using radioactive samples. Procedures for analyzing radioactive samples of both pure NGS and blended solvent were developed and issued for the both methods.« less
Solid-State Kinetic Investigations of Nonisothermal Reduction of Iron Species Supported on SBA-15
2017-01-01
Iron oxide catalysts supported on nanostructured silica SBA-15 were synthesized with various iron loadings using two different precursors. Structural characterization of the as-prepared FexOy/SBA-15 samples was performed by nitrogen physisorption, X-ray diffraction, DR-UV-Vis spectroscopy, and Mössbauer spectroscopy. An increasing size of the resulting iron species correlated with an increasing iron loading. Significantly smaller iron species were obtained from (Fe(III), NH4)-citrate precursors compared to Fe(III)-nitrate precursors. Moreover, smaller iron species resulted in a smoother surface of the support material. Temperature-programmed reduction (TPR) of the FexOy/SBA-15 samples with H2 revealed better reducibility of the samples originating from Fe(III)-nitrate precursors. Varying the iron loading led to a change in reduction mechanism. TPR traces were analyzed by model-independent Kissinger method, Ozawa, Flynn, and Wall (OFW) method, and model-dependent Coats-Redfern method. JMAK kinetic analysis afforded a one-dimensional reduction process for the FexOy/SBA-15 samples. The Kissinger method yielded the lowest apparent activation energy for the lowest loaded citrate sample (Ea ≈ 39 kJ/mol). Conversely, the lowest loaded nitrate sample possessed the highest apparent activation energy (Ea ≈ 88 kJ/mol). For samples obtained from Fe(III)-nitrate precursors, Ea decreased with increasing iron loading. Apparent activation energies from model-independent analysis methods agreed well with those from model-dependent methods. Nucleation as rate-determining step in the reduction of the iron oxide species was consistent with the Mampel solid-state reaction model. PMID:29230346
Zhang, Chun-Yun; Hu, Hui-Chao; Chai, Xin-Sheng; Pan, Lei; Xiao, Xian-Ming
2014-02-07
In this paper, we present a novel method for determining the maximal amount of ethane, a minor gas species, adsorbed in a shale sample. The method is based on the time-dependent release of ethane from shale samples measured by headspace gas chromatography (HS-GC). The study includes a mathematical model for fitting the experimental data, calculating the maximal amount gas adsorbed, and predicting results at other temperatures. The method is a more efficient alternative to the isothermal adsorption method that is in widespread use today. Copyright © 2013 Elsevier B.V. All rights reserved.
Kosek, Margaret N.; Schwab, Kellogg J.
2017-01-01
Empiric quantification of environmental fecal contamination is an important step toward understanding the impact that water, sanitation, and hygiene interventions have on reducing enteric infections. There is a need to standardize the methods used for surface sampling in field studies that examine fecal contamination in low-income settings. The dry cloth method presented in this manuscript improves upon the more commonly used swabbing technique that has been shown in the literature to have a low sampling efficiency. The recovery efficiency of a dry electrostatic cloth sampling method was evaluated using Escherichia coli and then applied to household surfaces in Iquitos, Peru, where there is high fecal contamination and enteric infection. Side-by-side measurements were taken from various floor locations within a household at the same time over a three-month period to compare for consistency of quantification of E. coli bacteria. The dry cloth sampling method in the laboratory setting showed 105% (95% Confidence Interval: 98%, 113%) E. coli recovery efficiency off of the cloths. The field application demonstrated strong agreement of side-by-side results (Pearson correlation coefficient for dirt surfaces was 0.83 (p < 0.0001) and 0.91 (p < 0.0001) for cement surfaces) and moderate agreement for results between entrance and kitchen samples (Pearson (0.53, p < 0.0001) and weighted Kappa statistic (0.54, p < 0.0001)). Our findings suggest that this method can be utilized in households with high bacterial loads using either continuous (quantitative) or categorical (semi-quantitative) data. The standardization of this low-cost, dry electrostatic cloth sampling method can be used to measure differences between households in intervention and non-intervention arms of randomized trials. PMID:28829392
Exum, Natalie G; Kosek, Margaret N; Davis, Meghan F; Schwab, Kellogg J
2017-08-22
Empiric quantification of environmental fecal contamination is an important step toward understanding the impact that water, sanitation, and hygiene interventions have on reducing enteric infections. There is a need to standardize the methods used for surface sampling in field studies that examine fecal contamination in low-income settings. The dry cloth method presented in this manuscript improves upon the more commonly used swabbing technique that has been shown in the literature to have a low sampling efficiency. The recovery efficiency of a dry electrostatic cloth sampling method was evaluated using Escherichia coli and then applied to household surfaces in Iquitos, Peru, where there is high fecal contamination and enteric infection. Side-by-side measurements were taken from various floor locations within a household at the same time over a three-month period to compare for consistency of quantification of E. coli bacteria. The dry cloth sampling method in the laboratory setting showed 105% (95% Confidence Interval: 98%, 113%) E. coli recovery efficiency off of the cloths. The field application demonstrated strong agreement of side-by-side results (Pearson correlation coefficient for dirt surfaces was 0.83 ( p < 0.0001) and 0.91 ( p < 0.0001) for cement surfaces) and moderate agreement for results between entrance and kitchen samples (Pearson (0.53, p < 0.0001) and weighted Kappa statistic (0.54, p < 0.0001)). Our findings suggest that this method can be utilized in households with high bacterial loads using either continuous (quantitative) or categorical (semi-quantitative) data. The standardization of this low-cost, dry electrostatic cloth sampling method can be used to measure differences between households in intervention and non-intervention arms of randomized trials.
Zhang, Hua; Chen, Qing-song; Li, Nan; Hua, Yan; Zeng, Lin; Xu, Guo-yang; Tao, Li-yuan; Zhao, Yi-ming
2013-05-01
To compare the results of noise hazard evaluations based on area sampling and personal sampling in a new thermal power plant and to analyze the similarities and differences between the two measurement methods. According to Measurement of Physical agents in Workplace Part 8: Noise(GBZff 189.8-2007), area sampling was performed at various operating points for noise measurement, and meanwhile the workers under different types of work wore noise dosimeters for personal noise exposure measurement. The two measurement methods were used to evaluate the level of noise hazards in the enterprise according to the corresponding occupational health standards, and the evaluation results were compared. Area sampling was performed at 99 operating points, the mean noise level was 88.9 ± 11.1 dB (A)(range, 51.3-107.0 dB (A)), with an over-standard rate of 75.8%. Personal sampling was performed (73 person times),and the mean noise level was 79.3 ± 6.3 dB (A), with an over-standard rate of 6.6% ( 16/241 ). There was a statistically significant difference in the over-standard rate between the evaluation results of the two measurement methods ( x2=53.869, ?<0.001 ). Because of the characteristics of the work in new thermal power plants, the noise hazard evaluation based on area sampling cannot be used instead of personal noise exposure measurement among workers. Personal sampling should be used in the noise measurement in new thermal power plant.
U/Th dating of carbonate deposits from Constantina (Sevilla), Spain.
Alcaraz-Pelegrina, J M; Martínez-Aguirre, A
2007-07-01
Uranium-series method has been applied to continental carbonate deposits from Constantina, Seville, in Spain. All samples analysed were impure carbonates and the leachate-leachate method was used to obtain activity ratios in carbonate fraction. Leachate-residue methods were applied to one of the samples in order to compare with leachate-leachate method, but leachate-residue method assumptions did not meet and ages resulting from leachate-residue methods were not valid. Ages obtained by leachate-leachate method range from 1.8 to 23.5ky BP and are consistent with stratigraphical positions of samples analysed. Initial activity ratios for uranium isotopes are practically constant in this period, thus indicating that no changes in environmental conditions occur between 1.8 and 23.5ky period.
Hyun, Noorie; Gastwirth, Joseph L; Graubard, Barry I
2018-03-26
Originally, 2-stage group testing was developed for efficiently screening individuals for a disease. In response to the HIV/AIDS epidemic, 1-stage group testing was adopted for estimating prevalences of a single or multiple traits from testing groups of size q, so individuals were not tested. This paper extends the methodology of 1-stage group testing to surveys with sample weighted complex multistage-cluster designs. Sample weighted-generalized estimating equations are used to estimate the prevalences of categorical traits while accounting for the error rates inherent in the tests. Two difficulties arise when using group testing in complex samples: (1) How does one weight the results of the test on each group as the sample weights will differ among observations in the same group. Furthermore, if the sample weights are related to positivity of the diagnostic test, then group-level weighting is needed to reduce bias in the prevalence estimation; (2) How does one form groups that will allow accurate estimation of the standard errors of prevalence estimates under multistage-cluster sampling allowing for intracluster correlation of the test results. We study 5 different grouping methods to address the weighting and cluster sampling aspects of complex designed samples. Finite sample properties of the estimators of prevalences, variances, and confidence interval coverage for these grouping methods are studied using simulations. National Health and Nutrition Examination Survey data are used to illustrate the methods. Copyright © 2018 John Wiley & Sons, Ltd.
A method for detecting fungal contaminants in wall cavities.
Spurgeon, Joe C
2003-01-01
This article describes a practical method for detecting the presence of both fungal spores and culturable fungi in wall cavities. Culturable fungi were collected in 25 mm cassettes containing 0.8 microm mixed cellulose ester filters using aggressive sampling conditions. Both culturable fungi and fungal spores were collected in modified slotted-disk cassettes. The sample volume was 4 L. The filters were examined microscopically and dilution plated onto multiple culture media. Collecting airborne samples in filter cassettes was an effective method for assessing wall cavities for fungal contaminants, especially because this method allowed the sample to be analyzed by both microscopy and culture media. Assessment criteria were developed that allowed the sample results to be used to classify wall cavities as either uncontaminated or contaminated. As a criterion, wall cavities with concentrations of culturable fungi below the limit of detection (LOD) were classified as uncontaminated, whereas those cavities with detectable concentrations of culturable fungi were classified as contaminated. A total of 150 wall cavities was sampled as part of a field project. The concentrations of culturable fungi were below the LOD in 34% of the samples, whereas Aspergillus and/or Penicillium were the only fungal genera detected in 69% of the samples in which culturable fungi were detected. Spore counting resulted in the detection of Stachybotrys-like spores in 25% of the samples that were analyzed, whereas Stachybotrys chartarum colonies were only detected on 2% of malt extract agar plates and on 6% of corn meal agar plates.
Evaluation of Existing Methods for Human Blood mRNA Isolation and Analysis for Large Studies
Meyer, Anke; Paroni, Federico; Günther, Kathrin; Dharmadhikari, Gitanjali; Ahrens, Wolfgang; Kelm, Sørge; Maedler, Kathrin
2016-01-01
Aims Prior to implementing gene expression analyses from blood to a larger cohort study, an evaluation to set up a reliable and reproducible method is mandatory but challenging due to the specific characteristics of the samples as well as their collection methods. In this pilot study we optimized a combination of blood sampling and RNA isolation methods and present reproducible gene expression results from human blood samples. Methods The established PAXgeneTM blood collection method (Qiagen) was compared with the more recent TempusTM collection and storing system. RNA from blood samples collected by both systems was extracted on columns with the corresponding Norgen and PAX RNA extraction Kits. RNA quantity and quality was compared photometrically, with Ribogreen and by Real-Time PCR analyses of various reference genes (PPIA, β-ACTIN and TUBULIN) and exemplary of SIGLEC-7. Results Combining different sampling methods and extraction kits caused strong variations in gene expression. The use of PAXgeneTM and TempusTM collection systems resulted in RNA of good quality and quantity for the respective RNA isolation system. No large inter-donor variations could be detected for both systems. However, it was not possible to extract sufficient RNA of good quality with the PAXgeneTM RNA extraction system from samples collected by TempusTM collection tubes. Comparing only the Norgen RNA extraction methods, RNA from blood collected either by the TempusTM or PAXgeneTM collection system delivered sufficient amount and quality of RNA, but the TempusTM collection delivered higher RNA concentration compared to the PAXTM collection system. The established Pre-analytix PAXgeneTM RNA extraction system together with the PAXgeneTM blood collection system showed lowest CT-values, i.e. highest RNA concentration of good quality. Expression levels of all tested genes were stable and reproducible. Conclusions This study confirms that it is not possible to mix or change sampling or extraction strategies during the same study because of large variations of RNA yield and expression levels. PMID:27575051
Sparse feature learning for instrument identification: Effects of sampling and pooling methods.
Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu
2016-05-01
Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on the sparse feature learning algorithm for musical instrument identification, and in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined that are fixed and proportional random sampling. Furthermore, the effect of using onset frame was analyzed for both of proposed sampling methods. Regarding summarization of the feature activation, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are experimented including the analysis frame size, the dictionary size, and the type of frequency scaling as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieve the best overall performance of 95.62% while the optimal parameter set varies among the instrument classes.
New robust bilinear least squares method for the analysis of spectral-pH matrix data.
Goicoechea, Héctor C; Olivieri, Alejandro C
2005-07-01
A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model achieving the second-order advantage and handling multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the combination multivariate curve resolution-alternating least-squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate an improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter method provides the best analytical results in regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using the pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The presently discussed algorithm is useful for these latter systems.
Voelz, David G; Roggemann, Michael C
2009-11-10
Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.
Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa
2016-05-17
Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of incorrect links), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling based method to estimate both precision and recall following record linkage. In the sampling based method, record-pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record-pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent with little variation in the clerical assessment results (overall agreement using the Fleiss Kappa statistics was 0.601). This method presents as a possible means of accurately estimating matching quality and refining linkages in population level linkage studies. The sampling approach is especially important for large project linkages where the number of record pairs produced may be very large often running into millions.
Bertini, Sabrina; Risi, Giulia; Guerrini, Marco; Carrick, Kevin; Szajek, Anita Y; Mulloy, Barbara
2017-07-19
In a collaborative study involving six laboratories in the USA, Europe, and India the molecular weight distributions of a panel of heparin sodium samples were determined, in order to compare heparin sodium of bovine intestinal origin with that of bovine lung and porcine intestinal origin. Porcine samples met the current criteria as laid out in the USP Heparin Sodium monograph. Bovine lung heparin samples had consistently lower average molecular weights. Bovine intestinal heparin was variable in molecular weight; some samples fell below the USP limits, some fell within these limits and others fell above the upper limits. These data will inform the establishment of pharmacopeial acceptance criteria for heparin sodium derived from bovine intestinal mucosa. The method for MW determination as described in the USP monograph uses a single, broad standard calibrant to characterize the chromatographic profile of heparin sodium on high-resolution silica-based GPC columns. These columns may be short-lived in some laboratories. Using the panel of samples described above, methods based on the use of robust polymer-based columns have been developed. In addition to the use of the USP's broad standard calibrant for heparin sodium with these columns, a set of conditions have been devised that allow light-scattering detected molecular weight characterization of heparin sodium, giving results that agree well with the monograph method. These findings may facilitate the validation of variant chromatographic methods with some practical advantages over the USP monograph method.
Berger, Sebastian T; Ahmed, Saima; Muntel, Jan; Cuevas Polo, Nerea; Bachur, Richard; Kentsis, Alex; Steen, Judith; Steen, Hanno
2015-10-01
We describe a 96-well plate compatible membrane-based proteomic sample processing method, which enables the complete processing of 96 samples (or multiples thereof) within a single workday. This method uses a large-pore hydrophobic PVDF membrane that efficiently adsorbs proteins, resulting in fast liquid transfer through the membrane and significantly reduced sample processing times. Low liquid transfer speeds have prevented the useful 96-well plate implementation of FASP as a widely used membrane-based proteomic sample processing method. We validated our approach on whole-cell lysate and urine and cerebrospinal fluid as clinically relevant body fluids. Without compromising peptide and protein identification, our method uses a vacuum manifold and circumvents the need for digest desalting, making our processing method compatible with standard liquid handling robots. In summary, our new method maintains the strengths of FASP and simultaneously overcomes one of the major limitations of FASP without compromising protein identification and quantification. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
Santas, Jonathan; Guzmán, Yeimmy J; Guardiola, Francesc; Rafecas, Magdalena; Bou, Ricard
2014-11-01
A fluorometric method for the determination of hydroperoxides (HP) in edible oils and fats using the reagent diphenyl-1-pyrenylphosphine (DPPP) was developed and validated. Two solvent media containing 100% butanol or a mixture of chloroform/methanol (2:1, v/v) can be used to solubilise lipid samples. Regardless of the solvent used to solubilise the sample, the DPPP method was precise, accurate, sensitive and easy to perform. The HP content of 43 oil and fat samples was determined and the results were compared with those obtained by means of the AOCS Official Method for the determination of peroxide value (PV) and the ferrous oxidation-xylenol orange (FOX) method. The proposed method not only correlates well with the PV and FOX methods, but also presents some advantages such as requiring low sample and solvent amounts and being suitable for high-throughput sample analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
Novikov, I; Fund, N; Freedman, L S
2010-01-15
Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
Ryba, Stepan; Kindlmann, Pavel; Titera, Dalibor; Haklova, Marcela; Stopka, Pavel
2012-10-01
American foulbrood, because of its virulence and worldwide spread, is currently one of the most dangerous diseases of honey bees. Quick diagnosis of this disease is therefore vitally important. For its successful eradication, however, all the hives in the region must be tested. This is time consuming and costly. Therefore, a fast and sensitive method of detecting American foulbrood is needed. Here we present a method that significantly reduces the number of tests needed by combining batches of samples from different hives. The results of this method were verified by testing each sample. A simulation study was used to compare the efficiency of the new method with testing all the samples and to develop a decision tool for determining when best to use the new method. The method is suitable for testing large numbers of samples (over 100) when the incidence of the disease is low (10% or less).
Berger, Sebastian T.; Ahmed, Saima; Muntel, Jan; Cuevas Polo, Nerea; Bachur, Richard; Kentsis, Alex; Steen, Judith; Steen, Hanno
2015-01-01
We describe a 96-well plate compatible membrane-based proteomic sample processing method, which enables the complete processing of 96 samples (or multiples thereof) within a single workday. This method uses a large-pore hydrophobic PVDF membrane that efficiently adsorbs proteins, resulting in fast liquid transfer through the membrane and significantly reduced sample processing times. Low liquid transfer speeds have prevented the useful 96-well plate implementation of FASP as a widely used membrane-based proteomic sample processing method. We validated our approach on whole-cell lysate and urine and cerebrospinal fluid as clinically relevant body fluids. Without compromising peptide and protein identification, our method uses a vacuum manifold and circumvents the need for digest desalting, making our processing method compatible with standard liquid handling robots. In summary, our new method maintains the strengths of FASP and simultaneously overcomes one of the major limitations of FASP without compromising protein identification and quantification. PMID:26223766
Sparse magnetic resonance imaging reconstruction using the bregman iteration
NASA Astrophysics Data System (ADS)
Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo
2013-01-01
Magnetic resonance imaging (MRI) reconstruction needs many samples that are sequentially sampled by using phase encoding gradients in a MRI system. It is directly connected to the scan time for the MRI system and takes a long time. Therefore, many researchers have studied ways to reduce the scan time, especially, compressed sensing (CS), which is used for sparse images and reconstruction for fewer sampling datasets when the k-space is not fully sampled. Recently, an iterative technique based on the bregman method was developed for denoising. The bregman iteration method improves on total variation (TV) regularization by gradually recovering the fine-scale structures that are usually lost in TV regularization. In this study, we studied sparse sampling image reconstruction using the bregman iteration for a low-field MRI system to improve its temporal resolution and to validate its usefulness. The image was obtained with a 0.32 T MRI scanner (Magfinder II, SCIMEDIX, Korea) with a phantom and an in-vivo human brain in a head coil. We applied random k-space sampling, and we determined the sampling ratios by using half the fully sampled k-space. The bregman iteration was used to generate the final images based on the reduced data. We also calculated the root-mean-square-error (RMSE) values from error images that were obtained using various numbers of bregman iterations. Our reconstructed images using the bregman iteration for sparse sampling images showed good results compared with the original images. Moreover, the RMSE values showed that the sparse reconstructed phantom and the human images converged to the original images. We confirmed the feasibility of sparse sampling image reconstruction methods using the bregman iteration with a low-field MRI system and obtained good results. Although our results used half the sampling ratio, this method will be helpful in increasing the temporal resolution at low-field MRI systems.
Rapid detection of mecA and spa by the loop-mediated isothermal amplification (LAMP) method.
Koide, Y; Maeda, H; Yamabe, K; Naruishi, K; Yamamoto, T; Kokeguchi, S; Takashiba, S
2010-04-01
To develop a detection assay for staphylococcal mecA and spa by using loop-mediated isothermal amplification (LAMP) method. Staphylococcus aureus and other related species were subjected to the detection of mecA and spa by both PCR and LAMP methods. The LAMP successfully amplified the genes under isothermal conditions at 64 degrees C within 60 min, and demonstrated identical results with the conventional PCR methods. The detection limits of the LAMP for mecA and spa, by gel electrophoresis, were 10(2) and 10 cells per tube, respectively. The naked-eye inspections were possible with 10(3) and 10 cells for detection of mecA and spa, respectively. The LAMP method was then applied to sputum and dental plaque samples. The LAMP and PCR demonstrated identical results for the plaque samples, although frequency in detection of mecA and spa by the LAMP was relatively lower for the sputum samples when compared to the PCR methods. Application of the LAMP enabled a rapid detection assay for mecA and spa. The assay may be applicable to clinical plaque samples. The LAMP offers an alternative detection assay for mecA and spa with a great advantage of the rapidity.
SU-G-IeP1-13: Sub-Nyquist Dynamic MRI Via Prior Rank, Intensity and Sparsity Model (PRISM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, B; Gao, H
Purpose: Accelerated dynamic MRI is important for MRI guided radiotherapy. Inspired by compressive sensing (CS), sub-Nyquist dynamic MRI has been an active research area, i.e., sparse sampling in k-t space for accelerated dynamic MRI. This work is to investigate sub-Nyquist dynamic MRI via a previously developed CS model, namely Prior Rank, Intensity and Sparsity Model (PRISM). Methods: The proposed method utilizes PRISM with rank minimization and incoherent sampling patterns for sub-Nyquist reconstruction. In PRISM, the low-rank background image, which is automatically calculated by rank minimization, is excluded from the L1 minimization step of the CS reconstruction to further sparsify themore » residual image, thus allowing for higher acceleration rates. Furthermore, the sampling pattern in k-t space is made more incoherent by sampling a different set of k-space points at different temporal frames. Results: Reconstruction results from L1-sparsity method and PRISM method with 30% undersampled data and 15% undersampled data are compared to demonstrate the power of PRISM for dynamic MRI. Conclusion: A sub- Nyquist MRI reconstruction method based on PRISM is developed with improved image quality from the L1-sparsity method.« less
Generating virtual training samples for sparse representation of face images and face recognition
NASA Astrophysics Data System (ADS)
Du, Yong; Wang, Yu
2016-03-01
There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illuminations, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot convey these possible changes in the training phase sufficiently, and this has become one of the restrictions to improve the face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples really reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more human essential information is retained. Also, uncertainty of the training data is simultaneously reduced with the increase of the training samples, which is beneficial for the training phase. The devised representation-based classifier uses both the original and new generated samples to perform the classification. In the classification phase, we first determine K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
Cao, Yi; Wang, Chao-Qun; Xu, Feng; Jia, Xiu-Hong; Liu, Guang-Xue; Yang, Sheng-Chao; Long, Guang-Qiang; Chen, Zhong-Jian; Wei, Fu-Zhou; Yang, Shao-Zhou; Fukuda, Kozo; Wang, Xuan; Cai, Shao-Qing
2016-10-01
Panax notoginseng is a commonly used traditional Chinese medicine with blood activating effect while has continuous cropping obstacle problem in planting process. In present study, a semimicroextraction method with water-saturated n-butanol on 0.1 g notoginseng sample was established with good repeatability (RSD<2.5%) and 9.6%-20.6% higher extraction efficiency of seven saponins than the conventional method. A total of 16 characteristic peaks were identified by LC-MS-IT-TOF, including eight 20(S)-protopanaxatriol (PPT) type saponins and eight 20(S)-protopanaxadiol (PPD) type saponins. The established method was utilized to evaluate the quality of notoginseng samples cultivated by manual intervened methods to overcome continuous cropping obstacles.As a result, HPLC fingerprint similarity, content of Fa and ratio of notoginsenoside K and notoginsenoside Fa (N-K/Fa) were found out to be as valuatable markers of the quality of samples in continuous cropping obstacle research, of which N-K/Fa could also be applied to the analysis of notoginseng samples with different growth years.Notoginseng samples with continuous cropping obstacle had HPLC fingerprint similarity lower than 0.87, in consistent with normal sample, and had significant lower content of notoginsenoside Fa and significant higher N-K/Fa (2.35-4.74) than normal group (0.45-1.33). All samples in the first group with manual intervention showed high similarity with normal group (>0.87), similar content of common peaks and N-K/Fa (0.42-2.06). The content of notoginsenoside K in the second group with manual intervention was higher than normal group. All samples except two displayed similarity higher than 0.87 and possessed content of 16 saponins close to normal group. The result showed that notoginseng samples with continuous cropping obstacle had lower quality than normal sample. And manual intervened methods could improve their quality in different levels.The method established in this study was simple, fast and accurate, and the markers may provide new guides for quality control in continuous cropping obstacle research of notoginseng. Copyright© by the Chinese Pharmaceutical Association.
Peng, Jun; Chen, Yi-Ting; Chen, Chien-Lun; Li, Liang
2014-07-01
Large-scale metabolomics study requires a quantitative method to generate metabolome data over an extended period with high technical reproducibility. We report a universal metabolome-standard (UMS) method, in conjunction with chemical isotope labeling liquid chromatography-mass spectrometry (LC-MS), to provide long-term analytical reproducibility and facilitate metabolome comparison among different data sets. In this method, UMS of a specific type of sample labeled by an isotope reagent is prepared a priori. The UMS is spiked into any individual samples labeled by another form of the isotope reagent in a metabolomics study. The resultant mixture is analyzed by LC-MS to provide relative quantification of the individual sample metabolome to UMS. UMS is independent of a study undertaking as well as the time of analysis and useful for profiling the same type of samples in multiple studies. In this work, the UMS method was developed and applied for a urine metabolomics study of bladder cancer. UMS of human urine was prepared by (13)C2-dansyl labeling of a pooled sample from 20 healthy individuals. This method was first used to profile the discovery samples to generate a list of putative biomarkers potentially useful for bladder cancer detection and then used to analyze the verification samples about one year later. Within the discovery sample set, three-month technical reproducibility was examined using a quality control sample and found a mean CV of 13.9% and median CV of 9.4% for all the quantified metabolites. Statistical analysis of the urine metabolome data showed a clear separation between the bladder cancer group and the control group from the discovery samples, which was confirmed by the verification samples. Receiver operating characteristic (ROC) test showed that the area under the curve (AUC) was 0.956 in the discovery data set and 0.935 in the verification data set. These results demonstrated the utility of the UMS method for long-term metabolomics and discovering potential metabolite biomarkers for diagnosis of bladder cancer.
Ultrasound-Assisted Extraction of Stilbenes from Grape Canes.
Piñeiro, Zulema; Marrufo-Curtido, Almudena; Serrano, Maria Jose; Palma, Miguel
2016-06-16
An analytical ultrasound-assisted extraction (UAE) method has been optimized and validated for the rapid extraction of stilbenes from grape canes. The influence of sample pre-treatment (oven or freeze-drying) and several extraction variables (solvent, sample-solvent ratio and extraction time between others) on the extraction process were analyzed. The new method allowed the main stilbenes in grape canes to be extracted in just 10 min, with an extraction temperature of 75 °C and 60% ethanol in water as the extraction solvent. Validation of the extraction method was based on analytical properties. The resulting RSDs (n = 5) for interday/intraday precision were less than 10%. Furthermore, the method was successfully applied in the analysis of 20 different grape cane samples. The result showed that grape cane byproducts are potentially sources of bioactive compounds of interest for pharmaceutical and food industries.
NASA Astrophysics Data System (ADS)
Ma, Yinbiao; Wei, Xiaojuan
2017-04-01
A novel method for the determination of platinum in waste platinum-loaded carbon catalyst samples was established by inductively coupled plasma optical emission spectrometry after samples digested by microwave oven with aqua regia. Such experiment conditions were investigated as the influence of sample digestion methods, digestion time, digestion temperature and interfering ions on the determination. Under the optimized conditions, the linear range of calibration graph for Pt was 0 ˜ 200.00 mg L-1, and the recovery was 95.67% ˜ 104.29%. The relative standard deviation (RSDs) for Pt was 1.78 %. The proposed method was applied to determine the same samples with atomic absorption spectrometry with the results consistently, which is suitable for the determination of platinum in waste platinum-loaded carbon catalyst samples.
Pugesek, Bruce H.; Baldwin, Michael J.; Stehn, Thomas; Folk, Martin J.; Nesbitt, Stephen A.
2008-01-01
We sampled blue crabs (Callinectes sapidus) in marshes on the Aransas National Wildlife Refuge, Texas from 1997 to 2005 to determine whether whooping crane (Grus americana) mortality was related to the availability of this food source. For four years, 1997 - 2001, we sampled monthly from the fall through the spring. From these data, we developed a reduced sampling effort method that adequately characterized crab abundance and reduced the potential for disturbance to the cranes. Four additional years of data were collected with the reduced sampling effort methods. Yearly variation in crab numbers was high, ranging from a low of 0.1 crabs to a high of 3.4 crabs per 100-m transect section. Mortality among adult cranes was inversely related to crab abundance. We found no relationship between crab abundance and mortality among juvenile cranes, possibly as a result of a smaller population size of juveniles compared to adults.
NASA Astrophysics Data System (ADS)
Randle, K.; Al-Jundi, J.; Mamas, C. J. V.; Sokhi, R. S.; Earwaker, L. G.
1993-06-01
Our work on heavy metals in the estuarine environment has involved the use of two multielement techniques: neutron activation analysis (NAA) and proton-induced X-ray emission (PIXE) analysis. As PIXE is essentially a surface analytical technique problems may arise due to sample inhomogeneity and surface roughness. In order to assess the contribution of these effects we have compared the results from PIXE analysis with those from a technique which analyzes a larger bulk sample rather than just the surface. An obvious method was NAA. A series of sediment samples containing particles of variable diameter were compared. Pellets containing a few mg of sediment were prepared from each sample and analyzed by the PIXE technique using both an absolute and a comparitive method. For INAA the rest of the sample was then irradiated with thermal neutrons and element concentrations determined from analyses of the subsequent gamma-ray spectrum. Results from the two methods are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Findlay, Rick; Kautsky, Mark
2015-12-01
The U.S. Department of Energy (DOE) Office of Legacy Management conducted annual sampling at the Rulison, Colorado, Site for the Long-Term Hydrologic Monitoring Program (LTHMP) on May 20–22 and 27, 2015. Several of the land owners were not available to allow access to their respective properties, which created the need for several sample collection trips. This report documents the analytical results of the Rulison monitoring event and includes the trip report and the data validation package (Appendix A). The groundwater and surface water monitoring were shipped to the GEL Group Inc. laboratories for analysis. All requested analyses were successfully completed.more » Samples were analyzed for gamma-emitting radionuclides by high- resolution gamma spectrometry. Tritium was analyzed using two methods, the conventional tritium method, which has a detection limit on the order of 400 picocuries per liter (pCi/L), and the enriched method (for selected samples), which has a detection limit on the order of 3 pCi/L.« less
Preparation of samples for leaf architecture studies, a method for mounting cleared leaves1
Vasco, Alejandra; Thadeo, Marcela; Conover, Margaret; Daly, Douglas C.
2014-01-01
• Premise of the study: Several recent waves of interest in leaf architecture have shown an expanding range of approaches and applications across a number of disciplines. Despite this increased interest, examination of existing archives of cleared and mounted leaves shows that current methods for mounting, in particular, yield unsatisfactory results and deterioration of samples over relatively short periods. Although techniques for clearing and staining leaves are numerous, published techniques for mounting leaves are scarce. • Methods and Results: Here we present a complete protocol and recommendations for clearing, staining, and imaging leaves, and, most importantly, a method to permanently mount cleared leaves. • Conclusions: The mounting protocol is faster than other methods, inexpensive, and straightforward; moreover, it yields clear and permanent samples that can easily be imaged, scanned, and stored. Specimens mounted with this method preserve well, with leaves that were mounted more than 35 years ago showing no signs of bubbling or discoloration. PMID:25225627
NASA Technical Reports Server (NTRS)
Swaminathan, Prasanna; Dennison, J. R.; Sim, Alec; Brunson, Jerilyn; Crapo, Eric; Frederickson, A. R.
2004-01-01
Conductivity of insulating materials is a key parameter to determine how accumulated charge will distribute across the spacecraft and how rapidly charge imbalance will dissipate. Classical ASTM and IEC methods to measure thin film insulator conductivity apply a constant voltage to two electrodes around the sample and measure the resulting current for tens of minutes. However, conductivity is more appropriately measured for spacecraft charging applications as the "decay" of charge deposited on the surface of an insulator. Charge decay methods expose one side of the insulator in vacuum to sequences of charged particles, light, and plasma, with a metal electrode attached to the other side of the insulator. Data are obtained by capacitive coupling to measure both the resulting voltage on the open surface and emission of electrons from the exposed surface, as well monitoring currents to the electrode. Instrumentation for both classical and charge storage decay methods has been developed and tested at Jet Propulsion Laboratory (JPL) and at Utah State University (USU). Details of the apparatus, test methods and data analysis are given here. The JPL charge storage decay chamber is a first-generation instrument, designed to make detailed measurements on only three to five samples at a time. Because samples must typically be tested for over a month, a second-generation high sample throughput charge storage decay chamber was developed at USU with the capability of testing up to 32 samples simultaneously. Details are provided about the instrumentation to measure surface charge and current; for charge deposition apparatus and control; the sample holders to properly isolate the mounted samples; the sample carousel to rotate samples into place; the control of the sample environment including sample vacuum, ambient gas, and sample temperature; and the computer control and data acquisition systems. Measurements are compared here for a number of thin film insulators using both methods at both facilities. We have found that conductivity determined from charge storage decay methods is 102 to 104 larger than values obtained from classical methods. Another Spacecraft Charging Conference presentation describes more extensive measurements made with these apparatus. This work is supported through funding from the NASA Space Environments and Effects Program and the USU Space Dynamics Laboratory Enabling Technologies Program.
Sampling of temporal networks: Methods and biases
NASA Astrophysics Data System (ADS)
Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter
2017-11-01
Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions on hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
NASA Astrophysics Data System (ADS)
Peselnick, L.
1982-08-01
An ultrasonic method is presented which combines features of the differential path and the phase comparison methods. The proposed differential path phase comparison method, referred to as the `hybrid' method for brevity, eliminates errors resulting from phase changes in the bond between the sample and buffer rod. Define r(P) [and R(P)] as the square of the normalized frequency for cancellation of sample waves for shear [and for compressional] waves. Define N as the number of wavelengths in twice the sample length. The pressure derivatives r'(P) and R' (P) for samples of Alcoa 2024-T4 aluminum were obtained by using the phase comparison and the hybrid methods. The values of the pressure derivatives obtained by using the phase comparison method show variations by as much as 40% for small values of N (N < 50). The pressure derivatives as determined from the hybrid method are reproducible to within ±2% independent of N. The values of the pressure derivatives determined by the phase comparison method for large N are the same as those determined by the hybrid method. Advantages of the hybrid method are (1) no pressure dependent phase shift at the buffer-sample interface, (2) elimination of deviatoric stress in the sample portion of the sample assembly with application of hydrostatic pressure, and (3) operation at lower ultrasonic frequencies (for comparable sample lengths), which eliminates detrimental high frequency ultrasonic problems. A reduction of the uncertainties of the pressure derivatives of single crystals and of low porosity polycrystals permits extrapolation of such experimental data to deeper mantle depths.
Parajulee, M N; Shrestha, R B; Leser, J F
2006-04-01
A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave similar results to the visual method in detecting adult thrips, but the washing technique detected significantly higher number of thrips larvae compared with the visual sampling. Visual sampling detected the highest number of fleahoppers followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between vacuum and sweep net methods. However, based on fixed precision cost reliability, the sweep net sampling was the most cost-effective method followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's Power Law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decision based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with an increase in fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
Human body mass estimation: a comparison of "morphometric" and "mechanical" methods.
Auerbach, Benjamin M; Ruff, Christopher B
2004-12-01
In the past, body mass was reconstructed from hominin skeletal remains using both "mechanical" methods which rely on the support of body mass by weight-bearing skeletal elements, and "morphometric" methods which reconstruct body mass through direct assessment of body size and shape. A previous comparison of two such techniques, using femoral head breadth (mechanical) and stature and bi-iliac breadth (morphometric), indicated a good general correspondence between them (Ruff et al. [1997] Nature 387:173-176). However, the two techniques were never systematically compared across a large group of modern humans of diverse body form. This study incorporates skeletal measures taken from 1,173 Holocene adult individuals, representing diverse geographic origins, body sizes, and body shapes. Femoral head breadth, bi-iliac breadth (after pelvic rearticulation), and long bone lengths were measured on each individual. Statures were estimated from long bone lengths using appropriate reference samples. Body masses were calculated using three available femoral head breadth (FH) formulae and the stature/bi-iliac breadth (STBIB) formula, and compared. All methods yielded similar results. Correlations between FH estimates and STBIB estimates are 0.74-0.81. Slight differences in results between the three FH estimates can be attributed to sampling differences in the original reference samples, and in particular, the body-size ranges included in those samples. There is no evidence for systematic differences in results due to differences in body proportions. Since the STBIB method was validated on other samples, and the FH methods produced similar estimates, this argues that either may be applied to skeletal remains with some confidence. 2004 Wiley-Liss, Inc.
Zhu, Wensheng; Yuan, Ying; Zhang, Jingwen; Zhou, Fan; Knickmeyer, Rebecca C; Zhu, Hongtu
2017-02-01
The aim of this paper is to systematically evaluate a biased sampling issue associated with genome-wide association analysis (GWAS) of imaging phenotypes for most imaging genetic studies, including the Alzheimer's Disease Neuroimaging Initiative (ADNI). Specifically, the original sampling scheme of these imaging genetic studies is primarily the retrospective case-control design, whereas most existing statistical analyses of these studies ignore such sampling scheme by directly correlating imaging phenotypes (called the secondary traits) with genotype. Although it has been well documented in genetic epidemiology that ignoring the case-control sampling scheme can produce highly biased estimates, and subsequently lead to misleading results and suspicious associations, such findings are not well documented in imaging genetics. We use extensive simulations and a large-scale imaging genetic data analysis of the Alzheimer's Disease Neuroimaging Initiative (ADNI) data to evaluate the effects of the case-control sampling scheme on GWAS results based on some standard statistical methods, such as linear regression methods, while comparing it with several advanced statistical methods that appropriately adjust for the case-control sampling scheme. Copyright © 2016 Elsevier Inc. All rights reserved.
Martínez-Mier, E. Angeles; Soto-Rojas, Armando E.; Buckley, Christine M.; Margineda, Jorge; Zero, Domenick T.
2010-01-01
Objective The aim of this study was to assess methods currently used for analyzing fluoridated salt in order to identify the most useful method for this type of analysis. Basic research design Seventy-five fluoridated salt samples were obtained. Samples were analyzed for fluoride content, with and without pretreatment, using direct and diffusion methods. Element analysis was also conducted in selected samples. Fluoride was added to ultra pure NaCl and non-fluoridated commercial salt samples and Ca and Mg were added to fluoride samples in order to assess fluoride recoveries using modifications to the methods. Results Larger amounts of fluoride were found and recovered using diffusion than direct methods (96%–100% for diffusion vs. 67%–90% for direct). Statistically significant differences were obtained between direct and diffusion methods using different ion strength adjusters. Pretreatment methods reduced the amount of recovered fluoride. Determination of fluoride content was influenced both by the presence of NaCl and other ions in the salt. Conclusion Direct and diffusion techniques for analysis of fluoridated salt are suitable methods for fluoride analysis. The choice of method should depend on the purpose of the analysis. PMID:20088217
Efficient global biopolymer sampling with end-transfer configurational bias Monte Carlo
NASA Astrophysics Data System (ADS)
Arya, Gaurav; Schlick, Tamar
2007-01-01
We develop an "end-transfer configurational bias Monte Carlo" method for efficient thermodynamic sampling of complex biopolymers and assess its performance on a mesoscale model of chromatin (oligonucleosome) at different salt conditions compared to other Monte Carlo moves. Our method extends traditional configurational bias by deleting a repeating motif (monomer) from one end of the biopolymer and regrowing it at the opposite end using the standard Rosenbluth scheme. The method's sampling efficiency compared to local moves, pivot rotations, and standard configurational bias is assessed by parameters relating to translational, rotational, and internal degrees of freedom of the oligonucleosome. Our results show that the end-transfer method is superior in sampling every degree of freedom of the oligonucleosomes over other methods at high salt concentrations (weak electrostatics) but worse than the pivot rotations in terms of sampling internal and rotational sampling at low-to-moderate salt concentrations (strong electrostatics). Under all conditions investigated, however, the end-transfer method is several orders of magnitude more efficient than the standard configurational bias approach. This is because the characteristic sampling time of the innermost oligonucleosome motif scales quadratically with the length of the oligonucleosomes for the end-transfer method while it scales exponentially for the traditional configurational-bias method. Thus, the method we propose can significantly improve performance for global biomolecular applications, especially in condensed systems with weak nonbonded interactions and may be combined with local enhancements to improve local sampling.
Characterisation of a reference site for quantifying uncertainties related to soil sampling.
Barbizzi, Sabrina; de Zorzi, Paolo; Belli, Maria; Pati, Alessandra; Sansone, Umberto; Stellato, Luisa; Barbina, Maria; Deluisa, Andrea; Menegon, Sandro; Coletti, Valter
2004-01-01
The paper reports a methodology adopted to face problems related to quality assurance in soil sampling. The SOILSAMP project, funded by the Environmental Protection Agency of Italy (APAT), is aimed at (i) establishing protocols for soil sampling in different environments; (ii) assessing uncertainties associated with different soil sampling methods in order to select the "fit-for-purpose" method; (iii) qualifying, in term of trace elements spatial variability, a reference site for national and international inter-comparison exercises. Preliminary results and considerations are illustrated.
Enhancing resolution of free-flow zone electrophoresis via a simple sheath-flow sample injection.
Yang, Ying; Kong, Fan-Zhi; Liu, Ji; Li, Jun-Min; Liu, Xiao-Ping; Li, Guo-Qing; Wang, Ju-Fang; Xiao, Hua; Fan, Liu-Yin; Cao, Cheng-Xi; Li, Shan
2016-07-01
In this work, a simple and novel sheath-flow sample injection method (SFSIM) is introduced to reduce the band broadening of free-flow zone electrophoresis separation in newly developed self-balance free-flow electrophoresis instrument. A needle injector was placed in the center of the separation inlet, into which the BGE and sample solution were pumped simultaneously. BGE formed sheath flow outside the sample stream, resulting in less band broadening related to hydrodynamics and electrodynamics. Hemoglobin and C-phycocyanin were successfully separated by the proposed method in contrast to the poor separation of free-flow electrophoresis with the traditional injection method without sheath flow. About 3.75 times resolution enhancement could be achieved by sheath-flow sample injection method. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Phase-resolved acoustic radiation force optical coherence elastography
NASA Astrophysics Data System (ADS)
Qi, Wenjuan; Chen, Ruimin; Chou, Lidek; Liu, Gangjun; Zhang, Jun; Zhou, Qifa; Chen, Zhongping
2012-11-01
Many diseases involve changes in the biomechanical properties of tissue, and there is a close correlation between tissue elasticity and pathology. We report on the development of a phase-resolved acoustic radiation force optical coherence elastography method (ARF-OCE) to evaluate the elastic properties of tissue. This method utilizes chirped acoustic radiation force to produce excitation along the sample's axial direction, and it uses phase-resolved optical coherence tomography (OCT) to measure the vibration of the sample. Under 500-Hz square wave modulated ARF signal excitation, phase change maps of tissue mimicking phantoms are generated by the ARF-OCE method, and the resulting Young's modulus ratio is correlated with a standard compression test. The results verify that this technique could efficiently measure sample elastic properties accurately and quantitatively. Furthermore, a three-dimensional ARF-OCE image of the human atherosclerotic coronary artery is obtained. The result indicates that our dynamic phase-resolved ARF-OCE method can delineate tissues with different mechanical properties.
Hu, Youxin; Shanjani, Yaser; Toyserkani, Ehsan; Grynpas, Marc; Wang, Rizhi; Pilliar, Robert
2014-02-01
Porous calcium polyphosphate (CPP) structures proposed as bone-substitute implants and made by sintering CPP powders to form bending test samples of approximately 35 vol % porosity were machined from preformed blocks made either by additive manufacturing (AM) or conventional gravity sintering (CS) methods and the structure and mechanical characteristics of samples so made were compared. AM-made samples displayed higher bending strengths (≈1.2-1.4 times greater than CS-made samples), whereas elastic constant (i.e., effective elastic modulus of the porous structures) that is determined by material elastic modulus and structural geometry of the samples was ≈1.9-2.3 times greater for AM-made samples. X-ray diffraction analysis showed that samples made by either method displayed the same crystal structure forming β-CPP after sinter annealing. The material elastic modulus, E, determined using nanoindentation tests also showed the same value for both sample types (i.e., E ≈ 64 GPa). Examination of the porous structures indicated that significantly larger sinter necks resulted in the AM-made samples which presumably resulted in the higher mechanical properties. The development of mechanical properties was attributed to the different sinter anneal procedures required to make 35 vol % porous samples by the two methods. A primary objective of the present study, in addition to reporting on bending strength and sample stiffness (elastic constant) characteristics, was to determine why the two processes resulted in the observed mechanical property differences for samples of equivalent volume percentage of porosity. An understanding of the fundamental reason(s) for the observed effect is considered important for developing improved processes for preparation of porous CPP implants as bone substitutes for use in high load-bearing skeletal sites. Copyright © 2013 Wiley Periodicals, Inc.
Huang, Yunrui; Zhou, Qingxiang; Xie, Guohong
2013-01-01
Fungicides have been widely used throughout the world, and the resulted pollution has absorbed great attention in recent years. Present study described an effective measurement technique for fungicides including thiram, metalaxyl, diethofencarb, myclobutanil and tebuconazole in environmental water samples. A micro-solid phase extraction (μSPE) was developed utilizing ordered TiO(2) nanotube array for determination of target fungicides prior to a high performance liquid chromatography (HPLC). The experimental results indicated that TiO(2) nanotube arrays demonstrated excellent merits on the preconcentration of fungicides, and excellent linear relationship between peak area and the concentration of fungicides was obtained in the range of 0.1-50 μg L(-1). The detection limits for the targeted fungicides were in the range of 0.016-0.086 μg L(-1) (S/N=3). Four real environmental water samples were used to validate the applicability of the proposed method, and good spiked recoveries in the range of 73.9-114% were achieved. A comparison of present method with conventional solid phase extraction was made and the results exhibited that proposed method resulted in better recoveries. The results demonstrated that this μ-SPE technique was a viable alternative for the analysis of fungicides in complex samples. Copyright © 2012 Elsevier Ltd. All rights reserved.
Use of Passive Diffusion Samplers for Monitoring Volatile Organic Compounds in Ground Water
Harte, Philip T.; Brayton, Michael J.; Ives, Wayne
2000-01-01
Passive diffusion samplers have been tested at a number of sites where volatile organic compounds (VOC's) are the principal contaminants in ground water. Test results generally show good agreement between concentrations of VOC's in samples collected with diffusion samplers and concentrations in samples collected by purging the water from a well. Diffusion samplers offer several advantages over conventional and low-flow ground-water sampling procedures: * Elimination of the need to purge a well before collecting a sample and to dispose of contaminated water. * Elimination of cross-contamination of samples associated with sampling with non-dedicated pumps or sample delivery tubes. * Reduction in sampling time by as much as 80 percent of that required for 'purge type' sampling methods. * An increase in the frequency and spatial coverage of monitoring at a site because of the associated savings in time and money. The successful use of diffusion samplers depends on the following three primary factors: (1) understanding site conditions and contaminants of interest (defining sample objectives), (2) validating of results of diffusion samplers against more widely acknowledged sampling methods, and (3) applying diffusion samplers in the field.
USGS GeoData Digital Raster Graphics
,
2001-01-01
Passive diffusion samplers have been tested at a number of sites where volatile organic compounds (VOC?s) are the principal contaminants in ground water. Test results generally show good agreement between concentrations of VOC?s in samples collected with diffusion samplers and concentrations in samples collected by purging the water from a well. Diffusion samplers offer several advantages over conventional and low-flow ground-water sampling procedures: ? Elimination of the need to purge a well before collecting a sample and to dispose of contaminated water. ? Elimination of cross-contamination of samples associated with sampling with non-dedicated pumps or sample delivery tubes. ? Reduction in sampling time by as much as 80 percent of that required for ?purge type? sampling methods. ? An increase in the frequency and spatial coverage of monitoring at a site because of the associated savings in time and money. The successful use of diffusion samplers depends on the following three primary factors: (1) understanding site conditions and contaminants of interest (defining sample objectives), (2) validating of results of diffusion samplers against more widely acknowledged sampling methods, and (3) applying diffusion samplers in the field.
Hou, Siyuan; Riley, Christopher B; Mitchell, Cynthia A; Shaw, R Anthony; Bryanton, Janet; Bigsby, Kathryn; McClure, J Trenton
2015-09-01
Immunoglobulin G (IgG) is crucial for the protection of the host from invasive pathogens. Due to its importance for human health, tools that enable the monitoring of IgG levels are highly desired. Consequently there is a need for methods to determine the IgG concentration that are simple, rapid, and inexpensive. This work explored the potential of attenuated total reflectance (ATR) infrared spectroscopy as a method to determine IgG concentrations in human serum samples. Venous blood samples were collected from adults and children, and from the umbilical cord of newborns. The serum was harvested and tested using ATR infrared spectroscopy. Partial least squares (PLS) regression provided the basis to develop the new analytical methods. Three PLS calibrations were determined: one for the combined set of the venous and umbilical cord serum samples, the second for only the umbilical cord samples, and the third for only the venous samples. The number of PLS factors was chosen by critical evaluation of Monte Carlo-based cross validation results. The predictive performance for each PLS calibration was evaluated using the Pearson correlation coefficient, scatter plot and Bland-Altman plot, and percent deviations for independent prediction sets. The repeatability was evaluated by standard deviation and relative standard deviation. The results showed that ATR infrared spectroscopy is potentially a simple, quick, and inexpensive method to measure IgG concentrations in human serum samples. The results also showed that it is possible to build a united calibration curve for the umbilical cord and the venous samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Zhang, Zeng-yan; Ji, Te; Zhu, Zhi-yong; Zhao, Hong-wei; Chen, Min; Xiao, Ti-qiao; Guo, Zhi
2015-01-01
Terahertz radiation is an electromagnetic radiation in the range between millimeter waves and far infrared. Due to its low energy and non-ionizing characters, THz pulse imaging emerges as a novel tool in many fields, such as material, chemical, biological medicine, and food safety. Limited spatial resolution is a significant restricting factor of terahertz imaging technology. Near field imaging method was proposed to improve the spatial resolution of terahertz system. Submillimeter scale's spauial resolution can be achieved if the income source size is smaller than the wawelength of the incoming source and the source is very close to the sample. But many changes were needed to the traditional terahertz time domain spectroscopy system, and it's very complex to analyze sample's physical parameters through the terahertz signal. A method of inserting a pinhole upstream to the sample was first proposed in this article to improve the spatial resolution of traditional terahertz time domain spectroscopy system. The measured spatial resolution of terahertz time domain spectroscopy system by knife edge method can achieve spatial resolution curves. The moving stage distance between 10 % and 90 Yo of the maximum signals respectively was defined as the, spatial resolution of the system. Imaging spatial resolution of traditional terahertz time domain spectroscopy system was improved dramatically after inserted a pinhole with diameter 0. 5 mm, 2 mm upstream to the sample. Experimental results show that the spatial resolution has been improved from 1. 276 mm to 0. 774 mm, with the increment about 39 %. Though this simple method, the spatial resolution of traditional terahertz time domain spectroscopy system was increased from millimeter scale to submillimeter scale. A pinhole with diameter 1 mm on a polyethylene plate was taken as sample, to terahertz imaging study. The traditional terahertz time domain spectroscopy system and pinhole inserted terahertz time domain spectroscopy system were applied in the imaging experiment respectively. The relative THz-power loss imaging of samples were use in this article. This method generally delivers the best signal to noise ratio in loss images, dispersion effects are cancelled. Terahertz imaging results show that the sample's boundary was more distinct after inserting the pinhole in front of, sample. The results also conform that inserting pinhole in front of sample can improve the imaging spatial resolution effectively. The theoretical analyses of the method which improve the spatial resolution by inserting a pinhole in front of sample were given in this article. The analyses also indicate that the smaller the pinhole size, the longer spatial coherence length of the system, the better spatial resolution of the system. At the same time the terahertz signal will be reduced accordingly. All the experimental results and theoretical analyses indicate that the method of inserting a pinhole in front of sample can improve the spatial resolution of traditional terahertz time domain spectroscopy system effectively, and it will further expand the application of terahertz imaging technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ababkov, Nikolai, E-mail: n.ababkov@rambler.ru; Smirnov, Alexander, E-mail: galvas.kem@gmail.com
The present paper presents comparative analysis of measurement results of acoustic and magnetic properties in long working metal of boiler drums and the results obtained by methods of electronic microscopy. The structure of the metal sample from the fracture zone to the base metal (metal working sample long) and the center of the base metal before welding (weld metal sample) was investigated by electron microscopy. Studies performed by spectral acoustic, magnetic noise and electron microscopic methods were conducted on the same plots and the same samples of long working and weld metal of high-pressure boiler drums. The analysis of researchmore » results showed high sensitivity of spectral-acoustic and magnetic-noise methods to definition changes of microstructure parameters. Practical application of spectral-acoustic and magnetic noise NDT method is possible for the detection of irregularities and changes in structural and phase state of the long working and weld metal of boiler drums, made of a special molybdenum steel (such as 20M). The above technique can be used to evaluate the structure and physical-mechanical properties of the long working metal of boiler drums in the energy sector.« less
Williams, Michael S; Cao, Yong; Ebel, Eric D
2013-07-15
Levels of pathogenic organisms in food and water have steadily declined in many parts of the world. A consequence of this reduction is that the proportion of samples that test positive for the most contaminated product-pathogen pairings has fallen to less than 0.1. While this is unequivocally beneficial to public health, datasets with very few enumerated samples present an analytical challenge because a large proportion of the observations are censored values. One application of particular interest to risk assessors is the fitting of a statistical distribution function to datasets collected at some point in the farm-to-table continuum. The fitted distribution forms an important component of an exposure assessment. A number of studies have compared different fitting methods and proposed lower limits on the proportion of samples where the organisms of interest are identified and enumerated, with the recommended lower limit of enumerated samples being 0.2. This recommendation may not be applicable to food safety risk assessments for a number of reasons, which include the development of new Bayesian fitting methods, the use of highly sensitive screening tests, and the generally larger sample sizes found in surveys of food commodities. This study evaluates the performance of a Markov chain Monte Carlo fitting method when used in conjunction with a screening test and enumeration of positive samples by the Most Probable Number technique. The results suggest that levels of contamination for common product-pathogen pairs, such as Salmonella on poultry carcasses, can be reliably estimated with the proposed fitting method and samples sizes in excess of 500 observations. The results do, however, demonstrate that simple guidelines for this application, such as the proportion of positive samples, cannot be provided. Published by Elsevier B.V.
Arango-Sabogal, Juan C; Labrecque, Olivia; Paré, Julie; Fairbrother, Julie-Hélène; Roy, Jean-Philippe; Wellemans, Vincent; Fecteau, Gilles
2016-11-01
Culture of Mycobacterium avium subsp. paratuberculosis (MAP) is the definitive antemortem test method for paratuberculosis. Microbial overgrowth is a challenge for MAP culture, as it complicates, delays, and increases the cost of the process. Additionally, herd status determination is impeded when noninterpretable (NI) results are obtained. The performance of PCR is comparable to fecal culture, thus it may be a complementary detection tool to classify NI samples. Our study aimed to determine if MAP DNA can be identified by PCR performed on NI environmental samples and to evaluate the performance of PCR before and after the culture of these samples in liquid media. A total of 154 environmental samples (62 NI, 62 negative, and 30 positive) were analyzed by PCR before being incubated in an automated system. Growth was confirmed by acid-fast bacilli stain and then the same PCR method was again applied on incubated samples, regardless of culture and stain results. Change in MAP DNA after incubation was assessed by converting the PCR quantification cycle (Cq) values into fold change using the 2 -ΔCq method (ΔCq = Cq after culture - Cq before culture). A total of 1.6% (standard error [SE] = 1.6) of the NI environmental samples had detectable MAP DNA. The PCR had a significantly better performance when applied after culture than before culture (p = 0.004). After culture, a 66-fold change (SE = 17.1) in MAP DNA was observed on average. Performing a PCR on NI samples improves MAP culturing. The PCR method used in our study is a reliable and consistent method to classify NI environmental samples. © 2016 The Author(s).
Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.
2008-06-01
An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near equilibrium flows (DREAM-I) or output instantaneous particle data obtained by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single processor code and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where there these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
Schwaiger, K; Wimmer, M; Huber-Schlenstedt, R; Fehlings, K; Hölzel, C S; Bauer, J
2012-01-01
A large proportion of mastitis milk samples yield negative or nonspecific results (i.e., no mastitis pathogen can be identified) in bacterial culturing. Therefore, the culture-independent PCR-single strand conformation polymorphism method was applied to the investigation of bovine mastitis milk samples. In addition to the known mastitis pathogens, the method was suitable for the detection of fastidious bacteria such as Mycoplasma spp., which are often missed by conventional culturing methods. The detection of Helcococcus ovis in 4 samples might indicate an involvement of this species in pathogenesis of bovine mastitis. In conclusion, PCR-single-strand conformation polymorphism is a promising tool for gaining new insights into the bacteriological etiology of mastitis. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Scheid, Anika; Nebel, Markus E
2012-07-09
Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. In this work, we will consider the SCFG based approach in order to perform an analysis on how the quality of generated sample sets and the corresponding prediction accuracy changes when different degrees of disturbances are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst-case time requirements of such an SCFG based sampling method without significant accuracy losses. If, on the other hand, the quality of sampled structures can be observed to strongly react to slight disturbances, there is little hope for improving the complexity by heuristic procedures. We hence provide a reliable test for the hypothesis that a heuristic method could be implemented to improve the time scaling of RNA secondary structure prediction in the worst-case - without sacrificing much of the accuracy of the results. Our experiments indicate that absolute errors generally lead to the generation of useless sample sets, whereas relative errors seem to have only small negative impact on both the predictive accuracy and the overall quality of resulting structure samples. Based on these observations, we present some useful ideas for developing a time-reduced sampling method guaranteeing an acceptable predictive accuracy. We also discuss some inherent drawbacks that arise in the context of approximation. The key results of this paper are crucial for the design of an efficient and competitive heuristic prediction method based on the increasingly accepted and attractive statistical sampling approach. This has indeed been indicated by the construction of prototype algorithms.
NASA Technical Reports Server (NTRS)
Hazen-Bosveld, April; Lipert, Robert J.; Nordling, John; Shih, Chien-Ju; Siperko, Lorraine; Porter, Marc D.; Gazda, Daniel B.; Rutz, Jeff A.; Straub, John E.; Schultz, John R.;
2007-01-01
Colorimetric-solid phase extraction (C-SPE) is being developed as a method for in-flight monitoring of spacecraft water quality. C-SPE is based on measuring the change in the diffuse reflectance spectrum of indicator disks following exposure to a water sample. Previous microgravity testing has shown that air bubbles suspended in water samples can cause uncertainty in the volume of liquid passed through the disks, leading to errors in the determination of water quality parameter concentrations. We report here the results of a recent series of C-9 microgravity experiments designed to evaluate manual manipulation as a means to collect bubble-free water samples of specified volumes from water sample bags containing up to 47% air. The effectiveness of manual manipulation was verified by comparing the results from C-SPE analyses of silver(I) and iodine performed in-flight using samples collected and debubbled in microgravity to those performed on-ground using bubble-free samples. The ground and flight results showed excellent agreement, demonstrating that manual manipulation is an effective means for collecting bubble-free water samples in microgravity.
Salvatierra Virgen, Sara; Ceballos-Magaña, Silvia Guillermina; Salvatierra-Stamp, Vilma Del Carmen; Sumaya-Martínez, Maria Teresa; Martínez-Martínez, Francisco Javier; Muñiz-Valencia, Roberto
2017-12-01
In recent years, there has been an increased concern about the presence of toxic compounds derived from the Maillard reaction produced during food cooking at high temperatures. The main toxic compounds derived from this reaction are acrylamide and hydroxymethylfurfural (HMF). The majority of analytical methods require sample treatments using solvents which are highly polluting for the environment. The difficulty of quantifying HMF in complex fried food matrices encourages the development of new analytical methods. This paper provides a rapid, sensitive and environmentally-friendly analytical method for the quantification of HMF in corn chips using HPLC-DAD. Chromatographic separation resulted in a baseline separation for HMF in 3.7 min. Sample treatment for corn chip samples first involved a leaching process using water and afterwards a solid-phase extraction (SPE) using HLB-Oasis polymeric cartridges. Sample treatment optimisation was carried out by means of Box-Behnken fractional factorial design and Response Surface Methodolog y to examine the effects of four variables (sample weight, pH, sonication time and elution volume) on HMF extraction from corn chips. The SPE-HPLC-DAD method was validated. The limits of detection and quantification were 0.82 and 2.20 mg kg -1 , respectively. Method precision was evaluated in terms of repeatability and reproducibility as relative standard deviations (RSDs) using three concentration levels. For repeatability, RSD values were 6.9, 3.6 and 2.0%; and for reproducibility 18.8, 7.9 and 2.9%. For a ruggedness study the Yuden test was applied and the result demonstrated the method as robust. The method was successfully applied to different corn chip samples.
NASA Astrophysics Data System (ADS)
Kwon, Ki-Won; Cho, Yongsoo
This letter presents a simple joint estimation method for residual frequency offset (RFO) and sampling frequency offset (STO) in OFDM-based digital video broadcasting (DVB) systems. The proposed method selects a continual pilot (CP) subset from an unsymmetrically and non-uniformly distributed CP set to obtain an unbiased estimator. Simulation results show that the proposed method using a properly selected CP subset is unbiased and performs robustly.
Model Reduction via Principe Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on parameterization and problems undertaking. This makes inverse estimation and uncertainty quantification very challenging, especially for those problems in two- or three-dimensional spatial domains. Model reduction technique has the potential of mitigating the curse of dimensionality by reducing total numbers of unknowns while describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA methods with geometric sampling (referred to as 'Method 1'), (2) PCA methods with MCMC sampling (referred to as 'Method 2'), and (3) PCA methods with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and invert them under two different situations: (1) the noised data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noise data and the covariance matrix are inconsistent (referred to as biased case). In the unbiased case, comparison between the analytical solutions and the inversion results show that all three methods provide good estimates of the true values and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, and both Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA analysis is inconsistent with true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
Pontoni, Ludovico; Panico, Antonio; Matanò, Alessia; van Hullebusch, Eric D; Fabbricino, Massimiliano; Esposito, Giovanni; Pirozzi, Francesco
2017-12-06
A novel modification of the sample preparation procedure for the Folin-Ciocalteu colorimetric assay for the determination of total phenolic compounds in natural solid and semisolid organic materials (e.g., foods, organic solid waste, soils, plant tissues, agricultural residues, manure) is proposed. In this method, the sample is prepared by adding sodium sulfate as a solid diluting agent before homogenization. The method allows for the determination of total phenols (TP) in samples with high solids contents, and it provides good accuracy and reproducibility. Additionally, this method permits analyses of significant amounts of sample, which reduces problems related to heterogeneity. We applied this method to phenols-rich lignocellulosic and humic-like solids and semisolid samples, including rice straw (RS), peat-rich soil (PS), and food waste (FW). The TP concentrations measured with the solid dilution (SD) preparation were substantially higher (increases of 41.4%, 15.5%, and 59.4% in RS, PS and FW, respectively) than those obtained with the traditional method (solids suspended in water). These results showed that the traditional method underestimates the phenolic contents in the studied solids.
NASA Astrophysics Data System (ADS)
Harvey, J.; Fisher, J. L.; Johnson, S.; Morgan, S.; Peterson, W. T.; Satterthwaite, E. V.; Vrijenhoek, R. C.
2016-02-01
Our ability to accurately characterize the diversity of planktonic organisms is affected by both the methods we use to collect water samples and our approaches to assessing sample contents. Plankton nets collect organisms from high volumes of water, but integrate sample contents along the net's path. In contrast, plankton pumps collect water from discrete depths. Autonomous underwater vehicles (AUVs) can collect water samples with pinpoint accuracy from physical features such as upwelling fronts or biological features such as phytoplankton blooms, but sample volumes are necessarily much smaller than those possible with nets. Characterization of plankton diversity and abundances in water samples may also vary with the assessment method we apply. Morphological taxonomy provides visual identification and enumeration of organisms via microscopy, but is labor intensive. Next generation DNA sequencing (NGS) shows great promise for assessing plankton diversity in water samples but accurate assessment of relative abundances may not be possible in all cases. Comparison of morphological taxonomy to molecular approaches is necessary to identify areas of overlap and also areas of disagreement between these methods. We have compared morphological taxonomic assessments to mitochondrial COI and nuclear 28S ribosomal RNA NGS results for plankton net samples collected in Monterey bay, California. We have made a similar comparison for plankton pump samples, and have also applied our NGS methods to targeted, small volume water samples collected by an AUV. Our goal is to communicate current results and lessons learned regarding application of traditional taxonomy and novel molecular approaches to the study of plankton diversity in spatially and temporally variable, coastal marine environments.
Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella
2016-12-09
Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of the potential drugs. In this study, we used a dataset, which consists of four toxicity effects:mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features for reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and Synthetic Minority Oversampling Technique are used to solve the problem of imbalanced datasets. ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. ITS method has two steps. The first step (sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlapping that is produced from the first step. In the third phase, Bagging classifier is used to classify an unknown drug into toxic or non-toxic. The experimental results proved that the proposed model performed well in classifying the unknown samples according to all toxic effects in the imbalanced datasets.
Taylor, C; Duffy, L K; Plumley, F G; Bowyer, R T
2000-09-01
A spectrofluorometric method (B. Grandchamp et al., 1980, Biochem. Biophys. Acta 629, 577-586) developed for the determination of amounts of uroporphyrin I (Uro I), coproporphyrin III (Copro III), and protoporphyrin IX (Proto IX) in skin fibroblasts was compared with a high-performance liquid chromatography (HPLC) method for the analysis of porphyrins in fecal samples of river otters (Lutra canadensis). Heptacarboxylate porphyrin I and coproporphyrin I, two porphyrins determined to be critical in defining the porphyrin profile in fecal samples of river otters with the HPLC method, contributed substantially to the calculation of the concentrations of Uro I and Copro III, respectively, in standard solutions of porphyrins with the spectrofluorometric method. Fluorescent components of the fecal matrix complicated the determination of the concentrations of Uro I, Copro III, and Proto IX with the spectrofluorometric method and resulted in erroneous values for the concentrations of these porphyrins compared with values determined with the HPLC method. These results indicate that the complexity of the sample, particularly with regard to the potential presence of interfering fluorescent compounds, as well as porphyrins additional to Uro I, Copro III, and Proto IX, should be considered prior to the application of the spectrofluorometric method. An alternative HPLC method developed for the rapid characterization of porphyrin profiles in fecal samples of river otters is described. Copyright 2000 Academic Press.
Chang, Cheng-Ping; Lin, Tser-Cheng; Lin, Yu-Wen; Hua, Yi-Chun; Chu, Wei-Ming; Lin, Tzu-Yu; Lin, Yi-Wen; Wu, Jyun-De
2016-01-01
Objective: The purpose of this study was to compare thermal desorption tubes and stainless steel canisters for measuring volatile organic compounds (VOCs) emitted from petrochemical factories. Methods: Twelve petrochemical factories in the Mailiao Industrial Complex were recruited for conducting the measurements of VOCs. Thermal desorption tubes and 6-l specially prepared stainless steel canisters were used to simultaneously perform active sampling of environmental air samples. The sampling time of the environmental air samples was set up on 6h close to a full work shift of the workers. A total of 94 pairwise air samples were collected by using the thermal adsorption tubes and stainless steel canisters in these 12 factories in the petrochemical industrial complex. To maximize the number of comparative data points, all the measurements from all the factories in different sampling times were lumped together to perform a linear regression analysis for each selected VOC. Pearson product–moment correlation coefficient was used to examine the correlation between the pairwise measurements of these two sampling methods. A paired t-test was also performed to examine whether the difference in the concentrations of each selected VOC measured by the two methods was statistically significant. Results: The correlation coefficients of seven compounds, including acetone, n-hexane, benzene, toluene, 1,2-dichloroethane, 1,3-butadiene, and styrene were >0.80 indicating the two sampling methods for these VOCs’ measurements had high consistency. The paired t-tests for the measurements of n-hexane, benzene, m/p-xylene, o-xylene, 1,2-dichloroethane, and 1,3-butadiene showed statistically significant difference (P-value < 0.05). This indicated that the two sampling methods had various degrees of systematic errors. Looking at the results of six chemicals and these systematic errors probably resulted from the differences of the detection limits in the two sampling methods for these VOCs. Conclusions: The comparison between the concentrations of each of the 10 selected VOCs measured by the two sampling methods indicted that the thermal desorption tubes provided high accuracy and precision measurements for acetone, benzene, and 1,3-butadiene. The accuracy and precision of using the thermal desorption tubes for measuring the VOCs can be improved due to new developments in sorbent materials, multi-sorbent designs, and thermal desorption instrumentation. More applications of thermal desorption tubes for measuring occupational and environmental hazardous agents can be anticipated. PMID:26585828
Ullah, Md Ahsan; Kim, Ki-Hyun; Szulejko, Jan E; Cho, Jinwoo
2014-04-11
The production of short-chained volatile fatty acids (VFAs) by the anaerobic bacterial digestion of sewage (wastewater) affords an excellent opportunity to alternative greener viable bio-energy fuels (i.e., microbial fuel cell). VFAs in wastewater (sewage) samples are commonly quantified through direct injection (DI) into a gas chromatograph with a flame ionization detector (GC-FID). In this study, the reliability of VFA analysis by the DI-GC method has been examined against a thermal desorption (TD-GC) method. The results indicate that the VFA concentrations determined from an aliquot from each wastewater sample by the DI-GC method were generally underestimated, e.g., reductions of 7% (acetic acid) to 93.4% (hexanoic acid) relative to the TD-GC method. The observed differences between the two methods suggest the possibly important role of the matrix effect to give rise to the negative biases in DI-GC analysis. To further explore this possibility, an ancillary experiment was performed to examine bias patterns of three DI-GC approaches. For instance, the results of the standard addition (SA) method confirm the definite role of matrix effect when analyzing wastewater samples by DI-GC. More importantly, their biases tend to increase systematically with increasing molecular weight and decreasing VFA concentrations. As such, the use of DI-GC method, if applied for the analysis of samples with a complicated matrix, needs a thorough validation to improve the reliability in data acquisition. Copyright © 2014 Elsevier B.V. All rights reserved.
Jafari, Mohammad T; Riahi, Farhad
2014-05-23
The capability of corona discharge ionization ion mobility spectrometry (CD-IMS) for direct analysis of the samples extracted by dispersive liquid-liquid microextraction (DLLME) was investigated and evaluated, for the first time. To that end, an appropriate new injection port was designed and constructed, resulting in possibility of direct injection of the known sample volume, without tedious sample preparation steps (e.g. derivatization, solvent evaporation, and re-solving in another solvent…). Malathion as a test compound was extracted from different matrices by a rapid and convenient DLLME method. The positive ion mobility spectra of the extracted malathion were obtained after direct injection of carbon tetrachloride or methanol solutions. The analyte responses were compared and the statistical results revealed the feasibility of direct analysis of the extracted samples in carbon tetrachloride, resulting in a convenient methodology. The coupled method of DLLME-CD-IMS was exhaustively validated in terms of sensitivity, dynamic range, recovery, and enrichment factor. Finally, various real samples of apple, river and underground water were analyzed, all verifying the feasibility and success of the proposed method for the easy extraction of the analyte using DLLME separation before the direct analysis by CD-IMS. Copyright © 2014 Elsevier B.V. All rights reserved.
Bayesian posterior distributions without Markov chains.
Cole, Stephen R; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B
2012-03-01
Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976-1983) assessing the relation between residential exposure to magnetic fields and the development of childhood cancer. Results from rejection sampling (odds ratio (OR) = 1.69, 95% posterior interval (PI): 0.57, 5.00) were similar to MCMC results (OR = 1.69, 95% PI: 0.58, 4.95) and approximations from data-augmentation priors (OR = 1.74, 95% PI: 0.60, 5.06). In example 2, the authors apply rejection sampling to a cohort study of 315 human immunodeficiency virus seroconverters (1984-1998) to assess the relation between viral load after infection and 5-year incidence of acquired immunodeficiency syndrome, adjusting for (continuous) age at seroconversion and race. In this more complex example, rejection sampling required a notably longer run time than MCMC sampling but remained feasible and again yielded similar results. The transparency of the proposed approach comes at a price of being less broadly applicable than MCMC.
Revesz, Kinga M.; Landwehr, Jurate Maciunas; Keybl, Jaroslav Edward
2001-01-01
A new method was developed to analyze the stable carbon and oxygen isotope ratios of small samples (400?20 ?g) of calcium carbonate. This new method streamlines the classical phosphoric acid - calcium carbonate (H3PO4 - CaCO3) reaction method by making use of a Thermoquest-Finnigan GasBench II preparation device and a Delta Plus XL continuous flow isotope ratio mass spectrometer. To obtain reproducible and accurate results, optimal conditions for the H3PO4 - CaCO3 reaction had to be determined. At the acid-carbonate reaction temperature suggested by the equipment manufacturer, the oxygen isotope ratio results were unsatisfactory (standard deviation () greater than 1.5 per mill), probably because of a secondary reaction. When the acid-carbonate reaction temperature was lowered to 26?C and the reaction time was increased to 24 hours, the precision of the carbon and oxygen isotope ratios for duplicate analyses improved to 0.1 and 0.2 per mill, respectively. The method was tested by analyzing calcite from Devils Hole, Nevada, which was formed by precipitation from ground water onto the walls of a sub-aqueous cavern during the last 500,000 years. Isotope-ratio values previously had been obtained by the classical method for Devils Hole core DH-11. The DH-11 core had been recently re-sampled, and isotope-ratio values were obtained using this new method. The results were comparable to those obtained by the classical method. The consistency of the isotopic results is such that an alignment offset could be identified in the re-sampled core material, a cutting error that was then independently confirmed. The reproducibility of the isotopic values is demonstrated by a correlation of approximately 0.96 for both isotopes, after correcting for an alignment offset. This result indicates that the new method is a viable alternative to the classical method. In particular, the new method requires less sample material permitting finer resolution and allows automation of some processes resulting in considerable timesavings.
Boix, A; Fernández Pierna, J A; von Holst, C; Baeten, V
2012-01-01
The performance characteristics of a near infrared microscopy (NIRM) method, when applied to the detection of animal products in feedingstuffs, were determined via a collaborative study. The method delivers qualitative results in terms of the presence or absence of animal particles in feed and differentiates animal from vegetable feed ingredients on the basis of the evaluation of near infrared spectra obtained from individual particles present in the sample. The specificity ranged from 86% to 100%. The limit of detection obtained on the analysis of the sediment fraction, prepared as for the European official method, was 0.1% processed animal proteins (PAPs) in feed, since all laboratories correctly identified the positive samples. This limit has to be increased up to 2% for the analysis of samples which are not sedimented. The required sensitivity for the official control is therefore achieved in the analysis of the sediment fraction of the samples where the method can be applied for the detection of the presence of animal meal. Criteria for the classification of samples, when fewer than five spectra are found, as being of animal origin needs to be set up in order to harmonise the approach taken by the laboratories when applying NIRM for the detection of the presence of animal meal in feed.
Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, Irem Ersöz
2013-03-01
The aim of study is to introduce method of Soft Independent Modeling of Class Analogy (SIMCA), and to express whether the method is affected from the number of independent variables, the relationship between variables and sample size. Simulation study. SIMCA model is performed in two stages. In order to determine whether the method is influenced by the number of independent variables, the relationship between variables and sample size, simulations were done. Conditions in which sample sizes in both groups are equal, and where there are 30, 100 and 1000 samples; where the number of variables is 2, 3, 5, 10, 50 and 100; moreover where the relationship between variables are quite high, in medium level and quite low were mentioned. Average classification accuracy of simulation results which were carried out 1000 times for each possible condition of trial plan were given as tables. It is seen that diagnostic accuracy results increase as the number of independent variables increase. SIMCA method is a method in which the relationship between variables are quite high, the number of independent variables are many in number and where there are outlier values in the data that can be used in conditions having outlier values.
2014-01-01
Background Levels of haemoglobin A1c (HbA1c) and blood lipids are important determinants of risk in patients with diabetes. Standard analysis methods based upon venous blood samples can be logistically challenging in resource-poor settings where much of the diabetes epidemic is occurring. Dried blood spots (DBS) provide a simple alternative method for sample collection but the comparability of data from analyses based on DBS is not well established. Methods We conducted a systematic review and meta-analysis to define the association of findings for HbA1c and blood lipids for analyses based upon standard methods compared to DBS. The Cochrane, Embase and Medline databases were searched for relevant reports and summary regression lines were estimated. Results 705 abstracts were found by the initial electronic search with 6 further reports identified by manual review of the full papers. 16 studies provided data for one or more outcomes of interest. There was a close agreement between the results for HbA1c assays based on venous and DBS samples (DBS = 0.9858venous + 0.3809), except for assays based upon affinity chromatography. Significant adjustment was required for assays of total cholesterol (DBS = 0.6807venous + 1.151) but results for triglycerides (DBS = 0.9557venous + 0.1427) were directly comparable. Conclusions For HbA1c and selected blood lipids, assays based on DBS samples are clearly associated with assays based on standard venous samples. There are, however, significant uncertainties about the nature of these associations and there is a need for standardisation of the sample collection, transportation, storage and analysis methods before the technique can be considered mainstream. This should be a research priority because better elucidation of metabolic risks in resource poor settings, where venous sampling is infeasible, will be key to addressing the global epidemic of cardiovascular diseases. PMID:25045323
Van der Vorst, Sébastien; Dekairelle, Anne-France; Irenge, Léonid; Hamoir, Marc; Robert, Annie; Gala, Jean-Luc
2009-01-01
This study compared automated vs. manual tissue grinding in terms of RNA yield obtained from oral mucosa biopsies. A total of 20 patients undergoing uvulectomy for sleep-related disorders and 10 patients undergoing biopsy for head and neck squamous cell carcinoma were enrolled in the study. Samples were collected, snap-frozen in liquid nitrogen, and divided into two parts of similar weight. Sample grinding was performed on one sample from each pair, either manually or using an automated cell disruptor. The performance and efficacy of each homogenization approach was compared in terms of total RNA yield (spectrophotometry, fluorometry), mRNA quantity [densitometry of specific TP53 amplicons and TP53 quantitative reverse-transcribed real-time PCR (qRT-PCR)], and mRNA quality (functional analysis of separated alleles in yeast). Although spectrophotometry and fluorometry results were comparable for both homogenization methods, TP53 expression values obtained by amplicon densitometry and qRT-PCR were significantly and consistently better after automated homogenization (p<0.005) for both uvula and tumor samples. Functional analysis of separated alleles in yeast results was better with the automated technique for tumor samples. Automated tissue homogenization appears to be a versatile, quick, and reliable method of cell disruption and is especially useful in the case of small malignant samples, which show unreliable results when processed by manual homogenization.
Confidence intervals for correlations when data are not normal.
Bishara, Anthony J; Hittner, James B
2017-02-01
With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval-for example, leading to a 95 % confidence interval that had actual coverage as low as 68 %. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
Zhu, Shaoqing; Guo, Sheng; Duan, Jin-Ao; Qian, Dawei; Yan, Hui; Sha, Xiuxiu; Zhu, Zhenhua
2017-06-01
To explore the nutrients in roots of Angelica sinensis (Angelicae Sinensis Radix, ASR), a medicinal and edible plant, and evaluate its nutritional value, a rapid and reliable UHPLC-TQ-MS method was established and used to determine the potential nutritional compounds, including nucleosides, nucleobases and amino acids, in 50 batches of ASR samples obtained using two drying methods. The results showed that ASR is a healthy food rich in nucleosides, nucleobases and amino acids, especially arginine. The total average content of nucleosides and nucleobases in all ASR samples was 3.94 mg/g, while that of amino acids reached as high as 61.79 mg/g. Principle component analysis showed that chemical profile differences exist between the two groups of ASR samples prepared using different drying methods, and the contents of nutritional compounds in samples dried with the tempering-intermittent drying processing method (TIDM) were generally higher than those dried using the traditional solar processing method. The above results suggest that ASR should be considered an ideal healthy food and TIDM could be a suitable drying method for ASR when taking nucleosides, nucleobases and amino acids as the major consideration for their known human health benefits.
Gyawali, P; Ahmed, W; Jagals, P; Sidhu, J P S; Toze, S
2015-12-01
Hookworm infection contributes around 700 million infections worldwide especially in developing nations due to increased use of wastewater for crop production. The effective recovery of hookworm ova from wastewater matrices is difficult due to their low concentrations and heterogeneous distribution. In this study, we compared the recovery rates of (i) four rapid hookworm ova concentration methods from municipal wastewater, and (ii) two concentration methods from sludge samples. Ancylostoma caninum ova were used as surrogate for human hookworm (Ancylostoma duodenale and Necator americanus). Known concentration of A. caninum hookworm ova were seeded into wastewater (treated and raw) and sludge samples collected from two wastewater treatment plants (WWTPs) in Brisbane and Perth, Australia. The A. caninum ova were concentrated from treated and raw wastewater samples using centrifugation (Method A), hollow fiber ultrafiltration (HFUF) (Method B), filtration (Method C) and flotation (Method D) methods. For sludge samples, flotation (Method E) and direct DNA extraction (Method F) methods were used. Among the four methods tested, filtration (Method C) method was able to recover higher concentrations of A. caninum ova consistently from treated wastewater (39-50%) and raw wastewater (7.1-12%) samples collected from both WWTPs. The remaining methods (Methods A, B and D) yielded variable recovery rate ranging from 0.2 to 40% for treated and raw wastewater samples. The recovery rates for sludge samples were poor (0.02-4.7), although, Method F (direct DNA extraction) provided 1-2 orders of magnitude higher recovery rate than Method E (flotation). Based on our results it can be concluded that the recovery rates of hookworm ova from wastewater matrices, especially sludge samples, can be poor and highly variable. Therefore, choice of concentration method is vital for the sensitive detection of hookworm ova in wastewater matrices. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
Pereira, Éderson R; de Almeida, Tarcísio S; Borges, Daniel L G; Carasek, Eduardo; Welz, Bernhard; Feldmann, Jörg; Campo Menoyo, Javier Del
2016-04-01
High-resolution continuum source graphite furnace atomic absorption spectrometry (HR-CS GF AAS) has been applied for the development of a method for the determination of total As in fish oil samples using direct analysis. The method does not use any sample pretreatment, besides dilution with 1-propanole, in order to decrease the oil viscosity. The stability and sensitivity of As were evaluated using ruthenium and iridium as permanent chemical modifiers and palladium added in solution over the sample. The best results were obtained with ruthenium as the permanent modifier and palladium in solution added to samples and standard solutions. Under these conditions, aqueous standard solutions could be used for calibration for the fish oil samples diluted with 1-propanole. The pyrolysis and atomization temperatures were 1400 °C and 2300 °C, respectively, and the limit of detection and characteristic mass were 30 pg and 43 pg, respectively. Accuracy and precision of the method have been evaluated using microwave-assisted acid digestion of the samples with subsequent determination by HR-CS GF AAS and ICP-MS; the results were in agreement (95% confidence level) with those of the proposed method. Copyright © 2015 Elsevier B.V. All rights reserved.
Nkouawa, Agathe; Sako, Yasuhito; Li, Tiaoying; Chen, Xingwang; Nakao, Minoru; Yanagida, Tetsuya; Okamoto, Munehiro; Giraudoux, Patrick; Raoul, Francis; Nakaya, Kazuhiro; Xiao, Ning; Qiu, Jiamin; Qiu, Dongchuan; Craig, Philip S; Ito, Akira
2012-12-01
In this study, we applied a loop-mediated isothermal amplification method for identification of human Taenia tapeworms in Tibetan communities in Sichuan, China. Out of 51 proglottids recovered from 35 carriers, 9, 1, and 41 samples were identified as Taenia solium, Taenia asiatica and Taenia saginata, respectively. Same results were obtained afterwards in the laboratory, except one sample. These results demonstrated that the LAMP method enabled rapid identification of parasites in the field surveys, which suggested that this method would contribute to the control of Taenia infections in endemic areas. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Davoren, Jon; Vanek, Daniel; Konjhodzić, Rijad; Crews, John; Huffine, Edwin; Parsons, Thomas J.
2007-01-01
Aim To quantitatively compare a silica extraction method with a commonly used phenol/chloroform extraction method for DNA analysis of specimens exhumed from mass graves. Methods DNA was extracted from twenty randomly chosen femur samples, using the International Commission on Missing Persons (ICMP) silica method, based on Qiagen Blood Maxi Kit, and compared with the DNA extracted by the standard phenol/chloroform-based method. The efficacy of extraction methods was compared by real time polymerase chain reaction (PCR) to measure DNA quantity and the presence of inhibitors and by amplification with the PowerPlex 16 (PP16) multiplex nuclear short tandem repeat (STR) kit. Results DNA quantification results showed that the silica-based method extracted on average 1.94 ng of DNA per gram of bone (range 0.25-9.58 ng/g), compared with only 0.68 ng/g by the organic method extracted (range 0.0016-4.4880 ng/g). Inhibition tests showed that there were on average significantly lower levels of PCR inhibitors in DNA isolated by the organic method. When amplified with PP16, all samples extracted by silica-based method produced 16 full loci profiles, while only 75% of the DNA extracts obtained by organic technique amplified 16 loci profiles. Conclusions The silica-based extraction method showed better results in nuclear STR typing from degraded bone samples than a commonly used phenol/chloroform method. PMID:17696302
Investigating the soil removal characteristics of flexible tube coring method for lunar exploration
NASA Astrophysics Data System (ADS)
Tang, Junyue; Quan, Qiquan; Jiang, Shengyuan; Liang, Jieneng; Lu, Xiangyong; Yuan, Fengpei
2018-02-01
Compared with other technical solutions, sampling the planetary soil and returning it back to Earth may be the most direct method to seek the evidence of extraterrestrial life. To keep sample's stratification for further analyzing, a novel sampling method called flexible tube coring has been adopted for China future lunar explorations. Given the uncertain physical properties of lunar regolith, proper drilling parameters should be adjusted immediately in piercing process. Otherwise, only a small amount of core could be sampled and overload drilling faults could occur correspondingly. Due to the fact that the removed soil is inevitably connected with the cored soil, soil removal characteristics may have a great influence on both drilling loads and coring results. To comprehend the soil removal characteristics, a non-contact measurement was proposed and verified to acquire the coring and removal results accurately. Herein, further more experiments in one homogenous lunar regolith simulant were conducted, revealing that there exists a sudden core failure during the sampling process and the final coring results are determined by the penetration per revolution index. Due to the core failure, both drilling loads and soil's removal states are also affected thereby.
Sun, Hao; Guo, Jianbin; Wu, Shubiao; Liu, Fang; Dong, Renjie
2017-09-01
The volatile fatty acids (VFAs) concentration has been considered as one of the most sensitive process performance indicators in anaerobic digestion (AD) process. However, the accurate determination of VFAs concentration in AD processes normally requires advanced equipment and complex pretreatment procedures. A simplified method with fewer sample pretreatment procedures and improved accuracy is greatly needed, particularly for on-site application. This report outlines improvements to the Nordmann method, one of the most popular titrations used for VFA monitoring. The influence of ion and solid interfering subsystems in titrated samples on results accuracy was discussed. The total solid content in titrated samples was the main factor affecting accuracy in VFA monitoring. Moreover, a high linear correlation was established between the total solids contents and VFA measurement differences between the traditional Nordmann equation and gas chromatography (GC). Accordingly, a simplified titration method was developed and validated using a semi-continuous experiment of chicken manure anaerobic digestion with various organic loading rates. The good fitting of the results obtained by this method in comparison with GC results strongly supported the potential application of this method to VFA monitoring. Copyright © 2017. Published by Elsevier Ltd.
Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images
Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun
2013-01-01
This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plan cluster of the volume dataset. The results show that the method improves the rendering speed at over three times compared with the conventional algorithm and the image quality is well guaranteed. PMID:23424608
Small sample estimation of the reliability function for technical products
NASA Astrophysics Data System (ADS)
Lyamets, L. L.; Yakimenko, I. V.; Kanishchev, O. A.; Bliznyuk, O. A.
2017-12-01
It is demonstrated that, in the absence of big statistic samples obtained as a result of testing complex technical products for failure, statistic estimation of the reliability function of initial elements can be made by the moments method. A formal description of the moments method is given and its advantages in the analysis of small censored samples are discussed. A modified algorithm is proposed for the implementation of the moments method with the use of only the moments at which the failures of initial elements occur.
Hwang, Chiu-Chu; Lin, Chia-Min; Kung, Hsien-Feng; Huang, Ya-Ling; Hwang, Deng-Fwu; Su, Yi-Cheng; Tsai, Yung-Hsiang
2012-11-15
The effects of salt concentrations (0-15.0%) and drying methods on the quality of dried milkfish were studied. The results showed that the levels of aerobic plate counts, total coliform, water activity, moisture contents, total volatile basic nitrogen (TVBN) and thiobarbituric acid (TBA) of the dried milkfish samples prepared with the same drying method decreased with increased salt concentrations. The samples prepared with the cold-air drying method had better quality in term of lower TVBN and TBA values than those of samples prepared with other drying methods. The histamine contents in all samples, except two, prepared with various salt concentrations by different drying methods were less than 1.9 mg/100 g. Two unsalted samples prepared with hot-air drying at 35 °C and sun drying methods were found to contain histamine at levels of 249.7 and 67.4 mg/100 g, respectively, which were higher than the potential hazard level of 50 mg/100 g. Copyright © 2012 Elsevier Ltd. All rights reserved.
Wang, Shujie J; Wu, Steven T; Gokemeijer, Jochem; Fura, Aberra; Krishna, Murli; Morin, Paul; Chen, Guodong; Price, Karen; Wang-Iverson, David; Olah, Timothy; Weiner, Russell; Tymiak, Adrienne; Jemal, Mohammed
2012-01-01
High-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS) and enzyme-linked immunosorbent assay (ELISA) methods were developed for the quantification of a PEGylated scaffold protein drug in monkey plasma samples. The LC-MS/MS method was based on the extraction of the therapeutic protein with a water-miscible organic solvent and the subsequent trypsin digestion of the extract followed by the detection of a surrogate peptide. The assay was linear over a range of 10-3,000 ng/mL. The ELISA method utilized a therapeutic target-binding format in which the recombinant target antigen was used to capture the drug in the sample, followed by detection with an anti-PEG monoclonal antibody. The assay range was 30-2,000 ng/mL. A correlation study between the two methods was performed by measuring the drug concentrations in plasma samples from a single-dose pharmacokinetic (PK) study in cynomolgus monkeys following a 5-mg/kg subcutaneous administration (n = 4). In the early time points of the PK profile, the drug concentrations obtained by the LC-MS/MS method agreed very well with those obtained by the ELISA method. However, at later time points, the drug concentrations measured by the LC-MS/MS method were consistently higher than those measured by the ELISA method. The PK parameters calculated based on the concentration data showed that the two methods gave equivalent peak exposure (C(max)) at 24-48 h. However, the LC-MS/MS results exhibited about 1.53-fold higher total exposure (AUC(tot)) than the ELISA results. The discrepancy between the LC-MS/MS and ELISA results was investigated by conducting immunogenicity testing, anti-drug antibody (ADA) epitope mapping, and Western blot analysis of the drug concentrations coupled with Protein G separation. The results demonstrated the presence of ADA specific to the engineered antigen-binding region of the scaffold protein drug that interfered with the ability of the drug to bind to the target antigen used in the ELISA method. In the presence of the ADAs, the ELISA method measured only the active circulating drug (target-binding), while the LC-MS/MS method measured the total circulating drug. The work presented here indicates that the bioanalysis of protein drugs may be complicated owing to the presence of drug-binding endogenous components or ADAs in the post-dose (incurred) samples. The clear understanding of the behavior of different bioanalytical techniques vis-à-vis the potentially interfering components found in incurred samples is critical in selecting bioanalytical strategies for measuring protein drugs.
Shin, Saeam; Kim, Juwon; Kim, Yoonjung; Cho, Sun-Mi; Lee, Kyung-A
2017-10-26
EGFR mutation is an emerging biomarker for treatment selection in non-small-cell lung cancer (NSCLC) patients. However, optimal mutation detection is hindered by complications associated with the biopsy procedure, tumor heterogeneity and limited sensitivity of test methodology. In this study, we evaluated the diagnostic utility of real-time PCR using malignant pleural effusion samples. A total of 77 pleural fluid samples from 77 NSCLC patients were tested using the cobas EGFR mutation test (Roche Molecular Systems). Pleural fluid was centrifuged, and separated cell pellets and supernatants were tested in parallel. Results were compared with Sanger sequencing and/or peptide nucleic acid (PNA)-mediated PCR clamping of matched tumor tissue or pleural fluid samples. All samples showed valid real-time PCR results in one or more DNA samples extracted from cell pellets and supernatants. Compared with other molecular methods, the sensitivity of real-time PCR method was 100%. Concordance rate of real-time PCR and Sanger sequencing plus PNA-mediated PCR clamping was 98.7%. We have confirmed that real-time PCR using pleural fluid had a high concordance rate compared to conventional methods, with no failed samples. Our data demonstrated that the parallel real-time PCR testing using supernatant and cell pellet could offer reliable and robust surrogate strategy when tissue is not available.
NanoSIMS analysis of Bacillus spores for forensics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, P K; Davisson, M L; Velsko, S P
2010-02-23
The threat associated with the potential use of radiological, nuclear, chemical and biological materials in terrorist acts has resulted in new fields of forensic science requiring the application of state-of-the-science analytical techniques. Since the anthrax letter attacks in the United States in the fall of 2001, there has been increased interest in physical and chemical characterization of bacterial spores. While molecular methods are powerful tools for identifying genetic differences, other methods may be able to differentiate genetically identical samples based on physical and chemical properties, as well as provide complimentary information, such as methods of production and approximate date ofmore » production. Microanalysis has the potential to contribute significantly to microbial forensics. Bacillus spores are highly structured, consisting of a core, cortex, coat, and in some species, an exosporium. This structure provides a template for constraining elemental abundance differences at the nanometer scale. The primary controls on the distribution of major elements in spores are likely structural and physiological. For example, P and Ca are known to be abundant in the spore core because that is where P-rich nucleic acids and Cadipicolinic acid are located, respectively. Trace elements are known to bind to the spore coat but the controls on these elements are less well understood. Elemental distributions and abundances may be directly related to spore production, purification and stabilization methodologies, which are of particular interest for forensic investigation. To this end, we are developing a high-resolution secondary ion mass spectrometry method using a Cameca NanoSIMS 50 to study the distribution and abundance of trace elements in bacterial spores. In this presentation we will review and compare methods for preparing and analyzing samples, as well as review results on the distribution and abundance of elements in bacterial spores. We use NanoSIMS to directly image samples as well as depth profile samples. The directly imaged samples are sectioned to present a flat surface for analysis. We use focused ion beam (FIB) milling to top-cut individual spores to create flat surfaces for NanoSIMS analysis. Depth profiling can be used on whole spores, which are consumed in the process of analysis. The two methods generate comparable results, with the expected distribution of P and Ca. Ca-compatible elements, such as Mg and Mn, are found to follow the distribution of Ca. The distribution of other elements will be discussed. We envision the first application of this methodology will be to sample matching for trace samples. Towards this end, we are generating a baseline data set for samples produced by multiple laboratories. Preliminary results suggest that this method provides significant probative value for identifying samples produced by the same method in the same laboratory, as well as coming from the same initial production run. The results of this study will be presented.« less
Alves, Cíntia; Pereira, Rui; Prieto, Lourdes; Aler, Mercedes; Amaral, Cesar R L; Arévalo, Cristina; Berardi, Gabriela; Di Rocco, Florencia; Caputo, Mariela; Carmona, Cristian Hernandez; Catelli, Laura; Costa, Heloísa Afonso; Coufalova, Pavla; Furfuro, Sandra; García, Óscar; Gaviria, Anibal; Goios, Ana; Gómez, Juan José Builes; Hernández, Alexis; Hernández, Eva Del Carmen Betancor; Miranda, Luís; Parra, David; Pedrosa, Susana; Porto, Maria João Anjos; Rebelo, Maria de Lurdes; Spirito, Matteo; Torres, María Del Carmen Villalobos; Amorim, António; Pereira, Filipe
2017-05-01
DNA is a powerful tool available for forensic investigations requiring identification of species. However, it is necessary to develop and validate methods able to produce results in degraded and or low quality DNA samples with the high standards obligatory in forensic research. Here, we describe a voluntary collaborative exercise to test the recently developed Species Identification by Insertions/Deletions (SPInDel) method. The SPInDel kit allows the identification of species by the generation of numeric profiles combining the lengths of six mitochondrial ribosomal RNA (rRNA) gene regions amplified in a single reaction followed by capillary electrophoresis. The exercise was organized during 2014 by a Working Commission of the Spanish and Portuguese-Speaking Working Group of the International Society for Forensic Genetics (GHEP-ISFG), created in 2013. The 24 participating laboratories from 10 countries were asked to identify the species in 11 DNA samples from previous GHEP-ISFG proficiency tests using a SPInDel primer mix and control samples of the 10 target species. A computer software was also provided to the participants to assist the analyses of the results. All samples were correctly identified by 22 of the 24 laboratories, including samples with low amounts of DNA (hair shafts) and mixtures of saliva and blood. Correct species identifications were obtained in 238 of the 241 (98.8%) reported SPInDel profiles. Two laboratories were responsible for the three cases of misclassifications. The SPInDel was efficient in the identification of species in mixtures considering that only a single laboratory failed to detect a mixture in one sample. This result suggests that SPInDel is a valid method for mixture analyses without the need for DNA sequencing, with the advantage of identifying more than one species in a single reaction. The low frequency of wrong (5.0%) and missing (2.1%) alleles did not interfere with the correct species identification, which demonstrated the advantage of using a method based on the analysis of multiple loci. Overall, the SPInDel method was easily implemented by laboratories using different genotyping platforms, the interpretation of results was straightforward and the SPInDel software was used without any problems. The results of this collaborative exercise indicate that the SPInDel method can be applied successfully in forensic casework investigations. Copyright © 2017 Elsevier B.V. All rights reserved.
Wohlsen, T; Bates, J; Vesey, G; Robinson, W A; Katouli, M
2006-04-01
To use BioBall cultures as a precise reference standard to evaluate methods for enumeration of Escherichia coli and other coliform bacteria in water samples. Eight methods were evaluated including membrane filtration, standard plate count (pour and spread plate methods), defined substrate technology methods (Colilert and Colisure), the most probable number method and the Petrifilm disposable plate method. Escherichia coli and Enterobacter aerogenes BioBall cultures containing 30 organisms each were used. All tests were performed using 10 replicates. The mean recovery of both bacteria varied with the different methods employed. The best and most consistent results were obtained with Petrifilm and the pour plate method. Other methods either yielded a low recovery or showed significantly high variability between replicates. The BioBall is a very suitable quality control tool for evaluating the efficiency of methods for bacterial enumeration in water samples.
Prapamontol, Tippawan; Sutan, Kunrunya; Laoyang, Sompong; Hongsibsong, Surat; Lee, Grace; Yano, Yukiko; Hunter, Ronald Elton; Ryan, P Barry; Barr, Dana Boyd; Panuwet, Parinya
2014-01-01
We report two analytical methods for the measurement of dialkylphosphate (DAP) metabolites of organophosphate pesticides in human urine. These methods were independently developed/modified and implemented in two separate laboratories and cross validated. The aim was to develop simple, cost effective, and reliable methods that could use available resources and sample matrices in Thailand and the United States. While several methods already exist, we found that direct application of these methods required modification of sample preparation and chromatographic conditions to render accurate, reliable data. The problems encountered with existing methods were attributable to urinary matrix interferences, and differences in the pH of urine samples and reagents used during the extraction and derivatization processes. Thus, we provide information on key parameters that require attention during method modification and execution that affect the ruggedness of the methods. The methods presented here employ gas chromatography (GC) coupled with either flame photometric detection (FPD) or electron impact ionization-mass spectrometry (EI-MS) with isotopic dilution quantification. The limits of detection were reported from 0.10ng/mL urine to 2.5ng/mL urine (for GC-FPD), while the limits of quantification were reported from 0.25ng/mL urine to 2.5ng/mL urine (for GC-MS), for all six common DAP metabolites (i.e., dimethylphosphate, dimethylthiophosphate, dimethyldithiophosphate, diethylphosphate, diethylthiophosphate, and diethyldithiophosphate). Each method showed a relative recovery range of 94-119% (for GC-FPD) and 92-103% (for GC-MS), and relative standard deviations (RSD) of less than 20%. Cross-validation was performed on the same set of urine samples (n=46) collected from pregnant women residing in the agricultural areas of northern Thailand. The results from split sample analysis from both laboratories agreed well for each metabolite, suggesting that each method can produce comparable data. In addition, results from analyses of specimens from the German External Quality Assessment Scheme (G-EQUAS) suggested that the GC-FPD method produced accurate results that can be reasonably compared to other studies. Copyright © 2013 Elsevier GmbH. All rights reserved.
Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli
2017-07-01
As a mineral, the traditional Chinese medicine calamine has a similar shape to many other minerals. Investigations of commercially available calamine samples have shown that there are many fake and inferior calamine goods sold on the market. The conventional identification method for calamine is complicated, therefore as a result of the large scale of calamine samples, a rapid identification method is needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples including crude products, counterfeits and processed products were collected and correctly identified using the physicochemical and powder X-ray diffraction method. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on NIR and MRCC methods was 85%; in addition, the model, which took comprehensive multiple factors into consideration, can be used to identify crude calamine products, its counterfeits and processed products. Furthermore, by in-putting the correlation coefficients of multiple references as the spectral feature data of samples into BP-ANN, a BP-ANN model of qualitative identification was established, of which the accuracy rate was increased to 95%. The MRCC method can be used as a NIR-based method in the process of BP-ANN modeling.
AMINI, Payam; AHMADINIA, Hasan; POOROLAJAL, Jalal; MOQADDASI AMIRI, Mohammad
2016-01-01
Background: We aimed to assess the high-risk group for suicide using different classification methods includinglogistic regression (LR), decision tree (DT), artificial neural network (ANN), and support vector machine (SVM). Methods: We used the dataset of a study conducted to predict risk factors of completed suicide in Hamadan Province, the west of Iran, in 2010. To evaluate the high-risk groups for suicide, LR, SVM, DT and ANN were performed. The applied methods were compared using sensitivity, specificity, positive predicted value, negative predicted value, accuracy and the area under curve. Cochran-Q test was implied to check differences in proportion among methods. To assess the association between the observed and predicted values, Ø coefficient, contingency coefficient, and Kendall tau-b were calculated. Results: Gender, age, and job were the most important risk factors for fatal suicide attempts in common for four methods. SVM method showed the highest accuracy 0.68 and 0.67 for training and testing sample, respectively. However, this method resulted in the highest specificity (0.67 for training and 0.68 for testing sample) and the highest sensitivity for training sample (0.85), but the lowest sensitivity for the testing sample (0.53). Cochran-Q test resulted in differences between proportions in different methods (P<0.001). The association of SVM predictions and observed values, Ø coefficient, contingency coefficient, and Kendall tau-b were 0.239, 0.232 and 0.239, respectively. Conclusion: SVM had the best performance to classify fatal suicide attempts comparing to DT, LR and ANN. PMID:27957463
Wells, Beth; Shaw, Hannah; Innocent, Giles; Guido, Stefano; Hotchkiss, Emily; Parigi, Maria; Opsteegh, Marieke; Green, James; Gillespie, Simon; Innes, Elisabeth A; Katzer, Frank
2015-12-15
Waterborne transmission of Toxoplasma gondii is a potential public health risk and there are currently no agreed optimised methods for the recovery, processing and detection of T. gondii oocysts in water samples. In this study modified methods of T. gondii oocyst recovery and DNA extraction were applied to 1427 samples collected from 147 public water supplies throughout Scotland. T. gondii DNA was detected, using real time PCR (qPCR) targeting the 529bp repeat element, in 8.79% of interpretable samples (124 out of 1411 samples). The samples which were positive for T. gondii DNA originated from a third of the sampled water sources. The samples which were positive by qPCR and some of the negative samples were reanalysed using ITS1 nested PCR (nPCR) and results compared. The 529bp qPCR was the more sensitive technique and a full analysis of assay performance, by Bayesian analysis using a Markov Chain Monte Carlo method, was completed which demonstrated the efficacy of this method for the detection of T. gondii in water samples. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Multilattice sampling strategies for region of interest dynamic MRI.
Rilling, Gabriel; Tao, Yuehui; Marshall, Ian; Davies, Mike E
2013-08-01
A multilattice sampling approach is proposed for dynamic MRI with Cartesian trajectories. It relies on the use of sampling patterns composed of several different lattices and exploits an image model where only some parts of the image are dynamic, whereas the rest is assumed static. Given the parameters of such an image model, the methodology followed for the design of a multilattice sampling pattern adapted to the model is described. The multi-lattice approach is compared to single-lattice sampling, as used by traditional acceleration methods such as UNFOLD (UNaliasing by Fourier-Encoding the Overlaps using the temporal Dimension) or k-t BLAST, and random sampling used by modern compressed sensing-based methods. On the considered image model, it allows more flexibility and higher accelerations than lattice sampling and better performance than random sampling. The method is illustrated on a phase-contrast carotid blood velocity mapping MR experiment. Combining the multilattice approach with the KEYHOLE technique allows up to 12× acceleration factors. Simulation and in vivo undersampling results validate the method. Compared to lattice and random sampling, multilattice sampling provides significant gains at high acceleration factors. © 2012 Wiley Periodicals, Inc.
Domain Regeneration for Cross-Database Micro-Expression Recognition
NASA Astrophysics Data System (ADS)
Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying
2018-05-01
In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples are from two different micro-expression databases. Under this setting, the training and testing samples would have different feature distributions and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called Target Sample Re-Generator (TSRG) in this paper. By using TSRG, we are able to re-generate the samples from target micro-expression database and the re-generated target samples would share same or similar feature distributions with the original source samples. For this reason, we can then use the classifier learned based on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments designed based on SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.
Pikkemaat, M G; Rapallini, M L B A; Karp, M T; Elferink, J W A
2010-08-01
Tetracyclines are extensively used in veterinary medicine. For the detection of tetracycline residues in animal products, a broad array of methods is available. Luminescent bacterial biosensors represent an attractive inexpensive, simple and fast method for screening large numbers of samples. A previously developed cell-biosensor method was subjected to an evaluation study using over 300 routine poultry samples and the results were compared with a microbial inhibition test. The cell-biosensor assay yielded many more suspect samples, 10.2% versus 2% with the inhibition test, which all could be confirmed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Only one sample contained a concentration above the maximum residue limit (MRL) of 100 microg kg(-1), while residue levels in most of the suspect samples were very low (<10 microg kg(-1)). The method appeared to be specific and robust. Using an experimental set-up comprising the analysis of a series of three sample dilutions allowed an appropriate cut-off for confirmatory analysis, limiting the number of samples and requiring further analysis to a minimum.
WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING
Saegusa, Takumi; Wellner, Jon A.
2013-01-01
We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools are developed including a Glivenko–Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where an estimator of a nuisance parameter is estimable either at regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559
This paper describes the development of a 96-microwell high sample capacity ELISA method for measuring 2,4-D in urine; the analysis of 2,4-D in real-world urine samples by both ELISA and GC/MS methods; and compares the ELISA and GC/MS results in several key areas: accuracy, preci...
Properties of the carbon-palladium nanocomposites studied by Raman spectroscopy method
NASA Astrophysics Data System (ADS)
Belka, Radosław; Suchańska, Małgorzata
2013-10-01
In this paper, the results for thin carbon-palladium (C-Pd) nanocomposites obtained by PVD (Physical Vapour Deposition) and PVD/CVD (Chemical Vapour Deposition) method, carried out using Raman spectroscopy method are presented. Studies reveal the dominance of fullerene-like structure for PVD samples and graphite-like structures for CVD samples. The type of substrate and metal content have great impact on spectra shapes.
Rath, S; Panda, M; Sahu, M C; Padhy, R N
2015-09-01
Quantitatively, conventional methods of diagnosis of tinea capitis or paediatric ringworm, microscopic and culture tests were evaluated with Bayes rule. This analysis would help in quantifying the pervasive errors in each diagnostic method, particularly the microscopic method, as a long-term treatment would be involved to eradicate the infection by the use of a particular antifungal chemotherapy. Secondly, the analysis of clinical data would help in obtaining digitally the fallible standard of the microscopic test method, as the culture test method is taken as gold standard. Test results of 51 paediatric patients were of 4 categories: 21 samples were true positive (both tests positive), and 13 were true negative; the rest samples comprised both 14 false positive (microscopic test positivity with culture test negativity) and 3 false negative (microscopic test negativity with culture test positivity) samples. The prevalence of tinea infection was 47.01% in the population of 51 children. The microscopic test of a sample was efficient by 87.5%, in arriving at a positive result on diagnosis, when its culture test was positive; and, this test was efficient by 76.4%, in arriving at a negative result, when its culture test was negative. But, the post-test probability value of a sample with both microscopic and culture tests would be correct in distinguishing a sample from a sick or a healthy child with a chance of 71.5%. However, since the sensitivity of the analysis is 87.5%, the microscopic test positivity would be easier to detect in the presence of infection. In conclusion, it could be stated that Trychophyton rubrum was the most prevalent species; sensitivity and specificity of treating the infection, by antifungal therapy before ascertaining by the culture method remain as 0.8751 and 0.7642, respectively. A correct/coveted diagnostic method of fungal infection would be could be achieved by modern molecular methods (matrix-assisted laser desorption ionisation-time of flight mass spectrometry or fluorescence in situ hybridization or enzyme-linked immunosorbent assay [ELISA] or restriction fragment length polymorphism or DNA/RNA probes of known fungal taxa) in advanced laboratories. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
O'Sullivan, F; Kirrane, J; Muzi, M; O'Sullivan, J N; Spence, A M; Mankoff, D A; Krohn, K A
2010-03-01
Kinetic quantitation of dynamic positron emission tomography (PET) studies via compartmental modeling usually requires the time-course of the radio-tracer concentration in the arterial blood as an arterial input function (AIF). For human and animal imaging applications, significant practical difficulties are associated with direct arterial sampling and as a result there is substantial interest in alternative methods that require no blood sampling at the time of the study. A fixed population template input function derived from prior experience with directly sampled arterial curves is one possibility. Image-based extraction, including requisite adjustment for spillover and recovery, is another approach. The present work considers a hybrid statistical approach based on a penalty formulation in which the information derived from a priori studies is combined in a Bayesian manner with information contained in the sampled image data in order to obtain an input function estimate. The absolute scaling of the input is achieved by an empirical calibration equation involving the injected dose together with the subject's weight, height and gender. The technique is illustrated in the context of (18)F -Fluorodeoxyglucose (FDG) PET studies in humans. A collection of 79 arterially sampled FDG blood curves are used as a basis for a priori characterization of input function variability, including scaling characteristics. Data from a series of 12 dynamic cerebral FDG PET studies in normal subjects are used to evaluate the performance of the penalty-based AIF estimation technique. The focus of evaluations is on quantitation of FDG kinetics over a set of 10 regional brain structures. As well as the new method, a fixed population template AIF and a direct AIF estimate based on segmentation are also considered. Kinetics analyses resulting from these three AIFs are compared with those resulting from radially sampled AIFs. The proposed penalty-based AIF extraction method is found to achieve significant improvements over the fixed template and the segmentation methods. As well as achieving acceptable kinetic parameter accuracy, the quality of fit of the region of interest (ROI) time-course data based on the extracted AIF, matches results based on arterially sampled AIFs. In comparison, significant deviation in the estimation of FDG flux and degradation in ROI data fit are found with the template and segmentation methods. The proposed AIF extraction method is recommended for practical use.
Zhang, Lida; Sun, Da-Wen; Zhang, Zhihang
2017-03-24
Moisture sorption isotherm is commonly determined by saturated salt slurry method, which has defects of long time cost, cumbersome labor, and microbial deterioration of samples. Thus, a novel method, a w measurement (AWM) method, has been developed to overcome these drawbacks. Fundamentals and applications of this fast method have been introduced with respects to its typical operational steps, a variety of equipment set-ups and applied samples. The resultant rapidness and reliability have been evaluated by comparing with conventional methods. This review also discussed factors impairing measurement precision and accuracy, including inappropriate choice of predryingwetting techniques and unachieved moisture uniformity in samples due to inadequate time. This analysis and corresponding suggestions can facilitate improved AWM method with more satisfying accuracy and time cost.
Comprehensive comparative analysis of 5'-end RNA-sequencing methods.
Adiconis, Xian; Haber, Adam L; Simmons, Sean K; Levy Moonshine, Ami; Ji, Zhe; Busby, Michele A; Shi, Xi; Jacques, Justin; Lancaster, Madeline A; Pan, Jen Q; Regev, Aviv; Levin, Joshua Z
2018-06-04
Specialized RNA-seq methods are required to identify the 5' ends of transcripts, which are critical for studies of gene regulation, but these methods have not been systematically benchmarked. We directly compared six such methods, including the performance of five methods on a single human cellular RNA sample and a new spike-in RNA assay that helps circumvent challenges resulting from uncertainties in annotation and RNA processing. We found that the 'cap analysis of gene expression' (CAGE) method performed best for mRNA and that most of its unannotated peaks were supported by evidence from other genomic methods. We applied CAGE to eight brain-related samples and determined sample-specific transcription start site (TSS) usage, as well as a transcriptome-wide shift in TSS usage between fetal and adult brain.
Direct Compositional Characterization of (U,Th)O2 Powders, Microspheres, and Pellets Using TXRF.
Dhara, Sangita; Prabhat, Parimal; Misra, N L
2015-10-20
A total reflection X-ray fluorescence (TXRF) analysis method for direct compositional characterization of sintered and green (U,Th)O2 samples in different forms (e.g., pellets, powders, and microspheres) without sample dissolution has been developed for the first time. The methodology involves transfer of only a few nanograms of the sample on the TXRF sample support by gently rubbing the samples on supports or taking their tiny uniform slurry in collodion on the sample support, drying them to make thin film, and measuring the TXRF spectra of the specimens thus prepared. This approach minimizes the matrix effects. Uranium determinations from the TXRF spectra of such specimens were made with respect to thorium, considering it as an internal standard. Samples having uranium atom percent (at%) from 0 to 100 in (U,Th)O2 were analyzed for uranium in comparison to thorium. The results showed an average precision of 2.6% (RSD, 2σ, n = 8). The TXRF-determined results deviated from expected values within 5%. The TXRF results were compared with those of biamperometry with good agreement. The lattice parameters of the solid solutions were calculated using their XRD patterns. A good correlation between lattice parameters and TXRF-determined U at% and between TXRF-determined U at% and expected U at%, calculated on the basis of preparation of (U,Th)O2 solid solutions, was obtained. The developed method is capable of analyzing (U,Th)O2 samples directly with almost negligible sample preparation and is well suited for radioactive samples. The present study suggests that this method can be extended for the determination of U,Th and Pu in other nuclear fuel materials (e.g., nitrides, carbides, etc.) in the form of pellets, powders, and microspheres after suitable modifications in sample handling procedure.
Methods for sample size determination in cluster randomized trials
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-01-01
Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
Mavridou, A; Smeti, E; Mandilara, G; Mandilara, G; Boufa, P; Vagiona-Arvanitidou, M; Vantarakis, A; Vassilandonopoulou, G; Pappa, O; Roussia, V; Tzouanopoulos, A; Livadara, M; Aisopou, I; Maraka, V; Nikolaou, E; Mandilara, G
2010-01-01
In this study ten laboratories in Greece compared the performance of reference method TTC Tergitol 7 Agar (with the additional test of beta-glucuronidase production) with five alternative methods, to detect E. coli in water, in line with European Water Directive recommendations. The samples were prepared by spiking drinking water with sewage effluent following a standard protocol. Chlorinated and non-chlorinated samples were used. The statistical analysis was based on the mean relative difference of confirmed counts and was performed in line with ISO 17994. The results showed that in total, three of the alternative methods (Chromocult Coliform agar, Membrane Lauryl Sulfate agar and Trypton Bilex-glucuronidase medium) were not different from TTC Tergitol 7 agar (TTC Tergitol 7 agar vs Chromocult Coliform agar, 294 samples, mean RD% 5.55; vs MLSA, 302 samples, mean RD% 1; vs TBX, 297 samples, mean RD% -2.78). The other two alternative methods (Membrane Faecal coliform medium and Colilert 18/ Quantitray) gave significantly higher counts than TTC Tergitol 7 agar (TTC Tergitol 7 agar vs MFc, 303 samples, mean RD% 8.81; vs Colilert-18/Quantitray, 76 samples, mean RD% 18.91). In other words, the alternative methods generated performance that was as reliable as, or even better than, the reference method. This study will help laboratories in Greece overcome culture and counting problems deriving from the EU reference method for E. coli counts in water samples.
Tian, Peng; Yang, David; Mandrell, Robert
2011-06-30
Human norovirus (NoV) outbreaks are major food safety concerns. The virus has to be concentrated from food samples in order to be detected. PEG precipitation is the most common method to recover the virus. Recently, histo-blood group antigens (HBGA) have been recognized as receptors for human NoV, and have been utilized as an alternative method to concentrate human NoV for samples up to 40 mL in volume. However, to wash off the virus from contaminated fresh food samples, at least 250 mL of wash volume is required. Recirculating affinity magnetic separation system (RCAMS) has been tried by others to concentrate human NoV from large-volume samples and failed to yield consistent results with the standard procedure of 30 min of recirculation at the default flow rate. Our work here demonstrates that proper recirculation time and flow rate are key factors for success in using the RCAMS. The bead recovery rate was increased from 28% to 47%, 67% and 90% when recirculation times were extended from 30 min to 60 min, 120 min and 180 min, respectively. The kinetics study suggests that at least 120 min recirculation is required to obtain a good recovery of NoV. In addition, different binding and elution conditions were compared for releasing NoV from inoculated lettuce. Phosphate-buffered saline (PBS) and water results in similar efficacy for virus release, but the released virus does not bind to RCAMS effectively unless pH was adjusted to acidic. Either citrate-buffered saline (CBS) wash, or water wash followed by CBS adjustment, resulted in an enhanced recovery of virus. We also demonstrated that the standard curve generated from viral RNA extracted from serially-diluted virus samples is more accurate for quantitative analysis than standard curves generated from serially-diluted plasmid DNA or transcribed-RNA templates, both of which tend to overestimate the concentration power. The efficacy of recovery of NoV from produce using RCAMS was directly compared with that of the PEG method in NoV inoculated lettuce. 40, 4, 0.4, and 0.04 RTU can be detected by both methods. At 0.004 RTU, NoV was detectable in all three samples concentrated by the RCAMS method, while none could be detected by the PEG precipitation method. RCAMS is a simple and rapid method that is more sensitive than conventional methods for recovery of NoV from food samples with a large sample size. In addition, the RTU value detected through RCAMS-processed samples is more biologically relevant. Published by Elsevier B.V.
Shear Strength of Remoulding Clay Samples Using Different Methods of Moulding
NASA Astrophysics Data System (ADS)
Norhaliza, W.; Ismail, B.; Azhar, A. T. S.; Nurul, N. J.
2016-07-01
Shear strength for clay soil was required to determine the soil stability. Clay was known as a soil with complex natural formations and very difficult to obtain undisturbed samples at the site. The aim of this paper was to determine the unconfined shear strength of remoulded clay on different methods in moulding samples which were proctor compaction, hand operated soil compacter and miniature mould methods. All the samples were remoulded with the same optimum moisture content (OMC) and density that were 18% and 1880 kg/m3 respectively. The unconfined shear strength results of remoulding clay soils for proctor compaction method was 289.56kPa with the strain 4.8%, hand operated method was 261.66kPa with the strain 4.4% and miniature mould method was 247.52kPa with the strain 3.9%. Based on the proctor compaction method, the reduction percentage of unconfined shear strength of remoulded clay soil of hand operated method was 9.66%, and for miniature mould method was 14.52%. Thus, because there was no significant difference of reduction percentage of unconfined shear strength between three different methods, so it can be concluded that remoulding clay by hand operated method and miniature mould method were accepted and suggested to perform remoulding clay samples by other future researcher. However for comparison, the hand operated method was more suitable to form remoulded clay sample in term of easiness, saving time and less energy for unconfined shear strength determination purposes.
McHale, Michael R.; McChesney, Dennis
2007-01-01
In 2003, a study was conducted to evaluate the accuracy and precision of 10 laboratories that analyze water-quality samples for phosphorus concentrations in the Catskill Mountain region of New York State. Many environmental studies in this region rely on data from these different laboratories for water-quality analyses, and the data may be used in watershed modeling and management decisions. Therefore, it is important to determine whether the data reported by these laboratories are of comparable accuracy and precision. Each laboratory was sent 12 samples for triplicate analysis for total phosphorus, total dissolved phosphorus, and soluble reactive phosphorus. Eight of these laboratories reported results that met comparability criteria for all samples; the remaining two laboratories met comparability criteria for only about half of the analyses. Neither the analytical method used nor the sample concentration ranges appeared to affect the comparability of results. The laboratories whose results were comparable gave consistently comparable results throughout the concentration range analyzed, and the differences among methods did not diminish comparability. All laboratories had high data precision as indicated by sample triplicate results. In addition, the laboratories consistently reported total phosphorus values greater than total dissolved phosphorus values, and total dissolved phosphorus values greater than soluble reactive phosphorus values, as would be expected. The results of this study emphasize the importance of regular laboratory participation in sample-exchange programs.
Point-of-Care Quantitative Measure of Glucose-6-Phosphate Dehydrogenase Enzyme Deficiency.
Bhutani, Vinod K; Kaplan, Michael; Glader, Bertil; Cotten, Michael; Kleinert, Jairus; Pamula, Vamsee
2015-11-01
Widespread newborn screening on a point-of-care basis could prevent bilirubin neurotoxicity in newborns with glucose-6-phosphate dehydrogenase (G6PD) deficiency. We evaluated a quantitative G6PD assay on a digital microfluidic platform by comparing its performance with standard clinical methods. G6PD activity was measured quantitatively by using digital microfluidic fluorescence and the gold standard fluorescence biochemical test on a convenience sample of 98 discarded blood samples. Twenty-four samples were designated as G6PD deficient. Mean ± SD G6PD activity for normal samples using the digital microfluidic method and the standard method, respectively, was 9.7 ± 2.8 and 11.1 ± 3.0 U/g hemoglobin (Hb), respectively; for G6PD-deficient samples, it was 0.8 ± 0.7 and 1.4 ± 0.9 U/g Hb. Bland-Altman analysis determined a mean difference of -0.96 ± 1.8 U/g Hb between the digital microfluidic fluorescence results and the standard biochemical test results. The lower and upper limits for the digital microfluidic platform were 4.5 to 19.5 U/g Hb for normal samples and 0.2 to 3.7 U/g Hb for G6PD-deficient samples. The lower and upper limits for the Stanford method were 5.5 to 20.7 U/g Hb for normal samples and 0.1 to 2.8 U/g Hb for G6PD-deficient samples. The measured activity discriminated between G6PD-deficient samples and normal samples with no overlap. Pending further validation, a digital microfluidics platform could be an accurate point-of-care screening tool for rapid newborn G6PD screening. Copyright © 2015 by the American Academy of Pediatrics.
Quality Control for Ambient Sampling of PCDD/PCDF from Open Combustion Sources
Both long duration (> 6 h) and high temperature (up to 139o C) sampling efforts were conducted using ambient air sampling methods to determine if either high volume throughput or higher than ambient sampling temperatures resulted in loss of target polychlorinated dibenzodioxins/d...
Analysis of spreadable cheese by Raman spectroscopy and chemometric tools.
Oliveira, Kamila de Sá; Callegaro, Layce de Souza; Stephani, Rodrigo; Almeida, Mariana Ramos; de Oliveira, Luiz Fernando Cappa
2016-03-01
In this work, FT-Raman spectroscopy was explored to evaluate spreadable cheese samples. A partial least squares discriminant analysis was employed to identify the spreadable cheese samples containing starch. To build the models, two types of samples were used: commercial samples and samples manufactured in local industries. The method of supervised classification PLS-DA was employed to classify the samples as adulterated or without starch. Multivariate regression was performed using the partial least squares method to quantify the starch in the spreadable cheese. The limit of detection obtained for the model was 0.34% (w/w) and the limit of quantification was 1.14% (w/w). The reliability of the models was evaluated by determining the confidence interval, which was calculated using the bootstrap re-sampling technique. The results show that the classification models can be used to complement classical analysis and as screening methods. Copyright © 2015 Elsevier Ltd. All rights reserved.
Drinking water test methods in crisis-afflicted areas: comparison of methods under field conditions.
Merle, Roswitha; Bleul, Ingo; Schulenburg, Jörg; Kreienbrock, Lothar; Klein, Günter
2011-11-01
To simplify the testing of drinking water in crisis-afflicted areas (as in Kosovo in 2007), rapid test methods were compared with the standard test. For Escherichia coli and coliform pathogens, rapid tests were made available: Colilert(®)-18, P/A test with 4-methylumbelliferyl-β-D-glucoronid, and m-Endo Broth. Biochemical differentiation was carried out by Enterotube™ II. Enterococci were determined following the standard ISO test and by means of Enterolert™. Four hundred ninety-nine water samples were tested for E. coli and coliforms using four methods. Following the standard method, 20.8% (n=104) of the samples contained E. coli, whereas the rapid tests detected between 19.6% (m-Endo Broth, 92.0% concordance) and 20.0% (concordance: 93.6% Colilert-18 and 94.8% P/A-test) positive samples. Regarding coliforms, the percentage of concordant results ranged from 98.4% (P/A-test) to 99.0% (Colilert-18). Colilert-18 and m-Endo Broth detected even more positive samples than the standard method did. Enterococci were detected in 93 of 573 samples by the standard method, but in 92 samples by Enterolert (concordance: 99.5%). Considering the high-quality equipment and time requirements of the standard method, the use of rapid tests in crisis-afflicted areas is sufficiently reliable.
Laserson, K F; Petralanda, I; Hamlin, D M; Almera, R; Fuentes, M; Carrasquel, A; Barker, R H
1994-02-01
We have examined the reproducibility, sensitivity, and specificity of detecting Plasmodium falciparum using the polymerase chain reaction (PCR) and the species-specific probe pPF14 under field conditions in the Venezuelan Amazon. Up to eight samples were field collected from each of 48 consenting Amerindians presenting with symptoms of malaria. Sample processing and analysis was performed at the Centro Amazonico para la Investigacion y Control de Enfermedades Tropicales Simon Bolivar. A total of 229 samples from 48 patients were analyzed by PCR methods using four different P. falciparum-specific probes. One P. vivax-specific probe and by conventional microscopy. Samples in which results from PCR and microscopy differed were reanalyzed at a higher sensitivity by microscopy. Results suggest that microscopy-negative, PCR-positive samples are true positives, and that microscopy-positive and PCR-negative samples are true negatives. The sensitivity of the DNA probe/PCR method was 78% and its specificity was 97%. The positive predictive value of the PCR method was 88%, and the negative predictive value was 95%. Through the analysis of multiple blood samples from each individual, the DNA probe/PCR methodology was found to have an inherent reproducibility that was highly statistically significant.
Wilson, Jordan L; Limmer, Matthew A; Samaranayake, V A; Schumacher, John G; Burken, Joel G
2017-09-19
Vapor intrusion (VI) by volatile organic compounds (VOCs) in the built environment presents a threat to human health. Traditional VI assessments are often time-, cost-, and labor-intensive; whereas traditional subsurface methods sample a relatively small volume in the subsurface and are difficult to collect within and near structures. Trees could provide a similar subsurface sample where roots act as the "sampler' and are already onsite. Regression models were developed to assess the relation between PCE concentrations in over 500 tree-core samples with PCE concentrations in over 50 groundwater and 1000 soil samples collected from a tetrachloroethylene- (PCE-) contaminated Superfund site and analyzed using gas chromatography. Results indicate that in planta concentrations are significantly and positively related to PCE concentrations in groundwater samples collected at depths less than 20 m (adjusted R 2 values greater than 0.80) and in soil samples (adjusted R 2 values greater than 0.90). Results indicate that a 30 cm diameter tree characterizes soil concentrations at depths less than 6 m over an area of 700-1600 m 2 , the volume of a typical basement. These findings indicate that tree sampling may be an appropriate method to detect contamination at shallow depths at sites with VI.
Wilson, Jordan L.; Limmer, Matthew A.; Samaranayake, V. A.; Schumacher, John G.; Burken, Joel G.
2017-01-01
Vapor intrusion (VI) by volatile organic compounds (VOCs) in the built environment presents a threat to human health. Traditional VI assessments are often time-, cost-, and labor-intensive; whereas traditional subsurface methods sample a relatively small volume in the subsurface and are difficult to collect within and near structures. Trees could provide a similar subsurface sample where roots act as the “sampler’ and are already onsite. Regression models were developed to assess the relation between PCE concentrations in over 500 tree-core samples with PCE concentrations in over 50 groundwater and 1000 soil samples collected from a tetrachloroethylene- (PCE-) contaminated Superfund site and analyzed using gas chromatography. Results indicate that in planta concentrations are significantly and positively related to PCE concentrations in groundwater samples collected at depths less than 20 m (adjusted R2 values greater than 0.80) and in soil samples (adjusted R2 values greater than 0.90). Results indicate that a 30 cm diameter tree characterizes soil concentrations at depths less than 6 m over an area of 700–1600 m2, the volume of a typical basement. These findings indicate that tree sampling may be an appropriate method to detect contamination at shallow depths at sites with VI.