A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers
Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...
2018-03-28
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods, and robust orthogonal partial least squares discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved.
A ricin forensic profiling approach based on a complex set of biomarkers.
Fredriksson, Sten-Åke; Wunschel, David S; Lindström, Susanne Wiklund; Nilsson, Calle; Wahl, Karen; Åstot, Crister
2018-08-15
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods and robust orthogonal partial least squares discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved. Copyright © 2018 Elsevier B.V. All rights reserved.
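The classification workflow described above can be illustrated with a small chemometric sketch. Scikit-learn does not provide OPLS-DA, so the example below uses ordinary PLS regression on one-hot class labels (PLS-DA) as a stand-in; the biomarker features, class structure and preparation-method labels PM1-PM4 are entirely synthetic and only mimic the calibration/test-set split.

```python
# Minimal PLS-DA sketch (stand-in for OPLS-DA, which scikit-learn does not provide).
# X: biomarker feature matrix (rows = samples, columns = carbohydrate/fatty-acid/
# protein markers); y: preparation-method labels PM1-PM4. All data here are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_features = 20, 30
X, y = [], []
for pm in range(4):                      # four hypothetical preparation methods
    centre = rng.normal(pm, 1.0, n_features)
    X.append(centre + rng.normal(0, 0.5, (n_per_class, n_features)))
    y.extend([pm] * n_per_class)
X, y = np.vstack(X), np.array(y)

X_cal, X_test, y_cal, y_test = train_test_split(X, y, test_size=0.25, random_state=1)
Y_cal = np.eye(4)[y_cal]                 # one-hot class membership for PLS-DA
pls = PLSRegression(n_components=3).fit(X_cal, Y_cal)
y_pred = pls.predict(X_test).argmax(axis=1)   # assign each test sample to the class with the highest score
print("test-set accuracy:", (y_pred == y_test).mean())
```

In the study itself a decision tree was combined with two OPLS-DA models; the sketch above shows only a single multi-class discriminant model.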
Storm-water data for Bear Creek basin, Jackson County, Oregon 1977-78
Wittenberg, Loren A.
1978-01-01
Storm-water-quality samples were collected from four subbasins in the Bear Creek basin in southern Oregon. These subbasins vary in drainage size, channel slope, effective impervious area, and land use. Automatic water-quality samplers and precipitation and discharge gages were set up in each of the four subbasins. During the period October 1977 through May 1978, 19 sets of samples, including two base-flow samples, were collected. Fecal coliform bacteria colonies per 100-milliliter sample ranged from less than 1,000 to more than 1,000,000. Suspended-sediment concentrations ranged from less than 1 to more than 2,300 milligrams per liter. One subbasin consisting of downtown businesses and streets with heavy vehicular traffic was monitored for lead. Total lead values ranging from 100 to 1,900 micrograms per liter were measured during one storm event.
Wilson, Nick; Edwards, Richard; Parry, Rhys
2011-03-04
To assess the need for additional smokefree settings, by measuring secondhand smoke (SHS) in a range of public places in an urban setting. Measurements were made in Wellington City during the 6-year period after the implementation of legislation that made indoor areas of restaurants and bars/pubs smokefree in December 2004, and up to 20 years after the 1990 legislation making most indoor workplaces smokefree. Fine particulate levels (PM2.5) were measured with a portable real-time airborne particle monitor. We collated data from our previously published work involving random sampling, purposeful sampling and convenience sampling of a wide range of settings (in 2006) and from additional sampling of selected indoor and outdoor areas (in 2007-2008 and 2010). The "outdoor" smoking areas of hospitality venues had the highest particulate levels, with a mean value of 72 mcg/m3 (range of maximum values 51-284 mcg/m3) (n=20 sampling periods). These levels are likely to create health hazards for some workers and patrons (i.e., when considered in relation to the WHO air quality guidelines). National survey data also indicate that these venues are the ones where SHS exposure is most frequently reported by non-smokers. Areas inside bars that were adjacent to "outdoor" smoking areas also had high levels, with a mean of 54 mcg/m3 (range of maximum values: 18-239 mcg/m3, for n=13 measurements). In all other settings mean levels were lower (means: 2-22 mcg/m3). These other settings included inside traditional style pubs/sports bars (n=10), bars (n=18), restaurants (n=9), cafes (n=5), inside public buildings (n=15), inside transportation settings (n=15), and various outdoor street/park settings (n=22). During the data collection in all settings made smokefree by law, there was only one occasion of a person observed smoking. The results suggest that compliance in pubs/bars and restaurants has remained extremely high in this city in the nearly six years since implementation of the upgraded smokefree legislation. The results also highlight additional potential health gain from extending smokefree policies to reduce SHS exposure in the "outdoor" smoking areas of hospitality venues and to reduce SHS drift from these areas to indoor areas.
Nuopponen, Mari H; Birch, Gillian M; Sykes, Rob J; Lee, Steve J; Stewart, Derek
2006-01-11
Sitka spruce (Picea sitchensis) samples (491) from 50 different clones as well as 24 different tropical hardwoods and 20 Scots pine (Pinus sylvestris) samples were used to construct diffuse reflectance mid-infrared Fourier transform (DRIFT-MIR) based partial least squares (PLS) calibrations on lignin, cellulose, and wood resin contents and densities. Calibrations for density, lignin, and cellulose were established for all wood species combined into one data set as well as for the separate Sitka spruce data set. Relationships between wood resin and MIR data were constructed for the Sitka spruce data set as well as the combined Scots pine and Sitka spruce data sets. Calibrations containing only five wavenumbers instead of spectral ranges 4000-2800 and 1800-700 cm(-1) were also established. In addition, chemical factors contributing to wood density were studied. Chemical composition and density assessed from DRIFT-MIR calibrations had R2 and Q2 values in the ranges of 0.6-0.9 and 0.6-0.8, respectively. The PLS models gave root mean square error of prediction (RMSEP) values of 1.6-1.9, 2.8-3.7, and 0.4 for lignin, cellulose, and wood resin contents, respectively. Density test sets had RMSEP values ranging from 50 to 56. A reduced number of wavenumbers can thus be utilized to predict the chemical composition and density of wood, which should allow measurements of these properties using a hand-held device. MIR spectral data indicated that low-density samples had somewhat higher lignin contents than high-density samples. Correspondingly, high-density samples contained slightly more polysaccharides than low-density samples. This observation was consistent with the wet chemical data.
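As a rough illustration of this kind of spectroscopic calibration, the sketch below fits a PLS regression to synthetic "spectra" and reports RMSEP on a held-out test set. It is not a reconstruction of the authors' DRIFT-MIR models; the wavenumber grid, component count and lignin values are invented.

```python
# Sketch of a PLS calibration for a spectral property (e.g. lignin content) and its
# RMSEP on an independent test set. Spectra and reference values are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, n_wavenumbers = 200, 400
lignin = rng.uniform(20, 35, n_samples)                 # hypothetical reference values (%)
basis = rng.normal(size=(5, n_wavenumbers))             # five latent spectral components
scores = np.column_stack([lignin] + [rng.normal(size=n_samples) for _ in range(4)])
spectra = scores @ basis + rng.normal(0, 0.5, (n_samples, n_wavenumbers))

X_cal, X_test, y_cal, y_test = train_test_split(spectra, lignin, test_size=0.3, random_state=0)
model = PLSRegression(n_components=5).fit(X_cal, y_cal)
y_hat = model.predict(X_test).ravel()
rmsep = np.sqrt(np.mean((y_hat - y_test) ** 2))         # root mean square error of prediction
print(f"RMSEP = {rmsep:.2f} (same units as the reference lignin values)")
```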
2169 steel waveform experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furnish, Michael David; Alexander, C. Scott; Reinhart, William Dodd
2012-11-01
In support of LLNL efforts to develop multiscale models of a variety of materials, we have performed a set of eight gas gun impact experiments on 2169 steel (21% Cr, 6% Ni, 9% Mn, balance predominantly Fe). These experiments provided carefully controlled shock, reshock and release velocimetry data, with initial shock stresses ranging from 10 to 50 GPa (particle velocities from 0.25 to 1.05 km/s). Both windowed and free-surface measurements were included in this experiment set to increase the utility of the data set, as were samples ranging in thickness from 1 to 5 mm. Target physical phenomena included the elastic/plastic transition (Hugoniot elastic limit), the Hugoniot, any phase transition phenomena, and the release path (windowed and free-surface). The Hugoniot was found to be nearly linear, with no indications of the Fe phase transition. Releases were non-hysteretic, and relatively consistent between 3- and 5-mm-thick samples (the 3 mm samples giving slightly lower wavespeeds on release). Reshock tests with explosively welded impactors produced clean results; those with glue bonds showed transient releases prior to the arrival of the reshock, reducing their usefulness for deriving strength information. The free-surface samples, which were steps on a single piece of steel, showed lower wavespeeds for thin (1 mm) samples than for thicker (2 or 4 mm) samples. A configuration used for the last three shots allows release information to be determined from these free surface samples. The sample strength appears to increase with stress from ~1 GPa to ~3 GPa over this range, consistent with other recent work but about 40% above the Steinberg model.
Rapid DNA analysis for automated processing and interpretation of low DNA content samples.
Turingan, Rosemary S; Vasantgadkar, Sameer; Palombo, Luke; Hogan, Catherine; Jiang, Hua; Tan, Eugene; Selden, Richard F
2016-01-01
Casework samples with low DNA content subjected to short tandem repeat (STR) analysis include those resulting from the transfer of epithelial cells from the skin to an object (e.g., cells on a water bottle, or brim of a cap), blood spatter stains, and small bone and tissue fragments. Low DNA content (LDC) samples are important in a wide range of settings, including disaster response teams to assist in victim identification and family reunification, military operations to identify friend or foe, criminal forensics to identify suspects and exonerate the innocent, and medical examiner and coroner offices to identify missing persons. Processing LDC samples requires experienced laboratory personnel, isolated workstations, and sophisticated equipment, requires transport time, and involves complex procedures. We present a rapid DNA analysis system designed specifically to generate STR profiles from LDC samples in field-forward settings by non-technical operators. By performing STR analysis in the field, close to the site of collection, rapid DNA analysis has the potential to increase throughput and to provide actionable information in real time. A Low DNA Content BioChipSet (LDC BCS) was developed and manufactured by injection molding. It was designed to function in the fully integrated Accelerated Nuclear DNA Equipment (ANDE) instrument previously designed for analysis of buccal swab and other high DNA content samples (Investigative Genet. 4(1):1-15, 2013). The LDC BCS performs efficient DNA purification followed by microfluidic ultrafiltration of the purified DNA, maximizing the quantity of DNA available for subsequent amplification and electrophoretic separation and detection of amplified fragments. The system demonstrates accuracy, precision, resolution, signal strength, and peak height ratios appropriate for casework analysis. The LDC rapid DNA analysis system is effective for the generation of STR profiles from a wide range of sample types. The technology broadens the range of sample types that can be processed and minimizes the time between sample collection, sample processing and analysis, and generation of actionable intelligence. The fully integrated Expert System is capable of interpreting a wide range of sample types and input DNA quantities, allowing samples to be processed and interpreted without a technical operator.
Fisher, Danielle S; Beyer, Chad; van Schalkwyk, Gerrit; Seedat, Soraya; Flanagan, Robert J
2017-04-01
There is a poor correlation between total concentrations of proton-accepting compounds (most basic drugs) in unstimulated oral fluid and in plasma. The aim of this study was to compare clozapine, norclozapine, and amisulpride concentrations in plasma and in oral fluid collected using commercially available collection devices [Thermo Fisher Scientific Oral-Eze and Greiner Bio-One (GBO)]. Oral-Eze and GBO samples and plasma were collected in that order from patients prescribed clozapine. Analyte concentrations were measured by liquid chromatography-tandem mass spectrometry. There were 112 participants [96 men, aged (median, range) 47 (21-65) years and 16 women, aged 44 (21-65) years]: 74 participants provided 2 sets of samples and 7 provided 3 sets (overall 2 GBO samples not collected). Twenty-three patients were co-prescribed amisulpride, of whom 17 provided 2 sets of samples and 1 provided 3 sets. The median (range) oral fluid content within the GBO samples was 52% (13%-86%). Nonadherence to clozapine was identified in all 3 samples in one instance. After correction for oral fluid content, analyte concentrations in the GBO and Oral-Eze samples were poorly correlated with plasma clozapine and norclozapine (R = 0.57-0.63) and plasma amisulpride (R = 0.65-0.72). Analyte concentrations in the 2 sets of oral fluid samples were likewise poorly correlated (R = 0.68-0.84). Mean (SD) plasma clozapine and norclozapine were 0.60 (0.46) and 0.25 (0.21) mg/L, respectively. Mean clozapine and norclozapine concentrations in the 2 sets of oral fluid samples were similar to those in plasma (0.9-1.8 times higher), that is, approximately 2- to 3-fold higher than those in unstimulated oral fluid. The mean (±SD) amisulpride concentrations (microgram per liter) in plasma (446 ± 297) and in the Oral-Eze samples (501 ± 461) were comparable and much higher than those in the GBO samples (233 ± 318). Oral fluid collected using either the GBO system or the Oral-Eze system cannot be used for quantitative clozapine and/or amisulpride therapeutic drug monitoring.
Lanvers-Kaminsky, Claudia; Rüffer, Andrea; Würthwein, Gudrun; Gerss, Joachim; Zucchetti, Massimo; Ballerini, Andrea; Attarbaschi, Andishe; Smisek, Petr; Nath, Christa; Lee, Samiuela; Elitzur, Sara; Zimmermann, Martin; Möricke, Anja; Schrappe, Martin; Rizzari, Carmelo; Boos, Joachim
2018-02-01
In the international AIEOP-BFM ALL 2009 trial, asparaginase (ASE) activity was monitored after each dose of pegylated Escherichia coli ASE (PEG-ASE). Two methods were used: the aspartic acid β-hydroxamate (AHA) test and medac asparaginase activity test (MAAT). As the latter method overestimates PEG-ASE activity because it calibrates using E. coli ASE, method comparison was performed using samples from the AIEOP-BFM ALL 2009 trial. PEG-ASE activities were determined using MAAT and AHA test in 2 sets of samples (first set: 630 samples and second set: 91 samples). Bland-Altman analysis was performed on ratios between MAAT and AHA tests. The mean difference between both methods, limits of agreement, and 95% confidence intervals were calculated and compared for all samples and samples grouped according to the calibration ranges of the MAAT and the AHA test. PEG-ASE activity determined using the MAAT was significantly higher than when determined using the AHA test (P < 0.001; Wilcoxon signed-rank test). Within the calibration range of the MAAT (30-600 U/L), PEG-ASE activities determined using the MAAT were on average 23% higher than PEG-ASE activities determined using the AHA test. This complies with the mean difference reported in the MAAT manual. With PEG-ASE activities >600 U/L, the discrepancies between MAAT and AHA test increased. Above the calibration range of the MAAT (>600 U/L) and the AHA test (>1000 U/L), a mean difference of 42% was determined. Because more than 70% of samples had PEG-ASE activities >600 U/L and required additional sample dilution, an overall mean difference of 37% was calculated for all samples (37% for the first and 34% for the second set). Comparison of the MAAT and AHA test for PEG-ASE activity confirmed a mean difference of 23% between MAAT and AHA test for PEG-ASE activities between 30 and 600 U/L. The discrepancy increased in samples with >600 U/L PEG-ASE activity, which will be especially relevant when evaluating high PEG-ASE activities in relation to toxicity, efficacy, and population pharmacokinetics.
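The agreement analysis used here can be sketched in a few lines: compute the MAAT/AHA ratio per sample, then the mean ratio, its 95% confidence interval and the limits of agreement. The activity values below are simulated around the reported ~23% bias and are not data from the trial.

```python
# Minimal Bland-Altman-style agreement sketch on method ratios (MAAT / AHA), with
# limits of agreement; the activity values below are made up for illustration.
import numpy as np

rng = np.random.default_rng(7)
aha = rng.uniform(100, 1500, 200)                # hypothetical AHA-test activities (U/L)
maat = aha * rng.normal(1.23, 0.10, aha.size)    # MAAT reading ~23% higher on average

ratio = maat / aha
mean_ratio = ratio.mean()
sd_ratio = ratio.std(ddof=1)
loa = (mean_ratio - 1.96 * sd_ratio, mean_ratio + 1.96 * sd_ratio)   # limits of agreement
se_mean = sd_ratio / np.sqrt(ratio.size)
ci_mean = (mean_ratio - 1.96 * se_mean, mean_ratio + 1.96 * se_mean) # 95% CI of the mean ratio

print(f"mean MAAT/AHA ratio: {mean_ratio:.2f}  (95% CI {ci_mean[0]:.2f}-{ci_mean[1]:.2f})")
print(f"limits of agreement: {loa[0]:.2f} to {loa[1]:.2f}")
```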
New XAFS Facility for In-Situ Measurements at Beamline C at HASYLAB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rickers, K.; Drube, W.; Schulte-Schrepping, H.
2007-02-02
An XAFS experiment for in-situ measurements has been set up at DORIS bending magnet beamline C. For that purpose, a new double-crystal, UHV-compatible monochromator with fast scanning capability was designed. This fixed-exit monochromator uses two crystal sets on a common central rotation axis driven by an ex-vacuo goniometer. Bragg angles range from 5 deg. to 55.5 deg., resulting in a total energy range of 2.3-43.4 keV using Si(111)/(311) crystal sets. Crystal pairs can be remotely selected by translating the vacuum chamber. Energy encoding is performed using an optical encoder system. The standard XAFS sample environment is set up in vacuo and can be adapted for special sample environments. For in-situ experiments, the beamline is equipped with twelve gas lines. An exhaust line allows toxic/reactive gases to be handled. As an initial performance test of the instrument, Ti, Cr, Fe and Cu XAFS and Ce K-shell QEXAFS measurements were performed.
Asteroid orbital inversion using uniform phase-space sampling
NASA Astrophysics Data System (ADS)
Muinonen, K.; Pentikäinen, H.; Granvik, M.; Oszkiewicz, D.; Virtanen, J.
2014-07-01
We review statistical inverse methods for asteroid orbit computation from a small number of astrometric observations and short time intervals of observations. With the help of Markov-chain Monte Carlo methods (MCMC), we present a novel inverse method that utilizes uniform sampling of the phase space for the orbital elements. The statistical orbital ranging method (Virtanen et al. 2001, Muinonen et al. 2001) was set out to resolve the long-lasting challenges in the initial computation of orbits for asteroids. The ranging method starts from the selection of a pair of astrometric observations. Thereafter, the topocentric ranges and angular deviations in R.A. and Decl. are randomly sampled. The two Cartesian positions allow for the computation of orbital elements and, subsequently, the computation of ephemerides for the observation dates. Candidate orbital elements are included in the sample of accepted elements if the χ^2-value between the observed and computed observations is within a pre-defined threshold. The sample orbital elements obtain weights based on a certain debiasing procedure. When the weights are available, the full sample of orbital elements allows the probabilistic assessments for, e.g., object classification and ephemeris computation as well as the computation of collision probabilities. The MCMC ranging method (Oszkiewicz et al. 2009; see also Granvik et al. 2009) replaces the original sampling algorithm described above with a proposal probability density function (p.d.f.), and a chain of sample orbital elements results in the phase space. MCMC ranging is based on a bivariate Gaussian p.d.f. for the topocentric ranges, and allows for the sampling to focus on the phase-space domain with most of the probability mass. In the virtual-observation MCMC method (Muinonen et al. 2012), the proposal p.d.f. for the orbital elements is chosen to mimic the a posteriori p.d.f. for the elements: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, corresponding virtual least-squares orbital elements are derived using the Nelder-Mead downhill simplex method; third, repeating the procedure two times allows for a computation of a difference for two sets of virtual orbital elements; and, fourth, this orbital-element difference constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal p.d.f. In a discrete approximation, the allowed proposals coincide with the differences that are based on a large number of pre-computed sets of virtual least-squares orbital elements. The virtual-observation MCMC method is thus based on the characterization of the relevant volume in the orbital-element phase space. Here we utilize MCMC to map the phase-space domain of acceptable solutions. We can make use of the proposal p.d.f.s from the MCMC ranging and virtual-observation methods. The present phase-space mapping produces, upon convergence, a uniform sampling of the solution space within a pre-defined χ^2-value. The weights of the sampled orbital elements are then computed on the basis of the corresponding χ^2-values. The present method resembles the original ranging method. On one hand, MCMC mapping is insensitive to local extrema in the phase space and efficiently maps the solution space. This is somewhat contrary to the MCMC methods described above. 
On the other hand, MCMC mapping can suffer from producing a small number of sample elements with small χ^2-values, in resemblance to the original ranging method. We apply the methods to example near-Earth, main-belt, and transneptunian objects, and highlight the utilization of the methods in the data processing and analysis pipeline of the ESA Gaia space mission.
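The accept/reject logic that these ranging and MCMC methods share can be illustrated schematically. The toy sampler below is a random-walk Metropolis-Hastings chain whose "chi-square" is a simple quadratic misfit rather than an orbit-propagation residual; the observation values, step size and chain length are arbitrary.

```python
# Schematic random-walk Metropolis-Hastings sampler illustrating the accept/reject
# logic described above. The chi-square here is a toy quadratic misfit, not an
# orbital-dynamics computation.
import numpy as np

rng = np.random.default_rng(3)
observed = np.array([1.0, -0.5, 0.3])          # stand-in for astrometric observations
sigma = 0.1

def chi2(params):
    # In the real method this would propagate an orbit and compare computed
    # positions with the observations; here the "model" is the identity map.
    return np.sum(((params - observed) / sigma) ** 2)

params = np.zeros(3)                            # starting elements
chain, current_chi2 = [], chi2(params)
for _ in range(20000):
    proposal = params + rng.normal(0, 0.05, params.size)   # symmetric proposal
    new_chi2 = chi2(proposal)
    if rng.random() < np.exp(-0.5 * (new_chi2 - current_chi2)):
        params, current_chi2 = proposal, new_chi2
    chain.append(params.copy())

chain = np.array(chain)
print("posterior means:", chain[5000:].mean(axis=0))       # discard burn-in
```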
Identification and classification of similar looking food grains
NASA Astrophysics Data System (ADS)
Anami, B. S.; Biradar, Sunanda D.; Savakar, D. G.; Kulkarni, P. V.
2013-01-01
This paper describes a comparative study of Artificial Neural Network (ANN) and Support Vector Machine (SVM) classifiers, taking as a case study the identification and classification of four pairs of similar looking food grains, namely Finger Millet, Mustard, Soyabean, Pigeon Pea, Aniseed, Cumin-seeds, Split Greengram and Split Blackgram. Algorithms are developed to acquire and process color images of these grain samples. The developed algorithms are used to extract 18 color (Hue, Saturation, Value; HSV) features and 42 wavelet-based texture features. A Back Propagation Neural Network (BPNN)-based classifier is designed using three feature sets, namely color-HSV, wavelet-texture, and their combination. An SVM model based on the color-HSV features is designed for the same set of samples. Classification accuracies ranging from 93% to 96% for color-HSV, from 78% to 94% for the wavelet-texture model, and from 92% to 97% for the combined model are obtained for the ANN-based models. Classification accuracy ranging from 80% to 90% is obtained for the color-HSV-based SVM model. Training time required for the SVM-based model is substantially less than for the ANN for the same set of images.
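A compact sketch of such an ANN-versus-SVM comparison is given below using scikit-learn; the feature vectors stand in for the HSV colour and wavelet texture features and are generated synthetically, since image acquisition and feature extraction are outside the scope of the example.

```python
# Rough sketch of an ANN-vs-SVM comparison on colour/texture feature vectors.
# The features and grain classes are synthetic; feature extraction from images
# (HSV statistics, wavelet texture) is not shown here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=800, n_features=60, n_informative=20,
                           n_classes=8, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ann = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32,),
                                                    max_iter=2000, random_state=0))
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))

for name, clf in [("BPNN-style ANN", ann), ("SVM", svm)]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {clf.score(X_te, y_te):.2f}")
```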
P.D. Jones; L.R. Schimleck; G.F. Peter; R.F. Daniels; A. Clark
2005-01-01
Preliminary studies based on small sample sets show that near infrared (NIR) spectroscopy has the potential for rapidly estimating many important wood properties. However, if NIR is to be used operationally, then calibrations using several hundred samples from a wide variety of growing conditions need to be developed and their performance tested on samples from new...
The effect of sampling techniques used in the multiconfigurational Ehrenfest method
NASA Astrophysics Data System (ADS)
Symonds, C.; Kattirtzi, J. A.; Shalashilin, D. V.
2018-05-01
In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.
García-Molina, María Dolores; García-Olmo, Juan; Barro, Francisco
2016-01-01
The aim of this work was to assess the ability of Near Infrared Spectroscopy (NIRS) to distinguish wheat lines with low gliadin content, obtained by RNA interference (RNAi), from non-transgenic wheat lines. The discriminant analysis was performed using both whole grain and flour. The transgenic sample set included 409 samples for whole grain sorting and 414 samples for flour experiments, while the non-transgenic set consisted of 126 and 156 samples for whole grain and flour, respectively. Samples were scanned using a Foss-NIR Systems 6500 System II instrument. Discrimination models were developed using the entire spectral range (400-2500 nm) and ranges of 400-780 nm, 800-1098 nm and 1100-2500 nm, followed by analysis by means of partial least squares (PLS). Two external validations were made using samples from the years 2013 and 2014, and a minimum of 99% of the flour samples and 96% of the whole grain samples were classified correctly. The results demonstrate the ability of NIRS to successfully discriminate between wheat samples with low-gliadin content and wild types. These findings are important for the development and analysis of foodstuff for celiac disease (CD) patients to achieve better dietary composition and a reduction in disease incidence.
Grignard, Lynn; Gonçalves, Bronner P; Early, Angela M; Daniels, Rachel F; Tiono, Alfred B; Guelbéogo, Wamdaogo M; Ouédraogo, Alphonse; van Veen, Elke M; Lanke, Kjerstin; Diarra, Amidou; Nebie, Issa; Sirima, Sodiomon B; Targett, Geoff A; Volkman, Sarah K; Neafsey, Daniel E; Wirth, Dyann F; Bousema, Teun; Drakeley, Chris
2018-05-05
Plasmodium falciparum malaria infections often comprise multiple distinct parasite clones. Few datasets have directly assessed infection complexity in humans and mosquitoes they infect. Examining parasites using molecular tools may provide insights into the selective transmissibility of isolates. Using capillary electrophoresis genotyping and next generation amplicon sequencing, we analysed complexity of parasite infections in human blood and in the midguts of mosquitoes that became infected in membrane feeding experiments using the same blood material in two West African settings. Median numbers of clones in humans and mosquitoes were higher in samples from Burkina Faso (4.5, interquartile range 2-8 for humans; and 2, interquartile range 1-3 for mosquitoes) than in The Gambia (2, interquartile range 1-3 and 1, interquartile range 1-3, for humans and mosquitoes, respectively). Whilst the median number of clones was commonly higher in human blood samples, not all transmitted alleles were detectable in the human peripheral blood. In both study sample sets, additional parasite alleles were identified in mosquitoes compared with the matched human samples (10-88.9% of all clones/feeding assay, n = 73 feeding assays). The results are likely due to preferential amplification of the most abundant clones in peripheral blood but confirm the presence of low density clones that produce transmissible sexual stage parasites. Copyright © 2018. Published by Elsevier Ltd.
2010-06-01
What is Multi-Increment Sampling (MIS)?
• Technique of combining many increments of soil from a number of points within an exposure area
• Developed by Enviro Stat (trademarked)
• Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous settings
• The sampled area is divided into a set of decision (exposure) units
• One or several discrete or small-scale composite soil samples are collected to represent each decision unit
Joseph L. Ganey; William M. Block; Steven H. Ackers
2003-01-01
As part of a set of studies evaluating home-range size and habitat use of radio-marked Mexican spotted owls (Strix occidentalis lucida), we sampled structural characteristics of forest stands within owl home ranges on two study areas in Arizona and New Mexico. Study areas were dominated by ponderosa pine (Pinus ponderosa)-Gambel...
Grazzini, Grazia; Ventura, Leonardo; Rubeca, Tiziana; Rapi, Stefano; Cellai, Filippo; Di Dia, Pietro P; Mallardi, Beatrice; Mantellini, Paola; Zappa, Marco; Castiglione, Guido
2017-07-01
Haemoglobin (Hb) stability in faecal samples is an important issue in colorectal cancer screening by the faecal immunochemical test (FIT) for Hb. This study evaluated the performance of the FIT-Hb (OC-Sensor Eiken) used in the Florence screening programme by comparing two different formulations of the buffer, both in an analytical and in a clinical setting. In the laboratory simulation, six faecal pools (three in each buffer type) were stored at different temperatures and analysed eight times in 10 replicates over 21 days. In the clinical setting, 7695 screenees returned two samples, using both the old and the new specimen collection device (SCD). In the laboratory simulation, 5 days from sample preparation with the buffer of the old SCD, the Hb concentration decreased by 40% at room temperature (25°C, range 22-28°C) and up to 60% at outside temperature (29°C, range 16-39°C), whereas with the new one, Hb concentration decreased by 10%. In the clinical setting, a higher mean Hb concentration with the new SCD compared with the old one was found (6.3 vs. 5.0 µg Hb/g faeces, respectively, P<0.001); no statistically significant difference was found in the probability of having a positive result in the two SCDs. Better Hb stability was observed with the new buffer under laboratory conditions, but no difference was found in the clinical performance. In our study, only marginal advantages arise from the new buffer. Improvements in sample stability represent a significant target in the screening setting.
Di Girolamo, Francesco; Righetti, Pier Giorgio; Soste, Martin; Feng, Yuehan; Picotti, Paola
2013-08-26
Systems biology studies require the capability to quantify with high precision proteins spanning a broad range of abundances across multiple samples. However, the broad range of protein expression in cells often precludes the detection of low-abundance proteins. Different sample processing techniques can be applied to increase proteome coverage. Among these, combinatorial (hexa)peptide ligand libraries (CPLLs) bound to solid matrices have been used to specifically capture and detect low-abundance proteins in complex samples. To assess whether CPLL capture can be applied in systems biology studies involving the precise quantitation of proteins across a multitude of samples, we evaluated its performance across the whole range of protein abundances in Saccharomyces cerevisiae. We used selected reaction monitoring assays for a set of target proteins covering a broad abundance range to quantitatively evaluate the precision of the approach and its capability to detect low-abundance proteins. Replicated CPLL-isolates showed an average variability of ~10% in the amount of the isolated proteins. The high reproducibility of the technique was not dependent on the abundance of the protein or the amount of beads used for the capture. However, the protein-to-bead ratio affected the enrichment of specific proteins. We did not observe a normalization effect of CPLL beads on protein abundances. However, CPLLs enriched for and depleted specific sets of proteins and thus changed the abundances of proteins from a whole proteome extract. This allowed the identification of ~400 proteins otherwise undetected in an untreated sample, under the experimental conditions used. CPLL capture is thus a useful tool to increase protein identifications in proteomic experiments, but it should be coupled to the analysis of untreated samples, to maximize proteome coverage. Our data also confirms that CPLL capture is reproducible and can be confidently used in quantitative proteomic experiments. Combinatorial hexapeptide ligand libraries (CPLLs) bound to solid matrices have been proposed to specifically capture and detect low-abundance proteins in complex samples. To assess whether the CPLL capture can be confidently applied in systems biology studies involving the precise quantitation of proteins across a broad range of abundances and a multitude of samples, we evaluated its reproducibility and performance features. Using selected reaction monitoring assays for proteins covering the whole range of abundances we show that the technique is reproducible and compatible with quantitative proteomic studies. However, the protein-to-bead ratio affects the enrichment of specific proteins and CPLLs depleted specific sets of proteins from a whole proteome extract. Our results suggest that CPLL-based analyses should be coupled to the analysis of untreated samples, to maximize proteome coverage. Overall, our data confirms the applicability of CPLLs in systems biology research and guides the correct use of this technique. Copyright © 2013 Elsevier B.V. All rights reserved.
Shallow ground-water quality beneath a major urban center: Denver, Colorado, USA
Bruce, B.W.; McMahon, P.B.
1996-01-01
A survey of the chemical quality of ground water in the unconsolidated alluvial aquifer beneath a major urban center (Denver, Colorado, USA) was performed in 1993 with the objective of characterizing the quality of shallow ground-water in the urban area and relating water quality to land use. Thirty randomly selected alluvial wells were each sampled once for a broad range of dissolved constituents. The urban land use at each well site was sub-classified into one of three land-use settings: residential, commercial, and industrial. Shallow ground-water quality was highly variable in the urban area and the variability could be related to these land-use setting classifications. Sulfate (SO4) was the predominant anion in most samples from the residential and commercial land-use settings, whereas bicarbonate (HCO3) was the predominant anion in samples from the industrial land-use setting, indicating a possible shift in redox conditions associated with land use. Only three of 30 samples had nitrate concentrations that exceeded the US national drinking-water standard of 10 mg l-1 as nitrogen, indicating that nitrate contamination of shallow ground water may not be a serious problem in this urban area. However, the highest median nitrate concentration (4.2 mg l-1) was in samples from the residential setting, where fertilizer application is assumed to be most intense. Twenty-seven of 30 samples had detectable pesticides and nine of 82 analyzed pesticide compounds were detected at low concentrations, indicating that pesticides are widely distributed in shallow ground water in this urban area. Although the highest median total pesticide concentration (0.17 µg l-1) was in the commercial setting, the herbicides prometon and atrazine were found in each land-use setting. Similarly, 25 of 29 samples analyzed had detectable volatile organic compounds (VOCs) indicating these compounds are also widely distributed in this urban area. The total VOC concentrations in sampled wells ranged from nondetectable to 23,442 µg l-1. Widespread detections and occasionally high concentrations point to VOCs as the major anthropogenic ground-water impact in this urban environment. Generally, the highest VOC concentrations occurred in samples from the industrial setting. The most frequently detected VOC was the gasoline additive methyl tert-butyl ether (MTBE, in 23 of 29 wells). Results from this study indicate that the quality of shallow ground water in major urban areas can be related to land-use settings. Moreover, some VOCs and pesticides may be widely distributed at low concentrations in shallow ground water throughout major urban areas. As a result, the differentiation between point and non-point sources for these compounds in urban areas may be difficult.
Improved pulse laser ranging algorithm based on high speed sampling
NASA Astrophysics Data System (ADS)
Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang
2016-10-01
Narrow pulse laser ranging achieves long-range target detection using laser pulses with low beam divergence. Pulse laser ranging is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm based on high speed sampling is studied. Firstly, theoretical simulation models are built and analyzed, covering the laser emission and the pulse laser ranging algorithm. An improved pulse ranging algorithm is developed that combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm; it includes a laser diode, a laser detector and a high-sample-rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm is implemented in an FPGA chip as a fusion of the matched filter and CFD algorithms. Finally, a laser ranging experiment is carried out on the hardware system to compare the ranging performance of the improved algorithm with the matched filter algorithm and the CFD algorithm alone. The test results demonstrate that the hardware system realizes high speed processing and high speed sampling data transmission, and that the improved algorithm achieves 0.3 m ranging precision, consistent with the theoretical simulation.
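The combination of matched filtering and constant fraction discrimination can be illustrated on a simulated return waveform, as in the sketch below; the sample rate, pulse shape, noise level and echo delay are all invented, and a real implementation would calibrate the offset between the CFD crossing and the pulse centre.

```python
# Illustrative sketch of a matched filter followed by constant-fraction timing on
# the filtered return pulse. Sample rate, pulse shape and delay are invented.
import numpy as np

fs = 1e9                                          # hypothetical 1 GS/s digitiser
c = 3e8
t = np.arange(2000) / fs                          # 2 us record
true_delay = 800e-9                               # echo delay, corresponding to ~120 m range
rng = np.random.default_rng(1)
rx = 0.2 * np.exp(-((t - true_delay) / 5e-9) ** 2) + rng.normal(0, 0.02, t.size)

# Matched filter: correlate with a symmetric template of the emitted pulse shape.
tt = (np.arange(101) - 50) / fs
template = np.exp(-(tt / 5e-9) ** 2)
mf = np.convolve(rx, template, mode="same")       # symmetric kernel -> peak at echo centre

# Constant-fraction discrimination: leading-edge crossing at 50% of the peak height.
peak = int(mf.argmax())
frac = 0.5 * mf[peak]
cross = peak - int(np.argmax(mf[:peak][::-1] < frac))

print(f"peak-index range estimate : {0.5 * c * peak / fs:.1f} m")
print(f"CFD crossing (index {cross}) precedes the peak by {(peak - cross) / fs * 1e9:.1f} ns")
```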
Kirchberger, Martin; Russo, Frank A
2016-02-10
Dynamic range compression serves different purposes in the music and hearing-aid industries. In the music industry, it is used to make music louder and more attractive to normal-hearing listeners. In the hearing-aid industry, it is used to map the variable dynamic range of acoustic signals to the reduced dynamic range of hearing-impaired listeners. Hence, hearing-aided listeners will typically receive a dual dose of compression when listening to recorded music. The present study involved an acoustic analysis of dynamic range across a cross section of recorded music as well as a perceptual study comparing the efficacy of different compression schemes. The acoustic analysis revealed that the dynamic range of samples from popular genres, such as rock or rap, was generally smaller than the dynamic range of samples from classical genres, such as opera and orchestra. By comparison, the dynamic range of speech, based on recordings of monologues in quiet, was larger than the dynamic range of all music genres tested. The perceptual study compared the effect of the prescription rule NAL-NL2 with a semicompressive and a linear scheme. Music subjected to linear processing had the highest ratings for dynamics and quality, followed by the semicompressive and the NAL-NL2 setting. These findings advise against NAL-NL2 as a prescription rule for recorded music and recommend linear settings. © The Author(s) 2016.
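A minimal static compressor helps make the input/output mapping concrete: levels above a threshold are attenuated according to the compression ratio. The sketch below is a generic broadband gain computer with made-up parameters, not an implementation of NAL-NL2 or of any hearing-aid signal path.

```python
# Toy static (broadband) dynamic range compressor: levels above the threshold are
# reduced according to the compression ratio. This is a schematic of compression
# itself, not an implementation of the NAL-NL2 prescription.
import numpy as np

def compress_db(level_db, threshold_db=-20.0, ratio=3.0):
    """Static gain computer: input/output level map in dB."""
    level_db = np.asarray(level_db, dtype=float)
    over = np.maximum(level_db - threshold_db, 0.0)
    return level_db - over * (1.0 - 1.0 / ratio)

# A 60 dB input dynamic range is mapped to a smaller output range above threshold.
inputs = np.array([-60.0, -40.0, -20.0, -10.0, 0.0])
print(np.column_stack([inputs, compress_db(inputs)]))
```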
Straightforward analytical method to determine opium alkaloids in poppy seeds and bakery products.
López, Patricia; Pereboom-de Fauw, Diana P K H; Mulder, Patrick P J; Spanjer, Martien; de Stoppelaar, Joyce; Mol, Hans G J; de Nijs, Monique
2018-03-01
A straightforward method to determine the content of six opium alkaloids (morphine, codeine, thebaine, noscapine, papaverine and narceine) in poppy seeds and bakery products was developed and validated down to a limit of quantification (LOQ) of 0.1 mg/kg. The method was based on extraction with acetonitrile/water/formic acid, ten-fold dilution and analysis by LC-MS/MS using a pH 10 carbonate buffer. The method was applied for the analysis of 41 samples collected in 2015 in the Netherlands and Germany. All samples contained morphine ranging from 0.2 to 240 mg/kg. The levels of codeine and thebaine ranged from below LOQ to 348 mg/kg and from below LOQ to 106 mg/kg, respectively. Sixty percent of the samples exceeded the guidance reference value of 4 mg/kg of morphine set by BfR in Germany, whereas 25% of the samples did not comply with the limits set for morphine, codeine, thebaine and noscapine by Hungarian legislation. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cao, X.; Tian, F.; Telford, R.; Ni, J.; Xu, Q.; Chen, F.; Liu, X.; Stebich, M.; Zhao, Y.; Herzschuh, U.
2017-12-01
Pollen-based quantitative reconstruction of past climate variables is a standard palaeoclimatic approach. Despite knowing that the spatial extent of the calibration-set affects the reconstruction result, guidance is lacking as to how to determine a suitable spatial extent of the pollen-climate calibration-set. In this study, past mean annual precipitation (Pann) during the Holocene (since 11.5 cal ka BP) is reconstructed repeatedly for pollen records from Qinghai Lake (36.7°N, 100.5°E; north-east Tibetan Plateau), Gonghai Lake (38.9°N, 112.2°E; north China) and Sihailongwan Lake (42.3°N, 126.6°E; north-east China) using calibration-sets of varying spatial extents extracted from the modern pollen dataset of China and Mongolia (2559 sampling sites and 168 pollen taxa in total). Results indicate that the spatial extent of the calibration-set has a strong impact on model performance, analogue quality and reconstruction diagnostics (absolute value, range, trend, optimum). Generally, these effects are stronger with the modern analogue technique (MAT) than with weighted averaging partial least squares (WA-PLS). With respect to fossil spectra from northern China, the spatial extent of calibration-sets should be restricted to ca. 1000 km in radius because small-scale calibration-sets (<800 km radius) will likely fail to include enough spatial variation in the modern pollen assemblages to reflect the temporal range shifts during the Holocene, while too broad a scale calibration-set (>1500 km radius) will include taxa with very different pollen-climate relationships. Based on our results we conclude that the optimal calibration-set should 1) cover a reasonably large spatial extent with an even distribution of modern pollen samples; 2) possess good model performance as indicated by cross-validation, high analogue quality, and excellent fit with the target fossil pollen spectra; 3) possess high taxonomic resolution, and 4) obey the modern and past distribution ranges of taxa inferred from palaeo-genetic and macrofossil studies.
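For readers unfamiliar with the modern analogue technique, the sketch below shows its core computation: squared-chord distances between a fossil pollen spectrum and modern calibration samples, with the reconstruction taken as a distance-weighted mean of the k closest analogues. The pollen proportions and precipitation values are synthetic and the choice of k is arbitrary.

```python
# Minimal modern analogue technique (MAT) sketch: squared-chord distances between a
# fossil pollen spectrum and modern calibration samples, with the reconstruction
# taken as a distance-weighted mean of the k best analogues. Data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_modern, n_taxa, k = 500, 30, 8
modern = rng.dirichlet(np.ones(n_taxa), n_modern)      # modern pollen proportions
pann = rng.uniform(100, 800, n_modern)                  # associated annual precipitation (mm)
fossil = rng.dirichlet(np.ones(n_taxa))                 # one fossil spectrum

# Squared-chord distance between the fossil spectrum and every modern sample.
d = np.sum((np.sqrt(modern) - np.sqrt(fossil)) ** 2, axis=1)

best = np.argsort(d)[:k]                                # k closest analogues
weights = 1.0 / np.maximum(d[best], 1e-9)
pann_est = np.sum(weights * pann[best]) / weights.sum()
print(f"reconstructed Pann: {pann_est:.0f} mm (analogue distances {d[best].min():.3f}-{d[best].max():.3f})")
```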
Tang, Liguo; Cao, Wenwu
2016-01-01
During the operation of high power electromechanical devices, a temperature rise is unavoidable due to mechanical and electrical losses, causing the degradation of device performance. In order to evaluate such degradations using computer simulations, full matrix material properties at elevated temperatures are needed as inputs. It is extremely difficult to measure such data for ferroelectric materials due to their strong anisotropic nature and property variation among samples of different geometries. Because the degree of depolarization is boundary condition dependent, data obtained by the IEEE (Institute of Electrical and Electronics Engineers) impedance resonance technique, which requires several samples with drastically different geometries, usually lack self-consistency. The resonant ultrasound spectroscopy (RUS) technique allows the full set of material constants to be measured using only one sample, which can eliminate errors caused by sample to sample variation. A detailed RUS procedure is demonstrated here using a lead zirconate titanate (PZT-4) piezoceramic sample. In the example, the complete set of material constants was measured from room temperature to 120 °C. Measured free dielectric constants were compared with calculated ones based on the measured full data set, and piezoelectric constants d15 and d33 were also calculated using different formulas. Excellent agreement was found in the entire range of temperatures, which confirmed the self-consistency of the data set obtained by the RUS. PMID:27168336
Airado-Rodríguez, Diego; Høy, Martin; Skaret, Josefine; Wold, Jens Petter
2014-05-01
The potential of multispectral imaging of autofluorescence to map sensory flavour properties and fluorophore concentrations in cod caviar paste has been investigated. Cod caviar paste was used as a case product and it was stored over time, under different headspace gas composition and light exposure conditions, to obtain a relevant span in lipid oxidation and sensory properties. Samples were divided in two sets, calibration and test sets, with 16 and 7 samples, respectively. A third set of samples was prepared with induced gradients in lipid oxidation and sensory properties by light exposure of certain parts of the sample surface. Front-face fluorescence emission images were obtained for excitation wavelength 382 nm at 11 different channels ranging from 400 to 700 nm. The analysis of the obtained sets of images was divided in two parts: First, in an effort to compress and extract relevant information, multivariate curve resolution was applied on the calibration set and three spectral components and their relative concentrations in each sample were obtained. The obtained profiles were employed to estimate the concentrations of each component in the images of the heterogeneous samples, giving chemical images of the distribution of fluorescent oxidation products, protoporphyrin IX and photoprotoporphyrin. Second, regression models for sensory attributes related to lipid oxidation were constructed based on the spectra of homogeneous samples from the calibration set. These models were successfully validated with the test set. The models were then applied for pixel-wise estimation of sensory flavours in the heterogeneous images, giving rise to sensory images. As far as we know this is the first time that sensory images of odour and flavour are obtained based on multispectral imaging. Copyright © 2014 Elsevier B.V. All rights reserved.
Oliveira, Gislene B; Alewijn, Martin; Boerrigter-Eenling, Rita; van Ruth, Saskia M
2015-08-25
Consumers' interest in the way meat is produced is increasing in Europe. The resulting free range and organic meat products retail at a higher price, but are difficult to differentiate from their counterparts. To ascertain authenticity and prevent fraud, relevant markers need to be identified and new analytical methodology developed. The objective of this pilot study was to characterize pork belly meats of different animal welfare classes by their fatty acid (Fatty Acid Methyl Ester-FAME), non-volatile compound (electrospray ionization-tandem mass spectrometry-ESI-MS/MS), and volatile compound (proton-transfer-reaction mass spectrometry-PTR-MS) fingerprints. Well-defined pork belly meat samples (13 conventional, 15 free range, and 13 organic) originating from the Netherlands were subjected to analysis. Fingerprints appeared to be specific for the three categories, and resulted in 100%, 95.3%, and 95.3% correct identity predictions of training set samples for FAME, ESI-MS/MS, and PTR-MS respectively and slightly lower scores for the validation set. Organic meat was also well discriminated from the other two categories with 100% success rates for the training set for all three analytical approaches. Ten out of 25 FAs showed significant differences in abundance between organic meat and the other categories, free range meat differed significantly for 6 out of the 25 FAs. Overall, FAME fingerprinting presented highest discrimination power.
Robustness of Two Formulas to Correct Pearson Correlation for Restriction of Range
ERIC Educational Resources Information Center
tran, minh
2011-01-01
Many research studies involving Pearson correlations are conducted in settings where one of the two variables has a restricted range in the sample. For example, this situation occurs when tests are used for selecting candidates for employment or university admission. Often after selection, there is interest in correlating the selection variable,…
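The abstract does not name the two correction formulas, so as background the sketch below implements one widely used correction for direct range restriction (often attributed to Thorndike, Case II), in which the restricted-sample correlation is rescaled by the ratio of the unrestricted to restricted standard deviation of the selection variable; the numbers are illustrative only.

```python
# One widely used correction for direct range restriction (Thorndike's Case II):
# rescale the restricted-sample correlation by the ratio u of the unrestricted to
# restricted standard deviation of the selection variable. The abstract above
# compares two such formulas; only this common one is sketched here.
def correct_restriction(r_restricted: float, sd_unrestricted: float, sd_restricted: float) -> float:
    u = sd_unrestricted / sd_restricted
    return (r_restricted * u) / ((1 - r_restricted**2 + (r_restricted**2) * u**2) ** 0.5)

# Example (made-up numbers): r = 0.30 observed among selected candidates whose
# selection-test SD is half that of the full applicant pool.
print(round(correct_restriction(0.30, sd_unrestricted=10.0, sd_restricted=5.0), 3))
```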
Capacitance Sensor For Nondestructive Determination Of Total Oil Content In Nuts
USDA-ARS?s Scientific Manuscript database
Earlier a simple, low cost instrument was designed and assembled in our laboratory that could estimate the moisture content (MC) of in-shell peanuts (MC range 9% to 20%) and yellow-dent field corn (MC range 7% to 18%). In this method a sample of in-shell peanuts or corn was placed between a set...
Use of Advanced Spectroscopic Techniques for Predicting the Mechanical Properties of Wood Composites
Timothy G. Rials; Stephen S. Kelley; Chi-Leung So
2002-01-01
Near infrared (NIR) spectroscopy was used to characterize a set of medium-density fiberboard (MDF) samples. This spectroscopic technique, in combination with projection to latent structures (PLS) modeling, effectively predicted the mechanical strength of MDF samples with a wide range of physical properties. The stiffness, strength, and internal bond properties of the...
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics, including (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL is also enabled with two novel features; the first one being a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second feature is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features in conjunction with bootstrapping enable the user to monitor the stability, robustness, and convergence of GSA with the increase in sample size for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.
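As background to the sampling strategy mentioned above, a plain Latin hypercube sample can be generated in a few lines; the progressive variant (PLHS) used in VARS-TOOL adds sequential refinement of the sample, which is not reproduced in this sketch.

```python
# Plain Latin hypercube sample in the unit hypercube (the progressive variant PLHS
# mentioned above adds sequential refinement, which is not reproduced here).
import numpy as np

def latin_hypercube(n_samples: int, n_dims: int, rng=None) -> np.ndarray:
    rng = np.random.default_rng(rng)
    # One stratum per sample in each dimension, jittered within the stratum and
    # then randomly permuted across samples so dimensions are paired at random.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    for j in range(n_dims):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u

X = latin_hypercube(10, 3, rng=0)
print(X.round(2))          # each column has exactly one point per decile
```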
NASA Astrophysics Data System (ADS)
Loubere, Paul; Fariduddin, Mohammad
1999-03-01
We present a quantitative method, based on the relative abundances of benthic foraminifera in deep-sea sediments, for estimating surface ocean biological productivity over the timescale of centuries to millennia. We calibrate the method using a global data set composed of 207 samples from the Atlantic, Pacific, and Indian Oceans from a water depth range between 2300 and 3600 m. The sample set was developed so that other, potentially significant, environmental variables would be uncorrelated to overlying surface ocean productivity. A regression of assemblages against productivity yielded an r2 = 0.89 demonstrating a strong productivity signal in the faunal data. In addition, we examined assemblage response to annual variability in biological productivity (seasonality). Our data set included a range of seasonalities which we quantified into a seasonality index using the pigment color bands from the coastal zone color scanner (CZCS). The response of benthic foraminiferal assemblage composition to our seasonality index was tested with regression analysis. We obtained a statistically highly significant r2 = 0.75. Further, discriminant function analysis revealed a clear separation among sample groups based on surface ocean productivity and our seasonality index. Finally, we tested the response of benthic foraminiferal assemblages to three different modes of seasonality. We observed a distinct separation of our samples into groups representing low seasonal variability, strong seasonality with a single main productivity event in the year, and strong seasonality with multiple productivity events in the year. Reconstructing surface ocean biological productivity with benthic foraminifera will aid in modeling marine biogeochemical cycles. Also, estimating mode and range of annual seasonality will provide insight to changing oceanic processes, allowing the examination of the mechanisms causing changes in the marine biotic system over time. This article contains supplementary material.
Cross-cultural dataset for the evolution of religion and morality project.
Purzycki, Benjamin Grant; Apicella, Coren; Atkinson, Quentin D; Cohen, Emma; McNamara, Rita Anne; Willard, Aiyana K; Xygalatas, Dimitris; Norenzayan, Ara; Henrich, Joseph
2016-11-08
A considerable body of research cross-culturally examines the evolution of religious traditions, beliefs and behaviors. The bulk of this research, however, draws from coded qualitative ethnographies rather than from standardized methods specifically designed to measure religious beliefs and behaviors. Psychological data sets that examine religious thought and behavior in controlled conditions tend to be disproportionately sampled from student populations. Some cross-national databases employ standardized methods at the individual level, but are primarily focused on fully market integrated, state-level societies. The Evolution of Religion and Morality Project sought to generate a data set that systematically probed individual level measures sampling across a wider range of human populations. The set includes data from behavioral economic experiments and detailed surveys of demographics, religious beliefs and practices, material security, and intergroup perceptions. This paper describes the methods and variables, briefly introduces the sites and sampling techniques, notes inconsistencies across sites, and provides some basic reporting for the data set.
Perceived climate in physical activity settings.
Gill, Diane L; Morrow, Ronald G; Collins, Karen E; Lucey, Allison B; Schultz, Allison M
2010-01-01
This study focused on the perceived climate for LGBT youth and other minority groups in physical activity settings. A large sample of undergraduates and a selected sample including student teachers/interns and a campus Pride group completed a school climate survey and rated the climate in three physical activity settings (physical education, organized sport, exercise). Overall, school climate survey results paralleled the results with national samples, revealing high levels of homophobic remarks and low levels of intervention. Physical activity climate ratings were mid-range, but multivariate analysis of variance (MANOVA) revealed clear differences, with all settings rated more inclusive for racial/ethnic minorities and most exclusive for gays/lesbians and people with disabilities. The results are in line with national surveys and research suggesting sexual orientation and physical characteristics are often the basis for harassment and exclusion in sport and physical activity. The current results also indicate that future physical activity professionals recognize exclusion, suggesting they could benefit from programs that move beyond awareness to skills and strategies for creating more inclusive programs.
Cross-cultural dataset for the evolution of religion and morality project
Purzycki, Benjamin Grant; Apicella, Coren; Atkinson, Quentin D.; Cohen, Emma; McNamara, Rita Anne; Willard, Aiyana K.; Xygalatas, Dimitris; Norenzayan, Ara; Henrich, Joseph
2016-01-01
A considerable body of research cross-culturally examines the evolution of religious traditions, beliefs and behaviors. The bulk of this research, however, draws from coded qualitative ethnographies rather than from standardized methods specifically designed to measure religious beliefs and behaviors. Psychological data sets that examine religious thought and behavior in controlled conditions tend to be disproportionately sampled from student populations. Some cross-national databases employ standardized methods at the individual level, but are primarily focused on fully market integrated, state-level societies. The Evolution of Religion and Morality Project sought to generate a data set that systematically probed individual level measures sampling across a wider range of human populations. The set includes data from behavioral economic experiments and detailed surveys of demographics, religious beliefs and practices, material security, and intergroup perceptions. This paper describes the methods and variables, briefly introduces the sites and sampling techniques, notes inconsistencies across sites, and provides some basic reporting for the data set. PMID:27824332
Assessment of the hygienic performances of hamburger patty production processes.
Gill, C O; Rahn, K; Sloan, K; McMullen, L M
1997-05-20
The hygienic conditions of the hamburger patties collected from three patty manufacturing plants and six retail outlets were examined. At each manufacturing plant a sample from newly formed, chilled patties and one from frozen patties were collected from each of 25 batches of patties selected at random. At three, two or one retail outlet, respectively, 25 samples from frozen, chilled or both frozen and chilled patties were collected at random. Each sample consisted of 30 g of meat obtained from five or six patties. Total aerobic, coliform and Escherichia coli counts per gram were enumerated for each sample. The mean log (x) and standard deviation (s) were calculated for the log10 values for each set of 25 counts, on the assumption that the distribution of counts approximated the log normal. A value for the log10 of the arithmetic mean (log A) was calculated for each set from the values of x and s. A chi2 statistic was calculated for each set as a test of the assumption of the log normal distribution. The chi2 statistic was calculable for 32 of the 39 sets. Four of the sets gave chi2 values indicative of gross deviation from log normality. On inspection of those sets, distributions obviously differing from the log normal were apparent in two. Log A values for total, coliform and E. coli counts for chilled patties from manufacturing plants ranged from 4.4 to 5.1, 1.7 to 2.3 and 0.9 to 1.5, respectively. Log A values for frozen patties from manufacturing plants were between < 0.1 and 0.5 log10 units less than the equivalent values for chilled patties. Log A values for total, coliform and E. coli counts for frozen patties on retail sale ranged from 3.8 to 8.5, < 0.5 to 3.6 and < 0 to 1.9, respectively. The equivalent ranges for chilled patties on retail sale were 4.8 to 8.5, 1.8 to 3.7 and 1.4 to 2.7, respectively. The findings indicate that the general hygienic condition of hamburger patties could be improved by manufacturing them only from manufacturing beef of superior hygienic quality, and by better management of chilled patties at retail outlets.
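For readers unfamiliar with the log A statistic, the arithmetic mean of a lognormally distributed count can be recovered from the mean and standard deviation of the log10 counts. A minimal sketch, assuming the counts are exactly lognormal:

```python
import math

def log_arithmetic_mean(mean_log10, sd_log10):
    """log10 of the arithmetic mean of a lognormal variable,
    given the mean (x) and SD (s) of its log10-transformed values:
    log A = x + (ln 10 / 2) * s**2."""
    return mean_log10 + (math.log(10) / 2.0) * sd_log10 ** 2

# e.g. x = 4.6 and s = 0.5 for log10 total counts per gram (illustrative values)
print(round(log_arithmetic_mean(4.6, 0.5), 2))  # 4.89
```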
NASA Astrophysics Data System (ADS)
Willmes, M.; McMorrow, L.; Kinsley, L.; Armstrong, R.; Aubert, M.; Eggins, S.; Falguères, C.; Maureille, B.; Moffat, I.; Grün, R.
2014-03-01
Strontium isotope ratios (87Sr / 86Sr) are a key geochemical tracer used in a wide range of fields including archaeology, ecology, food and forensic sciences. These applications are based on the principle that the Sr isotopic ratios of natural materials reflect the sources of strontium available during their formation. A major constraint for current studies is the lack of robust reference maps to evaluate the source of strontium isotope ratios measured in the samples. Here we provide a new data set of bioavailable Sr isotope ratios for the major geologic units of France, based on plant and soil samples (Pangaea data repository doi:10.1594/PANGAEA.819142). The IRHUM (Isotopic Reconstruction of Human Migration) database is a web platform to access, explore and map our data set. The database provides the spatial context and metadata for each sample, allowing the user to evaluate the suitability of the sample for their specific study. In addition, it allows users to upload and share their own data sets and data products, which will enhance collaboration across the different research fields. This article describes the sampling and analytical methods used to generate the data set and how to use and access the data set through the IRHUM database. Any interpretation of the isotope data set is outside the scope of this publication.
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
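A minimal sketch of this kind of estimator for the first scenario (minimum, median, maximum and sample size available) is shown below; the specific constants follow the commonly cited Wan-type formulas and should be checked against the paper before use.

```python
from scipy.stats import norm

def estimate_mean_sd_from_range(a, m, b, n):
    """Approximate the sample mean and SD from the minimum (a), median (m),
    maximum (b) and sample size (n) of a roughly normal sample.
    Sketch of a Wan-type estimator; the constants here are an assumption."""
    mean = (a + 2 * m + b) / 4.0
    xi = 2 * norm.ppf((n - 0.375) / (n + 0.25))   # expected standardized range
    sd = (b - a) / xi
    return mean, sd

# Example: a trial reporting min 10, median 25, max 44 over 50 patients
print(estimate_mean_sd_from_range(a=10, m=25, b=44, n=50))
```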
Legacy and currently used pesticides in the atmospheric environment of Lake Victoria, East Africa.
Arinaitwe, Kenneth; Kiremire, Bernard T; Muir, Derek C G; Fellin, Phil; Li, Henrik; Teixeira, Camilla; Mubiru, Drake N
2016-02-01
The Lake Victoria watershed has extensive agricultural activity with a long history of pesticide use but there is limited information on historical use or on environmental levels. To address this data gap, high volume air samples were collected from two sites close to the northern shore of Lake Victoria; Kakira (KAK) and Entebbe (EBB). The samples, to be analyzed for pesticides, were collected over various periods between 1999 and 2004 inclusive (KAK 1999-2000, KAK 2003-2004, EBB 2003 and EBB 2004 sample sets) and from 2008 to 2010 inclusive (EBB 2008, EBB 2009 and EBB 2010 sample sets). The latter sample sets (which also included precipitation samples) were also analyzed for currently used pesticides (CUPs) including chlorpyrifos, chlorthalonil, metribuzin, trifluralin, malathion and dacthal. Chlorpyrifos was the predominant CUP in air samples with average concentrations of 93.5, 26.1 and 3.54 ng m(-3) for the EBB 2008, 2009, 2010 sample sets, respectively. Average concentrations of total endosulfan (ΣEndo), total DDT related compounds (ΣDDTs) and hexachlorocyclohexanes (ΣHCHs) ranged from 12.3-282, 22.8-130 and 3.72-81.8 pg m(-3), respectively, for all the sample sets. Atmospheric prevalence of residues of persistent organic pollutants (POPs) increased with fresh emissions of endosulfan, DDT and lindane. Hexachlorobenzene (HCB), pentachlorobenzene (PeCB) and dieldrin were also detected in air samples. Transformation products, pentachloroanisole, 3,4,5-trichloroveratrole and 3,4,5,6-tetrachloroveratrole, were also detected. The five most prevalent compounds in the precipitation samples were in the order chlorpyrifos>chlorothalonil>ΣEndo>ΣDDTs>ΣHCHs with average fluxes of 1123, 396, 130, 41.7 and 41.3 ng m(-2)sample(-1), respectively. PeCB exceeded HCB in precipitation samples. The reverse was true for air samples. Backward air trajectories suggested transboundary and local emission sources of the analytes. The results underscore the need for a concerted regional vigilance in management of chemicals. Copyright © 2015 Elsevier B.V. All rights reserved.
Gould, Matthew J.; Cain, James W.; Roemer, Gary W.; Gould, William R.
2016-01-01
During the 2004–2005 to 2015–2016 hunting seasons, the New Mexico Department of Game and Fish (NMDGF) estimated black bear abundance (Ursus americanus) across the state by coupling density estimates with the distribution of primary habitat generated by Costello et al. (2001). These estimates have been used to set harvest limits. For example, a density of 17 bears/100 km2 for the Sangre de Cristo and Sacramento Mountains and 13.2 bears/100 km2 for the Sandia Mountains were used to set harvest levels. The advancement and widespread acceptance of non-invasive sampling and mark-recapture methods prompted the NMDGF to collaborate with the New Mexico Cooperative Fish and Wildlife Research Unit and New Mexico State University to update their density estimates for black bear populations in select mountain ranges across the state. We established 5 study areas in 3 mountain ranges: the northern (NSC; sampled in 2012) and southern Sangre de Cristo Mountains (SSC; sampled in 2013), the Sandia Mountains (Sandias; sampled in 2014), and the northern (NSacs) and southern Sacramento Mountains (SSacs; both sampled in 2014). We collected hair samples from black bears using two concurrent non-invasive sampling methods, hair traps and bear rubs. We used a gender marker and a suite of microsatellite loci to determine the individual identification of hair samples that were suitable for genetic analysis. We used these data to generate mark-recapture encounter histories for each bear and estimated density in a spatially explicit capture-recapture framework (SECR). We constructed a suite of SECR candidate models using sex, elevation, land cover type, and time to model heterogeneity in detection probability and the spatial scale over which detection probability declines. We used Akaike’s Information Criterion corrected for small sample size (AICc) to rank and select the most supported model from which we estimated density. We set 554 hair traps, 117 bear rubs and collected 4,083 hair samples. We identified 725 (367 M, 358 F) individuals; the sex ratio for each study area was approximately equal. Our density estimates varied within and among mountain ranges with an estimated density of 21.86 bears/100 km2 (95% CI: 17.83 – 26.80) for the NSC, 19.74 bears/100 km2 (95% CI: 13.77 – 28.30) in the SSC, 25.75 bears/100 km2 (95% CI: 13.22 – 50.14) in the Sandias, 21.86 bears/100 km2 (95% CI: 17.83 – 26.80) in the NSacs, and 16.55 bears/100 km2 (95% CI: 11.64 – 23.53) in the SSacs. Overall detection probability for hair traps and bear rubs, combined, was low across all study areas and ranged from 0.00001 to 0.02. We speculate that detection probabilities were affected by failure of some hair samples to produce a complete genotype due to UV degradation of DNA, and our inability to set and check some sampling devices due to wildfires in the SSC. Ultraviolet radiation levels are particularly high in New Mexico compared to other states where NGS methods have been used because New Mexico receives substantial amounts of sunshine, is relatively high in elevation (1,200 m – 4,000 m), and is at a lower latitude. Despite these sampling difficulties, we were able to produce density estimates for New Mexico black bear populations with levels of precision comparable to estimated black bear densities made elsewhere in the U.S. Our ability to generate reliable black bear density estimates for 3 New Mexico mountain ranges is attributable to our use of a statistically robust study design and analytical method.
There are multiple factors that need to be considered when developing future SECR-based density estimation projects. First, the spatial extent of the population of interest and the smallest average home range size must be determined; these will dictate the size of the trapping array and the spacing necessary between hair traps. The number of technicians needed and access to the study areas will also influence the configuration of the trapping array. We believe shorter sampling occasions could be implemented to reduce degradation of DNA due to UV radiation; this might help increase amplification rates and thereby increase both the number of unique individuals identified and the number of recaptures, improving the precision of the density estimates. A pilot study may be useful to determine the length of time hair samples can remain in the field prior to collection. In addition, researchers may consider setting hair traps and bear rubs in more shaded areas (e.g., north-facing slopes) to help reduce exposure to UV radiation. To reduce the sampling interval it will be necessary to either hire more field personnel or decrease the number of hair traps per sampling session. Both of these will enhance detection of long-range movement events by individual bears, increase initial capture and recapture rates, and improve precision of the parameter estimates. We recognize that all studies are constrained by limited resources; however, increasing field personnel would also allow a larger study area to be sampled or enable higher trap density. In conclusion, we estimated the density of black bears in 5 study areas within 3 mountain ranges of New Mexico. Our estimates will aid the NMDGF in setting sustainable harvest limits. Along with estimates of density, information on additional demographic rates (e.g., survival rates and reproduction) and the potential effects that climate change and future land use may have on the demography of black bears may also help inform management of black bears in New Mexico, and may be considered as future areas for research.
High pressure system for 3-D study of elastic anisotropy
NASA Astrophysics Data System (ADS)
Lokajicek, T.; Pros, Z.; Klima, K.
2003-04-01
A new high pressure system was designed for the study of elastic anisotropy of condensed matter under high confining pressure up to 700 MPa. Dynamic and static parameters can be measured simultaneously: a) dynamic parameters by ultrasonic sounding, b) static parameters by measuring spherical sample deformation. The measurements are carried out on spherical samples of 50 +/- 0.01 mm diameter. The higher confining pressure was reached due to the new construction of the sample positioning unit. The positioning unit is equipped with two Portecap step motors, which are located inside the vessel and make it possible to rotate the sphere and a couple of piezoceramic transducers. Sample deformation is measured in the same direction as the ultrasonic signal travel time. Only electric leads connect the inner part of the high pressure vessel with the surrounding environment. The experimental setup enables simultaneous P-wave ultrasonic sounding, measurement of the current sample deformation at the sounding points, measurement of the current confining pressure, and measurement of the current stress-medium temperature. An air-driven Haskel high pressure pump is used to produce confining pressures up to 700 MPa. Ultrasonic signals are recorded by an Agilent 54562 digital scope with a sampling frequency of 100 MHz. Control and measuring software was developed in the Agilent VEE environment running under the MS Windows 2000 operating system. The measuring setup was tested by measurements of monomineral spherical samples of quartz and corundum, both of which have trigonal symmetry. The measurements showed that the P-wave velocity range of quartz was between 5.7-7.0 km/s and the velocity range of corundum was between 9.7-10.9 km/s. High-pressure-resistant Mesing LVDT transducers together with an Intronix electronic unit were used to monitor sample deformation with an accuracy of 0.1 micron. All test measurements confirmed the good accuracy of the whole measuring setup. This project was supported by the Grant Agency of the Czech Republic, No. 205/01/1430.
Kamalandua, Aubeline
2015-01-01
Age estimation from DNA methylation markers has seen an exponential growth of interest, not least from forensic scientists. The current published assays, however, can still be improved by lowering the number of markers in the assay and by providing more accurate models to predict chronological age. From the published literature we selected 4 age-associated genes (ASPA, PDE4C, ELOVL2, and EDARADD) and determined CpG methylation levels from 206 blood samples of both deceased and living individuals (age range: 0–91 years). This data was subsequently used to compare prediction accuracy with both linear and non-linear regression models. A quadratic regression model in which the methylation levels of ELOVL2 were squared showed the highest accuracy with a Mean Absolute Deviation (MAD) between chronological age and predicted age of 3.75 years and an adjusted R2 of 0.95. No difference in accuracy was observed for samples obtained from either living or deceased individuals or between the 2 genders. In addition, 29 teeth from different individuals (age range: 19–70 years) were analyzed using the same set of markers, resulting in a MAD of 4.86 years and an adjusted R2 of 0.74. Cross validation of the results obtained from blood samples demonstrated the robustness and reproducibility of the assay. In conclusion, the set of 4 CpG DNA methylation markers is capable of producing highly accurate age predictions for blood samples from deceased and living individuals. PMID:26280308
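A minimal sketch of the quadratic-regression idea (a linear model augmented with a squared ELOVL2 term) is shown below; the data, locus ordering and model settings are illustrative assumptions, not the published assay.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Illustrative stand-in data: methylation fractions (0-1) at the four loci
# for 200 hypothetical donors; real data would come from the assay itself.
n = 200
meth = rng.random((n, 4))                      # columns: ASPA, PDE4C, ELOVL2, EDARADD
age = 90 * meth[:, 2] + rng.normal(0, 4, n)    # toy relationship dominated by ELOVL2

# Quadratic model: add a squared ELOVL2 term, as described in the abstract
X = np.column_stack([meth, meth[:, 2] ** 2])
model = LinearRegression().fit(X, age)

pred = model.predict(X)
mad = np.mean(np.abs(pred - age))
print(f"MAD = {mad:.2f} years")
```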
Enhanced Cumulative Sum Charts for Monitoring Process Dispersion
Abujiya, Mu’azu Ramat; Riaz, Muhammad; Lee, Muhammad Hisyam
2015-01-01
The cumulative sum (CUSUM) control chart is widely used in industry for the detection of small and moderate shifts in process location and dispersion. For efficient monitoring of process variability, we present several CUSUM control charts for monitoring changes in the standard deviation of a normal process. The newly developed control charts, based on well-structured sampling techniques (extreme ranked set sampling, extreme double ranked set sampling and double extreme ranked set sampling), have significantly enhanced the CUSUM chart's ability to detect a wide range of shifts in process variability. The relative performances of the proposed CUSUM scale charts are evaluated in terms of the average run length (ARL) and standard deviation of run length, for point shifts in variability. Moreover, for overall performance, we use the average ratio ARL and average extra quadratic loss. A comparison of the proposed CUSUM control charts with the classical CUSUM R chart, the classical CUSUM S chart, the fast initial response (FIR) CUSUM R chart, the FIR CUSUM S chart, the ranked set sampling (RSS) based CUSUM R chart and the RSS based CUSUM S chart, among others, is presented. An illustrative example using a real dataset is given to demonstrate the practicability of the proposed schemes. PMID:25901356
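As a bare-bones illustration of a CUSUM scheme for dispersion (not the ranked-set-sampling charts proposed in the paper), the following sketch accumulates deviations of subgroup standard deviations from an in-control value; the reference value and decision interval are arbitrary assumptions.

```python
import numpy as np

def cusum_sd(subgroups, sigma0, k=0.1, h=1.0):
    """Bare-bones two-sided tabular CUSUM on subgroup standard deviations.
    sigma0: in-control SD; k: reference value; h: decision interval.
    A plain illustration, not the ranked-set-sampling charts of the paper."""
    c_plus = c_minus = 0.0
    signals = []
    for i, grp in enumerate(subgroups):
        s = np.std(grp, ddof=1)
        c_plus = max(0.0, c_plus + (s - sigma0) - k)
        c_minus = max(0.0, c_minus + (sigma0 - s) - k)
        if c_plus > h or c_minus > h:
            signals.append(i)
    return signals

rng = np.random.default_rng(2)
in_control = [rng.normal(0, 1, 5) for _ in range(20)]
shifted = [rng.normal(0, 1.6, 5) for _ in range(20)]   # upward dispersion shift
print(cusum_sd(in_control + shifted, sigma0=1.0))       # signals cluster after subgroup 20
```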
Gilliom, Robert J.; Helsel, Dennis R.
1986-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
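A minimal sketch of the log-probability regression idea for a single detection limit is shown below; the plotting positions and example data are assumptions, and the original method's exact formulation may differ.

```python
import numpy as np
from scipy.stats import norm

def log_probability_regression(uncensored, n_censored):
    """Sketch of a log-probability (regression-on-order-statistics) estimate
    for a sample with one detection limit: the n_censored non-detects are
    assumed to fall below every detected value."""
    n = len(uncensored) + n_censored
    order = np.sort(uncensored)
    # Weibull plotting positions for the full sample; detected values occupy the top ranks
    p = np.arange(1, n + 1) / (n + 1.0)
    z = norm.ppf(p)
    z_detected, z_censored = z[n_censored:], z[:n_censored]

    # Least-squares line between log concentrations and z scores of detected values
    slope, intercept = np.polyfit(z_detected, np.log10(order), 1)
    imputed = 10 ** (intercept + slope * z_censored)   # fill in the censored tail

    full = np.concatenate([imputed, order])
    return full.mean(), full.std(ddof=1)

detected = np.array([0.8, 1.1, 1.5, 2.3, 3.0, 4.7, 6.2])   # invented concentrations
print(log_probability_regression(detected, n_censored=5))
```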
Estimation of distributional parameters for censored trace-level water-quality data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1984-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.
The Accuracy of Physicians' Clinical Predictions of Survival in Patients With Advanced Cancer.
Amano, Koji; Maeda, Isseki; Shimoyama, Satofumi; Shinjo, Takuya; Shirayama, Hiroto; Yamada, Takeshi; Ono, Shigeki; Yamamoto, Ryo; Yamamoto, Naoki; Shishido, Hideki; Shimizu, Mie; Kawahara, Masanori; Aoki, Shigeru; Demizu, Akira; Goshima, Masahiro; Goto, Keiji; Gyoda, Yasuaki; Hashimoto, Kotaro; Otomo, Sen; Sekimoto, Masako; Shibata, Takemi; Sugimoto, Yuka; Morita, Tatsuya
2015-08-01
Accurate prognoses are needed for patients with advanced cancer. To evaluate the accuracy of physicians' clinical predictions of survival (CPS) and assess the relationship between CPS and actual survival (AS) in patients with advanced cancer in palliative care units, hospital palliative care teams, and home palliative care services, as well as those receiving chemotherapy. This was a multicenter prospective cohort study conducted in 58 palliative care service centers in Japan. The palliative care physicians evaluated patients on the first day of admission and followed up all patients to their death or six months after enrollment. We evaluated the accuracy of CPS and assessed the relationship between CPS and AS in the four groups. We obtained a total of 2036 patients: 470, 764, 404, and 398 in hospital palliative care teams, palliative care units, home palliative care services, and chemotherapy, respectively. The proportion of accurate CPS (0.67-1.33 times AS) was 35% (95% CI 33-37%) in the total sample and ranged from 32% to 39% in each setting. While the proportion of patients living longer than CPS (pessimistic CPS) was 20% (95% CI 18-22%) in the total sample, ranging from 15% to 23% in each setting, the proportion of patients living shorter than CPS (optimistic CPS) was 45% (95% CI 43-47%) in the total sample, ranging from 43% to 49% in each setting. Physicians tend to overestimate when predicting survival in all palliative care patients, including those receiving chemotherapy. Copyright © 2015 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Bacterial contaminants in carbonated soft drinks sold in Bangladesh markets.
Akond, Muhammad Ali; Alam, Saidul; Hasan, S M R; Mubassara, Sanzida; Uddin, Sarder Nasir; Shirin, Momena
2009-03-31
A total of 225 carbonated soft drink (CSD) samples from nine brands, from various locations in five metropolitan cities of Bangladesh were examined to determine their bacteriological quality. Most samples were not in compliance with microbiological standards set by organizations like the World Health Organization (WHO). Pseudomonas aeruginosa was the predominant species with an incidence of 95%. Streptococcus spp. and Bacillus stearothermophilus were the next most prevalent with numbers ranging from 6 to 122 and 9 to 105 cfu/100 ml, respectively. Fifty four percent of the samples yielded Salmonella spp. at numbers ranging from 2 to 90 cfu/100 ml. Total coliform (TC) and faecal coliform (FC) counts were found in 68-100% and 76-100% of samples of individual brands, at numbers ranging from 5 to 213 and 3 to 276 cfu/100 ml, respectively. According to WHO standards 60-88% of samples from six brands and 32% and 40% of samples from two other brands belonged to the intermediate risk group with FC counts of 100-1000 cfu/100 ml. Heterotrophic plate counts, however, were under the permissible limit in all 225 samples. These findings suggest that carbonated soft drinks commercially available in Bangladesh pose substantial risks to public health.
Development of a direct observation Measure of Environmental Qualities of Activity Settings.
King, Gillian; Rigby, Patty; Batorowicz, Beata; McMain-Klein, Margot; Petrenchik, Theresa; Thompson, Laura; Gibson, Michelle
2014-08-01
The aim of this study was to develop an observer-rated measure of aesthetic, physical, social, and opportunity-related qualities of leisure activity settings for young people (with or without disabilities). Eighty questionnaires were completed by sets of raters who independently rated 22 community/home activity settings. The scales of the 32-item Measure of Environmental Qualities of Activity Settings (MEQAS; Opportunities for Social Activities, Opportunities for Physical Activities, Pleasant Physical Environment, Opportunities for Choice, Opportunities for Personal Growth, and Opportunities to Interact with Adults) were determined using principal components analyses. Test-retest reliability was determined for eight activity settings, rated twice (4-6wk interval) by a trained rater. The factor structure accounted for 80% of the variance. The Kaiser-Meyer-Olkin Measure of Sampling Adequacy was 0.73. Cronbach's alphas for the scales ranged from 0.76 to 0.96, and interrater reliabilities (ICCs) ranged from 0.60 to 0.93. Test-retest reliabilities ranged from 0.70 to 0.90. Results suggest that the MEQAS has a sound factor structure and preliminary evidence of internal consistency, interrater, and test-retest reliability. The MEQAS is the first observer-completed measure of environmental qualities of activity settings. The MEQAS allows researchers to assess comprehensively qualities and affordances of activity settings, and can be used to design and assess environmental qualities of programs for young people. © 2014 Mac Keith Press.
Stability of mercury concentration measurements in archived soil and peat samples
Navrátil, Tomáš; Burns, Douglas; Nováková, Tereza; Kaňa, Jiří; Rohovec, Jan; Roll, Michal; Ettler, Vojtěch
2018-01-01
Archived soil samples can provide important information on the history of environmental contamination, and by comparison with recently collected samples, temporal trends can be inferred. Little previous work has addressed whether mercury (Hg) concentrations in soil samples are stable with long-term storage under standard laboratory conditions. In this study, we re-analyzed, using cold vapor atomic absorption spectroscopy, a set of archived soil samples that ranged from relatively pristine mountainous sites to a polluted site near a non-ferrous metal smelter, spanning a wide range of Hg concentrations (6-6485 µg kg-1). Samples included organic and mineral soils and peats with a carbon content that ranged from 0.2 to 47.7%. Soil samples were stored in polyethylene bags or bottles and held in laboratory rooms where temperature was not kept to a constant value. Mercury concentrations in four subsets of samples were originally measured in 2000, 2005, 2006 and 2007, and re-analyzed in 2017, i.e. after 17, 12, 11 and 10 years of storage. Statistical analyses of either separated or lumped data yielded no significant differences between the original and current Hg concentrations. Based on these analyses, we show that archived soil and peat samples can be used to evaluate historical soil mercury contamination.
40 CFR 85.2233 - Steady state test equipment calibrations, adjustments, and quality control-EPA 91.
Code of Federal Regulations, 2013 CFR
2013-07-01
... tolerance range. The pressure in the sample cell must be the same with the calibration gas flowing during... this chapter. The check is done at 30 mph (48 kph), and a power absorption load setting to generate a... in § 85.2225(c)(1) are not met. (2) Leak checks. Each time the sample line integrity is broken, a...
40 CFR 85.2233 - Steady state test equipment calibrations, adjustments, and quality control-EPA 91.
Code of Federal Regulations, 2011 CFR
2011-07-01
... tolerance range. The pressure in the sample cell must be the same with the calibration gas flowing during... this chapter. The check is done at 30 mph (48 kph), and a power absorption load setting to generate a... in § 85.2225(c)(1) are not met. (2) Leak checks. Each time the sample line integrity is broken, a...
40 CFR 85.2233 - Steady state test equipment calibrations, adjustments, and quality control-EPA 91.
Code of Federal Regulations, 2012 CFR
2012-07-01
... tolerance range. The pressure in the sample cell must be the same with the calibration gas flowing during... this chapter. The check is done at 30 mph (48 kph), and a power absorption load setting to generate a... in § 85.2225(c)(1) are not met. (2) Leak checks. Each time the sample line integrity is broken, a...
Pollination and reproduction of an invasive plant inside and outside its ancestral range
NASA Astrophysics Data System (ADS)
Petanidou, Theodora; Price, Mary V.; Bronstein, Judith L.; Kantsa, Aphrodite; Tscheulin, Thomas; Kariyat, Rupesh; Krigas, Nikos; Mescher, Mark C.; De Moraes, Consuelo M.; Waser, Nickolas M.
2018-05-01
Comparing traits of invasive species within and beyond their ancestral range may improve our understanding of processes that promote aggressive spread. Solanum elaeagnifolium (silverleaf nightshade) is a noxious weed in its ancestral range in North America and is invasive on other continents. We compared investment in flowers and ovules, pollination success, and fruit and seed set in populations from Arizona, USA ("AZ") and Greece ("GR"). In both countries, the populations we sampled varied in size and types of present-day disturbance. Stature of plants increased with population size in AZ samples whereas GR plants were uniformly tall. Taller plants produced more flowers, and GR plants produced more flowers for a given stature and allocated more ovules per flower. Similar functional groups of native bees pollinated in AZ and GR populations, but visits to flowers decreased with population size and we observed no visits in the largest GR populations. As a result, plants in large GR populations were pollen-limited, and estimates of fecundity were lower on average in GR populations despite the larger allocation to flowers and ovules. These differences between plants in our AZ and GR populations suggest promising directions for further study. It would be useful to sample S. elaeagnifolium in Mediterranean climates within the ancestral range (e.g., in California, USA), to study asexual spread via rhizomes, and to use common gardens and genetic studies to explore the basis of variation in allocation patterns and of relationships between visitation and fruit set.
[Automated Assessment for Bone Age of Left Wrist Joint in Uyghur Teenagers by Deep Learning].
Hu, T H; Huo, Z; Liu, T A; Wang, F; Wan, L; Wang, M W; Chen, T; Wang, Y H
2018-02-01
To realize automated bone age assessment by applying deep learning to digital radiography (DR) image recognition of the left wrist joint in Uyghur teenagers, and to explore its practical application value in forensic bone age assessment. Pretreated X-ray films of the left wrist joint, taken from 245 male and 227 female Uyghur teenagers in the Uygur Autonomous Region aged from 13.0 to 19.0 years old, were chosen as subjects, and AlexNet was used as a regression model for image recognition. From the total samples above, 60% of the male and female DR images of the left wrist joint were selected as the training set and 10% as the validation set. The remaining 30% were used as the test set to obtain the image recognition accuracy within error ranges of ±1.0 and ±0.7 years, respectively, compared to the real age. The modelling results of the deep learning algorithm showed that for error ranges of ±1.0 and ±0.7 years, the accuracy on the training set was 81.4% and 75.6% in males, and 80.5% and 74.8% in females, respectively. For error ranges of ±1.0 and ±0.7 years, the accuracy on the test set was 79.5% and 71.2% in males, and 79.4% and 66.2% in females, respectively. The combination of bone age research on teenagers' left wrist joint and deep learning, which has high accuracy and good feasibility, can serve as the research basis of an automated bone age assessment system for the remaining joints of the body. Copyright© by the Editorial Department of Journal of Forensic Medicine.
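A small sketch of the accuracy metric used here (the share of predictions falling within ±1.0 or ±0.7 years of the true age); the ages shown are invented for illustration.

```python
import numpy as np

def accuracy_within(true_age, pred_age, tol):
    """Share of cases whose predicted age falls within +/- tol years of the true age."""
    return np.mean(np.abs(np.asarray(pred_age) - np.asarray(true_age)) <= tol)

true_age = np.array([13.2, 15.0, 16.4, 18.1, 19.0])
pred_age = np.array([13.9, 14.1, 16.1, 19.3, 18.6])
print(accuracy_within(true_age, pred_age, 1.0))  # 0.8
print(accuracy_within(true_age, pred_age, 0.7))  # 0.6
```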
Kamel, Alaa; Al-Ghamdi, Ahmad
2006-01-01
Determination of acaricide residues of flumethrin, tau-fluvalinate, coumaphos, and amitraz in honey and beeswax was carried out using a rapid extraction method utilizing C-18 SPE cartridges and an analytical method utilizing GC with ECD, NPD, and MSD detectors for the four acaricides. Recovery percentages from the extraction method ranged from 90-102%, while the minimum detection levels ranged from 0.01-0.05 mg/kg for the acaricides. Nine of the 21 analyzed samples were found to be contaminated with the acaricides tau-fluvalinate and coumaphos. Neither flumethrin nor amitraz was detected in any of the honey or wax samples. Coumaphos was found only in honey samples in which two samples exceeded the tolerance levels set by EPA and EC regulations. It has not been detected in beeswax. Five honey samples and eight beeswax samples were found to be contaminated with tau-fluvalinate. One of the wax samples was contaminated with a relatively high residue of tau-fluvalinate and contained above 10 mg/kg.
1991-08-01
and pressure data collected during the four seasons at White Sands Missile Range, New Mexico, are converted for use in artillery surface-to-surface...155-mm weapon system fired at White Sands Missile Range, New Mexico, did not reach an apogee of 30 km. For the low-angle simulations, the projectile...Range, New Mexico, during 1989. A sample of 226 rawinsonde flights containing representative sets for each of the four seasons is used as the met data
Average variograms to guide soil sampling
NASA Astrophysics Data System (ADS)
Kerry, R.; Oliver, M. A.
2004-10-01
Managing land in a site-specific way for agriculture requires detailed maps of the variation in the soil properties of interest. To predict accurately for mapping, the interval at which the soil is sampled should relate to the scale of spatial variation. A variogram can be used to guide sampling in two ways. A sampling interval of less than half the range of spatial dependence can be used, or the variogram can be used with the kriging equations to determine an optimal sampling interval to achieve a given tolerable error. A variogram might not be available for the site, but if the variograms of several soil properties were available on a similar parent material and/or particular topographic positions, an average variogram could be calculated from these. Averages of the variogram ranges and standardized average variograms from four different parent materials in southern England were used to suggest suitable sampling intervals for future surveys in similar pedological settings based on half the variogram range. The standardized average variograms were also used to determine optimal sampling intervals using the kriging equations. Similar sampling intervals were suggested by each method, and the maps of predictions based on data at different grid spacings were evaluated for the different parent materials. Variograms of loss on ignition (LOI) taken from the literature for other sites in southern England with similar parent materials had ranges close to the average for a given parent material, showing the possible wider application of such averages to guide sampling.
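A minimal sketch of the first approach (sampling at no more than half the variogram range), using a toy one-dimensional transect; the data, lag bins and the 95%-of-sill rule for picking the range are assumptions.

```python
import numpy as np

def empirical_variogram(x, values, lags):
    """Empirical semivariance gamma(h) = 0.5 * mean[(z(x_i) - z(x_j))^2]
    for pairs of 1-D sample locations whose separation falls in each lag bin."""
    d = np.abs(x[:, None] - x[None, :])
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        mask = (d > lo) & (d <= hi)
        gamma.append(sq[mask].mean())
    return np.array(gamma)

# Toy transect: loss-on-ignition-like values with spatial structure over tens of metres
rng = np.random.default_rng(3)
x = np.arange(0, 400, 5.0)
z = np.sin(x / 60.0) + 0.2 * rng.normal(size=x.size)

lags = np.arange(0, 105, 10.0)
gamma = empirical_variogram(x, z, lags)
range_est = lags[1:][np.argmax(gamma >= 0.95 * gamma.max())]   # first lag near the sill
print(f"approximate range: {range_est} m -> suggested sampling interval ~ {range_est / 2} m")
```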
Metabolomic technologies are increasingly being applied to study biological questions in a range of different settings from clinical through to environmental. As with other high-throughput technologies, such as those used in transcriptomics and proteomics, metabolomics continues...
Kuligowski, Julia; Carrión, David; Quintás, Guillermo; Garrigues, Salvador; de la Guardia, Miguel
2011-01-01
The selection of an appropriate calibration set is a critical step in multivariate method development. In this work, the effect of using different calibration sets, based on a previous classification of unknown samples, on the partial least squares (PLS) regression model performance has been discussed. As an example, attenuated total reflection (ATR) mid-infrared spectra of deep-fried vegetable oil samples from three botanical origins (olive, sunflower, and corn oil), with increasing polymerized triacylglyceride (PTG) content induced by a deep-frying process were employed. The use of a one-class-classifier partial least squares-discriminant analysis (PLS-DA) and a rooted binary directed acyclic graph tree provided accurate oil classification. Oil samples fried without foodstuff could be classified correctly, independent of their PTG content. However, class separation of oil samples fried with foodstuff, was less evident. The combined use of double-cross model validation with permutation testing was used to validate the obtained PLS-DA classification models, confirming the results. To discuss the usefulness of the selection of an appropriate PLS calibration set, the PTG content was determined by calculating a PLS model based on the previously selected classes. In comparison to a PLS model calculated using a pooled calibration set containing samples from all classes, the root mean square error of prediction could be improved significantly using PLS models based on the selected calibration sets using PLS-DA, ranging between 1.06 and 2.91% (w/w).
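The workflow can be sketched as below: fit one PLS model per pre-classified group and compare its prediction error against a pooled calibration. The spectra, class structure and component count are invented stand-ins, not the ATR mid-infrared data of the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)

# Toy "spectra" for two botanical origins differing by a baseline offset; y stands
# in for the analyte content (e.g., PTG %). All names and values are illustrative.
def make_class(offset, n=60, p=120):
    y = rng.uniform(0, 25, n)
    X = offset + np.outer(y, np.linspace(0.5, 1.5, p)) / 25 + rng.normal(0, 0.05, (n, p))
    return X, y

(Xa, ya), (Xb, yb) = make_class(0.0), make_class(2.0)

# Pooled calibration vs. class-specific calibration sets
pooled = PLSRegression(n_components=3).fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))
per_class = {c: PLSRegression(n_components=3).fit(X, y)
             for c, (X, y) in {"a": (Xa, ya), "b": (Xb, yb)}.items()}

Xt, yt = make_class(0.0, n=20)   # test samples assigned (e.g., by PLS-DA) to class "a"
rmsep = lambda m, X, y: np.sqrt(np.mean((m.predict(X).ravel() - y) ** 2))
print("pooled RMSEP:", round(rmsep(pooled, Xt, yt), 3))
print("class-specific RMSEP:", round(rmsep(per_class["a"], Xt, yt), 3))
```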
Downslope coarsening in aeolian grainflows of the Navajo Sandstone
NASA Astrophysics Data System (ADS)
Loope, David B.; Elder, James F.; Sweeney, Mark R.
2012-07-01
Downslope coarsening in grainflows has been observed on present-day dunes and generated in labs, but few previous studies have examined vertical sorting in ancient aeolian grainflows. We studied the grainflow strata of the Jurassic Navajo Sandstone in the southern Utah portion of its outcrop belt from Zion National Park (west) to Coyote Buttes and The Dive (east). At each study site, thick sets of grainflow-dominated cross-strata that were deposited by large transverse dunes comprise the bulk of the Navajo Sandstone. We studied three stratigraphic columns, one per site, composed almost exclusively of aeolian cross-strata. For each column, samples were obtained from one grainflow stratum in each consecutive set of the column, for a total of 139 samples from thirty-two sets of cross-strata. To investigate grading perpendicular to bedding within individual grainflows, we collected fourteen samples from four superimposed grainflow strata at The Dive. Samples were analyzed with a Malvern Mastersizer 2000 laser diffraction particle analyser. The median grain size of grainflow samples ranges from fine sand (164 μm) to coarse sand (617 μm). Using Folk and Ward criteria, samples are well-sorted to moderately-well-sorted. All but one of the twenty-eight sets showed at least slight downslope coarsening, but in general, downslope coarsening was not as well-developed or as consistent as that reported in laboratory subaqueous grainflows. Because coarse sand should be quickly sequestered within preserved cross-strata when bedforms climb, grain-size studies may help to test hypotheses for the stacking of sets of cross-strata.
Phase Tomography Reconstructed by 3D TIE in Hard X-ray Microscope
NASA Astrophysics Data System (ADS)
Yin, Gung-Chian; Chen, Fu-Rong; Pyun, Ahram; Je, Jung Ho; Hwu, Yeukuang; Liang, Keng S.
2007-01-01
X-ray phase tomography and phase imaging are promising ways of investigating low-Z materials. A polymer blend PE/PS sample was used to test the 3D phase retrieval method in the parallel-beam illuminated microscope. Because the polymer sample is thick, the phase retardation is quite mixed and the image cannot be resolved when the 2D transport of intensity equation (TIE) is applied. In this study, we provide a different approach for solving the phase in three dimensions for a thick sample. Our method integrates the 3D TIE with the Fourier slice theorem to solve for a thick phase sample. In our experiment, eight defocal-series image data sets were recorded covering the angular range of 0 to 180 degrees. Only three sets of image cubes were used in the 3D TIE equation to solve the phase tomography. The phase contrast of the polymer blend in 3D is clearly enhanced, and the two different components of the polymer blend can be distinguished in the phase tomography.
A 'range test' for determining scatterers with unknown physical properties
NASA Astrophysics Data System (ADS)
Potthast, Roland; Sylvester, John; Kusiak, Steven
2003-06-01
We describe a new scheme for determining the convex scattering support of an unknown scatterer when the physical properties of the scatterers are not known. The convex scattering support is a subset of the scatterer and provides information about its location and estimates for its shape. For convex polygonal scatterers the scattering support coincides with the scatterer and we obtain full shape reconstructions. The method will be formulated for the reconstruction of the scatterers from the far field pattern for one or a few incident waves. The method is non-iterative in nature and belongs to the type of recently derived generalized sampling schemes such as the 'no response test' of Luke-Potthast. The range test operates by testing whether it is possible to analytically continue a far field to the exterior of any test domain Ω_test. By intersecting the convex hulls of various test domains we can produce a minimal convex set, the convex scattering support, which must be contained in the convex hull of the support of any scatterer which produces that far field. The convex scattering support is calculated by testing the range of special integral operators for a sampling set of test domains. The numerical results can be used as an approximation for the support of the unknown scatterer. We prove convergence and regularity of the scheme and show numerical examples for sound-soft, sound-hard and medium scatterers. We can apply the range test to non-convex scatterers as well. We can conclude that an Ω_test which passes the range test has a non-empty intersection with the infinity-support (the complement of the unbounded component of the complement of the support) of the true scatterer, but cannot find a minimal set which must be contained therein.
Forrest, Sarah M; Challis, John H; Winter, Samantha L
2014-06-01
Approximate entropy (ApEn) is frequently used to identify changes in the complexity of isometric force records with ageing and disease. Different signal acquisition and processing parameters have been used, making comparison or confirmation of results difficult. This study determined the effect of sampling and parameter choices by examining changes in ApEn values across a range of submaximal isometric contractions of the first dorsal interosseus. Reducing the sample rate by decimation changed both the value and pattern of ApEn values dramatically. The pattern of ApEn values across the range of effort levels was not sensitive to the filter cut-off frequency, or the criterion used to extract the section of data for analysis. The complexity increased with increasing effort levels using a fixed 'r' value (which accounts for measurement noise) but decreased with increasing effort level when 'r' was set to 0.1 of the standard deviation of force. It is recommended isometric force records are sampled at frequencies >200Hz, template length ('m') is set to 2, and 'r' set to measurement system noise or 0.1SD depending on physiological process to be distinguished. It is demonstrated that changes in ApEn across effort levels are related to changes in force gradation strategy. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
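A compact implementation of approximate entropy with the settings recommended here (m = 2, 'r' tied to either measurement noise or 0.1 SD) might look like the following; the test signal is an invented stand-in for an isometric force record.

```python
import numpy as np

def approximate_entropy(signal, m=2, r=None):
    """Approximate entropy (Pincus). m: template length; r: tolerance.
    If r is None it defaults to 0.1 * SD of the signal, one of the two
    conventions discussed in the abstract."""
    x = np.asarray(signal, dtype=float)
    if r is None:
        r = 0.1 * np.std(x)

    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of templates (self-matches included)
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        c = np.sum(dist <= r, axis=1) / n
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(5)
force = 10 + 0.5 * np.sin(np.linspace(0, 20, 600)) + 0.1 * rng.normal(size=600)
print(round(approximate_entropy(force, m=2), 3))
```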
Diagnosis of Meningococcal Meningitis by Broad-Range Bacterial PCR with Cerebrospinal Fluid
Kotilainen, Pirkko; Jalava, Jari; Meurman, Olli; Lehtonen, Olli-Pekka; Rintala, Esa; Seppälä, Olli-Pekka; Eerola, Erkki; Nikkari, Simo
1998-01-01
We used broad-range bacterial PCR combined with DNA sequencing to examine prospectively cerebrospinal fluid (CSF) samples from patients with suspected meningitis. Fifty-six CSF samples from 46 patients were studied during the year 1995. Genes coding for bacterial 16S and/or 23S rRNA genes could be amplified from the CSF samples from five patients with a clinical picture consistent with acute bacterial meningitis. For these patients, the sequenced PCR product shared 98.3 to 100% homology with the Neisseria meningitidis sequence. For one patient, the diagnosis was initially made by PCR alone. Of the remaining 51 CSF samples, for 50 (98.0%) samples the negative PCR findings were in accordance with the negative findings by bacterial culture and Gram staining, as well as with the eventual clinical diagnosis for the patient. However, the PCR test failed to detect the bacterial rRNA gene in one CSF sample, the culture of which yielded Listeria monocytogenes. These results invite new research efforts to be focused on the application of PCR with broad-range bacterial primers to improve the etiologic diagnosis of bacterial meningitis. In a clinical setting, Gram staining and bacterial culture still remain the cornerstones of diagnosis. PMID:9665992
Wesolowski, Edwin A.
1999-01-01
A streamflow and water-quality model was developed for reaches of Sand and Caddo Creeks in south-central Oklahoma to simulate the effects of wastewater discharge from a refinery and a municipal treatment plant. The purpose of the model was to simulate conditions during low streamflow when the conditions controlling dissolved-oxygen concentrations are most severe. Data collected to calibrate and verify the streamflow and water-quality model include continuously monitored streamflow and water-quality data at two gaging stations and three temporary monitoring stations; wastewater discharge from two wastewater plants; two sets each of five water-quality samples at nine sites during a 24-hour period; dye and propane samples; periphyton samples; and sediment oxygen demand measurements. The water-quality sampling, at a 6-hour frequency, was based on a Lagrangian reference frame in which the same volume of water was sampled at each site. To represent the unsteady streamflows and the dynamic water-quality conditions, a transport modeling system was used that included both a model to route streamflow and a model to transport dissolved conservative constituents with linkage to reaction kinetics similar to the U.S. Environmental Protection Agency QUAL2E model to simulate nonconservative constituents. These model codes are the Diffusion Analogy Streamflow Routing Model (DAFLOW) and the branched Lagrangian transport model (BLTM) and BLTM/QUAL2E that, collectively, as calibrated models, are referred to as the Ardmore Water-Quality Model. The Ardmore DAFLOW model was calibrated with three sets of streamflows that collectively ranged from 16 to 3,456 cubic feet per second. The model uses only one set of calibrated coefficients and exponents to simulate streamflow over this range. The Ardmore BLTM was calibrated for transport by simulating dye concentrations collected during a tracer study when streamflows ranged from 16 to 23 cubic feet per second. Therefore, the model is expected to be most useful for low streamflow simulations. The Ardmore BLTM/QUAL2E model was calibrated and verified with water-quality data from nine sites where two sets of five samples were collected. The streamflow during the water-quality sampling in Caddo Creek at site 7 ranged from 8.4 to 20 cubic feet per second, of which about 5.0 to 9.7 cubic feet per second was contributed by Sand Creek. The model simulates the fate and transport of 10 water-quality constituents. The model was verified by running it using data that were not used in calibration; only phytoplankton were not verified. Measured and simulated concentrations of dissolved oxygen exhibited a marked daily pattern that was attributable to waste loading and algal activity. Dissolved-oxygen measurements during this study and simulated dissolved-oxygen concentrations using the Ardmore Water-Quality Model, for the conditions of this study, illustrate that the dissolved-oxygen sag curve caused by the upstream wastewater discharges is confined to Sand Creek.
Validation of an automated fluorescein method for determining bromide in water
Fishman, M. J.; Schroder, L.J.; Friedman, L.C.
1985-01-01
Surface, atmospheric precipitation and deionized water samples were spiked with µg l-1 concentrations of bromide, and the solutions stored in polyethylene and polytetrafluoroethylene bottles. Bromide was determined periodically for 30 days. Automated fluorescein and ion chromatography methods were used to determine bromide in these prepared samples. Analysis of the data by the paired t-test indicates that the two methods are not significantly different at a probability of 95% for samples containing from 0.015 to 0.5 mg l-1 of bromide. The correlation coefficient for the same sets of paired data is 0.9987. Recovery data, except for the surface water samples to which 0.005 mg l-1 of bromide was added, range from 89 to 112%. There appears to be no loss of bromide from solution in either type of container.
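The core of the comparison above is a paired t-test between the two methods on the same spiked samples; a minimal sketch of that test, using made-up bromide values rather than the study's data, is:

```python
# Paired comparison of two bromide methods (illustrative values, not the study data).
import numpy as np
from scipy import stats

fluorescein = np.array([0.015, 0.052, 0.110, 0.240, 0.500])  # mg/L, hypothetical
ion_chrom   = np.array([0.016, 0.050, 0.108, 0.245, 0.495])  # mg/L, hypothetical

t_stat, p_value = stats.ttest_rel(fluorescein, ion_chrom)     # paired t-test
r = np.corrcoef(fluorescein, ion_chrom)[0, 1]                 # correlation of the paired data

# The two methods are treated as statistically indistinguishable at the 95% level
# when p_value > 0.05.
print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}, r = {r:.4f}")
```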
Borhani zarandi, Mahmoud; Amrollahi Bioki, Hojjat; Mirbagheri, Zahra-alsadat; Tabbakh, Farshid; Mirjalili, Ghazanfar
2012-01-01
In this paper a series of low-density polyethylene (LDPE) blends with different percentages (10%, 20%, and 30%) of EVA and sets of low-density polyethylene sheets were prepared. This set consists of four subsets, which were made under different cooling methods: fast cooling in liquid nitrogen, cooling with cassette, exposing in open air, and cooling in oven, to investigate the crystallinity effects. All of the samples were irradiated with 10MeV electron-beam in the dose range of 0-250kGy using a Rhodotron accelerator system. The variation of thermal conductivity (k) and specific heat capacity (C(p)) of all of the samples were measured. We found that, for the absorption dose less than 150kGy, k of the LDPE samples at a prescribed temperature range decreased by increasing the amount of dose, but then the change is insignificant. With increasing the crystallinity, k of the LDPE samples increased, whereas C(p) of this material is decreased. In the case of LDPE/EVA blends, for the dose less than 150kGy, C(p) (at 40°C) and k (in average) decreased, but then the change is insignificant. With increasing the amount of additive (EVA), C(p) and k increased. Copyright © 2011 Elsevier Ltd. All rights reserved.
Statistical characterization of a large geochemical database and effect of sample size
Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.
2005-01-01
The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompasses 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States), and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set. Part of the reason relates to the presence of mixtures of subpopulations and outliers. Random samples of the data set with successively smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
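The abstract's observation that large samples fail standard normality tests is easy to reproduce; the sketch below uses a synthetic lognormal mixture (not the NGS concentrations) and re-tests log-normality at decreasing sample sizes, along with the Q-Q coordinates used for graphical inspection:

```python
# Sample-size effect on normality tests (synthetic data standing in for NGS values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two lognormal subpopulations, loosely mimicking a heterogeneous element distribution.
conc = np.concatenate([rng.lognormal(1.0, 0.4, 15000),
                       rng.lognormal(2.0, 0.6, 1500)])

for n in (16000, 1000, 200):
    sample = rng.choice(conc, size=n, replace=False)
    log_sample = np.log(sample)
    # Shapiro-Wilk for modest n, D'Agostino-Pearson for large n.
    _, p = stats.shapiro(log_sample) if n <= 2000 else stats.normaltest(log_sample)
    print(f"n = {n:5d}  p = {p:.3g}  {'reject' if p < 0.05 else 'cannot reject'} log-normality")

# Ordered quantile pairs for a lognormal Q-Q plot; slope changes (kinks) mark subpopulations.
osm, osr = stats.probplot(np.log(conc), dist="norm", fit=False)
```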
Gras, Ronda; Luong, Jim; Shellie, Robert A
2015-11-17
We introduce a technique for the direct measurement of elemental mercury in light hydrocarbons such as natural gas. We determined elemental mercury at the parts-per-trillion level with high precision [<3% RSD (n = 20 manual injection)] using gas chromatography with ultraviolet photometric detection (GC-UV) at 254 nm. Our approach requires a small sample volume (1 mL) and does not rely on any form of sample preconcentration. The GC-UV separation employs an inert divinylbenzene porous layer open tubular column set to separate mercury from other components in the sample matrix. We incorporated a 10-port gas-sampling valve in the GC-UV system, which enables automated sampling, as well as back flushing capability to enhance system cleanliness and sample throughput. Total analysis time is <2 min, and the procedure is linear over a range of 2-83 μg/m(3) [correlation coefficient of R(2) = 0.998] with a measured recovery of >98% over this range.
Sulfuric acid/hydrogen peroxide digestion and colorimetric determination of phosphorus in meat and meat products: a collaborative study.
Christians, D K; Aspelund, T G; Brayton, S V; Roberts, L L
1991-01-01
Seven laboratories participated in a collaborative study of a method for determination of phosphorus in meat and meat products. Samples are digested in sulfuric acid and hydrogen peroxide; digestion is complete in approximately 10 min. Phosphorus is determined by colorimetric analysis of a dilute aliquot of the sample digest. The collaborators analyzed 3 sets of blind duplicate samples from each of 6 classes of meat (U.S. Department of Agriculture classifications): smoked ham, water-added ham, canned ham, pork sausage, cooked sausage, and hamburger. The calibration curve was linear over the range of standard solutions prepared (phosphorus levels from 0.05 to 1.00%); levels in the collaborative study samples ranged from 0.10 to 0.30%. Standard deviations for repeatability (sr) and reproducibility (SR) ranged from 0.004 to 0.012 and 0.007 to 0.014, respectively. Corresponding relative standard deviations (RSDr and RSDR, respectively) ranged from 1.70 to 7.28% and 3.50 to 9.87%. Six laboratories analyzed samples by both the proposed method and AOAC method 24.016 (14th Ed.). One laboratory reported results by the proposed method only. Statistical evaluations indicated no significant difference between the 2 methods. The method has been adopted official first action by AOAC.
Improving automatic peptide mass fingerprint protein identification by combining many peak sets.
Rögnvaldsson, Thorsteinn; Häkkinen, Jari; Lindberg, Claes; Marko-Varga, György; Potthast, Frank; Samuelsson, Jim
2004-08-05
An automated peak picking strategy is presented where several peak sets with different signal-to-noise levels are combined to form a more reliable statement on the protein identity. The strategy is compared against both manual peak picking and industry standard automated peak picking on a set of mass spectra obtained after tryptic in gel digestion of 2D-gel samples from human fetal fibroblasts. The set of spectra contain samples ranging from strong to weak spectra, and the proposed multiple-scale method is shown to be much better on weak spectra than the industry standard method and a human operator, and equal in performance to these on strong and medium strong spectra. It is also demonstrated that peak sets selected by a human operator display a considerable variability and that it is impossible to speak of a single "true" peak set for a given spectrum. The described multiple-scale strategy both avoids time-consuming parameter tuning and exceeds the human operator in protein identification efficiency. The strategy therefore promises reliable automated user-independent protein identification using peptide mass fingerprints.
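One concrete way to picture the "many peak sets" idea is to pick peaks at several signal-to-noise thresholds and keep only peaks supported by more than one set; the toy sketch below is an invented illustration of that bookkeeping, not the authors' published algorithm or scoring:

```python
# Toy consensus over peak sets picked at different S/N thresholds (illustrative only).
import numpy as np

def pick_peaks(mz, intensity, snr):
    """Crude picker: local maxima exceeding snr times the median intensity."""
    thresh = snr * np.median(intensity)
    idx = np.where((intensity[1:-1] > intensity[:-2]) &
                   (intensity[1:-1] > intensity[2:]) &
                   (intensity[1:-1] > thresh))[0] + 1
    return mz[idx]

def consensus(peak_sets, tol=0.2, min_votes=2):
    """Keep m/z values that occur (within tol) in at least min_votes of the peak sets."""
    all_peaks = np.sort(np.concatenate(peak_sets))
    kept = []
    for p in all_peaks:
        votes = sum(np.any(np.abs(s - p) <= tol) for s in peak_sets)
        if votes >= min_votes and not (kept and p - kept[-1] <= tol):
            kept.append(p)
    return np.array(kept)

# Usage on a synthetic spectrum:
rng = np.random.default_rng(1)
mz = np.linspace(800.0, 3000.0, 5000)
intensity = rng.exponential(1.0, mz.size)
peak_sets = [pick_peaks(mz, intensity, snr) for snr in (3, 5, 8)]
consensus_peaks = consensus(peak_sets)
```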
Note: Inter-satellite laser range-rate measurement by using digital phase locked loop.
Liang, Yu-Rong; Duan, Hui-Zong; Xiao, Xin-Long; Wei, Bing-Bing; Yeh, Hsien-Chi
2015-01-01
This note presents an improved high-resolution frequency measurement system dedicated for the inter-satellite range-rate monitoring that could be used in the future's gravity recovery mission. We set up a simplified common signal test instead of the three frequencies test. The experimental results show that the dominant noises are the sampling time jitter and the thermal drift of electronic components, which can be reduced by using the pilot-tone correction and passive thermal control. The improved noise level is about 10(-8) Hz/Hz(1/2)@0.01Hz, limited by the signal-to-noise ratio of the sampling circuit.
The rotate-plus-shift C-arm trajectory. Part I. Complete data with less than 180° rotation.
Ritschl, Ludwig; Kuntz, Jan; Fleischmann, Christof; Kachelrieß, Marc
2016-05-01
In the last decade, C-arm-based cone-beam CT became a widely used modality for intraoperative imaging. Typically a C-arm CT scan is performed using a circular or elliptical trajectory around a region of interest. Therefore, an angular range of at least 180° plus fan angle must be covered to ensure a completely sampled data set. However, mobile C-arms designed with a focus on classical 2D applications like fluoroscopy may be limited to a mechanical rotation range of less than 180° to improve handling and usability. The method proposed in this paper allows for the acquisition of a fully sampled data set with a system limited to a mechanical rotation range of at least 180° minus fan angle using a new trajectory design. This enables CT like 3D imaging with a wide range of C-arm devices which are mainly designed for 2D imaging. The proposed trajectory extends the mechanical rotation range of the C-arm system with two additional linear shifts. Due to the divergent character of the fan-beam geometry, these two shifts lead to an additional angular range of half of the fan angle. Combining one shift at the beginning of the scan followed by a rotation and a second shift, the resulting rotate-plus-shift trajectory enables the acquisition of a completely sampled data set using only 180° minus fan angle of rotation. The shifts can be performed using, e.g., the two orthogonal positioning axes of a fully motorized C-arm system. The trajectory was evaluated in phantom and cadaver examinations using two prototype C-arm systems. The proposed trajectory leads to reconstructions without limited angle artifacts. Compared to the limited angle reconstructions of 180° minus fan angle, image quality increased dramatically. Details in the rotate-plus-shift reconstructions were clearly depicted, whereas they are dominated by artifacts in the limited angle scan. The method proposed here employs 3D imaging using C-arms with less than 180° rotation range adding full 3D functionality to a C-arm device retaining both handling comfort and the usability of 2D imaging. This method has a clear potential for clinical use especially to meet the increasing demand for an intraoperative 3D imaging.
Reference data set of volcanic ash physicochemical and optical properties
NASA Astrophysics Data System (ADS)
Vogel, A.; Diplas, S.; Durant, A. J.; Azar, A. S.; Sunding, M. F.; Rose, W. I.; Sytchkova, A.; Bonadonna, C.; Krüger, K.; Stohl, A.
2017-09-01
Uncertainty in the physicochemical and optical properties of volcanic ash particles creates errors in the detection and modeling of volcanic ash clouds and in quantification of their potential impacts. In this study, we provide a data set that describes the physicochemical and optical properties of a representative selection of volcanic ash samples from nine different volcanic eruptions covering a wide range of silica contents (50-80 wt % SiO2). We measured and calculated parameters describing the physical (size distribution, complex shape, and dense-rock equivalent mass density), chemical (bulk and surface composition), and optical (complex refractive index from ultraviolet to near-infrared wavelengths) properties of the volcanic ash and classified the samples according to their SiO2 and total alkali contents into the common igneous rock types basalt to rhyolite. We found that the mass density ranges between
Thermal effects on rare earth element and strontium isotope chemistry in single conodont elements
NASA Astrophysics Data System (ADS)
Armstrong, H. A.; Pearson, D. G.; Griselin, M.
2001-02-01
A low-blank, high sensitivity isotope dilution, ICP-MS analytical technique has been used to obtain REE abundance data from single conodont elements weighing as little as 5 μg. Sr isotopes can also be measured from the column eluants enabling Sr isotope ratios and REE abundance to be determined from the same dissolution. Results are comparable to published analyses comprising tens to hundreds of elements. To study the effects of thermal metamorphism on REE and strontium mobility in conodonts, samples were selected from a single bed adjacent to a basaltic dyke and from the internationally used colour alteration index (CAI) "standard set." Our analyses span the range of CAI 1 to 8. Homogeneous REE patterns, "bell-shaped" shale-normalised REE patterns are observed across the range of CAI 1 to 6 in both sample sets. This pattern is interpreted as the result of adsorption during early diagenesis and could reflect original seawater chemistry. Above CAI 6 REE patterns become less predictable and perturbations from the typical REE pattern are likely to be due to the onset of apatite recrystallisation. Samples outside the contact aureole of the dyke have a mean 87Sr/ 86Sr ratio of 0.708165, within the broad range of published mid-Carboniferous seawater values. Our analysis indicates conodonts up to CAI 6 record primary geochemical signals that may be a proxy for ancient seawater.
Gillet, Ludovic C.; Navarro, Pedro; Tate, Stephen; Röst, Hannes; Selevsek, Nathalie; Reiter, Lukas; Bonner, Ron; Aebersold, Ruedi
2012-01-01
Most proteomic studies use liquid chromatography coupled to tandem mass spectrometry to identify and quantify the peptides generated by the proteolysis of a biological sample. However, with the current methods it remains challenging to rapidly, consistently, reproducibly, accurately, and sensitively detect and quantify large fractions of proteomes across multiple samples. Here we present a new strategy that systematically queries sample sets for the presence and quantity of essentially any protein of interest. It consists of using the information available in fragment ion spectral libraries to mine the complete fragment ion maps generated using a data-independent acquisition method. For this study, the data were acquired on a fast, high resolution quadrupole-quadrupole time-of-flight (TOF) instrument by repeatedly cycling through 32 consecutive 25-Da precursor isolation windows (swaths). This SWATH MS acquisition setup generates, in a single sample injection, time-resolved fragment ion spectra for all the analytes detectable within the 400–1200 m/z precursor range and the user-defined retention time window. We show that suitable combinations of fragment ions extracted from these data sets are sufficiently specific to confidently identify query peptides over a dynamic range of 4 orders of magnitude, even if the precursors of the queried peptides are not detectable in the survey scans. We also show that queried peptides are quantified with a consistency and accuracy comparable with that of selected reaction monitoring, the gold standard proteomic quantification method. Moreover, targeted data extraction enables ad libitum quantification refinement and dynamic extension of protein probing by iterative re-mining of the once-and-forever acquired data sets. This combination of unbiased, broad range precursor ion fragmentation and targeted data extraction alleviates most constraints of present proteomic methods and should be equally applicable to the comprehensive analysis of other classes of analytes, beyond proteomics. PMID:22261725
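The window scheme can be checked with one line of arithmetic: 32 consecutive isolation windows of 25 Da tile exactly the stated precursor range,

\[
32 \times 25\ \mathrm{Da} = 800\ \mathrm{Da} = 1200 - 400\ (m/z).
\]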
ASTM clustering for improving coal analysis by near-infrared spectroscopy.
Andrés, J M; Bona, M T
2006-11-15
Multivariate analysis techniques have been applied to near-infrared (NIR) spectra of coals to investigate the relationship between nine coal properties (moisture (%), ash (%), volatile matter (%), fixed carbon (%), heating value (kcal/kg), carbon (%), hydrogen (%), nitrogen (%) and sulphur (%)) and the corresponding predictor variables. In this work, a whole set of coal samples was grouped into six more homogeneous clusters following the ASTM reference method for classification prior to the application of calibration methods to each coal set. The results obtained showed a considerable improvement in the determination error compared with the calibration for the whole sample set. For some groups, the established calibrations approached the quality required by the ASTM/ISO norms for laboratory analysis. To predict property values for a new coal sample, it is necessary to assign that sample to its respective group. Thus, the discrimination and classification ability of coal samples by Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFTS) in the NIR range was also studied by applying Soft Independent Modelling of Class Analogy (SIMCA) and Linear Discriminant Analysis (LDA) techniques. Modelling of the groups by SIMCA led to overlapping models that cannot discriminate for unique classification. On the other hand, the application of Linear Discriminant Analysis improved the classification of the samples but not enough to be satisfactory for every group considered.
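As a rough illustration of the classification step (not the authors' exact SIMCA/LDA workflow), a cross-validated LDA can be run on a matrix of NIR spectra with ASTM group labels; the array names, shapes and placeholder data below are assumptions:

```python
# Hedged sketch: LDA classification of NIR spectra into ASTM coal groups.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 700))      # placeholder NIR spectra (samples x wavenumbers)
y = rng.integers(0, 6, size=120)     # placeholder ASTM group labels (six clusters)

# PCA before LDA is a common way to handle many collinear wavelengths.
model = make_pipeline(StandardScaler(), PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated classification accuracy:", scores.mean())
```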
Interface adjustment and exchange coupling in the IrMn/NiFe system
NASA Astrophysics Data System (ADS)
Spizzo, F.; Tamisari, M.; Chinni, F.; Bonfiglioli, E.; Del Bianco, L.
2017-01-01
The exchange bias effect was investigated, in the 5-300 K temperature range, in samples of IrMn [100 Å]/NiFe [50 Å] (set A) and in samples with inverted layer-stacking sequence (set B), produced at room temperature by DC magnetron sputtering in a static magnetic field of 400 Oe. The samples of each set differ for the nominal thickness (tCu) of a Cu spacer, grown at the interface between the antiferromagnetic and ferromagnetic layers, which was varied between 0 and 2 Å. It has been found out that the Cu insertion reduces the values of the exchange field and of the coercivity and can also affect their thermal evolution, depending on the stack configuration. Indeed, the latter also determines a peculiar variation of the exchange bias properties with time, shown and discussed with reference to the samples without Cu of the two sets. The results have been explained considering that, in this system, the exchange coupling mechanism is ruled by the glassy magnetic behavior of the IrMn spins located at the interface with the NiFe layer. Varying the stack configuration and tCu results in a modulation of the structural and magnetic features of the interface, which ultimately affects the spins dynamics of the glassy IrMn interfacial component.
Rekully, Cameron M; Faulkner, Stefan T; Lachenmyer, Eric M; Cunningham, Brady R; Shaw, Timothy J; Richardson, Tammi L; Myrick, Michael L
2018-03-01
An all-pairs method is used to analyze phytoplankton fluorescence excitation spectra. An initial set of nine phytoplankton species is analyzed in pairwise fashion to select two optical filter sets, and then the two filter sets are used to explore variations among a total of 31 species in a single-cell fluorescence imaging photometer. Results are presented in terms of pair analyses; we report that 411 of the 465 possible pairings of the larger group of 31 species can be distinguished using the initial nine-species-based selection of optical filters. A bootstrap analysis based on the larger data set shows that the distribution of possible pair separation results based on a randomly selected nine-species initial calibration set is strongly peaked in the 410-415 pair separation range, consistent with our experimental result. Further, the result for filter selection using all 31 species is also 411 pair separations; The set of phytoplankton fluorescence excitation spectra is intuitively high in rank due to the number and variety of pigments that contribute to the spectrum. However, the results in this report are consistent with an effective rank as determined by a variety of heuristic and statistical methods in the range of 2-3. These results are reviewed in consideration of how consistent the filter selections are from model to model for the data presented here. We discuss the common observation that rank is generally found to be relatively low even in many seemingly complex circumstances, so that it may be productive to assume a low rank from the beginning. If a low-rank hypothesis is valid, then relatively few samples are needed to explore an experimental space. Under very restricted circumstances for uniformly distributed samples, the minimum number for an initial analysis might be as low as 8-11 random samples for 1-3 factors.
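The pair counts follow directly from the all-pairs construction: with n species there are n(n-1)/2 pairings, so

\[
\binom{31}{2} = \frac{31 \times 30}{2} = 465, \qquad \binom{9}{2} = 36,
\]

and the 411 separated pairs correspond to roughly 88% of the 465 possible pairings.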
Paris, Daniel H; Blacksell, Stuart D; Newton, Paul N; Day, Nicholas P J
2008-12-01
We present a loop-mediated isothermal PCR assay (LAMP) targeting the groEL gene, which encodes the 60kDa heat shock protein of Orientia tsutsugamushi. Evaluation included testing of 63 samples of contemporary in vitro isolates, buffy coats and whole blood samples from patients with fever. Detection limits for LAMP were assessed by serial dilutions and quantitation by real-time PCR assay based on the same target gene: three copies/microl for linearized plasmids, 26 copies/microl for VERO cell culture isolates, 14 copies/microl for full blood samples and 41 copies/microl for clinical buffy coats. Based on a limited sample number, the LAMP assay is comparable in sensitivity with conventional nested PCR (56kDa gene), with limits of detection well below the range of known admission bacterial loads of patients with scrub typhus. This inexpensive method requires no sophisticated equipment or sample preparation, and may prove useful as a diagnostic assay in financially poor settings; however, it requires further prospective validation in the field setting.
Design and development of a simple UV fluorescence multi-spectral imaging system
NASA Astrophysics Data System (ADS)
Tovar, Carlos; Coker, Zachary; Yakovlev, Vladislav V.
2018-02-01
Healthcare access in low-resource settings is compromised by the availability of affordable and accurate diagnostic equipment. The four primary poverty-related diseases - AIDS, pneumonia, malaria, and tuberculosis - account for approximately 400 million annual deaths worldwide as of 2016 estimates. Current diagnostic procedures for these diseases are prolonged and can become unreliable under various conditions. We present the development of a simple low-cost UV fluorescence multi-spectral imaging system geared towards low resource settings for a variety of biological and in-vitro applications. Fluorescence microscopy serves as a useful diagnostic indicator and imaging tool. The addition of a multi-spectral imaging modality allows for the detection of fluorophores within specific wavelength bands, as well as the distinction between fluorophores possessing overlapping spectra. The developed instrument has the potential for a very diverse range of diagnostic applications in basic biomedical science and biomedical diagnostics and imaging. Performance assessment of the microscope will be validated with a variety of samples ranging from organic compounds to biological samples.
NASA Astrophysics Data System (ADS)
Lam, Daryl; Croke, Jacky; Thompson, Chris; Sharma, Ashneel
2017-09-01
The application of palaeoflood hydrology in Australia has been limited since its initial introduction > 30 years ago. This study adopts a regional, field-based approach to sampling slackwater deposits in a subtropical setting in southeast Queensland beyond the traditional arid setting. We explore the potential and challenges of using sites outside the traditional physiographical setting of bedrock gorges. Over 30 flood units were identified across different physiographical settings using a range of criteria. Evidence of charcoal-rich layers and palaeosol development assisted in the identification and separation of distinct flood units. The OSL-dated flood units are relatively young with two-thirds of the samples being < 1000 years old. The elevation of all flood units have resulted in estimated minimum discharges greater than the 1% annual exceedance probability. Although these are in the same order of gauged flood magnitudes, > 80% of them classified as 'extreme event'. This study opens up the renewed possibility of applying palaeoflood hydrology to more populated parts of Australia where the need for improved estimation of flood frequency and magnitude is now urgent in light of several extreme flood events. Preliminary contributions to improve the understanding between high magnitude floods and regional climatic drivers are also discussed. Recognised regional extreme floods generally coincide with La Niña and negative IPO phases, while tropical cyclones appear to be a key weather system in generating such large floods.
Benthic Foraminifera Clumped Isotope Calibration
NASA Astrophysics Data System (ADS)
Piasecki, A.; Marchitto, T. M., Jr.; Bernasconi, S. M.; Grauel, A. L.; Tisserand, A. A.; Meckler, N.
2017-12-01
Due to the widespread spatial and temporal distribution of benthic foraminifera within ocean sediments, they are commonly used for reconstructing past ocean temperatures and environmental conditions. Many foraminifera-based proxies, however, require calibration schemes that are species specific, which becomes complicated in deep time due to extinct species. Furthermore, calibrations often depend on seawater chemistry being stable and/or constrained, which is not always the case over significant climate state changes like the Eocene Oligocene Transition. Here we study the effect of varying benthic foraminifera species using the clumped isotope proxy for temperature. The benefit of this proxy is that it is independent of seawater chemistry, whereas the downside is that it requires relatively large sample amounts. Due to recent advancements in sample processing that reduce the sample weight by a factor of 10, clumped isotopes can now be applied to a range of paleoceanographic questions. First, however, we need to prove that, unlike for other proxies, there are no interspecies differences with clumped isotopes, as is predicted by first principles modeling. We used a range of surface sediment samples covering a temperature range of 1-20°C from the Pacific, Mediterranean, Bahamas, and the Atlantic, and measured the clumped isotope composition of 11 different species of benthic foraminifera. We find that there are indeed no discernible species-specific differences within the sample set. In addition, the samples have the same temperature response to the proxy as inorganic carbonate samples over the same temperature range. As a result, we can now apply this proxy to a wide range of samples and foraminifera species from different ocean basins with different ocean chemistry and be confident that observed signals reflect variations in temperature.
A New Electromagnetic Instrument for Thickness Gauging of Conductive Materials
NASA Technical Reports Server (NTRS)
Fulton, J. P.; Wincheski, B.; Nath, S.; Reilly, J.; Namkung, M.
1994-01-01
Eddy current techniques are widely used to measure the thickness of electrically conducting materials. The approach, however, requires an extensive set of calibration standards and can be quite time consuming to set up and perform. Recently, an electromagnetic sensor was developed which eliminates the need for impedance measurements. The ability to monitor the magnitude of a voltage output independent of the phase enables the use of extremely simple instrumentation. Using this new sensor a portable hand-held instrument was developed. The device makes single point measurements of the thickness of nonferromagnetic conductive materials. The technique utilized by this instrument requires calibration with two samples of known thicknesses that are representative of the upper and lower thickness values to be measured. The accuracy of the instrument depends upon the calibration range, with a larger range giving a larger error. The measured thicknesses are typically within 2-3% of the calibration range (the difference between the thin and thick sample) of their actual values. In this paper the design, operational and performance characteristics of the instrument along with a detailed description of the thickness gauging algorithm used in the device are presented.
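The two-standard calibration described above amounts to a straight line between the readings for the thin and thick samples; the sketch below assumes, purely for illustration, that the sensor output varies linearly with thickness over the calibrated range (all values are hypothetical):

```python
# Two-point thickness calibration sketch (linear response over the range is an assumption).

def make_gauge(v_thin, t_thin, v_thick, t_thick):
    """Return a function mapping an instrument reading to thickness,
    calibrated from two standards of known thickness."""
    slope = (t_thick - t_thin) / (v_thick - v_thin)
    return lambda v: t_thin + slope * (v - v_thin)

# Hypothetical calibration standards: 1.0 mm and 3.0 mm nonferromagnetic samples.
thickness_of = make_gauge(v_thin=0.82, t_thin=1.0, v_thick=0.31, t_thick=3.0)

print(round(thickness_of(0.55), 2), "mm")   # a reading between the two standards -> ~2.06 mm
```

Consistent with the abstract, a wider gap between the two calibration standards implies a larger absolute error, since the quoted accuracy is 2-3% of the calibration range.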
Staging Dementia Using Clinical Dementia Rating Scale Sum of Boxes Scores
O'Bryant, Sid E.; Waring, Stephen C.; Cullum, C. Munro; Hall, James; Lacritz, Laura; Massman, Paul J.; Lupo, Philip J.; Reisch, Joan S.; Doody, Rachelle
2012-01-01
Background The Clinical Dementia Rating Scale Sum of Boxes (CDR-SOB) score is commonly used, although the utility of this score in staging dementia severity is not well established. Objective To investigate the effectiveness of CDR-SOB scores in staging dementia severity compared with the global CDR score. Design Retrospective study. Setting Texas Alzheimer's Research Consortium minimum data set cohort. Participants A total of 1577 participants (110 controls, 202 patients with mild cognitive impairment, and 1265 patients with probable Alzheimer disease) were available for analysis. Main Outcome Measures Receiver operating characteristic curves were generated from a derivation sample to determine optimal cutoff scores and ranges, which were then applied to the validation sample. Results Optimal ranges of CDR-SOB scores corresponding to the global CDR scores were 0.5 to 4.0 for a global score of 0.5, 4.5 to 9.0 for a global score of 1.0, 9.5 to 15.5 for a global score of 2.0, and 16.0 to 18.0 for a global score of 3.0. When applied to the validation sample, κ scores ranged from 0.86 to 0.94 (P <.001 for all), with 93.0% of the participants falling within the new staging categories. Conclusions The CDR-SOB score compares well with the global CDR score for dementia staging. Owing to the increased range of values, the CDR-SOB score offers several advantages over the global score, including increased utility in tracking changes within and between stages of dementia severity. Interpretive guidelines for CDR-SOB scores are provided. PMID:18695059
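The optimal ranges reported above translate directly into a staging rule; a small helper using exactly those published cutoffs (CDR-SOB scores come in 0.5-point steps, so the gaps between ranges are never hit) could look like:

```python
# Map a CDR-SOB score to a global CDR stage using the ranges reported in the study.
def cdr_sob_stage(sob: float) -> float:
    """Return the global CDR stage implied by a Sum of Boxes score (0-18)."""
    if sob == 0.0:
        return 0.0      # no dementia
    if sob <= 4.0:
        return 0.5      # 0.5-4.0   -> global CDR 0.5
    if sob <= 9.0:
        return 1.0      # 4.5-9.0   -> global CDR 1.0
    if sob <= 15.5:
        return 2.0      # 9.5-15.5  -> global CDR 2.0
    return 3.0          # 16.0-18.0 -> global CDR 3.0

print(cdr_sob_stage(6.5))   # -> 1.0
```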
Hawkins, Melissa T R; Hofman, Courtney A; Callicrate, Taylor; McDonough, Molly M; Tsuchiya, Mirian T N; Gutiérrez, Eliécer E; Helgen, Kristofer M; Maldonado, Jesus E
2016-09-01
Here, we present a set of RNA-based probes for whole mitochondrial genome in-solution enrichment, targeting a diversity of mammalian mitogenomes. This probe set was designed from seven mammalian orders and tested to determine the utility for enriching degraded DNA. We generated 63 mitogenomes representing five orders and 22 genera of mammals that yielded varying coverage ranging from 0 to >5400X. Based on a threshold of 70% mitogenome recovery and at least 10× average coverage, 32 individuals or 51% of samples were considered successful. The estimated sequence divergence of samples from the probe sequences used to construct the array ranged up to nearly 20%. Sample type was more predictive of mitogenome recovery than sample age. The proportion of reads from each individual in multiplexed enrichments was highly skewed, with each pool having one sample that yielded a majority of the reads. Recovery across each mitochondrial gene varied, with most samples exhibiting regions with gaps or ambiguous sites. We estimated the ability of the probes to capture mitogenomes from a diversity of mammalian taxa not included here by performing a clustering analysis of published sequences for 100 taxa representing most mammalian orders. Our study demonstrates that a general array can be cost and time effective when there is a need to screen a modest number of individuals from a variety of taxa. We also address the practical concerns for using such a tool with regard to pooling samples and generating high-quality mitogenomes, and we detail a pipeline to remove chimeric molecules. © 2015 John Wiley & Sons Ltd.
Predictions of CD4 lymphocytes’ count in HIV patients from complete blood count
2013-01-01
Background HIV diagnosis, prognostic and treatment requires T CD4 lymphocytes’ number from flow cytometry, an expensive technique often not available to people in developing countries. The aim of this work is to apply a previous developed methodology that predicts T CD4 lymphocytes’ value based on total white blood cell (WBC) count and lymphocytes count applying sets theory, from information taken from the Complete Blood Count (CBC). Methods Sets theory was used to classify into groups named A, B, C and D the number of leucocytes/mm3, lymphocytes/mm3, and CD4/μL3 subpopulation per flow cytometry of 800 HIV diagnosed patients. Union between sets A and C, and B and D were assessed, and intersection between both unions was described in order to establish the belonging percentage to these sets. Results were classified into eight ranges taken by 1000 leucocytes/mm3, calculating the belonging percentage of each range with respect to the whole sample. Results Intersection (A ∪ C) ∩ (B ∪ D) showed an effectiveness in the prediction of 81.44% for the range between 4000 and 4999 leukocytes, 91.89% for the range between 3000 and 3999, and 100% for the range below 3000. Conclusions Usefulness and clinical applicability of a methodology based on sets theory were confirmed to predict the T CD4 lymphocytes’ value, beginning with WBC and lymphocytes’ count from CBC. This methodology is new, objective, and has lower costs than the flow cytometry which is currently considered as Gold Standard. PMID:24034560
Spliced synthetic genes as internal controls in RNA sequencing experiments.
Hardwick, Simon A; Chen, Wendy Y; Wong, Ted; Deveson, Ira W; Blackburn, James; Andersen, Stacey B; Nielsen, Lars K; Mattick, John S; Mercer, Tim R
2016-09-01
RNA sequencing (RNA-seq) can be used to assemble spliced isoforms, quantify expressed genes and provide a global profile of the transcriptome. However, the size and diversity of the transcriptome, the wide dynamic range in gene expression and inherent technical biases confound RNA-seq analysis. We have developed a set of spike-in RNA standards, termed 'sequins' (sequencing spike-ins), that represent full-length spliced mRNA isoforms. Sequins have an entirely artificial sequence with no homology to natural reference genomes, but they align to gene loci encoded on an artificial in silico chromosome. The combination of multiple sequins across a range of concentrations emulates alternative splicing and differential gene expression, and it provides scaling factors for normalization between samples. We demonstrate the use of sequins in RNA-seq experiments to measure sample-specific biases and determine the limits of reliable transcript assembly and quantification in accompanying human RNA samples. In addition, we have designed a complementary set of sequins that represent fusion genes arising from rearrangements of the in silico chromosome to aid in cancer diagnosis. RNA sequins provide a qualitative and quantitative reference with which to navigate the complexity of the human transcriptome.
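As a rough illustration of how spike-in counts yield between-sample scaling factors (a generic median-of-ratios scheme, not necessarily the method used by the sequins software), consider:

```python
# Generic spike-in normalization sketch: derive one scaling factor per sample from the
# sequin counts, then apply it to the endogenous gene counts. All counts are hypothetical.
import numpy as np

def spikein_scaling_factors(sequin_counts):
    """sequin_counts: (n_samples x n_sequins) matrix of spike-in read counts."""
    ref = np.exp(np.mean(np.log(sequin_counts + 1.0), axis=0))   # geometric-mean reference
    ratios = (sequin_counts + 1.0) / ref
    return np.median(ratios, axis=1)                             # one factor per sample

rng = np.random.default_rng(0)
sequins = rng.poisson(lam=[50, 200, 800, 3200], size=(3, 4)).astype(float)
genes   = rng.poisson(lam=300, size=(3, 1000)).astype(float)

factors = spikein_scaling_factors(sequins)
genes_normalized = genes / factors[:, None]
```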
Cross-Study Homogeneity of Psoriasis Gene Expression in Skin across a Large Expression Range
Kerkof, Keith; Timour, Martin; Russell, Christopher B.
2013-01-01
Background In psoriasis, only limited overlap between sets of genes identified as differentially expressed (psoriatic lesional vs. psoriatic non-lesional) was found using statistical and fold-change cut-offs. To provide a framework for utilizing prior psoriasis data sets we sought to understand the consistency of those sets. Methodology/Principal Findings Microarray expression profiling and qRT-PCR were used to characterize gene expression in PP and PN skin from psoriasis patients. cDNA (three new data sets) and cRNA hybridization (four existing data sets) data were compared using a common analysis pipeline. Agreement between data sets was assessed using varying qualitative and quantitative cut-offs to generate a DEG list in a source data set and then using other data sets to validate the list. Concordance increased from 67% across all probe sets to over 99% across more than 10,000 probe sets when statistical filters were employed. The fold-change behavior of individual genes tended to be consistent across the multiple data sets. We found that genes with <2-fold change values were quantitatively reproducible between pairs of data-sets. In a subset of transcripts with a role in inflammation changes detected by microarray were confirmed by qRT-PCR with high concordance. For transcripts with both PN and PP levels within the microarray dynamic range, microarray and qRT-PCR were quantitatively reproducible, including minimal fold-changes in IL13, TNFSF11, and TNFRSF11B and genes with >10-fold changes in either direction such as CHRM3, IL12B and IFNG. Conclusions/Significance Gene expression changes in psoriatic lesions were consistent across different studies, despite differences in patient selection, sample handling, and microarray platforms but between-study comparisons showed stronger agreement within than between platforms. We could use cut-offs as low as log10(ratio) = 0.1 (fold-change = 1.26), generating larger gene lists that validate on independent data sets. The reproducibility of PP signatures across data sets suggests that different sample sets can be productively compared. PMID:23308107
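The cut-off quoted above is just the antilog of the log-ratio threshold:

\[
\log_{10}(\text{ratio}) = 0.1 \;\Longrightarrow\; \text{fold-change} = 10^{0.1} \approx 1.26.
\]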
Fourier Plane Image Combination by Feathering
NASA Astrophysics Data System (ADS)
Cotton, W. D.
2017-09-01
Astronomical objects frequently exhibit structure over a wide range of scales whereas many telescopes, especially interferometer arrays, only sample a limited range of spatial scales. To properly image these objects, images from a set of instruments covering the range of scales may be needed. These images then must be combined in a manner to recover all spatial scales. This paper describes the feathering technique for image combination in the Fourier transform plane. Implementations in several packages are discussed and example combinations of single dish and interferometric observations of both simulated and celestial radio emission are given.
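In outline, feathering weights the Fourier transforms of a low-resolution (e.g., single-dish) image and a high-resolution (interferometric) image with complementary tapers before transforming back; the sketch below uses a simple Gaussian weight and assumes the two images already share the same pixel grid and flux scale, details that real packages handle with considerably more care:

```python
# Simplified feathering sketch: combine low- and high-resolution images in the Fourier plane.
import numpy as np

def feather(low_res, high_res, fwhm_pix):
    """Combine two images, trusting low_res at low spatial frequencies and
    high_res elsewhere. fwhm_pix: assumed low-resolution beam FWHM in pixels."""
    ny, nx = low_res.shape
    v, u = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing="ij")
    rho = np.sqrt(u**2 + v**2)                           # spatial frequency, cycles per pixel

    sigma = fwhm_pix / 2.355                             # image-plane Gaussian sigma
    w_low = np.exp(-2.0 * (np.pi * sigma * rho) ** 2)    # Fourier transform of that Gaussian

    combined_uv = w_low * np.fft.fft2(low_res) + (1.0 - w_low) * np.fft.fft2(high_res)
    return np.fft.ifft2(combined_uv).real

# Usage with synthetic stand-ins for the two inputs:
rng = np.random.default_rng(0)
truth = rng.normal(size=(256, 256))
low = feather(truth, np.zeros_like(truth), fwhm_pix=20.0)   # low-pass of truth as "single dish"
combined = feather(low, truth, fwhm_pix=20.0)
```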
Characterization and electron-energy-loss spectroscopy on NiV and NiMo superlattices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahmood, S.H.
1986-01-01
NiV superlattices with periods ranging from 15 to 80 Å, and NiMo superlattices with periods from 14 to 110 Å, were studied using X-ray Diffraction (XRD), Electron Diffraction (ED), Energy-Dispersive X-Ray (EDX) microanalysis, and Electron Energy Loss Spectroscopy (EELS). Both of these systems have sharp superlattice-to-amorphous (S-A) transitions at a period of about 17 Å. Superlattices with periods around the S-A boundary were found to have large local variations in the in-plane grain sizes. Except for a few isolated regions, the chemical composition of the samples was found to be uniform. In samples prepared at Argonne National Laboratory (ANL), most places studied with EELS showed changes in the EELS spectrum with decreasing period. An observed growth in a plasmon peak at approximately 10 eV in both NiV and NiMo as the period decreased down to 19 Å is attributed to excitation of interface plasmons. Consistent with this attribution, the peak height shrank in the amorphous samples. The width of this peak is consistent with the theory. The shift in this peak down to 9 eV with decreasing period in NiMo is not understood.
Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L
2012-10-01
Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNPs array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.
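A much-simplified version of the reliability-based selection can be sketched as a greedy search: ignore all fixed effects except the overall mean, compute prediction error variances from the relationship matrix, and add, one at a time, the candidate that most increases the mean expected reliability. This is an illustrative reduction of the PEVmean/CDmean idea, not the authors' exact criterion or implementation, and the relationship matrix below is random stand-in data:

```python
# Greedy calibration-set optimization by mean expected reliability (simplified sketch).
import numpy as np

def mean_cd(A, calib_idx, lam):
    """Mean expected reliability (CD) of genomic predictions for all individuals,
    given single phenotypic records on calib_idx.
    A: additive relationship matrix; lam = sigma_e^2 / sigma_u^2."""
    n = A.shape[0]
    ZtZ = np.zeros((n, n))
    ZtZ[calib_idx, calib_idx] = 1.0                  # one record per calibration individual
    C = ZtZ + lam * np.linalg.inv(A)                 # coefficient matrix (fixed effects ignored)
    pev = lam * np.linalg.inv(C)                     # PEV expressed in units of sigma_u^2
    cd = 1.0 - np.diag(pev) / np.diag(A)
    return cd.mean()

def greedy_calibration_set(A, size, lam=1.0):
    chosen, remaining = [], list(range(A.shape[0]))
    for _ in range(size):
        best = max(remaining, key=lambda j: mean_cd(A, np.array(chosen + [j]), lam))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Usage with a random positive-definite matrix standing in for a real relationship matrix:
rng = np.random.default_rng(0)
M = rng.normal(size=(60, 200))
A = M @ M.T / 200 + 0.05 * np.eye(60)
calibration_set = greedy_calibration_set(A, size=15)
```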
Upadhyay, Neelam; Jaiswal, Pranita; Jha, Shyam Narayan
2016-10-01
Ghee forms an important component of the diet of human beings due to its rich flavor and high nutritive value. This high priced fat is prone to adulteration with cheaper fats. ATR-FTIR spectroscopy coupled with chemometrics was applied for determining the presence of goat body fat in ghee (@1, 3, 5, 10, 15 and 20% level in the laboratory made/spiked samples). The spectra of pure (ghee and goat body fat) and spiked samples were taken in the wavenumber range of 4000-500 cm⁻¹. Separated clusters of pure ghee and spiked samples were obtained on applying principal component analysis at 5% level of significance in the selected wavenumber ranges (1786-1680, 1490-919 and 1260-1040 cm⁻¹). SIMCA was applied for classification of samples and pure ghee showed 100% classification efficiency. The value of R² was found to be >0.99 for calibration and validation sets using the partial least square method at all the selected wavenumber ranges, which indicates that the model was well developed. The study revealed that the spiked samples of goat body fat could be detected even at 1% level in ghee.
Udowelle, Nnaemeka Arinze; Igweze, Zelinjo Nkeiruka; Asomugha, Rose Ngozi; Orisakwe, Orish Ebere
A risk assessment of dietary exposure to polycyclic aromatic hydrocarbons (PAHs), lead and cadmium from bread, a common food consumed in Nigeria, was carried out. Sixty samples of bread were collected from twenty bakeries located in Gusau, Zamfara State (B1-B14) and Port Harcourt, Rivers State (B15-B20) in Nigeria, in which the heat is generated either by wood (42 samples) or by electricity (18 samples). PAHs in bread were determined by gas chromatography. Lead and cadmium were determined using atomic absorption spectrophotometry. The non-carcinogenic PAH pyrene (13.72 μg/kg) and the genotoxic PAH (PAH8) benzo[a]anthracene (9.13 μg/kg) were at the highest concentrations. A total benzo[a]pyrene concentration of 6.7 μg/kg was detected in 100% of tested samples. Dietary intake of total PAHs ranged between 0.004-0.063 μg/kg bw day-1 (children), 0.002-0.028 μg/kg day-1 (adolescents), 0.01-0.017 μg/kg day-1 (male), 0.002-0.027 μg/kg day-1 (female), and 0.002-0.025 μg/kg day-1 (seniors). The Target Hazard Quotient (THQ) values for Pb and Cd were below 1. Lead ranged from 0.01-0.071 mg/kg, with 10.85% and 100% of bread samples violating the permissible limits set by USEPA, WHO and EU, respectively. Cadmium ranged from 0.01-0.03 mg/kg, with all bread samples below the permissible limits as set by US EPA, JECFA and EU. The daily intake of Pb and Cd ranged from 0.03-0.23 μg/kg bw day-1 and 0.033-0.36 μg/kg bw day-1, respectively. The incremental lifetime cancer risk (ILCR) was 3.8 × 10⁻⁷. The levels of these contaminants in bread, if not controlled, might present a possible route of exposure to heavy metals and PAHs additional to the body burden from other sources.
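For orientation, the screening quantities named above are conventionally built from an estimated daily intake: the hazard quotient compares intake with a reference dose, and the cancer risk multiplies intake by a slope factor. The generic forms below are the standard risk-assessment expressions, not necessarily the exact exposure parameters used by the authors:

\[
\mathrm{EDI} = \frac{C \times \mathrm{IR}}{\mathrm{BW}}, \qquad
\mathrm{THQ} = \frac{\mathrm{EDI}}{\mathrm{RfD}}, \qquad
\mathrm{ILCR} = \mathrm{EDI} \times \mathrm{CSF},
\]

where C is the contaminant concentration in bread, IR the daily bread intake, BW body weight, RfD the oral reference dose and CSF the cancer slope factor; THQ < 1 is read as no appreciable non-carcinogenic risk.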
A self-sampling method to obtain large volumes of undiluted cervicovaginal secretions.
Boskey, Elizabeth R; Moench, Thomas R; Hees, Paul S; Cone, Richard A
2003-02-01
Studies of vaginal physiology and pathophysiology sometimes require larger volumes of undiluted cervicovaginal secretions than can be obtained by current methods. A convenient method for self-sampling these secretions outside a clinical setting can facilitate such studies of reproductive health. The goal was to develop a vaginal self-sampling method for collecting large volumes of undiluted cervicovaginal secretions. A menstrual collection device (the Instead cup) was inserted briefly into the vagina to collect secretions that were then retrieved from the cup by centrifugation in a 50-ml conical tube. All 16 women asked to perform this procedure found it feasible and acceptable. Among 27 samples, an average of 0.5 g of secretions (range, 0.1-1.5 g) was collected. This is a rapid and convenient self-sampling method for obtaining relatively large volumes of undiluted cervicovaginal secretions. It should prove suitable for a wide range of assays, including those involving sexually transmitted diseases, microbicides, vaginal physiology, immunology, and pathophysiology.
Seo, Dongmin; Paek, Sung-Ho; Oh, Sangwoo; Seo, Sungkyu; Paek, Se-Hwan
2016-09-24
The incidence of diabetes is continually increasing, and by 2030, it is expected to have increased by 69% and 20% in underdeveloped and developed countries, respectively. Therefore, glucose sensors are likely to remain in high demand in medical device markets. For the current study, we developed a needle-type bio-layer interference (BLI) sensor that can continuously monitor glucose levels. Using dialysis procedures, we were able to obtain hypoglycemic samples from commercial human serum. These dialysis-derived samples, alongside samples of normal human serum were used to evaluate the utility of the sensor for the detection of the clinical interest range of glucose concentrations (70-200 mg/dL), revealing high system performance for a wide glycemic state range (45-500 mg/dL). Reversibility and reproducibility were also tested over a range of time spans. Combined with existing BLI system technology, this sensor holds great promise for use as a wearable online continuous glucose monitoring system for patients in a hospital setting.
Performance of SMARTer at Very Low Scattering Vector q-Range Revealed by Monodisperse Nanoparticles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putra, E. Giri Rachman; Ikram, A.; Bharoto
2008-03-17
A monodisperse polystyrene nanoparticle sample was employed to determine the performance of the 36 meter small-angle neutron scattering (SANS) BATAN spectrometer (SMARTer) at the Neutron Scattering Laboratory (NSL), Serpong, Indonesia, in a very low scattering vector q-range. A detector position 18 m from the sample position, a beam stopper of 50 mm in diameter, a neutron wavelength of 5.66 Å and an 18 m-long collimator were set up to achieve the very low scattering vector q-range of SMARTer. A polydisperse smeared-spherical particle model was applied to fit the corrected small-angle scattering data of the monodisperse polystyrene nanoparticle sample. A mean particle radius of 610 Å, a volume fraction of 0.0026, and a polydispersity of 0.1 were obtained from the fitting results. The experimental results from SMARTer are comparable to those of SANS-J (JAEA, Japan), and show that SMARTer is able to reach scattering vectors as low as 0.002 Å⁻¹.
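The quoted minimum q is consistent with a back-of-the-envelope estimate from the stated geometry (50 mm beam stop, so r ≈ 25 mm; sample-to-detector distance L = 18 m; λ = 5.66 Å):

\[
\theta_{\min} \approx \frac{r}{L} = \frac{0.025}{18} \approx 1.4\ \mathrm{mrad}, \qquad
q_{\min} \approx \frac{4\pi}{\lambda}\sin\!\left(\frac{\theta_{\min}}{2}\right)
\approx \frac{2\pi\,\theta_{\min}}{\lambda} \approx 1.5\times 10^{-3}\ \text{\AA}^{-1},
\]

the same order as the 0.002 Å⁻¹ reached in the experiment.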
Environmental setting of benchmark streams in agricultural areas of eastern Wisconsin
Rheaume, S.J.; Stewart, J.S.; Lenz, B.N.
1996-01-01
Differences in land use/land cover, riparian vegetation, and instream habitat characteristics are presented. Summaries of field measurements of water temperature, pH, specific conductance and concentrations of dissolved oxygen, total organic plus ammonia nitrogen, dissolved ammonium, nitrate plus nitrite as nitrogen, total phosphorus, dissolved orthophosphate, and atrazine are listed. Concentrations of dissolved oxygen for the sampled streams ranged from 6 to 14.3 and met the standards set by the Wisconsin Department of Natural Resources (WDNR) for supporting fish and aquatic life. Specific conductance ranged from 98 to 753 µS/cm, with values highest in RHU's 1 and 3, where streams are underlain by carbonate bedrock. Median pH did not vary greatly among the four RHU's and ranged from 6.7 to 8.8, also meeting the WDNR standards. Concentrations of total organic plus ammonia nitrogen, dissolved ammonium, total phosphorus, and dissolved orthophosphate show little variation between streams and are generally low, compared to concentrations measured in agriculturally-affected streams in the same RHU's during the same sampling period. Concentrations of the most commonly used pesticide in the study unit, atrazine, were low in all streams, and most concentrations were below the 0.1 µg/L detection limit. Riparian vegetation for the benchmark streams was characterized by lowland species of the native plant communities described by John T. Curtis in the "Vegetation of Wisconsin." Based on the environmental setting and water-quality information collected to date, these streams appear to show minimal adverse effects from human activity.
Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use
Arthur, Steve M.; Schwartz, Charles C.
1999-01-01
We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (x = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (x = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (x = 224) for radiotracking data and 16-130 km2 (x = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
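The minimum convex polygon estimate itself is simple enough to sketch directly; the code below assumes projected x/y coordinates in metres and uses random subsampling of synthetic fixes to mimic the area-versus-sample-size analysis (it is not the authors' software):

```python
# MCP home-range area versus number of locations (synthetic points, illustrative only).
import numpy as np
from scipy.spatial import ConvexHull

def mcp_area_km2(xy):
    """Area of the minimum convex polygon around projected locations given in metres."""
    return ConvexHull(xy).volume / 1e6    # for 2-D input, .volume is the polygon area

rng = np.random.default_rng(0)
locations = rng.normal(scale=5000.0, size=(400, 2))   # synthetic GPS fixes, metres

for n in (15, 60, 120, 400):
    subset = locations[rng.choice(len(locations), size=n, replace=False)]
    print(f"n = {n:3d}   MCP area = {mcp_area_km2(subset):7.1f} km^2")
```

As in the study, the estimated MCP area tends to increase with the number of locations used.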
NASA Technical Reports Server (NTRS)
Hung, Ching-cheh; de Groh, Kim K.; Banks, Bruce A.
2012-01-01
Under a microscope, atomic oxygen (AO) exposed silicone surfaces are crazed and seen as "islands" separated by numerous crack lines, analogous to mud-tile cracks. This research characterized and compared the degree of AO degradation of silicones by analyzing optical microscope images of samples exposed to low Earth orbit (LEO) AO as part of the Spacecraft Silicone Experiment. The Spacecraft Silicone Experiment consisted of eight DC 93-500 silicone samples exposed to eight different AO fluence levels (ranging from 1.46 to 8.43 x 10(exp 21) atoms/sq cm) during two different Materials International Space Station Experiment (MISSE) missions. Image analysis software was used to analyze images taken using a digital camera. To describe the morphological degradation of each AO exposed flight sample, three different parameters were selected and estimated: (1) the average area of islands was determined and found to be in the 1000 to 3100 square micrometer range; (2) the total length of crack lines per unit area of the sample surface was determined and found to be in the range of 27 to 59 mm of crack length per sq mm of sample surface; and (3) the fraction of sample surface area that is occupied by crack lines was determined and found to be in the 25 to 56 percent range. In addition, the average crack width can be estimated from the crack length and crack area measurements and was calculated to be about 10 micrometers. Among the parameters studied, the fraction of sample surface area that is occupied by crack lines is believed to be most useful in characterizing the degree of silicone conversion to silicates by AO because its value steadily increases with increasing fluence over the entire fluence range. A series of SEM images from the eight samples exposed to different AO fluences suggests a complex sequence of surface stress due to surface shrinkage and crack formation, followed by re-distribution of stress and shrinking rate on the sample surface. Energy dispersive spectra (EDS) indicated that upon AO exposure, carbon content on the surface decreased relatively quickly at the beginning, to 32 percent of the pristine value for the least exposed sample in this set of experiments (1.46 x 10(exp 21) atoms/sq cm), but then decreased slowly, to 22 percent of the pristine value for the most exposed sample in this set of experiments (8.43 x 10(exp 21) atoms/sq cm). The oxygen content appears to increase at a slower rate; the least and most AO exposed samples were, respectively, 52 and 150 percent above the pristine values. The silicone samples with the greater AO exposure (7.75 x 10(exp 21) atoms/sq cm and higher) appear to have a surface layer which contains SiO2 with perhaps small amounts of unreacted silicone, CO and CO2 sealed inside.
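As a rough illustration of the third parameter above (the fraction of surface area occupied by crack lines), the sketch below thresholds a grayscale image into crack and non-crack pixels and reports the crack-area fraction. The synthetic image, the threshold value, and the pixel size are assumptions, not values from the flight samples.

```python
# Hedged sketch: crack-area fraction from a thresholded (synthetic) microscope image.
import numpy as np

rng = np.random.default_rng(1)
gray = rng.uniform(0.0, 1.0, size=(512, 512))   # stand-in for a grayscale field of view
cracks = gray < 0.35                            # assumed threshold: dark pixels = crack lines

crack_fraction = cracks.mean()                  # fraction of pixels classified as crack
pixel_size_um = 2.0                             # assumed pixel size in micrometers
crack_area_um2 = cracks.sum() * pixel_size_um**2
print(f"crack-area fraction: {crack_fraction:.1%}")
print(f"total crack area: {crack_area_um2:.0f} square micrometers")
```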
Kuo, Frances E.; Faber Taylor, Andrea
2004-01-01
Objectives. We examined the impact of relatively “green” or natural settings on attention-deficit/hyperactivity disorder (ADHD) symptoms across diverse subpopulations of children. Methods. Parents nationwide rated the aftereffects of 49 common after-school and weekend activities on children’s symptoms. Aftereffects were compared for activities conducted in green outdoor settings versus those conducted in both built outdoor and indoor settings. Results. In this national, nonprobability sample, green outdoor activities reduced symptoms significantly more than did activities conducted in other settings, even when activities were matched across settings. Findings were consistent across age, gender, and income groups; community types; geographic regions; and diagnoses. Conclusions. Green outdoor settings appear to reduce ADHD symptoms in children across a wide range of individual, residential, and case characteristics. PMID:15333318
Burow, Karen R.; Shelton, Jennifer L.; Dubrovsky, Neil M.
1998-01-01
The processes that affect nitrate and pesticide occurrence may be better understood by relating ground-water quality to natural and human factors in the context of distinct, regionally extensive, land-use settings. This study assesses nitrate and pesticide occurrence in ground water beneath three agricultural land-use settings in the eastern San Joaquin Valley, California. Water samples were collected from 60 domestic wells in vineyard, almond, and a crop grouping of corn, alfalfa, and vegetable land-use settings. Each well was sampled once during 1993-1995. This study is one element of the U.S. Geological Survey's National Water-Quality Assessment Program, which is designed to assess the status of, and trends in, the quality of the nation's ground- and surface-water resources and to link the status and trends with an understanding of the natural and human factors that affect the quality of water. The concentrations and occurrence of nitrate and pesticides in ground-water samples from domestic wells in the eastern alluvial fan physiographic region were related to differences in chemical applications and to the physical and biogeochemical processes that characterize each of the three land-use settings. Ground water beneath the vineyard and almond land-use settings on the coarse-grained, upper and middle parts of the alluvial fans is more vulnerable to nonpoint-source agricultural contamination than is the ground water beneath the corn, alfalfa, and vegetable land-use setting on the lower part of the fans, near the basin physiographic region. Nitrate concentrations ranged from less than 0.05 to 55 milligrams per liter, as nitrogen. Nitrate concentrations were significantly higher in the almond land-use setting than in the vineyard land-use setting, whereas concentrations in the corn, alfalfa, and vegetable land-use setting were intermediate. Nitrate concentrations exceeded the maximum contaminant level in eight samples from the almond land-use setting (40 percent), in seven samples from the corn, alfalfa, and vegetable land-use setting (35 percent), and in three samples from the vineyard land-use setting (15 percent). The physical and chemical characteristics of the vineyard and the almond land-use settings are similar, characterized by coarse-grained sediments and high dissolved-oxygen concentrations, reflecting processes that promote rapid infiltration of water and solutes. The high nitrate concentrations in the almond land-use setting reflect the high amount of nitrogen applications in this setting, whereas the low nitrate concentrations in the vineyard land-use setting reflect relatively low nitrogen applications. In the corn, alfalfa, and vegetable land-use setting, the relatively fine-grained sediments, and low dissolved-oxygen concentrations, reflect processes that result in slow infiltration rates and longer ground-water residence times. The intermediate nitrate concentrations in the corn, alfalfa, and vegetable land-use setting are a result of these physical and chemical characteristics, combined with generally high (but variable) nitrogen applications. Twenty-three different pesticides were detected in 41 of 60 ground-water samples (68 percent). Eighty percent of the ground-water samples from the vineyard land-use setting had at least one pesticide detection, followed by 70 percent in the almond land-use setting, and 55 percent in the corn, alfalfa, and vegetable land-use setting.
All concentrations were less than state or federal maximum contaminant levels (only 5 of the detected pesticides have established maximum contaminant levels), with the exception of 1,2-dibromo-3-chloropropane, which exceeded the maximum contaminant level of 0.2 micrograms per liter in 10 ground-water samples from vineyard land-use wells and in 5 ground-water samples from almond land-use wells. Simazine was detected most often, occurring in 50 percent of the ground-water samples from the vineyard land-use wells and in 30 percent
Parallel k-Means Clustering for Quantitative Ecoregion Delineation Using Large Data Sets
Jitendra Kumar; Richard T. Mills; Forrest M Hoffman; William W Hargrove
2011-01-01
Identification of geographic ecoregions has long been of interest to environmental scientists and ecologists for identifying regions of similar ecological and environmental conditions. Such classifications are important for predicting suitable species ranges, for stratification of ecological samples, and to help prioritize habitat preservation and remediation efforts....
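For reference, a serial version of the k-means step that such an ecoregion delineation parallelizes could look like the sketch below (Lloyd's algorithm on a table of standardized environmental variables); the synthetic data and the choice of k = 4 are purely illustrative, not the study's configuration.

```python
# Minimal Lloyd's k-means sketch: rows = map cells, columns = standardized variables.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # initialize from data points
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                         # assign each cell to nearest center
        new_centers = []
        for j in range(k):
            members = X[labels == j]
            new_centers.append(members.mean(axis=0) if len(members) else centers[j])
        centers = np.array(new_centers)                       # recompute cluster centroids
    return labels, centers

X = np.random.default_rng(2).normal(size=(1000, 5))           # hypothetical environmental table
labels, centers = kmeans(X, k=4)
print(np.bincount(labels))                                    # cells assigned to each "ecoregion"
```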
7 CFR 28.425 - Low Middling Spotted Color.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Low Middling Spotted Color. 28.425 Section 28.425 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Color. Low Middling Spotted Color is color which is within the range represented by a set of samples in...
7 CFR 28.422 - Strict Middling Spotted Color.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Strict Middling Spotted Color. 28.422 Section 28.422 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Color. Strict Middling Spotted Color is color which is within the range represented by a set of samples...
7 CFR 28.425 - Low Middling Spotted Color.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Low Middling Spotted Color. 28.425 Section 28.425 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Color. Low Middling Spotted Color is color which is within the range represented by a set of samples in...
7 CFR 28.422 - Strict Middling Spotted Color.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Strict Middling Spotted Color. 28.422 Section 28.422 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Color. Strict Middling Spotted Color is color which is within the range represented by a set of samples...
7 CFR 28.422 - Strict Middling Spotted Color.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Strict Middling Spotted Color. 28.422 Section 28.422 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Color. Strict Middling Spotted Color is color which is within the range represented by a set of samples...
7 CFR 28.425 - Low Middling Spotted Color.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Low Middling Spotted Color. 28.425 Section 28.425 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Color. Low Middling Spotted Color is color which is within the range represented by a set of samples in...
7 CFR 28.425 - Low Middling Spotted Color.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Low Middling Spotted Color. 28.425 Section 28.425 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Color. Low Middling Spotted Color is color which is within the range represented by a set of samples in...
7 CFR 28.422 - Strict Middling Spotted Color.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Strict Middling Spotted Color. 28.422 Section 28.422 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Color. Strict Middling Spotted Color is color which is within the range represented by a set of samples...
USDA-ARS?s Scientific Manuscript database
The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...
7 CFR 28.425 - Low Middling Spotted Color.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Low Middling Spotted Color. 28.425 Section 28.425 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Color. Low Middling Spotted Color is color which is within the range represented by a set of samples in...
7 CFR 28.422 - Strict Middling Spotted Color.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Strict Middling Spotted Color. 28.422 Section 28.422 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Color. Strict Middling Spotted Color is color which is within the range represented by a set of samples...
Candidate-based proteomics in the search for biomarkers of cardiovascular disease
Anderson, Leigh
2005-01-01
The key concept of proteomics (looking at many proteins at once) opens new avenues in the search for clinically useful biomarkers of disease, treatment response and ageing. As the number of proteins that can be detected in plasma or serum (the primary clinical diagnostic samples) increases towards 1000, a paradoxical decline has occurred in the number of new protein markers approved for diagnostic use in clinical laboratories. This review explores the limitations of current proteomics protein discovery platforms, and proposes an alternative approach, applicable to a range of biological/physiological problems, in which quantitative mass spectrometric methods developed for analytical chemistry are employed to measure limited sets of candidate markers in large sets of clinical samples. A set of 177 candidate biomarker proteins with reported associations to cardiovascular disease and stroke are presented as a starting point for such a ‘directed proteomics’ approach. PMID:15611012
A new small-angle X-ray scattering set-up on the crystallography beamline I711 at MAX-lab.
Knaapila, M; Svensson, C; Barauskas, J; Zackrisson, M; Nielsen, S S; Toft, K N; Vestergaard, B; Arleth, L; Olsson, U; Pedersen, J S; Cerenius, Y
2009-07-01
A small-angle X-ray scattering (SAXS) set-up has recently been developed at beamline I711 at the MAX II storage ring in Lund (Sweden). An overview of the required modifications is presented here together with a number of application examples. The accessible q range in a SAXS experiment is 0.009-0.3 A(-1) for the standard set-up but depends on the sample-to-detector distance, detector offset, beamstop size and wavelength. The SAXS camera has been designed to have a low background and has three collinear slit sets for collimating the incident beam. The standard beam size is about 0.37 mm x 0.37 mm (full width at half-maximum) at the sample position, with a flux of 4 x 10(10) photons s(-1) and lambda = 1.1 A. The vacuum is of the order of 0.05 mbar in the unbroken beam path from the first slits until the exit window in front of the detector. A large sample chamber with a number of lead-throughs allows different sample environments to be mounted. This station is used for measurements on weakly scattering proteins in solutions and also for colloids, polymers and other nanoscale structures. A special application supported by the beamline is the effort to establish a micro-fluidic sample environment for structural analysis of samples that are only available in limited quantities. Overall, this work demonstrates how a cost-effective SAXS station can be constructed on a multipurpose beamline.
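The quoted q range follows from standard small-angle scattering geometry, q = (4*pi/lambda)*sin(theta) with tan(2*theta) = r/L for a detector radius r at sample-to-detector distance L. A short sketch of that calculation is below; the detector radii and distance are assumed placeholder values, not the I711 configuration.

```python
# Sketch of the accessible q range from detector geometry (illustrative numbers).
import numpy as np

wavelength_A = 1.1                 # wavelength in angstroms, as quoted above
distance_mm = 1500.0               # assumed sample-to-detector distance
r_min_mm, r_max_mm = 4.0, 120.0    # assumed usable radial range on the detector

def q_of(radius_mm):
    two_theta = np.arctan(radius_mm / distance_mm)
    return 4.0 * np.pi / wavelength_A * np.sin(two_theta / 2.0)

print(f"accessible q range: {q_of(r_min_mm):.4f} - {q_of(r_max_mm):.3f} A^-1")
```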
Sanzolone, R.F.
1986-01-01
An inductively coupled plasma atomic fluorescence spectrometric method is described for the determination of six elements in a variety of geological materials. Sixteen reference materials are analysed by this technique to demonstrate its use in geochemical exploration. Samples are decomposed with nitric, hydrofluoric and hydrochloric acids, and the residue dissolved in hydrochloric acid and diluted to volume. The elements are determined in two groups based on compatibility of instrument operating conditions and consideration of crustal abundance levels. Cadmium, Cu, Pb and Zn are determined as a group in the 50-ml sample solution under one set of instrument conditions with the use of scatter correction. Limitations of the scatter correction technique used with the fluorescence instrument are discussed. Iron and Mn are determined together using another set of instrumental conditions on a 1:50 dilution of the sample solution without the use of scatter correction. The ranges of concentration (µg g-1) of these elements in the sample that can be determined are: Cd, 0.3-500; Cu, 0.4-500; Fe, 85-250 000; Mn, 45-100 000; Pb, 5-10 000; and Zn, 0.4-300. The precision of the method is usually less than 5% relative standard deviation (RSD) over a wide concentration range and acceptable accuracy is shown by the agreement between values obtained and those recommended for the reference materials.
van der Gaag, Kristiaan J; de Leeuw, Rick H; Laros, Jeroen F J; den Dunnen, Johan T; de Knijff, Peter
2018-07-01
For two decades, short tandem repeats (STRs) have been the preferred markers for human identification, routinely analysed by fragment length analysis. Here we present a novel set of short hypervariable autosomal microhaplotypes (MH) that have four or more SNPs in a span of less than 70 nucleotides (nt). These MHs display a discriminating power approaching that of STRs and provide a powerful alternative for the analysis of forensic samples that are problematic when the STR fragment size range exceeds the integrity range of severely degraded DNA or when multiple donors contribute to an evidentiary stain and STR stutter artefacts complicate profile interpretation. MH typing was developed using the power of massively parallel sequencing (MPS), enabling new powerful, fast and efficient SNP-based approaches. MH candidates were obtained from queries in data of the 1000 Genomes and Genome of the Netherlands (GoNL) projects. Wet-lab analysis of 276 globally dispersed samples and 97 samples of nine large CEPH families assisted locus selection and corroboration of informative value. We infer that MHs represent an alternative marker type with good discriminating power per locus (allowing the use of a limited number of loci), small amplicon sizes and absence of stutter artefacts that can be especially helpful when unbalanced mixed samples are submitted for human identification. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.
2006-01-01
We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. ?? 2006 Elsevier B.V. All rights reserved.
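The gap between resubstitution accuracy and cross-validated accuracy that motivates the recommendation above can be illustrated with a generic classification tree on synthetic data; this is not the lichen model, and scikit-learn is used here only for brevity.

```python
# Hedged illustration: resubstitution vs. 10-fold cross-validated accuracy of a tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 6))                                  # hypothetical predictor variables
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=300) > 0).astype(int)

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
resub = tree.score(X, y)                                       # resubstitution accuracy (optimistic)
cv = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10).mean()
print(f"resubstitution accuracy: {resub:.2f}, 10-fold CV accuracy: {cv:.2f}")
```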
Water and Sediment Quality in the Yukon River Basin, Alaska, During Water Year 2001
Schuster, Paul F.
2003-01-01
Overview -- This report contains water-quality and sediment-quality data from samples collected in the Yukon River Basin during water year 2001 (October 2000 through September 2001). A broad range of chemical and biological analyses from three sets of samples are presented. First, samples were collected throughout the year at five stations in the basin (three on the mainstem Yukon River, one each on the Tanana and Porcupine Rivers). Second, fecal indicators were measured on samples from drinking-water supplies collected near four villages. Third, sediment cores from five lakes throughout the Yukon Basin were sampled to reconstruct historic trends in the atmospheric deposition of trace elements and hydrophobic organic compounds.
Bozzi, Jorge A.; Liepelt, Sascha; Ohneiser, Sebastian; Gallo, Leonardo A.; Marchelli, Paula; Leyer, Ilona; Ziegenhagen, Birgit; Mengel, Christina
2015-01-01
Premise of the study: We present a set of 23 polymorphic nuclear microsatellite loci, 18 of which are identified for the first time within the riparian species Salix humboldtiana (Salicaceae) using next-generation sequencing. Methods and Results: To characterize the 23 loci, up to 60 individuals were sampled and genotyped at each locus. The number of alleles ranged from two to eight, with an average of 4.43 alleles per locus. The effective number of alleles ranged from 1.15 to 3.09 per locus, and allelic richness ranged from 2.00 to 7.73 alleles per locus. Conclusions: The new marker set will be used for future studies of genetic diversity and differentiation as well as for unraveling spatial genetic structures in S. humboldtiana populations in northern Patagonia, Argentina. PMID:25909042
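The "effective number of alleles" statistic reported above is commonly computed as 1 / sum(p_i^2) from the allele frequencies at a locus. A minimal sketch with made-up allele counts:

```python
# Effective number of alleles at one locus from (hypothetical) allele counts.
import numpy as np

allele_counts = np.array([34, 28, 20, 12, 6])     # illustrative counts at one SSR locus
p = allele_counts / allele_counts.sum()           # allele frequencies

observed_alleles = (allele_counts > 0).sum()
effective_alleles = 1.0 / np.sum(p ** 2)          # 1 / expected homozygosity
print(f"observed alleles: {observed_alleles}, effective number of alleles: {effective_alleles:.2f}")
```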
Health plan auditing: 100-percent-of-claims vs. random-sample audits.
Sillup, George P; Klimberg, Ronald K
2011-01-01
The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.
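A back-of-the-envelope simulation of the comparison above: a census ("100%-of-claims audit") of error dollars in a synthetic claim population versus repeated 300-claim random samples. The error rate and error sizes are assumptions chosen only to illustrate why small samples miss rare, costly exceptions.

```python
# Synthetic-claims sketch: total error found by a full audit vs. 300-claim samples.
import numpy as np

rng = np.random.default_rng(4)
n_claims = 50_000
error_amounts = np.where(rng.random(n_claims) < 0.002,        # assumed 0.2% exception rate
                         rng.uniform(500, 5_000, n_claims),   # assumed error sizes ($)
                         0.0)
total_errors = error_amounts.sum()                            # what a 100% audit would find

hits = []
for _ in range(100):                                          # 100 simulated random-sample audits
    sample = rng.choice(error_amounts, size=300, replace=False)
    hits.append(sample.sum())

print(f"100% audit finds ${total_errors:,.0f}; a 300-claim sample finds "
      f"${np.mean(hits):,.0f} on average ({np.mean(np.array(hits) > 0):.0%} of samples catch any error)")
```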
Analysis of near infrared spectra for age-grading of wild populations of Anopheles gambiae.
Krajacich, Benjamin J; Meyers, Jacob I; Alout, Haoues; Dabiré, Roch K; Dowell, Floyd E; Foy, Brian D
2017-11-07
Understanding the age-structure of mosquito populations, especially malaria vectors such as Anopheles gambiae, is important for assessing the risk of infectious mosquitoes and how vector control interventions may impact this risk. The use of near-infrared spectroscopy (NIRS) for age-grading has been demonstrated previously on laboratory and semi-field mosquitoes, but to date has not been utilized on wild-caught mosquitoes whose age is externally validated via parity status or parasite infection stage. In this study, we developed regression and classification models using NIRS on datasets of wild An. gambiae (s.l.) reared from larvae collected from the field in Burkina Faso, and two laboratory strains. We compared the accuracy of these models for predicting the ages of wild-caught mosquitoes that had been scored for their parity status as well as for positivity for Plasmodium sporozoites. Regression models utilizing variable selection increased predictive accuracy over the more common full-spectrum partial least squares (PLS) approach for cross-validation of the datasets, validation, and independent test sets. Models produced from datasets that included the greatest range of mosquito samples (i.e. different sampling locations and times) had the highest predictive accuracy on independent testing sets, though overall accuracy on these samples was low. For classification, we found that intramodel accuracy ranged from 73.5% to 97.0% for grouping of mosquitoes into "early" and "late" age classes, with the highest prediction accuracy found in laboratory colonized mosquitoes. However, this accuracy decreased on test sets; the highest classification accuracy for an independent set of wild-caught larvae reared to set ages was 69.6%. Variation in NIRS data, likely from dietary, genetic, and other factors, limits the accuracy of this technique with wild-caught mosquitoes. Alternative algorithms may help improve prediction accuracy, but care should be taken to either maximize variety in models or minimize confounders.
Dark Energy Survey Year 1 Results: galaxy mock catalogues for BAO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avila, S.; et al.
Mock catalogues are a crucial tool in the analysis of galaxy survey data, both for the accurate computation of covariance matrices and for the optimisation of analysis methodology and validation of data sets. In this paper, we present a set of 1800 galaxy mock catalogues designed to match the Dark Energy Survey Year-1 BAO sample (Crocce et al. 2017) in abundance, observational volume, redshift distribution and uncertainty, and redshift dependent clustering. The simulated samples were built upon HALOGEN (Avila et al. 2015) halo catalogues, based on a 2LPT density field with an exponential bias. For each of them, a lightcone is constructed by the superposition of snapshots in the redshift range 0.45 ...
Phillips, Patrick J.; Schubert, Christopher E.; Argue, Denise M.; Fisher, Irene J.; Furlong, Edward T.; Foreman, William T.; Gray, James L.; Chalmers, Ann T.
2015-01-01
The highest micropollutant concentrations for the NY network were present in the shoreline wells and reflect groundwater that is most affected by septic system discharges. One of the shoreline wells had personal care/domestic use, pharmaceutical, and plasticizer concentrations ranging from 0.4 to 5.7 μg/L. Estradiol equivalency quotient concentrations were also highest in a shoreline well sample (3.1 ng/L). Most micropollutant concentrations increase with increasing specific conductance and total nitrogen concentrations for shoreline well samples. These findings suggest that septic systems serving institutional settings and densely populated areas in coastal settings may be locally important sources of micropollutants to adjacent aquifer and marine systems.
NASA Astrophysics Data System (ADS)
Suhandy, D.; Yulia, M.; Ogawa, Y.; Kondo, N.
2018-05-01
In the present research, an evaluation of near infrared (NIR) spectroscopy in tandem with full spectrum partial least squares (FS-PLS) regression for quantification of the degree of adulteration in civet coffee was conducted. A total of 126 ground roasted coffee samples with degrees of adulteration of 0-51% were prepared. Spectral data were acquired using a NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement in the range of 1300-2500 nm. The samples were divided into two groups: a calibration sample set (84 samples) and a prediction sample set (42 samples). The calibration model was developed on the original spectra using FS-PLS regression with the full cross-validation method. The calibration model exhibited a determination coefficient of R2=0.96 for calibration and R2=0.92 for validation. The prediction resulted in a low root mean square error of prediction (RMSEP, 4.67%) and a high ratio of prediction to deviation (RPD, 3.75). In conclusion, the degree of adulteration in civet coffee has been quantified successfully by using NIR spectroscopy and FS-PLS regression in a non-destructive, economical, precise, and highly sensitive method with very simple sample preparation.
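A hedged sketch of a full-spectrum PLS calibration with a calibration/prediction split and the RMSEP and RPD statistics quoted above; the synthetic "spectra", the number of latent variables, and the noise level are assumptions, not the study's settings.

```python
# Synthetic-spectra sketch of a PLS calibration with RMSEP and RPD on a held-out set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
y = rng.uniform(0, 51, size=126)                               # "degree of adulteration" (%)
X = np.outer(y, rng.normal(size=600)) + rng.normal(scale=5.0, size=(126, 600))  # fake spectra

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=42, random_state=0)
pls = PLSRegression(n_components=8).fit(X_cal, y_cal)          # 8 latent variables (assumed)
pred = pls.predict(X_val).ravel()

rmsep = np.sqrt(np.mean((pred - y_val) ** 2))                  # root mean square error of prediction
rpd = y_val.std(ddof=1) / rmsep                                # ratio of prediction to deviation
print(f"RMSEP = {rmsep:.2f} %, RPD = {rpd:.2f}")
```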
NASA Astrophysics Data System (ADS)
Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.
2017-08-01
Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.
New Teff and [Fe/H] spectroscopic calibration for FGK dwarfs and GK giants
NASA Astrophysics Data System (ADS)
Teixeira, G. D. C.; Sousa, S. G.; Tsantaki, M.; Monteiro, M. J. P. F. G.; Santos, N. C.; Israelian, G.
2016-10-01
Context. The ever-growing number of large spectroscopic survey programs has increased the importance of fast and reliable methods with which to determine precise stellar parameters. Some of these methods are highly dependent on correct spectroscopic calibrations. Aims: The goal of this work is to obtain a new spectroscopic calibration for a fast estimate of Teff and [Fe/H] for a wide range of stellar spectral types. Methods: We used spectra from a joint sample of 708 stars, compiled from 451 FGK dwarfs and 257 GK-giant stars. We used homogeneously determined spectroscopic stellar parameters to derive temperature calibrations using a set of selected EW line-ratios, and [Fe/H] calibrations using a set of selected Fe I lines. Results: We have derived 322 EW line-ratios and 100 Fe I lines that can be used to compute Teff and [Fe/H], respectively. We show that these calibrations are effective for FGK dwarfs and GK-giant stars in the following ranges: 4500 K
Control methods for merging ALSM and ground-based laser point clouds acquired under forest canopies
NASA Astrophysics Data System (ADS)
Slatton, Kenneth C.; Coleman, Matt; Carter, William E.; Shrestha, Ramesh L.; Sartori, Michael
2004-12-01
Merging of point data acquired from ground-based and airborne scanning laser rangers has been demonstrated for cases in which a common set of targets can be readily located in both data sets. However, direct merging of point data was not generally possible if the two data sets did not share common targets. This is often the case for ranging measurements acquired in forest canopies, where airborne systems image the canopy crowns well, but receive a relatively sparse set of points from the ground and understory. Conversely, ground-based scans of the understory do not generally sample the upper canopy. An experiment was conducted to establish a viable procedure for acquiring and georeferencing laser ranging data underneath a forest canopy. Once georeferenced, the ground-based data points can be merged with airborne points even in cases where no natural targets are common to both data sets. Two ground-based laser scans are merged and georeferenced with a final absolute error in the target locations of less than 10cm. This is comparable to the accuracy of the georeferenced airborne data. Thus, merging of the georeferenced ground-based and airborne data should be feasible. The motivation for this investigation is to facilitate a thorough characterization of airborne laser ranging phenomenology over forested terrain as a function of vertical location in the canopy.
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) it is more effective and efficient than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto-optimal values are better than those of LHS, which means better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
Grelet, C; Bastin, C; Gelé, M; Davière, J-B; Johan, M; Werner, A; Reding, R; Fernandez Pierna, J A; Colinet, F G; Dardenne, P; Gengler, N; Soyeurt, H; Dehareng, F
2016-06-01
To manage negative energy balance and ketosis in dairy farms, rapid and cost-effective detection is needed. Among the milk biomarkers that could be useful for this purpose, acetone and β-hydroxybutyrate (BHB) have been proved as molecules of interest regarding ketosis and citrate was recently identified as an early indicator of negative energy balance. Because Fourier transform mid-infrared spectrometry can provide rapid and cost-effective predictions of milk composition, the objective of this study was to evaluate the ability of this technology to predict these biomarkers in milk. Milk samples were collected in commercial and experimental farms in Luxembourg, France, and Germany. Acetone, BHB, and citrate contents were determined by flow injection analysis. Milk mid-infrared spectra were recorded and standardized for all samples. After edits, a total of 548 samples were used in the calibration and validation data sets for acetone, 558 for BHB, and 506 for citrate. Acetone content ranged from 0.020 to 3.355 mmol/L with an average of 0.103 mmol/L; BHB content ranged from 0.045 to 1.596 mmol/L with an average of 0.215 mmol/L; and citrate content ranged from 3.88 to 16.12 mmol/L with an average of 9.04 mmol/L. Acetone and BHB contents were log-transformed and a part of the samples with low values was randomly excluded to approach a normal distribution. The 3 edited data sets were then randomly divided into a calibration data set (3/4 of the samples) and a validation data set (1/4 of the samples). Prediction equations were developed using partial least square regression. The coefficient of determination (R2) of cross-validation was 0.73 for acetone, 0.71 for BHB, and 0.90 for citrate with root mean square error of 0.248, 0.109, and 0.70 mmol/L, respectively. Finally, the external validation was performed and R2 obtained were 0.67 for acetone, 0.63 for BHB, and 0.86 for citrate, with respective root mean square error of validation of 0.196, 0.083, and 0.76 mmol/L. Although the practical usefulness of the equations developed should be further verified with other field data, results from this study demonstrated the potential of Fourier transform mid-infrared spectrometry to predict citrate content with good accuracy and to supply indicative contents of BHB and acetone in milk, thereby providing rapid and cost-effective tools to manage ketosis and negative energy balance in dairy farms. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Dandan; Zhao, Gong-Bo; Wang, Yuting; Percival, Will J.; Ruggeri, Rossana; Zhu, Fangzhou; Tojeiro, Rita; Myers, Adam D.; Chuang, Chia-Hsun; Baumgarten, Falk; Zhao, Cheng; Gil-Marín, Héctor; Ross, Ashley J.; Burtin, Etienne; Zarrouk, Pauline; Bautista, Julian; Brinkmann, Jonathan; Dawson, Kyle; Brownstein, Joel R.; de la Macorra, Axel; Schneider, Donald P.; Shafieloo, Arman
2018-06-01
We present a measurement of the anisotropic and isotropic Baryon Acoustic Oscillations (BAO) from the extended Baryon Oscillation Spectroscopic Survey Data Release 14 quasar sample with optimal redshift weights. Applying the redshift weights improves the constraint on the BAO dilation parameter α(zeff) by 17 per cent. We reconstruct the evolution history of the BAO distance indicators in the redshift range of 0.8 < z < 2.2. This paper is part of a set that analyses the eBOSS DR14 quasar sample.
Lack, Justin B; Cardeno, Charis M; Crepeau, Marc W; Taylor, William; Corbett-Detig, Russell B; Stevens, Kristian A; Langley, Charles H; Pool, John E
2015-04-01
Hundreds of wild-derived Drosophila melanogaster genomes have been published, but rigorous comparisons across data sets are precluded by differences in alignment methodology. The most common approach to reference-based genome assembly is a single round of alignment followed by quality filtering and variant detection. We evaluated variations and extensions of this approach and settled on an assembly strategy that utilizes two alignment programs and incorporates both substitutions and short indels to construct an updated reference for a second round of mapping prior to final variant detection. Utilizing this approach, we reassembled published D. melanogaster population genomic data sets and added unpublished genomes from several sub-Saharan populations. Most notably, we present aligned data from phase 3 of the Drosophila Population Genomics Project (DPGP3), which provides 197 genomes from a single ancestral range population of D. melanogaster (from Zambia). The large sample size, high genetic diversity, and potentially simpler demographic history of the DPGP3 sample will make this a highly valuable resource for fundamental population genetic research. The complete set of assemblies described here, termed the Drosophila Genome Nexus, presently comprises 623 consistently aligned genomes and is publicly available in multiple formats with supporting documentation and bioinformatic tools. This resource will greatly facilitate population genomic analysis in this model species by reducing the methodological differences between data sets. Copyright © 2015 by the Genetics Society of America.
Italian version of the task and ego orientation in sport questionnaire.
Bortoli, Laura; Robazza, Claudio
2005-02-01
The 1992 Task and Ego Orientation in Sport Questionnaire developed by Duda and Nicholls was translated into Italian and administered to 802 young athletes, 248 girls and 554 boys aged 8 to 14 years, drawn from a range of individual and team sports, to examine its factor structure. Data sets of a calibration sample (boys 12-14 years) and of four cross-validation samples (boys 8-11 years, girls 8-11 years, boys 12-14 years, and girls 12-14 years) were subjected to confirmatory factor analysis specifying, as in the original questionnaire, an Ego Orientation scale (6 items) and a Task Orientation scale (7 items). Results across sex and age yielded chi2/df ratios ranging from 1.95 to 3.57, GFI indices above .90, AGFI indices ranging from .90 to .92, and RMSEA values not above .10. Findings provided acceptable support for the two-dimension structure of the test. In the whole sample, the Ego factor accounted for 27.2% of the variance and the Task factor accounted for 33.5% of the variance. Acceptable internal consistency of the two scales was also shown, with Cronbach alpha values ranging from .73 to .85.
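Internal consistency of a scale such as the 7-item Task Orientation scale is typically summarized with Cronbach's alpha, alpha = k/(k-1) * (1 - sum(item variances)/variance of the total score). A minimal sketch on simulated item scores (not the study's data):

```python
# Cronbach's alpha from an item-score matrix (rows = respondents, columns = items).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(6)
trait = rng.normal(size=(200, 1))                      # latent orientation score
scores = trait + rng.normal(scale=0.8, size=(200, 7))  # 7 correlated "items" (simulated)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```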
7 CFR 28.402 - Strict Middling Color.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Strict Middling Color. 28.402 Section 28.402... for the Color Grade of American Upland Cotton § 28.402 Strict Middling Color. Strict Middling Color is color which is within the range represented by a set of samples in the custody of the United States...
7 CFR 28.402 - Strict Middling Color.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Strict Middling Color. 28.402 Section 28.402... for the Color Grade of American Upland Cotton § 28.402 Strict Middling Color. Strict Middling Color is color which is within the range represented by a set of samples in the custody of the United States...
7 CFR 28.423 - Middling Spotted Color.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Middling Spotted Color. 28.423 Section 28.423... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Spotted Cotton § 28.423 Middling Spotted Color. Middling Spotted Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.432 - Middling Tinged Color.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Middling Tinged Color. 28.432 Section 28.432... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.432 Middling Tinged Color. Middling Tinged Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.434 - Low Middling Tinged Color.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Low Middling Tinged Color. 28.434 Section 28.434... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.434 Low Middling Tinged Color. Low Middling Tinged Color is color which is within the range represented by a set of samples in the...
7 CFR 28.402 - Strict Middling Color.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Strict Middling Color. 28.402 Section 28.402... for the Color Grade of American Upland Cotton § 28.402 Strict Middling Color. Strict Middling Color is color which is within the range represented by a set of samples in the custody of the United States...
7 CFR 28.423 - Middling Spotted Color.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Middling Spotted Color. 28.423 Section 28.423... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Spotted Cotton § 28.423 Middling Spotted Color. Middling Spotted Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.432 - Middling Tinged Color.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Middling Tinged Color. 28.432 Section 28.432... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.432 Middling Tinged Color. Middling Tinged Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.432 - Middling Tinged Color.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Middling Tinged Color. 28.432 Section 28.432... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.432 Middling Tinged Color. Middling Tinged Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.434 - Low Middling Tinged Color.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Low Middling Tinged Color. 28.434 Section 28.434... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.434 Low Middling Tinged Color. Low Middling Tinged Color is color which is within the range represented by a set of samples in the...
7 CFR 28.432 - Middling Tinged Color.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Middling Tinged Color. 28.432 Section 28.432... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.432 Middling Tinged Color. Middling Tinged Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.423 - Middling Spotted Color.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Middling Spotted Color. 28.423 Section 28.423... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Spotted Cotton § 28.423 Middling Spotted Color. Middling Spotted Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.434 - Low Middling Tinged Color.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Low Middling Tinged Color. 28.434 Section 28.434... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.434 Low Middling Tinged Color. Low Middling Tinged Color is color which is within the range represented by a set of samples in the...
7 CFR 28.434 - Low Middling Tinged Color.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Low Middling Tinged Color. 28.434 Section 28.434... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.434 Low Middling Tinged Color. Low Middling Tinged Color is color which is within the range represented by a set of samples in the...
7 CFR 28.402 - Strict Middling Color.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Strict Middling Color. 28.402 Section 28.402... for the Color Grade of American Upland Cotton § 28.402 Strict Middling Color. Strict Middling Color is color which is within the range represented by a set of samples in the custody of the United States...
7 CFR 28.423 - Middling Spotted Color.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Middling Spotted Color. 28.423 Section 28.423... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Spotted Cotton § 28.423 Middling Spotted Color. Middling Spotted Color is color which is within the range represented by a set of samples in the custody of...
Comprehensive Quantification of the Spastic Catch in Children with Cerebral Palsy
ERIC Educational Resources Information Center
Lynn, Bar-On; Erwin, Aertbelien; Guy, Molenaers; Herman, Bruyninckx; Davide, Monari; Ellen, Jaspers; Anne, Cazaerck; Kaat, Desloovere
2013-01-01
In clinical settings, the spastic catch is judged subjectively. This study assessed the psychometric properties of objective parameters that define and quantify the severity of the spastic catch in children with cerebral palsy (CP). A convenience sample of children with spastic CP (N = 46; age range: 4-16 years) underwent objective spasticity…
The inherent sampling and preservational biases of the archaeological record make it difficult to quantify prehistoric human diets, especially in coastal settings, where populations had access to a wide range of marine and terrestrial food sources. In certain cases, geochemica...
7 CFR 28.423 - Middling Spotted Color.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Middling Spotted Color. 28.423 Section 28.423... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Spotted Cotton § 28.423 Middling Spotted Color. Middling Spotted Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.434 - Low Middling Tinged Color.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Low Middling Tinged Color. 28.434 Section 28.434... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.434 Low Middling Tinged Color. Low Middling Tinged Color is color which is within the range represented by a set of samples in the...
7 CFR 28.432 - Middling Tinged Color.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Middling Tinged Color. 28.432 Section 28.432... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Tinged Cotton § 28.432 Middling Tinged Color. Middling Tinged Color is color which is within the range represented by a set of samples in the custody of...
7 CFR 28.402 - Strict Middling Color.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Strict Middling Color. 28.402 Section 28.402... for the Color Grade of American Upland Cotton § 28.402 Strict Middling Color. Strict Middling Color is color which is within the range represented by a set of samples in the custody of the United States...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, W.A.; LaDelfe, C.M.; Weaver, T.A.
1978-10-01
During the field seasons of 1976 and 1977, 1,060 natural water and 1,240 waterborne sediment samples were collected from 1,768 locations in the Trinidad, Colorado, NTMS quadrangle. The samples from this 19,600-km2 area were analyzed at the Los Alamos Scientific Laboratory for total uranium. The uranium concentrations in waters ranged from less than the detection limit of 0.02 parts per billion (ppb) to 88.3 ppb, with a mean value of 4.05 ppb. The concentrations in sediments ranged from 1.3 parts per million (ppm) to 721.9 ppm, with a mean value of 5.55 ppm. Based on simple statistical analyses of these data, arbitrary anomaly thresholds were set at 20 ppb for water samples and 12 ppm for sediment samples. By this definition, 58 water and 39 sediment samples were considered anomalous. At least five areas delineated by the data appear to warrant more detailed investigations. Twenty-six anomalous water samples outline a broad area corresponding to the axis of the Apishapa uplift, seven others form a cluster in Huerfano Park, and five others outline a small area in the northern part of the San Luis Valley. Twenty-three anomalous sediment samples outline an area corresponding generally to Precambrian metamorphic rocks in the Culebra Range, and seven anomalous sediment samples form a cluster near Crestone Peak in the Sangre de Cristo Mountains.
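A generic illustration of setting an anomaly threshold from "simple statistical analyses" of a skewed geochemical variable, here the 97.5th percentile of synthetic lognormal uranium concentrations; the survey's actual thresholds (20 ppb for water, 12 ppm for sediment) were chosen by its authors and are not reproduced by this sketch.

```python
# Percentile-based anomaly threshold on a synthetic, right-skewed uranium distribution.
import numpy as np

rng = np.random.default_rng(7)
uranium_ppb = rng.lognormal(mean=np.log(4.0), sigma=0.9, size=1060)  # synthetic water data

threshold = np.percentile(uranium_ppb, 97.5)       # flag the highest ~2.5% of samples
anomalous = uranium_ppb > threshold
print(f"threshold = {threshold:.1f} ppb, {anomalous.sum()} of {len(uranium_ppb)} samples flagged")
```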
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
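Of the two sampling techniques named above, Latin hypercube sampling is straightforward to sketch: one stratified draw per interval in each parameter dimension, followed by an independent permutation per dimension. The parameter bounds below are placeholders, not GRAPE's.

```python
# Minimal Latin hypercube sampling sketch over illustrative parameter bounds.
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    rng = np.random.default_rng(seed)
    dims = len(bounds)
    # one point per stratum in [0, 1), then shuffle each dimension independently
    u = (rng.random((n_samples, dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(dims):
        rng.shuffle(u[:, d])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

samples = latin_hypercube(8, bounds=[(0.1, 2.0), (-5.0, 5.0), (100.0, 500.0)])
print(samples)   # 8 parameter sets, one per stratum in every dimension
```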
Ordoñez, Edgar Y; Rodil, Rosario; Quintana, José Benito; Cela, Rafael
2015-02-15
A new analytical procedure involving the use of water and a low percentage of ethanol combined with high-temperature liquid chromatography-tandem mass spectrometry has been developed for the determination of nine high-intensity sweeteners in a variety of drink samples. The method permitted the analysis in 23 min (including column reequilibration) while consuming only 0.85 mL of a green organic solvent (ethanol). This methodology provided limits of detection (after 50-fold dilution) in the 0.05-10 mg/L range, with recoveries (obtained from five different types of beverages) in the 86-110% range and relative standard deviation values lower than 12%. Finally, the method was applied to 25 different samples purchased in Spain, where acesulfame and sucralose were the most frequently detected analytes (>50% of the samples) and cyclamate was found above the legislation limit set by the European Union in one sample and at the regulation boundary in three others. Copyright © 2014 Elsevier Ltd. All rights reserved.
Galileo photometry of Apollo landing sites
NASA Technical Reports Server (NTRS)
Helfenstein, P.; Veverka, J.; Head, James W.; Pieters, C.; Pratt, S.; Mustard, J.; Klaasen, K.; Neukum, G.; Hoffmann, H.; Jaumann, R.
1993-01-01
As of December 1992, the Galileo spacecraft performed its second and final flyby (EM2) of the Earth-Moon system, during which it acquired Solid State Imaging (SSI) camera images of the lunar surface suitable for photometric analysis using Hapke's photometric model. These images, together with those from the first flyby (EM1) in December 1989, provide observations of all of the Apollo landing sites over a wide range of photometric geometries and at eight broadband filter wavelengths ranging from 0.41 micron to 0.99 micron. We have completed a preliminary photometric analysis of Apollo landing sites visible in EM1 images and developed a new strategy for a more complete analysis of the combined EM1 and EM2 data sets in conjunction with telescopic observations and spectrogoniometric measurements of returned lunar samples. No existing single data set, whether from spacecraft flyby, telescopic observation, or laboratory analysis of returned samples, describes completely the light scattering behavior of a particular location on the Moon at all angles of incidence (i), emission (e), and phase (a). Earthbased telescopic observations of particular lunar sites provide good coverage of incidence and phase angles, but their range in emission angle is limited to only a few degrees because of the Moon's synchronous rotation. Spacecraft flyby observations from Galileo are now available for specific lunar features at many photometric geometries unobtainable from Earth; however, this data set lacks coverage at very small phase angles (a less than 13 deg) important for distinguishing the well-known 'opposition effect'. Spectrogoniometric measurements from returned lunar samples can provide photometric coverage at almost any geometry; however, mechanical properties of prepared particulate laboratory samples, such as particle compaction and macroscopic roughness, likely differ from those on the lunar surface. In this study, we have developed methods for the simultaneous analysis of all three types of data: we combine Galileo and telescopic observations to obtain the most complete coverage with photometric geometry, and use spectrogoniometric observations of lunar soils to help distinguish the photometric effects of macroscopic roughness from those caused by particle phase function behavior (i.e., the directional scattering properties of regolith particles).
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2014-11-01
Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, the organic carbon concentration is measured using thermal methods such as Thermal-Optical Reflectance (TOR) from quartz fiber filters. Here, methods are presented whereby Fourier Transform Infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters are used to accurately predict TOR OC. Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filters. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites sampled during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to artifact-corrected TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date, which leads to precise and accurate OC predictions by FT-IR, as indicated by a high coefficient of determination (R² = 0.96), low bias (0.02 μg m-3; all μg m-3 values are based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM/OC, which reflects the organic composition of the particulate matter and is obtained from the organic functional group composition; this division also leads to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM/OC or ammonium/OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
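As a rough illustration of the calibration step described above, the sketch below fits a partial least squares model to spectra and reports bias and error on a held-out set. The synthetic spectra, the component count and the random split are assumptions for illustration only; the study split its samples by site and date rather than randomly.

```python
# A minimal sketch (not the authors' code) of calibrating absorbance spectra to
# TOR OC with partial least squares regression. Spectra, OC values, the number
# of components and the split are placeholders chosen for illustration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
spectra = rng.random((794, 2000))        # 794 samples x 2000 wavenumber points
tor_oc = rng.random(794) * 5.0           # artifact-corrected TOR OC (ug m-3)

X_cal, X_test, y_cal, y_test = train_test_split(
    spectra, tor_oc, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
y_pred = pls.predict(X_test).ravel()

bias = np.mean(y_pred - y_test)
error = np.sqrt(np.mean((y_pred - y_test) ** 2))
print(f"bias = {bias:.3f} ug m-3, error = {error:.3f} ug m-3")
```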
Korzynska, Anna; Roszkowiak, Lukasz; Pijanowska, Dorota; Kozlowski, Wojciech; Markiewicz, Tomasz
2014-01-01
The aim of this study is to compare digital images of tissue biopsies captured with an optical microscope using the bright-field technique under various light conditions. The range of colour variation in tissue samples immunohistochemically stained with 3,3'-Diaminobenzidine and Haematoxylin is immense and arises from various sources. One of them is an inadequate setting of the camera's white balance with respect to the microscope's light colour temperature. Although this type of error can easily be handled at the image acquisition stage, it can also be eliminated afterwards with colour adjustment algorithms. The examination of the dependence of colour variation on the microscope's light temperature and the camera settings was performed as introductory research for the process of automatic colour standardization. Six fields of view with empty space among the tissue samples were selected for analysis. Each field of view was acquired 225 times with various microscope light temperatures and camera white balance settings. Fourteen randomly chosen images were corrected and compared with the reference image by the following methods: Mean Square Error, Structural SIMilarity and visual assessment by a viewer. For two types of background and two types of objects, the statistical image descriptors (range, median, mean and its standard deviation of chromaticity on the a and b channels of the CIELab colour space, and luminance L) and the local colour variability for the objects' specific area were calculated. The results were averaged over six images acquired under the same light conditions and camera settings for each sample. The analysis of the results leads to the following conclusions: (1) images collected with the white balance setting adjusted to the light colour temperature cluster in a certain area of chromatic space; (2) the process of white balance correction for images collected with camera white balance settings not matched to the light temperature moves the image descriptors into the proper chromatic space but simultaneously changes the luminance value. The process of image unification, in the sense of colour fidelity, can therefore be handled in a separate introductory stage before automatic image analysis.
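The comparison step (corrected image versus reference image) could be scripted along the following lines; the file names are hypothetical placeholders and scikit-image is used only as one convenient implementation of MSE and SSIM.

```python
# A minimal sketch of comparing a white-balance-corrected image with a
# reference image using Mean Square Error and Structural SIMilarity; file
# names are placeholders, not from the cited study.
from skimage import io, img_as_float
from skimage.metrics import mean_squared_error, structural_similarity

reference = img_as_float(io.imread("reference_field_of_view.png"))
corrected = img_as_float(io.imread("white_balance_corrected.png"))

mse = mean_squared_error(reference, corrected)
ssim = structural_similarity(reference, corrected, channel_axis=-1)
print(f"MSE = {mse:.4f}, SSIM = {ssim:.4f}")  # SSIM near 1 means very similar
```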
Solid phase microextraction applied to the analysis of organophosphorus insecticides in fruits.
Fytianos, K; Raikos, N; Theodoridis, G; Velinova, Z; Tsoukali, H
2006-12-01
Trace amounts of organophosphorus pesticides (OPs) were determined in various fruits by headspace solid phase microextraction (HS-SPME) and gas chromatography-nitrogen phosphorus detection (GC-NPD). Sampling from the headspace enhanced method selectivity while at the same time improving fiber lifetime and method sensitivity. Diazinon, parathion, methyl parathion, malathion and fenitrothion were determined in various fruits: more than 150 samples of 21 types of fruits were studied. SPME-GC-NPD provided a useful and very efficient analytical tool: method linearity ranged from 1.2 to 700 ng/ml. Limits of detection (LODs) and quantitation (LOQs) ranged from 0.03 to 3 ng/ml and from 0.12 to 10 ng/ml, respectively, values well below the residue limits set by the EU. Less than 2% of the samples were found positive, containing amounts higher than the EU limits. The effect of fruit peeling and washing was also investigated.
Testate amoebae communities sensitive to surface moisture conditions in Patagonian peatlands
NASA Astrophysics Data System (ADS)
Loisel, J.; Booth, R.; Charman, D.; van Bellen, S.; Yu, Z.
2017-12-01
Here we examine moss surface samples that were collected during three field campaigns (2005, 2010, 2014) across southern Patagonian peatlands to assess the potential use of testate amoebae and 13C isotope data as proxy indicators of soil moisture. These proxies have been widely tested across North America, but their use as paleoecological tools remains sparse in the southern hemisphere. Samples were collected along a hydrological gradient spanning a range of water table depths from 0 cm in wet hollows to over 85 cm in dry hummocks. Moss moisture content was measured in the field. Over 25 taxa were identified, many of them not found in North America. Ordinations indicate statistically significant and dominant effects of soil moisture and water table depth on testate assemblages, though interestingly 13C is even more strongly correlated with testate amoebae than the directly measured soil conditions. It is possible that the moss 13C signature constitutes a compound indicator that represents seasonal soil moisture better than opportunistic sampling during field campaigns. There is no significant effect of year or site across the dataset. In addition to providing a training set that translates testate amoeba moisture tolerance ranges into water table depth for Patagonian peatlands, we also compare our results with those from the North American training set to show that, despite 'novel' Patagonian taxa, the robustness of international training sets is probably sufficient to quantify most changes in soil moisture from any site around the world. We also identify key indicator species that are shown to be of universal value in peat-based hydrological reconstructions.
Ruiz-Jiménez, J; Priego-Capote, F; Luque de Castro, M D
2006-08-01
A study of the feasibility of Fourier transform medium infrared spectroscopy (FT-midIR) for the analytical determination of fatty acid profiles, including trans fatty acids, is presented. The training and validation sets used to develop general FT-midIR equations, comprising 75% (102 samples) and 25% (36 samples) of the samples after removal of spectral outliers, were built with samples from 140 commercial and home-made bakery products. The concentration of the analytes in the samples used for this study is within the typical range found in these kinds of products. Both sets were independent; thus, the validation set was only used for testing the equations. The criterion used for selection of the validation set was samples with the highest number of neighbours and the greatest separation between them (H<0.6). Partial least squares regression and cross-validation were used for multivariate calibration. The FT-midIR method does not require post-extraction manipulation and gives information about the fatty acid profile in 2 min. The 14:0, 16:0, 18:0, 18:1 and 18:2 fatty acids can be determined with excellent precision and other fatty acids with good precision according to the Shenk criteria (R² ≥ 0.90 and SEP = 1-1.5 SEL, and R² = 0.70-0.89 and SEP = 2-3 SEL, respectively). The results obtained with the proposed method were compared with those provided by the conventional method based on GC-MS. At the 95% significance level, the differences between the values obtained for the different fatty acids were within the experimental error.
Goesling, Brian; Colman, Silvie; Trenholm, Christopher; Terzian, Mary; Moore, Kristin
2014-05-01
This systematic review provides a comprehensive, updated assessment of programs with evidence of effectiveness in reducing teen pregnancy, sexually transmitted infections (STIs), or associated sexual risk behaviors. The review was conducted in four steps. First, multiple literature search strategies were used to identify relevant studies released from 1989 through January 2011. Second, identified studies were screened against prespecified eligibility criteria. Third, studies were assessed by teams of two trained reviewers for the quality and execution of their research designs. Fourth, for studies that passed the quality assessment, the review team extracted and analyzed information on the research design, study sample, evaluation setting, and program impacts. A total of 88 studies met the review criteria for study quality and were included in the data extraction and analysis. The studies examined a range of programs delivered in diverse settings. Most studies had mixed-gender and predominantly African-American research samples (70% and 51%, respectively). Randomized controlled trials accounted for the large majority (87%) of included studies. Most studies (76%) included multiple follow-ups, with sample sizes ranging from 62 to 5,244. Analysis of the study impact findings identified 31 programs with evidence of effectiveness. Research conducted since the late 1980s has identified more than two dozen teen pregnancy and STI prevention programs with evidence of effectiveness. Key strengths of this research are the large number of randomized controlled trials, the common use of multiple follow-up periods, and attention to a broad range of programs delivered in diverse settings. Two main gaps are a lack of replication studies and the need for more research on Latino youth and other high-risk populations. In addressing these gaps, researchers must overcome common limitations in study design, analysis, and reporting that have negatively affected prior research. Copyright © 2014 Society for Adolescent Health and Medicine. All rights reserved.
Symanski, E.; Kupper, L. L.; Rappaport, S. M.
1998-01-01
OBJECTIVES: To conduct a comprehensive evaluation of long term changes in occupational exposure among a broad cross section of industries worldwide. METHODS: A review of the scientific literature identified studies that reported historical changes in exposure. About 700 sets of data from 119 published and several unpublished sources were compiled. Data were published over a 30 year period in 25 journals that spanned a range of disciplines. For each data set, the average exposure level was compiled for each period and details on the contaminant, the industry and location, changes in the threshold limit value (TLV), as well as the type of sampling method were recorded. Spearman rank correlation coefficients were used to identify monotonic changes in exposure over time and simple linear regression analyses were used to characterise trends in exposure. RESULTS: About 78% of the natural log transformed data showed linear trends towards lower exposure levels whereas 22% indicated increasing trends. (The Spearman rank correlation analyses produced a similar breakdown between exposures monotonically increasing or decreasing over time.) Although the rates of reduction for the data showing downward trends ranged from -1% to -62% per year, most exposures declined at rates between -4% and -14% per year (the interquartile range), with a median value of -8% per year. Exposures seemed to increase at rates that were slightly lower than those of exposures which have declined over time. Data sets that showed downward (versus upward) trends were influenced by several factors including type and carcinogenicity of the contaminant, type of monitoring, historical changes in the threshold limit values (TLVs), and period of sampling. CONCLUSIONS: This review supports the notion that occupational exposures are generally lower today than they were years or decades ago. However, such trends seem to have been affected by factors related to the contaminant, as well as to the period and type of sampling. PMID:9764107
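The trend figures quoted above follow from regressing log-transformed exposure on time and converting the slope to an annual percentage change; a minimal sketch with invented numbers is shown below.

```python
# A minimal sketch of the log-linear trend calculation: the slope of
# ln(exposure) versus year converts to an annual percent change. The exposure
# values here are illustrative, not from the compiled data sets.
import numpy as np
from scipy import stats

years = np.array([1975, 1980, 1985, 1990, 1995])
exposure = np.array([12.0, 8.5, 6.0, 4.1, 3.0])   # period-average exposure levels

slope, intercept, r, p, se = stats.linregress(years, np.log(exposure))
annual_change = (np.exp(slope) - 1.0) * 100.0      # percent change per year
print(f"annual change = {annual_change:.1f}% per year")  # about -7% here
```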
Debray, Thomas P A; Vergouwe, Yvonne; Koffijberg, Hendrik; Nieboer, Daan; Steyerberg, Ewout W; Moons, Karel G M
2015-03-01
It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies with independent data from samples that are "different but related" compared with the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models. We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their corresponding case-mix differences. We subsequently assess the model's performance in the validation sample and interpret the performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting. We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, the two other samples assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. The proposed framework enhances the interpretation of findings at external validation of prediction models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan
2016-01-01
Comprehensive two-dimensional gas chromatography with flame ionization detection combined with unfolded partial least squares is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components used to build the model, determined from the minimum root-mean-square error of leave-one-out cross-validation, was four. In this regard, blends of gasoline with kerosene, white spirit and paint thinner, as frequently used adulterants, were used to make the calibration samples. Appropriate statistical parameters (regression coefficient of 0.996-0.998, root-mean-square error of prediction of 0.005-0.010 and relative error of prediction of 1.54-3.82% for the calibration set) show the reliability of the developed method. In addition, the developed method is externally validated with three samples in a validation set (with a relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five real gasoline samples collected from gas stations were analysed; the gasoline proportions were in the range of 70-85%, and the relative standard deviations were below 8.5% for the different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
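For readers unfamiliar with the baseline step, the sketch below shows a one-dimensional asymmetric least squares correction of the kind commonly attributed to Eilers and Boelens; the smoothness (lam) and asymmetry (p) values are assumptions, and the study applied a two-dimensional variant to the GCxGC data.

```python
# A minimal sketch of asymmetric least squares (AsLS) baseline estimation for a
# single chromatogram; lam and p are illustrative, not the study's settings.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Estimate a smooth baseline lying mostly below the signal y."""
    m = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(m, m - 2))  # 2nd differences
    w = np.ones(m)
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, m, m)
        Z = W + lam * (D @ D.T)
        z = spsolve(Z, w * y)
        # asymmetric weights: points above the baseline (peaks) get weight p
        w = p * (y > z) + (1 - p) * (y < z)
    return z

# Usage: corrected_signal = y - asls_baseline(y)
```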
Ecological tolerances of Miocene larger benthic foraminifera from Indonesia
NASA Astrophysics Data System (ADS)
Novak, Vibor; Renema, Willem
2018-01-01
To provide a comprehensive palaeoenvironmental reconstruction based on larger benthic foraminifera (LBF), a quantitative analysis of their assemblage composition is needed. Besides microfacies analysis, which includes the environmental preferences of foraminiferal taxa, statistical analyses should also be employed. Therefore, detrended correspondence analysis and cluster analysis were performed on relative abundance data of identified LBF assemblages deposited in mixed carbonate-siliciclastic (MCS) systems and blue-water (BW) settings. The studied MCS system localities include ten sections from the central part of the Kutai Basin in East Kalimantan, ranging from late Burdigalian to Serravallian in age. The BW samples were collected from eleven sections of the Bulu Formation on Central Java, dated as Serravallian. Results from detrended correspondence analysis reveal significant differences between these two environmental settings. Cluster analysis produced five clusters of samples: clusters 1 and 2 comprise dominantly MCS samples, clusters 3 and 4 are dominated by BW samples, and cluster 5 shows a mixed composition with both MCS and BW samples. The cluster analysis results were then subjected to indicator species analysis, which generated three groups among the LBF taxa: typical assemblage indicators, regularly occurring taxa and rare taxa. By interpreting the results of detrended correspondence analysis, cluster analysis and indicator species analysis, along with the environmental preferences of the identified LBF taxa, a palaeoenvironmental model is proposed for the distribution of LBF in Miocene MCS systems and adjacent BW settings of Indonesia.
Vongkamjan, Kitiya; Benjakul, Soottawat; Kim Vu, Hue Thi; Vuddhakul, Varaporn
2017-09-01
Listeria monocytogenes is a foodborne pathogen commonly found in seafood processing environments, thus presenting a challenge for eradication from seafood processing facilities. Monitoring the prevalence and subtype diversity of L. monocytogenes together with phages that are specific to Listeria spp. ("Listeria phages") will provide knowledge of the bacteria-phage ecology in food processing plants. In this work, a total of 595 samples were collected from raw material, finished seafood products and environmental samples from different sites of a seafood processing plant during 17 sampling visits over 1.5 years of study. L. monocytogenes and Listeria spp. (non-monocytogenes) were found in 22 (3.7%) and 43 (7.2%) samples, respectively, whereas 29 Listeria phages were isolated from 9 (1.5%) phage-positive samples. DNA fingerprint analysis of the L. monocytogenes isolates revealed 11 Random Amplified Polymorphic DNA (RAPD) profiles, with two subtypes frequently observed over time. Our data reveal the presence of Listeria phages within the same seafood processing environments where a diverse set of L. monocytogenes subtypes was also found. Although serotype 4b was observed at lower frequency, the data indicate that isolates from this seafood processing plant belonged to both of the epidemiologically important serotypes 1/2a and 4b, which may suggest a potential public health risk. The phages (all with a uniform genome size of 65 ± 2 kb) were classified into 9 host range groups, representing both broad and narrow host ranges. While most L. monocytogenes isolates from this facility were susceptible to phages, five isolates showed resistance to 12-20 phages. Variations in host range among the Listeria phages isolated from the food processing plant may affect the presence of a diverse set of L. monocytogenes isolates derived from the same processing environment in Thailand. Copyright © 2017 Elsevier Ltd. All rights reserved.
Twinn, Sheila; Thompson, David R; Lopez, Violeta; Lee, Diana T F; Shiu, Ann T Y
2005-01-01
Different factors have been shown to influence the development of models of advanced nursing practice (ANP) in primary-care settings. Although ANP is being developed in hospitals in Hong Kong, China, it remains undeveloped in primary care, and little is known about the factors determining the development of such a model. The aims of the present study were to investigate the contribution of different models of nursing practice to the care provided in primary-care settings in Hong Kong, and to examine the determinants influencing the development of a model of ANP in such settings. A multiple case study design was selected using both qualitative and quantitative methods of data collection. Sampling methods reflected the population groups and the stage of the case study. Sampling included a total population of 41 nurses, from whom a secondary volunteer sample was drawn for face-to-face interviews. In each case study, a convenience sample of 70 patients was recruited, from whom 10 were selected purposively for a semi-structured telephone interview. An opportunistic sample of healthcare professionals was also selected. The within-case and cross-case analyses demonstrated four major determinants influencing the development of ANP: (1) current models of nursing practice; (2) the use of skills mix; (3) the perceived contribution of ANP to patient care; and (4) patients' expectations of care. The level of autonomy of individual nurses was considered particularly important. These determinants were used to develop a model of ANP for a primary-care setting. In conclusion, although the findings highlight the complexity of the determinants of the development and implementation of ANP in primary care, the proposed model suggests that definitions of advanced practice are appropriate to a range of practice models and cultural settings. However, the findings highlight the importance of assessing the effectiveness of such models in terms of cost and long-term patient outcomes.
The Impact of Biosampling Procedures on Molecular Data Interpretation
Sköld, Karl; Alm, Henrik; Scholz, Birger
2013-01-01
The separation of biological from technical variation without extensive use of technical replicates is often challenging, particularly in the context of different forms of protein and peptide modifications. Biosampling procedures in the research laboratory are easier to conduct within a shorter time frame and under controlled conditions than clinical sampling, with the latter often having issues of reproducibility. But is research laboratory biosampling really less variable? Biosampling introduces, within minutes, rapid tissue-specific changes in the cellular microenvironment, thus inducing a range of different pathways associated with cell survival. Biosampling involves hypoxia and, depending on the circumstances, hypothermia, conditions for which evolutionarily conserved defense strategies exist across a range of species and which are relevant to a range of biomedical conditions. It remains unclear to what extent such adaptive processes are reflected in different biosampling procedures or how important they are for the definition of sample quality. Lately, an increasing number of comparative studies on different biosampling approaches, post-mortem effects and pre-sampling biological state have investigated such immediate early biosampling effects. Commonalities between biosampling effects and a range of ischemia/reperfusion- and hypometabolism/anoxia-associated biological phenomena indicate that even small variations in post-sampling time intervals are likely to introduce a set of nonrandom and tissue-specific effects of experimental importance (both in vivo and in vitro). This review integrates the information provided by these comparative studies and discusses how an adaptive biological perspective on biosampling procedures may be relevant for sample quality issues. PMID:23382104
The CAMELS data set: catchment attributes and meteorology for large-sample studies
NASA Astrophysics Data System (ADS)
Addor, Nans; Newman, Andrew J.; Mizukami, Naoki; Clark, Martyn P.
2017-10-01
We present a new data set of attributes for 671 catchments in the contiguous United States (CONUS) minimally impacted by human activities. This complements the daily time series of meteorological forcing and streamflow provided by Newman et al. (2015b). To produce this extension, we synthesized diverse and complementary data sets to describe six main classes of attributes at the catchment scale: topography, climate, streamflow, land cover, soil, and geology. The spatial variations among basins over the CONUS are discussed and compared using a series of maps. The large number of catchments, combined with the diversity of the attributes we extracted, makes this new data set well suited for large-sample studies and comparative hydrology. In comparison to the similar Model Parameter Estimation Experiment (MOPEX) data set, this data set relies on more recent data, it covers a wider range of attributes, and its catchments are more evenly distributed across the CONUS. This study also involves assessments of the limitations of the source data sets used to compute catchment attributes, as well as detailed descriptions of how the attributes were computed. The hydrometeorological time series provided by Newman et al. (2015b, https://doi.org/10.5065/D6MW2F4D) together with the catchment attributes introduced in this paper (https://doi.org/10.5065/D6G73C3Q) constitute the freely available CAMELS data set, which stands for Catchment Attributes and MEteorology for Large-sample Studies.
Adjemian, Jennifer C Z; Girvetz, Evan H; Beckett, Laurel; Foley, Janet E
2006-01-01
More than 20 species of fleas in California are implicated as potential vectors of Yersinia pestis. Extremely limited spatial data exist for plague vectors, a key component in understanding where the greatest risks to human, domestic animal, and wildlife health exist. This study increases the spatial data available for 13 potential plague vectors by using the ecological niche modeling system Genetic Algorithm for Rule-Set Production (GARP) to predict their respective distributions. Because the available sample sizes in our data set varied greatly from one species to another, we also performed an analysis of the robustness of GARP by using the data available for the flea Oropsylla montana (Baker) to quantify the effects that sample size and the chosen explanatory variables have on the final species distribution map. GARP effectively modeled the distributions of 13 vector species. Furthermore, our analyses show that all of these modeled ranges are robust, with a sample size of six fleas or greater not significantly impacting the percentage of the in-state area where the flea was predicted to be found, or the testing accuracy of the model. The results of this study will help guide the sampling efforts of future studies focusing on plague vectors.
Dunham, Jason B.; Chelgren, Nathan D.; Heck, Michael P.; Clark, Steven M.
2013-01-01
We evaluated the probability of detecting larval lampreys using different methods of backpack electrofishing in wadeable streams in the U.S. Pacific Northwest. Our primary objective was to compare capture of lampreys using electrofishing with standard settings for salmon and trout to settings specifically adapted for capture of lampreys. Field work consisted of removal sampling by means of backpack electrofishing in 19 sites in streams representing a broad range of conditions in the region. Captures of lampreys at these sites were analyzed with a modified removal-sampling model and Bayesian estimation to measure the relative odds of capture using the lamprey-specific settings compared with the standard salmonid settings. We found that the odds of capture were 2.66 (95% credible interval, 0.87–78.18) times greater for the lamprey-specific settings relative to standard salmonid settings. When estimates of capture probability were applied to estimating the probabilities of detection, we found high (>0.80) detectability when the actual number of lampreys in a site was greater than 10 individuals and effort was at least two passes of electrofishing, regardless of the settings used. Further work is needed to evaluate key assumptions in our approach, including the evaluation of individual-specific capture probabilities and population closure. For now our results suggest comparable results are possible for detection of lampreys by using backpack electrofishing with salmonid- or lamprey-specific settings.
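The relationship between abundance, number of passes and detectability can be sketched with a simple closed form, assuming every larva shares the same per-pass capture probability p (an assumption for illustration, not the authors' hierarchical Bayesian model).

```python
# A minimal sketch of detection probability under removal sampling when each of
# N individuals independently escapes every pass with probability (1 - p).
def detection_probability(p: float, n_individuals: int, n_passes: int) -> float:
    """P(at least one capture) across all individuals and passes."""
    return 1.0 - (1.0 - p) ** (n_individuals * n_passes)

# Example with an assumed p = 0.15: ten larvae and two passes already give a
# detection probability above 0.95.
print(detection_probability(0.15, 10, 2))
```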
Seo, Dongmin; Paek, Sung-Ho; Oh, Sangwoo; Seo, Sungkyu; Paek, Se-Hwan
2016-01-01
The incidence of diabetes is continually increasing, and by 2030 it is expected to have increased by 69% and 20% in underdeveloped and developed countries, respectively. Therefore, glucose sensors are likely to remain in high demand in medical device markets. For the current study, we developed a needle-type bio-layer interference (BLI) sensor that can continuously monitor glucose levels. Using dialysis procedures, we were able to obtain hypoglycemic samples from commercial human serum. These dialysis-derived samples, alongside samples of normal human serum, were used to evaluate the utility of the sensor for detection over the clinically relevant range of glucose concentrations (70–200 mg/dL), revealing high system performance across a wide glycemic range (45–500 mg/dL). Reversibility and reproducibility were also tested over a range of time spans. Combined with existing BLI system technology, this sensor holds great promise for use as a wearable online continuous glucose monitoring system for patients in a hospital setting. PMID:27669267
Bacteriological Survey of Fresh Pork Sausage Produced at Establishments Under Federal Inspection
Surkiewicz, Bernard F.; Johnston, Ralph W.; Elliott, R. Paul; Simmons, E. Ruth
1972-01-01
At the time of manufacture, 75% of 67 sets of finished fresh pork sausage collected in 44 plants had aerobic plate counts of 500,000 or fewer/g; 88% contained 100 or fewer E. coli/g; and 75% contained 100 or fewer S. aureus/g (geometric means of 10 samples). Salmonellae were isolated from 28% of 529 samples of pork trimmings used for sausage, and from 28% of 560 finished sausage samples. Semiquantitative analysis revealed that salmonellae were present at low levels; more than 80% of the salmonellae-positive samples were positive only in 25-g portions (negative in 1.0- and 0.1-g portions). PMID:4553799
Tylosin content in meat and honey samples over a two-year period in Croatia.
Kolanović, Božica S; Bilandžić, Nina; Varenina, Ivana; Božić, Durđica
2014-01-01
A total of 646 meat and 96 honey samples were examined over a 2-year period for the presence of tylosin residues. The ELISA method used was validated according to the criteria of Commission Decision 2002/657/EC established for qualitative screening methods. The CCβ values were 32.1 µg kg⁻¹ in muscle and 24.4 µg kg⁻¹ in honey. Recoveries from spiked samples ranged from 66.4% to 118.6%, with coefficients of variation between 12.6% and 18.6%. None of the investigated samples showed the presence of tylosin. The calculated estimated daily intakes show exposure levels lower than the acceptable daily intakes set by the World Health Organization.
Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C
2010-01-01
Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, the root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for the medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions, and the upper control limits were then calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using subgroups of 3 patients, three times each shift. These values were plotted on a control chart in real time. The control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability and, if and when stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared with weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier Espana. All rights reserved.
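A minimal sketch of the control limit calculation for subgroups of three measurements is given below; the constants are standard Shewhart X-bar/R chart values for n = 3, and the set-up error data are invented for illustration.

```python
# A minimal sketch of X-bar/R control limits for subgroups of three set-up
# error measurements; A2, D3, D4 are the standard constants for n = 3 and the
# data are made up, not from the cited clinic.
import numpy as np

A2, D3, D4 = 1.023, 0.0, 2.574

subgroups = np.array([          # e.g. medial-lateral set-up errors (cm)
    [0.2, -0.1, 0.3],
    [0.0,  0.4, -0.2],
    [0.1,  0.1,  0.2],
])

xbar = subgroups.mean(axis=1)                          # subgroup means
rng = subgroups.max(axis=1) - subgroups.min(axis=1)    # subgroup ranges

xbarbar, rbar = xbar.mean(), rng.mean()
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar
print(f"X-bar limits: {lcl_x:.2f} to {ucl_x:.2f} cm; R upper limit: {ucl_r:.2f} cm")
```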
Martínez Vega, Mabel V; Sharifzadeh, Sara; Wulfsohn, Dvoralai; Skov, Thomas; Clemmensen, Line Harder; Toldam-Andersen, Torben B
2013-12-01
Visible-near infrared spectroscopy remains a method of increasing interest as a fast alternative for the evaluation of fruit quality. The success of the method is assumed to be achieved by using large sets of samples to produce robust calibration models. In this study we used representative samples of an early and a late season apple cultivar to evaluate model robustness (prediction ability and error) for soluble solids content (SSC) and acidity prediction in the wavelength range 400-1100 nm. A total of 196 middle-early season and 219 late season apple (Malus domestica Borkh.) samples of the cvs 'Aroma' and 'Holsteiner Cox' were used to construct spectral models for SSC and acidity. Partial least squares (PLS), ridge regression (RR) and elastic net (EN) models were used to build prediction models. Furthermore, we compared three sub-sample arrangements for forming training and test sets ('smooth fractionator', by date of measurement after harvest, and random). Using the 'smooth fractionator' sampling method, fewer spectral bands (26) and elastic net resulted in improved performance for SSC models of 'Aroma' apples, with a coefficient of variation CV(SSC) = 13%. The model showed consistently low errors and bias (PLS/EN: R²cal = 0.60/0.60; SEC = 0.88/0.88 °Brix; Bias(cal) = 0.00/0.00; R²val = 0.33/0.44; SEP = 1.14/1.03; Bias(val) = 0.04/0.03). However, prediction of acidity and SSC (CV = 5%) for the late cultivar 'Holsteiner Cox' produced inferior results compared with 'Aroma'. It was possible to construct local SSC and acidity calibration models for early season apple cultivars with CVs of SSC and acidity around 10%. The overall model performance of these data sets also depends on the proper selection of training and test sets. The 'smooth fractionator' protocol provided an objective method for obtaining training and test sets that capture the existing variability of the fruit samples for the construction of visible-NIR prediction models. The implication is that by using such 'efficient' sampling methods for obtaining an initial sample of fruit that represents the variability of the population, and for sub-sampling to form training and test sets, it should be possible to use relatively small sample sizes to develop spectral predictions of fruit quality. Using feature selection and elastic net appears to improve the SSC model performance in terms of R², RMSECV and RMSEP for 'Aroma' apples. © 2013 Society of Chemical Industry.
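The elastic net step, which combines shrinkage with the band selection mentioned above, could look roughly like the following; the data, penalty mixing values and band count are assumptions for illustration, not the published calibration.

```python
# A minimal sketch (synthetic data, assumed parameters) of elastic net
# regression with implicit wavelength selection for an SSC calibration.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(1)
X = rng.random((196, 350))                           # 196 apples x 350 bands
y = 10.0 + 5.0 * X[:, 40] + rng.normal(0, 0.5, 196)  # synthetic SSC (deg Brix)

model = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5).fit(X, y)
selected_bands = np.flatnonzero(model.coef_)         # bands kept by the L1 part
print(f"{selected_bands.size} bands retained; R^2 (fit) = {model.score(X, y):.2f}")
```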
Dolch, Michael E; Janitza, Silke; Boulesteix, Anne-Laure; Graßmann-Lichtenauer, Carola; Praun, Siegfried; Denzer, Wolfgang; Schelling, Gustav; Schubert, Sören
2016-12-01
Identification of microorganisms in positive blood cultures still relies on standard techniques such as Gram staining followed by culturing with definite microorganism identification. Alternatively, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry or the analysis of headspace volatile compound (VC) composition produced by cultures can help to differentiate between microorganisms under experimental conditions. This study assessed the efficacy of volatile compound based microorganism differentiation into Gram-negatives and -positives in unselected positive blood culture samples from patients. Headspace gas samples of positive blood culture samples were transferred to sterilized, sealed, and evacuated 20 ml glass vials and stored at -30 °C until batch analysis. Headspace gas VC content analysis was carried out via an auto sampler connected to an ion-molecule reaction mass spectrometer (IMR-MS). Measurements covered a mass range from 16 to 135 u including CO2, H2, N2, and O2. Prediction rules for microorganism identification based on VC composition were derived using a training data set and evaluated using a validation data set within a random split validation procedure. One-hundred-fifty-two aerobic samples growing 27 Gram-negatives, 106 Gram-positives, and 19 fungi and 130 anaerobic samples growing 37 Gram-negatives, 91 Gram-positives, and two fungi were analysed. In anaerobic samples, ten discriminators were identified by the random forest method allowing for bacteria differentiation into Gram-negative and -positive (error rate: 16.7 % in validation data set). For aerobic samples the error rate was not better than random. In anaerobic blood culture samples of patients IMR-MS based headspace VC composition analysis facilitates bacteria differentiation into Gram-negative and -positive.
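A hedged sketch of the random split validation described above follows: VC intensities are used to classify cultures into Gram-negative versus Gram-positive with a random forest. The data, number of trees and split fraction are assumptions, not the study's settings.

```python
# A minimal sketch of random-split validation of a random forest classifier on
# headspace VC profiles; the data and parameters are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
vc_profiles = rng.random((130, 120))      # 130 cultures x 120 mass channels
gram_negative = rng.integers(0, 2, 130)   # 1 = Gram-negative, 0 = Gram-positive

X_train, X_test, y_train, y_test = train_test_split(
    vc_profiles, gram_negative, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
error_rate = 1.0 - clf.score(X_test, y_test)
print(f"validation error rate = {error_rate:.1%}")
```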
The influence of sampling interval on the accuracy of trail impact assessment
Leung, Y.-F.; Marion, J.L.
1999-01-01
Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
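The resampling-simulation idea is easy to reproduce in outline: a census recorded at a fine interval is thinned to coarser point-sampling intervals and the resulting estimates are compared with the census value. The numbers below are invented, and the 'frequency' shown is simply the proportion of sampled points with an impact present.

```python
# A minimal sketch of thinning a census to simulate coarser point sampling; the
# trail length, census interval and impact probability are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
census_interval = 10                        # census resolution along the trail (m)
census = rng.random(700) < 0.15             # impact present/absent at each point

true_freq = census.mean()
for interval in (50, 100, 200, 500):        # simulated sampling intervals (m)
    step = interval // census_interval
    sample = census[::step]
    print(f"{interval:>4} m: estimated frequency = {sample.mean():.3f} "
          f"(census = {true_freq:.3f})")
```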
NASA Astrophysics Data System (ADS)
Amesbury, Matthew J.; Swindles, Graeme T.; Bobrov, Anatoly; Charman, Dan J.; Holden, Joseph; Lamentowicz, Mariusz; Mallon, Gunnar; Mazei, Yuri; Mitchell, Edward A. D.; Payne, Richard J.; Roland, Thomas P.; Turner, T. Edward; Warner, Barry G.
2016-11-01
In the decade since the first pan-European testate amoeba-based transfer function for peatland palaeohydrological reconstruction was published, a vast amount of additional data collection has been undertaken by the research community. Here, we expand the pan-European dataset from 128 to 1799 samples, spanning 35° of latitude and 55° of longitude. After the development of a new taxonomic scheme to permit compilation of data from a wide range of contributors and the removal of samples with high pH values, we developed ecological transfer functions using a range of model types and a dataset of ∼1300 samples. We rigorously tested the efficacy of these models using both statistical validation and independent test sets with associated instrumental data. Model performance measured by statistical indicators was comparable to other published models. Comparison to test sets showed that taxonomic resolution did not impair model performance and that the new pan-European model can therefore be used as an effective tool for palaeohydrological reconstruction. Our results question the efficacy of relying on statistical validation of transfer functions alone and support a multi-faceted approach to the assessment of new models. We substantiated recent advice that model outputs should be standardised and presented as residual values in order to focus interpretation on secure directional shifts, avoiding potentially inaccurate conclusions relating to specific water-table depths. The extent and diversity of the dataset highlighted that, at the taxonomic resolution applied, a majority of taxa had broad geographic distributions, though some morphotypes appeared to have restricted ranges.
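For orientation, a stripped-down weighted-averaging transfer function (a common choice for testate amoeba water-table reconstruction, with the deshrinking step omitted) is sketched below; the abundances and depths are invented, and this is not the published pan-European model.

```python
# A minimal sketch of a weighted-averaging transfer function: taxon optima are
# abundance-weighted mean water-table depths (WTD) in the modern training set,
# and a fossil sample is reconstructed as the weighted mean of those optima.
import numpy as np

modern = np.array([[0.6, 0.3, 0.1],     # taxon relative abundances per sample
                   [0.2, 0.5, 0.3],
                   [0.1, 0.2, 0.7]])
wtd = np.array([5.0, 15.0, 35.0])       # observed WTD of each modern sample (cm)

optima = (modern * wtd[:, None]).sum(axis=0) / modern.sum(axis=0)

fossil = np.array([0.4, 0.4, 0.2])      # fossil assemblage to reconstruct
reconstructed_wtd = (fossil * optima).sum() / fossil.sum()
print(f"reconstructed WTD = {reconstructed_wtd:.1f} cm")
```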
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1986-02-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
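The log-probability regression method can be sketched as a regression of log concentrations on normal scores, with censored values imputed from the fitted line; the plotting-position formula, the single detection limit and the data below are simplifying assumptions for illustration.

```python
# A minimal sketch of log-probability regression (regression on order
# statistics) for a censored data set with one detection limit; the values and
# the Blom plotting positions are illustrative choices.
import numpy as np
from scipy import stats

detection_limit = 1.0
uncensored = np.array([1.2, 1.5, 2.3, 3.1, 4.8, 7.2])   # detected values, sorted
n_censored = 4                                           # observations < limit
n = len(uncensored) + n_censored

ranks = np.arange(1, n + 1)
pp = (ranks - 0.375) / (n + 0.25)        # Blom plotting positions
z = stats.norm.ppf(pp)                   # normal scores for all ranked data

# fit the lognormal line using only the uncensored (highest-ranked) values
slope, intercept, *_ = stats.linregress(z[n_censored:], np.log(uncensored))

# impute censored observations from the fitted line, then summarize
imputed = np.exp(intercept + slope * z[:n_censored])
full = np.concatenate([imputed, uncensored])
print(f"mean = {full.mean():.2f}, sd = {full.std(ddof=1):.2f}")
```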
Increased prevalence of sex chromosome aneuploidies in specific language impairment and dyslexia
Simpson, Nuala H; Addis, Laura; Brandler, William M; Slonims, Vicky; Clark, Ann; Watson, Jocelynne; Scerri, Thomas S; Hennessy, Elizabeth R; Bolton, Patrick F; Conti-Ramsden, Gina; Fairfax, Benjamin P; Knight, Julian C; Stein, John; Talcott, Joel B; O'Hare, Anne; Baird, Gillian; Paracchini, Silvia; Fisher, Simon E; Newbury, Dianne F; Consortium, SLI
2014-01-01
Aim: Sex chromosome aneuploidies increase the risk of spoken or written language disorders, but individuals with specific language impairment (SLI) or dyslexia do not routinely undergo cytogenetic analysis. We assess the frequency of sex chromosome aneuploidies in individuals with language impairment or dyslexia. Method: Genome-wide single nucleotide polymorphism genotyping was performed in three sample sets: a clinical cohort of individuals with speech and language deficits (87 probands: 61 males, 26 females; age range 4 to 23 years), a replication cohort of individuals with SLI, from both clinical and epidemiological samples (209 probands: 139 males, 70 females; age range 4 to 17 years), and a set of individuals with dyslexia (314 probands: 224 males, 90 females; age range 7 to 18 years). Results: In the clinical language-impaired cohort, three abnormal karyotypic results were identified in probands (proband yield 3.4%). In the SLI replication cohort, six abnormalities were identified, providing a consistent proband yield (2.9%). In the sample of individuals with dyslexia, two sex chromosome aneuploidies were found, giving a lower proband yield of 0.6%. In total, two XYY, four XXY (Klinefelter syndrome), three XXX, one XO (Turner syndrome), and one unresolved karyotype were identified. Interpretation: The frequency of sex chromosome aneuploidies within each of the three cohorts was increased over the expected population frequency (approximately 0.25%), suggesting that genetic testing may prove worthwhile for individuals with language and literacy problems and normal non-verbal IQ. Early detection of these aneuploidies can provide information and direct the appropriate management for individuals. PMID:24117048
Benchmarking contactless acquisition sensor reproducibility for latent fingerprint trace evidence
NASA Astrophysics Data System (ADS)
Hildebrandt, Mario; Dittmann, Jana
2015-03-01
Optical, nanometer-range, contactless, non-destructive sensor devices are promising acquisition techniques in crime scene trace forensics, e.g. for digitizing latent fingerprint traces. Before new approaches are introduced in crime investigations, innovations need to be positively tested and quality ensured. In this paper we investigate sensor reproducibility by studying different scans from four sensors: two chromatic white light sensors (CWL600/CWL1mm), one confocal laser scanning microscope, and one NIR/VIS/UV reflection spectrometer. Firstly, we perform intra-sensor reproducibility testing for the CWL600 with a privacy-conform test set of artificial-sweat printed, computer-generated fingerprints. We use 24 different fingerprint patterns as original samples (printing samples/templates) for printing with artificial sweat (physical trace samples) and their acquisition with contactless sensors, resulting in 96 sensor images, called scans or acquired samples. The second test set, for inter-sensor reproducibility assessment, consists of the first three patterns from the first test set, acquired in two consecutive scans using each device. We suggest using a simple feature set in the spatial and frequency domains, known from signal processing, and test its suitability for six different classifiers classifying scan data into small differences (reproducible) and large differences (non-reproducible). Furthermore, we suggest comparing the classification results with biometric verification scores (calculated with NBIS, with a threshold of 40) as a biometric reproducibility score. The Bagging classifier is in nearly all cases the most reliable classifier in our experiments, and the results are also confirmed by the biometric matching rates.
NASA Astrophysics Data System (ADS)
Ghani, Mastura; Adlan, Mohd Nordin; Kamal, Nurul Hana Mokhtar; Aziz, Hamidi Abdul
2017-10-01
A laboratory physical model study of riverbed filtration (RBeF) was conducted to investigate the suitability of soil from Tanah Merah, Kelantan, for RBeF. Soil samples were collected and transported to the Geotechnical Engineering Laboratory, Universiti Sains Malaysia, for sieve analysis and hydraulic conductivity tests. A physical model was fabricated with gravel packs laid at the bottom to cover the screen, and the soil sample was then placed above the gravel pack to a depth of 30 cm. River water samples from Lubok Buntar, Kedah, were used to simulate the effectiveness of RBeF for turbidity removal. Turbidity readings were taken at the inlet and outlet of the filter at a specified flow rate. Results from the soil characterization show that the soil samples were classified as poorly graded sand with hydraulic conductivity ranging from 7.95 x 10-3 to 6.61 x 10-2 cm/s. Turbidity removal ranged from 44.91% to 92.75% for water samples with turbidity before filtration in the range of 33.1-161 NTU. The turbidity of water samples after RBeF could be improved to as low as 2.53 NTU. River water samples with a higher turbidity of more than 160 NTU achieved only 50% or less removal in the physical model. Flow rates of the RBeF were in the range of 0.11-1.61 L/min, while flow rates at the inlet were set between 2 and 4 L/min. Based on the results of the soil classification, the Tanah Merah site is suitable for RBeF, whereas results from the physical model study suggest that a 30 cm depth of filter media is not sufficient if the river water turbidity is higher.
Protein and glycomic plasma markers for early detection of adenoma and colon cancer.
Rho, Jung-Hyun; Ladd, Jon J; Li, Christopher I; Potter, John D; Zhang, Yuzheng; Shelley, David; Shibata, David; Coppola, Domenico; Yamada, Hiroyuki; Toyoda, Hidenori; Tada, Toshifumi; Kumada, Takashi; Brenner, Dean E; Hanash, Samir M; Lampe, Paul D
2018-03-01
To discover and confirm blood-based colon cancer early-detection markers. We created a high-density antibody microarray to detect differences in protein levels in plasma from individuals diagnosed with colon cancer <3 years after blood was drawn (ie, prediagnostic) and cancer-free, matched controls. Potential markers were tested on plasma samples from people diagnosed with adenoma or cancer, compared with controls. Components of an optimal 5-marker panel were tested via immunoblotting using a third sample set, Luminex assay in a large fourth sample set and immunohistochemistry (IHC) on tissue microarrays. In the prediagnostic samples, we found 78 significantly (t-test) increased proteins, 32 of which were confirmed in the diagnostic samples. From these 32, optimal 4-marker panels of BAG family molecular chaperone regulator 4 (BAG4), interleukin-6 receptor subunit beta (IL6ST), von Willebrand factor (VWF) and CD44 or epidermal growth factor receptor (EGFR) were established. Each panel member and the panels also showed increases in the diagnostic adenoma and cancer samples in independent third and fourth sample sets via immunoblot and Luminex, respectively. IHC results showed increased levels of BAG4, IL6ST and CD44 in adenoma and cancer tissues. Inclusion of EGFR and CD44 sialyl Lewis-A and Lewis-X content increased the panel performance. The protein/glycoprotein panel was statistically significantly higher in colon cancer samples, characterised by areas under the curve ranging from 0.90 (95% CI 0.82 to 0.98) to 0.86 (95% CI 0.83 to 0.88) for the larger second and fourth sets, respectively. A panel including BAG4, IL6ST, VWF, EGFR and CD44 protein/glycomics performed well for detection of early stages of colon cancer and should be further examined in larger studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
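The panel performance figures above are areas under the ROC curve; with case/control labels and a combined panel score, the calculation reduces to a one-liner, as in this sketch with invented values.

```python
# A minimal sketch of computing the area under the ROC curve for a combined
# marker-panel score; labels and scores are invented, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

is_case = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0])            # 1 = colon cancer
panel_score = np.array([2.4, 1.9, 3.1, 1.2, 0.8, 1.1, 0.5, 1.4, 0.9])

print(f"AUC = {roc_auc_score(is_case, panel_score):.2f}")   # 0.95 for these values
```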
Griffin, Dale W.; Petrosky, Terry; Morman, Suzette A.; Luna, Vicki A.
2009-01-01
Soil samples were collected along a north-south transect extending from Manitoba, Canada, to the US-Mexico border near El Paso, Texas in 2004 (104 samples), a group of sites within New Orleans, Louisiana following Hurricane Katrina in 2005 (19 samples), and a Gulf Coast transect extending from Sulphur, Louisiana, to DeFuniak Springs, Florida, in 2007 (38 samples). Samples were collected from the top 40 cm of soil and were screened for the presence of total Bacillus species and Bacillus anthracis (anthrax), specifically using multiplex-polymerase chain reaction (PCR). Using an assay with a sensitivity of ~170 equivalent colony-forming units (CFU) g-1 field moist soil, the prevalence rate of Bacillus sp./B. anthracis in the north-south transect and the 2005 New Orleans post-Katrina sample set were 20/5% and 26/26%, respectively. Prevalence in the 2007 Gulf Coast sample set using an assay with a sensitivity of ~4 CFU g-1 of soil was 63/0%. Individual transect-set data indicate a positive relation between occurrences of species and soil moisture or soil constituents (i.e., Zn and Cu content). The 2005 New Orleans post-Katrina data indicated that B. anthracis is readily detectable in Gulf Coast soils following flood events. The data also indicated that occurrence, as it relates to soil chemistry, may be confounded by flood-induced dissemination of germinated cells and the mixing of soil constituents for short temporal periods following an event.
Griffin, Dale W.; Petrosky, T.; Morman, S.A.; Luna, V.A.
2009-01-01
Soil samples were collected along a north-south transect extending from Manitoba, Canada, to the US-Mexico border near El Paso, Texas in 2004 (104 samples), a group of sites within New Orleans, Louisiana following Hurricane Katrina in 2005 (19 samples), and a Gulf Coast transect extending from Sulphur, Louisiana, to DeFuniak Springs, Florida, in 2007 (38 samples). Samples were collected from the top 40 cm of soil and were screened for the presence of total Bacillus species and Bacillus anthracis (anthrax), specifically using multiplex-polymerase chain reaction (PCR). Using an assay with a sensitivity of ~170 equivalent colony-forming units (CFU) g-1 field moist soil, the prevalence rate of Bacillus sp./B. anthracis in the north-south transect and the 2005 New Orleans post-Katrina sample set were 20/5% and 26/26%, respectively. Prevalence in the 2007 Gulf Coast sample set using an assay with a sensitivity of ~4 CFU g-1 of soil was 63/0%. Individual transect-set data indicate a positive relation between occurrences of species and soil moisture or soil constituents (i.e., Zn and Cu content). The 2005 New Orleans post-Katrina data indicated that B. anthracis is readily detectable in Gulf Coast soils following flood events. The data also indicated that occurrence, as it relates to soil chemistry, may be confounded by flood-induced dissemination of germinated cells and the mixing of soil constituents for short temporal periods following an event.
ERIC Educational Resources Information Center
Pegler, Chris
2005-01-01
This paper draws on the presentation of three online pilot "series" of learning objects aimed at offering university staff convenient updating opportunities around issues connected with e-learning. The "Hot Topics" format presented short themed sets (series) of learning objects to a wide range of staff, encouraging sampling strategies to support…
Student Use of Animated Pedagogical Agents in a Middle School Science Inquiry Program
ERIC Educational Resources Information Center
Bowman, Catherine D. D.
2012-01-01
Animated pedagogical agents (APAs) have the potential to provide one-on-one, just-in-time instruction, guidance or mentoring in classrooms where such individualized human interactions may be infeasible. Much current APA research focuses on a wide range of design variables tested with small samples or in laboratory settings, while overlooking…
7 CFR 28.404 - Strict Low Middling Color.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Strict Low Middling Color. 28.404 Section 28.404... for the Color Grade of American Upland Cotton § 28.404 Strict Low Middling Color. Strict Low Middling Color is color which is within the range represented by a set of samples in the custody of the United...
7 CFR 28.406 - Strict Good Ordinary Color.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Strict Good Ordinary Color. 28.406 Section 28.406... for the Color Grade of American Upland Cotton § 28.406 Strict Good Ordinary Color. Strict Good Ordinary Color is color which is within the range represented by a set of samples in the custody of the...
7 CFR 28.404 - Strict Low Middling Color.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Strict Low Middling Color. 28.404 Section 28.404... for the Color Grade of American Upland Cotton § 28.404 Strict Low Middling Color. Strict Low Middling Color is color which is within the range represented by a set of samples in the custody of the United...
7 CFR 28.404 - Strict Low Middling Color.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Strict Low Middling Color. 28.404 Section 28.404... for the Color Grade of American Upland Cotton § 28.404 Strict Low Middling Color. Strict Low Middling Color is color which is within the range represented by a set of samples in the custody of the United...
7 CFR 28.406 - Strict Good Ordinary Color.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Strict Good Ordinary Color. 28.406 Section 28.406... for the Color Grade of American Upland Cotton § 28.406 Strict Good Ordinary Color. Strict Good Ordinary Color is color which is within the range represented by a set of samples in the custody of the...
7 CFR 28.406 - Strict Good Ordinary Color.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Strict Good Ordinary Color. 28.406 Section 28.406... for the Color Grade of American Upland Cotton § 28.406 Strict Good Ordinary Color. Strict Good Ordinary Color is color which is within the range represented by a set of samples in the custody of the...
7 CFR 28.406 - Strict Good Ordinary Color.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Strict Good Ordinary Color. 28.406 Section 28.406... for the Color Grade of American Upland Cotton § 28.406 Strict Good Ordinary Color. Strict Good Ordinary Color is color which is within the range represented by a set of samples in the custody of the...
7 CFR 28.404 - Strict Low Middling Color.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Strict Low Middling Color. 28.404 Section 28.404... for the Color Grade of American Upland Cotton § 28.404 Strict Low Middling Color. Strict Low Middling Color is color which is within the range represented by a set of samples in the custody of the United...
Functional Grammar in the ESL Classroom: Noticing, Exploring and Practicing
ERIC Educational Resources Information Center
Lock, Graham; Jones, Rodney
2011-01-01
A set of easy to use techniques helps students discover for themselves how grammar works in real world contexts and how grammatical choices are not just about form but about meaning. Sample teaching ideas, covering a wide range of grammatical topics including verb tense, voice, reference and the organization of texts, accompanies each procedure.…
7 CFR 28.406 - Strict Good Ordinary Color.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Strict Good Ordinary Color. 28.406 Section 28.406... for the Color Grade of American Upland Cotton § 28.406 Strict Good Ordinary Color. Strict Good Ordinary Color is color which is within the range represented by a set of samples in the custody of the...
7 CFR 28.404 - Strict Low Middling Color.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Strict Low Middling Color. 28.404 Section 28.404... for the Color Grade of American Upland Cotton § 28.404 Strict Low Middling Color. Strict Low Middling Color is color which is within the range represented by a set of samples in the custody of the United...
Doerry, Armin W.
2004-07-20
Movement of a GMTI radar during a coherent processing interval over which a set of radar pulses are processed may cause defocusing of a range-Doppler map in the video signal. This problem may be compensated by varying waveform or sampling parameters of each pulse to compensate for distortions caused by variations in viewing angles from the radar to the target.
NASA Astrophysics Data System (ADS)
Williams, Christopher J.; Moffitt, Christine M.
2003-03-01
An important emerging issue in fisheries biology is the health of free-ranging populations of fish, particularly with respect to the prevalence of certain pathogens. For many years, pathologists focused on captive populations and interest was in the presence or absence of certain pathogens, so it was economically attractive to test pooled samples of fish. Recently, investigators have begun to study individual fish prevalence from pooled samples. Estimation of disease prevalence from pooled samples is straightforward when assay sensitivity and specificity are perfect, but this assumption is unrealistic. Here we illustrate the use of a Bayesian approach for estimating disease prevalence from pooled samples when sensitivity and specificity are not perfect. We also focus on diagnostic plots to monitor the convergence of the Gibbs-sampling-based Bayesian analysis. The methods are illustrated with a sample data set.
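The pooled-testing setup above lends itself to a compact numerical illustration. The sketch below is not the authors' code: it computes the probability that a pool of k fish tests positive at prevalence p with imperfect sensitivity and specificity, then evaluates a flat-prior posterior on a grid rather than by the Gibbs sampler described in the abstract; the pool counts and assay characteristics are invented for the example.

```python
import numpy as np

def pool_positive_prob(p, k, se, sp):
    """P(a pool of k individuals tests positive) at individual prevalence p."""
    infected = 1.0 - (1.0 - p) ** k
    return se * infected + (1.0 - sp) * (1.0 - infected)

def grid_posterior(y, n, k, se, sp, grid=None):
    """Posterior over prevalence on a grid: flat prior, binomial likelihood for y of n positive pools."""
    grid = np.linspace(0.0, 1.0, 2001) if grid is None else grid
    theta = pool_positive_prob(grid, k, se, sp)
    log_like = y * np.log(theta) + (n - y) * np.log(1.0 - theta)
    w = np.exp(log_like - log_like.max())
    return grid, w / w.sum()

# Example: 40 pools of 5 fish each, 7 pools positive, assumed Se = 0.95 and Sp = 0.98.
grid, w = grid_posterior(y=7, n=40, k=5, se=0.95, sp=0.98)
print("posterior mean prevalence:", float((grid * w).sum()))
```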
2013-01-01
Background: Microsatellites are widely used for many genetic studies. In contrast to single nucleotide polymorphism (SNP) and genotyping-by-sequencing methods, they are readily typed in samples of low DNA quality/concentration (e.g. museum/non-invasive samples), and enable the quick, cheap identification of species, hybrids, clones and ploidy. Microsatellites also have the highest cross-species utility of all types of markers used for genotyping, but, despite this, when isolated from a single species, only a relatively small proportion will be of utility. Marker development of any type requires skill and time. The availability of sufficient “off-the-shelf” markers that are suitable for genotyping a wide range of species would not only save resources but also uniquely enable new comparisons of diversity among taxa at the same set of loci. No other marker types are capable of enabling this. We therefore developed a set of avian microsatellite markers with enhanced cross-species utility. Results: We selected highly-conserved sequences with a high number of repeat units in both of two genetically distant species. Twenty-four primer sets were designed from homologous sequences that possessed at least eight repeat units in both the zebra finch (Taeniopygia guttata) and chicken (Gallus gallus). Each primer sequence was a complete match to zebra finch and, after accounting for degenerate bases, at least 86% similar to chicken. We assessed primer-set utility by genotyping individuals belonging to eight passerine and four non-passerine species. The majority of the new Conserved Avian Microsatellite (CAM) markers amplified in all 12 species tested (on average, 94% in passerines and 95% in non-passerines). This new marker set is of especially high utility in passerines, with a mean 68% of loci polymorphic per species, compared with 42% in non-passerine species. Conclusions: When combined with previously described conserved loci, this new set of conserved markers will not only reduce the necessity and expense of microsatellite isolation for a wide range of genetic studies, including avian parentage and population analyses, but will also now enable comparisons of genetic diversity among different species (and populations) at the same set of loci, with no or reduced bias. Finally, the approach used here can be applied to other taxa in which appropriate genome sequences are available. PMID:23497230
Development and validation of an Argentine set of facial expressions of emotion.
Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro
2017-02-01
Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.
Nuclear Resonance Fluorescence Response of U-235
NASA Astrophysics Data System (ADS)
Warren, Glen
2008-05-01
Nuclear resonance fluorescence (NRF) is a physical process that provides an isotopic-specific signature that could be used for the identification and characterization of materials. The technique involves the detection of prompt discrete-energy photons emitted from a sample, which is exposed to photons in the MeV energy range. Potential applications of the technique range from detection of high explosives to characterization of special nuclear materials. Pacific Northwest National Laboratory and Passport Systems have collaboratively conducted a set of measurements to search for an NRF response of U-235 in the 1.5 to 9 MeV energy range. Results from these measurements will be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melentev, G. A., E-mail: gamelen@spbstu.ru; Shalygin, V. A.; Vorobjev, L. E.
2016-03-07
We present the results of experimental and theoretical studies of the surface plasmon polariton excitations in heavily doped GaN epitaxial layers. Reflection and emission of radiation in the frequency range of 2–20 THz including the Reststrahlen band were investigated for samples with grating etched on the sample surface, as well as for samples with flat surface. The reflectivity spectrum for p-polarized radiation measured for the sample with the surface-relief grating demonstrates a set of resonances associated with excitations of different surface plasmon polariton modes. Spectral peculiarities due to the diffraction effect have been also revealed. The characteristic features of the reflectivity spectrum, namely, frequencies, amplitudes, and widths of the resonance dips, are well described theoretically by a modified technique of rigorous coupled-wave analysis of Maxwell equations. The emissivity spectra of the samples were measured under epilayer temperature modulation by pulsed electric field. The emissivity spectrum of the sample with surface-relief grating shows emission peaks in the frequency ranges corresponding to the decay of the surface plasmon polariton modes. Theoretical analysis based on the blackbody-like radiation theory well describes the main peculiarities of the observed THz emission.
Results for five sets of forensic genetic markers studied in a Greek population sample.
Tomas, C; Skitsa, I; Steinmeier, E; Poulsen, L; Ampati, A; Børsting, C; Morling, N
2015-05-01
A population sample of 223 Greek individuals was typed for five sets of forensic genetic markers with the kits NGM SElect™, SNPforID 49plex, DIPplex®, Argus X-12 and PowerPlex® Y23. No significant deviation from Hardy-Weinberg expectations was observed for any of the studied markers after Holm-Šidák correction. Statistically significant (P<0.05) levels of linkage disequilibrium were observed between markers within two of the studied X-chromosome linkage groups. AMOVA analyses of the five sets of markers did not show population structure when the individuals were grouped according to their geographic origin. The Greek population grouped closely to the other European populations measured by FST* distances. The match probability ranged from a value of 1 in 2×10^7 males by using haplotype frequencies of four X-chromosome haplogroups in males to 1 in 1.73×10^21 individuals for 16 autosomal STRs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
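For readers unfamiliar with how match probabilities of this kind arise, the toy sketch below shows the usual Hardy-Weinberg calculation of a combined random match probability across independent autosomal loci. The allele frequencies are placeholders, not values from the Greek data set.

```python
def locus_match_probability(p, q=None):
    """Expected genotype frequency under HWE: p^2 for a homozygote, 2pq for a heterozygote."""
    return p * p if q is None else 2.0 * p * q

# Hypothetical allele frequencies at three independent loci (q=None marks a homozygote).
profile = [(0.11, 0.08), (0.15, None), (0.09, 0.21)]
combined = 1.0
for p, q in profile:
    combined *= locus_match_probability(p, q)
print("combined match probability ~ 1 in %.0f" % (1.0 / combined))
```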
Ruoff, Kaspar; Karoui, Romdhane; Dufour, Eric; Luginbühl, Werner; Bosset, Jacques-Olivier; Bogdanov, Stefan; Amado, Renato
2005-03-09
The potential of front-face fluorescence spectroscopy for the authentication of unifloral and polyfloral honey types (n = 57 samples) previously classified using traditional methods such as chemical, pollen, and sensory analysis was evaluated. Emission spectra were recorded between 280 and 480 nm (excit: 250 nm), 305 and 500 nm (excit: 290 nm), and 380 and 600 nm (excit: 373 nm) directly on honey samples. In addition, excitation spectra (290-440 nm) were recorded with the emission measured at 450 nm. A total of four different spectral data sets were considered for data analysis. After normalization of the spectra, chemometric evaluation of the spectral data was carried out using principal component analysis (PCA) and linear discriminant analysis (LDA). The rate of correct classification ranged from 36% to 100% by using single spectral data sets (250, 290, 373, 450 nm) and from 73% to 100% by combining these four data sets. For alpine polyfloral honey and the unifloral varieties investigated (acacia, alpine rose, honeydew, chestnut, and rape), correct classification ranged from 96% to 100%. This preliminary study indicates that front-face fluorescence spectroscopy is a promising technique for the authentication of the botanical origin of honey. It is nondestructive, rapid, easy to use, and inexpensive. The use of additional excitation wavelengths between 320 and 440 nm could increase the correct classification of the less characteristic fluorescent varieties.
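A minimal sketch of the chemometric workflow described above (normalized spectra reduced by PCA, then classified with LDA and scored by cross-validated correct-classification rate) is given below, assuming generic array shapes and synthetic data; it is illustrative only and not the published analysis.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
spectra = rng.normal(size=(57, 220))   # stand-in for 57 normalized emission spectra
labels = np.arange(57) % 6             # placeholder for six botanical classes

# Standardize, reduce with PCA, then classify with LDA; score by cross-validation.
model = make_pipeline(StandardScaler(), PCA(n_components=10), LinearDiscriminantAnalysis())
rate = cross_val_score(model, spectra, labels, cv=5).mean()
print("cross-validated correct classification rate: %.0f%%" % (100 * rate))
```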
Fast, Safe, Propellant-Efficient Spacecraft Motion Planning Under Clohessy-Wiltshire-Hill Dynamics
NASA Technical Reports Server (NTRS)
Starek, Joseph A.; Schmerling, Edward; Maher, Gabriel D.; Barbee, Brent W.; Pavone, Marco
2016-01-01
This paper presents a sampling-based motion planning algorithm for real-time and propellant-optimized autonomous spacecraft trajectory generation in near-circular orbits. Specifically, this paper leverages recent algorithmic advances in the field of robot motion planning to the problem of impulsively actuated, propellant-optimized rendezvous and proximity operations under the Clohessy-Wiltshire-Hill dynamics model. The approach calls upon a modified version of the FMT* algorithm to grow a set of feasible trajectories over a deterministic, low-dispersion set of sample points covering the free state space. To enforce safety, the tree is only grown over the subset of actively safe samples, from which there exists a feasible one-burn collision-avoidance maneuver that can safely circularize the spacecraft orbit along its coasting arc under a given set of potential thruster failures. Key features of the proposed algorithm include 1) theoretical guarantees in terms of trajectory safety and performance, 2) amenability to real-time implementation, and 3) generality, in the sense that a large class of constraints can be handled directly. As a result, the proposed algorithm offers the potential for widespread application, ranging from on-orbit satellite servicing to orbital debris removal and autonomous inspection missions.
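The planner above propagates relative motion between impulses with the Clohessy-Wiltshire-Hill model. A standard closed-form state transition matrix for that model is sketched below; this is textbook CWH dynamics, not the FMT*-based planner itself, and the mean motion and initial state in the example are arbitrary.

```python
import numpy as np

def cwh_state_transition(n, t):
    """State transition matrix for CWH relative motion; state = [x, y, z, vx, vy, vz],
    x radial, y along-track, z cross-track, n = mean motion of the reference orbit."""
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3*c,      0, 0,    s/n,          2*(1 - c)/n,      0],
        [6*(s - n*t),  1, 0,    2*(c - 1)/n,  (4*s - 3*n*t)/n,  0],
        [0,            0, c,    0,            0,                s/n],
        [3*n*s,        0, 0,    c,            2*s,              0],
        [6*n*(c - 1),  0, 0,   -2*s,          4*c - 3,          0],
        [0,            0, -n*s, 0,            0,                c],
    ])

# Propagate a 100 m radial offset for a quarter orbit in LEO (n ~ 0.00113 rad/s).
n = 0.00113
state0 = np.array([100.0, 0.0, 0.0, 0.0, 0.0, 0.0])
state_quarter = cwh_state_transition(n, (np.pi / 2) / n) @ state0
print(state_quarter)
```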
Adapted random sampling patterns for accelerated MRI.
Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf
2011-02-01
Variable density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions like polynomials of different order to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when optimization of the model parameter is carried out and yields superior results to non-optimized model parameters. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
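A rough sketch of the central idea, building a variable-density undersampling mask whose density follows the power spectrum of a reference image, is given below. The array sizes, acceleration factor, and sampling-without-replacement step are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def adapted_sampling_mask(reference_image, acceleration=4, rng=None):
    """Draw k-space sample locations with probability proportional to the reference power spectrum."""
    rng = np.random.default_rng(rng)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(reference_image))) ** 2
    pdf = spectrum / spectrum.sum()                 # sampling density derived from the power spectrum
    n_samples = reference_image.size // acceleration
    flat_idx = rng.choice(pdf.size, size=n_samples, replace=False, p=pdf.ravel())
    mask = np.zeros(pdf.size, dtype=bool)
    mask[flat_idx] = True
    return mask.reshape(pdf.shape)

mask = adapted_sampling_mask(np.random.rand(128, 128), acceleration=4)
print("sampled fraction of k-space:", mask.mean())
```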
Measurement of pH in whole blood by near-infrared spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, M. Kathleen; Maynard, John D.; Robinson, M. Ries
1999-03-01
Whole blood pH has been determined in vitro by using near-infrared spectroscopy over the wavelength range of 1500 to 1785 nm with multivariate calibration modeling of the spectral data obtained from two different sample sets. In the first sample set, the pH of whole blood was varied without controlling cell size and oxygen saturation (O2 Sat) variation. The result was that the red blood cell (RBC) size and O2 Sat correlated with pH. Although the partial least-squares (PLS) multivariate calibration of these data produced a good pH prediction (cross-validation standard error of prediction (CVSEP) = 0.046, R² = 0.982), the spectral data were dominated by scattering changes due to changing RBC size that correlated with the pH changes. A second experiment was carried out where the RBC size and O2 Sat were varied orthogonally to the pH variation. A PLS calibration of the spectral data obtained from these samples produced a pH prediction with an R² of 0.954 and a cross-validated standard error of prediction of 0.064 pH units. The robustness of the PLS calibration models was tested by predicting the data obtained from the other sets. The predicted pH values obtained from both data sets yielded R² values greater than 0.9 once the data were corrected for differences in hemoglobin concentration. For example, with the use of the calibration produced from the second sample set, the pH values from the first sample set were predicted with an R² of 0.92 after the predictions were corrected for bias and slope. It is shown that spectral information specific to pH-induced chemical changes in the hemoglobin molecule is contained within the PLS loading vectors developed for both the first and second data sets. It is this pH-specific information that allows the spectra dominated by pH-correlated scattering changes to provide robust pH predictive ability in the uncorrelated data, and vice versa. © 1999 Society for Applied Spectroscopy.
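As a loose illustration of the calibration strategy described above, the sketch below runs a cross-validated PLS regression and reports a CVSEP and R² on simulated data; the spectra, pH values, and number of latent variables are stand-ins, not the published data or model.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))                                            # stand-in NIR spectra
y = 7.2 + 0.3 * X[:, :5].mean(axis=1) + rng.normal(scale=0.02, size=60)  # synthetic reference pH

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
cvsep = np.sqrt(np.mean((y_cv - y) ** 2))   # cross-validated standard error of prediction
r2 = np.corrcoef(y_cv, y)[0, 1] ** 2
print(f"CVSEP = {cvsep:.3f} pH units, R^2 = {r2:.3f}")
```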
Stephens, G.C.; Evenson, E.B.; Detra, D.E.
1990-01-01
In mountainous regions containing extensive glacier systems there is a lack of suitable material for conventional geochemical sampling. As a result, in most geochemical sampling programs a few stream-sediment samples collected at, or near, the terminus of valley glaciers are used to evaluate the mineral potential of the glaciated area. We have developed and tested a technique which utilizes the medial moraines of valley glaciers for systematic geochemical exploration of the glacial catchment area. Moraine sampling provides geochemical information that is site-specific in that geochemical anomalies can be traced directly up-ice to bedrock sources. Traverses were made across the Trident and Susitna glaciers in the central Alaska Range where fine-grained (clay to sand size) samples were collected from each medial moraine. These samples were prepared and chemically analyzed to determine the concentration of specific elements. Fifty pebbles were collected at each moraine for archival purposes and for subsequent lithologic identification. Additionally, fifty cobbles and fifty boulders were examined and described at each sample site to determine the nature and abundance of lithologies present in the catchment area, the extent and nature of visible mineralization, the presence and intensity of hydrothermal alteration and the existence of veins, dikes and other minor structural features. Results from the central Alaska Range have delineated four distinct multi-element anomalies which are a response to potential mineralization up-ice from the medial moraine traverse. By integrating the lithologic, mineralogical and geochemical data the probable geological setting of the geochemical anomalies is determined. © 1990.
Nuclear Resonance Fluorescence of U-235 above 3 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, Glen A.; Caggiano, Joseph A.; Miller, Erin A.
Pacific Northwest National Laboratory and Passport Systems have collaborated to conduct measurements to search for a nuclear resonance fluorescence response of U-235 from 3 to 5 MeV using an 8 g sample of highly enriched uranium. These new measurements complement previously reported measurements below 3 MeV. Preliminary analysis indicates that no strong resonances exist for U-235 in this energy range. A second set of measurements focused on a signature search in the 5 to 10 MeV range is still under analysis.
Faruki, Hawazin; Mayhew, Gregory M; Fan, Cheng; Wilkerson, Matthew D; Parker, Scott; Kam-Morgan, Lauren; Eisenberg, Marcia; Horten, Bruce; Hayes, D Neil; Perou, Charles M; Lai-Goldman, Myla
2016-06-01
Context: A histologic classification of lung cancer subtypes is essential in guiding therapeutic management. Objective: To complement morphology-based classification of lung tumors, a previously developed lung subtyping panel (LSP) of 57 genes was tested using multiple public fresh-frozen gene-expression data sets and a prospectively collected set of formalin-fixed, paraffin-embedded lung tumor samples. Design: The LSP gene-expression signature was evaluated in multiple lung cancer gene-expression data sets totaling 2177 patients collected from 4 platforms: Illumina RNAseq (San Diego, California), Agilent (Santa Clara, California) and Affymetrix (Santa Clara) microarrays, and quantitative reverse transcription-polymerase chain reaction. Gene centroids were calculated for each of 3 genomic-defined subtypes: adenocarcinoma, squamous cell carcinoma, and neuroendocrine, the latter of which encompassed both small cell carcinoma and carcinoid. Classification by LSP into 3 subtypes was evaluated in both fresh-frozen and formalin-fixed, paraffin-embedded tumor samples, and agreement with the original morphology-based diagnosis was determined. Results: The LSP-based classifications demonstrated overall agreement with the original clinical diagnosis ranging from 78% (251 of 322) to 91% (492 of 538 and 869 of 951) in the fresh-frozen public data sets and 84% (65 of 77) in the formalin-fixed, paraffin-embedded data set. The LSP performance was independent of tissue-preservation method and gene-expression platform. Secondary, blinded pathology review of formalin-fixed, paraffin-embedded samples demonstrated concordance of 82% (63 of 77) with the original morphology diagnosis. Conclusions: The LSP gene-expression signature is a reproducible and objective method for classifying lung tumors and demonstrates good concordance with morphology-based classification across multiple data sets. The LSP panel can supplement morphologic assessment of lung cancers, particularly when classification by standard methods is challenging.
Knobel, L.L.; DeWayne, Cecil L.; Wegner, S.J.; Moore, L.L.
1992-01-01
From 1952 to 1988, about 140 curies of strontium-90 were discharged in liquid waste to disposal ponds and wells at the INEL (Idaho National Engineering Laboratory). Water from four wells was sampled as part of the U.S. Geological Survey's quality-assurance program to evaluate the effects of filtration and preservation methods on strontium-90 concentrations in ground water at the INEL. Water from each well was filtered through either a 0.45- or a 0.1-micrometer membrane filter; unfiltered samples also were collected. Two sets of filtered and two sets of unfiltered water samples were collected at each well. One of the two sets of water samples was field acidified. Strontium-90 concentrations ranged from below the reporting level to 52 ± 4 picocuries per liter. Descriptive statistics were used to determine reproducibility of the analytical results for strontium-90 concentrations in water from each well. Comparisons were made with unfiltered, acidified samples at each well. Analytical results for strontium-90 concentrations in water from well 88 were not in statistical agreement between the unfiltered, acidified sample and the filtered (0.45 micrometer), acidified sample. The strontium-90 concentration for water from well 88 was less than the reporting level. For water from wells with strontium-90 concentrations at or above the reporting level, 94 percent or more of the strontium-90 is in true solution or in colloidal particles smaller than 0.1 micrometer. These results suggest that changes in filtration and preservation methods used for sample collection do not significantly affect reproducibility of strontium-90 analyses in ground water at the INEL.
Knobel, L L; Cecil, L D; Wegner, S J; Moore, L L
1992-01-01
From 1952 to 1988, about 140 curies of strontium-90 were discharged in liquid waste to disposal ponds and wells at the INEL (Idaho National Engineering Laboratory). Water from four wells was sampled as part of the U.S. Geological Survey's quality-assurance program to evaluate the effects of filtration and preservation methods on strontium-90 concentrations in ground water at the INEL. Water from each well was filtered through either a 0.45- or a 0.1-micrometer membrane filter; unfiltered samples also were collected. Two sets of filtered and two sets of unfiltered water samples were collected at each well. One of the two sets of water samples was field acidified. Strontium-90 concentrations ranged from below the reporting level to 52±4 picocuries per liter. Descriptive statistics were used to determine reproducibility of the analytical results for strontium-90 concentrations in water from each well. Comparisons were made with unfiltered, acidified samples at each well. Analytical results for strontium-90 concentrations in water from well 88 were not in statistical agreement between the unfiltered, acidified sample and the filtered (0.45 micrometer), acidified sample. The strontium-90 concentration for water from well 88 was less than the reporting level. For water from wells with strontium-90 concentrations at or above the reporting level, 94 percent or more of the strontium-90 is in true solution or in colloidal particles smaller than 0.1 micrometer. These results suggest that changes in filtration and preservation methods used for sample collection do not significantly affect reproducibility of strontium-90 analyses in ground water at the INEL.
Batista, Philip D; Janes, Jasmine K; Boone, Celia K; Murray, Brent W; Sperling, Felix A H
2016-09-01
Assessments of population genetic structure and demographic history have traditionally been based on neutral markers while explicitly excluding adaptive markers. In this study, we compared the utility of putatively adaptive and neutral single-nucleotide polymorphisms (SNPs) for inferring mountain pine beetle population structure across its geographic range. Both adaptive and neutral SNPs, and their combination, allowed range-wide structure to be distinguished and delimited a population that has recently undergone range expansion across northern British Columbia and Alberta. Using an equal number of both adaptive and neutral SNPs revealed that adaptive SNPs resulted in a stronger correlation between sampled populations and inferred clustering. Our results suggest that adaptive SNPs should not be excluded prior to analysis from neutral SNPs as a combination of both marker sets resulted in better resolution of genetic differentiation between populations than either marker set alone. These results demonstrate the utility of adaptive loci for resolving population genetic structure in a nonmodel organism.
Wright, Adam; Pang, Justine; Feblowitz, Joshua C; Maloney, Francine L; Wilcox, Allison R; Ramelson, Harley Z; Schneider, Louise I; Bates, David W
2011-01-01
Accurate knowledge of a patient's medical problems is critical for clinical decision making, quality measurement, research, billing and clinical decision support. Common structured sources of problem information include the patient problem list and billing data; however, these sources are often inaccurate or incomplete. To develop and validate methods of automatically inferring patient problems from clinical and billing data, and to provide a knowledge base for inferring problems. We identified 17 target conditions and designed and validated a set of rules for identifying patient problems based on medications, laboratory results, billing codes, and vital signs. A panel of physicians provided input on a preliminary set of rules. Based on this input, we tested candidate rules on a sample of 100,000 patient records to assess their performance compared to gold standard manual chart review. The physician panel selected a final rule for each condition, which was validated on an independent sample of 100,000 records to assess its accuracy. Seventeen rules were developed for inferring patient problems. Analysis using a validation set of 100,000 randomly selected patients showed high sensitivity (range: 62.8-100.0%) and positive predictive value (range: 79.8-99.6%) for most rules. Overall, the inference rules performed better than using either the problem list or billing data alone. We developed and validated a set of rules for inferring patient problems. These rules have a variety of applications, including clinical decision support, care improvement, augmentation of the problem list, and identification of patients for research cohorts.
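To make the rule-based inference and its validation metrics concrete, here is a toy sketch with one invented rule (flagging diabetes from medications or HbA1c) and the sensitivity/positive-predictive-value calculation used to score such rules against chart review; the rule logic and thresholds are placeholders, not the published knowledge base.

```python
def infer_diabetes(patient):
    """Flag diabetes if an antidiabetic medication or a diagnostic HbA1c result is present."""
    on_med = any(m in patient.get("medications", []) for m in ("metformin", "insulin"))
    high_a1c = any(lab["name"] == "HbA1c" and lab["value"] >= 6.5
                   for lab in patient.get("labs", []))
    return on_med or high_a1c

def sensitivity_ppv(predictions, gold):
    """Sensitivity and positive predictive value against a gold-standard label list."""
    tp = sum(p and g for p, g in zip(predictions, gold))
    fn = sum((not p) and g for p, g in zip(predictions, gold))
    fp = sum(p and (not g) for p, g in zip(predictions, gold))
    sens = tp / (tp + fn) if tp + fn else float("nan")
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    return sens, ppv

patients = [
    {"medications": ["metformin"], "labs": []},
    {"medications": [], "labs": [{"name": "HbA1c", "value": 5.4}]},
]
gold = [True, False]
preds = [infer_diabetes(p) for p in patients]
print(sensitivity_ppv(preds, gold))   # -> (1.0, 1.0) for this toy example
```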
Castro-Vale, Ivone; Severo, Milton; Carvalho, Davide; Mota-Cardoso, Rui
2015-01-01
Emotion recognition is very important for social interaction. Several mental disorders influence facial emotion recognition. War veterans and their offspring are subject to an increased risk of developing psychopathology. Emotion recognition is an important aspect that needs to be addressed in this population. To our knowledge, no test exists that is validated for use with war veterans and their offspring. The current study aimed to validate the JACFEE photo set to study facial emotion recognition in war veterans and their offspring. The JACFEE photo set was presented to 135 participants, comprised of 62 male war veterans and 73 war veterans' offspring. The participants identified the facial emotion presented from amongst the possible seven emotions that were tested for: anger, contempt, disgust, fear, happiness, sadness, and surprise. A loglinear model was used to evaluate whether the agreement between the intended and the chosen emotions was higher than the expected. Overall agreement between chosen and intended emotions was 76.3% (Cohen kappa = 0.72). The agreement ranged from 63% (sadness expressions) to 91% (happiness expressions). The reliability by emotion ranged from 0.617 to 0.843 and the overall JACFEE photo set Cronbach alpha was 0.911. The offspring showed higher agreement when compared with the veterans (RR: 41.52 vs 12.12, p < 0.001), which confirms the construct validity of the test. The JACFEE set of photos showed good validity and reliability indices, which makes it an adequate instrument for researching emotion recognition ability in the study sample of war veterans and their respective offspring.
Castro-Vale, Ivone; Severo, Milton; Carvalho, Davide; Mota-Cardoso, Rui
2015-01-01
Emotion recognition is very important for social interaction. Several mental disorders influence facial emotion recognition. War veterans and their offspring are subject to an increased risk of developing psychopathology. Emotion recognition is an important aspect that needs to be addressed in this population. To our knowledge, no test exists that is validated for use with war veterans and their offspring. The current study aimed to validate the JACFEE photo set to study facial emotion recognition in war veterans and their offspring. The JACFEE photo set was presented to 135 participants, comprised of 62 male war veterans and 73 war veterans’ offspring. The participants identified the facial emotion presented from amongst the possible seven emotions that were tested for: anger, contempt, disgust, fear, happiness, sadness, and surprise. A loglinear model was used to evaluate whether the agreement between the intended and the chosen emotions was higher than the expected. Overall agreement between chosen and intended emotions was 76.3% (Cohen kappa = 0.72). The agreement ranged from 63% (sadness expressions) to 91% (happiness expressions). The reliability by emotion ranged from 0.617 to 0.843 and the overall JACFEE photo set Cronbach alpha was 0.911. The offspring showed higher agreement when compared with the veterans (RR: 41.52 vs 12.12, p < 0.001), which confirms the construct validity of the test. The JACFEE set of photos showed good validity and reliability indices, which makes it an adequate instrument for researching emotion recognition ability in the study sample of war veterans and their respective offspring. PMID:26147938
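The chance-corrected agreement statistic quoted above (Cohen's kappa) can be computed directly from intended-versus-chosen label pairs, as in the sketch below; the example labels are synthetic stand-ins for the JACFEE responses.

```python
import numpy as np

def cohens_kappa(intended, chosen):
    """Cohen's kappa from paired categorical ratings."""
    labels = sorted(set(intended) | set(chosen))
    idx = {lab: i for i, lab in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)))
    for a, b in zip(intended, chosen):
        cm[idx[a], idx[b]] += 1
    n = cm.sum()
    po = np.trace(cm) / n                           # observed agreement
    pe = (cm.sum(axis=1) @ cm.sum(axis=0)) / n**2   # agreement expected by chance
    return (po - pe) / (1 - pe)

intended = ["happy", "sad", "anger", "happy", "fear", "sad"]
chosen   = ["happy", "sad", "anger", "happy", "sad",  "sad"]
print("kappa = %.2f" % cohens_kappa(intended, chosen))
```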
Moderate resolution spectrophotometry of high redshift quasars
NASA Technical Reports Server (NTRS)
Schneider, Donald P.; Schmidt, Maarten; Gunn, James E.
1991-01-01
A uniform set of photometry and high signal-to-noise moderate resolution spectroscopy of 33 quasars with redshifts larger than 3.1 is presented. The sample consists of 17 newly discovered quasars (two with redshifts in excess of 4.4) and 16 sources drawn from the literature. The objects in this sample have r magnitudes between 17.4 and 21.4; their luminosities range from -28.8 to -24.9. Three of the 33 objects are broad absorption line quasars. A number of possible high redshift damped Ly-alpha systems were found.
Water quality and possible sources of nitrate in the Cimarron Terrace Aquifer, Oklahoma, 2003
Masoner, Jason R.; Mashburn, Shana L.
2004-01-01
Water from the Cimarron terrace aquifer in northwest Oklahoma commonly has nitrate concentrations that exceed the maximum contaminant level of 10 milligrams per liter of nitrite plus nitrate as nitrogen (referred to as nitrate) set by the U.S. Environmental Protection Agency for public drinking water supplies. Starting in July 2003, the U.S. Geological Survey, in cooperation with the Oklahoma Department of Environmental Quality, conducted a study in the Cimarron terrace aquifer to assess the water quality and possible sources of nitrate. A qualitative and quantitative approach based on multiple lines of evidence from chemical analysis of nitrate, nitrogen isotopes in nitrate, pesticides (indicative of cropland fertilizer application), and wastewater compounds (indicative of animal or human wastewater) was used to indicate possible sources of nitrate in the Cimarron terrace aquifer. Nitrate was detected in 44 of 45 ground-water samples and had the greatest median concentration (8.03 milligrams per liter) of any nutrient analyzed. Nitrate concentrations ranged from <0.06 to 31.8 milligrams per liter. Seventeen samples had nitrate concentrations exceeding the maximum contaminant level of 10 milligrams per liter. Nitrate concentrations in agricultural areas were significantly greater than nitrate concentrations in grassland areas. Pesticides were detected in 15 of 45 ground-water samples. Atrazine and deethylatrazine, a metabolite of atrazine, were detected most frequently. Deethylatrazine was detected in water samples from 9 wells and atrazine was detected in samples from 8 wells. Tebuthiuron was detected in water samples from 5 wells; metolachlor was detected in samples from 4 wells; prometon was detected in samples from 4 wells; and alachlor was detected in 1 well. None of the detected pesticide concentrations exceeded the maximum contaminant level or health advisory level set by the U.S. Environmental Protection Agency. Wastewater compounds were detected in 28 of 45 groundwater samples. Of the 20 wastewater compounds detected, 11 compounds were from household chemicals, 3 compounds were hydrocarbons, 2 compounds were industrial chemicals, 2 compounds were pesticides, 1 compound was of animal source, and 1 compound was a detergent compound. The most frequently detected wastewater compound was phenol, which was detected in 23 wells. N,N-diethyl-meta-toluamide (DEET) was detected in water samples from 5 wells. Benzophenone, ethanol-2-butoxy-phosphate, and tributylphosphate were detected in water samples from 3 wells. Fertilizer was determined to be the possible source of nitrate in samples from 13 of 45 wells sampled, with δ15N values ranging from 0.43 to 3.46 permil. The possible source of nitrate for samples from the greatest number of wells (22 wells) was from mixed sources of nitrate from fertilizer, septic or manure, or natural sources. Mixed nitrate sources had δ15N values ranging from 0.25 to 9.83 permil. Septic or manure was determined as the possible source of nitrate in samples from 2 wells. Natural sources were determined to be the possible source of nitrate in samples from 7 wells, with δ15N values ranging from 0.83 to 9.44 permil.
Park, Sang-Hoon; Lee, David; Lee, Sang-Goog
2018-02-01
For the last few years, many feature extraction methods have been proposed based on biological signals. Among these, brain signals have the advantage that they can be obtained even from people with peripheral nervous system damage. Motor imagery electroencephalograms (EEG) are inexpensive to measure, offer a high temporal resolution, and are intuitive. Therefore, they have received a significant amount of attention in various fields, including signal processing, cognitive science, and medicine. The common spatial pattern (CSP) algorithm is a useful method for feature extraction from motor imagery EEG. However, performance degradation occurs in a small-sample setting (SSS), because the CSP depends on sample-based covariance. Since the active frequency range is different for each subject, it is also inconvenient to set the frequency range separately every time. In this paper, we propose a feature extraction method based on a filter bank to solve these problems. The proposed method consists of five steps. First, motor imagery EEG is divided by using a filter bank. Second, the regularized CSP (R-CSP) is applied to the divided EEG. Third, we select the features according to mutual information based on the individual feature algorithm. Fourth, parameter sets are selected for the ensemble. Finally, we classify using an ensemble based on the features. The brain-computer interface competition III data set IVa is used to evaluate the performance of the proposed method. The proposed method improves the mean classification accuracy by 12.34%, 11.57%, 9%, 4.95%, and 4.47% compared with CSP, SR-CSP, R-CSP, filter bank CSP (FBCSP), and SR-FBCSP. Compared with the filter bank R-CSP, which is a parameter selection version of the proposed method, the classification accuracy is improved by 3.49%. In particular, the proposed method shows a large improvement in performance in the SSS.
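The core CSP step referenced throughout the abstract can be sketched as a generalized eigendecomposition of the two class covariance matrices, as below. The filter bank, regularization, and mutual-information-based feature selection of the proposed method are omitted, and the trial arrays are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """CSP spatial filters from two classes of trials, each of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: ca w = lambda (ca + cb) w; extremes of lambda give the most discriminative filters.
    eigvals, eigvecs = eigh(ca, ca + cb)
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T                       # shape (2*n_pairs, n_channels)

def log_variance_features(trials, filters):
    """Log of normalized band-power (variance) of spatially filtered trials."""
    feats = []
    for t in trials:
        var = (filters @ t).var(axis=1)
        feats.append(np.log(var / var.sum()))
    return np.array(feats)

rng = np.random.default_rng(0)
a, b = rng.normal(size=(30, 22, 500)), rng.normal(size=(30, 22, 500))
W = csp_filters(a, b)
features = log_variance_features(a, W)   # (30 trials, 6 features)
```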
NASA Astrophysics Data System (ADS)
Pakyuz-Charrier, Evren; Lindsay, Mark; Ogarko, Vitaliy; Giraud, Jeremie; Jessell, Mark
2018-04-01
Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matter; they express our understanding of geometries in depth. For that reason, 3-D geological models have a wide range of practical applications including but not restricted to civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator's parameterization) and the inherent lack of knowledge in areas where there are no observations combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making it is critical that all 3-D geological models provide accurate estimates of uncertainty. This paper's focus is set on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models. The plausible models are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed. The methods pertain to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution. Pole vector sampling is proposed as a more rigorous alternative to dip vector sampling for planar features and the use of a Bayesian approach to disturbance distribution parameterization is suggested. The influence of incorrect disturbance distributions is discussed and propositions are made and evaluated on synthetic and realistic cases to address the cited issues. The distribution of the errors of the observed data (i.e., scedasticity) is shown to affect the quality of prior distributions for MCUE. Results demonstrate that the proposed workflows improve the reliability of uncertainty estimation and diminish the occurrence of artifacts.
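One concrete piece of the MCUE workflow, perturbing a measured orientation by sampling its pole vector from a von Mises-Fisher disturbance distribution, can be sketched as follows. The concentration parameter and mean pole are assumed inputs, and this is an illustration of the idea rather than the authors' implementation.

```python
import numpy as np

def sample_vmf(mu, kappa, size=1, rng=None):
    """Draw unit vectors on the sphere from a von Mises-Fisher distribution centered on mu.
    Assumes mu is not antipodal to the z-axis (sufficient for this sketch)."""
    rng = np.random.default_rng(rng)
    mu = np.asarray(mu, dtype=float)
    mu /= np.linalg.norm(mu)
    xi = rng.random(size)
    w = 1.0 + np.log(xi + (1.0 - xi) * np.exp(-2.0 * kappa)) / kappa     # cosine of angular deviation
    theta = rng.uniform(0.0, 2.0 * np.pi, size)
    # Build samples around the z-axis, then rotate the z-axis onto mu (Rodrigues rotation).
    v = np.column_stack([np.sqrt(1 - w**2) * np.cos(theta),
                         np.sqrt(1 - w**2) * np.sin(theta),
                         w])
    z = np.array([0.0, 0.0, 1.0])
    if np.allclose(mu, z):
        return v
    axis = np.cross(z, mu)
    axis /= np.linalg.norm(axis)
    angle = np.arccos(np.clip(np.dot(z, mu), -1.0, 1.0))
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    return v @ R.T

# 500 perturbed pole vectors around an assumed measured pole, with an assumed concentration.
poles = sample_vmf(mu=[0.1, 0.2, 0.97], kappa=100.0, size=500)
```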
Pareja, Jhon; López, Sebastian; Jaramillo, Daniel; Hahn, David W; Molina, Alejandro
2013-04-10
The performances of traditional laser-induced breakdown spectroscopy (LIBS) and laser ablation-LIBS (LA-LIBS) were compared by quantifying the total elemental concentration of potassium in highly heterogeneous solid samples, namely soils. Calibration curves for a set of fifteen samples with a wide range of potassium concentrations were generated. The LA-LIBS approach produced a superior linear response different than the traditional LIBS scheme. The analytical response of LA-LIBS was tested with a large set of different soil samples for the quantification of the total concentration of Fe, Mn, Mg, Ca, Na, and K. Results showed an acceptable linear response for Ca, Fe, Mg, and K while poor signal responses were found for Na and Mn. Signs of remaining matrix effects for the LA-LIBS approach in the case of soil analysis were found and discussed. Finally, some improvements and possibilities for future studies toward quantitative soil analysis with the LA-LIBS technique are suggested.
Ghasemi Damavandi, Hamidreza; Sen Gupta, Ananya; Nelson, Robert K; Reddy, Christopher M
2016-01-01
Comprehensive two-dimensional gas chromatography (GC×GC) provides high-resolution separations across hundreds of compounds in a complex mixture, thus unlocking unprecedented information for intricate quantitative interpretation. We exploit this compound diversity across the GC×GC topography to provide quantitative compound-cognizant interpretation beyond target compound analysis with petroleum forensics as a practical application. We focus on the GC×GC topography of biomarker hydrocarbons, hopanes and steranes, as they are generally recalcitrant to weathering. We introduce peak topography maps (PTM) and topography partitioning techniques that consider a notably broader and more diverse range of target and non-target biomarker compounds compared to traditional approaches that consider approximately 20 biomarker ratios. Specifically, we consider a range of 33-154 target and non-target biomarkers with highest-to-lowest peak ratio within an injection ranging from 4.86 to 19.6 (precise numbers depend on biomarker diversity of individual injections). We also provide a robust quantitative measure for directly determining "match" between samples, without necessitating training data sets. We validate our methods across 34 GC×GC injections from a diverse portfolio of petroleum sources, and provide quantitative comparison of performance against established statistical methods such as principal components analysis (PCA). Our data set includes a wide range of samples collected following the 2010 Deepwater Horizon disaster that released approximately 160 million gallons of crude oil from the Macondo well (MW). Samples that were clearly collected following this disaster exhibit statistically significant match [Formula: see text] using PTM-based interpretation against other closely related sources. PTM-based interpretation also provides higher differentiation between closely correlated but distinct sources than obtained using PCA-based statistical comparisons. In addition to results based on this experimental field data, we also provide extensive perturbation analysis of the PTM method over numerical simulations that introduce random variability of peak locations over the GC×GC biomarker ROI image of the MW pre-spill sample (sample [Formula: see text] in Additional file 4: Table S1). We compare the robustness of the cross-PTM score against peak location variability in both dimensions and compare the results against PCA analysis over the same set of simulated images. Detailed description of the simulation experiment and discussion of results are provided in Additional file 1: Section S8. We provide a peak-cognizant informational framework for quantitative interpretation of GC×GC topography. Proposed topographic analysis enables GC×GC forensic interpretation across target petroleum biomarkers, while including the nuances of lesser-known non-target biomarkers clustered around the target peaks. This allows potential discovery of hitherto unknown connections between target and non-target biomarkers.
A Note on a Sampling Theorem for Functions over GF(q)^n Domain
NASA Astrophysics Data System (ADS)
Ukita, Yoshifumi; Saito, Tomohiko; Matsushima, Toshiyasu; Hirasawa, Shigeichi
In digital signal processing, the sampling theorem states that any real-valued function ƒ can be reconstructed from a sequence of values of ƒ that are discretely sampled with a frequency at least twice as high as the maximum frequency of the spectrum of ƒ. This theorem can also be applied to functions over a finite domain. Then, the range of frequencies of ƒ can be expressed in more detail by using a bounded set instead of the maximum frequency. A function whose range of frequencies is confined to a bounded set is referred to as a bandlimited function. A sampling theorem for bandlimited functions over the Boolean domain has been obtained. Here, it is important to obtain a sampling theorem for bandlimited functions not only over the Boolean domain (GF(2)^n domain) but also over the GF(q)^n domain, where q is a prime power and GF(q) is the Galois field of order q. For example, in experimental designs, although the model can be expressed as a linear combination of the Fourier basis functions and the levels of each factor can be represented by GF(q)^n, the number of levels often takes a value greater than two. However, the sampling theorem for bandlimited functions over the GF(q)^n domain has not been obtained. On the other hand, the sampling points are closely related to the codewords of a linear code. However, the relation between the parity check matrix of a linear code and any distinct error vectors has not been obtained, although it is necessary for understanding the meaning of the sampling theorem for bandlimited functions. In this paper, we generalize the sampling theorem for bandlimited functions over the Boolean domain to a sampling theorem for bandlimited functions over the GF(q)^n domain. We also present a theorem for the relation between the parity check matrix of a linear code and any distinct error vectors. Lastly, we clarify the relation between the sampling theorem for functions over the GF(q)^n domain and linear codes.
Howson, E L A; Armson, B; Madi, M; Kasanga, C J; Kandusi, S; Sallu, R; Chepkwony, E; Siddle, A; Martin, P; Wood, J; Mioulet, V; King, D P; Lembo, T; Cleaveland, S; Fowler, V L
2017-06-01
Accurate, timely diagnosis is essential for the control, monitoring and eradication of foot-and-mouth disease (FMD). Clinical samples from suspect cases are normally tested at reference laboratories. However, transport of samples to these centralized facilities can be a lengthy process that can impose delays on critical decision making. These concerns have motivated work to evaluate simple-to-use technologies, including molecular-based diagnostic platforms, that can be deployed closer to suspect cases of FMD. In this context, FMD virus (FMDV)-specific reverse transcription loop-mediated isothermal amplification (RT-LAMP) and real-time RT-PCR (rRT-PCR) assays, compatible with simple sample preparation methods and in situ visualization, have been developed which share equivalent analytical sensitivity with laboratory-based rRT-PCR. However, the lack of robust 'ready-to-use kits' that utilize stabilized reagents limits the deployment of these tests into field settings. To address this gap, this study describes the performance of lyophilized rRT-PCR and RT-LAMP assays to detect FMDV. Both of these assays are compatible with the use of fluorescence to monitor amplification in real-time, and for the RT-LAMP assays end point detection could also be achieved using molecular lateral flow devices. Lyophilization of reagents did not adversely affect the performance of the assays. Importantly, when these assays were deployed into challenging laboratory and field settings within East Africa they proved to be reliable in their ability to detect FMDV in a range of clinical samples from acutely infected as well as convalescent cattle. These data support the use of highly sensitive molecular assays into field settings for simple and rapid detection of FMDV. © 2015 The Authors. Transboundary and Emerging Diseases Published by Blackwell Verlag GmbH.
Gika, Helen G; Theodoridis, Georgios A; Wilson, Ian D
2008-05-02
Typically, following collection, biological samples are kept in a freezer for periods ranging from a few days to several months before analysis. Experience has shown that in LC-MS-based metabonomics research the best analytical practice is to store samples as these are collected, complete the sample set and analyse it in a single run. However, this approach is prudent only if the samples stored in the refrigerator or in the freezer are stable. Another important issue is the stability of the samples following the freeze-thaw process. To investigate these matters, urine samples were collected from 6 male volunteers and analysed by LC-MS and ultra-performance liquid chromatography (UPLC)-MS [in both positive and negative electrospray ionization (ESI)] on the day of collection or at intervals of up to 6 months storage at -20 degrees C and -80 degrees C. Other sets of these samples underwent a series of up to nine freeze-thaw cycles. The stability of samples kept at 4 degrees C in an autosampler for up to 6 days was also assessed, with clear differences appearing after 48 h. Data were analysed using multivariate statistical analysis (principal component analysis). The results show that sample storage at both -20 and -80 degrees C appeared to ensure sample stability. Similarly, up to nine freeze-thaw cycles were without any apparent effect on the profile.
Developing and testing new smoking measures for the Health Plan Employer Data and Information Set.
Pbert, Lori; Vuckovic, Nancy; Ockene, Judith K; Hollis, Jack F; Riedlinger, Karen
2003-04-01
To develop and test items for the Health Plan Employer Data and Information Set (HEDIS) that assess delivery of the full range of provider-delivered tobacco interventions. The authors identified potential items via literature review; items were reviewed by national experts. Face validity of candidate items was tested in focus groups. The final survey was sent to a random sample of 1711 adult primary care patients; the re-test survey was sent to self-identified smokers. The process identified reliable items to capture provider assessment of motivation and provision of assistance and follow-up. One can reliably assess patient self-report of provider delivery of the full range of brief tobacco interventions. Such assessment and feedback to health plans and providers may increase use of evidence-based brief interventions.
A high-throughput microRNA expression profiling system.
Guo, Yanwen; Mastriano, Stephen; Lu, Jun
2014-01-01
As small noncoding RNAs, microRNAs (miRNAs) regulate diverse biological functions, including physiological and pathological processes. The expression and deregulation of miRNA levels contain rich information with diagnostic and prognostic relevance and can reflect pharmacological responses. The increasing interest in miRNA-related research demands global miRNA expression profiling on large numbers of samples. We describe here a robust protocol that supports high-throughput sample labeling and detection on hundreds of samples simultaneously. This method employs 96-well-based miRNA capturing from total RNA samples and on-site biochemical reactions, coupled with bead-based detection in 96-well format for hundreds of miRNAs per sample. With low-cost, high-throughput, high detection specificity, and flexibility to profile both small and large numbers of samples, this protocol can be adapted in a wide range of laboratory settings.
Bruegel, Mathias; Nagel, Dorothea; Funk, Manuela; Fuhrmann, Petra; Zander, Johannes; Teupser, Daniel
2015-06-01
Various types of automated hematology analyzers are used in clinical laboratories. Here, we performed a side-by-side comparison of five current top of the range routine hematology analyzers in the setting of a university hospital central laboratory. Complete blood counts (CBC), differentials, reticulocyte and nucleated red blood cell (NRBC) counts of 349 patient samples, randomly taken out of routine diagnostics, were analyzed with Cell-Dyn Sapphire (Abbott), DxH 800 (Beckman Coulter), Advia 2120i (Siemens), XE-5000 and XN-2000 (Sysmex). Inter-instrument comparison of CBCs including reticulocyte and NRBC counts and investigation of flagging quality in relation to microscopy were performed with the complete set of samples. Inter-instrument comparison of five-part differential was performed using samples without atypical cells in blood smear (n=292). Automated five-part differentials and NRBCs were additionally compared with microscopy. The five analyzers showed a good concordance for basic blood count parameters. Correlations between instruments were less well for reticulocyte counts, NRBCs, and differentials. The poorest concordance for NRBCs with microscopy was observed for Advia 2120i (Kendall's τb=0.37). The highest flagging sensitivity for blasts was observed for XN-2000 (97% compared to 65%-76% for other analyzers), whereas overall specificity was comparable between different instruments. To the best of our knowledge, this is the most comprehensive side-by-side comparison of five current top of the range routine hematology analyzers. Variable analyzer quality and parameter specific limitations must be considered in defining laboratory algorithms in clinical practice.
Silva de Lima, Ana Lígia; Evers, Luc J W; Hahn, Tim; Bataille, Lauren; Hamilton, Jamie L; Little, Max A; Okuma, Yasuyuki; Bloem, Bastiaan R; Faber, Marjan J
2017-08-01
Despite the large number of studies that have investigated the use of wearable sensors to detect gait disturbances such as freezing of gait (FOG) and falls, there is little consensus regarding appropriate methodologies for how to optimally apply such devices. Here, an overview of the use of wearable systems to assess FOG and falls in Parkinson's disease (PD), and of their validation performance, is presented. A systematic search in the PubMed and Web of Science databases was performed using a group of concept key words. The final search was performed in January 2017, and articles were selected based upon a set of eligibility criteria. In total, 27 articles were selected. Of those, 23 related to FOG and 4 to falls. FOG studies were performed in either laboratory or home settings, with sample sizes ranging from 1 to 48 patients with PD at Hoehn and Yahr stages 2 to 4. The shin was the most common sensor location and the accelerometer was the most frequently used sensor type. Validity measures ranged from 73% to 100% for sensitivity and from 67% to 100% for specificity. Falls and fall-risk studies were all home-based, with sample sizes of 1 to 107 patients with PD, mostly using a single accelerometer-based sensor worn at various body locations. Despite the promising validation initiatives reported in these studies, they were all performed in relatively small samples, and there was substantial variability in the outcomes measured and the results reported. Given these limitations, the validation of sensor-derived assessments of PD features would benefit from more focused research efforts, increased collaboration among researchers, aligned data collection protocols, and shared data sets.
Poudel, Ram C.; Möller, Michael; Gao, Lian-Ming; Ahrends, Antje; Baral, Sushim R.; Liu, Jie; Thomas, Philip; Li, De-Zhu
2012-01-01
Background Despite the availability of several studies aimed at clarifying taxonomic problems in the highly threatened yews of the Hindu Kush-Himalaya (HKH) and adjacent regions, the total number of species and their exact distribution ranges remain controversial. We explored the use of comprehensive sets of morphological, molecular and climatic data to clarify the taxonomy and distributions of yews in this region. Methodology/Principal Findings A total of 743 samples from 46 populations of wild yew and 47 representative herbarium specimens were analyzed. Principal component analyses on 27 morphological characters and 15 bioclimatic variables plus altitude, and maximum parsimony analysis on molecular ITS and trnL-F sequences, indicated the existence of three distinct species occurring in different ecological (climatic) and altitudinal gradients along the HKH and adjacent regions: Taxus contorta from eastern Afghanistan to the eastern end of Central Nepal, T. wallichiana from the western end of Central Nepal to Northwest China, and the first report of the South China low to mid-elevation species T. mairei in Nepal, Bhutan, Northeast India, Myanmar and South Vietnam. Conclusion/Significance The detailed sampling and combination of different data sets allowed us to identify three clearly delineated species and their precise distribution ranges in the HKH and adjacent regions, which showed no overlap and no distinct hybrid zone. This might be due to differences in the ecological (climatic) requirements of the species. The analyses further enabled the selection of diagnostic morphological characters for the identification of yews occurring in the HKH and adjacent regions. Our work demonstrates that extensive sampling combined with the analysis of diverse data sets can reliably address the taxonomy of morphologically challenging plant taxa. PMID:23056501
Temperature Control Diagnostics for Sample Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santodonato, Louis J; Walker, Lakeisha MH; Church, Andrew J
2010-01-01
In a scientific laboratory setting, standard equipment such as cryocoolers is often used as part of a custom sample environment system designed to regulate temperature over a wide range. The end user may be more concerned with precise sample temperature control than with base temperature. But cryogenic systems tend to be specified mainly in terms of cooling capacity and base temperature. Technical staff at scientific user facilities (and perhaps elsewhere) often wonder how to best specify and evaluate temperature control capabilities. Here we describe test methods and give results obtained at a user facility that operates a large sample environment inventory. Although this inventory includes a wide variety of temperature, pressure, and magnetic field devices, the present work focuses on cryocooler-based systems.
Electric Propulsion System Selection Process for Interplanetary Missions
NASA Technical Reports Server (NTRS)
Landau, Damon; Chase, James; Kowalkowski, Theresa; Oh, David; Randolph, Thomas; Sims, Jon; Timmerman, Paul
2008-01-01
The disparate design problems of selecting an electric propulsion system, launch vehicle, and flight time all have a significant impact on the cost and robustness of a mission. The effects of these system choices combine into a single optimization of the total mission cost, where the design constraint is a required spacecraft neutral (non-electric propulsion) mass. Cost-optimal systems are designed for a range of mass margins to examine how the optimal design varies with mass growth. The resulting cost-optimal designs are compared with results generated via mass optimization methods. Additional optimizations with continuous system parameters address the impact on mission cost due to discrete sets of launch vehicle, power, and specific impulse. The examined mission set comprises a near-Earth asteroid sample return, multiple main belt asteroid rendezvous, comet rendezvous, comet sample return, and a mission to Saturn.
Gas Emissions Acquired during the Aircraft Particle Emission Experiment (APEX) Series
NASA Technical Reports Server (NTRS)
Changlie, Wey; Chowen, Chou Wey
2007-01-01
NASA, in collaboration with other US federal agencies, engine/airframe manufacturers, airlines, and airport authorities, recently sponsored a series of three ground-based field investigations to examine the particle and gas emissions from a variety of in-use commercial aircraft. Emissions parameters were measured at multiple engine power settings, ranging from idle to maximum thrust, in samples collected at three different locations downstream of the exhaust. Sampling rakes at nominally 1 meter downstream contained multiple probes to facilitate a study of the spatial variation of emissions across the engine exhaust plane. Emission indices measured at 1 m were in good agreement with the engine certification data as well as predictions provided by the engine company. However, at low power settings, trace species emissions were observed to be highly dependent on ambient conditions and engine temperature.
Vogel, Thomas; Perez, Danny
2015-08-28
We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. Furthermore, the method is particularly useful for the fast and reliable estimation of the microcanonical temperature T(U) or, equivalently, of the density of states g(U) over a wide range of energies.
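The quantity named at the end of this abstract, the microcanonical temperature T(U), follows from the density of states via T(U) = [d ln g(U)/dU]^(-1) (in units where k_B = 1). A minimal sketch of that post-processing step, using an arbitrary placeholder for ln g(U) and not the authors' implementation, might look like:

```python
import numpy as np

U = np.linspace(-500.0, -100.0, 401)   # energy grid (arbitrary units)
ln_g = 0.8 * U - 0.0005 * U**2          # placeholder ln g(U); replace with the measured estimate

dlng_dU = np.gradient(ln_g, U)          # numerical derivative d ln g / dU = 1 / T(U)
T_of_U = 1.0 / dlng_dU                  # microcanonical temperature over the energy range

print(T_of_U[:5])
```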
Soultan, Alaaeldin; Safi, Kamran
2017-01-01
Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution modelling (SDM) has become a popular method for utilising these data to understand the spatial and temporal distribution of species and to model biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDMs, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDMs using classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap), respectively. Our study revealed that species specialisation had by far the strongest impact on the SDMs. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.
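The two evaluation metrics named above, the Area Under the Curve (AUC) and the True Skill Statistic (TSS = sensitivity + specificity - 1), can be computed as in the following minimal sketch; the presence/absence vector, suitability scores, and 0.5 threshold are placeholders, not values from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

observed = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])   # virtual-species presence/absence (placeholder)
suitability = np.array([0.9, 0.7, 0.4, 0.8, 0.3, 0.2, 0.6, 0.5, 0.1, 0.35])  # model output (placeholder)

auc = roc_auc_score(observed, suitability)             # threshold-independent performance

predicted = (suitability >= 0.5).astype(int)           # threshold chosen for illustration only
tn, fp, fn, tp = confusion_matrix(observed, predicted).ravel()
tss = tp / (tp + fn) + tn / (tn + fp) - 1.0            # TSS = sensitivity + specificity - 1

print(f"AUC = {auc:.2f}, TSS = {tss:.2f}")
```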
Zhang, Mengliang; Harrington, Peter de B
2015-01-01
A multivariate partial least-squares (PLS) method was applied to the quantification of two complex polychlorinated biphenyl (PCB) commercial mixtures, Aroclor 1254 and 1260, in a soil matrix. PCBs in soil samples were extracted by headspace solid-phase microextraction (SPME) and determined by gas chromatography/mass spectrometry (GC/MS). Decachlorinated biphenyl (deca-CB) was used as the internal standard. After baseline correction was applied, four data representations, including extracted ion chromatograms (EIC) for Aroclor 1254, EIC for Aroclor 1260, EIC for both Aroclors, and two-way data sets, were constructed for PLS-1 and PLS-2 calibrations and evaluated with respect to quantitative prediction accuracy. The PLS model was optimized with respect to the number of latent variables using cross-validation of the calibration data set. The validation of the method was performed with certified soil samples and real field soil samples, and the predicted concentrations for both Aroclors using EIC data sets agreed with the certified values. The linear range of the method was from 10 μg kg(-1) to 1000 μg kg(-1) for both Aroclor 1254 and 1260 in soil matrices, and the detection limit was 4 μg kg(-1) for Aroclor 1254 and 6 μg kg(-1) for Aroclor 1260. This holistic approach for the determination of mixtures in complex samples has broad application to environmental forensics and modeling. Copyright © 2014 Elsevier Ltd. All rights reserved.
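A PLS-1 calibration of the kind described, with the number of latent variables selected by cross-validation on the calibration set, can be sketched with scikit-learn as below. The EIC matrix, concentrations, and component range are random placeholders; this illustrates the general workflow, not the published model.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 300))        # 20 calibration samples x 300 EIC intensity points (placeholder)
y = rng.uniform(10, 1000, size=20)    # Aroclor concentration in ug/kg (placeholder values)

# Pick the number of latent variables that maximizes cross-validated R^2 on the calibration set.
scores = {n: cross_val_score(PLSRegression(n_components=n), X, y, cv=5).mean()
          for n in range(1, 8)}
best_n = max(scores, key=scores.get)

model = PLSRegression(n_components=best_n).fit(X, y)
y_pred = model.predict(X).ravel()     # in practice, applied to validation and field soil samples
print(best_n, y_pred[:3])
```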
Borges, Chad R
2007-07-01
A chemometrics-based data analysis concept has been developed as a substitute for manual inspection of extracted ion chromatograms (XICs), which facilitates rapid, analyst-mediated interpretation of GC- and LC/MS(n) data sets from samples undergoing qualitative batchwise screening for prespecified sets of analytes. Automatic preparation of data into two-dimensional row space-derived scatter plots (row space plots) eliminates the need to manually interpret hundreds to thousands of XICs per batch of samples while keeping all interpretation of raw data directly in the hands of the analyst, saving great quantities of human time without loss of integrity in the data analysis process. For a given analyte, two analyte-specific variables are automatically collected by a computer algorithm and placed into a data matrix (i.e., placed into row space): the first variable is the ion abundance corresponding to scan number x and analyte-specific m/z value y, and the second variable is the ion abundance corresponding to scan number x and analyte-specific m/z value z (a second ion). These two variables serve as the two axes of the aforementioned row space plots. In order to collect appropriate scan number (retention time) information, it is necessary to analyze, as part of every batch, a sample containing a mixture of all analytes to be tested. When pure standard materials of tested analytes are unavailable, but representative ion m/z values are known and retention time can be approximated, data are evaluated based on two-dimensional scores plots from principal component analysis of small time range(s) of mass spectral data. The time-saving efficiency of this concept is directly proportional to the percentage of negative samples and to the total number of samples processed simultaneously.
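The core of the row-space plot idea, pairing the abundances of two analyte-specific ions at the analyte's expected scan number across all samples in a batch, can be sketched as follows. The data cube, scan index, and m/z indices are hypothetical placeholders, not the author's implementation.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Batch of 50 samples x 600 scans x 150 m/z channels (placeholder intensities)
batch = rng.lognormal(mean=2.0, sigma=1.0, size=(50, 600, 150))

scan_x = 310          # expected retention scan for the analyte (taken from the mixed standard)
mz_y, mz_z = 42, 97   # indices of two analyte-specific m/z channels (hypothetical)

var1 = batch[:, scan_x, mz_y]   # abundance of the first ion at the expected scan, per sample
var2 = batch[:, scan_x, mz_z]   # abundance of the second ion at the expected scan, per sample

plt.scatter(var1, var2)
plt.xlabel(f"abundance, m/z index {mz_y}")
plt.ylabel(f"abundance, m/z index {mz_z}")
plt.title("Row-space plot: positive samples should cluster away from the origin")
plt.show()
```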
7 CFR 28.501 - Color Grade No. 1.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Color Grade No. 1. 28.501 Section 28.501 Agriculture... American Pima Cotton § 28.501 Color Grade No. 1. Color grade No. 1 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.503 - Color Grade No. 3.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Color Grade No. 3. 28.503 Section 28.503 Agriculture... American Pima Cotton § 28.503 Color Grade No. 3. Color grade No. 3 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.503 - Color Grade No. 3.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Color Grade No. 3. 28.503 Section 28.503 Agriculture... American Pima Cotton § 28.503 Color Grade No. 3. Color grade No. 3 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.506 - Color Grade No. 6.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Color Grade No. 6. 28.506 Section 28.506 Agriculture... American Pima Cotton § 28.506 Color Grade No. 6. Color grade No. 6 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.502 - Color Grade No. 2.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Color Grade No. 2. 28.502 Section 28.502 Agriculture... American Pima Cotton § 28.502 Color Grade No. 2. Color grade No. 2 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.505 - Color Grade No. 5.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Color Grade No. 5. 28.505 Section 28.505 Agriculture... American Pima Cotton § 28.505 Color Grade No. 5. Color grade No. 5 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.502 - Color Grade No. 2.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Color Grade No. 2. 28.502 Section 28.502 Agriculture... American Pima Cotton § 28.502 Color Grade No. 2. Color grade No. 2 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.503 - Color Grade No. 3.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Color Grade No. 3. 28.503 Section 28.503 Agriculture... American Pima Cotton § 28.503 Color Grade No. 3. Color grade No. 3 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.501 - Color Grade No. 1.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Color Grade No. 1. 28.501 Section 28.501 Agriculture... American Pima Cotton § 28.501 Color Grade No. 1. Color grade No. 1 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.502 - Color Grade No. 2.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Color Grade No. 2. 28.502 Section 28.502 Agriculture... American Pima Cotton § 28.502 Color Grade No. 2. Color grade No. 2 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.505 - Color Grade No. 5.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Color Grade No. 5. 28.505 Section 28.505 Agriculture... American Pima Cotton § 28.505 Color Grade No. 5. Color grade No. 5 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.504 - Color Grade No. 4.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Color Grade No. 4. 28.504 Section 28.504 Agriculture... American Pima Cotton § 28.504 Color Grade No. 4. Color grade No. 4 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.505 - Color Grade No. 5.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Color Grade No. 5. 28.505 Section 28.505 Agriculture... American Pima Cotton § 28.505 Color Grade No. 5. Color grade No. 5 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.506 - Color Grade No. 6.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Color Grade No. 6. 28.506 Section 28.506 Agriculture... American Pima Cotton § 28.506 Color Grade No. 6. Color grade No. 6 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.501 - Color Grade No. 1.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Color Grade No. 1. 28.501 Section 28.501 Agriculture... American Pima Cotton § 28.501 Color Grade No. 1. Color grade No. 1 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.505 - Color Grade No. 5.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Color Grade No. 5. 28.505 Section 28.505 Agriculture... American Pima Cotton § 28.505 Color Grade No. 5. Color grade No. 5 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.504 - Color Grade No. 4.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Color Grade No. 4. 28.504 Section 28.504 Agriculture... American Pima Cotton § 28.504 Color Grade No. 4. Color grade No. 4 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.504 - Color Grade No. 4.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Color Grade No. 4. 28.504 Section 28.504 Agriculture... American Pima Cotton § 28.504 Color Grade No. 4. Color grade No. 4 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.502 - Color Grade No. 2.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Color Grade No. 2. 28.502 Section 28.502 Agriculture... American Pima Cotton § 28.502 Color Grade No. 2. Color grade No. 2 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.506 - Color Grade No. 6.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Color Grade No. 6. 28.506 Section 28.506 Agriculture... American Pima Cotton § 28.506 Color Grade No. 6. Color grade No. 6 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.501 - Color Grade No. 1.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Color Grade No. 1. 28.501 Section 28.501 Agriculture... American Pima Cotton § 28.501 Color Grade No. 1. Color grade No. 1 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.504 - Color Grade No. 4.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Color Grade No. 4. 28.504 Section 28.504 Agriculture... American Pima Cotton § 28.504 Color Grade No. 4. Color grade No. 4 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.506 - Color Grade No. 6.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Color Grade No. 6. 28.506 Section 28.506 Agriculture... American Pima Cotton § 28.506 Color Grade No. 6. Color grade No. 6 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.503 - Color Grade No. 3.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Color Grade No. 3. 28.503 Section 28.503 Agriculture... American Pima Cotton § 28.503 Color Grade No. 3. Color grade No. 3 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.503 - Color Grade No. 3.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Color Grade No. 3. 28.503 Section 28.503 Agriculture... American Pima Cotton § 28.503 Color Grade No. 3. Color grade No. 3 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.504 - Color Grade No. 4.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Color Grade No. 4. 28.504 Section 28.504 Agriculture... American Pima Cotton § 28.504 Color Grade No. 4. Color grade No. 4 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.502 - Color Grade No. 2.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Color Grade No. 2. 28.502 Section 28.502 Agriculture... American Pima Cotton § 28.502 Color Grade No. 2. Color grade No. 2 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.506 - Color Grade No. 6.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Color Grade No. 6. 28.506 Section 28.506 Agriculture... American Pima Cotton § 28.506 Color Grade No. 6. Color grade No. 6 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.505 - Color Grade No. 5.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Color Grade No. 5. 28.505 Section 28.505 Agriculture... American Pima Cotton § 28.505 Color Grade No. 5. Color grade No. 5 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
7 CFR 28.501 - Color Grade No. 1.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Color Grade No. 1. 28.501 Section 28.501 Agriculture... American Pima Cotton § 28.501 Color Grade No. 1. Color grade No. 1 shall be American Pima cotton which in color is within the range represented by a set of samples in the custody of the U.S. Department of...
ERIC Educational Resources Information Center
Fracasso, Lucille E.; Bangs, Kathryn; Binder, Katherine S.
2016-01-01
The Adult Basic Education (ABE) population consists of a wide range of abilities with needs that may be unique to this set of learners. The purpose of this study was to better understand the relative contributions of phonological decoding and morphological awareness to spelling, vocabulary, and comprehension across a sample of ABE students. In…
Song, Sunbin; Garrido, Lúcia; Nagy, Zoltan; Mohammadi, Siawoosh; Steel, Adam; Driver, Jon; Dolan, Ray J.; Duchaine, Bradley; Furl, Nicholas
2015-01-01
Individuals with developmental prosopagnosia (DP) experience face recognition impairments despite normal intellect and low-level vision and no history of brain damage. Prior studies using diffusion tensor imaging in small samples of subjects with DP (n=6 or n=8) offer conflicting views on the neurobiological bases for DP, with one suggesting white matter differences in two major long-range tracts running through the temporal cortex, and another suggesting white matter differences confined to fibers local to ventral temporal face-specific functional regions of interest (fROIs) in the fusiform gyrus. Here, we address these inconsistent findings using a comprehensive set of analyses in a sample of DP subjects larger than both prior studies combined (n=16). While we found no microstructural differences in long-range tracts between DP and age-matched control participants, we found differences local to face-specific fROIs, and relationships between these microstructural measures and face recognition ability. We conclude that subtle differences in local rather than long-range tracts in the ventral temporal lobe are more likely associated with developmental prosopagnosia. PMID:26456436
On the Role of High Amounts of Mn Element in CdS Structure
NASA Astrophysics Data System (ADS)
Gonullu, Meryem Polat; Kose, Salih
2017-03-01
CdS and MnS are technologically important semiconducting materials. In this work, motivated by the limited capabilities of these materials used separately, a detailed characterization of new samples formed by their combined use is reported. CdS films incorporating Mn over a wide range of concentrations have been produced with a low-cost Ultrasonic Spray Pyrolysis set-up. Spectroscopic Ellipsometry (SE) has been used to determine the thicknesses and optical constants (n, k) of the samples. Samples with high amounts of Mn were found to have lower refractive index values. Absorbance spectra show additional band edges alongside the one belonging to CdS for samples with Mn concentrations higher than 50 pct, which has been attributed to a phase separation above this limit. Raman spectroscopy, which shows additional peaks belonging to the MnS phase, also supports these findings. As a consequence of this phase separation, the crystalline structure deteriorated. Surface properties of the samples have been investigated by SEM and AFM, and elemental analysis has been performed by EDS. Resistivity measurements performed with a four-probe set-up have shown that samples containing high amounts of Mn have lower electrical resistivity values.
Zaugg, Steven D.; Phillips, Patrick J.; Smith, Steven G.
2014-01-01
Research on the effects of exposure of stream biota to complex mixtures of pharmaceuticals and other organic compounds associated with wastewater requires the development of additional analytical capabilities for these compounds in water samples. Two gas chromatography/mass spectrometry (GC/MS) analytical methods used at the U.S. Geological Survey National Water Quality Laboratory (NWQL) to analyze organic compounds associated with wastewater were adapted to include additional pharmaceutical and other organic compounds beginning in 2009. This report includes a description of method performance for 42 additional compounds for the filtered-water method (hereafter referred to as the filtered method) and 46 additional compounds for the unfiltered-water method (hereafter referred to as the unfiltered method). The method performance for the filtered method described in this report has been published for seven of these compounds; however, the addition of several other compounds to the filtered method and the addition of the compounds to the unfiltered method resulted in the need to document method performance for both of the modified methods. Most of these added compounds are pharmaceuticals or pharmaceutical degradates, although two nonpharmaceutical compounds are included in each method. The main pharmaceutical compound classes added to the two modified methods include muscle relaxants, opiates, analgesics, and sedatives. These types of compounds were added to the original filtered and unfiltered methods largely in response to the tentative identification of a wide range of pharmaceutical and other organic compounds in samples collected from wastewater-treatment plants. Filtered water samples are extracted by vacuum through disposable solid-phase cartridges that contain modified polystyrene-divinylbenzene resin. Unfiltered samples are extracted by using continuous liquid-liquid extraction with dichloromethane. The compounds of interest for filtered and unfiltered sample types were determined by capillary-column gas chromatography/mass spectrometry. The performance of each method was assessed by using data on recoveries of compounds in fortified surface-water, wastewater, and reagent-water samples. These experiments (referred to as spike experiments) consist of fortifying (or spiking) samples with known amounts of target analytes. Surface-water-spike experiments were performed by using samples obtained from a stream in Colorado (unfiltered method) and a stream in New York (filtered method). Wastewater spike experiments for both the filtered and unfiltered methods were performed by using a treated wastewater obtained from a single wastewater treatment plant in New York. Surface water and wastewater spike experiments were fortified at both low and high concentrations and termed low- and high-level spikes, respectively. Reagent water spikes were assessed in three ways: (1) set spikes, (2) a low-concentration fortification experiment, and (3) a high-concentration fortification experiment. Set spike samples have been determined since 2009, and consist of analysis of fortified reagent water for target compounds included for each group of 10 to 18 environmental samples analyzed at the NWQL. The low-concentration and high-concentration reagent spike experiments, by contrast, represent a one-time assessment of method performance.
For each spike experiment, mean recoveries ranging from 60 to 130 percent indicate low bias, and low relative standard deviations (RSDs) indicate acceptable variability. Of the compounds included in the filtered method, 21 had mean recoveries ranging from 63 to 129 percent, with low RSDs, for the low-level and high-level surface-water spikes; the remaining compounds had recoveries or RSDs outside these ranges. For wastewater spikes, 24 of the compounds included in the filtered method had recoveries ranging from 61 to 130 percent for the low-level and high-level spikes; the remaining compounds had high recoveries (greater than 130 percent) or variable recoveries (RSDs greater than 30 percent) for low-level wastewater spikes, or low recoveries. Of the compounds included in the unfiltered method, 17 had mean spike recoveries ranging from 74 to 129 percent and RSDs ranging from 5 to 25 percent for low-level and high-level surface water spikes; the remaining compounds had mean recoveries outside the acceptable range or high RSDs (greater than 29 percent) for these spikes. For wastewater, 14 of the compounds included in the unfiltered method had mean recoveries ranging from 62 to 127 percent with acceptable RSDs; the remaining compounds had high recoveries (greater than 130 percent), high RSDs, or low mean recoveries for the low-level wastewater spikes. Of the compounds found in wastewater, 24 had mean set spike recoveries ranging from 64 to 104 percent with acceptable RSDs. Separate method detection limits (MDLs) were computed for surface water and wastewater for both the filtered and unfiltered methods. Filtered method MDLs ranged from 0.007 to 0.14 microgram per liter (μg/L) for the surface water matrix and from 0.004 to 0.62 μg/L for the wastewater matrix. Unfiltered method MDLs ranged from 0.014 to 0.33 μg/L for the surface water matrix and from 0.008 to 0.36 μg/L for the wastewater matrix.
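Spike recovery, RSD, and a method detection limit of the kind reported here are routinely derived from replicate fortified samples. The sketch below uses the common EPA-style formula MDL = t(n-1, 0.99) x s together with placeholder replicate results; it illustrates the general calculation, not the NWQL's specific procedure.

```python
import numpy as np
from scipy.stats import t

spiked_conc = 0.10                                                        # ug/L added to each replicate
measured = np.array([0.092, 0.105, 0.088, 0.110, 0.097, 0.101, 0.094])   # ug/L recovered (placeholder)

recovery = measured / spiked_conc * 100.0
mean_recovery = recovery.mean()
rsd = recovery.std(ddof=1) / recovery.mean() * 100.0

n = len(measured)
mdl = t.ppf(0.99, df=n - 1) * measured.std(ddof=1)   # one-sided 99 % Student's t times replicate SD

print(f"mean recovery = {mean_recovery:.0f} %, RSD = {rsd:.0f} %, MDL = {mdl:.3f} ug/L")
```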
Rivera, Zahira Herrera; Oosterink, Efraim; Rietveld, Luuk; Schoutsen, Frans; Stolker, Linda
2011-08-26
The influence of natural organic matter on the screening of pharmaceuticals in water was determined by using high resolution liquid chromatography (HRLC) combined with full scan mass spectrometry (MS) techniques like time of flight (ToF) or Orbitrap MS. Water samples containing different amounts of natural organic matter (NOM) and residues of a set of 11 pharmaceuticals were analyzed by using Exactive Orbitrap™ LC-MS. The samples were screened for residues of pharmaceuticals belonging to different classes like benzimidazoles, macrolides, penicillins, quinolones, sulfonamides, tetracyclines, tranquillizers, non-steroidal anti-inflammatory drugs (NSAIDs), anti-epileptics and lipid regulators. The method characteristics were established over a concentration range of 0.1-500 μg L(-1). The 11 pharmaceuticals were added to two effluent and two influent water samples. The NOM concentration within the samples ranged from 2 to 8 mg L(-1) of dissolved organic carbon. The HRLC-Exactive Orbitrap™ LC-MS system was set at a resolution of 50,000 (FWHM), and this setting was found sufficient for the detection of the list of pharmaceuticals. With this resolution setting, accurate mass measurements with errors below 2 ppm were found, regardless of the NOM concentration in the different types of water samples. The linearities were acceptable, with correlation coefficients greater than 0.95 for 30 of the 51 measured linearities. The limit of detection varied between 0.1 μg L(-1) and 100 μg L(-1). It was demonstrated that sensitivity could be affected by matrix constituents in both directions, signal reduction and enhancement. Finally, it was concluded that with the direct shoot method used (no sample pretreatment), all compounds were detected, but LODs depend on the matrix-analyte-concentration combination. No direct relation was observed between NOM concentration and method characteristics. For accurate quantification, the use of internal standards and/or sample clean-up is necessary. The direct shoot method is only applicable for qualitative screening purposes. The use of full scan MS makes it possible to search for unknown contaminants. With adequate software and a database containing more than 50,000 entries, a tool is available to search for unknowns. Copyright © 2011 Elsevier B.V. All rights reserved.
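The accurate-mass screening step rests on a simple ppm mass-error comparison between each measured m/z and the theoretical m/z of a database entry. A minimal sketch, using protonated carbamazepine as an example compound and the 2 ppm tolerance mentioned above, is shown below; it is not the vendor software's matching logic.

```python
# Theoretical [M+H]+ of carbamazepine (C15H12N2O), used here only as an example entry
theoretical_mz = 237.1022
measured_mz = 237.1026          # placeholder measured value from a full-scan spectrum

ppm_error = (measured_mz - theoretical_mz) / theoretical_mz * 1e6
tolerance_ppm = 2.0             # tolerance consistent with the accuracy reported above

print(f"mass error = {ppm_error:.2f} ppm, match = {abs(ppm_error) <= tolerance_ppm}")
```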
Size and modal analyses of fines and ultrafines from some Apollo 17 samples
NASA Technical Reports Server (NTRS)
Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.
1975-01-01
Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.
NASA Astrophysics Data System (ADS)
Chzhan, Michael; Kuppusamy, Periannan; Samouilov, Alexandre; He, Guanglong; Zweier, Jay L.
1999-04-01
There has been a need for development of microwave resonator designs optimized to provide high sensitivity and high stability for EPR spectroscopy and imaging measurements of in vivo systems. The design and construction of a novel reentrant resonator with transversely oriented electric field (TERR) and rectangular sample opening cross section for EPR spectroscopy and imaging of in vivo biological samples, such as the whole body of mice and rats, is described. This design with its transversely oriented capacitive element enables wide and simple setting of the center frequency by trimming the dimensions of the capacitive plate over the range 100-900 MHz with unloaded Q values of approximately 1100 at 750 MHz, while the mechanical adjustment mechanism allows smooth continuous frequency tuning in the range ±50 MHz. This orientation of the capacitive element limits the electric field based loss of resonator Q observed with large lossy samples, and it facilitates the use of capacitive coupling. Both microwave performance data and EPR measurements of aqueous samples demonstrate high sensitivity and stability of the design, which make it well suited for in vivo applications.
NASA Technical Reports Server (NTRS)
Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.
2012-01-01
An algorithm is developed to automatically screen outliers from the massive training samples for the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP will produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is not only significant for urbanization studies but also needed for global carbon, hydrology, and energy balance research. A supervised classification method, the regression tree, is applied in this project, and a set of accurate training samples is the key to the supervised classification. We developed the global-scale training samples from fine resolution (approximately 1 m) satellite data (Quickbird and Worldview2) and then aggregated the fine resolution impervious cover map to 30 m resolution. In order to improve the classification accuracy, the training samples should be screened before they are used to train the regression tree. It is impossible to manually screen 30 m resolution training samples collected globally. For example, in Europe alone there are 174 training sites, with site sizes ranging from 4.5 km by 4.5 km to 8.1 km by 3.6 km, and the number of training samples exceeds six million. Therefore, we developed this automated, statistics-based algorithm to screen the training samples at two levels: site and scene. At the site level, all the training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel; the samples falling in each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. The screening process then escalates to the scene level, where a similar procedure with a looser threshold is applied to allow for possible variance due to site differences. We do not perform the screening across scenes because scenes may vary due to phenology, solar-view geometry, atmospheric conditions, and other factors rather than actual land cover differences. Finally, we will compare the classification results from screened and unscreened training samples to assess the improvement achieved by cleaning up the training samples.
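A site-level screen of the kind described, binning samples into 10% impervious-cover groups and removing univariate and multivariate outliers within each group, could be sketched as follows. The feature array, z-score threshold, and chi-square cutoff are assumptions for illustration, not the project's actual settings.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
features = rng.normal(size=(5000, 6))        # e.g. 6 spectral band values per training sample (placeholder)
impervious = rng.uniform(0, 100, size=5000)  # percent impervious surface per sample (placeholder)

keep = np.ones(len(features), dtype=bool)
for lo in range(0, 100, 10):                 # groups: 0-10 %, 10-20 %, ..., 90-100 %
    idx = np.where((impervious >= lo) & (impervious < lo + 10))[0]
    grp = features[idx]

    # Univariate screen: drop samples more than 3 standard deviations from the group mean in any band.
    z = np.abs((grp - grp.mean(axis=0)) / grp.std(axis=0))
    uni_ok = (z < 3).all(axis=1)

    # Multivariate screen: drop samples with large Mahalanobis distance from the group centroid.
    cov_inv = np.linalg.pinv(np.cov(grp, rowvar=False))
    diff = grp - grp.mean(axis=0)
    md2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    multi_ok = md2 < chi2.ppf(0.999, df=grp.shape[1])   # assumed cutoff

    keep[idx] = uni_ok & multi_ok

print(f"kept {keep.sum()} of {len(keep)} training samples")
```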
Er, B; Demirhan, B; Yentür, G
2014-01-01
Aflatoxins are fungal toxins known to be carcinogenic and are classified as food contaminants. This study was performed to investigate aflatoxin (AF) M1 levels in baby foods sold in Ankara (Turkey) and to evaluate the obtained results according to the Turkish Food Codex (TFC). For this purpose, a total of 84 baby food samples (50 follow-on milks and 34 infant formulas) were obtained from different markets in Ankara and the presence of AFM1 in the samples was analyzed by ELISA. In 32 (38.1%) of 84 infant food samples, the presence of AFM1 was detected in concentrations ranging between 0.0055 and 0.0201 µg/kg. The mean level (± standard error) of AFM1 was found to be 0.0089 ± 0.0006 µg/kg in positive infant follow-on milks. Aflatoxin M1 was detected in only 1 infant formula sample (2.94%) at a concentration of 0.0061 µg/kg. The extrapolated levels of AFB1 contamination in feedstuffs were calculated based on levels of AFM1 in baby food samples. The data estimating AFB1 contamination in dairy cattle feedstuff indicate that contamination may range from 0.3410 to 1.2580 µg/kg, with the mean level (± standard error) being 0.5499 ± 0.0385 µg/kg, which is lower than the level set by the TFC and European Union regulations (5 µg/kg). According to the obtained results, the levels of AFM1 in analyzed samples were within the allowed limit (0.025 µg/kg) set in the TFC. Low levels of AFM1 in infant follow-on milks and infant formula samples obtained during the study do not pose a health risk to infants. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
MicrOmega: a VIS/NIR hyperspectral microscope for in situ analysis in space
NASA Astrophysics Data System (ADS)
Leroi, V.; Bibring, J. P.; Berthé, M.
2008-07-01
MicrOmega is an ultra-miniaturized spectral microscope for in situ analysis of samples. It is composed of two microscopes: one with a spatial sampling of 5 μm, working in 4 colors in the visible range, and one NIR hyperspectral microscope covering the spectral range 0.9-4 μm with a spatial sampling of 20 μm per pixel (described in this paper). MicrOmega/NIR illuminates and images samples a few mm in size and acquires the NIR spectrum of each resolved pixel in up to 600 contiguous spectral channels. The goal of this instrument is to analyse in situ the composition of collected samples at almost their grain-size scale, in a non-destructive way. It should be among the first set of instruments to analyse the sample, enabling other complementary analyses to be performed on it. With the spectral range and resolution chosen, a wide variety of constituents can be identified: minerals, such as pyroxene and olivine, ferric oxides, hydrated phyllosilicates, sulfates and carbonates; ices; and organics. The composition of the various phases within a given sample is a critical record of its formation and evolution. Coupled to the mapping information, it provides unique clues to the history of the parent body. In particular, the capability to identify hydrated grains and to characterize their adjacent phases has a huge potential in the search for potential bio-relics. We present the major instrumental principles and specifications of MicrOmega/NIR, and its expected performance, in particular for the ESA/ExoMars mission.
Quantitative LIBS analysis of vanadium in samples of hexagonal mesoporous silica catalysts.
Pouzar, Miloslav; Kratochvíl, Tomás; Capek, Libor; Smoláková, Lucie; Cernohorský, Tomás; Krejcová, Anna; Hromádko, Ludek
2011-02-15
A method for the analysis of vanadium in hexagonal mesoporous silica (V-HMS) catalysts using Laser Induced Breakdown Spectrometry (LIBS) is proposed. A commercially available LIBS spectrometer was calibrated with the aid of authentic V-HMS samples previously analyzed by ICP OES after microwave digestion. Deposition of the sample on the surface of adhesive tape was adopted as the sample preparation method. A strong matrix effect connected with the catalyst preparation technique (first, vanadium added during HMS synthesis; second, an already synthesised silica matrix impregnated with vanadium) was observed. The concentration range of V in the set of nine calibration standards was 1.3-4.5% (w/w). The limit of detection was 0.13% (w/w), calculated as three times the standard deviation of five replicate determinations of vanadium in a real sample with a very low vanadium concentration. Comparable results for LIBS and ED XRF were obtained when the same set of standards was used to calibrate both methods and vanadium was measured in the same type of real samples. A LIBS calibration constructed using V-HMS-impregnated samples failed for measuring V-HMS-synthesized samples; LIBS measurements appear to be strongly influenced by the different chemical forms of vanadium in impregnated and synthesised samples. The combination of LIBS and ED XRF is able to provide new information about the measured samples (in our case, for example, about the procedure of catalyst preparation). Copyright © 2010 Elsevier B.V. All rights reserved.
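The reported detection limit, three times the standard deviation of five replicate determinations on a low-vanadium sample, is a one-line calculation once a calibration exists. The sketch below uses placeholder intensities and concentrations, not the published data.

```python
import numpy as np

# Calibration: LIBS vanadium line intensity vs. reference V content (% w/w) for nine standards (placeholders)
ref_v = np.array([1.3, 1.7, 2.1, 2.5, 2.9, 3.3, 3.7, 4.1, 4.5])
intensity = np.array([210, 275, 330, 400, 465, 520, 585, 650, 710])
slope, intercept = np.polyfit(ref_v, intensity, 1)

# Five replicate determinations on a real sample with very low vanadium content (placeholders)
replicate_v = np.array([0.09, 0.13, 0.11, 0.17, 0.12])   # % w/w obtained from the calibration
lod = 3.0 * replicate_v.std(ddof=1)                      # LOD = 3 x SD of the replicates

print(f"calibration slope = {slope:.1f} counts per % V, LOD = {lod:.2f} % w/w")
```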
Space and time aliasing structure in monthly mean polar-orbiting satellite data
NASA Technical Reports Server (NTRS)
Zeng, Lixin; Levy, Gad
1995-01-01
Monthly mean wind fields from the European Remote Sensing Satellite (ERS1) scatterometer are presented. A banded structure which resembles the satellite subtrack is clearly and consistently apparent in the isotachs as well as the u and v components of the routinely produced fields. The structure also appears in the means of data from other polar-orbiting satellites and instruments. An experiment is designed to trace the cause of the banded structure. The European Centre for Medium-Range Weather Forecasts (ECMWF) gridded surface wind analyses are used as a control set. These analyses are also sampled with the ERS1 temporal-spatial sampling pattern to form a simulated scatterometer wind set. Both sets are used to create monthly averages. The banded structures appear in the monthly mean simulated data but do not appear in the control set. It is concluded that the source of the banded structure lies in the spatial and temporal sampling of the polar-orbiting satellite, which results in undersampling. The problem involves multiple timescales and space scales, oversampling and undersampling in space, aliasing in the time and space domains, and preferentially sampled variability. It is shown that commonly used spatial smoothers (or filters), while producing visually pleasing results, also significantly bias the true mean. A three-dimensional spatial-temporal interpolator is designed and used to determine the mean field. It is found to produce satisfactory monthly means from both simulated and real ERS1 data. The implications for climate studies involving polar-orbiting satellite data are discussed.
Millimeter- and submillimeter-wave characterization of various fabrics.
Dunayevskiy, Ilya; Bortnik, Bartosz; Geary, Kevin; Lombardo, Russell; Jack, Michael; Fetterman, Harold
2007-08-20
Transmission measurements of 14 fabrics are presented in the millimeter-wave and submillimeter-wave electromagnetic regions from 130 GHz to 1.2 THz. Three independent sources and experimental set-ups were used to obtain accurate results over a wide spectral range. Reflectivity, a useful parameter for imaging applications, was also measured for a subset of samples in the submillimeter-wave regime along with polarization sensitivity of the transmitted beam and transmission through doubled layers. All of the measurements were performed in free space. Details of these experimental set-ups along with their respective challenges are presented.
Increased prevalence of sex chromosome aneuploidies in specific language impairment and dyslexia.
Simpson, Nuala H; Addis, Laura; Brandler, William M; Slonims, Vicky; Clark, Ann; Watson, Jocelynne; Scerri, Thomas S; Hennessy, Elizabeth R; Bolton, Patrick F; Conti-Ramsden, Gina; Fairfax, Benjamin P; Knight, Julian C; Stein, John; Talcott, Joel B; O'Hare, Anne; Baird, Gillian; Paracchini, Silvia; Fisher, Simon E; Newbury, Dianne F
2014-04-01
Sex chromosome aneuploidies increase the risk of spoken or written language disorders but individuals with specific language impairment (SLI) or dyslexia do not routinely undergo cytogenetic analysis. We assess the frequency of sex chromosome aneuploidies in individuals with language impairment or dyslexia. Genome-wide single nucleotide polymorphism genotyping was performed in three sample sets: a clinical cohort of individuals with speech and language deficits (87 probands: 61 males, 26 females; age range 4 to 23 years), a replication cohort of individuals with SLI, from both clinical and epidemiological samples (209 probands: 139 males, 70 females; age range 4 to 17 years), and a set of individuals with dyslexia (314 probands: 224 males, 90 females; age range 7 to 18 years). In the clinical language-impaired cohort, three abnormal karyotypic results were identified in probands (proband yield 3.4%). In the SLI replication cohort, six abnormalities were identified providing a consistent proband yield (2.9%). In the sample of individuals with dyslexia, two sex chromosome aneuploidies were found giving a lower proband yield of 0.6%. In total, two XYY, four XXY (Klinefelter syndrome), three XXX, one XO (Turner syndrome), and one unresolved karyotype were identified. The frequency of sex chromosome aneuploidies within each of the three cohorts was increased over the expected population frequency (approximately 0.25%) suggesting that genetic testing may prove worthwhile for individuals with language and literacy problems and normal non-verbal IQ. Early detection of these aneuploidies can provide information and direct the appropriate management for individuals. © 2013 The Authors. Developmental Medicine & Child Neurology published by John Wiley & Sons Ltd on behalf of Mac Keith Press.
Temporal Wind Pairs for Space Launch Vehicle Capability Assessment and Risk Mitigation
NASA Technical Reports Server (NTRS)
Decker, Ryan K.; Barbre, Robert E., Jr.
2015-01-01
Space launch vehicles incorporate upper-level wind assessments to determine wind effects on the vehicle and for a commit to launch decision. These assessments make use of wind profiles measured hours prior to launch and may not represent the actual wind the vehicle will fly through. Uncertainty in the winds over the time period between the assessment and launch introduces uncertainty in assessment of vehicle controllability and structural integrity that must be accounted for to ensure launch safety. Temporal wind pairs are used in engineering development of allowances to mitigate uncertainty. Five sets of temporal wind pairs at various times (0.75, 1.5, 2, 3 and 4-hrs) at the United States Air Force Eastern Range and Western Range, as well as the National Aeronautics and Space Administration's Wallops Flight Facility are developed for use in upper-level wind assessments on vehicle performance. Historical databases are compiled from balloon-based and vertically pointing Doppler radar wind profiler systems. Various automated and manual quality control procedures are used to remove unacceptable profiles. Statistical analyses on the resultant wind pairs from each site are performed to determine if the observed extreme wind changes in the sample pairs are representative of extreme temporal wind change. Wind change samples in the Eastern Range and Western Range databases characterize extreme wind change. However, the small sample sizes in the Wallops Flight Facility databases yield low confidence that the sample population characterizes extreme wind change that could occur.
Temporal Wind Pairs for Space Launch Vehicle Capability Assessment and Risk Mitigation
NASA Technical Reports Server (NTRS)
Decker, Ryan K.; Barbre, Robert E., Jr.
2014-01-01
Space launch vehicles incorporate upper-level wind assessments to determine wind effects on the vehicle and for a commit to launch decision. These assessments make use of wind profiles measured hours prior to launch and may not represent the actual wind the vehicle will fly through. Uncertainty in the winds over the time period between the assessment and launch introduces uncertainty in assessment of vehicle controllability and structural integrity that must be accounted for to ensure launch safety. Temporal wind pairs are used in engineering development of allowances to mitigate uncertainty. Five sets of temporal wind pairs at various times (0.75, 1.5, 2, 3 and 4-hrs) at the United States Air Force Eastern Range and Western Range, as well as the National Aeronautics and Space Administration's Wallops Flight Facility are developed for use in upper-level wind assessments on vehicle performance. Historical databases are compiled from balloon-based and vertically pointing Doppler radar wind profiler systems. Various automated and manual quality control procedures are used to remove unacceptable profiles. Statistical analyses on the resultant wind pairs from each site are performed to determine if the observed extreme wind changes in the sample pairs are representative of extreme temporal wind change. Wind change samples in the Eastern Range and Western Range databases characterize extreme wind change. However, the small sample sizes in the Wallops Flight Facility databases yield low confidence that the sample population characterizes extreme wind change that could occur.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, H.; Higuchi, T.; Nishioki, N.
1997-01-01
A dual tunneling-unit scanning tunneling microscope (DTU STM) was developed for nm order length measurement with wide scan range. The crystalline lattice of highly oriented pyrolytic graphite (HOPG) was used as the reference scale. A reference unit was set up on top of a test unit. The reference sample holder and the probe tip of the test unit were attached to one single XY scanner on either surface, while the test sample holder was open. This enables simultaneous acquisition of wide images of HOPG and the test sample. The length in the test sample image was measured by counting the number of HOPG lattices. An inchworm actuator and an impact drive mechanism were introduced to roughly position probe tips. The XY scanner was designed to be elastic to eliminate image distortion. Some comparison experiments using two HOPG chips were carried out in air. The DTU STM is confirmed to be a stable and more powerful device for length measurement which has nanometer accuracy when covering a wide scan range up to several micrometers, and is capable of measuring comparatively large and heavy samples. © 1997 American Vacuum Society.
Hybrid setup for micro- and nano-computed tomography in the hard X-ray range
NASA Astrophysics Data System (ADS)
Fella, Christian; Balles, Andreas; Hanke, Randolf; Last, Arndt; Zabler, Simon
2017-12-01
With increasing miniaturization in industry and medical technology, non-destructive testing techniques are an area of ever-increasing importance. In this framework, X-ray microscopy offers an efficient tool for the analysis, understanding, and quality assurance of microscopic samples, in particular as it allows reconstructing three-dimensional data sets of the whole sample's volume via computed tomography (CT). The following article describes a compact X-ray microscope in the hard X-ray regime around 9 keV, based on a highly brilliant liquid-metal-jet source. In comparison to commercially available instruments, it is a hybrid that works in two different modes. The first one is a micro-CT mode without optics, which uses a high-resolution detector to allow scans of samples in the millimeter range with a resolution of 1 μm. The second mode is a microscope, which contains an X-ray optical element to magnify the sample and allows resolving 150 nm features. Changing between the modes is possible without moving the sample. Thus, the instrument represents an important step towards establishing high-resolution laboratory-based multi-mode X-ray microscopy as a standard investigation method.
Petito Boyce, Catherine; Sax, Sonja N; Cohen, Joel M
2017-08-01
Inhalation plays an important role in exposures to lead in airborne particulate matter in occupational settings, and particle size determines where and how much of airborne lead is deposited in the respiratory tract and how much is subsequently absorbed into the body. Although some occupational airborne lead particle size data have been published, limited information is available reflecting current workplace conditions in the U.S. To address this data gap, the Battery Council International (BCI) conducted workplace monitoring studies at nine lead acid battery manufacturing facilities (BMFs) and five secondary smelter facilities (SSFs) across the U.S. This article presents the results of the BCI studies focusing on the particle size distributions calculated from Personal Marple Impactor sampling data and particle deposition estimates in each of the three major respiratory tract regions derived using the Multiple-Path Particle Dosimetry model. The BCI data showed the presence of predominantly larger-sized particles in the work environments evaluated, with average mass median aerodynamic diameters (MMADs) ranging from 21-32 µm for the three BMF job categories and from 15-25 µm for the five SSF job categories tested. The BCI data also indicated that the percentage of lead mass measured at the sampled facilities in the submicron range (i.e., <1 µm, a particle size range associated with enhanced absorption of associated lead) was generally small. The estimated average percentages of lead mass in the submicron range for the tested job categories ranged from 0.8-3.3% at the BMFs and from 0.44-6.1% at the SSFs. Variability was observed in the particle size distributions across job categories and facilities, and sensitivity analyses were conducted to explore this variability. The BCI results were compared with results reported in the scientific literature. Screening-level analyses were also conducted to explore the overall degree of lead absorption potentially associated with the observed particle size distributions and to identify key issues associated with applying such data to set occupational exposure limits for lead.
Classification problems of Mount Kenya soils
NASA Astrophysics Data System (ADS)
Mutuma, Evans; Csorba, Ádám; Wawire, Amos; Dobos, Endre; Michéli, Erika
2017-04-01
Soil sampling on the agricultural lands covering 1200 square kilometers in the eastern part of Mount Kenya was carried out to assess the status of soil organic carbon (SOC) as a soil fertility indicator and to create an up-to-date soil classification map. The geology of the area consists of volcanic rocks and recent superficial deposits. The volcanic rocks are related to the Pliocene time, mainly lahars, phonolites, tuffs, basalt and ashes. A total of 28 open profiles and 49 augered profiles with 269 samples were collected. The samples were analyzed for total carbon, organic carbon, particle size distribution, percent bases, cation exchange capacity and pH, among other parameters. The objective of the study was to evaluate the variability of SOC in different Reference Soil Groups (RSG) and to compare the determined classification units with the KENSOTER database. Soil classification was performed based on the World Reference Base (WRB) for Soil Resources 2014. Based on the earlier surveys and the geological and environmental setting, Nitisols were expected to be the dominant soils of the sampled area. However, this was not the case. The major differences to the earlier survey data (KENSOTER database) are the presence of high-activity clays (CEC value range 27.6 cmol/kg - 70 cmol/kg), high silt content (range 32.6% - 52.4%) and silt/clay ratio (range 0.6 - 1.4), keeping these soils out of the Nitisols RSG. The morphological features accorded well with the earlier survey, but the soils failed the silt/clay ratio criteria for Nitisols. This observation calls attention to the need for new classification criteria for Nitisols and other soils of warm, humid regions with variable rates of weathering, to avoid difficulties in interpretation. To address the classification problem, this paper further discusses the taxonomic relationships between the studied soils. By contrast, most of the diagnostic elements (such as the presence of an Umbric horizon and Vitric and Andic properties) and some of the qualifiers (Humic, Dystric, Clayic, Skeletic, Leptic, etc.) represent useful information for land use and management in the area.
X-ray and optical substructures of the DAFT/FADA survey clusters
NASA Astrophysics Data System (ADS)
Guennou, L.; Durret, F.; Adami, C.; Lima Neto, G. B.
2013-04-01
We have undertaken the DAFT/FADA survey with the double aim of setting constraints on dark energy based on weak lensing tomography and of obtaining homogeneous and high quality data for a sample of 91 massive clusters in the redshift range 0.4-0.9 for which there were HST archive data. We have analysed the XMM-Newton data available for 42 of these clusters to derive their X-ray temperatures and luminosities and search for substructures. Out of these, a spatial analysis was possible for 30 clusters, but only 23 had deep enough X-ray data for a really robust analysis. This study was coupled with a dynamical analysis for the 26 clusters having at least 30 spectroscopic galaxy redshifts in the cluster range. Altogether, the X-ray sample of 23 clusters and the optical sample of 26 clusters have 14 clusters in common. We present preliminary results on the coupled X-ray and dynamical analyses of these 14 clusters.
The Mira-Titan Universe. II. Matter Power Spectrum Emulation
NASA Astrophysics Data System (ADS)
Lawrence, Earl; Heitmann, Katrin; Kwan, Juliana; Upadhye, Amol; Bingham, Derek; Habib, Salman; Higdon, David; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas
2017-09-01
We introduce a new cosmic emulator for the matter power spectrum covering eight cosmological parameters. Targeted at optical surveys, the emulator provides accurate predictions out to a wavenumber k ~ 5 Mpc⁻¹ and redshift z ≤ 2. In addition to covering the standard set of ΛCDM parameters, massive neutrinos and a dynamical dark energy equation of state are included. The emulator is built on a sample set of 36 cosmological models, carefully chosen to provide accurate predictions over the wide and large parameter space. For each model, we have performed a high-resolution simulation, augmented with 16 medium-resolution simulations and TimeRG perturbation theory results to provide accurate coverage over a wide k-range; the data set generated as part of this project is more than 1.2 Pbytes. With the current set of simulated models, we achieve an accuracy of approximately 4%. Because the sampling approach used here has established convergence and error-control properties, follow-up results with more than a hundred cosmological models will soon achieve ~1% accuracy. We compare our approach with other prediction schemes that are based on halo model ideas and remapping approaches. The new emulator code is publicly available.
The Mira-Titan Universe. II. Matter Power Spectrum Emulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawrence, Earl; Heitmann, Katrin; Kwan, Juliana
We introduce a new cosmic emulator for the matter power spectrum covering eight cosmological parameters. Targeted at optical surveys, the emulator provides accurate predictions out to a wavenumber k similar to 5 Mpc(-1) and redshift z <= 2. In addition to covering the standard set of Lambda CDM parameters, massive neutrinos and a dynamical dark energy equation of state are included. The emulator is built on a sample set of 36 cosmological models, carefully chosen to provide accurate predictions over the wide and large parameter space. For each model, we have performed a high-resolution simulation, augmented with 16 medium-resolution simulations and TimeRG perturbation theory results to provide accurate coverage over a wide k-range; the data set generated as part of this project is more than 1.2 Pbytes. With the current set of simulated models, we achieve an accuracy of approximately 4%. Because the sampling approach used here has established convergence and error-control properties, follow-up results with more than a hundred cosmological models will soon achieve similar to 1% accuracy. We compare our approach with other prediction schemes that are based on halo model ideas and remapping approaches.
Magnetic Properties of a Fluvial Chronosequence From the Eastern Wind River Range, Wyoming
NASA Astrophysics Data System (ADS)
Quinton, E. E.; Dahms, D. E.; Geiss, C. E.
2010-12-01
In order to constrain the rate of magnetic enhancement in glacial fluvial sediments, we sampled modern soils from eight fluvial terraces in the eastern Wind River Range in Wyoming. Soil profiles up to 1.2 meters deep were described in the field and sampled in five cm intervals from a series of hand-dug pits or natural river-bank exposures. The ages of the studied profiles are estimated to range from >600 ka to modern. They include Sacagawea Ridge, Bull Lake and Pinedale-age fluvial terraces as well as one Holocene profile. To characterize changes in magnetic properties we measured low-field magnetic susceptibility, anhysteretic remanent magnetization, isothermal remanent magnetization and S-ratios for all samples, and hysteresis loops for a selected sub-set of samples. Our measurements show no clear trend in magnetic enhancement with estimated soil age. The observed lack of magnetic enhancement in the older soils may be due to long-term deflation, which continuously strips off the magnetically enhanced topsoil. It is also possible that the main pedogenic processes, such as the development of well-expressed calcic horizons, destroy or mask the effects of long-term magnetic enhancement.
Magnetic properties of the upper mantle beneath the continental United States
NASA Astrophysics Data System (ADS)
Friedman, S. A.; Ferre, E. C.; Demory, F.; Rochette, P.; Martin Hernandez, F.; Conder, J. A.
2012-12-01
The interpretation of long-wavelength satellite magnetic data (Magsat, Oersted, CHAMP, SWARM) requires an understanding of magnetic mineralogy in the lithospheric mantle and reliable models of induced and remanent magnetic sources in the lithospheric mantle and the crust. Blakely et al. (2005) proposed the hypothesis of a magnetic lithospheric mantle in subduction zones. This prompted us to reexamine magnetic sources in the lithospheric mantle in different tectonic settings where unaltered mantle xenoliths have been reported since the 1990s. Xenoliths from the upper mantle beneath the continental United States show different magnetic properties depending on the tectonic setting in which they equilibrated. Three localities in the South Central United States (San Carlos, AZ; Kilbourne Hole, NM; Knippa, TX) produced lherzolite and harzburgite xenoliths, while the Bearpaw Mountains in Montana (subduction zone) produced dunite and phlogopite-rich dunite xenoliths. Paleomagnetic data on these samples show the lack of secondary alteration, which is commonly caused by post-eruption serpentinization, and the lack of basalt contamination. The main magnetic carrier is pure magnetite. The ascent of mantle xenoliths to the surface of the Earth generally takes only a few hours. Numerical modelling shows that nucleation of magnetite during ascent would form superparamagnetic grains and therefore cannot explain the observed magnetic grain sizes. This implies that the ferromagnetic phases present in the studied samples formed at mantle depth. The samples from the South Central United States exhibit a small range in low-field magnetic susceptibility (+/- 0.00003 [SI]) and Natural Remanent Magnetization (NRM) between 0.001 and 0.100 A/m. By contrast, samples from the Bearpaw Mountains exhibit a wider range of low-field susceptibilities (0.00001 to 0.0015 [SI]) and NRM (0.01 to 9.00 A/m). These samples have been serpentinized in situ by metasomatic fluids related to the Farallon plate (Facer et al., 2009). Hence, the magnetic properties of the lithospheric mantle beneath the continental United States differ significantly depending on tectonic setting. The combination of the low geotherm observed in the Bearpaw Mountains with the stronger induced and remanent magnetization of mantle rocks in this area may produce a detectable long-wavelength magnetic anomaly (LWMA).
Mangroves and Sediments - It's not all about mud!
NASA Astrophysics Data System (ADS)
Lokier, Stephen; Paul, Andreas; Fiorini, Flavia
2016-04-01
Mangals occur both as natural mangals and as plantations along the Arabian Gulf coastline of the United Arab Emirates (UAE). Over recent years there has been a significant campaign to extend the area of the mangrove forests, a project that has resulted in significant dredging activity in tandem with the planting of mangrove saplings. The rationale for this operation has been to increase coastal protection from erosion and to partly offset the UAE's carbon footprint. This project, along with significant coastal infrastructure development, has, regrettably, reduced the number of mangal settings that may be considered pristine. With this in mind, we have undertaken an extensive sampling campaign in order to fully characterise the sediments associated with the depositional sub-environments of mangal systems. Satellite imagery and ground-based reconnaissance were employed to identify a natural mangal area to the east of Abu Dhabi Island. Within this area, a transect was established across a naturally occurring mangal channel system. Along-transect sampling stations were selected in order to reflect the range of environmental conditions, both in terms of energy and in relation to the degree of tidal exposure. At each station an array of environmental parameters was monitored, including, but not limited to, temperature, salinity, current velocity and turbidity. The surface sediment at each sample station was regularly sampled and returned to the laboratory, where it was subjected to a range of analyses including grain size and modal analysis, identification of biota and measurement of total organic content. The results of this study allow us to develop a mangal sediment facies map that accurately establishes the relationships between sediments, depositional setting and environmental parameters. These results can be employed to inform the interpretation of ancient successions deposited under similar conditions. Further, the findings of this study will aid in the development of accurate petroleum reservoir models that are constrained by a quantitative data set. Lastly, a comparison between the environmental and sediment characteristics of natural and artificial mangals will aid our understanding of the effects of these new systems on the sedimentary dynamics of the UAE's coastline.
The stellar populations of nearby early-type galaxies
NASA Astrophysics Data System (ADS)
Concannon, Kristi Dendy
The recent completion of comprehensive photometric and spectroscopic galaxy surveys has revealed that early-type galaxies form a more heterogeneous family than previously thought. To better understand the star formation histories of early-type galaxies, we have obtained a set of high resolution, high signal-to-noise ratio spectra for a sample of 180 nearby early-type galaxies with the FAST spectrograph and the 1.5m telescope at F. L. Whipple Observatory. The spectra cover the wavelength range 3500-5500 Å, which allows the comparison of various Balmer lines, most importantly the higher-order lines in the blue, and have a S/N ratio higher than that of previous samples, which makes it easier to investigate the intrinsic spread in the observed parameters. The data set contains galaxies in both the local field and Virgo cluster environment and spans the velocity dispersion range 50 < σ < 250 km s⁻¹. In conjunction with recent improvements in population synthesis modeling, our data set enables us to investigate the star formation history of E/S0 galaxies as a function of mass (σ), environment, and to some extent morphology. We are able to probe the effects of age and metallicity on fundamental observable relations such as the Mg-σ relation, and show that there is a significant spread in age in such diagrams, at all log σ, such that their “uniformity” cannot be interpreted as a homogeneous history for early-type galaxies. Analyzing the age and [Fe/H] distribution as a function of galaxy mass, we find that an age-σ relation exists among galaxies in both the local field and the Virgo cluster, such that the lower log σ galaxies have younger luminosity-weighted mean ages. The age spread of the low σ galaxies suggests that essentially all of the low-mass galaxies contain young to intermediate age populations, whereas the spread in age of the high log σ galaxies (log σ ≳ 2.0) is much larger, with galaxies spanning the age range of 4-19 Gyr. Thus, rather than pointing to all Es and S0s being old, the data show that even the most massive galaxies in our sample span a range of intermediate to old ages.
Application of Handheld Laser-Induced Breakdown Spectroscopy (LIBS) to Geochemical Analysis.
Connors, Brendan; Somers, Andrew; Day, David
2016-05-01
While laser-induced breakdown spectroscopy (LIBS) has been in use for decades, only within the last two years has technology progressed to the point of enabling true handheld, self-contained instruments. Several instruments are now commercially available with a range of capabilities and features. In this paper, the SciAps Z-500 handheld LIBS instrument functionality and sub-systems are reviewed. Several assayed geochemical sample sets, including igneous rocks and soils, are investigated. Calibration data are presented for multiple elements of interest, along with examples of elemental mapping in heterogeneous samples. Sample preparation, the method of collecting data from multiple locations, and data analysis are discussed. © The Author(s) 2016.
A catalog of porosity and permeability from core plugs in siliciclastic rocks
Nelson, Philip H.; Kibler, Joyce E.
2003-01-01
Porosity and permeability measurements on cored samples from siliciclastic formations are presented for 70 data sets, taken from published data and descriptions. Data sets generally represent specific formations, usually from a limited number of wells. Each data set is represented by a written summary, a plot of permeability versus porosity, and a digital file of the data. The summaries include a publication reference, the geologic age of the formation, location, well names, depth range, various geologic descriptions, and core measurement conditions. Attributes such as grain size or depositional environment are identified by symbols on the plots. An index lists the authors and date, geologic age, formation name, sandstone classification, location, basin or structural province, and field name.
Kasaie, Parastu; Mathema, Barun; Kelton, W David; Azman, Andrew S; Pennington, Jeff; Dowdy, David W
2015-01-01
In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission ("recent transmission proportion"), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional 'n-1' approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the 'n-1' technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the 'n-1' model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models' performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data.
Kasaie, Parastu; Mathema, Barun; Kelton, W. David; Azman, Andrew S.; Pennington, Jeff; Dowdy, David W.
2015-01-01
In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission (“recent transmission proportion”), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional ‘n-1’ approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the ‘n-1’ technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the ‘n-1’ model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models’ performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data. PMID:26679499
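The regression tools described in the abstract above map five observable study characteristics to an estimate of the recent transmission proportion. The published model is available as a web-based tool; the sketch below is only an editor-added illustration of how such a regression-based correction could be assembled from a simulated derivation set. All variable names, ranges, and the synthetic relationship between inputs and the "true" proportion are invented for the example.

    # Illustrative sketch (not the authors' published tool): fit a regression that maps
    # the five observable inputs to the true recent transmission proportion, using a
    # simulated derivation set, then apply it to one observed study.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_sims = 500

    incidence  = rng.uniform(10, 500, n_sims)    # TB incidence per 100,000 (hypothetical)
    coverage   = rng.uniform(0.3, 1.0, n_sims)   # sampling coverage
    duration   = rng.uniform(1, 5, n_sims)       # study duration, years
    clustered  = rng.uniform(0.1, 0.6, n_sims)   # clustered proportion of observed cases
    clust_prop = rng.uniform(0.05, 0.5, n_sims)  # proportion of observed clusters

    # Synthetic "true" recent transmission proportion, purely for demonstration
    true_rtp = np.clip(clustered + 0.3 * (1 - coverage) - 0.02 * duration
                       + rng.normal(0, 0.03, n_sims), 0, 1)

    X = np.column_stack([incidence, coverage, duration, clustered, clust_prop])
    model = LinearRegression().fit(X, true_rtp)

    # Apply the fitted tool to one hypothetical observed study
    study = np.array([[120, 0.7, 3.0, 0.35, 0.2]])
    print("estimated recent transmission proportion:", model.predict(study)[0])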
Mismatch between marine plankton range movements and the velocity of climate change
NASA Astrophysics Data System (ADS)
Chivers, William J.; Walne, Anthony W.; Hays, Graeme C.
2017-02-01
The response of marine plankton to climate change is of critical importance to the oceanic food web and fish stocks. We use a 60-year ocean basin-wide data set comprising >148,000 samples to reveal huge differences in range changes associated with climate change across 35 plankton taxa. While the range of dinoflagellates and copepods tended to closely track the velocity of climate change (the rate of isotherm movement), the range of the diatoms moved much more slowly. Differences in range shifts were up to 900 km in a recent warming period, with average velocities of range movement between 7 km per decade northwards for taxa exhibiting niche plasticity and 99 km per decade for taxa exhibiting niche conservatism. The differing responses of taxa to global warming will cause spatial restructuring of the plankton ecosystem with likely consequences for grazing pressures on phytoplankton and hence for biogeochemical cycling, higher trophic levels and biodiversity.
NASA Technical Reports Server (NTRS)
Richey, C. R.; Kinzer, R. E.; Cataldo, G.; Wollack, E. J.; Nuth, J. A.; Benford, D. J.; Silverberg, R. F.; Rinhart, S. A.
2013-01-01
The Optical Properties of Astronomical Silicates with Infrared Techniques program utilizes multiple instruments to provide spectral data over a wide range of temperatures and wavelengths. Experimental methods include vector network analyzer and Fourier transform spectroscopy transmission measurements, as well as reflection/scattering measurements. From these data, we can determine the optical parameters for the index of refraction, n, and the absorption coefficient, k. The analysis of the laboratory transmittance data for each sample type is based upon different mathematical models, which are applied to each data set according to their degree of coherence. Presented here are results from iron silicate dust grain analogs, in several sample preparations and at temperatures ranging from 5 to 300 K, across the infrared and millimeter portion of the spectrum (from 2.5 to 10,000 μm, or 4000 to 1 cm⁻¹).
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2015-03-01
Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, organic carbon is measured from a quartz fiber filter that has been exposed to a volume of ambient air and analyzed using thermal methods such as thermal-optical reflectance (TOR). Here, methods are presented that show the feasibility of using Fourier transform infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters to accurately predict TOR OC. This work marks an initial step in proposing a method that can reduce the operating costs of large air quality monitoring networks with an inexpensive, non-destructive analysis technique using routinely collected PTFE filter samples which, in addition to OC concentrations, can concurrently provide information regarding the composition of organic aerosol. This feasibility study suggests that the minimum detection limit and errors (or uncertainty) of FT-IR predictions are on par with TOR OC, such that evaluation of long-term trends and epidemiological studies would not be significantly impacted. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least-squares regression is used to calibrate sample FT-IR absorbance spectra to TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date. The calibration produces precise and accurate TOR OC predictions of the test set samples by FT-IR, as indicated by a high coefficient of determination (R² = 0.96), low bias (0.02 μg m⁻³; the nominal IMPROVE sample volume is 32.8 m³), low error (0.08 μg m⁻³) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC ratio, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; these divisions also lead to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact-correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least-squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
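The core calibration step described above is a partial least-squares regression from filter spectra to TOR OC. The editor-added sketch below shows the general workflow on synthetic data (it is not the authors' code, and the spectra, component structure, and metrics here are invented); the calibration/test split, component count, and error definitions in the actual study differ.

    # Minimal sketch of the calibration idea: PLS regression from FT-IR absorbance
    # spectra to TOR OC, evaluated with R^2, bias, and a simple error metric.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(1)
    n_cal, n_test, n_points = 600, 194, 1500

    def make_spectra(n):
        # Synthetic spectra built from a few latent "functional group" components
        scores = rng.uniform(0, 1, (n, 3))
        loadings = rng.normal(0, 1, (3, n_points))
        oc = scores @ np.array([2.0, 1.0, 0.5])                 # synthetic OC values
        spectra = scores @ loadings + rng.normal(0, 0.05, (n, n_points))
        return spectra, oc

    X_cal, y_cal = make_spectra(n_cal)
    X_test, y_test = make_spectra(n_test)

    pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
    y_pred = pls.predict(X_test).ravel()

    bias = np.mean(y_pred - y_test)               # mean signed difference
    error = np.median(np.abs(y_pred - y_test))    # a simple absolute-error summary
    print(f"R2={r2_score(y_test, y_pred):.2f}  bias={bias:.3f}  error={error:.3f}")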
Wang, Linwei; Mohammad, Sohaib H.; Li, Qiaozhi; Rienthong, Somsak; Rienthong, Dhanida; Nedsuwan, Supalert; Mahasirimongkol, Surakameth; Yasui, Yutaka
2014-01-01
There is an urgent need for simple, rapid, and affordable diagnostic tests for tuberculosis (TB) to combat the great burden of the disease in developing countries. The microscopic observation drug susceptibility assay (MODS) is a promising tool to fill this need, but it is not widely used due to concerns regarding its biosafety and efficiency. This study evaluated the automated MODS (Auto-MODS), which operates on principles similar to those of MODS but with several key modifications, making it an appealing alternative to MODS in resource-limited settings. In the operational setting of Chiang Rai, Thailand, we compared the performance of Auto-MODS with the gold standard liquid culture method in Thailand, mycobacterial growth indicator tube (MGIT) 960 plus the SD Bioline TB Ag MPT64 test, in terms of accuracy and efficiency in differentiating TB and non-TB samples as well as distinguishing TB and multidrug-resistant (MDR) TB samples. Sputum samples from clinically diagnosed TB and non-TB subjects across 17 hospitals in Chiang Rai were consecutively collected from May 2011 to September 2012. A total of 360 samples were available for evaluation, of which 221 (61.4%) were positive and 139 (38.6%) were negative for mycobacterial cultures according to MGIT 960. Of the 221 true-positive samples, Auto-MODS identified 212 as positive and 9 as negative (sensitivity, 95.9%; 95% confidence interval [CI], 92.4% to 98.1%). Of the 139 true-negative samples, Auto-MODS identified 135 as negative and 4 as positive (specificity, 97.1%; 95% CI, 92.8% to 99.2%). The median time to culture positivity was 10 days, with an interquartile range of 8 to 13 days for Auto-MODS. Auto-MODS is an effective and cost-sensitive alternative diagnostic tool for TB diagnosis in resource-limited settings. PMID:25378569
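The sensitivity and specificity quoted in the abstract above follow directly from the raw counts (212 of 221 true positives, 135 of 139 true negatives). The editor-added sketch below reproduces them together with exact (Clopper-Pearson) binomial confidence intervals, which match the quoted 95% CIs; this assumes the published intervals are exact binomial intervals.

    # Reproduce sensitivity/specificity and exact binomial 95% CIs from the counts above.
    from scipy.stats import beta

    def clopper_pearson(k, n, alpha=0.05):
        """Exact binomial CI for k successes out of n trials."""
        lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return k / n, lo, hi

    sens = clopper_pearson(212, 221)   # Auto-MODS positives among MGIT-positive samples
    spec = clopper_pearson(135, 139)   # Auto-MODS negatives among MGIT-negative samples
    print("sensitivity %.1f%% (%.1f-%.1f%%)" % tuple(100 * v for v in sens))
    print("specificity %.1f%% (%.1f-%.1f%%)" % tuple(100 * v for v in spec))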
Hime, Paul M; Hotaling, Scott; Grewelle, Richard E; O'Neill, Eric M; Voss, S Randal; Shaffer, H Bradley; Weisrock, David W
2016-12-01
Perhaps the most important recent advance in species delimitation has been the development of model-based approaches to objectively diagnose species diversity from genetic data. Additionally, the growing accessibility of next-generation sequence data sets provides powerful insights into genome-wide patterns of divergence during speciation. However, applying complex models to large data sets is time-consuming and computationally costly, requiring careful consideration of the influence of both individual and population sampling, as well as the number and informativeness of loci on species delimitation conclusions. Here, we investigated how locus number and information content affect species delimitation results for an endangered Mexican salamander species, Ambystoma ordinarium. We compared results for an eight-locus, 137-individual data set and an 89-locus, seven-individual data set. For both data sets, we used species discovery methods to define delimitation models and species validation methods to rigorously test these hypotheses. We also used integrated demographic model selection tools to choose among delimitation models, while accounting for gene flow. Our results indicate that while cryptic lineages may be delimited with relatively few loci, sampling larger numbers of loci may be required to ensure that enough informative loci are available to accurately identify and validate shallow-scale divergences. These analyses highlight the importance of striking a balance between dense sampling of loci and individuals, particularly in shallowly diverged lineages. They also suggest the presence of a currently unrecognized, endangered species in the western part of A. ordinarium's range. © 2016 John Wiley & Sons Ltd.
Experiments on the Effects of Confining Pressure During Reaction-Driven Cracking
NASA Astrophysics Data System (ADS)
Skarbek, R. M.; Savage, H. M.; Kelemen, P. B.; Lambart, S.; Robinson, B.
2016-12-01
Cracking caused by reaction-driven volume increase is an important process in many geological settings. In particular, the interaction of brittle rocks with reactive fluids can create fractures that modify the permeability and reactive surface area, leading to a large variety of feedbacks. The conditions controlling reaction-driven cracking are poorly understood, especially at geologically relevant confining pressures. We conducted two sets of experiments to study the effects of confining pressure on cracking during the formation of gypsum from anhydrite (CaSO4 + 2H2O → CaSO4·2H2O) and of portlandite from calcium oxide (CaO + H2O → Ca(OH)2). In the first set of experiments, we cold-pressed CaSO4 or CaO powder to form cylinders. Samples were confined in steel and compressed with an axial load of 0.1 to 4 MPa. Water was allowed to infiltrate the initially unsaturated samples through the bottom face via capillary and Darcian flow across a micro-porous frit. The height of the sample was recorded during the experiment and serves as a measure of volume change due to the hydration reaction. We also recorded acoustic emissions (AEs) using piezoelectric transducers (PZTs) to serve as a measure of cracking during an experiment. Experiments were stopped when the recorded volume change reached 80%-100% of the stoichiometrically calculated volume change of the reaction. In a second set of experiments, we pressed CaSO4 powder to form cylinders 8.9 cm in length and 3.5 cm in diameter for testing in a tri-axial press with ports for fluid input and output across the top and bottom faces of the sample. The tri-axial experiments were set up to investigate the reaction-driven cracking process for a range of confining pressures. Cracking during experiments was monitored using strain gauges and PZTs attached to the sample. We measured permeability during experiments by imposing a fluid pressure gradient across the sample. These experiments elucidate the role of cracking caused by crystallization pressure in many important hydration reactions.
Real-time high dynamic range laser scanning microscopy
NASA Astrophysics Data System (ADS)
Vinegoni, C.; Leon Swisher, C.; Fumene Feruglio, P.; Giedt, R. J.; Rousso, D. L.; Stapleton, S.; Weissleder, R.
2016-04-01
In conventional confocal/multiphoton fluorescence microscopy, images are typically acquired under ideal settings and after extensive optimization of parameters for a given structure or feature, often resulting in information loss from other image attributes. To overcome the problem of selective data display, we developed a new method that extends the imaging dynamic range in optical microscopy and improves the signal-to-noise ratio. Here we demonstrate how real-time and sequential high dynamic range microscopy facilitates automated three-dimensional neural segmentation. We address reconstruction and segmentation performance on samples with different size, anatomy and complexity. Finally, in vivo real-time high dynamic range imaging is also demonstrated, making the technique particularly relevant for longitudinal imaging in the presence of physiological motion and/or for quantification of in vivo fast tracer kinetics during functional imaging.
Arsonists in California and New York: a tentative look
William G. Bradshaw; Timothy G. Huff
1985-01-01
A nonrandom sample of 90 arsonists in California and New York was interviewed for the California Department of Forestry from 1977 to 1979. About two-thirds of them were in prison for arson, and the others were in mental hospitals. The 90 interviewees were mostly unmarried males ranging in age from 17 to 51 years. Survey results show that those who set their first arson...
ERIC Educational Resources Information Center
Rivera, Hector H.; Tharp, Roland G.
2006-01-01
This study provides an empirical description of the dimensions of community values, beliefs, and opinions through a survey conducted in the Pueblo Indian community of Zuni in New Mexico. The sample was composed of 200 randomly chosen community members ranging from 21 to 103 years old. A principal component factor analysis was conducted, as well as…
Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M
2010-03-15
A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
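The central idea above is to select, for each sample spectrum, the closest spectrum from a previously recorded reference (eluent) data set within a user-chosen spectral window and subtract it as background. The editor-added sketch below is one plausible implementation of that point-to-point matching, not the authors' code; the window choice, distance metric, and data are illustrative.

    # Illustrative point-to-point reference matching and background subtraction.
    import numpy as np

    def correct_background(sample_spectra, reference_spectra, window):
        """sample_spectra: (n_samples, n_points); reference_spectra: (n_refs, n_points);
        window: slice or index array selecting the comparison spectral range."""
        corrected = np.empty_like(sample_spectra)
        for i, spectrum in enumerate(sample_spectra):
            # Point-to-point comparison restricted to the selected spectral range
            diffs = reference_spectra[:, window] - spectrum[window]
            best = np.argmin(np.sum(diffs ** 2, axis=1))   # closest reference spectrum
            corrected[i] = spectrum - reference_spectra[best]
        return corrected

    # Example usage with synthetic data
    rng = np.random.default_rng(2)
    refs = rng.normal(0, 1, (50, 800))        # reference set recorded across the gradient
    samples = refs[rng.integers(0, 50, 10)] + rng.normal(0, 0.01, (10, 800))
    window = slice(200, 400)                  # user-selected comparison range
    print(correct_background(samples, refs, window).shape)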
International Intercomparison of Regular Transmittance Scales
NASA Astrophysics Data System (ADS)
Eckerle, K. L.; Sutter, E.; Freeman, G. H. C.; Andor, G.; Fillinger, L.
1990-01-01
An intercomparison of the regular spectral transmittance scales of NIST, Gaithersburg, MD (USA); PTB, Braunschweig (FRG); NPL, Teddington, Middlesex (UK); and OMH, Budapest (H) was accomplished using three sets of neutral glass filters with transmittances ranging from approximately 0.92 to 0.001. The difference between the results from the reference spectrophotometers of the laboratories was generally smaller than the total uncertainty of the interchange. The relative total uncertainty ranges from 0.05% to 0.75% for transmittances from 0.92 to 0.001. The sample-induced error was large - contributing 40% or more of the total except in a few cases.
Shumow, Laura; Bodor, Alison
2011-07-05
This manuscript describes the results of an HPLC study for the determination of the flavan-3-ol monomers, (±)-catechin and (±)-epicatechin, in cocoa and plain dark and milk chocolate products. The study was performed under the auspices of the National Confectioners Association (NCA) and involved the analysis of a series of samples by laboratories of five member companies using a common method. The method reported in this paper uses reversed phase HPLC with fluorescence detection to analyze (±)-epicatechin and (±)-catechin extracted with an acidic solvent from defatted cocoa and chocolate. In addition to a variety of cocoa and chocolate products, the sample set included a blind duplicate used to assess method reproducibility. All data were subjected to statistical analysis with outliers eliminated from the data set. The percent coefficient of variation (%CV) of the sample set ranged from approximately 7 to 15%. Further experimental details are described in the body of the manuscript and the results indicate the method is suitable for the determination of (±)-catechin and (±)-epicatechin in cocoa and chocolate products and represents the first collaborative study of this HPLC method for these compounds in these matrices.
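The reproducibility figure quoted above (a percent coefficient of variation of roughly 7 to 15% across laboratories) is a simple per-sample statistic. The editor-added sketch below shows the calculation on invented lab results; the study's actual outlier test is not specified in the abstract, so a plain z-score screen stands in for it here.

    # Percent CV across laboratories for one sample, after a simple outlier screen.
    import numpy as np

    def percent_cv(values, z_cut=2.5):
        values = np.asarray(values, dtype=float)
        z = np.abs(values - values.mean()) / values.std(ddof=1)
        kept = values[z < z_cut]                     # drop grossly outlying lab results
        return 100 * kept.std(ddof=1) / kept.mean()

    # Hypothetical (±)-epicatechin results (mg/100 g) from five laboratories
    labs = [152, 147, 158, 149, 183]
    print(f"%CV = {percent_cv(labs):.1f}")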
Miyajima, Yoshiharu; Satoh, Kazuo; Uchida, Takao; Yamada, Tsuyoshi; Abe, Michiko; Watanabe, Shin-ichi; Makimura, Miho; Makimura, Koichi
2013-03-01
Trichophyton rubrum and Trichophyton mentagrophytes human-type (synonym, Trichophyton interdigitale (anthropophilic)) are major causative pathogens of tinea unguium. For suitable diagnosis and treatment, rapid and accurate identification of etiologic agents in clinical samples using a reliable molecular-based method is required. For identification of organisms causing tinea unguium, we developed a new real-time polymerase chain reaction (PCR) assay with a pan-fungal primer set and probe, as well as specific primer sets and probes for T. rubrum and T. mentagrophytes human-type. We designed two sets of primers from the internal transcribed spacer 1 (ITS1) region of fungal ribosomal DNA (rDNA) and three quadruple fluorescent probes, one for detection of a wide range of pathogenic fungi and two for classification of T. rubrum and T. mentagrophytes by specific binding to different sites in the ITS1 region. We investigated the specificity of these primer sets and probes using fungal genomic DNA, and also examined 42 clinical specimens with our real-time PCR. The primers and probes specifically detected T. rubrum, T. mentagrophytes, and a wide range of pathogenic fungi. The causative pathogens were identified in 42 nail and skin samples from 32 patients. The total time required for identification of fungal species in each clinical specimen was about 3 h. The copy number of each fungal DNA in the clinical specimens was estimated from the intensity of fluorescence simultaneously. This PCR system is one of the most rapid and sensitive methods available for diagnosing dermatophytosis, including tinea unguium and tinea pedis. Copyright © 2012 Japanese Society for Investigative Dermatology. Published by Elsevier Ireland Ltd. All rights reserved.
Pang, Justine; Feblowitz, Joshua C; Maloney, Francine L; Wilcox, Allison R; Ramelson, Harley Z; Schneider, Louise I; Bates, David W
2011-01-01
Background: Accurate knowledge of a patient's medical problems is critical for clinical decision making, quality measurement, research, billing and clinical decision support. Common structured sources of problem information include the patient problem list and billing data; however, these sources are often inaccurate or incomplete. Objective: To develop and validate methods of automatically inferring patient problems from clinical and billing data, and to provide a knowledge base for inferring problems. Study design and methods: We identified 17 target conditions and designed and validated a set of rules for identifying patient problems based on medications, laboratory results, billing codes, and vital signs. A panel of physicians provided input on a preliminary set of rules. Based on this input, we tested candidate rules on a sample of 100,000 patient records to assess their performance compared to gold standard manual chart review. The physician panel selected a final rule for each condition, which was validated on an independent sample of 100,000 records to assess its accuracy. Results: Seventeen rules were developed for inferring patient problems. Analysis using a validation set of 100,000 randomly selected patients showed high sensitivity (range: 62.8–100.0%) and positive predictive value (range: 79.8–99.6%) for most rules. Overall, the inference rules performed better than using either the problem list or billing data alone. Conclusion: We developed and validated a set of rules for inferring patient problems. These rules have a variety of applications, including clinical decision support, care improvement, augmentation of the problem list, and identification of patients for research cohorts. PMID:21613643
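Each of the 17 rules described above combines medications, laboratory results, billing codes, and vital signs into a yes/no inference for one condition. The editor-added sketch below shows the general shape of such a rule for one hypothetical condition; the thresholds, drug list, and codes are illustrative examples, not the study's validated knowledge base.

    # One illustrative problem-inference rule (diabetes) of the kind described above.
    from dataclasses import dataclass, field

    @dataclass
    class PatientRecord:
        medications: set = field(default_factory=set)
        labs: dict = field(default_factory=dict)       # latest results, e.g. {"hba1c": 8.1}
        billing_codes: set = field(default_factory=set)

    def infer_diabetes(p: PatientRecord) -> bool:
        """Return True if the record meets any branch of this illustrative rule."""
        on_hypoglycemics = {"metformin", "insulin", "glipizide"} & p.medications
        high_hba1c = p.labs.get("hba1c", 0) >= 6.5          # illustrative threshold
        coded = {"E11.9", "250.00"} & p.billing_codes        # example ICD-10 / ICD-9 codes
        return bool(on_hypoglycemics) or high_hba1c or bool(coded)

    patient = PatientRecord(medications={"metformin"}, labs={"hba1c": 7.2})
    print(infer_diabetes(patient))   # True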
Sensing cocaine in saliva with infrared laser spectroscopy
NASA Astrophysics Data System (ADS)
Hans, Kerstin M.-C.; Müller, Matthias; Gianella, Michele; Wägli, Ph.; Sigrist, Markus W.
2013-02-01
Increasing numbers of accidents caused by drivers under the influence of drugs have raised worldwide interest in drug testing. We developed a one-step extraction technique for cocaine in saliva and analyzed reference samples with laser spectroscopy employing two different schemes. The first is based on attenuated total reflection (ATR), which is applied to dried samples. The second scheme uses transmission measurements for the analysis of liquid samples. ATR spectroscopy achieved a limit of detection (LOD) of 3 μg/ml. The LOD for the transmission approach in liquid samples is < 10 μg/ml. These LODs are realistic, as such concentration ranges are encountered in the saliva of drug users after the administration of a single dose of cocaine. An improved stabilization of the set-up should lower the limit of detection significantly.
Further analyses of Rio Cuarto impact glass
NASA Technical Reports Server (NTRS)
Schultz, Peter H.; Bunch, T. E.; Koeberl, C.; Collins, W.
1993-01-01
Initial analyses of the geologic setting, petrology, and geochemistry of glasses recovered from within and around the elongate Rio Cuarto (RC) craters in Argentina focused on selected samples in order to document the general similarity with impactites around other terrestrial impact craters and to establish their origin. Continued analysis has surveyed the diversity in compositions for a range of samples, examined further evidence for temperature and pressure history, and compared the results with experimentally fused loess from oblique hypervelocity impacts. These new results not only firmly establish their impact origin but provide new insight on the impact process.
The impact of hypnotic suggestibility in clinical care settings.
Montgomery, Guy H; Schnur, Julie B; David, Daniel
2011-07-01
Hypnotic suggestibility has been described as a powerful predictor of outcomes associated with hypnotic interventions. However, there have been no systematic approaches to quantifying this effect across the literature. This meta-analysis evaluates the magnitude of the effect of hypnotic suggestibility on hypnotic outcomes in clinical settings. PsycINFO and PubMed were searched from their inception through July 2009. Thirty-four effects from 10 studies and 283 participants are reported. Results revealed a statistically significant overall effect size in the small to medium range (r = .24; 95% Confidence Interval = -0.28 to 0.75), indicating that greater hypnotic suggestibility led to greater effects of hypnosis interventions. Hypnotic suggestibility accounted for 6% of the variance in outcomes. Smaller sample size studies, use of the SHCS, and pediatric samples tended to result in larger effect sizes. The authors question the usefulness of assessing hypnotic suggestibility in clinical contexts.
The impact of hypnotic suggestibility in clinical care settings
Montgomery, Guy H.; Schnur, Julie B.; David, Daniel
2013-01-01
Hypnotic suggestibility has been described as a powerful predictor of outcomes associated with hypnotic interventions. However, there have been no systematic approaches to quantifying this effect across the literature. The present meta-analysis evaluates the magnitude of the effect of hypnotic suggestibility on hypnotic outcomes in clinical settings. PsycINFO and PubMed were searched from their inception through July 2009. Thirty-four effects from ten studies and 283 participants are reported. Results revealed a statistically significant overall effect size in the small to medium range (r = 0.24; 95% Confidence Interval = −0.28 to 0.75), indicating that greater hypnotic suggestibility led to greater effects of hypnosis interventions. Hypnotic suggestibility accounted for 6% of the variance in outcomes. Smaller sample size studies, use of the SHCS, and pediatric samples tended to result in larger effect sizes. Results question the usefulness of assessing hypnotic suggestibility in clinical contexts. PMID:21644122
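The overall effect size quoted in the two records above (r = .24 across 34 effects) comes from pooling study-level correlations. The editor-added sketch below shows the textbook fixed-effect Fisher-z pooling step on invented data; the published meta-analysis may have used a different weighting or random-effects scheme.

    # Fixed-effect pooling of correlations via the Fisher z transform (illustrative data).
    import numpy as np

    def pooled_correlation(r_values, n_values):
        r = np.asarray(r_values, dtype=float)
        n = np.asarray(n_values, dtype=float)
        z = np.arctanh(r)                 # Fisher z transform
        w = n - 3                         # inverse of Var(z) = 1 / (n - 3)
        z_bar = np.sum(w * z) / np.sum(w)
        return np.tanh(z_bar)             # back-transform to r

    effects = [0.10, 0.35, 0.22, 0.18, 0.40]     # invented per-study correlations
    sizes   = [25, 40, 30, 55, 20]               # invented sample sizes
    print(f"pooled r = {pooled_correlation(effects, sizes):.2f}")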
Belyanina, S I
2015-02-01
Cytogenetic analysis was performed on samples of Chironomus plumosus L. (Diptera, Chironomidae) taken from waterbodies of various types in Bryansk region (Russia) and Gomel region (Belarus). Karyotypes of specimens taken from stream pools of the Volga were used as reference samples. The populations of Bryansk and Gomel regions (except for a population of Lake Strativa in Starodubskii district, Bryansk region) exhibit broad structural variation, including somatic mosaicism for morphotypes of the salivary gland chromosome set, decondensation of telomeric sites, and the presence of small structural changes, as opposed to populations of Saratov region. As compared with Saratov and Bryansk regions, the Balbiani ring in the B-arm of chromosome I is repressed in populations of Gomel region. It is concluded that the chromosome set of Ch. plumosus in a range of waterbodies of Bryansk and Gomel regions is unstable.
Dexter, Alex; Race, Alan M; Steven, Rory T; Barnes, Jennifer R; Hulme, Heather; Goodwin, Richard J A; Styles, Iain B; Bunch, Josephine
2017-11-07
Clustering is widely used in MSI to segment anatomical features and differentiate tissue types, but existing approaches are both CPU and memory-intensive, limiting their application to small, single data sets. We propose a new approach that uses a graph-based algorithm with a two-phase sampling method that overcomes this limitation. We demonstrate the algorithm on a range of sample types and show that it can segment anatomical features that are not identified using commonly employed algorithms in MSI, and we validate our results on synthetic MSI data. We show that the algorithm is robust to fluctuations in data quality by successfully clustering data with a designed-in variance using data acquired with varying laser fluence. Finally, we show that this method is capable of generating accurate segmentations of large MSI data sets acquired on the newest generation of MSI instruments and evaluate these results by comparison with histopathology.
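The abstract above describes a graph-based clustering algorithm made tractable for large MSI data by a two-phase sampling scheme. The editor-added sketch below illustrates the general two-phase idea only (cluster a subsample, then assign all pixels); it is not the authors' algorithm, and the clustering method, sample size, and data are stand-ins.

    # Two-phase sampling sketch: graph-based clustering on a subsample, then
    # nearest-centroid assignment of every pixel spectrum.
    import numpy as np
    from sklearn.cluster import SpectralClustering

    def two_phase_cluster(spectra, n_clusters=5, sample_size=2000, seed=0):
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(spectra), size=min(sample_size, len(spectra)), replace=False)
        sample = spectra[idx]

        # Phase 1: graph-based clustering on the subsample only
        labels_sample = SpectralClustering(n_clusters=n_clusters,
                                           affinity="nearest_neighbors",
                                           random_state=seed).fit_predict(sample)
        centroids = np.vstack([sample[labels_sample == k].mean(axis=0)
                               for k in range(n_clusters)])

        # Phase 2: assign every pixel spectrum to its nearest centroid
        dists = np.linalg.norm(spectra[:, None, :] - centroids[None, :, :], axis=2)
        return np.argmin(dists, axis=1)

    pixels = np.random.default_rng(3).random((10000, 50))   # synthetic pixels x m/z bins
    labels = two_phase_cluster(pixels, n_clusters=4)
    print(np.bincount(labels))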
Gravel Transport Measured With Bedload Traps in Mountain Streams: Field Data Sets to be Published
NASA Astrophysics Data System (ADS)
Bunte, K.; Swingle, K. W.; Abt, S. R.; Ettema, R.; Cenderelli, D. A.
2017-12-01
Direct, accurate measurements of coarse bedload transport exist for only a few streams worldwide, because the task is laborious and requires a suitable device. However, sets of accurate field data would be useful for reference with unsampled sites and as a basis for model developments. The authors have carefully measured gravel transport and are compiling their data sets for publication. To ensure accurate measurements of gravel bedload in wadeable flow, the designed instrument consisted of an unflared aluminum frame (0.3 x 0.2 m) large enough for entry of cobbles. The attached 1 m or longer net with a 4 mm mesh held large bedload volumes. The frame was strapped onto a ground plate anchored onto the channel bed. This setup avoided involuntary sampler particle pick-up and enabled long sampling times, integrating over fluctuating transport. Beveled plates and frames facilitated easy particle entry. Accelerating flow over smooth plates compensated for deceleration within the net. Spacing multiple frames by 1 m enabled sampling much of the stream width. Long deployment, and storage of sampled bedload away from the frame's entrance, were attributes of traps rather than samplers; hence the name "bedload traps". The authors measured gravel transport with 4-6 bedload traps per cross-section at 10 mountain streams in CO, WY, and OR, accumulating 14 data sets (>1,350 samples). In 10 data sets, measurements covered much of the snowmelt high-flow season yielding 50-200 samples. Measurement time was typically 1 hour but ranged from 3 minutes to 3 hours, depending on transport intensity. Measuring back-to-back provided 6 to 10 samples over a 6 to 10-hour field day. Bedload transport was also measured with a 3-inch Helley-Smith sampler. The data set provides fractional (0.5 phi) transport rates in terms of particle mass and number for each bedload trap in the cross-section, the largest particle size, as well as total cross-sectional gravel transport rates. Ancillary field data include stage, discharge, long-term flow records if available, surface and subsurface sediment sizes, as well as longitudinal and cross-sectional site surveys. Besides transport relations, incipient motion conditions, hysteresis, and lateral variation, the data provide a reliable modeling basis to test insights and hypotheses regarding bedload transport.
Arantes, Joana; Machado, Armando
2008-07-01
Pigeons were trained on two temporal bisection tasks, which alternated every two sessions. In the first task, they learned to choose a red key after a 1-s signal and a green key after a 4-s signal; in the second task, they learned to choose a blue key after a 4-s signal and a yellow key after a 16-s signal. Then the pigeons were exposed to a series of test trials in order to contrast two timing models, Learning-to-Time (LeT) and Scalar Expectancy Theory (SET). The models made substantially different predictions particularly for the test trials in which the sample duration ranged from 1 s to 16 s and the choice keys were Green and Blue, the keys associated with the same 4-s samples: LeT predicted that preference for Green should increase with sample duration, a context effect, but SET predicted that preference for Green should not vary with sample duration. The results were consistent with LeT. The present study adds to the literature the finding that the context effect occurs even when the two basic discriminations are never combined in the same session.
Bradshaw, Richard T; Essex, Jonathan W
2016-08-09
Hydration free energy (HFE) calculations are often used to assess the performance of biomolecular force fields and the quality of assigned parameters. The AMOEBA polarizable force field moves beyond traditional pairwise additive models of electrostatics and may be expected to improve upon predictions of thermodynamic quantities such as HFEs over and above fixed-point-charge models. The recent SAMPL4 challenge evaluated the AMOEBA polarizable force field in this regard but showed substantially worse results than those using the fixed-point-charge GAFF model. Starting with a set of automatically generated AMOEBA parameters for the SAMPL4 data set, we evaluate the cumulative effects of a series of incremental improvements in parametrization protocol, including both solute and solvent model changes. Ultimately, the optimized AMOEBA parameters give a set of results that are not statistically significantly different from those of GAFF in terms of signed and unsigned error metrics. This allows us to propose a number of guidelines for new molecule parameter derivation with AMOEBA, which we expect to have benefits for a range of biomolecular simulation applications such as protein-ligand binding studies.
Measurement of latent cognitive abilities involved in concept identification learning.
Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Nock, Matthew K; Naifeh, James A; Heeringa, Steven; Ursano, Robert J; Stein, Murray B
2015-01-01
We used cognitive and psychometric modeling techniques to evaluate the construct validity and measurement precision of latent cognitive abilities measured by a test of concept identification learning: the Penn Conditional Exclusion Test (PCET). Item response theory parameters were embedded within classic associative- and hypothesis-based Markov learning models and were fitted to 35,553 Army soldiers' PCET data from the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Data were consistent with a hypothesis-testing model with multiple latent abilities: abstraction and set shifting. Latent abstraction ability was positively correlated with number of concepts learned, and latent set-shifting ability was negatively correlated with number of perseverative errors, supporting the construct validity of the two parameters. Abstraction was most precisely assessed for participants with abilities ranging from 1.5 standard deviations below the mean to the mean itself. Measurement of set shifting was acceptably precise only for participants making a high number of perseverative errors. The PCET precisely measures latent abstraction ability in the Army STARRS sample, especially within the range of mildly impaired to average ability. This precision pattern is ideal for a test developed to measure cognitive impairment as opposed to cognitive strength. The PCET also measures latent set-shifting ability, but reliable assessment is limited to the impaired range of ability, reflecting that perseverative errors are rare among cognitively healthy adults. Integrating cognitive and psychometric models can provide information about construct validity and measurement precision within a single analytical framework.
NASA Astrophysics Data System (ADS)
Healy, David A.; O'Connor, David J.; Burke, Aoife M.; Sodeau, John R.
2012-12-01
A bioaerosol sensing instrument referred to as WIBS-4, designed to continuously monitor ambient bioaerosols on-line, has been used to record a multiparameter “signature” from each of a number of Primary Biological Aerosol Particulate (PBAP) samples found in air. These signatures were obtained in a controlled laboratory environment and are based on the size, asymmetry (“shape”) and auto-fluorescence of the particles. Fifteen samples from two separate taxonomic ranks (kingdoms), Plantae (×8) and Fungi (×7), were individually introduced to the WIBS-4 for measurement, along with two non-fluorescing chemical solids, common salt and chalk. Over 2000 individual-particle measurements were recorded for each sample type, and the ability of the WIBS spectroscopic technique to distinguish between chemicals, pollen and fungal spore material was examined by identifying individual PBAP signatures. The results obtained show that WIBS-4 could potentially be a very useful analytical tool for distinguishing between natural airborne PBAP samples, such as the fungal spores, and may potentially play an important role in detecting and discriminating the toxic fungal spore, Aspergillus fumigatus, from others in real time. If the sizing range of the commercial instrument were suitably increased and the instrument permitted to operate simultaneously in its two sizing ranges, pollen and spores could potentially be discriminated. The data also suggest that the gain sensitivity setting on the detector would have to be reduced by a factor of >5 to routinely obtain in-range fluorescence measurements for pollen samples.
Phillips, P J; Schubert, C; Argue, D; Fisher, I; Furlong, E T; Foreman, W; Gray, J; Chalmers, A
2015-04-15
Septic-system discharges can be an important source of micropollutants (including pharmaceuticals and endocrine active compounds) to adjacent groundwater and surface water systems. Groundwater samples were collected from well networks tapping glacial till in New England (NE) and a sandy surficial aquifer in New York (NY) during one sampling round in 2011. The NE network assesses the effect of a single large septic system that receives discharge from an extended health care facility for the elderly. The NY network assesses the effect of many small septic systems used seasonally on a densely populated portion of Fire Island. The data collected from these two networks indicate that hydrogeologic and demographic factors affect micropollutant concentrations in these systems. The highest micropollutant concentrations from the NE network were present in samples collected from below the leach beds and in a well downgradient of the leach beds. Total concentrations for personal care/domestic use compounds, pharmaceutical compounds and plasticizer compounds generally ranged from 1 to over 20 μg/L in the NE network samples. High tris(2-butoxyethyl) phosphate plasticizer concentrations in wells beneath and downgradient of the leach beds (>20 μg/L) may reflect the presence of this compound in cleaning agents at the extended health-care facility. The highest micropollutant concentrations for the NY network were present in the shoreline wells and reflect groundwater that is most affected by septic system discharges. One of the shoreline wells had personal care/domestic use, pharmaceutical, and plasticizer concentrations ranging from 0.4 to 5.7 μg/L. Estradiol equivalency quotient concentrations were also highest in a shoreline well sample (3.1 ng/L). Most micropollutant concentrations increased with increasing specific conductance and total nitrogen concentrations for shoreline well samples. These findings suggest that septic systems serving institutional settings and densely populated areas in coastal settings may be locally important sources of micropollutants to adjacent aquifer and marine systems. Published by Elsevier B.V.
Alemán, Yoan; Vinken, Lore; Kourí, Vivian; Pérez, Lissette; Álvarez, Alina; Abrahantes, Yeissel; Fonseca, Carlos; Pérez, Jorge; Correa, Consuelo; Soto, Yudira; Schrooten, Yoeri; Vandamme, Anne-Mieke; Van Laethem, Kristel
2015-01-01
As commercial human immunodeficiency virus type 1 drug resistance assays are expensive, they are not commonly used in resource-limited settings. Hence, a more affordable in-house procedure was set up taking into account the specific epidemiological and economic circumstances of Cuba. The performance characteristics of the in-house assay were evaluated using clinical samples with various subtypes and resistance patterns. The lower limit of amplification was determined on dilution series of 20 clinical isolates and ranged from 84 to 529 RNA copies/mL. For the assessment of trueness, 14 clinical samples were analyzed and the ViroSeq HIV-1 Genotyping System v2.0 was used as the reference standard. The mean nucleotide sequence identity between the two assays was 98.7% ± 1.0. Additionally, 99.0% of the amino acids at drug resistance positions were identical. The sensitivity and specificity in detecting drug resistance mutations was respectively 94.1% and 99.5%. Only few discordances in drug resistance interpretation patterns were observed. The repeatability and reproducibility were evaluated using 10 clinical samples with 3 replicates per sample. The in-house test was very precise as nucleotide sequence identity among paired nucleotide sequences ranged from 98.7% to 99.9%. The acceptance criteria were met by the in-house test for all performance characteristics, demonstrating a high degree of accuracy. Subsequently, the applicability in routine clinical practice was evaluated on 380 plasma samples. The amplification success rate was 91% and good quality consensus sequences encoding the entire protease and the first 335 codons in reverse transcriptase could be obtained for 99% of the successful amplicons. The reagent cost per sample using the in-house procedure was around € 80 per genotyping attempt. Overall, the in-house assay provided good results, was feasible with equipment and reagents available in Cuba and was half as expensive as commercial assays.
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2015-10-01
Elemental carbon (EC) is an important constituent of atmospheric particulate matter because it absorbs solar radiation influencing climate and visibility and it adversely affects human health. The EC measured by thermal methods such as thermal-optical reflectance (TOR) is operationally defined as the carbon that volatilizes from quartz filter samples at elevated temperatures in the presence of oxygen. Here, methods are presented to accurately predict TOR EC using Fourier transform infrared (FT-IR) absorbance spectra from atmospheric particulate matter collected on polytetrafluoroethylene (PTFE or Teflon) filters. This method is similar to the procedure developed for OC in prior work (Dillner and Takahama, 2015). Transmittance FT-IR analysis is rapid, inexpensive and nondestructive to the PTFE filter samples which are routinely collected for mass and elemental analysis in monitoring networks. FT-IR absorbance spectra are obtained from 794 filter samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to collocated TOR EC measurements. The FT-IR spectra are divided into calibration and test sets. Two calibrations are developed: one developed from uniform distribution of samples across the EC mass range (Uniform EC) and one developed from a uniform distribution of Low EC mass samples (EC < 2.4 μg, Low Uniform EC). A hybrid approach which applies the Low EC calibration to Low EC samples and the Uniform EC calibration to all other samples is used to produce predictions for Low EC samples that have mean error on par with parallel TOR EC samples in the same mass range and an estimate of the minimum detection limit (MDL) that is on par with TOR EC MDL. For all samples, this hybrid approach leads to precise and accurate TOR EC predictions by FT-IR as indicated by high coefficient of determination (R2; 0.96), no bias (0.00 μg m-3, a concentration value based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.03 μg m-3) and reasonable normalized error (21 %). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. Only the normalized error is higher for the FT-IR EC measurements than for collocated TOR. FT-IR spectra are also divided into calibration and test sets by the ratios OC/EC and ammonium/EC to determine the impact of OC and ammonium on EC prediction. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR EC in IMPROVE network samples, providing complementary information to TOR OC predictions (Dillner and Takahama, 2015) and the organic functional group composition and organic matter estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
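The calibration strategy described in the preceding abstract, partial least squares regression of FT-IR absorbance spectra against collocated TOR EC with a separate calibration applied to low-EC samples, can be illustrated with a short sketch. The code below is not the authors' implementation: the spectra and EC values are synthetic, the number of latent variables is arbitrary, and scikit-learn's PLSRegression stands in for whatever PLS software was actually used; only the 2.4 μg low-EC threshold is taken from the abstract.

```python
# Minimal sketch of a hybrid low-EC / uniform-EC PLS calibration (assumptions noted above).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 794, 600                        # placeholder dimensions
X = rng.normal(size=(n_samples, n_wavenumbers))            # stand-in FT-IR absorbance spectra
ec = 2.0 + X[:, :5] @ np.array([0.5, 0.4, 0.3, 0.2, 0.1])  # stand-in TOR EC mass (ug)
ec = np.clip(ec + 0.1 * rng.normal(size=n_samples), 0.05, None)

idx = rng.permutation(n_samples)                           # split into calibration and test sets
cal, test = idx[:600], idx[600:]

uniform = PLSRegression(n_components=10).fit(X[cal], ec[cal])       # "Uniform EC" calibration
low_cal = cal[ec[cal] < 2.4]
low = PLSRegression(n_components=10).fit(X[low_cal], ec[low_cal])   # "Low Uniform EC" calibration

pred = uniform.predict(X[test]).ravel()                    # first pass with the uniform model
low_mask = pred < 2.4                                      # route apparently low-EC samples
if low_mask.any():
    pred[low_mask] = low.predict(X[test][low_mask]).ravel()

print(f"R2 = {r2_score(ec[test], pred):.2f}, MAE = {mean_absolute_error(ec[test], pred):.3f} ug")
```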
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2015-06-01
Elemental carbon (EC) is an important constituent of atmospheric particulate matter because it absorbs solar radiation influencing climate and visibility and it adversely affects human health. The EC measured by thermal methods such as Thermal-Optical Reflectance (TOR) is operationally defined as the carbon that volatilizes from quartz filter samples at elevated temperatures in the presence of oxygen. Here, methods are presented to accurately predict TOR EC using Fourier Transform Infrared (FT-IR) absorbance spectra from atmospheric particulate matter collected on polytetrafluoroethylene (PTFE or Teflon) filters. This method is similar to the procedure tested and developed for OC in prior work (Dillner and Takahama, 2015). Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filter samples which are routinely collected for mass and elemental analysis in monitoring networks. FT-IR absorbance spectra are obtained from 794 filter samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to collocated TOR EC measurements. The FT-IR spectra are divided into calibration and test sets. Two calibrations are developed: one developed from a uniform distribution of samples across the EC mass range (Uniform EC) and one developed from a uniform distribution of low EC mass samples (EC < 2.4 μg, Low Uniform EC). A hybrid approach which applies the low EC calibration to low EC samples and the Uniform EC calibration to all other samples is used to produce predictions for low EC samples that have mean error on par with parallel TOR EC samples in the same mass range and an estimate of the minimum detection limit (MDL) that is on par with the TOR EC MDL. For all samples, this hybrid approach leads to precise and accurate TOR EC predictions by FT-IR as indicated by high coefficient of determination (R2; 0.96), no bias (0.00 μg m-3, a concentration value based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.03 μg m-3) and reasonable normalized error (21 %). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. Only the normalized error is higher for the FT-IR EC measurements than for collocated TOR. FT-IR spectra are also divided into calibration and test sets by the ratios OC/EC and ammonium/EC to determine the impact of OC and ammonium on EC prediction. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR EC in IMPROVE network samples, providing complementary information to TOR OC predictions (Dillner and Takahama, 2015) and the organic functional group composition and organic matter (OM) estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
Rudel, Ruthann A.; Melly, Steven J.; Geno, Paul W.; Sun, Gang; Brody, Julia G.
1998-01-01
As part of a larger effort to characterize the impacts to Cape Cod drinking water supplies from on-site wastewater disposal, we developed two analytical methods using HPLC and GC/MS for a range of compounds identified as endocrine-disrupting chemicals (EDCs), including the nonionic surfactants alkylphenol polyethoxylates (APEOs) and their degradation products. We analyzed samples for nonylphenol, octylphenol, and their ethoxylates up to the hexaethoxylate using an HPLC method, with detection limits ranging from 2 to 6 μg/L. A set of phenolic compounds including bisphenol A and nonylphenol were derivatized and analyzed by GC/MS with detection limits from 0.001 to 0.02 μg/L. Total APEOs in untreated wastewater and septage samples ranged from 1350 to 11 000 μg/L by the HPLC method. Nonylphenol was detected in all septage samples at concentrations above 1000 μg/L. Phenylphenol and bisphenol A were detected in septage and wastewater at about 1 μg/L. In groundwater downgradient of an infiltration bed for secondary treated effluent, nonyl/octylphenol and ethoxylates were present at about 30 μg/L. Bisphenol A, nonylphenol monoethoxycarboxylate, and nonyl/octylphenol tetraethoxylate were detected in some drinking water wells at concentrations ranging from below the quantitation limit to 32.9 μg/L. Results suggest that septic systems may be a significant source of APEOs to groundwater.
NASA Astrophysics Data System (ADS)
Toro, C.; Jobson, B. T.; Haselbach, L.; Shen, S.; Chung, S. H.
2016-08-01
This work reports uptake coefficients and by-product yields of ozone precursors onto two photocatalytic paving materials (asphalt and concrete) treated with a commercial TiO2 surface application product. The experimental approach used a continuously stirred tank reactor (CSTR) and allowed for testing large samples with the same surface morphology encountered with real urban surfaces. The measured uptake coefficient (γgeo) and surface resistances are useful for parametrizing dry deposition velocities in air quality model evaluation of the impact of photoactive surfaces on urban air chemistry. At 46% relative humidity, the surface resistance to NO uptake was ∼1 s cm-1 for concrete and ∼2 s cm-1 for a freshly coated older roadway asphalt sample. HONO and NO2 were detected as side products from NO uptake to asphalt, with NO2 molar yields on the order of 20% and HONO molar yields ranging between 14 and 33%. For concrete samples, the NO2 molar yields increased with the increase of water vapor, ranging from 1% to 35% and HONO was not detected as a by-product. Uptake of monoaromatic VOCs to the asphalt sample set displayed a dependence on the compound vapor pressure, and was influenced by competitive adsorption from less volatile VOCs. Formaldehyde and acetaldehyde were detected as byproducts, with molar yields ranging from 5 to 32%.
Evaluation of the Cardiac Depression Visual Analogue Scale in a medical and non-medical sample.
Di Benedetto, Mirella; Sheehan, Matthew
2014-01-01
Comorbid depression and medical illness is associated with a number of adverse health outcomes such as lower medication adherence and higher rates of subsequent mortality. Reliable and valid psychological measures capable of detecting a range of depressive symptoms found in medical settings are needed. The Cardiac Depression Visual Analogue Scale (CDVAS) is a recently developed, brief six-item measure originally designed to assess the range and severity of depressive symptoms within a cardiac population. The current study aimed to further investigate the psychometric properties of the CDVAS in a general and medical sample. The sample consisted of 117 participants, whose mean age was 40.0 years (SD = 19.0, range 18-84). Participants completed the CDVAS, the Cardiac Depression Scale (CDS), the Depression Anxiety Stress Scales (DASS) and a demographic and health questionnaire. The CDVAS was found to have adequate internal reliability (α = .76), strong concurrent validity with the CDS (r = .89) and the depression sub-scale of the DASS (r = .70), strong discriminant validity and strong predictive validity. The principal components analysis revealed that the CDVAS measured only one component, providing further support for the construct validity of the scale. Results of the current study indicate that the CDVAS is a short, simple, valid and reliable measure of depressive symptoms suitable for use in a general and medical sample.
Wang, Xiaoming; Rytting, Erik; Abdelrahman, Doaa R.; Nanovskaya, Tatiana N.; Hankins, Gary D.V.; Ahmed, Mahmoud S.
2013-01-01
A liquid chromatography-electrospray ionization mass spectrometry method for the quantitative determination of famotidine in human urine, maternal plasma and umbilical cord plasma was developed and validated. The plasma samples were alkalized with ammonium hydroxide and extracted twice with ethyl acetate. The extraction recovery of famotidine in maternal and umbilical cord plasma ranged from 53% to 64% and 72% to 79%, respectively. Urine samples were directly diluted with the initial mobile phase and then injected into the HPLC system. Chromatographic separation of famotidine was achieved by using a Phenomenex Synergi™ Hydro-RP™ column with a gradient elution of acetonitrile and 10 mM ammonium acetate aqueous solution (pH 8.3, adjusted with ammonium hydroxide). Mass spectrometric detection of famotidine was set in the positive mode and used a selected ion monitoring method. Carbon-13-labeled famotidine was used as the internal standard. The calibration curves were linear (r2 > 0.99) in the concentration ranges of 0.631-252 ng/mL for umbilical and maternal plasma samples, and of 0.075-30.0 μg/mL for urine samples. The relative deviation of the method was less than 14% for intra- and inter-day assays, and the accuracy ranged between 93% and 110%. The matrix effect of famotidine in human urine, maternal and umbilical cord plasma was less than 17%. PMID:23401067
Fast, High Resolution, and Wide Modulus Range Nanomechanical Mapping with Bimodal Tapping Mode.
Kocun, Marta; Labuda, Aleksander; Meinhold, Waiman; Revenko, Irène; Proksch, Roger
2017-10-24
Tapping mode atomic force microscopy (AFM), also known as amplitude modulated (AM) or AC mode, is a proven, reliable, and gentle imaging mode with widespread applications. Over the several decades that tapping mode has been in use, quantification of tip-sample mechanical properties such as stiffness has remained elusive. Bimodal tapping mode keeps the advantages of single-frequency tapping mode while extending the technique by driving and measuring an additional resonant mode of the cantilever. The simultaneously measured observables of this additional resonance provide the additional information necessary to extract quantitative nanomechanical information about the tip-sample mechanics. Specifically, driving the higher cantilever resonance in a frequency modulated (FM) mode allows direct measurement of the tip-sample interaction stiffness and, with appropriate modeling, the set point-independent local elastic modulus. Here we discuss the advantages of bimodal tapping, coined AM-FM imaging, for modulus mapping. Results are presented for samples over a wide modulus range, from a compliant gel (∼100 MPa) to stiff materials (∼100 GPa), with the same type of cantilever. We also show high-resolution (subnanometer) stiffness mapping of individual molecules in semicrystalline polymers and of DNA in fluid. Combined with the ability to remain quantitative even at line scan rates of nearly 40 Hz, the results demonstrate the versatility of AM-FM imaging for nanomechanical characterization in a wide range of applications.
Brain, Matthew; Anderson, Mike; Parkes, Scott; Fowler, Peter
2012-12-01
To describe magnesium flux and serum concentrations in ICU patients receiving continuous venovenous haemodiafiltration (CVVHDF). Samples were collected from 22 CVVHDF circuits using citrate anticoagulation solutions (Prismocitrate 10/2 and Prism0cal) and from 26 circuits using Hemosol B0 and heparin anticoagulation. The CVVHDF prescription, magnesium supplementation and anticoagulation choice were made by the treating intensivist. We analysed 334 sample sets consisting of arterial, prefilter and postfilter blood and effluent. Magnesium loss was calculated from an equation for conservation of mass, and arterial magnesium concentration was described by an equation for exponential decay. Using flow rates typical of adults receiving CVVHDF, we determined a median half-life for arterial magnesium concentration to decay to a new steady state of 4.73 hours (interquartile range [IQR], 3.73-7.32 hours). Median arterial magnesium concentration was 0.88 mmol/L (IQR, 0.83-0.97 mmol/L) in the heparin group and 0.79 mmol/L (IQR, 0.69-0.91 mmol/L) in the citrate group. Arterial magnesium concentrations fell below the reference range regularly in the citrate group and, when low, there was magnesium flux from dialysate to patient. Magnesium loss was greater in patients receiving citrate. Exponential decline in magnesium concentrations was sufficiently rapid that subtherapeutic serum magnesium concentrations may occur well before detection when once-daily sampling was used. Measurements should be interpreted with regard to timing of magnesium infusions. We suggest that continuous renal replacement therapy fluids with higher magnesium concentrations be introduced in the critical care setting.
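As a worked illustration of the exponential-decay description used above, the sketch below fits a single-exponential approach to a new steady state, C(t) = C_ss + (C_0 - C_ss)·exp(-kt), and reports the half-life ln(2)/k. The time points and magnesium concentrations are invented for illustration and are not data from the study.

```python
# Sketch: fit an exponential approach to steady state and report the half-life.
# All data below are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

t_hours = np.array([0, 1, 2, 4, 6, 8, 12, 18, 24], dtype=float)
mg_mmol_l = np.array([1.05, 0.99, 0.95, 0.88, 0.84, 0.82, 0.80, 0.79, 0.79])

def decay(t, c_ss, c0, k):
    # single-exponential approach to a new steady-state concentration c_ss
    return c_ss + (c0 - c_ss) * np.exp(-k * t)

(c_ss, c0, k), _ = curve_fit(decay, t_hours, mg_mmol_l, p0=(0.8, 1.0, 0.2))
print(f"steady state ~ {c_ss:.2f} mmol/L, half-life ~ {np.log(2) / k:.1f} h")
```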
NASA Astrophysics Data System (ADS)
András Simon, Károly; Puskás, Sándor; Ricza, Tamás; Bozóki, Zoltán
2017-04-01
Improvement of natural gas extraction is one of the constant challenges of the gas industry. Gas transport through the reservoir material is driven by two processes: conventional diffusion driven by the concentration gradient, and Darcy flow driven by the pressure difference across the material. Their separate contributions and their interrelation are largely influenced by the intrinsic structure of the sample, so their measurement can yield important information. There are multiple methods for measuring these parameters (Sander et al., 2017). We present a measurement set-up which uses photoacoustic spectroscopy for the detection of the transported components. Photoacoustic spectroscopy is a highly sensitive and selective measurement method (Bozóki et al., 2011) and can be used to measure concentrations over 4-5 orders of magnitude. Furthermore, it can be operated fully automatically, has a response time in the second range, and has outstanding long-term stability. This allows us to perform measurements on a wide variety of samples, either in static or in dynamic mode, under different conditions and with various analytes. Furthermore, transport of several gas components can be measured simultaneously. Our set-up facilitates measurements over a wide pressure, temperature and concentration range. References: Bozóki Z., Pogány A., Szabó G. (2011), Applied Spectroscopy Reviews 46, 1-37; Sander R., Pan Z. and Connell L. D. (2017), Journal of Natural Gas Science and Engineering 37, 248-279.
Cassidy, Richard J; Zhang, Xinyan; Patel, Pretesh R; Shelton, Joseph W; Escott, Chase E; Sica, Gabriel L; Rossi, Michael R; Hill, Charles E; Steuer, Conor E; Pillai, Rathi N; Ramalingam, Suresh S; Owonikoko, Taofeek K; Behera, Madhusmita; Force, Seth D; Fernandez, Felix G; Curran, Walter J; Higgins, Kristin A
2017-10-01
Genetic aberrations are well characterized in lung adenocarcinomas (LACs) and clinical outcomes have been influenced by targeted therapies in the advanced setting. Stereotactic body radiotherapy (SBRT) is the standard-of-care therapy for patients with nonoperable, early-stage LAC, but to the authors' knowledge, no information is available regarding the impact of genomic changes in these patients. The current study sought to determine the frequency and clinical impact of genetic aberrations in this population. Under an Institutional Review Board-approved protocol, the records of 242 consecutive patients with early-stage lung cancers were reviewed; inclusion criteria included LAC histology with an adequate tumor sample for the successful use of next-generation sequencing and fluorescence in situ hybridization testing. Univariate analysis was performed to identify factors associated with clinical outcomes. LAC samples from 98 of the 242 patients were reviewed (40.5%), of whom 45 patients (46.0%) had genetic testing. The following mutations were noted: KRAS in 20.0% of samples, BRAF in 2.2% of samples, SMAD family member 4 (SMAD4) in 4.4% of samples, epidermal growth factor receptor (EGFR) in 15.6% of samples, STK1 in 2.2% of samples, tumor protein 53 (TP53) in 15.6% of samples, and phosphatase and tensin homolog (PTEN) in 2.2% of samples. The following gene rearrangements were observed: anaplastic lymphoma kinase (ALK) in 8.9% of samples, RET in 2.2% of samples, and MET amplification in 17.8% of samples. The median total delivered SBRT dose was 50 grays (range, 48-60 grays) over a median of 5 fractions (range, 3-8 fractions). The KRAS mutation was associated with worse local control (odds ratio [OR], 3.64; P<.05). MET amplification was associated with worse regional (OR, 4.64; P<.05) and distant (OR, 3.73; P<.05) disease control. To the authors' knowledge, the current series is the first to quantify genetic mutations and their association with clinical outcomes in patients with early-stage LAC treated with SBRT. KRAS mutations were associated with worse local control and MET amplification was associated with worse regional and distant disease control, findings that need to be validated in a prospective setting. Cancer 2017;123:3681-3690. © 2017 American Cancer Society.
Vongkamjan, Kitiya; Switt, Andrea Moreno; den Bakker, Henk C.; Fortes, Esther D.
2012-01-01
Since the food-borne pathogen Listeria monocytogenes is common in dairy farm environments, it is likely that phages infecting this bacterium (“listeriaphages”) are abundant on dairy farms. To better understand the ecology and diversity of listeriaphages on dairy farms and to develop a diverse phage collection for further studies, silage samples collected on two dairy farms were screened for L. monocytogenes and listeriaphages. While only 4.5% of silage samples tested positive for L. monocytogenes, 47.8% of samples were positive for listeriaphages, containing up to >1.5 × 10^4 PFU/g. Host range characterization of the 114 phage isolates obtained, with a reference set of 13 L. monocytogenes strains representing the nine major serotypes and four lineages, revealed considerable host range diversity; phage isolates were classified into nine lysis groups. While one serotype 3c strain was not lysed by any phage isolates, serotype 4 strains were highly susceptible to phages and were lysed by 63.2 to 88.6% of phages tested. Overall, 12.3% of phage isolates showed a narrow host range (lysing 1 to 5 strains), while 28.9% of phages represented broad host range (lysing ≥11 strains). Genome sizes of the phage isolates were estimated to range from approximately 26 to 140 kb. The extensive host range and genomic diversity of phages observed here suggest an important role of phages in the ecology of L. monocytogenes on dairy farms. In addition, the phage collection developed here has the potential to facilitate further development of phage-based biocontrol strategies (e.g., in silage) and other phage-based tools. PMID:23042180
Grimbergen, M C M; van Swol, C F P; Kendall, C; Verdaasdonk, R M; Stone, N; Bosch, J L H R
2010-01-01
The overall quality of Raman spectra in the near-infrared region, where biological samples are often studied, has benefited from various improvements to optical instrumentation over the past decade. However, obtaining ample spectral quality for analysis is still challenging due to device requirements and short integration times required for (in vivo) clinical applications of Raman spectroscopy. Multivariate analytical methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), are routinely applied to Raman spectral datasets to develop classification models. Data compression is necessary prior to discriminant analysis to prevent or decrease the degree of over-fitting. The logical threshold for the selection of principal components (PCs) to be used in discriminant analysis is likely to be at a point before the PCs begin to introduce equivalent signal and noise and, hence, include no additional value. Assessment of the signal-to-noise ratio (SNR) at a certain peak or over a specific spectral region will depend on the sample measured. Therefore, the mean SNR over the whole spectral region (SNR(msr)) is determined in the original spectrum as well as for spectra reconstructed from an increasing number of principal components. This paper introduces a method of assessing the influence of signal and noise from individual PC loads and indicates a method of selection of PCs for LDA. To evaluate this method, two data sets with different SNRs were used. The sets were obtained with the same Raman system and the same measurement parameters on bladder tissue collected during white light cystoscopy (set A) and fluorescence-guided cystoscopy (set B). This method shows that the mean SNR over the spectral range in the original Raman spectra of these two data sets is related to the signal and noise contribution of principal component loads. The difference in mean SNR over the spectral range can also be appreciated since fewer principal components can reliably be used in the low SNR data set (set B) compared to the high SNR data set (set A). Despite the fact that no definitive threshold could be found, this method may help to determine the cutoff for the number of principal components used in discriminant analysis. Future analysis of a selection of spectral databases using this technique will allow optimum thresholds to be selected for different applications and spectral data quality levels.
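One way to make the PC-selection procedure above concrete is sketched below: spectra are reconstructed from an increasing number of principal components and a mean signal-to-noise ratio over the spectral range is computed for each reconstruction. The data are synthetic, and the SNR definition used here (mean absolute reconstructed signal divided by the standard deviation of the reconstruction residual) is one plausible choice, not necessarily the definition used by the authors.

```python
# Sketch: mean SNR over the spectral range for PCA reconstructions with increasing numbers of PCs.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_spectra, n_points = 200, 800
clean = np.sin(np.linspace(0, 6 * np.pi, n_points)) * rng.uniform(0.5, 1.5, (n_spectra, 1))
spectra = clean + 0.2 * rng.normal(size=(n_spectra, n_points))    # noisy Raman-like spectra

pca = PCA(n_components=20).fit(spectra)
scores = pca.transform(spectra)

for n_pc in (1, 2, 5, 10, 20):
    recon = scores[:, :n_pc] @ pca.components_[:n_pc] + pca.mean_  # reconstruction from n_pc PCs
    residual = spectra - recon
    snr_msr = np.mean(np.abs(recon), axis=1) / np.std(residual, axis=1)
    print(f"{n_pc:2d} PCs: mean SNR over spectral range = {snr_msr.mean():.2f}")
```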
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michiels, Steven, E-mail: michiels.steven@kuleuven
Purpose: 3D printing technology is investigated for the purpose of patient immobilization during proton therapy. It potentially enables a merge of patient immobilization, bolus range shifting, and other functions into one single patient-specific structure. In this first step, a set of 3D printed materials is characterized in detail, in terms of structural and radiological properties, elemental composition, directional dependence, and structural changes induced by radiation damage. These data will serve as inputs for the design of 3D printed immobilization structure prototypes. Methods: Using four different 3D printing techniques, in total eight materials were subjected to testing. Samples with a nominal dimension of 20 × 20 × 80 mm3 were 3D printed. The geometrical printing accuracy of each test sample was measured with a dial gage. To assess the mechanical response of the samples, standardized compression tests were performed to determine the Young's modulus. To investigate the effect of radiation on the mechanical response, the mechanical tests were performed both prior to and after the administration of clinically relevant dose levels (70 Gy), multiplied by a safety factor of 1.4. Dual energy computed tomography (DECT) methods were used to calculate the relative electron density to water (ρe), the effective atomic number (Zeff), and the proton stopping power ratio to water (SPR). In order to validate the DECT-based calculation of radiological properties, beam measurements were performed on the 3D printed samples as well. Photon irradiations were performed to measure the photon linear attenuation coefficients, while proton irradiations were performed to measure the proton range shift of the samples. The directional dependence of these properties was investigated by performing the irradiations for different orientations of the samples. Results: The printed test objects showed reduced geometric printing accuracy for 2 materials (deviation > 0.25 mm). Compression tests yielded Young's moduli ranging from 0.6 to 2940 MPa. No deterioration in the mechanical response was observed after exposure of the samples to 100 Gy in a therapeutic MV photon beam. The DECT-based characterization yielded Zeff ranging from 5.91 to 10.43. The SPR and ρe both ranged from 0.6 to 1.22. The measured photon attenuation coefficients at clinical energies scaled linearly with ρe. Good agreement was seen between the DECT-estimated SPR and the measured range shift, except for the samples with higher Zeff. As opposed to the photon attenuation, the proton range shifting appeared to be printing-orientation dependent for certain materials. Conclusions: In this study, the first step toward 3D printed, multifunctional immobilization was performed, by going through a candidate clinical workflow for the first time: from the material printing to DECT characterization with a verification through beam measurements. Besides a proof of concept for beam modification, the mechanical response of printed materials was also investigated to assess their capabilities for positioning functionality. For the studied set of printing techniques and materials, a wide variety of mechanical and radiological properties can be selected from for the intended purpose. Moreover, the elaborated hybrid DECT methods aid in performing in-house quality assurance of 3D printed components, as these methods enable the estimation of the radiological properties relevant for use in radiation therapy.
Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks.
Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio
2008-11-24
Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.
Nicolaisen, Mogens; West, Jonathan S; Sapkota, Rumakanta; Canning, Gail G M; Schoen, Cor; Justesen, Annemarie F
2017-01-01
Information on the diversity of fungal spores in air is limited, and also the content of airborne spores of fungal plant pathogens is understudied. In the present study, a total of 152 air samples were taken from rooftops at urban settings in Slagelse, DK, Wageningen, NL, and Rothamsted, UK, together with 41 samples from above oilseed rape fields in Rothamsted. Samples were taken during 10-day periods in spring and autumn, each sample representing 1 day of sampling. The fungal content of samples was analyzed by metabarcoding of the fungal internal transcribed spacer 1 (ITS1) and by qPCR for specific fungi. The metabarcoding results demonstrated that season had significant effects on airborne fungal communities. In contrast, location did not have strong effects on the communities, even though locations were separated by up to 900 km. Also, a number of plant pathogens had strikingly similar patterns of abundance at the three locations. Rooftop samples were more diverse than samples taken above fields, probably reflecting greater mixing of air from a range of microenvironments for the rooftop sites. Pathogens that were known to be present in the crop were also found in air samples taken above the field. This paper is one of the first detailed studies of fungal composition in air with the focus on plant pathogens and shows that it is possible to detect a range of pathogens in rooftop air samplers using metabarcoding.
A core handling device for the Mars Sample Return Mission
NASA Technical Reports Server (NTRS)
Gwynne, Owen
1989-01-01
A core handling device for use on Mars is being designed. To provide a context for the design study, it was assumed that a Mars Rover/Sample Return (MRSR) Mission would have the following characteristics: a year or more in length; visits by the rover to 50 or more sites; 100 or more meter-long cores being drilled by the rover; and the capability of returning about 5 kg of Mars regolith to Earth. These characteristics lead to the belief that in order to bring back a variegated set of samples that can address the range of scientific objectives for an MRSR mission to Mars, there needs to be considerable analysis done on board the rover. Furthermore, the discrepancy between the amount of sample gathered and the amount to be returned suggests that there needs to be some method of choosing the optimal set of samples. This type of analysis will require pristine material, unaltered by the drilling process. Since the core drill thermally and mechanically alters the outer diameter (about 10 pct) of the core sample, this outer area cannot be used. The primary function of the core handling device is to extract subsamples from the core and to position these subsamples, and the core itself if needed, with respect to the various analytical instruments that can be used to perform these analyses.
Sanitation in self-service automatic washers.
Buford, L E; Pickett, M S; Hartman, P A
1977-01-01
The potential for microbial transfer in self-service laundry washing machines was investigated by obtaining swab samples from the interior surfaces of commercial machines and wash water samples before and after disinfectant treatment. Three disinfectants (chlorine, a quaternary ammonium product, and a phenolic disinfectant) were used. Four self-service laundry facilities were sampled, with 10 replications of the procedure for each treatment at each location. Although washers were set on a warm-water setting, the wash water temperatures ranged from 24 to 51 degrees C. The quaternary ammonium product seemed most effective, averaging a 97% microbial kill; chlorine was the second most effective, with a 58% kill, and the phenolic disinfectant was least effective, with only a 25% kill. The efficacies of the chlorine and phenolic disinfectants were reduced at low water temperatures commonly experienced in self-service laundries. Interfamily cross-contamination in self-service facilities is a potential public health problem, which is aggravated by environmental conditions, such as water temperature and the practices of the previous users of the equipment. Procedural changes in laundering are recommended, including the use of a disinfectant to maintain adequate levels of sanitation. PMID:13714
[Determination of Carbaryl in Rice by Using FT Far-IR and THz-TDS Techniques].
Sun, Tong; Zhang, Zhuo-yong; Xiang, Yu-hong; Zhu, Ruo-hua
2016-02-01
Determination of carbaryl in rice by using Fourier transform far-infrared (FT-Far-IR) and terahertz time-domain spectroscopy (THz-TDS) combined with chemometrics was studied, and the spectral characteristics of carbaryl in the terahertz region were investigated. Samples were prepared by mixing carbaryl at different amounts with rice powder, and then a 13 mm diameter, about 1 mm thick pellet with polyethylene (PE) as matrix was compressed under a pressure of 5-7 tons. Terahertz time-domain spectra of the pellets were measured at 0.5-1.5 THz, and the absorption spectra at 1.6-6.3 THz were acquired with Fourier transform far-IR spectroscopy. The method of sample preparation is so simple that it does not need separation and enrichment. Absorption peaks in the frequency range of 1.8-6.3 THz were found at 3.2 and 5.2 THz by Far-IR. There are several weak absorption peaks in the range of 0.5-1.5 THz by THz-TDS. These two kinds of characteristic absorption spectra were randomly divided into calibration and prediction sets by leave-N-out cross-validation, respectively. Finally, the partial least squares regression (PLSR) method was used to establish two quantitative analysis models. The root mean square error of cross-validation (RMSECV), the root mean square error of prediction (RMSEP) and the correlation coefficient of the prediction (Rv) are used as the basis for evaluating model performance. For Rv, a higher value is better; for RMSECV and RMSEP, lower is better. The obtained results demonstrated that the predictive accuracy of the two models with the PLSR method was satisfactory. For the FT-Far-IR model, the correlation between actual and predicted values of the prediction samples (Rv) was 0.99, the root mean square error of the prediction set (RMSEP) was 0.0086, and that of the calibration set (RMSECV) was 0.0077. For the THz-TDS model, Rv was 0.98, RMSEP was 0.0044, and RMSECV was 0.0025. Results proved that FT-Far-IR and THz-TDS can be feasible tools for the quantitative determination of carbaryl in rice. This paper provides a new method for the quantitative determination of pesticides in other grain samples.
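A rough sketch of the PLSR modelling with cross-validated and external error metrics described above is given below. The spectra and carbaryl contents are synthetic, the number of latent variables and the 10-fold split standing in for leave-N-out cross-validation are assumptions, and the code is not the authors' implementation.

```python
# Sketch: PLSR calibration with cross-validated RMSECV and external RMSEP / Rv (synthetic data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 300))                                 # stand-in THz absorbance spectra
y = X[:, 10] - 0.5 * X[:, 50] + 0.05 * rng.normal(size=80)     # stand-in carbaryl content

cal, pred = np.arange(60), np.arange(60, 80)                   # calibration / prediction split
pls = PLSRegression(n_components=5)

# cross-validated predictions on the calibration set (10-fold as a stand-in for leave-N-out)
y_cv = cross_val_predict(pls, X[cal], y[cal],
                         cv=KFold(n_splits=10, shuffle=True, random_state=0)).ravel()
rmsecv = np.sqrt(np.mean((y[cal] - y_cv) ** 2))

pls.fit(X[cal], y[cal])                                        # final model on the calibration set
y_hat = pls.predict(X[pred]).ravel()
rmsep = np.sqrt(np.mean((y[pred] - y_hat) ** 2))
rv = np.corrcoef(y[pred], y_hat)[0, 1]
print(f"RMSECV={rmsecv:.3f}  RMSEP={rmsep:.3f}  Rv={rv:.2f}")
```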
Bronze, Michelle; Wallis, Carole L.; Stuyver, Lieven; Steegen, Kim; Balinda, Sheila; Kityo, Cissy; Stevens, Wendy; Rinke de Wit, Tobias F.; Schuurman, Rob
2013-01-01
In resource-limited settings (RLS), reverse transcriptase (RT) inhibitors form the backbone of first-line treatment regimens. We have developed a simplified HIV-1 drug resistance genotyping assay targeting the region of RT harboring all major RT inhibitor resistance mutation positions, thus providing all relevant susceptibility data for first-line failures, coupled with minimal cost and labor. The assay comprises a one-step RT-PCR amplification reaction, followed by sequencing using one forward and one reverse primer, generating double-stranded coverage of RT amino acids (aa) 41 to 238. The assay was optimized for all major HIV-1 group M subtypes in plasma and dried blood spot (DBS) samples using a panel of reference viruses for HIV-1 subtypes A to D, F to H, and circulating recombinant form 01_AE (CRF01_AE) and applied to 212 clinical plasma samples and 25 DBS samples from HIV-1-infected individuals from Africa and Europe. The assay was subsequently transferred to Uganda and applied locally on clinical plasma samples. All major HIV-1 subtypes could be detected with an analytical sensitivity of 5.00E+3 RNA copies/ml for plasma and DBS. Application of the assay on 212 clinical samples from African subjects comprising subtypes A to D, F to H (rare), CRF01_AE, and CRF02_AG at a viral load (VL) range of 6.71E+2 to 1.00E+7 (median, 1.48E+5) RNA copies/ml was 94.8% (n = 201) successful. Application on clinical samples in Uganda demonstrated a comparable success rate. Genotyping of clinical DBS samples, all subtype C with a VL range of 1.02E+3 to 4.49E+5 (median, 1.42E+4) RNA copies/ml, was 84.0% successful. The described assay greatly reduces hands-on time and the costs required for genotyping and is ideal for use in RLS, as demonstrated in a reference laboratory in Uganda and its successful application on DBS samples. PMID:23536405
McDermott, A; Visentin, G; De Marchi, M; Berry, D P; Fenelon, M A; O'Connor, P M; Kenny, O A; McParland, S
2016-04-01
The aim of this study was to evaluate the effectiveness of mid-infrared spectroscopy in predicting milk protein and free amino acid (FAA) composition in bovine milk. Milk samples were collected from 7 Irish research herds and represented cows from a range of breeds, parities, and stages of lactation. Mid-infrared spectral data in the range of 900 to 5,000 cm(-1) were available for 730 milk samples; gold standard methods were used to quantify individual protein fractions and FAA of these samples with a view to predicting these gold standard protein fractions and FAA levels with available mid-infrared spectroscopy data. Separate prediction equations were developed for each trait using partial least squares regression; accuracy of prediction was assessed using both cross validation on a calibration data set (n=400 to 591 samples) and external validation on an independent data set (n=143 to 294 samples). The accuracy of prediction in external validation was the same irrespective of whether undertaken on the entire external validation data set or just within the Holstein-Friesian breed. The strongest coefficients of correlation obtained for protein fractions in external validation were 0.74, 0.69, and 0.67 for total casein, total β-lactoglobulin, and β-casein, respectively. Total proteins (i.e., total casein, total whey, and total lactoglobulin) were predicted with greater accuracy than their respective component traits; prediction accuracy using the infrared spectrum was superior to prediction using just milk protein concentration. Weak to moderate prediction accuracies were observed for FAA. The greatest coefficient of correlation in both cross validation and external validation was for Gly (0.75), indicating a moderate accuracy of prediction. Overall, the FAA prediction models overpredicted the gold standard values. Near-unity correlations existed between total casein and β-casein irrespective of whether the traits were based on the gold standard (0.92) or mid-infrared spectroscopy predictions (0.95). Weaker correlations were observed among FAA than among the protein fractions. Pearson correlations between gold standard protein fractions and the milk processing characteristics of rennet coagulation time, curd firming time, curd firmness, heat coagulating time, pH, and casein micelle size were weak to moderate and ranged from -0.48 (protein and pH) to 0.50 (total casein and a30). Pearson correlations between gold standard FAA and these milk processing characteristics were also weak to moderate and ranged from -0.60 (Val and pH) to 0.49 (Val and K20). Results from this study indicate that mid-infrared spectroscopy has the potential to predict protein fractions and some FAA in milk at a population level. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Using Remote Sensing to Determine the Spatial Scales of Estuaries
NASA Astrophysics Data System (ADS)
Davis, C. O.; Tufillaro, N.; Nahorniak, J.
2016-02-01
One challenge facing Earth system science is to understand and quantify the complexity of rivers, estuaries, and coastal zone regions. Earlier studies using data from airborne hyperspectral imagers (Bissett et al., 2004, Davis et al., 2007) demonstrated from a very limited data set that the spatial scales of the coastal ocean could be resolved with spatial sampling of 100 m Ground Sample Distance (GSD) or better. To develop a much larger data set (Aurin et al., 2013) used MODIS 250 m data for a wide range of coastal regions. Their conclusion was that farther offshore 500 m GSD was adequate to resolve large river plume features while nearshore regions (a few kilometers from the coast) needed higher spatial resolution data not available from MODIS. Building on our airborne experience, the Hyperspectral Imager for the Coastal Ocean (HICO, Lucke et al., 2011) was designed to provide hyperspectral data for the coastal ocean at 100 m GSD. HICO operated on the International Space Station for 5 years and collected over 10,000 scenes of the coastal ocean and other regions around the world. Here we analyze HICO data from an example set of major river delta regions to assess the spatial scales of variability in those systems. In one system, the San Francisco Bay and Delta, we also analyze Landsat 8 OLI data at 30 m and 15 m to validate the 100 m GSD sampling scale for the Bay and assess spatial sampling needed as you move up river.
2012-01-01
Background While research on the impact of global climate change (GCC) on ecosystems and species is flourishing, a fundamental component of biodiversity – molecular variation – has not yet received its due attention in such studies. Here we present a methodological framework for projecting the loss of intraspecific genetic diversity due to GCC. Methods The framework consists of multiple steps that combines 1) hierarchical genetic clustering methods to define comparable units of inference, 2) species accumulation curves (SAC) to infer sampling completeness, and 3) species distribution modelling (SDM) to project the genetic diversity loss under GCC. We suggest procedures for existing data sets as well as specifically designed studies. We illustrate the approach with two worked examples from a land snail (Trochulus villosus) and a caddisfly (Smicridea (S.) mucronata). Results Sampling completeness was diagnosed on the third coarsest haplotype clade level for T. villosus and the second coarsest for S. mucronata. For both species, a substantial species range loss was projected under the chosen climate scenario. However, despite substantial differences in data set quality concerning spatial sampling and sampling depth, no loss of haplotype clades due to GCC was predicted for either species. Conclusions The suggested approach presents a feasible method to tap the rich resources of existing phylogeographic data sets and guide the design and analysis of studies explicitly designed to estimate the impact of GCC on a currently still neglected level of biodiversity. PMID:23176586
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, which can be used for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
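The two-step simulation procedure summarized above, first coloring a white Gaussian sequence to a target power spectrum and then mapping it through a single inverse transform to the target marginal distribution, can be sketched in a few lines. The low-pass spectral shape and the gamma target distribution below are arbitrary stand-ins chosen for illustration, not the paper's examples.

```python
# Sketch of spectral-representation simulation: color white Gaussian noise to an assumed target
# spectrum, then apply a single inverse-transform step to reach an assumed target marginal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2 ** 14
white = rng.standard_normal(n)

freqs = np.fft.rfftfreq(n, d=1.0)
target_amp = 1.0 / np.sqrt(1.0 + (freqs / 0.01) ** 2)          # hypothetical low-pass spectral shape
colored = np.fft.irfft(np.fft.rfft(white) * target_amp, n)
colored -= colored.mean()
colored /= colored.std()                                       # colored, zero-mean, unit-variance Gaussian

u = stats.norm.cdf(colored)                                    # Gaussian CDF -> uniform marginal
samples = stats.gamma.ppf(u, a=2.0, scale=1.5)                 # inverse CDF of the desired distribution
print(f"mean={samples.mean():.2f}, std={samples.std():.2f}")
```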
Inci, Fatih; Filippini, Chiara; Baday, Murat; Ozen, Mehmet Ozgun; Calamak, Semih; Durmus, Naside Gozde; Wang, ShuQi; Hanhauser, Emily; Hobbs, Kristen S; Juillard, Franceline; Kuang, Ping Ping; Vetter, Michael L; Carocci, Margot; Yamamoto, Hidemi S; Takagi, Yuko; Yildiz, Umit Hakan; Akin, Demir; Wesemann, Duane R; Singhal, Amit; Yang, Priscilla L; Nibert, Max L; Fichorova, Raina N; Lau, Daryl T-Y; Henrich, Timothy J; Kaye, Kenneth M; Schachter, Steven C; Kuritzkes, Daniel R; Steinmetz, Lars M; Gambhir, Sanjiv S; Davis, Ronald W; Demirci, Utkan
2015-08-11
Recent advances in biosensing technologies present great potential for medical diagnostics, thus improving clinical decisions. However, creating a label-free general sensing platform capable of detecting multiple biotargets in various clinical specimens over a wide dynamic range, without lengthy sample-processing steps, remains a considerable challenge. In practice, these barriers prevent broad applications in clinics and at patients' homes. Here, we demonstrate the nanoplasmonic electrical field-enhanced resonating device (NE(2)RD), which addresses all these impediments on a single platform. The NE(2)RD employs an immunodetection assay to capture biotargets, and precisely measures spectral color changes by their wavelength and extinction intensity shifts in nanoparticles without prior sample labeling or preprocessing. We present, through multiple examples, a label-free, quantitative, portable, multitarget platform by rapidly detecting various protein biomarkers, drugs, protein allergens, bacteria, eukaryotic cells, and distinct viruses. The linear dynamic range of NE(2)RD is five orders of magnitude broader than ELISA, with a sensitivity down to 400 fg/mL. This range and sensitivity are achieved by self-assembling gold nanoparticles to generate hot spots on a 3D-oriented substrate for ultrasensitive measurements. We demonstrate that this precise platform handles multiple clinical samples such as whole blood, serum, and saliva without sample preprocessing under diverse conditions of temperature, pH, and ionic strength. The NE(2)RD's broad dynamic range, detection limit, and portability integrated with a disposable fluidic chip have broad applications, potentially enabling the transition toward precision medicine at the point-of-care or primary care settings and at patients' homes.
Inci, Fatih; Filippini, Chiara; Ozen, Mehmet Ozgun; Calamak, Semih; Durmus, Naside Gozde; Wang, ShuQi; Hanhauser, Emily; Hobbs, Kristen S.; Juillard, Franceline; Kuang, Ping Ping; Vetter, Michael L.; Carocci, Margot; Yamamoto, Hidemi S.; Takagi, Yuko; Yildiz, Umit Hakan; Akin, Demir; Wesemann, Duane R.; Singhal, Amit; Yang, Priscilla L.; Nibert, Max L.; Fichorova, Raina N.; Lau, Daryl T.-Y.; Henrich, Timothy J.; Kaye, Kenneth M.; Schachter, Steven C.; Kuritzkes, Daniel R.; Steinmetz, Lars M.; Gambhir, Sanjiv S.; Davis, Ronald W.; Demirci, Utkan
2015-01-01
Recent advances in biosensing technologies present great potential for medical diagnostics, thus improving clinical decisions. However, creating a label-free general sensing platform capable of detecting multiple biotargets in various clinical specimens over a wide dynamic range, without lengthy sample-processing steps, remains a considerable challenge. In practice, these barriers prevent broad applications in clinics and at patients’ homes. Here, we demonstrate the nanoplasmonic electrical field-enhanced resonating device (NE2RD), which addresses all these impediments on a single platform. The NE2RD employs an immunodetection assay to capture biotargets, and precisely measures spectral color changes by their wavelength and extinction intensity shifts in nanoparticles without prior sample labeling or preprocessing. We present, through multiple examples, a label-free, quantitative, portable, multitarget platform by rapidly detecting various protein biomarkers, drugs, protein allergens, bacteria, eukaryotic cells, and distinct viruses. The linear dynamic range of NE2RD is five orders of magnitude broader than ELISA, with a sensitivity down to 400 fg/mL. This range and sensitivity are achieved by self-assembling gold nanoparticles to generate hot spots on a 3D-oriented substrate for ultrasensitive measurements. We demonstrate that this precise platform handles multiple clinical samples such as whole blood, serum, and saliva without sample preprocessing under diverse conditions of temperature, pH, and ionic strength. The NE2RD’s broad dynamic range, detection limit, and portability integrated with a disposable fluidic chip have broad applications, potentially enabling the transition toward precision medicine at the point-of-care or primary care settings and at patients’ homes. PMID:26195743
Rapid and simultaneous determination of Strontium-89 and Strontium-90 in seawater.
Tayeb, Michelle; Dai, Xiongxin; Sdraulig, Sandra
2016-03-01
A rapid method has been developed for the direct determination of radiostrontium ((89)Sr and (90)Sr) released in seawater in the early phase of an accident. The method employs a fast and effective pre-concentration of radiostrontium by Sr-Ca co-precipitation followed by separation of radiostrontium using an extraction chromatography technique. Radiostrontium is effectively separated in the presence of the excessive dominant salts of seawater. Čerenkov and liquid scintillation assay (LSA) techniques are used to determine (89)Sr and (90)Sr. Sample preparation time is approximately 4 h for a set of 10 samples. The method was validated using spiked seawater samples at various activity ratios of (89)Sr:(90)Sr ranging from 1:10 to 9:1. The mean chemical recovery of Sr was 85 ± 3%. (90)Sr showed a variable relative bias, which increased with increasing (89)Sr:(90)Sr ratio and was in the range of ±21%. The highest biases of (90)Sr determination were due to lower activity concentrations of (90)Sr and are regarded as acceptable in emergency situations with elevated levels of radiostrontium in the sample. The minimum detectable concentration (MDC) of (90)Sr and (89)Sr varied at different (89)Sr:(90)Sr ratios. For 0.1 L seawater and 15 min counting time on a low background Hidex liquid scintillation counter (LSC), the MDC of (90)Sr was in the range of 1.7-3.5 Bq L(-1) and the MDC of (89)Sr was in the range 0.5-2.4 Bq L(-1). Copyright © 2016 Elsevier Ltd. All rights reserved.
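The minimum detectable concentrations quoted above depend on the background count rate, counting time, detection efficiency, chemical recovery and sample volume. A common way to estimate such an MDC is a Currie-type formula, sketched below; this is offered as general context rather than the authors' exact calculation, and the efficiency and background values are illustrative assumptions (the recovery and sample volume are taken from the abstract).

```python
# Sketch: Currie-type minimum detectable concentration (MDC) estimate.
# Efficiency and background below are illustrative assumptions, not values from the study.
import math

count_time_s = 15 * 60          # counting time (15 min, as in the abstract)
background_cpm = 2.0            # assumed background count rate
efficiency = 0.6                # assumed counting efficiency (counts per decay)
recovery = 0.85                 # mean chemical recovery of Sr (from the abstract)
volume_l = 0.1                  # sample volume in litres (from the abstract)

bkg_counts = background_cpm * count_time_s / 60.0
ld_counts = 2.71 + 4.65 * math.sqrt(bkg_counts)          # Currie detection limit in counts
mdc_bq_per_l = ld_counts / (efficiency * recovery * count_time_s * volume_l)
print(f"MDC ~ {mdc_bq_per_l:.1f} Bq/L")
```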
Hospital-based emergency nursing in rural settings.
Brown, Jennifer F
2008-01-01
In 2006, the Institute of Medicine (IOM) released a series of reports that highlighted the urgent need for improvements in the nation's emergency health services. This news has provided new energy to a growing body of research about the development and implementation of best practices in emergency care. Despite evidence of geographical disparities in health services, relatively little attention has been focused on rural emergency services to identify environmental differences. The purpose of this chapter is to summarize the contributions of nursing research to the rural emergency services literature. The research resembles a so-called shotgun effect as the exploratory and interventional studies cover a wide range of topics without consistency or justification. Emergency nursing research has been conducted primarily in urban settings, with small samples and insufficient methodological rigor. This chapter will discuss the limitations of the research and set forth an agenda of critical topics that need to be explored related to emergency nursing in rural settings.
Multiple Versus Single Set Validation of Multivariate Models to Avoid Mistakes.
Harrington, Peter de Boves
2018-01-02
Validation of multivariate models is of current importance for a wide range of chemical applications. Although important, it is neglected. The common practice is to use a single external validation set for evaluation. This approach is deficient and may mislead investigators with results that are specific to the single validation set of data. In addition, no statistics are available regarding the precision of a derived figure of merit (FOM). A statistical approach using bootstrapped Latin partitions is advocated. This validation method makes efficient use of the data because each object is used once for validation. It was reviewed a decade earlier, but primarily for the optimization of chemometric models; this review presents the reasons it should be used for generalized statistical validation. Average FOMs with confidence intervals are reported, and powerful matched-sample statistics may be applied for comparing models and methods. Examples demonstrate the problems with single validation sets.
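A minimal sketch of the bootstrapped Latin partition idea follows, assuming a classification setting: in each bootstrap the data are split into class-stratified partitions so that every object is predicted exactly once, and the figure of merit is summarized across bootstraps with a percentile interval. scikit-learn's StratifiedKFold stands in for a Latin partitioner, and the logistic-regression model and accuracy FOM are placeholders, not choices from the review.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def bootstrapped_latin_partitions(X, y, n_bootstraps=50, n_partitions=2, seed=0):
    """Mean FOM and 95% interval; each object is predicted once per bootstrap,
    and every partition preserves the class proportions."""
    X, y = np.asarray(X), np.asarray(y)
    rng = np.random.RandomState(seed)
    foms = []
    for _ in range(n_bootstraps):
        skf = StratifiedKFold(n_splits=n_partitions, shuffle=True,
                              random_state=rng.randint(1 << 30))
        y_pred = np.empty_like(y)
        for train_idx, test_idx in skf.split(X, y):
            model = LogisticRegression(max_iter=1000)
            model.fit(X[train_idx], y[train_idx])
            y_pred[test_idx] = model.predict(X[test_idx])
        foms.append(accuracy_score(y, y_pred))
    foms = np.asarray(foms)
    return foms.mean(), tuple(np.percentile(foms, [2.5, 97.5]))
```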
Votano, Joseph R; Parham, Marc; Hall, L Mark; Hall, Lowell H; Kier, Lemont B; Oloff, Scott; Tropsha, Alexander
2006-11-30
Four modeling techniques, using topological descriptors to represent molecular structure, were employed to produce models of human serum protein binding (% bound) on a data set of 1008 experimental values, carefully screened from publicly available sources. To our knowledge, this is the largest data set on human serum protein binding reported for QSAR modeling. The data was partitioned into a training set of 808 compounds and an external validation test set of 200 compounds. Partitioning was accomplished by clustering the compounds in a structure descriptor space so that random sampling of 20% of the whole data set produced an external test set that is a good representative of the training set with respect to both structure and protein binding values. The four modeling techniques include multiple linear regression (MLR), artificial neural networks (ANN), k-nearest neighbors (kNN), and support vector machines (SVM). With the exception of the MLR model, the ANN, kNN, and SVM QSARs were ensemble models. Training set correlation coefficients and mean absolute error ranged from r2=0.90 and MAE=7.6 for ANN to r2=0.61 and MAE=16.2 for MLR. Prediction results from the validation set yielded correlation coefficients and mean absolute errors which ranged from r2=0.70 and MAE=14.1 for ANN to a low of r2=0.59 and MAE=18.3 for the SVM model. Structure descriptors that contribute significantly to the models are discussed and compared with those found in other published models. For the ANN model, structure descriptor trends with respect to their effects on predicted protein binding can assist the chemist in structure modification during the drug design process.
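The cluster-guided train/test partition described above can be sketched as follows; k-means on the descriptor matrix is used here purely as a generic stand-in for the clustering actually employed, and the cluster count, test fraction and seed are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_stratified_split(X, test_fraction=0.2, n_clusters=25, seed=0):
    """Pick an external test set that spans descriptor space by sampling
    a fixed fraction from each k-means cluster."""
    rng = np.random.RandomState(seed)
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(X)
    test_idx = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        n_test = max(1, int(round(test_fraction * len(members))))
        test_idx.extend(rng.choice(members, size=n_test, replace=False))
    test_idx = np.sort(np.array(test_idx))
    train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
    return train_idx, test_idx
```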
Xiao, Hui; Sun, Ke; Sun, Ye; Wei, Kangli; Tu, Kang; Pan, Leiqing
2017-11-22
Near-infrared (NIR) spectroscopy was applied for the determination of total soluble solid contents (SSC) of single Ruby Seedless grape berries using both benchtop Fourier transform (VECTOR 22/N) and portable grating scanning (SupNIR-1500) spectrometers in this study. The results showed that the best SSC prediction was obtained by VECTOR 22/N in the range of 12,000 to 4000 cm-1 (833-2500 nm) for Ruby Seedless, with a determination coefficient of prediction (Rp²) of 0.918 and a root mean square error of prediction (RMSEP) of 0.758%, based on least squares support vector machine (LS-SVM). Calibration transfer was conducted on the same spectral range of the two instruments (1000-1800 nm) based on the LS-SVM model. By applying the Kennard-Stone (KS) algorithm to divide the sample sets, selecting the optimal number of standardization samples and applying Passing-Bablok regression to choose the optimal instrument as the master instrument, a modified calibration transfer method between the two spectrometers was developed. When 45 samples were selected for the standardization set, linear interpolation-piecewise direct standardization (linear interpolation-PDS) performed well for calibration transfer, with Rp² of 0.857 and RMSEP of 1.099% in the spectral region of 1000-1800 nm. It was also shown that re-calculating the standardization samples into the master model could improve the performance of calibration transfer. This work indicated that NIR could be used as a rapid and non-destructive method for SSC prediction and demonstrated the feasibility of transferring calibrations between substantially different NIR spectrometers.
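The Kennard-Stone selection named in the abstract is a standard algorithm and can be sketched directly; the snippet below picks standardization samples that evenly span the spectral space. The Euclidean metric and the use of raw spectra as input are assumptions made for illustration.

```python
import numpy as np
from scipy.spatial.distance import cdist

def kennard_stone(X, n_select):
    """Kennard-Stone selection of n_select spectra that evenly span the
    spectral space (e.g. to choose standardization samples)."""
    d = cdist(X, X)
    # start with the two most distant samples
    selected = list(np.unravel_index(np.argmax(d), d.shape))
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < n_select:
        # distance of every remaining sample to its nearest selected sample
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        nxt = remaining[int(np.argmax(min_d))]
        selected.append(nxt)
        remaining.remove(nxt)
    return selected
```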
Romanoski, A J; Nestadt, G; Chahal, R; Merchant, A; Folstein, M F; Gruenberg, E M; McHugh, P R
1988-02-01
The authors describe the Standardized Psychiatric Examination (SPE), a new method for conducting psychiatric examinations in both clinical and research settings that preserves the clinical method. The SPE provides a consistent replicable format for eliciting and recording psychiatric history, signs, and symptoms without perturbing the patient-clinician interaction. By means of the SPE, the clinician can formulate diagnoses using DSM-III or ICD-9 criteria and yet generate CATEGO profiles derived from the Present State Examination, 9th edition. Psychiatrists using the SPE demonstrated high interrater reliability in ascertaining individual psychopathological symptoms (Kappa range, 0.55 to 1.0) and in making DSM-III diagnoses (Kappa range, 0.79 to 1.0) among a sample of study subjects (N = 43) drawn from both a psychiatric inpatient population and a large community sample of nonpatients from the Epidemiological Catchment Area (ECA) study. The implications of the SPE for clinical practice and for research are discussed.
Phosphorus in sediment in the Kent Park Lake watershed, Johnson County, Iowa, 2014–15
Kalkhoff, Stephen J.
2016-07-12
Phosphorus data were collected from the Kent Park Lake watershed in Johnson County, Iowa, in 2014 and 2015 to obtain information to assist in the management of the water quality in the lake. Phosphorus concentrations were measured for sediment from several ponds in the watershed and sediment deposited in the lake. The first set of samples was collected in 2014 to understand phosphorus in several potential sources to the lake and the spatial variability in lake sediments. Phosphorus concentrations ranged from 68 to 380 milligrams per kilogram in lake sediment and from 57 to 220 milligrams per kilogram in sedimentation and dredge spoil ponds. Additional samples were collected in 2015 to determine how phosphorus concentrations vary with depth in the lake sediment. Phosphorus concentrations generally decreased with increasing depth within the lake sediment. In 2015, total phosphorus concentrations in lake sediment ranged from 50 to 340 milligrams per kilogram.
NASA Technical Reports Server (NTRS)
Richey, C. R.; Kinzer, R. E.; Cataldo, G.; Wollack, E. J.; Nuth, J. A.; Benford, D. J.; Silverberg, R. F.; Rinehart, S. A.
2013-01-01
The Optical Properties of Astronomical Silicates with Infrared Techniques (OPASI-T) program utilizes multiple instruments to provide spectral data over a wide range of temperature and wavelengths. Experimental methods include Vector Network Analyzer (VNA) and Fourier Transform Spectroscopy (FTS) transmission, and reflection/scattering measurements. From this data, we can determine the optical parameters for the index of refraction, n, and the absorption coefficient, k. The analysis of the laboratory transmittance data for each sample type is based upon different mathematical models, which are applied to each data set according to their degree of coherence. Presented here are results from iron silicate dust grain analogs, in several sample preparations and at temperatures ranging from 5-300 K, across the infrared and millimeter portion of the spectrum (from 2.5-10,000 µm or 4,000-1 cm(exp -1)).
NASA Technical Reports Server (NTRS)
Richey, Christina Rae; Kinzer, R. E.; Cataldo, R. E. G.; Wollack, E. J.; Nuth, J. A.; Benford, D. J.; Silverberg, R. F.; Rinehart, S. A.
2013-01-01
The Optical Properties of Astronomical Silicates with Infrared Techniques (OPASI-T) program utilizes multiple instruments to provide spectral data over a wide range of temperature and wavelengths. Experimental methods include Vector Network Analyzer (VNA) and Fourier Transform Spectroscopy (FTS) transmission, and reflection/scattering measurements. From this data, we can determine the optical parameters for the index of refraction, n, and the absorption coefficient, k. The analysis of the laboratory transmittance data for each sample type is based upon different mathematical models, which are applied to each data set according to their degree of coherence. Presented here are results from iron silicate dust grain analogs, in several sample preparations and at temperatures ranging from 5-300 K, across the infrared and millimeter portion of the spectrum (from 2.5-10,000 µm or 4,000-1 cm(exp -1).
Timmermans, M J T N; Dodsworth, S; Culverwell, C L; Bocak, L; Ahrens, D; Littlewood, D T J; Pons, J; Vogler, A P
2010-11-01
Mitochondrial genome sequences are important markers for phylogenetics but taxon sampling remains sporadic because of the great effort and cost required to acquire full-length sequences. Here, we demonstrate a simple, cost-effective way to sequence the full complement of protein coding mitochondrial genes from pooled samples using the 454/Roche platform. Multiplexing was achieved without the need for expensive indexing tags ('barcodes'). The method was trialled with a set of long-range polymerase chain reaction (PCR) fragments from 30 species of Coleoptera (beetles) sequenced in a 1/16th sector of a sequencing plate. Long contigs were produced from the pooled sequences with sequencing depths ranging from ∼10 to 100× per contig. Species identity of individual contigs was established via three 'bait' sequences matching disparate parts of the mitochondrial genome obtained by conventional PCR and Sanger sequencing. This proved that assembly of contigs from the sequencing pool was correct. Our study produced sequences for 21 nearly complete and seven partial sets of protein coding mitochondrial genes. Combined with existing sequences for 25 taxa, an improved estimate of basal relationships in Coleoptera was obtained. The procedure could be employed routinely for mitochondrial genome sequencing at the species level, to provide improved species 'barcodes' that currently use the cox1 gene only.
Rapid quantitative chemical mapping of surfaces with sub-2 nm resolution
NASA Astrophysics Data System (ADS)
Lai, Chia-Yun; Perri, Saverio; Santos, Sergio; Garcia, Ricardo; Chiesa, Matteo
2016-05-01
We present a theory that exploits four observables in bimodal atomic force microscopy to produce maps of the Hamaker constant H. The quantitative H maps may be employed by the broader community to directly interpret the high resolution of standard bimodal AFM images as chemical maps while simultaneously quantifying chemistry in the non-contact regime. We further provide a simple methodology to optimize a range of operational parameters for which H is in the closest agreement with the Lifshitz theory in order to (1) simplify data acquisition and (2) generalize the methodology to any set of cantilever-sample systems. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr00496b
Mendonca, Filho J.G.; Araujo, C.V.; Borrego, A.G.; Cook, A.; Flores, D.; Hackley, P.; Hower, J.C.; Kern, M.L.; Kommeren, K.; Kus, J.; Mastalerz, Maria; Mendonca, J.O.; Menezes, T.R.; Newman, J.; Ranasinghe, P.; Souza, I.V.A.F.; Suarez-Ruiz, I.; Ujiie, Y.
2010-01-01
The main objective of this work was to study the effect of kerogen isolation procedures on maturity parameters of organic matter using optical microscopes. This work represents the results of the Organic Matter Concentration Working Group (OMCWG) of the International Committee for Coal and Organic Petrology (ICCP) during the years 2008 and 2009. Four samples have been analysed covering a range of maturity (low and moderate) and terrestrial and marine geological settings. The analyses comprise random vitrinite reflectance measured on both kerogen concentrate and whole rock mounts and fluorescence spectra taken on alginite. Eighteen participants from twelve laboratories from all over the world performed the analyses. Samples of continental settings contained enough vitrinite for participants to record around 50 measurements, whereas fewer readings were taken on samples from marine settings. The scatter of results was also larger in the samples of marine origin. Similar vitrinite reflectance values were in general recorded in the whole rock and in the kerogen concentrate. The small deviations of the trend cannot be attributed to the acid treatment involved in kerogen isolation but to reasons related to component identification or to the difficulty of achieving a good polish of samples with high mineral matter content. In samples that were difficult to polish, vitrinite reflectance measured on whole rock tended to be lower. The presence or absence of rock fabric affected the selection of the vitrinite population for measurement, and this also had an influence on the average value reported and on the scatter of the results. Slightly lower standard deviations were reported for the analyses run on kerogen concentrates. Considering the spectral fluorescence results, it was observed that λmax presents a shift to higher wavelengths in the kerogen concentrate sample in comparison to the whole-rock sample, thus revealing an influence of preparation methods (acid treatment) on fluorescence properties. © 2010 Elsevier B.V.
Liu, Qian-qian; Wang, Chun-yan; Shi, Xiao-feng; Li, Wen-dong; Luan, Xiao-ning; Hou, Shi-lin; Zhang, Jin-liang; Zheng, Rong-er
2012-04-01
In this paper, a new method was developed to differentiate spill oil samples. Synchronous fluorescence spectra in the lower, nonlinear concentration range of 10(-2) - 10(-1) g x L(-1) were collected to build the training database. A radial basis function artificial neural network (RBF-ANN) was used to classify the sample sets, along with principal component analysis (PCA) as the feature extraction method. The recognition rate of the closely-related oil source samples was 92%. All the results demonstrated that the proposed method could identify crude oil samples effectively from just one synchronous spectrum of the spill oil sample. The method is considered very suitable for real-time spill oil identification, and can also be easily applied to oil logging and the analysis of other multi-PAH or multi-fluorescent mixtures.
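As a rough illustration of the PCA-plus-classifier pipeline described above, the sketch below compresses spectra with PCA and then classifies them. scikit-learn has no radial basis function network, so an RBF-kernel support vector machine stands in for the RBF-ANN, and the component count is an arbitrary choice.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def build_oil_classifier(n_components=10):
    # scale wavelengths, extract PCA features, then classify with an RBF kernel
    return make_pipeline(StandardScaler(),
                         PCA(n_components=n_components),
                         SVC(kernel="rbf", gamma="scale"))

# X: synchronous fluorescence spectra (n_samples x n_wavelengths), y: oil sources
# scores = cross_val_score(build_oil_classifier(), X, y, cv=5)
```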
Jia, Xiujuan; Wang, Tiebang; Bu, Xiaodong; Tu, Qiang; Spencer, Sandra
2006-04-11
A graphite furnace atomic absorption (GFAA) spectrometric method for the determination of ruthenium (Ru) in solid and liquid pharmaceutical compounds has been developed. Samples are dissolved or diluted in dimethyl sulfoxide (DMSO) without any other treatment before they are analyzed by GFAA with a carefully designed heating program to avoid pre-atomization signal loss and to achieve suitable sensitivity. Various inorganic and organic solvents were tested and compared, and DMSO was found to be the most suitable. In addition, ruthenium was found to be stable in DMSO for at least 5 days. Spike recoveries ranged from 81 to 100%, and the limit of quantitation (LOQ) was determined to be 0.5 microg g(-1) for solid samples or 0.005 microg ml(-1) for liquid samples based on a 100-fold dilution. The same set of samples was also analyzed by ICP-MS with a different sample preparation method, and excellent agreement was achieved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nee, K.; Bryan, S.; Levitskaia, T.
The reliability of chemical processes can be greatly improved by implementing inline monitoring systems. Combining multivariate analysis with non-destructive sensors can enhance the process without interfering with the operation. Here, we present hierarchical models using both principal component analysis and partial least squares analysis developed for different chemical components representative of solvent extraction process streams. A training set of 380 samples and an external validation set of 95 samples were prepared, and near-infrared and Raman spectral data as well as conductivity under variable temperature conditions were collected. The results from the models indicate that careful selection of the spectral range is important. By compressing the data through Principal Component Analysis (PCA), we lower the rank of the data set to its most dominant features while maintaining the key principal components to be used in the regression analysis. Within the studied data set, the concentrations of five chemical components were modeled: total nitrate (NO3-), total acid (H+), neodymium (Nd3+), sodium (Na+), and ionic strength (I.S.). The best overall model prediction for each of the species studied used a combined data set comprised of complementary techniques including NIR, Raman, and conductivity. Finally, our study shows that chemometric models are powerful but require a significant amount of carefully analyzed data to capture variations in the chemistry.
Nee, K.; Bryan, S.; Levitskaia, T.; ...
2017-12-28
The reliability of chemical processes can be greatly improved by implementing inline monitoring systems. Combining multivariate analysis with non-destructive sensors can enhance the process without interfering with the operation. Here, we present hierarchical models using both principal component analysis and partial least squares analysis developed for different chemical components representative of solvent extraction process streams. A training set of 380 samples and an external validation set of 95 samples were prepared, and near-infrared and Raman spectral data as well as conductivity under variable temperature conditions were collected. The results from the models indicate that careful selection of the spectral range is important. By compressing the data through Principal Component Analysis (PCA), we lower the rank of the data set to its most dominant features while maintaining the key principal components to be used in the regression analysis. Within the studied data set, the concentrations of five chemical components were modeled: total nitrate (NO3-), total acid (H+), neodymium (Nd3+), sodium (Na+), and ionic strength (I.S.). The best overall model prediction for each of the species studied used a combined data set comprised of complementary techniques including NIR, Raman, and conductivity. Finally, our study shows that chemometric models are powerful but require a significant amount of carefully analyzed data to capture variations in the chemistry.
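A minimal sketch of the data-fusion step described above follows: the NIR, Raman and conductivity blocks are scaled, concatenated and regressed against the analyte concentrations with partial least squares. This is a generic stand-in for the hierarchical PCA/PLS models in the paper; the block scaling, component count and function names are assumptions made for illustration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cross_decomposition import PLSRegression

def fit_fused_pls(nir, raman, conductivity, concentrations, n_components=8):
    """Block-scale and concatenate NIR, Raman and conductivity data, then
    fit one PLS model against the analyte concentrations."""
    blocks = [nir, raman, np.asarray(conductivity).reshape(-1, 1)]
    scalers = [StandardScaler().fit(b) for b in blocks]
    X = np.hstack([s.transform(b) for s, b in zip(scalers, blocks)])
    pls = PLSRegression(n_components=n_components).fit(X, concentrations)
    return scalers, pls

def predict_fused(scalers, pls, nir, raman, conductivity):
    """Apply the stored block scalers and the fitted PLS model to new data."""
    blocks = [nir, raman, np.asarray(conductivity).reshape(-1, 1)]
    X = np.hstack([s.transform(b) for s, b in zip(scalers, blocks)])
    return pls.predict(X)
```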
Moshaverinia, Alireza; Ansari, Sahar; Moshaverinia, Maryam; Schricker, Scott R; Chee, Winston W L
2011-09-01
The objective of this study is to investigate the effects of the application of ultrasound on the physical properties of a novel NVC (N-vinylcaprolactam)-containing conventional glass-ionomer cement (GIC). Experimental GIC (EXP) samples were made from the acrylic acid (AA)-itaconic acid (IA)-NVC synthesized terpolymer with Fuji IX powder in a 3.6:1 P/L ratio as recommended by the manufacturer. Specimens were mixed and fabricated at room temperature and were conditioned in distilled water at 37°C for 1 day up to 4 weeks. Ultrasound (US) was applied 20 s after mixing by placing the dental scaler tip on the top of the cement and applying light hand pressure to ensure the tip remained in contact with the cement without causing any deformation. Vickers hardness was determined using a microhardness tester. The working and setting times were determined using a Gillmore needle. Water sorption was also investigated. Commercial Fuji IX was used as the control for comparison (CON). The data obtained for the EXP GIC set by conventional setting (CS) and ultrasonic setting (US) were compared with the CON group, using one-way ANOVA and the Tukey multiple range test at α = 0.05. Ultrasonic (US) application not only accelerated the curing process of both the EXP cement and the CON group but also improved the surface hardness of all specimens. US-set samples showed significantly lower water sorption values (P < 0.05) due to an improved acid-base reaction within the GIC matrix and an accelerated maturation process. According to the statistical analysis of the data, a significant increase was observed in the surface hardness of the CS and US specimens in both the EXP samples and the CON groups. It was concluded that it is possible to command-set GICs by the application of ultrasound, leading to GICs with enhanced physical and handling properties. US application might be a potential way to broaden the clinical applications of conventional GICs in restorative dentistry for procedures such as class V cavity restorations.
On the Asymptotic Relative Efficiency of Planned Missingness Designs.
Rhemtulla, Mijke; Savalei, Victoria; Little, Todd D
2016-03-01
In planned missingness (PM) designs, certain data are set a priori to be missing. PM designs can increase validity and reduce cost; however, little is known about the loss of efficiency that accompanies these designs. The present paper compares PM designs to reduced sample (RN) designs that have the same total number of data points concentrated in fewer participants. In 4 studies, we consider models for both observed and latent variables, designs that do or do not include an "X set" of variables with complete data, and a full range of between- and within-set correlation values. All results are obtained using asymptotic relative efficiency formulas, and thus no data are generated; this novel approach allows us to examine whether PM designs have theoretical advantages over RN designs while removing the impact of sampling error. Our primary findings are that (a) in manifest variable regression models, estimates of regression coefficients have much lower relative efficiency in PM designs as compared to RN designs, (b) relative efficiency of factor correlation or latent regression coefficient estimates is maximized when the indicators of each latent variable come from different sets, and (c) the addition of an X set improves efficiency in manifest variable regression models only for the parameters that directly involve the X-set variables, but it substantially improves efficiency of most parameters in latent variable models. We conclude that PM designs can be beneficial when the model of interest is a latent variable model; recommendations are made for how to optimize such a design.
Sexton, J Bryan; Adair, Kathryn C; Leonard, Michael W; Frankel, Terri Christensen; Proulx, Joshua; Watson, Sam R; Magnus, Brooke; Bogan, Brittany; Jamal, Maleek; Schwendimann, Rene; Frankel, Allan S
2018-01-01
Background There is a poorly understood relationship between Leadership WalkRounds (WR) and domains such as safety culture, employee engagement, burnout and work-life balance. Methods This cross-sectional survey study evaluated associations between receiving feedback about actions taken as a result of WR and healthcare worker assessments of patient safety culture, employee engagement, burnout and work-life balance, across 829 work settings. Results 16 797 of 23 853 administered surveys were returned (70.4%). 5497 (32.7% of total) reported that they had participated in WR, and 4074 (24.3%) reported that they participated in WR with feedback. Work settings reporting more WR with feedback had substantially higher safety culture domain scores (first vs fourth quartile Cohen’s d range: 0.34–0.84; % increase range: 15–27) and significantly higher engagement scores for four of its six domains (first vs fourth quartile Cohen’s d range: 0.02–0.76; % increase range: 0.48–0.70). Conclusion This WR study of patient safety and organisational outcomes tested relationships with a comprehensive set of safety culture and engagement metrics in the largest sample of hospitals and respondents to date. Beyond measuring simply whether WRs occur, we examine WR with feedback, as WR being done well. We suggest that when WRs are conducted, acted on, and the results are fed back to those involved, the work setting is a better place to deliver and receive care as assessed across a broad range of metrics, including teamwork, safety, leadership, growth opportunities, participation in decision-making and the emotional exhaustion component of burnout. Whether WR with feedback is a manifestation of better norms, or a cause of these norms, is unknown, but the link is demonstrably potent. PMID:28993441
Sexton, J Bryan; Adair, Kathryn C; Leonard, Michael W; Frankel, Terri Christensen; Proulx, Joshua; Watson, Sam R; Magnus, Brooke; Bogan, Brittany; Jamal, Maleek; Schwendimann, Rene; Frankel, Allan S
2018-04-01
There is a poorly understood relationship between Leadership WalkRounds (WR) and domains such as safety culture, employee engagement, burnout and work-life balance. This cross-sectional survey study evaluated associations between receiving feedback about actions taken as a result of WR and healthcare worker assessments of patient safety culture, employee engagement, burnout and work-life balance, across 829 work settings. 16 797 of 23 853 administered surveys were returned (70.4%). 5497 (32.7% of total) reported that they had participated in WR, and 4074 (24.3%) reported that they participated in WR with feedback. Work settings reporting more WR with feedback had substantially higher safety culture domain scores (first vs fourth quartile Cohen's d range: 0.34-0.84; % increase range: 15-27) and significantly higher engagement scores for four of its six domains (first vs fourth quartile Cohen's d range: 0.02-0.76; % increase range: 0.48-0.70). This WR study of patient safety and organisational outcomes tested relationships with a comprehensive set of safety culture and engagement metrics in the largest sample of hospitals and respondents to date. Beyond measuring simply whether WRs occur, we examine WR with feedback, as WR being done well . We suggest that when WRs are conducted, acted on, and the results are fed back to those involved, the work setting is a better place to deliver and receive care as assessed across a broad range of metrics, including teamwork, safety, leadership, growth opportunities, participation in decision-making and the emotional exhaustion component of burnout. Whether WR with feedback is a manifestation of better norms, or a cause of these norms, is unknown, but the link is demonstrably potent. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Real-time high dynamic range laser scanning microscopy
Vinegoni, C.; Leon Swisher, C.; Fumene Feruglio, P.; Giedt, R. J.; Rousso, D. L.; Stapleton, S.; Weissleder, R.
2016-01-01
In conventional confocal/multiphoton fluorescence microscopy, images are typically acquired under ideal settings and after extensive optimization of parameters for a given structure or feature, often resulting in information loss from other image attributes. To overcome the problem of selective data display, we developed a new method that extends the imaging dynamic range in optical microscopy and improves the signal-to-noise ratio. Here we demonstrate how real-time and sequential high dynamic range microscopy facilitates automated three-dimensional neural segmentation. We address reconstruction and segmentation performance on samples with different size, anatomy and complexity. Finally, in vivo real-time high dynamic range imaging is also demonstrated, making the technique particularly relevant for longitudinal imaging in the presence of physiological motion and/or for quantification of in vivo fast tracer kinetics during functional imaging. PMID:27032979
Aflatoxins in composite spices collected from local markets of Karachi, Pakistan.
Asghar, Muhammad Asif; Zahir, Erum; Rantilal, Summan; Ahmed, Aftab; Iqbal, Javed
2016-06-01
This survey was carried out to evaluate the occurrence of total aflatoxins (AFs; B1+B2+G1+G2) in unpacked composite spices. A total of 75 samples of composite spices such as biryani, karhai, tikka, nihari and korma masalas were collected from local markets of Karachi, Pakistan, and analysed using an HPLC technique. The results indicated that AFs were detected in 77% (n = 58) of samples, ranging from 0.68 to 25.74 µg kg(-1) with a mean of 4.63 ± 0.95 µg kg(-1). In 88% (n = 66) of samples, the AFs level was below the maximum limit (ML = 10 µg kg(-1)) imposed by the EU. Furthermore, 61% (n = 46) of the tested samples contained AFs levels between 1 and 10 µg kg(-1), 9% (n = 7) exhibited AFs contamination ranging from 10 to 20 µg kg(-1) and only 3% (n = 2) of the investigated samples contained AFs levels higher than the ML of 20 µg kg(-1) for total aflatoxins as set by the USA. It was concluded that there is a need to establish a strict and continuous national monitoring plan to improve the safety and quality of spices in Pakistan.
Degeneffe, Dennis; Reicks, Marla
2008-01-01
Objective To identify a comprehensive set of distinct “need states” based on the eating occasions experienced by midlife women. Design Series of 7 focus group interviews. Setting Meeting room on a university campus. Participants A convenience sample of 34 multi-ethnic women (mean age = 46 years). Phenomenon of Interest Descriptions of eating occasions by “need states” - specific patterns of needs for the occasion. Analysis Interviews were audio taped, transcribed verbatim and analyzed for common themes using qualitative data analysis procedures. Findings Eight need states suggested a hypothetical framework reflecting a wide range in emotional gratification. Need states with a low level of emotional gratification were dominated by sets of functional needs such as coping with stress, balancing intake across occasions, meeting external demands of time and effort and maintaining a routine. Food was a means for reinforcing family identity, social expression and celebration in need states with high levels of emotional gratification. Occurrence of need states varied by day and meal/snack occasion, with food type/amount dependent on need state. Conclusions and Implications Eating occasions are driven by specific sets of needs ranging from physical/functional to more emotional/social needs. Addressing need states may improve weight intervention programs for midlife women. PMID:18984495
A Data-Centric Strategy for Modern Biobanking.
Quinlan, Philip R; Gardner, Stephen; Groves, Martin; Emes, Richard; Garibaldi, Jonathan
2015-01-01
Biobanking has been in existence for many decades and over that time has developed significantly. Biobanking originated from a need to collect, store and make available biological samples for a range of research purposes. It has changed as the understanding of biological processes has increased and new sample handling techniques have been developed to ensure samples were fit-for-purpose. As a result of these developments, modern biobanking is now facing two substantial new challenges. Firstly, new research methods such as next generation sequencing can generate datasets that are at an infinitely greater scale and resolution than previous methods. Secondly, as the understanding of diseases increases, researchers require a far richer data set about the donors from which the samples originate. To retain a sample-centric strategy in a research environment that is increasingly dictated by data will place a biobank at a significant disadvantage and even result in the samples collected going unused. As a result, biobanking is required to change its strategic focus from a sample-dominated perspective to a data-centric strategy.
Wu, George; Yeung, Stanley; Chen, Frank
2017-01-01
Neurokinin-1 receptor antagonist, 5-hydroxytryptamine-3 receptor antagonist, and dexamethasone combination therapy is the standard of care for the prevention of chemotherapy-induced nausea and vomiting. Herein, we describe the physical and chemical stability of an injectable emulsion of the Neurokinin-1 receptor antagonist rolapitant 185 mg in 92.5 mL (free base, 166.5 mg in 92.5 mL) admixed with either 2.5 mL of dexamethasone sodium phosphate (10 mg) or 5 mL of dexamethasone sodium phosphate (20 mg). Admixtures were prepared and stored in two types of container closures (glass and Crystal Zenith plastic bottles) and four types of intravenous administration tubing sets (or intravenous tubing sets). The assessment of the physical and chemical stability was conducted on admixtures packaged in bottled samples stored at room temperature (20°C to 25°C under fluorescent light) and evaluated at 0, 1, and 6 hours. For admixtures in intravenous tubing sets, the assessment of physicochemical stability was performed after 0 and 7 hours of storage at 20°C to 25°C, and then after 20 hours (total 27 hours) under refrigeration (2°C to 8°C) and protected from light. Physical stability was assessed by visually examining the bottle contents under normal room light and measuring turbidity and particulate matter. Chemical stability was assessed by measuring the pH of the admixture and determining drug concentrations through high-performance liquid chromatographic analysis. Results showed that all samples were physically compatible throughout the duration of the study. The admixtures stayed within narrow and acceptable ranges in pH, turbidity, and particulate matter. Admixtures of rolapitant and dexamethasone were chemically stable when stored in glass and Crystal Zenith bottles for at least 6 hours at room temperature, as well as in the four selected intravenous tubing sets for 7 hours at 20°C to 25°C and then for 20 (total 27 hours) hours at 2°C to 8°C. No loss of potency of any admixed component occurred in the samples stored at the temperature ranges studied. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
Allometry and Ecology of the Bilaterian Gut Microbiome.
Sherrill-Mix, Scott; McCormick, Kevin; Lauder, Abigail; Bailey, Aubrey; Zimmerman, Laurie; Li, Yingying; Django, Jean-Bosco N; Bertolani, Paco; Colin, Christelle; Hart, John A; Hart, Terese B; Georgiev, Alexander V; Sanz, Crickette M; Morgan, David B; Atencia, Rebeca; Cox, Debby; Muller, Martin N; Sommer, Volker; Piel, Alexander K; Stewart, Fiona A; Speede, Sheri; Roman, Joe; Wu, Gary; Taylor, Josh; Bohm, Rudolf; Rose, Heather M; Carlson, John; Mjungu, Deus; Schmidt, Paul; Gaughan, Celeste; Bushman, Joyslin I; Schmidt, Ella; Bittinger, Kyle; Collman, Ronald G; Hahn, Beatrice H; Bushman, Frederic D
2018-03-27
Classical ecology provides principles for construction and function of biological communities, but to what extent these apply to the animal-associated microbiota is just beginning to be assessed. Here, we investigated the influence of several well-known ecological principles on animal-associated microbiota by characterizing gut microbial specimens from bilaterally symmetrical animals ( Bilateria ) ranging from flies to whales. A rigorously vetted sample set containing 265 specimens from 64 species was assembled. Bacterial lineages were characterized by 16S rRNA gene sequencing. Previously published samples were also compared, allowing analysis of over 1,098 samples in total. A restricted number of bacterial phyla was found to account for the great majority of gut colonists. Gut microbial composition was associated with host phylogeny and diet. We identified numerous gut bacterial 16S rRNA gene sequences that diverged deeply from previously studied taxa, identifying opportunities to discover new bacterial types. The number of bacterial lineages per gut sample was positively associated with animal mass, paralleling known species-area relationships from island biogeography and implicating body size as a determinant of community stability and niche complexity. Samples from larger animals harbored greater numbers of anaerobic communities, specifying a mechanism for generating more-complex microbial environments. Predictions for species/abundance relationships from models of neutral colonization did not match the data set, pointing to alternative mechanisms such as selection of specific colonists by environmental niche. Taken together, the data suggest that niche complexity increases with gut size and that niche selection forces dominate gut community construction. IMPORTANCE The intestinal microbiome of animals is essential for health, contributing to digestion of foods, proper immune development, inhibition of pathogen colonization, and catabolism of xenobiotic compounds. How these communities assemble and persist is just beginning to be investigated. Here we interrogated a set of gut samples from a wide range of animals to investigate the roles of selection and random processes in microbial community construction. We show that the numbers of bacterial species increased with the weight of host organisms, paralleling findings from studies of island biogeography. Communities in larger organisms tended to be more anaerobic, suggesting one mechanism for niche diversification. Nonselective processes enable specific predictions for community structure, but our samples did not match the predictions of the neutral model. Thus, these findings highlight the importance of niche selection in community construction and suggest mechanisms of niche diversification. Copyright © 2018 Sherrill-Mix et al.
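The positive association reported above between bacterial lineage counts and host mass parallels classical species-area relationships, which are conventionally summarized by a power-law fit on log-log axes. The sketch below shows that conventional fit only; the variable names and the use of ordinary least squares on log-transformed values are assumptions for illustration, not the analysis performed in the paper.

```python
import numpy as np
from scipy import stats

# lineages: number of bacterial lineages per gut sample
# mass_g:   host body mass in grams (both hypothetical arrays)
def power_law_fit(mass_g, lineages):
    """Fit S = c * M**z on log-log axes, the usual species-area form."""
    log_m, log_s = np.log10(mass_g), np.log10(lineages)
    slope, intercept, r, p, se = stats.linregress(log_m, log_s)
    return {"z": slope, "c": 10 ** intercept, "r": r, "p": p}
```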
Ground-Cover Measurements: Assessing Correlation Among Aerial and Ground-Based Methods
NASA Astrophysics Data System (ADS)
Booth, D. Terrance; Cox, Samuel E.; Meikle, Tim; Zuuring, Hans R.
2008-12-01
Wyoming’s Green Mountain Common Allotment is public land providing livestock forage, wildlife habitat, and unfenced solitude, among other ecological services. It is also the center of ongoing debate over USDI Bureau of Land Management’s (BLM) adjudication of land uses. Monitoring resource use is a BLM responsibility, but conventional monitoring is inadequate for the vast areas encompassed in this and other public-land units. New monitoring methods are needed that will reduce monitoring costs. An understanding of data-set relationships among old and new methods is also needed. This study compared two conventional methods with two remote sensing methods using images captured from two meters and 100 meters above ground level from a camera stand (a ground, image-based method) and a light airplane (an aerial, image-based method). Image analysis used SamplePoint or VegMeasure software. Aerial methods allowed for increased sampling intensity at low cost relative to the time and travel required by ground methods. Costs to acquire the aerial imagery and measure ground cover on 162 aerial samples representing 9000 ha were less than $3,000. The four highest correlations among data sets for bare ground—the ground-cover characteristic yielding the highest correlations (r)—ranged from 0.76 to 0.85 and included ground with ground, ground with aerial, and aerial with aerial data-set associations. We conclude that our aerial surveys are a cost-effective monitoring method, that ground with aerial data-set correlations can be equal to, or greater than, those among ground-based data sets, and that bare ground should continue to be investigated and tested for use as a key indicator of rangeland health.
Merz, Clayton; Catchen, Julian M; Hanson-Smith, Victor; Emerson, Kevin J; Bradshaw, William E; Holzapfel, Christina M
2013-01-01
Herein we tested the repeatability of phylogenetic inference based on high throughput sequencing with increased taxon sampling, using our previously published techniques, in the pitcher-plant mosquito, Wyeomyia smithii, in North America. We sampled 25 natural populations drawn from localities near 21 previous collection localities and used these new data to construct a second, independent phylogeny, expressly to test the reproducibility of phylogenetic patterns. Comparison of trees between the two data sets based on both maximum parsimony and maximum likelihood with Bayesian posterior probabilities showed close correspondence in the grouping of the most southern populations into clear clades. However, discrepancies emerged, particularly in the middle of W. smithii's current range near the previous maximum extent of the Laurentide Ice Sheet, especially concerning the most recent common ancestor of the mountain and northern populations. Combining all 46 populations from both studies into a single maximum parsimony tree and taking into account the post-glacial historical biogeography of the associated flora provided an improved picture of W. smithii's range expansion in North America. In a more general sense, we propose that extensive taxon sampling, especially in areas of known geological disruption, is key to a comprehensive approach to phylogenetics that leads to biologically meaningful phylogenetic inference.
Song, Yong-Ak; Chan, Michael; Celio, Chris; Tannenbaum, Steven R.; Wishnok, John S.; Han, Jongyoon
2010-01-01
In this paper, we evaluate the strategy of sorting peptides/proteins based on their charge-to-mass ratio, without resorting to ampholytes and/or isoelectric focusing, using single- and two-step free-flow zone electrophoresis. We developed a simple fabrication method to create a salt bridge for free-flow zone electrophoresis in PDMS chips by surface printing a hydrophobic layer on a glass substrate. Since the surface-printed hydrophobic layer prevents plasma bonding between the PDMS chip and the substrate, an electrical junction gap can be created for free-flow zone electrophoresis. With this device, we demonstrated a separation of positive and negative peptides and proteins at a given pH in standard buffer systems, and validated the sorting result with LC/MS. Furthermore, we coupled two sorting steps via off-chip titration, and isolated peptides within specific pI ranges from sample mixtures, where the pI range was simply set by the pH values of the buffer solutions. This free-flow zone electrophoresis sorting device, with its simplicity of fabrication and a sorting resolution of 0.5 pH unit, can potentially be a high-throughput sample fractionation tool for targeted proteomics using LC/MS. PMID:20163146
Haematology and plasma chemistry of the red top ice blue mbuna cichlid (Metriaclima greshakei).
Snellgrove, Donna L; Alexander, Lucille G
2011-10-01
Clinical haematology and blood plasma chemistry can be used as a valuable tool to provide substantial diagnostic information for fish. A wide range of parameters can be used to assess nutritional status, digestive function, disease identification, routine metabolic levels, general physiological status and even the assessment and management of wild fish populations. However, to evaluate such data accurately, baseline reference intervals for each measurable parameter must be established for the species of fish in question. Baseline data for ornamental fish species are limited, as research is more commonly conducted using commercially cultured fish. Blood samples were collected from sixteen red top ice blue cichlids (Metriaclima greshakei), an ornamental freshwater fish, to describe a range of haematology and plasma chemistry parameters. Since this cichlid is fairly large in comparison with most tropical ornamental fish, two independent blood samples were taken to assess a large range of parameters. No significant differences were noted between sample periods for any parameter. Values obtained for a large number of parameters were similar to those established for other closely related fish species such as tilapia (Oreochromis spp.). In addition to reporting, to our knowledge, the first set of blood values for M. greshakei, this study highlights the possibility of using previously established data for cultured cichlid species in studies with ornamental cichlid fish.
NASA Astrophysics Data System (ADS)
Upadhyay, Neelam; Jaiswal, Pranita; Jha, Shyam Narayan
2018-02-01
Pure ghee is superior to other fats and oils due to the presence of bioactive lipids and its rich flavor. Adulteration of ghee with cheaper fats and oils is a prevalent fraudulent practice. ATR-FTIR spectroscopy was coupled with chemometrics to detect the presence of pig body fat in pure ghee. Pure mixed ghee was spiked with pig body fat at the 3, 4, 5, 10 and 15% levels. The spectra of the pure samples (ghee and pig body fat) along with the spiked samples were recorded in the mid-infrared region from 4000 to 500 cm-1. Some wavenumber ranges were selected on the basis of differences in the spectra obtained. Separate clusters of the samples were obtained by employing principal component analysis at the 5% level of significance on the selected wavenumber ranges. Probable class membership was predicted by applying the SIMCA approach. Approximately 90% of the samples were classified into their respective classes, and pure ghee and pig body fat were never misclassified. The value of R2 was >0.99 for both the calibration and validation sets using the partial least squares method. The study concluded that spiking of pig body fat in pure ghee can be detected even at a level of 3%.
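As a rough illustration of the quantitative side of such a workflow, the sketch below fits a partial least squares regression of adulteration level on the (pre-selected) spectral region and reports R² for calibration and validation subsets. The component count, split fraction and variable names are assumptions, not details from the study.

```python
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# spectra: absorbance matrix restricted to the selected wavenumber ranges
# level  : % pig body fat spiked into each sample (e.g. 0, 3, 4, 5, 10, 15)
def pls_adulteration_model(spectra, level, n_components=5, seed=0):
    """Return (R2 calibration, R2 validation) for a PLS regression of
    adulteration level on the spectra."""
    X_cal, X_val, y_cal, y_val = train_test_split(
        spectra, level, test_size=0.3, random_state=seed)
    pls = PLSRegression(n_components=n_components).fit(X_cal, y_cal)
    return (r2_score(y_cal, pls.predict(X_cal)),
            r2_score(y_val, pls.predict(X_val)))
```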
Boelter, Fred; Simmons, Catherine; Hewett, Paul
2011-04-01
Fluid sealing devices (gaskets and packing) containing asbestos are manufactured and blended with binders such that the asbestos fibers are locked in a matrix that limits the potential for fiber release. Occasionally, fluid sealing devices fail and need to be replaced or are removed during preventive maintenance activities. This is the first study known to pool over a decade's worth of exposure assessments involving fluid sealing devices used in a variety of applications. Twenty-one assessments of work activities and air monitoring were performed under conditions with no mechanical ventilation and work scenarios described as "worst-case" conditions. Frequently, the work was conducted using aggressive techniques, along with dry removal practices. Personal and area samples were collected and analyzed in accordance with the National Institute for Occupational Safety and Health Methods 7400 and 7402. A total of 782 samples were analyzed by phase contrast microscopy, and 499 samples were analyzed by transmission electron microscopy. The statistical data analysis focused on the overall data sets which were personal full-shift time-weighted average (TWA) exposures, personal 30-min exposures, and area full-shift TWA values. Each data set contains three estimates of exposure: (1) total fibers; (2) asbestos fibers only but substituting a value of 0.0035 f/cc for censored data; and (3) asbestos fibers only but substituting the limit of quantification value for censored data. Censored data in the various data sets ranged from 7% to just over 95%. Because all the data sets were censored, the geometric mean and geometric standard deviation were estimated using the maximum likelihood estimation method. Nonparametric, Kaplan-Meier, and lognormal statistics were applied and found to be consistent and reinforcing. All three sets of statistics suggest that the mean and median exposures were less than 25% of 0.1 f/cc 8-hr TWA sample or 1.0 f/cc 30-min samples, and that there is at least 95% confidence that the true 95th percentile exposures are less than 0.1 f/cc as an 8-hr TWA.
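The censored-data maximum likelihood estimation mentioned above can be sketched directly: detected samples contribute the lognormal density and non-detects contribute the probability of falling below their reporting limit. The optimizer choice and starting values below are illustrative assumptions.

```python
import numpy as np
from scipy import optimize, stats

def censored_lognormal_mle(detects, detection_limits):
    """MLE of the lognormal GM/GSD when some samples are non-detects.

    detects          : measured concentrations above the reporting limit
    detection_limits : reporting limits of the censored (non-detect) samples
    """
    log_det = np.log(np.asarray(detects, dtype=float))
    log_dl = np.log(np.asarray(detection_limits, dtype=float))

    def neg_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)            # keep sigma positive
        ll = stats.norm.logpdf(log_det, mu, sigma).sum()
        ll += stats.norm.logcdf(log_dl, mu, sigma).sum()
        return -ll

    start = np.array([log_det.mean(), np.log(log_det.std() + 1e-6)])
    res = optimize.minimize(neg_loglik, start, method="Nelder-Mead")
    mu, sigma = res.x[0], np.exp(res.x[1])
    return np.exp(mu), np.exp(sigma)         # geometric mean, geometric SD
```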
SESAR: Addressing the need for unique sample identification in the Solid Earth Sciences
NASA Astrophysics Data System (ADS)
Lehnert, K. A.; Goldstein, S. L.; Lenhardt, C.; Vinayagamoorthy, S.
2004-12-01
The study of solid earth samples is key to our knowledge of Earth's dynamical systems and evolution. The data generated provide the basis for models and hypotheses in all disciplines of the Geosciences from tectonics to magmatic processes to mantle dynamics to paleoclimate research. Sample-based data are diverse ranging from major and trace element abundances, radiogenic and stable isotope ratios of rocks, minerals, fluid or melt inclusions, to age determinations and descriptions of lithology, texture, mineral or fossil content, stratigraphic context, physical properties. The usefulness of these data is critically dependent on their integration as a coherent data set for each sample. If different data sets for the same sample cannot be combined because the sample cannot be unambiguously recognized, valuable information is lost. The ambiguous naming of samples has been a major problem in the geosciences. Different samples are often given identical names, and there is a tendency for different people analyzing the same sample to rename it in their publications according to local conventions. This situation has generated significant confusion, with samples often losing their "history", making it difficult or impossible to link available data. This has become most evident through the compilation of geochemical data in relational databases such as PetDB, NAVDAT, and GEOROC. While the relational data structure allows linking of disparate data for samples published in different references, linkages cannot be established due to ambiguous sample names. SESAR is a response to this problem of ambiguous naming of samples. SESAR will create a common clearinghouse that provides a centralized registry of sample identifiers, to avoid ambiguity, to systematize sample designation, and ensure that all information associated with a sample would in fact be unique. The project will build a web-based digital registry for solid earth samples that will provide for the first time a way to uniquely name and identify samples on a global scale, along with the generation of barcodes for sample labeling. We will describe a prototype of the registry, demonstrating its structure, user-friendly web interface, and functionality, and outline future plans for further enhancement of the system, pertaining to interoperability within the Geoscience Cyber-infrastructure. With the ability to track a sample through its history, the system will facilitate the ability of investigators to build on previously collected data on samples as new measurements are made or techniques are developed. The unique identifiers will also dramatically advance interoperability among existing and emerging data and information management systems for sample-based data such as CHRONOS, EarthChem, SedDB, PaleoStrat, opening an extensive range of new opportunities for discovery and for interdisciplinary approaches in research.
40Ar/39Ar Ages and tectonic setting of ophiolite from the Neyriz area, southeast Zagros Range, Iran
Lanphere, M.A.; Pamic, J.
1983-01-01
An ophiolite, considered to be an allochthonous fragment of Tethyan oceanic crust and mantle, crops out near Neyriz in the Zagros Range, Iran. 40Ar/39Ar ages ranging from 76.8 ± 23.8 Ma to 105 ± 23.3 Ma were measured on hornblende from five samples of plagiogranite and diabase from the ophiolite. The most precise ages are 85.9 ± 3.8 Ma for a diabase and 83.6 ± 8.4 Ma for a plagiogranite. The weighted mean age of hornblende from the five samples is 87.5 ± 7.2 Ma which indicates that the igneous part of the Neyriz ophiolite formed during the early part of the Late Cretaceous. Pargasite from amphibolite below peridotite of the Neyriz ophiolite has a 40Ar/39Ar age of 94.9 ± 7.6 Ma. The pargasite age agrees within analytical uncertainty with the ages measured on diabase and plagiogranite. Comparable ages have been measured on igneous rocks from the Samail ophiolite of Oman and on amphibolite below peridotite of the Samail ophiolite. © 1983.
NASA Astrophysics Data System (ADS)
Banas, Krzysztof; Banas, Agnieszka M.; Heussler, Sascha P.; Breese, Mark B. H.
2018-01-01
In contemporary spectroscopy there is a trend to record spectra with the highest possible spectral resolution. This is clearly justified if the spectral features in the spectrum are very narrow (for example, infra-red spectra of gas samples). However, there is a plethora of samples (in the liquid and especially in the solid form) where there is a natural spectral peak broadening, predominantly due to collisions and molecular proximity. Additionally, there are a number of portable devices (spectrometers) with inherently restricted spectral resolution, spectral range, or both, which are extremely useful in some field applications (archaeology, agriculture, food industry, cultural heritage, forensic science). In this paper, the influence of spectral resolution, spectral range and signal-to-noise ratio on the identification of high explosive substances is investigated by applying multivariate statistical methods to Fourier transform infra-red spectral data sets. All mathematical procedures on spectral data for dimension reduction, clustering and validation were implemented within the R open source environment.
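One simple way to probe such effects is to degrade measured spectra numerically before re-running the multivariate analysis. The sketch below coarsens the resolution by boxcar averaging and adds Gaussian noise scaled to a target signal-to-noise ratio; the degradation factor and the SNR definition (relative to the standard deviation of the coarsened signal) are assumptions, and the snippet is in Python rather than the R environment used in the paper.

```python
import numpy as np

def degrade_spectrum(y, factor=4, snr=50.0, rng=None):
    """Simulate a lower-resolution, noisier measurement of spectrum y:
    boxcar-average blocks of `factor` points (coarser spectral resolution)
    and add Gaussian noise scaled to the requested signal-to-noise ratio."""
    rng = np.random.default_rng() if rng is None else rng
    n = (len(y) // factor) * factor
    coarse = np.asarray(y[:n], dtype=float).reshape(-1, factor).mean(axis=1)
    noise_sd = coarse.std() / snr
    return coarse + rng.normal(0.0, noise_sd, size=coarse.shape)
```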
Bowling, Frank L; Stickings, Daryl S; Edwards-Jones, Valerie; Armstrong, David G; Boulton, Andrew Jm
2009-05-08
The purpose of this study was to assess the level of air contamination with bacteria after surgical hydrodebridement and to determine the effectiveness of hydro surgery on bacterial reduction of a simulated infected wound. Four porcine samples were scored then infected with a broth culture containing a variety of organisms and incubated at 37 degrees C for 24 hours. The infected samples were then debrided with the hydro surgery tool (Versajet, Smith and Nephew, Largo, Florida, USA). Samples were taken for microbiology, histology and scanning electron microscopy pre-infection, post infection and post debridement. Air bacterial contamination was evaluated before, during and after debridement by using active and passive methods; for active sampling the SAS-Super 90 air sampler was used, for passive sampling settle plates were located at set distances around the clinic room. There was no statistically significant reduction in bacterial contamination of the porcine samples post hydrodebridement. Analysis of the passive sampling showed a significant (p < 0.001) increase in microbial counts post hydrodebridement. Levels ranging from 950 colony forming units per meter cubed (CFUs/m3) to 16780 CFUs/m3 were observed with active sampling of the air whilst using hydro surgery equipment compared with a basal count of 582 CFUs/m3. During removal of the wound dressing, a significant increase was observed relative to basal counts (p < 0.05). Microbial load of the air samples was still significantly raised 1 hour post-therapy. The results suggest a significant increase in bacterial air contamination both by active sampling and passive sampling. We believe that action might be taken to mitigate fallout in the settings in which this technique is used.
A review of blood sample handling and pre-processing for metabolomics studies.
Hernandes, Vinicius Veri; Barbas, Coral; Dudzik, Danuta
2017-09-01
Metabolomics has been found to be applicable to a wide range of clinical studies, bringing a new era for improving clinical diagnostics, early disease detection, therapy prediction and treatment efficiency monitoring. A major challenge in metabolomics, particularly untargeted studies, is the extremely diverse and complex nature of biological specimens. Despite great advances in the field, there still exist fundamental needs for considering pre-analytical variability that can introduce bias to the subsequent analytical process, decrease the reliability of the results and, moreover, confound final research outcomes. Many researchers are mainly focused on the instrumental aspects of the biomarker discovery process, and sample-related variables sometimes seem to be overlooked. To bridge the gap, critical information and standardized protocols regarding experimental design, sample handling and pre-processing are highly desired. Characterization of the range of variation among sample collection methods is necessary to prevent misinterpretation of results and to ensure that observed differences are not due to an experimental bias caused by inconsistencies in sample processing. Herein, a systematic discussion of pre-analytical variables affecting metabolomics studies based on blood-derived samples is performed. Furthermore, we provide a set of recommendations concerning experimental design, collection, pre-processing procedures and storage conditions as a practical review that can guide and serve for the standardization of protocols and reduction of undesirable variation. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Yun, Huan; Liu, Xin; Cui, Jie; Yang, Jing; Liu, Ying
2017-08-08
A method for the screening of acidity regulators in dairy products based on ion chromatography-high resolution mass spectrometry (IC-HRMS) was set up. The dairy samples were extracted with KOH (pH 7-8), cleaned up on an Oasis MAX SPE column, and separated on a Dionex IonPac AS11-HC column (250 mm×4 mm). All the acidity regulators were detected in Orbitrap full scan mode. Taking six organic acids as examples, the calibration curves showed good linearity in the range of 0.05-5.00 mg/L, and the correlation coefficients (r) were higher than 0.99. For spiked samples, the recoveries were in the range of 74.3%-115.5% with relative standard deviations (RSDs) between 0.64% and 4.81%. Malic acid, citric acid, lactic acid, succinic acid and adipic acid could be detected by IC-HRMS in the commercial dairy samples. The results indicate that the method is simple, rapid and suitable for the qualitative screening of acidity regulators in dairy products.
NASA Astrophysics Data System (ADS)
Eom, Young-Ho; Jo, Hang-Hyun
2015-05-01
Many complex networks in natural and social phenomena have often been characterized by heavy-tailed degree distributions. However, due to the rapidly growing size of network data and concerns over privacy issues in using these data, it becomes more difficult to analyze complete data sets. Thus, it is crucial to devise effective and efficient estimation methods for the heavy tails of degree distributions in large-scale networks using only local information from a small fraction of sampled nodes. Here we propose a tail-scope method based on the local observational bias of the friendship paradox. We show that the tail-scope method outperforms uniform node sampling for estimating the heavy tails of degree distributions, while the opposite tendency is observed in the range of small degrees. In order to take advantage of both sampling methods, we devise a hybrid method that successfully recovers the whole range of degree distributions. Our tail-scope method shows how structural heterogeneities of large-scale complex networks can be used to effectively reveal the network structure with only limited local information.
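The friendship-paradox bias that a tail-scope-style estimator exploits can be illustrated with a short simulation: a random neighbour of a randomly chosen node tends to have a much higher degree than a uniformly sampled node, so neighbour sampling reaches the heavy tail with far fewer samples. The sketch below is a minimal illustration of that bias on a synthetic scale-free network, not the authors' estimator; the graph model and sample sizes are arbitrary choices.

```python
# Illustration of the friendship-paradox bias behind tail-scope-style sampling:
# a random neighbour of a random node oversamples high-degree nodes, so the
# heavy tail of the degree distribution is seen with fewer samples.
import random
import networkx as nx

G = nx.barabasi_albert_graph(n=10_000, m=3, seed=1)   # heavy-tailed toy network
nodes = list(G.nodes)

uniform_sample = [G.degree(random.choice(nodes)) for _ in range(500)]
neighbour_sample = [G.degree(random.choice(list(G.neighbors(random.choice(nodes)))))
                    for _ in range(500)]

print("mean degree, uniform nodes    :", sum(uniform_sample) / len(uniform_sample))
print("mean degree, random neighbours:", sum(neighbour_sample) / len(neighbour_sample))
print("max degree seen, uniform      :", max(uniform_sample))
print("max degree seen, neighbours   :", max(neighbour_sample))
```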
Dark Energy Survey Year 1 Results: galaxy mock catalogues for BAO
NASA Astrophysics Data System (ADS)
Avila, S.; Crocce, M.; Ross, A. J.; García-Bellido, J.; Percival, W. J.; Banik, N.; Camacho, H.; Kokron, N.; Chan, K. C.; Andrade-Oliveira, F.; Gomes, R.; Gomes, D.; Lima, M.; Rosenfeld, R.; Salvador, A. I.; Friedrich, O.; Abdalla, F. B.; Annis, J.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Carrasco Kind, M.; Carretero, J.; Castander, F. J.; Cunha, C. E.; da Costa, L. N.; Davis, C.; De Vicente, J.; Doel, P.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Hartley, W. G.; Hollowood, D.; Honscheid, K.; James, D. J.; Kuehn, K.; Kuropatkin, N.; Miquel, R.; Plazas, A. A.; Sanchez, E.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.; Dark Energy Survey Collaboration
2018-05-01
Mock catalogues are a crucial tool in the analysis of galaxy survey data, both for the accurate computation of covariance matrices and for the optimisation of analysis methodology and validation of data sets. In this paper, we present a set of 1800 galaxy mock catalogues designed to match the Dark Energy Survey Year-1 BAO sample (Crocce et al. 2017) in abundance, observational volume, redshift distribution and uncertainty, and redshift-dependent clustering. The simulated samples were built upon HALOGEN (Avila et al. 2015) halo catalogues, based on a 2LPT density field with an empirical halo bias. For each of them, a lightcone is constructed by the superposition of snapshots in the redshift range 0.45 < z < 1.4. Uncertainties introduced by so-called photometric redshift estimators were modelled with a double-skewed-Gaussian curve fitted to the data. We populate halos with galaxies by introducing a hybrid Halo Occupation Distribution - Halo Abundance Matching model with two free parameters. These are adjusted to achieve a galaxy bias evolution b(zph) that matches the data at the 1-σ level in the range 0.6 < zph < 1.0. We further analyse the galaxy mock catalogues and compare their clustering to the data using the angular correlation function w(θ), the comoving transverse-separation clustering ξ(s⊥) (restricted to μ < 0.8) and the angular power spectrum Cℓ, finding them in agreement. This is the first large set of three-dimensional {ra, dec, z} galaxy mock catalogues able to reproduce simultaneously and accurately the photometric redshift uncertainties and the galaxy clustering.
Béjaoui, Afef; Chaabane, Hédia; Jemli, Maroua; Boulila, Abdennacer; Boussaid, Mohamed
2013-12-01
Variation in the quantity and quality of the essential oil (EO) of a wild population of Origanum vulgare at different phenological stages, including vegetative, late vegetative, and flowering set, is reported. The oils of air-dried samples were obtained by hydrodistillation. The oil yields (w/w%) at the different stages were, in order, late vegetative (2.0%), early vegetative (1.7%), and flowering (0.6%) set. The oils were analyzed by gas chromatography (GC) and GC-mass spectrometry (GC-MS). In total, 36, 33, and 16 components were identified and quantified in the vegetative, late vegetative, and flowering set, representing 94.47%, 95.91%, and 99.62% of the oil, respectively. Carvacrol was the major compound in all samples. The ranges of the major constituents were as follows: carvacrol (61.08-83.37%), p-cymene (3.02-9.87%), and γ-terpinene (4.13-6.34%). Antibacterial activity of the oils was tested against three Gram-positive and two Gram-negative bacteria by the disc diffusion method, determining the diameter of inhibition and the minimum inhibitory concentration (MIC) values. The inhibition zones and MIC values for the bacterial strains sensitive to the EO of O. vulgare subsp. glandulosum were in the range of 9-36 mm and 125-600 μg/mL, respectively. The oils of the various phenological stages showed high activity against all tested bacteria, of which Bacillus subtilis was the most sensitive strain. Thus, they represent an inexpensive source of natural antibacterial substances that exhibit potential for use in pathogenic systems.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from the statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only a single layer is of interest, a simple random sampling procedure should be used, based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained with simple random sampling procedures.
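The two sizing rules contrasted in conclusions (2) and (3) can be sketched numerically: a simple-random-sampling size driven by an observed standard deviation and a target precision, and a Neyman (optimal) allocation of a fixed total across strata in proportion to N_h·σ_h. The figures below are hypothetical, chosen only to show the calculation, and are not values from the study.

```python
# Minimal sketch (illustrative numbers) of the two allocation strategies:
# simple random sampling sized from an observed SD, and stratified sampling
# with Neyman (optimal) allocation across depth strata.
import math

def srs_sample_size(sd, margin, z=1.96):
    """n needed so the sample mean is within +/- margin with ~95% confidence."""
    return math.ceil((z * sd / margin) ** 2)

def neyman_allocation(strata, n_total):
    """Split n_total across strata proportionally to N_h * sigma_h."""
    weights = [N * s for N, s in strata]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# Hypothetical field: SD of 4 %-vol soil moisture, target precision +/- 1 %-vol.
print("SRS sample size:", srs_sample_size(sd=4.0, margin=1.0))

# Hypothetical depth strata given as (stratum size N_h, stratum SD sigma_h);
# shallower layers are assumed more variable.
strata = [(100, 5.0), (100, 3.0), (100, 1.5)]
print("Neyman allocation of 30 samples:", neyman_allocation(strata, 30))
```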
Volatile organic compounds up to C20 emitted from motor vehicles; measurement methods
NASA Astrophysics Data System (ADS)
Zielinska, Barbara; Sagebiel, John C.; Harshfield, Gregory; Gertler, Alan W.; Pierson, William R.
To understand better the sources of observed differences between on-road vehicle emissions and model estimates, and to evaluate the emission of ozone precursors from motor vehicles, a series of experiments was conducted in the Fort McHenry Tunnel, Baltimore, Maryland (18-24 June 1992), and in the Tuscarora Mountain Tunnel, Pennsylvania (2-8 September 1992). Samples were collected using stainless steel canisters (whole air samples, analyzed for C2-C12 hydrocarbons), Tenax-TA solid adsorbent cartridges (for semi-volatile hydrocarbons, in the C8-C20 range), and 2,4-dinitrophenylhydrazine (DNPH) impregnated cartridges (for carbonyl compounds). The samples were analyzed using high resolution gas chromatographic separation with Fourier transform infrared/mass spectrometric detection (GC/IRD/MSD) for qualitative identification and with flame ionization detection (GC/FID) for quantitation of hydrocarbons, and high performance liquid chromatography (HPLC) for identification and quantitation of carbonyl compounds. A custom-designed database management system was used to handle the large data sets generated by these analyses. From the evaluation of canister and Tenax sample stability upon storage, it was found that hydrocarbons in the C8-C12 range seemed to be more stable in the Tenax cartridge than in the canister. The effect of the Nafion® dryer (frequently used for moisture removal prior to cryogenic concentration of the canister samples) was also assessed and it was found to lower the measured concentrations of hydrocarbons collected in the canisters. Comparison of hydrocarbon concentrations found in the Tenax and canister samples allows an assessment of the contribution of semi-volatile hydrocarbons (C10-C20 range, derived from Tenax data) to the total non-methane hydrocarbons (C2-C20, derived from canister and Tenax data). The results of this study show that hydrocarbons in the range of C10-C20 are important components of gas-phase hydrocarbons emitted from heavy-duty diesel vehicles (they account for approximately half of the total gas-phase non-methane hydrocarbon emission rates) and hence that solid adsorbent sampling should be used in addition to canister sampling in measurements of motor vehicle emissions.
Radiation detection method and system using the sequential probability ratio test
Nelson, Karl E [Livermore, CA; Valentine, John D [Redwood City, CA; Beauchamp, Brock R [San Ramon, CA
2007-07-17
A method and system using the Sequential Probability Ratio Test (SPRT) to enhance the detection of an elevated level of radiation, by determining whether a set of observations is consistent with a specified model within given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
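For count data such as detector readings, a Wald-style SPRT accumulates the log-likelihood ratio between a background-rate hypothesis and an elevated-rate hypothesis and stops as soon as it crosses thresholds set by the desired error rates. The sketch below is a generic textbook SPRT on Poisson counts with made-up rates; it is not the patented processing chain (dynamic background estimation, model adjustment) described in the abstract.

```python
# Generic Wald SPRT on Poisson counts: decide between background rate lambda0
# and elevated rate lambda1 as counts arrive, using Wald's thresholds.
import math
import numpy as np

def sprt_poisson(counts, lam0, lam1, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)      # cross -> accept "elevated source"
    lower = math.log(beta / (1 - alpha))      # cross -> accept "background only"
    llr = 0.0
    for i, k in enumerate(counts, start=1):
        # log-likelihood ratio increment for one Poisson observation k
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "source detected", i
        if llr <= lower:
            return "background", i
    return "undecided", len(counts)

rng = np.random.default_rng(0)
background_counts = rng.poisson(lam=5.0, size=200)   # simulated background data
print(sprt_poisson(background_counts, lam0=5.0, lam1=8.0))
```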
Green, R.O.; Pieters, C.; Mouroulis, P.; Eastwood, M.; Boardman, J.; Glavich, T.; Isaacson, P.; Annadurai, M.; Besse, S.; Barr, D.; Buratti, B.; Cate, D.; Chatterjee, A.; Clark, R.; Cheek, L.; Combe, J.; Dhingra, D.; Essandoh, V.; Geier, S.; Goswami, J.N.; Green, R.; Haemmerle, V.; Head, J.; Hovland, L.; Hyman, S.; Klima, R.; Koch, T.; Kramer, G.; Kumar, A.S.K.; Lee, Kenneth; Lundeen, S.; Malaret, E.; McCord, T.; McLaughlin, S.; Mustard, J.; Nettles, J.; Petro, N.; Plourde, K.; Racho, C.; Rodriquez, J.; Runyon, C.; Sellar, G.; Smith, C.; Sobel, H.; Staid, M.; Sunshine, J.; Taylor, L.; Thaisen, K.; Tompkins, S.; Tseng, H.; Vane, G.; Varanasi, P.; White, M.; Wilson, D.
2011-01-01
The NASA Discovery Moon Mineralogy Mapper imaging spectrometer was selected to pursue a wide range of science objectives requiring measurement of composition at fine spatial scales over the full lunar surface. To pursue these objectives, a broad spectral range imaging spectrometer with high uniformity and high signal-to-noise ratio capable of measuring compositionally diagnostic spectral absorption features from a wide variety of known and possible lunar materials was required. For this purpose the Moon Mineralogy Mapper imaging spectrometer was designed and developed that measures the spectral range from 430 to 3000 nm with 10 nm spectral sampling through a 24 degree field of view with 0.7 milliradian spatial sampling. The instrument has a signal-to-noise ratio of greater than 400 for the specified equatorial reference radiance and greater than 100 for the polar reference radiance. The spectral cross-track uniformity is >90% and spectral instantaneous field-of-view uniformity is >90%. The Moon Mineralogy Mapper was launched on Chandrayaan-1 on the 22nd of October. On the 18th of November 2008 the Moon Mineralogy Mapper was turned on and collected a first light data set within 24 h. During this early checkout period and throughout the mission the spacecraft thermal environment and orbital parameters varied more than expected and placed operational and data quality constraints on the measurements. On the 29th of August 2009, spacecraft communication was lost. Over the course of the flight mission 1542 downlinked data sets were acquired that provide coverage of more than 95% of the lunar surface. An end-to-end science data calibration system was developed and all measurements have been passed through this system and delivered to the Planetary Data System (PDS.NASA.GOV). An extensive effort has been undertaken by the science team to validate the Moon Mineralogy Mapper science measurements in the context of the mission objectives. A focused spectral, radiometric, spatial, and uniformity validation effort has been pursued with selected data sets including an Earth-view data set. With this effort an initial validation of the on-orbit performance of the imaging spectrometer has been achieved, including validation of the cross-track spectral uniformity and spectral instantaneous field of view uniformity. The Moon Mineralogy Mapper is the first imaging spectrometer to measure a data set of this kind at the Moon. These calibrated science measurements are being used to address the full set of science goals and objectives for this mission. Copyright 2011 by the American Geophysical Union.
C. Pieters,; P. Mouroulis,; M. Eastwood,; J. Boardman,; Green, R.O.; Glavich, T.; Isaacson, P.; Annadurai, M.; Besse, S.; Cate, D.; Chatterjee, A.; Clark, R.; Barr, D.; Cheek, L.; Combe, J.; Dhingra, D.; Essandoh, V.; Geier, S.; Goswami, J.N.; Green, R.; Haemmerle, V.; Head, J.; Hovland, L.; Hyman, S.; Klima, R.; Koch, T.; Kramer, G.; Kumar, A.S.K.; Lee, K.; Lundeen, S.; Malaret, E.; McCord, T.; McLaughlin, S.; Mustard, J.; Nettles, J.; Petro, N.; Plourde, K.; Racho, C.; Rodriguez, J.; Runyon, C.; Sellar, G.; Smith, C.; Sobel, H.; Staid, M.; Sunshine, J.; Taylor, L.; Thaisen, K.; Tompkins, S.; Tseng, H.; Vane, G.; Varanasi, P.; White, M.; Wilson, D.
2011-01-01
The NASA Discovery Moon Mineralogy Mapper imaging spectrometer was selected to pursue a wide range of science objectives requiring measurement of composition at fine spatial scales over the full lunar surface. To pursue these objectives, a broad spectral range imaging spectrometer with high uniformity and high signal-to-noise ratio capable of measuring compositionally diagnostic spectral absorption features from a wide variety of known and possible lunar materials was required. For this purpose the Moon Mineralogy Mapper imaging spectrometer was designed and developed that measures the spectral range from 430 to 3000 nm with 10 nm spectral sampling through a 24 degree field of view with 0.7 milliradian spatial sampling. The instrument has a signal-to-noise ratio of greater than 400 for the specified equatorial reference radiance and greater than 100 for the polar reference radiance. The spectral cross-track uniformity is >90% and spectral instantaneous field-of-view uniformity is >90%. The Moon Mineralogy Mapper was launched on Chandrayaan-1 on the 22nd of October. On the 18th of November 2008 the Moon Mineralogy Mapper was turned on and collected a first light data set within 24 h. During this early checkout period and throughout the mission the spacecraft thermal environment and orbital parameters varied more than expected and placed operational and data quality constraints on the measurements. On the 29th of August 2009, spacecraft communication was lost. Over the course of the flight mission 1542 downlinked data sets were acquired that provide coverage of more than 95% of the lunar surface. An end-to-end science data calibration system was developed and all measurements have been passed through this system and delivered to the Planetary Data System (PDS.NASA.GOV). An extensive effort has been undertaken by the science team to validate the Moon Mineralogy Mapper science measurements in the context of the mission objectives. A focused spectral, radiometric, spatial, and uniformity validation effort has been pursued with selected data sets including an Earth-view data set. With this effort an initial validation of the on-orbit performance of the imaging spectrometer has been achieved, including validation of the cross-track spectral uniformity and spectral instantaneous field of view uniformity. The Moon Mineralogy Mapper is the first imaging spectrometer to measure a data set of this kind at the Moon. These calibrated science measurements are being used to address the full set of science goals and objectives for this mission.
NASA Astrophysics Data System (ADS)
Cereceda-Balic, F.; Palomo-Marín, M. R.; Bernalte, E.; Vidal, V.; Christie, J.; Fadic, X.; Guevara, J. L.; Miro, C.; Pinilla Gil, E.
2012-02-01
Seasonal snow precipitation in the Andes mountain range is evaluated as an environmental indicator of the composition of atmospheric emissions in the Santiago de Chile metropolitan area, by measuring a set of representative trace elements in snow samples by ICP-MS. Three late-winter sampling campaigns (2003, 2008 and 2009) were conducted in three sampling areas around Cerro Colorado, a Central Andes mountain range sector 36 km NE of Santiago. Nevados de Chillán, a sector in the Andes located about 500 km south of the metropolitan area, was selected as a reference area. The experimental results at Cerro Colorado and Nevados de Chillán were compared with previously published data on fresh snow from remote and urban background sites. High snow concentrations of a range of anthropogenic marker elements were found at Cerro Colorado, probably derived from Santiago urban aerosol transport and deposition combined with the effect of mining and smelting activities in the area, whereas Nevados de Chillán levels roughly correspond to urban background areas. Enhanced concentrations in surface snow with respect to deeper samples are discussed. Significant differences found between the 2003, 2008 and 2009 anthropogenic source marker profiles at the Cerro Colorado sampling points were correlated with changes in emission sources in the city. The preliminary results obtained in this study, the first of its kind in the southern hemisphere, show the promising use of snow precipitation in the Central Andes as a suitable matrix for receptor model studies aimed at identifying and quantifying pollution sources in Santiago de Chile.
NASA Astrophysics Data System (ADS)
Nover, Georg; Hbib, Nasser; Mansfeld, Arne
2017-04-01
Changes in porosity, permeability, electrical conductivity and E-modulus were studied on sandstones from the Werkendam drillings WED2 (CO2-free) and WED3 (CO2-rich) (The Netherlands). WED2 and WED3 are separated by a fault. Porosities of the untreated samples range from <0.3% up to 16.5%, permeabilities from <0.01 mD up to >160 mD. Significant differences between samples from the WED2 and WED3 wells were not detected. The petrophysical properties of the whole set of samples were measured prior to any experiment; then in total 8 samples from WED2 and WED3 were selected for the following experiments with supercritical CO2 (scCO2). These were performed at pressures of 10-12 MPa and temperatures ranging from 100 up to 120°C. The pores were partially saturated with brine (0.1 M NaCl). In a first step the autoclave experiments lasted about 45 days and were then extended in a second series up to 120 days total reaction time. An increase in porosity, permeability and electrical conductivity was measured after each experimental series with scCO2. Two of the samples failed along fractures due to dissolution and the resulting loss of stability. The frequency-dependent complex conductivity was measured in the frequency range 10⁻³ Hz up to 45 kHz, thus giving access to fluid/solid interactions at the inner surface of the pores. In a final sequence the uniaxial compressive strength and E-modulus were measured on untreated and processed samples. Thus we could obtain an estimate of the weakening of mechanical stability caused by scCO2 treatment.
Satzke, Catherine; Dunne, Eileen M; Porter, Barbara D; Klugman, Keith P; Mulholland, E Kim
2015-11-01
The pneumococcus is a diverse pathogen whose primary niche is the nasopharynx. Over 90 different serotypes exist, and nasopharyngeal carriage of multiple serotypes is common. Understanding pneumococcal carriage is essential for evaluating the impact of pneumococcal vaccines. Traditional serotyping methods are cumbersome and insufficient for detecting multiple serotype carriage, and there are few data comparing the new methods that have been developed over the past decade. We established the PneuCarriage project, a large, international multi-centre study dedicated to the identification of the best pneumococcal serotyping methods for carriage studies. Reference sample sets were distributed to 15 research groups for blinded testing. Twenty pneumococcal serotyping methods were used to test 81 laboratory-prepared (spiked) samples. The five top-performing methods were used to test 260 nasopharyngeal (field) samples collected from children in six high-burden countries. Sensitivity and positive predictive value (PPV) were determined for the test methods and the reference method (traditional serotyping of >100 colonies from each sample). For the alternate serotyping methods, the overall sensitivity ranged from 1% to 99% (reference method 98%), and PPV from 8% to 100% (reference method 100%), when testing the spiked samples. Fifteen methods had ≥70% sensitivity to detect the dominant (major) serotype, whilst only eight methods had ≥70% sensitivity to detect minor serotypes. For the field samples, the overall sensitivity ranged from 74.2% to 95.8% (reference method 93.8%), and PPV from 82.2% to 96.4% (reference method 99.6%). The microarray had the highest sensitivity (95.8%) and high PPV (93.7%). The major limitation of this study is that not all of the available alternative serotyping methods were included. Most methods were able to detect the dominant serotype in a sample, but many performed poorly in detecting the minor serotype populations. Microarray with a culture amplification step was the top-performing method. Results from this comprehensive evaluation will inform future vaccine evaluation and impact studies, particularly in low-income settings, where pneumococcal disease burden remains high.
Brabcová, Ivana; Hlaváčková, Markéta; Satínský, Dalibor; Solich, Petr
2013-11-15
A simple and automated HPLC column-switching method with rapid sample pretreatment has been developed for the quantitative determination of β-carotene in food supplements. Commercial samples of food supplements were dissolved in chloroform with the help of saponification with a 1 M solution of sodium hydroxide in an ultrasound bath. A 20-min sample dissolution/extraction step was necessary before the chromatographic analysis to transfer β-carotene from the solid state of the food supplement preparations (capsules, tablets) into chloroform solution. A sample volume of 3 μL of the chloroform phase was directly injected into the HPLC system. On-line sample clean-up was then achieved on the pretreatment precolumn Chromolith Guard Cartridge RP-18e (Merck), 10×4.6 mm, with a washing mobile phase (methanol:water, 92:8, v/v) at a flow rate of 1.5 mL/min. The valve switch to the analytical column was set at 2.5 min in a back-flush mode. After column switching to the analytical column Ascentis Express C-18, 30×4.6 mm, particle size 2.7 μm (Sigma Aldrich), the separation and determination of β-carotene in food supplements was performed using a mobile phase consisting of 100% methanol, a column temperature of 60°C and a flow rate of 1.5 mL/min. The detector was set at 450 nm. Under the optimum chromatographic conditions the standard calibration curve showed good linearity between the peak areas and the concentration of β-carotene over 20-200 μg/mL, with a correlation coefficient r² = 0.999014 (n = 6). Accuracy of the method, defined as mean recovery, was in the range 96.66-102.40%. The intraday method precision was satisfactory at three concentration levels (20, 125 and 200 μg/mL), with relative standard deviations in the range 0.90-1.02%. The method provides high sample throughput, combining the column-switching pretreatment process and the analysis in one step within a short total run time (6 min). Copyright © 2013 Elsevier Ltd. All rights reserved.
Satzke, Catherine; Dunne, Eileen M.; Porter, Barbara D.; Klugman, Keith P.; Mulholland, E. Kim
2015-01-01
Background The pneumococcus is a diverse pathogen whose primary niche is the nasopharynx. Over 90 different serotypes exist, and nasopharyngeal carriage of multiple serotypes is common. Understanding pneumococcal carriage is essential for evaluating the impact of pneumococcal vaccines. Traditional serotyping methods are cumbersome and insufficient for detecting multiple serotype carriage, and there are few data comparing the new methods that have been developed over the past decade. We established the PneuCarriage project, a large, international multi-centre study dedicated to the identification of the best pneumococcal serotyping methods for carriage studies. Methods and Findings Reference sample sets were distributed to 15 research groups for blinded testing. Twenty pneumococcal serotyping methods were used to test 81 laboratory-prepared (spiked) samples. The five top-performing methods were used to test 260 nasopharyngeal (field) samples collected from children in six high-burden countries. Sensitivity and positive predictive value (PPV) were determined for the test methods and the reference method (traditional serotyping of >100 colonies from each sample). For the alternate serotyping methods, the overall sensitivity ranged from 1% to 99% (reference method 98%), and PPV from 8% to 100% (reference method 100%), when testing the spiked samples. Fifteen methods had ≥70% sensitivity to detect the dominant (major) serotype, whilst only eight methods had ≥70% sensitivity to detect minor serotypes. For the field samples, the overall sensitivity ranged from 74.2% to 95.8% (reference method 93.8%), and PPV from 82.2% to 96.4% (reference method 99.6%). The microarray had the highest sensitivity (95.8%) and high PPV (93.7%). The major limitation of this study is that not all of the available alternative serotyping methods were included. Conclusions Most methods were able to detect the dominant serotype in a sample, but many performed poorly in detecting the minor serotype populations. Microarray with a culture amplification step was the top-performing method. Results from this comprehensive evaluation will inform future vaccine evaluation and impact studies, particularly in low-income settings, where pneumococcal disease burden remains high. PMID:26575033
Tang, Ning; Pahalawatta, Vihanga; Frank, Andrea; Bagley, Zowie; Viana, Raquel; Lampinen, John; Leckie, Gregor; Huang, Shihai; Abravaya, Klara; Wallis, Carole L
2017-07-01
HIV RNA suppression is a key indicator for monitoring success of antiretroviral therapy. From a logistical perspective, viral load (VL) testing using Dried Blood Spots (DBS) is a promising alternative to plasma based VL testing in resource-limited settings. To evaluate the analytical and clinical performance of the Abbott RealTime HIV-1 assay using a fully automated one-spot DBS sample protocol. Limit of detection (LOD), linearity, lower limit of quantitation (LLQ), upper limit of quantitation (ULQ), and precision were determined using serial dilutions of HIV-1 Virology Quality Assurance stock (VQA Rush University), or HIV-1-containing armored RNA, made in venous blood. To evaluate correlation, bias, and agreement, 497 HIV-1 positive adult clinical samples were collected from Ivory Coast, Uganda and South Africa. For each HIV-1 participant, DBS-fingerprick, DBS-venous and plasma sample results were compared. Correlation and bias values were obtained. The sensitivity and specificity were analyzed at a threshold of 1000 HIV-1 copies/mL generated using the standard plasma protocol. The Abbott HIV-1 DBS protocol had an LOD of 839 copies/mL, a linear range from 500 to 1×10⁷ copies/mL, an LLQ of 839 copies/mL, a ULQ of 1×10⁷ copies/mL, and an inter-assay SD of ≤0.30 log copies/mL for all tested levels within this range. With clinical samples, the correlation coefficient (r value) was 0.896 between DBS-fingerprick and plasma and 0.901 between DBS-venous and plasma, and the bias was -0.07 log copies/mL between DBS-fingerprick and plasma and -0.02 log copies/mL between DBS-venous and plasma. The sensitivity of DBS-fingerprick and DBS-venous was 93%, while the specificity of both DBS methods was 95%. The results demonstrated that the Abbott RealTime HIV-1 assay with DBS sample protocol is highly sensitive, specific and precise across a wide dynamic range and correlates well with plasma values. The Abbott RealTime HIV-1 assay with DBS sample protocol provides an alternative sample collection and transfer option in resource-limited settings and expands the utility of a viral load test to monitor HIV-1 ART treatment for infected patients. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
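The headline comparison statistics in an evaluation like this (correlation of log10 viral loads between methods, mean bias, and sensitivity/specificity against the 1000 copies/mL threshold defined on the plasma protocol) can be reproduced from paired measurements with a few lines of NumPy. The sketch below uses simulated paired values, not the study data, and the noise and bias magnitudes are arbitrary assumptions.

```python
# Paired-method comparison metrics: Pearson r on log10 viral loads, mean bias,
# and sensitivity/specificity at the 1000 copies/mL threshold.  Simulated data.
import numpy as np

rng = np.random.default_rng(42)
plasma = 10 ** rng.uniform(2, 6, size=200)              # hypothetical copies/mL
dbs = plasma * 10 ** rng.normal(-0.05, 0.25, size=200)  # DBS with small bias and noise

log_p, log_d = np.log10(plasma), np.log10(dbs)
r = np.corrcoef(log_p, log_d)[0, 1]
bias = np.mean(log_d - log_p)                           # mean log difference

threshold = 1000.0
true_pos = plasma >= threshold                          # plasma protocol as reference
test_pos = dbs >= threshold
sensitivity = (test_pos & true_pos).sum() / true_pos.sum()
specificity = (~test_pos & ~true_pos).sum() / (~true_pos).sum()

print(f"r = {r:.3f}, bias = {bias:+.2f} log copies/mL")
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```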
U-series Chronology of volcanoes in the Central Kenya Peralkaline Province, East African Rift
NASA Astrophysics Data System (ADS)
Negron, L. M.; Ma, L.; Deino, A.; Anthony, E. Y.
2012-12-01
We are studying the East African Rift System (EARS) in the Central Kenya Peralkaline Province (CKPP), specifically the young volcanoes Mt. Suswa, Longonot, and Menengai. Ar dates by Al Deino on K-feldspar phenocrysts show a strong correlation between older Ar ages and decreasing 230Th/232Th, which we interpret to reflect the age of eruption. This system has been the subject of recent research by several UTEP alumni, including Antony Wamalwa, who used potential field and magnetotelluric (MT) data to identify and characterize fractures and hydrothermal fluids. Geochemical modeling by John White, Vanessa Espejel and Peter Omenda also led to the hypothesis of possible disequilibrium in these young, mainly obsidian samples in their post-eruptive history. A pilot study of 8 samples (also including the W-2a USGS standard and a blank) establishes the correlation between the ages determined by Deino and the 230Th/232Th ratios. All 8 samples from Mt. Suswa showed a 234U/238U ratio of 1, indicating secular equilibrium and that these are very fresh samples with no post-eruptive decay or leaching of U isotopes. The pilot set comprised four samples from the ring-trench group (RTG) with ages ranging from 7 ka to present, two samples from the post-caldera stage ranging from 31-10 ka, one sample from the syn-caldera stage dated at 41 ka, and one sample from the pre-caldera stage dated at 112 ka. The young RTG had a 230Th/232Th fractionation ratio of 0.8, ranging to 0.6 for the older pre-caldera stage. From these data and 14C ages by Nick Rogers, the data from Longonot volcano were also similar to the 230Th/232Th ratios we found. Rogers' data place the Longonot volcano ages at no more than 20 ka, with the youngest samples also roughly at 0.8 disequilibrium. These strong correlations between the Mt. Suswa pilot study, the 40Ar ages by Deino, and the 14C ages from Rogers have led to the present U-series data set of the youngest samples from the rest of the CKPP volcanoes, including Menengai, additional samples from Longonot, and Olkaria. Because lateral migration along an axial dike swarm has operated in other parts of the EARS, we have also chosen to run samples from the adjacent mafic fields of Elmenteita, Tandamara, and Ndabibi to see whether there is a trend in the 230Th/232Th ratios at the time of eruption and how close these samples come to unity. This would answer whether similar 230Th/232Th ratios imply that the mafic fields feed the calderas.
7 CFR 27.23 - Duplicate sets of samples of cotton.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...
7 CFR 27.23 - Duplicate sets of samples of cotton.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...
7 CFR 27.23 - Duplicate sets of samples of cotton.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...
7 CFR 27.23 - Duplicate sets of samples of cotton.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...
7 CFR 27.23 - Duplicate sets of samples of cotton.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...
Stephens, David; Diesing, Markus
2014-01-01
Detailed seabed substrate maps are increasingly in demand for effective planning and management of marine ecosystems and resources. It has become common to use remotely sensed multibeam echosounder data in the form of bathymetry and acoustic backscatter in conjunction with ground-truth sampling data to inform the mapping of seabed substrates. Whilst, until recently, such data sets have typically been classified by expert interpretation, it is now clear that more objective, faster and repeatable methods of seabed classification are required. This study compares the performance of a range of supervised classification techniques for predicting substrate type from multibeam echosounder data. The study area is located in the North Sea, off the north-east coast of England. A total of 258 ground-truth samples were classified into four substrate classes. Multibeam bathymetry and backscatter data, and a range of secondary features derived from these datasets, were used in this study. Six supervised classification techniques were tested: Classification Trees, Support Vector Machines, k-Nearest Neighbour, Neural Networks, Random Forest and Naive Bayes. Each classifier was trained multiple times using different input features, including (i) the two primary features of bathymetry and backscatter, (ii) a subset of the features chosen by a feature selection process and (iii) all of the input features. The predictive performance of the models was validated using a separate test set of ground-truth samples. The statistical significance of model performance relative to a simple baseline model (Nearest Neighbour predictions on bathymetry and backscatter) was tested to assess the benefits of using more sophisticated approaches. The best-performing models were tree-based methods and Naive Bayes, which achieved accuracies of around 0.8 and kappa coefficients of up to 0.5 on the test set. The models that used all input features did not generally perform well, highlighting the need for some means of feature selection.
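A workflow of this kind, training several scikit-learn classifiers on acoustic features, holding out a test set, and scoring each against a simple nearest-neighbour baseline with accuracy and Cohen's kappa, can be sketched as below. The data are synthetic stand-ins that only mimic the sample and class counts quoted above; the feature count and model settings are illustrative assumptions, not the study's configuration.

```python
# Compare supervised classifiers for substrate prediction on held-out data,
# scored with accuracy and Cohen's kappa against a 2-feature kNN baseline.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in: 258 "ground-truth samples", 4 substrate classes,
# 8 acoustic features (e.g. bathymetry, backscatter, derived terrain features).
X, y = make_classification(n_samples=258, n_features=8, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

models = {
    "baseline kNN (2 primary features)": KNeighborsClassifier(n_neighbors=1),
    "Random Forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    train = X_tr[:, :2] if "baseline" in name else X_tr
    test = X_te[:, :2] if "baseline" in name else X_te
    pred = model.fit(train, y_tr).predict(test)
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.2f}, "
          f"kappa={cohen_kappa_score(y_te, pred):.2f}")
```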
The Fungal Frontier: A Comparative Analysis of Methods Used in the Study of the Human Gut Mycobiome.
Huseyin, Chloe E; Rubio, Raul Cabrera; O'Sullivan, Orla; Cotter, Paul D; Scanlan, Pauline D
2017-01-01
The human gut is host to a diverse range of fungal species, collectively referred to as the gut "mycobiome". The gut mycobiome is emerging as an area of considerable research interest due to the potential roles of these fungi in human health and disease. However, there is no consensus on the best or most suitable methodologies for characterizing the human gut mycobiome. The aim of this study is to provide a comparative analysis of several previously published mycobiome-specific culture-dependent and -independent methodologies, including choice of culture media, incubation conditions (aerobic versus anaerobic), DNA extraction method, primer set and freezing of fecal samples, to assess their relative merits and suitability for gut mycobiome analysis. There was no significant effect of media type or aeration on culture-dependent results. However, freezing was found to have a significant effect on fungal viability, with significantly lower fungal numbers recovered from frozen samples. DNA extraction method had a significant effect on DNA yield and quality. However, freezing and extraction method did not have any impact on either α or β diversity. There was also considerable variation in the ability of different fungal-specific primer sets to generate PCR products for subsequent sequence analysis. Through this investigation, two DNA extraction methods and one primer set were identified that facilitated the analysis of the mycobiome for all samples in this study. Ultimately, a diverse range of fungal species were recovered using both approaches, with Candida and Saccharomyces identified as the most common fungal species recovered using culture-dependent and culture-independent methods, respectively. As has been apparent from ecological surveys of the bacterial fraction of the gut microbiota, the use of different methodologies can also impact our understanding of gut mycobiome composition and therefore requires careful consideration. Future research into the gut mycobiome needs to adopt a common strategy to minimize potentially confounding effects of methodological choice and to facilitate comparative analysis of datasets.
Alchemical prediction of hydration free energies for SAMPL
Mobley, David L.; Liu, Shaui; Cerutti, David S.; Swope, William C.; Rice, Julia E.
2013-01-01
Hydration free energy calculations have become important tests of force fields. Alchemical free energy calculations based on molecular dynamics simulations provide a rigorous way to calculate these free energies for a particular force field, given sufficient sampling. Here, we report results of alchemical hydration free energy calculations for the set of small molecules comprising the 2011 Statistical Assessment of Modeling of Proteins and Ligands (SAMPL) challenge. Our calculations are largely based on the Generalized Amber Force Field (GAFF) with several different charge models, and we achieved RMS errors in the 1.4-2.2 kcal/mol range depending on charge model, marginally higher than what we typically observed in previous studies [1-5]. The test set consists of ethane, biphenyl, and a dibenzyl dioxin, as well as a series of chlorinated derivatives of each. We found that, for this set, using high-quality partial charges from MP2/cc-PVTZ SCRF RESP fits provided marginally improved agreement with experiment over using AM1-BCC partial charges as we have more typically done, in keeping with our recent findings [5]. Switching to OPLS Lennard-Jones parameters with AM1-BCC charges also improves agreement with experiment. We also find a number of chemical trends within each molecular series which we can explain, but there are also some surprises, including some that are captured by the calculations and some that are not. PMID:22198475
Alghanim, Hussain; Antunes, Joana; Silva, Deborah Soares Bispo Santos; Alho, Clarice Sampaio; Balamurugan, Kuppareddi; McCord, Bruce
2017-11-01
Recent developments in the analysis of epigenetic DNA methylation patterns have demonstrated that certain genetic loci show a linear correlation with chronological age. The goal of this study is to identify a new set of epigenetic methylation markers for the forensic estimation of human age. A total of 27 CpG sites at three genetic loci, SCGN, DLX5 and KLF14, were examined to evaluate the correlation of their methylation status with age. These sites were evaluated using 72 blood samples and 91 saliva samples collected from volunteers with ages ranging from 5 to 73 years. DNA was bisulfite modified followed by PCR amplification and pyrosequencing to determine the level of DNA methylation at each CpG site. In this study, certain CpG sites in the SCGN and KLF14 loci showed methylation levels that were correlated with chronological age; however, the tested CpG sites in DLX5 did not show a correlation with age. Using a training set of 52 saliva samples, two age-predictor models were developed by means of multivariate linear regression analysis. The two models performed similarly, with a single-locus model explaining 85% of the age variance at a mean absolute deviation of 5.8 years and a dual-locus model explaining 84% of the age variance with a mean absolute deviation of 6.2 years. In the validation set, the mean absolute deviation was measured to be 8.0 years and 7.1 years for the single- and dual-locus models, respectively. Another age-predictor model was also developed using a training set of 40 blood samples, which accounted for 71% of the age variance. This model gave a mean absolute deviation of 6.6 years for the training set and 10.3 years for the validation set. The results indicate that specific CpGs in SCGN and KLF14 can be used as potential epigenetic markers to estimate age using saliva and blood specimens. These epigenetic markers could provide important information in cases where the determination of a suspect's age is critical in developing investigative leads. Copyright © 2017. Published by Elsevier B.V.
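The modelling step described above is, at its core, an ordinary multivariate linear regression from CpG methylation fractions to age, evaluated by explained variance and mean absolute deviation on a held-out set. The sketch below reproduces that pipeline on synthetic methylation data with made-up slopes; the comments naming SCGN and KLF14 CpGs are purely illustrative and do not reflect the study's measured values.

```python
# Age prediction from CpG methylation fractions by multivariate linear
# regression, reporting R^2 and mean absolute deviation (MAD).  Synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(1)
age = rng.uniform(5, 73, size=91)                       # years (saliva donors)
# Hypothetical CpG methylation fractions drifting linearly with age plus noise.
cpg = np.column_stack([
    0.20 + 0.006 * age + rng.normal(0, 0.05, age.size),   # e.g. a SCGN CpG
    0.80 - 0.004 * age + rng.normal(0, 0.05, age.size),   # e.g. a KLF14 CpG
])

train, valid = slice(0, 52), slice(52, None)            # 52 training samples
model = LinearRegression().fit(cpg[train], age[train])

for name, idx in [("training", train), ("validation", valid)]:
    pred = model.predict(cpg[idx])
    print(f"{name}: R^2 = {r2_score(age[idx], pred):.2f}, "
          f"MAD = {mean_absolute_error(age[idx], pred):.1f} years")
```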
Wang, Xuefeng; Lee, Seunggeun; Zhu, Xiaofeng; Redline, Susan; Lin, Xihong
2013-12-01
Family-based genetic association studies of related individuals provide opportunities to detect genetic variants that complement studies of unrelated individuals. Most statistical methods for family association studies of common variants are single-marker based, testing one SNP at a time. In this paper, we consider testing the effect of an SNP set, e.g., SNPs in a gene, in family studies, for both continuous and discrete traits. Specifically, we propose a generalized estimating equation (GEE) based kernel association test, a variance-component based testing method, to test for the association between a phenotype and multiple variants in an SNP set jointly using family samples. The proposed approach allows for both continuous and discrete traits, where the correlation among family members is taken into account through the use of an empirical covariance estimator. We derive the theoretical distribution of the proposed statistic under the null and develop analytical methods to calculate the P-values. We also propose an efficient resampling method for correcting for small-sample bias in family studies. The proposed method allows for easily incorporating covariates and SNP-SNP interactions. Simulation studies show that the proposed method properly controls type I error rates under both random and ascertained sampling schemes in family studies. We demonstrate through simulation studies that our approach has superior performance for association mapping compared to the single-marker based minimum P-value GEE test for an SNP-set effect over a range of scenarios. We illustrate the application of the proposed method using data from the Cleveland Family GWAS Study. © 2013 WILEY PERIODICALS, INC.
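The core of a variance-component kernel association test is a quadratic score statistic Q = (y − μ̂)ᵀK(y − μ̂), where K is a kernel built from the genotypes of the SNP set. The sketch below computes that statistic with a simple linear kernel on simulated unrelated samples and uses a permutation p-value in place of the analytic null distribution; the paper's method additionally handles family correlation through a GEE-type working covariance, which this toy version does not attempt.

```python
# Toy variance-component kernel score test for an SNP set:
# Q = (y - mu)' K (y - mu) with a linear kernel K = G G', permutation p-value.
import numpy as np

rng = np.random.default_rng(7)
n, p = 300, 12                                      # samples, SNPs in the set
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # genotypes coded 0/1/2
y = 0.3 * G[:, 0] - 0.2 * G[:, 3] + rng.normal(size=n)  # continuous trait

resid = y - y.mean()                 # residuals under an intercept-only null
K = G @ G.T                          # linear kernel over the SNP set
Q = resid @ K @ resid                # score-type test statistic

perm_stats = []
for _ in range(2000):                # permutation null (unrelated samples only)
    r = rng.permutation(resid)
    perm_stats.append(r @ K @ r)
p_value = (np.array(perm_stats) >= Q).mean()
print(f"Q = {Q:.1f}, permutation p = {p_value:.4f}")
```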
Diagnostic accuracy of a two-item Drug Abuse Screening Test (DAST-2).
Tiet, Quyen Q; Leyva, Yani E; Moos, Rudolf H; Smith, Brandy
2017-11-01
Drug use is prevalent and costly to society, but individuals with drug use disorders (DUDs) are under-diagnosed and under-treated, particularly in primary care (PC) settings. Drug screening instruments have been developed to identify patients with DUDs and facilitate treatment. The Drug Abuse Screening Test (DAST) is one of the most well-known drug screening instruments. However, similar to many such instruments, it is too long for routine use in busy PC settings. This study developed and validated a briefer and more practical DAST for busy PC settings. We recruited 1300 PC patients in two Department of Veterans Affairs (VA) clinics. Participants responded to a structured diagnostic interview. We randomly selected half of the sample to develop and the other half to validate the new instrument. We employed signal detection techniques to select the best DAST items to identify DUDs (based on the MINI) and negative consequences of drug use (measured by the Inventory of Drug Use Consequences). Performance indicators were calculated. The two-item DAST (DAST-2) was 97% sensitive and 91% specific for DUDs in the development sample and 95% sensitive and 89% specific in the validation sample. It was highly sensitive and specific for DUD and negative consequences of drug use in subgroups of patients, including gender, age, race/ethnicity, marital status, educational level, and posttraumatic stress disorder status. The DAST-2 is an appropriate drug screening instrument for routine use in PC settings in the VA and may be applicable in a broader range of PC clinics. Published by Elsevier Ltd.
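Sensitivity, specificity and related performance indicators of a brief screener against a diagnostic reference interview come straight from the 2×2 table of screen results versus diagnoses. The helper below shows the arithmetic; the counts are illustrative, not the study's data.

```python
# Performance indicators for a screening test against a diagnostic reference,
# computed from a 2x2 table (true/false positives and negatives).
def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # screen-positive among those with the disorder
        "specificity": tn / (tn + fp),   # screen-negative among those without it
        "PPV": tp / (tp + fp),           # positive predictive value
        "NPV": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts: 100 patients with a DUD, 550 without, ~97% sensitive screen.
print(screening_metrics(tp=97, fp=50, fn=3, tn=500))
```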
Kadar, Hanane; Veyrand, Bruno; Barbarossa, Andrea; Pagliuca, Giampiero; Legrand, Arnaud; Bosher, Cécile; Boquien, Clair-Yves; Durand, Sophie; Monteau, Fabrice; Antignac, Jean-Philippe; Le Bizec, Bruno
2011-10-01
Perfluorinated compounds (PFCs) are man-made chemicals for which endocrine-disrupting properties and related possible side effects on human health have been reported, particularly in the case of exposure during the early stages of development (notably the perinatal period). Existing analytical methods dedicated to PFC monitoring in food and/or human fluids are currently based on liquid chromatography coupled to tandem mass spectrometry, and were recently demonstrated to present some limitations in terms of sensitivity and/or specificity. An alternative strategy dedicated to the analysis of fourteen PFCs in human breast milk is proposed, based on an effective sample preparation followed by liquid chromatography coupled to high resolution mass spectrometry (LC-HRMS). This methodology confirmed the high interest of HRMS after negative ionization for such halogenated substances, and permitted detection limits around the pg mL⁻¹ range to be reached with outstanding signal specificity compared to LC-MS/MS. The proposed method was applied to a first set of 30 breast milk samples from French women. The main PFCs detected in all these samples were PFOS and PFOA, with median values of 74 (range 24 to 171) and 57 (range 18 to 102) pg mL⁻¹, respectively. These exposure data appear to be in the same range as other values reported for European countries. Copyright © 2011 Elsevier Ltd. All rights reserved.
Quantitative Assessment of Molecular Dynamics Sampling for Flexible Systems.
Nemec, Mike; Hoffmann, Daniel
2017-02-14
Molecular dynamics (MD) simulation is a natural method for the study of flexible molecules but at the same time is limited by the large size of the conformational space of these molecules. We ask by how much the MD sampling quality for flexible molecules can be improved by two means: the use of diverse sets of trajectories starting from different initial conformations to detect deviations between samples and sampling with enhanced methods such as accelerated MD (aMD) or scaled MD (sMD) that distort the energy landscape in controlled ways. To this end, we test the effects of these approaches on MD simulations of two flexible biomolecules in aqueous solution, Met-Enkephalin (5 amino acids) and HIV-1 gp120 V3 (a cycle of 35 amino acids). We assess the convergence of the sampling quantitatively with known, extensive measures of cluster number N_c and cluster distribution entropy S_c and with two new quantities, conformational overlap O_conf and density overlap O_dens, both conveniently ranging from 0 to 1. These new overlap measures quantify self-consistency of sampling in multitrajectory MD experiments, a necessary condition for converged sampling. A comprehensive assessment of sampling quality of MD experiments identifies the combination of diverse trajectory sets and aMD as the most efficient approach among those tested. However, analysis of O_dens between conventional and aMD trajectories also reveals that we have not completely corrected aMD sampling for the distorted energy landscape. Moreover, for V3, the courses of N_c and O_dens indicate that much higher resources than those generally invested today will probably be needed to achieve convergence. The comparative analysis also shows that conventional MD simulations with insufficient sampling can be easily misinterpreted as being converged.
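A simplified stand-in for these multi-trajectory diagnostics is to cluster the pooled conformations from two independent runs and then compare the runs' cluster occupancies: the number of populated clusters, a cluster-distribution entropy, and a density-overlap-style score (the summed per-cluster minimum of the two occupancy fractions, which reaches 1 only when the runs sample identically). The toy 2-D example below illustrates the bookkeeping; the paper's exact definitions of O_conf and O_dens differ in detail, so treat this as an analogy rather than their algorithm.

```python
# Cluster pooled samples from two independent runs, then compare the runs'
# cluster occupancies: cluster count, distribution entropy, density overlap.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
run_a = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))   # "trajectory" A
run_b = rng.normal(loc=0.5, scale=1.0, size=(1000, 2))   # slightly shifted run B

labels = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(
    np.vstack([run_a, run_b]))
lab_a, lab_b = labels[:1000], labels[1000:]

occ_a = np.bincount(lab_a, minlength=20) / lab_a.size    # occupancy fractions
occ_b = np.bincount(lab_b, minlength=20) / lab_b.size

n_clusters = np.count_nonzero(occ_a + occ_b)
pooled = (occ_a + occ_b) / 2
entropy = -np.sum(pooled[pooled > 0] * np.log(pooled[pooled > 0]))
overlap = np.minimum(occ_a, occ_b).sum()                 # 1.0 iff occupancies match

print(f"populated clusters: {n_clusters}, entropy: {entropy:.2f}, "
      f"density overlap: {overlap:.2f}")
```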
La2-xSrxCuO4-δ superconducting samples prepared by the wet-chemical method
NASA Astrophysics Data System (ADS)
Loose, A.; Gonzalez, J. L.; Lopez, A.; Borges, H. A.; Baggio-Saitovitch, E.
2009-10-01
In this work, we report on the physical properties of good-quality polycrystalline superconducting samples of La2-xSrxCu1-yZnyO4-δ (y = 0, 0.02) prepared by a wet-chemical method, focusing on the temperature dependence of the critical current. Using the wet-chemical method, we were able to produce samples with improved homogeneity compared to the solid-state method. A complete set of samples with several carrier concentrations, ranging from the underdoped (strontium concentration x ≈ 0.05) to the highly overdoped (x ≈ 0.25) region, were prepared and investigated. The X-ray diffraction analysis, zero-field cooling magnetization and electrical resistivity measurements were reported on earlier. The structural parameters of the prepared samples seem to be slightly modified by the preparation method and their critical temperatures were lower than reported in the literature. The temperature dependence of the critical current was explained by a theoretical model which took the granular structure of the samples into account.
Marques, Sara S.; Magalhães, Luís M.; Tóth, Ildikó V.; Segundo, Marcela A.
2014-01-01
Total antioxidant capacity (TAC) assays are recognized as instrumental for establishing the antioxidant status of biological samples; however, varying experimental conditions result in conclusions that may not be transposable to other settings. After selection of the complexing agent, reagent addition order, and buffer type and concentration, copper reducing assays were adapted to a high-throughput scheme and validated using the model biological antioxidant compounds ascorbic acid, Trolox (a soluble analogue of vitamin E), uric acid and glutathione. A critical comparison was made based on real samples, including the NIST-909c human serum certified sample and five study samples. The validated method provided a linear range up to 100 µM Trolox (limit of detection 2.3 µM; limit of quantification 7.7 µM), with recovery results above 85% and precision <5%. The validated method, with its increased sensitivity, is a sound choice for the assessment of TAC in serum samples. PMID:24968275
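Figures of merit such as the linear range, limit of detection and limit of quantification quoted above are conventionally derived from a calibration line, with LOD ≈ 3.3·σ/slope and LOQ ≈ 10·σ/slope, where σ is the residual standard deviation of the fit. The sketch below runs that calculation on invented Trolox calibration points; it is not the study's calibration data, and the study may have used a different LOD convention.

```python
# Calibration figures of merit from a standard curve: linear fit, then
# LOD ~ 3.3*sigma/slope and LOQ ~ 10*sigma/slope (sigma = residual SD).
import numpy as np

conc = np.array([0, 10, 25, 50, 75, 100], dtype=float)      # µM Trolox standards
signal = (0.012 * conc + 0.03
          + np.random.default_rng(5).normal(0, 0.004, conc.size))  # absorbance-like

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                                # 2 fitted parameters

print(f"slope = {slope:.4f} AU/µM, "
      f"LOD = {3.3 * sigma / slope:.1f} µM, LOQ = {10 * sigma / slope:.1f} µM")
```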
Air Flow and Pressure Drop Measurements Across Porous Oxides
NASA Technical Reports Server (NTRS)
Fox, Dennis S.; Cuy, Michael D.; Werner, Roger A.
2008-01-01
This report summarizes the results of air flow tests across eight porous, open cell ceramic oxide samples. During ceramic specimen processing, the porosity was formed using the sacrificial template technique, with two different sizes of polystyrene beads used for the template. The samples were initially supplied with thicknesses ranging from 0.14 to 0.20 in. (0.35 to 0.50 cm) and nonuniform backside morphology (some areas dense, some porous). Samples were therefore ground to a thickness of 0.12 to 0.14 in. (0.30 to 0.35 cm) using dry 120 grit SiC paper. Pressure drop versus air flow is reported. Comparisons of samples with thickness variations are made, as are pressure drop estimates. As the density of the ceramic material increases the maximum corrected flow decreases rapidly. Future sample sets should be supplied with samples of similar thickness and having uniform surface morphology. This would allow a more consistent determination of air flow versus processing parameters and the resulting porosity size and distribution.
NASA Astrophysics Data System (ADS)
Grulkowski, Ireneusz; Karnowski, Karol; Ruminski, Daniel; Wojtkowski, Maciej
2016-03-01
Availability of long-depth-range OCT systems enables comprehensive structural imaging of the eye and extraction of biometric parameters characterizing the entire eye. Several approaches have been developed to perform OCT imaging with extended depth ranges. In particular, current SS-OCT technology seems well suited to visualizing both the anterior and posterior eye in a single measurement. The aim of this study is to demonstrate integrated anterior segment and retinal SS-OCT imaging using a single instrument in which the sample arm is equipped with an electrically tunable lens (ETL). The ETL is composed of an optical liquid confined by an elastic polymer membrane. The shape of the membrane, electrically controlled by a dedicated ring, defines the radius of curvature of the lens surface and thus regulates the power of the lens. The ETL can also be equipped with an additional offset lens to adjust the tuning range of the optical power. We characterize the operation of the tunable lens using wavefront sensing. We develop an optimized optical set-up with two adaptive operational states of the ETL in order to focus the light either on the retina or on the anterior segment of the eye. We test the performance of the set-up using a whole-eye phantom as the object. Finally, we perform human eye in vivo imaging using the SS-OCT instrument with versatile imaging functionality that accounts for the optics of the eye and enables dynamic control of the optical beam focus.
McAuliffe, Alan; McGann, Marek
2016-01-01
Speelman and McGann’s (2013) examination of the uncritical way in which the mean is often used in psychological research raises questions both about the average’s reliability and its validity. In the present paper, we argue that interrogating the validity of the mean involves, amongst other things, a better understanding of the person’s experiences, the meaning of their actions, at the time that the behavior of interest is carried out. Recently emerging approaches within Psychology and Cognitive Science have argued strongly that experience should play a more central role in our examination of behavioral data, but the relationship between experience and behavior remains very poorly understood. We outline some of the history of the science on this fraught relationship, as well as arguing that contemporary methods for studying experience fall into one of two categories. “Wide” approaches tend to incorporate naturalistic behavior settings, but sacrifice accuracy and reliability in behavioral measurement. “Narrow” approaches maintain controlled measurement of behavior, but involve too specific a sampling of experience, which obscures crucial temporal characteristics. We therefore argue for a novel, mid-range sampling technique, that extends Hurlburt’s descriptive experience sampling, and adapts it for the controlled setting of the laboratory. This controlled descriptive experience sampling may be an appropriate tool to help calibrate both the mean and the meaning of an experimental situation with one another. PMID:27242588
An interferometric fiber optic hydrophone with large upper limit of dynamic range
NASA Astrophysics Data System (ADS)
Zhang, Lei; Kan, Baoxi; Zheng, Baichao; Wang, Xuefeng; Zhang, Haiyan; Hao, Liangbin; Wang, Hailiang; Hou, Zhenxing; Yu, Wenpeng
2017-10-01
An interferometric fiber optic hydrophone based on heterodyne detection is used to measure the point where a missile drops into the sea. The signal caused by the missile dropping into the water can be too large to be detected, so it is necessary to boost the upper limit of dynamic range (ULODR) of the fiber optic hydrophone. In this article we analyze the factors that influence the ULODR of a fiber optic hydrophone based on heterodyne detection; the ULODR is determined by the sampling frequency fsam and the heterodyne frequency Δf. Conventionally, the sampling and heterodyne frequencies satisfy the Nyquist sampling theorem, with fsam at least twice Δf, and in this condition the ULODR is set by the heterodyne frequency. In order to enlarge the ULODR, the Nyquist condition was deliberately violated, and we propose a fiber optic hydrophone in which the heterodyne frequency is larger than the sampling frequency. Both simulation and experiment were carried out, with similar results: when the sampling frequency is 100 kHz, the ULODR of the large-heterodyne-frequency fiber optic hydrophone is 2.6 times larger than that of the small-heterodyne-frequency fiber optic hydrophone. When the heterodyne frequency is larger than the sampling frequency, the ULODR is determined by the sampling frequency. If the sampling frequency is set at 2 MHz, the ULODR of the fiber optic hydrophone based on heterodyne detection is boosted to 1000 rad at 1 kHz, and this large-heterodyne fiber optic hydrophone can be applied to locate the drop position of a missile in the sea.
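The scaling quoted above can be illustrated with a small sketch. It assumes the simple relation ULODR ≈ fsam/(2·fsignal) radians, i.e. the largest phase swing that can still be tracked from one sample to the next; this relation, and the function below, are illustrative assumptions rather than the authors' derivation, but they reproduce the 1000 rad at 1 kHz figure for 2 MHz sampling.

```python
# Sketch (assumption): the upper limit of dynamic range (ULODR) of a heterodyne
# fiber-optic hydrophone modeled as ULODR ~ f_sampling / (2 * f_signal) radians,
# i.e. the largest phase swing trackable between successive samples.
# This is an illustrative relation, not the authors' derivation.

def ulodr_rad(f_sampling_hz: float, f_signal_hz: float) -> float:
    """Approximate maximum detectable phase amplitude (radians)."""
    return f_sampling_hz / (2.0 * f_signal_hz)

if __name__ == "__main__":
    # 2 MHz sampling, 1 kHz acoustic signal -> ~1000 rad, matching the abstract.
    print(ulodr_rad(2e6, 1e3))    # 1000.0
    # 100 kHz sampling, 1 kHz signal -> ~50 rad, for comparison.
    print(ulodr_rad(100e3, 1e3))  # 50.0
```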
Microbiological quality of Argentinian paprika.
Melo González, María G; Romero, Stella M; Arjona, Mila; Larumbe, Ada G; Vaamonde, Graciela
The aim of this study was to evaluate the microbiological quality of paprika produced in Catamarca, Argentina. Microbiological analyses were carried out for the enumeration of total aerobic mesophilic bacteria, coliforms, yeasts and molds, and the detection of Salmonella in samples obtained from different local producers during three consecutive years. The mycobiota was identified paying special attention to the mycotoxigenic molds. Standard plate counts of aerobic mesophilic bacteria ranged from 2.7×10⁵ to 3.7×10⁷ CFU/g. Coliform counts ranged from <10 to 8.1×10⁴ CFU/g. Salmonella was not detected in any of the samples tested. Fungal counts (including yeasts and molds) ranged between 2×10² and 1.9×10⁵ CFU/g. These results showed a high level of microbial contamination, exceeding in several samples the maximum limits set in international food regulations. The study of the mycobiota demonstrated that Aspergillus was the predominant genus and Aspergillus niger (potential producer of ochratoxin A) the most frequently isolated species, followed by Aspergillus flavus (potential producer of aflatoxins). Other species of potential toxigenic fungi such as Aspergillus ochraceus, Aspergillus westerdijkiae, Penicillium chrysogenum, Penicillium crustosum, Penicillium commune, Penicillium expansum and the Alternaria tenuissima species group were encountered as part of the mycobiota of the paprika samples, indicating a risk of mycotoxin contamination. A. westerdijkiae was isolated for the first time in Argentina. Copyright © 2017 Asociación Argentina de Microbiología. Published by Elsevier España, S.L.U. All rights reserved.
Pires, Cherrine K.; Martelli, Patrícia B.; Lima, José L. F. C.; Saraiva, Maria Lúcia M. F. S.
2003-01-01
An automatic flow procedure based on multicommutation dedicated to the determination of glucose in animal blood serum using glucose oxidase with chemiluminescence detection is described. The flow manifold consisted of a set of three-way solenoid valves assembled to implement multicommutation. A microcomputer furnished with an electronic interface and software written in Quick BASIC 4.5 controlled the manifold and performed data acquisition. Glucose oxidase was immobilized on porous silica beads (glass aminopropyl) and packed in a minicolumn (15 × 5 mm). The procedure was based on the enzymatic degradation of glucose, producing hydrogen peroxide, which oxidized luminol in the presence of hexacyanoferrate(III), causing the chemiluminescence. The system was tested by analysing a set of animal serum samples without previous treatment. Results were in agreement with those obtained with the conventional method (LABTEST Kit) at the 95% confidence level. The detection limit and variation coefficient were estimated as 12.0 mg l⁻¹ (99.7% confidence level) and 3.5% (n = 20), respectively. The sampling rate was about 60 determinations h⁻¹ with sample concentrations ranging from 50 to 600 mg l⁻¹ glucose. The consumption of serum sample, hexacyanoferrate(III) and luminol was 46 μl, 10.0 mg and 0.2 mg per determination, respectively. PMID:18924619
Methane Leaks from Natural Gas Systems Follow Extreme Distributions.
Brandt, Adam R; Heath, Garvin A; Cooley, Daniel
2016-11-15
Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH₄) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ∼15 000 measurements from 18 prior studies, we show that all available natural gas leakage data sets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used log-normal model distributions, we show that log-normal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH₄ emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of data sets to increase sample size is not recommended due to apparent deviation between sampled populations. Understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
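As a rough illustration of the heavy-tail point, the sketch below draws leak sizes from a Pareto distribution with arbitrary parameters (not the authors' data or fitted distribution) and computes the share of total volume carried by the largest 5% of leaks.

```python
import numpy as np

# Minimal illustration with hypothetical parameters: under a heavy-tailed
# leak-size distribution, the top 5% of leaks can carry most of the total
# leakage volume.  The Pareto shape and sample size are arbitrary choices.
rng = np.random.default_rng(0)
leaks = rng.pareto(a=1.3, size=15_000) + 1.0   # heavy-tailed sizes (arbitrary units)

cutoff = np.quantile(leaks, 0.95)              # boundary of the largest 5%
share_top5 = leaks[leaks >= cutoff].sum() / leaks.sum()
print(f"share of total volume from largest 5% of leaks: {share_top5:.0%}")
```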
Oxidation and mobilization of selenium by nitrate in irrigation drainage
Wright, W.G.
1999-01-01
Selenium (Se) can be oxidized by nitrate (NO₃⁻) from irrigation on Cretaceous marine shale in western Colorado. Dissolved Se concentrations are positively correlated with dissolved NO₃⁻ concentrations in surface water and ground water samples from irrigated areas. Redox conditions dominate in the mobilization of Se in marine shale hydrogeologic settings; dissolved Se concentrations increase with increasing platinum-electrode potentials. Theoretical calculations for the oxidation of Se by NO₃⁻ and oxygen show favorable Gibbs free energies for the oxidation of Se by NO₃⁻, indicating NO₃⁻ can act as an electron acceptor for the oxidation of Se. Laboratory batch experiments were performed by adding Mancos Shale samples to zero-dissolved-oxygen water containing 0, 5, 50, and 100 mg/L NO₃⁻ as N (mg N/L). Samples were incubated in airtight bottles at 25 °C for 188 d; samples collected from the batch experiment bottles show increased Se concentrations over time with increased NO₃⁻ concentrations. Pseudo-first-order rate constants for NO₃⁻ oxidation of Se ranged from 0.0007 to 0.0048/d for 0 to 100 mg N/L NO₃⁻ concentrations, respectively. Management of N fertilizer applications in Cretaceous shale settings might help to control the oxidation and mobilization of Se and other trace constituents into the environment.
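The pseudo-first-order description can be sketched as follows; the decay-to-release form and the normalized pool size are illustrative assumptions, with only the rate-constant range taken from the abstract.

```python
import numpy as np

# Hedged sketch: pseudo-first-order release of Se from an oxidizable pool,
# released(t) = S0 * (1 - exp(-k * t)).  The rate constants k span the range
# reported above; S0 and the functional form are illustrative assumptions,
# not the authors' fitted model.
t = np.linspace(0, 188, 5)          # days, matching the 188-d incubation
S0 = 1.0                            # normalized oxidizable Se pool
for k in (0.0007, 0.0048):          # per day
    released = S0 * (1.0 - np.exp(-k * t))
    print(f"k={k}/d ->", np.round(released, 3))
```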
MicroRNA signatures in B-cell lymphomas
Di Lisio, L; Sánchez-Beato, M; Gómez-López, G; Rodríguez, M E; Montes-Moreno, S; Mollejo, M; Menárguez, J; Martínez, M A; Alves, F J; Pisano, D G; Piris, M A; Martínez, N
2012-01-01
Accurate lymphoma diagnosis, prognosis and therapy still require additional markers. We explore the potential relevance of microRNA (miRNA) expression in a large series that included all major B-cell non-Hodgkin lymphoma (NHL) types. The data generated were also used to identify miRNAs differentially expressed in Burkitt lymphoma (BL) and diffuse large B-cell lymphoma (DLBCL) samples. A series of 147 NHL samples and 15 controls were hybridized on a human miRNA one-color platform containing probes for 470 human miRNAs. Each lymphoma type was compared against the entire set of NHLs. BL was also directly compared with DLBCL, and 43 preselected miRNAs were analyzed in a new series of routinely processed samples of 28 BLs and 43 DLBCLs using quantitative reverse transcription-polymerase chain reaction. A signature of 128 miRNAs enabled the characterization of lymphoma neoplasms, reflecting the lymphoma type, cell of origin and/or discrete oncogene alterations. Comparative analysis of BL and DLBCL yielded 19 differentially expressed miRNAs, which were confirmed in a second confirmation series of 71 paraffin-embedded samples. The set of differentially expressed miRNAs found here expands the range of potential diagnostic markers for lymphoma diagnosis, especially when differential diagnosis of BL and DLBCL is required. PMID:22829247
A cluster pattern algorithm for the analysis of multiparametric cell assays.
Kaufman, Menachem; Bloch, David; Zurgil, Naomi; Shafran, Yana; Deutsch, Mordechai
2005-09-01
The issue of multiparametric analysis of complex single cell assays of both static and flow cytometry (SC and FC, respectively) has become common in recent years. In such assays, the analysis of changes, applying common statistical parameters and tests, often fails to detect significant differences between the investigated samples. The cluster pattern similarity (CPS) measure between two sets of gated clusters is based on computing the difference between their density distribution functions' set points. The CPS was applied for the discrimination between two observations in a four-dimensional parameter space. The similarity coefficient (r) ranges from 0 (perfect similarity) to 1 (dissimilar). Three CPS validation tests were carried out: on the same stock samples of fluorescent beads, yielding very low r's (0, 0.066); and on two cell models: mitogenic stimulation of peripheral blood mononuclear cells (PBMC), and apoptosis induction in the Jurkat T cell line by H₂O₂. In both of the latter cell models, r indicated similarity (r < 0.23) within the same group, and dissimilarity (r > 0.48) otherwise. This classification and algorithm approach offers a measure of similarity between samples. It relies on the multidimensional pattern of the sample parameters. The algorithm compensates for environmental drifts in the apparatus and assay; it may also be applied to more than four dimensions.
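A hedged sketch of the general idea, not the published CPS definition, is to bin two gated multiparametric samples onto a common grid and report a normalized distance between the two density estimates, so that 0 means identical patterns and 1 means completely non-overlapping ones.

```python
import numpy as np

# Hedged sketch of a pattern-similarity coefficient between two multiparametric
# samples: bin both onto the same grid, compare the two density estimates, and
# scale the result so 0 means identical and 1 means non-overlapping.  The
# binning and the exact distance are illustrative choices, not the CPS formula.
def similarity_coefficient(a, b, bins=4):
    """a, b: arrays of shape (n_cells, n_parameters); returns r in [0, 1]."""
    edges = [np.linspace(lo, hi, bins + 1)
             for lo, hi in zip(np.minimum(a.min(0), b.min(0)),
                               np.maximum(a.max(0), b.max(0)))]
    ha, _ = np.histogramdd(a, bins=edges)
    hb, _ = np.histogramdd(b, bins=edges)
    ha, hb = ha / ha.sum(), hb / hb.sum()
    return 0.5 * np.abs(ha - hb).sum()      # total-variation distance in [0, 1]

rng = np.random.default_rng(8)
same = similarity_coefficient(rng.normal(size=(2000, 4)), rng.normal(size=(2000, 4)))
shifted = similarity_coefficient(rng.normal(size=(2000, 4)), rng.normal(1.0, 1.0, (2000, 4)))
print(f"same population r={same:.2f}, shifted population r={shifted:.2f}")
```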
Yan, Yiming; Tan, Zhichao; Su, Nan; Zhao, Chunhui
2017-08-24
In this paper, a building extraction method is proposed based on a stacked sparse autoencoder with an optimized structure and optimized training samples. Building extraction plays an important role in urban construction and planning. However, several negative effects reduce the accuracy of extraction, such as excessive resolution, poor correction and terrain influence. Data collected by multiple sensors, such as light detection and ranging (LIDAR) and optical sensors, are used to improve the extraction. Using a digital surface model (DSM) obtained from LIDAR data together with optical images, traditional methods can improve the extraction to a certain extent, but they have shortcomings in feature extraction. Since a stacked sparse autoencoder (SSAE) neural network can learn the essential characteristics of the data in depth, an SSAE was employed to extract buildings from the combined DSM data and optical imagery. A strategy for setting the SSAE network structure is given, and an approach to setting the number and proportion of training samples for better SSAE training is presented. The optical data and DSM were combined as input to the optimized SSAE; after training on the optimized samples, the resulting network structure can extract buildings with high accuracy and good robustness.
Variable selection under multiple imputation using the bootstrap in a prognostic study
Heymans, Martijn W; van Buuren, Stef; Knol, Dirk L; van Mechelen, Willem; de Vet, Henrica CW
2007-01-01
Background Missing data is a challenging problem in many prognostic studies. Multiple imputation (MI) accounts for imputation uncertainty and thereby allows for adequate statistical testing. We developed and tested a methodology combining MI with bootstrapping techniques for studying prognostic variable selection. Method In our prospective cohort study we merged data from three different randomized controlled trials (RCTs) to assess prognostic variables for chronicity of low back pain. Among the outcome and prognostic variables, the proportion of missing data ranged from 0 to 48.1%. We used four methods to investigate the influence of sampling and imputation variation, respectively: MI only, bootstrap only, and two methods that combine MI and bootstrapping. Variables were selected based on the inclusion frequency of each prognostic variable, i.e. the proportion of times that the variable appeared in the model. The discriminative and calibrative abilities of prognostic models developed by the four methods were assessed at different inclusion levels. Results We found that the effect of imputation variation on the inclusion frequency was larger than the effect of sampling variation. When MI and bootstrapping were combined at the range of 0% (full model) to 90% of variable selection, bootstrap-corrected c-index values of 0.70 to 0.71 and slope values of 0.64 to 0.86 were found. Conclusion We recommend accounting for both imputation and sampling variation in data sets with missing values. The new procedure of combining MI with bootstrapping for variable selection results in multivariable prognostic models with good performance and is therefore attractive to apply to data sets with missing values. PMID:17629912
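The inclusion-frequency idea can be sketched as a nested loop over imputations and bootstrap resamples. The synthetic data, the single-draw imputation, and the p < 0.05 selection rule below are illustrative stand-ins rather than the authors' MI and selection procedure.

```python
import numpy as np
import statsmodels.api as sm

# Sketch of the inclusion-frequency idea: repeat (impute -> bootstrap -> fit ->
# select) and count how often each candidate variable is retained.  The data,
# the single-draw imputation, and the p < 0.05 rule are illustrative stand-ins.
rng = np.random.default_rng(1)
n, p = 200, 4
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)
X[rng.random(size=(n, p)) < 0.2] = np.nan          # ~20% missing at random

def impute(Xm, rng):
    """Fill missing entries by drawing from the observed values of each column."""
    Xi = Xm.copy()
    for j in range(Xi.shape[1]):
        miss = np.isnan(Xi[:, j])
        Xi[miss, j] = rng.choice(Xi[~miss, j], size=miss.sum())
    return Xi

counts, n_fits = np.zeros(p), 0
for _ in range(10):                                 # imputations
    Xi = impute(X, rng)
    for _ in range(20):                             # bootstrap resamples
        idx = rng.integers(0, n, size=n)
        fit = sm.OLS(y[idx], sm.add_constant(Xi[idx])).fit()
        counts += (fit.pvalues[1:] < 0.05)          # selection rule (placeholder)
        n_fits += 1
print("inclusion frequencies:", np.round(counts / n_fits, 2))
```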
NASA Astrophysics Data System (ADS)
Pingitore, N. E.; Clague, J.; Amaya, M. A.
2006-12-01
Understanding the interplay of indoor and outdoor sources of lead in an urban setting is one foundation in establishing risk for lead exposure in children in our cities. A household may be the source for lead contamination due to the deterioration of interior lead-based paint, or a sink if lead particles are tracked or blown into the home from such potential ambient sources as yard soil or urban street dust. In addressing this issue, X-Ray Absorption Spectroscopy (XAS) presents the opportunity to directly and quantitatively speciate lead at low concentrations in bulk samples. We performed XAS analyses on dust wipes from window sills or floors from 8 houses that exceeded Federal standards for lead in dust. We entered these data into a Principal Components Analysis (PCA) that also included El Paso environmental samples: lead-based paints, soils, and airborne particulate matter. A simple two-component mixing system accounted for more than 95% of the variance of this data set. Paint and lead oxide appear to be the principal components, with all the samples falling in a compositional range from pure paint to 75% paint, 25% lead oxide. Note that several different lead compounds are possible constituents of a given lead-based paint. The paints spread from one end out along perhaps a fifth of the range of the compositional axis, followed closely, but not overlapped, by the soil samples, which covered the remainder of the compositional range. Two of the dust wipes plotted within the paint range, and the remaining 6 dust wipes plotted randomly through the soil range. Samples of airborne particulate matter plotted in both the paint and soil ranges. These observations suggest that the lead on most of the dust wipes originated outside the house, probably from deteriorated exterior lead-based paint deposited in adjacent yards. This paint mixed with lead oxide present in the soil and entered the houses by the airborne route. The probable source of the oxide in the soil is former airborne deposition of automobile exhaust from leaded gasoline (lead halides quickly react to form oxide). The dust wipes that fall within the compositional range of the paints may have originated from deterioration of interior paint. The XAS findings are consistent with our tests of several hundred houses in El Paso: most of the wipes that exceeded Federal lead standards came from houses in the oldest neighborhoods of the city, where lead paint is still present. X-Ray absorption spectroscopy experiments were conducted at the Stanford Synchrotron Radiation Laboratory on beam lines 7-3 and 10-2. Spectra were collected at the Pb L-III absorption edge in fluorescence mode using a 13-element or a 30-element Ge solid-state detector. This publication was made possible by grant numbers 1RO1-ES11367 and 1 S11 ES013339-01A1 from the National Institute of Environmental Health Sciences (NIEHS), NIH. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIEHS, NIH.
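The two-endmember mixing interpretation can be illustrated with a small PCA sketch: spectra synthesized as linear mixtures of two endmembers (plus noise) concentrate nearly all of their variance in the first principal component. The endmembers and mixing fractions below are synthetic, not the El Paso XAS data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch of the two-endmember mixing check: mixtures of two mock endmember
# spectra (75-100% "paint", as in the abstract's compositional range) plus a
# little noise yield a first principal component that explains most variance.
rng = np.random.default_rng(10)
paint = rng.normal(size=200)                 # mock "paint" endmember spectrum
oxide = rng.normal(size=200)                 # mock "lead oxide" endmember spectrum
fractions = rng.uniform(0.75, 1.0, size=30)  # hypothetical paint fractions
samples = np.outer(fractions, paint) + np.outer(1 - fractions, oxide)
samples += 0.05 * rng.normal(size=samples.shape)

pca = PCA(n_components=3).fit(samples)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
```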
Program for Experimentation With Expert Systems
NASA Technical Reports Server (NTRS)
Engle, S. W.
1986-01-01
CERBERUS is forward-chaining, knowledge-based system program useful for experimentation with expert systems. Inference-engine mechanism performs deductions according to user-supplied rule set. Information stored in intermediate area, and user interrogated only when no applicable data found in storage. Each assertion posed by CERBERUS answered with certainty ranging from 0 to 100 percent. Rule processor stops investigating applicable rules when goal reaches certainty of 95 percent or higher. Capable of operating for wide variety of domains. Sample rule files included for animal identification, pixel classification in image processing, and rudimentary car repair for novice mechanic. User supplies set of end goals or actions. System complexity decided by user's rule file. CERBERUS written in FORTRAN 77.
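A minimal sketch of the behavior described above (user-supplied rules with certainties, forward chaining, and stopping once a goal reaches 95% certainty) is given below in Python for illustration; the rule format and the way certainties are combined are assumptions, not the CERBERUS implementation.

```python
# Tiny forward-chaining sketch with rule certainties, loosely in the spirit of
# the description above.  The rule format and the min-times-certainty
# combination rule are illustrative assumptions, not the CERBERUS code.

rules = [
    # (antecedents, conclusion, certainty of the rule)
    ({"has_fur", "says_meow"}, "is_cat", 0.90),
    ({"is_cat", "likes_water"}, "is_unusual_cat", 0.60),
]

def forward_chain(facts, rules, goal, threshold=0.95):
    certainty = {f: 1.0 for f in facts}
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion, cf in rules:
            if antecedents <= certainty.keys():
                new = cf * min(certainty[a] for a in antecedents)
                if new > certainty.get(conclusion, 0.0):
                    certainty[conclusion] = new
                    changed = True
            if certainty.get(goal, 0.0) >= threshold:   # stop at 95% or higher
                return certainty[goal]
    return certainty.get(goal, 0.0)

print(forward_chain({"has_fur", "says_meow"}, rules, "is_cat"))  # 0.9
```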
Atmospheric radiance interpolation for the modeling of hyperspectral data
NASA Astrophysics Data System (ADS)
Fuehrer, Perry; Healey, Glenn; Rauch, Brian; Slater, David; Ratkowski, Anthony
2008-04-01
The calibration of data from hyperspectral sensors to spectral radiance enables the use of physical models to predict measured spectra. Since environmental conditions are often unknown, material detection algorithms have emerged that utilize predicted spectra over ranges of environmental conditions. The predicted spectra are typically generated by a radiative transfer (RT) code such as MODTRAN™. Such techniques require the specification of a set of environmental conditions. This is particularly challenging in the LWIR, for which temperature and atmospheric constituent profiles are required as inputs to the RT codes. We have developed an automated method for generating environmental conditions to obtain a desired sampling of spectra in the sensor radiance domain. Because sensor radiance spectra depend nonlinearly on the environmental parameters, specifying model conditions by a uniform sampling of those parameters leads to well-known problems; our method provides a way of eliminating them. It uses an initial set of radiance vectors concatenated over a set of conditions to define the mapping from environmental conditions to sensor spectral radiance. This approach enables a given number of model conditions to span the space of desired radiance spectra and improves both the accuracy and efficiency of detection algorithms that rely upon use of predicted spectra.
Ruoff, Kaspar; Luginbühl, Werner; Künzli, Raphael; Bogdanov, Stefan; Bosset, Jacques Olivier; von der Ohe, Katharina; von der Ohe, Werner; Amado, Renato
2006-09-06
Front-face fluorescence spectroscopy, directly applied on honey samples, was used for the authentication of 11 unifloral and polyfloral honey types (n = 371 samples) previously classified using traditional methods such as chemical, pollen, and sensory analysis. Excitation spectra (220-400 nm) were recorded with the emission measured at 420 nm. In addition, emission spectra were recorded between 290 and 500 nm (excitation at 270 nm) as well as between 330 and 550 nm (excitation at 310 nm). A total of four different spectral data sets were considered for data analysis. Chemometric evaluation of the spectra included principal component analysis and linear discriminant analysis; the error rates of the discriminant models were calculated by using Bayes' theorem. They ranged from <0.1% (polyfloral and chestnut honeys) to 9.9% (fir honeydew honey) by using single spectral data sets and from <0.1% (metcalfa honeydew, polyfloral, and chestnut honeys) to 7.5% (lime honey) by combining two data sets. This study indicates that front-face fluorescence spectroscopy is a promising technique for the authentication of the botanical origin of honey and may also be useful for the determination of the geographical origin within the same unifloral honey type.
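The chemometric pipeline (principal component scores fed into a linear discriminant model, with cross-validated error rates) can be sketched as follows on synthetic spectra; the mock data, the number of components, and the class structure are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Sketch of the chemometric workflow (PCA scores fed into LDA) on synthetic
# "spectra"; data, component count, and class structure are made up purely to
# show the pipeline, not the honey data set.
rng = np.random.default_rng(0)
n_per_class, n_wavelengths = 60, 180
spectra, classes = [], []
for label in range(3):                        # three mock honey types
    center = rng.normal(size=n_wavelengths)
    spectra.append(center + 0.5 * rng.normal(size=(n_per_class, n_wavelengths)))
    classes += [label] * n_per_class
X, y = np.vstack(spectra), np.array(classes)

model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```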
The Mira-Titan Universe. II. Matter Power Spectrum Emulation
Lawrence, Earl; Heitmann, Katrin; Kwan, Juliana; ...
2017-09-20
We introduce a new cosmic emulator for the matter power spectrum covering eight cosmological parameters. Targeted at optical surveys, the emulator provides accurate predictions out to a wavenumber k ~ 5 Mpc⁻¹ and redshift z ≤ 2. Besides covering the standard set of CDM parameters, massive neutrinos and a dynamical dark energy equation of state are included. The emulator is built on a sample set of 36 cosmological models, carefully chosen to provide accurate predictions over the wide and large parameter space. For each model, we have performed a high-resolution simulation, augmented with sixteen medium-resolution simulations and TimeRG perturbation theory results to provide accurate coverage of a wide k-range; the dataset generated as part of this project is more than 1.2 Pbyte. With the current set of simulated models, we achieve an accuracy of approximately 4%. Because the sampling approach used here has established convergence and error-control properties, follow-on results with more than a hundred cosmological models will soon achieve ~1% accuracy. We compare our approach with other prediction schemes that are based on halo model ideas and remapping approaches. The new emulator code is publicly available.
NASA Astrophysics Data System (ADS)
Helbert, J.; Maturilli, A.; Ferrari, S.; Dyar, M. D.; Smrekar, S. E.
2014-12-01
The permanent cloud cover of Venus prohibits observation of the surface with traditional imaging techniques over most of the visible spectral range. Venus' CO₂ atmosphere is transparent exclusively in small spectral windows near 1 μm. The Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) team on the European Space Agency Venus-Express mission has recently used these windows successfully to map the southern hemisphere from orbit. VIRTIS shows variations in surface brightness, which can be interpreted as variations in surface emissivity. Deriving surface composition from these variations is a challenging task. Comparisons with laboratory analogue spectra are complicated by the fact that Venus has an average surface temperature of 730 K. Mineral crystal structures and their resultant spectral signatures are notably affected by temperature; therefore, interpretations based on room-temperature laboratory spectral databases can be misleading. In order to support the interpretation of near-infrared data from Venus, we have started an extensive measurement campaign at the Planetary Emissivity Laboratory (PEL, Institute of Planetary Research of the German Aerospace Center, Berlin). The PEL facility, which is unique in the world, allows emission measurements covering the 1 to 2 μm wavelength range at sample temperatures of up to 770 K. Reconciling the expected emissivity variation between felsic and mafic minerals with Venera and VEGA geochemical data, we started with a set of five analog samples. This set includes basalt, gneiss, granodiorite, anorthosite and hematite, thus covering the range of mineralogies. Preliminary results show significant spectral contrast, thus allowing different samples to be distinguished with only 5 spectral points and validating the use of thermal emissivity for investigating composition. This unique new dataset from PEL not only allows interpretation of the Venus Express VIRTIS data but also provides a baseline for considering new instrument designs for future Venus missions.
Kirchheimer, Bernhard; Schinkel, Christoph C F; Dellinger, Agnes S; Klatt, Simone; Moser, Dietmar; Winkler, Manuela; Lenoir, Jonathan; Caccianiga, Marco; Guisan, Antoine; Nieto-Lugilde, Diego; Svenning, Jens-Christian; Thuiller, Wilfried; Vittoz, Pascal; Willner, Wolfgang; Zimmermann, Niklaus E; Hörandl, Elvira; Dullinger, Stefan
2016-03-22
Emerging polyploids may depend on environmental niche shifts for successful establishment. Using the alpine plant Ranunculus kuepferi as a model system, we explore the niche shift hypothesis at different spatial resolutions and in contrasting parts of the species range. European Alps. We sampled 12 individuals from each of 102 populations of R. kuepferi across the Alps, determined their ploidy levels, derived coarse-grain (100 × 100 m) environmental descriptors for all sampling sites by downscaling WorldClim maps, and calculated fine-scale environmental descriptors (2 × 2 m) from indicator values of the vegetation accompanying the sampled individuals. Both coarse and fine-scale variables were further computed for 8239 vegetation plots from across the Alps. Subsequently, we compared niche optima and breadths of diploid and tetraploid cytotypes by combining principal components analysis and kernel smoothing procedures. Comparisons were done separately for coarse and fine-grain data sets and for sympatric, allopatric and the total set of populations. All comparisons indicate that the niches of the two cytotypes differ in optima and/or breadths, but results vary in important details. The whole-range analysis suggests differentiation along the temperature gradient to be most important. However, sympatric comparisons indicate that this climatic shift was not a direct response to competition with diploid ancestors. Moreover, fine-grained analyses demonstrate niche contraction of tetraploids, especially in the sympatric range, that goes undetected with coarse-grained data. Although the niche optima of the two cytotypes differ, separation along ecological gradients was probably less decisive for polyploid establishment than a shift towards facultative apomixis, a particularly effective strategy to avoid minority cytotype exclusion. In addition, our results suggest that coarse-grained analyses overestimate niche breadths of widely distributed taxa. Niche comparison analyses should hence be conducted at environmental data resolutions appropriate for the organism and question under study.
Investigation into low-level anti-rubella virus IgG results reported by commercial immunoassays.
Dimech, Wayne; Arachchi, Nilukshi; Cai, Jingjing; Sahin, Terri; Wilson, Kim
2013-02-01
Since the 1980s, commercial anti-rubella virus IgG assays have been calibrated against a WHO International Standard and results have been reported in international units per milliliter (IU/ml). Laboratories testing routine patients' samples collected 100 samples that gave anti-rubella virus IgG results of 40 IU/ml or less from each of five different commercial immunoassays (CIA). The total of 500 quantitative results obtained from 100 samples from each CIA were compared with results obtained from an in-house enzyme immunoassay (IH-EIA) calibrated using the WHO standard. All 500 samples were screened using a hemagglutination inhibition assay (HAI). Any sample having an HAI titer of 1:8 or less was assigned a negative anti-rubella virus antibody status. If the HAI titer was greater than 1:8, the sample was tested in an immunoblot (IB) assay. If the IB result was negative, the sample was assigned a negative anti-rubella virus IgG status; otherwise, the sample was assigned a positive status. Concordance between the CIA qualitative results and the assigned negative status ranged from 50.0 to 93.8% and 74.5 to 97.8% for the assigned positive status. Using a receiver operating characteristic analysis with the cutoff set at 10 IU/ml, the estimated sensitivity and specificity ranged from 70.2 to 91.2% and 65.9 to 100%, respectively. There was poor correlation between the quantitative CIA results and those obtained by the IH-EIA, with the coefficient of determination (R(2)) ranging from 0.002 to 0.413. Although CIAs have been calibrated with the same international standard for more than 2 decades, the level of standardization continues to be poor. It may be time for the scientific community to reevaluate the relevance of quantification of anti-rubella virus IgG.
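Evaluating a quantitative assay against an assigned serostatus at a fixed cutoff, as done above at 10 IU/ml, reduces to counting concordant calls. The sketch below uses simulated results and statuses purely to show the calculation.

```python
import numpy as np

# Sketch of sensitivity/specificity at a fixed cutoff (10 IU/ml, as above).
# The simulated quantitative results and assigned statuses are illustrative
# only; they are not the study data.
rng = np.random.default_rng(3)
assigned_positive = rng.random(500) < 0.7                            # "true" status
iu_per_ml = np.where(assigned_positive,
                     rng.lognormal(mean=3.0, sigma=0.6, size=500),   # positives
                     rng.lognormal(mean=1.0, sigma=0.8, size=500))   # negatives

cutoff = 10.0
reported_positive = iu_per_ml >= cutoff
sensitivity = (reported_positive & assigned_positive).sum() / assigned_positive.sum()
specificity = (~reported_positive & ~assigned_positive).sum() / (~assigned_positive).sum()
print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```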
Joyce, Richard; Kuziene, Viktorija; Zou, Xin; Wang, Xueting; Pullen, Frank; Loo, Ruey Leng
2016-01-01
An ultra-performance liquid chromatography quadrupole time-of-flight mass spectrometry (UPLC-qTOF-MS) method using hydrophilic interaction liquid chromatography was developed and validated for the simultaneous quantification of 18 free amino acids in urine, with a total acquisition time, including column re-equilibration, of less than 18 min per sample. The method involves simple sample preparation consisting of a 15-fold dilution with acetonitrile to give a final composition of 25 % aqueous and 75 % acetonitrile, without the need for any derivatization. The dynamic range of our calibration curve is approximately two orders of magnitude (120-fold from the lowest calibration point) with good linearity (r² ≥ 0.995 for all amino acids). Good separation of all amino acids as well as good intra- and inter-day accuracy (<15 %) and precision (<15 %) were observed using three quality control samples at concentrations in the low, medium and high range of the calibration curve. The limits of detection (LOD) and lower limits of quantification of our method ranged from approximately 1 to 300 nM and from 0.01 to 0.5 µM, respectively. Amino acids in the prepared urine samples were stable for 72 h at 4 °C, after one freeze-thaw cycle, and for up to 4 weeks at -80 °C. We applied this method to quantify the content of 18 free amino acids in 646 urine samples from a dietary intervention study. We were able to quantify all 18 free amino acids in these urine samples whenever they were present at a level above the LOD. We found our method to be reproducible (accuracy and precision were typically <10 % for QCL, QCM and QCH), and its relatively high sample throughput potentially makes it a suitable alternative for the analysis of urine samples in a clinical setting.
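A generic sketch of the calibration step is shown below: fit a straight line to concentration versus response, report r², and estimate an LOD from the common 3.3·σ/slope convention. The concentrations, responses, and the LOD rule are illustrative and not necessarily those used in the study.

```python
import numpy as np

# Sketch of a linear calibration fit with an LOD estimate via the common
# 3.3*sigma/slope convention.  Concentrations and responses are synthetic,
# and the LOD rule is a generic convention, not necessarily the study's.
conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 4.0, 6.0])        # arbitrary units
rng = np.random.default_rng(5)
response = 1200.0 * conc + 50.0 + rng.normal(scale=40.0, size=conc.size)

slope, intercept = np.polyfit(conc, response, 1)
pred = slope * conc + intercept
ss_res = np.sum((response - pred) ** 2)
ss_tot = np.sum((response - response.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
sigma = np.sqrt(ss_res / (conc.size - 2))                      # residual std dev
lod = 3.3 * sigma / slope
print(f"r^2={r2:.4f}  LOD~{lod:.3f} (same units as conc)")
```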
Further evidence for cosmological evolution of the fine structure constant.
Webb, J K; Murphy, M T; Flambaum, V V; Dzuba, V A; Barrow, J D; Churchill, C W; Prochaska, J X; Wolfe, A M
2001-08-27
We describe the results of a search for time variability of the fine structure constant α using absorption systems in the spectra of distant quasars. Three large optical data sets and two 21 cm and mm absorption systems provide four independent samples, spanning approximately 23% to 87% of the age of the universe. Each sample yields a smaller α in the past and the optical sample shows a 4σ deviation: Δα/α = (-0.72 ± 0.18) × 10⁻⁵ over the redshift range 0.5
BP Spill Sampling and Monitoring Data
This dataset analyzes waste from the British Petroleum Deepwater Horizon Rig Explosion Emergency Response, providing the opportunity to query data sets by metadata criteria and find the resulting raw datasets in CSV format. The data query tool allows users to download EPA's air, water and sediment sampling and monitoring data that has been collected in response to the BP oil spill. All sampling and monitoring data collected to date is available for download as raw structured data. The query tool enables CSV file creation to be refined based on the following search criteria: date range (between April 28, 2010 and 9/29/2010); location by zip, city, or county; media (solid waste, weathered oil, air, surface water, liquid waste, tar, sediment, water); substance categories (based on media selection) and substances (based on substance category selection).
Validating a large geophysical data set: Experiences with satellite-derived cloud parameters
NASA Technical Reports Server (NTRS)
Kahn, Ralph; Haskins, Robert D.; Knighton, James E.; Pursch, Andrew; Granger-Gallegos, Stephanie
1992-01-01
We are validating the global cloud parameters derived from the satellite-borne HIRS2 and MSU atmospheric sounding instrument measurements, and are using the analysis of these data as one prototype for studying large geophysical data sets in general. The HIRS2/MSU data set contains a total of 40 physical parameters, filling 25 MB/day; raw HIRS2/MSU data are available for a period exceeding 10 years. Validation involves developing a quantitative sense for the physical meaning of the derived parameters over the range of environmental conditions sampled. This is accomplished by comparing the spatial and temporal distributions of the derived quantities with similar measurements made using other techniques, and with model results. The data handling needed for this work is possible only with the help of a suite of interactive graphical and numerical analysis tools. Level 3 (gridded) data is the common form in which large data sets of this type are distributed for scientific analysis. We find that Level 3 data is inadequate for the data comparisons required for validation. Level 2 data (individual measurements in geophysical units) is needed. A sampling problem arises when individual measurements, which are not uniformly distributed in space or time, are used for the comparisons. Standard 'interpolation' methods involve fitting the measurements for each data set to surfaces, which are then compared. We are experimenting with formal criteria for selecting geographical regions, based upon the spatial frequency and variability of measurements, that allow us to quantify the uncertainty due to sampling. As part of this project, we are also dealing with ways to keep track of constraints placed on the output by assumptions made in the computer code. The need to work with Level 2 data introduces a number of other data handling issues, such as accessing data files across machine types, meeting large data storage requirements, accessing other validated data sets, processing speed and throughput for interactive graphical work, and problems relating to graphical interfaces.
M3FT-17OR0301070211 - Preparation of Hot Isostatically Pressed AgZ Waste Form Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jubin, Robert Thomas; Bruffey, Stephanie H.; Jordan, Jacob A.
The production of radioactive iodine-bearing waste forms that exhibit long-term stability and are suitable for permanent geologic disposal has been the subject of substantial research interest. One potential method of iodine waste form production is hot isostatic pressing (HIP). Recent studies at Oak Ridge National Laboratory (ORNL) have investigated the conversion of iodine-loaded silver mordenite (I-AgZ) directly to a waste form by HIP. ORNL has performed HIP with a variety of sample compositions and pressing conditions. The base mineral has varied among AgZ (in pure and engineered forms), silver-exchanged faujasite, and silver-exchanged zeolite A. Two iodine loading methods, occlusion and chemisorption, have been explored. Additionally, the effects of variations in temperature and pressure of the process have been examined, with temperature ranges of 525°C–1,100°C and pressure ranges of 100–300 MPa. All of these samples remain available to collaborators upon request. The sample preparation detailed in this document is an extension of that work. In addition to previously prepared samples, this report documents the preparation of additional samples to support stability testing. These samples include chemisorbed I-AgZ and pure AgI. Following sample preparation, each sample was processed by HIP by American Isostatic Presses Inc. and returned to ORNL for storage. ORNL will store the samples until they are requested by collaborators for durability testing. The sample set reported here will support waste form durability testing across the national laboratories and will provide insight into the effects of varied iodine content on iodine retention by the produced waste form and on potential improvements in waste form durability provided by the zeolite matrix.
Measuring the value of accurate link prediction for network seeding.
Wei, Yijin; Spencer, Gwen
2017-01-01
The influence-maximization literature seeks small sets of individuals whose structural placement in the social network can drive large cascades of behavior. Optimization efforts to find the best seed set often assume perfect knowledge of the network topology. Unfortunately, social network links are rarely known in an exact way. When do seeding strategies based on less-than-accurate link prediction provide valuable insight? We introduce optimized-against-a-sample ([Formula: see text]) performance to measure the value of optimizing seeding based on a noisy observation of a network. Our computational study investigates [Formula: see text] under several threshold-spread models in synthetic and real-world networks. Our focus is on measuring the value of imprecise link information. The level of investment in link prediction that is strategic appears to depend closely on spread model: in some parameter ranges investments in improving link prediction can pay substantial premiums in cascade size. For other ranges, such investments would be wasted. Several trends were remarkably consistent across topologies.
On the optimality of a universal noiseless coder
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner H.
1993-01-01
Rice developed a universal noiseless coding structure that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable-length coding algorithms. Variations of such noiseless coders have been used in many NASA applications. Custom VLSI coder and decoder modules capable of processing over 50 million samples per second have been fabricated and tested. In this study, the first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition, for source symbol sets having a Laplacian distribution. Except for the default option, other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set, at specified symbol entropy values. Simulation results are obtained on actual aerial imagery over a wide entropy range, and they confirm the optimality of the scheme. Comparisons with other known techniques are performed on several widely used images and the results further validate the coder's optimality.
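The adaptive idea behind a Rice-style universal coder can be sketched by computing, for a block of non-negative mapped residuals, the coded length under each Rice parameter k and keeping the cheapest option. The block size, the range of k, and the data below are illustrative; this is not the flight or VLSI implementation.

```python
import numpy as np

# Sketch of adaptive code-option selection for a Rice-style coder: for each
# block of non-negative mapped residuals, pick the Rice parameter k that
# minimizes the coded length (unary quotient + k-bit remainder per sample).
# Block size, k range, and data are illustrative choices.

def rice_length(values, k):
    """Total bits to code `values` with Rice parameter k."""
    values = np.asarray(values)
    return int(np.sum((values >> k) + 1 + k))

rng = np.random.default_rng(7)
block = rng.geometric(p=0.25, size=64) - 1        # Laplacian-like mapped residuals

lengths = {k: rice_length(block, k) for k in range(6)}
best_k = min(lengths, key=lengths.get)
print("bits per option:", lengths)
print("selected option k =", best_k)
```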
NASA Astrophysics Data System (ADS)
Cécillon, Lauric; Baudin, François; Chenu, Claire; Houot, Sabine; Jolivet, Romain; Kätterer, Thomas; Lutfalla, Suzanne; Macdonald, Andy; van Oort, Folkert; Plante, Alain F.; Savignac, Florence; Soucémarianadin, Laure N.; Barré, Pierre
2018-05-01
Changes in global soil carbon stocks have considerable potential to influence the course of future climate change. However, a portion of soil organic carbon (SOC) has a very long residence time ( > 100 years) and may not contribute significantly to terrestrial greenhouse gas emissions during the next century. The size of this persistent SOC reservoir is presumed to be large. Consequently, it is a key parameter required for the initialization of SOC dynamics in ecosystem and Earth system models, but there is considerable uncertainty in the methods used to quantify it. Thermal analysis methods provide cost-effective information on SOC thermal stability that has been shown to be qualitatively related to SOC biogeochemical stability. The objective of this work was to build the first quantitative model of the size of the centennially persistent SOC pool based on thermal analysis. We used a unique set of 118 archived soil samples from four agronomic experiments in northwestern Europe with long-term bare fallow and non-bare fallow treatments (e.g., manure amendment, cropland and grassland) as a sample set for which estimating the size of the centennially persistent SOC pool is relatively straightforward. At each experimental site, we estimated the average concentration of centennially persistent SOC and its uncertainty by applying a Bayesian curve-fitting method to the observed declining SOC concentration over the duration of the long-term bare fallow treatment. Overall, the estimated concentrations of centennially persistent SOC ranged from 5 to 11 g C kg-1 of soil (lowest and highest boundaries of four 95 % confidence intervals). Then, by dividing the site-specific concentrations of persistent SOC by the total SOC concentration, we could estimate the proportion of centennially persistent SOC in the 118 archived soil samples and the associated uncertainty. The proportion of centennially persistent SOC ranged from 0.14 (standard deviation of 0.01) to 1 (standard deviation of 0.15). Samples were subjected to thermal analysis by Rock-Eval 6 that generated a series of 30 parameters reflecting their SOC thermal stability and bulk chemistry. We trained a nonparametric machine-learning algorithm (random forests multivariate regression model) to predict the proportion of centennially persistent SOC in new soils using Rock-Eval 6 thermal parameters as predictors. We evaluated the model predictive performance with two different strategies. We first used a calibration set (n = 88) and a validation set (n = 30) with soils from all sites. Second, to test the sensitivity of the model to pedoclimate, we built a calibration set with soil samples from three out of the four sites (n = 84). The multivariate regression model accurately predicted the proportion of centennially persistent SOC in the validation set composed of soils from all sites (R2 = 0.92, RMSEP = 0.07, n = 30). The uncertainty of the model predictions was quantified by a Monte Carlo approach that produced conservative 95 % prediction intervals across the validation set. The predictive performance of the model decreased when predicting the proportion of centennially persistent SOC in soils from one fully independent site with a different pedoclimate, yet the mean error of prediction only slightly increased (R2 = 0.53, RMSEP = 0.10, n = 34). 
This model based on Rock-Eval 6 thermal analysis can thus be used to predict the proportion of centennially persistent SOC with known uncertainty in new soil samples from different pedoclimates, at least for sites that have similar Rock-Eval 6 thermal characteristics to those included in the calibration set. Our study reinforces the evidence that there is a link between the thermal and biogeochemical stability of soil organic matter and demonstrates that Rock-Eval 6 thermal analysis can be used to quantify the size of the centennially persistent organic carbon pool in temperate soils.
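Estimating a persistent-SOC concentration from a declining bare-fallow series can be sketched as fitting a decay-to-asymptote model, C(t) = Cp + (C0 - Cp)·exp(-t/τ). The synthetic data and the least-squares fit below are illustrative; the study itself used a Bayesian curve-fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: estimate the persistent-SOC asymptote Cp from a declining bare-fallow
# time series, C(t) = Cp + (C0 - Cp) * exp(-t / tau).  Synthetic data and a
# least-squares fit stand in for the study's Bayesian procedure.
def soc_model(t, c0, cp, tau):
    return cp + (c0 - cp) * np.exp(-t / tau)

t_years = np.arange(0, 55, 5)
rng = np.random.default_rng(11)
obs = soc_model(t_years, 20.0, 8.0, 18.0) + rng.normal(scale=0.4, size=t_years.size)

(c0, cp, tau), _ = curve_fit(soc_model, t_years, obs,
                             p0=(20.0, 5.0, 10.0), bounds=(0, np.inf))
print(f"estimated persistent SOC ~ {cp:.1f} g C/kg (tau ~ {tau:.0f} yr)")
```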
Cozmuta, Ioana; Blanco, Mario; Goddard, William A
2007-03-29
It is important for many industrial processes to design new materials with improved selective permeability properties. Besides diffusion, the molecule's solubility contributes largely to the overall permeation process. This study presents a method to calculate solubility coefficients of gases such as O₂, H₂O (vapor), N₂, and CO₂ in polymeric matrices from simulation methods (Molecular Dynamics and Monte Carlo) using first-principles predictions. The generation and equilibration (annealing) of five polymer models (polypropylene, polyvinyl alcohol, polyvinyl dichloride, polyvinyl chloride-trifluoroethylene, and polyethylene terephthalate) are extensively described. For each polymer, the average density and Hansen solubilities over a set of ten samples compare well with experimental data. For polyethylene terephthalate, the average properties between a small (n = 10) and a large (n = 100) set are compared. Boltzmann averages and probability density distributions of binding and strain energies indicate that the smaller set is biased in sampling configurations with higher energies. However, the sample with the lowest cohesive energy density from the smaller set is representative of the average of the larger set. Density-wise, low molecular weight polymers tend to have on average lower densities. Infinite-molecular-weight samples do, however, provide a very good representation of the experimental density. Solubility constants calculated with two ensembles (grand canonical and Henry's constant) are equivalent within 20%. For each polymer sample, the solubility constant is then calculated using the faster (10x) Henry's constant ensemble (HCE) from 150 ps of NPT dynamics of the polymer matrix. The influence of various factors (bad contact fraction, number of iterations) on the accuracy of Henry's constant is discussed. To validate the calculations against experimental results, the solubilities of nitrogen and carbon dioxide in polypropylene are examined over a range of temperatures between 250 and 650 K. The magnitudes of the calculated solubilities agree well with experimental results, and the trends with temperature are predicted correctly. The HCE method is used to predict the solubility constants at 298 K of water vapor and oxygen. The water vapor solubilities follow more closely the experimental trend of permeabilities, both ranging over 4 orders of magnitude. For oxygen, the calculated values do not follow entirely the experimental trend of permeabilities, most probably because at this temperature some of the polymers are in the glassy regime and thus are diffusion dominated. Our study also concludes that large confidence limits are associated with the calculated Henry's constants. By investigating several factors (terminal ends of the polymer chains, void distribution, etc.), we conclude that the large confidence limits are intimately related to the polymer's conformational changes caused by thermal fluctuations and have to be regarded, at least at the microscale, as a characteristic of each polymer and the nature of its interaction with the solute. Reducing the mobility of the polymer matrix as well as controlling the distribution of the free (occupiable) volume would act as mechanisms toward lowering both the gas solubility and the diffusion coefficients.
Metabolomics for organic food authentication: Results from a long-term field study in carrots.
Cubero-Leon, Elena; De Rudder, Olivier; Maquet, Alain
2018-01-15
Increasing demand for organic products and their premium prices make them an attractive target for fraudulent malpractices. In this study, a large-scale comparative metabolomics approach was applied to investigate the effect of the agronomic production system on the metabolite composition of carrots and to build statistical models for prediction purposes. Orthogonal projections to latent structures-discriminant analysis (OPLS-DA) was applied successfully to predict the origin of the agricultural system of the harvested carrots on the basis of features determined by liquid chromatography-mass spectrometry. When the training set used to build the OPLS-DA models contained samples representative of each harvest year, the models were able to classify unknown samples correctly (100% correct classification). If a harvest year was left out of the training sets and used for predictions, the correct classification rates achieved ranged from 76% to 100%. The results therefore highlight the potential of metabolomic fingerprinting for organic food authentication purposes. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
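A hedged sketch of a PLS-based discriminant model is given below; OPLS-DA itself is not available in scikit-learn, so plain PLS regression against a 0/1 class label, thresholded at 0.5, stands in for it, and the features and labels are synthetic rather than the carrot LC-MS data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Sketch of a PLS-based discriminant model as a stand-in for OPLS-DA: fit PLS
# regression against a 0/1 class label and classify by thresholding the
# prediction.  Features and labels are synthetic, not the carrot LC-MS data.
rng = np.random.default_rng(9)
n, p = 120, 300
y = np.repeat([0, 1], n // 2)                       # 0 = conventional, 1 = organic (mock)
X = rng.normal(size=(n, p))
X[y == 1, :10] += 0.8                               # a few discriminating features

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
pls = PLSRegression(n_components=2).fit(Xtr, ytr)
pred = (pls.predict(Xte).ravel() > 0.5).astype(int)
print("test classification rate:", (pred == yte).mean())
```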
Foam generation and sample composition optimization for the FOAM-C experiment of the ISS
NASA Astrophysics Data System (ADS)
Carpy, R.; Picker, G.; Amann, B.; Ranebo, H.; Vincent-Bonnieu, S.; Minster, O.; Winter, J.; Dettmann, J.; Castiglione, L.; Höhler, R.; Langevin, D.
2011-12-01
At the end of 2009 and in early 2010, a sealed cell for foam generation and observation was designed and manufactured at the Astrium Friedrichshafen facilities. Using this cell, different sample compositions of "wet foams" have been optimized for mixtures of chemicals such as water, dodecanol, Pluronic, aethoxisclerol, glycerol, CTAB and SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities of the Experiment Container FOAM-C for operation in the ISS Fluid Science Laboratory. The sample cell supports multiple observation methods, such as Diffusing-Wave and Diffuse Transmission Spectrometry, Time Resolved Correlation Spectroscopy [1] and microscope observation; all of these methods are applied in the cell with a relatively small experiment volume of <3 cm³. These units will be on-orbit replaceable sets that will allow processing of multiple sample compositions (in the range of >40).
An assessment of the variability in performance of wet atmospheric deposition samplers
Graham, R.C.; Robertson, J.K.; Obal, John
1987-01-01
The variability in performance of two brands of wet/dry atmospheric deposition samplers was compared for 1 year at a single site. A total of nine samplers were used. Samples were collected weekly and analyzed for pH, specific conductance, common chemical constituents, and sample volume. Additionally, data on the duration of each sampler opening were recorded using a microdatalogger. These data disprove the common perception that samplers remain open throughout a precipitation event. The sensitivity of the sampler sensors within the range tested did not have a definable impact on sample collection. The nonnormal distribution within the data set necessitated application of the nonparametric Friedman test to assess the comparability of sample chemical composition and volume between and within sampler brands. Statistically significant differences existed for most comparisons; however, the test did not permit quantification of their magnitudes. Differences in analyte concentrations between samplers were small. (USGS)
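The nonparametric comparison used above treats weekly values from co-located samplers as repeated measures; the sketch below shows the Friedman test call on synthetic weekly data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Sketch of the Friedman test on weekly analyte values from several co-located
# samplers (repeated measures).  The numbers are synthetic, purely to show the
# call; they are not the study data.
rng = np.random.default_rng(2)
weeks = 40
base = rng.lognormal(mean=1.0, sigma=0.5, size=weeks)        # weekly "true" concentration
sampler_a = base * rng.normal(1.00, 0.05, size=weeks)
sampler_b = base * rng.normal(1.03, 0.05, size=weeks)        # slight positive bias
sampler_c = base * rng.normal(0.98, 0.05, size=weeks)

stat, p = friedmanchisquare(sampler_a, sampler_b, sampler_c)
print(f"Friedman chi-square={stat:.2f}, p={p:.3f}")
```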
Effect of aspect ratio on the mechanical behavior of packings of spheroids
NASA Astrophysics Data System (ADS)
Parafiniuk, Piotr; Bańda, Maciej; Stasiak, Mateusz; Horabik, Józef; Wiącek, Joanna; Molenda, Marek
2018-07-01
This paper presents measurements of the mechanical response of assemblages formed by spheroid particles. Sets of such particles in the form of thin, cylindrical samples were subjected to uniaxial confined compression. The particles were flattened and elongated, with aspect ratios ranging from 0.5 to 2.5. All particles were fabricated using a 3D printer and each had the same volume. Because the particles had well-defined shapes, it was possible to experimentally observe how the mechanical response of the anisotropic and highly constrained samples depended on the elongation of the particles. In particular, we showed how the sample density, lateral pressure ratio, and work done to compact a sample of elongated or flattened particles changed with change in particle aspect ratio. Furthermore, we found that the evolution of packing density in subsequent loading-unloading cycles followed a stretched exponential law regardless of particle aspect ratio.
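Fitting the stretched-exponential law mentioned above can be sketched with a nonlinear least-squares fit; the functional form is taken from the abstract, while the cycle counts, densities, and parameter values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fit a stretched-exponential law to packing density versus
# loading-unloading cycle number,
# rho(n) = rho_inf - (rho_inf - rho0) * exp(-(n/tau)**beta).
# Only the functional form comes from the abstract; the numbers are synthetic.
def stretched_exp(n, rho0, rho_inf, tau, beta):
    return rho_inf - (rho_inf - rho0) * np.exp(-(n / tau) ** beta)

cycles = np.arange(1, 31)
rng = np.random.default_rng(4)
density = stretched_exp(cycles, 0.58, 0.66, 6.0, 0.7) + rng.normal(scale=0.002, size=cycles.size)

popt, _ = curve_fit(stretched_exp, cycles, density,
                    p0=(0.58, 0.65, 5.0, 1.0),
                    bounds=([0, 0, 0.1, 0.1], [1, 1, 100, 2]))
print("fitted rho0, rho_inf, tau, beta:", np.round(popt, 3))
```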
Psychometric evaluation of the Revised Professional Practice Environment (RPPE) scale.
Erickson, Jeanette Ives; Duffy, Mary E; Ditomassi, Marianne; Jones, Dorothy
2009-05-01
The purpose was to examine the psychometric properties of the Revised Professional Practice Environment (RPPE) scale. Despite renewed focus on studying health professionals' practice environments, there are still few reliable and valid instruments available to assist nurse administrators in decision making. A psychometric evaluation using a random-sample cross-validation procedure (calibration sample [CS], n = 775; validation sample [VS], n = 775) was undertaken. Cronbach alpha internal consistency reliability of the total score (r = 0.93 [CS] and 0.92 [VS]), resulting subscale scores (r range: 0.80-0.87 [CS], 0.81-0.88 [VS]), and principal components analyses with Varimax rotation and Kaiser normalization (8 components, 59.2% variance [CS], 59.7% [VS]) produced almost identical results in both samples. The multidimensional RPPE is a psychometrically sound measure of 8 components of the professional practice environment in the acute care setting and sufficiently reliable and valid for use as independent subscales in healthcare research.
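The internal-consistency statistic reported above is Cronbach's alpha; the sketch below computes it from a synthetic respondents-by-items matrix using the standard formula.

```python
import numpy as np

# Sketch of Cronbach's alpha from an items-by-respondents matrix.  The response
# matrix is synthetic; only the formula is the standard one.
def cronbach_alpha(items):
    """items: array of shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(6)
latent = rng.normal(size=(775, 1))                      # one underlying trait
responses = latent + 0.7 * rng.normal(size=(775, 8))    # eight related items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```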
Bachmann-Harildstad, Gregor; Stenklev, Niels Christian; Myrvoll, Elin; Jablonski, Greg; Klingenberg, Olav
2011-01-01
The diagnosis of perilymphatic fluid (PLF) fistula is still challenging. Perilymphatic fluid fistula is one possible complication after stapedotomy or cochlear implant surgery. We have performed a prospective diagnostic pilot study to further investigate β-trace protein (β-TP) as a marker for PLF fistula. In this pilot study, we tested the sensitivity of the β-TP marker using a simple method for sample collection from the tympanic cavity. Prospective controlled diagnostic study. Two-center tertiary referral hospitals. A total of 35 adult patients undergoing ear surgery were included. Subjects were divided into 2 groups: 1) 19 patients undergoing stapedotomy were investigated for PLF fistula in samples obtained from the tympanic cavity and 2) 16 patients undergoing myringoplasty were investigated for PLF fistula in samples from the tympanic cavity. This group served as the control. Mean age ± SD at surgery was 49.9 ± 8.0 years in the study group and 39.69 ± 15.47 years in the control group. β-Trace protein (prostaglandin D synthase) in tympanic cavity samples and serum samples was analyzed. The samples were collected by gradually filling the tympanic cavity with 100 to 200 μl sodium chloride and by immediately collecting a volume of 60 to 100 μl in a mucus specimen set container. The concentration of β-TP was quantified using laser nephelometry. The median β-TP in the study group was 0.8 mg/L (range, 0.05-4.5 mg/L). In the control group, the median β-TP value was 0.16 mg/L (range, 0.01-0.36 mg/L). Thirty-five percent of the values in the study group were below the highest value in the negative control group. The β-TP values of the tympanic cavity samples were significantly higher in the study group than in controls (p = 0.0001). The serum values were 0.55 ± 0.18 and 0.53 ± 0.11 mg/L, respectively. It may be feasible to test for PLF fistula using β-TP in samples from the tympanic cavity. Our results, however, suggest a relatively low diagnostic sensitivity, given a cutoff set to obtain high specificity when using a simple sample collection method. Furthermore, the test does not permit the distinction between PLF fistula and cerebrospinal fluid fistula. Further studies should focus on minimal dilution at sampling and on minimizing sample volumes.
Kim, Si Hyun; Jeong, Haeng Soon; Kim, Yeong Hoon; Song, Sae Am; Lee, Ja Young; Oh, Seung Hwan; Kim, Hye Ran; Lee, Jeong Nyeo; Kho, Weon-Gyu; Shin, Jeong Hwan
2012-03-01
The aims of this study were to compare several DNA extraction methods and 16S rDNA primers and to evaluate the clinical utility of broad-range PCR in continuous ambulatory peritoneal dialysis (CAPD) culture fluids. Six type strains were used as model organisms in dilutions from 10(8) to 10(0) colony-forming units (CFU)/mL for the evaluation of 5 DNA extraction methods and 5 PCR primer pairs. Broad-range PCR was applied to 100 CAPD culture fluids, and the results were compared with conventional culture results. There were some differences between the various DNA extraction methods and primer sets with regard to the detection limits. The InstaGene Matrix (Bio-Rad Laboratories, USA) and Exgene Clinic SV kits (GeneAll Biotechnology Co. Ltd, Korea) seem to have higher sensitivities than the others. The results of broad-range PCR were concordant with the results from culture in 97% of all cases (97/100). Two culture-positive cases that were broad-range PCR-negative were identified as Candida albicans, and 1 PCR-positive but culture-negative sample was identified as Bacillus circulans by sequencing. Two samples among 54 broad-range PCR-positive products could not be sequenced. There were differences in the analytical sensitivity of various DNA extraction methods and primers for broad-range PCR. The broad-range PCR assay can be used to detect bacterial pathogens in CAPD culture fluid as a supplement to culture methods.
Questionnaire-based assessment of executive functioning: Psychometrics.
Castellanos, Irina; Kronenberger, William G; Pisoni, David B
2018-01-01
The psychometric properties of the Learning, Executive, and Attention Functioning (LEAF) scale were investigated in an outpatient clinical pediatric sample. As a part of clinical testing, the LEAF scale, which broadly measures neuropsychological abilities related to executive functioning and learning, was administered to parents of 118 children and adolescents referred for psychological testing at a pediatric psychology clinic; 85 teachers also completed LEAF scales to assess reliability across different raters and settings. Scores on neuropsychological tests of executive functioning and academic achievement were abstracted from charts. Psychometric analyses of the LEAF scale demonstrated satisfactory internal consistency, parent-teacher inter-rater reliability in the small to large effect size range, and test-retest reliability in the large effect size range, similar to values for other executive functioning checklists. Correlations between corresponding subscales on the LEAF and other behavior checklists were large, while most correlations with neuropsychological tests of executive functioning and achievement were significant but in the small to medium range. Results support the utility of the LEAF as a reliable and valid questionnaire-based assessment of delays and disturbances in executive functioning and learning. Applications and advantages of the LEAF and other questionnaire measures of executive functioning in clinical neuropsychology settings are discussed.
Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks
Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio
2008-01-01
Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper. PMID:27873941
Surveillance of Endoscopes: Comparison of Different Sampling Techniques.
Cattoir, Lien; Vanzieleghem, Thomas; Florin, Lisa; Helleputte, Tania; De Vos, Martine; Verhasselt, Bruno; Boelens, Jerina; Leroux-Roels, Isabel
2017-09-01
OBJECTIVE To compare different techniques of endoscope sampling to assess residual bacterial contamination. DESIGN Diagnostic study. SETTING The endoscopy unit of an 1,100-bed university hospital performing ~13,000 endoscopic procedures annually. METHODS In total, 4 sampling techniques, combining flushing fluid with or without a commercial endoscope brush, were compared in an endoscope model. Based on these results, sterile physiological saline flushing with or without PULL THRU brush was selected for evaluation on 40 flexible endoscopes by adenosine triphosphate (ATP) measurement and bacterial culture. Acceptance criteria from the French National guideline (<25 colony-forming units [CFU] per endoscope and absence of indicator microorganisms) were used as part of the evaluation. RESULTS On biofilm-coated PTFE tubes, physiological saline in combination with a PULL THRU brush generated higher mean ATP values (2,579 relative light units [RLU]) compared with saline alone (1,436 RLU; P=.047). In the endoscope samples, culture yield using saline plus the PULL THRU (mean, 43 CFU; range, 1-400 CFU) was significantly higher than that of saline alone (mean, 17 CFU; range, 0-500 CFU; P<.001). In samples obtained using the saline+PULL THRU brush method, ATP values of samples classified as unacceptable were significantly higher than those of samples classified as acceptable (P=.001). CONCLUSION Physiological saline flushing combined with PULL THRU brush to sample endoscopes generated higher ATP values and increased the yield of microbial surveillance culture. Consequently, the acceptance rate of endoscopes based on a defined CFU limit was significantly lower when the saline+PULL THRU method was used instead of saline alone. Infect Control Hosp Epidemiol 2017;38:1062-1069.
NASA Astrophysics Data System (ADS)
Ehya, Farhad; Mazraei, Shaghayegh Moalaye
2017-10-01
Barite mineralization occurs at the Chenarvardeh deposit as layers and lenses in Upper Eocene volcanic and pyroclastic rocks. The host rocks are intensely saussuritized in most places. Barite is accompanied by calcite, Mn-oxides, galena and malachite as subordinate minerals. The amount of Sr in barites is low and varies between 0.11 and 0.30 wt%. The concentration of Rb, Zr, Y, Ta and Hf is also low (<5 ppm) in barite samples. The amount of total REEs (∑REE) is low in barites, ranging from 7.51 to 30.50 ppm. Chondrite-normalized REE patterns reveal LREE enrichment with respect to HREE, and positive Ce anomalies. Fluid inclusions are common in barite samples and are dominantly of the liquid-rich two-phase (L + V) type. Salinity values in fluid inclusions range from 9.41 to 18.69 wt% NaCl equivalent, with the most frequent salinities falling in the range of 10-15 wt% NaCl equivalent. Homogenization temperatures (Th) range between 160 and 220 °C, with 180-200 °C being the most common Th interval. A combination of factors, including geologic setting, host rock, mineral assemblages, REE geochemistry and fluid inclusion data, is consistent with a submarine volcanic hydrothermal model for barite formation at the Chenarvardeh deposit. Mineral-forming fluids, derived from solutions related to submarine hydrothermal activity, deposited barite on the seafloor as they encountered sulfate-bearing seawater.
Juul, Malene; Bertl, Johanna; Guo, Qianyun; Nielsen, Morten Muhlig; Świtnicki, Michał; Hornshøj, Henrik; Madsen, Tobias; Hobolth, Asger; Pedersen, Jakob Skou
2017-01-01
Non-coding mutations may drive cancer development. Statistical detection of non-coding driver regions is challenged by a varying mutation rate and uncertainty of functional impact. Here, we develop a statistically founded non-coding driver-detection method, ncdDetect, which includes sample-specific mutational signatures, long-range mutation rate variation, and position-specific impact measures. Using ncdDetect, we screened non-coding regulatory regions of protein-coding genes across a pan-cancer set of whole genomes (n = 505), which ranked known drivers at the top and identified new candidates. For individual candidates, presence of non-coding mutations associates with altered expression or decreased patient survival across an independent pan-cancer sample set (n = 5454). This includes an antigen-presenting gene (CD1A), where 5’UTR mutations correlate significantly with decreased survival in melanoma. Additionally, mutations in a base-excision-repair gene (SMUG1) correlate with a C-to-T mutational signature. Overall, we find that a rich model of mutational heterogeneity facilitates non-coding driver identification, and integrative analysis points to candidates of potential clinical relevance. DOI: http://dx.doi.org/10.7554/eLife.21778.001 PMID:28362259
A probabilistic QMRA of Salmonella in direct agricultural reuse of treated municipal wastewater.
Amha, Yamrot M; Kumaraswamy, Rajkumari; Ahmad, Farrukh
2015-01-01
Developing reliable quantitative microbial risk assessment (QMRA) procedures aids in setting recommendations on reuse applications of treated wastewater. In this study, a probabilistic QMRA was conducted to determine the risk of Salmonella infections resulting from the consumption of edible crops irrigated with treated wastewater. Quantitative polymerase chain reaction (qPCR) was used to enumerate Salmonella spp. in post-disinfected samples, where concentrations ranged from 90 to 1,600 cells/100 mL. The results were used to construct probabilistic exposure models for the raw consumption of three vegetables (lettuce, cabbage, and cucumber) irrigated with treated wastewater, and to estimate the disease burden using Monte Carlo analysis. The results showed an elevated median disease burden compared with the acceptable disease burden set by the World Health Organization, which is 10⁻⁶ disability-adjusted life years per person per year. Of the three vegetables considered, lettuce showed the highest risk of infection in all scenarios, while cucumber showed the lowest risk. The Salmonella concentrations obtained with qPCR were compared with Escherichia coli concentrations for samples taken on the same sampling dates.
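A minimal Monte Carlo sketch of the exposure-to-burden chain described above is given below; the consumption, water-retention, dose-response, and DALY-per-case inputs are placeholders chosen only to illustrate the calculation, not the values used in the study.

```python
# Sketch of a probabilistic QMRA chain: concentration -> ingested dose -> infection
# probability -> annual DALY burden. All numerical inputs are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

conc = rng.uniform(0.9, 16.0, n)              # Salmonella cells per mL (90-1,600 per 100 mL)
water_on_crop = rng.normal(0.108, 0.02, n)    # mL of irrigation water retained per g (assumed)
intake = rng.triangular(50, 100, 150, n)      # g of lettuce eaten raw per serving (assumed)
dose = conc * np.clip(water_on_crop, 0, None) * intake

# Approximate beta-Poisson dose-response; parameters are illustrative only
alpha, beta = 0.3126, 2884.0
p_inf = 1.0 - (1.0 + dose / beta) ** (-alpha)

servings_per_year = 365 // 3                  # assumed consumption frequency
p_annual = 1.0 - (1.0 - p_inf) ** servings_per_year
daly_per_case = 4.0e-3                        # assumed burden per infection (DALY)
burden = p_annual * daly_per_case

print(f"median annual burden = {np.median(burden):.2e} DALY per person per year")
```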
Bed-sediment grain-size and morphologic data from Suisun, Grizzly, and Honker Bays, CA, 1998-2002
Hampton, Margaret A.; Snyder, Noah P.; Chin, John L.; Allison, Dan W.; Rubin, David M.
2003-01-01
The USGS Place Based Studies Program for San Francisco Bay investigates this sensitive estuarine system to aid in resource management. As part of the inter-disciplinary research program, the USGS collected side-scan sonar data and bed-sediment samples from north San Francisco Bay to characterize bed-sediment texture and investigate temporal trends in sedimentation. The study area is located in central California and consists of Suisun Bay, and Grizzly and Honker Bays, sub-embayments of Suisun Bay. During the study (1998-2002), the USGS collected three side-scan sonar data sets and approximately 300 sediment samples. The side-scan data revealed predominantly fine-grained material on the bayfloor. We also mapped five different bottom types from the data set, categorized as featureless, furrows, sand waves, machine-made, and miscellaneous. We performed detailed grain-size and statistical analyses on the sediment samples. Overall, we found that grain size ranged from clay to fine sand, with the coarsest material in the channels and finer material located in the shallow bays. Grain-size analyses revealed high spatial variability in size distributions in the channel areas. In contrast, the shallow regions exhibited low spatial variability and consistent sediment size over time.
Helsel, Dennis R.; Gilliom, Robert J.
1986-01-01
Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.
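The log probability regression (LR) approach singled out above can be sketched roughly as follows for a single reporting limit; the plotting positions are simplified and ties are ignored, so this illustrates the idea rather than the exact method evaluated in the paper.

```python
# Rough sketch of log probability regression (regression on order statistics) for a
# water-quality data set censored at one low reporting limit. Simplified illustration only.
import numpy as np
from scipy.stats import norm

def lr_estimates(values, censored):
    """values: observations (censored entries hold the reporting limit).
    censored: boolean mask, True where the observation is below the reporting limit."""
    n = len(values)
    n_cens = int(np.sum(censored))
    obs = np.sort(np.log(values[~censored]))            # uncensored values, ascending
    # assume censored values occupy the lowest ranks (single low reporting limit)
    pp_obs = np.arange(n_cens + 1, n + 1) / (n + 1.0)   # Weibull plotting positions
    slope, intercept = np.polyfit(norm.ppf(pp_obs), obs, 1)  # log-values vs normal scores
    pp_cens = np.arange(1, n_cens + 1) / (n + 1.0)
    imputed = intercept + slope * norm.ppf(pp_cens)     # fill in the censored tail
    full = np.exp(np.concatenate([imputed, obs]))
    return full.mean(), full.std(ddof=1), np.median(full)

vals = np.array([0.5, 0.5, 0.5, 0.8, 1.2, 2.0, 3.5, 5.1, 7.9, 12.0])
cens = np.array([True, True, True, False, False, False, False, False, False, False])
print(lr_estimates(vals, cens))   # (mean, standard deviation, median)
```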
Reconnaissance data on lakes in the Alpine Lakes Wilderness Area, Washington
Dethier, David P.; Heller, Paul L.; Safioles, Sally A.
1979-01-01
Sixty lakes in the Alpine Lakes Wilderness Area have been sampled from rubber rafts or helicopter to obtain information on their physical setting and on present water-quality conditions. The lakes are located near the crest of the Cascade Range in Chelan and King Counties, Washington. Basic data from these lakes will be useful for planners concerned with lake and wilderness management, and of interest to hikers and other recreationists who use the lakes.
A compendium of controlled diffusion blades generated by an automated inverse design procedure
NASA Technical Reports Server (NTRS)
Sanz, Jose M.
1989-01-01
A set of sample cases was produced to test an automated design procedure developed at the NASA Lewis Research Center for the design of controlled diffusion blades. The range of application of the automated design procedure is documented. The results presented include characteristic compressor and turbine blade sections produced with the automated design code as well as various other airfoils produced with the base design method prior to the incorporation of the automated procedure.
Mesoscale Particle-Based Model of Electrophoresis
Giera, Brian; Zepeda-Ruiz, Luis A.; Pascall, Andrew J.; ...
2015-07-31
Here, we develop and evaluate a semi-empirical particle-based model of electrophoresis using extensive mesoscale simulations. We parameterize the model using only measurable quantities from a broad set of colloidal suspensions with properties that span the experimentally relevant regime. With sufficient sampling, simulated diffusivities and electrophoretic velocities match predictions of the ubiquitous Stokes-Einstein and Henry equations, respectively. This agreement holds for non-polar and aqueous solvents or ionic liquid colloidal suspensions under a wide range of applied electric fields.
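For reference, the two relations that the simulated diffusivities and electrophoretic velocities are benchmarked against are, in their standard forms for a sphere of radius \(a\) in a solvent of viscosity \(\eta\), permittivity \(\varepsilon\), zeta potential \(\zeta\), and Debye parameter \(\kappa\):

\[
D = \frac{k_B T}{6\pi\eta a}, \qquad
\mu_e = \frac{2\,\varepsilon\,\zeta}{3\,\eta}\, f(\kappa a),
\]

where Henry's function \(f(\kappa a)\) runs from 1 in the thick-double-layer (Hückel) limit to \(3/2\) in the thin-double-layer (Smoluchowski) limit.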
Mesoscale Particle-Based Model of Electrophoresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giera, Brian; Zepeda-Ruiz, Luis A.; Pascall, Andrew J.
Here, we develop and evaluate a semi-empirical particle-based model of electrophoresis using extensive mesoscale simulations. We parameterize the model using only measurable quantities from a broad set of colloidal suspensions with properties that span the experimentally relevant regime. With sufficient sampling, simulated diffusivities and electrophoretic velocities match predictions of the ubiquitous Stokes-Einstein and Henry equations, respectively. This agreement holds for non-polar and aqueous solvents or ionic liquid colloidal suspensions under a wide range of applied electric fields.
Grande-Martínez, Ángel; Arrebola, Francisco Javier; Moreno, Laura Díaz; Vidal, José Luis Martínez; Frenich, Antonia Garrido
2015-01-01
A rapid and sensitive multiresidue method was developed and validated for the determination of around 100 pesticides in dry samples (rice and wheat flour) by ultra-performance LC coupled to a triple quadrupole mass analyzer working in tandem mode (UPLC/QqQ-MS/MS). The sample preparation step was optimized for both matrixes. Pesticides were extracted from rice samples using aqueous ethyl acetate, while aqueous acetonitrile extraction [modified QuEChERS (quick, easy, cheap, effective, rugged, and safe) method] was used for wheat flour matrixes. In both cases the extracts were then cleaned up by dispersive solid phase extraction with MgSO4 and primary secondary amine + C18 sorbents. A further cleanup step with Florisil was necessary to remove fat in wheat flour. The method was validated at two concentration levels (3.6 and 40 μg/kg for most compounds), obtaining recoveries ranging from 70 to 120%, intraday and interday precision values ≤ 20% expressed as RSDs, and expanded uncertainty values ≤ 50%. The LOQ values ranged between 3.6 and 20 μg/kg, although it was set at 3.6 μg/kg for the majority of the pesticides. The method was applied to the analysis of 20 real samples, and no pesticides were detected.
Buoyancy-corrected gravimetric analysis of lightly loaded filters.
Rasmussen, Pat E; Gardner, H David; Niu, Jianjun
2010-09-01
Numerous sources of uncertainty are associated with the gravimetric analysis of lightly loaded air filter samples (<100 μg). The purpose of the study presented here is to investigate the effectiveness and limitations of air buoyancy corrections over experimentally adjusted conditions of temperature (21-25 °C) and relative humidity (RH) (16-60% RH). Conditioning (24 hr) and weighing were performed inside the Archimedes M3 environmentally controlled chamber. The measurements were performed using 20 size-fractionated samples of resuspended house dust loaded onto Teflo (PTFE) filters using a Micro-Orifice Uniform Deposit Impactor, representing a wide range of mass loading (7.2-3130 μg) and cut sizes (0.056-9.9 μm). By maintaining tight controls on humidity (within 0.5% RH of control setting) throughout pre- and postweighing at each stepwise increase in RH, it was possible to quantify error due to water absorption: 45% of the total mass change due to water absorption occurred between 16 and 50% RH, and 55% occurred between 50 and 60% RH. The buoyancy corrections ranged from -3.5 to +5.8 μg in magnitude and improved relative standard deviation (RSD) from 21.3% (uncorrected) to 5.6% (corrected) for a 7.2 μg sample. It is recommended that protocols for weighing low-mass particle samples (e.g., nanoparticle samples) should include buoyancy corrections and tight temperature/humidity controls. In some cases, conditioning times longer than 24 hr may be warranted.
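The air-buoyancy correction itself follows the standard conventional-mass relation; the expression below is the textbook form and is given for orientation rather than as the exact formula applied in the study:

\[
m = m_{\mathrm{bal}}\,
\frac{1 - \rho_{\mathrm{air}}/\rho_{\mathrm{cal}}}
     {1 - \rho_{\mathrm{air}}/\rho_{\mathrm{sample}}},
\]

where \(m_{\mathrm{bal}}\) is the balance reading, \(\rho_{\mathrm{cal}}\) is the density of the calibration weights (nominally \(8000\ \mathrm{kg\,m^{-3}}\)), \(\rho_{\mathrm{sample}}\) is the density of the filter plus deposit, and \(\rho_{\mathrm{air}}\) is the air density at the recorded temperature, pressure, and humidity.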
Developing a cosmic ray muon sampling capability for muon tomography and monitoring applications
NASA Astrophysics Data System (ADS)
Chatzidakis, S.; Chrysikopoulou, S.; Tsoukalas, L. H.
2015-12-01
In this study, a cosmic ray muon sampling capability using a phenomenological model that captures the main characteristics of the experimentally measured spectrum coupled with a set of statistical algorithms is developed. The "muon generator" produces muons with zenith angles in the range 0-90° and energies in the range 1-100 GeV and is suitable for Monte Carlo simulations with emphasis on muon tomographic and monitoring applications. The muon energy distribution is described by the Smith and Duller (1959) [35] phenomenological model. Statistical algorithms are then employed for generating random samples. The inverse transform provides a means to generate samples from the muon angular distribution, whereas the Acceptance-Rejection and Metropolis-Hastings algorithms are employed to provide the energy component. The predictions for muon energies 1-60 GeV and zenith angles 0-90° are validated with a series of actual spectrum measurements and with estimates from the software library CRY. The results confirm the validity of the phenomenological model and the applicability of the statistical algorithms to generate polyenergetic-polydirectional muons. The response of the algorithms and the impact of critical parameters on computation time and computed results were investigated. Final output from the proposed "muon generator" is a look-up table that contains the sampled muon angles and energies and can be easily integrated into Monte Carlo particle simulation codes such as Geant4 and MCNP.
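The acceptance-rejection step for the energy component can be illustrated with a few lines of Python; the pdf below is a generic, steeply falling placeholder rather than the Smith and Duller (1959) spectrum, and the energy window simply mirrors the 1-100 GeV range quoted above.

```python
# Generic acceptance-rejection sketch for drawing energies from a falling spectrum.
# pdf() is a placeholder shape, not the Smith and Duller (1959) phenomenological model.
import numpy as np

rng = np.random.default_rng(7)

def pdf(E):
    # placeholder spectrum, roughly power-law falling with energy (arbitrary units)
    return E ** -2.7

E_MIN, E_MAX = 1.0, 100.0
M = pdf(E_MIN)                  # envelope constant: pdf is decreasing, so its max is at E_MIN

def sample_energy(n):
    out = []
    while len(out) < n:
        E = rng.uniform(E_MIN, E_MAX)       # proposal: uniform over the energy window
        if rng.uniform(0.0, M) < pdf(E):    # accept with probability pdf(E) / M
            out.append(E)
    return np.array(out)

energies = sample_energy(10_000)
print(f"mean = {energies.mean():.2f} GeV, min = {energies.min():.2f}, max = {energies.max():.2f}")
```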
Multiferroic and magnetoelectric studies on BMFO-NZFO nanocomposites
NASA Astrophysics Data System (ADS)
Dhanalakshmi, B.; Kollu, Pratap; Barnes, Crispin H. W.; Rao, B. Parvatheeswara; Rao, P. S. V. Subba
2018-05-01
Bismuth ferrite-based multiferroic composites, x·Bi0.95Mn0.05FeO3 - (1 - x)·Ni0.5Zn0.5Fe2O4, where x takes the values of 0.2, 0.4, 0.5, 0.6 and 0.8, have been prepared by combining sol-gel autocombustion and solid-state methods. Phase identification of the samples was done by X-ray diffraction analysis. SEM-EDX measurements were used to evaluate the microstructural aspects and the quantitative composition of the samples. Room temperature P-E loop measurements on the samples were done under the application of external electric fields in the range from 0 to 6 kV/mm at a frequency of 50 Hz to understand the ferroelectric strength of the compounds. Magnetic studies on the samples were made by M-H loop measurements in the field range of ± 10 kOe. Magnetoelectric coupling measurements were made using a dynamic lock-in test set-up. The results indicate that the mixing of nickel-zinc ferrite in Bi0.95Mn0.05FeO3, in spite of the enhanced conductivity, has produced considerable improvements in saturation magnetization while retaining the remnant ferroelectric polarization at reasonable magnitudes, resulting in improved M-E coupling. Among all the composites, the composite with x = 0.5 showed the best M-E performance.
Torii, Yasushi; Goto, Yoshitaka; Nakahira, Shinji; Ginnaga, Akihiro
2014-11-01
The biological activity of botulinum toxin type A has been evaluated using the mouse intraperitoneal (ip) LD50 test. This method requires a large number of mice to precisely determine toxin activity, and, as such, poses problems with regard to animal welfare. We previously developed a compound muscle action potential (CMAP) assay using rats as an alternative method to the mouse ip LD50 test. In this study, to evaluate this quantitative method of measuring toxin activity using CMAP, we assessed the parameters necessary for quantitative tests according to ICH Q2 (R1). This assay could be used to evaluate the activity of the toxin, even when inactive toxin was mixed with the sample. To reduce the number of animals needed, this assay was set to measure two samples per animal. Linearity was detected over a range of 0.1-12.8 U/mL, and the measurement range was set at 0.4-6.4 U/mL. The results for accuracy and precision showed low variability. The body weight was selected as a variable factor, but it showed no effect on the CMAP amplitude. In this study, potency tests using the rat CMAP assay of botulinum toxin type A demonstrated that it met the criteria for a quantitative analysis method. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bouchez, Julien; Galy, Valier; Hilton, Robert G.; Gaillardet, Jérôme; Moreira-Turcq, Patricia; Pérez, Marcela Andrea; France-Lanord, Christian; Maurice, Laurence
2014-05-01
In order to reveal particulate organic carbon (POC) source and mode of transport in the largest river basin on Earth, we sampled the main sediment-laden tributaries of the Amazon system (Solimões, Madeira and Amazon) during two sampling campaigns, following vertical depth-profiles. This sampling technique takes advantage of hydrodynamic sorting to access the full range of solid erosion products transported by the river. Using the Al/Si ratio of the river sediments as a proxy for grain size, we find a general increase in POC content with Al/Si, as sediments become finer. However, the sample set shows marked variability in the POC content for a given Al/Si ratio, with the Madeira River having lower POC content across the measured range in Al/Si. The POC content is not strongly related to the specific surface area (SSA) of the suspended load, and bed sediments have a much lower POC/SSA ratio. These data suggest that SSA exerts a significant, yet partial, control on POC transport in Amazon River suspended sediment. We suggest that the role of clay mineralogy, discrete POC particles and rock-derived POC warrant further attention in order to fully understand POC transport in large rivers.
Hidden order signatures in the antiferromagnetic phase of U(Ru1-xFex)2Si2
NASA Astrophysics Data System (ADS)
Williams, T. J.; Aczel, A. A.; Stone, M. B.; Wilson, M. N.; Luke, G. M.
2017-03-01
We present a comprehensive set of elastic and inelastic neutron scattering measurements on a range of Fe-doped samples of U(Ru1-xFex)2Si2 with 0.01 ≤ x ≤ 0.15. All of the samples measured exhibit long-range antiferromagnetic order, with the size of the magnetic moment quickly increasing to 0.51 μB at 2.5% doping and continuing to increase monotonically with doping, reaching 0.69 μB at 15% doping. Time-of-flight and inelastic triple-axis measurements show the existence of excitations at (1 0 0) and (1.4 0 0) in all samples, which are also observed in the parent compound. While the excitations in the 1% doping are quantitatively identical to the parent material, the gap and width of the excitations change rapidly at 2.5% Fe doping and above. The 1% doped sample shows evidence for a separation in temperature between the hidden order and antiferromagnetic transitions, suggesting that the antiferromagnetic state emerges at very low Fe dopings. The combined neutron scattering data suggest not only discontinuous changes in the magnetic moment and excitations between the hidden order and antiferromagnetic phases, but that these changes continue to evolve up to at least x = 0.15.
Koerner, Terence B; Cleroux, Chantal; Poirier, Christine; Cantin, Isabelle; La Vieille, Sébastien; Hayward, Stephen; Dubois, Sheila
2013-01-01
A large national investigation into the extent of gluten cross-contamination of naturally gluten-free ingredients (flours and starches) sold in Canada was performed. Samples (n = 640) were purchased from eight Canadian cities and via the internet during the period 2010-2012 and analysed for gluten contamination. The results showed that 61 of the 640 (9.5%) samples were contaminated above the Codex-recommended maximum level for gluten-free products (20 mg kg⁻¹) with a range of 5-7995 mg kg⁻¹. For the ingredients that were labelled gluten-free the contamination range (5-141 mg kg⁻¹) and number of samples were lower (3 of 268). This picture was consistent over time, with approximately the same percentage of samples above 20 mg kg⁻¹ in both the initial set and the subsequent lot. Looking at the total mean (composite) contamination for specific ingredients the largest and most consistent contaminations come from higher fibre ingredients such as soy (902 mg kg⁻¹), millet (272 mg kg⁻¹) and buckwheat (153 mg kg⁻¹). Of the naturally gluten-free flours and starches tested that do not contain a gluten-free label, the higher fibre ingredients would constitute the greatest probability of being contaminated with gluten above 20 mg kg⁻¹.
Alpha Air Sample Counting Efficiency Versus Dust Loading: Evaluation of a Large Data Set
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogue, M. G.; Gause-Lott, S. M.; Owensby, B. N.
Dust loading on air sample filters is known to cause a loss of efficiency for direct counting of alpha activity on the filters, but the amount of dust loading and the correction factor needed to account for attenuated alpha particles is difficult to assess. In this paper, correction factors are developed by statistical analysis of a large database of air sample results for a uranium and plutonium processing facility at the Savannah River Site. As is typically the case, dust-loading data is not directly available, but sample volume is found to be a reasonable proxy measure; the amount of dust loading is inferred by a combination of the derived correction factors and a Monte Carlo model. The technique compares the distribution of activity ratios [beta/(beta + alpha)] by volume and applies a range of correction factors on the raw alpha count rate. The best-fit results with this method are compared with MCNP modeling of activity uniformly deposited in the dust and analytical laboratory results of digested filters. Finally, a linear fit is proposed for evenly deposited alpha activity collected on filters with dust loading over a range of about 2 mg cm⁻² to 1,000 mg cm⁻².
Alpha Air Sample Counting Efficiency Versus Dust Loading: Evaluation of a Large Data Set
Hogue, M. G.; Gause-Lott, S. M.; Owensby, B. N.; ...
2018-03-03
Dust loading on air sample filters is known to cause a loss of efficiency for direct counting of alpha activity on the filters, but the amount of dust loading and the correction factor needed to account for attenuated alpha particles is difficult to assess. In this paper, correction factors are developed by statistical analysis of a large database of air sample results for a uranium and plutonium processing facility at the Savannah River Site. As is typically the case, dust-loading data is not directly available, but sample volume is found to be a reasonable proxy measure; the amount of dust loading is inferred by a combination of the derived correction factors and a Monte Carlo model. The technique compares the distribution of activity ratios [beta/(beta + alpha)] by volume and applies a range of correction factors on the raw alpha count rate. The best-fit results with this method are compared with MCNP modeling of activity uniformly deposited in the dust and analytical laboratory results of digested filters. Finally, a linear fit is proposed for evenly deposited alpha activity collected on filters with dust loading over a range of about 2 mg cm⁻² to 1,000 mg cm⁻².
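To make the final step concrete, the sketch below fits and applies a linear correction factor versus dust loading; the loadings, correction factors, and count rate are invented for illustration and are not the Savannah River data.

```python
# Hypothetical sketch: fit a linear correction factor CF(loading) and apply it to a raw
# alpha count rate. All numbers below are invented placeholders, not facility data.
import numpy as np

loading = np.array([2.0, 50.0, 200.0, 500.0, 1000.0])   # dust loading, mg/cm^2 (assumed)
correction = np.array([1.02, 1.10, 1.35, 1.80, 2.60])   # derived correction factors (assumed)

slope, intercept = np.polyfit(loading, correction, 1)   # linear fit CF = intercept + slope*loading
print(f"CF(loading) ~= {intercept:.2f} + {slope:.4f} * loading")

raw_alpha_cpm = 12.0                    # hypothetical raw alpha count rate
cf = intercept + slope * 300.0          # correction at 300 mg/cm^2
print(f"corrected rate ~= {raw_alpha_cpm * cf:.1f} cpm")
```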
Understanding scaling through history-dependent processes with collapsing sample space.
Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan
2015-04-28
History-dependent processes are ubiquitous in natural and social systems. Many such stochastic processes, especially those that are associated with complex systems, become more constrained as they unfold, meaning that their sample space, or their set of possible outcomes, reduces as they age. We demonstrate that these sample-space-reducing (SSR) processes necessarily lead to Zipf's law in the rank distributions of their outcomes. We show that by adding noise to SSR processes the corresponding rank distributions remain exact power laws, p(x) ~ x^(-λ), where the exponent directly corresponds to the mixing ratio of the SSR process and noise. This allows us to give a precise meaning to the scaling exponent in terms of the degree to which a given process reduces its sample space as it unfolds. Noisy SSR processes further allow us to explain a wide range of scaling exponents in frequency distributions ranging from α = 2 to ∞. We discuss several applications showing how SSR processes can be used to understand Zipf's law in word frequencies, and how they are related to diffusion processes in directed networks, or aging processes such as in fragmentation processes. SSR processes provide a new alternative for understanding the origin of scaling in complex systems without recourse to multiplicative, preferential, or self-organized critical processes.
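A minimal simulation of a noise-free SSR process illustrates the central claim: starting from a sample space of size N, each step jumps uniformly to a smaller state, and the visit counts of the states approach Zipf's law. The code below is a generic illustration, not the authors' implementation.

```python
# Noise-free sample-space-reducing (SSR) process: each step draws uniformly from the
# integers below the current state; visit counts over many runs follow p(x) ~ 1/x.
import numpy as np

rng = np.random.default_rng(3)
N = 10_000                       # initial size of the sample space
visits = np.zeros(N + 1)

for _ in range(20_000):          # many independent runs of the SSR process
    state = N
    while state > 1:
        state = rng.integers(1, state)   # next state is uniform on {1, ..., state - 1}
        visits[state] += 1

x = np.arange(2, 1000)                   # fit the exponent over a mid range of states
slope, _ = np.polyfit(np.log(x), np.log(visits[x]), 1)
print(f"fitted exponent ~ {slope:.2f}  (Zipf's law predicts -1)")
```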
Miller, Mark P.; Mullins, Tom; Forsman, Eric D.; Haig, Susan M.
2017-01-01
Genetic differentiation among Spotted Owl (Strix occidentalis) subspecies has been established in prior studies. These investigations also provided evidence for introgression and hybridization among taxa but were limited by a lack of samples from geographic regions where subspecies came into close contact. We analyzed new sets of samples from Northern Spotted Owls (NSO: S. o. caurina) and California Spotted Owls (CSO: S. o. occidentalis) in northern California using mitochondrial DNA sequences (mtDNA) and 10 nuclear microsatellite loci to obtain a clearer depiction of genetic differentiation and hybridization in the region. Our analyses revealed that a NSO population close to the northern edge of the CSO range in northern California (the NSO Contact Zone population) is highly differentiated relative to other NSO populations throughout the remainder of their range. Phylogenetic analyses identified a unique lineage of mtDNA in the NSO Contact Zone, and Bayesian clustering analyses of the microsatellite data identified the Contact Zone as a third distinct population that is differentiated from CSO and NSO found in the remainder of the subspecies' range. Hybridization between NSO and CSO was readily detected in the NSO Contact Zone, with over 50% of individuals showing evidence of hybrid ancestry. Hybridization was also identified among 14% of CSO samples, which were dispersed across the subspecies' range in the Sierra Nevada Mountains. The asymmetry of hybridization suggested that the hybrid zone may be dynamic and moving. Although evidence of hybridization existed, we identified no F1 generation hybrid individuals. We instead found evidence for F2 or backcrossed individuals among our samples. The absence of F1 hybrids may indicate that (1) our 10 microsatellites were unable to distinguish hybrid types, (2) primary interactions between subspecies are occurring elsewhere on the landscape, or (3) dispersal between the subspecies' ranges is reduced relative to historical levels, potentially as a consequence of recent regional fires.
Schmidt, Debra A; Koutsos, Elizabeth A; Ellersieck, Mark R; Griffin, Mark E
2009-03-01
Serum concentrations of amino acids, fatty acids, lipoproteins, vitamins A and E, and minerals in zoo giraffes (Giraffa camelopardalis) were compared to values obtained from free-ranging giraffes in an effort to identify potential nutritional differences in the zoo population. Zoo giraffes have a specific set of maladies that may be nutritionally related, including peracute mortality, energy malnutrition, pancreatic disease, urolithiasis, hoof disease, and severe intestinal parasitism. Dietary requirements for giraffes are not known; invasive studies used with domestic animals cannot be performed on zoo animals. Though domestic animal standards are often used to evaluate nutritional health of exotic animals, they may not be the most appropriate standards to use. Serum samples from 20 zoo giraffes at 10 zoological institutions in the United States were compared to previously collected samples from 24 free-ranging giraffes in South Africa. Thirteen of the zoo animal samples were collected from animals trained for blood collection, and seven were banked samples obtained from a previous serum collection. Dietary information was also collected on each zoo giraffe; most zoo giraffe diets consisted of alfalfa-based pellets (acid detergent fiber-16), alfalfa hay, and browse in varying quantities. Differences between zoo and free-ranging giraffes, males and females, and adults and subadults were analyzed with the use of a 2 x 2 x 2 factorial and Fisher's Least Significant Difference (LSD) for mean separation. Of the 84 parameters measured, 54 (60%) were significantly different (P < or = 0.05) between zoo and free-ranging giraffes. Nine (11%) items were significantly different (P < or = 0.05) between adult and subadult animals. Only one parameter, sodium concentration, was found to be significantly different (P < or = 0.05) between genders. Further investigation in zoo giraffe diets is needed to address the differences seen in this study and the potentially related health problems.
[Study on Application of NIR Spectral Information Screening in Identification of Maca Origin].
Wang, Yuan-zhong; Zhao, Yan-li; Zhang, Ji; Jin, Hang
2016-02-01
The medicinal and edible plant Maca is rich in various nutrients and has great medicinal value. Based on near infrared diffuse reflectance spectra, 139 Maca samples collected from Peru and Yunnan were used to identify their geographical origins. Multiplicative signal correction (MSC) coupled with second derivative (SD) and Norris derivative filter (ND) was employed in spectral pretreatment. The spectrum range (7,500-4,061 cm⁻¹) was chosen by spectrum standard deviation. Combined with principal component analysis-Mahalanobis distance (PCA-MD), the appropriate number of principal components was selected as 5. Based on the spectrum range and the number of principal components selected, two abnormal samples were eliminated by the modular group iterative singular sample diagnosis method. Then, four methods were used to filter spectral variable information: competitive adaptive reweighted sampling (CARS), Monte Carlo-uninformative variable elimination (MC-UVE), genetic algorithm (GA), and subwindow permutation analysis (SPA). The spectral variable information filtered was evaluated by model population analysis (MPA). The results showed that RMSECV(SPA) > RMSECV(CARS) > RMSECV(MC-UVE) > RMSECV(GA), at 2.14, 2.05, 2.02, and 1.98, with 250, 240, 250 and 70 spectral variables retained, respectively. According to the spectral variables filtered, partial least squares discriminant analysis (PLS-DA) was used to build the model, with 97 randomly selected samples as the training set and the other 40 samples as the validation set. The results showed that R²: GA > MC-UVE > CARS > SPA, RMSEC and RMSEP: GA < MC-UVE < CARS
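The final modeling step can be sketched with scikit-learn by treating PLS-DA as PLS regression onto one-hot class labels; the 97/40 split and 5-component setting mirror the abstract, while the spectra, the number of retained variables, and the two-class assumption are placeholders.

```python
# Sketch of PLS-DA as PLS regression on one-hot class labels. The spectra are random
# placeholders and the two origin classes are an assumption for illustration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n_train, n_test, n_vars, n_classes = 97, 40, 250, 2

X_train = rng.normal(size=(n_train, n_vars))      # pretreated, variable-selected spectra
y_train = rng.integers(0, n_classes, n_train)     # geographical origin labels
X_test = rng.normal(size=(n_test, n_vars))

Y_dummy = np.eye(n_classes)[y_train]              # one-hot encode class membership
pls = PLSRegression(n_components=5).fit(X_train, Y_dummy)
pred = pls.predict(X_test).argmax(axis=1)         # assign each spectrum to the highest score
print(pred[:10])
```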
McEwan, Desmond; Harden, Samantha M; Zumbo, Bruno D; Sylvester, Benjamin D; Kaulius, Megan; Ruissen, Geralyn R; Dowd, A Justine; Beauchamp, Mark R
2016-01-01
Drawing from goal setting theory (Latham & Locke, 1991; Locke & Latham, 2002; Locke et al., 1981), the purpose of this study was to conduct a systematic review and meta-analysis of multi-component goal setting interventions for changing physical activity (PA) behaviour. A literature search returned 41,038 potential articles. Included studies consisted of controlled experimental trials wherein participants in the intervention conditions set PA goals and their PA behaviour was compared to participants in a control group who did not set goals. A meta-analysis was ultimately carried out across 45 articles (comprising 52 interventions, 126 effect sizes, n = 5912) that met eligibility criteria using a random-effects model. Overall, a medium, positive effect (Cohen's d(SE) = .552(.06), 95% CI = .43-.67, Z = 9.03, p < .001) of goal setting interventions in relation to PA behaviour was found. Moderator analyses across 20 variables revealed several noteworthy results with regard to features of the study, sample characteristics, PA goal content, and additional goal-related behaviour change techniques. In conclusion, multi-component goal setting interventions represent an effective method of fostering PA across a diverse range of populations and settings. Implications for effective goal setting interventions are discussed.
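For orientation, random-effects pooling of standardized mean differences of the kind reported here can be computed with the DerSimonian-Laird estimator; the per-study effects and variances below are invented and do not correspond to the 126 effect sizes in the review.

```python
# Toy DerSimonian-Laird random-effects pooling of Cohen's d values.
# The study-level effects and variances are hypothetical placeholders.
import numpy as np

d = np.array([0.40, 0.65, 0.30, 0.80, 0.55])   # per-study Cohen's d (hypothetical)
v = np.array([0.02, 0.05, 0.03, 0.04, 0.02])   # per-study sampling variances (hypothetical)

w = 1.0 / v                                    # fixed-effect weights
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)             # heterogeneity statistic
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / C)        # between-study variance estimate

w_re = 1.0 / (v + tau2)                        # random-effects weights
d_pooled = np.sum(w_re * d) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled d = {d_pooled:.2f}, 95% CI [{d_pooled - 1.96*se:.2f}, {d_pooled + 1.96*se:.2f}]")
```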