Systems and Methods for Correcting Optical Reflectance Measurements
NASA Technical Reports Server (NTRS)
Yang, Ye (Inventor); Shear, Michael A. (Inventor); Soller, Babs R. (Inventor); Soyemi, Olusola O. (Inventor)
2014-01-01
We disclose measurement systems and methods for measuring analytes in target regions of samples that also include features overlying the target regions. The systems include: (a) a light source; (b) a detection system; (c) a set of at least first, second, and third light ports which transmit light from the light source to a sample and receive and direct light reflected from the sample to the detection system, generating a first set of data including information corresponding to both an internal target within the sample and features overlying the internal target, and a second set of data including information corresponding to features overlying the internal target; and (d) a processor configured to remove information characteristic of the overlying features from the first set of data using the first and second sets of data to produce corrected information representing the internal target.
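The overlay-removal step in (d) can be illustrated with a minimal sketch, assuming (purely for illustration) that the overlying-feature signal scales linearly and subtracts from the deep-path spectrum; the function name and the weight `alpha` are hypothetical, not taken from the patent:

```python
def correct_reflectance(deep_spectrum, shallow_spectrum, alpha):
    """Remove the overlying-layer contribution (captured by the shallow,
    short source-detector path) from the combined deep-path spectrum.
    `alpha` is an assumed linear scaling factor for the overlying layer."""
    return [d - alpha * s for d, s in zip(deep_spectrum, shallow_spectrum)]

# A flat overlying contribution of 1.0 subtracted at full weight:
corrected = correct_reflectance([2.0, 3.0], [1.0, 1.0], alpha=1.0)
```

In practice the scaling factor would itself be estimated from the two data sets; this sketch only shows the subtraction step.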
Systems and methods for correcting optical reflectance measurements
NASA Technical Reports Server (NTRS)
Yang, Ye (Inventor); Soller, Babs R. (Inventor); Soyemi, Olusola O. (Inventor); Shear, Michael A. (Inventor)
2009-01-01
We disclose measurement systems and methods for measuring analytes in target regions of samples that also include features overlying the target regions. The systems include: (a) a light source; (b) a detection system; (c) a set of at least first, second, and third light ports which transmit light from the light source to a sample and receive and direct light reflected from the sample to the detection system, generating a first set of data including information corresponding to both an internal target within the sample and features overlying the internal target, and a second set of data including information corresponding to features overlying the internal target; and (d) a processor configured to remove information characteristic of the overlying features from the first set of data using the first and second sets of data to produce corrected information representing the internal target.
Mode synthesizing atomic force microscopy and mode-synthesizing sensing
Passian, Ali; Thundat, Thomas George; Tetard, Laurene
2013-05-17
A method of analyzing a sample that includes applying a first set of energies at a first set of frequencies to a sample and applying, simultaneously with the applying the first set of energies, a second set of energies at a second set of frequencies, wherein the first set of energies and the second set of energies form a multi-mode coupling. The method further includes detecting an effect of the multi-mode coupling.
Mode-synthesizing atomic force microscopy and mode-synthesizing sensing
Passian, Ali; Thundat, Thomas George; Tetard, Laurene
2014-07-22
A method of analyzing a sample that includes applying a first set of energies at a first set of frequencies to a sample and applying, simultaneously with the applying the first set of energies, a second set of energies at a second set of frequencies, wherein the first set of energies and the second set of energies form a multi-mode coupling. The method further includes detecting an effect of the multi-mode coupling.
Remote temperature-set-point controller
Burke, W.F.; Winiecki, A.L.
1984-10-17
An instrument is described for carrying out mechanical strain tests on metallic samples with the addition of means for varying the temperature with strain. The instrument includes opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.
Remote temperature-set-point controller
Burke, William F.; Winiecki, Alan L.
1986-01-01
An instrument for carrying out mechanical strain tests on metallic samples with the addition of an electrical system for varying the temperature with strain, the instrument including opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.
Synchronizing data from irregularly sampled sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uluyol, Onder
A system and method include receiving a set of sampled measurements for each of multiple sensors, wherein the sampled measurements are at irregular intervals or different rates, re-sampling the sampled measurements of each of the multiple sensors at a higher rate than one of the sensor's set of sampled measurements, and synchronizing the sampled measurements of each of the multiple sensors.
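One plausible way to implement the re-sampling step — not necessarily the patented method — is linear interpolation of each sensor's irregular series onto a shared, higher-rate uniform grid; the function below is an illustrative sketch:

```python
def resample(times, values, grid):
    """Linearly interpolate an irregularly sampled series (times, values)
    onto a uniform time grid; endpoints are held constant outside the
    observed range."""
    out = []
    j = 0
    for t in grid:
        # Advance to the interval [times[j], times[j+1]] containing t.
        while j + 1 < len(times) and times[j + 1] < t:
            j += 1
        if t <= times[0]:
            out.append(values[0])
        elif t >= times[-1]:
            out.append(values[-1])
        else:
            t0, t1 = times[j], times[j + 1]
            v0, v1 = values[j], values[j + 1]
            out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return out
```

Resampling every sensor onto the same grid synchronizes them: after this step, index i refers to the same instant for all sensors.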
Connor, Thomas H; Smith, Jerome P
2016-09-01
At the present time, the method of choice to determine surface contamination of the workplace with antineoplastic and other hazardous drugs is surface wipe sampling and subsequent sample analysis with a variety of analytical techniques. The purpose of this article is to review current methodology for determining the level of surface contamination with hazardous drugs in healthcare settings and to discuss recent advances in this area. In addition, it provides some guidance for conducting surface wipe sampling and sample analysis for these drugs in healthcare settings. Published studies on the use of wipe sampling to measure hazardous drugs on surfaces in healthcare settings were reviewed. These studies include the use of well-documented chromatographic techniques for sample analysis in addition to newly evolving technology that provides rapid analysis of specific antineoplastics. Methodology for the analysis of surface wipe samples for hazardous drugs is reviewed, including the purposes, technical factors, sampling strategy, materials required, and limitations. The use of lateral flow immunoassay (LFIA) and fluorescence covalent microbead immunosorbent assay (FCMIA) for surface wipe sample evaluation is also discussed. Current recommendations are that all healthcare settings where antineoplastic and other hazardous drugs are handled include surface wipe sampling as part of a comprehensive hazardous drug-safe handling program. Surface wipe sampling may be used as a method to characterize potential occupational dermal exposure risk and to evaluate the effectiveness of implemented controls and the overall safety program. New technology, although currently limited in scope, may make wipe sampling for hazardous drugs more routine, less costly, and provide a shorter response time than classical analytical techniques now in use.
Nonlinear interferometric vibrational imaging
NASA Technical Reports Server (NTRS)
Boppart, Stephen A. (Inventor); Marks, Daniel L. (Inventor)
2009-01-01
A method of examining a sample, which includes: exposing a reference to a first set of electromagnetic radiation, to form a second set of electromagnetic radiation scattered from the reference; exposing a sample to a third set of electromagnetic radiation to form a fourth set of electromagnetic radiation scattered from the sample; and interfering the second set of electromagnetic radiation and the fourth set of electromagnetic radiation. The first set and the third set of electromagnetic radiation are generated from a source; at least a portion of the second set of electromagnetic radiation is of a frequency different from that of the first set of electromagnetic radiation; and at least a portion of the fourth set of electromagnetic radiation is of a frequency different from that of the third set of electromagnetic radiation.
A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers
Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...
2018-03-28
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved.
A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved.
A ricin forensic profiling approach based on a complex set of biomarkers.
Fredriksson, Sten-Åke; Wunschel, David S; Lindström, Susanne Wiklund; Nilsson, Calle; Wahl, Karen; Åstot, Crister
2018-08-15
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved. Copyright © 2018 Elsevier B.V. All rights reserved.
GUM Analysis for TIMS Isotopic Ratios in BEP0 Graphite Qualification Samples, Round 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerlach, David C.; Heasler, Patrick G.; Reid, Bruce D.
In May 2007, one set of three samples from NBL was addressed to Steve Petersen for TIMS analysis, and included BEP0 samples numbered 27008, 30986, and 50846. All cores were trimmed by tooling and lightly cleaned by CO2 pellet blasting. Small discs were cut from the second set of samples for SIMS analysis, with the remainder of each used for TIMS preparation.
Ghana watershed prototype products
2007-01-01
A number of satellite data sets are available through the U.S. Geological Survey (USGS) for monitoring land surface features. Representative data sets include Landsat, Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), and Shuttle Radar Topography Mission (SRTM). The Ghana Watershed Prototype Products cover an area within southern Ghana, Africa, and include examples of the aforementioned data sets along with sample SRTM derivative data sets.
Results and analysis of saltstone cores taken from saltstone disposal unit cell 2A
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reigel, M. M.; Hill, K. A.
2016-03-01
As part of an ongoing Performance Assessment (PA) Maintenance Plan, Savannah River Remediation (SRR) has developed a sampling and analyses strategy to facilitate the comparison of field-emplaced samples (i.e., saltstone placed and cured in a Saltstone Disposal Unit (SDU)) with samples prepared and cured in the laboratory. The primary objectives of the Sampling and Analyses Plan (SAP) are: (1) to demonstrate a correlation between the measured properties of laboratory-prepared, simulant samples (termed Sample Set 3) and the field-emplaced saltstone samples (termed Sample Set 9), and (2) to validate property values assumed for the Saltstone Disposal Facility (SDF) PA modeling. The analysis and property data for Sample Set 9 (i.e., six core samples extracted from SDU Cell 2A (SDU2A)) are documented in this report, and where applicable, the results are compared to the results for Sample Set 3. Relevant properties to demonstrate the aforementioned objectives include bulk density, porosity, saturated hydraulic conductivity (SHC), and radionuclide leaching behavior.
Thompson, Steven K
2006-12-01
A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
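The link-tracing flavor of these designs can be sketched as follows; this toy version omits the mixture distribution, sample-path averaging, and Markov chain resampling that the paper describes, and all names and parameters are illustrative:

```python
def adaptive_sample(graph, values, seeds, threshold, budget):
    """Link-tracing sketch: sample units from an active set; whenever a
    sampled unit's value meets the threshold, its network neighbours are
    added to the active set, so effort concentrates near high values."""
    sample, active = [], list(seeds)
    while active and len(sample) < budget:
        u = active.pop(0)
        if u in sample:
            continue
        sample.append(u)
        if values[u] >= threshold:
            active.extend(graph.get(u, []))
    return sample

# Unit 0 is high-valued, so its neighbour 1 is traced; unit 1 is not,
# so tracing stops there.
picked = adaptive_sample({0: [1], 1: [2]}, {0: 1, 1: 0, 2: 1}, [0], 1, 3)
```

The `budget` cap corresponds to the design's control over sample size; a real implementation would select adaptively with a mixture probability rather than deterministically.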
Predictions of GPS X-Set Performance during the Places Experiment
1979-07-01
A previously existing GPS X-set receiver simulation was modified to include the received signal spectrum and the receiver code correlation operation. The X-set receiver simulation documented in Reference 3-1 is a direct sampled-data digital implementation of the GPS X-set. [Figure 3-6: Simplified block diagram of code correlator operation and I-Q sampling.]
NASA Astrophysics Data System (ADS)
Erener, Arzu; Sivas, A. Abdullah; Selcuk-Kestel, A. Sevtap; Düzgün, H. Sebnem
2017-07-01
All quantitative landslide susceptibility mapping (QLSM) methods require two basic data types, namely, a landslide inventory and factors that influence landslide occurrence (landslide influencing factors, LIF). Depending on the type of landslides, the nature of triggers, and the LIF, the accuracy of QLSM methods differs. Moreover, how to balance the number of 0s (non-occurrence) and 1s (occurrence) in the training set obtained from the landslide inventory, and how to select which of the 1s and 0s to include in QLSM models, play a critical role in the accuracy of the QLSM. Although the performance of various QLSM methods is widely investigated in the literature, the challenge of training set construction is not adequately investigated for the QLSM methods. In order to tackle this challenge, in this study three different training set selection strategies, along with the original data set, are used for testing the performance of three different regression methods, namely Logistic Regression (LR), Bayesian Logistic Regression (BLR), and Fuzzy Logistic Regression (FLR). The first sampling strategy is proportional random sampling (PRS), which takes into account a weighted selection of landslide occurrences in the sample set. The second method, namely non-selective nearby sampling (NNS), includes randomly selected sites and their surrounding neighboring points at certain preselected distances to include the impact of clustering. Selective nearby sampling (SNS) is the third method, which concentrates on the group of 1s and their surrounding neighborhood. A randomly selected group of landslide sites and their neighborhood are considered in the analyses similar to the NNS parameters. It is found that the LR-PRS, FLR-PRS, and BLR-Whole Data set-ups, in that order, yield the best fits among the alternatives. The results indicate that in QLSM based on regression models, avoidance of spatial correlation in the data set is critical for the model's performance.
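The PRS idea — drawing a training set with a controlled proportion of occurrences (1s) and non-occurrences (0s) — can be sketched as below; the function and its parameters are illustrative, not the authors' code:

```python
import random

def proportional_sample(data, n, ratio_ones=0.5, seed=0):
    """Draw a training set of size n from (feature, label) pairs with a
    target proportion of occurrence (label 1) samples, mimicking the
    balanced-training-set idea behind PRS."""
    rng = random.Random(seed)
    ones = [d for d in data if d[1] == 1]
    zeros = [d for d in data if d[1] == 0]
    k1 = min(int(n * ratio_ones), len(ones))
    k0 = min(n - k1, len(zeros))
    return rng.sample(ones, k1) + rng.sample(zeros, k0)
```

NNS and SNS would additionally pull in spatial neighbours of the selected sites, which is exactly what introduces the spatial correlation the abstract warns about.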
47 CFR 1.363 - Introduction of statistical data.
Code of Federal Regulations, 2010 CFR
2010-10-01
... case of sample surveys, there shall be a clear description of the survey design, including the... evidence in common carrier hearing proceedings, including but not limited to sample surveys, econometric... description of the experimental design shall be set forth, including a specification of the controlled...
47 CFR 1.363 - Introduction of statistical data.
Code of Federal Regulations, 2013 CFR
2013-10-01
... case of sample surveys, there shall be a clear description of the survey design, including the... evidence in common carrier hearing proceedings, including but not limited to sample surveys, econometric... description of the experimental design shall be set forth, including a specification of the controlled...
47 CFR 1.363 - Introduction of statistical data.
Code of Federal Regulations, 2014 CFR
2014-10-01
... case of sample surveys, there shall be a clear description of the survey design, including the... evidence in common carrier hearing proceedings, including but not limited to sample surveys, econometric... description of the experimental design shall be set forth, including a specification of the controlled...
47 CFR 1.363 - Introduction of statistical data.
Code of Federal Regulations, 2012 CFR
2012-10-01
... case of sample surveys, there shall be a clear description of the survey design, including the... evidence in common carrier hearing proceedings, including but not limited to sample surveys, econometric... description of the experimental design shall be set forth, including a specification of the controlled...
47 CFR 1.363 - Introduction of statistical data.
Code of Federal Regulations, 2011 CFR
2011-10-01
... case of sample surveys, there shall be a clear description of the survey design, including the... evidence in common carrier hearing proceedings, including but not limited to sample surveys, econometric... description of the experimental design shall be set forth, including a specification of the controlled...
The prevalence of terraced treescapes in analyses of phylogenetic data sets.
Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J
2018-04-04
The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. 
Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.
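Taxon coverage density, as defined above (the proportion of taxon-by-gene positions with any data present), is straightforward to compute; this sketch assumes a boolean taxon-by-gene presence matrix:

```python
def coverage_density(matrix):
    """Taxon coverage density: the fraction of taxon-by-gene cells with
    any data present (True = a sequence is available for that taxon/gene)."""
    cells = [c for row in matrix for c in row]
    return sum(cells) / len(cells)

# Two taxa by two genes, one missing cell: density 3/4.
density = coverage_density([[True, False], [True, True]])
```

Per the survey's findings, terraces appeared in nearly all data sets with density below 0.90, so a quick check like this flags data sets at risk.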
Wygant, Dustin B; Ben-Porath, Yossef S; Arbisi, Paul A; Berry, David T R; Freeman, David B; Heilbronner, Robert L
2009-11-01
The current study examined the effectiveness of the MMPI-2 Restructured Form (MMPI-2-RF; Ben-Porath and Tellegen, 2008) over-reporting indicators in civil forensic settings. The MMPI-2-RF includes three revised MMPI-2 over-reporting validity scales and a new scale to detect over-reported somatic complaints. Participants dissimulated medical and neuropsychological complaints in two simulation samples, and a known-groups sample used symptom validity tests as a response bias criterion. Results indicated large effect sizes for the MMPI-2-RF validity scales, including a Cohen's d of .90 for Fs in a head injury simulation sample, 2.31 for FBS-r, 2.01 for F-r, and 1.97 for Fs in a medical simulation sample, and 1.45 for FBS-r and 1.30 for F-r in identifying poor effort on SVTs. Classification results indicated good sensitivity and specificity for the scales across the samples. This study indicates that the MMPI-2-RF over-reporting validity scales are effective at detecting symptom over-reporting in civil forensic settings.
Legacy and currently used pesticides in the atmospheric environment of Lake Victoria, East Africa.
Arinaitwe, Kenneth; Kiremire, Bernard T; Muir, Derek C G; Fellin, Phil; Li, Henrik; Teixeira, Camilla; Mubiru, Drake N
2016-02-01
The Lake Victoria watershed has extensive agricultural activity with a long history of pesticide use but there is limited information on historical use or on environmental levels. To address this data gap, high volume air samples were collected from two sites close to the northern shore of Lake Victoria; Kakira (KAK) and Entebbe (EBB). The samples, to be analyzed for pesticides, were collected over various periods between 1999 and 2004 inclusive (KAK 1999-2000, KAK 2003-2004, EBB 2003 and EBB 2004 sample sets) and from 2008 to 2010 inclusive (EBB 2008, EBB 2009 and EBB 2010 sample sets). The latter sample sets (which also included precipitation samples) were also analyzed for currently used pesticides (CUPs) including chlorpyrifos, chlorthalonil, metribuzin, trifluralin, malathion and dacthal. Chlorpyrifos was the predominant CUP in air samples with average concentrations of 93.5, 26.1 and 3.54 ng m(-3) for the EBB 2008, 2009, 2010 sample sets, respectively. Average concentrations of total endosulfan (ΣEndo), total DDT related compounds (ΣDDTs) and hexachlorocyclohexanes (ΣHCHs) ranged from 12.3-282, 22.8-130 and 3.72-81.8 pg m(-3), respectively, for all the sample sets. Atmospheric prevalence of residues of persistent organic pollutants (POPs) increased with fresh emissions of endosulfan, DDT and lindane. Hexachlorobenzene (HCB), pentachlorobenzene (PeCB) and dieldrin were also detected in air samples. Transformation products, pentachloroanisole, 3,4,5-trichloroveratrole and 3,4,5,6-tetrachloroveratrole, were also detected. The five most prevalent compounds in the precipitation samples were in the order chlorpyrifos>chlorothalonil>ΣEndo>ΣDDTs>ΣHCHs with average fluxes of 1123, 396, 130, 41.7 and 41.3 ng m(-2)sample(-1), respectively. PeCB exceeded HCB in precipitation samples. The reverse was true for air samples. Backward air trajectories suggested transboundary and local emission sources of the analytes. 
The results underscore the need for a concerted regional vigilance in management of chemicals. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Dupnick, E.; Wiggins, D.
1980-01-01
The scheduling algorithm for mission planning and logistics evaluation (SAMPLE) is presented. Two major subsystems are included: the mission payloads program and the set covering program. Formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.
Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H
2013-02-05
An essential part to calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory prepared or determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can be from different sources including pure component interference samples, blanks, and constant analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
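The shrinkage side of the Tikhonov trade-off can be seen in the one-dimensional case, where the regularization parameter pulls the coefficient toward zero; this scalar sketch is only a caricature of PCTR, which additionally steers the model direction away from the nonanalyte spectra:

```python
def ridge_1d(x, y, lam):
    """One-dimensional ridge (Tikhonov) estimate. The penalty lam is added
    to the normal-equation denominator, shrinking the coefficient toward
    zero as lam grows."""
    return sum(xi * yi for xi, yi in zip(x, y)) / (sum(xi * xi for xi in x) + lam)

# With lam = 0 this is ordinary least squares; raising lam shrinks the fit.
beta_ols = ridge_1d([1.0, 1.0], [2.0, 2.0], 0.0)
beta_reg = ridge_1d([1.0, 1.0], [2.0, 2.0], 2.0)
```

In the multivariate setting the same trade-off is tuned so the model stays accurate for the analyte while remaining nearly orthogonal to the nonanalyte directions.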
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-19
... protect the waterways, waterway users, and vessels from hazards associated with intensive fish sampling... sampling efforts will include the setting of nets throughout this portion of the Chicago Sanitary and Ship Canal. The purpose of this sampling is to provide essential information in connection with efforts to...
Chen, I-Jen; Foloppe, Nicolas
2013-12-15
Computational conformational sampling underpins much of molecular modeling and design in pharmaceutical work. The sampling of smaller drug-like compounds has been an active area of research. However, few studies have tested in details the sampling of larger more flexible compounds, which are also relevant to drug discovery, including therapeutic peptides, macrocycles, and inhibitors of protein-protein interactions. Here, we investigate extensively mainstream conformational sampling methods on three carefully curated compound sets, namely the 'Drug-like', larger 'Flexible', and 'Macrocycle' compounds. These test molecules are chemically diverse with reliable X-ray protein-bound bioactive structures. The compared sampling methods include Stochastic Search and the recent LowModeMD from MOE, all the low-mode based approaches from MacroModel, and MD/LLMOD recently developed for macrocycles. In addition to default settings, key parameters of the sampling protocols were explored. The performance of the computational protocols was assessed via (i) the reproduction of the X-ray bioactive structures, (ii) the size, coverage and diversity of the output conformational ensembles, (iii) the compactness/extendedness of the conformers, and (iv) the ability to locate the global energy minimum. The influence of the stochastic nature of the searches on the results was also examined. Much better results were obtained by adopting search parameters enhanced over the default settings, while maintaining computational tractability. In MOE, the recent LowModeMD emerged as the method of choice. Mixed torsional/low-mode from MacroModel performed as well as LowModeMD, and MD/LLMOD performed well for macrocycles. The low-mode based approaches yielded very encouraging results with the flexible and macrocycle sets. Thus, one can productively tackle the computational conformational search of larger flexible compounds for drug discovery, including macrocycles. Copyright © 2013 Elsevier Ltd. 
All rights reserved.
NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR VOCS IN REPLICATES
This data set includes analytical results for measurements of VOCs in 204 duplicate (replicate) samples. Measurements were made for up to 23 VOCs in samples of air, water, and blood. Duplicate samples (samples collected along with or next to the original samples) were collected t...
NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR METALS IN REPLICATES
This data set includes analytical results for measurements of metals in 490 duplicate (replicate) samples and for particles in 130 duplicate samples. Measurements were made for up to 11 metals in samples of air, dust, water, blood, and urine. Duplicate samples (samples collected ...
A standard bacterial isolate set for research on contemporary dairy spoilage.
Trmčić, A; Martin, N H; Boor, K J; Wiedmann, M
2015-08-01
Food spoilage is an ongoing issue that could be dealt with more efficiently if some standardization and unification was introduced in this field of research. For example, research and development efforts to understand and reduce food spoilage can greatly be enhanced through availability and use of standardized isolate sets. To address this critical issue, we have assembled a standard isolate set of dairy spoilers and other selected nonpathogenic organisms frequently associated with dairy products. This publicly available bacterial set consists of (1) 35 gram-positive isolates including 9 Bacillus and 15 Paenibacillus isolates and (2) 16 gram-negative isolates including 4 Pseudomonas and 8 coliform isolates. The set includes isolates obtained from samples of pasteurized milk (n=43), pasteurized chocolate milk (n=1), raw milk (n=1), cheese (n=2), as well as isolates obtained from samples obtained from dairy-powder production (n=4). Analysis of growth characteristics in skim milk broth identified 16 gram-positive and 13 gram-negative isolates as psychrotolerant. Additional phenotypic characterization of isolates included testing for activity of β-galactosidase and lipolytic and proteolytic enzymes. All groups of isolates included in the isolate set exhibited diversity in growth and enzyme activity. Source data for all isolates in this isolate set are publicly available in the FoodMicrobeTracker database (http://www.foodmicrobetracker.com), which allows for continuous updating of information and advancement of knowledge on dairy-spoilage representatives included in this isolate set. This isolate set along with publicly available isolate data provide a unique resource that will help advance knowledge of dairy-spoilage organisms as well as aid industry in development and validation of new control strategies. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Cross-Domain Semi-Supervised Learning Using Feature Formulation.
Xingquan Zhu
2011-12-01
Semi-Supervised Learning (SSL) traditionally makes use of unlabeled samples by including them in the training set through an automated labeling process. Such a primitive Semi-Supervised Learning (pSSL) approach suffers from a number of disadvantages, including false labeling and an inability to utilize out-of-domain samples. In this paper, we propose a formative Semi-Supervised Learning (fSSL) framework which explores hidden features between labeled and unlabeled samples to achieve semi-supervised learning. fSSL assumes that both labeled and unlabeled samples are generated from a set of hidden concepts, with labeling information partially observable for some samples. The key to fSSL is to recover the hidden concepts and take them as new features to link labeled and unlabeled samples for semi-supervised learning. Because unlabeled samples are used only to generate new features, rather than being explicitly included in the training set as in pSSL, fSSL overcomes the inherent disadvantages of traditional pSSL methods, especially for samples not within the same domain as the labeled instances. Experimental results and comparisons demonstrate that fSSL significantly outperforms pSSL-based methods for both within-domain and cross-domain semi-supervised learning.
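The abstract does not specify the fSSL algorithm itself; purely as an illustration of the general idea (derive hidden-concept features from labeled and unlabeled data together, then train only on the labeled rows), the following sketch uses non-negative matrix factorization as a stand-in concept extractor. The data, component count, and model choices are all assumptions, not the authors' method:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = rng.random((40, 20))             # labeled samples (synthetic)
y_lab = rng.integers(0, 2, 40)
X_unl = rng.random((200, 20))            # unlabeled, possibly out-of-domain

# Recover hidden "concepts" from labeled and unlabeled data together...
nmf = NMF(n_components=5, init="random", random_state=0, max_iter=500)
H_all = nmf.fit_transform(np.vstack([X_lab, X_unl]))

# ...but train only on the labeled rows, now represented by concept features,
# so unlabeled samples never enter the training set directly.
clf = LogisticRegression().fit(H_all[:40], y_lab)
preds = clf.predict(H_all[:40])
```

The unlabeled rows influence the learned concepts but contribute no (possibly false) labels, which is the distinction the paper draws against pSSL.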
Ritchie, Andrew M; Lo, Nathan; Ho, Simon Y W
2017-05-01
In Bayesian phylogenetic analyses of genetic data, prior probability distributions need to be specified for the model parameters, including the tree. When Bayesian methods are used for molecular dating, available tree priors include those designed for species-level data, such as the pure-birth and birth-death priors, and coalescent-based priors designed for population-level data. However, molecular dating methods are frequently applied to data sets that include multiple individuals across multiple species. Such data sets violate the assumptions of both the speciation and coalescent-based tree priors, making it unclear which should be chosen and whether this choice can affect the estimation of node times. To investigate this problem, we used a simulation approach to produce data sets with different proportions of within- and between-species sampling under the multispecies coalescent model. These data sets were then analyzed under pure-birth, birth-death, constant-size coalescent, and skyline coalescent tree priors. We also explored the ability of Bayesian model testing to select the best-performing priors. We confirmed the applicability of our results to empirical data sets from cetaceans, phocids, and coregonid whitefish. Estimates of node times were generally robust to the choice of tree prior, but some combinations of tree priors and sampling schemes led to large differences in the age estimates. In particular, the pure-birth tree prior frequently led to inaccurate estimates for data sets containing a mixture of inter- and intraspecific sampling, whereas the birth-death and skyline coalescent priors produced stable results across all scenarios. Model testing provided an adequate means of rejecting inappropriate tree priors. Our results suggest that tree priors do not strongly affect Bayesian molecular dating results in most cases, even when severely misspecified. 
However, the choice of tree prior can be significant for the accuracy of dating results in the case of data sets with mixed inter- and intraspecies sampling. [Bayesian phylogenetic methods; model testing; molecular dating; node time; tree prior.]. © The authors 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For permissions, please e-mail: journals.permission@oup.com.
NHEXAS PHASE I REGION 5 STUDY--METALS IN BLOOD ANALYTICAL RESULTS
This data set includes analytical results for measurements of metals in 165 blood samples. These samples were collected to examine the relationships between personal exposure measurements, environmental measurements, and body burden. Venous blood samples were collected by venipun...
NHEXAS PHASE I REGION 5 STUDY--VOCS IN BLOOD ANALYTICAL RESULTS
This data set includes analytical results for measurements of VOCs (volatile organic compounds) in 145 blood samples. These samples were collected to examine the relationships between personal exposure measurements, environmental measurements, and body burden. Venous blood sample...
NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR VOCS IN BLANKS
This data set includes analytical results for measurements of VOCs in 88 blank samples. Measurements were made for up to 23 VOCs in blank samples of air, water, and blood. Blank samples were used to assess the potential for sample contamination during collection, storage, shipmen...
Classification of adulterated honeys by multivariate analysis.
Amiry, Saber; Esmaiili, Mohsen; Alizadeh, Mohammad
2017-06-01
In this research, honey samples were adulterated with date syrup (DS) and invert sugar syrup (IS) at three concentrations (7%, 15% and 30%). 102 adulterated samples were prepared in six batches with 17 replications for each batch. For each sample, 32 parameters, including color indices and rheological, physical, and chemical parameters, were determined. To classify the samples based on the type and concentration of adulterant, a multivariate analysis was applied using principal component analysis (PCA) followed by linear discriminant analysis (LDA). Then, 21 principal components (PCs) were selected in five sets. Approximately two-thirds of the samples were identified correctly using color indices (62.75%) or rheological properties (67.65%). Powerful discrimination was obtained using physical properties (97.06%), and the best separations were achieved using two sets of chemical properties (set 1: lactone, diastase activity, sucrose - 100%) (set 2: free acidity, HMF, ash - 95%). Copyright © 2016 Elsevier Ltd. All rights reserved.
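A PCA-followed-by-LDA classifier of the kind described can be sketched with scikit-learn. The synthetic data, the amount of class separation, and the scaling step below are illustrative assumptions, not the study's measurements; only the shapes (102 samples, 32 parameters, six batches, 21 PCs) follow the abstract:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# 102 synthetic "samples" x 32 measured parameters, 6 adulteration classes
X = rng.random((102, 32))
y = np.repeat(np.arange(6), 17)          # 6 batches x 17 replications
X += y[:, None] * 0.15                   # give the classes some separation

# Reduce to 21 PCs, then discriminate the classes in that reduced space
model = make_pipeline(StandardScaler(), PCA(n_components=21),
                      LinearDiscriminantAnalysis())
model.fit(X, y)
acc = model.score(X, y)
print(f"resubstitution accuracy: {acc:.2%}")
```

On real data, accuracy should of course be reported on held-out samples rather than by resubstitution as in this toy sketch.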
CTEPP NC DATA SUPPLEMENTAL INFORMATION ON FIELD AND LABORATORY SAMPLES
This data set contains supplemental data related to the final core analytical results table. This includes sample collection data such as sample weight, air volume, creatinine, and specific gravity.
The Children’s Total Exposure to Persistent Pesticides and Other Persistent...
Apollo-11 lunar sample information catalogue
NASA Technical Reports Server (NTRS)
Kramer, F. E. (Compiler); Twedell, D. B. (Compiler); Walton, W. J. A., Jr. (Compiler)
1977-01-01
The Apollo 11 mission is reviewed with emphasis on the collection of lunar samples, their geologic setting, early processing, and preliminary examination. The experience gained during five subsequent missions was applied to obtain physical-chemical data for each sample using photographic and binocular microscope techniques. Topics discussed include: the binocular examination procedure; breccia clast descriptions; the thin section examination procedure; typical breccia in thin section; typical basalt in thin section; sample histories; and chemical and age data. An index to photographs is included.
Putnam, Larry D.; Hoogestraat, Galen K.; Sawyer, J. Foster
2008-01-01
Onsite wastewater disposal systems (OWDS) are used extensively in the Black Hills of South Dakota where many of the watersheds and aquifers are characterized by fractured or solution-enhanced bedrock with thin soil cover. A study was conducted during 2006-08 to characterize water-quality effects and indicators of OWDS. Water samples were collected and analyzed for potential indicators of OWDS, including chloride, bromide, boron, nitrite plus nitrate (NO2+NO3), ammonia, major ions, nutrients, selected trace elements, isotopes of nitrate, microbiological indicators, and organic wastewater compounds (OWCs). The microbiological indicators were fecal coliforms, Escherichia coli (E. coli), enterococci, Clostridium perfringens (C. perfringens), and coliphages. Sixty ground-water sampling sites were located either downgradient from areas of dense OWDS or in background areas and included 25 monitoring wells, 34 private wells, and 1 spring. Nine surface-water sampling sites were located on selected streams and tributaries either downstream or upstream from residential development within the Precambrian setting. Sampling results were grouped by their hydrogeologic setting: alluvial, Spearfish, Minnekahta, and Precambrian. Mean downgradient dissolved NO2+NO3 concentrations in ground water for the alluvial, Spearfish, Minnekahta, and Precambrian settings were 0.734, 7.90, 8.62, and 2.25 milligrams per liter (mg/L), respectively. Mean downgradient dissolved chloride concentrations in ground water for these settings were 324, 89.6, 498, and 33.2 mg/L, respectively. Mean downgradient dissolved boron concentrations in ground water for these settings were 736, 53, 64, and 43 micrograms per liter (ug/L), respectively. Mean dissolved surface-water concentrations for NO2+NO3, chloride, and boron for downstream sites were 0.222 mg/L, 32.1 mg/L, and 28 ug/L, respectively. 
Mean values of delta-15N and delta-18O (isotope ratios of 15N to 14N and 18O to 16O relative to standard ratios) for nitrate in ground-water samples were 10.4 and -2.0 per mil, respectively, indicating a relatively small contribution from synthetic fertilizer and probably a substantial contribution from OWDS. The surface-water sample with the highest dissolved NO2+NO3 concentration of 1.6 mg/L had a delta-15N value of 12.36 per mil, which indicates warm-blooded animals (including humans) as the nitrate source. Fecal coliforms were detected in downgradient ground water most frequently in the Spearfish (19 percent) and Minnekahta (9.7 percent) settings. E. coli was detected most frequently in the Minnekahta (29 percent) and Spearfish (13 percent) settings. Enterococci were detected more frequently than other microbiological indicators in all four settings. Fecal coliforms and E. coli were detected in 73 percent and 95 percent of all surface-water samples, respectively. Enterococci, coliphages (somatic), and C. perfringens were detected in 50, 70, and 50 percent of surface-water samples, respectively. Of the 62 OWC analytes, 12 were detected only in environmental samples, 10 were detected in at least one environmental and one blank sample (not necessarily companion pairs), 2 were detected only in blank samples, and 38 were not detected in any blank, environmental, or replicate sample from either ground or surface water. Eleven different organic compounds were detected in ground-water samples at eight different sites. The most frequently occurring compound was DEET, which was found in 32 percent of the environmental samples, followed by tetrachloroethene, which was detected in 20 percent of the samples. For surface-water samples, 16 organic compounds were detected in 9 of the 10 total samples. The compound with the highest occurrence in surface-water samples was camphor, which was detected in 50 percent of samples.
The alluvial setting was characterized by relatively low dissolved NO2+NO3 concentrations, detection of ammonia nitrogen, and relatively high concentr
Lunar and Meteorite Thin Sections for Undergraduate and Graduate Studies
NASA Astrophysics Data System (ADS)
Allen, J.; Allen, C.
2012-12-01
The Johnson Space Center (JSC) has the unique responsibility to curate NASA's extraterrestrial samples from past and future missions. Curation includes documentation, preservation, preparation, and distribution of samples for research, education, and public outreach. Studies of rock and soil samples from the Moon and meteorites continue to yield useful information about the early history of the Moon, the Earth, and the inner solar system. Petrographic Thin Section Packages containing polished thin sections of samples from either the Lunar or Meteorite collections have been prepared. Each set of twelve sections of Apollo lunar samples or twelve sections of meteorites is available for loan from JSC. The thin section sets are designed for use in domestic college and university courses in petrology. The loan period is very strict and limited to two weeks. Contact Ms. Mary Luckey, Education Sample Curator (mary.k.luckey@nasa.gov). Each set of slides is accompanied by teaching materials and a sample disk of representative lunar or meteorite samples. It is important to note that the samples in these sets are not exactly the same as the ones listed here; this list represents one set of samples. A key education resource available on the Curation website is the Antarctic Meteorite Teaching Collection: Educational Meteorite Thin Sections, originally compiled by Bevan French, Glenn McPherson, and Roy Clarke and revised by Kevin Righter in 2010. Curation websites: College and university staff and students are encouraged to access the Lunar Petrographic Thin Section Set publication and the Meteorite Petrographic Thin Section Package resource, which feature many thin section images and detailed descriptions of the samples and research results: http://curator.jsc.nasa.gov/Education/index.cfm. Request research samples at http://curator.jsc.nasa.gov/ or JSC-CURATION-EDUCATION-DISKS@mail.nasa.gov. Lunar Thin Sections; Meteorite Thin Sections;
Potential High Priority Subaerial Environments for Mars Sample Return
NASA Astrophysics Data System (ADS)
iMOST Team; Bishop, J. L.; Horgan, B.; Benning, L. G.; Carrier, B. L.; Hausrath, E. M.; Altieri, F.; Amelin, Y.; Ammannito, E.; Anand, M.; Beaty, D. W.; Borg, L. E.; Boucher, D.; Brucato, J. R.; Busemann, H.; Campbell, K. A.; Czaja, A. D.; Debaille, V.; Des Marais, D. J.; Dixon, M.; Ehlmann, B. L.; Farmer, J. D.; Fernandez-Remolar, D. C.; Fogarty, J.; Glavin, D. P.; Goreva, Y. S.; Grady, M. M.; Hallis, L. J.; Harrington, A. D.; Herd, C. D. K.; Humayun, M.; Kleine, T.; Kleinhenz, J.; Mangold, N.; Mackelprang, R.; Mayhew, L. E.; McCubbin, F. M.; Mccoy, J. T.; McLennan, S. M.; McSween, H. Y.; Moser, D. E.; Moynier, F.; Mustard, J. F.; Niles, P. B.; Ori, G. G.; Raulin, F.; Rettberg, P.; Rucker, M. A.; Schmitz, N.; Sefton-Nash, E.; Sephton, M. A.; Shaheen, R.; Shuster, D. L.; Siljestrom, S.; Smith, C. L.; Spry, J. A.; Steele, A.; Swindle, T. D.; ten Kate, I. L.; Tosca, N. J.; Usui, T.; Van Kranendonk, M. J.; Wadhwa, M.; Weiss, B. P.; Werner, S. C.; Westall, F.; Wheeler, R. M.; Zipfel, J.; Zorzano, M. P.
2018-04-01
The highest priority subaerial environments for Mars Sample Return include subaerial weathering (paleosols, periglacial/glacial, and rock coatings/rinds), wetlands (mineral precipitates, redox environments, and salt ponds), or cold spring settings.
CTEPP-OH DATA SUPPLEMENTAL INFORMATION ON FIELD AND LABORATORY SAMPLES
This data set contains supplemental data related to the final core analytical results table for CTEPP-OH. This includes sample collection data such as sample weight, air volume, creatinine, and specific gravity.
The Children’s Total Exposure to Persistent Pesticides and Oth...
NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR METALS IN BLANKS
This data set includes analytical results for measurements of metals in 205 blank samples and for particles in 64 blank samples. Measurements were made for up to 12 metals in blank samples of air, dust, soil, water, food and beverages, blood, hair, and urine. Blank samples were u...
User guide for luminescence sampling in archaeological and geological contexts
Nelson, Michelle S.; Gray, Harrison J.; Johnson, Jack A.; Rittenour, Tammy M.; Feathers, James K.; Mahan, Shannon
2015-01-01
Luminescence dating provides a direct age estimate of the time of last exposure of quartz or feldspar minerals to light or heat and has been successfully applied to deposits, rock surfaces, and fired materials in a number of archaeological and geological settings. Sampling strategies are diverse and can be customized depending on local circumstances, although all sediment samples need to include a light-safe sample and material for dose-rate determination. The accuracy and precision of luminescence dating results are directly related to the type and quality of the material sampled and to sample collection methods in the field. Selection of target material for dating should include consideration of the adequacy of resetting of the luminescence signal (optical and thermal bleaching), the ability to characterize the radioactive environment surrounding the sample (dose rate), and the lack of evidence for post-depositional mixing (bioturbation in soils and sediment). Strategies for the collection of samples from sedimentary settings and fired materials are discussed. This paper should be used as a guide for luminescence sampling and is meant to provide essential background information on how to properly collect samples and on the types of materials suitable for luminescence dating.
SELDI-TOF-MS proteomic profiling of serum, urine, and amniotic fluid in neural tube defects.
Liu, Zhenjiang; Yuan, Zhengwei; Zhao, Qun
2014-01-01
Neural tube defects (NTDs) are common birth defects for which specific biomarkers are needed. The purpose of this pilot study is to determine whether protein profiles in NTD-mothers differ from normal controls using SELDI-TOF-MS. The ProteinChip Biomarker System was used to evaluate 82 maternal serum samples, 78 urine samples and 76 amniotic fluid samples. The validity of the classification tree was then challenged with a blind test set including another 20 NTD-mothers and 18 controls for serum samples, another 19 NTD-mothers and 17 controls for urine samples, and another 20 NTD-mothers and 17 controls for amniotic fluid samples. Eight proteins detected in serum samples were up-regulated and four proteins were down-regulated in the NTD group. Four proteins detected in urine samples were up-regulated and one protein was down-regulated in the NTD group. Six proteins detected in amniotic fluid samples were up-regulated and one protein was down-regulated in the NTD group. The classification tree for serum samples separated NTDs from healthy individuals, achieving a sensitivity of 91% and a specificity of 97% in the training set, and a sensitivity of 90%, a specificity of 97% and a positive predictive value of 95% in the test set. The classification tree for urine samples separated NTDs from controls, achieving a sensitivity of 95% and a specificity of 94% in the training set, and a sensitivity of 89%, a specificity of 82% and a positive predictive value of 85% in the test set. The classification tree for amniotic fluid samples separated NTDs from controls, achieving a sensitivity of 93% and a specificity of 89% in the training set, and a sensitivity of 90%, a specificity of 88% and a positive predictive value of 90% in the test set. These results suggest that SELDI-TOF-MS offers an additional method for the detection of NTD pregnancies.
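The sensitivity, specificity, and positive predictive value figures quoted above are simple ratios of confusion-matrix counts. A minimal helper shows the arithmetic; the counts passed in are hypothetical, not the study's data:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and PPV from confusion-matrix counts."""
    return {"sensitivity": tp / (tp + fn),   # true positives / all diseased
            "specificity": tn / (tn + fp),   # true negatives / all healthy
            "ppv": tp / (tp + fp)}           # true positives / all positives

# Hypothetical counts for a 20-case / 18-control test set (illustrative only)
m = diagnostic_metrics(tp=18, fn=2, tn=17, fp=1)
print(m)   # sensitivity 0.90, specificity ~0.944, PPV ~0.947
```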
Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang
2012-11-01
Outlier samples strongly influence the precision of the calibration model in soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected in the calibration set, which included 85 fruit samples. Because the 9 suspicious outlier samples might contain some non-outlier samples, they were returned to the model one by one to see whether they influenced the model and its prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 degrees Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 degrees Brix. The performance of this model was better than that of the model developed with no outliers eliminated from the calibration set (r = 0.797, RMSEC = 0.849 degrees Brix, RMSEP = 1.19 degrees Brix), and it was more representative and stable than the model with all 9 samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 degrees Brix, RMSEP = 0.862 degrees Brix).
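Of the three outlier tests named, the Chauvenet criterion is the simplest to sketch: a point is rejected when the expected number of samples at least that far from the mean is below one half. A minimal version follows; the residual values are made up for illustration, not the melon data:

```python
import numpy as np
from scipy.stats import norm

def chauvenet_outliers(x):
    """Flag points whose expected count at that deviation is < 0.5."""
    x = np.asarray(x, dtype=float)
    z = np.abs(x - x.mean()) / x.std(ddof=1)
    p_two_sided = 2 * norm.sf(z)          # P(|Z| >= z) under normality
    return x.size * p_two_sided < 0.5     # boolean mask of outliers

resid = np.array([0.1, -0.2, 0.05, 0.15, -0.1, 0.0, 3.5])
flags = chauvenet_outliers(resid)
print(flags)   # only the last residual is flagged
```

As the abstract's reclamation step illustrates, flagged samples are best re-examined one by one rather than discarded automatically.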
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2012-01-01
A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
NASA Astrophysics Data System (ADS)
Sheikholeslami, R.; Hosseini, N.; Razavi, S.
2016-12-01
Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models many times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, the computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; in contrast, PLHS generates a series of smaller sub-sets (also called `slices') such that: (1) each sub-set is a Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive union of sub-sets remains a Latin hypercube; and thus (3) the entire sample set is a Latin hypercube. PLHS therefore has the capability to preserve the intended sampling properties throughout the sampling procedure. It is deemed advantageous over existing methods particularly because it largely avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help minimize the total simulation time by running only the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
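Full PLHS slicing is beyond a short sketch, but the Latin hypercube property it preserves (exactly one sample per stratum in every one-dimensional projection) can be illustrated with a plain one-stage LHS generator; the sample counts below are arbitrary:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    # Stratum i covers [i/n, (i+1)/n); one sample lands in each stratum
    # of every dimension, with the stratum order permuted per dimension.
    u = rng.random((n, d))                                   # jitter in-stratum
    perms = np.array([rng.permutation(n) for _ in range(d)]).T  # shape (n, d)
    return (perms + u) / n

rng = np.random.default_rng(42)
x = latin_hypercube(8, 3, rng)

# Verify the Latin hypercube property: each of the 8 strata in each
# of the 3 dimensions contains exactly one sample.
strata = np.floor(x * 8).astype(int)
assert all(sorted(strata[:, j]) == list(range(8)) for j in range(3))
```

PLHS's contribution is generating such points in slices so that every prefix of the sequence also satisfies this check, which plain LHS does not guarantee.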
Zhang, Chu; Feng, Xuping; Wang, Jian; Liu, Fei; He, Yong; Zhou, Weijun
2017-01-01
Detection of plant diseases in a fast and simple way is crucial for timely disease control. Conventionally, plant diseases are accurately identified by DNA-, RNA- or serology-based methods, which are time consuming, complex and expensive. Mid-infrared spectroscopy is a promising technique that simplifies the detection procedure. Mid-infrared spectroscopy was used to identify the spectral differences between healthy and infected oilseed rape leaves. Two different sample sets from two experiments were used to explore and validate the feasibility of using mid-infrared spectroscopy to detect Sclerotinia stem rot (SSR) on oilseed rape leaves. The average mid-infrared spectra showed differences between healthy and infected leaves, and the differences varied among the sample sets. Optimal wavenumbers for the 2 sample sets selected by the second derivative spectra were similar, indicating the efficacy of selecting optimal wavenumbers. Chemometric methods, including partial least squares-discriminant analysis, support vector machines and extreme learning machines, were further used to quantitatively detect the oilseed rape leaves infected by SSR. The discriminant models using the full spectra and the optimal wavenumbers of the 2 sample sets were effective, with classification accuracies over 80%. The discriminant results for the 2 sample sets varied due to variations in the samples. The use of two sample sets proved and validated the feasibility of using mid-infrared spectroscopy and chemometric methods for detecting SSR on oilseed rape leaves. The similarities among the selected optimal wavenumbers in different sample sets make it feasible to simplify the models and build practical models. Mid-infrared spectroscopy is a reliable and promising technique for SSR control. This study helps in developing practical applications of mid-infrared spectroscopy combined with chemometrics to detect plant disease.
Tamburini, Elena; Vincenzi, Fabio; Costa, Stefania; Mantovi, Paolo; Pedrini, Paola; Castaldelli, Giuseppe
2017-10-17
Near-Infrared Spectroscopy is a cost-effective and environmentally friendly technique that could represent an alternative to conventional soil analysis methods, including total organic carbon (TOC). Soil fertility and quality are usually measured by traditional methods that involve the use of hazardous and strong chemicals. The effects of physical soil characteristics, such as moisture content and particle size, on spectral signals could be of great interest in order to understand and optimize prediction capability and to set up a robust and reliable calibration model, with the future perspective of application in the field. Spectra of 46 soil samples were collected. Soil samples were divided into three data sets: unprocessed; only dried; and dried, ground and sieved, in order to evaluate the effects of moisture and particle size on spectral signals. Both separate and combined normalization methods, including standard normal variate (SNV), multiplicative scatter correction (MSC) and normalization by closure (NCL), as well as smoothing using first and second derivatives (DV1 and DV2), were applied in a total of seven cases. Pretreatments for model optimization were designed and compared for each data set. The best combination of pretreatments was achieved by applying SNV and DV2 in partial least squares (PLS) modelling. There were no significant differences between the predictions using the three different data sets (p < 0.05). Finally, a unique database including all three data sets was built to include all the sources of sample variability that were tested and used for final prediction. External validation of TOC was carried out on 16 unknown soil samples to evaluate the predictive ability of the final combined calibration model. Hence, we demonstrate that sample preprocessing has a minor influence on the quality of near-infrared spectroscopy (NIR) predictions, laying the groundwork for a direct and fast in situ application of the method.
Data can be acquired outside the laboratory since the method is simple and does not need more than a simple band ratio of the spectra.
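The SNV and second-derivative (DV2) pretreatments that gave the best PLS model above are standard row-wise spectral operations. A minimal sketch follows; the spectra are synthetic, and the Savitzky-Golay window and polynomial order are assumptions, not the study's settings:

```python
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row-wise)."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

def second_derivative(spectra, window=11, polyorder=2):
    """Savitzky-Golay second derivative along the wavelength axis."""
    return savgol_filter(spectra, window, polyorder, deriv=2, axis=1)

rng = np.random.default_rng(0)
raw = rng.random((5, 100)) + np.linspace(0, 2, 100)  # baseline-shifted spectra
pre = second_derivative(snv(raw))                    # SNV + DV2 pretreatment
```

SNV removes per-sample offset and scale effects (such as those caused by moisture and particle size), while the second derivative suppresses the remaining baseline trends before PLS modelling.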
Glässel, A; Coenen, M; Kollerits, B; Cieza, A
2014-06-01
The extended ICF Core Set for stroke is an application of the International Classification of Functioning, Disability and Health (ICF) of the World Health Organisation (WHO), with the purpose of representing the typical spectrum of functioning of persons with stroke. The objective of this study is to add evidence to the content validity of the extended ICF Core Set for stroke from persons after stroke, taking the gender perspective into account. A qualitative study design was used, based on individual interviews with women and men after stroke in in- and outpatient rehabilitation settings. The sampling followed a maximum variation strategy. Sample size was determined by saturation. Concepts from the qualitative data analysis were linked to ICF categories and compared to the extended ICF Core Set for stroke. Twelve women and 12 men participated in 24 individual interviews. In total, 143 out of 166 ICF categories included in the extended ICF Core Set for stroke were confirmed (women: N.=13; men: N.=17; both genders: N.=113). Thirty-eight additional categories that are not yet included in the extended ICF Core Set for stroke were raised by women and men. This study confirms that the experience of functioning and disability after stroke shows commonalities and differences between women and men. The validity of the extended ICF Core Set for stroke could be largely confirmed, since it includes not only those areas of functioning and disability relevant to both genders but also those exclusively relevant to either women or men. Further research is needed on the ICF categories not yet included in the extended ICF Core Set for stroke.
30 CFR 280.30 - What activities will not require environmental analysis?
Code of Federal Regulations, 2011 CFR
2011-07-01
... additional environmental analysis. The types of activities include: (a) Gravity and magnetometric... oceanographic observations and measurements, including the setting of instruments; (g) Sampling by box core or...
Barr Fritcher, Emily G; Voss, Jesse S; Brankley, Shannon M; Campion, Michael B; Jenkins, Sarah M; Keeney, Matthew E; Henry, Michael R; Kerr, Sarah M; Chaiteerakij, Roongruedee; Pestova, Ekaterina V; Clayton, Amy C; Zhang, Jun; Roberts, Lewis R; Gores, Gregory J; Halling, Kevin C; Kipp, Benjamin R
2015-12-01
Pancreatobiliary cancer is detected by fluorescence in situ hybridization (FISH) of pancreatobiliary brush samples with UroVysion probes, originally designed to detect bladder cancer. We designed a set of new probes to detect pancreatobiliary cancer and compared its performance with that of UroVysion and routine cytology analysis. We tested a set of FISH probes on tumor tissues (cholangiocarcinoma or pancreatic carcinoma) and non-tumor tissues from 29 patients. We identified 4 probes that had high specificity for tumor vs non-tumor tissues; we called this set of probes pancreatobiliary FISH. We performed a retrospective analysis of brush samples from 272 patients who underwent endoscopic retrograde cholangiopancreatography for evaluation of malignancy at the Mayo Clinic; results were available from routine cytology and FISH with UroVysion probes. Archived residual specimens were retrieved and used to evaluate the pancreatobiliary FISH probes. Cutoff values for FISH with the pancreatobiliary probes were determined using 89 samples and validated in the remaining 183 samples. Clinical and pathologic evidence of malignancy in the pancreatobiliary tract within 2 years of brush sample collection was used as the standard; samples from patients without malignancies were used as negative controls. The validation cohort included 85 patients with malignancies (46.4%) and 114 patients with primary sclerosing cholangitis (62.3%). Samples containing cells above the cutoff for polysomy (copy number gain of ≥2 probes) were classified as positive in FISH with the UroVysion and pancreatobiliary probes. Multivariable logistic regression was used to estimate associations between clinical and pathology findings and results from FISH. The combination of FISH probes 1q21, 7p12, 8q24, and 9p21 identified cancer cells with 93% sensitivity and 100% specificity in pancreatobiliary tissue samples and were therefore included in the pancreatobiliary probe set. 
In the validation cohort of brush samples, pancreatobiliary FISH identified samples from patients with malignancy with a significantly higher level of sensitivity (64.7%) than the UroVysion probes (45.9%) (P < .001) or routine cytology analysis (18.8%) (P < .001), but similar specificity (92.9%, 90.8%, and 100.0% respectively). Factors significantly associated with detection of carcinoma, in adjusted analyses, included detection of polysomy by pancreatobiliary FISH (P < .001), a mass by cross-sectional imaging (P < .001), cancer cells by routine cytology (overall P = .003), as well as absence of primary sclerosing cholangitis (P = .011). We identified a set of FISH probes that detects cancer cells in pancreatobiliary brush samples from patients with and without primary sclerosing cholangitis with higher levels of sensitivity than UroVysion probes. Cytologic brushing test results and clinical features were independently associated with detection of cancer and might be used to identify patients with pancreatobiliary cancers. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
Multi-Omics Factor Analysis-a framework for unsupervised integration of multi-omics data sets.
Argelaguet, Ricard; Velten, Britta; Arnol, Damien; Dietrich, Sascha; Zenz, Thorsten; Marioni, John C; Buettner, Florian; Huber, Wolfgang; Stegle, Oliver
2018-06-20
Multi-omics studies promise the improved characterization of biological processes across molecular layers. However, methods for the unsupervised integration of the resulting heterogeneous data sets are lacking. We present Multi-Omics Factor Analysis (MOFA), a computational method for discovering the principal sources of variation in multi-omics data sets. MOFA infers a set of (hidden) factors that capture biological and technical sources of variability. It disentangles axes of heterogeneity that are shared across multiple modalities and those specific to individual data modalities. The learnt factors enable a variety of downstream analyses, including identification of sample subgroups, data imputation and the detection of outlier samples. We applied MOFA to a cohort of 200 patient samples of chronic lymphocytic leukaemia, profiled for somatic mutations, RNA expression, DNA methylation and ex vivo drug responses. MOFA identified major dimensions of disease heterogeneity, including immunoglobulin heavy-chain variable region status, trisomy of chromosome 12 and previously underappreciated drivers, such as response to oxidative stress. In a second application, we used MOFA to analyse single-cell multi-omics data, identifying coordinated transcriptional and epigenetic changes along cell differentiation. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.
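As a rough illustration of the kind of decomposition MOFA performs, the sketch below runs a plain factor analysis on two simulated, concatenated modalities. MOFA's actual model is a group-wise Bayesian factor model that keeps modalities separate; this is only a shape-of-the-computation sketch with invented data:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulate two hidden factors driving two omics layers (all values invented).
rng = np.random.default_rng(0)
n_samples = 50
z = rng.normal(size=(n_samples, 2))              # latent factors
rna  = z @ rng.normal(size=(2, 100)) + 0.1 * rng.normal(size=(n_samples, 100))
meth = z @ rng.normal(size=(2, 80))  + 0.1 * rng.normal(size=(n_samples, 80))

X = np.hstack([rna, meth])                       # concatenate modalities
X = (X - X.mean(0)) / X.std(0)                   # standardize features
fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
factors = fa.transform(X)                        # per-sample factor values
```

The recovered `factors` matrix is what downstream analyses (subgroup identification, outlier detection) would operate on.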
BOREAS TGB-5 Dissolved Organic Carbon Data from NSA Beaver Ponds
NASA Technical Reports Server (NTRS)
Bourbonniere, Rick; Hall, Forrest G. (Editor); Conrad, Sara K. (Editor)
2000-01-01
The BOReal Ecosystem-Atmosphere Study Trace Gas Biogeochemistry (BOREAS TGB-5) team collected several data sets related to carbon and trace gas fluxes and concentrations in the Northern Study Area (NSA). This data set contains concentrations of dissolved organic and inorganic carbon species from water samples collected at various NSA sites. In particular, this set covers the NSA Tower Beaver Pond Site and the NSA Gillam Road Beaver Pond Site, including data from all visits to open water sampling locations during the BOREAS field campaigns from April to September 1994. The data are provided in tabular ASCII files.
Pleil, Joachim D; Lorber, Matthew N
2007-11-01
The United States Environmental Protection Agency collected ambient air samples in lower Manhattan for about 9 months following the September 11, 2001 World Trade Center (WTC) attacks. Measurements were made of a host of airborne contaminants including volatile organic compounds, polycyclic aromatic hydrocarbons, asbestos, lead, and other contaminants of concern. The present study focuses on the broad class of polychlorinated dibenzo-p-dioxins (CDDs) and dibenzofurans (CDFs) with specific emphasis on the 17 CDD/CDF congeners that exhibit mammalian toxicity. This work is a statistical study comparing the internal patterns of CDD/CDFs using data from an unambiguous fire event (WTC) and other data sets to help identify their sources. A subset of 29 samples, all taken between September 16 and October 31, 2001, was treated as a basis set known to be heavily impacted by the WTC building fire source. A second basis set was created using data from Los Angeles and Oakland, CA as published by the California Air Resources Board (CARB) and treated as the archetypical background pattern for CDD/CDFs. The CARB data had a congener profile appearing similar to background air samples from different locations in America and around the world and in different matrices, such as background soils. Such disparate data would normally be interpreted with a qualitative pattern recognition based on congener bar graphs or other forms of factor or cluster analysis that group similar samples together graphically. The procedure developed here employs aspects of those statistical methods to develop a single continuous output variable per sample. Specifically, a form of variance structure-based cluster analysis is used to group congeners within samples to reduce collinearity in the basis sets, new variables are created based on these groups, and multivariate regression is applied to the reduced variable set to determine a predictive equation. 
This equation predicts a value for an output variable, OPT: the predicted value of OPT is near zero (0.00) for a background congener profile and near one (1.00) for the profile characteristic of WTC air. Although this empirical method is calibrated with relatively small sets of airborne samples, it is shown to be generalizable to other WTC, fire source, and background air samples as well as other sample matrices including soils, window films and other dust wipes, and bulk dusts. However, given the limited data set examined, the method does not allow further discrimination between the WTC data and the other fire sources. This type of analysis is demonstrated to be useful for complex trace-level data sets with limited data and some below-detection entries.
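The regression step behind OPT can be sketched with ordinary least squares on toy three-group congener profiles. The real study first formed the groups by variance-structure clustering; all profile numbers below are invented:

```python
import numpy as np

# Two archetype profiles (fractions over 3 invented congener groups).
rng = np.random.default_rng(1)
background = np.array([0.6, 0.3, 0.1])
wtc        = np.array([0.1, 0.2, 0.7])

# Basis sets: noisy replicates of each archetype, labeled 0 or 1.
X = np.vstack([background + 0.02 * rng.normal(size=3) for _ in range(10)] +
              [wtc        + 0.02 * rng.normal(size=3) for _ in range(10)])
y = np.array([0.0] * 10 + [1.0] * 10)

A = np.hstack([X, np.ones((20, 1))])             # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def opt(profile):
    """Predicted OPT: near 0 for background-like, near 1 for WTC-like."""
    return float(np.append(profile, 1.0) @ coef)
```

Applying `opt` to a new sample's grouped profile yields the single continuous output variable the abstract describes.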
30 CFR 280.30 - What activities will not require environmental analysis?
Code of Federal Regulations, 2010 CFR
2010-07-01
... types of activities include: (a) Gravity and magnetometric observations and measurements; (b) Bottom and..., including the setting of instruments; (g) Sampling by box core or grab sampler to determine seabed...
NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR METALS IN SPIKES
This data set includes analytical results for measurements of metals in 49 field control samples (spikes). Measurements were made for up to 11 metals in samples of water, blood, and urine. Field controls were used to assess recovery of target analytes from a sample media during s...
Simultaneous Identification of Multiple Driver Pathways in Cancer
Leiserson, Mark D. M.; Blokh, Dima
2013-01-01
Distinguishing the somatic mutations responsible for cancer (driver mutations) from random, passenger mutations is a key challenge in cancer genomics. Driver mutations generally target cellular signaling and regulatory pathways consisting of multiple genes. This heterogeneity complicates the identification of driver mutations by their recurrence across samples, as different combinations of mutations in driver pathways are observed in different samples. We introduce the Multi-Dendrix algorithm for the simultaneous identification of multiple driver pathways de novo in somatic mutation data from a cohort of cancer samples. The algorithm relies on two combinatorial properties of mutations in a driver pathway: high coverage and mutual exclusivity. We derive an integer linear program that finds sets of mutations exhibiting these properties. We apply Multi-Dendrix to somatic mutations from glioblastoma, breast cancer, and lung cancer samples. Multi-Dendrix identifies sets of mutations in genes that overlap with known pathways – including Rb, p53, PI(3)K, and cell cycle pathways – and also novel sets of mutually exclusive mutations, including mutations in several transcription factors or other genes involved in transcriptional regulation. These sets are discovered directly from mutation data with no prior knowledge of pathways or gene interactions. We show that Multi-Dendrix outperforms other algorithms for identifying combinations of mutations and is also orders of magnitude faster on genome-scale data. Software available at: http://compbio.cs.brown.edu/software. PMID:23717195
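The two combinatorial properties combine into a weight of the form W(M) = |coverage(M)| - overlap(M), where overlap counts coverage with multiplicity; Multi-Dendrix maximizes this via an ILP. A brute-force version on a toy mutation table (gene and sample labels invented) shows the criterion:

```python
from itertools import combinations

# Toy mutation data: gene -> set of samples in which it is mutated.
mutations = {
    "TP53": {1, 2, 3, 4},
    "MDM2": {5, 6},
    "RB1":  {1, 2, 5},      # overlaps both TP53 and MDM2
}

def weight(genes):
    """High coverage, penalized by co-occurring (non-exclusive) mutations."""
    covered = set().union(*(mutations[g] for g in genes))
    overlap = sum(len(mutations[g]) for g in genes) - len(covered)
    return len(covered) - overlap

# Exhaustive search stands in for the ILP on this tiny instance.
best = max(combinations(mutations, 2), key=weight)
```

TP53 and MDM2 cover all six samples with no co-occurrence, so they win; RB1 is penalized for overlapping both.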
Garmann, D; McLeay, S; Shah, A; Vis, P; Maas Enriquez, M; Ploeger, B A
2017-07-01
The pharmacokinetics (PK), safety and efficacy of BAY 81-8973, a full-length, unmodified, recombinant human factor VIII (FVIII), were evaluated in the LEOPOLD trials. The aim of this study was to develop a population PK model based on pooled data from the LEOPOLD trials and to investigate the importance of including samples with FVIII levels below the limit of quantitation (BLQ) to estimate half-life. The analysis included 1535 PK observations (measured by the chromogenic assay) from 183 male patients with haemophilia A aged 1-61 years from the 3 LEOPOLD trials. The limit of quantitation was 1.5 IU dL⁻¹ for the majority of samples. Population PK models that included or excluded BLQ samples were used for FVIII half-life estimations, and simulations were performed using both estimates to explore the influence on the time below a determined FVIII threshold. In the data set used, approximately 16.5% of samples were BLQ, which is not uncommon for FVIII PK data sets. The structural model to describe the PK of BAY 81-8973 was a two-compartment model similar to that seen for other FVIII products. If BLQ samples were excluded from the model, FVIII half-life estimations were longer compared with a model that included BLQ samples. It is essential to assess the importance of BLQ samples when performing population PK estimates of half-life for any FVIII product. Exclusion of BLQ data from half-life estimations based on population PK models may result in an overestimation of half-life and underestimation of time under a predetermined FVIII threshold, resulting in potential underdosing of patients. © 2017 Bayer AG. Haemophilia Published by John Wiley & Sons Ltd.
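A small simulation illustrates the reported direction of the bias: dropping BLQ observations keeps only the upward-fluctuating late samples, which flattens the log-linear decay slope and lengthens the apparent half-life. The true half-life (12 h), dose scale, and sampling schedule are assumptions; only the 1.5 IU dL⁻¹ quantitation limit comes from the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)
t_half_true = 12.0                               # hours, assumed
k = np.log(2) / t_half_true
t = np.tile(np.arange(4, 73, 4.0), 200)          # many sampling occasions
conc = 100 * np.exp(-k * t) * np.exp(0.3 * rng.normal(size=t.size))
loq = 1.5                                        # IU/dL, as in the trials

def fit_half_life(times, concs):
    """Naive log-linear fit standing in for a full population PK model."""
    slope, _ = np.polyfit(times, np.log(concs), 1)
    return np.log(2) / -slope

keep = conc >= loq                               # exclude BLQ samples
t_half_excl = fit_half_life(t[keep], conc[keep])
t_half_all  = fit_half_life(t, conc)             # simulation: true values known
```

The exclusion fit comes out longer than the full-data fit, mirroring the overestimation the study warns about (a real analysis would handle BLQ by likelihood-based censoring, not by knowing the true values).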
Application of spatially gridded temperature and land cover data sets for urban heat island analysis
Gallo, Kevin; Xian, George Z.
2014-01-01
Two gridded data sets that included (1) daily mean temperatures from 2006 through 2011 and (2) satellite-derived impervious surface area were combined for a spatial analysis of the urban heat-island effect within the Dallas-Ft. Worth, Texas region. The primary advantage of using these combined data sets was the capability to designate each 1 × 1 km grid cell of available temperature data as urban or rural based on the level of impervious surface area within the grid cell. Generally, the observed differences in urban and rural temperature increased as the impervious surface area thresholds used to define an urban grid cell were increased. This result, however, was also dependent on the size of the sample area included in the analysis. As the spatial extent of the sample area increased and included a greater number of rural-defined grid cells, the observed urban and rural differences in temperature also increased. A cursory comparison of the spatially gridded temperature observations with observations from climate stations suggests that the number and location of stations included in an urban heat island analysis requires consideration to assure representative samples of each (urban and rural) environment are included in the analysis.
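The threshold-based urban/rural designation can be sketched on synthetic grids; the impervious-surface-to-temperature relationship below is invented solely so that the urban-rural difference grows with the threshold, as the study reports:

```python
import numpy as np

rng = np.random.default_rng(3)
isa = rng.uniform(0, 100, size=(50, 50))         # % impervious surface per cell
# Invented nonlinear warming with ISA, plus observational noise.
temp = 20.0 + 0.0003 * isa**2 + 0.5 * rng.normal(size=isa.shape)

def uhi(threshold):
    """Urban-minus-rural mean temperature for a given ISA threshold (%)."""
    urban = isa >= threshold
    return temp[urban].mean() - temp[~urban].mean()

deltas = {thr: uhi(thr) for thr in (25, 50, 75)}
```

Raising the threshold restricts "urban" to the most built-up cells, so the computed heat-island magnitude increases with the cutoff.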
NASA Technical Reports Server (NTRS)
Caplin, R. S.; Royer, E. R.
1978-01-01
Attempts are made to provide a total design of a Microbial Load Monitor (MLM) system flight engineering model. Activities include assembly and testing of Sample Receiving and Card Loading Devices (SRCLDs), operator-related software, and testing of biological samples in the MLM. Progress was made in assembling SRCLDs with minimal leaks that operate reliably in the Sample Loading System. Seven operator commands are used to control various aspects of the MLM, such as calibrating and reading the incubating reading head, setting the clock and reading the time, and checking the status of the card. Testing of the instrument, both in hardware and biologically, was performed. Hardware testing concentrated on SRCLDs. Biological testing covered 66 clinical and seeded samples. Tentative thresholds were set and media performance listed.
2169 steel waveform experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furnish, Michael David; Alexander, C. Scott; Reinhart, William Dodd
2012-11-01
In support of LLNL efforts to develop multiscale models of a variety of materials, we have performed a set of eight gas gun impact experiments on 2169 steel (21% Cr, 6% Ni, 9% Mn, balance predominantly Fe). These experiments provided carefully controlled shock, reshock and release velocimetry data, with initial shock stresses ranging from 10 to 50 GPa (particle velocities from 0.25 to 1.05 km/s). Both windowed and free-surface measurements were included in this experiment set to increase the utility of the data set, as were samples ranging in thickness from 1 to 5 mm. Target physical phenomena included the elastic/plastic transition (Hugoniot elastic limit), the Hugoniot, any phase transition phenomena, and the release path (windowed and free-surface). The Hugoniot was found to be nearly linear, with no indications of the Fe phase transition. Releases were non-hysteretic, and relatively consistent between 3- and 5-mm-thick samples (the 3 mm samples giving slightly lower wavespeeds on release). Reshock tests with explosively welded impactors produced clean results; those with glue bonds showed transient releases prior to the arrival of the reshock, reducing their usefulness for deriving strength information. The free-surface samples, which were steps on a single piece of steel, showed lower wavespeeds for thin (1 mm) samples than for thicker (2 or 4 mm) samples. A configuration used for the last three shots allows release information to be determined from these free-surface samples. The sample strength appears to increase with stress from ~1 GPa to ~3 GPa over this range, consistent with other recent work but about 40% above the Steinberg model.
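A nearly linear Hugoniot is conventionally summarized as Us = c0 + s·up, and shock stress follows from the jump condition σ = ρ₀·Us·up. The sketch below fits that form; the data points, c0, s, and ρ₀ are synthetic stand-ins (not the report's measured values), chosen only to reproduce the stated 10-50 GPa range:

```python
import numpy as np

# Assumed steel-like Hugoniot parameters and synthetic Us-up pairs.
c0_true, s_true = 4.5, 1.5                       # km/s and dimensionless
up = np.array([0.25, 0.45, 0.65, 0.85, 1.05])    # km/s, spanning the tests
us = c0_true + s_true * up + np.array([0.02, -0.01, 0.0, 0.01, -0.02])

s_fit, c0_fit = np.polyfit(up, us, 1)            # linear Hugoniot fit

# Jump condition: sigma = rho0 * Us * up (GPa when rho0 in g/cm^3, v in km/s).
rho0 = 7.8                                       # g/cm^3, assumed for steel
stress_gpa = rho0 * (c0_fit + s_fit * up) * up
```

With these assumed parameters the lowest and highest particle velocities map to roughly 10 and 50 GPa, matching the experimental range quoted above.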
The purpose of this SOP is to describe the procedure for sampling personal air for metals and pesticides during a predetermined time period. The SOP includes the set up of the samplers for collection of either a metals sample or a pesticides sample, the calibration and initial c...
Pre-Flight Characterization of Samples for the MISSE-7 Spacesuit Fabric Exposure Experiment
NASA Technical Reports Server (NTRS)
Gaier, James R.; McCue, Terry R.; Clark, Gregory W.; Rogers, Kerry J.; Mengesu, Tsega
2009-01-01
A series of six sample spacesuit pressure garment assembly (PGA) fabric samples were prepared for the Materials International Space Station Experiment 7 (MISSE-7) flight experiment to test the effects of damage by lunar dust on the susceptibility of the fabrics to radiation damage. These included pristine Apollo-era fluorinated ethylene-propylene (FEP) fabric, Apollo-era FEP fabric that had been abraded with JSC-1A lunar simulant, and a piece of Alan Bean s Apollo 12 PGA sectioned from near the left knee. Also included was a sample of pristine orthofabric, and orthofabric that had been abraded to two different levels with JSC-1A. The samples were characterized using optical microscopy, field emission scanning electron microscopy, and atomic force microscopy. Two sets of six samples were then loaded in space environment exposure hardware, one of which was stored as control samples. The other set was affixed to the MISSE-7 experiment package, and will be mounted on the International Space Station, and exposed to the wake-side low Earth orbit environment. It will be retrieved after an exposure of approximately 12 months, and returned for post flight analysis.
Ruiz-Jiménez, J; Priego-Capote, F; Luque de Castro, M D
2006-08-01
A study of the feasibility of Fourier transform medium infrared spectroscopy (FT-midIR) for analytical determination of fatty acid profiles, including trans fatty acids, is presented. The training and validation sets used to develop FT-midIR general equations, comprising 75% (102 samples) and 25% (36 samples) of the samples after removal of spectral outliers, were built with samples from 140 commercial and home-made bakery products. The concentration of the analytes in the samples used for this study is within the typical range found in these kinds of products. Both sets were independent; thus, the validation set was only used for testing the equations. The criterion used for the selection of the validation set was samples with the highest number of neighbours and the most separation between them (H<0.6). Partial least squares regression and cross-validation were used for multivariate calibration. The FT-midIR method does not require post-extraction manipulation and gives information about the fatty acid profile in two minutes. The 14:0, 16:0, 18:0, 18:1 and 18:2 fatty acids can be determined with excellent precision and other fatty acids with good precision according to the Shenk criteria (R² ≥ 0.90, SEP = 1-1.5 SEL and R² = 0.70-0.89, SEP = 2-3 SEL, respectively). The results obtained with the proposed method were compared with those provided by the conventional method based on GC-MS. At the 95% significance level, the differences between the values obtained for the different fatty acids were within the experimental error.
Accelerated Optical Projection Tomography Applied to In Vivo Imaging of Zebrafish
Correia, Teresa; Yin, Jun; Ramel, Marie-Christine; Andrews, Natalie; Katan, Matilda; Bugeon, Laurence; Dallman, Margaret J.; McGinty, James; Frankel, Paul; French, Paul M. W.; Arridge, Simon
2015-01-01
Optical projection tomography (OPT) provides a non-invasive 3-D imaging modality that can be applied to longitudinal studies of live disease models, including in zebrafish. Current limitations include the requirement of a minimum number of angular projections for reconstruction of reasonable OPT images using filtered back projection (FBP), which is typically several hundred, leading to acquisition times of several minutes. It is highly desirable to decrease the number of required angular projections to decrease both the total acquisition time and the light dose to the sample. This is particularly important to enable longitudinal studies, which involve measurements of the same fish at different time points. In this work, we demonstrate that the use of an iterative algorithm to reconstruct sparsely sampled OPT data sets can provide useful 3-D images with 50 or fewer projections, thereby significantly decreasing the minimum acquisition time and light dose while maintaining image quality. A transgenic zebrafish embryo with fluorescent labelling of the vasculature was imaged to acquire densely sampled (800 projections) and under-sampled data sets of transmitted and fluorescence projection images. The under-sampled OPT data sets were reconstructed using an iterative total variation-based image reconstruction algorithm and compared against FBP reconstructions of the densely sampled data sets. To illustrate the potential for quantitative analysis following rapid OPT data acquisition, a Hessian-based method was applied to automatically segment the reconstructed images to select the vasculature network. Results showed that 3-D images of the zebrafish embryo and its vasculature of sufficient visual quality for quantitative analysis can be reconstructed using the iterative algorithm from only 32 projections—achieving up to 28 times improvement in imaging speed and leading to total acquisition times of a few seconds. PMID:26308086
Assessing the Alcohol-BMI Relationship in a US National Sample of College Students
ERIC Educational Resources Information Center
Barry, Adam E.; Piazza-Gardner, Anna K.; Holton, M. Kim
2015-01-01
Objective: This study sought to assess the body mass index (BMI)-alcohol relationship among a US national sample of college students. Design: Secondary data analysis using the Fall 2011 National College Health Assessment (NCHA). Setting: A total of 44 US higher education institutions. Methods: Participants included a national sample of college…
Establishing an academic biobank in a resource-challenged environment.
Soo, Cassandra Claire; Mukomana, Freedom; Hazelhurst, Scott; Ramsay, Michele
2017-05-24
Past practices of informal sample collections and spreadsheets for data and sample management fall short of best-practice models for biobanking, and are neither cost effective nor efficient to adequately serve the needs of large research studies. The biobank of the Sydney Brenner Institute for Molecular Bioscience serves as a bioresource for institutional, national and international research collaborations. It provides high-quality human biospecimens from African populations, secure data and sample curation and storage, as well as monitored sample handling and management processes, to promote both non-communicable and infectious-disease research. Best-practice guidelines have been adapted to align with a low-resource setting and have been instrumental in the development of a quality-management system, including standard operating procedures and a quality-control regimen. Here, we provide a summary of 10 important considerations for initiating and establishing an academic research biobank in a low-resource setting. These include addressing ethical, legal, technical, accreditation and/or certification concerns and financial sustainability.
Establishing an academic biobank in a resource-challenged environment
Soo, C C; Mukomana, F; Hazelhurst, S; Ramsay, M
2018-01-01
Past practices of informal sample collections and spreadsheets for data and sample management fall short of best-practice models for biobanking, and are neither cost effective nor efficient to adequately serve the needs of large research studies. The biobank of the Sydney Brenner Institute for Molecular Bioscience serves as a bioresource for institutional, national and international research collaborations. It provides high-quality human biospecimens from African populations, secure data and sample curation and storage, as well as monitored sample handling and management processes, to promote both non-communicable and infectious-disease research. Best-practice guidelines have been adapted to align with a low-resource setting and have been instrumental in the development of a quality-management system, including standard operating procedures and a quality-control regimen. Here, we provide a summary of 10 important considerations for initiating and establishing an academic research biobank in a low-resource setting. These include addressing ethical, legal, technical, accreditation and/or certification concerns and financial sustainability. PMID:28604319
21 CFR 10.95 - Participation in outside standard-setting activities.
Code of Federal Regulations, 2014 CFR
2014-04-01
... activity and resulting standards will not be designed for the economic benefit of any company, group, or... invitations will be extended to a representative sampling of the public, including consumer groups, industry... the group or organization responsible for the activity. (c) Standard-setting activities by State and...
21 CFR 10.95 - Participation in outside standard-setting activities.
Code of Federal Regulations, 2013 CFR
2013-04-01
... activity and resulting standards will not be designed for the economic benefit of any company, group, or... invitations will be extended to a representative sampling of the public, including consumer groups, industry... the group or organization responsible for the activity. (c) Standard-setting activities by State and...
The High School & Beyond Data Set: Academic Self-Concept Measures.
ERIC Educational Resources Information Center
Strein, William
A series of confirmatory factor analyses using both LISREL VI (maximum likelihood method) and LISCOMP (weighted least squares method using covariance matrix based on polychoric correlations) and including cross-validation on independent samples were applied to items from the High School and Beyond data set to explore the measurement…
Ecological tolerances of Miocene larger benthic foraminifera from Indonesia
NASA Astrophysics Data System (ADS)
Novak, Vibor; Renema, Willem
2018-01-01
To provide a comprehensive palaeoenvironmental reconstruction based on larger benthic foraminifera (LBF), a quantitative analysis of their assemblage composition is needed. Besides microfacies analysis, which includes environmental preferences of foraminiferal taxa, statistical analyses should also be employed. Therefore, detrended correspondence analysis and cluster analysis were performed on relative abundance data of identified LBF assemblages deposited in mixed carbonate-siliciclastic (MCS) systems and blue-water (BW) settings. Studied MCS system localities include ten sections from the central part of the Kutai Basin in East Kalimantan, ranging from late Burdigalian to Serravallian age. The BW samples were collected from eleven sections of the Bulu Formation on Central Java, dated as Serravallian. Results from detrended correspondence analysis reveal significant differences between these two environmental settings. Cluster analysis produced five clusters of samples: clusters 1 and 2 comprising dominantly MCS samples, clusters 3 and 4 dominated by BW samples, and cluster 5 showing a mixed composition of both MCS and BW samples. The results of cluster analysis were then subjected to indicator species analysis, which generated three groups among LBF taxa: typical assemblage indicators, regularly occurring taxa and rare taxa. By interpreting the results of detrended correspondence analysis, cluster analysis and indicator species analysis, along with the environmental preferences of identified LBF taxa, a palaeoenvironmental model is proposed for the distribution of LBF in Miocene MCS systems and adjacent BW settings of Indonesia.
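The clustering step can be sketched on a toy relative-abundance matrix; the taxa, abundance values, and two-cluster cut are invented for illustration only:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows are samples, columns are relative abundances of three invented taxa.
abundance = np.array([
    [0.7, 0.2, 0.1],     # MCS-like samples: taxon A dominates
    [0.6, 0.3, 0.1],
    [0.1, 0.2, 0.7],     # BW-like samples: taxon C dominates
    [0.2, 0.1, 0.7],
])

# Average-linkage hierarchical clustering, cut into two groups.
clusters = fcluster(linkage(abundance, method="average"),
                    t=2, criterion="maxclust")
```

Samples with similar assemblage composition land in the same cluster, which is the grouping that indicator species analysis then interprets.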
Strong smoker interest in 'setting an example to children' by quitting: national survey data.
Thomson, George; Wilson, Nick; Weerasekera, Deepa; Edwards, Richard
2011-02-01
To further explore smoker views on reasons to quit. As part of the multi-country ITC Project, a national sample of 1,376 New Zealand adult (18+ years) smokers was surveyed in 2007/08. This sample included boosted sampling of Māori, Pacific and Asian New Zealanders. 'Setting an example to children' was given as 'very much' a reason to quit by 51%, compared to 45% giving personal health concerns. However, the 'very much' and 'somewhat' responses (combined) were greater for personal health (81%) than 'setting an example to children' (74%). Price was the third ranked reason (67%). In a multivariate analysis, women were significantly more likely to state that 'setting an example to children' was 'very much' or 'somewhat' a reason to quit; as were Māori, or Pacific compared to European; and those suffering financial stress. The relatively high importance of 'example to children' as a reason to quit is an unusual finding, and may have arisen as a result of social marketing campaigns encouraging cessation to protect families in New Zealand. The policy implications could include a need for a greater emphasis on social reasons (e.g. 'example to children'), in pack warnings, and in social marketing for smoking cessation. © 2011 The Authors. ANZJPH © 2010 Public Health Association of Australia.
System and method for resolving gamma-ray spectra
Gentile, Charles A.; Perry, Jason; Langish, Stephen W.; Silber, Kenneth; Davis, William M.; Mastrovito, Dana
2010-05-04
A system for identifying radionuclide emissions is described. The system includes at least one processor for processing output signals from a radionuclide detecting device, at least one training algorithm run by the at least one processor for analyzing data derived from at least one set of known sample data from the output signals, at least one classification algorithm derived from the training algorithm for classifying unknown sample data, wherein the at least one training algorithm analyzes the at least one sample data set to derive at least one rule used by said classification algorithm for identifying at least one radionuclide emission detected by the detecting device.
NASA Astrophysics Data System (ADS)
Orenstein, E. C.; Morgado, P. M.; Peacock, E.; Sosik, H. M.; Jaffe, J. S.
2016-02-01
Technological advances in instrumentation and computing have allowed oceanographers to develop imaging systems capable of collecting extremely large data sets. With the advent of in situ plankton imaging systems, scientists must now commonly deal with "big data" sets containing tens of millions of samples spanning hundreds of classes, making manual classification untenable. Automated annotation methods are now considered to be the bottleneck between collection and interpretation. Typically, such classifiers learn to approximate a function that predicts a predefined set of classes for which a considerable amount of labeled training data is available. The requirement that the training data span all the classes of concern is problematic for plankton imaging systems since they sample such diverse, rapidly changing populations. These data sets may contain relatively rare, sparsely distributed taxa that will not have associated training data; a classifier trained on a limited set of classes will miss these samples. The computer vision community, leveraging advances in Convolutional Neural Networks (CNNs), has recently attempted to tackle such problems using "zero-shot" object categorization methods. Under a zero-shot framework, a classifier is trained to map samples onto a set of attributes rather than a class label. These attributes can include visual and non-visual information such as what an organism is made out of, where it is distributed globally, or how it reproduces. A second stage classifier is then used to extrapolate a class. In this work, we demonstrate a zero-shot classifier, implemented with a CNN, to retrieve out-of-training-set labels from images. This method is applied to data from two continuously imaging, moored instruments: the Scripps Plankton Camera System (SPCS) and the Imaging FlowCytobot (IFCB). Results from simulated deployment scenarios indicate zero-shot classifiers could be successful at recovering samples of rare taxa in image sets. 
This capability will allow ecologists to identify trends in the distribution of difficult-to-sample organisms in their data.
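The two-stage zero-shot idea (predict attributes, then match the nearest known attribute signature, even for classes absent from training) can be sketched as follows; the class names, attribute signatures, and stand-in attribute predictor are all invented:

```python
import numpy as np

# Class -> binary attribute signature (e.g. shape, opacity, colony-forming).
signatures = {
    "copepod":     np.array([1, 0, 1]),
    "larvacean":   np.array([0, 1, 1]),
    "diatom-like": np.array([1, 1, 0]),   # no training images assumed to exist
}

def predict_attributes(features):
    """Stand-in for the stage-1 CNN: threshold features into attributes."""
    return (features > 0.5).astype(int)

def zero_shot_classify(features):
    """Stage 2: nearest attribute signature by Hamming distance."""
    attrs = predict_attributes(features)
    return min(signatures, key=lambda c: np.abs(signatures[c] - attrs).sum())

label = zero_shot_classify(np.array([0.9, 0.8, 0.1]))
```

Because matching happens in attribute space, the held-out "diatom-like" class can still be retrieved despite having no training images.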
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Lucia, Frank C. Jr.; Gottfried, Jennifer L.; Munson, Chase A.
2008-11-01
A technique being evaluated for standoff explosives detection is laser-induced breakdown spectroscopy (LIBS). LIBS is a real-time sensor technology that uses components that can be configured into a ruggedized standoff instrument. The U.S. Army Research Laboratory has been coupling standoff LIBS spectra with chemometrics for several years in order to discriminate between explosives and nonexplosives. We have investigated the use of partial least squares discriminant analysis (PLS-DA) for explosives detection. We have extended our study of PLS-DA to more complex sample types, including binary mixtures, different types of explosives, and samples not included in the model. We demonstrate the importance of building the PLS-DA model by iteratively testing it against sample test sets. Independent test sets are used to test the robustness of the final model.
NASA Astrophysics Data System (ADS)
Willmes, M.; McMorrow, L.; Kinsley, L.; Armstrong, R.; Aubert, M.; Eggins, S.; Falguères, C.; Maureille, B.; Moffat, I.; Grün, R.
2014-03-01
Strontium isotope ratios (⁸⁷Sr/⁸⁶Sr) are a key geochemical tracer used in a wide range of fields including archaeology, ecology, food and forensic sciences. These applications are based on the principle that the Sr isotopic ratios of natural materials reflect the sources of strontium available during their formation. A major constraint for current studies is the lack of robust reference maps to evaluate the source of strontium isotope ratios measured in the samples. Here we provide a new data set of bioavailable Sr isotope ratios for the major geologic units of France, based on plant and soil samples (Pangaea data repository doi:10.1594/PANGAEA.819142). The IRHUM (Isotopic Reconstruction of Human Migration) database is a web platform to access, explore and map our data set. The database provides the spatial context and metadata for each sample, allowing the user to evaluate the suitability of the sample for their specific study. In addition, it allows users to upload and share their own data sets and data products, which will enhance collaboration across the different research fields. This article describes the sampling and analytical methods used to generate the data set and how to use and access the data set through the IRHUM database. Any interpretation of the isotope data set is outside the scope of this publication.
The Mass Spectrometric Ortho Effect Studied for All 209 PCB Congeners
A method for the determination of polychlorinated biphenyls (PCBs) in caulk was developed and applied to a set of caulk and window glazing material samples. This method was evaluated by analyzing a combination of 47 samples of caulk, glazing materials, and including quality...
NHEXAS PHASE I REGION 5 STUDY--METALS IN DUST ANALYTICAL RESULTS
This data set includes analytical results for measurements of metals in 1,906 dust samples. Dust samples were collected to assess potential residential sources of dermal and inhalation exposures and to examine relationships between analyte levels in dust and in personal and bioma...
Simultaneous determination of specific alpha and beta emitters by LSC-PLS in water samples.
Fons-Castells, J; Tent-Petrus, J; Llauradó, M
2017-01-01
Liquid scintillation counting (LSC) is a commonly used technique for the determination of alpha and beta emitters. However, LSC has poor resolution, and the continuous spectra of beta emitters hinder the simultaneous determination of several alpha and beta emitters from the same spectrum. In this paper, the feasibility of multivariate calibration by partial least squares (PLS) models for the determination of several alpha (natU, 241Am and 226Ra) and beta emitters (40K, 60Co, 90Sr/90Y, 134Cs and 137Cs) in water samples is reported. A set of alpha and beta spectra from radionuclide calibration standards was used to construct three PLS models. Experimentally mixed radionuclides and intercomparison materials were used to validate the models. The results had a maximum relative bias of 25% when all the radionuclides in the sample were included in the calibration set; otherwise the relative bias was over 100% for some radionuclides. The results obtained show that LSC-PLS is a useful approach for the simultaneous determination of alpha and beta emitters in multi-radionuclide samples. However, to obtain useful results, it is important to include all the radionuclides expected in the studied scenario in the calibration set. Copyright © 2016 Elsevier Ltd. All rights reserved.
Böl, Markus; Kruse, Roland; Ehret, Alexander E; Leichsenring, Kay; Siebert, Tobias
2012-10-11
Due to the increasing developments in modelling of biological material, adequate parameter identification techniques are urgently needed. The majority of recent contributions on passive muscle tissue identify material parameters solely by comparing characteristic, compressive stress-stretch curves from experiments and simulation. In doing so, different assumptions concerning, e.g., the sample geometry or the degree of friction between the sample and the platens are required. In most cases these assumptions are grossly simplified, leading to incorrect material parameters. In order to overcome such oversimplifications, in this paper a more reliable parameter identification technique is presented: we use the inverse finite element method (iFEM) to identify the optimal parameter set by comparison of the compressive stress-stretch response, including the realistic geometries of the samples and the presence of friction at the compressed sample faces. Moreover, we judge the quality of the parameter identification by comparing the simulated and experimental deformed shapes of the samples. Besides this, the study includes a comprehensive set of compressive stress-stretch data on rabbit soleus muscle and the determination of static friction coefficients between muscle and PTFE. Copyright © 2012 Elsevier Ltd. All rights reserved.
Behavior of Aluminum in Solid Propellant Combustion
1982-06-01
dry pressing 30% Valley Met H-30 aluminum, 7% carnauba wax, and 63% 100 P AP. One sample was prepared using as-received H-30, a second sample used pre... "propellant" formulations. The formulations included dry pressed AP/Al and AP/Al/Wax samples. Sandwiches were also prepared consisting of an aluminum... binder flame instead of by aluminum exposure during accumulate break-up. Combustion of AP/Al/Wax samples: a set of propellant samples were prepared by
The pre-synaptic vesicle protein synaptotagmin is a novel biomarker for Alzheimer's disease.
Öhrfelt, Annika; Brinkmalm, Ann; Dumurgier, Julien; Brinkmalm, Gunnar; Hansson, Oskar; Zetterberg, Henrik; Bouaziz-Amar, Elodie; Hugon, Jacques; Paquet, Claire; Blennow, Kaj
2016-10-03
Synaptic degeneration is a central pathogenic event in Alzheimer's disease that occurs early during the course of disease and correlates with cognitive symptoms. The pre-synaptic vesicle protein synaptotagmin-1 appears to be essential for the maintenance of intact synaptic transmission and cognitive function. Synaptotagmin-1 in cerebrospinal fluid is a candidate Alzheimer biomarker for synaptic dysfunction that may also correlate with cognitive decline. In this study, a novel mass spectrometry-based assay for measurement of cerebrospinal fluid synaptotagmin-1 was developed and evaluated in two independent sample sets of patients and controls. Sample set I included cerebrospinal fluid samples from patients with dementia due to Alzheimer's disease (N = 17, age 52-86 years), patients with mild cognitive impairment due to Alzheimer's disease (N = 5, age 62-88 years), and controls (N = 17, age 41-82 years). Sample set II included cerebrospinal fluid samples from patients with dementia due to Alzheimer's disease (N = 24, age 52-84 years), patients with mild cognitive impairment due to Alzheimer's disease (N = 18, age 58-83 years), and controls (N = 36, age 43-80 years). The reproducibility of the novel method showed coefficients of variation of the measured synaptotagmin-1 peptide 215-223 (VPYSELGGK) and peptide 238-245 (HDIIGEFK) of 14% or below. In both investigated sample sets, the CSF levels of synaptotagmin-1 were significantly increased in patients with dementia due to Alzheimer's disease (P ≤ 0.0001) and in patients with mild cognitive impairment due to Alzheimer's disease (P < 0.001). In addition, in sample set I the synaptotagmin-1 level was significantly higher in patients with mild cognitive impairment due to Alzheimer's disease compared with patients with dementia due to Alzheimer's disease (P ≤ 0.05).
Cerebrospinal fluid synaptotagmin-1 is a promising biomarker to monitor synaptic dysfunction and degeneration in Alzheimer's disease that may be useful for clinical diagnosis, to monitor effect on synaptic integrity by novel drug candidates, and to explore pathophysiology directly in patients with Alzheimer's disease.
Favorable Geochemistry from Springs and Wells in Colorado
Richard E. Zehner
2012-02-01
This layer contains favorable geochemistry for high-temperature geothermal systems, as interpreted by Richard "Rick" Zehner. The data were compiled from USGS sources. The original data set combines 15,622 samples collected in the State of Colorado from several sources, including 1) the original Geotherm geochemical database, 2) USGS NWIS (National Water Information System), 3) Colorado Geological Survey geothermal sample data, and 4) original samples collected by R. Zehner at various sites during the 2011 field season. These samples are also available in a separate shapefile, FlintWaterSamples.shp. Data from all samples were reportedly collected using standard water sampling protocols (filtering through a 0.45 micron filter, etc.). Sample information was standardized to ppm (micrograms/liter) in spreadsheet columns. Commonly-used cation and silica geothermometer temperature estimates are included.
Chemistry Data for Geothermometry Mapping of Deep Hydrothermal Reservoirs in Southeastern Idaho
Earl Mattson
2016-01-18
This dataset includes chemistry of geothermal water samples of the Eastern Snake River Plain and surrounding area. The samples included in this dataset were collected during the springs and summers of 2014 and 2015. All chemical analyses of the samples were conducted in the Analytical Laboratory at the Center for Advanced Energy Studies in Idaho Falls, Idaho. This data set supersedes the #425 submission and is the final submission for AOP 3.1.2.1 for INL. Isotopic data collected by Mark Conrad will be submitted in a separate file.
Cross-cultural dataset for the evolution of religion and morality project.
Purzycki, Benjamin Grant; Apicella, Coren; Atkinson, Quentin D; Cohen, Emma; McNamara, Rita Anne; Willard, Aiyana K; Xygalatas, Dimitris; Norenzayan, Ara; Henrich, Joseph
2016-11-08
A considerable body of research cross-culturally examines the evolution of religious traditions, beliefs and behaviors. The bulk of this research, however, draws from coded qualitative ethnographies rather than from standardized methods specifically designed to measure religious beliefs and behaviors. Psychological data sets that examine religious thought and behavior in controlled conditions tend to be disproportionately sampled from student populations. Some cross-national databases employ standardized methods at the individual level, but are primarily focused on fully market integrated, state-level societies. The Evolution of Religion and Morality Project sought to generate a data set that systematically probed individual level measures sampling across a wider range of human populations. The set includes data from behavioral economic experiments and detailed surveys of demographics, religious beliefs and practices, material security, and intergroup perceptions. This paper describes the methods and variables, briefly introduces the sites and sampling techniques, notes inconsistencies across sites, and provides some basic reporting for the data set.
Perceived climate in physical activity settings.
Gill, Diane L; Morrow, Ronald G; Collins, Karen E; Lucey, Allison B; Schultz, Allison M
2010-01-01
This study focused on the perceived climate for LGBT youth and other minority groups in physical activity settings. A large sample of undergraduates and a selected sample including student teachers/interns and a campus Pride group completed a school climate survey and rated the climate in three physical activity settings (physical education, organized sport, exercise). Overall, school climate survey results paralleled the results with national samples, revealing high levels of homophobic remarks and low levels of intervention. Physical activity climate ratings were mid-range, but a multivariate analysis of variance (MANOVA) revealed clear differences, with all settings rated more inclusive for racial/ethnic minorities and most exclusive for gays/lesbians and people with disabilities. The results are in line with national surveys and research suggesting sexual orientation and physical characteristics are often the basis for harassment and exclusion in sport and physical activity. The current results also indicate that future physical activity professionals recognize exclusion, suggesting they could benefit from programs that move beyond awareness to skills and strategies for creating more inclusive programs.
Cross-cultural dataset for the evolution of religion and morality project
Purzycki, Benjamin Grant; Apicella, Coren; Atkinson, Quentin D.; Cohen, Emma; McNamara, Rita Anne; Willard, Aiyana K.; Xygalatas, Dimitris; Norenzayan, Ara; Henrich, Joseph
2016-01-01
A considerable body of research cross-culturally examines the evolution of religious traditions, beliefs and behaviors. The bulk of this research, however, draws from coded qualitative ethnographies rather than from standardized methods specifically designed to measure religious beliefs and behaviors. Psychological data sets that examine religious thought and behavior in controlled conditions tend to be disproportionately sampled from student populations. Some cross-national databases employ standardized methods at the individual level, but are primarily focused on fully market integrated, state-level societies. The Evolution of Religion and Morality Project sought to generate a data set that systematically probed individual level measures sampling across a wider range of human populations. The set includes data from behavioral economic experiments and detailed surveys of demographics, religious beliefs and practices, material security, and intergroup perceptions. This paper describes the methods and variables, briefly introduces the sites and sampling techniques, notes inconsistencies across sites, and provides some basic reporting for the data set. PMID:27824332
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Jason; Smith, Fred
This semiannual event includes sampling groundwater and surface water at the Monticello Disposal and Processing Sites. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated) and Program Directive MNT-2016-01. Complete sample sets were collected from 42 of 48 planned locations (9 of 9 former mill site wells, 13 of 13 downgradient wells, 7 of 9 downgradient permeable reactive barrier wells, 4 of 7 seeps and wetlands, and 9 of 10 surface water locations). Planned monitoring locations are shown in Attachment 1, Sampling and Analysis Work Order. Locations R6-M3, SW00-01, Seep 1, Seep 2, and Seep 5 were not sampled due to insufficient water availability. A partial sample was collected at location R4-M3 due to insufficient water. All samples from the permeable reactive barrier wells were filtered as specified in the program directive. Duplicate samples were collected from surface water location Sorenson and from monitoring wells 92-07 and R10-M1. Water levels were measured at all sampled wells and an additional set of wells. See Attachment 2, Trip Report, for additional details. The contaminants of concern (COCs) for the Monticello sites are arsenic, manganese, molybdenum, nitrate + nitrite as nitrogen (nitrate + nitrite as N), selenium, uranium, and vanadium. Locations with COCs that exceeded remediation goals are listed in Table 1 and Table 2. Time-concentration graphs of the COCs for all groundwater and surface water locations are included in Attachment 3, Data Presentation. An assessment of anomalous data is included in Attachment 4.
ERIC Educational Resources Information Center
Marshall, James L.
2000-01-01
Introduces a portable and permanent elemental collection including 87 samples of elements, each a minimum of one gram. Demonstrates radioactivity, magnetism, fluorescence, melting solids, spectral analysis, and conduction of heat. Includes a display of minerals associated with the elements. (YDS)
NHEXAS PHASE I ARIZONA STUDY--METALS IN SOIL ANALYTICAL RESULTS
The Metals in Soil data set contains analytical results for measurements of up to 11 metals in 551 soil samples over 392 households. Samples were taken by collecting surface soil in the yard and next to the foundation from each residence. The primary metals of interest include ...
A Dual-Focus Motivational Intervention to Reduce the Risk of Alcohol-Exposed Pregnancy
ERIC Educational Resources Information Center
Velasquez, Mary M.; Ingersoll, Karen S.; Sobell, Mark B.; Floyd, R. Louise; Sobell, Linda Carter; von Sternberg, Kirk
2010-01-01
Project CHOICES developed an integrated behavioral intervention for prevention of prenatal alcohol exposure in women at high risk for alcohol-exposed pregnancies. Settings included primary care, university-hospital based obstetrical/gynecology practices, an urban jail, substance abuse treatment settings, and a media-recruited sample in three large…
Niosh analytical methods for Set G
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1976-12-01
Industrial hygiene sampling and analytical monitoring methods validated under the joint NIOSH/OSHA Standards Completion Program for Set G are contained herein. Monitoring methods for the following compounds are included: butadiene, heptane, ketene, methyl cyclohexane, octachloronaphthalene, pentachloronaphthalene, petroleum distillates, propylene dichloride, turpentine, dioxane, hexane, LPG, naphtha (coal tar), octane, pentane, propane, and Stoddard solvent.
Oscillating-flow regenerator test rig
NASA Technical Reports Server (NTRS)
Wood, J. G.; Gedeon, D. R.
1994-01-01
This report summarizes work performed in setting up and performing tests on a regenerator test rig. An earlier status report presented test results, together with heat transfer correlations, for four regenerator samples (two woven screen samples and two felt metal samples). Lessons learned from this testing led to improvements to the experimental setup, mainly instrumentation, as well as to the test procedure. Given funding and time constraints for this project, it was decided to complete as much testing as possible while the rig was set up and operational, and to forgo final data reduction and analysis until later. Additional testing was performed on several of the previously tested samples as well as on five newly fabricated samples. The following report is a summary of the work performed at OU, with many of the final test results included in raw data form.
A modeling approach to compare ΣPCB concentrations between congener-specific analyses
Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.
2017-01-01
Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time.
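The conversion idea in this abstract can be sketched with synthetic data: fit a linear regression between ΣPCB sums computed from a partial congener set and from the full 209-congener set, then use it to convert and to estimate the partial set's proportional contribution. Everything below (sample counts, congener selection, distributions) is illustrative, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" concentrations for 209 congeners across 50 samples.
true = rng.lognormal(mean=0.0, sigma=1.0, size=(50, 209))

# Method A quantifies a 119-congener subset; method B quantifies all 209.
subset = rng.choice(209, size=119, replace=False)
sum_a = true[:, subset].sum(axis=1)   # partial-set ΣPCB
sum_b = true.sum(axis=1)              # full Σ209PCB

# Conversion model: sum_b ≈ slope * sum_a + intercept.
slope, intercept = np.polyfit(sum_a, sum_b, deg=1)
predicted = slope * sum_a + intercept

# Proportional contribution of the partial congener set to ΣPCB.
fraction = (sum_a / sum_b).mean()
```

In practice the model would be fit on a calibration subset of samples analyzed by both methods and then applied to samples with only the partial analysis.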
Song, Tao; Zhang, Feng-ping; Liu, Yao-min; Wu, Zong-wen; Suo, You-rui
2012-08-01
In the present research, a novel method was established for the determination of five fatty acids in soybean oil by transmission reflection-near infrared spectroscopy. The optimum conditions of the mathematical model for the five components (C16:0, C18:0, C18:1, C18:2 and C18:3) were studied, including sample set selection, chemical value analysis, and the detection methods and conditions. Chemical values were determined by gas chromatography. One hundred fifty-eight samples were selected: 138 for the modeling set, 10 for the testing set and 10 for the unknown sample set. All samples were placed in sample pools and scanned by transmission reflection-near infrared spectroscopy after sonic cleaning for 10 minutes. The 1100-2500 nm spectral region was analyzed, with an acquisition interval of 2 nm. A modified partial least squares method was chosen to create the calibration model. Results demonstrated that the 1-VR values of the five fatty acids between the reference values of the modeling sample set and the near infrared predicted values were 0.8839, 0.5830, 0.9001, 0.9776 and 0.9596, respectively, and the corresponding SECV values were 0.42, 0.29, 0.83, 0.46 and 0.21. The standard errors of calibration (SECV) of the five fatty acids between the reference values of the testing sample set and the near infrared predicted values were 0.891, 0.790, 0.900, 0.976 and 0.942, respectively. It was shown that the near infrared predicted values were linear with the chemical values and that the mathematical model established for the fatty acids of soybean oil was feasible. For validation, 10 unknown samples were analyzed by near infrared spectroscopy; the relative standard deviation between predicted and chemical values was less than 5.50%. That is to say, transmission reflection-near infrared spectroscopy has good accuracy in the analysis of fatty acids in soybean oil.
Determination of polarimetric parameters of honey by near-infrared transflectance spectroscopy.
García-Alvarez, M; Ceresuela, S; Huidobro, J F; Hermida, M; Rodríguez-Otero, J L
2002-01-30
NIR transflectance spectroscopy was used to determine polarimetric parameters (direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides) and sucrose in honey. In total, 156 honey samples were collected during 1992 (45 samples), 1995 (56 samples), and 1996 (55 samples). Samples were analyzed by NIR spectroscopy and polarimetric methods. Calibration (118 samples) and validation (38 samples) sets were made up; honeys from the three years were included in both sets. Calibrations were performed by modified partial least-squares regression, with scatter correction by standard normal variate and detrend methods. For direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides, good statistics (bias, SEV, and R²) were obtained for the validation set, and no statistically significant (p = 0.05) differences were found between instrumental and polarimetric methods for these parameters. Statistical data for sucrose were not as good as those of the other parameters. Therefore, NIR spectroscopy is not an effective method for quantitative analysis of sucrose in these honey samples. However, NIR spectroscopy may be an acceptable method for semiquantitative evaluation of sucrose for honeys, such as those in our study, containing up to 3% sucrose. Further work is necessary to validate the uncertainty at higher levels.
ERIC Educational Resources Information Center
Lounsbury, John W.; Gibson, Lucy W.; Sundstrom, Eric; Wilburn, Denise; Loveland, James M.
2004-01-01
An empirical test of Munson and Rubenstein's (1992) assertion that 'school is work' compared a sample of students in a high school with a sample of workers in a manufacturing plant in the same metropolitan area. Data from both samples included scores on six personality traits--Conscientiousness, Agreeableness, Openness, Emotional Stability,…
Updated 34-band Photometry for the SINGS/KINGFISH Samples of Nearby Galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dale, D. A.; Turner, J. A.; Cook, D. O.
2017-03-01
We present an update to the ultraviolet-to-radio database of global broadband photometry for the 79 nearby galaxies that comprise the union of the KINGFISH (Key Insights on Nearby Galaxies: A Far-Infrared Survey with Herschel) and SINGS (Spitzer Infrared Nearby Galaxies Survey) samples. The 34-band data set presented here includes contributions from observational work carried out with a variety of facilities including GALEX, SDSS, Pan-STARRS1, NOAO, 2MASS, Wide-Field Infrared Survey Explorer, Spitzer, Herschel, Planck, JCMT, and the VLA. Improvements of note include recalibrations of previously published SINGS BVR_C I_C and KINGFISH far-infrared/submillimeter photometry. Similar to previous results in the literature, an excess of submillimeter emission above model predictions is seen primarily for low-metallicity dwarf or irregular galaxies. This 34-band photometric data set for the combined KINGFISH+SINGS sample serves as an important multiwavelength reference for the variety of galaxies observed at low redshift. A thorough analysis of the observed spectral energy distributions is carried out in a companion paper.
Luna B. Leopold--pioneer setting the stage for modern hydrology
Hunt, Randall J.; Meine, Curt
2012-01-01
In 1986, during the first year of graduate school, the lead author was sampling the water from a pitcher pump in front of “The Shack,” the setting of the opening essays in Aldo Leopold's renowned book A Sand County Almanac. The sampling was part of my Master's work that included quarterly monitoring of water quality on the Leopold Memorial Reserve (LMR) near Baraboo, Wisconsin. The Shack was already a well-known landmark, and it was common to come upon visitors and hikers there. As such, I took no special note of the man who approached me as I was filling sample bottles and asked, as was typical, “What are you doing?”
In-Situ Data for Microphysical Retrievals: TC4, 2007
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mace, Gerald
This data set is derived from measurements collected in situ by the NASA DC-8 during the Tropical Composition, Cloud and Climate Coupling Experiment (TC4), conducted during July and August 2007 (Toon et al., 2010). During this experiment the DC-8 was based in San Jose, Costa Rica, and sampled clouds in the maritime region of the Eastern Pacific and adjoining continental areas. The primary objective of the DC-8 during this deployment was to sample ice clouds associated with convective activity. While the vast majority of the data are from ice-phase clouds with a recent association with convection, other types of clouds, such as boundary layer clouds and active convection, were also sampled and are represented in this data set. The derived data set, as compiled in this delivery, includes approximately 15,000 5-second averaged measurements collected by the NASA DC-8.
NASA Technical Reports Server (NTRS)
Lynes, Michael A. (Inventor); Fernandez, Salvador M. (Inventor)
2010-01-01
An assay technique for label-free, highly parallel, qualitative and quantitative detection of specific cell populations in a sample and for assessing cell functional status, cell-cell interactions and cellular responses to drugs, environmental toxins, bacteria, viruses and other factors that may affect cell function. The technique includes a) creating a first array of binding regions in a predetermined spatial pattern on a sensor surface capable of specifically binding the cells to be assayed; b) creating a second set of binding regions in specific spatial patterns relative to the first set designed to efficiently capture potential secreted or released products from cells captured on the first set of binding regions; c) contacting the sensor surface with the sample, and d) simultaneously monitoring the optical properties of all the binding regions of the sensor surface to determine the presence and concentration of specific cell populations in the sample and their functional status by detecting released or secreted bioproducts.
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics, including (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL is also enabled with two novel features; the first one being a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second feature is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features in conjunction with bootstrapping enable the user to monitor the stability, robustness, and convergence of GSA with the increase in sample size for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.
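The variogram idea at the core of VARS can be illustrated on a toy two-parameter model: for each parameter, estimate the directional variogram γ_i(h) = 0.5·E[(y(x + h·e_i) − y(x))²] from a sample of base points; a larger variogram at scale h indicates higher sensitivity at that scale. This sketch is not the VARS-TOOL implementation; the model function, step size, and sample size are invented for illustration.

```python
import numpy as np

def model(x):
    # Toy response surface: parameter 0 drives the output far more than parameter 1.
    return np.sin(3.0 * x[:, 0]) + 0.1 * x[:, 1]

def directional_variogram(f, base, h, index):
    # gamma_i(h) = 0.5 * E[(f(x + h * e_i) - f(x))^2], estimated over base points.
    perturbed = base.copy()
    perturbed[:, index] += h
    diff = f(perturbed) - f(base)
    return 0.5 * np.mean(diff ** 2)

rng = np.random.default_rng(2)
base = rng.uniform(0.0, 0.9, size=(2000, 2))  # keep x + h inside the unit cube
h = 0.1
gamma = [directional_variogram(model, base, h, i) for i in range(2)]
```

Integrating γ_i(h) over a range of h values would give an IVARS-style multi-scale metric; the full framework also recovers variance- and derivative-based metrics from the same sample.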
McDade, Thomas W; Williams, Sharon; Snodgrass, J Josh
2007-11-01
Logistical constraints associated with the collection and analysis of biological samples in community-based settings have been a significant impediment to integrative, multilevel bio-demographic and biobehavioral research. However, recent methodological developments have overcome many of these constraints and have also expanded the options for incorporating biomarkers into population-based health research in international as well as domestic contexts. In particular, dried blood spot (DBS) sampling, in which drops of whole blood are collected on filter paper from a simple finger prick, provides a minimally invasive method for collecting blood samples in nonclinical settings. After a brief discussion of biomarkers more generally, we review procedures for collecting, handling, and analyzing DBS samples. Advantages of using DBS samples compared with venipuncture include the relative ease and low cost of sample collection, transport, and storage. Disadvantages include requirements for assay development and validation, as well as the relatively small volumes of sample. We present the results of a comprehensive literature review of published protocols for analysis of DBS samples, and we provide more detailed analysis of protocols for 45 analytes likely to be of particular relevance to population-level health research. Our objective is to provide investigators with the information they need to make informed decisions regarding the appropriateness of blood spot methods for their research interests.
Family Violence: A Curriculum Sample. Women's Issues Series, Vol. II.
ERIC Educational Resources Information Center
Refugee Women's Alliance, Seattle, WA.
The materials in this curriculum sample are written as an English-as-a-Second-Language (ESL) lesson for immigrants and refugees, designed to begin discussion of family violence. An introductory section outlines issues related to discussion of family violence in the classroom setting, including the importance of opening lines of communication and…
U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--METALS IN SOIL ANALYTICAL RESULTS
The Metals in Soil data set contains analytical results for measurements of up to 11 metals in 91 soil samples over 91 households. Samples were taken by collecting surface soil in the yard of each residence. The primary metals of interest include lead (CAS# 7439-92-1), arsenic ...
Bladder cancer biomarker discovery using global metabolomic profiling of urine.
Wittmann, Bryan M; Stirdivant, Steven M; Mitchell, Matthew W; Wulff, Jacob E; McDunn, Jonathan E; Li, Zhen; Dennis-Barrie, Aphrihl; Neri, Bruce P; Milburn, Michael V; Lotan, Yair; Wolfert, Robert L
2014-01-01
Bladder cancer (BCa) is a common malignancy worldwide and has a high probability of recurrence after initial diagnosis and treatment. As a result, recurrence surveillance, primarily involving repeated cystoscopies, is a critical component of post-diagnosis patient management. Since cystoscopy is invasive, expensive, and a possible deterrent to patient compliance with regular follow-up screening, new non-invasive technologies to aid in the detection of recurrent and/or primary bladder cancer are urgently needed. In this study, mass spectrometry-based metabolomics was employed to identify biochemical signatures in human urine that differentiate bladder cancer from non-cancer controls. Over 1000 distinct compounds were measured, including 587 named compounds of known chemical identity. Initial biomarker identification was conducted using a 332-subject sample set of retrospective urine samples (cohort 1), which included 66 BCa-positive samples. A set of 25 candidate biomarkers was selected based on statistical significance, fold difference, and metabolic pathway coverage. The 25 candidate biomarkers were tested against an independent urine sample set (cohort 2) using random forest analysis, with palmitoyl sphingomyelin, lactate, adenosine, and succinate providing the strongest predictive power for differentiating cohort 2 cancer from non-cancer urines. Cohort 2 metabolite profiling revealed additional metabolites, including arachidonate, that were higher in cohort 2 cancer vs. non-cancer controls, but were below quantitation limits in the cohort 1 profiling. Metabolites related to lipid metabolism may be especially interesting biomarkers. The results suggest that urine metabolites may provide a much-needed non-invasive adjunct diagnostic to cystoscopy for detection of bladder cancer and recurrent disease management.
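The cohort-2 validation step described above (random forest analysis ranking candidate biomarkers) can be sketched as follows. This is an illustrative reconstruction using scikit-learn on invented data; the metabolite names come from the abstract, but all values, group sizes, and effect sizes are synthetic assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical urine-metabolite matrix: rows = subjects, columns = candidate
# biomarkers. Names are illustrative; values are simulated, not study data.
metabolites = ["palmitoyl_sphingomyelin", "lactate", "adenosine", "succinate",
               "noise_1", "noise_2"]
n_cancer, n_control = 60, 120

# Simulate elevated levels of the first four metabolites in cancer samples.
X_cancer = rng.normal(loc=[2, 2, 2, 2, 0, 0], scale=1.0, size=(n_cancer, 6))
X_control = rng.normal(loc=0.0, scale=1.0, size=(n_control, 6))
X = np.vstack([X_cancer, X_control])
y = np.array([1] * n_cancer + [0] * n_control)

# A random forest ranks the candidates by how strongly each one separates
# cancer from non-cancer samples (via its feature importances).
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = sorted(zip(metabolites, clf.feature_importances_), key=lambda t: -t[1])
print([name for name, _ in ranked[:4]])
```

In a real validation the forest would be trained on one cohort and scored on the independent one; here a single fit suffices to show the ranking mechanism.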
Dobecki, Marek
2012-01-01
This paper reviews the requirements for measurement methods of chemical agents in the air at workstations. European standards, which have the status of Polish standards, comprise requirements and information on sampling strategy, measuring techniques, types of samplers, sampling pumps, and methods of occupational exposure evaluation for a given technological process. Measurement methods, including air sampling and the analytical procedure in a laboratory, should be appropriately validated before intended use. In the validation process, selected methods are tested and an uncertainty budget is established. This paper presents the validation procedure that should be implemented in the laboratory, together with suitable statistical tools and the major components of uncertainty to be taken into consideration. Methods of quality control, including sampling and laboratory analyses, are discussed. The relative expanded uncertainty for each measurement, expressed as a percentage, should not exceed the limit values set depending on the type of occupational exposure (short-term or long-term) and the magnitude of exposure to chemical agents in the work environment.
The DINGO dataset: a comprehensive set of data for the SAMPL challenge
NASA Astrophysics Data System (ADS)
Newman, Janet; Dolezal, Olan; Fazio, Vincent; Caradoc-Davies, Tom; Peat, Thomas S.
2012-05-01
Part of the latest SAMPL challenge was to predict how a small fragment library of 500 commercially available compounds would bind to a protein target. In order to assess the modellers' work, a reasonably comprehensive set of data was collected using a number of techniques. These included surface plasmon resonance, isothermal titration calorimetry, protein crystallization and protein crystallography. Using these techniques we could determine the kinetics of fragment binding, the energy of binding, how this affects the ability of the target to crystallize, and when the fragment did bind, the pose or orientation of binding. Both the final data set and all of the raw images have been made available to the community for scrutiny and further work. This overview sets out to give the parameters of the experiments done and what might be done differently for future studies.
2008-12-20
Equation 6 for the sample likelihood function gives a “concentrated likelihood function,” which depends on correlation parameters θh and ph. This...step one and estimates correlation parameters using the new data set including all previous sample points and the new data point x. The algorithm...
9 CFR 130.50 - Payment of user fees.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., such as tests on samples submitted to NVSL or FADDL, diagnostic reagents, slide sets, tissue sets, and..., MD 20738-1231. (2) All types of checks, including traveler's checks, drawn on a U.S. bank in U.S.... bank in U.S. dollars and made payable to the U.S. Department of Agriculture or USDA; or (4) Credit...
9 CFR 130.50 - Payment of user fees.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., such as tests on samples submitted to NVSL or FADDL, diagnostic reagents, slide sets, tissue sets, and..., MD 20738-1231. (2) All types of checks, including traveler's checks, drawn on a U.S. bank in U.S.... bank in U.S. dollars and made payable to the U.S. Department of Agriculture or USDA; or (4) Credit...
9 CFR 130.50 - Payment of user fees.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., such as tests on samples submitted to NVSL or FADDL, diagnostic reagents, slide sets, tissue sets, and..., MD 20738-1231. (2) All types of checks, including traveler's checks, drawn on a U.S. bank in U.S.... bank in U.S. dollars and made payable to the U.S. Department of Agriculture or USDA; or (4) Credit...
9 CFR 130.50 - Payment of user fees.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., such as tests on samples submitted to NVSL or FADDL, diagnostic reagents, slide sets, tissue sets, and..., MD 20738-1231. (2) All types of checks, including traveler's checks, drawn on a U.S. bank in U.S.... bank in U.S. dollars and made payable to the U.S. Department of Agriculture or USDA; or (4) Credit...
9 CFR 130.50 - Payment of user fees.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., such as tests on samples submitted to NVSL or FADDL, diagnostic reagents, slide sets, tissue sets, and..., MD 20738-1231. (2) All types of checks, including traveler's checks, drawn on a U.S. bank in U.S.... bank in U.S. dollars and made payable to the U.S. Department of Agriculture or USDA; or (4) Credit...
Microsatellite markers for northern red oak (Fagaceae: Quercus rubra)
Preston R. Aldrich; Charles H. Michler; Weilin Sun; Jeanne Romero-Severson
2002-01-01
We provide primer sequences for 14 (GA)n microsatellite loci developed from northern red oak, an important timber species. We screened loci using two sets of samples. A parent-offspring set included DNA from seven acorns collected from one mother tree along with maternal DNA, to determine that all progeny carried a maternal allele at each locus....
NASA Technical Reports Server (NTRS)
Young, Kelsey E.; Evans, C. A.; Hodges, K. V.
2012-01-01
While traditional geologic mapping includes the examination of structural relationships between rock units in the field, more advanced technology now enables us to simultaneously collect and combine analytical datasets with field observations. Information about tectonomagmatic processes can be gleaned from these combined data products. Historically, construction of multi-layered field maps that include sample data has been accomplished serially (first map and collect samples, analyze samples, combine data, and finally, readjust maps and conclusions about geologic history based on combined data sets). New instruments that can be used in the field, such as a handheld X-ray fluorescence (XRF) unit, are now available. Targeted use of such instruments enables geologists to collect preliminary geochemical data while in the field so that they can optimize scientific data return from each field traverse. Our study tests the application of this technology and projects the benefits gained by real-time geochemical data in the field. The integrated data set produces a richer geologic map and facilitates a stronger contextual picture for field geologists when collecting field observations and samples for future laboratory work. Real-time geochemical data on samples also provide valuable insight regarding sampling decisions by the field geologist.
Caught you: threats to confidentiality due to the public release of large-scale genetic data sets
2010-01-01
Background Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. Discussion The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with less than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged to the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case as a real-life example is used to illustrate that approach. Summary Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public. PMID:21190545
Caught you: threats to confidentiality due to the public release of large-scale genetic data sets.
Wjst, Matthias
2010-12-29
Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with less than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged to the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case as a real-life example is used to illustrate that approach. Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public.
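The "Netflix-type" re-identification described in the two records above amounts to joining a small identified SNP panel against an anonymized data set on the genotype columns alone. A minimal sketch with pandas follows; every name, sample ID, and genotype here is invented for illustration.

```python
import pandas as pd

# Five SNP markers shared by both data sets (hypothetical identifiers).
snps = ["rs1", "rs2", "rs3", "rs4", "rs5"]

# "Anonymized" research data set: personal identifiers already stripped.
anonymized = pd.DataFrame(
    [["S1", "AA", "AG", "GG", "CT", "TT"],
     ["S2", "AG", "AA", "GG", "CC", "CT"],
     ["S3", "AA", "AG", "AG", "CT", "TT"]],
    columns=["sample_id"] + snps)

# Small identified panel, e.g. from a clinical test or forensic record.
identified = pd.DataFrame(
    [["Alice", "AG", "AA", "GG", "CC", "CT"]],
    columns=["name"] + snps)

# An inner join on the genotype columns alone links identity to sample:
# with enough SNPs, genotype combinations are effectively unique.
match = identified.merge(anonymized, on=snps, how="inner")
print(match[["name", "sample_id"]].to_dict("records"))
# -> [{'name': 'Alice', 'sample_id': 'S2'}]
```

Real attacks use on the order of 100 SNPs, where the probability of a chance genotype collision is negligible; five markers are used here only to keep the example readable.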
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We believe that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our approach offers a simple and feasible way to obtain virtual face samples: Gaussian noise (or other types of noise) is imposed on the original training samples to produce possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
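The virtual-sample step described above (noised copies of the training images) can be sketched in a few lines of NumPy. This is an illustrative augmentation under assumed pixel ranges, not the authors' kernel collaborative representation implementation.

```python
import numpy as np

def add_virtual_samples(X_train, sigma=0.05, seed=0):
    """Augment a training set with one noised copy per original sample.
    The additive Gaussian noise stands in for unseen variation in
    illumination, expression, and posture (an assumption of this sketch)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=X_train.shape)
    X_virtual = np.clip(X_train + noise, 0.0, 1.0)  # keep valid pixel range
    return np.vstack([X_train, X_virtual])

# Toy "face" vectors: 4 training images of 16 pixels each, values in [0, 1].
X = np.random.default_rng(1).uniform(size=(4, 16))
X_aug = add_virtual_samples(X)
print(X_aug.shape)  # -> (8, 16): original plus virtual samples
```

The augmented matrix would then feed the (kernel) collaborative representation solver in place of the original training set.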
Sample Return from Ancient Hydrothermal Springs
NASA Technical Reports Server (NTRS)
Allen, Carlton C.; Oehler, Dorothy Z.
2008-01-01
Hydrothermal spring deposits on Mars would make excellent candidates for sample return. Molecular phylogeny suggests that life on Earth may have arisen in hydrothermal settings [1-3], and on Mars, such settings not only would have supplied energy-rich waters in which martian life may have evolved [4-7] but also would have provided warm, liquid water to martian life forms as the climate became colder and drier [8]. Since silica, sulfates, and clays associated with hydrothermal settings are known to preserve geochemical and morphological remains of ancient terrestrial life [9-11], such settings on Mars might similarly preserve evidence of martian life. Finally, because formation of hydrothermal springs includes surface and subsurface processes, martian spring deposits would offer the potential to assess astrobiological potential and hydrological history in a variety of settings, including surface mineralized terraces, associated stream deposits, and subsurface environments where organic remains may have been well protected from oxidation. Previous attempts to identify martian spring deposits from orbit have been general or limited by resolution of available data [12-14]. However, new satellite imagery from HiRISE has a resolution of 28 cm/pixel, and based on these new data, we have interpreted several features in Vernal Crater, Arabia Terra as ancient hydrothermal springs [15, 16].
Watershed-based survey designs
Detenbeck, N.E.; Cincotta, D.; Denver, J.M.; Greenlee, S.K.; Olsen, A.R.; Pitchford, A.M.
2005-01-01
Watershed-based sampling design and assessment tools help serve the multiple goals for water quality monitoring required under the Clean Water Act, including assessment of regional conditions to meet Section 305(b), identification of impaired water bodies or watersheds to meet Section 303(d), and development of empirical relationships between causes or sources of impairment and biological responses. Creation of GIS databases for hydrography, hydrologically corrected digital elevation models, and hydrologic derivatives such as watershed boundaries and upstream–downstream topology of subcatchments would provide a consistent seamless nationwide framework for these designs. The elements of a watershed-based sample framework can be represented either as a continuous infinite set defined by points along a linear stream network, or as a discrete set of watershed polygons. Watershed-based designs can be developed with existing probabilistic survey methods, including the use of unequal probability weighting, stratification, and two-stage frames for sampling. Case studies for monitoring of Atlantic Coastal Plain streams, West Virginia wadeable streams, and coastal Oregon streams illustrate three different approaches for selecting sites for watershed-based survey designs.
Instrumentation for investigation of corona discharges from insulated wires
NASA Technical Reports Server (NTRS)
Doreswamy, C. V.; Crowell, C. S.
1975-01-01
A coaxial cylinder configuration is used to investigate the effect of corona impulses on the deterioration of electrical insulation. The corona currents flowing through the resistance develop a voltage which is fed to the measuring set-up. The value of this resistance is made equal to the surge impedance of the coaxial cylinder set-up to prevent reflections. This instrumentation includes a phase shifter and Schmitt trigger and is designed to sample, measure, and display corona impulses occurring during any predetermined sampling period of a randomly selectable half cycle of the 60 Hz high-voltage wave.
Nakagaki, Naomi; Hitt, Kerie J.; Price, Curtis V.; Falcone, James A.
2012-01-01
Characterization of natural and anthropogenic features that define the environmental settings of sampling sites for streams and groundwater, including drainage basins and groundwater study areas, is an essential component of water-quality and ecological investigations being conducted as part of the U.S. Geological Survey's National Water-Quality Assessment program. Quantitative characterization of environmental settings, combined with physical, chemical, and biological data collected at sampling sites, contributes to understanding the status of, and influences on, water-quality and ecological conditions. To support studies for the National Water-Quality Assessment program, a geographic information system (GIS) was used to develop a standard set of methods to consistently characterize the sites, drainage basins, and groundwater study areas across the nation. This report describes three methods used for characterization (simple overlay, area-weighted areal interpolation, and land-cover-weighted areal interpolation) and their appropriate applications to geographic analyses that have different objectives and data constraints. In addition, this document records the GIS thematic datasets that are used for the Program's national design and data analyses.
Storm-water data for Bear Creek basin, Jackson County, Oregon 1977-78
Wittenberg, Loren A.
1978-01-01
Storm-water-quality samples were collected from four subbasins in the Bear Creek basin in southern Oregon. These subbasins vary in drainage size, channel slope, effective impervious area, and land use. Automatic water-quality samplers and precipitation and discharge gages were set up in each of the four subbasins. During the period October 1977 through May 1978, 19 sets of samples, including two base-flow samples, were collected. Fecal coliform bacteria colonies per 100-milliliter sample ranged from less than 1,000 to more than 1,000,000. Suspended-sediment concentrations ranged from less than 1 to more than 2,300 milligrams per liter. One subbasin consisting of downtown businesses and streets with heavy vehicular traffic was monitored for lead. Total lead values ranging from 100 to 1,900 micrograms per liter were measured during one storm event.
ERIC Educational Resources Information Center
Willits, Fern; Brennan, Mark
2017-01-01
This study assessed the relationships of student attributes, course characteristics and course outcomes to college students' ratings of course quality in three types of settings. The analysis utilised data from online surveys of samples of college students conducted in 2011 and 2012 at the Pennsylvania State University. Included in the analysis…
Ambient-temperature incubation for the field detection of Escherichia coli in drinking water.
Brown, J; Stauber, C; Murphy, J L; Khan, A; Mu, T; Elliott, M; Sobsey, M D
2011-04-01
Escherichia coli is the pre-eminent microbiological indicator used to assess safety of drinking water globally. The cost and equipment requirements for processing samples by standard methods may limit the scale of water quality testing in technologically less developed countries and other resource-limited settings, however. We evaluate here the use of ambient-temperature incubation in detection of E. coli in drinking water samples as a potential cost-saving and convenience measure with applications in regions with high (>25°C) mean ambient temperatures. This study includes data from three separate water quality assessments: two in Cambodia and one in the Dominican Republic. Field samples of household drinking water were processed in duplicate by membrane filtration (Cambodia), Petrifilm™ (Cambodia) or Colilert® (Dominican Republic) on selective media at both standard incubation temperature (35–37°C) and ambient temperature, using up to three dilutions and three replicates at each dilution. Matched sample sets were well correlated, with 80% of samples (n = 1037) within the same risk-based microbial count strata (E. coli counts of <1, 1–10, 11–100, 101–1000, and >1000 CFU per 100 ml), and a pooled coefficient of variation of 17% (95% CI 15–20%) for paired sample sets across all methods. These results suggest that ambient-temperature incubation of E. coli in at least some settings may yield sufficiently robust data for water safety monitoring where laboratory or incubator access is limited.
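The paired-sample agreement statistic quoted above (a pooled coefficient of variation across duplicate counts) might be computed along the following lines. This is one common root-mean-square formulation for replicate assays, not necessarily the study's exact calculation, and the counts below are invented.

```python
import numpy as np

def pooled_cv(pairs):
    """Root-mean-square pooled coefficient of variation for duplicate counts
    (one common formulation for paired assay replicates)."""
    pairs = np.asarray(pairs, dtype=float)
    means = pairs.mean(axis=1)
    sds = pairs.std(axis=1, ddof=1)   # sample SD of each duplicate pair
    cvs = sds / means                 # per-pair coefficient of variation
    return float(np.sqrt(np.mean(cvs ** 2)))

# Hypothetical duplicate E. coli counts (CFU per 100 ml): standard-temperature
# vs ambient-temperature incubation of the same water sample.
counts = [(120, 110), (45, 52), (300, 280), (8, 9)]
print(f"pooled CV = {pooled_cv(counts) * 100:.1f}%")
```

Zero-count pairs would need special handling (the per-pair CV is undefined at a zero mean), which this sketch omits.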
[Studies on the brand traceability of milk powder based on NIR spectroscopy technology].
Guan, Xiao; Gu, Fang-Qing; Liu, Jing; Yang, Yong-Jian
2013-10-01
Brand traceability of several different kinds of milk powder was studied by combining near infrared spectroscopy in diffuse reflectance mode with soft independent modeling of class analogy (SIMCA) in the present paper. Near infrared spectra of 138 samples, including 54 Guangming, 43 Netherlands, 33 Nestle, and 8 Yili milk powder samples, were collected. After pretreatment of the full-spectrum data variables in the training set, principal component analysis was performed, and the cumulative variance contribution rate of the first three principal components was about 99.07%. A milk powder principal component regression model based on SIMCA was established and used to classify the milk powder samples in the prediction set. The results showed that the recognition rates of Guangming, Netherlands, and Nestle milk powder were 78%, 75%, and 100%, and the rejection rates were 100%, 87%, and 88%, respectively. Therefore, near infrared spectroscopy combined with a SIMCA model can classify milk powder with high accuracy, and is a promising identification method for milk powder variety.
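The SIMCA approach described above (one principal-component model per brand, with classification by how well each class model reconstructs a spectrum) can be sketched as follows. This is a minimal sketch of the general technique on synthetic spectra, not the paper's pretreatment or model.

```python
import numpy as np
from sklearn.decomposition import PCA

class SIMCAModel:
    """Minimal SIMCA-style classifier: one PCA model per brand; a spectrum is
    assigned to the class with the smallest reconstruction residual."""

    def __init__(self, n_components=3):
        self.n_components = n_components
        self.models = {}

    def fit(self, spectra_by_class):
        for label, X in spectra_by_class.items():
            self.models[label] = PCA(self.n_components).fit(X)
        return self

    def predict(self, x):
        def residual(pca):
            recon = pca.inverse_transform(pca.transform(x[None, :]))
            return np.linalg.norm(x - recon[0])
        return min(self.models, key=lambda lbl: residual(self.models[lbl]))

# Synthetic "NIR spectra": two brands with different spectral shapes.
rng = np.random.default_rng(0)
wl = np.linspace(0, 1, 50)
brand_a = np.sin(2 * np.pi * wl) + rng.normal(0, 0.05, (30, 50))
brand_b = np.cos(2 * np.pi * wl) + rng.normal(0, 0.05, (30, 50))
model = SIMCAModel().fit({"A": brand_a, "B": brand_b})
print(model.predict(np.sin(2 * np.pi * wl)))  # prints "A"
```

Full SIMCA also sets per-class acceptance thresholds on the residual (giving the rejection rates quoted in the abstract); this sketch shows only the assignment step.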
Microarray Genomic Systems Development
2008-06-01
11 species), Escherichia coli TOP10 (7 strains), and Geobacillus stearothermophilus. Using standard molecular biology methods, we isolated genomic...comparisons. Results: Different species of bacteria, including Escherichia coli, Bacillus bacteria, and Geobacillus stearothermophilus produce qualitatively...oligonucleotides to labelled genomic DNA from a set of test samples, including eleven Bacillus species, Geobacillus stearothermophilus, and seven Escherichia
Rapid DNA analysis for automated processing and interpretation of low DNA content samples.
Turingan, Rosemary S; Vasantgadkar, Sameer; Palombo, Luke; Hogan, Catherine; Jiang, Hua; Tan, Eugene; Selden, Richard F
2016-01-01
Short tandem repeat (STR) analysis of casework samples with low DNA content include those resulting from the transfer of epithelial cells from the skin to an object (e.g., cells on a water bottle, or brim of a cap), blood spatter stains, and small bone and tissue fragments. Low DNA content (LDC) samples are important in a wide range of settings, including disaster response teams to assist in victim identification and family reunification, military operations to identify friend or foe, criminal forensics to identify suspects and exonerate the innocent, and medical examiner and coroner offices to identify missing persons. Processing LDC samples requires experienced laboratory personnel, isolated workstations, and sophisticated equipment, requires transport time, and involves complex procedures. We present a rapid DNA analysis system designed specifically to generate STR profiles from LDC samples in field-forward settings by non-technical operators. By performing STR in the field, close to the site of collection, rapid DNA analysis has the potential to increase throughput and to provide actionable information in real time. A Low DNA Content BioChipSet (LDC BCS) was developed and manufactured by injection molding. It was designed to function in the fully integrated Accelerated Nuclear DNA Equipment (ANDE) instrument previously designed for analysis of buccal swab and other high DNA content samples (Investigative Genet. 4(1):1-15, 2013). The LDC BCS performs efficient DNA purification followed by microfluidic ultrafiltration of the purified DNA, maximizing the quantity of DNA available for subsequent amplification and electrophoretic separation and detection of amplified fragments. The system demonstrates accuracy, precision, resolution, signal strength, and peak height ratios appropriate for casework analysis. The LDC rapid DNA analysis system is effective for the generation of STR profiles from a wide range of sample types. 
The technology broadens the range of sample types that can be processed and minimizes the time between sample collection, sample processing and analysis, and generation of actionable intelligence. The fully integrated Expert System is capable of interpreting a wide range of sample types and input DNA quantities, allowing samples to be processed and interpreted without a technical operator.
NASA Aviation Safety Reporting System
NASA Technical Reports Server (NTRS)
1980-01-01
Problems in briefing of relief by air traffic controllers are discussed, including problems that arise when duty positions are changed by controllers. Altimeter reading and setting errors as factors in aviation safety are discussed, including problems associated with altitude-indicating instruments. A sample of reports from pilots and controllers is included, covering the topics of ATIS broadcasts and clearance readback problems. A selection of Alert Bulletins, with their responses, is included.
Ancestral inference from haplotypes and mutations.
Griffiths, Robert C; Tavaré, Simon
2018-04-25
We consider inference about the history of a sample of DNA sequences, conditional upon the haplotype counts and the number of segregating sites observed at the present time. After deriving some theoretical results in the coalescent setting, we implement rejection sampling and importance sampling schemes to perform the inference. The importance sampling scheme addresses an extension of the Ewens Sampling Formula for a configuration of haplotypes and the number of segregating sites in the sample. The implementations include both constant and variable population size models. The methods are illustrated by two human Y chromosome datasets.
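The rejection-sampling idea mentioned above can be sketched for a simpler conditioning than the paper's (on the number of segregating sites only, not on haplotype counts): simulate coalescent trees under a constant-size model, drop Poisson mutations on them, and keep only simulations matching the observed count. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_sample_tmrca(n, s_obs, theta, n_accept=500):
    """Rejection sampler for the coalescent conditional on the number of
    segregating sites (a sketch of the general scheme, not the paper's
    haplotype-conditional version). Returns accepted TMRCA draws."""
    accepted = []
    while len(accepted) < n_accept:
        # Coalescent inter-event times for k = n, n-1, ..., 2 lineages:
        # exponential with rate k(k-1)/2 (time in units of 2N generations).
        ks = np.arange(n, 1, -1)
        times = rng.exponential(1.0 / (ks * (ks - 1) / 2))
        total_branch = np.sum(ks * times)            # total tree length
        s_sim = rng.poisson(theta / 2 * total_branch)
        if s_sim == s_obs:                           # accept exact matches only
            accepted.append(np.sum(times))           # time to the MRCA
    return np.array(accepted)

tmrca = rejection_sample_tmrca(n=10, s_obs=5, theta=2.0)
print(f"mean TMRCA given S=5: {tmrca.mean():.2f}")
```

Exact-match rejection becomes hopelessly inefficient as the conditioning grows richer, which is precisely why the paper also develops an importance sampling scheme.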
NASA Astrophysics Data System (ADS)
Breier, J. A.; Sheik, C. S.; Gomez-Ibanez, D.; Sayre-McCord, R. T.; Sanger, R.; Rauch, C.; Coleman, M.; Bennett, S. A.; Cron, B. R.; Li, M.; German, C. R.; Toner, B. M.; Dick, G. J.
2014-12-01
A new tool was developed for large volume sampling to facilitate marine microbiology and biogeochemical studies. It was developed for remotely operated vehicle and hydrocast deployments, and allows for rapid collection of multiple sample types from the water column and dynamic, variable environments such as rising hydrothermal plumes. It was used successfully during a cruise to the hydrothermal vent systems of the Mid-Cayman Rise. The Suspended Particulate Rosette V2 large volume multi-sampling system allows for the collection of 14 sample sets per deployment. Each sample set can include filtered material, whole (unfiltered) water, and filtrate. Suspended particulate can be collected on filters up to 142 mm in diameter and pore sizes down to 0.2 μm. Filtration is typically at flow rates of 2 L min⁻¹. For particulate material, filtered volume is constrained only by sampling time and filter capacity, with all sample volumes recorded by digital flowmeter. The suspended particulate filter holders can be filled with preservative and sealed immediately after sample collection. Up to 2 L of whole water, filtrate, or a combination of the two, can be collected as part of each sample set. The system is constructed of plastics with titanium fasteners and nickel alloy spring-loaded seals. There are no ferrous alloys in the sampling system. Individual sample lines are prefilled with filtered, deionized water prior to deployment and remain sealed unless a sample is actively being collected. This system is intended to facilitate studies concerning the relationship between marine microbiology and ocean biogeochemistry.
Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos
2015-02-18
Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction and a single distribution of flux values for all the reactions present which achieve this maximum value. However, it is well known that uncertainty in reaction networks due to branches, cycles, and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the applied linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previously generated solutions. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. For the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain a higher coverage than competing sampling methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in two dimensions, with and without the linear bias, indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function.
This new methodology can achieve a high coverage of the possible flux space and can be used with and without linear bias to show optimal versus sub-optimal solution spaces. Basic analysis of the Actinobacillus succinogenes system using sampling shows that in order to achieve the maximal succinic acid production CO₂ must be taken into the system. Solutions involving release of CO₂ all give sub-optimal succinic acid production.
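The poling idea described in this record (repeated flux balance analysis with a linear penalty pushing each new solution away from those already found) can be sketched on a toy branched network with scipy. This is a reconstruction of the general scheme under assumed network, bounds, and penalty weight, not the authors' formulation or code.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network with a branch: ->R1-> A, A -R2-> B, A -R3-> B, B -R4-> out.
# Columns = reactions R1..R4; rows = steady-state balances for A and B.
S = np.array([[1, -1, -1, 0],
              [0, 1, 1, -1]])
bounds = [(0, 10)] * 4
c = np.array([0, 0, 0, 1.0])          # objective: maximize flux through R4

# Step 1: plain FBA (linprog minimizes, so negate c) to find the optimum.
res = linprog(-c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs-ds")
f_opt = -res.fun

# Step 2: repeated poling. Pin c.v at the optimum via an extra equality row,
# then minimize alignment with the normalized sum of previous solutions.
samples = [res.x]
lam = 1.0                              # assumed penalty weight
A_opt = np.vstack([S, c])
b_opt = np.append(np.zeros(2), f_opt)
for _ in range(4):
    pole = lam * np.sum([v / np.linalg.norm(v) for v in samples], axis=0)
    res = linprog(pole, A_eq=A_opt, b_eq=b_opt, bounds=bounds,
                  method="highs-ds")
    samples.append(res.x)

# Every sample achieves the same optimal flux but differs in the R2/R3 split,
# exposing the degenerate optimal subspace that a single FBA solve hides.
print([round(s @ c, 2) for s in samples])
```

Here the optimum is pinned exactly, so the characteristic set covers only optimal solutions; dropping that extra equality row (or relaxing it to an inequality) would let the sampler explore sub-optimal flux space as well, matching the with/without linear bias comparison in the abstract.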
Bagger, Frederik Otzen; Sasivarevic, Damir; Sohi, Sina Hadi; Laursen, Linea Gøricke; Pundhir, Sachin; Sønderby, Casper Kaae; Winther, Ole; Rapin, Nicolas; Porse, Bo T.
2016-01-01
Research on human and murine haematopoiesis has resulted in a vast number of gene-expression data sets that can potentially answer questions regarding normal and aberrant blood formation. To researchers and clinicians with limited bioinformatics experience, these data have remained available, yet largely inaccessible. Current databases provide information about gene expression but fail to answer key questions regarding co-regulation, genetic programs or effect on patient survival. To address these shortcomings, we present BloodSpot (www.bloodspot.eu), which includes and greatly extends our previously released database HemaExplorer, a database of gene expression profiles from FACS-sorted healthy and malignant haematopoietic cells. A revised interactive interface simultaneously provides a plot of gene expression along with a Kaplan–Meier analysis and a hierarchical tree depicting the relationship between different cell types in the database. The database now includes 23 high-quality curated data sets relevant to normal and malignant blood formation and, in addition, we have assembled and built a unique integrated data set, BloodPool. BloodPool contains more than 2000 samples assembled from six independent studies on acute myeloid leukemia. Furthermore, we have devised a robust sample integration procedure that allows for sensitive comparison of user-supplied patient samples in a well-defined haematopoietic cellular space. PMID:26507857
Kremastinou, J.; Polymerou, V.; Lavranos, D.; Aranda Arrufat, A.; Harwood, J.; Martínez Lorenzo, M. J.; Ng, K. P.; Queiros, L.; Vereb, I.
2016-01-01
Treponema pallidum infections can have severe complications if not diagnosed and treated at an early stage. Screening and diagnosis of syphilis require assays with high specificity and sensitivity. The Elecsys Syphilis assay is an automated treponemal immunoassay for the detection of antibodies against T. pallidum. The performance of this assay was investigated previously in a multicenter study. The current study expands on that evaluation in a variety of diagnostic settings and patient populations, at seven independent laboratories. The samples included routine diagnostic samples, blood donation samples, samples from patients with confirmed HIV infections, samples from living organ or bone marrow donors, and banked samples, including samples previously confirmed as syphilis positive. This study also investigated the seroconversion sensitivity of the assay. With a total of 1,965 syphilis-negative routine diagnostic samples and 5,792 syphilis-negative samples collected from blood donations, the Elecsys Syphilis assay had specificity values of 99.85% and 99.86%, respectively. With 333 samples previously identified as syphilis positive, the sensitivity was 100% regardless of disease stage. The assay also showed 100% sensitivity and specificity with samples from 69 patients coinfected with HIV. The Elecsys Syphilis assay detected infection in the same bleed or earlier, compared with comparator assays, in a set of sequential samples from a patient with primary syphilis. In archived serial blood samples collected from 14 patients with direct diagnoses of primary syphilis, the Elecsys Syphilis assay detected T. pallidum antibodies for 3 patients for whom antibodies were not detected with the Architect Syphilis TP assay, indicating a trend for earlier detection of infection, which may have the potential to shorten the time between infection and reactive screening test results. PMID:27358468
Adam, B A; Smith, R N; Rosales, I A; Matsunami, M; Afzali, B; Oura, T; Cosimi, A B; Kawai, T; Colvin, R B; Mengel, M
2017-11-01
Molecular testing represents a promising adjunct for the diagnosis of antibody-mediated rejection (AMR). Here, we apply a novel gene expression platform in sequential formalin-fixed paraffin-embedded samples from nonhuman primate (NHP) renal transplants. We analyzed 34 previously described gene transcripts related to AMR in humans in 197 archival NHP samples, including 102 from recipients that developed chronic AMR, 80 from recipients without AMR, and 15 normal native nephrectomies. Three endothelial genes (VWF, DARC, and CAV1), derived from 10-fold cross-validation receiver operating characteristic curve analysis, demonstrated excellent discrimination between AMR and non-AMR samples (area under the curve = 0.92). This three-gene set correlated with classic features of AMR, including glomerulitis, capillaritis, glomerulopathy, C4d deposition, and DSAs (r = 0.39-0.63, p < 0.001). Principal component analysis confirmed the association between three-gene set expression and AMR and highlighted the ambiguity of v lesions and ptc lesions between AMR and T cell-mediated rejection (TCMR). Elevated three-gene set expression corresponded with the development of immunopathological evidence of rejection and often preceded it. Many recipients demonstrated mixed AMR and TCMR, suggesting that this represents the natural pattern of rejection. These data provide NHP animal model validation of recent updates to the Banff classification including the assessment of molecular markers for diagnosing AMR. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
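The study's discrimination metric, area under the ROC curve, can be computed directly from the Mann-Whitney statistic. The sketch below uses simulated expression scores; the shift and sample sizes are illustrative stand-ins, not the study's data.

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive scores higher than a random negative."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Compare every positive against every negative; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical combined three-gene expression score, with AMR samples
# shifted upward relative to non-AMR samples.
rng = np.random.default_rng(0)
amr = rng.normal(2.0, 1.0, 102)
non_amr = rng.normal(0.0, 1.0, 95)
scores = np.concatenate([amr, non_amr])
labels = np.concatenate([np.ones(102, bool), np.zeros(95, bool)])
auc = roc_auc(scores, labels)
```

A two-standard-deviation separation between groups yields an AUC near 0.92, the same order as the value reported for the three-gene set.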
Núñez, Carolina; Baeta, Miriam; Ibarbia, Nerea; Ortueta, Urko; Jiménez-Moreno, Susana; Blazquez-Caeiro, José Luis; Builes, Juan José; Herrera, Rene J; Martínez-Jarreta, Begoña; de Pancorbo, Marian M
2017-04-01
A Y-STR multiplex system has been developed with the purpose of complementing the widely used 17 Y-STR haplotyping (AmpFlSTR Y Filer® PCR Amplification kit) routinely employed in forensic and population genetic studies. This new multiplex system includes six additional STR loci (DYS576, DYS481, DYS549, DYS533, DYS570, and DYS643) to reach the 23 Y-STRs of the PowerPlex® Y23 System. In addition, this kit includes the DYS456 and DYS385 loci for traceability purposes. Male samples from 625 individuals from ten worldwide populations were genotyped, including three sample sets from populations previously published with the 17 Y-STR system to expand their current data. Validation studies demonstrated good performance of the panel set in terms of concordance, sensitivity, and stability in the presence of inhibitors and artificially degraded DNA. The results obtained for haplotype diversity and discrimination capacity with this multiplex system were considerably high, providing further evidence of the suitability of this novel Y-STR system for forensic purposes. Thus, the use of this multiplex for samples previously genotyped with 17 Y-STRs will be an efficient and low-cost alternative to complete the set of 23 Y-STRs and improve allele databases for population and forensic purposes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Understanding the role of conscientiousness in healthy aging: where does the brain come in?
Patrick, Christopher J
2014-05-01
In reviewing this impressive series of articles, I was struck by 2 points in particular: (a) the fact that the empirically oriented articles focused on analyses of data from very large samples, with the articles by Friedman, Kern, Hampson, and Duckworth (2014) and Kern, Hampson, Goldberg, and Friedman (2014) highlighting an approach to merging existing data sets through use of "metric bridges" to address key questions not addressable through 1 data set alone, and (b) the fact that the articles as a whole included limited mention of neuroscientific (i.e., brain research) concepts, methods, and findings. One likely reason for the lack of reference to brain-oriented work is the persisting gap between smaller sample size lab-experimental and larger sample size multivariate-correlational approaches to psychological research. As a strategy for addressing this gap and bringing a distinct neuroscientific component to the National Institute on Aging's conscientiousness and health initiative, I suggest that the metric bridging approach highlighted by Friedman and colleagues could be used to connect existing large-scale data sets containing both neurophysiological variables and measures of individual difference constructs to other data sets containing richer arrays of nonphysiological variables, including data from longitudinal or twin studies focusing on personality and health-related outcomes (e.g., Terman Life Cycle study and Hawaii longitudinal studies, as described in the article by Kern et al., 2014). (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Nonlinear Dynamics of Cantilever-Sample Interactions in Atomic Force Microscopy
NASA Technical Reports Server (NTRS)
Cantrell, John H.; Cantrell, Sean A.
2010-01-01
The interaction of the cantilever tip of an atomic force microscope (AFM) with the sample surface is obtained by treating the cantilever and sample as independent systems coupled by a nonlinear force acting between the cantilever tip and a volume element of the sample surface. The volume element is subjected to a restoring force from the remainder of the sample that provides dynamical equilibrium for the combined systems. The model accounts for the positions on the cantilever of the cantilever tip, laser probe, and excitation force (if any) via a basis set of orthogonal functions that may be generalized to account for arbitrary cantilever shapes. The basis set is extended to include nonlinear cantilever modes. The model leads to a pair of coupled nonlinear differential equations that are solved analytically using a matrix iteration procedure. The effects of oscillatory excitation forces applied either to the cantilever or to the sample surface (or to both) are obtained from the solution set and applied to the assessment of phase and amplitude signals generated by various acoustic-atomic force microscope (A-AFM) modalities. The influence of bistable cantilever modes on AFM signal generation is discussed. The effects on the cantilever-sample surface dynamics of subsurface features embedded in the sample that are perturbed by surface-generated oscillatory excitation forces and carried to the cantilever via wave propagation are accounted for by the Bolef-Miller propagating wave model. Expressions pertaining to signal generation and image contrast in A-AFM are obtained and applied to amplitude modulation (intermittent contact) atomic force microscopy and resonant difference-frequency atomic force ultrasonic microscopy (RDF-AFUM). The influence of phase accumulation in A-AFM on image contrast is discussed, as is the effect of hard contact and maximum nonlinearity regimes of A-AFM operation.
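The coupled-oscillator picture can be illustrated numerically. Below is a nondimensional sketch, not the paper's analytic matrix-iteration solution: the cantilever tip and the sample volume element are modeled as two damped oscillators joined by a linear-plus-quadratic tip-sample force, with an oscillatory drive on the cantilever; all coefficients are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    """Cantilever coordinate xc and sample volume-element coordinate xs,
    coupled by a nonlinear tip-sample force (linear + quadratic term)."""
    xc, vc, xs, vs = y
    f = -1.0 * (xc - xs) - 0.01 * (xc - xs) ** 2   # nonlinear coupling force
    drive = np.cos(0.9 * t)                         # oscillatory excitation
    return [vc, -xc - 0.05 * vc + f + drive,        # driven, damped cantilever
            vs, -4.0 * xs - 0.05 * vs - f]          # stiffer sample element

# Integrate long enough for transients to decay into steady oscillation.
sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0, 0.0, 0.0], max_step=0.05)
```

Because the coupling force is quadratic in the tip-sample separation, the steady-state response contains harmonic and difference-frequency content absent from the linear problem, which is the basic mechanism exploited by the A-AFM modalities discussed above.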
Order of stimulus presentation influences children's acquisition in receptive identification tasks.
Petursdottir, Anna Ingeborg; Aguilar, Gabriella
2016-03-01
Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition in receptive identification tasks under 2 conditions: when the sample was presented before the comparisons (sample first) and when the comparisons were presented before the sample (comparison first). Participants included 4 typically developing kindergarten-age boys. Stimuli, which included birds and flags, were presented on a computer screen. Acquisition in the 2 conditions was compared in an adapted alternating-treatments design combined with a multiple baseline design across stimulus sets. All participants took fewer trials to meet the mastery criterion in the sample-first condition than in the comparison-first condition. © 2015 Society for the Experimental Analysis of Behavior.
Dovgerd, A P; Zharkov, D O
2014-01-01
PCR amplification of severely degraded DNA, including ancient DNA, forensic samples, and preparations from deeply processed foodstuffs, is a serious problem. Living organisms have a set of enzymes to repair lesions in their DNA. In this work, we have developed and characterized model systems of degraded high-molecular-weight DNA with a predominance of different types of damage. It was shown that depurination and oxidation of the model plasmid DNA template led to a decrease in the PCR efficiency. A set of enzymes performing a full cycle of excision repair of some lesions was determined. The treatment of model-damaged substrates with this set of enzymes resulted in an increased PCR product yield as compared with that of the unrepaired samples.
Baranes, Adrien F; Oudeyer, Pierre-Yves; Gottlieb, Jacqueline
2014-01-01
Devising efficient strategies for exploration in large open-ended spaces is one of the most difficult computational problems of intelligent organisms. Because the available rewards are ambiguous or unknown during the exploratory phase, subjects must act in an intrinsically motivated fashion. However, the vast majority of behavioral and neural studies to date have focused on decision making in reward-based tasks, and the rules guiding intrinsically motivated exploration remain largely unknown. To examine this question we developed a paradigm for systematically testing the choices of human observers in a free-play context. Adult subjects played a series of short computer games of variable difficulty, and freely chose which game they wished to sample without external guidance or physical rewards. Subjects performed the task in three distinct conditions in which they sampled from a small or a large choice set (7 vs. 64 possible levels of difficulty), and in which they did or did not have the possibility of sampling new games at a constant level of difficulty. We show that despite the absence of external constraints, the subjects spontaneously adopted a structured exploration strategy whereby they (1) started with easier games and progressed to more difficult games, (2) sampled the entire choice set including extremely difficult games that could not be learnt, (3) repeated moderate- and high-difficulty games much more frequently than was predicted by chance, and (4) had higher repetition rates and chose higher speeds if they could generate new sequences at a constant level of difficulty. The results suggest that intrinsically motivated exploration is shaped by several factors including task difficulty, novelty and the size of the choice set, and these come into play to serve two internal goals: to maximize the subjects' knowledge of the available tasks (exploring the limits of the task set), and to maximize their competence (performance and skills) across the task set.
Isoflurane and Ketamine Anesthesia have Different Effects on Ventilatory Pattern Variability in Rats
Chung, Augustine; Fishman, Mikkel; Dasenbrook, Elliot C.; Loparo, Kenneth A.; Dick, Thomas E.; Jacono, Frank J.
2013-01-01
We hypothesize that isoflurane and ketamine impact ventilatory pattern variability (VPV) differently. Adult Sprague-Dawley rats were recorded in a whole-body plethysmograph before, during and after deep anesthesia. VPV was quantified from 60-s epochs using a complementary set of analytic techniques that included constructing surrogate data sets that preserved the linear structure but disrupted nonlinear deterministic properties of the original data. Even though isoflurane decreased and ketamine increased respiratory rate, VPV as quantified by the coefficient of variation decreased for both anesthetics. Further, mutual information increased, sample entropy decreased, and the nonlinear complexity index (NLCI) increased during anesthesia despite qualitative differences in the shape and period of the waveform. Surprisingly, mutual information and sample entropy did not change in the surrogate sets constructed from isoflurane data, but in those constructed from ketamine data, mutual information increased and sample entropy decreased significantly in the surrogate segments constructed from anesthetized relative to unanesthetized epochs. These data suggest that separate mechanisms modulate linear and nonlinear variability of breathing. PMID:23246800
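Sample entropy, one of the nonlinear measures used above, can be sketched in a few lines. This is a generic textbook implementation, assuming template length m = 2 and a tolerance of 0.2 standard deviations; the study's exact parameters are not stated here.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of series x: -log(A/B), where B counts template
    matches of length m and A matches of length m + 1, within a tolerance
    of r times the series standard deviation (a common normalization)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count(m):
        templ = np.array([x[i:i + m] for i in range(len(x) - m)])
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return (d <= tol).sum() - len(templ)   # exclude self-matches
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))   # periodic signal
noisy = rng.standard_normal(500)                     # white noise
```

A regular, predictable waveform yields low sample entropy while white noise yields high values, which is why decreases under anesthesia indicate a more deterministic breathing pattern.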
The redshift distribution of cosmological samples: a forward modeling approach
NASA Astrophysics Data System (ADS)
Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina
2017-08-01
Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear-like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
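The ABC step can be illustrated with a toy forward model. In this sketch the "survey" is a Gamma-distributed redshift sample, the summary statistic is the sample mean, and rejection ABC recovers the shape parameter; all numbers are illustrative and unrelated to the actual UFig/MCCL pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Observed" redshift sample from a hypothetical survey whose true n(z)
# is Gamma-distributed with shape 2.0 (the parameter to be recovered).
z_obs = rng.gamma(shape=2.0, scale=0.4, size=2000)
obs_stat = z_obs.mean()

def simulate(shape):
    """Forward model: a synthetic redshift sample for a candidate shape."""
    return rng.gamma(shape=shape, scale=0.4, size=2000)

# ABC rejection: keep parameters whose simulated summary statistic
# lies within a tolerance of the observed one.
prior = rng.uniform(0.5, 5.0, size=5000)
accepted = [s for s in prior if abs(simulate(s).mean() - obs_stat) < 0.05]
posterior_mean = np.mean(accepted)
```

The accepted parameters approximate the posterior; in the full method, each accepted model's simulated sample directly yields one candidate n(z) distribution.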
Open science resources for the discovery and analysis of Tara Oceans data
Pesant, Stéphane; Not, Fabrice; Picheral, Marc; Kandels-Lewis, Stefanie; Le Bescot, Noan; Gorsky, Gabriel; Iudicone, Daniele; Karsenti, Eric; Speich, Sabrina; Troublé, Romain; Dimier, Céline; Searson, Sarah; Acinas, Silvia G.; Bork, Peer; Boss, Emmanuel; Bowler, Chris; Vargas, Colomban De; Follows, Michael; Gorsky, Gabriel; Grimsley, Nigel; Hingamp, Pascal; Iudicone, Daniele; Jaillon, Olivier; Kandels-Lewis, Stefanie; Karp-Boss, Lee; Karsenti, Eric; Krzic, Uros; Not, Fabrice; Ogata, Hiroyuki; Pesant, Stéphane; Raes, Jeroen; Reynaud, Emmanuel G.; Sardet, Christian; Sieracki, Mike; Speich, Sabrina; Stemmann, Lars; Sullivan, Matthew B.; Sunagawa, Shinichi; Velayoudon, Didier; Weissenbach, Jean; Wincker, Patrick
2015-01-01
The Tara Oceans expedition (2009–2013) sampled contrasting ecosystems of the world oceans, collecting environmental data and plankton, from viruses to metazoans, for later analysis using modern sequencing and state-of-the-art imaging technologies. It surveyed 210 ecosystems in 20 biogeographic provinces, collecting over 35,000 samples of seawater and plankton. The interpretation of such an extensive collection of samples in their ecological context requires means to explore, assess and access raw and validated data sets. To address this challenge, the Tara Oceans Consortium offers open science resources, including the use of open access archives for nucleotides (ENA) and for environmental, biogeochemical, taxonomic and morphological data (PANGAEA), and the development of online discovery tools and collaborative annotation tools for sequences and images. Here, we present an overview of Tara Oceans Data, and we provide detailed registries (data sets) of all campaigns (from port-to-port), stations and sampling events. PMID:26029378
Applications of MIDAS regression in analysing trends in water quality
NASA Astrophysics Data System (ADS)
Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.
2014-04-01
We discuss novel statistical methods for analysing trends in water quality. Such analysis uses complex data sets of different classes of variables, including water-quality, hydrological and meteorological variables. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency in the data collection. Typically, water-quality variables are sampled fortnightly, whereas rainfall data are sampled daily. The advantage of using MIDAS regression is in the flexible and parsimonious modelling of the influence of rainfall and flow on trends in water-quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed-frequency nature of the data.
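A minimal MIDAS sketch with exponential Almon lag weights, assuming hypothetical fortnightly water-quality readings driven by 14 daily rainfall lags (the weights, coefficients, and data are illustrative, not the Shoalhaven data):

```python
import numpy as np

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag polynomial: a parsimonious way to weight
    many high-frequency lags with only two parameters."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

rng = np.random.default_rng(1)
n_obs, n_lags = 100, 14
rain = rng.exponential(scale=2.0, size=(n_obs, n_lags))  # daily rainfall lags
w = exp_almon_weights(n_lags, theta1=0.1, theta2=-0.05)
x = rain @ w                       # weighted high-frequency regressor
y = 1.0 + 0.8 * x + rng.normal(scale=0.1, size=n_obs)    # fortnightly response

# With the weights held fixed, the MIDAS regression reduces to OLS in x;
# in practice theta1 and theta2 are estimated jointly with the slope.
beta, intercept = np.polyfit(x, y, 1)
```

The parsimony comes from the Almon polynomial: 14 daily lags enter the regression through only two weight parameters plus a slope, instead of 14 free coefficients.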
Polsky, Yarom
2014-05-23
This entry contains raw data files from experiments performed on the Vulcan beamline at the Spallation Neutron Source at Oak Ridge National Laboratory using a pressure cell. Cylindrical granite and marble samples were subjected to confining pressures of either 0 psi or approximately 2500 psi and internal pressures of either 0 psi, 1500 psi or 2500 psi through a blind axial hole at the center of one end of the sample. The sample diameters were 1.5" and the sample lengths were 6". The blind hole was 0.25" in diameter and 3" deep. One set of experiments measured strains at points located circumferentially around the center of the sample with identical radii to determine if there was strain variability (this would not be expected for a homogeneous material based on the symmetry of loading). Another set of experiments measured load variation across the radius of the sample at a fixed axial and circumferential location. Raw neutron diffraction intensity files and experimental parameter descriptions are included.
Damschen, William C.; Hansel, John A.; Nustad, Rochelle A.
2008-01-01
From January through October 2006, six sets of water-quality samples were collected at 28 sites, which included inflow and outflow from seven major municipal water-treatment plants (14 sites) and influent and effluent samples from seven major municipal wastewater-treatment plants (14 sites) along the Red River of the North in North Dakota and Minnesota. Samples were collected in cooperation with the Bureau of Reclamation for use in the development of return-flow boundary conditions in a 2006 water-quality model for the Red River of the North. All samples were analyzed for nutrients and major ions. For one set of effluent samples from each of the wastewater-treatment plants, water was analyzed for Escherichia coli, fecal coliform, 20-day biochemical oxygen demand, 20-day nitrogenous biochemical oxygen demand, total organic carbon, and dissolved organic carbon. In general, results from the field equipment blank and replicate samples indicate that the overall process of sample collection, processing, and analysis did not introduce substantial contamination and that consistent results were obtained.
In 1991, EPA published the Lead and Copper Rule (LCR), which set regulations to minimize the amount of lead and copper in drinking water. The LCR set the copper action level at 1.3 mg/L in more than 10% of customers' first-draw taps sampled. Potential health effects of copper include vo...
ERIC Educational Resources Information Center
Satre, Derek D.; McCance-Katz, Elinore F.; Moreno-John, Gina; Julian, Katherine A.; O'Sullivan, Patricia S.; Satterfield, Jason M.
2012-01-01
This article describes the use of a brief needs assessment survey in the development of alcohol and drug screening, brief intervention, and referral to treatment (SBIRT) curricula in 2 health care settings in the San Francisco Bay Area. The samples included university medical center faculty (n = 27) and nonphysician community health and social…
Field emission chemical sensor
Panitz, J.A.
1983-11-22
A field emission chemical sensor for specific detection of a chemical entity in a sample includes a closed chamber enclosing two field emission electrode sets, each field emission electrode set comprising (a) an electron emitter electrode from which field emission electrons can be emitted when an effective voltage is connected to the electrode set; and (b) a collector electrode which will capture said electrons emitted from said emitter electrode. One of the electrode sets is passive to the chemical entity and the other is active thereto and has an active emitter electrode which will bind the chemical entity when contacted therewith.
Liver Full Reference Set Application: David Lubman - Univ of Michigan (2011) — EDRN Public Portal
In this work we will perform the next step in biomarker development and validation. This step will be the Phase 2 validation of glycoproteins that have passed Phase 1 blinded validation, using ELISA kits based on target glycoproteins selected from our previous work. This will be done in a large Phase 2 sample set obtained in a multicenter study funded by the EDRN. The assays will be performed in our research lab located in the Center for Cancer Proteomics in the University of Michigan Medical Center. This study will include patients in whom serum was stored for future validation and includes samples from early HCC (n = 158), advanced cases (n = 214) and cirrhotic controls (n = 417). These samples will be supplied by the EDRN (per Dr. Jo Ann Rinaudo) and will be analyzed in a blinded fashion by Dr. Feng from the Fred Hutchinson Cancer Center. This Phase 2 study was designed to have above 90% power at a one-sided 5% type-I error for comparing the joint sensitivity and specificity for differentiating early-stage HCC from cirrhotic patients between AFP and a new marker. Sample sizes of 200 for early-stage HCC and 400 for cirrhotics were required to achieve the stated power (14). We will select our candidates for this larger Phase 2 validation set based on the results of previous work. These will include HGF and CD14, and the results of these assays will be used to evaluate the performance of each of these markers and combinations of HGF and CD14 and AFP and HGF. It is expected that each assay will be repeated three times for each marker and will also be performed for AFP as the standard for comparison. 250 µL of each sample is requested for analysis.
A National Evaluation of Community-Based Youth Cessation Programs: Design and Implementation
ERIC Educational Resources Information Center
Curry, Susan J.; Mermelstein, Robin J.; Sporer, Amy K.; Emery, Sherry L.; Berbaum, Michael L.; Campbell, Richard T.; Carusi, Charles; Flay, Brian; Taylor, Kristie; Warnecke, Richard B.
2010-01-01
Although widely available, little is known about the effectiveness of youth cessation treatments delivered in real-world settings. The authors recruited a nonprobability sample of 41 community-based group-format programs that treated at least 15 youth per year and included evidence-based treatment components. Data collection included longitudinal…
User's Guide to the Stand Prognosis Model
William R. Wykoff; Nicholas L. Crookston; Albert R. Stage
1982-01-01
The Stand Prognosis Model is a computer program that projects the development of forest stands in the Northern Rocky Mountains. Thinning options allow for simulation of a variety of management strategies. Input consists of a stand inventory, including sample tree records, and a set of option selection instructions. Output includes data normally found in stand, stock,...
This data set contains the method performance results. This includes field blanks, method blanks, duplicate samples, analytical duplicates, matrix spikes, and surrogate recovery standards.
The Children’s Total Exposure to Persistent Pesticides and Other Persistent Pollutant (...
NASA Astrophysics Data System (ADS)
Nemoto, Mitsutaka; Nomura, Yukihiro; Hanaoka, Shohei; Masutani, Yoshitaka; Yoshikawa, Takeharu; Hayashi, Naoto; Yoshioka, Naoki; Ohtomo, Kuni
Anatomical point landmarks, as a most primitive form of anatomical knowledge, are useful for medical image understanding. In this study, we propose a detection method for anatomical point landmarks based on appearance models that capture gray-level statistical variations at each landmark and in its surrounding area. The models are built from Principal Component Analysis (PCA) of sample data sets. In addition, we employed a generative learning method that transforms the ROIs of the sample data. We evaluated our method on 24 body trunk CT data sets and obtained an average sensitivity of 95.8 ± 7.3% across 28 landmarks.
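The appearance models above are built by PCA of gray-level samples. A minimal pure-Python sketch of extracting the leading principal component with power iteration (illustrative only; a real implementation would operate on high-dimensional ROI vectors, and these 2-D points are hypothetical):

```python
def principal_component(data, iters=200):
    """Mean vector and leading eigenvector of the sample covariance matrix,
    found by power iteration on the covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(c[a] * c[b] for c in centered) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return means, v

# Hypothetical 2-D gray-level samples lying along the (1, 1) direction.
means, v = principal_component([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)])
```

New sample patches are then scored by how well they reconstruct from the top few components.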
Sherrit, Stewart; Masys, Tony J; Wiederick, Harvey D; Mukherjee, Binu K
2011-09-01
We present a procedure for determining the reduced piezoelectric, dielectric, and elastic coefficients for a C(∞) material, including losses, from a single disk sample. Measurements have been made on a Navy III lead zirconate titanate (PZT) ceramic sample and the reduced matrix of coefficients for this material is presented. In addition, we present the transform equations, in reduced matrix form, to other consistent material constant sets. We discuss the propagation of errors in going from one material data set to another and look at the limitations inherent in direct calculations of other useful coefficients from the data.
The clubhouse as an empowering setting.
Mowbray, Carol T; Lewandowski, Lisa; Holter, Mark; Bybee, Deborah
2006-08-01
Attention to psychosocial rehabilitation (PSR) practice has expanded in recent years. However, social work research studies on PSR are not numerous. This study focuses on operational characteristics of clubhouses, a major PSR program model, and the organizational attributes (including resource levels) that predict the extent to which the clubhouse constitutes an empowering setting. The authors present data from a statewide sample of 30 clubhouses, annually serving nearly 4,000 consumers (adults with serious mental illnesses), based on interviews of clubhouse directors, on-site observations, and government information sources. Results indicate that users were predominantly male, white, and middle-aged; about one-third had a major functional disability. There were wide variations in member characteristics as well as in resource levels. In terms of empowerment, this sample of clubhouses averaged rather low levels of member involvement in governance and operations but seemed to provide members with opportunities and assistance in making their own decisions. The empowerment variables had different predictors, including client characteristics, urban-related characteristics, staffing, and resource levels. Implications for social work practice in PSR settings are discussed.
Paul, Topon Kumar; Iba, Hitoshi
2009-01-01
To gain a better understanding of different types of cancer and to find possible biomarkers for disease, many researchers have recently been analyzing gene expression data using various machine learning techniques. However, because the number of training samples is very small compared to the huge number of genes, and because of class imbalance, most of these methods suffer from overfitting. In this paper, we present a majority voting genetic programming classifier (MVGPC) for the classification of microarray data. Instead of a single rule or a single set of rules, we evolve multiple rules with genetic programming (GP) and then apply those rules to test samples, determining their labels by majority voting. In experiments on four public cancer data sets, including multiclass data sets, we found that the test accuracies of MVGPC are better than those of other methods, including AdaBoost with GP. Moreover, some of the genes that occur more frequently in the classification rules are known to be associated with the types of cancer studied in this paper.
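The majority voting step at the core of MVGPC can be sketched as follows (pure Python; the GP rule evolution itself is omitted, and the class labels are hypothetical):

```python
from collections import Counter

def majority_vote(rule_predictions):
    """Assign the label predicted by the largest number of evolved rules."""
    return Counter(rule_predictions).most_common(1)[0][0]

# Hypothetical: three evolved rules each vote on the class of two test samples.
votes_per_sample = [
    ["cancer", "cancer", "normal"],
    ["normal", "normal", "normal"],
]
labels = [majority_vote(v) for v in votes_per_sample]
```

Combining many imperfect rules this way is what gives the ensemble its resistance to the overfitting that a single evolved rule suffers from.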
The Role of Presented Objects in Deriving Color Preference Criteria from Psychophysical Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Royer, Michael P.; Wei, Minchen
Of the many “components” of a color rendering measure, one is perhaps the most important: the set of color samples (spectral reflectance functions) that are employed as a standardized means of evaluating and rating a light source. At the same time, a standardized set of color samples can never apply perfectly to a real space or a real set of observed objects, meaning there will always be some level of mismatch between the predicted and observed color shifts. This mismatch is important for lighting specifiers to consider, but even more critical for experiments that seek to evaluate the relationship between color rendering measures and human perception. This article explores how the color distortions of three possible experimental object sets compare to the color distortions predicted using the color evaluation samples of IES TM-30-15 (TM-30). The experimental object sets include those from Royer and colleagues [2016], a set of produce (10 fruits and vegetables), and the X-Rite ColorChecker Classic. The differences are traced back to properties of the sample sets, such as the coverage of color space, average chroma level, and specific spectral features. The consequence of these differences, namely that the visual evaluation is based on color distortions substantially different from those predicted, can lead to inaccurate criteria or models of a given perception, such as preference. To minimize the error in using criteria or models when specifying color rendering attributes for a given application, the criteria or models should be developed using a set of experimental objects that matches the typical objects of the application as closely as possible. Alternatively, if typical objects of an application cannot be reasonably determined, an object set that matches the distortions predicted by TM-30 as closely as possible is likely to provide the most meaningful results.
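Mismatch between predicted and observed color shifts is typically quantified with a color-difference metric in a uniform color space. A minimal sketch of the simple CIE 1976 ΔE*ab formula (illustrative only; TM-30 itself works in CAM02-UCS coordinates rather than CIELAB, and the coordinates below are hypothetical):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference between two (L*, a*, b*) coordinates."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# A 3-4-5 chromatic shift at constant lightness:
shift = delta_e_ab((50.0, 0.0, 0.0), (50.0, 3.0, 4.0))  # 5.0
```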
Paris, Daniel H; Blacksell, Stuart D; Newton, Paul N; Day, Nicholas P J
2008-12-01
We present a loop-mediated isothermal PCR assay (LAMP) targeting the groEL gene, which encodes the 60kDa heat shock protein of Orientia tsutsugamushi. Evaluation included testing of 63 samples of contemporary in vitro isolates, buffy coats and whole blood samples from patients with fever. Detection limits for LAMP were assessed by serial dilutions and quantitation by real-time PCR assay based on the same target gene: three copies/microl for linearized plasmids, 26 copies/microl for VERO cell culture isolates, 14 copies/microl for full blood samples and 41 copies/microl for clinical buffy coats. Based on a limited sample number, the LAMP assay is comparable in sensitivity with conventional nested PCR (56kDa gene), with limits of detection well below the range of known admission bacterial loads of patients with scrub typhus. This inexpensive method requires no sophisticated equipment or sample preparation, and may prove useful as a diagnostic assay in financially poor settings; however, it requires further prospective validation in the field setting.
Mas, Sergi; Gassó, Patricia; Morer, Astrid; Calvo, Anna; Bargalló, Nuria; Lafuente, Amalia; Lázaro, Luisa
2016-01-01
We propose an integrative approach that combines structural magnetic resonance imaging data (MRI), diffusion tensor imaging data (DTI), neuropsychological data, and genetic data to predict early-onset obsessive compulsive disorder (OCD) severity. From a cohort of 87 patients, 56 with complete information were used in the present analysis. First, we performed a multivariate genetic association analysis of OCD severity with 266 genetic polymorphisms. This association analysis was used to select and prioritize the SNPs that would be included in the model. Second, we split the sample into a training set (N = 38) and a validation set (N = 18). Third, entropy-based measures of information gain were used for feature selection with the training subset. Fourth, the selected features were fed into two supervised methods of class prediction based on machine learning, using the leave-one-out procedure with the training set. Finally, the resulting model was validated with the validation set. Nine variables were used for the creation of the OCD severity predictor, including six genetic polymorphisms and three variables from the neuropsychological data. The developed model classified child and adolescent patients with OCD by disease severity with an accuracy of 0.90 in the testing set and 0.70 in the validation sample. Above its clinical applicability, the combination of particular neuropsychological, neuroimaging, and genetic characteristics could enhance our understanding of the neurobiological basis of the disorder. PMID:27093171
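The entropy-based feature selection mentioned above ranks candidate features by information gain: the drop in label entropy after partitioning the training samples on a feature. A minimal sketch for discrete features (pure Python; the feature values are hypothetical):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    """Reduction in label entropy after splitting samples on a discrete feature."""
    n = len(labels)
    groups = {}
    for y, v in zip(labels, feature_values):
        groups.setdefault(v, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# A feature that separates the classes perfectly gains the full 1 bit.
gain = information_gain([0, 0, 1, 1], ["a", "a", "b", "b"])  # 1.0
```

Features with the highest gain on the training subset are the ones fed into the downstream class-prediction models.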
Venkatesh, Arjun K; Mei, Hao; Kocher, Keith E; Granovsky, Michael; Obermeyer, Ziad; Spatz, Erica S; Rothenberg, Craig; Krumholz, Harlan M; Lin, Zhenqui
2017-04-01
Administrative claims data sets are often used for emergency care research and policy investigations of healthcare resource utilization, acute care practices, and evaluation of quality improvement interventions. Despite the high profile of emergency department (ED) visits in analyses using administrative claims, little work has evaluated the degree to which existing definitions based on claims data accurately capture conventionally defined hospital-based ED services. We sought to construct an operational definition for ED visitation using a comprehensive Medicare data set and to compare this definition to existing operational definitions used by researchers and policymakers. We examined four operational definitions of an ED visit commonly used by researchers and policymakers using a 20% sample of the 2012 Medicare Chronic Condition Warehouse (CCW) data set. The CCW data set included all Part A (hospital) and Part B (hospital outpatient, physician) claims for a nationally representative sample of continuously enrolled Medicare fee-for-service beneficiaries. Three definitions were based on published research or existing quality metrics: 1) a provider claims-based definition, 2) a facility claims-based definition, and 3) the CMS Research Data Assistance Center (ResDAC) definition. In addition, we developed a fourth operational definition (the Yale definition) that sought to incorporate additional coding rules for identifying ED visits. We report levels of agreement and disagreement among the four definitions. Of 10,717,786 beneficiaries included in the sample data set, 22% had evidence of ED use during the study year under any of the ED visit definitions. The definition using provider claims identified a total of 4,199,148 ED visits, the facility definition 4,795,057 visits, the ResDAC definition 5,278,980 ED visits, and the Yale definition 5,192,235 ED visits. 
The Yale definition identified a statistically different (p < 0.05) collection of ED visits than all other definitions, including 17% more ED visits than the provider definition and 2% fewer visits than the ResDAC definition. Differences in ED visit counts between the definitions occurred for several reasons, including the inclusion of critical care or observation services in the ED, discrepancies between facility and provider billing regulations, and the operational decisions of each definition. Current operational definitions of ED visitation using administrative claims produce different estimates of ED visitation based on the underlying assumptions applied to billing data and on data set availability. Future analyses using administrative claims data should seek to validate specific definitions and inform the development of a consistent, consensus ED visitation definition to standardize research reporting and the interpretation of policy interventions. © 2016 by the Society for Academic Emergency Medicine.
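Agreement between two claims-based definitions can be checked by comparing the sets of visit identifiers each one captures. A toy sketch (the visit IDs are hypothetical; a real comparison runs over millions of claim records):

```python
# Hypothetical visit IDs captured by two operational definitions.
provider_def = {"v01", "v02", "v03", "v05"}
resdac_def   = {"v01", "v02", "v04", "v05", "v06"}

both = provider_def & resdac_def       # visits both definitions capture
either = provider_def | resdac_def     # visits captured by at least one
overall_agreement = len(both) / len(either)
```

Visits falling in only one set (e.g., observation stays billed without a facility ED claim) are exactly the discrepancies the comparison surfaces.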
Venkatesh, Arjun K.; Mei, Hao; Kocher, Keith E.; Granovsky, Michael; Obermeyer, Ziad; Spatz, Erica S.; Rothenberg, Craig; Krumholz, Harlan M.; Lin, Zhenqui
2018-01-01
Objectives Administrative claims data sets are often used for emergency care research and policy investigations of healthcare resource utilization, acute care practices, and evaluation of quality improvement interventions. Despite the high profile of emergency department (ED) visits in analyses using administrative claims, little work has evaluated the degree to which existing definitions based on claims data accurately capture conventionally defined hospital-based ED services. We sought to construct an operational definition for ED visitation using a comprehensive Medicare data set and to compare this definition to existing operational definitions used by researchers and policymakers. Methods We examined four operational definitions of an ED visit commonly used by researchers and policymakers using a 20% sample of the 2012 Medicare Chronic Condition Warehouse (CCW) data set. The CCW data set included all Part A (hospital) and Part B (hospital outpatient, physician) claims for a nationally representative sample of continuously enrolled Medicare fee-for-service beneficiaries. Three definitions were based on published research or existing quality metrics: 1) a provider claims-based definition, 2) a facility claims-based definition, and 3) the CMS Research Data Assistance Center (ResDAC) definition. In addition, we developed a fourth operational definition (the Yale definition) that sought to incorporate additional coding rules for identifying ED visits. We report levels of agreement and disagreement among the four definitions. Results Of 10,717,786 beneficiaries included in the sample data set, 22% had evidence of ED use during the study year under any of the ED visit definitions. The definition using provider claims identified a total of 4,199,148 ED visits, the facility definition 4,795,057 visits, the ResDAC definition 5,278,980 ED visits, and the Yale definition 5,192,235 ED visits. 
The Yale definition identified a statistically different (p < 0.05) collection of ED visits than all other definitions, including 17% more ED visits than the provider definition and 2% fewer visits than the ResDAC definition. Differences in ED visit counts between the definitions occurred for several reasons, including the inclusion of critical care or observation services in the ED, discrepancies between facility and provider billing regulations, and the operational decisions of each definition. Conclusion Current operational definitions of ED visitation using administrative claims produce different estimates of ED visitation based on the underlying assumptions applied to billing data and on data set availability. Future analyses using administrative claims data should seek to validate specific definitions and inform the development of a consistent, consensus ED visitation definition to standardize research reporting and the interpretation of policy interventions. PMID:27864915
NASA Astrophysics Data System (ADS)
Verma, Surendra P.; Pandarinath, Kailasa; Verma, Sanjeet K.
2011-07-01
In the lead presentation (invited talk) of Session SE05 (Frontiers in Geochemistry with Reference to Lithospheric Evolution and Metallogeny) of AOGS2010, we have highlighted the requirement of correct statistical treatment of geochemical data. In most diagrams used for interpreting compositional data, the basic statistical assumption of open space for all variables is violated. Among these graphic tools, discrimination diagrams have been in use for nearly 40 years to decipher tectonic setting. The newer set of five tectonomagmatic discrimination diagrams published in 2006 (based on major-elements) and two sets made available in 2008 and 2011 (both based on immobile elements) fulfill all statistical requirements for correct handling of compositional data, including the multivariate nature of compositional variables, representative sampling, and probability-based tectonic field boundaries. Additionally in the most recent proposal of 2011, samples having normally distributed, discordant-outlier free, log-ratio variables were used in linear discriminant analysis. In these three sets of five diagrams each, discrimination was successfully documented for four tectonic settings (island arc, continental rift, ocean-island, and mid-ocean ridge). The discrimination diagrams have been extensively evaluated for their performance by different workers. We exemplify these two sets of new diagrams (one set based on major-elements and the other on immobile elements) using ophiolites from Boso Peninsula, Japan. This example is included for illustration purposes only and is not meant for testing of these newer diagrams. Their evaluation and comparison with older, conventional bivariate or ternary diagrams have been reported in other papers.
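The log-ratio treatment referred to above addresses the closure constraint of compositional data (concentrations sum to a constant, so raw variables cannot range freely). A minimal sketch of the centred log-ratio (clr) transform, one standard member of the log-ratio family (illustrative only; the published discrimination diagrams use log-ratios of specific major or immobile elements):

```python
import math

def clr(parts):
    """Centred log-ratio transform of a composition (all parts must be > 0)."""
    logs = [math.log(p) for p in parts]
    g = sum(logs) / len(logs)          # log of the geometric mean
    return [l - g for l in logs]

# A hypothetical three-part composition closed to 1.
coords = clr([0.55, 0.30, 0.15])      # clr coordinates always sum to zero
```

Transformed coordinates live in unconstrained real space, which is what makes linear discriminant analysis statistically admissible for such data.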
Group-sequential three-arm noninferiority clinical trial designs
Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko
2016-01-01
We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2014-11-01
Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, the organic carbon concentration is measured using thermal methods such as Thermal-Optical Reflectance (TOR) from quartz fiber filters. Here, methods are presented whereby Fourier Transform Infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE, or Teflon) filters are used to accurately predict TOR OC. Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filters. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites sampled during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to artifact-corrected TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date, which leads to precise and accurate OC predictions by FT-IR, as indicated by a high coefficient of determination (R² = 0.96), low bias (0.02 μg m⁻³; all μg m⁻³ values are based on the nominal IMPROVE sample volume of 32.8 m³), low error (0.08 μg m⁻³), and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM/OC, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; this division also leads to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact-correction errors, not due to the range of OC mass of the samples in the calibration set. 
However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using calibration samples whose OM/OC or ammonium/OC distributions differ from those of the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
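The performance metrics quoted above (bias, error, normalized error, R²) can all be computed from paired reference and predicted concentrations. A minimal sketch, with hypothetical values rather than the IMPROVE data:

```python
def calibration_metrics(observed, predicted):
    """Bias, mean absolute error, normalized error, and R^2 for paired values."""
    n = len(observed)
    bias = sum(p - o for o, p in zip(observed, predicted)) / n
    error = sum(abs(p - o) for o, p in zip(observed, predicted)) / n
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return bias, error, error / mean_obs, 1.0 - ss_res / ss_tot

# Hypothetical reference TOR OC vs. FT-IR predictions (same units).
bias, err, norm_err, r2 = calibration_metrics(
    [1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.1, 3.9])
```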
Miller, Warren B; Millstein, Susan G; Pasta, David J
2008-01-01
Relatively little is known about the motivational antecedents to the use of assisted reproductive technology (ART). In this paper we measure the fertility motivations of infertile couples who are considering the use of ART, using an established instrument, the Childbearing Questionnaire (CBQ). Our sample consists of 214 men and 216 women who were interviewed at home after an initial screening for ART but before making a final decision. We conducted two sets of analyses with the obtained data. In one set, we compared the scores on scales and subscales of the CBQ for the males and females in our sample with the scores for males and females from a comparable normative sample. For these analyses we first examined sample and gender differences with a four-group analysis of variance. We then conducted a series of linear models that included background characteristics as covariates and interactions between sample, gender, and age and between those three variables and the background characteristics. The results showed the expected higher positive and lower negative motivations in the ART sample and a significant effect on positive motivations of the interaction between sample and age. In the second set of analyses, we developed several new subscales relevant to facets of the desire for a child that appear to be important in ART decision-making. These facets include the desire to be genetically related to the child and the desire to experience pregnancy and childbirth. A third facet, the desire for parenthood, is already well covered by the existing subscales. The results showed the new subscales to have satisfactory reliability and validity. The results also showed that the original and new subscales predicted the three facets of the desire for a child in a multivariate context. We conclude with a general discussion of the way our findings relate both to ART decision-making and to further research on the motivations that drive it.
Navigating complex sample analysis using national survey data.
Saylor, Jennifer; Friedmann, Erika; Lee, Hyeon Joo
2012-01-01
The National Center for Health Statistics conducts the National Health and Nutrition Examination Survey and other national surveys with probability-based complex sample designs. The goal of national surveys is to provide valid data for the population of the United States. Analyses of data from population surveys present unique challenges in the research process but are valuable avenues for studying the health of the United States population. The aim of this study was to demonstrate the importance of using complex data analysis techniques for data obtained with a complex multistage sampling design and to provide an example of analysis using the SPSS Complex Samples procedure. Challenges and solutions specific to secondary analysis of national databases are illustrated using the National Health and Nutrition Examination Survey as the exemplar. Oversampling of small or sensitive groups provides the necessary estimates of variability within those groups. Use of weights without the complex samples procedure accurately estimates population means and frequencies after accounting for over- or undersampling of specific groups; however, weighting alone leads to inappropriate population estimates of variability, because they are computed as if the measures came from the entire population rather than from a sample. The SPSS Complex Samples procedure allows inclusion of all sampling design elements: stratification, clusters, and weights. Use of national data sets allows researchers to draw on extensive, expensive, and well-documented survey data for exploratory questions, but limits analysis to the variables included in the data set. The large sample permits examination of multiple predictors and interactive relationships. Merging data files, the availability of data in several survey waves, and complex sampling are techniques used to provide a representative sample, but they present unique challenges. With these sophisticated data analysis techniques, use of such data is optimized.
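The weighting point above can be illustrated with a toy example: when a small group is oversampled, the unweighted sample mean is pulled toward that group, while the design-weighted mean recovers the population value (all numbers hypothetical; variance estimation still requires the full complex-samples machinery):

```python
def weighted_mean(values, weights):
    """Design-weighted estimate of a population mean."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Population: two equal-sized groups with true means 10 and 20.
# The survey oversampled group A (3 respondents) relative to group B (1),
# so the group-B respondent carries three times the design weight.
values  = [10.0, 10.0, 10.0, 20.0]
weights = [100.0, 100.0, 100.0, 300.0]

naive = sum(values) / len(values)          # 12.5, biased toward group A
weighted = weighted_mean(values, weights)  # 15.0, the population mean
```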
This data set contains the method performance results for CTEPP-OH. This includes field blanks, method blanks, duplicate samples, analytical duplicates, matrix spikes, and surrogate recovery standards.
The Children’s Total Exposure to Persistent Pesticides and Other Persisten...
Kinsey, Willie B.; Johnson, Mark V.; Gronberg, JoAnn M.
2005-01-01
This report contains pesticide, volatile organic compound, major ion, nutrient, tritium, stable isotope, organic carbon, and trace-metal data collected from 149 ground-water wells, and pesticide data collected from 39 surface-water stream sites in the San Joaquin Valley of California. Included with the ground-water data are field measurements of pH, specific conductance, alkalinity, temperature, and dissolved oxygen. This report describes data collection procedures, analytical methods, quality assurance, and quality controls used by the National Water-Quality Assessment Program to ensure data reliability. Data contained in this report were collected during a four-year period by the San Joaquin–Tulare Basins Study Unit of the United States Geological Survey's National Water-Quality Assessment Program. Surface-water-quality data collection began in April 1992, with sampling done three times a week at three sites as part of a pilot study conducted to provide background information for the surface-water-study design. Monthly samples were collected at 10 sites for major ions and nutrients from January 1993 to March 1995. Additional samples were collected at four of these sites, from January to December 1993, to study spatial and temporal variability in dissolved pesticide concentrations. Samples for several synoptic studies were collected from 1993 to 1995. Ground-water-quality data collection was restricted to the eastern alluvial fans subarea of the San Joaquin Valley. Data collection began in 1993 with the sampling of 21 wells in vineyard land-use settings. In 1994, 29 wells were sampled in almond land-use settings and 9 in vineyard land-use settings; an additional 11 wells were sampled along a flow path in the eastern Fresno County vineyard land-use area. Among the 79 wells sampled in 1995, 30 wells were in the corn, alfalfa, and vegetable land-use setting, and 1 well was in the vineyard land-use setting; an additional 20 were flow-path wells. 
Also sampled in 1995 were 28 wells used for a regional assessment of ground-water quality in the eastern San Joaquin Valley.
Potential bias in TEOS10 density of sea water samples
NASA Astrophysics Data System (ADS)
Budéus, G. Th.
2018-04-01
Direct density measurements of ocean water samples are compared to TEOS10-derived densities. The water sample set includes waters from remote areas such as Antarctic waters and the central Arctic, but also waters from regions that closely resemble the reference composition of TEOS10. With few exceptions, the measured densities are smaller than those derived according to TEOS10. The result suggests a potential systematic overestimation of density by TEOS10. For the majority of waters, the deviation is about 10 g/m³.
Bayes classification of interferometric TOPSAR data
NASA Technical Reports Server (NTRS)
Michel, T. R.; Rodriguez, E.; Houshmand, B.; Carande, R.
1995-01-01
We report the Bayes classification of terrain types at different sites using airborne interferometric synthetic aperture radar (INSAR) data. A Gaussian maximum likelihood classifier was applied on multidimensional observations derived from the SAR intensity, the terrain elevation model, and the magnitude of the interferometric correlation. Training sets for forested, urban, agricultural, or bare areas were obtained either by selecting samples with known ground truth, or by k-means clustering of random sets of samples uniformly distributed across all sites, and subsequent assignments of these clusters using ground truth. The accuracy of the classifier was used to optimize the discriminating efficiency of the set of features that was chosen. The most important features include the SAR intensity, a canopy penetration depth model, and the terrain slope. We demonstrate the classifier's performance across sites using a unique set of training classes for the four main terrain categories. The scenes examined include San Francisco (CA) (predominantly urban and water), Mount Adams (WA) (forested with clear cuts), Pasadena (CA) (urban with mountains), and Antioch Hills (CA) (water, swamps, fields). Issues related to the effects of image calibration and the robustness of the classification to calibration errors are explored. The relative performance of single polarization Interferometric data classification is contrasted against classification schemes based on polarimetric SAR data.
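A Gaussian maximum likelihood classifier assigns each observation to the class whose fitted Gaussian gives it the highest likelihood. A one-dimensional pure-Python sketch (illustrative only; the classifier above is multidimensional, and the backscatter values here are hypothetical):

```python
import math

def fit_gaussian(samples):
    """Per-class mean and unbiased variance from training samples."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, var

def log_likelihood(x, mean, var):
    """Log density of x under a univariate Gaussian."""
    return -0.5 * (math.log(2.0 * math.pi * var) + (x - mean) ** 2 / var)

def classify(x, class_params):
    """Pick the class whose fitted Gaussian assigns x the highest likelihood."""
    return max(class_params, key=lambda c: log_likelihood(x, *class_params[c]))

# Hypothetical SAR intensity training samples (dB) for two terrain classes.
params = {
    "forest": fit_gaussian([-8.0, -7.0, -9.0, -8.0]),
    "water":  fit_gaussian([-20.0, -21.0, -19.0, -20.0]),
}
label = classify(-9.5, params)   # "forest"
```

With equal class priors this maximum likelihood rule coincides with the Bayes decision rule; the real feature vector would add interferometric correlation, elevation, and slope.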
Shilts, Mical Kay; Horowitz, Marcel; Townsend, Marilyn S
2004-01-01
Estimate effectiveness of goal setting for nutrition and physical activity behavior change, review the effect of goal-setting characteristics on behavior change, and investigate effectiveness of interventions containing goal setting. For this review, a literature search was conducted for the period January 1977 through December 2003 that included a Current Contents, Biosis Previews, Medline, PubMed, PsycINFO, and ERIC search of databases and a reference list search. Key words were goal, goal setting, nutrition, diet, dietary, physical activity, exercise, behavior change, interventions, and fitness. The search identified 144 studies, of which 28 met inclusion criteria for being published in a peer reviewed journal and using goal setting in an intervention to modify dietary or physical activity behaviors. Excluded from this review were those studies that (1) evaluated goal setting cross-sectionally without an intervention; (2) used goal setting for behavioral disorders, to improve academic achievement, or in sports performance; (3) were reviews. The articles were categorized by target audience and secondarily by research focus. Data extracted included outcome measure, research rating, purpose, sample, sample description, assignment, findings, and goal-setting support. Thirteen of the 23 adult studies used a goal-setting effectiveness study design and eight produced positive results supporting goal setting. No adolescent or child studies used this design. The results were inconclusive for the studies investigating goal-setting characteristics (n = 7). Four adult and four child intervention evaluation studies showed positive outcomes. No studies reported power calculations, and only 32% of the studies were rated as fully supporting goal setting. Goal setting has shown some promise in promoting dietary and physical activity behavior change among adults, but methodological issues still need to be resolved. 
The literature with adolescents and children is limited, and the authors are not aware of any published studies with this audience investigating the independent effect of goal setting on dietary or physical activity behavior. Although goal setting is widely used with children and adolescents in nutrition interventions, its effectiveness has yet to be reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pugmire, R.J.; Solum, M.S.
This study was designed to apply {sup 13}C-nuclear magnetic resonance (NMR) spectrometry to the analysis of direct coal liquefaction process-stream materials. {sup 13}C-NMR was shown to have a high potential for application to direct coal liquefaction-derived samples in Phase II of this program. In this Phase III project, {sup 13}C-NMR was applied to a set of samples derived from the HRI Inc. bench-scale liquefaction Run CC-15. The samples include the feed coal, net products, and intermediate streams from three operating periods of the run. High-resolution {sup 13}C-NMR data were obtained for the liquid samples, and solid-state CP/MAS {sup 13}C-NMR data were obtained for the coal and filter-cake samples. The {sup 13}C-NMR technique is used to derive a set of twelve carbon structural parameters for each sample (CONSOL Table A). Average molecular structural descriptors can then be derived from these parameters (CONSOL Table B).
Garcia, Ediberto; Newfang, Daniel; Coyle, Jayme P; Blake, Charles L; Spencer, John W; Burrelli, Leonard G; Johnson, Giffe T; Harbison, Raymond D
2018-07-01
Three independent asbestos exposure evaluations were conducted using wire gauze pads, consistent with standard practice in the laboratory setting. All testing occurred in a controlled atmosphere inside an enclosed chamber simulating a laboratory setting. Separate teams consisting of a laboratory technician, or a technician and an assistant, simulated common tasks involving wire gauze pads, including heating and direct wire gauze manipulation. Area and personal air samples were collected and evaluated for asbestos consistent with National Institute for Occupational Safety and Health (NIOSH) methods 7400 and 7402 and the Asbestos Hazard Emergency Response Act (AHERA) method. Bulk gauze pad samples were analyzed by polarized light microscopy and transmission electron microscopy to determine asbestos content. Among air samples, chrysotile asbestos was the only fiber found in the first and third experiments, and tremolite asbestos in the second experiment. None of the air samples contained asbestos in concentrations above the current permissible regulatory levels promulgated by OSHA. These findings indicate that the level of asbestos exposure when working with wire gauze pads in the laboratory setting is much lower than levels associated with asbestosis or asbestos-related lung cancer and mesothelioma. Copyright © 2018. Published by Elsevier Inc.
7 CFR 27.23 - Duplicate sets of samples of cotton.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...
Code of Federal Regulations, 2010 CFR
2010-07-01
... proposed location and purpose of the activities, including: (1) Gravity and magneto-metric measurements; (2...) Sediment sampling of a limited nature using either core or grab samplers, and the specified diameter and...) Hydrographic and oceanographic measurements, including the setting of instruments; and (7) Small diameter core...
ERIC Educational Resources Information Center
Lee, Julia Ai Cheng; Al Otaiba, Stephanie
2017-01-01
In this study, we examined the spelling performance of 430 kindergartners, which included a high-risk sample, to determine the relations between end-of-kindergarten reading and spelling in a high-quality language arts setting. We described, analyzed, and compared spelling outcomes, including spelling errors, between good and poor readers. The…
Effects of long term space environment exposure on optical substrates and coatings (S0050-2)
NASA Technical Reports Server (NTRS)
Harvey, Keith; Mustico, Arthur; Vallimont, John
1993-01-01
Eastman Kodak Company included twelve substrate and coating samples on the Long Duration Exposure Facility (LDEF) structure. There were three fused silica and three Ultra Low Expansion (ULE) uncoated glass samples, two ULE samples with a high-reflectance silver coating, two fused silica samples with an antireflectance coating, and two fused silica samples with a solar rejection coating. A set of duplicate control samples was also manufactured and stored in a controlled environment for comparison purposes. Kodak's samples were included as a subset of the Georgia Institute of Technology tray, which was located on row 5-E, tray S0050-2. This placed the samples on the trailing edge of the structure, which protected them from the effects of atomic oxygen bombardment. An evaluation of the flight samples for effects from the 5-year mission showed that a contaminant was deposited on the samples, a micrometeoroid impact occurred on one of the samples, and the radiation darkening that was expected for the glass did not occur. The results are reported in more detail.
Bagger, Frederik Otzen; Sasivarevic, Damir; Sohi, Sina Hadi; Laursen, Linea Gøricke; Pundhir, Sachin; Sønderby, Casper Kaae; Winther, Ole; Rapin, Nicolas; Porse, Bo T
2016-01-04
Research on human and murine haematopoiesis has resulted in a vast number of gene-expression data sets that can potentially answer questions regarding normal and aberrant blood formation. To researchers and clinicians with limited bioinformatics experience, these data have remained available, yet largely inaccessible. Current databases provide information about gene expression but fail to answer key questions regarding co-regulation, genetic programs or effect on patient survival. To address these shortcomings, we present BloodSpot (www.bloodspot.eu), which includes and greatly extends our previously released database HemaExplorer, a database of gene expression profiles from FACS sorted healthy and malignant haematopoietic cells. A revised interactive interface simultaneously provides a plot of gene expression along with a Kaplan-Meier analysis and a hierarchical tree depicting the relationship between different cell types in the database. The database now includes 23 high-quality curated data sets relevant to normal and malignant blood formation and, in addition, we have assembled and built a unique integrated data set, BloodPool. BloodPool contains more than 2000 samples assembled from six independent studies on acute myeloid leukemia. Furthermore, we have devised a robust sample integration procedure that allows for sensitive comparison of user-supplied patient samples in a well-defined haematopoietic cellular space. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Knapp, David E. (Editor); Nerbas, Tim; Anderson, Darwin
2000-01-01
This data set was collected by TE-1 to provide a set of soil properties for BOREAS investigators in the SSA. The soil samples were collected at sets of soil pits in 1993 and 1994. Each set of soil pits was in the vicinity of one of the five flux towers in the BOREAS SSA. The collected soil samples were sent to a lab, where the major soil properties were determined. These properties include, but are not limited to, soil horizon; dry soil color; pH; bulk density; total, organic, and inorganic carbon; electric conductivity; cation exchange capacity; exchangeable sodium, potassium, calcium, magnesium, and hydrogen; water content at 0.01, 0.033, and 1.5 MPa; nitrogen; phosphorus; particle size distribution; texture; pH of the mineral soil and of the organic soil; extractable acid; and sulfur. The data are stored in tabular ASCII text files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
NASA Technical Reports Server (NTRS)
Iraci, Laura T.
2016-01-01
The Alpha Jet Atmospheric eXperiment (AJAX) is a research project based at Moffett Field, CA, which collects airborne measurements of ozone, carbon dioxide, methane, water vapor, and formaldehyde, as well as 3-D winds, temperature, pressure, and location. Since its first science flight in 2011, AJAX has developed a wide variety of mission types, combining vertical profiles (from approximately 8 km to near surface), boundary layer legs, and plume sampling as needed. With an ongoing five-year data set, the team has sampled over 160 vertical profiles, a dozen wildfires, and numerous stratospheric ozone intrusions. Our largest data collection includes 55 vertical profiles at Railroad Valley, NV, approximately 100 miles southwest of Great Basin National Park, and many of those flights include comparisons to surface monitors in the Nevada Rural Ozone Initiative network. We have also collected a smaller set of measurements northwest of Joshua Tree National Park, and are looking to develop partnerships that can put these data to use to assess or improve air quality in nearby Parks. AJAX also studies the plumes emitted by wildfires in California, as most emissions inventories are based on prescribed fires. We have sampled a dozen fires, and results will be presented from several, including the Rim (2013) and the Soberanes and Cedar (2016) fires.
Wilson, Nick; Edwards, Richard; Parry, Rhys
2011-03-04
To assess the need for additional smokefree settings, by measuring secondhand smoke (SHS) in a range of public places in an urban setting. Measurements were made in Wellington City during the 6-year period after the implementation of legislation that made indoor areas of restaurants and bars/pubs smokefree in December 2004, and up to 20 years after the 1990 legislation making most indoor workplaces smokefree. Fine particulate levels (PM2.5) were measured with a portable real-time airborne particle monitor. We collated data from our previously published work involving random sampling, purposeful sampling and convenience sampling of a wide range of settings (in 2006) and from additional sampling of selected indoor and outdoor areas (in 2007-2008 and 2010). The "outdoor" smoking areas of hospitality venues had the highest particulate levels, with a mean value of 72 mcg/m3 (range of maximum values 51-284 mcg/m3) (n=20 sampling periods). These levels are likely to create health hazards for some workers and patrons (i.e., when considered in relation to the WHO air quality guidelines). National survey data also indicate that these venues are the ones where SHS exposure is most frequently reported by non-smokers. Areas inside bars that were adjacent to "outdoor" smoking areas also had high levels, with a mean of 54 mcg/m3 (range of maximum values: 18-239 mcg/m3, for n=13 measurements). In all other settings mean levels were lower (means: 2-22 mcg/m3). These other settings included inside traditional style pubs/sports bars (n=10), bars (n=18), restaurants (n=9), cafes (n=5), inside public buildings (n=15), inside transportation settings (n=15), and various outdoor street/park settings (n=22). During the data collection in all settings made smokefree by law, there was only one occasion of a person observed smoking. 
The results suggest that compliance in pubs/bars and restaurants has remained extremely high in this city in the nearly six years since implementation of the upgraded smokefree legislation. The results also highlight additional potential health gain from extending smokefree policies to reduce SHS exposure in the "outdoor" smoking areas of hospitality venues and to reduce SHS drift from these areas to indoor areas.
Field emission chemical sensor for receptor/binder, such as antigen/antibody
Panitz, John A.
1986-01-01
A field emission chemical sensor for specific detection of a chemical entity in a sample includes a closed chamber enclosing two field emission electrode sets, each field emission electrode set comprising (a) an electron emitter electrode from which field emission electrons can be emitted when an effective voltage is connected to the electrode set; and (b) a collector electrode which will capture said electrons emitted from said emitter electrode. One of the electrode sets is passive to the chemical entity and the other is active thereto and has an active emitter electrode which will bind the chemical entity when contacted therewith.
Beckers, Matthew; Mohorianu, Irina; Stocks, Matthew; Applegate, Christopher; Dalmay, Tamas; Moulton, Vincent
2017-01-01
Recently, high-throughput sequencing (HTS) has revealed compelling details about the small RNA (sRNA) population in eukaryotes. These 20 to 25 nt noncoding RNAs can influence gene expression by acting as guides for the sequence-specific regulatory mechanism known as RNA silencing. The increase in sequencing depth and number of samples per project enables a better understanding of the role sRNAs play by facilitating the study of expression patterns. However, the intricacy of the biological hypotheses coupled with a lack of appropriate tools often leads to inadequate mining of the available data and thus, an incomplete description of the biological mechanisms involved. To enable a comprehensive study of differential expression in sRNA data sets, we present a new interactive pipeline that guides researchers through the various stages of data preprocessing and analysis. This includes various tools, some of which we specifically developed for sRNA analysis, for quality checking and normalization of sRNA samples as well as tools for the detection of differentially expressed sRNAs and identification of the resulting expression patterns. The pipeline is available within the UEA sRNA Workbench, a user-friendly software package for the processing of sRNA data sets. We demonstrate the use of the pipeline on a H. sapiens data set; additional examples on a B. terrestris data set and on an A. thaliana data set are described in the Supplemental Information. A comparison with existing approaches is also included, which exemplifies some of the issues that need to be addressed for sRNA analysis and how the new pipeline may be used to do this. PMID:28289155
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poore III, Willis P; Belles, Randy; Mays, Gary T
This report summarizes the approach that ORNL developed for screening a sample set of US Department of Defense (DOD) military base sites and DOE sites for possible powering with an SMR; the methodology employed, including spatial modeling; and initial results for several sample sites. The objective in conducting this type of siting evaluation is to demonstrate the capability to characterize specific DOD and DOE sites using OR-SAGE and to identify any particular issues associated with powering the sites with an SMR; it is not intended to be a definitive assessment of the absolute suitability of any particular site.
Documentation of Apollo 15 samples
NASA Technical Reports Server (NTRS)
Sutton, R. L.; Hait, M. H.; Larson, K. B.; Swann, G. A.; Reed, V. S.; Schaber, G. G.
1972-01-01
A catalog is presented of the documentation of Apollo 15 samples using photographs and verbal descriptions returned from the lunar surface. Almost all of the Apollo 15 samples were correlated with lunar surface photographs, descriptions, and traverse locations. Where possible, the lunar orientations of rock samples were reconstructed in the lunar receiving laboratory, using a collimated light source to reproduce illumination and shadow characteristics of the same samples shown in lunar photographs. In several cases, samples were not recognized in lunar surface photographs, and their approximate locations are known only by association with numbered sample bags used during their collection. Tables, photographs, and maps included in this report are designed to aid in the understanding of the lunar setting of the Apollo 15 samples.
Twinn, Sheila; Thompson, David R; Lopez, Violeta; Lee, Diana T F; Shiu, Ann T Y
2005-01-01
Different factors have been shown to influence the development of models of advanced nursing practice (ANP) in primary-care settings. Although ANP is being developed in hospitals in Hong Kong, China, it remains undeveloped in primary care, and little is known about the factors determining the development of such a model. The aims of the present study were to investigate the contribution of different models of nursing practice to the care provided in primary-care settings in Hong Kong, and to examine the determinants influencing the development of a model of ANP in such settings. A multiple case study design was selected using both qualitative and quantitative methods of data collection. Sampling methods reflected the population groups and stage of the case study. Sampling included a total population of 41 nurses from whom a secondary volunteer sample was drawn for face-to-face interviews. In each case study, a convenience sample of 70 patients was recruited, from whom 10 were selected purposively for a semi-structured telephone interview. An opportunistic sample of healthcare professionals was also selected. The within-case and cross-case analysis demonstrated four major determinants influencing the development of ANP: (1) current models of nursing practice; (2) the use of skills mix; (3) the perceived contribution of ANP to patient care; and (4) patients' expectations of care. The level of autonomy of individual nurses was considered particularly important. These determinants were used to develop a model of ANP for a primary-care setting. In conclusion, although the findings highlight the complexity of the determinants of the development and implementation of ANP in primary care, the proposed model suggests that definitions of advanced practice are appropriate to a range of practice models and cultural settings. However, the findings highlight the importance of assessing the effectiveness of such models in terms of cost and long-term patient outcomes.
Sparse sampling and reconstruction for electron and scanning probe microscope imaging
Anderson, Hyrum; Helms, Jovana; Wheeler, Jason W.; Larson, Kurt W.; Rohrer, Brandon R.
2015-07-28
Systems and methods for conducting electron or scanning probe microscopy are provided herein. In a general embodiment, the systems and methods for conducting electron or scanning probe microscopy with an undersampled data set include: driving an electron beam or probe to scan across a sample and visit a subset of pixel locations of the sample that are randomly or pseudo-randomly designated; determining actual pixel locations on the sample that are visited by the electron beam or probe; and processing data collected by detectors from the visits of the electron beam or probe at the actual pixel locations and recovering a reconstructed image of the sample.
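The undersampled-scan workflow described in this abstract (visit a pseudo-random subset of pixel locations, record which locations were actually visited, then recover a full image) can be sketched as below. This is a hedged illustration only: the image is synthetic, and nearest-neighbour interpolation stands in for the reconstruction method, which the abstract does not specify.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
h, w = 64, 64

# A stand-in "sample": a smooth synthetic image.
yy, xx = np.mgrid[0:h, 0:w]
image = np.sin(xx / 10.0) + np.cos(yy / 12.0)

# Visit ~25% of pixel locations, chosen pseudo-randomly.
n_visit = (h * w) // 4
idx = rng.choice(h * w, size=n_visit, replace=False)
coords = np.column_stack(np.unravel_index(idx, (h, w)))
values = image[coords[:, 0], coords[:, 1]]

# Reconstruct the full grid from only the visited pixels.
recon = griddata(coords, values, (yy, xx), method="nearest")
error = np.abs(recon - image).mean()
```

A real compressed-sensing reconstruction would replace the interpolation step with a sparsity-regularized solver, but the sampling pattern and bookkeeping are the same.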
Exploring the ancestry differentiation and inference capacity of the 28-plex AISNPs.
Hao, Wei-Qi; Liu, Jing; Jiang, Li; Han, Jun-Ping; Wang, Ling; Li, Jiu-Ling; Ma, Quan; Liu, Chao; Wang, Hui-Jun; Li, Cai-Xia
2018-06-07
Inferring an unknown DNA's ancestry using a set of ancestry-informative single nucleotide polymorphisms (SNPs) in forensic science is useful to provide investigative leads. This is especially true when there is no DNA database match or specified suspect. Thus, a set of SNPs with highly robust and balanced differential power is strongly demanded in forensic science. In addition, it is also necessary to build a genotyping database for estimating the ancestry of an individual or an unknown DNA. For the differentiation of Africans, Europeans, East Asians, Native Americans, and Oceanians, the Global AIMs Nano set, which includes just 31 SNPs, was developed by de la Puente et al. Its ability for differentiation and balance was evaluated using the genotype data of the 1000 Genomes Phase III project and the Stanford University HGDP-CEPH. Just 402 samples were genotyped and analyzed as a reference set based on statistical methods. To validate the differentiating capacity using more samples, we developed a single-tube 28-plex SNP assay in which the SNPs were chosen from the 31 allelic loci of the Global AIMs Nano set. Three tri-allelic SNPs used to differentiate mixed-source DNA contribute little to population differentiation and were excluded here. Then, 998 individuals from 21 populations were typed, and these genotypes were combined with the genotype data obtained from 1000 Genomes Phase III and the Stanford University HGDP-CEPH (3090 total samples, 43 populations) to estimate the power of this multiplex assay and build a database for the further inference of an individual or an unknown DNA sample in forensic practice.
Syndromic Surveillance: Adapting Innovations to Developing Settings
2008-03-01
outbreak investigation was initiated, including rectal swab sampling of patients with watery diarrhea. Culture tests identified Vibrio cholerae in 44...
Portable detection system of vegetable oils based on laser induced fluorescence
NASA Astrophysics Data System (ADS)
Zhu, Li; Zhang, Yinchao; Chen, Siying; Chen, He; Guo, Pan; Mu, Taotao
2015-11-01
Food safety, especially for edible oils, has attracted increasing attention recently. Many methods and instruments have emerged to analyze edible oils, covering both oil classification and adulteration detection, and it is well known that adulteration detection builds on classification. In this paper, a portable detection system based on laser-induced fluorescence is proposed and designed to classify various edible oils, including olive, rapeseed, walnut, peanut, linseed, sunflower, and corn oils. 532 nm laser modules are used in this equipment, and all the components are assembled into a module (100 × 100 × 25 mm). A total of 700 sets of fluorescence data (100 sets for each type of oil) were collected. To classify the different edible oils, principal component analysis and support vector machines were employed in the data analysis. The training set consisted of 560 sets of data (80 sets of each oil) and the test set consisted of 140 sets of data (20 sets of each oil). The recognition rate is up to 99%, which demonstrates the reliability of this portable system. Being nonintrusive and requiring no sample preparation, the portable system can be effectively applied to food detection.
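The classification workflow this abstract outlines (principal component analysis for dimensionality reduction, then a support vector machine, with an 80/20 split per oil class) can be sketched as follows. The spectra below are synthetic stand-ins, and the component count, kernel, and noise level are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_oils, n_train, n_test, n_wavelengths = 7, 80, 20, 512

# Synthetic "fluorescence spectra": one base spectrum per oil plus noise.
centres = rng.normal(size=(n_oils, n_wavelengths))

def simulate(n_per_class):
    X = np.vstack([c + 0.05 * rng.normal(size=(n_per_class, n_wavelengths))
                   for c in centres])
    y = np.repeat(np.arange(n_oils), n_per_class)
    return X, y

X_train, y_train = simulate(n_train)   # 560 spectra, 80 per oil
X_test, y_test = simulate(n_test)      # 140 spectra, 20 per oil

# PCA compresses each spectrum to a few scores; the SVM separates the oils.
model = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

On real spectra the recognition rate depends on how well the PCA scores separate the oil types; the pipeline structure, however, matches the PCA-then-SVM approach described above.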
García-Molina, María Dolores; García-Olmo, Juan; Barro, Francisco
2016-01-01
The aim of this work was to assess the ability of Near Infrared Spectroscopy (NIRS) to distinguish wheat lines with low gliadin content, obtained by RNA interference (RNAi), from non-transgenic wheat lines. The discriminant analysis was performed using both whole grain and flour. The transgenic sample set included 409 samples for whole grain sorting and 414 samples for flour experiments, while the non-transgenic set consisted of 126 and 156 samples for whole grain and flour, respectively. Samples were scanned using a Foss-NIR Systems 6500 System II instrument. Discrimination models were developed using the entire spectral range (400-2500 nm) and ranges of 400-780 nm, 800-1098 nm and 1100-2500 nm, followed by analysis by means of partial least squares (PLS). Two external validations were made, using samples from the years 2013 and 2014, and a minimum of 99% of the flour samples and 96% of the whole grain samples were classified correctly. The results demonstrate the ability of NIRS to successfully discriminate between wheat samples with low gliadin content and wild types. These findings are important for the development and analysis of foodstuff for celiac disease (CD) patients to achieve better dietary composition and a reduction in disease incidence.
2013-01-01
Introduction There is inconsistent association between urate transporters SLC22A11 (organic anion transporter 4 (OAT4)) and SLC22A12 (urate transporter 1 (URAT1)) and risk of gout. New Zealand (NZ) Māori and Pacific Island people have higher serum urate and more severe gout than European people. The aim of this study was to test genetic variation across the SLC22A11/SLC22A12 locus for association with risk of gout in NZ sample sets. Methods A total of 12 single nucleotide polymorphism (SNP) variants in four haplotype blocks were genotyped using TaqMan® and Sequenom MassArray in 1003 gout cases and 1156 controls. All cases had gout according to the 1977 American Rheumatism Association criteria. Association analysis of single markers and haplotypes was performed using PLINK and Stata. Results A haplotype block 1 SNP (rs17299124) (upstream of SLC22A11) was associated with gout in less admixed Polynesian sample sets, but not European Caucasian (odds ratio; OR = 3.38, P = 6.1 × 10^-4; OR = 0.91, P = 0.40, respectively) sample sets. A protective block 1 haplotype caused the rs17299124 association (OR = 0.28, P = 6.0 × 10^-4). Within haplotype block 2 (SLC22A11) we could not replicate previous reports of association of rs2078267 with gout in European Caucasian (OR = 0.98, P = 0.82) sample sets; however, this SNP was associated with gout in Polynesian (OR = 1.51, P = 0.022) sample sets. Within haplotype block 3 (including SLC22A12) analysis of haplotypes revealed a haplotype with trans-ancestral protective effects (OR = 0.80, P = 0.004), and a second haplotype conferring protection in less admixed Polynesian sample sets (OR = 0.63, P = 0.028) but risk in European Caucasian samples (OR = 1.33, P = 0.039). Conclusions Our analysis provides evidence for multiple ancestral-specific effects across the SLC22A11/SLC22A12 locus that presumably influence the activity of OAT4 and URAT1 and risk of gout.
Further fine mapping of the association signal is needed using trans-ancestral re-sequence data. PMID:24360580
Wildhaber, M.L.; Papoulias, D.M.; DeLonay, A.J.; Tillitt, D.E.; Bryan, J.L.; Annis, M.L.
2007-01-01
From May 2001 to June 2002 Wildhaber et al. (2005) conducted monthly sampling of Lower Missouri River shovelnose sturgeon (Scaphirhynchus platorynchus) to develop methods for determination of sex and the reproductive stage of sturgeons in the field. Shovelnose sturgeon were collected from the Missouri River and ultrasonic and endoscopic imagery and blood and gonadal tissue samples were taken. The full set of data was used to develop monthly reproductive stage profiles for S. platorynchus that could be compared to data collected on pallid sturgeon (Scaphirhynchus albus). This paper presents a comprehensive reference set of images, sex steroids, and vitellogenin (VTG, an egg protein precursor) data for assessing shovelnose sturgeon sex and reproductive stage. This reference set includes ultrasonic, endoscopic, histologic, and internal images of male and female gonads of shovelnose sturgeon at each reproductive stage along with complementary data on average 17-β estradiol, 11-ketotestosterone, VTG, gonadosomatic index, and polarization index.
Evaluation of a Serum Lung Cancer Biomarker Panel.
Mazzone, Peter J; Wang, Xiao-Feng; Han, Xiaozhen; Choi, Humberto; Seeley, Meredith; Scherer, Richard; Doseeva, Victoria
2018-01-01
A panel of 3 serum proteins and 1 autoantibody has been developed to assist with the detection of lung cancer. We aimed to validate the accuracy of the biomarker panel in an independent test set and explore the impact of adding a fourth serum protein to the panel, as well as the impact of combining molecular and clinical variables. The training set of serum samples was purchased from commercially available biorepositories. The testing set was from a biorepository at the Cleveland Clinic. All lung cancer and control subjects were >50 years old and had smoked a minimum of 20 pack-years. A panel of biomarkers including CEA (carcinoembryonic antigen), CYFRA21-1 (cytokeratin-19 fragment 21-1), CA125 (carbohydrate antigen 125), HGF (hepatocyte growth factor), and NY-ESO-1 (New York esophageal cancer-1 antibody) was measured using immunoassay techniques. The multiple of the median method, multivariate logistic regression, and random forest modeling were used to analyze the results. The training set consisted of 604 patient samples (268 with lung cancer and 336 controls) and the testing set of 400 patient samples (155 with lung cancer and 245 controls). With a threshold established from the training set, the sensitivity and specificity of both the 4- and 5-biomarker panels on the testing set was 49% and 96%, respectively. Models built on the testing set using only clinical variables had an area under the receiver operating characteristic curve of 0.68, using the biomarker panel 0.81, and by combining clinical and biomarker variables 0.86. This study validates the accuracy of a panel of proteins and an autoantibody in a population relevant to lung cancer detection and suggests a benefit to combining clinical features with the biomarker results.
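The panel-scoring approach outlined in this abstract (multiple-of-the-median normalization of each marker, followed by a logistic-regression combination evaluated by ROC AUC) can be sketched roughly as follows. All data here are synthetic and the marker shifts are invented; this is not the study's model, thresholds, or coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_cases, n_controls, n_markers = 155, 245, 4

# Synthetic marker levels: cases shifted upward on every marker.
controls = rng.lognormal(mean=0.0, sigma=0.5, size=(n_controls, n_markers))
cases = rng.lognormal(mean=0.6, sigma=0.5, size=(n_cases, n_markers))
X = np.vstack([controls, cases])
y = np.array([0] * n_controls + [1] * n_cases)

# Multiple-of-the-median: express each marker relative to the control median.
mom = X / np.median(controls, axis=0)

# A logistic regression combines the normalized markers into one panel score.
model = LogisticRegression().fit(mom, y)
auc = roc_auc_score(y, model.predict_proba(mom)[:, 1])
```

In the study proper, the threshold for calling a sample positive was fixed on the training set and then applied unchanged to the independent test set, which is what makes the reported sensitivity and specificity a validation rather than a refit.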
Kaech Moll, Veronika M; Escorpizo, Reuben; Portmann Bergamaschi, Ruth; Finger, Monika E
2016-08-01
The Comprehensive ICF Core Set for vocational rehabilitation (VR) is a list of essential categories on functioning based on the World Health Organization (WHO) International Classification of Functioning, Disability and Health (ICF), which describes a standard for interdisciplinary assessment, documentation, and communication in VR. The aim of this study was to examine the content validity of the Comprehensive ICF Core Set for VR from the perspective of physical therapists. A 3-round email survey was performed using the Delphi method. A convenience sample of international physical therapists working in VR with work experience of ≥2 years was asked to identify aspects they consider relevant when evaluating or treating clients in VR. Responses were linked to the ICF categories and compared with the Comprehensive ICF Core Set for VR. Sixty-two physical therapists from all 6 WHO world regions responded with 3,917 statements that were subsequently linked to 338 ICF categories. Fifteen (17%) of the 90 categories in the Comprehensive ICF Core Set for VR were confirmed by the physical therapists in the sample. Twenty-two additional ICF categories were identified that were not included in the Comprehensive ICF Core Set for VR. Vocational rehabilitation in physical therapy is not well defined in every country, which might have contributed to the small sample size. Therefore, the results cannot be generalized to all physical therapists practicing in VR. The content validity of the ICF Core Set for VR is insufficient from solely a physical therapist perspective. The results of this study could be used to define a physical therapy-specific set of ICF categories to develop and guide physical therapist clinical practice in VR. © 2016 American Physical Therapy Association.
Protein and glycomic plasma markers for early detection of adenoma and colon cancer.
Rho, Jung-Hyun; Ladd, Jon J; Li, Christopher I; Potter, John D; Zhang, Yuzheng; Shelley, David; Shibata, David; Coppola, Domenico; Yamada, Hiroyuki; Toyoda, Hidenori; Tada, Toshifumi; Kumada, Takashi; Brenner, Dean E; Hanash, Samir M; Lampe, Paul D
2018-03-01
To discover and confirm blood-based colon cancer early-detection markers. We created a high-density antibody microarray to detect differences in protein levels in plasma from individuals diagnosed with colon cancer <3 years after blood was drawn (ie, prediagnostic) and cancer-free, matched controls. Potential markers were tested on plasma samples from people diagnosed with adenoma or cancer, compared with controls. Components of an optimal 5-marker panel were tested via immunoblotting using a third sample set, Luminex assay in a large fourth sample set and immunohistochemistry (IHC) on tissue microarrays. In the prediagnostic samples, we found 78 significantly (t-test) increased proteins, 32 of which were confirmed in the diagnostic samples. From these 32, optimal 4-marker panels of BAG family molecular chaperone regulator 4 (BAG4), interleukin-6 receptor subunit beta (IL6ST), von Willebrand factor (VWF) and CD44 or epidermal growth factor receptor (EGFR) were established. Each panel member and the panels also showed increases in the diagnostic adenoma and cancer samples in independent third and fourth sample sets via immunoblot and Luminex, respectively. IHC results showed increased levels of BAG4, IL6ST and CD44 in adenoma and cancer tissues. Inclusion of EGFR and CD44 sialyl Lewis-A and Lewis-X content increased the panel performance. The protein/glycoprotein panel was statistically significantly higher in colon cancer samples, with areas under the curve of 0.90 (95% CI 0.82 to 0.98) and 0.86 (95% CI 0.83 to 0.88) for the larger second and fourth sample sets, respectively. A panel including BAG4, IL6ST, VWF, EGFR and CD44 protein/glycomics performed well for detection of early stages of colon cancer and should be further examined in larger studies. Published by the BMJ Publishing Group Limited.
Lindsey, Bruce D.; Bickford, Tammy M.
1999-01-01
State agencies responsible for regulating pesticides are required by the U.S. Environmental Protection Agency to develop state management plans for specific pesticides. A key part of these management plans includes assessing the potential for contamination of ground water by pesticides throughout the state. As an example of how a statewide assessment could be implemented, a plan is presented for the Commonwealth of Pennsylvania to illustrate how a hydrogeologic framework can be used as a basis for sampling areas within a state with the highest likelihood of having elevated pesticide concentrations in ground water. The framework was created by subdividing the state into 20 areas on the basis of physiography and aquifer type. Each of these 20 hydrogeologic settings is relatively homogeneous with respect to aquifer susceptibility and pesticide use—factors that would be likely to affect pesticide concentrations in ground water. Existing data on atrazine occurrence in ground water were analyzed to determine (1) which areas of the state already have sufficient samples collected to make statistical comparisons among hydrogeologic settings, and (2) the effect of factors such as land use and aquifer characteristics on pesticide occurrence. The theoretical vulnerability and the results of the data analysis were used to rank each of the 20 hydrogeologic settings on the basis of vulnerability of ground water to contamination by pesticides. Example sampling plans are presented for nine of the hydrogeologic settings that lack sufficient data to assess vulnerability to contamination.
Of the highest priority areas of the state, two out of four have been adequately sampled, one of the three areas of moderate to high priority has been adequately sampled, four of the nine areas of moderate to low priority have been adequately sampled, and none of the three low priority areas have been sampled. Sampling to date has shown that, even in the most vulnerable hydrogeologic settings, pesticide concentrations in ground water rarely exceed U.S. Environmental Protection Agency Drinking Water Standards or Health Advisory Levels. Analyses of samples from 1,159 private water supplies reveal only 3 sites where pesticide concentrations exceeded drinking-water standards. In most cases, samples with elevated concentrations could be traced to point sources at pesticide loading or mixing areas. These analyses included data from some of the most vulnerable areas of the state, indicating that it is highly unlikely that pesticide concentrations in water from wells in other areas of the state would exceed the drinking-water standards unless a point source of contamination were present. Analysis of existing data showed that water from wells in areas of the state underlain by carbonate (limestone and dolomite) bedrock, which commonly have a high percentage of corn production, was much more likely to have pesticides detected. Application of pesticides to the land surface generally has not caused concentrations of the five state priority pesticides in ground water to exceed health standards; however, this study has not evaluated the potential human health effects of mixtures of pesticides or pesticide degradation products in drinking water. This study also has not determined whether concentrations in ground water are stable, increasing, or decreasing.
[The research protocol III. Study population].
Arias-Gómez, Jesús; Villasís-Keever, Miguel Ángel; Miranda-Novales, María Guadalupe
2016-01-01
The study population is defined as a determined, limited, and accessible set of cases that will constitute the subjects for selection of the sample, and it must fulfill several characteristics and distinct criteria. The objectives of this manuscript are focused on specifying each one of the elements required to make the selection of the participants of a research project, during the elaboration of the protocol, including the concepts of study population, sample, selection criteria and sampling methods. After delineating the study population, the researcher must specify the criteria with which each participant must comply. The criteria that specify these characteristics are called selection or eligibility criteria. These criteria are inclusion, exclusion and elimination, and will delineate the eligible population. The sampling methods are divided in two large groups: 1) probabilistic or random sampling and 2) non-probabilistic sampling. The difference lies in the employment of statistical methods to select the subjects. In every research project, it is necessary to establish at the outset the specific number of participants to be included to achieve the objectives of the study. This number is the sample size, and can be calculated or estimated with mathematical formulas and statistical software.
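As a concrete illustration of the last point, a common closed-form estimate for the sample size needed to estimate a proportion is n = z²·p(1−p)/d². This sketch applies it with assumed values for the expected prevalence and desired precision (the inputs are illustrative, not from the manuscript):

```python
import math

def sample_size_proportion(p, margin, z=1.96):
    """Minimum n to estimate a proportion p within +/- margin
    at ~95% confidence (normal approximation)."""
    n = z**2 * p * (1 - p) / margin**2
    return math.ceil(n)

# e.g. expected prevalence 30%, desired precision +/- 5 percentage points
print(sample_size_proportion(0.30, 0.05))  # → 323
```

The worst case is p = 0.5, which is why 385 participants is the familiar figure for a ±5% margin when nothing is known about the proportion in advance.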
Methane occurrence in groundwater of south-central New York State, 2012: summary of findings
Heisig, Paul M.; Scott, Tia-Marie
2013-01-01
A survey of methane in groundwater was undertaken to document methane occurrence on the basis of hydrogeologic setting within a glaciated 1,810-square-mile area of south-central New York that has not seen shale-gas resource development. The adjacent region in northeastern Pennsylvania has undergone shale-gas resource development from the Marcellus Shale. Well construction and subsurface data were required for each well sampled so that the local hydrogeologic setting could be classified. All wells were also at least 1 mile from any known gas well (active, exploratory, or abandoned). Sixty-six domestic wells and similarly purposed supply wells were sampled during summer 2012. Field water-quality characteristics (pH, specific conductance, dissolved oxygen, and temperature) were measured at each well, and samples were collected and analyzed for dissolved gases, including methane and short-chain hydrocarbons. Carbon and hydrogen isotopic ratios of methane were measured in 21 samples that had at least 0.3 milligram per liter (mg/L) methane.
Gottvall, Maria; Vaez, Marjan
2017-01-01
A high proportion of refugees have been subjected to potentially traumatic experiences (PTEs), including torture. PTEs, and torture in particular, are powerful predictors of mental ill health. This paper reports the development and preliminary validation of a brief refugee trauma checklist applicable for survey studies. Methods: A pool of 232 items was generated based on pre-existing instruments. Conceptualization, item selection and item refinement were conducted based on existing literature and in collaboration with experts. Ten cognitive interviews using a Think Aloud Protocol (TAP) were performed in a clinical setting, and field testing of the proposed checklist was performed in a total sample of n = 137 asylum seekers from Syria. Results: The proposed refugee trauma history checklist (RTHC) consists of 2 × 8 items, concerning PTEs that occurred before and during the respondents’ flight, respectively. Results show low item non-response and adequate psychometric properties. Conclusions: RTHC is a usable tool for providing self-report data on refugee trauma history in surveys of community samples. The core set of included events can be augmented, and slight modifications can be applied to RTHC for use in other refugee populations and settings. PMID:28976937
Pan, Feng; Tao, Guohua
2013-03-07
Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.
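The abstract's scheme is specific to SC-IVR phase-space sampling, but the underlying idea of importance sampling — draw from a convenient proposal distribution and reweight each draw by the target-to-proposal density ratio — can be shown generically. This sketch uses an arbitrary toy target and proposal (not the paper's method): it estimates E[x²] under a standard normal by sampling from a wider normal:

```python
import math
import random

random.seed(0)

def estimate(n=100000):
    """Importance-sampling estimate of E[x^2] under the target N(0,1),
    drawing from the wider proposal N(0, sigma=2) and reweighting."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(0.0, 2.0)
        target = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
        proposal = math.exp(-x * x / 8.0) / (2.0 * math.sqrt(2.0 * math.pi))
        total += (target / proposal) * x * x
    return total / n

print(round(estimate(), 3))  # close to the exact value 1.0
```

A proposal that better matches where the integrand has mass lowers the variance of the estimator, which is the same motivation as correlating the two initial phase points for the bath degrees of freedom.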
Ehler, Edvard; Vaněk, Daniel; Stenzl, Vlastimil; Vančata, Václav
2011-01-01
Aim To evaluate Y-chromosomal diversity of the Moravian Valachs of the Czech Republic and compare them with a Czech population sample and other samples from Central and South-Eastern Europe, and to evaluate the effects of genetic isolation and sampling. Methods The first sample set of the Valachs consisted of 94 unrelated male donors from the Valach region in the northeastern Czech Republic border area. The second sample set of the Valachs consisted of 79 men who originated from 7 paternal lineages defined by surname. No close relatives were sampled. The third sample set consisted of 273 unrelated men from the whole of the Czech Republic and was used for comparison, as well as published data for 27 other populations. The total number of samples was 3244. Y-short tandem repeat (STR) markers were typed by standard methods using PowerPlex® Y System (Promega) and Yfiler® Amplification Kit (Applied Biosystems) kits. Y-chromosomal haplogroups were estimated from the haplotype information. Haplotype diversity and other intra- and inter-population statistics were computed. Results The Moravian Valachs showed a lower genetic variability of Y-STR markers than other Central European populations, more closely resembling the isolated Balkan populations (Aromuns, Csango, Bulgarian, and Macedonian Roma) than the surrounding populations (Czechs, Slovaks, Poles, Saxons). We illustrated the effect of sampling on Valach paternal lineages, which includes reduction of discrimination capacity and variability inside Y-chromosomal haplogroups. The Valach modal haplotype belongs to the R1a haplogroup and was not detected in the Czech population. Conclusion The Moravian Valachs display strong substructure and isolation in their Y chromosomal markers. They represent a unique Central European population model for population genetics. PMID:21674832
Dolch, Michael E; Janitza, Silke; Boulesteix, Anne-Laure; Graßmann-Lichtenauer, Carola; Praun, Siegfried; Denzer, Wolfgang; Schelling, Gustav; Schubert, Sören
2016-12-01
Identification of microorganisms in positive blood cultures still relies on standard techniques such as Gram staining followed by culturing with definite microorganism identification. Alternatively, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry or the analysis of headspace volatile compound (VC) composition produced by cultures can help to differentiate between microorganisms under experimental conditions. This study assessed the efficacy of volatile compound-based microorganism differentiation into Gram-negatives and -positives in unselected positive blood culture samples from patients. Headspace gas samples of positive blood culture samples were transferred to sterilized, sealed, and evacuated 20 ml glass vials and stored at -30 °C until batch analysis. Headspace gas VC content analysis was carried out via an autosampler connected to an ion-molecule reaction mass spectrometer (IMR-MS). Measurements covered a mass range from 16 to 135 u including CO2, H2, N2, and O2. Prediction rules for microorganism identification based on VC composition were derived using a training data set and evaluated using a validation data set within a random split validation procedure. One hundred fifty-two aerobic samples growing 27 Gram-negatives, 106 Gram-positives, and 19 fungi and 130 anaerobic samples growing 37 Gram-negatives, 91 Gram-positives, and two fungi were analysed. In anaerobic samples, ten discriminators were identified by the random forest method allowing for bacteria differentiation into Gram-negative and -positive (error rate: 16.7 % in validation data set). For aerobic samples the error rate was not better than random. In patients' anaerobic blood culture samples, IMR-MS-based headspace VC composition analysis facilitates differentiation of bacteria into Gram-negative and Gram-positive.
Jangam, Chandrakant; Ramya Sanam, S; Chaturvedi, M K; Padmakar, C; Pujari, Paras R; Labhasetwar, Pawan K
2015-10-01
The present case study has been undertaken to investigate the impact of on-site sanitation on groundwater quality in alluvial settings in Lucknow City in India. The groundwater samples have been collected in the areas of Lucknow City where the on-site sanitation systems have been implemented. The groundwater samples have been analyzed for the major physicochemical parameters and fecal coliform. The results of analysis reveal that none of the groundwater samples exceeded the Bureau of Indian Standards (BIS) limits for all the parameters. Fecal coliform was not found in the majority of samples, including those very close to the septic tank. The study area has a thick alluvium cover as a top layer which acts as a natural barrier for groundwater contamination from the on-site sanitation system. The t test has been performed to assess the seasonal effect on groundwater quality; it indicates that there is a significant effect of season on groundwater quality in the study area.
The topology of large-scale structure. III - Analysis of observations
NASA Astrophysics Data System (ADS)
Gott, J. Richard, III; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.; Weinberg, David H.; Gammie, Charles; Polk, Kevin; Vogeley, Michael; Jeffrey, Scott; Bhavsar, Suketu P.; Melott, Adrian L.; Giovanelli, Riccardo; Hayes, Martha P.; Tully, R. Brent; Hamilton, Andrew J. S.
1989-05-01
A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.
The topology of large-scale structure. III - Analysis of observations
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III; Weinberg, David H.; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.
1989-01-01
A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.
ERIC Educational Resources Information Center
School Library Media Activities Monthly, 1996
1996-01-01
Describes a four-volume reference set for elementary and middle school students called "The Middle Ages: An Encyclopedia for Students" edited by William Chester Jordan. Provides a sample lesson which includes library media skills objectives, curriculum objectives, grade levels, resources, instructional roles, activity and procedures for…
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Raad, Markus; de Rond, Tristan; Rübel, Oliver
Mass spectrometry imaging (MSI) has primarily been applied in localizing biomolecules within biological matrices. Although well-suited, the application of MSI for comparing thousands of spatially defined spotted samples has been limited. One reason for this is a lack of suitable and accessible data processing tools for the analysis of large arrayed MSI sample sets. This paper presents the OpenMSI Arrayed Analysis Toolkit (OMAAT), a software package that addresses the challenges of analyzing spatially defined samples in MSI data sets. OMAAT is written in Python and is integrated with OpenMSI (http://openmsi.nersc.gov), a platform for storing, sharing, and analyzing MSI data. By using a web-based Python notebook (Jupyter), OMAAT is accessible to anyone without programming experience yet allows experienced users to leverage all features. OMAAT was evaluated by analyzing an MSI data set of a high-throughput glycoside hydrolase activity screen comprising 384 samples arrayed onto a NIMS surface at a 450 μm spacing, decreasing analysis time >100-fold while maintaining robust spot-finding. The utility of OMAAT was demonstrated for screening metabolic activities of different-sized soil particles, including hydrolysis of sugars, revealing a pattern of size-dependent activities. Finally, these results introduce OMAAT as an effective toolkit for analyzing spatially defined samples in MSI. OMAAT runs on all major operating systems, and the source code can be obtained from the following GitHub repository: https://github.com/biorack/omaat.
An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems
Dawson, Kevin J.; Belkhir, Khalid
2009-01-01
Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals: the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety, unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
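A minimal sketch of this kind of agglomeration (illustrative only, not the Partition View implementation): at each step, merge the pair of clusters whose smallest cross-cluster posterior co-assignment probability is largest, and record that probability as the node height. The matrix below is an invented toy example:

```python
# Toy posterior co-assignment matrix for 4 individuals
# (symmetric; values are invented for illustration).
P = [
    [1.0, 0.9, 0.2, 0.1],
    [0.9, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.8],
    [0.1, 0.2, 0.8, 1.0],
]

def exact_linkage(P):
    """Greedy agglomeration: repeatedly merge the pair of clusters whose
    minimum cross co-assignment probability is largest, recording that
    probability as the node height of the merged set."""
    clusters = [frozenset([i]) for i in range(len(P))]
    merges = []
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                h = min(P[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or h > best[0]:
                    best = (h, a, b)
        h, a, b = best
        merged = clusters[a] | clusters[b]
        merges.append((h, merged))
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)]
        clusters.append(merged)
    return merges

for height, members in exact_linkage(P):
    print(round(height, 2), sorted(members))
```

On this toy matrix, {0, 1} merge at height 0.9 and {2, 3} at height 0.8 before everything joins at 0.1, so the uncertainty structure can be read directly off the node heights.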
The ExoMars Sample Preparation and Distribution System
NASA Astrophysics Data System (ADS)
Schulte, Wolfgang; Hofmann, Peter; Baglioni, Pietro; Richter, Lutz; Redlich, Daniel; Notarnicola, Marco; Durrant, Stephen
2012-07-01
The Sample Preparation and Distribution System (SPDS) is a key element of the ESA ExoMars Rover. It is a set of complex mechanisms designed to receive Mars soil samples acquired from the subsurface with a drill, to crush them and to distribute the obtained soil powder to the scientific instruments of the `Pasteur Payload', in the Rover Analytical Laboratory (ALD). In particular, the SPDS consists of: (1) a Core Sample Handling System (CSHS), including a Core Sample Transportation Mechanism (CSTM) and a Blank Sample Dispenser; (2) a Crushing Station (CS); (3) a Powder Sample Dosing and Distribution System (PSDDS); and (4) a Powder Sample Handling System (PSHS), which is a carousel carrying pyrolysis ovens, a re-fillable sample container and a tool to flatten the powder sample surface. Kayser-Threde has developed, under contract with the ExoMars prime contractor Thales Alenia Space Italy, breadboards and an engineering model of the SPDS mechanisms. Tests of individual mechanisms, namely the CSTM, CS and PSDDS, were conducted both in laboratory ambient conditions and in a simulated Mars environment, using dedicated facilities. The SPDS functionalities and performances were measured and evaluated. In the course of 2011 the SPDS Dosing Station (part of the PSDDS) was also tested in simulated Mars gravity conditions during a parabolic flight campaign. By the time of the conference, an elegant breadboard of the Powder Sample Handling System will have been built and tested. The next step, planned for mid-2012, will be a complete end-to-end test of the sample handling and processing chain, combining all four SPDS mechanisms. The possibility of verifying interface and operational aspects between the SPDS and the ALD scientific instruments using the available instrument breadboards with the end-to-end set-up is currently being evaluated.
This paper illustrates the most recent design status of the SPDS mechanisms, summarizes the test results and highlights future development activities, including potential involvement of the ExoMars science experiments.
Taxman, Faye S; Kitsantas, Panagiota
2009-08-01
OBJECTIVE TO BE ADDRESSED: The purpose of this study was to investigate the structural and organizational factors that contribute to the availability and increased capacity for substance abuse treatment programs in correctional settings. We used classification and regression tree statistical procedures to identify how multi-level data can explain the variability in availability and capacity of substance abuse treatment programs in jails and probation/parole offices. The data for this study combined the National Criminal Justice Treatment Practices (NCJTP) Survey and the 2000 Census. The NCJTP survey was a nationally representative sample of correctional administrators for jails and probation/parole agencies. The sample included 295 substance abuse treatment programs that were classified according to the intensity of their services: high, medium, and low. The independent variables included jurisdictional-level structural variables, attributes of the correctional administrators, and program and service delivery characteristics of the correctional agency. The two most important variables in predicting the availability of all three types of services were stronger working relationships with other organizations and the adoption of a standardized substance abuse screening tool by correctional agencies. For high- and medium-intensity programs, the capacity increased when an organizational learning strategy was used by administrators and the organization used a substance abuse screening tool. Implications for advancing treatment practices in correctional settings are discussed, including further work to test theories on how to better understand access to intensive treatment services. This study presents the first phase of understanding capacity-related issues regarding treatment programs offered in correctional settings.
SORL1 variants and risk of late-onset Alzheimer's disease.
Li, Yonghong; Rowland, Charles; Catanese, Joseph; Morris, John; Lovestone, Simon; O'Donovan, Michael C; Goate, Alison; Owen, Michael; Williams, Julie; Grupe, Andrew
2008-02-01
A recent study reported significant association of late-onset Alzheimer's disease (LOAD) with multiple single nucleotide polymorphisms (SNPs) and haplotypes in SORL1, a neuronal sortilin-related receptor protein known to be involved in the trafficking and processing of amyloid precursor protein. Here we attempted to validate this finding in three large, well characterized case-control series. Approximately 2000 samples from the three series were individually genotyped for 12 SNPs, including the 10 reported significant SNPs and 2 that constitute the reported significant haplotypes. A total of 25 allelic and haplotypic association tests were performed. One SNP rs2070045 was marginally replicated in the three sample sets combined (nominal P=0.035); however, this result does not remain significant when accounting for multiple comparisons. Further validation in other sample sets will be required to assess the true effects of SORL1 variants in LOAD.
Dunham, Jason B.; Chelgren, Nathan D.; Heck, Michael P.; Clark, Steven M.
2013-01-01
We evaluated the probability of detecting larval lampreys using different methods of backpack electrofishing in wadeable streams in the U.S. Pacific Northwest. Our primary objective was to compare capture of lampreys using electrofishing with standard settings for salmon and trout to settings specifically adapted for capture of lampreys. Field work consisted of removal sampling by means of backpack electrofishing in 19 sites in streams representing a broad range of conditions in the region. Captures of lampreys at these sites were analyzed with a modified removal-sampling model and Bayesian estimation to measure the relative odds of capture using the lamprey-specific settings compared with the standard salmonid settings. We found that the odds of capture were 2.66 (95% credible interval, 0.87–78.18) times greater for the lamprey-specific settings relative to standard salmonid settings. When estimates of capture probability were applied to estimating the probabilities of detection, we found high (>0.80) detectability when the actual number of lampreys in a site was greater than 10 individuals and effort was at least two passes of electrofishing, regardless of the settings used. Further work is needed to evaluate key assumptions in our approach, including the evaluation of individual-specific capture probabilities and population closure. For now our results suggest comparable results are possible for detection of lampreys by using backpack electrofishing with salmonid- or lamprey-specific settings.
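Under the simplest closed-population assumptions, detection probability follows from the per-pass capture probability alone: one individual evades k passes with probability (1−p)^k, so at least one of N individuals is captured with probability 1 − (1−p)^(kN). This sketch uses an assumed per-pass capture probability (not the study's estimate):

```python
def detection_probability(n_individuals, capture_p, passes):
    """P(at least one capture), assuming each individual is captured
    independently with probability capture_p on each pass."""
    miss_one = (1 - capture_p) ** passes  # one individual evades all passes
    return 1 - miss_one ** n_individuals

# e.g. 10 lampreys, assumed per-pass capture probability 0.1, two passes
print(round(detection_probability(10, 0.1, 2), 3))  # prints 0.878
```

With this assumed p, the numbers land near the reported pattern: detectability above 0.80 once ten or more individuals are present and at least two passes are made, and it rises quickly with abundance.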
Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng
2015-03-01
Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as a research object, thirteen sample sets from different regions were arranged surrounding the road network, the spatial configuration of which was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented. The two models were then compared. The results revealed that the proposed approach was practicable for optimizing the soil sampling scheme. The optimal configuration captured soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration to study the soil attribute distribution by referring to the spatial layout of road network, historical samples, and digital elevation data, which provided an effective means as well as a theoretical basis for determining the sampling configuration and displaying spatial distribution of soil organic matter with low cost and high efficiency.
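The optimization step can be sketched with a generic simulated-annealing loop. Everything below is illustrative: the candidate sites are random points standing in for locations along the road network, and the objective (maximize the minimum pairwise distance among 13 chosen sites) is a simple stand-in for the study's actual spatial criterion:

```python
import math
import random

random.seed(1)

# Candidate sampling locations (invented coordinates standing in for
# points along the road network).
candidates = [(random.random(), random.random()) for _ in range(60)]

def spread(sites):
    """Objective: minimum pairwise distance (larger = better coverage)."""
    return min(math.dist(a, b) for i, a in enumerate(sites)
               for b in sites[i + 1:])

def anneal(candidates, k, steps=2000, t0=0.1):
    """Simulated annealing: perturb one site at a time, always accept
    improvements, and accept worsenings with a probability that decays
    as the temperature cools."""
    current = random.sample(candidates, k)
    best = list(current)
    for step in range(steps):
        t = t0 * (1 - step / steps)  # linear cooling schedule
        proposal = list(current)
        proposal[random.randrange(k)] = random.choice(candidates)
        if len(set(proposal)) < k:
            continue  # skip proposals with duplicate sites
        delta = spread(proposal) - spread(current)
        if delta > 0 or random.random() < math.exp(delta / max(t, 1e-9)):
            current = proposal
            if spread(current) > spread(best):
                best = list(current)
    return best

best = anneal(candidates, k=13)
print(round(spread(best), 3))
```

The same loop applies to any configuration objective; only `spread` would change for a criterion based on, say, prediction variance of the regression model.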
Onukwugha, Eberechukwu; Qi, Ran; Jayasekera, Jinani; Zhou, Shujia
2016-02-01
Prognostic classification approaches are commonly used in clinical practice to predict health outcomes. However, there has been limited focus on use of the general approach for predicting costs. We applied a grouping algorithm designed for large-scale data sets and multiple prognostic factors to investigate whether it improves cost prediction among older Medicare beneficiaries diagnosed with prostate cancer. We analysed the linked Surveillance, Epidemiology and End Results (SEER)-Medicare data, which included data from 2000 through 2009 for men diagnosed with incident prostate cancer between 2000 and 2007. We split the survival data into two data sets (D0 and D1) of equal size. We trained the classifier of the Grouping Algorithm for Cancer Data (GACD) on D0 and tested it on D1. The prognostic factors included cancer stage, age, race and performance status proxies. We calculated the average difference between observed D1 costs and predicted D1 costs at 5 years post-diagnosis with and without the GACD. The sample included 110,843 men with prostate cancer. The median age of the sample was 74 years, and 10% were African American. The average difference (mean absolute error [MAE]) per person between the real and predicted total 5-year cost was US$41,525 (MAE US$41,790; 95% confidence interval [CI] US$41,421-42,158) with the GACD and US$43,113 (MAE US$43,639; 95% CI US$43,062-44,217) without the GACD. The 5-year cost prediction without grouping resulted in a sample overestimate of US$79,544,508. The grouping algorithm developed for complex, large-scale data improves the prediction of 5-year costs. The prediction accuracy could be improved by utilization of a richer set of prognostic factors and refinement of categorical specifications.
Improved high-dimensional prediction with Random Forests by the use of co-data.
Te Beest, Dennis E; Mes, Steven W; Wilting, Saskia M; Brakenhoff, Ruud H; van de Wiel, Mark A
2017-12-28
Prediction in high-dimensional settings is difficult due to the large number of variables relative to the sample size. We demonstrate how auxiliary 'co-data' can be used to improve the performance of a Random Forest in such a setting. Co-data are incorporated in the Random Forest by replacing the uniform sampling probabilities used to draw candidate variables with co-data-moderated sampling probabilities. Co-data are defined here as any type of information available on the variables of the primary data that does not use its response labels. Inspired by empirical Bayes, these moderated sampling probabilities are learned from the data at hand. We demonstrate the co-data moderated Random Forest (CoRF) with two examples. In the first example we aim to predict the presence of a lymph node metastasis from gene expression data, and demonstrate how a set of external p-values, a gene signature, and the correlation between gene expression and DNA copy number can improve predictive performance. In the second example we demonstrate how the prediction of cervical (pre-)cancer from methylation data can be improved by including the location of each probe relative to known CpG islands, the number of CpG sites targeted by a probe, and a set of p-values from a related study. The proposed method is able to utilize auxiliary co-data to improve the performance of a Random Forest.
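The core mechanism described above, replacing the uniform draw of candidate split variables with a co-data-weighted draw, can be sketched as below. The weight values and the helper name `draw_candidate_variables` are invented for illustration; in the actual CoRF method the sampling probabilities are learned from the co-data in an empirical Bayes fashion rather than fixed by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_candidate_variables(n_features, mtry, codata_weights=None):
    """Draw mtry candidate split variables without replacement.

    With codata_weights=None this is the standard uniform Random Forest
    draw; otherwise features are sampled proportionally to the weights,
    mimicking co-data-moderated sampling.
    """
    if codata_weights is None:
        probs = np.full(n_features, 1.0 / n_features)
    else:
        w = np.asarray(codata_weights, dtype=float)
        probs = w / w.sum()
    return rng.choice(n_features, size=mtry, replace=False, p=probs)

# Illustrative co-data: features 0-4 flagged as promising (e.g. small
# external p-values) get 10x the sampling weight of the remaining features.
weights = np.where(np.arange(100) < 5, 10.0, 1.0)
candidates = draw_candidate_variables(100, mtry=10, codata_weights=weights)
print(sorted(candidates))
```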
NASA Technical Reports Server (NTRS)
Reph, M. G.
1984-01-01
This document provides a summary of information available in the NASA Climate Data Catalog. The catalog provides scientific users with technical information about selected climate parameter data sets and the associated sensor measurements from which they are derived. It is an integral part of the Pilot Climate Data System (PCDS), an interactive, scientific management system for locating, obtaining, manipulating, and displaying climate research data. The catalog is maintained in a machine readable representation which can easily be accessed via the PCDS. The purposes, format and content of the catalog are discussed. Summarized information is provided about each of the data sets currently described in the catalog. Sample detailed descriptions are included for individual data sets or families of related data sets.
Juul, Malene; Bertl, Johanna; Guo, Qianyun; Nielsen, Morten Muhlig; Świtnicki, Michał; Hornshøj, Henrik; Madsen, Tobias; Hobolth, Asger; Pedersen, Jakob Skou
2017-01-01
Non-coding mutations may drive cancer development. Statistical detection of non-coding driver regions is challenged by a varying mutation rate and uncertainty about functional impact. Here, we develop a statistically founded non-coding driver-detection method, ncdDetect, which includes sample-specific mutational signatures, long-range mutation rate variation, and position-specific impact measures. Using ncdDetect, we screened non-coding regulatory regions of protein-coding genes across a pan-cancer set of whole genomes (n = 505), which ranked known drivers at the top and identified new candidates. For individual candidates, the presence of non-coding mutations is associated with altered expression or decreased patient survival across an independent pan-cancer sample set (n = 5454). This includes an antigen-presenting gene (CD1A), where 5'UTR mutations correlate significantly with decreased survival in melanoma. Additionally, mutations in a base-excision-repair gene (SMUG1) correlate with a C-to-T mutational signature. Overall, we find that a rich model of mutational heterogeneity facilitates non-coding driver identification, and integrative analysis points to candidates of potential clinical relevance. DOI: http://dx.doi.org/10.7554/eLife.21778.001 PMID:28362259
Provenance information as a tool for addressing engineered nanoparticle reproducibility challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baer, Donald R.; Munusamy, Prabhakaran; Thrall, Brian D.
Nanoparticles of various types are of increasing research and technological importance in biological and other applications. Difficulties in the production and delivery of nanoparticles with consistent and well-defined properties appear in many forms and have a variety of causes. Among several issues are those associated with incomplete information about the history of particles involved in research studies, including the synthesis method, the sample history after synthesis (the time and nature of storage), and the detailed nature of any sample processing or modification. In addition, the tendency of particles to change with time or environmental conditions suggests that the time between analysis and application is important and that some type of consistency check or verification process can be important. The essential history of a set of particles can be identified as provenance information, which tells the origin or source of a batch of nano-objects along with information related to handling and any changes that may have taken place since it was originated. A record of sample provenance information for a set of particles can play a useful role in identifying some of the sources of particle variability, decreasing its extent, and addressing the lack of reproducibility reported by many researchers.
Microbiology of Urinary Tract Infections in Gaborone, Botswana
Renuart, Andrew J.; Goldfarb, David M.; Mokomane, Margaret; Tawanana, Ephraim O.; Narasimhamurthy, Mohan; Steenhoff, Andrew P.; Silverman, Jonathan A.
2013-01-01
Objective: The microbiology and epidemiology of UTI pathogens are largely unknown in Botswana, a high-prevalence HIV setting. Using laboratory data from the largest referral hospital and a private hospital, we describe the major pathogens causing UTI and their antimicrobial resistance patterns. Methods: This retrospective study examined antimicrobial susceptibility data for urine samples collected at Princess Marina Hospital (PMH), Bokamoso Private Hospital (BPH), or one of their affiliated outpatient clinics. A urine sample was included in our dataset if it demonstrated pure growth of a single organism and accompanying antimicrobial susceptibility and subject demographic data were available. Results: A total of 744 samples were included. Greater than 10% resistance was observed for amoxicillin, co-trimoxazole, amoxicillin-clavulanate, and ciprofloxacin. Resistance of E. coli isolates to ampicillin and co-trimoxazole was greater than 60% in all settings. HIV status did not significantly impact the microbiology of UTIs, but did impact antimicrobial resistance to co-trimoxazole. Conclusions: The data suggest that antimicrobial resistance has already emerged to most oral antibiotics, making empiric management of outpatient UTIs challenging. Ampicillin, co-trimoxazole, and ciprofloxacin should not be used as empiric treatment for UTI in this context. Nitrofurantoin could be used for simple cystitis; aminoglycosides for uncomplicated UTI in inpatients. PMID:23469239
Microbiology of urinary tract infections in Gaborone, Botswana.
Renuart, Andrew J; Goldfarb, David M; Mokomane, Margaret; Tawanana, Ephraim O; Narasimhamurthy, Mohan; Steenhoff, Andrew P; Silverman, Jonathan A
2013-01-01
The microbiology and epidemiology of UTI pathogens are largely unknown in Botswana, a high prevalence HIV setting. Using laboratory data from the largest referral hospital and a private hospital, we describe the major pathogens causing UTI and their antimicrobial resistance patterns. This retrospective study examined antimicrobial susceptibility data for urine samples collected at Princess Marina Hospital (PMH), Bokamoso Private Hospital (BPH), or one of their affiliated outpatient clinics. A urine sample was included in our dataset if it demonstrated pure growth of a single organism and accompanying antimicrobial susceptibility and subject demographic data were available. A total of 744 samples were included. Greater than 10% resistance was observed for amoxicillin, co-trimoxazole, amoxicillin-clavulanate, and ciprofloxacin. Resistance of E. coli isolates to ampicillin and co-trimoxazole was greater than 60% in all settings. HIV status did not significantly impact the microbiology of UTIs, but did impact antimicrobial resistance to co-trimoxazole. Data suggests that antimicrobial resistance has already emerged to most oral antibiotics, making empiric management of outpatient UTIs challenging. Ampicillin, co-trimoxazole, and ciprofloxacin should not be used as empiric treatment for UTI in this context. Nitrofurantoin could be used for simple cystitis; aminoglycosides for uncomplicated UTI in inpatients.
Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review
Morris, Tom; Gray, Laura
2017-01-01
Objectives: To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRTs) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting: Any, not limited to healthcare settings. Participants: Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures: The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results: Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of what had been assumed. Conclusions: Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration, and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637
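The primary outcome measure above, the coefficient of variation in cluster size, is simply the standard deviation of the cluster sizes divided by their mean; a minimal sketch with hypothetical cluster sizes (not values from any reviewed trial):

```python
import statistics

def cluster_size_cv(sizes):
    # CV = SD / mean. The sample standard deviation is used here; some
    # trial reports may instead use the population SD.
    return statistics.stdev(sizes) / statistics.mean(sizes)

# Hypothetical SW-CRT with eight clusters of unequal size.
sizes = [40, 55, 60, 75, 90, 120, 150, 210]
print(round(cluster_size_cv(sizes), 2))  # 0.57
```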
Cho, Pyo Yun; Na, Byoung-Kuk; Mi Choi, Kyung; Kim, Jin Su; Cho, Shin-Hyeong; Lee, Won-Ja; Lim, Sung-Bin; Cha, Seok Ho; Park, Yun-Kyu; Pak, Jhang Ho; Lee, Hyeong-Woo; Hong, Sung-Jong; Kim, Tong-Soo
2013-01-01
Microscopic examination of eggs of parasitic helminths in stool samples has been the most widely used classical diagnostic method for these infections, but the small size and low numbers of eggs in stool samples often hamper diagnosis of helminthic infections by classical microscopic examination. Moreover, it is also difficult to differentiate parasite eggs by the classical method if they have similar morphological characteristics. In this study, we developed a rapid and sensitive polymerase chain reaction (PCR)-based molecular diagnostic method for detection of Clonorchis sinensis eggs in stool samples. Nine primers were designed based on the long-terminal repeat (LTR) of the C. sinensis retrotransposon 1 (CsRn1) gene and paired into seven PCR primer sets. PCR with each primer pair produced specific amplicons for C. sinensis, but not for other trematodes including Metagonimus yokogawai and Paragonimus westermani. In particular, three primer sets were able to detect as few as 10 C. sinensis eggs and could amplify specific amplicons from DNA samples purified from the stool of C. sinensis-infected patients. This PCR method could be useful for diagnosis of C. sinensis infections in human stool samples with a high level of specificity and sensitivity. PMID:23916334
Sala, Andrea; Corach, Daniel
2014-03-01
Argentinean Patagonia is inhabited by people who live principally in urban areas and by small isolated groups of individuals belonging to indigenous aboriginal groups; this territory exhibits the lowest population density in the country. The Mapuche and Tehuelche (Mapudungun linguistic branch) are the only extant Native American groups that inhabit the Argentinean Patagonian provinces of Río Negro and Chubut. Fifteen autosomal STRs, 17 Y-STRs, the full-length mtDNA control region sequence and two sets of Y- and mtDNA-coding region SNPs were analyzed in a set of 434 unrelated individuals. The sample set included two aboriginal groups, a group of individuals whose family names included Native American linguistic roots, and urban samples from the Chubut, Río Negro and Buenos Aires provinces of Argentina. The specific Amerindian Y haplogroup Q1 was found in 87.5% of Mapuche and 58.82% of Tehuelche, while Amerindian mtDNA haplogroups were present in all the aboriginal sample contributors investigated. Admixture analysis performed by means of autosomal and Y-STRs showed the highest degree of admixture in individuals carrying Mapuche surnames, followed by urban populations, and finally by isolated Native American populations, which showed the lowest degree of admixture. The study provided novel genetic information about the Mapuche and Tehuelche people and allowed us to establish a genetic correlation among individuals with Mapudungun surnames that demonstrates not only a linguistic but also a genetic relationship to the isolated aboriginal communities, representing a suitable proxy indicator for assessing genealogical background.
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2015-03-01
Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, organic carbon is measured from a quartz fiber filter that has been exposed to a volume of ambient air and analyzed using thermal methods such as thermal-optical reflectance (TOR). Here, methods are presented that show the feasibility of using Fourier transform infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE, or Teflon) filters to accurately predict TOR OC. This work marks an initial step in proposing a method that can reduce the operating costs of large air quality monitoring networks with an inexpensive, non-destructive analysis technique using routinely collected PTFE filter samples which, in addition to OC concentrations, can concurrently provide information regarding the composition of organic aerosol. This feasibility study suggests that the minimum detection limit and errors (or uncertainty) of FT-IR predictions are on par with TOR OC, such that evaluation of long-term trends and epidemiological studies would not be significantly impacted. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of Protected Visual Environments (IMPROVE) sites collected during 2011. Partial least-squares regression is used to calibrate sample FT-IR absorbance spectra to TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date. The calibration produces precise and accurate TOR OC predictions of the test set samples by FT-IR, as indicated by a high coefficient of determination (R² = 0.96), low bias (0.02 μg m-3; the nominal IMPROVE sample volume is 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision to collocated TOR measurements.
FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC ratio, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; these divisions also lead to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact-correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least-squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
System for measuring radioactivity of labelled biopolymers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gross, V.
1980-07-08
A system is described for measuring radioactivity of labelled biopolymers, comprising: a set of containers adapted for receiving aqueous solutions of biological samples containing biopolymers which are subsequently precipitated in said containers on particles of diatomite in the presence of a coprecipitator, then filtered, dissolved, and mixed with a scintillator; radioactivity measuring means including a detection chamber to which is fed the mixture produced in said set of containers; an electric drive for moving said set of containers in a stepwise manner; means for proportional feeding of said coprecipitator and a suspension of diatomite in an acid solution to said containers which contain the biological sample for forming an acid precipitation of biopolymers; means for the removal of precipitated samples from said containers; precipitated biopolymer filtering means for successively filtering the precipitate, suspending the precipitate, dissolving the biopolymers mixed with said scintillator for feeding of the mixture to said detection chamber; a system of pipelines interconnecting said above-recited means; and said means for measuring radioactivity of labelled biopolymers including a measuring cell arranged in a detection chamber and communicating with said means for filtering precipitated biopolymers through one pipeline of said system of pipelines; a program unit electrically connected to said electric drive, said means for acid precipitation of biopolymers, said means for the removal of precipitated samples from said containers, said filtering means, and said radioactivity measuring device; said program unit adapted to periodically switch on and off the above-recited means and check the sequence of the radioactivity measuring operations; and a control unit for controlling the initiation of the system and for selecting programs.
(Gene sequencing by scanning molecular exciton microscopy)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-01-01
This report details progress made in setting up a laboratory for optical microscopy of genes. The apparatus including a fluorescence microscope, a scanning optical microscope, various spectrometers, and supporting computers is described. Results in developing photon and exciton tips, and in preparing samples are presented. (GHH)
Lange, Berit; Cohn, Jennifer; Roberts, Teri; Camp, Johannes; Chauffour, Jeanne; Gummadi, Nina; Ishizaki, Azumi; Nagarathnam, Anupriya; Tuaillon, Edouard; van de Perre, Philippe; Pichler, Christine; Easterbrook, Philippa; Denkinger, Claudia M
2017-11-01
Dried blood spots (DBS) are a convenient tool to enable diagnostic testing for viral diseases due to transport, handling and logistical advantages over conventional venous blood sampling. A better understanding of the performance of serological testing for hepatitis C (HCV) and hepatitis B virus (HBV) from DBS is important to enable more widespread use of this sampling approach in resource limited settings, and to inform the 2017 World Health Organization (WHO) guidance on testing for HBV/HCV. We conducted two systematic reviews and meta-analyses on the diagnostic accuracy of HCV antibody (HCV-Ab) and HBV surface antigen (HBsAg) from DBS samples compared to venous blood samples. MEDLINE, EMBASE, Global Health and the Cochrane library were searched for studies that assessed diagnostic accuracy with DBS and agreement between DBS and venous sampling. Heterogeneity of results was assessed and where possible a pooled analysis of sensitivity and specificity was performed using a bivariate analysis with maximum likelihood estimate and 95% confidence intervals (95% CI). We conducted a narrative review on the impact of varying storage conditions or limits of detection in subsets of samples. The QUADAS-2 tool was used to assess risk of bias. For the diagnostic accuracy of HBsAg from DBS compared to venous blood, 19 studies were included in a quantitative meta-analysis, and 23 in a narrative review. Pooled sensitivity and specificity were 98% (95% CI 95-99%) and 100% (95% CI 99-100%), respectively. For the diagnostic accuracy of HCV-Ab from DBS, 19 studies were included in a pooled quantitative meta-analysis, and 23 studies were included in a narrative review. Pooled estimates of sensitivity and specificity were 98% (95% CI 95-99%) and 99% (95% CI 98-100%), respectively. Overall quality of studies and heterogeneity were rated as moderate in both systematic reviews. HCV-Ab and HBsAg testing using DBS compared to venous blood sampling was associated with excellent diagnostic accuracy.
However, generalizability is limited as no uniform protocol was applied and most studies did not use fresh samples. Future studies on diagnostic accuracy should include an assessment of the impact of environmental conditions common in low-resource field settings. Manufacturers also need to formally validate their commercial assays for use with DBS.
Comparative analyses of basal rate of metabolism in mammals: data selection does matter.
Genoud, Michel; Isler, Karin; Martin, Robert D
2018-02-01
Basal rate of metabolism (BMR) is a physiological parameter that should be measured under strictly defined experimental conditions. In comparative analyses among mammals, BMR is widely used as an index of the intensity of the metabolic machinery or as a proxy for energy expenditure. Many databases with BMR values for mammals are available, but the criteria used to select metabolic data as BMR estimates have often varied, and the potential effect of this variability has rarely been questioned. We provide a new, expanded BMR database reflecting compliance with standard criteria (resting, postabsorptive state; thermal neutrality; adult, non-reproductive status for females) and examine potential effects of differential selectivity on the results of comparative analyses. The database includes 1739 different entries for 817 species of mammals, compiled from the original sources. It provides information permitting assessment of the validity of each estimate and presents the value closest to a proper BMR for each entry. Using different selection criteria, several alternative data sets were extracted and used in comparative analyses of (i) the scaling of BMR to body mass and (ii) the relationship between brain mass and BMR. It was expected that results would be especially dependent on selection criteria with small sample sizes and with relatively weak relationships. Phylogenetically informed regression (phylogenetic generalized least squares, PGLS) was applied to the alternative data sets for several different clades (Mammalia, Eutheria, Metatheria, or individual orders). For Mammalia, a 'subsampling procedure' was also applied, in which random subsamples of different sample sizes were taken from each original data set and successively analysed. In each case, two data sets with identical sample size and species, but comprising BMR data with different degrees of reliability, were compared.
Selection criteria had minor effects on scaling equations computed for large clades (Mammalia, Eutheria, Metatheria), although less-reliable estimates of BMR were generally about 12-20% larger than more-reliable ones. Larger effects were found with more-limited clades, such as sciuromorph rodents. For the relationship between BMR and brain mass the results of comparative analyses were found to depend strongly on the data set used, especially with more-limited, order-level clades. In fact, with small sample sizes (e.g. <100) results often appeared erratic. Subsampling revealed that sample size has a non-linear effect on the probability of a zero slope for a given relationship. Depending on the species included, results could differ dramatically, especially with small sample sizes. Overall, our findings indicate a need for due diligence when selecting BMR estimates and caution regarding results (even if seemingly significant) with small sample sizes. © 2017 Cambridge Philosophical Society.
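The scaling analyses above fit allometric equations of the form BMR = a·M^b. As a simplified, non-phylogenetic sketch (the study uses PGLS, which additionally models phylogenetic covariance among species), the exponent b can be recovered as the slope of an ordinary least-squares fit on log-log axes; the data below are synthetic, with an assumed exponent of 0.72.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic allometry: BMR = a * mass^b with a = 0.05, b = 0.72,
# and multiplicative (lognormal) noise, over four decades of body mass.
mass = 10 ** rng.uniform(1, 5, size=200)                      # body mass
bmr = 0.05 * mass ** 0.72 * np.exp(0.1 * rng.normal(size=200))

# OLS on the log-log scale recovers the exponent b as the slope.
slope, intercept = np.polyfit(np.log10(mass), np.log10(bmr), 1)
print(round(slope, 2))
```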
Definition of compassion-evoking images in a Mexican sample.
Mercadillo, Roberto E; Barrios, Fernando A; Díaz, José Luis
2007-10-01
To assemble a calibrated set of compassion-eliciting visual stimuli, 60 clinically healthy Mexican volunteers (36 women, 24 men; M age = 27.5 yr., SD = 2.4) assessed 84 pictures selected from the International Affective Picture System catalogue using the dimensions of Valence, Arousal, and Dominance included in the Self-assessment Manikin scale and an additional dimension of Compassion. Pictures showing suffering in social contexts and expressions of sadness elicited similar responses of compassion. The highest compassion response was reported for pictures showing illness and pain. Men and women differed in the intensity but not the quality of the compassionate responses. Compassion included attributes of negative emotions such as displeasure. The quality of the emotional response was not different from that previously reported for samples in the U.S.A., Spain, and Brazil. A set of 28 pictures was selected as high-compassion-evoking images and 28 as null-compassion controls suitable for studies designed to ascertain the neural substrates of this moral emotion.
Montelius, Kerstin; Karlsson, Andreas O; Holmlund, Gunilla
2008-06-01
The modern Swedish population is a mixture of people originating from different parts of the world. This is also true of the clients participating in the paternity cases investigated at the department. Calculations based only on a Swedish frequency database could give overestimated figures for the probability and power of exclusion in cases including clients with a genetic background other than Swedish. Here, we describe allele frequencies for the markers in the Identifiler kit. We compared three sets of population samples, Swedish, European and non-European, to investigate how these three groups differ. All three population sets were also compared with data reported from other European and non-European populations. Swedish allele frequencies for the 15 autosomal STRs included in the Identifiler kit were obtained from unrelated blood donors with Swedish names. The European and non-European frequencies were based on DNA profiles of alleged fathers from our paternity cases in 2005 and 2006.
A meta-data based method for DNA microarray imputation.
Jörnsten, Rebecka; Ouyang, Ming; Wang, Hui-Yu
2007-03-29
DNA microarray experiments are conducted in logical sets, such as time course profiling after a treatment is applied to the samples, or comparisons of the samples under two or more conditions. Due to cost and design constraints of spotted cDNA microarray experiments, each logical set commonly includes only a small number of replicates per condition. Despite the vast improvement of the microarray technology in recent years, missing values are prevalent. Intuitively, imputation of missing values is best done using many replicates within the same logical set. In practice, there are few replicates and thus reliable imputation within logical sets is difficult. However, it is in the case of few replicates that the presence of missing values, and how they are imputed, can have the most profound impact on the outcome of downstream analyses (e.g. significance analysis and clustering). This study explores the feasibility of imputation across logical sets, using the vast amount of publicly available microarray data to improve imputation reliability in the small sample size setting. We download all cDNA microarray data of Saccharomyces cerevisiae, Arabidopsis thaliana, and Caenorhabditis elegans from the Stanford Microarray Database. Through cross-validation and simulation, we find that, for all three species, our proposed imputation using data from public databases is far superior to imputation within a logical set, sometimes to an astonishing degree. Furthermore, the imputation root mean square error for significant genes is generally a lot less than that of non-significant ones. Since downstream analysis of significant genes, such as clustering and network analysis, can be very sensitive to small perturbations of estimated gene effects, it is highly recommended that researchers apply reliable data imputation prior to further analysis. Our method can also be applied to cDNA microarray experiments from other species, provided good reference data are available.
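As an illustration of measuring imputation accuracy on known entries, the sketch below masks values in a synthetic low-rank "expression" matrix and scores k-nearest-neighbour imputation by root mean square error. scikit-learn's KNNImputer stands in for the paper's meta-data-based method (it is not that method), and all data are simulated.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(7)

# Synthetic "expression" matrix: 200 genes x 20 arrays with low-rank
# (correlated) structure, as is typical of microarray data.
genes = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 20))
mask = rng.random(genes.shape) < 0.05          # ~5% missing at random
observed = genes.copy()
observed[mask] = np.nan

# Impute each missing value from the 5 most similar genes (rows).
imputed = KNNImputer(n_neighbors=5).fit_transform(observed)
rmse = float(np.sqrt(np.mean((imputed[mask] - genes[mask]) ** 2)))
print(round(rmse, 3))
```

Because the rows share low-rank structure, the KNN estimate should beat a naive column-mean fill; a cross-validation of this kind is how imputation methods are typically compared.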
Ahalt, Cyrus; Binswanger, Ingrid A; Steinman, Michael; Tulsky, Jacqueline; Williams, Brie A
2012-02-01
Incarceration is associated with poor health and high costs. Given the dramatic growth in the criminal justice system's population and associated expenses, inclusion of questions related to incarceration in national health data sets could provide essential data to researchers, clinicians and policy-makers. To evaluate a representative sample of publicly available national health data sets for their ability to be used to study the health of currently or formerly incarcerated persons and to identify opportunities to improve criminal justice questions in health data sets. Design & approach: We reviewed the 36 data sets from the Society of General Internal Medicine Dataset Compendium related to individual health. Through content analysis using incarceration-related keywords, we identified data sets that could be used to study currently or formerly incarcerated persons, and we identified opportunities to improve the availability of relevant data. While 12 (33%) data sets returned keyword matches, none could be used to study incarcerated persons. Three (8%) could be used to study the health of formerly incarcerated individuals, but only one data set included multiple questions such as length of incarceration and age at incarceration. Missed opportunities included: (1) data sets that included current prisoners but did not record their status (10, 28%); (2) data sets that asked questions related to incarceration but did not specifically record a subject's status as formerly incarcerated (8, 22%); and (3) longitudinal studies that dropped and/or failed to record persons who became incarcerated during the study (8, 22%). Few health data sets can be used to evaluate the association between incarceration and health.
Three types of changes to existing national health data sets could substantially expand the available data: recording incarceration status for study participants who are incarcerated; recording subjects' history of incarceration when these data are already being collected; and expanding incarceration-related questions in studies that already record incarceration history.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gates, B. C.; Olson, H. H.; Schuit, G. C. A.
1983-08-22
A new method of structural analysis is applied to a group of hydroliquefied coal samples. The method uses elemental analysis and NMR data to estimate the concentrations of functional groups in the samples. The samples include oil and asphaltene fractions obtained in a series of hydroliquefaction experiments, and a set of 9 fractions separated from a coal-derived oil. The structural characterization of these samples demonstrates that estimates of functional group concentrations can be used to provide detailed structural profiles of complex mixtures and to obtain limited information about reaction pathways. 11 references, 1 figure, 7 tables.
Miralles, Pablo; Chisvert, Alberto; Salvador, Amparo
2015-01-01
An analytical method for the simultaneous determination of hydroxytyrosol and tyrosol in different types of olive extract raw materials and cosmetic cream samples has been developed. The determination was performed by liquid chromatography with UV spectrophotometric detection. Different chromatographic parameters, such as mobile phase pH and composition and oven temperature, as well as different sample preparation variables, were studied. The best chromatographic separation was obtained under the following conditions: C18 column set at 35°C and isocratic elution with a mixture of ethanol and 1% acetic acid solution at pH 5 (5:95, v/v) as mobile phase, pumped at 1 mL min(-1). The detection wavelength was set at 280 nm and the total run time required for the chromatographic analysis was 10 min, except for cosmetic cream samples, where a 20 min run time was required (including a cleaning step). The method was satisfactorily applied to 23 samples including solid, water-soluble and fat-soluble olive extracts and cosmetic cream samples containing hydroxytyrosol and tyrosol. Good recoveries (95-107%) and repeatability (1.1-3.6%) were obtained, as well as limits of detection below the μg mL(-1) level. These good analytical features, as well as its environmentally-friendly characteristics, make the presented method suitable for both the control of the whole manufacturing process of raw materials containing the target analytes and the quality control of the finished cosmetic products. Copyright © 2014 Elsevier B.V. All rights reserved.
Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.
Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E
2014-02-28
The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. By accounting for any complex correlation structure between the high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful for generalizing our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of the number of variables to the sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and the minimum set to determine power for a study of the metabolic consequences of vitamin B6 deficiency illustrates the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
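The setting described above (p variables possibly exceeding the per-group sample size, arbitrary correlation) can be illustrated with a Monte Carlo power calculation for an overall two-group test. The test statistic and permutation null below are generic stand-ins, not the paper's analytic 'univariate' approach to repeated measures; the exchangeable correlation structure is an assumption chosen for the sketch.

```python
import numpy as np

def mc_power(n_per_group, p, effect, rho=0.3, alpha=0.05,
             n_sim=200, n_perm=199, seed=1):
    """Monte Carlo power for an overall two-group comparison on p correlated
    outcomes, using the mean squared per-variable t-statistic with a
    permutation null. Works even when p > n_per_group."""
    rng = np.random.default_rng(seed)
    # exchangeable correlation between the p outcomes
    cov = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)
    L = np.linalg.cholesky(cov)

    def stat(x, y):
        d = x.mean(0) - y.mean(0)
        s = np.sqrt(x.var(0, ddof=1) / len(x) + y.var(0, ddof=1) / len(y))
        return np.mean((d / s) ** 2)

    hits = 0
    for _ in range(n_sim):
        x = rng.normal(size=(n_per_group, p)) @ L.T + effect  # shifted group
        y = rng.normal(size=(n_per_group, p)) @ L.T           # control group
        obs = stat(x, y)
        z = np.vstack([x, y])
        null = []
        for _ in range(n_perm):
            idx = rng.permutation(2 * n_per_group)
            null.append(stat(z[idx[:n_per_group]], z[idx[n_per_group:]]))
        pval = (1 + sum(t >= obs for t in null)) / (1 + n_perm)
        hits += pval <= alpha
    return hits / n_sim
```

The analytic methods in the paper replace this double simulation loop with closed-form non-null distributions, which is what makes routine study planning practical.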
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grana, Dario; Verma, Sumit; Pafeng, Josiane
We present a reservoir geophysics study, including rock physics modeling and seismic inversion, of a carbon dioxide sequestration site in Southwestern Wyoming, namely the Rock Springs Uplift, and build a petrophysical model for the potential injection reservoirs for carbon dioxide sequestration. Our objectives include the facies classification and the estimation of the spatial model of porosity and permeability for two sequestration targets of interest, the Madison Limestone and the Weber Sandstone. The available dataset includes a complete set of well logs at the location of the borehole available in the area, a set of 110 core samples, and a seismic survey acquired in the area around the well. The proposed study includes a formation evaluation analysis and facies classification at the well location, the calibration of a rock physics model to link petrophysical properties and elastic attributes using well log data and core samples, the elastic inversion of the pre-stack seismic data, and the estimation of the reservoir model of facies, porosity and permeability conditioned by seismic inverted elastic attributes and well log data. In particular, the rock physics relations are facies-dependent and include granular media equations for clean and shaley sandstone, and inclusion models for the dolomitized limestone. The permeability model has been computed by applying a facies-dependent porosity-permeability relation calibrated using core sample measurements. Finally, the study shows that both formations show good storage capabilities. The Madison Limestone includes a homogeneous layer of high-porosity high-permeability dolomite; the Weber Sandstone is characterized by a lower average porosity but the layer is thicker than the Madison Limestone.
Grana, Dario; Verma, Sumit; Pafeng, Josiane; ...
2017-06-20
We present a reservoir geophysics study, including rock physics modeling and seismic inversion, of a carbon dioxide sequestration site in Southwestern Wyoming, namely the Rock Springs Uplift, and build a petrophysical model for the potential injection reservoirs for carbon dioxide sequestration. Our objectives include the facies classification and the estimation of the spatial model of porosity and permeability for two sequestration targets of interest, the Madison Limestone and the Weber Sandstone. The available dataset includes a complete set of well logs at the location of the borehole available in the area, a set of 110 core samples, and a seismic survey acquired in the area around the well. The proposed study includes a formation evaluation analysis and facies classification at the well location, the calibration of a rock physics model to link petrophysical properties and elastic attributes using well log data and core samples, the elastic inversion of the pre-stack seismic data, and the estimation of the reservoir model of facies, porosity and permeability conditioned by seismic inverted elastic attributes and well log data. In particular, the rock physics relations are facies-dependent and include granular media equations for clean and shaley sandstone, and inclusion models for the dolomitized limestone. The permeability model has been computed by applying a facies-dependent porosity-permeability relation calibrated using core sample measurements. Finally, the study shows that both formations show good storage capabilities. The Madison Limestone includes a homogeneous layer of high-porosity high-permeability dolomite; the Weber Sandstone is characterized by a lower average porosity but the layer is thicker than the Madison Limestone.
Injury severity data for front and second row passengers in frontal crashes.
Atkinson, Theresa; Gawarecki, Leszek; Tavakoli, Massoud
2016-06-01
The data contained here were obtained from the National Highway Transportation Safety Administration's National Automotive Sampling System - Crashworthiness Data System (NASS-CDS) for the years 2008-2014. This publicly available data set monitors motor vehicle crashes in the United States, using a stratified random sample frame, resulting in information on approximately 5000 crashes each year that can be utilized to create national estimates for crashes. The NASS-CDS data sets document vehicle, crash, and occupant factors. These data can be utilized to examine public health, law enforcement, roadway planning, and vehicle design issues. The data provided in this brief are a subset of crash events and occupants. The crashes provided are exclusively frontal crashes. Within these crashes, only restrained occupants who were seated in the right front seat position or the second row outboard seat positions were included. The front row and second row data sets were utilized to construct occupant pairs for crashes where both a right front seat occupant and a second row occupant were available. Both unpaired and paired data sets are provided in this brief.
Injury severity data for front and second row passengers in frontal crashes
Atkinson, Theresa; Gawarecki, Leszek; Tavakoli, Massoud
2016-01-01
The data contained here were obtained from the National Highway Transportation Safety Administration's National Automotive Sampling System – Crashworthiness Data System (NASS-CDS) for the years 2008–2014. This publicly available data set monitors motor vehicle crashes in the United States, using a stratified random sample frame, resulting in information on approximately 5000 crashes each year that can be utilized to create national estimates for crashes. The NASS-CDS data sets document vehicle, crash, and occupant factors. These data can be utilized to examine public health, law enforcement, roadway planning, and vehicle design issues. The data provided in this brief are a subset of crash events and occupants. The crashes provided are exclusively frontal crashes. Within these crashes, only restrained occupants who were seated in the right front seat position or the second row outboard seat positions were included. The front row and second row data sets were utilized to construct occupant pairs for crashes where both a right front seat occupant and a second row occupant were available. Both unpaired and paired data sets are provided in this brief. PMID:27077084
Chemical Analyses of Pre-Holocene Rocks from Medicine Lake Volcano and Vicinity, Northern California
Donnelly-Nolan, Julie M.
2008-01-01
Chemical analyses are presented in an accompanying table (Table 1) for more than 600 pre-Holocene rocks collected at and near Medicine Lake Volcano, northern California. The data include major-element X-ray fluorescence (XRF) analyses for all of the rocks plus XRF trace element data for most samples, and instrumental neutron activation analysis (INAA) trace element data for many samples. In addition, a limited number of analyses of Na2O and K2O by flame photometry (FP) are included, as well as some wet chemical analyses of FeO, H2O+/-, and CO2. Latitude and longitude location information is provided for all samples. This data set is intended to accompany the geologic map of Medicine Lake Volcano (Donnelly-Nolan, in press); map unit designations are given for each sample collected from the map area.
Howson, E L A; Armson, B; Madi, M; Kasanga, C J; Kandusi, S; Sallu, R; Chepkwony, E; Siddle, A; Martin, P; Wood, J; Mioulet, V; King, D P; Lembo, T; Cleaveland, S; Fowler, V L
2017-06-01
Accurate, timely diagnosis is essential for the control, monitoring and eradication of foot-and-mouth disease (FMD). Clinical samples from suspect cases are normally tested at reference laboratories. However, transport of samples to these centralized facilities can be a lengthy process that can impose delays on critical decision making. These concerns have motivated work to evaluate simple-to-use technologies, including molecular-based diagnostic platforms, that can be deployed closer to suspect cases of FMD. In this context, FMD virus (FMDV)-specific reverse transcription loop-mediated isothermal amplification (RT-LAMP) and real-time RT-PCR (rRT-PCR) assays, compatible with simple sample preparation methods and in situ visualization, have been developed which share equivalent analytical sensitivity with laboratory-based rRT-PCR. However, the lack of robust 'ready-to-use kits' that utilize stabilized reagents limits the deployment of these tests into field settings. To address this gap, this study describes the performance of lyophilized rRT-PCR and RT-LAMP assays to detect FMDV. Both of these assays are compatible with the use of fluorescence to monitor amplification in real-time, and for the RT-LAMP assays end point detection could also be achieved using molecular lateral flow devices. Lyophilization of reagents did not adversely affect the performance of the assays. Importantly, when these assays were deployed into challenging laboratory and field settings within East Africa they proved to be reliable in their ability to detect FMDV in a range of clinical samples from acutely infected as well as convalescent cattle. These data support the use of highly sensitive molecular assays into field settings for simple and rapid detection of FMDV. © 2015 The Authors. Transboundary and Emerging Diseases Published by Blackwell Verlag GmbH.
Pieces of Other Worlds - Extraterrestrial Samples for Education and Public Outreach
NASA Technical Reports Server (NTRS)
Allen, Carlton C.
2010-01-01
During the Year of the Solar System spacecraft from NASA and our international partners will encounter two comets; orbit the asteroid Vesta, continue to explore Mars with rovers, and launch robotic explorers to the Moon and Mars. We have pieces of all these worlds in our laboratories, and their continued study provides incredibly valuable "ground truth" to complement space exploration missions. Extensive information about these unique materials, as well as actual lunar samples and meteorites, are available for display and education. The Johnson Space Center (JSC) has the unique responsibility to curate NASA's extraterrestrial samples from past and future missions. Curation includes documentation, preservation, preparation, and distribution of samples for research, education, and public outreach. At the current time JSC curates six types of extraterrestrial samples: (1) Moon rocks and soils collected by the Apollo astronauts (2) Meteorites collected on US expeditions to Antarctica (including rocks from the Moon, Mars, and many asteroids including Vesta) (3) "Cosmic dust" (asteroid and comet particles) collected by high-altitude aircraft (4) Solar wind atoms collected by the Genesis spacecraft (5) Comet particles collected by the Stardust spacecraft (6) Interstellar dust particles collected by the Stardust spacecraft These rocks, soils, dust particles, and atoms continue to be studied intensively by scientists around the world. Descriptions of the samples, research results, thousands of photographs, and information on how to request research samples are on the JSC Curation website: http://curator.jsc.nasa.gov/ NASA provides a limited number of Moon rock samples for either short-term or long-term displays at museums, planetariums, expositions, and professional events that are open to the public. The JSC Public Affairs Office handles requests for such display samples. Requestors should apply in writing to Mr. Louis Parker, JSC Exhibits Manager. Mr. 
Parker will advise successful applicants regarding provisions for receipt, display, and return of the samples. All loans will be preceded by a signed loan agreement executed between NASA and the requestor's organization. Email address: louis.a.parker@nasa.gov Sets of twelve thin sections of Apollo lunar samples and sets of twelve thin sections of meteorites are available for short-term loan from JSC Curation. The thin sections are designed for use in college and university courses where petrographic microscopes are available for viewing. Requestors should contact Ms. Mary Luckey, Education Sample Curator. Email address: mary.k.luckey@nasa.gov
NASA Technical Reports Server (NTRS)
1999-01-01
Field Integrated Design and Operations (FIDO) rover is a prototype of the Mars Sample Return rovers that will carry the integrated Athena Science Payload to Mars in 2003 and 2005. The purpose of FIDO is to simulate, using Mars analog settings, the complex surface operations that will be necessary to find, characterize, obtain, cache, and return samples to the ascent vehicles on the landers. This videotape shows tests of the FIDO in the Mojave Desert. These tests include drilling through rock and movement of the rover. Also included in this tape are interviews with Dr. Raymond Arvidson, the test director for FIDO, and Dr. Eric Baumgartner, Robotics Engineer at the Jet Propulsion Laboratory.
A new computer code for discrete fracture network modelling
NASA Astrophysics Data System (ADS)
Xu, Chaoshui; Dowd, Peter
2010-03-01
The authors describe a comprehensive software package for two- and three-dimensional stochastic rock fracture simulation using marked point processes. Fracture locations can be modelled by a Poisson, a non-homogeneous, a cluster or a Cox point process; fracture geometries and properties are modelled by their respective probability distributions. Virtual sampling tools such as plane, window and scanline sampling are included in the software together with a comprehensive set of statistical tools including histogram analysis, probability plots, rose diagrams and hemispherical projections. The paper describes in detail the theoretical basis of the implementation and provides a case study in rock fracture modelling to demonstrate the application of the software.
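The marked point process model and the virtual scanline sampling described above can be sketched in two dimensions: fracture centres follow a homogeneous Poisson process, and orientation and length are the marks. The region, intensity and mark distributions below are illustrative assumptions, not parameters from the paper's case study.

```python
import numpy as np

def simulate_fractures(region=(0, 10, 0, 10), intensity=0.5,
                       mean_len=1.5, seed=7):
    """2-D marked Poisson point process for fractures: centre locations are
    homogeneous Poisson with the given intensity (fractures per unit area);
    orientation (uniform) and length (exponential) are the marks.
    Returns an (n, 4) array of segment endpoints [x1, y1, x2, y2]."""
    rng = np.random.default_rng(seed)
    x0, x1, y0, y1 = region
    area = (x1 - x0) * (y1 - y0)
    n = rng.poisson(intensity * area)
    cx = rng.uniform(x0, x1, n)
    cy = rng.uniform(y0, y1, n)
    theta = rng.uniform(0, np.pi, n)          # orientation mark
    length = rng.exponential(mean_len, n)     # length mark
    dx, dy = 0.5 * length * np.cos(theta), 0.5 * length * np.sin(theta)
    return np.column_stack([cx - dx, cy - dy, cx + dx, cy + dy])

def scanline_intersections(segments, y=5.0):
    """Virtual scanline sampling: count fractures whose vertical extent
    crosses the horizontal scanline at height y."""
    y1, y2 = segments[:, 1], segments[:, 3]
    return int(np.sum((np.minimum(y1, y2) <= y) & (np.maximum(y1, y2) >= y)))
```

Non-homogeneous, cluster or Cox processes, as supported by the software, would replace the uniform centre placement with intensity-driven or parent-offspring sampling.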
Using near infrared spectroscopy to classify soybean oil according to expiration date.
da Costa, Gean Bezerra; Fernandes, David Douglas Sousa; Gomes, Adriano A; de Almeida, Valber Elias; Veras, Germano
2016-04-01
A rapid and non-destructive methodology is proposed for the screening of edible vegetable oils according to conservation state (expiration date), employing near infrared (NIR) spectroscopy and chemometric tools. A total of fifty samples of soybean vegetable oil, of different brands and lots, were used in this study; these included thirty expired and twenty non-expired samples. The oil oxidation was measured by peroxide index. NIR spectra were employed in raw form and preprocessed by offset baseline correction and a Savitzky-Golay derivative procedure, followed by PCA exploratory analysis, which showed that NIR spectra would be suitable for the classification task of soybean oil samples. The classification models were based on SPA-LDA (Linear Discriminant Analysis coupled with the Successive Projections Algorithm) and PLS-DA (Discriminant Analysis by Partial Least Squares). The set of samples (50) was partitioned into two groups of training (35 samples: 15 non-expired and 20 expired) and test samples (15 samples: 5 non-expired and 10 expired) using sample-selection approaches: (i) Kennard-Stone, (ii) Duplex, and (iii) Random, in order to evaluate the robustness of the models. The obtained results for the independent test set (in terms of correct classification rate) were 96% and 98% for SPA-LDA and PLS-DA, respectively, indicating that NIR spectra can be used as an alternative to evaluate the degree of oxidation of soybean oil samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
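Of the three sample-selection approaches named above, Kennard-Stone is the most algorithmic: it starts with the two most distant samples and repeatedly adds the sample farthest from the current selection, giving a training set that spans the spectral space. The sketch below is the standard textbook form of the algorithm, not the authors' specific implementation.

```python
import numpy as np

def kennard_stone(X, n_train):
    """Kennard-Stone sample selection on a samples x features matrix X.
    Returns (train_idx, test_idx): training samples chosen to cover the
    data space, remaining samples assigned to the test set."""
    X = np.asarray(X, dtype=float)
    # pairwise Euclidean distance matrix
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # seed with the two mutually most distant samples
    i, j = np.unravel_index(np.argmax(d), d.shape)
    chosen = [int(i), int(j)]
    remaining = [k for k in range(len(X)) if k not in chosen]
    while len(chosen) < n_train:
        # distance from each remaining sample to its nearest chosen sample
        nearest = d[np.ix_(remaining, chosen)].min(axis=1)
        chosen.append(remaining.pop(int(np.argmax(nearest))))
    return chosen, remaining
```

Because the selection is deterministic given X, comparing it against Duplex and Random splits, as the study does, probes how sensitive the classifiers are to the composition of the training set.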
García-Molina, María Dolores; García-Olmo, Juan; Barro, Francisco
2016-01-01
Scope: The aim of this work was to assess the ability of Near Infrared Spectroscopy (NIRS) to distinguish wheat lines with low gliadin content, obtained by RNA interference (RNAi), from non-transgenic wheat lines. The discriminant analysis was performed using both whole grain and flour. The transgenic sample set included 409 samples for whole grain sorting and 414 samples for flour experiments, while the non-transgenic set consisted of 126 and 156 samples for whole grain and flour, respectively. Methods and Results: Samples were scanned using a Foss-NIR Systems 6500 System II instrument. Discrimination models were developed using the entire spectral range (400–2500 nm) and ranges of 400–780 nm, 800–1098 nm and 1100–2500 nm, followed by analysis by means of partial least squares (PLS). Two external validations were made, using samples from the years 2013 and 2014, and a minimum of 99% of the flour samples and 96% of the whole grain samples were classified correctly. Conclusions: The results demonstrate the ability of NIRS to successfully discriminate between wheat samples with low-gliadin content and wild types. These findings are important for the development and analysis of foodstuffs for celiac disease (CD) patients to achieve better dietary composition and a reduction in disease incidence. PMID:27018786
Patient Experience-based Value Sets: Are They Stable?
Pickard, A Simon; Hung, Yu-Ting; Lin, Fang-Ju; Lee, Todd A
2017-11-01
Although societal preference weights are desirable to inform resource-allocation decision-making, value sets based on patients' experienced health states can be useful for clinical decision-making, but context may matter. To estimate EQ-5D value sets using visual analog scale (VAS) ratings for patients undergoing knee replacement surgery and compare the estimates before and after surgery. We used the Patient Reported Outcome Measures data collected by the UK National Health Service on patients undergoing knee replacement from 2009 to 2012. Generalized least squares regression models were used to derive value sets based on the EQ-5D-3L using a development sample before and after surgery, and model performance was examined using a validation sample. A total of 90,450 preoperative and postoperative valuations were included. For preoperative valuations, the largest decrement in VAS values was associated with the dimension of anxiety/depression, followed by self-care, mobility, usual activities, and pain/discomfort. However, pain/discomfort had a greater impact on VAS value decrement in postoperative valuations. Compared with preoperative health problems, postsurgical health problems were associated with larger value decrements, with significant differences in several levels and dimensions, including level 2 of mobility, level 2/3 of usual activities, level 3 of pain/discomfort, and level 3 of anxiety/depression. Similar results were observed across subgroups stratified by age and sex. Findings suggest patient experience-based value sets are not stable (ie, context such as timing matters). However, the knowledge that lower values are assigned to health states postsurgery compared with presurgery may be useful for the patient-doctor decision-making process.
Maples, Jessica L; Carter, Nathan T; Few, Lauren R; Crego, Cristina; Gore, Whitney L; Samuel, Douglas B; Williamson, Rachel L; Lynam, Donald R; Widiger, Thomas A; Markon, Kristian E; Krueger, Robert F; Miller, Joshua D
2015-12-01
The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) includes an alternative model of personality disorders (PDs) in Section III, consisting in part of a pathological personality trait model. To date, the 220-item Personality Inventory for DSM-5 (PID-5; Krueger, Derringer, Markon, Watson, & Skodol, 2012) is the only extant self-report instrument explicitly developed to measure this pathological trait model. The present study used item response theory-based analyses in a large sample (n = 1,417) to investigate whether a reduced set of 100 items could be identified from the PID-5 that could measure the 25 traits and 5 domains. This reduced set of PID-5 items was then tested in a community sample of adults currently receiving psychological treatment (n = 109). Across a wide range of criterion variables including NEO PI-R domains and facets, DSM-5 Section II PD scores, and externalizing and internalizing outcomes, the correlational profiles of the original and reduced versions of the PID-5 were nearly identical (rICC = .995). These results provide strong support for the hypothesis that an abbreviated set of PID-5 items can be used to reliably, validly, and efficiently assess these personality disorder traits. The ability to assess the DSM-5 Section III traits using only 100 items has important implications in that it suggests these traits could still be measured in settings in which assessment-related resources (e.g., time, compensation) are limited. (c) 2015 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
Jones, C. L.; McFarland, M. J.; Rosenthal, W. D.; Theis, S. W. (Principal Investigator)
1982-01-01
In an effort to investigate aircraft multisensor responses to soil moisture and vegetation in agricultural fields, an intensive ground sampling program was conducted in Guymon, Oklahoma and Dalhart, Texas in conjunction with aircraft data collected for visible/infrared and passive and active microwave systems. Field selections, sampling techniques, data processing, and the aircraft schedule are discussed for both sites. Field notes are included along with final (normalized and corrected) data sets.
Jonker, Marcel F; Attema, Arthur E; Donkers, Bas; Stolk, Elly A; Versteegh, Matthijs M
2017-12-01
Health state valuations of patients and non-patients are not the same, whereas health state values obtained from general population samples are a weighted average of both. The latter constitutes an often-overlooked source of bias. This study investigates the resulting bias and tests for the impact of reference dependency on health state valuations using an efficient discrete choice experiment administered to a Dutch nationally representative sample of 788 respondents. A Bayesian discrete choice experiment design consisting of eight sets of 24 (matched pairwise) choice tasks was developed, with each set providing full identification of the included parameters. Mixed logit models were used to estimate health state preferences with respondents' own health included as an additional predictor. Our results indicate that respondents with impaired health worse than or equal to the health state levels under evaluation have approximately 30% smaller health state decrements. This confirms that reference dependency can be observed in general population samples and affirms the relevance of prospect theory in health state valuations. At the same time, the limited number of respondents with severe health impairments does not appear to bias social tariffs as obtained from general population samples. Copyright © 2016 John Wiley & Sons, Ltd.
40 CFR 1066.410 - Dynamometer test procedure.
Code of Federal Regulations, 2014 CFR
2014-07-01
... drive mode. (For purposes of this paragraph (g), the term four-wheel drive includes other multiple drive... Dynamometer test procedure. (a) Dynamometer testing may consist of multiple drive cycles with both cold-start...-setting part identifies the driving schedules and the associated sample intervals, soak periods, engine...
Environmental Education Activities to Enhance Decision-Making.
ERIC Educational Resources Information Center
Yambert, Paul A.; And Others
This document contains a set of 10 activities that teachers may use with students (ages 10 to adult) to enhance environmental knowledge and environmentally responsible behavior. Sample worksheets are included when applicable. The activities focus on: renewable and nonrenewable resources; recycling; population growth; wildlife; recycling in a…
Lewis, Celine; Clotworthy, Margaret; Hilton, Shona; Magee, Caroline; Robertson, Mark J; Stubbins, Lesley J; Corfield, Julie
2013-01-01
Objective: A mixed methods study exploring the UK general public's willingness to donate human biosamples (HBSs) for biomedical research. Setting: Cross-sectional focus groups followed by an online survey. Participants: Twelve focus groups (81 participants) selectively sampled to reflect a range of demographic groups; 1110 survey responders recruited through a stratified sampling method with quotas set on sex, age, geographical location, socioeconomic group and ethnicity. Main outcome measures: (1) Identify participants' willingness to donate HBSs for biomedical research, (2) explore acceptability towards donating different types of HBSs in various settings and (3) explore preferences regarding use and access to HBSs. Results: 87% of survey participants thought donation of HBSs was important and 75% wanted to be asked to donate in general. Responders who self-reported having some or good knowledge of the medical research process were significantly more likely to want to donate (p<0.001). Reasons why focus group participants saw donation as important included: it was a good way of reciprocating for the medical treatment received; it was an important way of developing drugs and treatments; residual tissue would otherwise go to waste; and they or their family members might benefit. The most controversial types of HBSs to donate included: brain post mortem (29% would donate), eyes post mortem (35%), embryos (44%), spare eggs (48%) and sperm (58%). Regarding the use of samples, there were concerns over animal research (34%), research conducted outside the UK (35%), and research conducted by pharmaceutical companies (56%), although education and discussion were found to alleviate such concerns. Conclusions: There is a high level of public support and willingness to donate HBSs for biomedical research. Underlying concerns exist regarding the use of certain types of HBSs and conditions under which they are used.
Improved education and more controlled forms of consent for sensitive samples may mitigate such concerns. PMID:23929915
Ranked set sampling: cost and optimal set size.
Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying
2002-12-01
McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
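McIntyre's balanced RSS procedure described above is compact enough to sketch directly: in each cycle, draw one set of `set_size` units per rank, judge-rank each set, and actually measure only the r-th ranked unit of the r-th set. The sketch assumes perfect ranking (sets are ranked on the true values), which is the idealized case; the paper's ranking error models relax exactly this assumption.

```python
import numpy as np

def ranked_set_sample_mean(population, set_size, n_cycles, rng=None):
    """One balanced ranked set sampling estimate of the population mean.
    Per cycle, for each rank r, sample `set_size` units without replacement,
    rank them (here perfectly, by the true value), and measure only the
    r-th order statistic. Total measurements = set_size * n_cycles."""
    rng = rng if rng is not None else np.random.default_rng()
    measured = []
    for _ in range(n_cycles):
        for r in range(set_size):
            units = rng.choice(population, size=set_size, replace=False)
            measured.append(np.sort(units)[r])  # keep the r-th ranked unit
    return np.mean(measured)
```

With perfect ranking and the same number of measurements, this estimator has a smaller variance than the simple random sampling mean; the cost models in the paper quantify when the extra ranking effort is worth it and how large the set size should be.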
Wiktelius, Daniel; Ahlinder, Linnea; Larsson, Andreas; Höjer Holmgren, Karin; Norlin, Rikard; Andersson, Per Ola
2018-08-15
Collecting data under field conditions for forensic investigations of chemical warfare agents calls for the use of portable instruments. In this study, a set of aged, crude preparations of sulfur mustard were characterized spectroscopically without any sample preparation using handheld Raman and portable IR instruments. The spectral data was used to construct Random Forest multivariate models for the attribution of test set samples to the synthetic method used for their production. Colored and fluorescent samples were included in the study, which made Raman spectroscopy challenging although fluorescence was diminished by using an excitation wavelength of 1064 nm. The predictive power of models constructed with IR or Raman data alone, as well as with combined data was investigated. Both techniques gave useful data for attribution. Model performance was enhanced when Raman and IR spectra were combined, allowing correct classification of 19/23 (83%) of test set spectra. The results demonstrate that data obtained with spectroscopy instruments amenable for field deployment can be useful in forensic studies of chemical warfare agents. Copyright © 2018 Elsevier B.V. All rights reserved.
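The combination of Raman and IR data described above is a low-level data fusion problem: each block is scaled so neither technique dominates by sheer intensity, then the blocks are concatenated before modelling. The sketch below illustrates that fusion step with synthetic spectra; to stay dependency-free it uses a nearest-centroid classifier as a stand-in for the paper's Random Forest models, and all array shapes are assumptions.

```python
import numpy as np

def fuse_spectra(ir, raman):
    """Low-level data fusion: autoscale each spectral block (column-wise
    mean-centre and unit-variance scale), then concatenate, so IR and
    Raman contribute on a comparable footing."""
    def autoscale(X):
        return (X - X.mean(0)) / (X.std(0) + 1e-12)
    return np.hstack([autoscale(ir), autoscale(raman)])

def nearest_centroid_fit_predict(X_train, y_train, X_test):
    """Minimal classifier stand-in: assign each test spectrum to the class
    with the nearest training centroid in the fused feature space."""
    y_train = np.asarray(y_train)
    classes = sorted(set(y_train))
    centroids = {c: X_train[y_train == c].mean(0) for c in classes}
    return [min(classes, key=lambda c: np.linalg.norm(x - centroids[c]))
            for x in X_test]
```

In the study itself, the fused data feed a Random Forest whose out-of-bag votes attribute each test spectrum to a synthesis route; the fusion step is what lifted the classification rate to 19/23 correct.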
Fast, Safe, Propellant-Efficient Spacecraft Motion Planning Under Clohessy-Wiltshire-Hill Dynamics
NASA Technical Reports Server (NTRS)
Starek, Joseph A.; Schmerling, Edward; Maher, Gabriel D.; Barbee, Brent W.; Pavone, Marco
2016-01-01
This paper presents a sampling-based motion planning algorithm for real-time and propellant-optimized autonomous spacecraft trajectory generation in near-circular orbits. Specifically, it applies recent algorithmic advances from the field of robot motion planning to the problem of impulsively actuated, propellant-optimized rendezvous and proximity operations under the Clohessy-Wiltshire-Hill dynamics model. The approach uses a modified version of the FMT* algorithm to grow a set of feasible trajectories over a deterministic, low-dispersion set of sample points covering the free state space. To enforce safety, the tree is grown only over the subset of actively safe samples, for which there exists a feasible one-burn collision-avoidance maneuver that can safely circularize the spacecraft orbit along its coasting arc under a given set of potential thruster failures. Key features of the proposed algorithm include 1) theoretical guarantees of trajectory safety and performance, 2) amenability to real-time implementation, and 3) generality, in the sense that a large class of constraints can be handled directly. As a result, the proposed algorithm offers the potential for widespread application, ranging from on-orbit satellite servicing to orbital debris removal and autonomous inspection missions.
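The Clohessy-Wiltshire-Hill model referenced above has a standard closed-form state-transition matrix, which a planner can use to propagate relative states between impulses without numerical integration. A minimal sketch, not the paper's FMT* implementation; the function names and the sample mean motion in the usage note are illustrative:

```python
import math

def cwh_transition(t, n):
    """Closed-form state-transition matrix Phi(t) for the
    Clohessy-Wiltshire-Hill equations with mean motion n (rad/s).
    State ordering: [x, y, z, vx, vy, vz], with x radial,
    y along-track, z cross-track.
    """
    s, c = math.sin(n * t), math.cos(n * t)
    return [
        [4 - 3 * c,       0, 0,  s / n,            2 * (1 - c) / n,     0],
        [6 * (s - n * t), 1, 0,  2 * (c - 1) / n,  (4 * s - 3 * n * t) / n, 0],
        [0,               0, c,  0,                0,                   s / n],
        [3 * n * s,       0, 0,  c,                2 * s,               0],
        [6 * n * (c - 1), 0, 0, -2 * s,            4 * c - 3,           0],
        [0,               0, -n * s, 0,            0,                   c],
    ]

def propagate(state, t, n):
    """Propagate a relative state by time t: state(t) = Phi(t) @ state."""
    phi = cwh_transition(t, n)
    return [sum(phi[i][j] * state[j] for j in range(6)) for i in range(6)]
```

Two sanity checks: Phi(0) is the identity, and the decoupled cross-track motion is periodic with period 2*pi/n (a mean motion of about 0.00113 rad/s corresponds to a low Earth orbit).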
Sample handling for mass spectrometric proteomic investigations of human sera.
West-Nielsen, Mikkel; Høgdall, Estrid V; Marchiori, Elena; Høgdall, Claus K; Schou, Christian; Heegaard, Niels H H
2005-08-15
Proteomic investigations of sera are potentially of value for diagnosis, prognosis, choice of therapy, and disease activity assessment by virtue of discovering new biomarkers and biomarker patterns. Much debate focuses on the biological relevance and the need for identification of such biomarkers while less effort has been invested in devising standard procedures for sample preparation and storage in relation to model building based on complex sets of mass spectrometric (MS) data. Thus, development of standardized methods for collection and storage of patient samples together with standards for transportation and handling of samples are needed. This requires knowledge about how sample processing affects MS-based proteome analyses and thereby how nonbiological biased classification errors are avoided. In this study, we characterize the effects of sample handling, including clotting conditions, storage temperature, storage time, and freeze/thaw cycles, on MS-based proteomics of human serum by using principal components analysis, support vector machine learning, and clustering methods based on genetic algorithms as class modeling and prediction methods. Using spiking to artificially create differentiable sample groups, this integrated approach yields data that--even when working with sample groups that differ more than may be expected in biological studies--clearly demonstrate the need for comparable sampling conditions for samples used for modeling and for the samples that are going into the test set group. Also, the study emphasizes the difference between class prediction and class comparison studies as well as the advantages and disadvantages of different modeling methods.
Multiplex detection of respiratory pathogens
McBride, Mary [Brentwood, CA]; Slezak, Thomas [Livermore, CA]; Birch, James M [Albany, CA]
2012-07-31
Described are kits and methods useful for detection of respiratory pathogens (influenza A (including subtyping capability for H1, H3, H5 and H7 subtypes) influenza B, parainfluenza (type 2), respiratory syncytial virus, and adenovirus) in a sample. Genomic sequence information from the respiratory pathogens was analyzed to identify signature sequences, e.g., polynucleotide sequences useful for confirming the presence or absence of a pathogen in a sample. Primer and probe sets were designed and optimized for use in a PCR based, multiplexed Luminex assay to successfully identify the presence or absence of pathogens in a sample.
ANALYSIS OF SAMPLING TECHNIQUES FOR IMBALANCED DATA: AN N=648 ADNI STUDY
Dubey, Rashmi; Zhou, Jiayu; Wang, Yalin; Thompson, Paul M.; Ye, Jieping
2013-01-01
Many neuroimaging applications deal with imbalanced imaging data. For example, in the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, the mild cognitive impairment (MCI) cases eligible for the study are nearly two times the Alzheimer’s disease (AD) patients for the structural magnetic resonance imaging (MRI) modality and six times the control cases for the proteomics modality. Constructing an accurate classifier from imbalanced data is a challenging task. Traditional classifiers that aim to maximize the overall prediction accuracy tend to classify all data into the majority class. In this paper, we study an ensemble system of feature selection and data sampling for the class imbalance problem. We systematically analyze various sampling techniques by examining the efficacy of different rates and types of undersampling, oversampling, and a combination of over- and undersampling approaches. We thoroughly examine six widely used feature selection algorithms to identify significant biomarkers and thereby reduce the complexity of the data. The efficacy of the ensemble techniques is evaluated using two different classifiers, Random Forest and Support Vector Machines, based on classification accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity measures. Our extensive experimental results show that for various problem settings in ADNI, (1) a balanced training set obtained with K-Medoids-based undersampling gives the best overall performance among the data sampling techniques and the no-sampling approach; and (2) sparse logistic regression with stability selection achieves competitive performance among the feature selection algorithms. Comprehensive experiments with various settings show that our proposed ensemble model of multiple undersampled datasets yields stable and promising results. PMID:24176869
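The simplest baseline among the sampling techniques compared above is random undersampling of every class down to the size of the smallest class. A minimal sketch; the study's best performer used K-Medoids-based undersampling, while plain random undersampling is shown here, and all names are illustrative:

```python
import random
from collections import defaultdict

def balanced_undersample(X, y, rng=random):
    """Randomly undersample every class down to the size of the
    smallest class, yielding a balanced training set.
    X is a list of feature vectors, y the matching list of labels.
    """
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    k = min(len(items) for items in by_class.values())  # minority size
    Xb, yb = [], []
    for label, items in by_class.items():
        for xi in rng.sample(items, k):  # keep k per class
            Xb.append(xi)
            yb.append(label)
    return Xb, yb
```

Balancing this way discards majority-class information, which is why the paper pairs undersampling with an ensemble over multiple undersampled datasets.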
Electrodynamics of the middle atmosphere: Superpressure balloon program
NASA Technical Reports Server (NTRS)
Holzworth, Robert H.
1987-01-01
In this experiment a comprehensive set of electrical parameters was measured during eight long-duration flights in the southern hemisphere stratosphere. These flights resulted in the largest data set ever collected from the stratosphere, which had never before been sampled electrodynamically in such a systematic manner. New discoveries include short-term variability in the planetary-scale electric current system, the unexpected observation of stratospheric conductivity variations over thunderstorms, and the observation of direct stratospheric conductivity variations following a relatively small solar flare. Major statistical studies were conducted of the large-scale current systems, the stratospheric conductivity, and the neutral gravity waves (from pressure and temperature data) using the entire data set.
Zhang, Mengliang; Harrington, Peter de B
2015-01-01
A multivariate partial least-squares (PLS) method was applied to the quantification of two complex polychlorinated biphenyl (PCB) commercial mixtures, Aroclor 1254 and 1260, in a soil matrix. PCBs in soil samples were extracted by headspace solid-phase microextraction (SPME) and determined by gas chromatography/mass spectrometry (GC/MS). Decachlorinated biphenyl (deca-CB) was used as the internal standard. After baseline correction was applied, four data representations, including extracted ion chromatograms (EIC) for Aroclor 1254, EIC for Aroclor 1260, EIC for both Aroclors, and two-way data sets, were constructed for PLS-1 and PLS-2 calibrations and evaluated with respect to quantitative prediction accuracy. The PLS model was optimized with respect to the number of latent variables using cross validation of the calibration data set. The method was validated with certified soil samples and real field soil samples, and the predicted concentrations for both Aroclors using EIC data sets agreed with the certified values. The linear range of the method was from 10 μg kg(-1) to 1000 μg kg(-1) for both Aroclor 1254 and 1260 in soil matrices, and the detection limit was 4 μg kg(-1) for Aroclor 1254 and 6 μg kg(-1) for Aroclor 1260. This holistic approach to the determination of mixtures in complex samples has broad application to environmental forensics and modeling. Copyright © 2014 Elsevier Ltd. All rights reserved.
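The internal-standard calibration idea underlying the quantification above can be illustrated with a univariate least-squares fit of analyte-to-internal-standard response ratio against concentration. This is a deliberately simplified stand-in for the paper's multivariate PLS models; the calibration numbers are made up for illustration.

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def predict_conc(ratio, a, b):
    """Invert the calibration line: concentration = (ratio - a) / b."""
    return (ratio - a) / b

# Illustrative calibration: concentration (ug/kg) vs. analyte/deca-CB
# peak-area ratio for a set of calibration standards (made-up values).
concs = [10.0, 50.0, 100.0, 500.0, 1000.0]
ratios = [0.021, 0.104, 0.198, 1.010, 1.995]
a, b = fit_line(concs, ratios)
```

Ratioing to the deca-CB internal standard compensates for run-to-run variation in extraction and injection; PLS extends the same idea to whole chromatographic profiles instead of single peak ratios.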
Strategies to explore functional genomics data sets in NCBI's GEO database.
Wilhite, Stephen E; Barrett, Tanya
2012-01-01
The Gene Expression Omnibus (GEO) database is a major repository that stores high-throughput functional genomics data sets that are generated using both microarray-based and sequence-based technologies. Data sets are submitted to GEO primarily by researchers who are publishing their results in journals that require original data to be made freely available for review and analysis. In addition to serving as a public archive for these data, GEO has a suite of tools that allow users to identify, analyze, and visualize data relevant to their specific interests. These tools include sample comparison applications, gene expression profile charts, data set clusters, genome browser tracks, and a powerful search engine that enables users to construct complex queries. PMID:22130872
Brazil, Kevin; Cloutier, Michelle M; Tennen, Howard; Bailit, Howard; Higgins, Pamela S
2008-04-01
The purpose of this study was to examine the challenges of integrating an asthma disease management (DM) program into a primary care setting from the perspective of primary care practitioners. A second goal was to examine whether barriers differed between urban-based and nonurban-based practices. Using a qualitative design, data were gathered using focus groups in primary care pediatric practices. A purposeful sample included an equal number of urban and nonurban practices. Participants represented all levels in the practice setting. Important themes that emerged from the data were coded and categorized. A total of 151 individuals, including physicians, advanced practice clinicians, registered nurses, other medical staff, and nonmedical staff participated in 16 focus groups that included 8 urban and 8 nonurban practices. Content analyses identified 4 primary factors influencing the implementation of a DM program in a primary care setting. They were related to providers, the organization, patients, and characteristics of the DM program. This study illustrates the complexity of the primary care environment and the challenge of changing practice in these settings. The results of this study identified areas in a primary care setting that influence the adoption of a DM program. These findings can assist in identifying effective strategies to change clinical behavior in primary care practices.
Roberts, Jason A; Choi, Gordon Y S; Joynt, Gavin M; Paul, Sanjoy K; Deans, Renae; Peake, Sandra; Cole, Louise; Stephens, Dianne; Bellomo, Rinaldo; Turnidge, John; Wallis, Steven C; Roberts, Michael S; Roberts, Darren M; Lassig-Smith, Melissa; Starr, Therese; Lipman, Jeffrey
2016-03-01
Optimal antibiotic dosing is key to maximising patient survival, and minimising the emergence of bacterial resistance. Evidence-based antibiotic dosing guidelines for critically ill patients receiving RRT are currently not available, as RRT techniques and settings vary greatly between ICUs and even individual patients. We aim to develop a robust, evidence-based antibiotic dosing guideline for critically ill patients receiving various forms of RRT. We further aim to observe whether therapeutic antibiotic concentrations are associated with reduced 28-day mortality. We designed a multi-national, observational pharmacokinetic study in critically ill patients requiring RRT. The study antibiotics will be vancomycin, linezolid, piperacillin/tazobactam and meropenem. Pharmacokinetic sampling of each patient's blood, RRT effluent and urine will take place during two separate dosing intervals. In addition, a comprehensive data set, which includes the patients' demographic and clinical parameters, as well as modality, technique and settings of RRT, will be collected. Pharmacokinetic data will be analysed using a population pharmacokinetic approach to identify covariates associated with changes in pharmacokinetic parameters in critically ill patients with AKI who are undergoing RRT for the five commonly prescribed antibiotics. Using the comprehensive data set collected, the pharmacokinetic profile of the five antibiotics will be constructed, including identification of RRT and other factors indicative of the need for altered antibiotic dosing requirements. This will enable us to develop a dosing guideline for each individual antibiotic that is likely to be relevant to any critically ill patient with acute kidney injury receiving any of the included forms of RRT. Australian New Zealand Clinical Trial Registry ( ACTRN12613000241730 ) registered 28 February 2013.
Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space
Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred
2016-01-01
Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside with the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. 
For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
Granberg, Sarah; Dahlström, Jennie; Möller, Claes; Kähäri, Kim; Danermark, Berth
2014-02-01
To review the literature in order to identify outcome measures used in research on adults with hearing loss (HL) as part of the ICF Core Sets development project, and to describe study and population characteristics of the reviewed studies. A systematic review methodology was applied using multiple databases. A comprehensive search was conducted and two search pools were created, pool I and pool II. The study population included adults (≥ 18 years of age) with HL and oral language as the primary mode of communication. 122 studies were included. Outcome measures were distinguished by 'instrument type', and 10 types were identified. In total, 246 (pool I) and 122 (pool II) different measures were identified, and only approximately 20% were extracted twice or more. Most measures were related to speech recognition. Fifty-one different questionnaires were identified. Many studies used small sample sizes, and the sex of participants was not revealed in several studies. The low prevalence of identified measures reflects a lack of consensus regarding the optimal outcome measures to use in audiology. Reflections and discussions are made in relation to small sample sizes and the lack of sex differentiation/descriptions within the included articles.
An Exploratory Survey of Transition Teaching Practices: Results from a National Sample
ERIC Educational Resources Information Center
Pham, Yen K.
2013-01-01
This study used multilevel modeling to examine the extent to which secondary special educators promoted nonacademic behaviors that were positively linked to postschool outcomes for students with disabilities, including self-advocacy, goal setting and attainment, disability awareness, employment, and utilization of supports. Respondents were 248…
Using Soil Seed Banks for Ecological Education in Primary School
ERIC Educational Resources Information Center
Ju, Eun Jeong; Kim, Jae Geun
2011-01-01
In this study, we developed an educational programme using soil seed banks to promote ecological literacy among primary school-aged children. The programme consisted of seven student activities, including sampling and setting soil seed banks around the school, watering, identifying seedlings, and making observations about the plants and their…
Adolescent School Experiences and Dropout, Adolescent Pregnancy, and Young Adult Deviant Behavior.
ERIC Educational Resources Information Center
Kasen, Stephanie; Cohen, Patricia; Brook, Judith S.
1998-01-01
This study examined predictability of inappropriate behavior in a random sample of 452 adolescents. Behaviors examined included dropping out, teen pregnancy, criminal activities and conviction, antisocial personality disorder, and alcohol abuse. Found that academic achievement and aspirations, and learning-focused school settings related to…
9 CFR 381.412 - Reference amounts customarily consumed per eating occasion.
Code of Federal Regulations, 2013 CFR
2013-01-01
... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...
9 CFR 381.412 - Reference amounts customarily consumed per eating occasion.
Code of Federal Regulations, 2012 CFR
2012-01-01
... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...
9 CFR 317.312 - Reference amounts customarily consumed per eating occasion.
Code of Federal Regulations, 2013 CFR
2013-01-01
... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...
9 CFR 317.312 - Reference amounts customarily consumed per eating occasion.
Code of Federal Regulations, 2014 CFR
2014-01-01
... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...
9 CFR 381.412 - Reference amounts customarily consumed per eating occasion.
Code of Federal Regulations, 2014 CFR
2014-01-01
... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...
9 CFR 317.312 - Reference amounts customarily consumed per eating occasion.
Code of Federal Regulations, 2012 CFR
2012-01-01
... appropriate national food consumption surveys. (2) The Reference Amounts are calculated for an infant or child... are based on data set forth in appropriate national food consumption surveys. Such Reference Amounts... child under 4 years of age. (3) An appropriate national food consumption survey includes a large sample...
A Web-Hosted R Workflow to Simplify and Automate the Analysis of 16S NGS Data
Next-Generation Sequencing (NGS) produces large data sets that include tens of thousands of sequence reads per sample. For analysis of bacterial diversity, 16S NGS sequences are typically analyzed in a workflow containing best-of-breed bioinformatics packages that may levera...
Coastal Studies for Primary Grades.
ERIC Educational Resources Information Center
Butler, Venetia R.; Roach, Ellen M.
1986-01-01
Describes a set of field trips for participants of the Coastal Environmental Education for Primary Grades program in Georgia. Includes a sample of the activities used by first- and second-grade students. Discusses follow-up activities and the need for more educational programs dealing with sand dunes and saltwater marshes. (TW)
NASA Astrophysics Data System (ADS)
Devi, Elok A.; Rachman, Faisal; Satyana, Awang H.; Fahrudin; Setyawan, Reddy
2018-02-01
The Eocene Lower Ngimbang carbonaceous shales are geochemically proven hydrocarbon source rocks in the East Java Basin. Sedimentary facies of source rock is important for the source evaluation that can be examined by using biomarkers and carbon-13 isotopes data. Furthermore, paleogeography of the source sedimentation can be reconstructed. The case study was conducted on rock samples of Lower Ngimbang from two exploration wells drilled in Cepu area, East Java Basin, Kujung-1 and Ngimbang-1 wells. The biomarker data include GC and GC-MS data of normal alkanes, isoprenoids, triterpanes, and steranes. Carbon-13 isotope data include saturate and aromatic fractions. Various crossplots of biomarker and carbon-13 isotope data of the Lower Ngimbang source samples from the two wells show that the source facies of Lower Ngimbang shales changed from transitional/deltaic setting at Kujung-1 well location to marginal marine setting at Ngimbang-1 well location. This reveals that the Eocene paleogeography of the Cepu area was composed of land area in the north and marine setting to the south. Biomarkers and carbon-13 isotopes are powerful data for reconstructing paleogeography and paleofacies. In the absence of fossils in some sedimentary facies, these geochemical data are good alternatives.
Characterization of polymer decomposition products by laser desorption mass spectrometry
NASA Technical Reports Server (NTRS)
Pallix, Joan B.; Lincoln, Kenneth A.; Miglionico, Charles J.; Roybal, Robert E.; Stein, Charles; Shively, Jon H.
1993-01-01
Laser desorption mass spectrometry has been used to characterize the ash-like substances formed on the surfaces of polymer matrix composites (PMC's) during exposure on LDEF. In an effort to minimize fragmentation, material was removed from the sample surfaces by laser desorption and desorbed neutrals were ionized by electron impact. Ions were detected in a time-of-flight mass analyzer which allows the entire mass spectrum to be collected for each laser shot. The method is ideal for these studies because only a small amount of ash is available for analysis. Three sets of samples were studied including C/polysulfone, C/polyimide and C/phenolic. Each set contains leading and trailing edge LDEF samples and their respective controls. In each case, the mass spectrum of the ash shows a number of high mass peaks which can be assigned to fragments of the associated polymer. These high mass peaks are not observed in the spectra of the control samples. In general, the results indicate that the ash is formed from decomposition of the polymer matrix.
Citizen science contributes to our knowledge of invasive plant species distributions
Crall, Alycia W.; Jarnevich, Catherine S.; Young, Nicholas E.; Panke, Brendon; Renz, Mark; Stohlgren, Thomas
2015-01-01
Citizen science is commonly cited as an effective approach to expand the scale of invasive species data collection and monitoring. However, researchers often hesitate to use these data due to concerns over data quality. In light of recent research on the quality of data collected by volunteers, we aimed to demonstrate the extent to which citizen science data can increase sampling coverage, fill gaps in species distributions, and improve habitat suitability models compared to professionally generated data sets used in isolation. We combined data sets from professionals and volunteers for five invasive plant species (Alliaria petiolata, Berberis thunbergii, Cirsium palustre, Pastinaca sativa, Polygonum cuspidatum) in portions of Wisconsin. Volunteers sampled counties not sampled by professionals for three of the five species. Volunteers also added presence locations within counties not included in professional data sets, especially in southern portions of the state where professional monitoring activities had been minimal. Volunteers made a significant contribution to the known distribution, environmental gradients sampled, and the habitat suitability of P. cuspidatum. Models generated with professional data sets for the other four species performed reasonably well according to AUC values (>0.76). The addition of volunteer data did not greatly change model performance (AUC > 0.79) but did change the suitability surface generated by the models, making them more realistic. Our findings underscore the need to merge data from multiple sources to improve knowledge of current species distributions, and to predict their movement under present and future environmental conditions. The efficiency and success of these approaches require that monitoring efforts involve multiple stakeholders in continuous collaboration via established monitoring networks.
On the Spectrum of the Plenoptic Function.
Gilliam, Christopher; Dragotti, Pier-Luigi; Brookes, Mike
2014-02-01
The plenoptic function is a powerful tool to analyze the properties of multi-view image data sets. In particular, the understanding of the spectral properties of the plenoptic function is essential in many computer vision applications, including image-based rendering. In this paper, we derive for the first time an exact closed-form expression of the plenoptic spectrum of a slanted plane with finite width and use this expression as the elementary building block to derive the plenoptic spectrum of more sophisticated scenes. This is achieved by approximating the geometry of the scene with a set of slanted planes and evaluating the closed-form expression for each plane in the set. We then use this closed-form expression to revisit uniform plenoptic sampling. In this context, we derive a new Nyquist rate for the plenoptic sampling of a slanted plane and a new reconstruction filter. Through numerical simulations, on both real and synthetic scenes, we show that the new filter outperforms alternative existing filters.
Gravel Transport Measured With Bedload Traps in Mountain Streams: Field Data Sets to be Published
NASA Astrophysics Data System (ADS)
Bunte, K.; Swingle, K. W.; Abt, S. R.; Ettema, R.; Cenderelli, D. A.
2017-12-01
Direct, accurate measurements of coarse bedload transport exist for only a few streams worldwide, because the task is laborious and requires a suitable device. However, sets of accurate field data would be useful for reference with unsampled sites and as a basis for model developments. The authors have carefully measured gravel transport and are compiling their data sets for publication. To ensure accurate measurements of gravel bedload in wadeable flow, the designed instrument consisted of an unflared aluminum frame (0.3 x 0.2 m) large enough for entry of cobbles. The attached 1 m or longer net with a 4 mm mesh held large bedload volumes. The frame was strapped onto a ground plate anchored onto the channel bed. This setup avoided involuntary sampler particle pick-up and enabled long sampling times, integrating over fluctuating transport. Beveled plates and frames facilitated easy particle entry. Accelerating flow over smooth plates compensated for deceleration within the net. Spacing multiple frames by 1 m enabled sampling much of the stream width. Long deployment, and storage of sampled bedload away from the frame's entrance, were attributes of traps rather than samplers; hence the name "bedload traps". The authors measured gravel transport with 4-6 bedload traps per cross-section at 10 mountain streams in CO, WY, and OR, accumulating 14 data sets (>1,350 samples). In 10 data sets, measurements covered much of the snowmelt high-flow season yielding 50-200 samples. Measurement time was typically 1 hour but ranged from 3 minutes to 3 hours, depending on transport intensity. Measuring back-to-back provided 6 to 10 samples over a 6 to 10-hour field day. Bedload transport was also measured with a 3-inch Helley-Smith sampler. The data set provides fractional (0.5 phi) transport rates in terms of particle mass and number for each bedload trap in the cross-section, the largest particle size, as well as total cross-sectional gravel transport rates. 
Ancillary field data include stage, discharge, long-term flow records if available, surface and subsurface sediment sizes, as well as longitudinal and cross-sectional site surveys. Besides transport relations, incipient motion conditions, hysteresis, and lateral variation, the data provide a reliable modeling basis to test insights and hypotheses regarding bedload transport.
Virulotyping of Shigella spp. isolated from pediatric patients in Tehran, Iran.
Ranjbar, Reza; Bolandian, Masomeh; Behzadi, Payam
2017-03-01
Shigellosis is a considerable infectious disease with high morbidity and mortality among children worldwide. In this survey, the prevalence of four important virulence genes, including ial, ipaH, set1A, and set1B, was investigated among Shigella strains, and the related gene profiles were identified. In the present investigation, stool specimens were collected from children who were referred to two hospitals in Tehran, Iran. The samples were collected during 3 years (2008-2010) from children who were suspected of shigellosis. Shigella spp. were identified through microbiological and serological tests and then subjected to PCR for virulotyping. Shigella sonnei ranked first (65.5%), followed by Shigella flexneri (25.9%), Shigella boydii (6.9%), and Shigella dysenteriae (1.7%). The ial gene was the most frequent virulence gene among the isolated bacterial strains, followed by ipaH, set1B, and set1A. S. flexneri possessed all of the studied virulence genes (ial 65.51%, ipaH 58.62%, set1A 12.07%, and set1B 22.41%). Moreover, the pattern of virulence gene profiles, including ial, ial-ipaH, ial-ipaH-set1B, and ial-ipaH-set1B-set1A, was identified for the isolated Shigella spp. strains. The pattern of virulence genes has changed in the strains of Shigella isolated in this study, with the ial gene placed first and ipaH second.
Eblen, Denise R; Barlow, Kristina E; Naugle, Alecia Larew
2006-11-01
The U.S. Food Safety and Inspection Service (FSIS) pathogen reduction-hazard analysis critical control point systems final rule, published in 1996, established Salmonella performance standards for broiler chicken, cow and bull, market hog, and steer and heifer carcasses and for ground beef, chicken, and turkey meat. In 1998, the FSIS began testing to verify that establishments are meeting performance standards. Samples are collected in sets in which the number of samples is defined but varies according to product class. A sample set fails when the number of positive Salmonella samples exceeds the maximum number of positive samples allowed under the performance standard. Salmonella sample sets collected at 1,584 establishments from 1998 through 2003 were examined to identify factors associated with failure of one or more sets. Overall, 1,282 (80.9%) of establishments never had failed sets. In establishments that did experience set failure(s), generally the failed sets were collected early in the establishment testing history, with the exception of broiler establishments where failure(s) occurred both early and late in the course of testing. Small establishments were more likely to have experienced a set failure than were large or very small establishments, and broiler establishments were more likely to have failed than were ground beef, market hog, or steer-heifer establishments. Agency response to failed Salmonella sample sets in the form of in-depth verification reviews and related establishment-initiated corrective actions have likely contributed to declines in the number of establishments that failed sets. A focus on food safety measures in small establishments and broiler processing establishments should further reduce the number of sample sets that fail to meet the Salmonella performance standard.
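The pass/fail logic described above is simple to state in code. A minimal sketch follows; the 51-sample, 12-positive broiler figures in the comment are illustrative values from the historical FSIS broiler standard, not from this abstract:

```python
def set_fails(num_positive, max_allowed):
    """A Salmonella sample set fails when the number of positive samples
    exceeds the maximum allowed under the performance standard."""
    return num_positive > max_allowed

# Illustrative: the broiler standard historically allowed up to 12 positives
# in a 51-sample set, so 13 positives would fail the set.
broiler_failed = set_fails(13, 12)
```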
Nosocomial transmission of respiratory syncytial virus in an outpatient cancer center.
Chu, Helen Y; Englund, Janet A; Podczervinski, Sara; Kuypers, Jane; Campbell, Angela P; Boeckh, Michael; Pergam, Steven A; Casper, Corey
2014-06-01
Respiratory syncytial virus (RSV) outbreaks in inpatient settings are associated with poor outcomes in cancer patients. The use of molecular epidemiology to document RSV transmission in the outpatient setting has not been well described. We performed a retrospective cohort study of 2 nosocomial outbreaks of RSV at the Seattle Cancer Care Alliance. Subjects included patients seen at the Seattle Cancer Care Alliance with RSV detected in 2 outbreaks in 2007-2008 and 2012 and all employees with respiratory viruses detected in the 2007-2008 outbreak. A subset of samples was sequenced using semi-nested PCR targeting the RSV attachment glycoprotein coding region. Fifty-one cases of RSV were identified in 2007-2008. Clustering of identical viral strains was detected in 10 of 15 patients (67%) with RSV sequenced from 2007 to 2008. As part of a multimodal infection control strategy implemented as a response to the outbreak, symptomatic employees had nasal washes collected. Of 254 employee samples, 91 (34%) tested positive for a respiratory virus, including 14 with RSV. In another RSV outbreak in 2012, 24 cases of RSV were identified; 9 of 10 patients (90%) had the same viral strain, and 1 (10%) had another viral strain. We document spread of clonal strains within an outpatient cancer care setting. Infection control interventions should be implemented in outpatient, as well as inpatient, settings to reduce person-to-person transmission and limit progression of RSV outbreaks. Copyright © 2014 American Society for Blood and Marrow Transplantation. All rights reserved.
Nosocomial Transmission of Respiratory Syncytial Virus in an Outpatient Cancer Center
Chu, Helen Y.; Englund, Janet A.; Podczervinski, Sara; Kuypers, Jane; Campbell, Angela P.; Boeckh, Michael; Pergam, Steven A.; Casper, Corey
2014-01-01
Background Respiratory syncytial virus (RSV) outbreaks in inpatient settings are associated with poor outcomes in cancer patients. The use of molecular epidemiology to document RSV transmission in the outpatient setting has not been well described. Methods We performed a retrospective cohort study of two nosocomial outbreaks of RSV at the Seattle Cancer Care Alliance (SCCA). Subjects included patients seen at the SCCA with RSV detected in two outbreaks in 2007-2008 and 2012, and all employees with respiratory viruses detected in the 2007-2008 outbreak. A subset of samples was sequenced using semi-nested polymerase chain reaction targeting the RSV attachment glycoprotein coding region. Results Fifty-one cases of RSV were identified in 2007-2008. Clustering of identical viral strains was detected in 10 (67%) of 15 patients with RSV sequenced from 2007-2008. As part of a multimodal infection control strategy implemented as a response to the outbreak, symptomatic employees had nasal washes collected. Of 254 employee samples, 91 (34%) tested positive for a respiratory virus, including 14 with RSV. In another RSV outbreak in 2012, 24 cases of RSV were identified; nine (90%) of 10 patients had the same viral strain, and 1 (10%) had another viral strain. Conclusions We document spread of clonal strains within an outpatient cancer care setting. Infection control interventions should be implemented in outpatient, as well as inpatient, settings to reduce person-to-person transmission and limit progression of RSV outbreaks. PMID:24607551
Diagnosing intramammary infections: evaluation of definitions based on a single milk sample.
Dohoo, I R; Smith, J; Andersen, S; Kelton, D F; Godden, S
2011-01-01
Criteria for diagnosing intramammary infections (IMI) have been debated for many years. Factors that may be considered in making a diagnosis include the organism of interest being found on culture, the number of colonies isolated, whether or not the organism was recovered in pure or mixed culture, and whether or not concurrent evidence of inflammation existed (often measured by somatic cell count). However, research using these criteria has been hampered by the lack of a "gold standard" test (i.e., a perfect test against which the criteria can be evaluated) and the need for very large data sets of culture results to have sufficient numbers of quarters with infections with a variety of organisms. This manuscript used 2 large data sets of culture results to evaluate several definitions (sets of criteria) for classifying a quarter as having, or not having, an IMI by comparing the results from a single culture to a gold standard diagnosis based on a set of 3 milk samples. The first consisted of 38,376 milk samples from which 25,886 triplicate sets of milk samples taken 1 wk apart were extracted. The second consisted of 784 quarters that were classified as infected or not based on a set of 3 milk samples collected at 2-d intervals. From these quarters, a total of 3,136 additional samples were evaluated. A total of 12 definitions (named A to L) based on combinations of the number of colonies isolated, whether or not the organism was recovered in pure or mixed culture, and the somatic cell count were evaluated for each organism (or group of organisms) with sufficient data. The sensitivity (ability of a definition to detect IMI) and the specificity (Sp; ability of a definition to correctly classify noninfected quarters) were both computed. For all species, except Staphylococcus aureus, the sensitivity of all definitions was <90% (and in many cases <50%).
Consequently, if identifying as many existing infections as possible is important, then the criterion for considering a quarter positive should be a single colony (from a 0.01-mL milk sample) isolated (definition A). With the exception of "any organism" and coagulase-negative staphylococci, all Sp estimates were over 94% in the daily data and over 97% in the weekly data, suggesting that for most species, definition A may be acceptable. For coagulase-negative staphylococci, definition B (2 colonies from a 0.01-mL milk sample) raised the Sp to 92 and 95% in the daily and weekly data, respectively. For "any organism," using definition B raised the Sp to 88 and 93% in the 2 data sets, respectively. The final choice of definition will depend on the objectives of the study or control program for which the sample was collected. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
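The sensitivity and specificity quantities used throughout this evaluation reduce to simple ratios over a 2×2 comparison against the triplicate gold standard. A minimal sketch with hypothetical counts:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of truly infected quarters detected.
    Specificity = TN / (TN + FP): fraction of noninfected quarters classified correctly."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for one definition compared with the 3-sample gold standard.
se, sp = sensitivity_specificity(tp=45, fn=55, tn=970, fp=30)
# se = 0.45 (a definition missing many true IMI), sp = 0.97
```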
Weak lensing magnification in the Dark Energy Survey Science Verification data
NASA Astrophysics Data System (ADS)
Garcia-Fernandez, M.; Sanchez, E.; Sevilla-Noarbe, I.; Suchyta, E.; Huff, E. M.; Gaztanaga, E.; Aleksić, J.; Ponce, R.; Castander, F. J.; Hoyle, B.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Benoit-Lévy, A.; Bernstein, G. M.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Eifler, T. F.; Evrard, A. E.; Fernandez, E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; James, D. J.; Jarvis, M.; Kirk, D.; Krause, E.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Lima, M.; MacCrann, N.; Maia, M. A. G.; March, M.; Marshall, J. L.; Melchior, P.; Miquel, R.; Mohr, J. J.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Rykoff, E. S.; Scarpine, V.; Schubnell, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Tarle, G.; Thomas, D.; Walker, A. R.; Wester, W.; DES Collaboration
2018-05-01
In this paper, the effect of weak lensing magnification on galaxy number counts is studied by cross-correlating the positions of two galaxy samples, separated by redshift, using the Dark Energy Survey Science Verification data set. The analysis is carried out for galaxies selected only by their photometric redshift. An extensive analysis of the systematic effects, using new methods based on simulations, is performed, including a Monte Carlo sampling of the selection function of the survey.
Restoring a smooth function from its noisy integrals
NASA Astrophysics Data System (ADS)
Goulko, Olga; Prokof'ev, Nikolay; Svistunov, Boris
2018-05-01
Numerical (and experimental) data analysis often requires the restoration of a smooth function from a set of sampled integrals over finite bins. We present the bin hierarchy method that efficiently computes the maximally smooth function from the sampled integrals using essentially all the information contained in the data. We perform extensive tests with different classes of functions and levels of data quality, including Monte Carlo data suffering from a severe sign problem and physical data for the Green's function of the Fröhlich polaron.
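The authors' bin hierarchy method is not reproduced here, but the underlying task, recovering a smooth function whose integrals over bins match the sampled data, can be illustrated with a plain polynomial least-squares fit (an assumption of this sketch, not the paper's algorithm):

```python
import numpy as np

def fit_from_bin_integrals(edges, integrals, degree=3):
    """Least-squares polynomial fit to integrals over bins.
    Design-matrix entry for basis x**j over bin [a, b]:
    integral = (b**(j+1) - a**(j+1)) / (j + 1)."""
    a, b = edges[:-1], edges[1:]
    j = np.arange(degree + 1)
    A = (b[:, None] ** (j + 1) - a[:, None] ** (j + 1)) / (j + 1)
    coef, *_ = np.linalg.lstsq(A, integrals, rcond=None)
    return coef  # coefficients c_j of sum_j c_j * x**j

# Recover f(x) = 1 + 2x from its exact integrals over 10 bins on [0, 1]:
# the antiderivative of 1 + 2x is x + x**2.
edges = np.linspace(0.0, 1.0, 11)
exact = (edges[1:] + edges[1:] ** 2) - (edges[:-1] + edges[:-1] ** 2)
c = fit_from_bin_integrals(edges, exact, degree=1)  # c close to [1, 2]
```

With noisy integrals the same linear system is solved in the least-squares sense; the paper's contribution is choosing the basis and resolution adaptively so that the result is maximally smooth given the data quality.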
Temperature dependence of photoluminescence peaks of porous silicon structures
NASA Astrophysics Data System (ADS)
Brunner, Róbert; Pinčík, Emil; Kučera, Michal; Greguš, Ján; Vojtek, Pavel; Zábudlá, Zuzana
2017-12-01
Evaluation of photoluminescence spectra of porous silicon (PS) samples prepared by electrochemical etching is presented. The samples were measured at temperatures of 30, 70, and 150 K. Peak parameters (energy, intensity, and width) were calculated. The PL spectrum was approximated by a set of Gaussian peaks. Their parameters were determined using a fitting procedure in which the optimal number of peaks included in the model was estimated from the residual of the approximation. The weak temperature dependence of the spectra indicates the strong influence of active defects.
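The peak-selection idea, adding Gaussian peaks until the residual of the approximation stops improving, can be sketched in simplified form. Here the centers and width are held fixed and only the amplitudes are fit linearly (the actual procedure also optimizes energy and width, so this is an assumption of the sketch):

```python
import numpy as np

def gaussians(x, centers, width):
    """Column j is a unit-amplitude Gaussian at centers[j]."""
    return np.exp(-0.5 * ((x[:, None] - np.asarray(centers)[None, :]) / width) ** 2)

def fit_peaks(x, y, candidate_centers, width):
    """Fit amplitudes by linear least squares for models with 1..N peaks;
    return the residual norm for each model size."""
    residuals = []
    for n in range(1, len(candidate_centers) + 1):
        G = gaussians(x, candidate_centers[:n], width)
        amp, *_ = np.linalg.lstsq(G, y, rcond=None)
        residuals.append(np.linalg.norm(G @ amp - y))
    return residuals

# Synthetic two-peak "spectrum": the residual should drop sharply at n = 2
# and stay flat at n = 3, so two peaks would be selected.
x = np.linspace(1.2, 2.2, 200)                       # photon energy (eV), hypothetical
y = gaussians(x, [1.5, 1.9], 0.05) @ np.array([1.0, 0.6])
res = fit_peaks(x, y, [1.5, 1.9, 2.05], 0.05)
```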
TEM Study of SAFARI-2000 Aerosols
NASA Technical Reports Server (NTRS)
Buseck, Peter R.
2004-01-01
The aim of our research was to obtain data on the chemical and physical properties of individual aerosol particles from biomass smoke plumes in southern Africa and from air masses in the region that are affected by the smoke. We used analytical transmission electron microscopy (ATEM), including energy-dispersive X-ray spectrometry (EDS) and electron energy-loss spectroscopy (EELS), and field-emission scanning electron microscopy (FESEM) to study aerosol particles from several smoke and haze samples and from a set of cloud samples.
Characterizations of linear sufficient statistics
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Reoner, R.; Decell, H. P., Jr.
1977-01-01
Conditions are established under which a surjective bounded linear operator T from a Banach space X to a Banach space Y is a sufficient statistic for a dominated family of probability measures defined on the Borel sets of X. These results were applied to characterize linear sufficient statistics for families of the exponential type, including the Wishart and multivariate normal distributions as special cases. The latter result was used to establish precisely which procedures for sampling from a normal population have the property that the sample mean is a sufficient statistic.
Polkowski, M; Larghi, A; Weynand, B; Boustière, C; Giovannini, M; Pujol, B; Dumonceau, J-M
2012-02-01
This article is the second of a two-part publication that expresses the current view of the European Society of Gastrointestinal Endoscopy (ESGE) about endoscopic ultrasound (EUS)-guided sampling, including EUS-guided fine needle aspiration (EUS-FNA) and EUS-guided Trucut biopsy. The first part (the Clinical Guideline) focused on the results obtained with EUS-guided sampling, and the role of this technique in patient management, and made recommendations on circumstances that warrant its use. The current Technical Guideline discusses issues related to learning, techniques, and complications of EUS-guided sampling, and to processing of specimens. Technical issues related to maximizing the diagnostic yield (e.g., rapid on-site cytopathological evaluation, needle diameter, microcore isolation for histopathological examination, and adequate number of needle passes) are discussed and recommendations are made for various settings, including solid and cystic pancreatic lesions, submucosal tumors, and lymph nodes. The target readership for the Clinical Guideline mostly includes gastroenterologists, oncologists, internists, and surgeons while the Technical Guideline should be most useful to endoscopists who perform EUS-guided sampling. A two-page executive summary of evidence statements and recommendations is provided. © Georg Thieme Verlag KG Stuttgart · New York.
Artist Material BRDF Database for Computer Graphics Rendering
NASA Astrophysics Data System (ADS)
Ashbaugh, Justin C.
The primary goal of this thesis was to create a physical library of artist material samples. This collection provides necessary data for the development of a gonio-imaging system for use in museums to more accurately document their collections. A sample set was produced consisting of 25 panels and containing nearly 600 unique samples. Selected materials are representative of those commonly used by artists both past and present. These take into account the variability in visual appearance resulting from the materials and application techniques used. Five attributes of variability were identified including medium, color, substrate, application technique and overcoat. Combinations of these attributes were selected based on those commonly observed in museum collections and suggested by surveying experts in the field. For each sample material, image data is collected and used to measure an average bi-directional reflectance distribution function (BRDF). The results are available as a public-domain image and optical database of artist materials at art-si.org. Additionally, the database includes specifications for each sample along with other information useful for computer graphics rendering such as the rectified sample images and normal maps.
Agarwal, Anupriya; MacKenzie, Ryan J.; Pippa, Raffaella; Eide, Christopher A.; Oddo, Jessica; Tyner, Jeffrey W.; Sears, Rosalie; Vitek, Michael P.; Odero, María D.; Christensen, Dale; Druker, Brian J.
2014-01-01
Purpose The SET oncoprotein, a potent inhibitor of the protein phosphatase 2A (PP2A), is overexpressed in leukemia. We evaluated the efficacy of SET antagonism in chronic myeloid leukemia (CML) and acute myeloid leukemia (AML) cell lines, a murine leukemia model, and primary patient samples using OP449, a specific, cell-penetrating peptide that antagonizes SET's inhibition of PP2A. Experimental Design In vitro cytotoxicity and specificity of OP449 in CML and AML cell lines and primary samples were measured using proliferation, apoptosis, and clonogenic assays. Efficacy of target inhibition by OP449 was evaluated by immunoblotting and PP2A assay. In vivo antitumor efficacy of OP449 was measured in a murine model xenografted with human HL-60 cells. Results We observed that OP449 inhibited growth of CML cells, including those from patients with blastic phase disease and patients harboring highly drug-resistant BCR-ABL1 mutations. Combined treatment with OP449 and ABL1 tyrosine kinase inhibitors was significantly more cytotoxic to K562 cells and primary CD34+ CML cells. SET protein levels remained unchanged with OP449 treatment, but BCR-ABL1-mediated downstream signaling was significantly inhibited, with degradation of key signaling molecules such as BCR-ABL1, STAT5, and AKT. Similarly, AML cell lines and primary patient samples with various genetic lesions showed inhibition of cell growth after treatment with OP449 alone or in combination with respective kinase inhibitors. Finally, OP449 reduced the tumor burden of mice xenografted with human leukemia cells. Conclusions We demonstrate a novel therapeutic paradigm of SET antagonism using OP449 in combination with tyrosine kinase inhibitors for the treatment of CML and AML. PMID:24436473
Nilles, M.A.; Gordon, J.D.; Schroder, L.J.
1994-01-01
A collocated, wet-deposition sampler program has been operated since October 1988 by the U.S. Geological Survey to estimate the overall sampling precision of wet atmospheric deposition data collected at selected sites in the National Atmospheric Deposition Program and National Trends Network (NADP/NTN). A duplicate set of wet-deposition sampling instruments was installed adjacent to existing sampling instruments at four different NADP/NTN sites for each year of the study. Wet-deposition samples from collocated sites were collected and analysed using standard NADP/NTN procedures. Laboratory analyses included determinations of pH, specific conductance, and concentrations of major cations and anions. The estimates of precision included all variability in the data-collection system, from the point of sample collection through storage in the NADP/NTN database. Sampling precision was determined from the absolute value of differences in the analytical results for the paired samples in terms of median relative and absolute difference. The median relative difference for Mg2+, Na+, K+ and NH4+ concentration and deposition was quite variable between sites and exceeded 10% at most sites. Relative error for analytes whose concentrations typically approached laboratory method detection limits were greater than for analytes that did not typically approach detection limits. The median relative difference for SO42- and NO3- concentration, specific conductance, and sample volume at all sites was less than 7%. Precision for H+ concentration and deposition ranged from less than 10% at sites with typically high levels of H+ concentration to greater than 30% at sites with low H+ concentration. Median difference for analyte concentration and deposition was typically 1.5-2-times greater for samples collected during the winter than during other seasons at two northern sites. 
Likewise, the median relative difference in sample volume for winter samples was more than double the annual median relative difference at the two northern sites. Bias accounted for less than 25% of the collocated variability in analyte concentration and deposition from weekly collocated precipitation samples at most sites.
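The median relative difference statistic can be computed as below; dividing the absolute difference by the pair mean is one common convention (an assumption here), and the concentrations are hypothetical:

```python
import numpy as np

def median_relative_difference(primary, duplicate):
    """Median of |a - b| / mean(a, b) over paired samples, as a percentage."""
    a = np.asarray(primary, dtype=float)
    b = np.asarray(duplicate, dtype=float)
    rel = np.abs(a - b) / ((a + b) / 2.0)
    return 100.0 * np.median(rel)

# Hypothetical paired sulfate concentrations (mg/L) from collocated samplers.
mrd = median_relative_difference([1.00, 2.10, 0.50], [1.05, 2.00, 0.52])
```

The absolute-difference version simply omits the denominator; both are computed per analyte and per site in the study.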
Space flight effects on antioxidant molecules in dry tardigrades: the TARDIKISS experiment.
Rizzo, Angela Maria; Altiero, Tiziana; Corsetto, Paola Antonia; Montorfano, Gigliola; Guidetti, Roberto; Rebecchi, Lorena
2015-01-01
The TARDIKISS (Tardigrades in Space) experiment was part of the Biokon in Space (BIOKIS) payload, a set of multidisciplinary experiments performed during the DAMA (Dark Matter) mission organized by the Italian Space Agency and the Italian Air Force in 2011. This mission supported the execution of short-duration (16-day) experiments taking advantage of the microgravity environment on board the Space Shuttle Endeavour (on its last mission, STS-134) docked to the International Space Station. TARDIKISS comprised three sample sets: one flight sample and two ground control samples. These samples provided the biological material used to test how space stressors, including microgravity, affected animal survivability, life cycle, DNA integrity, and pathways of molecules working as antioxidants. In this paper we compared the molecular pathways of some antioxidant molecules, thiobarbituric acid reactive substances, and fatty acid composition between flight and control samples in two tardigrade species, namely, Paramacrobiotus richtersi and Ramazzottius oberhaeuseri. In both species, the activities of ROS-scavenging enzymes, the total content of glutathione, and the fatty acid composition showed few significant differences between flight and control samples. The TARDIKISS experiment, together with a previous space experiment (TARSE), further confirms that both desiccated and hydrated tardigrades are useful animal models for space research.
Speciation of airborne dust from a nickel refinery roasting operation.
Andersen, I; Berge, S R; Resmann, F
1998-04-01
Earlier work-related lung and nasal cancer studies included estimates of exposures to different nickel species in the refinery. Based on the metallurgy, only insoluble nickel was believed to be present around the roasters, but mixed exposure was assumed in most areas, including the tankhouse. Occasional leaching tests of samples from the roaster area have indicated the presence of soluble nickel. This study reports on five parallel sets of dust samples collected from different floors with standard equipment and treated as follows. Two sets were leached with an ammonium citrate buffer at pH 4.4. Undissolved material was treated with HClO4/HNO3, evaporated to dryness, and dissolved in HCl; Ni, Cu, Co, Fe, Se, and As were determined in both fractions. Water-soluble Ni was found in all samples, ranging from 5 to 35%. Sulfate in the solutions correlated nearly stoichiometrically with the total metal content. The three remaining sets were investigated by, respectively, differential leaching, X-ray diffraction, and scanning electron microscopy. The percentages of soluble nickel found by differential leaching corresponded well with those obtained by the simplified procedure. X-ray diffraction analysis showed the presence of NiSO4.6H2O as well as oxides of Ni and Cu. This study indicates mixed exposures in the roaster area as well. It also clearly indicates that basing exposure assessment on the metallurgy alone can lead to serious misjudgements. The impact of this new information on the interpretation of cancer incidence at this refinery must await the analysis in an ongoing case-reference study.
On-line Geoscience Data Resources for Today's Undergraduates
NASA Astrophysics Data System (ADS)
Goodwillie, A. M.; Ryan, W.; Carbotte, S.; Melkonian, A.; Coplan, J.; Arko, R.; O'Hara, S.; Ferrini, V.; Leung, A.; Bonckzowski, J.
2008-12-01
Broadening the experience of undergraduates can be achieved by enabling free, unrestricted and convenient access to real scientific data. With funding from the U.S. National Science Foundation, the Marine Geoscience Data System (MGDS) (http://www.marine-geo.org/) serves as the integrated data portal for various NSF-funded projects and provides free public access and preservation to a wide variety of marine and terrestrial data including rock, fluid, biology and sediment samples information, underway geophysical data and multibeam bathymetry, water column and multi-channel seismics data. Users can easily view the locations of cruise tracks, sample and station locations against a backdrop of a multi-resolution global digital elevation model. A Search For Data web page rapidly extracts data holdings from the database and can be filtered on data and device type, field program ID, investigator name, geographical and date bounds. The data access experience is boosted by the MGDS use of standardised OGC-compliant Web Services to support uniform programmatic interfaces. GeoMapApp (http://www.geomapapp.org/), a free MGDS data visualization tool, supports map-based dynamic exploration of a broad suite of geosciences data. Built-in land and marine data sets include tectonic plate boundary compilations, DSDP/ODP core logs, earthquake events, seafloor photos, and submersible dive tracks. Seamless links take users to data held by external partner repositories including PetDB, UNAVCO, IRIS and NGDC. Users can generate custom maps and grids and import their own data sets and grids. A set of short, video-style on-line tutorials familiarises users step-by-step with GeoMapApp functionality (http://www.geomapapp.org/tutorials/). Virtual Ocean (http://www.virtualocean.org/) combines the functionality of GeoMapApp with a 3-D earth browser built using the NASA WorldWind API for a powerful new data resource.
MGDS education involvement (http://www.marine-geo.org/, go to Education tab) includes the searchable Media Bank of images and video; KML files for viewing several MGDS data sets in Google Earth (tm); support in developing undergraduate-level teaching modules using NSF-MARGINS data. Examples of many of these data sets will be shown.
Effects of the built environment on physical activity of adults living in rural settings.
Frost, Stephanie S; Goins, R Turner; Hunter, Rebecca H; Hooker, Steven P; Bryant, Lucinda L; Kruger, Judy; Pluto, Delores
2010-01-01
To conduct a systematic review of the literature to examine the influence of the built environment (BE) on the physical activity (PA) of adults in rural settings. Key word searches of Academic Search Premier, PubMed, CINAHL, Web of Science, and Sport Discus were conducted. Studies published prior to June 2008 were included if they assessed one or more elements of the BE, examined relationships between the BE and PA, and focused on rural locales. Studies only reporting descriptive statistics or assessing the reliability of measures were excluded. Objective(s), sample size, sampling technique, geographic location, and definition of rural were extracted from each study. Methods of assessment and outcomes were extracted from the quantitative literature, and overarching themes were identified from the qualitative literature. Key characteristics and findings from the data are summarized in Tables 1 through 3. Twenty studies met inclusion and exclusion criteria. Positive associations were found among pleasant aesthetics, trails, safety/crime, parks, and walkable destinations. Research in this area is limited. Associations among elements of the BE and PA among adults appear to differ between rural and urban areas. Considerations for future studies include identifying parameters used to define rural, longitudinal research, and more diverse geographic sampling. Development and refinement of BE assessment tools specific to rural locations are also warranted.
Multiagency Urban Search Experiment Detector and Algorithm Test Bed
NASA Astrophysics Data System (ADS)
Nicholson, Andrew D.; Garishvili, Irakli; Peplow, Douglas E.; Archer, Daniel E.; Ray, William R.; Swinney, Mathew W.; Willis, Michael J.; Davidson, Gregory G.; Cleveland, Steven L.; Patton, Bruce W.; Hornback, Donald E.; Peltz, James J.; McLean, M. S. Lance; Plionis, Alexander A.; Quiter, Brian J.; Bandstra, Mark S.
2017-07-01
In order to provide benchmark data sets for radiation detector and algorithm development, a particle transport test bed has been created using experimental data as model input and validation. A detailed radiation measurement campaign at the Combined Arms Collective Training Facility in Fort Indiantown Gap, PA (FTIG), USA, provides sample background radiation levels for a variety of materials present at the site (including cinder block, gravel, asphalt, and soil) using long dwell high-purity germanium (HPGe) measurements. In addition, detailed light detection and ranging data and ground-truth measurements inform model geometry. This paper describes the collected data and the application of these data to create background and injected source synthetic data for an arbitrary gamma-ray detection system using particle transport model detector response calculations and statistical sampling. In the methodology presented here, HPGe measurements inform model source terms while detector response calculations are validated via long dwell measurements using 2"×4"×16" NaI(Tl) detectors at a variety of measurement points. A collection of responses, along with sampling methods and interpolation, can be used to create data sets to gauge radiation detector and algorithm (including detection, identification, and localization) performance under a variety of scenarios. Data collected at the FTIG site are available for query, filtering, visualization, and download at muse.lbl.gov.
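The statistical-sampling step, generating synthetic counts from a background model plus an injected source term, can be sketched as a Poisson draw over detector channels (the rates and channel layout below are hypothetical, not from the FTIG data):

```python
import numpy as np

def synthetic_counts(background_rate, source_template, dwell_s, rng):
    """Poisson-sample per-channel counts from background plus an injected source.
    background_rate and source_template are expected counts/s per channel."""
    expected = dwell_s * (np.asarray(background_rate) + np.asarray(source_template))
    return rng.poisson(expected)

rng = np.random.default_rng(0)
bkg = np.full(128, 0.2)                  # flat hypothetical background (cps/channel)
src = np.zeros(128)
src[60:64] = 0.5                         # injected gamma line near channel 62
counts = synthetic_counts(bkg, src, dwell_s=100.0, rng=rng)
```

In the test bed, the background term comes from validated transport-model detector responses rather than a flat assumption, and many such draws are generated to exercise detection, identification, and localization algorithms.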
Textual data in psychiatry: reasoning by analogy to quantitative principles.
Yang, Suzanne; Mulvey, Edward P; Falissard, Bruno
2012-08-01
Personal meaning in subjective experience is a key element in the treatment of persons with mental disorders. Open-response speech samples would appear to be suitable for studying this type of subjective experience, but there are still important challenges in using language as data. Scientific principles involved in sample size calculation, validity, and reliability may be applicable, by analogy, to data collected in the form of words. We describe a rationale for including computer-assisted techniques as one step of a qualitative analysis procedure that includes manual reading. Clarification of a framework for including language as data in psychiatric research may allow us to more effectively bridge biological and psychometric research with clinical practice, a setting where the patient's clinical "data" are, in large part, conveyed in words.
Community-Based Validation of the Social Phobia Screener (SOPHS).
Batterham, Philip J; Mackinnon, Andrew J; Christensen, Helen
2017-10-01
There is a need for brief, accurate screening scales for social anxiety disorder to enable better identification of the disorder in research and clinical settings. A five-item social anxiety screener, the Social Phobia Screener (SOPHS), was developed to address this need. The screener was validated in two samples: (a) 12,292 Australian young adults screened for a clinical trial, including 1,687 participants who completed a phone-based clinical interview and (b) 4,214 population-based Australian adults recruited online. The SOPHS (78% sensitivity, 72% specificity) was found to have comparable screening performance to the Social Phobia Inventory (77% sensitivity, 71% specificity) and Mini-Social Phobia Inventory (74% sensitivity, 73% specificity) relative to clinical criteria in the trial sample. In the population-based sample, the SOPHS was also accurate (95% sensitivity, 73% specificity) in identifying Diagnostic and Statistical Manual of Mental Disorders-Fifth edition social anxiety disorder. The SOPHS is a valid and reliable screener for social anxiety that is freely available for use in research and clinical settings.
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieving analyte quantitation in the presence of unexpected interferences. For both simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in the calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates achieving the so-called second-order advantage for the analyte of interest, which is otherwise available only from more complex, information-richer higher-order instrumental data. The proposed method is tested using a simulated data set and two experimental data systems: one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
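The alternating least-squares core of multivariate curve resolution can be sketched in a few lines. This is a bare-bones illustration with only a non-negativity constraint; the correlation constraint of the abstract (regressing the analyte's concentration profile onto the reference calibration values each cycle) is deliberately omitted, so this is not the published algorithm:

```python
import numpy as np

def mcr_als(D, S0, n_iter=50):
    """Bilinear decomposition D ~ C @ S.T by alternating least squares.

    D  -- data matrix (samples x wavelengths)
    S0 -- initial estimate of the spectral profiles (wavelengths x species)
    Non-negativity is enforced by clipping, a common simple choice.
    """
    S = S0.copy()
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S.T), 0, None)    # concentration update
        S = np.clip((np.linalg.pinv(C) @ D).T, 0, None)  # spectra update
    return C, S
```

On noiseless bilinear data with a reasonable initial spectral estimate, the reconstruction error drops quickly; the constraint machinery in the paper is what makes the analyte's concentration column quantitatively meaningful.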
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which we show minimizes the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results, with both Gaussian and polynomial kernels, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
Frank, Richard A; Milestone, Craig B; Rowland, Steve J; Headley, John V; Kavanagh, Richard J; Lengger, Sabine K; Scarlett, Alan G; West, Charles E; Peru, Kerry M; Hewitt, L Mark
2016-10-01
The acid-extractable organic compounds (AEOs), including naphthenic acids (NAs), present within oil sands process-affected water (OSPW) receive great attention due to their known toxicity. While recent progress in advanced separation and analytical methodologies for AEOs has improved our understanding of the composition of these mixtures, little is known regarding any variability (i.e., spatial, temporal) inherent within, or between, tailings ponds. In this study, 5 samples were collected from the same location of one tailings pond over a 2-week period. In addition, 5 samples were collected simultaneously from different locations within a tailings pond from a different mine site, as well as its associated recycling pond. In both cases, the AEOs were analyzed using SFS, ESI-MS, HRMS, GC×GC-ToF/MS, and GC- & LC-QToF/MS (GC analyses following conversion to methyl esters). Principal component analysis of HRMS data was able to distinguish the ponds from each other, while data from GC×GC-ToF/MS, and LC- and GC-QToF/MS were used to differentiate samples from within the temporal and spatial sample sets, with the greater variability associated with the latter. Spatial differences could be attributed to pond dynamics, including differences in inputs of tailings and surface run-off. Application of novel chemometric data analyses of unknown compounds detected by LC- and GC-QToF/MS allowed further differentiation of samples both within and between data sets, providing an innovative approach for future fingerprinting studies. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Robust model selection and the statistical classification of languages
NASA Astrophysics Data System (ADS)
García, J. E.; González-López, V. A.; Viola, M. L. L.
2012-10-01
In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we focus on the family of variable length Markov chain models, which includes the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we derive the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where one sample is formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q, and we report a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples, an important open problem in phonology. A persistent difficulty with this problem is that the speech samples comprise several sentences produced by diverse speakers, and therefore correspond to a mixture of distributions. The usual way to deal with this has been to choose, by listening to the samples, a subset of the original sample that seems to best represent each language. In our application we use the full dataset without any preselection of samples. We apply our robust methodology to estimate a model representing the main law for each language. Our findings agree with the linguistic conjectures about the rhythm of the languages included in our dataset.
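The selection idea can be illustrated on first-order Markov chains: estimate a transition matrix per sample, measure pairwise relative entropies, and keep the majority cluster. This is a toy stand-in, not the paper's procedure; the medoid-and-half-gap cutoff below is a hypothetical simplification of the threshold analysis behind the ABDP:

```python
import numpy as np

def trans_matrix(x, k=2):
    """Empirical first-order transition matrix of an integer sequence,
    with add-one smoothing so the KL divergence is always finite."""
    P = np.ones((k, k))
    for a, b in zip(x[:-1], x[1:]):
        P[a, b] += 1
    return P / P.sum(1, keepdims=True)

def kl(P, Q):
    """Mean over states of the KL divergence between transition rows."""
    return (P * np.log(P / Q)).sum(1).mean()

def select_majority(samples, k=2):
    """Keep samples close (symmetrized KL) to the medoid sample."""
    mats = [trans_matrix(s, k) for s in samples]
    D = np.array([[kl(P, Q) + kl(Q, P) for Q in mats] for P in mats])
    medoid = D.sum(1).argmin()           # sample with smallest total distance
    cutoff = D[medoid].max() / 2         # heuristic threshold
    return [i for i in range(len(samples)) if D[medoid, i] <= cutoff]

def simulate(P, n, rng):
    """Sample a length-n path from transition matrix P, starting at 0."""
    x = [0]
    for _ in range(n - 1):
        x.append(int(rng.choice(len(P), p=P[x[-1]])))
    return x
```

With three clean samples and two contaminated ones, the clean majority is recovered, mirroring the breakdown-point guarantee for contamination below one half.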
Improving compound-protein interaction prediction by building up highly credible negative samples.
Liu, Hui; Sun, Jianjiang; Guan, Jihong; Zheng, Jie; Zhou, Shuigeng
2015-06-15
Computational prediction of compound-protein interactions (CPIs) is of great importance for drug design and development, as genome-scale experimental validation of CPIs is not only time-consuming but also prohibitively expensive. With the availability of an increasing number of validated interactions, the performance of computational prediction approaches is severely impeded by the lack of reliable negative CPI samples. A systematic method of screening reliable negative samples thus becomes critical to improving the performance of in silico prediction methods. This article aims at building up a set of highly credible negative samples of CPIs via an in silico screening method. As most existing computational models assume that similar compounds are likely to interact with similar target proteins, and achieve remarkable performance under this assumption, it is rational to identify potential negative samples from the converse proposition: proteins dissimilar to every known or predicted target of a compound are unlikely to be targeted by that compound, and vice versa. We integrated various resources, including chemical structures, chemical expression profiles and side effects of compounds, and amino acid sequences, protein-protein interaction networks and functional annotations of proteins, into a systematic screening framework. We first tested the screened negative samples on six classical classifiers; all of them achieved remarkably higher performance on our negative samples than on randomly generated negative samples for both human and Caenorhabditis elegans data. We then verified the negative samples on three existing prediction models, including the bipartite local model, Gaussian kernel profile and Bayesian matrix factorization, and found that the performance of these models is also significantly improved on the screened negative samples. Moreover, we validated the screened negative samples on a drug bioactivity dataset. Finally, we derived two sets of new interactions by training a support vector machine classifier on the positive interactions annotated in DrugBank and our screened negative interactions. The screened negative samples and the predicted interactions provide the research community with a useful resource for identifying new drug targets and a helpful supplement to the current curated compound-protein databases. Supplementary files are available at: http://admis.fudan.edu.cn/negative-cpi/. © The Author 2015. Published by Oxford University Press.
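The screening principle (a protein dissimilar to every known target of a compound is a credible negative for that compound) can be sketched directly. The similarity matrix and threshold here are hypothetical placeholders for the paper's integrated multi-resource similarities:

```python
import numpy as np

def credible_negatives(interactions, prot_sim, threshold=0.3):
    """Pairs (c, p) where p is dissimilar to *every* known target of c.

    interactions -- dict: compound id -> set of known target protein indices
    prot_sim     -- symmetric protein-protein similarity matrix in [0, 1]
    threshold    -- similarity below which two proteins count as dissimilar
                    (an assumed value, not from the paper)
    """
    n_prot = prot_sim.shape[0]
    negatives = []
    for c, targets in interactions.items():
        for p in range(n_prot):
            if p in targets:
                continue
            if all(prot_sim[p, t] < threshold for t in targets):
                negatives.append((c, p))
    return negatives
```

Pairs surviving this filter then serve as the negative class when training a classifier such as an SVM on the known positives.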
NASA Astrophysics Data System (ADS)
Cao, X.; Tian, F.; Telford, R.; Ni, J.; Xu, Q.; Chen, F.; Liu, X.; Stebich, M.; Zhao, Y.; Herzschuh, U.
2017-12-01
Pollen-based quantitative reconstruction of past climate variables is a standard palaeoclimatic approach. Despite knowing that the spatial extent of the calibration-set affects the reconstruction result, guidance is lacking as to how to determine a suitable spatial extent of the pollen-climate calibration-set. In this study, past mean annual precipitation (Pann) during the Holocene (since 11.5 cal ka BP) is reconstructed repeatedly for pollen records from Qinghai Lake (36.7°N, 100.5°E; north-east Tibetan Plateau), Gonghai Lake (38.9°N, 112.2°E; north China) and Sihailongwan Lake (42.3°N, 126.6°E; north-east China) using calibration-sets of varying spatial extents extracted from the modern pollen dataset of China and Mongolia (2559 sampling sites and 168 pollen taxa in total). Results indicate that the spatial extent of the calibration-set has a strong impact on model performance, analogue quality and reconstruction diagnostics (absolute value, range, trend, optimum). Generally, these effects are stronger with the modern analogue technique (MAT) than with weighted averaging partial least squares (WA-PLS). With respect to fossil spectra from northern China, the spatial extent of calibration-sets should be restricted to ca. 1000 km in radius, because small-scale calibration-sets (<800 km radius) will likely fail to include enough spatial variation in the modern pollen assemblages to reflect the temporal range shifts during the Holocene, while too broad a calibration-set (>1500 km radius) will include taxa with very different pollen-climate relationships.
Based on our results we conclude that the optimal calibration-set should 1) cover a reasonably large spatial extent with an even distribution of modern pollen samples; 2) show good model performance under cross-validation, high analogue quality, and a good fit with the target fossil pollen spectra; 3) possess high taxonomic resolution; and 4) respect the modern and past distribution ranges of taxa inferred from palaeo-genetic and macrofossil studies.
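The modern analogue technique mentioned above is simple to sketch: score each modern pollen spectrum against the fossil spectrum with the squared-chord distance and average the climate values of the k closest analogues. The tiny data set and k are illustrative only:

```python
import numpy as np

def squared_chord(a, b):
    """Squared-chord distance, the usual dissimilarity for pollen percentages."""
    return ((np.sqrt(a) - np.sqrt(b)) ** 2).sum()

def mat_reconstruct(fossil, modern_pollen, modern_climate, k=5):
    """Modern analogue technique (MAT): the k modern samples closest to
    the fossil spectrum contribute the mean of their climate values."""
    d = np.array([squared_chord(fossil, m) for m in modern_pollen])
    nearest = d.argsort()[:k]
    return modern_climate[nearest].mean()
```

This makes the calibration-set sensitivity plain: which samples fall in the k nearest analogues, and hence the reconstructed value, depends directly on the spatial extent of the modern data supplied.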
Development of Strain-Specific Primers for Identification of Bifidobacterium bifidum BGN4.
Youn, So Youn; Ji, Geun Eog; Han, Yoo Ri; Park, Myeong Soo
2017-05-28
Bifidobacterium bifidum BGN4 (BGN4) has many proven beneficial effects, including antiallergy and anticancer properties. It has been commercialized and used in several probiotic products, and thus strain-specific identification of this strain is very valuable for further strain-dependent physiological study. For this purpose, we developed novel multiplex polymerase chain reaction (PCR) primer sets for strain-specific detection of BGN4 in commercial products and in fecal samples from animal models. The primer sets were tested on seven strains of B. bifidum and 75 strains of other Bifidobacterium species. The BGN4-specific regions were identified using megaBLAST against genome sequences of various B. bifidum databases, and four sets of primers were designed. As a result, only BGN4 produced all four PCR products simultaneously, whereas the other strains did not. The PCR detection limit using the BGN4-specific primer sets was 2.8 × 10¹ CFU/ml of BGN4. These primer sets also detected and identified BGN4 with high specificity in probiotic products containing BGN4 and in fecal samples from a BGN4-fed animal model. Our results indicate that the PCR assay from this study is an efficient tool for the simple, rapid, and reliable identification of the probiotic strain BGN4.
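The multiplex logic (call the strain only if all primer sets yield a product) can be checked in silico with simple string matching. The sequences below are toy examples, not the published BGN4 primers, and real primer design must also account for mismatches, melting temperature and product size:

```python
def revcomp(seq):
    """Reverse complement of a DNA string."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def amplifies(genome, fwd, rev):
    """True if the forward primer and the reverse complement of the
    reverse primer occur in amplifiable order on the plus strand."""
    i = genome.find(fwd)
    if i < 0:
        return False
    return genome.find(revcomp(rev), i + len(fwd)) >= 0

def strain_specific(genome, primer_sets):
    """Multiplex call: the strain is identified only if *every* primer
    pair produces a product, mirroring the four-band BGN4 criterion."""
    return all(amplifies(genome, f, r) for f, r in primer_sets)
```

Requiring all products simultaneously is what gives the multiplex assay its specificity: a near-relative strain missing even one target region is rejected.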
HLA imputation in an admixed population: An assessment of the 1000 Genomes data as a training set.
Nunes, Kelly; Zheng, Xiuwen; Torres, Margareth; Moraes, Maria Elisa; Piovezan, Bruno Z; Pontes, Gerlandia N; Kimura, Lilian; Carnavalli, Juliana E P; Mingroni Netto, Regina C; Meyer, Diogo
2016-03-01
Methods to impute HLA alleles based on dense single nucleotide polymorphism (SNP) data provide a valuable resource to association studies and evolutionary investigation of the MHC region. The availability of appropriate training sets is critical to the accuracy of HLA imputation, and the inclusion of samples with various ancestries is an important pre-requisite in studies of admixed populations. We assess the accuracy of HLA imputation using 1000 Genomes Project data as a training set, applying it to a highly admixed Brazilian population, the Quilombos from the state of São Paulo. To assess accuracy, we compared imputed and experimentally determined genotypes for 146 samples at 4 HLA classical loci. We found imputation accuracies of 82.9%, 81.8%, 94.8% and 86.6% for HLA-A, -B, -C and -DRB1 respectively (two-field resolution). Accuracies were improved when we included a subset of Quilombo individuals in the training set. We conclude that the 1000 Genomes data is a valuable resource for construction of training sets due to the diversity of ancestries and the potential for a large overlap of SNPs with the target population. We also show that tailoring training sets to features of the target population substantially enhances imputation accuracy. Copyright © 2016 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.
Haab, Brian B; Huang, Ying; Balasenthil, Seetharaman; Partyka, Katie; Tang, Huiyuan; Anderson, Michelle; Allen, Peter; Sasson, Aaron; Zeh, Herbert; Kaul, Karen; Kletter, Doron; Ge, Shaokui; Bern, Marshall; Kwon, Richard; Blasutig, Ivan; Srivastava, Sudhir; Frazier, Marsha L; Sen, Subrata; Hollingsworth, Michael A; Rinaudo, Jo Ann; Killary, Ann M; Brand, Randall E
2015-01-01
The validation of candidate biomarkers often is hampered by the lack of a reliable means of assessing and comparing performance. We present here a reference set of serum and plasma samples to facilitate the validation of biomarkers for resectable pancreatic cancer. The reference set includes a large cohort of stage I-II pancreatic cancer patients, recruited from 5 different institutions, and relevant control groups. We characterized the performance of the current best serological biomarker for pancreatic cancer, CA 19-9, using plasma samples from the reference set to provide a benchmark for future biomarker studies and to further our knowledge of CA 19-9 in early-stage pancreatic cancer and the control groups. CA 19-9 distinguished pancreatic cancers from the healthy and chronic pancreatitis groups with an average sensitivity and specificity of 70-74%, similar to previous studies using all stages of pancreatic cancer. Chronic pancreatitis patients did not show CA 19-9 elevations, but patients with benign biliary obstruction had elevations nearly as high as the cancer patients. We gained additional information about the biomarker by comparing two distinct assays. The two CA 19-9 assays agreed well in overall performance but diverged in measurements of individual samples, potentially due to subtle differences in antibody specificity as revealed by glycan array analysis. Thus, the reference set promises to be a valuable resource for biomarker validation and comparison, and the CA 19-9 data presented here will be useful for benchmarking and for exploring relationships to CA 19-9.
Callaway, John C.; Cahoon, Donald R.; Lynch, James C.
2014-01-01
Tidal wetlands are highly sensitive to processes that affect their elevation relative to sea level. The surface elevation table–marker horizon (SET–MH) method has been used to successfully measure these processes, including sediment accretion, changes in relative elevation, and shallow soil processes (subsidence and expansion due to root production). The SET–MH method is capable of measuring changes at very high resolution (±millimeters) and has been used worldwide both in natural wetlands and under experimental conditions. Marker horizons are typically deployed using feldspar over 50- by 50-cm plots, with replicate plots at each sampling location. Plots are sampled using a liquid N2 cryocorer that freezes a small sample, allowing the handling and measurement of soft and easily compressed soils with minimal compaction. The SET instrument is a portable device that is attached to a permanent benchmark to make high-precision measurements of wetland surface elevation. The SET instrument has evolved substantially in recent decades, and the current rod SET (RSET) is widely used. For the RSET, a 15-mm-diameter stainless steel rod is pounded into the ground until substantial resistance is achieved to establish a benchmark. The SET instrument is attached to the benchmark and leveled such that it reoccupies the same reference plane in space, and pins lowered from the instrument repeatedly measure the same point on the soil surface. Changes in the height of the lowered pins reflect changes in the soil surface. Permanent or temporary platforms provide access to SET and MH locations without disturbing the wetland surface.
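The arithmetic that makes the paired SET-MH measurements useful is worth making explicit: accretion above the marker horizon minus the elevation change measured by the SET isolates shallow subsurface processes. A minimal sketch (pin readings are distances down to the soil surface, so a shorter reading means the surface rose; units are mm):

```python
def elevation_change(pin_t0, pin_t1):
    """Mean surface-elevation change (mm) between two SET readings.
    Pins measure down to the soil surface, so reading decreases
    correspond to surface rise."""
    return sum(a - b for a, b in zip(pin_t0, pin_t1)) / len(pin_t0)

def shallow_subsidence(accretion_mm, elev_change_mm):
    """Vertical accretion (marker horizon) minus elevation change (SET);
    positive values indicate shallow subsidence, negative values expansion."""
    return accretion_mm - elev_change_mm
```

For example, 8 mm of accretion over the feldspar layer combined with only 5 mm of measured elevation gain implies 3 mm of shallow subsidence in the soil column above the benchmark's base.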
Manning, T.K.; Smith, K.E.; Wood, C.D.; Williams, J.B.
1994-01-01
Water-quality samples were collected from Chicod Creek in the Coastal Plain Province of North Carolina during the summer of 1992 as part of the U.S. Geological Survey's National Water-Quality Assessment Program. Chicod Creek is in the Albemarle-Pamlico drainage area, one of four study units designated to test equipment and procedures for collecting and processing samples for the solid-phase extraction of selected pesticides. The equipment and procedures were used to isolate 47 pesticides, including organonitrogen, carbamate, organochlorine, organophosphate, and other compounds, targeted for analysis by gas chromatography/mass spectrometry. Sample-collection and processing equipment, equipment-cleaning and set-up procedures, methods for collecting, splitting, and performing solid-phase extraction of samples, and water-quality data resulting from the field test are presented in this report. Most problems encountered during this intensive sampling exercise were operational difficulties related to the equipment used to process samples.
Sampling design for long-term regional trends in marine rocky intertidal communities
Irvine, Gail V.; Shelley, Alice
2013-01-01
Probability-based designs reduce bias and allow inference of results to the pool of sites from which they were chosen. We developed and tested probability-based designs for monitoring marine rocky intertidal assemblages at Glacier Bay National Park and Preserve (GLBA), Alaska. A multilevel design was used that varied in scale and inference. The levels included aerial surveys, extensive sampling of 25 sites, and more intensive sampling of 6 sites. Aerial surveys of a subset of intertidal habitat indicated that the original target habitat of bedrock-dominated sites with slope ≤30° was rare. This unexpected finding illustrated one value of probability-based surveys and led to a shift in the target habitat type to include steeper, more mixed rocky habitat. Subsequently, we evaluated the statistical power of different sampling methods and sampling strategies to detect changes in the abundances of the predominant sessile intertidal taxa: barnacles Balanomorpha, the mussel Mytilus trossulus, and the rockweed Fucus distichus subsp. evanescens. There was greatest power to detect trends in Mytilus and lesser power for barnacles and Fucus. Because of its greater power, the extensive, coarse-grained sampling scheme was adopted in subsequent years over the intensive, fine-grained scheme. The sampling attributes that had the largest effects on power included sampling of “vertical” line transects (vs. horizontal line transects or quadrats) and increasing the number of sites. We also evaluated the power of several management-set parameters. Given equal sampling effort, sampling more sites fewer times had greater power. The information gained through intertidal monitoring is likely to be useful in assessing changes due to climate, including ocean acidification; invasive species; trampling effects; and oil spills.
Marcano Belisario, José S; Jamsek, Jan; Huckvale, Kit; O'Donoghue, John; Morrison, Cecily P; Car, Josip
2015-07-27
Self-administered survey questionnaires are an important data collection tool in clinical practice, public health research and epidemiology. They are ideal for achieving a wide geographic coverage of the target population, dealing with sensitive topics and are less resource-intensive than other data collection methods. These survey questionnaires can be delivered electronically, which can maximise the scalability and speed of data collection while reducing cost. In recent years, the use of apps running on consumer smart devices (i.e., smartphones and tablets) for this purpose has received considerable attention. However, variation in the mode of delivering a survey questionnaire could affect the quality of the responses collected. To assess the impact that smartphone and tablet apps as a delivery mode have on the quality of survey questionnaire responses compared to any other alternative delivery mode: paper, laptop computer, tablet computer (manufactured before 2007), short message service (SMS) and plastic objects. We searched MEDLINE, EMBASE, PsycINFO, IEEEXplore, Web of Science, CABI: CAB Abstracts, Current Contents Connect, ACM Digital, ERIC, Sociological Abstracts, Health Management Information Consortium, the Campbell Library and CENTRAL. We also searched registers of current and ongoing clinical trials such as ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform. We also searched the grey literature in OpenGrey, Mobile Active and ProQuest Dissertation & Theses. Lastly, we searched Google Scholar and the reference lists of included studies and relevant systematic reviews. We performed all searches up to 12 and 13 April 2015. We included parallel randomised controlled trials (RCTs), crossover trials and paired repeated measures studies that compared the electronic delivery of self-administered survey questionnaires via a smartphone or tablet app with any other delivery mode. 
We included data obtained from participants completing health-related self-administered survey questionnaires, both validated and non-validated. We also included data provided both by healthy volunteers and by those with any clinical diagnosis. We included studies that reported any of the following outcomes: data equivalence; data accuracy; data completeness; response rates; differences in the time taken to complete a survey questionnaire; differences in respondents' adherence to the original sampling protocol; and acceptability to respondents of the delivery mode. We included studies that were published in 2007 or after, as devices that became available during this time are compatible with the mobile operating system (OS) framework that focuses on apps. Two review authors independently extracted data from the included studies using a standardised form created for this systematic review in REDCap. They then compared their forms to reach consensus. Through an initial systematic mapping of the included studies, we identified two settings in which survey completion took place: controlled and uncontrolled. These settings differed in terms of (i) the location where surveys were completed, (ii) the frequency and intensity of sampling protocols, and (iii) the level of control over potential confounders (e.g., type of technology, level of help offered to respondents). We conducted a narrative synthesis of the evidence because a meta-analysis was not appropriate due to high levels of clinical and methodological diversity. We reported our findings for each outcome according to the setting in which the studies were conducted.
We included 14 studies (15 records) with a total of 2275 participants, although we included only 2272 participants in the final analyses as there were missing data for three participants from one included study. Regarding data equivalence, in both controlled and uncontrolled settings, the included studies found no significant differences in the mean overall scores between apps and other delivery modes, and all correlation coefficients exceeded the recommended thresholds for data equivalence. Concerning the time taken to complete a survey questionnaire in a controlled setting, one study found that an app was faster than paper, whereas the other study did not find a significant difference between the two delivery modes. In an uncontrolled setting, one study found that an app was faster than SMS. Data completeness and adherence to sampling protocols were reported only in uncontrolled settings. Regarding the former, an app was found to result in more complete records than paper, and in significantly more data entries than an SMS-based survey questionnaire. Regarding adherence to the sampling protocol, apps may be better than paper but no different from SMS. We identified multiple definitions of acceptability to respondents, with inconclusive results: preference; ease of use; willingness to use a delivery mode; satisfaction; effectiveness of the system; informativeness; perceived time taken to complete the survey questionnaire; perceived benefit of a delivery mode; perceived usefulness of a delivery mode; perceived ability to complete a survey questionnaire; maximum length of time that participants would be willing to use a delivery mode; and reactivity to the delivery mode and its successful integration into respondents' daily routine. Finally, regardless of the study setting, none of the included studies reported data accuracy or response rates.
Our results, based on a narrative synthesis of the evidence, suggest that apps might not affect data equivalence as long as the intended clinical application of the survey questionnaire, its intended frequency of administration and the setting in which it was validated remain unchanged. There were no data on data accuracy or response rates, and findings on the time taken to complete a self-administered survey questionnaire were contradictory. Furthermore, although apps might improve data completeness, there is not enough evidence to assess their impact on adherence to sampling protocols. None of the included studies assessed how elements of user interaction design, survey questionnaire design and intervention design might influence mode effects. Those conducting research in public health and epidemiology should not assume that mode effects relevant to other delivery modes apply to apps running on consumer smart devices. Those conducting methodological research might wish to explore the issues highlighted by this systematic review.
Freedson, Patty S; Lyden, Kate; Kozey-Keadle, Sarah; Staudenmayer, John
2011-12-01
Previous work from our laboratory provided a "proof of concept" for use of artificial neural networks (nnets) to estimate metabolic equivalents (METs) and identify activity type from accelerometer data (Staudenmayer J, Pober D, Crouter S, Bassett D, Freedson P, J Appl Physiol 107: 1300-1307, 2009). The purpose of this study was to develop new nnets based on a larger, more diverse training data set and apply these nnet prediction models to an independent sample to evaluate the robustness and flexibility of this machine-learning modeling technique. The nnet training data set (University of Massachusetts) included 277 participants who each completed 11 activities. The independent validation sample (n = 65) (University of Tennessee) completed one of three activity routines. Criterion measures were 1) measured METs assessed using open-circuit indirect calorimetry; and 2) observed activity to identify activity type. The nnet input variables included five accelerometer count distribution features and the lag-1 autocorrelation. The bias and root mean square errors for the nnet MET model trained on the University of Massachusetts data and applied to the University of Tennessee sample were +0.32 and 1.90 METs, respectively. Seventy-seven percent of the activities were correctly classified as sedentary/light, moderate, or vigorous intensity. For activity type, household and locomotion activities were correctly classified by the nnet activity-type model 98.1 and 89.5% of the time, respectively, and sport was correctly classified 23.7% of the time. Use of this machine-learning technique operates reasonably well when applied to an independent sample. We propose the creation of an open-access activity dictionary, including accelerometer data from a broad array of activities, leading to further improvements in prediction accuracy for METs, activity intensity, and activity type.
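The feature vector fed to the nnet can be sketched from the description above. The specific percentiles are an assumption (the abstract says only "five accelerometer count distribution features"); the lag-1 autocorrelation is as stated:

```python
import numpy as np

def nnet_features(counts):
    """Six inputs per accelerometer window: five percentiles of the count
    distribution (10th, 25th, 50th, 75th, 90th -- an assumed choice) plus
    the lag-1 autocorrelation of the count series."""
    counts = np.asarray(counts, dtype=float)
    pcts = np.percentile(counts, [10, 25, 50, 75, 90])
    c = counts - counts.mean()
    lag1 = (c[:-1] * c[1:]).sum() / (c * c).sum()  # lag-1 autocorrelation
    return np.concatenate([pcts, [lag1]])
```

Distribution features capture activity intensity while the autocorrelation captures the temporal regularity that separates, say, locomotion from intermittent household tasks.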
Comparison of diagnostic techniques for the detection of Cryptosporidium oocysts in animal samples
Mirhashemi, Marzieh Ezzaty; Zintl, Annetta; Grant, Tim; Lucy, Frances E.; Mulcahy, Grace; De Waal, Theo
2015-01-01
While a large number of laboratory methods for the detection of Cryptosporidium oocysts in faecal samples are now available, their efficacy for identifying asymptomatic cases of cryptosporidiosis is poorly understood. This study was carried out to determine a reliable screening test for epidemiological studies in livestock. In addition, three molecular tests were compared to identify the Cryptosporidium species responsible for infection in cattle, sheep and horses. A variety of diagnostic tests, including microscopic (Kinyoun's staining), immunological (direct fluorescence antibody tests or DFAT), enzyme-linked immunosorbent assay (ELISA), and molecular methods (nested PCR), were compared to assess their ability to detect Cryptosporidium in cattle, horse and sheep faecal samples. The results indicate that the sensitivity and specificity of each test is highly dependent on the input samples; while Kinyoun's and DFAT proved to be reliable screening tools for cattle samples, DFAT and PCR analysis (targeted at the 18S rRNA gene fragment) were more sensitive for screening sheep and horse samples. Finally, different PCR primer sets targeted at the same region resulted in the preferential amplification of certain Cryptosporidium species when multiple species were present in the sample. Therefore, for identification of Cryptosporidium spp. in the event of asymptomatic cryptosporidiosis, the combination of different 18S rRNA nested PCR primer sets is recommended for further epidemiological applications and also for tracking the sources of infection. PMID:25662435
The biobank of the Norwegian mother and child cohort Study: A resource for the next 100 years
Rønningen, Kjersti S.; Paltiel, Liv; Meltzer, Helle M.; Nordhagen, Rannveig; Lie, Kari K.; Hovengen, Ragnhild; Haugen, Margaretha; Nystad, Wenche; Magnus, Per; Hoppin, Jane A.
2007-01-01
Introduction Long-term storage of biological materials is a critical component of any epidemiological study. In designing specimen repositories, efforts need to balance future needs for samples with logistical constraints necessary to process and store samples in a timely fashion. Objectives In the Norwegian Mother and Child Cohort Study (MoBa), the Biobank was charged with long-term storage of more than 380,000 biological samples from pregnant women, their partners and their children for up to 100 years. Methods Biological specimens include whole blood, plasma, DNA and urine; samples are collected at 50 hospitals in Norway. All samples are sent via ordinary mail to the Biobank in Oslo where the samples are registered, aliquoted and DNA extracted. DNA is stored at −20 °C while whole blood, urine and plasma are stored at −80 °C. Results As of July 2006, over 227,000 sample sets have been collected, processed and stored at the Biobank. Currently 250–300 sets are received daily. An important part of the Biobank is the quality control program. Conclusion With the unique combination of biological specimens and questionnaire data, the MoBa Study will constitute a resource for many future investigations of the separate and combined effects of genetic and environmental factors on pregnancy outcome and on human morbidity, mortality and health in general. PMID:17031521
Streicher, Jeffrey W; Schulte, James A; Wiens, John J
2016-01-01
Targeted sequence capture is becoming a widespread tool for generating large phylogenomic data sets to address difficult phylogenetic problems. However, this methodology often generates data sets in which increasing the number of taxa and loci increases amounts of missing data. Thus, a fundamental (but still unresolved) question is whether sampling should be designed to maximize sampling of taxa or genes, or to minimize the inclusion of missing data cells. Here, we explore this question for an ancient, rapid radiation of lizards, the pleurodont iguanians. Pleurodonts include many well-known clades (e.g., anoles, basilisks, iguanas, and spiny lizards) but relationships among families have proven difficult to resolve strongly and consistently using traditional sequencing approaches. We generated up to 4921 ultraconserved elements with sampling strategies including 16, 29, and 44 taxa, from 1179 to approximately 2.4 million characters per matrix and approximately 30% to 60% total missing data. We then compared mean branch support for interfamilial relationships under these 15 different sampling strategies for both concatenated (maximum likelihood) and species tree (NJst) approaches (after showing that mean branch support appears to be related to accuracy). We found that both approaches had the highest support when including loci with up to 50% missing taxa (matrices with ~40-55% missing data overall). Thus, our results show that simply excluding all missing data may be highly problematic as the primary guiding principle for the inclusion or exclusion of taxa and genes. The optimal strategy was somewhat different for each approach, a pattern that has not been shown previously. For concatenated analyses, branch support was maximized when including many taxa (44) but fewer characters (1.1 million). For species-tree analyses, branch support was maximized with minimal taxon sampling (16) but many loci (4789 of 4921). 
We also show that the choice of these sampling strategies can be critically important for phylogenomic analyses, since some strategies lead to demonstrably incorrect inferences (using the same method) that have strong statistical support. Our preferred estimate provides strong support for most interfamilial relationships in this important but phylogenetically challenging group.
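The sampling strategies compared above turn on one threshold: how many taxa may be missing from a locus before that locus is excluded. A minimal sketch of that filtering step, with an illustrative data layout (locus name mapped to the set of taxa sequenced for it):

```python
def select_loci(locus_taxa, all_taxa, max_missing_frac=0.5):
    """Keep a locus only if the fraction of taxa missing from it is at
    most max_missing_frac.  The abstract's best-supported matrices kept
    loci with up to 50% missing taxa; the data layout is an assumption
    for illustration, not the authors' pipeline."""
    keep = []
    for locus, taxa in locus_taxa.items():
        missing_frac = 1.0 - len(taxa & all_taxa) / len(all_taxa)
        if missing_frac <= max_missing_frac:
            keep.append(locus)
    return keep
```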
Total-Evidence Dating under the Fossilized Birth–Death Process
Zhang, Chi; Stadler, Tanja; Klopfstein, Seraina; Heath, Tracy A.; Ronquist, Fredrik
2016-01-01
Bayesian total-evidence dating involves the simultaneous analysis of morphological data from the fossil record and morphological and sequence data from recent organisms, and it accommodates the uncertainty in the placement of fossils while dating the phylogenetic tree. Due to the flexibility of the Bayesian approach, total-evidence dating can also incorporate additional sources of information. Here, we take advantage of this and expand the analysis to include information about fossilization and sampling processes. Our work is based on the recently described fossilized birth–death (FBD) process, which has been used to model speciation, extinction, and fossilization rates that can vary over time in a piecewise manner. So far, sampling of extant and fossil taxa has been assumed to be either complete or uniformly at random, an assumption which is only valid for a minority of data sets. We therefore extend the FBD process to accommodate diversified sampling of extant taxa, which is standard practice in studies of higher-level taxa. We verify the implementation using simulations and apply it to the early radiation of Hymenoptera (wasps, ants, and bees). Previous total-evidence dating analyses of this data set were based on a simple uniform tree prior and dated the initial radiation of extant Hymenoptera to the late Carboniferous (309 Ma). The analyses using the FBD prior under diversified sampling, however, date the radiation to the Triassic and Permian (252 Ma), slightly older than the age of the oldest hymenopteran fossils. By exploring a variety of FBD model assumptions, we show that it is mainly the accommodation of diversified sampling that causes the push toward more recent divergence times. Accounting for diversified sampling thus has the potential to close the long-discussed gap between rocks and clocks. 
We conclude that the explicit modeling of fossilization and sampling processes can improve divergence time estimates, but only if all important model aspects, including sampling biases, are adequately addressed. PMID:26493827
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasudevamurthy, Gokul; Katoh, Yutai; Hunn, John D
2010-09-01
Zirconium carbide is a candidate to either replace or supplement silicon carbide as a coating material in TRISO fuel particles for high temperature gas-cooled reactor fuels. Six sets of ZrC coated surrogate microsphere samples, fabricated by the Japan Atomic Energy Agency using the fluidized bed chemical vapor deposition method, were irradiated in the High Flux Isotope Reactor at the Oak Ridge National Laboratory. These developmental samples available for the irradiation experiment were in conditions of either as-fabricated coated particles or particles that had been heat-treated to simulate the fuel compacting process. Five sets of samples were composed of nominally stoichiometric compositions, with the sixth being richer in carbon (C/Zr = 1.4). The samples were irradiated at 800 and 1250 °C with fast neutron fluences of 2 and 6 dpa. Post-irradiation, the samples were retrieved from the irradiation capsules followed by microstructural examination performed at the Oak Ridge National Laboratory's Low Activation Materials Development and Analysis Laboratory. This work was supported by the US Department of Energy Office of Nuclear Energy's Advanced Gas Reactor program as part of International Nuclear Energy Research Initiative collaboration with Japan. This report includes progress from that INERI collaboration, as well as results of some follow-up examination of the irradiated specimens. Post-irradiation examination items included microstructural characterization, and nanoindentation hardness/modulus measurements. The examinations revealed grain size enhancement and softening as the primary effects of both heat-treatment and irradiation in stoichiometric ZrC with a non-layered, homogeneous grain structure, raising serious concerns on the mechanical suitability of these particular developmental coatings as a replacement for SiC in TRISO fuel.
Samples with either free carbon or carbon-rich layers dispersed in the ZrC coatings experienced negligible grain size enhancement during both heat treatment and irradiation. However, these samples experienced irradiation-induced softening similar to stoichiometric ZrC samples.
Novy, Ari; Flory, S Luke; Honig, Joshua A; Bonos, Stacy; Hartman, Jean Marie
2012-02-01
Microsatellite markers were developed for the invasive plant Microstegium vimineum (Poaceae) to assess its population structure and to facilitate tracking of invasion expansion. Using 454 sequencing, 11 polymorphic and six monomorphic microsatellite primer sets were developed for M. vimineum. The primer sets were tested on individuals sampled from six populations in the United States and China. The polymorphic primers amplified di-, tri-, and tetranucleotide repeats with three to 10 alleles per locus. These markers will be useful for a variety of applications including tracking of invasion dynamics and population genetics studies.
Naugle, Alecia Larew; Barlow, Kristina E; Eblen, Denise R; Teter, Vanessa; Umholtz, Robert
2006-11-01
The U.S. Food Safety and Inspection Service (FSIS) tests sets of samples of selected raw meat and poultry products for Salmonella to ensure that federally inspected establishments meet performance standards defined in the pathogen reduction-hazard analysis and critical control point system (PR-HACCP) final rule. In the present report, sample set results are described and associations between set failure and set and establishment characteristics are identified for 4,607 sample sets collected from 1998 through 2003. Sample sets were obtained from seven product classes: broiler chicken carcasses (n = 1,010), cow and bull carcasses (n = 240), market hog carcasses (n = 560), steer and heifer carcasses (n = 123), ground beef (n = 2,527), ground chicken (n = 31), and ground turkey (n = 116). Of these 4,607 sample sets, 92% (4,255) were collected as part of random testing efforts (A sets), and 93% (4,166) passed. However, the percentage of positive samples relative to the maximum number of positive results allowable in a set increased over time for broilers but decreased or stayed the same for the other product classes. Three factors associated with set failure were identified: establishment size, product class, and year. Set failures were more likely early in the testing program (relative to 2003). Small and very small establishments were more likely to fail than large ones. Set failure was less likely in ground beef than in other product classes. Despite an overall decline in set failures through 2003, these results highlight the need for continued vigilance to reduce Salmonella contamination in broiler chicken and continued implementation of programs designed to assist small and very small establishments with PR-HACCP compliance issues.
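The set pass/fail rule described above, and the trend statistic the report tracks (positives relative to the maximum allowable), can be sketched simply. The counts passed in are illustrative; actual FSIS set sizes and per-class maxima vary by product class.

```python
def set_result(sample_results, max_allowed_positives):
    """Sketch of the PR-HACCP set rule: a sample set fails when the
    number of Salmonella-positive samples exceeds the maximum allowable
    for the product class.  Also returns the positives-to-maximum ratio,
    the trend statistic discussed in the abstract."""
    positives = sum(1 for r in sample_results if r)
    passed = positives <= max_allowed_positives
    return passed, positives / max_allowed_positives
```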
Tissue-aware RNA-Seq processing and normalization for heterogeneous and sparse data.
Paulson, Joseph N; Chen, Cho-Yi; Lopes-Ramos, Camila M; Kuijjer, Marieke L; Platig, John; Sonawane, Abhijeet R; Fagny, Maud; Glass, Kimberly; Quackenbush, John
2017-10-03
Although ultrahigh-throughput RNA-Sequencing has become the dominant technology for genome-wide transcriptional profiling, the vast majority of RNA-Seq studies typically profile only tens of samples, and most analytical pipelines are optimized for these smaller studies. However, projects are generating ever-larger data sets comprising RNA-Seq data from hundreds or thousands of samples, often collected at multiple centers and from diverse tissues. These complex data sets present significant analytical challenges due to batch and tissue effects, but provide the opportunity to revisit the assumptions and methods that we use to preprocess, normalize, and filter RNA-Seq data - critical first steps for any subsequent analysis. We find that analysis of large RNA-Seq data sets requires both careful quality control and the need to account for sparsity due to the heterogeneity intrinsic in multi-group studies. We developed Yet Another RNA Normalization software pipeline (YARN), which includes quality control and preprocessing, gene filtering, and normalization steps designed to facilitate downstream analysis of large, heterogeneous RNA-Seq data sets, and we demonstrate its use with data from the Genotype-Tissue Expression (GTEx) project. An R package instantiating YARN is available at http://bioconductor.org/packages/yarn.
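The group-aware filtering idea for sparse multi-tissue data can be illustrated as below. This is a generic sketch, not YARN's actual API or defaults; the thresholds and data layout are assumptions.

```python
import numpy as np

def tissue_aware_filter(counts, tissues, min_count=5, min_frac=0.5):
    """Keep a gene if, within at least one tissue group, it reaches
    min_count in at least min_frac of that group's samples.  This keeps
    tissue-specific genes that a pooled filter would discard.

    counts: genes x samples array; tissues: per-sample tissue labels.
    Thresholds are illustrative assumptions."""
    counts = np.asarray(counts)
    keep = np.zeros(counts.shape[0], dtype=bool)
    for tissue in set(tissues):
        cols = [i for i, t in enumerate(tissues) if t == tissue]
        expressed_frac = (counts[:, cols] >= min_count).mean(axis=1)
        keep |= expressed_frac >= min_frac
    return keep
```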
The Mira-Titan Universe. II. Matter Power Spectrum Emulation
NASA Astrophysics Data System (ADS)
Lawrence, Earl; Heitmann, Katrin; Kwan, Juliana; Upadhye, Amol; Bingham, Derek; Habib, Salman; Higdon, David; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas
2017-09-01
We introduce a new cosmic emulator for the matter power spectrum covering eight cosmological parameters. Targeted at optical surveys, the emulator provides accurate predictions out to a wavenumber k ~ 5 Mpc⁻¹ and redshift z ≤ 2. In addition to covering the standard set of ΛCDM parameters, massive neutrinos and a dynamical dark energy equation of state are included. The emulator is built on a sample set of 36 cosmological models, carefully chosen to provide accurate predictions over the wide and large parameter space. For each model, we have performed a high-resolution simulation, augmented with 16 medium-resolution simulations and TimeRG perturbation theory results to provide accurate coverage over a wide k-range; the data set generated as part of this project is more than 1.2 Pbytes. With the current set of simulated models, we achieve an accuracy of approximately 4%. Because the sampling approach used here has established convergence and error-control properties, follow-up results with more than a hundred cosmological models will soon achieve ~1% accuracy. We compare our approach with other prediction schemes that are based on halo model ideas and remapping approaches. The new emulator code is publicly available.
An overview of STRUCTURE: applications, parameter settings, and supporting software
Porras-Hurtado, Liliana; Ruiz, Yarimar; Santos, Carla; Phillips, Christopher; Carracedo, Ángel; Lareu, Maria V.
2013-01-01
Objectives: We present an up-to-date review of STRUCTURE software: one of the most widely used population analysis tools that allows researchers to assess patterns of genetic structure in a set of samples. STRUCTURE can identify subsets of the whole sample by detecting allele frequency differences within the data and can assign individuals to those sub-populations based on analysis of likelihoods. The review covers STRUCTURE's most commonly used ancestry and frequency models, plus an overview of the main applications of the software in human genetics including case-control association studies (CCAS), population genetics, and forensic analysis. The review is accompanied by supplementary material providing a step-by-step guide to running STRUCTURE. Methods: With reference to a worked example, we explore the effects of changing the principal analysis parameters on STRUCTURE results when analyzing a uniform set of human genetic data. Use of the supporting software CLUMPP and distruct is detailed, and we provide an overview and worked example of STRAT software, applicable to CCAS. Conclusion: The guide offers a simplified view of how STRUCTURE, CLUMPP, distruct, and STRAT can be applied to provide researchers with an informed choice of parameter settings and supporting software when analyzing their own genetic data. PMID:23755071
78 FR 55762 - National Environmental Policy Act; Mars 2020 Mission
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-11
... set of soil and rock samples that could be returned to Earth in the future, and test new technology to... include the use of one multi-mission radioisotope thermoelectric generator (MMRTG) for rover electrical... would use the proven design and technology developed for the Mars Science Laboratory mission and rover...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-12-31
This report details progress made in setting up a laboratory for optical microscopy of genes. The apparatus including a fluorescence microscope, a scanning optical microscope, various spectrometers, and supporting computers is described. Results in developing photon and exciton tips, and in preparing samples are presented. (GHH)
The Processes that Promote Learning in Adult Mentoring and Coaching Dyadic Settings
ERIC Educational Resources Information Center
Marx, Michael J.
2009-01-01
This is a study of 10 adults participating in one-to-one mentoring and/or coaching. Participants were selected for interviewing through a purposive sampling process from leading international mentoring and coaching organizations. Selection criteria included (a) being an adult, (b) participating in a dyadic learning, and (c) regarding that…
Students' Appreciation of Expectation and Variation as a Foundation for Statistical Understanding
ERIC Educational Resources Information Center
Watson, Jane M.; Callingham, Rosemary A.; Kelly, Ben A.
2007-01-01
This study presents the results of a partial credit Rasch analysis of in-depth interview data exploring statistical understanding of 73 school students in 6 contextual settings. The use of Rasch analysis allowed the exploration of a single underlying variable across contexts, which included probability sampling, representation of temperature…
ERIC Educational Resources Information Center
Vulcano, Brent A.
2007-01-01
I surveyed 2 samples of Canadian undergraduates (N = 629) concerning their views of a "perfect instructor." Students identified as many descriptors as they wished; I categorized them into 26 sets of qualities and behaviors. The top 10 categories included: (a) knowledgeable; (b) interesting and creative lectures; (c) approachable; (d)…
Figure 4 from Integrative Genomics Viewer: Visualizing Big Data | Office of Cancer Genomics
Gene-list view of genomic data. The gene-list view allows users to compare data across a set of loci. The data in this figure includes copy number, mutation, and clinical data from 202 glioblastoma samples from TCGA. Adapted from Figure 7; Thorvaldsdottir H et al. 2012
Postschool Goals and Transition Services for Students with Learning Disabilities
ERIC Educational Resources Information Center
Daviso, Alfred W.; Denney, Stephen C.; Baer, Robert M.; Flexer, Robert
2011-01-01
This article describes the initial findings for students with learning disabilities from the first year of The Ohio Longitudinal Transition Study (OLTS). The study included 416 participants with learning disabilities who were exiting high school. Data from an in-school survey were analyzed by sample demographics (e.g. school setting, school type,…
Developing Inquiry-as-Stance and Repertoires of Practice: Teacher Learning across Two Settings
ERIC Educational Resources Information Center
Braaten, Melissa L.
2011-01-01
Sixteen science educators joined a science teacher video club for one school year to collaboratively inquire into each other's classroom practice through the use of records of practice including classroom video clips and samples of student work. This group was focused on developing ambitious, equitable science teaching that capitalizes on…
ERIC Educational Resources Information Center
Bonney, Lewis A.
The steps taken by a large urban school district to develop and implement an objectives-based curriculum with criterion-referenced assessment of student progress are described. These steps include: goal setting, development of curriculum objectives, construction of assessment exercises, matrix sampling in test administration, and reporting of…
Investing in K-12 Technology Equipment: Strategies for State Policymakers.
ERIC Educational Resources Information Center
Good, Dixie Griffin
This report examines decisions regarding investments in K-12 technology. The first section presents an overview of technology in K-12 public schools, including a sampling of how technology is being used to further education goals for teachers, students, and administrators. The second section establishes a set of figures that indicate the current…
Using the Short Mood and Feelings Questionnaire to Detect Depression in Detained Adolescents
ERIC Educational Resources Information Center
Kuo, Elena S.; Stoep, Ann Vander; Stewart, David G.
2005-01-01
The Mood and Feelings Questionnaire (MFQ) is examined for its utility in screening youth in juvenile justice settings for depression. In a cross-sectional study conducted at King County Juvenile Detention Center, a representative sample of 228 detained adolescents complete structured assessments, including the MFQ and the Massachusetts Youth…
HERO HELPS for Home Economics Related Occupation Coordinators. Volume I.
ERIC Educational Resources Information Center
Northern Arizona Univ., Flagstaff. Center for Vocational Education.
These 25 modules for independent study comprise the first volume of a two-volume set of HERO (Home Economics Related Occupations) HELPS for student use in competency-based professional development. A management system that includes a filing system, testing, record keeping, and scheduling is discussed. A sample contract and other class management…
A Quantitative Correlational Study of Teacher Preparation Program on Student Achievement
ERIC Educational Resources Information Center
Dingman, Jacob Blackstone
2010-01-01
The purpose of this quantitative correlational study was to identify the relationship between the type of teacher preparation program and student performance on the seventh and eighth grade mathematics state assessments in rural school settings. The study included a survey of a convenience sample of 36 teachers from Colorado and Washington school…
Exploring Bullying: An Early Childhood Perspective from Mainland China
ERIC Educational Resources Information Center
Arndt, Janet S.; Luo, Nili
2008-01-01
This article explores bullying in mainland China. The authors conducted a study to determine the existence of a problem with bullying in younger Chinese children. Samples included 40 randomly selected, early childhood educators serving children ages 2 through 6, located in 10 different urban school settings along the Yangzi River. The authors…
Methods and apparatuses for self-generating fault-tolerant keys in spread-spectrum systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moradi, Hussein; Farhang, Behrouz; Subramanian, Vijayarangam
Self-generating fault-tolerant keys for use in spread-spectrum systems are disclosed. At a communication device, beacon signals are received from another communication device and impulse responses are determined from the beacon signals. The impulse responses are circularly shifted to place a largest sample at a predefined position. The impulse responses are converted to a set of frequency responses in a frequency domain. The frequency responses are shuffled with a predetermined shuffle scheme to develop a set of shuffled frequency responses. A set of phase differences is determined as a difference between an angle of the frequency response and an angle of the shuffled frequency response at each element of the corresponding sets. Each phase difference is quantized to develop a set of secret-key quantized phases and a set of spreading codes is developed wherein each spreading code includes a corresponding phase of the set of secret-key quantized phases.
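The chain of steps described above (circular shift, frequency-domain conversion, shuffle, phase difference, quantization) can be sketched as follows. The shuffle permutation, quantizer depth, and peak position are illustrative parameters that both radios would need to share; the patent's actual shuffle scheme and quantizer are not specified here.

```python
import numpy as np

def secret_key_phases(impulse_response, shuffle, n_levels=4, peak_pos=0):
    """Sketch of the key-generation chain described in the abstract."""
    h = np.asarray(impulse_response, dtype=complex)
    # Circularly shift the largest-magnitude sample to the predefined position.
    h = np.roll(h, peak_pos - int(np.argmax(np.abs(h))))
    # Convert the impulse response to a set of frequency responses.
    H = np.fft.fft(h)
    # Shuffle with the predetermined scheme.
    H_shuffled = H[shuffle]
    # Phase difference between each frequency response and its shuffled partner.
    dphi = np.mod(np.angle(H) - np.angle(H_shuffled), 2 * np.pi)
    # Quantize each phase difference to one of n_levels secret-key phases.
    step = 2 * np.pi / n_levels
    return np.floor(dphi / step).astype(int) * step
```

Because only phase differences are quantized, a common gain or phase rotation of the channel estimate cancels out, which is one source of the fault tolerance the title refers to.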
A Protein Standard That Emulates Homology for the Characterization of Protein Inference Algorithms.
The, Matthew; Edfors, Fredrik; Perez-Riverol, Yasset; Payne, Samuel H; Hoopmann, Michael R; Palmblad, Magnus; Forsström, Björn; Käll, Lukas
2018-05-04
A natural way to benchmark the performance of an analytical experimental setup is to use samples of known composition and see to what degree one can correctly infer the content of such a sample from the data. For shotgun proteomics, one of the inherent problems of interpreting data is that the measured analytes are peptides and not the actual proteins themselves. As some proteins share proteolytic peptides, there might be more than one possible causative set of proteins resulting in a given set of peptides and there is a need for mechanisms that infer proteins from lists of detected peptides. A weakness of commercially available samples of known content is that they consist of proteins that are deliberately selected for producing tryptic peptides that are unique to a single protein. Unfortunately, such samples do not expose any complications in protein inference. Hence, for a realistic benchmark of protein inference procedures, there is a need for samples of known content where the present proteins share peptides with known absent proteins. Here, we present such a standard, based on E. coli-expressed human protein fragments. To illustrate the application of this standard, we benchmark a set of different protein inference procedures on the data. We observe that inference procedures excluding shared peptides provide more accurate estimates of errors compared to methods that include information from shared peptides, while still giving a reasonable performance in terms of the number of identified proteins. We also demonstrate that using a sample of known protein content without proteins with shared tryptic peptides can give a false sense of accuracy for many protein inference methods.
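The two inference families contrasted above can be sketched with a toy peptide-to-protein mapping. This is an illustration of the general idea, not the specific algorithms benchmarked; the data layout is an assumption.

```python
def infer_proteins(peptides_detected, peptide_to_proteins, unique_only=True):
    """With unique_only=True, a protein is reported only if some detected
    peptide maps to it alone (the strategy the benchmark found gives more
    accurate error estimates).  With unique_only=False, every protein
    sharing any detected peptide is reported."""
    inferred = set()
    for pep in peptides_detected:
        proteins = peptide_to_proteins.get(pep, set())
        if unique_only and len(proteins) != 1:
            continue
        inferred |= set(proteins)
    return inferred
```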
2009-01-01
Background The SD FK80 P.f/P.v Malaria Antigen Rapid Test (Standard Diagnostics, Korea) (FK80) is a three-band malaria rapid diagnostic test detecting Plasmodium falciparum histidine-rich protein-2 (HRP-2) and Plasmodium vivax-specific lactate dehydrogenase (Pv-pLDH). The present study assessed its performance in a non-endemic setting. Methods Stored blood samples (n = 416) from international travellers suspected of malaria were used, with microscopy corrected by PCR as the reference method. Samples infected by Plasmodium falciparum (n = 178), Plasmodium vivax (n = 99), Plasmodium ovale (n = 75) and Plasmodium malariae (n = 24) were included, as well as 40 malaria negative samples. Results Overall sensitivities for the diagnosis of P. falciparum and P. vivax were 91.6% (95% confidence interval (CI): 86.2% - 95.0%) and 75.8% (65.9% - 83.6%). For P. falciparum, sensitivity at parasite densities ≥ 100/μl was 94.6% (88.8% - 97.6%); for P. vivax, sensitivity at parasite densities ≥ 500/μl was 86.8% (75.4% - 93.4%). Four P. falciparum samples showed a Pv-pLDH line, three of them had parasite densities exceeding 50,000/μl. Two P. vivax samples, one P. ovale and one P. malariae sample showed a HRP-2 line. For the HRP-2 and Pv-pLDH lines, respectively 81.4% (136/167) and 55.8% (43/77) of the true positive results were read as medium or strong line intensities. The FK80 showed good reproducibility and reliability for test results and line intensities (kappa values for both exceeding 0.80). Conclusion The FK80 test performed satisfactorily in diagnosing P. falciparum and P. vivax infections in a non-endemic setting. PMID:19930609
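The sensitivity intervals quoted above are confidence intervals for a proportion. The abstract does not state which interval method was used; the Wilson score interval sketched below is one common choice and gives similar, though not identical, bounds.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion such as a diagnostic
    test's sensitivity (z=1.96 for 95% coverage)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half
```

For example, detecting 163 of 178 P. falciparum samples (91.6%) gives a Wilson interval of roughly 86.6% to 94.8%, close to the quoted 86.2% to 95.0%.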
NASA Astrophysics Data System (ADS)
Becker, Holger; Carstens, Cornelia; Kuhlmeier, Dirk; Sandetskaya, Natalia; Schröter, Nicole; Zilch, Christian; Gärtner, Claudia
2013-03-01
Commonly, microfluidic devices are based on the movement of fluids. For molecular diagnostics assays which often include steps like PCR, this practically always involves a more or less complicated set of external pumps, valves and liquid controls. In the presented paper, we follow a different approach in which the fluid after sample introduction remains stationary and the main bioactive sample molecules are moved through a chain of reaction compartments which contain the different reagents necessary for the assay. The big advantage of this concept is the lack of any external fluid actuation/control. Results on sample carry-over experiments and complete assays will be given.
Meyer, Golo M; Maurer, Hans H; Meyer, Markus R
2016-01-01
This paper reviews MS approaches applied to metabolism studies, structure elucidation and qualitative or quantitative screening of drugs (of abuse) and/or their metabolites. Applications in clinical and forensic toxicology were included using blood plasma or serum, urine, in vitro samples, liquids, solids or plant material. Techniques covered are liquid chromatography coupled to low-resolution and high-resolution multiple stage mass analyzers. Only PubMed listed studies published in English between January 2008 and January 2015 were considered. Approaches are discussed focusing on sample preparation and mass spectral settings. Comments on advantages and limitations of these techniques complete the review.
A high-throughput microRNA expression profiling system.
Guo, Yanwen; Mastriano, Stephen; Lu, Jun
2014-01-01
As small noncoding RNAs, microRNAs (miRNAs) regulate diverse biological functions, including physiological and pathological processes. The expression and deregulation of miRNA levels contain rich information with diagnostic and prognostic relevance and can reflect pharmacological responses. The increasing interest in miRNA-related research demands global miRNA expression profiling on large numbers of samples. We describe here a robust protocol that supports high-throughput sample labeling and detection on hundreds of samples simultaneously. This method employs 96-well-based miRNA capturing from total RNA samples and on-site biochemical reactions, coupled with bead-based detection in 96-well format for hundreds of miRNAs per sample. With low-cost, high-throughput, high detection specificity, and flexibility to profile both small and large numbers of samples, this protocol can be adapted in a wide range of laboratory settings.
Oatts, Thomas J; Hicks, Cheryl E; Adams, Amy R; Brisson, Michael J; Youmans-McDonald, Linda D; Hoover, Mark D; Ashley, Kevin
2012-02-01
Occupational sampling and analysis for multiple elements is generally approached using various approved methods from authoritative government sources such as the National Institute for Occupational Safety and Health (NIOSH), the Occupational Safety and Health Administration (OSHA) and the Environmental Protection Agency (EPA), as well as consensus standards bodies such as ASTM International. The constituents of a sample can exist as unidentified compounds requiring sample preparation to be chosen appropriately, as in the case of beryllium in the form of beryllium oxide (BeO). An interlaboratory study was performed to collect analytical data from volunteer laboratories to examine the effectiveness of methods currently in use for preparation and analysis of samples containing calcined BeO powder. NIST SRM® 1877 high-fired BeO powder (1100 to 1200 °C calcining temperature; count median primary particle diameter 0.12 μm) was used to spike air filter media as a representative form of beryllium particulate matter present in workplace sampling that is known to be resistant to dissolution. The BeO powder standard reference material was gravimetrically prepared in a suspension and deposited onto 37 mm mixed cellulose ester air filters at five different levels between 0.5 μg and 25 μg of Be (as BeO). Sample sets consisting of five BeO-spiked filters (in duplicate) and two blank filters, for a total of twelve unique air filter samples per set, were submitted as blind samples to each of 27 participating laboratories. Participants were instructed to follow their current process for sample preparation and utilize their normal analytical methods for processing samples containing substances of this nature. Laboratories using more than one sample preparation and analysis method were provided with more than one sample set. Results from 34 data sets ultimately received from the 27 volunteer laboratories were subjected to applicable statistical analyses.
The observed performance data show that sample preparations using nitric acid alone, or combinations of nitric and hydrochloric acids, are not effective for complete extraction of Be from the SRM 1877 refractory BeO particulate matter spiked on air filters; but that effective recovery can be achieved by using sample preparation procedures utilizing either sulfuric or hydrofluoric acid, or by using methodologies involving ammonium bifluoride with heating. Laboratories responsible for quantitative determination of Be in workplace samples that may contain high-fired BeO should use quality assurance schemes that include BeO-spiked sampling media, rather than solely media spiked with soluble Be compounds, and should ensure that methods capable of quantitative digestion of Be from the actual material present are used.
Chen, Po-Yi; Yang, Chien-Ming; Morin, Charles M
2015-05-01
The purpose of this study is to examine the factor structure of the Insomnia Severity Index (ISI) across samples recruited from different countries. We tried to identify the most appropriate factor model for the ISI and further examined the measurement invariance property of the ISI across samples from different countries. Our analyses included one data set collected from a Taiwanese sample and two data sets obtained from samples in Hong Kong and Canada. The data set collected in Taiwan was analyzed with ordinal exploratory factor analysis (EFA) to obtain the appropriate factor model for the ISI. We then conducted a series of confirmatory factor analyses (CFAs), a special case of the structural equation model (SEM) concerning the parameters of the measurement model, on the data sets collected in Canada and Hong Kong. The purposes of these CFAs were to cross-validate the result obtained from the EFA and to further examine the cross-cultural measurement invariance of the ISI. The three-factor model outperforms other models in terms of global fit indices in Taiwan's population. Its external validity is also supported by the confirmatory factor analyses. Furthermore, the measurement invariance analyses show that the strong invariance property holds between the samples from different cultures, providing evidence that ISI results obtained in different cultures are comparable. The factorial validity of the ISI is stable in different populations. More importantly, its invariance property across cultures suggests that the ISI is a valid measure of the insomnia severity construct across countries. Copyright © 2014 Elsevier B.V. All rights reserved.
Analysis of sampling techniques for imbalanced data: An n = 648 ADNI study.
Dubey, Rashmi; Zhou, Jiayu; Wang, Yalin; Thompson, Paul M; Ye, Jieping
2014-02-15
Many neuroimaging applications deal with imbalanced imaging data. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, the mild cognitive impairment (MCI) cases eligible for the study are nearly twice as numerous as the Alzheimer's disease (AD) patients for the structural magnetic resonance imaging (MRI) modality and six times as numerous as the control cases for the proteomics modality. Constructing an accurate classifier from imbalanced data is a challenging task. Traditional classifiers that aim to maximize the overall prediction accuracy tend to classify all data into the majority class. In this paper, we study an ensemble system of feature selection and data sampling for the class imbalance problem. We systematically analyze various sampling techniques by examining the efficacy of different rates and types of undersampling, oversampling, and combined over- and undersampling approaches. We thoroughly examine six widely used feature selection algorithms to identify significant biomarkers and thereby reduce the complexity of the data. The efficacy of the ensemble techniques is evaluated using two different classifiers, Random Forest and Support Vector Machines, based on classification accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity measures. Our extensive experimental results show that for various problem settings in ADNI, (1) a balanced training set obtained with K-Medoids-based undersampling gives the best overall performance among the data sampling techniques and the no-sampling approach; and (2) sparse logistic regression with stability selection achieves competitive performance among the feature selection algorithms. Comprehensive experiments with various settings show that our proposed ensemble model of multiple undersampled datasets yields stable and promising results. © 2013 Elsevier Inc. All rights reserved.
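As a baseline illustration of the undersampling idea studied above, a minimal sketch of plain random undersampling (not the K-Medoids variant the authors found best; the toy data are invented):

```python
import numpy as np

def random_undersample(X, y, seed=0):
    """Balance a data set by randomly undersampling each class
    down to the size of the smallest class."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    keep.sort()  # preserve original sample order
    return X[keep], y[keep]

# toy example: 60 majority-class vs 20 minority-class samples
X = np.arange(80).reshape(80, 1)
y = np.array([0] * 60 + [1] * 20)
Xb, yb = random_undersample(X, y)
print(np.bincount(yb))  # both classes now have 20 samples
```

K-Medoids undersampling differs in that it selects cluster medoids rather than uniformly random members of the majority class, aiming to retain a more representative subset.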
Steel, Mike
2012-10-01
Neutral macroevolutionary models, such as the Yule model, give rise to a probability distribution on the set of discrete rooted binary trees over a given leaf set. Such models can provide a signal as to the approximate location of the root when only the unrooted phylogenetic tree is known, and this signal becomes relatively more significant as the number of leaves grows. In this short note, we show that, among models that treat all taxa equally and are sampling consistent (i.e. the distribution on trees is not affected by taxa yet to be included), all except one (the so-called PDA model) convey some information as to the location of the ancestral root in an unrooted tree. Copyright © 2012 Elsevier Inc. All rights reserved.
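A Yule (pure-birth) tree of the kind discussed above can be simulated by repeatedly splitting a uniformly chosen extant leaf; a minimal sketch of that process (the integer labeling scheme is an arbitrary choice, not from the paper):

```python
import random

def yule_tree(n, seed=1):
    """Grow a rooted binary tree topology under the Yule process:
    start from one lineage and repeatedly split a uniformly chosen leaf
    until n leaves exist. Returns (internal-node children map, leaf labels)."""
    rng = random.Random(seed)
    leaves = [0]          # labels of leaves currently alive
    children = {}         # internal node label -> (left child, right child)
    next_label = 1
    while len(leaves) < n:
        i = rng.randrange(len(leaves))   # each leaf equally likely to speciate
        parent = leaves[i]
        left, right = next_label, next_label + 1
        children[parent] = (left, right)
        next_label += 2
        leaves[i] = left                 # parent becomes internal
        leaves.append(right)
    return children, leaves

children, leaves = yule_tree(8)
print(len(leaves), len(children))  # 8 leaves, 7 internal splits
```

A binary tree with n leaves always has n - 1 internal nodes, which the printed counts reflect.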
Computer-Aided Diagnosis Of Leukemic Blood Cells
NASA Astrophysics Data System (ADS)
Gunter, U.; Harms, H.; Haucke, M.; Aus, H. M.; ter Meulen, V.
1982-11-01
In a first clinical test, computer programs are being used to diagnose leukemias. The data collected include blood samples from patients suffering from acute myelomonocytic, acute monocytic, acute promyelocytic, myeloblastic, prolymphocytic, and chronic lymphocytic leukemias, as well as leukemic transformed immunocytoma. Proper differentiation of the leukemic cells is essential because the therapy depends on the type of leukemia. The algorithms analyse the fine chromatin texture and distribution in the nuclei as well as size and shape parameters of the cells and nuclei. Cells with similar nuclei from different leukemias can be distinguished from each other by analyzing the cell cytoplasm images. Recognition of these subtle differences in the cells requires an image sampling rate of 15-30 pixel/micron. The results for the entire data set correlate directly to established hematological parameters and support the previously published initial training set.
Ochiai, Nobuo; Sasamoto, Kikuo; Tsunokawa, Jun; Hoffmann, Andreas; Okanoya, Kazunori; MacNamara, Kevin
2015-11-20
An extension of multi-volatile method (MVM) technology using the combination of a standard dynamic headspace (DHS) configuration, and a modified DHS configuration incorporating an additional vacuum module, was developed for milliliter injection volume of aqueous sample with full sample evaporation. A prior step involved investigation of water management by weighing of the water residue in the adsorbent trap. The extended MVM for 1 mL aqueous sample consists of five different DHS method parameter sets including choice of the replaceable adsorbent trap. The initial two DHS sampling sets at 25°C with the standard DHS configuration using a carbon-based adsorbent trap target very volatile solutes with high vapor pressure (>10 kPa) and volatile solutes with moderate vapor pressure (1-10 kPa). The subsequent three DHS sampling sets at 80°C with the modified DHS configuration using a Tenax TA trap target solutes with low vapor pressure (<1 kPa) and/or hydrophilic characteristics. After the five sequential DHS samplings using the same HS vial, the five traps are sequentially desorbed with thermal desorption in reverse order of the DHS sampling and the desorbed compounds are trapped and concentrated in a programmed temperature vaporizing (PTV) inlet and subsequently analyzed in a single GC-MS run. Recoveries of 21 test aroma compounds in 1 mL water for each separate DHS sampling and the combined MVM procedure were evaluated as a function of vapor pressure in the range of 0.000088-120 kPa. The MVM procedure provided high recoveries (>88%) for 17 test aroma compounds and moderate recoveries (44-71%) for 4 test compounds. The method showed good linearity (r²>0.9913) and high sensitivity (limit of detection: 0.1-0.5 ng mL⁻¹) even with MS scan mode. The improved sensitivity of the method was demonstrated with analysis of a wide variety of aroma compounds in brewed green tea.
Compared to the original 100 μL MVM procedure, this extension to 1 mL MVM allowed detection of nearly twice the number of aroma compounds, including 18 potent aroma compounds from top-note to base-note (e.g. 2,3-butanedione, coumarin, furaneol, guaiacol, cis-3-hexenol, linalool, maltol, methional, 3-methyl butanal, 2,3,5-trimethyl pyrazine, and vanillin). Sensitivity for 23 compounds improved by a factor of 3.4-15 under 1 mL MVM conditions. Copyright © 2015 Elsevier B.V. All rights reserved.
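The five DHS parameter sets are assigned by solute vapor pressure as quoted above; a hypothetical sketch of that routing logic (the function name and return values are illustrative, not from the paper; only the kPa thresholds come from the abstract):

```python
def dhs_config(vapor_pressure_kpa, hydrophilic=False):
    """Route a solute to a DHS configuration by vapor pressure, following
    the thresholds in the text: solutes above 1 kPa go to the standard
    25 degC configuration with a carbon-based trap; solutes below 1 kPa
    and/or hydrophilic solutes go to the modified 80 degC vacuum
    configuration with a Tenax TA trap."""
    if vapor_pressure_kpa < 1 or hydrophilic:
        return ("modified (vacuum), 80 degC", "Tenax TA trap")
    return ("standard, 25 degC", "carbon-based trap")

print(dhs_config(50))    # very volatile solute -> standard configuration
print(dhs_config(0.01))  # low vapor pressure -> Tenax TA trap at 80 degC
```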
Grinter, Sam Z; Yan, Chengfei; Huang, Sheng-You; Jiang, Lin; Zou, Xiaoqin
2013-08-26
In this study, we use the recently released 2012 Community Structure-Activity Resource (CSAR) data set to evaluate two knowledge-based scoring functions, ITScore and STScore, and a simple force-field-based potential (VDWScore). The CSAR data set contains 757 compounds, most with known affinities, and 57 crystal structures. With the help of the script files for docking preparation, we use the full CSAR data set to evaluate the performances of the scoring functions on binding affinity prediction and active/inactive compound discrimination. The CSAR subset that includes crystal structures is used as well, to evaluate the performances of the scoring functions on binding mode and affinity predictions. Within this structure subset, we investigate the importance of accurate ligand and protein conformational sampling and find that the binding affinity predictions are less sensitive to non-native ligand and protein conformations than the binding mode predictions. We also find the full CSAR data set to be more challenging in making binding mode predictions than the subset with structures. The script files used for preparing the CSAR data set for docking, including scripts for canonicalization of the ligand atoms, are offered freely to the academic community.
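Scoring-function performance on binding affinity prediction, as evaluated above, is commonly summarized by the correlation between predicted scores and experimental affinities. A minimal, hypothetical sketch (the toy score and affinity values are invented; the CSAR evaluation may use different or additional metrics):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between predicted scores and experimental affinities."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# hypothetical predicted scores vs experimental pKd values for five ligands
predicted = [6.1, 4.8, 7.3, 5.5, 8.0]
experimental = [5.9, 5.2, 7.1, 5.0, 8.4]
print(f"r = {pearson_r(predicted, experimental):.3f}")
```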
NASA Astrophysics Data System (ADS)
Huang, Po-Jung; Baghbani Kordmahale, Sina; Chou, Chao-Kai; Yamaguchi, Hirohito; Hung, Mien-Chie; Kameoka, Jun
2016-03-01
Signal transductions including multiple protein post-translational modifications (PTM), protein-protein interactions (PPI), and protein-nucleic acid interactions (PNI) play critical roles in cell proliferation and differentiation that are directly related to cancer biology. Traditional methods, like mass spectrometry, immunoprecipitation, fluorescence resonance energy transfer, and fluorescence correlation spectroscopy, require a large amount of sample and long processing time. The "microchannel for multiple-parameter analysis of proteins in single-complex" (mMAPS) approach we propose can reduce the processing time and sample volume because the system is composed of microfluidic channels, fluorescence microscopy, and computerized data analysis. In this paper, we present an automated mMAPS including an integrated microfluidic device, automated stage, and electrical relay for high-throughput clinical screening. Based on this result, we estimated that this automated detection system will be able to screen approximately 150 patient samples in a 24-hour period, providing a practical application to analyze tissue samples in a clinical setting.
Barreto, Goncalo; Soininen, Antti; Sillat, Tarvo; Konttinen, Yrjö T; Kaivosoja, Emilia
2014-01-01
Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is increasingly being used in the analysis of biological samples. For example, it has been applied to distinguish healthy and osteoarthritic human cartilage. This chapter discusses the ToF-SIMS principle and instrumentation, including the three modes of analysis in ToF-SIMS. ToF-SIMS sets certain requirements for the samples to be analyzed; for example, the samples have to be vacuum compatible. Accordingly, sample processing steps for different biological samples, i.e., proteins, cells, frozen and paraffin-embedded tissues, and extracellular matrix, are presented. Multivariate analysis of the ToF-SIMS data and the necessary data preprocessing steps (peak selection, data normalization, mean-centering, and scaling and transformation) are discussed in this chapter.
NASA Astrophysics Data System (ADS)
Arabshahi, P.; Howe, B. M.; Chao, Y.; Businger, S.; Chien, S.
2010-12-01
We present a virtual ocean observatory (VOO) that supports climate and ocean science as addressed in the NRC decadal survey. The VOO is composed of an autonomous software system, in-situ and space-based sensing assets, data sets, and interfaces to ocean and atmosphere models. The purposes of this observatory and its output data products are: 1) to support SWOT mission planning, 2) to serve as a vanguard for fusing SWOT, XOVWM, and in-situ data sets through fusion of OSTM (SWOT proxy) and QuikSCAT (XOVWM proxy) data with in-situ data, and 3) to serve as a feed-forward platform for high-resolution measurements of ocean surface topography (OST) in island and coastal environments utilizing space-based and in-situ adaptive sampling. The VOO will enable models capable of simulating and estimating realistic oceanic processes and atmospheric forcing of the ocean in these environments. Such measurements are critical in understanding the oceans' effects on global climate. The information systems innovations of the VOO are: 1. Development of an autonomous software platform for automated mission planning and combining science data products of QuikSCAT and OSTM with complementary in-situ data sets to deliver new data products. This software will present first-step demonstrations of technology that, once matured, will offer increased operational capability to SWOT by providing automated planning, and new science data sets using automated workflows. The future data sets to be integrated include those from SWOT and XOVWM. 2. A capstone demonstration of the effort utilizes the elements developed in (1) above to achieve adaptive in-situ sampling through feedback from space-based assets via the SWOT simulator. This effort will directly contribute to orbit design during the experimental phase (first 6-9 months) of the SWOT mission by high resolution regional atmospheric and ocean modeling and sampling.
It will also contribute to SWOT science via integration of in-situ data, QuikSCAT, and OSTM data sets, and models, thus serving as technology pathfinder for SWOT and XOVWM data fusion; and will contribute to SWOT operations via data fusion and mission planning technology. The goals of our project are as follows: (a) Develop and test the VOO, including hardware, in-situ science platforms (Seagliders) and instruments, and two autonomous software modules: 1) automated data fusion/assimilation, and 2) automated planning technology; (b) Generate new data sets (OST data in the Hawaiian Islands region) from fusion of in-situ data with QuikSCAT and OSTM data; (c) Integrate data sets derived from the VOO into the SWOT simulator for improved SWOT mission planning; (d) Demonstrate via Hawaiian Islands region field experiments and simulation the operational capability of the VOO to generate improved hydrologic cycle/ocean science, in particular: mesoscale and submesoscale ocean circulation including velocities, vorticity, and stress measurements, that are important to the modeling of ocean currents, eddies and mixing.
Goesling, Brian; Colman, Silvie; Trenholm, Christopher; Terzian, Mary; Moore, Kristin
2014-05-01
This systematic review provides a comprehensive, updated assessment of programs with evidence of effectiveness in reducing teen pregnancy, sexually transmitted infections (STIs), or associated sexual risk behaviors. The review was conducted in four steps. First, multiple literature search strategies were used to identify relevant studies released from 1989 through January 2011. Second, identified studies were screened against prespecified eligibility criteria. Third, studies were assessed by teams of two trained reviewers for the quality and execution of their research designs. Fourth, for studies that passed the quality assessment, the review team extracted and analyzed information on the research design, study sample, evaluation setting, and program impacts. A total of 88 studies met the review criteria for study quality and were included in the data extraction and analysis. The studies examined a range of programs delivered in diverse settings. Most studies had mixed-gender and predominantly African-American research samples (70% and 51%, respectively). Randomized controlled trials accounted for the large majority (87%) of included studies. Most studies (76%) included multiple follow-ups, with sample sizes ranging from 62 to 5,244. Analysis of the study impact findings identified 31 programs with evidence of effectiveness. Research conducted since the late 1980s has identified more than two dozen teen pregnancy and STI prevention programs with evidence of effectiveness. Key strengths of this research are the large number of randomized controlled trials, the common use of multiple follow-up periods, and attention to a broad range of programs delivered in diverse settings. Two main gaps are a lack of replication studies and the need for more research on Latino youth and other high-risk populations. In addressing these gaps, researchers must overcome common limitations in study design, analysis, and reporting that have negatively affected prior research.
Copyright © 2014 Society for Adolescent Health and Medicine. All rights reserved.
Mansion, Guilhem; Parolly, Gerald; Crowl, Andrew A.; Mavrodiev, Evgeny; Cellinese, Nico; Oganesian, Marine; Fraunhofer, Katharina; Kamari, Georgia; Phitos, Dimitrios; Haberle, Rosemarie; Akaydin, Galip; Ikinci, Nursel; Raus, Thomas; Borsch, Thomas
2012-01-01
Background Speciose clades usually harbor species with a broad spectrum of adaptive strategies and complex distribution patterns, and thus constitute ideal systems to disentangle biotic and abiotic causes underlying species diversification. The delimitation of such study systems to test evolutionary hypotheses is difficult because they often rely on artificial genus concepts as starting points. One of the most prominent examples is the bellflower genus Campanula with some 420 species, but up to 600 species when including all lineages to which Campanula is paraphyletic. We generated a large alignment of petD group II intron sequences to include more than 70% of described species as a reference. By comparison with partial data sets we could then assess the impact of selective taxon sampling strategies on phylogenetic reconstruction and subsequent evolutionary conclusions. Methodology/Principal Findings Phylogenetic analyses based on maximum parsimony (PAUP, PRAP), Bayesian inference (MrBayes), and maximum likelihood (RAxML) were first carried out on the large reference data set (D680). Parameters including tree topology, branch support, and age estimates, were then compared to those obtained from smaller data sets resulting from “classification-guided” (D088) and “phylogeny-guided sampling” (D101). Analyses of D088 failed to fully recover the phylogenetic diversity in Campanula, whereas D101 inferred significantly different branch support and age estimates. Conclusions/Significance A short genomic region with high phylogenetic utility allowed us to easily generate a comprehensive phylogenetic framework for the speciose Campanula clade. Our approach recovered 17 well-supported and circumscribed sub-lineages. 
As knowledge of these sub-lineages will be instrumental for developing more specific evolutionary hypotheses and guiding future research, we highlight the predictive value of a mass taxon-sampling strategy as a first essential step towards illuminating the detailed evolutionary history of diverse clades. PMID:23209646
Flood, David; Garcia, Pablo; Douglas, Kate; Hawkins, Jessica
2018-01-01
Objective Screening is a key strategy to address the rising burden of chronic kidney disease (CKD) in low-income and middle-income countries. However, there are few reports regarding the implementation of screening programmes in resource-limited settings. The objectives of this study are (1) to share programmatic experiences implementing CKD screening in a rural, resource-limited setting and (2) to assess the burden of renal disease in a community-based diabetes programme in rural Guatemala. Design Cross-sectional assessment of glomerular filtration rate (GFR) and urine albumin. Setting Central Highlands of Guatemala. Participants We enrolled 144 adults with type 2 diabetes in a community-based CKD screening activity carried out by the sponsoring institution. Outcome measures Prevalence of renal disease and risk of CKD progression using Kidney Disease: Improving Global Outcomes definitions and classifications. Results We found that 57% of the sample met GFR and/or albuminuria criteria suggestive of CKD. Over half of the sample had moderate or greater increased risk for CKD progression, including nearly 20% who were classified as high or very high risk. Hypertension was common in the sample (42%), and glycaemic control was suboptimal (mean haemoglobin A1c 9.4%±2.5% at programme enrolment and 8.6%±2.3% at time of CKD screening). Conclusions The high burden of renal disease in our patient sample suggests an imperative to better understand the burden and risk factors of CKD in Guatemala. The implementation details we share reveal the tension between evidence-based CKD screening and screening that can feasibly be delivered in resource-limited global settings. PMID:29358450
Automated storm water sampling on small watersheds
Harmel, R.D.; King, K.W.; Slade, R.M.
2003-01-01
Few guidelines are currently available to assist in designing appropriate automated storm water sampling strategies for small watersheds. Therefore, guidance is needed to develop strategies that achieve an appropriate balance between accurate characterization of storm water quality and loads and limitations of budget, equipment, and personnel. In this article, we explore the important sampling strategy components (minimum flow threshold, sampling interval, and discrete versus composite sampling) and project-specific considerations (sampling goal, sampling and analysis resources, and watershed characteristics) based on personal experiences and pertinent field and analytical studies. These components and considerations are important in achieving the balance between sampling goals and limitations because they determine how and when samples are taken and the potential sampling error. Several general recommendations are made, including: setting low minimum flow thresholds, using flow-interval or variable time-interval sampling, and using composite sampling to limit the number of samples collected. Guidelines are presented to aid in selection of an appropriate sampling strategy based on the user's project-specific considerations. Our experiences suggest these recommendations should allow implementation of a successful sampling strategy for most small watershed sampling projects with common sampling goals.
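The recommended flow-interval strategy amounts to a simple trigger loop: a discrete sample is taken each time a fixed volume of flow passes, and no sampling occurs while flow is below the minimum threshold. A hypothetical sketch (units, threshold, and increment values are illustrative, not from the article):

```python
def flow_interval_samples(records, min_flow=0.5, flow_increment=100.0):
    """Return the times at which a flow-interval sampler would trigger.
    records: iterable of (time, flow_rate) pairs at unit time steps.
    A sample is taken each time cumulative flow volume, accumulated only
    while flow exceeds min_flow, advances by flow_increment."""
    samples, cumulative = [], 0.0
    for t, q in records:
        if q < min_flow:
            continue                 # below minimum flow threshold: no sampling
        cumulative += q              # flow volume for this unit time step
        if cumulative >= flow_increment:
            samples.append(t)        # take a discrete sample
            cumulative -= flow_increment
    return samples

# synthetic storm hydrograph: rising limb, peak, recession
hydrograph = list(enumerate([0.2, 5, 40, 90, 60, 30, 10, 2, 0.3]))
print(flow_interval_samples(hydrograph))
```

Note how samples cluster near the hydrograph peak, which is the point of flow-interval sampling: effort concentrates where most of the load moves.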
FLORIDA TOWER FOOTPRINT EXPERIMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
WATSON,T.B.; DIETZ, R.N.; WILKE, R.
2007-01-01
The Florida Footprint experiments were a series of field programs in which perfluorocarbon tracers were released in different configurations centered on a flux tower to generate a data set that can be used to test transport and dispersion models. These models are used to determine the sources of the CO₂ that causes the fluxes measured at eddy covariance towers. Experiments were conducted in a managed slash pine forest, 10 km northeast of Gainesville, Florida, in 2002, 2004, and 2006, and in atmospheric conditions that ranged from well mixed to very stable, including the transition period between convective conditions at midday and stable conditions after sunset. There were a total of 15 experiments. The characteristics of the PFTs, details of sampling and analysis methods, quality control measures, and analytical statistics including confidence limits are presented. Details of the field programs, including tracer release rates, tracer source configurations, and configuration of the samplers, are discussed. The result of this experiment is a high quality, well documented tracer and meteorological data set that can be used to improve and validate canopy dispersion models.
Shumow, Laura; Bodor, Alison
2011-07-05
This manuscript describes the results of an HPLC study for the determination of the flavan-3-ol monomers, (±)-catechin and (±)-epicatechin, in cocoa and plain dark and milk chocolate products. The study was performed under the auspices of the National Confectioners Association (NCA) and involved the analysis of a series of samples by laboratories of five member companies using a common method. The method reported in this paper uses reversed phase HPLC with fluorescence detection to analyze (±)-epicatechin and (±)-catechin extracted with an acidic solvent from defatted cocoa and chocolate. In addition to a variety of cocoa and chocolate products, the sample set included a blind duplicate used to assess method reproducibility. All data were subjected to statistical analysis with outliers eliminated from the data set. The percent coefficient of variation (%CV) of the sample set ranged from approximately 7 to 15%. Further experimental details are described in the body of the manuscript and the results indicate the method is suitable for the determination of (±)-catechin and (±)-epicatechin in cocoa and chocolate products and represents the first collaborative study of this HPLC method for these compounds in these matrices.
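The %CV reported for the interlaboratory sample set is simply the standard deviation divided by the mean across laboratories, expressed as a percentage; a minimal sketch (the lab values are invented for illustration):

```python
from statistics import mean, stdev

def percent_cv(values):
    """Percent coefficient of variation: 100 * sample SD / mean."""
    return 100 * stdev(values) / mean(values)

# hypothetical (±)-epicatechin results (mg/g) from five labs for one blind sample
lab_results = [1.40, 1.62, 1.48, 1.75, 1.55]
print(f"%CV = {percent_cv(lab_results):.1f}%")
```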
Chen, Hongda; Werner, Simone; Butt, Julia; Zörnig, Inka; Knebel, Phillip; Michel, Angelika; Eichmüller, Stefan B; Jäger, Dirk; Waterboer, Tim; Pawlita, Michael; Brenner, Hermann
2016-03-29
Novel blood-based screening tests are strongly desirable for early detection of colorectal cancer (CRC). We aimed to identify and evaluate autoantibodies against tumor-associated antigens as biomarkers for early detection of CRC. 380 clinically identified CRC patients and samples of participants with selected findings from a cohort of screening colonoscopy participants in 2005-2013 (N=6826) were included in this analysis. Sixty-four serum autoantibody markers were measured by multiplex bead-based serological assays. A two-step approach with selection of biomarkers in a training set, and validation of findings in a validation set, the latter exclusively including participants from the screening setting, was applied. Anti-MAGEA4 exhibited the highest sensitivity for detecting early stage CRC and advanced adenoma. Multi-marker combinations substantially increased sensitivity at the price of a moderate loss of specificity. Anti-TP53, anti-IMPDH2, anti-MDM2 and anti-MAGEA4 were consistently included in the best-performing 4-, 5-, and 6-marker combinations. This four-marker panel yielded a sensitivity of 26% (95% CI, 13-45%) for early stage CRC at a specificity of 90% (95% CI, 83-94%) in the validation set. Notably, it also detected 20% (95% CI, 13-29%) of advanced adenomas. Taken together, the identified biomarkers could contribute to the development of a useful multi-marker blood-based test for CRC early detection.
Ranking metrics in gene set enrichment analysis: do they matter?
Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna
2017-05-12
There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which could affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using k-means clustering algorithm a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established i.e. absolute value of Moderated Welch Test statistic, Minimum Significant Difference, absolute value of Signal-To-Noise ratio and Baumgartner-Weiss-Schindler test statistic. In case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In case of sensitivity, the absolute value of Moderated Welch Test statistic and absolute value of Signal-To-Noise ratio gave stable results, while Baumgartner-Weiss-Schindler and Minimum Significant Difference showed better results for larger sample size. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has critical impact on results of pathway enrichment analysis. The absolute value of Moderated Welch Test has the best overall sensitivity and Minimum Significant Difference has the best overall specificity of gene set analysis. 
When the number of non-normally distributed genes is high, using Baumgartner-Weiss-Schindler test statistic gives better outcomes. Also, it finds more enriched pathways than other tested metrics, which may induce new biological discoveries.
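The Signal-To-Noise ratio named among the top-performing metrics is one of the simplest to compute: the difference of class means divided by the sum of class standard deviations. As a minimal sketch (not code from the paper; the array layout and function names are assumptions), ranking genes by |S2N| might look like:

```python
import numpy as np

def signal_to_noise(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """S2N for one gene: difference of class means divided by the
    sum of class standard deviations (sample std, ddof=1)."""
    return (group_a.mean() - group_b.mean()) / (group_a.std(ddof=1) + group_b.std(ddof=1))

def rank_genes(expr_a: np.ndarray, expr_b: np.ndarray) -> np.ndarray:
    """Rank genes (rows) by |S2N|, highest first, as one candidate
    ranking metric feeding a gene set enrichment analysis."""
    scores = np.array([abs(signal_to_noise(a, b)) for a, b in zip(expr_a, expr_b)])
    return np.argsort(scores)[::-1]
```

Per-gene expression values must not be constant within a class, or the denominator vanishes; real GSEA implementations typically clamp small standard deviations.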
Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.
Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira
2016-01-01
Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated sampling properties of visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers to extract the summary statistics: a global sampling model without sampling noise, global sampling model with sampling noise, and limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4.
NASA Astrophysics Data System (ADS)
Allen, C.
2010-12-01
During the Year of the Solar System spacecraft will encounter two comets, orbit the asteroid Vesta, continue to explore Mars with rovers, and launch robotic explorers to the Moon and Mars. We have pieces of all these worlds in our laboratories. Extensive information about these unique materials, as well as actual lunar samples and meteorites, is available for display and education. The Johnson Space Center (JSC) curates NASA's extraterrestrial samples to support research, education, and public outreach. At the current time JSC curates five types of extraterrestrial samples: Moon rocks and soils collected by the Apollo astronauts; meteorites collected on US expeditions to Antarctica (including rocks from the Moon, Mars, and many asteroids including Vesta); “cosmic dust” (asteroid and comet particles) collected by high-altitude aircraft; solar wind atoms collected by the Genesis spacecraft; and comet and interstellar dust particles collected by the Stardust spacecraft. These rocks, soils, dust particles, and atoms continue to be studied intensively by scientists around the world. Descriptions of the samples, research results, thousands of photographs, and information on how to request research samples are on the JSC Curation website: http://curator.jsc.nasa.gov/ NASA is eager for scientists and the public to have access to these exciting samples through our various loan procedures. NASA provides a limited number of Moon rock samples for either short-term or long-term displays at museums, planetariums, expositions, and professional events that are open to the public. The JSC Public Affairs Office handles requests for such display samples. Requestors should apply in writing to Mr. Louis Parker, JSC Exhibits Manager. He will advise successful applicants regarding provisions for receipt, display, and return of the samples. All loans will be preceded by a signed loan agreement executed between NASA and the requestor's organization.
Email address: louis.a.parker@nasa.gov Sets of twelve thin sections of Apollo lunar samples and sets of twelve thin sections of meteorites are available for short-term loan from JSC Curation. The thin sections are designed for use in college and university courses where petrographic microscopes are available for viewing. Requestors should contact Ms. Mary Luckey, Education Sample Curator. Email address: mary.k.luckey@nasa.gov NASA also loans sets of Moon rocks and meteorites for use in classrooms, libraries, museums and planetariums. Lunar samples (three soils and three rocks) are encapsulated in a six-inch diameter clear plastic disk. Disks containing six different samples of meteorites are also available. A CD with PowerPoint presentations, a classroom activity guide, and additional printed material accompany the disks. Educators may qualify for the use of these disks by attending a security certification workshop sponsored by NASA's Aerospace Education Services Program (AESP). Contact Ms. Margaret Maher, AESP Director. Email address: mjm67@psu.edu Please take advantage of the wealth of data and the samples that we have from an exciting variety of solar system bodies.
Mabood, Fazal; Abbas, Ghulam; Jabeen, Farah; Naureen, Zakira; Al-Harrasi, Ahmed; Hamaed, Ahmad M; Hussain, Javid; Al-Nabhani, Mahmood; Al Shukaili, Maryam S; Khan, Alamgir; Manzoor, Suryyia
2018-03-01
Cows' butterfat may be adulterated with animal fat materials like tallow, which causes increased serum cholesterol and triglyceride levels upon consumption. There is no reliable technique to detect and quantify tallow adulteration in butter samples in a feasible way. In this study a highly sensitive near-infrared (NIR) spectroscopy method combined with chemometric methods was developed to detect as well as quantify the level of tallow adulterant in clarified butter samples. For this investigation the pure clarified butter samples were intentionally adulterated with tallow at the following percentage levels: 1%, 3%, 5%, 7%, 9%, 11%, 13%, 15%, 17% and 20% (wt/wt). Altogether 99 clarified butter samples were used, including nine pure samples (un-adulterated clarified butter) and 90 clarified butter samples adulterated with tallow. Each sample was analysed by using NIR spectroscopy in the reflection mode in the range 10,000-4000 cm⁻¹, at 2 cm⁻¹ resolution, using the transflectance sample accessory, which provided a total path length of 0.5 mm. Chemometric models including principal components analysis (PCA), partial least-squares discriminant analysis (PLSDA), and partial least-squares regression (PLSR) were applied for statistical treatment of the obtained NIR spectral data. The PLSDA model was employed to differentiate pure butter samples from those adulterated with tallow. The employed model was then externally cross-validated by using a test set which included 30% of the total butter samples. The excellent performance of the model was demonstrated by the low RMSEP value of 1.537% and the high correlation coefficient of 0.95. This newly developed method is robust, non-destructive, highly sensitive, and economical, with very minor sample preparation and good ability to quantify less than 1.5% of tallow adulteration in clarified butter samples.
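The external validation is summarised by RMSEP and a correlation coefficient between reference and predicted adulteration levels. As a hedged illustration of how these two figures of merit are computed on a held-out test set (generic helpers, not the authors' code):

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean square error of prediction, evaluated on an
    external (held-out) test set."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def correlation(y_true, y_pred):
    """Pearson correlation between reference and predicted values."""
    return float(np.corrcoef(y_true, y_pred)[0, 1])
```

Here `y_true` would hold the known tallow percentages of the test-set samples and `y_pred` the PLSR predictions from the NIR spectra.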
NASA Astrophysics Data System (ADS)
Rivenson, Yair; Wu, Chris; Wang, Hongda; Zhang, Yibo; Ozcan, Aydogan
2017-03-01
Microscopic imaging of biological samples such as pathology slides is one of the standard diagnostic methods for screening various diseases, including cancer. These biological samples are usually imaged using traditional optical microscopy tools; however, the high cost, bulkiness and limited imaging throughput of traditional microscopes partially restrict their deployment in resource-limited settings. In order to mitigate this, we previously demonstrated a cost-effective and compact lens-less on-chip microscopy platform with a wide field-of-view of >20-30 mm^2. The lens-less microscopy platform has shown its effectiveness for imaging of highly connected biological samples, such as pathology slides of various tissue samples and smears, among others. This computational holographic microscope requires a set of super-resolved holograms acquired at multiple sample-to-sensor distances, which are used as input to an iterative phase recovery algorithm and holographic reconstruction process, yielding high-resolution images of the samples in phase and amplitude channels. Here we demonstrate that in order to reconstruct clinically relevant images with high resolution and image contrast, we require less than 50% of the previously reported nominal number of holograms acquired at different sample-to-sensor distances. This is achieved by incorporating a loose sparsity constraint as part of the iterative holographic object reconstruction. We demonstrate the success of this sparsity-based computational lens-less microscopy platform by imaging pathology slides of breast cancer tissue and Papanicolaou (Pap) smears.
Statistical characterization of a large geochemical database and effect of sample size
Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.
2005-01-01
The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompassed 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States), and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set. Part of the reason relates to the presence of mixtures of subpopulations and outliers.
Random samples of the data set with successively smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histograms, stem-and-leaf displays, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
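The central observation here, that normality tests gain power with sample size and so reject almost any real geochemical data set when n is large, can be illustrated with a small sketch (hypothetical data; SciPy's Shapiro-Wilk test stands in for the "standard statistical tests"):

```python
import numpy as np
from scipy import stats

def normality_pvalues(data, sizes, seed=0):
    """Shapiro-Wilk p-values on random subsamples of decreasing size.
    Larger subsamples give the test more power, so mild non-normality
    that passes at n=100 is often firmly rejected at n=2000."""
    rng = np.random.default_rng(seed)
    return {n: float(stats.shapiro(rng.choice(data, size=n, replace=False)).pvalue)
            for n in sizes}

# A mildly non-normal "element concentration" distribution: a mixture
# of two subpopulations, loosely mimicking the declustered NGS case
# (values are illustrative only).
rng = np.random.default_rng(42)
data = np.concatenate([rng.normal(10, 2, 4000), rng.normal(14, 3, 1000)])
pvals = normality_pvalues(data, sizes=(2000, 500, 100))
```

Plotting histograms or Q-Q plots of the same subsamples, as the authors recommend for large n, gives a more informative picture than the bare p-values.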
Suitability of public use secondary data sets to study multiple activities.
Putnam, Michelle; Morrow-Howell, Nancy; Inoue, Megumi; Greenfield, Jennifer C; Chen, Huajuan; Lee, YungSoo
2014-10-01
The aims of this study were to inventory activity items within and across U.S. public use data sets, to identify gaps in represented activity domains and challenges in interpreting domains, and to assess the potential for studying multiple activity engagement among older adults using existing data. We engaged in content analysis of activity measures of 5 U.S. public use data sets with nationally representative samples of older adults. Data sets included the Health & Retirement Survey (HRS), Americans' Changing Lives Survey (ACL), Midlife in the United States Survey (MIDUS), the National Health Interview Survey (NHIS), and the Panel Study of Income Dynamics survey (PSID). Two waves of each data set were analyzed. We identified 13 distinct activity domains across the 5 data sets, with substantial differences in representation of those domains among the data sets, and variance in the number and type of activity measures included in each. Our findings indicate that although it is possible to study multiple activity engagement within existing data sets, fuller sets of activity measures need to be developed in order to evaluate the portfolio of activities older adults engage in and the relationship of these portfolios to health and wellness outcomes. Importantly, clearer conceptual models of activity broadly conceived are required to guide this work. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
The relationship between drug use and sexual aggression in men across time.
Swartout, Kevin M; White, Jacquelyn W
2010-09-01
The relationship between drug use and sexual aggression in a sample of men was examined at five time points from adolescence through the 4th year of college. Hierarchical linear modeling explored the relationship between proximal drug use and severity of sexual aggression after controlling for proximal alcohol use at each time period. Results revealed that proximal drug use was associated with sexual aggression severity: Increased drug use predicted increased severity of sexual aggression across time. A second set of analyses explored the relationship between distal marijuana use and severity of sexual aggression after controlling for distal alcohol use. Results indicated that increased marijuana use predicted increased severity of sexual aggression across time. A third set of analyses explored the relationship between distal use of other illicit drugs and severity of sexual aggression after controlling for distal alcohol use. Results mirrored those of the second set of analyses and are discussed in terms of drug use as a component of deviant lifestyles that may include sexually aggressive behavior, including implications for applied settings.
Temperature Control Diagnostics for Sample Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santodonato, Louis J; Walker, Lakeisha MH; Church, Andrew J
2010-01-01
In a scientific laboratory setting, standard equipment such as cryocoolers is often used as part of a custom sample environment system designed to regulate temperature over a wide range. The end user may be more concerned with precise sample temperature control than with base temperature, but cryogenic systems tend to be specified mainly in terms of cooling capacity and base temperature. Technical staff at scientific user facilities (and perhaps elsewhere) often wonder how to best specify and evaluate temperature control capabilities. Here we describe test methods and give results obtained at a user facility that operates a large sample environment inventory. Although this inventory includes a wide variety of temperature, pressure, and magnetic field devices, the present work focuses on cryocooler-based systems.
Characterization of Friction Joints Subjected to High Levels of Random Vibration
NASA Technical Reports Server (NTRS)
deSantos, Omar; MacNeal, Paul
2012-01-01
This paper describes the test program in detail including test sample description, test procedures, and vibration test results of multiple test samples. The material pairs used in the experiment were Aluminum-Aluminum, Aluminum- Dicronite coated Aluminum, and Aluminum-Plasmadize coated Aluminum. Levels of vibration for each set of twelve samples of each material pairing were gradually increased until all samples experienced substantial displacement. Data was collected on 1) acceleration in all three axes, 2) relative static displacement between vibration runs utilizing photogrammetry techniques, and 3) surface galling and contaminant generation. This data was used to estimate the values of static friction during random vibratory motion when "stick-slip" occurs and compare these to static friction coefficients measured before and after vibration testing.
Deep Charging Evaluation of Satellite Power and Communication System Components
NASA Technical Reports Server (NTRS)
Schneider, T. A.; Vaughn, J. A.; Chu, B.; Wong, F.; Gardiner, G.; Wright, K. H.; Phillips, B.
2016-01-01
A set of deep charging tests has been carried out by NASA's Marshall Space Flight Center on subscale flight-like samples developed by Space Systems/Loral, LLC. The samples, which included solar array wire coupons, a photovoltaic cell coupon, and a coaxial microwave transmission cable, were placed in passive and active (powered) circuit configurations and exposed to electron radiation. The energy of the electron radiation was chosen to deeply penetrate insulating (dielectric) materials on each sample. Each circuit configuration was monitored to determine if potentially damaging electrostatic discharge events (arcs) were developed on the coupon as a result of deep charging. The motivation for the test, along with charging levels, experimental setup, sample details, and results will be discussed.
National Databases for Neurosurgical Outcomes Research: Options, Strengths, and Limitations.
Karhade, Aditya V; Larsen, Alexandra M G; Cote, David J; Dubois, Heloise M; Smith, Timothy R
2017-08-05
Quality improvement, value-based care delivery, and personalized patient care depend on robust clinical, financial, and demographic data streams of neurosurgical outcomes. The neurosurgical literature lacks a comprehensive review of large national databases. To assess the strengths and limitations of various resources for outcomes research in neurosurgery. A review of the literature was conducted to identify surgical outcomes studies using national data sets. The databases were assessed for the availability of patient demographics and clinical variables, longitudinal follow-up of patients, strengths, and limitations. The number of unique patients contained within each data set ranged from thousands (Quality Outcomes Database [QOD]) to hundreds of millions (MarketScan). Databases with both clinical and financial data included PearlDiver, Premier Healthcare Database, Vizient Clinical Data Base and Resource Manager, and the National Inpatient Sample. Outcomes collected by databases included patient-reported outcomes (QOD); 30-day morbidity, readmissions, and reoperations (National Surgical Quality Improvement Program); and disease incidence and disease-specific survival (Surveillance, Epidemiology, and End Results-Medicare). The strengths of large databases included large numbers of rare pathologies and multi-institutional nationally representative sampling; the limitations of these databases included variable data veracity, variable data completeness, and missing disease-specific variables. The improvement of existing large national databases and the establishment of new registries will be crucial to the future of neurosurgical outcomes research. Copyright © 2017 by the Congress of Neurological Surgeons
NASA Astrophysics Data System (ADS)
Hedberg, Emma; Gidhagen, Lars; Johansson, Christer
Sampling of particles (PM10) was conducted during a one-year period at two rural sites in Central Chile, Quillota and Linares. The samples were analyzed for elemental composition. The data sets have undergone source-receptor analyses in order to estimate the sources and their abundances in the PM10 size fraction, by using the factor analytical method positive matrix factorization (PMF). The analysis showed that PM10 was dominated by soil resuspension at both sites during the summer months, while during winter traffic dominated the particle mass at Quillota and local wood burning dominated the particle mass at Linares. Two copper smelters impacted the Quillota station, contributing on average 10% and 16% of PM10 during summer and winter, respectively. One smelter impacted Linares by 8% and 19% of PM10 in the summer and winter, respectively. For arsenic the two smelters accounted for 87% of the monitored arsenic levels at Quillota, and at Linares one smelter contributed 72% of the measured mass. In comparison with PMF, the use of a dispersion model tended to overestimate the smelter contribution to arsenic levels at both sites. The robustness of the PMF model was tested by using randomly reduced data sets, where 85%, 70%, 50% and 33% of the samples were included. In this way the ability of the model to reconstruct the sources initially found by the original data set could be tested. On average for all sources, the relative standard deviation increased from 7% to 25% for the variables identifying the sources when decreasing the data set from 85% to 33% of the samples, indicating that the solution initially found was very stable to begin with. It was also noted, however, that sources due to industrial or combustion processes were more sensitive to the size of the data set than natural sources such as local soil and sea spray.
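The robustness check described, re-running the receptor model on randomly reduced data sets and comparing the spread of the resulting source estimates, can be sketched generically (helper names and shapes are assumptions; the PMF model itself is not reproduced here):

```python
import numpy as np

def subsample_fractions(n_samples, fractions, seed=0):
    """Index sets for randomly reduced data sets (e.g. 85%, 70%, 50%,
    33% of the samples), each of which would be refit with the
    receptor model to test the stability of the original solution."""
    rng = np.random.default_rng(seed)
    return {f: rng.choice(n_samples, size=int(round(f * n_samples)), replace=False)
            for f in fractions}

def relative_std_percent(estimates):
    """Relative standard deviation (%) of repeated source-contribution
    estimates across the reduced data sets."""
    v = np.asarray(estimates, float)
    return float(100.0 * v.std(ddof=1) / v.mean())
```

A stable solution shows small relative standard deviations for the source-identifying variables even at the 33% level; a jump in this spread flags sources that depend on a few influential samples.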
Optical method for determining the mechanical properties of a material
Maris, H.J.; Stoner, R.J.
1998-12-01
Disclosed is a method for characterizing a sample, comprising the steps of: (a) acquiring data from the sample using at least one probe beam wavelength to measure, for times less than a few nanoseconds, a change in the reflectivity of the sample induced by a pump beam; (b) analyzing the data to determine at least one material property by comparing a background signal component of the data with data obtained for a similar delay time range from one or more samples prepared under conditions known to give rise to certain physical and chemical material properties; and (c) analyzing a component of the measured time dependent reflectivity caused by ultrasonic waves generated by the pump beam using the at least one determined material property. The first step of analyzing may include a step of interpolating between reference samples to obtain an intermediate set of material properties. The material properties may include sound velocity, density, and optical constants. In one embodiment, only a correlation is made with the background signal, and at least one of the structural phase, grain orientation, and stoichiometry is determined. 14 figs.
NASA Astrophysics Data System (ADS)
Straková, Petra; Laiho, Raija
2016-04-01
In this presentation, we assess the merits of using Fourier transform infrared (FTIR) spectra to estimate the organic matter composition in different plant biomass and peat soil samples. Infrared spectroscopy has a great potential in large-scale peatland studies that require low cost and high throughput techniques, as it gives a unique "chemical overview" of a sample, with all the chemical compounds present contributing to the spectrum produced. Our extensive sample sets include soil samples ranging from boreal to tropical peatlands, including sites under different environmental and/or land-use changes; above- and below-ground biomass of different peatland plant species; plant root mixtures. We mainly use FTIR to estimate (1) chemical composition of the samples (e.g., total C and N, C:N ratio, holocellulose, lignin and ash content), (2) proportion of each plant species in root mixtures, and (3) respiration of surface peat. The satisfactory results of our predictive models suggest that this experimental approach can, for example, be used as a screening tool in the evaluation of organic matter composition in peatlands during monitoring of their degradation and/or restoration success.
Optical method for determining the mechanical properties of a material
Maris, Humphrey J.; Stoner, Robert J.
1998-01-01
Disclosed is a method for characterizing a sample, comprising the steps of: (a) acquiring data from the sample using at least one probe beam wavelength to measure, for times less than a few nanoseconds, a change in the reflectivity of the sample induced by a pump beam; (b) analyzing the data to determine at least one material property by comparing a background signal component of the data with data obtained for a similar delay time range from one or more samples prepared under conditions known to give rise to certain physical and chemical material properties; and (c) analyzing a component of the measured time dependent reflectivity caused by ultrasonic waves generated by the pump beam using the at least one determined material property. The first step of analyzing may include a step of interpolating between reference samples to obtain an intermediate set of material properties. The material properties may include sound velocity, density, and optical constants. In one embodiment, only a correlation is made with the background signal, and at least one of the structural phase, grain orientation, and stoichiometry is determined.
Estimating acreage by double sampling using LANDSAT data
NASA Technical Reports Server (NTRS)
Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)
1982-01-01
Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling which included the fictional perfect procedure resulted in a more cost effective combination when it was used with the lower cost/higher variance representative of the existing procedures.
Effective use of metadata in the integration and analysis of multi-dimensional optical data
NASA Astrophysics Data System (ADS)
Pastorello, G. Z.; Gamon, J. A.
2012-12-01
Data discovery and integration relies on adequate metadata. However, creating and maintaining metadata is time consuming and often poorly addressed or avoided altogether, leading to problems in later data analysis and exchange. This is particularly true for research fields in which metadata standards do not yet exist or are under development, or within smaller research groups without enough resources. Vegetation monitoring using in-situ and remote optical sensing is an example of such a domain. In this area, data are inherently multi-dimensional, with spatial, temporal and spectral dimensions usually being well characterized. Other equally important aspects, however, might be inadequately translated into metadata. Examples include equipment specifications and calibrations, field/lab notes and field/lab protocols (e.g., sampling regimen, spectral calibration, atmospheric correction, sensor view angle, illumination angle), data processing choices (e.g., methods for gap filling, filtering and aggregation of data), quality assurance, and documentation of data sources, ownership and licensing. Each of these aspects can be important as metadata for search and discovery, but they can also be used as key data fields in their own right. If each of these aspects is also understood as an "extra dimension," it is possible to take advantage of them to simplify the data acquisition, integration, analysis, visualization and exchange cycle. Simple examples include selecting data sets of interest early in the integration process (e.g., only data collected according to a specific field sampling protocol) or applying appropriate data processing operations to different parts of a data set (e.g., adaptive processing for data collected under different sky conditions). More interesting scenarios involve guided navigation and visualization of data sets based on these extra dimensions, as well as partitioning data sets to highlight relevant subsets to be made available for exchange. 
The DAX (Data Acquisition to eXchange) Web-based tool uses a flexible metadata representation model and takes advantage of multi-dimensional data structures to translate metadata types into data dimensions, effectively reshaping data sets according to available metadata. With that, metadata is tightly integrated into the acquisition-to-exchange cycle, allowing for more focused exploration of data sets while also increasing the value of, and incentives for, keeping good metadata. The tool is being developed and tested with optical data collected in different settings, including laboratory, field, airborne, and satellite platforms.
NASA Technical Reports Server (NTRS)
Vonderhaar, T. H.; Reinke, Donald L.; Randel, David L.; Stephens, Graeme L.; Combs, Cynthia L.; Greenwald, Thomas J.; Ringerud, Mark A.; Wittmeyer, Ian L.
1993-01-01
During the next decade, many programs and experiments under the Global Energy and Water Cycle Experiment (GEWEX) will utilize present day and future data sets to improve our understanding of the role of moisture in climate, and its interaction with other variables such as clouds and radiation. An important element of GEWEX will be the GEWEX Water Vapor Project (GVaP), which will eventually initiate a routine, real-time assimilation of the highest quality, global water vapor data sets, including information gained from future data collection systems, both ground and space based. The comprehensive global water vapor data set being produced by METSAT Inc. uses a combination of ground-based radiosonde data, and infrared and microwave satellite retrievals. This data is needed to provide the desired foundation from which future GEWEX-related research, such as GVaP, can build. The first year of this project was designed to use a combination of the best available atmospheric moisture data, including radiosonde (balloon/acft/rocket), HIRS/MSU (TOVS) retrievals, and SSM/I retrievals, to produce a one-year, global, high resolution data set of integrated column water vapor (precipitable water) with a horizontal resolution of 1 degree and a temporal resolution of one day. The time period of this pilot product was to be determined by the availability of all the input data sets. January 1988 through December 1988 was selected. In addition, a sample of vertically integrated liquid water content (LWC) was to be produced with the same temporal and spatial parameters. This sample was to be produced over ocean areas only. Three main steps are followed to produce a merged water vapor and liquid water product. Input data from radiosondes, TOVS, and SSM/I is quality checked in steps one and two. Processing is done in step two to generate individual total column water vapor and liquid water data sets.
The third step, and final processing task, involves merging the individual output products to produce the integrated water vapor product. A final quality control is applied to the merged data sets.
Jockusch, Elizabeth L; Martínez-Solano, Iñigo; Timpe, Elizabeth K
2015-01-01
Species tree methods are now widely used to infer the relationships among species from multilocus data sets. Many methods have been developed, which differ in whether gene and species trees are estimated simultaneously or sequentially, and in how gene trees are used to infer the species tree. While these methods perform well on simulated data, less is known about what impacts their performance on empirical data. We used a data set including five nuclear genes and one mitochondrial gene for 22 species of Batrachoseps to compare the effects of method of analysis, within-species sampling and gene sampling on species tree inferences. For this data set, the choice of inference method had the largest effect on the species tree topology. Exclusion of individual loci had large effects in *BEAST and STEM, but not in MP-EST. Different loci carried the greatest leverage in these different methods, showing that the causes of their disproportionate effects differ. Even though substantial information was present in the nuclear loci, the mitochondrial gene dominated the *BEAST species tree. This leverage is inherent to the mtDNA locus and results from its high variation and lower assumed ploidy. This mtDNA leverage may be problematic when mtDNA has undergone introgression, as is likely in this data set. By contrast, the leverage of RAG1 in STEM analyses does not reflect properties inherent to the locus, but rather results from a gene tree that is strongly discordant with all others, and is best explained by introgression between distantly related species. Within-species sampling was also important, especially in *BEAST analyses, as shown by differences in tree topology across 100 subsampled data sets. Despite the sensitivity of the species tree methods to multiple factors, five species groups, the relationships among these, and some relationships within them, are generally consistently resolved for Batrachoseps. © The Author(s) 2014. 
Published by Oxford University Press on behalf of the Society of Systematic Biologists.
Biostatistics Series Module 3: Comparing Groups: Numerical Variables.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Numerical data that are normally distributed can be analyzed with parametric tests, that is, tests which are based on the parameters that define a normal distribution curve. If the distribution is uncertain, the data can be plotted as a normal probability plot and visually inspected, or tested for normality using one of a number of goodness of fit tests, such as the Kolmogorov-Smirnov test. The widely used Student's t-test has three variants. The one-sample t-test is used to assess if a sample mean (as an estimate of the population mean) differs significantly from a given population mean. The means of two independent samples may be compared for a statistically significant difference by the unpaired or independent samples t-test. If the data sets are related in some way, their means may be compared by the paired or dependent samples t-test. The t-test should not be used to compare the means of more than two groups. Although it is possible to compare groups in pairs, when there are more than two groups, this will increase the probability of a Type I error. The one-way analysis of variance (ANOVA) is employed to compare the means of three or more independent data sets that are normally distributed. Multiple measurements from the same set of subjects cannot be treated as separate, unrelated data sets. Comparison of means in such a situation requires repeated measures ANOVA. It is to be noted that while a multiple group comparison test such as ANOVA can point to a significant difference, it does not identify exactly between which two groups the difference lies. To do this, multiple group comparison needs to be followed up by an appropriate post hoc test. An example is the Tukey's honestly significant difference test following ANOVA. If the assumptions for parametric tests are not met, there are nonparametric alternatives for comparing data sets. 
These include Mann-Whitney U-test as the nonparametric counterpart of the unpaired Student's t-test, Wilcoxon signed-rank test as the counterpart of the paired Student's t-test, Kruskal-Wallis test as the nonparametric equivalent of ANOVA and the Friedman's test as the counterpart of repeated measures ANOVA.
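The one-sample t statistic described in the abstract above is simple to compute by hand. As an illustration (not taken from the module itself), a minimal Python sketch using only the standard library is:

```python
import math
from statistics import mean, stdev

def one_sample_t(data, mu0):
    """One-sample t statistic: does the sample mean differ from mu0?
    Returns (t, degrees of freedom); compare t against the t
    distribution with n - 1 df to obtain a p-value."""
    n = len(data)
    t = (mean(data) - mu0) / (stdev(data) / math.sqrt(n))
    return t, n - 1
```

A sample whose mean equals the hypothesized population mean yields t near zero, while a clearly shifted sample yields a large t, mirroring the test's logic as described in the module.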
Acetaminophen-cysteine adducts during therapeutic dosing and following overdose
2011-01-01
Background Acetaminophen-cysteine adducts (APAP-CYS) are a specific biomarker of acetaminophen exposure. APAP-CYS concentrations have been described in the setting of acute overdose, and a concentration >1.1 nmol/ml has been suggested as a marker of hepatic injury from acetaminophen overdose in patients with an ALT >1000 IU/L. However, the concentrations of APAP-CYS during therapeutic dosing, in cases of acetaminophen toxicity from repeated dosing and in cases of hepatic injury from non-acetaminophen hepatotoxins have not been well characterized. The objective of this study is to describe APAP-CYS concentrations in these clinical settings as well as to further characterize the concentrations observed following acetaminophen overdose. Methods Samples were collected during three clinical trials in which subjects received 4 g/day of acetaminophen and during an observational study of acetaminophen overdose patients. Trial 1 consisted of non-drinkers who received APAP for 10 days, Trial 2 consisted of moderate drinkers dosed for 10 days and Trial 3 included subjects who chronically abuse alcohol dosed for 5 days. Patients in the observational study were categorized by type of acetaminophen exposure (single or repeated). Serum APAP-CYS was measured using high pressure liquid chromatography with electrochemical detection. Results Trial 1 included 144 samples from 24 subjects; Trial 2 included 182 samples from 91 subjects and Trial 3 included 200 samples from 40 subjects. In addition, we collected samples from 19 subjects with acute acetaminophen ingestion, 7 subjects with repeated acetaminophen exposure and 4 subjects who ingested another hepatotoxin. The mean (SD) peak APAP-CYS concentrations for the Trials were: Trial 1- 0.4 (0.20) nmol/ml, Trial 2- 0.1 (0.09) nmol/ml and Trial 3- 0.3 (0.12) nmol/ml. APAP-CYS concentrations varied substantially among the patients with acetaminophen toxicity (0.10 to 27.3 nmol/ml). 
No subject had detectable APAP-CYS following exposure to a non-acetaminophen hepatotoxin. Conclusions Lower concentrations of APAP-CYS are detectable after exposure to therapeutic doses of acetaminophen and higher concentrations are detected after acute acetaminophen overdose and in patients with acetaminophen toxicity following repeated exposure. PMID:21401949
Sehmel, George A.
1979-01-01
An isokinetic air sampler includes a filter, a holder for the filter, an air pump for drawing air through the filter at a fixed, predetermined rate, an inlet assembly for the sampler having an inlet opening therein of a size such that isokinetic air sampling is obtained at a particular wind speed, a closure for the inlet opening and means for simultaneously opening the closure and turning on the air pump when the wind speed is such that isokinetic air sampling is obtained. A system incorporating a plurality of such samplers provided with air pumps set to draw air through the filter at the same fixed, predetermined rate and having different inlet opening sizes for use at different wind speeds is included within the ambit of the present invention as is a method of sampling air to measure airborne concentrations of particulate pollutants as a function of wind speed.
Joint Inference of Population Assignment and Demographic History
Choi, Sang Chul; Hey, Jody
2011-01-01
A new approach to assigning individuals to populations using genetic data is described. Most existing methods work by maximizing Hardy–Weinberg and linkage equilibrium within populations, neither of which will apply for many demographic histories. By including a demographic model, within a likelihood framework based on coalescent theory, we can jointly study demographic history and population assignment. Genealogies and population assignments are sampled from a posterior distribution using a general isolation-with-migration model for multiple populations. A measure of partition distance between assignments facilitates not only the summary of a posterior sample of assignments, but also the estimation of the posterior density for the demographic history. It is shown that joint estimates of assignment and demographic history are possible, including estimation of population phylogeny for samples from three populations. The new method is compared to results of a widely used assignment method, using simulated and published empirical data sets. PMID:21775468
Bungay, Vicky; Oliffe, John; Atchison, Chris
2016-06-01
Men, transgender people, and those working in off-street locales have historically been underrepresented in sex work health research. Failure to include all sections of sex worker populations precludes comprehensive understandings about a range of population health issues, including potential variations in the manifestation of such issues within and between population subgroups, which in turn can impede the development of effective services and interventions. In this article, we describe our attempts to define, determine, and recruit a purposeful sample for a qualitative study examining the interrelationships between sex workers' health and the working conditions in the Vancouver off-street sex industry. Detailed is our application of ethnographic mapping approaches to generate information about population diversity and work settings within distinct geographical boundaries. Bearing in mind the challenges and the overwhelming discrimination sex workers experience, we scope recommendations for safe and effective purposeful sampling inclusive of sex workers' heterogeneity. © The Author(s) 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
A Problem Confirmation Study was performed at three sites on Mather AFB identified in the Phase I investigation as requiring further study (the Air Command and Warning Area, the 7100 Area, and the West Ditch), as well as at the Northeast Perimeter. The field investigation was conducted from February 1984 to June 1985 and included installation of 11 monitor wells, collection of groundwater samples from the monitor wells and 15 base production wells, and collection of sediment samples from two locations on the West Ditch. Analytes included oil and grease, TOC, volatile organic compounds (VOA), as well as dimethylnitrosamine, phenols, pesticides, and dissolved metals at some specific sites. Based on the hydrogeologic complexity of the physical setting and the findings of the sampling and analytical work, follow-on investigations were recommended at all three sites.
Outcomes research in pediatric settings: recent trends and future directions.
Forrest, Christopher B; Shipman, Scott A; Dougherty, Denise; Miller, Marlene R
2003-01-01
Pediatric outcomes research examines the effects of health care delivered in everyday medical settings on the health of children and adolescents. It is an area of inquiry in its nascent stages of development. We conducted a systematic literature review that covered articles published during the 6-year interval 1994-1999 and in 39 peer-reviewed journals chosen for their likelihood of containing child health services research. This article summarizes the article abstraction, reviews the literature, describes recent trends, and makes recommendations for future work. In the sample of journals that we examined, the number of pediatric outcomes research articles doubled between 1994 and 1999. Hospitals and primary care practices were the most common service sectors, accounting for more than half of the articles. Common clinical categories included neonatal conditions, asthma, psychosocial problems, and injuries. Approximately 1 in 5 studies included multistate or national samples; 1 in 10 used a randomized controlled trial study design. Remarkably few studies examined the health effects of preventive, diagnostic, long-term management, or curative services delivered to children and adolescents. Outcomes research in pediatric settings is a rapidly growing area of inquiry that is acquiring breadth but has achieved little depth in any single content area. Much work needs to be done to inform decision making regarding the optimal ways to finance, organize, and deliver child health care services. To improve the evidence base of pediatric health care, more effectiveness research is needed to evaluate the overall and relative effects of services delivered to children and adolescents in everyday settings.
Wright, Adam; Pang, Justine; Feblowitz, Joshua C; Maloney, Francine L; Wilcox, Allison R; Ramelson, Harley Z; Schneider, Louise I; Bates, David W
2011-01-01
Accurate knowledge of a patient's medical problems is critical for clinical decision making, quality measurement, research, billing and clinical decision support. Common structured sources of problem information include the patient problem list and billing data; however, these sources are often inaccurate or incomplete. To develop and validate methods of automatically inferring patient problems from clinical and billing data, and to provide a knowledge base for inferring problems. We identified 17 target conditions and designed and validated a set of rules for identifying patient problems based on medications, laboratory results, billing codes, and vital signs. A panel of physicians provided input on a preliminary set of rules. Based on this input, we tested candidate rules on a sample of 100,000 patient records to assess their performance compared to gold standard manual chart review. The physician panel selected a final rule for each condition, which was validated on an independent sample of 100,000 records to assess its accuracy. Seventeen rules were developed for inferring patient problems. Analysis using a validation set of 100,000 randomly selected patients showed high sensitivity (range: 62.8-100.0%) and positive predictive value (range: 79.8-99.6%) for most rules. Overall, the inference rules performed better than using either the problem list or billing data alone. We developed and validated a set of rules for inferring patient problems. These rules have a variety of applications, including clinical decision support, care improvement, augmentation of the problem list, and identification of patients for research cohorts.
Kawahara, Akito Y; Breinholt, Jesse W; Espeland, Marianne; Storer, Caroline; Plotkin, David; Dexter, Kelly M; Toussaint, Emmanuel F A; St Laurent, Ryan A; Brehm, Gunnar; Vargas, Sergio; Forero, Dimitri; Pierce, Naomi E; Lohman, David J
2018-06-11
The Neotropical moth-like butterflies (Hedylidae) are perhaps the most unusual butterfly family. In addition to being species-poor, this family is predominantly nocturnal and has anti-bat ultrasound hearing organs. Evolutionary relationships among the 36 described species are largely unexplored. A new target-capture, anchored hybrid enrichment probe set ('BUTTERFLY2.0') was developed to infer relationships of hedylids and some of their butterfly relatives. The probe set includes 13 genes that have historically been used in butterfly phylogenetics. Our dataset comprised up to 10,898 aligned base pairs from 22 hedylid species and 19 outgroups. Eleven of the thirteen loci were successfully captured from all samples, and the remaining loci were captured from ≥94% of samples. The inferred phylogeny was consistent with recent molecular studies by placing Hedylidae sister to Hesperiidae, and the tree had robust support for 80% of nodes. Our results are also consistent with morphological studies, with Macrosoma tipulata as the sister species to all remaining hedylids, followed by M. semiermis sister to the remaining species in the genus. We tested the hypothesis that nocturnality evolved once from diurnality in Hedylidae, and demonstrate that the ancestral condition was likely diurnal, with a shift to nocturnality early in the diversification of this family. The BUTTERFLY2.0 probe set includes standard butterfly phylogenetics markers, captures sequences from decades-old museum specimens, and is a cost-effective technique to infer phylogenetic relationships of the butterfly tree of life. Copyright © 2018 Elsevier Inc. All rights reserved.
Feliubadaló, Lídia; Lopez-Doriga, Adriana; Castellsagué, Ester; del Valle, Jesús; Menéndez, Mireia; Tornero, Eva; Montes, Eva; Cuesta, Raquel; Gómez, Carolina; Campos, Olga; Pineda, Marta; González, Sara; Moreno, Victor; Brunet, Joan; Blanco, Ignacio; Serra, Eduard; Capellá, Gabriel; Lázaro, Conxi
2013-01-01
Next-generation sequencing (NGS) is changing genetic diagnosis due to its huge sequencing capacity and cost-effectiveness. The aim of this study was to develop an NGS-based workflow for routine diagnostics for hereditary breast and ovarian cancer syndrome (HBOCS), to improve genetic testing for BRCA1 and BRCA2. An NGS-based workflow was designed using BRCA MASTR kit amplicon libraries followed by GS Junior pyrosequencing. Data analysis combined the freely available Variant Identification Pipeline software and ad hoc R scripts, including a cascade of filters to generate coverage and variant calling reports. A BRCA homopolymer assay was performed in parallel. A research scheme was designed in two parts. A Training Set of 28 DNA samples containing 23 unique pathogenic mutations and 213 other variants (33 unique) was used. The workflow was validated in a set of 14 samples from HBOCS families in parallel with the current diagnostic workflow (Validation Set). The NGS-based workflow developed permitted the identification of all pathogenic mutations and genetic variants, including those located in or close to homopolymers. The use of NGS for detecting copy-number alterations was also investigated. The workflow meets the sensitivity and specificity requirements for the genetic diagnosis of HBOCS and improves on the cost-effectiveness of current approaches. PMID:23249957
Automated Detection of Driver Fatigue Based on AdaBoost Classifier with EEG Signals.
Hu, Jianfeng
2017-01-01
Purpose: Driver fatigue has become one of the important causes of road accidents, and many studies have analyzed it. EEG is becoming increasingly useful for measuring the fatigue state. Manual interpretation of EEG signals is impractical, so an effective method for automatic detection from EEG signals is crucially needed. Method: To evaluate the complex, unstable, and non-linear characteristics of EEG signals, several feature sets were computed from the EEG signals: fuzzy entropy (FE), sample entropy (SE), approximate entropy (AE), spectral entropy (PE), and the combined entropies (FE + SE + AE + PE). All these feature sets were used as input vectors to an AdaBoost classifier, a boosting method that is fast and highly accurate. To assess the method, several experiments, including parameter setting and classifier comparison, were conducted on 28 subjects. For comparison, Decision Tree (DT), Support Vector Machine (SVM), and Naive Bayes (NB) classifiers were used. Results: The proposed method (the combination of FE and AdaBoost) yields better performance than the other schemes. Using the FE feature extractor, AdaBoost achieves an area under the receiver operating curve (AUC) of 0.994, an error rate (ERR) of 0.024, Precision of 0.969, Recall of 0.984, an F1 score of 0.976, and a Matthews correlation coefficient (MCC) of 0.952, compared to SVM (ERR of 0.035, Precision of 0.957, Recall of 0.974, F1 score of 0.966, and MCC of 0.930, with AUC of 0.990), DT (ERR of 0.142, Precision of 0.857, Recall of 0.859, F1 score of 0.966, and MCC of 0.716, with AUC of 0.916), and NB (ERR of 0.405, Precision of 0.646, Recall of 0.434, F1 score of 0.519, and MCC of 0.203, with AUC of 0.606). This shows that the FE feature set and the combined feature set outperform the other feature sets.
AdaBoost also appears robust to changes in the ratio of test samples to all samples and in the number of subjects, and might therefore aid real-time detection of driver fatigue through the classification of EEG signals. Conclusion: By using the combination of FE features and an AdaBoost classifier to detect EEG-based driver fatigue, this paper supports further exploration of the underlying physiological mechanisms and wearable applications. PMID:28824409
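One of the entropy features the study above uses, sample entropy, has a compact definition: the negative log of the ratio of template matches at length m+1 to matches at length m. A minimal sketch of one common formulation (illustrative, not the study's implementation; here r is an absolute tolerance, whereas in practice it is often set to 0.2 times the signal's standard deviation):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts pairs of templates of
    length m and A counts pairs of length m + 1 that match within
    Chebyshev tolerance r. Regular signals give low values; irregular
    signals give high values."""
    def matches(mm):
        t = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        return sum(
            1
            for i in range(len(t))
            for j in range(i + 1, len(t))
            if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r
        )
    B, A = matches(m), matches(m + 1)
    return float("inf") if A == 0 or B == 0 else -math.log(A / B)
```

A strictly periodic signal such as 1, 2, 1, 2, … yields a low sample entropy, consistent with its use as a regularity measure for EEG segments.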
The Alzheimer’s Disease Centers’ Uniform Data Set (UDS): The Neuropsychological Test Battery
Weintraub, Sandra; Salmon, David; Mercaldo, Nathaniel; Ferris, Steven; Graff-Radford, Neill R.; Chui, Helena; Cummings, Jeffrey; DeCarli, Charles; Foster, Norman L.; Galasko, Douglas; Peskind, Elaine; Dietrich, Woodrow; Beekly, Duane L.; Kukull, Walter A.; Morris, John C.
2009-01-01
The neuropsychological test battery from the Uniform Data Set (UDS) of the Alzheimer’s Disease Centers (ADC) program of the National Institute on Aging (NIA) consists of brief measures of attention, processing speed, executive function, episodic memory and language. This paper describes development of the battery and preliminary data from the initial UDS evaluation of 3,268 clinically cognitively normal men and women collected over the first 24 months of utilization. The subjects represent a sample of community-dwelling, individuals who volunteer for studies of cognitive aging. Subjects were considered “clinically cognitively normal” based on clinical assessment, including the Clinical Dementia Rating scale and the Functional Assessment Questionnaire. The results demonstrate performance on tests sensitive to cognitive aging and to the early stages of Alzheimer disease (AD) in a relatively well-educated sample. Regression models investigating the impact of age, education, and gender on test scores indicate that these variables will need to be incorporated in subsequent normative studies. Future plans include: 1) determining the psychometric properties of the battery; 2) establishing normative data, including norms for different ethnic minority groups; and 3) conducting longitudinal studies on cognitively normal subjects, individuals with mild cognitive impairment, and individuals with AD and other forms of dementia. PMID:19474567
Extracting galactic structure parameters from multivariate density estimation
NASA Technical Reports Server (NTRS)
Chen, B.; Creze, M.; Robin, A.; Bienayme, O.
1992-01-01
Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification), and principal component analysis (a dimensionality-reduction method), together with nonparametric density estimation, has been used successfully to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can serve as new tools for obtaining information about hidden structure that is otherwise unrecognizable, and for placing important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how nonparametric density estimation can substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. To fit model-predicted densities to reality, we derive a set of n equations (where n is the total number of observed points) in m unknown parameters (where m is the number of predefined groups). A least-squares estimation then allows us to determine the density law of the different groups and components in the Galaxy. The output of our software, which can be used in many research fields, also quantifies, via Bayes' rule, the systematic error between the model and the observation.
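The least-squares step described above, fitting an observed density as a weighted sum of group densities evaluated at the n observed points, can be sketched for the two-group case via the normal equations. This is an illustrative reduction, not the authors' software:

```python
def fit_group_weights(obs, comp1, comp2):
    """Least-squares weights (c1, c2) such that
    obs[i] ≈ c1 * comp1[i] + c2 * comp2[i] at each of the n points,
    solved in closed form from the 2x2 normal equations."""
    a11 = sum(u * u for u in comp1)
    a22 = sum(v * v for v in comp2)
    a12 = sum(u * v for u, v in zip(comp1, comp2))
    b1 = sum(u * o for u, o in zip(comp1, obs))
    b2 = sum(v * o for v, o in zip(comp2, obs))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det)
```

With m groups the same construction gives an m x m normal system; the recovered weights play the role of the density-law parameters for each stellar population.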
UK audit of glomerular filtration rate measurement from plasma sampling in 2013.
Murray, Anthony W; Lawson, Richard S; Cade, Sarah C; Hall, David O; Kenny, Bob; O'Shaughnessy, Emma; Taylor, Jon; Towey, David; White, Duncan; Carson, Kathryn
2014-11-01
An audit was carried out into UK glomerular filtration rate (GFR) calculation. The results were compared with an identical 2001 audit. Participants used their routine method to calculate GFR for 20 data sets (four plasma samples) in millilitres per minute and also the GFR normalized for body surface area. Some unsound data sets were included to analyse the applied quality control (QC) methods. Variability between centres was assessed for each data set, compared with the national median and a reference value calculated using the method recommended in the British Nuclear Medicine Society guidelines. The influence of the number of samples on variability was studied. Supplementary data were requested on workload and methodology. The 59 returns showed widespread standardization. The applied early exponential clearance correction was the main contributor to the observed variability. These corrections were applied by 97% of centres (50% in 2001), with 80% using the recommended averaged Brøchner-Mortensen correction. Approximately 75% applied the recommended Haycock body surface area formula for adults (78% for children). The effect of the number of samples used was not significant. There was wide variability in the applied QC techniques, especially in terms of the use of the volume of distribution. The widespread adoption of the guidelines has harmonized national GFR calculation compared with the previous audit. Further standardization could further reduce variability. This audit has highlighted the need to address the national standardization of QC methods. Radionuclide techniques are confirmed as the preferred method for GFR measurement when an unequivocal result is required.
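The early exponential clearance correction mentioned above is, for adults, usually the Brøchner-Mortensen quadratic. A sketch using the commonly quoted published coefficients (values assumed from the literature, not taken from this audit; verify against local protocol before any clinical use):

```python
def brochner_mortensen(gfr_slope_intercept):
    """Apply the adult Brøchner-Mortensen correction to a
    slope-intercept GFR (ml/min), compensating for the missed early
    exponential of the plasma clearance curve. Coefficients are the
    commonly quoted published values and are an assumption here."""
    g = gfr_slope_intercept
    return 0.990778 * g - 0.001218 * g ** 2
```

The correction shrinks high slope-intercept values more than low ones, which is why its uniform application across centres reduces the inter-centre variability the audit measured.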
Study on fast discrimination of varieties of yogurt using Vis/NIR-spectroscopy
NASA Astrophysics Data System (ADS)
He, Yong; Feng, Shuijuan; Deng, Xunfei; Li, Xiaoli
2006-09-01
A new approach for discriminating varieties of yogurt by means of Vis/NIR spectroscopy is presented in this paper. First, principal component analysis (PCA) was applied to the spectral curves of 5 typical kinds of yogurt to cluster the yogurt varieties. The analysis showed that the cumulative reliability of PC1 and PC2 (the first two principal components) was more than 98.956%, and the cumulative reliability of PC1 through PC7 (the first seven principal components) was 99.97%. Second, an Artificial Neural Network (ANN-BP) discrimination model was set up. The first seven principal components of the samples were used as the ANN-BP inputs and the yogurt type as the output, giving a three-layer ANN-BP model. Each yogurt variety comprised 27 samples, for a total of 135 samples, of which 25 were held out as the prediction set. The results showed that the discrimination rate for the five yogurt varieties was 100%, indicating that the model is reliable and practicable. This constitutes a new approach for the rapid, nondestructive discrimination of yogurt varieties.
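The "cumulative reliability" reported above is the fraction of total variance explained by the leading principal components. For two-dimensional data this ratio has a closed form from the eigenvalues of the 2x2 covariance matrix; a minimal sketch (illustrative, not the paper's spectral pipeline) is:

```python
import math

def pca_2d_explained(data):
    """Explained-variance ratio of the first principal component for
    2-D data, via the closed-form eigenvalues of the 2x2 sample
    covariance matrix: lambda = tr/2 +/- sqrt(tr^2/4 - det)."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    l1 = tr / 2 + math.sqrt(tr ** 2 / 4 - det)  # larger eigenvalue
    return l1 / tr
```

Perfectly correlated variables give a ratio of 1.0 (one component captures everything), while uncorrelated, equal-variance variables give 0.5; the paper's 98.956% for PC1 and PC2 is the same quantity computed in the full spectral dimension.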
Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch
2017-06-06
An important aspect of chemoinformatics and material-informatics is the usage of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets from noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptors selection, model development and predictions for test set samples using applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate for the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cells libraries highlighting interesting dependencies of PV properties on MO compositions.
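The core RANSAC loop the abstract above builds on is short: repeatedly fit a model to a random minimal sample, count inliers, and keep the consensus winner. A minimal sketch for a 1-D linear model (illustrative only; the paper applies the idea to QSAR descriptor models, not lines):

```python
import random

def ransac_line(points, n_iter=200, tol=0.5, seed=0):
    """RANSAC for y = a*x + b: fit on a random minimal sample
    (2 points) each iteration and keep the model with the largest
    inlier set, so gross outliers never influence the final fit."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:  # degenerate sample, cannot define a line
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b), inliers
    return best, best_inliers
```

On data lying on y = 2x with two gross outliers mixed in, the consensus model recovers the line and reports only the on-line points as inliers, which is the outlier-removal behaviour the QSAR workflow exploits.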
NASA Technical Reports Server (NTRS)
Evans, Cindy; Todd, Nancy
2014-01-01
The Astromaterials Acquisition & Curation Office at NASA's Johnson Space Center (JSC) is the designated facility for curating all of NASA's extraterrestrial samples. Today, the suite of collections includes the lunar samples from the Apollo missions, cosmic dust particles falling into the Earth's atmosphere, meteorites collected in Antarctica, comet and interstellar dust particles from the Stardust mission, asteroid particles from Japan's Hayabusa mission, solar wind atoms collected during the Genesis mission, and space-exposed hardware from several missions. To support planetary science research on these samples, JSC's Astromaterials Curation Office hosts NASA's Astromaterials Curation digital repository and data access portal [http://curator.jsc.nasa.gov/], providing descriptions of the missions and collections, and critical information about each individual sample. Our office is designing and implementing several informatics initiatives to better serve the planetary research community. First, we are re-hosting the basic database framework by consolidating legacy databases for individual collections and providing a uniform access point for information (descriptions, imagery, classification) on all of our samples. Second, we continue to upgrade and host digital compendia that summarize and highlight published findings on the samples (e.g., lunar samples, meteorites from Mars). We host high resolution imagery of samples as it becomes available, including newly scanned images of historical prints from the Apollo missions. Finally we are creating plans to collect and provide new data, including 3D imagery, point cloud data, micro CT data, and external links to other data sets on selected samples. Together, these individual efforts will provide unprecedented digital access to NASA's Astromaterials, enabling preservation of the samples through more specific and targeted requests, and supporting new planetary science research and collaborations on the samples.
Ramkissoon, Shakti H.; Bi, Wenya Linda; Schumacher, Steven E.; Ramkissoon, Lori A.; Haidar, Sam; Knoff, David; Dubuc, Adrian; Brown, Loreal; Burns, Margot; Cryan, Jane B.; Abedalthagafi, Malak; Kang, Yun Jee; Schultz, Nikolaus; Reardon, David A.; Lee, Eudocia Q.; Rinne, Mikael L.; Norden, Andrew D.; Nayak, Lakshmi; Ruland, Sandra; Doherty, Lisa M.; LaFrankie, Debra C.; Horvath, Margaret; Aizer, Ayal A.; Russo, Andrea; Arvold, Nils D.; Claus, Elizabeth B.; Al-Mefty, Ossama; Johnson, Mark D.; Golby, Alexandra J.; Dunn, Ian F.; Chiocca, E. Antonio; Trippa, Lorenzo; Santagata, Sandro; Folkerth, Rebecca D.; Kantoff, Philip; Rollins, Barrett J.; Lindeman, Neal I.; Wen, Patrick Y.; Ligon, Azra H.; Beroukhim, Rameen; Alexander, Brian M.; Ligon, Keith L.
2015-01-01
Background Multidimensional genotyping of formalin-fixed paraffin-embedded (FFPE) samples has the potential to improve diagnostics and clinical trials for brain tumors, but prospective use in the clinical setting is not yet routine. We report our experience with implementing a multiplexed copy number and mutation-testing program in a diagnostic laboratory certified by the Clinical Laboratory Improvement Amendments. Methods We collected and analyzed clinical testing results from whole-genome array comparative genomic hybridization (OncoCopy) of 420 brain tumors, including 148 glioblastomas. Mass spectrometry–based mutation genotyping (OncoMap, 471 mutations) was performed on 86 glioblastomas. Results OncoCopy was successful in 99% of samples for which sufficient DNA was obtained (n = 415). All clinically relevant loci for glioblastomas were detected, including amplifications (EGFR, PDGFRA, MET) and deletions (EGFRvIII, PTEN, 1p/19q). Glioblastoma patients ≤40 years old had distinct profiles compared with patients >40 years. OncoMap testing reliably identified mutations in IDH1, TP53, and PTEN. Seventy-seven glioblastoma patients enrolled on trials, of whom 51% participated in targeted therapeutic trials where multiplex data informed eligibility or outcomes. Data integration identified patients with complete tumor suppressor inactivation, albeit rarely (5% of patients) due to lack of whole-gene coverage in OncoMap. Conclusions Combined use of multiplexed copy number and mutation detection from FFPE samples in the clinical setting can efficiently replace singleton tests for clinical diagnosis and prognosis in most settings. Our results support incorporation of these assays into clinical trials as integral biomarkers and their potential to impact interpretation of results. Limited tumor suppressor variant capture by targeted genotyping highlights the need for whole-gene sequencing in glioblastoma. PMID:25754088
Effect of the absolute statistic on gene-sampling gene-set analysis methods.
Nam, Dougu
2017-06-01
Gene-set enrichment analysis and its modified versions have commonly been used for identifying altered functions or pathways in disease from microarray data. In particular, the simple gene-sampling gene-set analysis methods have been heavily used for datasets with only a few sample replicates. The biggest problem with this approach is the highly inflated false-positive rate. In this paper, the effect of the absolute gene statistic on gene-sampling gene-set analysis methods is systematically investigated. Thus far, the absolute gene statistic has merely been regarded as a supplementary method for capturing the bidirectional changes in each gene set. Here, it is shown that incorporating the absolute gene statistic in gene-sampling gene-set analysis substantially reduces the false-positive rate and improves the overall discriminatory ability. Its effect was investigated by power, false-positive rate, and receiver operating characteristic curve for a number of simulated and real datasets. The performances of gene-set analysis methods in one-tailed (genome-wide association study) and two-tailed (gene expression data) tests were also compared and discussed.
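The gene-sampling approach with an absolute gene statistic can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation; the synthetic t-statistics, the gene set, and the permutation count are all hypothetical:

```python
import random

def set_score(stats, use_abs):
    """Mean gene statistic of a set, optionally using absolute values."""
    vals = [abs(s) for s in stats] if use_abs else list(stats)
    return sum(vals) / len(vals)

def gene_sampling_pvalue(gene_stats, set_idx, use_abs=True, n_perm=2000, seed=0):
    """Gene-sampling null: repeatedly draw random gene sets of the same size."""
    rng = random.Random(seed)
    observed = set_score([gene_stats[i] for i in set_idx], use_abs)
    k = len(set_idx)
    hits = sum(
        set_score(rng.sample(gene_stats, k), use_abs) >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)

# Hypothetical gene-level t-statistics: a bidirectionally altered set
# (indices 6-9) among otherwise null genes.
genes = [0.1, -0.2, 0.05, -0.1, 0.15, -0.05, 3.0, -2.8, 2.5, -3.2]
bidir_set = [6, 7, 8, 9]

signed = set_score([genes[i] for i in bidir_set], use_abs=False)    # ≈ -0.125
absolute = set_score([genes[i] for i in bidir_set], use_abs=True)   # 2.875
p = gene_sampling_pvalue(genes, bidir_set)
```

The signed set statistic is near zero because the up- and downregulated genes cancel, while the absolute statistic exposes the bidirectional change; significance is then assessed against random gene sets of the same size.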
Mangroves and Sediments - It's not all about mud!
NASA Astrophysics Data System (ADS)
Lokier, Stephen; Paul, Andreas; Fiorini, Flavia
2016-04-01
Mangals occur both naturally and as plantations along the Arabian Gulf coastline of the United Arab Emirates (UAE). Over recent years there has been a significant campaign to extend the area of the mangrove forests, a project that has resulted in significant dredging activity in tandem with the planting of mangrove saplings. The rationale for this operation has been to increase coastal protection from erosion and to partially offset the UAE's carbon footprint. This project, along with significant coastal infrastructure development, has, regrettably, reduced the number of mangal settings that may be considered pristine. With this in mind, we have undertaken an extensive sampling campaign in order to fully characterise the sediments associated with the depositional sub-environments of mangal systems. Satellite imagery and ground-based reconnaissance were employed to identify a natural mangal area to the east of Abu Dhabi Island. Within this area, a transect was established across a naturally occurring mangal channel system. Along-transect sampling stations were selected to reflect the range of environmental conditions, both in terms of energy and the degree of tidal exposure. At each station an array of environmental parameters was monitored, including, but not limited to, temperature, salinity, current velocity and turbidity. The surface sediment at each sample station was regularly sampled and returned to the laboratory, where it was subjected to a range of analyses including grain size and modal analysis, identification of biota and measurement of total organic content. The results of this study allow us to develop a mangal sediment facies map that accurately establishes the relationships between sediments, depositional setting and environmental parameters. These results can be employed to inform the interpretation of ancient successions deposited under similar conditions.
Further, the findings of this study will aid in the development of accurate petroleum reservoir models that are constrained by a quantitative data set. Lastly, a comparison between the environmental and sediment characteristics of natural and artificial mangals will aid our understanding of the effects of these new systems on the sedimentary dynamics of the UAE's coastline.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., including sample collection, transportation, storage, and analysis. (4) Survey plan design requirements. The... contractual agreement between the branded refiner or importer and the person designed to prevent such action... conditions and limitations set forth in this paragraph (e): (1) Independent survey association. To comply...
Code of Federal Regulations, 2010 CFR
2010-07-01
..., including sample collection, transportation, storage, and analysis. (4) Survey plan design requirements. The... contractual agreement between the branded refiner or importer and the person designed to prevent such action... subject to the conditions and limitations set forth in this paragraph (e): (1) Independent survey...
Code of Federal Regulations, 2012 CFR
2012-07-01
..., including sample collection, transportation, storage, and analysis. (4) Survey plan design requirements. The... contractual agreement between the branded refiner or importer and the person designed to prevent such action... conditions and limitations set forth in this paragraph (e): (1) Independent survey association. To comply...
BRSCW Reference Set Application: Joe Buechler - Biosite Inc (2009) — EDRN Public Portal
Over 40 marker assays are available to run on the samples. These include markers such as Osteopontin, Mesothelin, Periostin, Endoglin, intestinal Fatty Acid Binding Protein, and FAS-Ligand, some of which have been previously described in the literature. Other proprietary markers are derived from internal discovery efforts and from collaborator programs.
ERIC Educational Resources Information Center
Bratsch-Hines, Mary E.; Vernon-Feagans, Lynne
2013-01-01
Research Findings: Recent work has demonstrated that the changes young children experience in their child care settings before age 5 may be related to subsequent development, especially social development. Several of these studies have included samples of middle-class children, with almost no emphasis on understanding these processes for…
Functional Grammar in the ESL Classroom: Noticing, Exploring and Practicing
ERIC Educational Resources Information Center
Lock, Graham; Jones, Rodney
2011-01-01
A set of easy to use techniques helps students discover for themselves how grammar works in real world contexts and how grammatical choices are not just about form but about meaning. Sample teaching ideas, covering a wide range of grammatical topics including verb tense, voice, reference and the organization of texts, accompanies each procedure.…
ERIC Educational Resources Information Center
Primich, Tracy
1992-01-01
Discusses computer viruses that attack the Macintosh and describes Symantec AntiVirus for Macintosh (SAM), a commercial program designed to detect and eliminate viruses; sample screen displays are included. SAM, along with two public domain virus protection programs, is recommended for use in library settings. (four references) (MES)
ERIC Educational Resources Information Center
Spalter-Roth, Roberta; And Others
A study used data for the 1987 calendar year from the 1986 and 1987 panels of the Survey of Income and Program Participation (SIPP) to examine the impact of union membership on women's wages and job tenure. The data set included 17,200 sample members, representing about 79 million workers, aged 16-64. The study mapped the distribution of union…
Hope & Achievement Goals as Predictors of Student Behavior & Achievement in a Rural Middle School
ERIC Educational Resources Information Center
Walker, Christopher O.; Winn, Tina D.; Adams, Blakely N.; Shepard, Misty R.; Huddleston, Chelsea D.; Godwin, Kayce L.
2009-01-01
Relations among a set of cognitive-motivational variables were examined with the intent being to assess and clarify the nature of their interconnections within a middle school sample. Student perception of hope, which includes perceptions of agency and pathways, was investigated, along with personal achievement goal orientation, as predictors of…
ERIC Educational Resources Information Center
Machovec, George S., Ed.
1995-01-01
Explains the Common Gateway Interface (CGI) protocol as a set of rules for passing information from a Web server to an external program such as a database search engine. Topics include advantages over traditional client/server solutions, limitations, sample library applications, and sources of information from the Internet. (LRW)
ERIC Educational Resources Information Center
Ross, Scott R.; Benning, Stephen D.; Patrick, Christopher J.; Thompson, Angela; Thurston, Amanda
2009-01-01
Psychopathy is a personality disorder that includes interpersonal-affective and antisocial deviance features. The Psychopathic Personality Inventory (PPI) contains two underlying factors (fearless dominance and impulsive antisociality) that may differentially tap these two sets of features. In a mixed-gender sample of undergraduates and prisoners,…
Assets and Life Satisfaction Patterns among Korean Older Adults: Latent Class Analysis
ERIC Educational Resources Information Center
Han, Chang-Keun; Hong, Song-Iee
2011-01-01
This study aims to examine the association of assets with life satisfaction patterns among Korean older adults aged 50 and above. This study used the first two panel data sets (2005 and 2007) from the Korean Retirement and Income Study, which collected information from a nationally representative sample. Key independent variables include financial…
ERIC Educational Resources Information Center
Dariotis, Jacinda K.; Bumbarger, Brian K.; Duncan, Larissa G.; Greenberg, Mark T.
2008-01-01
Widespread replications of evidence-based prevention programs (EBPPs) prompt prevention scientists to examine program implementation adherence in real world settings. Based on Chen's model (1990), we identified five key factors of the implementation system and assessed which characteristics related to program adherence. The sample included 32…
ERIC Educational Resources Information Center
Platten, Marvin R.; Williams, Larry R.
1979-01-01
The Piers-Harris Children's Self-Concept Scale was administered twice to a sample of elementary school pupils and both sets of data were factor analyzed. Results led the authors to question the factor stability of the instrument. (Items are included). (JKS)
ERIC Educational Resources Information Center
Park, Amanda; Nitzke, Susan; Kritsch, Karen; Kattelmann, Kendra; White, Adrienne; Boeckner, Linda; Lohse, Barbara; Hoerr, Sharon; Greene, Geoffrey; Zhang, Zhumin
2008-01-01
Objective: Evaluate a theory-based, Internet-delivered nutrition education module. Design: Randomized, treatment-control design with pre-post intervention assessments. Setting and Participants: Convenience sample of 160 young adults (aged 18-24) recruited by community educators in 4 states. Study completers (n = 96) included a mix of…
ERIC Educational Resources Information Center
Pas, Elise T.; Johnson, Stacy R.; Larson, Kristine E.; Brandenburg, Linda; Church, Robin; Bradshaw, Catherine P.
2016-01-01
Most approaches aiming to reduce behavior problems among youth with Autism Spectrum Disorder (ASD) focus on individual students; however, school personnel also need professional development to better support students. This study targeted teachers' skill development to promote positive outcomes for students with ASD. The sample included 19 teachers…
The Geochemical Databases GEOROC and GeoReM - What's New?
NASA Astrophysics Data System (ADS)
Sarbas, B.; Jochum, K. P.; Nohl, U.; Weis, U.
2017-12-01
The geochemical databases GEOROC (http://georoc.mpch-mainz.gwdg.de) and GeoReM (http://georem.mpch-mainz.gwdg.de) are maintained by the Max Planck Institute for Chemistry in Mainz, Germany. Both online databases have become crucial tools for geoscientists from different research areas. They are regularly upgraded with new tools and new data from recent publications in a wide range of international journals. GEOROC is a collection of published analyses of volcanic rocks and mantle xenoliths. Recently, data for plutonic rocks have also been added. The analyses include major and trace element concentrations, radiogenic and non-radiogenic isotope ratios as well as analytical ages for whole rocks, glasses, minerals and inclusions. Samples come from eleven geological settings and span the whole geological age scale from Archean to Recent. Metadata include, among others, geographic location, rock class and rock type, geological age, degree of alteration, analytical method, laboratory, and reference. The GEOROC web page allows selection of samples by geological setting, geography, chemical criteria, rock or sample name, and bibliographic criteria. In addition, it provides a large number of precompiled files for individual locations, minerals and rock classes. GeoReM is a database collecting information about reference materials of geological and environmental interest, such as rock powders, synthetic and natural glasses as well as mineral, isotopic, biological, river water and seawater reference materials. It contains published data and compilation values (major and trace element concentrations and mass fractions, radiogenic and stable isotope ratios). Metadata comprise, among others, uncertainty, analytical method and laboratory. Reference materials are important for calibration, method validation, quality control and to establish metrological traceability.
GeoReM offers six different search strategies: samples or materials (published values), samples (GeoReM preferred values), chemical criteria, chemical criteria based on bibliography, bibliography, as well as methods and institutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins-Smith, H.C.; Espey, J.L.; Rouse, A.A.
1991-06-01
This report describes the results of a set of five surveys designed to assess the perceived risks of nuclear waste management policy in Colorado and New Mexico. Within these states, mail surveys of randomly selected samples were taken of members of the American Association for the Advancement of Science, members of the Sierra Club, members of business associations, and state legislators. In addition, a telephone sample of randomly selected households was conducted in Colorado and New Mexico. Using these data, the perceptions of the risk of nuclear waste management -- from production of nuclear energy through permanent storage of nuclear wastes -- are compared for each of the five samples. The degree of trust in, and the perceived political influence of, the more prominent policy actors are assessed. Certain cognitive attributes, including degree of subjective certainty regarding beliefs about risks of nuclear wastes, and likelihood of altering perceived risks when confronted with new information, are compared across samples. In addition, the sample scores from rudimentary knowledge tests about the characteristics of radiation are compared. The relationships among the knowledge scores, cognitive attributes and risk perceptions are evaluated. Perceptions of the balance of media coverage are measured, as are the possible direct and indirect roles of media exposure in risk perception. Aggregate models, testing an array of hypotheses about the bases of nuclear waste risk perceptions, are conducted. These tests indicate that risk perceptions are related to a complex set of factors, and that these factors may differ significantly across the different sub-populations. Finally, the relationships between risk perception and political participation -- including registering to vote, political party affiliation, and level of political activism -- are analyzed. 5 figs., 33 tabs.
Cannon, William F.; Woodruff, Laurel G.
2003-01-01
This data set consists of nine files of geochemical information on various types of surficial deposits in northwestern Wisconsin and immediately adjacent parts of Michigan and Minnesota. The files are presented in two formats: as dbase files in dbaseIV form and Microsoft Excel form. The data present multi-element chemical analyses of soils, stream sediments, and lake sediments. Latitude and longitude values are provided in each file so that the dbf files can be readily imported to GIS applications. Metadata files are provided in outline form, question and answer form and text form. The metadata includes information on procedures for sample collection, sample preparation, and chemical analyses including sensitivity and precision.
Method Development in Forensic Toxicology.
Peters, Frank T; Wissenbach, Dirk K; Busardo, Francesco Paolo; Marchei, Emilia; Pichini, Simona
2017-01-01
In the field of forensic toxicology, the quality of analytical methods is of great importance to ensure the reliability of results and to avoid unjustified legal consequences. A key to high-quality analytical methods is a thorough method development. The presented article will provide an overview of the process of developing methods for forensic applications. This includes the definition of the method's purpose (e.g. qualitative vs quantitative) and the analytes to be included, choosing an appropriate sample matrix, setting up separation and detection systems as well as establishing a versatile sample preparation. Method development is concluded by an optimization process, after which the new method is subject to method validation. Copyright © Bentham Science Publishers.
The limitations of simple gene set enrichment analysis assuming gene independence.
Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P
2016-02-01
Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. 2009 as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored, due to the significant variance inflation they produce in the enrichment scores, and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods. © The Author(s) 2012.
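The variance-inflation point can be made concrete with the textbook formula for the variance of a mean of k equally correlated statistics, Var = σ²(1 + (k − 1)ρ̄)/k. This is a standard identity, not code from the paper, and the numbers below are illustrative:

```python
def var_of_mean(k, sigma2, rho_bar):
    """Variance of the mean of k statistics with common variance sigma2
    and average pairwise correlation rho_bar."""
    return sigma2 * (1 + (k - 1) * rho_bar) / k

k, sigma2 = 50, 1.0
independent = var_of_mean(k, sigma2, 0.0)  # what the one-sample t-test assumes
correlated = var_of_mean(k, sigma2, 0.1)   # a modest average correlation
inflation = correlated / independent
```

For a 50-gene set with an average pairwise correlation of only 0.1, the true variance of the set's mean statistic is 5.9 times what an independence assumption gives, so an uncorrected one-sample t-test badly overstates enrichment significance.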
MAVTgsa: An R Package for Gene Set (Enrichment) Analysis
Chien, Chih-Yi; Chang, Ching-Wei; Tsai, Chen-An; ...
2014-01-01
Gene set analysis methods aim to determine whether an a priori defined set of genes shows a statistically significant difference in expression on either categorical or continuous outcomes. Although many methods for gene set analysis have been proposed, a systematic analysis tool for identification of different types of gene set significance modules has not been developed previously. This work presents an R package, called MAVTgsa, which includes three different methods for integrated gene set enrichment analysis. (1) The one-sided OLS (ordinary least squares) test detects coordinated changes of genes in a gene set in one direction, either up- or downregulation. (2) The two-sided MANOVA (multivariate analysis of variance) detects changes in both directions, for studying two or more experimental conditions. (3) A random-forests-based procedure identifies gene sets that can accurately predict samples from different experimental conditions or are associated with continuous phenotypes. MAVTgsa computes the P values and FDR (false discovery rate) q-values for all gene sets in the study. Furthermore, MAVTgsa provides several visualization outputs to support and interpret the enrichment results. This package is available online.
Ground-Cover Measurements: Assessing Correlation Among Aerial and Ground-Based Methods
NASA Astrophysics Data System (ADS)
Booth, D. Terrance; Cox, Samuel E.; Meikle, Tim; Zuuring, Hans R.
2008-12-01
Wyoming’s Green Mountain Common Allotment is public land providing livestock forage, wildlife habitat, and unfenced solitude, amid other ecological services. It is also the center of ongoing debate over USDI Bureau of Land Management’s (BLM) adjudication of land uses. Monitoring resource use is a BLM responsibility, but conventional monitoring is inadequate for the vast areas encompassed in this and other public-land units. New monitoring methods are needed that will reduce monitoring costs. An understanding of data-set relationships among old and new methods is also needed. This study compared two conventional methods with two remote sensing methods using images captured from two meters and 100 meters above ground level from a camera stand (a ground, image-based method) and a light airplane (an aerial, image-based method). Image analysis used SamplePoint or VegMeasure software. Aerial methods allowed for increased sampling intensity at low cost relative to the time and travel required by ground methods. Costs to acquire the aerial imagery and measure ground cover on 162 aerial samples representing 9000 ha were less than $3000. The four highest correlations among data sets for bare ground—the ground-cover characteristic yielding the highest correlations (r)—ranged from 0.76 to 0.85 and included ground with ground, ground with aerial, and aerial with aerial data-set associations. We conclude that our aerial surveys are a cost-effective monitoring method, that ground with aerial data-set correlations can be equal to, or greater than those among ground-based data sets, and that bare ground should continue to be investigated and tested for use as a key indicator of rangeland health.
In silico pathway analysis in cervical carcinoma reveals potential new targets for treatment
van Dam, Peter A.; van Dam, Pieter-Jan H. H.; Rolfo, Christian; Giallombardo, Marco; van Berckelaer, Christophe; Trinh, Xuan Bich; Altintas, Sevilay; Huizing, Manon; Papadimitriou, Kostas; Tjalma, Wiebren A. A.; van Laere, Steven
2016-01-01
An in silico pathway analysis was performed in order to improve current knowledge on the molecular drivers of cervical cancer and detect potential targets for treatment. Three publicly available Affymetrix gene expression data-sets (GSE5787, GSE7803, GSE9750) were retrieved, comprising a total of 9 cervical cancer cell lines (CCCLs), 39 normal cervical samples, 7 CIN3 samples and 111 cervical cancer samples (CCSs). Prediction analysis of microarrays was performed in the Affymetrix sets to identify cervical cancer biomarkers. To select cancer cell-specific genes the CCSs were compared to the CCCLs. Validated genes were submitted to a gene set enrichment analysis (GSEA) and Expression2Kinases (E2K). In the CCSs a total of 1,547 probe sets were identified that were overexpressed (FDR < 0.1). Comparing to CCCLs 560 probe sets (481 unique genes) had a cancer cell-specific expression profile, and 315 of these genes (65%) were validated. GSEA identified 5 cancer hallmarks enriched in CCSs (P < 0.01 and FDR < 0.25) showing that deregulation of the cell cycle is a major component of cervical cancer biology. E2K identified a protein-protein interaction (PPI) network of 162 nodes (including 20 drugable kinases) and 1626 edges. This PPI-network consists of 5 signaling modules associated with MYC signaling (Module 1), cell cycle deregulation (Module 2), TGFβ-signaling (Module 3), MAPK signaling (Module 4) and chromatin modeling (Module 5). Potential targets for treatment which could be identified were CDK1, CDK2, ABL1, ATM, AKT1, MAPK1, MAPK3 among others. The present study identified important driver pathways in cervical carcinogenesis which should be assessed for their potential therapeutic drugability. PMID:26701206
NASA Astrophysics Data System (ADS)
Richards, Joseph W.; Starr, Dan L.; Brink, Henrik; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; James, J. Berian; Long, James P.; Rice, John
2012-01-01
Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.
Ebrahimi-Najafabadi, Heshmatollah; Leardi, Riccardo; Oliveri, Paolo; Casolino, Maria Chiara; Jalali-Heravi, Mehdi; Lanteri, Silvia
2012-09-15
The current study presents an application of near infrared spectroscopy for identification and quantification of the fraudulent addition of barley in roasted and ground coffee samples. Nine different types of coffee including pure Arabica, Robusta and mixtures of them at different roasting degrees were blended with four types of barley. The blending degrees were between 2 and 20 wt% of barley. D-optimal design was applied to select 100 and 30 experiments to be used as calibration and test set, respectively. Partial least squares regression (PLS) was employed to build the models aimed at predicting the amounts of barley in coffee samples. In order to obtain simplified models, taking into account only informative regions of the spectral profiles, a genetic algorithm (GA) was applied. A completely independent external set was also used to test the model performances. The models showed excellent predictive ability with root mean square errors (RMSE) for the test and external set equal to 1.4% w/w and 0.8% w/w, respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
Bradshaw, Richard T; Essex, Jonathan W
2016-08-09
Hydration free energy (HFE) calculations are often used to assess the performance of biomolecular force fields and the quality of assigned parameters. The AMOEBA polarizable force field moves beyond traditional pairwise additive models of electrostatics and may be expected to improve upon predictions of thermodynamic quantities such as HFEs over and above fixed-point-charge models. The recent SAMPL4 challenge evaluated the AMOEBA polarizable force field in this regard but showed substantially worse results than those using the fixed-point-charge GAFF model. Starting with a set of automatically generated AMOEBA parameters for the SAMPL4 data set, we evaluate the cumulative effects of a series of incremental improvements in parametrization protocol, including both solute and solvent model changes. Ultimately, the optimized AMOEBA parameters give a set of results that are not statistically significantly different from those of GAFF in terms of signed and unsigned error metrics. This allows us to propose a number of guidelines for new molecule parameter derivation with AMOEBA, which we expect to have benefits for a range of biomolecular simulation applications such as protein-ligand binding studies.
Metric Sex Determination of the Human Coxal Bone on a Virtual Sample using Decision Trees.
Savall, Frédéric; Faruch-Bilfeld, Marie; Dedouit, Fabrice; Sans, Nicolas; Rousseau, Hervé; Rougé, Daniel; Telmon, Norbert
2015-11-01
Decision trees provide an alternative to multivariate discriminant analysis, which is still the most commonly used in anthropometric studies. Our study analyzed the metric characterization of a recent virtual sample of 113 coxal bones using decision trees for sex determination. From 17 osteometric type I landmarks, a dataset was built with five classic distances traditionally reported in the literature and six new distances selected using the two-step ratio method. A ten-fold cross-validation was performed, and a decision tree was established on two subsamples (training and test sets). The decision tree established on the training set included three nodes and its application to the test set correctly classified 92% of individuals. This percentage was similar to the data of the literature. The usefulness of decision trees has been demonstrated in numerous fields. They have been already used in sex determination, body mass prediction, and ancestry estimation. This study shows another use of decision trees enabling simple and accurate sex determination. © 2015 American Academy of Forensic Sciences.
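As a toy illustration of how a single decision-tree node classifies sex from one osteometric distance, consider a one-node stump in Python. The threshold, distances, and labels below are invented for illustration and are not the study's fitted values:

```python
def stump_classify(distance, threshold):
    """One decision-tree node: predict sex from a single coxal distance.
    Threshold and labels are hypothetical, not the study's fitted tree."""
    return "female" if distance > threshold else "male"

def accuracy(samples, threshold):
    """Fraction of (distance, true_sex) pairs classified correctly."""
    correct = sum(stump_classify(d, threshold) == sex for d, sex in samples)
    return correct / len(samples)

# Hypothetical held-out test set: (distance in mm, true sex)
test_set = [(52.1, "female"), (49.8, "male"), (53.4, "female"),
            (47.2, "male"), (50.6, "female"), (48.9, "male")]

acc = accuracy(test_set, threshold=50.0)
```

A fitted tree like the study's three-node one simply chains such threshold tests, and the reported 92% figure corresponds to evaluating the trained tree on a held-out test set exactly as `accuracy` does here.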
Dataset from chemical gas sensor array in turbulent wind tunnel.
Fonollosa, Jordi; Rodríguez-Luján, Irene; Trincavelli, Marco; Huerta, Ramón
2015-06-01
The dataset includes the acquired time series of a chemical detection platform exposed to different gas conditions in a turbulent wind tunnel. The chemo-sensory elements were sampling directly the environment. In contrast to traditional approaches that include measurement chambers, open sampling systems are sensitive to dispersion mechanisms of gaseous chemical analytes, namely diffusion, turbulence, and advection, making the identification and monitoring of chemical substances more challenging. The sensing platform included 72 metal-oxide gas sensors that were positioned at 6 different locations of the wind tunnel. At each location, 10 distinct chemical gases were released in the wind tunnel, the sensors were evaluated at 5 different operating temperatures, and 3 different wind speeds were generated in the wind tunnel to induce different levels of turbulence. Moreover, each configuration was repeated 20 times, yielding a dataset of 18,000 measurements. The dataset was collected over a period of 16 months. The data is related to "On the performance of gas sensor arrays in open sampling systems using Inhibitory Support Vector Machines", by Vergara et al.[1]. The dataset can be accessed publicly at the UCI repository upon citation of [1]: http://archive.ics.uci.edu/ml/datasets/Gas+sensor+arrays+in+open+sampling+settings.
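The reported dataset size follows directly from the full factorial design described above; a quick Python check using the factor levels from the text:

```python
# Factor levels as described in the dataset summary.
locations, gases, temperatures, wind_speeds, repetitions = 6, 10, 5, 3, 20
total_measurements = locations * gases * temperatures * wind_speeds * repetitions
print(total_measurements)  # 18000, matching the reported dataset size
```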
The Neuropsychology of Starvation: Set-Shifting and Central Coherence in a Fasted Nonclinical Sample
Pender, Sarah; Gilbert, Sam J.; Serpell, Lucy
2014-01-01
Objectives: Recent research suggests certain neuropsychological deficits occur in anorexia nervosa (AN). The role of starvation in these deficits remains unclear. Studies of individuals without AN can elucidate our understanding of the effect of short-term starvation on neuropsychological performance. Methods: Using a within-subjects repeated measures design, 60 healthy female participants were tested once after fasting for 18 hours, and once when satiated. Measures included two tasks to measure central coherence and a set-shifting task. Results: Fasting exacerbated set-shifting difficulties on a rule-change task. Fasting was associated with stronger local and impaired global processing, indicating weaker central coherence. Conclusions: Models of AN that propose a central role for set-shifting difficulties or weak central coherence should also consider the impact of short-term fasting on these processes. PMID:25338075
On the Asymptotic Relative Efficiency of Planned Missingness Designs.
Rhemtulla, Mijke; Savalei, Victoria; Little, Todd D
2016-03-01
In planned missingness (PM) designs, certain data are set a priori to be missing. PM designs can increase validity and reduce cost; however, little is known about the loss of efficiency that accompanies these designs. The present paper compares PM designs to reduced-sample (RN) designs that have the same total number of data points concentrated in fewer participants. In 4 studies, we consider models for both observed and latent variables, designs that do or do not include an "X set" of variables with complete data, and a full range of between- and within-set correlation values. All results are obtained using asymptotic relative efficiency formulas, and thus no data are generated; this novel approach allows us to examine whether PM designs have theoretical advantages over RN designs while removing the impact of sampling error. Our primary findings are that (a) in manifest variable regression models, estimates of regression coefficients have much lower relative efficiency in PM designs as compared to RN designs, (b) relative efficiency of factor correlation or latent regression coefficient estimates is maximized when the indicators of each latent variable come from different sets, and (c) the addition of an X set improves efficiency in manifest variable regression models only for the parameters that directly involve the X-set variables, but it substantially improves efficiency of most parameters in latent variable models. We conclude that PM designs can be beneficial when the model of interest is a latent variable model; recommendations are made for how to optimize such a design.
WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING
Saegusa, Takumi; Wellner, Jon A.
2013-01-01
We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools is developed, including a Glivenko–Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where a nuisance parameter is estimable either at regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559
The Outer Solar System Origin Survey full data release orbit catalog and characterization.
NASA Astrophysics Data System (ADS)
Kavelaars, J. J.; Bannister, Michele T.; Gladman, Brett; Petit, Jean-Marc; Gwyn, Stephen; Alexandersen, Mike; Chen, Ying-Tung; Volk, Kathryn; OSSOS Collaboration.
2017-10-01
The Outer Solar System Origin Survey (OSSOS) completed main data acquisition in February 2017. Here we report the release of our full orbit sample, which includes 836 TNOs with high-precision orbit determination and classification. We combine the OSSOS orbit sample with the previously released Canada-France Ecliptic Plane Survey (CFEPS) and a precursor survey to OSSOS by Alexandersen et al. to provide a sample of over 1100 TNOs with high-precision, classified orbits and precisely determined discovery and tracking circumstances (characterization). We are releasing the full sample and characterization to the world community, along with software for conducting 'Survey Simulations', so that this sample of orbits can be used to test models of the formation of our outer solar system against the observed sample. Here I will present the characteristics of the data set and present a parametric model for the structure of the classical Kuiper belt.
Martins, Angélica Rocha; Talhavini, Márcio; Vieira, Maurício Leite; Zacca, Jorge Jardim; Braga, Jez Willian Batista
2017-08-15
The discrimination of whisky brands and counterfeit identification were performed by UV-Vis spectroscopy combined with partial least squares for discriminant analysis (PLS-DA). In the proposed method, all spectra were obtained with no sample preparation. The discrimination models were built using seven whisky brands: Red Label, Black Label, White Horse, Chivas Regal (12 years), Ballantine's Finest, Old Parr and Natu Nobilis. The method was validated with an independent test set of authentic samples belonging to the seven selected brands and another eleven brands not included in the training samples. Furthermore, seventy-three counterfeit samples were also used to validate the method. Results showed correct classification rates for genuine and false samples over 98.6% and 93.1%, respectively, indicating that the method can be helpful for the forensic analysis of whisky samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Arita, Minetaro; Ling, Hua; Yan, Dongmei; Nishimura, Yorihiro; Yoshida, Hiromu; Wakita, Takaji; Shimizu, Hiroyuki
2009-12-16
In the global eradication program for poliomyelitis, laboratory diagnosis plays a critical role by isolating poliovirus (PV) from the stool samples of acute flaccid paralysis (AFP) cases. In this study, we developed a reverse transcription-loop-mediated isothermal amplification (RT-LAMP) system for rapid and highly sensitive detection of enteroviruses, including PV, in stool samples. A primer set was designed for RT-LAMP to detect enteroviruses, preferentially those with PV-like 5'NTRs of the viral genome. The sensitivity of the RT-LAMP system was evaluated with prototype enterovirus strains, and detection of enterovirus from stool extracts was examined. We detected at least 400 copies of the viral genomes of PV(Sabin) strains within 90 min by RT-LAMP with the primer set. This RT-LAMP system showed a preference for Human enterovirus species C (HEV-C) strains, including PV, but exhibited less sensitivity to the prototype strains of HEV-A and HEV-B (detection limits of 7,400 to 28,000 copies). Stool extracts from which PV, HEV-C, or HEV-A was isolated in the cell culture system were mostly positive by the RT-LAMP method (positive rates of 15/16 (= 94%), 13/14 (= 93%), and 4/4 (= 100%), respectively). The positive rate of this RT-LAMP system for stool extracts from which HEV-B was isolated was lower than that for HEV-C (positive rate of 11/21 (= 52%)). Among the stool samples that were negative for enterovirus isolation by the cell culture system, two were positive by RT-LAMP (positive rate of 2/38 (= 5.3%)); in these samples, enterovirus 96 was identified by sequence analysis using a seminested PCR system. The RT-LAMP system developed in this study showed a sensitivity comparable to that of the cell culture system for the detection of PV, HEV-A, and HEV-C, but less sensitivity to HEV-B. This RT-LAMP system would be useful for the direct detection of enteroviruses from stool extracts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nee, K.; Bryan, S.; Levitskaia, T.
The reliability of chemical processes can be greatly improved by implementing inline monitoring systems. Combining multivariate analysis with non-destructive sensors can enhance the process without interfering with the operation. Here, we present hierarchical models using both principal component analysis and partial least squares analysis developed for different chemical components representative of solvent extraction process streams. A training set of 380 samples and an external validation set of 95 samples were prepared, and near-infrared (NIR) and Raman spectral data, as well as conductivity under variable temperature conditions, were collected. The results from the models indicate that careful selection of the spectral range is important. By compressing the data through Principal Component Analysis (PCA), we lower the rank of the data set to its most dominant features while maintaining the key principal components to be used in the regression analysis. Within the studied data set, concentrations of five chemical components were modeled: total nitrate (NO3−), total acid (H+), neodymium (Nd3+), sodium (Na+), and ionic strength (I.S.). The best overall model prediction for each of the species studied used a combined data set comprising complementary techniques including NIR, Raman, and conductivity. Finally, our study shows that chemometric models are powerful but require a significant amount of carefully analyzed data to capture variations in the chemistry.
Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search
NASA Astrophysics Data System (ADS)
Nakamura, Katsuhiko; Hoshina, Akemi
This paper discusses recent improvements and extensions to the Synapse system for inductive inference of context-free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm in the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for the positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. The synthesis of grammars by the serial search is faster than the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that found by the minimum set search, and the system can find no appropriate grammar for some CFLs by the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and search strategies.
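Synapse's rule generation starts from bottom-up parsing of positive samples. As a self-contained illustration of the parsing side only (not Synapse's actual code), here is a minimal CYK membership test for a grammar in Chomsky Normal Form; the toy grammar and the dictionary-based grammar encoding are assumptions for the example:

```python
# Minimal CYK membership test for a CNF grammar.
# unary maps a terminal to the nonterminals that derive it;
# binary maps a pair (B, C) to the nonterminals A with rule A -> B C.
def cyk(word, start, unary, binary):
    n = len(word)
    # table[i][span] = set of nonterminals deriving word[i:i+span]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][1] = set(unary.get(ch, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for k in range(1, span):          # split point inside the span
                for B in table[i][k]:
                    for C in table[i + k][span - k]:
                        table[i][span] |= binary.get((B, C), set())
    return start in table[0][n]

# Toy grammar: S -> A B | S S, A -> 'a', B -> 'b'  (strings (ab)^n)
unary = {'a': {'A'}, 'b': {'B'}}
binary = {('A', 'B'): {'S'}, ('S', 'S'): {'S'}}
print(cyk('abab', 'S', unary, binary))   # True
print(cyk('aab', 'S', unary, binary))    # False
```

A bottom-up learner in the spirit of Synapse would inspect the gaps in such a table for a positive string and synthesize the rules needed to "bridge" them.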
Nee, K.; Bryan, S.; Levitskaia, T.; ...
2017-12-28
The reliability of chemical processes can be greatly improved by implementing inline monitoring systems. Combining multivariate analysis with non-destructive sensors can enhance the process without interfering with the operation. Here, we present hierarchical models using both principal component analysis and partial least squares analysis developed for different chemical components representative of solvent extraction process streams. A training set of 380 samples and an external validation set of 95 samples were prepared, and near-infrared (NIR) and Raman spectral data, as well as conductivity under variable temperature conditions, were collected. The results from the models indicate that careful selection of the spectral range is important. By compressing the data through Principal Component Analysis (PCA), we lower the rank of the data set to its most dominant features while maintaining the key principal components to be used in the regression analysis. Within the studied data set, concentrations of five chemical components were modeled: total nitrate (NO3−), total acid (H+), neodymium (Nd3+), sodium (Na+), and ionic strength (I.S.). The best overall model prediction for each of the species studied used a combined data set comprising complementary techniques including NIR, Raman, and conductivity. Finally, our study shows that chemometric models are powerful but require a significant amount of carefully analyzed data to capture variations in the chemistry.
Analysis of Duplicated Multiple-Samples Rank Data Using the Mack-Skillings Test.
Carabante, Kennet Mariano; Alonso-Marenco, Jose Ramon; Chokumnoyporn, Napapan; Sriwattana, Sujinda; Prinyawiwatkul, Witoon
2016-07-01
Appropriate analysis for duplicated multiple-samples rank data is needed. This study compared analysis of duplicated rank preference data using the Friedman versus Mack-Skillings tests. Panelists (n = 125) ranked 2 orange juice sets twice: a different-samples set (100%, 70%, vs. 40% juice) and a similar-samples set (100%, 95%, vs. 90%). These 2 sample sets were designed to give contrasting differences in preference. For each sample set, rank sum data were obtained from (1) averaged rank data of each panelist from the 2 replications (n = 125), (2) rank data of all panelists from each of the 2 separate replications (n = 125 each), (3) joined rank data of all panelists from the 2 replications (n = 125), and (4) rank data of all panelists pooled from the 2 replications (n = 250); rank data (1), (2), and (4) were separately analyzed by the Friedman test, while those from (3) were analyzed by the Mack-Skillings test. The effect of sample size (n = 10 to 125) was evaluated. For the similar-samples set, higher variations in rank data from the 2 replications were observed; therefore, results of the main effects were more inconsistent among methods and sample sizes. Regardless of analysis method, the larger the sample size, the higher the χ2 value and the lower the P-value (testing H0: all samples are not different). Analyzing rank data (2) separately by replication yielded inconsistent conclusions across sample sizes; hence this method is not recommended. The Mack-Skillings test was more sensitive than the Friedman test. Furthermore, it takes into account within-panelist variations and is more appropriate for analyzing duplicated rank data. © 2016 Institute of Food Technologists®
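A minimal sketch of the Friedman side of the comparison, on synthetic rank data: scipy ships `friedmanchisquare`, while the Mack-Skillings test is not in scipy and would need a separate implementation. The panel size and preference pattern below are invented for illustration:

```python
# Sketch: Friedman test on panelists' ranks of 3 juice samples.
# Synthetic data: most panelists agree on one ordering, a few rank at random.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
base = np.array([1, 2, 3])                 # assumed preferred ordering
ranks = np.array([rng.permutation(base) if rng.random() < 0.2 else base
                  for _ in range(20)])     # 20 panelists x 3 samples

# One argument per sample (column): tests H0 of no difference among samples
stat, p = friedmanchisquare(ranks[:, 0], ranks[:, 1], ranks[:, 2])
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```

For duplicated rankings, the abstract's point is that joining both replications into one blocked analysis (Mack-Skillings) uses the within-panelist variation rather than averaging or pooling it away.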
SKATE: a docking program that decouples systematic sampling from scoring.
Feng, Jianwen A; Marshall, Garland R
2010-11-15
SKATE is a docking prototype that decouples systematic sampling from scoring. This novel approach removes any interdependence between sampling and scoring functions to achieve better sampling and, thus, improves docking accuracy. SKATE systematically samples a ligand's conformational, rotational and translational degrees of freedom, as constrained by a receptor pocket, to find sterically allowed poses. Efficient systematic sampling is achieved by pruning the combinatorial tree using aggregate assembly, discriminant analysis, adaptive sampling, radial sampling, and clustering. Because systematic sampling is decoupled from scoring, the poses generated by SKATE can be ranked by any published, or in-house, scoring function. To test the performance of SKATE, ligands from the Astex/CCDC set, the Surflex set, and the Vertex set, a total of 266 complexes, were redocked to their respective receptors. The results show that SKATE was able to sample poses within 2 Å RMSD of the native structure for 98, 95, and 98% of the cases in the Astex/CCDC, Surflex, and Vertex sets, respectively. Cross-docking accuracy of SKATE was also assessed by docking 10 ligands to thymidine kinase and 73 ligands to cyclin-dependent kinase. 2010 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Harvey, A. S.; Fotopoulos, G.; Hall, B.; Amolins, K.
2017-06-01
Geological observations can be made on multiple scales, including micro- (e.g. thin section), meso- (e.g. hand-sized to outcrop) and macro- (e.g. outcrop and larger) scales. Types of meso-scale samples include, but are not limited to, rocks (including drill cores), minerals, and fossils. The spatial relationship among samples paired with physical (e.g. granulometric composition, density, roughness) and chemical (e.g. mineralogical and isotopic composition) properties can aid in interpreting geological settings, such as paleo-environmental and formational conditions as well as geomorphological history. Field samples are collected along traverses in the area of interest based on characteristic representativeness of a region, predetermined rate of sampling, and/or uniqueness. The location of a sample can provide relative context in seeking out additional key samples. Beyond labelling and recording of geospatial coordinates for samples, further analysis of physical and chemical properties may be conducted in the field and laboratory. The main motivation for this paper is to present a workflow for the digital preservation of samples (via 3D laser scanning) paired with the development of cyber infrastructure, which offers geoscientists and engineers the opportunity to access an increasingly diverse worldwide collection of digital Earth materials. This paper describes a Web-based graphical user interface developed using Web AppBuilder for ArcGIS for digitized meso-scale 3D scans of geological samples to be viewed alongside the macro-scale environment. Over 100 samples of virtual rocks, minerals and fossils populate the developed geological database and are linked explicitly with their associated attributes, characteristic properties, and location. Applications of this new Web-based geological visualization paradigm in the geosciences demonstrate the utility of such a tool in an age of increasing global data sharing.
User's guide to resin infusion simulation program in the FORTRAN language
NASA Technical Reports Server (NTRS)
Weideman, Mark H.; Hammond, Vince H.; Loos, Alfred C.
1992-01-01
RTMCL is a user-friendly computer code that simulates the manufacture of fabric composites by the resin infusion process. The computer code is based on the process simulation model described in reference 1. Included in the user's guide is a detailed step-by-step description of how to run the program and enter and modify the input data set. Sample input and output files are included along with an explanation of the results. Finally, a complete listing of the program is provided.
Review of the SAFARI 2000 RC-10 Aerial Photography
NASA Technical Reports Server (NTRS)
Myers, Jeff; Shelton, Gary; Annegarn, Harrold; Peterson, David L. (Technical Monitor)
2001-01-01
This presentation will review the aerial photography collected by the NASA ER-2 aircraft during the SAFARI (Southern African Regional Science Initiative) year 2000 campaign. It will include specifications on the camera and film, and will show examples of the imagery. It will also detail the extent of coverage, and the procedures to obtain film products from the South African government. Also included will be some sample applications of aerial photography for various environmental applications, and its use in augmenting other SAFARI data sets.
Benschop, Corina C G; Quaak, Frederike C A; Boon, Mathilde E; Sijen, Titia; Kuiper, Irene
2012-03-01
Forensic analysis of biological traces generally encompasses the investigation of both the person who contributed to the trace and the body site(s) from which the trace originates. For instance, for sexual assault cases, it can be beneficial to distinguish vaginal samples from skin or saliva samples. In this study, we explored the use of microbial flora to indicate vaginal origin. First, we explored the vaginal microbiome for a large set of clinical vaginal samples (n = 240) by next generation sequencing (n = 338,184 sequence reads) and found 1,619 different sequences. Next, we selected 389 candidate probes targeting genera or species and designed a microarray, with which we analysed a diverse set of samples; 43 DNA extracts from vaginal samples and 25 DNA extracts from samples from other body sites, including sites in close proximity of or in contact with the vagina. Finally, we used the microarray results and next generation sequencing dataset to assess the potential for a future approach that uses microbial markers to indicate vaginal origin. Since no candidate genera/species were found to positively identify all vaginal DNA extracts on their own, while excluding all non-vaginal DNA extracts, we deduce that a reliable statement about the cellular origin of a biological trace should be based on the detection of multiple species within various genera. Microarray analysis of a sample will then render a microbial flora pattern that is probably best analysed in a probabilistic approach.
A catalog of porosity and permeability from core plugs in siliciclastic rocks
Nelson, Philip H.; Kibler, Joyce E.
2003-01-01
Porosity and permeability measurements on cored samples from siliciclastic formations are presented for 70 data sets, taken from published data and descriptions. Data sets generally represent specific formations, usually from a limited number of wells. Each data set is represented by a written summary, a plot of permeability versus porosity, and a digital file of the data. The summaries include a publication reference, the geologic age of the formation, location, well names, depth range, various geologic descriptions, and core measurement conditions. Attributes such as grain size or depositional environment are identified by symbols on the plots. An index lists the authors and date, geologic age, formation name, sandstone classification, location, basin or structural province, and field name.
NASA Technical Reports Server (NTRS)
Switzer, George F.
2008-01-01
This document contains a general description of data sets of a wake vortex system in a turbulent environment. The turbulence and thermal stratification of the environment are representative of the conditions on November 12, 2001 near John F. Kennedy International Airport. The simulation assumes no ambient winds. The full three-dimensional simulation of the wake vortex system from a Boeing 747 predicts vortex circulation levels at 80% of their initial value at the time of the proposed vortex encounter. The linked vortex oval orientation showed no twisting, and the oval elevations at the widest point were about 20 meters higher than where the vortex pair joined. Fred Proctor of NASA's Langley Research Center presented the results from this work at the NTSB public hearing that started 29 October 2002. This document contains a description of each data set including: variables, coordinate system, data format, and sample plots. Also included are instructions on how to read the data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bautista, Julian E.; Busca, Nicolas G.; Bailey, Stephen
We describe mock data-sets generated to simulate the high-redshift quasar sample in Data Release 11 (DR11) of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS). The mock spectra contain Lyα forest correlations useful for studying the 3D correlation function including Baryon Acoustic Oscillations (BAO). They also include astrophysical effects such as quasar continuum diversity and high-density absorbers, instrumental effects such as noise and spectral resolution, as well as imperfections introduced by the SDSS pipeline treatment of the raw data. The Lyα forest BAO analysis of the BOSS collaboration, described in Delubac et al. 2014, has used these mock data-sets to develop and cross-check analysis procedures prior to performing the BAO analysis on real data, and for continued systematic cross-checks. Tests presented here show that the simulations reproduce sufficiently well important characteristics of real spectra. These mock data-sets will be made available together with the data at the time of Data Release 11.
Parodi, Stefano; Manneschi, Chiara; Verda, Damiano; Ferrari, Enrico; Muselli, Marco
2018-03-01
This study evaluates the performance of a set of machine learning techniques in predicting the prognosis of Hodgkin's lymphoma using clinical factors and gene expression data. Analysed samples from 130 Hodgkin's lymphoma patients included a small set of clinical variables and more than 54,000 gene features. Machine learning classifiers included three black-box algorithms (k-nearest neighbour, Artificial Neural Network, and Support Vector Machine) and two methods based on intelligible rules (Decision Tree and the innovative Logic Learning Machine method). Support Vector Machine clearly outperformed any of the other methods. Of the two rule-based algorithms, Logic Learning Machine performed better and identified a set of simple intelligible rules based on a combination of clinical variables and gene expressions. Decision Tree identified a non-coding gene (XIST) involved in the early phases of X chromosome inactivation that was overexpressed in females and in non-relapsed patients. XIST expression might be responsible for the better prognosis of female Hodgkin's lymphoma patients.
Colon Reference Set Application: Mary Disis - University of Washington (2008) — EDRN Public Portal
The proposed study aims to validate the diagnostic value of a panel of serum antibodies for the early detection of colorectal cancer (CRC). We have developed a serum antibody based assay that shows promise in discriminating sera from CRC patients from healthy donors. We have evaluated two separate sample sets of sera that were available either commercially or were comprised of left over samples from previous studies by our group. Both sample sets showed concordance in discriminatory power. We have not been able to identify investigators with a larger, well defined sample set of early stage colon cancer sera and request assistance from the EDRN in obtaining such samples to help assess the potential diagnostic value of our autoantibody panel.
Brandstätter, Christian; Laner, David; Prantl, Roman; Fellner, Johann
2014-12-01
Municipal solid waste landfills pose a threat to the environment and human health, especially old landfills that lack facilities for the collection and treatment of landfill gas and leachate. Consequently, missing information about emission flows prevents site-specific environmental risk assessments. To overcome this gap, combining waste sampling and analysis with statistical modeling is one option for estimating present and future emission potentials. Optimizing the tradeoff between investigation costs and reliable results requires knowledge of both the number of samples to be taken and the variables to be analyzed. This article aims to identify the optimized number of waste samples and variables in order to predict a larger set of variables. Therefore, we introduce a multivariate linear regression model and tested its applicability in two case studies. Landfill A was used to set up and calibrate the model based on 50 waste samples and twelve variables. The calibrated model was applied to Landfill B, including 36 waste samples and the same twelve variables, of which four served as predictors. The case study results are twofold: first, reliable and accurate prediction of the twelve variables can be achieved with knowledge of four predictor variables (Loi, EC, pH and Cl). Second, for Landfill B, only ten full measurements would be needed for a reliable prediction of most response variables. The four predictor variables exhibit comparably low analytical costs relative to the full set of measurements. This cost reduction could be used to increase the number of samples, yielding an improved understanding of the spatial waste heterogeneity in landfills. In conclusion, future application of the developed model could improve the reliability of predicted emission potentials, and the model could become a standard screening tool for old landfills if its applicability and reliability are tested in additional case studies. Copyright © 2014 Elsevier Ltd. All rights reserved.
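The calibrate-then-apply scheme (fit on Landfill A, predict Landfill B) can be sketched with ordinary least squares on synthetic data. The dimensions mirror the abstract (50 calibration samples, four predictors, 36 new samples), but the data and the number of response variables are invented:

```python
# Sketch: multivariate linear regression calibrated on one landfill's
# samples and applied to another. All data are synthetic stand-ins; the
# four predictors stand in for the cheap variables (LOI, EC, pH, Cl).
import numpy as np

rng = np.random.default_rng(3)
n_cal, n_pred, n_resp = 50, 4, 8                  # 50 calibration samples
X = rng.normal(size=(n_cal, n_pred))
B_true = rng.normal(size=(n_pred + 1, n_resp))    # true coefficients (toy)
X1 = np.column_stack([np.ones(n_cal), X])         # add intercept column
Y = X1 @ B_true + rng.normal(scale=0.05, size=(n_cal, n_resp))

B, *_ = np.linalg.lstsq(X1, Y, rcond=None)        # one fit, many responses

# Apply the calibrated model to 36 new samples from a second "landfill"
X_new = rng.normal(size=(36, n_pred))
Y_hat = np.column_stack([np.ones(36), X_new]) @ B
print(Y_hat.shape)                                # (36, 8)
```

The same matrix `B` predicts all response variables at once, which is why a handful of cheap predictor measurements can substitute for the full analytical suite once the model is calibrated.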
A test for patterns of modularity in sequences of developmental events.
Poe, Steven
2004-08-01
This study presents a statistical test for modularity in the context of relative timing of developmental events. The test assesses whether sets of developmental events show special phylogenetic conservation of rank order. The test statistic is the correlation coefficient of developmental ranks of the N events of the hypothesized module across taxa. The null distribution is obtained by taking correlation coefficients for randomly sampled sets of N events. This test was applied to two datasets, including one where phylogenetic information was taken into account. The events of limb development in two frog species were found to behave as a module.
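A minimal sketch of such a randomization test, on synthetic data: the observed statistic is the mean between-taxon correlation of the hypothesized module's event ranks, and the null distribution comes from randomly sampled event sets of the same size. The exact form of the test statistic in the paper may differ; this is an illustration under that assumption:

```python
# Sketch: permutation test for modularity in developmental-event ranks.
# Data are synthetic: each taxon's row is a random ordering of 12 events.
import numpy as np

rng = np.random.default_rng(4)
n_taxa, n_events, N = 6, 12, 4
ranks = np.argsort(rng.normal(size=(n_taxa, n_events)), axis=1).astype(float)
module = np.array([0, 1, 2, 3])              # hypothesized module (toy choice)

def module_stat(idx):
    # mean pairwise correlation between taxa of the chosen events' ranks
    sub = ranks[:, idx]
    c = np.corrcoef(sub)
    return c[np.triu_indices(n_taxa, k=1)].mean()

obs = module_stat(module)
# Null: same statistic for randomly sampled sets of N events
null = np.array([module_stat(rng.choice(n_events, N, replace=False))
                 for _ in range(999)])
p = (1 + np.sum(null >= obs)) / (1 + len(null))   # one-sided p-value
print(f"observed = {obs:.3f}, p = {p:.3f}")
```

A genuinely conserved module would show an observed correlation in the upper tail of the null; on this random data no such signal is expected.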
NASA Technical Reports Server (NTRS)
Edwards, C. D.; Adams, J. T.; Agre, J. R.; Bell, D. J.; Clare, L. P.; Durning, J. F.; Ely, T. A.; Hemmati, H.; Leung, R. Y.; McGraw, C. A.
2000-01-01
The coming decade of Mars exploration will involve a diverse set of robotic science missions, including in situ and sample return investigations, and ultimately moving towards sustained robotic presence on the Martian surface. In supporting this mission set, NASA must establish a robust telecommunications architecture that meets the specific science needs of near-term missions while enabling new methods of future exploration. This paper will assess the anticipated telecommunications needs of future Mars exploration, examine specific options for deploying capabilities, and quantify the performance of these options in terms of key figures of merit.
Reconnaissance study of water quality in the mining-affected Aries River Basin, Romania
Friedel, Michael J.; Tindall, James A.; Sardan, Daniel; Fey, David L.; Poputa, G.L.
2008-01-01
The Aries River basin of western Romania has been subject to mining activities as far back as Roman times. Present mining activities are associated with the extraction and processing of various metals including Au, Cu, Pb, and Zn. To understand the effects of these mining activities on the environment, this study focused on three objectives: (1) establish a baseline set of physical parameters, and water- and sediment-associated concentrations of metals in river-valley floors and floodplains; (2) establish a baseline set of physical and chemical measurements of pore water and sediment in tailings; and (3) provide training in sediment and water sampling to personnel in the National Agency for Mineral Resources and the Rosia Poieni Mine. This report summarizes basin findings of physical parameters and chemistry (sediment and water), and ancillary data collected during the low-flow synoptic sampling of May 2006.
Predicting protein interactions by Brownian dynamics simulations.
Meng, Xuan-Yu; Xu, Yu; Zhang, Hong-Xing; Mezei, Mihaly; Cui, Meng
2012-01-01
We present a newly adapted Brownian-Dynamics (BD)-based protein docking method for predicting native protein complexes. The approach includes global BD conformational sampling, compact complex selection, and local energy minimization. In order to reduce the computational costs for energy evaluations, a shell-based grid force field was developed to represent the receptor protein and solvation effects. The performance of this BD protein docking approach has been evaluated on a test set of 24 crystal protein complexes. Reproduction of experimental structures in the test set indicates the adequate conformational sampling and accurate scoring of this BD protein docking approach. Furthermore, we have developed an approach to account for the flexibility of proteins, which has been successfully applied to reproduce the experimental complex structure from the structure of two unbounded proteins. These results indicate that this adapted BD protein docking approach can be useful for the prediction of protein-protein interactions.
Belyanina, S I
2015-02-01
Cytogenetic analysis was performed on samples of Chironomus plumosus L. (Diptera, Chironomidae) taken from waterbodies of various types in Bryansk region (Russia) and Gomel region (Belarus). Karyotypes of specimens taken from stream pools of the Volga were used as reference samples. The populations of Bryansk and Gomel regions (except for a population of Lake Strativa in Starodubskii district, Bryansk region) exhibit broad structural variation, including somatic mosaicism for morphotypes of the salivary gland chromosome set, decondensation of telomeric sites, and the presence of small structural changes, as opposed to populations of Saratov region. As compared with Saratov and Bryansk regions, the Balbiani ring in the B-arm of chromosome I is repressed in populations of Gomel region. It is concluded that the chromosome set of Ch. plumosus in a range of waterbodies of Bryansk and Gomel regions is unstable.
Burow, Karen R.; Shelton, Jennifer L.; Dubrovsky, Neil M.
1998-01-01
The processes that affect nitrate and pesticide occurrence may be better understood by relating ground-water quality to natural and human factors in the context of distinct, regionally extensive, land-use settings. This study assesses nitrate and pesticide occurrence in ground water beneath three agricultural land-use settings in the eastern San Joaquin Valley, California. Water samples were collected from 60 domestic wells in vineyard, almond, and a crop grouping of corn, alfalfa, and vegetable land-use settings. Each well was sampled once during 1993–1995. This study is one element of the U.S. Geological Survey's National Water-Quality Assessment Program, which is designed to assess the status of, and trends in, the quality of the nation's ground- and surface-water resources and to link the status and trends with an understanding of the natural and human factors that affect the quality of water. The concentrations and occurrence of nitrate and pesticides in ground-water samples from domestic wells in the eastern alluvial fan physiographic region were related to differences in chemical applications and to the physical and biogeochemical processes that characterize each of the three land-use settings. Ground water beneath the vineyard and almond land-use settings on the coarse-grained, upper and middle parts of the alluvial fans is more vulnerable to nonpoint-source agricultural contamination than is the ground water beneath the corn, alfalfa, and vegetable land-use setting on the lower part of the fans, near the basin physiographic region. Nitrate concentrations ranged from less than 0.05 to 55 milligrams per liter, as nitrogen. Nitrate concentrations were significantly higher in the almond land-use setting than in the vineyard land-use setting, whereas concentrations in the corn, alfalfa, and vegetable land-use setting were intermediate.
Nitrate concentrations exceeded the maximum contaminant level in eight samples from the almond land-use setting (40 percent), in seven samples from the corn, alfalfa, and vegetable land-use setting (35 percent), and in three samples from the vineyard land-use setting (15 percent). The physical and chemical characteristics of the vineyard and the almond land-use settings are similar, characterized by coarse-grained sediments and high dissolved-oxygen concentrations, reflecting processes that promote rapid infiltration of water and solutes. The high nitrate concentrations in the almond land-use setting reflect the high amount of nitrogen applications in this setting, whereas the low nitrate concentrations in the vineyard land-use setting reflect relatively low nitrogen applications. In the corn, alfalfa, and vegetable land-use setting, the relatively fine-grained sediments, and low dissolved-oxygen concentrations, reflect processes that result in slow infiltration rates and longer ground-water residence times. The intermediate nitrate concentrations in the corn, alfalfa, and vegetable land-use setting are a result of these physical and chemical characteristics, combined with generally high (but variable) nitrogen applications. Twenty-three different pesticides were detected in 41 of 60 ground-water samples (68 percent). Eighty percent of the ground-water samples from the vineyard land-use setting had at least one pesticide detection, followed by 70 percent in the almond land-use setting, and 55 percent in the corn, alfalfa, and vegetable land-use setting. All concentrations were less than state or federal maximum contaminant levels (only 5 of the detected pesticides have established maximum contaminant levels), with the exception of 1,2-dibromo-3-chloropropane, which exceeded the maximum contaminant level of 0.2 micrograms per liter in 10 ground-water samples from vineyard land-use wells and in 5 ground-water samples from almond land-use wells.
Simazine was detected most often, occurring in 50 percent of the ground-water samples from the vineyard land-use wells and in 30 percent
Hackley, Paul C.; Kolak, Jonathan J.
2008-01-01
This report presents vitrinite reflectance and detailed organic composition data for nine high volatile bituminous coal samples. These samples were selected to provide a single, internally consistent set of reflectance and composition analyses to facilitate the study of linkages among coal composition, bitumen generation during thermal maturation, and geochemical characteristics of generated hydrocarbons. Understanding these linkages is important for addressing several issues, including: the role of coal as a source rock within a petroleum system, the potential for conversion of coal resources to liquid hydrocarbon fuels, and the interactions between coal and carbon dioxide during enhanced coalbed methane recovery and(or) carbon dioxide sequestration in coal beds.
Application of Handheld Laser-Induced Breakdown Spectroscopy (LIBS) to Geochemical Analysis.
Connors, Brendan; Somers, Andrew; Day, David
2016-05-01
While laser-induced breakdown spectroscopy (LIBS) has been in use for decades, only within the last two years has technology progressed to the point of enabling true handheld, self-contained instruments. Several instruments are now commercially available with a range of capabilities and features. In this paper, the SciAps Z-500 handheld LIBS instrument functionality and sub-systems are reviewed. Several assayed geochemical sample sets, including igneous rocks and soils, are investigated. Calibration data are presented for multiple elements of interest along with examples of elemental mapping in heterogeneous samples. Sample preparation and the data collection method from multiple locations and data analysis are discussed. © The Author(s) 2016.
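Calibration of this kind maps measured emission-line intensity to assayed concentration. A minimal sketch of a univariate least-squares calibration (the standards, intensities, and single-line model are all hypothetical; commercial handheld LIBS instruments use their own, typically multivariate, calibrations):

```python
import statistics

def fit_calibration(conc, signal):
    """Ordinary least-squares calibration line: signal = b0 + b1 * conc."""
    mx, my = statistics.mean(conc), statistics.mean(signal)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    sxx = sum((x - mx) ** 2 for x in conc)
    b1 = sxy / sxx
    return my - b1 * mx, b1

def predict_conc(b0, b1, s):
    """Invert the calibration line to turn a measured intensity into a concentration."""
    return (s - b0) / b1

# Hypothetical emission-line intensities for four assayed Cu standards (ppm).
conc = [0.0, 50.0, 100.0, 200.0]
sig = [12.0, 260.0, 515.0, 1010.0]
b0, b1 = fit_calibration(conc, sig)
unknown_ppm = predict_conc(b0, b1, 640.0)   # intensity measured on an unknown
```

Averaging spectra from multiple locations on a heterogeneous sample, as discussed in the paper, reduces the sampling error that feeds into such a fit.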
Studies of the physical, yield and failure behavior of aliphatic polyketones
NASA Astrophysics Data System (ADS)
Karttunen, Nicole Renee
This thesis describes an investigation into the multiaxial yield and failure behavior of an aliphatic polyketone terpolymer. The behavior is studied as a function of: stress state, strain rate, temperature, and sample processing conditions. Results of this work include: elucidation of the behavior of a recently commercialized polymer, increased understanding of the effects listed above, insight into the effects of processing conditions on the morphology of the polyketone, and a description of yield strength of this material as a function of stress state, temperature, and strain rate. The first portion of work focuses on the behavior of a set of samples that are extruded under "common" processing conditions. Following this reference set of tests, the effect of testing this material at different temperatures is studied. A total of four different temperatures are examined. In addition, the effect of altering strain rate is examined. Testing is performed under pseudo-strain rate control at constant nominal octahedral shear strain rate for each failure envelope. A total of three different rates are studied. An extension of the first portion of work involves modeling the yield envelope. This is done by combining two approaches: continuum level and molecular level. The use of both methods allows the description of the yield envelope as a function of stress state, strain rate and temperature. The second portion of work involves the effects of processing conditions. For this work, additional samples are extruded with different shear and thermal histories than the "standard" material. One set of samples is processed with shear rates higher and lower than the standard. A second set is processed at higher and lower cooling rates than the standard. In order to understand the structural cause for changes in behavior with processing conditions, morphological characterization is performed on these samples. In particular, the effect on spherulitic structure is important. 
Residual stresses are also determined to be important to the behavior of the samples. Finally, an investigation into the crystalline structure of a family of aliphatic polyketones is performed. The effects of side group concentration and size are described.
Occupational exposure decisions: can limited data interpretation training help improve accuracy?
Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul
2009-06-01
Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment and other programs designed to protect workers. A desktop study was performed using videos, task information and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered to participants, in which they were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants then received data interpretation, or 'rule of thumb', training, which included a simple set of rules for estimating 95th percentiles for small data sets from a log-normal population. A DIT was given to each participant before and after the rule of thumb training. Results of each DIT and the qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments for all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DITs and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training. In addition, the rule of thumb training reduced the amount of bias in the DITs and quantitative judgments. The mean DIT % correct scores increased from 47 to 64% after the rule of thumb training (P < 0.001).
The accuracy for quantitative desktop judgments increased from 43 to 63% correct after the rule of thumb training (P < 0.001). The rule of thumb training did not significantly impact accuracy for qualitative desktop judgments. The finding that even some simple statistical rules of thumb improve judgment accuracy significantly suggests that hygienists need to routinely use statistical tools while making exposure judgments using monitoring data.
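The quantity participants were asked to estimate, the 95th percentile of a log-normal exposure distribution, has a standard parametric estimator. A minimal sketch (the sample values are invented, and the study's actual rules of thumb were simple mental heuristics rather than this exact computation):

```python
import math
import statistics

def lognormal_p95(samples, z95=1.645):
    """Parametric estimate of the 95th percentile of a log-normal
    distribution: exp(mean(ln x) + z95 * sd(ln x)), with z95 the
    standard-normal 95th-percentile quantile."""
    logs = [math.log(x) for x in samples]
    return math.exp(statistics.mean(logs) + z95 * statistics.stdev(logs))

# Six hypothetical air-monitoring results (mg/m^3) for one task.
data = [0.12, 0.30, 0.21, 0.45, 0.18, 0.27]
x95 = lognormal_p95(data)
# The hygienist would compare x95 against the occupational exposure limit.
```

Bayesian analyses like the study's reference judgments refine this point estimate by accounting for the uncertainty that comes with such small sample sizes.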
Object Classification With Joint Projection and Low-Rank Dictionary Learning.
Foroughi, Homa; Ray, Nilanjan; Hong Zhang
2018-02-01
For an object classification system, the most critical obstacles toward real-world applications are often caused by large intra-class variability, arising from different lightings, occlusion, and corruption, in limited sample sets. Most methods in the literature fail when the training samples are heavily occluded, corrupted or have significant illumination or viewpoint variations. Besides, most of the existing methods, and especially deep learning-based methods, need large training sets to achieve satisfactory recognition performance. Although pre-training a network on a generic large-scale data set and fine-tuning it on the small-sized target data set is a widely used technique, this does not help when the contents of the base and target data sets are very different. To address these issues simultaneously, we propose a joint projection and low-rank dictionary learning method using dual graph constraints. Specifically, a structured class-specific dictionary is learned in the low-dimensional space, and the discrimination is further improved by imposing a graph constraint on the coding coefficients that maximizes intra-class compactness and inter-class separability. We enforce structural incoherence and low-rank constraints on the sub-dictionaries to reduce the redundancy among them and to make them robust to variations and outliers. To preserve the intrinsic structure of the data, we introduce a supervised neighborhood graph into the framework to make the proposed method robust to small-sized and high-dimensional data sets. Experimental results on several benchmark data sets verify the superior performance of our method for object classification on small-sized data sets that include a considerable amount of variation of different kinds and may have high-dimensional feature vectors.
Software engineering the mixed model for genome-wide association studies on large samples.
Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J
2009-11-01
Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
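At the core of every such package is a generalized-least-squares step under the mixed model y = Xβ + u + e with Var(u) = σg²K. A toy sketch under assumed (not REML-estimated) variance components; the kinship matrix and phenotypes below are invented, and real GWAS software uses far more efficient linear algebra than this dense Gauss-Jordan solve:

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gls_beta(X, y, V):
    """beta_hat = (X' V^-1 X)^-1 X' V^-1 y for the model y = X*beta + u + e,
    where V = Var(y) = sg*K + se*I combines kinship and residual variance."""
    k = len(X[0])
    VinvX = [solve(V, [row[j] for row in X]) for j in range(k)]   # V^-1 x_j
    XtVX = [[sum(vx * row[j] for vx, row in zip(VinvX[i], X)) for j in range(k)]
            for i in range(k)]
    XtVy = [sum(vx * yi for vx, yi in zip(VinvX[i], y)) for i in range(k)]
    return solve(XtVX, XtVy)

# Toy data: 4 individuals (two sib pairs in K), intercept + one marker.
K = [[1, .5, 0, 0], [.5, 1, 0, 0], [0, 0, 1, .5], [0, 0, .5, 1]]
sg, se = 0.3, 0.2                     # assumed, not REML-estimated, variances
V = [[sg * K[i][j] + (se if i == j else 0.0) for j in range(4)]
     for i in range(4)]
X = [[1, 0], [1, 1], [1, 1], [1, 2]]  # column 0: intercept, column 1: genotype
y = [1.0, 1.8, 2.2, 3.1]
beta = gls_beta(X, y, V)              # [intercept, marker effect]
```

The efficiency tricks the review discusses (e.g. replacing raw phenotypes with best linear unbiased predictors or residuals) exist precisely to avoid repeating this V⁻¹ solve for every one of millions of markers.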
Sanitation in self-service automatic washers.
Buford, L E; Pickett, M S; Hartman, P A
1977-01-01
The potential for microbial transfer in self-service laundry washing machines was investigated by obtaining swab samples from the interior surfaces of commercial machines and wash water samples before and after disinfectant treatment. Three disinfectants (chlorine, a quaternary ammonium product, and a phenolic disinfectant) were used. Four self-service laundry facilities were sampled, with 10 replications of the procedure for each treatment at each location. Although washers were set on a warmwater setting, the wash water temperatures ranged from 24 to 51 degrees C. The quaternary ammonium product seemed most effective, averaging a 97% microbial kill; chlorine was the second most effective, with a 58% kill, and the phenolic disinfectant was least effective, with only a 25% kill. The efficacies of the chlorine and phenolic disinfectants were reduced at low water temperatures commonly experienced in self-service laundries. Interfamily cross-contamination in self-service facilities is a potential public health problem, which is aggravated by environmental conditions, such as water temperature and the practices of the previous users of the equipment. Procedural changes in laundering are recommended, including the use of a disinfectant to maintain adequate levels of sanitation. PMID:13714
Instrumental measurement of beer taste attributes using an electronic tongue.
Rudnitskaya, Alisa; Polshin, Evgeny; Kirsanov, Dmitry; Lammertyn, Jeroen; Nicolai, Bart; Saison, Daan; Delvaux, Freddy R; Delvaux, Filip; Legin, Andrey
2009-07-30
The present study deals with the evaluation of the electronic tongue multisensor system as an analytical tool for the rapid assessment of the taste and flavour of beer. Fifty samples of Belgian and Dutch beers of different types (lager beers, ales, wheat beers, etc.), which were characterized with respect to their sensory properties, were measured using the electronic tongue (ET) based on potentiometric chemical sensors developed in the Laboratory of Chemical Sensors of St. Petersburg University. The analysis of the sensory data and the calculation of the compromise average scores were performed using STATIS. The beer samples were discriminated using both sensory panel and ET data based on PCA, and the two data sets were compared using Canonical Correlation Analysis. The ET data were related to the sensory beer attributes using Partial Least Squares regression for each attribute separately. Validation was done on a test set comprising one-third of all samples. The ET was capable of predicting with good precision 20 sensory attributes of beer, including bitter, sweet, sour, fruity, caramel, artificial, burnt, intensity and body.
Molecular typing of antibiotic-resistant Staphylococcus aureus in Nigeria.
O'Malley, S M; Emele, F E; Nwaokorie, F O; Idika, N; Umeizudike, A K; Emeka-Nwabunnia, I; Hanson, B M; Nair, R; Wardyn, S E; Smith, T C
2015-01-01
Antibiotic-resistant Staphylococcus aureus strains, including methicillin-resistant strains (MRSA), are a major concern in densely populated urban areas. Initial studies of S. aureus in Nigeria indicated the existence of antibiotic-resistant S. aureus strains in clinical and community settings. A total of 73 biological samples (40 throat, 23 nasal, 10 wound) were collected from patients and healthcare workers in three populations in Nigeria: Lagos University Teaching Hospital, the Nigerian Institute of Medical Research, and Owerri General Hospital. S. aureus was isolated from 38 of the 73 samples (52%). Of the 38 S. aureus isolates, 9 (24%) carried the Panton-Valentine leukocidin gene (PVL) while 16 (42%) possessed methicillin resistance genes (mecA). Antibiotic susceptibility profiles indicated resistance to several broad-spectrum antibiotics. Antibiotic-resistant S. aureus isolates were recovered from both clinical and community settings in Nigeria. Insight about S. aureus in Nigeria may be used to improve antibiotic prescription practices and minimize the spread of antibiotic-resistant organisms in highly populated urban communities similar to Lagos, Nigeria. Copyright © 2014 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Klemm, Richard; Schattschneider, Sebastian; Jahn, Tobias; Hlawatsch, Nadine; Julich, Sandra; Becker, Holger; Gärtner, Claudia
2013-05-01
The ability to integrate complete assays on a microfluidic chip helps to greatly simplify instrument requirements and allows the use of lab-on-a-chip technology in the field. A core application for such field-portable systems is the detection of pathogens in a CBRNE scenario, such as permanent monitoring of airborne pathogens, e.g. in metro stations or hospitals. Enzymatic assays were chosen as one assay methodology for pathogen identification. In order to evaluate different detection strategies, the realized on-chip enzyme assay module has been designed as a general platform chip. In all application cases, the assays are based on immobilized probes located in microfluidic channels. A microfluidic chip was therefore realized containing a set of three individually addressable channels, not only for detection of the sample itself but also to provide a set of references for quantitative analysis. It furthermore includes two turning valves and a waste container for clear and sealed storage of potentially pathogenic liquids to avoid contamination of the environment. All liquids remain in the chip and can be disposed of properly after the analysis. The chip design includes four inlet ports consisting of one sample port (Luer interface) and three mini Luer interfaces for the fluidic supply of e.g. washing buffer, substrate and enzyme solution. The sample can be applied via a special, sealable sampling vessel with an integrated female Luer interface, whereby pre-analytical contamination of the environment can also be prevented. Other reagents that are required for analysis will be stored off chip.
Nonpareil 3: Fast Estimation of Metagenomic Coverage and Sequence Diversity.
Rodriguez-R, Luis M; Gunturu, Santosh; Tiedje, James M; Cole, James R; Konstantinidis, Konstantinos T
2018-01-01
Estimations of microbial community diversity based on metagenomic data sets are affected, often to an unknown degree, by biases derived from insufficient coverage and reference database-dependent estimations of diversity. For instance, the completeness of reference databases cannot be generally estimated since it depends on the extant diversity sampled to date, which, with the exception of a few habitats such as the human gut, remains severely undersampled. Further, estimation of the degree of coverage of a microbial community by a metagenomic data set is prohibitively time-consuming for large data sets, and coverage values may not be directly comparable between data sets obtained with different sequencing technologies. Here, we extend Nonpareil, a database-independent tool for the estimation of coverage in metagenomic data sets, to a high-performance computing implementation that scales up to hundreds of cores and includes, in addition, a k-mer-based estimation as sensitive as the original alignment-based version but about three hundred times as fast. Further, we propose a metric of sequence diversity (Nd) derived directly from Nonpareil curves that correlates well with alpha diversity assessed by traditional metrics. We use this metric in different experiments demonstrating the correlation with the Shannon index estimated on 16S rRNA gene profiles and show that Nd additionally reveals seasonal patterns in marine samples that are not captured by the Shannon index, and more precise rankings of the magnitude of diversity of microbial communities in different habitats. Therefore, the new version of Nonpareil, called Nonpareil 3, advances the toolbox for metagenomic analyses of microbiomes.
IMPORTANCE Estimation of the coverage provided by a metagenomic data set, i.e., what fraction of the microbial community was sampled by DNA sequencing, represents an essential first step of every culture-independent genomic study that aims to robustly assess the sequence diversity present in a sample. However, estimation of coverage remains elusive because of several technical limitations, including high computational requirements and the limitations of statistical approaches to quantifying diversity. Here we describe Nonpareil 3, a new bioinformatics algorithm that circumvents several of these limitations and thus can facilitate culture-independent studies in clinical or environmental settings, independent of the sequencing platform employed. In addition, we present a new metric of sequence diversity based on rarefied coverage and demonstrate its use in communities from diverse ecosystems.
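The intuition behind coverage estimation is read redundancy: the more often newly sequenced reads repeat k-mers already seen, the closer the data set is to saturating the community. A greatly simplified sketch of that signal (not Nonpareil's actual algorithm or statistics; the reads below are invented):

```python
def kmer_redundancy(reads, k=8):
    """Fraction of reads sharing at least one k-mer with an earlier read --
    a crude stand-in for the redundancy signal behind coverage estimation."""
    seen, redundant = set(), 0
    for read in reads:
        kmers = {read[i:i + k] for i in range(len(read) - k + 1)}
        if kmers & seen:
            redundant += 1
        seen |= kmers
    return redundant / len(reads) if reads else 0.0

# Duplicated reads push the estimate up; novel reads keep it low.
reads = ["ACGTACGTACGT", "TTTTGGGGCCCC", "ACGTACGTACGT", "GATTACAGATTA"]
r = kmer_redundancy(reads, k=8)   # only the repeated read counts as redundant
```

Tools like Nonpareil fit a rarefaction curve to how this redundancy grows with subsample size and extrapolate it to estimate the sequencing effort needed for near-complete coverage.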
Joint inversion of NMR and SIP data to estimate pore size distribution of geomaterials
NASA Astrophysics Data System (ADS)
Niu, Qifei; Zhang, Chi
2018-03-01
There is growing interest in using geophysical tools to characterize the microstructure of geomaterials because of their non-invasive nature and their applicability in the field. In these applications, multiple types of geophysical data sets are usually processed separately, which may be inadequate to constrain the key features of the target variables; simultaneous processing of multiple data sets could therefore improve the resolution. In this study, we propose a method to estimate pore size distribution by joint inversion of nuclear magnetic resonance (NMR) T2 relaxation and spectral induced polarization (SIP) spectra. The petrophysical relation between NMR T2 relaxation time and SIP relaxation time is incorporated in a nonlinear least-squares problem formulation, which is solved using the Gauss-Newton method. The joint inversion scheme is applied to a synthetic sample and a Berea sandstone sample. The jointly estimated pore size distributions are very close to the true model and to results from other experimental methods. Even when knowledge of the petrophysical models of the sample is incomplete, the joint inversion can still capture the main features of the pore size distribution of the samples, including the general shape and relative peak positions of the distribution curves. It is also found from the numerical example that the surface relaxivity of the sample can be extracted with the joint inversion of NMR and SIP data if the diffusion coefficient of the ions in the electrical double layer is known. Compared to individual inversions, the joint inversion improves the resolution of the estimated pore size distribution because of the addition of extra data sets. The proposed approach might constitute a first step towards a comprehensive joint inversion that can extract the full pore geometry information of a geomaterial from NMR and SIP data.
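A scalar caricature of the joint Gauss-Newton step may help: suppose a single pore radius a is constrained simultaneously by an NMR relaxation time and an SIP relaxation time through assumed petrophysical relations (fast-diffusion sphere for T2, Schwarz-type polarization for tau). All relations and numbers here are illustrative, not the paper's full multi-dimensional inversion:

```python
def joint_gauss_newton(t2_obs, tau_obs, rho, D, a0, iters=20):
    """Fit a single pore radius a to two observables at once:
    NMR:  T2  = a / (3 * rho)    (fast-diffusion sphere, surface relaxivity rho)
    SIP:  tau = a**2 / (2 * D)   (Schwarz-type relaxation, ion diffusivity D)
    via Gauss-Newton updates a <- a - (J'J)^-1 J'r on the stacked residuals."""
    a = a0
    for _ in range(iters):
        r = [t2_obs - a / (3 * rho), tau_obs - a ** 2 / (2 * D)]  # residuals
        J = [-1 / (3 * rho), -a / D]                              # dr/da
        a -= sum(j * ri for j, ri in zip(J, r)) / sum(j * j for j in J)
    return a

# Synthetic truth: a = 1e-6 m, rho = 1e-5 m/s, D = 1.3e-9 m^2/s.
rho, D = 1e-5, 1.3e-9
a_true = 1e-6
t2_obs = a_true / (3 * rho)        # consistent synthetic NMR relaxation time
tau_obs = a_true ** 2 / (2 * D)    # consistent synthetic SIP relaxation time
a_est = joint_gauss_newton(t2_obs, tau_obs, rho, D, a0=5e-7)
```

The paper's actual inversion solves the analogous problem for a full distribution of pore sizes, with both spectra entering the stacked residual vector.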
eDNAoccupancy: An R package for multi-scale occupancy modeling of environmental DNA data
Dorazio, Robert; Erickson, Richard A.
2017-01-01
In this article we describe eDNAoccupancy, an R package for fitting Bayesian, multi-scale occupancy models. These models are appropriate for occupancy surveys that include three, nested levels of sampling: primary sample units within a study area, secondary sample units collected from each primary unit, and replicates of each secondary sample unit. This design is commonly used in occupancy surveys of environmental DNA (eDNA). eDNAoccupancy allows users to specify and fit multi-scale occupancy models with or without covariates, to estimate posterior summaries of occurrence and detection probabilities, and to compare different models using Bayesian model-selection criteria. We illustrate these features by analyzing two published data sets: eDNA surveys of a fungal pathogen of amphibians and eDNA surveys of an endangered fish species.
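Under the three-level design, the unconditional probability that a site yields at least one eDNA detection is a simple product over the nested levels. A sketch (parameter values are hypothetical, and eDNAoccupancy itself is an R package that fits these probabilities with Bayesian methods rather than computing them forward):

```python
def prob_site_detection(psi, theta, p, m, k):
    """P(at least one detection at a site) under the three nested levels:
    psi   -- site occupancy probability
    theta -- probability eDNA occurs in a water sample from an occupied site
    p     -- per-replicate (e.g. per-PCR) detection probability
    m, k  -- number of samples per site, replicates per sample."""
    p_sample = theta * (1 - (1 - p) ** k)    # a sample yields >= 1 detection
    return psi * (1 - (1 - p_sample) ** m)

# Hypothetical design: 3 water samples per site, 3 PCR replicates per sample.
pr = prob_site_detection(psi=0.8, theta=0.7, p=0.5, m=3, k=3)
```

Forward calculations like this are useful for planning survey effort (how many samples and replicates are needed), while the package solves the inverse problem of estimating psi, theta, and p from detection histories.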
Analysis of suspicious powders following the post 9/11 anthrax scare.
Wills, Brandon; Leikin, Jerrold; Rhee, James; Saeedi, Bijan
2008-06-01
Following the 9/11 terrorist attacks, SET Environmental, Inc., a Chicago-based environmental and hazardous materials management company, received a large number of suspicious powders for analysis. Samples of powders were submitted to SET for anthrax screening and/or unknown identification (UI). Anthrax screening was performed on-site using a ruggedized analytical pathogen identification device (R.A.P.I.D.) (Idaho Technologies, Salt Lake City, UT). UI was performed at SET headquarters (Wheeling, IL) utilizing a combination of wet chemistry techniques, infrared spectroscopy, and gas chromatography/mass spectroscopy. Turnaround time was approximately 2-3 hours for either anthrax screening or UI. Between October 10, 2001 and October 11, 2002, 161 samples were analyzed. Of these, 57 were for anthrax screening only, 78 were for anthrax screening and UI, and 26 were for UI only. Sources of suspicious powders included industries (66%), the U.S. Postal Service (19%), law enforcement (9%), and municipalities (7%). None of the 135 anthrax screens was positive; thus, no positive anthrax screens were performed by SET in the Chicago area following the post-9/11 anthrax scare. The only potential biological or chemical warfare agent identified (cyanide) was provided by law enforcement. Rapid anthrax screening and identification of unknown substances at the scene are useful to prevent costly interruption of services and potential referral for medical evaluation.
External Quality Assessment for Avian Influenza A (H7N9) Virus Detection Using Armored RNA
Sun, Yu; Jia, Tingting; Sun, Yanli; Han, Yanxi; Wang, Lunan; Zhang, Rui; Zhang, Kuo; Lin, Guigao; Xie, Jiehong
2013-01-01
An external quality assessment (EQA) program for the molecular detection of avian influenza A (H7N9) virus was implemented by the National Center for Clinical Laboratories (NCCL) of China in June 2013. Virus-like particles (VLPs) that contained full-length RNA sequences of the hemagglutinin (HA), neuraminidase (NA), matrix protein (MP), and nucleoprotein (NP) genes from the H7N9 virus (armored RNAs) were constructed. The EQA panel, comprising 6 samples with different concentrations of armored RNAs positive for H7N9 viruses and four H7N9-negative samples (including one sample positive for only the MP gene of the H7N9 virus), was distributed to 79 laboratories in China that carry out the molecular detection of H7N9 viruses. The overall performances of the data sets were classified according to the results for the H7 and N9 genes. Consequently, we received 80 data sets (one participating group provided two sets of results) which were generated using commercial (n = 60) or in-house (n = 17) reverse transcription-quantitative PCR (qRT-PCR) kits and a commercial assay that employed isothermal amplification method (n = 3). The results revealed that the majority (82.5%) of the data sets correctly identified the H7N9 virus, while 17.5% of the data sets needed improvements in their diagnostic capabilities. These “improvable” data sets were derived mostly from false-negative results for the N9 gene at relatively low concentrations. The false-negative rate was 5.6%, and the false-positive rate was 0.6%. In addition, we observed varied diagnostic capabilities between the different commercially available kits and the in-house-developed assays, with the assay manufactured by BioPerfectus Technologies (Jiangsu, China) performing better than the others. Overall, the majority of laboratories have reliable diagnostic capacities for the detection of H7N9 virus. PMID:24088846
External quality assessment for Avian Influenza A (H7N9) Virus detection using armored RNA.
Sun, Yu; Jia, Tingting; Sun, Yanli; Han, Yanxi; Wang, Lunan; Zhang, Rui; Zhang, Kuo; Lin, Guigao; Xie, Jiehong; Li, Jinming
2013-12-01
An external quality assessment (EQA) program for the molecular detection of avian influenza A (H7N9) virus was implemented by the National Center for Clinical Laboratories (NCCL) of China in June 2013. Virus-like particles (VLPs) that contained full-length RNA sequences of the hemagglutinin (HA), neuraminidase (NA), matrix protein (MP), and nucleoprotein (NP) genes from the H7N9 virus (armored RNAs) were constructed. The EQA panel, comprising 6 samples with different concentrations of armored RNAs positive for H7N9 viruses and four H7N9-negative samples (including one sample positive for only the MP gene of the H7N9 virus), was distributed to 79 laboratories in China that carry out the molecular detection of H7N9 viruses. The overall performances of the data sets were classified according to the results for the H7 and N9 genes. Consequently, we received 80 data sets (one participating group provided two sets of results) which were generated using commercial (n = 60) or in-house (n = 17) reverse transcription-quantitative PCR (qRT-PCR) kits and a commercial assay that employed isothermal amplification method (n = 3). The results revealed that the majority (82.5%) of the data sets correctly identified the H7N9 virus, while 17.5% of the data sets needed improvements in their diagnostic capabilities. These "improvable" data sets were derived mostly from false-negative results for the N9 gene at relatively low concentrations. The false-negative rate was 5.6%, and the false-positive rate was 0.6%. In addition, we observed varied diagnostic capabilities between the different commercially available kits and the in-house-developed assays, with the assay manufactured by BioPerfectus Technologies (Jiangsu, China) performing better than the others. Overall, the majority of laboratories have reliable diagnostic capacities for the detection of H7N9 virus.