Sample records for larger sample set

  1. Tackling the conformational sampling of larger flexible compounds and macrocycles in pharmacology and drug discovery.

    PubMed

    Chen, I-Jen; Foloppe, Nicolas

    2013-12-15

    Computational conformational sampling underpins much of molecular modeling and design in pharmaceutical work. The sampling of smaller drug-like compounds has been an active area of research. However, few studies have tested in detail the sampling of larger, more flexible compounds, which are also relevant to drug discovery, including therapeutic peptides, macrocycles, and inhibitors of protein-protein interactions. Here, we extensively investigate mainstream conformational sampling methods on three carefully curated compound sets, namely the 'Drug-like', larger 'Flexible', and 'Macrocycle' compounds. These test molecules are chemically diverse with reliable X-ray protein-bound bioactive structures. The compared sampling methods include Stochastic Search and the recent LowModeMD from MOE, all the low-mode based approaches from MacroModel, and MD/LLMOD recently developed for macrocycles. In addition to default settings, key parameters of the sampling protocols were explored. The performance of the computational protocols was assessed via (i) the reproduction of the X-ray bioactive structures, (ii) the size, coverage and diversity of the output conformational ensembles, (iii) the compactness/extendedness of the conformers, and (iv) the ability to locate the global energy minimum. The influence of the stochastic nature of the searches on the results was also examined. Much better results were obtained by adopting search parameters enhanced over the default settings, while maintaining computational tractability. In MOE, the recent LowModeMD emerged as the method of choice. Mixed torsional/low-mode from MacroModel performed as well as LowModeMD, and MD/LLMOD performed well for macrocycles. The low-mode based approaches yielded very encouraging results with the flexible and macrocycle sets. Thus, one can productively tackle the computational conformational search of larger flexible compounds for drug discovery, including macrocycles. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Colon Reference Set Application: Mary Disis - University of Washington (2008) — EDRN Public Portal

    Cancer.gov

    The proposed study aims to validate the diagnostic value of a panel of serum antibodies for the early detection of colorectal cancer (CRC). We have developed a serum antibody-based assay that shows promise in discriminating sera from CRC patients from healthy donors. We have evaluated two separate sample sets of sera that were either available commercially or comprised leftover samples from previous studies by our group. Both sample sets showed concordance in discriminatory power. We have not been able to identify investigators with a larger, well-defined sample set of early-stage colon cancer sera and request assistance from the EDRN in obtaining such samples to help assess the potential diagnostic value of our autoantibody panel.

  3. Ranked set sampling: cost and optimal set size.

    PubMed

    Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying

    2002-12-01

    McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
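
    The mechanics of RSS are easy to simulate. The sketch below is a generic illustration only (it is not the article's cost or ranking-error model): per cycle it draws set_size sets of set_size units, ranks each set cheaply, and actually "measures" only one order statistic per set.

```python
import random
import statistics

def rss_measurements(population, set_size, cycles, rng):
    """Ranked set sampling: per cycle, draw `set_size` sets of `set_size`
    units each; rank each set cheaply; actually measure only the i-th
    ranked unit of the i-th set. Returns the measured values."""
    measured = []
    for _ in range(cycles):
        for i in range(set_size):
            ranked = sorted(rng.sample(population, set_size))
            measured.append(ranked[i])  # the only unit that is measured
    return measured

rng = random.Random(42)
pop = [rng.gauss(50, 10) for _ in range(10_000)]
rss_mean = statistics.mean(rss_measurements(pop, set_size=4, cycles=25, rng=rng))
```

    Note that the values themselves serve as the ranking proxy here (perfect ranking); the article's ranking-error and cost models would perturb the `sorted` step and weight the per-operation costs when choosing the set size.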

  4. Ultrasonic imaging of textured alumina

    NASA Technical Reports Server (NTRS)

    Stang, David B.; Salem, Jonathan A.; Generazio, Edward R.

    1989-01-01

    Ultrasonic images representing the bulk attenuation and velocity of a set of alumina samples were obtained by a pulse-echo contact scanning technique. The samples were taken from larger bodies that were chemically similar but were processed by extrusion or isostatic processing. The crack growth resistance and fracture toughness of the larger bodies were found to vary with processing method and test orientation. The results presented here demonstrate that differences in texture that contribute to variations in structural performance can be revealed by analytic ultrasonic techniques.

  5. Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.

    PubMed

    Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira

    2016-01-01

    Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated sampling properties of visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers to extract the summary statistics: a global sampling model without sampling noise, global sampling model with sampling noise, and limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4.
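
    The contrast between a limited-sampling observer and a global one can be made concrete with a schematic simulation (the details below are assumptions, not the study's exact ideal-observer models, which also include sampling noise): averaging only k of the item sizes leaves residual variability in the estimate that vanishes when all items contribute.

```python
import random
import statistics

def estimate_mean_size(items, k, rng):
    """Limited-sampling observer: average a random subset of k item sizes.
    With k == len(items) this degenerates to a noiseless global-sampling
    observer."""
    return statistics.mean(rng.sample(items, k))

rng = random.Random(0)
trials = [[rng.uniform(1, 9) for _ in range(8)] for _ in range(2000)]
# spread of the estimation error across displays, for k = 2 vs k = 8 samples
err2 = statistics.stdev(estimate_mean_size(t, 2, rng) - statistics.mean(t) for t in trials)
err8 = statistics.stdev(estimate_mean_size(t, 8, rng) - statistics.mean(t) for t in trials)
```

    Comparing such simulated spreads against human discrimination thresholds is the essence of the statistical efficiency analysis the study applies.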

  6. Utility of Inferential Norming with Smaller Sample Sizes

    ERIC Educational Resources Information Center

    Zhu, Jianjun; Chen, Hsin-Yi

    2011-01-01

    We examined the utility of inferential norming using small samples drawn from the larger "Wechsler Intelligence Scales for Children-Fourth Edition" (WISC-IV) standardization data set. The quality of the norms was estimated with multiple indexes such as polynomial curve fit, percentage of cases receiving the same score, average absolute…

  7. A novel atmospheric tritium sampling system

    NASA Astrophysics Data System (ADS)

    Qin, Lailai; Xia, Zhenghai; Gu, Shaozhong; Zhang, Dongxun; Bao, Guangliang; Han, Xingbo; Ma, Yuhua; Deng, Ke; Liu, Jiayu; Zhang, Qin; Ma, Zhaowei; Yang, Guo; Liu, Wei; Liu, Guimin

    2018-06-01

    The health hazard of tritium is related to its chemical form, so sampling the different chemical forms of tritium simultaneously is significant. Here a novel atmospheric tritium sampling system (TS-212) was developed to collect tritiated water (HTO), tritiated hydrogen (HT) and tritiated methane (CH3T) simultaneously. It consisted of an air inlet system, three parallel sampling channels, a hydrogen supply module, a methane supply module and a remote control system. It worked at air flow rates of 1 L/min to 5 L/min, with the catalyst furnace at 200 °C for HT sampling and 400 °C for CH3T sampling. Conversion rates of both HT and CH3T to HTO were greater than 99%. The collection efficiency of the two-stage trap sets for HTO was greater than 96% over 12 h of operation without blockage. Therefore, the overall collection efficiencies of the TS-212 are greater than 95% for the different chemical forms of environmental tritium. In addition, the remote control system made sampling more intelligent, reducing the operator's workload. Based on the performance parameters described above, the TS-212 can be used to sample atmospheric tritium in its different chemical forms.

  8. Ecological tolerances of Miocene larger benthic foraminifera from Indonesia

    NASA Astrophysics Data System (ADS)

    Novak, Vibor; Renema, Willem

    2018-01-01

    To provide a comprehensive palaeoenvironmental reconstruction based on larger benthic foraminifera (LBF), a quantitative analysis of their assemblage composition is needed. Besides microfacies analysis, which includes the environmental preferences of foraminiferal taxa, statistical analyses should also be employed. Therefore, detrended correspondence analysis and cluster analysis were performed on relative abundance data of identified LBF assemblages deposited in mixed carbonate-siliciclastic (MCS) systems and blue-water (BW) settings. The studied MCS system localities include ten sections from the central part of the Kutai Basin in East Kalimantan, ranging from late Burdigalian to Serravallian in age. The BW samples were collected from eleven sections of the Bulu Formation on Central Java, dated as Serravallian. Results from detrended correspondence analysis reveal significant differences between these two environmental settings. Cluster analysis produced five clusters of samples: clusters 1 and 2 comprise dominantly MCS samples, clusters 3 and 4 are dominated by BW samples, and cluster 5 shows a mixed composition of both MCS and BW samples. The clusters were then subjected to indicator species analysis, which distinguished three groups among LBF taxa: typical assemblage indicators, regularly occurring taxa and rare taxa. By interpreting the results of detrended correspondence analysis, cluster analysis and indicator species analysis, along with the environmental preferences of the identified LBF taxa, a palaeoenvironmental model is proposed for the distribution of LBF in Miocene MCS systems and adjacent BW settings of Indonesia.

  9. Low altitude wind shear statistics derived from measured and FAA proposed standard wind profiles

    NASA Technical Reports Server (NTRS)

    Dunham, R. E., Jr.; Usry, J. W.

    1984-01-01

    Wind shear statistics were calculated for a simulated data set using wind profiles proposed as a standard and compared to statistics derived from measured wind profile data. Wind shear values were grouped in altitude bands of 100 ft between 100 and 1400 ft, and in wind shear increments of 0.025 kt/ft between ±0.600 kt/ft for the simulated data set and between ±0.200 kt/ft for the measured set. No values existed outside the ±0.200 kt/ft boundaries for the measured data. Frequency distributions, means, and standard deviations were derived for each altitude band for both data sets, and compared. Also, frequency distributions were derived for the total sample for both data sets and compared. The frequency of occurrence was about the same for both data sets for wind shears less than ±0.10 kt/ft, but the simulated data set had larger values outside these boundaries. Neglecting the vertical wind component did not significantly affect the statistics for these data sets. The frequency of occurrence of wind shears for the flight-measured data was essentially the same for each altitude band and the total sample, but the simulated data distributions were different for each altitude band. The larger wind shears in the flight-measured data were found to have short durations.
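
    The grouping described above is a plain two-dimensional histogram; a minimal sketch follows (the half-open bin convention and record layout are assumptions, not taken from the report):

```python
import math
from collections import Counter

def shear_histogram(records, alt_band_ft=100, shear_bin=0.025):
    """Bin (altitude_ft, shear_kt_per_ft) records into 100-ft altitude
    bands and 0.025 kt/ft shear increments, as in the study. Each key is
    the lower edge of its (altitude band, shear increment) cell."""
    return Counter(
        (int(alt // alt_band_ft) * alt_band_ft,
         math.floor(shear / shear_bin) * shear_bin)
        for alt, shear in records
    )

h = shear_histogram([(150, 0.01), (150, 0.03), (880, -0.01)])
```

    Per-band means and standard deviations, and the total-sample distribution, then follow by aggregating over the first or second key component.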

  10. Detecting representative data and generating synthetic samples to improve learning accuracy with imbalanced data sets.

    PubMed

    Li, Der-Chiang; Hu, Susan C; Lin, Liang-Sian; Yeh, Chun-Wu

    2017-01-01

    It is difficult for learning models to achieve high classification performance with imbalanced data sets: when one class is much larger than the others, most machine learning and data mining classifiers are overly influenced by the larger classes and ignore the smaller ones. As a result, classification algorithms often have poor learning performance due to slow convergence in the smaller classes. To balance such data sets, this paper presents a strategy that involves reducing the size of the majority data and generating synthetic samples for the minority data. In the reducing operation, we use the box-and-whisker plot approach to exclude outliers and the Mega-Trend-Diffusion method to find representative data from the majority data. To generate the synthetic samples, we propose a counterintuitive hypothesis to find the distributed shape of the minority data, and then produce samples according to this distribution. Four real data sets were used to examine the performance of the proposed approach. We used paired t-tests to compare the Accuracy, G-mean, and F-measure scores of the proposed data pre-processing (PPDP) method merged with the D3C method (PPDP+D3C) against those of one-sided selection (OSS), the well-known SMOTEBoost (SB) approach, the normal distribution-based oversampling (NDO) approach, and the PPDP method alone. The results indicate that the classification performance of the proposed approach is better than that of the above-mentioned methods.
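
    The two balancing operations can be sketched independently of the paper's full machinery. Below, the box-and-whisker reduction is standard; the jitter-based generator is a deliberate stand-in for the paper's distribution-shape-based synthetic generation (an assumption, not the authors' Mega-Trend-Diffusion step):

```python
import random
import statistics

def iqr_trim(values):
    """Box-and-whisker reduction step: drop points outside the
    1.5*IQR whiskers of the majority class."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    return [v for v in values if lo <= v <= hi]

def oversample(minority, target_n, rng, scale=0.1):
    """Generate synthetic minority points by jittering resampled points
    (stand-in for the paper's distribution-based generation)."""
    out = list(minority)
    while len(out) < target_n:
        out.append(rng.choice(minority) + rng.gauss(0, scale))
    return out

rng = random.Random(0)
majority = [rng.gauss(0, 1) for _ in range(500)] + [8.0, -9.0]  # two outliers
minority = [rng.gauss(3, 0.5) for _ in range(20)]
balanced_major = iqr_trim(majority)
balanced_minor = oversample(minority, len(balanced_major), rng)
```

    After these two steps the classes are the same size and the majority class is free of whisker-range outliers.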

  11. An interferometric fiber optic hydrophone with large upper limit of dynamic range

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Kan, Baoxi; Zheng, Baichao; Wang, Xuefeng; Zhang, Haiyan; Hao, Liangbin; Wang, Hailiang; Hou, Zhenxing; Yu, Wenpeng

    2017-10-01

    An interferometric fiber optic hydrophone based on heterodyne detection is used to measure missile splashdown points at sea. The signal caused by a missile dropping into the water can be too large to detect, so it is necessary to boost the upper limit of dynamic range (ULODR) of the fiber optic hydrophone. In this article we analyze the factors that influence the ULODR of a heterodyne-detection fiber optic hydrophone: the ULODR is determined by the sampling frequency fsam and the heterodyne frequency Δf. Conventionally, these frequencies must satisfy the Nyquist sampling theorem, with fsam at least twice Δf, in which case the ULODR is limited by the heterodyne frequency. To enlarge the ULODR, we deliberately broke the Nyquist constraint and proposed a fiber optic hydrophone whose heterodyne frequency is larger than the sampling frequency. Simulation and experiment gave similar results: at a sampling frequency of 100 kHz, the ULODR of the large-heterodyne-frequency fiber optic hydrophone is 2.6 times larger than that of the small-heterodyne-frequency one. When the heterodyne frequency is larger than the sampling frequency, the ULODR is limited by the sampling frequency. If the sampling frequency is set to 2 MHz, the ULODR of the heterodyne-detection fiber optic hydrophone is boosted to 1000 rad at 1 kHz, and this large-heterodyne fiber optic hydrophone can be applied to locate the splashdown position of a missile at sea.
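
    A back-of-envelope slew-rate bound reproduces the abstract's 2 MHz → 1000 rad at 1 kHz figure; the exact formula below is an assumption (a standard phase-demodulation argument), not taken from the paper:

```python
def ulodr_rad(f_limit_hz, f_signal_hz):
    """Slew-rate bound on detectable phase amplitude: a phase signal
    D*sin(2*pi*f_sig*t) changes by at most 2*pi*f_sig*D per second, and a
    demodulator running at rate f_limit tolerates at most pi radians per
    period of 1/f_limit, giving D <= f_limit / (2 * f_sig). Here f_limit
    is whichever frequency caps the demodulator: the heterodyne frequency
    in the conventional regime, or the sampling frequency in the
    large-heterodyne regime described above."""
    return f_limit_hz / (2.0 * f_signal_hz)

# reproduces the abstract's figure: 2 MHz sampling -> 1000 rad at 1 kHz
limit = ulodr_rad(2e6, 1e3)
```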

  12. Protein and glycomic plasma markers for early detection of adenoma and colon cancer.

    PubMed

    Rho, Jung-Hyun; Ladd, Jon J; Li, Christopher I; Potter, John D; Zhang, Yuzheng; Shelley, David; Shibata, David; Coppola, Domenico; Yamada, Hiroyuki; Toyoda, Hidenori; Tada, Toshifumi; Kumada, Takashi; Brenner, Dean E; Hanash, Samir M; Lampe, Paul D

    2018-03-01

    To discover and confirm blood-based colon cancer early-detection markers. We created a high-density antibody microarray to detect differences in protein levels in plasma from individuals diagnosed with colon cancer <3 years after blood was drawn (ie, prediagnostic) and cancer-free, matched controls. Potential markers were tested on plasma samples from people diagnosed with adenoma or cancer, compared with controls. Components of an optimal 5-marker panel were tested via immunoblotting using a third sample set, Luminex assay in a large fourth sample set and immunohistochemistry (IHC) on tissue microarrays. In the prediagnostic samples, we found 78 significantly (t-test) increased proteins, 32 of which were confirmed in the diagnostic samples. From these 32, optimal 4-marker panels of BAG family molecular chaperone regulator 4 (BAG4), interleukin-6 receptor subunit beta (IL6ST), von Willebrand factor (VWF) and CD44 or epidermal growth factor receptor (EGFR) were established. Each panel member and the panels also showed increases in the diagnostic adenoma and cancer samples in independent third and fourth sample sets via immunoblot and Luminex, respectively. IHC results showed increased levels of BAG4, IL6ST and CD44 in adenoma and cancer tissues. Inclusion of EGFR and CD44 sialyl Lewis-A and Lewis-X content increased the panel performance. The protein/glycoprotein panel was statistically significantly higher in colon cancer samples, characterised by a range of area under the curves from 0.90 (95% CI 0.82 to 0.98) to 0.86 (95% CI 0.83 to 0.88), for the larger second and fourth sets, respectively. A panel including BAG4, IL6ST, VWF, EGFR and CD44 protein/glycomics performed well for detection of early stages of colon cancer and should be further examined in larger studies. Published by the BMJ Publishing Group Limited.

  13. A Low Resistance Infrared Bolometer for Use with a Squid Detection System.

    DTIC Science & Technology

    1982-09-24

    sensitivity. After treatment at 350 °C (near the Au-Ge eutectic temperature) the sensitivity and resistance decreased, as shown in Fig. 6. The as-evaporated... recrystallize with a very fine grain size; however, the thicker film (Sample No. 7A) revealed larger topographic bumps, Figure 21(b). The light etch sample (No... areas were of a similar thickness on both sets of samples. The thin film (Sample 3A) recrystallized in the contact

  14. An improved initialization center k-means clustering algorithm based on distance and density

    NASA Astrophysics Data System (ADS)

    Duan, Yanling; Liu, Qun; Xia, Shuyin

    2018-04-01

    To address the problem that the random initial cluster centers of the k-means algorithm leave the clustering results sensitive to outlier samples and unstable across repeated runs, an initialization method that selects centers of larger distance and higher density is proposed. The reciprocal of the weighted average distance is used to represent sample density, and the samples with larger distance and higher density are selected as the initial cluster centers to optimize the clustering results. A clustering evaluation method based on distance and density is then designed to verify the feasibility and practicality of the algorithm; experimental results on UCI data sets show that the algorithm is stable and practical.
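
    A minimal version of such an initializer can be sketched as follows. The exact distance weighting in the paper is not specified here, so a plain mean distance stands in for the weighted average, and the greedy distance-times-density score is an assumption:

```python
import math

def densities(points):
    """Density of each point as the reciprocal of its mean distance to
    all other points (stand-in for the paper's weighted average)."""
    out = []
    for i, p in enumerate(points):
        d = [math.dist(p, q) for j, q in enumerate(points) if j != i]
        out.append(1.0 / (sum(d) / len(d)))
    return out

def init_centers(points, k):
    """Pick the densest point first, then greedily add points scoring
    high on (distance to nearest chosen center) * density, so outliers
    (far but sparse) are not chosen as centers."""
    dens = densities(points)
    chosen = [max(range(len(points)), key=dens.__getitem__)]
    while len(chosen) < k:
        def score(i):
            return min(math.dist(points[i], points[c]) for c in chosen) * dens[i]
        chosen.append(max((i for i in range(len(points)) if i not in chosen), key=score))
    return [points[i] for i in chosen]

pts = [(0, 0), (0.1, 0), (0, 0.1), (10, 10), (10.1, 10), (10, 10.1)]
centers = init_centers(pts, 2)  # one center from each tight cluster
```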

  15. The signature-based radiation-scanning approach to standoff detection of improvised explosive devices.

    PubMed

    Brewer, R L; Dunn, W L; Heider, S; Matthew, C; Yang, X

    2012-07-01

    The signature-based radiation-scanning technique for detection of improvised explosive devices is described. The technique seeks to detect nitrogen-rich chemical explosives present in a target. The technology compares a set of "signatures" obtained from a test target to a collection of "templates", sets of signatures for a target that contain an explosive in a specific configuration. Interrogation of nitrogen-rich fertilizer samples, which serve as surrogates for explosives, is shown experimentally to be able to discriminate samples of 3.8L and larger. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Precision of channel catfish catch estimates using hoop nets in larger Oklahoma reservoirs

    USGS Publications Warehouse

    Stewart, David R.; Long, James M.

    2012-01-01

    Hoop nets are rapidly becoming the preferred gear type used to sample channel catfish Ictalurus punctatus, and many managers have reported that hoop nets effectively sample channel catfish in small impoundments (<200 ha). However, the utility and precision of this approach in larger impoundments have not been tested. We sought to determine how the number of tandem hoop net series affected the catch of channel catfish and the time involved in using 16 tandem hoop net series in larger impoundments (>200 ha). Hoop net series were fished once, set for 3 d; then we used Monte Carlo bootstrapping techniques that allowed us to estimate the number of net series required to achieve two levels of precision (relative standard errors [RSEs] of 15 and 25) at two levels of confidence (80% and 95%). Sixteen hoop net series were effective at obtaining an RSE of 25 with 80% and 95% confidence in all but one reservoir. Achieving an RSE of 15 was often less effective and required 18-96 hoop net series given the desired level of confidence. We estimated that an hour was needed, on average, to deploy and retrieve three hoop net series, which meant that 16 hoop net series per reservoir could be set within one day and, after the soak period, retrieved within one day. The estimated number of net series to achieve an RSE of 25 or 15 was positively associated with the coefficient of variation (CV) of the sample but not with reservoir surface area or relative abundance. Our results suggest that hoop nets are capable of providing reasonably precise estimates of channel catfish relative abundance and that the relationship with the CV of the sample reported herein can be used to determine the sampling effort for a desired level of precision.
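
    A bootstrap of this kind is straightforward to sketch. The resampling details below are assumptions in the spirit of the study (not its exact Monte Carlo protocol), and the catch data are hypothetical:

```python
import random
import statistics

def rse_of_mean(sample):
    """Relative standard error (%) of the sample mean."""
    return 100 * (statistics.stdev(sample) / len(sample) ** 0.5) / statistics.mean(sample)

def series_needed(catches, target_rse, n_boot=2000, max_n=100, seed=0):
    """For increasing numbers of net series n, resample the observed
    catches with replacement n_boot times and return the smallest n whose
    average bootstrap RSE meets the target."""
    rng = random.Random(seed)
    for n in range(3, max_n + 1):
        mean_rse = statistics.fmean(
            rse_of_mean([rng.choice(catches) for _ in range(n)])
            for _ in range(n_boot))
        if mean_rse <= target_rse:
            return n
    return max_n

# hypothetical catches (fish per net series) from one reservoir
catches = [5, 8, 12, 7, 9, 15, 6, 10, 11, 8]
n_rse25 = series_needed(catches, target_rse=25)
n_rse15 = series_needed(catches, target_rse=15)
```

    As in the study, the tighter precision target always demands at least as many net series, and the required effort grows with the CV of the catches.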

  17. VAPOR PRESSURE ISOTOPE EFFECTS IN THE MEASUREMENT OF ENVIRONMENTAL TRITIUM SAMPLES.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhne, W.

    2012-12-03

    Standard procedures for the measurement of tritium in water samples often require distillation of an appropriate sample aliquot. This distillation process may result in a fractionation of tritiated water and regular light water due to the vapor pressure isotope effect, introducing either a bias or an additional contribution to the total tritium measurement uncertainty. The magnitude of the vapor pressure isotope effect is characterized as a function of the amount of water distilled from the sample aliquot and the heat settings for the distillation process. The tritium concentration in the distillate is higher than the tritium concentration in the sample early in the distillation process; it then sharply decreases due to the vapor pressure isotope effect and becomes lower than the tritium concentration in the sample, until the high tritium concentration retained in the boiling flask is evaporated at the end of the process. At that time, the tritium concentration in the distillate again overestimates the sample tritium concentration. The vapor pressure isotope effect is more pronounced the slower the evaporation and distillation process is conducted; a lower heat setting during the evaporation of the sample results in a larger bias in the tritium measurement. The experimental setup used, and the fact that the current study allowed for an investigation of the relative change in the vapor pressure isotope effect over the course of the distillation process, distinguish it from and extend previously published measurements. The separation factor as a quantitative measure of the vapor pressure isotope effect is found to assume values of 1.034 ± 0.033, 1.052 ± 0.025, and 1.066 ± 0.037, depending on the vigor of the boiling process during distillation of the sample. A lower heat setting in the experimental setup, and therefore a less vigorous boiling process, results in a larger value for the separation factor. For a tritium measurement in water samples, this implies that the tritium concentration could be underestimated by 3-6%.
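
    The qualitative behaviour and the few-percent underestimate are consistent with a textbook Rayleigh-fractionation idealization (an assumption for illustration, not the paper's measured model):

```python
def distillate_conc(alpha, f):
    """Rayleigh idealization of the distillation: with separation factor
    alpha = C_liquid / C_vapor (HTO is less volatile, so alpha > 1), the
    residual liquid follows C_l/C_0 = (1 - f)**(1/alpha - 1), and mass
    balance gives the cumulative distillate concentration relative to
    the original sample:
        C_d/C_0 = (1 - (1 - f)**(1/alpha)) / f
    for distilled fraction 0 < f < 1."""
    return (1.0 - (1.0 - f) ** (1.0 / alpha)) / f

# halfway through a distillation with alpha = 1.05, the collected
# distillate underestimates the sample concentration by roughly 3%
bias = 1.0 - distillate_conc(1.05, 0.5)
```

    In this idealization the bias vanishes as f approaches 1 (complete distillation recovers all the tritium), which is why incomplete distillations with larger alpha carry the larger underestimates.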

  18. Comparison of low-altitude wind-shear statistics derived from measured and proposed standard wind profiles

    NASA Technical Reports Server (NTRS)

    Usry, J. W.

    1983-01-01

    Wind shear statistics were calculated for a simulated set of wind profiles based on a proposed standard wind field data base and compared with statistics derived from measured wind profile data. Wind shears were grouped in altitude bands of 100 ft between 100 and 1400 ft and in wind shear increments of 0.025 knot/ft. Frequency distributions, means, and standard deviations were derived for each altitude band and for the total sample for both data sets. It was found that the frequency distributions in each altitude band for the simulated data set were more dispersed below 800 ft and less dispersed above 900 ft than those for the measured data set. Total sample frequency of occurrence for the two data sets was about equal for wind shear values between ±0.075 knot/ft, but the simulated data set had significantly larger values for all wind shears outside these boundaries. Neither data set was normally distributed; similar results are observed from the cumulative frequency distributions.

  19. Expression signature as a biomarker for prenatal diagnosis of trisomy 21.

    PubMed

    Volk, Marija; Maver, Aleš; Lovrečić, Luca; Juvan, Peter; Peterlin, Borut

    2013-01-01

    A universal biomarker panel with the potential to predict high-risk pregnancies or adverse pregnancy outcomes does not exist. Transcriptome analysis is a powerful tool to capture differentially expressed genes (DEG), which can be used as a biomarker-based diagnostic and predictive tool for various conditions in the prenatal setting. In search of a biomarker set for predicting high-risk pregnancies, we performed global expression profiling to find DEG in Ts21. Subsequently, we performed targeted validation and diagnostic performance evaluation on a larger group of case and control samples. Initially, transcriptomic profiles of 10 cultivated amniocyte samples with Ts21 and 9 with a normal euploid constitution were determined using expression microarrays. Datasets from Ts21 transcriptomic studies from the GEO repository were incorporated. DEG were discovered using linear regression modelling and validated using RT-PCR quantification on an independent sample of 16 cases with Ts21 and 32 controls. The classification of Ts21 status based on expression profiling was performed using a supervised machine learning algorithm and evaluated using a leave-one-out cross-validation approach. Global gene expression profiling revealed significant expression changes between normal and Ts21 samples, which, in combination with data from previously performed Ts21 transcriptomic studies, were used to generate a multi-gene biomarker for Ts21 comprising 9 gene expression profiles. In addition to the biomarker's high performance in discriminating samples from global expression profiling, we were also able to show its discriminatory performance on a larger second sample set, validated using an RT-PCR experiment (AUC=0.97), while its performance on data from previously published studies reached discriminatory AUC values of 1.00. Our results show that transcriptomic changes might potentially be used to discriminate trisomy of chromosome 21 in the prenatal setting. As expressional alterations reflect both causal and reactive cellular mechanisms, transcriptomic changes may thus have future potential in the diagnosis of a wide array of heterogeneous diseases that result from genetic disturbances.

  20. Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space

    PubMed Central

    Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred

    2016-01-01

    Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets, alongside the identification of corresponding genomic prediction models, for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method.
For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
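
    One simple way to cover a genetic space evenly is greedy farthest-point selection; the sketch below is an illustration of the idea, not one of the exact schemes (uniform, stratified-uniform, CD-based) compared in the paper, and it assumes genotypes are embedded as points in, e.g., a marker principal-component space:

```python
import math
import random

def farthest_point_sample(points, n, seed=0):
    """Greedy farthest-point selection: start from one genotype and
    repeatedly add the genotype whose nearest already-chosen neighbour
    is farthest away, so the chosen set spreads across the space."""
    rng = random.Random(seed)
    chosen = [rng.randrange(len(points))]
    while len(chosen) < n:
        remaining = (i for i in range(len(points)) if i not in chosen)
        chosen.append(max(remaining,
                          key=lambda i: min(math.dist(points[i], points[c])
                                            for c in chosen)))
    return chosen

# toy "genetic space": ten genotypes along one axis; pick three that spread out
pts = [(float(i), 0.0) for i in range(10)]
picked = farthest_point_sample(pts, 3)
```

    Under population structure, random sampling tends to over-pick dense subpopulations, whereas a coverage-driven rule like this one keeps sparse regions represented in the training set.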

  1. Performance evaluation of the Abbott CELL-DYN Emerald for use as a bench-top analyzer in a research setting.

    PubMed

    Khoo, T-L; Xiros, N; Guan, F; Orellana, D; Holst, J; Joshua, D E; Rasko, J E J

    2013-08-01

    The CELL-DYN Emerald is a compact bench-top hematology analyzer that can be used for a three-part white cell differential analysis. To determine its utility for analysis of human and mouse samples, we evaluated this machine against the larger CELL-DYN Sapphire and Sysmex XT2000iV hematology analyzers. 120 human (normal and abnormal) and 30 mouse (normal and abnormal) samples were analyzed on both the CELL-DYN Emerald and CELL-DYN Sapphire or Sysmex XT2000iV analyzers. For mouse samples, the CELL-DYN Emerald analyzer required manual recalibration based on the histogram populations. Analysis of the CELL-DYN Emerald showed excellent precision, within accepted ranges (white cell count CV% = 2.09%; hemoglobin CV% = 1.68%; platelets CV% = 4.13%). Linearity was excellent (R² ≥ 0.99), carryover was minimal (<1%), and overall interinstrument agreement was acceptable for both human and mouse samples. Comparison between the CELL-DYN Emerald and Sapphire analyzers for human samples or Sysmex XT2000iV analyzer for mouse samples showed excellent correlation for all parameters. The CELL-DYN Emerald was generally comparable to the larger reference analyzer for both human and mouse samples. It would be suitable for use in satellite research laboratories or as a backup system in larger laboratories. © 2012 John Wiley & Sons Ltd.

  2. A scalable kernel-based semisupervised metric learning algorithm with out-of-sample generalization ability.

    PubMed

    Yeung, Dit-Yan; Chang, Hong; Dai, Guang

    2008-11-01

    In recent years, metric learning in the semisupervised setting has aroused a lot of research interest. One type of semisupervised metric learning utilizes supervisory information in the form of pairwise similarity or dissimilarity constraints. However, most methods proposed so far are either limited to linear metric learning or unable to scale well with the data set size. In this letter, we propose a nonlinear metric learning method based on the kernel approach. By applying low-rank approximation to the kernel matrix, our method can handle significantly larger data sets. Moreover, our low-rank approximation scheme can naturally lead to out-of-sample generalization. Experiments performed on both artificial and real-world data show very promising results.
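
    One standard realization of a low-rank kernel approximation with out-of-sample extension, as described above, is the Nystrom method; the sketch below is a generic illustration, not the authors' algorithm. Because the feature map only evaluates the kernel against a fixed set of landmark points, it applies unchanged to new data points.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, landmarks, gamma=1.0):
    """Low-rank kernel feature map F with F @ F.T ~ K(X, X).

    Works for new (out-of-sample) points because it only needs
    kernel evaluations against the fixed landmark set.
    """
    W = rbf(landmarks, landmarks, gamma)           # (m, m) landmark kernel
    evals, evecs = np.linalg.eigh(W)
    evals = np.clip(evals, 1e-12, None)            # guard tiny eigenvalues
    return rbf(X, landmarks, gamma) @ (evecs / np.sqrt(evals))
```

    With m landmarks the cost per point is O(m) kernel evaluations instead of O(n), which is what makes such schemes scale to significantly larger data sets.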

  3. Local X-ray Computed Tomography Imaging for Mineralogical and Pore Characterization

    NASA Astrophysics Data System (ADS)

    Mills, G.; Willson, C. S.

    2015-12-01

    Sample size, material properties, and image resolution are all tradeoffs that must be considered when imaging porous media samples with X-ray computed tomography. In many natural and engineered samples, pore and throat sizes span several orders of magnitude and are often correlated with the material composition. Local tomography is a nondestructive technique that images a subvolume, within a larger specimen, at high resolution and uses low-resolution tomography data from the larger specimen to reduce reconstruction error. The high-resolution subvolume data can be used to extract important fine-scale properties, but the additional noise associated with the truncated dataset makes segmentation of different materials and mineral phases a challenge. The low-resolution data of a larger specimen are typically of much higher quality, making material characterization much easier. In addition, imaging a larger domain allows mm-scale bulk properties and heterogeneities to be determined. In this research, a sandstone core, 7 mm in diameter and ~15 mm in length, was scanned twice. The first scan covered the entire diameter and length of the specimen at an image voxel resolution of 4.1 μm. The second scan was performed on a subvolume, ~1.3 mm in length and ~2.1 mm in diameter, at an image voxel resolution of 1.08 μm. After image processing and segmentation, the pore network structure and mineralogical features were extracted from the low-resolution dataset. Due to the noise in the truncated high-resolution dataset, several image processing approaches were applied prior to image segmentation and extraction of the pore network structure and mineralogy. Results from the different truncated tomography segmented data sets are compared to each other to evaluate the potential of each approach in identifying the different solid phases from the original 16 bit data set. 
The truncated tomography segmented data sets were also compared to the whole-core tomography segmented data set in two ways: (1) assessment of the porosity and pore size distribution at different scales; and (2) comparison of the mineralogical composition and distribution. Finally, registration of the two datasets will be used to show how the pore structure and mineralogy details at the two scales can be used to supplement each other.

  4. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    PubMed

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
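
    The two sampling schemes being compared can be sketched in a few lines (a simplified illustration using lexicographic minimizers and ignoring reverse complements; real pipelines hash k-mers instead of storing strings):

```python
def fixed_sample(seq, k, w):
    """Fixed sampling: index every w-th k-mer starting position."""
    return {seq[i:i + k] for i in range(0, len(seq) - k + 1, w)}

def minimizer_sample(seq, k, w):
    """Minimizer sampling: index the lexicographically smallest k-mer
    in every window of w consecutive k-mers (a basic minimizer scheme)."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    return {min(kmers[i:i + w]) for i in range(len(kmers) - w + 1)}
```

    Fixed sampling guarantees an index of about |seq|/w positions; the minimizer scheme usually keeps more, but its window guarantee is what lets the same sampling be applied to query k-mers.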

  5. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    PubMed Central

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  6. Analysis of Duplicated Multiple-Samples Rank Data Using the Mack-Skillings Test.

    PubMed

    Carabante, Kennet Mariano; Alonso-Marenco, Jose Ramon; Chokumnoyporn, Napapan; Sriwattana, Sujinda; Prinyawiwatkul, Witoon

    2016-07-01

    Appropriate analysis for duplicated multiple-samples rank data is needed. This study compared analysis of duplicated rank preference data using the Friedman versus Mack-Skillings tests. Panelists (n = 125) ranked 2 orange juice sets twice: a different-samples set (100%, 70%, vs. 40% juice) and a similar-samples set (100%, 95%, vs. 90%). These 2 sample sets were designed to give contrasting differences in preference. For each sample set, rank sum data were obtained from (1) averaged rank data of each panelist from the 2 replications (n = 125), (2) rank data of all panelists from each of the 2 separate replications (n = 125 each), (3) combined rank data of all panelists from the 2 replications (n = 125), and (4) rank data of all panelists pooled from the 2 replications (n = 250); rank data (1), (2), and (4) were separately analyzed by the Friedman test, whereas those from (3) were analyzed by the Mack-Skillings test. The effect of sample sizes (n = 10 to 125) was evaluated. For the similar-samples set, higher variations in rank data from the 2 replications were observed; therefore, results of the main effects were more inconsistent among methods and sample sizes. Regardless of analysis method, the larger the sample size, the higher the χ² value and the lower the P-value (testing H0: all samples are not different). Analyzing rank data (2) separately by replication yielded inconsistent conclusions across sample sizes; hence, this method is not recommended. The Mack-Skillings test was more sensitive than the Friedman test. Furthermore, it takes into account within-panelist variations and is more appropriate for analyzing duplicated rank data. © 2016 Institute of Food Technologists®
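
    For reference, the Friedman statistic applied to rank arrangements such as (1), (2), and (4) can be computed as below (a minimal sketch for untied within-panelist ranks; the Mack-Skillings generalization to replicated data is not shown). The formula is χ² = 12/(n·k·(k+1)) · Σ R_j² − 3n(k+1), where R_j is the rank sum of sample j over n panelists.

```python
import numpy as np

def friedman_chi2(ranks):
    """Friedman chi-square for an (n_panelists, k_samples) array of
    within-panelist ranks 1..k (assumes no ties)."""
    n, k = ranks.shape
    rank_sums = ranks.sum(axis=0).astype(float)    # R_j per sample
    return 12.0 / (n * k * (k + 1)) * (rank_sums ** 2).sum() - 3.0 * n * (k + 1)
```

    The statistic is 0 when rank sums are equal (no preference) and reaches its maximum n(k−1) under perfect agreement, which is why larger panels drive χ² up when a real preference exists.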

  7. Lunar Samples: Apollo Collection Tools, Curation Handling, Surveyor III and Soviet Luna Samples

    NASA Technical Reports Server (NTRS)

    Allton, J.H.

    2009-01-01

    The 6 Apollo missions that landed on the lunar surface returned 2196 samples with a total mass of 382 kg. The 58 samples weighing 21.5 kg collected on Apollo 11 expanded to 741 samples weighing 110.5 kg by the time of Apollo 17. The main goal on Apollo 11 was to obtain some material and return it safely to Earth. As we gained experience, the sampling tools and a more specific sampling strategy evolved. A summary of the sample types returned is shown in Table 1. By year 1989, some statistics on allocation by sample type were compiled [2]. The "scientific interest index" is based on the assumption that the more allocations per gram of sample, the higher the scientific interest. It is basically a reflection of the amount of diversity within a given sample type. Samples were also set aside for biohazard testing. The samples set aside and used for biohazard testing were representative, as opposed to diverse. They tended to be larger and to be composed of less scientifically valuable material, such as dust and debris in the bottom of sample containers.

  8. Experimental investigation of the hydraulic and heat-transfer properties of artificially fractured granite.

    PubMed

    Luo, Jin; Zhu, Yongqiang; Guo, Qinghai; Tan, Long; Zhuang, Yaqin; Liu, Mingliang; Zhang, Canhai; Xiang, Wei; Rohn, Joachim

    2017-01-05

    In this paper, the hydraulic and heat-transfer properties of two sets of artificially fractured granite samples are investigated. First, the morphological information is determined using 3D modelling technology. The area ratio is used to describe the roughness of the fracture surface. Second, the hydraulic properties of fractured granite are tested by exposing samples to different confining pressures and temperatures. The results show that the hydraulic properties of the fractures are affected mainly by the area ratio, with a larger area ratio producing a larger fracture aperture and higher hydraulic conductivity. Both the hydraulic aperture and the hydraulic conductivity decrease with an increase in the confining pressure. Furthermore, the fracture aperture decreases with increasing rock temperature, but the hydraulic conductivity increases owing to a reduction in the viscosity of the fluid flowing through. Finally, the heat-transfer efficiency of the samples under coupled hydro-thermal-mechanical conditions is analysed and discussed.

  9. Experimental investigation of the hydraulic and heat-transfer properties of artificially fractured granite

    PubMed Central

    Luo, Jin; Zhu, Yongqiang; Guo, Qinghai; Tan, Long; Zhuang, Yaqin; Liu, Mingliang; Zhang, Canhai; Xiang, Wei; Rohn, Joachim

    2017-01-01

    In this paper, the hydraulic and heat-transfer properties of two sets of artificially fractured granite samples are investigated. First, the morphological information is determined using 3D modelling technology. The area ratio is used to describe the roughness of the fracture surface. Second, the hydraulic properties of fractured granite are tested by exposing samples to different confining pressures and temperatures. The results show that the hydraulic properties of the fractures are affected mainly by the area ratio, with a larger area ratio producing a larger fracture aperture and higher hydraulic conductivity. Both the hydraulic aperture and the hydraulic conductivity decrease with an increase in the confining pressure. Furthermore, the fracture aperture decreases with increasing rock temperature, but the hydraulic conductivity increases owing to a reduction in the viscosity of the fluid flowing through. Finally, the heat-transfer efficiency of the samples under coupled hydro-thermal-mechanical conditions is analysed and discussed. PMID:28054594

  10. Sampling Based Influence Maximization on Linear Threshold Model

    NASA Astrophysics Data System (ADS)

    Jia, Su; Chen, Ling

    2018-04-01

    A sampling-based influence maximization method for the linear threshold (LT) model is presented. The method samples the routes in the possible worlds of the social network, and uses the Chernoff bound to estimate the number of samples needed so that the error is constrained within a given bound. Then the activation probabilities of the routes in the possible worlds are calculated and used to compute the influence spread of each node in the network. Our experimental results show that our method can effectively select appropriate seed node sets that spread larger influence than those selected by other similar methods.
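
    The spread-estimation step can be sketched as a Monte Carlo simulation of the LT model (an illustration, not the paper's route-sampling algorithm; `graph` maps each node to its in-neighbors, and incoming edge weights per node are assumed to sum to at most 1). A Chernoff-style bound of the form n ≥ 3·ln(2/δ)/ε² is what typically fixes the number of simulations; here it is simply a parameter.

```python
import random

def lt_spread(graph, weights, seeds, n_sim=1000, rng=None):
    """Monte Carlo estimate of expected influence spread under the
    linear threshold model.

    graph:   {node: [in-neighbors]}
    weights: {(u, v): w} with incoming weights per node summing to <= 1
    """
    rng = rng or random.Random(0)
    nodes = list(graph)
    total = 0
    for _ in range(n_sim):
        theta = {v: rng.random() for v in nodes}   # fresh thresholds per world
        active = set(seeds)
        changed = True
        while changed:                             # propagate until stable
            changed = False
            for v in nodes:
                if v in active:
                    continue
                s = sum(weights.get((u, v), 0.0) for u in graph[v] if u in active)
                if s >= theta[v]:
                    active.add(v)
                    changed = True
        total += len(active)
    return total / n_sim
```

    A greedy seed-selection loop would call this estimator once per candidate node, which is exactly why bounding the number of simulations matters.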

  11. Fluorescence Excitation Spectroscopy for Phytoplankton Species Classification Using an All-Pairs Method: Characterization of a System with Unexpectedly Low Rank.

    PubMed

    Rekully, Cameron M; Faulkner, Stefan T; Lachenmyer, Eric M; Cunningham, Brady R; Shaw, Timothy J; Richardson, Tammi L; Myrick, Michael L

    2018-03-01

    An all-pairs method is used to analyze phytoplankton fluorescence excitation spectra. An initial set of nine phytoplankton species is analyzed in pairwise fashion to select two optical filter sets, and then the two filter sets are used to explore variations among a total of 31 species in a single-cell fluorescence imaging photometer. Results are presented in terms of pair analyses; we report that 411 of the 465 possible pairings of the larger group of 31 species can be distinguished using the initial nine-species-based selection of optical filters. A bootstrap analysis based on the larger data set shows that the distribution of possible pair separation results based on a randomly selected nine-species initial calibration set is strongly peaked in the 410-415 pair separation range, consistent with our experimental result. Further, the result for filter selection using all 31 species is also 411 pair separations. The set of phytoplankton fluorescence excitation spectra is intuitively high in rank due to the number and variety of pigments that contribute to the spectrum. However, the results in this report are consistent with an effective rank, as determined by a variety of heuristic and statistical methods, in the range of 2-3. These results are reviewed in consideration of how consistent the filter selections are from model to model for the data presented here. We discuss the common observation that rank is generally found to be relatively low even in many seemingly complex circumstances, so that it may be productive to assume a low rank from the beginning. If a low-rank hypothesis is valid, then relatively few samples are needed to explore an experimental space. Under very restricted circumstances for uniformly distributed samples, the minimum number for an initial analysis might be as low as 8-11 random samples for 1-3 factors.
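
    One common heuristic for the effective rank discussed above is the number of singular components needed to capture a fixed fraction of the variance of the spectra matrix (a generic sketch, not the authors' specific statistical methods):

```python
import numpy as np

def effective_rank(spectra, var_explained=0.99):
    """Smallest number of singular components capturing the given
    fraction of total variance in an (n_samples, n_wavelengths) matrix."""
    X = spectra - spectra.mean(axis=0)             # center each wavelength
    s = np.linalg.svd(X, compute_uv=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)      # cumulative variance
    return int(np.searchsorted(frac, var_explained) + 1)
```

    When this number comes out at 2-3, as reported here, a handful of well-chosen samples already spans most of the experimental space.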

  12. Ecoinformatics (Big Data) for Agricultural Entomology: Pitfalls, Progress, and Promise.

    PubMed

    Rosenheim, Jay A; Gratton, Claudio

    2017-01-31

    Ecoinformatics, as defined in this review, is the use of preexisting data sets to address questions in ecology. We provide the first review of ecoinformatics methods in agricultural entomology. Ecoinformatics methods have been used to address the full range of questions studied by agricultural entomologists, enabled by the special opportunities associated with data sets, nearly all observational, that are larger and more diverse and that embrace larger spatial and temporal scales than most experimental studies do. We argue that ecoinformatics research methods and traditional, experimental research methods have strengths and weaknesses that are largely complementary. We address the important interpretational challenges associated with observational data sets, highlight common pitfalls, and propose some best practices for researchers using these methods. Ecoinformatics methods hold great promise as a vehicle for capitalizing on the explosion of data emanating from farmers, researchers, and the public, as novel sampling and sensing techniques are developed and digital data sharing becomes more widespread.

  13. Improved imputation accuracy in Hispanic/Latino populations with larger and more diverse reference panels: applications in the Hispanic Community Health Study/Study of Latinos (HCHS/SOL)

    PubMed Central

    Nelson, Sarah C.; Stilp, Adrienne M.; Papanicolaou, George J.; Taylor, Kent D.; Rotter, Jerome I.; Thornton, Timothy A.; Laurie, Cathy C.

    2016-01-01

    Imputation is commonly used in genome-wide association studies to expand the set of genetic variants available for analysis. Larger and more diverse reference panels, such as the final Phase 3 of the 1000 Genomes Project, hold promise for improving imputation accuracy in genetically diverse populations such as Hispanics/Latinos in the USA. Here, we sought to empirically evaluate imputation accuracy when imputing to a 1000 Genomes Phase 3 versus a Phase 1 reference, using participants from the Hispanic Community Health Study/Study of Latinos. Our assessments included calculating the correlation between imputed and observed allelic dosage in a subset of samples genotyped on a supplemental array. We observed that the Phase 3 reference yielded higher accuracy at rare variants, but that the two reference panels were comparable at common variants. At a sample level, the Phase 3 reference improved imputation accuracy in Hispanic/Latino samples from the Caribbean more than for Mainland samples, which we attribute primarily to the additional reference panel samples available in Phase 3. We conclude that a 1000 Genomes Project Phase 3 reference panel can yield improved imputation accuracy compared with Phase 1, particularly for rare variants and for samples of certain genetic ancestry compositions. Our findings can inform imputation design for other genome-wide association studies of participants with diverse ancestries, especially as larger and more diverse reference panels continue to become available. PMID:27346520

  14. Ensemble coding of face identity is present but weaker in congenital prosopagnosia.

    PubMed

    Robson, Matthew K; Palermo, Romina; Jeffery, Linda; Neumann, Markus F

    2018-03-01

    Individuals with congenital prosopagnosia (CP) are impaired at identifying individual faces but do not appear to show impairments in extracting the average identity from a group of faces (known as ensemble coding). However, possible deficits in ensemble coding in a previous study (CPs n = 4) may have been masked because CPs relied on pictorial (image) cues rather than identity cues. Here we asked whether a larger sample of CPs (n = 11) would show intact ensemble coding of identity when availability of image cues was minimised. Participants viewed a "set" of four faces and then judged whether a subsequent individual test face, either an exemplar or a "set average", was in the preceding set. Ensemble coding occurred when matching (vs. mismatching) averages were mistakenly endorsed as set members. We assessed both image- and identity-based ensemble coding, by varying whether test faces were either the same or different images of the identities in the set. CPs showed significant ensemble coding in both tasks, indicating that their performance was independent of image cues. As a group, CPs' ensemble coding was weaker than that of controls in both tasks, consistent with evidence that perceptual processing of face identity is disrupted in CP. This effect was driven by CPs (n = 3) who, in addition to having impaired face memory, also performed particularly poorly on a measure of face perception (CFPT). Future research, using larger samples, should examine whether deficits in ensemble coding may be restricted to CPs who also have substantial face perception deficits. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Compassion Fatigue, Compassion Satisfaction, and Burnout: Factors Impacting a Professional's Quality of Life

    ERIC Educational Resources Information Center

    Sprang, Ginny; Whitt-Woosley, Adrienne; Clark, James J.

    2007-01-01

    This study examined the relationship between three variables, compassion fatigue (CF), compassion satisfaction (CS), and burnout, and provider and setting characteristics in a sample of 1,121 mental health providers in a rural southern state. Respondents completed the Professional Quality of Life Scale as part of a larger survey of provider…

  16. SABRE: a method for assessing the stability of gene modules in complex tissues and subject populations.

    PubMed

    Shannon, Casey P; Chen, Virginia; Takhar, Mandeep; Hollander, Zsuzsanna; Balshaw, Robert; McManus, Bruce M; Tebbutt, Scott J; Sin, Don D; Ng, Raymond T

    2016-11-14

    Gene network inference (GNI) algorithms can be used to identify sets of coordinately expressed genes, termed network modules, from whole transcriptome gene expression data. The identification of such modules has become a popular approach to systems biology, with important applications in translational research. Although diverse computational and statistical approaches have been devised to identify such modules, their performance behavior is still not fully understood, particularly in complex human tissues. Given human heterogeneity, one important question is how sensitive the outputs of these computational methods are to the input sample set, that is, their stability. A related question is how this sensitivity depends on the size of the sample set. We describe here the SABRE (Similarity Across Bootstrap RE-sampling) procedure for assessing the stability of gene network modules using a re-sampling strategy, introduce a novel criterion for identifying stable modules, and demonstrate the utility of this approach in a clinically relevant cohort, using two different gene network module discovery algorithms. The stability of modules increased as sample size increased, and stable modules were more likely to be replicated in larger sets of samples. Random modules derived from permuted gene expression data were consistently unstable, as assessed by SABRE, and provide a useful baseline value for our proposed stability criterion. Gene module sets identified by different algorithms varied with respect to their stability, as assessed by SABRE. Finally, stable modules were more readily annotated in various curated gene set databases. The SABRE procedure and proposed stability criterion may provide guidance when designing systems biology studies in complex human disease and tissues.
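
    The bootstrap re-sampling idea can be sketched generically (illustrative only; SABRE's actual similarity criterion is more elaborate than the plain Jaccard score used here, and `discover` stands in for any module-discovery algorithm):

```python
import random

def jaccard(a, b):
    """Jaccard similarity between two gene sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def module_stability(samples, discover, n_boot=50, rng=None):
    """Mean Jaccard similarity between a module discovered on the full
    sample set and modules discovered on bootstrap re-samples.

    discover: callable mapping a list of sample IDs to a set of genes.
    """
    rng = rng or random.Random(0)
    reference = discover(samples)
    scores = []
    for _ in range(n_boot):
        boot = [rng.choice(samples) for _ in samples]   # resample with replacement
        scores.append(jaccard(reference, discover(boot)))
    return sum(scores) / len(scores)
```

    A stability score near 1 means the module barely changes under re-sampling; permuted data should drive it toward the baseline, as the abstract describes.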

  17. Study of the Effect of Temporal Sampling Frequency on DSCOVR Observations Using the GEOS-5 Nature Run Results. Part II; Cloud Coverage

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Yang, Yuekui

    2016-01-01

    This is the second part of a study on how temporal sampling frequency affects satellite retrievals in support of the Deep Space Climate Observatory (DSCOVR) mission. Continuing from Part 1, which looked at Earth's radiation budget, this paper presents the effect of sampling frequency on DSCOVR-derived cloud fraction. The output from NASA's Goddard Earth Observing System version 5 (GEOS-5) Nature Run is used as the "truth". The effect of temporal resolution on potential DSCOVR observations is assessed by subsampling the full Nature Run data. A set of metrics, including uncertainty and absolute error in the subsampled time series, correlation between the original and the subsamples, and Fourier analysis have been used for this study. Results show that, for a given sampling frequency, the uncertainties in the annual mean cloud fraction of the sunlit half of the Earth are larger over land than over ocean. Analysis of correlation coefficients between the subsamples and the original time series demonstrates that even though sampling at certain longer time intervals may not increase the uncertainty in the mean, the subsampled time series is further and further away from the "truth" as the sampling interval becomes larger and larger. Fourier analysis shows that the simulated DSCOVR cloud fraction has underlying periodical features at certain time intervals, such as 8, 12, and 24 h. If the data is subsampled at these frequencies, the uncertainties in the mean cloud fraction are higher. These results provide helpful insights for the DSCOVR temporal sampling strategy.

  18. Determination of thickness of thin turbid painted over-layers using micro-scale spatially offset Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Conti, Claudia; Realini, Marco; Colombo, Chiara; Botteon, Alessandra; Bertasa, Moira; Striova, Jana; Barucci, Marco; Matousek, Pavel

    2016-12-01

    We present a method for estimating the thickness of thin turbid layers using defocusing micro-spatially offset Raman spectroscopy (micro-SORS). The approach, applicable to highly turbid systems, enables one to predict depths in excess of those accessible with conventional Raman microscopy. The technique can be used, for example, to establish the paint layer thickness on cultural heritage objects, such as panel canvases, mural paintings, painted statues and decorated objects. Other applications include analysis in polymer, biological and biomedical disciplines, catalytic and forensics sciences where highly turbid overlayers are often present and where invasive probing may not be possible or is undesirable. The method comprises two stages: (i) a calibration step for training the method on a well characterized sample set with a known thickness, and (ii) a prediction step where the prediction of layer thickness is carried out non-invasively on samples of unknown thickness of the same chemical and physical make up as the calibration set. An illustrative example of a practical deployment of this method is the analysis of larger areas of paintings. In this case, first, a calibration would be performed on a fragment of painting of a known thickness (e.g. derived from cross-sectional analysis) and subsequently the analysis of thickness across larger areas of painting could then be carried out non-invasively. The performance of the method is compared with that of the more established optical coherence tomography (OCT) technique on an identical sample set. This article is part of the themed issue "Raman spectroscopy in art and archaeology".

  19. Functional Analysis of Metabolomics Data.

    PubMed

    Chagoyen, Mónica; López-Ibáñez, Javier; Pazos, Florencio

    2016-01-01

    Metabolomics aims at characterizing the repertoire of small chemical compounds in a biological sample. As the data become more massive and larger sets of compounds are detected, a functional analysis is required to convert these raw lists of compounds into biological knowledge. The most common way of performing such analysis is "annotation enrichment analysis," also used in transcriptomics and proteomics. This approach extracts the annotations overrepresented in the set of chemical compounds arising in a given experiment. Here, we describe the protocols for performing such analysis as well as for visualizing a set of compounds in different representations of the metabolic networks, in both cases using freely accessible web tools.
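
    Annotation enrichment analysis typically reduces to a one-sided hypergeometric test per annotation: how surprising is it that so many of the detected compounds carry a given annotation? A minimal sketch (function and parameter names are illustrative):

```python
from math import comb

def enrichment_p(n_annotated_hits, n_hits, n_annotated, n_universe):
    """One-sided hypergeometric P-value that at least n_annotated_hits of
    the n_hits detected compounds carry the annotation, if the hits were
    drawn at random from a universe of n_universe compounds of which
    n_annotated carry the annotation."""
    p = 0.0
    for k in range(n_annotated_hits, min(n_hits, n_annotated) + 1):
        p += comb(n_annotated, k) * comb(n_universe - n_annotated, n_hits - k)
    return p / comb(n_universe, n_hits)
```

    In practice each annotation is tested this way and the resulting P-values are corrected for multiple testing before reporting enriched terms.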

  20. Reported Use of and Satisfaction with Vocational Rehabilitation Services among Lesbian, Gay, Bisexual, and Transgender Persons

    ERIC Educational Resources Information Center

    Dispenza, Franco; Hunter, Tameeka

    2015-01-01

    Purpose: Reported use of and satisfaction rates of vocational rehabilitation (VR) services among a small sample of lesbian, gay, bisexual, and transgender (LGBT) persons living with various chronic illness and disability (CID) conditions in the United States were explored. Method: Data were pulled from a larger data set that was collected via the…

  1. Sampling Practices and Social Spaces: Exploring a Hip-Hop Approach to Higher Education

    ERIC Educational Resources Information Center

    Petchauer, Emery

    2010-01-01

    Much more than a musical genre, hip-hop culture exists as an animating force in the lives of many young adults. This article looks beyond the moral concerns often associated with rap music to explore how hip-hop as a larger set of expressions and practices implicates the educational experiences, activities, and approaches for students. The article…

  2. A New Electromagnetic Instrument for Thickness Gauging of Conductive Materials

    NASA Technical Reports Server (NTRS)

    Fulton, J. P.; Wincheski, B.; Nath, S.; Reilly, J.; Namkung, M.

    1994-01-01

    Eddy current techniques are widely used to measure the thickness of electrically conducting materials. The approach, however, requires an extensive set of calibration standards and can be quite time consuming to set up and perform. Recently, an electromagnetic sensor was developed which eliminates the need for impedance measurements. The ability to monitor the magnitude of a voltage output independent of the phase enables the use of extremely simple instrumentation. Using this new sensor a portable hand-held instrument was developed. The device makes single point measurements of the thickness of nonferromagnetic conductive materials. The technique utilized by this instrument requires calibration with two samples of known thicknesses that are representative of the upper and lower thickness values to be measured. The accuracy of the instrument depends upon the calibration range, with a larger range giving a larger error. The measured thicknesses are typically within 2-3% of the calibration range (the difference between the thin and thick sample) of their actual values. In this paper the design, operational and performance characteristics of the instrument along with a detailed description of the thickness gauging algorithm used in the device are presented.
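
    The two-point calibration described above can be sketched as a linear interpolation between the thin and thick reference samples (an illustrative simplification; the real sensor response need not be linear, which is consistent with the observation that a wider calibration range gives a larger error):

```python
def calibrate(v_thin, t_thin, v_thick, t_thick):
    """Return a function mapping sensor output voltage to thickness,
    linearly interpolated between two reference samples of known
    thickness (t_thin, t_thick) and measured voltage (v_thin, v_thick)."""
    slope = (t_thick - t_thin) / (v_thick - v_thin)
    return lambda v: t_thin + slope * (v - v_thin)
```

    With a nonlinear true response, the linear fit deviates most between the two anchor points, so spreading the anchors farther apart enlarges the worst-case error, in line with the quoted 2-3% of calibration range.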

  3. Cryobiopsy: Should This Be Used in Place of Endobronchial Forceps Biopsies?

    PubMed Central

    Rubio, Edmundo R.; le, Susanti R.; Whatley, Ralph E.; Boyd, Michael B.

    2013-01-01

    Forceps biopsies of airway lesions have variable yields. The yield increases when combining techniques in order to collect more material. With the use of cryotherapy probes (cryobiopsy), larger specimens can be obtained, resulting in an increase in the diagnostic yield. However, the utility and safety of cryobiopsy with all types of lesions, including flat mucosal lesions, is not established. Aims. Demonstrate the utility/safety of cryobiopsy versus forceps biopsy to sample exophytic and flat airway lesions. Settings and Design. Teaching hospital-based retrospective analysis. Methods. Retrospective analysis of patients undergoing cryobiopsies (singly or combined with forceps biopsies) from August 2008 through August 2010. Statistical Analysis. Wilcoxon signed-rank test. Results. The comparative analysis of 22 patients with cryobiopsy and forceps biopsy of the same lesion showed the mean volumes of material obtained with cryobiopsy were significantly larger (0.696 cm3 versus 0.0373 cm3, P = 0.0014). Of 31 cryobiopsies performed, one had minor bleeding. Cryobiopsy allowed sampling of exophytic and flat lesions that were located centrally or distally. Cryobiopsies were shown to be safe and free of artifact, and provided a diagnostic yield of 96.77%. Conclusions. Cryobiopsy allows safe sampling of exophytic and flat airway lesions, with larger specimens, excellent tissue preservation and high diagnostic accuracy. PMID:24066296

  4. Evaluating information content of SNPs for sample-tagging in re-sequencing projects.

    PubMed

    Hu, Hao; Liu, Xiang; Jin, Wenfei; Hilger Ropers, H; Wienker, Thomas F

    2015-05-15

    Sample-tagging is designed for identification of accidental sample mix-up, which is a major issue in re-sequencing studies. In this work, we develop a model to measure the information content of SNPs, so that we can optimize a panel of SNPs that approach the maximal information for discrimination. The analysis shows that as few as 60 optimized SNPs can differentiate the individuals in a population as large as the present world, and only 30 optimized SNPs are in practice sufficient for labeling up to 100 thousand individuals. In the simulated populations of 100 thousand individuals, the average Hamming distances generated by the optimized set of 30 SNPs are larger than 18, and the duality frequency is lower than 1 in 10 thousand. This strategy of sample discrimination proved robust for large sample sizes and across different datasets. The optimized sets of SNPs are designed for Whole Exome Sequencing, and a program is provided for SNP selection, allowing for customized SNP numbers and interested genes. The sample-tagging plan based on this framework will improve re-sequencing projects in terms of reliability and cost-effectiveness.
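    The pairwise Hamming-distance idea above can be sketched on simulated genotypes. This toy uses random, unoptimized biallelic SNPs coded 0/1/2 (the paper's optimized panels would give larger distances); the panel size and population are hypothetical:

```python
import itertools
import random

random.seed(0)

def hamming(a, b):
    """Number of positions at which two SNP genotype vectors differ."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical illustration: 30 biallelic SNPs, 200 individuals
panel = [[random.choice([0, 1, 2]) for _ in range(30)] for _ in range(200)]
dists = [hamming(a, b) for a, b in itertools.combinations(panel, 2)]

print(sum(dists) / len(dists))      # mean pairwise Hamming distance
print(dists.count(0) / len(dists))  # "duality" (collision) frequency
```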

  5. Moderate resolution spectrophotometry of high redshift quasars

    NASA Technical Reports Server (NTRS)

    Schneider, Donald P.; Schmidt, Maarten; Gunn, James E.

    1991-01-01

    A uniform set of photometry and high signal-to-noise moderate resolution spectroscopy of 33 quasars with redshifts larger than 3.1 is presented. The sample consists of 17 newly discovered quasars (two with redshifts in excess of 4.4) and 16 sources drawn from the literature. The objects in this sample have r magnitudes between 17.4 and 21.4; their luminosities range from -28.8 to -24.9. Three of the 33 objects are broad absorption line quasars. A number of possible high redshift damped Ly-alpha systems were found.

  6. Modeling ultrasound propagation through material of increasing geometrical complexity.

    PubMed

    Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen

    2018-06-01

    Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
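    Wiener deconvolution, used above to predict the ultrasound acoustic response, can be sketched on synthetic 1-D signals. Only the filter form conj(P)/(|P|^2 + 1/SNR) is the standard technique; the pulse shape, reflection positions, and SNR below are hypothetical:

```python
import numpy as np

def wiener_deconvolve(measured, pulse, snr=100.0):
    """Estimate the system response h from measured = h * pulse (+ noise)
    via Wiener deconvolution in the frequency domain."""
    n = len(measured)
    P = np.fft.fft(pulse, n)       # zero-padded pulse spectrum
    M = np.fft.fft(measured, n)
    # Wiener filter: conj(P) / (|P|^2 + 1/SNR)
    G = np.conj(P) / (np.abs(P) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(M * G))

# Synthetic check: a known two-reflection response convolved with a short pulse
pulse = np.exp(-np.linspace(0, 5, 32)) * np.sin(np.linspace(0, 20, 32))
h_true = np.zeros(128)
h_true[10] = 1.0
h_true[40] = 0.5                   # hypothetical reflections
measured = np.convolve(h_true, pulse)[:128]
h_est = wiener_deconvolve(measured, pulse, snr=1e6)
print(int(np.argmax(h_est)))       # strongest recovered reflection, near index 10
```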

  7. Geologic setting and petrology of Apollo 15 anorthosite /15415/.

    NASA Technical Reports Server (NTRS)

    Wilshire, H. G.; Schaber, G. G.; Jackson, E. D.; Silver, L. T.; Phinney, W. C.

    1972-01-01

    The geological setting, petrography and history of this Apollo 15 lunar rock sample are discussed, characterizing the sample as coarse-grained anorthosite composed largely of calcic plagioclase with small amounts of three pyroxene phases. The presence of shattered and granulated minerals in the texture of the rock is traced to two or more fragmentation events, and the presence of irregular bands of coarsely recrystallized plagioclase and minor pyroxene crossing larger plagioclase grains is traced to an earlier thermal metamorphic event. It is pointed out that any of these events may have affected apparent radiometric ages of elements in this rock. A comparative summarization of data suggests that this rock is the least-deformed member of a suite of similar rocks ejected from beneath the regolith at Spur crater.

  8. Standard Specimen Reference Set: Pancreatic — EDRN Public Portal

    Cancer.gov

    The primary objective of the EDRN Pancreatic Cancer Working Group Proposal is to create a reference set consisting of well-characterized serum/plasma specimens to use as a resource for the development of biomarkers for the early detection of pancreatic adenocarcinoma. The testing of biomarkers on the same sample set permits direct comparison among them, thereby allowing the development of a biomarker panel that can be evaluated in a future validation study. Additionally, the establishment of an infrastructure with core data elements and standardized operating procedures for specimen collection, processing, and storage will provide the necessary preparatory platform for larger validation studies when the appropriate marker/panel for pancreatic adenocarcinoma has been identified.

  9. The topology of large-scale structure. III - Analysis of observations

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.; Weinberg, David H.; Gammie, Charles; Polk, Kevin; Vogeley, Michael; Jeffrey, Scott; Bhavsar, Suketu P.; Melott, Adrian L.; Giovanelli, Riccardo; Haynes, Martha P.; Tully, R. Brent; Hamilton, Andrew J. S.

    1989-05-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.

  10. The topology of large-scale structure. III - Analysis of observations. [in universe

    NASA Technical Reports Server (NTRS)

    Gott, J. Richard, III; Weinberg, David H.; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.

    1989-01-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.

  11. Set Shifting Among Adolescents with Anorexia Nervosa

    PubMed Central

    Fitzpatrick, Kathleen Kara; Darcy, Alison; Colborn, Danielle; Gudorf, Caroline; Lock, James

    2012-01-01

    Objective Set shifting difficulties are documented for adults with anorexia nervosa (AN). However, AN typically onsets in adolescence and it is unclear if set-shifting difficulties are a result of chronic AN or present earlier in its course. This study examined whether adolescents with short duration AN demonstrated set shifting difficulties compared to healthy controls (HC). Method Data on set shifting collected from the Delis-Kaplan Executive Functioning System (DKEFS) and Wisconsin Card Sort Task (WCST) as well as eating psychopathology were collected from 32 adolescent inpatients with AN and compared to those from 22 HCs. Results There were no differences in set-shifting in adolescents with AN compared to HCs on most measures. Conclusion The findings suggest that set-shifting difficulties in AN may be a consequence of chronic AN. Future studies should explore set-shifting difficulties in a larger sample of adolescents with AN to determine if there is a subset of adolescents with these difficulties and determine any relationship of set-shifting to the development of a chronic form of AN. PMID:22692985

  12. Improved sensitivity of the urine CAA lateral-flow assay for diagnosing active Schistosoma infections by using larger sample volumes.

    PubMed

    Corstjens, Paul L A M; Nyakundi, Ruth K; de Dood, Claudia J; Kariuki, Thomas M; Ochola, Elizabeth A; Karanja, Diana M S; Mwinzi, Pauline N M; van Dam, Govert J

    2015-04-22

    Accurate determination of Schistosoma infection rates in low endemic regions to examine progress towards interruption of transmission and elimination requires highly sensitive diagnostic tools. An existing lateral flow (LF) based test demonstrating ongoing infections through detection of worm circulating anodic antigen (CAA) was improved for sensitivity through implementation of a protocol allowing increased sample input. Urine is the preferred sample as collection is non-invasive and sample volume is generally not a restriction. Centrifugal filtration devices provided a method to concentrate supernatant of urine samples extracted with trichloroacetic acid (TCA). For field trials a practical sample volume of 2 mL urine allowed detection of CAA down to 0.3 pg/mL. The method was evaluated on a set of urine samples (n = 113) from an S. mansoni endemic region (Kisumu, Kenya) and compared to stool microscopy (Kato Katz, KK). In this analysis true positivity was defined as a sample with either a positive KK or UCAA test. Implementation of the concentration method increased clinical sensitivity (Sn) from 44% to 98% when moving from the standard 10 μL (UCAA10 assay) to 2000 μL (UCAA2000 assay) urine sample input. Sn for KK varied between 23% and 35% for a duplicate KK (single stool, two slides) to 52% for a six-fold KK (three consecutive day stools, two slides). The UCAA2000 assay indicated 47 positive samples with CAA concentration above 0.3 pg/mL. The six-fold KK detected 25 egg positives; 1 sample with 2 eggs detected in the 6-fold KK was not identified with the UCAA2000 assay. Larger sample input increased Sn of the UCAA assay to a level indicating 'true' infection. Only a single 2 mL urine sample is needed, but analysing larger sample volumes could still increase test accuracy. The UCAA2000 test is an appropriate candidate for accurate identification of all infected individuals in low-endemic regions. 
Assay materials do not require refrigeration and collected urine samples may be stored and transported to central test laboratories without the need to be frozen.

  13. A comparison of small-area estimation techniques to estimate selected stand attributes using LiDAR-derived auxiliary variables

    Treesearch

    Michael E. Goerndt; Vicente J. Monleon; Hailemariam Temesgen

    2011-01-01

    One of the challenges often faced in forestry is the estimation of forest attributes for smaller areas of interest within a larger population. Small-area estimation (SAE) is a set of techniques well suited to estimation of forest attributes for small areas in which the existing sample size is small and auxiliary information is available. Selected SAE methods were...

  14. Comparative analyses of basal rate of metabolism in mammals: data selection does matter.

    PubMed

    Genoud, Michel; Isler, Karin; Martin, Robert D

    2018-02-01

    Basal rate of metabolism (BMR) is a physiological parameter that should be measured under strictly defined experimental conditions. In comparative analyses among mammals BMR is widely used as an index of the intensity of the metabolic machinery or as a proxy for energy expenditure. Many databases with BMR values for mammals are available, but the criteria used to select metabolic data as BMR estimates have often varied and the potential effect of this variability has rarely been questioned. We provide a new, expanded BMR database reflecting compliance with standard criteria (resting, postabsorptive state; thermal neutrality; adult, non-reproductive status for females) and examine potential effects of differential selectivity on the results of comparative analyses. The database includes 1739 different entries for 817 species of mammals, compiled from the original sources. It provides information permitting assessment of the validity of each estimate and presents the value closest to a proper BMR for each entry. Using different selection criteria, several alternative data sets were extracted and used in comparative analyses of (i) the scaling of BMR to body mass and (ii) the relationship between brain mass and BMR. It was expected that results would be especially dependent on selection criteria with small sample sizes and with relatively weak relationships. Phylogenetically informed regression (phylogenetic generalized least squares, PGLS) was applied to the alternative data sets for several different clades (Mammalia, Eutheria, Metatheria, or individual orders). For Mammalia, a 'subsampling procedure' was also applied, in which random subsamples of different sample sizes were taken from each original data set and successively analysed. In each case, two data sets with identical sample size and species, but comprising BMR data with different degrees of reliability, were compared. 
Selection criteria had minor effects on scaling equations computed for large clades (Mammalia, Eutheria, Metatheria), although less-reliable estimates of BMR were generally about 12-20% larger than more-reliable ones. Larger effects were found with more-limited clades, such as sciuromorph rodents. For the relationship between BMR and brain mass the results of comparative analyses were found to depend strongly on the data set used, especially with more-limited, order-level clades. In fact, with small sample sizes (e.g. <100) results often appeared erratic. Subsampling revealed that sample size has a non-linear effect on the probability of a zero slope for a given relationship. Depending on the species included, results could differ dramatically, especially with small sample sizes. Overall, our findings indicate a need for due diligence when selecting BMR estimates and caution regarding results (even if seemingly significant) with small sample sizes. © 2017 Cambridge Philosophical Society.
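    The scaling analysis and subsampling procedure above can be sketched with ordinary least squares on synthetic log-log data. PGLS itself additionally requires a phylogenetic covariance structure, which is omitted here; the exponent 0.72, noise level, and sample sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: BMR ~ a * mass^b with b = 0.72 and lognormal noise
n = 800
log_mass = rng.uniform(0.5, 6.0, n)                       # log10 body mass (g)
log_bmr = 0.72 * log_mass - 1.0 + rng.normal(0, 0.15, n)  # log10 BMR

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    x = np.asarray(x)
    y = np.asarray(y)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Subsampling procedure: refit the scaling exponent on random subsets
for size in (25, 100, 400):
    idx = rng.choice(n, size, replace=False)
    print(size, round(ols_slope(log_mass[idx], log_bmr[idx]), 3))
```

    Small subsamples scatter widely around the true exponent, mirroring the abstract's warning about small sample sizes.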

  15. A Bayesian sequential design using alpha spending function to control type I error.

    PubMed

    Zhu, Han; Yu, Qingzhao

    2017-10-01

    We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects from different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design, which sets equal critical values for all interim analyses. When compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is least likely to be rejected at early stages of the trial. Finally, we show that adding a step of stopping for futility in the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes.
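    The alpha spending functions compared above have standard Lan-DeMets forms. A sketch of how the O'Brien-Fleming-type function spends an overall two-sided alpha of 0.05 far more conservatively at early looks than the Pocock-type function (the four equally spaced looks are illustrative):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

Z_975 = 1.959964  # Phi^{-1}(0.975), for an overall two-sided alpha of 0.05

def obf_spending(t):
    """Lan-DeMets O'Brien-Fleming-type spending at information fraction t:
    alpha(t) = 2 - 2 * Phi(z_{alpha/2} / sqrt(t)), here for alpha = 0.05."""
    return 2.0 - 2.0 * norm_cdf(Z_975 / math.sqrt(t))

def pocock_spending(t, alpha=0.05):
    """Lan-DeMets Pocock-type spending: alpha * ln(1 + (e - 1) * t)."""
    return alpha * math.log(1.0 + (math.e - 1.0) * t)

# Cumulative type I error spent at four equally spaced interim looks
for t in (0.25, 0.5, 0.75, 1.0):
    print(f"t={t}: OBF={obf_spending(t):.5f}  Pocock={pocock_spending(t):.5f}")
```

    Both functions spend the full 0.05 at t = 1, but the O'Brien-Fleming-type function spends almost nothing at the first look, matching the abstract's observation that it is the most conservative early in a trial.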

  16. Metabolomics biomarkers to predict acamprosate treatment response in alcohol-dependent subjects.

    PubMed

    Hinton, David J; Vázquez, Marely Santiago; Geske, Jennifer R; Hitschfeld, Mario J; Ho, Ada M C; Karpyak, Victor M; Biernacka, Joanna M; Choi, Doo-Sup

    2017-05-31

    Precision medicine for alcohol use disorder (AUD) allows optimal treatment of the right patient with the right drug at the right time. Here, we generated multivariable models incorporating clinical information and serum metabolite levels to predict acamprosate treatment response. The sample of 120 patients was randomly split into a training set (n = 80) and test set (n = 40) five independent times. Treatment response was defined as complete abstinence (no alcohol consumption during 3 months of acamprosate treatment) while nonresponse was defined as any alcohol consumption during this period. In each of the five training sets, we built a predictive model using a least absolute shrinkage and selection operator (LASSO) penalized selection method and then evaluated the predictive performance of each model in the corresponding test set. The models predicted acamprosate treatment response with a mean sensitivity and specificity in the test sets of 0.83 and 0.31, respectively, suggesting our model performed well at predicting responders, but not non-responders (i.e. many non-responders were predicted to respond). Studies with larger sample sizes and additional biomarkers will expand the clinical utility of predictive algorithms for pharmaceutical response in AUD.
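    The evaluation scheme above (five independent 80/20 splits, scored by sensitivity and specificity) can be sketched as follows. The model-fitting step is elided; only the split sizes match the study design, and the example labels at the end are invented:

```python
import random

random.seed(7)

def sens_spec(y_true, y_pred):
    """Clinical sensitivity and specificity from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# 120 subjects split 80/20 into train/test, five independent times
subjects = list(range(120))
for _ in range(5):
    random.shuffle(subjects)
    train, test = subjects[:80], subjects[80:]
    # ... fit a LASSO-penalized model on `train`, predict on `test` ...

print(sens_spec([1, 1, 1, 0, 0], [1, 1, 0, 0, 1]))  # sensitivity 2/3, specificity 1/2
```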

  17. The 60 Month All-Sky Burst Alert Telescope Survey of Active Galactic Nucleus and the Anisotropy of Nearby AGNs

    NASA Technical Reports Server (NTRS)

    Ajello, M.; Alexander, D. M.; Greiner, J.; Madejeski, G. M.; Gehrels, N.; Burlon, D.

    2014-01-01

    Surveys above 10 keV represent one of the best resources to provide an unbiased census of the population of active galactic nuclei (AGNs). We present the results of 60 months of observation of the hard X-ray sky with Swift/Burst Alert Telescope (BAT). In this time frame, BAT detected (in the 15-55 keV band) 720 sources in an all-sky survey, of which 428 are associated with AGNs, most of which are nearby. Our sample has negligible incompleteness and statistics a factor of approx. 2 larger than those of similarly complete sets of AGNs. Our sample contains (at least) 15 bona fide Compton-thick AGNs and 3 likely candidates. Compton-thick AGNs represent approx. 5% of AGN samples detected above 15 keV. We use the BAT data set to refine the determination of the log N-log S of AGNs, which is extremely important, now that NuSTAR prepares for launch, toward assessing the AGN contribution to the cosmic X-ray background. We show that the log N-log S of AGNs selected above 10 keV is now established to approx. 10% precision. We derive the luminosity function of Compton-thick AGNs and measure a space density of 7.9(+4.1/-2.9) × 10(exp -5)/cubic Mpc for objects with a de-absorbed luminosity larger than 2 × 10(exp 42) erg/s. As the BAT AGNs are mostly local, they allow us to investigate the spatial distribution of AGNs in the nearby universe regardless of absorption. We find concentrations of AGNs that coincide spatially with the largest congregations of matter in the local (within approx. 85 Mpc) universe. There is some evidence that the fraction of Seyfert 2 objects is larger than average in the direction of these dense regions.

  18. Image reconstructions from super-sampled data sets with resolution modeling in PET imaging.

    PubMed

    Li, Yusheng; Matej, Samuel; Metzler, Scott D

    2014-12-01

    Spatial resolution in positron emission tomography (PET) is still a limiting factor in many imaging applications. To improve the spatial resolution for an existing scanner with fixed crystal sizes, mechanical movements such as scanner wobbling and object shifting have been considered for PET systems. Multiple acquisitions from different positions can provide complementary information and increased spatial sampling. The objective of this paper is to explore an efficient and useful reconstruction framework to reconstruct super-resolution images from super-sampled low-resolution data sets. The authors introduce a super-sampling data acquisition model based on the physical processes with tomographic, downsampling, and shifting matrices as its building blocks. Based on the model, they extend the MLEM and Landweber algorithms to reconstruct images from super-sampled data sets. The authors also derive a backprojection-filtration-like (BPF-like) method for the super-sampling reconstruction. Furthermore, they explore variant methods for super-sampling reconstructions: the separate super-sampling resolution-modeling reconstruction and the reconstruction without downsampling to further improve image quality at the cost of more computation. The authors use simulated reconstruction of a resolution phantom to evaluate the three types of algorithms with different super-samplings at different count levels. Contrast recovery coefficient (CRC) versus background variability, as an image-quality metric, is calculated at each iteration for all reconstructions. The authors observe that all three algorithms can significantly and consistently achieve increased CRCs at fixed background variability and reduce background artifacts with super-sampled data sets at the same count levels. For the same super-sampled data sets, the MLEM method achieves better image quality than the Landweber method, which in turn achieves better image quality than the BPF-like method. 
    The authors also demonstrate that the reconstructions from super-sampled data sets using a fine system matrix yield improved image quality compared to the reconstructions using a coarse system matrix. Super-sampling reconstructions at different count levels showed that greater spatial-resolution improvement can be obtained with higher counts at larger iteration numbers. The authors developed a super-sampling reconstruction framework that can reconstruct super-resolution images using the super-sampling data sets simultaneously with known acquisition motion. The super-sampling PET acquisition using the proposed algorithms provides an effective and economical way to improve image quality for PET imaging, which has important implications for preclinical and clinical region-of-interest PET imaging applications.
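    The MLEM update used for these reconstructions has the standard multiplicative form. A 1-D toy sketch with a hypothetical downsampling-plus-shift acquisition model (the paper's tomographic, downsampling, and shifting matrices are far more elaborate):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """MLEM iterations: x <- x * (A^T (y / (A x))) / (A^T 1), elementwise."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])      # sensitivity (normalization) image
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.where(proj > 0, y / np.maximum(proj, 1e-12), 0.0)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Fine 6-bin object measured twice by a 3-bin detector: unshifted and shifted
x_true = np.array([0.0, 2.0, 5.0, 3.0, 0.0, 1.0])
A0 = np.kron(np.eye(3), [[0.5, 0.5]])     # downsample by 2, no shift
A1 = np.roll(A0, 1, axis=1)               # object shifted by one fine bin
A = np.vstack([A0, A1])                   # stacked super-sampling model
y = A @ x_true                            # noise-free super-sampled data
x_hat = mlem(A, y)
print(np.round(x_hat, 2))
```

    The two shifted acquisitions jointly constrain the fine grid more than either low-resolution measurement alone, which is the essence of the super-sampling idea.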

  19. Robust algorithm for aligning two-dimensional chromatograms.

    PubMed

    Gros, Jonas; Nabi, Deedar; Dimitriou-Christidis, Petros; Rutler, Rebecca; Arey, J Samuel

    2012-11-06

    Comprehensive two-dimensional gas chromatography (GC × GC) chromatograms typically exhibit run-to-run retention time variability. Chromatogram alignment is often a desirable step prior to further analysis of the data, for example, in studies of environmental forensics or weathering of complex mixtures. We present a new algorithm for aligning whole GC × GC chromatograms. This technique is based on alignment points that have locations indicated by the user both in a target chromatogram and in a reference chromatogram. We applied the algorithm to two sets of samples. First, we aligned the chromatograms of twelve compositionally distinct oil spill samples, all analyzed using the same instrument parameters. Second, we applied the algorithm to two compositionally distinct wastewater extracts analyzed using two different instrument temperature programs, thus involving larger retention time shifts than the first sample set. For both sample sets, the new algorithm performed favorably compared to two other available alignment algorithms: that of Pierce, K. M.; Wood, Lianna F.; Wright, B. W.; Synovec, R. E. Anal. Chem. 2005, 77, 7735-7743 and 2-D COW from Zhang, D.; Huang, X.; Regnier, F. E.; Zhang, M. Anal. Chem. 2008, 80, 2664-2671. The new algorithm achieves the best matches of retention times for test analytes, avoids some artifacts which result from the other alignment algorithms, and incurs the least modification of quantitative signal information.
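    The alignment-point idea can be sketched in one dimension as a piecewise-linear warp of the target retention-time axis onto the reference axis. The published algorithm operates on full 2-D GC × GC chromatograms (one warp per dimension); the anchor values below are hypothetical:

```python
import numpy as np

def align_axis(rt_target, anchors_target, anchors_reference):
    """Map target retention times onto the reference axis by
    piecewise-linear interpolation between user-picked alignment points."""
    return np.interp(rt_target, anchors_target, anchors_reference)

# Hypothetical alignment points (minutes) picked in both chromatograms
anchors_t = [2.0, 10.0, 30.0, 55.0]
anchors_r = [2.1, 10.4, 30.2, 54.8]
rt = np.array([5.0, 20.0, 40.0])
print(align_axis(rt, anchors_t, anchors_r))
```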

  20. Army Synthetic Validity Project. Report of Phase 3 Results. Volume 1.

    DTIC Science & Technology

    1991-02-01

    the "new" jobs. The "existing" jobs ... were the Batch A MOS for which we had the more comprehensive data sets and, generally speaking, larger samples. The "new" jobs were the nine Batch Z ... 4.2). Second, we applied the empirical least squares equation for the "existing" job that most closely matched each "new" job to the "new" jobs

  1. Observation of subsecond variations in auroral region total electron content using 100 Hz sampling of GPS observables

    NASA Astrophysics Data System (ADS)

    McCaffrey, A. M.; Jayachandran, P. T.

    2017-06-01

    First ever auroral region total electron content (TEC) measurements at 100 Hz using a Septentrio PolaRxS Pro receiver are analyzed to discover ionospheric signatures which would otherwise be unobtainable with the frequently used lower sampling rates. Two types of variations are observed: small-magnitude (amplitude) variations, which are present consistently throughout the data set, and larger-magnitude (amplitude) variations, which are less frequent. Small-amplitude TEC fluctuations are accounted for by the receiver phase jitter. However, estimated secondary ionospheric effects in the calculation of TEC and the receiver phase jitter were unable to account for the larger-amplitude TEC fluctuations. These variations are also accompanied by fluctuations in the magnetic field, which seems to indicate that these fluctuations are real and of geophysical significance. This paper presents a technique and the capability of high-rate TEC measurements in the study of auroral dynamics. Further detailed study is needed to identify the cause of these subsecond TEC fluctuations and associated magnetic field fluctuations.
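    Relative TEC from dual-frequency carrier phase is conventionally formed from the geometry-free combination. A sketch using the standard GPS constants; this is the textbook combination, not a description of the receiver's internal processing, and the integer ambiguities leave an unknown constant offset:

```python
# Relative slant TEC from the dual-frequency geometry-free phase combination.
F1, F2 = 1575.42e6, 1227.60e6  # GPS L1/L2 carrier frequencies (Hz)
C = 299792458.0                # speed of light (m/s)
K = 40.3                       # ionospheric refraction constant (m^3/s^2)

def relative_tec(phi1_cycles, phi2_cycles):
    """Relative slant TEC (TECU) from L1/L2 carrier phases in cycles;
    absolute TEC requires resolving the phase ambiguities."""
    l1 = phi1_cycles * (C / F1)  # carrier phase expressed in meters
    l2 = phi2_cycles * (C / F2)
    factor = (F1**2 * F2**2) / (K * (F1**2 - F2**2))
    return factor * (l1 - l2) / 1e16  # 1 TECU = 1e16 electrons/m^2

# TEC change corresponding to one L1 cycle of differential phase
print(relative_tec(1.0, 0.0))  # roughly 1.8 TECU
```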

  2. Floodplain complexity and surface metrics: influences of scale and geomorphology

    USGS Publications Warehouse

    Scown, Murray W.; Thoms, Martin C.; DeJager, Nathan R.

    2015-01-01

    Many studies of fluvial geomorphology and landscape ecology examine a single river or landscape and thus lack generality, making it difficult to develop a general understanding of the linkages between landscape patterns and larger-scale driving variables. We examined the spatial complexity of eight floodplain surfaces in widely different geographic settings and determined how patterns measured at different scales relate to different environmental drivers. Floodplain surface complexity is defined as having highly variable surface conditions that are also highly organised in space. These two components of floodplain surface complexity were measured across multiple sampling scales from LiDAR-derived DEMs. The surface character and variability of each floodplain were measured using four surface metrics; namely, standard deviation, skewness, coefficient of variation, and standard deviation of curvature from a series of moving window analyses ranging from 50 to 1000 m in radius. The spatial organisation of each floodplain surface was measured using spatial correlograms of the four surface metrics. Surface character, variability, and spatial organisation differed among the eight floodplains; and random, fragmented, highly patchy, and simple gradient spatial patterns were exhibited, depending upon the metric and window size. Differences in surface character and variability among the floodplains became statistically stronger with increasing sampling scale (window size), as did their associations with environmental variables. Sediment yield was consistently associated with differences in surface character and variability, as were flow discharge and variability at smaller sampling scales. Floodplain width was associated with differences in the spatial organisation of surface conditions at smaller sampling scales, while valley slope was weakly associated with differences in spatial organisation at larger scales. 
A comparison of floodplain landscape patterns measured at different scales would improve our understanding of the role that different environmental variables play at different scales and in different geomorphic settings.
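    The moving-window surface metrics above can be sketched in one dimension. This simplification uses a square window on a synthetic elevation profile rather than the circular windows applied to the LiDAR DEMs; the profile and window radii are hypothetical:

```python
import numpy as np

def window_metrics(dem, radius):
    """Standard deviation and coefficient of variation of elevation
    within a moving window of the given half-width (in samples)."""
    n = len(dem)
    sd = np.empty(n)
    cv = np.empty(n)
    for i in range(n):
        w = dem[max(0, i - radius): i + radius + 1]
        sd[i] = w.std()
        cv[i] = w.std() / w.mean() if w.mean() != 0 else np.nan
    return sd, cv

rng = np.random.default_rng(3)
dem = 100 + np.cumsum(rng.normal(0, 0.2, 500))  # synthetic 1-D elevation profile
for r in (5, 25, 100):  # metrics change with sampling scale (window size)
    sd, _ = window_metrics(dem, r)
    print(r, round(float(sd.mean()), 3))
```

    The metric values grow with window size, illustrating why the floodplain comparisons depend on the sampling scale chosen.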

  3. A pleiotropy-informed Bayesian false discovery rate adapted to a shared control design finds new disease associations from GWAS summary statistics.

    PubMed

    Liley, James; Wallace, Chris

    2015-02-01

    Genome-wide association studies (GWAS) have been successful in identifying single nucleotide polymorphisms (SNPs) associated with many traits and diseases. However, at existing sample sizes, these variants explain only part of the estimated heritability. Leverage of GWAS results from related phenotypes may improve detection without the need for larger datasets. The Bayesian conditional false discovery rate (cFDR) constitutes an upper bound on the expected false discovery rate (FDR) across a set of SNPs whose p values for two diseases are both less than two disease-specific thresholds. Calculation of the cFDR requires only summary statistics and has several advantages over traditional GWAS analysis. However, existing methods require distinct control samples between studies. Here, we extend the technique to allow for some or all controls to be shared, increasing applicability. Several different SNP sets can be defined with the same cFDR value, and we show that the expected FDR across the union of these sets may exceed the expected FDR in any single set. We describe a procedure to establish an upper bound for the expected FDR among the union of such sets of SNPs. We apply our technique to pairwise analysis of p values from ten autoimmune diseases with variable sharing of controls, enabling discovery of 59 SNP-disease associations which do not reach GWAS significance after genomic control in individual datasets. Most of the SNPs we highlight have previously been confirmed using replication studies or larger GWAS, a useful validation of our technique; we report eight SNP-disease associations across five diseases not previously described. Our technique extends and strengthens the previous algorithm, and establishes robust limits on the expected FDR. This approach can improve SNP detection in GWAS, and give insight into shared aetiology between phenotypically related conditions.
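    The basic empirical cFDR estimator underlying this family of methods can be sketched from two vectors of p values (the shared-control adjustment developed in the paper is considerably more involved and is not reproduced here; the simulated p values are hypothetical):

```python
import numpy as np

def cfdr(p_i, p_j, P_i, P_j):
    """Basic empirical conditional FDR for the principal phenotype i:
    cFDR(p_i | p_j) ~ p_i * #{P_j <= p_j} / #{P_i <= p_i and P_j <= p_j}."""
    P_i = np.asarray(P_i)
    P_j = np.asarray(P_j)
    n_cond = np.sum(P_j <= p_j)
    n_both = np.sum((P_i <= p_i) & (P_j <= p_j))
    return p_i * n_cond / max(n_both, 1)

# Hypothetical null p values plus a handful of SNPs associated with both traits
rng = np.random.default_rng(5)
P_i = rng.uniform(size=10000)
P_j = rng.uniform(size=10000)
shared = rng.uniform(0, 1e-4, size=(20, 2))  # jointly associated SNPs
P_i = np.concatenate([P_i, shared[:, 0]])
P_j = np.concatenate([P_j, shared[:, 1]])
print(cfdr(1e-4, 1e-4, P_i, P_j))  # small: conditioning on trait j concentrates signal
```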

  4. The impact of hypnotic suggestibility in clinical care settings.

    PubMed

    Montgomery, Guy H; Schnur, Julie B; David, Daniel

    2011-07-01

    Hypnotic suggestibility has been described as a powerful predictor of outcomes associated with hypnotic interventions. However, there have been no systematic approaches to quantifying this effect across the literature. This meta-analysis evaluates the magnitude of the effect of hypnotic suggestibility on hypnotic outcomes in clinical settings. PsycINFO and PubMed were searched from their inception through July 2009. Thirty-four effects from 10 studies and 283 participants are reported. Results revealed a statistically significant overall effect size in the small to medium range (r = .24; 95% Confidence Interval = -0.28 to 0.75), indicating that greater hypnotic suggestibility led to greater effects of hypnosis interventions. Hypnotic suggestibility accounted for 6% of the variance in outcomes. Smaller sample size studies, use of the SHCS, and pediatric samples tended to result in larger effect sizes. The authors question the usefulness of assessing hypnotic suggestibility in clinical contexts.

  5. The impact of hypnotic suggestibility in clinical care settings

    PubMed Central

    Montgomery, Guy H.; Schnur, Julie B.; David, Daniel

    2013-01-01

    Hypnotic suggestibility has been described as a powerful predictor of outcomes associated with hypnotic interventions. However, there have been no systematic approaches to quantifying this effect across the literature. The present meta-analysis evaluates the magnitude of the effect of hypnotic suggestibility on hypnotic outcomes in clinical settings. PsycINFO and PubMed were searched from their inception through July 2009. Thirty-four effects from ten studies and 283 participants are reported. Results revealed a statistically significant overall effect size in the small to medium range (r = 0.24; 95% Confidence Interval = −0.28 to 0.75), indicating that greater hypnotic suggestibility led to greater effects of hypnosis interventions. Hypnotic suggestibility accounted for 6% of the variance in outcomes. Smaller sample size studies, use of the SHCS, and pediatric samples tended to result in larger effect sizes. Results question the usefulness of assessing hypnotic suggestibility in clinical contexts. PMID:21644122
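
The two headline numbers in the meta-analysis abstracts above are mutually consistent: variance explained is simply the square of the correlation, and 0.24 squared is about 6%. As a quick check:

```python
# The abstracts report r = 0.24 and state that suggestibility accounted
# for about 6% of outcome variance; variance explained is r**2.
r = 0.24
variance_explained = r ** 2  # 0.0576, i.e. roughly 6%
assert round(variance_explained * 100) == 6
```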

  6. Multi-Pixel Photon Counters for Optofluidic Characterization of Particles and Microalgae

    PubMed Central

    Asrar, Pouya; Sucur, Marta; Hashemi, Nastaran

    2015-01-01

    We have developed an optofluidic biosensor to study microscale particles and different species of microalgae. The system comprises a microchannel with a set of chevron-shaped grooves. The chevrons allow for hydrodynamic focusing of the core stream in the center using a sheath fluid. The device is equipped with a new generation of highly sensitive photodetectors, multi-pixel photon counter (MPPC), with high gain values and an extremely small footprint. Two different sizes of high intensity fluorescent microspheres and three different species of algae (Chlamydomonas reinhardtii strain 21 gr, Chlamydomonas suppressor, and Chlorella sorokiniana) were studied. The forward scattering emissions generated by samples passing through the interrogation region were carried through a multimode fiber, located at 135 degrees with respect to the excitation fiber, and detected by an MPPC. The signal outputs obtained from each sample were collected using a data acquisition system and utilized for further statistical analysis. Larger particles or cells demonstrated larger peak height and width, and consequently larger peak area. The average signal output (integral of the peak) for Chlamydomonas reinhardtii strain 21 gr, Chlamydomonas suppressor, and Chlorella sorokiniana falls between the values found for the 3.2 and 10.2 μm beads. Different types of algae were also successfully characterized. PMID:26075506
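
The per-event statistics described above (peak height, width, and area extracted from the detector trace, which scale with particle size) can be sketched with a simple threshold-crossing peak finder; the trace values and the threshold below are invented for illustration:

```python
# Hedged sketch of extracting peak height, width, and area from a
# detector trace: each contiguous run of samples above a threshold is
# treated as one particle event. Trace and threshold are illustrative.

def peak_stats(trace, threshold):
    """Return (height, width, area) for each run of samples above threshold."""
    peaks, run = [], []
    for v in trace + [0.0]:          # trailing sentinel flushes a final run
        if v > threshold:
            run.append(v)
        elif run:
            peaks.append((max(run), len(run), sum(run)))
            run = []
    return peaks

trace = [0.1, 0.2, 2.0, 5.0, 2.0, 0.1, 0.2, 1.5, 8.0, 9.0, 7.0, 1.2, 0.1]
peaks = peak_stats(trace, 1.0)
# The second (larger) event shows greater height, width, and area.
assert peaks[1][0] > peaks[0][0] and peaks[1][2] > peaks[0][2]
```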

  7. Sparsely sampling the sky: a Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Jaffe, A. H.

    2013-08-01

    The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. By making use of the principles of Bayesian experimental design, we investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.

  8. A Phylogenomic Approach Based on PCR Target Enrichment and High Throughput Sequencing: Resolving the Diversity within the South American Species of Bartsia L. (Orobanchaceae)

    PubMed Central

    Tank, David C.

    2016-01-01

    Advances in high-throughput sequencing (HTS) have allowed researchers to obtain large amounts of biological sequence information at speeds and costs unimaginable only a decade ago. Phylogenetics, and the study of evolution in general, is quickly migrating towards using HTS to generate larger and more complex molecular datasets. In this paper, we present a method that utilizes microfluidic PCR and HTS to generate large amounts of sequence data suitable for phylogenetic analyses. The approach uses the Fluidigm Access Array System (Fluidigm, San Francisco, CA, USA) and two sets of PCR primers to simultaneously amplify 48 target regions across 48 samples, incorporating sample-specific barcodes and HTS adapters (2,304 unique amplicons per Access Array). The final product is a pooled set of amplicons ready to be sequenced, and thus, there is no need to construct separate, costly genomic libraries for each sample. Further, we present a bioinformatics pipeline to process the raw HTS reads to either generate consensus sequences (with or without ambiguities) for every locus in every sample or—more importantly—recover the separate alleles from heterozygous target regions in each sample. This is important because it adds allelic information that is well suited for coalescent-based phylogenetic analyses that are becoming very common in conservation and evolutionary biology. To test our approach and bioinformatics pipeline, we sequenced 576 samples across 96 target regions belonging to the South American clade of the genus Bartsia L. in the plant family Orobanchaceae. After sequencing cleanup and alignment, the experiment resulted in ~25,300 bp across 486 samples for a set of 48 primer pairs targeting the plastome, and ~13,500 bp for 363 samples for a set of primers targeting regions in the nuclear genome. Finally, we constructed a combined concatenated matrix from all 96 primer combinations, resulting in a combined aligned length of ~40,500 bp for 349 samples. PMID:26828929

  9. International Space Station (ISS) Bacterial Filter Elements (BFEs): Filter Efficiency and Pressure Drop Testing of Returned Units

    NASA Technical Reports Server (NTRS)

    Green, Robert D.; Agui, Juan H.; Vijayakumar, R.; Berger, Gordon M.; Perry, Jay L.

    2017-01-01

    The air quality control equipment aboard the International Space Station (ISS) and future deep space exploration vehicles provides the vital function of maintaining a clean cabin environment for the crew and the hardware. This becomes a serious challenge in pressurized space compartments since no outside air ventilation is possible, and a larger particulate load is imposed on the filtration system due to lack of sedimentation. The ISS Environmental Control and Life Support (ECLS) system architecture in the U.S. Segment uses a distributed particulate filtration approach consisting of traditional High-Efficiency Particulate Air (HEPA) filters deployed at multiple locations in each U.S. Segment module; these filters are referred to as Bacterial Filter Elements, or BFEs. In our previous work, we presented results of efficiency and pressure drop measurements for a sample set of two returned BFEs with a service life of 2.5 years. In this follow-on work, we present similar efficiency, pressure drop, and leak test results for a larger sample set of six returned BFEs. The results of this work can aid the ISS Program in managing BFE logistics inventory through the station's planned lifetime as well as provide insight for managing filter element logistics for future exploration missions. These results also can provide meaningful guidance for particulate filter designs under consideration for future deep space exploration missions.

  10. Filter Efficiency and Pressure Testing of Returned ISS Bacterial Filter Elements (BFEs)

    NASA Technical Reports Server (NTRS)

    Green, Robert D.; Agui, Juan H.; Berger, Gordon M.; Vijayakumar, R.; Perry, Jay L.

    2017-01-01

    The air quality control equipment aboard the International Space Station (ISS) and future deep space exploration vehicles provides the vital function of maintaining a clean cabin environment for the crew and the hardware. This becomes a serious challenge in pressurized space compartments since no outside air ventilation is possible, and a larger particulate load is imposed on the filtration system due to lack of sedimentation. The ISS Environmental Control and Life Support (ECLS) system architecture in the U.S. Segment uses a distributed particulate filtration approach consisting of traditional High-Efficiency Particulate Air (HEPA) filters deployed at multiple locations in each U.S. Segment module; these filters are referred to as Bacterial Filter Elements, or BFEs. In our previous work, we presented results of efficiency and pressure drop measurements for a sample set of two returned BFEs with a service life of 2.5 years. In this follow-on work, we present similar efficiency, pressure drop, and leak test results for a larger sample set of six returned BFEs. The results of this work can aid the ISS Program in managing BFE logistics inventory through the station's planned lifetime as well as provide insight for managing filter element logistics for future exploration missions. These results also can provide meaningful guidance for particulate filter designs under consideration for future deep space exploration missions.

  11. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    NASA Astrophysics Data System (ADS)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

    We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance sampling technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly, yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free-energy, and the discrete valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free-energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.
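
The two ingredients described above (penalizing already-visited sets so the walker escapes metastable states, and biasing by only a fraction of the learned free energy) can be illustrated with a toy sampler; the target density, the partition into four sets, the partial-bias fraction, and the learning rate are all illustrative assumptions, not the paper's algorithm:

```python
# Hedged toy of adaptive importance sampling with partial biasing, in the
# spirit of (but much simpler than) the Self Healing Umbrella Sampling
# generalization described above. All parameters are invented.
import math
import random

random.seed(0)

N_STATES = 40   # discrete state space 0..39
N_SETS = 4      # partition of the space into 4 contiguous sets


def target(x):
    """Bimodal unnormalized target: two Gaussians at x = 8 and x = 30."""
    return (math.exp(-0.5 * ((x - 8) / 3) ** 2)
            + math.exp(-0.5 * ((x - 30) / 3) ** 2))


def set_of(x):
    return x * N_SETS // N_STATES


log_w = [0.0] * N_SETS   # learned log-weights (proxy for free energies)
a = 0.5                  # partial-biasing fraction in (0, 1]
gamma = 0.01             # learning rate for the weights

x = 0
visits = [0] * N_SETS
for _ in range(200_000):
    # Metropolis move on the biased target: target(x) * exp(-a * log_w[set])
    y = max(0, min(N_STATES - 1, x + random.choice([-1, 1])))
    log_ratio = ((math.log(target(y)) - a * log_w[set_of(y)])
                 - (math.log(target(x)) - a * log_w[set_of(x)]))
    if log_ratio >= 0 or random.random() < math.exp(log_ratio):
        x = y
    # Penalize the currently occupied set so the walker escapes it faster.
    log_w[set_of(x)] += gamma
    visits[set_of(x)] += 1

# With the adaptive penalty, both modes (sets 0 and 3) get visited.
assert visits[0] > 0 and visits[3] > 0
```

Without the growing penalty, a plain Metropolis walker starting at 0 would spend essentially all of a run this short stuck in the first mode.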

  12. Tissue-aware RNA-Seq processing and normalization for heterogeneous and sparse data.

    PubMed

    Paulson, Joseph N; Chen, Cho-Yi; Lopes-Ramos, Camila M; Kuijjer, Marieke L; Platig, John; Sonawane, Abhijeet R; Fagny, Maud; Glass, Kimberly; Quackenbush, John

    2017-10-03

    Although ultrahigh-throughput RNA-Sequencing has become the dominant technology for genome-wide transcriptional profiling, the vast majority of RNA-Seq studies typically profile only tens of samples, and most analytical pipelines are optimized for these smaller studies. However, projects are generating ever-larger data sets comprising RNA-Seq data from hundreds or thousands of samples, often collected at multiple centers and from diverse tissues. These complex data sets present significant analytical challenges due to batch and tissue effects, but provide the opportunity to revisit the assumptions and methods that we use to preprocess, normalize, and filter RNA-Seq data - critical first steps for any subsequent analysis. We find that analysis of large RNA-Seq data sets requires both careful quality control and the need to account for sparsity due to the heterogeneity intrinsic in multi-group studies. We developed the Yet Another RNA Normalization software pipeline (YARN), which includes quality control and preprocessing, gene filtering, and normalization steps designed to facilitate downstream analysis of large, heterogeneous RNA-Seq data sets, and we demonstrate its use with data from the Genotype-Tissue Expression (GTEx) project. An R package instantiating YARN is available at http://bioconductor.org/packages/yarn.
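
One preprocessing step that a tissue-aware pipeline like the one described above motivates is normalizing samples within each tissue group rather than across the whole heterogeneous data set. A hedged sketch of within-group quantile normalization follows; the toy counts and grouping are invented, and this is not YARN's actual normalization code:

```python
# Hedged sketch: quantile-normalize samples *within* each tissue group,
# so tissue-specific expression distributions are not flattened by a
# global normalization. Data and grouping are illustrative.

def quantile_normalize(samples):
    """Classic quantile normalization: map each sample's sorted values
    onto the across-sample mean of the values at each rank."""
    n = len(samples[0])
    ranked = [sorted(s) for s in samples]
    ref = [sum(col) / len(samples) for col in zip(*ranked)]
    out = []
    for s in samples:
        order = sorted(range(n), key=lambda i: s[i])
        norm = [0.0] * n
        for rank, i in enumerate(order):
            norm[i] = ref[rank]
        out.append(norm)
    return out

# Two tissues, two samples each (toy gene-by-sample values).
tissues = {
    "lung":  [[1.0, 5.0, 3.0], [2.0, 6.0, 4.0]],
    "liver": [[10.0, 50.0, 30.0], [20.0, 60.0, 40.0]],
}
normalized = {t: quantile_normalize(s) for t, s in tissues.items()}
# Within a tissue, every sample now shares the same value distribution.
assert sorted(normalized["lung"][0]) == sorted(normalized["lung"][1])
```

Normalizing the lung and liver samples together instead would pull the two tissues' very different scales toward a single reference, which is exactly the distortion a tissue-aware pipeline tries to avoid.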

  13. Selecting the most appropriate time points to profile in high-throughput studies

    PubMed Central

    Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv

    2017-01-01

    Biological systems are increasingly being studied by high throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation for the expression values of the non-selected points. Further, even though the selection is only based on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972
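
The combinatorial problem TPS solves can be illustrated with a much simpler greedy stand-in: choose a subset of time points whose piecewise-linear interpolation best reconstructs a densely sampled profile. This is an assumption-laden simplification for illustration, not the TPS algorithm itself:

```python
# Hedged sketch of time-point selection: keep the endpoints, then
# greedily add the point that most reduces piecewise-linear
# reconstruction error over the full dense profile. Toy data.

def interp_error(times, values, chosen):
    """Sum of squared reconstruction errors using only `chosen` indices.
    Assumes `times` is sorted ascending and chosen includes 0 and -1."""
    err = 0.0
    for i, t in enumerate(times):
        left = max(j for j in chosen if times[j] <= t)
        right = min(j for j in chosen if times[j] >= t)
        if left == right:
            pred = values[left]
        else:
            frac = (t - times[left]) / (times[right] - times[left])
            pred = values[left] + frac * (values[right] - values[left])
        err += (values[i] - pred) ** 2
    return err

def greedy_select(times, values, k):
    chosen = {0, len(times) - 1}
    while len(chosen) < k:
        best = min((i for i in range(len(times)) if i not in chosen),
                   key=lambda i: interp_error(times, values, chosen | {i}))
        chosen.add(best)
    return sorted(chosen)

# Toy expression profile with a sharp mid-course transition (invented).
times = list(range(11))
values = [0, 0, 0, 0, 1, 5, 9, 10, 10, 10, 10]
picked = greedy_select(times, values, 4)
assert picked[0] == 0 and picked[-1] == 10
```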

  14. Refined Estimates of Carbon Abundances for Carbon-Enhanced Metal-Poor Stars

    NASA Astrophysics Data System (ADS)

    Rossi, S.; Placco, V. M.; Beers, T. C.; Marsteller, B.; Kennedy, C. R.; Sivarani, T.; Masseron, T.; Plez, B.

    2008-03-01

    We present results from a refined set of procedures for estimation of the metallicities ([Fe/H]) and carbon abundance ratios ([C/Fe]) based on a much larger sample of calibration objects (on the order of 500 stars) than were available to Rossi et al. (2005), due to a dramatic increase in the number of stars with measurements obtained from high-resolution analyses in the past few years. We compare results obtained from a new calibration of the KP and GP indices with those obtained from a custom set of spectral syntheses based on MOOG. In cases where the GP index approaches saturation, it is clear that only spectral synthesis achieves reliable results.

  15. An exploratory study of a text classification framework for Internet-based surveillance of emerging epidemics

    PubMed Central

    Torii, Manabu; Yin, Lanlan; Nguyen, Thang; Mazumdar, Chand T.; Liu, Hongfang; Hartley, David M.; Nelson, Noele P.

    2014-01-01

    Purpose Early detection of infectious disease outbreaks is crucial to protecting the public health of a society. Online news articles provide timely information on disease outbreaks worldwide. In this study, we investigated automated detection of articles relevant to disease outbreaks using machine learning classifiers. In a real-life setting, it is expensive to prepare a training data set for classifiers, which usually consists of manually labeled relevant and irrelevant articles. To mitigate this challenge, we examined the use of randomly sampled unlabeled articles as well as labeled relevant articles. Methods Naïve Bayes and Support Vector Machine (SVM) classifiers were trained on 149 relevant and 149 or more randomly sampled unlabeled articles. Diverse classifiers were trained by varying the number of sampled unlabeled articles and also the number of word features. The trained classifiers were applied to 15 thousand articles published over 15 days. Top-ranked articles from each classifier were pooled and the resulting set of 1337 articles was reviewed by an expert analyst to evaluate the classifiers. Results Daily averages of areas under ROC curves (AUCs) over the 15-day evaluation period were 0.841 and 0.836, respectively, for the naïve Bayes and SVM classifier. We referenced a database of disease outbreak reports to confirm that the evaluation data set resulting from the pooling method indeed covered incidents recorded in the database during the evaluation period. Conclusions The proposed text classification framework utilizing randomly sampled unlabeled articles can facilitate a cost-effective approach to training machine learning classifiers in a real-life Internet-based biosurveillance project. We plan to examine this framework further using larger data sets and using articles in non-English languages. PMID:21134784
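
The training setup described above, labeled relevant articles against randomly sampled unlabeled articles treated as the negative class, can be sketched with a tiny multinomial naive Bayes. The classifier, example texts, and labels below are illustrative assumptions, not the study's implementation:

```python
# Hedged sketch: train a multinomial Naive Bayes (add-one smoothing) on
# relevant articles (label 1) vs. randomly sampled unlabeled articles
# (label 0). Unlabeled "negatives" may contain noise, as in the study.
import math
from collections import Counter

def train_nb(docs, labels):
    """Return a classifier doc -> 0/1 from word-count class models."""
    vocab = {w for d in docs for w in d.split()}
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for d, y in zip(docs, labels):
        counts[y].update(d.split())
    def score(doc, y):
        total = sum(counts[y].values())
        s = math.log(priors[y] / len(labels))
        for w in doc.split():
            if w in vocab:  # ignore words never seen in training
                s += math.log((counts[y][w] + 1) / (total + len(vocab)))
        return s
    return lambda doc: 1 if score(doc, 1) > score(doc, 0) else 0

relevant = ["outbreak of avian influenza reported",
            "cholera cases surge after flooding",
            "novel virus outbreak spreads"]
unlabeled = ["stock market closes higher today",
             "local team wins championship game",
             "virus outbreak film gets sequel"]  # unlabeled noise
clf = train_nb(relevant + unlabeled, [1, 1, 1, 0, 0, 0])
assert clf("influenza outbreak reported in region") == 1
```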

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    KLARER,PAUL R.; BINDER,ALAN B.; LENARD,ROGER X.

    A preliminary set of requirements for a robotic rover mission to the lunar polar region is described and assessed. Tasks to be performed by the rover include core drill sample acquisition, mineral and volatile soil content assay, and significant wide area traversals. Assessment of the postulated requirements is performed using first order estimates of energy, power, and communications throughput issues. Two potential rover system configurations are considered: a smaller rover envisioned as part of a group of multiple rovers, and a larger single rover along the lines of more traditional planetary surface rover concepts.

  17. A cross-cultural study of eating attitudes in adolescent South African females

    PubMed Central

    Szabo, Christopher Paul; Allwood, Clifford W

    2004-01-01

    Eating disorders were first described in black females in South Africa in 1995. A subsequent community based study of eating attitudes amongst adolescent females in an urban setting suggested that there would be increasing numbers of sufferers from within the black community. The current study sought to extend these findings using a larger, more representative urban sample. The results support those of the preliminary study. The underlying basis for the emerging phenomenon is discussed PMID:16633453

  18. Factors Associated with the Performance and Cost-Effectiveness of Using Lymphatic Filariasis Transmission Assessment Surveys for Monitoring Soil-Transmitted Helminths: A Case Study in Kenya

    PubMed Central

    Smith, Jennifer L.; Sturrock, Hugh J. W.; Assefa, Liya; Nikolay, Birgit; Njenga, Sammy M.; Kihara, Jimmy; Mwandawiro, Charles S.; Brooker, Simon J.

    2015-01-01

    Transmission assessment surveys (TAS) for lymphatic filariasis have been proposed as a platform to assess the impact of mass drug administration (MDA) on soil-transmitted helminths (STHs). This study used computer simulation and field data from pre- and post-MDA settings across Kenya to evaluate the performance and cost-effectiveness of the TAS design for STH assessment compared with alternative survey designs. Variations in the TAS design and different sample sizes and diagnostic methods were also evaluated. The district-level TAS design correctly classified more districts compared with standard STH designs in pre-MDA settings. Aggregating districts into larger evaluation units in a TAS design decreased performance, whereas age group sampled and sample size had minimal impact. The low diagnostic sensitivity of Kato-Katz and mini-FLOTAC methods was found to increase misclassification. We recommend using a district-level TAS among children 8–10 years of age to assess STH but suggest that key consideration is given to evaluation unit size. PMID:25487730

  19. Community Heavy Metal Exposure, San Francisco, California

    NASA Astrophysics Data System (ADS)

    Chavez, A.; Devine, M.; Ho, T.; Zapata, I.; Bissell, M.; Neiss, J.

    2008-12-01

    Heavy metals are natural elements that generally occur in minute concentrations in the earth's crust. While some of these elements, in small quantities, are vital to life, most are harmful in larger doses. Various industrial and agricultural processes can result in dangerously high concentrations of heavy metals in our environment. Consequently, humans can be exposed to unsafe levels of these elements via the air we breathe, the water and food we consume, and the many products we use. During a two week study we collected numerous samples of sediments, water, food, and household items from around the San Francisco Bay Area that represent industrial, agricultural, and urban/residential settings. We analyzed these samples for Mercury (Hg), Lead (Pb), and Arsenic (As). Our goal was to examine the extent of our exposure to heavy metals in our daily lives. We discovered that many of the common foods and materials in our lives have become contaminated with unhealthy concentrations of these metals. Of our food samples, many exceeded the EPA's Maximum Contaminant Levels (MCL) set for each metal. Meats (fish, chicken, and beef) had higher amounts of each metal than did non-meat items. Heavy metals were also prevalent in varying concentrations in the environment. While many of our samples exceeded the EPA's Sediment Screening Level (SSL) for As, only two other samples surpassed the SSL set for Pb, and none of our samples exceeded the SSL for Hg. Because of the serious health effects that can result from over-exposure to heavy metals, the information obtained in this study should be used to influence our future dietary and recreational habits.

  20. Effectiveness of an existing estuarine no-take fish sanctuary within the Kennedy Space Center, Florida

    USGS Publications Warehouse

    Johnson, D.R.; Funicelli, N.A.; Bohnsack, James A.

    1999-01-01

    Approximately 22% of the waters of the Merritt Island National Wildlife Refuge, which encompasses the Kennedy Space Center, Florida, have been closed to public access and fishing since 1962. These closed areas offer an opportunity to test the effectiveness of 'no-take' sanctuaries by analyzing two replicated estuarine areas. Areas open and closed to fishing were sampled from November 1986 to January 1990 with 653 random trammel-net sets, each enclosing 3,721 m2. Samples from no-fishing areas had significantly (P < 0.05) greater abundance and larger fishes than fished areas. Relative abundance (standardized catch per unit effort, CPUE) in protected areas (6.4 fish/set) was 2.6 times greater than in the fished areas (2.4 fish/set) for total game fish, 2.4 times greater for spotted seatrout Cynoscion nebulosus, 6.3 times greater for red drum Sciaenops ocellatus, 12.8 times greater for black drum Pogonias cromis, 5.3 times greater for common snook Centropomus undecimalis, and 2.6 times greater for striped mullet Mugil cephalus. Fishing had the primary effect on CPUE, independent of habitat and other environmental factors. Salinity and depth were important secondary factors affecting CPUE, followed by season or month, and temperature. The importance of specific factors varied with each species. Median and maximum size of red drum, spotted seatrout, black drum, and striped mullet were also significantly greater in the unfished areas. More and larger fish of spawning age were observed in the unfished areas for red drum, spotted seatrout, and black drum. Tagging studies documented export of important sport fish from protected areas to fished areas.

  1. Examining Masculine Norms and Peer Support within a Sample of Incarcerated African American Males

    PubMed Central

    Gordon, Derrick M.; Hawes, Samuel W.; Perez-Cabello, M. Arturo; Brabham-Hollis, Tamika; Lanza, A. Stephen; Dyson, William J.

    2015-01-01

    The adherence to masculine norms has been suggested to be influenced by social settings and context. Prisons have been described as a context where survival is dependent on adhering to strict masculine norms that may undermine reintegration back into the larger society. This study attempted to examine the relationship between masculine norms, peer support, and an individual’s length of incarceration on a sample of 139 African American men taking part in a pre-release community re-entry program. Results indicate that peer support was associated with length of incarceration and the interaction between the endorsement of masculine norms and peer support significantly predicted the length of incarceration for African American men in this sample. Implications for incarcerated African American men and future research directions are discussed. PMID:25866486

  2. Examining Masculine Norms and Peer Support within a Sample of Incarcerated African American Males.

    PubMed

    Gordon, Derrick M; Hawes, Samuel W; Perez-Cabello, M Arturo; Brabham-Hollis, Tamika; Lanza, A Stephen; Dyson, William J

    2013-01-01

    The adherence to masculine norms has been suggested to be influenced by social settings and context. Prisons have been described as a context where survival is dependent on adhering to strict masculine norms that may undermine reintegration back into the larger society. This study attempted to examine the relationship between masculine norms, peer support, and an individual's length of incarceration on a sample of 139 African American men taking part in a pre-release community re-entry program. Results indicate that peer support was associated with length of incarceration and the interaction between the endorsement of masculine norms and peer support significantly predicted the length of incarceration for African American men in this sample. Implications for incarcerated African American men and future research directions are discussed.

  3. Allometry and Ecology of the Bilaterian Gut Microbiome.

    PubMed

    Sherrill-Mix, Scott; McCormick, Kevin; Lauder, Abigail; Bailey, Aubrey; Zimmerman, Laurie; Li, Yingying; Django, Jean-Bosco N; Bertolani, Paco; Colin, Christelle; Hart, John A; Hart, Terese B; Georgiev, Alexander V; Sanz, Crickette M; Morgan, David B; Atencia, Rebeca; Cox, Debby; Muller, Martin N; Sommer, Volker; Piel, Alexander K; Stewart, Fiona A; Speede, Sheri; Roman, Joe; Wu, Gary; Taylor, Josh; Bohm, Rudolf; Rose, Heather M; Carlson, John; Mjungu, Deus; Schmidt, Paul; Gaughan, Celeste; Bushman, Joyslin I; Schmidt, Ella; Bittinger, Kyle; Collman, Ronald G; Hahn, Beatrice H; Bushman, Frederic D

    2018-03-27

    Classical ecology provides principles for construction and function of biological communities, but to what extent these apply to the animal-associated microbiota is just beginning to be assessed. Here, we investigated the influence of several well-known ecological principles on animal-associated microbiota by characterizing gut microbial specimens from bilaterally symmetrical animals (Bilateria) ranging from flies to whales. A rigorously vetted sample set containing 265 specimens from 64 species was assembled. Bacterial lineages were characterized by 16S rRNA gene sequencing. Previously published samples were also compared, allowing analysis of over 1,098 samples in total. A restricted number of bacterial phyla was found to account for the great majority of gut colonists. Gut microbial composition was associated with host phylogeny and diet. We identified numerous gut bacterial 16S rRNA gene sequences that diverged deeply from previously studied taxa, identifying opportunities to discover new bacterial types. The number of bacterial lineages per gut sample was positively associated with animal mass, paralleling known species-area relationships from island biogeography and implicating body size as a determinant of community stability and niche complexity. Samples from larger animals harbored greater numbers of anaerobic communities, specifying a mechanism for generating more-complex microbial environments. Predictions for species/abundance relationships from models of neutral colonization did not match the data set, pointing to alternative mechanisms such as selection of specific colonists by environmental niche. Taken together, the data suggest that niche complexity increases with gut size and that niche selection forces dominate gut community construction.
IMPORTANCE The intestinal microbiome of animals is essential for health, contributing to digestion of foods, proper immune development, inhibition of pathogen colonization, and catabolism of xenobiotic compounds. How these communities assemble and persist is just beginning to be investigated. Here we interrogated a set of gut samples from a wide range of animals to investigate the roles of selection and random processes in microbial community construction. We show that the numbers of bacterial species increased with the weight of host organisms, paralleling findings from studies of island biogeography. Communities in larger organisms tended to be more anaerobic, suggesting one mechanism for niche diversification. Nonselective processes enable specific predictions for community structure, but our samples did not match the predictions of the neutral model. Thus, these findings highlight the importance of niche selection in community construction and suggest mechanisms of niche diversification. Copyright © 2018 Sherrill-Mix et al.
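
The species-area-style relationship invoked above (lineage counts increasing with host body mass) is conventionally modeled as a power law S = c * M**z fitted on log-log axes. A sketch with invented data points, not the study's data:

```python
# Hedged sketch: fit a power law S = c * M**z (lineages vs. host mass)
# by ordinary least squares on log-transformed values. The masses and
# lineage counts below are made up for illustration.
import math

def fit_power_law(masses, lineages):
    """Return (c, z) for S = c * M**z via log-log linear regression."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(s) for s in lineages]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    z = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    c = math.exp(my - z * mx)
    return c, z

# Illustrative host masses (kg) and observed lineage counts.
masses = [0.001, 0.1, 10.0, 1000.0, 100000.0]
lineages = [12, 30, 80, 200, 520]
c, z = fit_power_law(masses, lineages)
# A positive exponent z reflects more lineages in larger animals.
assert z > 0
```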

  4. Rapid Sampling of Hydrogen Bond Networks for Computational Protein Design.

    PubMed

    Maguire, Jack B; Boyken, Scott E; Baker, David; Kuhlman, Brian

    2018-05-08

    Hydrogen bond networks play a critical role in determining the stability and specificity of biomolecular complexes, and the ability to design such networks is important for engineering novel structures, interactions, and enzymes. One key feature of hydrogen bond networks that makes them difficult to rationally engineer is that they are highly cooperative and are not energetically favorable until the hydrogen bonding potential has been satisfied for all buried polar groups in the network. Existing computational methods for protein design are ill-equipped for creating these highly cooperative networks because they rely on energy functions and sampling strategies that are focused on pairwise interactions. To enable the design of complex hydrogen bond networks, we have developed a new sampling protocol in the molecular modeling program Rosetta that explicitly searches for sets of amino acid mutations that can form self-contained hydrogen bond networks. For a given set of designable residues, the protocol often identifies many alternative sets of mutations/networks, and we show that it can readily be applied to large sets of residues at protein-protein interfaces or in the interior of proteins. The protocol builds on a recently developed method in Rosetta for designing hydrogen bond networks that has been experimentally validated for small symmetric systems but was not extensible to many larger protein structures and complexes. The sampling protocol we describe here not only recapitulates previously validated designs with performance improvements but also yields viable hydrogen bond networks for cases where the previous method fails, such as the design of large, asymmetric interfaces relevant to engineering protein-based therapeutics.
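
The cooperativity constraint described above, that a network is only favorable once every buried polar group in it is satisfied, can be illustrated with a toy graph check. Residues are nodes, candidate hydrogen bonds are edges, and each node carries a required bond count; the residue names and counts are invented, and this is not Rosetta's sampling protocol:

```python
# Hedged toy of the all-or-nothing network criterion: keep only connected
# components in which every polar node meets its required bond count.
from collections import defaultdict

def satisfied_components(edges, required):
    """Return connected components where every node's requirement is met."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, networks = set(), []
    for start in required:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:                       # depth-first component sweep
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        if all(len(adj[n]) >= required[n] for n in comp):
            networks.append(sorted(comp))
    return networks

# Invented residues: possible H-bonds and required bonds per residue.
edges = [("N15", "Q39"), ("Q39", "S42"), ("T7", "Y88")]
required = {"N15": 1, "Q39": 2, "S42": 1, "T7": 2, "Y88": 1}
nets = satisfied_components(edges, required)
assert nets == [["N15", "Q39", "S42"]]  # T7 needs 2 bonds but has only 1
```

The second component is rejected as a whole even though one of its members is satisfied, mirroring why pairwise energy terms alone are poorly suited to designing such networks.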

  5. Design of Phase II Non-inferiority Trials.

    PubMed

    Jung, Sin-Ho

    2017-09-01

    With the development of inexpensive treatment regimens and less invasive surgical procedures, we are confronted with non-inferiority study objectives. A non-inferiority phase III trial requires a roughly four times larger sample size than that of a similar standard superiority trial. Because of the large required sample size, we often face feasibility issues in opening a non-inferiority trial. Furthermore, due to lack of phase II non-inferiority trial design methods, we do not have an opportunity to investigate the efficacy of the experimental therapy through a phase II trial. As a result, we often fail to open a non-inferiority phase III trial and a large number of non-inferiority clinical questions still remain unanswered. In this paper, we want to develop some designs for non-inferiority randomized phase II trials with feasible sample sizes. First, we review a design method for non-inferiority phase III trials. Subsequently, we propose three different designs for non-inferiority phase II trials that can be used under different settings. Each method is demonstrated with examples. Each of the proposed design methods is shown to require a reasonable sample size for non-inferiority phase II trials. The three different non-inferiority phase II trial designs are used under different settings, but require similar sample sizes that are typical for phase II trials.
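
The "roughly four times larger sample size" figure follows directly from the standard two-sample normal-approximation formula: required n per arm scales as 1/delta**2, so a non-inferiority margin half the size of a superiority effect quadruples n. A sketch (the alpha level, power, and unit-variance model are conventional assumptions, not values from the paper):

```python
# Hedged sketch: two-sample sample size per arm,
# n = 2 * sigma^2 * (z_{1-alpha} + z_{power})^2 / delta^2,
# with one-sided alpha = 0.025 and 90% power (conventional choices).
from statistics import NormalDist

def n_per_arm(sigma, delta, alpha=0.025, power=0.9):
    z = NormalDist().inv_cdf
    return 2 * sigma ** 2 * (z(1 - alpha) + z(power)) ** 2 / delta ** 2

effect = 1.0    # detectable superiority effect (in sigma units)
margin = 0.5    # non-inferiority margin set to half the effect
ratio = n_per_arm(1.0, margin) / n_per_arm(1.0, effect)
assert round(ratio) == 4   # halving delta quadruples n per arm
```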

  6. Effect of photodegradation and biodegradation on the concentration and composition of dissolved organic matter in diverse waterbodies

    NASA Astrophysics Data System (ADS)

    Manalilkada Sasidharan, S.; Dash, P.; Singh, S.; Lu, Y.

    2017-12-01

The objective of this research was to quantify the effects of photodegradation and biodegradation on the dissolved organic matter (DOM) concentration and composition in five distinct waterbodies with diverse types of watershed land use and land cover in the southeastern United States. The water bodies included an agricultural pond, a lake in a predominantly forested watershed, a man-made reservoir, an estuary, and a bay. Two sets of samples were prepared from these water bodies by adding filtered water samples to unfiltered samples in a 10:1 ratio. The first set was kept in the sunlight during the day (12 hours), and colored dissolved organic matter (CDOM) absorption and fluorescence were measured periodically over a 30-day period to examine the effects of combined photo- and biodegradation. The second set of samples was kept in the dark to examine the effects of biodegradation alone, and CDOM absorption and fluorescence were measured at the same times as for the sunlight-exposed samples. Subsequently, spectrometric results in tandem with multivariate statistical analysis were used to interpret the lability vs. composition of DOM. Parallel factor analysis (PARAFAC) revealed the presence of four DOM components (C1-C4). C1 and C4 were microbial tryptophan-like, labile lighter components, while C2 and C3 were terrestrial humic-like or fulvic acid-type, larger aromatic refractory components. The principal component analysis (PCA) also revealed two distinct groups of DOM: C1 and C4 vs. C2 and C3. The negative PC1 loadings of C2, C3, HIX, a254, and SUVA indicated humic-like or fulvic-like, structurally complex, refractory aromatic DOM originating from higher plants in forested areas. C1, C4, SR, FI, and BI had positive PC1 loadings, indicating that structurally simpler, labile DOM was derived from agricultural areas or microbial activity. 
Dissolved organic carbon (DOC) decreased under combined photo- and biodegradation, and the transformation of components C2 and C3 into components C1 and C4 occurred at a much faster rate than under biodegradation alone. This observation suggests that the presence of sunlight facilitated the degradation of larger, recalcitrant, terrestrial humic-like compounds into smaller, labile microbial components.

  7. IMa2p - Parallel MCMC and inference of ancient demography under the Isolation with Migration (IM) model

    PubMed Central

    Sethuraman, Arun; Hey, Jody

    2015-01-01

IMa2 and related programs are used to study the divergence of closely related species and of populations within species. These methods are based on the sampling of genealogies using MCMC, and they can proceed quite slowly for larger data sets. We describe a parallel implementation, called IMa2p, that provides a nearly linear increase in genealogy sampling rate with the number of processors in use. IMa2p is written in C++ using OpenMPI, and scales well for demographic analyses of a large number of loci and populations, which are difficult to study using the serial version of the program. PMID:26059786

  8. Craniofacial changes in Icelandic children between 6 and 16 years of age - a longitudinal study.

    PubMed

    Thordarson, Arni; Johannsdottir, Berglind; Magnusson, Thordur Eydal

    2006-04-01

The aim of the present study was to describe the craniofacial changes between 6 and 16 years of age in a sample of Icelandic children. Complete sets of lateral cephalometric radiographs were available from 95 males and 87 females. Twenty-two reference points were digitized and processed by standard methods, using the Dentofacial Planner computer software program. Thirty-three angular and linear variables were calculated, including: basal sagittal and vertical measurements, facial ratio, and dental, cranial base and mandibular measurements. Gender differences in the angular measurements were not statistically significant in either age group, except for the variable s-n-na, which was larger in the 16-year-old boys (P ≤ 0.001). Linear variables were consistently larger in the boys compared with the girls at both age levels. During the observation period, mandibular prognathism increased but the basal sagittal jaw relationship, the jaw angle, the mandibular plane angle and cranial base flexure (n-s-ba) decreased in both genders (P ≤ 0.001). Maxillary prognathism increased only in the boys from 6 to 16 years. Inclination of the lower incisors and all the cranial base dimensions increased in both genders during the observation period. When the Icelandic sample was compared with a similar Norwegian sample, small differences could be noted in maxillary prognathism, the mandibular plane angle and the inclination of the maxilla. Larger differences were identified in the inclination of the lower incisors. These findings could be used as normative cephalometric standards for 6- and 16-year-old Icelandic children.

  9. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al. (2012) and Milanzi et al. (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n_1, n_2, …, n_L}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. 
We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
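The false impression of bias conditional upon the sample size is easy to reproduce by simulation. The two-stage sketch below is a hypothetical setup, not the authors' design: N ∈ {20, 40}, stop early when the interim mean is positive, true mean 0. Conditional on N the sample average looks clearly biased, yet marginally the deviation is much smaller:

```python
import random
random.seed(1)

def one_trial(mu=0.0, n1=20, n2=40):
    """Two-stage sketch: look after n1 observations, stop early when
    the interim mean is positive, else continue to n2."""
    xs = [random.gauss(mu, 1.0) for _ in range(n1)]
    if sum(xs) / n1 > 0:
        return n1, sum(xs) / n1            # stopped early
    xs += [random.gauss(mu, 1.0) for _ in range(n2 - n1)]
    return n2, sum(xs) / n2                # ran to completion

early, late = [], []
for _ in range(100_000):
    n, xbar = one_trial()
    (early if n == 20 else late).append(xbar)

mean = lambda v: sum(v) / len(v)
marginal = mean(early + late)
# Conditional on N the average deviates (about +0.18 when stopping
# early, about -0.09 when continuing); the marginal deviation is much
# smaller and shrinks further as the stage sizes grow.
print(round(mean(early), 3), round(mean(late), 3), round(marginal, 3))
```

The large conditional deviations are exactly the simulation artefact the abstract warns against reading as estimator bias.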

  10. Degradation analysis in the estimation of photometric redshifts from non-representative training sets

    NASA Astrophysics Data System (ADS)

    Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.

    2018-07-01

We perform an analysis of photometric redshifts estimated by using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples by cuts in the r band between 19.4 < r < 20.8. We obtain slightly better results with GPz on single-point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm allow us to obtain mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits well the spectroscopic distribution of galaxies in the mock testing set, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via density of galaxies in a field by using the photometric redshifts obtained with a non-representative training set.
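Training in colour space, as described above, removes the overall depth offset between a bright training set and a fainter target sample. A small sketch with made-up ugriz magnitudes (illustrative numbers, not from the paper):

```python
# Hypothetical ugriz magnitudes, one row per galaxy.
mags = [
    [19.8, 18.9, 18.4, 18.1, 17.9],
    [21.0, 20.1, 19.5, 19.2, 19.0],
    [22.3, 21.2, 20.4, 20.0, 19.7],
]

def to_colours(row):
    """Adjacent-band colours: u-g, g-r, r-i, i-z."""
    return [a - b for a, b in zip(row, row[1:])]

colours = [to_colours(row) for row in mags]

# A uniform 1.5 mag shift (a fainter, deeper sample) moves every point
# in magnitude space but leaves all colours unchanged, which is why a
# magnitude-non-representative training set hurts less in colour space.
shifted = [[m + 1.5 for m in row] for row in mags]
assert all(abs(c - d) < 1e-9
           for row, cs in zip(shifted, colours)
           for c, d in zip(to_colours(row), cs))
```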

  11. Degradation analysis in the estimation of photometric redshifts from non-representative training sets

    NASA Astrophysics Data System (ADS)

    Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.

    2018-04-01

We perform an analysis of photometric redshifts estimated by using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations as well as in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples by cuts in the r-band between 19.4 < r < 20.8. We obtain slightly better results with GPz on single-point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm allow us to obtain mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits well the spectroscopic distribution of galaxies in the mock testing set, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via density of galaxies in a field by using the photometric redshifts obtained with a non-representative training set.

  12. Changes in malnutrition and quality of nutritional care among aged residents in all nursing homes and assisted living facilities in Helsinki 2003-2011.

    PubMed

    Saarela, Riitta K T; Muurinen, Seija; Suominen, Merja H; Savikko, Niina N; Soini, Helena; Pitkälä, Kaisu H

    2017-09-01

While nutritional problems have been recognized as common in institutional settings for several decades, less is known about how nutrition and nutritional care have changed in these settings over time. To describe and compare the nutritional problems and nutritional care of residents in all nursing homes (NH) in 2003 and 2011 and residents in all assisted living facilities (ALF) in 2007 and 2011, in Helsinki, Finland. We combined four cross-sectional datasets of (1) residents from all NHs in 2003 (N=1987), (2) residents from all ALFs in 2007 (N=1377), (3) residents from all NHs in 2011 (N=1576) and (4) residents from all ALFs in 2011 (N=1585). All participants at each time point were assessed using identical methods, including the Mini Nutritional Assessment (MNA). The mean age of both samples from 2011 was higher, and a larger proportion suffered from dementia, compared to the earlier samples. A larger proportion of the residents in 2011 were assessed as either malnourished or at risk of malnutrition, according to the MNA, than in 2003 (NH: 93.5% vs. 88.9%, p<0.001) and in 2007 (ALF: 82.1% vs. 78.1%, p=0.007). The use of nutritional, vitamin D and calcium supplements, and snacks between meals was significantly more common in the 2011 residents, compared to the respective earlier samples. In 2011, institutionalized residents were more disabled and more prone to malnourishment than in 2003 or 2007. Institutions do seem to be more aware of good nutritional care for vulnerable older people, although there is still room for improvement. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. AN INDEPENDENT MEASUREMENT OF THE INCIDENCE OF Mg II ABSORBERS ALONG GAMMA-RAY BURST SIGHT LINES: THE END OF THE MYSTERY?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cucchiara, A.; Prochaska, J. X.; Zhu, G.

    2013-08-20

In 2006, Prochter et al. reported a statistically significant enhancement of very strong Mg II absorption systems intervening the sight lines to gamma-ray bursts (GRBs) relative to the incidence of such absorption along quasar sight lines. This counterintuitive result has inspired a diverse set of astrophysical explanations (e.g., dust, gravitational lensing) but none of these has obviously resolved the puzzle. Using the largest set of GRB afterglow spectra available, we reexamine the purported enhancement. In an independent sample of GRB spectra with a survey path three times larger than Prochter et al., we measure the incidence per unit redshift of ≥1 Å rest-frame equivalent width Mg II absorbers at z ≈ 1 to be l(z) = 0.18 ± 0.06. This is fully consistent with current estimates for the incidence of such absorbers along quasar sight lines. Therefore, we do not confirm the original enhancement and suggest those results suffered from a statistical fluke. Signatures of the original result do remain in our full sample (l(z) shows an ≈1.5 enhancement over l(z)_QSO), but the statistical significance now lies at ≈90% c.l. Restricting our analysis to the subset of high-resolution spectra of GRB afterglows (which overlaps substantially with Prochter et al.), we still reproduce a statistically significant enhancement of Mg II absorption. The reason for this excess, if real, is still unclear since there is no connection between the rapid afterglow follow-up process with echelle (or echellette) spectrographs and the detectability of strong Mg II doublets. Only a larger sample of such high-resolution data will shed some light on this matter.

  14. Will Big Data Close the Missing Heritability Gap?

    PubMed

    Kim, Hwasoon; Grueneberg, Alexander; Vazquez, Ana I; Hsu, Stephen; de Los Campos, Gustavo

    2017-11-01

Despite the important discoveries reported by genome-wide association (GWA) studies, for most traits and diseases the prediction R-squared (R-sq.) achieved with genetic scores remains considerably lower than the trait heritability. Modern biobanks will soon deliver unprecedentedly large biomedical data sets: Will the advent of big data close the gap between the trait heritability and the proportion of variance that can be explained by a genomic predictor? We addressed this question using Bayesian methods and a data analysis approach that produces a surface response relating prediction R-sq. with sample size and model complexity (e.g., number of SNPs). We applied the methodology to data from the interim release of the UK Biobank. Focusing on human height as a model trait and using 80,000 records for model training, we achieved a prediction R-sq. in testing (n = 22,221) of 0.24 (95% C.I.: 0.23-0.25). Our estimates show that prediction R-sq. increases with sample size, reaching an estimated plateau at values that ranged from 0.1 to 0.37 for models using 500 and 50,000 (GWA-selected) SNPs, respectively. Soon much larger data sets will become available. Using the estimated surface response, we forecast that larger sample sizes will lead to further improvements in prediction R-sq. We conclude that big data will lead to a substantial reduction of the gap between trait heritability and the proportion of interindividual differences that can be explained with a genomic predictor. However, even with the power of big data, for complex traits we anticipate that the gap between prediction R-sq. and trait heritability will not be fully closed. Copyright © 2017 by the Genetics Society of America.
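The plateau forecast can be illustrated with a toy version of the surface response: fit a saturating curve to (sample size, R-sq.) pairs and read off its asymptote. The numbers and the functional form below are illustrative assumptions, not the paper's fitted model:

```python
# Hypothetical (training-set size, prediction R-sq.) pairs of the kind
# the surface response summarises.
pairs = [(5_000, 0.10), (10_000, 0.145), (20_000, 0.185),
         (40_000, 0.215), (80_000, 0.24)]

def sse(a, b):
    """Squared error of the saturating model r2(n) = a * n / (n + b)."""
    return sum((a * n / (n + b) - r2) ** 2 for n, r2 in pairs)

# Coarse grid search over the asymptote a and half-saturation size b.
grid = [(0.20 + 0.005 * i, 1000.0 * j)
        for i in range(61) for j in range(5, 61)]
a, b = min(grid, key=lambda ab: sse(*ab))
# The asymptote a forecasts the R-sq. plateau reached as n grows.
print(f"forecast plateau ~ {a:.3f}, half-saturation n ~ {b:.0f}")
```

With these toy points the fitted asymptote lies a little above the largest observed R-sq., mirroring the paper's conclusion that bigger samples help but do not reach heritability.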

  15. Will Big Data Close the Missing Heritability Gap?

    PubMed Central

    Kim, Hwasoon; Grueneberg, Alexander; Vazquez, Ana I.; Hsu, Stephen; de los Campos, Gustavo

    2017-01-01

    Despite the important discoveries reported by genome-wide association (GWA) studies, for most traits and diseases the prediction R-squared (R-sq.) achieved with genetic scores remains considerably lower than the trait heritability. Modern biobanks will soon deliver unprecedentedly large biomedical data sets: Will the advent of big data close the gap between the trait heritability and the proportion of variance that can be explained by a genomic predictor? We addressed this question using Bayesian methods and a data analysis approach that produces a surface response relating prediction R-sq. with sample size and model complexity (e.g., number of SNPs). We applied the methodology to data from the interim release of the UK Biobank. Focusing on human height as a model trait and using 80,000 records for model training, we achieved a prediction R-sq. in testing (n = 22,221) of 0.24 (95% C.I.: 0.23–0.25). Our estimates show that prediction R-sq. increases with sample size, reaching an estimated plateau at values that ranged from 0.1 to 0.37 for models using 500 and 50,000 (GWA-selected) SNPs, respectively. Soon much larger data sets will become available. Using the estimated surface response, we forecast that larger sample sizes will lead to further improvements in prediction R-sq. We conclude that big data will lead to a substantial reduction of the gap between trait heritability and the proportion of interindividual differences that can be explained with a genomic predictor. However, even with the power of big data, for complex traits we anticipate that the gap between prediction R-sq. and trait heritability will not be fully closed. PMID:28893854

  16. A biological assessment of streams in the eastern United States using a predictive model for macroinvertebrate assemblages

    USGS Publications Warehouse

    Carlisle, D.M.; Meador, M.R.

    2007-01-01

A predictive model (RIVPACS-type) for benthic macroinvertebrates was constructed to assess the biological condition of 1,087 streams sampled throughout the eastern United States from 1993 to 2003 as part of the U.S. Geological Survey's National Water-Quality Assessment Program. A subset of 338 sites was designated as reference quality, 28 of which were withheld from model calibration and used to independently evaluate model precision and accuracy. The ratio of observed (O) to expected (E) taxa richness was used as a continuous measure of biological condition, and sites with O/E values <0.8 were classified as biologically degraded. Spatiotemporal variability of O/E values was evaluated with repeated annual and within-site samples at reference sites. Values of O/E were regressed on a measure of urbanization in three regions and compared among streams in different land-use settings. The model accurately predicted the expected taxa at validation sites with high precision (SD = 0.11). Within-site spatial variability in O/E values was much larger than annual and among-site variation at reference sites and was likely caused by environmental differences among sampled reaches. Values of O/E were significantly correlated with basin road density in the Boston, Massachusetts (p < 0.001), Birmingham, Alabama (p = 0.002), and Green Bay, Wisconsin (p = 0.034) metropolitan areas, but the strength of the relations varied among regions. Urban streams were more depleted of taxa than streams in other land-use settings, but larger networks of riparian forest appeared to mediate biological degradation. Taxa that occurred less frequently than predicted by the model were those known to be generally intolerant of a variety of anthropogenic stressors. © 2007 American Water Resources Association.
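The O/E index itself is simple to compute once the model supplies per-taxon capture probabilities. A sketch with made-up taxa; the ≥0.5 probability cutoff is a common RIVPACS-style convention assumed here, not taken from the paper:

```python
def o_over_e(observed_taxa, capture_probs, p_min=0.5):
    """RIVPACS-style O/E: observed vs expected taxa richness, with E
    the summed capture probabilities of taxa predicted at p >= p_min."""
    expected = {t: p for t, p in capture_probs.items() if p >= p_min}
    E = sum(expected.values())
    O = sum(1 for t in observed_taxa if t in expected)
    return O / E

# Model-predicted capture probabilities for a site (made-up taxa).
probs = {"Baetis": 0.95, "Hydropsyche": 0.80, "Epeorus": 0.70,
         "Drunella": 0.55, "Optioservus": 0.40}
site_sample = {"Baetis", "Hydropsyche", "Optioservus"}

ratio = o_over_e(site_sample, probs)   # 2 observed of E = 3.0 expected
degraded = ratio < 0.8                 # classification rule in the study
```

Here two of the four confidently expected taxa were collected, so O/E ≈ 0.67 and the site would be flagged as biologically degraded under the <0.8 rule.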

  17. An aftereffect of adaptation to mean size

    PubMed Central

    Corbett, Jennifer E.; Wurnitsch, Nicole; Schwartz, Alex; Whitney, David

    2013-01-01

The visual system rapidly represents the mean size of sets of objects. Here, we investigated whether mean size is explicitly encoded by the visual system, along a single dimension like texture, numerosity, and other visual dimensions susceptible to adaptation. Observers adapted to two sets of dots with different mean sizes, presented simultaneously in opposite visual fields. After adaptation, two test patches replaced the adapting dot sets, and participants judged which test appeared to have the larger average dot diameter. They generally perceived the test that replaced the smaller mean size adapting set as being larger than the test that replaced the larger adapting set. This differential aftereffect held for single test dots (Experiment 2) and high-pass filtered displays (Experiment 3), and changed systematically as a function of the variance of the adapting dot sets (Experiment 4), providing additional support that mean size is adaptable, and therefore an explicitly encoded dimension of visual scenes. PMID:24348083

  18. Tungsten Carbide Grain Size Computation for WC-Co Dissimilar Welds

    NASA Astrophysics Data System (ADS)

    Zhou, Dongran; Cui, Haichao; Xu, Peiquan; Lu, Fenggui

    2016-06-01

A "two-step" image processing method based on electron backscatter diffraction in scanning electron microscopy was used to compute the tungsten carbide (WC) grain size distribution for tungsten inert gas (TIG) welds and laser welds. Twenty-four images were collected on randomly set fields per sample located at the top, middle, and bottom of a cross-sectional micrograph. Each field contained 500 to 1500 WC grains. The images were recognized through clustering-based image segmentation and WC grain growth recognition. According to the WC grain size computation and experiments, a simple WC-WC interaction model was developed to explain the WC dissolution, grain growth, and aggregation in welded joints. The WC-WC interaction and blunt corners were characterized using scanning and transmission electron microscopy. The WC grain size distribution and the effects of heat input E on grain size distribution for the laser samples were discussed. The results indicate that (1) the grain size distribution follows a Gaussian distribution. Grain sizes at the top of the weld were larger than those near the middle and weld root because of power attenuation. (2) Significant WC grain growth occurred during welding as observed in the as-welded micrographs. The average grain size was 11.47 μm in the TIG samples, which was much larger than that in base metal 1 (BM1, 2.13 μm). The grain size distribution curves for the TIG samples revealed a broad particle size distribution without fine grains. The average grain size (1.59 μm) in the laser samples was larger than that in base metal 2 (BM2, 1.01 μm). (3) WC-WC interactions exhibited complex plane, edge, and blunt corner characteristics during grain growth. A WC(11̄00)-to-WC(0110̄) edge disappeared and became a blunt WC(1010̄) plane, several grains with two- or three-sided planes and edges disappeared into a multi-edge, and adjacent WC grains merged.

  19. The influence of locus number and information content on species delimitation: an empirical test case in an endangered Mexican salamander.

    PubMed

    Hime, Paul M; Hotaling, Scott; Grewelle, Richard E; O'Neill, Eric M; Voss, S Randal; Shaffer, H Bradley; Weisrock, David W

    2016-12-01

    Perhaps the most important recent advance in species delimitation has been the development of model-based approaches to objectively diagnose species diversity from genetic data. Additionally, the growing accessibility of next-generation sequence data sets provides powerful insights into genome-wide patterns of divergence during speciation. However, applying complex models to large data sets is time-consuming and computationally costly, requiring careful consideration of the influence of both individual and population sampling, as well as the number and informativeness of loci on species delimitation conclusions. Here, we investigated how locus number and information content affect species delimitation results for an endangered Mexican salamander species, Ambystoma ordinarium. We compared results for an eight-locus, 137-individual data set and an 89-locus, seven-individual data set. For both data sets, we used species discovery methods to define delimitation models and species validation methods to rigorously test these hypotheses. We also used integrated demographic model selection tools to choose among delimitation models, while accounting for gene flow. Our results indicate that while cryptic lineages may be delimited with relatively few loci, sampling larger numbers of loci may be required to ensure that enough informative loci are available to accurately identify and validate shallow-scale divergences. These analyses highlight the importance of striking a balance between dense sampling of loci and individuals, particularly in shallowly diverged lineages. They also suggest the presence of a currently unrecognized, endangered species in the western part of A. ordinarium's range. © 2016 John Wiley & Sons Ltd.

  20. Day of Week, Site of Service, and Patient Complexity Differences in Venous Ultrasound Interpreted by Radiologists Versus Nonradiologists.

    PubMed

    Prabhakar, Anand M; Gottumukkala, Ravi V; Wang, Wenyi; Hughes, Danny R; Duszak, Richard

    2018-05-07

Nationally, nonradiologists interpret an increasing proportion of lower extremity venous duplex ultrasound (LEVDU) examinations. We aimed to study day of week, site of service, and patient complexity differences in LEVDU services interpreted by radiologists versus nonradiologists. Using carrier claims files for a 5% national sample of Medicare beneficiaries from 2012 to 2015, we retrospectively classified all LEVDU examinations by physician specialty (radiologist versus nonradiologist), day of week (weekday versus weekend), site of service, and patient Charlson Comorbidity Index (CCI) scores. Pearson's χ² was used to test statistical significance. Of 760,433 LEVDU examinations for which provider specialty could be determined, 439,964 (58%) were interpreted by radiologists and 320,469 (42%) by nonradiologists. On weekends, radiologists interpreted 75% (66,094 of 88,244) and nonradiologists 25% (22,150 of 88,244) (P < .0001). Of LEVDU examinations interpreted by radiologists, 57% were performed in the inpatient or emergency department settings, and 70% of LEVDU examinations interpreted by nonradiologists were performed in the private office or outpatient hospital setting. Radiologists interpreted a slightly larger proportion (17%) of their examinations on patients with more comorbidities (CCI of ≥3) than nonradiologists (15%) (P < .0001). Compared with nonradiologists, radiologists interpret a disproportionately larger share of weekend (versus weekday) LEVDU examinations and a considerably larger proportion in higher acuity settings. Additionally, the patients on whom they render services have more comorbidities. To optimize around-the-clock patient access to necessary imaging, emerging quality payment programs should consider the timing and sites of service, as well as patient complexity. Copyright © 2018 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  1. Understanding the role of conscientiousness in healthy aging: where does the brain come in?

    PubMed

    Patrick, Christopher J

    2014-05-01

In reviewing this impressive series of articles, I was struck by 2 points in particular: (a) the fact that the empirically oriented articles focused on analyses of data from very large samples, with the articles by Friedman, Kern, Hampson, and Duckworth (2014) and Kern, Hampson, Goldberg, and Friedman (2014) highlighting an approach to merging existing data sets through use of "metric bridges" to address key questions not addressable through 1 data set alone, and (b) the fact that the articles as a whole included limited mention of neuroscientific (i.e., brain research) concepts, methods, and findings. One likely reason for the lack of reference to brain-oriented work is the persisting gap between smaller sample size lab-experimental and larger sample size multivariate-correlational approaches to psychological research. As a strategy for addressing this gap and bringing a distinct neuroscientific component to the National Institute on Aging's conscientiousness and health initiative, I suggest that the metric bridging approach highlighted by Friedman and colleagues could be used to connect existing large-scale data sets containing both neurophysiological variables and measures of individual difference constructs to other data sets containing richer arrays of nonphysiological variables, including data from longitudinal or twin studies focusing on personality and health-related outcomes (e.g., Terman Life Cycle study and Hawaii longitudinal studies, as described in the article by Kern et al., 2014). (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  2. Computational tools for exact conditional logistic regression.

    PubMed

    Corcoran, C; Mehta, C; Patel, N; Senchaudhuri, P

Logistic regression analyses are often challenged by the inability of unconditional likelihood-based approximations to yield consistent, valid estimates and p-values for model parameters. This can be due to sparseness or separability in the data. Conditional logistic regression, though useful in such situations, can also be computationally infeasible when the sample size or number of explanatory covariates is large. We review recent developments that allow efficient approximate conditional inference, including Monte Carlo sampling and saddlepoint approximations. We demonstrate through real examples that these methods enable the analysis of significantly larger and more complex data sets. We find in this investigation that for these moderately large data sets Monte Carlo seems a better alternative, as it provides unbiased estimates of the exact results and can be executed in less CPU time than can the single saddlepoint approximation. Moreover, the double saddlepoint approximation, while computationally the easiest to obtain, offers little practical advantage. It produces unreliable results and cannot be computed when a maximum likelihood solution does not exist. Copyright 2001 John Wiley & Sons, Ltd.
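The Monte Carlo route to exact conditional inference can be sketched for the simplest case, a single binary covariate: conditioning on the sufficient statistics amounts to permuting the exposure labels and comparing the observed score with its permutation distribution. Toy data below; the implementations reviewed in the paper sample far more general conditional distributions:

```python
import random
random.seed(0)

# Toy data: binary outcome y and a single binary exposure x.
y = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]
x = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]

# Observed sufficient statistic for the exposure effect.
t_obs = sum(yi * xi for yi, xi in zip(y, x))

# Conditioning on sum(y) and sum(x), sample the conditional distribution
# of the statistic by permuting the exposure labels.
reps = 100_000
extreme = 0
for _ in range(reps):
    perm = x[:]
    random.shuffle(perm)               # preserves sum(x) and sum(y)
    if sum(yi * xi for yi, xi in zip(y, perm)) >= t_obs:
        extreme += 1

p_value = extreme / reps   # Monte Carlo one-sided conditional p-value
```

For this 2×2 layout the permutation distribution is hypergeometric, so the Monte Carlo estimate converges to the exact one-sided p-value (26/252 ≈ 0.103 here).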

  3. Key issues to consider and innovative ideas on fall prevention in the geriatric department of a teaching hospital.

    PubMed

    Chan, Daniel Ky; Sherrington, Cathie; Naganathan, Vasi; Xu, Ying Hua; Chen, Jack; Ko, Anita; Kneebone, Ian; Cumming, Robert

    2018-06-01

    Falls in hospital are common and up to 70% result in injury, leading to increased length of stay and accounting for 10% of patient safety-related deaths. Yet, high-quality evidence guiding best practice is lacking. Fall prevention strategies have worked in some trials but not in others. Differences in study setting (acute, subacute, rehabilitation) and sampling of patients (cognitively intact or impaired) may explain the difference in results. This article discusses these important issues and describes the strategies to prevent falls in the acute hospital setting we have studied, which engage the cognitively impaired who are more likely to fall. We have used video clips rather than verbal instruction to educate patients, and are optimistic that this approach may work. We have also explored the option of co-locating high fall risk patients in a close observation room for supervision, with promising results. Further studies using larger sample sizes are required to confirm our findings. © 2018 AJA Inc.

  4. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    NASA Astrophysics Data System (ADS)

    Nakamura, Katsuhiko; Hoshina, Akemi

    This paper discusses recent improvements and extensions in the Synapse system for inductive inference of context free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules is extended from Revised Chomsky Normal Form A→βγ, where each of β and γ is either a terminal or a nonterminal symbol, to Extended Chomsky Normal Form, which also includes rules of the form A→B. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm in the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which fills in the missing part of the derivation tree for the positive string. The improved version also employs a novel search strategy, called serial search, in addition to the minimum rule set search. The synthesis of grammars by the serial search is faster than the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that by the minimum set search, and for some CFLs the serial search can find no appropriate grammar. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and search strategies.
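
    The bottom-up parsing that Synapse's rule generation builds on can be illustrated with a plain CYK recognizer for a grammar in Chomsky Normal Form. The grammar and test strings below are illustrative (a standard a^n b^n grammar), not the paper's; a minimal sketch:

```python
# Minimal CYK recognizer for a grammar in Chomsky Normal Form.
# Rules are (lhs, rhs) pairs where rhs is either a 1-tuple (terminal)
# or a 2-tuple of nonterminals.

def cyk(grammar, start, s):
    n = len(s)
    # table[i][j] holds the nonterminals deriving the substring s[i:i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(s):
        for lhs, rhs in grammar:
            if rhs == (ch,):
                table[i][0].add(lhs)
    for span in range(2, n + 1):          # substring length
        for i in range(n - span + 1):     # start position
            for k in range(1, span):      # split point
                for lhs, rhs in grammar:
                    if (len(rhs) == 2
                            and rhs[0] in table[i][k - 1]
                            and rhs[1] in table[i + k][span - k - 1]):
                        table[i][span - 1].add(lhs)
    return start in table[0][n - 1]

# a^n b^n grammar: S -> A T | A B, T -> S B, A -> 'a', B -> 'b'
g = [('S', ('A', 'T')), ('S', ('A', 'B')), ('T', ('S', 'B')),
     ('A', ('a',)), ('B', ('b',))]
print(cyk(g, 'S', 'aabb'), cyk(g, 'S', 'aab'))  # True False
```

    In Synapse's "bridging" step, it is precisely the empty cells of such a table for a positive sample that indicate where new production rules must be synthesized.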

  5. Using Remote Sensing to Determine the Spatial Scales of Estuaries

    NASA Astrophysics Data System (ADS)

    Davis, C. O.; Tufillaro, N.; Nahorniak, J.

    2016-02-01

    One challenge facing Earth system science is to understand and quantify the complexity of rivers, estuaries, and coastal zone regions. Earlier studies using data from airborne hyperspectral imagers (Bissett et al., 2004, Davis et al., 2007) demonstrated from a very limited data set that the spatial scales of the coastal ocean could be resolved with spatial sampling of 100 m Ground Sample Distance (GSD) or better. To develop a much larger data set, Aurin et al. (2013) used MODIS 250 m data for a wide range of coastal regions. Their conclusion was that farther offshore 500 m GSD was adequate to resolve large river plume features while nearshore regions (a few kilometers from the coast) needed higher spatial resolution data not available from MODIS. Building on our airborne experience, the Hyperspectral Imager for the Coastal Ocean (HICO, Lucke et al., 2011) was designed to provide hyperspectral data for the coastal ocean at 100 m GSD. HICO operated on the International Space Station for 5 years and collected over 10,000 scenes of the coastal ocean and other regions around the world. Here we analyze HICO data from an example set of major river delta regions to assess the spatial scales of variability in those systems. In one system, the San Francisco Bay and Delta, we also analyze Landsat 8 OLI data at 30 m and 15 m to validate the 100 m GSD sampling scale for the Bay and assess the spatial sampling needed farther upriver.

  6. Ground-water quality beneath irrigated agriculture in the central High Plains aquifer, 1999-2000

    USGS Publications Warehouse

    Bruce, Breton W.; Becker, Mark F.; Pope, Larry M.; Gurdak, Jason J.

    2003-01-01

    In 1999 and 2000, 30 water-quality monitoring wells were installed in the central High Plains aquifer to evaluate the quality of recently recharged ground water in areas of irrigated agriculture and to identify the factors affecting ground-water quality. Wells were installed adjacent to irrigated agricultural fields with 10- or 20-foot screened intervals placed near the water table. Each well was sampled once for about 100 water-quality constituents associated with agricultural practices. Water samples from 70 percent of the wells (21 of 30 sites) contained nitrate concentrations larger than expected background concentrations (about 3 mg/L as N) and detectable pesticides. Atrazine or its metabolite, deethylatrazine, were detected with greater frequency than other pesticides and were present in all 21 samples where pesticides were detected. The 21 samples with detectable pesticides also contained tritium concentrations large enough to indicate that at least some part of the water sample had been recharged within about the last 50 years. These 21 ground-water samples are considered to show water-quality effects related to irrigated agriculture. The remaining 9 ground-water samples contained no pesticides, small tritium concentrations, and nitrate concentrations less than 3.45 milligrams per liter as nitrogen. These samples are considered unaffected by the irrigated agricultural land-use setting. Nitrogen isotope ratios indicate that commercial fertilizer was the dominant source of nitrate in 13 of the 21 samples affected by irrigated agriculture. Nitrogen isotope ratios for 4 of these 21 samples were indicative of an animal waste source. Dissolved-solids concentrations were larger in samples affected by irrigated agriculture, with large sulfate concentrations having strong correlation with large dissolved-solids concentrations in these samples. A strong statistical correlation is shown between samples affected by irrigated agriculture and sites with large rates of pesticide and nitrogen applications and shallow depths to ground water.

  7. Multivariate normality

    NASA Technical Reports Server (NTRS)

    Crutcher, H. L.; Falls, L. W.

    1976-01-01

    Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
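
    The extension of the chi-square test to the multivariate normal model rests on the fact that squared Mahalanobis distances of p-dimensional normal data follow (approximately, for large samples) a chi-square distribution with p degrees of freedom. A minimal check of that property in Python, with simulated bivariate data and an arbitrary mean and covariance (not from the report):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated bivariate normal data; mean and covariance are illustrative.
mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.6], [0.6, 1.0]])
x = rng.multivariate_normal(mean, cov, size=2000)

# Squared Mahalanobis distances from the sample mean.
xc = x - x.mean(axis=0)
inv = np.linalg.inv(np.cov(x, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', xc, inv, xc)

# Under bivariate normality, D^2 ~ chi-square with 2 d.o.f., which is an
# exponential with mean 2; its 0.95 quantile is -2*ln(0.05) = 5.991.
frac = np.mean(d2 < 5.991)
print(round(frac, 2))  # near 0.95 if the data are bivariate normal
```

    A gross departure of this fraction (or of the full empirical distribution of D^2) from the chi-square reference is the kind of signal the tabulated tests in the report are designed to detect.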

  8. Cognitive Reserve and Social Capital Accrued in Early and Midlife Moderate the Relation of Psychological Stress to Cognitive Performance in Old Age.

    PubMed

    Ihle, Andreas; Oris, Michel; Sauter, Julia; Rimmele, Ulrike; Kliegel, Matthias

    2018-06-05

    The present study set out to investigate the relation of psychological stress to cognitive performance and its interplay with key life course markers of cognitive reserve and social capital in a large sample of older adults. We assessed cognitive performance (verbal abilities and processing speed) and psychological stress in 2,812 older adults. Participants reported information on education, occupation, leisure activities, family, and close friends. Greater psychological stress was significantly related to lower performance in verbal abilities and processing speed. Moderation analyses suggested that the relations of psychological stress to cognitive performance were reduced in individuals with higher education, a higher cognitive level of the first profession practiced after education, a larger number of midlife leisure activities, a larger number of significant family members, and a larger number of close friends. Cognitive reserve and social capital accrued in early and midlife may reduce the detrimental influences of psychological stress on cognitive functioning in old age. © 2018 S. Karger AG, Basel.

  9. GIS-based niche modeling for mapping species' habitats

    USGS Publications Warehouse

    Rotenberry, J.T.; Preston, K.L.; Knick, S.

    2006-01-01

    Ecological 'niche modeling' using presence-only locality data and large-scale environmental variables provides a powerful tool for identifying and mapping suitable habitat for species over large spatial extents. We describe a niche modeling approach that identifies a minimum (rather than an optimum) set of basic habitat requirements for a species, based on the assumption that constant environmental relationships in a species' distribution (i.e., variables that maintain a consistent value where the species occurs) are most likely to be associated with limiting factors. Environmental variables that take on a wide range of values where a species occurs are less informative because they do not limit a species' distribution, at least over the range of variation sampled. This approach is operationalized by partitioning Mahalanobis D2 (standardized difference between values of a set of environmental variables for any point and mean values for those same variables calculated from all points at which a species was detected) into independent components. The smallest of these components represents the linear combination of variables with minimum variance; increasingly larger components represent larger variances and are increasingly less limiting. We illustrate this approach using the California Gnatcatcher (Polioptila californica Brewster) and provide SAS code to implement it.
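
    The partitioning of Mahalanobis D2 into independent components corresponds to an eigendecomposition of the presence-point covariance matrix: each component of D2 lies along one eigenvector, and the smallest-eigenvalue (minimum-variance) component is the candidate limiting axis. The SAS code the authors provide is not reproduced here; this is a sketch of the same decomposition with hypothetical environmental data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical presence-point environmental data (n sites x 3 variables),
# constructed with correlated columns of unequal variance.
env = rng.normal(size=(200, 3)) @ np.array([[1.0, 0.8, 0.1],
                                            [0.0, 0.5, 0.2],
                                            [0.0, 0.0, 1.5]])

mu = env.mean(axis=0)
cov = np.cov(env, rowvar=False)

# Eigendecomposition of the covariance; eigenvalues ascend, so the first
# component is the minimum-variance ("most limiting") combination.
vals, vecs = np.linalg.eigh(cov)

def d2_components(point):
    # Project onto the eigenvectors; the scaled squares sum to full D^2.
    z = vecs.T @ (point - mu)
    return z**2 / vals

pt = env[0]
comps = d2_components(pt)
full = (pt - mu) @ np.linalg.inv(cov) @ (pt - mu)
print(np.isclose(comps.sum(), full))  # partition reproduces D^2
```

    Mapping only the smallest component over a landscape then highlights sites that satisfy the minimum habitat requirements, even where the larger-variance (less limiting) components differ from the presence points.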

  10. Multigenic Delineation of Lower Jaw Deformity in Triploid Atlantic Salmon (Salmo salar L.)

    PubMed Central

    Amoroso, Gianluca; Ventura, Tomer; Elizur, Abigail; Carter, Chris G.

    2016-01-01

    Lower jaw deformity (LJD) is a skeletal anomaly affecting farmed triploid Atlantic salmon (Salmo salar L.) which leads to considerable economic losses for industry and has animal welfare implications. The present study employed transcriptome analysis in parallel with real-time qPCR techniques to characterise for the first time the LJD condition in triploid Atlantic salmon juveniles using two independent sample sets: experimentally-sourced salmon (60 g) and commercially produced salmon (100 g). A total of eleven genes, some detected/identified through the transcriptome analysis (fbn2, gal and gphb5) and others previously determined to be related to skeletal physiology (alp, bmp4, col1a1, col2a1, fgf23, igf1, mmp13, ocn), were tested in the two independent sample sets. Gphb5, a recently discovered hormone, was significantly (P < 0.05) down-regulated in LJD affected fish in both sample sets, suggesting a possible hormonal involvement. In-situ hybridization detected gphb5 expression in oral epithelium, teeth and skin of the lower jaw. Col2a1 showed the same consistent significant (P < 0.05) down-regulation in LJD suggesting a possible cartilaginous impairment as a distinctive feature of the condition. Significant (P < 0.05) differential expression of other genes found in either one or the other sample set highlighted the possible effect of stage of development or condition progression on transcription and showed that anomalous bone development, likely driven by cartilage impairment, is more evident at larger fish sizes. The present study improved our understanding of LJD suggesting that a cartilage impairment likely underlies the condition and col2a1 may be a marker. In addition, the involvement of gphb5 urges further investigation of a hormonal role in LJD and skeletal physiology in general. PMID:27977809

  11. Multigenic Delineation of Lower Jaw Deformity in Triploid Atlantic Salmon (Salmo salar L.).

    PubMed

    Amoroso, Gianluca; Ventura, Tomer; Cobcroft, Jennifer M; Adams, Mark B; Elizur, Abigail; Carter, Chris G

    2016-01-01

    Lower jaw deformity (LJD) is a skeletal anomaly affecting farmed triploid Atlantic salmon (Salmo salar L.) which leads to considerable economic losses for industry and has animal welfare implications. The present study employed transcriptome analysis in parallel with real-time qPCR techniques to characterise for the first time the LJD condition in triploid Atlantic salmon juveniles using two independent sample sets: experimentally-sourced salmon (60 g) and commercially produced salmon (100 g). A total of eleven genes, some detected/identified through the transcriptome analysis (fbn2, gal and gphb5) and others previously determined to be related to skeletal physiology (alp, bmp4, col1a1, col2a1, fgf23, igf1, mmp13, ocn), were tested in the two independent sample sets. Gphb5, a recently discovered hormone, was significantly (P < 0.05) down-regulated in LJD affected fish in both sample sets, suggesting a possible hormonal involvement. In-situ hybridization detected gphb5 expression in oral epithelium, teeth and skin of the lower jaw. Col2a1 showed the same consistent significant (P < 0.05) down-regulation in LJD suggesting a possible cartilaginous impairment as a distinctive feature of the condition. Significant (P < 0.05) differential expression of other genes found in either one or the other sample set highlighted the possible effect of stage of development or condition progression on transcription and showed that anomalous bone development, likely driven by cartilage impairment, is more evident at larger fish sizes. The present study improved our understanding of LJD suggesting that a cartilage impairment likely underlies the condition and col2a1 may be a marker. In addition, the involvement of gphb5 urges further investigation of a hormonal role in LJD and skeletal physiology in general.

  12. Collision rates and impact velocities in the Main Asteroid Belt

    NASA Technical Reports Server (NTRS)

    Farinella, Paolo; Davis, Donald R.

    1992-01-01

    Wetherill's (1967) algorithm is presently used to compute the mutual collision probabilities and impact velocities of a set of 682 asteroids with radii larger than 50 km, representative of a bias-free sample of asteroid orbits. While collision probabilities are nearly independent of eccentricities, a significant decrease is associated with larger inclinations. Collisional velocities grow steeply with orbital eccentricity and inclination, but with curiously small variation across the asteroid belt. Family asteroids are noted to undergo collisions with other family members 2-3 times more often than with nonmembers.

  13. A self-sampling method to obtain large volumes of undiluted cervicovaginal secretions.

    PubMed

    Boskey, Elizabeth R; Moench, Thomas R; Hees, Paul S; Cone, Richard A

    2003-02-01

    Studies of vaginal physiology and pathophysiology sometimes require larger volumes of undiluted cervicovaginal secretions than can be obtained by current methods. A convenient method for self-sampling these secretions outside a clinical setting can facilitate such studies of reproductive health. The goal was to develop a vaginal self-sampling method for collecting large volumes of undiluted cervicovaginal secretions. A menstrual collection device (the Instead cup) was inserted briefly into the vagina to collect secretions that were then retrieved from the cup by centrifugation in a 50-ml conical tube. All 16 women asked to perform this procedure found it feasible and acceptable. Among 27 samples, an average of 0.5 g of secretions (range, 0.1-1.5 g) was collected. This is a rapid and convenient self-sampling method for obtaining relatively large volumes of undiluted cervicovaginal secretions. It should prove suitable for a wide range of assays, including those involving sexually transmitted diseases, microbicides, vaginal physiology, immunology, and pathophysiology.

  14. Methane Leaks from Natural Gas Systems Follow Extreme Distributions.

    PubMed

    Brandt, Adam R; Heath, Garvin A; Cooley, Daniel

    2016-11-15

    Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ~15,000 measurements from 18 prior studies, we show that all available natural gas leakage data sets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used log-normal model distributions, we show that log-normal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of data sets to increase sample size is not recommended due to apparent deviation between sampled populations. Understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
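
    The headline statistic, the largest 5% of leaks carrying over 50% of total volume, is straightforward to compute for any sample of leak sizes. The Pareto draw below is a hypothetical heavy-tailed stand-in, not the 18-study data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical leak-size sample from a heavy-tailed (Pareto) model with
# tail index 1.3, i.e. infinite variance, as a stand-in for measured data.
leaks = rng.pareto(a=1.3, size=10_000) + 1.0

def top_share(x, frac=0.05):
    # Fraction of total volume contributed by the largest `frac` of leaks.
    x = np.sort(x)[::-1]
    k = max(1, int(round(frac * len(x))))
    return x[:k].sum() / x.sum()

print(round(top_share(leaks), 2))  # heavy tails push this toward ~0.5
```

    For a log-normal sample of comparable median, the same statistic comes out far smaller, which is one way to see the paper's point that log-normal fits understate the tail.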

  15. Study on the lifetime of Mo/Si multilayer optics with pulsed EUV-source at the ETS

    NASA Astrophysics Data System (ADS)

    Schürmann, Mark; Yulin, Sergiy; Nesterenko, Viatcheslav; Feigl, Torsten; Kaiser, Norbert; Tkachenko, Boris; Schürmann, Max C.

    2011-06-01

    As EUV lithography is on its way into the production stage, studies of optics contamination and cleaning under realistic conditions become more and more important. For this reason an Exposure Test Stand (ETS) has been constructed at XTREME technologies GmbH in collaboration with Fraunhofer IOF and with financial support of Intel Corporation. This test stand is equipped with a pulsed DPP source and allows for the simultaneous exposure of several samples. In the standard set-up four samples with an exposed area larger than 35 mm2 per sample can be exposed at a homogeneous intensity of 0.25 mW/mm2. A recent update of the ETS allows for simultaneous exposures of two samples with intensities up to 1.0 mW/mm2. The first application of this alternative set-up was a comparative study of carbon contamination rates induced by EUV radiation from the pulsed source with contamination rates induced by quasi-continuous synchrotron radiation. A modified gas-inlet system allows for the introduction of a second gas to the exposure chamber. This possibility was applied to investigate the efficiency of EUV-induced cleaning with different gas mixtures. In particular the enhancement of EUV-induced cleaning by addition of a second gas to the cleaning gas was studied.

  16. Active galactic nuclei and galaxy interactions

    NASA Astrophysics Data System (ADS)

    Alonso, M. Sol; Lambas, Diego G.; Tissera, Patricia; Coldwell, Georgina

    2007-03-01

    We perform a statistical analysis of active galactic nucleus (AGN) host characteristics and nuclear activity for AGNs in pairs and without companions. Our study concerns a sample of AGNs derived from the Sloan Digital Sky Survey Data Release 4 data by Kauffmann et al. and pair galaxies obtained from the same data set by Alonso et al. An eye-ball classification of images of 1607 close pairs (rp < 25 kpc h-1, ΔV < 350 km s-1) according to the evidence of interaction through distorted morphologies and tidal features provides us with a more confident assessment of galaxy interactions from this sample. We notice that, at a given luminosity or stellar mass content, the fraction of AGNs is larger for pair galaxies exhibiting evidence for strong interaction and tidal features which also show signs of strong star formation activity. Nevertheless, this process accounts only for a ~10 per cent increase of the fraction of AGNs. As in previous works, we find AGN hosts to be redder and with a larger concentration morphological index than non-AGN galaxies. This effect does not depend on whether AGN hosts are in pairs or in isolation. The OIII luminosity of AGNs with strong interaction features is found to be significantly larger than that of other AGNs, either in pairs or in isolation. Estimations of the accretion rate, L[OIII]/MBH, show that AGNs in merging pairs are actively feeding their black holes, regardless of their stellar masses. We also find that the luminosity of the companion galaxy seems to be a key parameter in the determination of the black hole activity. At a given host luminosity, both the OIII luminosity and the L[OIII]/MBH are significantly larger in AGNs with a bright companion (Mr < -20) than otherwise.

  17. Social Behaviour of Captive Belugas, Delphinapterus Leucas.

    NASA Astrophysics Data System (ADS)

    Recchia, Cheri Anne

    1994-01-01

    Focal-animal sampling techniques developed for investigating social behaviour of terrestrial animals were adapted for studying captive belugas, providing quantitative descriptions of social relationships among individuals. Five groups of captive belugas were observed, allowing a cross -sectional view of sociality in groups of diverse sizes and compositions. Inter-individual distances were used to quantify patterns of spatial association. A set of social behaviours for which actor and recipient could be identified was defined to characterize dyadic interactions. The mother-calf pair spent more time together, and interacted more often than adults. The calf maintained proximity with his mother; larger adults generally maintained proximity with smaller adults. Among adults, larger groups performed more kinds of behaviours and interacted at higher rates than smaller groups. Within dyads, the larger whale performed more aggressive behaviours and the smaller whale more submissive behaviours. Clear dominance relations existed in three groups, with larger whales dominant to smaller whales. Vocalizations of three groups were classified subjectively, based on aural impressions and visual inspection of spectrograms, but most signals appeared graded. Statistical analyses of measured acoustic features confirmed subjective impressions that vocalizations could not be classified into discrete and homogeneous categories. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-553-5668; Fax 617-253-1690.).

  18. What Is the Shape of Developmental Change?

    PubMed Central

    Adolph, Karen E.; Robinson, Scott R.; Young, Jesse W.; Gill-Alvarez, Felix

    2009-01-01

    Developmental trajectories provide the empirical foundation for theories about change processes during development. However, the ability to distinguish among alternative trajectories depends on how frequently observations are sampled. This study used real behavioral data, with real patterns of variability, to examine the effects of sampling at different intervals on characterization of the underlying trajectory. Data were derived from a set of 32 infant motor skills indexed daily during the first 18 months. Larger sampling intervals (2-31 days) were simulated by systematically removing observations from the daily data and interpolating over the gaps. Infrequent sampling caused decreasing sensitivity to fluctuations in the daily data: Variable trajectories erroneously appeared as step-functions and estimates of onset ages were increasingly off target. Sensitivity to variation decreased as an inverse power function of sampling interval, resulting in severe degradation of the trajectory with intervals longer than 7 days. These findings suggest that sampling rates typically used by developmental researchers may be inadequate to accurately depict patterns of variability and the shape of developmental change. Inadequate sampling regimes therefore may seriously compromise theories of development. PMID:18729590
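
    The simulation logic, thinning daily observations to a coarser interval and interpolating across the gaps, can be sketched in a few lines. The toy 0/1 trajectory below is illustrative, not the infant data:

```python
import numpy as np

# Toy daily trajectory of a variable skill (1 = performed, 0 = not),
# with the kind of day-to-day fluctuation the study emphasizes.
daily = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1])

def resample(series, interval):
    # Keep every `interval`-th observation and linearly interpolate the gaps,
    # mimicking the simulated sampling regimes described in the abstract.
    idx = np.arange(0, len(series), interval)
    return np.interp(np.arange(len(series)), idx, series[idx])

for k in (1, 5, 10):
    est = resample(daily, k)
    # Fraction of days whose value is misrepresented at this interval.
    print(k, round(float(np.mean(est != daily)), 2))
```

    Daily sampling (k = 1) reproduces the trajectory exactly, while coarser intervals smooth over the fluctuations and make the variable trajectory look like a step-function, which is the distortion the study quantifies.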

  19. The Living Dead: Bacterial Community Structure of a Cadaver at the Onset and End of the Bloat Stage of Decomposition

    PubMed Central

    Hyde, Embriette R.; Haarmann, Daniel P.; Lynne, Aaron M.; Bucheli, Sibyl R.; Petrosino, Joseph F.

    2013-01-01

    Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition. PMID:24204941

  20. The living dead: bacterial community structure of a cadaver at the onset and end of the bloat stage of decomposition.

    PubMed

    Hyde, Embriette R; Haarmann, Daniel P; Lynne, Aaron M; Bucheli, Sibyl R; Petrosino, Joseph F

    2013-01-01

    Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition.

  1. Improved Dark Energy Constraints From ~ 100 New CfA Supernova Type Ia Light Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hicken, Malcolm (Harvard-Smithsonian Center for Astrophysics / Harvard University); Wood-Vasey, W. Michael

    2012-04-06

    We combine the CfA3 supernovae Type Ia (SN Ia) sample with samples from the literature to calculate improved constraints on the dark energy equation of state parameter, w. The CfA3 sample is added to the Union set of Kowalski et al. to form the Constitution set and, combined with a BAO prior, produces 1 + w = 0.013 +0.066/-0.068 (0.11 syst), consistent with the cosmological constant. The CfA3 addition makes the cosmologically useful sample of nearby SN Ia between 2.6 and 2.9 times larger than before, reducing the statistical uncertainty to the point where systematics play the largest role. We use four light-curve fitters to test for systematic differences: SALT, SALT2, MLCS2k2 (R_V = 3.1), and MLCS2k2 (R_V = 1.7). SALT produces high-redshift Hubble residuals with systematic trends versus color and larger scatter than MLCS2k2. MLCS2k2 overestimates the intrinsic luminosity of SN Ia with 0.7 < Δ < 1.2. MLCS2k2 with R_V = 3.1 overestimates host-galaxy extinction while R_V ≈ 1.7 does not. Our investigation is consistent with no Hubble bubble. We also find that, after light-curve correction, SN Ia in Scd/Sd/Irr hosts are intrinsically fainter than those in E/S0 hosts by 2σ, suggesting that they may come from different populations. We also find that SN Ia in Scd/Sd/Irr hosts have low scatter (0.1 mag) and reddening. Current systematic errors can be reduced by improving SN Ia photometric accuracy, by including the CfA3 sample to retrain light-curve fitters, by combining optical SN Ia photometry with near-infrared photometry to understand host-galaxy extinction, and by determining if different environments give rise to different intrinsic SN Ia luminosity after correction for light-curve shape and color.

  2. IMPROVED DARK ENERGY CONSTRAINTS FROM {approx}100 NEW CfA SUPERNOVA TYPE Ia LIGHT CURVES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hicken, Malcolm; Challis, Peter; Kirshner, Robert P.

    2009-08-01

    We combine the CfA3 supernovae Type Ia (SN Ia) sample with samples from the literature to calculate improved constraints on the dark energy equation of state parameter, w. The CfA3 sample is added to the Union set of Kowalski et al. to form the Constitution set and, combined with a BAO prior, produces 1 + w = 0.013 +0.066/-0.068 (0.11 syst), consistent with the cosmological constant. The CfA3 addition makes the cosmologically useful sample of nearby SN Ia between 2.6 and 2.9 times larger than before, reducing the statistical uncertainty to the point where systematics play the largest role. We use four light-curve fitters to test for systematic differences: SALT, SALT2, MLCS2k2 (R_V = 3.1), and MLCS2k2 (R_V = 1.7). SALT produces high-redshift Hubble residuals with systematic trends versus color and larger scatter than MLCS2k2. MLCS2k2 overestimates the intrinsic luminosity of SN Ia with 0.7 < Δ < 1.2. MLCS2k2 with R_V = 3.1 overestimates host-galaxy extinction while R_V ≈ 1.7 does not. Our investigation is consistent with no Hubble bubble. We also find that, after light-curve correction, SN Ia in Scd/Sd/Irr hosts are intrinsically fainter than those in E/S0 hosts by 2σ, suggesting that they may come from different populations. We also find that SN Ia in Scd/Sd/Irr hosts have low scatter (0.1 mag) and reddening. Current systematic errors can be reduced by improving SN Ia photometric accuracy, by including the CfA3 sample to retrain light-curve fitters, by combining optical SN Ia photometry with near-infrared photometry to understand host-galaxy extinction, and by determining if different environments give rise to different intrinsic SN Ia luminosity after correction for light-curve shape and color.

  3. Multiple Imputation For Combined-Survey Estimation With Incomplete Regressors In One But Not Both Surveys

    PubMed Central

    Rendall, Michael S.; Ghosh-Dastidar, Bonnie; Weden, Margaret M.; Baker, Elizabeth H.; Nazarov, Zafar

    2013-01-01

    Within-survey multiple imputation (MI) methods are adapted to pooled-survey regression estimation where one survey has more regressors, but typically fewer observations, than the other. This adaptation is achieved through: (1) larger numbers of imputations to compensate for the higher fraction of missing values; (2) model-fit statistics to check the assumption that the two surveys sample from a common universe; and (3) specifying the analysis model completely from variables present in the survey with the larger set of regressors, thereby excluding variables never jointly observed. In contrast to the typical within-survey MI context, cross-survey missingness is monotonic and easily satisfies the Missing At Random (MAR) assumption needed for unbiased MI. Large efficiency gains and substantial reduction in omitted variable bias are demonstrated in an application to sociodemographic differences in the risk of child obesity estimated from two nationally-representative cohort surveys. PMID:24223447
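    The pooled-survey adaptation above rests on standard MI pooling via Rubin's rules, which combine per-imputation point estimates and variances. A minimal stdlib-Python sketch with invented numbers (not values from the surveys in the study):

```python
from statistics import mean, variance

def pool_mi(estimates, within_variances):
    """Pool point estimates from m imputed datasets via Rubin's rules."""
    m = len(estimates)
    q_bar = mean(estimates)                # pooled point estimate
    u_bar = mean(within_variances)         # average within-imputation variance
    b = variance(estimates)                # between-imputation variance
    t = u_bar + (1 + 1 / m) * b            # total variance of q_bar
    return q_bar, t

# hypothetical regression coefficient estimated on m = 5 imputed datasets
est, var = pool_mi([0.52, 0.48, 0.50, 0.55, 0.45], [0.01] * 5)
```

    The larger number of imputations recommended in the abstract shrinks the (1 + 1/m)·b penalty that cross-survey missingness adds to the total variance.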

  4. FSR: feature set reduction for scalable and accurate multi-class cancer subtype classification based on copy number.

    PubMed

    Wong, Gerard; Leckie, Christopher; Kowalczyk, Adam

    2012-01-15

    Feature selection is a key concept in machine learning for microarray datasets, where features represented by probesets are typically several orders of magnitude larger than the available sample size. Computational tractability is a key challenge for feature selection algorithms in handling very high-dimensional datasets beyond a hundred thousand features, such as in datasets produced on single nucleotide polymorphism microarrays. In this article, we present a novel feature set reduction approach that enables scalable feature selection on datasets with hundreds of thousands of features and beyond. Our approach enables more efficient handling of higher resolution datasets to achieve better disease subtype classification of samples for potentially more accurate diagnosis and prognosis, which allows clinicians to make more informed decisions with regard to patient treatment options. We applied our feature set reduction approach to several publicly available cancer single nucleotide polymorphism (SNP) array datasets and evaluated its performance in terms of its multiclass predictive classification accuracy over different cancer subtypes, its speedup in execution as well as its scalability with respect to sample size and array resolution. Feature Set Reduction (FSR) was able to reduce the dimensions of an SNP array dataset by more than two orders of magnitude while achieving predictive classification performance at least equal, and in most cases superior, to that achieved on features selected by existing feature selection methods alone. An examination of the biological relevance of frequently selected features from FSR-reduced feature sets revealed strong enrichment in association with cancer. FSR was implemented in MATLAB R2010b and is available at http://ww2.cs.mu.oz.au/~gwong/FSR.

  5. [Mokken scaling of the Cognitive Screening Test].

    PubMed

    Diesfeldt, H F A

    2009-10-01

    The Cognitive Screening Test (CST) is a twenty-item orientation questionnaire in Dutch that is commonly used to evaluate cognitive impairment. This study applied Mokken Scale Analysis, a non-parametric set of techniques derived from item response theory (IRT), to CST data of 466 consecutive participants in psychogeriatric day care. The full item set and the standard short version of fourteen items both met the assumptions of the monotone homogeneity model, with scalability coefficient H = 0.39, which is considered weak. In order to select items that would fulfil the assumption of invariant item ordering or the double monotonicity model, the subjects were randomly partitioned into a training set (50% of the sample) and a test set (the remaining half). By means of automated item selection, eleven items were found to measure one latent trait, with H = 0.67 and item H coefficients larger than 0.51. Cross-validation of the item analysis in the remaining half of the subjects gave comparable values (H = 0.66; item H coefficients larger than 0.56). The selected items involve year, place of residence, birth date, the monarch's and prime minister's names, and their predecessors. Applying optimal discriminant analysis (ODA), it was found that the full set of twenty CST items performed best in distinguishing two predefined groups of patients of lower or higher cognitive ability, as established by an independent criterion derived from the Amsterdam Dementia Screening Test. The chance-corrected predictive value or prognostic utility was 47.5% for the full item set, 45.2% for the fourteen items of the standard short version of the CST, and 46.1% for the homogeneous, unidimensional set of eleven selected items. The results of the item analysis support the application of the CST in cognitive assessment, and revealed a more reliable 'short' version of the CST than the standard short version (CST14).
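    The scalability coefficient H reported above is built from item-pair Loevinger coefficients, which count Guttman errors (passing a harder item while failing an easier one). A toy sketch for dichotomous items, with hypothetical response data rather than CST data:

```python
def item_pair_H(responses, easier, harder):
    """Loevinger's H for one dichotomous item pair: 1 minus the ratio of
    observed to expected Guttman errors. Illustrative sketch only."""
    n = len(responses)
    p_easier = sum(r[easier] for r in responses) / n
    p_harder = sum(r[harder] for r in responses) / n
    observed = sum(1 for r in responses if r[harder] and not r[easier]) / n
    expected = p_harder * (1 - p_easier)   # error rate under independence
    return 1 - observed / expected

# perfect Guttman pattern: nobody passes "hard" while failing "easy"
data = [{"easy": 1, "hard": 1}, {"easy": 1, "hard": 0},
        {"easy": 1, "hard": 0}, {"easy": 0, "hard": 0}]
h = item_pair_H(data, "easy", "hard")   # -> 1.0
```

    Scale-level H aggregates these pairwise counts over all item pairs; values around 0.4 are conventionally called weak and values above 0.5 strong, matching the thresholds quoted in the abstract.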

  6. Effect of concentration of dispersed organic matter on optical maturity parameters: Interlaboratory results of the organic matter concentration working group of the ICCP.

    USGS Publications Warehouse

    Mendonca, Filho J.G.; Araujo, C.V.; Borrego, A.G.; Cook, A.; Flores, D.; Hackley, P.; Hower, J.C.; Kern, M.L.; Kommeren, K.; Kus, J.; Mastalerz, Maria; Mendonca, J.O.; Menezes, T.R.; Newman, J.; Ranasinghe, P.; Souza, I.V.A.F.; Suarez-Ruiz, I.; Ujiie, Y.

    2010-01-01

    The main objective of this work was to study the effect of kerogen isolation procedures on maturity parameters of organic matter measured using optical microscopy. This work represents the results of the Organic Matter Concentration Working Group (OMCWG) of the International Committee for Coal and Organic Petrology (ICCP) during the years 2008 and 2009. Four samples have been analysed covering a range of maturity (low and moderate) and terrestrial and marine geological settings. The analyses comprise random vitrinite reflectance measured on both kerogen concentrate and whole-rock mounts and fluorescence spectra taken on alginite. Eighteen participants from twelve laboratories from all over the world performed the analyses. Samples of continental settings contained enough vitrinite for participants to record around 50 measurements, whereas fewer readings were taken on samples from marine settings. The scatter of results was also larger in the samples of marine origin. Similar vitrinite reflectance values were in general recorded in the whole rock and in the kerogen concentrate. The small deviations from the trend cannot be attributed to the acid treatment involved in kerogen isolation but to component identification issues or to the difficulty of achieving a good polish in samples with high mineral matter content. In samples that were difficult to polish, vitrinite reflectance measured on whole rock tended to be lower. The presence or absence of rock fabric affected the selection of the vitrinite population for measurement, and this also had an influence on the average value reported and on the scatter of the results. Slightly lower standard deviations were reported for the analyses run on kerogen concentrates.
    Considering the spectral fluorescence results, it was observed that the λmax presents a shift to higher wavelengths in the kerogen concentrate sample in comparison to the whole-rock sample, thus revealing an influence of preparation methods (acid treatment) on fluorescence properties. © 2010 Elsevier B.V.

  7. Ranking metrics in gene set enrichment analysis: do they matter?

    PubMed

    Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna

    2017-05-12

    There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which can affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics, including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established: the absolute value of the Moderated Welch Test statistic, Minimum Significant Difference, the absolute value of the Signal-To-Noise ratio and the Baumgartner-Weiss-Schindler test statistic. In the case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In the case of sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-To-Noise ratio gave stable results, while Baumgartner-Weiss-Schindler and Minimum Significant Difference showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has a critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and Minimum Significant Difference has the best overall specificity of gene set analysis.
When the number of non-normally distributed genes is high, using Baumgartner-Weiss-Schindler test statistic gives better outcomes. Also, it finds more enriched pathways than other tested metrics, which may induce new biological discoveries.
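    Of the four top metrics above, the absolute Signal-To-Noise ratio is the simplest to state: the absolute difference of the two group means divided by the sum of the two group standard deviations. A sketch with invented expression values (gene names and data are illustrative, not taken from the MrGSEA code):

```python
from statistics import mean, stdev

def signal_to_noise(group_a, group_b):
    """Absolute signal-to-noise ranking metric for one gene."""
    return abs(mean(group_a) - mean(group_b)) / (stdev(group_a) + stdev(group_b))

# invented expression values: 3 treated vs 3 control samples per gene
expr = {"geneA": ([5.1, 5.3, 4.9], [2.0, 2.2, 1.9]),
        "geneB": ([3.0, 3.1, 2.9], [3.0, 2.8, 3.2])}

# rank genes by the metric, most differential first, as input to GSEA
ranking = sorted(expr, key=lambda g: signal_to_noise(*expr[g]), reverse=True)
```

    GSEA then walks this ranked list to score each gene set, which is why the choice of metric propagates directly into the enrichment results.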

  8. Analysis of Darwin Rainfall Data: Implications on Sampling Strategy

    NASA Technical Reports Server (NTRS)

    Li, Qihang; Bras, Rafael L.; Veneziano, Daniele

    1996-01-01

    Rainfall data collected by radar in the vicinity of Darwin, Australia, have been analyzed in terms of their mean, variance, autocorrelation of area-averaged rain rate, and diurnal variation. It is found that, when compared with the well-studied GATE (Global Atmospheric Research Program Atlantic Tropical Experiment) data, Darwin rainfall has a larger coefficient of variation (CV), faster reduction of CV with increasing area size, weaker temporal correlation, and a strong diurnal cycle and intermittence. The coefficient of variation for Darwin rainfall has larger magnitude and exhibits larger spatial variability over the sea portion than over the land portion within the area of radar coverage. Stationary and nonstationary models have been used to study the sampling errors associated with space-based rainfall measurement. The nonstationary model shows that the sampling error is sensitive to the starting sampling time for some sampling frequencies, due to the diurnal cycle of rain, but not for others. Sampling experiments using data also show such sensitivity. When the errors are averaged over starting time, the results of the experiments and the stationary and nonstationary models match each other very closely. In the small areas for which data are available for both Darwin and GATE, the sampling error is expected to be larger for Darwin due to its larger CV.
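    The coefficient of variation driving the sampling-error comparison above is simply the standard deviation of the area-averaged rain rate divided by its mean; intermittent rain (as at Darwin) yields a higher CV than steadier rain. A sketch with invented rain-rate series, not the actual radar data:

```python
from statistics import mean, pstdev

def coeff_variation(rates):
    """CV of a rain-rate series: population std deviation over the mean."""
    return pstdev(rates) / mean(rates)

# invented hourly area-averaged rain rates (mm/h)
darwin_like = [0.1, 0.0, 2.5, 0.0, 0.4]   # intermittent: high CV
gate_like = [1.0, 1.2, 0.8, 1.1, 0.9]     # steadier: low CV
```

    Comparing the two series reproduces the qualitative point of the abstract: the intermittent series has the larger CV, and hence the larger expected sampling error for a satellite that observes only snapshots.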

  9. A genome-wide association study of breast cancer in women of African ancestry

    PubMed Central

    Chen, Fang; Chen, Gary K.; Stram, Daniel O.; Millikan, Robert C.; Ambrosone, Christine B.; John, Esther M.; Bernstein, Leslie; Zheng, Wei; Palmer, Julie R.; Hu, Jennifer J.; Rebbeck, Tim R.; Ziegler, Regina G.; Nyante, Sarah; Bandera, Elisa V.; Ingles, Sue A.; Press, Michael F.; Ruiz-Narvaez, Edward A.; Deming, Sandra L.; Rodriguez-Gil, Jorge L.; DeMichele, Angela; Chanock, Stephen J.; Blot, William; Signorello, Lisa; Cai, Qiuyin; Li, Guoliang; Long, Jirong; Huo, Dezheng; Zheng, Yonglan; Cox, Nancy J.; Olopade, Olufunmilayo I.; Ogundiran, Temidayo O.; Adebamowo, Clement; Nathanson, Katherine L.; Domchek, Susan M.; Simon, Michael S.; Hennis, Anselm; Nemesure, Barbara; Wu, Suh-Yuh; Leske, M. Cristina; Ambs, Stefan; Hutter, Carolyn M.; Young, Alicia; Kooperberg, Charles; Peters, Ulrike; Rhie, Suhn K.; Wan, Peggy; Sheng, Xin; Pooler, Loreall C.; Van Den Berg, David J.; Le Marchand, Loic; Kolonel, Laurence N.; Henderson, Brian E.; Haiman, Christopher A.

    2013-01-01

    Genome-wide association studies (GWAS) in diverse populations are needed to reveal variants that are more common and/or limited to defined populations. We conducted a GWAS of breast cancer in women of African ancestry, with genotyping of > 1,000,000 SNPs in 3,153 African American cases and 2,831 controls, and replication testing of the top 66 associations in an additional 3,607 breast cancer cases and 11,330 controls of African ancestry. Two of the 66 SNPs replicated (p < 0.05) in stage 2, which reached statistical significance levels of 10−6 and 10−5 in the stage 1 and 2 combined analysis (rs4322600 at chromosome 14q31: OR = 1.18, p = 4.3×10−6; rs10510333 at chromosome 3p26: OR = 1.15, p = 1.5×10−5). These suggestive risk loci have not been identified in previous GWAS in other populations and will need to be examined in additional samples. Identification of novel risk variants for breast cancer in women of African ancestry will demand testing of a substantially larger set of markers from stage 1 in a larger replication sample. PMID:22923054

  10. Bioenhanced dissolution of dense non-aqueous phase of trichloroethylene as affected by iron reducing conditions: model systems and environmental samples.

    PubMed

    Paul, Laiby; Smolders, Erik

    2015-01-01

    The anaerobic biotransformation of trichloroethylene (TCE) can be affected by competing electron acceptors such as Fe (III). This study assessed the role of Fe (III) reduction in the bioenhanced dissolution of TCE dense non-aqueous phase liquid (DNAPL). Columns were set up as 1-D diffusion cells consisting of a lower DNAPL layer, a layer with an aquifer substratum and an upper water layer that is regularly refreshed. The substrata used were either inert sand or sand coated with 2-line ferrihydrite (HFO) or two environmental Fe (III) containing samples. The columns were inoculated with KB-1 and were repeatedly fed with formate. Vinyl chloride and ethene were detected in none of the diffusion cells, while dissolved and extractable Fe (II) increased strongly during 60 d of incubation. The cis-DCE concentration peaked at 4.0 cm from the DNAPL (inert sand) while it was at 3.4 cm (sand+HFO), 1.7 cm and 2.5 cm (environmental samples). The TCE concentration gradients near the DNAPL indicate that the DNAPL dissolution rate was larger than that in an abiotic cell by factors of 1.3 (inert sand), 1.0 (sand+HFO) and 2.2 (both environmental samples). These results show that highly bioavailable Fe (III) in HFO reduces TCE degradation by competitive Fe (III) reduction, yielding lower bioenhanced dissolution. However, Fe (III) reduction in environmental samples did not reduce TCE degradation, and the dissolution factor was even larger than that of inert sand. It is speculated that physical factors, e.g. micro-niches in the environmental samples, protect microorganisms from toxic concentrations of TCE. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Chimpanzees can point to smaller amounts of food to accumulate larger amounts but they still fail the reverse-reward contingency task.

    PubMed

    Beran, Michael J; James, Brielle T; Whitham, Will; Parrish, Audrey E

    2016-10-01

    The reverse-reward contingency task presents 2 food sets to an animal, which is required to choose the smaller of the 2 sets in order to receive the larger food set. Intriguingly, the majority of species tested on the reverse-reward task fail to learn this contingency in the absence of large trial counts, correction trials, and punishment techniques. The unique difficulty of this seemingly simple task likely reflects a failure of the inhibitory control that is required to point toward a smaller and less desirable reward rather than a larger and more desirable reward. This failure by chimpanzees and other primates to pass the reverse-reward task is striking given the self-control they exhibit in a variety of other paradigms. For example, chimpanzees have consistently demonstrated a high capacity for delay of gratification in order to maximize accumulating food rewards, in which foods are added item-by-item to a growing set until the subject consumes the rewards. To study the mechanisms underlying success in the accumulation task and failure in the reverse-reward task, we presented chimpanzees with several combinations of these 2 tasks to determine when chimpanzees might succeed in pointing to smaller food sets over larger food sets and how the nature of the task might determine the animals' success or failure. Across experiments, 3 chimpanzees repeatedly failed to solve the reverse-reward task, whereas they accumulated nearly all food items across all instances of the accumulation self-control task, even when they had to point to small amounts of food to accumulate larger amounts. These data indicate that the constraints of these 2 related but still different tasks of behavioral inhibition are dependent upon the animals' perceptions of the choice set, their sense of control over the contents of choice sets, and the nature of the task constraints. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  12. Patterns of orchid bee species diversity and turnover among forested plateaus of central Amazonia

    PubMed Central

    Antonini, Yasmine; Machado, Carolina de Barros; Galetti, Pedro Manoel; Oliveira, Marcio; Dirzo, Rodolfo; Fernandes, Geraldo Wilson

    2017-01-01

    The knowledge of spatial pattern and geographic beta-diversity is of great importance for biodiversity conservation and interpreting ecological information. Tropical forests, especially the Amazon Rainforest, are well known for their high species richness and low similarity in species composition between sites, both at local and regional scales. We aimed to determine the effect and relative importance of area, isolation and climate on species richness and turnover in orchid bee assemblages among plateaus in central Brazilian Amazonia. Variance partitioning techniques were applied to assess the relative effects of spatial and environmental variables on bee species richness, phylogeny and composition. We hypothesized that greater abundance and richness of orchid bees would be found on larger plateaus, with a set of core species occurring on all of them. We also hypothesized that smaller plateaus would possess lower phylogenetic diversity. We found 55 bee species distributed along the nine sampling sites (plateaus) with 17 of them being singletons. There was a significant decrease in species richness with decreasing size of plateaus, and a significant decrease in the similarity in species composition with greater distance and climatic variation among sampling sites. Phylogenetic diversity varied among the sampling sites but was directly related to species richness. Although not significantly related to plateau area, smaller or larger values of PD (Faith's phylogenetic diversity) were observed in the smallest and the largest plateaus, respectively. PMID:28410432

  14. A New Approach on Sampling Microorganisms from the Lower Stratosphere

    NASA Astrophysics Data System (ADS)

    Gunawan, B.; Lehnen, J. N.; Prince, J.; Bering, E., III; Rodrigues, D.

    2017-12-01

    University of Houston's Undergraduate Student Instrumentation Project (USIP) astrobiology group will attempt to provide a cross-sectional analysis of microorganisms in the lower stratosphere by collecting living microbial samples using a sterile and lightweight balloon-borne payload. Refer to the poster by Dr. Edgar Bering in session ED032. The purpose of this research is two-fold: first, to design a new system that is capable of greater mass air intake, unlike previous iterations where heavy and power-intensive pumps were used; and second, to provide proof of concept that live samples can be collected in the upper atmosphere and are viable for extensive study and subsequent examination of their potential weather-altering characteristics. Multiple balloon deployments will be conducted to increase accuracy and to provide a larger set of data. This paper will also discuss the visual presentation of the payload along with analyzed information from the captured samples. Design details will be presented to NASA investigators for professional studies.

  15. Predicting Learning-Related Emotions from Students' Textual Classroom Feedback via Twitter

    ERIC Educational Resources Information Center

    Altrabsheh, Nabeela; Cocea, Mihaela; Fallahkhair, Sanaz

    2015-01-01

    Teachers/lecturers typically adapt their teaching to respond to students' emotions, e.g. provide more examples when they think the students are confused. While getting a feel of the students' emotions is easier in small settings, it is much more difficult in larger groups. In these larger settings textual feedback from students could provide…

  16. Using multivariate regression modeling for sampling and predicting chemical characteristics of mixed waste in old landfills.

    PubMed

    Brandstätter, Christian; Laner, David; Prantl, Roman; Fellner, Johann

    2014-12-01

    Municipal solid waste landfills pose a threat to the environment and human health, especially old landfills, which lack facilities for the collection and treatment of landfill gas and leachate. Consequently, missing information about emission flows prevents site-specific environmental risk assessments. To overcome this gap, the combination of waste sampling and analysis with statistical modeling is one option for estimating present and future emission potentials. Optimizing the tradeoff between investigation costs and reliable results requires knowledge of both the number of samples to be taken and the variables to be analyzed. This article aims to identify the optimal number of waste samples and variables in order to predict a larger set of variables. To this end, we introduce a multivariate linear regression model and test its applicability in two case studies. Landfill A was used to set up and calibrate the model, based on 50 waste samples and twelve variables. The calibrated model was applied to Landfill B, comprising 36 waste samples and the same twelve variables, with four predictor variables. The case study results are twofold: first, reliable and accurate prediction of the twelve variables can be achieved with knowledge of four predictor variables (LOI, EC, pH and Cl); second, for Landfill B, only ten full measurements would be needed for a reliable prediction of most response variables. The four predictor variables entail comparably low analytical costs in comparison to the full set of measurements. This cost reduction could be used to increase the number of samples, yielding an improved understanding of the spatial waste heterogeneity in landfills. In conclusion, future application of the developed model could improve the reliability of predicted emission potentials, and the model could become a standard screening tool for old landfills if its applicability and reliability were tested in additional case studies.
Copyright © 2014 Elsevier Ltd. All rights reserved.
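    The core idea of record 16, predicting costlier response variables from a few cheap predictors via linear regression, can be sketched with ordinary least squares. The study fit a multivariate model on four predictors; the single-predictor sketch below, with invented numbers, only illustrates the calibrate-then-predict workflow:

```python
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least squares for a single predictor (normal equations)."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# hypothetical calibration data: electrical conductivity (cheap predictor)
# vs chloride content (costlier response) on fully analysed waste samples
ec = [1.2, 2.0, 2.9, 4.1, 5.0]
chloride = [110, 190, 300, 420, 500]
slope, intercept = fit_line(ec, chloride)

# predict chloride for a new sample on which only EC was measured
estimate = slope * 3.5 + intercept
```

    In the study's setting, one such regression per response variable lets a handful of full analyses plus many cheap measurements stand in for a full analytical campaign.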

  17. Individualization of pubic hair bacterial communities and the effects of storage time and temperature.

    PubMed

    Williams, Diana W; Gibson, Greg

    2017-01-01

    A potential application of microbial genetics in forensic science is detection of transfer of the pubic hair microbiome between individuals during sexual intercourse using high-throughput sequencing. In addition to the primary need to show whether the pubic hair microbiome is individualizing, one aspect that must be addressed before using the microbiome in criminal casework involves the impact of storage on the microbiome of samples recovered for forensic testing. To test the effects of short-term storage, pubic hair samples were collected from volunteers and stored at room temperature (∼20°C), refrigerated (4°C), and frozen (-20°C) for 1 week, 2 weeks, 4 weeks, and 6 weeks along with a baseline sample. Individual microbial profiles (R² = 0.69) and gender (R² = 0.17) were the greatest sources of variation between samples. Because of this variation, individual and gender could be predicted using Random Forests supervised classification in this sample set with overall error rates of 2.7% ± 5.8% and 1.7% ± 5.2%, respectively. There was no statistically significant difference attributable to time of sampling or temperature of storage within individuals. Further work on larger sample sets will quantify the temporal consistency of individual profiles and define whether it is plausible to detect transfer between sexual partners. For short-term storage (≤6 weeks), recovery of the microbiome was not affected significantly by either storage time or temperature, suggesting that investigators and crime laboratories can use existing evidence storage methods. Published by Elsevier Ireland Ltd.

  18. Ground-Water Quality Beneath Irrigated Cropland of the Northern and Southern High Plains Aquifer, Nebraska and Texas, 2003-04

    USGS Publications Warehouse

    Stanton, Jennifer S.; Fahlquist, Lynne

    2006-01-01

    A study of the quality of ground water beneath irrigated cropland was completed for the northern and southern High Plains aquifer. Ground-water samples were collected from 30 water-table monitoring wells in the northern agricultural land-use (NAL) study area in Nebraska in 2004 and 29 water-table monitoring wells in the southern agricultural land-use (SAL) study area in Texas in 2003. The two study areas represented different agricultural and hydrogeologic settings. The primary crops grown in the NAL study area were corn and soybeans, and the primary crop in the SAL study area was cotton. Overall, pesticide and fertilizer application rates were larger in the NAL study area. Also, precipitation and recharge rates were greater in the NAL study area, and depths to water and evapotranspiration rates were greater in the SAL study area. Ground-water quality beneath irrigated cropland was different in the two study areas. Nitrate concentrations were larger and pesticide detections were more frequent in the NAL study area. Nitrate concentrations in NAL samples ranged from 1.96 to 106 mg/L (milligrams per liter) as nitrogen, with a median concentration of 10.6 mg/L. Water in 73 percent of NAL samples had at least one pesticide or pesticide degradate detected. Most of the pesticide compounds detected (atrazine, alachlor, metolachlor, simazine, and degradates of those pesticides) are applied to corn and soybean fields. Nitrate concentrations in SAL samples ranged from 0.96 to 21.6 mg/L, with a median of 4.12 mg/L. Water in 24 percent of SAL samples had at least one pesticide or pesticide degradate detected. The pesticide compounds detected were deethylatrazine (a degradate of atrazine and propazine), propazine, fluometuron, and tebuthiuron. Most of the pesticides detected are applied to cotton fields. 
Dissolved-solids concentrations were larger in the SAL area and were positively correlated with both nitrate and chloride concentrations, suggesting a combination of human and natural sources. Dissolved-solids concentrations in NAL samples ranged from 272 to 2,160 mg/L, with a median of 442 mg/L, and dissolved solids in SAL samples ranged from 416 to 3,580 mg/L, with a median of 814 mg/L.

  19. Results from the OPERA experiment

    NASA Astrophysics Data System (ADS)

    Crescenzo, A. Di

    2017-12-01

    The OPERA neutrino experiment was designed to perform a unique ντ appearance measurement in the νμ CNGS beam to confirm the oscillation mechanism in the atmospheric sector νμ → ντ. The detection of τ leptons produced in ντ CC interactions and of their decays is accomplished by exploiting the high spatial resolution of nuclear emulsions. Five ντ candidate events have been detected in the full data sample from the 2008-2012 CNGS runs, with an expected background of 0.25 events. The background-only hypothesis is rejected with a significance larger than 5σ. The analysis of the tau neutrino sample in the framework of the 3+1 neutrino model is also presented. Furthermore, OPERA's good capability in detecting electron neutrino interactions allows setting limits on the νμ → νe oscillation channel.

  20. Ground-water quality of the southern High Plains aquifer, Texas and New Mexico, 2001

    USGS Publications Warehouse

    Fahlquist, Lynne

    2003-01-01

    In 2001, the U.S. Geological Survey National Water-Quality Assessment Program collected water samples from 48 wells in the southern High Plains as part of a larger scientific effort to broadly characterize and understand factors affecting water quality of the High Plains aquifer across the entire High Plains. Water samples were collected primarily from domestic wells in Texas and eastern New Mexico. Depths of wells sampled ranged from 100 to 500 feet, with a median depth of 201 feet. Depths to water ranged from 34 to 445 feet below land surface, with a median depth of 134 feet. Of 240 properties or constituents measured or analyzed, 10 exceeded U.S. Environmental Protection Agency public drinking-water standards or guidelines in one or more samples - arsenic, boron, chloride, dissolved solids, fluoride, manganese, nitrate, radon, strontium, and sulfate. Measured dissolved solids concentrations in 29 samples were larger than the public drinking-water guideline of 500 milligrams per liter. Fluoride concentrations in 16 samples, mostly in the southern part of the study area, were larger than the public drinking-water standard of 4 milligrams per liter. Nitrate was detected in all samples, and concentrations in six samples were larger than the public drinking-water standard of 10 milligrams per liter. Arsenic concentrations in 14 samples in the southern part of the study area were larger than the new (2002) public drinking-water standard of 10 micrograms per liter. Radon concentrations in 36 samples were larger than a proposed public drinking-water standard of 300 picocuries per liter. Pesticides were detected at very small concentrations, less than 1 microgram per liter, in less than 20 percent of the samples. The most frequently detected compounds were atrazine and breakdown products of atrazine, a finding similar to those of National Water-Quality Assessment aquifer studies across the Nation. 
Four volatile organic compounds were detected at small concentrations in six water samples. About 70 percent of the 48 primarily domestic wells sampled contained some fraction of recently (less than about 50 years ago) recharged ground water, as indicated by the presence of one or more pesticides, or tritium or nitrate concentrations greater than threshold levels.

  1. A Search for the Standard Model Higgs Boson Produced in Association with a $W$ Boson

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, Martin Johannes

    2011-05-01

We present a search for a standard model Higgs boson produced in association with a W boson using data collected with the CDF II detector from pp̄ collisions at √s = 1.96 TeV. The search is performed in the WH → ℓνbb̄ channel. The two quarks usually fragment into two jets, but sometimes a third jet can be produced via gluon radiation, so we have increased the standard two-jet sample by including events that contain three jets. We reconstruct the Higgs boson using two or three jets depending on the kinematics of the event. We find an improvement in our search sensitivity using the larger sample together with this multijet reconstruction technique. Our data show no evidence of a Higgs boson, so we set 95% confidence level upper limits on the WH production rate. We set limits between 3.36 and 28.7 times the standard model prediction for Higgs boson masses ranging from 100 to 150 GeV/c².

  2. Fully automated three-dimensional microscopy system

    NASA Astrophysics Data System (ADS)

    Kerschmann, Russell L.

    2000-04-01

Tissue-scale structures such as vessel networks are imaged at micron resolution with the Virtual Tissue System (VT System). VT System imaging of cubic millimeters of tissue and other material extends the capabilities of conventional volumetric techniques such as confocal microscopy, and allows for the first time the integrated 2D and 3D analysis of important tissue structural relationships. The VT System eliminates the need for glass slide-mounted tissue sections and instead captures images directly from the surface of a block containing a sample. Tissues are en bloc stained with fluorochrome compounds, embedded in an optically conditioned polymer that suppresses image signals from deep within the block, and serially sectioned for imaging. Thousands of fully registered 2D images are automatically captured digitally to completely convert tissue samples into blocks of high-resolution information. The resulting multi-gigabyte data sets constitute the raw material for precision visualization and analysis. Cellular function may be seen in a larger anatomical context. VT System technology makes tissue metrics, accurate cell enumeration, and cell-cycle analyses possible while preserving the full histologic setting.

  3. Malthus in the Bedroom: Birth Spacing as Birth Control in Pre-Transition England.

    PubMed

    Cinnirella, Francesco; Klemp, Marc; Weisdorf, Jacob

    2017-04-01

    We use duration models on a well-known historical data set of more than 15,000 families and 60,000 births in England for the period 1540-1850 to show that the sampled families adjusted the timing of their births in accordance with the economic conditions as well as their stock of dependent children. The effects were larger among the lower socioeconomic ranks. Our findings on the existence of parity-dependent as well as parity-independent birth spacing in England are consistent with the growing evidence that marital birth control was present in pre-transitional populations.

  4. [Splash basins are contaminated even during operations in a laminar air flow environment].

    PubMed

    Christensen, Mikkel; Sundstrup, Mikkel; Larsen, Helle Raagaard; Olesen, Bente; Ryge, Camilla

    2014-03-03

Few studies have investigated the potential contamination of splash basins, and those that have show very divergent results: contamination rates ranging from 2.13% to 74% have been reported. This study set out to examine whether splash basins used in a laminar air flow (LAF) environment during elective knee and hip arthroplasty constitute an unnecessary risk. Of the 49 cases sampled, two cultures were positive (4%; 95% confidence interval = 0.49-13.9). We conclude that splash basins do get contaminated even in an LAF environment. Further studies with larger populations are needed to validate our findings.

  5. Online versus offline: The Web as a medium for response time data collection.

    PubMed

    Chetverikov, Andrey; Upravitelev, Philipp

    2016-09-01

    The Internet provides a convenient environment for data collection in psychology. Modern Web programming languages, such as JavaScript or Flash (ActionScript), facilitate complex experiments without the necessity of experimenter presence. Yet there is always a question of how much noise is added due to the differences between the setups used by participants and whether it is compensated for by increased ecological validity and larger sample sizes. This is especially a problem for experiments that measure response times (RTs), because they are more sensitive (and hence more susceptible to noise) than, for example, choices per se. We used a simple visual search task with different set sizes to compare laboratory performance with Web performance. The results suggest that although the locations (means) of RT distributions are different, other distribution parameters are not. Furthermore, the effect of experiment setting does not depend on set size, suggesting that task difficulty is not important in the choice of a data collection method. We also collected an additional online sample to investigate the effects of hardware and software diversity on the accuracy of RT data. We found that the high diversity of browsers, operating systems, and CPU performance may have a detrimental effect, though it can partly be compensated for by increased sample sizes and trial numbers. In sum, the findings show that Web-based experiments are an acceptable source of RT data, comparable to a common keyboard-based setup in the laboratory.
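The location-versus-shape finding above can be illustrated with a toy simulation (all parameters below are invented for the sketch, not taken from the study): if a web setup mainly adds a roughly constant display/input lag, the RT distribution shifts in location while its spread is left essentially unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: lab RTs vs. web RTs with an added constant
# display/input lag of ~60 ms. Means, SDs, and the lag are invented.
lab = rng.normal(loc=550, scale=80, size=10_000)       # lab RTs (ms)
web = rng.normal(loc=550, scale=80, size=10_000) + 60  # same task + lag

loc_shift = np.mean(web) - np.mean(lab)   # location differs (~60 ms)
sd_ratio = np.std(web) / np.std(lab)      # spread is essentially unchanged

print(f"location shift: {loc_shift:.1f} ms, SD ratio: {sd_ratio:.2f}")
```

Under this simple additive-lag assumption, only the distribution's location parameter moves, which is the pattern the study reports for Web versus laboratory data.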

  6. Socioeconomic status, urbanicity and risk behaviors in Mexican youth: an analysis of three cross-sectional surveys

    PubMed Central

    2011-01-01

Background The relationship between urbanicity and adolescent health is a critical issue for which little empirical evidence has been reported. Although an association has been suggested, a dichotomous rural versus urban comparison may not succeed in identifying differences between adolescent contexts. This study aims to assess the influence of locality size on risk behaviors in a national sample of young Mexicans living in low-income households, while considering the moderating effect of socioeconomic status (SES). Methods This is a secondary analysis of three national surveys of low-income households in Mexico in different settings: rural, semi-urban and urban areas. We analyzed risk behaviors in 15-21-year-olds and their potential relation to urbanicity. The risk behaviors explored were: tobacco and alcohol consumption, sexual initiation and condom use. The adolescents' localities of residence were classified according to the number of inhabitants in each locality. We used a logistic model to identify an association between locality size and risk behaviors, including an interaction term with SES. Results The final sample included 17,974 adolescents from 704 localities in Mexico. Locality size was associated with tobacco and alcohol consumption, showing a similar effect throughout all SES levels: the larger the size of the locality, the lower the risk of consuming tobacco or alcohol compared with rural settings. The effect of locality size on sexual behavior was more complex. The odds of adolescent condom use were higher in larger localities only among adolescents in the lowest SES levels. We found no statistically significant association between locality size and sexual initiation. Conclusions The results suggest that in this sample of adolescents from low-income areas in Mexico, risk behaviors are related to locality size (number of inhabitants). Furthermore, for condom use, this relation is moderated by SES. 
Such heterogeneity suggests the need for more detailed analyses of both the effects of urbanicity on behavior, and the responses, which are also heterogeneous, required to address this situation. PMID:22129110

  7. Particle Concentrations in Occupational Settings Measured with a Nanoparticle Respiratory Deposition (NRD) Sampler.

    PubMed

    Stebounova, Larissa V; Gonzalez-Pech, Natalia I; Park, Jae Hong; Anthony, T Renee; Grassian, Vicki H; Peters, Thomas M

    2018-05-18

There is an increasing need to evaluate concentrations of nanoparticles in occupational settings due to their potential negative health effects. The Nanoparticle Respiratory Deposition (NRD) personal sampler was developed to collect nanoparticles separately from larger particles in the breathing zone of workers, while simultaneously providing a measure of respirable mass concentration. This study compared concentrations measured with the NRD sampler to those measured with a nano Micro Orifice Uniform-Deposit Impactor (nanoMOUDI) and respirable samplers in three workplaces. The NRD sampler performed well at two out of three locations, where over 90% of metal particles by mass were of submicrometer size (a heavy vehicle machining and assembly facility and a shooting range). At the heavy vehicle facility, the mean metal mass concentration of particles collected on the diffusion stage of the NRD was 42.5 ± 10.0 µg/m3, within 5% of the nanoMOUDI concentration of 44.4 ± 7.4 µg/m3. At the shooting range, the mass concentration for the diffusion stage of the NRD was 5.9 µg/m3, 28% above the nanoMOUDI concentration of 4.6 µg/m3. In contrast, less favorable results were obtained at an iron foundry, where 95% of metal particles by mass were larger than 1 µm. The accuracy of nanoparticle collection by the NRD diffusion stage may have been compromised by high concentrations of coarse particles at the iron foundry, where the NRD collected almost 5-fold more nanoparticle mass than the nanoMOUDI on one sampling day and differed by more than 40% on the other sampling days. The respirable concentrations measured by NRD samplers agreed well with concentrations measured by respirable samplers at all sampling locations. Overall, the NRD sampler accurately measured concentrations of nanoparticles in industrial environments when concentrations of large, coarse-mode particles were low.

  8. COMPARISON OF PRE- AND POSTQUARANTINE BLOOD CHEMISTRY AND HEMATOLOGY VALUES FROM WILD-CAUGHT COWNOSE RAYS (RHINOPTERA BONASUS).

    PubMed

    Cusack, Lara; Field, Cara L; Hoopes, Lisa; McDermott, Alexa; Clauss, Tonya

    2016-06-01

Though one of the most widely kept elasmobranchs in human care, the cownose ray (CNR; Rhinoptera bonasus) remains a species with minimal published information on hematologic reference intervals. As part of a larger study investigating the health and nutrition of the CNR, this study established a preliminary data set of plasma chemistry and hematology values specific to animals recently caught from the wild and compared this data set (intake sample) to values obtained following a period of quarantine (27-40 days) in an aquarium (exit sample). Blood samples were collected from 47 wild female (n = 46) and male (n = 1) CNR caught in pound nets off the coast of North Carolina and South Carolina. Differences between intake and exit values were analyzed. Due to the preponderance of female animals, data were not analyzed for sex differences. Plasma biochemical profiles were performed and analyzed. A select number of complete blood cell counts were performed (n = 24 from 12 animals). Statistically significant differences (P < 0.05) specific to time of sampling were determined for packed cell volume, total solids, blood urea nitrogen, sodium, chloride, potassium, phosphorus, cholesterol, glucose, and aspartate aminotransferase. Values reported are a significant expansion on the existing limited data for CNRs and will serve as a reference for health assessment of individuals both in the wild and in exhibit populations.

  9. Missing heritability in the tails of quantitative traits? A simulation study on the impact of slightly altered true genetic models.

    PubMed

    Pütter, Carolin; Pechlivanis, Sonali; Nöthen, Markus M; Jöckel, Karl-Heinz; Wichmann, Heinz-Erich; Scherag, André

    2011-01-01

    Genome-wide association studies have identified robust associations between single nucleotide polymorphisms and complex traits. As the proportion of phenotypic variance explained is still limited for most of the traits, larger and larger meta-analyses are being conducted to detect additional associations. Here we investigate the impact of the study design and the underlying assumption about the true genetic effect in a bimodal mixture situation on the power to detect associations. We performed simulations of quantitative phenotypes analysed by standard linear regression and dichotomized case-control data sets from the extremes of the quantitative trait analysed by standard logistic regression. Using linear regression, markers with an effect in the extremes of the traits were almost undetectable, whereas analysing extremes by case-control design had superior power even for much smaller sample sizes. Two real data examples are provided to support our theoretical findings and to explore our mixture and parameter assumption. Our findings support the idea to re-analyse the available meta-analysis data sets to detect new loci in the extremes. Moreover, our investigation offers an explanation for discrepant findings when analysing quantitative traits in the general population and in the extremes. Copyright © 2011 S. Karger AG, Basel.
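A minimal simulation in the spirit of the design comparison above, assuming an invented bimodal mixture in which the risk allele only raises the chance of landing in a shifted "extreme" component (all parameters are hypothetical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical mixture model: the trait is mostly noise, but carriers of
# the risk allele are slightly more likely to fall into a shifted
# "extreme" component. All parameters are invented for illustration.
g = rng.binomial(2, 0.3, size=n)          # additive genotype 0/1/2
p_extreme = 0.02 + 0.02 * g               # genotype raises extreme risk
extreme = rng.random(n) < p_extreme
y = rng.normal(0, 1, n) + np.where(extreme, 3.0, 0.0)

# (1) Standard linear regression on the full quantitative trait is,
# for a single marker, equivalent to the genotype-phenotype correlation.
r_full = np.corrcoef(g, y)[0, 1]

# (2) Case-control contrast of the upper vs. lower 10% of the trait:
# compare mean allele dosage between the two tails.
lo, hi = np.quantile(y, [0.1, 0.9])
cases, controls = g[y >= hi], g[y <= lo]
dosage_diff = cases.mean() - controls.mean()

print(f"full-sample correlation: {r_full:.3f}")
print(f"tail dosage difference:  {dosage_diff:.3f}")
```

With data of this kind, the full-sample correlation is tiny, while the tail contrast recovers a clear dosage difference using only the 20% of subjects in the extremes; the paper's simulations quantify when this translates into the power advantage for the case-control design.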

  10. Apparent Trend of the Iron Abundance in NGC 3201: The Same Outcome with Different Data

    NASA Astrophysics Data System (ADS)

    Kravtsov, Valery V.

    2017-08-01

We further study the unusual trend we found at statistically significant levels in some globular clusters, including NGC 3201: a decreasing iron abundance in red giants toward the cluster centers. We first show that recently published new estimates of iron abundance in the cluster reproduce this trend, in spite of the authors' statement about no metallicity spread due to a low scatter achieved in the [Fe II/H] ratio. The mean of [Fe II/H] within R ~ 2′ from the cluster center is lower, by Δ[Fe II/H] = 0.05 ± 0.02 dex, than in the outer region, in agreement with our original estimate for a much larger sample size within R ≈ 9′. We found that an older data set traces the trend to a much larger radial distance, comparable with the cluster tidal radius, at Δ[Fe/H] ~ 0.2 dex, due to the higher metallicity of distant stars. We conclude that the trend is reproduced by independent data sets and find that it is accompanied by both a notable same-sign trend of oxygen abundance, which can vary by up to Δ[O/Fe] ~ 0.3 dex within R ≈ 9′, and an opposite-sign trend of sodium abundance.

  11. Methane concentrations in water wells unrelated to proximity to existing oil and gas wells in northeastern Pennsylvania.

    PubMed

    Siegel, Donald I; Azzolina, Nicholas A; Smith, Bert J; Perry, A Elizabeth; Bothun, Rikka L

    2015-04-07

    Recent studies in northeastern Pennsylvania report higher concentrations of dissolved methane in domestic water wells associated with proximity to nearby gas-producing wells [ Osborn et al. Proc. Natl. Acad. Sci. U. S. A. 2011 , 108 , 8172 ] and [ Jackson et al. Proc. Natl. Acad. Sci. U. S. A. , 2013 , 110 , 11250 ]. We test this possible association by using Chesapeake Energy's baseline data set of over 11,300 dissolved methane analyses from domestic water wells, densely arrayed in Bradford and nearby counties (Pennsylvania), and near 661 pre-existing oil and gas wells. The majority of these, 92%, were unconventional wells, drilled with horizontal legs and hydraulically fractured. Our data set is hundreds of times larger than data sets used in prior studies. In contrast to prior findings, we found no statistically significant relationship between dissolved methane concentrations in groundwater from domestic water wells and proximity to pre-existing oil or gas wells. Previous analyses used small sample sets compared to the population of domestic wells available, which may explain the difference in prior findings compared to ours.

  12. Extreme Quantum Memory Advantage for Rare-Event Sampling

    NASA Astrophysics Data System (ADS)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  13. Near-infrared reflectance spectra-applications to problems in asteroid-meteorite relationships

    NASA Technical Reports Server (NTRS)

    Mcfadden, Lucy A.; Chamberlin, Alan; Vilas, Faith

    1991-01-01

    Near-infrared spectral reflectance data were collected at the Infrared Telescope Facility (IRTF) at Mauna Kea Observatories in 1985 and 1986 for the purpose of searching the region near the 3:1 Kirkwood gap for asteroids with the spectral signatures of ordinary chondrite parent bodies. Twelve reflectance spectra are observed. The presence of ordinary chondrite parent bodies among this specific set of observed asteroids is not obvious, though the sample is biased towards the larger asteroids in the region due to limitations imposed by detector sensitivity. The data set, which was acquired with the same instrumentation used for the 52-color asteroid survey (Bell et al., 1987), also presents some additional findings. The range of spectral characteristics that exist among asteroids of the same taxonomic type is noted. Conclusions based on the findings are discussed.

  14. Sequential CFAR detectors using a dead-zone limiter

    NASA Astrophysics Data System (ADS)

    Tantaratana, Sawasd

    1990-09-01

The performances of some proposed sequential constant-false-alarm-rate (CFAR) detectors are evaluated. The observations are passed through a dead-zone limiter, whose output is -1, 0, or +1, depending on whether the input is less than -c, between -c and c, or greater than c, where c is a constant. The test statistic is the sum of the limiter outputs; equivalently, the test operates on the reduced set of data (observations with absolute value larger than c), summing their signs. Both constant and linear boundaries are considered. Numerical results show a significant reduction in the average number of observations needed to achieve the same false-alarm and detection probabilities as a fixed-sample-size CFAR detector using the same kind of test statistic.
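A sketch of the dead-zone-limiter test described above, with constant boundaries. The boundary values and signal parameters are invented for illustration; the paper's boundaries are designed to meet specified false-alarm and detection probabilities.

```python
import numpy as np

def dead_zone(x, c):
    """Dead-zone limiter: +1, 0, or -1 as x is above c, inside [-c, c],
    or below -c (vectorized)."""
    return np.where(x > c, 1, np.where(x < -c, -1, 0))

def sequential_test(obs, c, upper, lower):
    """Sequential sign test with constant boundaries (boundary values are
    illustrative, not from the paper). Returns the decision and the
    number of observations consumed."""
    s = 0
    for n, x in enumerate(obs, start=1):
        s += 1 if x > c else (-1 if x < -c else 0)
        if s >= upper:
            return "H1", n          # positive shift detected
        if s <= lower:
            return "H0", n          # accept the noise-only hypothesis
    return "undecided", len(obs)

rng = np.random.default_rng(0)
signal = rng.normal(0.8, 1.0, 500)   # observations with a positive shift
c = 0.5

# The statistic over the full data equals the sum of signs of the reduced
# set (|x| > c), as stated in the abstract.
assert dead_zone(signal, c).sum() == np.sign(signal[np.abs(signal) > c]).sum()

decision, n_used = sequential_test(signal, c=c, upper=10, lower=-10)
print(decision, n_used)
```

Because the test stops as soon as a boundary is crossed, a clear shift is typically declared after far fewer observations than a fixed-sample-size test of comparable error rates would use, which is the saving the abstract reports.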

  15. Modern Foraminifera from a depth transect offshore Brunei Darussalam: diversity, sedimentation rate and preservation pathways.

    NASA Astrophysics Data System (ADS)

    Briguglio, Antonino; Goeting, Sulia; Kusli, Rosnani; Roslim, Amajida; Polgar, Gianluca; Kocsis, Laszlo

    2016-04-01

For this study, 11 samples have been collected by scuba diving from 5 to 35 meters water depth offshore Brunei Darussalam. The locations sampled are: Pelong Rock (5 samples, shallow reef with soft and stony corals and larger foraminifera, 5 to 8 meters water depth), Abana Rock (1 sample, shallow reef with mainly soft corals and larger foraminifera, 13 to 18 meters water depth), Oil Rig wreck (1 sample, very sandy bottom with larger foraminifera, 18 meters water depth), Dolphin wreck (1 sample, muddy sand with many small rotaliids, 24 meters water depth), US wreck (1 sample, sand with a small clay fraction, 28 meters water depth), Australian wreck (1 sample, mainly medium to coarse sand with larger foraminifera, 34 meters water depth) and Blue Water wreck (1 sample, mainly coarse sand, coral rubble and larger foraminifera, 35 meters water depth). The samples closer to the river inputs are normally richer in clay, while the most distant samples are purely sandy. Some additional samples have been collected next to reef environments which, even if very shallow, are mainly sandy with almost no clay fraction. The deepest sample, taken 30 km offshore, contains some planktonic foraminifera and shows a wide range of preservation states among the foraminifera, testifying to the presence of relict sediments at the sea bottom. The presence of relict sediments had already been pointed out by older oil-related field studies offshore Brunei Darussalam, and it is now possible to draw the depth limit of these deposits. The diversity of the benthic foraminiferal fauna is relatively high, but not as high as in neighboring regions, as some studies have highlighted. 
More than 50 species were collected and identified: in reef environments the most abundant are Calcarina defrancii, Neorotalia calcar and the amphisteginids; deeper, in the muddy sediments, the most abundant is Pararotalia schroeteriana; and in the deepest sandy sample the most abundant is Calcarina hispida, followed by Operculina ammonoides.

  16. Improved immunomagnetic enrichment of CD34(+) cells from umbilical cord blood using the CliniMACS cell separation system.

    PubMed

    Blake, Joseph M; Nicoud, Ian B; Weber, Daniel; Voorhies, Howard; Guthrie, Katherine A; Heimfeld, Shelly; Delaney, Colleen

    2012-08-01

    CD34(+) enrichment from cord blood units (CBU) is used increasingly in clinical applications involving ex vivo expansion. The CliniMACS instrument from Miltenyi Biotec is a current good manufacturing practice (cGMP) immunomagnetic selection system primarily designed for processing larger numbers of cells: a standard tubing set (TS) can process a maximum of 60 billion cells, while the larger capacity tubing set (LS) will handle 120 billion cells. In comparison, most CBU contain only 1-2 billion cells, raising a question regarding the optimal tubing set for CBU CD34(+) enrichment. We compared CD34(+) cell recovery and overall viability after CliniMACS processing of fresh CBU with either TS or LS. Forty-six freshly collected CBU (≤ 36 h) were processed for CD34(+) enrichment; 22 consecutive units were selected using TS and a subsequent 24 processed with LS. Cell counts and immunophenotyping were performed pre- and post-selection to assess total nucleated cells (TNC), viability and CD34(+) cell content. Two-sample t-tests of mean CD34(+) recovery and viability revealed significant differences in favor of LS (CD34(+) recovery, LS = 56%, TS = 45%, P = 0.003; viability, LS = 74%, TS = 59%, P = 0.011). Stepwise linear regression, considering pre-processing unit age, viability, TNC and CD34(+) purity, demonstrated statistically significant correlations only with the tubing set used and age of unit. For CD34(+) enrichment from fresh CBU, LS provided higher post-selection viability and more efficient recovery. In this case, a lower maximum TNC specification of TS was not predictive of better performance. The same may hold for smaller scale enrichment of other cell types with the CliniMACS instrument.

  17. Control of mixing hotspots over the vertical turbulent flux in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Mashayek, Ali; Ferrari, Raffaele; Ledwell, Jim; Merrifield, Sophia; St. Laurent, Louis

    2015-11-01

    Vertical turbulent mixing in the Southern Ocean is believed to play a role in setting the rate of the ocean Meridional Overturning Circulation (MOC), one of the key regulators of the climate system. The extent to which mixing influences the MOC, however, depends on its strength and is still under debate. To address this, a passive tracer was released upstream of the Drake Passage in 2009 as a part of the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES). Vertical dispersion of the tracer was measured in subsequent years to estimate vertical mixing. The inferred effective turbulent diffusivity values have proven larger than those obtained from localized measurements of shear made at various locations along the path of the tracer. While the values inferred from tracer imply a key role played by mixing in setting the MOC, those based on localized measurements suggest otherwise. In this work we employ the tracer data and localized turbulence measurements from DIMES in combination with a high resolution numerical ocean model to investigate whether these discrepancies are the result of different sampling strategies: the microstructure profiles sampled mixing only in a few regions, while the tracer sampled mixing over a much wider area as it spread spatially.

  18. Tailoring magnetic properties of Co nanocluster assembled films using hydrogen

    NASA Astrophysics Data System (ADS)

    Romero, C. P.; Volodin, A.; Paddubrouskaya, H.; Van Bael, M. J.; Van Haesendonck, C.; Lievens, P.

    2018-07-01

    Tailoring magnetic properties in nanocluster assembled cobalt (Co) thin films was achieved by admitting a small percentage of H2 gas (∼2%) into the Co gas phase cluster formation chamber prior to deposition. The oxygen content in the films is considerably reduced by the presence of hydrogen during the cluster formation, leading to enhanced magnetic interactions between clusters. Two sets of Co samples were fabricated, one without hydrogen gas and one with hydrogen gas. Magnetic properties of the non-hydrogenated and the hydrogen-treated Co nanocluster assembled films are comparatively studied using magnetic force microscopy and vibrating sample magnetometry. When comparing the two sets of samples the considerably larger coercive field of the H2-treated Co nanocluster film and the extended micrometer-sized magnetic domain structure confirm the enhancement of magnetic interactions between clusters. The thickness of the antiferromagnetic CoO layer is controlled with this procedure and modifies the exchange bias effect in these films. The exchange bias shift is lower for the H2-treated Co nanocluster film, which indicates that a thinner antiferromagnetic CoO reduces the coupling with the ferromagnetic Co. The hydrogen-treatment method can be used to tailor the oxidation levels thus controlling the magnetic properties of ferromagnetic cluster-assembled films.

  19. Optimal selection of epitopes for TXP-immunoaffinity mass spectrometry.

    PubMed

    Planatscher, Hannes; Supper, Jochen; Poetz, Oliver; Stoll, Dieter; Joos, Thomas; Templin, Markus F; Zell, Andreas

    2010-06-25

Mass spectrometry (MS) based protein profiling has become one of the key technologies in biomedical research and biomarker discovery. One bottleneck in MS-based protein analysis is sample preparation and an efficient fractionation step to reduce the complexity of the biological samples, which are too complex to be analyzed directly with MS. Sample preparation strategies that reduce the complexity of tryptic digests by using immunoaffinity-based methods have been shown to lead to a substantial increase in throughput and sensitivity in the proteomic mass spectrometry approach. The limitation of such immunoaffinity-based approaches is the availability of the appropriate peptide-specific capture antibodies. Recent developments in these approaches, where subsets of peptides with short identical terminal sequences can be enriched using antibodies directed against short terminal epitopes, promise a significant gain in efficiency. We show that the minimal set of terminal epitopes for the coverage of a target protein list can be found by formulation as a set cover problem, preceded by a filtering pipeline for the exclusion of peptides and target epitopes with undesirable properties. For small datasets (a few hundred proteins) it is possible to solve the problem to optimality with moderate computational effort using commercial or free solvers. Larger datasets, like full proteomes, require the use of heuristics.
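The set-cover formulation can be sketched with the standard greedy heuristic, which repeatedly picks the terminal epitope covering the most still-uncovered target proteins (epitope and protein names below are hypothetical, not from the paper):

```python
def greedy_epitope_cover(coverage, targets):
    """Greedy set-cover heuristic. `coverage` maps each candidate terminal
    epitope to the set of target proteins whose tryptic peptides carry it.
    Returns a small (not necessarily optimal) list of epitopes covering
    every coverable target."""
    # restrict to targets that at least one epitope can cover
    uncovered = set(targets) & set().union(*coverage.values())
    chosen = []
    while uncovered:
        # pick the epitope covering the most still-uncovered proteins
        best = max(coverage, key=lambda e: len(coverage[e] & uncovered))
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

# Hypothetical toy instance: 4 candidate epitopes, 5 target proteins.
coverage = {
    "AKLR": {"P1", "P2", "P3"},
    "GSVK": {"P3", "P4"},
    "TDFR": {"P4", "P5"},
    "NQEK": {"P5"},
}
selected = greedy_epitope_cover(coverage, ["P1", "P2", "P3", "P4", "P5"])
print(selected)  # → ['AKLR', 'TDFR']
```

The greedy heuristic carries the classical ln(n) approximation guarantee, which is why it is a natural fallback for the full-proteome instances the abstract says exceed exact solvers.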

  20. Identification of Key Odorants in Used Disposable Absorbent Incontinence Products

    PubMed Central

    Hall, Gunnar; Forsgren-Brusk, Ulla

    2017-01-01

PURPOSE: The purpose of this study was to identify key odorants in used disposable absorbent incontinence products. DESIGN: Descriptive in vitro study. SUBJECTS AND SETTING: Samples of used incontinence products were collected from 8 residents with urinary incontinence living in geriatric nursing homes in the Gothenburg area of Sweden. Products were chosen from a larger set of products that had previously been characterized by descriptive odor analysis. METHODS: Pieces of the used incontinence products were cut from the wet area, placed in glass bottles, and kept frozen until dynamic headspace sampling of volatile compounds was completed. Gas chromatography–olfactometry was used to identify which compounds contributed most to the odors in the samples. Compounds were identified by gas chromatography–mass spectrometry. RESULTS: Twenty-eight volatiles were found to be key odorants in the used incontinence products. Twenty-six were successfully identified. They belonged to the following classes of chemical compounds: aldehydes (6); amines (1); aromatics (3); isothiocyanates (1); heterocyclics (2); ketones (6); sulfur compounds (6); and terpenes (1). CONCLUSION: Nine of the 28 key odorants were considered to be of particular importance to the odor of the used incontinence products: 3-methylbutanal, trimethylamine, cresol, guaiacol, 4,5-dimethylthiazole-S-oxide, diacetyl, dimethyl trisulfide, 5-methylthio-4-penten-2-ol, and an unidentified compound. PMID:28328644

  1. Setting the magic angle for fast magic-angle spinning probes.

    PubMed

    Penzel, Susanne; Smith, Albert A; Ernst, Matthias; Meier, Beat H

    2018-06-15

Fast magic-angle spinning coupled with ¹H detection is a powerful method to improve spectral resolution and signal to noise in solid-state NMR spectra. Commercial probes now provide spinning frequencies in excess of 100 kHz. Then, one has sufficient resolution in the ¹H dimension to directly detect protons, which have a gyromagnetic ratio approximately four times larger than that of ¹³C spins. However, the gains in sensitivity can quickly be lost if the rotation angle is not set precisely. The most common method of magic-angle calibration is to optimize the number of rotary echoes, or sideband intensity, observed on a sample of KBr. However, this typically uses relatively low spinning frequencies, where the spinning of fast-MAS probes is often unstable, and detection on the ¹³C channel, for which fast-MAS probes are typically not optimized. Therefore, we compare the KBr-based optimization of the magic angle with two alternative approaches: optimization of the splitting observed on the carbonyl of ¹³C-labeled glycine ethyl ester due to the Cα-C′ J-coupling, or optimization of the H-N J-coupling spin echo in the protein sample itself. The latter method has the particular advantage that no separate sample is necessary for the magic-angle optimization. Copyright © 2018. Published by Elsevier Inc.

  2. Concordant integrative gene set enrichment analysis of multiple large-scale two-sample expression data sets.

    PubMed

    Lai, Yinglei; Zhang, Fanni; Nayak, Tapan K; Modarres, Reza; Lee, Norman H; McCaffrey, Timothy A

    2014-01-01

    Gene set enrichment analysis (GSEA) is an important approach to the analysis of coordinate expression changes at a pathway level. Although many statistical and computational methods have been proposed for GSEA, the issue of a concordant integrative GSEA of multiple expression data sets has not been well addressed. Among different related data sets collected for the same or similar study purposes, it is important to identify pathways or gene sets with concordant enrichment. We categorize the underlying true states of differential expression into three representative categories: no change, positive change and negative change. Due to data noise, what we observe from experiments may not indicate the underlying truth. Although these categories are not observed in practice, they can be considered in a mixture model framework. Then, we define the mathematical concept of concordant gene set enrichment and calculate its related probability based on a three-component multivariate normal mixture model. The related false discovery rate can be calculated and used to rank different gene sets. We used three published lung cancer microarray gene expression data sets to illustrate our proposed method. One analysis based on the first two data sets was conducted to compare our result with a previous published result based on a GSEA conducted separately for each individual data set. This comparison illustrates the advantage of our proposed concordant integrative gene set enrichment analysis. Then, with a relatively new and larger pathway collection, we used our method to conduct an integrative analysis of the first two data sets and also all three data sets. Both results showed that many gene sets could be identified with low false discovery rates. A consistency between both results was also observed. A further exploration based on the KEGG cancer pathway collection showed that a majority of these pathways could be identified by our proposed method. 
This study illustrates that we can improve detection power and discovery consistency through a concordant integrative analysis of multiple large-scale two-sample gene expression data sets.
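
    The ranking idea described above can be illustrated with a toy calculation. The sketch below assumes independent per-data-set posterior probabilities over the three states (negative change, no change, positive change) and scores a gene set by the probability that every data set shares the same non-null state; the function names and numbers are invented for illustration, and the paper's actual method fits a three-component multivariate normal mixture to the joint states rather than assuming independence.

```python
def concordance_probability(posteriors):
    """posteriors: one (p_neg, p_null, p_pos) triple per data set for a gene set.
    Under an independence assumption, the set is 'concordantly enriched' when
    all data sets share the same non-null state, so we sum the two products."""
    p_all_pos = 1.0
    p_all_neg = 1.0
    for p_neg, p_null, p_pos in posteriors:
        p_all_pos *= p_pos
        p_all_neg *= p_neg
    return p_all_pos + p_all_neg

def rank_by_local_fdr(gene_sets):
    """gene_sets: {name: posteriors}; rank ascending by 1 - concordance prob."""
    scored = {name: 1.0 - concordance_probability(p)
              for name, p in gene_sets.items()}
    return sorted(scored, key=scored.get)

# Invented posteriors for two hypothetical pathways across two data sets.
sets = {
    "pathway_A": [(0.05, 0.10, 0.85), (0.02, 0.08, 0.90)],  # concordant up
    "pathway_B": [(0.10, 0.80, 0.10), (0.15, 0.70, 0.15)],  # mostly null
}
ranking = rank_by_local_fdr(sets)
print(ranking)  # pathway_A ranks first (lower local FDR)
```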

  3. Effect of two prophylaxis methods on marginal gap of Cl V resin-modified glass-ionomer restorations.

    PubMed

    Kimyai, Soodabeh; Pournaghi-Azar, Fatemeh; Daneshpooy, Mehdi; Abed Kahnamoii, Mehdi; Davoodi, Farnaz

    2016-01-01

    Background. This study evaluated the effect of two prophylaxis techniques on the marginal gap of Cl V resin-modified glass-ionomer restorations. Methods. Standard Cl V cavities were prepared on the buccal surfaces of 48 sound bovine mandibular incisors in this in vitro study. After restoration of the cavities with GC Fuji II LC resin-modified glass-ionomer, the samples were randomly assigned to 3 groups of 16. In group 1, the prophylactic procedures were carried out with rubber cup and pumice powder and in group 2 with an air-powder polishing device (APD). In group 3 (control), the samples did not undergo any prophylactic procedures. Then the marginal gaps were measured. Two-way ANOVA was used to compare marginal gaps at the occlusal and gingival margins between the groups. Post hoc Tukey test was used for two-by-two comparisons. Statistical significance was set at P < 0.05. Results. There were significant differences in the means of marginal gaps in terms of prophylactic techniques (P < 0.001), with significantly larger marginal gaps in the APD group compared to the pumice and rubber cup group, which in turn exhibited significantly larger marginal gaps compared to the control group (P < 0.0005). In addition, the means of marginal gaps differed significantly by margin type (P < 0.001), with significantly larger gaps at gingival margins compared to the occlusal margins (P < 0.0005). Conclusion. The prophylactic techniques used in this study had a negative effect on the marginal gaps of Cl V resin-modified glass-ionomer restorations.

  4. Determination of phenylurea herbicides in water samples using online sorptive preconcentration and high-performance liquid chromatography with UV or electrospray mass spectrometric detection.

    PubMed

    Baltussen, E; Snijders, H; Janssen, H G; Sandra, P; Cramers, C A

    1998-04-10

    A recently developed method for the extraction of organic micropollutants from aqueous samples based on sorptive enrichment in columns packed with 100% polydimethylsiloxane (PDMS) particles was coupled on-line with HPLC analysis. The sorptive enrichment procedure originally developed for relatively nonpolar analytes was used to preconcentrate polar phenylurea herbicides from aqueous samples. PDMS extraction columns of 5, 10 and 25 cm were used to extract the herbicides from distilled, tap and river water samples. A model that allows prediction of retention and breakthrough volumes is presented. Despite the essentially apolar nature of the PDMS material, it is possible to concentrate sample volumes up to 10 ml on PDMS cartridges without losses of the most polar analyte under investigation, fenuron. For less polar analytes significantly larger sample volumes can be applied. Since standard UV detection does not provide adequate selectivity for river water samples, an electrospray (ES)-MS instrument was used to determine phenylurea herbicides in a water sample from the river Dommel. Methoxuron was present at a level of 80 ng/l. The detection limit of the current set-up, using 10 ml water samples and ES-MS detection is 10 ng/l in river water samples. Strategies for further improvement of the detection limits are identified.

  5. Spreadsheet Simulation of the Law of Large Numbers

    ERIC Educational Resources Information Center

    Boger, George

    2005-01-01

    If larger and larger samples are successively drawn from a population and a running average calculated after each sample has been drawn, the sequence of averages will converge to the mean, [mu], of the population. This remarkable fact, known as the law of large numbers, holds true if samples are drawn from a population of discrete or continuous…
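
    The convergence described above is easy to demonstrate numerically. A minimal sketch (not tied to any particular spreadsheet) that draws uniform samples and tracks the running average after each draw:

```python
import random

def running_averages(n, seed=0):
    """Draw n samples from Uniform(0, 1) and return the running mean
    after each draw; by the law of large numbers, the sequence of means
    converges to the population mean mu = 0.5."""
    rng = random.Random(seed)
    total = 0.0
    means = []
    for i in range(1, n + 1):
        total += rng.random()
        means.append(total / i)
    return means

means = running_averages(100_000)
# Error of the running mean after 100 draws vs. after 100,000 draws.
print(abs(means[99] - 0.5), abs(means[-1] - 0.5))
```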

  6. Cross-Study Homogeneity of Psoriasis Gene Expression in Skin across a Large Expression Range

    PubMed Central

    Kerkof, Keith; Timour, Martin; Russell, Christopher B.

    2013-01-01

    Background In psoriasis, only limited overlap between sets of genes identified as differentially expressed (psoriatic lesional vs. psoriatic non-lesional) was found using statistical and fold-change cut-offs. To provide a framework for utilizing prior psoriasis data sets we sought to understand the consistency of those sets. Methodology/Principal Findings Microarray expression profiling and qRT-PCR were used to characterize gene expression in PP and PN skin from psoriasis patients. cDNA (three new data sets) and cRNA hybridization (four existing data sets) data were compared using a common analysis pipeline. Agreement between data sets was assessed using varying qualitative and quantitative cut-offs to generate a DEG list in a source data set and then using other data sets to validate the list. Concordance increased from 67% across all probe sets to over 99% across more than 10,000 probe sets when statistical filters were employed. The fold-change behavior of individual genes tended to be consistent across the multiple data sets. We found that genes with <2-fold change values were quantitatively reproducible between pairs of data-sets. In a subset of transcripts with a role in inflammation changes detected by microarray were confirmed by qRT-PCR with high concordance. For transcripts with both PN and PP levels within the microarray dynamic range, microarray and qRT-PCR were quantitatively reproducible, including minimal fold-changes in IL13, TNFSF11, and TNFRSF11B and genes with >10-fold changes in either direction such as CHRM3, IL12B and IFNG. Conclusions/Significance Gene expression changes in psoriatic lesions were consistent across different studies, despite differences in patient selection, sample handling, and microarray platforms but between-study comparisons showed stronger agreement within than between platforms. 
We could use cut-offs as low as log10(ratio) = 0.1 (fold-change = 1.26), generating larger gene lists that validate on independent data sets. The reproducibility of PP signatures across data sets suggests that different sample sets can be productively compared. PMID:23308107

  7. Respondent-driven sampling for an adolescent health study in vulnerable urban settings: a multi-country study.

    PubMed

    Decker, Michele R; Marshall, Beth Dail; Emerson, Mark; Kalamar, Amanda; Covarrubias, Laura; Astone, Nan; Wang, Ziliang; Gao, Ersheng; Mashimbye, Lawrence; Delany-Moretlwe, Sinead; Acharya, Rajib; Olumide, Adesola; Ojengbede, Oladosu; Blum, Robert W; Sonenstein, Freya L

    2014-12-01

    The global adolescent population is larger than ever before and is rapidly urbanizing. Global surveillance systems to monitor youth health typically use household- and school-based recruitment methods. These systems risk not reaching the most marginalized youth made vulnerable by conditions of migration, civil conflict, and other forms of individual and structural vulnerability. We describe the methodology of the Well-Being of Adolescents in Vulnerable Environments survey, which used respondent-driven sampling (RDS) to recruit male and female youth aged 15-19 years and living in economically distressed urban settings in Baltimore, MD; Johannesburg, South Africa; Ibadan, Nigeria; New Delhi, India; and Shanghai, China (migrant youth only) for a cross-sectional study. We describe a shared recruitment and survey administration protocol across the five sites, present recruitment parameters, and illustrate challenges and necessary adaptations for use of RDS with youth in disadvantaged urban settings. We describe the reach of RDS into populations of youth who may be missed by traditional household- and school-based sampling. Across all sites, an estimated 9.6% were unstably housed; among those enrolled in school, absenteeism was pervasive with 29% having missed over 6 days of school in the past month. Overall findings confirm the feasibility, efficiency, and utility of RDS in quickly reaching diverse samples of youth, including those both in and out of school and those unstably housed, and provide direction for optimizing RDS methods with this population. In our rapidly urbanizing global landscape with an unprecedented youth population, RDS may serve as a valuable tool in complementing existing household- and school-based methods for health-related surveillance that can guide policy. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  8. Respondent-driven sampling for an adolescent health study in vulnerable urban settings: a multi-country study

    PubMed Central

    Decker, Michele R.; Marshall, Beth; Emerson, Mark; Kalamar, Amanda; Covarrubias, Laura; Astone, Nan; Wang, Ziliang; Gao, Ersheng; Mashimbye, Lawrence; Delany-Moretlwe, Sinead; Acharya, Rajib; Olumide, Adesola; Ojengbede, Oladosu; Blum, Robert

    2015-01-01

    The global adolescent population is larger than ever before and is rapidly urbanizing. Global surveillance systems to monitor youth health typically use household- and school-based recruitment methods. These systems risk not reaching the most marginalized youth made vulnerable by conditions of migration, civil conflict and other forms of individual and structural vulnerability. We describe the methodology of the Well Being of Adolescents in Vulnerable Environments (WAVE) survey, which used respondent-driven sampling (RDS) to recruit male and female youth aged 15 to 19 years and living in economically distressed urban settings in Baltimore, USA, Johannesburg, South Africa, Ibadan, Nigeria, Delhi, India and Shanghai, China (migrant youth only) for a cross-sectional study. We describe a shared recruitment and survey administration protocol across the five sites, present recruitment parameters, and illustrate challenges and necessary adaptations for use of RDS with youth in disadvantaged urban settings. We describe the reach of RDS into populations of youth who may be missed by traditional household-based and school-based sampling. Across all sites, an estimated 9.6% were unstably housed; among those enrolled in school, absenteeism was pervasive with 29% having missed over 6 days of school in the past month. Overall findings confirm the feasibility, efficiency and utility of RDS in quickly reaching diverse samples of youth, including those both in and out of school and those unstably housed, and provide direction for optimizing RDS methods with this population. In our rapidly urbanizing global landscape with an unprecedented youth population, RDS may serve as a valuable tool in complementing existing household- and school-based methods for health-related surveillance that can guide policy. PMID:25454005

  9. Patient Experience-based Value Sets: Are They Stable?

    PubMed

    Pickard, A Simon; Hung, Yu-Ting; Lin, Fang-Ju; Lee, Todd A

    2017-11-01

    Although societal preference weights are desirable to inform resource-allocation decision-making, patient-experienced health state-based value sets can be useful for clinical decision-making, but context may matter. To estimate EQ-5D value sets using visual analog scale (VAS) ratings for patients undergoing knee replacement surgery and compare the estimates before and after surgery. We used the Patient Reported Outcome Measures data collected by the UK National Health Service on patients undergoing knee replacement from 2009 to 2012. Generalized least squares regression models were used to derive value sets based on the EQ-5D-3L using a development sample before and after surgery, and model performance was examined using a validation sample. A total of 90,450 preoperative and postoperative valuations were included. For preoperative valuations, the largest decrement in VAS values was associated with the dimension of anxiety/depression, followed by self-care, mobility, usual activities, and pain/discomfort. However, pain/discomfort had a greater impact on VAS value decrement in postoperative valuations. Compared with preoperative health problems, postsurgical health problems were associated with larger value decrements, with significant differences in several levels and dimensions, including level 2 of mobility, level 2/3 of usual activities, level 3 of pain/discomfort, and level 3 of anxiety/depression. Similar results were observed across subgroups stratified by age and sex. Findings suggest patient experience-based value sets are not stable (ie, context such as timing matters). However, the knowledge that lower values are assigned to health states postsurgery compared with presurgery may be useful for the patient-doctor decision-making process.

  10. STBase: one million species trees for comparative biology.

    PubMed

    McMahon, Michelle M; Deepak, Akshay; Fernández-Baca, David; Boss, Darren; Sanderson, Michael J

    2015-01-01

    Comprehensively sampled phylogenetic trees provide the most compelling foundations for strong inferences in comparative evolutionary biology. Mismatches are common, however, between the taxa for which comparative data are available and the taxa sampled by published phylogenetic analyses. Moreover, many published phylogenies are gene trees, which cannot always be adapted immediately for species level comparisons because of discordance, gene duplication, and other confounding biological processes. A new database, STBase, lets comparative biologists quickly retrieve species level phylogenetic hypotheses in response to a query list of species names. The database consists of 1 million single- and multi-locus data sets, each with a confidence set of 1000 putative species trees, computed from GenBank sequence data for 413,000 eukaryotic taxa. Two bodies of theoretical work are leveraged to aid in the assembly of multi-locus concatenated data sets for species tree construction. First, multiply labeled gene trees are pruned to conflict-free singly-labeled species-level trees that can be combined between loci. Second, impacts of missing data in multi-locus data sets are ameliorated by assembling only decisive data sets. Data sets overlapping with the user's query are ranked using a scheme that depends on user-provided weights for tree quality and for taxonomic overlap of the tree with the query. Retrieval times are independent of the size of the database, typically a few seconds. Tree quality is assessed by a real-time evaluation of bootstrap support on just the overlapping subtree. Associated sequence alignments, tree files and metadata can be downloaded for subsequent analysis. STBase provides a tool for comparative biologists interested in exploiting the most relevant sequence data available for the taxa of interest. 
It may also serve as a prototype for future species tree oriented databases and as a resource for assembly of larger species phylogenies from precomputed trees.

  11. Short-term effects of goal-setting focusing on the life goal concept on subjective well-being and treatment engagement in subacute inpatients: a quasi-randomized controlled trial

    PubMed Central

    Ogawa, Tatsuya; Omon, Kyohei; Yuda, Tomohisa; Ishigaki, Tomoya; Imai, Ryota; Ohmatsu, Satoko; Morioka, Shu

    2016-01-01

    Objective: To investigate the short-term effects of the life goal concept on subjective well-being and treatment engagement, and to determine the sample size required for a larger trial. Design: A quasi-randomized controlled trial that was not blinded. Setting: A subacute rehabilitation ward. Subjects: A total of 66 patients were randomized to a goal-setting intervention group with the life goal concept (Life Goal), a standard rehabilitation group with no goal-setting intervention (Control 1), or a goal-setting intervention group without the life goal concept (Control 2). Interventions: The goal-setting intervention in the Life Goal and Control 2 was Goal Attainment Scaling. The Life Goal patients were assessed in terms of their life goals, and the hierarchy of goals was explained. The intervention duration was four weeks. Main measures: Patients were assessed pre- and post-intervention. The outcome measures were the Hospital Anxiety and Depression Scale, 12-item General Health Questionnaire, Pittsburgh Rehabilitation Participation Scale, and Functional Independence Measure. Results: Of the 296 potential participants, 66 were enrolled; Life Goal (n = 22), Control 1 (n = 22) and Control 2 (n = 22). Anxiety was significantly lower in the Life Goal group (4.1 ± 3.0) than in Control 1 (6.7 ± 3.4), but treatment engagement was significantly higher in the Life Goal group (5.3 ± 0.4) compared with both Control 1 (4.8 ± 0.6) and Control 2 (4.9 ± 0.5). Conclusions: The life goal concept had a short-term effect on treatment engagement. A sample of 31 patients per group would be required for a fully powered clinical trial. PMID:27496700

  12. Focus of a multilayer Laue lens with an aperture of 102 microns determined by ptychography at beamline 1-BM at the Advanced Photon Source

    NASA Astrophysics Data System (ADS)

    Macrander, Albert; Wojcik, Michael; Maser, Jörg; Bouet, Nathalie; Conley, Raymond

    2017-09-01

    Ptychography was used to determine the focus of a Multilayer-Laue-Lens (MLL) at beamline 1-BM at the Advanced Photon Source (APS). The MLL had a record aperture of 102 microns with 15170 layers. The measurements were made at 12 keV. The focal length was 9.6 mm, and the outer-most zone was 4 nm thick. MLLs with ever larger apertures are under continuous development since ever longer focal lengths, ever larger working distances, and ever increased flux in the focus are desired. A focus size of 25 nm was determined by ptychographic phase retrieval from a gold grating sample with 1 micron lines and spaces over 3.0 microns horizontal distance. The MLL was set to focus in the horizontal plane of the bending magnet beamline. A CCD with 13.0 micron pixel size positioned 1.13 m downstream of the sample was used to collect the transmitted intensity distribution. The beam incident on the MLL covered the whole 102 micron aperture in the horizontal focusing direction and 20 microns in the vertical direction. 160 iterations of the difference map algorithm were sufficient to obtain a reconstructed image of the sample. The present work highlights the utility of a bending magnet source at the APS for performing coherence-based experiments. Use of ptychography at 1-BM on MLL optics opens the way to study diffraction-limited imaging of other hard x-ray optics.

  13. Assessing the Reliability of the Dynamics Reconstructed from Metadynamics.

    PubMed

    Salvalaglio, Matteo; Tiwary, Pratyush; Parrinello, Michele

    2014-04-08

    Sampling a molecular process characterized by an activation free energy significantly larger than kBT is a well-known challenge in molecular dynamics simulations. In a recent work [Tiwary and Parrinello, Phys. Rev. Lett. 2013, 111, 230602], we have demonstrated that the transition times of activated molecular transformations can be computed from well-tempered metadynamics provided that no bias is deposited in the transition state region and that the set of collective variables chosen to enhance sampling does not display hysteresis. Ensuring though that these two criteria are met may not always be simple. Here we build on the fact that the times of escape from a long-lived metastable state obey Poisson statistics. This allows us to identify quantitative measures of trustworthiness of our calculation. We test our method on a few paradigmatic examples.
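
    The reliability check rests on the fact that times of escape from a long-lived metastable state should be exponentially distributed (Poisson statistics). A minimal sketch of such a check, using a one-sample Kolmogorov-Smirnov comparison against an exponential law with the rate fitted from the sample mean; note that fitting the rate from the data makes the standard KS p-value conservative, and the function name and data below are illustrative, not the authors' code:

```python
import math
import random

def ks_exponential_pvalue(times):
    """One-sample KS test of escape times against an exponential law whose
    mean is fitted from the sample. Returns an (asymptotic) p-value; a high
    value is consistent with Poisson escape statistics."""
    n = len(times)
    tau = sum(times) / n                  # MLE of the mean escape time
    ts = sorted(times)
    d = 0.0
    for i, t in enumerate(ts):
        cdf = 1.0 - math.exp(-t / tau)    # exponential CDF
        d = max(d, abs(cdf - (i + 1) / n), abs(cdf - i / n))
    # Asymptotic Kolmogorov distribution for the p-value.
    lam = (math.sqrt(n) + 0.12 + 0.11 / math.sqrt(n)) * d
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * (k * lam) ** 2)
                  for k in range(1, 101))
    return max(min(p, 1.0), 0.0)

rng = random.Random(1)
good = [rng.expovariate(1.0) for _ in range(200)]  # genuinely Poissonian escapes
p_good = ks_exponential_pvalue(good)
print(p_good)  # high p-value: no evidence against exponential escape times
```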

  14. Experimental generalized quantum suppression law in Sylvester interferometers

    NASA Astrophysics Data System (ADS)

    Viggianiello, Niko; Flamini, Fulvio; Innocenti, Luca; Cozzolino, Daniele; Bentivegna, Marco; Spagnolo, Nicolò; Crespi, Andrea; Brod, Daniel J.; Galvão, Ernesto F.; Osellame, Roberto; Sciarrino, Fabio

    2018-03-01

    Photonic interference is a key quantum resource for optical quantum computation, and in particular for so-called boson sampling devices. In interferometers with certain symmetries, genuine multiphoton quantum interference effectively suppresses certain sets of events, as in the original Hong–Ou–Mandel effect. Recently, it was shown that some classical and semi-classical models could be ruled out by identifying such suppressions in Fourier interferometers. Here we propose a suppression law suitable for random-input experiments in multimode Sylvester interferometers, and verify it experimentally using 4- and 8-mode integrated interferometers. The observed suppression occurs for a much larger fraction of input–output combinations than what is observed in Fourier interferometers of the same size, and could be relevant to certification of boson sampling machines and other experiments relying on bosonic interference, such as quantum simulation and quantum metrology.

  15. Long-term mental health outcome in post-conflict settings: Similarities and differences between Kosovo and Rwanda.

    PubMed

    Eytan, Ariel; Munyandamutsa, Naasson; Nkubamugisha, Paul Mahoro; Gex-Fabry, Marianne

    2015-06-01

    Few studies investigated the long-term mental health outcome in culturally different post-conflict settings. This study considers two surveys conducted in Kosovo 8 years after the Balkans war and in Rwanda 14 years after the genocide. All participants (n = 864 in Kosovo; n = 962 in Rwanda) were interviewed using the posttraumatic stress disorder (PTSD) and major depressive episode (MDE) sections of the Mini International Neuropsychiatric Interview (MINI) and the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36). Proportions of participants who met diagnostic criteria for either PTSD or MDE were 33.0% in Kosovo and 31.0% in Rwanda, with co-occurrence of both disorders in 17.8% of the Rwandan sample and 9.5% of the Kosovan sample. Among patients with PTSD, patterns of symptoms significantly differed in the two settings, with avoidance and inability to recall less frequent and sense of a foreshortened future and increased startle response more common in Rwanda. Significant differences were also observed in patients with MDE, with loss of energy and difficulties concentrating less frequent and suicidal ideation more common in Rwanda. Comorbid PTSD and MDE were associated with decreased SF-36 subjective mental and physical health scores in both settings, but significantly larger effects in Kosovo than in Rwanda. Culturally different civilian populations exposed to mass trauma may differ with respect to their long-term mental health outcome, including comorbidity, symptom profile and health perception. © The Author(s) 2014.

  16. The 60-month all-sky BAT Survey of AGN and the Anisotropy of Nearby AGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajello, M.; /KIPAC, Menlo Park; Alexander, D.M.

    2012-04-02

    Surveys above 10 keV represent one of the best resources to provide an unbiased census of the population of Active Galactic Nuclei (AGN). We present the results of 60 months of observation of the hard X-ray sky with Swift/BAT. In this timeframe, BAT detected (in the 15-55 keV band) 720 sources in an all-sky survey of which 428 are associated with AGN, most of which are nearby. Our sample has negligible incompleteness and statistics a factor of ~2 larger than similarly complete sets of AGN. Our sample contains (at least) 15 bona-fide Compton-thick AGN and 3 likely candidates. Compton-thick AGN represent ~5% of AGN samples detected above 15 keV. We use the BAT dataset to refine the determination of the LogN-LogS of AGN, which is extremely important, now that NuSTAR prepares for launch, towards assessing the AGN contribution to the cosmic X-ray background. We show that the LogN-LogS of AGN selected above 10 keV is now established to a ~10% precision. We derive the luminosity function of Compton-thick AGN and measure a space density of 7.9(-2.9/+4.1) × 10^-5 Mpc^-3 for objects with a de-absorbed luminosity larger than 2 × 10^42 erg s^-1. As the BAT AGN are mostly local, they allow us to investigate the spatial distribution of AGN in the nearby Universe regardless of absorption. We find concentrations of AGN that coincide spatially with the largest congregations of matter in the local (≤ 85 Mpc) Universe. There is some evidence that the fraction of Seyfert 2 objects is larger than average in the direction of these dense regions.

  17. Natural language processing in an intelligent writing strategy tutoring system.

    PubMed

    McNamara, Danielle S; Crossley, Scott A; Roscoe, Rod

    2013-06-01

    The Writing Pal is an intelligent tutoring system that provides writing strategy training. A large part of its artificial intelligence resides in the natural language processing algorithms to assess essay quality and guide feedback to students. Because writing is often highly nuanced and subjective, the development of these algorithms must consider a broad array of linguistic, rhetorical, and contextual features. This study assesses the potential for computational indices to predict human ratings of essay quality. Past studies have demonstrated that linguistic indices related to lexical diversity, word frequency, and syntactic complexity are significant predictors of human judgments of essay quality but that indices of cohesion are not. The present study extends prior work by including a larger data sample and an expanded set of indices to assess new lexical, syntactic, cohesion, rhetorical, and reading ease indices. Three models were assessed. The model reported by McNamara, Crossley, and McCarthy (Written Communication 27:57-86, 2010) including three indices of lexical diversity, word frequency, and syntactic complexity accounted for only 6% of the variance in the larger data set. A regression model including the full set of indices examined in prior studies of writing predicted 38% of the variance in human scores of essay quality with 91% adjacent accuracy (i.e., within 1 point). A regression model that also included new indices related to rhetoric and cohesion predicted 44% of the variance with 94% adjacent accuracy. The new indices increased accuracy but, more importantly, afford the means to provide more meaningful feedback in the context of a writing tutoring system.
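
    The "adjacent accuracy" metric quoted above (agreement within 1 point of the human rating) is straightforward to compute. A minimal sketch with invented scores:

```python
def adjacent_accuracy(predicted, human, tolerance=1):
    """Fraction of essays whose predicted score falls within `tolerance`
    points of the human rating ('adjacent accuracy' in the abstract's sense)."""
    hits = sum(1 for p, h in zip(predicted, human) if abs(p - h) <= tolerance)
    return hits / len(predicted)

# Invented model predictions and human ratings on a 1-6 essay scale.
pred  = [3, 4, 2, 5, 3, 1]
human = [3, 5, 2, 3, 4, 1]
acc = adjacent_accuracy(pred, human)
print(acc)  # 5 of 6 predictions within 1 point
```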

  18. [Fast discrimination of edible vegetable oil based on Raman spectroscopy].

    PubMed

    Zhou, Xiu-Jun; Dai, Lian-Kui; Li, Sheng

    2012-07-01

    A novel method to fast discriminate edible vegetable oils by Raman spectroscopy is presented. The training set is composed of different edible vegetable oils with known classes. Based on their original Raman spectra, baseline correction and normalization were applied to obtain standard spectra. Two characteristic peaks describing the unsaturated degree of vegetable oil were selected as feature vectors; then the centers of all classes were calculated. For an edible vegetable oil with unknown class, the same pretreatment and feature extraction methods were used. The Euclidian distances between the feature vector of the unknown sample and the center of each class were calculated, and the class of the unknown sample was finally determined by the minimum distance. For 43 edible vegetable oil samples from seven different classes, experimental results show that the clustering effect of each class was more obvious and the class distance was much larger with the new feature extraction method compared with PCA. The above classification model can be applied to discriminate unknown edible vegetable oils rapidly and accurately.
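
    The minimum-distance classification described above can be sketched in a few lines: compute each class center from training feature vectors, then assign an unknown sample to the nearest center by Euclidean distance. The class names and peak values below are invented placeholders for the two unsaturation-related Raman features:

```python
import math

def class_centers(training):
    """training: {class_name: [feature_vector, ...]}; returns per-class means."""
    centers = {}
    for label, vectors in training.items():
        dim = len(vectors[0])
        centers[label] = [sum(v[i] for v in vectors) / len(vectors)
                          for i in range(dim)]
    return centers

def classify(sample, centers):
    """Assign the unknown sample to the class with the nearest center."""
    return min(centers, key=lambda c: math.dist(sample, centers[c]))

# Toy training set: two normalized Raman peak intensities per sample
# (values invented; stand-ins for the extracted unsaturation features).
train = {
    "olive":     [[0.80, 0.30], [0.78, 0.33]],
    "sunflower": [[0.40, 0.70], [0.43, 0.68]],
}
centers = class_centers(train)
label = classify([0.79, 0.31], centers)
print(label)  # nearest center is "olive"
```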

  19. Face recognition: a convolutional neural-network approach.

    PubMed

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  20. Class III dento-skeletal anomalies: rotational growth and treatment timing.

    PubMed

    Mosca, G; Grippaudo, C; Marchionni, P; Deli, R

    2006-03-01

    The interception of a Class III malocclusion requires a long-term growth prediction in order to estimate the subject's evolution from the prepubertal phase to adulthood. The aim of this retrospective longitudinal study was to highlight the differences in facial morphology in relation to the direction of mandibular growth in a sample of subjects with Class III skeletal anomalies divided on the basis of their Petrovic's auxological categories and rotational types. The study involved 20 patients (11 females and 9 males) who started therapy before reaching their pubertal peak and were followed up for a mean of 4.3 years (range: 3.9-5.5 years). Despite the small sample size, the definition of the rotational type of growth was the main diagnostic element for setting the correct individualised therapy. We therefore believe that the observation of a larger sample would reinforce the diagnostic-therapeutic validity of Petrovic's auxological categories, allow an evaluation of all rotational types, and improve the statistical significance of the results obtained.

  1. Deformation mechanism study of a hot rolled Zr-2.5Nb alloy by transmission electron microscopy. I. Dislocation microstructures in as-received state and at different plastic strains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Fei; Daymond, Mark R., E-mail: mark.daymond@queensu.ca; Yao, Zhongwen

    Thin foil dog bone samples prepared from a hot rolled Zr-2.5Nb alloy have been deformed in tension to different plastic strains. The development of slip traces during loading was observed in situ through SEM, revealing that deformation starts preferentially in certain sets of grains during the elastic-plastic transition region. TEM characterization showed that sub-grain boundaries formed during hot rolling consisted of screw 〈a〉 dislocations or screw 〈c〉 and 〈a〉 dislocations. Prismatic 〈a〉 dislocations with large screw or edge components have been identified from the sample with 0.5% plastic strain. Basal 〈a〉 and pyramidal 〈c + a〉 dislocations were found in the sample that had been deformed to 1.5% plastic strain, implying that these dislocations require larger stresses to be activated.

  2. RNA sequencing of transformed lymphoblastoid cells from siblings discordant for autism spectrum disorders reveals transcriptomic and functional alterations: Evidence for sex-specific effects.

    PubMed

    Tylee, Daniel S; Espinoza, Alfred J; Hess, Jonathan L; Tahir, Muhammad A; McCoy, Sarah Y; Rim, Joshua K; Dhimal, Totadri; Cohen, Ori S; Glatt, Stephen J

    2017-03-01

    Genome-wide expression studies of samples derived from individuals with autism spectrum disorder (ASD) and their unaffected siblings have been widely used to shed light on transcriptomic differences associated with this condition. Females have historically been under-represented in ASD genomic studies. Emerging evidence from studies of structural genetic variants and peripheral biomarkers suggest that sex-differences may exist in the biological correlates of ASD. Relatively few studies have explicitly examined whether sex-differences exist in the transcriptomic signature of ASD. The present study quantified genome-wide expression values by performing RNA sequencing on transformed lymphoblastoid cell lines and identified transcripts differentially expressed between same-sex, proximal-aged sibling pairs. We found that performing separate analyses for each sex improved our ability to detect ASD-related transcriptomic differences; we observed a larger number of dysregulated genes within our smaller set of female samples (n = 12 sibling pairs), as compared with the set of male samples (n = 24 sibling pairs), with small, but statistically significant overlap between the sexes. Permutation-based gene-set analyses and weighted gene co-expression network analyses also supported the idea that the transcriptomic signature of ASD may differ between males and females. We discuss our findings in the context of the relevant literature, underscoring the need for future ASD studies to explicitly account for differences between the sexes. Autism Res 2017, 10: 439-455. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  3. Viewpoints: Interactive Exploration of Large Multivariate Earth and Space Science Data Sets

    NASA Astrophysics Data System (ADS)

    Levit, C.; Gazis, P. R.

    2006-05-01

    Analysis and visualization of extremely large and complex data sets may be one of the most significant challenges facing earth and space science investigators in the forthcoming decades. While advances in hardware speed and storage technology have roughly kept up with (indeed, have driven) increases in database size, the same cannot be said of our ability to manage the complexity of these data. Current missions, instruments, and simulations produce so much data of such high dimensionality that they outstrip the capabilities of traditional visualization and analysis software. This problem can only be expected to get worse as data volumes increase by orders of magnitude in future missions and in ever-larger supercomputer simulations. For large multivariate data (more than 10^5 samples or records with more than 5 variables per sample) the interactive graphics response of most existing statistical analysis, machine learning, exploratory data analysis, and/or visualization tools such as Torch, MLC++, Matlab, S++/R, and IDL stutters, stalls, or stops working altogether. Fortunately, the graphics processing units (GPUs) built in to all professional desktop and laptop computers currently on the market are capable of transforming, filtering, and rendering hundreds of millions of points per second. We present a prototype open-source cross-platform application which leverages much of the power latent in the GPU to enable smooth interactive exploration and analysis of large high-dimensional data using a variety of classical and recent techniques. The targeted application is the interactive analysis of large, complex, multivariate data sets, with dimensionalities that may surpass 100 and sample sizes that may exceed 10^6-10^8.

  4. Psychometrics of a Brief Measure of Anxiety to Detect Severity and Impairment: The Overall Anxiety Severity and Impairment Scale (OASIS)

    PubMed Central

    Norman, Sonya B.; Campbell-Sills, Laura; Hitchcock, Carla A.; Sullivan, Sarah; Rochlin, Alexis; Wilkins, Kendall C.; Stein, Murray B.

    2010-01-01

    Brief measures of anxiety-related severity and impairment that can be used across anxiety disorders and with subsyndromal anxiety are lacking. The Overall Anxiety Severity and Impairment Scale (OASIS) has shown strong psychometric properties with college students and primary care patients. This study examines sensitivity, specificity, and efficiency of an abbreviated version of the OASIS that takes only 2–3 minutes to complete using a non-clinical (college student) sample. 48 participants completed the OASIS and SCID for anxiety disorders, 21 had a diagnosis of ≥1 anxiety disorder, and 4 additional participants had a subthreshold diagnosis. A cut-score of 8 best discriminated those with anxiety disorders from those without, successfully classifying 78% of the sample with 69% sensitivity and 74% specificity. Results from a larger sample (n=171) showed a single factor structure and excellent convergent and divergent validity. The availability of cut-scores for a non-clinical sample furthers the utility of this measure for settings where screening or brief assessment of anxiety is needed. PMID:20609450
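    The cut-score evaluation reported here (sensitivity, specificity, and overall classification rate at a given threshold) reduces to simple counts over the two diagnostic groups. A minimal sketch with made-up scores, not the study's data:

```python
import numpy as np

# Hypothetical OASIS totals for illustration only (not the study's data).
anx_scores = np.array([9, 12, 8, 14, 7, 10, 11, 6, 13, 8])  # anxiety-disorder group
ctl_scores = np.array([3, 5, 7, 2, 9, 4, 6, 1, 8, 5])       # no-diagnosis group

def classify_at_cut(scores_pos, scores_neg, cut):
    tp = (scores_pos >= cut).sum()  # cases correctly flagged at or above the cut
    fn = (scores_pos < cut).sum()   # cases missed
    tn = (scores_neg < cut).sum()   # non-cases correctly cleared
    fp = (scores_neg >= cut).sum()  # non-cases falsely flagged
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (len(scores_pos) + len(scores_neg))
    return sens, spec, acc

sens, spec, acc = classify_at_cut(anx_scores, ctl_scores, cut=8)
```

    Sweeping `cut` over the observed score range and picking the value that maximizes correct classification is how a "best discriminating" cut-score of the kind reported is typically selected.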

  5. ProteinSeq: High-Performance Proteomic Analyses by Proximity Ligation and Next Generation Sequencing

    PubMed Central

    Vänelid, Johan; Siegbahn, Agneta; Ericsson, Olle; Fredriksson, Simon; Bäcklin, Christofer; Gut, Marta; Heath, Simon; Gut, Ivo Glynne; Wallentin, Lars; Gustafsson, Mats G.; Kamali-Moghaddam, Masood; Landegren, Ulf

    2011-01-01

    Despite intense interest, methods that provide enhanced sensitivity and specificity in parallel measurements of candidate protein biomarkers in numerous samples have been lacking. We present herein a multiplex proximity ligation assay with readout via real-time PCR or DNA sequencing (ProteinSeq). We demonstrate improved sensitivity over conventional sandwich assays for simultaneous analysis of sets of 35 proteins in 5 µl of blood plasma. Importantly, we observe a minimal tendency to increased background with multiplexing, compared to a sandwich assay, suggesting that higher levels of multiplexing are possible. We used ProteinSeq to analyze proteins in plasma samples from cardiovascular disease (CVD) patient cohorts and matched controls. Three proteins, namely P-selectin, Cystatin-B and Kallikrein-6, were identified as putative diagnostic biomarkers for CVD. The latter two have not been previously reported in the literature and their potential roles must be validated in larger patient cohorts. We conclude that ProteinSeq is promising for screening large numbers of proteins and samples while the technology can provide a much-needed platform for validation of diagnostic markers in biobank samples and in clinical use. PMID:21980495

  6. Convolutional neural networks based on augmented training samples for synthetic aperture radar target recognition

    NASA Astrophysics Data System (ADS)

    Yan, Yue

    2018-03-01

    A synthetic aperture radar (SAR) automatic target recognition (ATR) method based on the convolutional neural networks (CNN) trained by augmented training samples is proposed. To enhance the robustness of CNN to various extended operating conditions (EOCs), the original training images are used to generate the noisy samples at different signal-to-noise ratios (SNRs), multiresolution representations, and partially occluded images. Then, the generated images together with the original ones are used to train a designed CNN for target recognition. The augmented training samples correspondingly improve the robustness of the trained CNN to the covered EOCs, i.e., the noise corruption, resolution variance, and partial occlusion. Moreover, the significantly larger training set effectively enhances the representation capability for other conditions, e.g., the standard operating condition (SOC), as well as the stability of the network. Therefore, better performance can be achieved by the proposed method for SAR ATR. For evaluation, extensive experiments are conducted on the Moving and Stationary Target Acquisition and Recognition dataset under SOC and several typical EOCs.
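    The augmentation scheme described (noise at target SNRs plus partial occlusion) can be sketched as follows; the chip size, SNR values, occlusion fraction, and random stand-in for a SAR target chip are our own assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_noise_at_snr(img, snr_db):
    """Add Gaussian noise scaled so the result has the requested SNR (in dB)."""
    sig_power = np.mean(img ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return img + rng.normal(0.0, np.sqrt(noise_power), img.shape)

def occlude(img, frac=0.25):
    """Zero out a random square patch covering roughly `frac` of the image."""
    h, w = img.shape
    ph, pw = int(h * np.sqrt(frac)), int(w * np.sqrt(frac))
    y, x = rng.integers(0, h - ph), rng.integers(0, w - pw)
    out = img.copy()
    out[y:y + ph, x:x + pw] = 0.0
    return out

chip = rng.random((64, 64))  # stand-in for a SAR target chip
# Original plus noisy copies at several SNRs plus one occluded copy.
augmented = [chip] + [add_noise_at_snr(chip, s) for s in (10, 5, 0)] + [occlude(chip)]
```

    Training on the union of original and generated chips is what gives the classifier exposure to each covered EOC; a multiresolution variant would be added the same way, e.g. by low-pass filtering and resampling each chip.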

  7. Confirmatory factor analysis applied to the Force Concept Inventory

    NASA Astrophysics Data System (ADS)

    Eaton, Philip; Willoughby, Shannon D.

    2018-06-01

    In 1995, Huffman and Heller used exploratory factor analysis to draw into question the factors of the Force Concept Inventory (FCI). Since then several papers have been published examining the factors of the FCI on larger sets of student responses, and understandable factors were extracted as a result. However, none of these proposed factor models had been verified against independent sets of data to confirm that they are not unique to their original samples. This paper seeks to confirm the factor models proposed by Scott et al. in 2012, and Hestenes et al. in 1992, as well as another expert model proposed within this study, through the use of confirmatory factor analysis (CFA) and a sample of 20 822 postinstruction student responses to the FCI. Upon application of CFA using the full sample, all three models were found to fit the data with acceptable global fit statistics. However, when CFA was performed using these models on smaller sample sizes the models proposed by Scott et al. and Eaton and Willoughby were found to be far more stable than the model proposed by Hestenes et al. The goodness of fit of these models to the data suggests that the FCI can be scored on factors that are not unique to a single class. These scores could then be used to comment on how instruction methods affect the performance of students along a single factor, and more in-depth analyses of curriculum changes may be possible as a result.

  8. Allometry and Ecology of the Bilaterian Gut Microbiome

    PubMed Central

    Sherrill-Mix, Scott; McCormick, Kevin; Lauder, Abigail; Bailey, Aubrey; Zimmerman, Laurie; Li, Yingying; Django, Jean-Bosco N.; Bertolani, Paco; Colin, Christelle; Hart, John A.; Hart, Terese B.; Georgiev, Alexander V.; Sanz, Crickette M.; Morgan, David B.; Atencia, Rebeca; Cox, Debby; Muller, Martin N.; Sommer, Volker; Piel, Alexander K.; Stewart, Fiona A.; Speede, Sheri; Roman, Joe; Wu, Gary; Taylor, Josh; Bohm, Rudolf; Rose, Heather M.; Carlson, John; Mjungu, Deus; Schmidt, Paul; Gaughan, Celeste; Bushman, Joyslin I.; Schmidt, Ella; Bittinger, Kyle; Collman, Ronald G.; Hahn, Beatrice H.

    2018-01-01

    Classical ecology provides principles for construction and function of biological communities, but to what extent these apply to the animal-associated microbiota is just beginning to be assessed. Here, we investigated the influence of several well-known ecological principles on animal-associated microbiota by characterizing gut microbial specimens from bilaterally symmetrical animals (Bilateria) ranging from flies to whales. A rigorously vetted sample set containing 265 specimens from 64 species was assembled. Bacterial lineages were characterized by 16S rRNA gene sequencing. Previously published samples were also compared, allowing analysis of over 1,098 samples in total. A restricted number of bacterial phyla was found to account for the great majority of gut colonists. Gut microbial composition was associated with host phylogeny and diet. We identified numerous gut bacterial 16S rRNA gene sequences that diverged deeply from previously studied taxa, identifying opportunities to discover new bacterial types. The number of bacterial lineages per gut sample was positively associated with animal mass, paralleling known species-area relationships from island biogeography and implicating body size as a determinant of community stability and niche complexity. Samples from larger animals harbored greater numbers of anaerobic communities, specifying a mechanism for generating more-complex microbial environments. Predictions for species/abundance relationships from models of neutral colonization did not match the data set, pointing to alternative mechanisms such as selection of specific colonists by environmental niche. Taken together, the data suggest that niche complexity increases with gut size and that niche selection forces dominate gut community construction. PMID:29588401

  9. Peromyscus ranges at high and low population densities

    USGS Publications Warehouse

    Stickel, L.F.

    1960-01-01

    Live-trapping studies at the Patuxent Wildlife Research Center, Maryland, showed that the ranges of wood mice were larger when the population density was lower and smaller when the population density was higher. When the population density was about 1.3 male mice per acre in June 1954, the average distance recorded between traps after four or more captures was 258 feet. When the population density was about 4.1 male mice per acre in June 1957, the average distance was 119 feet. Differences were statistically significant. Females were so scarce at the low density that comparisons could not be made for them. Examples from the literature also show that home range of a species may vary with population density. Other examples show that the range may vary with habitat, breeding condition and food supply. These variations in range size reduce the reliability of censuses in which relative methods are used: lines of traps sample the population of a larger area when ranges are large than they do when ranges are small. Direct comparisons therefore will err in some degree. Error may be introduced also when line-trap data are transformed to per acre figures on the basis of home-range estimates made by area-trapping at another place or time. Variation in range size also can make it necessary to change area-trapping plans, for larger quadrats are needed when ranges are larger. It may be necessary to set traps closer together when ranges are small than when ranges are large.

  10. Optimal tumor sampling for immunostaining of biomarkers in breast carcinoma

    PubMed Central

    2011-01-01

    Introduction Biomarkers, such as Estrogen Receptor, are used to determine therapy and prognosis in breast carcinoma. Immunostaining assays of biomarker expression have a high rate of inaccuracy; for example, estimates are as high as 20% for Estrogen Receptor. Biomarkers have been shown to be heterogeneously expressed in breast tumors and this heterogeneity may contribute to the inaccuracy of immunostaining assays. Currently, no evidence-based standards exist for the amount of tumor that must be sampled in order to correct for biomarker heterogeneity. The aim of this study was to determine the optimal number of 20X fields that are necessary to estimate a representative measurement of expression in a whole tissue section for selected biomarkers: ER, HER-2, AKT, ERK, S6K1, GAPDH, Cytokeratin, and MAP-Tau. Methods Two collections of whole tissue sections of breast carcinoma were immunostained for biomarkers. Expression was quantified using the Automated Quantitative Analysis (AQUA) method of quantitative immunofluorescence. Simulated sampling of various numbers of fields (ranging from one to thirty-five) was performed for each marker. The optimal number was selected for each marker via resampling techniques and minimization of prediction error over an independent test set. Results The optimal number of 20X fields varied by biomarker, ranging from three to fourteen fields. More heterogeneous markers, such as MAP-Tau protein, required a larger sample of 20X fields to produce representative measurement. Conclusions The optimal number of 20X fields that must be sampled to produce a representative measurement of biomarker expression varies by marker, with more heterogeneous markers requiring a larger number. The clinical implication of these findings is that breast biopsies consisting of a small number of fields may be inadequate to represent whole tumor biomarker expression for many markers. Additionally, for biomarkers newly introduced into clinical use, especially if therapeutic response is dictated by level of expression, the optimal size of tissue sample must be determined on a marker-by-marker basis. PMID:21592345
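    The simulated field sampling described above can be sketched with a simple resampling loop; the per-field scores, tolerance, and stopping rule below are illustrative assumptions, not the study's AQUA data or its error criterion:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tumor section: per-field scores for two markers.
# The "heterogeneous" marker has much larger field-to-field spread.
n_fields = 200
homogeneous = rng.normal(100, 5, n_fields)
heterogeneous = rng.normal(100, 30, n_fields)

def fields_needed(field_scores, tol=0.05, n_sim=2000, max_k=35):
    """Smallest number of sampled fields whose mean lies within `tol`
    (relative error) of the whole-section mean in 95% of simulations."""
    truth = field_scores.mean()
    for k in range(1, max_k + 1):
        means = np.array([rng.choice(field_scores, k, replace=False).mean()
                          for _ in range(n_sim)])
        if np.quantile(np.abs(means - truth) / truth, 0.95) <= tol:
            return k
    return max_k  # not reached within the simulated budget

k_hom = fields_needed(homogeneous)
k_het = fields_needed(heterogeneous)
```

    The more heterogeneous marker requires more fields before the sampled mean stabilizes, which mirrors the marker-by-marker variation in optimal field counts reported above.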

  11. Liver Full Reference Set Application: David Lubman - Univ of Michigan (2011) — EDRN Public Portal

    Cancer.gov

    In this work we will perform the next step in the biomarker development and validation. This step will be the Phase 2 validation of glycoproteins that have passed Phase 1 blinded validation using ELISA kits based on target glycoproteins selected based on our previous work. This will be done in a large Phase 2 sample set obtained in a multicenter study funded by the EDRN. The assays will be performed in our research lab located in the Center for Cancer Proteomics in the University of Michigan Medical Center. This study will include patients in whom serum was stored for future validation and includes samples from early HCC (n = 158), advanced cases (n = 214) and cirrhotic controls (n = 417). These samples will be supplied by the EDRN (per Dr. Jo Ann Rinaudo) and will be analyzed in a blinded fashion by Dr. Feng from the Fred Hutchinson Cancer Center. This phase 2 study was designed to have above 90% power at one-sided 5% type-I error for comparing the joint sensitivity and specificity for differentiating early stage HCC from cirrhotic patients between AFP and a new marker. Sample sizes of 200 for early stage HCC and 400 for cirrhotics were required to achieve the stated power (14). We will select our candidates for this larger phase validation set based on the results of previous work. These will include HGF and CD14, and the results of these assays will be used to evaluate the performance of each of these markers and combinations of HGF and CD14 and AFP and HGF. It is expected that each assay will be repeated three times for each marker and will also be performed for AFP as the standard for comparison. 250 µL of each sample is requested for analysis.

  12. An investigation to improve the Menhaden fishery prediction and detection model through the application of ERTS-A data

    NASA Technical Reports Server (NTRS)

    Maughan, P. M. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Linear regression of secchi disc visibility against number of sets yielded significant results in a number of instances. The variability seen in the slope of the regression lines is due to the nonuniformity of sample size. The longer the period sampled, the larger the total number of attempts. Further, there is no reason to expect either the influence of transparency or of other variables to remain constant throughout the season. However, the fact that the data for the entire season, variable as they are, were significant at the 5% level suggests their potential utility for predictive modeling. Thus, this regression equation will be considered representative and will be utilized for the first numerical model. Secchi disc visibility was also regressed against number of sets for the three day period September 27-September 29, 1972 to determine if surface truth data supported the strong relationship between ERTS-1 identified turbidity and fishing effort previously discussed. A strongly negative correlation was found. These relationships lend additional credence to the hypothesis that ERTS imagery, when utilized as a source of visibility (turbidity) data, may be useful as a predictive tool.
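    The visibility-versus-sets regression can be sketched with ordinary least squares on hypothetical daily data (the numbers below are invented for illustration; the actual Menhaden-fishery data are not reproduced here):

```python
import numpy as np

# Hypothetical daily observations: secchi-disc visibility (m) and the
# number of purse-seine sets attempted that day.
visibility = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
n_sets     = np.array([9,   8,   8,   6,   5,   4,   3,   2  ])

# Ordinary least squares fit: n_sets = a + b * visibility.
b, a = np.polyfit(visibility, n_sets, 1)

# Pearson correlation, the quantity whose significance the abstract discusses.
r = np.corrcoef(visibility, n_sets)[0, 1]
```

    A negative slope and strongly negative correlation of this kind correspond to the reported pattern: more turbid (lower-visibility) water coinciding with greater fishing effort.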

  13. Concise Review: Progress and Challenges in Using Human Stem Cells for Biological and Therapeutics Discovery: Neuropsychiatric Disorders.

    PubMed

    Panchision, David M

    2016-03-01

    In facing the daunting challenge of using human embryonic and induced pluripotent stem cells to study complex neural circuit disorders such as schizophrenia, mood and anxiety disorders, and autism spectrum disorders, a 2012 National Institute of Mental Health workshop produced a set of recommendations to advance basic research and engage industry in cell-based studies of neuropsychiatric disorders. This review describes progress in meeting these recommendations, including the development of novel tools, strides in recapitulating relevant cell and tissue types, insights into the genetic basis of these disorders that permit integration of risk-associated gene regulatory networks with cell/circuit phenotypes, and promising findings of patient-control differences using cell-based assays. However, numerous challenges are still being addressed, requiring further technological development, approaches to resolve disease heterogeneity, and collaborative structures for investigators of different disciplines. Additionally, since the data obtained so far come from small samples, replication in larger sample sets is needed. A number of individual success stories point to a path forward in developing assays to translate discovery science to therapeutics development. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  14. Three Way Comparison between Two OMI/Aura and One POLDER/PARASOL Cloud Pressure Products

    NASA Technical Reports Server (NTRS)

    Sneep, M.; deHaan, J. F.; Stammes, P.; Vanbaunce, C.; Joiner, J.; Vasilkov, A. P.; Levelt, P. F.

    2007-01-01

    The cloud pressures determined by three different algorithms, operating on reflectances measured by two space-borne instruments in the "A" train, are compared with each other. The retrieval algorithms are based on absorption in the oxygen A-band near 760 nm, collision-induced absorption by oxygen near 477 nm, and the filling in of Fraunhofer lines by rotational Raman scattering. The first algorithm operates on data collected by the POLDER instrument on board PARASOL, while the latter two operate on data from the OMI instrument on board Aura. The satellites sample the same air mass within about 15 minutes. Using one month of data, the cloud pressures from the three algorithms are found to show a similar behavior, with correlation coefficients larger than 0.85 between the data sets for thick clouds. The average differences in the cloud pressure are also small, between 2 and 45 hPa, for the whole data set. For optically thin to medium thick clouds, the cloud pressure distribution found by POLDER is very similar to that found by OMI using the O2-O2 absorption. Somewhat larger differences are found for very thick clouds, and we hypothesise that the strong absorption in the oxygen A-band causes the POLDER instrument to retrieve lower pressures for those scenes.

  15. Public attitudes toward larger cigarette pack warnings: Results from a nationally representative U.S. sample

    PubMed Central

    2017-01-01

    A large body of evidence supports the effectiveness of larger health warnings on cigarette packages. However, there is limited research examining attitudes toward such warning labels, which has potential implications for implementation of larger warning labels. The purpose of the current study was to examine attitudes toward larger warning sizes on cigarette packages and examine variables associated with more favorable attitudes. In a nationally representative survey of U.S. adults (N = 5,014), participants were randomized to different warning size conditions, assessing attitude toward “a health warning that covered (25, 50, 75) % of a cigarette pack.” SAS logistic regression survey procedures were used to account for the complex survey design and sampling weights. Across experimental groups, nearly three-quarters (72%) of adults had attitudes supportive of larger warning labels on cigarette packs. Among the full sample and smokers only (N = 1,511), most adults had favorable attitudes toward labels that covered 25% (78.2% and 75.2%, respectively), 50% (70% and 58.4%, respectively), and 75% (67.9% and 61%, respectively) of a cigarette pack. Young adults, females, racial/ethnic minorities, and non-smokers were more likely to have favorable attitudes toward larger warning sizes. Among smokers only, females and those with higher quit intentions held more favorable attitudes toward larger warning sizes. Widespread support exists for larger warning labels on cigarette packages among U.S. adults, including among smokers. Our findings support the implementation of larger health warnings on cigarette packs in the U.S. as required by the 2009 Tobacco Control Act. PMID:28253257

  16. Numerical judgments by chimpanzees (Pan troglodytes) in a token economy.

    PubMed

    Beran, Michael J; Evans, Theodore A; Hoyle, Daniel

    2011-04-01

    We presented four chimpanzees with a series of tasks that involved comparing two token sets or comparing a token set to a quantity of food. Selected tokens could be exchanged for food items on a one-to-one basis. Chimpanzees successfully selected the larger numerical set for comparisons of 1 to 5 items when both sets were visible and when sets were presented through one-by-one addition of tokens into two opaque containers. Two of four chimpanzees used the number of tokens and food items to guide responding in all conditions, rather than relying on token color, size, total amount, or duration of set presentation. These results demonstrate that judgments of simultaneous and sequential sets of stimuli are made by some chimpanzees on the basis of the numerousness of sets rather than other non-numerical dimensions. The tokens were treated as equivalent to food items on the basis of their numerousness, and the chimpanzees maximized reward by choosing the larger number of items in all situations.

  17. Auditory ossicles from southwest Asian Mousterian sites.

    PubMed

    Quam, Rolf; Rak, Yoel

    2008-03-01

    The present study describes and analyzes new Neandertal and early modern human auditory ossicles from the sites of Qafzeh and Amud in southwest Asia. Some methodological issues in the measurement of these bones are considered, and a set of standardized measurement protocols is proposed. Evidence of erosive pathological processes, most likely attributed to otitis media, is present on the ossicles of Qafzeh 12 and Amud 7 but none can be detected in the other Qafzeh specimens. Qafzeh 12 and 15 extend the known range of variation in the fossil H. sapiens sample in some metric variables, but morphologically, the new specimens do not differ in any meaningful way from living humans. In most metric dimensions, the Amud 7 incus falls within our modern human range of variation, but the more closed angle between the short and long processes stands out. Morphologically, all the Neandertal incudes described to date show a very straight long process. Several tentative hypotheses can be suggested regarding the evolution of the ear ossicles in the genus Homo. First, the degree of metric and morphological variation seems greater among the fossil H. sapiens sample than in Neandertals. Second, there is a real difference in the size of the malleus between Neandertals and fossil H. sapiens, with Neandertals showing larger values in most dimensions. Third, the wider malleus head implies a larger articular facet in the Neandertals, and this also appears to be reflected in the larger (taller) incus articular facet. Fourth, there is limited evidence for a potential temporal trend toward reduction of the long process within the Neandertal lineage. Fifth, a combination of features in the malleus, incus, and stapes may indicate a slightly different relative positioning of either the tip of the incus long process or stapes footplate within the tympanic cavity in the Neandertal lineage.

  18. Characterization of Factors Affecting Nanoparticle Tracking Analysis Results With Synthetic and Protein Nanoparticles.

    PubMed

    Krueger, Aaron B; Carnell, Pauline; Carpenter, John F

    2016-04-01

    In many manufacturing and research areas, the ability to accurately monitor and characterize nanoparticles is becoming increasingly important. Nanoparticle tracking analysis is rapidly becoming a standard method for this characterization, yet several key factors in data acquisition and analysis may affect results. The technique is prone to user bias because of the large number of adjustable parameters, analyzes only a limited sample volume, and individual sample characteristics such as polydispersity or complex protein solutions may affect analysis results. This study systematically addressed these key issues. The integrated syringe pump was used to increase the sample volume analyzed. It was observed that measurements recorded under flow caused a reduction in total particle counts for both polystyrene and protein particles compared to those collected under static conditions. In addition, data for polydisperse samples tended to lose peak resolution at higher flow rates, masking distinct particle populations. Furthermore, in a bimodal particle population, a bias was seen toward the larger species within the sample. The impacts of filtration on an agitated intravenous immunoglobulin sample and operating parameters including "MINexps" and "blur" were investigated to optimize the method. Taken together, this study provides recommendations on instrument settings and sample preparations to properly characterize complex samples. Copyright © 2016. Published by Elsevier Inc.

  19. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-06-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real-time. However, to use such sensors effectively, vehicles must exit the traffic stream and slow down to match the sensors' current capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experiences of operation engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition of high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of the vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors.
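    A back-of-the-envelope version of speed-dependent sample-rate selection: if an axle's strain pulse spans roughly the sensor's effective response length L, then capturing at least n points across the pulse at vehicle speed v requires f ≥ n·v/L. This is our own simplification with invented constants, not the paper's derivation:

```python
def required_rate_hz(speed_kmh, response_len_m=0.6, n_samples=30):
    """Minimum sampling rate so that an axle pulse of spatial extent
    `response_len_m` (meters, assumed) is covered by `n_samples` points
    at the given vehicle speed."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return n_samples * v / response_len_m

rate_city = required_rate_hz(50)   # roughly city-street speed
rate_hwy  = required_rate_hz(110)  # roughly mainline highway speed
```

    The required rate grows linearly with speed, which is why a fixed very-high frequency either wastes storage at low speeds or a fixed low frequency under-samples at mainline speeds.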

  20. Integrative missing value estimation for microarray data.

    PubMed

    Hu, Jianjun; Li, Haifeng; Waterman, Michael S; Zhou, Xianghong Jasmine

    2006-10-12

    Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in the Stanford Microarray Database contain fewer than eight samples. We present the integrative Missing Value Estimation method (iMISS), which incorporates information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking reference datasets into consideration. To determine whether the given reference datasets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Square (LLS) imputation algorithm by up to 15% in our benchmark tests. We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over state-of-the-art missing value estimation approaches such as LLS and are especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.
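The neighbor-gene idea behind such imputation can be illustrated with a bare-bones, single-dataset version: fill each missing entry from the genes most correlated with the target gene over its observed samples. This is a simplified stand-in (closer to plain KNN/LLS-style imputation); iMISS additionally selects neighbors consistently across multiple reference datasets.

```python
import numpy as np

def neighbor_impute(X, k=2):
    """Fill NaNs in a gene-by-sample matrix using the k complete genes most
    correlated (in absolute value) with the target gene over its observed
    samples. Missing entries get the mean of those neighbors' values."""
    filled = np.array(X, dtype=float)
    complete = np.where(~np.isnan(filled).any(axis=1))[0]
    for i in np.where(np.isnan(filled).any(axis=1))[0]:
        obs = ~np.isnan(filled[i])
        cors = [(abs(np.corrcoef(filled[i, obs], filled[j, obs])[0, 1]), j)
                for j in complete]
        top = [j for _, j in sorted(cors, reverse=True)[:k]]
        filled[i, ~obs] = filled[np.ix_(top, np.where(~obs)[0])].mean(axis=0)
    return filled
```

A gene missing its fourth sample, with two perfectly correlated complete neighbors, is imputed with the neighbors' mean at that sample.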

  1. A dysbiosis index to assess microbial changes in fecal samples of dogs with chronic inflammatory enteropathy.

    PubMed

    AlShawaqfeh, M K; Wajid, B; Minamoto, Y; Markel, M; Lidbury, J A; Steiner, J M; Serpedin, E; Suchodolski, J S

    2017-11-01

    Recent studies have identified various bacterial groups that are altered in dogs with chronic inflammatory enteropathies (CE) compared to healthy dogs. The study aim was to use quantitative PCR (qPCR) assays to confirm these findings in a larger number of dogs, and to build a mathematical algorithm to report these microbiota changes as a dysbiosis index (DI). Fecal DNA from 95 healthy dogs and 106 dogs with histologically confirmed CE was analyzed. Samples were grouped into a training set and a validation set. Various mathematical models and combinations of qPCR assays were evaluated to find the model with the highest discriminatory power. The final qPCR panel consisted of eight bacterial groups: total bacteria, Faecalibacterium, Turicibacter, Escherichia coli, Streptococcus, Blautia, Fusobacterium and Clostridium hiranonis. The qPCR-based DI was built based on the nearest centroid classifier, and reports the degree of dysbiosis in a single numerical value that measures the closeness, in the l2-norm, of the test sample to the mean prototype of each class. A negative DI indicates normobiosis, whereas a positive DI indicates dysbiosis. For a threshold of 0, the DI based on the combined dataset achieved 74% sensitivity and 95% specificity to separate healthy and CE dogs. © FEMS 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
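A nearest-centroid index of this kind reduces to two Euclidean distances and a sign convention. The sketch below shows the structure under stated assumptions: the centroids are illustrative placeholders, not the published reference values for the eight bacterial groups.

```python
import numpy as np

def dysbiosis_index(sample, healthy_centroid, ce_centroid):
    """Signed nearest-centroid score over the eight qPCR abundances:
    negative -> closer to the healthy centroid (normobiosis),
    positive -> closer to the CE centroid (dysbiosis)."""
    d_healthy = np.linalg.norm(np.asarray(sample, float) - healthy_centroid)
    d_ce = np.linalg.norm(np.asarray(sample, float) - ce_centroid)
    return d_healthy - d_ce

# Illustrative centroids only (8 bacterial groups, e.g. log abundances):
healthy = np.zeros(8)
ce = np.ones(8)
```

Thresholding at 0 then yields the binary healthy/CE call whose sensitivity and specificity the abstract reports.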

  2. Examining the hemagglutinin subtype diversity among wild duck-origin influenza A viruses using ethanol-fixed cloacal swabs and a novel RT-PCR method.

    PubMed

    Wang, Ruixue; Soll, Lindsey; Dugan, Vivien; Runstadler, Jonathan; Happ, George; Slemons, Richard D; Taubenberger, Jeffery K

    2008-05-25

    This study presents an interconnected approach for circumventing two inherent limitations associated with studies defining the natural history of influenza A viruses in wild birds. The first limiting factor is the ability to maintain a cold chain from specimen collection to the laboratory when study sites are in more remote locations. The second limiting factor is the ability to identify all influenza A virus HA subtypes present in an original sample. We report a novel method for molecular subtyping of avian influenza A virus hemagglutinin genes using degenerate primers designed to amplify all known hemagglutinin subtypes. It was shown previously that templates larger than 200 bp were not consistently amplifiable from ethanol-fixed cloacal swabs. For this study, new primer sets were designed within these constraints. This method was used to perform subtyping RT-PCR on 191 influenza RNA-positive ethanol-fixed cloacal swabs obtained from 880 wild ducks in central Alaska in 2005. Seven different co-circulating hemagglutinin subtypes were identified in this study set, including H1, H3, H4, H5, H6, H8, and H12. In addition, 16% of original cloacal samples showed evidence of mixed infection, with samples yielding from two-to-five different hemagglutinin subtypes. This study further demonstrates the complex ecobiology of avian influenza A viruses in wild birds.

  3. Examining the hemagglutinin subtype diversity among wild duck-origin influenza A viruses using ethanol-fixed cloacal swabs and a novel RT-PCR method

    PubMed Central

    Wang, Ruixue; Soll, Lindsey; Dugan, Vivien; Runstadler, Jonathan; Happ, George; Slemons, Richard D.; Taubenberger, Jeffery K.

    2008-01-01

    This study presents an interconnected approach for circumventing two inherent limitations associated with studies defining the natural history of influenza A viruses in wild birds. The first limiting factor is the ability to maintain a cold chain from specimen collection to the laboratory when study sites are in more remote locations. The second limiting factor is the ability to identify all influenza A virus HA subtypes present in an original sample. We report a novel method for molecular subtyping of avian influenza A virus hemagglutinin genes using degenerate primers designed to amplify all known hemagglutinin subtypes. It was shown previously that templates larger than 200 bp were not consistently amplifiable from ethanol-fixed cloacal swabs. For this study, new primer sets were designed within these constraints. This method was used to perform subtyping RT-PCR on 191 influenza RNA-positive ethanol-fixed cloacal swabs obtained from 880 wild ducks in central Alaska in 2005. Seven different co-circulating hemagglutinin subtypes were identified in this study set, including H1, H3, H4, H5, H6, H8, and H12. In addition, 16% of original cloacal samples showed evidence of mixed infection, with samples yielding from two-to-five different hemagglutinin subtypes. This study further demonstrates the complex ecobiology of avian influenza A viruses in wild birds. PMID:18308356

  4. Rapid and Reliable Binding Affinity Prediction of Bromodomain Inhibitors: A Computational Study

    PubMed Central

    2016-01-01

    Binding free energies of bromodomain inhibitors are calculated with recently formulated approaches, namely ESMACS (enhanced sampling of molecular dynamics with approximation of continuum solvent) and TIES (thermodynamic integration with enhanced sampling). A set of compounds is provided by GlaxoSmithKline, which represents a range of chemical functionality and binding affinities. The predicted binding free energies exhibit a good Spearman correlation of 0.78 with the experimental data from the 3-trajectory ESMACS, and an excellent correlation of 0.92 from the TIES approach where applicable. Given access to suitable high end computing resources and a high degree of automation, we can compute individual binding affinities in a few hours with precisions no greater than 0.2 kcal/mol for TIES, and no larger than 0.34 and 1.71 kcal/mol for the 1- and 3-trajectory ESMACS approaches. PMID:28005370

  5. Surface buckling of black phosphorus: Determination, origin, and influence on electronic structure

    NASA Astrophysics Data System (ADS)

    Dai, Zhongwei; Jin, Wencan; Yu, Jie-Xiang; Grady, Maxwell; Sadowski, Jerzy T.; Kim, Young Duck; Hone, James; Dadap, Jerry I.; Zang, Jiadong; Osgood, Richard M.; Pohl, Karsten

    2017-12-01

    The surface structure of black phosphorus materials is determined using surface-sensitive dynamical microspot low energy electron diffraction (μLEED) analysis using a high spatial resolution low energy electron microscopy (LEEM) system. Samples of (i) crystalline cleaved black phosphorus (BP) at 300 K and (ii) exfoliated few-layer phosphorene (FLP) of about 10 nm thickness which were annealed at 573 K in vacuum were studied. In both samples, a significant surface buckling of 0.22 Å and 0.30 Å, respectively, is measured, which is one order of magnitude larger than previously reported. As direct evidence for large buckling, we observe a set of (for the flat surface forbidden) diffraction spots. Using first-principles calculations, we find that the presence of surface vacancies is responsible for the surface buckling in both BP and FLP, and is related to the intrinsic hole doping of phosphorene materials previously reported.

  6. Complex disease and phenotype mapping in the domestic dog

    PubMed Central

    Hayward, Jessica J.; Castelhano, Marta G.; Oliveira, Kyle C.; Corey, Elizabeth; Balkman, Cheryl; Baxter, Tara L.; Casal, Margret L.; Center, Sharon A.; Fang, Meiying; Garrison, Susan J.; Kalla, Sara E.; Korniliev, Pavel; Kotlikoff, Michael I.; Moise, N. S.; Shannon, Laura M.; Simpson, Kenneth W.; Sutter, Nathan B.; Todhunter, Rory J.; Boyko, Adam R.

    2016-01-01

    The domestic dog is becoming an increasingly valuable model species in medical genetics, showing particular promise to advance our understanding of cancer and orthopaedic disease. Here we undertake the largest canine genome-wide association study to date, with a panel of over 4,200 dogs genotyped at 180,000 markers, to accelerate mapping efforts. For complex diseases, we identify loci significantly associated with hip dysplasia, elbow dysplasia, idiopathic epilepsy, lymphoma, mast cell tumour and granulomatous colitis; for morphological traits, we report three novel quantitative trait loci that influence body size and one that influences fur length and shedding. Using simulation studies, we show that modestly larger sample sizes and denser marker sets will be sufficient to identify most moderate- to large-effect complex disease loci. This proposed design will enable efficient mapping of canine complex diseases, most of which have human homologues, using far fewer samples than required in human studies. PMID:26795439

  7. On the degrees of freedom of reduced-rank estimators in multivariate regression

    PubMed Central

    Mukherjee, A.; Chen, K.; Wang, N.; Zhu, J.

    2015-01-01

    Summary We study the effective degrees of freedom of a general class of reduced-rank estimators for multivariate regression in the framework of Stein's unbiased risk estimation. A finite-sample exact unbiased estimator is derived that admits a closed-form expression in terms of the thresholded singular values of the least-squares solution and hence is readily computable. The results continue to hold in the high-dimensional setting where both the predictor and the response dimensions may be larger than the sample size. The derived analytical form facilitates the investigation of theoretical properties and provides new insights into the empirical behaviour of the degrees of freedom. In particular, we examine the differences and connections between the proposed estimator and a commonly-used naive estimator. The use of the proposed estimator leads to efficient and accurate prediction risk estimation and model selection, as demonstrated by simulation studies and a data example. PMID:26702155

  8. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms

    PubMed Central

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442
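For a chain small enough to hold as a dense matrix, the total mixing time and a spectral bound of the kind compared in the abstract can both be computed directly. A minimal sketch (this is not marathon's API; marathon operates on state graphs of much larger chains):

```python
import numpy as np

def total_mixing_time(P, eps=0.25, t_max=10_000):
    """Smallest t with max_x ||P^t(x, .) - pi||_TV <= eps."""
    w, v = np.linalg.eig(P.T)                      # stationary distribution:
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])   # left eigenvector for 1
    pi = pi / pi.sum()
    Pt = np.eye(P.shape[0])
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        if 0.5 * np.abs(Pt - pi).sum(axis=1).max() <= eps:
            return t
    raise RuntimeError("did not mix within t_max steps")

P = np.array([[0.9, 0.1],        # lazy two-state chain
              [0.1, 0.9]])
t_mix = total_mixing_time(P)     # exact total mixing time
lam2 = sorted(np.abs(np.linalg.eigvals(P)))[-2]   # second-largest modulus
# classical spectral upper bound: log(1/(eps * pi_min)) / (1 - lam2)
spectral_upper = np.log(1 / (0.25 * 0.5)) / (1 - lam2)
```

The spectral bound overestimates the true mixing time here, but by a modest factor, consistent with the paper's observation that it tracks the total mixing time well.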

  9. Surface buckling of black phosphorus: Determination, origin, and influence on electronic structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Zhongwei; Jin, Wencan; Yu, Jie-Xiang

    The surface structure of black phosphorus materials is determined using surface-sensitive dynamical microspot low energy electron diffraction (μLEED) analysis using a high spatial resolution low energy electron microscopy (LEEM) system. Samples of (i) crystalline cleaved black phosphorus (BP) at 300 K and (ii) exfoliated few-layer phosphorene (FLP) of about 10 nm thickness which were annealed at 573 K in vacuum were studied. In both samples, a significant surface buckling of 0.22 Å and 0.30 Å, respectively, is measured, which is one order of magnitude larger than previously reported. As direct evidence for large buckling, we observe a set of (for the flat surface forbidden) diffraction spots. Using first-principles calculations, we find that the presence of surface vacancies is responsible for the surface buckling in both BP and FLP, and is related to the intrinsic hole doping of phosphorene materials previously reported.

  10. Surface buckling of black phosphorus: Determination, origin, and influence on electronic structure

    DOE PAGES

    Dai, Zhongwei; Jin, Wencan; Yu, Jie-Xiang; ...

    2017-12-29

    The surface structure of black phosphorus materials is determined using surface-sensitive dynamical microspot low energy electron diffraction (μLEED) analysis using a high spatial resolution low energy electron microscopy (LEEM) system. Samples of (i) crystalline cleaved black phosphorus (BP) at 300 K and (ii) exfoliated few-layer phosphorene (FLP) of about 10 nm thickness which were annealed at 573 K in vacuum were studied. In both samples, a significant surface buckling of 0.22 Å and 0.30 Å, respectively, is measured, which is one order of magnitude larger than previously reported. As direct evidence for large buckling, we observe a set of (for the flat surface forbidden) diffraction spots. Using first-principles calculations, we find that the presence of surface vacancies is responsible for the surface buckling in both BP and FLP, and is related to the intrinsic hole doping of phosphorene materials previously reported.

  11. Propensity for intimate partner abuse and workplace productivity: why employers should care.

    PubMed

    Rothman, Emily F; Corso, Phaedra S

    2008-09-01

    It has been demonstrated that intimate partner violence (IPV) victimization is costly to employers, but little is known about the economic consequences associated with employing perpetrators. This study investigated propensity for partner abuse as a predictor of missed work time and on-the-job decreases in productivity among a small sample of male employees at a state agency (N=61). Results suggest that greater propensity for abusiveness is positively associated with missing work and experiencing worse productivity on the job, controlling for level of education, income, marital status, age, and part-time versus full-time employment status. Additional research could clarify whether IPV perpetration is a predictor of decreased productivity among larger samples and a wider variety of workplace settings. Employers and IPV advocates should consider responding to potential IPV perpetrators through the workplace in addition to developing victim-oriented policies and prevention initiatives.

  12. Advantages and challenges in automated apatite fission track counting

    NASA Astrophysics Data System (ADS)

    Enkelmann, E.; Ehlers, T. A.

    2012-04-01

    Fission track thermochronometer data are often a core element of modern tectonic and denudation studies. Soon after the development of the fission track methods, interest emerged in automated counting procedures that could replace the time-consuming labor of counting fission tracks under the microscope. Automated track counting became feasible in recent years with increasing improvements in computer software and hardware. One such example, used in this study, is the commercial automated fission track counting procedure from Autoscan Systems Pty, which has been highlighted through several venues. We conducted experiments designed to reliably and consistently test the ability of this fully automated counting system to recognize fission tracks in apatite and a muscovite external detector. Fission tracks were analyzed in samples with a step-wise increase in sample complexity. The first set of experiments used a large (mm-size) slice of Durango apatite cut parallel to the prism plane. Second, samples with 80-200 μm large apatite grains of Fish Canyon Tuff were analyzed. This second sample set is characterized by complexities often found in apatites in different rock types. In addition to the automated counting procedure, the same samples were also analyzed using conventional counting procedures. We found for all samples that the fully automated fission track counting procedure using the Autoscan System yields a larger scatter in the measured fission track densities compared to conventional (manual) track counting. This scatter typically resulted from the false identification of tracks due to surface and mineralogical defects, regardless of the image filtering procedure used. Large differences between track densities analyzed with the automated counting persisted between different grains analyzed in one sample as well as between different samples. 
As a result of these differences a manual correction of the fully automated fission track counts is necessary for each individual surface area and grain counted. This manual correction procedure significantly increases (up to four times) the time required to analyze a sample with the automated counting procedure compared to the conventional approach.

  13. Social cognitive correlates of physical activity among persons with multiple sclerosis: Influence of depressive symptoms.

    PubMed

    Ensari, Ipek; Kinnett-Hopkins, Dominique; Motl, Robert W

    2017-10-01

    Physical inactivity and elevated depressive symptoms are both highly prevalent and correlated among persons with multiple sclerosis (MS). Variables from Social Cognitive Theory (SCT) might be differentially correlated with physical activity in persons with MS who have elevated depressive symptoms. This study investigated the influence of elevated depressive symptoms on correlates of physical activity based on SCT in persons with MS. Participants (mean age = 50.3 years, 87% female, 69% Caucasian) completed questionnaires on physical activity, depressive symptoms, self-efficacy, social support, outcome expectations, functional limitations, and goal setting. The questionnaires were delivered and returned through the U.S. Postal Service. The sample (N = 551) was divided into 2 subgroups (i.e., elevated vs non-elevated levels of depressive symptoms) for statistical analyses. Bivariate correlations and stepwise multiple regressions were conducted using SPSS. Self-efficacy (r = 0.16), functional limitations (r = 0.22) and goal-setting (r = 0.42) were significantly (p < 0.05) associated with physical activity among the elevated depressive sample. The regression analysis indicated that self-efficacy predicted physical activity in Step 1 (β = 0.16, p < 0.05), but was no longer significant when goal-setting (β = 0.06, p > 0.05) entered the model. All social cognitive variables were significantly associated with physical activity levels (r = 0.16-0.40, p < 0.001) among the non-elevated depressive sample. Self-efficacy predicted physical activity in Step 1 (β = 0.24, p < 0.001), but it was no longer significant once goal-setting, functional limitations, and self-evaluative outcome expectations entered the model. Based on SCT, self-efficacy and goal-setting represent possible targets of behavior interventions for increasing physical activity among persons with MS who have elevated depressive symptoms. 
There is a larger set of targets among those with MS who do not have elevated symptoms. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Normal fault earthquakes or graviquakes

    PubMed Central

    Doglioni, C.; Carminati, E.; Petricca, P.; Riguzzi, F.

    2015-01-01

    Earthquakes dissipate energy through elastic waves. Canonically, this is the elastic energy accumulated during the interseismic period. However, in crustal extensional settings, gravity is the main energy source for hangingwall fault collapse. The gravitational potential is about 100 times larger than the observed magnitude, far more than enough to explain the earthquake. Therefore, normal faults have a different mechanism of energy accumulation and dissipation (graviquakes) with respect to other tectonic settings (strike-slip and contractional), where elastic energy allows motion even against gravity. The bigger the involved volume, the larger the magnitude. The steeper the normal fault, the larger the vertical displacement and the larger the seismic energy released. Normal faults activate preferentially at about 60° but they can be shallower in low friction rocks. In low static friction rocks, the fault may partly creep, dissipating gravitational energy without releasing a great amount of seismic energy. The maximum volume involved by graviquakes is smaller than in the other tectonic settings, the activated fault being at most about three times the hypocentre depth, explaining their higher b-value and the lower magnitude of the largest recorded events. Having different phenomenology, graviquakes show peculiar precursors. PMID:26169163

  15. Three plasma metabolite signatures for diagnosing high altitude pulmonary edema

    NASA Astrophysics Data System (ADS)

    Guo, Li; Tan, Guangguo; Liu, Ping; Li, Huijie; Tang, Lulu; Huang, Lan; Ren, Qian

    2015-10-01

    High-altitude pulmonary edema (HAPE) is a potentially fatal condition, occurring at altitudes greater than 3,000 m and affecting rapidly ascending, non-acclimatized healthy individuals. However, the lack of biomarkers for this disease still constitutes a bottleneck in the clinical diagnosis. Here, ultra-high performance liquid chromatography coupled with Q-TOF mass spectrometry was applied to study plasma metabolite profiling from 57 HAPE and 57 control subjects. Fourteen differential plasma metabolites responsible for the discrimination between the two groups were identified from the discovery set (35 HAPE subjects and 35 healthy controls). Furthermore, 3 of the 14 metabolites (C8-ceramide, sphingosine and glutamine) were selected as candidate diagnostic biomarkers for HAPE using metabolic pathway impact analysis. The feasibility of using the combination of these three biomarkers for HAPE was evaluated, where the area under the receiver operating characteristic curve (AUC) was 0.981 and 0.942 in the discovery set and the validation set (22 HAPE subjects and 22 healthy controls), respectively. Taken together, these results suggested that this composite plasma metabolite signature may be used in HAPE diagnosis, especially after further investigation and verification with larger samples.
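AUC figures of this kind follow from the rank (Mann-Whitney) formulation of the ROC area, which needs no explicit curve construction. A self-contained sketch with invented scores; in practice the combined score for the three metabolites would come from a fitted model such as logistic regression:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """P(a randomly drawn positive outscores a random negative), ties 1/2.
    Equivalent to the area under the ROC curve."""
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

# Invented combined-biomarker scores for illustration only:
hape_scores = [2.1, 1.8, 2.5, 1.9]
control_scores = [1.0, 1.2, 1.9, 0.8]
```

An AUC of 1.0 means the score separates the groups perfectly; 0.5 means it is uninformative.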

  16. Animal-Assisted Interventions in the Classroom-A Systematic Review.

    PubMed

    Brelsford, Victoria L; Meints, Kerstin; Gee, Nancy R; Pfeffer, Karen

    2017-06-22

    The inclusion of animals in educational practice is becoming increasingly popular, but it is unclear how solid the evidence for this type of intervention is. The aim of this systematic review is to scrutinise the empirical research literature relating to animal-assisted interventions conducted in educational settings. The review included 25 papers; 21 from peer-reviewed journals and 4 obtained using grey literature databases. Most studies reported significant benefits of animal-assisted interventions in the school setting. Despite this, studies vary greatly in methods and design, in intervention types, measures, and sample sizes, and in the length of time exposed to an animal. Furthermore, a worrying lack of reference to risk assessment and animal welfare must be highlighted. Taken together, the results of this review show promising findings and emerging evidence suggestive of potential benefits related to animals in school settings. The review also indicates the need for a larger and more robust evidence base driven by thorough and strict protocols. The review further emphasises the need for safeguarding of all involved; welfare and safety are paramount.

  17. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    USGS Publications Warehouse

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (x = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (x = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (x = 224) for radiotracking data and 16-130 km2 (x = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. 
Investigators who use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve the accuracy and precision of home range estimates.
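The MCP estimator at the heart of these comparisons is simply the area of the convex hull of the location fixes, which makes its sample-size sensitivity easy to see: the hull, and hence the area, can only grow as locations are added. A minimal pure-Python sketch:

```python
def mcp_area(points):
    """Area of the 100% minimum convex polygon around 2-D locations:
    Andrew's monotone-chain convex hull, then the shoelace formula."""
    pts = sorted(set(map(tuple, points)))
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half_hull(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    hull = half_hull(pts) + half_hull(pts[::-1])
    return 0.5 * abs(sum(x0 * y1 - x1 * y0
                         for (x0, y0), (x1, y1) in zip(hull, hull[1:] + hull[:1])))
```

Interior fixes never change the estimate, but each new peripheral fix can enlarge it, which is why MCP area rises asymptotically with the number of locations.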

  18. A Markov random field based approach to the identification of meat and bone meal in feed by near-infrared spectroscopic imaging.

    PubMed

    Jiang, Xunpeng; Yang, Zengling; Han, Lujia

    2014-07-01

    Contaminated meat and bone meal (MBM) in animal feedstuff has been the source of bovine spongiform encephalopathy (BSE) disease in cattle, leading to a ban in its use, so methods for its detection are essential. In this study, five pure feed and five pure MBM samples were used to prepare two sets of sample arrangements: set A for investigating the discrimination of individual feed/MBM particles and set B for larger numbers of overlapping particles. The two sets were used to test a Markov random field (MRF)-based approach. A Fourier transform infrared (FT-IR) imaging system was used for data acquisition. The spatial resolution of the near-infrared (NIR) spectroscopic image was 25 μm × 25 μm. Each spectrum was the average of 16 scans across the wavenumber range 7,000-4,000 cm(-1), at intervals of 8 cm(-1). This study introduces an innovative approach to analyzing NIR spectroscopic images: an MRF-based approach has been developed using the iterated conditional mode (ICM) algorithm, integrating initial labeling-derived results from support vector machine discriminant analysis (SVMDA) and observation data derived from the results of principal component analysis (PCA). The results showed that MBM covered by feed could be successfully recognized with an overall accuracy of 86.59% and a Kappa coefficient of 0.68. Compared with conventional methods, the MRF-based approach is capable of extracting spectral information combined with spatial information from NIR spectroscopic images. This new approach enhances the identification of MBM using NIR spectroscopic imaging.
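The ICM update at the core of this MRF approach is local and greedy: each pixel takes the label minimizing its data cost plus a Potts penalty for disagreeing with its 4-neighbors, sweeping until labels settle. The sketch below uses a generic unary-cost array as a hypothetical stand-in for the paper's SVMDA/PCA-derived terms.

```python
import numpy as np

def icm(unary, beta=1.0, n_sweeps=5):
    """Iterated conditional modes on a 4-connected grid.
    unary: (H, W, K) array, cost of each of K labels at each pixel.
    beta:  Potts penalty per disagreeing neighbor."""
    labels = unary.argmin(axis=2)           # start from the unary-only labeling
    H, W, K = unary.shape
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                costs = unary[i, j].astype(float)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        costs += beta * (np.arange(K) != labels[ni, nj])
                labels[i, j] = costs.argmin()
    return labels

# A lone pixel whose unary term weakly favors the "MBM" label is smoothed
# away by its "feed"-labeled neighbors:
unary = np.zeros((3, 3, 2))
unary[..., 1] = 1.0            # label 0 ("feed") cheaper everywhere...
unary[1, 1] = [1.0, 0.5]       # ...except the noisy centre pixel
```

This spatial smoothing is what lets the MRF recover MBM particles partially covered by feed, where a purely per-pixel classifier would be noisier.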

  19. Measuring sperm backflow following female orgasm: a new method

    PubMed Central

    King, Robert; Dempsey, Maria; Valentine, Katherine A.

    2016-01-01

    Background Human female orgasm is a vexed question in the field, while there is credible evidence of cryptic female choice that has many hallmarks of orgasm in other species. Our initial goal was to produce a proof of concept for allowing females to study an aspect of infertility in a home setting, specifically by aligning the study of human infertility and increased fertility with the study of other mammalian fertility. In the latter case, the realm of oxytocin-mediated sperm retention mechanisms seems to be at work in terms of ultimate function (differential sperm retention), while the proximate function (rapid transport or cervical tenting) remains unresolved. Method A repeated measures design using an easily taught technique in a natural setting was used. Participants were a small (n=6), non-representative sample of females. A sperm simulant was introduced, combined with an orgasm-producing technique using a vibrator/home massager and other easily supplied materials. Results The sperm flowback (simulated) was measured using a technique that can be used in a home setting. There was a significant difference in simulant retention between the orgasm (M=4.08, SD=0.17) and non-orgasm (M=3.30, SD=0.22) conditions; t (5)=7.02, p=0.001. Cohen's d=3.97, effect size r=0.89. Conclusions This method could allow females to test an aspect of sexual response that has been linked to lowered fertility in a home setting with minimal training. It needs to be replicated with a larger sample size. PMID:27799082

  20. Measuring sperm backflow following female orgasm: a new method.

    PubMed

    King, Robert; Dempsey, Maria; Valentine, Katherine A

    2016-01-01

    Human female orgasm is a vexed question in the field, while there is credible evidence of cryptic female choice that has many hallmarks of orgasm in other species. Our initial goal was to produce a proof of concept for allowing females to study an aspect of infertility in a home setting, specifically by aligning the study of human infertility and increased fertility with the study of other mammalian fertility. In the latter case, the realm of oxytocin-mediated sperm retention mechanisms seems to be at work in terms of ultimate function (differential sperm retention), while the proximate function (rapid transport or cervical tenting) remains unresolved. A repeated measures design using an easily taught technique in a natural setting was used. Participants were a small (n=6), non-representative sample of females. A sperm simulant was introduced, combined with an orgasm-producing technique using a vibrator/home massager and other easily supplied materials. The sperm flowback (simulated) was measured using a technique that can be used in a home setting. There was a significant difference in simulant retention between the orgasm (M=4.08, SD=0.17) and non-orgasm (M=3.30, SD=0.22) conditions; t (5)=7.02, p=0.001. Cohen's d=3.97, effect size r=0.89. This method could allow females to test an aspect of sexual response that has been linked to lowered fertility in a home setting with minimal training. It needs to be replicated with a larger sample size.
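The reported t statistic and effect size come from the standard paired-samples machinery. A minimal sketch with invented data; note that "Cohen's d" below is the d_z convention (mean difference over the SD of the differences), which is one common choice and not necessarily the exact variant the authors used:

```python
import math

def paired_t_and_dz(x, y):
    """Paired t statistic and Cohen's d_z for repeated-measures data."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / (sd / math.sqrt(n)), mean / sd

# Invented retention scores (n=6, matching the study's design, not its data):
orgasm = [4.0, 4.2, 4.1, 3.9, 4.3, 4.0]
control = [3.3, 3.4, 3.2, 3.3, 3.5, 3.1]
t, dz = paired_t_and_dz(orgasm, control)
```

With this convention, t = d_z * sqrt(n), so the two statistics carry the same information up to sample size.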

  1. ILP-based maximum likelihood genome scaffolding

    PubMed Central

    2014-01-01

    Background Interest in de novo genome assembly has been renewed in the past decade due to rapid advances in high-throughput sequencing (HTS) technologies which generate relatively short reads resulting in highly fragmented assemblies consisting of contigs. Additional long-range linkage information is typically used to orient, order, and link contigs into larger structures referred to as scaffolds. Due to library preparation artifacts and erroneous mapping of reads originating from repeats, scaffolding remains a challenging problem. In this paper, we provide a scalable scaffolding algorithm (SILP2) employing a maximum likelihood model capturing read mapping uncertainty and/or non-uniformity of contig coverage which is solved using integer linear programming. A Non-Serial Dynamic Programming (NSDP) paradigm is applied to render our algorithm useful in the processing of larger mammalian genomes. To compare scaffolding tools, we employ novel quantitative metrics in addition to the extant metrics in the field. We have also expanded the set of experiments to include scaffolding of low-complexity metagenomic samples. Results SILP2 achieves better scalability through a more efficient NSDP algorithm than the previous release of SILP. The results show that SILP2 compares favorably to the previous methods OPERA and MIP in both scalability and accuracy for scaffolding single genomes of up to human size, and significantly outperforms them on scaffolding low-complexity metagenomic samples. Conclusions Equipped with NSDP, SILP2 is able to scaffold large mammalian genomes, resulting in the longest and most accurate scaffolds. The ILP formulation for the maximum likelihood model is shown to be flexible enough to handle metagenomic samples. PMID:25253180
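    SILP2's actual ILP model is far richer, but the core idea of choosing contig orientations that maximize a likelihood over mate-pair links can be sketched by brute force on a toy instance. The links, weights, and contig names below are hypothetical illustration, not the authors' code or data:

    ```python
    from itertools import product

    # Toy mate-pair links: (contig_a, contig_b, same_orientation_expected, weight);
    # weights stand in for per-link log-likelihood contributions (hypothetical)
    links = [
        ("c1", "c2", True, 3.0),
        ("c2", "c3", False, 2.0),
        ("c1", "c3", False, 1.0),
    ]
    contigs = ["c1", "c2", "c3"]

    def score(orient):
        # A link is concordant when the relative orientation matches expectation;
        # the total concordant weight plays the role of the log-likelihood
        s = 0.0
        for a, b, same, w in links:
            if (orient[a] == orient[b]) == same:
                s += w
        return s

    # Exhaustive search over all flip assignments (an ILP solver replaces this
    # enumeration at scale)
    best = max(
        (dict(zip(contigs, flips)) for flips in product([False, True], repeat=len(contigs))),
        key=score,
    )
    print(best, score(best))
    ```

    Here all three links can be made concordant, so the maximizer flips c3 relative to c1 and c2. The ILP formulation encodes the same objective with binary orientation variables and linear concordance constraints.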

  2. A meta-analytic review of the relationship between adolescent risky sexual behavior and impulsivity across gender, age, and race.

    PubMed

    Dir, Allyson L; Coskunpinar, Ayca; Cyders, Melissa A

    2014-11-01

    Impulsivity is frequently included as a risk factor in models of adolescent sexual risk-taking; however, findings on the magnitude of association between impulsivity and risky sexual behavior are variable across studies. The aims of the current meta-analysis were to examine (1) how specific impulsivity traits relate to specific risky sexual behaviors in adolescents, and (2) how the impulsivity-risky sex relationship might differ across gender, age, and race. Eighty-one studies were meta-analyzed using a random effects model to examine the overall impulsivity-risky sex relationship and relationships among specific impulsivity traits and risky sexual behaviors. Overall, results revealed a significant, yet small, association between impulsivity and adolescent risky sexual behavior (r=0.19, p<0.001) that did not differ across impulsivity trait. A pattern of stronger effects was associated with risky sexual behaviors as compared to negative outcomes related to these behaviors. Gender moderated the overall relationship (β=0.22, p=0.04), such that effect sizes were significantly larger in samples with more females. Age, race, study design, and sample type did not moderate the relationship, although there was a pattern suggesting smaller effects for adolescents in juvenile detention settings. Adolescent samples with more females showed a larger impulsivity-risky sex relationship, suggesting that impulsivity may be a more important risk factor for risky sex among adolescent females. Research and treatment should consider gender differences when investigating the role of impulsivity in adolescent sexual risk-taking. Copyright © 2014 Elsevier Ltd. All rights reserved.
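    Combining correlations across studies under a random effects model is typically done on the Fisher z scale. A minimal sketch with toy study values (hypothetical, not the meta-analysis's data), using the DerSimonian-Laird estimate of between-study variance:

    ```python
    import math

    # Toy per-study (correlation, sample size) pairs -- hypothetical values
    studies = [(0.05, 200), (0.35, 150), (0.10, 120), (0.30, 100)]

    z = [math.atanh(r) for r, n in studies]      # Fisher z transform
    v = [1.0 / (n - 3) for r, n in studies]      # sampling variance of z
    w = [1.0 / vi for vi in v]                   # fixed-effect weights

    z_fixed = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
    Q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, z))
    C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - (len(studies) - 1)) / C)  # DerSimonian-Laird tau^2

    # Random-effects weights add the between-study variance to each study
    w_re = [1.0 / (vi + tau2) for vi in v]
    z_re = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
    r_pooled = math.tanh(z_re)
    print(round(r_pooled, 3))
    ```

    With heterogeneous toy inputs, tau² comes out positive and the random-effects pooled r sits between the fixed-effect value and the unweighted mean, which is the qualitative behavior a moderator analysis (e.g., by gender composition) then tries to explain.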

  3. Statistical Analysis of Large Scale Structure by the Discrete Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Pando, Jesus

    1997-10-01

    The discrete wavelet transform (DWT) is developed as a general statistical tool for the study of large scale structures (LSS) in astrophysics. The DWT is used in all aspects of structure identification including cluster analysis, spectrum and two-point correlation studies, scale-scale correlation analysis, and measuring deviations from Gaussian behavior. The techniques developed are demonstrated on 'academic' signals, on simulated models of the Lyman-α (Lyα) forests, and on observational data of the Lyα forests. This technique can detect clustering in the Ly-α clouds where traditional techniques such as the two-point correlation function have failed. The position and strength of these clusters in both real and simulated data is determined and it is shown that clusters exist on scales as large as at least 20 h^-1 Mpc at significance levels of 2-4 σ. Furthermore, it is found that the strength distribution of the clusters can be used to distinguish between real data and simulated samples even where other traditional methods have failed to detect differences. Second, a method for measuring the power spectrum of a density field using the DWT is developed. All common features determined by the usual Fourier power spectrum can be calculated by the DWT. These features, such as the index of a power law or typical scales, can be detected even when the samples are geometrically complex, the samples are incomplete, or the mean density on larger scales is not known (the infrared uncertainty). Using this method the spectra of Ly-α forests in both simulated and real samples are calculated. Third, a method for measuring hierarchical clustering is introduced. Because hierarchical evolution is characterized by a set of rules of how larger dark matter halos are formed by the merging of smaller halos, scale-scale correlations of the density field should be one of the most sensitive quantities in determining the merging history. 
We show that these correlations can be completely determined by the correlations between discrete wavelet coefficients on adjacent scales and at nearly the same spatial position, C_{j,j+1}. Scale-scale correlations on two samples of the QSO Ly-α forests absorption spectra are computed. Lastly, higher order statistics are developed to detect deviations from Gaussian behavior. These higher order statistics are necessary to fully characterize the Ly-α forests because the usual 2nd order statistics, such as the two-point correlation function or power spectrum, give inconclusive results. It is shown how this technique takes advantage of the locality of the DWT to circumvent the central limit theorem. A non-Gaussian spectrum is defined, and this spectrum reveals not only the magnitude but also the scales of non-Gaussianity. When applied to simulated and observational samples of the Ly-α clouds, it is found that different popular models of structure formation have different spectra while two independent observational data sets have the same spectra. Moreover, the non-Gaussian spectra of real data sets are significantly different from the spectra of various possible random samples. (Abstract shortened by UMI.)
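    The scale-scale correlation idea, correlating wavelet coefficients on adjacent scales at nearly the same position, can be illustrated with a one-level Haar transform. This is a simplified toy sketch (hypothetical 1-D field and a basic product-moment statistic), not the thesis's estimator:

    ```python
    import math, random

    def haar_level(x):
        # One level of the Haar DWT: pairwise smooth and detail coefficients
        s = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
        d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
        return s, d

    random.seed(1)
    # Toy 1-D "density field": Gaussian noise with slowly varying amplitude
    field = [random.gauss(0, 1) * (1 + 0.5 * math.sin(i / 8)) for i in range(256)]

    s1, d1 = haar_level(field)  # detail coefficients at the finest scale j
    s2, d2 = haar_level(s1)     # detail coefficients at the coarser scale j+1

    # Scale-scale statistic: correlate each fine-scale coefficient's squared
    # magnitude with that of its parent at the next coarser scale
    a = [c ** 2 for c in d1]
    b = [d2[i // 2] ** 2 for i in range(len(d1))]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    sa = math.sqrt(sum((ai - ma) ** 2 for ai in a) / len(a))
    sb = math.sqrt(sum((bi - mb) ** 2 for bi in b) / len(b))
    corr = cov / (sa * sb)
    print(round(corr, 3))
    ```

    A hierarchical (multiplicative) field yields positive parent-child correlation of coefficient magnitudes, whereas purely Gaussian fluctuations do not; that contrast is what makes the statistic sensitive to merging history.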

  4. The capability set for work - correlates of sustainable employability in workers with multiple sclerosis.

    PubMed

    van Gorp, D A M; van der Klink, J J L; Abma, F I; Jongen, P J; van Lieshout, I; Arnoldus, E P J; Beenakker, E A C; Bos, H M; van Eijk, J J J; Fermont, J; Frequin, S T F M; de Gans, K; Hengstman, G J D; Hupperts, R M M; Mostert, J P; Pop, P H M; Verhagen, W I M; Zemel, D; Heerings, M A P; Reneman, M F; Middelkoop, H A M; Visser, L H; van der Hiele, K

    2018-06-01

    The aim of this study was to examine whether work capabilities differ between workers with Multiple Sclerosis (MS) and workers from the general population. The second aim was to investigate whether the capability set was related to work and health outcomes. A total of 163 workers with MS from the MS@Work study and 163 workers from the general population were matched for gender, age, educational level and working hours. All participants completed online questionnaires on demographics, health and work functioning. The Capability Set for Work Questionnaire was used to explore whether a set of seven work values is considered valuable (A), is enabled in the work context (B), and can be achieved by the individual (C). When all three criteria are met, a work value can be considered part of the individual's 'capability set'. Group differences and relationships with work and health outcomes were examined. Despite lower physical work functioning (U = 4250, p = 0.001), lower work ability (U = 10591, p = 0.006) and worse self-reported health (U = 9091, p ≤ 0.001), workers with MS had a larger capability set (U = 9649, p ≤ 0.001) than the general population. In workers with MS, a larger capability set was associated with better flexible work functioning (r = 0.30), work ability (r = 0.25), self-rated health (r = 0.25); and with less absenteeism (r = - 0.26), presenteeism (r = - 0.31), cognitive/neuropsychiatric impairment (r = - 0.35), depression (r = - 0.43), anxiety (r = - 0.31) and fatigue (r = - 0.34). Workers with MS have a larger capability set than workers from the general population. In workers with MS, a larger capability set was associated with better work and health outcomes. This observational study is registered under NL43098.008.12: 'Voorspellers van arbeidsparticipatie bij mensen met relapsing-remitting Multiple Sclerose' ('Predictors of work participation in people with relapsing-remitting Multiple Sclerosis'). The study is registered at the Dutch CCMO register ( https://www.toetsingonline.nl ). 
This study was approved by the METC Brabant on 12 February 2014. The first participants were enrolled on 1 March 2014.
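    The group comparisons above rest on the Mann-Whitney U statistic, which counts how often a value from one group exceeds a value from the other (ties count half). A minimal pure-Python illustration with hypothetical capability-set scores, not the study's data:

    ```python
    def mann_whitney_u(x, y):
        # U for sample x: number of (x, y) pairs where x beats y; ties add 0.5
        u = 0.0
        for xi in x:
            for yj in y:
                if xi > yj:
                    u += 1.0
                elif xi == yj:
                    u += 0.5
        return u

    # Hypothetical capability-set sizes (0-7 work values) for two groups
    group_a = [7, 6, 6, 5, 7]
    group_b = [4, 5, 3, 6, 4]

    u_a = mann_whitney_u(group_a, group_b)
    u_b = mann_whitney_u(group_b, group_a)
    assert u_a + u_b == len(group_a) * len(group_b)  # U statistics are complementary
    print(u_a, u_b)  # 22.5 2.5
    ```

    In practice one would use a library routine that also returns a p-value (e.g. a normal approximation for larger samples); the counting definition above is the quantity those routines compute.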

  5. Influence of calcium carbonate and charcoal application on aggregation processes and organic matter retention at the silt-size scale

    NASA Astrophysics Data System (ADS)

    Asefaw Berhe, Asmeret; Kaiser, Michael; Ghezzehei, Teamrat; Myrold, David; Kleber, Markus

    2013-04-01

    The effectiveness of charcoal and calcium carbonate applications to improve soil conditions has been well documented. However, their influence on the formation of silt-sized aggregates and on the amount and protection of associated organic matter (OM) against microbial decomposition is still largely unknown. For sustainable management of agricultural soils, silt-sized aggregates (2-53 µm) are of particular importance because they store up to 60% of soil organic carbon with mean residence times between 70 and 400 years. The objectives are i) to analyze the ability of CaCO3 and/or charcoal application to increase the amount of silt-sized aggregates and associated OM, ii) to vary soil mineral conditions to establish relevant boundary conditions for amendment-induced aggregation processes, and iii) to determine how amendment-induced changes in the formation of silt-sized aggregates relate to microbial decomposition of OM. We set up artificial highly reactive (HR, clay: 40%, sand: 57%, OM: 3%) and low reactive (LR, clay: 10%, sand: 89%, OM: 1%) soils and mixed them with charcoal (CC, 1%) and/or calcium carbonate (Ca, 0.2%). The samples were adjusted to a water potential of 0.3 bar and subsamples were incubated with microbial inoculum (MO). After a 16-week aggregation experiment, size fractions were separated by wet-sieving and sedimentation. Since we did not use mineral compounds in the artificial mixtures within the size range of 2 to 53 µm, we consider material recovered in this fraction as silt-sized aggregates, which was confirmed by SEM analyses. For the LR mixtures, we detected increasing N concentrations within the 2-53 µm fractions of the charcoal amended samples (CC, CC+Ca, and CC+Ca+MO) as compared to the Control sample, with the strongest effect for the CC+Ca+MO sample. This indicates an association of N-containing, microbially derived OM with silt-sized aggregates. 
For the charcoal amended LR and HR mixtures, the C concentrations of the 2-53 µm fractions are larger than those of the respective fractions of the Control samples, but the effect is several times stronger for the LR mixtures. The C concentrations of the 2-53 µm fractions relative to the total C amount of the LR and HR mixtures are between 30 and 50%. The charcoal amended samples generally show larger relative C amounts associated with the 2-53 µm fractions than the Control samples. Benefits for aggregate formation and OM storage were larger for the sandy (LR) than for the clayey (HR) soil. The data obtained are similar to respective data for natural soils. Consequently, the suggested microcosm experiments are suitable for analyzing mechanisms within soil aggregation processes.

  6. Molecular phylogenetic reconstruction of the endemic Asian salamander family Hynobiidae (Amphibia, Caudata).

    PubMed

    Weisrock, David W; Macey, J Robert; Matsui, Masafumi; Mulcahy, Daniel G; Papenfuss, Theodore J

    2013-01-01

    The salamander family Hynobiidae contains over 50 species and has been the subject of a number of molecular phylogenetic investigations aimed at reconstructing branches across the entire family. In general, studies using the greatest amount of sequence data have used reduced taxon sampling, while the study with the greatest taxon sampling has used a limited sequence data set. Here, we provide insights into the phylogenetic history of the Hynobiidae using both dense taxon sampling and a large mitochondrial DNA sequence data set. We report exclusive new mitochondrial DNA data of 2566 aligned bases (with 151 excluded sites; of the included sites, 1157 are variable, with 957 parsimony informative). This is sampled from two genic regions encoding a 12S-16S region (the 3' end of 12S rRNA, tRNA(Val), and the 5' end of 16S rRNA), and a ND2-COI region (ND2, tRNA(Trp), tRNA(Ala), tRNA(Asn), the origin for light strand replication--O(L), tRNA(Cys), tRNA(Tyr), and the 5' end of COI). Analyses using parsimony, Bayesian, and maximum likelihood optimality criteria produce similar phylogenetic trees, with discordant branches generally receiving low levels of branch support. Monophyly of the Hynobiidae is strongly supported across all analyses, as is the sister relationship and deep divergence between the genus Onychodactylus and all remaining hynobiids. Within this latter grouping our phylogenetic results identify six clades that are relatively divergent from one another, but for which there is minimal support for their phylogenetic placement. This includes the genus Batrachuperus, the genus Hynobius, the genus Pachyhynobius, the genus Salamandrella, a clade containing the genera Ranodon and Paradactylodon, and a clade containing the genera Liua and Pseudohynobius. This latter clade receives low bootstrap support in the parsimony analysis, but is consistent across all three analytical methods. 
Our results also clarify a number of well-supported relationships within the larger Batrachuperus and Hynobius clades. While the relationships identified in this study do much to clarify the phylogenetic history of the Hynobiidae, the poor resolution among major hynobiid clades, and the contrast of mtDNA-derived relationships with recent phylogenetic results from a small number of nuclear genes, highlights the need for continued phylogenetic study with larger numbers of nuclear loci.

  7. Possible Nuclear Safeguards Applications: Workshop on Next-Generation Laser Compton Gamma Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durham, J. Matthew

    2016-11-17

    This is a set of slides for the development of a next-generation photon source white paper. The following topics are covered: Nuclear Safeguards; The Nuclear Fuel Cycle; Precise isotopic determination via NRF; UF6 Enrichment Assay; and Non-Destructive Assay of Spent Nuclear Fuel. In summary: a way to non-destructively measure precise isotopics of ~kg and larger samples has multiple uses in nuclear safeguards. Ideally this is a compact, fieldable device that can be used by international inspectors, and it must be rugged and reliable. A next-generation source can be used as a testing ground for these techniques as technology develops.

  8. Examining a participation-focused stroke self-management intervention in a day rehabilitation setting: a quasi-experimental pilot study.

    PubMed

    Lee, Danbi; Fischer, Heidi; Zera, Sarah; Robertson, Rosetta; Hammel, Joy

    2017-12-01

    Background People with stroke often find discharge from rehabilitation distressing because they do not feel prepared to participate in life roles as they want. A self-management approach can facilitate improvement in confidence and ability to manage post-stroke community living and participation after transitioning into the community. Objective To evaluate the feasibility and effectiveness of the Improving Participation After Stroke Self-management program - Rehab version (IPASS-R) in a day rehabilitation setting. Methods We used a mixed-method non-randomized quasi-experimental design. The IPASS-R program is a six-session group-based intervention led by a trained occupational therapist and a lay person with stroke. The program uses an efficacy-building approach to support aging adults in maintaining active participation in home and community activities post-stroke. Primary outcome measures were the Reintegration to Normal Living Index (RNLI), Stroke Impact Scale (SIS), and Participation Strategies Self-Efficacy Scale. Qualitative feedback was collected post-treatment. Results Seventeen participants with stroke (intervention n = 9; control n = 8) were enrolled across two sites. Non-parametric effect sizes calculated using the Wilcoxon Signed-Rank test revealed larger effects on RNLI and SIS outcomes in the intervention group. The Mann-Whitney U test showed significant differences between the two groups' changes in scores on perceived recovery and strength. Conclusions The results show that IPASS-R has the potential to be integrated into a day rehabilitation setting with a positive impact on community integration and perceived recovery outcomes. Future study is needed to investigate the IPASS-R with a larger sample size and a more rigorous study design.

  9. Coordinated Platoon Routing in a Metropolitan Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Jeffrey; Munson, Todd; Sokolov, Vadim

    2016-10-10

    Platooning vehicles—connected and automated vehicles traveling with small intervehicle distances—use less fuel because of reduced aerodynamic drag. Given a network defined by vertex and edge sets and a set of vehicles with origin/destination nodes/times, we model and solve the combinatorial optimization problem of coordinated routing of vehicles in a manner that routes them to their destination on time while using the least amount of fuel. Common approaches decompose the platoon coordination and vehicle routing into separate problems. Our model addresses both problems simultaneously to obtain the best solution. We use modern modeling techniques and constraints implied from analyzing the platoon routing problem to address larger numbers of vehicles and larger networks than previously considered. While the numerical method used is unable to certify optimality for candidate solutions to all networks and parameters considered, we obtain excellent solutions in approximately one minute for much larger networks and vehicle sets than previously considered in the literature.
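    The joint routing-and-platooning idea can be sketched on a toy network: enumerate each vehicle's simple paths and pick the combination that minimizes total fuel, with a hypothetical 10% saving on edges both vehicles share. The network, trips, and drag model below are illustrative assumptions; the paper's ILP formulation (with timing constraints) is much richer:

    ```python
    from itertools import product

    # Toy undirected road network: edge -> length (hypothetical)
    L = {("A", "B"): 4, ("A", "C"): 4, ("B", "C"): 1, ("B", "D"): 3, ("C", "D"): 3}
    nbrs = {}
    for (u, v) in L:
        nbrs.setdefault(u, []).append(v)
        nbrs.setdefault(v, []).append(u)

    def simple_paths(u, dest, seen=()):
        # All cycle-free paths from u to dest (fine for toy-sized networks)
        if u == dest:
            yield (u,)
            return
        for v in nbrs[u]:
            if v not in seen:
                for rest in simple_paths(v, dest, seen + (u,)):
                    yield (u,) + rest

    def edges_of(path):
        return {tuple(sorted(p)) for p in zip(path, path[1:])}

    def fuel(paths):
        # Each vehicle pays edge length; platooning saves 10% of the length
        # of every edge both vehicles traverse (toy drag-reduction model)
        e1, e2 = map(edges_of, paths)
        base = sum(L[e] for e in e1) + sum(L[e] for e in e2)
        return base - 0.1 * sum(L[e] for e in e1 & e2)

    trips = [("A", "D"), ("B", "D")]  # origin/destination per vehicle
    best = min(product(*(list(simple_paths(o, d)) for o, d in trips)), key=fuel)
    print(best, fuel(best))
    ```

    Joint optimization is what finds the shared B-D leg here; solving each vehicle's shortest path separately could miss platooning opportunities, which is the decomposition pitfall the abstract describes.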

  10. Brain tumor detection and segmentation in a CRF (conditional random fields) framework with pixel-pairwise affinity and superpixel-level features.

    PubMed

    Wu, Wei; Chen, Albert Y C; Zhao, Liang; Corso, Jason J

    2014-03-01

    Detection and segmentation of a brain tumor such as glioblastoma multiforme (GBM) in magnetic resonance (MR) images are often challenging due to its intrinsically heterogeneous signal characteristics. A robust segmentation method for brain tumor MRI scans was developed and tested. Simple thresholds and statistical methods are unable to adequately segment the various elements of the GBM, such as local contrast enhancement, necrosis, and edema. Most voxel-based methods cannot achieve satisfactory results in larger data sets, and the methods based on generative or discriminative models have intrinsic limitations during application, such as small sample set learning and transfer. A new method was developed to overcome these challenges. Multimodal MR images were segmented into superpixels to alleviate the sampling issue and to improve sample representativeness. Next, features were extracted from the superpixels using multi-level Gabor wavelet filters. Based on the features, a support vector machine (SVM) model and an affinity metric model for tumors were trained to overcome the limitations of previous generative models. Based on the output of the SVM and spatial affinity models, conditional random fields theory was applied to segment the tumor in a maximum a posteriori fashion given the smoothness prior defined by our affinity model. Finally, labeling noise was removed using "structural knowledge" such as the symmetrical and continuous characteristics of the tumor in the spatial domain. The system was evaluated with 20 GBM cases and the BraTS challenge data set. Dice coefficients were computed, and the results were highly consistent with those reported by Zikic et al. (MICCAI 2012, Lecture notes in computer science, vol 7512, pp 369-376, 2012). A brain tumor segmentation method using model-aware affinity demonstrates comparable performance with other state-of-the-art algorithms.

  11. Simulation Studies as Designed Experiments: The Comparison of Penalized Regression Models in the “Large p, Small n” Setting

    PubMed Central

    Chaibub Neto, Elias; Bare, J. Christopher; Margolin, Adam A.

    2014-01-01

    New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field experiences a lack of rigorous methodology aimed at systematically and objectively evaluating competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Oftentimes, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in the planning of experiments we are better able to understand the strengths and weaknesses of competing algorithms, leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where “omics” features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting, and our simulations corroborate well established results concerning the conditions under which each one of these methods is expected to perform best, while providing several novel insights. PMID:25289666
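    The contrast between the penalized estimators the study compares is easiest to see in the univariate, no-intercept case, where ridge shrinks the least-squares coefficient smoothly while the lasso soft-thresholds it. A toy sketch (hypothetical data, not the paper's simulation framework):

    ```python
    def soft_threshold(z, t):
        # Lasso shrinkage operator: sign(z) * max(|z| - t, 0)
        return (1 if z > 0 else -1) * max(abs(z) - t, 0.0)

    # Toy data: y roughly 2*x plus noise (values are hypothetical)
    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [2.1, 3.9, 6.2, 8.1, 9.8]

    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    lam = 5.0  # penalty strength

    beta_ols = sxy / sxx                          # minimizes 1/2 * RSS
    beta_ridge = sxy / (sxx + lam)                # minimizes 1/2 * RSS + lam/2 * beta^2
    beta_lasso = soft_threshold(sxy, lam) / sxx   # minimizes 1/2 * RSS + lam * |beta|

    print(beta_ols, beta_ridge, beta_lasso)
    ```

    Both penalized estimates are pulled toward zero relative to OLS; with many correlated features the same mechanism trades variance for bias, which is why the "large p, small n" regime favors these methods. The elastic net simply mixes the two penalties.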

  12. [The relationship between Cognitive Emotion Regulation Questionnaire (CERQ) and depression, anxiety: Meta-analysis].

    PubMed

    Sakakibara, Ryota; Kitahara, Mizuho

    2016-06-01

    This study aimed to investigate the relations between the CERQ and both depression and anxiety, and to reveal the characteristics of a Japanese sample through meta-analysis. The results showed that self-blame, acceptance, rumination, catastrophizing, and blaming others had significantly positive correlations with both depression and anxiety, whereas positive refocusing, refocus on planning, positive reappraisal, and putting into perspective had significantly negative correlations with both variables. Moreover, when comparing the correlation coefficients of the Japanese samples with the combined value, the correlation between depression and positive reappraisal was significantly larger than the combined value. On the other hand, regarding the correlation coefficients of depression and putting into perspective, the combined value was larger than the value of the Japanese samples. In addition, compared to the combined value, the Japanese samples' positive correlation between anxiety and rumination and negative correlation between anxiety and positive reappraisal were larger.

  13. Pyrosequencing-Derived Bacterial, Archaeal, and Fungal Diversity of Spacecraft Hardware Destined for Mars

    PubMed Central

    Vaishampayan, Parag; Nilsson, Henrik R.; Torok, Tamas; Venkateswaran, Kasthuri

    2012-01-01

    Spacecraft hardware and assembly cleanroom surfaces (233 m2 in total) were sampled, total genomic DNA was extracted, hypervariable regions of the 16S rRNA gene (bacteria and archaea) and ribosomal internal transcribed spacer (ITS) region (fungi) were subjected to 454 tag-encoded pyrosequencing PCR amplification, and 203,852 resulting high-quality sequences were analyzed. Bioinformatic analyses revealed correlations between operational taxonomic unit (OTU) abundance and certain sample characteristics, such as source (cleanroom floor, ground support equipment [GSE], or spacecraft hardware), cleaning regimen applied, and location about the facility or spacecraft. National Aeronautics and Space Administration (NASA) cleanroom floor and GSE surfaces gave rise to a larger number of diverse bacterial communities (619 OTU; 20 m2) than colocated spacecraft hardware (187 OTU; 162 m2). In contrast to the results of bacterial pyrosequencing, where at least some sequences were generated from each of the 31 sample sets examined, only 13 and 18 of these sample sets gave rise to archaeal and fungal sequences, respectively. As was the case for bacteria, the abundance of fungal OTU in the GSE surface samples dramatically diminished (9× less) once cleaning protocols had been applied. The presence of OTU representative of actinobacteria, deinococci, acidobacteria, firmicutes, and proteobacteria on spacecraft surfaces suggests that certain bacterial lineages persist even following rigorous quality control and cleaning practices. The majority of bacterial OTU observed as being recurrent belonged to actinobacteria and alphaproteobacteria, supporting the hypothesis that the measures of cleanliness exerted in spacecraft assembly cleanrooms (SAC) inadvertently select for the organisms which are the most fit to survive long journeys in space. PMID:22729532

  14. Human genetic resistance to malaria.

    PubMed

    Williams, Thomas N

    2009-01-01

    This brief chapter highlights the need for caution when designing and interpreting studies aimed at seeking new genes that may be associated with malaria protection, or investigating the potential mechanisms for protection in promising candidates. Judging genetic effects on the basis of the wrong clinical phenotype, and missing true protective genes because their protective effects are masked by unpredictable epistatic effects, are major potential pitfalls. These issues are by no means unique to malaria: in recent years, the importance of larger sample sizes and careful phenotypic definitions has become increasingly appreciated, particularly for genome-wide studies of complex diseases (Cordell and Clayton, 2005; Burton, Tobin and Hopper, 2005). Until recently, research in the field of malaria genetics has not enjoyed the sort of funding afforded to similar work investigating diseases of importance to the developed world. However, in the last few years, coupled with advances in genetic diagnostics that have led to massive automation and falling costs per gene explored, momentum has grown towards more generous funding that brings with it the opportunity for much larger, multisite cohesive studies. The stage is set for a giant leap forward in the coming years.

  15. Academic Culture.

    ERIC Educational Resources Information Center

    Clark, Burton R.

    With fragmentation the dominant trend in academic settings around the world, the larger wholes of profession, enterprise, and system are less held together by integrative ideology. Strong ideological bonding is characteristic of the parts, primarily the disciplines. The larger aggregations are made whole mainly by formal superstructure, many…

  16. Sampling procedures for throughfall monitoring: A simulation study

    NASA Astrophysics Data System (ADS)

    Zimmermann, Beate; Zimmermann, Alexander; Lark, Richard Murray; Elsenbeer, Helmut

    2010-01-01

    What is the most appropriate sampling scheme to estimate event-based average throughfall? A satisfactory answer to this seemingly simple question has yet to be found, a failure which we attribute to previous efforts' dependence on empirical studies. Here we try to answer this question by simulating stochastic throughfall fields based on parameters for statistical models of large monitoring data sets. We subsequently sampled these fields with different sampling designs and variable sample supports. We evaluated the performance of a particular sampling scheme with respect to the uncertainty of possible estimated means of throughfall volumes. Even for a relative error limit of 20%, an impractically large number of small, funnel-type collectors would be required to estimate mean throughfall, particularly for small events. While stratification of the target area is not superior to simple random sampling, cluster random sampling involves the risk of being less efficient. A larger sample support, e.g., the use of trough-type collectors, considerably reduces the necessary sample sizes and eliminates the sensitivity of the mean to outliers. Since the gain in time associated with the manual handling of troughs versus funnels depends on the local precipitation regime, the employment of automatically recording clusters of long troughs emerges as the most promising sampling scheme. Even so, a relative error of less than 5% appears out of reach for throughfall under heterogeneous canopies. We therefore suspect a considerable uncertainty of input parameters for interception models derived from measured throughfall, in particular, for those requiring data of small throughfall events.
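    Under a normal approximation, the number of collectors needed to hit a target relative error of the mean grows with the square of the field's coefficient of variation. A small sketch (the CV values are hypothetical, not the paper's fitted model parameters) illustrating why tight error limits quickly become impractical:

    ```python
    import math

    def required_n(cv, rel_error, z=1.96):
        # Normal-approximation sample size so that the half-width of the 95%
        # confidence interval of the mean is rel_error * mean:
        # n >= (z * cv / rel_error)^2
        return math.ceil((z * cv / rel_error) ** 2)

    # Hypothetical throughfall coefficients of variation (heterogeneous canopies
    # and small events push the CV up)
    for cv in (0.3, 0.5, 1.0):
        print(cv, required_n(cv, 0.20), required_n(cv, 0.05))
    ```

    Tightening the relative error from 20% to 5% multiplies the required sample size sixteenfold, which matches the abstract's conclusion that a 5% error is out of reach with funnels and motivates larger supports (troughs) that lower the effective CV.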

  17. Meta-analysis of genome-wide association studies for personality

    PubMed Central

    de Moor, Marleen H.M.; Costa, Paul T.; Terracciano, Antonio; Krueger, Robert F.; de Geus, Eco J.C.; Toshiko, Tanaka; Penninx, Brenda W.J.H.; Esko, Tõnu; Madden, Pamela A F; Derringer, Jaime; Amin, Najaf; Willemsen, Gonneke; Hottenga, Jouke-Jan; Distel, Marijn A.; Uda, Manuela; Sanna, Serena; Spinhoven, Philip; Hartman, Catharina A.; Sullivan, Patrick; Realo, Anu; Allik, Jüri; Heath, Andrew C; Pergadia, Michele L; Agrawal, Arpana; Lin, Peng; Grucza, Richard; Nutile, Teresa; Ciullo, Marina; Rujescu, Dan; Giegling, Ina; Konte, Bettina; Widen, Elisabeth; Cousminer, Diana L; Eriksson, Johan G.; Palotie, Aarno; Luciano, Michelle; Tenesa, Albert; Davies, Gail; Lopez, Lorna M.; Hansell, Narelle K.; Medland, Sarah E.; Ferrucci, Luigi; Schlessinger, David; Montgomery, Grant W.; Wright, Margaret J.; Aulchenko, Yurii S.; Janssens, A.Cecile J.W.; Oostra, Ben A.; Metspalu, Andres; Abecasis, Gonçalo R.; Deary, Ian J.; Räikkönen, Katri; Bierut, Laura J.; Martin, Nicholas G.; van Duijn, Cornelia M.; Boomsma, Dorret I.

    2013-01-01

Personality can be thought of as a set of characteristics that influence people’s thoughts, feelings, and behaviour across a variety of settings. Variation in personality is predictive of many outcomes in life, including mental health. Here we report on a meta-analysis of genome-wide association (GWA) data for personality in ten discovery samples (17 375 adults) and five in-silico replication samples (3 294 adults). All participants were of European ancestry. Personality scores for Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness were based on the NEO Five-Factor Inventory. Genotype data were available for ~2.4M Single Nucleotide Polymorphisms (SNPs; directly typed and imputed using HapMap data). In the discovery samples, classical association analyses were performed under an additive model followed by meta-analysis using the weighted inverse variance method. Results showed genome-wide significance for Openness to Experience near the RASA1 gene on 5q14.3 (rs1477268 and rs2032794, P = 2.8 × 10−8 and 3.1 × 10−8) and for Conscientiousness in the brain-expressed KATNAL2 gene on 18q21.1 (rs2576037, P = 4.9 × 10−8). We further conducted a gene-based test that confirmed the association of KATNAL2 to Conscientiousness. In-silico replication did not, however, show significant associations of the top SNPs with Openness and Conscientiousness, although the direction of effect of the KATNAL2 SNP on Conscientiousness was consistent in all replication samples. Larger scale GWA studies and alternative approaches are required for confirmation of KATNAL2 as a novel gene affecting Conscientiousness. PMID:21173776
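The weighted inverse variance method named in this abstract is simple to state: each cohort's effect estimate is weighted by the reciprocal of its squared standard error. A minimal fixed-effect sketch with hypothetical per-cohort values (not data from the study):

```python
import math

def inverse_variance_meta(betas, ses):
    """Fixed-effect inverse-variance meta-analysis: the pooled effect is the
    weighted mean of cohort estimates, with weights w_i = 1 / SE_i**2, and the
    pooled standard error is sqrt(1 / sum(w_i))."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical SNP effect sizes and standard errors from three cohorts:
beta, se = inverse_variance_meta([0.12, 0.08, 0.15], [0.05, 0.04, 0.06])
```

The most precise cohort (smallest SE) dominates the pooled estimate, which is why large discovery samples carry most of the weight in such meta-analyses.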

  18. Effect of attention on the detection and identification of masked spatial patterns.

    PubMed

    Põder, Endel

    2005-01-01

    The effect of attention on the detection and identification of vertically and horizontally oriented Gabor patterns in the condition of simultaneous masking with obliquely oriented Gabors was studied. Attention was manipulated by varying the set size in a visual-search experiment. In the first experiment, small target Gabors were presented on the background of larger masking Gabors. In the detection task, the effect of set size was as predicted by unlimited-capacity signal detection theory. In the orientation identification task, increasing the set size from 1 to 8 resulted in a much larger decline in performance. The results of the additional experiments suggest that attention can reduce the crowding effect of maskers.

  19. Do Programs for Runaway and Homeless Youth Work? A Qualitative Exploration From the Perspectives of Youth Clients in Diverse Settings.

    PubMed

    Gwadz, Marya; Freeman, Robert M; Kutnick, Alexandra H; Silverman, Elizabeth; Ritchie, Amanda S; Cleland, Charles M; Leonard, Noelle R; Srinagesh, Aradhana; Powlovich, Jamie; Bolas, James

    2018-01-01

Runaway and homeless youth (RHY) comprise a large population of young people who reside outside the control and protection of parents and guardians and who experience numerous traumas and risk factors, but few buffering resources. Specialized settings have developed to serve RHY, but little is known about their effects. The present cross-sectional qualitative descriptive study, grounded in the positive youth development approach and the Youth Program Quality Assessment model, addressed this gap in the literature. From a larger sample of 29 RHY-specific settings across New York State, RHY ages 16-21 from 11 settings were purposively sampled for semi-structured in-depth interviews on their transitions into homelessness, experiences with settings, and unmet needs (N = 37 RHY). Data were analyzed with a theory-driven and inductive systematic content analysis approach. Half of participants (54%) were female; almost half (49%) identified as non-heterosexual; and 42% were African American/Black, 31% were Latino/Hispanic, and 28% were White/other. Results indicated that because RHY are a uniquely challenged population, distrustful of service settings and professional adults and skilled at surviving independently, the population-tailored approaches found in RHY-specific settings are vital to settings' abilities to effectively engage and serve RHY. We found the following four major themes regarding the positive effects of settings: (1) engaging with an RHY setting was emotionally challenging and frightening, and thus the experiences of safety and services tailored to RHY needs were critical; (2) instrumental support from staff was vital and most effective when received in a context of emotional support; (3) RHY were skilled at survival on the streets, but benefited from socialization into more traditional systems to foster future independent living; and (4) follow-through and aftercare were needed as RHY transitioned out of services.
With respect to gaps in settings, RHY highlighted the following: (1) a desire for better management of tension between youths' needs for structure and wishes for autonomy and (2) lack of RHY input into program governance. This study advances our understanding of RHY, their service needs, and the ways settings meet these needs, as well as remaining gaps. It underscores the vital, life-changing, and even life-saving role these settings play for RHY.

  20. Using Stochastic Approximation Techniques to Efficiently Construct Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran

    2018-06-22

Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP)-heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling, and, therefore, avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude, compared with previous approaches as well as with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, for example, requiring only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
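FIESTA's contribution is accelerating this with stochastic approximation, but the underlying parametric-bootstrap CI it builds on can be sketched generically. The toy model and estimator below are illustrative stand-ins, not the LMM heritability estimator from the paper.

```python
import random
import statistics

def parametric_bootstrap_ci(theta_hat, simulate, estimator, n_boot=1000, alpha=0.05):
    """Percentile parametric bootstrap: simulate fresh data from the fitted
    model at the point estimate, re-estimate on each replicate, and take
    empirical quantiles. Unlike a normal-theory SE, this respects bounded
    parameter spaces and skewed estimator distributions."""
    boots = sorted(estimator(simulate(theta_hat)) for _ in range(n_boot))
    k = round(n_boot * alpha / 2)
    return boots[k], boots[-k - 1]

# Toy stand-in model: y ~ Normal(theta, 1) with 200 observations,
# estimator = sample mean (purely for illustration).
random.seed(0)
simulate = lambda theta: [random.gauss(theta, 1.0) for _ in range(200)]
lo, hi = parametric_bootstrap_ci(0.5, simulate, statistics.mean)
```

For the mean of 200 unit-variance observations the interval width should be near 2 × 1.96/√200 ≈ 0.28, which the bootstrap recovers without any analytic SE formula.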

  1. Spectral gap optimization of order parameters for sampling complex molecular systems

    PubMed Central

    Tiwary, Pratyush; Berne, B. J.

    2016-01-01

    In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365

  2. Ergonomic and usability analysis on a sample of automobile dashboards.

    PubMed

    Carvalho, Raíssa; Soares, Marcelo

    2012-01-01

This is a research study based on an analysis which sets out to identify and pinpoint ergonomic and usability problems found in a sample of automobile dashboards. The sample consisted of three dashboards of three different makes, characterized as a popular model, an average model, and a luxury model. The examination was conducted by observation, with the aid of photography, notes, open interviews, questionnaires, and task performance with users, following principles laid down by established methodologies. From this it was possible to point to the existence of problems such as complaints about the layout, lighting, colors, available area, difficult access to points of interaction such as buttons, and the difficult nomenclature of dials. The findings and recommendations presented show the need for a further, deeper study using more accurate tools, a larger sample of users, and an anthropometric study focused on the dashboard, since reading and understanding it must be done quickly and accurately. They also indicate that more attention should be given to the study of automobile dashboards, particularly in the most popular vehicles, in order to maintain standards of usability and drivers' comfort and safety.

  3. What to expect from dynamical modelling of galactic haloes - II. The spherical Jeans equation

    NASA Astrophysics Data System (ADS)

    Wang, Wenting; Han, Jiaxin; Cole, Shaun; More, Surhud; Frenk, Carlos; Schaller, Matthieu

    2018-06-01

The spherical Jeans equation (SJE) is widely used in dynamical modelling of the Milky Way (MW) halo potential. We use haloes and galaxies from the cosmological Millennium-II simulation and hydrodynamical APOSTLE (A Project of Simulations of The Local Environment) simulations to investigate the performance of the SJE in recovering the underlying mass profiles of MW mass haloes. The best-fitting halo mass and concentration parameters scatter by 25 per cent and 40 per cent around their input values, respectively, when dark matter particles are used as tracers. This scatter becomes as large as a factor of 3 when using star particles instead. This is significantly larger than the estimated statistical uncertainty associated with the use of the SJE. The existence of correlated phase-space structures that violate the steady-state assumption of the SJE as well as non-spherical geometries is the principal source of the scatter. Binary haloes show larger scatter because they are more aspherical in shape and have a more perturbed dynamical state. Our results confirm that the number of independent phase-space structures sets an intrinsic limiting precision on dynamical inferences based on the steady-state assumption. Modelling with a radius-independent velocity anisotropy, or using tracers within a limited outer radius, results in significantly larger scatter, but the ensemble-averaged measurement over the whole halo sample is approximately unbiased.
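For reference, the steady-state, spherically symmetric form of the SJE that such modelling assumes is standard (with tracer density ν, radial velocity dispersion σ_r, anisotropy β, and enclosed mass M(<r)):

```latex
\frac{1}{\nu}\frac{\mathrm{d}\!\left(\nu \sigma_r^2\right)}{\mathrm{d}r}
+ \frac{2\beta\,\sigma_r^2}{r}
= -\frac{G M(<r)}{r^2},
\qquad
\beta \equiv 1 - \frac{\sigma_\theta^2 + \sigma_\phi^2}{2\sigma_r^2}
```

The correlated phase-space structures discussed above violate the steady-state assumption built into the left-hand side, and asphericity violates the spherical symmetry assumed throughout.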

  4. An orthogonal oriented quadrature hexagonal image pyramid

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1987-01-01

    An image pyramid has been developed with basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The pyramid operates on a hexagonal sample lattice. The set of seven basis functions consist of three even high-pass kernels, three odd high-pass kernels, and one low-pass kernel. The three even kernels are identified when rotated by 60 or 120 deg, and likewise for the odd. The seven basis functions occupy a point and a hexagon of six nearest neighbors on a hexagonal sample lattice. At the lowest level of the pyramid, the input lattice is the image sample lattice. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing sq rt 7 larger than the previous level, so that the number of coefficients is reduced by a factor of 7 at each level. The relationship between this image code and the processing architecture of the primate visual cortex is discussed.
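The factor-of-7 subsampling makes the coefficient bookkeeping a geometric series: each 7-sample neighborhood yields 6 high-pass coefficients that are kept plus 1 low-pass coefficient passed up to the next level, so the transform is critically sampled. A small sketch with a hypothetical image size:

```python
def lattice_sizes(n_samples, levels):
    """Input lattice size at each pyramid level: subsampling by a spacing
    of sqrt(7) shrinks the lattice by a factor of 7 per level."""
    sizes = [n_samples]
    for _ in range(levels - 1):
        sizes.append(sizes[-1] // 7)
    return sizes

def total_coefficients(n_samples, levels):
    """Kept coefficients: 6 high-pass per 7-sample neighborhood at each
    level, plus the final level's low-pass output. Because the transform
    is orthogonal and critically sampled, this equals n_samples exactly."""
    sizes = lattice_sizes(n_samples, levels)
    return sum(6 * s // 7 for s in sizes) + sizes[-1] // 7

# A hypothetical 7**5-sample hexagonal image:
# lattice_sizes(7**5, 5)      -> [16807, 2401, 343, 49, 7]
# total_coefficients(7**5, 5) -> 16807  (same as the input sample count)
```

Exact preservation of the coefficient count is what distinguishes this orthogonal pyramid from overcomplete codes such as the Laplacian pyramid, which expand the representation by about a third.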

  5. STBase: One Million Species Trees for Comparative Biology

    PubMed Central

    McMahon, Michelle M.; Deepak, Akshay; Fernández-Baca, David; Boss, Darren; Sanderson, Michael J.

    2015-01-01

    Comprehensively sampled phylogenetic trees provide the most compelling foundations for strong inferences in comparative evolutionary biology. Mismatches are common, however, between the taxa for which comparative data are available and the taxa sampled by published phylogenetic analyses. Moreover, many published phylogenies are gene trees, which cannot always be adapted immediately for species level comparisons because of discordance, gene duplication, and other confounding biological processes. A new database, STBase, lets comparative biologists quickly retrieve species level phylogenetic hypotheses in response to a query list of species names. The database consists of 1 million single- and multi-locus data sets, each with a confidence set of 1000 putative species trees, computed from GenBank sequence data for 413,000 eukaryotic taxa. Two bodies of theoretical work are leveraged to aid in the assembly of multi-locus concatenated data sets for species tree construction. First, multiply labeled gene trees are pruned to conflict-free singly-labeled species-level trees that can be combined between loci. Second, impacts of missing data in multi-locus data sets are ameliorated by assembling only decisive data sets. Data sets overlapping with the user’s query are ranked using a scheme that depends on user-provided weights for tree quality and for taxonomic overlap of the tree with the query. Retrieval times are independent of the size of the database, typically a few seconds. Tree quality is assessed by a real-time evaluation of bootstrap support on just the overlapping subtree. Associated sequence alignments, tree files and metadata can be downloaded for subsequent analysis. STBase provides a tool for comparative biologists interested in exploiting the most relevant sequence data available for the taxa of interest. 
It may also serve as a prototype for future species tree oriented databases and as a resource for assembly of larger species phylogenies from precomputed trees. PMID:25679219

  6. Geophysical Evidence to Link Terrestrial Insect Diversity and Groundwater Availability in Non-Riparian Ecosystems

    NASA Astrophysics Data System (ADS)

    Pehringer, M.; Carr, G.; Long, H.; Parsekian, A.

    2015-12-01

Wyoming, the third driest state in the United States, is home to a high level of biodiversity. In many cases, ecosystems are dependent on the vast systems of water resting just below the surface. This groundwater supports a variety of organisms that live far from surface water and its surrounding riparian zone, where more than 70% of species reside. In order to observe the correlation of groundwater presence and biodiversity in non-riparian ecosystems, a study was conducted to look specifically at terrestrial insect species linked to groundwater in Bighorn National Forest, WY. It was hypothesized that the more groundwater present, the greater the diversity of insects would be. Sample areas were randomly selected in non-riparian zones and groundwater was evaluated using a transient electromagnetic (TEM) geophysical instrument. Electrical pulses were transmitted through a 40 m by 40 m square of wire to measure levels of resistivity from near the surface to several hundred meters below ground. Pulses are echoed back to the surface and received by a smaller 10 m by 10 m square of wire, and an even smaller 1 m by 1 m square of wire set inside the larger transmitting wire. An insect population and species count was then conducted within the perimeter set by the outer transmitting wire. The results were not as hypothesized. More inferred groundwater below the surface resulted in a smaller diversity of species. Inversely, the areas with a smaller diversity held a larger total population of terrestrial insects.

  7. Diagnostic grouping among adults with intellectual disabilities and autistic spectrum disorders in staffed housing.

    PubMed

    Felce, D; Perry, J

    2012-12-01

    There is little evidence to guide the commissioning of residential provision for adults with autistic spectrum disorder (ASD) in the UK. We aim to explore the degree and impact of diagnostic congregation among adults with intellectual disabilities (ID) and ASD living in staffed housing. One hundred and fifty-seven adults with intellectual disabilities from a sample of 424 in staffed housing were assessed as having the triad of impairments characteristic of ASD. They lived in 88 houses: 26 were non-congregate (40% or fewer residents had the triad) and 50 congregate (60% or more had the triad); 12 with intermediate groupings were eliminated. Non-congregate and congregate groups were compared on age, gender, adaptive and challenging behaviour, house size, staff per resident and various measures of quality of care and quality of outcome. Comparisons were repeated for Adaptive Behavior Scale (ABS)-matched, congregate and non-congregate subsamples. Non-congregate settings were larger, had lower staff per resident and more individualised social milieus. Groups were similar in age and gender but the non-congregate group had non-significantly higher ABS scores. The non-congregate group did more social, community and household activities. After matching for ABS, these outcome differences ceased to be significant. Non-congregate settings were significantly larger and had significantly more organised working methods. The findings are consistent with other research that finds few advantages to diagnostic grouping. © 2011 The Authors. Journal of Intellectual Disability Research © 2011 Blackwell Publishing Ltd.

  8. Protein family clustering for structural genomics.

    PubMed

    Yan, Yongpan; Moult, John

    2005-10-28

A major goal of structural genomics is the provision of a structural template for a large fraction of protein domains. The magnitude of this task depends on the number and nature of protein sequence families. With a large number of bacterial genomes now fully sequenced, it is possible to obtain improved estimates of the number and diversity of families in that kingdom. We have used an automated clustering procedure to group all sequences in a set of genomes into protein families. Benchmarking shows the clustering method is sensitive at detecting remote family members, and has a low level of false positives. This comprehensive protein family set has been used to address the following questions. (1) What is the structure coverage for currently known families? (2) How will the number of known apparent families grow as more genomes are sequenced? (3) What is a practical strategy for maximizing structure coverage in the future? Our study indicates that approximately 20% of known families with three or more members currently have a representative structure. The study also indicates that the number of apparent protein families will be considerably larger than previously thought: We estimate that, by the criteria of this work, there will be about 250,000 protein families when 1000 microbial genomes have been sequenced. However, the vast majority of these families will be small, and it will be possible to obtain structural templates for 70-80% of protein domains with an achievable number of representative structures, by systematically sampling the larger families.

  9. Do Programs for Runaway and Homeless Youth Work? A Qualitative Exploration From the Perspectives of Youth Clients in Diverse Settings

    PubMed Central

    Gwadz, Marya; Freeman, Robert M.; Kutnick, Alexandra H.; Silverman, Elizabeth; Ritchie, Amanda S.; Cleland, Charles M.; Leonard, Noelle R.; Srinagesh, Aradhana; Powlovich, Jamie; Bolas, James

    2018-01-01

    Runaway and homeless youth (RHY) comprise a large population of young people who reside outside the control and protection of parents and guardians and who experience numerous traumas and risk factors, but few buffering resources. Specialized settings have developed to serve RHY, but little is known about their effects. The present cross-sectional qualitative descriptive study, grounded in the positive youth development approach and the Youth Program Quality Assessment model, addressed this gap in the literature. From a larger sample of 29 RHY-specific settings across New York State, RHY ages 16–21 from 11 settings were purposively sampled for semi-structured in-depth interviews on their transitions into homelessness, experiences with settings, and unmet needs (N = 37 RHY). Data were analyzed with a theory-driven and inductive systematic content analysis approach. Half of participants (54%) were female; almost half (49%) identified as non-heterosexual; and 42% were African American/Black, 31% were Latino/Hispanic, and 28% were White/other. Results indicated that because RHY are a uniquely challenged population, distrustful of service settings and professional adults and skilled at surviving independently, the population-tailored approaches found in RHY-specific settings are vital to settings’ abilities to effectively engage and serve RHY. We found the following four major themes regarding the positive effects of settings: (1) engaging with an RHY setting was emotionally challenging and frightening, and thus the experiences of safety and services tailored to RHY needs were critical; (2) instrumental support from staff was vital and most effective when received in a context of emotional support; (3) RHY were skilled at survival on the streets, but benefited from socialization into more traditional systems to foster future independent living; and (4) follow-through and aftercare were needed as RHY transitioned out of services. 
With respect to gaps in settings, RHY highlighted the following: (1) a desire for better management of tension between youths’ needs for structure and wishes for autonomy and (2) lack of RHY input into program governance. This study advances our understanding of RHY, their service needs, and the ways settings meet these needs, as well as remaining gaps. It underscores the vital, life-changing, and even life-saving role these settings play for RHY. PMID:29725587

  10. A Study of One Learner Cognitive Style and the Ability to Generalize Behavioral Competencies.

    ERIC Educational Resources Information Center

    Carter, Heather L.

    The generalization of acquired competencies, specifically flexibility of closure, was the subject of this research. Flexibility of closure was defined as the ability to demonstrate selective attention to a specified set of elements when presented within various settings (the larger the number of settings from which the desired set of elements can…

  11. Workforce Skills Development and Engagement in Training through Skill Sets: Literature Review. Occasional Paper

    ERIC Educational Resources Information Center

    Mills, John; Bowman, Kaye; Crean, David; Ranshaw, Danielle

    2012-01-01

    This literature review examines the available research on skill sets. It provides background for a larger research project "Workforce skills development and engagement in training through skill sets," the report of which will be released early next year. This paper outlines the origin of skill sets and explains the difference between…

  12. Learning in data-limited multimodal scenarios: Scandent decision forests and tree-based features.

    PubMed

    Hor, Soheil; Moradi, Mehdi

    2016-12-01

    Incomplete and inconsistent datasets often pose difficulties in multimodal studies. We introduce the concept of scandent decision trees to tackle these difficulties. Scandent trees are decision trees that optimally mimic the partitioning of the data determined by another decision tree, and crucially, use only a subset of the feature set. We show how scandent trees can be used to enhance the performance of decision forests trained on a small number of multimodal samples when we have access to larger datasets with vastly incomplete feature sets. Additionally, we introduce the concept of tree-based feature transforms in the decision forest paradigm. When combined with scandent trees, the tree-based feature transforms enable us to train a classifier on a rich multimodal dataset, and use it to classify samples with only a subset of features of the training data. Using this methodology, we build a model trained on MRI and PET images of the ADNI dataset, and then test it on cases with only MRI data. We show that this is significantly more effective in staging of cognitive impairments compared to a similar decision forest model trained and tested on MRI only, or one that uses other kinds of feature transform applied to the MRI data. Copyright © 2016. Published by Elsevier B.V.

  13. Adaptive Landscape Flattening Accelerates Sampling of Alchemical Space in Multisite λ Dynamics.

    PubMed

    Hayes, Ryan L; Armacost, Kira A; Vilseck, Jonah Z; Brooks, Charles L

    2017-04-20

    Multisite λ dynamics (MSλD) is a powerful emerging method in free energy calculation that allows prediction of relative free energies for a large set of compounds from very few simulations. Calculating free energy differences between substituents that constitute large volume or flexibility jumps in chemical space is difficult for free energy methods in general, and for MSλD in particular, due to large free energy barriers in alchemical space. This study demonstrates that a simple biasing potential can flatten these barriers and introduces an algorithm that determines system specific biasing potential coefficients. Two sources of error, deep traps at the end points and solvent disruption by hard-core potentials, are identified. Both scale with the size of the perturbed substituent and are removed by sharp biasing potentials and a new soft-core implementation, respectively. MSλD with landscape flattening is demonstrated on two sets of molecules: derivatives of the heat shock protein 90 inhibitor geldanamycin and derivatives of benzoquinone. In the benzoquinone system, landscape flattening leads to 2 orders of magnitude improvement in transition rates between substituents and robust solvation free energies. Landscape flattening opens up new applications for MSλD by enabling larger chemical perturbations to be sampled with improved precision and accuracy.

  14. Binary constructs of forensic psychiatric nursing: a pilot study.

    PubMed

    Mason, T; Dulson, J; King, L

    2009-03-01

The aim was to develop an Information Gathering Schedule (IGS) relevant to forensic psychiatric nursing in order to establish the perceived differences in the three levels of security: high, medium and low. Perceived differences in the role constructs of forensic psychiatric nursing are said to exist, but the evidence is qualitative or anecdotal. This paper sets out a pilot study, beginning in 2004, relating to the development of two rating scales for inclusion in an IGS to acquire data on the role constructs of nurses working in these environments. Following a thematic analysis of the literature, two sets of binary frameworks were constructed and a number of questions/statements relating to them were tested. The Thurstone Scaling test was applied to compute medians, resulting in a reduction to 48 and 20 items for each respective framework. Two 7-point Likert scales were constructed and test-retest procedures were applied on a sample population of forensic psychiatric nurses. Student's t-test was conducted on the data and the results suggest that the IGS is now suitable for application in a larger study. The IGS was piloted on a small sample of forensic psychiatric nurses. The two scales were validated to coefficient values ranging from 0.7 to 0.9. Amendments were made and the IGS was considered acceptable.

  15. An upper bound on the radius of a highly electrically conducting lunar core

    NASA Technical Reports Server (NTRS)

    Hobbs, B. A.; Hood, L. L.; Herbert, F.; Sonett, C. P.

    1983-01-01

    Parker's (1980) nonlinear inverse theory for the electromagnetic sounding problem is converted to a form suitable for analysis of lunar day-side transfer function data by: (1) transforming the solution in plane geometry to that in spherical geometry; and (2) transforming the theoretical lunar transfer function in the dipole limit to an apparent resistivity function. The theory is applied to the revised lunar transfer function data set of Hood et al. (1982), which extends in frequency from 10 to the -5th to 10 to the -3rd Hz. On the assumption that an iron-rich lunar core, whether molten or solid, can be represented by a perfect conductor at the minimum sampled frequency, an upper bound of 435 km on the maximum radius of such a core is calculated. This bound is somewhat larger than values of 360-375 km previously estimated from the same data set via forward model calculations because the prior work did not consider all possible mantle conductivity functions.

  16. The value of remote sensing techniques in supporting effective extrapolation across multiple marine spatial scales.

    PubMed

    Strong, James Asa; Elliott, Michael

    2017-03-15

The reporting of ecological phenomena and environmental status routinely requires point observations, collected with traditional sampling approaches, to be extrapolated to larger reporting scales. This process encompasses difficulties that can quickly entrain significant errors. Remote sensing techniques offer insights and exceptional spatial coverage for observing the marine environment. This review provides guidance on (i) the structures and discontinuities inherent within the extrapolative process, (ii) how to extrapolate effectively across multiple spatial scales, and (iii) remote sensing techniques and data sets that can facilitate this process. This evaluation illustrates that remote sensing techniques are a critical component in extrapolation and likely to underpin the production of high-quality assessments of ecological phenomena and the regional reporting of environmental status. Ultimately, it is hoped that this guidance will aid the production of robust and consistent extrapolations that also make full use of the techniques and data sets that expedite this process. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Fragmentation efficiency of explosive volcanic eruptions: A study of experimentally generated pyroclasts

    NASA Astrophysics Data System (ADS)

    Kueppers, Ulrich; Scheu, Bettina; Spieler, Oliver; Dingwell, Donald B.

    2006-05-01

    Products of magma fragmentation can pose a severe threat to health, infrastructure, environment, and aviation. Systematic evaluation of the mechanisms and the consequences of volcanic fragmentation is very difficult as the adjacent processes cannot be observed directly and their deposits undergo transport-related sorting. However, enhanced knowledge is required for hazard assessment and risk mitigation. Laboratory experiments on natural samples allow the precise characterization of the generated pyroclasts and open the possibility for substantial advances in the quantification of fragmentation processes. They hold the promise of precise characterization and quantification of fragmentation efficiency and its dependence on changing material properties and the physical conditions at fragmentation. We performed a series of rapid decompression experiments on three sets of natural samples from Unzen volcano, Japan. The analysis comprised grain-size analysis and surface area measurements. The grain-size analysis is performed by dry sieving for particles larger than 250 μm and wet laser refraction for smaller particles. For all three sets of samples, the grain-size of the most abundant fraction decreases and the weight fraction of newly generated ash particles (up to 40 wt.%) increases with experimental pressure/potential energy for fragmentation. This energy can be estimated from the volume of the gas fraction and the applied pressure. The surface area was determined through Argon adsorption. The fragmentation efficiency is described by the degree of fine-particle generation. Results show that the fragmentation efficiency and the generated surface correlate positively with the applied energy.

  18. Pollination and reproduction of an invasive plant inside and outside its ancestral range

    NASA Astrophysics Data System (ADS)

    Petanidou, Theodora; Price, Mary V.; Bronstein, Judith L.; Kantsa, Aphrodite; Tscheulin, Thomas; Kariyat, Rupesh; Krigas, Nikos; Mescher, Mark C.; De Moraes, Consuelo M.; Waser, Nickolas M.

    2018-05-01

    Comparing traits of invasive species within and beyond their ancestral range may improve our understanding of processes that promote aggressive spread. Solanum elaeagnifolium (silverleaf nightshade) is a noxious weed in its ancestral range in North America and is invasive on other continents. We compared investment in flowers and ovules, pollination success, and fruit and seed set in populations from Arizona, USA ("AZ") and Greece ("GR"). In both countries, the populations we sampled varied in size and types of present-day disturbance. Stature of plants increased with population size in AZ samples whereas GR plants were uniformly tall. Taller plants produced more flowers, and GR plants produced more flowers for a given stature and allocated more ovules per flower. Similar functional groups of native bees pollinated in AZ and GR populations, but visits to flowers decreased with population size and we observed no visits in the largest GR populations. As a result, plants in large GR populations were pollen-limited, and estimates of fecundity were lower on average in GR populations despite the larger allocation to flowers and ovules. These differences between plants in our AZ and GR populations suggest promising directions for further study. It would be useful to sample S. elaeagnifolium in Mediterranean climates within the ancestral range (e.g., in California, USA), to study asexual spread via rhizomes, and to use common gardens and genetic studies to explore the basis of variation in allocation patterns and of relationships between visitation and fruit set.

  19. Negligible impact of rare autoimmune-locus coding-region variants on missing heritability.

    PubMed

    Hunt, Karen A; Mistry, Vanisha; Bockett, Nicholas A; Ahmad, Tariq; Ban, Maria; Barker, Jonathan N; Barrett, Jeffrey C; Blackburn, Hannah; Brand, Oliver; Burren, Oliver; Capon, Francesca; Compston, Alastair; Gough, Stephen C L; Jostins, Luke; Kong, Yong; Lee, James C; Lek, Monkol; MacArthur, Daniel G; Mansfield, John C; Mathew, Christopher G; Mein, Charles A; Mirza, Muddassar; Nutland, Sarah; Onengut-Gumuscu, Suna; Papouli, Efterpi; Parkes, Miles; Rich, Stephen S; Sawcer, Steven; Satsangi, Jack; Simmonds, Matthew J; Trembath, Richard C; Walker, Neil M; Wozniak, Eva; Todd, John A; Simpson, Michael A; Plagnol, Vincent; van Heel, David A

    2013-06-13

    Genome-wide association studies (GWAS) have identified common variants of modest-effect size at hundreds of loci for common autoimmune diseases; however, a substantial fraction of heritability remains unexplained, to which rare variants may contribute. To discover rare variants and test them for association with a phenotype, most studies re-sequence a small initial sample size and then genotype the discovered variants in a larger sample set. This approach fails to analyse a large fraction of the rare variants present in the entire sample set. Here we perform simultaneous amplicon-sequencing-based variant discovery and genotyping for coding exons of 25 GWAS risk genes in 41,911 UK residents of white European origin, comprising 24,892 subjects with six autoimmune disease phenotypes and 17,019 controls, and show that rare coding-region variants at known loci have a negligible role in common autoimmune disease susceptibility. These results do not support the rare-variant synthetic genome-wide-association hypothesis (in which unobserved rare causal variants lead to association detected at common tag variants). Many known autoimmune disease risk loci contain multiple, independently associated, common and low-frequency variants, and so genes at these loci are a priori stronger candidates for harbouring rare coding-region variants than other genes. Our data indicate that the missing heritability for common autoimmune diseases may not be attributable to the rare coding-region variant portion of the allelic spectrum, but perhaps, as others have proposed, may be a result of many common-variant loci of weak effect.

  20. Estimating Sampling Biases and Measurement Uncertainties of AIRS-AMSU-A Temperature and Water Vapor Observations Using MERRA Reanalysis

    NASA Technical Reports Server (NTRS)

    Hearty, Thomas J.; Savtchenko, Andrey K.; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In

    2014-01-01

    We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be +/- 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and greater than 30% dry over mid-latitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.

  1. California Dental Hygiene Educators' Perceptions of an Application of the ADHA Advanced Dental Hygiene Practitioner (ADHP) Model in Medical Settings.

    PubMed

    Smith, Lauren; Walsh, Margaret

    2015-12-01

    To assess California dental hygiene educators' perceptions of an application of the American Dental Hygienists' Association's (ADHA) advanced dental hygiene practitioner (ADHP) model in medical settings, where the advanced dental hygiene practitioner collaborates with other health professionals to meet clients' oral health needs. In 2014, 30 directors of California dental hygiene programs were contacted to participate in and distribute an online survey to their faculty. To capture non-respondents, 2 follow-up e-mails were sent. Descriptive statistics and cross-tabulations were computed using the online survey software Qualtrics™. The educator response rate was 18% (70/387). Nearly 90% of respondents supported the proposed application of the ADHA ADHP model and believed it would increase access to care and reduce oral health disparities. They also agreed with most of the proposed services, target populations and workplace settings. Slightly over half believed a master's degree was the appropriate educational level needed. Among California dental hygiene educators responding to this survey, there was strong support for the proposed application of the ADHA model in medical settings. More research is needed among a larger sample of dental hygiene educators and clinicians, as well as among other health professionals such as physicians, nurses and dentists. Copyright © 2015 The American Dental Hygienists’ Association.

  2. Development and Preliminary Performance of a Risk Factor Screen to Predict Posttraumatic Psychological Disorder After Trauma Exposure

    PubMed Central

    Carlson, Eve B.; Palmieri, Patrick A.; Spain, David A.

    2017-01-01

    Objective We examined data from a prospective study of risk factors that increase vulnerability or resilience, exacerbate distress, or foster recovery to determine whether risk factors accurately predict which individuals will later have high posttraumatic (PT) symptom levels and whether brief measures of risk factors also accurately predict later symptom elevations. Method Using data from 129 adults exposed to traumatic injury of self or a loved one, we conducted receiver operating characteristic (ROC) analyses of 14 risk factors assessed by full-length measures, determined optimal cutoff scores and calculated predictive performance for the nine that were most predictive. For five risk factors, we identified sets of items that accounted for 90% of variance in total scores and calculated predictive performance for sets of brief risk measures. Results A set of nine risk factors assessed by full measures identified 89% of those who later had elevated PT symptoms (sensitivity) and 78% of those who did not (specificity). A set of four brief risk factor measures assessed soon after injury identified 86% of those who later had elevated PT symptoms and 72% of those who did not. Conclusions Use of sets of brief risk factor measures shows promise of accurate prediction of PT psychological disorder and probable PTSD or depression. Replication of predictive accuracy is needed in a new and larger sample. PMID:28622811
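
    The ROC cutoff selection described above can be sketched as follows: given risk-factor scores and later symptom status, compute sensitivity and specificity at each candidate cutoff and pick the cutoff maximizing Youden's J (sensitivity + specificity - 1). The criterion and the toy data are illustrative assumptions, not the study's actual procedure or values:

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity when score >= cutoff is called positive.
    labels: 1 = later elevated PT symptoms, 0 = not (hypothetical coding)."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def optimal_cutoff(scores, labels):
    """Cutoff maximizing Youden's J = sensitivity + specificity - 1
    (a common optimality criterion; the abstract does not state which
    criterion the authors used)."""
    return max(set(scores),
               key=lambda c: sum(sens_spec(scores, labels, c)) - 1)

# Toy data: six participants with risk scores 1..6; the top three
# later developed elevated symptoms.
scores = [1, 2, 3, 4, 5, 6]
labels = [0, 0, 0, 1, 1, 1]
cut = optimal_cutoff(scores, labels)  # 4: perfect separation here
```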

  3. Larger benthic foraminifera of the Paleogene Promina Beds (Croatia)

    NASA Astrophysics Data System (ADS)

    Cosovic, V.; Mrinjek, E.; Drobne, K.

    2012-04-01

    In order to add more information about the complex origin of the Promina Beds (traditionally interpreted as the Paleogene molasse of the Dinarides), two sections (Lišani Ostrovački and Ostrovica, Central Dalmatia, Croatia) have been studied in detail. The sampled carbonate sequences contain predominantly coralline red algae, larger benthic foraminifera, and corals. Based on sedimentary textures, nummulitid (Nummulites s.str. and Asterigerina sp.) test shapes, and the associated skeletal components, three types of Middle Eocene (Lutetian to Bartonian) facies were recognized. The Ostrovica section is composed of alternating couplets of marly limestones and marls, several decimeters thick, with great lateral continuity. Two vertically alternating facies are recognized: a Nummulites - Asterigerina facies, in which patchily dispersed large, robust, and partly reworked larger benthic foraminifera constitute 20% and small bioclasts (foraminiferal fragments and whole tests less than 3 mm in diameter) 10% of rock volume; and a Coral - Red algal facies, in which coral fragments of solitary and colonial taxa up to 1 cm in size constitute 5 - 40%, red algae 15 - 60%, and larger benthic foraminifera up to 5% of rock volume. The textural and compositional differences between the facies suggest rhythmic alternation between conditions that characterize the shallower part of the mesophotic zone, with abundant nummulithoclasts, and deeper mesophotic, lime mud-dominated settings where nummulitids with flat tests, coralline red algae, and scleractinian corals are common. The scleractinian corals (comprising up to 20% of rock volume) encrusted by foraminifera (Acervulina, Haddonia and nubeculariids) or coralline red algae, and a foraminiferal assemblage made of orthophragminid and nummulitid tests scattered in matrix, are distributed uniformly throughout the studied Lišani Ostrovački section. In the central part of the section, wavy to smooth thin (< 1 mm) crusts (laminas) alternating with encrusted corals occur.
The characteristics of associated fauna and spatial relationship between corals and laminations indicate that this facies originated in a mid-ramp (shelf) setting.

  4. Decision Making and Learning while Taking Sequential Risks

    ERIC Educational Resources Information Center

    Pleskac, Timothy J.

    2008-01-01

    A sequential risk-taking paradigm used to identify real-world risk takers invokes both learning and decision processes. This article expands the paradigm to a larger class of tasks with different stochastic environments and different learning requirements. Generalizing a Bayesian sequential risk-taking model to the larger set of tasks clarifies…

  5. Tumor suppressor genes are larger than apoptosis-effector genes and have more regions of active chromatin: Connection to a stochastic paradigm for sequential gene expression programs.

    PubMed

    Garcia, Marlene; Mauro, James A; Ramsamooj, Michael; Blanck, George

    2015-08-03

    Apoptosis- and proliferation-effector genes are substantially regulated by the same transactivators, with E2F-1 and Oct-1 being notable examples. The larger proliferation-effector genes have more binding sites for the transactivators that regulate both sets of genes, and proliferation-effector genes have more regions of active chromatin, i.e., DNase I hypersensitive and histone 3, lysine-4 trimethylation sites. Thus, the size differences between the 2 classes of genes suggest a transcriptional regulation paradigm whereby transcription factors that regulate both sets of genes, merely as an aspect of stochastic behavior, accumulate first on the larger proliferation-effector gene "traps," and then on the apoptosis-effector genes, thereby effecting sequential activation of the 2 different gene sets. As IRF-1 and p53 levels increase, tumor suppressor proteins are first activated, followed by the activation of apoptosis-effector genes, for example during S-phase pausing for DNA repair. Tumor suppressor genes are larger than apoptosis-effector genes and have more IRF-1 and p53 binding sites, thereby likewise suggesting a paradigm for transcription sequencing based on stochastic interactions of transcription factors with different gene classes. In this report, using the ENCODE database, we determined that tumor suppressor genes have a greater number of open chromatin regions and histone 3, lysine-4 trimethylation sites, consistent with the idea that a larger gene size can facilitate earlier transcriptional activation via the inclusion of more transactivator binding sites.

  6. Dielectric studies on PEG-LTMS based polymer composites

    NASA Astrophysics Data System (ADS)

    Patil, Ravikumar V.; Praveen, D.; Damle, R.

    2018-02-01

    PEG-LTMS based polymer composites were prepared and studied for the variation of dielectric constant with frequency and temperature, as potential candidates with better dielectric properties. The solution-cast technique was used to prepare polymer composites with five different compositions. The samples show variation in dielectric constant with frequency and temperature: the dielectric constant is large at low frequencies and higher temperatures, and samples with larger space charge show a larger dielectric constant. The highest dielectric constant observed was about 29244, for the PEG25LTMS sample at 100 Hz and 312 K.

  7. Concentration comparison of selected constituents between groundwater samples collected within the Missouri River alluvial aquifer using purge and pump and grab-sampling methods, near the city of Independence, Missouri, 2013

    USGS Publications Warehouse

    Krempa, Heather M.

    2015-10-29

    Relative percent differences between methods were greater than 10 percent for most analyzed trace elements. Barium, cobalt, manganese, and boron had concentrations that differed significantly between sampling methods. Barium, molybdenum, boron, and uranium concentrations indicate a close association between pump and grab samples, based on bivariate plots and simple linear regressions. Grab-sample concentrations were generally larger than pump concentrations for these elements, possibly because a larger pore-size filter was used for grab samples. Analysis of zinc blank samples suggests zinc contamination in filtered grab samples. Variations in analyzed trace elements between pump and grab samples could reduce the ability to monitor temporal changes and potential groundwater contamination threats. The degree of precision necessary for monitoring potential groundwater threats, and the application objectives, need to be considered when determining acceptable levels of variation.
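
    The relative percent difference used above to compare paired pump and grab samples is a standard quality-control statistic: the absolute difference divided by the pair mean, times 100. A minimal sketch with hypothetical concentrations:

```python
def relative_percent_difference(a, b):
    """RPD (%) between paired measurements: |a - b| / mean(a, b) * 100."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

# Hypothetical pair: pump concentration 100 ug/L vs grab concentration
# 112 ug/L for the same well.
rpd = relative_percent_difference(100.0, 112.0)  # ~11.3%, i.e. > 10 percent
```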

  8. Recent advances in quantitative high throughput and high content data analysis.

    PubMed

    Moutsatsos, Ioannis K; Parker, Christian N

    2016-01-01

    High throughput screening has become a basic technique with which to explore biological systems. Advances in technology, including increased screening capacity, as well as methods that generate multiparametric readouts, are driving the need for improvements in the analysis of data sets derived from such screens. This article covers the recent advances in the analysis of high throughput screening data sets from arrayed samples, as well as the recent advances in the analysis of cell-by-cell data sets derived from image or flow cytometry application. Screening multiple genomic reagents targeting any given gene creates additional challenges and so methods that prioritize individual gene targets have been developed. The article reviews many of the open source data analysis methods that are now available and which are helping to define a consensus on the best practices to use when analyzing screening data. As data sets become larger, and more complex, the need for easily accessible data analysis tools will continue to grow. The presentation of such complex data sets, to facilitate quality control monitoring and interpretation of the results will require the development of novel visualizations. In addition, advanced statistical and machine learning algorithms that can help identify patterns, correlations and the best features in massive data sets will be required. The ease of use for these tools will be important, as they will need to be used iteratively by laboratory scientists to improve the outcomes of complex analyses.

  9. A System for Cost and Reimbursement Control in Hospitals

    PubMed Central

    Fetter, Robert B.; Thompson, John D.; Mills, Ronald E.

    1976-01-01

    This paper approaches the design of a regional or statewide hospital rate-setting system as the underpinning of a larger system which permits a regulatory agency to satisfy the requirements of various public laws now on the books or in process. It aims to generate valid interinstitutional monitoring on the three parameters of cost, utilization, and quality review. Such an approach requires the extension of the usual departmental cost and budgeting system to include consideration of the mix of patients treated and the utilization of various resources, including patient days, in the treatment of these patients. A sampling framework for the application of process-based quality studies and the generation of selected performance measurements is also included. PMID:941461

  10. Design, analysis, and interpretation of field quality-control data for water-sampling projects

    USGS Publications Warehouse

    Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.

    2015-01-01

    The report provides extensive information about statistical methods used to analyze quality-control data in order to estimate potential bias and variability in environmental data. These methods include construction of confidence intervals on various statistical measures, such as the mean, percentiles and percentages, and standard deviation. The methods are used to compare quality-control results with the larger set of environmental data in order to determine whether the effects of bias and variability might interfere with interpretation of these data. Examples from published reports are presented to illustrate how the methods are applied, how bias and variability are reported, and how the interpretation of environmental data can be qualified based on the quality-control analysis.
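
    As a minimal sketch of one of the simplest constructions the report covers, a large-sample confidence interval on the mean of quality-control results looks like this. The normal approximation with z = 1.96 for 95% coverage is an assumption of the sketch; the report's own methods may use exact or t-based intervals:

```python
import statistics

def mean_ci_normal(xs, z=1.96):
    """Approximate two-sided 95% CI on the mean via the normal
    approximation: mean +/- z * (sample std / sqrt(n))."""
    m = statistics.fmean(xs)
    se = statistics.stdev(xs) / len(xs) ** 0.5
    return m - z * se, m + z * se

# Hypothetical replicate QC measurements:
lo, hi = mean_ci_normal([1.0, 2.0, 3.0, 4.0, 5.0])
```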

  11. Favoured local structures in liquids and solids: a 3D lattice model.

    PubMed

    Ronceray, Pierre; Harrowell, Peter

    2015-05-07

    We investigate the connection between the geometry of Favoured Local Structures (FLS) in liquids and the associated liquid and solid properties. We introduce a lattice spin model - the FLS model on a face-centered cubic lattice - where this geometry can be arbitrarily chosen among a discrete set of 115 possible FLS. We find crystalline groundstates for all choices of a single FLS. Sampling all possible FLS's, we identify the following trends: (i) low symmetry FLS's produce larger crystal unit cells but not necessarily higher energy groundstates, (ii) chiral FLS's exhibit peculiarly poor packing properties, (iii) accumulation of FLS's in supercooled liquids is linked to large crystal unit cells, and (iv) low symmetry FLS's tend to find metastable structures on cooling.

  12. Racial-ethnic disparities in health and the labor market: Losing and leaving jobs.

    PubMed

    Strully, Kate

    2009-09-01

    This study examines whether employment disruptions have varying health consequences for White and Black or Hispanic workers in the U.S. Since employment disruptions mark major shocks to socioeconomic status (SES), this analysis also speaks to a broader set of questions about how race/ethnicity and SES shape population-level health disparities. Data from 1999, 2001 and 2003 waves of the U.S. Panel Study of Income Dynamics provide no evidence of racial/ethnic variation in the health consequences of involuntary job loss. However, associations between leaving jobs voluntarily and poor self-assessed health are larger for Black and Hispanic workers than for White workers. This pattern may be linked to downward occupational mobility within the Black and Hispanic sample.

  13. Future time perspective and positive health practices in young adults: an extension.

    PubMed

    Mahon, N E; Yarcheski, T J; Yarcheski, A

    1997-06-01

    A sample of 69 young adults attending a public university responded to the Future Time Perspective Inventory, two subscales of the Time Experience Scales (Fast and Slow Tempo), and the Personal Lifestyle Questionnaire in classroom settings. A statistically significant correlation (.52) was found between scores for future time perspective and the ratings for the practice of positive health behaviors in young adults. This correlation was larger than those previously found for middle and late adolescents. Scores on subscales of individual health practices and future time perspective indicated statistically significant correlations for five (.25 to .56) of the six subscales. Scores on neither Fast nor Slow Tempo were related to ratings of positive health practices or ratings on subscales measuring positive health practices.

  14. Brief Report: Applying an Indicator Set to Survey the Health of People with Intellectual Disabilities in Europe

    ERIC Educational Resources Information Center

    Walsh, Patricia Noonan

    2008-01-01

    This report gives an account of applying a health survey tool by the "Pomona" Group that earlier documented the process of developing a set of health indicators for people with intellectual disabilities in Europe. The "Pomona" health indicator set mirrors the much larger set of health indicators prepared by the European…

  15. A Thousand Fly Genomes: An Expanded Drosophila Genome Nexus.

    PubMed

    Lack, Justin B; Lange, Jeremy D; Tang, Alison D; Corbett-Detig, Russell B; Pool, John E

    2016-12-01

    The Drosophila Genome Nexus is a population genomic resource that provides D. melanogaster genomes from multiple sources. To facilitate comparisons across data sets, genomes are aligned using a common reference alignment pipeline which involves two rounds of mapping. Regions of residual heterozygosity, identity-by-descent, and recent population admixture are annotated to enable data filtering based on the user's needs. Here, we present a significant expansion of the Drosophila Genome Nexus, which brings the current data object to a total of 1,121 wild-derived genomes. New additions include 305 previously unpublished genomes from inbred lines representing six population samples in Egypt, Ethiopia, France, and South Africa, along with another 193 genomes added from recently published data sets. We also provide an aligned D. simulans genome to facilitate divergence comparisons. This improved resource will broaden the range of population genomic questions that can be addressed from multi-population allele frequencies and haplotypes in this model species. The larger set of genomes will also enhance the discovery of functionally relevant natural variation that exists within and between populations. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  16. Photoanthropometric face iridial proportions for age estimation: An investigation using features selected via a joint mutual information criterion.

    PubMed

    Borges, Díbio L; Vidal, Flávio B; Flores, Marta R P; Melani, Rodolfo F H; Guimarães, Marco A; Machado, Carlos E P

    2018-03-01

    Age assessment from images is of high interest in the forensic community because of the need for formal protocols to identify child pornography, missing children, and abuse cases in which visual evidence is often the main admissible material. Recently, photoanthropometric methods have been found useful for age estimation, correlating facial proportions in image databases with samples from certain age groups. Notwithstanding these advances, new facial features and further analysis are needed to improve accuracy and establish broader applicability. In this investigation, frontal images of 1000 individuals (500 females, 500 males), equally distributed across five age groups (6, 10, 14, 18, 22 years old), were used in a 10-fold cross-validated experiment for three age-threshold classifications (<10, <14, <18 years old). A set of 40 novel features, based on the relation between landmark distances and the iris diameter, is proposed, and joint mutual information is used to select the most relevant and complementary features for the classification task. In a civil image identification database with diverse ancestry, receiver operating characteristic (ROC) curves were plotted to verify accuracy, and the resulting AUCs reached 0.971, 0.969, and 0.903 for the age classifications (<10, <14, <18 years old), respectively. These results add support to continuing research in age assessment from images using the metric approach. Still, larger samples are necessary to evaluate reliability under more extensive conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
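
    The AUC values reported above can be computed without plotting the ROC curve: AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney formulation). A minimal sketch with hypothetical classifier scores:

```python
def auc(scores_pos, scores_neg):
    """AUC as the rank-sum probability that a positive case outscores a
    negative one, counting ties as 0.5 (equivalent to the area under
    the ROC curve)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical scores from an "<18 years old" classifier:
a = auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])  # 8/9
```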

  17. Geometric morphometric footprint analysis of young women

    PubMed Central

    2013-01-01

    Background Most published attempts to quantify footprint shape are based on a small number of measurements. We applied geometric morphometric methods to study shape variation of the complete footprint outline in a sample of 83 adult women. Methods The outline of the footprint, including the toes, was represented by a comprehensive set of 85 landmarks and semilandmarks. Shape coordinates were computed by Generalized Procrustes Analysis. Results The first four principal components represented the major axes of variation in foot morphology: low-arched versus high-arched feet, long and narrow versus short and wide feet, the relative length of the hallux, and the relative length of the forefoot. These shape features varied across the measured individuals without any distinct clusters or discrete types of footprint shape. A high body mass index (BMI) was associated with wide and flat feet, and a high frequency of wearing high-heeled shoes was associated with a larger forefoot area of the footprint and a relatively long hallux. Larger feet had an increased length-to-width ratio of the footprint, a lower-arched foot, and longer toes relative to the remaining foot. Footprint shape differed on average between left and right feet, and the variability of footprint asymmetry increased with BMI. Conclusions Foot shape is affected by lifestyle factors even in a sample of young women (median age 23 years). Geometric morphometrics proved to be a powerful tool for the detailed analysis of footprint shape that is applicable in various scientific disciplines, including forensics, orthopedics, and footwear design. PMID:23886074

  18. Habitat fragmentation effects on birds in grasslands and wetlands: A critique of our knowledge

    USGS Publications Warehouse

    Johnson, D.H.

    2001-01-01

    Habitat fragmentation exacerbates the problem of habitat loss for grassland and wetland birds. Remaining patches of grasslands and wetlands may be too small, too isolated, and too influenced by edge effects to maintain viable populations of some breeding birds. Knowledge of the effects of fragmentation on bird populations is critically important for decisions about reserve design, grassland and wetland management, and implementation of cropland set-aside programs that benefit wildlife. In my review of research that has been conducted on habitat fragmentation, I found at least five common problems in the methodology used. The results of many studies are compromised by these problems: passive sampling (sampling larger areas in larger patches), confounding effects of habitat heterogeneity, consequences of inappropriate pooling of data from different species, artifacts associated with artificial nest data, and definition of actual habitat patches. As expected, some large-bodied birds with large territorial requirements, such as the northern harrier (Circus cyaneus), appear area sensitive. In addition, some small species of grassland birds favor patches of habitat far in excess of their territory size, including the Savannah (Passerculus sandwichensis), grasshopper (Ammodramus savannarum) and Henslow's (A. henslowii) sparrows, and the bobolink (Dolichonyx oryzivorus). Other species may be area sensitive as well, but the data are ambiguous. Area sensitivity among wetland birds remains unknown since virtually no studies have been based on solid methodologies. We need further research on grassland bird response to habitat that distinguishes supportable conclusions from those that may be artifactual.

  19. Evaluation of 4D-CT lung registration.

    PubMed

    Kabus, Sven; Klinder, Tobias; Murphy, Keelin; van Ginneken, Bram; van Lorenz, Cristian; Pluim, Josien P W

    2009-01-01

    Non-rigid registration accuracy assessment is typically performed by evaluating the target registration error at manually placed landmarks. For 4D-CT lung data, we compare two sets of landmark distributions: a smaller set primarily defined on vessel bifurcations as commonly described in the literature and a larger set being well-distributed throughout the lung volume. For six different registration schemes (three in-house schemes and three schemes frequently used by the community) the landmark error is evaluated and found to depend significantly on the distribution of the landmarks. In particular, lung regions near to the pleura show a target registration error three times larger than near-mediastinal regions. While the inter-method variability on the landmark positions is rather small, the methods show discriminating differences with respect to consistency and local volume change. In conclusion, both a well-distributed set of landmarks and a deformation vector field analysis are necessary for reliable non-rigid registration accuracy assessment.

  20. Moderation of effects of AAC based on setting and types of aided AAC on outcome variables: an aggregate study of single-case research with individuals with ASD.

    PubMed

    Ganz, Jennifer B; Rispoli, Mandy J; Mason, Rose Ann; Hong, Ee Rea

    2014-06-01

    The purpose of this meta-analysis was to evaluate the potential moderating effects of intervention setting and type of aided augmentative and alternative communication (AAC) on outcome variables for students with autism spectrum disorders. Improvement rate difference, an effect size measure, was used to calculate aggregate effects across 35 single-case research studies. Results indicated that the largest effects for aided AAC were observed in general education settings. With respect to communication outcomes, both speech generating devices (SGDs) and the Picture Exchange Communication System (PECS) were associated with larger effects than other picture-based systems. With respect to challenging behaviour outcomes, SGDs produced larger effects than PECS. This aggregate study highlights the importance of considering intervention setting, choice of AAC system and target outcomes when designing and planning an aided AAC intervention.
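
    Improvement rate difference, the effect size aggregated above, is the improvement rate of the treatment phase minus that of the baseline phase. A simplified sketch for one A-B contrast with higher scores indicating improvement (the published metric removes the minimum number of overlapping data points; the plain nonoverlap convention used here is an assumption of this illustration):

```python
def improvement_rate_difference(baseline, treatment):
    """Simplified IRD for one A-B comparison (higher = improvement).

    A treatment point counts as 'improved' if it exceeds every baseline
    point; a baseline point counts as 'improved' if it reaches the
    treatment range (simplified nonoverlap convention, see lead-in).
    """
    b_max = max(baseline)
    t_min = min(treatment)
    ir_treatment = sum(t > b_max for t in treatment) / len(treatment)
    ir_baseline = sum(b >= t_min for b in baseline) / len(baseline)
    return ir_treatment - ir_baseline
```

    Complete nonoverlap of the two phases yields the maximum IRD of 1.0; aggregate effects are then obtained by combining IRDs across contrasts and studies.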

  1. A study of aerosol entrapment and the influence of wind speed, chamber design and foam density on polyurethane foam passive air samplers used for persistent organic pollutants.

    PubMed

    Chaemfa, Chakra; Wild, Edward; Davison, Brian; Barber, Jonathan L; Jones, Kevin C

    2009-06-01

    Polyurethane foam disks are a cheap and versatile tool for sampling persistent organic pollutants (POPs) from the air in ambient, occupational and indoor settings. This study provides important background information on the ways in which the performance of these commonly used passive air samplers may be influenced by the key environmental variables of wind speed and aerosol entrapment. Studies were performed in the field, in a wind tunnel and with microscopy techniques, to investigate the influence of deployment conditions and foam density on gas-phase sampling rates (not obtained in this study) and aerosol trapping. The study showed: wind speed inside the sampler is greater on the upper side of the sampling disk than the lower side, and tethered samplers have higher wind speeds across the upper and lower surfaces of the foam disk at a wind speed ≥4 m/s; particles are trapped on the foam surface and within the body of the foam disk; fine (<1 µm) particles can form clusters of larger size inside the foam matrix. Whilst primarily designed to sample gas-phase POPs, entrapment of particles ensures some 'sampling' of particle-bound POP species, such as higher molecular weight PAHs and PCDD/Fs. Further work is required to investigate how quantitative such entrapment or 'sampling' is under different ambient conditions, and with different aerosol sizes and types.

  2. Implications of High Molecular Divergence of Nuclear rRNA and Phylogenetic Structure for the Dinoflagellate Prorocentrum (Dinophyceae, Prorocentrales).

    PubMed

    Boopathi, Thangavelu; Faria, Daphne Georgina; Cheon, Ju-Yong; Youn, Seok Hyun; Ki, Jang-Seu

    2015-01-01

    The small and large nuclear subunit molecular phylogeny of the genus Prorocentrum demonstrated that the species are dichotomized into two clades. These two clades were significantly different (one-factor ANOVA, p < 0.01), with patterns compatible for both small and large subunit Bayesian phylogenetic trees and for a larger taxon-sampled dinoflagellate phylogeny. Evaluation of the molecular divergence levels showed that intraspecies genetic variations were significantly lower (t-test, p < 0.05) than interspecies variations (> 2.9% and > 26.8% dissimilarity in the small and large subunit [D1/D2], respectively). Based on the calculated molecular divergence, the genus comprises two genetically distinct groups that should be considered as two separate genera, thereby setting the pace for major systematic changes for the genus Prorocentrum sensu Dodge. Moreover, the information presented in this study should be useful for improving species identification and for detecting novel clades in environmental samples. © 2015 The Author(s) Journal of Eukaryotic Microbiology © 2015 International Society of Protistologists.

  3. Psychological vulnerability, burnout, and coping among employees of a business process outsourcing organization.

    PubMed

    Machado, Tanya; Sathyanarayanan, Vidya; Bhola, Poornima; Kamath, Kirthi

    2013-01-01

    The business process outsourcing (BPO) sector is a contemporary work setting in India, with a large and relatively young workforce. There is concern that the demands of the work environment may contribute to stress levels and psychological vulnerability among employees as well as to high attrition levels. As part of a larger study, questionnaires were used to assess psychological distress, burnout, and coping strategies in a sample of 1,209 employees of a BPO organization. The analysis indicated that 38% of the sample had significant psychological distress on the General Health Questionnaire (GHQ-28; Goldberg and Hillier, 1979). The vulnerable groups were women, permanent employees, data processors, and those employed for 6 months or longer. The reported levels of burnout were low and the employees reported a fairly large repertoire of coping behaviors. The study has implications for individual and systemic efforts at employee stress management and workplace prevention approaches. The results point to the emerging and growing role of mental health professionals in the corporate sector.

  4. Meta-analytic evidence of low convergence between implicit and explicit measures of the needs for achievement, affiliation, and power

    PubMed Central

    Köllner, Martin G.; Schultheiss, Oliver C.

    2014-01-01

    The correlation between implicit and explicit motive measures and potential moderators of this relationship were examined meta-analytically, using Hunter and Schmidt's (2004) approach. Studies from a comprehensive search in PsycINFO, data sets of our research group, a literature list compiled by an expert, and the results of a request for gray literature were examined for relevance and coded. Analyses were based on 49 papers, 56 independent samples, 6151 subjects, and 167 correlations. The correlations (ρ) between implicit and explicit measures were 0.130 (CI: 0.077–0.183) for the overall relationship, 0.116 (CI: 0.050–0.182) for affiliation, 0.139 (CI: 0.080–0.198) for achievement, and 0.038 (CI: −0.055–0.131) for power. Participant age did not moderate the size of these relationships. However, a greater proportion of males in the samples and an earlier publication year were associated with larger effect sizes. PMID:25152741
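
    The Hunter and Schmidt (2004) approach starts from a sample-size-weighted mean of the observed study correlations; a bare-bones sketch of that first step (the function name is illustrative, and the full method additionally corrects the observed variance for sampling error and measurement artifacts):

```python
def weighted_mean_r(rs, ns):
    """Sample-size-weighted mean correlation across k studies:
    r_bar = sum(n_i * r_i) / sum(n_i)."""
    return sum(n * r for r, n in zip(rs, ns)) / sum(ns)
```

    Weighting by n gives large samples more influence, since their observed correlations carry less sampling error.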

  5. Examining the Efficacy of HIV Risk-Reduction Counseling on the Sexual Risk Behaviors of a National Sample of Drug Abuse Treatment Clients: Analysis of Subgroups

    PubMed Central

    Metsch, Lisa R.; Pereyra, Margaret R.; Malotte, C. Kevin; Haynes, Louise F.; Douaihy, Antoine; Chally, Jack; Mandler, Raul N.; Feaster, Daniel J.

    2016-01-01

    HIV counseling with testing has been part of HIV prevention in the U.S. since the 1980s. Despite the long-standing history of HIV testing with prevention counseling, the CDC released HIV testing recommendations for health care settings contesting benefits of prevention counseling with testing in reducing sexual risk behaviors among HIV-negatives in 2006. Efficacy of brief HIV risk-reduction counseling (RRC) in decreasing sexual risk among subgroups of substance use treatment clients was examined using multisite RCT data. Interaction tests between RRC and subgroups were performed; multivariable regression evaluated the relationship between RRC (with rapid testing) and sex risk. Subgroups were defined by demographics, risk type and level, attitudes/perceptions, and behavioral history. There was an effect (p < .0028) of counseling on number of sex partners among some subgroups. Certain subgroups may benefit from HIV RRC; this should be examined in studies with larger sample sizes, designed to assess the specific subgroup(s). PMID:26837631

  6. Fully automated spectrometric protocols for determination of antioxidant activity: advantages and disadvantages.

    PubMed

    Sochor, Jiri; Ryvolova, Marketa; Krystofova, Olga; Salas, Petr; Hubalek, Jaromir; Adam, Vojtech; Trnkova, Libuse; Havel, Ladislav; Beklova, Miroslava; Zehnalek, Josef; Provaznik, Ivo; Kizek, Rene

    2010-11-29

    The aim of this study was to describe the behaviour, kinetics, time courses and limitations of six different fully automated spectrometric methods--DPPH, TEAC, FRAP, DMPD, Free Radicals and Blue CrO5. Absorption curves were measured and absorbance maxima were found. All methods were calibrated using the standard compounds Trolox® and/or gallic acid. Calibration curves were determined (relative standard deviation was within the range from 1.5 to 2.5%). The obtained characteristics were compared and discussed. Moreover, the data obtained were used to optimize and automate all of the mentioned protocols. The automatic analyzer allowed us to analyse a larger set of samples simultaneously, to decrease the measurement time, to eliminate errors and to provide data of higher quality in comparison to manual analysis. The total time of analysis for one sample was decreased to 10 min for all six methods. By contrast, the total time of manual spectrometric determination was approximately 120 min. The obtained data showed good correlations between the studied methods (R=0.97-0.99).

  7. Intergenerational transmission of attachment for infants raised in a prison nursery.

    PubMed

    Byrne, M W; Goshin, L S; Joestl, S S

    2010-07-01

    Within a larger intervention study, attachment was assessed with the Strange Situation Procedure for 30 infants who co-resided with their mothers in a prison nursery. Sixty percent of infants were classified secure, 75% who co-resided a year or more and 43% who co-resided less than a year, all within the range of normative community samples. The year-long co-residing group had significantly more secure and fewer disorganized infants than predicted by their mothers' attachment status, measured by the Adult Attachment Interview, and a significantly greater proportion of secure infants than meta-analyzed community samples of mothers with low income, depression, or drug/alcohol abuse. Using intergenerational data collected with rigorous methods, this study provides the first evidence that mothers in a prison nursery setting can raise infants who are securely attached to them at rates comparable to healthy community children, even when the mother's own internal attachment representation has been categorized as insecure.

  9. A consensus prognostic gene expression classifier for ER positive breast cancer

    PubMed Central

    Teschendorff, Andrew E; Naderi, Ali; Barbosa-Morais, Nuno L; Pinder, Sarah E; Ellis, Ian O; Aparicio, Sam; Brenton, James D; Caldas, Carlos

    2006-01-01

    Background A consensus prognostic gene expression classifier is still elusive in heterogeneous diseases such as breast cancer. Results Here we perform a combined analysis of three major breast cancer microarray data sets to home in on a universally valid prognostic molecular classifier in estrogen receptor (ER) positive tumors. Using a recently developed robust measure of prognostic separation, we further validate the prognostic classifier in three external independent cohorts, confirming the validity of our molecular classifier in a total of 877 ER positive samples. Furthermore, we find that molecular classifiers may not outperform classical prognostic indices but that they can be used in hybrid molecular-pathological classification schemes to improve prognostic separation. Conclusion The prognostic molecular classifier presented here is the first to be valid in over 877 ER positive breast cancer samples and across three different microarray platforms. Larger multi-institutional studies will be needed to fully determine the added prognostic value of molecular classifiers when combined with standard prognostic factors. PMID:17076897

  10. Effects of bottom fishing on the benthic megafauna of Georges Bank

    USGS Publications Warehouse

    Collie, J.S.; Escanero, G.A.; Valentine, P.C.

    1997-01-01

    This study addresses ongoing concerns over the effects of mobile fishing gear on benthic communities. Using side-scan sonar, bottom photographs and fishing records, we identified a set of disturbed and undisturbed sites on the gravel pavement area of northern Georges Bank in the northwest Atlantic. Replicate samples of the megafauna were collected with a 1 m Naturalists' dredge on 2 cruises in 1994. Compared with the disturbed sites, the undisturbed sites had higher numbers of organisms, biomass, species richness and species diversity; evenness was higher at the disturbed sites. Undisturbed sites were characterized by an abundance of bushy epifaunal taxa (bryozoans, hydroids, worm tubes) that provide a complex habitat for shrimps, polychaetes, brittle stars, mussels and small fish. Disturbed sites were dominated by larger, hard-shelled molluscs, and scavenging crabs and echinoderms. Many of the megafaunal species in our samples have also been identified in stomach contents of demersal fish on Georges Bank; the abundances of at least some of these species were reduced at the disturbed sites.

  11. A comprehensive and scalable database search system for metaproteomics.

    PubMed

    Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W

    2016-08-16

    Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations. Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed "Blazmass") to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (~500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples. The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. 
The protein database, proteomic search engine, and the proteomic data files for the 5 microbiome samples characterized and discussed herein are open source and available for use and additional analysis.

  12. Spontaneous Analog Number Representations in 3-Year-Old Children

    ERIC Educational Resources Information Center

    Cantlon, Jessica F.; Safford, Kelley E.; Brannon, Elizabeth M.

    2010-01-01

    When enumerating small sets of elements nonverbally, human infants often show a set-size limitation whereby they are unable to represent sets larger than three elements. This finding has been interpreted as evidence that infants spontaneously represent small numbers with an object-file system instead of an analog magnitude system (Feigenson,…

  13. Microbial alteration of normal alkane δ13C and δD in sedimentary archives

    NASA Astrophysics Data System (ADS)

    Brittingham, A.; Hren, M. T.; Hartman, G.

    2016-12-01

    Long-carbon chain normal alkanes (e.g. C25-C33) are produced by a wide range of terrestrial plants and commonly preserved in ancient sediments. These serve as a potential paleoclimate proxy because their hydrogen (δD) and carbon (δ13C) isotope values reflect the combined effect of plant-specific species effects and responses to environmental conditions. While these are commonly believed to remain unaltered at low burial temperatures (e.g. <150°C), there is still uncertainty around the role microbes play during the breakdown of these compounds in stored sediment and the potential risk for isotopic alteration. We analyzed two sets of identical samples to assess the role of microbial and other degradation process on the hydrogen and carbon isotope composition of these compounds. The first set of sediment samples were collected in the summer of 2011 from central Armenia, a region with continental climate, and allowed to sit in sealed bags at room temperature for three years. A second and identical set was collected in 2014 and frozen immediately. Stored samples showed high amounts of medium chain length n-alkanes (C19-C26), produced by microorganisms, which were absent from the samples that were collected in 2014 and frozen immediately after sampling. Along with the presence of medium chain length n-alkanes, the average chain length of n-alkanes from C25-C33 decreased significantly in all 2011 samples. Storage of the samples over three years resulted in altered δD and δ13C values of C29 and C31 n-alkanes. While δD values were heavier relative to the control by 4-25‰, δ13C values were mostly lighter (maximum change of -4.2‰ in C29 and -2.9‰ in C31). DNA analysis of the soil showed Rhodococcus and Aeromicrobium, genera that contain multiple coding regions for alkane degrading enzymes CYP153 and AlkB, increased by an order of magnitude during sample storage (from 0.7% to 7.5% of bacteria present). 
The proliferation of alkane-degrading bacteria, combined with the large changes in long-chain n-alkane isotope values, suggests that bacteria may play a larger role than previously expected in altering the measured δD and δ13C values of long-chain n-alkanes during storage. This poses a potentially significant issue for all manner of samples that are not stored frozen, including a variety of sedimentary cores.

  14. Evaluation of artificial neural network algorithms for predicting METs and activity type from accelerometer data: validation on an independent sample.

    PubMed

    Freedson, Patty S; Lyden, Kate; Kozey-Keadle, Sarah; Staudenmayer, John

    2011-12-01

    Previous work from our laboratory provided a "proof of concept" for use of artificial neural networks (nnets) to estimate metabolic equivalents (METs) and identify activity type from accelerometer data (Staudenmayer J, Pober D, Crouter S, Bassett D, Freedson P, J Appl Physiol 107: 1300-1307, 2009). The purpose of this study was to develop new nnets based on a larger, more diverse training data set and to apply these nnet prediction models to an independent sample to evaluate the robustness and flexibility of this machine-learning modeling technique. The nnet training data set (University of Massachusetts) included 277 participants who each completed 11 activities. The independent validation sample (n = 65) (University of Tennessee) completed one of three activity routines. Criterion measures were 1) measured METs assessed using open-circuit indirect calorimetry; and 2) observed activity to identify activity type. The nnet input variables included five accelerometer count distribution features and the lag-1 autocorrelation. The bias and root mean square errors for the nnet MET model trained on the University of Massachusetts data and applied to the University of Tennessee sample were +0.32 and 1.90 METs, respectively. Seventy-seven percent of the activities were correctly classified as sedentary/light, moderate, or vigorous intensity. For activity type, household and locomotion activities were correctly classified by the nnet activity type model 98.1 and 89.5% of the time, respectively, and sport was correctly classified 23.7% of the time. This machine-learning technique performs reasonably well when applied to an independent sample. We propose the creation of an open-access activity dictionary, including accelerometer data from a broad array of activities, leading to further improvements in prediction accuracy for METs, activity intensity, and activity type.
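
    The bias and root mean square error reported above are the mean signed error and the square root of the mean squared error between predicted and measured METs; for reference, a minimal sketch (names are illustrative):

```python
import math

def bias_and_rmse(predicted, measured):
    """Bias (mean signed error) and RMSE of predicted vs. measured values."""
    errors = [p - m for p, m in zip(predicted, measured)]
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return bias, rmse
```

    A positive bias, such as the +0.32 METs above, indicates systematic overestimation; RMSE additionally penalizes scatter around that bias.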

  15. Statistical Searches for Microlensing Events in Large, Non-uniformly Sampled Time-Domain Surveys: A Test Using Palomar Transient Factory Data

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.; Agüeros, Marcel A.; Fournier, Amanda P.; Street, Rachel; Ofek, Eran O.; Covey, Kevin R.; Levitan, David; Laher, Russ R.; Sesar, Branimir; Surace, Jason

    2014-01-01

    Many photometric time-domain surveys are driven by specific goals, such as searches for supernovae or transiting exoplanets, which set the cadence with which fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several sub-surveys are conducted in parallel, leading to non-uniform sampling over its ~20,000 deg² footprint. While the median 7.26 deg² PTF field has been imaged ~40 times in the R band, ~2300 deg² have been observed >100 times. We use PTF data to study the trade-off between searching for microlensing events in a survey whose footprint is much larger than that of typical microlensing searches, but with far-from-optimal time sampling. To examine the probability that microlensing events can be recovered in these data, we test statistics used on uniformly sampled data to identify variables and transients. We find that the von Neumann ratio performs best for identifying simulated microlensing events in our data. We develop a selection method using this statistic and apply it to data from fields with >10 R-band observations, 1.1 × 10⁹ light curves, uncovering three candidate microlensing events. We lack simultaneous, multi-color photometry to confirm these as microlensing events. However, their number is consistent with predictions for the event rate in the PTF footprint over the survey's three years of operations, as estimated from near-field microlensing models. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large data sets, which will be useful to future time-domain surveys, such as that planned with the Large Synoptic Survey Telescope.
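
    The von Neumann ratio used above compares point-to-point scatter with overall scatter: uncorrelated noise gives a value near 2, while a smooth excursion such as a microlensing event drives it well below 2. A sketch under one common convention (normalizations vary across the literature):

```python
import numpy as np

def von_neumann_ratio(flux):
    """von Neumann ratio of a time-ordered light curve:
    mean squared successive difference divided by the variance."""
    flux = np.asarray(flux, dtype=float)
    mean_sq_diff = np.sum(np.diff(flux) ** 2) / (len(flux) - 1)
    return float(mean_sq_diff / np.var(flux))
```

    An alternating series scores high (rapid point-to-point changes), while a slow monotonic drift scores low, which is why low values flag smooth, correlated brightening.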

  16. Four-way-leaning test shows larger limits of stability than a circular-leaning test.

    PubMed

    Thomsen, Mikkel Højgaard; Støttrup, Nicolai; Larsen, Frederik Greve; Pedersen, Ann-Marie Sydow Krogh; Poulsen, Anne Grove; Hirata, Rogerio Pessoto

    2017-01-01

    Limits of stability (LOS) have extensive clinical and rehabilitational value, yet no standard consensus on measuring LOS exists. LOS measured using a leaning or a circling protocol is commonly used in research and clinical settings; however, differences in protocols and reliability problems exist. This study measured LOS using a four-way-leaning test and a circular-leaning test to determine which yielded larger LOS measurements. Furthermore, the number of adaptation trials needed for consistent results was assessed. Limits of stability were measured using a force plate (Metitur Good Balance System®) sampling at 50 Hz. Thirty healthy subjects completed 30 trials assessing LOS, alternating between the four-way-leaning test and the circular-leaning test. A main effect of method was found (ANOVA: F(1,28)=45.86, P<0.01), with the four-way-leaning test showing larger values than the circular-leaning test (NK, P<0.01). An interaction between method and direction was also found (ANOVA: F(3,84)=24.87, P<0.01). The four-way-leaning test showed larger LOS in the anterior (NK, P<0.05), right (NK, P<0.01) and left (NK, P<0.01) directions. Analysis of LOS for the four-way-leaning test showed a difference between trials (ANOVA: F(14,392)=7.81, P<0.01). Differences were found between trials 1 and 7 (NK, P<0.03), trials 6 and 8 (NK, P<0.02) and trials 7 and 15 (NK, P<0.02). The four-way-leaning test showed high correlation (ICC>0.87) between the first and second trials for all directions. The four-way-leaning test yields larger LOS in the anterior, right and left directions, making it more reliable for measuring LOS. A learning effect was found up to the 8th trial, which suggests using 8 adaptation trials before reliable LOS is measured. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. The effects of forest conversion to oil palm on ground-foraging ant communities depend on beta diversity and sampling grain.

    PubMed

    Wang, Wendy Y; Foster, William A

    2015-08-01

    Beta diversity - the variation in species composition among spatially discrete communities - and sampling grain - the size of samples being compared - may alter our perspectives of diversity within and between landscapes before and after agricultural conversion. Such assumptions are usually based on point comparisons, which do not accurately capture actual differences in total diversity. Beta diversity is often not rigorously examined. We investigated the beta diversity of ground-foraging ant communities in fragmented oil palm and forest landscapes in Sabah, Malaysia, using diversity metrics transformed from Hill number equivalents to remove dependences on alpha diversity. We compared the beta diversities of oil palm and forest, across three hierarchically nested sampling grains. We found that oil palm and forest communities had a greater percentage of total shared species when larger samples were compared. Across all grains and disregarding relative abundances, there was higher beta diversity of all species among forest communities. However, there were higher beta diversities of common and very abundant (dominant) species in oil palm as compared to forests. Differences in beta diversities between oil palm and forest were greatest at the largest sampling grain. Larger sampling grains in oil palm may generate bigger species pools, increasing the probability of shared species with forest samples. Greater beta diversity of all species in forest may be attributed to rare species. Oil palm communities may be more heterogeneous in common and dominant species because of variable community assembly events. Rare and also common species are better captured at larger grains, boosting differences in beta diversity between larger samples of forest and oil palm communities. Although agricultural landscapes support a lower total diversity than natural forests, diversity especially of abundant species is still important for maintaining ecosystem stability. 
Diversity in agricultural landscapes may be greater than expected when beta diversity is accounted for at large spatial scales.
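
    The Hill-number equivalents on which the transformed diversity metrics above are based express diversity of order q as an effective species count (q=0 richness, q=1 the exponential of Shannon entropy, q=2 inverse Simpson), with multiplicative beta as gamma over alpha. A simplified sketch (pooled gamma and an equal-weight mean alpha are assumptions of this illustration; the paper's metrics additionally remove the dependence on alpha diversity):

```python
import math

def hill_number(abundances, q):
    """Hill number (effective number of species) of order q."""
    total = sum(abundances)
    ps = [a / total for a in abundances if a > 0]
    if q == 1:  # limit case: exponential of Shannon entropy
        return math.exp(-sum(p * math.log(p) for p in ps))
    return sum(p ** q for p in ps) ** (1.0 / (1.0 - q))

def beta_diversity(communities, q):
    """Multiplicative beta = gamma / mean alpha, each as a Hill number.

    `communities` is a list of abundance lists over a shared species order.
    """
    gamma = hill_number([sum(col) for col in zip(*communities)], q)
    alpha = sum(hill_number(c, q) for c in communities) / len(communities)
    return gamma / alpha
```

    Raising q down-weights rare species, which is how beta diversity of "common" and "dominant" species can differ between oil palm and forest even when total beta diversity does not.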

  18. Effects of a structured 20-session slow-cortical-potential-based neurofeedback program on attentional performance in children and adolescents with attention-deficit hyperactivity disorder: retrospective analysis of an open-label pilot-approach and 6-month follow-up.

    PubMed

    Albrecht, Johanna S; Bubenzer-Busch, Sarah; Gallien, Anne; Knospe, Eva Lotte; Gaber, Tilman J; Zepf, Florian D

    2017-01-01

    The aim of this approach was to conduct a structured electroencephalography-based neurofeedback training program for children and adolescents with attention-deficit hyperactivity disorder (ADHD) using slow cortical potentials with an intensive first (almost daily sessions) and second phase of training (two sessions per week) and to assess aspects of attentional performance. A total of 24 young patients with ADHD participated in the 20-session training program. During phase I of training (2 weeks, 10 sessions), participants were trained on weekdays. During phase II, neurofeedback training occurred twice per week (5 weeks). The patients' inattention problems were measured at three assessment time points before (pre, T0) and after (post, T1) the training and at a 6-month follow-up (T2); the assessments included neuropsychological tests (Alertness and Divided Attention subtests of the Test for Attentional Performance; Sustained Attention Dots and Shifting Attentional Set subtests of the Amsterdam Neuropsychological Test) and questionnaire data (inattention subscales of the so-called Fremdbeurteilungsbogen für Hyperkinetische Störungen and Child Behavior Checklist/4-18 [CBCL/4-18]). All data were analyzed retrospectively. The mean auditive reaction time in a Divided Attention task decreased significantly from T0 to T1 (medium effect), which was persistent over time and also found for a T0-T2 comparison (larger effects). In the Sustained Attention Dots task, the mean reaction time was reduced from T0-T1 and T1-T2 (small effects), whereas in the Shifting Attentional Set task, patients were able to increase the number of trials from T1-T2 and significantly diminished the number of errors (T1-T2 & T0-T2, large effects). First positive but very small effects and preliminary results regarding different parameters of attentional performance were detected in young individuals with ADHD. 
The limitations of the obtained preliminary data are the rather small sample size, the lack of a control group/a placebo condition and the open-label approach because of the clinical setting and retrospective analysis. The value of the current approach lies in providing pilot data for future studies involving larger samples.

  19. Embarking on large-scale qualitative research: reaping the benefits of mixed methods in studying youth, clubs and drugs

    PubMed Central

    Hunt, Geoffrey; Moloney, Molly; Fazio, Adam

    2012-01-01

    Qualitative research is often conceptualized as inherently small-scale research, primarily conducted by a lone researcher enmeshed in extensive and long-term fieldwork or involving in-depth interviews with a small sample of 20 to 30 participants. In the study of illicit drugs, traditionally this has often been in the form of ethnographies of drug-using subcultures. Such small-scale projects have produced important interpretive scholarship that focuses on the culture and meaning of drug use in situated, embodied contexts. Larger-scale projects are often assumed to be solely the domain of quantitative researchers, using formalistic survey methods and descriptive or explanatory models. In this paper, however, we will discuss qualitative research done on a comparatively larger scale—with in-depth qualitative interviews with hundreds of young drug users. Although this work incorporates some quantitative elements into the design, data collection, and analysis, the qualitative dimension and approach has nevertheless remained central. Larger-scale qualitative research shares some of the challenges and promises of smaller-scale qualitative work including understanding drug consumption from an emic perspective, locating hard-to-reach populations, developing rapport with respondents, generating thick descriptions and a rich analysis, and examining the wider socio-cultural context as a central feature. However, there are additional challenges specific to the scale of qualitative research, which include data management, data overload and problems of handling large-scale data sets, time constraints in coding and analyzing data, and personnel issues including training, organizing and mentoring large research teams. 
Yet large samples can prove to be essential for enabling researchers to conduct comparative research, whether that be cross-national research within a wider European perspective undertaken by different teams or cross-cultural research looking at internal divisions and differences within diverse communities and cultures. PMID:22308079

  20. Integration of multi-temporal airborne and terrestrial laser scanning data for the analysis and modelling of proglacial geomorphodynamic processes

    NASA Astrophysics Data System (ADS)

    Briese, Christian; Glira, Philipp; Pfeifer, Norbert

    2013-04-01

    The on-going and predicted climate change leads, in sensitive areas such as high-mountain proglacial regions, to significant geomorphodynamic processes (e.g. landslides). Within a short time period (even less than a year) these processes lead to a substantial change of the landscape. In order to study and analyse the recent changes in a proglacial environment, the multi-disciplinary research project PROSA (high-resolution measurements of morphodynamics in rapidly changing PROglacial Systems of the Alps) selected the study area of the Gepatschferner (Tyrol), the second largest glacier in Austria. One of the challenges within the project is the geometric integration (i.e. georeferencing) of multi-temporal topographic data sets in a continuously changing environment. Furthermore, one has to deal with data sets of multiple scales (large-area data sets vs. highly detailed local observations) that are, on the one hand, necessary to cover the complete proglacial area with the whole catchment and, on the other hand, guarantee a highly dense and accurate sampling of individual areas of interest (e.g. a certain highly affected slope). This contribution suggests a comprehensive method for the georeferencing of multi-temporal airborne and terrestrial laser scanning (ALS and TLS, respectively) data. It is studied by application to the data acquired within the project PROSA. In a first step, a stable coordinate frame that allows the analysis of the changing environment has to be defined. Subsequently, procedures for the transformation of the individual ALS and TLS data sets into this coordinate frame were developed. This includes the selection of appropriate reference areas as well as the development of special targets for the local TLS acquisition that can be used for absolute georeferencing in the common coordinate frame. Because different TLS instruments can be used (longer-range sensors that cover larger areas vs. closer-operating sensors that allow a denser surface sampling), the different sensor properties (wavelength, resolution, etc., and therefore the suitability of targets) have to be considered. Subsequently, the multi-temporal analysis of the data sets can be performed. Within this analysis it is important to consider the different instrument properties as well as the different data acquisition geometries (observation directions). The aim is to reach a georeferencing accuracy of a few centimetres for a single measurement epoch. Furthermore, next to an understanding of the individual measurement process, the integration of geomorphological knowledge is essential in order to separate errors of the measurement process from actual dynamic environmental changes. This leads to a multi-disciplinary analysis of the measurement data. In addition to the geometric analysis, the radiometric changes of the ALS data will be studied. The presentation illustrates, next to the method and data themselves, the analysis of multi-temporal proglacial data sets and the obtained accuracy.

  1. A scalable method for identifying frequent subtrees in sets of large phylogenetic trees.

    PubMed

    Ramu, Avinash; Kahveci, Tamer; Burleigh, J Gordon

    2012-10-03

    We consider the problem of finding the maximum frequent agreement subtrees (MFASTs) in a collection of phylogenetic trees. Existing methods for this problem often do not scale beyond datasets with around 100 taxa. Our goal is to address this problem for datasets with over a thousand taxa and hundreds of trees. We develop a heuristic solution that aims to find MFASTs in sets of many, large phylogenetic trees. Our method works in multiple phases. In the first phase, it identifies small candidate subtrees from the set of input trees which serve as the seeds of larger subtrees. In the second phase, it combines these small seeds to build larger candidate MFASTs. In the final phase, it performs a post-processing step that ensures that we find a frequent agreement subtree that is not contained in a larger frequent agreement subtree. We demonstrate that this heuristic can easily handle data sets with 1000 taxa, greatly extending the estimation of MFASTs beyond current methods. Although this heuristic does not guarantee to find all MFASTs or the largest MFAST, it found the MFAST in all of our synthetic datasets where we could verify the correctness of the result. It also performed well on large empirical data sets. Its performance is robust to the number and size of the input trees. Overall, this method provides a simple and fast way to identify strongly supported subtrees within large phylogenetic hypotheses.
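
    The three-phase structure described above can be sketched on a deliberately simplified model, in which each tree is represented by its set of clades (taxon sets) and a candidate counts as "frequent" when it occurs as a clade in at least a threshold fraction of the input trees. This illustrates only the phase structure (seed, combine, keep maximal), not the paper's actual agreement-subtree test; all names and the threshold parameter are hypothetical:

```python
from itertools import combinations

def frequent_clades(trees, min_frac=0.8, seed_size=2):
    """Toy three-phase search for maximal frequent clades.

    trees: list of sets of frozensets (each frozenset is a clade, i.e. a taxon set).
    """
    n = len(trees)

    def freq(clade):
        return sum(clade in t for t in trees) / n

    # Phase 1: small seed clades that are frequent across the input trees
    all_clades = set().union(*trees)
    seeds = {c for c in all_clades if len(c) == seed_size and freq(c) >= min_frac}

    # Phase 2: greedily combine seeds into larger frequent candidates
    candidates = set(seeds)
    grew = True
    while grew:
        grew = False
        for a, b in combinations(sorted(candidates, key=sorted), 2):
            u = a | b
            if u not in candidates and u in all_clades and freq(u) >= min_frac:
                candidates.add(u)
                grew = True

    # Phase 3: keep only candidates not contained in a larger frequent candidate
    return {c for c in candidates if not any(c < d for d in candidates)}
```

As in the heuristic, the final filtering step guarantees only maximality among the candidates found, not that every maximum frequent subtree is recovered.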

  2. A scalable method for identifying frequent subtrees in sets of large phylogenetic trees

    PubMed Central

    2012-01-01

    Background We consider the problem of finding the maximum frequent agreement subtrees (MFASTs) in a collection of phylogenetic trees. Existing methods for this problem often do not scale beyond datasets with around 100 taxa. Our goal is to address this problem for datasets with over a thousand taxa and hundreds of trees. Results We develop a heuristic solution that aims to find MFASTs in sets of many, large phylogenetic trees. Our method works in multiple phases. In the first phase, it identifies small candidate subtrees from the set of input trees which serve as the seeds of larger subtrees. In the second phase, it combines these small seeds to build larger candidate MFASTs. In the final phase, it performs a post-processing step that ensures that we find a frequent agreement subtree that is not contained in a larger frequent agreement subtree. We demonstrate that this heuristic can easily handle data sets with 1000 taxa, greatly extending the estimation of MFASTs beyond current methods. Conclusions Although this heuristic does not guarantee to find all MFASTs or the largest MFAST, it found the MFAST in all of our synthetic datasets where we could verify the correctness of the result. It also performed well on large empirical data sets. Its performance is robust to the number and size of the input trees. Overall, this method provides a simple and fast way to identify strongly supported subtrees within large phylogenetic hypotheses. PMID:23033843

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avkshtol, V; Tanny, S; Reddy, K

    Purpose: Stereotactic radiation therapy (SRT) provides an excellent alternative to embolization and surgical excision for the management of appropriately selected cerebral arteriovenous malformations (AVMs). The currently accepted standard for delineating AVMs is planar digital subtraction angiography (DSA). DSA can be used to acquire a 3D data set that preserves osseous structures (3D-DA) at the time of the angiography for SRT planning. Magnetic resonance imaging (MRI) provides an alternative noninvasive method of visualizing the AVM nidus with comparable spatial resolution. We utilized 3D-DA and T1 post-contrast MRI data to evaluate the differences in SRT target volumes. Methods: Four patients underwent 3D-DA and high-resolution MRI. 3D T1 post-contrast images were obtained in all three reconstruction planes. A planning CT was fused with the MRI and 3D-DA data sets. The AVMs were contoured utilizing one image set at a time. Target volume, centroid, and maximum and minimum dimensions were analyzed for each patient. Results: Targets delineated using post-contrast MRI demonstrated a larger mean volume. AVMs >2 cc were found to have a larger difference between MRI and 3D-DA volumes. Larger AVMs also demonstrated a smaller relative uncertainty in contour centroid position (1 mm). AVM targets <2 cc had smaller absolute differences in volume, but larger differences in contour centroid position (2.5 mm). MRI targets demonstrated a more irregular shape compared to 3D-DA targets. Conclusions: Our preliminary data support the use of MRI alone to delineate AVM targets >2 cc. The greater centroid stability for AVMs >2 cc ensures accurate target localization during image fusion. The larger MRI target volumes did not result in prohibitively greater volumes of normal brain tissue receiving the prescription dose. The larger centroid instability for AVMs <2 cc precludes the use of MRI alone for target delineation. We recommend incorporating a 3D-DA for these patients.

  4. Neural Mechanisms Underlying the Cost of Task Switching: An ERP Study

    PubMed Central

    Li, Ling; Wang, Meng; Zhao, Qian-Jing; Fogelson, Noa

    2012-01-01

    Background When switching from one task to a new one, reaction times are prolonged. This phenomenon is called switch cost (SC). Researchers have recently used several kinds of task-switching paradigms to uncover neural mechanisms underlying the SC. Task-set reconfiguration and passive dissipation of a previously relevant task-set have been reported to contribute to the cost of task switching. Methodology/Principal Findings An unpredictable cued task-switching paradigm was used, during which subjects were instructed to switch between a color and an orientation discrimination task. Electroencephalography (EEG) and behavioral measures were recorded in 14 subjects. Response-stimulus interval (RSI) and cue-stimulus interval (CSI) were manipulated with short and long intervals, respectively. Switch trials delayed reaction times (RTs) and increased error rates compared with repeat trials. The SC of RTs was smaller in the long CSI condition. For cue-locked waveforms, switch trials generated a larger parietal positive event-related potential (ERP), and a larger slow parietal positivity compared with repeat trials in the short and long CSI condition. Neural SC of cue-related ERP positivity was smaller in the long RSI condition. For stimulus-locked waveforms, a larger switch-related central negative ERP component was observed, and the neural SC of the ERP negativity was smaller in the long CSI. Results of standardized low resolution electromagnetic tomography (sLORETA) for both ERP positivity and negativity showed that switch trials evoked larger activation than repeat trials in dorsolateral prefrontal cortex (DLPFC) and posterior parietal cortex (PPC). Conclusions/Significance The results provide evidence that both RSI and CSI modulate the neural activities in the process of task-switching, but that these have a differential role during task-set reconfiguration and passive dissipation of a previously relevant task-set. PMID:22860090

  5. Accurate Methods for Large Molecular Systems (Preprint)

    DTIC Science & Technology

    2009-01-06

    tensor, EFP calculations are basis set dependent. The smallest recommended basis set is 6-31++G(d,p). The dependence of the computational cost of...and second order perturbation theory (MP2) levels with the 6-31G(d,p) basis set. Additional SFM tests are presented for a small set of alpha-helices using the 6-31++G(d,p) basis set. The larger 6-311++G(3df,2p) basis set is employed for creating all EFPs used for non-bonded interactions, since

  6. Correlates of Cooperation in a One-Shot High-Stakes Televised Prisoners' Dilemma

    PubMed Central

    Burton-Chellew, Maxwell N.; West, Stuart A.

    2012-01-01

    Explaining cooperation between non-relatives is a puzzle for both evolutionary biology and the social sciences. In humans, cooperation is often studied in a laboratory setting using economic games such as the prisoners' dilemma. However, such experiments are sometimes criticized for being played for low stakes and by misrepresentative student samples. Golden balls is a televised game show that uses the prisoners' dilemma, with a diverse range of participants, often playing for very large stakes. We use this non-experimental dataset to investigate the factors that influence cooperation when “playing” for considerably larger stakes than found in economic experiments. The game show has earlier stages that allow for an analysis of lying and voting decisions. We found that contestants were sensitive to the stakes involved, cooperating less when the stakes were larger in both absolute and relative terms. We also found that older contestants were more likely to cooperate, that liars received less cooperative behavior, but only if they told a certain type of lie, and that physical contact was associated with reduced cooperation, whereas laughter and promises were reliable signals or cues of cooperation, but were not necessarily detected. PMID:22485141

  7. Correlates of cooperation in a one-shot high-stakes televised prisoners' dilemma.

    PubMed

    Burton-Chellew, Maxwell N; West, Stuart A

    2012-01-01

    Explaining cooperation between non-relatives is a puzzle for both evolutionary biology and the social sciences. In humans, cooperation is often studied in a laboratory setting using economic games such as the prisoners' dilemma. However, such experiments are sometimes criticized for being played for low stakes and by misrepresentative student samples. Golden balls is a televised game show that uses the prisoners' dilemma, with a diverse range of participants, often playing for very large stakes. We use this non-experimental dataset to investigate the factors that influence cooperation when "playing" for considerably larger stakes than found in economic experiments. The game show has earlier stages that allow for an analysis of lying and voting decisions. We found that contestants were sensitive to the stakes involved, cooperating less when the stakes were larger in both absolute and relative terms. We also found that older contestants were more likely to cooperate, that liars received less cooperative behavior, but only if they told a certain type of lie, and that physical contact was associated with reduced cooperation, whereas laughter and promises were reliable signals or cues of cooperation, but were not necessarily detected.

  8. Separation of Peptides with Forward Osmosis Biomimetic Membranes

    PubMed Central

    Bajraktari, Niada; Madsen, Henrik T.; Gruber, Mathias F.; Truelsen, Sigurd; Jensen, Elzbieta L.; Jensen, Henrik; Hélix-Nielsen, Claus

    2016-01-01

    Forward osmosis (FO) membranes have gained interest in several disciplines for the rejection and concentration of various molecules. One application area for FO membranes that is becoming increasingly popular is the use of the membranes to concentrate or dilute high-value compound solutions such as pharmaceuticals. It is crucial in such settings to control the transport over the membrane to avoid losses of valuable compounds, but little is known about the rejection and transport mechanisms of larger biomolecules with often flexible conformations. In this study, transport of two chemically similar peptides with molecular weights (Mw) of 375 and 692 Da across a thin film composite Aquaporin Inside™ Membrane (AIM) FO membrane was investigated. Despite their relatively large size, both peptides were able to permeate the dense active layer of the AIM membrane, and the transport mechanism was determined to be diffusion-based. Interestingly, the membrane permeability increased 3.65 times for the 692 Da peptide (1.39 × 10−12 m2·s−1) compared to the 375 Da peptide (0.38 × 10−12 m2·s−1). This increase thus occurs for an 85% increase in Mw but only a 34% increase in peptide radius of gyration (Rg) as determined from molecular dynamics (MD) simulations. This suggests that Rg strongly influences membrane permeability. Thus, an increased Rg reflects the larger peptide chain's ability to sample a larger conformational space when interacting with the nanostructured active layer, increasing the likelihood of permeation. PMID:27854275
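
    The quoted ratios follow from simple arithmetic on the numbers reported in the abstract (the reported 3.65x figure reflects rounding of the two permeabilities):

```python
# Quantities quoted in the abstract
# (permeabilities in units of 1e-12 m^2/s, molecular weights in Da)
perm_small, perm_large = 0.38, 1.39   # 375 Da vs 692 Da peptide
mw_small, mw_large = 375, 692

perm_ratio = perm_large / perm_small     # ~3.66x permeability increase
mw_increase = mw_large / mw_small - 1    # ~85% increase in molecular weight
```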

  9. Development of portable defocusing micro-scale spatially offset Raman spectroscopy.

    PubMed

    Realini, Marco; Botteon, Alessandra; Conti, Claudia; Colombo, Chiara; Matousek, Pavel

    2016-05-10

    We present, for the first time, portable defocusing micro-Spatially Offset Raman Spectroscopy (micro-SORS). Micro-SORS is a concept permitting the analysis of thin, highly turbid stratified layers beyond the reach of conventional Raman microscopy. The technique is applicable to the analysis of painted layers in cultural heritage (panels, canvases and mural paintings, painted statues and decorated objects in general) as well as in many other areas, including polymer, biological and biomedical applications, catalytic and forensic sciences, where highly turbid stratified layers are present and where invasive analysis is undesirable or impossible. So far the technique has been demonstrated only on benchtop Raman microscopes, precluding the non-invasive analysis of larger samples and samples in situ. The new set-up is characterised conceptually on a range of artificially assembled two-layer systems, demonstrating its benefits and performance across several application areas. These included a stratified polymer sample, a pharmaceutical tablet and layered paint samples. The same samples were also analysed by a high-performance (non-portable) benchtop Raman microscope to provide benchmarking against our earlier research. The realisation of the vision of delivering portability to micro-SORS has a transformative potential spanning multiple disciplines, as it fully unlocks, for the first time, the non-invasive and non-destructive aspects of micro-SORS, enabling it to be applied also to large and non-portable samples in situ without recourse to removing samples, or their fragments, for laboratory analysis on benchtop Raman microscopes.

  10. Using a Team Approach to Address Bullying of Students with Asperger's Syndrome in Activity-Based Settings

    ERIC Educational Resources Information Center

    Biggs, Mary Jo Garcia; Simpson, Cynthia; Gaus, Mark D.

    2010-01-01

    The rate of bullying among individuals with disabilities is alarming. Because of the social and motor deficiencies that individuals with Asperger's syndrome (AS) often display, they are frequently targets of bullying. The physical education setting often consists of a larger number of students than the typical academic instructional setting. This…

  11. Rheological weakening due to phase mixing in olivine + orthopyroxene aggregates

    NASA Astrophysics Data System (ADS)

    Kohlstedt, D. L.; Tasaka, M.; Zimmerman, M. E.

    2016-12-01

    To understand the processes involved in rheological weakening due to phase mixing, we conducted torsion experiments on samples composed of iron-rich olivine + orthopyroxene. Samples with volume fractions of pyroxene of fpx = 0.1, 0.3, and 0.4 were deformed in torsion at a temperature of 1200°C and a confining pressure of 300 MPa using a gas-medium apparatus. The value of the stress exponent, n, decreases with increasing strain, γ, with the rate of decrease depending on fpx. In samples with larger amounts of pyroxene, fpx = 0.3 and 0.4, n decreases from n = 3.5 at lower strains of 1 ≤ γ ≤ 3 to n = 1.7 at higher strains of 24 ≤ γ ≤ 25. In contrast, in the sample with fpx = 0.1, n = 3.5 at lower strains decreases only to n = 3.0 at higher strains. In samples with larger fpx, the value of the grain-size exponent, p, changes from p = 1 at lower strains to p = 3 at higher strains. Furthermore, Hansen et al. (2012) observed that n = 4.1 and p = 0.7 in samples without pyroxene (fpx = 0) regardless of strain. For samples with larger fpx, these values of n and p indicate that the deformation mechanism changes with strain, whereas for samples with smaller fpx no change in mechanism occurs. The microstructures in our samples with larger amounts of pyroxene provide insight into the change in deformation mechanism identified from the experimental results. First, elongated olivine and pyroxene grains align sub-parallel to the shear direction with a strong crystallographic preferred orientation (CPO) in samples deformed to lower strains, for which n = 3.5. Second, mixtures of small, rounded grains of both phases with a nearly random CPO develop in samples deformed to higher strains, which exhibit a smaller stress exponent and strain weakening. The development of well-mixed, fine-grained olivine-pyroxene aggregates can be explained by the difference in diffusivity between Si, Me (= Fe or Mg), and O, such that transport of MeO is significantly faster than that of SiO2.
These mechanical and associated microstructural properties provide important constraints for understanding rheological weakening and strain localization in upper mantle rocks.

  12. In chronic myeloid leukemia patients on second-line tyrosine kinase inhibitor therapy, deep sequencing of BCR-ABL1 at the time of warning may allow sensitive detection of emerging drug-resistant mutants.

    PubMed

    Soverini, Simona; De Benedittis, Caterina; Castagnetti, Fausto; Gugliotta, Gabriele; Mancini, Manuela; Bavaro, Luana; Machova Polakova, Katerina; Linhartova, Jana; Iurlo, Alessandra; Russo, Domenico; Pane, Fabrizio; Saglio, Giuseppe; Rosti, Gianantonio; Cavo, Michele; Baccarani, Michele; Martinelli, Giovanni

    2016-08-02

    Imatinib-resistant chronic myeloid leukemia (CML) patients receiving second-line tyrosine kinase inhibitor (TKI) therapy with dasatinib or nilotinib have a higher risk of disease relapse and progression, and not infrequently BCR-ABL1 kinase domain (KD) mutations are implicated in therapeutic failure. In this setting, earlier detection of emerging BCR-ABL1 KD mutations would offer greater chances of efficacy for subsequent salvage therapy and limit the biological consequences of full BCR-ABL1 kinase reactivation. Taking advantage of a previously established and validated next-generation deep amplicon sequencing (DS) assay, we aimed to assess whether DS may allow a larger window of detection of emerging BCR-ABL1 KD mutants predicting an impending relapse. A total of 125 longitudinal samples from 51 CML patients who had acquired dasatinib- or nilotinib-resistant mutations during second-line therapy were analyzed by DS, working backwards from the time of failure and mutation detection by conventional sequencing. BCR-ABL1/ABL1%(IS) transcript levels were used to define whether the patient had 'optimal response', 'warning' or 'failure' at the time of first mutation detection by DS. DS was able to backtrack dasatinib- or nilotinib-resistant mutations to the previous sample(s) in 23/51 (45%) patients. The median mutation burden at the time of first detection by DS was 5.5% (range, 1.5-17.5%); the median interval between detection by DS and detection by conventional sequencing was 3 months (range, 1-9 months). In 5 cases, the mutations were detectable at baseline. In the remaining cases, the response level at the time mutations were first detected by DS could be defined as 'warning' (according to the 2013 ELN definitions of response to second-line therapy) in 13 cases, as 'optimal response' in one case, and as 'failure' in 4 cases. No dasatinib- or nilotinib-resistant mutations were detected by DS in 15 randomly selected patients with 'warning' at various timepoints who later turned into optimal responders with no treatment changes. DS enables a larger window of detection of emerging BCR-ABL1 KD mutations predicting an impending relapse. A 'warning' response may represent a rational trigger, besides 'failure', for DS-based mutation screening in CML patients undergoing second-line TKI therapy.

  13. Variation in the cranial base orientation and facial skeleton in dry skulls sampled from three major populations.

    PubMed

    Kuroe, Kazuto; Rosas, Antonio; Molleson, Theya

    2004-04-01

    The aim of this study was to analyse the effects of cranial base orientation on the morphology of the craniofacial system in human populations. Three geographically distant populations from Europe (72), Africa (48) and Asia (24) were chosen. Five angular and two linear variables from the cranial base component and six angular and six linear variables from the facial component based on two reference lines of the vertical posterior maxillary and Frankfort horizontal planes were measured. The European sample presented dolichofacial individuals with a larger face height and a smaller face depth derived from a raised cranial base and facial cranium orientation which tended to be similar to the Asian sample. The African sample presented brachyfacial individuals with a reduced face height and a larger face depth as a result of a lowered cranial base and facial cranium orientation. The Asian sample presented dolichofacial individuals with a larger face height and depth due to a raised cranial base and facial cranium orientation. The findings of this study suggest that cranial base orientation and posterior cranial base length appear to be valid discriminating factors between different human populations.

  14. Correcting Estimates of the Occurrence Rate of Earth-like Exoplanets for Stellar Multiplicity

    NASA Astrophysics Data System (ADS)

    Cantor, Elliot; Dressing, Courtney D.; Ciardi, David R.; Christiansen, Jessie

    2018-06-01

    One of the most prominent questions in the exoplanet field has been determining the true occurrence rate of potentially habitable Earth-like planets. NASA’s Kepler mission has been instrumental in answering this question by searching for transiting exoplanets, but follow-up observations of Kepler target stars are needed to determine whether or not the surveyed Kepler targets are in multi-star systems. While many researchers have searched for companions to Kepler planet host stars, few studies have investigated the larger target sample. Regardless of physical association, the presence of nearby stellar companions biases our measurements of a system’s planetary parameters and reduces our sensitivity to small planets. Assuming that all Kepler target stars are single (as is done in many occurrence rate calculations) would overestimate our search completeness and result in an underestimate of the frequency of potentially habitable Earth-like planets. We aim to correct for this bias by characterizing the set of targets for which Kepler could have detected Earth-like planets. We are using adaptive optics (AO) imaging to reveal potential stellar companions and near-infrared spectroscopy to refine stellar parameters for a subset of the Kepler targets that are most amenable to the detection of Earth-like planets. We will then derive correction factors to correct for the biases in the larger set of target stars and determine the true frequency of systems with Earth-like planets. Due to the prevalence of stellar multiples, we expect to calculate an occurrence rate for Earth-like exoplanets that is higher than current figures.
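
    The bias described above can be illustrated with a toy completeness correction (all numbers below are hypothetical, not results from the study): if a fraction of targets are unrecognized multiples around which Earth-like planets could not have been detected, treating every target as a searchable single star inflates the effective sample and deflates the inferred occurrence rate.

```python
def occurrence_rate(n_detections, n_targets, completeness):
    """Simple occurrence-rate estimate: detections / effective number of searched stars."""
    return n_detections / (n_targets * completeness)

# Hypothetical numbers: 20 detections among 100,000 targets with 5% nominal
# search completeness, but 30% of targets are binaries in which detection
# of an Earth-like planet was effectively impossible.
naive = occurrence_rate(20, 100_000, 0.05)
binary_fraction = 0.30
corrected = occurrence_rate(20, 100_000 * (1 - binary_fraction), 0.05)
# corrected > naive: the true occurrence rate is higher than the naive figure
```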

  15. Upward counterfactual thinking and depression: A meta-analysis.

    PubMed

    Broomhall, Anne Gene; Phillips, Wendy J; Hine, Donald W; Loi, Natasha M

    2017-07-01

    This meta-analysis examined the strength of association between upward counterfactual thinking and depressive symptoms. Forty-two effect sizes from a pooled sample of 13,168 respondents produced a weighted average effect size of r=.26, p<.001. Moderator analyses using an expanded set of 96 effect sizes indicated that upward counterfactuals and regret produced significant positive effects that were similar in strength. Effects also did not vary as a function of the theme of the counterfactual-inducing situation or study design (cross-sectional versus longitudinal). Significant effect size heterogeneity was observed across sample types, methods of assessing upward counterfactual thinking, and types of depression scale. Significant positive effects were found in studies that employed samples of bereaved individuals, older adults, terminally ill patients, or university students, but not adolescent mothers or mixed samples. Both number-based and Likert-based upward counterfactual thinking assessments produced significant positive effects, with the latter generating a larger effect. All depression scales produced significant positive effects, except for the Psychiatric Epidemiology Research Interview. Research and theoretical implications are discussed in relation to cognitive theories of depression and the functional theory of upward counterfactual thinking, and important gaps in the extant research literature are identified. Copyright © 2017 Elsevier Ltd. All rights reserved.
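
    As an illustration of how correlational effect sizes are commonly pooled, here is a minimal fixed-effect sketch using Fisher's z transform with inverse-variance (n - 3) weights; the meta-analysis itself may have used a random-effects model and different weighting, so this is generic methodology, not a reconstruction of the study's computation:

```python
import math

def pooled_correlation(effects):
    """Fixed-effect pooled r from (r, n) pairs via Fisher's z transform."""
    num = den = 0.0
    for r, n in effects:
        z = math.atanh(r)   # Fisher z transform of the correlation
        w = n - 3           # inverse of the variance of z
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform the weighted mean z
```

For example, pooling r = .20 (n = 100) with r = .30 (n = 200) gives a weighted average close to .27, nearer the larger study's estimate.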

  16. EUV-angle resolved scatter (EUV-ARS): a new tool for the characterization of nanometre structures

    NASA Astrophysics Data System (ADS)

    Fernández Herrero, Analía.; Mentzel, Heiko; Soltwisch, Victor; Jaroslawzew, Sina; Laubis, Christian; Scholze, Frank

    2018-03-01

    The advance of the semiconductor industry requires new metrology methods which can deal with smaller and more complex nanostructures. Particularly for inline metrology, a rapid, sensitive and non-destructive method is needed. Small-angle X-ray scattering under grazing incidence has already been investigated for this application and delivers significant statistical information which tracks the profile parameters as well as their variations, i.e. roughness. However, it suffers from the elongated footprint on the sample. The advantage of EUV radiation, with its longer wavelengths, is that larger incidence angles can be used, resulting in a significant reduction of the beam footprint. Targets with field sizes of 100 μm and smaller are accessible with our experimental set-up. We present a new experimental tool for the measurement of small structures based on the capabilities of soft X-ray and EUV scatterometry at the PTB soft X-ray beamline at the electron storage ring BESSY II. PTB's soft X-ray radiometry beamline uses a plane grating monochromator, which covers the spectral range from 0.7 nm to 25 nm and was especially designed to provide highly collimated radiation. An area detector covers the scattered radiation from a grazing exit angle up to an angle of 30° above the sample horizon, and the fluorescence emission can be detected with an energy-dispersive X-ray silicon drift detector. In addition, the sample can be rotated and linearly moved in vacuum. This new set-up will be used to explore the capabilities of EUV scatterometry for the characterization of nanometre-sized structures.

  17. Variable selection under multiple imputation using the bootstrap in a prognostic study

    PubMed Central

    Heymans, Martijn W; van Buuren, Stef; Knol, Dirk L; van Mechelen, Willem; de Vet, Henrica CW

    2007-01-01

    Background Missing data is a challenging problem in many prognostic studies. Multiple imputation (MI) accounts for imputation uncertainty, which allows for adequate statistical testing. We developed and tested a methodology combining MI with bootstrapping techniques for studying prognostic variable selection. Method In our prospective cohort study we merged data from three different randomized controlled trials (RCTs) to assess prognostic variables for chronicity of low back pain. Among the outcome and prognostic variables, data were missing in the range of 0% to 48.1%. We used four methods to investigate the influence of sampling and imputation variation, respectively: MI only, bootstrap only, and two methods that combine MI and bootstrapping. Variables were selected based on the inclusion frequency of each prognostic variable, i.e. the proportion of times that the variable appeared in the model. The discriminative and calibrative abilities of prognostic models developed by the four methods were assessed at different inclusion levels. Results We found that the effect of imputation variation on the inclusion frequency was larger than the effect of sampling variation. When MI and bootstrapping were combined over the range of 0% (full model) to 90% of variable selection, bootstrap-corrected c-index values of 0.70 to 0.71 and slope values of 0.64 to 0.86 were found. Conclusion We recommend accounting for both imputation and sampling variation in sets of missing data. The new procedure of combining MI with bootstrapping for variable selection results in multivariable prognostic models with good performance and is therefore attractive to apply on data sets with missing values. PMID:17629912
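
    The inclusion-frequency idea can be sketched generically: given several imputed versions of the data and any variable-selection routine, count how often each variable is selected across bootstrap resamples of each imputed dataset. The function below is an illustrative skeleton (all names hypothetical); `select` stands in for whichever selection procedure is used in practice:

```python
import random
from collections import Counter

def inclusion_frequencies(imputed_datasets, select, n_boot=100, seed=0):
    """Fraction of (imputation x bootstrap) fits in which each variable is selected.

    imputed_datasets: list of datasets (one per imputation), each a list of records.
    select: callable mapping a dataset to the set of selected variable names.
    """
    rng = random.Random(seed)
    counts = Counter()
    total = 0
    for data in imputed_datasets:
        for _ in range(n_boot):
            boot = [rng.choice(data) for _ in data]  # resample with replacement
            for var in select(boot):
                counts[var] += 1
            total += 1
    return {var: c / total for var, c in counts.items()}
```

A variable is then retained in the final model when its frequency exceeds a chosen inclusion level (the study examined levels from 0% to 90%).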

  18. A Comprehensive Program for Measurement of Military Aircraft Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Mengdawn

    2009-11-01

    Emissions of gases and particulate matter by military aircraft were characterized in-plume by extractive and optical remote-sensing (ORS) technologies. Non-volatile particle size distribution, number, and mass concentrations were measured with good precision and reproducibility. Time-integrated particulate filter samples were collected and analyzed for smoke number, elemental composition, carbon content, and sulfate. Observed at the EEP, the geometric mean diameter (as measured by the mobility diameter) generally increased as the engine power setting increased, which is consistent with downstream observations. The modal diameters at the downstream locations are larger than that at the EEP at the same engine power level. The results indicate that engine particles were processed by condensation, for example, leading to particle growth in-plume. Elemental analysis indicated that few metals were present in the exhaust, while most of the particulate-phase exhaust material was carbon and sulfate (from the JP-8 fuel). CO, CO2, NO, NO2, SO2, HCHO, ethylene, acetylene, propylene, and alkanes were measured. The last five species were most noticeable under engine idle conditions. The levels of hydrocarbons emitted at high engine power were generally below the detection limits. ORS techniques yielded real-time gaseous measurements, but the same techniques could not be extended directly to the ultrafine particles found in all engine exhausts. The results validated the sampling methodology and measurement techniques used for non-volatile particulate aircraft emissions and highlighted the need for further research on the sampling and measurement of volatile particulate matter and semi-volatile species in the engine exhaust, especially at low engine power settings.

  19. Oxygen and Magnesium Isotopic Compositions of Asteroidal Materials Returned from Itokawa by the Hayabusa Mission

    NASA Technical Reports Server (NTRS)

    Yurimoto, H; Abe, M.; Ebihara, M.; Fujimura, A.; Hashizume, K.; Ireland, T. R.; Itoh, S.; Kawaguchi, K.; Kitajima, F.; Mukai, T.; hide

    2011-01-01

    The Hayabusa spacecraft made two touchdowns on the surface of Asteroid 25143 Itokawa on November 20th and 26th, 2005. Asteroid 25143 Itokawa is classified as an S-type asteroid and is inferred to consist of materials similar to ordinary chondrites or primitive achondrites [1]. Near-infrared spectroscopy by the Hayabusa spacecraft suggested that the surface of this body has an olivine-rich mineral assemblage, potentially similar to that of LL5 or LL6 chondrites with different degrees of space weathering [2]. The spacecraft re-entered the Earth's atmosphere on June 12th, 2010, and the sample capsule was successfully recovered in Australia on June 13th, 2010. Although the sample collection on the Itokawa surface did not proceed as designed, more than 1,500 grains were identified as rocky particles in the sample curation facility of JAXA; most of them were judged to be of extraterrestrial origin, and definitely from Asteroid Itokawa, on November 17th, 2010 [3]. Although their sizes are mostly less than 10 microns, some larger grains of about 100 microns or more were also included. The mineral assemblage is olivine, pyroxene, plagioclase, iron sulfide, and iron metal. The mean mineral compositions are consistent with the results of near-infrared spectroscopy from the Hayabusa spacecraft [2], but the variations suggest that the petrologic type may be lower than the spectroscopic results indicate. Several tens of grains of relatively large size among the 1,500 grains will be selected by the Hayabusa sample curation team for preliminary examination [4]. Each grain will be subjected to one set of preliminary examinations, i.e., micro-tomography, XRD, XRF, TEM, SEM, EPMA, and SIMS, in this sequence. The preliminary examination will start in the last week of January 2011; isotope analyses for this study will therefore start in the last week of February 2011.
By the time of the LPSC meeting we will have measured the oxygen and magnesium isotopic composition of several grains. We will present the first results from the isotope analyses that will have been performed.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amblard, A.; Riguccini, L.; Temi, P.

    We compute the properties of a sample of 221 local, early-type galaxies with a spectral energy distribution (SED) modeling software, CIGALEMC. Concentrating on the star-forming (SF) activity and dust content, we derive parameters such as the specific star formation rate (sSFR), the dust luminosity, dust mass, and temperature. Our sample is composed of 52% elliptical (E) galaxies and 48% lenticular (S0) galaxies. We find a larger proportion of S0 galaxies among galaxies with a large sSFR and large specific dust emission. The stronger activity of S0 galaxies is confirmed by larger dust masses. We investigate the relative proportion of active galactic nuclei (AGNs) and SF galaxies in our sample using spectroscopic Sloan Digital Sky Survey data and near-infrared selection techniques, and find a larger proportion of AGN-dominated galaxies in the S0 sample than in the E one. This could corroborate a scenario in which blue galaxies evolve into red ellipticals by passing through an S0 AGN-active period while quenching their star formation. Finally, we find good agreement when comparing our estimates with color indicators.

  1. Design and performance of tapered cubic anvil used for achieving higher pressure and larger sample cell

    NASA Astrophysics Data System (ADS)

    Han, Qi-Gang; Yang, Wen-Ke; Zhu, Pin-Wen; Ban, Qing-Chu; Yan, Ni; Zhang, Qiang

    2013-07-01

    In order to increase the maximum cell pressure of the cubic high-pressure apparatus, we have developed a new tungsten carbide cubic anvil structure (the tapered cubic anvil), based on the principles of massive support and lateral support. Our results indicate that the tapered cubic anvil has several advantages. First, it pushes the pressure-transfer rate above 36.37%, compared with the conventional anvil. Second, the failure-crack rate decreases by about 11.20% after the modification of the conventional anvil. Third, the limit of static high pressure in the sample cell can be extended to 13 GPa, which increases the maximum cell pressure by about 73.3% over that of the conventional anvil. Fourth, the volume of the sample cell compressed by tapered cubic anvils reaches 14.13 mm3 (3 mm diameter × 2 mm long), which is three and six orders of magnitude larger than that of the double-stage apparatus and the diamond anvil cell, respectively. This work represents a relatively simple method for achieving higher pressures and a larger sample cell.

  2. A time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes with applications in substance abuse research.

    PubMed

    Yang, Songshan; Cranford, James A; Jester, Jennifer M; Li, Runze; Zucker, Robert A; Buu, Anne

    2017-02-28

    This study proposes a time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes. The motivating example demonstrates that this zero-inflated Poisson model allows investigators to study group differences in different aspects of substance use (e.g., the probability of abstinence and the quantity of alcohol use) simultaneously. The simulation study shows that the accuracy of estimation of trajectory functions improves as the sample size increases; the accuracy under equal group sizes is only higher when the sample size is small (100). In terms of the performance of the hypothesis testing, the type I error rates are close to their corresponding significance levels under all settings. Furthermore, the power increases as the alternative hypothesis deviates more from the null hypothesis, and the rate of this increasing trend is higher when the sample size is larger. Moreover, the hypothesis test for the group difference in the zero component tends to be less powerful than the test for the group difference in the Poisson component. Copyright © 2016 John Wiley & Sons, Ltd.
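    The zero-inflated Poisson model referenced above mixes a point mass at zero with a Poisson count component. A minimal sketch of its probability mass function (parameter names are ours):

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson mass function: with probability pi the count is
    a structural zero; otherwise it is drawn from Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson  # structural zeros plus Poisson zeros
    return (1 - pi) * poisson
```

    Group differences enter by letting pi (the abstinence probability) and lam (the use quantity) vary smoothly over time within each group, which is what the time-varying effect model estimates.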

  3. The systematic component of phylogenetic error as a function of taxonomic sampling under parsimony.

    PubMed

    Debry, Ronald W

    2005-06-01

    The effect of taxonomic sampling on phylogenetic accuracy under parsimony is examined by simulating nucleotide sequence evolution. Random error is minimized by using very large numbers of simulated characters. This allows estimation of the consistency behavior of parsimony, even for trees with up to 100 taxa. Data were simulated on 8 distinct 100-taxon model trees and analyzed as stratified subsets containing either 25 or 50 taxa, in addition to the full 100-taxon data set. Overall accuracy decreased in a majority of cases when taxa were added. However, the magnitude of change in the cases in which accuracy increased was larger than the magnitude of change in the cases in which accuracy decreased, so, on average, overall accuracy increased as more taxa were included. A stratified sampling scheme was used to assess accuracy for an initial subsample of 25 taxa. The 25-taxon analyses were compared to 50- and 100-taxon analyses that were pruned to include only the original 25 taxa. On average, accuracy for the 25 taxa was improved by taxon addition, but there was considerable variation in the degree of improvement among the model trees and across different rates of substitution.

  4. Label-Free, Flow-Imaging Methods for Determination of Cell Concentration and Viability.

    PubMed

    Sediq, A S; Klem, R; Nejadnik, M R; Meij, P; Jiskoot, Wim

    2018-05-30

    To investigate the potential of two flow imaging microscopy (FIM) techniques (Micro-Flow Imaging (MFI) and FlowCAM) to determine total cell concentration and cell viability. B-lineage acute lymphoblastic leukemia (B-ALL) cells from 2 different donors were exposed to ambient conditions. Samples were taken on different days and measured with MFI, FlowCAM, hemocytometry, and automated cell counting. Dead and live cells from a fresh B-ALL cell suspension were fractionated by flow cytometry in order to derive software filters based on morphological parameters of the separate cell populations with MFI and FlowCAM. The filter sets were used to assess cell viability in the measured samples. All techniques gave fairly similar cell concentration values over the whole incubation period. MFI proved superior with respect to precision, whereas FlowCAM provided particle images with a higher resolution. Moreover, both FIM methods provided results for cell viability similar to those of the conventional methods (hemocytometry and automated cell counting). FIM-based methods may be advantageous over conventional methods for determining total cell concentration and cell viability, as FIM measures much larger sample volumes, does not require labeling, is less laborious, and provides images of individual cells.

  5. Occurrence and origin of Escherichia coli in water and sediments at two public swimming beaches at Lake of the Ozarks State Park, Camden County, Missouri, 2011-13

    USGS Publications Warehouse

    Wilson, Jordan L.; Schumacher, John G.; Burken, Joel G.

    2014-01-01

    In the past several years, the Missouri Department of Natural Resources has closed two popular public beaches, Grand Glaize Beach and Public Beach 1, at Lake of the Ozarks State Park in Osage Beach, Missouri, when monitoring results exceeded the established Escherichia coli (E. coli) standard. As a result of the beach closures, the U.S. Geological Survey and Missouri University of Science and Technology, in cooperation with the Missouri Department of Natural Resources, led an investigation into the occurrence and origins of E. coli at Grand Glaize Beach and Public Beach 1. The study included the collection of more than 1,300 water, sediment, and fecal source samples between August 2011 and February 2013 from the two beaches and vicinity. Spatial and temporal patterns of E. coli concentrations in water and sediments, combined with measurements of environmental variables, beach-use patterns, and Missouri Department of Natural Resources water-tracing results, were used to identify possible sources of E. coli contamination at the two beaches and to corroborate microbial source tracking (MST) sampling efforts. Results from a 2011 reconnaissance sampling indicate that water samples from Grand Glaize Beach cove contained significantly larger E. coli concentrations than adjacent coves and were largest at sites at the upper end of Grand Glaize Beach cove, indicating a probable local source of E. coli contamination within the upper end of the cove. Results from an intensive sampling effort during 2012 indicated that E. coli concentrations in water samples at Grand Glaize Beach cove were significantly larger in ankle-deep water than in waist-deep water, trended downward during the recreational season, increased significantly with the total number of bathers at the beach, and were largest during the middle of the day. Concentrations of E. coli in nearshore sediment (sediment near the shoreline) at Grand Glaize Beach were significantly larger in foreshore samples (samples collected above the shoreline) than in samples collected in ankle-deep water below the shoreline, significantly larger in the left and middle areas of the beach than in the right area, and substantially larger than in similar studies at E. coli-contaminated beaches on Lake Erie in Ohio. Concentrations of E. coli in the water column also were significantly larger after resuspension of sediments. Results of MST indicate a predominance of waterfowl-associated markers in nearshore sediments at Grand Glaize Beach, consistent with frequent observations of goose and vulture fecal matter in sediment, especially on the left and middle areas of the beach. The combination of spatial and temporal sampling and MST indicates that an important source of E. coli contamination at Grand Glaize Beach during 2012 was E. coli released into the water column by bathers resuspending E. coli-contaminated sediments, especially during high-use days early in the recreational season.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janik, Gregory

    Renders, saves, and analyzes pressure from several sensors in a prosthesis socket. The program receives pressure data from 64 manometers and parses the pressure for each individual sensor. The program can then display those pressures as numbers in a table. The program also interpolates pressures between manometers to create a larger set of data, which is displayed as a simple contour plot. That same contour plot can also be placed on a three-dimensional surface in the shape of a prosthesis. This program allows for easy identification of high-pressure areas in a prosthesis to reduce the user's discomfort. The program parses the sensor pressures into a human-readable numeric format. The data may also be used to actively adjust bladders within the prosthesis to spread out pressure in real time, according to changing demands placed on the prosthesis. Interpolation of the pressures to create a larger data set makes it even easier for a human to identify particular areas of the prosthesis that are under high pressure. After identifying pressure points, a prosthetist can then redesign the prosthesis and/or command the bladders in the prosthesis to attempt to maintain constant pressures.
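    Interpolating between discrete sensors, as described above, is commonly done with bilinear interpolation; a minimal sketch (the grid shape and upsampling factor are illustrative, not taken from the program described):

```python
def bilinear(grid, x, y):
    """Bilinearly interpolate a value at fractional coordinates (x, y) from a
    2-D grid of sensor readings (grid[row][col])."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def upsample(grid, factor=2):
    """Densify a sensor grid by sampling interpolated points between sensors,
    producing the larger data set used for contour plotting."""
    h, w = len(grid), len(grid[0])
    H, W = (h - 1) * factor + 1, (w - 1) * factor + 1
    return [[bilinear(grid, j / factor, i / factor) for j in range(W)]
            for i in range(H)]
```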

  7. Dynamic Range Across Music Genres and the Perception of Dynamic Compression in Hearing-Impaired Listeners

    PubMed Central

    Kirchberger, Martin

    2016-01-01

    Dynamic range compression serves different purposes in the music and hearing-aid industries. In the music industry, it is used to make music louder and more attractive to normal-hearing listeners. In the hearing-aid industry, it is used to map the variable dynamic range of acoustic signals to the reduced dynamic range of hearing-impaired listeners. Hence, hearing-aided listeners will typically receive a dual dose of compression when listening to recorded music. The present study involved an acoustic analysis of dynamic range across a cross section of recorded music as well as a perceptual study comparing the efficacy of different compression schemes. The acoustic analysis revealed that the dynamic range of samples from popular genres, such as rock or rap, was generally smaller than the dynamic range of samples from classical genres, such as opera and orchestra. By comparison, the dynamic range of speech, based on recordings of monologues in quiet, was larger than the dynamic range of all music genres tested. The perceptual study compared the effect of the prescription rule NAL-NL2 with a semicompressive and a linear scheme. Music subjected to linear processing had the highest ratings for dynamics and quality, followed by the semicompressive and the NAL-NL2 setting. These findings advise against NAL-NL2 as a prescription rule for recorded music and recommend linear settings. PMID:26868955
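    The effect of dynamic range compression discussed above can be pictured by the static input/output curve of a downward compressor; a generic illustration (the threshold and ratio values are arbitrary, and this is not the NAL-NL2 prescription):

```python
def compress_db(level_db, threshold_db=-20.0, ratio=3.0):
    """Static curve of a downward compressor: input levels above the
    threshold are reduced by the compression ratio; below it, output = input."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# Dynamic range of a signal spanning -30 dB to -8 dB, before and after:
dr_in = -8 - (-30)                            # 22 dB of input dynamic range
dr_out = compress_db(-8.0) - compress_db(-30.0)  # compressed output range
```

    A hearing aid applying such a curve on top of already-compressed recorded music produces the "dual dose" of compression the study describes.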

  8. A fast iterative convolution weighting approach for gridding-based direct Fourier three-dimensional reconstruction with correction for the contrast transfer function.

    PubMed

    Abrishami, V; Bilbao-Castro, J R; Vargas, J; Marabini, R; Carazo, J M; Sorzano, C O S

    2015-10-01

    We describe a fast and accurate method for the reconstruction of macromolecular complexes from a set of projections. Direct Fourier inversion (in which the Fourier Slice Theorem plays a central role) is one solution to this inverse problem. Unfortunately, in the single-particle field the set of projections provides a non-equidistantly sampled version of the macromolecule's Fourier transform, so a direct Fourier inversion may not be an optimal solution. In this paper, we introduce a gridding-based direct Fourier method for three-dimensional reconstruction that uses a weighting technique to compute a uniformly sampled Fourier transform. Moreover, the contrast transfer function of the microscope, which is a limiting factor in pursuing a high-resolution reconstruction, is corrected by the algorithm. Parallelization of this algorithm, both across threads and across multiple CPUs, makes the process of three-dimensional reconstruction even faster. The experimental results show that our proposed gridding-based direct Fourier reconstruction is slightly more accurate than similar existing methods and presents lower computational complexity both in terms of time and memory, thereby allowing its use on larger volumes. The algorithm is fully implemented in the open-source Xmipp package and is downloadable from http://xmipp.cnb.csic.es. Copyright © 2015 Elsevier B.V. All rights reserved.
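    The Fourier Slice Theorem that underpins direct Fourier inversion can be checked on a tiny discrete example: the 1-D DFT of a projection of an image equals the corresponding central line of the image's 2-D DFT. A minimal sketch (naive DFT and a toy 4×4 image; nothing here resembles the paper's gridding implementation):

```python
import cmath

def dft(xs):
    """Naive O(N^2) 1-D discrete Fourier transform; fine for a demo."""
    N = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                for n, x in enumerate(xs)) for k in range(N)]

image = [[1.0, 2.0, 0.0, 1.0],
         [0.0, 3.0, 1.0, 2.0],
         [2.0, 0.0, 1.0, 0.0],
         [1.0, 1.0, 2.0, 1.0]]

# Project along y (column sums), then take the 1-D DFT of the projection.
projection = [sum(col) for col in zip(*image)]
slice_from_projection = dft(projection)

# Independently compute the ky = 0 line of the 2-D DFT: sum the row-wise
# 1-D DFTs over all rows. The theorem says both lines must agree.
row_dfts = [dft(row) for row in image]
ky0_row = [sum(r[k] for r in row_dfts) for k in range(4)]
```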

  9. Dynamic Range Across Music Genres and the Perception of Dynamic Compression in Hearing-Impaired Listeners.

    PubMed

    Kirchberger, Martin; Russo, Frank A

    2016-02-10

    Dynamic range compression serves different purposes in the music and hearing-aid industries. In the music industry, it is used to make music louder and more attractive to normal-hearing listeners. In the hearing-aid industry, it is used to map the variable dynamic range of acoustic signals to the reduced dynamic range of hearing-impaired listeners. Hence, hearing-aided listeners will typically receive a dual dose of compression when listening to recorded music. The present study involved an acoustic analysis of dynamic range across a cross section of recorded music as well as a perceptual study comparing the efficacy of different compression schemes. The acoustic analysis revealed that the dynamic range of samples from popular genres, such as rock or rap, was generally smaller than the dynamic range of samples from classical genres, such as opera and orchestra. By comparison, the dynamic range of speech, based on recordings of monologues in quiet, was larger than the dynamic range of all music genres tested. The perceptual study compared the effect of the prescription rule NAL-NL2 with a semicompressive and a linear scheme. Music subjected to linear processing had the highest ratings for dynamics and quality, followed by the semicompressive and the NAL-NL2 setting. These findings advise against NAL-NL2 as a prescription rule for recorded music and recommend linear settings. © The Author(s) 2016.

  10. Dimensions of religiousness and cancer screening behaviors among church-going Latinas.

    PubMed

    Allen, Jennifer D; Pérez, John E; Pischke, Claudia R; Tom, Laura S; Juarez, Alan; Ospino, Hosffman; Gonzalez-Suarez, Elizabeth

    2014-02-01

    Churches are a promising setting through which to reach Latinas with cancer control efforts. A better understanding of the dimensions of religiousness that impact health behaviors could inform efforts to tailor cancer control programs for this setting. The purpose of this study was to explore relationships between dimensions of religiousness with adherence to cancer screening recommendations among church-going Latinas. Female Spanish-speaking members, aged 18 and older from a Baptist church in Boston, Massachusetts (N = 78), were interviewed about cancer screening behaviors and dimensions of religiousness. We examined adherence to individual cancer screening tests (mammography, Pap test, and colonoscopy), as well as adherence to all screening tests for which participants were age-eligible. Dimensions of religiousness assessed included church participation, religious support, active and passive spiritual health locus of control, and positive and negative religious coping. Results showed that roughly half (46 %) of the sample had not received all of the cancer screening tests for which they were age-eligible. In multivariate analyses, positive religious coping was significantly associated with adherence to all age-appropriate screening (OR = 5.30, p < .01). Additional research is warranted to replicate these results in larger, more representative samples and to examine the extent to which enhancement of religious coping could increase the impact of cancer control interventions for Latinas.

  11. Correcting for Optimistic Prediction in Small Data Sets

    PubMed Central

    Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.

    2014-01-01

    The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
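    Bootstrapping, one of the unbiased methods above, estimates optimism as the average drop in C between a model evaluated on its own bootstrap resample and on the original data, and subtracts that from the apparent C. A generic Harrell-style sketch (all names and the toy scoring setup are ours, not the exact protocols evaluated in the paper):

```python
import random

def c_statistic(scores, labels):
    """Concordance: probability that a random positive case scores higher
    than a random negative case (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    return sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

def optimism_corrected_c(data, fit, n_boot=100, seed=0):
    """data: list of (x, label) pairs; fit: data -> scoring function."""
    rng = random.Random(seed)
    labels = [y for _, y in data]
    apparent = c_statistic([fit(data)(x) for x, _ in data], labels)
    optimism, done = 0.0, 0
    while done < n_boot:
        boot = [rng.choice(data) for _ in data]
        if len({y for _, y in boot}) < 2:  # need both classes for concordance
            continue
        model = fit(boot)
        c_boot = c_statistic([model(x) for x, _ in boot], [y for _, y in boot])
        c_orig = c_statistic([model(x) for x, _ in data], labels)
        optimism += c_boot - c_orig
        done += 1
    return apparent - optimism / n_boot
```

    A model whose scores do not depend on the training data has zero optimism, so the apparent C is returned unchanged; an overfitted model scores its own resample better than the original data, and the correction pulls the estimate down.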

  12. The development of search filters for adverse effects of surgical interventions in medline and Embase.

    PubMed

    Golder, Su; Wright, Kath; Loke, Yoon Kong

    2018-06-01

    Search filter development for adverse effects has tended to focus on retrieving studies of drug interventions. However, a different approach is required for surgical interventions. To develop and validate search filters for medline and Embase for the adverse effects of surgical interventions. Systematic reviews of surgical interventions where the primary focus was to evaluate adverse effect(s) were sought. The included studies within these reviews were divided randomly into a development set, an evaluation set, and a validation set. Using word frequency analysis, we constructed a sensitivity-maximising search strategy, which was then tested on the evaluation and validation sets. Three hundred and fifty-eight papers were included from 19 surgical intervention reviews. Three hundred and fifty-two papers were available in medline and 348 in Embase. Generic adverse effects search strategies in medline and Embase achieved approximately 90% relative recall. Recall could be further improved by adding specific adverse effects terms to the search strategies. We have derived and validated a novel search filter with reasonable performance for identifying adverse effects of surgical interventions in medline and Embase. However, we appreciate the limitations of our methods and recommend further research on larger sample sizes and prospective systematic reviews. © 2018 The Authors Health Information and Libraries Journal published by John Wiley & Sons Ltd on behalf of Health Libraries Group.
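    Word frequency analysis for filter development can be sketched as ranking tokens by how many development-set records contain them; a toy illustration (the tokenization and ranking are generic, not the authors' pipeline):

```python
from collections import Counter

def candidate_terms(dev_docs, top_n=10):
    """Rank tokens by document frequency across a development set of
    titles/abstracts to propose candidate search-filter terms."""
    df = Counter()
    for doc in dev_docs:
        df.update(set(doc.lower().split()))  # count each token once per doc
    return [word for word, _ in df.most_common(top_n)]
```

    In practice the top-ranked terms are combined with OR into a sensitivity-maximising strategy, then relative recall is measured on the held-out evaluation and validation sets.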

  13. Cluster randomised crossover trials with binary data and unbalanced cluster sizes: application to studies of near-universal interventions in intensive care.

    PubMed

    Forbes, Andrew B; Akram, Muhammad; Pilcher, David; Cooper, Jamie; Bellomo, Rinaldo

    2015-02-01

    Cluster randomised crossover trials have been utilised in recent years in the health and social sciences. Methods for analysis have been proposed; however, for binary outcomes, these have received little assessment of their appropriateness. In addition, methods for determination of sample size are currently limited to balanced cluster sizes both between clusters and between periods within clusters. This article aims to extend this work to unbalanced situations and to evaluate the properties of a variety of methods for analysis of binary data, with a particular focus on the setting of potential trials of near-universal interventions in intensive care to reduce in-hospital mortality. We derive a formula for sample size estimation for unbalanced cluster sizes, and apply it to the intensive care setting to demonstrate the utility of the cluster crossover design. We conduct a numerical simulation of the design in the intensive care setting and for more general configurations, and we assess the performance of three cluster summary estimators and an individual-data estimator based on binomial-identity-link regression. For settings similar to the intensive care scenario involving large cluster sizes and small intra-cluster correlations, the sample size formulae developed and analysis methods investigated are found to be appropriate, with the unweighted cluster summary method performing well relative to the more optimal but more complex inverse-variance weighted method. More generally, we find that the unweighted and cluster-size-weighted summary methods perform well, with the relative efficiency of each largely determined systematically from the study design parameters. Performance of individual-data regression is adequate with small cluster sizes but becomes inefficient for large, unbalanced cluster sizes. 
When outcome prevalences are 6% or less and the within-cluster-within-period correlation is 0.05 or larger, all methods display sub-nominal confidence interval coverage, with the less prevalent the outcome the worse the coverage. As with all simulation studies, conclusions are limited to the configurations studied. We confined attention to detecting intervention effects on an absolute risk scale using marginal models and did not explore properties of binary random effects models. Cluster crossover designs with binary outcomes can be analysed using simple cluster summary methods, and sample size in unbalanced cluster size settings can be determined using relatively straightforward formulae. However, caution needs to be applied in situations with low prevalence outcomes and moderate to high intra-cluster correlations. © The Author(s) 2014.
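    The abstract does not reproduce its unbalanced-cluster sample size formula; as background only, the standard design effect for a parallel-group cluster randomised trial with unequal cluster sizes (mean size m, coefficient of variation cv, intra-cluster correlation rho) is 1 + ((cv^2 + 1)m - 1)rho. A sketch of that background formula, explicitly not the crossover-specific formula derived in the paper:

```python
import math

def design_effect(mean_cluster_size, cv, icc):
    """Design effect for unequal cluster sizes in a parallel cluster RCT:
    1 + ((cv^2 + 1) * m - 1) * icc. Background formula only."""
    return 1 + ((cv ** 2 + 1) * mean_cluster_size - 1) * icc

def clusters_needed(n_individual, mean_cluster_size, cv, icc):
    """Inflate an individually randomised sample size by the design effect,
    then convert to a number of clusters."""
    n = n_individual * design_effect(mean_cluster_size, cv, icc)
    return math.ceil(n / mean_cluster_size)
```

    Crossover designs reduce this inflation because each cluster acts as its own control; the paper's contribution is quantifying that reduction under unbalanced cluster sizes.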

  14. Adjustment Costs, Firm Responses, and Micro vs. Macro Labor Supply Elasticities: Evidence from Danish Tax Records*

    PubMed Central

    Chetty, Raj; Friedman, John N.; Olsen, Tore; Pistaferri, Luigi

    2011-01-01

    We show that the effects of taxes on labor supply are shaped by interactions between adjustment costs for workers and hours constraints set by firms. We develop a model in which firms post job offers characterized by an hours requirement and workers pay search costs to find jobs. We present evidence supporting three predictions of this model by analyzing bunching at kinks using Danish tax records. First, larger kinks generate larger taxable income elasticities. Second, kinks that apply to a larger group of workers generate larger elasticities. Third, the distribution of job offers is tailored to match workers' aggregate tax preferences in equilibrium. Our results suggest that macro elasticities may be substantially larger than the estimates obtained using standard microeconometric methods. PMID:21836746

  15. Anomic Strain and External Constraints: A Reassessment of Merton's Anomie/Strain Theory Using Data From Ukraine.

    PubMed

    Antonaccio, Olena; Smith, William R; Gostjev, Feodor A

    2015-09-01

    This study provides a new assessment of Merton's anomie/strain theory and fills in several gaps in the literature. First, using the data from the sample of adolescents in an especially suitable and interesting setting, post-Soviet Ukraine, it investigates the applicability of the theory to this context and reveals that predictive powers of anomic strain may be influenced by larger sociocultural environments. Second, it evaluates the possibility of theoretical elaboration of Merton's theory through identifying contingencies such as external constraints on behavior and finds limited support for moderating effects of perceptions of risks of sanctioning and social bonds on anomic strain-delinquency relationships. Finally, it confirms that additional clarifications of the concept of anomic strain may be promising. © The Author(s) 2014.

  16. Sediment lithology and radiochemistry from the back-barrier environments along the northern Chandeleur Islands, Louisiana—March 2012

    USGS Publications Warehouse

    Marot, Marci E.; Smith, Christopher G.; Adams, C. Scott; Richwine, Kathryn A.

    2017-04-11

    Scientists from the U.S. Geological Survey (USGS) St. Petersburg Coastal and Marine Science Center collected a set of 8 sediment cores from the back-barrier environments along the northern Chandeleur Islands, Louisiana, in March 2012. The sampling efforts were part of a larger USGS study to evaluate effects on the geomorphology of the Chandeleur Islands following the construction of an artificial sand berm to reduce oil transport onto federally managed lands. The objective of this study was to evaluate the response of the back-barrier tidal and wetland environments to the berm. This report serves as an archive for sedimentological and radiochemical data derived from the sediment cores. The data described in this report are available for download on the data downloads page.

  17. Social competence intervention for youth with Asperger Syndrome and high-functioning autism: an initial investigation.

    PubMed

    Stichter, Janine P; Herzog, Melissa J; Visovsky, Karen; Schmidt, Carla; Randolph, Jena; Schultz, Tia; Gage, Nicholas

    2010-09-01

    Individuals with high functioning autism (HFA) or Asperger Syndrome (AS) exhibit difficulties in the knowledge or correct performance of social skills. This subgroup's social difficulties appear to be associated with deficits in three social cognition processes: theory of mind, emotion recognition and executive functioning. The current study outlines the development and initial administration of the group-based Social Competence Intervention (SCI), which targeted these deficits using cognitive behavioral principles. Across 27 students aged 11-14 with a HFA/AS diagnosis, results indicated significant improvement on parent reports of social skills and executive functioning. Participants evidenced significant growth on direct assessments measuring facial expression recognition, theory of mind and problem solving. SCI appears promising; however, larger samples and application in naturalistic settings are warranted.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirkov, Leonid; Makarewicz, Jan, E-mail: jama@amu.edu.pl

    An ab initio intermolecular potential energy surface (PES) has been constructed for the benzene-krypton (BKr) van der Waals (vdW) complex. The interaction energy has been calculated at the coupled cluster level of theory with single, double, and perturbatively included triple excitations using different basis sets. As a result, a few analytical PESs of the complex have been determined. They allowed a prediction of the complex structure and its vibrational vdW states. The vibrational energy level pattern exhibits a distinct polyad structure. Comparison of the equilibrium structure, the dipole moment, and vibrational levels of BKr with their experimental counterparts has allowed us to design an optimal basis set composed of a small Dunning’s basis set for the benzene monomer, a larger effective core potential adapted basis set for Kr and additional midbond functions. Such a basis set yields vibrational energy levels that agree very well with the experimental ones as well as with those calculated from the available empirical PES derived from the microwave spectra of the BKr complex. The basis proposed can be applied to larger complexes including Kr because of a reasonable computational cost and accurate results.

  19. Polarized atomic orbitals for self-consistent field electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Lee, Michael S.; Head-Gordon, Martin

    1997-12-01

    We present a new self-consistent field approach which, given a large "secondary" basis set of atomic orbitals, variationally optimizes molecular orbitals in terms of a small "primary" basis set of distorted atomic orbitals, which are simultaneously optimized. If the primary basis is taken as a minimal basis, the resulting functions are termed polarized atomic orbitals (PAO's) because they are valence (or core) atomic orbitals which have distorted or polarized in an optimal way for their molecular environment. The PAO's derive their flexibility from the fact that they are formed from atom-centered linear-combinations of the larger set of secondary atomic orbitals. The variational conditions satisfied by PAO's are defined, and an iterative method for performing a PAO-SCF calculation is introduced. We compare the PAO-SCF approach against full SCF calculations for the energies, dipoles, and molecular geometries of various molecules. The PAO's are potentially useful for studying large systems that are currently intractable with larger than minimal basis sets, as well as offering potential interpretative benefits relative to calculations in extended basis sets.

  20. Template based rotation: A method for functional connectivity analysis with a priori templates

    PubMed Central

    Schultz, Aaron P.; Chhatwal, Jasmeer P.; Huijbers, Willem; Hedden, Trey; van Dijk, Koene R.A.; McLaren, Donald G.; Ward, Andrew M.; Wigman, Sarah; Sperling, Reisa A.

    2014-01-01

    Functional connectivity magnetic resonance imaging (fcMRI) is a powerful tool for understanding the network level organization of the brain in research settings and is increasingly being used to study large-scale neuronal network degeneration in clinical trial settings. Presently, a variety of techniques, including seed-based correlation analysis and group independent components analysis (with either dual regression or back projection) are commonly employed to compute functional connectivity metrics. In the present report, we introduce template based rotation, a novel analytic approach optimized for use with a priori network parcellations, which may be particularly useful in clinical trial settings. Template based rotation was designed to leverage the stable spatial patterns of intrinsic connectivity derived from out-of-sample datasets by mapping data from novel sessions onto the previously defined a priori templates. We first demonstrate the feasibility of using previously defined a priori templates in connectivity analyses, and then compare the performance of template based rotation to seed based and dual regression methods by applying these analytic approaches to an fMRI dataset of normal young and elderly subjects. We observed that template based rotation and dual regression are approximately equivalent in detecting fcMRI differences between young and old subjects, demonstrating similar effect sizes for group differences and similar reliability metrics across 12 cortical networks. Both template based rotation and dual-regression demonstrated larger effect sizes and comparable reliabilities as compared to seed based correlation analysis, though all three methods yielded similar patterns of network differences.
When performing inter-network and sub-network connectivity analyses, we observed that template based rotation offered greater flexibility, larger group differences, and more stable connectivity estimates as compared to dual regression and seed based analyses. This flexibility owes to the reduced spatial and temporal orthogonality constraints of template based rotation as compared to dual regression. These results suggest that template based rotation can provide a useful alternative to existing fcMRI analytic methods, particularly in clinical trial settings where predefined outcome measures and conserved network descriptions across groups are at a premium. PMID:25150630
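
    The central step of a template-based analysis of this kind is expressing each fMRI volume as a least-squares combination of a priori spatial templates. The tiny two-template example below is an illustrative sketch under that assumption, not the authors' implementation; the template and volume vectors are toy data.

```python
# Hedged sketch: fit each "volume" as a least-squares combination of two
# a priori spatial templates by solving the 2x2 normal equations directly.
# Templates and data are illustrative toy vectors, not the authors' code.

def fit_templates(templates, volume):
    """Least-squares weights for exactly two templates."""
    t1, t2 = templates
    a = sum(x * x for x in t1)                     # t1 . t1
    b = sum(x * y for x, y in zip(t1, t2))         # t1 . t2
    d = sum(x * x for x in t2)                     # t2 . t2
    c1 = sum(x * v for x, v in zip(t1, volume))    # t1 . volume
    c2 = sum(x * v for x, v in zip(t2, volume))    # t2 . volume
    det = a * d - b * b
    return ((d * c1 - b * c2) / det, (a * c2 - b * c1) / det)

# two orthogonal "network templates" over 4 voxels, one toy volume
templates = ([1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0])
volume = [2.0, 2.0, -1.0, -1.0]
weights = fit_templates(templates, volume)  # per-network loadings
```

    Repeating this fit for every time point yields one time series per template network, which is the kind of per-network signal the inter-network analyses above compare.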

  1. Interlaboratory comparison of δ13C and δD measurements of atmospheric CH4 for combined use of data sets from different laboratories

    NASA Astrophysics Data System (ADS)

    Umezawa, Taku; Brenninkmeijer, Carl A. M.; Röckmann, Thomas; van der Veen, Carina; Tyler, Stanley C.; Fujita, Ryo; Morimoto, Shinji; Aoki, Shuji; Sowers, Todd; Schmitt, Jochen; Bock, Michael; Beck, Jonas; Fischer, Hubertus; Michel, Sylvia E.; Vaughn, Bruce H.; Miller, John B.; White, James W. C.; Brailsford, Gordon; Schaefer, Hinrich; Sperlich, Peter; Brand, Willi A.; Rothe, Michael; Blunier, Thomas; Lowry, David; Fisher, Rebecca E.; Nisbet, Euan G.; Rice, Andrew L.; Bergamaschi, Peter; Veidt, Cordelia; Levin, Ingeborg

    2018-03-01

    We report results from a worldwide interlaboratory comparison of samples among laboratories that measure (or measured) stable carbon and hydrogen isotope ratios of atmospheric CH4 (δ13C-CH4 and δD-CH4). The offsets among the laboratories are larger than the measurement reproducibility of individual laboratories. To disentangle plausible measurement offsets, we evaluated and critically assessed a large number of intercomparison results, some of which have been documented previously in the literature. The results indicate significant offsets of δ13C-CH4 and δD-CH4 measurements among data sets reported from different laboratories; the differences among laboratories at modern atmospheric CH4 level spread over ranges of 0.5 ‰ for δ13C-CH4 and 13 ‰ for δD-CH4. The intercomparison results summarized in this study may be of help in future attempts to harmonize δ13C-CH4 and δD-CH4 data sets from different laboratories in order to jointly incorporate them into modelling studies. However, establishing a merged data set, which includes δ13C-CH4 and δD-CH4 data from multiple laboratories with desirable compatibility, is still challenging due to differences among laboratories in instrument settings, correction methods, traceability to reference materials and long-term data management. Further efforts are needed to identify causes of the interlaboratory measurement offsets and to decrease those to move towards the best use of available δ13C-CH4 and δD-CH4 data sets.
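
    Where two laboratories have co-measured the same flask samples, a pairwise offset of the kind described above can be estimated and removed before merging records. A minimal sketch, with illustrative δ13C-CH4 values rather than data from the study:

```python
# Hedged sketch: harmonizing two labs' delta13C-CH4 records by estimating
# their mean pairwise offset on co-measured samples. Values are illustrative.

def mean_offset(lab_a, lab_b):
    """Mean difference (lab_a - lab_b) over co-measured samples, in permil."""
    assert len(lab_a) == len(lab_b)
    return sum(a - b for a, b in zip(lab_a, lab_b)) / len(lab_a)

def harmonize(values, offset):
    """Shift one lab's record onto the other lab's scale."""
    return [v - offset for v in values]

# illustrative delta13C-CH4 values (permil) for the same flask samples
lab_a = [-47.10, -47.25, -47.18, -47.30]
lab_b = [-47.22, -47.38, -47.31, -47.42]

off = mean_offset(lab_a, lab_b)           # systematic offset between labs
lab_a_on_b_scale = harmonize(lab_a, off)  # merged-record candidate
```

    A single additive offset is of course the simplest possible correction; as the abstract notes, differences in instrument settings, correction methods and reference-material traceability can make real harmonization considerably harder.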

  2. Automatic Earthquake Detection by Active Learning

    NASA Astrophysics Data System (ADS)

    Bergen, K.; Beroza, G. C.

    2017-12-01

    In recent years, advances in machine learning have transformed fields such as image recognition, natural language processing and recommender systems. Many of these performance gains have relied on the availability of large, labeled data sets to train high-accuracy models; labeled data sets are those for which each sample includes a target class label, such as waveforms tagged as either earthquakes or noise. Earthquake seismologists are increasingly leveraging machine learning and data mining techniques to detect and analyze weak earthquake signals in large seismic data sets. One of the challenges in applying machine learning to seismic data sets is the limited labeled data problem; learning algorithms need to be given examples of earthquake waveforms, but the number of known events, taken from earthquake catalogs, may be insufficient to build an accurate detector. Furthermore, earthquake catalogs are known to be incomplete, resulting in training data that may be biased towards larger events and contain inaccurate labels. This challenge is compounded by the class imbalance problem; the events of interest, earthquakes, are infrequent relative to noise in continuous data sets, and many learning algorithms perform poorly on rare classes. In this work, we investigate the use of active learning for automatic earthquake detection. Active learning is a type of semi-supervised machine learning that uses a human-in-the-loop approach to strategically supplement a small initial training set. The learning algorithm incorporates domain expertise through interaction between a human expert and the algorithm, with the algorithm actively posing queries to the user to improve detection performance. We demonstrate the potential of active machine learning to improve earthquake detection performance with limited available training data.
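
    The pool-based, human-in-the-loop procedure described above can be sketched on a toy 1-D detection problem. The threshold classifier and the `oracle` function below are illustrative stand-ins (not the authors' seismic implementation); the query rule is plain uncertainty sampling, i.e. label the pool point closest to the current decision boundary.

```python
# Hedged sketch of pool-based active learning with uncertainty sampling
# on a toy 1-D problem: the true class boundary lies between 4 and 5.

def fit_threshold(labeled):
    """Toy classifier: threshold halfway between the two classes."""
    neg = [x for x, y in labeled if y == 0]
    pos = [x for x, y in labeled if y == 1]
    return (max(neg) + min(pos)) / 2.0

def oracle(x):
    """Stand-in for the human expert labeling a queried sample."""
    return 1 if x >= 5.0 else 0

pool = [float(x) for x in range(11)]   # unlabeled pool, 0..10
labeled = [(0.0, 0), (10.0, 1)]        # small initial training set
pool.remove(0.0)
pool.remove(10.0)

for _ in range(4):                     # query budget
    t = fit_threshold(labeled)
    query = min(pool, key=lambda x: abs(x - t))  # most uncertain sample
    pool.remove(query)
    labeled.append((query, oracle(query)))       # expert answers the query

final = fit_threshold(labeled)         # boundary recovered from few labels
```

    After four queries the learner pins the boundary down to 4.5, illustrating why strategically chosen queries can stretch a small labeled catalog further than randomly chosen ones.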

  3. Assessing the Impact of Model Parameter Uncertainty in Simulating Grass Biomass Using a Hybrid Carbon Allocation Strategy

    NASA Astrophysics Data System (ADS)

    Reyes, J. J.; Adam, J. C.; Tague, C.

    2016-12-01

    Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. 
Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
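
    A Latin hypercube scheme like the one named above draws exactly one stratified sample per interval in each parameter dimension, then shuffles the strata independently across dimensions. A minimal pure-Python sketch with illustrative parameter bounds:

```python
# Minimal Latin hypercube sampler: each dimension is split into n_samples
# equal strata, one uniform draw per stratum, shuffled per dimension.
import random

def latin_hypercube(n_samples, bounds, seed=0):
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        # one draw inside each of the n_samples equal intervals of [lo, hi]
        pts = [lo + (hi - lo) * (i + rng.random()) / n_samples
               for i in range(n_samples)]
        rng.shuffle(pts)               # decorrelate dimensions
        columns.append(pts)
    return list(zip(*columns))         # one parameter tuple per sample

# illustrative bounds for two model parameters (names/ranges hypothetical)
bounds = [(0.0, 1.0), (200.0, 400.0)]
samples = latin_hypercube(10, bounds)
```

    The stratification guarantees that every tenth of each parameter's range is visited exactly once, which is what makes the scheme efficient for building calibration and uncertainty-analysis parameter sets.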

  4. Moon-Mars simulation campaign in volcanic Eifel: Remote science support and sample analysis

    NASA Astrophysics Data System (ADS)

    Offringa, Marloes; Foing, Bernard H.; Kamps, Oscar

    2016-07-01

    Moon-Mars analogue missions using a mock-up lander that is part of the ESA/ILEWG ExoGeoLab project were conducted during Eifel field campaigns in 2009, 2015 and 2016 (Foing et al., 2010). In the last EuroMoonMars2016 campaign the lander was used to conduct reconnaissance experiments and in situ geological scientific analysis of samples, with a payload that mainly consisted of a telescope and a UV-VIS reflectance spectrometer. The aim of the campaign was to exhibit possibilities for the ExoGeoLab lander to perform remotely controlled experiments and test its applicability in the field by simulating the interaction with astronauts. The Eifel region in Germany where the experiments with the ExoGeoLab lander were conducted is a Moon-Mars analogue due to its geological setting and volcanic rock composition. The research conducted by analysis equipment on the lander could function in support of Moon-Mars sample return missions, by providing preliminary insight into characteristics of the analyzed samples. The set-up of the prototype lander was that of a telescope with camera and solar power equipment deployed on the top, the UV-VIS reflectance spectrometer together with computers and a sample webcam were situated in the middle compartment and to the side a sample analysis test bench was attached, attainable by astronauts from outside the lander. An alternative light source that illuminated the samples in case of insufficient daylight was placed on top of the lander and functioned on solar power. The telescope, teleoperated from a nearby stationed pressurized transport vehicle that functioned as a base control center, attained an overview of the sampling area and assisted the astronauts in their initial scouting pursuits. Locations of suitable sampling sites based on these obtained images were communicated to the astronauts, before being acquired during a simulated EVA. 
    Sampled rocks and soils were remotely analyzed by the base control center, while the astronauts assisted by placing the samples onto the sample holder and adjusting test bench settings in order to obtain spectra. After analysis the collected samples were documented and stored by the astronauts, before returning to the base. Points of improvement for the EuroMoonMars2016 analogue campaign include remote control of the computers over an established network between the base and the lander. In future missions the computers should preferably be operated over a larger distance without interference. In the bottom compartment of the lander a rover is stored that in future campaigns could replace astronaut functions by collecting and returning samples, as well as performing adjustments to the analysis test bench by using a remotely controlled robotic arm. Acknowledgements: we thank Dominic Doyle for ESTEC optical lab support, Aidan Cowley (EAC) and Matthias Sperl (DLR) for support discussions, and collaborators from the EuroMoonMars Eifel 2015-16 campaign team.

  5. Operationalizing multimorbidity and autonomy for health services research in aging populations - the OMAHA study

    PubMed Central

    2011-01-01

    Background As part of a Berlin-based research consortium on health in old age, the OMAHA (Operationalizing Multimorbidity and Autonomy for Health Services Research in Aging Populations) study aims to develop a conceptual framework and a set of standardized instruments and indicators for continuous monitoring of multimorbidity and associated health care needs in the population 65 years and older. Methods/Design OMAHA is a longitudinal epidemiological study including a comprehensive assessment at baseline and at 12-month follow-up as well as brief intermediate telephone interviews at 6 and 18 months. In order to evaluate different sampling procedures and modes of data collection, the study is conducted in two different population-based samples of men and women aged 65 years and older. A geographically defined sample was recruited from an age and sex stratified random sample from the register of residents in Berlin-Mitte (Berlin OMAHA study cohort, n = 299) for assessment by face-to-face interview and examination. A larger nationwide sample (German OMAHA study cohort, n = 730) was recruited for assessment by telephone interview among participants in previous German Telephone Health Surveys. In both cohorts, we successfully applied a multi-dimensional set of instruments to assess multimorbidity, functional disability in daily life, autonomy, quality of life (QoL), health care services utilization, personal and social resources as well as socio-demographic and biographical context variables. Response rates considerably varied between the Berlin and German OMAHA study cohorts (22.8% vs. 59.7%), whereas completeness of follow-up at month 12 was comparably high in both cohorts (82.9% vs. 81.2%). Discussion The OMAHA study offers a wide spectrum of data concerning health, functioning, social involvement, psychological well-being, and cognitive capacity in community-dwelling older people in Germany. 
Results from the study will add to methodological and content-specific discourses on human resources for maintaining quality of life and autonomy throughout old age, even in the face of multiple health complaints. PMID:21352521

  6. Augmenting comprehension of geological relationships by integrating 3D laser scanned hand samples within a GIS environment

    NASA Astrophysics Data System (ADS)

    Harvey, A. S.; Fotopoulos, G.; Hall, B.; Amolins, K.

    2017-06-01

    Geological observations can be made on multiple scales, including micro- (e.g. thin section), meso- (e.g. hand-sized to outcrop) and macro- (e.g. outcrop and larger) scales. Types of meso-scale samples include, but are not limited to, rocks (including drill cores), minerals, and fossils. The spatial relationship among samples paired with physical (e.g. granulometric composition, density, roughness) and chemical (e.g. mineralogical and isotopic composition) properties can aid in interpreting geological settings, such as paleo-environmental and formational conditions as well as geomorphological history. Field samples are collected along traverses in the area of interest based on characteristic representativeness of a region, predetermined rate of sampling, and/or uniqueness. The location of a sample can provide relative context in seeking out additional key samples. Beyond labelling and recording of geospatial coordinates for samples, further analysis of physical and chemical properties may be conducted in the field and laboratory. The main motivation for this paper is to present a workflow for the digital preservation of samples (via 3D laser scanning) paired with the development of cyber infrastructure, which offers geoscientists and engineers the opportunity to access an increasingly diverse worldwide collection of digital Earth materials. This paper describes a Web-based graphical user interface developed using Web AppBuilder for ArcGIS for digitized meso-scale 3D scans of geological samples to be viewed alongside the macro-scale environment. Over 100 samples of virtual rocks, minerals and fossils populate the developed geological database and are linked explicitly with their associated attributes, characteristic properties, and location. Applications of this new Web-based geological visualization paradigm in the geosciences demonstrate the utility of such a tool in an age of increasing global data sharing.

  7. Iodine assisted retainment of implanted silver in 6H-SiC at high temperatures

    NASA Astrophysics Data System (ADS)

    Hlatshwayo, T. T.; van der Berg, N. G.; Msimanga, M.; Malherbe, J. B.; Kuhudzai, R. J.

    2014-09-01

    The effect of high temperature thermal annealing on the retainment and diffusion behaviour of iodine (I) and silver (Ag) both individually and co-implanted into 6H-SiC has been investigated using RBS, RBS-C and heavy ion ERDA (Elastic Recoil Detection Analysis). Iodine and silver ions at 360 keV were both individually and co-implanted into 6H-SiC at room temperature to fluences of the order of 1 × 10^16 cm^-2. RBS analyses of the as-implanted samples indicated that implantation of Ag and of I and co-implantation of 131I and 109Ag at room temperature resulted in complete amorphization of 6H-SiC from the surface to a depth of about 290 nm for the co-implanted samples. Annealing at 1500 °C for 30 h (also with samples annealed at 1700 °C for 5 h) caused diffusion accompanied by some loss of both species at the surface with some iodine remaining in the iodine implanted samples. In the Ag implanted samples, the RBS spectra showed that all the Ag disappeared. SEM images showed different recrystallization behaviour for all three sets of samples, with larger faceted crystals appearing in the SiC samples containing iodine. Heavy Ion ERDA analyses showed that both 109Ag and 131I remained in the co-implanted SiC samples after annealing at 1500 °C for 30 h. Therefore, iodine assisted in the retainment of silver in SiC even at high temperature.

  8. Study protocol for the translating research in elder care (TREC): building context – an organizational monitoring program in long-term care project (project one)

    PubMed Central

    Estabrooks, Carole A; Squires, Janet E; Cummings, Greta G; Teare, Gary F; Norton, Peter G

    2009-01-01

    Background While there is a growing awareness of the importance of organizational context (or the work environment/setting) to successful knowledge translation, and successful knowledge translation to better patient, provider (staff), and system outcomes, little empirical evidence supports these assumptions. Further, little is known about the factors that enhance knowledge translation and better outcomes in residential long-term care facilities, where care has been shown to be suboptimal. The project described in this protocol is one of the two main projects of the larger five-year Translating Research in Elder Care (TREC) program. Aims The purpose of this project is to establish the magnitude of the effect of organizational context on knowledge translation, and subsequently on resident, staff (unregulated, regulated, and managerial) and system outcomes in long-term care facilities in the three Canadian Prairie Provinces (Alberta, Saskatchewan, Manitoba). Methods/Design This study protocol describes the details of a multi-level – including provinces, regions, facilities, units within facilities, and individuals who receive care (residents) or work (staff) in facilities – and longitudinal (five-year) research project. A stratified random sample of 36 residential long-term care facilities (30 urban and 6 rural) from the Canadian Prairie Provinces will comprise the sample. Caregivers and care managers within these facilities will be asked to complete the TREC survey – a suite of survey instruments designed to assess organizational context and related factors hypothesized to be important to successful knowledge translation and to achieving better resident, staff, and system outcomes. Facility and unit level data will be collected using standardized data collection forms, and resident outcomes using the Resident Assessment Instrument-Minimum Data Set version 2.0 instrument. 
A variety of analytic techniques will be employed including descriptive analyses, psychometric analyses, multi-level modeling, and mixed-method analyses. Discussion Three key challenging areas associated with conducting this project are discussed: sampling, participant recruitment, and sample retention; survey administration (with unregulated caregivers); and the provision of a stable set of study definitions to guide the project. PMID:19671166

  9. Rock Geochemistry and Mineralogy from Fault Zones and Polymetallic Fault Veins of the Central Front Range, Colorado

    USGS Publications Warehouse

    Caine, Jonathan S.; Bove, Dana J.

    2010-01-01

    During the 2004 to 2008 field seasons, approximately 200 hand samples of fault and polymetallic vein-related rocks were collected for geochemical and mineralogical analyses. The samples were collected by the U.S. Geological Survey as part of the Evolution of Brittle Structures Task under the Central Colorado Assessment Project (CCAP) of the Mineral Resources Program (http://minerals.cr.usgs.gov/projects/colorado_assessment/index.html). The purpose of this work has been to characterize the relation between epithermal, polymetallic mineral deposits, paleostress, and the geological structures that hosted fluid flow and localization of the deposits. The data in this report will be used to document and better understand the processes that control epithermal mineral-deposit formation by attempting to relate the geochemistry of the primary structures that hosted hydrothermal fluid flow to their heat and fluid sources. This includes processes from the scale of the structures themselves to the far field scale, inclusive of the intrusive bodies that have been thought to be the sources for the hydrothermal fluid flow. The data presented in this report are part of a larger assessment effort on public lands. The larger study area spans the region of the southern Rocky Mountains in Colorado from the Wyoming to New Mexico borders and from the eastern boundary of the Front Range to approximately the longitude of Vail and Leadville, Colorado. Although the study area has had an extensive history of geological mapping, the mapping has resulted in a number of hypotheses that are still in their infancy of being tested. For example, the proximity of polymetallic veins to intrusive bodies has been thought to reflect a genetic relation between the two features; however, this idea has not been well tested with geochemical indicators. 
Recent knowledge regarding the coupled nature of stress, strain, fluid flow, and geochemistry warrant new investigations and approaches to test a variety of ideas regarding the genetic processes associated with ore-deposit formation. The central part of the eastern Front Range has excellent exposures of fault zones and polymetallic fault veins, subsequently resulting in some of the most detailed mapping and associated data sets in the region. Thus, the area was chosen for detailed data compilation, new sample and data collection, and a variety of structural and geochemical analyses. The data presented in this report come from samples of fault-related exposures in the Front Range and include elemental chemistry and mineralogy from the outcrop-scale study localities within the larger CCAP study area.

  10. Comparing the efficiency of digital and conventional soil mapping to predict soil types in a semi-arid region in Iran

    NASA Astrophysics Data System (ADS)

    Zeraatpisheh, Mojtaba; Ayoubi, Shamsollah; Jafari, Azam; Finke, Peter

    2017-05-01

    The efficiency of different digital and conventional soil mapping approaches to produce categorical maps of soil types is determined by cost, sample size, accuracy and the selected taxonomic level. The efficiency of digital and conventional soil mapping approaches was examined in the semi-arid region of Borujen, central Iran. This research aimed to (i) compare two digital soil mapping approaches including Multinomial logistic regression and random forest, with the conventional soil mapping approach at four soil taxonomic levels (order, suborder, great group and subgroup levels), (ii) validate the predicted soil maps by the same validation data set to determine the best method for producing the soil maps, and (iii) select the best soil taxonomic level by different approaches at three sample sizes (100, 80, and 60 point observations), in two scenarios with and without a geomorphology map as a spatial covariate. In most predicted maps, using both digital soil mapping approaches, the best results were obtained using the combination of terrain attributes and the geomorphology map, although differences between the scenarios with and without the geomorphology map were not significant. Employing the geomorphology map increased map purity and the Kappa index, and led to a decrease in the 'noisiness' of soil maps. Multinomial logistic regression had better performance at higher taxonomic levels (order and suborder levels); however, random forest showed better performance at lower taxonomic levels (great group and subgroup levels). Multinomial logistic regression was less sensitive than random forest to a decrease in the number of training observations. The conventional soil mapping method produced a map with larger minimum polygon size because of traditional cartographic criteria used to make the geological map 1:100,000 (on which the conventional soil mapping map was largely based). 
    Likewise, the conventional soil map also had a larger average polygon size, resulting in a lower level of detail. Multinomial logistic regression at the order level (map purity of 0.80), random forest at the suborder (map purity of 0.72) and great group level (map purity of 0.60), and conventional soil mapping at the subgroup level (map purity of 0.48) produced the most accurate maps in the study area. The multinomial logistic regression method was identified as the most effective approach based on a combined index of map purity, map information content, and map production cost. The combined index also showed that smaller sample size led to a preference for the order level, while a larger sample size led to a preference for the great group level.
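
    The map purity (overall accuracy) and Kappa index used above to validate the categorical maps can both be computed from a confusion matrix of validation points. The counts below are illustrative, not the study's data:

```python
# Hedged sketch: map purity and Cohen's Kappa from a confusion matrix.
# confusion[i][j] counts validation points of true class i mapped as class j.

def purity_and_kappa(confusion):
    n = sum(sum(row) for row in confusion)
    # observed agreement: fraction of validation points on the diagonal
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    # chance agreement from the row and column marginals
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / (n * n)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

confusion = [[35, 5],    # illustrative 2-class validation counts
             [10, 50]]
purity, kappa = purity_and_kappa(confusion)
```

    Kappa discounts the agreement expected by chance, which is why a map can show a purity of 0.85 yet a noticeably lower Kappa, as in this toy matrix.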

  11. RoboPol: first season rotations of optical polarization plane in blazars

    DOE PAGES

    Blinov, D.; Pavlidou, V.; Papadakis, I.; ...

    2015-08-26

    Here, we present first results on polarization swings in optical emission of blazars obtained by RoboPol, a monitoring programme of an unbiased sample of gamma-ray bright blazars specially designed for effective detection of such events. A possible connection of polarization swing events with periods of high activity in gamma-rays is investigated using the data set obtained during the first season of operation. It was found that the brightest gamma-ray flares tend to be located closer in time to rotation events, which may be an indication of two separate mechanisms responsible for the rotations. Blazars with detected rotations during non-rotating periods have significantly larger amplitude and faster variations of polarization angle than blazars without rotations. Our simulations show that the full set of observed rotations is not a likely outcome (probability ≤1.5 × 10^-2) of a random walk of the polarization vector simulated by a multicell model. Furthermore, it is highly unlikely (~5 × 10^-5) that none of our rotations is physically connected with an increase in gamma-ray activity.
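
    The multicell random-walk null model mentioned above can be caricatured in a few lines: the electric vector position angle (EVPA) of the summed emission of many cells wanders as cells are refreshed with random polarization directions. The sketch below refreshes one cell per step and is purely illustrative of the null hypothesis, not the authors' simulation code:

```python
# Hedged multicell-style sketch: EVPA of the sum of N cells with random
# polarization directions performs a random walk as cells are refreshed.
import math
import random

def evpa_random_walk(n_cells=10, n_steps=200, seed=1):
    rng = random.Random(seed)
    # each cell carries a unit Stokes vector (q, u)
    cells = [(math.cos(a), math.sin(a))
             for a in (rng.uniform(0, 2 * math.pi) for _ in range(n_cells))]
    angles = []
    for step in range(n_steps):
        a = rng.uniform(0, 2 * math.pi)
        cells[step % n_cells] = (math.cos(a), math.sin(a))  # refresh one cell
        q = sum(c[0] for c in cells)
        u = sum(c[1] for c in cells)
        angles.append(0.5 * math.degrees(math.atan2(u, q)))  # EVPA, degrees
    return angles

angles = evpa_random_walk()
```

    Comparing the amplitude and speed of swings in many such synthetic walks against the observed rotations is the kind of test that yields the low probabilities quoted in the abstract.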

  12. Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications

    PubMed Central

    Huang, Jian; Zhang, Cun-Hui

    2013-01-01

    The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
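The multistage idea of applying an adaptive Lasso recursively can be sketched in a few lines; the data, penalty level, and single reweighting step below are illustrative assumptions, not the paper's estimator:

```python
# Minimal sketch of a two-stage (adaptive) Lasso in a p >> n setting:
# fit an ordinary Lasso, then refit with per-coefficient weights inversely
# proportional to the first-stage magnitudes, approximating concave
# regularization. Dimensions, alpha, and the weight floor are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 80, 200                           # p much larger than the sample size n
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 2.5, 1.5, -1.0]   # sparse truth: 5 active predictors
y = X @ beta + rng.normal(scale=0.5, size=n)

# Stage 1: ordinary Lasso
b1 = Lasso(alpha=0.1).fit(X, y).coef_

# Stage 2: weighted L1 penalty implemented by rescaling columns
w = 1.0 / (np.abs(b1) + 1e-3)            # large weight -> strong shrinkage
Xw = X / w                               # column j scaled by 1/w_j
b2 = Lasso(alpha=0.1).fit(Xw, y).coef_ / w

support = np.flatnonzero(np.abs(b2) > 1e-6)
print("selected predictors:", support)
```

The reweighting step mimics one iteration of the recursive scheme: predictors shrunk to zero in stage 1 receive an effectively infinite penalty in stage 2, while strong predictors are penalized less.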

  13. Guidelines for a cancer prevention smartphone application: A mixed-methods study.

    PubMed

    Ribeiro, Nuno; Moreira, Luís; Barros, Ana; Almeida, Ana Margarida; Santos-Silva, Filipe

    2016-10-01

This study sought to explore the views and experiences of healthy young adults concerning the fundamental features of a cancer prevention smartphone app that seeks behaviour change. Three focus groups were conducted with 16 healthy young adults, exploring prior experiences, points of view and opinions about currently available health-related smartphone apps. An online questionnaire was then designed and administered to a larger sample of healthy young adults. Focus group and online questionnaire data were analysed and compared. The results identified behaviour tracking, goal setting, tailored information and use of reminders as the most desired features in a cancer prevention app. Participants highlighted the importance of privacy and were reluctant to share personal health information with other users. The results also point to important dimensions, related to usability and perceived usefulness, to be considered for long-term use of health promotion apps. Participants did not consider gamification features important for long-term use of apps. This study allowed the definition of a set of guidelines for the development of a cancer prevention app. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Adult interpersonal features of subtypes of sexual offenders.

    PubMed

    Sigre-Leirós, Vera; Carvalho, Joana; Nobre, Pedro J

    2015-08-01

Although the role of interpersonal factors in sexual offending is already recognized, there is a need for further investigation of the psychosocial correlates of pedophilic behavior. This study aimed to examine the relationship between adult interpersonal features and subtypes of sexual offending. The study involved a total of 164 male convicted offenders, namely 50 rapists, 63 child molesters (20 pedophilic and 43 nonpedophilic), and 51 nonsexual offenders. All participants were assessed using the Adult Attachment Scale, the Interpersonal Behavior Survey, the Brief Symptom Inventory, and the Socially Desirable Response Set Measure. Results from sets of multinomial logistic regression analyses showed that pedophilic offenders were more likely to present anxiety in adult relationships compared with nonsex offenders. Likewise, nonpedophilic child molesters were less likely to be generally aggressive than rapists and nonsex offenders, as well as less generally assertive than rapists. Overall, findings indicated that certain interpersonal features characterized subtypes of offenders, thus providing some insight into their particular therapeutic needs. Further replication with larger samples, particularly of pedophilic child molesters, is required. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  15. The simulation library of the Belle II software system

    NASA Astrophysics Data System (ADS)

    Kim, D. Y.; Ritter, M.; Bilka, T.; Bobrov, A.; Casarosa, G.; Chilikin, K.; Ferber, T.; Godang, R.; Jaegle, I.; Kandra, J.; Kodys, P.; Kuhr, T.; Kvasnicka, P.; Nakayama, H.; Piilonen, L.; Pulvermacher, C.; Santelj, L.; Schwenker, B.; Sibidanov, A.; Soloviev, Y.; Starič, M.; Uglov, T.

    2017-10-01

SuperKEKB, the next-generation B factory, has been constructed in Japan as an upgrade of KEKB. This brand-new e+ e- collider is expected to deliver a very large data set for the Belle II experiment, 50 times larger than the previous Belle sample. Both the triggered physics event rate and the background event rate will increase by at least a factor of 10 over the previous ones, creating a challenging data-taking environment for the Belle II detector. The software system of the Belle II experiment is designed to execute this ambitious plan. A full detector simulation library, which is part of the Belle II software system, was created based on Geant4 and has been tested thoroughly. Recently the library was upgraded to Geant4 version 10.1. The library behaves as expected and is actively used in producing Monte Carlo data sets for various studies. In this paper, we explain the structure of the simulation library and the various interfaces to other packages, including geometry and beam background simulation.

  16. Promoting Student-Teacher Interactions: Exploring a Peer Coaching Model for Teachers in a Preschool Setting.

    PubMed

    Johnson, Stacy R; Finlon, Kristy J; Kobak, Roger; Izard, Carroll E

    2017-07-01

Peer coaching provides an attractive alternative to traditional professional development for promoting classroom quality in a sustainable, cost-effective manner by creating a collaborative teaching community. This exploratory study describes the development and evaluation of the Colleague Observation And CoacHing (COACH) program, a peer coaching program designed to increase teachers' effectiveness in enhancing classroom quality in a preschool Head Start setting. The COACH program consists of a training workshop on coaching skills and student-teacher interactions, six peer coaching sessions, and three center meetings. Pre-post observations of emotional support, classroom organization, and instructional support, using the Classroom Assessment Scoring System, compared twelve classrooms assigned to peer coaching with twelve control classrooms at baseline and following the intervention. Findings provide preliminary support that the peer coaching program is perceived as acceptable and feasible by the participating preschool teachers and that it may strengthen student-teacher interactions. Further program refinement and evaluation with larger samples are needed to enhance student-teacher interactions and, ultimately, children's adaptive development.

  17. A High-Granularity Approach to Modeling Energy Consumption and Savings Potential in the U.S. Residential Building Stock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

Building simulations are increasingly used in applications related to energy-efficient buildings. For individual buildings, applications include design of new buildings, prediction of retrofit savings, ratings, performance-path code compliance, and qualification for incentives. Beyond individual buildings, larger-scale applications (across the stock of buildings at the national, regional, and state scales) include codes and standards development, utility program design, regional/state planning, and technology assessments. For these applications, a set of representative buildings is typically simulated to predict the performance of the entire population of buildings. Focusing on the U.S. single-family residential building stock, this paper describes how multiple data sources for building characteristics are combined into a highly granular database that preserves the important interdependencies of the characteristics. We present the sampling technique used to generate a representative set of thousands (up to hundreds of thousands) of building models. We also present results of detailed calibrations against building stock consumption data.

  18. Who Shall Not Be Treated: Public Attitudes on Setting Health Care Priorities by Person-Based Criteria in 28 Nations.

    PubMed

    Rogge, Jana; Kittel, Bernhard

    2016-01-01

The principle of distributing health care according to medical need is being challenged by increasing costs. As a result, many countries have initiated a debate on the introduction of explicit priority regulations based on medical, economic and person-based criteria, or have already established such regulations. Previous research on individual attitudes towards setting health care priorities based on medical and economic criteria has produced consistent results, whereas studies on the use of person-based criteria have generated controversial findings. This paper examines citizens' attitudes towards three person-based priority criteria: patients' smoking habits, age, and being the parent of a young child. Using data from the ISSP Health Module (2011) in 28 countries, logistic regression analysis demonstrates that self-interest as well as socio-demographic predictors significantly influence respondents' attitudes towards the use of person-based criteria for health care prioritization. This study contributes to resolving the controversial findings on person-based criteria by using a larger country sample and by controlling for country-level differences with fixed-effects models.

  19. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

Community ecologists commonly apply multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (counting too many individuals per sample) also comes at a cost, particularly for ecological systems in which identification and quantification are substantially more resource-consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually yield diminishing returns in improving any pattern or gradient revealed by the data, while costs continue to rise. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, we seek (1) to determine the minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based community ecology research, and (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness: increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases where resource availability is the limiting factor for conducting a project (e.g., a small university, or limited time to conduct the research), statistically viable results can still be obtained with less of an investment.
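The subsampling logic behind such minimal-sample-size estimates can be sketched as follows; the site counts, taxon counts, and stopping criterion below are toy assumptions, not the paper's 396-dataset design:

```python
# Toy sketch: draw progressively larger counts per site from fixed
# communities, and check when the Bray-Curtis distance structure
# stabilizes relative to a near-exhaustive reference sample.
import numpy as np

rng = np.random.default_rng(2)
n_sites, n_taxa = 12, 15
# fixed "true" community composition at each site (uneven abundances)
probs = rng.dirichlet(np.ones(n_taxa) * 0.5, size=n_sites)

def bray_curtis(counts):
    """Pairwise Bray-Curtis distances on relative abundances, flattened."""
    rel = counts / counts.sum(axis=1, keepdims=True)
    d = np.abs(rel[:, None, :] - rel[None, :, :]).sum(axis=-1) / 2.0
    return d[np.triu_indices(len(rel), 1)]

def draw(individuals_per_site):
    return np.array([rng.multinomial(individuals_per_site, p) for p in probs])

ref = bray_curtis(draw(5000))          # near-exhaustive sampling as reference
for size in (10, 30, 60, 120, 250):
    r = np.corrcoef(bray_curtis(draw(size)), ref)[0, 1]
    print(f"{size:4d} individuals/site: correlation with reference = {r:.3f}")
```

The sample size at which the correlation plateaus plays the role of the "minimal sample size" discussed above; in the paper this comparison is made on the full multivariate results rather than a single correlation.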

  20. Quality of groundwater at and near an aquifer storage and recovery site, Bexar, Atascosa, and Wilson Counties, Texas, June 2004-August 2008

    USGS Publications Warehouse

    Otero, Cassi L.; Petri, Brian L.

    2010-01-01

The U.S. Geological Survey, in cooperation with the San Antonio Water System, conducted a study during 2004-08 to characterize the quality of native groundwater from the Edwards aquifer and pre- and post-injection water from the Carrizo aquifer at and near an aquifer storage and recovery (ASR) site in Bexar, Atascosa, and Wilson Counties, Texas. Groundwater samples were collected and analyzed for selected physical properties and constituents to characterize the quality of native groundwater from the Edwards aquifer and pre- and post-injection water from the Carrizo aquifer at and near the ASR site. Geochemical and isotope data indicated that no substantial changes in major-ion, trace-element, and isotope chemistry occurred as the water from the Edwards aquifer was transferred through a 38-mile pipeline to the aquifer storage and recovery site. The samples collected from the four ASR recovery wells were similar in major-ion and stable isotope chemistry to the samples collected from the Edwards aquifer source wells and the ASR injection well. The similarity could indicate that as Edwards aquifer water was injected, it displaced native Carrizo aquifer water, or, alternatively, if mixing of Edwards and Carrizo aquifer waters was occurring, the major-ion and stable isotope signatures for the Carrizo aquifer water might have been obscured by the signatures of the injected Edwards aquifer water. Differences in the dissolved iron and dissolved manganese concentrations indicate that either minor amounts of mixing occurred between the waters from the two aquifers, or as Edwards aquifer water displaced Carrizo aquifer water it dissolved the iron and manganese directly from the Carrizo Sand. Concentrations of radium-226 in the samples collected at the ASR recovery wells were smaller than the concentrations in samples collected from the Edwards aquifer source wells and from the ASR injection well.
The smaller radium-226 concentrations in the samples collected from the ASR recovery wells likely indicate some degree of mixing of the two waters occurred rather than continued decay of radium-226 in the injected water. Geochemical and isotope data measured in samples collected in May 2005 from two Carrizo aquifer monitoring wells and in July 2008 from the three ASR production-only wells in the northern section of the ASR site indicate that injected Edwards aquifer water had not migrated to these five sites. Geochemical and isotope data measured in samples collected from Carrizo aquifer wells in 2004, 2005, and 2008 were graphically analyzed to determine if changes in chemistry could be detected. Major-ion, trace element, and isotope chemistry varied spatially in the samples collected from the Carrizo aquifer. With the exception of a few samples, major-ion concentrations measured in samples collected in Carrizo aquifer wells in 2004, 2005, and 2008 were similar. A slightly larger sulfate concentration and a slightly smaller bicarbonate concentration were measured in samples collected in 2005 and 2008 from well NC1 compared to samples collected at well NC1 in 2004. Larger sodium concentrations and smaller calcium, magnesium, bicarbonate, and sulfate concentrations were measured in samples collected in 2008 from well WC1 than in samples collected at this well in 2004 and 2005. Larger calcium and magnesium concentrations and a smaller sodium concentration were measured in the samples collected in 2008 at well EC2 compared to samples collected at this well in 2004 and 2005. While in some cases the computed percent differences (compared to concentrations from June 2004) in dissolved iron and dissolved manganese concentrations in 11 wells sampled in the Carrizo aquifer in 2005 and 2008 were quite large, no trends that might have been caused by migration of injected Edwards aquifer water were observed.
Because of the natural variation in geochemical data in the Carrizo aquifer and the small data set collected for this study, differences in major-ion and

  1. Socio-demographic and work-related risk factors for medium- and long-term sickness absence among Italian workers.

    PubMed

    d'Errico, Angelo; Costa, Giuseppe

    2012-10-01

Few studies have investigated determinants of sickness absence in representative samples of the general population, and none in Italy. The aim of this study was to assess the influence and relative importance of socio-demographic and work-related characteristics on medium- and long-term sickness absence in a random sample of Italian workers. Approximately 60,000 workers participating in a national survey in 2007 were interviewed regarding sickness absence during the whole previous week, and on socio-demographics, employment characteristics and exposure to a set of physical and psychosocial hazards in the workplace. The association between sickness absence and potential determinants was estimated by multivariable logistic regression models stratified by gender. In the final multivariate models, sickness absence in both genders was statistically significantly associated with tenured employment, working in larger firms, exposure to risk of injury and to bullying or discrimination and, among employees, with shift work. In males, sickness absence was also associated with lower education, employment in the public administration, and exposure to noise or vibration, whereas among women it was also associated with manual work and ergonomic factors. In both genders, the attributable fraction for employment-related characteristics was higher than that for socio-demographic ones. The association with tenured or salaried jobs, and with employment in larger firms or in the public sector, suggests that, besides illness, job security is the most important determinant of sickness absence, consistent with the results of previous studies. However, our results indicate that reducing exposure to workplace hazards may help reduce absenteeism.
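The attributable fractions compared above are typically computed with Levin's formula for the population attributable fraction; the prevalence and relative-risk values below are illustrative assumptions, not figures from the study:

```python
# Levin's population attributable fraction:
#   PAF = p(RR - 1) / (1 + p(RR - 1))
# where p is the prevalence of exposure in the population and RR the
# relative risk of the outcome given exposure. Inputs below are made up.
def levin_paf(prevalence, relative_risk):
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# e.g. a workplace hazard with 30% exposure prevalence and RR = 1.8:
print(f"PAF = {levin_paf(0.30, 1.8):.3f}")
```

Summing or comparing such fractions across groups of determinants (employment-related vs. socio-demographic) is what allows the kind of ranking reported in the abstract.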

  2. Implications of the Small Spin Changes Measured for Large Jupiter-Family Comet Nuclei

    NASA Astrophysics Data System (ADS)

    Kokotanekova, R.; Snodgrass, C.; Lacerda, P.; Green, S. F.; Nikolov, P.; Bonev, T.

    2018-06-01

    Rotational spin-up due to outgassing of comet nuclei has been identified as a possible mechanism for considerable mass-loss and splitting. We report a search for spin changes for three large Jupiter-family comets (JFCs): 14P/Wolf, 143P/Kowal-Mrkos, and 162P/Siding Spring. None of the three comets has detectable period changes, and we set conservative upper limits of 4.2 (14P), 6.6 (143P) and 25 (162P) minutes per orbit. Comparing these results with all eight other JFCs with measured rotational changes, we deduce that none of the observed large JFCs experiences significant spin changes. This suggests that large comet nuclei are less likely to undergo rotationally-driven splitting, and therefore more likely to survive more perihelion passages than smaller nuclei. We find supporting evidence for this hypothesis in the cumulative size distributions of JFCs and dormant comets, as well as in recent numerical studies of cometary orbital dynamics. We added 143P to the sample of 13 other JFCs with known albedos and phase-function slopes. This sample shows a possible correlation of increasing phase-function slopes for larger geometric albedos. Partly based on findings from recent space missions to JFCs, we hypothesise that this correlation corresponds to an evolutionary trend for JFCs. We propose that newly activated JFCs have larger albedos and steeper phase functions, which gradually decrease due to sublimation-driven erosion. If confirmed, this could be used to analyse surface erosion from ground and to distinguish between dormant comets and asteroids.

  3. Size-selective mortality of steelhead during freshwater and marine life stages related to freshwater growth in the Skagit River, Washington

    USGS Publications Warehouse

    Thompson, Jamie N.; Beauchamp, David A.

    2014-01-01

    We evaluated freshwater growth and survival from juvenile (ages 0–3) to smolt (ages 1–5) and adult stages in wild steelhead Oncorhynchus mykiss sampled in different precipitation zones of the Skagit River basin, Washington. Our objectives were to determine whether significant size-selective mortality (SSM) in steelhead could be detected between early and later freshwater stages and between each of these freshwater stages and returning adults and, if so, how SSM varied between these life stages and mixed and snow precipitation zones. Scale-based size-at-annulus comparisons indicated that steelhead in the snow zone were significantly larger at annulus 1 than those in the mixed rain–snow zone. Size at annuli 2 and 3 did not differ between precipitation zones, and we found no precipitation zone × life stage interaction effect on size at annulus. Significant freshwater and marine SSM was evident between the juvenile and adult samples at annulus 1 and between each life stage at annuli 2 and 3. Rapid growth between the final freshwater annulus and the smolt migration did not improve survival to adulthood; rather, it appears that survival in the marine environment may be driven by an overall higher growth rate set earlier in life, which results in a larger size at smolt migration. Efforts for recovery of threatened Puget Sound steelhead could benefit by considering that SSM between freshwater and marine life stages can be partially attributed to growth attained in freshwater habitats and by identifying those factors that limit growth during early life stages.

  4. X-Ray analysis of riverbank sediment of the Tisza (Hungary): identification of particles from a mine pollution event

    NASA Astrophysics Data System (ADS)

    Osán, J.; Kurunczi, S.; Török, S.; Van Grieken, R.

    2002-03-01

A serious heavy-metal pollution of the Tisza River occurred on March 10, 2000, arising from a mine-dumping site in Romania. Sediment samples were taken from the main riverbed at six sites in Hungary on March 16, 2000. The objective of this work was to distinguish the anthropogenic and crustal-erosion particles in the river sediment. The samples were investigated using both bulk X-ray fluorescence (XRF) and thin-window electron probe microanalysis (EPMA). For EPMA, a reverse Monte Carlo method calculated the quantitative elemental composition of each single sediment particle. A high abundance of pyrite-type particles was observed in some of the samples, indicating the influence of the mine dumps. Backscattered electron images showed that particles with a high-atomic-number matrix were in the 2 μm size range. In other words, the pyrite and the heavy-element phases either form small particles or occur as fragments of larger agglomerates; the latter are formed during the flotation process at the mines or become attached to natural crustal-erosion particles. The XRF analysis of pyrite-rich samples always showed much higher Cu, Zn and Pb concentrations than the rest of the samples, supporting the conclusions of the single-particle EPMA results. In the polluted samples, the concentrations of Cu, Zn and Pb reached 0.1, 0.3 and 0.2 wt.%, respectively. As a new approach, the abundance of particle classes obtained from single-particle EPMA and the elemental concentrations obtained by XRF were merged into one data set. The dimension of the common data set was reduced by principal component analysis. The first component was determined by the abundance of pyrite and zinc sulfide particles and the concentrations of Cu, Zn and Pb. The polluted samples formed a distinct group in the principal component space. The same result was supported by powder diffraction data.
These analytical data combined with Earth Observation Techniques can be further used to estimate the quantity of particles originating from mine tailings on a defined river section.
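The data-fusion step described above, merging particle-class abundances with bulk element concentrations and reducing the result by PCA, can be sketched as follows; the site count, variable names, and effect sizes are assumptions for illustration:

```python
# Sketch of the fusion step: EPMA particle-class abundances and XRF element
# concentrations are standardized, concatenated, and reduced by PCA.
# Polluted sites (high pyrite/ZnS abundance and high Cu/Zn/Pb) should
# separate along the first principal component. All numbers are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# 6 sampling sites: the first 3 are "polluted"
polluted = np.array([1, 1, 1, 0, 0, 0])
epma = rng.normal(size=(6, 2)) + 3.0 * polluted[:, None]  # pyrite, ZnS abundance
xrf = rng.normal(size=(6, 3)) + 3.0 * polluted[:, None]   # Cu, Zn, Pb

merged = np.hstack([epma, xrf])                            # one common data set
scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(merged))
print(scores[:, 0])   # PC1 separates polluted from unpolluted sites
```

Standardizing before PCA matters here because the two instruments report on different scales (class abundances vs. wt.% concentrations); without it, the larger-variance block would dominate the components.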

  5. Neural activity in the hippocampus predicts individual visual short-term memory capacity.

    PubMed

    von Allmen, David Yoh; Wurmitzer, Karoline; Martin, Ernst; Klaver, Peter

    2013-07-01

Although the hippocampus has traditionally been thought to be exclusively involved in long-term memory, recent studies have raised controversial explanations of why hippocampal activity emerges during short-term memory tasks. For example, it has been argued that long-term memory processes might contribute to performance within a short-term memory paradigm when memory capacity has been exceeded. It is still unclear, though, whether neural activity in the hippocampus predicts visual short-term memory (VSTM) performance. To investigate this question, we measured BOLD activity in 21 healthy adults (age range 19-27 yr, nine males) while they performed a match-to-sample task requiring processing of object-location associations (delay period = 900 ms; set-size conditions 1, 2, 4, and 6). Based on individual memory capacity (estimated by Cowan's K formula), two performance groups were formed (high and low performers). In whole-brain analyses, we found a robust main effect of set size in the posterior parietal cortex (PPC). In line with a set size × group interaction in the hippocampus, a subsequent Finite Impulse Response (FIR) analysis revealed divergent hippocampal activation patterns between performance groups: low performers (mean capacity = 3.63) showed increased neural activity at set size two, followed by a drop in activity at set sizes four and six, whereas high performers (mean capacity = 5.19) showed an incremental activity increase with larger set size (maximal activation at set size six). Our data demonstrate that performance-related neural activity in the hippocampus emerged below the capacity limit. In conclusion, we suggest that hippocampal activity reflected successful processing of object-location associations in VSTM. Neural activity in the PPC might have been involved in attentional updating. Copyright © 2013 Wiley Periodicals, Inc.
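The capacity estimates above (e.g. mean K = 3.63 vs. 5.19) come from Cowan's K, which for change-detection tasks is commonly written as K = set size × (hit rate − false-alarm rate). A one-function sketch, with made-up performance numbers:

```python
# Cowan's K as commonly defined for change-detection / match-to-sample
# tasks: capacity = set_size * (hit_rate - false_alarm_rate).
# The example rates below are hypothetical, not the study's data.
def cowans_k(set_size, hit_rate, false_alarm_rate):
    return set_size * (hit_rate - false_alarm_rate)

# e.g. at set size 6, with 80% hits and 15% false alarms:
k = cowans_k(6, 0.80, 0.15)
print(f"estimated capacity K = {k:.2f}")   # -> 3.90
```

Averaging K across set sizes per participant, then splitting at the group median, is one standard way to form the high/low performer groups described above.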

  6. 7 CFR 201.52 - Noxious-weed seeds.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... the bulk examined for noxious-weed seeds need not be noted: 1/2-gram purity working sample, 16 or more seeds; 1-gram purity working sample, 23 or more seeds; 2-gram purity working sample or larger, 30 or...

  7. A global analysis of Y-chromosomal haplotype diversity for 23 STR loci

    PubMed Central

    Purps, Josephine; Siegert, Sabine; Willuweit, Sascha; Nagy, Marion; Alves, Cíntia; Salazar, Renato; Angustia, Sheila M.T.; Santos, Lorna H.; Anslinger, Katja; Bayer, Birgit; Ayub, Qasim; Wei, Wei; Xue, Yali; Tyler-Smith, Chris; Bafalluy, Miriam Baeta; Martínez-Jarreta, Begoña; Egyed, Balazs; Balitzki, Beate; Tschumi, Sibylle; Ballard, David; Court, Denise Syndercombe; Barrantes, Xinia; Bäßler, Gerhard; Wiest, Tina; Berger, Burkhard; Niederstätter, Harald; Parson, Walther; Davis, Carey; Budowle, Bruce; Burri, Helen; Borer, Urs; Koller, Christoph; Carvalho, Elizeu F.; Domingues, Patricia M.; Chamoun, Wafaa Takash; Coble, Michael D.; Hill, Carolyn R.; Corach, Daniel; Caputo, Mariela; D’Amato, Maria E.; Davison, Sean; Decorte, Ronny; Larmuseau, Maarten H.D.; Ottoni, Claudio; Rickards, Olga; Lu, Di; Jiang, Chengtao; Dobosz, Tadeusz; Jonkisz, Anna; Frank, William E.; Furac, Ivana; Gehrig, Christian; Castella, Vincent; Grskovic, Branka; Haas, Cordula; Wobst, Jana; Hadzic, Gavrilo; Drobnic, Katja; Honda, Katsuya; Hou, Yiping; Zhou, Di; Li, Yan; Hu, Shengping; Chen, Shenglan; Immel, Uta-Dorothee; Lessig, Rüdiger; Jakovski, Zlatko; Ilievska, Tanja; Klann, Anja E.; García, Cristina Cano; de Knijff, Peter; Kraaijenbrink, Thirsa; Kondili, Aikaterini; Miniati, Penelope; Vouropoulou, Maria; Kovacevic, Lejla; Marjanovic, Damir; Lindner, Iris; Mansour, Issam; Al-Azem, Mouayyad; Andari, Ansar El; Marino, Miguel; Furfuro, Sandra; Locarno, Laura; Martín, Pablo; Luque, Gracia M.; Alonso, Antonio; Miranda, Luís Souto; Moreira, Helena; Mizuno, Natsuko; Iwashima, Yasuki; Neto, Rodrigo S. 
Moura; Nogueira, Tatiana L.S.; Silva, Rosane; Nastainczyk-Wulf, Marina; Edelmann, Jeanett; Kohl, Michael; Nie, Shengjie; Wang, Xianping; Cheng, Baowen; Núñez, Carolina; Pancorbo, Marian Martínez de; Olofsson, Jill K.; Morling, Niels; Onofri, Valerio; Tagliabracci, Adriano; Pamjav, Horolma; Volgyi, Antonia; Barany, Gusztav; Pawlowski, Ryszard; Maciejewska, Agnieszka; Pelotti, Susi; Pepinski, Witold; Abreu-Glowacka, Monica; Phillips, Christopher; Cárdenas, Jorge; Rey-Gonzalez, Danel; Salas, Antonio; Brisighelli, Francesca; Capelli, Cristian; Toscanini, Ulises; Piccinini, Andrea; Piglionica, Marilidia; Baldassarra, Stefania L.; Ploski, Rafal; Konarzewska, Magdalena; Jastrzebska, Emila; Robino, Carlo; Sajantila, Antti; Palo, Jukka U.; Guevara, Evelyn; Salvador, Jazelyn; Ungria, Maria Corazon De; Rodriguez, Jae Joseph Russell; Schmidt, Ulrike; Schlauderer, Nicola; Saukko, Pekka; Schneider, Peter M.; Sirker, Miriam; Shin, Kyoung-Jin; Oh, Yu Na; Skitsa, Iulia; Ampati, Alexandra; Smith, Tobi-Gail; Calvit, Lina Solis de; Stenzl, Vlastimil; Capal, Thomas; Tillmar, Andreas; Nilsson, Helena; Turrina, Stefania; De Leo, Domenico; Verzeletti, Andrea; Cortellini, Venusia; Wetton, Jon H.; Gwynne, Gareth M.; Jobling, Mark A.; Whittle, Martin R.; Sumita, Denilce R.; Wolańska-Nowak, Paulina; Yong, Rita Y.Y.; Krawczak, Michael; Nothnagel, Michael; Roewer, Lutz

    2014-01-01

In a worldwide collaborative effort, 19,630 Y-chromosomes were sampled from 129 different populations in 51 countries. These chromosomes were typed for 23 short-tandem-repeat (STR) loci (DYS19, DYS389I, DYS389II, DYS390, DYS391, DYS392, DYS393, DYS385ab, DYS437, DYS438, DYS439, DYS448, DYS456, DYS458, DYS635, GATAH4, DYS481, DYS533, DYS549, DYS570, DYS576, and DYS643) using the PowerPlex Y23 System (PPY23, Promega Corporation, Madison, WI). Locus-specific allelic spectra of these markers were determined and a consistently high level of allelic diversity was observed. A considerable number of null, duplicate and off-ladder alleles were revealed. Standard single-locus and haplotype-based parameters were calculated and compared between subsets of Y-STR markers established for forensic casework. The PPY23 marker set provides substantially stronger discriminatory power than other available kits but at the same time reveals the same general patterns of population structure as other marker sets. A strong correlation was observed between the number of Y-STRs included in a marker set and some of the forensic parameters under study. Interestingly, a weak but consistent trend toward smaller genetic distances resulting from larger numbers of markers became apparent. PMID:24854874
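One of the standard haplotype-based forensic parameters mentioned above is haplotype diversity, usually computed in Nei's unbiased form D = n/(n−1) × (1 − Σ pᵢ²); the sample below is hypothetical, not from the PPY23 data set:

```python
# Haplotype diversity in the standard (Nei) form used for Y-STR panels:
#   D = n/(n-1) * (1 - sum of squared haplotype frequencies)
# The haplotype labels and counts below are made up for illustration.
from collections import Counter

def haplotype_diversity(haplotypes):
    n = len(haplotypes)
    freqs = [count / n for count in Counter(haplotypes).values()]
    return n / (n - 1) * (1 - sum(f * f for f in freqs))

# n = 10 individuals carrying 7 distinct haplotypes:
sample = ["H1"] * 3 + ["H2"] * 2 + ["H3", "H4", "H5", "H6", "H7"]
print(f"D = {haplotype_diversity(sample):.3f}")
```

Adding loci to a marker set splits shared haplotypes apart, pushing D toward 1, which is the sense in which PPY23's extra loci increase discriminatory power.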

  8. Realistic sampling of amino acid geometries for a multipolar polarizable force field

    PubMed Central

    Hughes, Timothy J.; Cardamone, Salvatore

    2015-01-01

The Quantum Chemical Topological Force Field (QCTFF) uses the machine learning method kriging to map atomic multipole moments to the coordinates of all atoms in the molecular system. It is important that kriging operates on relevant and realistic training sets of molecular geometries. Therefore, we sampled single amino acid geometries directly from protein crystal structures stored in the Protein Databank (PDB). This sampling enhances the conformational realism (in terms of dihedral angles) of the training geometries. However, these geometries can be fraught with inaccurate bond lengths and valence angles due to artefacts of the refinement process of the X-ray diffraction patterns, combined with experimentally invisible hydrogen atoms. This is why we developed a hybrid PDB/nonstationary normal modes (NM) sampling approach called PDB/NM. This method is superior to standard NM sampling, which captures only geometries optimized from the stationary points of single amino acids in the gas phase. Indeed, PDB/NM combines the sampling of relevant dihedral angles with chemically correct local geometries. Geometries sampled using PDB/NM were used to build kriging models for alanine and lysine, and their prediction accuracy was compared to models built from geometries sampled from three other sampling approaches. Bond length variation, as opposed to variation in dihedral angles, puts pressure on prediction accuracy, potentially lowering it. Hence, the larger coverage of dihedral angles of the PDB/NM method does not deteriorate the predictive accuracy of kriging models, compared to the NM sampling around local energetic minima used so far in the development of QCTFF. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26235784
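Kriging is, in machine-learning terms, Gaussian-process regression. A heavily simplified stand-in for the QCTFF setup, mapping a single dihedral angle to a toy "multipole moment" (the 1-D feature, cosine target, and kernel choice are all illustrative assumptions, not the actual QCTFF model):

```python
# Hedged sketch of the kriging step: a Gaussian-process (kriging) model
# trained on sampled geometries, here reduced to one dihedral angle phi
# predicting a synthetic scalar "moment". QCTFF maps full atomic
# coordinates to multipole moments; this is only the one-feature analogue.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
# training "geometries": 40 dihedral angles in degrees, with a smooth
# target (cos phi) plus small noise standing in for the true moment
phi = rng.uniform(-180, 180, size=(40, 1))
moment = np.cos(np.deg2rad(phi)).ravel() + rng.normal(scale=0.01, size=40)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=50.0), alpha=1e-4)
gp.fit(phi, moment)

pred = gp.predict(np.array([[60.0]]))[0]
print(f"predicted moment at phi = 60 deg: {pred:.3f}")
```

The quality of such a model depends directly on how well the training angles cover the conformations of interest, which is exactly why the PDB/NM sampling of realistic dihedral angles matters.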

  9. Theory of Positron Annihilation in Helium-Filled Bubbles in Plutonium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterne, P A; Pask, J E

    2003-02-13

    Positron annihilation lifetime spectroscopy is a sensitive probe of vacancies and voids in materials. This non-destructive measurement technique can identify the presence of specific defects in materials at the part-per-million level. Recent experiments by Asoka-Kumar et al. have identified two lifetime components in aged plutonium samples--a dominant lifetime component of around 182 ps and a longer lifetime component of around 350-400 ps. This second component appears to increase with the age of the sample, and accounts for only about 5 percent of the total intensity in 35 year-old plutonium samples. First-principles calculations of positron lifetimes are now used extensively to guide the interpretation of positron lifetime data. At Livermore, we have developed a first-principles finite-element-based method for calculating positron lifetimes for defects in metals. This method is capable of treating system cell sizes of several thousand atoms, allowing us to model defects in plutonium ranging in size from a mono-vacancy to helium-filled bubbles of over 1 nm in diameter. In order to identify the defects that account for the observed lifetime values, we have performed positron lifetime calculations for a set of vacancies, vacancy clusters, and helium-filled vacancy clusters in delta-plutonium. The calculations produced values of 143 ps for defect-free delta-Pu and 255 ps for a mono-vacancy in Pu, both of which are inconsistent with the dominant experimental lifetime component of 182 ps. Larger vacancy clusters have even longer lifetimes. The observed positron lifetime is significantly shorter than the calculated lifetimes for mono-vacancies and larger vacancy clusters, indicating that open vacancy clusters are not the dominant defect in the aged plutonium samples. When helium atoms are introduced into the vacancy cluster, the positron lifetime is reduced due to the increased density of electrons available for annihilation. 
For a mono-vacancy in Pu containing one helium atom, the calculated lifetime is 190 ps, while a di-vacancy containing two helium atoms has a positron lifetime of 205 ps. In general, increasing the helium density in a vacancy cluster or He-filled bubble reduces the positron lifetime, so that the same lifetime value can arise from a range of vacancy cluster sizes with different helium densities. In order to understand the variation of positron lifetime with vacancy cluster size and helium density in the defect, we have performed over 60 positron lifetime calculations with vacancy cluster sizes ranging from 1 to 55 vacancies and helium densities ranging from zero to five helium atoms per vacancy. The results indicate that the experimental lifetime of 182 ps is consistent with the theoretical value of 190 ps for a mono-vacancy with a single helium atom, but that slightly better agreement is obtained for larger clusters of 6 or more vacancies containing 2-3 helium atoms per vacancy. For larger vacancy clusters with diameters of about 3-5 nm or more, the annihilation with helium electrons dominates the positron annihilation rate; the observed lifetime of 180 ps is then consistent with a helium concentration in the range of 3 to 3.5 He/vacancy, setting an upper bound on the helium concentration in the vacancy clusters. In practice, the single lifetime component is most probably associated with a family of helium-filled bubbles rather than with a specific unique defect size. The longer 350-400 ps lifetime component is consistent with a relatively narrow range of defect sizes and He concentration. At zero He concentration, the lifetime values are matched by small vacancy clusters containing 6-12 vacancies. With increasing vacancy cluster size, a small amount of He is required to keep the lifetime in the 350-400 ps range, until the value saturates for larger helium bubbles of more than 50 vacancies (bubble diameter > 1.3 nm) at a helium concentration close to 1 He/vacancy. 
These results, taken together with the experimental data, indicate that the features observed in TEM data by Schwartz et al. are not voids, but are in fact helium-filled bubbles with a helium density of around 2-3 helium atoms per vacancy, depending on the bubble size. This is consistent with the conclusions of recently developed models of He-bubble growth in aged plutonium.

  10. Direct analysis of hCGβcf glycosylation in normal and aberrant pregnancy by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

    PubMed

    Iles, Ray K; Cole, Laurence A; Butler, Stephen A

    2014-06-05

    The analysis of human chorionic gonadotropin (hCG) in clinical chemistry laboratories by specific immunoassay is well established. However, changes in glycosylation are not as easily assayed, and yet alterations in hCG glycosylation are associated with abnormal pregnancy. hCGβ-core fragment (hCGβcf) was isolated from the urine of women with normal, molar, and hyperemesis gravidarum pregnancies. Each sample was subjected to matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS) analysis following dithiothreitol (DTT) reduction, and fingerprint spectra of peptide hCGβ 6-40 were analyzed. Samples were variably glycosylated, where most structures were small, core and largely mono-antennary. Larger single bi-antennary and mixtures of larger mono-antennary and bi-antennary moieties were also observed in some samples. Larger glycoforms were more abundant in the abnormal pregnancies, and tri-antennary carbohydrate moieties were only observed in the samples from molar and hyperemesis gravidarum pregnancies. Given that such spectral profiling differences may be characteristic, development of small-sample preparation for mass spectral analysis of hCG may lead to a simpler and faster approach to glycostructural analysis and potentially a novel clinical diagnostic test.

  11. Use of direct versus indirect preparation data for assessing risk associated with airborne exposures at asbestos-contaminated sites.

    PubMed

    Goldade, Mary Patricia; O'Brien, Wendy Pott

    2014-01-01

    At asbestos-contaminated sites, exposure assessment requires measurement of airborne asbestos concentrations; however, the choice of preparation steps employed in the analysis has been debated vigorously among members of the asbestos exposure and risk assessment communities for many years. This study finds that the choice of preparation technique used in estimating airborne amphibole asbestos exposures for risk assessment is generally not a significant source of uncertainty. Conventionally, the indirect preparation method has been less preferred by some because it is purported to result in false elevations in airborne asbestos concentrations, when compared to direct analysis of air filters. However, airborne asbestos sampling in non-occupational settings is challenging because non-asbestos particles can interfere with the asbestos measurements, sometimes necessitating analysis via indirect preparation. To evaluate whether exposure concentrations derived from direct versus indirect preparation techniques differed significantly, paired measurements of airborne Libby-type amphibole, prepared using both techniques, were compared. For the evaluation, 31 paired direct and indirect preparations originating from the same air filters were analyzed for Libby-type amphibole using transmission electron microscopy. On average, the total Libby-type amphibole airborne exposure concentration was 3.3 times higher for indirect preparation analysis than for its paired direct preparation analysis (standard deviation = 4.1), a difference which is not statistically significant (p = 0.12, two-tailed, Wilcoxon signed rank test). The results suggest that the magnitude of the difference may be larger for shorter particles. Overall, neither preparation technique (direct or indirect) preferentially generates more precise and unbiased data for airborne Libby-type amphibole concentration estimates. 
The indirect preparation method is reasonable for estimating Libby-type amphibole exposure and may be necessary given the challenges of sampling in environmental settings. Relative to the larger context of uncertainties inherent in the risk assessment process, uncertainties associated with the use of airborne Libby-type amphibole exposure measurements derived from indirect preparation analysis are low. Use of exposure measurements generated by either direct or indirect preparation analyses is reasonable to estimate Libby-type Amphibole exposures in a risk assessment.
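    The paired comparison in this record relies on the Wilcoxon signed-rank test. The sketch below shows only the test statistic W (the sum of ranks of positive paired differences) on invented (direct, indirect) concentration pairs; a real analysis would also compute a p-value from the null distribution of W, e.g. with scipy.stats.wilcoxon.

```python
# Wilcoxon signed-rank statistic W for paired measurements, from scratch.
# Pairs are invented (direct, indirect) concentration values.
def signed_rank_W(pairs):
    """Sum of ranks of positive differences (indirect minus direct)."""
    diffs = [b - a for a, b in pairs if b != a]       # drop zero differences
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):                             # average tied ranks
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for idx in order[i:j + 1]:
            ranks[idx] = avg
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)

# Invented pairs: indirect mostly higher, one tie, one lower.
pairs = [(1.0, 3.1), (0.5, 0.4), (2.0, 6.5), (1.2, 1.2), (0.8, 2.0)]
print(signed_rank_W(pairs))   # ranks of +2.1, +4.5, +1.2 are 3, 4, 2 -> W = 9
```

    The test is appropriate here because paired concentration ratios are strongly skewed, so a rank-based test on the signed differences is more robust than a paired t-test.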

  12. Reanalysis of Asteroid Families Structure Through Visible Spectroscopy

    NASA Astrophysics Data System (ADS)

    Mothé-Diniz, T.; Carvano, J.; Roig, F.; Lazzaro, D.

    In this work we re-analyse the presence of interlopers in asteroid families based on a larger spectral database and on a family determination which makes use of a larger set of proper elements. The asteroid families were defined using the HCM method (Zappalà et al. 1995) on the set of proper elements for 110,000 asteroids available at the Asteroid Dynamic Site (AstDyS, http://hamilton.dm.unipi.it/astdys). The spectroscopic analysis is performed using spectra in the 0.44-0.92 μm range observed by the SMASS (Xu et al. 1995), SMASSII (Bus and Binzel, 2002) and S3OS2 (Lazzaro et al. 2002) surveys, which together total around 2140 asteroids with observed spectra. The asteroid taxonomy used is the Bus taxonomy (Bus et al. 2000). A total of 22 families were analysed. The families of Vesta, Eunomia, Hoffmeister, Dora, Merxia, Agnia, and Koronis were found to be spectrally homogeneous, which confirms previous studies. The Veritas family, on the other hand, which is described in the literature as a heterogeneous family, was found to be quite homogeneous in the present work. The Eos family is noteworthy for being at once spectrally heterogeneous and quite different from the background population. References: Bus, S. J., and R. P. Binzel 2002. Phase II of the Small Main-Belt Asteroid Spectroscopic Survey - The Observations. Icarus 158, 106-145. Bus, S. J., R. P. Binzel, and T. H. Burbine 2000. A New Generation of Asteroid Taxonomy. Meteoritics and Planetary Science 35 (Supplement), A36. Lazzaro, D., C. A. Angeli, T. Mothe-Diniz, J. M. Carvano, R. Duffard, and M. Florczak 2002. The superficial characterization of a large sample of asteroids: the S3OS2. Bulletin of the American Astronomical Society 34, 859. Xu, S., R. P. Binzel, T. H. Burbine, and S. J. Bus 1995. Small main-belt asteroid spectroscopic survey: Initial results. Icarus 115, 1-35. Zappala, V., P. Bendjoya, A. Cellino, P. Farinella, and C. Froeschle 1995. 
Asteroid families: Search of a 12,487-asteroid sample using two different clustering techniques. Icarus 116, 291-314.
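    The HCM family search cited above (Zappalà et al. 1995) links asteroids whose mutual distance in proper-element space falls below a cutoff. The sketch below illustrates only this core idea: the standard metric over (a, e, sin i) and single-linkage grouping at a fixed threshold via union-find. The element triples and the cutoff are invented, and the mean-motion factor n is set to 1, so distances here are in arbitrary units rather than the m/s velocity units of the actual method.

```python
# Single-linkage clustering in proper-element space, HCM-style (toy version).
import math

def d_metric(p, q):
    """Standard-metric distance between proper-element triples (a, e, sin i).
    Coefficients 5/4, 2, 2 follow Zappala et al. (1995); n is set to 1."""
    a_mean = 0.5 * (p[0] + q[0])
    da, de, dsi = (p[0] - q[0]) / a_mean, p[1] - q[1], p[2] - q[2]
    return a_mean * math.sqrt(1.25 * da ** 2 + 2 * de ** 2 + 2 * dsi ** 2)

def hcm_like(elements, cutoff):
    """Group objects connected by chains of below-cutoff distances (union-find)."""
    parent = list(range(len(elements)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(elements)):
        for j in range(i + 1, len(elements)):
            if d_metric(elements[i], elements[j]) < cutoff:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(elements)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Three invented "family members" plus one background object.
elems = [(2.360, 0.100, 0.110), (2.360, 0.101, 0.110),
         (2.370, 0.099, 0.112), (2.700, 0.200, 0.200)]
print(hcm_like(elems, 0.05))
```

    In the real method the cutoff is chosen against a quasi-random background level; spectroscopy, as in this record, then flags interlopers that are dynamically linked but taxonomically inconsistent with the family.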

  13. Lateral Tip Control Effects in CD-AFM Metrology: The Large Tip Limit.

    PubMed

    Dixson, Ronald G; Orji, Ndubuisi G; Goldband, Ryan S

    2016-01-25

    Sidewall sensing in critical dimension atomic force microscopes (CD-AFMs) usually involves continuous lateral dithering of the tip or the use of a control algorithm and fast response piezo actuator to position the tip in a manner that resembles touch-triggering of coordinate measuring machine (CMM) probes. All methods of tip position control, however, induce an effective tip width that may deviate from the actual geometrical tip width. Understanding the influence and dependence of the effective tip width on the dither settings and lateral stiffness of the tip can improve the measurement accuracy and uncertainty estimation for CD-AFM measurements. Since CD-AFM typically uses tips that range from 15 nm to 850 nm in geometrical width, the behavior of effective tip width throughout this range should be understood. The National Institute of Standards and Technology (NIST) has been investigating the dependence of effective tip width on the dither settings and lateral stiffness of the tip, as well as the possibility of material effects due to sample composition. For tip widths of 130 nm and lower, which also have lower lateral stiffness, the response of the effective tip width to lateral dither is greater than for larger tips. However, we have concluded that these effects will not generally result in a residual bias, provided that the tip calibration and sample measurement are performed under the same conditions. To validate that our prior conclusions about the dependence of effective tip width on lateral stiffness are valid for large CD-tips, we recently performed experiments using a very large non-CD tip with an etched plateau of approximately 2 μm width. The effective lateral stiffness of these tips is at least 20 times greater than typical CD-AFM tips, and these results supported our prior conclusions about the expected behavior for larger tips. 
The bottom-line importance of these latest observations is that we can now reasonably conclude that a dither slope of 3 nm/V is the baseline response due to the induced motion of the cantilever base.

  14. Lateral Tip Control Effects in CD-AFM Metrology: The Large Tip Limit

    PubMed Central

    Dixson, Ronald G.; Orji, Ndubuisi G.; Goldband, Ryan S.

    2016-01-01

    Sidewall sensing in critical dimension atomic force microscopes (CD-AFMs) usually involves continuous lateral dithering of the tip or the use of a control algorithm and fast response piezo actuator to position the tip in a manner that resembles touch-triggering of coordinate measuring machine (CMM) probes. All methods of tip position control, however, induce an effective tip width that may deviate from the actual geometrical tip width. Understanding the influence and dependence of the effective tip width on the dither settings and lateral stiffness of the tip can improve the measurement accuracy and uncertainty estimation for CD-AFM measurements. Since CD-AFM typically uses tips that range from 15 nm to 850 nm in geometrical width, the behavior of effective tip width throughout this range should be understood. The National Institute of Standards and Technology (NIST) has been investigating the dependence of effective tip width on the dither settings and lateral stiffness of the tip, as well as the possibility of material effects due to sample composition. For tip widths of 130 nm and lower, which also have lower lateral stiffness, the response of the effective tip width to lateral dither is greater than for larger tips. However, we have concluded that these effects will not generally result in a residual bias, provided that the tip calibration and sample measurement are performed under the same conditions. To validate that our prior conclusions about the dependence of effective tip width on lateral stiffness are valid for large CD-tips, we recently performed experiments using a very large non-CD tip with an etched plateau of approximately 2 μm width. The effective lateral stiffness of these tips is at least 20 times greater than typical CD-AFM tips, and these results supported our prior conclusions about the expected behavior for larger tips. 
The bottom-line importance of these latest observations is that we can now reasonably conclude that a dither slope of 3 nm/V is the baseline response due to the induced motion of the cantilever base. PMID:27087883

  15. The use of mini-samples in palaeomagnetism

    NASA Astrophysics Data System (ADS)

    Böhnel, Harald; Michalk, Daniel; Nowaczyk, Norbert; Naranjo, Gildardo Gonzalez

    2009-10-01

    Rock cores of ~25 mm diameter are widely used in palaeomagnetism. Occasionally smaller diameters have been used as well, which presents distinct advantages in terms of throughput, weight of equipment and core collections. How their orientation precision compares to 25 mm cores, however, has not been evaluated in detail before. Here we compare the site mean directions and their statistical parameters for 12 lava flows sampled with 25 mm cores (standard samples, typically 8 cores per site) and with 12 mm drill cores (mini-samples, typically 14 cores per site). The site-mean directions for both sample sizes appear to be indistinguishable in most cases. For the mini-samples, site dispersion parameters k on average are slightly lower than for the standard samples, reflecting their larger orienting and measurement errors. Applying the Wilcoxon signed-rank test, the probability that k or α95 have the same distribution for both sizes is acceptable only at the 17.4 or 66.3 per cent level, respectively. The larger number of mini-cores per site appears to outweigh the lower k values, also yielding slightly smaller confidence limits α95. Further, both k and α95 are less variable for mini-samples than for standard-size samples. This is interpreted also to result from the larger number of mini-samples per site, which better averages out the detrimental effect of undetected abnormal remanence directions. Sampling of volcanic rocks with mini-samples therefore does not present a disadvantage in terms of the overall obtainable uncertainty of site mean directions. Apart from this, mini-samples do present clear advantages during the field work, as about twice the number of drill cores can be recovered compared to 25 mm cores, and the sampled rock unit is then more widely covered, which reduces the contribution of natural random errors produced, for example, by fractures, cooling joints, and palaeofield inhomogeneities. 
Mini-samples may be processed faster in the laboratory, which is of particular advantage when carrying out palaeointensity experiments.
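    The site statistics compared in this record, Fisher's precision parameter k and the 95 per cent confidence cone α95, are computed from N unit vectors as k = (N-1)/(N-R) and cos α95 = 1 - ((N-R)/R)(20^(1/(N-1)) - 1), where R is the length of the vector resultant. A hedged sketch with invented declination/inclination data:

```python
# Fisher (1953) site statistics from (declination, inclination) pairs in degrees.
# The directions below are invented for illustration.
import math

def fisher_stats(dirs):
    """Return (k, alpha95) for a list of (D, I) directions in degrees."""
    n = len(dirs)
    x = sum(math.cos(math.radians(i)) * math.cos(math.radians(d)) for d, i in dirs)
    y = sum(math.cos(math.radians(i)) * math.sin(math.radians(d)) for d, i in dirs)
    z = sum(math.sin(math.radians(i)) for d, i in dirs)
    r = math.sqrt(x * x + y * y + z * z)            # resultant length R <= N
    k = (n - 1) / (n - r)                           # precision parameter
    a95 = math.degrees(math.acos(1 - (n - r) / r * (20 ** (1 / (n - 1)) - 1)))
    return k, a95

# Invented site: five well-grouped remanence directions.
site = [(350.0, 55.0), (352.0, 54.0), (348.0, 56.0), (350.0, 57.0), (351.0, 55.0)]
k, a95 = fisher_stats(site)
print(k, a95)
```

    Because α95 shrinks with the number of cores N for a given dispersion, the larger mini-core counts per site can offset their slightly lower k, consistent with the comparison reported above.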

  16. Does working memory load facilitate target detection?

    PubMed

    Fruchtman-Steinbok, Tom; Kessler, Yoav

    2016-02-01

    Previous studies demonstrated that increasing working memory (WM) load delays performance of a concurrent task, by distracting attention and thus interfering with encoding and maintenance processes. The present study used a version of the change detection task with a target detection requirement during the retention interval. In contrast to the above prediction, target detection was faster following a larger set-size, specifically when presented shortly after the memory array (up to 400 ms). The effect of set-size on target detection was also evident when no memory retention was required. The set-size effect was also found using different modalities. Moreover, it was only observed when the memory array was presented simultaneously, but not sequentially. These results were explained by increased phasic alertness exerted by the larger visual display. The present study offers new evidence of ongoing attentional processes in the commonly-used change detection paradigm. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Hafnium Isotopic Variations in Central Atlantic Intraplate Volcanism

    NASA Astrophysics Data System (ADS)

    Geldmacher, J.; Hanan, B. B.; Hoernle, K.; Blichert-Toft, J.

    2008-12-01

    Although one of the geochemically best investigated volcanic regions on Earth, almost no Hf isotopic data have been published from the broad belt of intraplate seamounts and islands in the East Atlantic between 25° and 36° N. This study presents 176Hf/177Hf ratios from 61 representative samples from the Canary, Selvagen and Madeira Islands and nearby large seamounts, encompassing the full range of different evolutionary stages and geochemical endmembers. The majority of samples have mafic, mainly basaltic compositions with Mg-numbers within or near the range of magmas in equilibrium with mantle olivine (68-75). No correlation was found between Mg-number and 176Hf/177Hf ratios in the data set. In comparison to observed Nd isotope variations published for this volcanic province (6 ɛNd units), 176Hf/177Hf ratios span a larger range (14 ɛHf units). Samples from the Madeira archipelago have the most radiogenic compositions (176Hf/177Hfm= 0.283132-0.283335), widely overlapping the field for central Atlantic N-MORB. They form a relatively narrow, elongated trend (stretching over >6 ɛHf units) between a radiogenic MORB-like endmember and a composition located on the Nd-Hf mantle array. In contrast, all Canary Islands samples plot below the mantle array (176Hf/177Hfm = 0.282943-0.283067) and, despite being from an archipelago that stretches over a much larger geographic area, form a much denser cluster with less compositional variation (~4 ɛHf units). All samples from the seamounts NE of the Canaries, proposed to belong to the same Canary hotspot track (e.g. Geldmacher et al., 2001, JVGR 111; Geldmacher et al., 2005, EPSL 237), fall within the Hf isotopic range of this cluster. The cluster largely overlaps the composition of the proposed common mantle endmember 'C' (Hanan and Graham, 1996, Science 272) but spans a space between a more radiogenic (depleted) composition and a HIMU-type endmember. 
Although samples of Seine and Unicorn seamounts, attributed to the Madeira hotspot track, show less radiogenic Hf and Nd isotope ratios than Madeira, their isotopic compositions lie along an extension of the Madeira trend in plots of Hf versus Sr, Nd, Pb isotopes. The new Hf isotope ratios confirm the existence of at least two geochemically distinct volcanic provinces (Canary and Madeira) in the East Atlantic as previously proposed.

  18. Results of Large-Scale Spacecraft Flammability Tests

    NASA Technical Reports Server (NTRS)

    Ferkul, Paul; Olson, Sandra; Urban, David L.; Ruff, Gary A.; Easton, John; T'ien, James S.; Liao, Ta-Ting T.; Fernandez-Pello, A. Carlos; Torero, Jose L.; Eigenbrand, Christian

    2017-01-01

    For the first time, a large-scale fire was intentionally set inside a spacecraft while in orbit. Testing in low gravity aboard spacecraft had been limited to samples of modest size: for thin fuels the longest samples burned were around 15 cm in length, and thick fuel samples have been even smaller. This is despite the fact that fire is a catastrophic hazard for spaceflight and the spread and growth of a fire, combined with its interactions with the vehicle, cannot be expected to scale linearly. While every type of occupied structure on earth has been the subject of full-scale fire testing, this had never been attempted in space owing to the complexity, cost, risk and absence of a safe location. Thus, there is a gap in knowledge of fire behavior in spacecraft. The recent utilization of large, unmanned resupply craft has provided the needed capability: a habitable but unoccupied spacecraft in low earth orbit. One such vehicle was used to study the flame spread over a 94 x 40.6 cm thin charring solid (fiberglass-cotton fabric). The sample was an order of magnitude larger than anything studied to date in microgravity and was of sufficient scale that it consumed 1.5% of the available oxygen. The experiment, called Saffire, consisted of two tests: forward or concurrent flame spread (with the direction of flow) and opposed flame spread (against the direction of flow). The average forced air speed was 20 cm/s. For the concurrent flame spread test, the flame size remained constrained after the ignition transient, which is not the case in 1-g. These results were qualitatively different from those on earth, where an upward-spreading flame on a sample of this size accelerates and grows. In addition, a curious effect of the chamber size is noted. Compared to previous microgravity work in smaller tunnels, the flame in the larger tunnel spread more slowly, even for a wider sample. 
This is attributed to the effect of flow acceleration in the smaller tunnels as a result of hot gas expansion. These results clearly demonstrate the unique features of purely forced flow in microgravity on flame spread, the dependence of flame behavior on the scale of the experiment, and the importance of full-scale testing for spacecraft fire safety.

  19. An Interactive Multiobjective Programming Approach to Combinatorial Data Analysis.

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Stahl, Stephanie

    2001-01-01

    Describes an interactive procedure for multiobjective asymmetric unidimensional seriation problems that uses a dynamic-programming algorithm to generate partially the efficient set of sequences for small to medium-sized problems and a multioperational heuristic to estimate the efficient set for larger problems. Applies the procedure to an…

  20. Incentive Design and Quality Improvements: Evidence from State Medicaid Nursing Home Pay-for-Performance Programs.

    PubMed

    Konetzka, R Tamara; Skira, Meghan M; Werner, Rachel M

    2018-01-01

    Pay-for-performance (P4P) programs have become a popular policy tool aimed at improving health care quality. We analyze how incentive design affects quality improvements in the nursing home setting, where several state Medicaid agencies have implemented P4P programs that vary in incentive structure. Using the Minimum Data Set and the Online Survey, Certification, and Reporting data from 2001 to 2009, we examine how the weights put on various performance measures that are tied to P4P bonuses, such as clinical outcomes, inspection deficiencies, and staffing levels, affect improvements in those measures. We find larger weights on clinical outcomes often lead to larger improvements, but small weights can lead to no improvement or worsening of some clinical outcomes. We find a qualifier for P4P eligibility based on having few or no severe inspection deficiencies is more effective at decreasing inspection deficiencies than using weights, suggesting simple rules for participation may incent larger improvement.

  1. Incentive Design and Quality Improvements: Evidence from State Medicaid Nursing Home Pay-for-Performance Programs

    PubMed Central

    Konetzka, R. Tamara; Skira, Meghan M.; Werner, Rachel M.

    2017-01-01

    Pay-for-performance (P4P) programs have become a popular policy tool aimed at improving health care quality. We analyze how incentive design affects quality improvements in the nursing home setting, where several state Medicaid agencies have implemented P4P programs that vary in incentive structure. Using the Minimum Data Set and the Online Survey, Certification, and Reporting data from 2001 to 2009, we examine how the weights put on various performance measures that are tied to P4P bonuses, such as clinical outcomes, inspection deficiencies, and staffing levels, affect improvements in those measures. We find larger weights on clinical outcomes often lead to larger improvements, but small weights can lead to no improvement or worsening of some clinical outcomes. We find a qualifier for P4P eligibility based on having few or no severe inspection deficiencies is more effective at decreasing inspection deficiencies than using weights, suggesting simple rules for participation may incent larger improvement. PMID:29594189

  2. WA3 Room for death - international museum - visitors' preferences regarding the end of their life.

    PubMed

    Lindqvist, Olav; Tishelman, Carol

    2015-04-01

    Just as pain medications aim to relieve physical suffering, supportive surroundings for death and dying may facilitate well-being and comfort. However, little has been written about the experience of, or preferences for, settings for death and dying. We investigate preferences for and reflections about settings for end-of-life (EoL) in an international sample of museum visitors. Data derive from a project teaming artists and craftspeople together to create prototypes of space for difficult conversations in EoL settings. These prototypes were presented in a museum exhibition, "Room for Death", in Stockholm in 2012. As project consultants, we contributed a question to the public viewing the exhibition: "How would you like it to be around you when you are dying?" and analysed responses with a phenomenographic approach. Five hundred and twelve responses were obtained from visitors from 46 countries. Responses were categorised in the following inductively derived categories of types of deaths: the "Familiar", "Larger-than-life", "Lone", "Mediated", "Calm and peaceful", "Sensuous", "'Green'", and "Distanced" death. Responses could relate to one category or be composites uniting different categories in individual combinations. These data provide insight into different facets of contemporary reflections about death and dying. Despite the selective sample, the findings give reason to consider how underlying assumptions and care provision in established forms for EoL care may differ from people's preferences. This project can be seen as an example of innovative endeavours to promote public awareness of issues related to death and dying, within the framework of health-promoting palliative care. © 2015, Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  3. The influence of caregiver depression on adolescent mental health outcomes: findings from refugee settlements in Uganda.

    PubMed

    Meyer, Sarah R; Steinhaus, Mara; Bangirana, Clare; Onyango-Mangen, Patrick; Stark, Lindsay

    2017-12-19

    Family-level predictors, including caregiver depression, are considered important influences on adolescent mental health. Adolescent depression and anxiety in refugee settings is known to be a significant public health concern, yet there is very limited literature from humanitarian settings focusing on the relationship between caregiver mental health and adolescent mental health. In the context of a larger study on child protection outcomes in refugee settings, researchers explored the relationship between caregiver depression and adolescent mental health in two refugee settlements, Kiryandongo and Adjumani, in Uganda. Adolescents between 13 and 17 and their caregivers participated in a household survey, which included measures of adolescent anxiety and depression, and caregiver depression. Analysis was conducted using multiple logistic regression models, and results were reported for the full sample and for each site separately. In Kiryandongo, a one-unit increase in a caregiver's depression score tripled the odds that the adolescent would have high levels of anxiety symptoms (AOR: 3.0, 95% CI: 1.4, 6.1), while in Adjumani, caregiver depression did not remain significant in the final model. Caregiver depression, gender and exposure to violence were all associated with higher symptoms of adolescent depression in both sites and the full sample; for example, a one-unit increase in caregiver depression more than tripled the odds of higher levels of symptoms of adolescent depression (AOR: 3.6, 95% CI: 2.0, 6.2). Caregiver depression was consistently and significantly associated with adverse mental health outcomes for adolescents in this study. Adolescent well-being is significantly affected by caregiver mental health in this refugee context. 
Child protection interventions in humanitarian contexts do not adequately address the influence of caregivers' mental health, and there are opportunities to integrate child protection programming with prevention and treatment of caregivers' mental health symptoms.

  4. A QUICK KEY TO THE SUBFAMILIES AND GENERA OF ANTS OF THE SAVANNAH RIVER SITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, D

    2007-09-04

    This taxonomic key was devised to support development of a Rapid Bioassessment Protocol using ants at the Savannah River Site. The emphasis is on 'rapid' and, because the available keys contained a very large number of genera not known to occur at the Savannah River Site, we found that the available keys were unwieldy. Because these keys contained many more genera than we would ever encounter, and because this larger number of genera required more couplets in the key and often required examination of characters that are difficult to assess without higher magnifications (60X or higher), more time was required to process samples. In developing this set of keys I emphasized character states that are easier for nonspecialists to recognize. I recognize that the character sets used may lead to some errors but I believe that the error rate will be small and, for the purpose of rapid bioassessment, this error rate will be acceptable provided that overall sample sizes are adequate. Oliver and Beattie (1996a, 1996b) found that for rapid assessment of biodiversity the same results were found when identifications were done to morphospecies by people with minimal expertise as when the same data sets were identified by subject matter experts. Basset et al. (2004) concluded that it was not as important to correctly identify all species as it was to be sure that the study included as many functional groups as possible. If your study requires high levels of accuracy, it is highly recommended that, when you key out a specimen and have any doubts concerning the identification, you should refer to keys in Bolton (1994) or to the other keys used to develop this area-specific taxonomic key.

  5. A QUICK KEY TO THE SUBFAMILIES AND GENERA OF ANTS OF THE SAVANNAH RIVER SITE, AIKEN, SC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, D

    2006-10-04

This taxonomic key was devised to support development of a Rapid Bioassessment Protocol using ants at the Savannah River Site. The emphasis is on 'rapid': because the available keys contained a large number of genera not known to occur at the Savannah River Site, we found them unwieldy. Because these keys contained more genera than we would likely encounter, requiring both more couplets in the key and often examination of characters that are difficult to assess without higher magnifications (60X or higher), more time was required to process samples. In developing this set of keys, I recognize that the character sets used may lead to some errors, but I believe that the error rate will be small and, for the purpose of rapid bioassessment, acceptable provided that overall sample sizes are adequate. Oliver and Beattie (1996a, 1996b) found that, for rapid assessment of biodiversity, the same results were obtained when identifications were done to morphospecies by people with minimal expertise as when the same data sets were identified by subject-matter experts. Basset et al. (2004) concluded that it was not as important to correctly identify all species as it was to ensure that the study included as many functional groups as possible. If your study requires high levels of accuracy, it is highly recommended that, when you key out a specimen and have any doubts concerning the identification, you refer to the keys in Bolton (1994) or to the other keys used to develop this area-specific taxonomic key.

  6. Micro-scale Spatial Clustering of Cholera Risk Factors in Urban Bangladesh.

    PubMed

    Bi, Qifang; Azman, Andrew S; Satter, Syed Moinuddin; Khan, Azharul Islam; Ahmed, Dilruba; Riaj, Altaf Ahmed; Gurley, Emily S; Lessler, Justin

    2016-02-01

Close interpersonal contact likely drives spatial clustering of cases of cholera and diarrhea, but spatial clustering of risk factors may also drive this pattern. Few studies have focused specifically on how exposures for disease cluster at small spatial scales. Improving our understanding of the micro-scale clustering of risk factors for cholera may help to target interventions and power studies with cluster designs. We selected sets of spatially matched households (matched-sets) near cholera case households between April and October 2013 in a cholera-endemic urban neighborhood of Tongi Township in Bangladesh. We collected data on exposures to suspected cholera risk factors at the household and individual level. We used intra-class correlation coefficients (ICCs) to characterize clustering of exposures within matched-sets and households, and assessed whether clustering depended on the geographical extent of the matched-sets. Clustering over larger spatial scales was explored by assessing the relationship between matched-sets. We also explored whether different exposures tended to appear together in individuals, households, and matched-sets. Household-level exposures, including drinking municipally supplied water (ICC = 0.97, 95%CI = 0.96, 0.98), type of latrine (ICC = 0.88, 95%CI = 0.71, 1.00), and intermittent access to drinking water (ICC = 0.96, 95%CI = 0.87, 1.00), exhibited strong clustering within matched-sets. As the geographic extent of matched-sets increased, the concordance of exposures within matched-sets decreased. Concordance between matched-sets in exposures related to water supply was elevated at distances of up to approximately 400 meters. Household-level hygiene practices were correlated with infrastructure shown to increase cholera risk. Co-occurrence of different individual-level exposures appeared mostly to reflect the differing domestic roles of study participants.
Strong spatial clustering of exposures at a small spatial scale in a cholera endemic population suggests a possible role for highly targeted interventions. Studies with cluster designs in areas with strong spatial clustering of exposures should increase sample size to account for the correlation of these exposures.
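The within-set clustering statistic used in this study, the intra-class correlation coefficient, can be sketched with the standard one-way ANOVA estimator (ICC(1)). This is an illustrative reconstruction, not the authors' code, and the toy matched-set data are hypothetical:

```python
def icc1(groups):
    """ICC(1) for a balanced design: `groups` is a list of equal-size lists."""
    k = len(groups[0])                      # observations per matched set
    n = len(groups)                         # number of matched sets
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]
    # Between-set and within-set mean squares from one-way ANOVA
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfectly concordant matched sets (every household in a set shares the
# exposure) give an ICC of 1; discordant sets pull the estimate down.
print(icc1([[1, 1, 1], [0, 0, 0], [1, 1, 1]]))  # -> 1.0
```

With binary exposures such as latrine type, an ICC near 1 means households within a matched set almost always share the exposure, which is why cluster designs must inflate sample size accordingly.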

  7. Achievement Flourishes in Larger Classes: Secondary School Students in Most Countries Achieved Better Literacy in Larger Classes

    ERIC Educational Resources Information Center

    Alharbi, Abeer A.; Stoet, Gijsbert

    2017-01-01

    There is no consensus among academics about whether children benefit from smaller classes. We analysed the data from the 2012 Programme for International Student Assessment (PISA) to test if smaller classes lead to higher performance. Advantages of using this data set are not only its size (478,120 15-year old students in 63 nations) and…

  8. Centennial increase in geomagnetic activity: Latitudinal differences and global estimates

    NASA Astrophysics Data System (ADS)

    Mursula, K.; Martini, D.

    2006-08-01

We study here the centennial change in geomagnetic activity using the newly proposed Inter-Hour Variability (IHV) index. We correct the earlier estimates of the centennial increase by taking into account the effect of the change in the sampling of the magnetic field from one sample per hour to hourly means in the first years of the previous century. Since the IHV index is a variability index, the larger variability in the case of hourly sampling leads, without due correction, to excessively large values at the beginning of the century and an underestimated centennial increase. We discuss two ways to extract the necessary sampling calibration factors and show that they agree very well with each other. The effect of calibration is especially large at the midlatitude Cheltenham/Fredricksburg (CLH/FRD) station, where the centennial increase rises from only 6% to 24% after calibration. Sampling calibration also leads to a larger centennial increase of global geomagnetic activity based on the IHV index. The results verify a significant centennial increase in global geomagnetic activity, in qualitative agreement with the aa index, although a quantitative comparison is not warranted. We also find that the centennial increase has a rather strong and curious latitudinal dependence. It is largest at high latitudes. Quite unexpectedly, it is larger at low latitudes than at midlatitudes. These new findings indicate interesting long-term changes in near-Earth space. We also discuss possible internal and external causes for these observed differences. The centennial change of geomagnetic activity may be partly affected by changes in external conditions and partly by the secular decrease of the Earth's magnetic moment, whose effect in near-Earth space may be larger than estimated so far.

  9. Are quantitative trait-dependent sampling designs cost-effective for analysis of rare and common variants?

    PubMed

    Yilmaz, Yildiz E; Bull, Shelley B

    2011-11-29

Use of trait-dependent sampling designs in whole-genome association studies of sequence data can reduce total sequencing costs with modest losses of statistical efficiency. In a quantitative trait (QT) analysis of data from the Genetic Analysis Workshop 17 mini-exome for unrelated individuals in the Asian subpopulation, we investigate alternative designs that sequence only 50% of the entire cohort. In addition to a simple random sampling design, we consider extreme-phenotype designs that are of increasing interest in genetic association analysis of QTs, especially in studies concerned with the detection of rare genetic variants. We also evaluate a novel sampling design in which all individuals have a nonzero probability of being selected into the sample but in which individuals with extreme phenotypes have a proportionately larger probability. We take differential sampling of individuals with informative trait values into account by inverse probability weighting using standard survey methods, so that inference generalizes to the source population. In replicate 1 data, we applied the designs in association analysis of Q1 with both rare and common variants in the FLT1 gene, based on knowledge of the generating model. Using all 200 replicate data sets, we similarly analyzed Q1 and Q4 (which is known to be free of association with FLT1) to evaluate relative efficiency, type I error, and power. Simulation study results suggest that the QT-dependent selection designs generally yield greater than 50% relative efficiency compared to using the entire cohort, implying cost-effectiveness of 50% sample selection and a worthwhile reduction in sequencing costs.
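The inverse-probability-weighting idea described above can be sketched with a Hajek-style weighted mean: each sampled trait value is weighted by the reciprocal of its selection probability, so that oversampled extremes do not bias the cohort-level estimate. This is a minimal illustration under made-up selection probabilities, not the workshop analysis:

```python
def ipw_mean(values, probs, sampled):
    """Weighted mean of the sampled values; weights = 1 / selection probability."""
    num = sum(v / p for v, p, s in zip(values, probs, sampled) if s)
    den = sum(1 / p for p, s in zip(probs, sampled) if s)
    return num / den

# Toy cohort: the two phenotypic extremes are sampled with certainty,
# mid-range individuals with probability 0.5 (one of them is not drawn).
vals  = [10.0, 1.0, 2.0, 3.0, -10.0]
probs = [1.0, 0.5, 0.5, 0.5, 1.0]
drawn = [True, True, False, True, True]
print(round(ipw_mean(vals, probs, drawn), 3))  # -> 1.333
```

When all selection probabilities are equal, the estimator reduces to the plain mean of the sampled values, which is the simple-random-sampling design the study uses as a baseline.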

  10. The influence of hydrocarbons in changing the mechanical and acoustic properties of a carbonate reservoir: implications of laboratory results on larger scale processes

    NASA Astrophysics Data System (ADS)

    Trippetta, Fabio; Ruggieri, Roberta; Geremia, Davide; Brandano, Marco

    2017-04-01

Understanding hydraulic and mechanical processes that acted in reservoir rocks and their effect on rock properties is of great interest to both science and industry. In this work we investigate the role of hydrocarbons in changing the petrophysical properties of rock by merging laboratory, outcrop, and subsurface data, focusing on the carbonate-bearing Majella reservoir (Bolognano formation). This reservoir represents an interesting analogue for subsurface carbonate reservoirs and is made of high-porosity (8 to 28%) ramp calcarenites saturated by hydrocarbon in the state of bitumen at the surface. Within this lithology, clean and bitumen-bearing samples were investigated. For both groups, density, porosity, and P- and S-wave velocities at increasing confining pressure were measured, and deformation tests were conducted, on cylindrical specimens with the BRAVA apparatus at the HP-HT Laboratory of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) in Rome, Italy. The petrophysical characterization shows a very good correlation between Vp, Vs, and porosity and a pressure-independent Vp/Vs ratio, while the presence of bitumen within samples increases both Vp and Vs. P-wave velocity hysteresis, measured at ambient pressure after 100 MPa of applied confining pressure, suggests an almost purely elastic behaviour for bitumen-bearing samples and a more inelastic behaviour for cleaner samples. The calculated dynamic Young's modulus is larger for bitumen-bearing samples, and these data are confirmed by cyclic deformation tests in which the same samples generally record larger strength, larger Young's modulus, and smaller permanent strain with respect to clean samples. Starting from laboratory data, we also derived a synthetic acoustic model highlighting an increase in acoustic impedance for bitumen-bearing samples.
Models were also run simulating saturation with hydrocarbons of decreasing API gravity, showing effects on the seismic properties of the reservoir opposite to those of bitumen. To compare our laboratory results at larger scale, we selected 11 outcrops of the same lithofacies as the laboratory samples, both clean and bitumen-saturated. Fracture orientations, from the scan-line method, are similar for the two types of outcrops and follow the same trends as literature data collected on older rocks. On the other hand, spacing data show much lower fracture density for bitumen-saturated outcrops, confirming the laboratory observations. In conclusion, laboratory experiments highlight a more elastic behaviour for bitumen-bearing samples, and saturated outcrops are less prone to fracturing than clean outcrops. The presence of bitumen thus has a positive influence on the mechanical properties of the reservoir, while the acoustic model suggests that lighter oils should have an opposite effect. Geologically, this suggests that hydrocarbon migration in the study area predates the last stage of deformation, also giving clues about a relatively high density of the oil when deformation began.

  11. Preparation and wettability examinations of transparent SiO2 binder-added MgF2 nanoparticle coatings covered with fluoro-alkyl silane self-assembled monolayer.

    PubMed

    Murata, Tsuyoshi; Hieda, Junko; Saito, Nagahiro; Takai, Osamu

    2012-05-01

SiO2-added MgF2 nanoparticle coatings with various surface roughness properties were formed on fused silica-glass substrates from autoclaved sols prepared at 100-180 °C. To render the coatings hydrophobic, we treated the samples with fluoro-alkyl silane (FAS) vapor to form self-assembled monolayers on the nanoparticle coatings, and we then examined the wettability of the samples. The samples preserved good transparency even after the FAS treatment. The wettability examination revealed that higher autoclave temperatures produced a larger average MgF2 nanoparticle size, a larger surface roughness, and a higher contact angle and roll-off angle.

  12. Microplastic abundances in a mussel bed and ingestion by the ribbed marsh mussel Geukensia demissa.

    PubMed

    Khan, Matthew B; Prezant, Robert S

    2018-05-01

Human activities have generated large quantities of microplastics that can be consumed by filter-feeding organisms as potential food sources. As a result, organisms may experience marked reductions in growth and/or health. To date there have been no investigations connecting microplastics (MPs) with the critically important ribbed mussel Geukensia demissa. Here we examined MP abundances within a bed of G. demissa in New Jersey. Results indicate that MP densities ranged from 11,000 to 50,000 pieces/m². The abundance of larger MPs (1-5 mm) did not vary among sampling sites, while the abundance of smaller MPs (<1 mm) did vary between sampling sites. These smaller MPs also accounted for 79% of MPs recovered from these sites. Based on the higher abundance of smaller MPs, we also investigated MP ingestion/rejection in a laboratory setting. These results confirmed that ribbed mussels can ingest MPs, with negative consequences for the buoyancy of plastics rejected in feces and pseudofeces. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. The relationship between organizational justice and quality performance among healthcare workers: a pilot study.

    PubMed

    Mohamed, Salwa Attia

    2014-01-01

Organizational justice refers to the extent to which employees perceive workplace procedures, interactions, and outcomes to be fair in nature. This study therefore aimed to investigate the relationship between organizational justice and quality performance among healthcare workers. The study was conducted at the Public Hospital in Fayoum, Egypt, and included a convenience sample of 100 healthcare workers (60 nurses and 40 physicians). Tools used for data collection were (1) a questionnaire sheet measuring health workers' perception of organizational justice, covering four types: distributive, procedural, interpersonal, and informational justice; and (2) a quality performance questionnaire sheet examining health workers' perception of their quality performance, covering three domains: information, value, and skill. The results revealed a positive correlation between organizational justice components and quality performance across the various categories of health workers' perception (P ≤ 0.05). It is recommended to replicate the study on a larger probability sample from different hospital settings to achieve more generalizable results and to reinforce justice in the organization of ministry centers in Egypt.

  14. The Relationship between Organizational Justice and Quality Performance among Healthcare Workers: A Pilot Study

    PubMed Central

    Mohamed, Salwa Attia

    2014-01-01

Organizational justice refers to the extent to which employees perceive workplace procedures, interactions, and outcomes to be fair in nature. This study therefore aimed to investigate the relationship between organizational justice and quality performance among healthcare workers. The study was conducted at the Public Hospital in Fayoum, Egypt, and included a convenience sample of 100 healthcare workers (60 nurses and 40 physicians). Tools used for data collection were (1) a questionnaire sheet measuring health workers' perception of organizational justice, covering four types: distributive, procedural, interpersonal, and informational justice; and (2) a quality performance questionnaire sheet examining health workers' perception of their quality performance, covering three domains: information, value, and skill. The results revealed a positive correlation between organizational justice components and quality performance across the various categories of health workers' perception (P ≤ 0.05). It is recommended to replicate the study on a larger probability sample from different hospital settings to achieve more generalizable results and to reinforce justice in the organization of ministry centers in Egypt. PMID:24982992

  15. Local but not long-range microstructural differences of the ventral temporal cortex in developmental prosopagnosia

    PubMed Central

    Song, Sunbin; Garrido, Lúcia; Nagy, Zoltan; Mohammadi, Siawoosh; Steel, Adam; Driver, Jon; Dolan, Ray J.; Duchaine, Bradley; Furl, Nicholas

    2015-01-01

Individuals with developmental prosopagnosia (DP) experience face recognition impairments despite normal intellect and low-level vision and no history of brain damage. Prior studies using diffusion tensor imaging in small samples of subjects with DP (n=6 or n=8) offer conflicting views on the neurobiological bases for DP, with one suggesting white matter differences in two major long-range tracts running through the temporal cortex, and another suggesting white matter differences confined to fibers local to ventral temporal face-specific functional regions of interest (fROIs) in the fusiform gyrus. Here, we address these inconsistent findings using a comprehensive set of analyses in a sample of DP subjects larger than both prior studies combined (n=16). While we found no microstructural differences in long-range tracts between DP and age-matched control participants, we found differences local to face-specific fROIs, as well as relationships between these microstructural measures and face recognition ability. We conclude that subtle differences in local rather than long-range tracts in the ventral temporal lobe are more likely associated with developmental prosopagnosia. PMID:26456436

  16. Examining the Efficacy of HIV Risk-Reduction Counseling on the Sexual Risk Behaviors of a National Sample of Drug Abuse Treatment Clients: Analysis of Subgroups.

    PubMed

    Gooden, Lauren; Metsch, Lisa R; Pereyra, Margaret R; Malotte, C Kevin; Haynes, Louise F; Douaihy, Antoine; Chally, Jack; Mandler, Raul N; Feaster, Daniel J

    2016-09-01

HIV counseling with testing has been part of HIV prevention in the U.S. since the 1980s. Despite this long-standing history of HIV testing with prevention counseling, in 2006 the CDC released HIV testing recommendations for health care settings contesting the benefits of prevention counseling with testing in reducing sexual risk behaviors among HIV-negative individuals. The efficacy of brief HIV risk-reduction counseling (RRC) in decreasing sexual risk among subgroups of substance use treatment clients was examined using multi-site RCT data. Interaction tests between RRC and subgroups were performed; multivariable regression evaluated the relationship between RRC (with rapid testing) and sex risk. Subgroups were defined by demographics, risk type and level, attitudes/perceptions, and behavioral history. There was an effect (p < .0028) of counseling on the number of sex partners among some subgroups. Certain subgroups may benefit from HIV RRC; this should be examined in studies with larger sample sizes, designed to assess the specific subgroup(s).

  17. Psychological vulnerability, burnout, and coping among employees of a business process outsourcing organization

    PubMed Central

    Machado, Tanya; Sathyanarayanan, Vidya; Bhola, Poornima; Kamath, Kirthi

    2013-01-01

    Background: The business process outsourcing (BPO) sector is a contemporary work setting in India, with a large and relatively young workforce. There is concern that the demands of the work environment may contribute to stress levels and psychological vulnerability among employees as well as to high attrition levels. Materials and Methods: As part of a larger study, questionnaires were used to assess psychological distress, burnout, and coping strategies in a sample of 1,209 employees of a BPO organization. Results: The analysis indicated that 38% of the sample had significant psychological distress on the General Health Questionnaire (GHQ-28; Goldberg and Hillier, 1979). The vulnerable groups were women, permanent employees, data processors, and those employed for 6 months or longer. The reported levels of burnout were low and the employees reported a fairly large repertoire of coping behaviors. Conclusions: The study has implications for individual and systemic efforts at employee stress management and workplace prevention approaches. The results point to the emerging and growing role of mental health professionals in the corporate sector. PMID:24459370

  18. Complement system biomarkers in epilepsy.

    PubMed

    Kopczynska, Maja; Zelek, Wioleta M; Vespa, Simone; Touchard, Samuel; Wardle, Mark; Loveless, Samantha; Thomas, Rhys H; Hamandi, Khalid; Morgan, B Paul

    2018-05-24

To explore whether complement dysregulation occurs in a routinely recruited clinical cohort of epilepsy patients, and whether complement biomarkers have the potential to be used as markers of disease severity and seizure control. Plasma samples from 157 epilepsy cases (106 with focal seizures, 46 generalised seizures, 5 unclassified) and 54 controls were analysed. Concentrations of 10 complement analytes (C1q, C3, C4, factor B [FB], terminal complement complex [TCC], iC3b, factor H [FH], Clusterin [Clu], Properdin, C1 Inhibitor [C1Inh]) plus C-reactive protein [CRP] were measured using enzyme-linked immunosorbent assay (ELISA). Univariate and multivariate statistical analyses were used to test whether combinations of complement analytes were predictive of epilepsy diagnoses and seizure occurrence. Correlations between the number and type of anti-epileptic drugs (AEDs) and complement analytes were also assessed. CONCLUSION: This study adds to the evidence implicating complement in the pathogenesis of epilepsy and may allow the development of better therapeutics and prognostic markers in the future. Replication in a larger sample set is needed to validate the findings of the study. Copyright © 2018. Published by Elsevier Ltd.

  19. Understanding Motivations for Abstinence among Adolescent Young Women: Insights into Effective Sexual Risk Reduction Strategies

    PubMed Central

    Long-Middleton, Ellen R.; Burke, Pamela J.; Lawrence, Cheryl A. Cahill; Blanchard, Lauren B.; Amudala, Naomi H.; Rankin, Sally H.

    2012-01-01

Introduction: Pregnancy and sexually transmitted infections pose a significant threat to the health and wellbeing of adolescent young women. Abstinence, when practiced, provides the most effective means of preventing these problems, yet the perspective of abstinent young women is not well understood. The purpose of the investigation was to characterize female adolescents' motivations for abstinence. Method: As part of a larger, cross-sectional quantitative study investigating predictors of HIV risk-reduction behaviors, qualitative responses from study participants who had never had intercourse were analyzed in a consensus-based process using content analysis and frequency counts. An urban primary care site in a tertiary care center served as the setting, with adolescent young women ages 15–19 years included in the sample. Results: Five broad topic categories emerged from the data that characterized motivations for abstinence in this sample: (1) Personal Readiness, (2) Fear, (3) Beliefs and Values, (4) Partner Worthiness, and (5) Lack of Opportunity. Discussion: A better understanding of the motivations for abstinence may serve to guide the development of interventions to delay intercourse. PMID:22525893

  20. Variable Circumstellar Disks of Classical Be Stars in Clusters

    NASA Astrophysics Data System (ADS)

    Gerhartz, C.; Bjorkman, K. S.; Bjorkman, J. E.; Wisniewski, J. P.

    2016-11-01

    Circumstellar disks are common among many stars, at most spectral types, and at different stages of their lifetimes. Among the near-main-sequence classical Be stars, there is growing evidence that these disks form, dissipate, and reform on timescales that differ from star to star. Using data obtained with the Large Monolithic Imager (LMI) at the Lowell Observatory Discovery Channel Telescope (DCT), along with additional complementary data obtained at the University of Toledo Ritter Observatory (RO), we have begun a long-term monitoring project of a well-studied set of galactic star clusters that are known to contain Be stars. Our goal is to develop a statistically significant sample of variable circumstellar disk systems over multiple timescales. With a robust multi-epoch study we can determine the relative fraction of Be stars that exhibit disk-loss or disk-renewal phases, and investigate the range of timescales over which these events occur. A larger sample will improve our understanding of the prevalence and nature of the disk variability, and may provide insight about underlying physical mechanisms.

  1. Animal models for periodontal regeneration and peri-implant responses.

    PubMed

    Kantarci, Alpdogan; Hasturk, Hatice; Van Dyke, Thomas E

    2015-06-01

    Translation of experimental data to the clinical setting requires the safety and efficacy of such data to be confirmed in animal systems before application in humans. In dental research, the animal species used is dependent largely on the research question or on the disease model. Periodontal disease and, by analogy, peri-implant disease, are complex infections that result in a tissue-degrading inflammatory response. It is impossible to explore the complex pathogenesis of periodontitis or peri-implantitis using only reductionist in-vitro methods. Both the disease process and healing of the periodontal and peri-implant tissues can be studied in animals. Regeneration (after periodontal surgery), in response to various biologic materials with potential for tissue engineering, is a continuous process involving various types of tissue, including epithelia, connective tissues and alveolar bone. The same principles apply to peri-implant healing. Given the complexity of the biology, animal models are necessary and serve as the standard for successful translation of regenerative materials and dental implants to the clinical setting. Smaller species of animal are more convenient for disease-associated research, whereas larger animals are more appropriate for studies that target tissue healing as the anatomy of larger animals more closely resembles human dento-alveolar architecture. This review focuses on the animal models available for the study of regeneration in periodontal research and implantology; the advantages and disadvantages of each animal model; the interpretation of data acquired; and future perspectives of animal research, with a discussion of possible nonanimal alternatives. Power calculations in such studies are crucial in order to use a sample size that is large enough to generate statistically useful data, whilst, at the same time, small enough to prevent the unnecessary use of animals. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Occurrence of organic wastewater compounds in effluent-dominated streams in Northeastern Kansas

    USGS Publications Warehouse

    Lee, C.J.; Rasmussen, T.J.

    2006-01-01

Fifty-nine stream-water samples and 14 municipal wastewater treatment facility (WWTF) discharge samples in Johnson County, northeastern Kansas, were analyzed for 55 compounds collectively described as organic wastewater compounds (OWCs). Stream-water samples were collected upstream from, in, and downstream from WWTF discharges in urban and rural areas during base-flow conditions. The effect of secondary treatment processes on OWC occurrence was evaluated by collecting eight samples from WWTF discharges using activated sludge treatment processes and six from WWTF discharges using trickling filter treatment processes. Samples collected directly from WWTF discharges contained the largest concentrations of most OWCs in this study. Samples from trickling filter discharges had significantly larger concentrations of many OWCs (p-value < 0.05) compared to samples collected from activated sludge discharges. OWC concentrations decreased significantly from WWTF discharges to stream-water samples collected at sites greater than 2,000 m downstream. Upstream from WWTF discharges, base-flow samples collected in streams draining predominantly urban watersheds had significantly larger concentrations of cumulative OWCs (p-value = 0.03), caffeine (p-value = 0.01), and tris(2-butoxyethyl) phosphate (p-value < 0.01) than those collected downstream from more rural watersheds.

  3. Matching on the Disease Risk Score in Comparative Effectiveness Research of New Treatments

    PubMed Central

    Wyss, Richard; Ellis, Alan R.; Brookhart, M. Alan; Funk, Michele Jonsson; Girman, Cynthia J.; Simpson, Ross J.; Stürmer, Til

    2016-01-01

    Purpose We use simulations and an empirical example to evaluate the performance of disease risk score (DRS) matching compared with propensity score (PS) matching when controlling large numbers of covariates in settings involving newly introduced treatments. Methods We simulated a dichotomous treatment, a dichotomous outcome, and 100 baseline covariates that included both continuous and dichotomous random variables. For the empirical example, we evaluated the comparative effectiveness of dabigatran versus warfarin in preventing combined ischemic stroke and all-cause mortality. We matched treatment groups on a historically estimated DRS and again on the PS. We controlled for a high-dimensional set of covariates using 20% and 1% samples of Medicare claims data from October 2010 through December 2012. Results In simulations, matching on the DRS versus the PS generally yielded matches for more treated individuals and improved precision of the effect estimate. For the empirical example, PS and DRS matching in the 20% sample resulted in similar hazard ratios (0.88 and 0.87) and standard errors (0.04 for both methods). In the 1% sample, PS matching resulted in matches for only 92.0% of the treated population and a hazard ratio and standard error of 0.89 and 0.19, respectively, while DRS matching resulted in matches for 98.5% and a hazard ratio and standard error of 0.85 and 0.16, respectively. Conclusions When PS distributions are separated, DRS matching can improve the precision of effect estimates and allow researchers to evaluate the treatment effect in a larger proportion of the treated population. However, accurately modeling the DRS can be challenging compared with the PS. PMID:26112690

  4. Matching on the disease risk score in comparative effectiveness research of new treatments.

    PubMed

    Wyss, Richard; Ellis, Alan R; Brookhart, M Alan; Jonsson Funk, Michele; Girman, Cynthia J; Simpson, Ross J; Stürmer, Til

    2015-09-01

    We use simulations and an empirical example to evaluate the performance of disease risk score (DRS) matching compared with propensity score (PS) matching when controlling large numbers of covariates in settings involving newly introduced treatments. We simulated a dichotomous treatment, a dichotomous outcome, and 100 baseline covariates that included both continuous and dichotomous random variables. For the empirical example, we evaluated the comparative effectiveness of dabigatran versus warfarin in preventing combined ischemic stroke and all-cause mortality. We matched treatment groups on a historically estimated DRS and again on the PS. We controlled for a high-dimensional set of covariates using 20% and 1% samples of Medicare claims data from October 2010 through December 2012. In simulations, matching on the DRS versus the PS generally yielded matches for more treated individuals and improved precision of the effect estimate. For the empirical example, PS and DRS matching in the 20% sample resulted in similar hazard ratios (0.88 and 0.87) and standard errors (0.04 for both methods). In the 1% sample, PS matching resulted in matches for only 92.0% of the treated population and a hazard ratio and standard error of 0.89 and 0.19, respectively, while DRS matching resulted in matches for 98.5% and a hazard ratio and standard error of 0.85 and 0.16, respectively. When PS distributions are separated, DRS matching can improve the precision of effect estimates and allow researchers to evaluate the treatment effect in a larger proportion of the treated population. However, accurately modeling the DRS can be challenging compared with the PS. Copyright © 2015 John Wiley & Sons, Ltd.
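The matching step shared by both the propensity score and disease risk score approaches in this study can be illustrated with a greedy 1:1 nearest-neighbor match on a scalar score within a caliper. This is a hypothetical sketch, not the authors' implementation; the identifiers, scores, and 0.05 caliper are illustrative:

```python
def greedy_match(treated, controls, caliper=0.05):
    """treated/controls: dicts of id -> score. Returns {treated_id: control_id}."""
    available = dict(controls)          # controls not yet used in a match
    matches = {}
    for tid, ts in sorted(treated.items(), key=lambda kv: kv[1]):
        best = min(available, key=lambda cid: abs(available[cid] - ts), default=None)
        if best is not None and abs(available[best] - ts) <= caliper:
            matches[tid] = best         # match within caliper, without replacement
            del available[best]
    return matches

t = {"t1": 0.30, "t2": 0.80}
c = {"c1": 0.32, "c2": 0.31, "c3": 0.95}
print(greedy_match(t, c))  # -> {'t1': 'c2'}; t2 has no control within the caliper
```

Treated subjects left unmatched (like t2 here) are what drive the "percentage of the treated population matched" figures the study reports; a score whose distribution overlaps the controls' more fully, as the DRS did in the 1% sample, leaves fewer such subjects unmatched.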

  5. The detection of oral cancer using differential pathlength spectroscopy

    NASA Astrophysics Data System (ADS)

    Sterenborg, H. J. C. M.; Kanick, S.; de Visscher, S.; Witjes, M.; Amelink, A.

    2010-02-01

    The development of optical techniques for non-invasive diagnosis of cancer is an ongoing challenge to biomedical optics. For head and neck cancer we see two main fields of potential application: (1) screening for second primaries in patients with a history of oral cancer, which requires imaging techniques or an approach where a larger area can be scanned quickly; and (2) distinguishing potentially malignant visible primary lesions from benign ones, where fiberoptic point measurements can be used because the location of the lesion is known. This presentation will focus on point measurement techniques. Various techniques for point measurements have been developed and investigated clinically for different applications. Differential Pathlength Spectroscopy (DPS) is a recently developed fiberoptic point measurement technique that measures scattered light in a broad spectrum. Due to the specific fiberoptic geometry we measure only scattered photons that have travelled a predetermined pathlength. This allows us to analyse the spectrum mathematically and translate the measured curve into a set of parameters that are related to the microvasculature and to the intracellular morphology. DPS has been extensively evaluated on optical phantoms and tested in various clinical applications. The first measurements in biopsy-proven squamous cell carcinoma showed significant changes in both vascular and morphological parameters. Measurements on thick keratinized lesions, however, failed to generate any vascular signatures. This is related to the sampling depth of the standard optical fibers used. Recently we developed a fiberoptic probe with a ~1 mm sampling depth. Measurements on several leukoplakias showed that with this new probe we sample just below the keratin layer and can obtain vascular signatures. The results of a first set of clinical measurements will be presented and the significance for clinical diagnostics will be discussed.

  6. Study of Clinical Survival and Gene Expression in a Sample of Pancreatic Ductal Adenocarcinoma by Parsimony Phylogenetic Analysis.

    PubMed

    Nalbantoglu, Sinem; Abu-Asab, Mones; Tan, Ming; Zhang, Xuemin; Cai, Ling; Amri, Hakima

    2016-07-01

    Pancreatic ductal adenocarcinoma (PDAC) is a rapidly progressing form of pancreatic cancer with a poor prognosis and a 5-year survival rate below 5%. In this study, we characterized the genetic signatures and signaling pathways related to survival from PDAC, using a parsimony phylogenetic algorithm. We applied the parsimony phylogenetic algorithm to analyze the publicly available whole-genome in silico array analysis of a gene expression data set in 25 early-stage human PDAC specimens. We explain here that parsimony phylogenetics is an evolutionary analytical method that offers important promise for uncovering clonal (driver) and nonclonal (passenger) aberrations in complex diseases. In our analysis, parsimony and statistical analyses did not identify significant correlations between survival times and gene expression values. Thus, survival rankings did not differ significantly between patients for any specific gene (p > 0.05). We also did not find a correlation between gene expression data and tumor stage in the present data set. While the present analysis was unable to identify, in this relatively small sample of patients, a molecular signature associated with pancreatic cancer prognosis, we suggest that future research and analyses with the parsimony phylogenetic algorithm in larger patient samples are worthwhile, given the devastating nature of pancreatic cancer, the importance of early diagnosis, and the need for novel data analytic approaches. Future research might place greater emphasis on phylogenetics as an analytical paradigm; the findings presented here are on the cusp of this shift, especially in the current era of Big Data and innovation policies advocating for greater data sharing and reanalysis.

  7. [MATCHE: Management Approach to Teaching Consumer and Homemaking Education.] Economically Depressed Areas Strand: Human Development: Module III-E-3: Resources for the Economically Depressed Family.

    ERIC Educational Resources Information Center

    Boogaert, John

    This competency-based preservice home economics teacher education module on resources for the economically depressed area family is the third in a set of three modules on human development in economically depressed areas. (This set is part of a larger set of sixty-seven modules on the Management Approach to Teaching Consumer and Homemaking…

  8. [MATCHE: Management Approach to Teaching Consumer and Homemaking Education.] Consumer Approach Strand: Housing. Module I-B-6: Maintenance Procedures for Surfaces and Appliances.

    ERIC Educational Resources Information Center

    Hennings, Patricia

    This competency-based preservice home economics teacher education module on maintenance procedures for surfaces and appliances is the sixth in a set of six modules on consumer education related to housing. (This set is part of a larger set of sixty-seven modules on the Management Approach to Teaching Consumer and Homemaking Education [MATCHE]--see…

  9. Impact of organizational policies and practices on workplace injuries in a hospital setting.

    PubMed

    Tveito, T H; Sembajwe, G; Boden, L I; Dennerlein, J T; Wagner, G R; Kenwood, C; Stoddard, A M; Reme, S E; Hopcia, K; Hashimoto, D; Shaw, W S; Sorensen, G

    2014-08-01

    This study aimed to assess relationships between perceptions of organizational practices and policies (OPP), social support, and injury rates among workers in hospital units. A total of 1230 hospital workers provided survey data on OPP, job flexibility, and social support. Demographic data and unit injury rates were collected from the hospitals' administrative databases. Injury rates were lower in units where workers reported higher OPP scores and high social support. These relationships were mainly observed among registered nurses. Registered nurses perceived coworker support and OPP as less satisfactory than patient care associates (PCAs). Nevertheless, because of the low number of PCAs at each unit, results for the PCAs are preliminary and should be further researched in future studies with larger sample sizes. Employers aiming to reduce injuries in hospitals could focus on good OPP and supportive work environment.

  10. Pathway to the Galactic Distribution of Planets: Combined Spitzer and Ground-Based Microlens Parallax Measurements of 21 Single-Lens Events

    NASA Technical Reports Server (NTRS)

    Novati, S. Calchi; Gould, A.; Udalski, A.; Menzies, J. W.; Bond, I. A.; Shvartzvald, Y.; Street, R. A.; Hundertmark, M.; Beichman, C. A.; Barry, R. K.

    2015-01-01

    We present microlens parallax measurements for 21 (apparently) isolated lenses observed toward the Galactic bulge that were imaged simultaneously from Earth and Spitzer, which was approximately 1 Astronomical Unit west of Earth in projection. We combine these measurements with a kinematic model of the Galaxy to derive distance estimates for each lens, with error bars that are small compared to the Sun's galactocentric distance. The ensemble therefore yields a well-defined cumulative distribution of lens distances. In principle, it is possible to compare this distribution against a set of planets detected in the same experiment in order to measure the Galactic distribution of planets. Since these Spitzer observations yielded only one planet, this is not yet possible in practice. However, it will become possible as larger samples are accumulated.

  11. Pathway to the Galactic Distribution of Planets: Combined Spitzer and Ground-Based Microlens Parallax Measurements of 21 Single-Lens Events

    NASA Astrophysics Data System (ADS)

    Calchi Novati, S.; Gould, A.; Udalski, A.; Menzies, J. W.; Bond, I. A.; Shvartzvald, Y.; Street, R. A.; Hundertmark, M.; Beichman, C. A.; Yee, J. C.; Carey, S.; Poleski, R.; Skowron, J.; Kozłowski, S.; Mróz, P.; Pietrukowicz, P.; Pietrzyński, G.; Szymański, M. K.; Soszyński, I.; Ulaczyk, K.; Wyrzykowski, Ł.; OGLE Collaboration; Albrow, M.; Beaulieu, J. P.; Caldwell, J. A. R.; Cassan, A.; Coutures, C.; Danielski, C.; Dominis Prester, D.; Donatowicz, J.; Lončarić, K.; McDougall, A.; Morales, J. C.; Ranc, C.; Zhu, W.; PLANET Collaboration; Abe, F.; Barry, R. K.; Bennett, D. P.; Bhattacharya, A.; Fukunaga, D.; Inayama, K.; Koshimoto, N.; Namba, S.; Sumi, T.; Suzuki, D.; Tristram, P. J.; Wakiyama, Y.; Yonehara, A.; MOA Collaboration; Maoz, D.; Kaspi, S.; Friedmann, M.; Wise Group; Bachelet, E.; Figuera Jaimes, R.; Bramich, D. M.; Tsapras, Y.; Horne, K.; Snodgrass, C.; Wambsganss, J.; Steele, I. A.; Kains, N.; RoboNet Collaboration; Bozza, V.; Dominik, M.; Jørgensen, U. G.; Alsubai, K. A.; Ciceri, S.; D'Ago, G.; Haugbølle, T.; Hessman, F. V.; Hinse, T. C.; Juncher, D.; Korhonen, H.; Mancini, L.; Popovas, A.; Rabus, M.; Rahvar, S.; Scarpetta, G.; Schmidt, R. W.; Skottfelt, J.; Southworth, J.; Starkey, D.; Surdej, J.; Wertz, O.; Zarucki, M.; MiNDSTEp Consortium; Gaudi, B. S.; Pogge, R. W.; DePoy, D. L.; μFUN Collaboration

    2015-05-01

    We present microlens parallax measurements for 21 (apparently) isolated lenses observed toward the Galactic bulge that were imaged simultaneously from Earth and Spitzer, which was ˜1 AU west of Earth in projection. We combine these measurements with a kinematic model of the Galaxy to derive distance estimates for each lens, with error bars that are small compared to the Sun’s galactocentric distance. The ensemble therefore yields a well-defined cumulative distribution of lens distances. In principle, it is possible to compare this distribution against a set of planets detected in the same experiment in order to measure the Galactic distribution of planets. Since these Spitzer observations yielded only one planet, this is not yet possible in practice. However, it will become possible as larger samples are accumulated.
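The step from a measured microlens parallax to a lens distance rests on the standard relation π_rel = θ_E π_E. A minimal sketch follows; the θ_E, π_E, and source-distance values are illustrative assumptions, and for single-lens events without a measured θ_E the paper instead combined π_E with the kinematic Galactic model described above:

```python
def lens_distance_kpc(theta_e_mas, pi_e, d_source_kpc=8.3):
    """Lens distance from the angular Einstein radius theta_E (mas) and
    the microlens parallax pi_E, via pi_rel = theta_E * pi_E.

    With theta_E in mas, pi_rel is in mas, and a 1 mas parallax
    corresponds to 1 kpc, so distances come out in kpc."""
    pi_rel = theta_e_mas * pi_e            # relative lens-source parallax
    return 1.0 / (pi_rel + 1.0 / d_source_kpc)

# Illustrative bulge-source numbers: theta_E = 0.5 mas, pi_E = 0.3
d_lens = lens_distance_kpc(0.5, 0.3)
```

For these assumed numbers the lens lies at roughly 3.7 kpc, i.e. a foreground disk lens well in front of a bulge source.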

  12. Testing for periodicity of extinction

    NASA Technical Reports Server (NTRS)

    Raup, David M.; Sepkoski, J. J., Jr.

    1988-01-01

    The statistical techniques used by Raup and Sepkoski (1984 and 1986) to identify a 26-Myr periodicity in the biological extinction record for the past 250 Myr are reexamined, responding in detail to the criticisms of Stigler and Wagner (1987). It is argued that evaluation of a much larger set of extinction data using a time scale with 51 sampling intervals supports the finding of periodicity. In a reply by Stigler and Wagner, the preference for a 26-Myr period is attributed to a numerical quirk in the Harland et al. (1982) time scale, in which the subinterval boundaries are not linear interpolations between the stage boundaries but have 25-Myr periodicity. It is stressed that the results of the stringent statistical tests imposed do not disprove periodicity but rather indicate that the evidence and analyses presented so far are inadequate.
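A schematic version of a periodicity search (not the randomization test actually used by Raup and Sepkoski) scores each trial period by how far the extinction events fall from a strictly periodic comb:

```python
def period_misfit(event_times, period):
    """Mean distance (Myr) of events from the nearest multiple of a
    trial period; near-zero misfit means the record is close to
    strictly periodic at that period."""
    total = 0.0
    for t in event_times:
        r = t % period
        total += min(r, period - r)
    return total / len(event_times)

def best_period(event_times, candidates):
    """Trial period with the smallest misfit."""
    return min(candidates, key=lambda p: period_misfit(event_times, p))

# Synthetic, perfectly periodic extinction events every 26 Myr:
events = [26.0 * k for k in range(1, 10)]
trial_periods = [20.0 + 0.5 * i for i in range(29)]   # 20-34 Myr grid
p_best = best_period(events, trial_periods)
```

The real controversy summarized above is exactly about this kind of statistic: dating errors and time-scale quirks can produce spuriously low misfits at particular periods, which is why significance must be assessed against randomized event sets.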

  13. Magnetospheric Multiscale Observations of the Electron Diffusion Region of Large Guide Field Magnetic Reconnection

    NASA Technical Reports Server (NTRS)

    Eriksson, S.; Wilder, F. D.; Ergun, R. E.; Schwartz, S. J.; Cassak, P. A.; Burch, J. L.; Chen, Li-Jen; Torbert, R. B.; Phan, T. D.; Lavraud, B.; hide

    2016-01-01

    We report observations from the Magnetospheric Multiscale (MMS) satellites of a large guide field magnetic reconnection event. The observations suggest that two of the four MMS spacecraft sampled the electron diffusion region, whereas the other two spacecraft detected the exhaust jet from the event. The guide magnetic field amplitude is approximately 4 times that of the reconnecting field. The event is accompanied by a significant parallel electric field (E∥) that is larger than predicted by simulations. The high-speed (approximately 300 km/s) crossing of the electron diffusion region limited the data set to one complete electron distribution inside of the electron diffusion region, which shows significant parallel heating. The data suggest that E∥ is balanced by a combination of electron inertia and a parallel gradient of the gyrotropic electron pressure.

  14. Utilizing Ion-Mobility Data to Estimate Molecular Masses

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Kanik, Isik

    2008-01-01

    A method is being developed for utilizing readings of an ion-mobility spectrometer (IMS) to estimate molecular masses of ions that have passed through the spectrometer. The method involves the use of (1) some feature-based descriptors of structures of molecules of interest and (2) reduced ion mobilities calculated from IMS readings as inputs to (3) a neural network. This development is part of a larger effort to enable the use of IMSs as relatively inexpensive, robust, lightweight instruments to identify, via molecular masses, individual compounds or groups of compounds (especially organic compounds) that may be present in specific environments or samples. Potential applications include detection of organic molecules as signs of life on remote planets, modeling and detection of biochemicals of interest in the pharmaceutical and agricultural industries, and detection of chemical and biological hazards in industrial and homeland-security settings.
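The abstract does not specify the descriptors or network architecture, but the reduced ion mobility it uses as an input follows a standard normalization of the measured drift mobility to standard temperature and pressure. A sketch with illustrative drift-tube numbers:

```python
def reduced_mobility(drift_len_cm, voltage_v, drift_time_s,
                     temp_k, pressure_torr):
    """Reduced ion mobility K0 in cm^2 V^-1 s^-1.

    K = L^2 / (V * t_d) assumes a uniform drift field E = V / L;
    K0 normalizes K to 273.15 K and 760 Torr so that values are
    comparable across instruments and operating conditions."""
    K = drift_len_cm ** 2 / (voltage_v * drift_time_s)
    return K * (273.15 / temp_k) * (pressure_torr / 760.0)

# Illustrative numbers: 10 cm tube, 2 kV, 25 ms drift, ambient conditions
K0 = reduced_mobility(10.0, 2000.0, 0.025, 298.0, 760.0)
```

This yields K0 of about 1.8 cm² V⁻¹ s⁻¹, in the range typical of small organic ions.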

  15. How thick are Mercury's polar water ice deposits?

    NASA Astrophysics Data System (ADS)

    Eke, Vincent R.; Lawrence, David J.; Teodoro, Luís F. A.

    2017-03-01

    An estimate is made of the thickness of the radar-bright deposits in craters near to Mercury's north pole. To construct an objective set of craters for this measurement, an automated crater finding algorithm is developed and applied to a digital elevation model based on data from the Mercury Laser Altimeter onboard the MESSENGER spacecraft. This produces a catalogue of 663 craters with diameters exceeding 4 km, northwards of latitude +55°. A subset of 12 larger, well-sampled and fresh polar craters are selected to search for correlations between topography and radar same-sense backscatter cross-section. It is found that the typical excess height associated with the radar-bright regions within these fresh polar craters is (50 ± 35) m. This puts an approximate upper limit on the total polar water ice deposits on Mercury of ∼3 × 10¹⁵ kg.
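The quoted upper limit can be reproduced as an order-of-magnitude product of area, thickness, and ice density. The radar-bright area assumed below is a hypothetical round number, not a value from the paper; only the 50 m thickness comes from the abstract:

```python
# Order-of-magnitude check of the quoted mass upper limit.
ICE_DENSITY = 934.0                  # kg/m^3, water ice at ~100 K
area_m2 = 6.0e4 * 1.0e6              # assumed radar-bright area (6e4 km^2)
thickness_m = 50.0                   # typical excess height from the abstract
mass_kg = area_m2 * thickness_m * ICE_DENSITY   # ~3e15 kg
```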

  16. Disequilibrium and human capital in pharmacy labor markets: evidence from four states.

    PubMed

    Cline, Richard R

    2003-01-01

    To estimate the association between pharmacists' stocks of human capital (work experience and education), practice setting, demographics, and wage rates in the overall labor market and to estimate the association between these same variables and wage rates within six distinct pharmacy employment sectors. Wage estimation is used as a proxy measure of demand for pharmacists' services. Descriptive survey analysis. Illinois, Minnesota, Ohio, and Wisconsin. Licensed pharmacists working 30 or more hours per week. Analysis of data collected with cross-sectional mail surveys conducted in four states. Hourly wage rates for all pharmacists working 30 or more hours per week and hourly wage rates for pharmacists employed in large chain, independent, mass-merchandiser, hospital, health maintenance organization (HMO), and other settings. A total of 2,235 responses were received, for an adjusted response rate of 53.1%. Application of exclusion criteria left 1,450 responses from full-time pharmacists to analyze. Results from estimations of wages in the pooled sample and for pharmacists in the hospital setting suggest that advanced training and years of experience are associated positively with higher hourly wages. Years of experience were also associated positively with higher wages in independent and other settings, while neither advanced education nor experience was related to wages in large chain, mass-merchandiser, or HMO settings. Overall, the market for full-time pharmacists' labor is competitive, and employers pay wage premiums to those with larger stocks of human capital, especially advanced education and more years of pharmacy practice experience. The evidence supports the hypothesis that demand is exceeding supply in select employment sectors.
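Wage estimation of this kind reduces to regressing log wages on human-capital variables. As a minimal one-predictor sketch with hypothetical data (the study itself fit multivariate models across four states and six employment sectors):

```python
import math

def ols_fit(x, y):
    """One-predictor ordinary least squares; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical pharmacist data: years of experience vs hourly wage.
# With log(wage) as the outcome, the slope reads approximately as the
# proportional wage premium per additional year of experience.
exp_years = [2, 5, 10, 15, 20, 25]
wages = [38.0, 39.5, 42.0, 44.5, 46.0, 48.5]
intercept, premium = ols_fit(exp_years, [math.log(w) for w in wages])
```

For these made-up numbers the estimated premium is about 1% per year of experience; a positive, statistically significant premium is what the pooled-sample results above report for experience and advanced training.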

  17. Strong-lensing analysis of a complete sample of 12 MACS clusters at z > 0.5: mass models and Einstein radii

    NASA Astrophysics Data System (ADS)

    Zitrin, Adi; Broadhurst, Tom; Barkana, Rennan; Rephaeli, Yoel; Benítez, Narciso

    2011-01-01

    We present the results of a strong-lensing analysis of a complete sample of 12 very luminous X-ray clusters at z > 0.5 using HST/ACS images. Our modelling technique has uncovered some of the largest known critical curves outlined by many accurately predicted sets of multiple images. The distribution of Einstein radii has a median value of ≃28 arcsec (for a source redshift of z_s ∼ 2), twice as large as other lower-z samples, and extends to 55 arcsec for MACS J0717.5+3745, with an impressive enclosed Einstein mass of 7.4 × 10¹⁴ M⊙. We find that nine clusters cover a very large area (>2.5 arcmin²) of high magnification (μ > 10×) for a source redshift of z_s ∼ 8, providing primary targets for accessing the first stars and galaxies. We compare our results with theoretical predictions of the standard Λ cold dark matter (ΛCDM) model, which we show systematically fall short of our measured Einstein radii by a factor of ≃1.4, after accounting for the effect of lensing projection. Nevertheless, a revised analysis, once arc redshifts become available, and similar analyses of larger samples, are needed in order to establish more precisely the level of discrepancy with ΛCDM predictions.
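The Einstein mass quoted for MACS J0717.5+3745 follows from the standard relation M_E = (c²/4G)(D_L D_S/D_LS) θ_E². The angular-diameter distances below are rough assumed values for a z ≈ 0.55 lens and z_s ≈ 2 source, not numbers taken from the paper:

```python
import math

G_NEWTON = 6.674e-11        # m^3 kg^-1 s^-2
C_LIGHT = 2.998e8           # m/s
MPC = 3.086e22              # m per megaparsec
M_SUN = 1.989e30            # kg
ARCSEC = math.pi / (180.0 * 3600.0)   # radians per arcsecond

def einstein_mass_msun(theta_e_arcsec, d_l_mpc, d_s_mpc, d_ls_mpc):
    """Projected mass enclosed by the Einstein radius:
    M_E = (c^2 / 4G) * (D_L * D_S / D_LS) * theta_E^2,
    with angular-diameter distances."""
    theta = theta_e_arcsec * ARCSEC
    d_eff = d_l_mpc * d_s_mpc / d_ls_mpc * MPC
    return C_LIGHT ** 2 / (4.0 * G_NEWTON) * d_eff * theta ** 2 / M_SUN

# Rough distances assumed for a z ~ 0.55 lens and z_s ~ 2 source:
m_e = einstein_mass_msun(55.0, 1300.0, 1750.0, 1100.0)
```

With these assumed distances the 55 arcsec Einstein radius indeed encloses several times 10¹⁴ M⊙, consistent with the value quoted above.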

  18. Development of a compact terahertz time-domain spectrometer for the measurement of the optical properties of biological tissues.

    PubMed

    Wilmink, Gerald J; Ibey, Bennett L; Tongue, Thomas; Schulkin, Brian; Laman, Norman; Peralta, Xomalin G; Roth, Caleb C; Cerna, Cesario Z; Rivest, Benjamin D; Grundt, Jessica E; Roach, William P

    2011-04-01

    Terahertz spectrometers and imaging systems are currently being evaluated as biomedical tools for skin burn assessment. These systems show promise, but due to their size and weight they have restricted portability and are impractical for military and battlefield settings where space is limited. In this study, we developed and tested the performance of a compact, light, and portable THz time-domain spectroscopy (THz-TDS) device. Optical properties were collected with this system from 0.1 to 1.6 THz for water, ethanol, and several ex vivo porcine tissues (muscle, adipose, skin). For all samples tested, we found that the index of refraction (n) decreases with frequency, while the absorption coefficient (μa) increases with frequency. Muscle, adipose, and frozen/thawed skin samples exhibited comparable n values ranging between 2.5 and 2.0, whereas the n values for freshly harvested skin were roughly 40% lower. Additionally, we found that the freshly harvested samples exhibited higher μa values than the frozen/thawed skin samples. Overall, for all liquids and tissues tested, we found that our system measured optical property values that were consistent with those reported in the literature. These results suggest that our compact THz spectrometer performed comparably to its larger counterparts, and therefore may be a useful and practical tool for skin health assessment.

  19. Broad Hβ Emission-line Variability in a Sample of 102 Local Active Galaxies

    NASA Astrophysics Data System (ADS)

    Runco, Jordan N.; Cosens, Maren; Bennert, Vardha N.; Scott, Bryan; Komossa, S.; Malkan, Matthew A.; Lazarova, Mariana S.; Auger, Matthew W.; Treu, Tommaso; Park, Daeseong

    2016-04-01

    A sample of 102 local (0.02 ≤ z ≤ 0.1) Seyfert galaxies with black hole masses M_BH > 10⁷ M⊙ was selected from the Sloan Digital Sky Survey (SDSS) and observed using the Keck 10 m telescope to study the scaling relations between M_BH and host galaxy properties. We study profile changes of the broad Hβ emission line within the three to nine year time frame between the two sets of spectra. The variability of the broad Hβ emission line is of particular interest, not only because it is used to estimate M_BH, but also because its strength and width are used to classify Seyfert galaxies into different types. At least some form of broad-line variability (in either width or flux) is observed in the majority (∼66%) of the objects, resulting in a Seyfert-type change for ∼38% of the objects, likely driven by variable accretion and/or obscuration. The broad Hβ line virtually disappears in 3/102 (∼3%) extreme cases. We discuss potential causes for these changing-look active galactic nuclei. While similar dramatic transitions have previously been reported in the literature, either on a case-by-case basis or in larger samples focusing on quasars at higher redshifts, our study provides statistical information on the frequency of Hβ line variability in a sample of low-redshift Seyfert galaxies.

  20. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  1. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  2. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  3. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  4. 7 CFR 27.23 - Duplicate sets of samples of cotton.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Duplicate sets of samples of cotton. 27.23 Section 27... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Inspection and Samples § 27.23 Duplicate sets of samples of cotton. The duplicate sets of samples shall be inclosed in wrappers or...

  5. A study of hadronic decays of the chi(c) states produced in psi-prime radiative transitions at the Beijing Experimental Spectrometer

    NASA Astrophysics Data System (ADS)

    Varner, Gary Sim

    1999-11-01

    Utilizing the world's largest sample of resonant ψ′ decays, as measured by the Beijing Experimental Spectrometer (BES) during 1993-1995, a comprehensive study of the hadronic decay modes of the χc (³P_J charmonium) states has been undertaken. Compared with the data set of the Mark I detector, whose published measurements of many of these hadronic decays have been definitive for almost 20 years, statistics roughly an order of magnitude larger have been obtained. Taking advantage of these larger statistics, many new hadronic decay modes have been discovered, while measurements of others have been refined. An array of first observations, improvements, confirmations, or limits is reported with respect to current world values. These higher-precision and newly discovered decay modes are an excellent testing ground for recent theoretical interest in the contribution of higher Fock states and the color octet mechanism in heavy quarkonium annihilation and subsequent light hadronization. Because these calculations are largely tractable only for two-body decays, these are the focus of this dissertation. A comparison of current theoretical calculations and experimental results is presented, indicating the success of these phenomenological advances. Measurements for which there is as yet no suitable theoretical prediction are indicated.

  6. Seeing ghosts - Photometry of Saturn's G Ring

    NASA Technical Reports Server (NTRS)

    Showalter, Mark R.; Cuzzi, Jeffrey N.

    1993-01-01

    Saturn's faint and narrow G Ring is only visible to the eye in two Voyager images, each taken at a rather high solar phase angle of about 160 deg. In this paper we introduce a new photometric technique for averaging across multiple Voyager images, and use it to detect the G Ring at several additional viewing geometries. The resultant phase curve suggests that the G Ring is composed of dust particles obeying a very steep power-law size distribution. The dust is generally smaller than that seen in other rings, ranging down to 0.03 micron. The G Ring occupies the region between orbital radii 166,000 and 173,000 km, and has a peak somewhat closer to the inner edge. Based on these limits, we demonstrate that Voyager 2 passed through and directly sampled this ring during its 1981 encounter with Saturn. Combined analysis of additional data sets suggests that a population of larger bodies is also present in the G Ring; these bodies occupy a narrower band near the observed peak and are likely the source for the visible dust. Based on some preliminary dynamical models, we propose that these larger bodies represent leftover debris from the collisional breakup of a small moon in Saturn's distant past.

  7. Accumulation of MRI contrast agents in malignant fibrous histiocytoma for gadolinium neutron capture therapy.

    PubMed

    Fujimoto, T; Ichikawa, H; Akisue, T; Fujita, I; Kishimoto, K; Hara, H; Imabori, M; Kawamitsu, H; Sharma, P; Brown, S C; Moudgil, B M; Fujii, M; Yamamoto, T; Kurosaka, M; Fukumori, Y

    2009-07-01

    Neutron-capture therapy with gadolinium (Gd-NCT) has therapeutic potential, especially as gadolinium is generally used as a contrast medium in magnetic resonance imaging (MRI). The accumulation of gadolinium in a human sarcoma cell line, malignant fibrous histiocytoma (MFH) Nara-H, was visualized by the MRI system. The commercially available MRI contrast medium Gd-DTPA (Magnevist, dimeglumine gadopentetate aqueous solution) and the biodegradable, highly gadopentetic acid (Gd-DTPA)-loaded chitosan nanoparticles (Gd-nanoCPs) were prepared as MRI contrast agents. The MFH cells were cultured and collected into three Falcon tubes that were set into the 3-tesla MRI system to acquire signal intensities from each pellet by the spin echo method, and the longitudinal relaxation time (T1) was calculated. The amount of Gd in each sample was measured by inductively coupled plasma atomic emission spectrometry (ICP-AES). The accumulation of gadolinium in cells treated with Gd-nanoCPs was larger than that in cells treated with Gd-DTPA. In contrast, and compared with the control, Gd-DTPA was more effective than Gd-nanoCPs in reducing T1, suggesting that the larger accumulation exerted the adverse effect of lowering the enhancement of MRI. Further studies are warranted to gain insight into the therapeutic potential of Gd-NCT.
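The link between gadolinium accumulation and the measured T1 is the standard relaxivity relation 1/T1 = 1/T1,0 + r1[Gd]. The tissue T1 and r1 values below are typical literature numbers for Gd-DTPA at clinical field strengths, not measurements from this study:

```python
def t1_with_agent(t1_base_s, r1_per_s_mM, conc_mM):
    """Longitudinal relaxation time with a Gd contrast agent present:
    1/T1 = 1/T1_0 + r1 * [Gd]."""
    return 1.0 / (1.0 / t1_base_s + r1_per_s_mM * conc_mM)

# Illustrative values: tissue T1_0 = 1.5 s, r1 = 4 s^-1 mM^-1;
# a 0.5 mM effective Gd concentration shortens T1 to 0.375 s, which is
# the kind of T1 reduction the spin-echo measurements detect.
t1 = t1_with_agent(1.5, 4.0, 0.5)
```

This also illustrates the finding above: T1 shortening depends not only on total Gd uptake but on how effectively the bound Gd relaxes surrounding water protons, so a larger accumulation need not produce the larger enhancement.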

  8. Comparison of complex effluent treatability in different bench scale microbial electrolysis cells.

    PubMed

    Ullery, Mark L; Logan, Bruce E

    2014-10-01

    A range of wastewaters and substrates were examined using mini microbial electrolysis cells (mini MECs) to see if they could be used to predict the performance of larger-scale cube MECs. COD removals and coulombic efficiencies corresponded well between the two reactor designs for individual samples, with 66-92% of COD removed for all samples. Current generation was consistent between the reactor types for acetate (AC) and fermentation effluent (FE) samples, but less consistent with industrial (IW) and domestic wastewaters (DW). Hydrogen was recovered from all samples in cube MECs, but gas composition and volume varied significantly between samples. Evidence for direct conversion of substrate to methane was observed with two of the industrial wastewater samples (IW-1 and IW-3). Overall, mini MECs provided organic treatment data that corresponded well with larger scale reactor results, and therefore it was concluded that they can be a useful platform for screening wastewater sources. Copyright © 2014 Elsevier Ltd. All rights reserved.
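Coulombic efficiency in MEC studies is conventionally defined as the charge recovered as current divided by the charge available in the removed COD. A sketch with illustrative numbers (not data from the paper):

```python
def coulombic_efficiency(charge_c, delta_cod_g_per_l, volume_l):
    """Fraction of the electrons in the removed COD recovered as current:
    CE = Q / (F * b * dCOD * V / M_O2), with b = 4 mol electrons per
    mol O2 and M_O2 = 32 g/mol (a standard MEC/MFC definition)."""
    F = 96485.0                                   # C per mol of electrons
    mol_electrons = delta_cod_g_per_l * volume_l / 32.0 * 4.0
    return charge_c / (F * mol_electrons)

# Illustrative mini-MEC numbers: 1 g/L COD removed in a 30 mL reactor
# with 250 C of charge collected over the batch.
ce = coulombic_efficiency(250.0, 1.0, 0.030)
```

Here CE comes out near 0.7; comparing CE computed this way between mini and cube reactors is what supports the paper's conclusion that the two designs correspond well.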

  9. Technique for Performing Dielectric Property Measurements at Microwave Frequencies

    NASA Technical Reports Server (NTRS)

    Barmatz, Martin B.; Jackson, Henry W.

    2010-01-01

    A paper discusses the need to perform accurate dielectric property measurements on larger-sized samples, particularly liquids, at microwave frequencies. These types of measurements cannot be obtained using conventional cavity perturbation methods, particularly for liquids or powdered or granulated solids that require a surrounding container. To solve this problem, a model has been developed for the resonant frequency and quality factor of a cylindrical microwave cavity containing concentric cylindrical samples. This model can then be inverted to obtain the real and imaginary dielectric constants of the material of interest. This approach is based on using exact solutions to Maxwell's equations for the resonant properties of a cylindrical microwave cavity and also using the effective electrical conductivity of the cavity walls that is estimated from the measured empty-cavity quality factor. This new approach calculates the complex resonant frequency and associated electromagnetic fields for a cylindrical microwave cavity with lossy walls that is loaded with concentric, axially aligned, lossy dielectric cylindrical samples. In this approach, the calculated complex resonant frequency, consisting of real and imaginary parts, is related to the experimentally measured quantities. Because this approach uses Maxwell's equations to determine the perturbed electromagnetic fields in the cavity with the material(s) inserted, one can calculate the expected wall losses using the fields for the loaded cavity rather than just depending on the value of the fields obtained from the empty-cavity quality factor. These additional calculations provide a more accurate determination of the complex dielectric constant of the material being studied. The improved approach will be particularly important when working with larger samples or samples with larger dielectric constants that will further perturb the cavity electromagnetic fields. Also, this approach enables the use of a larger sample of interest, such as a liquid or a powdered or granulated solid, inside a cylindrical container.
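The mapping from the model's complex resonant frequency to the experimentally measured quantities can be sketched as follows; the convention shown (and the 2.45 GHz example) are assumptions for illustration:

```python
import math

def measured_f_and_q(omega):
    """Map a complex angular resonant frequency to the experimentally
    measured resonant frequency and quality factor:
    f = Re(omega) / 2pi,  Q = Re(omega) / (2 * Im(omega)).
    (The sign of Im(omega) depends on the assumed time convention.)"""
    return omega.real / (2.0 * math.pi), omega.real / (2.0 * omega.imag)

# Round-trip check at a 2.45 GHz mode with Q = 5000:
w0 = 2.0 * math.pi * 2.45e9
f, q = measured_f_and_q(complex(w0, w0 / (2.0 * 5000.0)))
```

Inverting the model then amounts to adjusting the complex sample permittivity until the computed (f, Q) pair matches the measured one.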

  10. Dynamics of sediment carbon stocks across intertidal wetland habitats of Moreton Bay, Australia.

    PubMed

    Hayes, Matthew A; Jesse, Amber; Hawke, Bruce; Baldock, Jeff; Tabet, Basam; Lockington, David; Lovelock, Catherine E

    2017-10-01

    Coastal wetlands are known for high carbon storage within their sediments, but our understanding of the variation in carbon storage among intertidal habitats, particularly over geomorphological settings and along elevation gradients, is limited. Here, we collected 352 cores from 18 sites across Moreton Bay, Australia. We assessed variation in sediment organic carbon (OC) stocks among different geomorphological settings (wetlands within riverine settings along with those with reduced riverine influence located on tide-dominated sand islands), across elevation gradients, with distance from shore and among habitat and vegetation types. We used mid-infrared (MIR) spectroscopy combined with analytical data and partial least squares regression to quantify the carbon content of ~2500 sediment samples and provide fine-scale spatial coverage of sediment OC stocks to 150 cm depth. We found sites in river deltas had larger OC stocks (175-504 Mg/ha) than those in nonriverine settings (44-271 Mg/ha). Variation in OC stocks among nonriverine sites was high in comparison with riverine and mixed geomorphic settings, with sites closer to riverine outflow from the east and south of Moreton Bay having higher stocks than those located on the sand islands in the northwest of the bay. Sediment OC stocks increased with elevation within nonriverine settings, but not in riverine geomorphic settings. Sediment OC stocks did not differ between mangrove and saltmarsh habitats. OC stocks did, however, differ between dominant species across the research area and within geomorphic settings. At the landscape scale, the coastal wetlands of the South East Queensland catchments (17,792 ha) are comprised of approximately 4,100,000-5,200,000 Mg of sediment OC. 
Comparatively high variation in OC storage between riverine and nonriverine geomorphic settings indicates that the availability of mineral sediments and terrestrially derived OC may exert a strong influence over OC storage potential across intertidal wetland systems. © 2017 John Wiley & Sons Ltd.

  11. A novel sputum transport solution eliminates cold chain and supports routine tuberculosis testing in Nepal.

    PubMed

    Maharjan, Bhagwan; Shrestha, Bhabana; Weirich, Alexandra; Stewart, Andrew; Kelly-Cirino, Cassandra D

    2016-12-01

This preliminary study evaluated the transport reagent OMNIgene SPUTUM (OMS) in a real-world, resource-limited setting: a zonal hospital and the national tuberculosis (TB) reference laboratory, Nepal. The objectives were to: (1) assess the performance of OMS for transporting sputum from peripheral sites without cold chain stabilization; and (2) compare with Nepal's standard of care (SOC) for Mycobacterium tuberculosis smear and culture diagnostics. Sixty sputa were manually split into a SOC sample (airline-couriered to the laboratory, conventional processing) and an OMS sample (OMS added at collection, no cold chain transport or processing). Smear microscopy and solid culture were performed. Transport took 0-8 days. Forty-one samples (68%) were smear-positive using both methods. Of the OMS cultures, 37 (62%) were positive, 22 (36%) were negative, and one (2%) was contaminated. The corresponding SOC results were 32 (53%), 21 (35%), and seven (12%). OMS "rescued" six culture-positive samples (i.e., positives missed using SOC), compared with one rescue using SOC. Of the smear-positives, six SOC samples produced contaminated cultures whereas only one OMS sample was contaminated. OMS reduced culture contamination from 12% to 2% and improved TB detection by 9%. The results suggest that OMS could perform well as a no-cold-chain, long-term transport solution for smear and culture testing. The findings provide a basis for larger feasibility studies. Copyright © 2016 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.

  12. African swine fever outbreak on a medium-sized farm in Uganda: biosecurity breaches and within-farm virus contamination.

    PubMed

    Chenais, Erika; Sternberg-Lewerin, Susanna; Boqvist, Sofia; Liu, Lihong; LeBlanc, Neil; Aliro, Tonny; Masembe, Charles; Ståhl, Karl

    2017-02-01

In Uganda, a low-income country in east Africa, African swine fever (ASF) is endemic with yearly outbreaks. In the prevailing smallholder subsistence farming systems, farm biosecurity is largely non-existent. Outbreaks of ASF, particularly in smallholder farms, often go unreported, creating significant epidemiological knowledge gaps. The continuous circulation of ASF in smallholder settings also creates biosecurity challenges for larger farms. In this study, an on-going outbreak of ASF in an endemic area was investigated at the farm level, including analyses of on-farm environmental virus contamination. The study was carried out on a medium-sized pig farm with 35 adult pigs and 103 piglets or growers at the onset of the outbreak. Within 3 months, all pigs had died or were slaughtered. The study included interviews with farm representatives as well as biological and environmental sampling. ASF was confirmed by the presence of ASF virus (ASFV) genomic material in biological (blood, serum) and environmental (soil, water, feed, manure) samples by real-time PCR. The ASFV-positive biological samples confirmed the clinical assessment and were consistent with known virus characteristics. Most environmental samples were found to be positive. Assessment of farm biosecurity, interviews, and the results from the biological and environmental samples revealed that breaches and non-compliance with biosecurity protocols most likely led to the introduction and within-farm spread of the virus. The information derived from this study provides valuable insight regarding the implementation of biosecurity measures, particularly in endemic areas.

  13. Exploring the parameter space of the coarse-grained UNRES force field by random search: selecting a transferable medium-resolution force field.

    PubMed

    He, Yi; Xiao, Yi; Liwo, Adam; Scheraga, Harold A

    2009-10-01

We explored the energy-parameter space of our coarse-grained UNRES force field for large-scale ab initio simulations of protein folding, to obtain good initial approximations for hierarchical optimization of the force field with new virtual-bond-angle bending and side-chain-rotamer potentials which we recently introduced to replace the statistical potentials. One hundred sets of energy-term weights were generated randomly, and good sets were selected by carrying out replica-exchange molecular dynamics simulations of two peptides with a minimal alpha-helical and a minimal beta-hairpin fold, respectively: the tryptophan cage (PDB code: 1L2Y) and tryptophan zipper (PDB code: 1LE1). Eight sets of parameters produced native-like structures of these two peptides. These eight sets were tested on two larger proteins: the engrailed homeodomain (PDB code: 1ENH) and FBP WW domain (PDB code: 1E0L); two sets were found to produce native-like conformations of these proteins. These two sets were tested further on a larger set of nine proteins with alpha or alpha + beta structure and found to locate native-like structures of most of them. These results demonstrate that, in addition to finding reasonable initial starting points for optimization, an extensive search of parameter space is a powerful method to produce a transferable force field. Copyright 2009 Wiley Periodicals, Inc.

  14. A reconnaissance study of 13C-13C clumping in ethane from natural gas

    NASA Astrophysics Data System (ADS)

    Clog, Matthieu; Lawson, Michael; Peterson, Brian; Ferreira, Alexandre A.; Santos Neto, Eugenio V.; Eiler, John M.

    2018-02-01

Ethane is the second most abundant alkane in most natural gas reservoirs. Its bulk isotopic compositions (δ13C and δD) are used to understand the conditions and progress of the cracking reactions that lead to the accumulation of hydrocarbons. Bulk isotopic compositions are dominated by the concentrations of singly substituted isotopologues (13CH3-12CH3 for δ13C and 12CDH2-12CH3 for δD). However, multiply substituted isotopologues can provide additional, independent constraints on the origins of natural ethane. The 13C2H6 isotopologue is particularly interesting, as it can potentially record the distribution of 13C atoms in the parent biomolecules whose thermal cracking leads to the production of natural gas. This work presents methods to purify ethane from natural gas samples and to quantify the abundance of the rare isotopologue 13C2H6 in ethane at natural abundances to a precision of ±0.12 ‰ using a high-resolution gas-source mass spectrometer. To investigate the natural variability in carbon-carbon clumping, we measured twenty-five samples of thermogenic ethane from a range of geological settings, supported by two hydrous pyrolysis experiments on shales and a dry pyrolysis experiment on ethane. The natural gas samples exhibit a range of 'clumped isotope' signatures (Δ13C2H6) at least 30 times larger than our analytical precision, and significantly larger than expected from thermodynamic equilibration of the carbon-carbon bonds during or after the formation of ethane, from inheritance of the distribution of isotopes in organic molecules, or from different extents of cracking of the source. However, we show a relationship between Δ13C2H6 and the proportion of alkanes in natural gas samples, which we believe can be associated with the extent of secondary ethane cracking. This scenario is consistent with the results of laboratory experiments, where breaking down ethane leaves the residue with a low Δ13C2H6 compared to the initial gas. 
Carbon-carbon clumping is therefore a new potential tracer suitable for the study of kinetic processes associated with natural gas.
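For readers unfamiliar with the notation, the Δ13C2H6 anomaly discussed above follows the general clumped-isotope convention: the measured abundance ratio of the doubly substituted isotopologue is compared against the ratio expected for a purely stochastic (random) distribution of isotopes, expressed in per mil. The symbols below are the generic convention, not necessarily the paper's exact working definition:

```latex
\Delta_{^{13}\mathrm{C}_2\mathrm{H}_6}
  = \left( \frac{R}{R^{*}} - 1 \right) \times 1000
\quad\text{where}\quad
R = \frac{[^{13}\mathrm{C}_2\mathrm{H}_6]}{[^{12}\mathrm{C}_2\mathrm{H}_6]}
```

Here R* is the same ratio computed from the bulk δ13C value under the assumption that 13C atoms are distributed randomly among all ethane molecules.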

  15. Use of Genomic Data in Risk Assessment Case Study: II. Evaluation of the Dibutyl Phthalate Toxicogenomic Dataset

    EPA Science Inventory

    An evaluation of the toxicogenomic data set for dibutyl phthalate (DBP) and male reproductive developmental effects was performed as part of a larger case study to test an approach for incorporating genomic data in risk assessment. The DBP toxicogenomic data set is composed of ni...

  16. Known Data Problems | ECHO | US EPA

    EPA Pesticide Factsheets

    EPA manages a series of national information systems that include data flowing from staff in EPA and state/tribal/local offices. Given this fairly complex set of transactions, occasional problems occur with the migration of data into the national systems. This page is meant to explain known data quality problems with larger sets of data.

  17. Multilevel Latent Class Analysis: Parametric and Nonparametric Models

    ERIC Educational Resources Information Center

    Finch, W. Holmes; French, Brian F.

    2014-01-01

    Latent class analysis is an analytic technique often used in educational and psychological research to identify meaningful groups of individuals within a larger heterogeneous population based on a set of variables. This technique is flexible, encompassing not only a static set of variables but also longitudinal data in the form of growth mixture…

  18. Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity.

    PubMed

    Sepúlveda, Nuno; Drakeley, Chris

    2015-04-03

In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and the seroconversion rate (SCR) as informative indicators of malaria burden in low-transmission settings or in populations on the cusp of elimination. However, most studies are designed to support statistical inference on parasite rates, not on these alternative malaria burden measures. SP is in essence a proportion, and thus many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the first to the most common situation, where SRR is unknown. In this situation, data simulation was used together with linear regression to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. 
With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes increase relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
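The first calculator described in this record maps a confidence interval for SP onto one for SCR when SRR is known. A minimal sketch of that idea, assuming the equilibrium approximation SP = SCR/(SCR + SRR) (age structure ignored) and illustrative numbers not taken from the paper:

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def scr_from_sp(sp, srr):
    """Equilibrium approximation: SP = SCR/(SCR+SRR) => SCR = SRR*SP/(1-SP)."""
    return srr * sp / (1 - sp)

# Illustrative survey: 120 seropositives out of 400 sampled,
# with a known seroreversion rate of 0.01 per year.
lo, hi = wilson_ci(120, 400)
print("SP  95%% CI: (%.3f, %.3f)" % (lo, hi))
print("SCR 95%% CI: (%.4f, %.4f)" % (scr_from_sp(lo, 0.01), scr_from_sp(hi, 0.01)))
```

Because the mapping is monotone in SP, transforming each CI bound directly gives a CI for SCR; the paper's calculators additionally account for the age distribution, which this sketch omits.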

  19. Climate, fire size, and biophysical setting control fire severity and spatial pattern in the northern Cascade Range, USA

    Treesearch

    C. Alina Cansler; Donald. McKenzie

    2014-01-01

    Warmer and drier climate over the past few decades has brought larger fire sizes and increased annual area burned in forested ecosystems of western North America, and continued increases in annual area burned are expected due to climate change. As warming continues, fires may also increase in severity and produce larger contiguous patches of severely burned areas. We...

  20. 45 CFR 1356.84 - Sampling.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...

  1. 45 CFR 1356.84 - Sampling.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...

  2. 45 CFR 1356.84 - Sampling.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Correction (FPC). The State agency must increase the resulting number by 30 percent to allow for attrition... 30 percent to allow for attrition, but the sample size must not be larger than the number of youth...

  3. A qualitative understanding of patient falls in inpatient mental health units.

    PubMed

    Powell-Cope, Gail; Quigley, Patricia; Besterman-Dahan, Karen; Smith, Maureen; Stewart, Jonathan; Melillo, Christine; Haun, Jolie; Friedman, Yvonne

    2014-01-01

Background: Falls are the leading cause of injury-related deaths among people age 65 and older, and fractures are the major category of serious injuries produced by falls. Objective: To determine market segment-specific recommendations for "selling" falls prevention in acute inpatient psychiatry. Design: Descriptive, using focus groups. Setting: One inpatient unit at a Veterans' hospital in the Southeastern United States and one national conference of psychiatric and mental health nurses. Participants: A convenience sample of 22 registered nurses and advanced practice nurses, one physical therapist, and two physicians participated in one of six focus groups. Interventions: None. Methods: Focus groups were conducted by expert facilitators using a semistructured interview guide. Focus groups were recorded and transcribed. Content analysis was used to organize findings. Results: Findings were grouped into fall risk assessment, clinical fall risk precautions, programmatic fall prevention, and "selling" fall prevention in psychiatry. Participants focused on falls prevention instead of fall injury prevention, were committed to reducing risk, and were receptive to learning how to improve safety. Participants recognized unique features of their patients and care settings that defined risk, and were highly motivated to work with other disciplines to keep patients safe. Conclusions: Selling fall injury prevention to staff in psychiatric settings is similar to selling it to staff in other health care settings. Appealing to the larger construct of patient safety will motivate staff in psychiatric settings to implement best practices and customize these to account for unique population needs and characteristics. © The Author(s) 2014.

  4. The false security of blind dates: chrononymization's lack of impact on data privacy of laboratory data.

    PubMed

    Cimino, J J

    2012-01-01

    The reuse of clinical data for research purposes requires methods for the protection of personal privacy. One general approach is the removal of personal identifiers from the data. A frequent part of this anonymization process is the removal of times and dates, which we refer to as "chrononymization." While this step can make the association with identified data (such as public information or a small sample of patient information) more difficult, it comes at a cost to the usefulness of the data for research. We sought to determine whether removal of dates from common laboratory test panels offers any advantage in protecting such data from re-identification. We obtained a set of results for 5.9 million laboratory panels from the National Institutes of Health's (NIH) Biomedical Translational Research Information System (BTRIS), selected a random set of 20,000 panels from the larger source sets, and then identified all matches between the sets. We found that while removal of dates could hinder the re-identification of a single test result, such removal had almost no effect when entire panels were used. Our results suggest that reliance on chrononymization provides a false sense of security for the protection of laboratory test results. As a result of this study, the NIH has chosen to rely on policy solutions, such as strong data use agreements, rather than removal of dates when reusing clinical data for research purposes.
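A toy simulation (entirely synthetic values, not BTRIS data) illustrates why removing dates barely protects whole panels: the combination of several analyte values in one panel is already nearly unique within a large sample, so value matching alone suffices for re-identification:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical "metabolic panel": a tuple of a few rounded analyte values.
# Dates are deliberately excluded, mimicking chrononymization.
def random_panel():
    sodium = round(random.gauss(140, 3))          # mmol/L, integer
    potassium = round(random.gauss(4.2, 0.4), 1)  # mmol/L, 1 decimal
    glucose = round(random.gauss(100, 20))        # mg/dL, integer
    creatinine = round(random.gauss(1.0, 0.2), 2) # mg/dL, 2 decimals
    return (sodium, potassium, glucose, creatinine)

panels = [random_panel() for _ in range(20000)]
freq = Counter(panels)
unique = sum(1 for p in panels if freq[p] == 1)

# A panel whose value combination is unique in the set can be re-identified
# by value matching alone, with or without its date.
print(f"{unique / len(panels):.1%} of panels are value-unique without dates")
```

A single analyte would collide constantly, but the joint distribution of four analytes is sparse enough that most panels remain unique, mirroring the paper's finding that chrononymization helps for single results but not for whole panels.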

  5. The False Security of Blind Dates

    PubMed Central

    Cimino, J.J.

    2012-01-01

    Background The reuse of clinical data for research purposes requires methods for the protection of personal privacy. One general approach is the removal of personal identifiers from the data. A frequent part of this anonymization process is the removal of times and dates, which we refer to as “chrononymization.” While this step can make the association with identified data (such as public information or a small sample of patient information) more difficult, it comes at a cost to the usefulness of the data for research. Objectives We sought to determine whether removal of dates from common laboratory test panels offers any advantage in protecting such data from re-identification. Methods We obtained a set of results for 5.9 million laboratory panels from the National Institutes of Health’s (NIH) Biomedical Translational Research Information System (BTRIS), selected a random set of 20,000 panels from the larger source sets, and then identified all matches between the sets. Results We found that while removal of dates could hinder the re-identification of a single test result, such removal had almost no effect when entire panels were used. Conclusions Our results suggest that reliance on chrononymization provides a false sense of security for the protection of laboratory test results. As a result of this study, the NIH has chosen to rely on policy solutions, such as strong data use agreements, rather than removal of dates when reusing clinical data for research purposes. PMID:23646086

  6. A modified varying-stage adaptive phase II/III clinical trial design.

    PubMed

    Dong, Gaohong; Vandemeulebroecke, Marc

    2016-07-01

Conventionally, adaptive phase II/III clinical trials are carried out with a strict two-stage design. Recently, a varying-stage adaptive phase II/III clinical trial design has been developed. In this design, following the first stage, an intermediate stage can be adaptively added to obtain more data, so that a more informative decision can be made. The number of further investigational stages is therefore determined from the data accumulated up to the interim analysis. This design considers two plausible study endpoints, with one of them initially designated as the primary endpoint. Based on interim results, the other endpoint can be switched in as the primary endpoint. However, in many therapeutic areas, the primary study endpoint is well established. We therefore modify this design to consider only one study endpoint, so that it may be more readily applicable to real clinical trial designs. Our simulations show that, like the original design, this modified design controls the Type I error rate, and that design parameters such as the threshold probability for the two-stage setting and the alpha allocation ratio between the two-stage and three-stage settings have a great impact on the design characteristics. However, this modified design requires a larger sample size for the initial stage, and the probability of futility becomes much higher as the threshold probability for the two-stage setting gets smaller. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Simulation on Poisson and negative binomial models of count road accident modeling

    NASA Astrophysics Data System (ADS)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

Accident count data have often been shown to exhibit overdispersion, and the data may also contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, under the assumption that the dependent variable of the generated data follows a given distribution, namely the Poisson or the negative binomial distribution, with sample sizes from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression, and hurdle negative binomial models to the simulated data. Model fits were then compared. The simulation results show that, across the different sample sizes, not all models fit the data well, even when the data were generated from the model's own distribution, especially when the sample size is larger. Furthermore, larger sample sizes produced more zero accident counts in the dataset.
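As a minimal illustration of the overdispersion this record describes, the sketch below (hypothetical parameters, Python standard library only) draws negative binomial counts as a gamma-Poisson mixture and shows that their variance exceeds their mean, which a plain Poisson model cannot accommodate:

```python
import math
import random
import statistics

random.seed(42)

def neg_binomial_sample(r, p, n):
    """Draw n NB(r, p) counts via the gamma-Poisson mixture representation."""
    out = []
    for _ in range(n):
        # Poisson rate drawn from a gamma distribution => marginal is NB
        lam = random.gammavariate(r, (1 - p) / p)
        # Poisson draw via Knuth's method (adequate for small rates)
        L, k, prod = math.exp(-lam), 0, 1.0
        while True:
            prod *= random.random()
            if prod <= L:
                break
            k += 1
        out.append(k)
    return out

# Illustrative accident counts at a hypothetical T-junction
counts = neg_binomial_sample(r=2.0, p=0.4, n=500)
m = statistics.mean(counts)
v = statistics.pvariance(counts)

# A Poisson model forces variance == mean; overdispersed data violate that.
print(f"mean={m:.2f} variance={v:.2f} dispersion ratio={v/m:.2f}")
```

For NB(r=2, p=0.4) the theoretical mean is r(1-p)/p = 3.0 and the variance is mean/p = 7.5, so the dispersion ratio should come out well above 1, the signature that motivates negative binomial and hurdle models over Poisson regression.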

  8. Carbon Nanotube and Nanofiber Exposure Assessments: An Analysis of 14 Site Visits.

    PubMed

    Dahm, Matthew M; Schubauer-Berigan, Mary K; Evans, Douglas E; Birch, M Eileen; Fernback, Joseph E; Deddens, James A

    2015-07-01

    Recent evidence has suggested the potential for wide-ranging health effects that could result from exposure to carbon nanotubes (CNT) and carbon nanofibers (CNF). In response, the National Institute for Occupational Safety and Health (NIOSH) set a recommended exposure limit (REL) for CNT and CNF: 1 µg m(-3) as an 8-h time weighted average (TWA) of elemental carbon (EC) for the respirable size fraction. The purpose of this study was to conduct an industrywide exposure assessment among US CNT and CNF manufacturers and users. Fourteen total sites were visited to assess exposures to CNT (13 sites) and CNF (1 site). Personal breathing zone (PBZ) and area samples were collected for both the inhalable and respirable mass concentration of EC, using NIOSH Method 5040. Inhalable PBZ samples were collected at nine sites while at the remaining five sites both respirable and inhalable PBZ samples were collected side-by-side. Transmission electron microscopy (TEM) PBZ and area samples were also collected at the inhalable size fraction and analyzed to quantify and size CNT and CNF agglomerate and fibrous exposures. Respirable EC PBZ concentrations ranged from 0.02 to 2.94 µg m(-3) with a geometric mean (GM) of 0.34 µg m(-3) and an 8-h TWA of 0.16 µg m(-3). PBZ samples at the inhalable size fraction for EC ranged from 0.01 to 79.57 µg m(-3) with a GM of 1.21 µg m(-3). PBZ samples analyzed by TEM showed concentrations ranging from 0.0001 to 1.613 CNT or CNF-structures per cm(3) with a GM of 0.008 and an 8-h TWA concentration of 0.003. The most common CNT structure sizes were found to be larger agglomerates in the 2-5 µm range as well as agglomerates >5 µm. A statistically significant correlation was observed between the inhalable samples for the mass of EC and structure counts by TEM (Spearman ρ = 0.39, P < 0.0001). 
Overall, EC PBZ and area TWA samples were below the NIOSH REL (96% were <1 μg m(-3) at the respirable size fraction), while 30% of the inhalable PBZ EC samples were found to be >1 μg m(-3). Until more information is known about health effects associated with larger agglomerates, it seems prudent to assess worker exposure to airborne CNT and CNF materials by monitoring EC at both the respirable and inhalable size fractions. Concurrent TEM samples should be collected to confirm the presence of CNT and CNF. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2015.
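As a small worked example of the 8-h TWA and geometric-mean summaries used in this record (the concentrations and sampling durations below are illustrative, not values from the NIOSH survey):

```python
import math

# Hypothetical personal-sample measurements: (EC concentration in µg/m^3, hours)
samples = [(0.45, 2.5), (0.20, 3.0), (0.60, 1.5)]

# 8-h time-weighted average: weight each concentration by its duration and
# divide by the full 8-h shift (unsampled time assumed zero exposure here).
twa_8h = sum(c * t for c, t in samples) / 8.0

# Geometric mean of the measured concentrations
gm = math.exp(sum(math.log(c) for c, _ in samples) / len(samples))

print(f"8-h TWA = {twa_8h:.3f} µg/m^3, GM = {gm:.3f} µg/m^3")
```

The resulting TWA would then be compared against the REL of 1 µg/m^3 of respirable elemental carbon, as done for the survey samples above.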

  9. Bathymetric distribution of foraminifera in Jamaican reef environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, R.E.; Liddell, W.D.

    1985-02-01

Recent foraminifera inhabiting Jamaican north-coast fringing reefs display variations in distributional patterns that are related to bathymetry and reef morphology. Sediment samples containing foraminifera were collected along a profile that traversed the back reef (depth 1-2 m), fore-reef terrace (3-15 m), fore-reef escarpment (15-27 m), fore-reef slope (30-55 m), and upper deep fore reef (70 m). Approximately 150 species distributed among 80 genera were identified from the samples. Preliminary analyses indicate that diversity values (S, H') are lowest on the fore-reef terrace (79, 3.0, respectively), increase similarly in back-reef and fore-reef escarpment and slope settings (93, 3.4), and are highest on the deep fore reef (109, 3.7). Larger groupings (suborders) exhibit distinct bathymetric trends, with miliolids occurring more commonly in back-reef (comprising 51% of the fauna) than in fore-reef (28%) zones, whereas agglutinated and planktonic species occur more commonly in deeper reef (> 15 m, 9% and 4%, respectively) than in shallower reef zones (< 15 m, 3% and 0.5%, respectively). Among the more common species, Amphistegina gibbosa (Rotaliina) is much more abundant in fore-reef (3%) environments, and Sorites marginalis (Miliolina) occurs almost exclusively in the back reef, where it comprises 5.5% of the fauna. Q-mode cluster analysis, involving all species collected, enabled the delineation of back-reef, shallow fore-reef, and deeper fore-reef biofacies, also indicating the potential utility of foraminiferal distributions in detailed paleoenvironmental interpretations of ancient reef settings.

  10. The influence of shyness on children's test performance.

    PubMed

    Crozier, W Ray; Hostettler, Kirsten

    2003-09-01

    Research has shown that shy children differ from their peers not only in their use of language in routine social encounters but also in formal assessments of their language development, including psychometric tests of vocabulary. There has been little examination of factors contributing to these individual differences. To investigate cognitive-competence and social anxiety interpretations of differences in children's performance on tests of vocabulary. To examine the performance of shy and less shy children under different conditions of test administration, individually with an examiner or among their peers within the familiar classroom setting. The sample consisted of 240 Year 5 pupils (122 male, 118 female) from 24 primary schools. Shy and less shy children, identified by teacher nomination and checklist ratings, completed vocabulary and mental arithmetic tests in one of three conditions, in a between-subjects design. The conditions varied individual and group administration, and oral and written responses. The conditions of test administration influenced the vocabulary test performance of shy children. They performed significantly more poorly than their peers in the two face-to-face conditions but not in the group test condition. A comparable trend for the arithmetic test was not statistically significant. Across the sample as a whole, shyness correlated significantly with test scores. Shyness does influence children's cognitive test performance and its impact is larger when children are tested face-to-face rather than in a more anonymous group setting. The results are of significance for theories of shyness and have implications for the assessment of schoolchildren.

  11. The composition of the river and lake waters of the United States

    USGS Publications Warehouse

    Clarke, Frank Wigglesworth

    1924-01-01

In the summer of 1903 the late Richard B. Dole, chemist of the water-resources branch of the United States Geological Survey, began a systematic investigation of the composition of the river and lake waters of the United States. His plan, which developed gradually, was to have analyses made of the different waters in such a manner as to give the average composition of each one for an entire year. For a few waters such completeness was impracticable and the analyses covered only part of a year, but even for these waters the data obtained were of much value. As a rule, samples of each water were collected day by day. They were then mixed in sets of ten and analyzed, so that for each river or lake from 34 to 37 analyses were made. For the Mississippi above New Orleans composite analyses were made in sets of seven, giving 52 analyses from which to compute the average. For the Great Lakes, however, only monthly samples were taken, for the reason that their waters vary so little in composition that greater elaboration was not necessary. Some of the larger rivers were treated even more thoroughly; their average composition was determined at more than one point – the Mississippi at six points. For some rivers the analyses cover two years of collection, and for the data received from a contributor not connected with the Geological Survey, three years.

  12. Salivary testosterone levels in men at a U.S. sex club.

    PubMed

    Escasa, Michelle J; Casey, Jacqueline F; Gray, Peter B

    2011-10-01

Vertebrate males commonly experience elevations in testosterone levels in response to sexual stimuli, such as presentation of a novel mating partner. Some previous human studies have shown that watching erotic movies increases testosterone levels in males, although studies measuring testosterone changes during actual sexual intercourse or masturbation have yielded mixed results. Small sample sizes, "unnatural" lab-based settings, and invasive techniques may help account for the mixed human findings. Here, we investigated salivary testosterone levels in men watching (n = 26) versus participating (n = 18) in sexual activity at a large U.S. sex club. The present study entailed minimally invasive sample collection (measuring testosterone in saliva), a naturalistic setting, and a larger number of subjects than previous work to test three hypotheses related to men's testosterone responses to sexual stimuli. Subjects averaged 40 years of age and participated between 11:00 pm and 2:10 am. Consistent with expectations, results revealed that testosterone levels increased 36% among men during a visit to the sex club, with the magnitude of testosterone change significantly greater among participants (72%) compared with observers (11%). Contrary to expectation, men's testosterone changes were unrelated to their age. These findings were generally consistent with vertebrate studies indicating elevated male testosterone in response to sexual stimuli, but they also point to the importance of study context, since participation in sexual behavior had a stronger effect on testosterone increases in this study than in some previous human lab-based studies.

  13. Pituitary gland volumes in bipolar disorder.

    PubMed

    Clark, Ian A; Mackay, Clare E; Goodwin, Guy M

    2014-12-01

    Bipolar disorder has been associated with increased hypothalamic-pituitary-adrenal axis function. The mechanism is not well understood, but there may be associated increases in pituitary gland volume (PGV), and these small increases may be functionally significant. However, research investigating PGV in bipolar disorder reports mixed results. The aim of the current study was twofold: first, to assess PGV in two novel samples of patients with bipolar disorder and matched healthy controls; second, to perform a meta-analysis comparing PGV across a larger sample of patients and matched controls. Sample 1 consisted of 23 established patients and 32 matched controls. Sample 2 consisted of 39 medication-naïve patients and 42 matched controls. PGV was measured on structural MRI scans. Seven further studies were identified comparing PGV between patients and matched controls (total n: 244 patients, 308 controls). Both novel samples showed a small (approximately 20 mm(3), or 4%), but non-significant, increase in PGV in patients. Combining the two novel samples showed a significant association of age and PGV. Meta-analysis showed a trend towards a larger pituitary gland in patients (effect size: .23, CI: -.14, .59). While results suggest a possible small difference in pituitary gland volume between patients and matched controls, larger mega-analyses with sample sizes even greater than those used in the current meta-analysis are still required. There is a small but potentially functionally significant increase in PGV in patients with bipolar disorder compared to controls. Results demonstrate the difficulty of finding potentially important but small effects in functional brain disorders. Copyright © 2014 Elsevier B.V. All rights reserved.
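    The pooled effect size and confidence interval quoted above come from combining per-study effects weighted by their precision. A minimal fixed-effect (inverse-variance) pooling sketch; the effect sizes and variances below are hypothetical illustrations, not the values from the seven studies in the record:

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling of effect
# sizes, the kind of calculation behind the meta-analytic estimate above.
# Effect sizes and variances here are hypothetical, not the study's data.
import math

def pool_fixed_effect(effects, variances):
    """Return the pooled effect and its 95% CI via inverse-variance weights."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

effects = [0.30, 0.10, 0.25]    # hypothetical per-study effect sizes (Cohen's d)
variances = [0.04, 0.05, 0.03]  # hypothetical sampling variances
est, ci = pool_fixed_effect(effects, variances)
print(round(est, 3), tuple(round(x, 3) for x in ci))
```

A random-effects model would additionally widen the interval to absorb between-study heterogeneity.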

  14. Assessment of volatile organic compounds in surface water at West Branch Canal Creek, Aberdeen Proving Ground, Maryland, 1999

    USGS Publications Warehouse

    Olsen, Lisa D.; Spencer, Tracey A.

    2000-01-01

    The U.S. Geological Survey (USGS) collected 13 surface-water samples and 3 replicates from 5 sites in the West Branch Canal Creek area at Aberdeen Proving Ground from February through August 1999, as a part of an investigation of ground-water contamination and natural attenuation processes. The samples were analyzed for volatile organic compounds, including trichloroethylene, 1,1,2,2-tetrachloroethane, carbon tetrachloride, and chloroform, which are the four major contaminants that were detected in ground water in the Canal Creek area in earlier USGS studies. Field blanks were collected during the sampling period to assess sample bias. Field replicates were used to assess sample variability, which was expressed as relative percent difference. The mean variability of the surface-water replicate analyses was larger (35.4 percent) than the mean variability of ground-water replicate analyses (14.6 percent) determined for West Branch Canal Creek from 1995 through 1996. The higher variability in surface-water analyses is probably due to heterogeneities in the composition of the surface water rather than differences in sampling or analytical procedures. The most frequently detected volatile organic compound was 1,1,2,2-tetrachloroethane, which was detected in every sample and in two of the replicates. The surface-water contamination is likely the result of cross-media transfer of contaminants from the ground water and sediments along the West Branch Canal Creek. The full extent of surface-water contamination in West Branch Canal Creek and the locations of probable contaminant sources cannot be determined from this limited set of data. Tidal mixing, creek flow patterns, and potential effects of a drought that occurred during the sampling period also complicate the evaluation of surface-water contamination.
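    The replicate-variability metric used above, relative percent difference (RPD), is the absolute difference between a sample and its field replicate expressed as a percentage of their mean. A minimal sketch with a hypothetical replicate pair:

```python
# Relative percent difference (RPD), the replicate-variability metric
# used above: |x1 - x2| as a percentage of the pair's mean.
def relative_percent_difference(x1, x2):
    mean = (x1 + x2) / 2.0
    if mean == 0:
        return 0.0
    return abs(x1 - x2) / mean * 100.0

# Hypothetical replicate pair of 1,1,2,2-tetrachloroethane results (ug/L):
print(round(relative_percent_difference(12.0, 17.0), 2))  # -> 34.48
```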

  15. An expanded calibration study of the explicitly correlated CCSD(T)-F12b method using large basis set standard CCSD(T) atomization energies.

    PubMed

    Feller, David; Peterson, Kirk A

    2013-08-28

    The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies <0.5 E(h)) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
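    The Schwenke-style extrapolation mentioned above takes the form E_CBS = (E_large - E_small) * F + E_small, where F is an empirically fitted coefficient for a given basis-set pair. A minimal sketch; the F value and correlation energies below are illustrative placeholders, not the parameters or data from the paper:

```python
# Sketch of a Schwenke-style two-point basis-set extrapolation:
# E_CBS = (E_large - E_small) * F + E_small, with F an empirically
# fitted coefficient. The F value and energies here are hypothetical.
def schwenke_extrapolate(e_small, e_large, f_coeff):
    return (e_large - e_small) * f_coeff + e_small

e_vtz = -0.4500   # hypothetical VTZ-F12 correlation energy (hartree)
e_vqz = -0.4620   # hypothetical VQZ-F12 correlation energy (hartree)
e_cbs = schwenke_extrapolate(e_vtz, e_vqz, f_coeff=1.36)
print(round(e_cbs, 4))  # -> -0.4663
```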

  16. Relations of habitat-specific algal assemblages to land use and water chemistry in the Willamette Basin, Oregon

    USGS Publications Warehouse

    Carpenter, K.D.; Waite, I.R.

    2000-01-01

    Benthic algal assemblages, water chemistry, and habitat were characterized at 25 stream sites in the Willamette Basin, Oregon, during low flow in 1994. Seventy-three algal samples yielded 420 taxa, mostly diatoms, blue-green algae, and green algae. Algal assemblages from depositional samples were strongly dominated by diatoms (76% mean relative abundance), whereas erosional samples were dominated by blue-green algae (68% mean relative abundance). Canonical correspondence analysis (CCA) of semiquantitative and qualitative (presence/absence) data sets identified four environmental variables (maximum specific conductance, % open canopy, pH, and drainage area) that were significant in describing patterns of algal taxa among sites. Based on CCA, four groups of sites were identified: streams in forested basins that supported oligotrophic taxa, such as Diatoma mesodon; small streams in agricultural and urban basins that contained a variety of eutrophic and nitrogen-heterotrophic algal taxa; larger rivers draining areas of mixed land use that supported planktonic, eutrophic, and nitrogen-heterotrophic algal taxa; and streams with severely degraded or absent riparian vegetation (>75% open canopy) that were dominated by other planktonic, eutrophic, and nitrogen-heterotrophic algal taxa. Patterns in water chemistry were consistent with the algal autecological interpretations and clearly demonstrated relationships between land use, water quality, and algal distribution patterns.

  17. Clinical evaluation of a miniaturized desktop breath hydrogen analyzer.

    PubMed

    Duan, L P; Braden, B; Clement, T; Caspary, W F; Lembcke, B

    1994-10-01

    A small desktop electrochemical H2 analyzer (EC-60-Hydrogen monitor) was compared with a stationary electrochemical H2 monitor (GMI-exhaled Hydrogen monitor). The EC-60-H2 monitor shows a high degree of precision for repetitive (n = 10) measurements of standard hydrogen mixtures (CV 1-8%). The response time for completion of measurement is shorter than that of the GMI-exhaled H2 monitor (37 sec. vs 53 sec.; p < 0.0001), while reset times are almost identical (54 sec. vs 51 sec., n.s.). In a clinical setting, breath H2-concentrations measured with the EC-60-H2 monitor and the GMI-exhaled H2 monitor were in excellent agreement, with a linear correlation (Y = 1.12X + 1.022, r2 = 0.9617, n = 115). With increasing H2-concentrations the EC-60-H2 monitor required larger sample volumes to maintain sufficient precision, and sample volumes greater than 200 ml were required at H2-concentrations > 30 ppm. For routine gastrointestinal function testing, the EC-60-H2 monitor is a satisfactory, reliable, easy-to-use, and inexpensive desktop breath hydrogen analyzer, whereas in patients who have difficulty cooperating (children, people with severe pulmonary insufficiency), special care must be taken to obtain sufficiently large breath samples.
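    The agreement statistics above (a line Y = aX + b and r²) come from an ordinary least-squares fit of paired readings from the two monitors. A minimal sketch with hypothetical paired readings, not the study's 115 data points:

```python
# Sketch of the analyzer-agreement analysis: an ordinary least-squares
# line Y = aX + b between paired breath-H2 readings from two monitors,
# plus the coefficient of determination r^2. Readings are hypothetical.
import numpy as np

gmi = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])   # reference monitor (ppm)
ec60 = np.array([3.1, 6.8, 12.0, 23.5, 46.0, 90.0])  # desktop monitor (ppm)

slope, intercept = np.polyfit(gmi, ec60, 1)
pred = slope * gmi + intercept
r2 = 1 - np.sum((ec60 - pred) ** 2) / np.sum((ec60 - ec60.mean()) ** 2)
print(round(slope, 3), round(intercept, 3), round(r2, 4))
```

A slope near 1 and a small intercept indicate the two instruments read nearly identically across the concentration range.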

  18. How many stakes are required to measure the mass balance of a glacier?

    USGS Publications Warehouse

    Fountain, A.G.; Vecchia, A.

    1999-01-01

    Glacier mass balance is estimated for South Cascade Glacier and Maclure Glacier using a one-dimensional regression of mass balance with altitude as an alternative to the traditional approach of contouring mass balance values. One attractive feature of regression is that it can be applied to sparse data sets where contouring is not possible, and it can provide an objective error for the resulting estimate. Regression methods yielded mass balance values equivalent to those from contouring methods. Examining the effect of the number of mass balance measurements on the final value for the glacier showed that sample sizes as small as five stakes provided reasonable estimates, although the error estimates were greater than for larger sample sizes. Different spatial patterns of measurement locations showed no appreciable influence on the final value as long as different surface altitudes were intermittently sampled over the altitude range of the glacier. Two different regression equations were examined, a quadratic and a piecewise linear spline, and comparison of results showed little sensitivity to the type of equation. These results point to the dominant effect of the gradient of mass balance with altitude for alpine glaciers compared with transverse variations. The number of mass balance measurements required to determine the glacier balance appears to be scale invariant for small glaciers, and five to ten stakes are sufficient.
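    The regression approach above can be sketched in two steps: fit point balance against stake altitude (here a quadratic, one of the two forms the study tested), then average the fitted curve over the glacier's area-altitude distribution. Stake data and band areas below are hypothetical:

```python
# Sketch of the balance-vs-altitude regression approach: fit b(z) to
# stake measurements, then area-weight the fitted curve over altitude
# bands. Stake data and band areas are hypothetical.
import numpy as np

z = np.array([1600.0, 1700.0, 1800.0, 1900.0, 2000.0])  # stake altitudes (m)
b = np.array([-2.1, -1.2, -0.4, 0.3, 0.8])              # balance (m w.e.)

coeffs = np.polyfit(z, b, 2)                 # quadratic b(z)

band_z = np.array([1650.0, 1750.0, 1850.0, 1950.0])  # band mid-altitudes (m)
band_area = np.array([0.5, 1.0, 1.2, 0.8])           # band areas (km^2)

b_bands = np.polyval(coeffs, band_z)
glacier_balance = np.sum(b_bands * band_area) / band_area.sum()
print(round(glacier_balance, 2))
```

With five stakes this already yields a usable glacier-wide value, consistent with the study's finding that five to ten stakes suffice.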

  19. Low field magnetocaloric effect in bulk and ribbon alloy La(Fe0.88Si0.12)13

    NASA Astrophysics Data System (ADS)

    Vuong, Van-Hiep; Do-Thi, Kim-Anh; Nguyen, Duy-Thien; Nguyen, Quang-Hoa; Hoang, Nam-Nhat

    2018-03-01

    The low-field magnetocaloric effect in itinerant metamagnetic materials is central to magnetic cooling applications. This work reports the magnetocaloric responses obtained at 1.35 T for the silicon-doped, iron-based binary alloy La(Fe0.88Si0.12)13 in bulk and ribbon form. Both samples possess the same symmetry but differ in crystallite size and lattice parameters. The ribbon sample shows a larger maximum entropy change (nearly 8.5 times larger) and a higher Curie temperature (5 K higher) than the bulk sample. The obtained relative cooling power for the ribbon is also larger and very promising for application (RCP = 153 J/kg versus 25.2 J/kg for the bulk). The observed effect is assigned to a negative magnetovolume effect in the ribbon structure, whose limited crystallization, caused by the rapid cooling during preparation, induced smaller crystallite sizes, a larger lattice constant, and an overall weaker local crystal field.
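    The relative cooling power (RCP) figure of merit quoted above is commonly computed as the peak entropy change times the full width at half maximum of the |ΔS(T)| curve. A minimal sketch on a hypothetical Gaussian-shaped curve, not the measured La(Fe0.88Si0.12)13 data:

```python
# Sketch of the RCP figure of merit: RCP = |dS_max| * dT_FWHM, the peak
# entropy change times the full width at half maximum of |dS(T)|.
# The entropy-change curve below is hypothetical.
import numpy as np

T = np.linspace(160, 260, 501)                     # temperature (K)
dS = 10.2 * np.exp(-((T - 200.0) / 12.0) ** 2)     # hypothetical |dS(T)| (J/(kg K))

peak = dS.max()
above = T[dS >= peak / 2.0]
fwhm = above[-1] - above[0]                        # full width at half maximum (K)
rcp = peak * fwhm
print(round(fwhm, 1), round(rcp, 1))
```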

  20. Veggie ISS Validation Test Results and Produce Consumption

    NASA Technical Reports Server (NTRS)

    Massa, Gioia; Hummerick, Mary; Spencer, LaShelle; Smith, Trent

    2015-01-01

    The Veggie vegetable production system flew to the International Space Station (ISS) in the spring of 2014. The first set of plants, Outredgeous red romaine lettuce, was grown, harvested, frozen, and returned to Earth in October. Ground control and flight plant tissue was sub-sectioned for microbial analysis, anthocyanin antioxidant phenolic analysis, and elemental analysis. Microbial analysis was also performed on samples swabbed on orbit from plants, Veggie bellows, and plant pillow surfaces, on water samples, and on samples of roots, media, and wick material from two returned plant pillows. Microbial levels of plants were comparable to ground controls, with some differences in community composition. The range in aerobic bacterial plate counts between individual plants was much greater in the ground controls than in flight plants. No pathogens were found. Anthocyanin concentrations were the same between ground and flight plants, while antioxidant and phenolic levels were slightly higher in flight plants. Elements varied, but key target elements for astronaut nutrition were similar between ground and flight plants. Aerobic plate counts of the flight plant pillow components were significantly higher than ground controls. Surface swab samples showed low microbial counts, with most below detection limits. Flight plant microbial levels were less than bacterial guidelines set for non-thermostabilized food and near or below those for fungi. These guidelines are not for fresh produce but are the closest approximate standards. Forward work includes the development of standards for space-grown produce. A produce consumption strategy for Veggie on ISS includes pre-flight assessments of all crops to down-select candidates, wiping flight-grown plants with sanitizing food wipes, and regular Veggie hardware cleaning and microbial monitoring. Produce could then be consumed by astronauts; however, some plant material would be reserved and returned for analysis. 
Implementation of this plan is a step toward developing pick-and-eat food production to supplement the packaged diet on ISS and for future exploration missions where plants could make up a larger portion of the diet. Supported by NASA Space Biology Program.

  1. Gas sorption and barrier properties of polymeric membranes from molecular dynamics and Monte Carlo simulations.

    PubMed

    Cozmuta, Ioana; Blanco, Mario; Goddard, William A

    2007-03-29

    It is important for many industrial processes to design new materials with improved selective permeability properties. Besides diffusion, a molecule's solubility contributes largely to the overall permeation process. This study presents a method to calculate solubility coefficients of gases such as O2, H2O (vapor), N2, and CO2 in polymeric matrices from simulation methods (Molecular Dynamics and Monte Carlo) using first-principles predictions. The generation and equilibration (annealing) of five polymer models (polypropylene, polyvinyl alcohol, polyvinyl dichloride, polyvinyl chloride-trifluoroethylene, and polyethylene terephthalate) are extensively described. For each polymer, the average density and Hansen solubilities over a set of ten samples compare well with experimental data. For polyethylene terephthalate, the average properties of a small (n = 10) and a large (n = 100) set are compared. Boltzmann averages and probability density distributions of binding and strain energies indicate that the smaller set is biased toward sampling configurations with higher energies. However, the sample with the lowest cohesive energy density from the smaller set is representative of the average of the larger set. Low molecular weight polymer models tend to have lower average densities, whereas infinite molecular weight samples provide a very good representation of the experimental density. Solubility constants calculated with two ensembles (grand canonical and Henry's constant) are equivalent within 20%. For each polymer sample, the solubility constant is then calculated using the faster (10x) Henry's constant ensemble (HCE) from 150 ps of NPT dynamics of the polymer matrix. The influence of various factors (bad contact fraction, number of iterations) on the accuracy of Henry's constant is discussed. 
To validate the calculations against experimental results, the solubilities of nitrogen and carbon dioxide in polypropylene are examined over a range of temperatures between 250 and 650 K. The magnitudes of the calculated solubilities agree well with experimental results, and the trends with temperature are predicted correctly. The HCE method is used to predict the solubility constants at 298 K of water vapor and oxygen. The water vapor solubilities follow more closely the experimental trend of permeabilities, both ranging over 4 orders of magnitude. For oxygen, the calculated values do not entirely follow the experimental trend of permeabilities, most probably because at this temperature some of the polymers are in the glassy regime and thus diffusion dominated. Our study also concludes that large confidence limits are associated with the calculated Henry's constants. By investigating several factors (terminal ends of the polymer chains, void distribution, etc.), we conclude that the large confidence limits are intimately related to the polymer's conformational changes caused by thermal fluctuations and have to be regarded, at least at the microscale, as a characteristic of each polymer and the nature of its interaction with the solute. Reducing the mobility of the polymer matrix as well as controlling the distribution of the free (occupiable) volume would act as mechanisms toward lowering both the gas solubility and the diffusion coefficients.

  2. A weighted exact test for mutually exclusive mutations in cancer

    PubMed Central

    Leiserson, Mark D.M.; Reyna, Matthew A.; Raphael, Benjamin J.

    2016-01-01

    Motivation: The somatic mutations in the pathways that drive cancer development tend to be mutually exclusive across tumors, providing a signal for distinguishing driver mutations from a larger number of random passenger mutations. This mutual exclusivity signal can be confounded by high and highly variable mutation rates across a cohort of samples. Current statistical tests for exclusivity that incorporate both per-gene and per-sample mutational frequencies are computationally expensive and have limited precision. Results: We formulate a weighted exact test for assessing the significance of mutual exclusivity in an arbitrary number of mutational events. Our test conditions on the number of samples with a mutation as well as per-event, per-sample mutation probabilities. We provide a recursive formula to compute P-values for the weighted test exactly as well as a highly accurate and efficient saddlepoint approximation of the test. We use our test to approximate a commonly used permutation test for exclusivity that conditions on per-event, per-sample mutation frequencies. However, our test is more efficient and it recovers more significant results than the permutation test. We use our Weighted Exclusivity Test (WExT) software to analyze hundreds of colorectal and endometrial samples from The Cancer Genome Atlas, which are two cancer types that often have extremely high mutation rates. On both cancer types, the weighted test identifies sets of mutually exclusive mutations in cancer genes with fewer false positives than earlier approaches. Availability and Implementation: See http://compbio.cs.brown.edu/projects/wext for software. Contact: braphael@cs.brown.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27587696
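    The permutation test that WExT approximates can be illustrated in miniature: fix each event's mutation count, randomly reassign which samples carry each mutation, and ask how often the permuted data are at least as exclusive as the observed data. This is an unweighted toy version for intuition only, not the weighted test or the WExT software itself:

```python
# Toy permutation test for mutual exclusivity: the statistic is the
# number of samples mutated in exactly one event. Event sizes (per-gene
# mutation counts) are held fixed; sample assignments are shuffled.
import random

def exclusive_count(mutations, n_samples):
    """mutations: list of sets of mutated sample indices, one set per event."""
    hits = [0] * n_samples
    for event in mutations:
        for s in event:
            hits[s] += 1
    return sum(1 for h in hits if h == 1)

def permutation_p(mutations, n_samples, n_perm=2000, seed=0):
    rng = random.Random(seed)
    observed = exclusive_count(mutations, n_samples)
    extreme = 0
    for _ in range(n_perm):
        perm = [set(rng.sample(range(n_samples), len(e))) for e in mutations]
        if exclusive_count(perm, n_samples) >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)   # add-one permutation p-value

# Three events across 20 samples, perfectly mutually exclusive:
events = [{0, 1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
p = permutation_p(events, 20)
print(p)
```

The weighted exact test in the paper replaces this Monte Carlo loop with an exact recursion (or saddlepoint approximation) that also conditions on per-sample mutation probabilities.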

  3. Performance audits and laboratory comparisons for SCOS97-NARSTO measurements of speciated volatile organic compounds

    NASA Astrophysics Data System (ADS)

    Fujita, Eric M.; Harshfield, Gregory; Sheetz, Laurence

    Performance audits and laboratory comparisons were conducted as part of the quality assurance program for the 1997 Southern California Ozone Study (SCOS97-NARSTO) to document potential measurement biases among laboratories measuring speciated nonmethane hydrocarbons (NMHC), carbonyl compounds, halogenated compounds, and biogenic hydrocarbons. The results show that measurements of volatile organic compounds (VOC) made during SCOS97-NARSTO are generally consistent with specified data quality objectives. The hydrocarbon comparison involved nine laboratories and consisted of two sets of collocated ambient samples. The coefficients of variation among laboratories for the sum of the 55 PAM target compounds and total NMHC ranged from ±5 to 15 percent for ambient samples from Los Angeles and Azusa. Abundant hydrocarbons are consistently identified by all laboratories, but discrepancies occur for olefins greater than C4 and for hydrocarbons greater than C8. Laboratory comparisons for halogenated compounds and biogenic hydrocarbons consisted of both concurrent ambient sampling by different laboratories and round-robin analysis of ambient samples. The coefficients of variation among participating laboratories were about 10-20 percent. Performance audits were conducted for measurement of carbonyl compounds involving sampling from a standard mixture of carbonyl compounds. The values reported by most of the laboratories were within 10-20 percent of those of the reference laboratory. Results of field measurement comparisons showed larger variations among the laboratories, ranging from 20 to 40 percent for C1-C3 carbonyl compounds. The greater variations observed in the field measurement comparison may reflect potential sampling artifacts, which the performance audits did not address.
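    The coefficient of variation (CV) used above to express inter-laboratory spread is the sample standard deviation divided by the mean, in percent. A minimal sketch with hypothetical lab totals, not SCOS97-NARSTO data:

```python
# Coefficient of variation (CV) among laboratories, the spread metric
# quoted above: sample standard deviation over the mean, in percent.
# The nine lab totals below are hypothetical.
import statistics

def coefficient_of_variation(values):
    return statistics.stdev(values) / statistics.mean(values) * 100.0

labs = [412, 398, 405, 430, 388, 402, 420, 395, 410]  # hypothetical NMHC sums (ppbC)
cv = coefficient_of_variation(labs)
print(round(cv, 1))  # -> 3.2
```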

  4. Uranium and Associated Heavy Metals in Ovis aries in a Mining Impacted Area in Northwestern New Mexico.

    PubMed

    Samuel-Nakamura, Christine; Robbins, Wendie A; Hodge, Felicia S

    2017-07-28

    The objective of this study was to determine uranium (U) and other heavy metal (HM) concentrations (As, Cd, Pb, Mo, and Se) in tissue samples collected from sheep (Ovis aries), the primary meat staple on the Navajo reservation in northwestern New Mexico. The study setting was a prime target of U mining, where more than 1100 unreclaimed abandoned U mines and structures remain. The forage and water sources for the sheep in this study were located within 3.2 km of abandoned U mines and structures. Tissue samples from sheep (n = 3), their local forage grasses (n = 24), soil (n = 24), and drinking water (n = 14) sources were collected. The samples were analyzed using Inductively Coupled Plasma-Mass Spectrometry. Results: In general, HMs concentrated more in the roots of forage compared to the above-ground parts. The sheep forage samples fell below the National Research Council maximum tolerable concentration (5 mg/kg). The bioaccumulation factor ratio was >1 in several forage samples, ranging from 1.12 to 16.86 for Mo, Cd, and Se. The study findings showed that the concentrations of HMs were greatest in the liver and kidneys. Of the calculated human intake, the Se Reference Dietary Intake and Mo Recommended Dietary Allowance were exceeded, but the tolerable upper limits for both were not exceeded. Food intake recommendations informed by research are needed for individuals, especially those that may be more sensitive to HMs. Further study with larger sample sizes is needed to explore other impacted communities across the reservation.
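    The bioaccumulation factor (BAF) reported above is the ratio of an element's concentration in plant tissue to its concentration in the supporting soil; a BAF above 1 indicates net accumulation by the plant. A minimal sketch with hypothetical concentrations, not the study's measurements:

```python
# Bioaccumulation factor (BAF): concentration in plant tissue divided
# by concentration in soil; BAF > 1 means net accumulation.
# The concentrations below are hypothetical.
def bioaccumulation_factor(c_plant, c_soil):
    return c_plant / c_soil

# Hypothetical Mo concentrations (mg/kg dry weight):
baf = bioaccumulation_factor(c_plant=3.8, c_soil=1.2)
print(round(baf, 2))  # -> 3.17
```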

  5. Uranium and Associated Heavy Metals in Ovis aries in a Mining Impacted Area in Northwestern New Mexico

    PubMed Central

    Samuel-Nakamura, Christine; Robbins, Wendie A.; Hodge, Felicia S.

    2017-01-01

    The objective of this study was to determine uranium (U) and other heavy metal (HM) concentrations (As, Cd, Pb, Mo, and Se) in tissue samples collected from sheep (Ovis aries), the primary meat staple on the Navajo reservation in northwestern New Mexico. The study setting was a prime target of U mining, where more than 1100 unreclaimed abandoned U mines and structures remain. The forage and water sources for the sheep in this study were located within 3.2 km of abandoned U mines and structures. Tissue samples from sheep (n = 3), their local forage grasses (n = 24), soil (n = 24), and drinking water (n = 14) sources were collected. The samples were analyzed using Inductively Coupled Plasma-Mass Spectrometry. Results: In general, HMs concentrated more in the roots of forage compared to the above-ground parts. The sheep forage samples fell below the National Research Council maximum tolerable concentration (5 mg/kg). The bioaccumulation factor ratio was >1 in several forage samples, ranging from 1.12 to 16.86 for Mo, Cd, and Se. The study findings showed that the concentrations of HMs were greatest in the liver and kidneys. Of the calculated human intake, the Se Reference Dietary Intake and Mo Recommended Dietary Allowance were exceeded, but the tolerable upper limits for both were not exceeded. Food intake recommendations informed by research are needed for individuals, especially those that may be more sensitive to HMs. Further study with larger sample sizes is needed to explore other impacted communities across the reservation. PMID:28788090

  6. Accuracy and sampling error of two age estimation techniques using rib histomorphometry on a modern sample.

    PubMed

    García-Donas, Julieta G; Dyke, Jeffrey; Paine, Robert R; Nathena, Despoina; Kranioti, Elena F

    2016-02-01

    Most age estimation methods have proven problematic when applied to highly fragmented skeletal remains. Rib histomorphometry is advantageous in such cases; yet it is vital to test and revise existing techniques, particularly when they are used in legal settings (Crowder and Rosella, 2007). This study tested the Stout & Paine (1992) and Stout et al. (1994) histological age estimation methods on a modern Greek sample using different sampling sites. Six left 4th ribs of known age and sex were selected from a modern skeletal collection. Each rib was cut into three equal segments, and two thin sections were acquired from each segment. A total of 36 thin sections were prepared and analysed. Four variables (cortical area, intact and fragmented osteon density, and osteon population density) were calculated for each section, and age was estimated according to Stout & Paine (1992) and Stout et al. (1994). The results showed that both methods produced a systematic underestimation of age (by up to 43 years), although a general improvement in accuracy was observed when applying the Stout et al. (1994) formula. Error rates increased with age, with the oldest individual showing extreme differences between real and estimated age. Comparison of the different sampling sites showed small differences between the estimated ages, suggesting that any fragment of the rib could be used without introducing significant error. Yet, a larger sample should be used to confirm these results. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
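    Histological age estimation of this kind regresses age on variables such as osteon population density (OPD). A minimal sketch of the general shape of such a calibration; the intercept and slope below are hypothetical placeholders, NOT the published Stout & Paine (1992) or Stout et al. (1994) coefficients:

```python
# Sketch of OPD-based age estimation: a linear calibration
# age = intercept + slope * OPD. Coefficients here are hypothetical
# placeholders, not the published formulas.
def estimate_age(opd, intercept=10.0, slope=2.2):
    """Hypothetical calibration; OPD in osteons per mm^2."""
    return intercept + slope * opd

# Averaging OPD over thin sections from different rib segments, as the
# study's sampling-site comparison suggests is acceptable:
sections_opd = [14.5, 15.2, 13.9]
mean_opd = sum(sections_opd) / len(sections_opd)
print(round(estimate_age(mean_opd), 1))  # -> 42.0
```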

  7. A methodological study of genome-wide DNA methylation analyses using matched archival formalin-fixed paraffin embedded and fresh frozen breast tumors.

    PubMed

    Espinal, Allyson C; Wang, Dan; Yan, Li; Liu, Song; Tang, Li; Hu, Qiang; Morrison, Carl D; Ambrosone, Christine B; Higgins, Michael J; Sucheston-Campbell, Lara E

    2017-02-28

    DNA from archival formalin-fixed and paraffin embedded (FFPE) tissue is an invaluable resource for genome-wide methylation studies, although concerns about poor quality may limit its use. In this study, we compared DNA methylation profiles of breast tumors using DNA from fresh-frozen (FF) tissues and three types of matched FFPE samples. Restored FFPE and matched FF samples were profiled using the Illumina Infinium HumanMethylation450K platform. Methylation levels (β-values) across all loci, and across the top 100 loci previously shown to differentiate tumors by estrogen receptor status (ER+ or ER-) in a larger FF study, were compared between matched FF and FFPE samples using Pearson's correlation, hierarchical clustering, and Weighted Correlation Gene Network Analyses (WCGNA). Positive predictive values and sensitivity levels for detecting differentially methylated loci (DML) in FF samples were calculated in an independent FFPE cohort. For 9/10 patients, correlation and unsupervised clustering analysis revealed that the FF and FFPE samples were consistently correlated with each other and clustered into distinct subgroups. Greater than 84% of the top 100 ER-differentiating loci identified in FF tissues were also DML in FFPE samples. WCGNA grouped the DML into 16 modules in FF tissue, with ~85% of the module membership preserved across tissue types. FFPE breast tumor samples show lower overall detection of DML versus FF; however, FFPE and FF DML compare favorably. These results support the emerging consensus that the 450K platform can be employed to investigate epigenetics in large sets of archival FFPE tissues.

  8. [MATCHE: Management Approach to Teaching Consumer and Homemaking Education.] Economically Depressed Areas Strand: Management. Module III-F-3: Marketing Practices in Relation to Low Income Clientele.

    ERIC Educational Resources Information Center

    California State Univ., Fresno. Dept. of Home Economics.

    This competency-based preservice home economics teacher education module on marketing practices in relation to low income clientele is the third in a set of three modules on management in economically depressed areas (EDAs). (This set is part of a larger set of sixty-seven modules on the Management Approach to Teaching Consumer and Homemaking…

  9. [MATCHE: Management Approach to Teaching Consumer and Homemaking Education.] Economically Depressed Areas Strand: Foods and Nutrition. Module III-C-1: Food Availability in Economically Depressed Areas.

    ERIC Educational Resources Information Center

    California State Univ., Fresno. Dept. of Home Economics.

    This competency-based preservice home economics teacher education module on food availability in economically depressed areas (EDA) is the first in a set of three modules on foods and nutrition in economically depressed areas. (This set is part of a larger set of sixty-seven modules on the Management Approach to Teaching Consumer and Homemaking…

  10. Applications of Small Area Estimation to Generalization with Subclassification by Propensity Scores

    ERIC Educational Resources Information Center

    Chan, Wendy

    2018-01-01

    Policymakers have grown increasingly interested in how experimental results may generalize to a larger population. However, recently developed propensity score-based methods are limited by small sample sizes, where the experimental study is generalized to a population that is at least 20 times larger. This is particularly problematic for methods…

  11. Crystal Face Distributions and Surface Site Densities of Two Synthetic Goethites: Implications for Adsorption Capacities as a Function of Particle Size.

    PubMed

    Livi, Kenneth J T; Villalobos, Mario; Leary, Rowan; Varela, Maria; Barnard, Jon; Villacís-García, Milton; Zanella, Rodolfo; Goodridge, Anna; Midgley, Paul

    2017-09-12

    Two synthetic goethites of varying crystal size distributions were analyzed by BET, conventional TEM, cryo-TEM, atomic resolution STEM and HRTEM, and electron tomography in order to determine the effects of crystal size, shape, and atomic-scale surface roughness on their adsorption capacities. The two samples were determined by BET to have very different site densities based on Cr(VI) adsorption experiments. Model specific surface areas generated from TEM observations showed that, based on size and shape alone, there should be little difference in their adsorption capacities. Electron tomography revealed that both samples crystallized with an asymmetric {101} tablet habit. STEM and HRTEM images showed a significant increase in atomic-scale surface roughness of the larger goethite. This difference in roughness was quantified based on measurements of the relative abundances of crystal faces {101} and {210} for the two goethites, and a reactive surface site density was calculated for each goethite. Singly coordinated sites on face {210} are 2.5 times more dense than on face {101}, and the larger goethite showed an average total of 36% {210} as compared to 14% for the smaller goethite. This difference explains the considerably larger adsorption capacity of the larger goethite vs the smaller sample and points toward the necessity of knowing the atomic-scale surface structure when predicting mineral adsorption processes.
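    The face-weighted argument above can be made concrete: total reactive site density is the face-fraction-weighted sum of per-face densities, with {210} sites 2.5 times as dense as {101} sites. The base {101} density below is a hypothetical placeholder; the 2.5 factor and the 36%/14% face fractions are from the record:

```python
# Face-weighted reactive site density: a fraction-weighted sum of
# per-face densities, with {210} sites 2.5x as dense as {101} sites.
# The base {101} density is a hypothetical placeholder.
def weighted_site_density(frac_210, d_101):
    d_210 = 2.5 * d_101              # {210} density relative to {101}
    frac_101 = 1.0 - frac_210
    return frac_210 * d_210 + frac_101 * d_101

d_101 = 3.0                                    # hypothetical sites/nm^2 on {101}
larger = weighted_site_density(0.36, d_101)    # larger goethite: 36% {210}
smaller = weighted_site_density(0.14, d_101)   # smaller goethite: 14% {210}
print(round(larger / smaller, 2))  # -> 1.27
```

Note that the ratio of the two weighted densities is independent of the placeholder d_101, so the ~27% difference follows from the face fractions and the 2.5 factor alone.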

  12. Body mass estimates of hominin fossils and the evolution of human body size.

    PubMed

    Grabowski, Mark; Hatala, Kevin G; Jungers, William L; Richmond, Brian G

    2015-08-01

    Body size directly influences an animal's place in the natural world, including its energy requirements, home range size, relative brain size, locomotion, diet, life history, and behavior. Thus, an understanding of the biology of extinct organisms, including species in our own lineage, requires accurate estimates of body size. Since the last major review of hominin body size based on postcranial morphology over 20 years ago, new fossils have been discovered, species attributions have been clarified, and methods improved. Here, we present the most comprehensive and thoroughly vetted set of individual fossil hominin body mass predictions to date, and estimation equations based on a large (n = 220) sample of modern humans of known body masses. We also present species averages based exclusively on fossils with reliable taxonomic attributions, estimates of species averages by sex, and a metric for levels of sexual dimorphism. Finally, we identify individual traits that appear to be the most reliable for mass estimation for each fossil species, for use when only one measurement is available for a fossil. Our results show that many early hominins were generally smaller-bodied than previously thought, an outcome likely due to larger estimates in previous studies resulting from the use of large-bodied modern human reference samples. Current evidence indicates that modern human-like large size first appeared by at least 3-3.5 Ma in some Australopithecus afarensis individuals. Our results challenge an evolutionary model arguing that body size increased from Australopithecus to early Homo. Instead, we show that there is no reliable evidence that the body size of non-erectus early Homo differed from that of australopiths, and confirm that Homo erectus evolved larger average body size than earlier hominins. Copyright © 2015 Elsevier Ltd. All rights reserved.
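    The estimation equations mentioned above are regressions of known body mass on skeletal measurements in a modern reference sample, later applied to fossil measurements. A minimal ordinary-least-squares sketch is below; the measurement choice (femoral head diameter) and all numbers are invented for illustration, not taken from the study.

```python
# Sketch of building a body-mass estimation equation from a modern
# reference sample of known body masses, then applying it to a fossil.
# All numbers are invented; the study's actual equations differ.
ref_diam = [40.0, 42.5, 45.0, 47.5, 50.0]   # femoral head diameter, mm
ref_mass = [52.0, 58.0, 63.0, 70.0, 75.0]   # known body mass, kg

n = len(ref_diam)
mx = sum(ref_diam) / n
my = sum(ref_mass) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(ref_diam, ref_mass))
         / sum((x - mx) ** 2 for x in ref_diam))
intercept = my - slope * mx

def predict_mass(diam_mm):
    """Predicted body mass (kg) for a fossil measurement."""
    return intercept + slope * diam_mm

print(predict_mass(38.0))   # hypothetical fossil femoral head diameter
```

    The abstract's point about reference samples falls out of this setup: fitting the line to a large-bodied reference sample shifts the intercept and slope upward, inflating mass predictions for small fossils.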

  13. Multi-year record of atmospheric and snow surface nitrate in the central Antarctic plateau.

    PubMed

    Traversi, R; Becagli, S; Brogioni, M; Caiazzo, L; Ciardini, V; Giardi, F; Legrand, M; Macelloni, G; Petkov, B; Preunkert, S; Scarchilli, C; Severi, M; Vitale, V; Udisti, R

    2017-04-01

Continuous year-round sampling of atmospheric aerosol and surface snow at high (daily to 4-day) resolution was carried out at Dome C from 2004-05 to 2013, and the nitrate records are presented here. Based on a larger statistical data set than previous studies, the results confirm that the nitrate seasonal pattern is characterized by maxima during austral summer for both aerosol and surface snow, occurring in phase with solar UV irradiance. This temporal pattern is likely due to a combination of nitrate sources and post-depositional processes whose intensity is usually enhanced during the summer. Moreover, a case study of the synoptic conditions during a major nitrate event showed the occurrence of a stratosphere-troposphere exchange. Sampling both matrices at the same time and at high resolution allowed the detection of a recurring lag of about one month between the summer maxima in snow and those in aerosol. This result can be explained by deposition and post-deposition processes occurring at the atmosphere-snow interface, such as a net uptake of gaseous nitric acid and a replenishment of the uppermost surface layers driven by a larger temperature gradient in summer. This hypothesis was preliminarily tested by comparison with surface-layer temperature data for the 2012-13 period. The analysis of the relationship between the nitrate concentration in the gas phase and total nitrate obtained at Dome C (2012-13) showed the major contribution of gaseous HNO3 to the total nitrate budget, suggesting the need to further investigate gas-to-particle conversion processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Gender differences in self-conscious emotional experience: a meta-analysis.

    PubMed

    Else-Quest, Nicole M; Higgins, Ashley; Allison, Carlie; Morton, Lindsay C

    2012-09-01

    The self-conscious emotions (SCE) of guilt, shame, pride, and embarrassment are moral emotions, which motivate adherence to social norms and personal standards and emerge in early childhood following the development of self-awareness. Gender stereotypes of emotion maintain that women experience more guilt, shame, and embarrassment but that men experience more pride. To estimate the magnitude of gender differences in SCE experience and to determine the circumstances under which these gender differences vary, we meta-analyzed 697 effect sizes representing 236,304 individual ratings of SCE states and traits from 382 journal articles, dissertations, and unpublished data sets. Guilt (d = -0.27) and shame (d = -0.29) displayed small gender differences, whereas embarrassment (d = -0.08), authentic pride (d = -0.01), and hubristic pride (d = 0.09) showed gender similarities. Similar to previous findings of ethnic variations in gender differences in other psychological variables, gender differences in shame and guilt were significant only for White samples or samples with unspecified ethnicity. We found larger gender gaps in shame with trait (vs. state) scales, and in guilt and shame with situation- and scenario-based (vs. adjective- and statement-based) items, consistent with predictions that such scales and items tend to tap into global, nonspecific assessments of the self and thus reflect self-stereotyping and gender role assimilative effects. Gender differences in SCE about domains such as the body, sex, and food or eating tended to be larger than gender differences in SCE about other domains. These findings contribute to the literature demonstrating that blanket stereotypes about women's greater emotionality are inaccurate. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
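    The d values reported above are standardized mean differences; a minimal sketch of Cohen's d with a pooled standard deviation follows. The group statistics are invented for illustration and are not the meta-analysis's data.

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference (Cohen's d) with pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented male (group 1) vs. female (group 2) guilt scores; a negative d
# means women scored higher, matching the sign convention above.
d = cohens_d(m1=3.1, s1=1.0, n1=50, m2=3.4, s2=1.1, n2=50)
print(round(d, 2))
```

    A meta-analysis then combines hundreds of such effect sizes, typically weighting each by the inverse of its sampling variance, which is how the 697 effects reported above were aggregated into per-emotion estimates.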

  15. Maternal asthma and transient tachypnea of the newborn.

    PubMed

    Demissie, K; Marcella, S W; Breckenridge, M B; Rhoads, G G

    1998-07-01

To examine the relationship between transient tachypnea of the newborn and asthma complicating pregnancy, we conducted a historical cohort analysis of singleton live deliveries in New Jersey hospitals during 1989 to 1992 (n = 447 963). Mother-infant dyads were identified from linked birth certificate and maternal and infant hospital claims data. Women with an International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis code (493) for asthma (n = 2289) were compared with a four-fold larger randomly selected control sample (n = 9156) from the remaining pool of women. The outcome measure was transient tachypnea of the newborn. In the overall sample, after controlling for the confounding effects of important variables, infants of asthmatic mothers were more likely [odds ratio (OR), 1.79; 95% confidence interval (CI), 1.35-2.37] than infants of control mothers to exhibit transient tachypnea of the newborn. A stratified analysis by gestational age and sex revealed larger and statistically significant associations in term infants (OR, 2.02; 95% CI, 1.42-2.87) as opposed to preterm infants (OR, 1.51; 95% CI, 0.94-2.43) and in male infants (OR, 1.91; 95% CI, 1.35-2.71) as opposed to female infants (OR, 1.51; 95% CI, 0.92-2.47). On the other hand, after adjusting for important confounding variables, respiratory distress syndrome and maternal asthma were not found to be associated (OR, 1.14; 95% CI, 0.79-1.64). The results of this study provide evidence that maternal asthma is a risk factor for transient tachypnea of the newborn, with apparent differences in this association by gestational age and sex. The mechanism for this association remains to be determined.
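    The ORs above are adjusted estimates from regression models; for orientation, a crude odds ratio with a Wald 95% confidence interval can be computed directly from a 2x2 table. The counts below are invented for illustration, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Invented counts: TTN among infants of asthmatic (exposed) vs.
# control (unexposed) mothers.
or_, lower, upper = odds_ratio_ci(a=60, b=2229, c=140, d=9016)
print(f"OR {or_:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```

    A CI that excludes 1 (as for term and male infants above) indicates a statistically significant association at the 5% level; the preterm and female strata's CIs cross 1.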

  16. Mass calibration and cosmological analysis of the SPT-SZ galaxy cluster sample using velocity dispersion σ v and x-ray Y X measurements

    DOE PAGES

    Bocquet, S.; Saro, A.; Mohr, J. J.; ...

    2015-01-30

Here, we present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg2 of the survey along with 63 velocity dispersion (σ v) and 16 X-ray Y X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ v and Y X are consistent at the 0.6σ level, with the σ v calibration preferring ~16% higher masses. We use the full SPTCL data set (SZ clusters+σ v+Y X) to measure σ8(Ωm/0.27)^0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is Σm ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger Σm ν further reconciles the results. When we combine the SPTCL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y X calibration and 0.8σ higher than the σ v calibration. Given the scale of these shifts (~44% and ~23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ωm = 0.299 ± 0.009 and σ8 = 0.829 ± 0.011. Within a νCDM model we find Σm ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = -1.007 ± 0.065, demonstrating that the expansion and the growth histories are consistent with a ΛCDM universe (γ = 0.55; w = -1).

  17. Mass Calibration and Cosmological Analysis of the SPT-SZ Galaxy Cluster Sample Using Velocity Dispersion σ v and X-Ray Y X Measurements

    NASA Astrophysics Data System (ADS)

    Bocquet, S.; Saro, A.; Mohr, J. J.; Aird, K. A.; Ashby, M. L. N.; Bautz, M.; Bayliss, M.; Bazin, G.; Benson, B. A.; Bleem, L. E.; Brodwin, M.; Carlstrom, J. E.; Chang, C. L.; Chiu, I.; Cho, H. M.; Clocchiatti, A.; Crawford, T. M.; Crites, A. T.; Desai, S.; de Haan, T.; Dietrich, J. P.; Dobbs, M. A.; Foley, R. J.; Forman, W. R.; Gangkofner, D.; George, E. M.; Gladders, M. D.; Gonzalez, A. H.; Halverson, N. W.; Hennig, C.; Hlavacek-Larrondo, J.; Holder, G. P.; Holzapfel, W. L.; Hrubes, J. D.; Jones, C.; Keisler, R.; Knox, L.; Lee, A. T.; Leitch, E. M.; Liu, J.; Lueker, M.; Luong-Van, D.; Marrone, D. P.; McDonald, M.; McMahon, J. J.; Meyer, S. S.; Mocanu, L.; Murray, S. S.; Padin, S.; Pryke, C.; Reichardt, C. L.; Rest, A.; Ruel, J.; Ruhl, J. E.; Saliwanchik, B. R.; Sayre, J. T.; Schaffer, K. K.; Shirokoff, E.; Spieler, H. G.; Stalder, B.; Stanford, S. A.; Staniszewski, Z.; Stark, A. A.; Story, K.; Stubbs, C. W.; Vanderlinde, K.; Vieira, J. D.; Vikhlinin, A.; Williamson, R.; Zahn, O.; Zenteno, A.

    2015-02-01

We present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg2 of the survey along with 63 velocity dispersion (σ v ) and 16 X-ray Y X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ v and Y X are consistent at the 0.6σ level, with the σ v calibration preferring ~16% higher masses. We use the full SPTCL data set (SZ clusters+σ v +Y X) to measure σ8(Ωm/0.27)^0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is ∑m ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger ∑m ν further reconciles the results. When we combine the SPTCL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y X calibration and 0.8σ higher than the σ v calibration. Given the scale of these shifts (~44% and ~23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ωm = 0.299 ± 0.009 and σ8 = 0.829 ± 0.011. Within a νCDM model we find ∑m ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. 
Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = -1.007 ± 0.065, demonstrating that the expansion and the growth histories are consistent with a ΛCDM universe (γ = 0.55; w = -1).

  18. Designing a national soil erosion monitoring network for England and Wales

    NASA Astrophysics Data System (ADS)

    Lark, Murray; Rawlins, Barry; Anderson, Karen; Evans, Martin; Farrow, Luke; Glendell, Miriam; James, Mike; Rickson, Jane; Quine, Timothy; Quinton, John; Brazier, Richard

    2014-05-01

Although soil erosion is recognised as a significant threat to sustainable land use and may be a priority for action in any forthcoming EU Soil Framework Directive, those responsible for setting national policy with respect to erosion are constrained by a lack of robust, representative data at large spatial scales. This reflects the process-orientated nature of much soil erosion research. Recognising this limitation, the UK Department for Environment, Food and Rural Affairs (Defra) established a project to pilot a cost-effective framework for monitoring of soil erosion in England and Wales (E&W). The pilot will compare different soil erosion monitoring methods at a site scale and provide statistical information for the final design of the full national monitoring network, which will: (i) provide unbiased estimates of the spatial mean of the soil erosion rate across E&W (tonnes ha-1 yr-1) for each of three land-use classes (arable and horticultural; grassland; upland and semi-natural habitats); and (ii) quantify the uncertainty of these estimates with confidence intervals. Probability (design-based) sampling provides the most efficient unbiased estimates of spatial means. In this study, a 16 hectare area (a square of 400 x 400 m) positioned at the centre of a 1-km grid cell, selected at random from mapped land use across E&W, provided the sampling support for measurement of erosion rates, with at least 94% of the support area corresponding to the target land use classes. Very small or zero erosion rates likely to be encountered at many sites reduce the sampling efficiency and make it difficult to compare different methods of soil erosion monitoring. Therefore, to increase the proportion of samples with larger erosion rates without biasing our estimates, we increased the inclusion probability density in areas where the erosion rate is likely to be large by using stratified random sampling. First, each sampling domain (land use class in E&W) was divided into strata, e.g. two sub-domains within which, respectively, small or no erosion rates, and moderate or larger erosion rates are expected. Each stratum was then sampled independently and at random. The sample density need not be equal in the two strata, but is known and is accounted for in the estimation of the mean and its standard error. To divide the domains into strata we used information on slope angle, previous interpretation of the erosion susceptibility of the soil associations that correspond to the soil map of E&W at 1:250 000 (Soil Survey of England and Wales, 1983), and visual interpretation of evidence of erosion from aerial photography. While each domain could be stratified on the basis of the first two criteria, air photo interpretation across the whole country was not feasible. For this reason we used a two-phase random sampling for stratification (TPRS) design (de Gruijter et al., 2006). First, we formed an initial random sample of 1-km grid cells from the target domain. Second, each cell was allocated to a stratum on the basis of the three criteria. A subset of the selected cells from each stratum was then selected for field survey at random, with a specified sampling density for each stratum so as to increase the proportion of cells where moderate or larger erosion rates were expected. Once measurements of erosion have been made, an estimate of the spatial mean of the erosion rate over the target domain, its standard error and associated uncertainty can be calculated by an expression which accounts for the estimated proportions of the two strata within the initial random sample. de Gruijter, J.J., Brus, D.J., Bierkens, M.F.P. & Knotters, M. 2006. Sampling for Natural Resource Monitoring. Springer, Berlin. Soil Survey of England and Wales. 1983. National Soil Map NATMAP Vector 1:250,000. National Soil Research Institute, Cranfield University.
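    The stratified estimator described in this abstract can be sketched as follows. The stratum weights and erosion rates below are invented illustrations, not survey data; the key point is that oversampling the high-erosion stratum does not bias the mean, because each stratum's sample mean is weighted by its known areal proportion, not its sample size.

```python
import statistics

def stratified_mean_se(strata):
    """strata: list of (weight, measurements); weights are the known
    areal proportions of the strata and must sum to 1. Returns the
    design-unbiased estimate of the spatial mean and its standard error."""
    mean = sum(w * statistics.fmean(y) for w, y in strata)
    # Variance of the stratified mean: sum of weighted within-stratum
    # sampling variances (simple random sampling within each stratum).
    var = sum(w**2 * statistics.variance(y) / len(y) for w, y in strata)
    return mean, var ** 0.5

# Illustrative erosion rates (tonnes ha-1 yr-1), not survey data:
low_stratum = (0.8, [0.0, 0.1, 0.0, 0.2, 0.1])   # small/no erosion expected
high_stratum = (0.2, [1.5, 2.0, 0.8, 1.2])       # moderate/larger expected
mean, se = stratified_mean_se([low_stratum, high_stratum])
print(mean, se)
```

    In the two-phase (TPRS) design, the stratum weights themselves are estimated from the first-phase sample, which adds a further term to the standard error; the sketch above covers only the single-phase case.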

  19. Impacts of uncertainties in European gridded precipitation observations on regional climate analysis

    PubMed Central

Prein, Andreas F; Gobiet, Andreas

    2016-01-01

Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio‐temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan‐European data sets and a set that combines eight very high‐resolution station‐based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post‐processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small‐scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate‐mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments. PMID:28111497

  20. Impacts of uncertainties in European gridded precipitation observations on regional climate analysis.

    PubMed

    Prein, Andreas F; Gobiet, Andreas

    2017-01-01

    Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio-temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan-European data sets and a set that combines eight very high-resolution station-based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post-processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small-scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate-mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments.

  1. A comparison of effectiveness of hepatitis B screening and linkage to care among foreign-born populations in clinical and nonclinical settings.

    PubMed

    Chandrasekar, Edwin; Kaur, Ravneet; Song, Sharon; Kim, Karen E

    2015-01-01

Hepatitis B (HBV) is an urgent, unmet public health issue that affects Asian Americans disproportionately. Of the estimated 1.2 million living with chronic hepatitis B in the USA, more than 50% are of Asian ethnicity, despite the fact that Asian Americans constitute less than 6% of the total US population. The Centers for Disease Control and Prevention recommends HBV screening of persons who are at high risk for the disease. Yet, large numbers of Asian Americans have not been diagnosed or tested, in large part because of perceived cultural and linguistic barriers. Primary care physicians are at the front line of the US health care system, and are in a position to identify individuals and families at risk. Clinical settings integrated into Asian American communities, where physicians are on staff and wellness care is emphasized, can provide testing for HBV. In this study, the Asian Health Coalition and its community partners conducted HBV screenings and follow-up linkage to care in both clinical and nonclinical settings. The nonclinical settings included health fair events organized by churches and social services agencies, and were able to reach large numbers of individuals. Twice as many Asian Americans were screened in nonclinical settings than in health clinics. Chi-square and independent samples t-tests showed that participants from the two settings did not differ in test positivity, sex, insurance status, years of residence in the USA, or education. Additionally, the same proportion of individuals found to be infected in the two groups underwent successful linkage to care. Nonclinical settings were as effective as clinical settings in screening for HBV, as well as in making treatment options available to those who tested positive; demographic factors did not confound the similarities. Further research is needed to evaluate if linkage to care can be accomplished equally efficiently on a larger scale.

  2. Climate Change Education in Informal Settings: Using Boundary Objects to Frame Network Dissemination

    ERIC Educational Resources Information Center

    Steiner, Mary Ann

    2016-01-01

    This study of climate change education dissemination takes place in the context of a larger project where institutions in four cities worked together to develop a linked set of informal learning experiences about climate change. Each city developed an organizational network to explore new ways to connect urban audiences with climate change…

  3. Aquatic plants: Test species sensitivity and minimum data requirement evaluations for chemical risk assessments and aquatic life criteria development for the USA

    EPA Science Inventory

    Phytotoxicity results from the publicly-available ECOTOX database were summarized for 20 chemicals and 188 aquatic plants to determine species sensitivities and the ability of a species-limited toxicity data set to serve as a surrogate for a larger data set. The lowest effect con...

  4. Understanding cross sample talk as a result of triboelectric charging on future mars missions

    NASA Astrophysics Data System (ADS)

    Beegle, L. W.; Anderson, R. C.; Fleming, G.

    2009-12-01

Proper scientific analysis requires that the material collected and analyzed by in-situ instruments be as close as possible (chemically and mineralogically) to the initial, unaltered surface material prior to its collection and delivery. However, this is not always possible for automated robotic in-situ analysis. It is therefore vital to understand how the sample has been changed or altered prior to analysis so that the analysis can be put in the proper context. We have examined the transport of fines when transferred under ambient Martian conditions in hardware analogous to the Mars Science Laboratory (MSL) sample acquisition flight hardware. We will discuss the amount of cross-sample contamination when materials of different mineralogies are transferred under Martian environmental conditions. Similar issues have been identified as problems within the terrestrial mining, textile, and pharmaceutical research communities that may alter or change the chemical and mineralogical compositions of samples before they are delivered to the MSL Chemistry and Mineralogy (CheMin) and Sample Analysis at Mars (SAM) analytical instruments. This cross-sample contamination will affect the overall quality of the science results, and each of these processes needs to be examined and understood prior to MSL landing on the surface of Mars. There are two forms of triboelectric charging that have been observed to occur on Earth: 1) when dissimilar materials come in contact, one material charges positive and the other negative, depending on their relative positions in the triboelectric series and the work functions of the materials; and 2) when two similar materials come in contact, the larger particles can transfer one of their high-energy electrons to a smaller particle. During the collisions, the transferred electron tends to lose energy, and the charge tends not to move from the smaller particle back to the larger particle in further collisions. 
This transfer effect can occur multiple times, resulting in multiple charge states on the particles. While individual particles can carry charges of different sign, the bulk material can become charged through contact between the different mineral constituents in the sample and through contact with the walls. This results in a very complex system that has yet to be fully understood and characterized. We have begun to develop and characterize a data set which will enable scientists to better relate arm- and mast-mounted measurements made on the surface by the Alpha Particle X-ray Spectrometer (APXS), the Mars Hand Lens Imager (MAHLI), the Chemistry and Camera (ChemCam) and the Mast Camera (MastCam) instruments to the measurements made by the two onboard analytical instruments, CheMin and SAM, after a sample is acquired, processed, and delivered.

  5. Reproducibility of preclinical animal research improves with heterogeneity of study samples

    PubMed Central

    Vogt, Lucile; Sena, Emily S.; Würbel, Hanno

    2018-01-01

    Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495
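    The mechanism described above can be illustrated with a toy simulation (all parameters invented): each lab draws its own true effect, so a single highly standardized lab estimates its local effect precisely but the population effect poorly, while pooling a few labs at the same total sample size recovers the population effect better.

```python
# Toy simulation: between-lab heterogeneity makes single-lab estimates of
# the population effect inaccurate regardless of within-lab sample size.
# All parameter values are illustrative, not from the cited study.
import random
random.seed(1)

TRUE_EFFECT = 0.5   # population-level effect
LAB_SD = 0.3        # between-lab heterogeneity in the true effect
NOISE_SD = 0.5      # within-lab measurement noise

def study(n_labs, n_per_lab):
    """Mean estimated effect across labs in one (multi-)lab study."""
    estimates = []
    for _ in range(n_labs):
        lab_effect = random.gauss(TRUE_EFFECT, LAB_SD)
        obs = [random.gauss(lab_effect, NOISE_SD) for _ in range(n_per_lab)]
        estimates.append(sum(obs) / n_per_lab)
    return sum(estimates) / n_labs

# Same total sample size (40 animals): 1 lab x 40 vs. 4 labs x 10.
single = [abs(study(1, 40) - TRUE_EFFECT) for _ in range(500)]
multi = [abs(study(4, 10) - TRUE_EFFECT) for _ in range(500)]
print(sum(single) / 500, sum(multi) / 500)   # mean absolute error per design
```

    The multi-lab design averages over the between-lab variation instead of conditioning on one lab's idiosyncrasies, which is why its error is smaller without any increase in the number of animals.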

  6. Simulation of Particle Size Effect on Dynamic Properties and Fracture of PTFE-W-Al Composites

    NASA Astrophysics Data System (ADS)

    Herbold, Eric; Cai, Jing; Benson, David; Nesterenko, Vitali

    2007-06-01

Recent investigations of the dynamic compressive strength of cold isostatically pressed (CIP) composites of polytetrafluoroethylene (PTFE), tungsten and aluminum powders show significant differences depending on the size of the metallic particles. PTFE and aluminum mixtures are known to be energetic under dynamic and thermal loading. The addition of tungsten increases the density and overall strength of the sample. Multi-material Eulerian and arbitrary Lagrangian-Eulerian methods were used for the investigation because of the complexity of the microstructure, the relatively large deformations, and their ability to handle the formation of free surfaces in a natural manner. The calculations indicate that the observed dependence of sample strength on particle size is due to the formation of force chains under dynamic loading in samples with small particle sizes, even at larger porosity, in comparison with samples with larger grain sizes and higher density.

  7. From the point-of-purchase perspective: a qualitative study of the feasibility of interventions aimed at portion-size.

    PubMed

    Vermeer, Willemijn M; Steenhuis, Ingrid H M; Seidell, Jacob C

    2009-04-01

    Food portion-sizes might be a promising starting point for interventions targeting obesity. The purpose of this qualitative study was to assess how representatives of point-of-purchase settings perceived the feasibility of interventions aimed at portion-size. Semi-structured interviews were conducted with 22 representatives of various point-of-purchase settings. Constructs derived from the diffusion of innovations theory were incorporated into the interview guide. Each interview was recorded and transcribed verbatim. Data were coded and analysed with Atlas.ti 5.2 using the framework approach. According to the participants, offering a larger variety of portion-sizes had the most relative advantages, and reducing portions was the most disadvantageous. The participants also considered portion-size reduction and linear pricing of portion-sizes to be risky. Lastly, a larger variety of portion-sizes, pricing strategies and portion-size labelling were seen as the most complex interventions. In general, participants considered offering a larger variety of portion-sizes, portion-size labelling and, to a lesser extent, pricing strategies with respect to portion-sizes as most feasible to implement. Interventions aimed at portion-size were seen as innovative by most participants. Developing adequate communication strategies about portion-size interventions with both decision-makers in point-of-purchase settings and the general public is crucial for successful implementation.

  8. Intradialytic Laughter Yoga therapy for haemodialysis patients: a pre-post intervention feasibility study.

    PubMed

    Bennett, Paul N; Parsons, Trisha; Ben-Moshe, Ros; Neal, Merv; Weinberg, Melissa K; Gilbert, Karen; Ockerby, Cherene; Rawson, Helen; Herbu, Corinne; Hutchinson, Alison M

    2015-06-09

    Laughter Yoga consists of physical exercise, relaxation techniques and simulated vigorous laughter. It has been associated with physical and psychological benefits for people in diverse clinical and non-clinical settings, but has not yet been tested in a haemodialysis setting. The study had three aims: 1) to examine the feasibility of conducting Laughter Yoga for patients with end stage kidney disease in a dialysis setting; 2) to explore the psychological and physiological impact of Laughter Yoga for these patients; and 3) to estimate the sample size required for future research. Pre/post intervention feasibility study. Eighteen participants were recruited into the study and Laughter Yoga therapists provided a four-week intradialytic program (30-min intervention three times per week). Primary outcomes were psychological items measured at the first and last Laughter Yoga session, including: quality of life; subjective wellbeing; mood; optimism; control; self-esteem; depression, anxiety and stress. Secondary outcomes were: blood pressure, intradialytic hypotensive episodes and lung function (forced expiratory volume). Dialysis nurses exposed to the intervention completed a Laughter Yoga attitudes and perceptions survey (n = 11). Data were analysed using IBM SPSS Statistics v22, including descriptive and inferential statistics, and sample size estimates were calculated using G*Power. One participant withdrew from the study during the first week for medical reasons unrelated to the study (94% retention rate). There were non-significant increases in happiness, mood, and optimism and a decrease in stress. Episodes of intradialytic hypotension decreased from 19 before and 19 during Laughter Yoga to 4 post Laughter Yoga. There was no change in lung function or blood pressure. All nurses agreed or strongly agreed that Laughter Yoga had a positive impact on patients' mood, that it was a feasible intervention, and that they would recommend Laughter Yoga to their patients.
    Sample size calculations for future research indicated that a minimum of 207 participants would be required to provide sufficient power to detect change in key psychological variables. This study provides evidence that Laughter Yoga is a safe, low-intensity form of intradialytic physical activity that can be successfully implemented for patients in dialysis settings. Larger studies are required, however, to determine the effect of Laughter Yoga on key psychological variables. Australian New Zealand Clinical Trials Registry: ACTRN12614001130651. Registered 23 October 2014.
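    The abstract reports a G*Power calculation requiring at least 207 participants, but does not state the effect size or test family used. As a hedged illustration of how such minimums arise, the sketch below uses the generic normal-approximation formula for a two-group comparison; the effect sizes are purely illustrative, not the study's.

```python
from math import ceil
from statistics import NormalDist  # stdlib inverse normal CDF

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per group for a two-sided two-sample comparison,
    via the normal approximation n = ((z_{1-a/2} + z_{1-b}) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.960 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # 0.842 for 80% power
    return ceil(((z_alpha + z_beta) / effect_size) ** 2)

# Small standardized effects, typical of psychological outcomes, push the
# required sample into the hundreds:
print(n_per_group(0.2))  # 197
print(n_per_group(0.5))  # 32
```

    Exact tools such as G*Power use the noncentral t distribution, so their answers run slightly above this normal approximation.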

  9. Towards automation of palynology 1: analysis of pollen shape and ornamentation using simple geometric measures, derived from scanning electron microscope images

    NASA Astrophysics Data System (ADS)

    Treloar, W. J.; Taylor, G. E.; Flenley, J. R.

    2004-12-01

    This is the first of a series of papers on the theme of automated pollen analysis. The automation of pollen analysis could bring numerous advantages for the reconstruction of past environments, making larger data sets practical and offering objectivity and fine-resolution sampling. There are also applications in apiculture and medicine. Previous work on the classification of pollen using texture measures has been successful with small numbers of pollen taxa. However, as the number of pollen taxa to be identified increases, more features may be required to achieve a successful classification. This paper describes the use of simple geometric measures to augment the texture measures. The feasibility of this new approach is tested using scanning electron microscope (SEM) images of 12 taxa of fresh pollen taken from reference material collected on Henderson Island, Polynesia. Pollen images were captured directly from a SEM connected to a PC. A threshold grey-level was set and binary images were then generated. Pollen edges were then located and the boundaries were traced using a chain coding system. A number of simple geometric variables were calculated directly from the chain code of the pollen and a variable selection procedure was used to choose the optimal subset to be used for classification. The efficiency of these variables was tested using a leave-one-out classification procedure. The system successfully split the original 12-taxa sample into five sub-samples containing no more than six pollen taxa each. The further subdivision of echinate pollen types was then attempted with a subset of four pollen taxa. A set of difference codes was constructed for a range of displacements along the chain code. From these difference codes, probability variables were calculated. A variable selection procedure was again used to choose the optimal subset of probabilities that may be used for classification.
    The efficiency of these variables was again tested using a leave-one-out classification procedure. The proportion of correctly classified pollen ranged from 81% to 100%, depending on the subset of variables used. The best set of variables had an overall classification rate averaging about 95%. This is comparable with the classification rates from the earlier texture analysis work for other types of pollen.
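    The boundary tracing described above yields a chain code: a sequence of directions from each boundary pixel to the next. Assuming the standard Freeman 8-direction scheme (the paper does not specify its exact variant), a minimal sketch of deriving simple geometric variables such as perimeter and closure from a chain code:

```python
from math import sqrt

# Freeman 8-direction chain code: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
STEPS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_metrics(code):
    """Return (perimeter, is_closed) for a Freeman chain code.
    Even codes are unit steps; odd (diagonal) codes contribute sqrt(2)."""
    x = y = 0
    perimeter = 0.0
    for c in code:
        dx, dy = STEPS[c]
        x, y = x + dx, y + dy
        perimeter += sqrt(2) if c % 2 else 1.0
    return perimeter, (x, y) == (0, 0)

# Boundary of a 3x3 square: three steps each of E, N, W, S
square = [0, 0, 0, 2, 2, 2, 4, 4, 4, 6, 6, 6]
perim, closed = chain_metrics(square)
print(perim, closed)  # 12.0 True
```

    The difference codes used for the echinate taxa would then be of the form (code[i+k] - code[i]) mod 8 for a displacement k along the chain; the exact probability variables derived from them are not specified in the abstract.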

  10. A Two-Stage Meta-Analysis Identifies Several New Loci for Parkinson's Disease

    PubMed Central

    2011-01-01

    A previous genome-wide association (GWA) meta-analysis of 12,386 PD cases and 21,026 controls conducted by the International Parkinson's Disease Genomics Consortium (IPDGC) discovered or confirmed 11 Parkinson's disease (PD) loci. This first analysis of the two-stage IPDGC study focused on the set of loci that passed genome-wide significance in the first stage GWA scan. However, the second stage genotyping array, the ImmunoChip, included a larger set of 1,920 SNPs selected on the basis of the GWA analysis. Here, we analyzed this set of 1,920 SNPs, and we identified five additional PD risk loci (combined p < 5×10^-10, PARK16/1q32, STX1B/16p11, FGF20/8p22, STBD1/4q21, and GPNMB/7p15). Two of these five loci have been suggested by previous association studies (PARK16/1q32, FGF20/8p22), and this study provides further support for these findings. Using a dataset of post-mortem brain samples assayed for gene expression (n = 399) and methylation (n = 292), we identified methylation and expression changes associated with PD risk variants in the PARK16/1q32, GPNMB/7p15, and STX1B/16p11 loci, hence suggesting potential molecular mechanisms and candidate genes at these risk loci. PMID:21738488

  11. A global analysis of Y-chromosomal haplotype diversity for 23 STR loci.

    PubMed

    Purps, Josephine; Siegert, Sabine; Willuweit, Sascha; Nagy, Marion; Alves, Cíntia; Salazar, Renato; Angustia, Sheila M T; Santos, Lorna H; Anslinger, Katja; Bayer, Birgit; Ayub, Qasim; Wei, Wei; Xue, Yali; Tyler-Smith, Chris; Bafalluy, Miriam Baeta; Martínez-Jarreta, Begoña; Egyed, Balazs; Balitzki, Beate; Tschumi, Sibylle; Ballard, David; Court, Denise Syndercombe; Barrantes, Xinia; Bäßler, Gerhard; Wiest, Tina; Berger, Burkhard; Niederstätter, Harald; Parson, Walther; Davis, Carey; Budowle, Bruce; Burri, Helen; Borer, Urs; Koller, Christoph; Carvalho, Elizeu F; Domingues, Patricia M; Chamoun, Wafaa Takash; Coble, Michael D; Hill, Carolyn R; Corach, Daniel; Caputo, Mariela; D'Amato, Maria E; Davison, Sean; Decorte, Ronny; Larmuseau, Maarten H D; Ottoni, Claudio; Rickards, Olga; Lu, Di; Jiang, Chengtao; Dobosz, Tadeusz; Jonkisz, Anna; Frank, William E; Furac, Ivana; Gehrig, Christian; Castella, Vincent; Grskovic, Branka; Haas, Cordula; Wobst, Jana; Hadzic, Gavrilo; Drobnic, Katja; Honda, Katsuya; Hou, Yiping; Zhou, Di; Li, Yan; Hu, Shengping; Chen, Shenglan; Immel, Uta-Dorothee; Lessig, Rüdiger; Jakovski, Zlatko; Ilievska, Tanja; Klann, Anja E; García, Cristina Cano; de Knijff, Peter; Kraaijenbrink, Thirsa; Kondili, Aikaterini; Miniati, Penelope; Vouropoulou, Maria; Kovacevic, Lejla; Marjanovic, Damir; Lindner, Iris; Mansour, Issam; Al-Azem, Mouayyad; Andari, Ansar El; Marino, Miguel; Furfuro, Sandra; Locarno, Laura; Martín, Pablo; Luque, Gracia M; Alonso, Antonio; Miranda, Luís Souto; Moreira, Helena; Mizuno, Natsuko; Iwashima, Yasuki; Neto, Rodrigo S Moura; Nogueira, Tatiana L S; Silva, Rosane; Nastainczyk-Wulf, Marina; Edelmann, Jeanett; Kohl, Michael; Nie, Shengjie; Wang, Xianping; Cheng, Baowen; Núñez, Carolina; Pancorbo, Marian Martínez de; Olofsson, Jill K; Morling, Niels; Onofri, Valerio; Tagliabracci, Adriano; Pamjav, Horolma; Volgyi, Antonia; Barany, Gusztav; Pawlowski, Ryszard; Maciejewska, Agnieszka; Pelotti, Susi; Pepinski, Witold; Abreu-Glowacka, 
Monica; Phillips, Christopher; Cárdenas, Jorge; Rey-Gonzalez, Danel; Salas, Antonio; Brisighelli, Francesca; Capelli, Cristian; Toscanini, Ulises; Piccinini, Andrea; Piglionica, Marilidia; Baldassarra, Stefania L; Ploski, Rafal; Konarzewska, Magdalena; Jastrzebska, Emila; Robino, Carlo; Sajantila, Antti; Palo, Jukka U; Guevara, Evelyn; Salvador, Jazelyn; Ungria, Maria Corazon De; Rodriguez, Jae Joseph Russell; Schmidt, Ulrike; Schlauderer, Nicola; Saukko, Pekka; Schneider, Peter M; Sirker, Miriam; Shin, Kyoung-Jin; Oh, Yu Na; Skitsa, Iulia; Ampati, Alexandra; Smith, Tobi-Gail; Calvit, Lina Solis de; Stenzl, Vlastimil; Capal, Thomas; Tillmar, Andreas; Nilsson, Helena; Turrina, Stefania; De Leo, Domenico; Verzeletti, Andrea; Cortellini, Venusia; Wetton, Jon H; Gwynne, Gareth M; Jobling, Mark A; Whittle, Martin R; Sumita, Denilce R; Wolańska-Nowak, Paulina; Yong, Rita Y Y; Krawczak, Michael; Nothnagel, Michael; Roewer, Lutz

    2014-09-01

    In a worldwide collaborative effort, 19,630 Y-chromosomes were sampled from 129 different populations in 51 countries. These chromosomes were typed for 23 short-tandem repeat (STR) loci (DYS19, DYS389I, DYS389II, DYS390, DYS391, DYS392, DYS393, DYS385ab, DYS437, DYS438, DYS439, DYS448, DYS456, DYS458, DYS635, GATAH4, DYS481, DYS533, DYS549, DYS570, DYS576, and DYS643) using the PowerPlex Y23 System (PPY23, Promega Corporation, Madison, WI). Locus-specific allelic spectra of these markers were determined and a consistently high level of allelic diversity was observed. A considerable number of null, duplicate and off-ladder alleles were revealed. Standard single-locus and haplotype-based parameters were calculated and compared between subsets of Y-STR markers established for forensic casework. The PPY23 marker set provides substantially stronger discriminatory power than other available kits but at the same time reveals the same general patterns of population structure as other marker sets. A strong correlation was observed between the number of Y-STRs included in a marker set and some of the forensic parameters under study. Interestingly, a weak but consistent trend toward smaller genetic distances resulting from larger numbers of markers became apparent. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
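    The haplotype-based forensic parameters mentioned above are simple functions of haplotype counts. As an illustration with toy data (not from the study), the standard formulas for Nei's haplotype diversity and discrimination capacity can be computed as:

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    """Nei's unbiased haplotype (gene) diversity:
    h = n/(n-1) * (1 - sum_i p_i^2), with p_i the haplotype frequencies."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    sum_p2 = sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * (1 - sum_p2)

def discrimination_capacity(haplotypes):
    """Fraction of distinct haplotypes observed in the sample."""
    return len(set(haplotypes)) / len(haplotypes)

# Toy sample of 6 individuals carrying 3 distinct haplotypes
sample = ["H1", "H1", "H1", "H2", "H2", "H3"]
print(round(haplotype_diversity(sample), 4))      # 0.7333
print(round(discrimination_capacity(sample), 2))  # 0.5
```

    Adding loci to a marker set splits shared haplotypes apart, which is why larger sets such as PPY23 push both quantities toward 1 in real population samples.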

  12. Sampled-data chain-observer design for a class of delayed nonlinear systems

    NASA Astrophysics Data System (ADS)

    Kahelras, M.; Ahmed-Ali, T.; Giri, F.; Lamnabhi-Lagarrigue, F.

    2018-05-01

    The problem of observer design is addressed for a class of triangular nonlinear systems with not necessarily small delay and sampled output measurements. A further difficulty is that the system state matrix depends on the un-delayed output signal, which is not accessible to measurement, making existing observers inapplicable. A new chain observer, composed of m elementary observers in series, is designed to compensate for output sampling and arbitrarily large delays. The larger the time delay, the larger the number m. Each elementary observer includes an output predictor that is conceived to compensate for the effects of output sampling and a fractional delay. The predictors are defined by first-order ordinary differential equations (ODEs), much simpler than those of existing predictors, which involve both output and state predictors. Using a small-gain type analysis, sufficient conditions for the observer to be exponentially convergent are established in terms of the minimal number m of elementary observers and the maximum sampling interval.
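    The chain idea can be illustrated on a toy scalar linear system dx/dt = a·x. The paper treats triangular nonlinear systems with sampled outputs; this sketch only shows the structural point that a large prediction horizon is split across m cascaded predictors, each advancing the estimate by a fractional delay:

```python
from math import exp

def euler_advance(x, a, horizon, dt=1e-3):
    """Integrate dx/dt = a*x forward by `horizon` with explicit Euler."""
    t = 0.0
    while t < horizon:
        step = min(dt, horizon - t)
        x += step * a * x
        t += step
    return x

def chain_predict(x_delayed, a, delay, m):
    """Chain of m elementary predictors, each compensating a fractional
    delay of delay/m, mimicking the chain-observer construction."""
    x = x_delayed
    for _ in range(m):
        x = euler_advance(x, a, delay / m)
    return x

a, delay = 0.5, 2.0
x_delayed = 1.0                      # delayed measurement x(t - delay)
pred = chain_predict(x_delayed, a, delay, m=4)
exact = x_delayed * exp(a * delay)   # true current state for this toy system
print(pred, exact)
```

    For this linear scalar toy, the split gives the same answer as one long integration; the benefit in the actual design is that each elementary predictor only has to converge over the fractional delay delay/m, which is what permits arbitrarily large total delays.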

  13. Cross Contamination: Are Hospital Gloves Reservoirs for Nosocomial Infections?

    PubMed

    Moran, Vicki; Heuertz, Rita

    2017-01-01

    Use of disposable nonsterile gloves in the hospital setting is second only to proper hand washing in reducing contamination during patient contact. Because proper hand washing is not consistently practiced, added emphasis on glove use is warranted. There is a growing body of evidence that glove boxes and dispensers available to healthcare workers are contaminated by daily exposure to environmental organisms. This finding, in conjunction with new and emerging antibiotic-resistant bacteria, poses a threat to patients and healthcare workers alike. A newly designed glove dispenser may reduce contamination of disposable gloves. The authors investigated contamination of nonsterile examination gloves in an Emergency Department setting according to the type of dispenser used to access gloves. The number of bacterial colonies differed significantly by dispenser type: gloves from the downward-facing dispenser carried fewer bacteria. There was no statistically significant difference in the number of contaminated gloves between the two dispenser types. The study demonstrated that contamination of disposable gloves existed. Additional research using a larger sample size would validate a difference in the contamination of disposable gloves using outward- or downward-facing glove dispensers.

  14. Rural and urban park visits and park-based physical activity.

    PubMed

    Shores, Kindal A; West, Stephanie T

    2010-01-01

    A physical activity disparity exists between rural and urban residents. Community parks are resources for physical activity because they are publicly provided, available at low cost, and accessible to most residents. We examine the use of, and physical activity outcomes associated with, rural and urban parks. Onsite observations were conducted using the System for Observing Play and Recreation in Communities (SOPARC) at four rural and four urban parks. Momentary sampling scans were conducted four times per day for seven days at each site. A total of 6,545 park visitors were observed. Both rural and urban park visitors were observed more often at larger parks with paved trails and attended most often on weekends. Rural park visits were more frequent than urban park visits, but rural visitors were less physically active. Although similarities were observed between rural and urban park visits, differences suggest that findings from park and physical activity studies in urban areas should not be considered representative of their rural counterparts. Given that the majority of existing park and physical activity research has been undertaken in urban settings, these baseline descriptive data make evident the need for complementary research in rural settings.

  15. Promoting prosocial behavior and self-regulatory skills in preschool children through a mindfulness-based Kindness Curriculum.

    PubMed

    Flook, Lisa; Goldberg, Simon B; Pinger, Laura; Davidson, Richard J

    2015-01-01

    Self-regulatory abilities are robust predictors of important outcomes across the life span, yet they are rarely taught explicitly in school. Using a randomized controlled design, the present study investigated the effects of a 12-week mindfulness-based Kindness Curriculum (KC) delivered in a public school setting on executive function, self-regulation, and prosocial behavior in a sample of 68 preschool children. The KC intervention group showed greater improvements in social competence and earned higher report card grades in domains of learning, health, and social-emotional development, whereas the control group exhibited more selfish behavior over time. Interpretation of effect sizes overall indicates small to medium effects favoring the KC group on measures of cognitive flexibility and delay of gratification. Baseline functioning was found to moderate treatment effects, with KC children initially lower in social competence and executive functioning demonstrating larger gains in social competence relative to the control group. These findings, observed over a relatively short intervention period, support the promise of this program for promoting self-regulation and prosocial behavior in young children. They also support the need for future investigation of program implementation across diverse settings.

  16. A Monte Carlo exploration of threefold base geometries for 4d F-theory vacua

    NASA Astrophysics Data System (ADS)

    Taylor, Washington; Wang, Yi-Nan

    2016-01-01

    We use Monte Carlo methods to explore the set of toric threefold bases that support elliptic Calabi-Yau fourfolds for F-theory compactifications to four dimensions, and study the distribution of geometrically non-Higgsable gauge groups, matter, and quiver structure. We estimate the number of distinct threefold bases in the connected set studied to be ~10^48. The distribution of bases peaks around h^{1,1} ~ 82. All bases encountered after "thermalization" have some geometric non-Higgsable structure. We find that the number of non-Higgsable gauge group factors grows roughly linearly in h^{1,1} of the threefold base. Typical bases have ~6 isolated gauge factors as well as several larger connected clusters of gauge factors with jointly charged matter. Approximately 76% of the bases sampled contain connected two-factor gauge group products of the form SU(3) × SU(2), which may act as the non-Abelian part of the standard model gauge group. SU(3) × SU(2) is the third most common connected two-factor product group, following SU(2) × SU(2) and G_2 × SU(2), which arise more frequently.
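    The "thermalization" mentioned above is the burn-in phase of a Markov-chain Monte Carlo walk. Below is a toy Metropolis sketch over a one-dimensional stand-in for h^{1,1}, using an invented Gaussian multiplicity profile peaked at 82; everything here is illustrative only (the actual walk moves through blow-ups and blow-downs of threefold bases):

```python
import random
from math import exp

random.seed(0)

def log_weight(h11):
    # Invented stand-in for the (unknown) number of bases at a given h^{1,1}:
    # a profile peaked near 82, purely for illustration.
    return -((h11 - 82) ** 2) / (2 * 15 ** 2)

def metropolis(n_steps, start=10):
    """Random walk over h^{1,1} values with Metropolis acceptance, so states
    are visited in proportion to their weight (here, their multiplicity)."""
    h11, samples = start, []
    for _ in range(n_steps):
        proposal = h11 + random.choice([-1, 1])
        if proposal >= 1 and random.random() < exp(log_weight(proposal) - log_weight(h11)):
            h11 = proposal
        samples.append(h11)
    return samples

chain = metropolis(200_000)
burned = chain[50_000:]          # discard "thermalization" steps
mean = sum(burned) / len(burned)
print(round(mean))  # concentrates near the invented peak at 82
```

    After burn-in, histogramming the retained samples recovers the assumed multiplicity profile, which is the sense in which such a walk estimates the distribution of bases over h^{1,1}.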

  17. A Monte Carlo exploration of threefold base geometries for 4d F-theory vacua

    DOE PAGES

    Taylor, Washington; Wang, Yi-Nan

    2016-01-22

    Here, we use Monte Carlo methods to explore the set of toric threefold bases that support elliptic Calabi-Yau fourfolds for F-theory compactifications to four dimensions, and study the distribution of geometrically non-Higgsable gauge groups, matter, and quiver structure. We estimate the number of distinct threefold bases in the connected set studied to be ~10^48. Moreover, the distribution of bases peaks around h^{1,1} ~ 82. All bases encountered after "thermalization" have some geometric non-Higgsable structure. We also find that the number of non-Higgsable gauge group factors grows roughly linearly in h^{1,1} of the threefold base. Typical bases have ~6 isolated gauge factors as well as several larger connected clusters of gauge factors with jointly charged matter. Approximately 76% of the bases sampled contain connected two-factor gauge group products of the form SU(3) × SU(2), which may act as the non-Abelian part of the standard model gauge group. SU(3) × SU(2) is the third most common connected two-factor product group, following SU(2) × SU(2) and G_2 × SU(2), which arise more frequently.

  18. The effect of prices on nutrition: Comparing the impact of product- and nutrient-specific taxes.

    PubMed

    Harding, Matthew; Lovenheim, Michael

    2017-05-01

    This paper provides an analysis of the role of prices in determining food purchases and nutrition using very detailed transaction-level observations for a large, nationally-representative sample of US consumers over the period 2002-2007. Using product-specific nutritional information, we develop a new method of partitioning the product space into relevant nutritional clusters that define a set of nutritionally-bundled goods, which parsimoniously characterize consumer choice sets. We then estimate a large utility-derived demand system over this joint product-nutrient space that allows us to calculate price and expenditure elasticities. Using our structural demand estimates, we simulate the role of product taxes on soda, sugar-sweetened beverages, packaged meals, and snacks, and nutrient taxes on fat, salt, and sugar. We find that a 20% nutrient tax has a significantly larger impact on nutrition than an equivalent product tax, due to the fact that these are broader-based taxes. However, the costs of these taxes in terms of consumer utility are only about 70 cents per household per day. A sugar tax in particular is a powerful tool to induce healthier nutritive bundles among consumers. Copyright © 2017 Elsevier B.V. All rights reserved.
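    The product-vs-nutrient comparison can be caricatured with a constant-elasticity demand curve: the same price response applied to a broader tax base moves total intake more. All numbers below (the elasticity and the basket shares) are invented for illustration and are not the paper's estimates:

```python
def quantity_change(elasticity: float, tax_rate: float) -> float:
    """Percent change in quantity under constant-elasticity demand
    Q = A * P^e when the price rises by tax_rate (full pass-through)."""
    return (1 + tax_rate) ** elasticity - 1

def basket_impact(elasticity: float, tax_rate: float, taxed_share: float) -> float:
    """Basket-wide quantity change when the tax reaches only taxed_share
    of purchases (a crude first-order aggregation)."""
    return quantity_change(elasticity, tax_rate) * taxed_share

# A 20% tax with elasticity -1.2 cuts purchases of the taxed goods by ~19.7%...
dq = quantity_change(-1.2, 0.20)
# ...but a narrow product tax (reaching 5% of the basket) moves far less
# overall than a broad nutrient tax (reaching 40% of the basket):
narrow = basket_impact(-1.2, 0.20, 0.05)
broad = basket_impact(-1.2, 0.20, 0.40)
print(f"{dq:.1%} {narrow:.1%} {broad:.1%}")
```

    This is only the mechanical base-broadening effect; the paper's structural demand system additionally captures substitution across nutritional clusters, which this sketch ignores.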

  19. Management Systems, Patient Quality Improvement, Resource Availability, and Substance Abuse Treatment Quality

    PubMed Central

    Fields, Dail; Roman, Paul M; Blum, Terry C

    2012-01-01

    Objective To examine the relationships among general management systems, patient-focused quality management/continuous process improvement (TQM/CPI) processes, resource availability, and multiple dimensions of substance use disorder (SUD) treatment. Data Sources/Study Setting Data are from a nationally representative sample of 221 SUD treatment centers through the National Treatment Center Study (NTCS). Study Design The design was a cross-sectional field study using latent variable structural equation models. The key variables are management practices, TQM/continuous quality improvement (CQI) practices, resource availability, and treatment center performance. Data Collection Interviews and questionnaires provided data from treatment center administrative directors and clinical directors in 2007–2008. Principal Findings Patient-focused TQM/CQI practices fully mediated the relationship between internal management practices and performance. The effects of TQM/CQI on performance are significantly larger for treatment centers with higher levels of staff per patient. Conclusions Internal management practices may create a setting that supports implementation of specific patient-focused practices and protocols inherent to TQM/CQI processes. However, the positive effects of internal management practices on treatment center performance occur through use of specific patient-focused TQM/CPI practices and have more impact when greater amounts of supporting resources are present. PMID:22098342

  20. Promoting prosocial behavior and self-regulatory skills in preschool children through a mindfulness-based kindness curriculum

    PubMed Central

    Flook, Lisa; Goldberg, Simon B.; Pinger, Laura; Davidson, Richard J.

    2015-01-01

    Self-regulatory abilities are robust predictors of important outcomes across the lifespan, yet they are rarely taught explicitly in school. Using a randomized controlled design, the present study investigated the effects of a 12-week mindfulness-based Kindness Curriculum (KC) delivered in a public school setting on executive function, self-regulation, and prosocial behavior in a sample of 68 preschool children. The KC intervention group showed greater improvements in social competence and earned higher report card grades in domains of learning, health, and social-emotional development, whereas the control group exhibited more selfish behavior over time. Interpretation of effect sizes overall indicates small to medium effects favoring the KC group on measures of cognitive flexibility and delay of gratification. Baseline functioning was found to moderate treatment effects, with KC children initially lower in social competence and executive functioning demonstrating larger gains in social competence relative to the control group. These findings, observed over a relatively short intervention period, support the promise of this program for promoting self-regulation and prosocial behavior in young children. They also support the need for future investigation of program implementation across diverse settings. PMID:25383689
