Implementation of false discovery rate for exploring novel paradigms and trait dimensions with ERPs.
Crowley, Michael J; Wu, Jia; McCreary, Scott; Miller, Kelly; Mayes, Linda C
2012-01-01
False discovery rate (FDR) is a multiple comparison procedure that targets the expected proportion of false discoveries among the discoveries. Employing FDR methods in event-related potential (ERP) research provides an approach to explore new ERP paradigms and ERP-psychological trait/behavior relations. In Study 1, we examined neural responses to escape behavior from an aversive noise. In Study 2, we correlated a relatively unexplored trait dimension, ostracism, with neural response. In both situations we focused on the frontal cortical region, applying channel-by-time plots to display statistically significant uncorrected data and FDR-corrected data, controlling for multiple comparisons.
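The Benjamini-Hochberg procedure applied to a channel × time grid of p-values, as described above, can be sketched roughly as follows (Python; the grid dimensions, effect location, and q level are invented for illustration and are not taken from the study):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of p-values declared significant at FDR level q."""
    p = np.asarray(pvals).ravel()
    m = p.size
    order = np.argsort(p)
    thresholds = q * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])      # largest rank i with p_(i) <= i*q/m
        mask[order[: k + 1]] = True           # reject all hypotheses up to that rank
    return mask.reshape(np.shape(pvals))

# Hypothetical ERP grid: 32 frontal channels x 200 time samples of p-values
rng = np.random.default_rng(1)
pvals = rng.uniform(size=(32, 200))
pvals[:8, 50:80] = rng.uniform(0, 0.001, size=(8, 30))   # a simulated "real" effect

sig_uncorrected = pvals < 0.05
sig_fdr = benjamini_hochberg(pvals, q=0.05)
print(sig_uncorrected.sum(), "uncorrected vs", sig_fdr.sum(), "FDR-corrected significant points")
```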
Discovery of Host Factors and Pathways Utilized in Hantaviral Infection
2016-09-01
AWARD NUMBER: W81XWH-14-1-0204. TITLE: Discovery of Host Factors and Pathways Utilized in Hantaviral Infection. PRINCIPAL INVESTIGATOR: Paul... [standard report-form fields omitted] ...after significance values were calculated and corrected for false discovery rate. The top hit is ATP6V0A1, a gene encoding a subunit of a vacuolar...
Juan-Albarracín, Javier; Fuster-Garcia, Elies; Pérez-Girbés, Alexandre; Aparici-Robles, Fernando; Alberich-Bayarri, Ángel; Revert-Ventura, Antonio; Martí-Bonmatí, Luis; García-Gómez, Juan M
2018-06-01
Purpose: To determine if preoperative vascular heterogeneity of glioblastoma is predictive of overall survival of patients undergoing standard-of-care treatment by using an unsupervised multiparametric perfusion-based habitat-discovery algorithm. Materials and Methods: Preoperative magnetic resonance (MR) imaging, including dynamic susceptibility-weighted contrast material-enhanced perfusion studies, in 50 consecutive patients with glioblastoma was retrieved. Perfusion parameters of glioblastoma were analyzed and used to automatically draw four reproducible habitats that describe the tumor vascular heterogeneity: high-angiogenic and low-angiogenic regions of the enhancing tumor, potentially tumor-infiltrated peripheral edema, and vasogenic edema. Kaplan-Meier and Cox proportional hazard analyses were conducted to assess the prognostic potential of the hemodynamic tissue signature to predict patient survival. Results: Cox regression analysis yielded a significant correlation between patients' survival and maximum relative cerebral blood volume (rCBVmax) and maximum relative cerebral blood flow (rCBFmax) in high-angiogenic and low-angiogenic habitats (P < .01, false discovery rate-corrected P < .05). Moreover, rCBFmax in the potentially tumor-infiltrated peripheral edema habitat was also significantly correlated (P < .05, false discovery rate-corrected P < .05). Kaplan-Meier analysis demonstrated significant differences in observed survival between populations divided according to the median rCBVmax or rCBFmax in the high-angiogenic and low-angiogenic habitats (log-rank test P < .05, false discovery rate-corrected P < .05), with an average survival increase of 230 days. Conclusion: Preoperative perfusion heterogeneity contains relevant information about overall survival in patients who undergo standard-of-care treatment. The hemodynamic tissue signature method automatically describes this heterogeneity, providing a set of vascular habitats with high prognostic capabilities. © RSNA, 2018.
Streiner, David L
2015-10-01
Testing many null hypotheses in a single study results in an increased probability of detecting a significant finding just by chance (the problem of multiplicity). Debates have raged over many years with regard to whether to correct for multiplicity and, if so, how it should be done. This article first discusses how multiple tests lead to an inflation of the α level, then explores the following different contexts in which multiplicity arises: testing for baseline differences in various types of studies, having >1 outcome variable, conducting statistical tests that produce >1 P value, taking multiple "peeks" at the data, and unplanned, post hoc analyses (i.e., "data dredging," "fishing expeditions," or "P-hacking"). It then discusses some of the methods that have been proposed for correcting for multiplicity, including single-step procedures (e.g., Bonferroni); multistep procedures, such as those of Holm, Hochberg, and Šidák; false discovery rate control; and resampling approaches. Note that these various approaches describe different aspects and are not necessarily mutually exclusive. For example, resampling methods could be used to control the false discovery rate or the family-wise error rate (as defined later in this article). However, the use of one of these approaches presupposes that we should correct for multiplicity, which is not universally accepted, and the article presents the arguments for and against such "correction." The final section brings together these threads and presents suggestions with regard to when it makes sense to apply the corrections and how to do so. © 2015 American Society for Nutrition.
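As a rough illustration of the single-step, multistep, and FDR procedures discussed above, the following sketch compares them on simulated p-values (it assumes the statsmodels library; the mixture of null and true effects is made up):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# 90 null p-values plus 10 genuine effects with very small p-values
pvals = np.concatenate([rng.uniform(size=90), rng.uniform(0, 1e-4, size=10)])

for method in ["bonferroni", "holm", "sidak", "fdr_bh"]:
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method:>10}: {reject.sum():2d} of {pvals.size} hypotheses rejected")
```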
Functional Brain Connectome and Its Relation to Hoehn and Yahr Stage in Parkinson Disease.
Suo, Xueling; Lei, Du; Li, Nannan; Cheng, Lan; Chen, Fuqin; Wang, Meiyun; Kemp, Graham J; Peng, Rong; Gong, Qiyong
2017-12-01
Purpose: To use resting-state functional magnetic resonance (MR) imaging and graph theory approaches to investigate the brain functional connectome and its potential relation to disease severity in Parkinson disease (PD). Materials and Methods: This case-control study was approved by the local research ethics committee, and all participants provided informed consent. A total of 153 right-handed patients with PD and 81 healthy control participants, matched for age, sex, and handedness, were recruited to undergo a 3-T resting-state functional MR examination. The whole-brain functional connectome was constructed by thresholding the Pearson correlation matrices of 90 brain regions, and the topologic properties were analyzed by using graph theory approaches. Nonparametric permutation tests were used to compare topologic properties, and their relationship to disease severity was assessed. Results: The functional connectome in PD showed abnormalities at the global level (ie, decrease in clustering coefficient, global efficiency, and local efficiency, and increase in characteristic path length) and at the nodal level (decreased nodal centralities in the sensorimotor cortex, default mode, and temporal-occipital regions; P < .001, false discovery rate corrected). Further, the nodal centralities in the left postcentral gyrus and left superior temporal gyrus correlated negatively with Unified Parkinson's Disease Rating Scale III score (P = .038, false discovery rate corrected, r = -0.198; and P = .009, false discovery rate corrected, r = -0.270, respectively) and decreased with increasing Hoehn and Yahr stage in patients with PD. Conclusion: The configurations of the brain functional connectome in patients with PD were perturbed and correlated with disease severity, notably in regions responsible for motor functions. These results provide topologic insights into the neural functional changes in relation to disease severity of PD. © RSNA, 2017. Online supplemental material is available for this article. An earlier incorrect version of this article appeared online. This article was corrected on September 11, 2017.
Better cancer biomarker discovery through better study design.
Rundle, Andrew; Ahsan, Habibul; Vineis, Paolo
2012-12-01
High-throughput laboratory technologies coupled with sophisticated bioinformatics algorithms have tremendous potential for discovering novel biomarkers, or profiles of biomarkers, that could serve as predictors of disease risk, response to treatment or prognosis. We discuss methodological issues in wedding high-throughput approaches for biomarker discovery with the case-control study designs typically used in biomarker discovery studies, especially focusing on nested case-control designs. We review principles for nested case-control study design in relation to biomarker discovery studies and describe how the efficiency of biomarker discovery can be affected by study design choices. We develop a simulated prostate cancer cohort data set and a series of biomarker discovery case-control studies nested within the cohort to illustrate how study design choices can influence the biomarker discovery process. Common elements of nested case-control design, incidence density sampling and matching of controls to cases, are not typically factored correctly into biomarker discovery analyses, inducing bias in the discovery process. We illustrate how incidence density sampling and matching of controls to cases reduce the apparent specificity of truly valid biomarkers 'discovered' in a nested case-control study. We also propose and demonstrate a new case-control matching protocol, which we call 'antimatching', that improves the efficiency of biomarker discovery studies. For a valid, but as yet undiscovered, biomarker(s), disjunctions between correctly designed epidemiologic studies and the practice of biomarker discovery reduce the likelihood that true biomarker(s) will be discovered and increase the false-positive discovery rate. © 2012 The Authors. European Journal of Clinical Investigation © 2012 Stichting European Society for Clinical Investigation Journal Foundation.
NASA Astrophysics Data System (ADS)
Kolmašová, Ivana; Imai, Masafumi; Santolík, Ondřej; Kurth, William S.; Hospodarsky, George B.; Gurnett, Donald A.; Connerney, John E. P.; Bolton, Scott J.
2018-06-01
In the version of this Letter originally published, in the second sentence of the last paragraph before the Methods section the word `altitudes' was mistakenly used in place of the word `latitudes'. The sentence has now been corrected accordingly to: `Low-dispersion class 1 events indicate that low-density ionospheric regions predominantly occur in the northern hemisphere at latitudes between 20° and 70°.'
Marino, Michael J
2018-05-01
There is a clear perception in the literature that there is a crisis in reproducibility in the biomedical sciences. Many underlying factors contributing to the prevalence of irreproducible results have been highlighted with a focus on poor design and execution of experiments along with the misuse of statistics. While these factors certainly contribute to irreproducibility, relatively little attention outside of the specialized statistical literature has focused on the expected prevalence of false discoveries under idealized circumstances. In other words, when everything is done correctly, how often should we expect to be wrong? Using a simple simulation of an idealized experiment, it is possible to show the central role of sample size and the related quantity of statistical power in determining the false discovery rate, and in accurate estimation of effect size. According to our calculations, based on current practice many subfields of biomedical science may expect their discoveries to be false at least 25% of the time, and the only viable course to correct this is to require the reporting of statistical power and a minimum of 80% power (1 - β = 0.80) for all studies. Copyright © 2017 Elsevier Inc. All rights reserved.
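The relationship between statistical power and the expected false discovery rate that the article describes follows from a short calculation; the sketch below uses an assumed prior probability that a tested effect is real (the specific prior and alpha are illustrative assumptions, not values from the paper):

```python
# Expected fraction of "discoveries" that are false, under idealized testing:
# FDR = alpha*(1 - pi) / (alpha*(1 - pi) + power*pi),
# where pi is the prior probability that a tested hypothesis is truly non-null.
def expected_fdr(alpha, power, pi):
    false_pos = alpha * (1 - pi)
    true_pos = power * pi
    return false_pos / (false_pos + true_pos)

alpha = 0.05
pi = 0.25            # assumed prior probability of a real effect
for power in (0.2, 0.5, 0.8):
    print(f"power = {power:.1f} -> expected FDR = {expected_fdr(alpha, power, pi):.2f}")
```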
Sakurai Prize: The Future of Higgs Physics
NASA Astrophysics Data System (ADS)
Dawson, Sally
2017-01-01
The discovery of the Higgs boson relied critically on precision calculations. The quantum contributions from the Higgs boson to the W and top quark masses suggested long before the Higgs discovery that a Standard Model Higgs boson should have a mass in the 100-200 GeV range. The experimental extraction of Higgs properties requires normalization to the predicted Higgs production and decay rates, for which higher order corrections are also essential. As Higgs physics becomes a mature subject, more and more precise calculations will be required. If there is new physics at high scales, it will contribute to the predictions and precision Higgs physics will be a window to beyond the Standard Model physics.
Strickland, Erin C; Geer, M Ariel; Hong, Jiyong; Fitzgerald, Michael C
2014-01-01
Detection and quantitation of protein-ligand binding interactions is important in many areas of biological research. Stability of proteins from rates of oxidation (SPROX) is an energetics-based technique for identifying the proteins targets of ligands in complex biological mixtures. Knowing the false-positive rate of protein target discovery in proteome-wide SPROX experiments is important for the correct interpretation of results. Reported here are the results of a control SPROX experiment in which chemical denaturation data is obtained on the proteins in two samples that originated from the same yeast lysate, as would be done in a typical SPROX experiment except that one sample would be spiked with the test ligand. False-positive rates of 1.2-2.2% and <0.8% are calculated for SPROX experiments using Q-TOF and Orbitrap mass spectrometer systems, respectively. Our results indicate that the false-positive rate is largely determined by random errors associated with the mass spectral analysis of the isobaric mass tag (e.g., iTRAQ®) reporter ions used for peptide quantitation. Our results also suggest that technical replicates can be used to effectively eliminate such false positives that result from this random error, as is demonstrated in a SPROX experiment to identify yeast protein targets of the drug, manassantin A. The impact of ion purity in the tandem mass spectral analyses and of background oxidation on the false-positive rate of protein target discovery using SPROX is also discussed.
40 CFR 22.19 - Prehearing information exchange; prehearing conference; other discovery.
Code of Federal Regulations, 2010 CFR
2010-07-01
... method of discovery sought, provide the proposed discovery instruments, and describe in detail the nature... finding that: (i) The information sought cannot reasonably be obtained by alternative methods of discovery... promptly supplement or correct the exchange when the party learns that the information exchanged or...
Clevert, Djork-Arné; Mitterecker, Andreas; Mayr, Andreas; Klambauer, Günter; Tuefferd, Marianne; De Bondt, An; Talloen, Willem; Göhlmann, Hinrich; Hochreiter, Sepp
2011-07-01
Cost-effective oligonucleotide genotyping arrays like the Affymetrix SNP 6.0 are still the predominant technique to measure DNA copy number variations (CNVs). However, CNV detection methods for microarrays overestimate both the number and the size of CNV regions and, consequently, suffer from a high false discovery rate (FDR). A high FDR means that many detected CNVs are spurious and will therefore not be associated with a disease in a clinical study, yet correction for multiple testing must still account for them, which decreases the study's discovery power. For controlling the FDR, we propose a probabilistic latent variable model, 'cn.FARMS', which is optimized by a Bayesian maximum a posteriori approach. cn.FARMS controls the FDR through the information gain of the posterior over the prior. The prior represents the null hypothesis of copy number 2 for all samples, from which the posterior can only deviate by strong and consistent signals in the data. On HapMap data, cn.FARMS clearly outperformed the two most prevalent methods with respect to sensitivity and FDR. The software cn.FARMS is publicly available as an R package at http://www.bioinf.jku.at/software/cnfarms/cnfarms.html.
Mass univariate analysis of event-related brain potentials/fields I: a critical tutorial review.
Groppe, David M; Urbach, Thomas P; Kutas, Marta
2011-12-01
Event-related potentials (ERPs) and magnetic fields (ERFs) are typically analyzed via ANOVAs on mean activity in a priori windows. Advances in computing power and statistics have produced an alternative, mass univariate analyses consisting of thousands of statistical tests and powerful corrections for multiple comparisons. Such analyses are most useful when one has little a priori knowledge of effect locations or latencies, and for delineating effect boundaries. Mass univariate analyses complement and, at times, obviate traditional analyses. Here we review this approach as applied to ERP/ERF data and four methods for multiple comparison correction: strong control of the familywise error rate (FWER) via permutation tests, weak control of FWER via cluster-based permutation tests, false discovery rate control, and control of the generalized FWER. We end with recommendations for their use and introduce free MATLAB software for their implementation. Copyright © 2011 Society for Psychophysiological Research.
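A minimal sketch of the permutation-based strong FWER control ('tmax') approach reviewed above, using sign-flipping of within-subject difference waves on a simulated channel × time grid (all dimensions and the effect location are invented; this is not the authors' MATLAB toolbox):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subj, n_chan, n_time = 16, 8, 100
# Within-subject difference waves (condition A minus condition B), with one true effect
diff = rng.normal(size=(n_subj, n_chan, n_time))
diff[:, 0, 40:60] += 0.8

t_obs = stats.ttest_1samp(diff, 0.0, axis=0).statistic        # observed t per channel/time point

n_perm = 1000
t_max = np.empty(n_perm)
for i in range(n_perm):
    flips = rng.choice([-1.0, 1.0], size=(n_subj, 1, 1))      # randomly flip each subject's sign
    t_perm = stats.ttest_1samp(diff * flips, 0.0, axis=0).statistic
    t_max[i] = np.abs(t_perm).max()                           # max |t| over the whole grid

crit = np.quantile(t_max, 0.95)                               # FWER-controlling critical value
print("critical |t| =", round(crit, 2), "; significant points:", int((np.abs(t_obs) > crit).sum()))
```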
2004-03-08
KENNEDY SPACE CENTER, FLA. - One of four rudder speed brake actuators arrives at Cape Canaveral Air Force Station. The actuators, to be installed on the orbiter Discovery, are being X-rayed at the Radiographic High-Energy X-ray Facility to determine if the gears were installed correctly. Discovery has been assigned to the first Return to Flight mission, STS-114, a logistics flight to the International Space Station.
2004-03-08
KENNEDY SPACE CENTER, FLA. - A rudder speed brake actuator sits on an air-bearing pallet to undergo X-raying. Four actuators to be installed on the orbiter Discovery are being X-rayed at the Radiographic High-Energy X-ray Facility to determine if the gears were installed correctly. Discovery has been assigned to the first Return to Flight mission, STS-114, a logistics flight to the International Space Station.
Supernovae Discovery Efficiency
NASA Astrophysics Data System (ADS)
John, Colin
2018-01-01
Abstract: We present supernovae (SN) search efficiency measurements for recent Hubble Space Telescope (HST) surveys. Efficiency is a key component of any search, and is an important parameter as a correction factor for SN rates. To achieve an accurate value for efficiency, many supernovae need to be discoverable in surveys. This cannot be achieved from real SN alone, due to their scarcity, so fake SN are planted. These fake supernovae—with a goal of realism in mind—yield an understanding of efficiency based on position relative to other celestial objects, and on brightness. To improve realism, we built a more accurate model of supernovae using a point-spread function. The next improvement to realism is planting these objects close to galaxies and with various parameters of brightness, magnitude, local galactic brightness and redshift. Once these are planted, a very accurate SN is visible and discoverable by the searcher. It is very important to find the factors that affect this discovery efficiency. Exploring the factors that affect detection yields a more accurate correction factor. Further inquiries into efficiency give us a better understanding of image processing, searching techniques and survey strategies, and result in an overall higher likelihood to find these events in future surveys with the Hubble, James Webb, and WFIRST telescopes. After efficiency is discovered and refined with many unique surveys, it factors into measurements of SN rates versus redshift. By comparing SN rates versus redshift against the star formation rate we can test models to determine how long star systems take from the point of inception to explosion (delay time distribution). This delay time distribution is compared to SN progenitor models to get an accurate idea of what these stars were like before their deaths.
The promise of disease gene discovery in South Asia
Nakatsuka, Nathan; Moorjani, Priya; Rai, Niraj; Sarkar, Biswanath; Tandon, Arti; Patterson, Nick; Bhavani, Gandham SriLakshmi; Girisha, Katta Mohan; Mustak, Mohammed S; Srinivasan, Sudha; Kaushik, Amit; Vahab, Saadi Abdul; Jagadeesh, Sujatha M.; Satyamoorthy, Kapaettu; Singh, Lalji; Reich, David; Thangaraj, Kumarasamy
2017-01-01
The more than 1.5 billion people who live in South Asia are correctly viewed not as a single large population, but as many small endogamous groups. We assembled genome-wide data from over 2,800 individuals from over 260 distinct South Asian groups. We identify 81 unique groups, of which 14 have estimated census sizes of more than a million, that descend from founder events more extreme than those in Ashkenazi Jews and Finns, both of which have high rates of recessive disease due to founder events. We identify multiple examples of recessive diseases in South Asia that are the result of such founder events. This study highlights an under-appreciated opportunity for reducing disease burden among South Asians through the discovery of and testing for recessive disease genes. PMID:28714977
Brain glucose and acetoacetate metabolism: a comparison of young and older adults.
Nugent, Scott; Tremblay, Sebastien; Chen, Kewei W; Ayutyanont, Napatkamon; Roontiva, Auttawut; Castellano, Christian-Alexandre; Fortier, Melanie; Roy, Maggie; Courchesne-Loyer, Alexandre; Bocti, Christian; Lepage, Martin; Turcotte, Eric; Fulop, Tamas; Reiman, Eric M; Cunnane, Stephen C
2014-06-01
The extent to which the age-related decline in regional brain glucose uptake also applies to other important brain fuels is presently unknown. Ketones are the brain's major alternative fuel to glucose, so we developed a dual tracer positron emission tomography protocol to quantify and compare regional cerebral metabolic rates for glucose and the ketone, acetoacetate. Twenty healthy young adults (mean age, 26 years) and 24 healthy older adults (mean age, 74 years) were studied. In comparison with younger adults, older adults had 8 ± 6% (mean ± SD) lower cerebral metabolic rates for glucose in gray matter as a whole (p = 0.035), specifically in several frontal, temporal, and subcortical regions, as well as in the cingulate and insula (p ≤ 0.01, false discovery rate correction). The effect of age on cerebral metabolic rates for acetoacetate in gray matter did not reach significance (p = 0.11). Rate constants (min^-1) of glucose (Kg) and acetoacetate (Ka) were significantly lower (-11 ± 6% [p = 0.005] and -19 ± 5% [p = 0.006], respectively) in older adults compared with younger adults. There were differential effects of age on Kg and Ka as seen by significant interaction effects in the caudate (p = 0.030) and post-central gyrus (p = 0.023). The acetoacetate index, which expresses the scaled residuals of the voxel-wise linear regression of glucose on ketone uptake, identifies regions taking up higher or lower amounts of acetoacetate relative to glucose. The acetoacetate index was higher in the caudate of young adults when compared with older adults (p ≤ 0.05 false discovery rate correction). This study provides new information about glucose and ketone metabolism in the human brain and a comparison of the extent to which their regional use changes during normal aging. Copyright © 2014 Elsevier Inc. All rights reserved.
2004-03-08
KENNEDY SPACE CENTER, FLA. - Workers at Cape Canaveral Air Force Station place one of four rudder speed brake actuators onto a pallet for X-ray. The actuators, to be installed on the orbiter Discovery, are being X-rayed at the Radiographic High-Energy X-ray Facility to determine if the gears were installed correctly. Discovery has been assigned to the first Return to Flight mission, STS-114, a logistics flight to the International Space Station.
Evaluation of Second-Level Inference in fMRI Analysis
Roels, Sanne P.; Loeys, Tom; Moerkerke, Beatrijs
2016-01-01
We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and on (2) the data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of 3 phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects. We evaluate models that take into account first-level (within-subjects) variability and models that do not take into account this variability. Second, one proceeds via inference based on parametrical assumptions or via permutation-based inference. Third, we evaluate 3 commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with minimal cluster size. Based on a simulation study and real data we find that the two-step procedure with minimal cluster size yields the most stable results, followed by the familywise error rate correction. FDR correction yields the most variable results, for both permutation-based inference and parametrical inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference. PMID:26819578
Pe’er, Itsik
2017-01-01
Genome-wide association studies (GWAS) have identified hundreds of SNPs responsible for variation in human quantitative traits. However, genome-wide-significant associations often fail to replicate across independent cohorts, in apparent inconsistency with their strong effects in discovery cohorts. This limited success of replication raises pervasive questions about the utility of the GWAS field. We identify all 332 studies of quantitative traits from the NHGRI-EBI GWAS Database with attempted replication. We find that the majority of studies provide insufficient data to evaluate replication rates. The remaining papers replicate significantly worse than expected (p < 10−14), even when adjusting for regression-to-the-mean of effect size between discovery and replication cohorts, termed the Winner's Curse (p < 10−16). We show this is due in part to misreporting replication cohort size as a maximum number, rather than a per-locus one. In 39 studies accurately reporting per-locus cohort size for attempted replication of 707 loci in samples with similar ancestry, replication rate matched expectation (predicted 458, observed 457, p = 0.94). In contrast, ancestry differences between replication and discovery (13 studies, 385 loci) cause the most highly powered decile of loci to replicate worse than expected, due to differences in linkage disequilibrium. PMID:28715421
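The expectation calculation alluded to above (the predicted number of replicating loci obtained by summing per-locus replication power) can be sketched as follows; effect sizes, standard errors, and the replication threshold are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_loci = 100
beta = rng.normal(0.05, 0.02, size=n_loci)       # hypothetical true effect sizes
se_rep = np.full(n_loci, 0.02)                   # per-locus standard error in the replication cohort
alpha = 0.05                                     # one-sided replication threshold (assumption)

z_crit = norm.ppf(1 - alpha)
power = 1 - norm.cdf(z_crit - beta / se_rep)     # probability that each locus replicates
expected_replications = power.sum()
print(f"expected number of replicating loci: {expected_replications:.1f} of {n_loci}")
```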
2004-03-08
KENNEDY SPACE CENTER, FLA. - An X-ray machine is in place to take images of four rudder speed brake actuators to be installed on the orbiter Discovery. The actuators are being X-rayed at the Cape Canaveral Air Force Station’s Radiographic High-Energy X-ray Facility to determine if the gears were installed correctly. Discovery has been assigned to the first Return to Flight mission, STS-114, a logistics flight to the International Space Station.
OCEAN: Optimized Cross rEActivity estimatioN.
Czodrowski, Paul; Bolick, Wolf-Guido
2016-10-24
The prediction of molecular targets is highly beneficial during the drug discovery process, be it for off-target elucidation or deconvolution of phenotypic screens. Here, we present OCEAN, a target prediction tool exclusively utilizing publicly available ChEMBL data. OCEAN uses a heuristics approach based on a validation set containing almost 1000 drug-target relationships. New ChEMBL data (ChEMBL20 as well as ChEMBL21) released after the validation was used for a prospective OCEAN performance check. The success rates of OCEAN in correctly predicting targets within the TOP10 ranks are 77% for recently marketed drugs, 62% for all new ChEMBL20 compounds, and 51% for all new ChEMBL21 compounds. OCEAN is also capable of identifying polypharmacological compounds; for molecules simultaneously hitting at least two targets, the success rate of correct prediction within the TOP10 ranks is 64%. The source code of OCEAN can be found at http://www.github.com/rdkit/OCEAN.
Giese, Sven H; Zickmann, Franziska; Renard, Bernhard Y
2014-01-01
Accurate estimation, comparison and evaluation of read mapping error rates is a crucial step in the processing of next-generation sequencing data, as further analysis steps and interpretation assume the correctness of the mapping results. Current approaches are either focused on sensitivity estimation and thereby disregard specificity or are based on read simulations. Although continuously improving, read simulations are still prone to introduce a bias into the mapping error quantitation and cannot capture all characteristics of an individual dataset. We introduce ARDEN (artificial reference driven estimation of false positives in next-generation sequencing data), a novel benchmark method that estimates error rates of read mappers based on real experimental reads, using an additionally generated artificial reference genome. It allows a dataset-specific computation of error rates and the construction of a receiver operating characteristic curve. Thereby, it can be used for optimization of parameters for read mappers, selection of read mappers for a specific problem or for filtering alignments based on quality estimation. The use of ARDEN is demonstrated in a general read mapper comparison, a parameter optimization for one read mapper and an application example in single-nucleotide polymorphism discovery with a significant reduction in the number of false positive identifications. The ARDEN source code is freely available at http://sourceforge.net/projects/arden/.
Ternès, Nils; Rotolo, Federico; Michiels, Stefan
2016-07-10
Correct selection of prognostic biomarkers among multiple candidates is becoming increasingly challenging as the dimensionality of biological data becomes higher. Therefore, minimizing the false discovery rate (FDR) is of primary importance, while a low false negative rate (FNR) is a complementary measure. The lasso is a popular selection method in Cox regression, but its results depend heavily on the penalty parameter λ. Usually, λ is chosen using maximum cross-validated log-likelihood (max-cvl). However, this method often has a very high FDR. We review methods for a more conservative choice of λ. We propose an empirical extension of the cvl by adding a penalization term, which trades off between the goodness-of-fit and the parsimony of the model, leading to the selection of fewer biomarkers and, as we show, to the reduction of the FDR without a large increase in FNR. We conducted a simulation study considering null and moderately sparse alternative scenarios and compared our approach with the standard lasso and 10 other competitors: Akaike information criterion (AIC), corrected AIC, Bayesian information criterion (BIC), extended BIC, Hannan and Quinn information criterion (HQIC), risk information criterion (RIC), one-standard-error rule, adaptive lasso, stability selection, and percentile lasso. Our extension achieved the best compromise across all the scenarios between a reduction of the FDR and a limited rise of the FNR, followed by the AIC, the RIC, and the adaptive lasso, which performed well in some settings. We illustrate the methods using gene expression data of 523 breast cancer patients. In conclusion, we propose to apply our extension to the lasso whenever a stringent FDR with a limited FNR is targeted. Copyright © 2016 John Wiley & Sons, Ltd.
Statistical Selection of Biological Models for Genome-Wide Association Analyses.
Bi, Wenjian; Kang, Guolian; Pounds, Stanley B
2018-05-24
Genome-wide association studies have discovered many biologically important associations of genes with phenotypes. Typically, genome-wide association analyses formally test the association of each genetic feature (SNP, CNV, etc.) with the phenotype of interest and summarize the results with multiplicity-adjusted p-values. However, very small p-values only provide evidence against the null hypothesis of no association without indicating which biological model best explains the observed data. Correctly identifying a specific biological model may improve the scientific interpretation and can be used to more effectively select and design a follow-up validation study. Thus, statistical methodology to identify the correct biological model for a particular genotype-phenotype association can be very useful to investigators. Here, we propose a general statistical method to summarize how accurately each of five biological models (null, additive, dominant, recessive, co-dominant) represents the data observed for each variant in a GWAS. We show that the new method stringently controls the false discovery rate and asymptotically selects the correct biological model. Simulations of two-stage discovery-validation studies show that the new method has these properties and that its validation power is similar to or exceeds that of simple methods that use the same statistical model for all SNPs. Example analyses of three data sets also highlight these advantages of the new method. An R package is freely available at www.stjuderesearch.org/site/depts/biostats/maew. Copyright © 2018. Published by Elsevier Inc.
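A toy sketch of comparing the candidate biological models for a single variant is shown below; it scores the five genotype codings with a simple BIC for a quantitative phenotype, which is only a stand-in for the authors' FDR-controlling method (all data are simulated):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
g = rng.choice([0, 1, 2], size=n, p=[0.49, 0.42, 0.09])     # genotype = minor-allele count
y = 0.4 * (g >= 1) + rng.normal(size=n)                     # simulated truth: a dominant effect

def bic_ols(X, y):
    """BIC of a Gaussian linear model with design matrix X (intercept included in X)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1                                       # coefficients + residual variance
    return -2 * loglik + k * np.log(len(y))

ones = np.ones((n, 1))
designs = {
    "null":        ones,
    "additive":    np.column_stack([ones, g]),
    "dominant":    np.column_stack([ones, (g >= 1).astype(float)]),
    "recessive":   np.column_stack([ones, (g == 2).astype(float)]),
    "co-dominant": np.column_stack([ones, (g == 1).astype(float), (g == 2).astype(float)]),
}
scores = {name: bic_ols(X, y) for name, X in designs.items()}
print("selected model:", min(scores, key=scores.get))
```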
Klambauer, Günter; Schwarzbauer, Karin; Mayr, Andreas; Clevert, Djork-Arné; Mitterecker, Andreas; Bodenhofer, Ulrich; Hochreiter, Sepp
2012-01-01
Quantitative analyses of next-generation sequencing (NGS) data, such as the detection of copy number variations (CNVs), remain challenging. Current methods detect CNVs as changes in the depth of coverage along chromosomes. Technological or genomic variations in the depth of coverage thus lead to a high false discovery rate (FDR), even upon correction for GC content. In the context of association studies between CNVs and disease, a high FDR means many false CNVs, thereby decreasing the discovery power of the study after correction for multiple testing. We propose ‘Copy Number estimation by a Mixture Of PoissonS’ (cn.MOPS), a data processing pipeline for CNV detection in NGS data. In contrast to previous approaches, cn.MOPS incorporates modeling of depths of coverage across samples at each genomic position. Therefore, cn.MOPS is not affected by read count variations along chromosomes. Using a Bayesian approach, cn.MOPS decomposes variations in the depth of coverage across samples into integer copy numbers and noise by means of its mixture components and Poisson distributions, respectively. The noise estimate allows for reducing the FDR by filtering out detections having high noise that are likely to be false detections. We compared cn.MOPS with the five most popular methods for CNV detection in NGS data using four benchmark datasets: (i) simulated data, (ii) NGS data from a male HapMap individual with implanted CNVs from the X chromosome, (iii) data from HapMap individuals with known CNVs, (iv) high coverage data from the 1000 Genomes Project. cn.MOPS outperformed its five competitors in terms of precision (1–FDR) and recall for both gains and losses in all benchmark data sets. The software cn.MOPS is publicly available as an R package at http://www.bioinf.jku.at/software/cnmops/ and at Bioconductor. PMID:22302147
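The core idea of modeling read counts across samples at one genomic position and decomposing them into integer copy numbers can be caricatured with a much simpler Bayesian calculation than the cn.MOPS mixture model itself (coverage, priors, and counts below are invented):

```python
import numpy as np
from scipy.stats import poisson

lam2 = 30.0                                    # expected read count at copy number 2 (assumed coverage)
copy_numbers = np.arange(0, 7)
prior = np.full(copy_numbers.size, 0.1 / (copy_numbers.size - 1))
prior[2] = 0.9                                 # strong prior mass on copy number 2 (the null)

counts = np.array([31, 28, 33, 50, 14, 30])    # read counts at one genomic segment across samples
rates = np.maximum(lam2 * copy_numbers / 2.0, 1e-3)   # expected count per copy number (avoid 0)

# Posterior over integer copy number for each sample
log_post = np.log(prior) + poisson.logpmf(counts[:, None], rates[None, :])
post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
post /= post.sum(axis=1, keepdims=True)
print("MAP copy number per sample:", copy_numbers[np.argmax(post, axis=1)])
```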
Optimal False Discovery Rate Control for Dependent Data
Xie, Jichun; Cai, T. Tony; Maris, John; Li, Hongzhe
2013-01-01
This paper considers the problem of optimal false discovery rate control when the test statistics are dependent. An optimal joint oracle procedure, which minimizes the false non-discovery rate subject to a constraint on the false discovery rate, is developed. A data-driven marginal plug-in procedure is then proposed to approximate the optimal joint procedure for multivariate normal data. It is shown that the marginal procedure is asymptotically optimal for multivariate normal data with a short-range dependent covariance structure. Numerical results show that the marginal procedure controls the false discovery rate and leads to a smaller false non-discovery rate than several commonly used p-value-based false discovery rate controlling methods. The procedure is illustrated by an application to a genome-wide association study of neuroblastoma and it identifies a few more genetic variants that are potentially associated with neuroblastoma than several p-value-based false discovery rate controlling procedures. PMID:23378870
Statistical testing and power analysis for brain-wide association study.
Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng
2018-04-05
The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, the multiple correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on the Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis tests using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and false discovery rate (FDR), it can reduce the false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need for non-parametric permutation to correct for multiple comparison; thus, it can efficiently tackle large datasets with high-resolution fMRI images. The utility of our method is shown in a case-control study. Our approach can identify altered functional connectivities in a major depression disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Santosa, Hendrik; Aarabi, Ardalan; Perlman, Susan B.; Huppert, Theodore J.
2017-05-01
Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of red to near-infrared light to measure changes in cerebral blood oxygenation. Spontaneous (resting state) functional connectivity (sFC) has become a critical tool for cognitive neuroscience for understanding task-independent neural networks, revealing pertinent details differentiating healthy from disordered brain function, and discovering fluctuations in the synchronization of interacting individuals during hyperscanning paradigms. Two of the main challenges to sFC-NIRS analysis are (i) the slow temporal structure of both systemic physiology and the response of blood vessels, which introduces spurious correlations, and (ii) motion-related artifacts that result from movement of the fNIRS sensors on the participants' head and can introduce non-normal and heavy-tailed noise structures. In this work, we systematically examine the false-discovery rates of several time- and frequency-domain metrics of functional connectivity for characterizing sFC-NIRS. Specifically, we detail the modifications to the statistical models of these methods needed to avoid high levels of false discovery related to these two sources of noise in fNIRS. We compare these analysis procedures using both simulated and experimental resting-state fNIRS data. Our proposed robust correlation method performs better in terms of being more robust to noise outliers due to motion artifacts.
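A small sketch of why a robust correlation metric helps with heavy-tailed motion artifacts: an ordinary Pearson correlation is contrasted with a rank-based alternative on simulated time courses contaminated by a brief spike (Spearman is used here only as a stand-in for the robust estimator; all data are simulated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 300
common = rng.normal(size=n)                     # shared slow fluctuation between two channels
x = common + 0.5 * rng.normal(size=n)
y = common + 0.5 * rng.normal(size=n)
y_artifact = y.copy()
y_artifact[150:155] += 15.0                     # brief, heavy-tailed motion spike in one channel

r_clean, _ = stats.pearsonr(x, y)
r_spike, _ = stats.pearsonr(x, y_artifact)
rho_spike, _ = stats.spearmanr(x, y_artifact)
print(f"Pearson (clean) = {r_clean:.2f}, Pearson (spike) = {r_spike:.2f}, "
      f"Spearman (spike) = {rho_spike:.2f}")
```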
Cratering time scales for the Galilean satellites
NASA Technical Reports Server (NTRS)
Shoemaker, E. M.; Wolfe, R. F.
1982-01-01
An attempt is made to estimate the present cratering rate for each Galilean satellite within the correct order of magnitude and to extend the cratering rates back into the geologic past on the basis of evidence from the earth-moon system. For collisions with long and short period comets, the magnitudes and size distributions of the comet nuclei, the distribution of their perihelion distances, and the completeness of discovery are addressed. The diameters and masses of cometary nuclei are assessed, as are crater diameters and cratering rates. The dynamical relations between long period and short period comets are discussed, and the population of Jupiter-crossing asteroids is assessed. Estimated present cratering rates on the Galilean satellites are compared and variations of cratering rate with time are considered. Finally, the consistency of derived cratering time scales with the cratering record of the icy Galilean satellites is discussed.
Han, Hyemin; Glenn, Andrea L
2018-06-01
In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between trying to minimize false positives (Type I error) while not being too stringent and omitting true effects (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ for different areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power weaker, leading to the suggestion that less stringent corrections that allow for more sensitivity may be beneficial but also result in more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in Statistical Parametric Mapping 12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxelwise thresholding with familywise error correction based on Random Field Theory provides a more precise overlap (i.e., without omitting too many regions or encompassing too many additional regions) than either clusterwise thresholding, Bonferroni correction, or false discovery rate correction methods.
Shen, Xiaomeng; Hu, Qiang; Li, Jun; Wang, Jianmin; Qu, Jun
2015-10-02
Comprehensive and accurate evaluation of data quality and false-positive biomarker discovery is critical to direct method development/optimization for quantitative proteomics, which nonetheless remains challenging largely due to the high complexity and unique features of proteomic data. Here we describe an experimental null (EN) method to address this need. Because the method experimentally measures the null distribution (either technical or biological replicates) using the same proteomic samples, the same procedures and the same batch as the case-vs-control experiment, it correctly reflects the collective effects of technical variability (e.g., variation/bias in sample preparation, LC-MS analysis, and data processing) and project-specific features (e.g., characteristics of the proteome and biological variation) on the performances of quantitative analysis. As a proof of concept, we employed the EN method to assess the quantitative accuracy and precision and the ability to quantify subtle ratio changes between groups using different experimental and data-processing approaches and in various cellular and tissue proteomes. It was found that choices of quantitative features, sample size, experimental design, data-processing strategies, and quality of chromatographic separation can profoundly affect quantitative precision and accuracy of label-free quantification. The EN method was also demonstrated as a practical tool to determine the optimal experimental parameters and rational ratio cutoff for reliable protein quantification in specific proteomic experiments, for example, to identify the necessary number of technical/biological replicates per group that affords sufficient power for discovery. Furthermore, we assessed the ability of the EN method to estimate levels of false positives in the discovery of altered proteins, using two concocted sample sets mimicking proteomic profiling with technical and biological replicates, respectively, where the true positives/negatives are known and span a wide concentration range. It was observed that the EN method correctly reflects the null distribution in a proteomic system and accurately measures the false altered-protein discovery rate (FADR). In summary, the EN method provides a straightforward, practical, and accurate alternative to statistics-based approaches for the development and evaluation of proteomic experiments and can be universally adapted to various types of quantitative techniques.
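The experimental-null idea, measuring how many "altered proteins" are called when both groups come from the same sample, can be caricatured with a short simulation (the fold-change and p-value cutoffs, noise level, and replicate numbers are arbitrary assumptions, not the authors' pipeline):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_proteins, n_rep = 3000, 3
true_log2 = rng.normal(5, 2, size=n_proteins)             # underlying log2 protein abundances

def measure(noise_sd=0.25):
    """Simulate one group of technical replicates of the same sample."""
    return true_log2[:, None] + rng.normal(0, noise_sd, size=(n_proteins, n_rep))

group1, group2 = measure(), measure()                     # both "groups" come from the same lysate
_, pvals = stats.ttest_ind(group1, group2, axis=1)
log2_ratio = group1.mean(axis=1) - group2.mean(axis=1)

called = (pvals < 0.05) & (np.abs(log2_ratio) > 0.3)      # arbitrary "altered protein" criteria
print(f"falsely 'altered' proteins: {called.sum()} of {n_proteins} "
      f"({100 * called.sum() / n_proteins:.2f}%)")
```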
DNA repair variants and breast cancer risk.
Grundy, Anne; Richardson, Harriet; Schuetz, Johanna M; Burstyn, Igor; Spinelli, John J; Brooks-Wilson, Angela; Aronson, Kristan J
2016-05-01
A functional DNA repair system has been identified as important in the prevention of tumour development. Previous studies have hypothesized that common polymorphisms in DNA repair genes could play a role in breast cancer risk and also identified the potential for interactions between these polymorphisms and established breast cancer risk factors such as physical activity. Associations with breast cancer risk for 99 single nucleotide polymorphisms (SNPs) from genes in ten DNA repair pathways were examined in a case-control study including both Europeans (644 cases, 809 controls) and East Asians (299 cases, 160 controls). Odds ratios in both additive and dominant genetic models were calculated separately for participants of European and East Asian ancestry using multivariate logistic regression. The impact of multiple comparisons was assessed by correcting for the false discovery rate within each DNA repair pathway. Interactions between several breast cancer risk factors and DNA repair SNPs were also evaluated. One SNP (rs3213282) in the gene XRCC1 was associated with an increased risk of breast cancer in the dominant model of inheritance following adjustment for the false discovery rate (P < 0.05), although no associations were observed for other DNA repair SNPs. Interactions of six SNPs in multiple DNA repair pathways with physical activity were evident prior to correction for FDR, following which there was support for only one of the interaction terms (P < 0.05). No consistent associations between variants in DNA repair genes and breast cancer risk or their modification by breast cancer risk factors were observed. © 2016 Wiley Periodicals, Inc.
PeV IceCube signals and Dark Matter relic abundance in modified cosmologies
NASA Astrophysics Data System (ADS)
Lambiase, G.; Mohanty, S.; Stabile, An.
2018-04-01
The discovery by the IceCube experiment of a high-energy astrophysical neutrino flux with energies of the order of PeV has opened new scenarios in astroparticle physics. A possibility to explain this phenomenon is to consider minimal models of Dark Matter (DM) decay via the four-dimensional operator $y_{\alpha\chi}\,\overline{L_{L_\alpha}}H\chi$, which is also able to generate the correct abundance of DM in the Universe. Assuming that the cosmological background evolves according to the standard cosmological model, it follows that the DM decay rate $\Gamma_\chi \sim |y_{\alpha\chi}|^2$ needed to obtain the correct DM relic abundance ($\Gamma_\chi \sim 10^{-58}$) differs by many orders of magnitude from the one needed to explain the IceCube data ($\Gamma_\chi \sim 10^{-25}$), making the four-dimensional operator unsuitable. In this paper we show that, assuming that the early Universe evolution is governed by a modified cosmology, the discrepancy between the two DM decay rates can be reconciled, and both the IceCube neutrino rate and the relic density can be explained in a minimal model.
Crisis or self-correction: Rethinking media narratives about the well-being of science
Jamieson, Kathleen Hall
2018-01-01
After documenting the existence and exploring some implications of three alternative news narratives about science and its challenges, this essay outlines ways in which those who communicate science can more accurately convey its investigatory process, self-correcting norms, and remedial actions, without in the process legitimizing an unwarranted “science is broken/in crisis” narrative. The three storylines are: (i) quest discovery, which features scientists producing knowledge through an honorable journey; (ii) counterfeit quest discovery, which centers on an individual or group of scientists producing a spurious finding through a dishonorable one; and (iii) a systemic problem structure, which suggests that some of the practices that protect science are broken, or worse, that science is no longer self-correcting or in crisis. PMID:29531076
Camilleri, Michael; Carlson, Paula; Zinsmeister, Alan R.; McKinzie, Sanna; Busciglio, Irene; Burton, Duane; Zucchelli, Marco; D’Amato, Mauro
2009-01-01
Background & Aims: NPSR1, the receptor for neuropeptide S (NPS), is expressed by gastrointestinal (GI) enteroendocrine (EE) cells, and is involved in inflammation, anxiety and nociception. NPSR1 polymorphisms are associated with asthma and inflammatory bowel disease. We aimed to determine whether NPS induces expression of GI neuropeptides, and to associate NPSR1 single nucleotide polymorphisms (SNPs) with symptom phenotype and GI functions in health and functional GI disorders (FGID). Methods: The effect of NPS on mRNA expression of neuropeptides was assessed using real-time PCR in NPSR1-transfected HEK293 cells. Seventeen NPSR1 SNPs were successfully genotyped in 699 subjects from a regional cohort of 466 FGID patients and 233 healthy controls. Associations were sought using sex-adjusted regression analysis and false discovery rate (FDR) correction. Results: NPS-NPSR1 signaling induced increased expression of CCK, VIP, PYY, and somatostatin. There were no significant associations with phenotypes of FGID symptoms. There were several NPSR1 SNPs associated with individual motor or sensory functions; the associations of SNPs rs2609234, rs6972158 and rs1379928 with colonic transit rate remained significant after FDR correction. The rs1379928 polymorphism was also associated with pain, gas and urgency sensory ratings at 36 mm Hg distension, the level pre-specified for formal testing. Associations with rectal sensory ratings were not significant after FDR correction. Conclusions: Expression of several neuropeptides is induced upon NPS-NPSR1 signaling; NPSR1 variants are associated with colonic transit in FGID. The role of the NPS system in FGID deserves further study. PMID:19732772
Stanley, Jeffrey R.; Adkins, Joshua N.; Slysz, Gordon W.; Monroe, Matthew E.; Purvine, Samuel O.; Karpievitch, Yuliya V.; Anderson, Gordon A.; Smith, Richard D.; Dabney, Alan R.
2011-01-01
Current algorithms for quantifying peptide identification confidence in the accurate mass and time (AMT) tag approach assume that the AMT tags themselves have been correctly identified. However, there is uncertainty in the identification of AMT tags, as this is based on matching LC-MS/MS fragmentation spectra to peptide sequences. In this paper, we incorporate confidence measures for the AMT tag identifications into the calculation of probabilities for correct matches to an AMT tag database, resulting in a more accurate overall measure of identification confidence for the AMT tag approach. The method is referred to as Statistical Tools for AMT tag Confidence (STAC). STAC additionally provides a Uniqueness Probability (UP) to help distinguish between multiple matches to an AMT tag and a method to calculate an overall false discovery rate (FDR). STAC is freely available for download as both a command line and a Windows graphical application. PMID:21692516
New field discovery rates in lower 48 states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, T.J.; Hugman, R.; Vidas, H.
1989-03-01
Through 1982, AAPG reported new field discovery rates. In 1985, a paper demonstrated that through 1975 the AAPG survey of new field discoveries had significantly underreported the larger new field discoveries. This presentation updates the new field discovery data reported in that paper and extends the data through the mid-1980s. Regional details of the new field discoveries, including an explicit breakout of discoveries below 15,000 ft, are reported. The extent to which the observed relative stabilization in new field discoveries per wildcat reflects regional shifts in exploration activity is discussed. Finally, the rate of reserve growth reflected in the passage of particular fields through the AAPG field size categories is discussed.
Quantitative trait Loci analysis using the false discovery rate.
Benjamini, Yoav; Yekutieli, Daniel
2005-10-01
False discovery rate control has become an essential tool in any study that has a very large multiplicity problem. False discovery rate-controlling procedures have also been found to be very effective in QTL analysis, ensuring reproducible results with few falsely discovered linkages and offering increased power to discover QTL, although their acceptance has been slower than in microarray analysis, for example. The reason is partly because the methodological aspects of applying the false discovery rate to QTL mapping are not well developed. Our aim in this work is to lay a solid foundation for the use of the false discovery rate in QTL mapping. We review the false discovery rate criterion, the appropriate interpretation of the FDR, and alternative formulations of the FDR that appeared in the statistical and genetics literature. We discuss important features of the FDR approach, some stemming from new developments in FDR theory and methodology, which deem it especially useful in linkage analysis. We review false discovery rate-controlling procedures (the BH procedure, the resampling procedure, and the adaptive two-stage procedure) and discuss the validity of these procedures in single- and multiple-trait QTL mapping. Finally we argue that the control of the false discovery rate has an important role in suggesting, indicating the significance of, and confirming QTL and present guidelines for its use.
Bender, Andreas; Scheiber, Josef; Glick, Meir; Davies, John W; Azzaoui, Kamal; Hamon, Jacques; Urban, Laszlo; Whitebread, Steven; Jenkins, Jeremy L
2007-06-01
Preclinical Safety Pharmacology (PSP) attempts to anticipate adverse drug reactions (ADRs) during early phases of drug discovery by testing compounds in simple, in vitro binding assays (that is, preclinical profiling). The selection of PSP targets is based largely on circumstantial evidence of their contribution to known clinical ADRs, inferred from findings in clinical trials, animal experiments, and molecular studies going back more than forty years. In this work we explore PSP chemical space and its relevance for the prediction of adverse drug reactions. Firstly, in silico (computational) Bayesian models for 70 PSP-related targets were built, which are able to detect 93% of the ligands binding at IC50 ≤ 10 μM at an overall correct classification rate of about 94%. Secondly, employing the World Drug Index (WDI), a model for adverse drug reactions was built directly based on normalized side-effect annotations in the WDI, which does not require any underlying functional knowledge. This is, to our knowledge, the first attempt to predict adverse drug reactions across hundreds of categories from chemical structure alone. On average 90% of the adverse drug reactions observed with known, clinically used compounds were detected, an overall correct classification rate of 92%. Drugs withdrawn from the market (Rapacuronium, Suprofen) were tested in the model and their predicted ADRs align well with known ADRs. The analysis was repeated for acetylsalicylic acid and Benperidol, which are still on the market. Importantly, features of the models are interpretable and back-projectable to chemical structure, raising the possibility of rationally engineering out adverse effects. By combining PSP and ADR models, new hypotheses linking targets and adverse effects can be proposed, and examples for the opioid mu and the muscarinic M2 receptors, as well as for cyclooxygenase-1, are presented. It is hoped that the generation of predictive models for adverse drug reactions is able to help support early SAR to accelerate drug discovery and decrease late stage attrition in drug discovery projects. In addition, models such as the ones presented here can be used for compound profiling in all development stages.
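A rough sketch of a Bayesian (naive Bayes) target model built on circular fingerprints, in the spirit of the in silico models described above (it assumes RDKit and scikit-learn are available; the training compounds, labels, and fingerprint settings are invented, and the original work's exact descriptors and Bayesian implementation may differ):

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.naive_bayes import BernoulliNB

def fingerprint(smiles, n_bits=1024):
    """Morgan (ECFP-like) bit fingerprint as a numpy 0/1 vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return np.array(fp, dtype=np.int8)

# Tiny, made-up training set: 1 = binds the (hypothetical) target, 0 = does not
training = [("CCO", 0), ("CC(=O)Oc1ccccc1C(=O)O", 1),
            ("c1ccccc1", 0), ("CC(=O)Nc1ccc(O)cc1", 1)]
X = np.array([fingerprint(smi) for smi, _ in training])
y = np.array([label for _, label in training])

model = BernoulliNB().fit(X, y)
query = fingerprint("CC(=O)Oc1ccc(O)cc1")                 # hypothetical query compound
print("predicted P(binds target):", round(model.predict_proba(query[None, :])[0, 1], 2))
```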
Quantum Error Correction with Biased Noise
NASA Astrophysics Data System (ADS)
Brooks, Peter
Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled states which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and how quickly states converge to that limit.
Jones, Andrew R.; Siepen, Jennifer A.; Hubbard, Simon J.; Paton, Norman W.
2010-01-01
Tandem mass spectrometry, run in combination with liquid chromatography (LC-MS/MS), can generate large numbers of peptide and protein identifications, for which a variety of database search engines are available. Distinguishing correct identifications from false positives is far from trivial because all data sets are noisy and tend to be too large for manual inspection; therefore, probabilistic methods must be employed to balance the trade-off between sensitivity and specificity. Decoy databases are becoming widely used to place statistical confidence in results sets, allowing the false discovery rate (FDR) to be estimated. It has previously been demonstrated that different MS search engines produce different peptide identification sets, and as such, employing more than one search engine could result in an increased number of peptides being identified. However, such efforts are hindered by the lack of a single scoring framework employed by all search engines. We have developed a search engine independent scoring framework based on FDR which allows peptide identifications from different search engines to be combined, called the FDRScore. We observe that peptide identifications made by three search engines are infrequently false positives, and identifications made by only a single search engine, even with a strong score from the source search engine, are significantly more likely to be false positives. We have developed a second score based on the FDR within peptide identifications grouped according to the set of search engines that have made the identification, called the combined FDRScore. We demonstrate by searching large publicly available data sets that the combined FDRScore can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than using a single search engine. PMID:19253293
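The decoy-database FDR estimate that underlies scores of this kind is commonly taken as the ratio of decoy to target hits above a score threshold. The sketch below illustrates that standard target-decoy approximation on toy numbers; it is not the authors' FDRScore code, and the score values are invented.

```python
import numpy as np

def decoy_fdr(target_scores, decoy_scores, threshold):
    """Estimate FDR at a score threshold as (#decoy hits) / (#target hits),
    the usual target-decoy approximation."""
    t = np.sum(np.asarray(target_scores) >= threshold)
    d = np.sum(np.asarray(decoy_scores) >= threshold)
    return d / t if t else 0.0

# Toy example: decoy scores tend to be lower than target scores
targets = [55, 48, 43, 41, 38, 33, 30, 28, 25, 22]
decoys  = [31, 27, 24, 20, 18, 16, 15, 12, 10, 9]
print(decoy_fdr(targets, decoys, threshold=30))  # 1 decoy / 7 targets ≈ 0.14
```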
76 FR 36320 - Rules of Practice in Proceedings Relative to False Representation and Lottery Orders
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-22
... officers. 952.18 Evidence. 952.19 Subpoenas. 952.20 Witness fees. 952.21 Discovery. 952.22 Transcript. 952..., motions, proposed orders, and other documents for the record. Discovery need not be filed except as may be... witnesses, that the statement correctly states the witness's opinion or knowledge concerning the matters in...
Buschmann, Tilo; Zhang, Rong; Brash, Douglas E; Bystrykh, Leonid V
2014-08-07
DNA barcodes are short unique sequences used to label DNA or RNA-derived samples in multiplexed deep sequencing experiments. During the demultiplexing step, barcodes must be detected and their position identified. In some cases (e.g., with PacBio SMRT), the position of the barcode and DNA context is not well defined. Many reads start inside the genomic insert so that adjacent primers might be missed. The matter is further complicated by coincidental similarities between barcode sequences and reference DNA. Therefore, a robust strategy is required in order to detect barcoded reads and avoid a large number of false positives or negatives. For mass inference problems such as this one, false discovery rate (FDR) methods are powerful and balanced solutions. Since existing FDR methods cannot be applied to this particular problem, we present an adapted FDR method that is suitable for the detection of barcoded reads, as well as suggest possible improvements. In our analysis, barcode sequences showed high rates of coincidental similarities with the Mus musculus reference DNA. This problem became more acute when the length of the barcode sequence decreased and the number of barcodes in the set increased. The method presented in this paper controls the tail area-based false discovery rate to distinguish between barcoded and unbarcoded reads. This method helps to establish the highest acceptable minimal distance between reads and barcode sequences. In a proof-of-concept experiment we correctly detected barcodes in 83% of the reads with a precision of 89%. Sensitivity improved to 99% at 99% precision when the adjacent primer sequence was incorporated in the analysis. The analysis was further improved using a paired end strategy. Following an analysis of the data for sequence variants induced in the Atp1a1 gene of C57BL/6 murine melanocytes by ultraviolet light and conferring resistance to ouabain, we found no evidence of cross-contamination of DNA material between samples. Our method offers a proper quantitative treatment of the problem of detecting barcoded reads in a noisy sequencing environment. It is based on the false discovery rate statistics that allows a proper trade-off between sensitivity and precision to be chosen.
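The core matching step described above amounts to computing the distance from each read prefix to every barcode and accepting the best match only if it clears a minimal-distance cutoff. The sketch below uses a plain Levenshtein distance and an arbitrary illustrative cutoff; it is not the authors' tail area-based FDR calculation, which is what actually determines the acceptable cutoff, and the read and barcode strings are made up.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def assign_barcode(read, barcodes, max_dist=2):
    """Return the closest barcode if it is within max_dist edits of the
    read prefix, otherwise None (read treated as unbarcoded)."""
    best = min(barcodes, key=lambda bc: levenshtein(read[:len(bc)], bc))
    return best if levenshtein(read[:len(best)], best) <= max_dist else None

print(assign_barcode("ACGTTAGGCCTTAGA", ["ACGTTAGC", "TTGACCAT"]))  # ACGTTAGC
```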
Bias correction factors for near-Earth asteroids
NASA Technical Reports Server (NTRS)
Benedix, Gretchen K.; Mcfadden, Lucy Ann; Morrow, Esther M.; Fomenkova, Marina N.
1992-01-01
Knowledge of the population size and physical characteristics (albedo, size, and rotation rate) of near-Earth asteroids (NEA's) is biased by observational selection effects which are functions of the population's intrinsic properties and the size of the telescope, detector sensitivity, and search strategy used. The NEA population is modeled in terms of orbital and physical elements: a, e, i, omega, Omega, M, albedo, and diameter, and an asteroid search program is simulated using actual telescope pointings of right ascension, declination, date, and time. The position of each object in the model population is calculated at the date and time of each telescope pointing. The program tests to see if that object is within the field of view (FOV = 8.75 degrees) of the telescope and above the limiting magnitude (V = +16.5) of the film. The effect of the starting population on the outcome of the simulation's discoveries is compared to the actual discoveries in order to define a most probable starting population.
The role of serendipity in drug discovery
Ban, Thomas A.
2006-01-01
Serendipity is one of the many factors that may contribute to drug discovery. It has played a role in the discovery of prototype psychotropic drugs that led to modern pharmacological treatment in psychiatry. It has also played a role in the discovery of several drugs that have had an impact on the development of psychiatry. "Serendipity" in drug discovery implies the finding of one thing while looking for something else. This was the case in six of the twelve serendipitous discoveries reviewed in this paper, i.e., aniline purple, penicillin, lysergic acid diethylamide, meprobamate, chlorpromazine, and imipramine; in the case of three drugs, i.e., potassium bromide, chloral hydrate, and lithium, the discovery was serendipitous because an utterly false rationale led to correct empirical results; and in the case of two others, i.e., iproniazid and sildenafil, because valuable indications were found for these drugs that were not initially those sought. The discovery of one of the twelve drugs, chlordiazepoxide, was sheer luck. PMID:17117615
Nugent, Scott; Castellano, Christian-Alexandre; Goffaux, Philippe; Whittingstall, Kevin; Lepage, Martin; Paquet, Nancy; Bocti, Christian; Fulop, Tamas; Cunnane, Stephen C
2014-06-01
Several studies have suggested that glucose hypometabolism may be present in specific brain regions in cognitively normal older adults and could contribute to the risk of subsequent cognitive decline. However, certain methodological shortcomings, including a lack of partial volume effect (PVE) correction or insufficient cognitive testing, confound the interpretation of most studies on this topic. We combined [(18)F]fluorodeoxyglucose ([(18)F]FDG) positron emission tomography (PET) and magnetic resonance (MR) imaging to quantify cerebral metabolic rate of glucose (CMRg) as well as cortical volume and thickness in 43 anatomically defined brain regions from a group of cognitively normal younger (25 ± 3 yr old; n = 25) and older adults (71 ± 9 yr old; n = 31). After correcting for PVE, we observed 11-17% lower CMRg in three specific brain regions of the older group: the superior frontal cortex, the caudal middle frontal cortex, and the caudate (P ≤ 0.01 false discovery rate-corrected). In the older group, cortical volumes and cortical thickness were 13-33 and 7-18% lower, respectively, in multiple brain regions (P ≤ 0.01 FDR correction). There were no differences in CMRg between individuals who were or were not prescribed antihypertensive medication. There were no significant correlations between CMRg and cognitive performance or metabolic parameters measured in fasting plasma. We conclude that highly localized glucose hypometabolism and widespread cortical thinning and atrophy can be present in older adults who are cognitively normal, as assessed using age-normed neuropsychological testing measures. Copyright © 2014 the American Physiological Society.
C-band radar pulse Doppler error: Its discovery, modeling, and elimination
NASA Technical Reports Server (NTRS)
Krabill, W. B.; Dempsey, D. J.
1978-01-01
The discovery of a C Band radar pulse Doppler error is discussed and use of the GEOS 3 satellite's coherent transponder to isolate the error source is described. An analysis of the pulse Doppler tracking loop is presented and a mathematical model for the error was developed. Error correction techniques were developed and are described including implementation details.
How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Hagen, George; Maddalon, Jeffrey M.; Munoz, Cesar A.; Narkawicz, Anthony; Dowek, Gilles
2010-01-01
In this paper we describe a process of algorithmic discovery that was driven by our goal of achieving complete, mechanically verified algorithms that compute conflict prevention bands for use in en route air traffic management. The algorithms were originally defined in the PVS specification language and subsequently have been implemented in Java and C++. We do not present the proofs in this paper: instead, we describe the process of discovery and the key ideas that enabled the final formal proof of correctness.
Avoiding false discoveries in association studies.
Sabatti, Chiara
2007-01-01
We consider the problem of controlling false discoveries in association studies. We assume that the design of the study is adequate, so that "false discoveries" are potentially due only to random chance, not to confounding or other flaws. Under this premise, we review the statistical framework for hypothesis testing and correction for multiple comparisons. We consider in detail the currently accepted strategies in linkage analysis. We then examine the underlying similarities and differences between linkage and association studies and document some of the most recent methodological developments for association mapping.
Performance comparison of two commercial BGO-based PET/CT scanners using NEMA NU 2-2001.
Bolard, Grégory; Prior, John O; Modolo, Luca; Delaloye, Angelika Bischof; Kosinski, Marek; Wastiel, Claude; Malterre, Jérôme; Bulling, Shelley; Bochud, François; Verdun, Francis R
2007-07-01
Combined positron emission tomography and computed tomography (PET/CT) scanners play a major role in medicine for in vivo imaging in an increasing number of diseases in oncology, cardiology, neurology, and psychiatry. With the advent of short-lived radioisotopes other than 18F and newer scanners, there is a need to optimize radioisotope activity and acquisition protocols, as well as to compare scanner performances on an objective basis. The Discovery-LS (D-LS) was among the first clinical PET/CT scanners to be developed and has been extensively characterized with the older National Electrical Manufacturers Association (NEMA) NU 2-1994 standards. At the time of publication of the latest version of the standards (NU 2-2001), which have been adapted for whole-body imaging under clinical conditions, more recent models from the same manufacturer, i.e., Discovery-ST (D-ST) and Discovery-STE (D-STE), were commercially available. We report on the full characterization in both the two- and three-dimensional acquisition modes of the D-LS according to the latest NEMA NU 2-2001 standards (spatial resolution, sensitivity, count rate performance, accuracy of count losses, and random coincidence correction and image quality), as well as a detailed comparison with the newer D-ST, which is widely used and whose characteristics are already published.
Future of fundamental discovery in US biomedical research
Levitt, Michael; Levitt, Jonathan M.
2017-01-01
Young researchers are crucially important for basic science as they make unexpected, fundamental discoveries. Since 1982, we find a steady drop in the number of grant-eligible basic-science faculty [principal investigators (PIs)] younger than 46. This fall occurred over a 32-y period when inflation-corrected congressional funds for NIH almost tripled. During this time, the PI success ratio (fraction of basic-science PIs who are R01 grantees) dropped for younger PIs (below 46) and increased for older PIs (above 55). This age-related bias seems to have caused the steady drop in the number of young basic-science PIs and could reduce future US discoveries in fundamental biomedical science. The NIH recognized this bias in its 2008 early-stage investigator (ESI) policy to fund young PIs at higher rates. We show this policy is working and recommend that it be enhanced by using better data. Together with the National Institute of General Medical Sciences (NIGMS) Maximizing Investigators’ Research Award (MIRA) program to reward senior PIs with research time in exchange for less funding, this may reverse a decades-long trend of more money going to older PIs. To prepare young scientists for increased demand, additional resources should be devoted to transitional postdoctoral fellowships already offered by NIH. PMID:28584129
Jones, Andrew R; Siepen, Jennifer A; Hubbard, Simon J; Paton, Norman W
2009-03-01
LC-MS experiments can generate large quantities of data, for which a variety of database search engines are available to make peptide and protein identifications. Decoy databases are becoming widely used to place statistical confidence in result sets, allowing the false discovery rate (FDR) to be estimated. Different search engines produce different identification sets so employing more than one search engine could result in an increased number of peptides (and proteins) being identified, if an appropriate mechanism for combining data can be defined. We have developed a search engine independent score, based on FDR, which allows peptide identifications from different search engines to be combined, called the FDR Score. The results demonstrate that the observed FDR is significantly different when analysing the set of identifications made by all three search engines, by each pair of search engines or by a single search engine. Our algorithm assigns identifications to groups according to the set of search engines that have made the identification, and re-assigns the score (combined FDR Score). The combined FDR Score can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than using a single search engine.
Teaching APA Style Documentation: Discovery Learning, Scaffolding and Procedural Knowledge
ERIC Educational Resources Information Center
Skeen, Thomas; Zafonte, Maria
2015-01-01
Students struggle with learning correct documentation style as found in the Publication Manual of the American Psychological Association and teachers are often at a loss for how to best instruct students in correct usage of APA style. As such, the first part of this paper discusses the current research on teaching documentation styles as well as…
The petroleum exponential (again)
NASA Astrophysics Data System (ADS)
Bell, Peter M.
The U.S. production and reserves of liquid and gaseous petroleum have declined since 1960, at least in the lower 48 states. This decline stems from decreased discovery rates, as predicted by M. King Hubbert in the mid-1950's. Hubbert's once unpopular views were based on statistical analysis of the production history of the petroleum industry, and now, even with inclusion of the statistical perturbation caused by the Prudhoe Bay-North Alaskan Slope discovery (the largest oil field ever found in the United States), it seems clear again that production is following the exponential curve to depletion of the resource, to the end of the ultimate yield of petroleum from wells in the United States. In a recent report, C. Hall and C. Cleveland of Cornell University show that large atypical discoveries, such as the Prudhoe Bay find, are but minor influences on what now appears to be the crucial intersection of two exponentials [Science, 211, 576-579, 1981]: one, the production-per-drilled-foot curve of Hubbert, which crosses zero production no later than the year 2005; the other, a curve of the energy cost of drilling and extraction over time, that is, how much oil is used to drill for and extract oil from the ground. The intersection, if no other discoveries the size of the Prudhoe Bay field are made, could be as early as 1990, the end of the present decade. The inclusion of each Prudhoe-Bay-size find extends the year of intersection by only about 6 years. Beyond that point, more than one barrel of petroleum would be expended for each barrel extracted from the ground. The oil exploration-extraction and refining industry is currently the second most energy-intensive industry in the U.S., and the message seems clear. Either more efficient drilling and production techniques are discovered, or domestic production will cease well before the end of this century if the Hubbert analysis modified by Hall and Cleveland is correct.
Search strategy has influenced the discovery rate of human viruses.
Rosenberg, Ronald; Johansson, Michael A; Powers, Ann M; Miller, Barry R
2013-08-20
A widely held concern is that the pace of infectious disease emergence has been increasing. We have analyzed the rate of discovery of pathogenic viruses, the preeminent source of newly discovered causes of human disease, from 1897 through 2010. The rate was highest during 1950-1969, after which it moderated. This general picture masks two distinct trends: for arthropod-borne viruses, which comprised 39% of pathogenic viruses, the discovery rate peaked at three per year during 1960-1969, but subsequently fell nearly to zero by 1980; however, the rate of discovery of nonarboviruses remained stable at about two per year from 1950 through 2010. The period of highest arbovirus discovery coincided with a comprehensive program supported by The Rockefeller Foundation of isolating viruses from humans, animals, and arthropod vectors at field stations in Latin America, Africa, and India. The productivity of this strategy illustrates the importance of location, approach, long-term commitment, and sponsorship in the discovery of emerging pathogens.
View of Mission Control Center during the Apollo 13 oxygen cell failure
NASA Technical Reports Server (NTRS)
1970-01-01
Mrs. Mary Haise receives an explanation of the revised flight plan of the Apollo 13 mission from Astronaut Gerald P. Carr in the Viewing Room of Mission Control Center, bldg 30, Manned Spacecraft Center (MSC). Her husband, Astronaut Fred W. Haise Jr., was joining his fellow crew members in making corrections in their spacecraft following discovery of an oxygen cell failure several hours earlier (34900); Dr. Charles A. Berry, Director of Medical Research and Operations Directorate at MSC, converses with Mrs. Marilyn Lovell in the Viewing Room of Mission Control Center. Mrs. Lovell's husband, Astronaut James A. Lovell Jr., was busily making corrections inside the spacecraft following discovery of an oxygen cell failure several hours earlier (34901).
Induced Pluripotent Stem Cells in Dermatology: Potentials, Advances, and Limitations
Bilousova, Ganna; Roop, Dennis R.
2014-01-01
The discovery of methods for reprogramming adult somatic cells into induced pluripotent stem cells (iPSCs) has raised the possibility of producing truly personalized treatment options for numerous diseases. Similar to embryonic stem cells (ESCs), iPSCs can give rise to any cell type in the body and are amenable to genetic correction by homologous recombination. These ESC properties of iPSCs allow for the development of permanent corrective therapies for many currently incurable disorders, including inherited skin diseases, without using embryonic tissues or oocytes. Here, we review recent progress and limitations of iPSC research with a focus on clinical applications of iPSCs and using iPSCs to model human diseases for drug discovery in the field of dermatology. PMID:25368014
Protein Complex Production from the Drug Discovery Standpoint.
Moarefi, Ismail
2016-01-01
Small molecule drug discovery critically depends on the availability of meaningful in vitro assays to guide medicinal chemistry programs that are aimed at optimizing drug potency and selectivity. As is becoming increasingly evident, most disease-relevant drug targets do not act as single proteins. In the body, they are instead generally found in complex with protein cofactors that are highly relevant for their correct function and regulation. This review highlights selected examples of the increasing trend to use biologically relevant protein complexes for rational drug discovery to reduce costly late-phase attritions due to lack of efficacy or toxicity.
Discovery of a Bow Shock around VELA X-1: Erratum
NASA Astrophysics Data System (ADS)
Kaper, L.; van Loon, J. Th.; Augusteijn, T.; Goudfrooij, P.; Patat, F.; Waters, L. B. F. M.; Zijlstra, A. A.
1997-04-01
In the Letter Discovery of a Bow Shock around Vela X-1 by L. Kaper, J. Th. van Loon, T. Augusteijn, P. Goudfrooij, F. Patat, L. B. F. M. Waters, and A. A. Zijlstra (ApJ, 475, L37 [1997]), Figure 1 (Plate L7) was printed without its axis labels, as the result of a printer's error. The corrected figure appears in this issue as Plate L12.
NASA Astrophysics Data System (ADS)
Miller, Arthur I.; Shanklin, Jon
2015-09-01
In your otherwise informative article on “Curing the Curie complex” (Features, August pp25-28), you wrote “Albert Einstein was awarded a Nobel prize for his discovery of the photoelectric effect”.
Mantini, Dante; Petrucci, Francesca; Del Boccio, Piero; Pieragostino, Damiana; Di Nicola, Marta; Lugaresi, Alessandra; Federici, Giorgio; Sacchetta, Paolo; Di Ilio, Carmine; Urbani, Andrea
2008-01-01
Independent component analysis (ICA) is a signal processing technique that can be utilized to recover independent signals from a set of their linear mixtures. We propose ICA for the analysis of signals obtained from large proteomics investigations such as clinical multi-subject studies based on MALDI-TOF MS profiling. The method is validated on simulated and experimental data, demonstrating its capability of correctly extracting protein profiles from MALDI-TOF mass spectra. A comparison of peak detection against one open-source and two commercial methods shows its superior reliability in reducing the false discovery rate of protein peak masses. Moreover, the integration of ICA and statistical tests for detecting differences in peak intensities between experimental groups makes it possible to identify protein peaks that could be indicators of a diseased state. This data-driven approach is shown to be a promising tool for biomarker-discovery studies based on MALDI-TOF MS technology. The MATLAB implementation of the method described in the article and both simulated and experimental data are freely available at http://www.unich.it/proteomica/bioinf/.
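As a minimal illustration of the ICA unmixing step (using scikit-learn's FastICA rather than the authors' MATLAB implementation), one can recover synthetic spectral profiles that were combined linearly; the peak shapes, mixing matrix, and noise level below are invented for the example.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
mz = np.linspace(1000, 10000, 2000)

def peak(center, width=30):
    """Gaussian peak on the m/z axis, a toy stand-in for a protein signal."""
    return np.exp(-((mz - center) ** 2) / (2 * width ** 2))

# Two synthetic "protein profiles" (independent sources)
sources = np.vstack([peak(2500) + peak(6400), peak(3300) + peak(8100)])
# Observed spectra are linear mixtures of the sources plus noise
mixing = np.array([[0.8, 0.2], [0.4, 0.6], [0.3, 0.7]])
spectra = mixing @ sources + 0.01 * rng.standard_normal((3, mz.size))

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(spectra.T).T  # rows are estimated source profiles
print(recovered.shape)  # (2, 2000)
```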
Genome-Wide Association Study of a Varroa-Specific Defense Behavior in Honeybees (Apis mellifera)
Spötter, Andreas; Gupta, Pooja; Mayer, Manfred; Reinsch, Norbert
2016-01-01
Honey bees are exposed to many damaging pathogens and parasites. The most devastating is Varroa destructor, which mainly affects the brood. A promising approach for preventing its spread is to breed Varroa-resistant honey bees. One trait that has been shown to provide significant resistance against the Varroa mite is hygienic behavior, which is a behavioral response of honeybee workers to brood diseases in general. Here, we report the use of an Affymetrix 44K SNP array to analyze SNPs associated with detection and uncapping of Varroa-parasitized brood by individual worker bees (Apis mellifera). For this study, 22 000 individually labeled bees were video-monitored and a sample of 122 cases and 122 controls was collected and analyzed to determine the dependence/independence of SNP genotypes from hygienic and nonhygienic behavior on a genome-wide scale. After false-discovery rate correction of the P values, 6 SNP markers had highly significant associations with the trait investigated (α < 0.01). Inspection of the genomic regions around these SNPs led to the discovery of putative candidate genes. PMID:26774061
Semantic Entity Pairing for Improved Data Validation and Discovery
NASA Astrophysics Data System (ADS)
Shepherd, Adam; Chandler, Cyndy; Arko, Robert; Chen, Yanning; Krisnadhi, Adila; Hitzler, Pascal; Narock, Tom; Groman, Robert; Rauch, Shannon
2014-05-01
One of the central incentives for linked data implementations is the opportunity to leverage the rich logic inherent in structured data. The logic embedded in semantic models can strengthen capabilities for data discovery and data validation when pairing entities from distinct, contextually-related datasets. The creation of links between the two datasets broadens data discovery by using the semantic logic to help machines compare similar entities and properties that exist on different levels of granularity. This semantic capability enables appropriate entity pairing without making inaccurate assertions as to the nature of the relationship. Entity pairing also provides a context to accurately validate the correctness of an entity's property values - an exercise highly valued by data management practitioners who seek to ensure the quality and correctness of their data. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) semantically models metadata surrounding oceanographic research cruises, but other sources outside of BCO-DMO exist that also model metadata about these same cruises. For BCO-DMO, the process of successfully pairing its entities to these sources begins by selecting sources that are decidedly trustworthy and authoritative for the modeled concepts. In this case, the Rolling Deck to Repository (R2R) program has a well-respected reputation among the oceanographic research community, presents a data context that is uniquely different and valuable, and semantically models its cruise metadata. Where BCO-DMO exposes the processed, analyzed data products generated by researchers, R2R exposes the raw shipboard data that was collected on the same research cruises. Interlinking these cruise entities expands data discovery capabilities but also allows for validating the contextual correctness of both BCO-DMO's and R2R's cruise metadata. Assessing the potential for a link between two datasets for a similar entity consists of aligning like properties and deciding on the appropriate semantic markup to describe the link. This highlights the desire for research organizations like BCO-DMO and R2R to ensure the complete accuracy of their exposed metadata, as it directly reflects on their reputations as successful and trustworthy sources of research data. Therefore, data validation reaches beyond simple syntax of property values into contextual correctness. As a human process, this is a time-intensive task that does not scale well for finite human and funding resources. Therefore, to assess contextual correctness across datasets at different levels of granularity, BCO-DMO is developing a system that employs semantic technologies to aid the human process by organizing potential links and calculating a confidence coefficient as to the correctness of the potential pairing based on the distance between certain entity property values. The system allows humans to quickly scan potential links and their confidence coefficients for asserting persistence and correcting and investigating misaligned entity property values.
Hecht, Elizabeth S.; Scholl, Elizabeth H.; Walker, S. Hunter; Taylor, Amber D.; Cliby, William A.; Motsinger-Reif, Alison A.; Muddiman, David C.
2016-01-01
An early-stage, population-wide biomarker for ovarian cancer (OVC) is essential to reverse its high mortality rate. Aberrant glycosylation by OVC has been reported, but studies have yet to identify an N-glycan with sufficiently high specificity. We curated a human biorepository of 82 case-control plasma samples, with 27%, 12%, 46%, and 15% falling across stages I–IV, respectively. For relative quantitation, glycans were analyzed by the individuality normalization when labeling with glycan hydrazide tags (INLIGHT) strategy for enhanced electrospray ionization, MS/MS analysis. Sixty-three glycan cancer burden ratios (GBRs), defined as the log10 ratio of the case-control extracted ion chromatogram abundances, were calculated above the limit of detection. The final GBR models, built using stepwise forward regression, included three significant terms: OVC stage, normalized mean GBR, and tag chemical purity; glycan class, fucosylation, or sialylation were not significant variables. After Bonferroni correction, seven N-glycans were identified as significant (p < 0.05), and after false discovery rate correction, an additional four glycans were determined to be significant (p < 0.05), with one borderline (p = 0.05). For all N-glycans, the vectors of the effects from stages II–IV were sequentially reversed, suggesting potential biological changes in OVC morphology or in host response. PMID:26347193
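The glycan cancer burden ratio defined above is simply a log10 case-to-control abundance ratio. One simple reading of that definition, assuming averaged extracted ion chromatogram abundances and entirely made-up numbers, looks like this:

```python
import numpy as np

# Hypothetical extracted ion chromatogram abundances for one N-glycan
case_xic = np.array([3.2e6, 2.9e6, 4.1e6])      # OVC cases
control_xic = np.array([1.1e6, 1.3e6, 0.9e6])   # controls

# GBR as the log10 ratio of case to control abundance
gbr = np.log10(case_xic.mean() / control_xic.mean())
print(f"GBR = {gbr:.2f}")  # positive values indicate higher abundance in cases
```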
Identification of differentially expressed genes and false discovery rate in microarray studies.
Gusnanto, Arief; Calza, Stefano; Pawitan, Yudi
2007-04-01
To highlight the development in microarray data analysis for the identification of differentially expressed genes, particularly via control of false discovery rate. The emergence of high-throughput technology such as microarrays raises two fundamental statistical issues: multiplicity and sensitivity. We focus on the biological problem of identifying differentially expressed genes. First, multiplicity arises due to testing tens of thousands of hypotheses, rendering the standard P value meaningless. Second, known optimal single-test procedures such as the t-test perform poorly in the context of highly multiple tests. The standard approach of dealing with multiplicity is too conservative in the microarray context. The false discovery rate concept is fast becoming the key statistical assessment tool replacing the P value. We review the false discovery rate approach and argue that it is more sensible for microarray data. We also discuss some methods to take into account additional information from the microarrays to improve the false discovery rate. There is growing consensus on how to analyse microarray data using the false discovery rate framework in place of the classical P value. Further research is needed on the preprocessing of the raw data, such as the normalization step and filtering, and on finding the most sensitive test procedure.
Gene-Specific Demethylation as Targeted Therapy in MDS
2016-07-01
methylation remain elusive. This proposal builds on our recent discovery of a novel class of RNAs, the DiRs or DNMT1-interacting RNAs, involved in...cell type-specific DNA methylation patterns. Based on these findings, we hypothesize that DNA methylation changes can be corrected by RNAs. We aim to...
Common variant rs356182 near SNCA defines a Parkinson's disease endophenotype.
Cooper, Christine A; Jain, Nimansha; Gallagher, Michael D; Weintraub, Daniel; Xie, Sharon X; Berlyand, Yosef; Espay, Alberto J; Quinn, Joseph; Edwards, Karen L; Montine, Thomas; Van Deerlin, Vivianna M; Trojanowski, John; Zabetian, Cyrus P; Chen-Plotkin, Alice S
2017-01-01
Parkinson's disease (PD) presents clinically with several motor subtypes that exhibit variable treatment response and prognosis. Here, we investigated genetic variants for their potential association with PD motor phenotype and progression. We screened 10 SNPs, previously associated with PD risk, for association with tremor-dominant (TD) versus postural-instability gait disorder (PIGD) motor subtypes. SNPs that correlated with the TD/PIGD ratio in a discovery cohort of 251 PD patients were then evaluated in a multi-site replication cohort of 559 PD patients. SNPs associated with motor phenotype in both cross-sectional cohorts were next evaluated for association with (1) rates of motor progression in a longitudinal subgroup of 230 PD patients and (2) brain alpha-synuclein (SNCA) expression in the GTEx (Genotype-Tissue Expression project) consortium database. Genotype at rs356182, near SNCA, correlated with the TD/PIGD ratio in both the discovery (Bonferroni-corrected P = 0.04) and replication cohorts (P = 0.02). The rs356182 GG genotype was associated with a more tremor-predominant phenotype and predicted a slower rate of motor progression (1-point difference in annual rate of UPDRS-III motor score change, P = 0.01). The rs356182 genotype was associated with SNCA expression in the cerebellum (P = 0.005). Our study demonstrates that the GG genotype at rs356182 provides molecular definition for a clinically important endophenotype associated with (1) more tremor-predominant motor phenomenology, (2) slower rates of motor progression, and (3) decreased brain expression of SNCA. Such molecularly defined endophenotyping in PD may benefit both clinical trial design and tailoring of clinical care as we enter the era of precision medicine.
Palomar Planet-Crossing Asteroid Survey (PCAS): Recent discovery rate
NASA Technical Reports Server (NTRS)
Helin, Eleanor F.
1992-01-01
The discovery rate of Near-Earth Asteroids (NEA's) has increased significantly in the last decade. As greater numbers of NEA's are discovered, worldwide interest has grown, leading to new programs. With the introduction of CCD telescopes throughout the world, an increase of 1-2 orders of magnitude in the discovery rate can be anticipated. Nevertheless, it will take several decades of dedicated searching to achieve 95 percent completeness, even for large objects.
The dendritic spine story: an intriguing process of discovery.
DeFelipe, Javier
2015-01-01
Dendritic spines are key components of a variety of microcircuits and they represent the majority of postsynaptic targets of glutamatergic axon terminals in the brain. The present article will focus on the discovery of dendritic spines, which was possible thanks to the application of the Golgi technique to the study of the nervous system, and will also explore the early interpretation of these elements. This discovery represents an interesting chapter in the history of neuroscience as it shows us that progress in the study of the structure of the nervous system is based not only on the emergence of new techniques but also on our ability to exploit the methods already available and correctly interpret their microscopic images.
Wilson, L E; Harlid, S; Xu, Z; Sandler, D P; Taylor, J A
2017-01-01
The relationship between obesity and chronic disease risk is well-established; the underlying biological mechanisms driving this risk increase may include obesity-related epigenetic modifications. To explore this hypothesis, we conducted a genome-wide analysis of DNA methylation and body mass index (BMI) using data from a subset of women in the Sister Study. The Sister Study is a cohort of 50 884 US women who had a sister with breast cancer but were free of breast cancer themselves at enrollment. Study participants completed examinations which included measurements of height and weight, and provided blood samples. Blood DNA methylation data generated with the Illumina Infinium HumanMethylation27 BeadChip array covering 27,589 CpG sites was available for 871 women from a prior study of breast cancer and DNA methylation. To identify differentially methylated CpG sites associated with BMI, we analyzed this methylation data using robust linear regression with adjustment for age and case status. For those CpGs passing the false discovery rate significance level, we examined the association in a replication set comprised of a non-overlapping group of 187 women from the Sister Study who had DNA methylation data generated using the Infinium HumanMethylation450 BeadChip array. Analysis of this expanded 450 K array identified additional BMI-associated sites which were investigated with targeted pyrosequencing. Four CpG sites reached genome-wide significance (false discovery rate (FDR) q<0.05) in the discovery set and associations for all four were significant at strict Bonferroni correction in the replication set. An additional 23 sites passed FDR in the replication set and five were replicated by pyrosequencing in the discovery set. Several of the genes identified including ANGPT4, RORC, SOCS3, FSD2, XYLT1, ABCG1, STK39, ASB2 and CRHR2 have been linked to obesity and obesity-related chronic diseases. Our findings support the hypothesis that obesity-related epigenetic differences are detectable in blood and may be related to risk of chronic disease.
Genetic variation in cell death genes and risk of non-Hodgkin lymphoma.
Schuetz, Johanna M; Daley, Denise; Graham, Jinko; Berry, Brian R; Gallagher, Richard P; Connors, Joseph M; Gascoyne, Randy D; Spinelli, John J; Brooks-Wilson, Angela R
2012-01-01
Non-Hodgkin lymphomas are a heterogeneous group of solid tumours that constitute the 5th highest cause of cancer mortality in the United States and Canada. Poor control of cell death in lymphocytes can lead to autoimmune disease or cancer, making genes involved in programmed cell death of lymphocytes logical candidate genes for lymphoma susceptibility. We tested SNPs in lymphocyte cell death genes for genetic association with NHL and NHL subtypes, using an established population-based study. 17 candidate genes were chosen based on biological function, with 123 SNPs tested. These included tagSNPs from HapMap and novel SNPs discovered by re-sequencing 47 cases in genes for which SNP representation was judged to be low. The main analysis, which estimated odds ratios by fitting data to an additive logistic regression model, used European ancestry samples that passed quality control measures (569 cases and 547 controls). A two-tiered approach for multiple testing correction was used: correction for the number of tests within each gene by permutation-based methodology, followed by correction for the number of genes tested using the false discovery rate. Variant rs928883, near miR-155, showed an association (OR per A-allele: 2.80 [95% CI: 1.63-4.82]; pF = 0.027) with marginal zone lymphoma that is significant after correction for multiple testing. This is the first reported association between a germline polymorphism at a miRNA locus and lymphoma.
The Prevalence of Earth-size Planets Orbiting Sun-like Stars
NASA Astrophysics Data System (ADS)
Petigura, Erik; Marcy, Geoffrey W.; Howard, Andrew
2015-01-01
In less than two decades since the discovery of the first planet orbiting another Sun-like star, the study of extrasolar planets has matured beyond individual discoveries to detailed characterization of the planet population as a whole. No mission has played more of a role in this paradigm shift than NASA's Kepler mission. Kepler photometry has shown that planets like Earth are common throughout the Milky Way Galaxy. Our group performed an independent search of Kepler photometry using our custom transit-finding pipeline, TERRA, and produced our own catalog of planet candidates. We conducted spectroscopic follow-up of their host stars in order to rule out false positive scenarios and to better constrain host star properties. We measured TERRA's sensitivity to planets of different sizes and orbital periods by injecting synthetic planets into raw Kepler photometry and measuring the recovery rate. Correcting for orbital tilt and survey completeness, we found that ~80% of GK stars harbor one or more planets within 1 AU and that ~22% of Sun-like stars harbor an Earth-size planet that receives similar levels of stellar radiation as Earth. I will present the latest results from our efforts to characterize the demographics of small planets revealed by Kepler.
Long-term trends in oil and gas discovery rates in lower 48 United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, T.J.
1985-09-01
The Gas Research Institute (GRI), in association with Energy and Environmental Analysis, Inc. (EEA), has developed a data base characterizing the discovered oil and gas fields in the lower 48 United States. The number of fields in this data base reported to have been discovered since 1947 substantially exceeds the count presented in the AAPG survey of new-field discoveries since 1947. The greatest relative difference between the field counts is for fields larger than 10 million bbl of oil equivalent (BOE) (AAPG Class C fields or larger). Two factors contribute to the difference in reported discoveries by field size. First, the AAPG survey does not capture all new-field discoveries, particularly in the offshore. Second, the AAPG survey does not update field sizes past 6 years after the field discovery date. Because of reserve appreciation to discovered fields, discovery-trend data based on field-size data should be used with caution, particularly when field-size estimates have not been updated for a substantial period of time. Based on the GRI/EEA data base, the major decline in the discovery rates of large, new oil and gas fields in the lower 48 United States appears to have ended by the early 1960s. Since then, discovery rates seem to have improved. Thus, the outlook for future discoveries of large fields may be much better than previously believed.
Predicting discovery rates of genomic features.
Gravel, Simon
2014-06-01
Successful sequencing experiments require judicious sample selection. However, this selection must often be performed on the basis of limited preliminary data. Predicting the statistical properties of the final sample based on preliminary data can be challenging, because numerous uncertain model assumptions may be involved. Here, we ask whether we can predict "omics" variation across many samples by sequencing only a fraction of them. In the infinite-genome limit, we find that a pilot study sequencing 5% of a population is sufficient to predict the number of genetic variants in the entire population within 6% of the correct value, using an estimator agnostic to demography, selection, or population structure. To reach similar accuracy in a finite genome with millions of polymorphisms, the pilot study would require ∼15% of the population. We present computationally efficient jackknife and linear programming methods that exhibit substantially less bias than the state of the art when applied to simulated data and subsampled 1000 Genomes Project data. Extrapolating based on the National Heart, Lung, and Blood Institute Exome Sequencing Project data, we predict that 7.2% of sites in the capture region would be variable in a sample of 50,000 African Americans and 8.8% in a European sample of equal size. Finally, we show how the linear programming method can also predict discovery rates of various genomic features, such as the number of transcription factor binding sites across different cell types. Copyright © 2014 by the Genetics Society of America.
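The flavor of this kind of extrapolation can be conveyed with the classic first-order jackknife richness estimator, used here purely as a generic stand-in (it is not the bias-reduced jackknife or linear programming estimators developed in the paper): the number of variants seen in exactly one pilot sample drives the predicted number still undiscovered. All counts in the example are invented.

```python
import numpy as np

def jackknife1(variant_sample_counts, n_samples):
    """variant_sample_counts[i] = number of pilot samples in which variant i
    was observed; n_samples = size of the pilot sample.
    Returns the first-order jackknife estimate of the total number of variants."""
    counts = np.asarray(variant_sample_counts)
    s_obs = (counts > 0).sum()
    f1 = (counts == 1).sum()              # variants seen in exactly one sample
    return s_obs + f1 * (n_samples - 1) / n_samples

# Toy pilot study: 40 samples, 5000 observed variants, 1800 of them singletons
counts = np.concatenate([np.ones(1800), np.full(3200, 3)])
print(jackknife1(counts, n_samples=40))   # 5000 + 1800 * 39/40 = 6755
```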
Short arc orbit determination and imminent impactors in the Gaia era
NASA Astrophysics Data System (ADS)
Spoto, F.; Del Vigna, A.; Milani, A.; Tommei, G.; Tanga, P.; Mignard, F.; Carry, B.; Thuillot, W.; David, P.
2018-06-01
Short-arc orbit determination is crucial when an asteroid is first discovered. In these cases, the observations are usually so few that the differential correction procedure may not converge. We developed an initial orbit computation method based on systematic ranging, an orbit determination technique that systematically explores a raster in the topocentric range and range-rate space inside the admissible region. We obtained a fully rigorous computation of the probability that the asteroid could impact the Earth within a few days of discovery, without any a priori assumption. We tested our method on the two past impactors, 2008 TC3 and 2014 AA, on some very well-known cases, and on two particular objects observed by the European Space Agency Gaia mission.
The promise of discovering population-specific disease-associated genes in South Asia.
Nakatsuka, Nathan; Moorjani, Priya; Rai, Niraj; Sarkar, Biswanath; Tandon, Arti; Patterson, Nick; Bhavani, Gandham SriLakshmi; Girisha, Katta Mohan; Mustak, Mohammed S; Srinivasan, Sudha; Kaushik, Amit; Vahab, Saadi Abdul; Jagadeesh, Sujatha M; Satyamoorthy, Kapaettu; Singh, Lalji; Reich, David; Thangaraj, Kumarasamy
2017-09-01
The more than 1.5 billion people who live in South Asia are correctly viewed not as a single large population but as many small endogamous groups. We assembled genome-wide data from over 2,800 individuals from over 260 distinct South Asian groups. We identified 81 unique groups, 14 of which had estimated census sizes of more than 1 million, that descend from founder events more extreme than those in Ashkenazi Jews and Finns, both of which have high rates of recessive disease due to founder events. We identified multiple examples of recessive diseases in South Asia that are the result of such founder events. This study highlights an underappreciated opportunity for decreasing disease burden among South Asians through discovery of and testing for recessive disease-associated genes.
Near-Earth asteroid discovery rate review
NASA Technical Reports Server (NTRS)
Helin, Eleanor F.
1991-01-01
Fifteen to twenty years ago, the discovery of 1 or 2 Near Earth Asteroids (NEAs) per year was typical from one systematic search program, the Palomar Planet Crossing Asteroid Survey (PCAS), plus incidental discoveries from a variety of other astronomical programs. Sky coverage and magnitude were both limited by slower emulsions, requiring longer exposures. The 1970's sky coverage of 15,000 to 25,000 sq. deg. per year led to about 1 NEA discovery every 13,000 sq. deg. Looking at the years from 1987 through 1990, and comparing 1987/1988 with 1989/1990, the worldwide discovery rate of NEAs went from 20 to 43. More specifically, PCAS' results, when grouped into the two-year periods, show an increase from 5 discoveries in the 1st period to 20 in the 2nd period, a fourfold increase. Also, the discoveries went from representing about 25 pct. of the world total to about 50 pct. of discoveries worldwide. The surge of discoveries enjoyed by PCAS in particular is attributed to new fine-grain sensitive emulsions, film hypering, more uniformity in the quality of the photographs, more equitable scheduling, better weather, and coordination of efforts. The maximum discovery rate seems to have been attained with the Palomar Schmidt.
An automated assay for the assessment of cardiac arrest in fish embryo.
Puybareau, Elodie; Genest, Diane; Barbeau, Emilie; Léonard, Marc; Talbot, Hugues
2017-02-01
Fish embryo models are widely used in research, in fields including drug discovery and environmental toxicology. In this article, we propose an entirely automated assay to detect cardiac arrest in Medaka (Oryzias latipes) based on image analysis. We propose a multi-scale pipeline based on mathematical morphology. Starting from video sequences of entire wells in 24-well plates, we focus on the embryo, detect its heart, and ascertain whether or not the heart is beating based on intensity variation analysis. Our image analysis pipeline only uses commonly available operators. It has a low computational cost, allowing analysis at the same rate as acquisition. From an initial dataset of 3192 videos, 660 were discarded as unusable (20.7%), 655 of them correctly so (99.25%) and only 5 incorrectly so (0.75%). The 2532 remaining videos were used for our test. On these, 45 errors were made, leading to a success rate of 98.23%. Copyright © 2016 Elsevier Ltd. All rights reserved.
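The final decision step, judging from intensity variation whether the heart is beating, can be illustrated simply: once a heart region of interest has been located, a beating heart shows periodic fluctuation of the mean pixel intensity across frames, while a cardiac arrest gives an essentially flat trace. The sketch below is a simplified stand-in with arbitrary thresholds and synthetic frames, not the published multi-scale morphological pipeline.

```python
import numpy as np

def is_beating(frames, roi, min_rel_std=0.02):
    """frames: array of shape (n_frames, height, width);
    roi: (row_slice, col_slice) covering the detected heart.
    Returns True if the mean ROI intensity fluctuates enough over time."""
    trace = frames[:, roi[0], roi[1]].mean(axis=(1, 2))  # one value per frame
    rel_std = trace.std() / (trace.mean() + 1e-9)        # normalized variation
    return rel_std > min_rel_std

# Toy example: a synthetic "beating" sequence with sinusoidal intensity
t = np.arange(120)
frames = np.ones((120, 64, 64)) * 100 + 5 * np.sin(2 * np.pi * t / 20)[:, None, None]
print(is_beating(frames, (slice(20, 40), slice(20, 40))))  # True
```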
Fast radio burst event rate counts - I. Interpreting the observations
NASA Astrophysics Data System (ADS)
Macquart, J.-P.; Ekers, R. D.
2018-02-01
The fluence distribution of the fast radio burst (FRB) population (the 'source count' distribution, N(>F) ∝ F^α) is a crucial diagnostic of its distance distribution, and hence the progenitor evolutionary history. We critically reanalyse current estimates of the FRB source count distribution. We demonstrate that the Lorimer burst (FRB 010724) is subject to discovery bias, and should be excluded from all statistical studies of the population. We re-examine the evidence for flat, α > -1, source count estimates based on the ratio of single-beam to multiple-beam detections with the Parkes multibeam receiver, and show that current data imply only a very weak constraint of α ≲ -1.3. A maximum-likelihood analysis applied to the portion of the Parkes FRB population detected above the observational completeness fluence of 2 Jy ms yields α = -2.6 (+0.7, -1.3). Uncertainties in the location of each FRB within the Parkes beam render estimates of the Parkes event rate uncertain in both normalizing survey area and the estimated post-beam-corrected completeness fluence; this uncertainty needs to be accounted for when comparing the event rate against event rates measured at other telescopes.
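For a cumulative source-count distribution N(>F) ∝ F^α, fluences above a completeness limit F_min follow a Pareto-type density, and the maximum-likelihood estimate of the slope has the closed form α̂ = -n / Σ ln(F_i/F_min). The sketch below illustrates only that textbook estimator on synthetic fluences; it is not the authors' likelihood analysis, which also has to handle beam-position uncertainties, and all numbers are invented.

```python
import numpy as np

def mle_alpha(fluences, f_min):
    """Maximum-likelihood slope of N(>F) ∝ F**alpha for fluences >= f_min."""
    f = np.asarray(fluences)
    f = f[f >= f_min]
    return -f.size / np.sum(np.log(f / f_min))

# Synthetic fluences drawn from N(>F) ∝ F**-1.5 above a 2 Jy ms completeness limit
rng = np.random.default_rng(2)
true_alpha = -1.5
f_min = 2.0
samples = f_min * rng.uniform(size=1000) ** (1.0 / true_alpha)  # inverse-CDF sampling
print(mle_alpha(samples, f_min))  # close to -1.5
```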
Shen, Li; Saykin, Andrew J.; Williams, Scott M.; Moore, Jason H.
2016-01-01
Although gene-environment (G×E) interactions play an important role in many biological systems, detecting these interactions within genome-wide data can be challenging due to the loss in statistical power incurred by multiple hypothesis correction. To address the challenge of poor power and the limitations of existing multistage methods, we recently developed a screening-testing approach for G×E interaction detection that combines elastic net penalized regression with joint estimation to support a single omnibus test for the presence of G×E interactions. In our original work on this technique, however, we did not assess type I error control or power and evaluated the method using just a single, small bladder cancer data set. In this paper, we extend the original method in two important directions and provide a more rigorous performance evaluation. First, we introduce a hierarchical false discovery rate approach to formally assess the significance of individual G×E interactions. Second, to support the analysis of truly genome-wide data sets, we incorporate a score statistic-based prescreening step to reduce the number of single nucleotide polymorphisms prior to fitting the first stage penalized regression model. To assess the statistical properties of our method, we compare the type I error rate and statistical power of our approach with competing techniques using both simple simulation designs as well as designs based on real disease architectures. Finally, we demonstrate the ability of our approach to identify biologically plausible SNP-education interactions relative to Alzheimer's disease status using genome-wide association study data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). PMID:27578615
An extended sequential goodness-of-fit multiple testing method for discrete data.
Castro-Conde, Irene; Döhler, Sebastian; de Uña-Álvarez, Jacobo
2017-10-01
The sequential goodness-of-fit (SGoF) multiple testing method has recently been proposed as an alternative to the familywise error rate- and the false discovery rate-controlling procedures in high-dimensional problems. For discrete data, the SGoF method may be very conservative. In this paper, we introduce an alternative SGoF-type procedure that takes into account the discreteness of the test statistics. Like the original SGoF, our new method provides weak control of the false discovery rate/familywise error rate but attains false discovery rate levels closer to the desired nominal level, and thus it is more powerful. We study the performance of this method in a simulation study and illustrate its application to a real pharmacovigilance data set.
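For orientation, a minimal sketch of the original (continuous) SGoF idea follows: count the p-values below a threshold gamma, compare that count with what a Binomial(m, gamma) null allows at level alpha, and declare the excess smallest p-values as discoveries. The discrete extension described in the abstract modifies the null distribution and is not implemented here; details of this sketch (e.g., how the excess is computed) are a simplification.

```python
import numpy as np
from scipy.stats import binom

def sgof_discoveries(pvals, gamma=0.05, alpha=0.05):
    """Minimal sketch of the original SGoF idea (not the discrete extension)."""
    p = np.sort(np.asarray(pvals))
    m = p.size
    observed = int((p <= gamma).sum())
    # Smallest count that a one-sided level-alpha binomial test would reject.
    critical = int(binom.ppf(1.0 - alpha, m, gamma)) + 1
    n_discoveries = max(0, observed - critical + 1)
    return p[:n_discoveries]
```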
Hagrot, Erika; Oddsdóttir, Hildur Æsa; Hosta, Joan Gonzalez; Jacobsen, Elling W; Chotteau, Véronique
2016-06-20
This article has been retracted: please see Elsevier Policy on Article Withdrawal (https://www.elsevier.com/about/our-business/policies/article-withdrawal). The authors of the paper wish to retract the paper due to the discovery of a calculation error in the processing of the raw data. The discovered error concerns the calculation of the specific uptake/secretion rates for several metabolites in one of the experimental conditions, i.e. glutamine omission (called Q0). In other words, in Figure 2, the variations of the metabolic fluxes for the condition Q0 are not correct. When this error is corrected, the resulting mathematical model changes (in particular for the results associated with Q0 conditions), several figures and tables are modified, and the interpretation of the fluxes in Q0 has to be slightly modified. Therefore the authors wish to retract the article. However, the error does not affect the modelling approach or the methodology presented in the article. Therefore, a revised version with the correct data has since been published: http://www.sciencedirect.com/science/article/pii/S0168165617302663. We apologize to the scientific community for the need to retract the article and the inconvenience caused. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Logue, Mark W.; Smith, Alicia K.; Baldwin, Clinton; Wolf, Erika J.; Guffanti, Guia; Ratanatharathorn, Andrew; Stone, Annjanette; Schichman, Steven A.; Humphries, Donald; Binder, Elisabeth B.; Arloth, Janine; Menke, Andreas; Uddin, Monica; Wildman, Derek; Galea, Sandro; Aiello, Allison E.; Koenen, Karestan C.; Miller, Mark W.
2015-01-01
We examined the association between posttraumatic stress disorder (PTSD) and gene expression using whole blood samples from a cohort of trauma-exposed white non-Hispanic male veterans (115 cases and 28 controls). 10,264 probes of genes and gene transcripts were analyzed. We found 41 that were differentially expressed in PTSD cases versus controls (multiple-testing corrected p<0.05). The most significant was DSCAM, a neurological gene expressed widely in the developing brain and in the amygdala and hippocampus of the adult brain. We then examined the 41 differentially expressed genes in a meta-analysis using two replication cohorts and found significant associations with PTSD for 7 of the 41 (p<0.05), one of which (ATP6AP1L) survived multiple-testing correction. There was also broad evidence of overlap across the discovery and replication samples for the entire set of genes implicated in the discovery data based on the direction of effect and an enrichment of p<0.05 significant probes beyond what would be expected under the null. Finally, we found that the set of differentially expressed genes from the discovery sample was enriched for genes responsive to glucocorticoid signaling with most showing reduced expression in PTSD cases compared to controls. PMID:25867994
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Roberts, Gary D.
2003-01-01
Procedures for modeling the effect of high strain rate on composite materials are needed for designing reliable composite engine cases that are lighter than the metal cases in current use. The types of polymer matrix composites that are likely to be used in such an application have a deformation response that is nonlinear and that varies with strain rate. The nonlinearity and strain rate dependence of the composite response is primarily due to the matrix constituent. Therefore, in developing material models to be used in the design of impact-resistant composite engine cases, the deformation of the polymer matrix must be correctly analyzed. However, unlike in metals, the nonlinear response of polymers depends on the hydrostatic stresses, which must be accounted for within an analytical model. An experimental program has been carried out through a university grant with the Ohio State University to obtain tensile and shear deformation data for a representative polymer for strain rates ranging from quasi-static to high rates of several hundred per second. This information has been used at the NASA Glenn Research Center to develop, characterize, and correlate a material model in which the strain rate dependence and nonlinearity (including hydrostatic stress effects) of the polymer are correctly analyzed. To obtain the material data, Glenn's researchers designed and fabricated test specimens of a representative toughened epoxy resin. Quasi-static tests at low strain rates and split Hopkinson bar tests at high strain rates were then conducted at the Ohio State University. The experimental data confirmed the strong effects of strain rate on both the tensile and shear deformation of the polymer. For the analytical model, Glenn researchers modified state variable constitutive equations previously used for the viscoplastic analysis of metals to allow for the analysis of the nonlinear, strain-rate-dependent polymer deformation. Specifically, we accounted for the effects of hydrostatic stresses. An important discovery in the course of this work was that the hydrostatic stress effects varied during the loading process, which needed to be accounted for within the constitutive equations. The model is characterized primarily by shear data, with tensile data used to characterize the hydrostatic stress effects.
Phylogenetic comparative methods complement discriminant function analysis in ecomorphology.
Barr, W Andrew; Scott, Robert S
2014-04-01
In ecomorphology, Discriminant Function Analysis (DFA) has been used as evidence for the presence of functional links between morphometric variables and ecological categories. Here we conduct simulations of characters containing phylogenetic signal to explore the performance of DFA under a variety of conditions. Characters were simulated using a phylogeny of extant antelope species from known habitats. Characters were modeled with no biomechanical relationship to the habitat category; the only sources of variation were body mass, phylogenetic signal, or random "noise." DFA on the discriminability of habitat categories was performed using subsets of the simulated characters, and Phylogenetic Generalized Least Squares (PGLS) was performed for each character. Analyses were repeated with randomized habitat assignments. When simulated characters lacked phylogenetic signal and/or habitat assignments were random, <5.6% of DFAs and <8.26% of PGLS analyses were significant. When characters contained phylogenetic signal and actual habitats were used, 33.27 to 45.07% of DFAs and <13.09% of PGLS analyses were significant. False Discovery Rate (FDR) corrections for multiple PGLS analyses reduced the rate of significance to <4.64%. In all cases using actual habitats and characters with phylogenetic signal, correct classification rates of DFAs exceeded random chance. In simulations involving phylogenetic signal in both predictor variables and predicted categories, PGLS with FDR was rarely significant, while DFA often was. In short, DFA offered no indication that differences between categories might be explained by phylogenetic signal, while PGLS did. As such, PGLS provides a valuable tool for testing the functional hypotheses at the heart of ecomorphology. Copyright © 2013 Wiley Periodicals, Inc.
Prosperini, Luca; Fanelli, Fulvia; Petsas, Nikolaos; Sbardella, Emilia; Tona, Francesca; Raz, Eytan; Fortuna, Deborah; De Angelis, Floriana; Pozzilli, Carlo; Pantano, Patrizia
2014-11-01
To determine if high-intensity, task-oriented, visual feedback training with a video game balance board (Nintendo Wii) induces significant changes in diffusion-tensor imaging (DTI) parameters of cerebellar connections and other supratentorial associative bundles and if these changes are related to clinical improvement in patients with multiple sclerosis. The protocol was approved by local ethical committee; each participant provided written informed consent. In this 24-week, randomized, two-period crossover pilot study, 27 patients underwent static posturography and brain magnetic resonance (MR) imaging at study entry, after the first 12-week period, and at study termination. Thirteen patients started a 12-week training program followed by a 12-week period without any intervention, while 14 patients received the intervention in reverse order. Fifteen healthy subjects also underwent MR imaging once and underwent static posturography. Virtual dissection of white matter tracts was performed with streamline tractography; values of DTI parameters were then obtained for each dissected tract. Repeated measures analyses of variance were performed to evaluate whether DTI parameters significantly changed after intervention, with false discovery rate correction for multiple hypothesis testing. There were relevant differences between patients and healthy control subjects in postural sway and DTI parameters (P < .05). Significant main effects of time by group interaction for fractional anisotropy and radial diffusivity of the left and right superior cerebellar peduncles were found (F2,23 range, 5.555-3.450; P = .036-.088 after false discovery rate correction). These changes correlated with objective measures of balance improvement detected at static posturography (r = -0.381 to 0.401, P < .05). However, both clinical and DTI changes did not persist beyond 12 weeks after training. Despite the low statistical power (35%) due to the small sample size, the results showed that training with the balance board system modified the microstructure of superior cerebellar peduncles. The clinical improvement observed after training might be mediated by enhanced myelination-related processes, suggesting that high-intensity, task-oriented exercises could induce favorable microstructural changes in the brains of patients with multiple sclerosis.
Forecasting petroleum discoveries in sparsely drilled areas: Nigeria and the North Sea
DOE Office of Scientific and Technical Information (OSTI.GOV)
Attanasi, E.D.; Root, D.H.
1988-10-01
Decline function methods for projecting future discoveries generally capture the crowding effects of wildcat wells on the discovery rate. However, these methods do not accommodate easily situations where exploration areas and horizons are expanding. In this paper, a method is presented that uses a mapping algorithm for separating these often countervailing influences. The method is applied to Nigeria and the North Sea. For an amount of future drilling equivalent to past drilling (825 wildcat wells), future discoveries (in resources found) for Nigeria are expected to decline by 68% per well but still amount to 8.5 billion barrels of oil equivalent (BOE). Similarly, for the total North Sea for an equivalent amount and mix among areas of past drilling (1322 wildcat wells), future discoveries are expected to amount to 17.9 billion BOE, whereas the average discovery rate per well is expected to decline by 71%.
Forecasting petroleum discoveries in sparsely drilled areas: Nigeria and the North Sea
Attanasi, E.D.; Root, D.H.
1988-01-01
Decline function methods for projecting future discoveries generally capture the crowding effects of wildcat wells on the discovery rate. However, these methods do not accommodate easily situations where exploration areas and horizons are expanding. In this paper, a method is presented that uses a mapping algorithm for separating these often countervailing influences. The method is applied to Nigeria and the North Sea. For an amount of future drilling equivalent to past drilling (825 wildcat wells), future discoveries (in resources found) for Nigeria are expected to decline by 68% per well but still amount to 8.5 billion barrels of oil equivalent (BOE). Similarly, for the total North Sea for an equivalent amount and mix among areas of past drilling (1322 wildcat wells), future discoveries are expected to amount to 17.9 billion BOE, whereas the average discovery rate per well is expected to decline by 71%. © 1988 International Association for Mathematical Geology.
Docking screens: right for the right reasons?
Kolb, Peter; Irwin, John J
2009-01-01
Whereas docking screens have emerged as the most practical way to use protein structure for ligand discovery, an inconsistent track record raises questions about how well docking actually works. In its favor, a growing number of publications report the successful discovery of new ligands, often supported by experimental affinity data and controls for artifacts. Few reports, however, actually test the underlying structural hypotheses that docking makes. To be successful and not just lucky, prospective docking must not only rank a true ligand among the top scoring compounds, it must also correctly orient the ligand so the score it receives is biophysically sound. If the correct binding pose is not predicted, a skeptic might well infer that the discovery was serendipitous. Surveying over 15 years of the docking literature, we were surprised to discover how rarely sufficient evidence is presented to establish whether docking actually worked for the right reasons. The paucity of experimental tests of theoretically predicted poses undermines confidence in a technique that has otherwise become widely accepted. Of course, solving a crystal structure is not always possible, and even when it is, it can be a lot of work, and is not readily accessible to all groups. Even when a structure can be determined, investigators may prefer to gloss over an erroneous structural prediction to better focus on their discovery. Still, the absence of a direct test of theory by experiment is a loss for method developers seeking to understand and improve docking methods. We hope this review will motivate investigators to solve structures and compare them with their predictions whenever possible, to advance the field.
40 CFR 61.185 - Recordkeeping requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Arsenic Emissions From Arsenic Trioxide and Metallic Arsenic Production Facilities § 61.185 Recordkeeping... arsenic to the atmosphere between the time of discovery and the time corrective action was taken. (c) Each... ambient inorganic arsenic concentrations at all sampling sites and other data needed to determine such...
40 CFR 61.185 - Recordkeeping requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Arsenic Emissions From Arsenic Trioxide and Metallic Arsenic Production Facilities § 61.185 Recordkeeping... arsenic to the atmosphere between the time of discovery and the time corrective action was taken. (c) Each... ambient inorganic arsenic concentrations at all sampling sites and other data needed to determine such...
Insecticide discovery: an evaluation and analysis.
Sparks, Thomas C
2013-09-01
There is an on-going need for the discovery and development of new insecticides due to the loss of existing products through the development of resistance, the desire for products with more favorable environmental and toxicological profiles, shifting pest spectrums, and changing agricultural practices. Since 1960, the number of research-based companies in the US and Europe involved in the discovery of new insecticidal chemistries has been declining. In part this is a reflection of the increasing costs of the discovery and development of new pesticides. Likewise, the number of compounds that need to be screened for every product developed has, until recently, been climbing. In the past two decades the agrochemical industry has been able to develop a range of new products that have more favorable mammalian vs. insect selectivity. This review provides an analysis of the time required for the discovery, or more correctly the building process, for a wide range of insecticides developed during the last 60 years. An examination of the data around the time requirements for the discovery of products based on external patents, prior internal products, or entirely new chemistry provides some unexpected observations. In light of the increasing costs of discovery and development, coupled with fewer companies willing or able to make the investment, insecticide resistance management takes on greater importance as a means to preserve existing and new insecticides. Copyright © 2013 Elsevier Inc. All rights reserved.
Orbiter processing facility service platform failure and redesign
NASA Technical Reports Server (NTRS)
Harris, Jesse L.
1988-01-01
In a high bay of the Orbiter Processing Facility (OPF) at the Kennedy Space Center, technicians were preparing the space shuttle orbiter Discovery for rollout to the Vehicle Assembly Building (VAB). A service platform, commonly referred to as an OPF Bucket, was being retracted when it suddenly fell, striking a technician and impacting Discovery's payload bay door. A critical component in the OPF Bucket hoist system had failed, allowing the platform to fall. The incident was thoroughly investigated by both NASA and Lockheed, revealing many design deficiencies within the system. The deficiencies and the design changes made to correct them are reviewed.
2011-07-01
that the object was indeed a proper motion object. For real objects, Two Micron All Sky Survey (2MASS) positions, epochs, and JHKs photometry were...and vice versa, and to ensure the correct 2MASS data were collected. The blinking process led to the discovery of many common proper motion (CPM...Proper motion or position angle suspect. f No 2MASS data available, so no distance estimate. g Coordinates not J2000.0 due to lack of proper motion or
Gale, Maggie; Ball, Linden J
2012-04-01
Hypothesis-testing performance on Wason's (Quarterly Journal of Experimental Psychology 12:129-140, 1960) 2-4-6 task is typically poor, with only around 20% of participants announcing the to-be-discovered "ascending numbers" rule on their first attempt. Enhanced solution rates can, however, readily be observed with dual-goal (DG) task variants requiring the discovery of two complementary rules, one labeled "DAX" (the standard "ascending numbers" rule) and the other labeled "MED" ("any other number triples"). Two DG experiments are reported in which we manipulated the usefulness of a presented MED exemplar, where usefulness denotes cues that can establish a helpful "contrast class" that can stand in opposition to the presented 2-4-6 DAX exemplar. The usefulness of MED exemplars had a striking facilitatory effect on DAX rule discovery, which supports the importance of contrast-class information in hypothesis testing. A third experiment ruled out the possibility that the useful MED triple seeded the correct rule from the outset and obviated any need for hypothesis testing. We propose that an extension of Oaksford and Chater's (European Journal of Cognitive Psychology 6:149-169, 1994) iterative counterfactual model can neatly capture the mechanisms by which DG facilitation arises.
Analysis of the rate of wildcat drilling and deposit discovery
Drew, L.J.
1975-01-01
The rate at which petroleum deposits were discovered during a 16-yr period (1957-72) was examined in relation to changes in a suite of economic and physical variables. The study area encompasses 11,000 mi² and is located on the eastern flank of the Powder River Basin. A two-stage multiple-regression model was used as a basis for this analysis. The variables employed in this model were: (1) the yearly wildcat drilling rate, (2) a measure of the extent of the physical exhaustion of the resource base of the region, (3) a proxy for the discovery expectation of the exploration operators active in the region, (4) an exploration price/cost ratio, and (5) the expected depths of the exploration targets sought. The rate at which wildcat wells were drilled was strongly correlated with the discovery expectation of the exploration operators. Small additional variations in the wildcat drilling rate were explained by the price/cost ratio and target-depth variables. The number of deposits discovered each year was highly dependent on the wildcat drilling rate, but the aggregate quantity of petroleum discovered each year was independent of the wildcat drilling rate. The independence between these last two variables is a consequence of the cyclical behavior of the exploration play mechanism. Although the discovery success ratio declined sharply during the initial phases of the two exploration plays which developed in the study area, a learning effect occurred whereby the discovery success ratio improved steadily with the passage of time during both exploration plays. © 1975 Plenum Publishing Corporation.
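The two-stage structure described above can be illustrated with a generic two-stage least-squares-style setup. Column names and the exact specification below are placeholders (the paper's model details are not given in the abstract), and the data frame is assumed to hold one row per year.

```python
import statsmodels.api as sm

def two_stage_discovery_model(df):
    """Illustrative two-stage regression: first explain the wildcat drilling rate,
    then use the fitted rate to explain yearly deposit discoveries.
    'df' is assumed to be a pandas DataFrame with the (hypothetical) columns used below."""
    # Stage 1: drilling rate as a function of discovery expectation, price/cost ratio, target depth.
    X1 = sm.add_constant(df[["discovery_expectation", "price_cost_ratio", "target_depth"]])
    stage1 = sm.OLS(df["wildcat_wells"], X1).fit()
    df = df.assign(wildcat_wells_hat=stage1.fittedvalues)

    # Stage 2: number of deposits discovered as a function of the fitted drilling rate.
    X2 = sm.add_constant(df[["wildcat_wells_hat"]])
    stage2 = sm.OLS(df["deposits_discovered"], X2).fit()
    return stage1, stage2
```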
22 CFR 34.18 - Waivers of indebtedness.
Code of Federal Regulations, 2011 CFR
2011-04-01
... known through the exercise of due diligence that an error existed but failed to take corrective action... elapsed between the erroneous payment and discovery of the error and notification of the employee; (D... to duty because of disability (supported by an acceptable medical certificate); and (D) Whether...
Automated Lipid A Structure Assignment from Hierarchical Tandem Mass Spectrometry Data
NASA Astrophysics Data System (ADS)
Ting, Ying S.; Shaffer, Scott A.; Jones, Jace W.; Ng, Wailap V.; Ernst, Robert K.; Goodlett, David R.
2011-05-01
Infusion-based electrospray ionization (ESI) coupled to multiple-stage tandem mass spectrometry (MSn) is a standard methodology for investigating lipid A structural diversity (Shaffer et al. J. Am. Soc. Mass. Spectrom. 18(6), 1080-1092, 2007). Annotation of these MSn spectra, however, has remained a manual, expert-driven process. In order to keep up with the data acquisition rates of modern instruments, we devised a computational method to annotate lipid A MSn spectra rapidly and automatically, which we refer to as the hierarchical tandem mass spectrometry (HiTMS) algorithm. As a first-pass tool, HiTMS aids expert interpretation of lipid A MSn data by providing the analyst with a set of candidate structures that may then be confirmed or rejected. HiTMS deciphers the signature ions (e.g., A-, Y-, and Z-type ions) and neutral losses of MSn spectra using a species-specific library based on general prior structural knowledge of the given lipid A species under investigation. Candidates are selected by calculating the correlation between theoretical and acquired MSn spectra. At a false discovery rate of less than 0.01, HiTMS correctly assigned 85% of the structures in a library of 133 manually annotated Francisella tularensis subspecies novicida lipid A structures. Additionally, HiTMS correctly assigned 85% of the structures in a smaller library of lipid A species from Yersinia pestis, demonstrating that it may be used across species.
Toward a Quantitative Theory of Intellectual Discovery (Especially in Physics).
ERIC Educational Resources Information Center
Fowler, Richard G.
1987-01-01
Studies time intervals in a list of critical ideas in physics. Infers that the rate of growth of ideas has been proportional to the totality of known ideas multiplied by the totality of people in the world. Indicates that the rate of discovery in physics has been decreasing. (CW)
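Read literally, the stated growth law corresponds to a simple rate equation. The following restates that single sentence in symbols, with I(t) the stock of known ideas, P(t) the world population, and k a proportionality constant; the notation is ours, not the report's.

```latex
\frac{dI}{dt} = k\, I(t)\, P(t), \qquad \text{so for roughly constant } P:\quad I(t) = I_0\, e^{kPt}.
```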
Sparse PCA corrects for cell type heterogeneity in epigenome-wide association studies.
Rahmani, Elior; Zaitlen, Noah; Baran, Yael; Eng, Celeste; Hu, Donglei; Galanter, Joshua; Oh, Sam; Burchard, Esteban G; Eskin, Eleazar; Zou, James; Halperin, Eran
2016-05-01
In epigenome-wide association studies (EWAS), different methylation profiles of distinct cell types may lead to false discoveries. We introduce ReFACTor, a method based on principal component analysis (PCA) and designed for the correction of cell type heterogeneity in EWAS. ReFACTor does not require knowledge of cell counts, and it provides improved estimates of cell type composition, resulting in improved power and control for false positives in EWAS. Corresponding software is available at http://www.cs.tau.ac.il/~heran/cozygene/software/refactor.html.
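ReFACTor's exact algorithm is not reproduced in the abstract; the following is a rough sketch of the general idea of a PCA-based adjustment for unobserved cell-type composition, with a crude stand-in for the feature selection step, illustrative parameter values, and no claim to match the released software.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_cell_composition_covariates(methylation, n_components=6, n_sites=500):
    """Rough sketch of a PCA-based correction for cell-type heterogeneity in EWAS.

    methylation: (n_samples, n_sites) beta-value matrix. A first PCA identifies
    the sites best captured by a low-rank structure (a crude stand-in for the
    feature selection ReFACTor performs); a second PCA on those sites yields
    components to include as covariates in site-wise association tests.
    """
    X = methylation - methylation.mean(axis=0)
    pca = PCA(n_components=n_components).fit(X)
    # Reconstruction error per site under the low-rank model.
    X_hat = pca.inverse_transform(pca.transform(X))
    site_error = ((X - X_hat) ** 2).mean(axis=0)
    informative = np.argsort(site_error)[:n_sites]
    # Components computed on the structure-carrying sites, returned as covariates.
    return PCA(n_components=n_components).fit_transform(X[:, informative])
```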
Erratum: "Discovery of a Second Millisecond Accreting Pulsar: XTE J1751-305"
NASA Technical Reports Server (NTRS)
Markwardt, Craig; Swank, J. H.; Strohmayer, T. E.; in 'tZand, J. J. M.; Marshall, F. E.
2007-01-01
The original Table 1 ("Timing Parameters of XTE J1751-305") contains one error. The epoch of pulsar mean longitude 90° is incorrect due to a numerical conversion error in the preparation of the original table text. A corrected version of Table 1 is shown. For reference, the epoch of the ascending node is also included. The correct value was used in all of the analysis leading up to the paper. As T_90 is a purely fiducial reference time, the scientific conclusions of the paper are unchanged.
Janero, David R
2016-09-01
Drug discovery depends critically upon published results from the academy. The reproducibility of preclinical research findings reported by academia in the peer-reviewed literature has been called into question, seriously jeopardizing the value of academic science for inventing therapeutics. The corrosive effects of the reproducibility issue on drug discovery are considered. Purported correctives imposed upon academia from the outside deal mainly with expunging fraudulent literature and imposing punitive sanctions on the responsible authors. The salutary influence of such post facto actions on the reproducibility of discovery-relevant preclinical research data from academia appears limited. Rather, intentional doctoral-scientist education focused on data replicability and translationally-meaningful science and active participation of university entities charged with research innovation and asset commercialization toward ensuring data quality are advocated as key academic initiatives for addressing the reproducibility issue. A mindset shift on the part of both senior university faculty and the academy to take responsibility for the data reproducibility crisis and commit proactively to positive educational, incentivization, and risk- and reward-sharing practices will be fundamental for improving the value of published preclinical academic research to drug discovery.
Separate class true discovery rate degree of association sets for biomarker identification.
Crager, Michael R; Ahmed, Murat
2014-01-01
In 2008, Efron showed that biological features in a high-dimensional study can be divided into classes and a separate false discovery rate (FDR) analysis can be conducted in each class using information from the entire set of features to assess the FDR within each class. We apply this separate class approach to true discovery rate degree of association (TDRDA) set analysis, which is used in clinical-genomic studies to identify sets of biomarkers having strong association with clinical outcome or state while controlling the FDR. Careful choice of classes based on prior information can increase the identification power of the separate class analysis relative to the overall analysis.
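A minimal sketch of the separate-class principle, using the ordinary Benjamini-Hochberg procedure within each class, is shown below. The paper's TDRDA set machinery is more involved; this only illustrates the idea of running an independent FDR analysis per feature class.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Indices of discoveries under the BH procedure at FDR level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = p.size
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    return order[:k]

def separate_class_fdr(pvals, classes, q=0.05):
    """Run a separate FDR analysis within each feature class.

    pvals: per-feature p-values; classes: per-feature class labels.
    Returns a dict mapping each class label to the indices of its discoveries.
    """
    pvals, classes = np.asarray(pvals), np.asarray(classes)
    discoveries = {}
    for c in np.unique(classes):
        members = np.flatnonzero(classes == c)
        discoveries[c] = members[benjamini_hochberg(pvals[members], q)]
    return discoveries
```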
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
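The abstract does not give implementation details; the sketch below only illustrates the generic step of fitting a Gaussian process to a history of estimated error rates and extrapolating it forward, with made-up data and default kernel choices. It does not reproduce the authors' protocol or how the rates are extracted from syndrome data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Illustrative history: per-round error-rate estimates derived from past correction data.
rounds = np.arange(50).reshape(-1, 1)
observed_rate = 1e-3 + 2e-4 * np.sin(rounds.ravel() / 8.0) + 5e-5 * np.random.randn(50)

# GP with a smooth kernel plus a noise term; hyperparameters fit by maximum likelihood.
kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-8)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(rounds, observed_rate)

# Predict (with uncertainty) the error rate for upcoming rounds; such forecasts could
# then inform the decoder or the correction schedule.
future = np.arange(50, 60).reshape(-1, 1)
mean, std = gp.predict(future, return_std=True)
```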
Guilloux, Jean-Philippe; Bassi, Sabrina; Ding, Ying; Walsh, Chris; Turecki, Gustavo; Tseng, George; Cyranowski, Jill M; Sibille, Etienne
2015-02-01
Major depressive disorder (MDD) in general, and anxious-depression in particular, are characterized by poor rates of remission with first-line treatments, contributing to the chronic illness burden suffered by many patients. Prospective research is needed to identify the biomarkers predicting nonremission prior to treatment initiation. We collected blood samples from a discovery cohort of 34 adult MDD patients with co-occurring anxiety and 33 matched, nondepressed controls at baseline and after 12 weeks (of citalopram plus psychotherapy treatment for the depressed cohort). Samples were processed on gene arrays and group differences in gene expression were investigated. Exploratory analyses suggest that at pretreatment baseline, nonremitting patients differ from controls with gene function and transcription factor analyses potentially related to elevated inflammation and immune activation. In a second phase, we applied an unbiased machine learning prediction model and corrected for model-selection bias. Results show that baseline gene expression predicted nonremission with 79.4% corrected accuracy with a 13-gene model. The same gene-only model predicted nonremission after 8 weeks of citalopram treatment with 76% corrected accuracy in an independent validation cohort of 63 MDD patients treated with citalopram at another institution. Together, these results demonstrate the potential, but also the limitations, of baseline peripheral blood-based gene expression to predict nonremission after citalopram treatment. These results not only support their use in future prediction tools but also suggest that increased accuracy may be obtained with the inclusion of additional predictors (eg, genetics and clinical scales).
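Correcting a predictive accuracy estimate for model-selection bias is commonly done with nested cross-validation; the abstract does not specify the authors' exact correction, so the following is a generic sketch with an arbitrary classifier and hyperparameter grid.

```python
from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def nested_cv_accuracy(X, y, n_outer=5, n_inner=5, random_state=0):
    """Generic nested cross-validation: model selection happens only inside the
    inner loop, so the outer-loop accuracy is not inflated by selection bias.
    X: (samples, genes) expression matrix; y: binary remission labels."""
    inner = StratifiedKFold(n_splits=n_inner, shuffle=True, random_state=random_state)
    outer = StratifiedKFold(n_splits=n_outer, shuffle=True, random_state=random_state)

    model = make_pipeline(StandardScaler(), LogisticRegression(penalty="l1", solver="liblinear"))
    grid = GridSearchCV(model, {"logisticregression__C": [0.01, 0.1, 1.0, 10.0]}, cv=inner)
    return cross_val_score(grid, X, y, cv=outer).mean()
```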
Avershina, Ekaterina; Ravi, Anuradha; Storrø, Ola; Øien, Torbjørn; Johnsen, Roar; Rudi, Knut
2015-12-21
Westernized lifestyle and hygienic behavior have contributed to dramatic changes in the human-associated microbiota. This particularly relates to indoor activities such as house cleaning. We therefore investigated the associations between washing and vacuum cleaning frequency and the gut microbiota composition in a large longitudinal cohort of mothers and their children. The gut microbiota composition was determined using 16S ribosomal RNA (rRNA) gene Illumina deep sequencing. We found that a high vacuum cleaning frequency (about twice a week or more) was associated with an altered gut microbiota composition both during pregnancy and for 2-year-old children, while there were no associations with house washing frequency. In total, six Operational Taxonomic Units (OTUs) showed significant False Discovery Rate (FDR) corrected associations with vacuum cleaning frequency for mothers (two positive and four negative) and five for 2-year-old children (four positive and one negative). For mothers and the 2-year-old children, OTUs among the dominant microbiota (average >5%) showed correlation to vacuum cleaning frequency, with an increase in Faecalibacterium prausnitzii for mothers (p = 0.013, FDR corrected), and Blautia sp. for 2-year-old children (p = 0.012, FDR corrected). Bacteria showing significant associations are among the dominant gut microbiota, which may indicate indirect immunomodulation of the gut microbiota possibly through increased allergen (dust mites) exposure as a potential mechanism. However, further exploration is needed to unveil mechanistic details.
Nonstandard Yukawa couplings and Higgs portal dark matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishara, Fady; Brod, Joachim; Uttayarat, Patipan
We study the implications of non-standard Higgs Yukawa couplings to light quarks on Higgs-portal dark matter phenomenology. Saturating the present experimental bounds on up-quark, down-quark, or strange-quark Yukawa couplings, the predicted direct dark matter detection scattering rate can increase by up to four orders of magnitude. The effect on the dark matter annihilation cross-section, on the other hand, is subleading unless the dark matter is very light — a scenario that is already excluded by measurements of the Higgs invisible decay width. We investigate the expected size of corrections in multi-Higgs-doublet models with natural flavor conservation, the type-II two-Higgs-doublet model, the Giudice-Lebedev model of light quark masses, minimal flavor violation new physics models, Randall-Sundrum, and composite Higgs models. We find that an enhancement in the dark matter scattering rate of an order of magnitude is possible. In conclusion, we point out that a discovery of Higgs-portal dark matter could lead to interesting bounds on the light-quark Yukawa couplings.
Nonstandard Yukawa couplings and Higgs portal dark matter
Bishara, Fady; Brod, Joachim; Uttayarat, Patipan; ...
2016-01-04
We study the implications of non-standard Higgs Yukawa couplings to light quarks on Higgs-portal dark matter phenomenology. Saturating the present experimental bounds on up-quark, down-quark, or strange-quark Yukawa couplings, the predicted direct dark matter detection scattering rate can increase by up to four orders of magnitude. The effect on the dark matter annihilation cross-section, on the other hand, is subleading unless the dark matter is very light — a scenario that is already excluded by measurements of the Higgs invisible decay width. We investigate the expected size of corrections in multi-Higgs-doublet models with natural flavor conservation, the type-II two-Higgs-doublet model, the Giudice-Lebedev model of light quark masses, minimal flavor violation new physics models, Randall-Sundrum, and composite Higgs models. We find that an enhancement in the dark matter scattering rate of an order of magnitude is possible. In conclusion, we point out that a discovery of Higgs-portal dark matter could lead to interesting bounds on the light-quark Yukawa couplings.
The 2015 Nobel Prize in Chemistry The Discovery of Essential Mechanisms that Repair DNA Damage.
Lindahl, Tomas; Modrich, Paul; Sancar, Aziz
2016-01-01
The Royal Swedish Academy awarded the Nobel Prize in Chemistry for 2015 to Tomas Lindahl, Paul Modrich and Aziz Sancar for their discoveries in fundamental mechanisms of DNA repair. This pioneering research described three different essential pathways that correct DNA damage, safeguard the integrity of the genetic code to ensure its accurate replication through generations, and allow proper cell division. Working independently of each other, Tomas Lindahl, Paul Modrich and Aziz Sancar delineated the mechanisms of base excision repair, mismatch repair and nucleotide excision repair, respectively. These breakthroughs challenged and dismissed the early view that the DNA molecule was very stable, paving the way for the discovery of human hereditary diseases associated with distinct DNA repair deficiencies and a susceptibility to cancer. It also brought a deeper understanding of cancer as well as neurodegenerative or neurological diseases, and led to novel strategies to treat cancer.
Low-z Type Ia Supernova Calibration
NASA Astrophysics Data System (ADS)
Hamuy, Mario
The discovery of acceleration and dark energy in 1998 arguably constitutes one of the most revolutionary discoveries in astrophysics in recent years. This paradigm shift was possible thanks to one of the most traditional cosmological tests: the redshift-distance relation between galaxies. This discovery was based on a differential measurement of the expansion rate of the universe: the current one provided by nearby (low-z) type Ia supernovae and the one in the past measured from distant (high-z) supernovae. This paper focuses on the first part of this journey: the calibration of the type Ia supernova luminosities and the local expansion rate of the universe, which was made possible thanks to the introduction of digital CCD (charge-coupled device) photometry. The new technology permitted us in the early 1990s to convert supernovae into precise tools to measure extragalactic distances through two key surveys: (1) the "Tololo Supernova Program" which made possible the critical discovery of the "peak luminosity-decline rate" relation for type Ia supernovae, the key underlying idea today behind precise cosmology from supernovae, and (2) the Calán/Tololo project which provided the low-z type Ia supernova sample for the discovery of acceleration.
NASA Astrophysics Data System (ADS)
Yavuz, Hande; Bai, Jinbo
2018-06-01
This paper deals with the dielectric barrier discharge assisted continuous plasma polypyrrole deposition on CNT-grafted carbon fibers for conductive composite applications. The simultaneous effects of three controllable factors on the electrical resistivity (ER) of these two material systems have been studied based on a multivariate experimental design methodology. A posterior probability based on the Benjamini-Hochberg (BH) false discovery rate was explored as a multiple-testing correction of the t-test p values. A BH significance threshold of 0.05 produced truly statistically significant coefficients describing the ER of the two material systems. A group of plasma-modified samples was used for composite manufacturing to assess interlaminar shear properties under static loading. Transversal and longitudinal electrical resistivity (DC, ω = 0) of the composite samples were studied to compare the effects of CNT grafting and plasma modification on the ER of the resultant composites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, Abhik, E-mail: abhik.mukherjee@saha.ac.in; Janaki, M. S., E-mail: ms.janaki@saha.ac.in; Kundu, Anjan, E-mail: anjan.kundu@saha.ac.in
2015-07-15
A new, completely integrable, two dimensional evolution equation is derived for an ion acoustic wave propagating in a magnetized, collisionless plasma. The equation is a multidimensional generalization of a modulated wavepacket with weak transverse propagation, which has resemblance to nonlinear Schrödinger (NLS) equation and has a connection to Kadomtsev-Petviashvili equation through a constraint relation. Higher soliton solutions of the equation are derived through Hirota bilinearization procedure, and an exact lump solution is calculated exhibiting 2D structure. Some mathematical properties demonstrating the completely integrable nature of this equation are described. Modulational instability using nonlinear frequency correction is derived, and the corresponding growth rate is calculated, which shows the directional asymmetry of the system. The discovery of this novel (2+1) dimensional integrable NLS type equation for a magnetized plasma should pave a new direction of research in the field.
A fully-automatic fast segmentation of the sub-basal layer nerves in corneal images.
Guimarães, Pedro; Wigdahl, Jeff; Poletti, Enea; Ruggeri, Alfredo
2014-01-01
Corneal nerve changes have been linked to damage caused by surgical interventions or prolonged contact lens wear. Furthermore, nerve tortuosity has been shown to correlate with the severity of diabetic neuropathy. For these reasons there has been an increasing interest in the analysis of these structures. In this work we propose a novel, robust, and fast fully automatic algorithm capable of tracing the sub-basal plexus nerves from human corneal confocal images. We resort to log-Gabor filters and support vector machines to trace the corneal nerves. The proposed algorithm traced most of the corneal nerves correctly (sensitivity of 0.88 ± 0.06 and false discovery rate of 0.08 ± 0.06). The displayed performance is comparable to a human grader. We believe that the achieved processing time (0.661 ± 0.07 s) and tracing quality are major advantages for daily clinical practice.
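The log-Gabor filter itself has a standard frequency-domain definition; a minimal 2-D construction of its radial part is sketched below with illustrative parameter values. The published pipeline additionally uses orientation selectivity and an SVM classifier, neither of which is shown here.

```python
import numpy as np

def log_gabor_radial(shape, f0=0.1, sigma_ratio=0.55):
    """Radial log-Gabor transfer function G(f) = exp(-(log(f/f0))^2 / (2 log(sigma)^2)).

    shape: image size (rows, cols); f0: centre frequency in cycles/pixel;
    sigma_ratio: bandwidth parameter. Values are illustrative only.
    """
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0  # avoid log(0); the DC component is zeroed below
    G = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    G[0, 0] = 0.0
    return G

def log_gabor_response(image, **kwargs):
    """Complex filter response; its magnitude highlights line-like structures such as nerves."""
    G = log_gabor_radial(image.shape, **kwargs)
    return np.fft.ifft2(np.fft.fft2(image) * G)
```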
NASA Astrophysics Data System (ADS)
Yavuz, Hande; Bai, Jinbo
2017-09-01
This paper deals with the dielectric barrier discharge assisted continuous plasma polypyrrole deposition on CNT-grafted carbon fibers for conductive composite applications. The simultaneous effects of three controllable factors on the electrical resistivity (ER) of these two material systems have been studied based on a multivariate experimental design methodology. A posterior probability based on the Benjamini-Hochberg (BH) false discovery rate was explored as a multiple-testing correction of the t-test p values. A BH significance threshold of 0.05 produced truly statistically significant coefficients describing the ER of the two material systems. A group of plasma-modified samples was used for composite manufacturing to assess interlaminar shear properties under static loading. Transversal and longitudinal electrical resistivity (DC, ω = 0) of the composite samples were studied to compare the effects of CNT grafting and plasma modification on the ER of the resultant composites.
Asymptotics of empirical eigenstructure for high dimensional spiked covariance.
Wang, Weichen; Fan, Jianqing
2017-06-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
Asymptotics of empirical eigenstructure for high dimensional spiked covariance
Wang, Weichen
2017-01-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies. PMID:28835726
McBride, Christopher; Cheruvallath, Zacharia; Komandla, Mallareddy; Tang, Mingnam; Farrell, Pamela; Lawson, J David; Vanderpool, Darin; Wu, Yiqin; Dougan, Douglas R; Plonowski, Artur; Holub, Corine; Larson, Chris
2016-06-15
Methionine aminopeptidase-2 (MetAP2) is an enzyme that cleaves an N-terminal methionine residue from a number of newly synthesized proteins. This step is required before they will fold or function correctly. Pre-clinical and clinical studies with a MetAP2 inhibitor suggest that they could be used as a novel treatment for obesity. Herein we describe the discovery of a series of pyrazolo[4,3-b]indoles as reversible MetAP2 inhibitors. A fragment-based drug discovery (FBDD) approach was used, beginning with the screening of fragment libraries to generate hits with high ligand-efficiency (LE). An indazole core was selected for further elaboration, guided by structural information. SAR from the indazole series led to the design of a pyrazolo[4,3-b]indole core and accelerated knowledge-based fragment growth resulted in potent and efficient MetAP2 inhibitors, which have shown robust and sustainable body weight loss in DIO mice when dosed orally. Copyright © 2016 Elsevier Ltd. All rights reserved.
PERSONAL AND CIRCUMSTANTIAL FACTORS INFLUENCING THE ACT OF DISCOVERY.
ERIC Educational Resources Information Center
OSTRANDER, EDWARD R.
HOW STUDENTS SAY THEY LEARN WAS INVESTIGATED. INTERVIEWS WITH A RANDOM SAMPLE OF 74 WOMEN STUDENTS POSED QUESTIONS ABOUT THE NATURE, FREQUENCY, PATTERNS, AND CIRCUMSTANCES UNDER WHICH ACTS OF DISCOVERY TAKE PLACE IN THE ACADEMIC SETTING. STUDENTS WERE ASSIGNED DISCOVERY RATINGS BASED ON READINGS OF TYPESCRIPTS. EACH STUDENT WAS CLASSIFIED AND…
78 FR 54380 - Airworthiness Directives; Eurocopter France Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-04
... they are misaligned. This AD is prompted by the discovery of a loose nut on the tail rotor control stop... nut or a misaligned stop screw, which, if not corrected, could limit yaw authority, and consequently... adjusting the screws if they are misaligned. The proposed requirements were intended to detect a loose nut...
78 FR 60656 - Airworthiness Directives; Sikorsky Aircraft Corporation Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-02
... firewall center fire extinguisher discharge tube (No. 1 engine tube) and inspecting the outboard discharge tube to determine if it is correctly positioned. This AD was prompted by the discovery that the No. 1 engine tube installed on the helicopters is too long to ensure that a fire could be effectively...
Mindfulness as an organizational capability: Evidence from wildland firefighting
Michelle Barton; Kathleen Sutcliffe
2008-01-01
Mindful organizing has been proposed as an adaptive form for unpredictable, unknowable environments. Mindfulness induces a rich awareness of details and facilitates the discovery and correction of ill-structured contingencies so that adaptations can be made as action unfolds. Although these ideas are appealing, empirical studies examining mindfulness and its effects...
FRB180311: AstroSat CZTI upper limits and correction to FRB180301 upper limits
NASA Astrophysics Data System (ADS)
Anumarlapudi, A.; Aarthy, E.; Arvind, B.; Bhalerao, V.; Bhattacharya, D.; Rao, A. R.; Vadawale, S.
2018-03-01
We carried out offline analysis of data from Astrosat CZTI in a 200 second window centred on the FRB 180311 (Parkes discovery - Oslowski, S. et al., ATEL #11396) trigger time, 2018-03-11 04:11:54.80 UTC, to look for any coincident hard X-ray flash.
Earl Sutherland (1915-1974) [corrected] and the discovery of cyclic AMP.
Blumenthal, Stanley A
2012-01-01
In 1945, Earl Sutherland (1915-1974) [corrected] and associates began studies of the mechanism of hormone-induced glycogen breakdown in the liver. In 1956, their efforts culminated in the identification of cyclic AMP, an ancient molecule generated in many cell types in response to hormonal and other extracellular signals. Cyclic AMP, the original "second messenger," transmits such signals through pathways that regulate a diversity of cellular functions and capabilities: metabolic processes such as lipolysis and glycogenolysis; hormone secretion; the permeability of ion channels; gene expression; cell proliferation and survival. Indeed, it can be argued that the discovery of cyclic AMP initiated the study of intracellular signaling pathways, a major focus of contemporary biomedical inquiry. This review presents relevant details of Sutherland's career; summarizes key contributions of his mentors, Carl and Gerti Cori, to the knowledge of glycogen metabolism (contributions that were the foundation for his own research); describes the experiments that led to his identification, isolation, and characterization of cyclic AMP; assesses the significance of his work; and considers some aspects of the impact of cyclic nucleotide research on clinical medicine.
False Discovery Control in Large-Scale Spatial Multiple Testing
Sun, Wenguang; Reich, Brian J.; Cai, T. Tony; Guindani, Michele; Schwartzman, Armin
2014-01-01
This article develops a unified theoretical and computational framework for false discovery control in multiple testing of spatial signals. We consider both point-wise and cluster-wise spatial analyses, and derive oracle procedures which optimally control the false discovery rate, false discovery exceedance and false cluster rate, respectively. A data-driven finite approximation strategy is developed to mimic the oracle procedures on a continuous spatial domain. Our multiple testing procedures are asymptotically valid and can be effectively implemented using Bayesian computational algorithms for analysis of large spatial data sets. Numerical results show that the proposed procedures lead to more accurate error control and better power performance than conventional methods. We demonstrate our methods for analyzing the time trends in tropospheric ozone in eastern US. PMID:25642138
Chung, Wendy K.; Patki, Amit; Matsuoka, Naoki; Boyer, Bert B.; Liu, Nianjun; Musani, Solomon K.; Goropashnaya, Anna V.; Tan, Perciliz L.; Katsanis, Nicholas; Johnson, Stephen B.; Gregersen, Peter K.; Allison, David B.; Leibel, Rudolph L.; Tiwari, Hemant K.
2009-01-01
Objective Human adiposity is highly heritable, but few of the genes that predispose to obesity in most humans are known. We tested candidate genes in pathways related to food intake and energy expenditure for association with measures of adiposity. Methods We studied 355 genetic variants in 30 candidate genes in 7 molecular pathways related to obesity in two groups of adult subjects: 1,982 unrelated European Americans living in the New York metropolitan area drawn from the extremes of their body mass index (BMI) distribution and 593 related Yup'ik Eskimos living in rural Alaska characterized for BMI, body composition, waist circumference, and skin fold thicknesses. Data were analyzed by using a mixed model in conjunction with a false discovery rate (FDR) procedure to correct for multiple testing. Results After correcting for multiple testing, two single nucleotide polymorphisms (SNPs) in Ghrelin (GHRL) (rs35682 and rs35683) were associated with BMI in the New York European Americans. This association was not replicated in the Yup'ik participants. There was no evidence for gene × gene interactions among genes within the same molecular pathway after adjusting for multiple testing via FDR control procedure. Conclusion Genetic variation in GHRL may have a modest impact on BMI in European Americans. PMID:19077438
Chung, Wendy K; Patki, Amit; Matsuoka, Naoki; Boyer, Bert B; Liu, Nianjun; Musani, Solomon K; Goropashnaya, Anna V; Tan, Perciliz L; Katsanis, Nicholas; Johnson, Stephen B; Gregersen, Peter K; Allison, David B; Leibel, Rudolph L; Tiwari, Hemant K
2009-01-01
Human adiposity is highly heritable, but few of the genes that predispose to obesity in most humans are known. We tested candidate genes in pathways related to food intake and energy expenditure for association with measures of adiposity. We studied 355 genetic variants in 30 candidate genes in 7 molecular pathways related to obesity in two groups of adult subjects: 1,982 unrelated European Americans living in the New York metropolitan area drawn from the extremes of their body mass index (BMI) distribution and 593 related Yup'ik Eskimos living in rural Alaska characterized for BMI, body composition, waist circumference, and skin fold thicknesses. Data were analyzed by using a mixed model in conjunction with a false discovery rate (FDR) procedure to correct for multiple testing. After correcting for multiple testing, two single nucleotide polymorphisms (SNPs) in Ghrelin (GHRL) (rs35682 and rs35683) were associated with BMI in the New York European Americans. This association was not replicated in the Yup'ik participants. There was no evidence for gene x gene interactions among genes within the same molecular pathway after adjusting for multiple testing via FDR control procedure. Genetic variation in GHRL may have a modest impact on BMI in European Americans.
Tsanas, Athanasios; Clifford, Gari D
2015-01-01
Sleep spindles are critical in characterizing sleep and have been associated with cognitive function and pathophysiological assessment. Typically, their detection relies on the subjective and time-consuming visual examination of electroencephalogram (EEG) signal(s) by experts, and has led to large inter-rater variability as a result of poor definition of sleep spindle characteristics. Hitherto, many algorithmic spindle detectors inherently make signal stationarity assumptions (e.g., Fourier transform-based approaches) which are inappropriate for EEG signals, and frequently rely on additional information which may not be readily available in many practical settings (e.g., more than one EEG channels, or prior hypnogram assessment). This study proposes a novel signal processing methodology relying solely on a single EEG channel, and provides objective, accurate means toward probabilistically assessing the presence of sleep spindles in EEG signals. We use the intuitively appealing continuous wavelet transform (CWT) with a Morlet basis function, identifying regions of interest where the power of the CWT coefficients corresponding to the frequencies of spindles (11-16 Hz) is large. The potential for assessing the signal segment as a spindle is refined using local weighted smoothing techniques. We evaluate our findings on two databases: the MASS database comprising 19 healthy controls and the DREAMS sleep spindle database comprising eight participants diagnosed with various sleep pathologies. We demonstrate that we can replicate the experts' sleep spindles assessment accurately in both databases (MASS database: sensitivity: 84%, specificity: 90%, false discovery rate 83%, DREAMS database: sensitivity: 76%, specificity: 92%, false discovery rate: 67%), outperforming six competing automatic sleep spindle detection algorithms in terms of correctly replicating the experts' assessment of detected spindles.
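The core step, a Morlet continuous wavelet transform restricted to the 11-16 Hz spindle band, can be sketched as follows using PyWavelets (assumed available). The number of scales, the band edges, and the averaging are illustrative; the published method's local weighted smoothing and probabilistic assessment are omitted.

```python
import numpy as np
import pywt

def spindle_band_power(eeg, fs, f_lo=11.0, f_hi=16.0, n_scales=12):
    """Continuous wavelet transform (Morlet) power in the sleep-spindle band.

    eeg: single-channel EEG samples; fs: sampling rate in Hz. Returns a
    per-sample power trace in the 11-16 Hz band; peaks in this trace are the
    raw material for spindle candidates.
    """
    freqs = np.linspace(f_lo, f_hi, n_scales)
    # Convert target frequencies to wavelet scales for the Morlet mother wavelet.
    scales = pywt.central_frequency("morl") * fs / freqs
    coeffs, _ = pywt.cwt(eeg, scales, "morl", sampling_period=1.0 / fs)
    return (np.abs(coeffs) ** 2).mean(axis=0)   # average power across band frequencies

# Usage: power = spindle_band_power(eeg_signal, fs=256); candidate spindles are
# regions where 'power' is large relative to a local baseline.
```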
Podgoreanu, M V; White, W D; Morris, R W; Mathew, J P; Stafford-Smith, M; Welsby, I J; Grocott, H P; Milano, C A; Newman, M F; Schwinn, D A
2006-07-04
The inflammatory response triggered by cardiac surgery with cardiopulmonary bypass (CPB) is a primary mechanism in the pathogenesis of postoperative myocardial infarction (PMI), a multifactorial disorder with significant inter-patient variability poorly predicted by clinical and procedural factors. We tested the hypothesis that candidate gene polymorphisms in inflammatory pathways contribute to risk of PMI after cardiac surgery. We genotyped 48 polymorphisms from 23 candidate genes in a prospective cohort of 434 patients undergoing elective cardiac surgery with CPB. PMI was defined as creatine kinase-MB isoenzyme level ≥ 10× upper limit of normal at 24 hours postoperatively. A 2-step analysis strategy was used: marker selection, followed by model building. To minimize false-positive associations, we adjusted for multiple testing by permutation analysis, Bonferroni correction, and controlling the false discovery rate; 52 patients (12%) experienced PMI. After adjusting for multiple comparisons and clinical risk factors, 3 polymorphisms were found to be independent predictors of PMI (adjusted P<0.05; false discovery rate <10%). These gene variants encode the proinflammatory cytokine interleukin 6 (IL6 -572G>C; odds ratio [OR], 2.47), and 2 adhesion molecules: intercellular adhesion molecule-1 (ICAM1 Lys469Glu; OR, 1.88), and E-selectin (SELE 98G>T; OR, 0.16). The inclusion of genotypic information from these polymorphisms improved prediction models for PMI based on traditional risk factors alone (C-statistic 0.764 versus 0.703). Functional genetic variants in cytokine and leukocyte-endothelial interaction pathways are independently associated with severity of myonecrosis after cardiac surgery. This may aid in preoperative identification of high-risk cardiac surgical patients and development of novel cardioprotective strategies.
Bias in Research Grant Evaluation Has Dire Consequences for Small Universities.
Murray, Dennis L; Morris, Douglas; Lavoie, Claude; Leavitt, Peter R; MacIsaac, Hugh; Masson, Michael E J; Villard, Marc-Andre
2016-01-01
Federal funding for basic scientific research is the cornerstone of societal progress, economy, health and well-being. There is a direct relationship between financial investment in science and a nation's scientific discoveries, making it a priority for governments to distribute public funding appropriately in support of the best science. However, research grant proposal success rate and funding level can be skewed toward certain groups of applicants, and such skew may be driven by systemic bias arising during grant proposal evaluation and scoring. Policies to best redress this problem are not well established. Here, we show that funding success and grant amounts for applications to Canada's Natural Sciences and Engineering Research Council (NSERC) Discovery Grant program (2011-2014) are consistently lower for applicants from small institutions. This pattern persists across applicant experience levels, is consistent among three criteria used to score grant proposals, and therefore is interpreted as representing systemic bias targeting applicants from small institutions. When current funding success rates are projected forward, forecasts reveal that future science funding at small schools in Canada will decline precipitously in the next decade, if skews are left uncorrected. We show that a recently-adopted pilot program to bolster success by lowering standards for select applicants from small institutions will not erase funding skew, nor will several other post-evaluation corrective measures. Rather, to support objective and robust review of grant applications, it is necessary for research councils to address evaluation skew directly, by adopting procedures such as blind review of research proposals and bibliometric assessment of performance. Such measures will be important in restoring confidence in the objectivity and fairness of science funding decisions. Likewise, small institutions can improve their research success by more strongly supporting productive researchers and developing competitive graduate programming opportunities.
Bias in Research Grant Evaluation Has Dire Consequences for Small Universities
Murray, Dennis L.; Morris, Douglas; Lavoie, Claude; Leavitt, Peter R.; MacIsaac, Hugh; Masson, Michael E. J.; Villard, Marc-Andre
2016-01-01
Federal funding for basic scientific research is the cornerstone of societal progress, economy, health and well-being. There is a direct relationship between financial investment in science and a nation’s scientific discoveries, making it a priority for governments to distribute public funding appropriately in support of the best science. However, research grant proposal success rate and funding level can be skewed toward certain groups of applicants, and such skew may be driven by systemic bias arising during grant proposal evaluation and scoring. Policies to best redress this problem are not well established. Here, we show that funding success and grant amounts for applications to Canada’s Natural Sciences and Engineering Research Council (NSERC) Discovery Grant program (2011–2014) are consistently lower for applicants from small institutions. This pattern persists across applicant experience levels, is consistent among three criteria used to score grant proposals, and therefore is interpreted as representing systemic bias targeting applicants from small institutions. When current funding success rates are projected forward, forecasts reveal that future science funding at small schools in Canada will decline precipitously in the next decade, if skews are left uncorrected. We show that a recently-adopted pilot program to bolster success by lowering standards for select applicants from small institutions will not erase funding skew, nor will several other post-evaluation corrective measures. Rather, to support objective and robust review of grant applications, it is necessary for research councils to address evaluation skew directly, by adopting procedures such as blind review of research proposals and bibliometric assessment of performance. Such measures will be important in restoring confidence in the objectivity and fairness of science funding decisions. Likewise, small institutions can improve their research success by more strongly supporting productive researchers and developing competitive graduate programming opportunities. PMID:27258385
Estimating the rate of biological introductions: Lessepsian fishes in the Mediterranean.
Belmaker, Jonathan; Brokovich, Eran; China, Victor; Golani, Daniel; Kiflawi, Moshe
2009-04-01
Sampling issues preclude the direct use of the discovery rate of exotic species as a robust estimate of their rate of introduction. Recently, a method was advanced that allows maximum-likelihood estimation of both the observational probability and the introduction rate from the discovery record. Here, we propose an alternative approach that utilizes the discovery record of native species to control for sampling effort. Implemented in a Bayesian framework using Markov chain Monte Carlo simulations, the approach provides estimates of the rate of introduction of the exotic species, and of additional parameters such as the size of the species pool from which they are drawn. We illustrate the approach using Red Sea fishes recorded in the eastern Mediterranean, after crossing the Suez Canal, and show that the two approaches may lead to different conclusions. The analytical framework is highly flexible and could provide a basis for easy modification to other systems for which first-sighting data on native and introduced species are available.
Shteynberg, David; Deutsch, Eric W.; Lam, Henry; Eng, Jimmy K.; Sun, Zhi; Tasman, Natalie; Mendoza, Luis; Moritz, Robert L.; Aebersold, Ruedi; Nesvizhskii, Alexey I.
2011-01-01
The combination of tandem mass spectrometry and sequence database searching is the method of choice for the identification of peptides and the mapping of proteomes. Over the last several years, the volume of data generated in proteomic studies has increased dramatically, which challenges the computational approaches previously developed for these data. Furthermore, a multitude of search engines have been developed that identify different, overlapping subsets of the sample peptides from a particular set of tandem mass spectrometry spectra. We present iProphet, the new addition to the widely used open-source suite of proteomic data analysis tools Trans-Proteomics Pipeline. Applied in tandem with PeptideProphet, it provides more accurate representation of the multilevel nature of shotgun proteomic data. iProphet combines the evidence from multiple identifications of the same peptide sequences across different spectra, experiments, precursor ion charge states, and modified states. It also allows accurate and effective integration of the results from multiple database search engines applied to the same data. The use of iProphet in the Trans-Proteomics Pipeline increases the number of correctly identified peptides at a constant false discovery rate as compared with both PeptideProphet and another state-of-the-art tool Percolator. As the main outcome, iProphet permits the calculation of accurate posterior probabilities and false discovery rate estimates at the level of sequence identical peptide identifications, which in turn leads to more accurate probability estimates at the protein level. Fully integrated with the Trans-Proteomics Pipeline, it supports all commonly used MS instruments, search engines, and computer platforms. The performance of iProphet is demonstrated on two publicly available data sets: data from a human whole cell lysate proteome profiling experiment representative of typical proteomic data sets, and from a set of Streptococcus pyogenes experiments more representative of organism-specific composite data sets. PMID:21876204
Minimizing DILI risk in drug discovery - A screening tool for drug candidates.
Schadt, S; Simon, S; Kustermann, S; Boess, F; McGinnis, C; Brink, A; Lieven, R; Fowler, S; Youdim, K; Ullah, M; Marschmann, M; Zihlmann, C; Siegrist, Y M; Cascais, A C; Di Lenarda, E; Durr, E; Schaub, N; Ang, X; Starke, V; Singer, T; Alvarez-Sanchez, R; Roth, A B; Schuler, F; Funk, C
2015-12-25
Drug-induced liver injury (DILI) is a leading cause of acute hepatic failure and a major reason for market withdrawal of drugs. Idiosyncratic DILI is multifactorial, with unclear dose-dependency and poor predictability since the underlying patient-related susceptibilities are not sufficiently understood. Because of these limitations, a pharmaceutical research option would be to reduce the compound-related risk factors in the drug-discovery process. Here we describe the development and validation of a methodology for the assessment of DILI risk of drug candidates. As a training set, 81 marketed or withdrawn compounds with differing DILI rates - according to the FDA categorization - were tested in a combination of assays covering different mechanisms and endpoints contributing to human DILI. These include the generation of reactive metabolites (CYP3A4 time-dependent inhibition and glutathione adduct formation), inhibition of the human bile salt export pump (BSEP), mitochondrial toxicity and cytotoxicity (fibroblasts and human hepatocytes). Different approaches for dose- and exposure-based calibrations were assessed and the same parameters applied to a test set of 39 different compounds. We achieved a similar performance to the training set with an overall accuracy of 79% correctly predicted, a sensitivity of 76% and a specificity of 82%. This test system may be applied in a prospective manner to reduce the risk of idiosyncratic DILI of drug candidates. Copyright © 2015 Elsevier B.V. All rights reserved.
Evaluation of PET Scanner Performance in PET/MR and PET/CT Systems: NEMA Tests.
Demir, Mustafa; Toklu, Türkay; Abuqbeitah, Mohammad; Çetin, Hüseyin; Sezgin, H Sezer; Yeyin, Nami; Sönmezoğlu, Kerim
2018-02-01
The aim of the present study was to compare the performance of the positron emission tomography (PET) component of PET/computed tomography (CT) with that of the newly emerging PET/magnetic resonance (MR) system from the same vendor. According to the National Electrical Manufacturers Association (NEMA) NU2-07 standard, five separate experimental tests were performed to evaluate the performance of the PET scanners of two General Electric (GE) systems: the SIGNA PET/MR and the Discovery 710 PET/CT. The main investigated aspects were spatial resolution, sensitivity, scatter fraction, count rate performance, image quality, count loss, and random events correction accuracy. The findings of this study demonstrated the superior sensitivity (approximately 4-fold) of the PET scanner in the PET/MR system compared with the PET/CT system. The image quality test exhibited higher contrast (~9%) in PET/MR than in PET/CT. The scatter fraction of PET/MR was 43.4% at a noise equivalent count rate (NECR) peak of 218 kcps, with a corresponding activity concentration of 17.7 kBq/cc, whereas the scatter fraction of PET/CT was 39.2% at an NECR peak of 72 kcps and an activity concentration of 24.3 kBq/cc. The percentage error of the random event correction accuracy was 3.4% and 3.1% in PET/MR and PET/CT, respectively. It was concluded that the PET/MR system is about 4 times more sensitive than PET/CT, and that the contrast of hot lesions in PET/MR was ~9% higher than in PET/CT. These outcomes also emphasize the possibility of achieving excellent clinical PET images with a low administered dose and/or a short acquisition time in PET/MR.
Zhang, Jiangtao; Guo, Zhongwei; Liu, Xiaozheng; Jia, Xize; Li, Jiapeng; Li, Yaoyao; Lv, Danmei; Chen, Wei
2017-01-01
Background Depressive symptoms are significant and very common psychiatric complications in patients with Alzheimer’s disease (AD), which can aggravate the decline in social function. However, changes in the functional connectivity (FC) of the brain in AD patients with depressive symptoms (D-AD) remain unclear. Objective To investigate whether any differences exist in the FC of the posterior cingulate cortex (PCC) between D-AD patients and non-depressed AD patients (nD-AD). Materials and methods We recruited 15 D-AD patients and 17 age-, sex-, educational level-, and Mini-Mental State Examination (MMSE)-matched nD-AD patients to undergo tests using the Neuropsychiatric Inventory, Hamilton Depression Rating Scale, and 3.0T resting-state functional magnetic resonance imaging. Bilateral PCC were selected as the regions of interest and between-group differences in the PCC FC network were assessed using Student’s t-test. Results Compared with the nD-AD group, D-AD patients showed increased PCC FC in the right amygdala, right parahippocampus, right superior temporal pole, right middle temporal lobe, right middle temporal pole, and right hippocampus (AlphaSim correction; P<0.05). In the nD-AD group, MMSE scores were positively correlated with PCC FC in the right superior temporal pole and right hippocampus (false discovery rate corrected; P<0.05). Conclusion Differences were detected in PCC FC between nD-AD and D-AD patients, which may be related to depressive symptoms. Our study provides a significant enhancement to our understanding of the functional mechanisms underlying D-AD. PMID:29066900
Spacewatch search for near-Earth asteroids
NASA Technical Reports Server (NTRS)
Gehreis, Tom
1991-01-01
The objective of the Spacewatch Program is to develop new techniques for the discovery of near-earth asteroids and to prove the efficiency of the techniques. Extensive experience was obtained with the 0.91-m Spacewatch Telescope on Kitt Peak that now has the largest CCD detector in the world: a Tektronix 2048 x 2048 with 27-micron pixel size. During the past year, software and hardware for optimizing the discovery of near-earth asteroids were installed. As a result, automatic detection of objects that move with rates between 0.1 and 4 degrees per day has become routine since September 1990. Apparently, one or two near-earth asteroids are discovered per month, on average. The follow up is with astrometry over as long an arc as the geometry and faintness of the object allow, typically three months following the discovery observations. During the second half of 1990, replacing the 0.91-m mirror with a larger one, to increase the discovery rate, was considered. Studies and planning for this switch are proposed for funding during the coming year. It was also proposed that the Spacewatch Telescope be turned on the sky, instead of having the drive turned off, in order to increase the rate of discoveries by perhaps a factor of two.
Discovery of a new quasar: SDSS J022155.26-064916.6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, J. M.; Tucker, D. L.; Smith, J. A.
Here, we report the discovery of a new quasar: SDSS J022155.26-064916.6. This object was discovered while reducing the spectra of a sample of stars being considered as spectrophotometric standards for the Dark Energy Survey. The flux- and wavelength-calibrated spectrum is presented with four spectral lines identified. From these lines, the redshift is determined to be z≈0.806. In addition, the rest-frame u-, g-, and r-band luminosities, determined using a k-correction obtained with synthetic photometry of a proxy quasi-stellar object (QSO), are reported as 7.496×10^13 L☉, 2.049×10^13 L☉, and 1.896×10^13 L☉, respectively.
Geeleher, Paul; Cox, Nancy J; Huang, R Stephanie
2016-09-21
We show that variability in general levels of drug sensitivity in pre-clinical cancer models confounds biomarker discovery. However, using a very large panel of cell lines, each treated with many drugs, we could estimate a general level of sensitivity to all drugs in each cell line. By conditioning on this variable, biomarkers were identified that were more likely to be effective in clinical trials than those identified using a conventional uncorrected approach. We find that differences in general levels of drug sensitivity are driven by biologically relevant processes. We developed a gene expression based method that can be used to correct for this confounder in future studies.
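The "conditioning" idea described above can be sketched in a hedged way as follows (this is not the authors' published method): summarize each cell line's general drug sensitivity, for example as its mean response across all drugs in the panel, and include that summary as a covariate when testing each candidate biomarker. All names and the regression form below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def biomarker_assoc_corrected(expression, response, general_sensitivity):
    """Association between one gene's expression and drug response,
    conditioning on each cell line's general level of drug sensitivity.

    expression, response, general_sensitivity : 1-D arrays over cell lines.
    Returns the t statistic and two-sided p-value for the expression term.
    """
    X = np.column_stack([np.ones_like(expression), expression, general_sensitivity])
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)
    resid = response - X @ beta
    dof = len(response) - X.shape[1]
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t = beta[1] / np.sqrt(cov[1, 1])
    return t, 2 * stats.t.sf(abs(t), dof)

# general_sensitivity could be estimated simply as each cell line's mean response
# (e.g., mean log-IC50) across all drugs in the panel; omitting this covariate
# recovers the conventional, uncorrected association test.
```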
Discovery of a new quasar: SDSS J022155.26-064916.6
Robertson, J. M.; Tucker, D. L.; Smith, J. A.; ...
2017-06-14
Here, we report the discovery of a new quasar: SDSS J022155.26-064916.6. This object was discovered while reducing the spectra of a sample of stars being considered as spectrophotometric standards for the Dark Energy Survey. The flux- and wavelength-calibrated spectrum is presented with four spectral lines identified. From these lines, the redshift is determined to be z≈0.806. In addition, the rest-frame u-, g-, and r-band luminosities, determined using a k-correction obtained with synthetic photometry of a proxy quasi-stellar object (QSO), are reported as 7.496×10^13 L☉, 2.049×10^13 L☉, and 1.896×10^13 L☉, respectively.
ERIC Educational Resources Information Center
Reid, Neil
This paper addresses the problem of identifying and developing talent in children from culturally different backgrounds in New Zealand. The paper offers examples of how even applying the recommended "best practice" of multi-dimensional identification approaches can be inadequate for identifying gifted children from Maori, Polynesian, or…
Eickhoff, Simon B; Nichols, Thomas E; Laird, Angela R; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T; Bzdok, Danilo; Eickhoff, Claudia R
2016-08-15
Given the increasing number of neuroimaging publications, the automated extraction of knowledge about brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and thus to formulate recommendations for future ALE studies. As a first consequence, we showed that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments in an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
Eickhoff, Simon B.; Nichols, Thomas E.; Laird, Angela R.; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T.
2016-01-01
Given the increasing number of neuroimaging publications, the automated extraction of knowledge about brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and thus to formulate recommendations for future ALE studies. As a first consequence, we showed that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments in an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. PMID:27179606
Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search.
Jay, Caroline; Harper, Simon; Dunlop, Ian; Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain
2016-01-14
Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and in-turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these "experts." Such interfaces hark back to a time when searches needed to be accurate first time as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. The cross-disciplinary nature of data science can make no assumptions regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with search needs of the "Google generation" than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is "Google-like," enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has standard multioption user interface. Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F1,19=37.3, P<.001), with a main effect of task (F3,57=6.3, P<.001). Further, participants completed the task significantly faster using the Web search interface (F1,19=18.0, P<.001). There was also a main effect of task (F2,38=4.1, P=.025, Greenhouse-Geisser correction applied). Overall, participants were asked to rate learnability, ease of use, and satisfaction. Paired mean comparisons showed that the Web search interface received significantly higher ratings than the traditional search interface for learnability (P=.002, 95% CI [0.6-2.4]), ease of use (P<.001, 95% CI [1.2-3.2]), and satisfaction (P<.001, 95% CI [1.8-3.5]). The results show superior cross-domain usability of Web search, which is consistent with its general familiarity and with enabling queries to be refined as the search proceeds, which treats serendipity as part of the refinement. The results provide clear evidence that data science should adopt single-field natural language search interfaces for variable search supporting in particular: query reformulation; data browsing; faceted search; surrogates; relevance feedback; summarization, analytics, and visual presentation.
Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search
Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain
2016-01-01
Background Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and in-turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these “experts.” Such interfaces hark back to a time when searches needed to be accurate first time as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. Objective The cross-disciplinary nature of data science can make no assumptions regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with search needs of the “Google generation” than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Methods Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is “Google-like,” enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has standard multioption user interface. Results Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F 1,19=37.3, P<.001), with a main effect of task (F 3,57=6.3, P<.001). Further, participants completed the task significantly faster using the Web search interface (F 1,19=18.0, P<.001). There was also a main effect of task (F 2,38=4.1, P=.025, Greenhouse-Geisser correction applied). Overall, participants were asked to rate learnability, ease of use, and satisfaction. Paired mean comparisons showed that the Web search interface received significantly higher ratings than the traditional search interface for learnability (P=.002, 95% CI [0.6-2.4]), ease of use (P<.001, 95% CI [1.2-3.2]), and satisfaction (P<.001, 95% CI [1.8-3.5]). The results show superior cross-domain usability of Web search, which is consistent with its general familiarity and with enabling queries to be refined as the search proceeds, which treats serendipity as part of the refinement. Conclusions The results provide clear evidence that data science should adopt single-field natural language search interfaces for variable search supporting in particular: query reformulation; data browsing; faceted search; surrogates; relevance feedback; summarization, analytics, and visual presentation. PMID:26769334
Petroleum-resource appraisal and discovery rate forecasting in partially explored regions
Drew, Lawrence J.; Schuenemeyer, J.H.; Root, David H.; Attanasi, E.D.
1980-01-01
PART A: A model of the discovery process can be used to predict the size distribution of future petroleum discoveries in partially explored basins. The parameters of the model are estimated directly from the historical drilling record, rather than being determined by assumptions or analogies. The model is based on the concept of the area of influence of a drill hole, which states that the area of a basin exhausted by a drill hole varies with the size and shape of targets in the basin and with the density of previously drilled wells. It also uses the concept of discovery efficiency, which measures the rate of discovery within several classes of deposit size. The model was tested using 25 years of historical exploration data (1949-74) from the Denver basin. From the trend in the discovery rate (the number of discoveries per unit area exhausted), the discovery efficiencies in each class of deposit size were estimated. Using pre-1956 discovery and drilling data, the model accurately predicted the size distribution of discoveries for the 1956-74 period. PART B: A stochastic model of the discovery process has been developed to predict, using past drilling and discovery data, the distribution of future petroleum deposits in partially explored basins, and the basic mathematical properties of the model have been established. The model has two exogenous parameters, the efficiency of exploration and the effective basin size. The first parameter is the ratio of the probability that an actual exploratory well will make a discovery to the probability that a randomly sited well will make a discovery. The second parameter, the effective basin size, is the area of that part of the basin in which drillers are willing to site wells. Methods for estimating these parameters from locations of past wells and from the sizes and locations of past discoveries were derived, and the properties of estimators of the parameters were studied by simulation. PART C: This study examines the temporal properties and determinants of petroleum exploration for firms operating in the Denver basin. Expectations associated with the favorability of a specific area are modeled by using distributed lag proxy variables (of previous discoveries) and predictions from a discovery process model. In the second part of the study, a discovery process model is linked with a behavioral well-drilling model in order to predict the supply of new reserves. Results of the study indicate that the positive effects of new discoveries on drilling increase for several periods and then diminish to zero within 2? years after the deposit discovery date. Tests of alternative specifications of the argument of the distributed lag function using alternative minimum size classes of deposits produced little change in the model's explanatory power. This result suggests that, once an exploration play is underway, favorable operator expectations are sustained by the quantity of oil found per time period rather than by the discovery of specific size deposits. When predictions of the value of undiscovered deposits (generated from a discovery process model) were substituted for the expectations variable in models used to explain exploration effort, operator behavior was found to be consistent with these predictions. This result suggests that operators, on the average, were efficiently using information contained in the discovery history of the basin in carrying out their exploration plans. 
Comparison of the two approaches to modeling unobservable operator expectations indicates that the two models produced very similar results. The integration of the behavioral well-drilling model and discovery process model to predict the additions to reserves per unit time was successful only when the quarterly predictions were aggregated to annual values. The accuracy of the aggregated predictions was also found to be reasonably robust to errors in predictions from the behavioral well-drilling equation.
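The efficiency-of-exploration parameter defined in Part B above can be written compactly as below; this is simply a restatement of the definition quoted in the abstract, not an additional result.

```latex
e \;=\; \frac{\Pr(\text{discovery} \mid \text{actual exploratory well})}
             {\Pr(\text{discovery} \mid \text{randomly sited well within the effective basin area } A_{\mathrm{eff}})}
```

Under this reading, e = 1 corresponds to purely random drilling, while e > 1 reflects informed siting of wells within the effective basin area.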
Sebastiano, Vittorio; Zhen, Hanson Hui; Haddad, Bahareh; Bashkirova, Elizaveta; Melo, Sandra P.; Wang, Pei; Leung, Thomas L.; Siprashvili, Zurab; Tichy, Andrea; Li, Jiang; Ameen, Mohammed; Hawkins, John; Lee, Susie; Li, Lingjie; Schwertschkow, Aaron; Bauer, Gerhard; Lisowski, Leszek; Kay, Mark A.; Kim, Seung K.; Lane, Alfred T.; Wernig, Marius; Oro, Anthony E.
2015-01-01
Patients with recessive dystrophic epidermolysis bullosa (RDEB) lack functional type VII collagen owing to mutations in the gene COL7A1 and suffer severe blistering and chronic wounds that ultimately lead to infection and development of lethal squamous cell carcinoma. The discovery of induced pluripotent stem cells (iPSCs) and the ability to edit the genome bring the possibility to provide definitive genetic therapy through corrected autologous tissues. We generated patient-derived COL7A1-corrected epithelial keratinocyte sheets for autologous grafting. We demonstrate the utility of sequential reprogramming and adenovirus-associated viral genome editing to generate corrected iPSC banks. iPSC-derived keratinocytes were produced with minimal heterogeneity, and these cells secreted wild-type type VII collagen, resulting in stratified epidermis in vitro in organotypic cultures and in vivo in mice. Sequencing of corrected cell lines before tissue formation revealed heterogeneity of cancer-predisposing mutations, allowing us to select COL7A1-corrected banks with minimal mutational burden for downstream epidermis production. Our results provide a clinical platform to use iPSCs in the treatment of debilitating genodermatoses, such as RDEB. PMID:25429056
CRISPR/Cas9-mediated targeted gene correction in amyotrophic lateral sclerosis patient iPSCs.
Wang, Lixia; Yi, Fei; Fu, Lina; Yang, Jiping; Wang, Si; Wang, Zhaoxia; Suzuki, Keiichiro; Sun, Liang; Xu, Xiuling; Yu, Yang; Qiao, Jie; Belmonte, Juan Carlos Izpisua; Yang, Ze; Yuan, Yun; Qu, Jing; Liu, Guang-Hui
2017-05-01
Amyotrophic lateral sclerosis (ALS) is a complex neurodegenerative disease with cellular and molecular mechanisms yet to be fully described. Mutations in a number of genes including SOD1 and FUS are associated with familial ALS. Here we report the generation of induced pluripotent stem cells (iPSCs) from fibroblasts of familial ALS patients bearing SOD1 +/A272C and FUS +/G1566A mutations, respectively. We further generated gene corrected ALS iPSCs using CRISPR/Cas9 system. Genome-wide RNA sequencing (RNA-seq) analysis of motor neurons derived from SOD1 +/A272C and corrected iPSCs revealed 899 aberrant transcripts. Our work may shed light on discovery of early biomarkers and pathways dysregulated in ALS, as well as provide a basis for novel therapeutic strategies to treat ALS.
Discovery of a New Nearby Star
NASA Technical Reports Server (NTRS)
Teegarden, B. J.; Pravdo, S. H.; Covey, K.; Frazier, O.; Hawley, S. L.; Hicks, M.; Lawrence, K.; McGlynn, T.; Reid, I. N.; Shaklan, S. B.
2003-01-01
We report the discovery of a nearby star with a very large proper motion of 5.06 +/- 0.03 arcsec/yr. The star is designated SO025300.5+165258 and is referred to herein as HPMS (high proper motion star). The discovery came as a result of a search of the SkyMorph database, a sensitive and persistent survey that is well suited for finding stars with high proper motions. There are currently only 7 known stars with proper motions greater than 5 arcsec/yr. We have determined a preliminary value for the parallax of pi = 0.43 +/- 0.13 arcsec. If this value holds, our new star ranks behind only the Alpha Centauri system (including Proxima Centauri) and Barnard's star in the list of our nearest stellar neighbours. The spectrum and measured tangential velocity indicate that HPMS is a main-sequence star with spectral type M6.5. However, if our distance measurement is correct, the HPMS is underluminous by 1.2 +/- 0.7 mag.
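As a quick, purely illustrative check of the quoted ranking, the preliminary parallax translates to a distance via the standard relation d = 1/pi (distance in parsecs for parallax in arcseconds); the large quoted uncertainty applies to this estimate as well.

```latex
d \;=\; \frac{1}{\pi} \;=\; \frac{1}{0.43\ \mathrm{arcsec}} \;\approx\; 2.3\ \mathrm{pc} \;\approx\; 7.6\ \mathrm{ly},
\qquad
\frac{1}{0.43+0.13} \approx 1.8\ \mathrm{pc}
\;\;\text{to}\;\;
\frac{1}{0.43-0.13} \approx 3.3\ \mathrm{pc}.
```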
Feedback-Driven Dynamic Invariant Discovery
NASA Technical Reports Server (NTRS)
Zhang, Lingming; Yang, Guowei; Rungta, Neha S.; Person, Suzette; Khurshid, Sarfraz
2014-01-01
Program invariants can help software developers identify program properties that must be preserved as the software evolves; however, formulating correct invariants can be challenging. In this work, we introduce iDiscovery, a technique which leverages symbolic execution to improve the quality of dynamically discovered invariants computed by Daikon. Candidate invariants generated by Daikon are synthesized into assertions and instrumented onto the program. The instrumented code is executed symbolically to generate new test cases that are fed back to Daikon to help further refine the set of candidate invariants. This feedback loop is executed until a fix-point is reached. To mitigate the cost of symbolic execution, we present optimizations to prune the symbolic state space and to reduce the complexity of the generated path conditions. We also leverage recent advances in constraint solution reuse techniques to avoid computing results for the same constraints across iterations. Experimental results show that iDiscovery converges to a set of higher-quality invariants compared to the initial set of candidate invariants in a small number of iterations.
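The feedback loop described above can be summarized by the sketch below. It is a schematic reconstruction, not the iDiscovery implementation: run_daikon, instrument_with_assertions, and symbolic_execution_tests are hypothetical placeholders standing in for the Daikon run, the assertion instrumentation, and the symbolic-execution test generation steps.

```python
def idiscovery_loop(program, tests, run_daikon, instrument_with_assertions,
                    symbolic_execution_tests, max_iters=20):
    """Iterate Daikon inference and symbolic-execution test generation to a fix-point.

    All callables are hypothetical placeholders for the real tools.
    """
    invariants = run_daikon(program, tests)
    for _ in range(max_iters):
        # Turn candidate invariants into assertions and instrument the program.
        instrumented = instrument_with_assertions(program, invariants)
        # Symbolic execution over the instrumented program yields tests that
        # violate (and thereby refute) some candidate invariants.
        new_tests = symbolic_execution_tests(instrumented)
        if not new_tests:                      # no refuting tests: fix-point reached
            break
        tests = tests + new_tests
        refined = run_daikon(program, tests)   # re-infer with the enlarged test set
        if refined == invariants:              # invariant set stabilized
            break
        invariants = refined
    return invariants
```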
Accounting for Chromatic Atmospheric Effects on Barycentric Corrections
NASA Astrophysics Data System (ADS)
Blackman, Ryan T.; Szymkowiak, Andrew E.; Fischer, Debra A.; Jurgenson, Colby A.
2017-03-01
Atmospheric effects on stellar radial velocity measurements for exoplanet discovery and characterization have not yet been fully investigated for extreme precision levels. We carry out calculations to determine the wavelength dependence of barycentric corrections across optical wavelengths, due to the ubiquitous variations in air mass during observations. We demonstrate that radial velocity errors of at least several cm s^-1 can be incurred if the wavelength dependence is not included in the photon-weighted barycentric corrections. A minimum of four wavelength channels across optical spectra (380-680 nm) is required to account for this effect at the 10 cm s^-1 level, with polynomial fits of the barycentric corrections applied to cover all wavelengths. Additional channels may be required in poor observing conditions or to avoid strong telluric absorption features. Furthermore, consistent flux sampling on the order of seconds throughout the observation is necessary to ensure that accurate photon weights are obtained. Finally, we describe how a multiple-channel exposure meter will be implemented in the EXtreme PREcision Spectrograph (EXPRES).
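A minimal sketch of the per-channel, photon-weighted correction described above is given below, assuming astropy is available: for each wavelength channel it computes the flux-weighted mid-exposure time from the exposure-meter samples, evaluates the barycentric correction at that time, and fits a low-order polynomial in wavelength. The channel wavelengths, polynomial degree, and overall structure are illustrative assumptions, not the EXPRES implementation.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord, EarthLocation
from astropy.time import Time

def chromatic_barycentric_correction(ra_deg, dec_deg, site, times_jd,
                                     channel_flux, channel_wl_nm, deg=2):
    """Fit the barycentric correction (m/s) as a polynomial in wavelength.

    site          : astropy EarthLocation of the observatory
    times_jd      : 1-D array of exposure-meter sample times (JD, ~seconds apart)
    channel_flux  : list of 1-D flux arrays, one per wavelength channel
    channel_wl_nm : central wavelength of each channel (e.g., 4 channels in 380-680 nm)
    """
    target = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg)
    corrections = []
    for flux in channel_flux:
        w = np.asarray(flux, float)
        w = w / w.sum()
        t_mid = Time(np.sum(w * times_jd), format='jd')   # photon-weighted midpoint
        bc = target.radial_velocity_correction(kind='barycentric',
                                               obstime=t_mid, location=site)
        corrections.append(bc.to(u.m / u.s).value)
    coeffs = np.polyfit(channel_wl_nm, corrections, deg)
    return np.poly1d(coeffs)   # callable: correction(wavelength_nm) in m/s

# site can be any EarthLocation, e.g. EarthLocation(lat=..., lon=..., height=...).
```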
Filovirus-Like Particles as Vaccines and Discovery Tools
2005-06-01
or MARV strains. Classic methods for vaccine development have been tried, including producing and testing attenuated and inactivated viral...MARV challenge [52]. However, an attenuated virus vaccine is undesirable for filoviruses due to the danger of reversion to wild-type virulence...correct structural proteins is sufficient for forming VLPs. This is true for both nonenveloped viruses, such as parvovirus, papillomavirus, rotavirus
Rositch, Anne F; Nowak, Rebecca G; Gravitt, Patti E
2014-07-01
The incidence of invasive cervical cancer is thought to decline in women over 65 years old, the age at which cessation of routine cervical cancer screening is recommended. However, national cervical cancer incidence rates do not account for the high prevalence of hysterectomy in the United States. Using estimates of hysterectomy prevalence from the Behavioral Risk Factor Surveillance System (BRFSS), hysterectomy-corrected age-standardized and age-specific incidence rates of cervical cancer were calculated from the Surveillance, Epidemiology, and End Results (SEER) 18 registry in the United States from 2000 to 2009. Trends in corrected cervical cancer incidence across age were analyzed using Joinpoint regression. Unlike the relative decline in uncorrected rates, corrected rates continued to increase after ages 35-39 (APC(CORRECTED) = 10.43), although at a slower rate than at ages 20-34 (APC(CORRECTED) = 161.29). The highest corrected incidence was among 65- to 69-year-old women, with a rate of 27.4 cases per 100,000 women, as opposed to the highest uncorrected rate of 15.6 cases per 100,000 among women aged 40 to 44 years. Correction for hysterectomy had the largest impact on older, black women, given their high prevalence of hysterectomy. Correction for hysterectomy resulted in higher age-specific cervical cancer incidence rates, a shift in the peak incidence to older women, and an increase in the disparity in cervical cancer incidence between black and white women. Given the high and nondeclining rate of cervical cancer in women over the age of 60 to 65 years, when women become eligible to exit screening, risk and screening guidelines for cervical cancer in older women may need to be reconsidered. © 2014 American Cancer Society.
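The correction applied above amounts to removing women without a cervix from the population denominator. As an illustrative (not source-specific) formulation:

```latex
\text{corrected rate} \;=\; \frac{\text{cervical cancer cases}}{\text{person-years} \times \bigl(1 - \text{hysterectomy prevalence}\bigr)} \times 100{,}000 .
```

For example, with invented round numbers, if 20% of women in an age group have had a hysterectomy, an uncorrected rate of 16 per 100,000 becomes 16/0.8 = 20 per 100,000; the effect is largest in older age groups, where hysterectomy prevalence is highest.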
NASA Astrophysics Data System (ADS)
Kelson, Daniel D; Illingworth, Garth D.; Freedman, Wendy F.; Graham, John A.; Hill, Robert; Madore, Barry F.; Saha, Abhijit; Stetson, Peter B.; Kennicutt, Robert C., Jr.; Mould, Jeremy R.; Hughes, Shaun M.; Ferrarese, Laura; Phelps, Randy; Turner, Anne; Cook, Kem H.; Ford, Holland; Hoessel, John G.; Huchra, John
1997-03-01
In the paper "The Extragalactic Distance Scale Key Project. III. The Discovery of Cepheids and a New Distance to M101 Using the Hubble Space Telescope" by Daniel D. Kelson, Garth D. Illingworth, Wendy F. Freedman, John A. Graham, Robert Hill, Barry F. Madore, Abhijit Saha, Peter B. Stetson, Robert C. Kennicutt, Jr., Jeremy R. Mould, Shaun M. Hughes, Laura Ferrarese, Randy Phelps, Anne Turner, Kem H. Cook, Holland Ford, John G. Hoessel, and John Huchra (ApJ, 463, 26 [1996]), two of the tables are in error. The magnitudes in Tables B1 and B2, in Appendix B, are ordered incorrectly. As a result, the Julian dates are not associated with their correct Cepheid magnitudes. We have now corrected these data, and updated versions of the tables are available on the World Wide Web. The tables are available in ASCII format at our Key Project site (http://www.ipac.caltech.edu/H0kp/) and will appear in volume 7 of the AAS CDROM. PostScript and paper copies are also available from the first author (http://www.ucolick.org/~kelson/H0/home.html or kelson@ucolick.org).
Correcting for sequencing error in maximum likelihood phylogeny inference.
Kuhner, Mary K; McGill, James
2014-11-04
Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
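One standard way to incorporate a per-base sequencing error rate ε into maximum likelihood phylogeny inference, in the spirit of the correction evaluated above (though not necessarily the exact formulation tested in this study), is to replace the usual certain tip states with tip partial likelihoods that allow for miscalls:

```latex
P(\text{observed base } o \mid \text{true base } s) \;=\;
\begin{cases}
1-\varepsilon, & o = s,\\[2pt]
\varepsilon/3, & o \neq s.
\end{cases}
```

Each tip then contributes a vector of four partial likelihoods rather than a single indicator, the rest of the pruning algorithm is unchanged, and setting ε = 0 recovers the uncorrected likelihood.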
Glad, Camilla A M; Andersson-Assarsson, Johanna C; Berglund, Peter; Bergthorsdottir, Ragnhildur; Ragnarsson, Oskar; Johannsson, Gudmundur
2017-03-16
Patients with Cushing's Syndrome (CS) in remission were used as a model to test the hypothesis that long-standing excessive cortisol exposure induces changes in DNA methylation that are associated with persisting neuropsychological consequences. Genome-wide DNA methylation was assessed in 48 women with CS in long-term remission (cases) and 16 controls matched for age, gender and education. The Fatigue Impact Scale and the Comprehensive Psychopathological Rating Scale were used to evaluate fatigue, depression and anxiety. Cases had lower average global DNA methylation than controls (81.2% vs 82.7%; p = 0.002). Four hundred and sixty-one differentially methylated regions, containing 3,246 probes mapping to 337 genes, were identified. After adjustment for age and smoking, 731 probes in 236 genes were associated with psychopathology (fatigue, depression and/or anxiety). Twenty-four gene ontology terms were associated with psychopathology; terms related to retinoic acid receptor signalling were the most common (adjusted p = 0.0007). One gene in particular, COL11A2, was associated with fatigue following a false discovery rate correction. Our findings indicate that hypomethylation of FKBP5 and retinoic acid receptor-related genes serves as a potential mechanistic explanation for long-lasting glucocorticoid-induced psychopathology.
77 FR 37421 - Reimbursement Rates for Calendar Year 2012 Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-21
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Indian Health Service Reimbursement Rates for Calendar Year 2012 Correction AGENCY: Indian Health Service, HHS. ACTION: Notice; correction. SUMMARY: The Indian Health Service published a document in the Federal Register on June 6, 2012, concerning rates for...
Science of the science, drug discovery and artificial neural networks.
Patel, Jigneshkumar
2013-03-01
The drug discovery process often encounters complex problems that may be difficult to solve by human intelligence. Artificial Neural Networks (ANNs) are one of the Artificial Intelligence (AI) technologies used for solving such complex problems. ANNs are widely used for primary virtual screening of compounds, quantitative structure-activity relationship studies, receptor modeling, formulation development, pharmacokinetics, and all other processes involving complex mathematical modeling. Despite having such advanced technologies and a reasonable understanding of biological systems, drug discovery is still a lengthy, expensive, difficult and inefficient process with a low rate of successful new therapeutic discovery. In this paper, the author discusses drug discovery science and ANNs from a very basic angle, which may help the reader understand how ANNs can be applied to improve the efficiency of drug discovery.
A note on the false discovery rate of novel peptides in proteogenomics.
Zhang, Kun; Fu, Yan; Zeng, Wen-Feng; He, Kun; Chi, Hao; Liu, Chao; Li, Yan-Chang; Gao, Yuan; Xu, Ping; He, Si-Min
2015-10-15
Proteogenomics has been well accepted as a tool to discover novel genes. In most conventional proteogenomic studies, a global false discovery rate is used to filter out false positives for identifying credible novel peptides. However, it has been found that the actual level of false positives in novel peptides is often out of control and behaves differently for different genomes. To quantitatively model this problem, we theoretically analyze the subgroup false discovery rates of annotated and novel peptides. Our analysis shows that the annotation completeness ratio of a genome is the dominant factor influencing the subgroup FDR of novel peptides. Experimental results on two real datasets of Escherichia coli and Mycobacterium tuberculosis support our conjecture. yfu@amss.ac.cn or xupingghy@gmail.com or smhe@ict.ac.cn Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
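A toy calculation, with invented round numbers purely to illustrate why the subgroup FDR of novel peptides can escape control, goes as follows: false matches tend to spread across the whole searched sequence space, whereas true matches concentrate in the annotated portion. Suppose 10,000 peptide-spectrum matches are accepted at a 1% global FDR, so roughly 100 are false; if, say, 30 of those false matches fall in the novel (unannotated) search space while a well-annotated genome yields only 20 genuinely novel peptides among the accepted matches, then

```latex
\mathrm{FDR}_{\text{novel}} \;=\; \frac{\text{false novel matches}}{\text{all accepted novel matches}} \;\approx\; \frac{30}{30+20} \;=\; 60\% \;\gg\; 1\% ,
```

which is why the annotation completeness ratio dominates the subgroup FDR of novel peptides.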
Gulliver, Kristina; Yoder, Bradley A
2018-05-09
To determine the effect of altitude correction on bronchopulmonary dysplasia (BPD) rates and to assess the validity of the NICHD "Neonatal BPD Outcome Estimator" for predicting BPD with and without altitude correction. This retrospective analysis included neonates born at <30 weeks gestational age (GA) between 2010 and 2016. "Effective" FiO2 requirements were determined at 36 weeks corrected GA. Altitude correction was performed via the ratio of barometric pressure (BP) in our unit to sea-level BP. The probability of death and/or moderate-to-severe BPD was calculated using the NICHD BPD Outcome Estimator. Five hundred and sixty-one infants were included. The rate of moderate-to-severe BPD decreased from 71 to 40% following altitude correction. Receiver-operating characteristic curves indicated high predictability of the BPD Outcome Estimator for the altitude-corrected moderate-to-severe BPD diagnosis. Correction for altitude reduced the moderate-to-severe BPD rate by almost 50%, to a rate consistent with recently published values. The NICHD BPD Outcome Estimator is a valid tool for predicting the risk of moderate-to-severe BPD following altitude correction.
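The altitude correction stated above (a simple ratio of local to sea-level barometric pressure) can be written as follows; the numerical values are assumed purely for illustration and are not taken from the study.

```latex
\mathrm{FiO_2^{effective}} \;=\; \mathrm{FiO_2^{measured}} \times \frac{BP_{\text{unit}}}{BP_{\text{sea level}}},
\qquad \text{e.g. } 0.30 \times \frac{650\ \mathrm{mmHg}}{760\ \mathrm{mmHg}} \approx 0.26 .
```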
Serendipity in Cancer Drug Discovery: Rational or Coincidence?
Prasad, Sahdeo; Gupta, Subash C; Aggarwal, Bharat B
2016-06-01
Novel drug development leading to final approval by the US FDA can cost as much as two billion dollars. Why the cost of novel drug discovery is so expensive is unclear, but high failure rates at the preclinical and clinical stages are major reasons. Although therapies targeting a given cell signaling pathway or a protein have become prominent in drug discovery, such treatments have done little in preventing or treating any disease alone because most chronic diseases have been found to be multigenic. A review of the discovery of numerous drugs currently being used for various diseases including cancer, diabetes, cardiovascular, pulmonary, and autoimmune diseases indicates that serendipity has played a major role in the discovery. In this review we provide evidence that rational drug discovery and targeted therapies have minimal roles in drug discovery, and that serendipity and coincidence have played and continue to play major roles. The primary focus in this review is on cancer-related drug discovery. Copyright © 2016 Elsevier Ltd. All rights reserved.
Inseparability of science history and discovery
NASA Astrophysics Data System (ADS)
Herndon, J. M.
2010-04-01
Science is very much a logical progression through time. Progressing along a logical path of discovery is rather like following a path through the wilderness. Occasionally the path splits, presenting a choice; the correct logical interpretation leads to further progress, the wrong choice leads to confusion. By considering deeply the relevant science history, one might begin to recognize past faltering in the logical progression of observations and ideas and, perhaps then, to discover new, more precise understanding. The following specific examples of science faltering are described from a historical perspective: (1) Composition of the Earth's inner core; (2) Giant planet internal energy production; (3) Physical impossibility of Earth-core convection and Earth-mantle convection, and; (4) Thermonuclear ignition of stars. For each example, a revised logical progression is described, leading, respectively, to: (1) Understanding the endo-Earth's composition; (2) The concept of nuclear georeactor origin of geo- and planetary magnetic fields; (3) The invalidation and replacement of plate tectonics; and, (4) Understanding the basis for the observed distribution of luminous stars in galaxies. These revised logical progressions clearly show the inseparability of science history and discovery. A different and more fundamental approach to making scientific discoveries than the frequently discussed variants of the scientific method is this: An individual ponders and through tedious efforts arranges seemingly unrelated observations into a logical sequence in the mind so that causal relationships become evident and new understanding emerges, showing the path for new observations, for new experiments, for new theoretical considerations, and for new discoveries. Science history is rich in "seemingly unrelated observations" just waiting to be logically and causally related to reveal new discoveries.
Gonzales, Gustavo F; Tapia, Vilma; Gasco, Manuel
2014-07-01
To determine whether correcting the haemoglobin cut-offs used to define anaemia at high altitudes affects rates of adverse perinatal outcomes. Data were obtained from 161,909 mothers and newborns whose births occurred between 1,000 and 4,500 m above sea level (masl). Anaemia was defined as haemoglobin (Hb) <11 g/dL, with or without correction of Hb for altitude. Correction of haemoglobin for altitude was performed according to guidelines from the World Health Organization. Rates of stillbirths and preterm births were also calculated. Stillbirth and preterm rates among anaemic women were significantly lower when anaemia was defined using altitude-corrected haemoglobin than when it was defined without Hb correction. At high altitudes (3,000-4,500 masl), after Hb correction, the rate of stillbirths was reduced from 37.7 to 18.3 per 1,000 live births (p < 0.01); similarly, preterm birth rates were reduced from 13.1 to 8.76% (p < 0.01). The odds ratios for stillbirths and for preterm births were also reduced after haemoglobin correction. At high altitude, correction of maternal haemoglobin should not be performed when assessing the risks of preterm birth and stillbirth; in fact, using the low-altitude Hb cut-off is associated with better identification of those at risk.
77 FR 36563 - Indian Health Service; Reimbursement Rates for Calendar Year 2012 Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-19
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Indian Health Service; Reimbursement Rates for Calendar Year 2012 Correction AGENCY: Indian Health Service, HHS. ACTION: Notice; correction. SUMMARY: The Indian Health Service published a document in the Federal Register on June 6, 2012, concerning rates for...
Identification accuracy of children versus adults: a meta-analysis.
Pozzulo, J D; Lindsay, R C
1998-10-01
Identification accuracy of children and adults was examined in a meta-analysis. Preschoolers (M = 4 years) were less likely than adults to make correct identifications. Children over the age of 5 did not differ significantly from adults with regard to correct identification rate. Children of all ages examined were less likely than adults to correctly reject a target-absent lineup. Even adolescents (M = 12-13 years) did not reach an adult rate of correct rejection. Compared to simultaneous lineup presentation, sequential lineups increased the child-adult gap for correct rejections. Providing child witnesses with identification practice or training did not increase their correct rejection rates. Suggestions for children's inability to correctly reject target-absent lineups are discussed. Future directions for identification research are presented.
New Progress on Radiocarbon Geochronology in Southern Lake Tanganyika (East Africa)
NASA Astrophysics Data System (ADS)
McGlue, M. M.; Soreghan, M. J.
2017-12-01
Our limnogeological research in Lake Tanganyika focuses on elucidating the patterns of sediment accumulation on deepwater horsts, outer platforms, and littoral environments in the lake's southern basin (~6-8°S latitude). Here, we present new radiocarbon (14C) dates from high-quality surface sediment cores, in order to make comparisons with previously published age models, to address the presence and spatiotemporal variability of a reservoir effect, and to constrain sedimentation rates and facies at sites that may be important targets for future scientific drilling. Plant macrofossils are rare in deepwater sediment cores, so charcoal and bulk organic matter have been the primary materials used for dating. On the Kavala Island Ridge (KIR) horst, initial core descriptions revealed variations in laminae presence, thickness, and chemistry. Sediment cores from the KIR at 172 m water depth consist of thickly laminated diatom oozes. Charcoal from the bases of these cores returned median ages of 2.1-2.2 cal ka, suggesting linear accumulation rates on the order of 0.51 mm/yr. By contrast, a core from 420 m water depth on the KIR exhibited very thin laminations and diatom layers were much less prominent. Charcoal at the base of this core produced a median age of 8.1 cal ka, suggesting a linear accumulation rate of 0.11 mm/yr. These initial results suggest that sedimentation rates may vary considerably over sublacustrine horst blocks. We will test this initial discovery with additional sedimentation rate information from the Kalya and Nitiri horsts. In addition, we report new 14C dates made on both dead and live-collected shells of the endemic gastropod Neothauma tanganyicense. These shells form vast accumulations along shallow-water platforms of the lake and form an important substrate for a number of other endemic species. The discovery of living snails in southern Lake Tanganyika may allow for the development of a species-specific reservoir correction. A limited N. tanganyicense shell 14C dataset from the lake's northern basin exhibits time averaging over the past 1600 cal yr; results from this project will begin to address spatial variability in time averaging, and therefore improve our understanding of shell bed formation and the extent to which anthropogenic sedimentation is impacting shell bed persistence.
Retrospective analysis of natural products provides insights for future discovery trends.
Pye, Cameron R; Bertin, Matthew J; Lokey, R Scott; Gerwick, William H; Linington, Roger G
2017-05-30
Understanding of the capacity of the natural world to produce secondary metabolites is important to a broad range of fields, including drug discovery, ecology, biosynthesis, and chemical biology, among others. Both the absolute number and the rate of discovery of natural products have increased significantly in recent years. However, there is a perception and concern that the fundamental novelty of these discoveries is decreasing relative to previously known natural products. This study presents a quantitative examination of the field from the perspective of both number of compounds and compound novelty using a dataset of all published microbial and marine-derived natural products. This analysis aimed to explore a number of key questions, such as how the rate of discovery of new natural products has changed over the past decades, how the average natural product structural novelty has changed as a function of time, whether exploring novel taxonomic space affords an advantage in terms of novel compound discovery, and whether it is possible to estimate how close we are to having described all of the chemical space covered by natural products. Our analyses demonstrate that most natural products being published today bear structural similarity to previously published compounds, and that the range of scaffolds readily accessible from nature is limited. However, the analysis also shows that the field continues to discover appreciable numbers of natural products with no structural precedent. Together, these results suggest that the development of innovative discovery methods will continue to yield compounds with unique structural and biological properties.
Cortical thickness and surface area in neonates at high risk for schizophrenia.
Li, Gang; Wang, Li; Shi, Feng; Lyall, Amanda E; Ahn, Mihye; Peng, Ziwen; Zhu, Hongtu; Lin, Weili; Gilmore, John H; Shen, Dinggang
2016-01-01
Schizophrenia is a neurodevelopmental disorder associated with subtle abnormal cortical thickness and cortical surface area. However, it is unclear whether these abnormalities exist in neonates associated with genetic risk for schizophrenia. To this end, this preliminary study was conducted to identify possible abnormalities of cortical thickness and surface area in the high-genetic-risk neonates. Structural magnetic resonance images were acquired from offspring of mothers (N = 21) who had schizophrenia (N = 12) or schizoaffective disorder (N = 9), and also from matched healthy neonates of mothers who were free of psychiatric illness (N = 26). Neonatal cortical surfaces were reconstructed and parcellated as regions of interest (ROIs), and cortical thickness for each vertex was computed as the shortest distance between the inner and outer surfaces. Comparisons were made for the average cortical thickness and total surface area in each of 68 cortical ROIs. After false discovery rate (FDR) correction, it was found that the female high-genetic-risk neonates had significantly thinner cortex in the right lateral occipital cortex than the female control neonates. Before FDR correction, the high-genetic-risk neonates had significantly thinner cortex in the left transverse temporal gyrus, left banks of superior temporal sulcus, left lingual gyrus, right paracentral cortex, right posterior cingulate cortex, right temporal pole, and right lateral occipital cortex, compared with the control neonates. Before FDR correction, in comparison with control neonates, male high-risk neonates had significantly thicker cortex in the left frontal pole, left cuneus cortex, and left lateral occipital cortex, while female high-risk neonates had significantly thinner cortex in the bilateral paracentral, bilateral lateral occipital, left transverse temporal, left pars opercularis, right cuneus, and right posterior cingulate cortices. The high-risk neonates also had significantly smaller cortical surface area in the right pars triangularis (before FDR correction), compared with control neonates. This preliminary study provides the first evidence that early development of cortical thickness and surface area might be abnormal in neonates at genetic risk for schizophrenia.
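The region-wise comparison above relies on a false discovery rate correction across the 68 cortical ROIs. The following is a minimal sketch of the Benjamini-Hochberg adjustment such an analysis might use; the p-values and the flagged ROI index are simulated placeholders, not the study's data.

```python
# Benjamini-Hochberg FDR adjustment across per-ROI p-values (illustrative data).
import numpy as np

def benjamini_hochberg(pvals):
    """Return BH-adjusted q-values for an array of p-values."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity from the largest p-value downwards.
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty_like(q)
    out[order] = np.clip(q, 0, 1)
    return out

# Example: 68 cortical ROIs compared between groups (simulated p-values).
rng = np.random.default_rng(0)
pvals = rng.uniform(size=68)
pvals[10] = 0.0004            # one strong regional effect
qvals = benjamini_hochberg(pvals)
print("significant at q < 0.05:", np.where(qvals < 0.05)[0])
```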
Normalization, bias correction, and peak calling for ChIP-seq
Diaz, Aaron; Park, Kiyoub; Lim, Daniel A.; Song, Jun S.
2012-01-01
Next-generation sequencing is rapidly transforming our ability to profile the transcriptional, genetic, and epigenetic states of a cell. In particular, sequencing DNA from the immunoprecipitation of protein-DNA complexes (ChIP-seq) and methylated DNA (MeDIP-seq) can reveal the locations of protein binding sites and epigenetic modifications. These approaches contain numerous biases which may significantly influence the interpretation of the resulting data. Rigorous computational methods for detecting and removing such biases are still lacking. Also, multi-sample normalization still remains an important open problem. This theoretical paper systematically characterizes the biases and properties of ChIP-seq data by comparing 62 separate publicly available datasets, using rigorous statistical models and signal processing techniques. Statistical methods for separating ChIP-seq signal from background noise, as well as correcting enrichment test statistics for sequence-dependent and sonication biases, are presented. Our method effectively separates reads into signal and background components prior to normalization, improving the signal-to-noise ratio. Moreover, most peak callers currently use a generic null model which suffers from low specificity at the sensitivity level requisite for detecting subtle, but true, ChIP enrichment. The proposed method of determining a cell type-specific null model, which accounts for cell type-specific biases, is shown to be capable of achieving a lower false discovery rate at a given significance threshold than current methods. PMID:22499706
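As a rough illustration of the enrichment-testing step discussed above, the sketch below tests binned ChIP read counts against a scaled input (background) sample with a Poisson null and applies a Benjamini-Hochberg cutoff. This is a generic illustration of the concept, not the paper's cell type-specific null model; all counts are simulated.

```python
# Generic bin-level ChIP enrichment test against an input sample (simulated data).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
n_bins = 10_000
input_counts = rng.poisson(5.0, n_bins)           # background/control reads
chip_counts = rng.poisson(5.0, n_bins)            # ChIP reads, mostly noise
chip_counts[:50] += rng.poisson(20.0, 50)         # a few truly enriched bins

# Scale the input to the ChIP library size and use it as the expected rate.
scale = chip_counts.sum() / input_counts.sum()
expected = np.maximum(input_counts * scale, 1.0)  # avoid a zero-rate null
pvals = poisson.sf(chip_counts - 1, expected)     # P(X >= observed)

# Benjamini-Hochberg step-up cutoff at FDR 0.05.
order = np.argsort(pvals)
thresh = 0.05 * np.arange(1, n_bins + 1) / n_bins
passed = pvals[order] <= thresh
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
print(f"{k} bins called enriched at FDR 0.05")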
Concave 1-norm group selection
Jiang, Dingfeng; Huang, Jian
2015-01-01
Grouping structures arise naturally in many high-dimensional problems. Incorporation of such information can improve model fitting and variable selection. Existing group selection methods, such as the group Lasso, require correct membership. However, in practice it can be difficult to correctly specify group membership of all variables. Thus, it is important to develop group selection methods that are robust against group mis-specification. Also, it is desirable to select groups as well as individual variables in many applications. We propose a class of concave 1-norm group penalties that is robust to grouping structure and can perform bi-level selection. A coordinate descent algorithm is developed to calculate solutions of the proposed group selection method. Theoretical convergence of the algorithm is proved under certain regularity conditions. Comparison with other methods suggests the proposed method is the most robust approach under membership mis-specification. Simulation studies and real data application indicate that the 1-norm concave group selection approach achieves better control of false discovery rates. An R package grppenalty implementing the proposed method is available at CRAN. PMID:25417206
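For orientation, the sketch below shows the group-wise soft-thresholding update that underlies many coordinate-descent algorithms for group-penalized regression. It illustrates the generic (convex) group-lasso proximal step only, not the concave 1-norm group penalties proposed in the paper; values are illustrative.

```python
# Group-wise soft-thresholding, the building block of group-penalized coordinate descent.
import numpy as np

def group_soft_threshold(z, lam):
    """Shrink the coefficient vector of one group toward zero."""
    norm = np.linalg.norm(z)
    if norm <= lam:
        return np.zeros_like(z)
    return (1.0 - lam / norm) * z

z = np.array([0.8, -0.3, 0.1])            # partial-residual coordinates of one group
print(group_soft_threshold(z, lam=0.5))   # group kept but shrunk
print(group_soft_threshold(z, lam=1.5))   # whole group zeroed out
```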
Pedraza, Otto; Graff-Radford, Neill R.; Smith, Glenn E.; Ivnik, Robert J.; Willis, Floyd B.; Petersen, Ronald C.; Lucas, John A.
2010-01-01
Scores on the Boston Naming Test (BNT) are frequently lower for African American than for Caucasian adults. Although demographically-based norms can mitigate the impact of this discrepancy on the likelihood of erroneous diagnostic impressions, a growing consensus suggests that group norms do not sufficiently address or advance our understanding of the underlying psychometric and sociocultural factors that lead to between-group score discrepancies. Using item response theory and methods to detect differential item functioning (DIF), the current investigation moves beyond comparisons of the summed total score to examine whether the conditional probability of responding correctly to individual BNT items differs between African American and Caucasian adults. Participants included 670 adults age 52 and older who took part in Mayo's Older Americans and Older African Americans Normative Studies. Under a 2-parameter logistic IRT framework and after correction for the false discovery rate, 12 items were shown to demonstrate DIF. Six of these 12 items (“dominoes,” “escalator,” “muzzle,” “latch,” “tripod,” and “palette”) were also identified in additional analyses using hierarchical logistic regression models and represent the strongest evidence for race/ethnicity-based DIF. These findings afford a finer characterization of the psychometric properties of the BNT and expand our understanding of between-group performance. PMID:19570311
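The two ingredients named above, a 2-parameter logistic item response function and a logistic-regression check for DIF, can be sketched as follows. The model below tests only a uniform-DIF group term after conditioning on total score, and all variable names and data are illustrative, not the study's.

```python
# 2PL item response function plus a simple uniform-DIF logistic regression check.
import numpy as np
import statsmodels.api as sm

def irt_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

print(irt_2pl(theta=0.0, a=1.2, b=-0.5))   # ability 0 on an easy item

# Does group membership predict item success once total score is controlled for?
rng = np.random.default_rng(2)
n = 400
total_score = rng.normal(size=n)
group = rng.integers(0, 2, n)                     # 0/1 group indicator
logit_p = 1.0 * total_score - 0.6 * group         # simulated DIF effect
item = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
X = sm.add_constant(np.column_stack([total_score, group]))
fit = sm.Logit(item, X).fit(disp=0)
print("group coefficient p-value:", fit.pvalues[2])
```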
Statistical analysis of fNIRS data: a comprehensive review.
Tak, Sungho; Ye, Jong Chul
2014-01-15
Functional near-infrared spectroscopy (fNIRS) is a non-invasive method to measure brain activities using the changes of optical absorption in the brain through the intact skull. fNIRS has many advantages over other neuroimaging modalities such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), or magnetoencephalography (MEG), since it can directly measure blood oxygenation level changes related to neural activation with high temporal resolution. However, fNIRS signals are highly corrupted by measurement noises and physiology-based systemic interference. Careful statistical analyses are therefore required to extract neuronal activity-related signals from fNIRS data. In this paper, we provide an extensive review of historical developments of statistical analyses of fNIRS signal, which include motion artifact correction, short source-detector separation correction, principal component analysis (PCA)/independent component analysis (ICA), false discovery rate (FDR), serially-correlated errors, as well as inference techniques such as the standard t-test, F-test, analysis of variance (ANOVA), and statistical parameter mapping (SPM) framework. In addition, to provide a unified view of various existing inference techniques, we explain a linear mixed effect model with restricted maximum likelihood (ReML) variance estimation, and show that most of the existing inference methods for fNIRS analysis can be derived as special cases. Some of the open issues in statistical analysis are also described. Copyright © 2013 Elsevier Inc. All rights reserved.
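A minimal general-linear-model sketch in the spirit of the analyses reviewed above is given below: a simulated oxy-hemoglobin channel is regressed on a task regressor built by convolving a stimulus boxcar with a canonical double-gamma hemodynamic response, and a t-statistic is computed for the task effect. The sampling rate, HRF shape, and signal are illustrative assumptions, not a prescribed fNIRS pipeline.

```python
# Single-channel GLM for a simulated fNIRS time series: design, OLS fit, t-test.
import numpy as np
from scipy.stats import gamma, t

fs, n = 10.0, 3000                          # 10 Hz sampling, 300 s of data
time = np.arange(n) / fs
box = ((time % 60) < 20).astype(float)      # 20 s task blocks every 60 s
hrf_t = np.arange(0, 30, 1 / fs)
hrf = gamma.pdf(hrf_t, a=6) - 0.35 * gamma.pdf(hrf_t, a=16)
reg = np.convolve(box, hrf)[:n]             # task regressor

rng = np.random.default_rng(3)
y = 0.02 * reg + rng.normal(0, 0.05, n)     # simulated HbO channel

X = np.column_stack([np.ones(n), reg])      # design matrix: intercept + task
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = n - X.shape[1]
sigma2 = res[0] / dof
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = beta[1] / se
print("task t =", t_stat, "p =", 2 * t.sf(abs(t_stat), dof))
```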
Current and Prospective Protein Biomarkers of Lung Cancer
Zamay, Tatiana N.; Zamay, Galina S.; Kolovskaya, Olga S.; Zukov, Ruslan A.; Petrova, Marina M.; Gargaun, Ana; Berezovski, Maxim V.
2017-01-01
Lung cancer is a malignant lung tumor with various histological variants that arise from different cell types, such as bronchial epithelium, bronchioles, alveoli, or bronchial mucous glands. The clinical course and treatment efficacy of lung cancer depend on the histological variant of the tumor. Therefore, accurate identification of the histological type of cancer and respective protein biomarkers is crucial for adequate therapy. Due to the great diversity in the molecular-biological features of lung cancer histological types, detection is impossible without knowledge of the nature and origin of malignant cells, which release certain protein biomarkers into the bloodstream. To date, different panels of biomarkers are used for screening. Unfortunately, a uniform serum biomarker composition capable of distinguishing lung cancer types is yet to be discovered. As such, histological analyses of tumor biopsies and immunohistochemistry are the most frequently used methods for establishing correct diagnoses. Here, we discuss the recent advances in conventional and prospective aptamer-based strategies for biomarker discovery. Aptamers, like artificial antibodies, can serve as molecular recognition elements for the isolation, detection, and discovery of novel tumor-associated markers. We describe how these small synthetic single-stranded oligonucleotides can be used for lung cancer biomarker discovery and utilized for accurate diagnosis and targeted therapy. Furthermore, we describe the most frequently used in-clinic and novel lung cancer biomarkers, which are suggested to be able to differentiate between histological types of lung cancer and to define the rate of metastasis. PMID:29137182
Accelerating the Rate of Astronomical Discovery
NASA Astrophysics Data System (ADS)
Norris, Ray P.; Ruggles, Clive L. N.
2010-05-01
Special Session 5 on Accelerating the Rate of Astronomical Discovery addressed a range of potential limits to progress - paradigmatic, technological, organisational, and political - examining each issue both from modern and historical perspectives, and drawing lessons to guide future progress. A number of issues were identified which potentially regulate the flow of discoveries, such as the balance between large strongly-focussed projects and instruments, designed to answer the most fundamental questions confronting us, and the need to maintain a creative environment with room for unorthodox thinkers and bold, high risk, projects. Also important is the need to maintain historical and cultural perspectives, and the need to engage the minds of the most brilliant young people on the planet, regardless of their background, ethnicity, gender, or geography.
SpS5: Accelerating the Rate of Astronomical Discovery
NASA Astrophysics Data System (ADS)
Norris, Ray P.
2010-11-01
Special Session 5 on Accelerating the Rate of Astronomical Discovery addressed a range of potential limits to progress: paradigmatic, technological, organizational, and political. It examined each issue both from modern and historical perspectives, and drew lessons to guide future progress. A number of issues were identified which may regulate the flow of discoveries, such as the balance between large strongly-focussed projects and instruments, designed to answer the most fundamental questions confronting us, and the need to maintain a creative environment with room for unorthodox thinkers and bold, high risk, projects. Also important is the need to maintain historical and cultural perspectives, and the need to engage the minds of the most brilliant young people on the planet, regardless of their background, ethnicity, gender, or geography.
Climatic shocks associate with innovation in science and technology.
De Dreu, Carsten K W; van Dijk, Mathijs A
2018-01-01
Human history is shaped by landmark discoveries in science and technology. However, across both time and space the rate of innovation is erratic: Periods of relative inertia alternate with bursts of creative science and rapid cascades of technological innovations. While the origins of the rise and fall in rates of discovery and innovation remain poorly understood, they may reflect adaptive responses to exogenously emerging threats and pressures. Here we examined this possibility by fitting annual rates of scientific discovery and technological innovation to climatic variability and its associated economic pressures and resource scarcity. In time-series data from Europe (1500-1900CE), we indeed found that rates of innovation are higher during prolonged periods of cold (versus warm) surface temperature and during the presence (versus absence) of volcanic dust veils. This negative temperature-innovation link was confirmed in annual time-series for France, Germany, and the United Kingdom (1901-1965CE). Combined, across almost 500 years and over 5,000 documented innovations and discoveries, a 0.5°C increase in temperature associates with a sizable 0.30-0.60 standard deviation decrease in innovation. Results were robust to controlling for fluctuations in population size. Furthermore, and consistent with economic theory and micro-level data on group innovation, path analyses revealed that the relation between harsher climatic conditions between 1500-1900CE and more innovation is mediated by climate-induced economic pressures and resource scarcity.
An epigenome-wide association analysis of cardiac autonomic responses among a population of welders.
Zhang, Jinming; Liu, Zhonghua; Umukoro, Peter E; Cavallari, Jennifer M; Fang, Shona C; Weisskopf, Marc G; Lin, Xihong; Mittleman, Murray A; Christiani, David C
2017-02-01
DNA methylation is one of the potential epigenetic mechanisms associated with various adverse cardiovascular effects; however, its association with cardiac autonomic dysfunction, in particular, is unknown. In the current study, we aimed to identify epigenetic variants associated with alterations in cardiac autonomic responses. Cardiac autonomic responses were measured with two novel markers: acceleration capacity (AC) and deceleration capacity (DC). We examined DNA methylation levels at more than 472,506 CpG probes through the Illumina Infinium HumanMethylation450 BeadChip assay. We conducted separate linear mixed models to examine associations of DNA methylation levels at each CpG with AC and DC. One CpG (cg26829071) located in the GPR133 gene was negatively associated with DC values after multiple testing corrections through false discovery rate. Our study suggests the potential functional importance of methylation in cardiac autonomic responses. Findings from the current study need to be replicated in future studies in a larger population.
Sanders, Alison P; Gennings, Chris; Svensson, Katherine; Motta, Valeria; Mercado-Garcia, Adriana; Solano, Maritsa; Baccarelli, Andrea A; Tellez-Rojo, Martha M; Wright, Robert O; Burris, Heather H
2017-01-01
Bacterial vaginosis may lead to preterm birth through epigenetic programming of the inflammatory response, specifically via miRNA expression. We quantified bacterial 16S rRNA, cytokine mRNA and 800 miRNA from cervical swabs obtained from 80 women at 16-19 weeks' gestation. We generated bacterial and cytokine indices using weighted quantile sum regression and examined associations with miRNA and gestational age at delivery. Each decile of the bacterial and cytokine indices was associated with shorter gestations (p < 0.005). The bacterial index was associated with miR-494, 371a, 4286, 185, 320e, 888 and 23a (p < 0.05). miR-494 remained significant after false discovery rate correction (q < 0.1). The cytokine index was associated with 27 miRNAs (p < 0.05; q < 0.01). Future investigation into the role of bacterial vaginosis- and inflammation-associated miRNA and preterm birth is warranted.
Neumann, Wolf-Julian; Degen, Katharina; Schneider, Gerd-Helge; Brücke, Christof; Huebl, Julius; Brown, Peter; Kühn, Andrea A.
2016-01-01
Objective Beta band oscillations in the subthalamic nucleus (STN) have been proposed as a pathophysiological signature in patients with Parkinson’s disease (PD). The aim of this study was to investigate the potential association between oscillatory activity in the STN and symptom severity in PD. Methods Subthalamic local field potentials were recorded from 63 PD patients in a dopaminergic OFF state. Power-spectra were analyzed for the frequency range from 5 to 95 Hz and correlated with individual UPDRS-III motor scores in the OFF state. Results A correlation between total UPDRS-III scores and 8 to 35 Hz activity was revealed across all patients (ρ = 0.44, P <.0001). When correlating each frequency bin, a narrow range from 10 to 15 Hz remained significant for the correlation (false discovery rate corrected P <.05). Conclusion Our results show a correlation between local STN 8 to 35 Hz power and impairment in PD, further supporting the role of subthalamic oscillatory activity as a potential biomarker for PD. PMID:27548068
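The frequency-resolved correlation described above can be sketched as a per-bin Spearman correlation between spectral power and UPDRS-III scores, with Benjamini-Hochberg correction across bins. The data below are simulated placeholders, not the study's recordings.

```python
# Per-frequency-bin Spearman correlation with BH correction (simulated data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_patients, freqs = 63, np.arange(5, 96)          # 5-95 Hz, 1 Hz bins
updrs = rng.uniform(10, 60, n_patients)
power = rng.normal(size=(n_patients, freqs.size))
low_beta = (freqs >= 10) & (freqs <= 15)
power[:, low_beta] += 0.03 * updrs[:, None]        # simulated 10-15 Hz effect

pvals = np.array([spearmanr(power[:, i], updrs).pvalue
                  for i in range(freqs.size)])

# BH step-up across frequency bins at q = 0.05.
m = pvals.size
order = np.argsort(pvals)
crit = 0.05 * np.arange(1, m + 1) / m
hits = np.nonzero(pvals[order] <= crit)[0]
sig = np.zeros(m, bool)
if hits.size:
    sig[order[:hits.max() + 1]] = True
print("FDR-significant bins (Hz):", freqs[sig])
```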
Innovative Methodology in the Discovery of Novel Drug Targets in the Free-Living Amoebae
Baig, Abdul Mannan
2018-04-25
Despite advances in drug discovery and modifications in the chemotherapeutic regimens, human infections caused by free-living amoebae (FLA) have high mortality rates (~95%). The FLA that cause fatal human cerebral infections include Naegleria fowleri, Balamuthia mandrillaris and Acanthamoeba spp. Novel drug-target discovery remains the only viable option to tackle these central nervous system (CNS) infections in order to lower the mortality rates caused by the FLA. Of these FLA, N. fowleri causes primary amoebic meningoencephalitis (PAM), while A. castellanii and B. mandrillaris are known to cause granulomatous amoebic encephalitis (GAE). The infections caused by the FLA have been treated with drugs like rifampin, fluconazole, amphotericin B and miltefosine. Miltefosine is an anti-leishmanial agent and an experimental anti-cancer drug. With only rare incidences of success, these drugs have remained unsuccessful in lowering the mortality rates of the cerebral infections caused by FLA. Recently, with the help of bioinformatic computational tools and the genomic data now available for the FLA, discovery of newer drug targets has become possible. These cellular targets are proteins that are either unique to the FLA or shared between humans and these unicellular eukaryotes. The latter group of proteins has been shown to include targets of some FDA-approved drugs prescribed in non-infectious diseases. This review outlines the bioinformatic methodologies that can be used in the discovery of such novel drug targets, their evaluation by in vitro assays done in the past, and the translational value of such target discoveries in human diseases caused by FLA. Copyright© Bentham Science Publishers.
Corrections and clarifications.
1994-01-21
The Research News article by Faye Flam about the 1993 physics Nobel Prize ("A prize for patient listening," 22 Oct., p. 507), awarded to Joseph Taylor and Russell Hulse for the discovery of a binary pulsar, incorrectly attributed key observations. The measurements implying that the pulsar is emitting gravitational waves were made by Taylor in collaboration with Joel Weisberg, Lee Fowler, and Peter McCulloch, not by Taylor and Hulse.
NASA Astrophysics Data System (ADS)
Park, Haesun
2005-12-01
Given the role electricity and natural gas sectors play in the North American economy, an understanding of how markets for these commodities interact is important. This dissertation independently characterizes the price dynamics of major electricity and natural gas spot markets in North America by combining directed acyclic graphs with time series analyses. Furthermore, the dissertation explores a generalization of price difference bands associated with the law of one price. Interdependencies among 11 major electricity spot markets are examined in Chapter II using a vector autoregression model. Results suggest that the relationships between the markets vary by time. Western markets are separated from the eastern markets and the Electricity Reliability Council of Texas. At longer time horizons these separations disappear. Palo Verde is the important spot market in the west for price discovery. Southwest Power Pool is the dominant market in Eastern Interconnected System for price discovery. Interdependencies among eight major natural gas spot markets are investigated using a vector error correction model and the Greedy Equivalence Search Algorithm in Chapter III. Findings suggest that the eight price series are tied together through six long-run cointegration relationships, supporting the argument that the natural gas market has developed into a single integrated market in North America since deregulation. Results indicate that price discovery tends to occur in the excess consuming regions and move to the excess producing regions. Across North America, the U.S. Midwest region, represented by the Chicago spot market, is the most important for price discovery. The Ellisburg-Leidy Hub in Pennsylvania and Malin Hub in Oregon are important for eastern and western markets. In Chapter IV, a threshold vector error correction model is applied to the natural gas markets to examine nonlinearities in adjustments to the law of one price. Results show that there are nonlinear adjustments to the law of one price in seven pair-wise markets. Four alternative cases for the law of one price are presented as a theoretical background. A methodology is developed for finding a threshold cointegration model that accounts for seasonality in the threshold levels. Results indicate that dynamic threshold effects vary depending on geographical location and whether the markets are excess producing or excess consuming markets.
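As a rough companion to the cointegration analysis described above, the sketch below fits a two-variable vector error correction model with statsmodels. The hub names and price series are simulated stand-ins, not the dissertation's data, and the lag order and deterministic terms are illustrative assumptions.

```python
# Fitting a simple VECM to two simulated cointegrated "hub" price series.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(5)
n = 500
common = np.cumsum(rng.normal(size=n))               # shared stochastic trend
henry = common + rng.normal(0, 0.3, n)
chicago = 1.1 * common + rng.normal(0, 0.3, n)       # tied to the same trend
prices = pd.DataFrame({"henry": henry, "chicago": chicago})

model = VECM(prices, k_ar_diff=1, coint_rank=1, deterministic="co")
res = model.fit()
print("cointegrating vector (beta):", res.beta.ravel())
print("adjustment speeds (alpha):", res.alpha.ravel())
```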
The Detection and Correction of Bias in Student Ratings of Instruction.
ERIC Educational Resources Information Center
Haladyna, Thomas; Hess, Robert K.
1994-01-01
A Rasch model was used to detect and correct bias in Likert rating scales used to assess student perceptions of college teaching, using a database of ratings. Statistical corrections were significant, supporting the model's potential utility. Recommendations are made for a theoretical rationale and further research on the model. (Author/MSE)
Tertiary oil discoveries whet explorer interest off Tunisia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, M.
Prospects for increased Tertiary oil production in the S. Mediterranean have brightened with discoveries off Tunisia, but more evaluation is needed before commercial potential is known. Several groups of U.S. and European companies have tested oil in the relatively unexplored Miocene in the Gulf of Hammamet. These include groups operated by Buttes Resources Tunisia, Elf-Aquitaine Tunisia, and Shell Tunirex. Oil test rates of 1,790 to 1,800 bpd have been reported by the Buttes group in 2 Gulf of Hammamet wells. The initial discovery probably was the first Tertiary oil ever tested in that part of the Mediterranean. The discoveries have helped boost exploratory interest in the northern waters of Tunisia and northeast toward Sicily. There are reports more U.S. and European companies are requesting exploration permits from the government of Tunisia. Companies with permits are planning new exploration for 1978. Probably the most significant discovery to date has been the Buttes group's 1 Jasmine (2 BGH). The group tested high-quality 39.5°-gravity oil at a rate of 1,790 bpd. Test flow was from the Sabri Sand at 6,490 to 6,590 ft. The well was drilled in 458 ft of water.
High-throughput discovery of rare human nucleotide polymorphisms by Ecotilling
Till, Bradley J.; Zerr, Troy; Bowers, Elisabeth; Greene, Elizabeth A.; Comai, Luca; Henikoff, Steven
2006-01-01
Human individuals differ from one another at only ∼0.1% of nucleotide positions, but these single nucleotide differences account for most heritable phenotypic variation. Large-scale efforts to discover and genotype human variation have been limited to common polymorphisms. However, these efforts overlook rare nucleotide changes that may contribute to phenotypic diversity and genetic disorders, including cancer. Thus, there is an increasing need for high-throughput methods to robustly detect rare nucleotide differences. Toward this end, we have adapted the mismatch discovery method known as Ecotilling for the discovery of human single nucleotide polymorphisms. To increase throughput and reduce costs, we developed a universal primer strategy and implemented algorithms for automated band detection. Ecotilling was validated by screening 90 human DNA samples for nucleotide changes in 5 gene targets and by comparing results to public resequencing data. To increase throughput for discovery of rare alleles, we pooled samples 8-fold and found Ecotilling to be efficient relative to resequencing, with a false negative rate of 5% and a false discovery rate of 4%. We identified 28 new rare alleles, including some that are predicted to damage protein function. The detection of rare damaging mutations has implications for models of human disease. PMID:16893952
Safavi, Maliheh; Sabourian, Reyhaneh; Abdollahi, Mohammad
2016-10-01
The task of discovery and development of novel therapeutic agents remains an expensive, uncertain, time-consuming, competitive, and inefficient enterprise. Due to a steady increase in the cost and time of drug development and the considerable amount of resources required, a predictive tool is needed for assessing the safety and efficacy of a new chemical entity. This study is focused on the high attrition rate in discovery and development of oncology and central nervous system (CNS) medicines, because the failure rate of these medicines is higher than others. Some approaches valuable in reducing attrition rates are proposed and the judicious use of biomarkers is discussed. Unlike the significant progress made in identifying and characterizing novel mechanisms of disease processes and targeted therapies, the process of novel drug development is associated with an unacceptably high attrition rate. The application of clinically qualified predictive biomarkers holds great promise for further development of therapeutic targets, improved survival, and ultimately personalized medicine sets for patients. Decisions such as candidate selection, development risks, dose ranging, early proof of concept/principle, and patient stratification are based on the measurements of biologically and/or clinically validated biomarkers.
Limitations and potentials of current motif discovery algorithms
Hu, Jianjun; Li, Bin; Kihara, Daisuke
2005-01-01
Computational methods for de novo identification of gene regulation elements, such as transcription factor binding sites, have proved to be useful for deciphering genetic regulatory networks. However, despite the availability of a large number of algorithms, their strengths and weaknesses are not sufficiently understood. Here, we designed a comprehensive set of performance measures and benchmarked five modern sequence-based motif discovery algorithms using large datasets generated from Escherichia coli RegulonDB. Factors that affect the prediction accuracy, scalability and reliability are characterized. It is revealed that the nucleotide and the binding site level accuracy are very low, while the motif level accuracy is relatively high, which indicates that the algorithms can usually capture at least one correct motif in an input sequence. To exploit diverse predictions from multiple runs of one or more algorithms, a consensus ensemble algorithm has been developed, which achieved 6–45% improvement over the base algorithms by increasing both the sensitivity and specificity. Our study illustrates limitations and potentials of existing sequence-based motif discovery algorithms. Taking advantage of the revealed potentials, several promising directions for further improvements are discussed. Since the sequence-based algorithms are the baseline of most of the modern motif discovery algorithms, this paper suggests substantial improvements would be possible for them. PMID:16284194
Radiation Dose-Rate Effects on Gene Expression in a Mouse Biodosimetry Model
Paul, Sunirmal; Smilenov, Lubomir B.; Elliston, Carl D.; Amundson, Sally A.
2015-01-01
In the event of a nuclear accident or radiological terrorist attack, there will be a pressing need for biodosimetry to triage a large, potentially exposed population and to assign individuals to appropriate treatment. Exposures from fallout are likely, resulting in protracted dose delivery that would, in turn, impact the extent of injury. Biodosimetry approaches that can distinguish such low-dose-rate (LDR) exposures from acute exposures have not yet been developed. In this study, we used the C57BL/6 mouse model in an initial investigation of the impact of low-dose-rate delivery on the transcriptomic response in blood. While a large number of the same genes responded to LDR and acute radiation exposures, for many genes the magnitude of response was lower after LDR exposures. Some genes, however, were differentially expressed (P < 0.001, false discovery rate < 5%) in mice exposed to LDR compared with mice exposed to acute radiation. We identified a set of 164 genes that correctly classified 97% of the samples in this experiment as exposed to acute or LDR radiation using a support vector machine algorithm. Gene expression is a promising approach to radiation biodosimetry, enhanced greatly by this first demonstration of its potential for distinguishing between acute and LDR exposures. Further development of this aspect of radiation biodosimetry, either as part of a complete gene expression biodosimetry test or as an adjunct to other methods, could provide vital triage information in a mass radiological casualty event. PMID:26114327
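The classification step described above, labeling samples as acutely or low-dose-rate exposed from a gene panel, can be sketched with a linear support vector machine as below. The 164-gene matrix and the injected "LDR signature" are simulated placeholders, not the study's expression data.

```python
# Linear SVM classification of acute vs. low-dose-rate exposure (simulated data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_samples, n_genes = 60, 164
labels = np.repeat([0, 1], n_samples // 2)          # 0 = acute, 1 = LDR
X = rng.normal(size=(n_samples, n_genes))
X[labels == 1, :20] += 0.8                          # simulated LDR signature

clf = SVC(kernel="linear", C=1.0)
acc = cross_val_score(clf, X, labels, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (acc.mean(), acc.std()))
```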
1994-09-30
relational versus object oriented DBMS, knowledge discovery, data models, metadata, data filtering, clustering techniques, and synthetic data. A secondary...The first was the investigation of AI/ES applications (knowledge discovery, data mining, and clustering). Here CAST collaborated with Dr. Fred Petry...knowledge discovery system based on clustering techniques; implemented an on-line data browser to the DBMS; completed preliminary efforts to apply object
Pan, Si-Yuan; Zhou, Shu-Feng; Gao, Si-Hua; Yu, Zhi-Ling; Zhang, Shuo-Feng; Tang, Min-Ke; Sun, Jian-Ning; Ma, Dik-Lung; Han, Yi-Fan; Fong, Wang-Fun; Ko, Kam-Ming
2013-01-01
With tens of thousands of plant species on earth, we are endowed with an enormous wealth of medicinal remedies from Mother Nature. Natural products and their derivatives represent more than 50% of all the drugs in modern therapeutics. Because of the low success rate and the huge capital investment required, the research and development of conventional drugs are very costly and difficult. Over the past few decades, researchers have focused on drug discovery from herbal medicines or botanical sources, an important group of complementary and alternative medicine (CAM) therapy. With a long history of herbal usage for the clinical management of a variety of diseases in indigenous cultures, the success rate of developing a new drug from herbal medicinal preparations should, in theory, be higher than that from chemical synthesis. While the endeavor for drug discovery from herbal medicines is "experience driven," the search for a therapeutically useful synthetic drug, like "looking for a needle in a haystack," is a daunting task. In this paper, we first illustrated various approaches of drug discovery from herbal medicines. Typical examples of successful drug discovery from botanical sources were given. In addition, problems in drug discovery from herbal medicines were described and possible solutions were proposed. Finally, the prospects for drug discovery from herbal medicines in the postgenomic era are considered, and future directions in this area of drug development are provided.
Evidence for formation of DNA repair centers and dose-response nonlinearity in human cells
Neumaier, Teresa; Swenson, Joel; Pham, Christopher; Polyzos, Aris; Lo, Alvin T.; Yang, PoAn; Dyball, Jane; Asaithamby, Aroumougame; Chen, David J.; Bissell, Mina J.; Thalhammer, Stefan; Costes, Sylvain V.
2012-01-01
The concept of DNA “repair centers” and the meaning of radiation-induced foci (RIF) in human cells have remained controversial. RIFs are characterized by the local recruitment of DNA damage sensing proteins such as p53 binding protein (53BP1). Here, we provide strong evidence for the existence of repair centers. We used live imaging and mathematical fitting of RIF kinetics to show that RIF induction rate increases with increasing radiation dose, whereas the rate at which RIFs disappear decreases. We show that multiple DNA double-strand breaks (DSBs) 1 to 2 μm apart can rapidly cluster into repair centers. Correcting mathematically for the dose dependence of induction/resolution rates, we observe an absolute RIF yield that is surprisingly much smaller at higher doses: 15 RIF/Gy after 2 Gy exposure compared to approximately 64 RIF/Gy after 0.1 Gy. Cumulative RIF counts from time lapse of 53BP1-GFP in human breast cells confirmed these results. The standard model currently in use applies a linear scale, extrapolating cancer risk from high doses to low doses of ionizing radiation. However, our discovery of DSB clustering over such large distances casts considerable doubts on the general assumption that risk to ionizing radiation is proportional to dose, and instead provides a mechanism that could more accurately address risk dose dependency of ionizing radiation. PMID:22184222
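A first-order induction/resolution model of the kind used to fit radiation-induced focus (RIF) kinetics can be sketched as below: foci appear at a constant rate and resolve in proportion to their number, giving N(t) = (k_i/k_r)(1 - exp(-k_r t)). The parameter values are illustrative, not the paper's fitted estimates.

```python
# Expected RIF count over time under constant induction and first-order resolution.
import numpy as np

def rif_kinetics(t, k_induction, k_resolution):
    """Expected focus count at time t (same time units as the rate constants)."""
    return (k_induction / k_resolution) * (1.0 - np.exp(-k_resolution * t))

t = np.linspace(0, 120, 7)               # minutes after exposure
print(rif_kinetics(t, k_induction=1.5, k_resolution=0.05))
```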
ERIC Educational Resources Information Center
McCane-Bowling, Sara J.; Strait, Andrea D.; Guess, Pamela E.; Wiedo, Jennifer R.; Muncie, Eric
2014-01-01
This study examined the predictive utility of five formative reading measures: words correct per minute, number of comprehension questions correct, reading comprehension rate, number of maze correct responses, and maze accurate response rate (MARR). Broad Reading cluster scores obtained via the Woodcock-Johnson III (WJ III) Tests of Achievement…
Speck-Planche, Alejandro; Kleandrova, Valeria V; Luan, Feng; Cordeiro, M Natália D S
2012-08-01
The discovery of new and more potent anti-cancer agents constitutes one of the most active fields of research in chemotherapy. Colorectal cancer (CRC) is one of the most studied cancers because of its high prevalence and number of deaths. In the current pharmaceutical design of more efficient anti-CRC drugs, the use of methodologies based on Chemoinformatics has played a decisive role, including Quantitative-Structure-Activity Relationship (QSAR) techniques. However, until now, there is no methodology able to predict anti-CRC activity of compounds against more than one CRC cell line, which should constitute the principal goal. In an attempt to overcome this problem we develop here the first multi-target (mt) approach for the virtual screening and rational in silico discovery of anti-CRC agents against ten cell lines. Here, two mt-QSAR classification models were constructed using a large and heterogeneous database of compounds. The first model was based on linear discriminant analysis (mt-QSAR-LDA) employing fragment-based descriptors while the second model was obtained using artificial neural networks (mt-QSAR-ANN) with global 2D descriptors. Both models correctly classified more than 90% of active and inactive compounds in training and prediction sets. Some fragments were extracted from the molecules and their contributions to anti-CRC activity were calculated using mt-QSAR-LDA model. Several fragments were identified as potential substructural features responsible for the anti-CRC activity and new molecules designed from those fragments with positive contributions were suggested and correctly predicted by the two models as possible potent and versatile anti-CRC agents. Copyright © 2012 Elsevier Ltd. All rights reserved.
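The two model families described above, a linear discriminant analysis classifier and a feed-forward neural network trained on molecular descriptors, can be sketched with scikit-learn as follows. The descriptor matrix and activity labels are random placeholders, not the paper's fragment-based or 2D descriptors.

```python
# LDA and a small neural network classifying active vs. inactive compounds (simulated).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_compounds, n_descriptors = 1000, 30
X = rng.normal(size=(n_compounds, n_descriptors))
y = (X[:, :5].sum(axis=1) + rng.normal(0, 0.5, n_compounds) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print("LDA accuracy:", lda.score(X_te, y_te))
print("ANN accuracy:", ann.score(X_te, y_te))
```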
Discovery of new enzymes and metabolic pathways by using structure and genome context.
Zhao, Suwen; Kumar, Ritesh; Sakai, Ayano; Vetting, Matthew W; Wood, B McKay; Brown, Shoshana; Bonanno, Jeffery B; Hillerich, Brandan S; Seidel, Ronald D; Babbitt, Patricia C; Almo, Steven C; Sweedler, Jonathan V; Gerlt, John A; Cronan, John E; Jacobson, Matthew P
2013-10-31
Assigning valid functions to proteins identified in genome projects is challenging: overprediction and database annotation errors are the principal concerns. We and others are developing computation-guided strategies for functional discovery with 'metabolite docking' to experimentally derived or homology-based three-dimensional structures. Bacterial metabolic pathways often are encoded by 'genome neighbourhoods' (gene clusters and/or operons), which can provide important clues for functional assignment. We recently demonstrated the synergy of docking and pathway context by 'predicting' the intermediates in the glycolytic pathway in Escherichia coli. Metabolite docking to multiple binding proteins and enzymes in the same pathway increases the reliability of in silico predictions of substrate specificities because the pathway intermediates are structurally similar. Here we report that structure-guided approaches for predicting the substrate specificities of several enzymes encoded by a bacterial gene cluster allowed the correct prediction of the in vitro activity of a structurally characterized enzyme of unknown function (PDB 2PMQ), 2-epimerization of trans-4-hydroxy-L-proline betaine (tHyp-B) and cis-4-hydroxy-D-proline betaine (cHyp-B), and also the correct identification of the catabolic pathway in which Hyp-B 2-epimerase participates. The substrate-liganded pose predicted by virtual library screening (docking) was confirmed experimentally. The enzymatic activities in the predicted pathway were confirmed by in vitro assays and genetic analyses; the intermediates were identified by metabolomics; and repression of the genes encoding the pathway by high salt concentrations was established by transcriptomics, confirming the osmolyte role of tHyp-B. This study establishes the utility of structure-guided functional predictions to enable the discovery of new metabolic pathways.
75 FR 22394 - Combined Notice of Filings No. 2
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-28
... 21, 2010. Take notice that the Commission has received the following Natural Gas Pipeline Rate and Refund Report filings: Docket Numbers: RP10-539-001. Applicants: Discovery Gas Transmission LLC. Description: Discovery Gas Transmission, LLC submits Substitute First Revised Sheet 225 et al. to FERC Gas...
1990-04-24
Through the large window panes of Firing Room 1, KSC launch team members reap the rewards of their work with a glimpse of the space shuttle Discovery soaring into the sky. Discovery was launched for the tenth time at 8:34 a.m. EDT on April 24 beginning the five-day STS-31 mission to deploy the Hubble Space Telescope. A ray of morning sunlight highlights the red and white stripes of Old Glory hanging high in the Firing Room. Launch team members overcame a last minute challenge in the STS-31 countdown when software detected a main propulsion system valve was out of position. The situation was quickly corrected and verified by the team from consoles in the Firing Room and the countdown was returned in a matter of minutes. Photo credit: NASA
Searching for Majorana Neutrinos in the Like-Sign Dilepton Final State
NASA Astrophysics Data System (ADS)
Clarida, Warren
2010-02-01
The Standard Model can be extended to include massive neutrinos as observed in the recent oscillation experiments. Perhaps the most commonly studied model is the type-I seesaw mechanism. This model introduces a new neutrino with a Majorana nature with an unknown mass. In this study we present the potential for the discovery of a Majorana neutrino during the first year of data collection from the Large Hadron Collider. In the analysis we used muon triggers, muon isolation, jet energy corrections, b-tagging, and an examination of the combinatorial background. We conclude that the discovery potential can be reached in the first year of running at the LHC at 10 TeV startup collision energy with the CMS detector for the Majorana neutrino mass range near 100 GeV.
NASA Astrophysics Data System (ADS)
Hart, Stan; Basu, Asish
Publication of this monograph will coincide, to a precision of a few per mil, with the centenary of Henri Becquerel's discovery of "radiations actives" (C. R. Acad. Sci., Feb. 24, 1896). In 1896 the Earth was only 40 million years old according to Lord Kelvin. Eleven years later, Boltwood had pushed the Earth's age past 2000 million years, based on the first U/Pb chemical dating results. In exciting progression came discovery of isotopes by J. J. Thomson in 1912, invention of the mass spectrometer by Dempster (1918) and Aston (1919), the first measurement of the isotopic composition of Pb (Aston, 1927) and the final approach, using Pb-Pb isotopic dating, to the correct age of the Earth: close—2.9 Ga (Gerling, 1942), closer—3.0 Ga (Holmes, 1949) and closest—4.50 Ga (Patterson, Tilton and Inghram, 1953).
Pinto, Israel de Souza; Chagas, Bruna Dias das; Rodrigues, Andressa Alencastre Fuzari; Ferreira, Adelson Luiz; Rezende, Helder Ricas; Bruno, Rafaela Vieira; Falqueto, Aloisio; Andrade-Filho, José Dilermando; Galati, Eunice Aparecida Bianchi; Shimabukuro, Paloma Helena Fernandes; Brazil, Reginaldo Peçanha
2015-01-01
DNA barcoding has been an effective tool for species identification in several animal groups. Here, we used DNA barcoding to discriminate between 47 morphologically distinct species of Brazilian sand flies. DNA barcodes correctly identified approximately 90% of the sampled taxa (42 morphologically distinct species) using clustering based on neighbor-joining distance, of which four species showed comparatively higher maximum values of divergence (range 4.23–19.04%), indicating cryptic diversity. The DNA barcodes also corroborated the resurrection of two species within the shannoni complex and provided an efficient tool to differentiate between morphologically indistinguishable females of closely related species. Taken together, our results validate the effectiveness of DNA barcoding for species identification and the discovery of cryptic diversity in sand flies from Brazil. PMID:26506007
Media reaction to a SETI success.
Shostak, G S
1997-01-01
Consideration of the reaction to a SETI detection by the media, and the effect this will have on the public, is more than mere sociological speculation. An accurate forecast of the media's interest can lead to actions that will help ensure that correct and comprehensible information reaches the public. This is most critical in the first few weeks following a discovery. While a widely accepted protocol for dealing with a detection exists in the "Declaration of Principles Following the Detection of Extraterrestrial Intelligence," it gives scant consideration to the fact that the actual situation will be chaotic and not subject to easy control. The 1996 story about the possible discovery of martian microfossils has provided a useful precedent for what will happen if astronomers uncover the existence of alien intelligence.
Position Corrections for Airspeed and Flow Angle Measurements on Fixed-Wing Aircraft
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2017-01-01
This report addresses position corrections made to airspeed and aerodynamic flow angle measurements on fixed-wing aircraft. These corrections remove the effects of angular rates, which contribute to the measurements when the sensors are installed away from the aircraft center of mass. Simplified corrections, which are routinely used in practice and assume small flow angles and angular rates, are reviewed. The exact, nonlinear corrections are then derived. The simplified corrections are sufficient in most situations; however, accuracy diminishes for smaller aircraft that incur higher angular rates, and for flight at high air flow angles. This is demonstrated using both flight test data and a nonlinear flight dynamics simulation of a subscale transport aircraft in a variety of low-speed, subsonic flight conditions.
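The exact (nonlinear) correction discussed above can be sketched as follows: air-relative velocity components measured at a boom-mounted sensor are corrected for the angular-rate contribution omega x r (with r the sensor position relative to the center of mass in body axes), and airspeed and flow angles are then recomputed. The geometry and rates below are illustrative values, not the report's flight-test configuration.

```python
# Position correction of airspeed and flow angles for a boom-mounted air data sensor.
import numpy as np

def correct_airdata(V, alpha, beta, omega, r):
    """Return center-of-mass airspeed and flow angles from sensor measurements."""
    # Sensor-location air-relative velocity in body axes.
    u = V * np.cos(alpha) * np.cos(beta)
    v = V * np.sin(beta)
    w = V * np.sin(alpha) * np.cos(beta)
    # Remove the rotation-induced component omega x r.
    u_c, v_c, w_c = np.array([u, v, w]) - np.cross(omega, r)
    V_c = np.sqrt(u_c**2 + v_c**2 + w_c**2)
    return V_c, np.arctan2(w_c, u_c), np.arcsin(v_c / V_c)

omega = np.array([0.05, 0.30, 0.02])       # p, q, r in rad/s
r = np.array([1.5, 0.0, -0.1])             # boom location relative to the c.m. (m)
V, alpha, beta = correct_airdata(40.0, np.deg2rad(6.0), np.deg2rad(1.0), omega, r)
print(V, np.rad2deg(alpha), np.rad2deg(beta))
```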
76 FR 50726 - Integrated System Power Rates: Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-16
... DEPARTMENT OF ENERGY Southwestern Power Administration Integrated System Power Rates: Correction AGENCY: Southwestern Power Administration, DOE. ACTION: Notice of public review and comment; Correction. SUMMARY: Southwestern Power Administration published a document in the Federal Register (76 FR 48159) on...
Academic drug discovery: current status and prospects.
Everett, Jeremy R
2015-01-01
The contraction in pharmaceutical drug discovery operations in the past decade has been counter-balanced by a significant rise in the number of academic drug discovery groups. In addition, pharmaceutical companies that used to operate in completely independent, vertically integrated operations for drug discovery, are now collaborating more with each other, and with academic groups. We are in a new era of drug discovery. This review provides an overview of the current status of academic drug discovery groups, their achievements and the challenges they face, together with perspectives on ways to achieve improved outcomes. Academic groups have made important contributions to drug discovery, from its earliest days and continue to do so today. However, modern drug discovery and development is exceedingly complex, and has high failure rates, principally because human biology is complex and poorly understood. Academic drug discovery groups need to play to their strengths and not just copy what has gone before. However, there are lessons to be learnt from the experiences of the industrial drug discoverers and four areas are highlighted for attention: i) increased validation of targets; ii) elimination of false hits from high throughput screening (HTS); iii) increasing the quality of molecular probes; and iv) investing in a high-quality informatics infrastructure.
The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.
Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R
2013-01-01
In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.
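As background to the power question raised above, the small simulation below estimates the average power of the Benjamini-Hochberg procedure at several nominal FDR levels for a well-behaved (normal shift) test statistic, where power is indeed non-decreasing in the level. This two-group normal model is an illustrative setting only, not the paper's counterexample where the monotone likelihood ratio condition fails.

```python
# Average BH power versus nominal FDR level in a simple two-group normal model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
m, m1, reps = 1000, 100, 200                   # m tests, m1 true signals
for q in (0.01, 0.05, 0.10, 0.20):
    power = []
    for _ in range(reps):
        z = rng.normal(size=m)
        z[:m1] += 3.0                          # non-null effects
        p = norm.sf(z)                         # one-sided p-values
        order = np.argsort(p)
        crit = q * np.arange(1, m + 1) / m
        hits = np.nonzero(p[order] <= crit)[0]
        k = hits.max() + 1 if hits.size else 0
        rejected = order[:k]
        power.append(np.sum(rejected < m1) / m1)
    print(f"nominal FDR {q:.2f}: average power {np.mean(power):.2f}")
```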
To assess the value of satellite imagery in resource inventorization on a national scale
NASA Technical Reports Server (NTRS)
Malan, O. G. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Production of 1:500,000 scale false color photolithoprints proved to be very valuable. Significant results were obtained in geomorphological mapping, mapping of disturbed and undisturbed natural vegetation as well as in the discovery of major geologic lineaments, some of which may be associated with mineralization. The cartographic quality of system corrected MSS imagery was also evaluated.
Japanese Studies of Asteroids Following the Discovery of the Hirayama Families
NASA Astrophysics Data System (ADS)
Nakamura, Tsuko
This paper reviews studies relating to asteroids conducted by Japanese astronomers since the discovery of asteroid families by Kiyotsugu Hirayama in 1918. First, we note that it took quite some time for the concept of an `asteroid family' to be understood correctly by the astronomical community worldwide. It is no wonder that some eminent research on the dynamics of asteroids based on secular perturbation theories appeared in Japan after WWII, as represented by the `Kozai mechanism' (1962), which probably was influenced by Hirayama's monumental discovery. As for studies of the physical nature of asteroids, we must note the pioneering work by M. Kitamura in 1959, when the observed colors of about 40 asteroids were compared with reflectance spectra of several meteorites measured in the laboratory, even though this result unfortunately was not pursued further at the time. Modern impact experiments initiated by A. Fujiwara in 1975 soon became an important means of investigating the origin of asteroid families, and of the ubiquitous craters seen on the surfaces of airless Solar System bodies.
Accounting for Chromatic Atmospheric Effects on Barycentric Corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blackman, Ryan T.; Szymkowiak, Andrew E.; Fischer, Debra A.
2017-03-01
Atmospheric effects on stellar radial velocity measurements for exoplanet discovery and characterization have not yet been fully investigated for extreme precision levels. We carry out calculations to determine the wavelength dependence of barycentric corrections across optical wavelengths, due to the ubiquitous variations in air mass during observations. We demonstrate that radial velocity errors of at least several cm s⁻¹ can be incurred if the wavelength dependence is not included in the photon-weighted barycentric corrections. A minimum of four wavelength channels across optical spectra (380–680 nm) are required to account for this effect at the 10 cm s⁻¹ level, with polynomial fits of the barycentric corrections applied to cover all wavelengths. Additional channels may be required in poor observing conditions or to avoid strong telluric absorption features. Furthermore, consistent flux sampling on the order of seconds throughout the observation is necessary to ensure that accurate photon weights are obtained. Finally, we describe how a multiple-channel exposure meter will be implemented in the EXtreme PREcision Spectrograph (EXPRES).
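The idea of a photon-weighted, chromatic barycentric correction can be sketched as below: a time-varying barycentric velocity over one exposure is averaged with photon weights recorded in a few wavelength channels, and a polynomial in wavelength then interpolates the correction to all wavelengths. Every number here (exposure length, drift, channel centers, weights) is an illustrative placeholder, not EXPRES design data.

```python
# Photon-weighted barycentric correction per wavelength channel, plus a polynomial fit.
import numpy as np

t = np.arange(0, 900, 1.0)                       # 1 s sampling over a 15 min exposure
bc = 1.0e4 + 0.02 * t                            # barycentric velocity drift (m/s)

channels = np.array([400.0, 500.0, 600.0, 680.0])    # channel centers (nm)
# Simulated exposure-meter counts: bluer channels dim faster as airmass grows.
weights = np.array([np.maximum(1.0 - s * t / 900.0, 0.05)
                    for s in (0.5, 0.35, 0.2, 0.1)])

bc_channel = (weights * bc).sum(axis=1) / weights.sum(axis=1)
coeffs = np.polyfit(channels, bc_channel, deg=2)      # polynomial in wavelength
wavelengths = np.linspace(380, 680, 5)
print(np.polyval(coeffs, wavelengths))                # correction at any wavelength
```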
Reproducibility of results in preclinical studies: a perspective from the bone field.
Manolagas, Stavros C; Kronenberg, Henry M
2014-10-01
The biomedical research enterprise-and the public support for it-is predicated on the belief that discoveries and the conclusions drawn from them can be trusted to build a body of knowledge which will be used to improve human health. As in all other areas of scientific inquiry, knowledge and understanding grow by layering new discoveries upon earlier ones. The process self-corrects and distills knowledge by discarding false ideas and unsubstantiated claims. Although self-correction is inexorable in the long-term, in recent years biomedical scientists and the public alike have become alarmed and deeply troubled by the fact that many published results cannot be reproduced. The chorus of concern reached a high pitch with a recent commentary from the NIH Director, Francis S. Collins, and Principal Deputy Director, Lawrence A. Tabak, and their announcement of specific plans to enhance reproducibility of preclinical research that relies on animal models. In this invited perspective, we highlight the magnitude of the problem across biomedical fields and address the relevance of these concerns to the field of bone and mineral metabolism. We also suggest how our specialty journals, our scientific organizations, and our community of bone and mineral researchers can help to overcome this troubling trend. © 2014 American Society for Bone and Mineral Research.
From Residency to Lifelong Learning.
Brandt, Keith
2015-11-01
The residency training experience is the perfect environment for learning. The university/institution patient population provides a never-ending supply of patients with unique management challenges. Resources abound that allow the discovery of knowledge about similar situations. Senior teachers provide counseling and help direct appropriate care. Periodic testing and evaluations identify deficiencies, which can be corrected with future study. What happens, however, when the resident graduates? Do they possess all the knowledge they'll need for the rest of their career? Will medical discovery stand still, limiting the need for future study? If initial certification establishes that the physician has the skills and knowledge to function as an independent physician and surgeon, how do we assure the public that plastic surgeons will practice lifelong learning and remain safe throughout their career? Enter Maintenance of Certification (MOC). In an ideal world, MOC would provide many of the same tools as residency training: identification of gaps in knowledge, resources to correct those deficiencies, overall assessment of knowledge, feedback about communication skills and professionalism, and methods to evaluate and improve one's practice. This article discusses the need for education and self-assessment that extends beyond residency training and a commitment to lifelong learning. The American Board of Plastic Surgery MOC program is described to demonstrate how it helps the diplomate reach the goal of continuous practice improvement.
Climatic shocks associate with innovation in science and technology
van Dijk, Mathijs A.
2018-01-01
Human history is shaped by landmark discoveries in science and technology. However, across both time and space the rate of innovation is erratic: Periods of relative inertia alternate with bursts of creative science and rapid cascades of technological innovations. While the origins of the rise and fall in rates of discovery and innovation remain poorly understood, they may reflect adaptive responses to exogenously emerging threats and pressures. Here we examined this possibility by fitting annual rates of scientific discovery and technological innovation to climatic variability and its associated economic pressures and resource scarcity. In time-series data from Europe (1500–1900 CE), we indeed found that rates of innovation are higher during prolonged periods of cold (versus warm) surface temperature and during the presence (versus absence) of volcanic dust veils. This negative temperature–innovation link was confirmed in annual time-series for France, Germany, and the United Kingdom (1901–1965 CE). Combined, across almost 500 years and over 5,000 documented innovations and discoveries, a 0.5°C increase in temperature associates with a sizable 0.30–0.60 standard deviation decrease in innovation. Results were robust to controlling for fluctuations in population size. Furthermore, and consistent with economic theory and micro-level data on group innovation, path analyses revealed that the relation between harsher climatic conditions between 1500–1900 CE and more innovation is mediated by climate-induced economic pressures and resource scarcity. PMID:29364910
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit ... likely to be isolated and correctable by the convolutional decoder. [Table residue: data rate (Mbps), modulation, coding rate, coded bits per subcarrier.] ... binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data
NASA Astrophysics Data System (ADS)
Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.
2018-01-01
Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which have themselves only recently been formulated. The current paper discusses and presents the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for a broad range of count rates available in practical applications.
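The DCF algorithm itself is not reproduced in the abstract, so no attempt is made to sketch it here. For orientation only, the snippet below shows the textbook first-order (non-paralyzable) dead time correction of a single measured count rate; it is not the DCF method, and the dead time value is an arbitrary assumption.

```python
def nonparalyzable_deadtime_correction(measured_rate_cps, dead_time_s):
    """Recover the true event rate from a measured rate under the simple
    non-paralyzable dead time model: n = m / (1 - m * tau)."""
    loss_fraction = measured_rate_cps * dead_time_s
    if loss_fraction >= 1.0:
        raise ValueError("measured rate is inconsistent with the assumed dead time")
    return measured_rate_cps / (1.0 - loss_fraction)

# Illustrative numbers only: 50 kcps measured with an assumed 100 ns dead time.
print(nonparalyzable_deadtime_correction(50_000.0, 100e-9))
```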
Pan, Si-Yuan; Zhou, Shu-Feng; Gao, Si-Hua; Yu, Zhi-Ling; Zhang, Shuo-Feng; Tang, Min-Ke; Sun, Jian-Ning; Han, Yi-Fan; Fong, Wang-Fun; Ko, Kam-Ming
2013-01-01
With tens of thousands of plant species on earth, we are endowed with an enormous wealth of medicinal remedies from Mother Nature. Natural products and their derivatives represent more than 50% of all the drugs in modern therapeutics. Because of the low success rate and the huge capital investment needed, the research and development of conventional drugs are very costly and difficult. Over the past few decades, researchers have focused on drug discovery from herbal medicines or botanical sources, an important group of complementary and alternative medicine (CAM) therapy. With a long history of herbal usage for the clinical management of a variety of diseases in indigenous cultures, the success rate of developing a new drug from herbal medicinal preparations should, in theory, be higher than that from chemical synthesis. While the endeavor for drug discovery from herbal medicines is “experience driven,” the search for a therapeutically useful synthetic drug, like “looking for a needle in a haystack,” is a daunting task. In this paper, we first illustrated various approaches of drug discovery from herbal medicines. Typical examples of successful drug discovery from botanical sources were given. In addition, problems in drug discovery from herbal medicines were described and possible solutions were proposed. The prospects for drug discovery from herbal medicines in the postgenomic era were considered, with future directions in this area of drug development provided. PMID:23634172
A Tutorial on Multiple Testing: False Discovery Control
NASA Astrophysics Data System (ADS)
Chatelain, F.
2016-09-01
This paper presents an overview of criteria and methods in multiple testing, with an emphasis on the false discovery rate control. The popular Benjamini and Hochberg procedure is described. The rationale for this approach is explained through a simple Bayesian interpretation. Some state-of-the-art variations and extensions are also presented.
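A minimal sketch of the Benjamini-Hochberg step-up procedure mentioned above, assuming a plain array of p-values; the example p-values and the target level q are illustrative only.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of discoveries under the BH step-up rule at level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                     # sort p-values in ascending order
    ranked = p[order]
    critical = q * np.arange(1, m + 1) / m    # BH critical values i*q/m
    passing = np.nonzero(ranked <= critical)[0]
    reject = np.zeros(m, dtype=bool)
    if passing.size:                          # reject every hypothesis up to the largest passing rank
        reject[order[: passing[-1] + 1]] = True
    return reject

# Illustrative use on hypothetical p-values
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.30, 0.74]))
```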
A petroleum discovery-rate forecast revisited-The problem of field growth
Drew, L.J.; Schuenemeyer, J.H.
1992-01-01
A forecast of the future rates of discovery of crude oil and natural gas for the 123,027-km² Miocene/Pliocene trend in the Gulf of Mexico was made in 1980. This forecast was evaluated in 1988 by comparing two sets of data: (1) the actual versus the forecasted number of fields discovered, and (2) the actual versus the forecasted volumes of crude oil and natural gas discovered with the drilling of 1,820 wildcat wells along the trend between January 1, 1977, and December 31, 1985. The forecast specified that this level of drilling would result in the discovery of 217 fields containing 1.78 billion barrels of oil equivalent; however, 238 fields containing 3.57 billion barrels of oil equivalent were actually discovered. This underestimation is attributed to biases introduced by field growth and, to a lesser degree, the artificially low, pre-1970's price of natural gas that prevented many smaller gas fields from being brought into production at the time of their discovery; most of these fields contained less than 50 billion cubic feet of producible natural gas. © 1992 Oxford University Press.
Benson, Neil
2015-08-01
Phase II attrition remains the most important challenge for drug discovery. Tackling the problem requires improved understanding of the complexity of disease biology. Systems biology approaches to this problem can, in principle, deliver this. This article reviews the reports of the application of mechanistic systems models to drug discovery questions and discusses the added value. Although we are on the journey to the virtual human, the length, path and rate of learning from this remain an open question. Success will be dependent on the will to invest and make the most of the insight generated along the way. Copyright © 2015 Elsevier Ltd. All rights reserved.
Drug Discovery for Neglected Diseases: Molecular Target-Based and Phenotypic Approaches
2013-01-01
Drug discovery for neglected tropical diseases is carried out using both target-based and phenotypic approaches. In this paper, target-based approaches are discussed, with a particular focus on human African trypanosomiasis. Target-based drug discovery can be successful, but careful selection of targets is required. There are still very few fully validated drug targets in neglected diseases, and there is a high attrition rate in target-based drug discovery for these diseases. Phenotypic screening is a powerful method in both neglected and non-neglected diseases and has been very successfully used. Identification of molecular targets from phenotypic approaches can be a way to identify potential new drug targets. PMID:24015767
78 FR 67951 - Price Cap Rules for Certain Postal Rate Adjustments; Corrections
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-13
... POSTAL REGULATORY COMMISSION 39 CFR Part 3010 [Docket No. RM2013-2; Order No. 1786] Price Cap Rules for Certain Postal Rate Adjustments; Corrections AGENCY: Postal Regulatory Commission. ACTION: Correcting amendments. SUMMARY: The Postal Regulatory Commission published a document in the Federal Register...
Katzenellenbogen, Judith M; Sanfilippo, Frank M; Hobbs, Michael S T; Briffa, Tom G; Ridout, Steve C; Knuiman, Matthew W; Dimer, Lyn; Taylor, Kate P; Thompson, Peter L; Thompson, Sandra C
2011-06-01
To investigate the impact of prevalence correction of population denominators on myocardial infarction (MI) incidence rates, rate ratios, and rate differences in Aboriginal vs. non-Aboriginal Western Australians aged 25-74 years during the study period 2000-2004. Person-based linked hospital and mortality data sets were used to estimate the number of prevalent and first-ever MI cases each year from 2000 to 2004 using a 15-year look-back period. Age-specific and -standardized MI incidence rates were calculated using both prevalence-corrected and -uncorrected population denominators, by sex and Aboriginality. The impact of prevalence correction on rates increased with age, was higher for men than women, and substantially greater for Aboriginal than non-Aboriginal people. Despite the systematic underestimation of incidence, prevalence correction had little impact on the Aboriginal to non-Aboriginal age-standardized rate ratios (6% and 4% underestimate in men and women, respectively), although the impact on rate differences was more marked (12% and 6%, respectively). The percentage underestimate of differentials was greater at older ages. Prevalence correction of denominators, while more accurate, is difficult to apply and may add modestly to the quantification of relative disparities in MI incidence between populations. Absolute incidence disparities using uncorrected denominators may have an error >10%. Copyright © 2011 Elsevier Inc. All rights reserved.
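A minimal numeric sketch of the denominator correction described above: the incidence rate is recomputed after removing prevalent (previously affected) people from the population at risk. All counts below are invented for illustration and do not come from the study.

```python
# Hypothetical counts for a single age-sex stratum in one calendar year
population = 10_000          # census denominator
prevalent_mi_cases = 600     # people with a prior MI found via the 15-year look-back
first_ever_mi_events = 80    # incident (first-ever) MI events in that year

uncorrected_rate = first_ever_mi_events / population
corrected_rate = first_ever_mi_events / (population - prevalent_mi_cases)

print(f"uncorrected: {uncorrected_rate:.4f}  prevalence-corrected: {corrected_rate:.4f}")
```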
Correcting reaction rates measured by saturation-transfer magnetic resonance spectroscopy
NASA Astrophysics Data System (ADS)
Gabr, Refaat E.; Weiss, Robert G.; Bottomley, Paul A.
2008-04-01
Off-resonance or spillover irradiation and incomplete saturation can introduce significant errors in the estimates of chemical rate constants measured by saturation-transfer magnetic resonance spectroscopy (MRS). Existing methods of correction are effective only over a limited parameter range. Here, a general approach of numerically solving the Bloch-McConnell equations to calculate exchange rates, relaxation times and concentrations for the saturation-transfer experiment is investigated, but found to require more measurements and higher signal-to-noise ratios than in vivo studies can practically afford. As an alternative, correction formulae for the reaction rate are provided which account for the expected parameter ranges and limited measurements available in vivo. The correction term is a quadratic function of experimental measurements. In computer simulations, the new formulae showed negligible bias and reduced the maximum error in the rate constants by about 3-fold compared to traditional formulae, and the error scatter by about 4-fold, over a wide range of parameters for conventional saturation transfer employing progressive saturation, and for the four-angle saturation-transfer method applied to the creatine kinase (CK) reaction in the human heart at 1.5 T. In normal in vivo spectra affected by spillover, the correction increases the mean calculated forward CK reaction rate by 6-16% over traditional and prior correction formulae.
Frye, M A; Nassan, M; Jenkins, G D; Kung, S; Veldic, M; Palmer, B A; Feeder, S E; Tye, S J; Choi, D S; Biernacka, J M
2015-01-01
The objective of this study was to determine whether proteomic profiling in serum samples can be utilized in identifying and differentiating mood disorders. A consecutive sample of patients with a confirmed diagnosis of unipolar (UP n=52) or bipolar depression (BP-I n=46, BP-II n=49) and controls (n=141) were recruited. A 7.5-ml blood sample was drawn for proteomic multiplex profiling of 320 proteins utilizing the Myriad RBM Discovery Multi-Analyte Profiling platform. After correcting for multiple testing and adjusting for covariates, growth differentiation factor 15 (GDF-15), hemopexin (HPX), hepsin (HPN), matrix metalloproteinase-7 (MMP-7), retinol-binding protein 4 (RBP-4) and transthyretin (TTR) all showed statistically significant differences among groups. In a series of three post hoc analyses correcting for multiple testing, MMP-7 was significantly different in mood disorder (BP-I+BP-II+UP) vs controls, MMP-7, GDF-15, HPN were significantly different in bipolar cases (BP-I+BP-II) vs controls, and GDF-15, HPX, HPN, RBP-4 and TTR proteins were all significantly different in BP-I vs controls. Good diagnostic accuracy (ROC-AUC⩾0.8) was obtained most notably for GDF-15, RBP-4 and TTR when comparing BP-I vs controls. While based on a small sample not adjusted for medication state, this discovery sample with a conservative method of correction suggests feasibility in using proteomic panels to assist in identifying and distinguishing mood disorders, in particular bipolar I disorder. Replication studies for confirmation, consideration of state vs trait serial assays to delineate proteomic expression of bipolar depression vs previous mania, and utility studies to assess proteomic expression profiling as an advanced decision making tool or companion diagnostic are encouraged. PMID:26645624
O’Connell, Grant C; Petrone, Ashley B; Treadway, Madison B; Tennant, Connie S; Lucke-Wold, Noelle; Chantler, Paul D; Barr, Taura L
2016-01-01
Early and accurate diagnosis of stroke improves the probability of positive outcome. The objective of this study was to identify a pattern of gene expression in peripheral blood that could potentially be optimised to expedite the diagnosis of acute ischaemic stroke (AIS). A discovery cohort was recruited consisting of 39 AIS patients and 24 neurologically asymptomatic controls. Peripheral blood was sampled at emergency department admission, and genome-wide expression profiling was performed via microarray. A machine-learning technique known as genetic algorithm k-nearest neighbours (GA/kNN) was then used to identify a pattern of gene expression that could optimally discriminate between groups. This pattern of expression was then assessed via qRT-PCR in an independent validation cohort, where it was evaluated for its ability to discriminate between an additional 39 AIS patients and 30 neurologically asymptomatic controls, as well as 20 acute stroke mimics. GA/kNN identified 10 genes (ANTXR2, STK3, PDK4, CD163, MAL, GRAP, ID3, CTSZ, KIF1B and PLXDC2) whose coordinate pattern of expression was able to identify 98.4% of discovery cohort subjects correctly (97.4% sensitive, 100% specific). In the validation cohort, the expression levels of the same 10 genes were able to identify 95.6% of subjects correctly when comparing AIS patients to asymptomatic controls (92.3% sensitive, 100% specific), and 94.9% of subjects correctly when comparing AIS patients with stroke mimics (97.4% sensitive, 90.0% specific). The transcriptional pattern identified in this study shows strong diagnostic potential, and warrants further evaluation to determine its true clinical efficacy. PMID:29263821
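As a sketch of the classification half of the GA/kNN approach described above, the snippet below evaluates a k-nearest-neighbours classifier on a hypothetical expression matrix for a fixed 10-gene panel; in the actual method a genetic algorithm searches over gene subsets, which is omitted here, and the data and parameter choices are placeholders.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical expression matrix: rows = subjects, columns = a 10-gene panel
rng = np.random.default_rng(0)
X = rng.normal(size=(63, 10))            # e.g. 39 AIS patients + 24 controls
y = np.array([1] * 39 + [0] * 24)        # 1 = AIS, 0 = asymptomatic control

# kNN classifier scored by cross-validation; a genetic algorithm would search
# for the gene subset that maximises this score.
knn = KNeighborsClassifier(n_neighbors=5)
print(cross_val_score(knn, X, y, cv=5).mean())
```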
Earth Processes: Reading the Isotopic Code
NASA Astrophysics Data System (ADS)
Basu, Asish; Hart, Stan
Publication of this monograph will coincide, to a precision of a few per mil, with the centenary of Henri Becquerel's discovery of "radiations actives" (C. R. Acad. Sci., Feb. 24, 1896). In 1896 the Earth was only 40 million years old according to Lord Kelvin. Eleven years later, Boltwood had pushed the Earth's age past 2000 million years, based on the first U/Pb chemical dating results. In exciting progression came discovery of isotopes by J. J. Thomson in 1912, invention of the mass spectrometer by Dempster (1918) and Aston (1919), the first measurement of the isotopic composition of Pb (Aston, 1927) and the final approach, using Pb-Pb isotopic dating, to the correct age of the Earth: close—2.9 Ga (Gerling, 1942), closer—3.0 Ga (Holmes, 1949) and closest—4.50 Ga (Patterson, Tilton and Inghram, 1953).
Shestakova, M V
2011-01-01
A revolution in knowledge about the structure and the physiological and pathophysiological effects of the renin-angiotensin-aldosterone system (RAAS) took place recently when it was discovered that local synthesis of all the RAAS components occurs in target organs and their tissues (the heart, kidneys, vessels, and brain). It was found that besides the classic RAAS, which acts via activation of angiotensin II (Ang-II) and its receptors, there is an alternative RAAS opposed to the atherogenic potential of Ang-II. Renin and prorenin have been shown to have both enzymatic and hormonal activities. A wider understanding has emerged of the extrarenal effects of aldosterone and its non-genomic activity. These discoveries open new opportunities for pharmacological regulation of RAAS activity, enabling more effective correction of the overactivity of this system in organs at risk of negative Ang-II impact.
Authorship Discovery in Blogs Using Bayesian Classification with Corrective Scaling
2008-06-01
[List-of-figures residue: 2.3 W. Fucks' Diagram of n-Syllable Word Frequencies; 3.1 Confusion Matrix for All Test Documents.] ... of the books which scholars believed he had. Wilhelm Fucks discriminated between authors using the average number of syllables per word and average ... distance between equal-syllabled words [8]. Fucks, too, concluded that a study such as his reveals a "possibility of a quantitative classification
NASA Astrophysics Data System (ADS)
Sannino, Francesco
I discuss the impact of the discovery of a Higgs-like state on composite dynamics, starting by critically examining the reasons in favour of either an elementary or composite nature of this state. Accepting the standard model interpretation, I re-address the standard model vacuum stability within a Weyl-consistent computation. I will carefully examine the fundamental reasons why what has been discovered might not be the standard model Higgs. Dynamical electroweak breaking naturally addresses a number of the fundamental issues unsolved by the standard model interpretation. However, this paradigm has been challenged by the discovery of a not-so-heavy Higgs-like state. I will therefore review the recent discovery that the standard model top-induced radiative corrections naturally reduce the intrinsic non-perturbative mass of the composite Higgs state towards the desired experimental value. Not only do we have a natural and testable working framework, but we have also suggested specific gauge theories that can realise, at the fundamental level, these minimal models of dynamical electroweak symmetry breaking. These strongly coupled gauge theories are now being heavily investigated via first-principles lattice simulations with encouraging results. The new findings show that the recent naive claims made about new strong dynamics at the electroweak scale being disfavoured by the discovery of a not-so-heavy composite Higgs are unwarranted. I will then introduce the more speculative idea of extreme compositeness according to which not only the Higgs sector of the standard model is composite but also quarks and leptons, and provide a toy example in the form of gauge-gauge duality.
Janky, Rekin's; van Helden, Jacques
2008-01-23
The detection of conserved motifs in promoters of orthologous genes (phylogenetic footprints) has become a common strategy to predict cis-acting regulatory elements. Several software tools are routinely used to raise hypotheses about regulation. However, these tools are generally used as black boxes, with default parameters. A systematic evaluation of optimal parameters for a footprint discovery strategy can bring a sizeable improvement to the predictions. We evaluate the performances of a footprint discovery approach based on the detection of over-represented spaced motifs. This method is particularly suitable for (but not restricted to) Bacteria, since such motifs are typically bound by factors containing a Helix-Turn-Helix domain. We evaluated footprint discovery in 368 Escherichia coli K12 genes with annotated sites, under 40 different combinations of parameters (taxonomical level, background model, organism-specific filtering, operon inference). Motifs are assessed both at the levels of correctness and significance. We further report a detailed analysis of 181 bacterial orthologs of the LexA repressor. Distinct motifs are detected at various taxonomical levels, including the 7 previously characterized taxon-specific motifs. In addition, we highlight a significantly stronger conservation of half-motifs in Actinobacteria, relative to Firmicutes, suggesting an intermediate state in specificity switching between the two Gram-positive phyla, and thereby revealing the on-going evolution of LexA auto-regulation. The footprint discovery method proposed here shows excellent results with E. coli and can readily be extended to predict cis-acting regulatory signals and propose testable hypotheses in bacterial genomes for which nothing is known about regulation.
Brettschneider, Anna-Kristin; Schaffrath Rosario, Angelika; Kuhnert, Ronny; Schmidt, Steffen; Wiegand, Susanna; Ellert, Ute; Kurth, Bärbel-Maria
2015-11-06
The nationwide "German Health Interview and Examination Survey for Children and Adolescents" (KiGGS), conducted in 2003-2006, showed an increase in the prevalence rates of overweight and obesity compared to the early 1990s, indicating the need for regularly monitoring. Recently, a follow-up-KiGGS Wave 1 (2009-2012)-was carried out as a telephone-based survey, providing self-reported height and weight. Since self-reports lead to a bias in prevalence rates of weight status, a correction is needed. The aim of the present study is to obtain updated prevalence rates for overweight and obesity for 11- to 17-year olds living in Germany after correction for bias in self-reports. In KiGGS Wave 1, self-reported height and weight were collected from 4948 adolescents during a telephone interview. Participants were also asked about their body perception. From a subsample of KiGGS Wave 1 participants, measurements for height and weight were collected in a physical examination. In order to correct prevalence rates derived from self-reports, weight status categories based on self-reported and measured height and weight were used to estimate a correction formula according to an established procedure under consideration of body perception. The correction procedure was applied and corrected rates were estimated. The corrected prevalence of overweight, including obesity, derived from KiGGS Wave 1, showed that the rate has not further increased compared to the KiGGS baseline survey (18.9 % vs. 18.8 % based on the German reference). The rates of overweight still remain at a high level. The results of KiGGS Wave 1 emphasise the significance of this health issue and the need for prevention of overweight and obesity in children and adolescents.
Gene correction in patient-specific iPSCs for therapy development and disease modeling
Jang, Yoon-Young
2018-01-01
The discovery that mature cells can be reprogrammed to become pluripotent and the development of engineered endonucleases for enhancing genome editing are two of the most exciting and impactful technology advances in modern medicine and science. Human pluripotent stem cells have the potential to establish new model systems for studying human developmental biology and disease mechanisms. Gene correction in patient-specific iPSCs can also provide a novel source for autologous cell therapy. Although historically challenging, precise genome editing in human iPSCs is becoming more feasible with the development of new genome-editing tools, including ZFNs, TALENs, and CRISPR. iPSCs derived from patients of a variety of diseases have been edited to correct disease-associated mutations and to generate isogenic cell lines. After directed differentiation, many of the corrected iPSCs showed restored functionality and demonstrated their potential in cell replacement therapy. Genome-wide analyses of gene-corrected iPSCs have collectively demonstrated a high fidelity of the engineered endonucleases. Remaining challenges in clinical translation of these technologies include maintaining genome integrity of the iPSC clones and the differentiated cells. Given the rapid advances in genome-editing technologies, gene correction is no longer the bottleneck in developing iPSC-based gene and cell therapies; generating functional and transplantable cell types from iPSCs remains the biggest challenge needing to be addressed by the research field. PMID:27256364
Ianakiev, Kiril D [Los Alamos, NM; Hsue, Sin Tao [Santa Fe, NM; Browne, Michael C [Los Alamos, NM; Audia, Jeffrey M [Abiquiu, NM
2006-07-25
The present invention includes an apparatus and corresponding method for temperature correction and count rate expansion of inorganic scintillation detectors. A temperature sensor is attached to an inorganic scintillation detector. The inorganic scintillation detector, due to interaction with incident radiation, creates light pulse signals. A photoreceiver processes the light pulse signals into current signals. Temperature correction circuitry uses a fast light component signal, a slow light component signal, and the temperature signal from the temperature sensor to correct the inorganic scintillation detector signal output and expand the count rate.
77 FR 2910 - Schedule for Rating Disabilities; Evaluation of Scars; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-20
...; Evaluation of Scars; Correction AGENCY: Department of Veterans Affairs. ACTION: Final rule; correction... that addresses the Skin, so that it more clearly reflected VA's policies concerning the evaluation of... Rating Disabilities that addresses the Skin, 38 CFR 4.118, by revising the criteria for the evaluation of...
Clinical decision support alert malfunctions: analysis and empirically derived taxonomy.
Wright, Adam; Ai, Angela; Ash, Joan; Wiesen, Jane F; Hickman, Thu-Trang T; Aaron, Skye; McEvoy, Dustin; Borkowsky, Shane; Dissanayake, Pavithra I; Embi, Peter; Galanter, William; Harper, Jeremy; Kassakian, Steve Z; Ramoni, Rachel; Schreiber, Richard; Sirajuddin, Anwar; Bates, David W; Sittig, Dean F
2018-05-01
To develop an empirically derived taxonomy of clinical decision support (CDS) alert malfunctions. We identified CDS alert malfunctions using a mix of qualitative and quantitative methods: (1) site visits with interviews of chief medical informatics officers, CDS developers, clinical leaders, and CDS end users; (2) surveys of chief medical informatics officers; (3) analysis of CDS firing rates; and (4) analysis of CDS overrides. We used a multi-round, manual, iterative card sort to develop a multi-axial, empirically derived taxonomy of CDS malfunctions. We analyzed 68 CDS alert malfunction cases from 14 sites across the United States with diverse electronic health record systems. Four primary axes emerged: the cause of the malfunction, its mode of discovery, when it began, and how it affected rule firing. Build errors, conceptualization errors, and the introduction of new concepts or terms were the most frequent causes. User reports were the predominant mode of discovery. Many malfunctions within our database caused rules to fire for patients for whom they should not have (false positives), but the reverse (false negatives) was also common. Across organizations and electronic health record systems, similar malfunction patterns recurred. Challenges included updates to code sets and values, software issues at the time of system upgrades, difficulties with migration of CDS content between computing environments, and the challenge of correctly conceptualizing and building CDS. CDS alert malfunctions are frequent. The empirically derived taxonomy formalizes the common recurring issues that cause these malfunctions, helping CDS developers anticipate and prevent CDS malfunctions before they occur or detect and resolve them expediently.
Lockhart, M.; Henzlova, D.; Croft, S.; ...
2017-09-20
Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in reference Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which as well have only recently been formulated. Here, we discuss and present the experimental evaluation of practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data is compared to traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for broad range of count rates available in practical applications.
A novel algorithm for validating peptide identification from a shotgun proteomics search engine.
Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J
2013-03-01
Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC-MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide-spectrum matches are then used to infer a list of identified proteins in the original sample. However, the search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.
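The De-Noise implementation is not described in enough detail in the abstract to reproduce; as a schematic of the middle step only, the sketch below trains an SVM decision function to separate correct from incorrect peptide-spectrum matches using two SEQUEST-style score features. The feature values, labels, and kernel choice are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical peptide-spectrum matches: two score-like features per match
# (e.g. XCorr-like and deltaCn-like); label 1 = correct, 0 = incorrect.
rng = np.random.default_rng(2)
X_train = np.vstack([rng.normal(3.0, 0.5, (200, 2)),    # "correct" matches
                     rng.normal(1.5, 0.5, (200, 2))])   # "incorrect" matches
y_train = np.array([1] * 200 + [0] * 200)

svm = SVC(kernel="linear").fit(X_train, y_train)

# Score new matches; a cutoff on this decision value can then be tuned to a
# target false discovery rate.
X_new = rng.normal(2.2, 0.8, (10, 2))
print(svm.decision_function(X_new))
```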
NASA Astrophysics Data System (ADS)
Fischer, John Arthur
For 70 years, the physics community operated under the assumption that the expansion of the Universe must be slowing due to gravitational attraction. Then, in 1998, two teams of scientists used Type Ia supernovae to discover that cosmic expansion was actually accelerating due to a mysterious "dark energy." As a result, Type Ia supernovae have become the most cosmologically important transient events in the last 20 years, with a large amount of effort going into their discovery as well as understanding their progenitor systems. One such probe for understanding Type Ia supernovae is to use rate measurements to determine the time delay between star formation and supernova explosion. For the last 30 years, the discovery of individual Type Ia supernova events has been accelerating. However, those discoveries were happening in time-domain surveys that probed only a portion of the redshift range where expansion was impacted by dark energy. The Dark Energy Survey (DES) is the first project in the "next generation" of time-domain surveys that will discover thousands of Type Ia supernovae out to a redshift of 1.2 (where dark energy becomes subdominant), and DES will have better systematic uncertainties over that redshift range than any survey to date. In order to gauge the discovery effectiveness of this survey, we will use the first season's 469 photometrically typed supernovae and compare them with simulations in order to update the full survey Type Ia projections from 3500 to 2250. We will then use 165 of the 469 supernovae out to a redshift of 0.6 to measure the supernova rate both as a function of comoving volume and of the star formation rate as it evolves with redshift. We find the most statistically significant prompt fraction of any survey to date (with a 3.9σ prompt fraction detection). We will also reinforce the already existing tension in the measurement of the delayed fraction between high (z > 1.2) and low-redshift rate measurements, where we find no significant evidence of a delayed fraction at all in our photometric sample.
Emilio Segrè and Spontaneous Fission
fissioned instead. The discovery of fission led in turn to the discovery of the chain reaction. A high spontaneous fission rate could drive the material apart before it had a chance to undergo an efficient chain reaction; if a similar rate was found in plutonium, it might rule out the use of that element as
Pacini, Clare; Ajioka, James W; Micklem, Gos
2017-04-12
Correlation matrices are important in inferring relationships and networks between regulatory or signalling elements in biological systems. With currently available technology, sample sizes for experiments are typically small, meaning that these correlations can be difficult to estimate. At a genome-wide scale, estimation of correlation matrices can also be computationally demanding. We develop an empirical Bayes approach to improve covariance estimates for gene expression, where we assume the covariance matrix takes a block diagonal form. Our method shows lower false discovery rates than existing methods on simulated data. Applied to a real data set from Bacillus subtilis, we demonstrate its ability to detect known regulatory units and interactions between them. We demonstrate that, compared to existing methods, our method is able to find significant covariances and also to control false discovery rates, even when the sample size is small (n=10). The method can be used to find potential regulatory networks, and it may also be used as a pre-processing step for methods that calculate, for example, partial correlations, so enabling the inference of the causal and hierarchical structure of the networks.
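To convey the flavour of the approach, the sketch below shrinks a sample covariance matrix toward a block-diagonal target defined by assumed gene modules; the fixed shrinkage weight and the block assignment are arbitrary placeholders, not the paper's empirical Bayes estimator.

```python
import numpy as np

def block_diagonal_shrinkage(X, blocks, weight=0.5):
    """Shrink the sample covariance of X (samples x genes) toward a
    block-diagonal target built from the index groups in `blocks`."""
    S = np.cov(X, rowvar=False)
    target = np.zeros_like(S)
    for idx in blocks:
        target[np.ix_(idx, idx)] = S[np.ix_(idx, idx)]   # keep within-block covariance
    return weight * target + (1.0 - weight) * S

# Illustrative use: n = 10 samples, 6 genes split into two assumed modules
rng = np.random.default_rng(3)
X = rng.normal(size=(10, 6))
cov_hat = block_diagonal_shrinkage(X, blocks=[np.arange(0, 3), np.arange(3, 6)])
print(cov_hat)
```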
Teng, Rui; Leibnitz, Kenji; Miura, Ryu
2013-01-01
An essential application of wireless sensor networks is to successfully respond to user queries. Query packet losses occur in the query dissemination due to wireless communication problems such as interference, multipath fading, packet collisions, etc. The losses of query messages at sensor nodes result in the failure of sensor nodes reporting the requested data. Hence, the reliable and successful dissemination of query messages to sensor nodes is a non-trivial problem. The target of this paper is to enable highly successful query delivery to sensor nodes by localized and energy-efficient discovery, and recovery of query losses. We adopt local and collective cooperation among sensor nodes to increase the success rate of distributed discoveries and recoveries. To enable the scalability in the operations of discoveries and recoveries, we employ a distributed name resolution mechanism at each sensor node to allow sensor nodes to self-detect the correlated queries and query losses, and then efficiently locally respond to the query losses. We prove that the collective discovery of query losses has a high impact on the success of query dissemination and reveal that scalability can be achieved by using the proposed approach. We further study the novel features of the cooperation and competition in the collective recovery at PHY and MAC layers, and show that the appropriate number of detectors can achieve optimal successful recovery rate. We evaluate the proposed approach with both mathematical analyses and computer simulations. The proposed approach enables a high rate of successful delivery of query messages and it results in short route lengths to recover from query losses. The proposed approach is scalable and operates in a fully distributed manner. PMID:23748172
NASA Technical Reports Server (NTRS)
Richards, W. Lance
1996-01-01
Significant strain-gage errors may exist in measurements acquired in transient-temperature environments if conventional correction methods are applied. As heating or cooling rates increase, temperature gradients between the strain-gage sensor and substrate surface increase proportionally. These temperature gradients introduce strain-measurement errors that are currently neglected in both conventional strain-correction theory and practice. Therefore, the conventional correction theory has been modified to account for these errors. A new experimental method has been developed to correct strain-gage measurements acquired in environments experiencing significant temperature transients. The new correction technique has been demonstrated through a series of tests in which strain measurements were acquired for temperature-rise rates ranging from 1 to greater than 100 degrees F/sec. Strain-gage data from these tests have been corrected with both the new and conventional methods and then compared with an analysis. Results show that, for temperature-rise rates greater than 10 degrees F/sec, the strain measurements corrected with the conventional technique produced strain errors that deviated from analysis by as much as 45 percent, whereas results corrected with the new technique were in good agreement with analytical results.
Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D
2013-01-01
Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method fold change criteria are problematic, and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but it is also impervious to fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across the Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rates controls between each approach are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offers higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.
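A minimal sketch of the resampling ingredient described above: estimating the false discovery rate at a chosen statistic threshold from permutation (null) statistics. The empirical Bayes moderation of the statistics is omitted, and the simulated data are purely illustrative.

```python
import numpy as np

def permutation_fdr(stats, null_stats, threshold):
    """Estimate the FDR at |statistic| >= threshold from pooled permutation nulls."""
    observed = np.sum(np.abs(stats) >= threshold)
    expected_false = np.mean(np.abs(null_stats) >= threshold) * stats.size
    return expected_false / max(observed, 1)

rng = np.random.default_rng(4)
stats = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])   # 100 "true" signals
null_stats = rng.normal(0, 1, (200, 1000)).ravel()                       # pooled permutation statistics

print(permutation_fdr(stats, null_stats, threshold=2.5))
```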
77 FR 47582 - Great Lakes Pilotage Rates-2013 Annual Review and Adjust; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-09
... DEPARTMENT OF HOMELAND SECURITY Coast Guard 46 CFR Part 401 [Docket No. USCG-2012-0409] RIN 1625-AB89 Great Lakes Pilotage Rates--2013 Annual Review and Adjust; Correction AGENCY: Coast Guard, DHS. ACTION: Notice of proposed rulemaking; correction. SUMMARY: The Coast Guard published a Notice of...
Self-Correcting Electronically-Scanned Pressure Sensor
NASA Technical Reports Server (NTRS)
Gross, C.; Basta, T.
1982-01-01
High-data-rate sensor automatically corrects for temperature variations. Multichannel, self-correcting pressure sensor can be used in wind tunnels, aircraft, process controllers and automobiles. Offers data rates approaching 100,000 measurements per second with inaccuracies due to temperature shifts held below 0.25 percent (nominal) of full scale over a temperature span of 55 degrees C.
Nonintrusive Flow Rate Determination Through Space Shuttle Water Coolant Loop Floodlight Coldplate
NASA Technical Reports Server (NTRS)
Werlink, Rudolph; Johnson, Harry; Margasahayam, Ravi
1997-01-01
Using a Nonintrusive Flow Measurement System (NFMS), the flow rates through the Space Shuttle water coolant coldplate were determined. The objective of this in situ flow measurement was to prove or disprove that a potential blockage inside the affected coldplate had contributed to a reduced flow rate and the subsequent ice formation on the Space Shuttle Discovery. Flow through the coldplate was originally calculated to be 35 to 38 pounds per hour. This application of ultrasonic technology advanced the envelope of flow measurements through use of 1/4-inch-diameter tubing, which resulted in extremely low flow velocities (5 to 30 pounds per hour). In situ measurements on the orbiters Discovery and Atlantis indicated that both vehicles, on average, experienced similar flow rates through the coldplate (around 25 pounds per hour), but lower rates than the designed flow. Based on the noninvasive checks, further invasive troubleshooting was eliminated. Permanent monitoring using the NFMS was recommended.
Discovery and Classification in Astronomy
NASA Astrophysics Data System (ADS)
Dick, Steven J.
2012-01-01
Three decades after Martin Harwit's pioneering Cosmic Discovery (1981), and following on the recent IAU Symposium "Accelerating the Rate of Astronomical Discovery," we have revisited the problem of discovery in astronomy, emphasizing new classes of objects. 82 such classes have been identified and analyzed, including 22 in the realm of the planets, 36 in the realm of the stars, and 24 in the realm of the galaxies. We find an extended structure of discovery, consisting of detection, interpretation and understanding, each with its own nuances and a microstructure including conceptual, technological and social roles. This is true with a remarkable degree of consistency over the last 400 years of telescopic astronomy, ranging from Galileo's discovery of satellites, planetary rings and star clusters, to the discovery of quasars and pulsars. Telescopes have served as "engines of discovery" in several ways, ranging from telescope size and sensitivity (planetary nebulae and spiral galaxies), to specialized detectors (TNOs) and the opening of the electromagnetic spectrum for astronomy (pulsars, pulsar planets, and most active galaxies). A few classes (radiation belts, the solar wind and cosmic rays) were initially discovered without the telescope. Classification also plays an important role in discovery. While it might seem that classification marks the end of discovery, or a post-discovery phase, in fact it often marks the beginning, even a pre-discovery phase. Nowhere is this more clearly seen than in the classification of stellar spectra, long before dwarfs, giants and supergiants were known, or their evolutionary sequence recognized. Classification may also be part of a post-discovery phase, as in the MK system of stellar classification, constructed after the discovery of stellar luminosity classes. Some classes are declared rather than discovered, as in the case of gas and ice giant planets, and, infamously, Pluto as a dwarf planet.
How molecular profiling could revolutionize drug discovery.
Stoughton, Roland B; Friend, Stephen H
2005-04-01
Information from genomic, proteomic and metabolomic measurements has already benefited target discovery and validation, assessment of efficacy and toxicity of compounds, identification of disease subgroups and the prediction of responses of individual patients. Greater benefits can be expected from the application of these technologies on a significantly larger scale; by simultaneously collecting diverse measurements from the same subjects or cell cultures; by exploiting the steadily improving quantitative accuracy of the technologies; and by interpreting the emerging data in the context of underlying biological models of increasing sophistication. The benefits of applying molecular profiling to drug discovery and development will include much lower failure rates at all stages of the drug development pipeline, faster progression from discovery through to clinical trials and more successful therapies for patient subgroups. Upheavals in existing organizational structures in the current 'conveyor belt' models of drug discovery might be required to take full advantage of these methods.
Can Functional Magnetic Resonance Imaging Improve Success Rates in CNS Drug Discovery?
Borsook, David; Hargreaves, Richard; Becerra, Lino
2011-01-01
Introduction The bar for developing new treatments for CNS disease is getting progressively higher and fewer novel mechanisms are being discovered, validated and developed. The high costs of drug discovery necessitate early decisions to ensure the best molecules and hypotheses are tested in expensive late stage clinical trials. The discovery of brain imaging biomarkers that can bridge preclinical to clinical CNS drug discovery and provide a ‘language of translation’ affords the opportunity to improve the objectivity of decision-making. Areas Covered This review discusses the benefits, challenges and potential issues of using a science based biomarker strategy to change the paradigm of CNS drug development and increase success rates in the discovery of new medicines. The authors have summarized PubMed and Google Scholar based publication searches to identify recent advances in functional, structural and chemical brain imaging and have discussed how these techniques may be useful in defining CNS disease state and drug effects during drug development. Expert opinion The use of novel brain imaging biomarkers holds the bold promise of making neuroscience drug discovery smarter by increasing the objectivity of decision making thereby improving the probability of success of identifying useful drugs to treat CNS diseases. Functional imaging holds the promise to: (1) define pharmacodynamic markers as an index of target engagement (2) improve translational medicine paradigms to predict efficacy; (3) evaluate CNS efficacy and safety based on brain activation; (4) determine brain activity drug dose-response relationships and (5) provide an objective evaluation of symptom response and disease modification. PMID:21765857
76 FR 39006 - Medicare Program; Hospital Inpatient Value-Based Purchasing Program; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-05
... Pneumonia (PN) 30-Day .8818 Mortality Rate. 7. On page 26516, Table 7 is corrected to read as follows... Day Mortality Rate. MORT-30 PN Pneumonia (PN) 30-Day .9021 Mortality Rate. 8. On page 26527, in the...
Metabolic alterations in patients with Parkinson disease and visual hallucinations.
Boecker, Henning; Ceballos-Baumann, Andres O; Volk, Dominik; Conrad, Bastian; Forstl, Hans; Haussermann, Peter
2007-07-01
Visual hallucinations (VHs) occur frequently in advanced stages of Parkinson disease (PD). Which brain regions are affected in PD with VH is not well understood. To characterize the pattern of affected brain regions in PD with VH and to determine whether functional changes in PD with VH occur preferentially in visual association areas, as is suggested by the complex clinical symptomatology. Positron emission tomography measurements using fluorodeoxyglucose F 18. Between-group statistical analysis, accounting for the variance related to disease stage. University hospital. Patients Eight patients with PD and VH and 11 patients with PD without VH were analyzed. The presence of VH during the month before positron emission tomography was rated using the Neuropsychiatric Inventory subscale for VH (PD and VH, 4.63; PD without VH, 0.00; P < .002). Parkinson disease with VH, compared with PD without VH, was characterized by reduction in the regional cerebral metabolic rate for glucose consumption (P < .05, corrected for false discovery rate) in occipitotemporoparietal regions, sparing the occipital pole. No significant increase in regional glucose metabolism was detected in patients with PD and VH. The pattern of resting-state metabolic changes in regions of the dorsal and ventral visual streams, but not in primary visual cortex, in patients with PD and VH, is compatible with the functional roles of visual association areas in higher-order visual processing. These findings may help to further elucidate the functional mechanisms underlying VH in PD.
De Benedetti, Pier G; Fanelli, Francesca
2018-03-21
Simple comparative correlation analyses and quantitative structure-kinetics relationship (QSKR) models highlight the interplay of kinetic rates and binding affinity as an essential feature in drug design and discovery. The choice of the molecular series, and their structural variations, used in QSKR modeling is fundamental to understanding the mechanistic implications of ligand and/or drug-target binding and/or unbinding processes. Here, we discuss the implications of linear correlations between kinetic rates and binding affinity constants and the relevance of the computational approaches to QSKR modeling. Copyright © 2018 Elsevier Ltd. All rights reserved.
Continuous quantum error correction for non-Markovian decoherence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089
2007-08-15
We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro
We propose a safeguard procedure for statistical inference that provides universal protection against mismodeling of the background. The method quantifies and incorporates the signal-like residuals of the background model into the likelihood function, using information available in a calibration dataset. This prevents possible false discovery claims that may arise through unknown mismodeling, and corrects the bias in limit setting created by overestimated or underestimated background. We demonstrate how the method removes the bias created by an incomplete background model using three realistic case studies.
[The history of correction of refractive errors: spectacles].
Wojtyczkak, E
2000-01-01
An historical analysis of discoveries related to the treatment of defects of vision is described. Opinions on visual processes, optics and methods of treating myopia, hypermetropia and astigmatism from ancient times through the Middle Ages, the renaissance and the following centuries are presented in particular. The beginning of the usage of glasses is discussed. Examples of the techniques which have been used to improve the subjective and objective methods of measuring refractive errors are also presented.
2016-01-01
Background As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. Objective To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. Methods A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. Results The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. Conclusions A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. PMID:27986644
NASA Astrophysics Data System (ADS)
Dou, Jiangpei; Ren, Deqing; Zhang, Xi; Zhu, Yongtian; Zhao, Gang; Wu, Zhen; Chen, Rui; Liu, Chengchao; Yang, Feng; Yang, Chao
2014-08-01
Almost all high-contrast imaging coronagraphs proposed until now are based on passive coronagraph optical components. Recently, Ren and Zhu proposed for the first time a coronagraph that integrates a liquid crystal array (LCA) for active pupil apodizing and a deformable mirror (DM) for phase corrections. Here, for demonstration purposes, we present the initial test result of a coronagraphic system that is based on two liquid crystal spatial light modulators (SLMs). In the system, one SLM serves as the active pupil apodizer and amplitude corrector to suppress the diffracted light; the other SLM is used to correct the speckle noise caused by wave-front distortions. In this way, both amplitude and phase errors can be actively and efficiently compensated. In the test, we use the stochastic parallel gradient descent (SPGD) algorithm to control the two SLMs; the algorithm is based on point spread function (PSF) sensing and evaluation and is optimized for maximum contrast in the discovery area. Finally, the system demonstrated a contrast of 10^-6 at an inner working angular distance of ~6.2 λ/D, which makes this a promising technique for the direct imaging of young exoplanets on ground-based telescopes.
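The SPGD control loop mentioned above can be illustrated with a minimal sketch. The metric function, perturbation amplitude, and gain below are hypothetical placeholders chosen for a toy quadratic problem; this is a generic SPGD loop, not the authors' control code.

```python
import numpy as np

def spgd_optimize(measure_metric, n_actuators, gain=2.0, delta=0.05, n_iter=1000, rng=None):
    """Minimal stochastic parallel gradient descent (SPGD) loop.

    measure_metric(u) is assumed to return a scalar figure of merit for a
    control vector u applied to the SLM (e.g. negative mean speckle intensity
    in the discovery area); higher is better. Gain and delta are toy values.
    """
    rng = np.random.default_rng(rng)
    u = np.zeros(n_actuators)                     # current control vector
    for _ in range(n_iter):
        # Random +/- perturbation applied to all actuators in parallel
        du = delta * rng.choice([-1.0, 1.0], size=n_actuators)
        j_plus = measure_metric(u + du)
        j_minus = measure_metric(u - du)
        # Gradient estimate from the two-sided metric difference
        u = u + gain * (j_plus - j_minus) * du
    return u

# Toy usage: maximize -||u - target||^2, i.e. drive u toward a hidden target
target = np.linspace(-0.1, 0.1, 32)
metric = lambda u: -np.sum((u - target) ** 2)
u_opt = spgd_optimize(metric, n_actuators=32)
print(np.allclose(u_opt, target, atol=0.02))      # True for these settings
```

In a real system the metric would be measured from the camera after each perturbation, so the same loop works with noisy evaluations; the gain and perturbation size then trade convergence speed against noise sensitivity.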
Jain, Ram B
2017-07-01
Prevalence of smoking is needed to estimate the need for future public health resources. To compute and compare smoking prevalence rates by using self-reported smoking statuses, two serum cotinine (SCOT) based biomarker methods, and one urinary 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol (NNAL) based biomarker method. These estimates were then used to develop correction factors applicable to self-reported prevalences to arrive at corrected smoking prevalence rates. Data from the National Health and Nutrition Examination Survey (NHANES) for 2007-2012 for those aged ≥20 years (N = 16826) were used. The self-reported prevalence rate for the total population was 21.6% when computed as the weighted number of self-reported smokers divided by the weighted number of all participants, and 24% when computed as the weighted number of self-reported smokers divided by the weighted number of self-reported smokers and nonsmokers. The corrected prevalence rate was found to be 25.8%. A 1% underestimate in smoking prevalence is equivalent to failing to identify 2.2 million smokers in the US in a given year. This underestimation, if not corrected, could lead to a serious gap between the public health services available and those needed to provide adequate preventive and corrective treatment to smokers.
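The arithmetic behind such correction factors can be made concrete with a small worked sketch. All weights and the adult population size below are invented round numbers, not NHANES values; only the 21.6%, 25.8%, and 2.2 million figures echo the abstract.

```python
# Toy illustration of weighted prevalence and a biomarker-based correction
# factor; the summed weights are invented, not taken from NHANES.
weights_smokers = 21.6e6             # summed survey weights of self-reported smokers
weights_all = 100.0e6                # summed survey weights of all participants
weights_known_status = 90.0e6        # excludes participants with missing smoking status

prev_total = weights_smokers / weights_all                 # 21.6% of everyone
prev_known_status = weights_smokers / weights_known_status # 24.0% of those with known status

# A biomarker-derived correction factor scales the self-reported prevalence
# upward to account for smokers who do not report smoking.
correction_factor = 25.8 / 21.6
prev_corrected = prev_total * correction_factor            # 25.8%

# A 1% (absolute) underestimate corresponds to roughly 2.2 million adults
# if the adult reference population is ~220 million (assumed here).
missed_per_percent = 0.01 * 220e6
print(f"{prev_total:.1%} {prev_known_status:.1%} {prev_corrected:.1%} {missed_per_percent:.0f}")
```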
Children acquire the later-greater principle after the cardinal principle
Le Corre, Mathieu
2014-01-01
Many have proposed that the acquisition of the cardinal principle is a result of the discovery of the numerical significance of the order of the number words in the count list. However, this need not be the case. Indeed, the cardinal principle does not state anything about the numerical significance of the order of the number words. It only states that the last word of a correct count denotes the numerosity of the counted set. Here we test whether the acquisition of the cardinal principle involves the discovery of the later-greater principle – i.e., that the order of the number words corresponds to the relative size of the numerosities they denote. Specifically, we tested knowledge of verbal numerical comparisons (e.g., Is “ten” more than “six”?) in children who had recently learned the cardinal principle. We find that these children can compare number words between “six” and “ten” only if they have mapped them onto non-verbal representations of numerosity. We suggest that this means that the acquisition of the cardinal principle does not involve the discovery of the correspondence between the order of the number words and the relative size of the numerosities they denote. PMID:24372336
The Unseen Companion of HD 114762
NASA Astrophysics Data System (ADS)
Latham, David W.
2014-01-01
I have told the story of the discovery of the unseen companion of HD 114762 (Latham et al. 1989, Nature, 339, 38-40) in a recent publication (Latham 2012, New Astronomy Reviews 56, 16-18). The discovery was enabled by a happy combination of some thinking outside the box by Tsevi Mazeh at Tel Aviv University and the development of new technology for measuring stellar spectra at the Harvard-Smithsonian Center for Astrophysics. Tsevi's unconventional idea was that giant exoplanets might be found much closer to their host stars than Jupiter and Saturn are to the Sun, well inside the snow line. Our instrument was a high-resolution echelle spectrograph optimized for measuring radial velocities of stars similar to the Sun. The key technological developments were an intensified Reticon photon-counting detector under computer control combined with sophisticated analysis of the digital spectra. The detector signal-processing electronics eliminated persistence, which had plagued other intensified systems. This allowed bright Th-Ar calibration exposures before and after every stellar observation, which in turn enabled careful correction for spectrograph drifts. We built three of these systems for telescopes in Massachusetts and Arizona and christened them the "CfA Digital Speedometers". The discovery of HD 114762-b was serendipitous, but not accidental.
Children acquire the later-greater principle after the cardinal principle.
Le Corre, Mathieu
2014-06-01
Many have proposed that the acquisition of the cardinal principle (CP) is a result of the discovery of the numerical significance of the order of the number words in the count list. However, this need not be the case. Indeed, the CP does not state anything about the numerical significance of the order of the number words. It only states that the last word of a correct count denotes the numerosity of the counted set. Here, we test whether the acquisition of the CP involves the discovery of the later-greater principle - that is, that the order of the number words corresponds to the relative size of the numerosities they denote. Specifically, we tested knowledge of verbal numerical comparisons (e.g., Is 'ten' more than 'six'?) in children who had recently learned the CP. We find that these children can compare number words between 'six' and 'ten' only if they have mapped them onto non-verbal representations of numerosity. We suggest that this means that the acquisition of the CP does not involve the discovery of the correspondence between the order of the number words and the relative size of the numerosities they denote. © 2013 The British Psychological Society.
Broca’s area network in language function: a pooling-data connectivity study
Bernal, Byron; Ardila, Alfredo; Rosselli, Monica
2015-01-01
Background and Objective: Modern neuroimaging developments have demonstrated that cognitive functions correlate with brain networks rather than specific areas. The purpose of this paper was to analyze the connectivity of Broca’s area based on language tasks. Methods: A connectivity modeling study was performed by pooling data of Broca’s activation in language tasks. Fifty-seven papers that included 883 subjects in 84 experiments were analyzed. Analysis of Likelihood Estimates of pooled data was utilized to generate the map; thresholds at p < 0.01 were corrected for multiple comparisons and false discovery rate. Resulting images were co-registered into MNI standard space. Results: A network consisting of 16 clusters of activation was obtained. Main clusters were located in the frontal operculum, left posterior temporal region, supplementary motor area, and the parietal lobe. Less common clusters were seen in the sub-cortical structures including the left thalamus, left putamen, secondary visual areas, and the right cerebellum. Conclusion: Broca’s area-44-related networks involved in language processing were demonstrated utilizing a pooling-data connectivity study. Significance, interpretation, and limitations of the results are discussed. PMID:26074842
Fischer, Corinne E; Ting, Windsor Kwan-Chun; Millikin, Colleen P; Ismail, Zahinoor; Schweizer, Tom A
2016-01-01
We conducted a neuroimaging analysis to understand the neuroanatomical correlates of gray matter loss in a group of mild cognitive impairment and early Alzheimer's disease patients who developed delusions. With data collected as part of the Alzheimer's Disease Neuroimaging Initiative, we conducted voxel-based morphometry to determine areas of gray matter change in the same Alzheimer's Disease Neuroimaging Initiative participants, before and after they developed delusions. We identified 14 voxel clusters with significant gray matter decrease in patient scans post-delusional onset, correcting for multiple comparisons (false discovery rate, p < 0.05). Major areas of difference included the right and left insulae, left precuneus, the right and left cerebellar culmen, the left superior temporal gyrus, the right posterior cingulate, the right thalamus, and the left parahippocampal gyrus. Although contrary to our initial predictions of enhanced right frontal atrophy, our preliminary work identifies several neuroanatomical areas, including the cerebellum and left posterior hemisphere, which may be involved in delusional development in these patients. Copyright © 2015 John Wiley & Sons, Ltd.
Hypersonic Navier-Stokes Comparisons to Orbiter Flight Data
NASA Technical Reports Server (NTRS)
Candler, Graham V.; Campbell, Charles H.
2010-01-01
During the STS-119 flight of Space Shuttle Discovery, two sets of surface temperature measurements were made. Under the HYTHIRM program, quantitative thermal images of the windward side of the Orbiter were taken. In addition, the Boundary Layer Transition Flight Experiment made thermocouple measurements at discrete locations on the Orbiter windward side. Most of these measurements were made downstream of a surface protuberance designed to trip the boundary layer to turbulent flow. In this paper, we use the US3D computational fluid dynamics code to simulate the Orbiter flow field at conditions corresponding to the STS-119 re-entry. We employ a standard two-temperature, five-species finite-rate model for high-temperature air, and the surface catalysis model of Stewart. This work is similar to the analysis of Wood et al., except that we use a different approach for modeling turbulent flow. We use the one-equation Spalart-Allmaras turbulence model with compressibility corrections and an approach for tripping the boundary layer at discrete locations. In general, the comparison between the simulations and flight data is remarkably good.
Pharmacophore-Map-Pick: A Method to Generate Pharmacophore Models for All Human GPCRs.
Dai, Shao-Xing; Li, Gong-Hua; Gao, Yue-Dong; Huang, Jing-Fei
2016-02-01
GPCR-based drug discovery is hindered by a lack of effective screening methods for most GPCRs that have neither ligands nor high-quality structures. With the aim of identifying lead molecules for these GPCRs, we developed a new method called Pharmacophore-Map-Pick to generate pharmacophore models for all human GPCRs. The model of ADRB2 generated using this method not only predicts the binding mode of ADRB2 ligands correctly but also performs well in virtual screening. Findings also demonstrate that this method is powerful for generating high-quality pharmacophore models. The average enrichment for the pharmacophore models of the 15 targets in different GPCR families reached 15-fold at a 0.5% false-positive rate. Therefore, the pharmacophore models can be applied in virtual screening directly with no requirement for any ligand information or shape constraints. A total of 2386 pharmacophore models for 819 different GPCRs (99% coverage, 819/825) were generated and are available at http://bsb.kiz.ac.cn/GPCRPMD. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
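For readers unfamiliar with the enrichment figure quoted above, one common way of computing fold enrichment at a fixed false-positive rate from ranked screening scores is sketched below. This is a generic ROC-style enrichment calculation on random toy data, not the authors' pipeline or their scores.

```python
import numpy as np

def enrichment_at_fpr(scores_active, scores_decoy, fpr=0.005):
    """Fold enrichment of actives at the score threshold where a given
    fraction of decoys (false positives) would be retrieved."""
    threshold = np.quantile(scores_decoy, 1.0 - fpr)   # decoy score cutoff
    hit_rate_active = np.mean(scores_active >= threshold)
    return hit_rate_active / fpr

rng = np.random.default_rng(0)
actives = rng.normal(2.0, 1.0, 200)      # toy: actives score higher on average
decoys = rng.normal(0.0, 1.0, 20000)
print(f"{enrichment_at_fpr(actives, decoys):.1f}-fold enrichment at 0.5% FPR")
```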
Education, occupation, leisure activities, and brain reserve: a population-based study.
Foubert-Samier, Alexandra; Catheline, Gwenaelle; Amieva, Hélène; Dilharreguy, Bixente; Helmer, Catherine; Allard, Michèle; Dartigues, Jean-François
2012-02-01
The influence of education, occupation, and leisure activities on the passive and active components of reserve capacity remains unclear. We used the voxel-based morphometry (VBM) technique in a population-based sample of 331 nondemented people in order to investigate the relationship between these factors and the cerebral volume (a marker of brain reserve). The results showed a positive and significant association between education, occupation, and leisure activities and cognitive performance on the Isaacs Set Test. Among these factors, only education was significantly associated with a cerebral volume including gray and white matter (p = 0.01). In voxel-based morphometry analyses, the difference in gray matter volume was located in the temporoparietal lobes and in the orbitofrontal lobes bilaterally (p < 0.05, corrected by false discovery rate [FDR]). Although smaller, the education-related difference in white matter volume appeared in areas connected to the education-related difference in gray matter volume. Education, occupational attainment, and leisure activities were found to contribute differently to reserve capacity. Education could play a role in the constitution of cerebral reserve capacity. Copyright © 2012 Elsevier Inc. All rights reserved.
Early prediction of extreme stratospheric polar vortex states based on causal precursors
NASA Astrophysics Data System (ADS)
Kretschmer, Marlene; Runge, Jakob; Coumou, Dim
2017-08-01
Variability in the stratospheric polar vortex (SPV) can influence the tropospheric circulation and thereby winter weather. Early predictions of extreme SPV states are thus important to improve forecasts of winter weather including cold spells. However, dynamical models are usually restricted in lead time because they poorly capture low-frequency processes. Empirical models often suffer from overfitting problems as the relevant physical processes and time lags are often not well understood. Here we introduce a novel empirical prediction method by uniting a response-guided community detection scheme with a causal discovery algorithm. This way, we objectively identify causal precursors of the SPV at subseasonal lead times and find them to be in good agreement with known physical drivers. A linear regression prediction model based on the causal precursors can explain most SPV variability (r² = 0.58), and our scheme correctly predicts 58% (46%) of extremely weak SPV states for lead times of 1-15 (16-30) days with false-alarm rates of only approximately 5%. Our method can be applied to any variable relevant for (sub)seasonal weather forecasts and could thus help improve long-lead predictions.
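For readers unfamiliar with the verification terms used above, the hit rate and false-alarm rate of binary extreme-event forecasts are computed as sketched below. The arrays are toy data for illustration only, not the authors' results.

```python
import numpy as np

def hit_and_false_alarm_rates(predicted, observed):
    """predicted, observed: boolean arrays marking forecast/observed extreme states."""
    predicted = np.asarray(predicted, dtype=bool)
    observed = np.asarray(observed, dtype=bool)
    hits = np.sum(predicted & observed)
    misses = np.sum(~predicted & observed)
    false_alarms = np.sum(predicted & ~observed)
    correct_negatives = np.sum(~predicted & ~observed)
    hit_rate = hits / (hits + misses)                            # fraction of events caught
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate, false_alarm_rate

observed = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0], dtype=bool)
predicted = np.array([1, 0, 0, 0, 0, 1, 0, 1, 0, 0], dtype=bool)
print(hit_and_false_alarm_rates(predicted, observed))   # approximately (0.67, 0.14)
```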
A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*
Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.
2013-01-01
This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186
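For context, the protein-level FDR thresholds being compared above are usually obtained with the standard Benjamini-Hochberg step-up procedure, sketched below. This is the generic BH procedure on toy P-values, not the non-parametric evaluation method proposed in the paper.

```python
import numpy as np

def benjamini_hochberg(pvalues, fdr=0.05):
    """Return a boolean mask of discoveries under the BH step-up procedure."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k such that p_(k) <= (k/m) * fdr
    below = ranked <= (np.arange(1, m + 1) / m) * fdr
    discoveries = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])        # index of last passing p-value
        discoveries[order[: k + 1]] = True      # reject everything up to that rank
    return discoveries

p = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216])
print(benjamini_hochberg(p, fdr=0.05))   # first two entries are discoveries here
```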
SU-E-T-472: Improvement of IMRT QA Passing Rate by Correcting Angular Dependence of MatriXX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Q; Watkins, W; Kim, T
2015-06-15
Purpose: Multi-channel planar detector arrays utilized for IMRT-QA, such as the MatriXX, exhibit an incident-beam angular dependent response which can result in false-positive gamma-based QA results, especially for helical tomotherapy plans, which encompass the full range of beam angles. Although the MatriXX can be used with a gantry angle sensor to provide automatic angular correction, this sensor does not work with tomotherapy. The purpose of the study is to reduce IMRT-QA false-positives by correcting for the MatriXX angular dependence. Methods: The MatriXX angular dependence was characterized by comparing multiple fixed-angle irradiation measurements with corresponding TPS computed doses. For 81 Tomo-helical IMRT-QA measurements, two different correction schemes were tested: (1) a Monte-Carlo dose engine was used to compute the MatriXX signal based on the angular-response curve, and the computed signal was then compared with measurement; (2) the uncorrected computed signal was compared with measurements uniformly scaled to account for the average angular dependence. Three scaling factors (+2%, +2.5%, +3%) were tested. Results: The MatriXX response is 8% less than predicted for a PA beam even when the couch is fully accounted for. Without angular correction, only 67% of the cases pass the criterion of >90% of points with γ<1 (3%, 3 mm). After full angular correction, 96% of the cases pass the criterion. Of the three scaling factors, +2% gave the highest passing rate (89%), which is still less than the full angular correction method. With a stricter γ (2%, 3 mm) criterion, the full angular correction method was still able to achieve a 90% passing rate while the scaling method only gives a 53% passing rate. Conclusion: Correction for the MatriXX angular dependence reduced the false-positive rate of our IMRT-QA process. It is necessary to correct for the angular dependence to achieve the IMRT passing criteria specified in TG129.
Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth
NASA Technical Reports Server (NTRS)
Forth, Scott C.; Herman, Dave J.; James, Mark A.
2003-01-01
Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which, for cracks that propagate out-of-plane, under-estimates the crack growth rates and over-estimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate the correction process works well at high ΔK levels but fails to capture the mixed-mode effects at ΔK levels approaching threshold (da/dN ≈ 10^-10 m/cycle).
Successes in drug discovery and design.
2004-04-01
The Society for Medicines Research (SMR) held a one-day meeting on case histories in drug discovery on December 4, 2003, at the National Heart and Lung Institute in London. These meetings have been organized by the SMR biannually for many years, and this latest meeting proved extremely popular, attracting a capacity audience of more than 130 registrants. The purpose of these meetings is educational; they allow those interested in drug discovery to hear key learnings from recent successful drug discovery programs. There was no overall linking theme between the talks, other than each success story has led to the introduction of a new and improved product of therapeutic use. The drug discovery stories covered in the meeting were extremely varied and, put together, they emphasized that each successful story is unique and special. This meeting is also special for the SMR because it presents the "SMR Award for Drug Discovery" in recognition of outstanding achievement and contribution in the area. It should be remembered that drug discovery is an extremely risky business and an extremely costly and complicated process in which the success rate is, at best, low. (c) 2004 Prous Science. All rights reserved.
Strategies for bringing drug delivery tools into discovery.
Kwong, Elizabeth; Higgins, John; Templeton, Allen C
2011-06-30
The past decade has yielded a significant body of literature discussing approaches for development and discovery collaboration in the pharmaceutical industry. As a result, collaborations between discovery groups and development scientists have increased considerably. The productivity of pharma companies to deliver new drugs to the market, however, has not increased and development costs continue to rise. Inability to predict clinical and toxicological response underlies the high attrition rate of leads at every step of drug development. A partial solution to this high attrition rate could be provided by better preclinical pharmacokinetics measurements that inform PD response based on key pathways that drive disease progression and therapeutic response. A critical link between these key pharmacology, pharmacokinetics and toxicology studies is the formulation. The challenges in pre-clinical formulation development include limited availability of compounds, rapid turn-around requirements and the frequent un-optimized physical properties of the lead compounds. Despite these challenges, this paper illustrates some successes resulting from close collaboration between formulation scientists and discovery teams. This close collaboration has resulted in development of formulations that meet biopharmaceutical needs from early stage preclinical in vivo model development through toxicity testing and development risk assessment of pre-clinical drug candidates. Published by Elsevier B.V.
Andrade, E L; Bento, A F; Cavalli, J; Oliveira, S K; Freitas, C S; Marcon, R; Schwanke, R C; Siqueira, J M; Calixto, J B
2016-10-24
This review presents a historical overview of drug discovery and the non-clinical stages of the drug development process, from initial target identification and validation, through in silico assays and high throughput screening (HTS), identification of leader molecules and their optimization, the selection of a candidate substance for clinical development, and the use of animal models during the early studies of proof-of-concept (or principle). This report also discusses the relevance of validated and predictive animal models selection, as well as the correct use of animal tests concerning the experimental design, execution and interpretation, which affect the reproducibility, quality and reliability of non-clinical studies necessary to translate to and support clinical studies. Collectively, improving these aspects will certainly contribute to the robustness of both scientific publications and the translation of new substances to clinical development.
MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.
Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel
2016-01-01
We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we initially extract points of interest and build hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with the maximum number of inliers representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments in both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.
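A toy illustration of posing correspondence selection as a maximum weighted clique problem is sketched below using networkx (assumed available). The graph, integer weights standing in for inlier counts, and the compatibility relation are all invented for illustration; this is not the authors' formulation or code.

```python
import networkx as nx

# Nodes are candidate cluster-to-cluster correspondences; the node weight is
# the (integer) number of inliers supporting that correspondence. Edges join
# correspondences that are mutually compatible, so a heavy clique is a set of
# consistent correspondences supported by many inliers in total.
G = nx.Graph()
candidates = {"A1-B1": 12, "A2-B2": 9, "A3-B3": 7, "A1-B2": 5, "A2-B1": 4}
for name, inliers in candidates.items():
    G.add_node(name, weight=inliers)

compatible = [("A1-B1", "A2-B2"), ("A1-B1", "A3-B3"), ("A2-B2", "A3-B3"),
              ("A1-B2", "A2-B1"), ("A2-B1", "A3-B3")]
G.add_edges_from(compatible)

clique, total_weight = nx.max_weight_clique(G, weight="weight")
print(clique, total_weight)   # e.g. ['A1-B1', 'A2-B2', 'A3-B3'] 28
```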
NASA Technical Reports Server (NTRS)
Trauger, John T.
2005-01-01
Eclipse is a proposed NASA Discovery mission to perform a sensitive imaging survey of nearby planetary systems, including a survey for jovian-sized planets orbiting Sun-like stars to distances of 15 pc. We outline the science objectives of the Eclipse mission and review recent developments in the key enabling technologies. Eclipse is a space telescope concept for high-contrast visible-wavelength imaging and spectrophotometry. Its design incorporates a telescope with an unobscured aperture of 1.8 meters, a coronagraphic camera for suppression of diffracted light, and precise active wavefront correction for the suppression of scattered background light. For reference, Eclipse is designed to reduce the diffracted and scattered starlight between 0.33 and 1.5 arcseconds from the star by three orders of magnitude compared to any HST instrument. The Eclipse mission provides precursor science exploration and technology experience in support of NASA's Terrestrial Planet Finder (TPF) program.
NASA Astrophysics Data System (ADS)
Du, Xiaofeng; Song, William; Munro, Malcolm
Web Services, as a new distributed system technology, have been widely adopted by industry in areas such as enterprise application integration (EAI), business process management (BPM), and virtual organisation (VO). However, the lack of semantics in the current Web Service standards has been a major barrier to service discovery and composition. In this chapter, we propose an enhanced context-based semantic service description framework (CbSSDF+) that tackles the problem and improves the flexibility of service discovery and the correctness of generated composite services. We also provide an agile transformation method to demonstrate how the various formats of Web Service descriptions on the Web can be managed and renovated step by step into CbSSDF+ based service descriptions without a large amount of engineering work. At the end of the chapter, we evaluate the applicability of the transformation method and the effectiveness of CbSSDF+ through a series of experiments.
Horn, Kevin M.
2013-07-09
A method reconstructs the charge collection from regions beneath opaque metallization of a semiconductor device, as determined from focused laser charge collection response images, and thereby derives a dose-rate dependent correction factor for subsequent broad-area, dose-rate equivalent, laser measurements. The position- and dose-rate dependencies of the charge-collection magnitude of the device are determined empirically and can be combined with a digital reconstruction methodology to derive an accurate metal-correction factor that permits subsequent absolute dose-rate response measurements to be derived from laser measurements alone. Broad-area laser dose-rate testing can thereby be used to accurately determine the peak transient current, dose-rate response of semiconductor devices to penetrating electron, gamma- and x-ray irradiation.
78 FR 53152 - Prescription Drug User Fee Rates for Fiscal Year 2014; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-28
...] Prescription Drug User Fee Rates for Fiscal Year 2014; Correction AGENCY: Food and Drug Administration, HHS... ``Prescription Drug User Fee Rates for Fiscal Year 2014'' that appeared in the Federal Register of August 2, 2013 (78 FR 46980). The document announced the Fiscal Year 2014 fee rates for the Prescription Drug User...
2010-04-04
Contrails are seen as workers leave the Launch Control Center after the launch of the space shuttle Discovery and the start of the STS-131 mission at NASA Kennedy Space Center in Cape Canaveral, Fla. on Monday April 5, 2010. Discovery is carrying a multi-purpose logistics module filled with science racks for the laboratories aboard the station. The mission has three planned spacewalks, with work to include replacing an ammonia tank assembly, retrieving a Japanese experiment from the station’s exterior, and switching out a rate gyro assembly on the station’s truss structure. Photo Credit: (NASA/Bill Ingalls)
2010-04-04
NASA Administrator Charles Bolden looks out the window of Firing Room Four in the Launch Control Center during the launch of the space shuttle Discovery and the start of the STS-131 mission at NASA Kennedy Space Center in Cape Canaveral, Fla. on Monday April 5, 2010. Discovery is carrying a multi-purpose logistics module filled with science racks for the laboratories aboard the station. The mission has three planned spacewalks, with work to include replacing an ammonia tank assembly, retrieving a Japanese experiment from the station’s exterior, and switching out a rate gyro assembly on the station’s truss structure. Photo Credit: (NASA/Bill Ingalls)
Incidence of Speech-Correcting Surgery in Children With Isolated Cleft Palate.
Gustafsson, Charlotta; Heliövaara, Arja; Leikola, Junnu; Rautio, Jorma
2018-01-01
Speech-correcting surgeries (pharyngoplasty) are performed to correct velopharyngeal insufficiency (VPI). This study aimed to analyze the need for speech-correcting surgery in children with isolated cleft palate (ICP) and to determine differences among cleft extent, gender, and primary technique used. In addition, we assessed the timing and number of secondary procedures performed and the incidence of operated fistulas. Retrospective medical chart review study from hospital archives and electronic records. These comprised the 423 consecutive nonsyndromic children (157 males and 266 females) with ICP treated at the Cleft Palate and Craniofacial Center of Helsinki University Hospital during 1990 to 2016. The total incidence of VPI surgery was 33.3% and the fistula repair rate, 7.8%. Children with cleft of both the hard and soft palate (n = 300) had a VPI secondary surgery rate of 37.3% (fistula repair rate 10.7%), whereas children with only cleft of the soft palate (n = 123) had a corresponding rate of 23.6% (fistula repair rate 0.8%). Gender and primary palatoplasty technique were not considered significant factors in need for VPI surgery. The majority of VPI surgeries were performed before school age. One fifth of patients receiving speech-correcting surgery had more than one subsequent procedure. The need for speech-correcting surgery and fistula repair was related to the severity of the cleft. Although the majority of the corrective surgeries were done before the age of 7 years, a considerable number were performed at a later stage, necessitating long-term observation.
75 FR 11502 - Schedule of Water Charges; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-11
... DELAWARE RIVER BASIN COMMISSION 18 CFR Part 410 Schedule of Water Charges; Correction AGENCY: Delaware River Basin Commission. ACTION: Proposed rule; correction. SUMMARY: This document corrects the... of water charges. This correction clarifies that the amended rates are proposed to take effect in two...
Dead time corrections for in-beam γ-spectroscopy measurements
NASA Astrophysics Data System (ADS)
Boromiza, M.; Borcea, C.; Negret, A.; Olacel, A.; Suliman, G.
2017-08-01
Relatively high counting rates were registered in a proton inelastic scattering experiment on 16O and 28Si using HPGe detectors, performed at the Tandem facility of IFIN-HH, Bucharest. In consequence, dead time corrections were needed in order to determine the absolute γ-production cross sections. Considering that the real counting rate follows a Poisson distribution, the dead time correction procedure is reformulated in statistical terms. The arrival time interval between incoming events (Δt) obeys an exponential distribution with a single parameter - the average of the associated Poisson distribution. We use this mathematical connection to calculate and implement the dead time corrections for the counting rates of the mentioned experiment. Also, exploiting an idea introduced by Pommé et al., we describe a consistent method for calculating the dead time correction which completely avoids the complicated problem of measuring the dead time of a given detection system. Several comparisons are made between the corrections implemented through this method and by using standard (phenomenological) dead time models, and we show how these results were used for correcting our experimental cross sections.
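As background, the textbook non-paralyzable dead-time correction follows directly from the Poisson/exponential interval statistics mentioned above. The sketch below shows that standard correction and a quick simulation check; the dead-time value and rates are illustrative and this is not necessarily the exact procedure of the paper.

```python
import numpy as np

def true_rate_nonparalyzable(measured_rate, dead_time):
    """Recover the true (Poisson) event rate n from the measured rate m for a
    non-paralyzable system: m = n / (1 + n*tau)  =>  n = m / (1 - m*tau)."""
    return measured_rate / (1.0 - measured_rate * dead_time)

tau = 10e-6                 # 10 microseconds of dead time per accepted event (assumed)
m = 20_000.0                # measured counting rate in counts/s (assumed)
n = true_rate_nonparalyzable(m, tau)
print(f"true rate ~ {n:.0f} 1/s, correction factor {n / m:.3f}")

# Cross-check by simulating exponential arrival intervals and applying the dead time
rng = np.random.default_rng(1)
arrivals = np.cumsum(rng.exponential(1.0 / n, size=200_000))
accepted, last = 0, -np.inf
for t in arrivals:
    if t - last >= tau:     # event recorded only if the system is live again
        accepted += 1
        last = t
print(f"simulated measured rate ~ {accepted / arrivals[-1]:.0f} 1/s (expect ~{m:.0f})")
```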
Performance of the STIS CCD Dark Rate Temperature Correction
NASA Astrophysics Data System (ADS)
Branton, Doug; STScI STIS Team
2018-06-01
Since July 2001, the Space Telescope Imaging Spectrograph (STIS) onboard Hubble has operated on its Side-2 electronics due to a failure in the primary Side-1 electronics. While nearly identical, Side-2 lacks a functioning temperature sensor for the CCD, introducing a variability in the CCD operating temperature. Previous analysis utilized the CCD housing temperature telemetry to characterize the relationship between the housing temperature and the dark rate. It was found that a first-order 7%/°C uniform dark correction demonstrated a considerable improvement in the quality of dark subtraction on Side-2 era CCD data, and that value has been used on all Side-2 CCD darks since. In this report, we show how this temperature correction has performed historically. We compare the current 7%/°C value against the ideal first-order correction at a given time (which can vary between ~6%/°C and ~10%/°C) as well as against a more complex second-order correction that applies a unique slope to each pixel as a function of dark rate and time. At worst, the current correction has performed ~1% worse than the second-order correction. Additionally, we present initial evidence suggesting that the variability in pixel temperature-sensitivity is significant enough to warrant a temperature correction that considers pixels individually rather than correcting them uniformly.
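To make the "7%/°C uniform correction" concrete, one plausible form of such a first-order scaling is sketched below. The reference temperature, housing temperature, dark values, and the multiplicative linear form are assumptions for illustration; this is not the STScI pipeline code.

```python
import numpy as np

def scale_dark_to_housing_temperature(dark_rate, t_housing, t_ref, slope=0.07):
    """First-order uniform correction: the dark rate is assumed to change by
    `slope` (fractional) per degree C of CCD housing temperature."""
    return dark_rate * (1.0 + slope * (t_housing - t_ref))

# Toy superdark (e-/s) taken at the reference housing temperature
rng = np.random.default_rng(2)
superdark = rng.gamma(shape=2.0, scale=0.005, size=(4, 4))
t_ref = 18.0        # assumed reference housing temperature, deg C
t_obs = 19.5        # housing temperature during the science exposure, deg C

# Predict the dark appropriate for the warmer exposure before subtracting it
predicted_dark = scale_dark_to_housing_temperature(superdark, t_obs, t_ref)
print(predicted_dark / superdark)   # uniform factor of 1.105 everywhere
```

The second-order, per-pixel correction discussed in the report would replace the single `slope` with a slope that depends on each pixel's dark rate and on time.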
Djulbegovic, Benjamin
2009-01-01
Background Progress in clinical medicine relies on the willingness of patients to take part in experimental clinical trials, particularly randomized controlled trials (RCTs). Before agreeing to enroll in clinical trials, patients require guarantees that they will not knowingly be harmed and will have the best possible chances of receiving the most favorable treatments. This guarantee is provided by the acknowledgment of uncertainty (equipoise), which removes ethical dilemmas and makes it easier for patients to enroll in clinical trials. Methods Since the design of clinical trials is mostly affected by clinical equipoise, the “clinical equipoise hypothesis” has been postulated. If the uncertainty requirement holds, this means that investigators cannot predict what they are going to discover in any individual trial that they undertake. In some instances, new treatments will be superior to standard treatments, while in others, standard treatments will be superior to experimental treatments, and in still others, no difference will be detected between new and standard treatments. It is hypothesized that there must be a relationship between the overall pattern of treatment successes and the uncertainties that RCTs are designed to address. Results An analysis of published trials shows that the results cannot be predicted at the level of individual trials. However, the results also indicate that the overall pattern of discovery of treatment success across a series of trials is predictable and is consistent with clinical equipoise hypothesis. The analysis shows that we can discover no more than 25% to 50% of successful treatments when they are tested in RCTs. The analysis also indicates that this discovery rate is optimal in helping to preserve the clinical trial system; a high discovery rate (eg, a 90% to 100% probability of success) is neither feasible nor desirable since under these circumstances, neither the patient nor the researcher has an interest in randomization. This in turn would halt the RCT system as we know it. Conclusions The “principle or law of clinical discovery” described herein predicts the efficiency of the current system of RCTs at generating discoveries of new treatments. The principle is derived from the requirement for uncertainty or equipoise as a precondition for RCTs, the precept that paradoxically drives discoveries of new treatments while limiting the proportion and rate of new therapeutic discoveries. PMID:19910921
Jiang, Wei; Yu, Weichuan
2017-01-01
In genome-wide association studies, we normally discover associations between genetic variants and diseases/traits in primary studies, and validate the findings in replication studies. We consider the associations identified in both primary and replication studies as true findings. An important question under this two-stage setting is how to determine significance levels in both studies. In traditional methods, significance levels of the primary and replication studies are determined separately. We argue that the separate determination strategy reduces the power in the overall two-stage study. Therefore, we propose a novel method to determine significance levels jointly. Our method is a reanalysis method that needs summary statistics from both studies. We find the most powerful significance levels when controlling the false discovery rate in the two-stage study. To enjoy the power improvement from the joint determination method, we need to select single nucleotide polymorphisms for replication at a less stringent significance level. This is a common practice in studies designed for discovery purpose. We suggest this practice is also suitable in studies with validation purpose in order to identify more true findings. Simulation experiments show that our method can provide more power than traditional methods and that the false discovery rate is well-controlled. Empirical experiments on datasets of five diseases/traits demonstrate that our method can help identify more associations. The R-package is available at: http://bioinformatics.ust.hk/RFdr.html .
Compound annotation with real time cellular activity profiles to improve drug discovery.
Fang, Ye
2016-01-01
In the past decade, a range of innovative strategies have been developed to improve the productivity of pharmaceutical research and development. In particular, compound annotation, combined with informatics, has provided unprecedented opportunities for drug discovery. In this review, a literature search from 2000 to 2015 was conducted to provide an overview of the compound annotation approaches currently used in drug discovery. Based on this, a framework related to a compound annotation approach using real-time cellular activity profiles for probe, drug, and biology discovery is proposed. Compound annotation with chemical structure, drug-like properties, bioactivities, genome-wide effects, clinical phenotypes, and textural abstracts has received significant attention in early drug discovery. However, these annotations are mostly associated with endpoint results. Advances in assay techniques have made it possible to obtain real-time cellular activity profiles of drug molecules under different phenotypes, so it is possible to generate compound annotation with real-time cellular activity profiles. Combining compound annotation with informatics, such as similarity analysis, presents a good opportunity to improve the rate of discovery of novel drugs and probes, and enhance our understanding of the underlying biology.
The in silico drug discovery toolbox: applications in lead discovery and optimization.
Bruno, Agostino; Costantino, Gabriele; Sartori, Luca; Radi, Marco
2017-11-06
Discovery and development of a new drug is a long-lasting and expensive journey that takes around 15 years from the starting idea to approval and marketing of a new medication. Although R&D expenditures have been constantly increasing in the last few years, the number of new drugs introduced into the market has been steadily declining. This is mainly due to preclinical and clinical safety issues, which still represent about 40% of drug discontinuations. From this point of view, it is clear that if we want to increase the drug-discovery success rate and reduce the costs associated with the development of a new drug, a comprehensive evaluation/prediction of potential safety issues should be conducted as soon as possible during the early drug discovery phase. In the present review, we will analyse the early steps of the drug-discovery pipeline, describing the sequence of steps from disease selection to lead optimization and focusing on the most common in silico tools used to assess attrition risks and build a mitigation plan. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
D'estanque, Emmanuel; Hedon, Christophe; Lattuca, Benoît; Bourdon, Aurélie; Benkiran, Meriem; Verd, Aurélie; Roubille, François; Mariano-Goulart, Denis
2017-08-01
Dual-isotope 201Tl/123I-MIBG SPECT can assess trigger zones (dysfunctions in the autonomic nervous system located in areas of viable myocardium) that are substrates for ventricular arrhythmias after STEMI. This study evaluated the necessity of delayed acquisition and scatter correction for dual-isotope 201Tl/123I-MIBG SPECT studies with a CZT camera to identify trigger zones after revascularization in patients with STEMI in routine clinical settings. Sixty-nine patients were prospectively enrolled after revascularization to undergo 201Tl/123I-MIBG SPECT using a CZT camera (Discovery NM 530c, GE). The first acquisition was a single thallium study (before MIBG administration); the second and the third were early and late dual-isotope studies. We compared the scatter-uncorrected and scatter-corrected (TEW method) thallium studies with the results of magnetic resonance imaging or transthoracic echography (reference standard) to diagnose myocardial necrosis. Summed rest scores (SRS) were significantly higher in the delayed MIBG studies than in the early MIBG studies. SRS and necrosis surface were significantly higher in the delayed thallium studies with scatter correction than without scatter correction, leading to fewer trigger zone diagnoses for the scatter-corrected studies. Compared with the scatter-uncorrected studies, the late thallium scatter-corrected studies provided the best diagnostic values for myocardial necrosis assessment. Delayed acquisitions and scatter-corrected dual-isotope 201Tl/123I-MIBG SPECT acquisitions provide an improved evaluation of trigger zones in routine clinical settings after revascularization for STEMI.
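The TEW (triple-energy-window) scatter correction referred to above estimates the scatter inside the photopeak window from count densities in two narrow flanking windows. A generic sketch of that estimate is given below; the window widths and counts are illustrative placeholders, not acquisition parameters from the study.

```python
def tew_scatter_corrected(c_peak, c_lower, c_upper, w_peak, w_lower, w_upper):
    """Triple-energy-window scatter correction.

    The scatter inside the photopeak window is approximated by a trapezoid whose
    sides are the count densities (counts per keV) in the two flanking windows.
    """
    scatter = (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0
    return max(c_peak - scatter, 0.0)

# Toy numbers for one projection pixel (assumed): a wide photopeak window
# flanked by two 3 keV sub-windows.
corrected = tew_scatter_corrected(
    c_peak=950.0, c_lower=45.0, c_upper=12.0,
    w_peak=28.0, w_lower=3.0, w_upper=3.0)
print(f"{corrected:.1f} primary counts estimated")   # 950 - 266 = 684 here
```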
Fragment-based drug discovery and molecular docking in drug design.
Wang, Tao; Wu, Mian-Bin; Chen, Zheng-Jie; Chen, Hua; Lin, Jian-Ping; Yang, Li-Rong
2015-01-01
Fragment-based drug discovery (FBDD) has caused a revolution in the process of drug discovery and design, with many FBDD leads being developed into clinical trials or approved in the past few years. Compared with traditional high-throughput screening, it displays obvious advantages such as efficiently covering chemical space, achieving higher hit rates, and so forth. In this review, we focus on the most recent developments of FBDD for improving drug discovery, illustrating the process and the importance of FBDD. In particular, the computational strategies applied in the process of FBDD and molecular-docking programs are highlighted elaborately. In most cases, docking is used for predicting the ligand-receptor interaction modes and hit identification by structure-based virtual screening. The successful cases of typical significance and the hits identified most recently are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knill, C; Wayne State University School of Medicine, Detroit, MI; Snyder, M
Purpose: PTW’s Octavius 1000 SRS array performs IMRT QA measurements with liquid-filled ionization chambers (LICs). Collection efficiencies of LICs have been shown to change during IMRT delivery as a function of LINAC pulse frequency and pulse dose, which affects QA results. In this study, two methods were developed to correct changes in collection efficiencies during IMRT QA measurements, and the effects of these corrections on QA pass rates were compared. Methods: For the first correction, Matlab software was developed that calculates pulse frequency and pulse dose for each detector, using measurement and DICOM RT Plan files. Pulse information is converted to collection efficiency and measurements are corrected by multiplying detector dose by ratios of calibration to measured collection efficiencies. For the second correction, MU/min in daily 1000 SRS calibration was chosen to match average MU/min of the VMAT plan. Usefulness of the derived corrections was evaluated using 6MV and 10FFF SBRT RapidArc plans delivered to the OCTAVIUS 4D system using a TrueBeam equipped with an HD-MLC. Effects of the two corrections on QA results were examined by performing 3D gamma analysis comparing predicted to measured dose, with and without corrections. Results: After complex Matlab corrections, average 3D gamma pass rates improved by [0.07%,0.40%,1.17%] for 6MV and [0.29%,1.40%,4.57%] for 10FFF using [3%/3mm,2%/2mm,1%/1mm] criteria. Maximum changes in gamma pass rates were [0.43%,1.63%,3.05%] for 6MV and [1.00%,4.80%,11.2%] for 10FFF using [3%/3mm,2%/2mm,1%/1mm] criteria. On average, pass rates of simple daily calibration corrections were within 1% of complex Matlab corrections. Conclusion: Ion recombination effects can potentially be clinically significant for OCTAVIUS 1000 SRS measurements, especially for higher pulse dose unflattened beams when using tighter gamma tolerances. Matching daily 1000 SRS calibration MU/min to average planned MU/min is a simple correction that greatly reduces ion recombination effects, improving measurement accuracy and gamma pass rates. This work was supported by PTW.
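The first correction scheme described above multiplies each detector reading by the ratio of the calibration-condition collection efficiency to the efficiency under the actual per-detector pulse conditions. A minimal sketch of that scaling step is given below; the collection-efficiency model and all coefficients are made-up placeholders, not PTW's characterization.

```python
import numpy as np

def collection_efficiency(pulse_dose_mGy, pulse_rate_hz):
    """Hypothetical collection-efficiency model for a liquid-filled ionization
    chamber: efficiency drops slightly with pulse dose and pulse rate.
    Placeholder coefficients, for illustration only."""
    return 1.0 / (1.0 + 0.02 * pulse_dose_mGy + 1.0e-5 * pulse_rate_hz)

def correct_detector_dose(measured_dose, pulse_dose_mGy, pulse_rate_hz,
                          cal_pulse_dose_mGy=0.5, cal_pulse_rate_hz=180.0):
    """Scale measured dose by the ratio of the calibration-condition efficiency
    to the efficiency under the actual per-detector pulse conditions."""
    ce_cal = collection_efficiency(cal_pulse_dose_mGy, cal_pulse_rate_hz)
    ce_meas = collection_efficiency(pulse_dose_mGy, pulse_rate_hz)
    return measured_dose * ce_cal / ce_meas

measured = np.array([1.80, 2.10, 0.95])        # Gy per detector (toy values)
pulse_dose = np.array([1.2, 0.8, 0.3])         # mGy per pulse at each detector
pulse_rate = np.array([360.0, 180.0, 90.0])    # effective pulse rate, Hz
print(correct_detector_dose(measured, pulse_dose, pulse_rate))
```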
Rowlands, Derek J
2012-01-01
The QT interval on the electrocardiogram is an increasingly important measurement, especially in relation to drug action and interaction. The QT interval varies inversely as the heart rate and numerous rate correction formulae have been proposed. It is difficult to compare the effect of applying different formulae at different heart rates and for different measured QT intervals. A simple graphical display of the results from different formulae is proposed. This display is dependent on the concept of the absolute correction factor. This graphical presentation is useful (a) in comparing the effect of the application of different formulae and (b) in directly reading the correction produced by any individual formula. Copyright © 2012 Elsevier Inc. All rights reserved.
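Two of the published rate-correction formulae discussed in this literature, and the resulting correction each adds to the measured QT, can be tabulated as sketched below. Bazett and Fridericia are standard formulae; treating the difference QTc − QT as the "absolute correction" is this sketch's reading of the concept, not necessarily the author's exact definition.

```python
def qtc_bazett(qt_s, rr_s):
    """Bazett: QTc = QT / sqrt(RR), with QT and RR in seconds."""
    return qt_s / rr_s ** 0.5

def qtc_fridericia(qt_s, rr_s):
    """Fridericia: QTc = QT / RR^(1/3)."""
    return qt_s / rr_s ** (1.0 / 3.0)

heart_rate = 75.0            # beats per minute
rr = 60.0 / heart_rate       # RR interval in seconds
qt = 0.40                    # measured QT interval in seconds

for name, formula in [("Bazett", qtc_bazett), ("Fridericia", qtc_fridericia)]:
    qtc = formula(qt, rr)
    absolute_correction = qtc - qt     # how much the formula adds at this heart rate
    print(f"{name:10s} QTc = {qtc * 1000:.0f} ms, correction = {absolute_correction * 1000:+.0f} ms")
```

Scanning such a table over a grid of heart rates and measured QT values is one simple way to compare how strongly the different formulae correct at fast versus slow rates.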
The Impact of Traumatic Brain Injury on Prison Health Services and Offender Management.
Piccolino, Adam L; Solberg, Kenneth B
2014-07-01
A large percentage of incarcerated offenders report a history of traumatic brain injury (TBI) with concomitant neuropsychiatric and social sequelae. However, research looking at the relationship between TBI and delivery of correctional health services and offender management is limited. In this study, the relationships between TBI and use of correctional medical/psychological services, chemical dependency (CD) treatment completion rates, in-prison rule infractions, and recidivism were investigated. Findings indicated that TBI history has a statistically significant association with increased usage of correctional medical/psychological services, including crisis interventions services, and with higher recidivism rates. Results also showed a trend toward offenders with TBI incurring higher rates of in-prison rule infractions and lower rates of CD treatment completion. Implications and future directions for correctional systems are discussed. © The Author(s) 2014.
Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.
Dixit, Purushottam D; Dill, Ken A
2018-02-13
Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.
NASA Technical Reports Server (NTRS)
Challa, M. S.; Natanson, G. A.; Baker, D. F.; Deutschmann, J. K.
1994-01-01
This paper describes real-time attitude determination results for the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX), a gyroless spacecraft, using a Kalman filter/Euler equation approach denoted the real-time sequential filter (RTSF). The RTSF is an extended Kalman filter whose state vector includes the attitude quaternion and corrections to the rates, which are modeled as Markov processes with small time constants. The rate corrections impart a significant robustness to the RTSF against errors in modeling the environmental and control torques, as well as errors in the initial attitude and rates, while maintaining a small state vector. SAMPEX flight data from various mission phases are used to demonstrate the robustness of the RTSF against a priori attitude and rate errors of up to 90 deg and 0.5 deg/sec, respectively, as well as a sensitivity of 0.0003 deg/sec in estimating rate corrections in torque computations. In contrast, it is shown that the RTSF attitude estimates without the rate corrections can degrade rapidly. RTSF advantages over single-frame attitude determination algorithms are also demonstrated through (1) substantial improvements in attitude solutions during sun-magnetic field coalignment and (2) magnetic-field-only attitude and rate estimation during the spacecraft's sun-acquisition mode. A robust magnetometer-only attitude-and-rate determination method is also developed to provide for the contingency when both sun data as well as a priori knowledge of the spacecraft state are unavailable. This method includes a deterministic algorithm used to initialize the RTSF with coarse estimates of the spacecraft attitude and rates. The combined algorithm has been found effective, yielding accuracies of 1.5 deg in attitude and 0.01 deg/sec in the rates and convergence times as short as 400 sec.
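The idea of augmenting the state with rate corrections modeled as short-time-constant Markov (exponentially correlated) processes can be illustrated with a one-axis toy filter. This is a deliberately simplified single-angle sketch, not the SAMPEX quaternion RTSF; all constants and the rate-error scenario are invented.

```python
import numpy as np

# One-axis toy: state x = [attitude angle (rad), rate correction (rad/s)].
# Propagation uses a modeled body rate plus the estimated correction; the
# correction itself decays as a first-order Markov process with time constant tau.
dt, tau = 1.0, 50.0
phi = np.array([[1.0, dt],
                [0.0, np.exp(-dt / tau)]])
q = np.diag([1e-8, 1e-8])                    # process noise covariance (assumed)
h = np.array([[1.0, 0.0]])                   # we observe the angle only
r = np.array([[(0.5 * np.pi / 180) ** 2]])   # 0.5 deg measurement noise

rng = np.random.default_rng(3)
w_model, w_true = 0.001, 0.0013              # modeled vs true body rate (rad/s)
x_true = 0.0
x = np.zeros(2)                              # filter starts with no correction
p = np.diag([np.deg2rad(90.0) ** 2, 1e-6])

for _ in range(600):
    x_true += w_true * dt                                   # truth
    z = x_true + rng.normal(0.0, np.sqrt(r[0, 0]))          # angle measurement
    # Propagate: modeled rate plus current rate-correction estimate
    x = np.array([x[0] + (w_model + x[1]) * dt, np.exp(-dt / tau) * x[1]])
    p = phi @ p @ phi.T + q
    # Measurement update
    k = p @ h.T @ np.linalg.inv(h @ p @ h.T + r)
    x = x + (k @ (np.array([z]) - h @ x)).ravel()
    p = (np.eye(2) - k @ h) @ p

print(f"estimated rate correction: {x[1]:+.5f} rad/s (true model error {w_true - w_model:+.5f})")
```

The estimated rate correction absorbs the difference between the modeled and true rates, which is the role the rate-correction states play in making the full filter robust to torque-model errors.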
The ranking probability approach and its usage in design and analysis of large-scale studies.
Kuo, Chia-Ling; Zaykin, Dmitri
2013-01-01
In experiments with many statistical tests there is a need to balance type I and type II error rates while taking multiplicity into account. In the traditional approach, the nominal α-level such as 0.05 is adjusted by the number of tests, m, i.e., as 0.05/m. Assuming that some proportion of tests represent "true signals", that is, originate from a scenario where the null hypothesis is false, power depends on the number of true signals and the respective distribution of effect sizes. One way to define power is for it to be the probability of making at least one correct rejection at the assumed α-level. We advocate an alternative way of establishing how "well-powered" a study is. In our approach, useful for studies with multiple tests, the ranking probability is controlled, defined as the probability of making at least k correct rejections while rejecting the hypotheses with the k smallest P-values. The two approaches are statistically related. The probability that the smallest P-value is a true signal (i.e., the ranking probability for k = 1) is equal to the power at the level α/m, to a very good approximation. Ranking probabilities are also related to the false discovery rate and to the Bayesian posterior probability of the null hypothesis. We study properties of our approach when the effect size distribution is replaced for convenience by a single "typical" value taken to be the mean of the underlying distribution. We conclude that its performance is often satisfactory under this simplification; however, substantial imprecision is to be expected when [Formula: see text] is very large and [Formula: see text] is small. Precision is largely restored when three values with the respective abundances are used instead of a single typical effect size value.
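The two quantities being compared above can be estimated side by side with a short Monte Carlo sketch: the traditional power (at least one correct rejection at the Bonferroni-adjusted level) versus the probability that the k smallest P-values are all true signals. This is a generic simulation written only to illustrate the definitions; the effect size, test counts, and signal proportion are arbitrary choices, not the paper's settings, and scipy is assumed to be available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
m, m_signal, effect, alpha, k = 1000, 50, 3.5, 0.05, 5
n_sim = 2000

at_least_one, top_k_all_true = 0, 0
for _ in range(n_sim):
    is_signal = np.zeros(m, dtype=bool)
    is_signal[:m_signal] = True
    z = rng.normal(0.0, 1.0, m) + effect * is_signal   # shifted z-scores for signals
    p = 2 * stats.norm.sf(np.abs(z))                   # two-sided P-values
    # Traditional power: at least one true signal rejected at the Bonferroni level
    at_least_one += np.any(p[is_signal] <= alpha / m)
    # Ranking probability: the k smallest P-values are all true signals
    top_k = np.argsort(p)[:k]
    top_k_all_true += np.all(is_signal[top_k])

print(f"power (>=1 correct rejection at alpha/m): {at_least_one / n_sim:.3f}")
print(f"P(top-{k} P-values are all true signals): {top_k_all_true / n_sim:.3f}")
```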
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fraga, Carlos G.; Clowers, Brian H.; Moore, Ronald J.
2010-05-15
This report demonstrates the use of bioinformatic and chemometric tools on liquid chromatography mass spectrometry (LC-MS) data for the discovery of ultra-trace forensic signatures for sample matching of various stocks of the nerve-agent precursor known as methylphosphonic dichloride (dichlor). The bioinformatic tool known as XCMS was used to comprehensively search for and find candidate LC-MS peaks in a known set of dichlor samples. These candidate peaks were down-selected to a group of 34 impurity peaks. Hierarchical cluster analysis and factor analysis demonstrated the potential of these 34 impurity peaks for matching samples based on their stock source. Only one pair of dichlor stocks was not differentiated from one another. An acceptable chemometric approach for sample matching was determined to be variance scaling and signal averaging of normalized duplicate impurity profiles prior to classification by k-nearest neighbors. Using this approach, all samples in a test set of dichlor samples were correctly matched to their source stocks. The sample preparation and LC-MS method permitted the detection of dichlor impurities presumably at the parts-per-trillion level (w/w). The detection of a common impurity in all dichlor stocks that were synthesized over a 14-year period and by different manufacturers was an unexpected discovery. Our described signature-discovery approach should be useful in the development of a forensic capability to help in criminal investigations following chemical attacks.
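The chemometric workflow described (normalization and averaging of duplicate impurity profiles, variance scaling, then k-nearest-neighbor classification) can be sketched as follows; the feature matrix and labels here are synthetic stand-ins, not the dichlor LC-MS data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in: 8 stocks x 34 impurity peak areas, duplicate injections each.
n_stocks, n_peaks = 8, 34
stock_profiles = rng.lognormal(mean=0.0, sigma=1.0, size=(n_stocks, n_peaks))

def duplicate_profiles(profiles, noise=0.05):
    """Simulate duplicate injections with multiplicative measurement noise."""
    reps = np.repeat(profiles, 2, axis=0)
    return reps * rng.lognormal(0.0, noise, reps.shape)

def preprocess(raw):
    """Normalize each profile to unit total, then average each duplicate pair."""
    normed = raw / raw.sum(axis=1, keepdims=True)
    return normed.reshape(-1, 2, n_peaks).mean(axis=1)

train_raw = duplicate_profiles(stock_profiles)
test_raw = duplicate_profiles(stock_profiles)

labels = np.arange(n_stocks)
scaler = StandardScaler()                       # variance scaling of each impurity peak
X_train = scaler.fit_transform(preprocess(train_raw))
X_test = scaler.transform(preprocess(test_raw))

knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, labels)
print("fraction matched to the correct stock:", (knn.predict(X_test) == labels).mean())
```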
NASA Astrophysics Data System (ADS)
von Berlepsch, R.; Strassmeier, K. G.
2009-06-01
We present facsimiles of some of the scientifically and historically most relevant papers published in Astronomische Nachrichten/Astronomical Notes (AN) between 1821 and 1938. Almost all of these papers were written and printed in German, and it is sometimes not completely straightforward to find these original works and then to cite the historically correct version, e.g. in the case of a series of articles or editorial letters. It was common during the early years that many contributions were made in the form of letters to the editor. We present a summary of these original works with an English translation of their titles. Among the highlights are the originals of the discovery of stellar parallaxes by Friedrich Wilhelm Bessel, the discovery of the solar cycle by Heinrich Schwabe, the discovery of the planet Neptune by Johann Gottfried Galle, the first ever measured stellar radial velocity by Hermann Vogel, the discovery of radio emission from the Sun by Wilsing and Scheiner, the first ever conducted photoelectric photometry of stars by Paul Guthnick, and up to the pioneering work by Karl Schwarzschild, Ejnar Hertzsprung, Erwin Finlay Freundlich and others. As a particular gimmick we present the still world-record-holding shortest paper ever published, by Johannes Hartmann in AN 226, 63 (1926) on Nova Pictoris. Our focus is on contributions in the early years, published up to 1938, on the verge of the Second World War.
The need for scientific software engineering in the pharmaceutical industry
NASA Astrophysics Data System (ADS)
Luty, Brock; Rose, Peter W.
2017-03-01
Scientific software engineering is a distinct discipline from both computational chemistry project support and research informatics. A scientific software engineer not only has a deep understanding of the science of drug discovery but also the desire, skills and time to apply good software engineering practices. A good team of scientific software engineers can create a software foundation that is maintainable, validated and robust. If done correctly, this foundation enables the organization to investigate new and novel computational ideas with a very high level of efficiency.
Kitty Field, Campbell County, Wyoming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, C.R.
1970-01-01
Kitty production and success, when viewed on a per-well basis, is quite erratic. The geology, simplified in this study, is quite erratic and complex, awaiting further study to place it in the correct perspective. It should be remembered that ''Kitty'', a pre-Bell Creek field discovery, remained dormant for approx. 2 yr because of adverse economic factors. An aggressive and optimistic approach by geologists will be needed for further exploration and exploitation of the Muddy potential in the Powder River Basin of Wyoming. (10 refs.)
Factors affecting survival of patients in the acute phase of upper cervical spine injuries.
Morita, Tomonori; Takebayashi, Tsuneo; Irifune, Hideto; Ohnishi, Hirofumi; Hirayama, Suguru; Yamashita, Toshihiko
2017-04-01
In recent years, mortality rates for upper cervical spine injuries, such as odontoid fractures, have been reported to be low in some studies but significantly high in others. Furthermore, the relationship between survival rates and various clinical features in these patients during the acute phase of injury has not been well documented because reports are few. This study aimed to evaluate survival rates and acute-phase clinical features of upper cervical spine injuries. We conducted a retrospective review of all patients who were transported to the advanced emergency medical center and underwent computed tomography of the cervical spine at our hospital between January 2006 and December 2015. We excluded patients who were discovered in a state of cardiopulmonary arrest (CPA) and could not be resuscitated after transportation. Of the 215 consecutive patients with cervical spine injuries, we examined 40 patients (18.6%) diagnosed with upper cervical spine injury (males, 28; females, 12; median age, 58.5 years). Age, sex, mechanism of injury, degree of paralysis, level of cervical injury, injury severity score (ISS), and incidence of CPA at discovery were evaluated and compared between patients classified into the survival and mortality groups. The survival rate was 77.5% (31/40 patients). In addition, complete paralysis was observed in 32.5% of patients. The median ISS was 34.0 points, and 14 patients (35.0%) presented with CPA at discovery. Age, the proportion of patients with complete paralysis, a high ISS, and the incidence of CPA at discovery were significantly higher in the mortality group (p = 0.038, p = 0.038, p < 0.001, and p < 0.001, respectively). Elderly people were more likely to experience upper cervical spine injuries, and their mortality rate was significantly higher than that of injured younger people. In addition, the proportions of patients with complete paralysis, a high ISS, and CPA at discovery were significantly higher in the mortality group.
Discovery of Hubble's Law as a Series of Type III Errors
ERIC Educational Resources Information Center
Belenkiy, Ari
2015-01-01
Recently much attention has been paid to the history of the discovery of Hubble's law--the linear relation between the rate of recession of the remote galaxies and distance to them from Earth. Though historians of cosmology now mention several names associated with this law instead of just one, the motivation of each actor of that remarkable…
NASA Astrophysics Data System (ADS)
Wainscoat, Richard J.; Chambers, Kenneth C.; Chastel, Serge; Denneau, Larry; Lilly Schunova, Eva; Micheli, Marco; Weryk, Robert J.
2016-10-01
The Pan-STARRS1 telescope has been spending most of its time for the last 2.5 years searching the sky for Near Earth Objects (NEOs). The surveyed area covers the entire northern sky and extends south to -49 degrees declination. Because Pan-STARRS1 has a large field-of-view, it has been able to survey large areas of the sky, and we are now able to examine NEO discovery rates relative to ecliptic latitude. Most contemporary searches, including Pan-STARRS1, have been spending large amounts of their observing time during the dark moon period searching for NEOs close to the ecliptic. The rationale for this is that many objects have low inclination, and all objects in orbit around the Sun must cross the ecliptic. New search capabilities are now available, including Pan-STARRS2 and the upgraded camera in Catalina Sky Survey's G96 telescope. These allow NEO searches to be conducted over wider areas of the sky and to extend further from the ecliptic. We have examined the discovery rates relative to location on the sky for new NEOs from Pan-STARRS1, and find that the new NEO discoveries are less concentrated on the ecliptic than might be expected. This finding also holds for larger objects. The southern sky has proven to be very productive in new NEO discoveries - this is a direct consequence of the major NEO surveys being located in the northern hemisphere. Our preliminary findings suggest that NEO searches should extend to at least 30 degrees from the ecliptic during the more sensitive dark moon period. At least 6,000 deg2 should therefore be searched each lunation. This is possible with the newly augmented NEO search assets, and repeat coverage will be needed in order to recover most of the NEO candidates found. However, weather challenges will likely make full and repeated coverage of such a large area of sky difficult to achieve. Some simple coordination between observing sites will likely lead to improvement in efficiency.
NASA Astrophysics Data System (ADS)
Nguyen, Huong Giang T.; Horn, Jarod C.; Thommes, Matthias; van Zee, Roger D.; Espinal, Laura
2017-12-01
Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO2 and supercritical N2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.
Nguyen, Huong Giang T; Horn, Jarod C; Thommes, Matthias; van Zee, Roger D; Espinal, Laura
2017-12-01
Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO2 and supercritical N2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.
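The correction at the heart of all three approaches is the same: the apparent mass change must be adjusted for the gas displaced on the sample side and on the counterweight side of the balance, each evaluated at its own temperature when a gradient exists. A minimal sketch with an ideal-gas density and invented volumes (not the instrument-specific calibration) follows.

```python
R = 8.314462618  # J mol^-1 K^-1

def gas_density(p_pa, t_k, molar_mass_kg):
    """Ideal-gas density; adequate far from the critical point."""
    return p_pa * molar_mass_kg / (R * t_k)

def surface_excess_uptake(delta_m_apparent, rho_sample_side, v_sample_side,
                          rho_counter_side, v_counter_side):
    """Correct the apparent mass change (kg) for buoyancy on both balance beams.

    Volumes are those of everything immersed in gas on each side (sample,
    holder, counterweight); densities may differ when the two sides sit at
    different temperatures (the 'temperature partition' discussed above).
    """
    buoyancy = rho_sample_side * v_sample_side - rho_counter_side * v_counter_side
    return delta_m_apparent + buoyancy

# Example: N2 at 10 bar, sample side at 293 K, counterweight side at 298 K.
rho_s = gas_density(10e5, 293.0, 0.028)   # kg m^-3
rho_c = gas_density(10e5, 298.0, 0.028)
m_excess = surface_excess_uptake(2.0e-6, rho_s, 1.5e-6, rho_c, 1.4e-6)
print(f"surface excess uptake: {m_excess * 1e6:.2f} mg")
```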
Bunch mode specific rate corrections for PILATUS3 detectors
Trueb, P.; Dejoie, C.; Kobas, M.; ...
2015-04-09
PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition at up to 15 million photons per second per pixel.
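The idea behind such a simulation can be sketched briefly: photons arrive only during bunches, and a non-paralyzable counter with a fixed dead-time misses photons that fall inside the dead window, so the ratio of observed to incident rate depends on the fill pattern. The numbers below are illustrative and are not the PILATUS3 parameters.

```python
import numpy as np

def observed_rate(true_rate, bunch_times, dead_time, duration, rng, period=1e-6):
    """Simulate a non-paralyzable counter illuminated by a bunched X-ray beam.

    true_rate   : mean incident photons per second per pixel
    bunch_times : bunch arrival times within one storage-ring period (s)
    dead_time   : counter dead-time (s)
    duration    : simulated exposure (s)
    period      : illustrative storage-ring revolution period (s)
    """
    n_periods = int(duration / period)
    mean_per_bunch = true_rate * period / len(bunch_times)
    counted, ready_at = 0, 0.0
    for i in range(n_periods):
        for tb in bunch_times:
            t = i * period + tb
            for _ in range(rng.poisson(mean_per_bunch)):
                if t >= ready_at:              # counter is live, register the photon
                    counted += 1
                    ready_at = t + dead_time
    return counted / duration

rng = np.random.default_rng(0)
incident = 5e6                                                 # photons/s/pixel
uniform_fill = np.linspace(0.0, 1e-6, 100, endpoint=False)     # 100 evenly spaced bunches
few_bunches = np.linspace(0.0, 1e-6, 4, endpoint=False)        # highly structured fill
for name, fill in [("uniform", uniform_fill), ("4-bunch", few_bunches)]:
    obs = observed_rate(incident, fill, dead_time=120e-9, duration=0.005, rng=rng)
    print(f"{name:8s} fill: observed {obs:.3e} cps for {incident:.1e} cps incident")
```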
Weidel, Elisabeth; Negri, Matthias; Empting, Martin; Hinsberger, Stefan; Hartmann, Rolf W
2014-01-01
In order to identify new scaffolds for drug discovery, surface plasmon resonance is frequently used to screen structurally diverse libraries. Usually, hit rates are low and identification processes are time consuming. Hence, approaches which improve hit rates and, thus, reduce the library size are required. In this work, we studied three often used strategies for their applicability to identify inhibitors of PqsD. In two of them, target-specific aspects like inhibition of a homologous protein or predicted binding determined by virtual screening were used for compound preselection. Finally, a fragment library, covering a large chemical space, was screened and served as comparison. Indeed, higher hit rates were observed for methods employing preselected libraries indicating that target-oriented compound selection provides a time-effective alternative.
Haralambieva, Iana H.; Ovsyannikova, Inna G.; Umlauf, Benjamin J.; Vierkant, Robert A.; Pankratz, V. Shane; Jacobson, Robert M.; Poland, Gregory A.
2014-01-01
Host antiviral genes are important regulators of antiviral immunity and plausible genetic determinants of immune response heterogeneity after vaccination. We genotyped and analyzed 307 common candidate tagSNPs from 12 antiviral genes in a cohort of 745 schoolchildren immunized with two doses of measles-mumps-rubella vaccine. Associations between SNPs/haplotypes and measles virus-specific immune outcomes were assessed using linear regression methodologies in Caucasians and African-Americans. Genetic variants within the DDX58/RIG-I gene, including a coding polymorphism (rs3205166/Val800Val), were associated as single-SNPs (p≤0.017; although these SNPs did not remain significant after correction for false discovery rate/FDR) and in haplotype-level analysis, with measles-specific antibody variations in Caucasians (haplotype allele p-value=0.021; haplotype global p-value=0.076). Four DDX58 polymorphisms, in high LD, also demonstrated associations (after correction for FDR) with variations in both measles-specific IFN-γ and IL-2 secretion in Caucasians (p≤0.001, q=0.193). Two intronic OAS1 polymorphisms, including the functional OAS1 SNP rs10774671 (p=0.003), demonstrated evidence of association with a significant allele-dose-related increase in neutralizing antibody levels in African-Americans. Genotype and haplotype-level associations demonstrated the role of ADAR genetic variants, including a non-synonymous SNP (rs2229857/Arg384Lys; p=0.01), in regulating measles virus-specific IFN-γ Elispot responses in Caucasians (haplotype global p-value=0.017). After FDR correction, 15 single-SNP associations (11 SNPs in Caucasians and 4 SNPs in African-Americans) still remained significant at the q-value<0.20. In conclusion, our findings strongly point to genetic variants/genes, involved in antiviral sensing and antiviral control, as critical determinants, differentially modulating the adaptive immune responses to live attenuated measles vaccine in Caucasians and African-Americans. PMID:21939710
GAMA/H-ATLAS: The Dust Opacity-Stellar Mass Surface Density Relation for Spiral Galaxies
NASA Astrophysics Data System (ADS)
Grootes, M. W.; Tuffs, R. J.; Popescu, C. C.; Pastrav, B.; Andrae, E.; Gunawardhana, M.; Kelvin, L. S.; Liske, J.; Seibert, M.; Taylor, E. N.; Graham, Alister W.; Baes, M.; Baldry, I. K.; Bourne, N.; Brough, S.; Cooray, A.; Dariush, A.; De Zotti, G.; Driver, S. P.; Dunne, L.; Gomez, H.; Hopkins, A. M.; Hopwood, R.; Jarvis, M.; Loveday, J.; Maddox, S.; Madore, B. F.; Michałowski, M. J.; Norberg, P.; Parkinson, H. R.; Prescott, M.; Robotham, A. S. G.; Smith, D. J. B.; Thomas, D.; Valiante, E.
2013-03-01
We report the discovery of a well-defined correlation between B-band face-on central optical depth due to dust, $\tau^{f}_{B}$, and the stellar mass surface density, $\mu_{*}$, of nearby (z <= 0.13) spiral galaxies: $\log(\tau^{f}_{B}) = 1.12(\pm 0.11)\cdot\log(\mu_{*}/{\rm M}_{\odot}\,{\rm kpc}^{-2}) - 8.6(\pm 0.8)$. This relation was derived from a sample of spiral galaxies taken from the Galaxy and Mass Assembly (GAMA) survey, which were detected in the FIR/submillimeter (submm) in the Herschel-ATLAS science demonstration phase field. Using a quantitative analysis of the NUV attenuation-inclination relation for complete samples of GAMA spirals categorized according to stellar mass surface density, we demonstrate that this correlation can be used to statistically correct for dust attenuation purely on the basis of optical photometry and Sérsic-profile morphological fits. Considered together with previously established empirical relationships of stellar mass to metallicity and gas mass, the near linearity and high constant of proportionality of the $\tau^{f}_{B}$-$\mu_{*}$ relation disfavors a stellar origin for the bulk of refractory grains in spiral galaxies, instead being consistent with the existence of a ubiquitous and very rapid mechanism for the growth of dust in the interstellar medium. We use the $\tau^{f}_{B}$-$\mu_{*}$ relation in conjunction with the radiation transfer model for spiral galaxies of Popescu & Tuffs to derive intrinsic scaling relations between specific star formation rate (SFR), stellar mass, and stellar surface density, in which attenuation of the UV light used for the measurement of SFR is corrected on an object-to-object basis. A marked reduction in scatter in these relations is achieved, which we demonstrate is due to correction of both the inclination-dependent and face-on components of attenuation. Our results are consistent with a general picture of spiral galaxies in which most of the submm emission originates from grains residing in translucent structures, exposed to UV in the diffuse interstellar radiation field.
The variable sky of deep synoptic surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ridgway, Stephen T.; Matheson, Thomas; Mighell, Kenneth J.
2014-11-20
The discovery of variable and transient sources is an essential product of synoptic surveys. The alert stream will require filtering for personalized criteria—a process managed by a functionality commonly described as a Broker. In order to understand quantitatively the magnitude of the alert generation and Broker tasks, we have undertaken an analysis of the most numerous types of variable targets in the sky—Galactic stars, quasi-stellar objects (QSOs), active galactic nuclei (AGNs), and asteroids. It is found that the Large Synoptic Survey Telescope (LSST) will be capable of discovering ∼10^5 high latitude (|b| > 20°) variable stars per night at the beginning of the survey. (The corresponding number for |b| < 20° is orders of magnitude larger, but subject to caveats concerning extinction and crowding.) However, the number of new discoveries may well drop below 100 per night within less than one year. The same analysis applied to GAIA clarifies the complementarity of the GAIA and LSST surveys. Discoveries of AGNs and QSOs are each predicted to begin at ∼3000 per night and decrease by 50 times over four years. Supernovae are expected at ∼1100 per night, and after several survey years will dominate the new variable discovery rate. LSST asteroid discoveries will start at >10^5 per night, and if orbital determination has a 50% success rate per epoch, they will drop below 1000 per night within two years.
Oncology drug discovery: planning a turnaround.
Toniatti, Carlo; Jones, Philip; Graham, Hilary; Pagliara, Bruno; Draetta, Giulio
2014-04-01
We have made remarkable progress in our understanding of the pathophysiology of cancer. This improved understanding has resulted in increasingly effective targeted therapies that are better tolerated than conventional cytotoxic agents and even curative in some patients. Unfortunately, the success rate of drug approval has been limited, and therapeutic improvements have been marginal, with too few exceptions. In this article, we review the current approach to oncology drug discovery and development, identify areas in need of improvement, and propose strategies to improve patient outcomes. We also suggest future directions that may improve the quality of preclinical and early clinical drug evaluation, which could lead to higher approval rates of anticancer drugs.
Discrete False-Discovery Rate Improves Identification of Differentially Abundant Microbes.
Jiang, Lingjing; Amir, Amnon; Morton, James T; Heller, Ruth; Arias-Castro, Ery; Knight, Rob
2017-01-01
Differential abundance testing is a critical task in microbiome studies that is complicated by the sparsity of data matrices. Here we adapt for microbiome studies a solution from the field of gene expression analysis to produce a new method, discrete false-discovery rate (DS-FDR), that greatly improves the power to detect differential taxa by exploiting the discreteness of the data. Additionally, DS-FDR is relatively robust to the number of noninformative features, and thus removes the problem of filtering taxonomy tables by an arbitrary abundance threshold. We show by using a combination of simulations and reanalysis of nine real-world microbiome data sets that this new method outperforms existing methods at the differential abundance testing task, producing a false-discovery rate that is up to threefold more accurate, and halves the number of samples required to find a given difference (thus increasing the efficiency of microbiome experiments considerably). We therefore expect DS-FDR to be widely applied in microbiome studies. IMPORTANCE: DS-FDR can achieve higher statistical power to detect significant findings in sparse and noisy microbiome data compared to the commonly used Benjamini-Hochberg procedure and other FDR-controlling procedures.
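The authors' DS-FDR implementation is not reproduced here, but the permutation-based FDR estimate that such methods build on can be sketched as follows, using synthetic counts and a simple mean-difference statistic.

```python
import numpy as np

def permutation_fdr(counts, groups, n_perm=1000, fdr_target=0.1, seed=0):
    """Estimate FDR for a mean-difference statistic by permuting sample labels.

    counts : (n_taxa, n_samples) abundance matrix
    groups : boolean array marking the samples in group 1
    Returns the smallest statistic threshold whose estimated FDR is <= fdr_target,
    together with the observed statistics.
    """
    rng = np.random.default_rng(seed)

    def stat(labels):
        return np.abs(counts[:, labels].mean(axis=1) - counts[:, ~labels].mean(axis=1))

    observed = stat(groups)
    null = np.empty((n_perm, counts.shape[0]))
    for b in range(n_perm):
        null[b] = stat(rng.permutation(groups))

    for t in np.quantile(observed, np.linspace(0.5, 0.99, 50)):
        n_rejected = (observed >= t).sum()
        if n_rejected == 0:
            continue
        expected_false = (null >= t).sum(axis=1).mean()   # mean null discoveries per permutation
        if expected_false / n_rejected <= fdr_target:
            return t, observed
    return None, observed

# Tiny synthetic example: 200 taxa, 20 samples, the first 10 taxa truly differ.
rng = np.random.default_rng(1)
counts = rng.poisson(2.0, size=(200, 20)).astype(float)
groups = np.array([True] * 10 + [False] * 10)
counts[:10, groups] += rng.poisson(4.0, size=(10, 10))
thr, obs = permutation_fdr(counts, groups)
n_sig = int((obs >= thr).sum()) if thr is not None else 0
print("chosen threshold:", thr, "| taxa called significant:", n_sig)
```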
MicroRNA array normalization: an evaluation using a randomized dataset as the benchmark.
Qin, Li-Xuan; Zhou, Qin
2014-01-01
MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays.
MicroRNA Array Normalization: An Evaluation Using a Randomized Dataset as the Benchmark
Qin, Li-Xuan; Zhou, Qin
2014-01-01
MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays. PMID:24905456
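The abstracts above do not name the individual normalization methods evaluated; as one common example of the kind of pipeline being assessed, a per-batch mean adjustment followed by quantile normalization can be sketched as follows (synthetic data).

```python
import numpy as np

def batch_center(expr, batches):
    """Shift each batch so that all batches share the same overall mean intensity."""
    out = expr.copy()
    grand_mean = expr.mean()
    for b in np.unique(batches):
        idx = batches == b
        out[:, idx] += grand_mean - out[:, idx].mean()
    return out

def quantile_normalize(expr):
    """Force every array (column) to share the same empirical distribution."""
    ranks = expr.argsort(axis=0).argsort(axis=0)          # rank of each probe within its array
    mean_quantiles = np.sort(expr, axis=0).mean(axis=1)   # average value at each rank
    return mean_quantiles[ranks]

rng = np.random.default_rng(0)
expr = rng.normal(8.0, 1.0, size=(300, 24))               # 300 microRNAs x 24 arrays
batches = np.repeat([0, 1, 2], 8)
expr += batches * 0.5                                     # simulated array/batch effect
normalized = quantile_normalize(batch_center(expr, batches))
print(normalized.shape, round(float(normalized[:, 0].mean()), 2))
```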
iPTF14yb: The First Discovery of a Gamma-Ray Burst Afterglow Independent of a High-Energy Trigger
NASA Technical Reports Server (NTRS)
Cenko, S. Bradley; Urban, Alex L.; Perley, Daniel A.; Horesh, Assaf; Corsi, Alessandra; Fox, Derek B.; Cao, Yi; Kasliwal, Mansi M.; Lien, Amy; Arcavi, Iair;
2015-01-01
We report here the discovery by the Intermediate Palomar Transient Factory (iPTF) of iPTF14yb, a luminous (M_r ≈ -27.8 mag), cosmological (redshift 1.9733), rapidly fading optical transient. We demonstrate, based on probabilistic arguments and a comparison with the broader population, that iPTF14yb is the optical afterglow of the long-duration gamma-ray burst GRB 140226A. This marks the first unambiguous discovery of a GRB afterglow prior to (and thus entirely independent of) an associated high-energy trigger. We estimate the rate of iPTF14yb-like sources (i.e., cosmologically distant relativistic explosions) based on iPTF observations, inferring an all-sky value of R_rel = 610/yr (68% confidence interval of 110-2000/yr). Our derived rate is consistent (within the large uncertainty) with the all-sky rate of on-axis GRBs derived by the Swift satellite. Finally, we briefly discuss the implications of the nondetection to date of bona fide "orphan" afterglows (i.e., those lacking detectable high-energy emission) on GRB beaming and the degree of baryon loading in these relativistic jets.
iPTF14yb: The First Discovery of a Gamma-Ray Burst Afterglow Independent of a High-energy Trigger
NASA Astrophysics Data System (ADS)
Cenko, S. Bradley; Urban, Alex L.; Perley, Daniel A.; Horesh, Assaf; Corsi, Alessandra; Fox, Derek B.; Cao, Yi; Kasliwal, Mansi M.; Lien, Amy; Arcavi, Iair; Bloom, Joshua S.; Butler, Nat R.; Cucchiara, Antonino; de Diego, José A.; Filippenko, Alexei V.; Gal-Yam, Avishay; Gehrels, Neil; Georgiev, Leonid; Jesús González, J.; Graham, John F.; Greiner, Jochen; Kann, D. Alexander; Klein, Christopher R.; Knust, Fabian; Kulkarni, S. R.; Kutyrev, Alexander; Laher, Russ; Lee, William H.; Nugent, Peter E.; Prochaska, J. Xavier; Ramirez-Ruiz, Enrico; Richer, Michael G.; Rubin, Adam; Urata, Yuji; Varela, Karla; Watson, Alan M.; Wozniak, Przemek R.
2015-04-01
We report here the discovery by the Intermediate Palomar Transient Factory (iPTF) of iPTF14yb, a luminous (M_r ≈ -27.8 mag), cosmological (redshift 1.9733), rapidly fading optical transient. We demonstrate, based on probabilistic arguments and a comparison with the broader population, that iPTF14yb is the optical afterglow of the long-duration gamma-ray burst GRB 140226A. This marks the first unambiguous discovery of a GRB afterglow prior to (and thus entirely independent of) an associated high-energy trigger. We estimate the rate of iPTF14yb-like sources (i.e., cosmologically distant relativistic explosions) based on iPTF observations, inferring an all-sky value of R_rel = 610 yr^-1 (68% confidence interval of 110-2000 yr^-1). Our derived rate is consistent (within the large uncertainty) with the all-sky rate of on-axis GRBs derived by the Swift satellite. Finally, we briefly discuss the implications of the nondetection to date of bona fide “orphan” afterglows (i.e., those lacking detectable high-energy emission) on GRB beaming and the degree of baryon loading in these relativistic jets.
Modern approaches to accelerate discovery of new antischistosomal drugs.
Neves, Bruno Junior; Muratov, Eugene; Machado, Renato Beilner; Andrade, Carolina Horta; Cravo, Pedro Vitor Lemos
2016-06-01
The almost exclusive use of only praziquantel for the treatment of schistosomiasis has raised concerns about the possible emergence of drug-resistant schistosomes. Consequently, there is an urgent need for new antischistosomal drugs. The identification of leads and the generation of high quality data are crucial steps in the early stages of schistosome drug discovery projects. Herein, the authors focus on the current developments in antischistosomal lead discovery, specifically referring to the use of automated in vitro target-based and whole-organism screens and virtual screening of chemical databases. They highlight the strengths and pitfalls of each of the above-mentioned approaches, and suggest possible roadmaps towards the integration of several strategies, which may contribute to optimizing research outputs and lead to more successful and cost-effective drug discovery endeavors. Increasing partnerships and access to funding for drug discovery have strengthened the battle against schistosomiasis in recent years. However, the authors believe this battle also includes innovative strategies to overcome scientific challenges. In this context, significant advances of in vitro screening as well as computer-aided drug discovery have contributed to increase the success rate and reduce the costs of drug discovery campaigns. Although some of these approaches were already used in current antischistosomal lead discovery pipelines, the integration of these strategies in a solid workflow should allow the production of new treatments for schistosomiasis in the near future.
Werker, Paul M N; Pess, Gary M; van Rijssen, Annet L; Denkler, Keith
2012-10-01
To call attention to the wide variety of definitions for recurrence that have been employed in studies of different invasive procedures for the treatment of Dupuytren contracture and how this important limitation has contributed to the wide range of reported results. This study reviewed definitions and rates of contracture correction and recurrence in patients undergoing invasive treatment of Dupuytren contracture. A literature search was carried out in January 2011 using the terms "Dupuytren" AND ("fasciectomy" OR "fasciotomy" OR "dermofasciectomy" OR "aponeurotomy" OR "aponeurectomy") and limited to studies in English. The search returned 218 studies, of which 21 had definitions, quantitative results for contracture correction and recurrence, and a sample size of at least 20 patients. Definitions for correction of contracture and recurrence varied greatly among articles and were almost always qualitative. Percentages of patients who achieved correction of contracture (ie, responder rate) when evaluated at various times after completion of surgery ranged from 15% to 96% for fasciectomy/aponeurectomy. Responder rates were not reported for fasciotomy/aponeurotomy. Recurrence rates ranged from 12% to 73% for patients treated with fasciectomy/aponeurectomy and from 33% to 100% for fasciotomy/aponeurotomy. Review of these reports underscored the difficulty involved in comparing correction of contracture and recurrence rates for different surgical interventions because of differences in definition and duration of follow-up. Clearly defined objective definitions for correction of contracture and for recurrence are needed for more meaningful comparisons of results achieved with different surgical interventions. Recurrence after surgical intervention for Dupuytren contracture is common. This study, which evaluated reported rates of recurrence following surgical treatment of Dupuytren contracture, provides clinicians with practical information regarding expected long-term outcomes of surgical treatment choices. Economic and decision analysis III. Copyright © 2012 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Covington, Brett C; McLean, John A; Bachmann, Brian O
2017-01-04
Covering: 2000 to 2016. The labor-intensive process of microbial natural product discovery is contingent upon identifying discrete secondary metabolites of interest within complex biological extracts, which contain inventories of all extractable small molecules produced by an organism or consortium. Historically, compound isolation prioritization has been driven by observed biological activity and/or relative metabolite abundance and followed by dereplication via accurate mass analysis. Decades of discovery using variants of these methods have generated the natural pharmacopeia but also contribute to recent high rediscovery rates. However, genomic sequencing reveals substantial untapped potential in previously mined organisms, and can provide useful prescience of potentially new secondary metabolites that ultimately enables isolation. Recently, advances in comparative metabolomics analyses have been coupled to secondary metabolic predictions to accelerate bioactivity and abundance-independent discovery work flows. In this review we will discuss the various analytical and computational techniques that enable MS-based metabolomic applications to natural product discovery and discuss the future prospects for comparative metabolomics in natural product discovery.
Abou-Gharbia, Magid; Childers, Wayne E
2014-07-10
The pharmaceutical industry is facing enormous challenges, including reduced efficiency, stagnant success rate, patent expirations for key drugs, fierce price competition from generics, high regulatory hurdles, and the industry's perceived tarnished image. Pharma has responded by embarking on a range of initiatives. Other sectors, including NIH, have also responded. Academic drug discovery groups have appeared to support the transition of innovative academic discoveries and ideas into attractive drug discovery opportunities. Part 1 of this two-part series discussed the criticisms that have been leveled at the pharmaceutical industry over the past 3 decades and summarized the supporting data for and against these criticisms. This second installment will focus on the current challenges facing the pharmaceutical industry and Pharma's responses, focusing on the industry's changing perspective and new business models for coping with the loss of talent and declining clinical pipelines as well as presenting some examples of recent drug discovery successes.
The Relationship among Correct and Error Oral Reading Rates and Comprehension.
ERIC Educational Resources Information Center
Roberts, Michael; Smith, Deborah Deutsch
1980-01-01
Eight learning disabled boys (10 to 12 years old) who were seriously deficient in both their oral reading and comprehension performances participated in the study which investigated, through an applied behavior analysis model, the interrelationships of three reading variables--correct oral reading rates, error oral reading rates, and percentage of…
Analysis of ionospheric refraction error corrections for GRARR systems
NASA Technical Reports Server (NTRS)
Mallinckrodt, A. J.; Parker, H. C.; Berbert, J. H.
1971-01-01
A determination is presented of the ionospheric refraction correction requirements for the Goddard range and range rate (GRARR) S-band, modified S-band, very high frequency (VHF), and modified VHF systems. The relationships within these four systems are analyzed to show that the refraction corrections are the same for all four systems and to clarify the group and phase nature of these corrections. The analysis is simplified by recognizing that the range rate is equivalent to a carrier phase range change measurement. The equations for the range errors are given.
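The first-order ionospheric group delay scales with the total electron content and inversely with the square of the carrier frequency, with the phase (range-rate) correction equal in magnitude and opposite in sign; a minimal sketch with an assumed TEC value follows.

```python
def ionospheric_range_error(tec_el_m2, freq_hz):
    """First-order ionospheric group delay expressed as a range error in metres.

    tec_el_m2 : slant total electron content (electrons per m^2)
    freq_hz   : carrier frequency (Hz)
    The carrier-phase correction has the same magnitude and opposite sign,
    reflecting the group/phase distinction noted above.
    """
    return 40.3 * tec_el_m2 / freq_hz**2

tec = 50e16  # assumed slant TEC: 50 TEC units
for name, f in [("VHF (150 MHz)", 150e6), ("S-band (2.25 GHz)", 2.25e9)]:
    print(f"{name}: group range error of about {ionospheric_range_error(tec, f):.2f} m")
```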
Koh, Hyunwook; Blaser, Martin J; Li, Huilin
2017-04-24
The role of the microbiota in human health and disease has been increasingly studied, gathering momentum through the use of high-throughput technologies. Further identification of the roles of specific microbes is necessary to better understand the mechanisms involved in diseases related to microbiome perturbations. Here, we introduce a new microbiome-based group association testing method, optimal microbiome-based association test (OMiAT). OMiAT is a data-driven testing method which takes an optimal test throughout different tests from the sum of powered score tests (SPU) and microbiome regression-based kernel association test (MiRKAT). We illustrate that OMiAT efficiently discovers significant association signals arising from varying microbial abundances and different relative contributions from microbial abundance and phylogenetic information. We also propose a way to apply it to fine-mapping of diverse upper-level taxa at different taxonomic ranks (e.g., phylum, class, order, family, and genus), as well as the entire microbial community, within a newly introduced microbial taxa discovery framework, microbiome comprehensive association mapping (MiCAM). Our extensive simulations demonstrate that OMiAT is highly robust and powerful compared with other existing methods, while correctly controlling type I error rates. Our real data analyses also confirm that MiCAM is especially efficient for the assessment of upper-level taxa by integrating OMiAT as a group analytic method. OMiAT is attractive in practice due to the high complexity of microbiome data and the unknown true nature of the state. MiCAM also provides a hierarchical association map for numerous microbial taxa and can also be used as a guideline for further investigation on the roles of discovered taxa in human health and disease.
Shum, David; Bhinder, Bhavneet; Djaballah, Hakim
2013-01-01
MicroRNAs (miRNAs) are small endogenous and conserved non-coding RNA molecules that regulate gene expression. Although the first miRNA was discovered well over sixteen years ago, little is known about their biogenesis and it is only recently that we have begun to understand their scope and diversity. For this purpose, we performed an RNAi screen aimed at identifying genes involved in their biogenesis pathway with a potential use as biomarkers. Using a previously developed miRNA 21 (miR-21) EGFP-based biosensor cell-based assay monitoring green fluorescence enhancements, we performed an arrayed short hairpin RNA (shRNA) screen against a lentiviral particle ready TRC1 library covering 16,039 genes in 384-well plate format, interrogating the genome one gene at a time and building a panoramic view of endogenous miRNA activity. Using the BDA method for RNAi data analysis, we nominate 497 gene candidates whose knockdown increased the EGFP fluorescence, yielding an initial hit rate of 3.09%; of these, only 22, with reported validated clones, are deemed high-confidence gene candidates. An unexpected and surprising result was that only DROSHA was identified as a hit out of the seven core essential miRNA biogenesis genes, suggesting that perhaps intracellular shRNA processing into the correct duplex may be cell dependent and with differential outcome. Biological classification revealed several major control junctions, among them genes involved in transport and vesicular trafficking. In summary, we report on 22 high confidence gene candidate regulators of miRNA biogenesis with potential use in drug and biomarker discovery. PMID:23977983
Lence, Emilio; van der Kamp, Marc W; González-Bello, Concepción; Mulholland, Adrian J
2018-05-16
Type II dehydroquinase enzymes (DHQ2), recognized targets for antibiotic drug discovery, show significantly different activities dependent on the species: DHQ2 from Mycobacterium tuberculosis (MtDHQ2) and Helicobacter pylori (HpDHQ2) show a 50-fold difference in catalytic efficiency. Revealing the determinants of this activity difference is important for our understanding of biological catalysis and further offers the potential to contribute to tailoring specificity in drug design. Molecular dynamics simulations using a quantum mechanics/molecular mechanics potential, with correlated ab initio single point corrections, identify and quantify the subtle determinants of the experimentally observed difference in efficiency. The rate-determining step involves the formation of an enolate intermediate: more efficient stabilization of the enolate and transition state of the key step in MtDHQ2, mainly by the essential residues Tyr24 and Arg19, makes it more efficient than HpDHQ2. Further, a water molecule, which is absent in MtDHQ2 but involved in generation of the catalytic Tyr22 tyrosinate in HpDHQ2, was found to destabilize both the transition state and the enolate intermediate. The quantification of the contribution of key residues and water molecules in the rate-determining step of the mechanism also leads to improved understanding of higher potencies and specificity of known inhibitors, which should aid ongoing inhibitor design.
Net present value approaches for drug discovery.
Svennebring, Andreas M; Wikberg, Jarl Es
2013-12-01
Three dedicated approaches to the calculation of the risk-adjusted net present value (rNPV) in drug discovery projects under different assumptions are suggested. The probability of finding a candidate drug suitable for clinical development and the time to the initiation of clinical development are assumed to be flexible, in contrast to previously used models. The rNPV of the post-discovery cash flows is calculated as the probability-weighted average of the rNPV at each potential time of initiation of clinical development. Practical considerations on how to set probability rates, in particular at the initiation and termination of a project, are discussed.
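The probability-weighted average over possible start times of clinical development can be sketched as follows; the cash flows, probabilities and discount rate are placeholders rather than values from the paper.

```python
def rnpv(cash_flows, success_prob, discount_rate):
    """Risk-adjusted NPV of a stream of (year, cash flow) pairs.

    For simplicity a single overall probability of technical success is applied
    to every post-discovery cash flow.
    """
    return sum(success_prob * cf / (1 + discount_rate) ** t for t, cf in cash_flows)

def discovery_rnpv(start_time_probs, post_discovery_cash_flows, success_prob, discount_rate):
    """Probability-weighted rNPV over possible clinical-development start times.

    start_time_probs          : {start year: probability of initiating development then}
    post_discovery_cash_flows : (year offset from start, cash flow) pairs
    """
    total = 0.0
    for start, p_start in start_time_probs.items():
        shifted = [(start + t, cf) for t, cf in post_discovery_cash_flows]
        total += p_start * rnpv(shifted, success_prob, discount_rate)
    return total

# Placeholder numbers: development may start in year 2, 3 or 4 (10% chance of never starting).
start_probs = {2: 0.3, 3: 0.4, 4: 0.2}
clinical_cash_flows = [(0, -20e6), (3, -50e6), (8, 400e6)]
value = discovery_rnpv(start_probs, clinical_cash_flows, success_prob=0.12, discount_rate=0.11)
print(f"risk-adjusted NPV: ${value / 1e6:.1f}M")
```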
2010-04-05
201004050001hq (5 April 2010) --- NASA Administrator Charles Bolden looks out the window of Firing Room Four in the Launch Control Center during the launch of the space shuttle Discovery and the start of the STS-131 mission at NASA Kennedy Space Center in Cape Canaveral, Fla. on April 5, 2010. Discovery is carrying a multi-purpose logistics module filled with science racks for the laboratories aboard the International Space Station. The mission has three planned spacewalks, with work to include replacing an ammonia tank assembly, retrieving a Japanese experiment from the station's exterior, and switching out a rate gyro assembly on the station's truss structure. Photo Credit: NASA/Bill Ingalls
An investigation of the false discovery rate and the misinterpretation of p-values
Colquhoun, David
2014-01-01
If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time. If, as is often the case, experiments are underpowered, you will be wrong most of the time. This conclusion is demonstrated from several points of view. First, by tree diagrams, which show the close analogy with the screening test problem. Similar conclusions are drawn by repeated simulations of t-tests. These mimic what is done in real life, which makes the results more persuasive. The simulation method is also used to evaluate the extent to which effect sizes are over-estimated, especially in underpowered experiments. A script is supplied to allow the reader to do simulations themselves, with numbers appropriate for their own work. It is concluded that if you wish to keep your false discovery rate below 5%, you need to use a three-sigma rule, or to insist on p≤0.001. And never use the word ‘significant’. PMID:26064558
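A stripped-down version of the kind of simulation described (not the author's supplied script; sample size, effect size and prior are illustrative) is given below; with underpowered experiments and a 10% prior of real effects it returns a false discovery fraction well above 30%.

```python
import numpy as np
from scipy import stats

def false_discovery_fraction(n_per_group=8, effect=1.0, prior_real=0.1,
                             alpha=0.05, n_sim=20000, seed=0):
    """Fraction of p<alpha results coming from experiments with no real effect."""
    rng = np.random.default_rng(seed)
    false_pos, true_pos = 0, 0
    for _ in range(n_sim):
        real = rng.random() < prior_real            # is there actually an effect?
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect if real else 0.0, 1.0, n_per_group)
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha:
            true_pos += real
            false_pos += not real
    return false_pos / (false_pos + true_pos)

print(f"false discovery fraction at p<0.05: {false_discovery_fraction():.2f}")
```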
Bates, Anthony; Miles, Kenneth
2017-12-01
To validate MR textural analysis (MRTA) for detection of transition zone (TZ) prostate cancer through comparison with co-registered prostate-specific membrane antigen (PSMA) PET-MR. Retrospective analysis was performed for 30 men who underwent simultaneous PSMA PET-MR imaging for staging of prostate cancer. Thirty texture features were derived from each manually contoured T2-weighted, transaxial, prostatic TZ using texture analysis software that applies a spatial band-pass filter and quantifies texture through histogram analysis. Texture features of the TZ were compared to PSMA expression on the corresponding PET images. The Benjamini-Hochberg correction controlled the false discovery rate at <5%. Eighty-eight T2-weighted images in 18 patients demonstrated abnormal PSMA expression within the TZ on PET-MR; 123 images were PSMA negative. Based on the corrected p-value of 0.005, significant differences between PSMA positive and negative slices were found for 16 texture parameters: standard deviation and mean of positive pixels for all spatial filters (p < 0.0001 for both at all spatial scaling factor (SSF) values) and mean intensity following filtration for SSF 3-6 mm (p = 0.0002-0.0018). Abnormal expression of PSMA within the TZ is associated with altered texture on T2-weighted MR, providing validation of MRTA for the detection of TZ prostate cancer. • Prostate transition zone (TZ) MR texture analysis may assist in prostate cancer detection. • Abnormal transition zone PSMA expression correlates with altered texture on T2-weighted MR. • TZ with abnormal PSMA expression demonstrates significantly reduced MI, SD and MPP.
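For reference, the Benjamini-Hochberg step-up procedure used to keep the false discovery rate below 5% can be written in a few lines; this is a generic implementation, not the study's analysis code.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level q (step-up rule)."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m       # i/m * q for the i-th smallest p-value
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()             # largest rank satisfying the criterion
        reject[order[:k + 1]] = True               # reject everything up to that rank
    return reject

p_values = [0.0001, 0.003, 0.004, 0.02, 0.03, 0.2, 0.6, 0.9]
print(benjamini_hochberg(p_values, q=0.05))
```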
Net Improvement of Correct Answers to Therapy Questions After PubMed Searches: Pre/Post Comparison
Keepanasseril, Arun
2013-01-01
Background Clinicians search PubMed for answers to clinical questions although it is time consuming and not always successful. Objective To determine if PubMed used with its Clinical Queries feature to filter results based on study quality would improve search success (more correct answers to clinical questions related to therapy). Methods We invited 528 primary care physicians to participate, 143 (27.1%) consented, and 111 (21.0% of the total and 77.6% of those who consented) completed the study. Participants answered 14 yes/no therapy questions and were given 4 of these (2 originally answered correctly and 2 originally answered incorrectly) to search using either the PubMed main screen or PubMed Clinical Queries narrow therapy filter via a purpose-built system with identical search screens. Participants also picked 3 of the first 20 retrieved citations that best addressed each question. They were then asked to re-answer the original 14 questions. Results We found no statistically significant differences in the rates of correct or incorrect answers using the PubMed main screen or PubMed Clinical Queries. The rate of correct answers increased from 50.0% to 61.4% (95% CI 55.0%-67.8%) for the PubMed main screen searches and from 50.0% to 59.1% (95% CI 52.6%-65.6%) for Clinical Queries searches. These net absolute increases of 11.4% and 9.1%, respectively, included previously correct answers changing to incorrect at a rate of 9.5% (95% CI 5.6%-13.4%) for PubMed main screen searches and 9.1% (95% CI 5.3%-12.9%) for Clinical Queries searches, combined with increases in the rate of being correct of 20.5% (95% CI 15.2%-25.8%) for PubMed main screen searches and 17.7% (95% CI 12.7%-22.7%) for Clinical Queries searches. Conclusions PubMed can assist clinicians answering clinical questions with an approximately 10% absolute rate of improvement in correct answers. This small increase includes more correct answers partially offset by a decrease in previously correct answers. PMID:24217329
Net improvement of correct answers to therapy questions after pubmed searches: pre/post comparison.
McKibbon, Kathleen Ann; Lokker, Cynthia; Keepanasseril, Arun; Wilczynski, Nancy L; Haynes, R Brian
2013-11-08
Clinicians search PubMed for answers to clinical questions although it is time consuming and not always successful. To determine if PubMed used with its Clinical Queries feature to filter results based on study quality would improve search success (more correct answers to clinical questions related to therapy). We invited 528 primary care physicians to participate, 143 (27.1%) consented, and 111 (21.0% of the total and 77.6% of those who consented) completed the study. Participants answered 14 yes/no therapy questions and were given 4 of these (2 originally answered correctly and 2 originally answered incorrectly) to search using either the PubMed main screen or PubMed Clinical Queries narrow therapy filter via a purpose-built system with identical search screens. Participants also picked 3 of the first 20 retrieved citations that best addressed each question. They were then asked to re-answer the original 14 questions. We found no statistically significant differences in the rates of correct or incorrect answers using the PubMed main screen or PubMed Clinical Queries. The rate of correct answers increased from 50.0% to 61.4% (95% CI 55.0%-67.8%) for the PubMed main screen searches and from 50.0% to 59.1% (95% CI 52.6%-65.6%) for Clinical Queries searches. These net absolute increases of 11.4% and 9.1%, respectively, included previously correct answers changing to incorrect at a rate of 9.5% (95% CI 5.6%-13.4%) for PubMed main screen searches and 9.1% (95% CI 5.3%-12.9%) for Clinical Queries searches, combined with increases in the rate of being correct of 20.5% (95% CI 15.2%-25.8%) for PubMed main screen searches and 17.7% (95% CI 12.7%-22.7%) for Clinical Queries searches. PubMed can assist clinicians answering clinical questions with an approximately 10% absolute rate of improvement in correct answers. This small increase includes more correct answers partially offset by a decrease in previously correct answers.
High speed and adaptable error correction for megabit/s rate quantum key distribution.
Dixon, A R; Sato, H
2014-12-02
Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.
High speed and adaptable error correction for megabit/s rate quantum key distribution
Dixon, A. R.; Sato, H.
2014-01-01
Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416
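How error-correction efficiency feeds into the final key rate can be illustrated with the standard asymptotic BB84 key fraction, in which the reconciliation step leaks f·h(e) bits per sifted bit (f = 1 being the Shannon limit); the sketch below uses that textbook expression, not the authors' LDPC implementation, and the rates are illustrative.

```python
from math import log2

def binary_entropy(e):
    """Shannon entropy of a binary variable with error probability e."""
    if e in (0.0, 1.0):
        return 0.0
    return -e * log2(e) - (1 - e) * log2(1 - e)

def bb84_key_fraction(qber, reconciliation_efficiency):
    """Asymptotic secure key fraction per sifted bit for BB84.

    reconciliation_efficiency f >= 1 scales the Shannon limit h(qber) of bits
    disclosed during error correction; the second h(qber) term is the privacy
    amplification cost.
    """
    h = binary_entropy(qber)
    return max(0.0, 1.0 - reconciliation_efficiency * h - h)

sifted_rate = 5e6  # illustrative sifted-bit rate, bits/s
for f in (1.0, 1.06, 1.15):
    r = bb84_key_fraction(qber=0.03, reconciliation_efficiency=f)
    print(f"f = {f:.2f}: secure key rate of about {sifted_rate * r / 1e6:.2f} Mbit/s")
```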
Molecular model of the mitochondrial genome segregation machinery in Trypanosoma brucei
Hoffmann, Anneliese; Käser, Sandro; Jakob, Martin; Amodeo, Simona; Peitsch, Camille; Týč, Jiří; Vaughan, Sue; Schneider, André
2018-01-01
In almost all eukaryotes, mitochondria maintain their own genome. Despite this discovery more than 50 y ago, very little is still known about how the genome is correctly segregated during cell division. The protozoan parasite Trypanosoma brucei contains a single mitochondrion with a singular genome, the kinetoplast DNA (kDNA). Electron microscopy studies revealed the tripartite attachment complex (TAC) to physically connect the kDNA to the basal body of the flagellum and to ensure correct segregation of the mitochondrial genome via the basal bodies' movement during the cell cycle. Using superresolution microscopy, we precisely localize each of the currently known TAC components. We demonstrate that the TAC is assembled in a hierarchical order from the base of the flagellum toward the mitochondrial genome and that the assembly is not dependent on the kDNA itself. Based on the biochemical analysis, the TAC consists of several nonoverlapping subcomplexes, suggesting an overall size of the TAC exceeding 2.8 MDa. We furthermore demonstrate that the TAC is required for correct mitochondrial organelle positioning but not for organelle biogenesis or segregation. PMID:29434039
Ironic Effects of Drawing Attention to Story Errors
Eslick, Andrea N.; Fazio, Lisa K.; Marsh, Elizabeth J.
2014-01-01
Readers learn errors embedded in fictional stories and use them to answer later general knowledge questions (Marsh, Meade, & Roediger, 2003). Suggestibility is robust and occurs even when story errors contradict well-known facts. The current study evaluated whether suggestibility is linked to participants’ inability to judge story content as correct versus incorrect. Specifically, participants read stories containing correct and misleading information about the world; some information was familiar (making error discovery possible), while some was more obscure. To improve participants’ monitoring ability, we highlighted (in red font) a subset of story phrases requiring evaluation; readers no longer needed to find factual information. Rather, they simply needed to evaluate its correctness. Readers were more likely to answer questions with story errors if they were highlighted in red font, even if they contradicted well-known facts. Though highlighting to-be-evaluated information freed cognitive resources for monitoring, an ironic effect occurred: Drawing attention to specific errors increased rather than decreased later suggestibility. Failure to monitor for errors, not failure to identify the information requiring evaluation, leads to suggestibility. PMID:21294039
Ligand solvation in molecular docking.
Shoichet, B K; Leach, A R; Kuntz, I D
1999-01-01
Solvation plays an important role in ligand-protein association and has a strong impact on comparisons of binding energies for dissimilar molecules. When databases of such molecules are screened for complementarity to receptors of known structure, as often occurs in structure-based inhibitor discovery, failure to consider ligand solvation often leads to putative ligands that are too highly charged or too large. To correct for the different charge states and sizes of the ligands, we calculated electrostatic and non-polar solvation free energies for molecules in a widely used molecular database, the Available Chemicals Directory (ACD). A modified Born equation treatment was used to calculate the electrostatic component of ligand solvation. The non-polar component of ligand solvation was calculated based on the surface area of the ligand and parameters derived from the hydration energies of apolar ligands. These solvation energies were subtracted from the ligand-receptor interaction energies. We tested the usefulness of these corrections by screening the ACD for molecules that complemented three proteins of known structure, using a molecular docking program. Correcting for ligand solvation improved the rankings of known ligands and discriminated against molecules with inappropriate charge states and sizes.
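As a rough illustration of the correction described above (not the authors' code or parameter set; gamma_np and b_np are placeholder surface-tension coefficients), the ligand solvation penalty can be subtracted from the raw interaction energy like this:

# Sketch of a solvation-corrected docking score, assuming a Born-style
# electrostatic desolvation term and a surface-area-based non-polar term.
def nonpolar_solvation(sasa_A2, gamma_np=0.00542, b_np=0.92):
    """Non-polar solvation free energy (kcal/mol) from solvent-accessible surface area."""
    return gamma_np * sasa_A2 + b_np

def corrected_score(interaction_energy, elec_solvation, sasa_A2):
    """Subtract the ligand solvation free energy from the raw interaction energy."""
    return interaction_energy - (elec_solvation + nonpolar_solvation(sasa_A2))

# Example: a highly charged ligand pays a large electrostatic desolvation
# penalty, so its corrected rank drops relative to a more neutral ligand.
print(corrected_score(interaction_energy=-45.0, elec_solvation=-60.0, sasa_A2=450.0))
print(corrected_score(interaction_energy=-35.0, elec_solvation=-8.0, sasa_A2=400.0))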
Dienel, Gerald A; Cruz, Nancy F; Sokoloff, Louis; Driscoll, Bernard F
2017-01-01
2-Deoxy-D-[14C]glucose ([14C]DG) is commonly used to determine local glucose utilization rates (CMRglc) in living brain and to estimate CMRglc in cultured brain cells as rates of [14C]DG phosphorylation. Phosphorylation rates of [14C]DG and its metabolizable fluorescent analog, 2-(N-(7-nitrobenz-2-oxa-1,3-diazol-4-yl)amino)-2-deoxyglucose (2-NBDG), however, do not take into account differences in the kinetics of transport and metabolism of [14C]DG or 2-NBDG and glucose in neuronal and astrocytic cells in cultures or in single cells in brain tissue, and conclusions drawn from these data may, therefore, not be correct. As a first step toward the goal of quantitative determination of CMRglc in astrocytes and neurons in cultures, the steady-state intracellular-to-extracellular concentration ratios (distribution spaces) for glucose and [14C]DG were determined in cultured striatal neurons and astrocytes as functions of extracellular glucose concentration. Unexpectedly, the glucose distribution spaces rose during extreme hypoglycemia, exceeding 1.0 in astrocytes, whereas the [14C]DG distribution space fell at the lowest glucose levels. Calculated CMRglc was greatly overestimated in hypoglycemic and normoglycemic cells because the intracellular glucose concentrations were too high. Determination of the distribution space for [14C]glucose revealed compartmentation of intracellular glucose in astrocytes, and probably, also in neurons. A smaller metabolic pool is readily accessible to hexokinase and communicates with extracellular glucose, whereas the larger pool is sequestered from hexokinase activity. A new experimental approach using double-labeled assays with DG and glucose is suggested to avoid the limitations imposed by glucose compartmentation on metabolic assays.
Response to comments on "Can we name Earth's species before they go extinct?".
Costello, Mark J; May, Robert M; Stork, Nigel E
2013-07-19
Mora et al. disputed that most species will be discovered before they go extinct, but not our main recommendations to accelerate species' discoveries. We show that our conclusions would be unaltered by discoveries of more microscopic species and reinforce our estimates of species description and extinction rates, that taxonomic effort has never been greater, and that there are 2 to 8 million species on Earth.
Generalized quantum kinetic expansion: Higher-order corrections to multichromophoric Förster theory
NASA Astrophysics Data System (ADS)
Wu, Jianlan; Gong, Zhihao; Tang, Zhoufei
2015-08-01
For a general two-cluster energy transfer network, a new methodology of the generalized quantum kinetic expansion (GQKE) method is developed, which predicts an exact time-convolution equation for the cluster population evolution under the initial condition of the local cluster equilibrium state. The cluster-to-cluster rate kernel is expanded over the inter-cluster couplings. The lowest second-order GQKE rate recovers the multichromophoric Förster theory (MCFT) rate. The higher-order corrections to the MCFT rate are systematically included using the continued fraction resummation form, resulting in the resummed GQKE method. The reliability of the GQKE methodology is verified in two model systems, revealing the relevance of higher-order corrections.
Controlling the Rate of GWAS False Discoveries
Brzyski, Damian; Peterson, Christine B.; Sobczyk, Piotr; Candès, Emmanuel J.; Bogdan, Malgorzata; Sabatti, Chiara
2017-01-01
With the rise of both the number and the complexity of traits of interest, control of the false discovery rate (FDR) in genetic association studies has become an increasingly appealing and accepted target for multiple comparison adjustment. While a number of robust FDR-controlling strategies exist, the nature of this error rate is intimately tied to the precise way in which discoveries are counted, and the performance of FDR-controlling procedures is satisfactory only if there is a one-to-one correspondence between what scientists describe as unique discoveries and the number of rejected hypotheses. The presence of linkage disequilibrium between markers in genome-wide association studies (GWAS) often leads researchers to consider the signal associated to multiple neighboring SNPs as indicating the existence of a single genomic locus with possible influence on the phenotype. This a posteriori aggregation of rejected hypotheses results in inflation of the relevant FDR. We propose a novel approach to FDR control that is based on prescreening to identify the level of resolution of distinct hypotheses. We show how FDR-controlling strategies can be adapted to account for this initial selection both with theoretical results and simulations that mimic the dependence structure to be expected in GWAS. We demonstrate that our approach is versatile and useful when the data are analyzed using both tests based on single markers and multiple regression. We provide an R package that allows practitioners to apply our procedure on standard GWAS format data, and illustrate its performance on lipid traits in the North Finland Birth Cohort 66 cohort study. PMID:27784720
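To make the counting issue concrete, here is a much-simplified sketch of locus-level FDR control: correlated SNPs are collapsed into loci before Benjamini-Hochberg is applied, so each rejection corresponds to one scientific discovery. This is not the authors' procedure or their R package; the clumping rule and within-locus Bonferroni adjustment are illustrative choices.

# Simplified illustration of locus-level FDR control (not the authors' method).
def benjamini_hochberg(pvals, q=0.05):
    """Return indices rejected by the BH procedure at level q."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    k_max = -1
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= q * rank / m:
            k_max = rank
    return set(order[:k_max]) if k_max > 0 else set()

def locus_level_fdr(snp_pvals, snp_to_locus, q=0.05):
    """snp_to_locus maps each SNP index to a locus label (e.g. from LD clumping)."""
    loci = {}
    for i, p in enumerate(snp_pvals):
        loci.setdefault(snp_to_locus[i], []).append(p)
    labels = list(loci)
    rep = [min(loci[l]) * len(loci[l]) for l in labels]   # Bonferroni within each locus
    rep = [min(p, 1.0) for p in rep]
    rejected = benjamini_hochberg(rep, q)
    return [labels[i] for i in rejected]

# Three SNPs in one locus plus two independent SNPs:
print(locus_level_fdr([1e-6, 2e-6, 5e-5, 0.3, 0.04],
                      {0: "locus1", 1: "locus1", 2: "locus1", 3: "locus2", 4: "locus3"}))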
Terrestrial cosmogenic 3He: where are we 30 years after its discovery?
NASA Astrophysics Data System (ADS)
Blard, Pierre-Henri; Pik, Raphaël; Farley, Kenneth A.; Lavé, Jérôme; Marrocchi, Yves
2016-04-01
It is now 30 years since cosmogenic 3He was detected for the first time in a terrestrial sample (Kurz, 1986). 3He is now a widely used geochemical tool in many fields of Earth sciences: volcanology, tectonics, paleoclimatology. 3He has the advantage of a high "production rate" to "detection limit" ratio, allowing surfaces as young as hundreds of years to be dated. Although its nuclear stability implies several limitations, it represents a useful alternative to 10Be in mafic environments. This contribution reviews the progress that has been accomplished since this discovery and discusses strategies to improve both the accuracy and the precision of this geochronometer. 1) Measurement of cosmogenic 3He. Correction of magmatic 3He. To estimate the non-cosmogenic magmatic 3He, Kurz (1986) introduced a two-step method involving crushing of phenocrysts (to analyze the isotopic ratio of the magmatic component), followed by subsequent melting of the sample to extract the remaining components, including the cosmogenic 3He: 3He_cosmogenic = 3He_melt - 4He_melt x (3He/4He)_magmatic (Eq. 1). Several studies suggested that the preliminary crushing may induce a loss of cosmogenic 3He (Hilton et al., 1993; Yokochi et al., 2005; Blard et al., 2006), implying an underestimate of the cosmogenic 3He measurement. However, subsequent work did not replicate these observations (Blard et al., 2008; Goehring et al., 2010), suggesting an influence of the apparatus used. An isochron method (directly melting several phenocryst aliquots) is an alternative that avoids the preliminary crushing step (Blard and Pik, 2008). Atmospheric contamination. Protin et al. (in press) provide robust evidence for a large and irreversible contamination of atmospheric helium on silicate surfaces. This unexpected behavior may reconcile the contrasting observations about the amplitude of crushing loss. This undesirable atmospheric contamination is negligible if grain fractions smaller than 150 μm are removed before melting. Correction of radiogenic 4He and nucleogenic 3He. Equation 1 is valid only if the 4He extracted by melting is entirely magmatic. To account for a possible radiogenic 4He component, it is crucial to properly estimate the radiogenic 4He production rate, by measuring the U, Th and Sm concentrations of both phenocryst and host, and the phenocryst size. Estimating the nucleogenic 3He also requires measuring Li in the phenocryst. Accuracy of analytical systems. A recent inter-laboratory comparison involving six different groups indicated systematic offsets between labs (up to 7%) (Blard et al., 2015). Efforts must be pursued to remove these inaccuracies. 2) Production rates. Absolute calibration. There are 25 3He calibration sites around the world, from 47° S to 64° N in latitude and from 35 to 3800 m in elevation. After scaling these production rates to sea level and high latitude, this dataset reveals a significant statistical dispersion (ca. 13%). Efforts should be focused on regions that lack data and on others, such as the Eastern Atlantic, that yield values systematically off. 3He/10Be cross calibrations. Some studies (Gayer et al., 2004; Amidon et al., 2009) identified an altitude dependence of the 3He/10Be production ratio in the Himalayas, while other data from the Andes and Africa did not (Blard et al., 2013b; Schimmelpfennig et al., 2011). There is thus a crucial need for new data at high and low elevation, with and without snow, to precisely quantify the cosmogenic thermal neutron production. Artificial target experiments may also be useful.
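Equation 1 above can be written as a one-line computation; the numerical values below are illustrative orders of magnitude, not measurements from any cited study:

# Sketch of the magmatic correction of Eq. 1:
# 3He_cosmogenic = 3He_melt - 4He_melt * (3He/4He)_magmatic,
# where the magmatic ratio is measured by crushing and the totals by melting.
def cosmogenic_he3(he3_melt, he4_melt, he3_he4_magmatic):
    """Cosmogenic 3He (atoms/g) after removing the magmatic component (Eq. 1)."""
    return he3_melt - he4_melt * he3_he4_magmatic

he3c = cosmogenic_he3(he3_melt=5.0e6,            # total 3He released by melting, atoms/g
                      he4_melt=2.0e12,           # total 4He released by melting, atoms/g
                      he3_he4_magmatic=1.1e-6)   # 3He/4He ratio measured by crushing
print(f"cosmogenic 3He = {he3c:.2e} atoms/g")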
Ibrahim, T; Gabbar, O A; El-Abed, K; Hutchinson, M J; Nelson, I W
2008-11-01
Our aim in this prospective radiological study was to determine whether the flexibility rate calculated from radiographs obtained during forced traction under general anaesthesia was better than that of fulcrum-bending radiographs before corrective surgery in predicting the extent of the available correction in patients with idiopathic scoliosis. We evaluated 33 patients with a Cobb angle > 60 degrees on a standing posteroanterior radiograph, who had been treated by posterior correction. Pre-operative standing fulcrum-bending radiographs and those with forced traction under general anaesthesia were obtained. Post-operative standing radiographs were taken after surgical correction. The mean forced-traction flexibility rate was 55% (SD 11.3), which was significantly higher than the mean fulcrum-bending flexibility rate of 32% (SD 16.1) (p < 0.001). We found no correlation between either the forced-traction or fulcrum-bending flexibility rates and the correction rate post-operatively (p = 0.24 and p = 0.44, respectively). Radiographs obtained during forced traction under general anaesthesia were better at predicting the flexibility of the curve than fulcrum-bending radiographs in curves with a Cobb angle > 60 degrees in the standing position and may identify those patients for whom supplementary anterior surgery can be avoided.
Mack, David L; Guan, Xuan; Wagoner, Ashley; Walker, Stephen J; Childers, Martin K
2014-11-01
Advances in regenerative medicine technologies will lead to dramatic changes in how patients in rehabilitation medicine clinics are treated in the upcoming decades. The multidisciplinary field of regenerative medicine is developing new tools for disease modeling and drug discovery based on induced pluripotent stem cells. This approach capitalizes on the idea of personalized medicine by using the patient's own cells to discover new drugs, increasing the likelihood of a favorable outcome. The search for compounds that can correct disease defects in the culture dish is a conceptual departure from how drug screens were done in the past. This system proposes a closed loop from sample collection from the diseased patient, to in vitro disease model, to drug discovery and Food and Drug Administration approval, to delivering that drug back to the same patient. Here, recent progress in patient-specific induced pluripotent stem cell derivation, directed differentiation toward diseased cell types, and how those cells can be used for high-throughput drug screens are reviewed. Given that restoration of normal function is a driving force in rehabilitation medicine, the authors believe that this drug discovery platform focusing on phenotypic rescue will become a key contributor to therapeutic compounds in regenerative rehabilitation.
Leonardi, Giorgio; Striani, Manuel; Quaglini, Silvana; Cavallini, Anna; Montani, Stefania
2018-05-21
Many medical information systems record data about the executed process instances in the form of an event log. In this paper, we present a framework able to convert actions in the event log into higher-level concepts, at different levels of abstraction, on the basis of domain knowledge. Abstracted traces are then provided as input to trace comparison and semantic process discovery. Our abstraction mechanism is able to manage non-trivial situations, such as interleaved actions or delays between two actions that abstract to the same concept. Trace comparison resorts to a similarity metric able to take into account abstraction phase penalties, and to deal with quantitative and qualitative temporal constraints in abstracted traces. As for process discovery, we rely on classical algorithms embedded in the framework ProM, made semantic by the capability of abstracting the actions on the basis of their conceptual meaning. The approach has been tested in stroke care, where we adopted abstraction and trace comparison to cluster event logs of different stroke units, to highlight (in)correct behavior while abstracting from details. We also provide process discovery results, showing how the abstraction mechanism allows stroke process models to be obtained that are more easily interpretable by neurologists. Copyright © 2018. Published by Elsevier Inc.
Kopps, Anna M; Kang, Jungkoo; Sherwin, William B; Palsbøll, Per J
2015-06-30
Kinship analyses are important pillars of ecological and conservation genetic studies with potentially far-reaching implications. There is a need for power analyses that address a range of possible relationships. Nevertheless, such analyses are rarely applied, and studies that use genetic-data-based kinship inference often ignore the influence of intrinsic population characteristics. We investigated 11 questions regarding the correct classification rate of dyads to relatedness categories (relatedness category assignments; RCA) using an individual-based model with realistic life history parameters. We investigated the effects of the number of genetic markers; marker type (microsatellite, single nucleotide polymorphism (SNP), or both); minor allele frequency; typing error; mating system; and the number of overlapping generations under different demographic conditions. We found that (i) an increasing number of genetic markers increased the correct classification rate of the RCA, so that more than 80% of first-cousin dyads could be correctly assigned; (ii) the minimum number of genetic markers required for assignments with 80 and 95% correct classifications differed between relatedness categories, mating systems, and the number of overlapping generations; (iii) the correct classification rate was improved by adding additional relatedness categories and age and mitochondrial DNA data; and (iv) a combination of microsatellite and single-nucleotide polymorphism data increased the correct classification rate if <800 SNP loci were available. This study shows how intrinsic population characteristics, such as mating system and the number of overlapping generations, life history traits, and genetic marker characteristics, can influence the correct classification rate of an RCA study. Therefore, species-specific power analyses are essential for empirical studies. Copyright © 2015 Kopps et al.
Kim, Dahan; Curthoys, Nikki M.; Parent, Matthew T.; Hess, Samuel T.
2015-01-01
Multi-colour localization microscopy has enabled sub-diffraction studies of colocalization between multiple biological species and quantification of their correlation at length scales previously inaccessible with conventional fluorescence microscopy. However, bleed-through, or misidentification of probe species, creates false colocalization and artificially increases certain types of correlation between two imaged species, affecting the reliability of information provided by colocalization and quantified correlation. Despite the potential risk of these artefacts of bleed-through, neither the effect of bleed-through on correlation nor methods of its correction in correlation analyses has been systematically studied at typical rates of bleed-through reported to affect multi-colour imaging. Here, we present a reliable method of bleed-through correction applicable to image rendering and correlation analysis of multi-colour localization microscopy. Application of our bleed-through correction shows our method accurately corrects the artificial increase in both types of correlations studied (Pearson coefficient and pair correlation), at all rates of bleed-through tested, in all types of correlations examined. In particular, anti-correlation could not be quantified without our bleed-through correction, even at rates of bleed-through as low as 2%. Demonstrated with dichroic-based multi-colour FPALM here, our presented method of bleed-through correction can be applied to all types of localization microscopy (PALM, STORM, dSTORM, GSDIM, etc.), including both simultaneous and sequential multi-colour modalities, provided the rate of bleed-through can be reliably determined. PMID:26185614
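As an illustration of the underlying idea (not the authors' algorithm), two-channel counts with known misassignment rates alpha and beta form a linear mixture that can be inverted bin by bin:

# Minimal sketch of a per-bin bleed-through correction for two-colour
# localization data. alpha: fraction of true species A misassigned to channel
# B; beta: fraction of true B misassigned to A (both from control samples).
import numpy as np

def unmix(obs_a, obs_b, alpha, beta):
    """Recover true per-bin counts from observed counts in channels A and B."""
    mix = np.array([[1.0 - alpha, beta],
                    [alpha, 1.0 - beta]])         # columns: true A, true B
    inv = np.linalg.inv(mix)
    stacked = np.stack([obs_a, obs_b], axis=0)    # shape (2, n_bins)
    true = np.tensordot(inv, stacked, axes=1)     # apply the 2x2 inverse per bin
    return np.clip(true[0], 0, None), np.clip(true[1], 0, None)

obs_a = np.array([100.0, 10.0, 50.0])
obs_b = np.array([5.0, 200.0, 50.0])
true_a, true_b = unmix(obs_a, obs_b, alpha=0.02, beta=0.02)
print(true_a, true_b)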
Birch, Joanne L; Walsh, Neville G; Cantrill, David J; Holmes, Gareth D; Murphy, Daniel J
2017-01-01
In Australia, Poaceae tribe Poeae are represented by 19 genera and 99 species, including economically and environmentally important native and introduced pasture grasses [e.g. Poa (Tussock-grasses) and Lolium (Ryegrasses)]. We used this tribe, which is well characterised with regard to morphological diversity and evolutionary relationships, to test the efficacy of DNA barcoding methods. A reference library was generated that included 93.9% of species in Australia (408 individuals, mean of 3.7 individuals per species). Molecular data were generated for the official plant barcoding markers (rbcL, matK) and the nuclear ribosomal internal transcribed spacer (ITS) region. We investigated the accuracy of specimen identifications using distance- (nearest neighbour, best-close match, and threshold identification) and tree-based (maximum likelihood, Bayesian inference) methods and applied species discovery methods (automatic barcode gap discovery, Poisson tree processes) based on molecular data to assess congruence with recognised species. Across all methods, the success rate for specimen identification of genera was high (87.5-99.5%) and of species was low (25.6-44.6%). Distance- and tree-based methods were equally ineffective in providing accurate identifications for specimens to species rank (26.1-44.6% and 25.6-31.3%, respectively). The ITS marker achieved the highest success rate for specimen identification at both generic and species ranks across the majority of methods. For distance-based analyses the best-close match method provided the greatest accuracy for identification of individuals, with a high percentage of "correct" (97.6%) and a low percentage of "incorrect" (0.3%) generic identifications, based on the ITS marker. For tribe Poeae, and likely for other grass lineages, sequence data in the standard DNA barcode markers are not variable enough for accurate identification of specimens to species rank. For recently diverged grass species similar challenges are encountered in the application of genetic and morphological data to species delimitations, with taxonomic signal limited by extensive infra-specific variation and shared polymorphisms among species in both data types.
Discovery of 4 ms and 7 ms Pulsars in M15 (F & H)
NASA Astrophysics Data System (ADS)
Middleditch, J.
1992-12-01
Observations of M15 taken during Oct. 23-Nov. 1 1991 with the Arecibo 305-m telescope at 430 MHz, which were analyzed using 2-billion point Fourier transforms on supercomputers at Los Alamos National Laboratory, reveal two new ms pulsars in the globular cluster, M15. The sixth and fastest yet discovered in this cluster, M15F, has a spin rate of 248.3 Hz, while the eighth and latest to be discovered in this cluster has a spin rate of 148.3 Hz, the only one known so far in the frequency interval of 100-200 Hz. Further details and implications of these discoveries will be discussed.
NASA Astrophysics Data System (ADS)
Larson, Stephen
2007-05-01
The state and discovery rate of current NEO surveys reflect incremental improvements in a number of areas, such as detector size and sensitivity, computing capacity and availability of larger apertures. The result has been an increased discovery rate even with the expected reduction of objects left to discover. There are currently about 10 telescopes ranging in size from 0.5 to 1.5 meters carrying out full- or part-time, regular surveying in both hemispheres. The sky is covered one to two times per lunation to V~19, with a band near the ecliptic to V~20.5. We review the current survey programs and their contribution towards the Spaceguard goal of discovering at least 90% of the NEOs larger than 1 km.
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes
NASA Astrophysics Data System (ADS)
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
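The role of the two orthogonal matrices can be illustrated by the CSS commutation condition Hc·Hd^T = 0 (mod 2); the toy matrices below use the [7,4] Hamming code rather than the quasi-cyclic construction of Hagiwara et al.:

# Sketch of the CSS consistency check: the X-error check matrix Hc and the
# Z-error check matrix Hd must be orthogonal over GF(2) so that the two kinds
# of stabilizers commute. Toy example only (Steane-style use of the Hamming code).
import numpy as np

def commute_over_gf2(hc, hd):
    """True if every X-type check commutes with every Z-type check."""
    return not np.any((hc @ hd.T) % 2)

hamming = np.array([[1, 0, 1, 0, 1, 0, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])

# The Hamming code contains its dual, so using it for both X and Z checks
# satisfies the CSS condition.
print(commute_over_gf2(hamming, hamming))   # -> True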
The solar cycle variation of the rates of CMEs and related activity
NASA Technical Reports Server (NTRS)
Webb, David F.
1991-01-01
Coronal mass ejections (CMEs) are an important aspect of the physics of the corona and heliosphere. This paper presents results of a study of occurrence frequencies of CMEs and related activity tracers over more than a complete solar activity cycle. To properly estimate occurrence rates, observed CME rates must be corrected for instrument duty cycles, detection efficiencies away from the skyplane, mass detection thresholds, and geometrical considerations. These corrections are evaluated using CME data from 1976-1989 obtained with the Skylab, SMM and SOLWIND coronagraphs and the Helios-2 photometers. The major results are: (1) the occurrence rate of CMEs tends to track the activity cycle in both amplitude and phase; (2) the corrected rates from different instruments are reasonably consistent; and (3) over the long term, no one class of solar activity tracer is better correlated with CME rate than any other (with the possible exception of type II bursts).
45 CFR 284.45 - What are the contents and duration of the corrective action plan?
Code of Federal Regulations, 2011 CFR
2011-10-01
... POVERTY RATE IS THE RESULT OF THE TANF PROGRAM § 284.45 What are the contents and duration of the... manner in which the State or Territory will reduce its child poverty rate; (2) A description of the... corrective action plan until it determines and notifies us that its child poverty rate, as determined in...
45 CFR 284.45 - What are the contents and duration of the corrective action plan?
Code of Federal Regulations, 2014 CFR
2014-10-01
... POVERTY RATE IS THE RESULT OF THE TANF PROGRAM § 284.45 What are the contents and duration of the... manner in which the State or Territory will reduce its child poverty rate; (2) A description of the... corrective action plan until it determines and notifies us that its child poverty rate, as determined in...
45 CFR 284.45 - What are the contents and duration of the corrective action plan?
Code of Federal Regulations, 2013 CFR
2013-10-01
... POVERTY RATE IS THE RESULT OF THE TANF PROGRAM § 284.45 What are the contents and duration of the... manner in which the State or Territory will reduce its child poverty rate; (2) A description of the... corrective action plan until it determines and notifies us that its child poverty rate, as determined in...
45 CFR 284.45 - What are the contents and duration of the corrective action plan?
Code of Federal Regulations, 2012 CFR
2012-10-01
... POVERTY RATE IS THE RESULT OF THE TANF PROGRAM § 284.45 What are the contents and duration of the... manner in which the State or Territory will reduce its child poverty rate; (2) A description of the... corrective action plan until it determines and notifies us that its child poverty rate, as determined in...
45 CFR 284.45 - What are the contents and duration of the corrective action plan?
Code of Federal Regulations, 2010 CFR
2010-10-01
... POVERTY RATE IS THE RESULT OF THE TANF PROGRAM § 284.45 What are the contents and duration of the... manner in which the State or Territory will reduce its child poverty rate; (2) A description of the... corrective action plan until it determines and notifies us that its child poverty rate, as determined in...
Feng, Yan; Mitchison, Timothy J; Bender, Andreas; Young, Daniel W; Tallarico, John A
2009-07-01
Multi-parameter phenotypic profiling of small molecules provides important insights into their mechanisms of action, as well as a systems level understanding of biological pathways and their responses to small molecule treatments. It therefore deserves more attention at an early step in the drug discovery pipeline. Here, we summarize the technologies that are currently in use for phenotypic profiling--including mRNA-, protein- and imaging-based multi-parameter profiling--in the drug discovery context. We think that an earlier integration of phenotypic profiling technologies, combined with effective experimental and in silico target identification approaches, can improve success rates of lead selection and optimization in the drug discovery process.
How automatic is the hand's automatic pilot? Evidence from dual-task studies.
McIntosh, Robert D; Mulroue, Amy; Brockmole, James R
2010-10-01
The ability to correct reaching movements for changes in target position has been described as the hand's 'automatic pilot'. These corrections are preconscious and occur by default in double-step reaching tasks, even if the goal is to react to the target jump in some other way, for instance by stopping the movement (STOP instruction). Nonetheless, corrections are strongly modulated by conscious intention: participants make more corrections when asked to follow the target (GO instruction) and can suppress them when explicitly asked not to follow the target (NOGO instruction). We studied the influence of a cognitively demanding (auditory 1-back) task upon correction behaviour under GO, STOP and NOGO instructions. Correction rates under the STOP instruction were unaffected by cognitive load, consistent with the assumption that they reflect the default behaviour of the automatic pilot. Correction rates under the GO instruction were also unaffected, suggesting that minimal cognitive resources are required to enhance online correction. By contrast, cognitive load impeded the ability to suppress online corrections under the NOGO instruction. These data reveal a constitutional bias in the automatic pilot system: intentional suppression of the default correction behaviour is cognitively demanding, but enhancement towards greater responsiveness is seemingly effortless.
Linkage effects between deposit discovery and postdiscovery exploratory drilling
Drew, Lawrence J.
1975-01-01
For the 1950-71 period of petroleum exploration in the Powder River Basin, northeastern Wyoming and southeastern Montana, three specific topics were investigated. First, the wildcat wells drilled during the ambient phases of exploration are estimated to have discovered 2.80 times as much petroleum per well as the wildcat wells drilled during the cyclical phases of exploration, periods when exploration plays were active. Second, the hypothesis was tested and verified that during ambient phases of exploration the discovery of deposits could be anticipated by a small but statistically significant rise in the ambient drilling rate during the year prior to the year of discovery. Closer examination of the data suggests that this anticipation effect decreases through time. Third, a regression model utilizing the two independent variables of (1) the volume of petroleum contained in each deposit discovered in a cell and the directly adjacent cells and (2) the respective depths of these deposits was constructed to predict the expected yearly cyclical wildcat drilling rate in four 30 by 30 min (approximately 860 mi2) sized cells. In two of these cells relatively large volumes of petroleum were discovered, whereas in the other two cells smaller volumes were discovered. The predicted and actual rates of wildcat drilling which occurred in each cell agreed rather closely.
Jiang, Wei; Yu, Weichuan
2017-02-15
In genome-wide association studies (GWASs) of common diseases/traits, we often analyze multiple GWASs with the same phenotype together to discover associated genetic variants with higher power. Since it is difficult to access data with detailed individual measurements, summary-statistics-based meta-analysis methods have become popular to jointly analyze datasets from multiple GWASs. In this paper, we propose a novel summary-statistics-based joint analysis method based on controlling the joint local false discovery rate (Jlfdr). We prove that our method is the most powerful summary-statistics-based joint analysis method when controlling the false discovery rate at a certain level. In particular, the Jlfdr-based method achieves higher power than commonly used meta-analysis methods when analyzing heterogeneous datasets from multiple GWASs. Simulation experiments demonstrate the superior power of our method over meta-analysis methods. Also, our method discovers more associations than meta-analysis methods from empirical datasets of four phenotypes. The R-package is available at: http://bioinformatics.ust.hk/Jlfdr.html. © The Author 2016. Published by Oxford University Press. All rights reserved.
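The control step behind local-fdr methods can be sketched generically as follows; this is not the Jlfdr statistic itself (which combines z-scores across studies and is provided in the authors' R package), and the lfdr values below are placeholders:

# Generic local-fdr selection: rank hypotheses by estimated lfdr and keep
# rejecting while the running mean lfdr of the rejected set (an estimate of
# the FDR among the rejections) stays below the target level q.
def reject_by_lfdr(lfdr_values, q=0.05):
    """Return indices rejected while the mean lfdr of the rejected set is <= q."""
    order = sorted(range(len(lfdr_values)), key=lambda i: lfdr_values[i])
    rejected, running_sum = [], 0.0
    for idx in order:
        running_sum += lfdr_values[idx]
        if running_sum / (len(rejected) + 1) > q:
            break
        rejected.append(idx)
    return rejected

print(reject_by_lfdr([0.001, 0.01, 0.04, 0.20, 0.60], q=0.05))  # -> [0, 1, 2]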
iPTF14yb: The First Discovery of a Gamma-Ray Burst Afterglow Independent of a High-Energy Trigger
Cenko, S. Bradley; Urban, Alex L.; Perley, Daniel A.; ...
2015-04-20
We report here the discovery by the Intermediate Palomar Transient Factory (iPTF) of iPTF14yb, a luminous (M_r ≈ -27.8 mag), cosmological (redshift 1.9733), rapidly fading optical transient. We demonstrate, based on probabilistic arguments and a comparison with the broader population, that iPTF14yb is the optical afterglow of the long-duration gamma-ray burst GRB 140226A. This marks the first unambiguous discovery of a GRB afterglow prior to (and thus entirely independent of) an associated high-energy trigger. We estimate the rate of iPTF14yb-like sources (i.e., cosmologically distant relativistic explosions) based on iPTF observations, inferring an all-sky value of R_rel = 610 yr⁻¹ (68% confidence interval of 110-2000 yr⁻¹). Our derived rate is consistent (within the large uncertainty) with the all-sky rate of on-axis GRBs derived by the Swift satellite. Finally, we briefly discuss the implications of the nondetection to date of bona fide "orphan" afterglows (i.e., those lacking detectable high-energy emission) on GRB beaming and the degree of baryon loading in these relativistic jets.
Long-Period Planets in Open Clusters and the Evolution of Planetary Systems
NASA Astrophysics Data System (ADS)
Quinn, Samuel N.; White, Russel; Latham, David W.; Stefanik, Robert
2018-01-01
Recent discoveries of giant planets in open clusters confirm that they do form and migrate in relatively dense stellar groups, though overall occurrence rates are not yet well constrained because the small sample of giant planets discovered thus far predominantly have short periods. Moreover, planet formation rates and the architectures of planetary systems in clusters may vary significantly -- e.g., due to intercluster differences in the chemical properties that regulate the growth of planetary embryos or in the stellar space density and binary populations, which can influence the dynamical evolution of planetary systems. Constraints on the population of long-period Jovian planets -- those representing the reservoir from which many hot Jupiters likely form, and which are most vulnerable to intracluster dynamical interactions -- can help quantify how the birth environment affects formation and evolution, particularly through comparison of populations possessing a range of ages and chemical and dynamical properties. From our ongoing RV survey of open clusters, we present the discovery of several long-period planets and candidate substellar companions in the Praesepe, Coma Berenices, and Hyades open clusters. From these discoveries, we improve estimates of giant planet occurrence rates in clusters, and we note that high eccentricities in several of these systems support the prediction that the birth environment helps shape planetary system architectures.
Shafiee, Mohammad Javad; Chung, Audrey G; Khalvati, Farzad; Haider, Masoom A; Wong, Alexander
2017-10-01
While lung cancer is the second most diagnosed form of cancer in men and women, a sufficiently early diagnosis can be pivotal in patient survival rates. Imaging-based, or radiomics-driven, detection methods have been developed to aid diagnosticians, but largely rely on hand-crafted features that may not fully encapsulate the differences between cancerous and healthy tissue. Recently, the concept of discovery radiomics was introduced, where custom abstract features are discovered from readily available imaging data. We propose an evolutionary deep radiomic sequencer discovery approach based on evolutionary deep intelligence. Motivated by patient privacy concerns and the idea of operational artificial intelligence, the evolutionary deep radiomic sequencer discovery approach organically evolves increasingly more efficient deep radiomic sequencers that produce significantly more compact yet similarly descriptive radiomic sequences over multiple generations. As a result, this framework improves operational efficiency and enables diagnosis to be run locally at the radiologist's computer while maintaining detection accuracy. We evaluated the evolved deep radiomic sequencer (EDRS) discovered via the proposed evolutionary deep radiomic sequencer discovery framework against state-of-the-art radiomics-driven and discovery radiomics methods using clinical lung CT data with pathologically proven diagnostic data from the LIDC-IDRI dataset. The EDRS shows improved sensitivity (93.42%), specificity (82.39%), and diagnostic accuracy (88.78%) relative to previous radiomics approaches.
Identifying UMLS concepts from ECG Impressions using KnowledgeMap
Denny, Joshua C.; Spickard, Anderson; Miller, Randolph A; Schildcrout, Jonathan; Darbar, Dawood; Rosenbloom, S. Trent; Peterson, Josh F.
2005-01-01
Electrocardiogram (ECG) impressions represent a wealth of medical information for potential decision support and drug-effect discovery. Much of this information is inaccessible to automated methods in the free-text portion of the ECG report. We studied the application of the KnowledgeMap concept identifier (KMCI) to map Unified Medical Language System (UMLS) concepts from ECG impressions. ECGs were processed by KMCI and the results scored for accuracy by multiple raters. Reviewers also recorded unidentified concepts through the scoring interface. Overall, KMCI correctly identified 1059 out of 1171 concepts for a recall of 0.90. Precision, the proportion of identified concepts that were correct, was 0.94. KMCI was particularly effective at identifying ECG rhythms (330/333), perfusion changes (65/66), and noncardiac medical concepts (11/11). In conclusion, KMCI is an effective method for mapping ECG impressions to UMLS concepts. PMID:16779029
Three-loop corrections to the Higgs boson mass and implications for supersymmetry at the LHC.
Feng, Jonathan L; Kant, Philipp; Profumo, Stefano; Sanford, David
2013-09-27
In supersymmetric models with minimal particle content and without left-right squark mixing, the conventional wisdom is that the 125.6 GeV Higgs boson mass implies top squark masses of O(10) TeV, far beyond the reach of colliders. This conclusion is subject to significant theoretical uncertainties, however, and we provide evidence that it may be far too pessimistic. We evaluate the Higgs boson mass, including the dominant three-loop terms at O(α_t α_s²), in currently viable models. For multi-TeV top squarks, the three-loop corrections can increase the Higgs boson mass by as much as 3 GeV and lower the required top-squark masses to 3-4 TeV, greatly improving prospects for supersymmetry discovery at the upcoming run of the LHC and its high-luminosity upgrade.
Hawking radiation as tunneling in Schwarzschild anti-de Sitter black hole
NASA Astrophysics Data System (ADS)
Sefiedgar, A. S.; Ashrafinejad, A.
2017-08-01
The Hawking radiation from a (d+1) -dimensional Schwarzschild Anti-de Sitter (SAdS) black hole is investigated within rainbow gravity. Based on the method proposed by Kraus, Parikh and Wilczek, the Hawking radiation is considered as a tunneling process across the horizon. The emission rate of massless particles which are tunneling across the quantum-corrected horizon is calculated. Enforcing the energy conservation law leads to a dynamical geometry. Both the dynamical geometry and the quantum effects of space-time yield some corrections to the emission rate. The corrected radiation spectrum is not purely thermal. The emission rate is related to the changes of modified entropy in rainbow gravity and the corrected thermal spectrum may be consistent with an underlying unitary quantum theory. The correlations between emitted particles are also investigated in order to address the recovery of information.
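For reference, the tunneling relation alluded to above, in its standard Parikh-Wilczek form (the paper substitutes the rainbow-corrected entropy, whose exact expression is not reproduced here), is

\Gamma \;\sim\; e^{-2\,\mathrm{Im}\, I} \;=\; e^{\Delta S_{\mathrm{BH}}},
\qquad
\Delta S_{\mathrm{BH}} = S_{\mathrm{BH}}(M-\omega) - S_{\mathrm{BH}}(M),

where ω is the energy of the emitted quantum. Expanding to first order in ω recovers the thermal Boltzmann factor e^{-\omega/T_H}, while the higher-order terms are the non-thermal corrections that can carry correlations between emitted quanta.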
Bolea, Juan; Pueyo, Esther; Orini, Michele; Bailón, Raquel
2016-01-01
The purpose of this study is to characterize and attenuate the influence of mean heart rate (HR) on nonlinear heart rate variability (HRV) indices (correlation dimension, sample, and approximate entropy), which arises because the HR is the intrinsic sampling rate of the HRV signal. This influence can notably alter nonlinear HRV indices and lead to biased information regarding autonomic nervous system (ANS) modulation. First, a simulation study was carried out to characterize the dependence of nonlinear HRV indices on HR assuming similar ANS modulation. Second, two HR-correction approaches were proposed: one based on regression formulas and another based on interpolating RR time series. Finally, standard and HR-corrected HRV indices were studied in a body position change database. The simulation study showed the HR-dependence of nonlinear indices as a sampling rate effect, as well as the ability of the proposed HR-corrections to attenuate the influence of mean HR. Analysis in the body position change database showed that correlation dimension was reduced by around 21% in median values in standing with respect to supine position (p < 0.05), concomitant with a 28% increase in mean HR (p < 0.05). After HR-correction, correlation dimension decreased by around 18% in standing with respect to supine position, with the decrease still significant. Sample and approximate entropy showed similar trends. HR-corrected nonlinear HRV indices could represent an improvement in their applicability as markers of ANS modulation when mean HR changes.
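The interpolation-based correction can be sketched as resampling the RR tachogram onto a fixed grid so that series recorded at different mean heart rates share a common effective sampling rate; the 4 Hz grid and linear interpolation below are assumed settings, not necessarily those of the study:

# Sketch: resample the irregularly sampled RR-interval series onto a fixed
# 4 Hz grid before computing nonlinear indices (entropy, correlation dimension).
import numpy as np

def resample_rr(rr_seconds, fs=4.0):
    """Return the RR tachogram resampled at fs Hz using linear interpolation."""
    rr = np.asarray(rr_seconds, dtype=float)
    beat_times = np.cumsum(rr)                       # time of each beat
    t_uniform = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    return np.interp(t_uniform, beat_times, rr)

# A fast and a slow heart produce different numbers of beats per minute;
# after resampling both series have the same number of samples per unit time.
fast = resample_rr(np.full(90, 0.667))   # ~90 bpm for about one minute
slow = resample_rr(np.full(60, 1.0))     # ~60 bpm for about one minute
print(len(fast), len(slow))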
On the Discovery of Evolving Truth
Li, Yaliang; Li, Qi; Gao, Jing; Su, Lu; Zhao, Bo; Fan, Wei; Han, Jiawei
2015-01-01
In the era of big data, information regarding the same objects can be collected from increasingly more sources. Unfortunately, there usually exist conflicts among the information coming from different sources. To tackle this challenge, truth discovery, i.e., to integrate multi-source noisy information by estimating the reliability of each source, has emerged as a hot topic. In many real world applications, however, the information may come sequentially, and as a consequence, the truth of objects as well as the reliability of sources may be dynamically evolving. Existing truth discovery methods, unfortunately, cannot handle such scenarios. To address this problem, we investigate the temporal relations among both object truths and source reliability, and propose an incremental truth discovery framework that can dynamically update object truths and source weights upon the arrival of new data. Theoretical analysis is provided to show that the proposed method is guaranteed to converge at a fast rate. The experiments on three real world applications and a set of synthetic data demonstrate the advantages of the proposed method over state-of-the-art truth discovery methods. PMID:26705502
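A generic truth-discovery iteration alternates weighted voting with source re-weighting; the decay factor below is a simplified stand-in for the paper's incremental update, and the data are toy claims:

# Generic truth-discovery sketch (not the authors' exact update rules):
# (1) estimate each object's truth by a weighted vote of the sources,
# (2) re-weight each source by how often it agrees with the current truths,
# with a decay factor letting old evidence fade as new data arrive.
from collections import defaultdict

def update(claims, weights, decay=0.9):
    """claims: list of (source, object, value). Returns (truths, new_weights)."""
    votes = defaultdict(lambda: defaultdict(float))
    for src, obj, val in claims:
        votes[obj][val] += weights.get(src, 1.0)
    truths = {obj: max(vals, key=vals.get) for obj, vals in votes.items()}
    new_weights = {}
    for src in {s for s, _, _ in claims}:
        made = [(o, v) for s, o, v in claims if s == src]
        agree = sum(1 for o, v in made if truths[o] == v) / len(made)
        new_weights[src] = decay * weights.get(src, 1.0) + (1 - decay) * agree
    return truths, new_weights

weights = {"s1": 1.0, "s2": 1.0, "s3": 1.0}
claims_t1 = [("s1", "cityA_pop", "2M"), ("s2", "cityA_pop", "2M"), ("s3", "cityA_pop", "3M")]
truths, weights = update(claims_t1, weights)
print(truths, weights)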
Getting physical to fix pharma
NASA Astrophysics Data System (ADS)
Connelly, Patrick R.; Vuong, T. Minh; Murcko, Mark A.
2011-09-01
Powerful technologies allow the synthesis and testing of large numbers of new compounds, but the failure rate of pharmaceutical R&D remains very high. Greater understanding of the fundamental physical chemical behaviour of molecules could be the key to greatly enhancing the success rate of drug discovery.
Beyond the standard Higgs after the 125 GeV Higgs discovery.
Grojean, C
2015-01-13
An elementary weakly coupled and solitary Higgs boson allows one to extend the validity of the Standard Model up to very high energy, maybe as high as the Planck scale. Nonetheless, this scenario fails to fill the universe with dark matter and does not explain the matter-antimatter asymmetry. However, amending the Standard Model tends to destabilize the weak scale by large quantum corrections to the Higgs potential. New degrees of freedom, new forces, new organizing principles are required to provide a consistent and natural description of physics beyond the standard Higgs.
Discovery of New Eunicellins from an Indonesian Octocoral Cladiella sp.
Chen, Yung-Husan; Tai, Chia-Ying; Su, Yin-Di; Chang, Yu-Chia; Lu, Mei-Chin; Weng, Ching-Feng; Su, Jui-Hsin; Hwang, Tsong-Long; Wu, Yang-Chang; Sung, Ping-Jyun
2011-01-01
Two new 11-hydroxyeunicellin diterpenoids, cladieunicellin F (1) and (–)-solenopodin C (2), were isolated from an Indonesian octocoral Cladiella sp. The structures of eunicellins 1 and 2 were established by spectroscopic methods, and eunicellin 2 was found to be an enantiomer of the known eunicellin solenopodin C (3). Eunicellin 2 displayed inhibitory effects on the generation of superoxide anion and the release of elastase by human neutrophils. The previously reported structures of two eunicellin-based compounds, cladielloides A and B, are corrected in this study. PMID:21747739
Fire/security staff member instructs STS-29 crew on fire extinguisher usage
NASA Technical Reports Server (NTRS)
1988-01-01
STS-29 Discovery, Orbiter Vehicle (OV) 103, crewmembers are trained in procedures to follow in the event of a fire. Here, the crew is briefed on the correct handling of the fire extinguisher by Robert Fife (far left) of NASA's fire / security staff. Pictured, left to right, are Pilot John E. Blaha, Commander Michael L. Coats, Mission Specialist (MS) Robert C. Springer, MS James F. Buchli, and MS James P. Bagian. The fire fighting training took place at JSC's fire training pit across from the Gilruth Center Bldg 207.
Mrs. Haise in viewing room overlooking FCR
1970-04-14
S70-34900 (14 April 1970) --- Mrs. Mary Haise receives an explanation of the revised flight plan of the Apollo 13 mission from astronaut Gerald P. Carr in the viewing room of the Mission Control Center (MCC), Building 30, at the Manned Spacecraft Center (MSC). Her husband, astronaut Fred W. Haise Jr., lunar module pilot for the Apollo 13 mission, was joining fellow crew members, astronauts James A. Lovell Jr. and John L. Swigert Jr., in making corrections in their spacecraft following the discovery of an oxygen cell failure several hours earlier.
Peng, Jen-Chieh; Qiu, Jian-Wei
2016-09-01
The Drell-Yan process, proposed over 45 years ago by Sid Drell and Tung-Mow Yan to describe high-mass lepton-pair production in hadron-hadron collision, has played an important role in validating QCD as the correct theory for strong interaction. This process has also become a powerful tool for probing the partonic structures of hadrons. The Drell-Yan mechanism has led to the discovery of new particles, and will continue to be an important tool to search for new physics. In this study, we review some highlights and future prospects of the Drell-Yan process.
Verfaillie, Sander C J; Pichet Binette, Alexa; Vachon-Presseau, Etienne; Tabrizi, Shirin; Savard, Mélissa; Bellec, Pierre; Ossenkoppele, Rik; Scheltens, Philip; van der Flier, Wiesje M; Breitner, John C S; Villeneuve, Sylvia
2018-05-01
Both subjective cognitive decline (SCD) and a family history of Alzheimer's disease (AD) portend risk of brain abnormalities and progression to dementia. Posterior default mode network (pDMN) connectivity is altered early in the course of AD. It is unclear whether SCD predicts similar outcomes in cognitively normal individuals with a family history of AD. We studied 124 asymptomatic individuals with a family history of AD (age 64 ± 5 years). Participants were categorized as having SCD if they reported that their memory was becoming worse (SCD+). We used extensive neuropsychological assessment to investigate five different cognitive domain performances at baseline (n = 124) and 1 year later (n = 59). We assessed interconnectivity among three a priori defined ROIs: pDMN, anterior ventral DMN, medial temporal memory system (MTMS), and the connectivity of each with the rest of the brain. Sixty-eight (55%) participants reported SCD. Baseline cognitive performance was comparable between groups (all false discovery rate-adjusted p values > .05). At follow-up, immediate and delayed memory improved across groups, but the improvement in immediate memory was reduced in SCD+ compared with SCD- (all false discovery rate-adjusted p values < .05). When compared with SCD-, SCD+ subjects showed increased pDMN-MTMS connectivity (false discovery rate-adjusted p < .05). Higher connectivity between the MTMS and the rest of the brain was associated with better baseline immediate memory, attention, and global cognition, whereas higher MTMS and pDMN-MTMS connectivity were associated with lower immediate memory over time (all false discovery rate-adjusted p values < .05). SCD in cognitively normal individuals is associated with diminished immediate memory practice effects and a brain connectivity pattern that mirrors early AD-related connectivity failure. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Machine learning models for lipophilicity and their domain of applicability.
Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Laak, Antonius Ter; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-01-01
Unfavorable lipophilicity and water solubility cause many drug failures; therefore these properties have to be taken into account early on in lead discovery. Commercial tools for predicting lipophilicity usually have been trained on small and neutral molecules, and are thus often unable to accurately predict in-house data. Using a modern Bayesian machine learning algorithm--a Gaussian process model--this study constructs a log D7 model based on 14,556 drug discovery compounds of Bayer Schering Pharma. Performance is compared with support vector machines, decision trees, ridge regression, and four commercial tools. In a blind test on 7013 new measurements from the last months (including compounds from new projects) 81% were predicted correctly within 1 log unit, compared to only 44% achieved by commercial software. Additional evaluations using public data are presented. We consider error bars for each method (model based error bars, ensemble based, and distance based approaches), and investigate how well they quantify the domain of applicability of each model.
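A minimal sketch of such a Gaussian-process regressor with per-compound error bars, using scikit-learn and toy descriptors (not the in-house data, descriptors or kernel of the study):

# Gaussian-process logD model with predictive uncertainty; toy data only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                  # toy molecular descriptors
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)       # toy logD values

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2),
                              normalize_y=True)
gp.fit(X[:150], y[:150])

mean, std = gp.predict(X[150:], return_std=True)
# The predictive std acts as a per-compound error bar: a large std flags
# compounds outside the model's domain of applicability.
inside = np.abs(mean - y[150:]) <= 1.0
print(f"{inside.mean():.0%} predicted within 1 log unit; mean error bar = {std.mean():.2f}")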
NASA Astrophysics Data System (ADS)
Liao, Chun-Chih; Xiao, Furen; Wong, Jau-Min; Chiang, I.-Jen
Computed tomography (CT) of the brain is the preferred study in neurological emergencies. Physicians use CT to diagnose various types of intracranial hematomas, including epidural, subdural and intracerebral hematomas, according to their locations and shapes. We propose a novel method that can automatically diagnose intracranial hematomas by combining machine vision and knowledge discovery techniques. The skull on the CT slice is located and the depth of each intracranial pixel is labeled. After normalization of the pixel intensities by their depth, the hyperdense area of intracranial hematoma is segmented with multi-resolution thresholding and region-growing. We then apply the C4.5 algorithm to construct a decision tree using the features of the segmented hematoma and the diagnoses made by physicians. The algorithm was evaluated on 48 pathological images treated in a single institute. The two discovered rules closely resemble those used by human experts, and are able to make correct diagnoses in all cases.
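The thresholding and region-keeping stage can be sketched as follows; the 60 HU threshold, the toy image and the minimum-size filter are assumptions, and the depth labeling and C4.5 tree of the original system are not reproduced:

# Illustrative segmentation of hyperdense (candidate hematoma) regions.
import numpy as np
from scipy import ndimage

def segment_hyperdense(intracranial_hu, threshold=60.0, min_voxels=20):
    """Return a mask of hyperdense connected regions larger than min_voxels."""
    mask = intracranial_hu >= threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = {i + 1 for i, s in enumerate(sizes) if s >= min_voxels}
    return np.isin(labels, list(keep)), len(keep)

image = np.full((64, 64), 35.0)          # toy slice: normal parenchyma ~35 HU
image[20:30, 20:40] = 75.0               # synthetic hyperdense lesion
mask, n_regions = segment_hyperdense(image)
print(n_regions, mask.sum())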
Systematic Evaluation of Molecular Networks for Discovery of Disease Genes.
Huang, Justin K; Carlin, Daniel E; Yu, Michael Ku; Zhang, Wei; Kreisberg, Jason F; Tamayo, Pablo; Ideker, Trey
2018-04-25
Gene networks are rapidly growing in size and number, raising the question of which networks are most appropriate for particular applications. Here, we evaluate 21 human genome-wide interaction networks for their ability to recover 446 disease gene sets identified through literature curation, gene expression profiling, or genome-wide association studies. While all networks have some ability to recover disease genes, we observe a wide range of performance with STRING, ConsensusPathDB, and GIANT networks having the best performance overall. A general tendency is that performance scales with network size, suggesting that new interaction discovery currently outweighs the detrimental effects of false positives. Correcting for size, we find that the DIP network provides the highest efficiency (value per interaction). Based on these results, we create a parsimonious composite network with both high efficiency and performance. This work provides a benchmark for selection of molecular networks in human disease research. Copyright © 2018 Elsevier Inc. All rights reserved.
Solving Upwind-Biased Discretizations: Defect-Correction Iterations
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
1999-01-01
This paper considers defect-correction solvers for a second order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in defect-correction iterations have different approximation order, then the initial convergence rates may be very slow. The number of iterations required to get into the asymptotic convergence regime might grow on fine grids as a negative power of h. In the case of a second order target operator and a first order driver operator, this number of iterations is roughly proportional to h^(-1/3). (3) If both the operators have the second approximation order, the defect-correction solver demonstrates the asymptotic convergence rate after three iterations at most. The same three iterations are required to converge algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which, by the way, can take into account the influence of discretized outflow boundary conditions as well) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. As a result of this analysis, a new very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.
Automated measurements for individualized heart rate correction of the QT interval.
Mason, Jay W; Moon, Thomas E
2015-04-01
Subject-specific electrocardiographic QT interval correction for heart rate is often used in clinical trials with frequent electrocardiographic recordings. However, in these studies relatively few 10-s, 12-lead electrocardiograms may be available for calculating the individual correction. Highly automated QT and RR measurement tools have made it practical to measure electrocardiographic intervals on large volumes of continuous electrocardiogram data. The purpose of this study was to determine whether an automated method can be used in lieu of a manual method. In 49 subjects who completed all treatments in a four-armed crossover study, we compared two methods for derivation of individualized rate-correction coefficients: manual measurement on 10-s electrocardiograms and automated measurement of QT and RR during continuous 24-h electrocardiogram recordings. The four treatments, received by each subject in a Latin-square randomization sequence, were placebo, moxifloxacin, and two doses of an investigational drug. Analysis of continuous electrocardiogram data yielded a lower standard deviation of QT:RR regression values than the manual method, though the differences were not statistically significant. The within-subject and within-treatment coefficients of variation between the manual and automated methods were not significantly different. Corrected QT values from the two methods had similar rates of true and false positive identification of moxifloxacin's QT-prolonging effect. An automated method for individualized rate correction applied to continuous electrocardiogram data could be advantageous in clinical trials, as the automated method is simpler, is based upon a much larger volume of data, yields similar results, and requires no human over-reading of the measurements. © The Author(s) 2015.
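Individualized rate correction is conventionally obtained by regressing log QT on log RR for each subject and applying the fitted exponent. The sketch below illustrates that convention on simulated intervals; it is not the authors' measurement pipeline, and the numbers are placeholders:

```python
# Minimal sketch of individualized QT rate correction: fit QT = a * RR**beta per subject
# from drug-free recordings, then report QTc = QT / RR**beta. Data here are simulated.
import numpy as np

rng = np.random.default_rng(2)
rr = rng.uniform(0.6, 1.2, size=500)                              # RR intervals (s)
qt = 0.42 * rr**0.35 * np.exp(rng.normal(scale=0.01, size=500))   # simulated QT (s)

beta, log_a = np.polyfit(np.log(rr), np.log(qt), 1)  # slope = subject-specific exponent
qtc = qt / rr**beta                                  # individually corrected QT
print(f"fitted exponent beta = {beta:.3f}, mean QTc = {qtc.mean()*1000:.1f} ms")
```

The contrast drawn in the abstract is essentially about how many (QT, RR) pairs feed this regression: a handful of 10-s tracings versus a full 24-h continuous recording.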
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-13
... DEPARTMENT OF THE INTERIOR National Indian Gaming Commission Fee Rate Correction In notice document 2013-05334, appearing on page 14821 in the issue of Thursday, March 7, 2013, make the following correction: On page 14821, in the second column, in the eighth line from the bottom of the page, ``Dated...
WebArray: an online platform for microarray data analysis
Xia, Xiaoqin; McClelland, Michael; Wang, Yipeng
2005-01-01
Background Many cutting-edge microarray analysis tools and algorithms, including the commonly used limma and affy packages in Bioconductor, need sophisticated knowledge of mathematics, statistics and computer skills for implementation. Commercially available software can provide a user-friendly interface at considerable cost. To facilitate the use of these tools for microarray data analysis on an open platform we developed an online microarray data analysis platform, WebArray, for bench biologists to utilize these tools to explore data from single/dual color microarray experiments. Results The currently implemented functions were based on the limma and affy packages from Bioconductor, the spacings LOESS histogram (SPLOSH) method, a PCA-assisted normalization method and a genome mapping method. WebArray incorporates these packages and provides a user-friendly interface for accessing a wide range of key functions of limma and others, such as spot quality weighting, background correction, graphical plotting, normalization, linear modeling, empirical Bayes statistical analysis, false discovery rate (FDR) estimation, and chromosomal mapping for genome comparison. Conclusion WebArray offers a convenient platform for bench biologists to access several cutting-edge microarray data analysis tools. The website is freely available at . It runs on a Linux server with Apache and MySQL. PMID:16371165
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-06-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models, including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is selected using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect peaks with lower false discovery rates than the existing algorithms, and that a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.
Cheng, Chia-Ying; Tsai, Chia-Feng; Chen, Yu-Ju; Sung, Ting-Yi; Hsu, Wen-Lian
2013-05-03
As spectral library searching has received increasing attention for peptide identification, constructing good decoy spectra from the target spectra is the key to correctly estimating the false discovery rate when searching against the concatenated target-decoy spectral library. Several methods have been proposed to construct decoy spectral libraries. Most of them construct decoy peptide sequences and then generate theoretical spectra accordingly. In this paper, we propose a method, called precursor-swap, which constructs decoy spectral libraries directly at the "spectrum level", without generating decoy peptide sequences, by swapping the precursors of two spectra selected according to a very simple rule. Our spectrum-based method does not require additional effort to deal with ion types (e.g., a, b or c ions), fragmentation mechanisms (e.g., CID or ETD), or unannotated peaks, but preserves many spectral properties. The precursor-swap method is evaluated on different spectral libraries and the resulting decoy ratios show that it is comparable to other methods. Notably, it is efficient in time and memory usage for constructing decoy libraries. A software tool called Precursor-Swap-Decoy-Generation (PSDG) is publicly available for download at http://ms.iis.sinica.edu.tw/PSDG/.
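The core operation is swapping precursor values between pairs of library spectra while leaving their fragment peaks untouched. Below is a toy sketch under an assumed pairing rule (adjacent spectra after sorting by precursor m/z); the paper's actual selection rule is not reproduced here, and all names and values are illustrative:

```python
# Toy sketch of precursor-swap decoy generation. The pairing rule used here
# (swap neighbours after sorting by precursor m/z) is an illustrative assumption.
from dataclasses import dataclass, replace
from typing import List, Tuple

@dataclass(frozen=True)
class LibrarySpectrum:
    peptide: str
    precursor_mz: float
    peaks: Tuple[Tuple[float, float], ...]   # (fragment m/z, intensity) pairs

def precursor_swap(library: List[LibrarySpectrum]) -> List[LibrarySpectrum]:
    ordered = sorted(library, key=lambda s: s.precursor_mz)
    decoys = []
    for a, b in zip(ordered[0::2], ordered[1::2]):
        # swap precursors, keep each spectrum's own fragment peaks unchanged
        decoys.append(replace(a, precursor_mz=b.precursor_mz, peptide="DECOY_" + a.peptide))
        decoys.append(replace(b, precursor_mz=a.precursor_mz, peptide="DECOY_" + b.peptide))
    return decoys

library = [LibrarySpectrum("PEPTIDEA", 512.3, ((200.1, 1.0), (350.4, 0.5))),
           LibrarySpectrum("PEPTIDEB", 640.8, ((300.2, 0.7), (420.9, 0.9)))]
print(precursor_swap(library))
```

Because only the precursor is altered, the decoy spectra retain realistic fragment peak distributions, which is the spectral-property preservation the abstract emphasizes.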
NASA Astrophysics Data System (ADS)
Li, Wenjing; He, Huiguang; Lu, Jingjing; Lv, Bin; Li, Meng; Jin, Zhengyu
2009-10-01
Tensor-based morphometry (TBM) is an automated technique for detecting anatomical differences between populations by examining the gradients of the deformation fields used to nonlinearly warp MR images. The purpose of this study was to investigate whole-brain volume changes between patients with unilateral temporal lobe epilepsy (TLE) and controls using TBM with DARTEL, which can achieve more accurate inter-subject registration of brain images. T1-weighted images were acquired from 21 left-TLE patients, 21 right-TLE patients and 21 healthy controls, matched in age and gender. The determinants of the gradient of the deformation fields at the voxel level were obtained to quantify the expansion or contraction of individual images relative to the template, and a logarithmic transformation was then applied. A whole-brain analysis was performed using a general linear model (GLM), and multiple comparisons were corrected by the false discovery rate (FDR) at p<0.05. For left-TLE patients, significant volume reductions were found in the hippocampus, cingulate gyrus, precentral gyrus, right temporal lobe and cerebellum. These results potentially support the utility of TBM with DARTEL for studying structural changes between groups.
Discovery of a dual active galactic nucleus with ˜8 kpc separation
NASA Astrophysics Data System (ADS)
Ellison, Sara L.; Secrest, Nathan J.; Mendel, J. Trevor; Satyapal, Shobita; Simard, Luc
2017-09-01
Targeted searches for dual active galactic nuclei (AGNs), with separations 1-10 kpc, have yielded relatively few successes. A recent pilot survey by Satyapal et al. has demonstrated that mid-infrared (mid-IR) pre-selection has the potential to significantly improve the success rate for dual AGN confirmation in late stage galaxy mergers. In this Letter, we combine mid-IR selection with spatially resolved optical AGN diagnostics from the Mapping Nearby Galaxies at Apache Point Observatory survey to identify a candidate dual AGN in the late stage major galaxy merger SDSS J140737.17+442856.2 at z = 0.143. The nature of the dual AGN is confirmed with Chandra X-ray observations that identify two hard X-ray point sources with intrinsic (corrected for absorption) 2-10 keV luminosities of 4 × 10^41 and 3.5 × 10^43 erg s^-1 separated by 8.3 kpc. The neutral hydrogen absorption (~10^22 cm^-2) towards the two AGNs is lower than in duals selected solely on their mid-IR colours, indicating that strategies that combine optical and mid-IR diagnostics may complement techniques that identify the highly obscured dual phase, such as at high X-ray energies or mid-IR only.
Missing value imputation strategies for metabolomics data.
Armitage, Emily Grace; Godzien, Joanna; Alonso-Herranz, Vanesa; López-Gonzálvez, Ángeles; Barbas, Coral
2015-12-01
Missing values can arise for different reasons, and depending on these origins they should be considered and dealt with differently. In this research, four methods of imputation have been compared with respect to their effects on the normality and variance of data, on statistical significance, and on the approximation of a suitable threshold for accepting missing data as truly missing. Additionally, the effects of different strategies for controlling the familywise error rate or false discovery rate, and how they interact with the different strategies for missing value imputation, have been evaluated. Missing values were found to affect the normality and variance of data, and k-means nearest neighbour imputation was the best method tested for restoring these. Bonferroni correction was the best method for maximizing true positives and minimizing false positives, and it was observed that as little as 40% missing data could be truly missing. The range between 40 and 70% missing values was defined as a "gray area", and a strategy has therefore been proposed that provides a balance between the optimal imputation strategy, which was k-means nearest neighbour, and the best approximation of positioning real zeros. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
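For context, a nearest-neighbour imputation of the kind compared in this study, followed by a Bonferroni-style correction on feature-wise tests, can be sketched with scikit-learn and SciPy. This stands in for, rather than reproduces, the "k-means nearest neighbour" routine the authors evaluated, and the data are simulated:

```python
# Hedged sketch: nearest-neighbour imputation of missing metabolomics intensities,
# followed by a Bonferroni correction of feature-wise p-values.
import numpy as np
from sklearn.impute import KNNImputer
from scipy import stats

rng = np.random.default_rng(3)
data = rng.lognormal(mean=5, sigma=0.4, size=(40, 30))   # 40 samples x 30 metabolite features
mask = rng.random(data.shape) < 0.2                      # 20% values missing at random
data_missing = np.where(mask, np.nan, data)

imputed = KNNImputer(n_neighbors=5).fit_transform(data_missing)

group = np.repeat([0, 1], 20)                            # two study groups
pvals = np.array([stats.ttest_ind(imputed[group == 0, j], imputed[group == 1, j]).pvalue
                  for j in range(imputed.shape[1])])
bonferroni = np.minimum(pvals * len(pvals), 1.0)         # familywise error control
print(f"features significant after Bonferroni: {(bonferroni < 0.05).sum()}")
```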
Statistical 3D shape analysis of gender differences in lateral ventricles
NASA Astrophysics Data System (ADS)
He, Qing; Karpman, Dmitriy; Duan, Ye
2010-03-01
This paper aims at analyzing gender differences in the 3D shapes of lateral ventricles, which will provide reference for the analysis of brain abnormalities related to neurological disorders. Previous studies mostly focused on volume analysis, and the main challenge in shape analysis is the required step of establishing shape correspondence among individual shapes. We developed a simple and efficient method based on anatomical landmarks. 14 females and 10 males with matching ages participated in this study. 3D ventricle models were segmented from MR images by a semiautomatic method. Six anatomically meaningful landmarks were identified by detecting the maximum curvature point in a small neighborhood of a manually clicked point on the 3D model. Thin-plate splines were used to transform a randomly selected template shape to each of the remaining shape instances, and point correspondence was established according to Euclidean distance and surface normals. All shapes were spatially aligned by Generalized Procrustes Analysis. The Hotelling T² two-sample metric was used to compare ventricle shapes between males and females, and false discovery rate estimation was used to correct for multiple comparisons. The results revealed significant differences in the anterior horn of the right ventricle.
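The per-location group comparison relies on the two-sample Hotelling T² statistic. A compact sketch of that statistic, with its standard F approximation, is shown below for a single correspondence point using simulated 3D coordinates; it is offered only to make the test concrete, not to reproduce the study's analysis:

```python
# Sketch of a two-sample Hotelling T^2 test at one surface correspondence point
# (3D coordinates), with the usual F-distribution conversion; data are simulated.
import numpy as np
from scipy import stats

def hotelling_t2(sample_a, sample_b):
    n1, n2, p = len(sample_a), len(sample_b), sample_a.shape[1]
    diff = sample_a.mean(axis=0) - sample_b.mean(axis=0)
    pooled = ((n1 - 1) * np.cov(sample_a, rowvar=False)
              + (n2 - 1) * np.cov(sample_b, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(pooled, diff)
    f_stat = t2 * (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p)
    p_value = stats.f.sf(f_stat, p, n1 + n2 - p - 1)
    return t2, p_value

rng = np.random.default_rng(4)
females = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(14, 3))
males = rng.normal(loc=[0.6, 0.0, 0.0], scale=1.0, size=(10, 3))
print(hotelling_t2(females, males))
```

In a full surface analysis this test would be repeated at every correspondence point, and the resulting p-values would then go through the FDR correction mentioned in the abstract.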
Reduced genetic influence on childhood obesity in small for gestational age children
2013-01-01
Background Children born small-for-gestational-age (SGA) are at increased risk of developing obesity and metabolic diseases later in life, a risk which is magnified if followed by accelerated postnatal growth. We investigated whether common gene variants associated with adult obesity were associated with increased postnatal growth, as measured by BMI z-score, in children born SGA and appropriate for gestational age (AGA) in the Auckland Birthweight Collaborative. Methods A total of 37 candidate SNPs were genotyped on 547 European children (228 SGA and 319 AGA). Repeated measures of BMI (z-score) were used for assessing obesity status, and results were corrected for multiple testing using the false discovery rate. Results SGA children had a lower BMI z-score than non-SGA children at assessment age 3.5, 7 and 11 years. We confirmed 27 variants within 14 obesity risk genes to be individually associated with increasing early childhood BMI, predominantly in those born AGA. Conclusions Genetic risk variants are less important in influencing early childhood BMI in those born SGA than in those born AGA, suggesting that non-genetic or environmental factors may be more important in influencing childhood BMI in those born SGA. PMID:23339409
Functional and Genomic Features of Human Genes Mutated in Neuropsychiatric Disorders.
Forero, Diego A; Prada, Carlos F; Perry, George
2016-01-01
In recent years, a large number of studies around the world have led to the identification of causal genes for hereditary types of common and rare neurological and psychiatric disorders. To explore the functional and genomic features of known human genes mutated in neuropsychiatric disorders. A systematic search was used to develop a comprehensive catalog of genes mutated in neuropsychiatric disorders (NPD). Functional enrichment and protein-protein interaction analyses were carried out. A false discovery rate approach was used for correction for multiple testing. We found several functional categories that are enriched among NPD genes, such as gene ontologies, protein domains, tissue expression, signaling pathways and regulation by brain-expressed miRNAs and transcription factors. Sixty six of those NPD genes are known to be druggable. Several topographic parameters of protein-protein interaction networks and the degree of conservation between orthologous genes were identified as significant among NPD genes. These results represent one of the first analyses of enrichment of functional categories of genes known to harbor mutations for NPD. These findings could be useful for a future creation of computational tools for prioritization of novel candidate genes for NPD.
Functional and Genomic Features of Human Genes Mutated in Neuropsychiatric Disorders
Forero, Diego A.; Prada, Carlos F.; Perry, George
2016-01-01
Background: In recent years, a large number of studies around the world have led to the identification of causal genes for hereditary types of common and rare neurological and psychiatric disorders. Objective: To explore the functional and genomic features of known human genes mutated in neuropsychiatric disorders. Methods: A systematic search was used to develop a comprehensive catalog of genes mutated in neuropsychiatric disorders (NPD). Functional enrichment and protein-protein interaction analyses were carried out. A false discovery rate approach was used for correction for multiple testing. Results: We found several functional categories that are enriched among NPD genes, such as gene ontologies, protein domains, tissue expression, signaling pathways and regulation by brain-expressed miRNAs and transcription factors. Sixty six of those NPD genes are known to be druggable. Several topographic parameters of protein-protein interaction networks and the degree of conservation between orthologous genes were identified as significant among NPD genes. Conclusion: These results represent one of the first analyses of enrichment of functional categories of genes known to harbor mutations for NPD. These findings could be useful for a future creation of computational tools for prioritization of novel candidate genes for NPD. PMID:27990183
NASA Technical Reports Server (NTRS)
Watts, Anna L.; Strohmayer, Tod E.
2005-01-01
The recent discovery of high frequency oscillations in giant flares from SGR 1806-20 and SGR 1900+14 may be the first direct detection of vibrations in a neutron star crust. If this interpretation is correct it offers a novel means of testing the neutron star equation of state, crustal breaking strain, and magnetic field configuration. Using timing data from RHESSI, we have confirmed the detection of a 92.5 Hz Quasi-Periodic Oscillation (QPO) in the tail of the SGR 1806-20 giant flare. We also find another, stronger, QPO at higher energies, at 626.5 Hz. Both QPOs are visible only at particular (but different) rotational phases, implying an association with a specific area of the neutron star surface or magnetosphere. At lower frequencies we confirm the detection of an 18 Hz QPO, at the same rotational phase as the 92.5 Hz QPO, and report the additional presence of a broad 26 Hz QPO. We are however unable to make a robust confirmation of the presence of a 30 Hz QPO, despite higher count rates. We discuss our results in the light of neutron star vibration models.
NASA Astrophysics Data System (ADS)
Shangguan, Jingbo; Li, Zhongbao
2017-06-01
Thirty-five new microsatellite loci from the sea cucumbers Holothurian scabra (Jaeger, 1833) and Apostichopus japonicas (Selenka, 1867) were screened and characterized using the method of magnetic bead enrichment. Of the twenty-four polymorphic loci tested, eighteen were consistent with Hardy-Weinberg equilibrium after a modified false discovery rate (B-Y FDR) correction, whereas six showed statistically significant deviations (CHS2 and CHS11: P <0.014790; FCS1, FCS6, FCS8 and FCS14: P <0.015377). Furthermore, four species of plesiomorphous and related sea cucumbers (Holothurian scabra, Holothuria leucospilota, Stichopus horrens and Apostichopus japonicas) were tested for mutual cross-amplification using a total of ninety microsatellite loci. Although transferability and universality of all loci were generally low, the results of the cross-species study showed that the markers can be applied to identify individuals to species according to the presence or absence of specific microsatellite alleles. The microsatellite markers reported here will contribute to the study of genetic diversity, assisted breeding, and population conservation in sea cucumbers, as well as allow for the identification of individuals to closely related species.
NASA Astrophysics Data System (ADS)
Shangguan, Jingbo; Li, Zhongbao
2018-03-01
Thirty-five new microsatellite loci from the sea cucumbers Holothurian scabra (Jaeger, 1833) and Apostichopus japonicas (Selenka, 1867) were screened and characterized using the method of magnetic bead enrichment. Of the twenty-four polymorphic loci tested, eighteen were consistent with Hardy-Weinberg equilibrium after a modified false discovery rate (B-Y FDR) correction, whereas six showed statistically significant deviations (CHS2 and CHS11: P<0.014790; FCS1, FCS6, FCS8 and FCS14: P<0.015377). Furthermore, four species of plesiomorphous and related sea cucumbers (Holothurian scabra, Holothuria leucospilota, Stichopus horrens and Apostichopus japonicas) were tested for mutual cross-amplification using a total of ninety microsatellite loci. Although transferability and universality of all loci were generally low, the results of the cross-species study showed that the markers can be applied to identify individuals to species according to the presence or absence of specific microsatellite alleles. The microsatellite markers reported here will contribute to the study of genetic diversity, assisted breeding, and population conservation in sea cucumbers, as well as allow for the identification of individuals to closely related species.
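The B-Y (Benjamini-Yekutieli) step used above is the FDR variant that remains valid under arbitrary dependence between tests. As a sketch of how per-locus Hardy-Weinberg p-values might be corrected this way, using statsmodels and placeholder p-values rather than the study's data:

```python
# Sketch: Benjamini-Yekutieli (B-Y) FDR correction of per-locus Hardy-Weinberg
# p-values. The p-values are placeholders, not the sea cucumber results.
import numpy as np
from statsmodels.stats.multitest import multipletests

locus_pvalues = np.array([0.001, 0.02, 0.04, 0.10, 0.30, 0.45,
                          0.003, 0.07, 0.60, 0.80, 0.015, 0.25])
reject, p_adj, _, _ = multipletests(locus_pvalues, alpha=0.05, method="fdr_by")

for p_raw, p_by, sig in zip(locus_pvalues, p_adj, reject):
    print(f"raw p = {p_raw:.3f}  B-Y adjusted = {p_by:.3f}  deviates from HWE: {sig}")
```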
Kim, Eun-Ju; Kim, Dae-Hong; Lee, Sang Hoon; Huh, Yong-Min; Song, Ho-Taek; Suh, Jin-Suck
2004-04-01
This study compared two methods, corrected (separation of T1 and T2* effects) and uncorrected, in order to determine the suitability of the perfusion and permeability measures obtained through ΔR2* and ΔR1 analyses. A dynamic susceptibility contrast dual gradient echo (DSC-DGE) sequence was used to image the fixed phantoms and flow phantoms (Sephadex perfusion phantoms and a dialyzer phantom for the permeability measurements). The results confirmed that the corrected relaxation rate was linearly proportional to gadolinium-diethyltriamine pentaacetic acid (Gd-DTPA) concentration, whereas the uncorrected relaxation rate was not, in the fixed phantom and simulation experiments. For the perfusion measurements, the correction process was found to be necessary not only for the ΔR1 time curve but also for the ΔR2* time curve analyses. Perfusion could not be measured without correcting the ΔR2* time curve. The water volume, which was expressed as the perfusion amount, was found to be closer to the theoretical value when the corrected ΔR1 curve was used in the calculations. However, this may hold only at the low tissue concentrations of Gd-DTPA used in this study. For the permeability measurements based on the two-compartment model, the permeability factor k_ev (e = extravascular, v = vascular) from the outside to the inside of the hollow fibers was greater with the corrected ΔR1 method than with the uncorrected ΔR1 method. The differences between the corrected and the uncorrected ΔR1 values were confirmed by the simulation experiments. In conclusion, this study proposes that correction of the relaxation rates ΔR2* and ΔR1 is indispensable for making accurate perfusion and permeability measurements, and that DSC-DGE is a useful method for obtaining information on perfusion and permeability simultaneously.
Zhang, Chao; Yang, Hongyu; Qin, Wen; Liu, Chang; Qi, Zhigang; Chen, Nan; Li, Kuncheng
2017-01-01
Executive control function (ECF) deficit is a common complication of temporal lobe epilepsy (TLE). Characteristics of brain network connectivity in TLE with ECF dysfunction are still unknown. The aim of this study was to investigate resting-state functional connectivity (FC) changes in patients with unilateral intractable TLE with impaired ECF. Forty right-handed patients with left TLE confirmed by comprehensive preoperative evaluation and postoperative pathological findings were enrolled. The patients were divided into normal ECF (G1) and decreased ECF (G2) groups according to whether they showed ECF impairment on the Wisconsin Card Sorting Test (WCST). Twenty-three healthy volunteers were recruited as the healthy control (HC) group. All subjects underwent resting-state functional magnetic resonance imaging (rs-fMRI). Group-information-guided independent component analysis (GIG-ICA) was performed to estimate resting-state networks (RSNs) for all subjects. A general linear model (GLM) was employed to analyze intra-network FC (p < 0.05, false discovery rate, FDR correction) and inter-network FC (p < 0.05, Bonferroni correction) of RSNs among the three groups. Pearson correlations between FC and neuropsychological tests were also determined through partial correlation analysis (p < 0.05). Eleven meaningful RSNs were identified from the 40 left TLE and 23 HC subjects. Comparison of intra-network FC of all 11 meaningful RSNs did not reveal significant differences among the three groups (p > 0.05, FDR correction). For the inter-network analysis, G2 exhibited decreased FC between the executive control network (ECN) and default-mode network (DMN) when compared with G1 (p = 0.000, Bonferroni correction) and HC (p = 0.000, Bonferroni correction). G1 showed no significant difference in FC between ECN and DMN when compared with HC. Furthermore, FC between ECN and DMN had significant negative correlations with perseverative responses (RP), response errors (RE) and perseverative errors (RPE), and a significant positive correlation with categories completed (CC), in both G1 and G2 (p < 0.05). No significant difference in Montreal Cognitive Assessment (MoCA) scores was found between G1 and G2, while intelligence quotient (IQ) testing showed a significant difference between G1 and G2. There was no correlation between FC and either MoCA or IQ performance. Our findings suggest that ECF impairment in unilateral TLE is not confined to the diseased temporal lobe. Decreased FC between DMN and ECN may be an important characteristic of RSNs in intractable unilateral TLE. PMID:29375338
Hickey, John M; Chiurugwi, Tinashe; Mackay, Ian; Powell, Wayne
2017-08-30
The rate of annual yield increases for major staple crops must more than double relative to current levels in order to feed a predicted global population of 9 billion by 2050. Controlled hybridization and selective breeding have been used for centuries to adapt plant and animal species for human use. However, achieving higher, sustainable rates of improvement in yields in various species will require renewed genetic interventions and dramatic improvement of agricultural practices. Genomic prediction of breeding values has the potential to improve selection, reduce costs and provide a platform that unifies breeding approaches, biological discovery, and tools and methods. Here we compare and contrast some animal and plant breeding approaches to make a case for bringing the two together through the application of genomic selection. We propose a strategy for the use of genomic selection as a unifying approach to deliver innovative 'step changes' in the rate of genetic gain at scale.
Frequency of under-corrected refractive errors in elderly Chinese in Beijing.
Xu, Liang; Li, Jianjun; Cui, Tongtong; Tong, Zhongbiao; Fan, Guizhi; Yang, Hua; Sun, Baochen; Zheng, Yuanyuan; Jonas, Jost B
2006-07-01
The aim of the study was to evaluate the prevalence of under-corrected refractive error among elderly Chinese in the Beijing area. The population-based, cross-sectional, cohort study comprised 4,439 subjects out of 5,324 subjects asked to participate (response rate 83.4%) with an age of 40+ years. It was divided into a rural part [1,973 (44.4%) subjects] and an urban part [2,466 (55.6%) subjects]. Habitual and best-corrected visual acuity was measured. Under-corrected refractive error was defined as an improvement in visual acuity of the better eye of at least two lines with best possible refractive correction. The rate of under-corrected refractive error was 19.4% (95% confidence interval, 18.2, 20.6). In a multiple regression analysis, prevalence and size of under-corrected refractive error in the better eye was significantly associated with lower level of education (P<0.001), female gender (P<0.001), and age (P=0.001). Under-correction of refractive error is relatively common among elderly Chinese in the Beijing area when compared with data from other populations.
2009-04-01
Within four years, there were 43 additional discoveries, the highest rate of any location in the world. Deep-water oil fields provide the region and...for continued discoveries of high quality crude oil is extremely likely, spurring interest and development in the region. The geographic
NASA Astrophysics Data System (ADS)
Liu, Xing-fa; Cen, Ming
2007-12-01
The neural network system error correction method is more precise than the least squares and the spherical harmonics function system error correction methods. The accuracy of the neural network correction method is mainly related to the architecture of the network. Analysis and simulation show that both the BP neural network and the RBF neural network system error correction methods achieve high correction accuracy; for small training sample sets, the RBF network method is preferable to the BP network method when training rate and network scale are taken into account.
The Production of 3D Tumor Spheroids for Cancer Drug Discovery
Sant, Shilpa; Johnston, Paul A.
2017-01-01
New cancer drug approval rates are ≤ 5% despite significant investments in cancer research, drug discovery and development. One strategy to improve the rate of success of new cancer drugs transitioning into the clinic would be to more closely align the cellular models used in the early lead discovery with pre-clinical animal models and patient tumors. For solid tumors, this would mandate the development and implementation of three dimensional (3D) in vitro tumor models that more accurately recapitulate human solid tumor architecture and biology. Recent advances in tissue engineering and regenerative medicine have provided new techniques for 3D spheroid generation and a variety of in vitro 3D cancer models are being explored for cancer drug discovery. Although homogeneous assay methods and high content imaging approaches to assess tumor spheroid morphology, growth and viability have been developed, the implementation of 3D models in HTS remains challenging due to reasons that we discuss in this review. Perhaps the biggest obstacle to achieve acceptable HTS assay performance metrics occurs in 3D tumor models that produce spheroids with highly variable morphologies and/or sizes. We highlight two methods that produce uniform size-controlled 3D multicellular tumor spheroids that are compatible with cancer drug research and HTS; tumor spheroids formed in ultra-low attachment microplates, or in polyethylene glycol dimethacrylate hydrogel microwell arrays. PMID:28647083
An Integrated Microfluidic Processor for DNA-Encoded Combinatorial Library Functional Screening
2017-01-01
DNA-encoded synthesis is rekindling interest in combinatorial compound libraries for drug discovery and in technology for automated and quantitative library screening. Here, we disclose a microfluidic circuit that enables functional screens of DNA-encoded compound beads. The device carries out library bead distribution into picoliter-scale assay reagent droplets, photochemical cleavage of compound from the bead, assay incubation, laser-induced fluorescence-based assay detection, and fluorescence-activated droplet sorting to isolate hits. DNA-encoded compound beads (10-μm diameter) displaying a photocleavable positive control inhibitor pepstatin A were mixed (1920 beads, 729 encoding sequences) with negative control beads (58 000 beads, 1728 encoding sequences) and screened for cathepsin D inhibition using a biochemical enzyme activity assay. The circuit sorted 1518 hit droplets for collection following 18 min incubation over a 240 min analysis. Visual inspection of a subset of droplets (1188 droplets) yielded a 24% false discovery rate (1166 pepstatin A beads; 366 negative control beads). Using template barcoding strategies, it was possible to count hit collection beads (1863) using next-generation sequencing data. Bead-specific barcodes enabled replicate counting, and the false discovery rate was reduced to 2.6% by only considering hit-encoding sequences that were observed on >2 beads. This work represents a complete distributable small molecule discovery platform, from microfluidic miniaturized automation to ultrahigh-throughput hit deconvolution by sequencing. PMID:28199790
An Integrated Microfluidic Processor for DNA-Encoded Combinatorial Library Functional Screening.
MacConnell, Andrew B; Price, Alexander K; Paegel, Brian M
2017-03-13
DNA-encoded synthesis is rekindling interest in combinatorial compound libraries for drug discovery and in technology for automated and quantitative library screening. Here, we disclose a microfluidic circuit that enables functional screens of DNA-encoded compound beads. The device carries out library bead distribution into picoliter-scale assay reagent droplets, photochemical cleavage of compound from the bead, assay incubation, laser-induced fluorescence-based assay detection, and fluorescence-activated droplet sorting to isolate hits. DNA-encoded compound beads (10-μm diameter) displaying a photocleavable positive control inhibitor pepstatin A were mixed (1920 beads, 729 encoding sequences) with negative control beads (58 000 beads, 1728 encoding sequences) and screened for cathepsin D inhibition using a biochemical enzyme activity assay. The circuit sorted 1518 hit droplets for collection following 18 min incubation over a 240 min analysis. Visual inspection of a subset of droplets (1188 droplets) yielded a 24% false discovery rate (1166 pepstatin A beads; 366 negative control beads). Using template barcoding strategies, it was possible to count hit collection beads (1863) using next-generation sequencing data. Bead-specific barcodes enabled replicate counting, and the false discovery rate was reduced to 2.6% by only considering hit-encoding sequences that were observed on >2 beads. This work represents a complete distributable small molecule discovery platform, from microfluidic miniaturized automation to ultrahigh-throughput hit deconvolution by sequencing.
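The false-discovery-rate reduction reported at the end of this record comes from a simple replicate filter: only encoding sequences observed on more than two hit beads are retained. A toy sketch of that counting step follows; the sequence identifiers and counts are invented for illustration:

```python
# Toy sketch of the replicate filter: keep only encoding sequences seen on >2 hit beads.
# Sequence names and counts below are invented for illustration.
from collections import Counter

hit_bead_sequences = (["SEQ_PEPA_01"] * 5 + ["SEQ_PEPA_02"] * 4
                      + ["SEQ_NEG_17"] * 1 + ["SEQ_NEG_42"] * 2)
counts = Counter(hit_bead_sequences)

confident_hits = {seq for seq, n in counts.items() if n > 2}   # replicate-supported hits
print(f"beads per sequence: {dict(counts)}")
print(f"sequences passing the >2-bead filter: {sorted(confident_hits)}")
```

Requiring replicate observations trades some sensitivity for the large drop in false discovery rate (24% to 2.6%) described in the abstract.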
A statistical method for the conservative adjustment of false discovery rate (q-value).
Lai, Yinglei
2017-03-14
q-value is a widely used statistical method for estimating the false discovery rate (FDR), which is a conventional significance measure in the analysis of genome-wide expression data. The q-value is itself a random variable and may underestimate the FDR in practice. An underestimated FDR can lead to unexpected false discoveries in follow-up validation experiments. This issue has not been well addressed in the literature, especially when a permutation procedure is necessary for p-value calculation. We propose a statistical method for the conservative adjustment of the q-value. In practice, it is usually necessary to calculate p-values by a permutation procedure; this was also considered in our adjustment method. We used simulation data as well as experimental microarray and sequencing data to illustrate the usefulness of our method. The conservativeness of our approach has been mathematically confirmed in this study. We have demonstrated the importance of conservative adjustment of the q-value, particularly when the proportion of differentially expressed genes is small or the overall differential expression signal is weak.
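For orientation, the baseline quantity being adjusted follows the Benjamini-Hochberg step-up form. The sketch below computes that standard q-value for a vector of p-values; it illustrates the baseline only, not the conservative adjustment proposed in the paper:

```python
# Baseline q-value (Benjamini-Hochberg step-up) computation for a vector of p-values.
# This shows the quantity being adjusted; it is not the paper's conservative method.
import numpy as np

def bh_qvalues(pvals):
    m = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order] * m / np.arange(1, m + 1)       # p_(i) * m / i
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]    # enforce monotonicity from the right
    q = np.empty(m)
    q[order] = np.minimum(ranked, 1.0)
    return q

pvals = np.array([0.0002, 0.01, 0.03, 0.04, 0.2, 0.5, 0.7])
print(bh_qvalues(pvals))
```

When p-values come from a finite permutation procedure, these estimates can be anti-conservative, which is the situation the paper's adjustment targets.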
Haley, Valerie B; DiRienzo, A Gregory; Lutterloh, Emily C; Stricof, Rachel L
2014-01-01
To assess the effect of multiple sources of bias on state- and hospital-specific National Healthcare Safety Network (NHSN) laboratory-identified Clostridium difficile infection (CDI) rates. Sensitivity analysis. A total of 124 New York hospitals in 2010. New York NHSN CDI events from audited hospitals were matched to New York hospital discharge billing records to obtain additional information on patient age, length of stay, and previous hospital discharges. "Corrected" hospital-onset (HO) CDI rates were calculated after (1) correcting inaccurate case reporting found during audits, (2) incorporating knowledge of laboratory results from outside hospitals, (3) excluding days when patients were not at risk from the denominator of the rates, and (4) adjusting for patient age. Data sets were simulated with each of these sources of bias reintroduced individually and combined. The simulated rates were compared with the corrected rates. Performance (ie, better, worse, or average compared with the state average) was categorized, and misclassification compared with the corrected data set was measured. Counting days patients were not at risk in the denominator reduced the state HO rate by 45% and resulted in 8% misclassification. Age adjustment and reporting errors also shifted rates (7% and 6% misclassification, respectively). Changing the NHSN protocol to require reporting of age-stratified patient-days and adjusting for patient-days at risk would improve comparability of rates across hospitals. Further research is needed to validate the risk-adjustment model before these data should be used as hospital performance measures.
Historical Perspective: What Constitutes Discovery (of a New Virus)?
Murphy, F A
2016-01-01
A historic review of the discovery of new viruses leads to reminders of traditions that have evolved over 118 years. One such tradition gives credit for the discovery of a virus to the investigator(s) who not only carried out the seminal experiments but also correctly interpreted the findings (within the technological context of the day). Early on, ultrafiltration played a unique role in "proving" that an infectious agent was a virus, as did a failure to find any microscopically visible agent, failure to show replication of the agent in the absence of viable cells, thermolability of the agent, and demonstration of a specific immune response to the agent so as to rule out duplicates and close variants. More difficult was "proving" that the new virus was the etiologic agent of the disease ("proof of causation")-for good reasons this matter has been revisited several times over the years as technologies and perspectives have changed. One tradition is that the discoverers get to name their discovery, their new virus (unless some grievous convention has been broken)-the stability of these virus names has been a way to honor the discoverer(s) over the long term. Several vignettes have been chosen to illustrate difficulties in holding to the traditions (the vignettes include vaccinia and variola viruses, yellow fever virus, influenza viruses, Crimean-Congo hemorrhagic fever virus, Murray Valley encephalitis virus, human immunodeficiency virus 1, Sin Nombre virus, and Ebola virus). Each suggests lessons for the future. One way to assure that discoveries are forever linked with discoverers would be a permanent archive in one of the universal virus databases that have been constructed for other purposes. However, no current database seems ideal-perhaps members of the global community of virologists will have an ideal solution. © 2016 Elsevier Inc. All rights reserved.
Stau coannihilation, compressed spectrum, and SUSY discovery potential at the LHC
NASA Astrophysics Data System (ADS)
Aboubrahim, Amin; Nath, Pran; Spisak, Andrew B.
2017-06-01
The lack of observation of supersymmetry thus far implies that the weak supersymmetry scale is larger than what was thought before the LHC era. This observation is strengthened by the Higgs boson mass measurement at ˜125 GeV , which within supersymmetric models implies a large loop correction and a weak supersymmetry scale lying in the several TeV region. In addition if neutralino is the dark matter, its relic density puts further constraints on models often requiring coannihilation to reduce the neutralino relic density to be consistent with experimental observation. The coannihilation in turn implies that the mass gap between the lightest supersymmetric particle and the next to lightest supersymmetric particle will be small, leading to softer final states and making the observation of supersymmetry challenging. In this work we investigate stau coannihilation models within supergravity grand unified models and the potential of discovery of such models at the LHC in the post-Higgs boson discovery era. We utilize a variety of signal regions to optimize the discovery of supersymmetry in the stau coannihilation region. In the analysis presented we impose the relic density constraint as well as the constraint of the Higgs boson mass. The range of sparticle masses discoverable up to the optimal integrated luminosity of the HL-LHC is investigated. It is found that the mass difference between the stau and the neutralino does not exceed ˜20 GeV over the entire mass range of the models explored. Thus the discovery of a supersymmetric signal arising from the stau coannihilation region will also provide a measurement of the neutralino mass. The direct detection of neutralino dark matter is analyzed within the class of stau coannihilation models investigated. The analysis is extended to include multiparticle coannihilation where stau along with chargino and the second neutralino enter into the coannihilation process.
Isgut, Monica; Rao, Mukkavilli; Yang, Chunhua; Subrahmanyam, Vangala; Rida, Padmashree C G; Aneja, Ritu
2018-03-01
Modern drug discovery efforts have had mediocre success rates with increasing developmental costs, and this has encouraged pharmaceutical scientists to seek innovative approaches. Recently with the rise of the fields of systems biology and metabolomics, network pharmacology (NP) has begun to emerge as a new paradigm in drug discovery, with a focus on multiple targets and drug combinations for treating disease. Studies on the benefits of drug combinations lay the groundwork for a renewed focus on natural products in drug discovery. Natural products consist of a multitude of constituents that can act on a variety of targets in the body to induce pharmacodynamic responses that may together culminate in an additive or synergistic therapeutic effect. Although natural products cannot be patented, they can be used as starting points in the discovery of potent combination therapeutics. The optimal mix of bioactive ingredients in natural products can be determined via phenotypic screening. The targets and molecular mechanisms of action of these active ingredients can then be determined using chemical proteomics, and by implementing a reverse pharmacokinetics approach. This review article provides evidence supporting the potential benefits of natural product-based combination drugs, and summarizes drug discovery methods that can be applied to this class of drugs. © 2017 Wiley Periodicals, Inc.
Pandey, Udai Bhan
2011-01-01
The common fruit fly, Drosophila melanogaster, is a well studied and highly tractable genetic model organism for understanding molecular mechanisms of human diseases. Many basic biological, physiological, and neurological properties are conserved between mammals and D. melanogaster, and nearly 75% of human disease-causing genes are believed to have a functional homolog in the fly. In the discovery process for therapeutics, traditional approaches employ high-throughput screening for small molecules that is based primarily on in vitro cell culture, enzymatic assays, or receptor binding assays. The majority of positive hits identified through these types of in vitro screens, unfortunately, are found to be ineffective and/or toxic in subsequent validation experiments in whole-animal models. New tools and platforms are needed in the discovery arena to overcome these limitations. The incorporation of D. melanogaster into the therapeutic discovery process holds tremendous promise for an enhanced rate of discovery of higher quality leads. D. melanogaster models of human diseases provide several unique features such as powerful genetics, highly conserved disease pathways, and very low comparative costs. The fly can effectively be used for low- to high-throughput drug screens as well as in target discovery. Here, we review the basic biology of the fly and discuss models of human diseases and opportunities for therapeutic discovery for central nervous system disorders, inflammatory disorders, cardiovascular disease, cancer, and diabetes. We also provide information and resources for those interested in pursuing fly models of human disease, as well as those interested in using D. melanogaster in the drug discovery process. PMID:21415126
2017-11-01
Reports an error in "Replicability and other features of a high-quality science: Toward a balanced and empirical approach" by Eli J. Finkel, Paul W. Eastwick and Harry T. Reis ( Journal of Personality and Social Psychology , 2017[Aug], Vol 113[2], 244-253). In the commentary, there was an error in the References list. The publishing year for the 18th article was cited incorrectly as 2016. The in-text acronym associated with this citation should read instead as LCL2017. The correct References list citation should read as follows: LeBel, E. P., Campbell, L., & Loving, T. J. (2017). Benefits of open and high-powered research outweigh costs. Journal of Personality and Social Psychology , 113, 230-243. http://dx.doi.org/10 .1037/pspi0000049. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2017-30567-002.) Finkel, Eastwick, and Reis (2015; FER2015) argued that psychological science is better served by responding to apprehensions about replicability rates with contextualized solutions than with one-size-fits-all solutions. Here, we extend FER2015's analysis to suggest that much of the discussion of best research practices since 2011 has focused on a single feature of high-quality science-replicability-with insufficient sensitivity to the implications of recommended practices for other features, like discovery, internal validity, external validity, construct validity, consequentiality, and cumulativeness. Thus, although recommendations for bolstering replicability have been innovative, compelling, and abundant, it is difficult to evaluate their impact on our science as a whole, especially because many research practices that are beneficial for some features of scientific quality are harmful for others. For example, FER2015 argued that bigger samples are generally better, but also noted that very large samples ("those larger than required for effect sizes to stabilize"; p. 291) could have the downside of commandeering resources that would have been better invested in other studies. In their critique of FER2015, LeBel, Campbell, and Loving (2016) concluded, based on simulated data, that ever-larger samples are better for the efficiency of scientific discovery (i.e., that there are no tradeoffs). As demonstrated here, however, this conclusion holds only when the replicator's resources are considered in isolation. If we widen the assumptions to include the original researcher's resources as well, which is necessary if the goal is to consider resource investment for the field as a whole, the conclusion changes radically-and strongly supports a tradeoff-based analysis. In general, as psychologists seek to strengthen our science, we must complement our much-needed work on increasing replicability with careful attention to the other features of a high-quality science. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, T; Du, K; Bayouth, J
Purpose: Ventilation change caused by radiation therapy (RT) can be predicted using four-dimensional computed tomography (4DCT) and image registration. This study tested the dependency of predicted post-RT ventilation on effort correction and pre-RT lung function. Methods: Pre-RT and 3 month post-RT 4DCT images were obtained for 13 patients. The 4DCT images were used to create ventilation maps using a deformable image registration based Jacobian expansion calculation. The post-RT ventilation maps were predicted in four different ways using the dose delivered, pre-RT ventilation, and effort correction. The pre-RT ventilation and effort correction were toggled to determine dependency. The four different predicted ventilation maps were compared to the post-RT ventilation map calculated from image registration to establish the best prediction method. Gamma pass rates were used to compare the different maps with the criteria of 2mm distance-to-agreement and 6% ventilation difference. Paired t-tests of gamma pass rates were used to determine significant differences between the maps. Additional gamma pass rates were calculated using only voxels receiving over 20 Gy. Results: The predicted post-RT ventilation maps were in agreement with the actual post-RT maps in the following percentage of voxels averaged over all subjects: 71% with pre-RT ventilation and effort correction, 69% with no pre-RT ventilation and effort correction, 60% with pre-RT ventilation and no effort correction, and 58% with no pre-RT ventilation and no effort correction. When analyzing only voxels receiving over 20 Gy, the gamma pass rates were respectively 74%, 69%, 65%, and 55%. The prediction including both pre-RT ventilation and effort correction was the only prediction with significant improvement over using no prediction (p<0.02). Conclusion: Post-RT ventilation is best predicted using both pre-RT ventilation and effort correction. This is the only prediction that provided a significant improvement on agreement. Research support from NIH grants CA166119 and CA166703, a gift from Roger Koch, and a Pilot Grant from University of Iowa Carver College of Medicine.
ADDME – Avoiding Drug Development Mistakes Early: central nervous system drug discovery perspective
Tsaioun, Katya; Bottlaender, Michel; Mabondzo, Aloise
2009-01-01
The advent of early absorption, distribution, metabolism, excretion, and toxicity (ADMET) screening has increased the attrition rate of weak drug candidates early in the drug-discovery process, and decreased the proportion of compounds failing in clinical trials for ADMET reasons. This paper reviews the history of ADMET screening and its place in pharmaceutical development, and central nervous system drug discovery in particular. Assays that have been developed in response to specific needs and improvements in technology that result in higher throughput and greater accuracy of prediction of human mechanisms of absorption and toxicity are discussed. The paper concludes with the authors' forecast of new models that will better predict human efficacy and toxicity. PMID:19534730
How to revive breakthrough innovation in the pharmaceutical industry.
Munos, Bernard H; Chin, William W
2011-06-29
Over the past 20 years, pharmaceutical companies have implemented conservative management practices to improve the predictability of therapeutics discovery and success rates of drug candidates. This approach has often yielded compounds that are only marginally better than existing therapies, yet require larger, longer, and more complex trials. To fund them, companies have shifted resources away from drug discovery to late clinical development; this has hurt innovation and amplified the crisis brought by the expiration of patents on many best-selling drugs. Here, we argue that more breakthrough therapeutics will reach patients only if the industry ceases to pursue "safe" incremental innovation, re-engages in high-risk discovery research, and adopts collaborative innovation models that allow sharing of knowledge and costs among collaborators.
Girard, R; Amazian, K; Fabry, J
2001-02-01
The aim of the study was to demonstrate that the introduction of rub-in hand disinfection (RHD) in hospital units, with the implementation of suitable equipment, the drafting of specific protocols, and the training of users, improved compliance with hand disinfection and the tolerance of users' hands. In four hospital units not previously using RHD, an external investigator conducted two identical studies in order to measure the rate of compliance with, and the quality of, disinfection practices [rate of adapted (i.e., appropriate) procedures, rate of correct (i.e., properly performed) procedures, rate of adapted and correct procedures carried out] and to assess the state of hands (clinical scores of dryness and irritation, measuring hydration with a corneometer). Between the two studies, the units were equipped with dispensers for RHD products and staff were trained. Compliance improved from 62.2% to 66.5%, and quality improved (rate of adapted procedures from 66.8% to 84.3%, P ≤ 10^-6; rate of correct procedures from 11.1% to 28.9%, P ≤ 10^-8; rate of adapted and correct procedures from 6.0% to 17.8%, P ≤ 10^-8). Tolerance improved significantly (P ≤ 10^-2) for the clinical dryness and irritation scores, although not significantly for measurements using a corneometer. This study shows the benefit of introducing RHD with technical and educational accompaniment. Copyright 2001 The Hospital Infection Society.
ERIC Educational Resources Information Center
Servetti, Sara
2010-01-01
This paper focuses on cooperative learning (CL) used as a correction and grammar revision technique and considers the data collected in six Italian parallel classes, three of which (sample classes) corrected mistakes and revised grammar through cooperative learning, while the other three (control classes) did so in a traditional way. All the classes…
NASA Astrophysics Data System (ADS)
Thomas, Philipp; Straube, Arthur V.; Grima, Ramon
2010-11-01
Chemical reactions inside cells occur in compartment volumes in the range of atto- to femtoliters. Physiological concentrations realized in such small volumes imply low copy numbers of interacting molecules with the consequence of considerable fluctuations in the concentrations. In contrast, rate equation models are based on the implicit assumption of infinitely large numbers of interacting molecules, or equivalently, that reactions occur in infinite volumes at constant macroscopic concentrations. In this article we compute the finite-volume corrections (or equivalently the finite copy number corrections) to the solutions of the rate equations for chemical reaction networks composed of arbitrarily large numbers of enzyme-catalyzed reactions which are confined inside a small subcellular compartment. This is achieved by applying a mesoscopic version of the quasisteady-state assumption to the exact Fokker-Planck equation associated with the Poisson representation of the chemical master equation. The procedure yields impressively simple and compact expressions for the finite-volume corrections. We prove that the predictions of the rate equations will always underestimate the actual steady-state substrate concentrations for an enzyme-reaction network confined in a small volume. In particular we show that the finite-volume corrections increase with decreasing subcellular volume, decreasing Michaelis-Menten constants, and increasing enzyme saturation. The magnitude of the corrections depends sensitively on the topology of the network. The predictions of the theory are shown to be in excellent agreement with stochastic simulations for two types of networks typically associated with protein methylation and metabolism.
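The phenomenon being corrected for can be seen directly by comparing a stochastic simulation of a single enzyme-catalysed reaction in a small volume with the corresponding deterministic rate-equation prediction. The sketch below illustrates this gap with a plain Gillespie simulation and assumed rate constants; it does not implement the paper's Poisson-representation machinery or its analytic finite-volume corrections:

```python
# Illustrative Gillespie simulation of substrate input plus E + S <-> ES -> E + P in a
# small volume, to show how low copy numbers make the mean substrate level deviate from
# the rate-equation prediction. Rate constants are assumed, not taken from the paper.
import numpy as np

rng = np.random.default_rng(5)
k1, k2, k3, k_in = 0.01, 0.1, 0.1, 0.5   # assumed mesoscopic rate constants
E0 = 10                                   # total enzyme copies

def gillespie(t_end=1000.0):
    E, S, ES, P = E0, 0, 0, 0
    t = 0.0
    while t < t_end:
        rates = np.array([k_in,            # substrate input: 0 -> S
                          k1 * E * S,      # binding: E + S -> ES
                          k2 * ES,         # unbinding: ES -> E + S
                          k3 * ES])        # catalysis: ES -> E + P
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=rates / total)
        if event == 0:
            S += 1
        elif event == 1:
            E, S, ES = E - 1, S - 1, ES + 1
        elif event == 2:
            E, S, ES = E + 1, S + 1, ES - 1
        else:
            E, ES, P = E + 1, ES - 1, P + 1
    return S

samples = [gillespie() for _ in range(40)]
print(f"stochastic mean substrate copy number: {np.mean(samples):.1f}")

# Deterministic quasi-steady-state prediction: input flux k_in balanced by the
# Michaelis-Menten flux k3*E0*S/(K_M + S), with K_M = (k2 + k3)/k1.
K_M = (k2 + k3) / k1
S_det = k_in * K_M / (k3 * E0 - k_in)
print(f"rate-equation steady-state substrate copy number: {S_det:.1f}")
```

Comparing the two printed numbers gives a concrete feel for the direction of the effect discussed above, namely that rate equations tend to underestimate the steady-state substrate level in small compartments.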
Gogtay, Nitin; Hua, Xue; Stidd, Reva; Boyle, Christina P.; Lee, Suh; Weisinger, Brian; Chavez, Alex; Giedd, Jay N.; Clasen, Liv; Toga, Arthur W.; Rapoport, Judith L.; Thompson, Paul M.
2013-01-01
Context Nonpsychotic siblings of patients with childhood-onset schizophrenia (COS) share cortical gray matter abnormalities with their probands at an early age; these normalize by the time the siblings are aged 18 years, suggesting that the gray matter abnormalities in schizophrenia could be an age-specific endophenotype. Patients with COS also show significant white matter (WM) growth deficits, which have not yet been explored in nonpsychotic siblings. Objective To study WM growth differences in non-psychotic siblings of patients with COS. Design Longitudinal (5-year) anatomic magnetic resonance imaging study mapping WM growth using a novel tensor-based morphometry analysis. Setting National Institutes of Health Clinical Center, Bethesda, Maryland. Participants Forty-nine healthy siblings of patients with COS (mean [SD] age, 16.1[5.3] years; 19 male, 30 female) and 57 healthy persons serving as controls (age, 16.9[5.3] years; 29 male, 28 female). Intervention Magnetic resonance imaging. Main Outcome Measure White matter growth rates. Results We compared the WM growth rates in 3 age ranges. In the youngest age group (7 to <14 years), we found a significant difference in growth rates, with siblings of patients with COS showing slower WM growth rates in the parietal lobes of the brain than age-matched healthy controls (false discovery rate, q = 0.05; critical P = .001 in the bilateral parietal WM; a post hoc analysis identified growth rate differences only on the left side, critical P =.004). A growth rate difference was not detectable at older ages. In 3-dimensional maps, growth rates in the siblings even appeared to surpass those of healthy individuals at later ages, at least locally in the brain, but this effect did not survive a multiple comparisons correction. Conclusions In this first longitudinal study of nonpsychotic siblings of patients with COS, the siblings showed early WM growth deficits, which normalized with age. As reported before for gray matter, WM growth may also be an age-specific endophenotype that shows compensatory normalization with age. PMID:22945617
Multiplicity Control in Structural Equation Modeling
ERIC Educational Resources Information Center
Cribbie, Robert A.
2007-01-01
Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…
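For reference, a generic Benjamini-Hochberg step of the kind such false-discovery-rate-controlling procedures are built on; the p-values in the example are invented.

```python
# Benjamini-Hochberg procedure: reject the k smallest p-values, where k is the
# largest index i with p_(i) <= q*i/m. Example p-values are illustrative only.
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level q."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    rejected = np.zeros(m, dtype=bool)
    if passed.any():
        k = passed.nonzero()[0].max()        # largest i with p_(i) <= q*i/m
        rejected[order[: k + 1]] = True
    return rejected

p_vals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.900]
print(benjamini_hochberg(p_vals, q=0.05))
```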
37 CFR 351.5 - Discovery in royalty rate proceedings.
Code of Federal Regulations, 2010 CFR
2010-07-01
... proceedings. 351.5 Section 351.5 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF... matter, not privileged, that is relevant to the claim or defense of any party. Relevant information need... information and materials. (1) In any royalty rate proceeding scheduled to commence prior to January 1, 2011...
NASA Astrophysics Data System (ADS)
Crivori, Patrizia; Zamora, Ismael; Speed, Bill; Orrenius, Christian; Poggesi, Italo
2004-03-01
A number of computational approaches are being proposed for an early optimization of ADME (absorption, distribution, metabolism and excretion) properties to increase the success rate in drug discovery. The present study describes the development of an in silico model able to estimate, from the three-dimensional structure of a molecule, the stability of a compound with respect to the human cytochrome P450 (CYP) 3A4 enzyme activity. Stability data were obtained by measuring the amount of unchanged compound remaining after a standardized incubation with human cDNA-expressed CYP3A4. The computational method transforms the three-dimensional molecular interaction fields (MIFs) generated from the molecular structure into descriptors (VolSurf and Almond procedures). The descriptors were correlated to the experimental metabolic stability classes by a partial least squares discriminant procedure. The model was trained using a set of 1800 compounds from the Pharmacia collection and was validated using two test sets: the first one including 825 compounds from the Pharmacia collection and the second one consisting of 20 known drugs. This model correctly predicted 75% of the first and 85% of the second test set and showed a precision above 86% to correctly select metabolically stable compounds. The model appears a valuable tool in the design of virtual libraries to bias the selection toward more stable compounds. Abbreviations: ADME - absorption, distribution, metabolism and excretion; CYP - cytochrome P450; MIFs - molecular interaction fields; HTS - high throughput screening; DDI - drug-drug interactions; 3D - three-dimensional; PCA - principal components analysis; CPCA - consensus principal components analysis; PLS - partial least squares; PLSD - partial least squares discriminant; GRIND - grid independent descriptors; GRID - software originally created and developed by Professor Peter Goodford.
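The modeling step described above amounts to a partial least squares discriminant analysis mapping descriptors to a stability class. The sketch below shows that general idea with random placeholder descriptors; it does not reproduce the VolSurf/Almond descriptors or the Pharmacia data.

```python
# PLS discriminant sketch: fit a PLS model to a binary-coded stability class and
# threshold the continuous PLS score. X and y are random stand-ins for real
# MIF-derived descriptors and measured CYP3A4 stability classes.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                   # 200 compounds, 50 descriptors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)

train, test = slice(0, 150), slice(150, 200)
pls = PLSRegression(n_components=5)
pls.fit(X[train], y[train])
y_score = pls.predict(X[test]).ravel()           # continuous PLS score
y_pred = (y_score > 0.5).astype(float)           # threshold to a stability class
accuracy = float((y_pred == y[test]).mean())
print(f"hold-out accuracy: {accuracy:.2f}")
```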
Identification of FGF7 as a novel susceptibility locus for chronic obstructive pulmonary disease.
Brehm, John M; Hagiwara, Koichi; Tesfaigzi, Yohannes; Bruse, Shannon; Mariani, Thomas J; Bhattacharya, Soumyaroop; Boutaoui, Nadia; Ziniti, John P; Soto-Quiros, Manuel E; Avila, Lydiana; Cho, Michael H; Himes, Blanca; Litonjua, Augusto A; Jacobson, Francine; Bakke, Per; Gulsvik, Amund; Anderson, Wayne H; Lomas, David A; Forno, Erick; Datta, Soma; Silverman, Edwin K; Celedón, Juan C
2011-12-01
Traditional genome-wide association studies (GWASs) of large cohorts of subjects with chronic obstructive pulmonary disease (COPD) have successfully identified novel candidate genes, but several other plausible loci do not meet strict criteria for genome-wide significance after correction for multiple testing. The authors hypothesise that by applying unbiased weights derived from unique populations we can identify additional COPD susceptibility loci. Methods The authors performed a homozygosity haplotype analysis on a group of subjects with and without COPD to identify regions of conserved homozygosity haplotype (RCHHs). Weights were constructed based on the frequency of these RCHHs in case versus controls, and used to adjust the p values from a large collaborative GWAS of COPD. The authors identified 2318 RCHHs, of which 576 were significantly (p<0.05) over-represented in cases. After applying the weights constructed from these regions to a collaborative GWAS of COPD, the authors identified two single nucleotide polymorphisms (SNPs) in a novel gene (fibroblast growth factor-7 (FGF7)) that gained genome-wide significance by the false discovery rate method. In a follow-up analysis, both SNPs (rs12591300 and rs4480740) were significantly associated with COPD in an independent population (combined p values of 7.9E-7 and 2.8E-6, respectively). In another independent population, increased lung tissue FGF7 expression was associated with worse measures of lung function. Weights constructed from a homozygosity haplotype analysis of an isolated population successfully identify novel genetic associations from a GWAS on a separate population. This method can be used to identify promising candidate genes that fail to meet strict correction for multiple testing.
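The weighting idea can be sketched as dividing each GWAS p-value by a weight (normalized to average 1) derived from the independent homozygosity-haplotype analysis, then applying an ordinary Benjamini-Hochberg step. The weights and p-values below are illustrative assumptions, not the study's values.

```python
# Weighted Benjamini-Hochberg sketch: p-values are divided by unit-mean weights
# before the usual step-up comparison. Numbers are invented for illustration.
import numpy as np

def weighted_bh(p_values, weights, q=0.05):
    w = np.asarray(weights, dtype=float)
    w = w / w.mean()                            # weights must average to 1
    p_adj = np.clip(np.asarray(p_values, dtype=float) / w, 0.0, 1.0)
    m = p_adj.size
    order = np.argsort(p_adj)
    passed = p_adj[order] <= q * np.arange(1, m + 1) / m
    rejected = np.zeros(m, dtype=bool)
    if passed.any():
        rejected[order[: passed.nonzero()[0].max() + 1]] = True
    return rejected

p = [2e-8, 5e-7, 3e-4, 0.02, 0.4]
w = [2.0, 2.0, 0.5, 1.0, 0.5]                   # up-weight SNPs in over-represented RCHHs
print(weighted_bh(p, w))
```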
Balow, James E; Ryan, John G; Chae, Jae Jin; Booty, Matthew G; Bulua, Ariel; Stone, Deborah; Sun, Hong-Wei; Greene, James; Barham, Beverly; Goldbach-Mansky, Raphaela; Kastner, Daniel L; Aksentijevich, Ivona
2014-01-01
Objective To analyse gene expression patterns and to define a specific gene expression signature in patients with the severe end of the spectrum of cryopyrin-associated periodic syndromes (CAPS). The molecular consequences of interleukin 1 inhibition were examined by comparing gene expression patterns in 16 CAPS patients before and after treatment with anakinra. Methods We collected peripheral blood mononuclear cells from 22 CAPS patients with active disease and from 14 healthy children. Transcripts that passed stringent filtering criteria (p values ≤ false discovery rate 1%) were considered as differentially expressed genes (DEG). A set of DEG was validated by quantitative reverse transcription PCR and functional studies with primary cells from CAPS patients and healthy controls. We used 17 CAPS and 66 non-CAPS patient samples to create a set of gene expression models that differentiates CAPS patients from controls and from patients with other autoinflammatory conditions. Results Many DEG include transcripts related to the regulation of innate and adaptive immune responses, oxidative stress, cell death, cell adhesion and motility. A set of gene expression-based models comprising the CAPS-specific gene expression signature correctly classified all 17 samples from an independent dataset. This classifier also correctly identified 15 of 16 postanakinra CAPS samples despite the fact that these CAPS patients were in clinical remission. Conclusions We identified a gene expression signature that clearly distinguished CAPS patients from controls. A number of DEG were in common with other systemic inflammatory diseases such as systemic onset juvenile idiopathic arthritis. The CAPS-specific gene expression classifiers also suggest incomplete suppression of inflammation at low doses of anakinra. PMID:23223423
The interaction of early life experiences with COMT val158met affects anxiety sensitivity.
Baumann, C; Klauke, B; Weber, H; Domschke, K; Zwanzger, P; Pauli, P; Deckert, J; Reif, A
2013-11-01
The pathogenesis of anxiety disorders is considered to be multifactorial with a complex interaction of genetic factors and individual environmental factors. Therefore, the aim of this study was to examine gene-by-environment interactions of the genes coding for catechol-O-methyltransferase (COMT) and monoamine oxidase A (MAOA) with life events on measures related to anxiety. A sample of healthy subjects (N = 782; thereof 531 women; mean age M = 24.79, SD = 6.02) was genotyped for COMT rs4680 and MAOA-uVNTR (upstream variable number of tandem repeats), and was assessed for childhood adversities [Childhood Trauma Questionnaire (CTQ)], anxiety sensitivity [Anxiety Sensitivity Index (ASI)] and anxious apprehension [Penn State Worry Questionnaire (PSWQ)]. Main and interaction effects of genotype, environment and gender on measures related to anxiety were assessed by means of regression analyses. Association analysis showed no main gene effect on either questionnaire score. A significant interactive effect of childhood adversities and COMT genotype was observed: Homozygosity for the low-active met allele and high CTQ scores was associated with a significant increment of explained ASI variance [R(2) = 0.040, false discovery rate (FDR) corrected P = 0.04]. A borderline interactive effect with respect to MAOA-uVNTR was restricted to the male subgroup. Carriers of the low-active MAOA allele who reported more aversive experiences in childhood exhibited a trend for enhanced anxious apprehension (R(2) = 0.077, FDR corrected P = 0.10). Early aversive life experiences therefore might increase the vulnerability to anxiety disorders in the presence of homozygosity for the COMT 158met allele or low-active MAOA-uVNTR alleles. © 2013 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.
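The gene-by-environment test described above is, in essence, a regression with an interaction term. A schematic version, on simulated data with stand-in variable names (ASI, CTQ, met_homozygous), might look like this:

```python
# Gene-by-environment regression sketch: anxiety sensitivity regressed on
# childhood adversity, a genotype indicator and their interaction.
# The data frame is simulated; column names are placeholders, not study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 782
df = pd.DataFrame({
    "CTQ": rng.normal(40, 10, n),                 # childhood trauma score
    "met_homozygous": rng.integers(0, 2, n),      # 1 = met/met COMT genotype (toy coding)
})
df["ASI"] = (20 + 0.1 * df["CTQ"]
             + 0.15 * df["CTQ"] * df["met_homozygous"]
             + rng.normal(0, 8, n))

model = smf.ols("ASI ~ CTQ * met_homozygous", data=df).fit()
print(model.summary().tables[1])                  # interaction term CTQ:met_homozygous
```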
Kljaic-Bukvic, Blazenka; Blekic, Mario; Aberle, Neda; Curtin, John A; Hankinson, Jenny; Semic-Jusufagic, Aida; Belgrave, Danielle; Simpson, Angela; Custovic, Adnan
2014-10-01
We investigated the interaction between genetic variants in endotoxin signalling pathway and domestic endotoxin exposure in relation to asthma presence, and amongst children with asthma, we explored the association of these genetic variants and endotoxin exposure with hospital admissions due to asthma exacerbations. In a case-control study, we analysed data from 824 children (417 asthmatics, 407 controls; age 5-18 yr). Amongst asthmatics, we extracted data on hospitalization for asthma exacerbation from medical records. Endotoxin exposure was measured in dust samples collected from homes. We included 26 single-nucleotide polymorphisms (SNPs) in the final analysis (5 CD14, 7LY96 and 14 TLR4). Two variants remained significantly associated with hospital admissions with asthma exacerbations after correction for multiple testing: for CD14 SNP rs5744455, carriers of T allele had decreased risk of repeated hospital admissions compared with homozygotes for C allele [OR (95% CI), 0.42 (0.25-0.88), p = 0.01, False Discovery Rate (FDR) p = 0.02]; for LY96 SNP rs17226566, C-allele carriers were at a lower risk of hospital admissions compared with T-allele homozygotes [0.59 (0.38-0.90), p = 0.01, FDR p = 0.04]. We observed two interactions between SNPs in CD14 and LY96 with environmental endotoxin exposure in relation to hospital admissions due to asthma exacerbation which remained significant after correction for multiple testing (CD14 SNPs rs2915863 and LY96 SNP rs17226566). Amongst children with asthma, genetic variants in CD14 and LY96 may increase the risk of hospital admissions with acute exacerbations. Polymorphisms in endotoxin pathway interact with domestic endotoxin exposure in further modification of the risk of hospitalization. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
The Discovery of Extrasolar Planets via Transits
NASA Astrophysics Data System (ADS)
Dunham, Edward W.; Borucki, W. J.; Jenkins, J. M.; Batalha, N. M.; Caldwell, D. A.; Mandushev, G.
2014-01-01
The goal of detecting extrasolar planets has been part of human thought for many centuries and several plausible approaches for detecting them have been discussed for many decades. At this point in history the two most successful approaches have been the reflex radial velocity and transit approaches. These each have the additional merit of corroborating a discovery by the other approach, at least in some cases, thereby producing very convincing detections of objects that can't be seen. In the transit detection realm the key enabling technical factors were the development of high-quality, large-area electronic detectors; practical fast optics with wide fields of view; automated telescope systems; analysis algorithms to correct for inadequacies in the instrumentation; and computing capability sufficient to cope with all of this. This part of the equation is relatively straightforward. The more important part is subliminal, namely what went on in the minds of the proponents and detractors of the transit approach as events unfolded. Three major paradigm shifts had to happen. First, we had to come to understand that not all solar systems look like ours. The motivating effect of the hot Jupiter class of planet was profound. Second, the fact that CCD detectors can be much more stable than anybody imagined had to be understood. Finally, the ability of analysis methods to correct the data sufficiently well for the differential photometry task at hand had to be understood by proponents and detractors alike. The problem of capturing this changing mind-set in a collection of artifacts is a difficult one but is essential for a proper presentation of this bit of history.
Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael
2016-12-16
As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. ©Wei Luo, Dinh Phung, Truyen Tran, Sunil Gupta, Santu Rana, Chandan Karmakar, Alistair Shilton, John Yearwood, Nevenka Dimitrova, Tu Bao Ho, Svetha Venkatesh, Michael Berk. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.12.2016.
Yao, Qian; Cao, Xiao-Mei; Zong, Wen-Gang; Sun, Xiao-Hui; Li, Ze-Rong; Li, Xiang-Yuan
2018-05-31
The isodesmic reaction method is applied to calculate the potential energy surface (PES) along the reaction coordinates and the rate constants of the barrierless reactions for unimolecular dissociation reactions of alkanes to form two alkyl radicals and their reverse recombination reactions. The reaction class is divided into 10 subclasses depending upon the type of carbon atoms in the reaction centers. A correction scheme based on isodesmic reaction theory is proposed to correct the PESs at UB3LYP/6-31+G(d,p) level. To validate the accuracy of this scheme, a comparison of the PESs at B3LYP level and the corrected PESs with the PESs at CASPT2/aug-cc-pVTZ level is performed for 13 representative reactions, and it is found that the deviations of the PESs at B3LYP level are up to 35.18 kcal/mol and are reduced to within 2 kcal/mol after correction, indicating that the PESs for barrierless reactions in a subclass can be calculated meaningfully accurately at a low level of ab initio method using our correction scheme. High-pressure limit rate constants and pressure dependent rate constants of these reactions are calculated based on their corrected PESs and the results show the pressure dependence of the rate constants cannot be ignored, especially at high temperatures. Furthermore, the impact of molecular size on the pressure-dependent rate constants of decomposition reactions of alkanes and their reverse reactions has been studied. The present work provides an effective method to generate meaningfully accurate PESs for large molecular system.
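A highly simplified sketch of the correction idea, under the assumption that the high-minus-low energy difference computed for a small reference reaction of the same subclass can be added point by point to the low-level PES of a target reaction; all energies below are invented.

```python
# Isodesmic-style PES correction sketch (a simplification, not the authors'
# exact scheme): shift the low-level energy of a target reaction at each point
# of the scan by the high-minus-low difference of a matched reference reaction.
def correct_pes(e_low_target, e_low_ref, e_high_ref):
    """Apply a reference-based correction along matched points of a PES scan."""
    return [et + (eh - el) for et, el, eh in zip(e_low_target, e_low_ref, e_high_ref)]

# Invented energies (kcal/mol) at matched C-C distances for a hypothetical
# reference and target reaction of the same subclass.
e_low_ref    = [0.0, 35.0, 62.0, 80.0, 88.0]
e_high_ref   = [0.0, 38.5, 68.0, 86.5, 90.0]
e_low_target = [0.0, 33.0, 60.0, 78.0, 85.0]

print(correct_pes(e_low_target, e_low_ref, e_high_ref))
```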
The gut microbiome composition associates with bipolar disorder and illness severity.
Evans, Simon J; Bassis, Christine M; Hein, Robert; Assari, Shervin; Flowers, Stephanie A; Kelly, Marisa B; Young, Vince B; Ellingrod, Vicky E; McInnis, Melvin G
2017-04-01
The gut microbiome is emerging as an important factor in regulating mental health yet it remains unclear what the target should be for psychiatric treatment. We aimed to elucidate the complement of the gut-microbiome community for individuals with bipolar disorder relative to controls; and test for relationships with burden of disease measures. We compared the stool microbiome from individuals with bipolar disorder (n = 115) and control subjects (n = 64) using 16S ribosomal RNA (rRNA) gene sequence analysis. Analysis of molecular variance (AMOVA) revealed global community case-control differences (AMOVA p = 0.047). Operational Taxonomical Unit (OTU) level analysis revealed significantly decreased fractional representation (p < 0.001) of Faecalibacterium after adjustment for age, sex, BMI and false discovery rate (FDR) correction at the p < 0.05 level. Within individuals with bipolar disorder, the fractional representation of Faecalibacterium associated with better self-reported health outcomes based on the Short Form Health Survey (SF12); the Patient Health Questionnaire (PHQ9); the Pittsburg Sleep Quality Index (PSQI); the Generalized Anxiety Disorder scale (GAD7); and the Altman Mania Rating Scale (ASRM), independent of covariates. This study provides the first detailed analysis of the gut microbiome relationships with multiple psychiatric domains from a bipolar population. The data support the hypothesis that targeting the microbiome may be an effective treatment paradigm for bipolar disorder. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Accuracy and Precision of Flow Measurements Using Phase Contrast Techniques
NASA Astrophysics Data System (ADS)
Tang, Chao
Quantitative volume flow rate measurements using the magnetic resonance imaging technique are studied in this dissertation because the volume flow rates have a special interest in the blood supply of the human body. The method of quantitative volume flow rate measurements is based on the phase contrast technique, which assumes a linear relationship between the phase and flow velocity of spins. By measuring the phase shift of nuclear spins and integrating velocity across the lumen of the vessel, we can determine the volume flow rate. The accuracy and precision of volume flow rate measurements obtained using the phase contrast technique are studied by computer simulations and experiments. The various factors studied include (1) the partial volume effect due to voxel dimensions and slice thickness relative to the vessel dimensions; (2) vessel angulation relative to the imaging plane; (3) intravoxel phase dispersion; (4) flow velocity relative to the magnitude of the flow encoding gradient. The partial volume effect is demonstrated to be the major obstacle to obtaining accurate flow measurements for both laminar and plug flow. Laminar flow can be measured more accurately than plug flow in the same condition. Both the experiment and simulation results for laminar flow show that, to obtain the accuracy of volume flow rate measurements to within 10%, at least 16 voxels are needed to cover the vessel lumen. The accuracy of flow measurements depends strongly on the relative intensity of signal from stationary tissues. A correction method is proposed to compensate for the partial volume effect. The correction method is based on a small phase shift approximation. After the correction, the errors due to the partial volume effect are compensated, allowing more accurate results to be obtained. An automatic program based on the correction method is developed and implemented on a Sun workstation. The correction method is applied to the simulation and experiment results. The results show that the correction significantly reduces the errors due to the partial volume effect. We apply the correction method to the data of in vivo studies. Because the blood flow is not known, the results of correction are tested according to the common knowledge (such as cardiac output) and conservation of flow. For example, the volume of blood flowing to the brain should be equal to the volume of blood flowing from the brain. Our measurement results are very convincing.
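The core computation, before any partial-volume correction, is a sum of phase-derived velocities times voxel area over the vessel lumen. A toy version with a synthetic parabolic (laminar) profile and assumed venc and voxel size:

```python
# Toy phase-contrast flow computation: velocity is taken proportional to phase
# (v = venc * phase / pi) and flow is the sum of velocity times voxel area over
# the lumen. Vessel geometry, venc and voxel size are illustrative; no
# partial-volume correction is applied here.
import numpy as np

venc = 100.0                                    # cm/s, velocity encoding
voxel_area = 0.1 * 0.1                          # cm^2 per in-plane voxel

coords = np.arange(-0.95, 1.0, 0.1)             # voxel centres, 0.1 cm spacing
y, x = np.meshgrid(coords, coords, indexing="ij")
r = np.hypot(x, y)

# Synthetic laminar (parabolic) profile in a circular vessel of radius 0.5 cm.
v_true = np.where(r <= 0.5, 60.0 * (1 - (r / 0.5) ** 2), 0.0)   # cm/s

phase = np.pi * v_true / venc                   # ideal phase shift (no aliasing)
v_meas = venc * phase / np.pi                   # velocity recovered from phase
lumen = r <= 0.5
flow_ml_per_s = float((v_meas[lumen] * voxel_area).sum())
analytic = 0.5 * 60.0 * np.pi * 0.5 ** 2        # mean velocity * lumen area
print(f"measured flow: {flow_ml_per_s:.1f} mL/s (analytic: {analytic:.1f} mL/s)")
```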
Re-evaluation of the correction factors for the GROVEX
NASA Astrophysics Data System (ADS)
Ketelhut, Steffen; Meier, Markus
2018-04-01
The GROVEX (GROssVolumige EXtrapolationskammer, large-volume extrapolation chamber) is the primary standard for the dosimetry of low-dose-rate interstitial brachytherapy at the Physikalisch-Technische Bundesanstalt (PTB). In the course of setup modifications and re-measuring of several dimensions, the correction factors have been re-evaluated in this work. The correction factors for scatter and attenuation have been recalculated using the Monte Carlo software package EGSnrc, and a new expression has been found for the divergence correction. The obtained results decrease the measured reference air kerma rate by approximately 0.9% for the representative example of a seed of type Bebig I25.S16C. This lies within the expanded uncertainty (k = 2).
Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang
2015-07-31
We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N,P,As,Sb, and II-VI compounds, (Zn or Cd)X, with X = O,S,Se,Te. By correcting (1) the binary band gaps at high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.
CRISPR-Cas9: a promising genetic engineering approach in cancer research.
Ratan, Zubair Ahmed; Son, Young-Jin; Haidere, Mohammad Faisal; Uddin, Bhuiyan Mohammad Mahtab; Yusuf, Md Abdullah; Zaman, Sojib Bin; Kim, Jong-Hoon; Banu, Laila Anjuman; Cho, Jae Youl
2018-01-01
Bacteria and archaea possess adaptive immunity against foreign genetic materials through clustered regularly interspaced short palindromic repeat (CRISPR) systems. The discovery of this intriguing bacterial system heralded a revolutionary change in the field of medical science. The CRISPR and CRISPR-associated protein 9 (Cas9) based molecular mechanism has been applied to genome editing. This CRISPR-Cas9 technique is now able to mediate precise genetic corrections or disruptions in in vitro and in vivo environments. The accuracy and versatility of CRISPR-Cas have been capitalized upon in biological and medical research and bring new hope to cancer research. Cancer involves complex alterations and multiple mutations, translocations and chromosomal losses and gains. The ability to identify and correct such mutations is an important goal in cancer treatment. In the context of this complex cancer genomic landscape, there is a need for a simple and flexible genetic tool that can easily identify functional cancer driver genes within a comparatively short time. The CRISPR-Cas system shows promising potential for modeling, repairing and correcting genetic events in different types of cancer. This article reviews the concept of CRISPR-Cas, its application and related advantages in oncology.
Lessons Learned from the Space Shuttle Engine Cutoff System (ECO) Anomalies
NASA Technical Reports Server (NTRS)
Martinez, Hugo E.; Welzyn, Ken
2011-01-01
The Space Shuttle Orbiter's main engine cutoff (ECO) system first failed ground checkout in April, 2005 during a first tanking test prior to Return-to-Flight. Despite significant troubleshooting and investigative efforts that followed, the root cause could not be found and intermittent anomalies continued to plague the Program. By implementing hardware upgrades, enhancing monitoring capability, and relaxing the launch rules, the Shuttle fleet was allowed to continue flying in spite of these unexplained failures. Root cause was finally determined following the launch attempts of STS-122 in December, 2007 when the anomalies repeated, which allowed drag-on instrumentation to pinpoint the fault (the ET feedthrough connector). The suspect hardware was removed and provided additional evidence towards root cause determination. Corrective action was implemented and the system has performed successfully since then. This white paper presents the lessons learned from the entire experience, beginning with the anomalies since Return-to-Flight through discovery and correction of the problem. To put these lessons in better perspective for the reader, an overview of the ECO system is presented first. Next, a chronological account of the failures and associated investigation activities is discussed. Root cause and corrective action are summarized, followed by the lessons learned.
Modeling bias and variation in the stochastic processes of small RNA sequencing
Etheridge, Alton; Sakhanenko, Nikita; Galas, David
2017-01-01
Abstract The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. PMID:28369495
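A small sketch of the linear-quadratic mean-variance relation and of dividing counts by per-sequence bias factors; the counts, overdispersion and bias factors below are simulated, not the paper's GAMLSS fit.

```python
# Fit var(count) ~= mu + phi * mu^2 by least squares across replicates, then
# apply toy per-miRNA bias correction factors. All data are simulated.
import numpy as np

rng = np.random.default_rng(7)
mu_true = rng.gamma(shape=2.0, scale=500.0, size=300)                      # 300 miRNAs
counts = rng.negative_binomial(n=5, p=5 / (5 + mu_true), size=(8, 300))   # 8 replicates

mu = counts.mean(axis=0)
var = counts.var(axis=0, ddof=1)
# Least-squares fit of var - mu = phi * mu^2  =>  phi = sum((var - mu)*mu^2) / sum(mu^4)
phi = float(((var - mu) * mu ** 2).sum() / (mu ** 4).sum())
print(f"estimated overdispersion phi = {phi:.3f}")

bias_factor = rng.lognormal(mean=0.0, sigma=0.5, size=300)   # hypothetical ligase bias
corrected = counts / bias_factor                             # divide out the bias
```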
History of Hubble Space Telescope (HST)
1994-01-01
A comparison image of the M100 Galactic Nucleus, taken by the Hubble Space Telescope (HST) Wide Field Planetary Camera-1 (WF/PC1) and Wide Field Planetary Camera-2 (WF/PC2). The HST was placed in a low-Earth orbit by the Space Shuttle Discovery, STS-31 mission, in April 1990. Two months after its deployment in space, scientists detected a 2-micron spherical aberration in the primary mirror of the HST that affected the telescope's ability to focus faint light sources into a precise point. This imperfection was very slight, one-fiftieth of the width of a human hair. During four spacewalks, the STS-61 crew replaced the solar panel with its flexing problems; the WF/PC1 with the WF/PC2, with built-in corrective optics; and the High-Speed Photometer with the Corrective Optics Space Telescope Axial Replacement (COSTAR), to correct the aberration for the remaining instruments. The purpose of the HST, the most complex and sensitive optical telescope ever made, is to study the cosmos from a low-Earth orbit for 15 years or more. The HST provides fine detail imaging, produces ultraviolet images and spectra, and detects very faint objects.
Discovery and development of new antibacterial drugs: learning from experience?
Jackson, Nicole; Czaplewski, Lloyd; Piddock, Laura J V
2018-06-01
Antibiotic (antibacterial) resistance is a serious global problem and the need for new treatments is urgent. The current antibiotic discovery model is not delivering new agents at a rate that is sufficient to combat present levels of antibiotic resistance. This has led to fears of the arrival of a 'post-antibiotic era'. Scientific difficulties, an unfavourable regulatory climate, multiple company mergers and the low financial returns associated with antibiotic drug development have led to the withdrawal of many pharmaceutical companies from the field. The regulatory climate has now begun to improve, but major scientific hurdles still impede the discovery and development of novel antibacterial agents. To facilitate discovery activities there must be increased understanding of the scientific problems experienced by pharmaceutical companies. This must be coupled with addressing the current antibiotic resistance crisis so that compounds and ultimately drugs are delivered to treat the most urgent clinical challenges. By understanding the causes of the failures and successes of the pharmaceutical industry's research history, duplication of discovery programmes will be reduced, increasing the productivity of the antibiotic drug discovery pipeline by academia and small companies. The most important scientific issues to address are getting molecules into the Gram-negative bacterial cell and avoiding their efflux. Hence screening programmes should focus their efforts on whole bacterial cells rather than cell-free systems. Despite falling out of favour with pharmaceutical companies, natural product research still holds promise for providing new molecules as a basis for discovery.
Discovery of Novel Mammary Developmental and Cancer Genes Using ENU Mutagenesis
2002-10-01
death rates we need new therapeutic targets, currently a major challenge facing cancer researchers. This requires an understanding of the undiscovered pathways that operate to drive breast cancer cell proliferation, cell survival and cell differentiation, pathways which are also likely to operate during normal mammary development, and which go awry in cancer. The discovery of signalling pathways operative in breast cancer has utilised examination of mammary gland development following systemic endocrine ablation or viral insertion, positional cloning in affected families and
77 FR 57990 - Interest Rate Risk Policy and Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-19
... NATIONAL CREDIT UNION ADMINISTRATION 12 CFR Part 741 RIN 3133-AD66 Interest Rate Risk Policy and Program Correction In rule document 2012-02091, appearing on pages 5155-5167 in the issue of Thursday, February 2, 2012, make the following corrections: 1. On page 5157, in the second column, in the first line...
Corrected High-Frame Rate Anchored Ultrasound with Software Alignment
ERIC Educational Resources Information Center
Miller, Amanda L.; Finch, Kenneth B.
2011-01-01
Purpose: To improve lingual ultrasound imaging with the Corrected High Frame Rate Anchored Ultrasound with Software Alignment (CHAUSA; Miller, 2008) method. Method: A production study of the IsiXhosa alveolar click is presented. Articulatory-to-acoustic alignment is demonstrated using a Tri-Modal 3-ms pulse generator. Images from 2 simultaneous…
Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels
ERIC Educational Resources Information Center
Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.
2018-01-01
A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…
Sánchez, Cecilia Castaño; Smith, Timothy P L; Wiedmann, Ralph T; Vallejo, Roger L; Salem, Mohamed; Yao, Jianbo; Rexroad, Caird E
2009-11-25
To enhance capabilities for genomic analyses in rainbow trout, such as genomic selection, a large suite of polymorphic markers that are amenable to high-throughput genotyping protocols must be identified. Expressed Sequence Tags (ESTs) have been used for single nucleotide polymorphism (SNP) discovery in salmonids. In those strategies, the salmonid semi-tetraploid genomes often led to assemblies of paralogous sequences and therefore resulted in a high rate of false positive SNP identification. Sequencing genomic DNA using primers identified from ESTs proved to be an effective but time consuming methodology of SNP identification in rainbow trout, therefore not suitable for high throughput SNP discovery. In this study, we employed a high-throughput strategy that used pyrosequencing technology to generate data from a reduced representation library constructed with genomic DNA pooled from 96 unrelated rainbow trout that represent the National Center for Cool and Cold Water Aquaculture (NCCCWA) broodstock population. The reduced representation library consisted of 440 bp fragments resulting from complete digestion with the restriction enzyme HaeIII; sequencing produced 2,000,000 reads providing an average 6 fold coverage of the estimated 150,000 unique genomic restriction fragments (300,000 fragment ends). Three independent data analyses identified 22,022 to 47,128 putative SNPs on 13,140 to 24,627 independent contigs. A set of 384 putative SNPs, randomly selected from the sets produced by the three analyses were genotyped on individual fish to determine the validation rate of putative SNPs among analyses, distinguish apparent SNPs that actually represent paralogous loci in the tetraploid genome, examine Mendelian segregation, and place the validated SNPs on the rainbow trout linkage map. Approximately 48% (183) of the putative SNPs were validated; 167 markers were successfully incorporated into the rainbow trout linkage map. In addition, 2% of the sequences from the validated markers were associated with rainbow trout transcripts. The use of reduced representation libraries and pyrosequencing technology proved to be an effective strategy for the discovery of a high number of putative SNPs in rainbow trout; however, modifications to the technique to decrease the false discovery rate resulting from the evolutionary recent genome duplication would be desirable.
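Filtering putative SNPs from pooled reads commonly combines a minimum minor-allele count with a check for the near-50:50 allele ratios expected when paralogous loci of a duplicated genome are collapsed onto one contig. The thresholds and toy pileup below are illustrative heuristics, not the study's actual pipeline.

```python
# Heuristic SNP-site classifier for pooled, reduced-representation reads:
# require minimum depth and minor-allele count, and flag allele ratios near
# 50% as possible collapsed paralogs. Thresholds and pileup are illustrative.
def classify_site(ref_count, alt_count, min_depth=6, min_minor=3,
                  paralog_band=(0.45, 0.55)):
    depth = ref_count + alt_count
    if depth < min_depth or min(ref_count, alt_count) < min_minor:
        return "no_call"
    alt_freq = alt_count / depth
    if paralog_band[0] <= alt_freq <= paralog_band[1]:
        return "possible_paralog"      # ~50:50 regardless of pool composition
    return "putative_snp"

pileup = [(10, 0), (7, 3), (6, 6), (20, 9), (2, 2)]    # (ref, alt) read counts
print([classify_site(r, a) for r, a in pileup])
```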
Counting-loss correction for X-ray spectroscopy using unit impulse pulse shaping.
Hong, Xu; Zhou, Jianbin; Ni, Shijun; Ma, Yingjie; Yao, Jianfeng; Zhou, Wei; Liu, Yi; Wang, Min
2018-03-01
High-precision measurement of X-ray spectra is affected by the statistical fluctuation of the X-ray beam under low-counting-rate conditions. It is also limited by counting loss resulting from the dead-time of the system and pile-up pulse effects, especially in a high-counting-rate environment. In this paper a detection system based on a FAST-SDD detector and a new kind of unit impulse pulse-shaping method is presented, for counting-loss correction in X-ray spectroscopy. The unit impulse pulse-shaping method is evolved by inverse deviation of the pulse from a reset-type preamplifier and a C-R shaper. It is applied to obtain the true incoming rate of the system based on a general fast-slow channel processing model. The pulses in the fast channel are shaped to unit impulse pulse shape which possesses small width and no undershoot. The counting rate in the fast channel is corrected by evaluating the dead-time of the fast channel before it is used to correct the counting loss in the slow channel.
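For context, the simplest counting-loss correction assumes a non-paralyzable dead time τ, so a measured rate m maps back to a true rate n = m/(1 − mτ). The sketch below applies that textbook formula with an assumed dead time; it does not model the fast/slow-channel and unit-impulse shaping scheme of the paper.

```python
# Non-paralyzable dead-time correction: n = m / (1 - m*tau). The dead time and
# measured rates are illustrative values, not the detection system described above.
def true_rate_nonparalyzable(measured_cps, dead_time_s):
    loss_fraction = measured_cps * dead_time_s
    if loss_fraction >= 1.0:
        raise ValueError("measured rate inconsistent with this dead time")
    return measured_cps / (1.0 - loss_fraction)

for m in (1e3, 1e5, 5e5):                       # counts per second
    n = true_rate_nonparalyzable(m, dead_time_s=1e-6)
    print(f"measured {m:9.0f} cps -> corrected {n:9.0f} cps")
```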
NASA Astrophysics Data System (ADS)
Fabian, Henry Joel
Educators have long tried to understand what stimulates students to learn. The Swiss psychologist and zoologist Jean Piaget suggested that students are stimulated to learn when they attempt to resolve confusion. He reasoned that students try to explain the world with the knowledge they have acquired in life. When they find their own explanations to be inadequate to explain phenomena, students find themselves in a temporary state of confusion. This prompts students to seek more plausible explanations. At this point, students are primed for learning (Piaget 1964). The Piagetian approach described above is called learning by discovery. To promote discovery learning, a teacher must first allow the student to recognize his misconception and then provide a plausible explanation to replace that misconception (Chinn and Brewer 1993). One application of this method is found in the various learning cycles, which have been demonstrated to be effective means for teaching science (Renner and Lawson 1973, Lawson 1986, Marek and Methven 1991, and Glasson & Lalik 1993). In contrast to the learning cycle, tutorial computer programs are generally not designed to correct student misconceptions, but rather follow a passive, didactic method of teaching. In the didactic or expositional method, the student is told about a phenomenon, but is neither encouraged to explore it nor to explain it in his own terms (Schneider and Renner 1980).
The Higgs and Supersymmetry at Run II of the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shih, David
2016-04-14
Prof. David Shih was supported by DOE grant DE-SC0013678 from April 2015 to April 2016. His research during this year focused on the phenomenology of supersymmetry (SUSY) and maximizing its future discovery potential at Run II of the LHC. SUSY is one of the most well-motivated frameworks for physics beyond the Standard Model. It solves the "naturalness" or "hierarchy" problem by stabilizing the Higgs mass against otherwise uncontrolled quantum corrections, predicts "grand unification" of the fundamental forces, and provides many potential candidates for dark matter. However, after decades of null results from direct and indirect searches, the viable parameter space for SUSY is increasingly constrained. Also, the discovery of a Standard Model-like Higgs with a mass at 125 GeV places a stringent constraint on SUSY models. In the work supported on this grant, Shih has worked on four different projects motivated by these issues. He has built natural SUSY models that explain the Higgs mass and provide viable dark matter; he has studied the parameter space of "gauge mediated supersymmetry breaking" (GMSB) that satisfies the Higgs mass constraint; he has developed new tools for the precision calculation of flavor and CP observables in general SUSY models; and he has studied new techniques for discovery of supersymmetric partners of the top quark.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Leary, J.; Hayward, T.; Addison, F.
The Llanos Foothills petroleum trend of the Eastern Cordillera in Colombia containing the giant Cusiana Field has proven to be one of the most exciting hydrocarbon provinces discovered in recent years. The Llanos Foothills trend is a fold and thrust belt with cumulative discovered reserves to date of nearly 6 billion barrels of oil equivalent. This paper summarizes the critical exploration techniques used in unlocking the potential of this major petroleum system. The first phase of exploration in the Llanos Foothills lasted from the early 1960's to the mid-70's. Several large structures defined by surface geology and seismic data were drilled. Although no major discoveries were made, evidence of a petroleum play was found. The seismic imaging and drilling technology combined with the geological understanding which was then available did not allow the full potential of the trend to be realized. In the late 1980's better data and a revised geological perception of the trend led BP, Triton and Total into active exploration, which resulted in the discovery of the Cusiana Field. The subsequent discovery of the Cupiagua, Volcanera, Florena and Pauto Sur Fields confirmed the trend as a major hydrocarbon province. The exploration programme has used a series of geological and geophysical practices and techniques which have allowed the successful exploitation of the trend. The critical success factor has been the correct application of technology in seismic acquisition and processing and drilling techniques.
Network Discovery Pipeline Elucidates Conserved Time-of-Day–Specific cis-Regulatory Modules
McEntee, Connor; Byer, Amanda; Trout, Jonathan D; Hazen, Samuel P; Shen, Rongkun; Priest, Henry D; Sullivan, Christopher M; Givan, Scott A; Yanovsky, Marcelo; Hong, Fangxin; Kay, Steve A; Chory, Joanne
2008-01-01
Correct daily phasing of transcription confers an adaptive advantage to almost all organisms, including higher plants. In this study, we describe a hypothesis-driven network discovery pipeline that identifies biologically relevant patterns in genome-scale data. To demonstrate its utility, we analyzed a comprehensive matrix of time courses interrogating the nuclear transcriptome of Arabidopsis thaliana plants grown under different thermocycles, photocycles, and circadian conditions. We show that 89% of Arabidopsis transcripts cycle in at least one condition and that most genes have peak expression at a particular time of day, which shifts depending on the environment. Thermocycles alone can drive at least half of all transcripts critical for synchronizing internal processes such as cell cycle and protein synthesis. We identified at least three distinct transcription modules controlling phase-specific expression, including a new midnight specific module, PBX/TBX/SBX. We validated the network discovery pipeline, as well as the midnight specific module, by demonstrating that the PBX element was sufficient to drive diurnal and circadian condition-dependent expression. Moreover, we show that the three transcription modules are conserved across Arabidopsis, poplar, and rice. These results confirm the complex interplay between thermocycles, photocycles, and the circadian clock on the daily transcription program, and provide a comprehensive view of the conserved genomic targets for a transcriptional network key to successful adaptation. PMID:18248097
Studies of a Next-Generation Silicon-Photomultiplier-Based Time-of-Flight PET/CT System.
Hsu, David F C; Ilan, Ezgi; Peterson, William T; Uribe, Jorge; Lubberink, Mark; Levin, Craig S
2017-09-01
This article presents system performance studies for the Discovery MI PET/CT system, a new time-of-flight system based on silicon photomultipliers. System performance and clinical imaging were compared between this next-generation system and other commercially available PET/CT and PET/MR systems, as well as between different reconstruction algorithms. Methods: Spatial resolution, sensitivity, noise-equivalent counting rate, scatter fraction, counting rate accuracy, and image quality were characterized with the National Electrical Manufacturers Association NU-2 2012 standards. Energy resolution and coincidence time resolution were measured. Tests were conducted independently on two Discovery MI scanners installed at Stanford University and Uppsala University, and the results were averaged. Back-to-back patient scans were also performed between the Discovery MI, Discovery 690 PET/CT, and SIGNA PET/MR systems. Clinical images were reconstructed using both ordered-subset expectation maximization and Q.Clear (block-sequential regularized expectation maximization with point-spread function modeling) and were examined qualitatively. Results: The averaged full widths at half maximum (FWHMs) of the radial/tangential/axial spatial resolution reconstructed with filtered backprojection at 1, 10, and 20 cm from the system center were, respectively, 4.10/4.19/4.48 mm, 5.47/4.49/6.01 mm, and 7.53/4.90/6.10 mm. The averaged sensitivity was 13.7 cps/kBq at the center of the field of view. The averaged peak noise-equivalent counting rate was 193.4 kcps at 21.9 kBq/mL, with a scatter fraction of 40.6%. The averaged contrast recovery coefficients for the image-quality phantom were 53.7, 64.0, 73.1, 82.7, 86.8, and 90.7 for the 10-, 13-, 17-, 22-, 28-, and 37-mm-diameter spheres, respectively. The average photopeak energy resolution was 9.40% FWHM, and the average coincidence time resolution was 375.4 ps FWHM. Clinical image comparisons between the PET/CT systems demonstrated the high quality of the Discovery MI. Comparisons between the Discovery MI and SIGNA showed a similar spatial resolution and overall imaging performance. Lastly, the results indicated significantly enhanced image quality and contrast-to-noise performance for Q.Clear, compared with ordered-subset expectation maximization. Conclusion: Excellent performance was achieved with the Discovery MI, including 375 ps FWHM coincidence time resolution and sensitivity of 14 cps/kBq. Comparisons between reconstruction algorithms and other multimodal silicon photomultiplier and non-silicon photomultiplier PET detector system designs indicated that performance can be substantially enhanced with this next-generation system. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
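The noise-equivalent counting rate quoted above is conventionally computed as NECR = T²/(T + S + R) from the true, scattered and random coincidence rates. The numbers in this sketch are made up to show the arithmetic; they are not the Discovery MI measurements.

```python
# Conventional noise-equivalent count rate: NECR = T^2 / (T + S + R).
# The example rates are hypothetical, chosen only to illustrate the formula.
def necr(trues_cps, scatter_cps, randoms_cps):
    total = trues_cps + scatter_cps + randoms_cps
    return trues_cps ** 2 / total

T, S, R = 250e3, 170e3, 80e3        # hypothetical coincidence rates (cps)
print(f"NECR = {necr(T, S, R) / 1e3:.1f} kcps, scatter fraction = {S / (T + S):.1%}")
```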
Hard decoding algorithm for optimizing thresholds under general Markovian noise
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond
2017-04-01
Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
Adult perceptions of dental fluorosis and select dental conditions-an Asian perspective.
Nair, Rahul; Chuang, Janice Cheah Ping; Lee, Pauline Shih Jia; Leo, Song Jie; Yang, Naomi QiYue; Yee, Robert; Tong, Huei Jinn
2016-04-01
To compare lay people's perceptions with regard to various levels of dental fluorosis and select dental defects versus normal dentition. Adults rated digitally created photographs made showing lips (without retraction) and teeth depicting the following conditions: no apparent aesthetic defects (normal, Thylstrup- Fejerskov score 0 - TF0), 6 levels of fluorosis (TF1-6), carious lesions (two cavitated and one noncavitated), malocclusions (Class II, Class III, anterior open bite and greater spacing), extrinsic staining and an incisal chip. The photographs were displayed on colour-calibrated iPads(™) . Participants used a self-administered questionnaire to rate their perceptions on (Item 1) how normal teeth were, (Item 2) how attractive the teeth were, (Item 3) need to seek correction of teeth, (Item 4) how well the person took care of their teeth and (Item 5) whether the person was born like this. Data from Item 5 were excluded due to low reliability. Ratings for Item 1 showed that TF1-4 was similar or significantly better than TF0. For Item 2, TF1 and TF4 were significantly better than TF0, with TF2 and TF3 being similar. For Item 3, there was significantly lower need to seek correction with TF2 and TF4 versus TF0, whereas TF1 and TF3 were similar to TF0. TF5 and TF6 were rated significantly lower than TF0 for Item 1 and Item 2, and significantly higher rating for Item 3 (need to seek correction). Ratings for Item 4 were similar, with TF1, TF2 and TF4 being rated significantly higher than TF0, and TF5 and TF6 being rated lower. Cavitated caries and staining were generally perceived as being significantly less favourable than TF6, with higher need to seek correction as well. Noncavitated carious lesion and incisal chip were rated similar to TF0. Cavitated carious lesions were rated aesthetically similar or significantly worse than TF0 and TF6. Severe fluorosis (TF5 and 6) was perceived to be less aesthetically pleasing and received higher ratings for need to seek correction than normal teeth. Mild-to-moderate fluorosis (TF1-4) showed similar or better aesthetic perceptions and similar or lower need to seek correction, when compared to normal teeth (TF0). Easily visible cavitated dental caries was rated worse than teeth with severe fluorosis (TF6) and normal teeth (TF0). © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Bradley and Lacaille: Praxis as Passionate Pursuit of Exact Science
NASA Astrophysics Data System (ADS)
Wilson, C. A.
1997-12-01
From 1700 to 1800, astronomical observation and prediction improved in accuracy by an order of magnitude or more: by century's end astronomers could trust catalogued and predicted positions to within a few arcseconds. Crucial to this improvement were the discoveries of Bradley, which grew out of an endeavor of "normal science," the attempt to confirm with precision Robert Hooke's earlier supposed discovery of annual parallax in Gamma Draconis. On the theoretical side, Bradley's discoveries led to the quiet demise of two earlier doctrines, the Tychonic System and the Aristotelian and Cartesian doctrine of the instantaneous transmission of light. On the side of praxis, Bradley's discoveries meant that observational astronomy must be re-done from the ground up. In 1742 Nicolas-Louis Lacaille (1713-62), who had been admitted to the Paris Academie des Sciences only the year before, proposed to his astronomer colleagues that they take up this task as a cooperative enterprise. His proposal met with silence, but he undertook the project on his own, making it his life's work. By 1757 he had completed his Fundamenta Astronomiae, including a catalogue of 400 bright stars in which for the first time star positions were corrected for aberration and nutation. In 1758 he published his solar tables, the first to incorporate lunar and planetary perturbations as well as aberration and nutation. Lacaille's pendulum clock was not temperature-compensated, and his sextant poorly calibrated, but he was to some extent able to compensate for these flaws by bringing a massive number of observations to bear. Till the 1790s his Fundamenta Astronomiae and Tabulae Solares were important for the increments in accuracy they brought about, and for the inspiration they gave to later astronomers such as Delambre.
Fine figure correction and other applications using novel MRF fluid designed for ultra-low roughness
NASA Astrophysics Data System (ADS)
Maloney, Chris; Oswald, Eric S.; Dumas, Paul
2015-10-01
An increasing number of technologies require ultra-low roughness (ULR) surfaces. Magnetorheological Finishing (MRF) is one of the options for meeting the roughness specifications for high-energy laser, EUV and X-ray applications. A novel MRF fluid, called C30, has been developed to finish surfaces to ULR. This novel MRF fluid is able to achieve <1.5Å RMS roughness on fused silica and other materials, but has a lower material removal rate with respect to other MRF fluids. As a result of these properties, C30 can also be used for applications in addition to finishing ULR surfaces. These applications include fine figure correction, figure correcting extremely soft materials and removing cosmetic defects. The effectiveness of these new applications is explored through experimental data. The low removal rate of C30 gives MRF the capability to fine figure correct low amplitude errors that are usually difficult to correct with higher removal rate fluids. The ability to figure correct extremely soft materials opens up MRF to a new realm of materials that are difficult to polish. C30 also offers the ability to remove cosmetic defects that often lead to failure during visual quality inspections. These new applications for C30 expand the niche in which MRF is typically used.
Machine Learned Replacement of N-Labels for Basecalled Sequences in DNA Barcoding.
Ma, Eddie Y T; Ratnasingham, Sujeevan; Kremer, Stefan C
2018-01-01
This study presents a machine learning method that increases the number of identified bases in Sanger Sequencing. The system post-processes a KB basecalled chromatogram. It selects a recoverable subset of N-labels in the KB-called chromatogram to replace with basecalls (A,C,G,T). An N-label correction is defined given an additional read of the same sequence and a human-finished sequence. Corrections are added to the dataset when an alignment determines that the additional read and the human-finished sequence agree on the identity of the N-label. KB must also rate the replacement with a sufficiently high quality value in the additional read. Corrections are only available during system training. To develop the system, nearly 850,000 N-labels were obtained from Barcode of Life Datasystems, the premier database of genetic markers called DNA Barcodes. Increasing the number of correct bases improves reference sequence reliability, increases sequence identification accuracy, and assures analysis correctness. In keeping with barcoding standards, our system maintains a low error rate. Our system only applies corrections when it estimates a low rate of error. Tested on this data, our automation selects and recovers: 79 percent of N-labels from COI (animal barcode); 80 percent from matK and rbcL (plant barcodes); and 58 percent from non-protein-coding sequences (across eukaryotes).
Unconfirmed Near-Earth Objects
NASA Astrophysics Data System (ADS)
Vereš, Peter; Payne, Matthew J.; Holman, Matthew J.; Farnocchia, Davide; Williams, Gareth V.; Keys, Sonia; Boardman, Ian
2018-07-01
We studied the Near-Earth Asteroid (NEA) candidates posted on the Minor Planet Center’s Near-Earth Object Confirmation Page (NEOCP) between 2013 and 2016. Of more than 17,000 NEA candidates, the majority became either new discoveries or were associated with previously known objects, but about 11% could not be followed up or confirmed. We further demonstrate that of the unconfirmed candidates, 926 ± 50 are likely to be NEAs, representing 18% of discovered NEAs in that period. Only 11% (∼93) of the unconfirmed NEA candidates were large (having absolute magnitude H < 22). To identify the reasons why these NEAs were not recovered, we analyzed those from the most prolific asteroid surveys: Pan-STARRS, the Catalina Sky Survey, the Dark Energy Survey, and the Space Surveillance Telescope. We examined the influence of plane-of-sky positions and rates of motion, brightnesses, submission delays, and computed absolute magnitudes, as well as correlations with the phase of the moon and seasonal effects. We find that delayed submission of newly discovered NEA candidates to the NEOCP drove a large fraction of the unconfirmed NEA candidates. A high rate of motion was another significant contributing factor. We suggest that prompt submission of suspected NEA discoveries and rapid response to fast-moving targets and targets with fast-growing ephemeris uncertainty would allow better coordination among dedicated follow-up observers, decrease the number of unconfirmed NEA candidates, and increase the discovery rate of NEAs.
Parnell, S; Gottwald, T R; Cunniffe, N J; Alonso Chavez, V; van den Bosch, F
2015-09-07
Emerging plant pathogens are a significant problem for conservation and food security. Surveillance is often instigated in an attempt to detect an invading epidemic before it gets out of control. Yet in practice many epidemics are not discovered until already at a high prevalence, partly due to a lack of quantitative understanding of how surveillance effort and the dynamics of an invading epidemic relate. We test a simple rule of thumb to determine, for a surveillance programme taking a fixed number of samples at regular intervals, the distribution of the prevalence an epidemic will have reached on first discovery (discovery-prevalence) and its expectation E(q*). We show that E(q*) = r/(N/Δ), i.e. simply the rate of epidemic growth divided by the rate of sampling; where r is the epidemic growth rate, N is the sample size and Δ is the time between sampling rounds. We demonstrate the robustness of this rule of thumb using spatio-temporal epidemic models as well as data from real epidemics. Our work supports the view that, for the purposes of early detection surveillance, simple models can provide useful insights in apparently complex systems. The insight can inform decisions on surveillance resource allocation in plant health and has potential applicability to invasive species generally. © 2015 The Author(s).
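The rule of thumb is easy to check numerically. The sketch below is a hypothetical illustration rather than the authors' code: it simulates an exponentially growing epidemic that is sampled in rounds of N tests every Δ time units and compares the mean prevalence at first detection with E(q*) = r/(N/Δ). The growth rate, sample size, sampling interval, and starting prevalence are assumed values chosen only for the example.

import numpy as np

rng = np.random.default_rng(0)
r, N, delta = 0.02, 100, 7.0         # assumed per-day growth rate, samples per round, days between rounds
q0 = 1e-6                            # assumed prevalence at introduction
expected = r / (N / delta)           # rule of thumb: E(q*) = r / (N / Delta)

discovery_prev = []
for _ in range(20000):
    t = rng.uniform(0, delta)        # first sampling round falls at a random point after introduction
    while True:
        q = min(1.0, q0 * np.exp(r * t))
        if rng.binomial(N, q) > 0:   # at least one of the N samples is positive
            discovery_prev.append(q)
            break
        t += delta                   # epidemic keeps growing until the next round

print(f"rule-of-thumb E(q*): {expected:.4f}")
print(f"simulated mean discovery-prevalence: {np.mean(discovery_prev):.4f}")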
Poisson Statistics of Combinatorial Library Sampling Predict False Discovery Rates of Screening
2017-01-01
Microfluidic droplet-based screening of DNA-encoded one-bead-one-compound combinatorial libraries is a miniaturized, potentially widely distributable approach to small molecule discovery. In these screens, a microfluidic circuit distributes library beads into droplets of activity assay reagent, photochemically cleaves the compound from the bead, then incubates and sorts the droplets based on assay result for subsequent DNA sequencing-based hit compound structure elucidation. Pilot experimental studies revealed that Poisson statistics describe nearly all aspects of such screens, prompting the development of simulations to understand system behavior. Monte Carlo screening simulation data showed that increasing mean library sampling (ε), mean droplet occupancy, or library hit rate all increase the false discovery rate (FDR). Compounds identified as hits on k > 1 beads (the replicate k class) were much more likely to be authentic hits than singletons (k = 1), in agreement with previous findings. Here, we explain this observation by deriving an equation for authenticity, which reduces to the product of a library sampling bias term (exponential in k) and a sampling saturation term (exponential in ε) setting a threshold that the k-dependent bias must overcome. The equation thus quantitatively describes why each hit structure’s FDR is based on its k class, and further predicts the feasibility of intentionally populating droplets with multiple library beads, assaying the micromixtures for function, and identifying the active members by statistical deconvolution. PMID:28682059
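The dependence of the false discovery rate on the replicate class k can be reproduced with a toy Monte Carlo. In the sketch below (an illustration only; the library size, hit rate, mean beads per compound ε, and mean droplet co-occupancy are invented parameters, not values from the paper), non-hit beads are carried along whenever they share a droplet with a true hit, and the resulting FDR falls sharply as k increases.

import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
n_compounds, hit_rate, eps, occupancy = 5000, 0.01, 2.0, 0.3
is_hit = rng.random(n_compounds) < hit_rate

# Each compound is sampled onto Poisson(eps) beads.
beads = np.repeat(np.arange(n_compounds), rng.poisson(eps, n_compounds))
rng.shuffle(beads)

# Group beads into droplets: one lead bead plus Poisson(occupancy) co-occupants.
observed = defaultdict(int)          # compound -> number of beads recovered from sorted droplets
i = 0
while i < len(beads):
    size = 1 + rng.poisson(occupancy)
    droplet = beads[i:i + size]
    i += size
    if is_hit[droplet].any():        # droplet is sorted because it contains a true-hit bead
        for c in droplet:
            observed[c] += 1         # every co-encapsulated bead is scored as a "hit"

# False discovery rate by replicate class k (number of beads supporting a call).
for k in (1, 2, 3):
    called = [c for c, n in observed.items() if n == k]
    if called:
        fdr = np.mean([not is_hit[c] for c in called])
        print(f"k = {k}: {len(called)} calls, FDR = {fdr:.2f}")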
Hosey, Chelsea M; Benet, Leslie Z
2015-01-01
The Biopharmaceutics Drug Disposition Classification System (BDDCS) can be utilized to predict drug disposition, including interactions with other drugs and transporter or metabolizing enzyme effects based on the extent of metabolism and solubility of a drug. However, defining the extent of metabolism relies upon clinical data. Drugs exhibiting high passive intestinal permeability rates are extensively metabolized. Therefore, we aimed to determine if in vitro measures of permeability rate or in silico permeability rate predictions could predict the extent of metabolism, to determine a reference compound representing the permeability rate above which compounds would be expected to be extensively metabolized, and to predict the major route of elimination of compounds in a two-tier approach utilizing permeability rate and a previously published model predicting the major route of elimination of parent drug. Twenty-two in vitro permeability rate measurement data sets in Caco-2 and MDCK cell lines and PAMPA were collected from the literature, while in silico permeability rate predictions were calculated using ADMET Predictor™ or VolSurf+. The potential for permeability rate to differentiate between extensively and poorly metabolized compounds was analyzed with receiver operating characteristic curves. Compounds that yielded the highest sensitivity-specificity average were selected as permeability rate reference standards. The major route of elimination of poorly permeable drugs was predicted by our previously published model and the accuracies and predictive values were calculated. The areas under the receiver operating characteristic curves were >0.90 for in vitro measures of permeability rate and >0.80 for the VolSurf+ model of permeability rate, indicating they were able to predict the extent of metabolism of compounds. Labetalol and zidovudine predicted greater than 80% of extensively metabolized drugs correctly and greater than 80% of poorly metabolized drugs correctly in Caco-2 and MDCK, respectively, while theophylline predicted greater than 80% of extensively and poorly metabolized drugs correctly in PAMPA. A two-tier approach predicting elimination route predicts 72±9%, 49±10%, and 66±7% of extensively metabolized, biliarily eliminated, and renally eliminated parent drugs correctly when the permeability rate is predicted in silico and 74±7%, 85±2%, and 73±8% of extensively metabolized, biliarily eliminated, and renally eliminated parent drugs correctly, respectively, when the permeability rate is determined in vitro. PMID:25816851
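The reference-compound selection described above is a standard receiver operating characteristic exercise: sweep candidate permeability thresholds and keep the one that maximizes the sensitivity-specificity average. The sketch below runs on synthetic data; the permeability values, class sizes, and resulting threshold are invented for illustration and are not the study's measurements.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical log permeability rates: label 1 = extensively metabolized, 0 = poorly metabolized.
perm = np.concatenate([rng.normal(1.0, 0.6, 60), rng.normal(-0.2, 0.6, 40)])
extensive = np.concatenate([np.ones(60), np.zeros(40)])

best = None
for threshold in np.sort(perm):                  # each observed rate is a candidate reference standard
    pred = perm >= threshold                     # above threshold -> predicted extensively metabolized
    sensitivity = pred[extensive == 1].mean()
    specificity = (~pred[extensive == 0]).mean()
    score = (sensitivity + specificity) / 2      # sensitivity-specificity average
    if best is None or score > best[0]:
        best = (score, threshold, sensitivity, specificity)

print("best sens-spec average %.2f at threshold %.2f (sens %.2f, spec %.2f)" % best)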
Staiger, Douglas O; Sharp, Sandra M; Gottlieb, Daniel J; Bevan, Gwyn; McPherson, Klim; Welch, H Gilbert
2013-01-01
Objective To determine the bias associated with frequency of visits by physicians in adjusting for illness, using diagnoses recorded in administrative databases. Setting Claims data from the US Medicare program for services provided in 2007 among 306 US hospital referral regions. Design Cross sectional analysis. Participants 20% sample of fee for service Medicare beneficiaries residing in the United States in 2007 (n=5 153 877). Main outcome measures The effect of illness adjustment on regional mortality and spending rates using standard and visit corrected illness methods for adjustment. The standard method adjusts using comorbidity measures based on diagnoses listed in administrative databases; the modified method corrects these measures for the frequency of visits by physicians. Three conventions for measuring comorbidity are used: the Charlson comorbidity index, Iezzoni chronic conditions, and hierarchical condition categories risk scores. Results The visit corrected Charlson comorbidity index explained more of the variation in age, sex, and race mortality across the 306 hospital referral regions than did the standard index (R2=0.21 v 0.11, P<0.001) and, compared with sex and race adjusted mortality, reduced regional variation, whereas adjustment using the standard Charlson comorbidity index increased it. Although visit corrected and age, sex, and race adjusted mortality rates were similar in hospital referral regions with the highest and lowest fifths of visits, adjustment using the standard index resulted in a rate that was 18% lower in the highest fifth (46.4 v 56.3 deaths per 1000, P<0.001). Age, sex, and race adjusted spending as well as visit corrected spending was more than 30% greater in the highest fifth of visits than in the lowest fifth, but only 12% greater after adjustment using the standard index. Similar results were obtained using the Iezzoni and the hierarchical condition categories conventions for measuring comorbidity. Conclusion The rates of visits by physicians introduce substantial bias when regional mortality and spending rates are adjusted for illness using comorbidity measures based on the observed number of diagnoses recorded in Medicare’s administrative database. Adjusting without correction for regional variation in visit rates tends to make regions with high rates of visits seem to have lower mortality and lower costs, and vice versa. Visit corrected comorbidity measures better explain variation in age, sex, and race mortality than observed measures, and reduce observational intensity bias. PMID:23430282
HU deviation in lung and bone tissues: Characterization and a corrective strategy.
Ai, Hua A; Meier, Joseph G; Wendt, Richard E
2018-05-01
In the era of precision medicine, quantitative applications of x-ray Computed Tomography (CT) are on the rise. These require accurate measurement of the CT number, also known as the Hounsfield Unit. In this study, we evaluated the effect of patient attenuation-induced beam hardening of the x-ray spectrum on the accuracy of the HU values and a strategy to correct for the resulting deviations in the measured HU values. A CIRS electron density phantom was scanned on a Siemens Biograph mCT Flow CT scanner and a GE Discovery 710 CT scanner using standard techniques that are employed in the clinic to assess the HU deviation caused by beam hardening in different tissue types. In addition, an anthropomorphic ATOM adult male upper torso phantom was scanned on the GE Discovery 710 scanner. Various amounts of Superflab bolus material were wrapped around the phantoms to simulate different patient sizes. The mean HU values that were measured in the phantoms were evaluated as a function of the water-equivalent area (A_w), a parameter that is described in the report of AAPM Task Group 220. A strategy by which to correct the HU values was developed and tested. The variation in the HU values in the anthropomorphic ATOM phantom under different simulated body sizes, both before and after correction, was compared, with a focus on the lung and bone tissues. Significant HU deviations that depended on the simulated patient size were observed. A positive correlation between HU and A_w was observed for tissue types that have an HU of less than zero, while a negative correlation was observed for tissue types with HU values that are greater than zero. The magnitude of the difference increases as the underlying attenuation property deviates further away from that of water. In the electron density phantom study, the maximum observed HU differences between the measured and reference values in the cortical bone and lung materials were 426 and 94 HU, respectively. In the anthropomorphic phantom study, the HU difference was as much as -136.7 ± 8.2 HU (or -7.6% ± 0.5% of the attenuation coefficient, AC) in the spine region, and up to 37.6 ± 1.6 HU (or 17.3% ± 0.8% of AC) in the lung region between scenarios that simulated normal and obese patients. Our HU correction method reduced the HU deviations to 8.5 ± 9.1 HU (or 0.5% ± 0.5%) for bone and to -6.4 ± 1.7 HU (or -3.0% ± 0.8%) for lung. The HU differences in the soft tissue materials before and after the correction were insignificant. Visual improvement of the tissue contrast was also achieved in the data of the simulated obese patient. The effect of a patient's size on the HU values of lung and bone tissues can be significant. The accuracy of those HU values was substantially improved by the correction method that was developed for and employed in this study. © 2018 American Association of Physicists in Medicine.
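The abstract does not spell out the correction method, so the following is only a rough sketch of one plausible approach consistent with the reported behaviour: characterize the measured HU of a reference material as a first-order function of the water-equivalent area A_w and map patient measurements back to a chosen reference size. The linear form, the calibration numbers, and the reference size are all assumptions.

import numpy as np

# Hypothetical calibration of a lung-equivalent insert at different simulated sizes
# (A_w in cm^2; HU values are illustrative only).
a_w = np.array([200.0, 400.0, 600.0, 800.0])
hu = np.array([-780.0, -760.0, -745.0, -730.0])   # HU rises with A_w for tissues with HU < 0

slope, intercept = np.polyfit(a_w, hu, 1)         # first-order dependence on A_w
a_w_ref = 400.0                                   # chosen reference patient size

def correct_hu(measured_hu, patient_a_w):
    """Shift a measured HU value to what it would read at the reference A_w."""
    return measured_hu - slope * (patient_a_w - a_w_ref)

print(correct_hu(-732.0, 780.0))                  # obese-patient reading mapped toward the reference size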
Natural-product-derived fragments for fragment-based ligand discovery
NASA Astrophysics Data System (ADS)
Over, Björn; Wetzel, Stefan; Grütter, Christian; Nakai, Yasushi; Renner, Steffen; Rauh, Daniel; Waldmann, Herbert
2013-01-01
Fragment-based ligand and drug discovery predominantly employs sp2-rich compounds covering well-explored regions of chemical space. Despite the ease with which such fragments can be coupled, this focus on flat compounds is widely cited as contributing to the attrition rate of the drug discovery process. In contrast, biologically validated natural products are rich in stereogenic centres and populate areas of chemical space not occupied by average synthetic molecules. Here, we have analysed more than 180,000 natural product structures to arrive at 2,000 clusters of natural-product-derived fragments with high structural diversity, which resemble natural scaffolds and are rich in sp3-configured centres. The structures of the cluster centres differ from previously explored fragment libraries, but for nearly half of the clusters representative members are commercially available. We validate their usefulness for the discovery of novel ligand and inhibitor types by means of protein X-ray crystallography and the identification of novel stabilizers of inactive conformations of p38α MAP kinase and of inhibitors of several phosphatases.
Organic synthesis provides opportunities to transform drug discovery
NASA Astrophysics Data System (ADS)
Blakemore, David C.; Castro, Luis; Churcher, Ian; Rees, David C.; Thomas, Andrew W.; Wilson, David M.; Wood, Anthony
2018-04-01
Despite decades of ground-breaking research in academia, organic synthesis is still a rate-limiting factor in drug-discovery projects. Here we present some current challenges in synthetic organic chemistry from the perspective of the pharmaceutical industry and highlight problematic steps that, if overcome, would find extensive application in the discovery of transformational medicines. Significant synthesis challenges arise from the fact that drug molecules typically contain amines and N-heterocycles, as well as unprotected polar groups. There is also a need for new reactions that enable non-traditional disconnections, more C-H bond activation and late-stage functionalization, as well as stereoselectively substituted aliphatic heterocyclic ring synthesis, C-X or C-C bond formation. We also emphasize that syntheses compatible with biomacromolecules will find increasing use, while new technologies such as machine-assisted approaches and artificial intelligence for synthesis planning have the potential to dramatically accelerate the drug-discovery process. We believe that increasing collaboration between academic and industrial chemists is crucial to address the challenges outlined here.
Drew, L.J.; Attanasi, E.D.; Schuenemeyer, J.H.
1988-01-01
If observed oil and gas field size distributions are obtained by random sampling, the fitted distributions should approximate that of the parent population of oil and gas fields. However, empirical evidence strongly suggests that larger fields tend to be discovered earlier in the discovery process than they would be by random sampling. Economic factors also can limit the number of small fields that are developed and reported. This paper examines observed size distributions in state and federal waters of offshore Texas. Results of the analysis demonstrate how the shape of the observable size distributions change with significant hydrocarbon price changes. Comparison of state and federal observed size distributions in the offshore area shows how production cost differences also affect the shape of the observed size distribution. Methods for modifying the discovery rate estimation procedures when economic factors significantly affect the discovery sequence are presented. A primary conclusion of the analysis is that, because hydrocarbon price changes can significantly affect the observed discovery size distribution, one should not be confident about inferring the form and specific parameters of the parent field size distribution from the observed distributions. © 1988 International Association for Mathematical Geology.
Apparently low reproducibility of true differential expression discoveries in microarray studies.
Zhang, Min; Yao, Chen; Guo, Zheng; Zou, Jinfeng; Zhang, Lin; Xiao, Hui; Wang, Dong; Yang, Da; Gong, Xue; Zhu, Jing; Li, Yanhui; Li, Xia
2008-09-15
Differentially expressed gene (DEG) lists detected from different microarray studies for the same disease are often highly inconsistent. Even in technical replicate tests using identical samples, DEG detection still shows very low reproducibility. It is often believed that current small microarray studies will largely introduce false discoveries. Based on a statistical model, we show that even in technical replicate tests using identical samples, it is highly likely that the selected DEG lists will be very inconsistent in the presence of small measurement variations. Therefore, the apparently low reproducibility of DEG detection from current technical replicate tests does not indicate low quality of microarray technology. We also demonstrate that heterogeneous biological variations existing in real cancer data will further reduce the overall reproducibility of DEG detection. Nevertheless, in small subsamples from both simulated and real data, the actual false discovery rate (FDR) for each DEG list tends to be low, suggesting that each separately determined list may comprise mostly true DEGs. Rather than simply counting the overlaps of the discovery lists from different studies for a complex disease, novel metrics are needed for evaluating the reproducibility of discoveries characterized with correlated molecular changes. Supplementary information: Supplementary data are available at Bioinformatics online.
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Ploutz-Snyder, Robert; Fiedler, James
2011-01-01
As part of a 2009 Annals of Statistics paper, Gavrilov, Benjamini, and Sarkar report results of simulations that estimated the false discovery rate (FDR) for equally correlated test statistics using a well-known multiple-test procedure. In our study we estimate the distribution of the false discovery proportion (FDP) for the same procedure under a variety of correlation structures among multiple dependent variables in a MANOVA context. Specifically, we study the mean (the FDR), skewness, kurtosis, and percentiles of the FDP distribution in the case of multiple comparisons that give rise to correlated non-central t-statistics when results at several time periods are being compared to baseline. Even if the FDR achieves its nominal value, other aspects of the distribution of the FDP depend on the interaction between signed effect sizes and correlations among variables, proportion of true nulls, and number of dependent variables. We show examples where the mean FDP (the FDR) is 10% as designed, yet there is a surprising probability of having 30% or more false discoveries. Thus, in a real experiment, the proportion of false discoveries could be quite different from the stipulated FDR.
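The phenomenon is easy to see in a direct simulation: equally correlated z-statistics built from a shared latent factor, two-sided p-values, the Benjamini-Hochberg step-up at a nominal 10% FDR, and the realized false discovery proportion recorded per replication. The parameters below (number of tests, number of true signals, correlation, effect size) are assumptions for illustration, not the settings used in the study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
m, m1, rho, effect, nominal_fdr, reps = 100, 20, 0.5, 3.0, 0.10, 5000

fdp = []
for _ in range(reps):
    shared = rng.normal()                            # equal correlation via a shared factor
    z = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(size=m)
    z[:m1] += effect                                 # the first m1 hypotheses are true signals
    p = 2 * stats.norm.sf(np.abs(z))
    order = np.argsort(p)                            # Benjamini-Hochberg step-up at level nominal_fdr
    passed = np.nonzero(p[order] <= nominal_fdr * np.arange(1, m + 1) / m)[0]
    rejected = order[: passed[-1] + 1] if passed.size else np.array([], dtype=int)
    fdp.append(np.mean(rejected >= m1) if rejected.size else 0.0)

fdp = np.array(fdp)
print("mean FDP (the FDR): %.3f" % fdp.mean())
print("P(FDP >= 0.30):     %.3f" % (fdp >= 0.30).mean())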
Deficits in allergy knowledge among physicians at academic medical centers.
Stukus, David R; Green, Todd; Montandon, Shari V; Wada, Kara J
2015-07-01
Allergic conditions have high prevalence in the general population. Misconceptions regarding the diagnosis and management of allergic disease among physicians can lead to suboptimal clinical care. To determine the extent of allergy-related knowledge deficits among physicians. Pediatric and internal medicine resident and attending physicians from 2 separate academic medical centers were asked to answer an anonymous electronic survey. Survey questions addressed 7 different allergy content areas. Four hundred eight physicians completed surveys (23.9% response rate). Respondents had few correct answers (mean ± SD 1.91 ± 1.43). Pediatric respondents had a larger number of correct answers compared with medicine-trained physicians (P < .001). No individual answered every survey question correctly, and 50 respondents (12.3%) had no correct answer. Three hundred seventy-eight respondents (92.6%) were unable to provide correct answers for at least 50% of survey questions. Level of residency training and prior rotation through an allergy and immunology elective correlated with a larger number of correct responses (P < .01). Only 1 survey question had an overall correct response rate higher than 50% (n = 261, 64%). Correct response rate was lower than 30% for 7 of the 9 possible questions. There are significant knowledge deficits in many areas of allergy-related content among pediatric and internal medicine physicians and across all levels of training and specialty. Given the prevalence of allergic conditions, the potential implications of a negative impact on clinical care are staggering. Copyright © 2015 American College of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
Akashi, Hiroyuki; Tsujii, Noa; Mikawa, Wakako; Adachi, Toru; Kirime, Eiji; Shirakawa, Osamu
2015-03-15
Studies on major depressive disorder (MDD) show that the degree of correlation between the Beck Depression Inventory (BDI) and Hamilton Depression Rating Scale (HAMD) varies widely. We aimed to determine whether this discrepancy reflects specific functional abnormalities in the frontotemporal cortex. Mildly depressed or euthymic patients with MDD (n=52), including 21 patients with MDD with the discrepancy, i.e., those with low HAMD17 scores (≤13) but high BDI-II scores (>28), and 31 patients without the discrepancy, i.e., those with low HAMD17 scores and low BDI-II scores (≤28), participated in the study along with 48 control subjects. Regional changes of oxygenated hemoglobin (oxy-Hb) levels during a verbal fluency task (VFT) were monitored using a 52-channel near-infrared spectroscopy (NIRS) device. In the frontotemporal regions, mean oxy-Hb changes induced by the VFT were significantly smaller in patients with MDD than in control subjects. In 5 channels within frontal regions, the increase in mean oxy-Hb levels was significantly greater in MDD patients with the BDI-HAMD discrepancy than in those without the discrepancy. In 6 channels within the frontal region of the patients with MDD, significant positive correlations were observed between mean oxy-Hb changes and BDI total scores (ρ=0.38-0.59; P<0.05, false discovery rate corrected). Our findings require replication in severely depressed patients, particularly those with melancholia. The distinct pattern of activation of the prefrontal cortex suggests that MDD with the BDI-HAMD discrepancy is pathophysiologically different from MDD without the discrepancy. Copyright © 2014 Elsevier B.V. All rights reserved.
2015-01-01
Molecular docking is a powerful tool used in drug discovery and structural biology for predicting the structures of ligand–receptor complexes. However, the accuracy of docking calculations can be limited by factors such as the neglect of protein reorganization in the scoring function; as a result, ligand screening can produce a high rate of false positive hits. Although absolute binding free energy methods still have difficulty in accurately rank-ordering binders, we believe that they can be fruitfully employed to distinguish binders from nonbinders and reduce the false positive rate. Here we study a set of ligands that dock favorably to a newly discovered, potentially allosteric site on the flap of HIV-1 protease. Fragment binding to this site stabilizes a closed form of protease, which could be exploited for the design of allosteric inhibitors. Twenty-three top-ranked protein–ligand complexes from AutoDock were subject to the free energy screening using two methods, the recently developed binding energy analysis method (BEDAM) and the standard double decoupling method (DDM). Free energy calculations correctly identified most of the false positives (≥83%) and recovered all the confirmed binders. The results show a gap averaging ≥3.7 kcal/mol, separating the binders and the false positives. We present a formula that decomposes the binding free energy into contributions from the receptor conformational macrostates, which provides insights into the roles of different binding modes. Our binding free energy component analysis further suggests that improving the treatment for the desolvation penalty associated with the unfulfilled polar groups could reduce the rate of false positive hits in docking. The current study demonstrates that the combination of docking with free energy methods can be very useful for more accurate ligand screening against valuable drug targets. PMID:25189630
Mozaffari, S Mohammad
2017-03-01
Argument In the Almagest, Ptolemy finds that the apogee of Mercury moves progressively at a speed equal to his value for the rate of precession, namely one degree per century, in the tropical reference system of the ecliptic coordinates. He generalizes this to the other planets, so that the motions of the apogees of all five planets are assumed to be equal, while the solar apsidal line is taken to be fixed. In medieval Islamic astronomy, one change in this general proposition took place because of the discovery of the motion of the solar apogee in the ninth century, which gave rise to lengthy discussions on the speed of its motion. Initially Bīrūnī and later Ibn al-Zarqālluh assigned a proper motion to it, although at different rates. Nevertheless, appealing to the Ptolemaic generalization and interpreting it as a methodological axiom, the dominant idea became to extend it in order to include the motion of the solar apogee as well. Another change occurred after correctly making a distinction between the motion of the apogees and the rate of precession. Some Western Islamic astronomers generalized Ibn al-Zarqālluh's proper motion of the solar apogee to the apogees of the planets. Analogously, Ibn al-Shāṭir maintained that the motion of the apogees is faster than precession. Nevertheless, the Ptolemaic generalization in the case of the equality of the motions of the apogees remained untouchable, despite the notable development of planetary astronomy, in both theoretical and observational aspects, in the late Islamic period.
Cannabis Dampens the Effects of Music in Brain Regions Sensitive to Reward and Emotion
Pope, Rebecca A; Wall, Matthew B; Bisby, James A; Luijten, Maartje; Hindocha, Chandni; Mokrysz, Claire; Lawn, Will; Moss, Abigail; Bloomfield, Michael A P; Morgan, Celia J A; Nutt, David J; Curran, H Valerie
2018-01-01
Background Despite the current shift towards permissive cannabis policies, few studies have investigated the pleasurable effects users seek. Here, we investigate the effects of cannabis on listening to music, a rewarding activity that frequently occurs in the context of recreational cannabis use. We additionally tested how these effects are influenced by cannabidiol, which may offset cannabis-related harms. Methods Across 3 sessions, 16 cannabis users inhaled cannabis with cannabidiol, cannabis without cannabidiol, and placebo. We compared their response to music relative to control excerpts of scrambled sound during functional Magnetic Resonance Imaging within regions identified in a meta-analysis of music-evoked reward and emotion. All results were False Discovery Rate corrected (P<.05). Results Compared with placebo, cannabis without cannabidiol dampened response to music in bilateral auditory cortex (right: P=.005, left: P=.008), right hippocampus/parahippocampal gyrus (P=.025), right amygdala (P=.025), and right ventral striatum (P=.033). Across all sessions, the effects of music in this ventral striatal region correlated with pleasure ratings (P=.002) and increased functional connectivity with auditory cortex (right: P<.001, left: P<.001), supporting its involvement in music reward. Functional connectivity between right ventral striatum and auditory cortex was increased by cannabidiol (right: P=.003, left: P=.030), and cannabis with cannabidiol did not differ from placebo on any functional Magnetic Resonance Imaging measures. Both types of cannabis increased ratings of wanting to listen to music (P<.002) and enhanced sound perception (P<.001). Conclusions Cannabis dampens the effects of music in brain regions sensitive to reward and emotion. These effects were offset by a key cannabis constituent, cannabidiol. PMID:29025134
Park, Eunjung; Gintant, Gary A; Bi, Daoqin; Kozeli, Devi; Pettit, Syril D; Skinner, Matthew; Willard, James; Wisialowski, Todd; Koerner, John; Valentin, Jean‐Pierre
2018-01-01
Background and Purpose Translation of non‐clinical markers of delayed ventricular repolarization to clinical prolongation of the QT interval corrected for heart rate (QTc) (a biomarker for torsades de pointes proarrhythmia) remains an issue in drug discovery and regulatory evaluations. We retrospectively analysed 150 drug applications in a US Food and Drug Administration database to determine the utility of established non‐clinical in vitro IKr current human ether‐à‐go‐go‐related gene (hERG), action potential duration (APD) and in vivo (QTc) repolarization assays to detect and predict clinical QTc prolongation. Experimental Approach The predictive performance of three non‐clinical assays was compared with clinical thorough QT study outcomes based on free clinical plasma drug concentrations using sensitivity and specificity, receiver operating characteristic (ROC) curves, positive (PPVs) and negative predictive values (NPVs) and likelihood ratios (LRs). Key Results Non‐clinical assays demonstrated robust specificity (high true negative rate) but poor sensitivity (low true positive rate) for clinical QTc prolongation at low‐intermediate (1×–30×) clinical exposure multiples. The QTc assay provided the most robust PPVs and NPVs (ability to predict clinical QTc prolongation). ROC curves (overall test accuracy) and LRs (ability to influence post‐test probabilities) demonstrated overall marginal performance for hERG and QTc assays (best at 30× exposures), while the APD assay demonstrated minimal value. Conclusions and Implications The predictive value of hERG, APD and QTc assays varies, with drug concentrations strongly affecting translational performance. While useful in guiding preclinical candidates without clinical QT prolongation, hERG and QTc repolarization assays provide greater value compared with the APD assay. PMID:29181850
Comparison of normalization methods for the analysis of metagenomic gene abundance data.
Pereira, Mariana Buongermino; Wallroth, Mikael; Jonsson, Viktor; Kristiansson, Erik
2018-04-20
In shotgun metagenomics, microbial communities are studied through direct sequencing of DNA without any prior cultivation. By comparing gene abundances estimated from the generated sequencing reads, functional differences between the communities can be identified. However, gene abundance data is affected by high levels of systematic variability, which can greatly reduce the statistical power and introduce false positives. Normalization, which is the process where systematic variability is identified and removed, is therefore a vital part of the data analysis. A wide range of normalization methods for high-dimensional count data has been proposed but their performance on the analysis of shotgun metagenomic data has not been evaluated. Here, we present a systematic evaluation of nine normalization methods for gene abundance data. The methods were evaluated through resampling of three comprehensive datasets, creating a realistic setting that preserved the unique characteristics of metagenomic data. Performance was measured in terms of the methods' ability to identify differentially abundant genes (DAGs), correctly calculate unbiased p-values and control the false discovery rate (FDR). Our results showed that the choice of normalization method has a large impact on the end results. When the DAGs were asymmetrically present between the experimental conditions, many normalization methods had a reduced true positive rate (TPR) and a high false positive rate (FPR). The methods trimmed mean of M-values (TMM) and relative log expression (RLE) had the overall highest performance and are therefore recommended for the analysis of gene abundance data. For larger sample sizes, CSS also showed satisfactory performance. This study emphasizes the importance of selecting a suitable normalization method in the analysis of data from shotgun metagenomics. Our results also demonstrate that improper methods may result in unacceptably high levels of false positives, which in turn may lead to incorrect or obfuscated biological interpretation.
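Relative log expression (RLE), one of the two methods the evaluation recommends, amounts to median-of-ratios scaling. The sketch below is a simplified illustration of that idea, not the DESeq2/edgeR implementations evaluated in the study; in particular, zero counts are simply ignored here, which is an assumption made for brevity.

import numpy as np

def rle_size_factors(counts):
    """Relative log expression (median-of-ratios) size factors.

    counts: genes x samples matrix of raw abundances.
    Returns one scaling factor per sample; divide each column by its factor.
    """
    log_counts = np.log(np.where(counts > 0, counts, np.nan).astype(float))
    log_geo_mean = np.nanmean(log_counts, axis=1)        # per-gene geometric mean (log scale)
    ratios = log_counts - log_geo_mean[:, None]
    return np.exp(np.nanmedian(ratios, axis=0))          # per-sample median ratio

# Tiny toy matrix in which sample 2 was sequenced about twice as deeply as sample 1.
counts = np.array([[10, 21], [200, 405], [35, 69], [0, 1]])
print(rle_size_factors(counts))                          # approximately [0.70, 1.41]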
Clinical and Metabolic Characterization of Lean Caucasian Subjects With Non-alcoholic Fatty Liver.
Feldman, Alexandra; Eder, Sebastian K; Felder, Thomas K; Kedenko, Lyudmyla; Paulweber, Bernhard; Stadlmayr, Andreas; Huber-Schönauer, Ursula; Niederseer, David; Stickel, Felix; Auer, Simon; Haschke-Becher, Elisabeth; Patsch, Wolfgang; Datz, Christian; Aigner, Elmar
2017-01-01
Non-alcoholic fatty liver disease (NAFLD) is closely linked to obesity; however, 5-8% of lean subjects also have evidence of NAFLD. We aimed to investigate clinical, genetic, metabolic and lifestyle characteristics in lean Caucasian subjects with NAFLD. Data from 187 subjects allocated to one of the three groups according to body mass index (BMI) and hepatic steatosis on ultrasound were obtained: lean healthy (BMI≤25 kg/m², no steatosis, N=71), lean NAFLD (BMI≤25 kg/m², steatosis, N=55), obese NAFLD (BMI≥30 kg/m², steatosis; N=61). All subjects received a detailed clinical and laboratory examination including oral glucose tolerance test. The serum metabolome was assessed using the Metabolomics AbsoluteIDQ p180 kit (BIOCRATES Life Sciences). Genotyping for single-nucleotide polymorphisms (SNPs) associated with NAFLD was performed. Lean NAFLD subjects had fasting insulin concentrations similar to lean healthy subjects but had markedly impaired glucose tolerance. Lean NAFLD subjects had a higher rate of the mutant PNPLA3 CG/GG variant compared to lean controls (P=0.007). Serum adiponectin concentrations were decreased in both NAFLD groups compared to controls (P<0.001 for both groups). The metabolomics study revealed a potential role for various lysophosphatidylcholines (lyso-PC C18:0, lyso-PC C17:0) and phosphatidylcholines (PCaa C36:3; false discovery rate (FDR)-corrected P-value<0.001) as well as lysine, tyrosine, and valine (FDR<0.001). Lean subjects with evidence of NAFLD have clinically relevant impaired glucose tolerance, low adiponectin concentrations and a distinct metabolite profile with an increased rate of PNPLA3 risk allele carriage.
A High Resolution Near-Earth Objects Population Enabling Next-Generation Search Strategies
NASA Technical Reports Server (NTRS)
Tricaico, Pasquale; Beshore, E. C.; Larson, S. M.; Boattini, A.; Williams, G. V.
2010-01-01
Over the past decade, the dedicated search for kilometer-size near-Earth objects (NEOs), potentially hazardous objects (PHOs), and potential Earth impactors has led to a boost in the rate of discoveries of these objects. The catalog of known NEOs is the fundamental ingredient used to develop a model for the NEOs population, either by assessing and correcting for the observational bias (Jedicke et al., 2002), or by evaluating the migration rates from the NEOs source regions (Bottke et al., 2002). The modeled NEOs population is a necessary tool used to track the progress in the search of large NEOs (Jedicke et al., 2003) and to try to predict the distribution of the ones still undiscovered, as well as to study the sky distribution of potential Earth impactors (Chesley & Spahr, 2004). We present a method to model the NEOs population in all six orbital elements, on a finely grained grid, allowing us to design and test targeted and optimized search strategies. This method relies on the observational data routinely reported to the Minor Planet Center (MPC) by the Catalina Sky Survey (CSS) and by other active NEO surveys over the past decade, to determine on a nightly basis the efficiency in detecting moving objects as a function of observable quantities including apparent magnitude, rate of motion, airmass, and galactic latitude. The cumulative detection probability is then computed for objects within a small range in orbital elements and absolute magnitude, and the comparison with the number of known NEOs within the same range allows us to model the population. When propagated to the present epoch and projected on the sky plane, this provides the distribution of the missing large NEOs, PHOs, and potential impactors.
CSF neurofilament light chain and phosphorylated tau 181 predict disease progression in PSP.
Rojas, Julio C; Bang, Jee; Lobach, Iryna V; Tsai, Richard M; Rabinovici, Gil D; Miller, Bruce L; Boxer, Adam L
2018-01-23
To determine the ability of CSF biomarkers to predict disease progression in progressive supranuclear palsy (PSP). We compared the ability of baseline CSF β-amyloid 1-42, tau, phosphorylated tau 181 (p-tau), and neurofilament light chain (NfL) concentrations, measured by INNO-BIA AlzBio3 or ELISA, to predict 52-week changes in clinical (PSP Rating Scale [PSPRS] and Schwab and England Activities of Daily Living [SEADL]), neuropsychological, and regional brain volumes on MRI using linear mixed effects models controlled for age, sex, and baseline disease severity, and Fisher F density curves to compare effect sizes in 50 patients with PSP. Higher CSF NfL concentration predicted more rapid decline (biomarker × time interaction) over 52 weeks in PSPRS (p = 0.004, false discovery rate-corrected) and SEADL (p = 0.008), whereas lower baseline CSF p-tau predicted faster decline on PSPRS (p = 0.004). Higher CSF tau concentrations predicted faster decline by SEADL (p = 0.004). The CSF NfL/p-tau ratio was superior for predicting change in PSPRS, compared to p-tau (p = 0.003) or NfL (p = 0.001) alone. Higher NfL concentrations in CSF or blood were associated with greater superior cerebellar peduncle atrophy (fixed effect, p ≤ 0.029 and 0.008, respectively). Both CSF p-tau and NfL correlate with disease severity and rate of disease progression in PSP. The inverse correlation of p-tau with disease severity suggests a potentially different mechanism of tau pathology in PSP as compared to Alzheimer disease. Copyright © 2017 American Academy of Neurology.
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
Nonuniformity correction for an infrared focal plane array based on diamond search block matching.
Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian
2016-05-01
In scene-based nonuniformity correction algorithms, artificial ghosting and image blurring degrade the correction quality severely. In this paper, an improved algorithm based on the diamond search block matching algorithm and an adaptive learning rate is proposed. First, accurate transform pairs between two adjacent frames are estimated by the diamond search block matching algorithm. Then, based on the error between the corresponding transform pairs, the gradient descent algorithm is applied to update correction parameters. During the process of gradient descent, the local standard deviation and a threshold are utilized to control the learning rate to avoid the accumulation of matching error. Finally, the nonuniformity correction is realized by a linear model with updated correction parameters. The performance of the proposed algorithm is thoroughly studied with four real infrared image sequences. Experimental results indicate that the proposed algorithm can reduce the nonuniformity with fewer ghosting artifacts in moving areas and can also overcome the problem of image blurring in static areas.
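The correction stage described above reduces to an online least-squares update of a per-pixel linear model. The sketch below illustrates only that update: the diamond-search block matching is assumed to have already produced corresponding pixel pairs, and the gating of the learning rate by local standard deviation is one plausible reading of the abstract, not the authors' exact formula.

import numpy as np

class LinearNUC:
    """Per-pixel linear nonuniformity correction y = gain * x + offset, updated online."""

    def __init__(self, shape, base_lr=0.05, std_threshold=5.0):
        self.gain = np.ones(shape)
        self.offset = np.zeros(shape)
        self.base_lr = base_lr
        self.std_threshold = std_threshold

    def correct(self, frame):
        return self.gain * frame + self.offset

    def update(self, frame, pix, matched_values, local_std):
        """pix: (rows, cols) of pixels matched to the previous frame by block matching;
        matched_values: corrected values of those pixels in the previous frame;
        local_std: local standard deviation around each matched pixel."""
        pred = self.gain[pix] * frame[pix] + self.offset[pix]
        err = pred - matched_values
        # Slow the update in high-detail regions so scene structure and residual
        # matching errors are not burned into the correction parameters.
        lr = np.where(local_std < self.std_threshold, self.base_lr,
                      self.base_lr / (1.0 + local_std))
        self.gain[pix] -= lr * err * frame[pix]   # gradient of 0.5*err^2 w.r.t. gain
        self.offset[pix] -= lr * err              # gradient of 0.5*err^2 w.r.t. offset

# usage: nuc = LinearNUC((480, 640)); clean = nuc.correct(raw_frame)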
POWER-ENHANCED MULTIPLE DECISION FUNCTIONS CONTROLLING FAMILY-WISE ERROR AND FALSE DISCOVERY RATES.
Peña, Edsel A; Habiger, Joshua D; Wu, Wensong
2011-02-01
Improved procedures, in terms of smaller missed discovery rates (MDR), for performing multiple hypotheses testing with weak and strong control of the family-wise error rate (FWER) or the false discovery rate (FDR) are developed and studied. The improvement over existing procedures such as the Šidák procedure for FWER control and the Benjamini-Hochberg (BH) procedure for FDR control is achieved by exploiting possible differences in the powers of the individual tests. Results signal the need to take into account the powers of the individual tests and to have multiple hypotheses decision functions which are not limited to simply using the individual p-values, as is the case, for example, with the Šidák, Bonferroni, or BH procedures. They also enhance understanding of the role of the powers of individual tests, or more precisely the receiver operating characteristic (ROC) functions of decision processes, in the search for better multiple hypotheses testing procedures. A decision-theoretic framework is utilized, and through auxiliary randomizers the procedures could be used with discrete or mixed-type data or with rank-based nonparametric tests. This is in contrast to existing p-value based procedures whose theoretical validity is contingent on each of these p-value statistics being stochastically equal to or greater than a standard uniform variable under the null hypothesis. Proposed procedures are relevant in the analysis of high-dimensional "large M, small n" data sets arising in the natural, physical, medical, economic and social sciences, whose generation and creation is accelerated by advances in high-throughput technology, notably, but not limited to, microarray technology.
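Because the proposed procedures are benchmarked against the Šidák and Benjamini-Hochberg baselines, compact reference implementations of those two classical p-value-based procedures are sketched below; these are the standard textbook versions, not the power-enhanced decision functions developed in the paper.

import numpy as np

def benjamini_hochberg(pvalues, q=0.05):
    """BH step-up: boolean mask of rejections with the FDR controlled at level q."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0][-1]        # largest rank whose p-value meets its threshold
        reject[order[: k + 1]] = True
    return reject

def sidak(pvalues, alpha=0.05):
    """Šidák procedure: boolean mask of rejections with the FWER controlled at level alpha."""
    p = np.asarray(pvalues, dtype=float)
    return p <= 1.0 - (1.0 - alpha) ** (1.0 / p.size)

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.590]
print(benjamini_hochberg(pvals))   # rejects the two smallest p-values in this example
print(sidak(pvals))                # stricter FWER control: rejects only p = 0.001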
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patrick Matthews
Corrective Action Unit (CAU) 371 is located in Areas 11 and 18 of the Nevada Test Site, which is approximately 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 371 is comprised of the two corrective action sites (CASs) listed below: • 11-23-05, Pin Stripe Contamination Area • 18-45-01, U-18j-2 Crater (Johnnie Boy) These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on November 19, 2008, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and National Security Technologies, LLC. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 371. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the corrective action investigation for CAU 371 includes the following activities: • Move surface debris and/or materials, as needed, to facilitate sampling. • Conduct radiological surveys. • Measure in situ external dose rates using thermoluminescent dosimeters or other dose measurement devices. • Collect and submit environmental samples for laboratory analysis to determine internal dose rates. • Combine internal and external dose rates to determine whether total dose rates exceed final action levels (FALs). • Collect and submit environmental samples for laboratory analysis to determine whether chemical contaminants are present at concentrations exceeding FALs. • If contamination exceeds FALs, define the extent of the contamination exceeding FALs. • Investigate waste to determine whether potential source material is present. This Corrective Action Investigation Plan has been developed in accordance with the Federal Facility Agreement and Consent Order that was agreed to by the State of Nevada; U.S. Department of Energy; and U.S. Department of Defense. Under the Federal Facility Agreement and Consent Order, this Corrective Action Investigation Plan will be submitted to the Nevada Division of Environmental Protection for approval. Fieldwork will be conducted following approval of the plan.
Brettschneider, Anna-Kristin; Schienkiewitz, Anja; Schmidt, Steffen; Ellert, Ute; Kurth, Bärbel-Maria
2017-04-01
The nationwide 'German Health Interview and Examination Survey for Children and Adolescents' (KiGGS), conducted in 2003-2006, showed an increase in the prevalence rates of overweight and obesity compared to the early 1990s, indicating the need for regular monitoring. Recently, a follow-up, KiGGS Wave 1 (2009-2012), was carried out as a telephone-based survey, providing parent-reported height and weight from 5155 children aged 4-10 years. Since parental reports lead to a bias in prevalence rates of weight status, a correction is needed. From a subsample of KiGGS Wave 1 participants, measurements for height and weight were collected in a physical examination. In order to correct prevalence rates derived from parent reports, weight status categories based on parent-reported and measured height and weight were used to estimate a correction formula according to an established procedure. The corrected prevalence rates derived from KiGGS Wave 1 for overweight, including obesity, in children aged 4-10 years in Germany showed that stagnation has been reached compared to the KiGGS baseline study (2003-2006). The rates for overweight, including obesity, in Germany have levelled off. However, they still remain at a high level, indicating a need for further public health action. What is Known: • In the last decades, prevalence of overweight and obesity has risen. Nowadays, the prevalence seems to be stagnating. • In Germany, prevalence estimates of overweight and obesity are only available from regional or non-representative studies. What is New: • This article gives an update on prevalence rates of overweight and obesity amongst children aged 4-10 years in Germany based on a nationwide and representative sample. • Results show that stagnation in prevalence rates for overweight in children in Germany has been reached.
Hydrogen storage materials discovery via high throughput ball milling and gas sorption.
Li, Bin; Kaye, Steven S; Riley, Conor; Greenberg, Doron; Galang, Daniel; Bailey, Mark S
2012-06-11
The lack of a high capacity hydrogen storage material is a major barrier to the implementation of the hydrogen economy. To accelerate discovery of such materials, we have developed a high-throughput workflow for screening of hydrogen storage materials in which candidate materials are synthesized and characterized via highly parallel ball mills and volumetric gas sorption instruments, respectively. The workflow was used to identify mixed imides with significantly enhanced absorption rates relative to Li2Mg(NH)2. The most promising material, 2LiNH2:MgH2 + 5 atom % LiBH4 + 0.5 atom % La, exhibits the best balance of absorption rate, capacity, and cycle-life, absorbing >4 wt % H2 in 1 h at 120 °C after 11 absorption-desorption cycles.
Discovery and Orbital Determination of the Transient X-Ray Pulsar GRO J1750-27
NASA Technical Reports Server (NTRS)
Scott, D. M.; Finger, M. H.; Wilson, R. B.; Koh, D. T.; Prince, T. A.; Vaughan, B. A.; Chakrabarty, D.
1997-01-01
We report on the discovery and hard X-ray (20-70 keV) observations of the 4.45 s period transient X-ray pulsar GRO J1750-27 with the BATSE all-sky monitor on board CGRO. A relatively faint outburst (less than 30 mCrab peak) lasting at least 60 days was observed during which the spin-up rate peaked at 38 pHz/s and was correlated with the pulsed intensity. An orbit with a period of 29.8 days was found. The large spin-up rate, spin period, and orbital period together suggest that accretion is occurring from a disk and that the outburst is a "giant" outburst typical of a Be/X-ray transient system. No optical counterpart has yet been reported.
Financing drug discovery for orphan diseases.
Fagnan, David E; Gromatzky, Austin A; Stein, Roger M; Fernandez, Jose-Maria; Lo, Andrew W
2014-05-01
Recently proposed 'megafund' financing methods for funding translational medicine and drug development require billions of dollars in capital per megafund to de-risk the drug discovery process enough to issue long-term bonds. Here, we demonstrate that the same financing methods can be applied to orphan drug development but, because of the unique nature of orphan diseases and therapeutics (lower development costs, faster FDA approval times, lower failure rates, and lower correlation of failures among disease targets), the amount of capital needed to de-risk such portfolios is much lower in this field. Numerical simulations suggest that an orphan disease megafund of only US$575 million can yield double-digit expected rates of return with only 10-20 projects in the portfolio. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
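To make the portfolio logic concrete, the sketch below runs a toy Monte Carlo of a small fund of independent orphan-drug projects. The cost per project, approval probability, and payoff on success are illustrative assumptions, not the calibration used in the cited simulations; only the fund size (~US$575 million) and project count (10-20) follow the abstract.

```python
# Toy Monte Carlo sketch of an orphan-drug "megafund": a small portfolio of
# independent projects, each with an assumed approval probability and payoff.
import numpy as np

rng = np.random.default_rng(0)

n_projects = 15            # within the 10-20 range discussed in the abstract
cost_per_project = 38.0    # $M each, so total capital is roughly $575M
p_success = 0.25           # assumed per-project approval probability (illustrative)
payoff_on_success = 500.0  # $M assumed net payoff per approved drug (illustrative)

capital = n_projects * cost_per_project
# Independent projects, i.e. zero correlation of failures; the abstract's point
# is that low correlation among orphan targets keeps the required capital small.
successes = rng.binomial(n_projects, p_success, size=100_000)
multiple = successes * payoff_on_success / capital

print(f"expected multiple on capital : {multiple.mean():.2f}x")
print(f"probability of losing money  : {(multiple < 1.0).mean():.1%}")
```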
76 FR 31221 - Truth in Lending; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-31
... remaining portion of the $5,000 and $1,000 balances at the variable rate determined using the 10-point... protected balances. * * * * * Supplement I to Part 226 [Corrected] 0 4. On page 23016, in the first column... applies to a $5,000 balance on a credit card account is a variable rate that is determined by adding a...
ERIC Educational Resources Information Center
Szadokierski, Isadora; Burns, Matthew K.; McComas, Jennifer J.
2017-01-01
The current study used the learning hierarchy/instructional hierarchy phases of acquisition and fluency to predict intervention effectiveness based on preintervention reading skills. Preintervention reading accuracy (percentage of words read correctly) and rate (number of words read correctly per minute) were assessed for 49 second- and…
Two Long-Term Intermittent Pulsars Discovered in the PALFA Survey
NASA Astrophysics Data System (ADS)
Lyne, A. G.; Stappers, B. W.; Freire, P. C. C.; Hessels, J. W. T.; Kaspi, V. M.; Allen, B.; Bogdanov, S.; Brazier, A.; Camilo, F.; Cardoso, F.; Chatterjee, S.; Cordes, J. M.; Crawford, F.; Deneva, J. S.; Ferdman, R. D.; Jenet, F. A.; Knispel, B.; Lazarus, P.; van Leeuwen, J.; Lynch, R.; Madsen, E.; McLaughlin, M. A.; Parent, E.; Patel, C.; Ransom, S. M.; Scholz, P.; Seymour, A.; Siemens, X.; Spitler, L. G.; Stairs, I. H.; Stovall, K.; Swiggum, J.; Wharton, R. S.; Zhu, W. W.
2017-01-01
We report the discovery of two long-term intermittent radio pulsars in the ongoing Pulsar Arecibo L-Band Feed Array survey. Following discovery with the Arecibo Telescope, extended observations of these pulsars over several years at Jodrell Bank Observatory have revealed the details of their rotation and radiation properties. PSRs J1910+0517 and J1929+1357 show long-term extreme bimodal intermittency, switching between active (ON) and inactive (OFF) emission states and indicating the presence of a large, hitherto unrecognized underlying population of such objects. For PSR J1929+1357, the initial duty cycle was f_ON = 0.008, but two years later this changed quite abruptly to f_ON = 0.16. This is the first time that a significant evolution in the activity of an intermittent pulsar has been seen, and we show that the spin-down rate of the pulsar is proportional to its activity. The spin-down rate of PSR J1929+1357 increases by a factor of 1.8 when it is in the active mode, similar to the increase seen in the other three known long-term intermittent pulsars. These discoveries increase the number of known pulsars displaying long-term intermittency to five. These five objects display a remarkably narrow range of spin-down power (Ė ~ 10^32 erg s^-1) and accelerating potential above their polar caps. If confirmed by further discoveries, this trend might be important for understanding the physical mechanisms that cause intermittency.
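The link between duty cycle and the time-averaged spin-down rate can be sketched with a simple two-state average, <ν̇> = f_ON ν̇_ON + (1 - f_ON) ν̇_OFF, using the reported factor ν̇_ON = 1.8 ν̇_OFF. The absolute OFF-state rate below is a placeholder; only the ratio and the two duty cycles come from the abstract.

```python
# Two-state average of the spin-down rate for an intermittent pulsar,
# using the 1.8x ON/OFF ratio reported for PSR J1929+1357.
def mean_spindown(f_on, nudot_off, ratio=1.8):
    """Time-averaged spin-down rate for a two-state (ON/OFF) intermittent pulsar."""
    return f_on * ratio * nudot_off + (1.0 - f_on) * nudot_off

nudot_off = -1.0e-15  # Hz/s, placeholder OFF-state spin-down rate (illustrative)
for f_on in (0.008, 0.16):  # the two duty cycles observed two years apart
    print(f"f_ON = {f_on:.3f}: <nudot> = {mean_spindown(f_on, nudot_off):.3e} Hz/s")
```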
Terrestrial Gamma Radiation Dose Rate of West Sarawak
NASA Astrophysics Data System (ADS)
Izham, A.; Ramli, A. T.; Saridan Wan Hassan, W. M.; Idris, H. N.; Basri, N. A.
2017-10-01
A study of terrestrial gamma radiation (TGR) dose rate was conducted in the west of Sarawak, covering the Kuching, Samarahan, Serian, Sri Aman, and Betong divisions, to construct baseline TGR dose rate data for the areas. The total area covered was 20,259.2 km2. In-situ measurements of TGR dose rate were taken approximately 1 m above ground level using a Ludlum Model 19 micro-R meter with a NaI(Tl) scintillation detector. Twenty-nine soil samples were taken across the 5 divisions, covering 26 pairings of 9 geological formations and 7 soil types. A hyperpure germanium detector was then used to determine the samples' 238U, 232Th, and 40K radionuclide concentrations, yielding a correction factor Cf = 0.544. A total of 239 measured data points were corrected with Cf, resulting in a mean Dm of 47 ± 1 nGy h-1 and a range of 5-103 nGy h-1. A multiple regression analysis of geological-formation means and soil-type means against the corrected TGR dose rate Dm generated the prediction model Dg,s = 0.847Dg + 0.637Ds - 22.313, with a normalized beta equation of Dg,s = 0.605Dg + 0.395Ds. The model has an 84.6% acceptance of the Mann-Whitney test null hypothesis when tested against the corrected TGR dose rates.
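The two numerical steps quoted in the abstract can be sketched directly: applying the HPGe-derived correction factor to the in-situ readings (assuming, as the abstract implies, a simple multiplication by Cf) and evaluating the reported regression model. The example readings and group means below are illustrative, not survey data.

```python
# Sketch of the survey's two reported calculations: Cf correction of in-situ
# NaI(Tl) readings and the regression prediction from geology/soil means.
CF = 0.544  # correction factor from 238U, 232Th and 40K concentrations (HPGe)

def corrected_dose_rate(in_situ_reading_nGy_per_h: float) -> float:
    """Correct an in-situ NaI(Tl) reading using the HPGe-derived factor."""
    return CF * in_situ_reading_nGy_per_h

def predicted_dose_rate(d_geology: float, d_soil: float) -> float:
    """Predicted TGR dose rate from geological-formation and soil-type means."""
    return 0.847 * d_geology + 0.637 * d_soil - 22.313

print(corrected_dose_rate(86.0))        # ~47 nGy/h, the scale of the reported mean
print(predicted_dose_rate(50.0, 45.0))  # nGy/h for assumed group means
```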